
MODULE-4

COMPUTATIONAL APPROACHES TO PROBLEM-SOLVING

Brute-force Approach - Padlock, Password guessing; Divide-and-conquer Approach - The Merge Sort Algorithm, Advantages, Disadvantages; Dynamic Programming Approach - Fibonacci series, Recursion vs Dynamic Programming; Greedy Algorithm Approach; Randomized Approach

Solving a problem involves finding a way to move from a current situation to a desired outcome. To be able to
solve a problem using computational approaches, the problem itself needs to have certain characteristics such as:
o The problem needs to be clearly defined — this means that one should be able to identify the current
situation, the end goal, the possible means of reaching the end goal, and the potential obstacles
o The problem needs to be computable — one should consider what type of calculations are required, and if
these are feasible within a reasonable time frame and processing capacity
o The data requirements of the problem need to be examined, such as what types of data the problem involves,
and the storage capacity required to keep this data
o One should be able to determine if the problem can be approached using decomposition and abstraction, as
these methods are key for tackling complex problems

Brute-force Approach

• The brute-force approach involves a straightforward and direct method of solving problems, without
considering more sophisticated or optimized techniques.
• It does not focus on reducing the number of steps or operations needed to reach a solution, often resulting in
an inefficient process.
• The approach systematically examines every possible candidate or option in the search space to find the
desired solution.
• This exhaustive process of checking all possibilities is why it is also known as exhaustive search.
• In searching problems, the brute-force method involves going through an entire list of candidates one by one
to locate the desired object.
• The method ignores any inherent structure or patterns in the problem that could help in skipping irrelevant
options and speeding up the search process.
• For example, in a grocery store search for a frozen pie, a logical approach would involve directly checking
the frozen food aisle, while brute force would require searching every aisle, even those clearly unrelated.
• While simple to implement, the brute-force approach is computationally expensive, requiring significant
time and resources when applied to large datasets or complex problems.
Example: Padlock
® The padlock has 4 digits, each ranging from 0 to 9.
® Since the combination is forgotten, you decide to use the brute-force method to unlock it.
® The brute-force method involves trying every possible combination, starting from 0000 and
incrementing sequentially: 0001, 0002, 0003, and so on.
® In total, there are 10⁴ = 10,000 possible combinations for the lock, considering all four digits.
® In the worst-case scenario, you might have to attempt all 10,000 combinations before finding the correct
one.
® This process is exhaustive and time-consuming but guarantees the lock will eventually open.
® The brute-force method is practical for small-scale problems like this but becomes inefficient for larger-
scale problems with more possibilities.
Algorithm
1. Start
2. Initialize the first possible combination (e.g., 0000 for a 4-digit password).
3. Attempt to unlock using the current combination.
4. If the combination is correct, stop and unlock the system.
5. If the combination is incorrect, increment to the next possible combination.
6. Repeat steps 3-5 until the correct combination is found or all possibilities are exhausted.
7. Stop
Pseudocode
Start
    Set combination to 0000                  // Initialize the first possible combination
    While combination <= 9999 do:
        Attempt to unlock with combination
        If combination is correct:
            Stop                             // Unlock successful
        Else:
            Increment combination by 1       // Move to the next possible combination
Stop                                         // All possibilities exhausted or lock opened
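
The same search can be written as a short Python sketch; the try_combination function below is a hypothetical stand-in for actually testing the lock:

def brute_force_padlock(try_combination):
    # Try every 4-digit combination from 0000 to 9999 in order.
    for attempt in range(10_000):               # 10^4 possible combinations
        combination = f"{attempt:04d}"          # pad with leading zeros, e.g. "0042"
        if try_combination(combination):
            return combination                  # lock opened
    return None                                 # all possibilities exhausted

# Example usage with a hypothetical secret combination:
secret = "7384"
print(brute_force_padlock(lambda c: c == secret))   # prints 7384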

Divide-and-conquer Approach

In the divide and conquer strategy, a problem is solved recursively by applying three steps at each level of the
recursion: Divide, conquer, and combine.
Divide
® Divide is the first step of the divide and conquer strategy.
® The problem is divided into smaller sub-problems until it is small enough to be solved.
® Each sub-problem represents a part of the original problem, but at a smaller scale.
® Recursion is used to implement the divide and conquer algorithm.
® A recursive algorithm calls itself with smaller or simpler input values, known as the recursive case.
® The divide step determines the recursive case, which divides the problem into smaller sub-problems.

Conquer

® After dividing, the "conquer" step comes into play, where each sub-problem is solved directly.
® The input has already been divided into the smallest possible parts, and now basic operations are
performed to solve them.
® The conquer step is implemented with recursion, specifying the recursive base case.
® Once the sub-problems are small enough that they can't be divided further, recursion "bottoms out" and
reaches the base case.
® Upon reaching the base case, the sub-problems are solved and the solution is obtained.

Combine

® In the combine step, the solutions of the smaller sub-problems are merged to form the
solution to the entire problem.
® Once the base case is solved, the results are passed as input to larger sub-problems, which
will now be solved using the information from the smaller sub-problems.
® After reaching the base case, the algorithm begins to work its way back up, solving larger
sub-problems by using the results returned from smaller sub-problems.
® This step involves merging the output from the conquer phase to build the solutions for
progressively larger sub-problems.
® Solutions to smaller sub-problems gradually propagate from the base case upwards, combining them until
they culminate in the solution to the original problem.
Algorithm

1. Start
2. If the array has 1 or fewer elements, return the array (base case).
3. Divide the array into two halves: Left and Right.
4. Recursively apply MergeSort to the Left half.
5. Recursively apply MergeSort to the Right half.
6. Merge the two sorted halves (LeftSorted and RightSorted) into a single sorted array.
7. Return the sorted array.
8. In the merge step, compare elements from Left and Right and combine them into a sorted order.
9. If one of the halves is exhausted, append the remaining elements from the other half to the sorted array.
10. Stop
Working through an example array: first divide the array into two halves, then divide each subpart recursively into two halves until only individual elements remain. Finally, combine the individual elements in sorted order; here, the conquer and combine steps go side by side.

1. Divide: Split the array into two halves.
2. Conquer: Recursively sort both halves.
3. Combine: Merge the two sorted halves to produce the sorted array.
1. Function mergeSort
• Check if the array has one or zero elements. If true, return the array
as it is already sorted.
• Otherwise, find the middle index of the array.
• Split the array into two halves: from the beginning to the middle and from the middle to the end.
• Recursively apply mergeSort to the first half and the second half.
• Merge the two sorted halves using the merge function.
• Return the merged and sorted array.

2. Function merge
• Create an empty list called sorted_arr to store the sorted elements.
• While both halves have elements:
– Compare the first element of the left half with the first element of the right half.
– Remove the smaller element and append it to the sorted_arr list.
• If the left half still has elements, append them all to the sorted_arr list.
• If the right half still has elements, append them all to the sorted_arr list.
• Return the sorted_arr list, which now contains the sorted elements from both halves.
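
Putting the two functions together, one possible Python sketch of the merge sort just described is:

def merge_sort(arr):
    # Base case: an array of one or zero elements is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # recursively sort the first half
    right = merge_sort(arr[mid:])    # recursively sort the second half
    return merge(left, right)        # combine the two sorted halves

def merge(left, right):
    sorted_arr = []
    i = j = 0
    # Repeatedly take the smaller front element of the two halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            sorted_arr.append(left[i])
            i += 1
        else:
            sorted_arr.append(right[j])
            j += 1
    # One half is exhausted; append whatever remains of the other.
    sorted_arr.extend(left[i:])
    sorted_arr.extend(right[j:])
    return sorted_arr

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]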
Advantages of Divide and Conquer Algorithms:

1. Simplicity in Problem Solving: By breaking a problem into smaller subproblems, each subproblem is
simpler to understand and solve, making the overall problem more manageable.
2. Efficiency: Many divide-and-conquer algorithms, such as merge sort and quicksort, have optimal or near-optimal time complexities. These algorithms often have lower time complexities compared to iterative approaches.
3. Modularity: Divide-and-conquer promotes a modular approach to problem-solving, where each subproblem can be handled by a separate function or module. This makes the code easier to maintain and extend.
4. Reduction in Complexity: By dividing the problem, the overall complexity is reduced, and solving
smaller subproblems can lead to simpler and more efficient solutions.
5. Parallelism: The divide-and-conquer approach can easily be parallelized because the subproblems can
be solved independently and simultaneously on different processors, leading to potential performance
improvements.
6. Better Use of Memory: Some divide-and-conquer algorithms use memory more efficiently. For example, the merge sort algorithm works well with large data sets that do not fit into memory, as it can process subsets of data in chunks.

Disadvantages of Divide and Conquer Approach

1. Overhead of Recursive Calls: The recursive nature can lead to significant overhead due to function calls and maintaining the call stack. This can be a problem for algorithms with deep recursion or large subproblem sizes.
2. Increased Memory Usage: Divide-and-conquer algorithms often require additional memory for storing intermediate results, which can be a drawback for memory-constrained environments.
3. Complexity of Merging Results: The merging step can be complex and may not always be
straightforward. Efficient merging often requires additional algorithms and can add to the complexity of
the overall solution.
4. Not Always the Most Efficient: For some problems, divide-and-conquer might not be the most efficient
approach compared to iterative or dynamic programming methods. The choice of strategy depends on the
specific problem and context.
5. Difficulty in Implementation: Implementing divide-and-conquer algorithms can be more challenging,
especially for beginners. The recursive nature and merging steps require careful design to ensure
correctness and efficiency.
6. Stack Overflow Risk: Deep recursion can lead to stack overflow errors if the recursion depth exceeds
the system’s stack capacity, particularly with large inputs or poorly designed algorithms.
Dynamic Programming Approach

® Dynamic programming is a method for solving complex problems by breaking them down into
simpler subproblems. It is a way of combining solutions to overlapping subproblems to avoid
redundant calculations
® Dynamic programming (DP) is similar to divide and conquer because both break problems into
smaller sub-problems that can be solved recursively.
® In DP, the results of solving smaller sub-problems are saved and reused to solve larger sub-
problems, while divide and conquer does not reuse previous results.
® DP follows a bottom-up approach, starting by solving the smallest sub-problems and using those
results to solve larger sub-problems until the original problem is solved.
® Divide and conquer follows a top-down approach, where the original problem is divided into
smaller sub-problems, and previous results are not reused.
® DP is most effective when there is overlap in the sub-problems, meaning the same sub-problems
are solved multiple times.
® To avoid redundant calculations, DP stores the results in a table (memoization), which is then
used to solve larger sub-problems.
® Retrieving a result from the table takes Θ(1) time, making the process efficient.
® DP is often used for optimization problems where there are many possible solutions, each with an associated
cost, and the goal is to find the optimal solution with the smallest cost.

1. Optimal Substructure: A problem has optimal substructure if the best solution to the overall problem can be constructed from the best solutions to its smaller subproblems. This means that if you have the optimal solutions for the smaller components of the problem, you can combine them to find the best solution for the entire problem. This property allows Dynamic Programming to build solutions incrementally, using previously computed results to achieve the most efficient outcome.

Example: Shortest Path in a Grid

Imagine you need to find the shortest path from the top-left corner to the bottom-right corner of a grid. You
can only move right or down. Each cell in the grid has a certain cost associated with entering it, and your
goal is to minimize the total cost of the path.

Problem Breakdown:

a. Smaller Subproblems: To find the shortest path to a particular cell (i, j), you can look at the shortest paths to the cells immediately above it (i − 1, j) and to the left of it (i, j − 1). The cost to reach cell (i, j) will be the minimum of the costs to reach these neighboring cells plus the cost of the current cell.
b. Optimal Substructure: If you know the shortest paths to cells (i − 1, j) and (i, j − 1), you can use these to determine the shortest path to cell (i, j). The optimal path to cell (i, j) can be constructed from the optimal paths to its neighboring cells.
How it Works:

• You start by solving the problem for the smallest subproblems (the cells directly above and to
the left).
• You then build up solutions incrementally, using the results of the smaller subproblems to solve
larger parts of the grid.
• Finally, you combine the results to find the shortest path to the bottom-right corner of the grid.
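
A minimal Python sketch of this grid computation, assuming the grid is given as a list of lists of entry costs and movement is only right or down:

def min_path_cost(grid):
    # cost[i][j] holds the cheapest total cost of reaching cell (i, j).
    rows, cols = len(grid), len(grid[0])
    cost = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i > 0 and j > 0:
                best_prev = min(cost[i - 1][j], cost[i][j - 1])
            elif i > 0:
                best_prev = cost[i - 1][j]
            elif j > 0:
                best_prev = cost[i][j - 1]
            else:
                best_prev = 0                    # top-left starting cell
            cost[i][j] = grid[i][j] + best_prev
    return cost[-1][-1]

print(min_path_cost([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))   # 7  (path 1 -> 3 -> 1 -> 1 -> 1)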

2. Overlapping Subproblems: Many problems require solving the same subproblems multiple times. Dynamic Programming improves efficiency by storing the results of these subproblems in a table to avoid redundant calculations. By caching these results, the algorithm reduces the number of computations needed, leading to significant performance improvements.

Dynamic programming breaks problems down into overlapping subproblems, storing solutions to
avoid redundant calculations.

Example: Fibonacci Series

Let's find the Fibonacci sequence up to the 5th term. A Fibonacci series is a sequence of numbers in which each number is the sum of the two preceding ones, for example: 0, 1, 1, 2, 3.

Algorithm

We are calculating the Fibonacci sequence up to the 5th term.

1. The first term is 0.


2. The second term is 1.
3. The third term is the sum of 0 (from step 1) and 1 (from step 2), which is 1.
4. The fourth term is the sum of the third term (from step 3) and the second term (from step 2), i.e. 1 + 1 = 2.
5. The fifth term is the sum of the fourth term (from step 4) and the third term (from step 3), i.e. 2 + 1 = 3.

Hence, we have the sequence 0, 1, 1, 2, 3. Here, we have used the results of the previous steps as shown below. This is called a dynamic programming approach.

F(0) = 0
F(1) = 1
F(2) = F(1) + F(0)
F(3) = F(2) + F(1)
F(4) = F(3) + F(2)
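
This reuse of earlier results can be written directly as a small, illustrative Python sketch:

def fibonacci(n):
    # Build the sequence bottom-up, reusing stored results instead of recomputing them.
    fib = [0, 1]                                # base cases F(0) and F(1)
    for i in range(2, n + 1):
        fib.append(fib[i - 1] + fib[i - 2])     # F(i) = F(i-1) + F(i-2)
    return fib[:n + 1]

print(fibonacci(4))   # [0, 1, 1, 2, 3]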

Dynamic Programming Algorithm

1. Start
2. Input the problem details, including the set of sub-problems and constraints.
3. Define a table (or array) to store the solutions of smaller sub-problems.
4. Identify the base cases (smallest sub-problems) and initialize the table with their
solutions.
5. Use an iterative approach to solve larger sub-problems:
o For each sub-problem, calculate its solution using the results of previously
solved sub-problems stored in the table.
o Store the calculated result in the table for future use.
6. Continue solving sub-problems in increasing order of size until the solution to the
original problem is obtained.
7. Retrieve the final solution from the table.
8. Output the result.
9. Stop

Fundamental Principles of Dynamic Programming

The fundamental principles that make Dynamic Programming an effective problem-solving technique are overlapping subproblems, optimal substructure, and the two primary implementation approaches: memoization and tabulation.

Overlapping Subproblems: Dynamic Programming is particularly useful for problems with overlapping subproblems. This means that when solving a larger problem, you encounter smaller subproblems that are repeated multiple times. Instead of recomputing these subproblems each time they are encountered, Dynamic Programming saves their solutions in a data structure, such as an array or hash table. This avoids redundant calculations and significantly improves efficiency.

For example, in a recursive approach to solving a problem, the same function might be called multiple times with the same arguments. Without Dynamic Programming, this leads to wasted time as the same subproblems are recalculated repeatedly. By using Dynamic Programming, the solutions to these subproblems are stored once computed, which optimizes overall algorithm efficiency.

Optimal Substructure: Another key principle of Dynamic Programming is optimal substructure. This property means that an optimal solution to the larger problem can be constructed from the optimal solutions to its smaller subproblems. In other words, if you can determine the best solution for smaller problems, you can use these solutions to build the best solution for the entire problem.

Optimal substructure is central to Dynamic Programming's recursive nature. By solving subproblems optimally and combining their solutions, you ensure that the final solution is also optimal.
Approaches in Dynamic Programming

Dynamic Programming can be implemented using two main approaches: memoization (top-down) and tabulation (bottom-up).

1. Memoization (Top-Down Approach)

Memoization involves solving the problem recursively and storing the results of subproblems in a table
(usually a dictionary or array). This way, each subproblem is solved only once, and subsequent calls to
the subproblem are served from the stored results.

Steps:
1. Identify the base cases.
2. Define the recursive relation.
3. Store the results of subproblems in a table.
4. Use the stored results to solve larger subproblems.
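
As an illustration, a memoized Fibonacci function of the kind described here might look like the following sketch, where the dictionary memo plays the role of the results table:

def fib(n, memo=None):
    # Memoized (top-down) Fibonacci: each subproblem is solved only once.
    if memo is None:
        memo = {}
    if n <= 1:                       # base cases: F(0) = 0, F(1) = 1
        return n
    if n not in memo:                # solve the subproblem only if not already stored
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                   # reuse the stored result

print(fib(10))   # 55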

The fib function leverages memoization to optimize the calculation of Fibonacci numbers by storing
the results of previously computed numbers in a dictionary. This approach significantly reduces the
time complexity of the algorithm from exponential to linear by avoiding redundant calculations.

Memoization is often easier to implement and understand. It starts with the original problem and
solves subproblems as needed. However, it may have overhead due to recursive function calls and may
not be as efficient for some problems.

2. Tabulation (Bottom-Up Approach)

Tabulation involves solving the problem iteratively and filling up a table (usually an array) in a bottom-up manner. This approach starts with the smallest subproblems and uses their solutions to construct solutions to larger subproblems.

Steps:

1. Identify the base cases.


2. Define the table to store solutions to subproblems.
3. Fill the table iteratively using the recursive relation.
4. Extract the solution to the original problem from the table.
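
For the Fibonacci example, a bottom-up (tabulated) version might look like this sketch, with the table filled from the base cases upward:

def fib_tab(n):
    # Tabulated (bottom-up) Fibonacci: fill the table from the base cases up.
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1                                 # base cases: F(0) = 0, F(1) = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # reuse the two previous entries
    return table[n]

print(fib_tab(10))   # 55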

This approach reduces the time complexity (and, if only the last two values are kept instead of the whole table, the space complexity as well), making it much more efficient than the naive recursive approach.
Tabulation tends to be more memory-efficient and can be faster than memoization due to its iterative nature. However, it requires careful planning to set up the data structures and dependencies correctly.

The core strength of dynamic programming lies in turning recursive problems into iterative
solutions by reusing past work.

Solving Computational Problems Using Dynamic Programming Approach

Here is a step-by-step guide on how to solve computational problems using the dynamic programming approach:

1. Identify the Subproblems: Break down the problem into smaller subproblems. Determine what the subproblems are and how they can be combined to solve the original problem.
2. Define the Recurrence Relation: Express the solution to the problem in terms of the solutions to
smaller subproblems. This usually involves finding a recursive formula that relates the solution of a
problem to the solutions of its subproblems.
3. Choose a Memoization or Tabulation Strategy: Decide whether to use a top-down approach with
memoization or a bottom-up approach with tabulation.

• Memoization (Top-Down): Solve the problem recursively and store the results of subproblems
in a table (or dictionary) to avoid redundant computations.
• Tabulation (Bottom-Up): Solve the problem iteratively, starting with the smallest subproblems
and building up the solution to the original problem.

4. Implement the Solution: Write the code to implement the dynamic programming approach, making
sure to handle base cases and use the table to store and retrieve the results of subproblems.
5. Optimize Space Complexity (if necessary): Sometimes, it is possible to optimize space complexity by
using less memory. For example, if only a few previous states are needed to compute the current state, you
can reduce the size of the table.

Advantages of the Dynamic Programming Approach

1. Efficiency: DP reduces the time complexity of problems with overlapping subproblems by storing solutions to subproblems and reusing them.
2. Optimal Solutions: DP ensures that the solution to the problem is optimal by solving each subproblem
optimally and combining their solutions.
3. Versatility: DP can be applied to a wide range of problems across different domains.

Disadvantages of the Dynamic Programming Approach

1. Space Complexity: DP often requires additional memory to store the results of subproblems, which can
be a limitation for problems with a large number of subproblems.
2. Complexity of Formulation: Developing a DP solution requires a deep understanding of the problem’s
structure and properties, which can be challenging.
3. Overhead of Table Management: Managing and maintaining the DP table or memoization structure
can add overhead to the algorithm.

Dynamic programming is not just a technique; it is a framework for efficiently solving problems
with a recursive structure.

Dynamic programming is a powerful technique for solving problems with overlapping subproblems and
optimal substructure. By breaking down problems into simpler subproblems and storing their solutions, DP
achieves efficiency and guarantees optimal solutions. Despite its complexity and memory requirements, DP’s
versatility and effectiveness make it an essential tool in algorithm design.

Recursion vs Dynamic Programming

Definition:
• Recursion: A technique where a function calls itself to solve smaller sub-problems of the same type.
• Dynamic Programming: An optimization technique that solves problems by storing the results of overlapping sub-problems to avoid redundant calculations.

Usage:
• Recursion: Used to break down a problem into smaller sub-problems.
• Dynamic Programming: Used when a problem has overlapping sub-problems and optimal substructure.

Redundancy:
• Recursion: May recompute the same sub-problems multiple times, leading to inefficiency.
• Dynamic Programming: Eliminates redundancy by storing the results of previously computed sub-problems (memoization/tabulation).

Approach:
• Recursion: Follows a top-down approach (solves the problem from the original down to the base case).
• Dynamic Programming: Can be implemented with either a top-down (memoization) or bottom-up (tabulation) approach.

Optimization:
• Recursion: Not inherently optimized unless combined with dynamic programming.
• Dynamic Programming: Optimized due to reuse of precomputed results.

Memory Usage:
• Recursion: Memory usage depends on the recursion stack size.
• Dynamic Programming: Additional memory is required to store computed values (usually in a table or array).

Efficiency:
• Recursion: May have exponential time complexity if overlapping sub-problems are present (e.g., the Fibonacci sequence).
• Dynamic Programming: Generally has polynomial time complexity because redundant calculations are avoided.

Applicability:
• Recursion: Useful for problems without overlapping sub-problems, such as Divide and Conquer algorithms (e.g., Merge Sort, Quick Sort).
• Dynamic Programming: Useful for optimization problems with overlapping sub-problems (e.g., Fibonacci, shortest paths, knapsack).

Examples:
• Recursion: Merge Sort, Quick Sort, Tower of Hanoi.
• Dynamic Programming: Fibonacci sequence, Longest Common Subsequence (LCS), 0/1 Knapsack Problem.

Greedy Algorithm Approach

The greedy approach makes local choices at each step, aiming for immediate benefit in hopes of
finding the global optimum.
• Like divide-and-conquer and dynamic programming, the greedy approach aims to simplify a complex problem by breaking it into smaller, more manageable subproblems.
• A greedy algorithm makes decisions by selecting the option with the smallest immediate (local)
cost at each decision point.
• It does not look ahead to determine if the local choice leads to the optimal global solution.
• A locally optimal choice is the best decision for a small portion of the problem's available
information.
• Greedy algorithms are simple and efficient, requiring minimal computational effort for each
decision.
• For general optimization problems, greedy algorithms may fail to produce globally optimal
solutions.
• Certain problems guarantee globally optimal solutions using a greedy strategy, such as Huffman
coding, Prim's algorithm, and Kruskal's algorithm.
• For a greedy algorithm to work optimally, the problem must exhibit the greedy-choice property and optimal
substructure.

Greedy Algorithm

1. Start
2. Input the list of elements along with their associated costs or values and the problem
constraints.
3. If required, sort the elements based on a specific criterion (e.g., smallest cost, highest
value, or earliest finish time).
4. Initialize an empty solution set S to store the selected elements.
5. For each element e in the sorted list:
o Check if including e in S satisfies the problem’s constraints.
§ If the constraints are satisfied, add e to the solution set S.
§ If the constraints are violated, skip e.
6. Repeat the process until:
o All elements are considered, or
o The constraints of the problem are fully met (e.g., capacity is exhausted).
7. Output the solution S as the final optimized result.
8. Stop
Characteristics of the Greedy Approach

1. Local Optimization: At each step, the algorithm makes the best possible choice without considering the overall problem. This choice is made with the hope that these local optimal decisions will lead to a globally optimal solution.
2. Irrevocable Decisions: Once a choice is made, it cannot be changed. The algorithm proceeds to the
next step, making another locally optimal choice.
3. Efficiency: Greedy algorithms are typically easy to implement and run quickly, as they make decisions
based on local information and do not need to consider all possible solutions.

Motivations for Greedy Approach

Here are the reasons for using the greedy approach:

• The greedy approach involves a few trade-offs that can make it suitable for optimization problems.
• One prominent reason is to achieve the most feasible solution immediately. In the activity
selection problem (Explained below), if more activities can be done before finishing the current
activity, these activities can be performed within the same time.
• Another reason is to divide a problem recursively based on a condition, with no need to combine
all the solutions.
• In the activity selection problem, the “recursive division” step is achieved by scanning a list of
items only once and considering certain activities.

Characteristics of the Greedy Algorithm

1. Local Optimization:

Greedy algorithms make the best possible choice at each step by considering only the current problem state
without regard to the overall problem. This local choice is made with the hope that these local optimal choices
will lead to a globally optimal solution.

2. Irrevocable Decisions:
Once a choice is made, it cannot be changed. This means that the algorithm does not backtrack or
reconsider previous decisions.

3. Problem-Specific Heuristics:

Greedy algorithms often rely on problem-specific heuristics to guide their decision-making process. These
heuristics are designed based on the properties of the problem.
4. Optimality:

Greedy algorithms are guaranteed to produce optimal solutions for some problems (e.g., coin change with certain denominations, Huffman coding, Kruskal's algorithm for Minimum Spanning Tree) but not for others. The success of a greedy algorithm depends on the specific characteristics of the problem.

5. Efficiency:

Greedy algorithms are generally very efficient regarding both time and space complexity because they
make decisions based on local information and do not need to explore all possible solutions.

By choosing the best option at every stage, greedy algorithms often provide efficient and simple solutions
to complex problems.

Solving Computational Problems Using Greedy Approach

Problem-1 (Task Completion Problem)

Given an array of positive integers each indicating the completion time for a task, find the maximum
number of tasks that can be completed in the limited amount of time that you have.

In the problem of finding the maximum number of tasks that can be completed within a limited amount of time, the optimal substructure can be identified by recognizing how smaller sub-problems relate to the overall problem. Here's how it works:

1. Break Down the Problem: Consider a subset of the tasks and determine the optimal solution for this subset. For example, given a certain time limit, find the maximum number of tasks that can be completed from the first k tasks in the array.
2. Extend to Larger Sub-problems: Extend the solution from smaller sub-problems to larger ones. If you
can solve the problem for k tasks, you can then consider the (k + 1)th task and decide if including this task
leads to a better solution under the given time constraint.
3. Recursive Nature: The optimal solution for the first k tasks should help in finding the optimal solution for
the first (k + 1) tasks. This recursive approach ensures that the overall solution is built from the solutions of
smaller sub-problems.
4. Greedy Choice: At each step, make the greedy choice of selecting the task with the shortest completion
time that fits within the remaining available time. This choice reduces the problem size and leads to a
solution that maximizes the number of tasks completed.

Greedy algorithms excel in problems with optimal substructure, where the problem can be broken down
into smaller, solvable components.
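
A small Python sketch of this greedy strategy, assuming tasks is a list of completion times and time_limit is the total time available:

def max_tasks(tasks, time_limit):
    # Greedy choice: always pick the shortest remaining task that still fits.
    completed = 0
    time_used = 0
    for duration in sorted(tasks):            # sort by completion time
        if time_used + duration <= time_limit:
            time_used += duration             # commit to this task (irrevocable choice)
            completed += 1
        else:
            break                             # no remaining task can fit
    return completed

print(max_tasks([4, 2, 7, 1, 5], 10))   # 3  (the tasks of length 1, 2 and 4)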
Advantages of the Greedy Approach

1. Simplicity: Greedy algorithms are generally easy to understand and implement.
2. Speed: These algorithms typically run quickly, making them suitable for large input
sizes.
3. Optimal for Certain Problems: For some problems, like the Coin Change Problem
with certain denominations, greedy algorithms provide an optimal solution.

Disadvantages of the Greedy Approach

1. Suboptimal Solutions: Greedy algorithms do not always produce the optimal solution for every problem. They are most effective when the problem has the greedy-choice property, meaning a global optimum can be reached by making locally optimal choices.
2. Irrevocable Decisions: Once a choice is made, it cannot be changed, which may lead
to a suboptimal solution in some cases.
3. Lack of Backtracking: Greedy algorithms do not explore all possible solutions or backtrack, which means they can miss better solutions.

Although greedy algorithms may not always find the perfect solution, they often provide fast
and close-to-optimal answers.

Greedy Algorithms vs Dynamic Programming

Purpose:
• Greedy Algorithms: Used for optimization problems.
• Dynamic Programming: Used for optimization problems.

Approach:
• Greedy Algorithms: Makes a locally optimal choice at each step in the hope of finding a globally optimal solution.
• Dynamic Programming: Solves sub-problems optimally and combines their results to find the global optimum.

Decision Process:
• Greedy Algorithms: Decisions are made based on immediate benefits without considering future implications.
• Dynamic Programming: Decisions are based on previously solved sub-problems, ensuring informed and optimal choices.

Global Optimality:
• Greedy Algorithms: Does not guarantee a globally optimal solution.
• Dynamic Programming: Guarantees a globally optimal solution if applicable.

Overlapping Sub-Problems:
• Greedy Algorithms: Not necessarily used for problems with overlapping sub-problems.
• Dynamic Programming: Specifically designed to handle problems with overlapping sub-problems.

Optimal Substructure:
• Greedy Algorithms: Works only if the problem exhibits the greedy-choice property and optimal substructure.
• Dynamic Programming: Works when the problem has optimal substructure.

Complexity:
• Greedy Algorithms: Generally faster with lower time complexity, as not all sub-problems are solved.
• Dynamic Programming: Typically slower because all sub-problems are solved and stored, though redundancy is avoided.

Memory Usage:
• Greedy Algorithms: Requires minimal memory, as results of sub-problems are not stored.
• Dynamic Programming: Requires additional memory to store intermediate results (memoization/tabulation).

Examples:
• Greedy Algorithms: Activity selection, Prim's and Kruskal's algorithms, fractional knapsack.
• Dynamic Programming: Fibonacci sequence, Longest Common Subsequence (LCS), 0/1 Knapsack problem.

Randomized Approach

® The performance of a randomized approach depends not only on the input data but also on random values
generated by a random number generator.
® When an algorithm involves choosing between multiple alternatives and determining the optimal choice is
difficult, a randomized approach can help by making the choice randomly instead of spending time
calculating the best alternative.
® This approach is particularly useful when there are many alternatives, most of which are "good," and finding
the best one is challenging or unnecessary.
® Randomizing an algorithm typically does not improve its worst-case running time but helps prevent the
algorithm from always producing the worst-case behavior on certain inputs.
® Because the behavior of a randomized algorithm is determined by random numbers, it is uncommon for the
algorithm to behave the same way on consecutive runs, even with the same input data.
® Randomized algorithms are often used in situations where fairness is required, especially in game-theoretic
contexts where mutual suspicion exists.
® Randomized approaches are widely applied in computer and information security, as well as in computer-
based games, to ensure fairness and unpredictability.

Example :

• There are n distinct types of coupons.


• Each time you collect a coupon, it is chosen randomly from one of the n types.
• The goal is to determine how many coupon collections are required (on average) to collect all n distinct
coupons.
• The coupon collected in each trial is chosen randomly.
• As you collect more coupons, the probability of obtaining a new type decreases because fewer distinct
types are left.
• At the start, it's easy to get new coupon types because most of the n types are still missing.
• As you collect more coupons, the chances of collecting a new type become smaller, making it
progressively harder to collect all n types.
• The expected number of collections needed to collect all n coupons is approximately n × ln(n), meaning that it grows in proportion to n log n as the number of distinct coupon types increases.
• The problem is randomized because each coupon is collected randomly, leading to unpredictable behavior,
though the expected number of collections can be mathematically determined.
• This problem and its solution are used to model situations like sampling, search algorithms, or exploration
in random spaces, where choices or outcomes are randomly determined.
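
The behaviour described above can be checked with a small randomized simulation in Python; the exact numbers vary from run to run because each draw is random:

import random

def collect_all_coupons(n):
    # Simulate drawing random coupons until all n distinct types have been seen.
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(random.randrange(n))   # each draw is a uniformly random type
        draws += 1
    return draws

# The average number of draws grows roughly like n * ln(n);
# for n = 50 a typical average is around 225 (n * ln(n) plus a lower-order term).
n = 50
trials = [collect_all_coupons(n) for _ in range(1000)]
print(sum(trials) / len(trials))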

Motivations for the Randomized Approach

The randomized approach to problem-solving offers several compelling advantages that can make it a
valuable tool in both theoretical and practical applications. Some of them are as follows:

1. Complexity Reduction: A randomized approach often simplifies complex problems by introducing probabilistic choices that lead to efficient solutions. For example, imagine you are organizing a community health screening event in a large city. You need to decide on the number of screening stations and their locations to maximize coverage and efficiency. Instead of analyzing every possible combination of locations and station numbers, which would be highly complex and time-consuming, you could randomly select several potential locations and test their effectiveness. By evaluating a sample of these random setups, you can identify patterns or clusters of locations that work well. This method simplifies the complex problem of optimizing station placement by reducing the number of scenarios you need to explore in detail.

2. Versatility: Applicable across diverse domains, from combinatorial optimization to stochastic simulations, where deterministic solutions may be impractical or infeasible. For example, consider a company that is developing a new app and wants to test its usability. Testing every feature with every possible user scenario could be impractical. Instead, the company could randomly select a diverse group of users and a subset of features to test. By analyzing how this sample of users interacts with the app and identifying any issues they encounter, the company can gain insights that are broadly applicable to all users. This approach allows the company to obtain useful feedback and make improvements without needing to test every possible combination of user and feature.

3. Performance: In certain scenarios, a randomized approach can offer significant performance improvements over deterministic counterparts, particularly when dealing with large datasets or complex systems. For example, imagine a large library that wants to estimate how often books are checked out. Instead of tracking every single book's check-out frequency, which would be a massive task, the library staff could randomly sample a selection of books from different genres and record their check-out rates over a period of time. By analyzing this sample, they can estimate the average check-out frequency for the entire collection. This approach improves performance in terms of both time and resources, allowing the library to make informed decisions about which books to keep, acquire, or remove based on practical data from the sampled books.

The power of randomness lies in its ability to break symmetries and explore solution spaces that deterministic
methods may overlook.

Characteristics of Randomized Approach

Randomized approaches, which incorporate elements of randomness into their decision-making processes,
possess distinct characteristics that differentiate them from deterministic methods. Some of them are as
follows:

1. Probabilistic Choices: A randomized approach makes decisions based on random sampling or probabilistic events, leading to variable but statistically predictable outcomes. For instance, consider a company deciding where to place new vending machines in a large office building. Instead of assessing every possible location in detail, the company could randomly select a few potential spots, test their performance, and use this data to make a final decision. Although the locations chosen may vary each time the process is conducted, the overall approach helps identify the most effective spots based on statistical analysis of the sampled data.
2. Efficiency: They often achieve efficiency by sacrificing deterministic guarantees for
probabilistic correctness, optimizing performance in scenarios where exhaustive computation is
impractical. For instance, suppose you need to determine the most popular menu items in a large
restaurant chain. Instead of surveying every customer, which would be time-consuming and
expensive, you might randomly select a subset of customers and analyze their preferences.
Although this method does not guarantee that you will capture every preference perfectly, it
provides a practical and efficient way to understand overall trends without needing to gather data
from every single customer.
3. Complexity Analysis: Evaluating the performance of randomized approaches involves analyzing their average-case behavior or expected outcomes over multiple iterations, rather than deterministic worst-case scenarios. For example, if you are estimating the average time it takes for customers to complete a purchase at an online store, you might randomly sample customer transactions over a period of time. Instead of focusing on the longest possible wait time, you analyze how the average wait time behaves across many transactions. This approach provides a practical understanding of performance under typical conditions, rather than the extremes, offering a more balanced view of how the system performs in real-world scenarios.

Overall, the characteristics of randomized approaches (probabilistic choices, efficiency, and average-case complexity analysis) highlight their adaptability and practical advantages. These features make them a powerful tool for tackling complex problems where deterministic methods may fall short, offering a balance between performance and reliability in a wide range of computational scenarios.

Randomized approaches balance simplicity and performance, trading deterministic precision for
probabilistic guarantees of correctness.
