AI Problem Formulation Explained
1. Production Systems:
Definition:
Production systems are rule-based systems used in AI to model intelligent
behaviour and solve problems.
Components:
They typically consist of:
o Rules (Productions): IF-THEN statements that define actions based on
conditions.
o Working Memory (Global Database): Stores the current state of the
problem and data relevant to the rules.
o Control System: Determines which rules to apply and how to apply them,
guiding the problem-solving process.
Functionality:
Rules fire (apply) to modify the working memory based on their conditions,
iteratively moving the system closer to a solution.
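The rule-firing loop described above can be sketched in a few lines of Python. The rules and facts here are hypothetical toy examples; each rule pairs an IF condition over working memory with a THEN action that adds new facts:

```python
# A minimal production-system sketch (hypothetical rules and facts).
# Each rule is an IF-THEN pair: a condition over working memory and an
# action that returns new facts when the condition holds.

def run_production_system(rules, working_memory):
    """Repeatedly fire applicable rules until no rule changes memory."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(working_memory):
                new_facts = action(working_memory) - working_memory
                if new_facts:
                    working_memory |= new_facts
                    changed = True
    return working_memory

# Two toy rules over a set of string facts.
rules = [
    (lambda wm: "raining" in wm, lambda wm: {"wet_ground"}),
    (lambda wm: "wet_ground" in wm, lambda wm: {"slippery"}),
]
memory = run_production_system(rules, {"raining"})
print(memory)  # {'raining', 'wet_ground', 'slippery'}
```

The control system here is deliberately simple (fire every applicable rule in order); real systems add conflict-resolution strategies on top of this loop.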
2. Control Strategies:
Purpose:
Control strategies are crucial for efficient problem-solving in production
systems. They dictate the order in which rules are applied and how the system
processes data.
Key Aspects:
o Rule Selection: Determining which rule(s) to apply from the available
options.
o Conflict Resolution: Handling situations where multiple rules are
applicable simultaneously.
o Efficiency: Ensuring the strategy leads to a solution in a reasonable time
and with minimal computation.
Examples:
o Forward Chaining: Starting from initial facts and applying rules to derive
new facts until a goal is reached.
o Backward Chaining: Starting from a goal and working backward to find
the rules and facts that support it.
o Heuristic Search: Using rules of thumb (heuristics) to guide the search
towards promising solutions.
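Backward chaining, for instance, can be sketched as a recursive proof procedure. The rules and facts below are hypothetical; each rule maps a conclusion to the premises that establish it:

```python
# Backward chaining sketch (toy rules): prove a goal by recursively
# proving the premises of the rule that concludes it.
rules = {
    "slippery": ["wet_ground"],   # IF wet_ground THEN slippery
    "wet_ground": ["raining"],    # IF raining THEN wet_ground
}
facts = {"raining"}

def prove(goal):
    """Return True if `goal` is a known fact or derivable from the rules."""
    if goal in facts:
        return True
    premises = rules.get(goal)
    if premises is None:
        return False
    return all(prove(p) for p in premises)

print(prove("slippery"))  # True
```

Forward chaining would instead start from `facts` and apply rules until `slippery` appears in working memory.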
3. Search Strategies:
Role:
Search strategies are employed within control strategies to explore the problem
space and find the optimal solution.
Types of Search:
o Uninformed Search: Algorithms like Breadth-First Search and Depth-First
Search explore the search space systematically without using any problem-
specific knowledge.
o Informed Search: Algorithms like A*, Best-First Search, and Hill Climbing
utilize heuristics to guide the search and prioritize promising paths.
Example:
In a game like chess, the search strategy might explore different move sequences
to identify the best move based on factors like material advantage, piece activity,
and board control.
Problem Characteristics:
Initial State: The starting point or current situation of the problem.
Goal State: The desired outcome or solution to the problem.
Operators/Actions: The steps or processes that can be applied to move from
the initial state towards the goal state.
Constraints: Limitations or restrictions that affect the possible actions or
solutions.
Complexity: The difficulty or intricacy of the problem, which can vary
depending on the number of states, operators, and constraints.
Problem-Solving Methods:
Trial and Error: Trying different solutions until one works.
Heuristic Search: Using rules of thumb or educated guesses to guide the
search for a solution.
Algorithm: A step-by-step procedure for solving a problem.
Divide and Conquer: Breaking down a complex problem into smaller, more
manageable subproblems.
Means-Ends Analysis: Identifying the difference between the current state
and the goal state, and then finding actions to reduce that difference.
Working Backwards: Starting from the goal state and working backward to
find a path to the initial state.
Constraint Satisfaction: Finding a solution that satisfies all the given
constraints.
Problem Graphs:
Nodes: Represent the different states of the problem.
Edges/Arcs: Represent the transitions or actions that can be taken between
states.
Path: A sequence of nodes and edges that leads from the initial state to the
goal state.
Search Algorithms: Algorithms like Breadth-First Search or Depth-First
Search can be used to explore the problem graph and find the optimal path to
the solution.
Problem Graphs:
Problem graphs are a way to model a problem as a graph where nodes
represent states of the problem, and edges represent possible actions or
transitions between states.
This representation is useful for visualizing the search space and applying
various search algorithms to find a solution.
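A problem graph can be represented directly as an adjacency dictionary. The graph below is a toy assumption: keys are states (nodes) and values list the states reachable by one action (edges):

```python
# A problem graph as an adjacency dict: nodes are states, edges are actions.
graph = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["a"],
    "goal": [],
}

def is_path(graph, path):
    """Check that consecutive states in `path` are connected by edges."""
    return all(b in graph[a] for a, b in zip(path, path[1:]))

print(is_path(graph, ["start", "a", "goal"]))  # True
print(is_path(graph, ["start", "goal"]))       # False
```

Search algorithms such as BFS and DFS then operate on exactly this structure to find a valid path from the initial state to the goal state.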
Matching:
Matching is the process of finding compatible elements within a problem. In the
context of graphs, this could involve finding a set of edges that connect different
parts of the graph in a specific way, like finding a maximum matching.
Matching problems are common in various applications, such as resource
allocation, scheduling, and network design.
Indexing:
Indexing refers to creating data structures that allow for efficient access to
specific information within a larger dataset.
In problem-solving, indexing can be used to store and retrieve information about
states, actions, or solutions, speeding up the search process.
Heuristic Functions:
Heuristic functions are used in informed search algorithms to estimate the cost
of reaching a goal state from a given state.
They provide a "guess" or estimate of the remaining cost, guiding the search
towards promising paths.
A good heuristic function can significantly improve the efficiency of a search
algorithm.
Examples of heuristic search algorithms include A*, Greedy Best-First Search,
and Hill Climbing.
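A classic concrete heuristic is Manhattan distance on a grid, which estimates the number of moves to a goal cell when only up/down/left/right moves are allowed (a standard example, not tied to any particular problem above):

```python
# Manhattan distance heuristic: estimates remaining moves on a grid where
# only horizontal and vertical steps are allowed. It never overestimates
# the true cost under those movement rules, which makes it admissible.
def manhattan(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
```

An informed search such as A* would add this estimate to the cost already incurred to rank which node to expand next.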
Hill Climbing:
Definition:
A local search algorithm that iteratively moves towards a solution by selecting the
best neighbour (or a randomly selected better neighbour) at each step.
Types:
o Steepest Ascent/Descent: Always chooses the best immediate
neighbour.
o First-Choice: Evaluates neighbours in random order and moves to the first
one that is better than the current state.
o Stochastic: Randomly selects a neighbour; the move is accepted only if
that neighbour is better than the current state (it need not be the best).
Related Concepts:
Heuristic search, local optima.
Example:
Finding the peak in a mountainous terrain, where the algorithm moves uphill
towards the peak.
Depth-First Search (DFS):
Definition:
Explores a tree or graph by going as deep as possible along each branch before
backtracking.
How it works:
Starts at a root node and explores the first child, then the first child of that child,
and so on, until reaching a leaf or a previously visited node.
Related Concepts:
Tree/graph traversal, recursion, backtracking.
Example:
Exploring a maze by following one path as far as possible before trying a different
path.
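The maze-exploration idea above maps directly onto a short recursive sketch. The graph is a toy adjacency dict; the function records the order in which nodes are visited:

```python
# Recursive depth-first search on an adjacency dict, recording visit order.
# Each call goes as deep as possible before backtracking to a sibling.
def dfs(graph, node, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    for nxt in graph[node]:
        if nxt not in visited:
            dfs(graph, nxt, visited, order)
    return order

maze = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(maze, "A"))  # ['A', 'B', 'D', 'C']
```

Note that D is reached before C: DFS exhausts the branch through B before backtracking.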
Breadth-First Search (BFS):
Definition: Explores a tree or graph level by level.
How it works: Starts at the root node and explores all the immediate
neighbors, then explores the neighbors of those neighbors, and so on.
Related Concepts: Tree/graph traversal, queue data structure.
Example: Finding the shortest path from a starting point to a destination in a
graph, where all paths of length 1 are explored first, then all paths of length 2,
and so on.
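The level-by-level behaviour comes from the queue: paths of length 1 are dequeued before any path of length 2, so the first path that reaches the goal is a shortest one. A sketch on a toy graph:

```python
from collections import deque

# BFS sketch using a queue of paths: nodes are expanded level by level,
# so the first path that reaches the goal uses the fewest edges.
def bfs_shortest_path(graph, start, goal):
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_shortest_path(g, "A", "D"))  # ['A', 'B', 'D']
```

Swapping the queue for a stack turns this into DFS, which no longer guarantees a shortest path.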
Constraint Satisfaction Problems (CSPs):
Definition:
A problem-solving approach where the goal is to find a set of values for variables
that satisfy a given set of constraints.
How it works:
Involves defining variables, their domains (possible values), and constraints that
limit the possible combinations of variable values.
Related Algorithms:
o Backtracking: Explores the search space by systematically trying different
combinations of values and backtracking when a constraint is violated.
o Forward Checking: A technique to improve backtracking by looking ahead
and eliminating values that violate constraints after assigning a value to a
variable.
o Constraint Propagation: A technique to reduce the search space by
eliminating inconsistent values from the domains of variables based on the
constraints.
Example:
Sudoku, where the goal is to fill a grid with numbers such that each row, column,
and 3x3 subgrid contains all digits from 1 to 9.
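Full Sudoku is large, so the backtracking idea is easier to see on a smaller CSP. The sketch below colours three hypothetical regions so that neighbours differ; variables, domains, and constraints are toy choices for illustration:

```python
# Backtracking sketch for a small map-colouring CSP: assign each region a
# colour so that no two neighbouring regions share one.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}

def consistent(var, value, assignment):
    """A value is consistent if no assigned neighbour already uses it."""
    return all(assignment.get(n) != value for n in neighbours[var])

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]       # undo and try the next value
    return None                       # dead end: trigger backtracking above

solution = backtrack({})
print(solution)  # {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

Forward checking and constraint propagation would prune `domains` before or during this loop, shrinking the space the backtracking has to explore.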
Relationships:
Hill Climbing can be used as a local search strategy within a CSP to find a good
solution within a given set of constraints.
DFS and BFS can be used to explore the search space of a CSP, but they are
not tailored to constraint satisfaction.
Algorithms like backtracking, forward checking, and constraint propagation are
specifically designed to solve CSPs more efficiently.
In essence, these algorithms represent different approaches to problem-
solving in AI, with Hill Climbing being a heuristic search, DFS and BFS being
foundational search algorithms, and CSPs providing a framework for
problems with constraints.
How Does the Hill Climbing Algorithm Work?
In the Hill Climbing algorithm, the process begins with an initial solution, which
is then iteratively improved by making small, incremental changes. These changes
are evaluated by a heuristic function to determine the quality of the solution. The
algorithm continues to make these adjustments until it reaches a local maximum—
a point where no further improvement can be made with the current set of moves.
Basic Concepts of Hill Climbing Algorithms
Hill climbing follows these steps:
1. Initial State: Start with an arbitrary or random solution (initial state).
2. Neighbouring States: Identify neighbouring states of the current
solution by making small adjustments (mutations or tweaks).
3. Move to Neighbour: If one of the neighbouring states offers a better
solution (according to some evaluation function), move to this new state.
4. Termination: Repeat this process until no neighbouring state is better
than the current one. At this point, you’ve reached a local maximum or
minimum (depending on whether you’re maximizing or minimizing).
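The four steps above can be condensed into a short loop. The objective function and the neighbour rule (step left or right on the integers) are toy assumptions for illustration:

```python
# Hill climbing sketch for maximizing a simple function over the integers.
# Neighbours of x are x - 1 and x + 1 (steps 2-3 of the procedure above).
def hill_climb(f, start):
    current = start                          # step 1: initial state
    while True:
        neighbours = [current - 1, current + 1]
        best = max(neighbours, key=f)
        if f(best) <= f(current):            # step 4: no neighbour improves
            return current                   # -> local maximum reached
        current = best                       # step 3: move to the neighbour

f = lambda x: -(x - 3) ** 2                  # toy objective with peak at x = 3
print(hill_climb(f, 0))  # 3
```

On this smooth objective the local maximum found is also the global one; on a bumpier function the same loop can stop at a lesser peak, which motivates the variants below.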
Hill Climbing as a Heuristic Search in Mathematical Optimization
The Hill Climbing algorithm is often used for solving mathematical
optimization problems in AI. With a good heuristic function and a large set of
inputs, Hill Climbing can find a sufficiently good solution in a reasonable
amount of time, although it may not always find the global maximum.
In mathematical optimization, Hill Climbing is commonly applied to problems that
involve maximizing or minimizing a real function. For example, in
the Traveling Salesman Problem, the objective is to minimize the distance
traveled by the salesman while visiting multiple cities.
What is a Heuristic Function?
A heuristic function is a function that ranks the possible alternatives at any
branching step in a search algorithm based on available information. It helps the
algorithm select the best route among various possible paths, thus guiding the
search towards a good solution efficiently.
Features of the Hill Climbing Algorithm
1. Variant of Generating and Testing Algorithm: Hill Climbing is a
specific variant of the generating and testing algorithms. The process
involves:
Generating possible solutions: The algorithm creates
potential solutions within the search space.
Testing solutions: Each generated solution is evaluated to
determine if it meets the desired criteria.
Iteration: If a satisfactory solution is found, the algorithm
terminates; otherwise, it returns to the generation step.
This iterative feedback mechanism allows Hill Climbing to refine its
search by using information from previous evaluations to inform future
moves in the search space.
2. Greedy Approach: The Hill Climbing algorithm employs a greedy
approach, meaning that at each step, it moves in the direction that
optimizes the objective function. This strategy aims to find the optimal
solution efficiently by making the best immediate choice without
considering the overall problem context.
Types of Hill Climbing in Artificial Intelligence
1. Simple Hill Climbing Algorithm
Simple Hill Climbing is a straightforward variant of hill climbing where the
algorithm evaluates each neighbouring node one by one and selects the first node
that offers an improvement over the current one.
Algorithm for Simple Hill Climbing
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Loop until a solution is found or no operators can be applied:
Apply an operator that has not yet been used on the current state
to generate a new state.
Evaluate the new state.
If the new state is the goal, return success.
If the new state improves upon the current state, make it the
current state and continue.
If it doesn't improve, continue searching neighbouring states.
4. Exit the function if no better state is found.
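The defining choice in this variant is taking the first improving neighbour rather than the best one. A sketch, with a toy score function and neighbour rule:

```python
# Simple hill climbing sketch: move to the FIRST neighbour that improves
# on the current state (first-improvement), not necessarily the best one.
def simple_hill_climb(score, neighbours, start):
    current = start
    while True:
        moved = False
        for n in neighbours(current):
            if score(n) > score(current):
                current = n               # accept the first improvement
                moved = True
                break
        if not moved:
            return current                # no neighbour improves: local optimum

score = lambda x: -(x - 5) ** 2           # toy objective with peak at x = 5
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climb(score, neighbours, 0))  # 5
```

Because it stops scanning at the first improvement, each step is cheap, but the steps taken may be smaller than those of steepest ascent.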
2. Steepest-Ascent Hill Climbing
Steepest-Ascent Hill Climbing is an enhanced version of simple hill climbing.
Instead of moving to the first neighbouring node that improves the state, it
evaluates all neighbours and moves to the one offering the highest improvement
(steepest ascent).
Algorithm for Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Repeat until the solution is found or the current state remains unchanged:
Initialize a ‘best state’ variable and evaluate all neighbouring
states of the current state.
If a better state is found, update the best state.
If the best state is the goal, return success.
If the best state improves upon the current state, make it the
new current state and repeat.
4. Exit the function if no better state is found.
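Compared with the previous sketch, the only change is that every neighbour is evaluated before moving. The score and neighbour functions are again toy assumptions:

```python
# Steepest-ascent sketch: evaluate ALL neighbours, track the best one, and
# move only to it; stop when even the best neighbour is no improvement.
def steepest_ascent(score, neighbours, start):
    current = start
    while True:
        best = current
        for n in neighbours(current):     # evaluate every neighbour
            if score(n) > score(best):
                best = n                  # keep the steepest improvement
        if best == current:
            return current                # no improvement anywhere
        current = best

score = lambda x: 10 - abs(x - 4)         # toy objective with peak at x = 4
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent(score, neighbours, 0))  # 4
```

Each iteration costs a full neighbourhood scan, in exchange for always taking the locally best step.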
3. Stochastic Hill Climbing
Stochastic Hill Climbing introduces randomness into the search process. Instead of
evaluating all neighbours or selecting the first improvement, it selects a random
neighbouring node and decides whether to move based on its improvement over
the current state.
Algorithm for Stochastic Hill Climbing:
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Repeat until a solution is found or the current state does not change:
Apply the successor function to the current state and generate
all neighbouring states.
Choose a random neighbouring state based on a probability
function.
If the chosen state is better than the current state, make it the
new current state.
If the selected neighbour is the goal state, return success.
4. Exit the function if no better state is found.
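A sketch of the stochastic variant follows. The termination rule here (give up after a fixed number of non-improving draws) and the fixed random seed are practical assumptions added for illustration, not part of the algorithm's definition:

```python
import random

# Stochastic hill climbing sketch: pick a RANDOM neighbour and accept it
# only if it improves on the current state. A cutoff on consecutive
# non-improving draws (an illustrative choice) serves as the stopping rule.
def stochastic_hill_climb(score, neighbours, start, max_stuck=100):
    random.seed(0)                        # fixed seed so the example repeats
    current, stuck = start, 0
    while stuck < max_stuck:
        candidate = random.choice(neighbours(current))
        if score(candidate) > score(current):
            current, stuck = candidate, 0
        else:
            stuck += 1
    return current

score = lambda x: -(x - 7) ** 2           # toy objective with peak at x = 7
neighbours = lambda x: [x - 1, x + 1]
print(stochastic_hill_climb(score, neighbours, 0))
```

The randomness means the path taken varies between seeds, which can help the search wander away from some poor regions, at the cost of slower, less direct progress.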
State-Space Diagram in Hill Climbing: Key Concepts and
Regions
In the Hill Climbing algorithm, the state-space diagram is a visual
representation of all possible states the search algorithm can reach, plotted against
the values of the objective function (the function we aim to maximize).
In the state-space diagram:
X-axis: Represents the state space, which includes all the possible states
or configurations that the algorithm can reach.
Y-axis: Represents the values of the objective function corresponding
to each state.
The optimal solution in the state-space diagram is represented by the state where
the objective function reaches its maximum value, also known as the global
maximum.
Best First Search (Informed Search)
Best First Search is a heuristic search algorithm that selects the most promising
node for expansion based on an evaluation function. It prioritizes nodes in the
search space using a heuristic to estimate their potential. By iteratively choosing
the most promising node, it aims to efficiently navigate towards the goal state,
making it particularly effective for optimization problems.
The idea is to use a priority queue (or heap) ordered by the evaluation
function, always expanding the node with the lowest evaluation value; the
traversal otherwise operates much like BFS.
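The priority-queue idea can be sketched with Python's `heapq`. The graph and the heuristic values below are toy assumptions; the queue is ordered by the heuristic alone, so this is the greedy form of Best First Search:

```python
import heapq

# Greedy Best-First Search sketch: a priority queue keyed on h(n) alone,
# always expanding the node that looks closest to the goal.
def best_first_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # lowest h(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 3, "G": 0}              # toy heuristic estimates
print(best_first_search(graph, h, "S", "G"))  # ['S', 'A', 'G']
```

A* uses the same queue but orders it by g(n) + h(n), the cost so far plus the estimate, which restores optimality guarantees that the greedy form lacks.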