
Notes on Module–II

Solving problems by searching:

Problem-Solving Agents

Intelligent agents aim to maximize their performance measure. Simplifying this process often involves setting specific goals. By
focusing on goals, agents can better organize their behavior and decision-making.

Goal formulation involves identifying the set of states where the goal is satisfied. The agent's task is to determine how to act
to reach these states.

Problem formulation is about deciding which actions and states are relevant given the goal. It involves choosing an appropriate level of abstraction, focusing on high-level actions rather than fine-grained, less meaningful ones.

Scenario Without Information: Without a map, the agent may not know the best route from Arad to Bucharest, resulting in
random choices.

Scenario With Information: With a map, the agent can plan a route by examining possible actions and states. The map
provides information about the states (cities) and actions (roads) needed to reach Bucharest.

In the Romania route-finding setting, the task environment is assumed to be observable, discrete, known, and deterministic:

Observable: The agent always knows its current state (e.g., city names are visible).

Discrete: The environment allows for a finite number of actions at each state (e.g., driving to a limited number of cities).

Known: The agent knows the outcomes of actions (e.g., roads lead to specific cities).

Deterministic: Each action leads to a predictable outcome (e.g., driving to Sibiu always gets the agent to Sibiu).

Search: Finding a sequence of actions that leads to the goal is called search.

Algorithm: A search algorithm processes the problem and provides a sequence of actions to achieve the goal.

Execution: Once a solution is found, the agent follows the sequence of actions. This is called the execution phase.

In an open-loop system, the agent follows a pre-determined plan without considering real-time feedback. This assumes the
environment behaves as expected.
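The formulate–search–execute loop can be sketched as follows. This is a minimal sketch, and the names `agent_step` and `search_fn` are hypothetical; the open-loop behaviour shows up as the agent never re-invoking search while a plan remains.

```python
def agent_step(percept, goal, plan, search_fn):
    """One step of a simple problem-solving agent (hypothetical helper).
    Open-loop control: once `plan` is computed, its actions are executed
    in order without checking percepts again, assuming the environment
    behaves as expected."""
    if not plan:                           # no plan yet: formulate and search
        plan.extend(search_fn(percept, goal))
    return plan.pop(0) if plan else None   # execution phase
```

A first call triggers the search phase; subsequent calls just pop the next action from the stored plan.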

Well-defined problems and solutions:


A problem can be defined formally by five components:

Initial State
The initial state is the starting point of the agent.
Example: In the context of an agent in Romania, it can be described as In(Arad).
Actions
Actions are the possible operations that the agent can perform from a given state.
Applicable Actions: For a specific state s, the function ACTIONS(s) returns a set of actions that can be executed in that
state.
Example: From the state In(Arad), the applicable actions are:
{Go(Sibiu), Go(Timisoara), Go(Zerind)}
Transition Model
The transition model describes the outcome of performing an action in a particular state.
This is defined by the function RESULT(s, a), which returns the state that results from taking action a in state s.

Prepared by Dr. Ravi Kumar Saidala, BTech, MTech, PhD
The term successor refers to any state reachable from a given state via a single action.
Example:
RESULT(In(Arad), Go(Zerind)) = In(Zerind)
State Space
The state space encompasses all states that can be reached from the initial state by any sequence of actions.
It can be represented as a directed network or graph, where:
Nodes represent states.
Links (edges) represent actions.
A path in the state space is a sequence of states connected by actions.
Example: The map of Romania can be viewed as a state-space graph, where roads represent driving actions in both
directions.
Goal Test
The goal test determines if a given state is a goal state.
There can be explicit goal states or abstract properties that define the goal.
Example:
The agent’s goal in Romania is the singleton set {In(Bucharest)}.
In chess, the goal is to reach a state called “checkmate”.
Path Cost
The path cost function assigns a numeric cost to each path taken by the agent.
The agent selects a cost function that reflects its performance measure.
For an agent trying to reach Bucharest, the path cost might be based on distance (in kilometers).
Step Cost: The cost of taking action a in state s to reach state s′ is denoted by c(s, a, s').
In Romania, step costs can be illustrated as route distances, and it is assumed that step costs are non-negative.
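The five components can be sketched for the Romania example roughly as follows. This is a partial map (only the cities needed to reach Bucharest), and the helper names `actions`, `result`, `goal_test`, and `step_cost` mirror ACTIONS, RESULT, the goal test, and c(s, a, s').

```python
# Partial road map of Romania: ROADS[city] maps each neighbour to the
# road distance in km.
ROADS = {
    "Arad":          {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind":        {"Arad": 75, "Oradea": 71},
    "Oradea":        {"Zerind": 71, "Sibiu": 151},
    "Sibiu":         {"Arad": 140, "Oradea": 151, "Fagaras": 99, "RimnicuVilcea": 80},
    "Fagaras":       {"Sibiu": 99, "Bucharest": 211},
    "RimnicuVilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":       {"RimnicuVilcea": 97, "Bucharest": 101},
    "Timisoara":     {"Arad": 118},          # only part of the map shown
    "Bucharest":     {"Fagaras": 211, "Pitesti": 101},
}

INITIAL_STATE = "Arad"

def actions(s):                  # ACTIONS(s): applicable actions in s
    return [f"Go({city})" for city in ROADS[s]]

def result(s, a):                # RESULT(s, a): transition model
    return a[3:-1]               # "Go(Sibiu)" -> "Sibiu"

def goal_test(s):                # goal test: the singleton {In(Bucharest)}
    return s == "Bucharest"

def step_cost(s, a, s2):         # c(s, a, s'): road distance in km
    return ROADS[s][s2]
```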

1. Toy problems

Example 1: Vacuum world can be formulated as a problem as follows:
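One possible formulation, a sketch assuming the two-square world (squares A and B): a state is the agent's location plus the dirt status of each square, giving 8 states in total.

```python
# Vacuum-world formulation sketch: a state is (location, dirt_at_A, dirt_at_B).
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    # Suck removes dirt from the current square only.
    if loc == "A":
        return (loc, False, dirt_b)
    return (loc, dirt_a, False)

def goal_test(state):
    return not state[1] and not state[2]   # both squares clean
```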

Example 2: 8-puzzle:
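A sketch of one common 8-puzzle formulation, assuming a state is the 3×3 board stored as a flat tuple (0 marks the blank) and every move costs 1.

```python
# 8-puzzle formulation sketch: a state is a tuple of 9 tiles read row by
# row, with 0 for the blank square.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def actions(state):
    """Legal moves of the blank square: Up/Down/Left/Right."""
    i = state.index(0)
    moves = []
    if i >= 3:
        moves.append("Up")
    if i < 6:
        moves.append("Down")
    if i % 3 != 0:
        moves.append("Left")
    if i % 3 != 2:
        moves.append("Right")
    return moves

def result(state, action):
    i = state.index(0)
    j = {"Up": i - 3, "Down": i + 3, "Left": i - 1, "Right": i + 1}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]      # slide the neighbouring tile into the blank
    return tuple(s)

def goal_test(state):
    return state == GOAL         # step cost: 1 per move
```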

Example 3: 8-queens problem
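A sketch of the incremental formulation: queens are placed column by column, and only squares not attacked by already-placed queens are offered as actions. Because columns are filled in a fixed order, each state is reachable by only one path.

```python
# Incremental 8-queens sketch: a state is a tuple of row positions, one
# per already-filled column (leftmost column first).
def conflicts(placed, row):
    """Would a queen in the next column at `row` attack a placed queen?"""
    col = len(placed)
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in enumerate(placed))

def actions(placed):
    """Rows where a queen may legally go in the next column."""
    return [r for r in range(8) if not conflicts(placed, r)]

def result(placed, row):
    return placed + (row,)

def goal_test(placed):
    return len(placed) == 8      # all eight queens placed, none attacked
```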

2. Real-world problems:

Example 1: route-finding problem

Searching For Solutions

After formulating problems, the next step is to solve them using search algorithms. A solution is defined as a sequence of actions
that lead to the goal state.
1. Search Tree
The possible action sequences starting from the initial state create a search tree.
Root Node: The root node of the tree represents the initial state (e.g., In(Arad)).
Branches: Each branch represents an action, while nodes correspond to different states in the state space.
2. Expanding Nodes
The process begins by checking if the root node (initial state) is a goal state. This is essential for solving problems where
the goal state might be the same as the initial state.
If the root is not a goal state, we expand the current state by applying all applicable actions. This generates new states.
Example: From In(Arad), after applying actions, we can generate child nodes:
In(Sibiu)
In(Timisoara)
In(Zerind)
3. Exploring Options
After expanding a node, we need to choose which child node to explore further.
This is the essence of search—choosing one option now while keeping others aside for future consideration.
Example: If we choose Sibiu, we check if it is a goal state (it is not) and expand it further to get:
In(Arad)
In(Fagaras)
In(Oradea)
In(RimnicuVilcea)
4. Leaf Nodes and Frontier
Leaf Node: A node of the search tree with no children, i.e., one that has not yet been expanded. Each of the nodes generated in the above example is a leaf node.
Frontier: The set of all leaf nodes available for expansion at any given moment. This is also known as the open list.
5. Continuing the Search
The expansion process continues until a solution is found or there are no more states left to explore.
The general tree-search algorithm follows a structure based on how states are selected for expansion, known as the
search strategy.
6. Repeated States and Loopy Paths
A repeated state occurs when the search tree includes paths that revisit previous states (e.g., moving from Arad to Sibiu
and back to Arad).
Such paths create loopy paths, leading to an infinite search tree while the actual state space is finite (in this case, 20 states).
Redundant Path: Refers to different paths that lead to the same state, where one path is better than the other. For
instance:
Arad → Sibiu (140 km)
Arad → Zerind → Oradea → Sibiu (297 km) (the latter is redundant).
7. Eliminating Redundant Paths
Redundant paths do not need to be stored, as reaching a goal state through one path suffices for consideration.
In specific problems (e.g., the 8-queens problem), the formulation can be adjusted to eliminate redundant paths, ensuring
that each state is reachable by only one path.
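Points 4–7 above can be sketched as a graph search that keeps a set of reached states, so loopy and redundant paths are never expanded twice. This is a minimal sketch with a FIFO frontier; `actions`, `result`, and `goal_test` are problem callbacks as formulated earlier.

```python
from collections import deque

def graph_search(initial, actions, result, goal_test):
    """Expand states breadth-first, remembering every reached state so
    repeated states (and hence loopy/redundant paths) are skipped."""
    frontier = deque([(initial, [])])      # (state, action sequence so far)
    reached = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for a in actions(state):           # expand the node
            s2 = result(state, a)
            if s2 not in reached:          # skip repeated states
                reached.add(s2)
                frontier.append((s2, path + [a]))
    return None                            # frontier empty: no solution
```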

Infrastructure for search algorithms:
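The usual infrastructure is a node structure holding a state, a parent pointer, the generating action, and the path cost g(n). A minimal sketch follows; the helper names are illustrative, not a fixed API.

```python
class Node:
    """A search-tree node: state, parent pointer, generating action,
    and accumulated path cost g(n)."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

def child_node(result_fn, step_cost_fn, parent, action):
    """Build the child node reached by applying `action` in `parent`."""
    s2 = result_fn(parent.state, action)
    return Node(s2, parent, action,
                parent.path_cost + step_cost_fn(parent.state, action, s2))

def solution(node):
    """Follow parent pointers back to the root to recover the actions."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```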

Measuring problem-solving performance

Search algorithms are typically evaluated along four criteria: completeness (does the algorithm find a solution whenever one exists?), optimality (does it find the least-cost solution?), time complexity, and space complexity.

Uninformed Search Strategies: classical search, adversarial search, Constraint Satisfaction Problems

Uninformed search strategies, also known as blind search strategies, explore the search space without any domain-specific knowledge beyond the problem definition; they rely solely on the structure of the search space. Here is a breakdown of the main types.

Type 1: Breadth-First Search (BFS):

BFS explores the search space level by level, starting from the root node and expanding all nodes at the present depth
before moving on to the next level.
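A minimal BFS sketch over an adjacency function; `neighbors` is an assumed callback returning a state's successors. Because BFS expands level by level, the returned path has the fewest actions.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand all nodes at depth d before any node
    at depth d+1, using a FIFO frontier of partial paths."""
    frontier = deque([[start]])
    reached = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for n in neighbors(path[-1]):
            if n not in reached:           # avoid repeated states
                reached.add(n)
                frontier.append(path + [n])
    return None
```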

Type 2: Depth-First Search (DFS):

DFS explores the search space by following a single branch as deep as possible, expanding the most recently generated node first and backtracking only when a node has no successors left to try.
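A recursive DFS sketch with a visited set to guard against the loopy paths discussed earlier; `neighbors` is an assumed callback.

```python
def dfs(state, goal, neighbors, visited=None):
    """Depth-first search: go as deep as possible along one branch
    before backtracking. `visited` prevents revisiting states."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    for n in neighbors(state):
        if n not in visited:
            rest = dfs(n, goal, neighbors, visited)
            if rest is not None:
                return [state] + rest      # found: prepend current state
    return None                            # dead end: backtrack
```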

Type 3: Dijkstra’s algorithm or uniform-cost search

Uniform-cost search expands the frontier node with the lowest path cost g(n). With non-negative step costs, the first goal node selected for expansion is guaranteed to lie on an optimal path.
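A uniform-cost search sketch using a priority queue ordered by path cost g(n); the `edges` mapping of per-step costs is an assumed input format.

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Uniform-cost search: always expand the frontier node with the
    lowest path cost g(n). `edges[s]` maps successors of s to step costs."""
    frontier = [(0, start, [start])]       # (g, state, path)
    best = {start: 0}                      # cheapest known cost per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path                 # first goal popped is optimal
        for nxt, cost in edges.get(state, {}).items():
            g2 = g + cost
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None
```

On the Romania distances this finds the 418 km route Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest rather than the shorter-looking (in hops) but costlier Fagaras route.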

Type 4: Depth-limited search (DLS):

DLS is a variant of Depth-First Search (DFS) that incorporates a limit on the depth of the search to avoid infinite loops
and excessive resource usage. It's useful in scenarios where the search space might be infinite or extremely large.
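A DLS sketch that distinguishes a genuine failure (None, no solution at any depth) from hitting the depth limit ("cutoff", a deeper solution may still exist); `neighbors` is an assumed callback.

```python
def depth_limited_search(state, goal, neighbors, limit):
    """DFS with a depth cutoff: returns a path to the goal, "cutoff" if
    the limit was reached somewhere, or None if no solution exists."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                    # limit reached, not a failure
    cutoff = False
    for n in neighbors(state):
        outcome = depth_limited_search(n, goal, neighbors, limit - 1)
        if outcome == "cutoff":
            cutoff = True
        elif outcome is not None:
            return [state] + outcome
    return "cutoff" if cutoff else None
```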

Adversarial search: Adversarial search is an essential concept in artificial intelligence, especially relevant to game playing and
scenarios where multiple agents (players) compete against each other. It differs from standard search problems, as the
outcome relies not only on the agent's decisions but also on the decisions of its opponents.

Type 1: The Minimax algorithm is a foundational algorithm in adversarial search: a decision rule for minimizing the possible loss in a worst-case scenario, assuming the opponent also plays optimally.

Game Representation
Game State: A specific configuration of the game at any given point. For instance, in chess, the arrangement of all pieces
on the board is a game state.
Players: In a typical adversarial game, there are two types of players:
Maximizer: The player whose goal is to maximize their score or chances of winning.
Minimizer: The opponent whose goal is to minimize the maximizer's score.
Game Tree
The game can be visualized as a game tree, where:
Nodes represent different game states.
Edges represent the possible moves from one state to another.
The root node represents the current state of the game, while leaf nodes represent terminal states (win, loss, or
draw).
Terminal and Non-terminal States
Terminal States: These are the end points of the game where the outcome is decided. They have utility values assigned
based on the game's result:
Win for maximizer: +1
Win for minimizer: -1
Draw: 0
Non-terminal States: These are states where the game is still in progress. These states require evaluation to determine the
best possible move.
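A minimax sketch over an abstract game: `moves(state)` yields successor states and an empty result marks a terminal state scored by `utility(state)`. Both are assumed callbacks, not part of the notes.

```python
def minimax(state, is_max, moves, utility):
    """Return the minimax value of `state`. The maximizer picks the
    largest child value, the minimizer the smallest."""
    successors = moves(state)
    if not successors:
        return utility(state)              # terminal state: +1, -1, or 0
    values = [minimax(s, not is_max, moves, utility) for s in successors]
    return max(values) if is_max else min(values)
```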

Type 2: Alpha-Beta Pruning

Alpha-Beta Pruning is an optimization technique that enhances the efficiency of the Minimax algorithm.
Basic Concepts
Alpha: The best value (highest) that the maximizer currently can guarantee at that level or above.
Beta: The best value (lowest) that the minimizer currently can guarantee at that level or above.
Pruning Process
1. As the algorithm explores the game tree, it keeps track of the alpha and beta values.
2. If the algorithm determines that a node's value cannot influence the final decision (e.g., a minimizer node has a
value less than or equal to alpha), it prunes (ignores) that node and its descendants.
3. This results in significant reductions in the number of nodes evaluated, allowing the search to proceed deeper in the
tree.
Alpha-Beta Algorithm Steps
Begin at the root node.
Use two parameters (alpha and beta) to keep track of the best values.
Traverse the tree recursively, applying the pruning condition to skip branches that do not need to be evaluated.
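The same abstract game interface with alpha-beta pruning added; the `break` statements are the cut-offs described in the steps above.

```python
def alphabeta(state, is_max, moves, utility,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: stop exploring a node's remaining
    children once its value can no longer affect the decision at the root."""
    successors = moves(state)
    if not successors:
        return utility(state)              # terminal state
    if is_max:
        value = float("-inf")
        for s in successors:
            value = max(value, alphabeta(s, False, moves, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cut-off: prune the rest
        return value
    value = float("inf")
    for s in successors:
        value = min(value, alphabeta(s, True, moves, utility, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:
            break                          # alpha cut-off
    return value
```

It returns the same value as plain minimax while evaluating fewer nodes.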
Constraint Satisfaction Problems (CSPs) are a class of problems where the goal is to find values for variables that satisfy a set
of constraints. They are widely used in fields such as artificial intelligence, operations research, and scheduling.

A CSP consists of a set of variables, each with a domain of possible values, and a set of constraints that specify allowable
combinations of values for these variables. The aim is to assign values to all variables in such a way that all constraints are
satisfied.

Example problem 1: Map colouring
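A backtracking-search sketch for map colouring, where the only constraint is that adjacent regions receive different colours. Variable and value ordering are deliberately naive, and the region names in any test are illustrative.

```python
def backtrack(assignment, variables, domains, neighbors):
    """Backtracking search for a map-colouring CSP: assign one variable
    at a time, undoing the choice when it leads to a dead end."""
    if len(assignment) == len(variables):
        return assignment                  # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint check: no assigned neighbour has the same colour.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]            # undo and try the next value
    return None
```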

Example problem 2: Sudoku

