AI 3rd Module.pptx

The document outlines the syllabus for an Artificial Intelligence course, focusing on informed search strategies, logical agents, and heuristic functions. It discusses various search algorithms, including Greedy Best First Search, A* search, and IDA*, detailing their mechanisms, optimality conditions, and applications. Additionally, it covers knowledge-based agents, their knowledge representation, and inference processes.

Uploaded by Rishee

Artificial Intelligence-Module 3

Prof. Salma Itagi


Asst. Professor
Dept. of CSE,SVIT
SYLLABUS
• Problem‐solving: Informed Search Strategies, Heuristic functions
• Logical Agents: Knowledge–based agents, The Wumpus world, Logic, Propositional
logic,
• Reasoning patterns in Propositional Logic
• Chapter 3 - 3.5, 3.6
• Chapter 7 - 7.1, 7.2, 7.3, 7.4

Text book: Stuart J. Russell and Peter Norvig, Artificial Intelligence, 3rd Edition,
Pearson, 2015
Informed (Heuristic) Search Strategies
• This section shows how an informed search strategy—one that uses
problem-specific knowledge beyond the definition of the problem
itself—can find solutions more efficiently than an uninformed strategy.
• The general approach we consider is called best-first search.
• Best-first search is an instance of the general TREE-SEARCH or
GRAPH-SEARCH algorithm in which a node is selected for expansion
based on an evaluation function, f(n).
• The evaluation function is construed as a cost estimate, so the node with the
lowest evaluation is expanded first.
• The implementation of best-first graph search is identical to that for
uniform-cost search except for the use of f instead of g to order the priority
queue.
• The choice of f determines the search strategy.
• Most best-first algorithms include as a component of f a heuristic function, denoted h(n):
• h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
• Heuristic functions are the most common form in which additional knowledge of the problem is imparted to the search algorithm.
Greedy Best First Search

• Greedy best-first search tries to expand the node that is closest to the goal, on the ground that this is likely to lead to a solution quickly.
• Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
• The greedy search algorithm ignores the cost of the path that has already been traversed to reach node n.
• Hence the solution is not always optimal.


The Algorithm

1. Initialize a tree with the root node being the start node in the open list.
2. If the open list is empty, return failure; otherwise, add the current node to the closed list.
3. Remove the node with the lowest h(n) value from the open list for exploration.
4. If a child node is the target, return success. Otherwise, if the node has not been in either the open or closed list, add it to the open list for exploration.
• C has the lowest cost of 6.
• U has the lowest cost compared to M and R, so the search will continue by exploring U.
• Finally, S has a heuristic value of 0 since that is the target node.

The total cost for the path (P -> C -> U -> S) evaluates to 11. The potential problem with a greedy best-first search is revealed by the path (P -> R -> E -> S) having a cost of 10, which is lower than (P -> C -> U -> S). Greedy best-first search ignored this path because it does not consider the edge weights.
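The walkthrough above can be sketched in Python. The graph and heuristic values below are a hypothetical reconstruction (the slide's figure is not reproduced here), chosen so that greedy search returns the cost-11 path P -> C -> U -> S even though a cost-10 path exists:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: order the frontier by h(n) alone (f = h).
    `neighbors(n)` yields (successor, step_cost); step costs are ignored
    when ranking nodes, which is exactly why the result may be suboptimal."""
    frontier = [(h(start), start, [start])]
    closed = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ, _cost in neighbors(node):
            if succ not in closed:
                heapq.heappush(frontier, (h(succ), succ, path + [succ]))
    return None

# Hypothetical graph loosely matching the slide's P..S example.
graph = {"P": [("C", 4), ("R", 4)], "C": [("U", 3)], "R": [("E", 3)],
         "U": [("S", 4)], "E": [("S", 3)], "S": []}
hvals = {"P": 10, "C": 6, "R": 6, "U": 4, "E": 3, "S": 0}
print(greedy_best_first("P", "S", lambda n: graph[n], hvals.get))
# ['P', 'C', 'U', 'S']  -- path cost 11, though P->R->E->S costs 10
```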
A* search: Minimizing the total estimated solution cost
• The most widely known form of best-first search is called A∗.
• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get
from the node to the goal:
f(n) = g(n) + h(n)
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of
the cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n .
• Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the
node with the lowest value of g(n) + h(n).
• It turns out that this strategy is more than just reasonable: provided that the heuristic
function h(n) satisfies certain conditions,
• A∗ search is both complete and optimal.
• The algorithm is identical to UNIFORM-COST-SEARCH except that A∗ uses g + h
instead of g.
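As a sketch of the same idea, here is a minimal A* over the same kind of small hypothetical graph (names and values are illustrative, not from the slides); with an admissible heuristic it finds the cost-10 path that greedy search misses:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* graph search: order the frontier by f(n) = g(n) + h(n).
    With an admissible, consistent h this returns a cheapest path."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

graph = {"P": [("C", 4), ("R", 4)], "C": [("U", 3)], "R": [("E", 3)],
         "U": [("S", 4)], "E": [("S", 3)], "S": []}
hvals = {"P": 10, "C": 6, "R": 6, "U": 4, "E": 3, "S": 0}
print(a_star("P", "S", lambda n: graph[n], hvals.get))
# (['P', 'R', 'E', 'S'], 10)
```

Note that ordering by g + h instead of h alone is the only change from the greedy strategy, yet it is enough to recover the optimal route.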



Conditions for optimality: Admissibility and consistency
• The first condition we require for optimality is that h(n) be an
admissible heuristic.
• An admissible heuristic is one that never overestimates the cost to reach
the goal.
• Because g(n) is the actual cost to reach n along the current path, and
f(n)=g(n) + h(n), we have as an immediate consequence that f(n) never
overestimates the true cost of a solution along the current path through n.
• Admissible heuristics are by nature optimistic because they think the
cost of solving the problem is less than it actually is.



• A second, slightly stronger condition called consistency (or sometimes monotonicity) is required only for applications of A∗ to graph search.

• A heuristic h(n) is consistent if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:

h(n) ≤ c(n, a, n′) + h(n′) .

• This is a form of the general triangle inequality, which stipulates that each side of a triangle cannot be longer than the sum of the other two sides.

• Here, the triangle is formed by n, n′, and the goal Gn closest to n.

• For an admissible heuristic, the inequality makes perfect sense: if there were a route from n to Gn via n′ that was cheaper than h(n), that would violate the property that h(n) is a lower bound on the cost to reach Gn.
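A quick way to sanity-check consistency on a small graph is to test the inequality on every edge. The graph and h-values below are illustrative placeholders:

```python
def is_consistent(h, edges):
    """Check h(n) <= c(n, a, n') + h(n') for every edge.
    `edges` is a list of (n, n_prime, step_cost) triples."""
    return all(h[n] <= c + h[np] for n, np, c in edges)

# A small hypothetical graph with heuristic values.
edges = [("P", "C", 4), ("P", "R", 4), ("C", "U", 3),
         ("R", "E", 3), ("U", "S", 4), ("E", "S", 3)]
hvals = {"P": 10, "C": 6, "R": 6, "U": 4, "E": 3, "S": 0}
print(is_consistent(hvals, edges))              # True
# Inflating h(R) to 9 breaks the triangle inequality on edge R -> E:
print(is_consistent({**hvals, "R": 9}, edges))  # False
```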


Optimality of A*
A∗ has the following properties:
• the tree-search version of A∗ is optimal if h(n) is admissible,
• while the graph-search version is optimal if h(n) is consistent.

The first step is to establish the following:
• if h(n) is consistent, then the values of f(n) along any path are nondecreasing.
• The proof follows directly from the definition of consistency.
• Suppose n′ is a successor of n; then g(n′) = g(n) + c(n, a, n′) for some action a, and we have
f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n) .

• The next step is to prove that whenever A∗ selects a node n for expansion, the optimal path to that node has been found.
• Were this not the case, there would have to be another frontier node n′ on the optimal path from the start node to n, by the graph separation property.

Definitions
1. Heuristic h(n):
An estimate of the cost to reach the goal from node n.

2. Cost function c(n, a, n′):
The actual cost to move from node n to successor n′ via action a.

3. Evaluation function f(n):
The total estimated cost of the cheapest solution through node n, defined as:
f(n) = g(n) + h(n)
where g(n) is the cost to reach node n from the start node.

Conclusion
Thus, we have shown that:
f(n′)≥f(n)
IDA* Algorithm
❖ Iterative deepening A* (IDA*) is a powerful graph traversal and pathfinding algorithm designed to find the shortest path in a weighted graph.
❖ This method combines features of iterative deepening depth-first search (IDDFS) and the A* search algorithm by using a heuristic function to estimate the remaining cost to the goal node.
❖ IDA* is often referred to as a memory-efficient version of A*, as it requires significantly less memory while still ensuring optimal pathfinding.


Step-by-Step Process of the IDA* Algorithm

1. Initialization: Set the root node as the current node and compute its f-score.
2. Set Threshold: Initialize a threshold based on the f-score of the starting
node.
3. Node Expansion: Expand the current node’s children and calculate their
f-scores.
4. Pruning: If the f-score exceeds the threshold, prune the node and store it for
future exploration.
5. Path Return: Once the goal node is found, return the path from the start
node to the goal.
6. Update Threshold: If the goal is not found, increase the threshold based on
the minimum pruned value and repeat the process.(from step 2)
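The six steps above can be sketched as follows; the graph and heuristic values are illustrative placeholders, not the slide's worked example:

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: repeated depth-first searches bounded by an f = g + h threshold.
    Each pass prunes nodes whose f exceeds the threshold; the next threshold
    is the smallest f value that was pruned."""
    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return f, None            # prune; report the exceeded f value
        if node == goal:
            return f, path
        minimum = float("inf")
        for succ, cost in neighbors(node):
            if succ not in path:      # avoid cycles on the current path
                t, found = dfs(succ, g + cost, threshold, path + [succ])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)              # step 2: initial threshold
    while True:
        t, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found              # step 5: goal found, return the path
        if t == float("inf"):
            return None               # no path exists
        threshold = t                 # step 6: raise the threshold and repeat

graph = {"P": [("C", 4), ("R", 4)], "C": [("U", 3)], "R": [("E", 3)],
         "U": [("S", 4)], "E": [("S", 3)], "S": []}
hvals = {"P": 10, "C": 6, "R": 6, "U": 4, "E": 3, "S": 0}
print(ida_star("P", "S", lambda n: graph[n], hvals.get))
# ['P', 'R', 'E', 'S']
```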
Graph Problem

SMA* Algorithm

Iteration 1:
Iteration 2:
Iteration 3:
Iteration 4:
Iteration 5:
Iteration 6:

The memory is full, which means we need to remove the “C” node, because it has the highest f value; if we eventually reach the goal from this node, the total cost in the best case will be equal to 6, which is worse than what we already have.
At this stage, the algorithm terminates, and we have found the shortest path from the “S” node to the “G” node.
Comparison of IDA*, RBFS and SMA*
Comparison of IDA* and RBFS
Learning to search Better
• Could an agent learn how to search better?
• The answer is yes, and the method rests on an important concept called the metalevel
state space.

• Each state in a metalevel state space captures the internal (computational) state of a
program that is searching in an object-level state space.

• The goal of learning is to minimize the total cost of problem solving, trading off
computational expense and path cost.
CHAPTER 3, 3.6:

HEURISTIC FUNCTIONS
The effect of heuristic accuracy on performance
1. Depends on the effective branching factor.

2. Generating admissible heuristics from relaxed problems
❑ A problem with fewer restrictions on the actions is called a relaxed problem. The state-space graph of the relaxed problem is a supergraph of the original state space because the removal of restrictions creates added edges in the graph.
❑ Hence, the cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
3. Generating admissible heuristics from subproblems: Pattern databases
• A pattern database is a precomputed table of solutions to specific subproblems of a larger problem.
• These subproblems are chosen carefully to capture relevant aspects of the original problem.
• The heuristic derived from a pattern database is admissible because it reflects the exact cost of reaching a goal state from the subproblem configurations.
• Steps to Create a Pattern Database
1. Identify Patterns:
   • Choose a subset of variables or elements from the original problem to create patterns. For example, in the 15-puzzle, you might focus on specific tiles and ignore the others.
   • The patterns should be representative enough to provide useful information about the overall problem.
2. Define Subproblems:
   • For each pattern, define a corresponding subproblem that involves only the chosen elements. The goal is still to reach a goal state, but only considering the configurations of the selected pieces.
3. Compute Exact Costs:
   • Solve each subproblem using an appropriate algorithm (like A* or Dijkstra's) to find the optimal cost from each possible configuration of the subproblem to the goal state.
   • Store these costs in a database where the keys are the states of the subproblems.
4. Create the Heuristic:
   • For any given state of the original problem, extract the relevant configuration of the selected pattern(s).
   • Look up the cost from the pattern database to use as the heuristic value for that state in the original problem.
Disjoint pattern databases work for sliding-tile puzzles because the problem can be divided up in
such a way that each move affects only one subproblem—because only one tile is moved at a time.
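Steps 3 and 4 can be sketched with a backward breadth-first search from the goal pattern. The "pattern" here is deliberately tiny — one tracked tile on a 1×5 strip rather than a real 15-puzzle pattern:

```python
from collections import deque

def build_pattern_db(goal_pattern, moves):
    """Step 3 sketch: backward breadth-first search from the goal pattern,
    storing the exact optimal cost for every reachable pattern."""
    db = {goal_pattern: 0}
    frontier = deque([goal_pattern])
    while frontier:
        p = frontier.popleft()
        for q in moves(p):            # moves are reversible in this toy
            if q not in db:
                db[q] = db[p] + 1
                frontier.append(q)
    return db

# Toy subproblem: a pattern is the position of one tracked tile on a
# 1x5 strip; a move shifts it one cell left or right.
def moves(pos):
    return [p for p in (pos - 1, pos + 1) if 0 <= p <= 4]

db = build_pattern_db(0, moves)       # goal: tile at cell 0
print(db)                             # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}

def h(state_pos):
    """Step 4: look up the pattern cost as the heuristic value."""
    return db[state_pos]
```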
4. Learning heuristics from experience
❑ “Experience” here means solving lots of 8-puzzles, for instance. Each optimal solution to an 8-puzzle problem provides examples from which h(n) can be learned.
❑ Inductive learning methods work best when supplied with features of a state that are relevant to predicting the state’s value, rather than with just the raw state description.

• Relevant Features: Instead of using raw state descriptions (like the full configuration of the puzzle), it’s more effective to extract features that are predictive of the state’s distance from the goal.

For the 8-puzzle, you might consider:
❑ Feature x1(n): the number of misplaced tiles.
❑ Feature x2(n): the number of pairs of adjacent tiles that are not adjacent in the goal state.

• A common approach to combining features for predicting h(n) is a linear combination:
h(n) = c1x1(n) + c2x2(n). Here, c1 and c2 are constants that need to be determined.
• Learning heuristics from experience is a promising approach to enhance informed search strategies. By leveraging feature extraction, statistical analysis, and machine learning techniques, agents can develop effective heuristics that significantly improve search efficiency.
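A minimal sketch of the two features and their linear combination. The coefficients c1 and c2 below are hypothetical placeholders; in practice they would be fit from solved examples, and x2 is simplified to horizontally adjacent pairs only:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 = blank, row-major 3x3 layout

def x1(state):
    """Feature x1(n): number of misplaced (non-blank) tiles."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def x2(state):
    """Feature x2(n), simplified: horizontally adjacent tile pairs that
    differ from the corresponding pair in the goal state."""
    count = 0
    for row in range(3):
        for col in range(2):
            a, b = state[3 * row + col], state[3 * row + col + 1]
            ga, gb = GOAL[3 * row + col], GOAL[3 * row + col + 1]
            if a != 0 and b != 0 and (a, b) != (ga, gb):
                count += 1
    return count

def h(state, c1=1.0, c2=0.5):   # hypothetical coefficients
    """h(n) = c1*x1(n) + c2*x2(n)."""
    return c1 * x1(state) + c2 * x2(state)

print(h(GOAL))                           # 0.0 at the goal
print(h((2, 1, 3, 4, 5, 6, 7, 8, 0)))   # 3.0: two misplaced, two bad pairs
```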
CHAPTER 7: 7.1 to 7.4

LOGICAL AGENTS:
Knowledge-based agents, The Wumpus world, Logic, Propositional logic, Reasoning patterns in Propositional Logic
KNOWLEDGE-BASED AGENTS
• The central component of a knowledge-based agent is its knowledge base, or KB.
• A knowledge base is a set of sentences.
• Each sentence is expressed in a language called a knowledge representation
language and represents some assertion about the world.
• Sometimes we dignify a sentence with the name axiom, when the sentence is
taken as given without being derived from other sentences.
• There must be a way to add new sentences to the knowledge base and a way to
query what is known.(TELL and ASK). Both operations may involve
inference—that is, deriving new sentences from old.
Each time the agent program is called, it does three things.

1. First, it TELLs the knowledge base what it perceives.

2. Second, it ASKs the knowledge base what action it should perform. In the process

of answering this query, extensive reasoning may be done about the current state

of the world, about the outcomes of possible action sequences, and so on.

3. Third, the agent program TELLs the knowledge base which action was chosen,

and the agent executes the action.


• MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived the given percept at the given time.
• MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at the current time.
• Finally, MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action was executed.

The details of the inference mechanisms are hidden inside TELL and ASK.
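The three-step loop can be sketched as below. The KB class is a placeholder that merely stores sentences and returns a canned answer, since real inference is hidden inside TELL and ASK:

```python
# Sketch of the generic knowledge-based agent loop.
class KB:
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return "Forward"            # placeholder: a real KB would infer this

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    return f"Action({action}, {t})"

kb, t = KB(), 0

def kb_agent(percept):
    global t
    kb.tell(make_percept_sentence(percept, t))   # 1. TELL what it perceives
    action = kb.ask(make_action_query(t))        # 2. ASK which action to take
    kb.tell(make_action_sentence(action, t))     # 3. TELL which action was chosen
    t += 1
    return action

print(kb_agent(["Stench", "None", "None", "None", "None"]))  # Forward
```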
• A knowledge-based agent can be built simply by TELLing it what it needs to know.

• Starting with an empty knowledge base, the agent designer can TELL sentences one
by one until the agent knows how to operate in its environment. This is called the
declarative approach to system building.

• In contrast, the procedural approach encodes desired behaviors directly as


program code.

• We now understand that a successful agent often combines both declarative and
procedural elements in its design, and that declarative knowledge can often be
compiled into more efficient procedural code.

• A learning agent can be fully autonomous.


THE WUMPUS WORLD
WUMPUS WORLD PROBLEM
• The Wumpus world is a cave which has 4×4 rooms connected with passageways.
• So there are a total of 16 rooms which are connected with each other.
• We have a knowledge-based agent who will move forward in this world.
• The cave has a room with a beast called the Wumpus, who eats anyone who enters the room.
• The Wumpus can be shot by the agent, but the agent has a single arrow.
• In the Wumpus world, there are some Pit rooms which are bottomless, and if the agent falls into a Pit, then he will be stuck there forever.
• The exciting thing about this cave is that in one room there is a possibility of finding a heap of gold.
• So the agent's goal is to find the gold and climb out of the cave without falling into a Pit or being eaten by the Wumpus.
• The agent will get a reward if he comes out with the gold, and he will get a penalty if eaten by the Wumpus or if he falls into a pit.
PEAS for the WUMPUS WORLD PROBLEM
Performance measure:
• +1000 reward points if the agent comes out of the cave with the gold.
• -1000 points penalty for being eaten by the Wumpus or falling into the pit.
• -1 for each action, and -10 for using an arrow.
• The game ends if either the agent dies or comes out of the cave.

Environment:
• A 4×4 grid of rooms.
• The agent is initially in square [1, 1], facing toward the right.
• Locations of the Wumpus and the gold are chosen randomly, except for the first square [1,1].
• Each square of the cave can be a pit with probability 0.2, except the first square.
PEAS for the WUMPUS WORLD PROBLEM (cont.)
Actuators:
• Turn left, Turn right, Move forward, Grab, Release, Shoot.

Sensors:
• The agent will perceive the stench if he is in the room adjacent to the Wumpus. (Not diagonally).
• The agent will perceive breeze if he is in the room directly adjacent to the Pit.
• The agent will perceive the glitter in the room where the gold is present.
• The agent will perceive the bump if he walks into a wall.
• When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave.
• These percepts can be represented as a five-element list, in which we will have different indicators for each sensor.
• For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and no scream, it can be represented as
[Stench, Breeze, None, None, None]
WUMPUS WORLD PROPERTIES
• Partially observable: The Wumpus world is partially observable
because the agent can only perceive the close environment such as an
adjacent room.
• Deterministic: It is deterministic, as the result and outcome of the
world are already known.
• Sequential: The order is important, so it is sequential.
• Static: It is static as Wumpus and Pits are not moving.
• Discrete: The environment is discrete.
• One agent: The environment is a single agent as we have one agent
only and Wumpus is not considered as an agent.
Agent's First step:
Initially, the agent is in the first room
or on the square [1,1], and we already
know that this room is safe for the
agent, so to represent on the below
diagram (a) that room is safe we will
add symbol OK.
Symbol A is used to represent agent,
symbol B for the breeze, G for Glitter
or gold, V for the visited room, P for
pits, W for Wumpus.
At Room [1,1] agent does not feel any
breeze or any Stench which means the
adjacent squares are also OK.
Agent's second Step:
Now the agent needs to move forward, so it will either move to [1,2] or [2,1].
Let's suppose the agent moves to the room [2,1]; at this room the agent perceives some breeze, which means a Pit is around this room.
The pit can be in [3,1] or [2,2], so we will add the symbol P? to ask: is this a Pit room?
Now the agent will stop and think and will not make any harmful move.
The agent will go back to the [1,1] room.
The rooms [1,1] and [2,1] have been visited by the agent, so we will use the symbol V to represent the visited squares.
Agent's third step:
At the third step, now agent will move to the
room [1,2] which is OK.
In the room [1,2] agent perceives a stench
which means there must be a Wumpus
nearby.
But Wumpus cannot be in the room [1,1] as
by rules of the game, and also not in [2,2]
(Agent had not detected any stench when he
was at [2,1]).
Therefore agent infers that Wumpus is in the
room [1,3], and in current state, there is no
breeze which means in [2,2] there is no Pit
and no Wumpus.
So it is safe, and we will mark it OK, and the
agent moves further in [2,2].
Agent's fourth step:
At room [2,2], no stench and no breeze are present, so let's suppose the agent decides to move to [2,3].
At room [2,3] the agent perceives glitter, so it should grab the gold and climb out of the cave.
KNOWLEDGE BASE for the WUMPUS WORLD
• Atomic proposition variables for the Wumpus world:
• Let Pi,j be true if there is a Pit in the room [i, j].
• Let Bi,j be true if the agent perceives a breeze in [i, j].
• Let Wi,j be true if there is a Wumpus in the square [i, j], dead or alive.
• Let Si,j be true if the agent perceives a stench in the square [i, j].
• Let Vi,j be true if the square [i, j] has been visited.
• Let Gi,j be true if there is gold (and glitter) in the square [i, j].
• Let OKi,j be true if the room is safe.
Prove that the Wumpus is in the room [1, 3]
Modus Ponens Rule: (A, A → B; therefore B)

Apply Modus Ponens to ¬S21 and R2, which is
¬S21 → (¬W21 ∧ ¬W22 ∧ ¬W31); this gives the output
¬W21 ∧ ¬W22 ∧ ¬W31

Apply Modus Ponens to S12 and R4, which is
S12 → (W13 ∨ W12 ∨ W22 ∨ W11); we get the output
W13 ∨ W12 ∨ W22 ∨ W11

Apply Unit Resolution on W13 ∨ W12 ∨ W22 ∨ W11 and ¬W11: this gives W13 ∨ W12 ∨ W22.
Apply Unit Resolution on W13 ∨ W12 ∨ W22 and ¬W22: this gives W13 ∨ W12.
Apply Unit Resolution on W13 ∨ W12 and ¬W12: we get W13 as the output; hence it is proved that the Wumpus is in the room [1, 3].
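The unit-resolution steps can be mimicked by representing a disjunctive clause as a set of literals; this is an illustrative sketch, not a general resolution prover:

```python
def unit_resolve(clause, unit):
    """Resolve a disjunctive clause (set of positive literals) with a
    negative unit clause '~L': the complementary literal L is removed."""
    lit = unit.lstrip("~")
    assert lit in clause, "unit clause must be complementary to some literal"
    return clause - {lit}

clause = {"W13", "W12", "W22", "W11"}     # from Modus Ponens on S12 and R4
for unit in ("~W11", "~W22", "~W12"):     # facts derived about safe squares
    clause = unit_resolve(clause, unit)
print(clause)                              # {'W13'}: the Wumpus is in [1,3]
```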
LOGIC
• Knowledge bases consist of sentences.
• These sentences are expressed according to the syntax of the representation
language, which specifies all the sentences that are well formed.
• The notion of syntax is clear enough in ordinary arithmetic:
• “x + y = 4” is a well-formed sentence, whereas “x4y+ =” is not.
• A logic must also define the semantics or meaning of sentences.
• The semantics defines the truth of each sentence with respect to each possible
world. For example, the semantics for arithmetic specifies that the sentence
• “x + y =4” is true in a world where x is 2 and y is 2,
• but false in a world where x is 1 and y is 1.
• When we need to be precise, we use the term model in place of “possible world.”
• If a sentence α is true in model m, we say that m satisfies α or sometimes m is a model of α.
• We use the notation M(α) to mean the set of all models of α.
• This involves the relation of logical entailment between sentences—the idea that a sentence follows
logically from another sentence.
• In mathematical notation, we write
α |= β to mean that the sentence α entails the sentence β.
• The formal definition of entailment is this: α |= β if and only if, in every model in which α is true, β is
also true.
• Using the notation we can write
α |= β if and only if M(α) ⊆ M(β)
• (Note the direction of the ⊆ here: if α |= β, then α is a stronger assertion than β: it rules out more
possible worlds.)
• The relation of entailment is familiar from arithmetic; we are happy with the idea that the sentence
• x = 0 entails the sentence xy = 0. Obviously, in any model where x is zero, it is the case that xy is
zero (regardless of the value of y).
α1 = “There is no pit in [1,2].”
α2 = “There is no pit in [2,2].”
• This distinction is embodied in some formal notation: if an inference algorithm i can derive α from KB, we write
KB ⊢i α ,
which is pronounced “α is derived from KB by i” or “i derives α from KB.”

PROPOSITIONAL LOGIC
SYNTAX:
• The syntax of propositional logic defines the allowable sentences.
• The atomic sentences consist of a single proposition symbol.
• Each such symbol stands for a proposition that can be true or false.
• We use symbols that start with an uppercase letter and may contain other letters or subscripts,
• for example: P, Q, R, W1,3 and North.
• The names are arbitrary but are often chosen to have some mnemonic value—we use W1,3 to stand
for the proposition that the wumpus is in [1,3].
• True is the always-true proposition and False is the always-false proposition.
• Complex sentences are constructed from simpler sentences, using parentheses and logical connectives. There are
five connectives in common use:
1. ¬ (not). A sentence such as ¬W1,3 is called the negation of W1,3.
• A literal is either an atomic sentence (a positive literal) or a negated atomic sentence (a negative literal).

2. CONJUNCTION ∧ (and). A sentence whose main connective is ∧, such as W1,3 ∧ P3,1, is called a conjunction; its parts are the conjuncts. (The ∧ looks like an “A” for “And.”)
3. DISJUNCTION ∨ (or). A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is a disjunction of the disjuncts (W1,3 ∧ P3,1) and W2,2.
4. IMPLICATION ⇒ (implies). A sentence such as (W1,3 ∧ P3,1) ⇒ ¬W2,2 is called an implication (or conditional).
• Its premise or antecedent is (W1,3 ∧ P3,1), and its conclusion or consequent is ¬W2,2.
• Implications are also known as rules or if–then statements.
• The implication symbol is sometimes written in other books as ⊃ or →.
5. BICONDITIONAL ⇔ (if and only if). The sentence W1,3 ⇔ ¬W2,2 is a biconditional. Some other books write this as ≡.
Semantics
• The semantics defines the rules for determining the truth of a sentence with respect to a particular
model.
• In propositional logic, a model simply fixes the truth value—true or false—for every proposition
symbol.
• For example, if the sentences in the knowledge base make use of the proposition symbols P1,2, P2,2, and
P3,1, then one possible model is
• m1 = {P1,2 =false, P2,2 =false, P3,1 =true}
• The semantics for propositional logic must specify how to compute the truth value of any sentence,
given a model.
• All sentences are constructed from atomic sentences and the five connectives; therefore, we need to
specify how to compute the truth of atomic sentences and how to compute the truth of sentences formed
with each of the five connectives.
• Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model. For example,
in the model m1 given earlier, P1,2 is false.
• For complex sentences, we have five rules, which hold for any subsentences P and Q in any model m (here
“iff” means “if and only if”):
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
• The rules can also be expressed with truth tables that specify the truth value of a complex sentence for each
possible assignment of truth values to its components.
• Truth tables for the five connectives
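The five rules can be turned into a small recursive evaluator. The tuple encoding of sentences below is an assumption made for illustration, not notation from the slides:

```python
def pl_true(s, m):
    """Evaluate a propositional sentence s in model m.
    Atoms are strings; complex sentences are tuples (op, arg1[, arg2])."""
    if isinstance(s, str):
        return m[s]                   # truth value fixed by the model
    op, *args = s
    if op == "not":
        return not pl_true(args[0], m)
    if op == "and":
        return pl_true(args[0], m) and pl_true(args[1], m)
    if op == "or":
        return pl_true(args[0], m) or pl_true(args[1], m)
    if op == "=>":                    # true unless premise true, conclusion false
        return (not pl_true(args[0], m)) or pl_true(args[1], m)
    if op == "<=>":                   # true iff both sides agree
        return pl_true(args[0], m) == pl_true(args[1], m)
    raise ValueError(f"unknown connective: {op}")

m1 = {"P12": False, "P22": False, "P31": True}
# ¬P1,2 ∧ (P2,2 ∨ P3,1) is true in m1:
print(pl_true(("and", ("not", "P12"), ("or", "P22", "P31")), m1))  # True
# P3,1 ⇒ P2,2 is false in m1 (true premise, false conclusion):
print(pl_true(("=>", "P31", "P22"), m1))                            # False
```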
A simple Knowledge Base

• The sentences we write will suffice to derive ¬P1,2 (there is no pit in [1,2]). We label each sentence Ri so that we can refer to it:

There is no pit in [1,1]: R1 : ¬P1,1 .

• A square is breezy if and only if there is a pit in a neighboring square.
• This has to be stated for each square; for now, we include just the relevant squares:
R2 : B1,1 ⇔ (P1,2 ∨ P2,1) .
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) .
• The preceding sentences are true in all wumpus worlds.
• Now we include the breeze percepts for the first two squares visited in the specific world the agent is in:
R4 : ¬B1,1 .
R5 : B2,1 .
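Model checking makes the entailment definition concrete: enumerate every model of the relevant symbols and test that all models satisfying R1–R5 also satisfy the query. This sketch (with R1–R5 encoded directly as Python expressions) confirms that the KB entails α1 (no pit in [1,2]) but not α2 (no pit in [2,2]):

```python
from itertools import product

symbols = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def wumpus_kb(m):
    """Conjunction of R1..R5 evaluated in model m."""
    r1 = not m["P11"]
    r2 = m["B11"] == (m["P12"] or m["P21"])            # B1,1 <=> (P1,2 v P2,1)
    r3 = m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # B2,1 <=> (...)
    r4 = not m["B11"]
    r5 = m["B21"]
    return r1 and r2 and r3 and r4 and r5

def entails(kb, query):
    """KB |= query iff query holds in every model where the KB holds."""
    models = (dict(zip(symbols, vals))
              for vals in product([True, False], repeat=len(symbols)))
    return all(query(m) for m in models if kb(m))

print(entails(wumpus_kb, lambda m: not m["P12"]))  # True : KB |= alpha1
print(entails(wumpus_kb, lambda m: not m["P22"]))  # False: KB does not fix P2,2
```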
