Ttnt 05 Logical Agent

The document outlines key concepts in artificial intelligence, focusing on intelligent agents, problem-solving through searching, and logical reasoning. It introduces the Wumpus World as a case study for knowledge-based agents, detailing their operations, perceptions, and decision-making processes. Additionally, it covers propositional logic, entailment, and inference rules relevant to AI reasoning.

Artificial Intelligence

Hai Thi Tuyet Nguyen

Outline
CHAPTER 1: INTRODUCTION (CHAPTER 1)
CHAPTER 2: INTELLIGENT AGENTS (CHAPTER 2)
CHAPTER 3: SOLVING PROBLEMS BY SEARCHING (CHAPTER 3)
CHAPTER 4: INFORMED SEARCH (CHAPTER 3)
CHAPTER 5: LOGICAL AGENT (CHAPTER 7)
CHAPTER 6: FIRST-ORDER LOGIC (CHAPTER 8, 9)
CHAPTER 7: QUANTIFYING UNCERTAINTY(CHAPTER 13)
CHAPTER 8: PROBABILISTIC REASONING (CHAPTER 14)
CHAPTER 9: LEARNING FROM EXAMPLES (CHAPTER 18)
CHAPTER 5: LOGICAL AGENT
5.1 Knowledge-Based Agents
5.2 The Wumpus World
5.3 Logic
5.4 Propositional Logic
5.5 Propositional Theorem Proving
5.6 Inference Rules, Theorem Proving
5.1 Knowledge-Based Agents
● Knowledge base (KB) = a set of sentences in a formal language (i.e., knowledge
representation language)
● Declarative approach to build an agent:
○ TELL it what it needs to know
○ ASK itself what to do - answer should follow from the KB

5.1 A simple knowledge-based agent.
● The agent takes a percept as input and returns an action; it maintains a knowledge base.
● How it works:
○ TELLs the knowledge base what it perceives.
○ ASKs the knowledge base what action it should perform.
○ TELLs the knowledge base which action was chosen, and executes the action.
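The TELL–ASK cycle above can be sketched in Python. This is a hypothetical sketch: the KB class, its tell/ask methods, and choose_action are illustrative stand-ins, not code from the textbook.

```python
class KB:
    """A minimal knowledge base: a list of sentences plus a trivial ASK.
    Hypothetical sketch of the TELL/ASK interface; a real agent would run
    logical inference inside ask() instead of a membership test."""
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        return query in self.sentences   # placeholder for real inference

def kb_agent_step(kb, percept, t, choose_action):
    """One step of the generic knowledge-based agent."""
    kb.tell(("percept", percept, t))     # TELL the KB what it perceives
    action = choose_action(kb, t)        # ASK the KB what action to perform
    kb.tell(("action", action, t))       # TELL the KB which action was chosen
    return action

kb = KB()
action = kb_agent_step(kb, "Breeze", 0, lambda kb, t: "TurnLeft")
print(action)
```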

5.2 Wumpus World PEAS description
● Performance measure:
○ +1000: climbing out of the cave with the gold
○ –1000: falling into a pit or being eaten by the wumpus
○ –1: each action taken
○ –10: using up the arrow
○ The game ends: the agent dies or it climbs out of the cave
● Environment: A 4 × 4 grid of rooms.
○ Start location of the agent: the square labeled [1,1]
○ Locations of the gold and the wumpus: random
● Actuators: Move Forward, Turn Left, Turn Right, Grab, Climb, Shoot
● Sensors: the agent will perceive
○ Stench: in the square containing the monster (called wumpus) and in the directly adjacent squares
○ Breeze: in the squares directly adjacent to a pit
○ Glitter: in the square where the gold is
○ Bump: into a wall
○ Scream: anywhere in the cave when the wumpus is killed
5.2 Wumpus World PEAS description
● The first percept is [None, None, None, None, None]
(the five components are [Stench, Breeze, Glitter, Bump, Scream])
=> the neighboring squares [1,2] and [2,1] are OK (safe).

5.2 Wumpus World PEAS description
The agent decides to move forward to [2,1].

5.2 Wumpus World PEAS description
● The agent perceives a breeze (denoted by “B”) in [2,1]
[None,breeze,None,None,None]
=> there must be a pit in a neighboring square.
● The pit cannot be in [1,1] => so there must be a pit in [2,2] or [3,1] or both.
● The agent will turn around,
go back to [1,1],
and then proceed to [1,2].

5.2 Wumpus World PEAS description
● The agent perceives a stench in [1,2] ~ [stench,None,None,None,None]
=> there must be a wumpus nearby ([2,2] or [1,3])
● The lack of stench when the agent was in [2,1]
=> wumpus cannot be in [2,2]
=> wumpus is in [1,3]
● The lack of a breeze in [1,2]
=> there is no pit in [2,2]
=> [2,2]: safe, OK

5.2 Wumpus World PEAS description
● The agent draws a conclusion from the available information
● The conclusion is guaranteed to be correct if the available information is correct.

5.3 Logic
● Logics are formal languages for representing information
● Syntax defines the sentences in the language
E.g., “x + y = 4” is a well-formed sentence, whereas “x4y+ =” is not
● Semantics defines the “meaning” of sentences OR the truth of each sentence with respect to
each possible world (i.e., model).
E.g., the sentence “x + y = 4” is true in a world where x is 2 and y is 2,
but false in a world where x is 1 and y is 1

5.3 Entailment
● Entailment means that one thing follows from another:
● Knowledge base KB entails sentence α if and only if α is true in all possible worlds
(models) where KB is true
KB |= α
● E.g., KB containing “the Giants won” and “the Reds won” entails “Either the Giants won
or the Reds won”

5.3 Models
● Logicians typically think in terms of models, which are formally structured worlds
with respect to which truth can be evaluated
● We say m is a model of a sentence α if α is true in m
● M(α) is the set of all models of α
● Then KB |= α if and only if M(KB) ⊆ M(α)

E.g. KB = Giants won and Reds won


α = Giants won

5.3 Entailment in the wumpus world
Situation after detecting nothing in [1,1], moving right, breeze in [2,1]

Consider possible models for ?s assuming only pits

Wumpus models
8 possible models (one for each combination of pits in the three unvisited squares [1,3], [2,2], and [3,1])

Wumpus models
KB = wumpus-world rules + observations
α1 = “[1,2] is safe”, KB |= α1

Wumpus models
KB = wumpus-world rules + observations
α2 = “[2,2] is safe”, KB ⊭ α2 (in some models of KB there is a pit in [2,2], so KB does not entail α2)

5.3 Inference
● KB ⊢i α = sentence α can be derived from KB by algorithm i
● Consequences of KB are a haystack; α is a needle.
Entailment = needle in haystack; inference = finding it
● Soundness:
i is sound if whenever KB ⊢i α, it is also true that KB |= α
● Completeness:
i is complete if whenever KB |= α, it is also true that KB ⊢i α

5.4 Propositional logic: Syntax
● Propositional logic is the simplest logic - illustrates basic ideas
● The proposition symbols P1, P2, … are sentences
● Logical connectives
○ ¬ (not).
○ ∧ (and).
○ ∨ (or).
○ ⇒ (implies)
○ ⇔ (if and only if)
● Operator precedence (from highest to lowest): ¬, ∧, ∨, ⇒, ⇔

5.4 Propositional logic: Syntax
● If S is a sentence, ¬S is a sentence (negation)
● If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
● If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
● If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
● If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)

5.4 Propositional logic: Semantics
● Each model specifies true/false for each proposition symbol
E.g., m = {P1,2 = true, P2,2 = true, P3,1 = false}

● Rules for evaluating truth with respect to a model m:
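The evaluation rules can be written as a short recursive function over sentences represented as nested tuples (a representation chosen here for illustration, not taken from the slides):

```python
def pl_true(sentence, model):
    """Evaluate a propositional sentence in a model (dict: symbol -> bool)."""
    if isinstance(sentence, str):          # a proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return pl_true(args[0], model) and pl_true(args[1], model)
    if op == "or":
        return pl_true(args[0], model) or pl_true(args[1], model)
    if op == "implies":                    # α ⇒ β is ¬α ∨ β
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":                        # α ⇔ β: same truth value
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(op)

model = {"P12": True, "P22": True, "P31": False}
print(pl_true(("or", ("not", "P12"), "P22"), model))  # True
```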

5.4 Truth tables for connectives

P      Q      ¬P     P ∧ Q  P ∨ Q  P ⇒ Q  P ⇔ Q
false  false  true   false  false  true   true
false  true   true   false  true   true   false
true   false  false  false  true   false  false
true   true   false  true   true   true   true
5.4 A simple knowledge base - Wumpus world
sentences
Px,y is true if there is a pit in [x, y].

Bx,y is true if the agent perceives a breeze in [x, y].

There is no pit in [1,1]: R1 : ¬P1,1

A square is breezy if and only if there is a pit in a neighboring square.


R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
The breeze percepts on the first two squares of the agent
R4 : ¬B1,1
R5 : B2,1

5.4 A simple knowledge base - Wumpus world
sentences
● Goal: decide whether KB |= α for some sentence α
E.g., for α = ¬P1,2, prove KB |= ¬P1,2
● A simple inference procedure


A model-checking approach:
○ enumerate the models
○ check that α is true in every model in which KB is true

5.4 A simple knowledge base - Wumpus world
sentences
With 7 symbols, there are 2^7 = 128 possible models; in 3 of these, KB is true.
¬P1,2 is true in all 3 of those models, i.e., there is no pit in [1,2]; hence KB |= ¬P1,2.
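The model-checking argument can be reproduced by brute-force enumeration (a sketch; symbol names like B11 are shorthand for B1,1):

```python
from itertools import product

symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb(m):
    return (not m["P11"]                                        # R1
            and m["B11"] == (m["P12"] or m["P21"])              # R2
            and m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # R3
            and not m["B11"]                                    # R4
            and m["B21"])                                       # R5

all_models = [dict(zip(symbols, v))
              for v in product([False, True], repeat=len(symbols))]
kb_models = [m for m in all_models if kb(m)]

print(len(all_models))                        # 128
print(len(kb_models))                         # 3
print(all(not m["P12"] for m in kb_models))   # True, so KB |= ¬P1,2
```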

5.5 Propositional Theorem Proving
● Determine entailment by theorem proving:
applying rules of inference directly to the sentences in our knowledge base to
construct a proof of the desired sentence without consulting models

● Some additional concepts related to entailment:


○ Logical equivalence
○ Validity
○ Satisfiability

5.5 Logical equivalence
● Two sentences α and β are logically equivalent if they are true in the same set of models
● Two sentences α and β are equivalent if and only if each of them entails the other:
α ≡ β if and only if α |= β and β |= α
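Equivalence can be checked by comparing truth values over all models, e.g., a truth-table check of De Morgan's law (a minimal sketch):

```python
from itertools import product

def equivalent(s1, s2, symbols):
    """α ≡ β iff they have the same truth value in every model."""
    return all(s1(dict(zip(symbols, v))) == s2(dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

# ¬(A ∨ B) ≡ ¬A ∧ ¬B
print(equivalent(lambda m: not (m["A"] or m["B"]),
                 lambda m: (not m["A"]) and (not m["B"]),
                 ["A", "B"]))  # True
```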

5.5 Validity and satisfiability
● A sentence is valid if it is true in all models, e.g., True, A∨¬A, A ⇒ A
○ Valid sentences are also known as tautologies
○ Validity is connected to inference:
KB|=α if and only if (KB ⇒ α) is valid

● A sentence is satisfiable if it is true in some model, e.g., A ∨ B, C


○ A sentence is unsatisfiable if it is true in no model
E.g., A ∧ ¬A
○ Satisfiability is connected to inference:
KB |= α if and only if (KB ∧ ¬α) is unsatisfiable
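This connection gives a direct entailment test: enumerate models and look for one that satisfies KB ∧ ¬α (a minimal sketch):

```python
from itertools import product

def entails(kb, alpha, symbols):
    """KB |= alpha  iff  KB ∧ ¬alpha is unsatisfiable."""
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False   # found a model of KB ∧ ¬alpha: not entailed
    return True

# A ∧ (A ⇒ B) entails B, but A alone does not
kb = lambda m: m["A"] and ((not m["A"]) or m["B"])
print(entails(kb, lambda m: m["B"], ["A", "B"]))                 # True
print(entails(lambda m: m["A"], lambda m: m["B"], ["A", "B"]))   # False
```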

5.5 Inference rules and proofs
● Modus Ponens: from α ⇒ β and α, infer β
● And-Elimination: from α ∧ β, infer α

E.g. Wumpus world:
R1 : ¬P1,1
R2 : B1,1 ⇔ (P1,2 ∨ P2,1)
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4 : ¬B1,1
R5 : B2,1
Prove ¬P1,2:
1. Apply biconditional elimination to R2 to obtain
R6 : (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Apply And-Elimination to R6 to obtain
R7 : (P1,2 ∨ P2,1) ⇒ B1,1
3. Apply contraposition to R7 to obtain
R8 : ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
4. Apply Modus Ponens with R8 and R4 to obtain
R9 : ¬(P1,2 ∨ P2,1)
5. Apply De Morgan’s rule, giving the conclusion
R10 : ¬P1,2 ∧ ¬P2,1
5.5 Proof by resolution
● Conjunctive Normal Form (CNF): conjunction of clauses, clauses are disjunctions of literals
E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)

● Resolution inference rule (for CNF): complete for propositional logic
From (l1 ∨ … ∨ lk) and (m1 ∨ … ∨ mn), infer
(l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn),
where li and mj are complementary literals (i.e., one is the negation of the other).

5.5 Conversion to CNF
Convert R2 : B1,1 ⇔ (P1,2 ∨ P2,1) into CNF

1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α)


(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)

2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β


(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using de Morgan’s rules and double-negation:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply distributivity law (∨ over ∧):
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
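The conversion can be sanity-checked by confirming that the CNF result is logically equivalent to the original R2 over all eight models (a minimal sketch):

```python
from itertools import product

def r2(b11, p12, p21):
    # B1,1 ⇔ (P1,2 ∨ P2,1)
    return b11 == (p12 or p21)

def r2_cnf(b11, p12, p21):
    # (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
    return ((not b11 or p12 or p21)
            and (not p12 or b11)
            and (not p21 or b11))

print(all(r2(*v) == r2_cnf(*v)
          for v in product([True, False], repeat=3)))  # True
```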

5.5 Resolution algorithm
Proof by contradiction, i.e., show KB ∧ ¬α unsatisfiable
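The loop can be sketched as follows, with clauses as frozensets of literals and "~" marking negation (these representation choices are mine, not the slides'):

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in ci:
        if negate(lit) in cj:
            resolvents.append(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return resolvents

def pl_resolution(kb_clauses, neg_alpha_clauses):
    """True iff KB |= α, shown by deriving the empty clause from KB ∧ ¬α."""
    clauses = set(kb_clauses) | set(neg_alpha_clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolve(ci, cj):
                if not r:
                    return True        # empty clause: KB ∧ ¬α unsatisfiable
                new.add(r)
        if new <= clauses:
            return False               # no new clauses: not entailed
        clauses |= new

# KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1 in CNF; α = ¬P1,2, so ¬α = P1,2
kb = [frozenset({"~B11", "P12", "P21"}),
      frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}),
      frozenset({"~B11"})]
print(pl_resolution(kb, [frozenset({"P12"})]))  # True: KB |= ¬P1,2
```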

5.5 Proof by resolution
KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
α = ¬P1,2

5.5 Proof by resolution
E.g. Wumpus world

R1 : ¬P1,1.
R2 : B1,1 ⇔ (P1,2 ∨ P2,1).
R3 : B2,1 ⇔ (P1,1 ∨P2,2 ∨ P3,1).
R4 : ¬B1,1.
R5 : B2,1.

Prove ¬P1,2 by resolution

5.5 Proof by resolution (worked example)
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
α = ¬P1,2, so ¬α = P1,2

1. Convert KB ∧ ¬α to CNF:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1) ∧ ¬B1,1 ∧ P1,2

2. Resolve pairs:
(¬P1,2 ∨ B1,1) with ¬B1,1 gives ¬P1,2
(¬P2,1 ∨ B1,1) with ¬B1,1 gives ¬P2,1
(¬B1,1 ∨ P1,2 ∨ P2,1) with (¬P1,2 ∨ B1,1) gives only tautologies, which can be discarded

3. Resolve ¬P1,2 with P1,2 to obtain the empty clause

Result: KB ∧ ¬α is unsatisfiable, so KB |= ¬P1,2
5.5 Horn clauses and definite clauses
● Definite clause: a disjunction of literals of which exactly one is positive.
E.g., (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause (it has exactly one positive literal, B1,1)
● Horn clause: a disjunction of literals of which at most one is positive
● Goal clauses: clauses with no positive literals

Figure 7.14 A grammar for conjunctive normal form, Horn clauses, and definite clauses.
5.5 Forward and backward chaining
Horn Form (restricted)
KB: conjunction of Horn clauses
Horn clause:
proposition symbol;
or (conjunction of symbols) ⇒ symbol
E.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B)

Modus Ponens (for Horn Form): complete for Horn KBs

Can be used with forward chaining or backward chaining.


These algorithms are very natural and run in linear time
5.5 AND–OR graphs
● In AND–OR graphs,
○ multiple links joined by an arc indicate a conjunction
○ multiple links without an arc indicate a disjunction
● How the graphs work:
○ The known leaves are set, inference propagates up the graph as far as possible.
○ Where a conjunction appears, the propagation waits until all the conjuncts are known before proceeding.

5.5 Forward chaining
Idea: fire any rule whose premises are satisfied in the KB,
add its conclusion to the KB, until query is found or no further inferences can be made.
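A sketch of forward chaining for a Horn KB, using the premise-count bookkeeping that gives the linear-time behavior mentioned above. The KB extends the earlier example C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B) with the extra fact D (my addition; without it, A is not derivable):

```python
from collections import deque

def fc_entails(clauses, query):
    """clauses: list of (premises, conclusion) pairs; facts have empty premises."""
    count = {i: len(p) for i, (p, _) in enumerate(clauses)}  # unproven premises
    inferred = set()
    agenda = deque(c for p, c in clauses if not p)           # known facts
    while agenda:
        sym = agenda.popleft()
        if sym == query:
            return True
        if sym in inferred:
            continue
        inferred.add(sym)
        for i, (premises, conclusion) in enumerate(clauses):
            if sym in premises:
                count[i] -= 1
                if count[i] == 0:        # all premises proved: fire the rule
                    agenda.append(conclusion)
    return False

# C,  B ⇒ A,  C ∧ D ⇒ B,  D
kb = [((), "C"), (("B",), "A"), (("C", "D"), "B"), ((), "D")]
print(fc_entails(kb, "A"))  # True
```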

5.5 Backward chaining
● Idea: work backwards from the query q to prove q by BC,
○ check if q is known already, or
○ prove by BC all premises of some rule concluding q

● Avoid loops: check if new subgoal is already on the goal stack


● Avoid repeated work: check if new subgoal
○ 1) has already been proved true, or
○ 2) has already failed
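A minimal backward-chaining sketch over the same rule representation; the goal stack guards against loops (the caching of already-proved and already-failed subgoals described above is omitted for brevity):

```python
def bc_entails(clauses, query, stack=frozenset()):
    """clauses: list of (premises, conclusion) pairs.
    Prove query by proving all premises of some rule that concludes it."""
    if query in stack:        # this subgoal is already in progress: avoid a loop
        return False
    for premises, conclusion in clauses:
        if conclusion == query:
            if all(bc_entails(clauses, p, stack | {query}) for p in premises):
                return True   # facts have no premises, so all() is trivially True
    return False

# C,  B ⇒ A,  C ∧ D ⇒ B,  D
kb = [((), "C"), (("B",), "A"), (("C", "D"), "B"), ((), "D")]
print(bc_entails(kb, "A"))  # True
```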

5.5 Forward vs. backward chaining
● FC is data-driven: automatic, unconscious processing,
○ e.g., object recognition, routine decisions
○ May do lots of work that is irrelevant to the goal

● BC is goal-driven, appropriate for problem-solving,


○ e.g., Where are my keys? How do I get into a PhD program?

