Ttnt 05 Logical Agent
Outline
CHAPTER 1: INTRODUCTION (CHAPTER 1)
CHAPTER 2: INTELLIGENT AGENTS (CHAPTER 2)
CHAPTER 3: SOLVING PROBLEMS BY SEARCHING (CHAPTER 3)
CHAPTER 4: INFORMED SEARCH (CHAPTER 3)
CHAPTER 5: LOGICAL AGENT (CHAPTER 7)
CHAPTER 6: FIRST-ORDER LOGIC (CHAPTER 8, 9)
CHAPTER 7: QUANTIFYING UNCERTAINTY(CHAPTER 13)
CHAPTER 8: PROBABILISTIC REASONING (CHAPTER 14)
CHAPTER 9: LEARNING FROM EXAMPLES (CHAPTER 18)
2
5.1 Knowledge-Based Agents
● Knowledge base (KB) = a set of sentences in a formal language (i.e., knowledge
representation language)
● Declarative approach to build an agent:
○ TELL it what it needs to know
○ ASK it what to do; the answer should follow from the KB
5.1 A simple knowledge-based agent.
● The agent takes a percept as input and returns an action; it maintains a knowledge base.
● How it works:
○ TELLs the knowledge base what it perceives.
○ ASKs the knowledge base what action it should perform.
○ TELLs the knowledge base which action was chosen, and executes the action.
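The three-step loop above can be sketched in Python. The `KB` interface (`tell`/`ask`) and the tuple-shaped sentences are hypothetical stand-ins, since the slides do not fix a concrete representation:

```python
# Sketch of the generic knowledge-based agent loop.
# The kb object is any store supporting tell(sentence) and ask(query);
# the sentence formats below are illustrative placeholders.

class KBAgent:
    def __init__(self, kb):
        self.kb = kb      # the knowledge base
        self.t = 0        # time step counter

    def agent_program(self, percept):
        # 1. TELL the KB what the agent perceives
        self.kb.tell(self.make_percept_sentence(percept, self.t))
        # 2. ASK the KB what action to perform
        action = self.kb.ask(self.make_action_query(self.t))
        # 3. TELL the KB which action was chosen, then execute it
        self.kb.tell(self.make_action_sentence(action, self.t))
        self.t += 1
        return action

    def make_percept_sentence(self, percept, t):
        return ("percept", tuple(percept), t)

    def make_action_query(self, t):
        return ("action?", t)

    def make_action_sentence(self, action, t):
        return ("action", action, t)
```

Any concrete KB exposing `tell` and `ask` can be dropped in without changing the loop.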
5.2 Wumpus World PEAS description
● Performance measure:
○ +1000: climbing out of the cave with the gold
○ –1000: falling into a pit or being eaten by the wumpus
○ –1: each action taken
○ –10: using up the arrow
○ The game ends when the agent dies or climbs out of the cave
● Environment: A 4 × 4 grid of rooms.
○ Start location of the agent: the square labeled [1,1]
○ Locations of the gold and the wumpus: random
● Actuators: Move Forward, Turn Left, Turn Right, Grab, Climb, Shoot
● Sensors: the agent will perceive
○ Stench: in the square containing the monster (called wumpus) and in the directly adjacent squares
○ Breeze: in the squares directly adjacent to a pit
○ Glitter: in the square where the gold is
○ Bump: into a wall
○ Scream: anywhere in the cave when the wumpus is killed
● The first percept is [None, None, None, None, None], i.e.,
[Stench, Breeze, Glitter, Bump, Scream] all absent
=> its neighboring squares, [1,2] and [2,1], are OK.
The agent decides to move forward to [2,1].
● The agent perceives a breeze (denoted by “B”) in [2,1]
[None,breeze,None,None,None]
=> there must be a pit in a neighboring square.
● The pit cannot be in [1,1] => so there must be a pit in [2,2] or [3,1] or both.
● The agent will turn around,
go back to [1,1],
and then proceed to [1,2].
● The agent perceives a stench in [1,2] ~ [stench,None,None,None,None]
=> there must be a wumpus nearby ([2,2] or [1,3])
● The lack of stench when the agent was in [2,1]
=> wumpus cannot be in [2,2]
=> wumpus is in [1,3]
● The lack of a breeze in [1,2]
=> there is no pit in [2,2]
=> [2,2]: safe, OK
● The agent draws a conclusion from the available information
● The conclusion is guaranteed to be correct if the available information is correct.
5.3 Logic
● Logics are formal languages for representing information
● Syntax defines the sentences in the language
E.g., “x + y = 4” is a well-formed sentence, whereas “x4y+ =” is not
● Semantics defines the “meaning” of sentences OR the truth of each sentence with respect to
each possible world (i.e., model).
E.g., the sentence “x + y = 4” is true in a world where x is 2 and y is 2,
but false in a world where x is 1 and y is 1
5.3 Entailment
● Entailment means that one thing follows from another:
● Knowledge base KB entails sentence α if and only if α is true in all possible worlds
(models) where KB is true
KB |= α
● E.g., KB containing “the Giants won” and “the Reds won” entails “Either the Giants won
or the Reds won”
5.3 Models
● Logicians typically think in terms of models, which are formally structured worlds
with respect to which truth can be evaluated
● We say m is a model of a sentence α if α is true in m
● M(α) is the set of all models of α
● Then KB |= α if and only if M(KB) ⊆ M(α)
5.3 Entailment in the wumpus world
Situation after detecting nothing in [1,1], moving right, breeze in [2,1]
Wumpus models
8 possible models (2^3 combinations of pits in [1,2], [2,2], [3,1])
KB = wumpus-world rules + observations
α1 = “[1,2] is safe”, KB |= α1
α2 = “[2,2] is safe”; α2 is false in some models where KB is true, so KB ⊭ α2
5.3 Inference
● KB ⊢i α = sentence α can be derived from KB by algorithm i
● Consequences of KB are a haystack; α is a needle.
Entailment = needle in haystack; inference = finding it
● Soundness:
i is sound if whenever KB ⊢i α, it is also true that KB |= α
● Completeness:
i is complete if whenever KB |= α, it is also true that KB ⊢i α
5.4 Propositional logic: Syntax
● Propositional logic is the simplest logic; it illustrates the basic ideas
● The proposition symbols P1, P2, … are sentences
● Logical connectives
○ ¬ (not).
○ ∧ (and).
○ ∨ (or).
○ ⇒ (implies)
○ ⇔ (if and only if)
● Operator precedence (highest to lowest): ¬, ∧, ∨, ⇒, ⇔
5.4 Propositional logic: Syntax
● If S is a sentence, ¬S is a sentence (negation)
● If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
● If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
● If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
● If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)
5.4 Propositional logic: Semantics
● Each model specifies true/false for each proposition symbol
E.g., the model m1 = {P1,2 = true, P2,2 = true, P3,1 = false}
5.4 Truth tables for connectives
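The truth-table figure is not reproduced in this text version; the tables for the five connectives can be regenerated with a short script (P and Q are illustrative symbols):

```python
# Regenerate the truth tables for the propositional connectives.
from itertools import product

connectives = {
    "¬P":    lambda p, q: not p,
    "P ∧ Q": lambda p, q: p and q,
    "P ∨ Q": lambda p, q: p or q,
    "P ⇒ Q": lambda p, q: (not p) or q,   # implication as material conditional
    "P ⇔ Q": lambda p, q: p == q,
}

def truth_table():
    """Return one row per (P, Q) assignment, with each connective's value."""
    rows = []
    for p, q in product([False, True], repeat=2):
        rows.append((p, q, {name: f(p, q) for name, f in connectives.items()}))
    return rows

for p, q, vals in truth_table():
    print(p, q, vals)
```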
5.4 A simple knowledge base - Wumpus world sentences
Px,y is true if there is a pit in [x, y].
● Goal: to decide whether KB |= α for some sentence α
● E.g., for α: ¬P1,2, prove KB |= ¬P1,2
With 7 symbols, there are 2^7 = 128 possible models; in 3 of these, KB is true.
In all 3 of those models, ¬P1,2 is true, i.e., there is no pit in [1,2]; hence KB |= ¬P1,2.
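This check can be carried out mechanically. Below is a sketch of truth-table entailment for this KB: it enumerates all 2^7 models and tests whether ¬P1,2 holds in every model where the KB holds (names like `B11` are ASCII stand-ins for B1,1):

```python
# Truth-table entailment for the wumpus KB: enumerate all 2^7 models
# over the 7 symbols and check the query in every KB model.
from itertools import product

symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb(m):
    # R1..R5 from the slides
    return (not m["P11"]                                          # R1
            and (m["B11"] == (m["P12"] or m["P21"]))              # R2
            and (m["B21"] == (m["P11"] or m["P22"] or m["P31"]))  # R3
            and not m["B11"]                                      # R4
            and m["B21"])                                         # R5

def entails(kb, alpha):
    """KB |= alpha iff alpha is true in every model where kb is true."""
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if kb(m) and not alpha(m):
            return False
    return True

print(entails(kb, lambda m: not m["P12"]))  # True: KB |= ¬P1,2
```

Note that the same check with ¬P2,2 returns False: the KB has models with a pit in [2,2], matching the α2 example above.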
5.5 Propositional Theorem Proving
● Determine entailment by theorem proving:
applying rules of inference directly to the sentences in our knowledge base to
construct a proof of the desired sentence without consulting models
5.5 Logical equivalence
● Two sentences α and β are logically equivalent if they are true in the same set of models
● Two sentences α and β are equivalent if and only if each of them entails the other: α ≡ β if and
only if α |= β and β |= α
5.5 Validity and satisfiability
● A sentence is valid if it is true in all models, e.g., True, A∨¬A, A ⇒ A
○ Valid sentences are also known as tautologies
○ Validity is connected to inference:
KB |= α if and only if (KB ⇒ α) is valid
5.5 Inference rules and proofs
● Modus Ponens: from α ⇒ β and α, infer β
● And-Elimination: from α ∧ β, infer α (and likewise β)
● E.g., apply biconditional elimination to R2 to obtain
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
then apply And-Elimination to R6 to obtain
R7: ((P1,2 ∨ P2,1) ⇒ B1,1)
● Resolution: from two clauses l1 ∨ … ∨ lk and m1 ∨ … ∨ mn, infer the clause containing all their
literals except li and mj, where li and mj are complementary literals (i.e., one is the negation of the other)
5.5 Conversion to CNF
Convert R2 : B1,1 ⇔ (P1,2 ∨ P2,1) into CNF:
1. Eliminate ⇔: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards (De Morgan): (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Distribute ∨ over ∧: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
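As a sanity check (a small sketch, not part of the original slides), the standard CNF of R2, i.e. (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1), can be verified equivalent to R2 by enumerating all 8 truth assignments:

```python
# Verify that the CNF of R2 agrees with B1,1 ⇔ (P1,2 ∨ P2,1)
# on every one of the 2^3 = 8 truth assignments.
from itertools import product

def r2(b11, p12, p21):
    return b11 == (p12 or p21)

def r2_cnf(b11, p12, p21):
    return ((not b11 or p12 or p21)   # ¬B1,1 ∨ P1,2 ∨ P2,1
            and (not p12 or b11)      # ¬P1,2 ∨ B1,1
            and (not p21 or b11))     # ¬P2,1 ∨ B1,1

assert all(r2(*m) == r2_cnf(*m) for m in product([False, True], repeat=3))
print("equivalent on all 8 models")
```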
5.5 Resolution algorithm
Proof by contradiction, i.e., show KB ∧ ¬α unsatisfiable
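A minimal sketch of this algorithm, assuming clauses are represented as frozensets of (symbol, polarity) literals (a representation chosen here for illustration):

```python
# Minimal propositional resolution prover (proof by contradiction):
# a clause is a frozenset of literals; a literal is (symbol, True/False).
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for (sym, pos) in c1:
        if (sym, not pos) in c2:   # complementary pair found
            out.append(frozenset((c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})))
    return out

def pl_resolution(kb_clauses, negated_query_clauses):
    """True iff KB ∧ ¬α is unsatisfiable, i.e. KB |= α."""
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause: contradiction derived
                    return True
                new.add(r)
        if new <= clauses:             # no new clauses: KB does not entail α
            return False
        clauses |= new

# KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1 in CNF; query α = ¬P1,2
kb = [frozenset({("B11", False), ("P12", True), ("P21", True)}),
      frozenset({("P12", False), ("B11", True)}),
      frozenset({("P21", False), ("B11", True)}),
      frozenset({("B11", False)})]
neg_alpha = [frozenset({("P12", True)})]   # ¬α = P1,2
print(pl_resolution(kb, neg_alpha))        # True: KB |= ¬P1,2
```

This mirrors the example worked on the next slides: the empty clause is derived, so KB |= ¬P1,2.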
5.5 Proof by resolution
KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
α = ¬P1,2
E.g. Wumpus world
R1 : ¬P1,1.
R2 : B1,1 ⇔ (P1,2 ∨ P2,1).
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1).
R4 : ¬B1,1.
R5 : B2,1.
5.5 Proof by resolution
1. Convert (KB ∧ ¬α) to CNF.
2. Resolve pairs of clauses containing complementary literals, adding each resolvent.
3. Repeat until either the empty clause is derived (then KB |= α) or no new clauses can be added (then KB does not entail α).
Result: the empty clause is derived, so KB |= ¬P1,2
5.5 Horn clauses and definite clauses
● Definite clause: a disjunction of literals of which exactly one is positive.
E.g., (¬L1,1 ∨ ¬Breeze ∨ B1,1) is a definite clause
● Horn clause: a disjunction of literals of which at most one is positive
● Goal clauses: clauses with no positive literals
Figure 7.14 A grammar for conjunctive normal form, Horn clauses, and definite clauses.
5.5 Forward and backward chaining
● Horn Form (restricted): KB = conjunction of Horn clauses
● Horn clause:
○ proposition symbol; or
○ (conjunction of symbols) ⇒ symbol
● E.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B)
5.5 Forward chaining
Idea: fire any rule whose premises are satisfied in the KB,
add its conclusion to the KB, until query is found or no further inferences can be made.
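The idea can be sketched with the count-of-unsatisfied-premises bookkeeping, using the example KB C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B) given above (the figure-based trace from the original slides is not reproduced):

```python
# Forward chaining over a definite-clause KB.
# A rule is (set_of_premise_symbols, conclusion_symbol); facts are known-true symbols.
from collections import deque

def fc_entails(rules, facts, query):
    # count[i] = number of premises of rule i not yet known to be true
    count = [len(prem) for prem, _ in rules]
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:      # all premises satisfied: fire the rule
                    agenda.append(concl)
    return False

# KB: B ⇒ A, C ∧ D ⇒ B, with fact C
rules = [({"B"}, "A"), ({"C", "D"}, "B")]
print(fc_entails(rules, ["C"], "A"))        # False: D is not known
print(fc_entails(rules, ["C", "D"], "A"))   # True: C, D fire B, then B fires A
```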
5.5 Backward chaining
● Idea: work backwards from the query q. To prove q by BC:
○ check if q is known already, or
○ prove by BC all premises of some rule concluding q
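The same definite-clause representation can be queried by a recursive backward-chaining sketch (the visited set, added here to guard against cyclic rules, is an implementation detail not shown on the slides):

```python
# Backward chaining over a definite-clause KB:
# to prove q, either q is a known fact, or some rule concluding q
# has all of its premises provable.

def bc_entails(rules, facts, query, visited=None):
    visited = visited or set()
    if query in facts:
        return True
    if query in visited:
        return False               # avoid infinite regress on cyclic rules
    visited = visited | {query}
    return any(concl == query and
               all(bc_entails(rules, facts, p, visited) for p in prem)
               for prem, concl in rules)

# Same example KB as for forward chaining: B ⇒ A, C ∧ D ⇒ B
rules = [({"B"}, "A"), ({"C", "D"}, "B")]
print(bc_entails(rules, {"C", "D"}, "A"))  # True
print(bc_entails(rules, {"C"}, "A"))       # False
```

Unlike forward chaining, only symbols relevant to the query are ever examined.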
5.5 Forward vs. backward chaining
● FC is data-driven: automatic, unconscious processing
○ e.g., object recognition, routine decisions
○ May do lots of work that is irrelevant to the goal