
BITS Pilani, Pilani Campus

Artificial Intelligence [SESIZC444]
CS 4 - Knowledge Representation & Inference

Dr. Vijayalakshmi Anand
DFS algorithm

 The algorithm starts at the root node
 It traverses the tree “deepest node first”
 It uses a last-in, first-out (LIFO) strategy and hence is implemented using a stack (see the sketch below)
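A minimal iterative DFS sketch (not from the slides) with an explicit stack; the adjacency-list graph, node names and goal test are hypothetical, for illustration only.

```python
# Minimal iterative DFS sketch; the explicit stack gives the LIFO (deepest-first) order.
def dfs(graph, start, goal):
    stack = [start]                  # frontier held as a stack
    visited = set()
    while stack:
        node = stack.pop()           # take the most recently added (deepest) node
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph.get(node, []))   # children are expanded before shallower nodes
    return False

# Hypothetical example graph (adjacency list)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dfs(graph, "A", "E"))   # True
```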



Example



Depth Limited Search (DLS)

Running DFS with a predetermined depth limit l

Completeness: No, cannot guarantee a goal if l < d
Optimal: No
Time complexity: O(b^l)
Space complexity: O(bl)



Iterative Deepening Depth First
Search (IDDFS)
 Run Depth Limited Search (DLS) by gradually increasing the limit l
 First with l = 1, then l = 2, l = 3 and so on – until the goal is found
 It is a combination of Depth First Search and Breadth First Search (see the sketch below)
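A short sketch of DLS plus the iterative-deepening loop, assuming a tree-shaped search space (no repeated-state checking); the example graph and max_depth cap are hypothetical.

```python
# Depth-limited search (DLS) plus iterative deepening (IDDFS) sketch.
def dls(graph, node, goal, limit):
    if node == goal:
        return True
    if limit == 0:
        return False
    # recurse one level deeper on each child
    return any(dls(graph, child, goal, limit - 1) for child in graph.get(node, []))

def iddfs(graph, start, goal, max_depth=10):
    # Run DLS with limit l = 0, 1, 2, ... until the goal is found
    for limit in range(max_depth + 1):
        if dls(graph, start, goal, limit):
            return limit              # depth at which the goal was found
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(iddfs(graph, "A", "F"))   # 3
```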



IDDFS – Example



Informed search

Information about the goal state is available

More efficient than uninformed search

Tries to reduce the amount of search that must be done by making intelligent choices about which nodes are selected for expansion

Aims to find an optimal solution to reach the goal state



Informed (Heuristic) Search
Strategies

 Also called heuristic search

 In a heuristic search, each state is assigned a “heuristic value” (h-value) that the search uses in selecting the “best” next step

1. Greedy Best First Search
2. A* Search



Greedy best first search algorithm

 It combines aspects of depth-first search and breadth-first search
 At each step of the best-first search process, we select the most promising of the nodes generated so far
 It uses a heuristic function h(n) to guide the search



Contd..

To implement the algorithm we need two lists

 OPEN list that keeps track of the current ‘immediate’ nodes available for traversal
 CLOSED list that keeps track of the nodes already traversed
 Greedy Best First Search is complete in finite state spaces
 Greedy Best First Search is not optimal (see the sketch below)
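A minimal greedy best-first sketch using a priority queue as the OPEN list and a set as the CLOSED list; the graph and h-values are hypothetical, not from the slides.

```python
import heapq

# Greedy best-first search sketch: OPEN is a priority queue ordered by h(n) only,
# CLOSED records nodes already expanded.
def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]      # (h-value, node, path so far)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # most promising node generated so far
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))   # ['S', 'A', 'G']
```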



Example



A* Search

 Pronounced “A-star” search
 The most widely used informed search
 Evaluation function f(n) = g(n) + h(n)
• g(n) – the cost to reach node n
• h(n) – the estimated cost to go from node n to the goal
• Hence, f(n) – the estimated cost of the cheapest path through node n
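A compact A* sketch over a small hypothetical weighted graph; g(n) is the accumulated path cost and h is a lookup table of heuristic values (all names and numbers here are assumptions for illustration).

```python
import heapq

# A* sketch: the frontier is ordered by f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}                           # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for child, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)    # ['S', 'B', 'G'] 5
```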



A* Search Example



State-space representation

 In the search strategies above, states are represented as
• Atomic representation
• Each state is treated as a black box
• Many factors are ignored, e.g., gas in the car, money available for tolls, etc.
 To include more factors in our state representation, we use
• Factored Representation



Factored Representation

Each state will be represented with
 Factors
 Each factor can be a Boolean or real-valued variable
 E.g., the city Arad can now be represented as
 {“lat”: 46.1866, “lon”: 21.3123, “has_gas_till_next_station”: yes}
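The same factored state can be written as a plain Python dict; this is an illustrative sketch, not notation from the slides (the Boolean True stands in for “yes”).

```python
# Hypothetical factored state for the city Arad: each factor is a named variable
# holding a Boolean or real value (the coordinates are the ones shown on the slide).
arad_state = {
    "lat": 46.1866,                     # real-valued factor
    "lon": 21.3123,                     # real-valued factor
    "has_gas_till_next_station": True,  # Boolean factor
}
print(arad_state["has_gas_till_next_station"])   # True
```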



Knowledge-based Agents

Solving problems by
 Representing the knowledge in state-space
 Reasoning about the solution in logical steps



Knowledge-based Agents
 Central component: Knowledge Base (KB)

 KB – a set of sentences (not English sentences)

 Sentence
 A representation of knowledge expressed in a knowledge representation language
 Represents an axiom when the sentence is taken as given, without being derived from other sentences
 TELL operation: add new sentences to the knowledge base
 ASK operation: query what is known

 Each time a knowledge-based agent is called, it does three things (see the sketch below)

 TELL – the Knowledge Base about the percepts (inputs)
 ASK – the Knowledge Base what action it should perform; this may require extensive reasoning about the current state and the outcomes of possible actions
 TELL – the Knowledge Base about the selected action (to update it), and then the agent executes the action
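A minimal Python sketch (not part of the slides) of this TELL–ASK–TELL cycle. SimpleKB, its tell/ask methods and the tuple-based sentence format are hypothetical stand-ins for a real knowledge base and inference procedure.

```python
# Stand-in knowledge base: ask() returns a fixed action instead of doing real inference.
class SimpleKB:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        self.sentences.append(sentence)    # add a new sentence to the KB

    def ask(self, query):
        return "Forward"                   # placeholder for real inference


class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0                         # time step counter

    def step(self, percept):
        self.kb.tell(("Percept", tuple(percept), self.t))   # TELL the KB the percept
        action = self.kb.ask(("BestAction", self.t))        # ASK the KB for an action
        self.kb.tell(("Action", action, self.t))            # TELL the KB the chosen action
        self.t += 1
        return action


agent = KBAgent(SimpleKB())
print(agent.step([None, None, None, None, None]))   # Forward
```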



Example: Knowledge-based
Agents

Wumpus World



Wumpus World

 Defining the task environment (PEAS)


 Performance Measure:
 +1000 for climbing out with the gold
 -1000 for falling into a pit or being eaten by the Wumpus
 -1 for each action taken
 -10 for using the arrow

 Environment: a 4x4 grid of rooms. The agent always starts at [1, 1] facing right. The locations of the Wumpus and the gold are random. The agent dies if it enters a square containing a pit or a live Wumpus.



Wumpus World

Actuators –
 Forward
 TurnLeft by 90°
 TurnRight by 90°
 Grab – pick up the gold if present
 Shoot – fire the arrow; it either hits a wall or kills the Wumpus. The agent has only one arrow.
 Climb – climb out of the cave, only from [1, 1]



Wumpus World

 Sensors: the agent has five sensors
 Stench: perceived in all squares adjacent (not diagonally) to the Wumpus
 Breeze: perceived in all squares adjacent (not diagonally) to a pit
 Glitter: perceived in the square containing the gold
 Bump: perceived when the agent walks into a wall
 Scream: perceived everywhere when the Wumpus is killed

 Percept format: [Stench?, Breeze?, Glitter?, Bump?, Scream?]
 E.g., [Stench, Breeze, None, None, None]
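For illustration only, one possible way to hold this five-element percept vector in code; the field names are assumptions, not part of the slides.

```python
from collections import namedtuple

# Hypothetical encoding of the percept vector [Stench?, Breeze?, Glitter?, Bump?, Scream?]
Percept = namedtuple("Percept", ["stench", "breeze", "glitter", "bump", "scream"])

# Corresponds to [Stench, Breeze, None, None, None]
p = Percept(stench=True, breeze=True, glitter=False, bump=False, scream=False)
print(p.breeze)   # True
```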



Wumpus World

Goal: Discover the locations of the Wumpus, the pits and the gold so that the agent can navigate to the gold without falling prey to the Wumpus or the pits

This discovered knowledge drives the agent’s state transitions

Logical reasoning requires a representation of the environment’s configuration



Wumpus World

Symbolic Notation
A = Agent
B = Breeze
G = Glitter, Gold
OK = Safe Square
P = Pit
S = Stench
V = Visited
W = Wumpus



Wumpus World

Percept 1: [None, None, None, None, None]

Action: Forward



Wumpus World

Percept 2: [None, Breeze, None, None, None]

Action: Move to [1, 2] (the other non-visited OK state)



Wumpus World

Percept 3: [Stench, None, None, None, None]

Action: Move to [2, 2]



Wumpus World

Logical agents combine the current percept with the existing knowledge (percept history) and do logical reasoning to identify state information



Logic

Reasoning conducted or assessed according to available knowledge about the domain

Syntax: Every sentence in our knowledge base is expressed according to the syntax of the representation language.
E.g., in arithmetic, “x + y = 4” is a well-formed sentence, whereas “xy+4=” is not



Logic – Semantics

Semantics: the meaning of sentences. Semantics defines the truth of a sentence with respect to a possible world.
E.g., in a world where x = 2 and y = 2, “x + y = 4” is a true sentence, whereas in a world where x = 1 and y = 1 the same sentence is not true

In standard logic, every sentence has to be either True or False – it cannot be anything in between


Logic – Model

Model: any possible world, not necessarily the reality – any assignment of truth values to the sentences in our KB

E.g., for two sentences S1 and S2 in our KB, the possible models are m1: {S1: True, S2: True}, m2: {S1: False, S2: True}, m3: {S1: True, S2: False}, m4: {S1: False, S2: False}
S1 is true in m1 and m3, so m1 satisfies S1 and m3 satisfies S1
M(S1) is the set of models that satisfy S1, i.e., {m1, m3}



Logic – Entailment

Logical reasoning: logical entailment between sentences, i.e., one sentence follows from another sentence

X ⊨ Y means that sentence X entails the sentence Y
 In every model where X is true, Y is also true
 X ⊨ Y if and only if M(X) ⊆ M(Y)



Logic – Entailment

 Example: in the Wumpus World, the agent is in [2, 1] and detects a breeze
 The agent is interested in squares [1, 2], [2, 2], [3, 1] for the next move
 Each of these squares might or might not contain a pit (a total of 2³ = 8 possible models)
 Our KB tells us that no breeze was perceived in [1, 1], and hence the adjacent squares [1, 2] and [2, 1] do not contain pits



Logic – Entailment

Define α1 = “There is no pit in [1, 2]”

In every model where KB is true, α1 is also true
Hence, KB ⊨ α1
M(KB) ⊆ M(α1)

Model Checking: this enumeration of all models to verify logical inference (entailment) is called model checking


Logic – Entailment

Define α2 = “There is no pit in [2, 2]”

Here, α2 is false in some models where KB is true

Hence, KB ⊭ α2



Logic – Entailment

Sound – an inference algorithm is sound if it derives only entailed sentences
E.g., model checking is a sound inference procedure

Complete – an inference algorithm is complete if it can derive any sentence that is entailed

If the KB is true in the real world, any sentence derived by a sound inference procedure is also true in the real world


Logic - Grounding

Grounding – the connection between logical reasoning and the real environment in which the agent exists
How do we know that the KB is true in the real world?



Propositional Logic

 A simple representation language for building knowledge-based agents
 Proposition symbol – a symbol that stands for a proposition. E.g., “Wumpus in [1,3]” is a proposition and W1,3 is its symbol
 A proposition can be true or false
 Symbols are usually written with an uppercase letter and may contain other letters or subscripts, e.g., P, Q, R, W1,3, North
 Two proposition symbols have fixed meanings
• True – the always-true proposition
• False – the always-false proposition



Propositional Logic – Syntax

 Syntax – defines the allowable sentences of the language
 Atomic proposition – consists of a single proposition symbol
 Complex proposition – constructed from atomic sentences using parentheses and logical connectives



Propositional Logic –
Connectives

¬ (not) – A sentence like ¬W1,3 is the negation of W1,3
∧ (and) – Used in the AND combination of sentences, e.g., W1,3 ∧ P3,1. It is called a conjunction and its parts are called conjuncts
∨ (or) – Used in the OR combination of sentences, e.g., W1,3 ∨ P3,1. It is called a disjunction and its parts are called disjuncts
⇒ (implies) – A sentence such as (W1,3 ∧ P3,1) ⇒ ¬W2,2 is called an implication (or conditional). Its premise (antecedent) is the LHS and its conclusion (consequent) is the RHS
⇔ (if and only if) – A sentence like W1,3 ⇔ ¬W2,2 is a biconditional



Propositional Logic – Syntax

Operator precedence (highest to lowest): ¬, ∧, ∨, ⇒, ⇔
In ambiguous sentences such as ¬A ∧ B, resolution is done using precedence, so that it reads as (¬A) ∧ B rather than ¬(A ∧ B)


Propositional Logic –
Semantics

 Semantics defines the rules for determining the truth of a sentence w.r.t. a particular model
 E.g., if the sentences in the knowledge base use the symbols {P1, P2, P3}, a possible model is m1 = {P1 = false, P2 = true, P3 = false}; a model provides truth values for the proposition symbols



Propositional Logic –
Semantics

 Semantics must specify how to compute the truth


value of any sentence, given a model
 Compute the truth of atomic sentences and
 Compute the truth of sentences formed with five
connectives

 The truth of atomic sentences is given by the


model



Propositional Logic –
Semantics
 For complex sentences, we can do recursive evaluation using the truth tables of the connectives

 E.g., a sentence such as ¬P1 ∧ (P3 ∨ P2) evaluated in m1 gives true ∧ (false ∨ true) = true ∧ (true) = true
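A sketch of such recursive evaluation, with sentences encoded as nested tuples; this encoding and the pl_true helper are assumptions for illustration, not the slides’ notation.

```python
# Recursive truth evaluation sketch. Atomic sentences are symbol names;
# complex sentences are tuples like ("and", ("not", "P1"), ("or", "P3", "P2")).
def pl_true(sentence, model):
    if isinstance(sentence, str):                 # atomic sentence: look it up in the model
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    if op == "implies":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == "iff":
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown connective {op}")

m1 = {"P1": False, "P2": True, "P3": False}
sentence = ("and", ("not", "P1"), ("or", "P3", "P2"))
print(pl_true(sentence, m1))    # True, i.e. true AND (false OR true)
```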



Propositional Logic – Example
KB

A simple KB for the Wumpus World

 For each [x, y] location:
 Px,y is true if there is a pit in [x, y]
 Wx,y is true if there is a Wumpus in [x, y]
 Bx,y is true if the agent perceives a breeze in [x, y]
 Sx,y is true if the agent perceives a stench in [x, y]



Propositional Logic – Example
KB

Each sentence will be denoted as Ri so that we can refer to it

R1 : ¬P1,1
R2 : B1,1 ⇔ (P1,2 ∨ P2,1)
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4 : ¬B1,1
R5 : B2,1
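A hand-rolled sketch (not from the slides) encoding R1–R5 as Python predicates over a model dict; the flattened symbol names such as P11 and B11 stand for P1,1 and B1,1.

```python
# Each rule is a function from a model (dict of truth values) to a Boolean.
R1 = lambda m: not m["P11"]
R2 = lambda m: m["B11"] == (m["P12"] or m["P21"])
R3 = lambda m: m["B21"] == (m["P11"] or m["P22"] or m["P31"])
R4 = lambda m: not m["B11"]
R5 = lambda m: m["B21"]
KB = lambda m: R1(m) and R2(m) and R3(m) and R4(m) and R5(m)

# One possible world consistent with the KB (a pit in [2, 2]):
m = {"B11": False, "B21": True, "P11": False, "P12": False,
     "P21": False, "P22": True, "P31": False}
print(KB(m))   # True
```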



Propositional Logic (PL) –
Inference

Given our example KB, can we infer a given sentence?
E.g., is ¬P1,2 (“there is no pit in [1, 2]”) entailed by our KB?



PL Inference - Model Checking

Step 1: Get the relevant proposition symbols from our KB: B1,1, B2,1, P1,1, P1,2, P2,1, P2,2, P3,1. With 7 symbols, we have 2⁷ = 128 possible models.
Step 2: Enumerate all models, i.e., all combinations of truth values for the proposition symbols.
Step 3: Among all the models, find those where the KB is true, i.e., where every sentence R1, R2, R3, R4, R5 is true.
Step 4: In those models where the KB is true, check whether the query sentence ¬P1,2 is true.
Step 5: If the query sentence ¬P1,2 is true in all models where the KB is true, then the KB entails it; otherwise it does not (see the sketch below).
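A brute-force sketch of these five steps, reusing the same flattened symbol names; this is an illustrative enumeration under those assumptions, not the textbook’s TT-Entails procedure.

```python
from itertools import product

# Enumerate all 2^7 models, keep those where every KB sentence holds,
# then test whether the query holds in all of them.
symbols = ["B11", "B21", "P11", "P12", "P21", "P22", "P31"]

def kb_true(m):
    return (not m["P11"]                                        # R1
            and m["B11"] == (m["P12"] or m["P21"])               # R2
            and m["B21"] == (m["P11"] or m["P22"] or m["P31"])   # R3
            and not m["B11"]                                     # R4
            and m["B21"])                                        # R5

def query(m):
    return not m["P12"]            # query: no pit in [1, 2]

models = [dict(zip(symbols, values)) for values in product([True, False], repeat=7)]
kb_models = [m for m in models if kb_true(m)]
print(len(kb_models))                       # 3 models satisfy the KB
print(all(query(m) for m in kb_models))     # True -> the KB entails "no pit in [1, 2]"
```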



PL Inference - Model Checking



PL Inference - Model Checking

The KB is true in exactly three models

¬P1,2 is true in all three of those models; hence ¬P1,2 is entailed by the KB, i.e., ¬P1,2 is true
There is no pit in [1, 2]

P2,2 is true in two of the three models and false in one, so we cannot infer whether there is a pit in [2, 2] based on our simple KB



PL Inference - Model Checking

Model checking algorithm

 If KB and α contain n symbols, then there will be 2ⁿ models
 Time complexity: O(2ⁿ)
 Space complexity: O(n), because the enumeration is depth-first



PL Inference - Theorem
Proving
Theorem Proving – applying rules of inference directly to the sentences in our KB to prove the query sentence without consulting models

Logical Equivalence: two sentences α and β are logically equivalent if they are true in the same set of models, denoted α ≡ β
E.g., (P1,2 ∧ B2,1) and (B2,1 ∧ P1,2) are logically equivalent



PL Inference - Theorem
Proving



PL Inference - Theorem Proving
Validity: A sentence is valid if it is true in all models. E.g., P1,2 ∨ ¬P1,2 is always true
Deduction Theorem: For any sentences α and β, α ⊨ β if and only if the sentence (α ⇒ β) is valid
If we prove that (α ⇒ β) is equivalent to True, then entailment is proved

Satisfiability: A sentence is satisfiable if it is true in some model. E.g., our simple KB is satisfied by only three models
SAT: The problem of determining the satisfiability of sentences in propositional logic
Unsatisfiable: A sentence is unsatisfiable if it is false in all models. E.g., P1,2 ∧ ¬P1,2 is always false
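A small sketch checking the validity of P ∨ ¬P and the unsatisfiability of P ∧ ¬P by enumerating models; the models() helper is a hypothetical convenience, for illustration only.

```python
from itertools import product

# A sentence is valid if true in every model, satisfiable if true in at least one.
def models(symbols):
    for values in product([True, False], repeat=len(symbols)):
        yield dict(zip(symbols, values))

valid = all(m["P"] or not m["P"] for m in models(["P"]))                # P OR (NOT P)
unsatisfiable = not any(m["P"] and not m["P"] for m in models(["P"]))   # P AND (NOT P)
print(valid, unsatisfiable)    # True True
```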



PL Inference – Theorem
Proving

 Using the logical equivalences and the Deduction Theorem above:
 For any sentences α and β, α ⊨ β if and only if the sentence (α ∧ ¬β) is unsatisfiable
 This is Proof by Refutation or Proof by Contradiction (see the sketch below)
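A minimal refutation sketch over a tiny hypothetical KB (not the Wumpus KB): KB ⊨ α is confirmed by checking that KB ∧ ¬α has no satisfying model.

```python
from itertools import product

# Hypothetical KB over two symbols: KB = (A => B) AND A, query alpha = B.
def kb(m):
    return ((not m["A"]) or m["B"]) and m["A"]     # (A => B) AND A

def alpha(m):
    return m["B"]

models = [dict(zip("AB", v)) for v in product([True, False], repeat=2)]
refuted = not any(kb(m) and not alpha(m) for m in models)   # is KB AND (NOT alpha) unsatisfiable?
print(refuted)    # True -> KB entails B
```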



PL Inference – Theorem
Proving

Inference Rules
Modus Ponens (Latin for “mode that affirms”):
Whenever sentences of the form (α ⇒ β) and α are given, β can be inferred
E.g., given (WumpusAhead ∧ WumpusAlive) ⇒ Shoot:
if (WumpusAhead ∧ WumpusAlive) is also given, then Shoot can be inferred



PL Inference – Theorem
Proving

Inference Rules
And-Elimination: Given a conjunction α ∧ β, either of the conjuncts can be inferred: α or β
E.g., if (WumpusAhead ∧ WumpusAlive) is given, then WumpusAhead can be inferred (see the sketch below)
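A toy sketch applying And-Elimination and Modus Ponens once over sentences stored as tuples; the tuple encoding and the infer_once helper are assumptions for illustration, not a standard API.

```python
# Sentences: plain strings are atomic; tuples start with a connective name.
kb = {
    ("and", "WumpusAhead", "WumpusAlive"),
    ("implies", ("and", "WumpusAhead", "WumpusAlive"), "Shoot"),
}

def infer_once(kb):
    new = set()
    for s in kb:
        if isinstance(s, tuple) and s[0] == "and":            # And-Elimination
            new.update(s[1:])
        if isinstance(s, tuple) and s[0] == "implies" and s[1] in kb:   # Modus Ponens
            new.add(s[2])
    return kb | new

kb = infer_once(kb)
print("WumpusAhead" in kb, "Shoot" in kb)   # True True
```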



PL Inference – Theorem
Proving

Inference Rules
All Logical Equivalence
rules can be used as
inference rules



PL Inference – Theorem
Proving

Example: Given our simple KB of the Wumpus World

R1 : ¬P1,1
R2 : B1,1 ⇔ (P1,2 ∨ P2,1)
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
R4 : ¬B1,1
R5 : B2,1

Query: ¬P1,2. Can we prove that this sentence is entailed by the KB using inference rules?



PL Inference – Theorem
Proving

• R2 : B1,1 ⇔ (P1,2 ∨ P2,1)
• Apply Biconditional Elimination
• R6 : (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
• Apply And-Elimination to R6 to obtain
• R7 : ((P1,2 ∨ P2,1) ⇒ B1,1)
• Logical equivalence for contrapositives gives
• R8 : (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1))
• Apply Modus Ponens with R8 and R4 to obtain
• R9 : ¬(P1,2 ∨ P2,1)
• Finally, using De Morgan’s rule
• R10 : ¬P1,2 ∧ ¬P2,1
PL Inference – Theorem
Proving
 This proof can be found using the search algorithms we discussed earlier. We need to define the proof problem as follows:

 Initial state: the initial knowledge base
 Actions: all inference rules whose premises (top half) match sentences in the current KB
 Result: the conclusion (bottom half) of the applied inference rule is added to the KB
 Goal: a state containing the query sentence

 In many practical cases, searching for a proof can be more efficient than model checking because the proof can ignore irrelevant propositions



Example



Solution



Exercise

Solving a murder case



Thank you




