
Fundamentals of Artificial Intelligence

Module 2: Logical Agents and Propositional Logic
Knowledge Based Agents

• The intelligence of humans is achieved not by purely reflex mechanisms but by processes of
reasoning that operate on internal representations of knowledge.
• In AI, this approach to intelligence is embodied in knowledge-based agents.
• Logical agents (knowledge-based agents) can
• form representations of a complex world,
• use a process of inference to derive new representations about the world, and
• use these new representations to deduce what to do.
• Knowledge-based agents are supported by a logic, such as propositional logic or first-order predicate
logic.
Knowledge Base

• A knowledge base (KB) is a set of sentences in a formal language.


• Inference: Deriving new sentences from old sentences in KB.

• Each time the agent program is called, the knowledge-based agent does three things:


1. TELLs the knowledge base what it perceives. TELL operations add new sentences (agent’s
perceptions) to the knowledge base.
2. ASKs the knowledge base what action it should perform.
• Answering this query may require extensive reasoning (inference) about the current state of the
world and about the outcomes of possible action sequences.
3. The agent TELLs the knowledge base which action was chosen, and the agent executes the
action.
A Simple Knowledge-Based Agent

• MAKE-PERCEPT-SENTENCE constructs a sentence asserting that the agent perceived the given percept
at the given time.
• MAKE-ACTION-QUERY constructs a sentence that asks what action should be done at the current time.
• MAKE-ACTION-SENTENCE constructs a sentence asserting that the chosen action was executed.
• The details of the inference mechanisms are hidden inside TELL and ASK.
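The generic agent program can be sketched in Python as follows. This is only an illustration of the TELL/ASK loop: the KnowledgeBase class, the MAKE-* helpers, and the string sentences below are stand-ins invented for this sketch, not anything prescribed by the slides; real inference would happen inside tell and ask.

# Minimal sketch of the generic knowledge-based agent loop.
class KnowledgeBase:
    def __init__(self):
        self.sentences = []

    def tell(self, sentence):
        # TELL: add a new sentence to the knowledge base
        self.sentences.append(sentence)

    def ask(self, query):
        # ASK: a real KB would run inference here; we return a placeholder action
        return "NoOp"

def make_percept_sentence(percept, t):
    return f"Percept({percept}, {t})"

def make_action_query(t):
    return f"BestAction?(t={t})"

def make_action_sentence(action, t):
    return f"Executed({action}, {t})"

class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0                                              # time counter

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))   # 1. TELL the percept
        action = self.kb.ask(make_action_query(self.t))        # 2. ASK for an action
        self.kb.tell(make_action_sentence(action, self.t))     # 3. TELL the chosen action
        self.t += 1
        return action

agent = KBAgent(KnowledgeBase())
print(agent("Stench, Breeze, None, None, None"))                # -> "NoOp" in this stub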
Wumpus World

• The wumpus world is a simple computer


game, but it illustrates some important points
about intelligence.
• Wumpus eats anyone who enters its room.
• The wumpus can be shot by an agent
(wumpus dies if it is shot), but the agent has
only one arrow.
• The agent can safely enter a dead wumpus’s room.
• Some rooms contain bottomless pits that will
trap anyone who wanders into these rooms.
• One room contains gold.
Wumpus World Description

Performance measure: gold +1000, death -1000, -1 per step, -10 for using the arrow
Environment: A 4×4 grid of rooms. The agent always starts in the square [1,1], facing to the right. The
locations of the gold and the wumpus are chosen randomly, with a uniform distribution, from the
squares other than the start square. Each square other than the start can be a pit, with probability 0.2.
• Squares adjacent to wumpus are smelly
• Squares adjacent to pit are breezy
• Glitter iff gold is in the same square
• Shooting kills wumpus if you are facing it
• Shooting uses up the only arrow
• Grabbing picks up gold if in same square
• Releasing drops the gold in same square

Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot


Sensors: Breeze, Glitter, Smell, (Bump, Scream)
Exploring A Wumpus World

(sequence of figures showing the agent’s step-by-step exploration of the cave)
Some Tight Spots in Wumpus World
Logic

LOGICS are formal languages for representing information such that conclusions can be drawn.

SYNTAX: The Syntax of a logic defines the sentences (well-formed formulas) in that logic.

SEMANTICS: The Semantics of a logic defines the meaning of sentences.


• The semantics defines the truth of each sentence with respect to each possible world.
• The term model is also used in place of possible world.

SATISFACTION:
• If a sentence α is TRUE in model m, we say that m SATISFIES α (or sometimes we simply
say that m is a model of α).
• The notation M(α) to mean the set of all models of α
(i.e. M(α) is the set of all models that satisfy α) .
Logical Entailment
(other names: Logical Consequence or Logical Implication)

• Logical Entailment between sentences means that a sentence follows logically from another
sentence. In mathematical notation, 𝜶 ⊨ 𝜷

The formal definition of entailment: 𝜶 logically entails 𝜷 (written 𝜶 ⊨ 𝜷) if and only if 𝜷 is true in all
worlds where 𝜶 is true.

• 𝜶 ⊨ 𝜷 if and only if M(𝜶) ⊆ M(𝜷) .


• if 𝜶 ⊨ 𝜷 , then 𝜶 is a stronger assertion than 𝜷 : it rules out more possible worlds.
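• For example, (P ∧ Q) ⊨ P: every model that makes P ∧ Q true also makes P true, so M(P ∧ Q) ⊆ M(P). The converse does not hold, since the model {P=true, Q=false} satisfies P but not P ∧ Q.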
Logical Entailment: Models

• Logicians typically think in terms of models, which are formally structured worlds with respect to
which truth can be evaluated
• We say m is a model of a sentence α if α is true in m
• M(α) is the set of all models of α (all models satisfy α)
• Then KB ⊨ α if and only if M(KB) ⊆ M(α)

• E.g. KB = Giants won and Reds won, α = Giants won. Every model of KB is also a model of α, so KB ⊨ α.
Entailment in Wumpus World

• Situation: after detecting nothing in [1,1], the agent moves right and perceives a breeze in [2,1].
• Consider the possible models for the squares marked “?”, assuming only pits.
• 3 Boolean choices ⇒ 2³ = 8 possible models.


Entailment in Wumpus World: Wumpus Models
Wumpus Models: is [1,2] safe?

KB = wumpus-world rules + observations

α1 = “[1,2] is safe”. KB ⊨ α1, proved by model checking.
Wumpus Models: is [2,2] safe?

KB = wumpus-world rules + observations

α2 = “[2,2] is safe”. KB ⊭ α2, shown by model checking.
• The agent cannot conclude that there is no pit in [2,2]. (Nor can it conclude that there is a pit in [2,2].)
Model Checking

• In the wumpus world, entailment is applied to derive conclusions from the KB; i.e., a logical inference
is carried out by checking the models.
• The algorithm used in this process is called model checking.
• Model checking is an inference algorithm that enumerates all possible models (of KB) to check that
α is true in all models in which KB is true, that is, that M(KB) ⊆ M(α). Thus KB ⊨ α.
• The model-checking algorithm can be expensive (or impractical) because a KB can have too many
models (or infinitely many models in some logics).
Inference (Proofs)

• If an inference algorithm i can derive α from KB (generally by syntactic operations), we write


KB ⊢i α
which is pronounced “α is derived from KB by inference procedure i”.
(or α is proved/deduced from KB)

• In a derivation, we start from premises (sentences (in KB) assumed to be true) and axioms
(sentences that are true in every world).
• Then, we apply an inference rule to sentences that are already derived (proved) to derive a new
sentence at each step of the derivation.
Inference: Soundness and Completeness

Soundness:
An inference algorithm i is sound (or truth-preserving) if
whenever KB ⊢i α, it is also true that KB ⊨ α

• If an inference algorithm is sound, all derivable sentences from KB are also logical consequences
of KB.

Completeness:
An inference algorithm i is complete if
whenever KB ⊨ α, it is also true that KB ⊢i α

• If an inference algorithm is complete, all entailed sentences from KB are also derivable from KB.
Propositional Logic

• Propositional Logic is a simple but powerful logic.

• The syntax of propositional logic defines the allowable sentences.

• The semantics of propositional logic defines the rules for determining the truth of a sentence with
respect to a particular model.

• Propositional Logic contains proposition symbols and logical connectives (logical operators).

• The meaning of a propositional symbol can be true or false.

• The meaning of a logical connective is given by its truth table.


Propositional Logic: Syntax

NEGATION (¬)
CONJUNCTION (∧)
DISJUNCTION (∨)
IMPLICATION (⇒)
BICONDITIONAL (⇔)

• An atomic sentence consists of a single proposition symbol.


• Each proposition symbol stands for a proposition that can be true or false.
• There are two proposition symbols with fixed meanings (constants): True is the always-true
proposition and False is the always-false proposition.
• Complex sentences are constructed from other sentences, using parentheses and logical connectives.
Propositional Logic: Syntax

• A literal is either an atomic sentence (a positive literal) or a negated atomic sentence (a negative
literal).
• A sentence whose main connective is ∧, such as P ∧ Q, is called a conjunction; its parts are the
conjuncts.
• A sentence whose main connective is ∨, such as P ∨ Q, is called a disjunction; its parts are the
disjuncts.
• A sentence such as P ⇒ Q is called an implication (or conditional). Its premise or antecedent is P,
and its conclusion or consequent is Q.
Propositional Logic: Semantics

• The semantics defines the rules for determining the truth of a sentence with respect to a particular
model.
• In propositional logic, a model simply fixes the truth value—true or false—for every proposition
symbol.

• If the sentences in the knowledge base make use of the proposition symbols P, Q, and R, then one
possible model is m1 = {P = false, Q = false, R = true}.
• With three proposition symbols, there are 2³ = 8 possible models:

  P       Q       R
  true    true    true
  true    true    false
  true    false   true
  true    false   false
  false   true    true
  false   true    false
  false   false   true
  false   false   false
Propositional Logic: Semantics

• The semantics for propositional logic must specify how to compute the truth value of any
sentence, given a model m.

• Since all sentences are constructed from atomic sentences and the five connectives; we need to
specify
• how to compute the truth of atomic sentences and
• how to compute the truth of sentences formed with each of the five connectives.
Propositional Logic: Semantics
Rules for evaluating truth with respect to a model m

Atomic Sentences:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the model m.

Complex Sentences: five rules which hold for any sub-sentences P and Q in any model m
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
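These five rules translate directly into a short recursive evaluator. The Python sketch below is illustrative only; the tuple encoding of sentences and the dictionary model are assumptions made here, not notation from the slides.

# Recursive truth evaluation of a propositional sentence in a model m.
# A sentence is either True/False, a proposition symbol (a string), or a tuple
# ("not", s), ("and", s1, s2), ("or", s1, s2), ("=>", s1, s2), ("<=>", s1, s2).
# A model is a dict mapping each proposition symbol to True or False.
def pl_true(sentence, m):
    if sentence is True or sentence is False:    # the constants True and False
        return sentence
    if isinstance(sentence, str):                # a proposition symbol: look it up in m
        return m[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], m)
    if op == "and":
        return pl_true(args[0], m) and pl_true(args[1], m)
    if op == "or":
        return pl_true(args[0], m) or pl_true(args[1], m)
    if op == "=>":                               # true unless premise true and conclusion false
        return (not pl_true(args[0], m)) or pl_true(args[1], m)
    if op == "<=>":                              # true iff both sides have the same truth value
        return pl_true(args[0], m) == pl_true(args[1], m)
    raise ValueError(f"unknown connective: {op!r}")

print(pl_true(("=>", "P", "Q"), {"P": True, "Q": False}))   # False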
Propositional Logic: Semantics
Truth Tables

• The semantic rules can also be expressed with truth tables that specify the truth value of a complex
sentence for each possible assignment of truth values to its components.
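For reference, the standard truth table for the five connectives:

  P      Q      ¬P     P ∧ Q   P ∨ Q   P ⇒ Q   P ⇔ Q
  false  false  true   false   false   true    true
  false  true   true   false   true    true    false
  true   false  false  false   true    false   false
  true   true   false  true    true    true    true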
Propositional Logic
Wumpus world sentences

• Px,y is true if there is a pit in [x, y].


• Wx,y is true if there is a wumpus in [x, y], dead or alive.
• Bx,y is true if the agent perceives a breeze in [x, y].
• Sx,y is true if the agent perceives a stench in [x, y].

• There is no pit in [1,1]:
    ¬P1,1
• A square is breezy if and only if there is a pit in a neighboring square:
    B1,1 ⇔ (P1,2 ∨ P2,1)
    B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• The breeze percepts for the first two squares visited in the specific world the agent is in:
    ¬B1,1
    B2,1
• From these sentences, we can derive ¬P1,2 (there is no pit in [1,2]).


Propositional Logic
Inference by Enumeration of Models

• Our goal now is to decide whether KB ⊨ α for some sentence α.


• For example, is ¬P1,2 entailed by our KB?

• Inference by Enumeration of Models is a direct implementation of the definition of entailment:


• Enumerate the models, and check that α is true in every model in which KB is true.
• Models are assignments of true or false to every proposition symbol.

• Inference by Enumeration of Models algorithm is sound and complete.

• If KB and α contain n symbols in all, then there are 2ⁿ models.
• Thus, the time complexity of the algorithm is O(2ⁿ).
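A compact Python sketch of this enumeration check is shown below, reusing the pl_true evaluator sketched earlier; the function name and argument list are assumptions of these notes, not the textbook’s exact pseudocode.

from itertools import product

def tt_entails(kb, alpha, symbols):
    # KB entails alpha iff alpha is true in every model in which KB is true.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False          # found a model of KB in which alpha is false
    return True

# Example: {P, P => Q} entails Q
kb = ("and", "P", ("=>", "P", "Q"))
print(tt_entails(kb, "Q", ["P", "Q"]))   # True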
Propositional Logic
Inference by Enumeration of Models
Propositional Logic
Inference by Enumeration of Models

• Enumerate the rows (different assignments of truth values to the symbols); if KB is true in a row, check
that α is true too.
• Since there are 7 proposition symbols, there are 2⁷ = 128 rows.
• KB is true in only 3 of these rows, and in all 3 rows P1,2 is false, so there is no pit in [1,2].
• On the other hand, there might (or might not) be a pit in [2,2].
Logical Equivalence
Validity and Satisfiability
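• Two sentences α and β are logically equivalent (α ≡ β) if they are true in the same set of models; equivalently, α ≡ β iff α ⊨ β and β ⊨ α.
• A sentence is valid if it is true in all models (e.g., P ∨ ¬P); it is satisfiable if it is true in some model, and unsatisfiable if it is true in no model.
• These notions connect to entailment: α ⊨ β if and only if (α ⇒ β) is valid, and α ⊨ β if and only if (α ∧ ¬β) is unsatisfiable (the basis of proof by contradiction, as used by resolution).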
Inference and Proofs

• A proof is a sequence of sentences that leads to the desired goal.


• A proof starts with given premises and each new sentence in the sequence is obtained as a result of
the application of an inference rule to previous sentences in the sequence.

Some Inference Rules:


Modus Ponens: from α ⇒ β and α, infer β.
And-Elimination: from α ∧ β, infer α (or β).

• All logical equivalences can be used as inference rules.


Inference and Proofs

• Inference rules are sound, i.e., when their premises are true, their conclusions are also true.

Wumpus World Proof Example:

1. (WumpusAhead ∧ WumpusAlive) ⇒ Shoot Premise


2. (WumpusAhead ∧ WumpusAlive) Premise
3. Shoot Modus Ponens 1 and 2

Another Example:
1. (WumpusAhead ∧ WumpusAlive) Premise
2. WumpusAlive And-Elimination 1
Inference and Proofs
Derivation (proof) of ¬P1,2 from given premises

1. ¬P1,1 Premise
2. B1,1 ⇔ (P1,2 ∨ P2,1) Premise
3. B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) Premise
4. ¬B1,1 Premise
5. B2,1 Premise
6. (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) Biconditional elimination to 2
7. ((P1,2 ∨ P2,1) ⇒ B1,1) And-Elimination to 6
8. (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)) Logical equivalence for contrapositives to 7
9. ¬(P1,2 ∨ P2,1) Modus Ponens to 8 and 4
10. ¬P1,2 ∧ ¬P2,1 De Morgan’s rule to 9
11. ¬P1,2 And-Elimination to 10
Searching for Proofs

• Searching for proofs is an alternative to enumerating models (which is exponential); it can be more efficient because a proof can ignore irrelevant propositions.
• We can apply any of the search algorithms to find a sequence of steps that constitutes a proof. We
just need to define a proof problem as follows:

INITIAL STATE: The initial knowledge base.


ACTIONS: The set of actions consists of all the inference rules applied to all the sentences that match
the top half of the inference rule.
RESULT: The result of an action is to add the sentence in the bottom half of the inference rule.
GOAL: The goal is a state that contains the sentence we are trying to prove.
Proof by Resolution

• The inference rules that we look at are sound, but we have not discussed the question of
completeness for the inference algorithms that use them.
• Search algorithms such as iterative deepening search are complete in the sense that they will find
any reachable goal.
• But if the available inference rules are inadequate, then the goal is not reachable—no proof exists that uses
only those inference rules.
• For example, if we removed the biconditional elimination rule, the Wumpus World proof would not go
through.
• So, our proof system will not be complete.

• There is a single inference rule, RESOLUTION, that yields a complete inference algorithm when coupled
with any complete search algorithm.
Clause

• Resolution inference rule works on clauses.


• A clause is a disjunction of literals.
• A literal is either an atomic sentence (a positive literal) or a negated atomic sentence (a negative literal)
• Clause Example:
P ∨ R ∨ Q ∨ S
• A single literal can be viewed as a disjunction of one literal, also known as a unit clause.

• Resolution is sound and complete for propositional logic


Resolution

Resolution Inference Rule

l1 ∨ … ∨ lk        m1 ∨ … ∨ mn
------------------------------------------------
l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn

• where li and mj are complementary literals. Resolution takes two clauses and produces a new
clause containing all the literals of the two original clauses except the two complementary literals.
• The conclusion of the rule is called the resolvent.

P ∨ R ∨ Q ∨ S T ∨ Q ∨ M
------------------------------------------------
P ∨ R ∨ S ∨ T ∨ M
Conjunctive Normal Form

• The resolution rule applies only to clauses.


• Knowledge bases and queries should therefore consist of clauses.
• Every sentence of propositional logic is logically equivalent to a conjunction of clauses.
• A sentence expressed as a conjunction of clauses is said to be in conjunctive normal form (CNF).
Converting to CNF

1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α)


2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β
3. CNF requires ¬ to appear only in literals, so we “move ¬ inwards” by repeated application of the
following equivalences: ¬(¬α) ≡ α (double-negation elimination), ¬(α ∧ β) ≡ (¬α ∨ ¬β) and
¬(α ∨ β) ≡ (¬α ∧ ¬β) (De Morgan)

4. Now we have a sentence containing nested ∧ and ∨ operators applied to literals. We apply the
distributivity law, distributing ∨ over ∧ wherever possible.
Converting to CNF
• Example:
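As a worked illustration of the four steps, take the breeze rule B1,1 ⇔ (P1,2 ∨ P2,1):

1. Eliminate ⇔:   (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒:   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Distribute ∨ over ∧: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)

The result is a conjunction of three clauses, i.e., a sentence in CNF.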
A Simple Resolution Algorithm
for Propositional Logic

• PL-RESOLVE returns the set of all possible clauses obtained by resolving its two inputs.
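A minimal Python sketch of clause resolution is given below. Clauses are represented here as frozensets of literal strings with a leading "~" marking negation; this representation and the function names are assumptions of this sketch, not the book’s PL-RESOLUTION algorithm (which proves KB ⊨ α by showing that KB ∧ ¬α is unsatisfiable, i.e., by deriving the empty clause).

def complement(literal):
    # P <-> ~P
    return literal[1:] if literal.startswith("~") else "~" + literal

def pl_resolve(ci, cj):
    # Return the set of all clauses obtainable by resolving clauses ci and cj
    # on some pair of complementary literals.
    resolvents = set()
    for lit in ci:
        if complement(lit) in cj:
            resolvents.add(frozenset((ci - {lit}) | (cj - {complement(lit)})))
    return resolvents

# The example from the Resolution slide, resolving on Q and ~Q:
c1 = frozenset({"P", "R", "Q", "S"})
c2 = frozenset({"T", "~Q", "M"})
print(pl_resolve(c1, c2))   # {frozenset({'P', 'R', 'S', 'T', 'M'})}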
PL-RESOLUTION Example

• Partial application of PL-RESOLUTION to a simple inference in the Wumpus world. ¬P1,2 is shown
to follow from the first four clauses in the top row.
Horn Clauses and Definite Clauses

• A definite clause is a disjunction of literals of which exactly one is positive.
• A Horn clause is a disjunction of literals of which at most one is positive.
• So all definite clauses are Horn clauses.
• Clauses with no positive literals are called goal clauses (these are also Horn clauses).
• Knowledge bases containing only definite clauses are interesting for three reasons:
1. Every definite clause can be written as an implication whose premise is a conjunction of
positive literals and whose conclusion is a single positive literal.
• In Horn form, the premise is called the body and the conclusion is called the head.
• A sentence consisting of a single positive literal is called a fact.
2. Inference with Horn clauses can be done through the forward chaining and backward
chaining algorithms. This type of inference is the basis for logic programming.
3. Deciding entailment with Horn clauses can be done in time that is linear in the size of the
knowledge base
Horn Clauses and Definite Clauses

Definite Clause Examples:

  Clause form          Implication form
  ¬R ∨ ¬S ∨ P          R ∧ S ⇒ P
  P                    P (a fact)

Goal Clause Example:

  Clause form          Implication form
  ¬R ∨ ¬S              R ∧ S ⇒ False
Forward-Chaining Algorithm
for Propositional Logic
Forward-Chaining Algorithm Example

FC Algorithm: Fire any rule whose premises are satisfied in the KB, add its conclusion to the KB,
until query is found.

A set of Horn clauses. The corresponding AND–OR graph.
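A Python sketch of forward chaining over definite clauses follows. The (premises, conclusion) pair representation is an assumption made for this sketch; the example KB at the bottom is the classic textbook one and is assumed, not guaranteed, to match the figure above.

from collections import defaultdict

def pl_fc_entails(clauses, facts, query):
    # Forward chaining for definite clauses.
    # clauses: list of (premises, conclusion) pairs, e.g. (["L", "M"], "P") for L ∧ M ⇒ P
    # facts:   list of known positive literals (unit clauses)
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}  # unsatisfied premises per rule
    uses = defaultdict(list)              # symbol -> rules in which it appears as a premise
    for i, (prem, _) in enumerate(clauses):
        for p in prem:
            uses[p].append(i)
    inferred = set()
    agenda = list(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i in uses[p]:
            count[i] -= 1
            if count[i] == 0:             # all premises of rule i are satisfied: fire it
                agenda.append(clauses[i][1])
    return False

# Example KB: P ⇒ Q, L ∧ M ⇒ P, B ∧ L ⇒ M, A ∧ P ⇒ L, A ∧ B ⇒ L, plus facts A and B
kb = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"), (["A", "P"], "L"), (["A", "B"], "L")]
print(pl_fc_entails(kb, ["A", "B"], "Q"))   # True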


Proof of Completeness of FC
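• Sketch of the argument: forward chaining reaches a fixed point where no new atomic sentences can be inferred. Treat the final set of inferred symbols as a model m (inferred symbols true, everything else false). Every definite clause in the KB is true in m; otherwise its premises would all be true and its conclusion false, and the rule would have fired. So m is a model of the KB, and any atomic sentence q entailed by the KB must be true in m, i.e., q was inferred. Hence FC derives every entailed atomic sentence.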
Backward Chaining

• Backward chaining is a form of goal-directed reasoning.
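• To prove a goal q, backward chaining finds implications whose conclusion is q and recursively tries to prove all of their premises, bottoming out at known facts. A minimal recursive Python sketch is shown below, using the same assumed (premises, conclusion) representation as the forward-chaining sketch above; the visited set guards against looping on circular rules.

def pl_bc_entails(clauses, facts, query, visited=frozenset()):
    # Backward chaining for definite clauses: goal-directed proof of `query`.
    if query in facts:
        return True
    if query in visited:                  # avoid infinite regress on circular rules
        return False
    for premises, conclusion in clauses:
        if conclusion == query and all(
            pl_bc_entails(clauses, facts, p, visited | {query}) for p in premises
        ):
            return True
    return False

# Same KB as the forward-chaining example
kb = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"), (["A", "P"], "L"), (["A", "B"], "L")]
print(pl_bc_entails(kb, ["A", "B"], "Q"))   # True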


Backward Chaining Example
Resolution Proof Example

KB (as clauses):  ¬P ∨ Q,  ¬L ∨ ¬M ∨ P,  ¬B ∨ ¬L ∨ M,  ¬A ∨ ¬P ∨ L,  ¬A ∨ ¬B ∨ L,  A,  B

Proof of Q:

  Clause 1        Clause 2            Resolvent
  A               ¬A ∨ ¬B ∨ L         ¬B ∨ L
  B               ¬B ∨ L              L
  B               ¬B ∨ ¬L ∨ M         ¬L ∨ M
  L               ¬L ∨ M              M
  L               ¬L ∨ ¬M ∨ P         ¬M ∨ P
  M               ¬M ∨ P              P
  P               ¬P ∨ Q              Q
Forward vs. Backward Chaining
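• Forward chaining is data-driven: it starts from the known facts and may derive many conclusions that are irrelevant to the goal.
• Backward chaining is goal-driven: it works back from the query and touches only relevant rules, so its cost is often much less than linear in the size of the KB.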
Summary

• Logical agents apply inference to a knowledge base to derive new information and make decisions
• Basic concepts of logic:
• syntax: formal structure of sentences
• semantics: truth of sentences wrt models
• entailment: necessary truth of one sentence given another
• inference: deriving sentences from other sentences
• soundness: derivations produce only entailed sentences
• completeness: derivations can produce all entailed sentences
• Wumpus world requires the ability to represent partial and negated information, reason by cases, etc.
• Forward, backward chaining are linear-time, complete for Horn clauses
• Resolution is complete for propositional logic
• Propositional logic lacks expressive power
