Chapter 3
Knowledge Representation
Knowledge representation: Knowledge representation and reasoning (KR, KRR) is the part of
Artificial Intelligence concerned with how AI agents think and how thinking contributes
to the intelligent behavior of agents.
What to Represent?
o Object: All the facts about objects in our world domain. E.g., guitars contain strings,
trumpets are brass instruments.
o Events: Events are the actions which occur in our world.
o Performance: It describes behavior which involves knowledge about how to do things.
o Meta-knowledge: It is knowledge about what we know.
o Facts: Facts are the truths about the real world and what we represent.
o Knowledge-Base: The central component of a knowledge-based agent is the
knowledge base, represented as KB. The knowledge base is a group of sentences
(here, "sentence" is used as a technical term and is not identical to a sentence in the
English language).
Types of knowledge
1. Declarative Knowledge: knowledge of facts and concepts about a domain, expressed in declarative sentences.
2. Procedural Knowledge: knowledge of how to do things, such as rules, strategies, and procedures.
3. Meta-knowledge: knowledge about other knowledge.
4. Heuristic knowledge: rules of thumb gained from experience in a domain.
5. Structural knowledge: knowledge of relationships between concepts, such as kind-of, part-of, and grouping.
AI knowledge cycle:
An Artificial intelligence system has the following components for displaying intelligent
behavior:
o Perception
o Learning
o Knowledge Representation and Reasoning
o Planning
o Execution
This cycle shows how an AI system can interact with the real world and which
components help it to show intelligence. An AI system has a Perception component by which it
retrieves information from its environment; this can be visual, audio, or another form of sensory
input. The Learning component is responsible for learning from the data captured by the Perception
component. The main components of the complete cycle are Knowledge Representation and
Reasoning; these two components are what make a machine behave intelligently, like a human.
They are independent of each other but are also coupled together. Planning and Execution
depend on the analysis of Knowledge Representation and Reasoning.
Knowledge-based agents:
o Knowledge-based agents are agents that can maintain an internal state of knowledge,
reason over that knowledge, update their knowledge after observations, and take
actions. These agents can represent the world with some formal representation and
act intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge-base and
o Inference system.
Inference system
Inference means deriving new sentences from old ones. The inference system allows us to add a new
sentence to the knowledge base. A sentence is a proposition about the world. The inference system
applies logical rules to the KB to deduce new information.
The inference system generates new facts so that an agent can update the KB. An inference system
mainly works with two methods, which are given as:
o Forward chaining
o Backward chaining
Following are the three operations performed by a knowledge-based agent (KBA) in order to show
intelligent behavior:
1. TELL: This operation tells the knowledge base what it perceives from the
environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. PERFORM: This operation performs the selected action.
function KB-AGENT(percept) returns an action
    persistent: KB, a knowledge base; t, a counter, initially 0, indicating time
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    action ← ASK(KB, MAKE-ACTION-QUERY(t))
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1 ; return action
The knowledge-based agent takes a percept as input and returns an action as output. The agent
maintains the knowledge base, KB, which initially contains some background knowledge of the real
world. It also has a counter t to indicate the time for the whole process; this counter is
initialized with zero.
Each time the function is called, it performs three operations:
MAKE-PERCEPT-SENTENCE generates a sentence asserting that the agent perceived the given
percept at the given time.
MAKE-ACTION-QUERY generates a sentence to ask which action should be done at the
current time.
MAKE-ACTION-SENTENCE generates a sentence which asserts that the chosen action was
executed.
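The agent program above can also be sketched in Python for intuition. This is only an illustrative sketch: the KnowledgeBase class, its tell/ask methods, and the tuple-based sentence encoding are hypothetical stand-ins, not a particular library or a complete implementation.

# A minimal, hypothetical sketch of the KB-AGENT loop described above.
# The KnowledgeBase class and its tell/ask methods are illustrative only.
class KnowledgeBase:
    def __init__(self, background_knowledge=None):
        self.sentences = list(background_knowledge or [])

    def tell(self, sentence):
        # TELL: add a sentence to the knowledge base
        self.sentences.append(sentence)

    def ask(self, query):
        # ASK: return an action entailed by the KB (placeholder logic here)
        return "NoOp"

class KBAgent:
    def __init__(self, background_knowledge=None):
        self.kb = KnowledgeBase(background_knowledge)
        self.t = 0  # time counter, initially zero

    def __call__(self, percept):
        self.kb.tell(("percept", percept, self.t))      # TELL what was perceived at time t
        action = self.kb.ask(("action-query", self.t))  # ASK which action to perform
        self.kb.tell(("action", action, self.t))        # TELL that the action was executed
        self.t += 1
        return action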
A knowledge-based agent can be viewed at different levels which are given below:
1. Knowledge level
The knowledge level is the first level of a knowledge-based agent. At this level, we specify
what the agent knows and what the agent's goals are; with these specifications, we can
fix its behavior. For example, suppose an automated taxi agent needs to go from station A to
station B, and it knows the way from A to B; this comes at the knowledge level.
2. Logical level:
At this level, we understand how the agent's knowledge is stored: the knowledge is encoded
into sentences of some logic. At the logical level, an encoding of knowledge into logical
sentences occurs, and we can expect the automated taxi agent to reach destination B.
3. Implementation level:
This is the physical representation of the logic and knowledge. At the implementation level, the agent
performs actions as per the logical and knowledge levels. At this level, the automated taxi agent
actually implements its knowledge and logic so that it can reach the destination.
The Wumpus World: The Wumpus world is a simple example world used to illustrate the worth
of a knowledge-based agent and to demonstrate knowledge representation. It was inspired by the
video game Hunt the Wumpus by Gregory Yob (1973).
The Wumpus world is a cave with 4×4 rooms connected by passageways, so there are
16 rooms in total, all connected with each other. We have a knowledge-based agent who
will explore this world. The cave has a room with a beast called the Wumpus, who
eats anyone who enters that room. The Wumpus can be shot by the agent, but the agent has only a
single arrow. Some rooms in the Wumpus world contain bottomless pits; if the
agent falls into a pit, it will be stuck there forever. The exciting thing about this cave is that
in one room there is a possibility of finding a heap of gold. The agent's goal is to find the gold
and climb out of the cave without falling into a pit or being eaten by the Wumpus. The agent gets a reward
if it comes out with the gold, and gets a penalty if it is eaten by the Wumpus or falls into a pit.
A sample diagram of the Wumpus world shows some rooms with pits, one room with the Wumpus,
and the agent at the (1, 1) square of the world.
There are also some components which can help the agent navigate the cave. These
components are given as follows:
a. The rooms adjacent to the Wumpus room are smelly, so they will have a stench.
b. The rooms adjacent to a pit have a breeze, so if the agent reaches a room next to a pit, it will
perceive the breeze.
c. There will be glitter in a room if and only if the room has gold.
d. The Wumpus can be killed by the agent if the agent is facing it, and the Wumpus will
emit a horrible scream which can be heard anywhere in the cave.
Performance measure:
o +1000 reward points if the agent comes out of the cave with the gold.
o -1000 points penalty for being eaten by the Wumpus or falling into the pit.
o -1 for each action, and -10 for using an arrow.
o The game ends when the agent dies or comes out of the cave.
Environment:
o A 4×4 grid of rooms.
o The agent starts in room [1, 1].
Actuators:
o Left turn,
o Right turn
o Move forward
o Grab
o Release
o Shoot.
Sensors:
o The agent will perceive the stench if he is in the room adjacent to the Wumpus. (Not
diagonally).
o The agent will perceive breeze if he is in the room directly adjacent to the Pit.
o The agent will perceive the glitter in the room where the gold is present.
o The agent will perceive the bump if he walks into a wall.
o When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere
in the cave.
o These percepts can be represented as a five-element list, in which there is a different
indicator for each sensor.
o For example, if the agent perceives a stench and a breeze, but no glitter, no bump, and no scream,
the percept can be represented as:
[Stench, Breeze, None, None, None].
o Partially observable: The Wumpus world is partially observable because the agent can
only perceive its immediate surroundings, such as the adjacent rooms.
o Deterministic: It is deterministic, as the result and outcome of the world are already
known.
o Sequential: The order is important, so it is sequential.
o Static: It is static as Wumpus and Pits are not moving.
o Discrete: The environment is discrete.
o One agent: The environment is a single agent as we have one agent only and Wumpus
is not considered as an agent.
Now we will explore the Wumpus world and will determine how the agent will find its goal by
applying logical reasoning.
Initially, the agent is in the first room, the square [1,1], and we already know that this
room is safe for the agent; to represent that the room is safe we mark it with the symbol OK.
The symbol A is used to represent the agent, B for breeze, G for glitter or gold, V for a visited
room, P for a pit, and W for the Wumpus.
At room [1,1] the agent does not feel any breeze or any stench, which means the adjacent squares
are also OK.
Now the agent needs to move forward, so it will move to either [1,2] or [2,1]. Let's suppose the agent
moves to room [2,1]. In this room the agent perceives a breeze, which means a pit is around
this room. The pit can be in [3,1] or [2,2], so we add the symbol P? to mark those rooms as
possible pit rooms.
Now the agent will stop and think and will not make any harmful move. The agent will go back to
room [1,1]. Rooms [1,1] and [2,1] have now been visited by the agent, so we use the symbol V to
represent the visited squares.
At the third step, the agent moves to room [1,2], which is OK. In room [1,2] the agent
perceives a stench, which means there must be a Wumpus nearby. But the Wumpus cannot be in
room [1,1] by the rules of the game, and it cannot be in [2,2] either, because the agent did not
detect any stench when it was at [2,1]. Therefore the agent infers that the Wumpus is in room [1,3].
In the current state there is no breeze, which means there is no pit and no Wumpus in [2,2]; so that
room is safe, we mark it OK, and the agent moves on to [2,2].
At room [2,2] there is no stench and no breeze, so let's suppose the agent decides to move to
[2,3]. At room [2,3] the agent perceives glitter, so it should grab the gold and climb out of the cave.
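The safety reasoning used in this walkthrough can be sketched in a few lines of Python. This is only a partial illustration under simple assumptions: it encodes just the rule that a visited room with no breeze and no stench has safe neighbours, and the grid size and percept encoding are choices made for this sketch.

# Illustrative sketch: if a visited room has no breeze and no stench,
# its adjacent rooms cannot contain a pit or the Wumpus, so they are OK.
def neighbours(room):
    # rooms adjacent (not diagonally) to the given room in a 4x4 grid
    x, y = room
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in candidates if 1 <= a <= 4 and 1 <= b <= 4]

def infer_safe_rooms(percepts):
    # percepts maps a visited room to {'breeze': bool, 'stench': bool}
    safe = set(percepts)  # visited rooms are already known to be safe
    for room, p in percepts.items():
        if not p["breeze"] and not p["stench"]:
            safe.update(neighbours(room))
    return safe

# From the walkthrough: no breeze or stench at [1,1]; breeze but no stench at [2,1].
percepts = {(1, 1): {"breeze": False, "stench": False},
            (2, 1): {"breeze": True, "stench": False}}
print(sorted(infer_safe_rooms(percepts)))  # [1,2] is inferred safe along with [1,1] and [2,1]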
Logic: Logic is the systematic approach to structuring and evaluating arguments, drawing
conclusions from given premises.
Propositional Logic:
Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form.
Example:
a) It is Sunday.
b) 5 is a prime number.
The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
a. Atomic Propositions: atomic propositions are simple propositions consisting of a single proposition symbol.
Example: "2 + 2 is 4" is an atomic proposition, and it is true.
b. Compound Propositions: compound propositions are constructed by combining simpler or atomic propositions using logical connectives.
Example: "It is raining today, and the street is wet" is a compound proposition.
Logical Connectives:
Logical connectives are used to connect two simpler propositions or to represent a sentence
logically. We can create compound propositions with the help of logical connectives. There are
mainly five connectives, which are given as follows:
o Negation: ¬P, read as "not P".
o Conjunction: P ∧ Q, read as "P and Q".
o Disjunction: P ∨ Q, read as "P or Q".
o Implication: P → Q, read as "if P then Q".
o Biconditional: P ⇔ Q, read as "P if and only if Q".
Truth Table:
In propositional logic, we need to know the truth values of propositions in all possible
scenarios. We can combine all the possible combinations of truth values with logical connectives,
and the representation of these combinations in a tabular format is called a truth table. A truth
table can be constructed for each of the logical connectives.
We can also build a proposition composed of three propositions P, Q, and R. Its truth table is made
up of 8 rows (2³ = 8 combinations), since we have taken three proposition symbols.
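As an illustration (not part of the original notes), the eight rows of such a truth table can be enumerated in Python; the compound proposition (P ∧ Q) ∨ R is an arbitrary example chosen for this sketch.

# Enumerate the 2^3 = 8 rows of a truth table for three propositions P, Q, R
# and evaluate the sample compound proposition (P AND Q) OR R in each row.
from itertools import product

print(" P     Q     R     (P and Q) or R")
for P, Q, R in product([True, False], repeat=3):
    value = (P and Q) or R
    print(f"{P!s:5} {Q!s:5} {R!s:5} {value!s}")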
Precedence of connectives:
Just like arithmetic operators, there is a precedence order for the propositional connectives or logical
operators. This order should be followed while evaluating a propositional expression. The
precedence order for the operators, from highest to lowest, is: parentheses, negation (¬),
conjunction (∧), disjunction (∨), implication (→), biconditional (⇔).
Logical equivalence:
Logical equivalence is one of the features of propositional logic. Two propositions are said to
be logically equivalent if and only if their columns in the truth table are identical to each other.
For two propositions A and B, logical equivalence is written as A ⇔ B.
For example, in a truth table the columns for ¬A ∨ B and A → B are identical, hence ¬A ∨ B is
equivalent to A → B.
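This equivalence can be checked mechanically by comparing truth-table columns, as in the short Python sketch below; the same approach can verify the operator properties listed next, such as De Morgan's laws.

# Verify the logical equivalence (not A) or B == A -> B by comparing truth-table columns.
from itertools import product

def implies(a, b):
    # truth-functional definition of A -> B
    return (not a) or b

equivalent = all(((not A) or B) == implies(A, B)
                 for A, B in product([True, False], repeat=2))
print(equivalent)  # True: the two columns are identical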
Properties of Operators:
o Commutativity:
o P∧ Q= Q ∧ P, or
o P ∨ Q = Q ∨ P.
o Associativity:
o (P ∧ Q) ∧ R= P ∧ (Q ∧ R),
o (P ∨ Q) ∨ R= P ∨ (Q ∨ R)
o Identity element:
o P ∧ True = P,
o P ∨ True= True.
o Distributive:
o P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
o P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
o De Morgan's Law:
o ¬ (P ∧ Q) = (¬P) ∨ (¬Q)
o ¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
o Double-negation elimination:
o ¬ (¬P) = P.
o We cannot represent relations like ALL, some, or none with propositional logic.
Example:
a. All the girls are intelligent.
b. Some apples are sweet.
a different approach to using logic to solve problems is to use logical rules of inference to
generate logical implications
in some cases, this can be less work than model-checking (i.e. generating a truth table) or even
SAT solving
plus, logical rules of inference are at the level of logical reasoning that humans consciously
strive to do
theorem-proving is interested in entailment, e.g. given a logical sentence A and a logical
sentence B, we can ask if A entails B
the general idea is that sentence A is what the agent knows, and sentence B is something that
the agent can infer from what it knows
sentence A might be very big, i.e. the entire “brain” of an agent encoded in logic
an important basic result in logic is the deduction theorem, which says:
A entails B if, and only if, A => B (A implies B) is a tautology (i.e. valid for all assignments of
values to its variables)
so we can answer the question “Does A entail B?” by showing that the sentence A => B is a
tautology
recall that a sentence is unsatisfiable if no assignment of values to its variables makes it true
so if A => B is a tautology, then !(A=>B) is unsatisfiable; and since A => B == !(A & !B), we have
!(A=>B) == !(!(A & !B)) == A & !B
so we can re-write the deduction theorem like this:
A entails B if, and only if, A & !B is unsatisfiable
this means you can use a SAT solver to figure out entailment!
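as a rough illustration of this idea (a brute-force truth-table check rather than a real SAT solver), the snippet below tests whether A entails B by checking that A & !B is unsatisfiable; representing formulas as Python functions over assignments is just a convenience of this sketch

# Check entailment by testing whether A & !B is unsatisfiable.
# Formulas are represented as functions taking a dict of variable assignments.
from itertools import product

def entails(A, B, variables):
    # True if no assignment makes A true and B false, i.e. A & !B is unsatisfiable
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if A(assignment) and not B(assignment):
            return False  # found a model of A & !B, so A does not entail B
    return True

# example: (P & (P => Q)) entails Q
A = lambda v: v["P"] and ((not v["P"]) or v["Q"])
B = lambda v: v["Q"]
print(entails(A, B, ["P", "Q"]))  # True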
Rules of Inference
recall that there are various rules of inference that can be used to create proofs, i.e. chains of
correct inferences
e.g. modus ponens is this rule of inference:
A, A => B
--------- modus ponens
B
this rule says that if you are given a sentence A, and a sentence A => B, you may infer B
e.g. and-elimination is this pair of rules:
A&B
----- and-elimination
A
A&B
-----
B
these two rules encode the (obvious!) fact that if the sentence A & B is true, then A is true, and
also B is true
logical equivalences can also be stated as inference rules, e.g.:
A <==> B
---------------
(A=>B) & (B=>A)
there are many inference rules, and choosing a reasonable set of rules for a theorem-prover
turns out to be important
we can think of the application of one rule of inference as an action performed on the
state of the world, i.e. the rule of inference adds more facts to the knowledge base
if there are multiple inference rules to choose from, then we need knowledge (i.e.
heuristics) to help decide which rule to use
o or we could rely on backtracking, i.e. just pick a rule at random, apply it, and
keep going until it “seems” like a proof is not being reached
plus, how do we know that the rules of inference we are using are complete, i.e. if a
proof exists, how do we know our set of inference rules will find it?
o e.g. suppose the two and-elimination rules were our only rules of inference; is
this enough to prove any entailment in propositional logic?
no!
for example, (P | P) -> P is clearly true, but and-elimination doesn’t
apply to this sentence
Proof by Resolution
it turns out that only one particular inference rule is needed to prove any logical entailment (that
can be proved): the resolution rule
in a 2-variable form, the resolution inference rule is this (| here means “or”):
A | B, !B
---------  resolution
A

A | !B, B
---------  resolution
A
in English, the first rule says that if A or B is true, and B is not true, then A must be true; the
second rule says the same thing, but with B and !B swapped
note that A | B is logically equivalent to !B -> A (and A | !B to B -> A), and so the resolution rules
can be translated to this:
!B -> A, !B
----------- variation of modus ponens
A
B -> A, B
--------- modus ponens
A
in other words, resolution inference could be viewed as a variation of inference with modus
ponens
we can generalize resolution as follows:

A_1 | ... | A_i | ... | A_n,   !A_i
-----------------------------------  resolution
A_1 | ... | A_(i-1) | A_(i+1) | ... | A_n

we've written A_i and !A_i, but those could be swapped: the key is that they are
complementary literals
notice that both sentences above and below the inference line are CNF clauses
recall that a CNF clause consists of literals (a variable, or the negation of a variable), or-ed together
the same rule can be written in terms of clauses:

Clause_1,   L_i
---------------  resolution
Clause_2

here, L_i is a literal that has the opposite sign of a literal in Clause_1
Clause_2 is the same as Clause_1, but with the literal that is the opposite of L_i removed
e.g.:

A | !B | !C,   !A
-----------------
!B | !C

in the most general form, two clauses are resolved against each other:

Clause_1   Clause_2
-------------------  full resolution
Clause_3

here, we assume that some literal L appears in Clause_1, and the complement of L appears in
Clause_2
Clause_3 contains all the literals (or-ed together) from both Clause_1 and Clause_2, except for
L and its opposite, which are not in Clause_3
e.g.:

A | !B | C,   B | D
-------------------  full resolution
A | C | D
what is surprising is that full resolution is complete: it can be used to prove any entailment
(assuming the entailment can be proven)
to do this, resolution requires that all sentences be converted to CNF
this can always be done … see the textbook for a sketch of the basic idea
this is the same requirement as for most SAT solvers!
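as a side note, conversion to CNF can be done automatically; for example, the sympy Python library provides a to_cnf helper, as in the small sketch below (using sympy is an illustrative choice here, not something these notes prescribe)

# Convert a propositional formula to CNF using sympy (illustrative choice of library).
from sympy import symbols
from sympy.logic.boolalg import to_cnf, Implies, Equivalent

P, Q, R = symbols("P Q R")
formula = Implies(Equivalent(P, Q), R)  # (P <=> Q) => R
print(to_cnf(formula))  # a conjunction of clauses equivalent to the formula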
resolution theorem proving proves that A entails B by proving that A & !B is unsatisfiable
it follows these steps: convert A & !B to CNF, then repeatedly resolve pairs of clauses until either a
contradiction (the empty clause) is derived, or no new clauses can be produced
for example, suppose the KB (our sentence A) is (P -> Q) & (Q -> R) & P, and we want to show that
it entails P -> R (our sentence B); converting KB & !(P -> R) to CNF gives these clauses:
!P | Q
!Q | R
P
!R
next we pick pairs of clauses and resolve them (i.e. apply the resolution inference rule)
if we can; we add any clauses produced by this to the collection of clauses:
!P | Q
!Q | R
P
!R
!P | R // from resolving (!P | Q) with (!Q | R)
Q // from resolving (!P | Q) with (P)
!Q // from resolving (!Q | R) with (!R)
we have Q and !Q, meaning we've reached a contradiction
this means KB & !(P->R) is unsatisfiable
which means KB entails P->R
this is a proof of entailment, and the various resolutions are the steps of the proof
the exact order in which clauses are resolved could result in shorter or longer proofs; in
practice you usually want short proofs, and so heuristics would be needed to help make these
decisions
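a compact Python sketch of this resolution procedure, run on the example clauses above, is given below; the clause representation (frozensets of signed literals) and the naive saturation loop are implementation choices of this sketch, not the only way to do it

# Resolution refutation on the example: KB = {P->Q, Q->R, P}, goal = P->R.
# A clause is a frozenset of literals; a literal is ("P", True) for P or ("P", False) for !P.
def resolve(c1, c2):
    # return all resolvents of two clauses (clauses obtained by cancelling a complementary pair)
    resolvents = []
    for (name, sign) in c1:
        if (name, not sign) in c2:
            resolvents.append(frozenset((c1 - {(name, sign)}) | (c2 - {(name, not sign)})))
    return resolvents

def resolution_refutes(clauses):
    # True if the empty clause can be derived, i.e. the clauses are unsatisfiable
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 != c2:
                    for r in resolve(c1, c2):
                        if not r:          # empty clause: contradiction reached
                            return True
                        new.add(r)
        if new.issubset(clauses):          # nothing new can be derived
            return False
        clauses |= new

# KB & !(P->R) in CNF: (!P | Q), (!Q | R), P, !R
clauses = [frozenset({("P", False), ("Q", True)}),
           frozenset({("Q", False), ("R", True)}),
           frozenset({("P", True)}),
           frozenset({("R", False)})]
print(resolution_refutes(clauses))  # True, so KB entails P -> R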
Propositional Logic based Agent:
Over the last few decades, the field of artificial intelligence (AI) has experienced
significant advancement. As a result of advances in technology and computer science, scientists and
researchers are developing a variety of AI models to mimic human intelligence. The agent based on
propositional logic is one of the foundational AI techniques. This section examines the definition,
operation, and uses of a propositional logic-based agent.
A subset of mathematical logic known as propositional logic deals with propositions, which
are statements that can be either true or false. It is also called sentential logic or statement logic.
In propositional logic, symbols such as P, Q, and R are used to express propositions. Compound
propositions, which are composed of one or more simpler propositions, are created using these
symbols. Moreover, to link propositions, propositional logic makes use of logical connectives like
"and," "or," "not," "implies," and "if and only if."
An AI agent that utilises propositional logic to express its knowledge and make decisions is
known as a propositional logic-based agent. A straightforward form of agent, it decides what
to do depending on what it knows about the outside world. A knowledge base, which is made
up of a collection of logical phrases or sentences, serves as a representation of the propositional
logic-based agent's knowledge.
The agent's knowledge base is initially empty; however, as it observes the outside world, the agent
fills it with new data. To decide what actions to take in response to the environment, the agent uses
its knowledge base. The agent makes decisions based on the logical inferences it draws from its
knowledge base.
Deductive inference is the process of inferring new information using logical principles from
already known information. The process of generalizing from specific data to arrive at a
broader conclusion is known as inductive inference. Based on the objectives it seeks to attain,
the agent decides what course of action to take.
Perception, reasoning, and action are the three stages of the agent's decision-making process.
Observing the surroundings and updating the knowledge base are the steps of the perception
stage. The reasoning stage involves applying logical inference to the knowledge base in order to
generate new information. The action stage entails choosing an action based on the information
that was gathered and the agent's objectives.
In the field of AI, propositional logic-based agents have several uses. Expert system
applications are one of the most popular uses. Expert systems are artificial intelligence
programs created to address difficulties in a particular field. They represent their subject
knowledge in a knowledge base, and they draw new information from the knowledge base
using a reasoning engine.
Propositional logic-based agents are also used in the area of natural language processing (NLP).
NLP is the area of AI that deals with how computers and human languages interact.
Propositional logic-based agents can be used to represent the meaning of natural language phrases
and to derive new information from them.
Knowledge Representation
The fact that propositional logic offers a straightforward and understandable method of
conveying knowledge is one of its benefits. Propositional logic uses simple to comprehend
logical symbols and logical connectives to depict relationships between propositions.
Logical Inference
The technique of inferring new knowledge from knowledge already known is known as logical
inference. Propositional logic-based agents should have logical inference because it enables
the agent to reason regarding the external world and gather new knowledge that can be applied
to decision-making. Deductive inference as well as inductive inference are the two different
categories of logical inference.
Deductive inference is the act of obtaining new knowledge from previously obtained data by
applying logical principles. It is predicated on the idea that if an argument's premises are true,
then the argument's conclusion must also be true. Propositional logic-based agents draw new
knowledge from their body of knowledge through deductive inference.
Decision Making
A crucial function of propositional logic-based agents is decision-making. The agent bases its
decisions on its knowledge of the outside world and its desired outcomes. Three steps make up
the decision-making process: perception, reasoning, and action.
Observing the environment and updating the agent's knowledge base is the process of
perception. Using logical inference to extract new information from the knowledge base is the
process of reasoning. Action is the process of choosing a course of action based on the
knowledge that has been obtained and the agent's goals.
Limitations
Although agents based on propositional logic offer numerous benefits, they also have certain
drawbacks. One of the drawbacks is that they lack expressiveness and are unable to depict
intricate interactions between propositions. They are unable to depict, for instance, causal or
temporal links between assertions.
Another drawback is that propositional logic-based agents are unable to deal with uncertainty
or inadequate data. As a result, they are unable to handle circumstances in which there is a lack
of information or uncertainty regarding the environment.
Fuzzy logic, Bayesian networks, and neural networks, among other forms of AI models, have
been developed to get around these restrictions. These models offer a more powerful and
expressive means of describing knowledge and making judgements.
In the topic of propositional logic, we have seen how to represent statements using
propositional logic. But unfortunately, in propositional logic, we can only represent facts
which are either true or false. PL is not sufficient to represent complex sentences or natural
language statements; it has very limited expressive power. Consider sentences such as
"All the girls are intelligent" or "Some apples are sweet", which we cannot represent using PL.
To represent such statements, PL is not sufficient, so we require a more powerful logic,
such as first-order logic.
First-Order logic:
The syntax of FOL determines which collection of symbols is a logical expression in first-order
logic. The basic syntactic elements of first-order logic are symbols. We write statements in
short-hand notation in FOL.
Variables: x, y, z, a, b, ...
Connectives: ∧, ∨, ¬, ⇒, ⇔
Atomic sentences:
o Atomic sentences are the most basic sentences of first-order logic. These sentences are
formed from a predicate symbol followed by a parenthesis with a sequence of terms.
o We can represent atomic sentences as Predicate (term1, term2, ......, term n).
Complex Sentences:
o Complex sentences are made by combining atomic sentences using connectives.
A first-order logic statement can be divided into two parts: a subject and a predicate. Consider
the statement "x is an integer": it consists of two parts; the first part, x, is the subject of the
statement, and the second part, "is an integer", is known as the predicate.
Universal Quantifier:
Universal quantifier is a symbol of logical representation, which specifies that the statement
within its range is true for everything or every instance of a particular thing.
o For all x
o For each x
o For every x.
Example: "All men drink coffee."
Let the variable x refer to a man, so all x can be represented in the UOD (universe of discourse) as:
∀x man(x) → drink(x, coffee).
It will be read as: For all x, if x is a man, then x drinks coffee.
Existential Quantifier:
Existential quantifiers are the type of quantifiers, which express that the statement within its
scope is true for at least one instance of something.
It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a
predicate variable, it is called an existential quantifier.
If x is a variable, then the existential quantifier will be ∃x or ∃(x), and it will be read as:
o There exists an x
o For some x
o For at least one x
Example: "Some boys are intelligent."
This can be represented as: ∃x boys(x) ∧ intelligent(x).
It will be read as: There is at least one x such that x is a boy and x is intelligent.
Points to remember:
o The main connective with the universal quantifier ∀ is implication (→).
o The main connective with the existential quantifier ∃ is conjunction (∧).
Properties of Quantifiers:
o In a universal quantifier, ∀x∀y is similar to ∀y∀x.
o In an existential quantifier, ∃x∃y is similar to ∃y∃x.
o ∃x∀y is not similar to ∀y∃x.
The quantifiers interact with variables which appear in a suitable way. There are two types of
variables in First-order logic which are given below:
Free Variable: A variable is said to be a free variable in a formula if it occurs outside the scope
of the quantifier.
Bound Variable: A variable is said to be a bound variable in a formula if it occurs within the
scope of the quantifier.
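Over a finite universe of discourse, the two quantifiers behave like Python's all() and any(); the sketch below, with a made-up domain of people, is only meant to build intuition for the quantifier readings above.

# Evaluating the universal and existential quantifiers over a small, made-up finite domain.
people = [
    {"name": "Ravi",  "is_boy": True,  "intelligent": True},
    {"name": "Kiran", "is_boy": True,  "intelligent": False},
    {"name": "Asha",  "is_boy": False, "intelligent": True},
]

# forall x: boy(x) -> intelligent(x)   ("all boys are intelligent")
all_boys_intelligent = all((not p["is_boy"]) or p["intelligent"] for p in people)

# exists x: boy(x) and intelligent(x)  ("some boys are intelligent")
some_boy_intelligent = any(p["is_boy"] and p["intelligent"] for p in people)

print(all_boys_intelligent)  # False: Kiran is a boy who is not intelligent in this toy data
print(some_boy_intelligent)  # True: Ravi is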
Inference Engine:
Inference Engine is a component of the expert system that applies logical rules to the
knowledge base to deduce new information. It interprets and evaluates the facts in the
knowledge base in order to provide an answer.
A knowledge base is a structured collection of facts about the system's domain.
Forward Chaining:
In forward chaining, the inference engine goes through all the facts, conditions, and derivations
before deducing the outcome. That is, when a decision is taken based on the available data, the
process is called forward chaining. It works from the initial state and reaches the goal
(final decision).
Example:
A
A -> B
B
He is running.
If he is running, he sweats.
He is sweating.
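A minimal forward-chaining sketch in Python is given below; the (premises, conclusion) rule format and the fact names are assumptions of this toy example.

# Minimal forward chaining: repeatedly fire rules whose premises are all known
# facts, adding their conclusions, until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule: add its conclusion as a new fact
                changed = True
    return facts

# "He is running" plus the rule "if he is running, he sweats"
rules = [(["running"], "sweating")]
print(forward_chain(["running"], rules))  # {'running', 'sweating'}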
Backward Chaining:
In backward chaining, the inference system knows the final decision or goal. The system starts from
the goal and works backwards to determine what facts must be asserted so that the goal can be
achieved, i.e. it works from the goal (final decision) and reaches the initial state.
Example:
B
A -> B
A
—————————–
He is sweating.
If he is running, he sweats.
He is running.
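A matching backward-chaining sketch is given below, using the same assumed rule format; it does not handle cyclic rules, which a real inference engine would have to.

# Minimal backward chaining: to prove a goal, either find it among the known
# facts or find a rule concluding it whose premises can all be proved in turn.
def backward_chain(goal, facts, rules):
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p, facts, rules) for p in premises):
            return True
    return False

# Goal: "he is sweating"; known fact: "he is running"; rule: running -> sweating
rules = [(["running"], "sweating")]
print(backward_chain("sweating", {"running"}, rules))  # True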
Difference between Forwarding Chaining and Backward Chaining:
o Forward chaining: when a decision is taken based on the available data, the process is called
forward chaining. Backward chaining: starts from the goal and works backward to determine what
facts must be asserted so that the goal can be achieved.
o Forward chaining is slow, as it has to use all the rules. Backward chaining is fast, as it has to use
only a few rules.
o Forward chaining operates in the forward direction, i.e. it works from the initial state to the final
decision. Backward chaining operates in the backward direction, i.e. it works from the goal to reach
the initial state.
o Forward chaining is used for planning, monitoring, control, and interpretation applications.
Backward chaining is used in automated inference engines, theorem proving, proof assistants, and
other artificial intelligence applications.