Unit 3 Representation of Knowledge

The document discusses the representation of knowledge in artificial intelligence, covering topics such as first-order predicate logic, Prolog programming, unification, and various reasoning techniques like forward and backward chaining. It also explores knowledge representation, ontological engineering, and the relationship between categories, objects, and events in AI systems. Key concepts include the importance of knowledge-based agents, types of knowledge, and the role of mental events and objects in cognitive processes.

Uploaded by

Raj Raj


ARTIFICIAL INTELLIGENCE

UNIT III REPRESENTATION OF KNOWLEDGE


UNIT III REPRESENTATION OF KNOWLEDGE
• First Order Predicate Logic
• Prolog Programming
• Unification
• Forward Chaining
• Backward Chaining
• Resolution
• Knowledge Representation
UNIT III REPRESENTATION OF KNOWLEDGE
• Ontological Engineering
• Categories and Objects
• Events
• Mental Events and Mental Objects
• Reasoning Systems for Categories
• Reasoning with Default Information.
First-Order Predicate Logic (FOL)
Prolog Programming
⮚ Prolog is a logic programming language. It plays an important role in artificial intelligence.
⮚ Unlike many other programming languages, Prolog is intended primarily as a declarative programming language.
⮚ In Prolog, logic is expressed as relations (called facts and rules).

Three important things in Prolog programming. When we program in Prolog we need to provide the following:
⮚ – Declaring some facts about objects and their relationships
⮚ – Defining some rules about objects and their relationships
⮚ – Asking questions about objects and their relationships
Prolog Programming
•Problem solving in PROLOG

⮚ Insert facts and rules into the database


⮚ Ask questions (queries) based on the contents of the database

• Facts
∙ Used to represent unchanging information about objects and their relationships.
∙ Only facts in the PROLOG database can be used for problem solving.
∙ Insert facts into the database by typing the facts into a file and loading (consulting) the file into a running PROLOG system
Prolog Programming
• Queries
∙ Retrieve information from the database by entering QUERIES
∙ A query is a pattern that PROLOG is asked to match against the database

A query will cause PROLOG to

● look at the database


● try to find a match for the query pattern
● execute the body of the matching rule
● return an answer
Syntax and Basic Fields
In Prolog, we declare some facts. These facts constitute the knowledge base of the system. We can query against the knowledge base. We get an affirmative answer if our query is already in the knowledge base or is implied by it; otherwise we get a negative answer. So the knowledge base can be considered similar to a database against which we can query.
⮚ Prolog facts are expressed in a definite pattern.
⮚ Facts contain entities and their relation.
⮚ Entities are written within parentheses, separated by commas (,).
⮚ Their relation is expressed at the start, outside the parentheses.
⮚ Every fact/rule ends with a dot (.). So a typical Prolog fact goes as follows: likes(john, icecream).
Example
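Since the original example slide is not reproduced here, the fact-and-query workflow can be sketched in Python (a toy simulation for illustration, not real Prolog; the facts likes(john, icecream) and parent(tom, bob) are hypothetical):

```python
# Not real Prolog: a toy database of ground facts, queried by matching.
facts = {
    ("likes", "john", "icecream"),   # likes(john, icecream).
    ("parent", "tom", "bob"),        # parent(tom, bob).
}

def query(relation, *args):
    """Answer a ground query: affirmative if the fact is in the database."""
    return (relation, *args) in facts

print(query("likes", "john", "icecream"))  # True
print(query("likes", "john", "cake"))      # False
```

In real Prolog the same interaction would be the query `?- likes(john, icecream).` answered with `true`.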
Unification
Unification in First Order Logic

Following are some basic conditions for unification:

• The predicate symbol must be the same; atoms or expressions with different predicate symbols can never be unified.
• The number of arguments in both expressions must be identical.
• Unification will fail if two similar variables are present in the same expression.
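The conditions above can be sketched as a small unifier in Python (a minimal illustration, assuming Prolog-style conventions: capitalized strings are variables, compound terms are tuples; not a full occurs-check implementation):

```python
def is_var(term):
    """Prolog convention: a capitalized string is a variable."""
    return isinstance(term, str) and term[:1].isupper()

def walk(term, subst):
    """Follow variable bindings to their current value."""
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(x, y, subst=None):
    """Return a substitution unifying x and y, or None on failure.
    Compound terms are tuples: ("functor", arg1, arg2, ...)."""
    subst = {} if subst is None else subst
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple):
        if x[0] != y[0] or len(x) != len(y):   # same predicate, same arity
            return None
        for a, b in zip(x[1:], y[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None   # two distinct constants never unify

print(unify(("likes", "X", "icecream"), ("likes", "john", "Y")))
# {'X': 'john', 'Y': 'icecream'}
```

Mismatched predicate symbols or argument counts fail, exactly as the conditions above require.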
Unification
Forward Chaining
• Forward chaining is a data-driven reasoning approach. It starts from
known facts and applies rules to infer new facts until the goal is
reached.
• Diagram: A forward chaining example could be visualized as a
flowchart that starts with initial facts and proceeds through rules to
derive new facts:
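The data-driven loop described above can be sketched as follows (the facts it_rains, ground_wet, and roads_slippery are made up for illustration):

```python
def forward_chain(facts, rules):
    """Data-driven loop: fire every applicable rule until no new fact
    can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rules: rain makes the ground wet; wet ground is slippery.
rules = [
    ({"it_rains"}, "ground_wet"),
    ({"ground_wet"}, "roads_slippery"),
]
print(forward_chain({"it_rains"}, rules))  # all three facts are derived
```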
Backward Chaining
• Backward chaining is goal-driven reasoning. It starts with a goal and
works backward to find facts that support it.
• Diagram: In backward chaining, the process starts with the goal
happy(john) and looks for rules that could lead to that conclusion:
• Start with happy(john).
• Look for a rule that can satisfy this goal, e.g., happy(X) :- likes(X,
icecream).
• Look for the fact likes(john, icecream).
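The three steps above can be sketched as a toy prover in Python (a minimal goal-driven search, not a full Prolog engine; it handles only ground facts and single-level rules like the happy/likes example):

```python
facts = {("likes", "john", "icecream")}
# happy(X) :- likes(X, icecream).   stored as (head, body)
rules = [(("happy", "X"), [("likes", "X", "icecream")])]

def substitute(term, env):
    """Replace variables in a term by their bindings."""
    return tuple(env.get(t, t) for t in term)

def match(pattern, ground, env):
    """Match a pattern (may contain capitalized variables) against a
    ground term, extending env; return the new env or None."""
    if len(pattern) != len(ground):
        return None
    env = dict(env)
    for p, g in zip(pattern, ground):
        if p[:1].isupper():              # variable
            if env.setdefault(p, g) != g:
                return None
        elif p != g:
            return None
    return env

def prove(goal, env=None):
    """Backward chaining: try the facts first, then rule heads, then
    recursively prove each subgoal in the rule body."""
    env = env or {}
    goal = substitute(goal, env)
    for fact in facts:
        e = match(goal, fact, env)
        if e is not None:
            return e
    for head, body in rules:
        e = match(head, goal, {})
        if e is not None and all(prove(sub, e) is not None for sub in body):
            return e
    return None

print(prove(("happy", "john")) is not None)   # True
print(prove(("happy", "mary")) is not None)   # False
```

The goal happy(john) matches the rule head, binding X = john; the subgoal likes(john, icecream) is then found among the facts.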
Resolution
• Resolution is a theorem-proving technique that proceeds by building refutation proofs, i.e., proofs by contradiction.
• It was invented by the mathematician John Alan Robinson in 1965.
• Resolution is used when several statements are given and we need to prove a conclusion from those statements.
• Resolution is a single inference rule which can efficiently operate on
the conjunctive normal form or clausal form.
Steps for Resolution
• Conversion of facts into first-order logic (FOL).
• Convert FOL statements into conjunctive normal form (CNF).
• Negate the statement to be proved (proof by contradiction).
• Draw resolution graph (unification).
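A propositional version of these steps can be sketched in Python (a minimal refutation loop; the rains/wet clauses are illustrative, the KB is assumed to be already in clausal form, and the query is assumed to be a positive atom):

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses. A clause is a frozenset of
    literal strings; negation is a leading '~'."""
    resolvents = []
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {complement})))
    return resolvents

def entails(kb, query):
    """Proof by refutation: add the negated query and resolve until the
    empty clause appears (contradiction) or no new clause can be made."""
    clauses = set(kb) | {frozenset({"~" + query})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is not b:
                    for r in resolve(a, b):
                        if not r:            # empty clause found
                            return True
                        new.add(r)
        if new <= clauses:                   # fixed point: no proof
            return False
        clauses |= new

# KB in clausal form: rains -> wet becomes (~rains | wet), plus fact rains.
kb = [frozenset({"~rains", "wet"}), frozenset({"rains"})]
print(entails(kb, "wet"))    # True
print(entails(kb, "snow"))   # False
```

Full first-order resolution additionally requires unification when matching complementary literals; this sketch shows only the propositional refutation loop.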
Facts to FOL
Example 2
• Step 2: Convert FOL into CNF
• 2a. Eliminate all implications and rewrite
• 2b. Move negation (¬) inwards and rewrite
• 2c. Rename (standardize) variables (not applicable here)
• 2d. Eliminate existential quantifiers by Skolemization (not applicable here)
• 2e. Drop universal quantifiers (not applicable here)
• Step 3: Negate the statement to be proved
• Step 4: Draw the resolution graph
Knowledge Representation
• Knowledge representation and reasoning (KR, KRR) is the part
of Artificial intelligence which is concerned with AI agents'
thinking and how thinking contributes to intelligent behavior
of agents.
• Knowledge: Knowledge is awareness or familiarity gained by
experiences of facts, data, and situations.
• An intelligent agent needs knowledge about the real world
for taking decisions and reasoning to act efficiently.
Knowledge Representation
● Knowledge-based agents are those agents who have the capability of
maintaining an internal state of knowledge, reason over that
knowledge, update their knowledge after observations and take
actions. These agents can represent the world with some formal
representation and act intelligently.
● Knowledge-based agents are composed of two main parts:
○ Knowledge-base and
○ Inference system.
Knowledge Representation
● Object: All the facts about objects in our world domain. E.g., Guitars contain strings,
trumpets are brass instruments.
● Events: Events are the actions which occur in our world.
● Performance: It describes behavior which involves knowledge about how to do
things.
● Meta-knowledge: It is knowledge about what we know.
● Facts: Facts are the truths about the real world and what we represent.
● Knowledge-Base: The central component of a knowledge-based agent is the knowledge base, represented as KB. The knowledge base is a group of sentences (here, "sentence" is used as a technical term and is not identical to a sentence in English).
Types of knowledge:
1. Declarative Knowledge:
2. Procedural Knowledge
3. Meta-knowledge:
4. Heuristic knowledge:
5. Structural knowledge:
The relation between knowledge and intelligence:
AI knowledge cycle:

• An Artificial intelligence system has the following components for displaying intelligent behavior (diagram):
Approaches to knowledge representation:
There are mainly four approaches to knowledge representation, which
are given below:
1. Simple relational knowledge:
2. Inheritable knowledge:
3. Inferential knowledge:
4. Procedural knowledge:
Ontological Engineering
● Idea of a general ontology - organizes everything in the world into a
hierarchy of categories
● In “toy” domains, the choice of representation is not that important;
many choices will work
● Complex domains such as shopping on the Internet or driving a car
in traffic require more general and flexible representations
● This section deals with creating these representations,
concentrating on general concepts—such as Events, Time, Physical
Objects, and Beliefs— that occur in many different domains
● Representing these abstract concepts is sometimes called
ontological engineering.
Ontological Engineering
● The prospect of representing everything in the world is daunting, and it is not necessary.
● Placeholders can be defined where new knowledge for any domain can fit in
● For example, we will define what it means to be a physical object, and the details
of different types of objects—robots, televisions, books, or whatever—can be filled
in later
● This is analogous to the way that designers of an object-oriented programming
framework (such as the Java Swing graphical framework) define general concepts
like Window, expecting users to use these to define more specific concepts like
Spreadsheet Window
● The general framework of concepts is called an upper ontology because of the
convention of drawing graphs with the general concepts at the top and the more
specific concepts below them
● Ontological representation refers to a graphical representation
Ontological Engineering
Semantic network

● A semantic network is a special case of an ontology graph
● In a semantic network, knowledge is represented by a directed graph
● In each node, a concept is defined by: a concept (object) name, attributes, and attribute values
● An edge defines the relation between two concepts
example of a semantic network
Those ontologies that do exist have been
created along four routes:
1. By a team of trained ontologist/logicians, who architect the ontology and write axioms.
The CYC system was mostly built this way (Lenat and Guha, 1990).
2. By importing categories, attributes, and values from an existing database or databases.
DBPEDIA was built by importing structured facts from Wikipedia (Bizer et al., 2007).
3. By parsing text documents and extracting information from them. TEXTRUNNER was built by reading a large corpus of Web pages (Banko and Etzioni, 2008).
4. By enticing unskilled amateurs to enter commonsense knowledge. The OPENMIND
system was built by volunteers who proposed facts in English (Singh et al., 2002; Chklovski
and Gil, 2005).
Categories (or Classes)
A category (or class) is a group or type of things that share common
properties or characteristics. In knowledge representation, categories
help to classify or group related objects based on shared features or
behaviors. Categories provide a way of organizing knowledge in a
hierarchical or structured manner.

For example:
• A "Vehicle" category may contain objects like cars, trucks, bicycles, etc.
• An "Animal" category could include categories like dogs, cats, and birds.
• A "Shape" category could contain circles, squares, and triangles.
Categories (or Classes)
Categories are often represented using a taxonomy or hierarchical
structure, where each category can have subcategories (called
subclasses), and subclasses can inherit properties from their parent
categories.

Key points:
• Categories are abstract and general.
• Categories can be further divided into subclasses (inheritance).
• They represent concepts, types, or classes of objects in the world.
Objects (or Instances)
An object (or instance) is a specific example or member of a category.
Objects have specific attributes or properties that define their unique
characteristics. While categories represent general concepts, objects are
concrete instances that embody those concepts.

For example:
• In the "Vehicle" category, an object might be a "Toyota Camry" or a
"Honda Civic"—specific cars.
• In the "Animal" category, an object might be a "German Shepherd" or a
"Persian Cat"—specific animals.
• In the "Shape" category, an object could be a "Red Circle" or a "Blue
Square".
Objects (or Instances)
Key points:
• Objects are specific instances of categories.
• They have concrete attributes or properties (e.g., a red circle may
have a radius and a position).
• Objects often have relations to other objects or concepts in the
system.
Relationship Between Categories and Objects
• Categories and objects are closely linked in knowledge
representation. Categories define the types of objects that exist,
while objects instantiate these categories in the real world.
• Inheritance: Objects inherit properties from the categories they
belong to. For example, a Honda Civic (object) inherits properties
like "has wheels" and "can drive" from the Vehicle category.
• Taxonomy or Ontology: A well-organized system of categories helps
represent knowledge in a way that allows systems to reason about
relationships between different categories and objects. It also allows
the system to infer new facts from known relationships.
Example: Vehicles
• Category: Vehicle
• Subcategory: Car
• Object: Toyota Camry
• Object: Honda Accord
• Subcategory: Bicycle
• Object: Mountain Bike
• Object: Road Bike
• In this example, the Vehicle category contains subcategories like Car
and Bicycle. Each object, such as Toyota Camry and Mountain Bike,
is an instance of a specific subcategory.
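The taxonomy above maps naturally onto class inheritance; a minimal Python sketch (class and attribute names are illustrative, not part of any standard library):

```python
class Vehicle:                 # category
    has_wheels = True          # property defined at the category level

class Car(Vehicle):            # subcategory; inherits has_wheels
    doors = 4

class Bicycle(Vehicle):        # subcategory
    doors = 0

camry = Car()                  # objects: instances of subcategories
mountain_bike = Bicycle()

print(camry.has_wheels)                    # True, inherited from Vehicle
print(isinstance(mountain_bike, Vehicle))  # True
```

Each object answers questions both about its own attributes (doors) and about attributes inherited from its parent category (has_wheels).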
Events
• An event refers to something that occurs in the world, often at a
specific time or place.
• Events typically involve changes or actions and can be physical or
abstract.
• Events are key components of knowledge representation because
they describe actions, occurrences, or phenomena that happen in
the real world.
• AI systems use events to understand the flow of time, cause-and-
effect relationships, and the dynamic nature of the world.
Events
Examples:
• A car passing through a red light: An event that occurs in the world.
• A user clicking a button on a website: An event triggered by an
interaction.
• Rainfall: An event that changes the state of the environment.
Characteristics of Events:
• Temporal: Events often have a time associated with them (e.g., "The
car passed at 3 PM").
• Dynamic: They cause or represent changes in the state of the world.
• Can involve agents: Events typically involve entities that perform or
are affected by the event (e.g., "John ate the cake").
Mental Events
• Mental events are events that occur in the mind or mental world of an agent
(such as a human or an AI system). They represent the cognitive or psychological
processes involved in reasoning, decision-making, perception, and more.
• Mental events are critical in AI because they allow systems to simulate or mimic
human-like thinking, learning, and awareness.

Examples:
• Remembering a fact: When an agent recalls a piece of information.
• Believing something: When an agent holds a belief (e.g., "I believe it is raining
outside").
• Desiring an outcome: When an agent wishes for something to happen (e.g., "I
want to go home").
• Perceiving an object: When an agent perceives or senses something (e.g., "I see a
car approaching").
Mental Events
Characteristics of Mental Events:
• Subjective: Mental events are typically internal to the agent (e.g.,
belief, desire, memory).
• Intentional: They are directed at some object or content, like belief
about the world, desire for a goal, or intention to do something.
• Influence Behavior: Mental events often drive an agent's decisions,
actions, and responses.
• Involve cognitive processes: These include reasoning, learning,
attention, and perception
Mental Objects
• Mental objects refer to concepts or entities that exist in the mental realm of an agent,
whether or not they correspond to actual entities in the physical world.
• They can be representations, ideas, beliefs, goals, or even imagined constructs.
• Mental objects are important for understanding how agents in AI model and process
knowledge internally, as they enable the agent to "think" about things without directly
interacting with the physical world.
Examples:
• Belief about a cat: An agent may represent the idea of a cat as a mental object, even if
no actual cat is present.
• Goal of reaching a destination: An agent may have the mental object of "reaching a
specific place" as a goal.
• Plan: A mental object that involves a sequence of steps to achieve a particular outcome.
• Concepts: General ideas, like "car," "city," or "freedom."
Mental Objects
Characteristics of Mental Objects:
• Abstract: Mental objects are often abstract in nature and not
directly observable.
• Cognitive Role: They play a role in reasoning, planning, and decision-
making.
• Representational: Mental objects represent knowledge, beliefs,
desires, or goals within an agent's cognitive system.
• Dynamic: Mental objects can evolve over time as the agent learns,
modifies its beliefs, or updates its goals.
Relationship Between Events & Mental Events
Events and Mental Events:
• Events in the world can trigger mental events in an agent's mind. For
instance, seeing a dog (a physical event) may lead to the mental
event of remembering a dog (a mental event).
• Mental events are responses to the perception or processing of real-
world events. For example, if an agent learns that the stock market
is down (an event), it may experience a mental event of concern or
fear about losing money.
Relationship Between Mental Objects & Events
Mental Objects and Events:
• Mental objects are representations of things or concepts that can be
involved in both physical events and mental events. For example, an
agent may have a mental object representing the concept of "rain."
When the event of rain occurs, the agent's mental object may be
activated or updated.
• Mental events can change mental objects. For example, the event of
learning new information can change an agent's mental objects, like
updating a belief or knowledge.
Relationship Between Mental Objects & Mental Events

Mental Objects and Mental Events:


• Mental objects can trigger mental events. For example, the mental
object of a goal can trigger a mental event of action planning.
Similarly, a mental object representing a fear (e.g., "I'm afraid of the
dark") may trigger a mental event related to avoiding dark places.
Reasoning Systems for Categories

Categories are the primary building blocks of large-scale knowledge representation schemes.

There are two closely related families of systems:
• 1. Semantic networks: provide graphical aids for visualizing a knowledge base and efficient algorithms for inferring properties of an object on the basis of its category membership.
• 2. Description logics: provide a formal language for constructing and combining category definitions and efficient algorithms for deciding subset and superset relationships between categories.
Reasoning Systems for Categories
1. Semantic Networks
Semantic networks are a graphical representation of knowledge, where
nodes represent concepts or objects, and edges represent relationships
between them. This is a way of organizing knowledge in a more visual
form, which can be easier to interpret and analyze.
• Graphical Representation: Concepts (or categories) are connected by
labeled edges, making it easier to visualize relationships and
categories.
• Inheritance: Objects in a semantic network inherit properties from
the categories or concepts to which they belong. For example, if a
"dog" is a type of "animal," the dog will inherit properties of animals,
like the ability to "eat" or "breathe."
• Inference: One of the key benefits of semantic networks is their
ability to infer new knowledge. By following relationships between
nodes, new facts can be deduced. For example, if we know that all
dogs are mammals and that some mammals are endangered, we can
infer that some dogs may be endangered.
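The inheritance-and-inference behavior described above can be sketched with a small "is-a" graph in Python (the concepts and properties are illustrative):

```python
# "is-a" edges of the network, plus properties attached to each node.
is_a = {"dog": "mammal", "mammal": "animal"}
properties = {
    "animal": {"breathes"},
    "mammal": {"has_fur"},
    "dog":    {"barks"},
}

def all_properties(concept):
    """Collect a concept's own properties plus everything inherited
    along the is-a chain."""
    collected = set()
    while concept is not None:
        collected |= properties.get(concept, set())
        concept = is_a.get(concept)
    return collected

print(sorted(all_properties("dog")))   # ['barks', 'breathes', 'has_fur']
```

Asking for a dog's properties walks the edges dog → mammal → animal, which is exactly the inheritance-based inference the text describes.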
2. Description Logics (DL)
Description logics provide a more formal approach to representing knowledge.
They are a family of formal logic systems designed to describe and reason about
the properties of concepts and the relationships between them.
• Formal Language: DL uses a well-defined formal language to represent
categories and their relationships. This helps ensure consistency and
precision in reasoning about knowledge.
• Classification: Description logics can be used to define categories formally,
and algorithms can then be applied to determine if one category is a subset
or superset of another. For example, if we define "bird" as a subclass of
"animal," a specific "sparrow" could be classified as a member of both
categories.
• Subset and Superset Relationships: DL provides efficient algorithms for
determining relationships between categories, such as checking if a category
is a subset or superset of another. This is key for inferring hierarchical
relationships and for managing large-scale classification schemes.
• Reasoning: DL allows reasoning about concepts in a way that is logically
sound. For instance, DL systems can infer new knowledge, such as whether
an entity belongs to a certain category based on the logical relationships
defined.
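Subset/superset checking can be sketched by representing each category as a set of necessary features (a toy model, far simpler than a real description-logic reasoner; the feature sets are made up):

```python
# A toy TBox: each category is defined by its set of necessary features.
defs = {
    "animal":  {"alive"},
    "bird":    {"alive", "has_feathers"},
    "sparrow": {"alive", "has_feathers", "small"},
}

def subsumes(general, specific):
    """general is a superset category of specific when every feature the
    general category requires is also required by the specific one."""
    return defs[general] <= defs[specific]

print(subsumes("animal", "bird"), subsumes("bird", "sparrow"))  # True True
print(subsumes("sparrow", "bird"))                              # False
```

This mirrors the subset/superset decisions a DL reasoner makes, although real systems handle far richer constructors than plain feature sets.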
Reasoning with Default Information
Reasoning with default information allows AI systems to make assumptions or conclusions
based on typical or expected knowledge when complete facts are unavailable. It helps the
system fill in gaps using reasonable assumptions and adapt when new data emerges.

Key Concepts
• Default Assumptions: These are generalizations, e.g., "Assume birds can fly unless
proven otherwise."
• Default Rules: Rules that allow inference under typical conditions. For example, "If an
animal is a bird, assume it can fly unless it’s an ostrich."
• Nonmonotonic Reasoning: New information can change previous conclusions. For
example, the assumption that all birds can fly is revised when learning that an animal
is an ostrich.
• Exceptions: Not all cases fit the default rule. For example, not all birds can fly, and
exceptions must be handled.
Reasoning with Default Information
Types of Default Reasoning
• Default Inference: Drawing conclusions from default assumptions,
revising them if new information contradicts the assumption.
• Defeasible Reasoning: The ability to override previous conclusions
with new facts.
• Circumscription: Limiting the possible interpretations of a situation
by focusing on what is known.
• Nonmonotonic Logic: Allows conclusions to be revised as more
information becomes available.
Reasoning with Default Information
Applications
• Expert Systems: Make decisions based on typical assumptions in fields like medicine.
• Robotics: Handle incomplete data, like assuming a path is clear unless an obstacle is
detected.
• NLP: Interpret ambiguous terms in context, like "bank" (financial institution vs.
riverbank).
• Autonomous Vehicles: Use default reasoning to interpret sensor data and make
decisions.
Example
• Default Rule: "Assume a bird can fly."
• New Info: "The bird is an ostrich."
• Revised Conclusion: The ostrich cannot fly, so the system adjusts its assumption.
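The bird/ostrich example above can be sketched directly (the exception set is an assumed, hand-maintained list, standing in for the new information that defeats the default):

```python
exceptions = {"ostrich", "penguin"}   # known exceptions to the default rule

def can_fly(bird):
    """Default rule: assume a bird can fly unless it is a known exception."""
    return bird not in exceptions

print(can_fly("sparrow"))   # True: the default assumption holds
print(can_fly("ostrich"))   # False: new information defeats the default
```

The reasoning is nonmonotonic: adding a bird to the exception set retracts a conclusion that was previously drawn by default.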
Unit 3 – Completed