Unit 3 Representation of Knowledge
Three Important Things in PROLOG Programming
When we program in PROLOG, we need to provide the following three things:
⮚ Declaring some facts about objects and their relationships
⮚ Defining some rules about objects and their relationships
⮚ Asking questions about objects and their relationships
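A minimal sketch of these three things, using an invented family example (not taken from the slides):

    % facts about objects and their relationships
    parent(tom, bob).
    parent(bob, ann).

    % a rule about objects and their relationships
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

and a question, typed at the interactive PROLOG prompt:

    ?- grandparent(tom, ann).
    true.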
Prolog Programming
• Problem solving in PROLOG
• Facts
∙ Used to represent unchanging information about objects and their relationships.
∙ Only facts in the PROLOG database can be used for problem solving.
∙ Insert facts into the database by typing the facts into a file and loading (consulting) the file into a running PROLOG system.
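For instance, assuming the facts above are saved in a file named family.pl (a made-up file name), the file can be consulted from a running PROLOG session:

    ?- consult('family.pl').
    true.

    ?- [family].        % common shorthand notation for consult
    true.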
Prolog Programming
• Queries
∙ Retrieve information from the database by entering QUERIES
∙ A query is a pattern that PROLOG is asked to match against the database
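Continuing the hypothetical family.pl database, a query pattern may contain variables (names starting with a capital letter); PROLOG matches the pattern against the stored facts and rules and reports the resulting bindings:

    ?- parent(tom, X).         % whom is tom a parent of?
    X = bob.

    ?- grandparent(Who, ann).  % pattern with a variable in the first position
    Who = tom.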
Categories (or Classes)
Categories are often represented using a taxonomy or hierarchical structure, where each category can have subcategories (called subclasses), and subclasses can inherit properties from their parent categories.
For example:
• A "Vehicle" category may contain objects like cars, trucks, bicycles, etc.
• An "Animal" category could include subcategories like dogs, cats, and birds.
• A "Shape" category could contain circles, squares, and triangles.
Key points:
• Categories are abstract and general.
• Categories can be further divided into subclasses (inheritance).
• They represent concepts, types, or classes of objects in the world.
Objects (or Instances)
An object (or instance) is a specific example or member of a category.
Objects have specific attributes or properties that define their unique
characteristics. While categories represent general concepts, objects are
concrete instances that embody those concepts.
For example:
• In the "Vehicle" category, an object might be a "Toyota Camry" or a
"Honda Civic"—specific cars.
• In the "Animal" category, an object might be a "German Shepherd" or a
"Persian Cat"—specific animals.
• In the "Shape" category, an object could be a "Red Circle" or a "Blue
Square".
Objects (or Instances)
Key points:
• Objects are specific instances of categories.
• They have concrete attributes or properties (e.g., a red circle may
have a radius and a position).
• Objects often have relations to other objects or concepts in the
system.
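As an illustrative sketch (the object name c1 and the attribute predicates are assumptions, not built-in PROLOG), a specific red circle could be written as facts:

    % c1 is a specific object: an instance of the category circle
    instance_of(c1, circle).

    % concrete attributes of the object c1
    colour(c1, red).
    radius(c1, 5).
    position(c1, point(10, 20)).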
Relationship Between Categories and Objects
• Categories and objects are closely linked in knowledge
representation. Categories define the types of objects that exist,
while objects instantiate these categories in the real world.
• Inheritance: Objects inherit properties from the categories they
belong to. For example, a Honda Civic (object) inherits properties
like "has wheels" and "can drive" from the Vehicle category.
• Taxonomy or Ontology: A well-organized system of categories helps
represent knowledge in a way that allows systems to reason about
relationships between different categories and objects. It also allows
the system to infer new facts from known relationships.
Example: Vehicles
• Category: Vehicle
  • Subcategory: Car
    • Object: Toyota Camry
    • Object: Honda Accord
  • Subcategory: Bicycle
    • Object: Mountain Bike
    • Object: Road Bike
• In this example, the Vehicle category contains subcategories like Car
and Bicycle. Each object, such as Toyota Camry and Mountain Bike,
is an instance of a specific subcategory.
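One possible PROLOG encoding of this vehicle taxonomy, with a rule that lets objects inherit properties from their categories (the predicates subclass/2, instance_of/2, property/2 and has_property/2 are illustrative names, not built-ins):

    subclass(car, vehicle).
    subclass(bicycle, vehicle).

    instance_of(toyota_camry, car).
    instance_of(honda_accord, car).
    instance_of(mountain_bike, bicycle).
    instance_of(road_bike, bicycle).

    % properties attached to categories
    property(vehicle, has_wheels).
    property(car, can_drive).

    % an object inherits a property from its category or any ancestor category
    has_property(Obj, P) :- instance_of(Obj, Cat), category_property(Cat, P).
    category_property(Cat, P) :- property(Cat, P).
    category_property(Cat, P) :- subclass(Cat, Super), category_property(Super, P).

    % ?- has_property(toyota_camry, has_wheels).   succeeds via car -> vehicle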
Events
• An event refers to something that occurs in the world, often at a
specific time or place.
• Events typically involve changes or actions and can be physical or
abstract.
• Events are key components of knowledge representation because
they describe actions, occurrences, or phenomena that happen in
the real world.
• AI systems use events to understand the flow of time, cause-and-
effect relationships, and the dynamic nature of the world.
Events
Examples:
• A car passing through a red light: An event that occurs in the world.
• A user clicking a button on a website: An event triggered by an
interaction.
• Rainfall: An event that changes the state of the environment.
Characteristics of Events:
• Temporal: Events often have a time associated with them (e.g., "The
car passed at 3 PM").
• Dynamic: They cause or represent changes in the state of the world.
• Can involve agents: Events typically involve entities that perform or
are affected by the event (e.g., "John ate the cake").
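One common way to sketch events in PROLOG is to give each event an identifier and attach its type, time, and participants as separate facts (all names below are assumptions for illustration):

    % the event of a car passing a red light at 3 PM
    event(e1, passes_red_light).
    time_of(e1, '15:00').
    agent_of(e1, car42).

    % the event "John ate the cake"
    event(e2, eats).
    agent_of(e2, john).
    object_of(e2, cake).

    % query: which events is john the agent of?
    % ?- event(E, Action), agent_of(E, john).
    % E = e2, Action = eats.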
Mental Events
• Mental events are events that occur in the mind or mental world of an agent
(such as a human or an AI system). They represent the cognitive or psychological
processes involved in reasoning, decision-making, perception, and more.
• Mental events are critical in AI because they allow systems to simulate or mimic
human-like thinking, learning, and awareness.
Examples:
• Remembering a fact: When an agent recalls a piece of information.
• Believing something: When an agent holds a belief (e.g., "I believe it is raining
outside").
• Desiring an outcome: When an agent wishes for something to happen (e.g., "I
want to go home").
• Perceiving an object: When an agent perceives or senses something (e.g., "I see a
car approaching").
Mental Events
Characteristics of Mental Events:
• Subjective: Mental events are typically internal to the agent (e.g.,
belief, desire, memory).
• Intentional: They are directed at some object or content, like belief
about the world, desire for a goal, or intention to do something.
• Influence Behavior: Mental events often drive an agent's decisions,
actions, and responses.
• Involve cognitive processes: These include reasoning, learning,
attention, and perception
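A rough sketch of mental events as PROLOG facts about an agent's internal state, with one illustrative rule showing how they can influence behaviour (agent1 and the predicate names are assumed for the example):

    % mental events of agent1
    believes(agent1, raining_outside).
    desires(agent1, at(agent1, home)).
    perceives(agent1, approaching(car42)).
    remembers(agent1, capital(france, paris)).

    % mental events influence behaviour
    decides(Agent, take_umbrella) :-
        believes(Agent, raining_outside),
        desires(Agent, at(Agent, home)).

    % ?- decides(agent1, Action).
    % Action = take_umbrella.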
Mental Objects
• Mental objects refer to concepts or entities that exist in the mental realm of an agent,
whether or not they correspond to actual entities in the physical world.
• They can be representations, ideas, beliefs, goals, or even imagined constructs.
• Mental objects are important for understanding how agents in AI model and process
knowledge internally, as they enable the agent to "think" about things without directly
interacting with the physical world.
Examples:
• Belief about a cat: An agent may represent the idea of a cat as a mental object, even if
no actual cat is present.
• Goal of reaching a destination: An agent may have the mental object of "reaching a
specific place" as a goal.
• Plan: A mental object that involves a sequence of steps to achieve a particular outcome.
• Concepts: General ideas, like "car," "city," or "freedom."
Mental Objects
Characteristics of Mental Objects:
• Abstract: Mental objects are often abstract in nature and not
directly observable.
• Cognitive Role: They play a role in reasoning, planning, and decision-
making.
• Representational: Mental objects represent knowledge, beliefs,
desires, or goals within an agent's cognitive system.
• Dynamic: Mental objects can evolve over time as the agent learns,
modifies its beliefs, or updates its goals.
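Mental objects such as goals, plans, and concepts can likewise be sketched as terms in an agent's knowledge base (again, all names are illustrative assumptions):

    % a goal the agent holds, whether or not it is achieved yet
    goal(agent1, reached(station)).

    % a plan: a named sequence of steps towards that goal
    plan(agent1, reach_station,
         [leave_house, walk_to(bus_stop), take_bus(42), get_off(station)]).

    % a concept the agent can reason about even when no instance is present
    concept(agent1, cat).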
Relationship Between Events & Mental Events
Events and Mental Events:
• Events in the world can trigger mental events in an agent's mind. For
instance, seeing a dog (a physical event) may lead to the mental
event of remembering a dog (a mental event).
• Mental events are responses to the perception or processing of real-
world events. For example, if an agent learns that the stock market
is down (an event), it may experience a mental event of concern or
fear about losing money.
Relationship Between Mental Objects & Events
Mental Objects and Events:
• Mental objects are representations of things or concepts that can be
involved in both physical events and mental events. For example, an
agent may have a mental object representing the concept of "rain."
When the event of rain occurs, the agent's mental object may be
activated or updated.
• Mental events can change mental objects. For example, the event of
learning new information can change an agent's mental objects, like
updating a belief or knowledge.
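A small sketch of such a belief update in PROLOG, using the standard dynamic database predicates retractall/1 and assertz/1 (the belief content is invented for the example):

    :- dynamic believes/2.

    believes(agent1, stock_market(stable)).

    % a learning event replaces the agent's old belief with the new one
    learn(Agent, stock_market(State)) :-
        retractall(believes(Agent, stock_market(_))),
        assertz(believes(Agent, stock_market(State))).

    % ?- learn(agent1, stock_market(down)), believes(agent1, stock_market(S)).
    % S = down.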
Relationship Between Mental Objects & Mental Events
Reasoning with Default Information
Key Concepts
• Default Assumptions: These are generalizations, e.g., "Assume birds can fly unless
proven otherwise."
• Default Rules: Rules that allow inference under typical conditions. For example, "If an
animal is a bird, assume it can fly unless it’s an ostrich."
• Nonmonotonic Reasoning: New information can change previous conclusions. For
example, the assumption that all birds can fly is revised when learning that an animal
is an ostrich.
• Exceptions: Not all cases fit the default rule. For example, not all birds can fly, and
exceptions must be handled.
Reasoning with Default Information
Types of Default Reasoning
• Default Inference: Drawing conclusions from default assumptions,
revising them if new information contradicts the assumption.
• Defeasible Reasoning: The ability to override previous conclusions
with new facts.
• Circumscription: Limiting the possible interpretations of a situation
by focusing on what is known.
• Nonmonotonic Logic: Allows conclusions to be revised as more
information becomes available.
Reasoning with Default Information
Applications
• Expert Systems: Make decisions based on typical assumptions in fields like medicine.
• Robotics: Handle incomplete data, like assuming a path is clear unless an obstacle is
detected.
• NLP: Interpret ambiguous terms in context, like "bank" (financial institution vs.
riverbank).
• Autonomous Vehicles: Use default reasoning to interpret sensor data and make
decisions.
Example
• Default Rule: "Assume a bird can fly."
• New Info: "The bird is an ostrich."
• Revised Conclusion: The ostrich cannot fly, so the system adjusts its assumption.
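In PROLOG, this kind of default reasoning is often approximated with negation as failure (\+); the sketch below encodes the bird example (tweety and ozzy are invented names):

    bird(tweety).
    bird(ozzy).
    ostrich(ozzy).

    % default rule: a bird flies unless it is known to be abnormal
    flies(X) :- bird(X), \+ abnormal(X).

    % exception: ostriches are abnormal with respect to flying
    abnormal(X) :- ostrich(X).

    % ?- flies(tweety).   succeeds: the default assumption stands
    % ?- flies(ozzy).     fails: the new information overrides the default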
Unit 3 – Completed