AI Unit 3 Part II
Resolution Procedure

1. Convert all statements into clause form (a standardized logical format using disjunctions of literals).
2. Negate the goal (what you want to prove) and add it to the set of clauses.
3. Apply the resolution rule repeatedly:
o Identify two parent clauses with complementary literals.
o Remove the complementary literals and combine the rest to form the resolvent.
4. Repeat until:
o An empty clause is derived (contradiction found, proving the goal), or
o No more resolvents can be formed.

Key Concepts:
1. Clause Form: Convert statements into a format where literals are connected using disjunction (OR), and remove implications and quantifiers.
2. Negation: Negate the statement you want to prove and add it to the set of clauses.
3. Resolution Rule: Combine clauses by eliminating complementary literals to derive new clauses until an empty clause is reached.
4. Contradiction: The empty clause indicates that the negation of the goal is inconsistent, proving the original goal true.

Applications:
o Automated Theorem Proving: Resolving logical problems in mathematics and computer science.
o Artificial Intelligence: Used in expert systems and logic-based decision-making systems.
o Propositional and First-Order Logic: To verify logical consistency in statements.

Step-by-Step Explanation

Question: Does Alice like coffee?

Step 1: Convert to Clause Form
We rewrite all the statements in a standard format called "clause form." Clause form uses disjunctions (OR), and we remove quantifiers systematically.
1. Fact 1: Human(Alice). This is already in clause form.
2. Fact 2: ∀x Human(x) ⇒ Likes(x, Coffee) ∨ Likes(x, Tea)
o Remove implication (A ⇒ B becomes ¬A ∨ B): ¬Human(x) ∨ Likes(x, Coffee) ∨ Likes(x, Tea)

Clause List:
1. Human(Alice)
2. ¬Human(x) ∨ Likes(x, Coffee) ∨ Likes(x, Tea)
3. ¬Likes(Alice, Tea)
4. ¬Likes(Alice, Coffee) (the negated goal)

Step 2.1: Resolve Clause (4) ¬Likes(Alice, Coffee) with Clause (2)
Clause (2): ¬Human(x) ∨ Likes(x, Coffee) ∨ Likes(x, Tea)
Substitution: Replace x with Alice.
Why? The complementary literals are ¬Likes(Alice, Coffee) (Clause 4) and Likes(x, Coffee) (Clause 2). These cancel out.
Result: ¬Human(Alice) ∨ Likes(Alice, Tea)

Step 2.2: Resolve the Result ¬Human(Alice) ∨ Likes(Alice, Tea) with Clause (1) Human(Alice)
Clause (1): Human(Alice)
Complementary literals: ¬Human(Alice) (from the result) and Human(Alice) (Clause 1). These cancel out.
Result: Likes(Alice, Tea)

Step 2.3: Resolve Likes(Alice, Tea) with Clause (3) ¬Likes(Alice, Tea)
Clause (3): ¬Likes(Alice, Tea)
Complementary literals: Likes(Alice, Tea) (from the result) and ¬Likes(Alice, Tea) (Clause 3). These cancel out.
Result: Empty Clause (Contradiction)

Step 3: Conclusion
We derived a contradiction (empty clause), proving that Alice must like coffee.

Knowledge Representation: Simplified Explanation

What is Knowledge?
Definition: Knowledge is information that is useful and organized. It includes facts, principles, and experiences gathered by humans.
Importance: Intelligence (in humans or computers) requires access to knowledge for decision-making.

Knowledge in Organisms vs. Computers
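Returning to the resolution proof worked through above: a minimal sketch in Python, with clauses as sets of string literals and the first-order clause (2) shown already instantiated with x = Alice. This is an illustration, not a full first-order prover (no unification).

```python
# Minimal propositional resolution sketch for the Alice example.
# Literals are strings; "~" marks negation; clauses are frozensets.

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of string literals)."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            # remove the complementary pair, combine the rest
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def resolution_proves(clauses):
    """Resolve repeatedly until the empty clause appears or nothing new is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:        # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:           # no new resolvents: goal not provable
            return False
        clauses |= new

kb = [
    frozenset({"Human(Alice)"}),                                             # (1)
    frozenset({"~Human(Alice)", "Likes(Alice,Coffee)", "Likes(Alice,Tea)"}), # (2), x = Alice
    frozenset({"~Likes(Alice,Tea)"}),                                        # (3)
    frozenset({"~Likes(Alice,Coffee)"}),                                     # (4) negated goal
]
print(resolution_proves(kb))  # True: Alice likes coffee
```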
Types of Knowledge
1. Procedural Knowledge:
o Limitation: Cannot do much reasoning or inference.

A knowledge-based system has two main parts:
1. Knowledge Base: Stores all the knowledge (facts, rules, etc.).
2. Inference Engine: Uses the knowledge to draw conclusions or make decisions.

Methods to Represent Knowledge
o Propositional Logic.
o Frames: e.g., a frame for Person.
o Scripts: e.g., going to a restaurant: 1. Enter the restaurant. 2. Sit down. 3. Order food.
o Semantic representation: represents how words relate in sentences (who did what to whom). Sentence: "John gave a book to Mary." (Recipient: Mary)
o Production Rules: rules that guide decisions in an "IF-THEN" format. Forward chaining starts with facts → uses rules to reach conclusions; backward chaining starts with a goal → traces back to see if it can be proven. Example: Fact: "All cats are mammals." Rule: "Mammals have hearts."

Knowledge representation allows systems to store and use knowledge effectively.
Different methods (logic, frames, scripts, rules) provide flexibility for various tasks.
A good knowledge-based system can reason, learn, and make decisions just like humans.

Ontology Engineering:
o Uses formal "classes" (like sets in math) to organize ideas.
o Contrast with programming, which focuses on designing systems and their behavior, uses "classes" as templates to create objects (e.g., objects in Java), and assumes we know everything about the system (the "Closed World Assumption").

Open World Assumption (OWA)
Suppose we know "Dogs are animals."
If we are asked, "Does Max the dog know how to swim?", and no one told us this information yet, under OWA we don't assume Max can't swim.
Instead, we say, "We don't know if Max can swim or not."
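The open-world reading of the Max example can be sketched in a few lines, with the closed-world version included for contrast (the knowledge base and fact strings are illustrative, not from any specific system):

```python
# Toy sketch: Open World Assumption vs. Closed World Assumption.

known_true = {"dog(Max)", "animal(Max)"}   # e.g., "Dogs are animals", Max is a dog
known_false = set()                        # facts explicitly known to be false

def ask_owa(fact):
    """Open world: a fact that is not recorded is simply unknown."""
    if fact in known_true:
        return True
    if fact in known_false:
        return False
    return "unknown"

def ask_cwa(fact):
    """Closed world: anything not known to be true is assumed false."""
    return fact in known_true

print(ask_owa("can_swim(Max)"))  # 'unknown': no one told us either way
print(ask_cwa("can_swim(Max)"))  # False: absence of information means false
```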
Closed World Assumption (CWA)
What it means:
If we don't know something, we assume it's false.
Example:
Using the same scenario:
o If no one told us "Max can swim", under CWA we assume Max can't swim, because the absence of information means it's false.

Key Difference:
o OWA leaves room for uncertainty (lack of knowledge ≠ false).
o CWA assumes completeness (lack of knowledge = false).

Open Logic (Non-Monotonic Logic)
What it means:
If you get more information, your knowledge can change.
Example:
Initially, you don't know if Max can swim. Later, someone tells you, "Max can swim."
Now your understanding is updated: you know Max can swim.
This means adding new knowledge changes your answers.

Closed Logic (Monotonic Logic)
What it means:
Once you make a conclusion, adding new information doesn't change your answers.
Example:
If we assume, based on current knowledge, that Max can't swim, then even if someone later tells you, "Max can swim," you stick to your original conclusion that Max can't swim.

Logic Type                    Adding Info
Open Logic (Non-Monotonic)    Changes conclusions.
Closed Logic (Monotonic)      Conclusions stay the same.

Axioms in OWL Ontology
OWL (Web Ontology Language) uses "axioms" (rules) to define relationships:
Class Rules:
o C ⊑ D: Class C is part of Class D (e.g., Cats are Animals).
o C ≡ D: Class C and Class D are the same.
Role Rules:
o R ⊑ S: Role R is part of Role S.
o Func(R): Role R links only one item to another (like one owner per car).
o Trans(S): Role S connects across items (e.g., "friend of a friend" is also a friend).
Individual Rules:
o a:C: Individual 'a' belongs to Class C (e.g., Nemo is a Fish).
o <a,b>:R: Individual 'a' is connected to 'b' through Role R (e.g., Nemo is in Water).

Example of Ontology Rules:
o Fish ⊑ Animal ⊓ CanSwim: Fish are animals that can swim.
o Jellyfish ⊑ Fish ⊓ ¬hasVertebra: Jellyfish are fish that don't have backbones.

How to Create an Ontology (Ontology Engineering Process):
1. Decide the scope: What topic or area will your ontology cover?
2. Reuse existing ontologies: Use any pre-made knowledge maps if they fit your needs.
3. List terms: Write down all key terms or ideas.
4. Define classes: Group similar ideas together.
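A minimal sketch of checking subsumption over class axioms like the ones above, assuming a simplified hierarchy where each class has at most one named parent (the ⊓ parts of the definitions are ignored here):

```python
# Toy subsumption check over C ⊑ D axioms, using the Fish/Animal example.

subclass_of = {
    "Fish": "Animal",       # Fish ⊑ Animal
    "Jellyfish": "Fish",    # Jellyfish ⊑ Fish
    "Cat": "Animal",        # Cats are Animals (the C ⊑ D example)
}

def subsumed_by(c, d):
    """True if class c is (reflexively and transitively) a subclass of d."""
    while c is not None:
        if c == d:
            return True
        c = subclass_of.get(c)  # walk up to the parent class, if any
    return False

print(subsumed_by("Jellyfish", "Animal"))  # True: Jellyfish ⊑ Fish ⊑ Animal
print(subsumed_by("Animal", "Fish"))       # False: subsumption is one-way
```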
Merging Ontologies: How do we combine different ontologies into one?

Key Points to Remember:
o There is no single "perfect" ontology; designs vary based on needs.
o Ontology creation is creative and subjective.
o The success of an ontology depends on how well it works in real-world applications.

Categories and Objects
Categories help us organize things into groups (e.g., "dogs", "cats", "fruits").
Objects are individual things (e.g., "Buddy" the dog, or "Golden Delicious" apple).
For example, Basketball is a category, and a specific basketball like BB9 is an individual object within that category. You might want any basketball, not just a specific one like BB9. Likewise, instead of thinking about one dog, we just think about the "dog" category and what all dogs do.

We can represent categories in two ways:
1. Predicates: For example, if we say "Basketball(b)", this means "b" is a basketball. It's a simple way of saying an object belongs to a category.
2. Reification: We can treat the category itself like an object. For example, "BB9 ∈ Basketballs" means BB9 is in the category of basketballs.

Subclass Relations and Taxonomy (Hierarchy)
Subclass Relations mean some categories are smaller parts of bigger categories. For example:
o "Dog" is a smaller part of the category "Animal" because all dogs are animals.
o If we say "Tomatoes ⊂ Fruit", this means tomatoes are a type of fruit.
o This is called a taxonomy: basically, a family tree where categories are organized from general to specific.

How Categories Work with Properties
o Categories have properties that describe them, and these apply to all objects in the category (e.g., all tomatoes are red).
o Inheritance of Properties: If a category has a property (like all tomatoes are red and round), all objects in that category inherit these properties. So, if you know something is a tomato, you know it's red and round. Categories can be part of larger categories, and objects inherit properties from their categories.
o Decompositions: Sometimes, we can break down a category into smaller parts. For example, we can break the category of fruits into smaller categories like apples, bananas, etc.

The PartOf Relation
Objects can be parts of other objects. For example:
1. Delhi is part of India (Delhi → India).
2. India is part of South Asia (India → South Asia).
3. South Asia is part of Asia (South Asia → Asia).
4. Asia is part of Earth (Asia → Earth).

Properties of PartOf:
1. Transitive Property:
If A is part of B, and B is part of C, then A is also part of C.
Example: If Delhi is part of India (Delhi → India), and India is part of South Asia (India → South Asia), then Delhi is part of South Asia (Delhi → South Asia).
2. Reflexive Property:
Every object is always part of itself.
Example: Delhi is part of Delhi (Delhi → Delhi), because an object is always considered part of itself.

2. Measurements
Measurements describe numbers or amounts related to things, like how tall, heavy, or expensive something is (size, cost, time).
Example:
o Diameter of Basketball12 = 9.5 inches (a measurement of the basketball's size).
o Price of Basketball12 = $19 (a measurement of how much it costs).
Example with Days and Hours:
o d ∈ Days ⇒ Duration(d) = 24 hours

3. Unit Conversions
Sometimes, we need to convert measurements from one unit to another. For example:
o You can measure length in inches or centimeters.
o 1.5 inches is equal to 3.81 centimeters (this is a conversion between inches and centimeters).
Conversion Rule: A rule to convert between units:
o 1 inch = 2.54 centimeters. So, to convert from inches to centimeters, you multiply by 2.54.

1. Mental Events and Mental Objects

Mental Events
Mental events are things happening in your mind, like thinking, remembering, or feeling emotions.
Examples of mental events:
o Thinking about something.
o Remembering your friend's birthday.
o Feeling happy or sad.
In simple terms, mental events are like activities or actions in the mind.

Mental Objects
Mental objects are what you think about during mental events. These can be things like ideas, concepts, or images.
o Concrete mental objects are things you can imagine with your senses (e.g., an apple, a dog).
o Abstract mental objects are ideas or concepts, like love, justice, or mathematics.
So, when you think about something, that thing is a mental object. For example, if you are thinking about a cat, the cat is the mental object.

2. Mental Events in Event Calculus
In AI, event calculus is a way to describe actions and changes over time. It helps us represent mental events in formal logic (which is like a set of rules for reasoning).
A fluent is a fact or state that can change over time.
o Example: "Shankar is in Berkeley" is a fluent (a fact that could change).
o T(At(Shankar, Berkeley), t) means that at a certain time t, the fact "Shankar is in Berkeley" is true.
o Holds is a way of saying that a belief or fact is true at a certain time. This helps agents (like robots) understand the world and take action.

Events are actions or occurrences. These are also represented in event calculus using rules.
Example:
o A flying event for Shankar from San Francisco to Washington, D.C. could be represented like this:
E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)
This means the event E1 is a flying event, Shankar is the flyer, and the flight goes from San Francisco (SF) to Washington, D.C.

3. Mental Events and Objects in Human Thinking
Mental events are the activities that happen inside the mind (e.g., thinking, remembering, solving problems).
Mental objects are the things the mind focuses on during these events (e.g., concepts like love, justice, or a specific memory).
In AI, beliefs are treated as mental objects. If the robot believes "If it rains, the ground is wet" and it also believes "It is raining", then it can logically infer that "The ground is wet".

Meta-Belief Rule
Sometimes, an agent can believe that it believes something. This is called a meta-belief.
Example:
If the robot believes "It is raining", it can also believe "I believe it is raining".

Examples of Belief Representation

Learning a Fact
When an agent learns something new, it updates its beliefs.
o Example: If a robot learns that "The sun is hot", it updates its belief to "The sun is hot".

Lois Knows Superman Can Fly
If Lois knows that Superman can fly, we can represent that as:
o Knows(Lois, "Superman can fly").
But, if Superman = Clark Kent, then Lois also knows that Clark Kent can fly:
o Knows(Lois, "Clark Kent can fly").
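Going back to fluents and Holds from the event-calculus discussion above, here is a toy sketch; the interval-based timeline and the tuple representation of fluents are illustrative assumptions, not part of the formal calculus.

```python
# Toy sketch of fluents that hold over time, in the spirit of
# T(At(Shankar, Berkeley), t): a fluent maps to the time intervals
# during which it is true.

timeline = {
    ("At", "Shankar", "Berkeley"): [(0, 10)],   # true from t=0 to t=10
    ("At", "Shankar", "DC"): [(12, 20)],        # true from t=12 to t=20
}

def holds(fluent, t):
    """True if the fluent is true at time t."""
    return any(start <= t <= end for start, end in timeline.get(fluent, []))

print(holds(("At", "Shankar", "Berkeley"), 5))   # True
print(holds(("At", "Shankar", "Berkeley"), 11))  # False: the fluent changed
```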
In Simple Terms:
o Mental events: Activities that happen in the mind (thinking, feeling, etc.).
o Mental objects: Things that the mind focuses on (ideas, concepts, objects).
o Beliefs: Things an agent knows or thinks are true.
o Meta-beliefs: Beliefs about one's own beliefs.
o Learning: When an agent learns something new, it updates its beliefs.

What is Reasoning?
Reasoning is like thinking logically to figure out answers or make predictions based on what you already know.
o For example, if you know that all dogs bark and Buddy is a dog, you can reason that Buddy will bark.

Types of Reasoning
1. Deductive Reasoning:
o This type of reasoning goes from general facts to specific conclusions.
o Example:
General Fact: All humans eat veggies.
Specific Fact: Suresh is a human.
Conclusion: So, Suresh eats veggies.
2. Inductive Reasoning:
o This type of reasoning goes from specific observations to general conclusions.
o Example:
Observation: All pigeons we've seen are white.
Conclusion: So, all pigeons are white.
3. Abductive Reasoning:
o This is reasoning to find the best possible explanation for something that happened.
o Example:
Observation: The cricket ground is wet.
Conclusion: It might have rained (this is the best guess).
4. Common Sense Reasoning:
o This type of reasoning uses everyday experiences to make assumptions.
o Example: If I touch fire, I know it will burn (because I've experienced it before).
5. Monotonic Reasoning:
o Once you make a conclusion, it cannot be changed even if you get new information.
o Example: We know that Earth revolves around the Sun, and this will always be true.
6. Non-Monotonic Reasoning:
o Conclusions can change if new information comes in.
o Example:
First: Birds can fly → Pitty is a bird → Pitty can fly.
New Information: Pitty is a penguin → Pitty cannot fly anymore.

Organizing and Reasoning with Categories
1. Semantic Networks:
o These are like diagrams that help you organize categories and their relationships.
o They help AI systems visualize how things are related, like how a dog is an animal and has a tail.
2. Description Logics:
o This is a way of describing categories and their relationships in a formal language that AI can understand.
Subsumption: Checking if one category is a smaller part of another category (like "sparrow" being a type of "bird").
Classification: Checking if an object belongs to a certain category (like "dog" being a type of "animal").
Consistency: Making sure there are no contradictions in the definitions (like not saying a "dog" is also a "cat").

Frames and Slots
1. Frames:
o Example: A frame for "Person" might include properties like "Age", "Height", and "Name".
2. Slots:
o Slots are like the details inside the frames. They define specific characteristics.
o Example: For the Person frame, a slot might be "Age", and its value might be 28 years old.
o Slots can have things like:
Default Value: What the value should be if nothing is specified.
Range: The type of values it can have (like Age being an integer between 0 and 140).
If-Needed: The slot gets activated only when needed (e.g., when reading a value).
If-Added: The slot gets activated when a value is added (e.g., when a new age is provided).

Default Reasoning
o Default reasoning helps you start with general rules and only make exceptions when needed.
o There are two types:
Monotonic Reasoning: Once a conclusion is made, it doesn't change even if new facts come in.
Non-Monotonic Reasoning: Conclusions can change when new facts come in.

3. How Does Default Reasoning Work?
o Normal Assumption (H): Assume everything is normal unless you learn otherwise.
o What Follows (F): Use the assumption to make conclusions.
o Explanation: Justify why the conclusion is true.

Examples of Default Reasoning
1. Basic Example:
o By default, we assume "Articles about AI are interesting", so we conclude that any new AI article is interesting.
2. Adding Exceptions:
o If we learn some AI articles are boring, we cannot assume all AI articles are interesting anymore.
o So, new facts can change the conclusion.
3. Overriding Rules:
o Specific rules can override general rules.
o Example: We might have a general rule that says "Articles about AI are interesting", but there could be a specific rule that says "Articles about formal logic are boring". So, articles about formal logic are not interesting, even if they are about AI.

In Super-Simplified Terms
Reasoning helps AI make decisions like humans do by drawing conclusions from what it already knows.
There are different ways of reasoning:
o Deductive: General → Specific.
o Inductive: Specific → General.
o Abductive: Finding the best guess for something.
o Common Sense: Using everyday knowledge to make assumptions.
Default Logic: Assumes things are true unless proven wrong, making AI work even with incomplete info.
Frames & Slots: Frames are categories (like "Person"), and slots are the details (like "Age").
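The Pitty example above can be sketched as non-monotonic default reasoning in a few lines; the predicate names and set-of-tuples fact base are illustrative assumptions.

```python
# Toy sketch of non-monotonic (default) reasoning: a specific rule
# (penguins can't fly) overrides the general default (birds can fly),
# and a new fact can retract an earlier conclusion.

def can_fly(facts, name):
    """Default: birds fly, unless we also know the bird is a penguin."""
    if ("penguin", name) in facts:   # specific rule overrides the default
        return False
    return ("bird", name) in facts   # general default rule

facts = {("bird", "Pitty")}
print(can_fly(facts, "Pitty"))   # True: by default, birds can fly

facts.add(("penguin", "Pitty"))  # new information arrives
print(can_fly(facts, "Pitty"))   # False: the conclusion is retracted
```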