PMU AI Notes
Course Code: YCS204C
Course Name: ARTIFICIAL INTELLIGENCE
L T P C: 3 1 0 4
L T P H: 3 1 0 4
C P A: 2.8 0 0.2
PREREQUISITE: Nil
COURSE OUTCOMES | DOMAIN | LEVEL
CO1: Analyse AI problems and space search | Cognitive | Remember
UNIT 1
1.1 INTRODUCTION
1.2 DEFINITION
The study of how to make computers do things at which, at the moment, people are better.
“Artificial Intelligence is the ability of a computer to act like a human being”.
Figure 1.1 Some definitions of artificial intelligence, organized into four categories
(a) Intelligence - Ability to apply knowledge in order to perform better in an environment.
(b) Artificial Intelligence - Study and construction of agent programs that perform well
in a given environment, for a given agent architecture.
(c) Agent - An entity that takes action in response to percepts from an environment.
(d) Rationality - property of a system which does the “right thing” given what it knows.
(e) Logical Reasoning - A process of deriving new sentences from old, such that the new
sentences are necessarily true if the old ones are true.
The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence. A computer passes the test if a human
interrogator, after posing some written questions, cannot tell whether the written responses
come from a person or from a computer.
Total Turing Test includes a video signal so that the interrogator can test the
subject’s perceptual abilities, as well as the opportunity for the interrogator to pass physical
objects “through the hatch.” To pass the total Turing Test, the computer will need:
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
To analyse whether a given program thinks like a human, we must have some way of
determining how humans think. The interdisciplinary field of cognitive science brings
together computer models from AI and experimental techniques from psychology to try to
construct precise and testable theories of the workings of the human mind.
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for argument structures that always gave correct conclusions given correct premises.
For example, "Socrates is a man; all men are mortal; therefore Socrates is mortal."
These laws of thought were supposed to govern the operation of the mind, and
initiated the field of logic.
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An
agent is just something that perceives and acts.
The right thing: that which is expected to maximize goal achievement, given the
available information
For example, the blinking reflex involves no deliberation, yet it is still in the service of rational action.
1.4 FUTURE OF ARTIFICIAL INTELLIGENCE
Education: Textbooks are digitized with the help of AI, early-stage virtual tutors
assist human instructors and facial analysis gauges the emotions of students to help
determine who’s struggling or bored and better tailor the experience to their
individual needs.
Media: Journalism is harnessing AI, too, and will continue to benefit from it.
Bloomberg uses Cyborg technology to help make quick sense of complex financial
reports. The Associated Press employs the natural language abilities of Automated Insights to produce 3,700 earnings report stories per year - nearly four times more than in the recent past.
Customer Service: Last but hardly least, Google is working on an AI assistant that
can place human-like calls to make appointments at, say, your neighborhood hair
salon. In addition to words, the system understands context and nuance.
Situatedness
The agent receives some form of sensory input from its environment, and it performs
some action that changes its environment in some way.
Autonomy
The agent can act without direct intervention by humans or other agents and that it has
control over its own actions and internal state.
Adaptivity
The agent is capable of interacting in a peer-to-peer manner with other agents or humans
Human Sensors:
Eyes, ears, and other organs for sensors.
Human Actuators:
Hands, legs, mouth, and other body parts.
Robotic Sensors:
Mic, cameras and infrared range finders for sensors
Robotic Actuators:
Motors, displays, speakers, etc.
An agent can be:
Human-Agent: A human agent has eyes, ears, and other organs which work for
sensors and hand, legs, vocal tract work for actuators.
Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for
sensors and various motors for actuators.
Software Agent: Software agent can have keystrokes, file contents as sensory input
and act on those inputs and display output on the screen.
Hence the world around us is full of agents such as thermostats, cell phones, and cameras; even we ourselves are agents. Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of machines that convert energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
An environment is everything in the world which surrounds the agent, but it is not a
part of an agent itself. An environment can be described as a situation in which an agent is
present.
The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
If an agent sensor can sense or access the complete state of an environment at each
point of time then it is a fully observable environment, else it is partially observable.
Example: chess – the board is fully observable, as are opponent’s moves. Driving
– what is around the next bend is not observable and hence partially observable.
1. Deterministic vs Stochastic
If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.
A stochastic environment is random in nature and cannot be determined completely
by an agent.
In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
2. Episodic vs Sequential
In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action. In a sequential environment, by contrast, the current action may affect all future decisions.
3. Single-agent vs Multi-agent
If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
The agent design problems in the multi-agent environment are different from single
agent environment.
4. Static vs Dynamic
If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
For a dynamic environment, however, agents need to keep looking at the world before each action.
5. Discrete vs Continuous
If in an environment there are a finite number of percepts and actions that can be performed within it, then such an environment is called a discrete environment; else it is called a continuous environment.
A chess game comes under discrete environment as there is a finite number of moves
that can be performed.
6. Known vs Unknown
Known and unknown are not actually features of an environment; they describe the agent's state of knowledge about how to perform an action.
In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an
action.
7. Accessible vs Inaccessible
If an agent can obtain complete and accurate information about the environment's state, then such an environment is called an accessible environment; else it is called inaccessible.
Task environments, which are essentially the "problems" to which rational agents
are the "solutions."
Performance
The output which we get from the agent. All the necessary results that an agent gives
after processing comes under its performance.
Environment
All the surrounding things and conditions of an agent fall in this section. It basically
consists of all the things under which the agents work.
Actuators
The devices, hardware or software through which the agent performs any actions or
processes any information to produce a result are the actuators of the agent.
Sensors
The devices through which the agent observes and perceives its environment are the
sensors of the agent.
Figure 1.5 Examples of agent types and their PEAS descriptions
Rational Agent - A system is rational if it does the "right thing," given what it knows.
For every possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
An ideal rational agent perceives and does things; it has the greatest possible performance measure. E.g., crossing a road: here perception occurs first on both sides, and only then the action.

No perception occurs in a degenerate agent. E.g., a clock: it does not view the surroundings; no matter what happens outside, the clock works based on an inbuilt program.

An ideal agent is described by ideal mappings: "specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent".
Eg. SQRT function calculation in calculator.
A rational agent should be autonomous-it should learn from its own prior knowledge
(experience).
Agents can be grouped into four classes based on their degree of perceived
intelligence and capability:
The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history (past State).
The Simple reflex agent does not consider any part of percepts history during their
decision and action process.
The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room.
o Limitation: the table of condition-action rules needed is mostly too big to generate and to store.
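As a small illustration (my sketch, not from the notes), a condition-action vacuum agent for a two-square world can be written directly; the percept format is an assumption:

# A minimal simple reflex agent: it maps the current percept straight to an
# action via condition-action rules, ignoring all percept history.
def simple_reflex_vacuum_agent(percept):
    location, status = percept            # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                     # rule: dirty square -> clean it
    elif location == "A":
        return "Right"                    # rule: clean and at A -> move right
    else:
        return "Left"                     # rule: clean and at B -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # Left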
The Model-based agent can work in a partially observable environment, and track the
situation.
A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
o Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
Figure 1.7 A model-based reflex agent
o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
Utility Based Agents
o These agents are similar to the goal-based agent but provide an extra component of
utility measurement (“Level of Happiness”) which makes them different by
providing a measure of success at a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
o The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
o The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
o A learning agent in AI is the type of agent which can learn from its past experiences,
or it has learning capabilities.
o It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
a. Learning element: It is responsible for making improvements by learning from the environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that
will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze performance, and look for new ways
to improve the performance.
Some of the problems most popularly solved with the help of artificial intelligence are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.
Problem Searching
Searching is the most commonly used technique of problem solving in artificial
intelligence.
Problem: A problem is an issue which comes across a system; a solution is needed to solve that particular problem.
Defining the Problem: The definition of the problem must be stated precisely. It should contain the possible initial as well as final situations, which should result in an acceptable solution.
1. Analyzing the Problem: The problem and its requirements must be analyzed, as a few features can have an immense impact on the resulting solution.
2. Identifying Solutions: The possible solutions which can solve the problem are identified.
3. Choosing a Solution: From all the identified solutions, the best solution is chosen based on the results produced by the respective solutions.
Completeness: Is the algorithm guaranteed to find a solution when there is one?
Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the search?
1. Search Space: The search space represents the set of possible solutions which a system may have.
2. Start State: It is the state from which the agent begins the search.
3. Goal test: It is a function which observes the current state and returns whether the goal state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
Actions: It gives the description of all the available actions to the agent.
Solution: It is an action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.
Example Problems
Toy Problems
Vacuum World
States: The state is determined by both the agent location and the dirt locations. The agent is in one of 2 locations, each of which might or might not contain dirt. Thus there are 2 x 2^2 = 8 possible world states.
Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete state space is shown in the figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
8-Puzzle Problem
States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.
Actions: The simplest formulation defines the actions as movements of the blank space: Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the
blank switched.
Goal test: This checks whether the state matches the goal configuration shown in the figure.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
Water Jug Problem
Consider the given problem. Describe the operator involved in it. Consider the water
jug problem: You are given two jugs, a 4-gallon one and 3-gallon one. Neither has any
measuring marker on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
Explicit Assumptions: A jug can be filled from the pump, water can be poured out of
a jug on to the ground, water can be poured from one jug to another and that there are no
other measuring devices available.
Here the initial state is (0, 0). The goal state is (2, n) for any value of n.
State Space Representation: we will represent a state of the problem as a tuple (x, y)
where x represents the amount of water in the 4-gallon jug and y represents the amount of
water in the 3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.
To solve this we have to make some assumptions not mentioned in the problem. They
are:
Operators - we must define a set of operators that will take us from one state to another.
Table 1.1
Figure 1.15 Solution
Table 1.2
Solution
How can you get exactly 2 gallon of water into the 4-gallon jug?
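The state space above is small enough to search mechanically. The following breadth-first solver is an illustrative sketch (not part of the original notes); it finds a shortest operator sequence to a (2, n) goal over the (x, y) states defined above:

# Breadth-first search over the water-jug state space (x, y),
# 0 <= x <= 4 and 0 <= y <= 3, starting from (0, 0).
from collections import deque

def water_jug_bfs(goal=2):
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal:                     # goal state (2, n) for any n
            path, s = [], (x, y)
            while s is not None:          # reconstruct the path of states
                path.append(s)
                s = parent[s]
            return path[::-1]
        successors = [
            (4, y),                       # fill the 4-gallon jug
            (x, 3),                       # fill the 3-gallon jug
            (0, y),                       # empty the 4-gallon jug
            (x, 0),                       # empty the 3-gallon jug
            (min(4, x + y), max(0, x + y - 4)),   # pour 3 -> 4
            (max(0, x + y - 3), min(3, x + y)),   # pour 4 -> 3
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (x, y)
                queue.append(s)

print(water_jug_bfs())
# [(0, 0), (4, 0), (1, 3), (1, 0), (0, 1), (4, 1), (2, 3)]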
PROBLEM SOLVING BY SEARCH
Operator or successor function - for any state x returns s(x), the set of states
reachable from x with one action
State space - all states reachable from initial by any sequence of actions
Path cost - function that assigns a cost to a path. Cost of a path is the sum of costs
of individual actions along the path
What is Search?
Search is the systematic examination of states to find a path from the start/root state to the goal state.
The set of possible states, together with operators defining their connectivity
constitute the search space.
The output of a search algorithm is a solution, that is, a path from the initial state to a
state that satisfies the goal test.
Problem-solving agents
To illustrate the agent’s behavior, let us take an example where our agent is in the city
of Arad, which is in Romania. The agent has to adopt a goal of getting to Bucharest.
Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving.
The agent’s task is to find out which sequence of actions will get to a goal state.
Problem formulation is the process of deciding what actions and states to consider
given a goal.
Problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. successor function S(x) = set of action-state pairs, e.g., S(Arad) = {[Arad -> Zerind; Zerind], ...}
3. goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
4. path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be >= 0
A solution is a sequence of actions leading from the initial state to a goal state.
Goal formulation and problem formulation
EXAMPLE PROBLEMS
The problem solving approach has been applied to a vast array of task environments.
Some best-known problems are summarized below. They are distinguished as toy or real-world problems.
A real-world problem is one whose solutions people actually care about.
2.2 TOY PROBLEMS
o States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 x 2^2 = 8 possible world states.
o Successor function: This generates the legal states that result from trying the three actions (Left, Right, Suck). The complete state space is shown in the figure.
o Goal Test: This tests whether all the squares are clean.
o Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The 8-puzzle
An 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach the goal state, as shown in Figure 2.2.
Figure 2.2 A typical instance of 8-puzzle
o States : A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
o Initial state : Any state can be designated as the initial state. It can be noted that any
given goal can be reached from exactly half of the possible initial states.
o Successor function: This generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down).
o Goal test: This checks whether the state matches the goal configuration shown in the figure (other goal configurations are possible).
o Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This general class is known to be NP-complete. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
The 15-puzzle (4 x 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms.
The 24-puzzle (on a 5 x 5 board) has around 10^25 states, and random instances are still quite difficult to solve optimally with current machines and algorithms.
8-Queens problem
The goal of the 8-queens problem is to place 8 queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.)
Figure 2.3 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.
Figure 2.3 8-queens problem
A better formulation would prohibit placing a queen in any square that is already
attacked.
o States: Arrangements of n queens (0 <= n <= 8), one per column in the leftmost n columns, with no queen attacking another.
o Successor function: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
This formulation reduces the 8-queens state space from 3 x 10^14 to just 2,057, and solutions are easy to find.
For 100 queens the initial formulation has roughly 10^400 states, whereas the improved formulation has about 10^52 states. This is a huge reduction, but the improved state space is still too big for the algorithms to handle.
REAL-WORLD PROBLEMS
ROUTE-FINDING PROBLEM
AIRLINE TRAVEL PROBLEM
o States: Each is represented by a location (e.g., an airport) and the current time.
o Successor function: This returns the states resulting from taking any scheduled flight (further specified by seat class and location), leaving later than the current time plus the within-airport transit time, from the current airport to another.
o Path cost: This depends upon the monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.
TOURING PROBLEMS
As with route-finding the actions correspond to trips between adjacent cities. The state
space, however, is quite different.
Each state must include not just the current location but also the set of cities the agent has visited, e.g., "In(Bucharest), Visited({Bucharest, Urziceni, Vaslui})". The goal test would check whether the agent is in Bucharest and all cities have been visited.

THE TRAVELLING SALESPERSON PROBLEM (TSP) is a touring problem in which each city must be visited exactly once. The aim is to
find the shortest tour. The problem is known to be NP-hard. Enormous efforts have been
expended to improve the capabilities of TSP algorithms. These algorithms are also used in
tasks such as planning movements of automatic circuit-board drills and of stocking
machines on shop floors.
VLSI layout
ROBOT NAVIGATION

AUTOMATIC ASSEMBLY SEQUENCING
Examples include the assembly of intricate objects such as electric motors. The aim in assembly problems is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later without undoing some work already done. Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.
In recent years there has been increased demand for software robots that perform
Internet searching, looking for answers to questions, for related information, or for shopping
deals. The searching techniques consider the internet as a graph of nodes (pages) connected by links.
UNINFORMED SEARCH STRATEGIES
Uninformed search strategies use no additional information about states beyond that provided in the problem definition. (Strategies that know whether one non-goal state is "more promising" than another are called informed, or heuristic, search strategies.) The uninformed strategies are:
o Breadth-first search
o Uniform-cost search
o Depth-first search
o Depth-limited search
o Iterative deepening search
Breadth-first search
o Breadth-first search is a simple strategy in which the root node is expanded first, then
all successors of the root node are expanded next, then their successors, and so on. In
general, all the nodes are expanded at a given depth in the search tree before any
nodes at the next level are expanded.
Figure 2.5 Breadth-first search on a simple binary tree. At each stage, the node to be
expanded next is indicated by a marker.
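A minimal runnable sketch of this strategy over an explicit graph (the example graph is invented for illustration, not taken from the notes):

# Breadth-first search: expand the shallowest unexpanded node first,
# using a FIFO queue of paths.
from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])           # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):  # all successors at this depth
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "E": ["G"]}
print(breadth_first_search(graph, "A", "G"))   # ['A', 'B', 'E', 'G']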
Properties of breadth-first-search
Assume every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, we would expand all but the last node at level d, generating b^(d+1) - b nodes at level d+1.

Every node that is generated must remain in memory, because it is either part of the fringe or is an ancestor of a fringe node. The space complexity is, therefore, the same as the time complexity.
Uniform-cost search
Instead of expanding the shallowest node, uniform-cost search expands the node n with the lowest path cost. Uniform-cost search does not care about the number of steps a path has, but only about their total cost.
2.4 DEPTH-FIRST-SEARCH
Depth-first-search always expands the deepest node in the current fringe of the search
tree. The progress of the search is illustrated in Figure 1.31. The search proceeds immediately
to the deepest level of the search tree, where the nodes have no successors. As those nodes
are expanded, they are dropped from the fringe, so then the search “backs up” to the next
shallowest node that still has unexplored successors.
Figure 2.7 Depth-first search on a binary tree. Nodes that have been expanded and have no descendants in the fringe can be removed from memory; these are shown in black. Nodes at depth 3 are assumed to have no successors, and M is the only goal node.
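A corresponding recursive depth-first search sketch (again with an invented example graph):

# Depth-first search: always dive into the deepest unexplored node,
# backing up only when a branch is exhausted.
def depth_first_search(graph, node, goal, visited=None):
    if visited is None:
        visited = set()
    if node == goal:
        return [node]
    visited.add(node)
    for child in graph.get(node, []):
        if child not in visited:
            result = depth_first_search(graph, child, goal, visited)
            if result is not None:
                return [node] + result
    return None                           # back up to the next shallowest node

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(depth_first_search(graph, "A", "F"))   # ['A', 'C', 'F']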
For a state space with a branching factor b and maximum depth m, depth-first-search
requires storage of only bm + 1 nodes.
Using the same assumptions as Figure, and assuming that nodes at the same depth as
the goal node have no successors, we find the depth-first-search would require 118 kilobytes
instead of 10 petabytes, a factor of 10 billion times less space.
Drawback of Depth-first-search
The drawback of depth-first search is that it can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root of the search tree. For example, depth-first search will explore the entire left subtree even if node C is a goal node.
A variant of depth-first search called backtracking search uses less memory: only one successor is generated at a time rather than all successors, so only O(m) memory is needed rather than O(bm).
DEPTH-LIMITED-SEARCH
Depth-limited search will be nonoptimal if we choose l > d. Its time complexity is O(b^l) and its space complexity is O(bl). Depth-first search can be viewed as a special case of depth-limited search with l = ∞. Sometimes, depth limits can be based on knowledge of the problem. For example, on the map of Romania there are 20 cities. Therefore, we know that if there is a solution, it must be of length 19 at the longest, so l = 19 is a possible choice. However, it can be shown that any city can be reached from any other city in at most 9 steps. This number, known as the diameter of the state space, gives us a better depth limit.
Depth-limited search can be implemented as a simple modification to the general tree-search algorithm or to the recursive depth-first search algorithm. The pseudocode for recursive depth-limited search is shown in the figure.
It can be noted that the above algorithm can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit. Depth-limited search = depth-first search with depth limit l; it returns cutoff if any path is cut off by the depth limit.
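Since the original figure is not reproduced here, the recursive version can be sketched as follows (illustrative code, with the two failure values described above):

# Recursive depth-limited search: returns a path, "cutoff" (no solution
# within the limit), or None (standard failure: no solution at all).
def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None

graph = {"A": ["B", "C"], "B": ["D"], "D": ["G"]}
print(depth_limited_search(graph, "A", "G", 2))   # 'cutoff' (G is at depth 3)
print(depth_limited_search(graph, "A", "G", 3))   # ['A', 'B', 'D', 'G']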
Figure 2.10 The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found or if the depth-limited search returns failure, meaning that no solution exists.
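Iterative deepening then simply wraps the depth_limited_search sketch above in a loop over increasing limits (this snippet assumes that sketch and its example graph are in scope):

# Iterative deepening search: try limits 0, 1, 2, ... until a solution is
# found or depth-limited search returns definite failure (None).
def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":
            return result                 # a solution, or None = no solution
    return None

print(iterative_deepening_search(graph, "A", "G"))   # ['A', 'B', 'D', 'G']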
Figure 2.12 Iterative search is not as wasteful as it might seem
Properties of iterative deepening search
Bidirectional Search
The idea behind bidirectional search is to run two simultaneous searches – one
forward from the initial state and the other backward from the goal, stopping when the two
searches meet in the middle
The motivation is that b^(d/2) + b^(d/2) is much less than b^d; or, in the figure, the area of the two small circles is less than the area of one big circle centered on the start and reaching to the goal.
Figure 2.14 A schematic view of a bidirectional search that is about to succeed, when
a Branch from the Start node meets a Branch from the goal node.
• Before moving into bidirectional search, let's first understand a few terms.
• We must traverse the tree from the start node and the goal node, and wherever they meet, the path from the start node to the goal through the intersection is the optimal solution. The BS algorithm is applicable when generating predecessors is easy in both forward and backward directions and there exist only one or a few goal states.
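A runnable sketch of the idea (my illustration, on an invented undirected graph): two breadth-first frontiers expand alternately, and the two half-paths are stitched together where they meet:

# Bidirectional breadth-first search on an undirected graph.
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    front = {start: [start]}              # paths grown forward from the start
    back = {goal: [goal]}                 # paths grown backward from the goal
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        node = qf.popleft()               # expand one forward-frontier node
        for child in graph.get(node, []):
            if child in back:             # frontiers meet: stitch the paths
                return front[node] + back[child][::-1]
            if child not in front:
                front[child] = front[node] + [child]
                qf.append(child)
        node = qb.popleft()               # expand one backward-frontier node
        for child in graph.get(node, []):
            if child in front:
                return front[child] + back[node][::-1]
            if child not in back:
                back[child] = back[node] + [child]
                qb.append(child)
    return None

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(graph, "A", "E"))   # ['A', 'B', 'C', 'D', 'E']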
Figure 2.15 Comparing Uninformed Search Strategies
Figure 2.16 Evaluation of search strategies. b is the branching factor; d is the depth of the shallowest solution; m is the maximum depth of the search tree; l is the depth limit. Superscript caveats are as follows: (a) complete if b is finite; (b) complete if step costs >= ε for positive ε; (c) optimal if step costs are all identical; (d) if both directions use breadth-first search.
o No sensor
o Initial belief state: {1,2,3,4,5,6,7,8}
o After action [Right], the belief state is {2,4,6,8}
o After action [Suck], the belief state is {4,8}
o After action [Left], the belief state is {3,7}
o After action [Suck], the belief state is {7}
o Answer: [Right, Suck, Left, Suck] coerces the world into state 7 without any sensor
o Belief State: the set of states the agent believes it might be in
Figure 2.18 states and actions
Complexity is expressed in terms of three quantities: b, the branching factor; d, the depth of the shallowest goal node; and m, the maximum length of any path in the state space.
Search cost - typically depends upon the time complexity, but can also include a term for memory usage.
Total cost - combines the search cost and the path cost of the solution found.
2.15 INFORMED SEARCH AND EXPLORATION
Informed (Heuristic) Search Strategies
An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy.
Best-first search
Best-first search is an instance of general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation function f(n).
The node with lowest evaluation is selected for expansion, because the evaluation measures
the distance to the goal.
This can be implemented using a priority queue, a data structure that maintains the fringe in ascending order of f-values.
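To make this concrete, here is a minimal priority-queue skeleton (my sketch, not the textbook's pseudocode); the function f passed in determines which best-first strategy we get:

# Generic best-first search: the fringe is a heap ordered by f-values.
# graph maps node -> list of (neighbor, step_cost); f(node, g) orders the fringe.
import heapq

def best_first_search(graph, start, goal, f):
    frontier = [(f(start, 0), 0, [start])]    # (f-value, path cost g, path)
    explored = set()
    while frontier:
        _, g, path = heapq.heappop(frontier)  # lowest evaluation expands first
        node = path[-1]
        if node == goal:
            return path, g
        if node in explored:
            continue
        explored.add(node)
        for child, cost in graph.get(node, []):
            if child not in explored:
                heapq.heappush(frontier,
                               (f(child, g + cost), g + cost, path + [child]))
    return None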
2.16 HEURISTIC FUNCTIONS
A heuristic function, or simply a heuristic, is a function that ranks alternatives in various search algorithms at each branching step, based on the available information, in order to decide which branch to follow during a search.
The key component of Best-first search algorithm is a heuristic function, denoted by
h(n): h(n) = estimated cost of the cheapest path from node n to a goal node.
For example, in Romania, one might estimate the cost of the cheapest path from Arad to Bucharest by the straight-line distance from Arad to Bucharest (Figure 2.19).
Heuristic functions are the most common form in which additional knowledge is imparted to the search algorithm.
Greedy Best-first search
Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.
It evaluates the nodes by using the heuristic function f(n) = h(n).
Taking the example of Route-finding problems in Romania, the goal is to reach
Bucharest starting from the city Arad. We need to know the straight-line distances to
Bucharest from various cities as shown in Figure. For example, the initial state is
In(Arad),and the straight line distance heuristic hSLD (In(Arad)) is found to be 366.
Using the straight-line distance heuristic hSLD, the goal state can be reached faster.
Figure 2.19 Values of hSLD - straight line distances to Bucharest
Figure shows the progress of greedy best-first search using hSLD to find a path from
Arad to Bucharest. The first node to be expanded from Arad will be Sibiu, because it is closer
to Bucharest than either Zerind or Timisoara. The next node to be expanded will be Fagaras,
because it is closest. Fagaras in turn generates Bucharest, which is the goal.
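Plugging f(n) = h(n) into the best_first_search sketch above reproduces exactly this trace. The road costs and straight-line distances below are assumed to match the standard Romania map fragment:

# Romania fragment: node -> list of (neighbor, road cost).
romania = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Timisoara": [("Arad", 118)], "Zerind": [("Arad", 75)], "Bucharest": [],
}
# Straight-line distances to Bucharest (hSLD).
h_sld = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

greedy = lambda node, g: h_sld[node]      # f(n) = h(n)
print(best_first_search(romania, "Arad", "Bucharest", greedy))
# (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450)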
o Complete: No - it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt. It is complete in finite spaces with repeated-state checking.
o Time: O(b^m), but a good heuristic can give dramatic improvement
o Space: O(b^m) - keeps all nodes in memory
o Optimal: No

The worst-case time and space complexity is O(b^m), where m is the maximum depth of the search space.
A* SEARCH
A* search is the most widely used form of best-first search. The evaluation function f(n) is obtained by combining g(n), the cost to reach the node, and h(n), the estimated cost to get from the node to the goal: f(n) = g(n) + h(n).

A* search is optimal if h(n) is an admissible heuristic - that is, provided that h(n) never overestimates the cost to reach the goal.
The values of g are computed from the step costs shown in the Romania map (figure). The values of hSLD are given in Figure 2.19.
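Reusing the same sketch and Romania data from the greedy example, setting f(n) = g(n) + h(n) yields A*, which finds the cheaper route through Rimnicu Vilcea and Pitesti (total cost 418) rather than the greedy route through Fagaras (cost 450):

a_star = lambda node, g: g + h_sld[node]  # f(n) = g(n) + h(n)
print(best_first_search(romania, "Arad", "Bucharest", a_star))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)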
Figure 2.21 A* Search
o In many optimization problems, the path to the goal is irrelevant; the goal state itself
is the solution
o For example, in the 8-queens problem, what matters is the final configuration of
queens, not the order in which they are added.
o In such cases, we can use local search algorithms. They operate using a single
current state (rather than multiple paths) and generally move only to neighbors of
that state.
o The important applications of this class of problems include (a) integrated-circuit design, (b) factory-floor layout, (c) job-shop scheduling, (d) automatic programming, (e) telecommunications network optimization, (f) vehicle routing, and (g) portfolio management.
Key advantages of Local Search Algorithms
(1) They use very little memory – usually a constant amount; and
(2) they can often find reasonable solutions in large or infinite(continuous) state spaces
for which systematic algorithms are unsuitable.
2.18 OPTIMIZATION PROBLEMS
In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an objective
function.
State Space Landscape
To understand local search, it is better explained using state space landscape as
shown in Figure.
A landscape has both “location” (defined by the state) and “elevation” (defined by
the value of the heuristic cost function or objective function).
If elevation corresponds to cost, then the aim is to find the lowest valley – a global
minimum; if elevation corresponds to an objective function, then the aim is to find the
highest peak – a global maximum.
Local search algorithms explore this landscape. A complete local search algorithm
always finds a goal if one exists; an optimal algorithm always finds a global
minimum/maximum.
Hill-climbing search
function HILL-CLIMBING(problem) returns a state that is a local maximum
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
Figure 2.24 The hill-climbing search algorithm (steepest ascent version), which is
the most basic local search technique. At each step the current node is replaced
by the best neighbor; the neighbor with the highest VALUE. If the heuristic cost
estimate h is used, we could find the neighbor with the lowest h.
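The same loop can be run directly; below is a small, self-contained Python version (the objective function and integer neighborhood are invented purely for illustration):

# Steepest-ascent hill climbing: move to the best neighbor until no
# neighbor improves on the current state.
def hill_climbing(start, value, neighbors):
    current = start
    while True:
        best = max(neighbors(current), key=value)   # highest-valued successor
        if value(best) <= value(current):
            return current                          # a (possibly local) peak
        current = best

value = lambda x: -(x - 7) ** 2           # toy objective with its peak at x = 7
neighbors = lambda x: [x - 1, x + 1]      # integer neighborhood
print(hill_climbing(0, value, neighbors))   # 7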
Local maxima: a local maximum is a peak that is higher than each of its neighboring
states, but lower than the global maximum. Hill-climbing algorithms that reach the
vicinity of a local maximum will be drawn upwards towards the peak, but will then be
stuck with nowhere else to go
Plateaux: A plateau is an area of the state space landscape where the evaluation
function is flat. It can be a flat local maximum, from which no uphill exit exists, or a
shoulder, from which it is possible to make progress.
Figure 2.25 Illustration of why ridges cause difficulties for hill-climbing. The grid
of states(dark circles) is superimposed on a ridge rising from left to right,
creating a sequence of local maxima that are not directly connected to each other.
From each local maximum, all the available options point downhill.
Hill-climbing variations
Stochastic hill-climbing
o Random selection among the uphill moves.
o The selection probability can vary with the steepness of the uphill move.
First-choice hill-climbing
o Implements stochastic hill climbing by generating successors randomly until one that is better than the current state is found.
Random-restart hill-climbing
o Tries to avoid getting stuck in local maxima.
A hill-climbing algorithm that never makes "downhill" moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk - that is, moving to a successor chosen uniformly at random from the set of successors - is complete, but extremely inefficient.
Simulated annealing was first used extensively to solve VLSI layout problems in the early
1980s. It has been applied widely to factory scheduling and other large-scale optimization
tasks.
Genetic algorithms
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor states are generated by combining two parent states, rather than by modifying a single state.

Like beam search, GAs begin with a set of k randomly generated states, called the population. Each state, or individual, is represented as a string over a finite alphabet - most commonly, a string of 0s and 1s. For example, an 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, and so requires 8 x log2 8 = 24 bits.
The figure shows a population of four 8-digit strings representing 8-queens states. The production of the next generation of states is shown in the figure: in (b), each state is rated by the evaluation function, or fitness function; in (c), a random choice of two pairs is selected for reproduction, in accordance with the probabilities in (b).
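A compact, runnable GA sketch for 8-queens (my illustration; states here are 8-tuples, one row index per column, and the fitness function counts non-attacking pairs, max 28):

import random

def fitness(state):                       # non-attacking queen pairs (max 28)
    n = len(state)
    attacked = sum(1 for i in range(n) for j in range(i + 1, n)
                   if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return 28 - attacked

def reproduce(x, y):                      # single-point crossover
    c = random.randint(1, len(x) - 1)
    return x[:c] + y[c:]

def mutate(state):                        # move one queen to a random row
    s = list(state)
    s[random.randrange(len(s))] = random.randrange(8)
    return tuple(s)

def genetic_algorithm(pop_size=100, generations=1000):
    population = [tuple(random.randrange(8) for _ in range(8))
                  for _ in range(pop_size)]
    for _ in range(generations):
        weights = [max(fitness(s), 1) for s in population]  # fitness-proportional
        population = [mutate(reproduce(*random.choices(population, weights, k=2)))
                      for _ in range(pop_size)]
        best = max(population, key=fitness)
        if fitness(best) == 28:           # a full solution found
            return best
    return max(population, key=fitness)   # otherwise the best individual so far

print(genetic_algorithm())                # e.g. (4, 6, 0, 3, 1, 7, 5, 2)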
CONSTRAINT SATISFACTION PROBLEMS
A constraint satisfaction problem (CSP) is defined by a set of variables X1, X2, ..., Xn, each with a domain Di of possible values, and a set of constraints C1, C2, ..., Cm. Each constraint Ci involves some subset of the variables and specifies the allowable combinations of values for that subset.
Example for Constraint Satisfaction Problem
The figure shows the map of Australia with its states and territories. We are given the task of coloring each region either red, green, or blue in such a way that no neighboring regions have the same color. To formulate this as a CSP, we define the variables to be the regions: WA, NT, Q, NSW, V, SA, and T. The domain of each variable is the set {red, green, blue}. The constraint for a pair of neighboring regions, such as WA and NT, allows the pairs
{(red,green),(red,blue),(green,red),(green,blue),(blue,red),(blue,green)}.
The constraint can also be represented more succinctly as the inequality WA ≠ NT, provided the constraint satisfaction algorithm has some way to evaluate such expressions. There are many possible solutions, such as {WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=red}.
Figure 2.29 Principal states and territories of Australia. Coloring this map can be viewed as a constraint satisfaction problem. The goal is to assign colors to each region so that no neighboring regions have the same color.
Figure 2.30 Mapping Problem
Initial state: the empty assignment {}, in which all variables are unassigned.
Successor function: a value can be assigned to any unassigned variable, provided that it does not conflict with previously assigned variables.
Goal test: the current assignment is complete.
Path cost: a constant cost (e.g., 1) for every step.
Varieties of CSPs
(i) Discrete variables with finite domains

The simplest kind of CSP involves variables that are discrete and have finite domains. Map-coloring problems are of this kind. The 8-queens problem can also be viewed as a finite-domain CSP, where the variables Q1, Q2, ..., Q8 are the positions of each queen in columns 1, ..., 8, and each variable has the domain {1,2,3,4,5,6,7,8}. If the maximum domain size of any variable in a CSP is d, then the number of possible complete assignments is O(d^n) - that is, exponential in the number of variables. Finite-domain CSPs include Boolean CSPs, whose variables can be either true or false.

(ii) Infinite domains
Discrete variables can also have infinite domains - for example, the set of integers or the set of strings. With infinite domains, it is no longer possible to describe constraints by enumerating all allowed combinations of values. Instead, a constraint language of algebraic inequalities must be used, such as StartJob1 + 5 <= StartJob3.
CSPs with continuous domains are very common in the real world. For example, in the field of operations research, the scheduling of experiments on the Hubble Telescope requires
very precise timing of observations; the start and finish of each observation and manoeuvre
are continuous-valued variables that must obey a variety of astronomical, precedence and
power constraints. The best known category of continuous-domain CSPs is that of linear
programming problems, where the constraints must be linear inequalities forming a convex
region. Linear programming problems can be solved in time polynomial in the number of
variables.
Varieties of constraints
Figure 2.31 cryptarithmetic puzzles.
Figure 2.32 Cryptarithmetic puzzles-Solution
The term backtracking search is used for depth-first search that chooses values for
one variable at a time and backtracks when a variable has no legal values left to assign. The
algorithm is shown in figure
Figure 2.34 A simple backtracking algorithm for constraint satisfaction problem. The
algorithm is modeled on the recursive depth-first search
Figure 2.35 Part of search tree generated by simple backtracking for the map
coloring problem.
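As an illustration, here is a minimal backtracking solver for the Australia map-coloring CSP described above (the variable order and data layout are my assumptions):

# Backtracking search for map coloring: assign one variable at a time,
# backtrack when a variable has no legal values left.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def consistent(var, color, assignment):
    return all(assignment.get(n) != color for n in NEIGHBORS[var])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):
        return assignment                          # goal test: complete assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        if consistent(var, color, assignment):     # no conflict with neighbors
            assignment[var] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                    # no legal values below: backtrack
    return None

print(backtrack({}))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red', 'NSW': 'green', ...}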
Forward checking
One way to make better use of constraints during search is called forward checking.
Whenever a variable X is assigned, the forward checking process looks at each unassigned
variable Y that is connected to X by a constraint and deletes from Y ’s domain any value that
is inconsistent with the value chosen for X. Figure 2.36 shows the progress of a map-coloring search with forward checking.
Figure 2.36 The progress of a map-coloring search with forward checking. WA = red
is assigned first; then forward checking deletes red from the domains of the
neighboring variables NT and SA. After Q = green, green is deleted from the domain
of NT, SA, and NSW. After V = blue, blue is deleted from the domains of NSW and SA, leaving SA with no legal values.
Constraint propagation
Although forward checking detects many inconsistencies, it does not detect all of them.
Arc Consistency
Figure 2.38 Arc Consistency –CSP
k-Consistency
Independent Subproblems
Tree-Structured CSPs
ADVERSARIAL SEARCH
Competitive environments, in which the agents' goals are in conflict, give rise to adversarial search problems - often known as games.
Games
We will consider games with two players, whom we will call MAX and MIN. MAX
moves first, and then they take turns moving until the game is over. At the end of the game,
points are awarded to the winning player and penalties are given to the loser. A game can be
formally defined as a search problem with the following components:
o The initial state, which includes the board position and identifies the player to move.
o A successor function, which returns a list of (move, state) pairs, each indicating a
legal move and the resulting state.
o A terminal test, which describes when the game is over. States where the game has
ended are called terminal states.
o A utility function (also called an objective function or payoff function), which gives a numeric value for the terminal states. In chess, the outcome is a win, loss, or draw, with values +1, -1, or 0. The payoffs in backgammon range from +192 to -192.
Game Tree
The initial state and legal moves for each side define the game tree for the game.
Figure 2.18 shows part of the game tree for tic-tac-toe (noughts and crosses). From the initial state, MAX has nine possible moves. Play alternates between MAX placing an X and MIN placing an O until we reach leaf nodes corresponding to terminal states, such that one player has three in a row or all the squares are filled. The number on each leaf node indicates the utility value of the terminal state from the point of view of MAX; high values are assumed to be good for MAX and bad for MIN. It is MAX's job to use the search tree (particularly the utility of terminal states) to determine the best move.
Figure 2.41 A partial search tree. The top node is the initial state, and MAX
move first, placing an X in an empty square.
In a normal search problem, the optimal solution would be a sequence of moves leading to a goal state - a terminal state that is a win. In a game, on the other hand, MIN has something to say about it. MAX therefore must find a contingent strategy, which specifies MAX's move in the initial state, then MAX's moves in the states resulting from every possible response by MIN, then MAX's moves in the states resulting from every possible response by MIN to those moves, and so on. An optimal strategy leads to outcomes at least as good as any other strategy when one is playing an infallible opponent.
Figure 2.44 An algorithm for calculating minimax decisions. It returns the action
corresponding to the best possible move, that is, the move that leads to the outcome
with the best utility, under the assumption that the opponent plays to minimize
utility. The functions MAX-VALUE and MIN-VALUE go through the whole game
tree, all the way to the leaves, to determine the backed-up value of a state.
The minimax algorithm computes the minimax decision from the current state. It uses a simple recursive computation of the minimax values of each successor state, directly implementing the defining equations. The recursion proceeds all the way down to the leaves of the tree, and then the minimax values are backed up through the tree as the recursion unwinds. For example, in Figure 2.19, the algorithm first recurses down to the three bottom-left nodes and uses the utility function on them to discover that their values are 3, 12, and 8, respectively. Then it takes the minimum of these values, 3, and returns it as the backed-up value of node B. A similar process gives the backed-up values of 2 for C and 2 for D. Finally, we take the maximum of 3, 2, and 2 to get the backed-up value of 3 at the root node. The minimax algorithm performs a complete depth-first exploration of the game tree. If the maximum depth of the tree is m, and there are b legal moves at each point, then the time complexity of the minimax algorithm is O(b^m). The space complexity is O(bm) for an algorithm that generates all successors at once.
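The recursion described above is short enough to state directly in code. This sketch assumes the game tree is given explicitly as nested lists, using the leaf utilities from the example (3, 12, 8 under B; 2, 4, 6 under C; 14, 5, 2 under D):

# Minimax over an explicit game tree: leaves are utility values,
# internal nodes are lists of children; levels alternate MAX/MIN.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):    # leaf: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # root is MAX; B, C, D are MIN
print(minimax(tree, True))   # 3 = max(min(3,12,8), min(2,4,6), min(14,5,2))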
Alpha-Beta Pruning
The problem with minimax search is that the number of game states it has to examine
is exponential in the number of moves. Unfortunately, we can’t eliminate the exponent, but
we can effectively cut it in half. By pruning, we can eliminate large parts of the tree from consideration. The technique known as alpha-beta pruning, when applied to a minimax tree, returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
Alpha Beta pruning gets its name from the following two parameters that describe
bounds on the backed-up values that appear anywhere along the path:
o α : the value of the best (i.e., highest-value) choice we have found so far at any
choice point along the path of MAX.
o β: the value of best (i.e., lowest-value) choice we have found so far at any choice
point along the path of MIN.
Alpha-beta search updates the values of α and β as it goes along and prunes the remaining branches at a node (i.e., terminates the recursive call) as soon as the value of the current node is known to be worse than the current α or β value for MAX or MIN, respectively. The complete algorithm is given in the figure. The effectiveness of alpha-beta pruning is highly dependent on the order in which the successors are examined. It might be worthwhile to try to examine first the successors that are likely to be the best. In that case, it turns out that alpha-beta needs to examine only O(b^(d/2)) nodes to pick the best move, instead of O(b^d) for minimax. This means that the effective branching factor becomes sqrt(b) instead of b - for chess, 6 instead of 35. Put another way, alpha-beta can look ahead roughly twice as far as minimax in the same amount of time.
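A sketch of the same idea in code, over the explicit nested-list tree used in the minimax example (illustrative, not the textbook's MAX-VALUE/MIN-VALUE routines):

# Alpha-beta search: same value as minimax, but branches that cannot
# influence the decision are never examined.
def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # beta cutoff: MIN will avoid this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                     # alpha cutoff: MAX has a better option
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alpha_beta(tree))                   # 3, with some leaves never examined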
Figure 2.45 The alpha-beta search algorithm. These routines are the same as the minimax routines in Figure 2.20, except for the two lines in each of MIN-VALUE and MAX-VALUE that maintain α and β.
Alpha: Alpha is the best choice or the highest value that we have found at any
instance along the path of Maximizer. The initial value for alpha is – ∞.
Beta: Beta is the best choice or the lowest value that we have found at any instance along the path of the Minimizer. The initial value of beta is +∞.
Each node has to keep track of its alpha and beta values. Alpha can be updated only
when it’s MAX’s turn and, similarly, beta can be updated only when it’s MIN’s
chance.
MAX will update only alpha values and MIN player will update only beta values.
The node values will be passed to upper nodes, instead of the values of alpha and beta, when backing up the tree.
1. We will first start with the initial move. We will initially define the alpha and beta values as the worst case, i.e., α = -∞ and β = +∞. We will prune a node only when alpha becomes greater than or equal to beta.
Figure 2.46 Step 1 Alpha-beta Pruning
2. Since the initial value of alpha is less than beta, we do not prune. Now it is MAX's turn. So, at node D, the value of alpha will be calculated as max(2, 3). So, the value of alpha at node D will be 3.
3. Now the next move will be at node B, and it is MIN's turn. So, at node B, the value of beta will be min(3, +∞) = 3. So, at node B, alpha = -∞ and beta = 3.
In the next step, algorithms traverse the next successor of Node B which is node E,
and the values of α= -∞, and β= 3 will also be passed.
4. Now it is MAX's turn at node E. The current value of alpha at E is -∞, and it will be compared with 5. So, max(-∞, 5) = 5. So, at node E, alpha = 5 and beta = 3. Now we can see that alpha is greater than or equal to beta, which satisfies the pruning condition, so the right successor of node E is pruned and not traversed, and the value at node E will be 5.
6. In the next step, the algorithm again comes to node A from node B. At node A, alpha will be changed to the maximum value, max(-∞, 3) = 3. So now the values of alpha and beta at node A will be (3, +∞), and these will be transferred to node C. The same values will be transferred to node F.
7. At node F, the value of alpha will be compared with the left child, which is 0: max(0, 3) = 3. It is then compared with the right child, which is 1: max(3, 1) = 3. So α remains 3, but the node value of F becomes 1.
8. Now node F will return the node value 1 to C, where it will be compared with the beta value. It is MIN's turn, so min(+∞, 1) = 1. Now at node C, α = 3 and β = 1, and alpha is greater than beta, which again satisfies the pruning condition. So, the next successor of node C, i.e., G, will be pruned, and the algorithm does not compute the entire subtree G.
C will then return the node value 1 to A, and the best value of A will be max(1, 3) = 3.
The tree represented above is the final tree, showing the nodes that were computed and the nodes that were never computed. For this example, the optimal value for the maximizer is 3.
KNOWLEDGE REPRESENTATION
First Order Predicate Logic – Prolog Programming – Unification – Forward Chaining-
Backward Chaining – Resolution – Knowledge Representation – Ontological Engineering-
Categories and Objects – Events – Mental Events and Mental Objects – Reasoning Systems
for Categories – Reasoning with Default Information.
Propositional logic is a declarative language because its semantics is based on a truth relation
between sentences and possible worlds. It also has sufficient expressive power to deal with
partial information, using disjunction and negation.
First-Order Logic is a logic which is sufficiently expressive to represent a good deal of our
common sense knowledge.
FOL adopts the foundation of propositional logic with all its advantages to build a
more expressive logic on that foundation, borrowing representational ideas from natural
language while avoiding its drawbacks.
1. Nouns and noun phrases that refer to objects (squares, pits, wumpuses)
2. Verbs and verb phrases that refer to relations among objects (is breezy, is adjacent to)

Some of these relations are functions - relations in which there is only one "value" for a given "input". Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contains:
Objects: people, houses, numbers, colors, baseball games, wars, ...
Relations: red, round, prime, brother of, bigger than, part of, comes between, ...
3.1 SPECIFY THE SYNTAX OF FIRST-ORDER LOGIC IN BNF FORM
The domain of a model is the set of objects or domain elements it contains. The domain is required to be nonempty - every possible world must contain at least one object. Figure 8.2 shows a model with five objects: Richard the Lionheart, King of England from 1189 to 1199; his younger brother, the evil King John, who ruled from 1199 to 1215; the left legs of Richard and John; and a crown. The objects in the model may be related in various ways. In the figure, Richard and John are brothers. Formally speaking, a relation is just the set of tuples of objects that are related. (A tuple is a collection of objects arranged in a fixed order and is written with angle brackets surrounding the objects.) Thus, the brotherhood relation in this model is the set
{(Richard the Lionheart, King John), (King John, Richard the Lionheart)}.
The crown is on King John's head, so the "on head" relation contains just one tuple, (the crown, King John). The "brother" and "on head" relations are binary relations - that is, they relate pairs of objects. Certain kinds of relationships are best considered as functions, in that a given object must be related to exactly one object in this way. For example, each person has one left leg, so the model has a unary "left leg" function that includes the following mappings:
The five objects are Richard the Lionheart, the evil King John, Richard's left leg, John's left leg, and the crown.
The objects in the model may be related in various ways, In the figure Richard and
John are brothers.
Formally speaking, a relation is just the set of tuples of objects that are related.
A tuple is a collection of Objects arranged in a fixed order and is written with angle
brackets surrounding the objects.
Thus, the brotherhood relation in this model is the set {(Richard the Lionheart, King
John),(King John, Richard the Lionheart)}
The crown is on King John’s head, so the “on head” relation contains just one tuple,
(the crown, King John).
o A relation can be a binary relation relating pairs of objects (Ex: "Brother") or a unary relation representing a property of a single object (Ex: "Person", representing both Richard and John).
Certain kinds of relationships are best considered as functions that relates an object to
exactly one object.
For example, each person has one left leg, so the model has a unary "left leg" function that includes the following mappings:
(Richard the Lionheart) → Richard's left leg
(King John) → John's left leg
The basic syntactic elements of first-order logic are the symbols that stand for
objects, relations and functions
Kinds of Symbols
The symbols come in three kinds: Constant Symbols stand for objects (Ex:- Richard, John); Predicate Symbols stand for relations (Ex:- King); Function Symbols stand for functions (Ex:- Left Leg).
o Symbols will begin with uppercase letters.
o The choice of names is entirely up to the user.
o Each predicate and function symbol comes with an arity that fixes the number of arguments.
The intended interpretation is as follows:
Richard refers to Richard the Lion heart and John refers to the evil King John.
Brother refers to the brotherhood relation, that is the set of tuples of objects given in
equation {(Richard the Lionheart, King John),(King John, Richard the
Lionheart)}
On Head refers to the “on head” relation that holds between the crown and King John;
Person, King and Crown refer to the set of objects that are persons, kings and crowns.
Left Leg refers to the "left leg" function, that is, the mapping {(Richard the Lionheart) → Richard's left leg, (King John) → John's left leg}.
Term: A term is a logical expression that refers to an object. Constant symbols
are therefore terms, but it is not always convenient to have a distinct symbol to name every
object. For example, in English we might use the expression “King John’s left leg” rather
than giving a name to his leg. This is what function symbols are for: instead of using a
constant symbol, we use Left Leg (John). The formal semantics of terms is straightforward.
Consider a term f(t1, . . . , tn). The function symbol f refers to some function in the model.
Atomic sentences: An atomic sentence (or atom for short) is formed from a predicate symbol
optionally followed by a parenthesized list of terms, such as Brother (Richard, John). Atomic
sentences can have complex terms as arguments. Thus, Married (Father (Richard),Mother
(John)) states that Richard the Lionheart’s father is married to King John’s mother.
Complex Sentences
We can use logical connectives to construct more complex sentences, with the same
syntax and semantics as in propositional calculus
Quantifiers
∀x King(x) ⇒ Person(x)
∀ is usually pronounced “For all. . .” Thus, the sentence says, “For all x, if x is a king,
then x is a person.” The symbol x is called a variable. A term with no variables is called a
ground term.
Consider the model shown in Figure 8.2 and the intended interpretation that goes with it. We can extend the interpretation in five ways, mapping the variable x to each object in turn: x → Richard the Lionheart, x → King John, x → Richard's left leg, x → John's left leg, x → the crown.
The universally quantified sentence ∀ x King(x) ⇒ Person(x) is true in the original
model if the sentence King(x) ⇒ Person(x) is true under each of the five extended
interpretations. That is, the universally quantified sentence is equivalent to asserting the
following five sentences:
Universal quantification makes statements about every object. Similarly, we can make a statement about some object in the universe without naming it, by using an existential quantifier. To say, for example, that King John has a crown on his head, we write
∃x Crown(x) ∧ OnHead(x, John)
The fifth assertion is true in the model, so the original existentially quantified sentence is true in the model. Just as ⇒ appears to be the natural connective to use with ∀, ∧ is the natural connective to use with ∃.
Using ∧ as the main connective with ∀ led to an overly strong statement in the example in the previous section; using ⇒ with ∃ usually leads to a very weak statement, indeed. Consider the following sentence:
∃x Crown(x) ⇒ OnHead(x, John)
Applying the semantics, we see that the sentence says that at least one of the
following assertions is true:
and so on. Now an implication is true if both premise and conclusion are true, or if its
premise is false. So if Richard the Lionheart is not a crown, then the first assertion is true and
the existential is satisfied. So, an existentially quantified implication sentence is true
whenever any object fails to satisfy the premise
Nested quantifiers
We will often want to express more complex sentences using multiple quantifiers.
The simplest case is where the quantifiers are of the same type. For example, "Brothers are siblings" can be written as ∀x, y Brother(x, y) ⇒ Sibling(x, y).
Consecutive quantifiers of the same type can be written as one quantifier with several
variables. For example, to say that siblinghood is a symmetric relationship, we can write
∀x, y Sibling(x, y) ⇔ Sibling(y, x). In other cases we will have mixtures. "Everybody loves somebody" means that for every person, there is someone that person loves: ∀x ∃y Loves(x, y).
On the other hand, to say "There is someone who is loved by everyone," we write ∃y ∀x Loves(x, y).
∀x (∃ y Loves(x, y)) says that everyone has a particular property, namely, the property
that they love someone. On the other hand,
∃y (∀ x Loves(x, y)) says that someone in the world has a particular property, namely
the property of being loved by everybody.
The two quantifiers are actually intimately connected with each other, through
negation. Asserting that everyone dislikes parsnips is the same as asserting there does not exist someone who likes them, and vice versa: ∀x ¬Likes(x, Parsnips) is equivalent to ¬∃x Likes(x, Parsnips).
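Because the model is finite, these quantifier semantics can be checked by direct enumeration. The following short Python sketch is our own illustration, not from the text: the set contents are assumed interpretations (King is taken to hold only for John), and it evaluates the example sentences over the five-object domain.

# Checking quantified sentences by enumerating a finite domain (illustrative).
domain = ["Richard", "John", "RichardsLeftLeg", "JohnsLeftLeg", "TheCrown"]
persons = {"Richard", "John"}               # extension of Person(x)
kings = {"John"}                            # extension of King(x) (assumed)
crowns = {"TheCrown"}                       # extension of Crown(x)
on_head = {("TheCrown", "John")}            # extension of OnHead(x, y)

def forall(pred):
    # Universal quantification: pred must hold under every extended interpretation
    return all(pred(x) for x in domain)

def exists(pred):
    # Existential quantification: pred must hold for at least one object
    return any(pred(x) for x in domain)

# For all x, King(x) => Person(x); an implication is "not P or Q"
print(forall(lambda x: (x not in kings) or (x in persons)))            # True

# There exists a crown on John's head: Crown(x) and OnHead(x, John)
print(exists(lambda x: (x in crowns) and ((x, "John") in on_head)))    # True

# Using "and" with the universal quantifier is overly strong, as the text warns
print(forall(lambda x: (x in kings) and (x in persons)))               # False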
Equality
We can use the equality symbol to signify that two terms refer to the same object. For
example,
Father (John)=Henry
says that the object referred to by Father (John) and the object referred to by Henry are the
same.
The equality symbol can be used to state facts about a given function, as we just did
for the Father symbol. It can also be used with negation to insist that two terms are not the
same object. To say that Richard has at least two brothers, we would write ∃x, y Brother(x, Richard) ∧ Brother(y, Richard) ∧ ¬(x = y).
Figure 3.2 Formal languages and their ontological and epistemological commitments
The basic syntactic elements of first-order logic are the symbols that stand for objects, relations, and functions. The symbols come in three kinds: constant symbols, which stand for objects; predicate symbols, which stand for relations; and function symbols, which stand for functions. We adopt the convention that these symbols will begin with uppercase letters.
Quantifiers
Universal quantification
Thus, the sentence says, "For all x, if x is a king, then x is a person." The symbol x is called a variable (written with lowercase letters).
The sentence ∀x P, where P is a logical expression, says that P is true for every object x.
The universally quantified sentence is equivalent to asserting the following five sentences
Existential quantification
It is possible to make a statement about some object in the universe without naming it, by using an existential quantifier.
Example: ∃x Crown(x) ∧ OnHead(x, John)
Nested Quantifiers
For example, "Brothers are siblings" can be written as ∀x, y Brother(x, y) ⇒ Sibling(x, y).
Consecutive quantifiers of the same type can be written as one quantifier with several variables.
To specify the agent's task, we specify its percepts, actions, and goals. In the wumpus
world, these are as follows:
• In the square containing the wumpus and in the directly (not diagonally)
adjacent squares the agent will perceive a stench.
• In the squares directly adjacent to a pit, the agent will perceive a breeze.
• In the square where the gold is, the agent will perceive a glitter.
• When an agent walks into a wall, it will perceive a bump.
• When the wumpus is killed, it gives out a woeful scream that can be perceived
anywhere in the cave.
• The percepts will be given to the agent in the form of a list of five symbols;
for example, if there is a stench, a breeze, and a glitter but no bump and no
scream, the agent will receive the percept [Stench, Breeze, Glitter, None,
None]. The agent cannot perceive its own location.
• Just as in the vacuum world, there are actions to go forward, turn right by 90°,
and turn left by 90°. In addition, the action Grab can be used to pick up an
object that is in the same square as the agent. The action Shoot can be used to
fire an arrow in a straight line in the direction the agent is facing. The arrow
continues until it either hits and kills the wumpus or hits the wall. The agent
only has one arrow, so only the first Shoot action has any effect.
The wumpus agent receives a percept vector with five elements. The corresponding
first order sentence stored in the knowledge base must include both the percept and the time
at which it occurred; otherwise, the agent will get confused about when it saw what. We use
integers for time steps. A typical percept sentence would be
Percept([Stench, Breeze, Glitter, None, None], 5)
Here, Percept is a binary predicate, and Stench and so on are constants placed in a list. The actions in the wumpus world can be represented by logical terms such as Turn(Right), Turn(Left), Forward, Shoot, Grab, and Climb. To choose an action, the agent program asks the knowledge base a query such as ASKVARS(∃a BestAction(a, 5)), which returns a binding list such as {a/Grab}. The agent program can then return Grab as the action to take. The raw percept data implies certain facts about the current state. For example:
These rules exhibit a trivial form of the reasoning process called perception. Simple
“reflex” behavior can also be implemented by quantified implication sentences.
Given the percept and rules from the preceding paragraphs, this would yield the
desired conclusion
BestAction(Grab, 5), that is, Grab is the right thing to do. For example, if the agent is at a square and perceives a breeze, then that square is breezy:
∀s, t At(Agent, s, t) ∧ Breeze(t) ⇒ Breezy(s)
It is useful to know that a square is breezy because we know that the pits cannot move about. Notice that Breezy has no time argument. Having discovered which places are breezy (or smelly) and, very important, not breezy (or not smelly), the agent can deduce where the pits are (and where the wumpus is). First-order logic just needs one axiom:
∀s Breezy(s) ⇔ ∃r Adjacent(r, s) ∧ Pit(r)
3.3 SUBSTITUTION
The rule of Universal Instantiation (UI for short) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the variable. Let SUBST(θ, α) denote the result of applying the substitution θ to the sentence α. Then the rule is written: from ∀v α, infer SUBST({v/g}, α) for any variable v and ground term g.
In the rule for Existential Instantiation, the variable is replaced by a single new constant symbol. The formal statement is as follows: for any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base, from ∃v α, infer SUBST({v/k}, α).
For example, from ∃x Crown(x) ∧ OnHead(x, John) we can infer the sentence Crown(C1) ∧ OnHead(C1, John), provided C1 does not appear elsewhere in the knowledge base.
EXAMPLE
Then we apply UI to the first sentence using all possible ground-term substitutions from the vocabulary of the knowledge base—in this case, {x/John} and {x/Richard}—and we discard the universally quantified sentence. Now, the knowledge base is essentially propositional if we view the ground atomic sentences King(John), Greedy(John), and so on as proposition symbols.
3.4 UNIFICATION
Lifted inference rules require finding substitutions that make different logical
expressions look identical. This process is called unification and is a key component of all
first- order inference algorithms. The UNIFY algorithm takes two sentences and returns a
unifier for them if one exists: UNIFY(p, q)=θ where SUBST(θ, p)= SUBST(θ, q).
Here are the results of unification with four different sentences that might be in the knowledge base:
UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
UNIFY(Knows(John, x), Knows(y, Bill)) = {x/Bill, y/John}
UNIFY(Knows(John, x), Knows(y, Mother(y))) = {y/John, x/Mother(John)}
UNIFY(Knows(John, x), Knows(x, Elizabeth)) = fail
The last unification fails because x cannot take on the values John and Elizabeth at the same time. Now, remember that Knows(x, Elizabeth) means "Everyone knows Elizabeth," so
we should be able to infer that John knows Elizabeth. The problem arises only because the
two sentences happen to use the same variable name, x. The problem can be avoided by
standardizing apart one of the two sentences being unified, which means renaming its
variables to avoid name clashes. For example, we can rename x in
Knows(x, Elizabeth) to x17 (a new variable name) without changing its meaning.
For example, UNIFY(Knows(John, x), Knows(y, z)) could return {y/John, x/z} or {y/John, x/John, z/John}.
The first unifier gives Knows(John, z) as the result of unification, whereas the second
gives Knows(John, John). The second result could be obtained from the first by an
additional substitution {z/John}; we say that the first unifier is more general than the second,
because it places fewer restrictions on the values of the variables. An algorithm for
computing most general unifiers is shown in Figure.
The process is simple: recursively explore the two expressions simultaneously “side
by side,” building up a unifier along the way, but failing if two corresponding points in the
structures do not match. There is one expensive step: when matching a variable against a
complex term, one must check whether the variable itself occurs inside the term; if it does,
the match fails because no consistent unifier can be constructed.
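The following is a minimal Python sketch of such a unification procedure, not the textbook's exact algorithm: variables are represented as lowercase strings, compound terms as tuples, and the occur check is included as described above.

def is_variable(t):
    # Convention from the notes: variables are lowercase (x, y, z17, ...)
    return isinstance(t, str) and t[0].islower()

def occurs_in(var, term, subst):
    # Occur check: does var appear inside term (after applying subst)?
    if var == term:
        return True
    if is_variable(term) and term in subst:
        return occurs_in(var, subst[term], subst)
    if isinstance(term, tuple):
        return any(occurs_in(var, arg, subst) for arg in term)
    return False

def unify(x, y, subst=None):
    # Return a most general unifier extending subst, or None on failure.
    if subst is None:
        subst = {}
    if x == y:
        return subst
    if is_variable(x):
        return unify_var(x, y, subst)
    if is_variable(y):
        return unify_var(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):        # explore both structures side by side
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None                         # mismatched constants or structure

def unify_var(var, term, subst):
    if var in subst:
        return unify(subst[var], term, subst)
    if is_variable(term) and term in subst:
        return unify(var, subst[term], subst)
    if occurs_in(var, term, subst):
        return None                     # occur check fails: no consistent unifier
    return {**subst, var: term}

# Knows(John, x) unified with Knows(John, Jane) gives {x/Jane}
print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))    # {'x': 'Jane'}
# Knows(John, x) and Knows(x, Elizabeth) fail without standardizing apart
print(unify(("Knows", "John", "x"), ("Knows", "x", "Elizabeth"))) # None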
In artificial intelligence, forward and backward chaining are important topics; before examining them, let us first see where the two terms come from.
Inference engine: the component of a knowledge-based system that applies logical rules to the knowledge base. It works in two modes:
a. Forward chaining
b. Backward chaining
Horn clauses and definite clauses are forms of sentences that enable a knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require the KB to be in the form of first-order definite clauses.
Definite clause: A clause which is a disjunction of literals with exactly one positive
literal is known as a definite clause or strict horn clause.
Horn clause: A clause which is a disjunction of literals with at most one positive
literal is known as horn clause. Hence all the definite clauses are horn clauses.
A. Forward Chaining
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward-Chaining
o The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
Consider the following famous example which we will use in both approaches:
Example
"As per the law, it is a crime for an American to sell weapons to hostile nations.
Country A, an enemy of America, has some missiles, and all the missiles were sold to it by
Robert, who is an American citizen."
To solve the above problem, first, we will convert all the above facts into first-order
definite clauses, and then we will use a forward-chaining algorithm to reach the goal.
o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and
r are variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) …(1)
o Country A has some missiles.
Owns(A, T1) …(2)
Missile(T1) …(3)
o All of the missiles were sold to country A by Robert.
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) …(4)
o Missiles are weapons.
Missile(p) → Weapon(p) …(5)
o An enemy of America is known as hostile.
Enemy(p, America) → Hostile(p) …(6)
o Country A is an enemy of America.
Enemy(A, America) …(7)
o Robert is American.
American(Robert) …(8)
Forward chaining proof
Step-1
In the first step we will start with the known facts and will choose the sentences which
do not have implications, such as: American (Robert), Enemy(A, America), Owns(A, T1),
and Missile(T1). All these facts will be represented as below.
Figure 3.5
Step-2
At the second step, we will see those facts which infer from available facts and with
satisfied premises.
Rule-(1) does not have all its premises satisfied yet, so it is not fired in the first iteration.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of facts (2) and (3).
Rule-(5) is satisfied with the substitution {p/T1}, so Weapon(T1) is added.
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred from fact (7).
Figure 3.6
Step-3
At step-3, as we can check, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. And hence we reached our goal statement.
Figure 3.7
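The three steps above can be reproduced with a few lines of code. Below is a minimal Python sketch of the data-driven loop; to keep it short, the rules are already instantiated with the substitutions found above, so it illustrates the control strategy rather than full first-order matching.

# Minimal sketch of data-driven forward chaining (ground/propositionalized form).
# Rules are (premises, conclusion) pairs; facts grow until no rule fires.
rules = [
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),                                      # Rule (1)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),     # Rule (4)
    ({"Missile(T1)"}, "Weapon(T1)"),                           # Rule (5)
    ({"Enemy(A,America)"}, "Hostile(A)"),                      # Rule (6)
]
facts = {"American(Robert)", "Enemy(A,America)", "Owns(A,T1)", "Missile(T1)"}

changed = True
while changed:                       # repeat until no new conclusions appear
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)    # premises satisfied: add the conclusion
            changed = True

print("Criminal(Robert)" in facts)  # True: the goal is reached, as in Step-3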
B. Backward Chaining
Example
In backward chaining, we will use the same example above, with the same set of definite clauses (1)-(8).
Backward-Chaining proof
In Backward chaining, we will start with our goal predicate, which is Criminal
(Robert), and then infer further rules.
Step-1
At the first step, we will take the goal fact. From the goal fact, we will infer other facts, and at last, we will prove those facts true. So our goal fact is "Robert is Criminal," and the corresponding predicate is Criminal(Robert).
Step-2
At the second step, we will infer other facts from the goal fact which satisfy the rules. As we can see in Rule-(1), the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So we will add all the conjunctive facts below the first level and will replace p with Robert.
Figure 3.8
Step-3
At step-3, we will extract the further fact Missile(q), which is inferred from Weapon(q), as it satisfies Rule-(5). Weapon(q) is also true with the substitution of the constant T1 for q.
Figure 3.9
Step-4
At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4) with the substitution of A in place of r. So these two statements are proved here.
Figure 3.10
Step-5
At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6). And hence all the statements are proved true using backward chaining.
Figure 3.11
Suppose you have a production system with the four rules R1: IF A AND C THEN F; R2: IF A AND E THEN G; R3: IF B THEN E; R4: IF G THEN D, and you have the initial facts A, B, C. Show that if A and B are true then D is true. Explain what is meant by "forward chaining", and show explicitly how it can be used in this case to determine new facts (see the sketch below).
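As a sketch of the expected answer (assuming the initial facts are A, B, and C), the same data-driven loop derives D:

rules = [({"A", "C"}, "F"),   # R1
         ({"A", "E"}, "G"),   # R2
         ({"B"}, "E"),        # R3
         ({"G"}, "D")]        # R4
facts = {"A", "B", "C"}

changed = True
while changed:
    changed = False
    for prem, concl in rules:
        if prem <= facts and concl not in facts:
            facts.add(concl)   # R3 fires E, then R2 fires G, then R4 fires D
            changed = True

print(sorted(facts))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G']: D is derived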
3.5 RESOLUTION IN FOL
Resolution
Resolution is used when various statements are given and we need to prove a conclusion from those statements. Unification is a key concept in proofs by resolution. Resolution is a single inference rule which can efficiently operate on the conjunctive normal form or clausal form.
Clause: A disjunction of literals (atomic sentences or their negations) is called a clause. A clause containing a single literal is known as a unit clause.
To better understand all the above steps, we will take an example in which we will
apply resolution.
Example
a. John likes all kinds of food.
b. Apples and vegetables are food.
c. Anything anyone eats without being killed is food.
d. Anil eats peanuts and is still alive.
e. Harry eats everything that Anil eats. Prove by resolution that:
f. John likes peanuts.
In the first step we will convert all the given statements into first-order logic.
o Move negation (¬) inwards and rewrite:
1. ∀x ¬ food(x) V likes(John, x)
2. food(Apple) Λ food(vegetables)
3. ∀x ∀y ¬ eats(x, y) V killed(x) V food(y)
4. eats (Anil, Peanuts) Λ alive(Anil)
5. ∀x ¬ eats(Anil, x) V eats(Harry, x)
6. ∀x ¬killed(x) V alive(x)
7. ∀x ¬ alive(x) V ¬ killed(x)
8. likes(John, Peanuts).
o Drop universal quantifiers. In this step we will drop all universal quantifiers, since all the statements are implicitly universally quantified and we do not need to write them explicitly.
1. ¬ food(x) V likes(John, x)
2. food(Apple)
3. food(vegetables)
4. ¬ eats(y, z) V killed(y) V food(z)
5. eats (Anil, Peanuts)
6. alive(Anil)
7. ¬ eats(Anil, w) V eats(Harry, w)
8. killed(g) V alive(g)
9. ¬ alive(k) V ¬ killed(k)
10. likes(John, Peanuts).
o Distribute conjunction ∧ over disjunction ∨. This step will not make any change in this problem.
Step-3: Negate the statement to be proved
In this step, we will apply negation to the conclusion statement, which will be written as ¬likes(John, Peanuts).
o Now in this step, we will solve the problem by resolution tree using substitution. For
the above problem, it will be given as follows:
Figure 3.12
o In the first step of resolution graph, ¬likes(John, Peanuts), and likes(John, x) get
resolved (canceled) by substitution of {Peanuts/x}, and we are left with ¬
food(Peanuts)
o In the second step of the resolution graph, ¬ food (Peanuts), and food(z) get resolved
(canceled) by substitution of { Peanuts/z}, and we are left with ¬ eats(y, Peanuts) V
killed(y).
o In the third step of the resolution graph, ¬ eats(y, Peanuts) and eats (Anil, Peanuts) get
resolved by substitution {Anil/y}, and we are left with Killed(Anil).
o In the fourth step of the resolution graph, Killed(Anil) and ¬ killed(k) get resolved by
substitution {Anil/k}, and we are left with ¬ alive(Anil).
o In the last step of the resolution graph ¬ alive(Anil) and alive(Anil) get resolved.
Table 3.1
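The whole refutation can also be mechanized. The sketch below is a minimal propositional resolution loop in Python over the ground instances from Figure 3.12; the unifying substitutions are applied in advance, which is a simplification of full first-order resolution.

# Minimal resolution refutation over ground clauses (frozensets of literals;
# "~" marks negation). Clause numbers refer to the list after dropping quantifiers.
def resolve(c1, c2):
    # Return all resolvents obtained by cancelling one complementary pair.
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def refute(clauses):
    # Saturate by resolution; True if the empty clause (contradiction) appears.
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolve(a, b):
                        if not r:          # empty clause derived
                            return True
                        new.add(r)
        if new <= clauses:                 # nothing new: cannot refute
            return False
        clauses = clauses | new

clauses = {
    frozenset({"~likes(John,Peanuts)"}),                                 # negated goal
    frozenset({"~food(Peanuts)", "likes(John,Peanuts)"}),                # clause 1, ground
    frozenset({"~eats(Anil,Peanuts)", "killed(Anil)", "food(Peanuts)"}), # clause 4, ground
    frozenset({"eats(Anil,Peanuts)"}),                                   # clause 5
    frozenset({"alive(Anil)"}),                                          # clause 6
    frozenset({"~killed(Anil)", "~alive(Anil)"}),                        # clause 9, ground
}
print(refute(clauses))   # True: likes(John, Peanuts) is proved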
3.6 ONTOLOGICAL ENGINEERING
Concepts such as Events, Time, Physical Objects, and Beliefs occur in many different domains. Representing these abstract concepts is sometimes called ontological engineering.
Figure 3.13 The upper ontology of the world, showing the topics to be
covered later in the chapter. Each link indicates that the lower concept is a
specialization of the upper one. Specializations are not necessarily disjoint;
a human is both an animal and an agent, for example.
For example, a shopper would normally have the goal of buying a basketball, rather than a particular basketball such as BB9. There are two choices for representing categories in first-order logic: predicates and objects. That is, we can use the predicate Basketball(b), or we can reify the category as an object, Basketballs.
• An object is a member of a category. BB9 ∈ Basketballs
• A category is a subclass of another category. Basketballs ⊂ Balls
• All members of a category have some properties.
Notice that because Dogs is a category and is a member of DomesticatedSpecies, the latter must be a category of categories. Categories can also be defined by providing necessary and sufficient conditions for membership. For example, a bachelor is an unmarried adult male:
x ∈ Bachelors ⇔ Unmarried(x) ∧ x ∈ Adults ∧ x ∈ Males
Physical Composition
We use the general PartOf relation to say that one thing is part of another. Objects can be grouped into PartOf hierarchies, reminiscent of the Subset hierarchy. For a set of objects s, the composite object BunchOf(s) has the elements of s as parts:
∀x x ∈ s ⇒ PartOf(x, BunchOf(s))
Furthermore, BunchOf(s) is the smallest object satisfying this condition. In other words, BunchOf(s) must be part of any object that has all the elements of s as parts:
Measurements
In both scientific and commonsense theories of the world, objects have height, mass,
cost, and so on. The values that we assign for these properties are called measures.
Length(L1)=Inches(1.5)=Centimeters(3.81)
Similar axioms can be written for pounds and kilograms, seconds and days, and
dollars and cents. Measures can be used to describe objects as follows:
Diameter (Basketball12)=Inches(9.5)
ListPrice(Basketball12)=$(19)
d∈ Days ⇒ Duration(d)=Hours(24)
Time Intervals
Event calculus opens us up to the possibility of talking about time, and time intervals.
We will consider two kinds of time intervals: moments and extended intervals. The
distinction is that only moments have zero duration:
Partition({Moments, ExtendedIntervals}, Intervals)
i ∈ Moments ⇔ Duration(i) = Seconds(0)
The functions Begin and End pick out the earliest and latest moments in an interval,
and the function Time delivers the point on the time scale for a moment.
The function Duration gives the difference between the end time and the start time.
Two intervals Meet if the end time of the first equals the start time of the second. The complete set of interval relations, as proposed by Allen (1983), is shown graphically in Figure 12.2 and logically below:
Meet(i, j) ⇔ End(i) = Begin(j)
Before(i, j) ⇔ End(i) < Begin(j)
After(j, i) ⇔ Before(i, j)
During(i,j) ⇔ Begin(j) < Begin(i) < End(i) < End(j)
Overlap(i,j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i,j) ⇔ Begin(i) = Begin(j)
Finishes(i,j) ⇔ End(i) = End(j)
Equals(i, j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)
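Representing intervals as (begin, end) pairs, these relations translate directly into code. Below is a minimal Python sketch of our own, using the reigns of Richard and John as illustrative time points; the definitions follow the formulas above exactly.

# Allen's interval relations over (begin, end) pairs (illustrative sketch).
def meet(i, j):     return i[1] == j[0]
def before(i, j):   return i[1] < j[0]
def after(j, i):    return before(i, j)
def during(i, j):   return j[0] < i[0] < i[1] < j[1]
def overlap(i, j):  return i[0] < j[0] < i[1] < j[1]
def begins(i, j):   return i[0] == j[0]
def finishes(i, j): return i[1] == j[1]
def equals(i, j):   return i[0] == j[0] and i[1] == j[1]

reign_richard = (1189, 1199)   # years used as time points
reign_john = (1199, 1215)
print(meet(reign_richard, reign_john))    # True: John's reign starts as Richard's ends
print(before(reign_richard, reign_john))  # False: they meet, not strictly before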
3.7 EVENTS
Event calculus reifies fluents and events. The fluent At(Shankar, Berkeley) is an object that refers to the fact of Shankar being in Berkeley, but does not by itself say anything about whether it is true. To assert that a fluent is actually true at some point in time we use the predicate T, as in T(At(Shankar, Berkeley), t). Events are described as instances of event categories. The event E1 of Shankar flying from San Francisco to Washington, D.C. is described as
E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)
Alternatively, we can define a three-argument version of the category of flying events and say E1 ∈ Flyings(Shankar, SF, DC). We then use Happens(E1, i) to say that the event E1 took place over the time interval i, and we say the same thing in functional form with Extent(E1) = i. We represent time intervals by a (start, end) pair of times; that is, i = (t1, t2) is the time interval that starts at t1 and ends at t2. The complete set of predicates for one version of the event calculus is:
T(f, t): Fluent f is true at time t
Happens(e, i): Event e happens over the time interval i
Initiates(e, f, t): Event e causes fluent f to start to hold at time t
Terminates(e, f, t): Event e causes fluent f to cease to hold at time t
Clipped(f, i): Fluent f ceases to be true at some point during time interval i
Restored(f, i): Fluent f becomes true sometime during time interval i
We assume a distinguished event, Start, that describes the initial state by saying which fluents are initiated or terminated at the start time. We define T by saying that a fluent holds at a point in time if the fluent was initiated by an event at some time in the past and was not made false (clipped) by an intervening event. A fluent does not hold if it was terminated by an event and not made true (restored) by another event. Formally, the axioms are:
Happens(e, (t1, t2)) ∧ Initiates(e, f, t1) ∧ ¬Clipped(f, (t1, t)) ∧ t1 < t ⇒ T(f, t)
Happens(e, (t1, t2)) ∧ Terminates(e, f, t1) ∧ ¬Restored(f, (t1, t)) ∧ t1 < t ⇒ ¬T(f, t)
where Clipped and Restored are defined by
Clipped(f, (t1, t2)) ⇔ ∃e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Terminates(e, f, t)
Restored(f, (t1, t2)) ⇔ ∃e, t, t3 Happens(e, (t, t3)) ∧ t1 ≤ t < t2 ∧ Initiates(e, f, t)
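The T axiom can be read as a simple computation over event records. The following Python sketch is our own illustration (the fluent names and times are assumed): it checks whether a fluent holds at a time point, given lists of initiations and terminations.

# Minimal sketch of the T(f, t) axiom: a fluent holds at t if some event
# initiated it earlier and no intervening event terminated (clipped) it.
initiations = [("At(Shankar,Berkeley)", 1),    # (fluent, time initiated)
               ("At(Shankar,SF)", 5)]
terminations = [("At(Shankar,Berkeley)", 5)]   # flying to SF ends the old fluent

def clipped(fluent, t1, t2):
    # Was the fluent terminated at some point in [t1, t2)?
    return any(f == fluent and t1 <= t < t2 for f, t in terminations)

def T(fluent, t):
    # Initiated at some ti < t, and not clipped between ti and t
    return any(f == fluent and ti < t and not clipped(fluent, ti, t)
               for f, ti in initiations)

print(T("At(Shankar,Berkeley)", 3))   # True: initiated at 1, not yet clipped
print(T("At(Shankar,Berkeley)", 7))   # False: clipped at 5
print(T("At(Shankar,SF)", 7))         # True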
What we need is a model of the mental objects that are in someone’s head (or
something’s knowledge base) and of the mental processes that manipulate those mental
objects. The model does not have to be detailed. We do not have to be able to predict how
many milliseconds it will take for a particular agent to make a deduction. We will be happy
just to be able to conclude that mother knows whether or not she is sitting.
We begin with the propositional attitudes that an agent can have toward mental
objects: attitudes such as Believes, Knows, Wants, Intends, and Informs. The difficulty is that
these attitudes do not behave like “normal” predicates.
For example, suppose we try to assert that Lois knows that Superman can fly: Knows(Lois, CanFly(Superman)). One minor issue with this is that we normally think of CanFly(Superman) as a sentence, but here it appears as a term. That issue can be patched up just by reifying CanFly(Superman), making it a fluent. A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude that Lois knows that Clark can fly:
(Superman = Clark) ∧ Knows(Lois, CanFly(Superman)) |= Knows(Lois, CanFly(Clark))
Modal logic is designed to address this problem. Regular logic is concerned with a single modality, the modality of truth, allowing us to express "P is true." Modal logic includes special modal operators that take sentences (rather than terms) as arguments.
For example, “A knows P” is represented with the notation KAP, where K is the
modal operator for knowledge. It takes two arguments, an agent (written as the subscript) and
a sentence. The syntax of modal logic is the same as first-order logic, except that sentences
can also be formed with modal operators. In first-order logic a model contains a set of objects
and an interpretation that maps each name to the appropriate object, relation, or function. In
modal logic we want to be able to consider both the possibility that Superman’s secret
identity is Clark and that it isn’t. Therefore, we will need a more complicated model, one that
consists of a collection of possible worlds rather than just one true world. The worlds are
connected in a graph by accessibility relations, one relation for each modal operator. We say
that world w1 is accessible from world w0 with respect to the modal operator KA if
everything in w1 is consistent with what A knows in w0, and we write this as
Acc(KA,w0,w1). In diagrams such as Figure 12.4 we show accessibility as an arrow between
possible worlds. In general, a knowledge atom KAP is true in world w if and only if P is true
in every world accessible from
w. The truth of more complex sentences is derived by recursive application of this rule and
the normal rules of first-order logic. That means that modal logic can be used to reason about
nested knowledge sentences: what one agent knows about another agent’s knowledge. For
example, we can say that, even though Lois doesn’t know whether Superman’s secret identity
is Clark Kent, she does know that Clark knows:
KLois [KClark Identity(Superman, Clark) ∨ KClark ¬Identity(Superman, Clark)]
Figure 3.15 shows some possible worlds for this domain, with accessibility relations for Lois and Superman.
Figure 3.15
In the TOP-LEFT diagram, it is common knowledge that Superman knows his own
identity, and neither he nor Lois has seen the weather report. So in w0 the worlds w0 and w2
are accessible to Superman; maybe rain is predicted, maybe not. For Lois all four worlds are
accessible from each other; she doesn’t know anything about the report or if Clark is
Superman. But she does know that Superman knows whether he is Clark, because in every
world that is accessible to Lois, either Superman knows I, or he knows ¬ I. Lois does not
know which is the case, but either way she knows Superman knows. In the TOP-RIGHT
diagram it is common knowledge that Lois has seen the weather report. So in w4 she knows
rain is predicted and in w6 she knows rain is not predicted. Superman does not know the
report, but he knows that Lois knows, because in every world that is accessible to him, either
she knows R or she knows ¬
R. In the BOTTOM diagram we represent the scenario where it is common knowledge that
Superman knows his identity, and Lois might or might not have seen the weather report. We
represent this by combining the two top scenarios, and adding arrows to show that Superman
does not know which scenario actually holds. Lois does know, so we don’t need to add any
arrows for her. In w0 Superman still knows I but not R, and now he does not know whether
Lois knows R. From what Superman knows, he might be in w0 or w2, in which case Lois
does not know whether R is true, or he could be in w4, in which case she knows R, or w6, in
which case she knows ¬R.
There are many variants of semantic networks, but all are capable of representing
individual objects, categories of objects, and relations among objects. A typical graphical
notation displays object or category names in ovals or boxes, and connects them with labeled
links. For example, Figure 12.5 has a Member Of link between Mary and Female Persons,
corresponding to the logical assertion Mary ∈FemalePersons ; similarly, the SisterOf link
between Mary and John corresponds to the assertion SisterOf (Mary, John). We can connect
categories using SubsetOf links, and so on. We know that persons have female persons as
mothers, so can we draw a HasMother link from Persons to FemalePersons? The answer is
no, because HasMother is a relation between a person and his or her mother, and categories
do not have mothers. For this reason, we have used a special notation—the double-boxed link
—in Figure 12.5. This link asserts that
∀x x ∈ Persons ⇒ [∀y HasMother(x, y) ⇒ y ∈ FemalePersons]
We might also want to assert that persons have two legs, that is,
∀x x ∈ Persons ⇒ Legs(x, 2)
The semantic network notation makes it convenient to perform inheritance reasoning. For example, by virtue of being a person, Mary inherits the property of having two legs. Thus, to find out how many legs Mary has, the inheritance algorithm follows the MemberOf link from Mary to the category she belongs to, and then follows SubsetOf links up the hierarchy until it finds a category for which there is a boxed Legs link, in this case the Persons category.
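This inheritance procedure is easy to sketch in code. Below is a minimal Python illustration of our own, with the links stored as dictionaries; the names follow the Mary example, and the Animals category is an assumed extra level.

# Inheritance in a semantic network: follow the MemberOf link, then climb
# SubsetOf links until some category supplies the requested attribute.
member_of = {"Mary": "FemalePersons"}
subset_of = {"FemalePersons": "Persons", "Persons": "Animals"}
attributes = {"Persons": {"Legs": 2}}      # boxed link: Persons --Legs--> 2

def inherit(obj, attr):
    cat = member_of.get(obj)
    while cat is not None:
        if attr in attributes.get(cat, {}):
            return attributes[cat][attr]
        cat = subset_of.get(cat)           # climb the SubsetOf hierarchy
    return None

print(inherit("Mary", "Legs"))   # 2: found on the Persons category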
Inheritance becomes complicated when an object can belong to more than one category or when a category can be a subset of more than one other category; this is called multiple inheritance. The drawback of semantic network notation, compared to first-order logic, is the fact that links between bubbles represent only binary relations. For example, the sentence
Fly(Shankar, NewYork, NewDelhi, Yesterday) cannot be asserted directly in a semantic
network. Nonetheless, we can obtain the effect of n-ary assertions by reifying the proposition
itself as an event belonging to an appropriate event category. Figure 12.6 shows the semantic
network structure for this particular event. Notice that the restriction to binary relations forces the creation of a rich ontology of reified concepts.
One of the most important aspects of semantic networks is their ability to represent
default values for categories. Examining Figure 3.6 carefully, one notices that John has one
leg, despite the fact that he is a person and all persons have two legs. In a strictly logical KB,
this would be a contradiction, but in a semantic network, the assertion that all persons have
two legs has only default status; that is, a person is assumed to have two legs unless this is
contradicted by more specific information
We also include the negated goal ¬Criminal(West). The resolution proof is shown in Figure 3.18.
Figure 3.18 A resolution proof that West is a criminal. At each step,
the literals that unify are in bold.
Notice the structure: a single "spine" beginning with the goal clause, resolving against clauses from the knowledge base until the empty clause is generated. This is characteristic of resolution on Horn clause knowledge bases. In fact, the clauses along the main spine correspond exactly to the consecutive values of the goals variable in the backward-chaining algorithm. This is because we always choose to resolve with a clause whose positive literal unifies with the leftmost literal of the "current" clause on the spine; this is exactly what happens in backward chaining. Thus, backward chaining is just a special case of resolution with a particular control strategy to decide which resolution to perform next.
EXAMPLE 2
Our second example makes use of Skolemization and involves clauses that are not definite clauses. This results in a somewhat more complex proof structure. In English, the problem is as follows:
Everyone who loves all animals is loved by someone. Anyone who kills an animal is loved by no one. Jack loves all animals. Either Jack or Curiosity killed the cat, who is named Tuna. Did Curiosity kill the cat?
Now we apply the conversion procedure to convert each sentence to CNF:
The resolution proof that Curiosity kills the cat is given in Figure. In English, the
proof could be paraphrased as follows:
Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus
Jack must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because
anyone who kills an animal is loved by no one, we know that no one loves Jack. On the other
hand, Jack loves all animals, so someone loves him; so we have a contradiction. Therefore
Curiosity killed the cat.
Figure 3.19 A resolution proof that Curiosity killed the cat. Notice the use of factoring in the derivation of the clause Loves(G(Jack), Jack). Notice also in the upper right that the unification of Loves(x, F(x)) and Loves(Jack, x) can only succeed after the variables have been standardized apart.
The proof answers the question "Did Curiosity kill the cat?" but often we want to pose more general questions, such as "Who killed the cat?" Resolution can do this, but it takes a little more work to obtain the answer. The goal is ∃w Kills(w, Tuna), which, when negated, becomes ¬Kills(w, Tuna) in CNF. Repeating the proof with the new negated goal, we obtain a similar proof tree, but with the substitution {w/Curiosity} in one of the steps. So, in this case, finding out who killed the cat is just a matter of keeping track of the bindings for the query variables in the proof.
EXAMPLE 3
1. All people who are graduating are happy.
2. All happy people smile.
3. Someone is graduating.
4. Conclusion: Is someone smiling?
Solution
(v) Eliminate ∃:
1. ∀x ¬graduating(x) V happy(x)
2. ∀y ¬happy(y) V smile(y)
3. graduating(name1)
4. ∀w ¬smile(w) (negated conclusion)
(vi) Eliminate ∀:
1. ¬graduating(x) V happy(x)
2. ¬happy(y) V smile(y)
3. graduating(name1)
4. ¬smile(w)
EXAMPLE 4
Explain the unification algorithm used for reasoning under predicate logic with an
example. Consider the following facts
a. Team India
b. Team Australia
c. Final match between India and Australia
d. India scored 350 runs, Australia scored 350 runs, India lost 5 wickets, Australia lost 7 wickets.
e. The team which scores the maximum runs wins the match.
f. If the scores are the same, the team which lost the minimum number of wickets wins the match.
Represent the facts in predicate, convert to clause form and prove by resolution “India
wins the match”.
Solution
(v) Eliminate ∃:
(a) team(India)
(b) team(Australia)
(c) ¬team(India) V ¬team(Australia) V final_match(India, Australia)
(d) score(India, 350) ^ score(Australia, 350) ^ wicket(India, 5) ^ wicket(Australia, 7)
(e) ¬team(x) V wins(x) V ¬score(x, max_runs)
(f) ¬score(x, equal(y)) V ¬wicket(x, min_wicket) V ¬final_match(x, y) V wins(x)
(vi) Eliminate
(vii) Convert to conjunct of disjuncts form.
EXAMPLE 5
Problem 3
Convert the facts in predicate form to clauses and then prove by resolution: “John pays
tax”.
Solution
(v) Eliminate ∃:
1. company(ABC) ^ employee(500, ABC)
2. ¬company(ABC) V ¬employee(x, ABC) V ¬earns(x, 5000) V pays(x, tax)
3. manager(John, ABC)
4. ¬manager(x, ABC) V earns(x, 10000)
(vi) Eliminate ∀:
(viii) Make each conjunct a separate clause.
1. (a) company(ABC)
(b) employee(500,ABC)
2. ¬company(ABC) V ¬employee(x, ABC) V ¬earns(x, 5000) V pays(x, tax)
3. manager(John,ABC)
4. ¬manager(x, ABC) V earns(x, 10000)
Problem 4
If a perfect square is divisible by a prime p then it is also divisible by square of p.
Every perfect square is divisible by some prime.
36 is a perfect square.
2. ∀x ∃y (perfect_sq(x) → prime(y) ^ divides(x, y))
3. perfect_sq(36)
Problem 5
Example
Trace the operation of the unification algorithm on each of the following pairs of literals:
In propositional logic it is easy to determine that two literals cannot both be true at the same time: simply look for L and ~L. In predicate logic, this matching process is more complicated, since bindings of variables must be considered.
For example, man(john) and ¬man(john) is a contradiction, while man(john) and ¬man(Himalayas) is not. Thus, in order to determine contradictions, we need a matching procedure that compares two literals and discovers whether there exists a set of substitutions that makes them identical. There is a recursive procedure that does this matching. It is called the
Unification algorithm.
In the unification algorithm, each literal is represented as a list, where the first element is the name of the predicate and the remaining elements are the arguments. An argument may be a single element (atom) or may be another list. For example, the literal man(Marcus) would be represented as the list (man Marcus).
To unify two literals, first check whether their first elements are the same. If so, proceed; otherwise they cannot be unified. For example, two literals whose predicate names differ cannot be unified. The unification algorithm recursively matches pairs of elements, one pair at a time. The matching rules are:
i) Identical constants, predicates, or functions match; different ones cannot match.
ii) A variable can match another variable, any constant, or a function or predicate expression, subject to the condition that the function or predicate expression must not contain any instance of the variable being matched (otherwise it will lead to infinite recursion).
iii) The substitution must be consistent. Substituting y for x now and then z for x later is inconsistent (a substitution of y for x is written as y/x).
The Unification algorithm is listed below as a procedure UNIFY (L1, L2). It returns a
list representing the composition of the substitutions that were performed during the match.
An empty list NIL indicates that a match was found without any substitutions. If the list
contains a single value F, it indicates that the unification procedure failed.
4. For i = 1 to the number of elements in L1 do:
i) call UNIFY with the i-th element of L1 and the i-th element of L2, putting the result in S
ii) if S = F then return F
iii) if S is not equal to NIL then do:
(A) apply S to the remainder of both L1 and L2
(B) SUBST := APPEND(S, SUBST)
5. Return SUBST.
Consider a knowledge base containing just two sentences: P(a) and P(b). Does this
knowledge base entail ∀ x P(x)? Explain your answer in terms of models.
The knowledge base does not entail ∀x P(x). To show this, we must give a model where P(a) and P(b) hold but ∀x P(x) is false. Consider any model with three domain elements, where a and b refer to the first two elements and the relation referred to by P holds only for those two elements.
What is ontological commitment (what exists in the world) of first order logic?
Represent the sentence “Brothers are siblings” in first order logic?
Ontological commitment means what assumptions the language makes about the nature of reality. The representation of "Brothers are siblings" in first-order logic is ∀x, y [Brother(x, y) ⇒ Sibling(x, y)].
Following are the comparative differences between first-order logic and propositional logic: first-order logic contains all the connectives of propositional logic, but it also has variables for individual objects, quantifiers, symbols for functions and symbols for relations.
Illustrate the use of first-order logic to represent knowledge. The best way to see the usage of first-order logic is through examples, which can be taken from some simple domains. In knowledge representation, a domain is just some part of the world about which we wish to express some knowledge.
Assertions and queries in first-order logic: Sentences are added to a knowledge base using TELL, exactly as in propositional logic. Such sentences are called assertions. For example, where KB is the knowledge base, we can assert that John is a king and that kings are persons:
TELL(KB, King(John))
TELL(KB, ∀x King(x) ⇒ Person(x))
We can ask questions of the knowledge base using ASK. Questions asked using ASK are called queries or goals. For example, ASK(KB, Person(John)) returns true (we ask the KB whether John is a person), as does ASK(KB, ∃x Person(x)).
The kinship domain: The first example we consider is the domain of family relationships, or kinship. This domain includes facts such as "Elizabeth is the mother of Charles" and "Charles is the father of William" and rules such as "One's grandmother is the mother of one's parent." Clearly, the objects in our domain are people. We will have two unary predicates, Male and Female. Kinship relations (parenthood, brotherhood, marriage, and so on) will be represented by binary predicates: Parent, Sibling, Brother, Sister, Child, Daughter, Son, Spouse, Husband, Grandparent, Grandchild, Cousin, Aunt, and Uncle. We will use functions for Mother and Father.
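A minimal sketch of the TELL/ASK interface in Python is shown below. It is our own illustration: it stores only ground facts and answers queries by lookup, so rules such as King(x) ⇒ Person(x) are not applied automatically; a real first-order system would perform inference here.

# Minimal TELL/ASK sketch over a ground-fact knowledge base (illustrative).
class KB:
    def __init__(self):
        self.facts = set()
    def tell(self, sentence):
        self.facts.add(sentence)       # assertion: add a sentence to the KB
    def ask(self, query):
        return query in self.facts     # query: check whether the KB contains it

kb = KB()
kb.tell(("King", "John"))
kb.tell(("Person", "John"))            # added explicitly; no rule application here
print(kb.ask(("Person", "John")))      # True
print(kb.ask(("King", "Richard")))     # False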
UNIT 4
SOFTWARE AGENTS
Architecture for Intelligent Agents – Agent communication – Negotiation and Bargaining –
Argumentation among Agents – Trust and Reputation in Multi-agent systems.
4.1 DEFINITION
4. Layered simply means that the system is composed of a set of layers, each of which provides a specific set of logical functionality, and that connectivity is commonly restricted to the layers contiguous to one another.
Based on the goals of the agent application, a variety of agent architectures exist to
help. This section will introduce some of the major architecture types and applications for
which they can be used.
1. Reactive architectures
2. Deliberative architectures
3. Blackboard architectures
4. Belief-desire-intention (BDI) architecture
5. Hybrid architectures
6. Mobile architectures
1. REACTIVE ARCHITECTURES
2. In this architecture, agent behaviors are simply a mapping between stimulus and
response.
3. The agent has no decision-making skills, only reactions to the environment in which it
exists.
4. The agent simply reads the environment and then maps the state of the environment to
one or more actions. Given the environment, more than one action may be
appropriate, and therefore the agent must choose.
8. Sequences of actions require the presence of state, which is not encoded into the
mapping function.
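A reactive agent is therefore little more than a lookup table. Below is a minimal Python sketch of our own; the stimulus and action names are illustrative.

# Reactive architecture sketch: behavior is a direct mapping from the
# perceived state of the environment to an action, with no internal state.
STIMULUS_RESPONSE = {
    "obstacle_ahead": "turn_left",
    "light_detected": "move_toward_light",
    "clear":          "move_forward",
}

def reactive_agent(percept):
    # No deliberation: just look up the response for the current stimulus
    return STIMULUS_RESPONSE.get(percept, "do_nothing")

print(reactive_agent("obstacle_ahead"))   # turn_left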
2. DELIBERATIVE ARCHITECTURES
2. Instead of mapping the sensors directly to the actuators, the deliberative architecture
considers the sensors, state, prior results of given actions, and other information in
order to select the best action to perform.
4. The advantage of the deliberative architecture is that it can be used to solve much
more complex problems than the reactive architecture.
6. The disadvantage is that it is slower than the reactive architecture due to the
deliberation for the action to select.
Figure 4.2 A deliberative agent architecture considers its actions
3. BLACKBOARD ARCHITECTURES
1. The blackboard architecture is a very common architecture that is also very interesting.
3. The blackboard is a common work area for a number of agents that work
cooperatively to solve a given problem.
4. The blackboard therefore contains information about the environment, but also
intermediate work results by the cooperative agents.
5. In this example, two separate agents are used to sample the environment through the
available sensors (the sensor agent) and also through the available actuators (action
agent).
6. The blackboard contains the current state of the environment that is constantly
updated by the sensor agent, and when an action can be performed (as specified in the
blackboard), the action agent translates this action into control of the actuators.
7. The control of the agent system is provided by one or more reasoning agents.
8. These agents work together to achieve the goals, which would also be contained in the
blackboard.
9. In this example, the first reasoning agent could implement the goal definition
behaviors, where the second reasoning agent could implement the planning portion (to
translate goals into sequences of actions).
10. Since the blackboard is a common work area, coordination must be provided such that
agents don’t step over one another.
11. For this reason, agents are scheduled based on their need. For example, agents can
monitor the blackboard, and as information is added, they can request the ability to
operate.
12. The scheduler can then identify which agents desire to operate on the blackboard, and
then invoke them accordingly.
13. The blackboard architecture, with its globally available work area, is easily
implemented with a multi-threading system.
14. Each agent becomes one or more system threads. From this perspective, the
blackboard architecture is very common for agent and non-agent systems.
4. BELIEF-DESIRE-INTENTION (BDI) ARCHITECTURE
2. Belief represents the view of the world by the agent (what it believes to be the state of the environment in which it exists). Desires are the goals that define the motivation of the agent (what it wants to achieve).
3. The agent may have numerous desires, which must be consistent. Finally, Intentions
specify that the agent uses the Beliefs and Desires in order to choose one or more
actions in order to meet the desires.
4. As we described above, the BDI architecture defines the basic architecture of any
deliberative agent. It stores a representation of the state of the environment (beliefs),
maintains a set of goals (desires), and finally, an intentional element that maps desires
to beliefs (to provide one or more actions that modify the state of the environment
based on the agent’s needs).
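The BDI control cycle can be sketched as deliberation (choosing an intention from the desires, given the beliefs) followed by means-ends planning. Below is a minimal Python illustration of our own, with assumed vacuum-world names.

# BDI sketch: beliefs are the agent's world model, desires its candidate
# goals, and the intention is the desire it commits to acting on.
beliefs = {"location": "A", "dirt": {"B"}}
desires = ["clean_all_squares", "recharge"]

def deliberate(beliefs, desires):
    # Commit to the first desire that is achievable given current beliefs
    if beliefs["dirt"] and "clean_all_squares" in desires:
        return "clean_all_squares"
    return "recharge"

def plan(intention, beliefs):
    # Map the chosen intention to a sequence of actions
    if intention == "clean_all_squares":
        target = next(iter(beliefs["dirt"]))
        return ["move_to_" + target, "suck"]
    return ["move_to_dock"]

intention = deliberate(beliefs, desires)
print(plan(intention, beliefs))   # ['move_to_B', 'suck']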
Figure 4.4 The BDI architecture uses beliefs, desires, and intentions to model mental attributes
5. HYBRID ARCHITECTURES
3. This same stack also shares some elements of a blackboard architecture, as there are
global elements that are visible and used by each component of the architecture.
4. The same is true for agent architectures. Based on the needs of the agent system,
different architectural elements can be chosen to meet those needs.
6. MOBILE ARCHITECTURES
1. The final architectural pattern that we'll discuss is the mobile agent architecture.
2. This architectural pattern introduces the ability for agents to migrate themselves
between hosts. The agent architecture includes the mobility element, which allows an
agent to migrate from one host to another.
3. An agent can migrate to any host that implements the mobile framework.
5. This framework also requires some kind of authentication and security, to prevent the mobile agent framework from becoming a conduit for viruses. Also implicit in the mobile agent framework is a means for discovery.
mobile agent framework is a means for discovery.
6. For example, which hosts are available for migration, and what services do they
provide? Communication is also implicit, as agents can communicate with one
another on a host, or across hosts in preparation for migration.
7. ARCHITECTURE DESCRIPTIONS
1. The Subsumption architecture, originated by Rodney Brooks in the late 1980s, was
created out of research in behavior-based robotics.
2. The fundamental idea behind subsumption is that intelligent behavior can be created
through a collection of simple behavior modules.
3. These behavior modules are collected into layers. At the bottom are behaviors that are
reflexive in nature, and at the top, behaviors that are more complex. Consider the
abstract model shown in Figure.
4. At the bottom (level 0) exist the reflexive behaviors (such as obstacle avoidance). If these behaviors are required, then level 0 consumes the inputs and provides an action at the output. But if no obstacles exist, the next layer up is permitted to subsume control.
5. At each level, a set of behaviors with different goals compete for control based on the
state of the environment.
6. To support this capability, levels can be inhibited (in other words, their outputs are disabled). Levels can also be suppressed such that sensor inputs are routed to higher layers, as shown in the figure.
8. For example, we begin with obstacle avoidance and then extend for object seeking.
From this perspective, the architecture takes a more evolutionary design approach.
9. Subsumption does have its problems. It is simple, but it turns out not to be extremely
extensible. As new layers are added, the layers tend to interfere with one another, and
then the problem becomes how to layer the behaviors such that each has the
opportunity to control when the time is right.
10. Subsumption is also reactive in nature, meaning that in the end, the architecture still
simply maps inputs to behaviors (no planning occurs, for example). What
subsumption does provide is a means to choose which behavior for a given
environment.
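The layering idea can be sketched in a few lines of Python. The illustration below, our own, shows two subsumption layers: the reflexive level 0 behavior takes priority when triggered and otherwise yields control upward (behavior names are assumed).

# Subsumption sketch: level 0 (obstacle avoidance) is reflexive and, when
# triggered, subsumes the output of the higher wander layer.
def level0_avoid(percept):
    if percept.get("obstacle"):
        return "turn_away"          # reflexive behavior consumes the input
    return None                     # nothing to do: let a higher layer act

def level1_wander(percept):
    return "move_forward"           # higher-level default behavior

def subsumption_step(percept, layers=(level0_avoid, level1_wander)):
    for layer in layers:            # lower layers get the chance to act first
        action = layer(percept)
        if action is not None:
            return action

print(subsumption_step({"obstacle": True}))    # turn_away
print(subsumption_step({"obstacle": False}))   # move_forward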
1. Behavior networks, created by Pattie Maes in the late 1980s, are another reactive architecture that is distributed in nature. Behavior networks attempt to answer the question: which action is best suited for a given situation?
2. As the name implies, behavior networks are networks of behaviors that include
activation links and inhibition links.
3. An example behavior network for a game agent is shown in Figure. As shown in the
legend, behaviors are rectangles and define the actions that the agent may take (attack,
explore, reload, etc.).
4. The ovals specify the preconditions for actions to be selected, which are inputs from
the environment.
6. The environment is sampled, and then the behavior for the agent is selected based on
the current state of the environment. The first thing to note is the activation and
inhibition links. For example, when the agent’s health is low, attack and exploration
are inhibited, leaving the agent to find the nearest shelter. Also, while exploring, the
agent may come across medkits or ammunition.
9. The algorithm also includes decay, such that activations dissipate over time. Like the
subsumption architecture, behavior networks are instances of Behavior-Based
Systems (BBS). The primitive actions produced by these systems are all behaviors,
based on the state of the environment.
10. Behavior networks are not without problems. Being reactive, the architecture does not
support planning or higher- level behaviors. The architecture can also suffer when
behaviors are highly inter-dependent. With many competing goals, the behavior
modules can grow dramatically in order to realize the intended behaviors. But for
simpler architecture, such as the FPS game agent in Figure 4.7, this algorithm is ideal.
Figure 4.7 Behavior network for a simple game agent
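The selection mechanism can be sketched as activation arithmetic. Below is a minimal Python illustration of our own for the game agent of Figure 4.7; the activation and inhibition weights are assumed, and show how low health inhibits attack and exploration.

# Behavior-network sketch: preconditions activate behaviors, inhibition
# links suppress them, and the most strongly activated behavior is chosen.
def select_behavior(env):
    activation = {"attack": 0.0, "explore": 0.0, "find_shelter": 0.0}
    if env["enemy_visible"]:
        activation["attack"] += 1.0          # precondition activates attack
    activation["explore"] += 0.5             # default exploratory drive
    if env["health"] < 0.3:                  # low health inhibits attack
        activation["attack"] -= 2.0          # and exploration,
        activation["explore"] -= 2.0         # leaving shelter-seeking
        activation["find_shelter"] += 1.5
    return max(activation, key=activation.get)

print(select_behavior({"enemy_visible": True, "health": 0.2}))  # find_shelter
print(select_behavior({"enemy_visible": True, "health": 0.9}))  # attack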
➢ ATLANTIS (HYBRID)
2. ATLANTIS was designed to prove that a goal-oriented robot could be built from a hybrid architecture of lower-level reactive behaviors and higher-level deliberative behaviors.
3. Where the subsumption architecture allows layers to subsume control, ATLANTIS
operates on the assumption that these behaviors are not exclusive of one another. The
lowest layer can operate in a reactive fashion to the immediate needs of the
environment, while the uppermost layer can support planning and more goal-oriented
behaviors.
4. In ATLANTIS, control is performed from the bottom-up. At the lowest level (the control layer) are the reactive behaviors.
5. These primitive level actions are capable of being executed first, based on the state of
the environment. At the next layer is the sequencing layer. This layer is responsible
for executing plans created by the deliberative layer.
6. The deliberative layer maintains an internal model of the environment and creates
plans to satisfy goals.
7. The sequencing layer may or may not complete the plan, based on the state of the
environment. This leaves the deliberation layer to perform the computationally
expensive tasks. This is another place that the architecture is a hybrid.
8. The lower-level behavior-based methods (in the controller layer) are integrated with higher-level classical AI mechanisms (in the deliberative layer). Interestingly, the deliberative layer does not control the sequencing layer, but instead simply advises on sequences of actions that it can perform.
9. The advantage of this architecture is that the low-level reactive layer and higher-level intentional layers are asynchronous. This means that while deliberative plans are
under construction, the agent is not susceptible to the dynamic environment. This is
because even though planning can take time at the deliberative layer, the controller
can deal with random events in the environment.
➢ HOMER (DELIBERATIVE)
2. At the core of the Homer architecture is a memory that is divided into two parts. The
first part contains general knowledge (such as knowledge about the environment). The
second part is called episodic knowledge, which is used to record experiences in the
environment (perceptions and actions taken).
3. The natural language processor accepts human input via a keyboard, and parses and
responds using a sentence generator. The temporal planner creates dynamic plans to
satisfy predefined goals, and is capable of replanning if the environment requires.
4. The architecture also includes a plan executor (or interpreter), which is used to
execute the plan at the actuators. The architecture also included a variety of monitor
processes. The basic idea behind Homer was an architecture for general intelligence.
5. The keyboard would allow regular English language input, and a terminal would
display generated English language sentences. The user could therefore communicate
with Homer to specify goals and receive feedback via the terminal.
6. Homer could log perceptions of the world, with timestamps, to allow dialogue with
the user and rational answers to questions. Reflective (monitor) processes allow
Homer to add or remove knowledge from the episodic memory.
➢ BB1 (BLACKBOARD)
3. The key behind BB1 is its ability to incrementally plan. Instead of defining a
complete plan for a given goal, and then executing that plan, BB1 dynamically
develops the plan and adapts to the changes in the environment. This is key for
dynamic environments, where unanticipated changes can lead to brittle plans that
eventually fail.
➢ PRS (BDI)
2. PRS (the Procedural Reasoning System) is also a BDI architecture, mimicking the theory of human practical reasoning. PRS integrates both reactive and goal-directed deliberative processing in a distributed architecture.
4. Actions can also be taken through an intentions module. At the core is an interpreter
(or reasoner) which selects a goal to meet (given the current set of beliefs) and then
retrieves a plan to execute to achieve that goal. PRS iteratively tests the assumptions
of the plan during its execution. This means that it can operate in dynamic
environments where classical planners are doomed to fail.
5. Plans in PRS (also called knowledge areas) are predefined for the actions that are
possible in the environment. This simplifies the architecture because it isn’t required
to generate plans, only select them based on the environment and the goals that must
be met.
6. While planning is more about selection than search or generation, the interpreter
ensures that changes to the environment do not result in inconsistencies in the plan.
Instead, a new plan is selected to achieve the specific goals.
7. PRS is a useful architecture when all necessary operations can be predefined. It’s also
very efficient due to lack of plan generation. This makes PRS an ideal agent
architecture for building agents such as those to control mobile robots.
➢ AGLETS (MOBILE)
1. Aglets is a mobile agent framework designed by IBM Tokyo in the 1990s. Aglets is
based on the Java programming language, as it is well suited for a mobile agents
framework. First, the applications are portable to any system (both homogeneous and
heterogeneous) that is capable of running a Java Virtual Machine (JVM). Second, a
JVM is an ideal platform for migration services.
3. In this case, the Java application is restarted on a new JVM. Java also provides a
secure environment (sandbox) to ensure that a mobile agent framework doesn’t
become a virus distribution system. The Aglets framework is shown in Figure 4.9. At
the bottom of the framework is the JVM (the virtual machine that interprets the Java
byte codes). The agent runtime environment and mobility protocol are next. The
mobility protocol, called Aglet Transport Protocol (or ATP), provides the means to
serialize agents and then transport them to a host previously defined by the agent.
3. The agent API is at the top of the stack; in usual Java fashion, it provides a number of API classes that focus on agent operation. Finally, there are the various agents that operate on the framework.
4. The agent API and runtime environment provide a number of services that are central to a mobile agent framework. Some of the more important functions are agent management, communication, and security. Agents must be able to register themselves on a given host to enable communication from outside agents. The sketch below illustrates the serialize-and-transport idea.
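Aglets itself is a Java framework, so the following is only a rough Python analogy (all names are hypothetical) of the idea behind a transport protocol such as ATP: serialize the agent's state, ship the bytes to the destination host, and resume the agent there.

import pickle

class MobileAgent:
    def __init__(self, task):
        self.task = task
        self.results = []

    def run(self, host_data):
        # Work performed at the current host.
        self.results.append((self.task, sum(host_data)))

def migrate(agent, send):
    """Serialize the agent and hand the bytes to a transport function
    (in a real framework, a network send to the destination host)."""
    send(pickle.dumps(agent))

def receive(payload):
    """At the destination host: deserialize and resume the agent."""
    agent = pickle.loads(payload)
    agent.run(host_data=[1, 2, 3])       # data that lives on this host
    return agent

agent = MobileAgent(task="sum")
migrate(agent, send=lambda payload: print(receive(payload).results))
# prints [('sum', 6)]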
➢ MESSENGERS (MOBILE)
1. The messengers environment provides the hop statement, which defines when and where to migrate to a new destination.
2. After migration is complete, the messengers agent restarts in the application at the point after the previous hop statement. The end result is that the application moves to the data, rather than using a messaging protocol to move the data to the agent.
3. There are obvious advantages to this when the data set is large and the migration links are slow. The messengers model provides what the authors call Navigational Programming, and also Distributed Sequential Computing (DSC).
4. What makes these concepts interesting is that they support a model of programming identical to the traditional flow of sequential programs. This makes them easier to develop and understand.
5. Let's now look at an example of DSC using the messengers environment (Listing 11.5 in the original source provides a simple program). Consider an application where, on a series of hosts, we manipulate large matrices held in their memory; a rough sketch follows.
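The original Listing 11.5 is not reproduced in these notes; the following hypothetical Python sketch conveys the navigational-programming idea, with hop() standing in for the messengers hop statement. The computation and its state visit each host's data in ordinary sequential program order:

# Hosts each hold a large matrix in their local memory (toy data here).
hosts = {
    "host_a": [[1, 2], [3, 4]],
    "host_b": [[5, 6], [7, 8]],
}

def hop(destination, state):
    """Stand-in for the messengers 'hop' statement: a real system would
    migrate the agent; here we just return the data at the destination."""
    print(f"hopping to {destination}")
    return hosts[destination], state

# Sequential program flow, but each step executes "at" a different host.
state = {"trace": 0}
matrix, state = hop("host_a", state)
state["trace"] += sum(matrix[i][i] for i in range(len(matrix)))  # work at A
matrix, state = hop("host_b", state)
state["trace"] += sum(matrix[i][i] for i in range(len(matrix)))  # work at B
print(state["trace"])   # 1 + 4 + 5 + 8 = 18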
➢ SOAR (HYBRID)
1. Soar provides a model of cognition along with an implementation of that model for building general-purpose AI systems.
2. The idea behind Soar comes from Newell's unified theories of cognition. Soar is one of the most widely used architectures, from research into aspects of human behavior to the design of game agents for first-person shooter games.
3. The goal of the Soar architecture is to build systems that embody general intelligence. While Soar includes many elements that support this goal (for example, representing knowledge in procedural and declarative forms), it lacks some important aspects, including episodic memory and a model of emotion. Soar's underlying problem-solving mechanism is based on a production system (expert system).
4. Behavior is encoded in rules similar to the if-then form. Solving problems in Soar can be most simply described as problem-space search (to a goal node). If this model of problem solving fails, other methods are used, such as hill climbing.
5. When a solution is found, Soar uses a method called chunking to learn a new rule based on this discovery. If the agent encounters the problem again, it can use the rule to select the action to take instead of performing problem solving again, as the sketch below illustrates.
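As a rough illustration of these two mechanisms, here is a toy Python sketch (far simpler than Soar itself, with illustrative names) of if-then production rules combined with chunking:

rules = {}   # (state, goal) -> action; learned chunks live here

def search_for_action(state, goal):
    # Stand-in for problem-space search to a goal node.
    return f"move_toward_{goal}"

def decide(state, goal):
    if (state, goal) in rules:            # a learned rule (chunk) fires
        return rules[(state, goal)]
    action = search_for_action(state, goal)
    rules[(state, goal)] = action         # chunking: cache the discovery
    return action

print(decide("room_1", "room_2"))   # solved by search, then chunked
print(decide("room_1", "room_2"))   # solved directly by the new rule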
AGENT COMMUNICATION
1. Agents communicate in order to better achieve their own goals or the goals of the society/system in which they exist.
2. Communication can enable the agents to coordinate their actions and behavior, resulting in systems that are more coherent.
3. The degree of coordination is the extent to which the agents avoid extraneous activity by reducing resource contention, avoiding livelock and deadlock, and maintaining applicable safety conditions.
4. Typically, to cooperate successfully, each agent must maintain a model of the other agents, and also develop a model of future interactions. This presupposes sociability.
Coherence is how well a system behaves as a unit. A problem for a multiagent system is how it can maintain global coherence without explicit global control. In this case, the agents must be able on their own to determine goals they share with other agents, determine common tasks, avoid unnecessary conflicts, and pool knowledge and evidence. It is helpful if there is some form of organization among the agents.
Dimensions of Meaning
There are three aspects to the formal study of communication: syntax (how the
symbols of communication are structured), semantics (what the symbols denote), and
pragmatics (how the symbols are interpreted). Meaning is a combination of semantics and
pragmatics. Agents communicate in order to understand and be understood, so it is important
to consider the different dimensions of meaning that are associated with communication.
1. Personal vs. Conventional Meaning. An agent might have its own meaning for a message, but this might differ from the meaning conventionally accepted by the other agents with which the agent communicates. To the greatest extent possible, multiagent systems should opt for conventional meanings, especially since these systems are typically open environments in which new agents might be introduced at any time.
2. Subjective vs. Objective Meaning. A message often has an explicit effect on the environment, which can be perceived objectively. The effect might be different from that understood internally, i.e., subjectively, by the sender or receiver of the message.
3. Coverage. Smaller languages are more manageable, but they must be large enough so that an agent can convey the meanings it intends.
Message Types
1. In keeping with the above definitions of and assumptions about an agent, we assume that an agent can send and receive messages through a communication network.
2. There are two basic message types: assertions and queries. Every agent, whether active or passive, must have the ability to accept information. In its simplest form, this information is communicated to the agent from an external source by means of an assertion. In order to assume a passive role in a dialog, an agent must additionally be able to answer questions, i.e., it must be able to (1) accept a query from an external source and (2) send a reply to the source by making an assertion. Note that from the standpoint of the communication network, there is no distinction between an unsolicited assertion and an assertion made in reply to a query.
3. In order to assume an active role in a dialog, an agent must be able to issue queries and make assertions. With these capabilities, the agent can potentially control another agent by causing it to respond to the query or to accept the information asserted. This means of control can be extended to the control of subagents, such as neural networks and databases.
4. An agent functioning as a peer with another agent can assume both active and passive roles in a dialog. It must be able to make and accept both assertions and queries.
Speech acts are commonly analyzed in terms of three aspects: (1) the locution, the physical utterance itself; (2) the illocution, the intended meaning of the utterance; and (3) the perlocution, the action that results from the locution.
KQML (Knowledge Query and Manipulation Language)
1. A method for packaging messages is needed (the messaging layer), together with an internal format that represents the messages and is sufficiently expressive to convey not only information but also requests, responses, and plans (the content layer).
2. This content is defined in a language (how to represent the content) and an ontology that describes the vocabulary (and meaning) of the content. Finally, the agent can attach a context which the response will contain (in-reply-to) in order to correlate the request with the response. The structure of a KQML message is:
(performative-name
 :sender X
 :receiver Y
 :content Z
 :language L
 :ontology O
 :reply-with R
 :in-reply-to Q
)
Let’s now look at an example conversation between two KQML agents. In this
example, an agent requests the current value of a temperature sensor in a system. The request
is for the temperature of TEMP_SENSOR_1A that’s sampled at the temperature-server agent.
The content is the request, defined in the prolog language. Our agent making the request is
called thermal-control-appl.
(ask-one
:sender thermal-control-appl
:receiver temperature-server
:language prolog
:ontology CELSIUS-DEGREES
:content “temperature(TEMP_SENSOR_1A ?temperature)”
:reply-with request-102
)
Our agent would then receive a response from the temperature-server, defining the
temperature of the sensor of interest.
(reply
:sender temperature-server
:receiver thermal-control-appl
:language prolog
:ontology CELSIUS-DEGREES
:content “temperature(TEMP_SENSOR_1A 45.2)”
:in-reply-to request-102
)
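To make the message format concrete, here is a small, hypothetical Python helper (not part of any KQML library) that renders a performative and its fields in the format used above:

def kqml(performative, **fields):
    # Keyword names use '_' where KQML uses '-' (e.g. reply_with).
    body = "\n".join(f" :{k.replace('_', '-')} {v}"
                     for k, v in fields.items())
    return f"({performative}\n{body}\n)"

request = kqml("ask-one",
               sender="thermal-control-appl",
               receiver="temperature-server",
               language="prolog",
               ontology="CELSIUS-DEGREES",
               content='"temperature(TEMP_SENSOR_1A ?temperature)"',
               reply_with="request-102")
print(request)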
Table 4.1 KQML performatives
Performative Description
evaluate Evaluate the content of the message
ask-one Request the answer to a question
reply Communicate a reply to a question
stream-about Provide multiple responses to a question
sorry Return an error (cannot respond)
tell Inform an agent of a sentence
achieve A request that the receiver achieve something
advertise Advertise the ability to process a performative
subscribe Subscribe to changes of information
forward Route a message
KQML is a useful language to communicate not only data, but also the meaning of the data (in terms of a language and ontology). KQML provides a rich set of capabilities that cover basic speech acts, and more complex acts including data streaming and control of information transfer.
FIPA ACL
1. Where KQML is a language defined in the context of a university, the FIPA ACL is a consortium-based language for agent communication.
2. ACL simply means Agent Communication Language, and it was standardized through the Foundation for Intelligent Physical Agents (FIPA) consortium. As with KQML, ACL is a speech-act language defined by a set of performatives.
3. The FIPA ACL is very similar to KQML, even adopting the inner and outer content layering for message construction (meaning and content).
4. The FIPA ACL also uses the Semantic Language (SL) as the formal language to define ACL semantics. This provides the means to support BDI themes (beliefs, desires, intentions). In other words, SL allows the representation of persistent goals (intentions), as well as propositions and objects. Each agent language has its use, and while the two have their differences, they can also be viewed as complementary.
XML
1. XML is the Extensible Markup Language, an encoding that represents data and meta-data (the meaning of the data). It does this with a representation that includes tags that encapsulate the data.
2. The tags explicitly define what the data represents. For example, consider the ask-one request from KQML. This can be represented in XML as shown below:
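The original XML listing is not preserved in these notes; the following is a plausible reconstruction (the tag names are illustrative, not a standard), using the <msg> outer tag discussed below:

<msg>
  <performative>ask-one</performative>
  <sender>thermal-control-appl</sender>
  <receiver>temperature-server</receiver>
  <language>prolog</language>
  <ontology>CELSIUS-DEGREES</ontology>
  <content>temperature(TEMP_SENSOR_1A ?temperature)</content>
  <reply-with>request-102</reply-with>
</msg>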
3. There are some obvious similarities between XML and KQML. In KQML, the tags exist but use a different syntax than is defined for XML. One significant difference is that XML permits the layering (nesting) of tags.
4. Note here that the <msg> tag is the outer layer of the performative and its arguments. XML is very flexible in its format and permits very complex arrangements of both data and meta-data.
TRUST AND REPUTATION
User confidence
• Can we trust the user behind the agent?
– Is he/she a trustworthy source of some kind of knowledge (e.g. an expert in a field)?
– Does he/she act in the agent system (through his/her agents) in a trustworthy way?
What is Trust?
A multi-agent system is usually built by a single developer or a single team of developers, and the developers' chosen option for reducing complexity is to ensure cooperation among the agents they build, including it as an important system requirement.
Trust can be represented in several ways, for example:
• as a binary value (1 = 'I do trust this agent', 0 = 'I don't trust this agent')
• as a probability distribution
Trust values can also be inferred from some existing representation of the interrelations between the agents:
• communication patterns, cooperation history logs, e-mails, webpage connectivity mapping, ...
Trust values can be propagated or shared through a MAS:
• recommender systems, reputation mechanisms.
Trust is an individual measure of confidence that a given agent has in other agent(s), while reputation is a social notion:
• my reputation clearly affects the amount of trust that others have towards me;
• reputation can have a sanctioning role in social groups: a bad reputation can be very costly to one's future transactions.
Most authors combine (individual) trust with some form of (social) reputation in their models, as sketched below.
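As a minimal sketch of one such (necessarily ad-hoc) combination, the following Python fragment updates individual trust from direct experience and mixes it with social reputation; the weights are illustrative assumptions, not a standard model:

def update_trust(old_trust, outcome, learning_rate=0.2):
    """Move trust toward the outcome of a direct interaction
    (outcome in [0, 1], e.g. 1.0 = the agent fulfilled its contract)."""
    return (1 - learning_rate) * old_trust + learning_rate * outcome

def combined_trust(direct_trust, reputation, weight_direct=0.7):
    """Weight direct experience more heavily than second-hand
    reputation, since direct experience is the most reliable source."""
    return weight_direct * direct_trust + (1 - weight_direct) * reputation

t = 0.5                           # neutral prior
t = update_trust(t, outcome=1.0)  # a successful interaction raises trust
print(combined_trust(t, reputation=0.3))   # ≈ 0.51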
Direct experiences are the most relevant and reliable information source for individual trust/reputation. Give new agents the lowest possible reputation value, so that there is no incentive to throw away a cyber-identity when an agent's reputation falls below the starting point.
• Problem: the combination of the different reputation values tends to be an ad-hoc solution with no social basis.
Recommendation protocol (RRQ = recommendation request):
1. Alice -> Bob: RRQ(Eric)
2. Bob -> Cathy: RRQ(Eric)
3. Cathy -> Bob: Rec(Eric, 3)
4. Bob -> Alice: Rec(Eric, 3)
Alice may then derive a trust value for Eric as some function f(X, Y) of the recommended value and her trust in the recommendation chain.
■ ReGreT assumes that there is no difference between direct interaction and direct observation in terms of the reliability of the information; it simply talks about direct experiences.
13. Reputation Model: Witness reputation
a. The first step in calculating a witness reputation is to identify the set of witnesses that the agent will take into account to perform the calculation.
b. The initial set of potential witnesses might be the set of all agents that have interacted with the target agent in the past. This set, however, can be very big, and the information provided by its members probably suffers from the correlated evidence problem.
c. The next step is to aggregate these values to obtain a single value for the witness reputation. The importance of each piece of information in the final reputation value will be proportional to the witness's credibility.
14. Reputation Model: Witness reputation (continued)
a. Two methods are used to evaluate witness credibility:
i. ReGreT uses fuzzy rules to calculate how the structure of social relations influences the credibility of the information. The antecedent of each rule is the type and degree of a social relation (the edges in a sociogram) and the consequent is the credibility of the witness from the point of view of that social relation.
ii. The second method used in the ReGreT system to calculate the credibility of a witness is to evaluate the accuracy of previous pieces of information sent by that witness to the agent. The agent uses the direct trust value to measure the truthfulness of the information received from witnesses. A credibility-weighted aggregation is sketched below.
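The following Python sketch is in the spirit of ReGreT's witness reputation, though not its actual formulas: each witness's rating of the target agent is weighted by the credibility assigned to that witness.

def witness_reputation(ratings):
    """ratings: list of (rating, credibility) pairs, both in [0, 1] here.
    Each piece of information counts in proportion to the credibility
    of the witness that supplied it."""
    total_weight = sum(cred for _, cred in ratings)
    if total_weight == 0:
        return None   # fall back to neighbourhood/system/default reputation
    return sum(r * cred for r, cred in ratings) / total_weight

print(witness_reputation([(0.9, 0.8), (0.2, 0.1)]))   # ≈ 0.82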
15. Reputation Model: Neighbourhood reputation
a. Neighbourhood in a MAS is not related to the physical location of the agents but to the links created through interaction.
b. The main idea is that the behaviour of these neighbours and the kind of relation they have with the target agent can give some clues about the behaviour of the target agent.
c. To calculate a neighbourhood reputation the ReGreT system uses fuzzy rules.
i. The antecedents of these rules are one or several direct trusts associated with different behavioural aspects and the relation between the target agent and the neighbour.
ii. The consequent is the value for a concrete reputation (which may or may not be associated with the same behavioural aspect as the trust values).
16. Reputation Model: System reputation
a. The idea is to use common knowledge about social groups, and the role that the agent is playing in the society, as a mechanism to assign default reputations to the agents.
b. ReGreT assumes that the members of these groups have one or several observable features that unambiguously identify their membership.
c. Each time an agent performs an action, we consider that it is playing a single role.
i. E.g. an agent can play the roles of buyer and seller, but when it is selling a product only the role of seller is relevant.
17. System reputations are calculated using a table for each social group where the rows
are the roles the agent can play for that group, and the columns the behavioural
aspects.
18. Reputation Model: Default reputation
a. To the previous reputation types we add a fourth one: the reputation assigned to a third-party agent when there is no information at all, the default reputation.
b. Usually this will be a fixed value.
19. Reputation Model: Combining reputations
a. Each reputation type has different characteristics, and there are many heuristics that can be used to aggregate the four reputation values into a single, representative reputation value.
b. In ReGreT this heuristic is based on the default and calculated reliability assigned to each type.
c. Assuming we have enough information to calculate all the reputation types, witness reputation is the first type that should be considered, followed by the neighbourhood reputation, the system reputation and, finally, the default reputation.
20. Main criticisms of Trust and Reputation research:
a. Proliferation of ad-hoc models weakly grounded in social theory
b. No general, cross-domain model of reputation
c. Lack of integration between models
i. Comparison between models is unfeasible
ii. Researchers are trying to solve this through, e.g., the ART competition
NEGOTIATION
1. A frequent form of interaction that occurs among agents with different goals is termed negotiation.
2. The major features of negotiation are (1) the language used by the participating agents, (2) the protocol followed by the agents as they negotiate, and (3) the decision process that each agent uses to determine its positions, concessions, and criteria for agreement.
3. Many groups have developed systems and techniques for negotiation. These can be either environment-centered or agent-centered. Developers of environment-centered techniques focus on the following problem: "How can the rules of the environment be designed so that the agents in it, regardless of their origin, capabilities, or intentions, will interact productively and fairly?"
4. The resultant negotiation mechanism should ideally have the following attributes:
• Efficiency: the agents should not waste resources in coming to an agreement.
• Stability: no agent should have an incentive to deviate from agreed-upon strategies.
• Simplicity: the mechanism should impose low computational and communication demands on the agents.
• Distribution: the mechanism should not require a central decision maker.
• Symmetry: the mechanism should not be biased against any agent for arbitrary or inappropriate reasons.
5. A task-oriented domain is one where agents have a set of tasks to achieve, all resources needed to achieve the tasks are available, and the agents can achieve the tasks without help or interference from each other. However, the agents can benefit by sharing some of the tasks. An example is the "Internet downloading domain," where each agent is given a list of documents that it must access over the Internet. There is a cost associated with downloading, which each agent would like to minimize. If a document is common to several agents, then they can save downloading cost by accessing the document once and then sharing it.
6. The environment might provide the following simple negotiation mechanism and constraints:
(1) each agent declares the documents it wants,
(2) documents found to be common to two or more agents are assigned to agents based on the toss of a coin,
(3) agents pay for the documents they download, and
(4) agents are granted access to the documents they download, as well as any in their common sets.
This mechanism is simple, symmetric, distributed, and efficient (no document is downloaded twice). To determine stability, the agents' strategies must be considered.
7. An optimal strategy is for an agent to declare the true set of documents that it needs, regardless of what strategy the other agents adopt or the documents they need. Because there is no incentive for an agent to diverge from this strategy, it is stable. A worked example follows.
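As a worked sketch of the downloading domain (assuming, purely for illustration, a cost of 1 per downloaded document), truthfully declaring needs and sharing the common documents lowers both agents' costs:

agent1_needs = {"doc_a", "doc_b", "doc_c"}
agent2_needs = {"doc_b", "doc_c", "doc_d"}

def cost(docs):
    return len(docs)    # c(X): a monotonic download-cost function

common = agent1_needs & agent2_needs    # {'doc_b', 'doc_c'}
# Suppose the coin tosses assign doc_b to agent 1 and doc_c to agent 2;
# each agent then shares its copy of the common document with the other.
agent1_downloads = (agent1_needs - common) | {"doc_b"}
agent2_downloads = (agent2_needs - common) | {"doc_c"}

print(cost(agent1_needs), "->", cost(agent1_downloads))   # 3 -> 2
print(cost(agent2_needs), "->", cost(agent2_downloads))   # 3 -> 2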
8. For the first approach, speech-act classifiers together with a possible-worlds semantics are used to formalize negotiation protocols and their components. This clarifies the conditions of satisfaction for different kinds of messages. To provide a flavor of this approach, consider how the commitments that an agent might make as part of a negotiation are formalized [21] (the formal rule itself is not reproduced in these notes):
9. This rule states that an agent forms and maintains its commitment to achieve ø individually iff (1) it has not precommitted itself to another agent to adopt and achieve ø, (2) it has a goal to achieve ø individually, and (3) it is willing to achieve ø individually. The chapter on "Formal Methods in DAI" provides more information on such descriptions.
10. The second approach is based on an assumption that the agents are economically rational. Further, the set of agents must be small, they must have a common language and common problem abstraction, and they must reach a common solution. Under these assumptions, Rosenschein and Zlotkin [37] developed a unified negotiation protocol. Agents that follow this protocol create a deal, that is, a joint plan between the agents that would satisfy all of their goals. The utility of a deal for an agent is the amount it is willing to pay minus the cost of the deal. Each agent wants to maximize its own utility.
The agents discuss a negotiation set, which is the set of all deals that have a positive utility
for every agent.
In formal terms, a task-oriented domain under this approach becomes a tuple <T, A, c>, where T is the set of tasks, A is the set of agents, and c(X) is a monotonic function giving the cost of executing the set of tasks X. A deal is a redistribution of tasks. The utility of deal d for agent k is Uk(d) = c(Tk) − c(dk). The conflict deal D occurs when the agents cannot reach a deal. A deal d is individually rational if d > D. Deal d is Pareto optimal if there is no deal d' > d. The set of all deals that are individually rational and Pareto optimal is the negotiation set, NS. There are three possible situations:
1. conflict: the negotiation set is empty;
2. compromise: agents prefer to be alone, but since they are not, they will agree to a negotiated deal;
3. cooperative: all deals in the negotiation set are preferred by both agents over achieving their goals alone.
When there is a conflict, then the agents will not benefit by negotiating—they are
better off acting alone. Alternatively, they can "flip a coin" to decide which agent gets to
satisfy its goals.
Since the agents have some execution autonomy, they can in principle deceive or
mislead each other. Therefore, an interesting research problem is to develop protocols or
societies in which the effects of deception and misinformation can be constrained. Another
aspect of the research problem is to develop protocols under which it is rational for agents to
be honest with each other. The connections of the economic approaches with human-oriented
negotiation and argumentation have not yet been fully worked out.
4.7 BARGAINING
A bargaining problem consists of a set S of feasible payoff pairs and a disagreement point d. A bargaining solution is a function
f : (S, d) → S
Thus the solution to a bargaining problem is a pair in R². It gives the values of the game to the two players and is generated through a function called the bargaining function, which maps the set of possible outcomes to the set of acceptable ones.
Bargaining Solution
In a transaction where the seller and the buyer value a product differently, a surplus is created. A bargaining solution is then a way in which buyers and sellers agree to divide the surplus. For example, consider a house made by a builder A. It cost him Rs. 10 lacs. A potential buyer is interested in the house and values it at Rs. 20 lacs. This transaction can generate a surplus of Rs. 10 lacs. The builder and the buyer now need to trade at a price. The buyer knows that the cost is less than 20 lacs and the seller knows that the value is greater than 10 lacs. The two of them need to agree on a price. Both try to maximize their surplus: the buyer would want to buy it for 10 lacs, while the seller would like to sell it for 20 lacs. They bargain over the price, and either trade or walk away. Trade results in the generation of surplus, whereas no surplus is created in the case of no trade. A bargaining solution provides an acceptable way to divide the surplus between the two parties. Formally, a bargaining solution is defined as F : (X, d) → S, where X ⊆ R² and S, d ∈ R². X represents the utilities of the players in the set of possible bargaining agreements, and d represents the point of disagreement. In the above example, price ∈ [10, 20], and the bargaining set is simply x + y ≤ 10, x ≥ 0, y ≥ 0. A point (x, y) in the bargaining set represents the case where the seller gets a surplus of x and the buyer gets a surplus of y, i.e. the seller sells the house at 10 + x and the buyer pays 20 − y.
The two elements that define a bargaining problem are:
1. the set of payoff allocations that are jointly feasible for the two players in the process of negotiation or arbitration, and
2. the payoffs they would expect if negotiation or arbitration were to fail to reach a settlement.
Axiom 1 (Individual Rationality) This axiom asserts that the bargaining solution should give neither player less than what it would get from disagreement, i.e., f(S, d) ≥ d.
Axiom 2 (Symmetry) As per this axiom, the solution should be independent of the
names of the players, i.e., who is named a and who is named b. This means that when the
players’ utility functions and their disagreement utilities are the same, they receive equal
shares. So any symmetries in the final payoff should only be due to the differences in their
utility functions or their disagreement outcomes.
Axiom 3 (Strong Efficiency) This axiom asserts that the bargaining solution should be
feasible and Pareto optimal.
Axiom 4 (Invariance) According to this axiom, the solution should not change as a result of linear changes to the utility of either player. So, for example, if a player's utility function is multiplied by 2, this should not change the solution; the player will simply value what it gets twice as much.
Axiom 5 (Independence of Irrelevant Alternatives) This axiom asserts that eliminating feasible alternatives (other than the disagreement point) that would not have been chosen anyway does not change the solution.
Nash proved that the bargaining solution that satisfies the above five axioms is given by:
f(S, d) = arg max over (u1, u2) ∈ S with u1 ≥ d1, u2 ≥ d2 of (u1 − d1)(u2 − d2)
Table 4.2 Cooperative vs. non-cooperative games
Cooperative games: The players are allowed to communicate before choosing their strategies and playing the game. The basic modeling unit is the group. Players can enforce cooperation in the group through a third party.
Non-cooperative games: Each player independently chooses its strategy. The basic modeling unit is the individual. The cooperation between individuals is self-enforcing.
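As a numerical check of the Nash solution on the house example above (disagreement point (0, 0), bargaining set x + y ≤ 10), a brute-force search over the efficient frontier recovers the symmetric split:

# Maximize the Nash product (x - 0) * (y - 0) subject to x + y = 10.
best, best_product = None, -1.0
steps = 1000
for i in range(steps + 1):
    x = 10 * i / steps           # seller's surplus
    y = 10 - x                   # buyer's surplus (use the whole surplus)
    if x * y > best_product:
        best, best_product = (x, y), x * y

print(best)   # (5.0, 5.0): an equal split, i.e. a price of Rs. 15 lacs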
The following are the four key procedures for bargaining over multiple issues:
1. Global bargaining: here, the bargaining agents directly tackle the global problem in which all the issues are addressed at once. In the context of non-cooperative theory, the global bargaining procedure is also called the package deal procedure. In this procedure, an offer from one agent to the other specifies how each one of the issues is to be resolved.
2. Independent implementation (agenda independence): this property states that global bargaining and sequential bargaining with independent implementation yield the same agreement.
3. Separate/global equivalence: this property states that global bargaining and separate bargaining yield the same agreement.
An agent's cumulative utility is linear and additive. The functions Ua and Ub give the cumulative utilities for agents a and b respectively at time t (their defining equations, numbered (4.1) and (4.2) in the source, are not reproduced here). In them, wa ∈ R+^m denotes an m-element vector of constants for agent a, and wb ∈ R+^m is such a vector for agent b. These vectors indicate how the agents value the different issues: for example, if wa_c > wa_(c+1), then agent a values issue c more than issue c + 1, and likewise for agent b.
4.9 ARGUMENTATION
➢ Argumentation is "a verbal and social activity of reason aimed at increasing (or decreasing) the acceptability of a controversial standpoint for the listener or reader, by putting forward a constellation of propositions (i.e. arguments) intended to justify (or refute) the standpoint before a rational judge".
– Motivational arguments: Beliefs, Desires -> Desire. E.g. if it is cloudy and you want to go out, then you do not want to get wet.
– Practical arguments: Beliefs, Sub-Goals -> Goal. E.g. if it is cloudy and you own a raincoat, then put on the raincoat.
– Social arguments: Social commitments -> Goal, Desire. E.g. "I will stop at the corner because the law says so"; "I can't do that, I promised my mother that I wouldn't."
Process of Argumentation
LAYERED ARCHITECTURES
Given the requirement that an agent be capable of reactive and pro-active behavior, an
obvious decomposition involves creating separate subsystems to deal with these different
types of behaviors. This idea leads naturally to a class of architectures in which the various
subsystems are arranged into a hierarchy of interacting layers. In this section, we will
consider some general aspects of layered architectures, and then go on to consider two
examples of such architectures.
Typically, there will be at least two layers, to deal with reactive and pro-active behaviors respectively. In principle, there is no reason why there should not be many more layers. However many layers there are, a useful typology for such architectures is given by the information and control flows within them. Broadly speaking, we can identify two types of control flow within layered architectures (see Figure):
• Horizontal layering. In horizontally layered architectures (Figure (a)), the software layers are each directly connected to the sensory input and action output; in effect, each layer itself acts like an agent, producing suggestions as to what action to perform.
• Vertical layering. In vertically layered architectures (Figure (b) and (c)), sensory input and action output are each dealt with by at most one layer.
The great advantage of horizontally layered architectures is their conceptual
simplicity: if we need an agent to exhibit n different types of behavior, then we
implement n different layers.
However, because the layers are each in effect competing with one another to generate action suggestions, there is a danger that the overall behavior of the agent will not be coherent. In order to ensure that horizontally layered architectures are consistent, they generally include a mediator function, which makes decisions about which layer has "control" of the agent at any given time.
The need for such central control is problematic: it means that the designer must potentially consider all possible interactions between layers. If there are n layers in the architecture, and each layer is capable of suggesting m possible actions, then there are m^n such interactions to be considered. This is clearly difficult from a design point of view in any but the simplest system. The introduction of a central control system also introduces a bottleneck into the agent's decision making.
ABSTRACT ARCHITECTURE
1. We can easily formalize the abstract view of agents presented so far. First, we will assume that the state of the agent's environment can be characterized as a set S = {s1, s2, ...} of environment states.
2. At any given instant, the environment is assumed to be in one of these states. The effectoric capability of an agent is assumed to be represented by a set A = {a1, a2, ...} of actions. Then, abstractly, an agent can be viewed as a function
action : S* → A
which maps sequences of environment states to actions. The behaviour of the environment can in turn be modelled as a function
env : S × A → P(S)
which takes the current state of the environment s ∈ S and an action a ∈ A (performed by the agent), and maps them to the set of environment states env(s, a) that could result from performing action a in state s. If all the sets in the range of env are singletons (i.e., if the result of performing any action in any state is a set containing a single member), then the environment is deterministic, and its behaviour can be accurately predicted.
The interaction of agent and environment can then be represented as a history
h : s0 --a0--> s1 --a1--> s2 --a2--> ... --a(u-1)--> su --au--> ...
where s0 is the initial state of the environment (i.e., its state when the agent starts executing), au is the u-th action that the agent chose to perform, and su is the u-th environment state (one of the possible results of executing action a(u-1) in state s(u-1)). Here action : S* → A is an agent, env : S × A → P(S) is an environment, and s0 is the initial state of the environment.
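The following Python fragment transcribes this formalization directly, with a toy two-state environment; the concrete states, actions, and the agent's policy are illustrative assumptions:

S = {"s0", "s1"}                  # environment states
A = {"a0", "a1"}                  # available actions

def env(state, action_taken):
    """env : S x A -> P(S); singleton result sets would make the
    environment deterministic (here a1 is non-deterministic)."""
    return {"s1"} if action_taken == "a0" else {"s0", "s1"}

def action(history):
    """action : S* -> A; a trivial agent reacting to the last state."""
    return "a0" if history[-1] == "s0" else "a1"

# Build one possible history  h : s0 -a0-> s1 -a1-> ...
state, history = "s0", ["s0"]
for _ in range(3):
    a = action(history)
    state = sorted(env(state, a))[0]   # pick one possible next state
    history.append(state)
print(history)   # ['s0', 's1', 's0', 's1']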
1. We have considered agents only in the abstract. So, while we have examined the properties of agents that do and do not maintain state, we have not stopped to consider what this state might look like. Similarly, we have modelled an agent's decision making as an abstract function action, which somehow manages to indicate which action to perform, but we have not discussed how this function might be implemented. In this section, we will rectify this omission. We will consider four classes of agents:
• logic-based agents, in which decision making is realized through logical deduction;
• reactive agents, in which decision making is implemented as some form of direct mapping from situation to action;
• belief-desire-intention agents, in which decision making depends upon the manipulation of data structures representing the beliefs, desires, and intentions of the agent; and
• layered architectures, in which decision making is realized via various software layers, each of which more or less explicitly reasons about the environment at a different level of abstraction.
In each of these cases, we are moving away from the abstract view of agents, and
beginning to make quite specific commitments about the internal structure and operation of
agents. Each section explains the nature of these commitments, the assumptions upon which
the architectures depend, and the relative advantages and disadvantages of each.
TEXT/REFERENCE BOOKS
1. Nils J. Nilsson, "The Quest for Artificial Intelligence", Cambridge University Press, 2009.
2. Gerhard Weiss, "Multi Agent Systems", 2nd Edition, MIT Press, 2013.