Artificial-Intelligence Notes
QUESTION BANK
Unit-1:
A rational agent is one that does the right thing. Here, the right thing is the one that will cause
the agent to be most successful. That leaves us with the problem of deciding how and when to
evaluate the agent's success.
12. What are the four components to define a problem? Define them. May-13
1. Initial state: the state in which the agent starts.
2. A description of possible actions: the description of the possible actions that are
available to the agent.
3. The goal test: the test that determines whether a given state is a goal state.
4. A path cost function: the function that assigns a numeric cost (value) to each
path. The problem-solving agent is expected to choose a cost function that
reflects its own performance measure.
Properties of Environment
The environment has multifold properties −
1. Fully observable vs Partially observable:
o If an agent's sensors can sense or access the complete state of the environment at each
point of time, then it is a fully observable environment; otherwise it is partially observable.
o A fully observable environment is easy to deal with, as there is no need to maintain an
internal state to keep track of the history of the world.
o If an agent has no sensors at all, then such an environment is called unobservable.
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.
o A stochastic environment is random in nature and cannot be determined completely by
an agent.
o In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
o However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.
4. Single-agent vs Multi-agent
o If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
o However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
o The agent design problems in the multi-agent environment are different from single
agent environment.
5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
o Static environments are easy to deal with because an agent does not need to keep
looking at the world while deciding on an action.
o However for dynamic environment, agents need to keep looking at the world at each
action.
o Taxi driving is an example of a dynamic environment whereas Crossword puzzles are
an example of a static environment.
6. Discrete vs Continuous:
o If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment else it
is called continuous environment.
o A chess game comes under a discrete environment, as there is a finite number of moves
that can be performed.
o A self-driving car is an example of a continuous environment.
7. Known vs Unknown
o Known and unknown are not actually features of the environment; they describe the
agent's state of knowledge needed to perform an action.
o In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an
action.
o It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.
Components of Planning System
The important components of planning are:
1. Choosing the best rule to apply next, based on the best available heuristic information.
2. Applying the chosen rule to compute the new problem state that arises from its application.
3. Detecting when a solution has been found.
4. Detecting dead ends so that they can be abandoned and the system's effort directed in a more promising direction.
5. Repairing an almost correct solution.
1. Choosing rules to apply:
First isolate a set of differences between the desired goal state and the current state.
Then detect rules that are relevant to reducing those differences.
If several rules are found, a variety of heuristic information can be exploited to
choose among them. This technique is based on the means-ends analysis method.
2. Applying rules:
Applying the rules is easy.
Each rule specifies the problem state that would result from its application.
We must be able to deal with rules that specify only a small part of the complete problem
state. One way to do this is to describe, for each action, each of the changes it makes to the
state description. A state is described by a set of predicates representing the facts that are
true in that state. The manipulation of the state description is done using a resolution
theorem prover.
3. Detecting a solution
A planning system has succeeded in finding a solution to a problem when it has
found a sequence of operators that transforms the initial problem state into the goal state. Any
of the corresponding reasoning mechanisms could be used to discover when a solution has
been found.
4. Detecting dead ends
As a planning system is searching for a sequence of operators to solve a particular
problem, it must be able to detect when it is exploring a path that can never lead to a solution.
The same reasoning mechanisms that can be used to detect a solution can often be used for
detecting a dead end.
If the search process is reasoning forward from the initial state, it can prune any path
that leads to a state from which the goal state cannot be reached.
If the search process is reasoning backward from the goal state, it can also terminate
a path, either because it is sure that the initial state cannot be reached or because little
progress is being made.
In reasoning backward, each goal is decomposed into sub-goals. Each of them may
lead to a set of additional sub-goals. Sometimes it is easy to detect that there is no way that
all the sub-goals in a given set can be satisfied at once. Other paths can be pruned because
they lead nowhere.
5. Repairing an almost correct solution
One way is to solve the sub-problems separately and then combine the solutions to yield a
correct solution, but this can lead to wasted effort.
The other way is to look at the situation that results when the sequence of operations
corresponding to the proposed solution is executed, and to compare that situation to the
desired goal. Usually the difference between this situation and the goal state is small. The
problem solver can then be called again and asked to find a way of eliminating this new
difference. The first solution can then be combined with the second one to form a solution
to the original problem.
Defer decisions until as much information as possible is available, and then complete the
specification in such a way that no conflicts arise. This approach is called the least
commitment strategy. It can be applied in a variety of ways:
To defer deciding on the order in which operations can be performed.
Rather than choosing one order in which to satisfy a set of preconditions, we could leave the
order unspecified until the very end. Then we could look at the effects of each of the sub-solutions
to determine the dependencies that exist among them. At that point, an ordering can be
chosen.
Unit-2:
Dec 10
DFS algorithm performance is measured in four ways –
1) Completeness – it is not complete in general (it can get lost down an infinite path); it is complete only in finite state spaces.
2) Optimality – it is not optimal.
3) Time complexity – its time complexity is O(b^m), where b is the branching factor and m is the maximum depth.
4) Space complexity – its space complexity is O(bm), since only the current path and the unexpanded siblings are stored.
5. What are the four components to define a problem? Define them? May 13
The four components to define a problem are,
1) Initial state – it is the state in which the agent starts.
2) A description of possible actions – it is the description of the possible actions which are
available to the agent.
3) The goal test – it is the test that determines whether a given state is goal (final) state.
4) A path cost function – it is the function that assigns a numeric cost (value) to each path.
The problem-solving agent is expected to choose a cost function that reflects its own
performance measure.
12. What is the use of online search agent in unknown environment? May-15
Ans: Refer Question 6
13. List some of the uninformed search techniques.
The uninformed search strategies are those that do not take into account the
location of the goal. That is these algorithms ignore where they are going until they
find a goal and report success. The three most widely used uninformed search
strategies are
1. Depth-first search – it expands the deepest unexpanded node.
2. Breadth-first search – it expands the shallowest unexpanded node.
3. Lowest-cost-first search (uniform cost search) – it expands the lowest-cost node.
1. Discuss any 2 uninformed search methods with examples. Dec 09,Dec 14,May-13,May-17
Breadth First Search (BFS)
Breadth first search is a general technique for traversing a graph. Breadth first search
may use more memory but will always find the shortest path first. In this type of search the
state space is represented in the form of a tree. The solution is obtained by traversing through the
tree. The nodes of the tree represent the start value or starting state, various intermediate states
and the final state. In this search a queue data structure is used, and the traversal is level by
level. Breadth first search expands nodes in order of their distance from the root. It is a
path-finding algorithm that is capable of always finding the solution if one exists. The solution
which is found is always the optimal solution. This task is completed in a very memory-intensive
manner. Each node in the search tree is expanded breadth-wise at each level.
Concept:
Step 1: Traverse the root node
Step 2: Traverse all neighbours of root node.
Step 3: Traverse all neighbours of neighbours of the root node.
Step 4: This process will continue until we are getting the goal node.
Algorithm:
Step 1: Place the root node inside the queue.
Step 2: If the queue is empty then stop and return failure.
Step 3: If the FRONT node of the queue is a goal node then stop and return
success.
Step 4: Remove the FRONT node from the queue. Process it and find all its
neighbours that are in the ready state, then place them inside the queue in any
order.
Step 5: Go to Step 3.
Step 6: Exit.
Implementation:
Let us implement the above BFS algorithm by taking a suitable example.
Consider a graph in which A is the starting node and F is the goal node.
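The original example graph is not reproduced here, so the following is only a minimal Python sketch of the queue-based BFS described above; the adjacency list (with A as the start node and F as the goal) is hypothetical.

from collections import deque

# Hypothetical adjacency list; only an illustration, not the original figure.
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': [],
}

def bfs(start, goal):
    """Breadth-first search: expand the shallowest unexpanded node first."""
    queue = deque([[start]])           # Step 1: place the root node inside the queue
    visited = {start}
    while queue:                       # Step 2: an empty queue means failure
        path = queue.popleft()         # Step 4: remove the FRONT node
        node = path[-1]
        if node == goal:               # Step 3: goal test on the FRONT node
            return path
        for neighbour in graph[node]:  # place its ready-state neighbours in the queue
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(bfs('A', 'F'))                   # e.g. ['A', 'C', 'F']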
4. Explain the nature of heuristics with an example. What is the effect of heuristic
accuracy? May-13, May-16
We can also call informed search heuristic search. It can be classified as below:
A* Search
AO* Search
Hill Climbing
Constraint satisfaction
Heuristic is a technique which makes our search algorithm more efficient. Some heuristics
help to guide a search process without sacrificing any claim to completeness, while others sacrifice
it.
These searches use functions that estimate the cost from the current state to the goal,
presuming that such a function is efficient. A heuristic function is a function that maps problem
state descriptions to measures of desirability, usually represented as numbers.
The purpose of a heuristic function is to guide the search process in the most profitable
direction by suggesting which path to follow first when more than one is available.
In AI heuristic has a general meaning and also a more specialized technical meaning.
Generally a term heuristic is used for any advice that is effective but is not guaranteed to work in
every case.
For example, in the case of the travelling salesman problem (TSP) we use a heuristic to
choose the nearest neighbour. A heuristic is a method that provides a better guess about the correct
choice to make at any junction than would be achieved by random guessing.
This technique is useful in solving tough problems which could not be solved in any other
way, or whose exact solutions would take an infinite time to compute.
Best First Search
This search algorithm serves as a combination of depth first and breadth first search.
The best first search algorithm is often referred to as a greedy algorithm because it quickly attacks
the most desirable path as soon as its heuristic weight becomes the most desirable.
Concept:
Step 1: Traverse the root node.
Step 2: Traverse any neighbour of the root node that maintains the least distance from the
root node, and insert it in ascending order into the queue.
Step 3: Traverse any neighbour of a neighbour of the root node that maintains the least
distance from the root node, and insert it in ascending order into the queue.
Step 4: This process will continue until we reach the goal node.
Algorithm:
Step 1: Place the starting node or root node into the queue.
Step 2: If the queue is empty, then stop and return failure.
Step 3: If the first element of the queue is the goal node, then stop and return success.
Step 4: Else, remove the first element from the queue. Expand it and compute the estimated
goal distance for each child. Place the children in the queue in ascending order of the goal
distance.
Step 5: Go to Step 3.
Implementation:
Step 1: Consider the node A as our root node. So the first element of the queue is A, which is
not our goal node, so remove it from the queue and find its neighbours that are to be inserted in
ascending order.
A
Step 2: The neighbours of A are B and C. They will be inserted into the queue in ascending
order.
B C A
Step 3: Now B is on the FRONT end of the queue. So calculate the neighbours of B that are
maintaining the least distance from the root.
F E D C B
Step 4: Now the node F is on the FRONT end of the queue. But as it has no further children,
remove it from the queue and proceed further.
E D C B
Step 5: Now E is at the FRONT end. The children of E are J and K. Insert them into the queue
in ascending order.
K J D C E
Step 6: Now K is on the FRONT end and, as it has no further children, remove it and proceed
further.
J D C K
Step 8: Now D is on the FRONT end; calculate the children of D and put them into the queue.
I C D
Step 10: Now C is the FRONT node. So calculate the neighbours of C that are to be inserted
in ascending order into the queue.
G H C
Step 11: Now remove G from the queue and calculate its neighbours that are to be inserted in
ascending order into the queue.
M L H G
Step 12: Now M is the FRONT node of the queue, which is our goal node. So stop here and
exit.
L H M
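A minimal Python sketch of the greedy best-first idea traced above, ordering the queue by an estimated goal distance h(n); the graph and the heuristic values below are hypothetical, not the figure used in the trace.

import heapq

# Hypothetical graph and heuristic estimates h(n); illustrations only.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G'], 'D': [], 'E': ['G'], 'G': []}
h = {'A': 6, 'B': 4, 'C': 2, 'D': 5, 'E': 3, 'G': 0}

def greedy_best_first(start, goal):
    """Always expand the node whose heuristic estimate h(n) is smallest."""
    frontier = [(h[start], [start])]        # priority queue ordered by h(n)
    visited = set()
    while frontier:                         # an empty frontier means failure
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph[node]:           # children enter the queue ordered by h
            heapq.heappush(frontier, (h[child], path + [child]))
    return None

print(greedy_best_first('A', 'G'))          # e.g. ['A', 'C', 'G']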
Advantage:
The time complexity of best first search is much less than that of breadth first search.
Best first search allows us to switch between paths, gaining the benefits of both breadth
first and depth first search: depth first search is good because a solution can be found
without computing all nodes, and breadth first search is good because it does not get trapped
in dead ends.
Disadvantages:
Sometimes it covers more distance than expected.
Branch and Bound Search
Branch and Bound is an algorithmic technique which finds the optimal solution by keeping track
of the best solution found so far. If a partial solution cannot improve on the best, it is abandoned;
in this way large parts of the search space can be pruned.
Implementation (the example graph itself is not reproduced here; A is the starting node):
Step 1: From A (the cost of A is 0 as it is the starting node), the partial path costs are
0+5 = 5, 0+9 = 9 and 0+7 = 7.
Step 2: Extending the cheapest partial path gives 0+5+4 = 9 and 0+5+6 = 11.
Step 3: Extending the cheapest partial path again gives 0+5+4+8 = 17 and 0+5+4+3 = 12.
Step 4: The least distance reaches F from D, and F is our goal node. So stop and return success.
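A minimal Python sketch of the branch-and-bound idea: partial paths are extended in order of their accumulated cost and are abandoned as soon as they cannot improve on the best complete solution found so far. The graph and its edge costs are hypothetical.

import heapq

# Hypothetical weighted graph; edge costs are illustrations only.
graph = {'A': [('B', 5), ('C', 7)], 'B': [('D', 4)], 'C': [('F', 9)],
         'D': [('F', 3)], 'F': []}

def branch_and_bound(start, goal):
    """Keep the cheapest complete solution found so far and prune worse partial paths."""
    best_cost, best_path = float('inf'), None
    frontier = [(0, [start])]                  # partial solutions ordered by cost
    while frontier:
        cost, path = heapq.heappop(frontier)
        if cost >= best_cost:                  # bound: cannot improve the best solution
            continue
        node = path[-1]
        if node == goal:                       # a complete solution; keep it if cheaper
            best_cost, best_path = cost, path
            continue
        for child, step in graph[node]:        # branch: extend the partial path
            heapq.heappush(frontier, (cost + step, path + [child]))
    return best_path, best_cost

print(branch_and_bound('A', 'F'))              # e.g. (['A', 'B', 'D', 'F'], 12)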
Advantages:
As it finds the minimum-cost path instead of just the minimum-cost successor, there should not
be any repetition. The time complexity is less compared to other algorithms.
Disadvantages:
The load-balancing aspects of the Branch and Bound algorithm make its parallelization difficult.
The Branch and Bound algorithm is limited to small networks. For large networks, where the
solution search space grows exponentially with the scale of the network, the approach becomes
prohibitive.
A* SEARCH
A* is a cornerstone of many AI systems and has been used since it was developed in
1968 by Peter Hart, Nils Nilsson and Bertram Raphael. It is a combination of Dijkstra's algorithm
and best first search. It can be used to solve many kinds of problems. A* search finds the shortest
path through a search space to the goal state using a heuristic function. This technique, which finds
minimal-cost solutions directed towards a goal state, is called A* search.
In A*, the * is written for optimality. The A* algorithm also finds the lowest-cost
path between the start and goal states, where changing from one state to another incurs some cost.
A* requires a heuristic function to evaluate the cost of a path that passes through a particular state.
This algorithm is complete if the branching factor is finite and every action has a fixed cost. The
evaluation function is defined by the following formula:
f (n) = g (n) + h (n)
where g (n): the actual cost of the path from the start state to the current state n,
h (n): the estimated cost of the path from the current state n to the goal state (the heuristic), and
f (n): the estimated cost of the cheapest path from the start state to the goal state passing through n.
For the implementation of A* algorithm we will use two arrays namely OPEN and
CLOSE.
OPEN:
An array which contains the nodes that have been generated but have not yet been examined.
CLOSE:
An array which contains the nodes that have already been examined.
Algorithm:
Step 1: Place the starting node into OPEN and find its f (n) value.
Step 2: Remove the node from OPEN, having smallest f (n) value. If it is a goal node then
stop and return success.
Step 3: Else, find all the successors of the node removed from OPEN.
Step 4: Find the f (n) value of all the successors, place them into OPEN, and place the
removed node into CLOSE.
Step 5: Go to Step 2.
Step 6: Exit.
Implementation:
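The worked figure for this example is not reproduced here; as a minimal sketch, the OPEN/CLOSE bookkeeping described above can be written in Python as follows, with f(n) = g(n) + h(n). The graph and heuristic values are hypothetical.

import heapq

# Hypothetical weighted graph and heuristic h(n); illustrations only.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('D', 3), ('E', 1)], 'C': [('G', 2)],
         'D': [('G', 5)], 'E': [('G', 3)], 'G': []}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 4, 'E': 2, 'G': 0}

def a_star(start, goal):
    open_list = [(h[start], 0, [start])]        # OPEN: (f, g, path), smallest f first
    closed = set()                              # CLOSE: nodes already examined
    while open_list:
        f, g, path = heapq.heappop(open_list)   # Step 2: node with the smallest f(n)
        node = path[-1]
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                        # Step 4: move the examined node to CLOSE
        for child, cost in graph[node]:         # Steps 3-4: expand and score successors
            if child not in closed:
                new_g = g + cost
                heapq.heappush(open_list, (new_g + h[child], new_g, path + [child]))
    return None, float('inf')

print(a_star('A', 'G'))                         # e.g. (['A', 'B', 'E', 'G'], 5)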
Advantages:
A* is complete, and it is optimal when the heuristic function h(n) is admissible, i.e. it never
overestimates the true cost to the goal.
AO* SEARCH
The depth first search and breadth first search given earlier for OR trees or graphs can easily be
adapted to AND-OR graphs. The main difference lies in the way termination conditions are determined,
since all goals following an AND node must be realized, whereas a single goal node following an OR
node will do. For this purpose we use the AO* algorithm. Like the A* algorithm, here we will use two
arrays and one heuristic function.
OPEN:
It contains the nodes that have been traversed but have not yet been marked solvable or unsolvable.
CLOSE:
It contains the nodes that have already been processed.
Algorithm:
Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and
place it in CLOSE.
Step 4: As the nodes G and H are unsolvable, place them into CLOSE directly and process the
nodes D and E.
Step 5: Now we have reached our goal state, so place F into CLOSE.
Step 6:
Advantages:
It is an optimal algorithm.
Disadvantages:
Hill Climbing
Algorithm:
Step 1: Evaluate the starting state. If it is a goal state then stop and return success.
Step 2: Else, continue with the starting state, considering it as the current state.
Step 3: Continue Step 4 until a solution is found, i.e. until there are no new states left to be
applied to the current state.
Step 4:
a. Select a state that has not yet been applied to the current state and apply it to produce a
new state.
b. If it is better than the current state, then make it the current state and proceed further.
c. If it is not better than the current state, then continue in the loop until a solution is found.
Step 5: Exit.
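A minimal Python sketch of the hill-climbing loop above, assuming a hypothetical numeric objective function and neighbour generator.

def hill_climbing(start, neighbours, value):
    """Simple hill climbing: keep moving to a better neighbour until none exists."""
    current = start                                # Step 2: start state is the current state
    while True:
        better = None
        for candidate in neighbours(current):      # Step 4a: try an unapplied successor
            if value(candidate) > value(current):  # Step 4b: is it better than the current state?
                better = candidate
                break
        if better is None:                         # no better neighbour: a (local) maximum
            return current
        current = better                           # Step 4b: make it the current state

# Hypothetical one-dimensional example: maximise value(x) = -(x - 7)**2 over the integers.
result = hill_climbing(0,
                       neighbours=lambda x: [x - 1, x + 1],
                       value=lambda x: -(x - 7) ** 2)
print(result)   # 7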
Advantages:
12.Explain alpha-beta pruning algorithm and the Minmax game playing algorithm with example?
Dec-03, Dec-04, May-09, May-10, Dec-10, May-17, May-19
ALPHA-BETA pruning is a method that reduces the number of nodes explored
in the Minimax strategy. It reduces the time required for the search, because the search is
restricted so that no time is wasted exploring moves that are obviously bad for the current player.
The exact implementation of alpha-beta keeps track of the best move for each side as it moves
throughout the tree.
We proceed in the same (preorder) way as for the minimax algorithm. For the MIN nodes, the score
computed starts with +infinity and decreases with time.
For MAX nodes, scores computed starts with –infinity and increase with time.
The efficiency of the alpha-beta procedure depends on the order in which the successors of a node
are examined. If we were lucky, at a MIN node we would always consider the nodes in order
from low to high score, and at a MAX node in order from high to low score. In general
it can be shown that, in the most favourable circumstances, an alpha-beta search can examine a
game tree of double the depth while opening only about as many leaves as minimax would on the
original tree.
Alpha-Beta algorithm: The algorithm maintains two values, alpha and beta, which represents
the minimum score that the maximizing player is assured of and the maximum score that the
minimizing player is assured of respectively. Initially alpha is negative infinity and beta is
positive infinity. As the recursion progresses the "window" becomes smaller. When beta becomes
less than alpha, it means that the current position cannot be the result of best play by both players
and hence need not be explored further.
function ALPHABETA(node, alpha, beta, player):
    if node is a leaf
        return the heuristic value of node
    if player is MAX:
        for each child of node: alpha = max(alpha, ALPHABETA(child, alpha, beta, MIN)); if alpha >= beta then return beta
        return alpha
    else:
        for each child of node: beta = min(beta, ALPHABETA(child, alpha, beta, MAX)); if beta <= alpha then return alpha
        return beta
The Min-Max algorithm is applied in two player games, such as tic-tac-toe, checkers,
chess, go, and so on.
There are two players involved, MAX and MIN. A search tree is generated, depth-first, starting with the
current game position and going up to the end-game positions. Then, the final game positions are evaluated
from MAX's point of view, as shown in Figure 1. Afterwards, the inner node values of the tree are filled
bottom-up with the evaluated values. The nodes that belong to the MAX player receive the maximum
value of their children. The nodes for the MIN player will select the minimum value of their children.
The values represent how good a game move is. So the MAX player will try to select the move
with the highest value in the end. But the MIN player also has something to say about it, and he will
try to select the moves that are better for him, thus minimizing MAX's outcome.
Algorithm
MinMax(game, player):
    if GameEnded(game):
        return EvalGameState(game)
    best_move = none
    for each move in the legal moves of game:
        value = MinMax(the game after making move, the other player)
        if value is better for player than the value of best_move:
            best_move = move
    return best_move
Optimization
This all means that sometimes the search can be aborted because we find out that the
search subtree won‘t lead us to any viable answer. This optimization is known as alpha-beta
cutoffs.
The algorithm has two values passed around the tree nodes: the alpha value, which holds the
best MAX value found so far, and the beta value, which holds the best MIN value found so far.
At a MAX level, before evaluating each child path, compare the value returned by the previous
path with the beta value. If the value is greater than beta, abort the search for the current node.
At a MIN level, before evaluating each child path, compare the value returned by the previous
path with the alpha value. If the value is less than alpha, abort the search for the current node.
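A minimal Python sketch of minimax with alpha-beta cut-offs, assuming a hypothetical game tree written as nested lists whose leaves are heuristic values (MAX to move at the root).

import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a tree of nested lists (leaves are numbers)."""
    if not isinstance(node, list):              # a leaf: return its heuristic value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:                   # beta cut-off: MIN will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                   # alpha cut-off: MAX will never allow this branch
                break
        return value

# Hypothetical 2-ply game tree: each inner list is a MIN node.
tree = [[3, 5, 10], [2, 12], [7, 8]]
print(alphabeta(tree, -math.inf, math.inf, True))   # 7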
Example
"All music lovers who enjoy Bach either dislike Wagner or think that anyone who
dislikes any composer is a philistine."
∀x [musiclover(x) ∧ enjoy(x, Bach) …
e.g. a → b = ~a ∨ b
~(~p) = p
~(a ∧ b) = ~a ∨ ~b
3.Change variable names such that, each quantifier has a unique name.
We do this in preparation for the next step. As variables are just dummy names, changing a
variable name does not affect the truth value of the wff. Suppose we have
problem.
We can eliminate the existential quantifier by simply replacing the variable with a reference
to a function that produces the desired value.
If the existential quantifiers occur within the scope of a universal quantifier, then the value that
satisfies the predicate may depend on the values of the universally quantified variables.
As we have eliminated all existential quantifiers, all the variables present in the wff are
universally quantified; hence for simplicity we can just drop the prefix and assume that every
variable is universally quantified. We have, from our example:
as we have no ANDs we will just have to use the associative property to get rid of the
brackets.
We have :
As we did not have ANDs in our example, this step is skipped, and the final
output of the conversion is :
Unit-3:
1. What are the limitations in using propositional logic to represent the knowledge base? May-11
Propositional logic has following limitations to represent the knowledge base.
i. It has limited expressive power.
ii. It cannot directly represent properties of individuals or relations between
individuals.
iii. Generalizations, patterns, regularities cannot easily be represented.
iv. Many rules (axioms) are required to be written so as to allow inference.
4.What is ontological commitment (what exists in the world) of first order logic? Represent the
sentence “Brothers are siblings” in first order logic? Dec - 10
Ontological commitment means what assumptions the language makes about the nature of reality.
The representation of "Brothers are siblings" in first order logic is
∀x, y [Brother(x, y) ⇒ Siblings(x, y)]
5.Differentiate between propositional and first order predicate logic? May – 10 , Dec – 11
Following are the comparative differences between first order logic and propositional logic.
1) Propositional logic is less expressive and does not reflect an individual object's properties
explicitly. First order logic is more expressive and can represent an individual object along
with all its properties.
2) Propositional logic cannot represent relationship among objects whereas first order logic
can represent relationship.
3) Propositional logic does not consider generalization of objects where as first order logic
handles generalization.
4) Propositional logic includes sentence letters (A, B, and C) and logical connectives, but
not quantifier.
First order logic has the same connectives as propositional logic, but it also has variables
for individual objects, quantifier, symbols for functions and symbols for relations.
6.What factors justify whether the reasoning is to be done in forward or backward reasoning?
Dec - 11
Following factors justify whether the reasoning is to be done in forward or backward
reasoning:
a. Is it possible to begin with the start state or the goal state?
b. Is there a need to justify the reasoning?
c. What kind of events trigger the problem solving?
d. In which direction is the branching factor greatest? One should go in the
direction with the lower branching factor.
Diagnostic rules are used in first order logic for inference. Diagnostic rules infer
hidden causes from observed effects. They help to deduce hidden facts about the world. For
example, consider the Wumpus world.
The diagnostic rule for finding a pit is
"If a square is breezy, some adjacent square must contain a pit", which is written as
∀s Breezy(s) ⇒ ∃r Adjacent(r, s) ∧ Pit(r).
Generalized Modus Ponens: for atomic sentences pi, pi′ and q, where there is a substitution θ
such that SUBST(θ, pi′) = SUBST(θ, pi) for all i, from the premises p1′, p2′, …, pn′ and
(p1 ∧ p2 ∧ … ∧ pn ⇒ q) we can infer SUBST(θ, q).
There are n + 1 premises to this rule: the n atomic sentences pi′ and the one implication. The
conclusion is the result of applying the substitution θ to the consequent q.
8. Define atomic sentence and complex sentence? Dec – 14
Atomic sentences
1. An atomic sentence is formed from a predicate symbol followed by a parenthesized list of
terms.
For example: Stepsister (Cindrella, Drizella)
2. Atomic sentences can have complex terms as the arguments.
For example: Married (Father (Cindrella), Mother (Drizella))
3. Atomic sentences are also called atomic expressions, atoms or propositions.
For example: Equal (plus (two, three), five) is an atomic sentence.
Complex sentences
Complex sentences are constructed from atomic sentences using logical connectives such as
¬, ∧, ∨, ⇒ and ⇔.
The rules that determine the conflict resolution strategy are called meta rules. Meta rules
define knowledge about how the system will work. For example, meta rules may define that
knowledge from expert 1 is to be trusted more than knowledge from expert 2. Meta rules are
treated by the system like normal rules, but they are given higher priority.
¬cat(x) ∨ ¬fish(y) ∨ likes(x, y)
12. Explain following term with reference to prolog programming language :clauses
Clauses: clauses are the structural elements of the program. A Prolog programmer develops a
program by writing a collection of clauses in a text file. The programmer then uses the consult
command, specifying the name of the text file, to load the clauses into the Prolog
environment.
13. Explain the following term with reference to the Prolog programming language: predicates
Each predicate has a name and zero or more arguments. The predicate name is a Prolog atom,
and each argument is an arbitrary Prolog term. A predicate with name pred and N arguments is
denoted by pred/N, which is called a predicate indicator. A predicate is defined by a collection of
clauses.
A clause is either a rule or a fact. The clauses that constitute a predicate denote logical
alternatives: if any clause is true, then the whole predicate is true.
14. Explain the following term with reference to the Prolog programming language: domains
Domains: the arguments to the predicates must belong to known Prolog domains. A
domain can be a standard domain, or it can be one you declare in the domains section. The
two types of clauses are facts and rules. Example: given
Predicates:
my_predicate(name, number)
you will need to declare suitable domains for name and number.
Assuming you want these to be symbol and integer respectively, the domain declaration
looks like this:
Domains:
name = symbol
number = integer
Predicates:
my_predicate(name, number)
15. Explain the following term with reference to the Prolog programming language: goal
A goal is a statement starting with a predicate and probably followed by its arguments. In
a valid goal, the predicate must have appeared in at least one fact or rule in the consulted
program, and the number of arguments in the goal must be the same as appears in the
consulted program. Also, all the arguments (if any) are constants.
The purpose of submitting a goal is to find out whether the statement represented by the
goal is true according to the knowledge database (i.e. the facts and rules in the consulted
program). This is similar to proving a hypothesis – the goal being the hypothesis, the
facts being the axioms and the rules being the theorems.
16. Explain the following term with reference to the Prolog programming language: cut
The cut, in Prolog, is a goal, written as !, which always succeeds but cannot be
backtracked past. The Prolog cut predicate, or '!', eliminates choices in a Prolog derivation
tree. It is used to prevent unwanted backtracking, for example to prevent extra solutions
being found by Prolog.
The cut should be used sparingly. There is a temptation to insert cuts experimentally into
code that is not working correctly.
17. Explain the following term with reference to the Prolog programming language: fail
It is the built-in Prolog predicate with no arguments which, as the name suggests, always fails.
It is useful for forcing backtracking and in various other contexts.
Inference engine: Prolog has a built-in backward-chaining inference engine which can be used
to partially implement some expert systems. Prolog rules are used for knowledge
representation, and the Prolog inference engine is used to derive conclusions. Other
portions of the system, such as the user interface, must be coded using Prolog as a
programming language. The Prolog inference engine does simple backward chaining.
Each rule has a goal and a number of sub-goals. The Prolog inference engine either
proves or disproves each goal. There is no uncertainty associated with the results.
This rule structure and inference strategy are adequate for many expert system
applications. Only the dialogue with the user needs to be improved to create a simple
expert system. These features are used in the chapter to build a sample application called
"birds", which identifies birds.
Ontological engineering is the process of representing abstract concepts, such as actions and
time, which occur in real-world domains. This process is complex and lengthy, because
real-world objects have many different characteristics whose values can differ over
time. In such cases ontological engineering generalizes the objects having similar
characteristics.
1. Write the algorithm for deciding entailment in propositional logic. May 13 Dec 14
REFER Qno 7
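A minimal sketch of the truth-table enumeration approach: KB entails α exactly when α is true in every model in which KB is true. The propositional symbols and sentences used below are hypothetical.

from itertools import product

def tt_entails(kb, alpha, symbols):
    """KB entails alpha iff alpha holds in every model where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))       # one row of the truth table
        if kb(model) and not alpha(model):       # a model of KB in which alpha fails
            return False
    return True

# Hypothetical example: KB = (P => Q) and P ; query alpha = Q.
kb = lambda m: (not m['P'] or m['Q']) and m['P']
alpha = lambda m: m['Q']
print(tt_entails(kb, alpha, ['P', 'Q']))         # True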
To satisfy these assumptions about KR, we need formal notation that allows automated inference
and problem solving. One popular choice is use of logic.
Logic
Logic is concerned with the truth of statements about the world. Generally each
statement is either TRUE or FALSE. Logic includes: Syntax, Semantics and Inference Procedure.
1. Syntax:
Specifies the symbols in the language about how they can be combined to form sentences.
The facts about the world are represented as sentences in logic.
2. Semantic:
Specifies how to assign a truth value to a sentence based on its meaning in the world. It
specifies what facts a sentence refers to. A fact is a claim about the world, and it may be TRUE or
FALSE.
3. Inference Procedure:
Specifies methods for computing new sentences from the existing sentences. Logic as a
KR Language
Logic is a language for reasoning, a collection of rules used while doing logical reasoning. Logic
is studied as KR languages in artificial intelligence. Logic is a formal system in which the formulas or
sentences have true or false values. Problem of designing KR language is a tradeoff between that which is
Logics are of different types: Propositional logic, Predicate logic, temporal logic, Modal logic,
Description logic etc;
They represent things and allow more or less efficient inference. Propositional logic
and Predicate logic are fundamental to all logic. Propositional Logic is the study of
statements and their connectivity. Predicate Logic is the study of individuals and
their properties.
Logic Representation
The facts are claims about the world that are True or False. To
build a Logic-based representation:
Sentences that are either TRUE or FALSE, but not both, are called propositions.
For example, the declarative sentence "snow is white" expresses that snow is white; further, "snow is
white" expresses that snow is white is TRUE.
For example, man(john) and ¬man(john) is a contradiction, while man(john) and
¬man(Himalayas) is not. Thus in order to determine contradictions we need a matching procedure
that compares two literals and discovers whether there exists a set of substitutions that makes them
identical. There is a recursive procedure that does this matching. It is called the unification
algorithm.
In the unification algorithm, each literal is represented as a list whose first element
is the name of a predicate and whose remaining elements are arguments. The
argument may be a single element (atom) or may be another list. For example, we
can represent tryassassinate(Marcus, Caesar) as the list (tryassassinate Marcus Caesar).
To unify two literals, first check whether their first elements are the same. If so, proceed. Otherwise they
cannot be unified; for example, the literal (tryassassinate Marcus Caesar) cannot be unified with a
literal whose predicate is different. The unification algorithm recursively matches pairs of elements,
one pair at a time.
The matching rules are :
The substitution must be consistent. Substituting y for x now and then z
for x later is inconsistent. (A substitution of y for x is written as y/x.)
The unification algorithm is listed below as the procedure UNIFY (L1, L2). It
returns a list representing the composition of the substitutions that were performed
during the match. An empty list NIL indicates that a match was found without any
substitutions. If the list contains the single value F, it indicates that the unification
procedure failed. The recursive step calls UNIFY with the i-th element of L1 and the
i-th element of L2, putting the result in S; if any such call fails, F is returned.
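A minimal Python sketch of this recursive matching idea, simplified from the UNIFY(L1, L2) procedure described above: a literal is a list whose first element is the predicate name, variables are strings beginning with '?', and the result is a substitution dictionary (None plays the role of the failure value F). The occurs check is omitted.

def is_variable(term):
    return isinstance(term, str) and term.startswith('?')

def unify(x, y, subst):
    """Return a consistent substitution making x and y identical, or None on failure."""
    if subst is None:                       # an earlier pair already failed to match
        return None
    if x == y:
        return subst
    if is_variable(x):
        return unify_var(x, y, subst)
    if is_variable(y):
        return unify_var(y, x, subst)
    if isinstance(x, list) and isinstance(y, list) and len(x) == len(y):
        for xi, yi in zip(x, y):            # match pairs of elements, one pair at a time
            subst = unify(xi, yi, subst)
        return subst
    return None                             # differing constants cannot be unified

def unify_var(var, value, subst):
    if var in subst:                        # keep the substitution consistent
        return unify(subst[var], value, subst)
    new_subst = dict(subst)
    new_subst[var] = value
    return new_subst

# Hypothetical literals: hate(Marcus, ?x) and hate(?y, Caesar).
print(unify(['hate', 'Marcus', '?x'], ['hate', '?y', 'Caesar'], {}))
# {'?y': 'Marcus', '?x': 'Caesar'}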
Resolution yields a complete inference algorithm when coupled with any complete
search algorithm. Resolution makes use of the inference rules. Resolution performs deductive
inference. Resolution uses proof by contradiction. One can perform Resolution from a
Knowledge Base. A Knowledge Base is a collection of facts or one can even call it a database
with all facts.
Resolution basically works by using the principle of proof by contradiction. To find the
conclusion, we negate the conclusion and add it to the set of clauses; then the resolution rule is
applied to the resulting clauses.
Each pair of clauses that contains complementary literals is resolved to produce a new clause,
which can be added to the set of facts (if it is not already present). This process continues until
one of two things happens: either there are no new clauses that can be added, or an application
of the resolution rule derives the empty clause. An empty clause shows that the negation of the
conclusion is a complete contradiction; hence the negation of the conclusion is invalid (false)
and the original assertion is completely valid (true).
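As a minimal illustration of the propositional case, the refutation loop just described can be sketched in Python: each clause is a frozenset of literals, a literal is negated by prefixing '~', and the query is proved when the empty clause is derived. The knowledge base below is hypothetical.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:               # complementary literals found
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents

def resolution_refutation(kb_clauses, query):
    """Prove 'query' by adding its negation and trying to derive the empty clause."""
    clauses = set(kb_clauses) | {frozenset([negate(query)])}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for resolvent in resolve(c1, c2):
                    if not resolvent:       # empty clause: contradiction, query proved
                        return True
                    new.add(resolvent)
        if new.issubset(clauses):           # no new clauses can be added: not proved
            return False
        clauses |= new

# Hypothetical KB: (P => Q) and P ; query: Q.
kb = [frozenset(['~P', 'Q']), frozenset(['P'])]
print(resolution_refutation(kb, 'Q'))       # True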
Steps for Resolution
Eliminate implications and move negations inward, using:
a → b = ~a ∨ b
~(a ∧ b) = ~a ∨ ~b (De Morgan's Law)
~(~a) = a
Eliminate the existential quantifier '∃':
Here 'y' is an independent (existentially quantified) variable, so we can replace 'y' by any name
(say, George Bush). So ∃y: President(y) becomes President(George Bush).
Eliminate the conjunction '∧':
a ∧ b splits the entire clause into two separate clauses, i.e. a and b.
To eliminate '∧', break the clause into two; if you cannot break the clause, distribute the OR '∨'
and then break the clause.
Three types of rules are mostly used in rule-based production systems.
Knowledge Declarative Rules:
These rules state all the facts and relationships about a problem. Example :
Inference Procedural Rules:
These rules advise on how to solve a problem, while certain facts are known. Example :
Meta Rules:
These are rules for making rules. Meta-rules reason about which rules should be considered
for firing.
Example :
IF there are rules which do not mention the current goal in their premise, AND there
are rules which do mention the current goal in their premise, THEN the former
should be used in preference to the latter. Meta-rules direct reasoning rather
than actually performing reasoning.
Meta-rules specify which rules should be considered and in which order they
should be invoked.
FACTS: They represent the real-world information.
Inference Engine
The inference engine uses one of several available forms of inferencing. Inferencing means
the method used in a knowledge-based system to process the stored knowledge and supplied
data to produce correct conclusions.
Dempster/Shafer theory
The Dempster-Shafer theory, also known as the theory of belief functions, is a
generalization of the Bayesian theory of subjective probability.
Whereas the Bayesian theory requires probabilities for each question of interest,
belief functions allow us to base degrees of belief for one question on probabilities
for a related question.
These degrees of belief may or may not have the mathematical properties of
probabilities; how much they differ from probabilities will depend on how closely
the two questions are related.
The Dempster-Shafer theory owes its name to work by A. P. Dempster (1968) and
Glenn Shafer (1976), but the kind of reasoning the theory uses can be found as far
back as the seventeenth century.
The theory came to the attention of AI researchers in the early 1980s, when they
were trying to adapt probability theory to expert systems.