
ARTIFICIAL INTELLIGENCE

QUESTION BANK

Unit-1:

1. Define A.I or what is A.I? May03,04


Artificial intelligence is the branch of computer science that deals with the
automation of intelligent behavior. AI provides the basis for developing human-like
programs which can be used to solve real-life problems and thereby become useful
to mankind.
2. What is meant by robotic agent? May 05
A robotic agent is a machine that looks like a human being and performs various complex acts of a
human being. It can do the task efficiently and repeatedly without fault. It works on the basis
of a program fed to it; it can have previously stored knowledge of the environment gathered through
its sensors. It acts with the help of actuators.

3 Define an agent? May 03,Dec-09


An agent is anything ( a program, a machine assembly ) that can be viewed as
perceiving its environment through sensors and acting upon that environment through
actuators.

4 Define rational agent? Dec-05,11, May-10

A rational agent is one that does the right thing. Here, the right thing is the one that will cause
the agent to be more successful. That leaves us with the problem of deciding how and when to
evaluate the agent's success.

5 Give the general model of learning agent? Dec-03


Learning agent model has 4 components –
1) Learning element.
2) Performance element.
3) Critic
4) Problem Generator.

6 What is the role of agent program? May-04


The agent program is an important and central part of an agent system. It drives the agent,
which means that it analyzes data and provides the probable actions the agent could take.
An agent program is a concrete internal implementation of the agent function.
An agent program takes as input the current percept from the sensors and returns an
action to the effectors.
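As a rough illustration (a minimal Python sketch; the vacuum-world percepts and action names are assumptions made for the example, not part of the original notes), an agent program can simply map the current percept to an action:

# A minimal simple-reflex agent program for a two-square vacuum world (assumed example).
def vacuum_agent_program(percept):
    location, status = percept          # input: current percept from the sensors
    if status == "Dirty":
        return "Suck"                   # output: action returned to the actuators
    elif location == "A":
        return "Right"
    else:
        return "Left"

# Example: the agent perceives square A as dirty and decides to suck.
print(vacuum_agent_program(("A", "Dirty")))   # -> Suck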
7 List down the characteristics of intelligent agent? May-11
The IA must learn and improve through interaction with the environment.
The IA must adapt online and in the real time situation.
The IA must accommodate new problem-solving rules incrementally.
The IA must have memory which must exhibit storage and retrieval capabilities.

8 Define abstraction? May-12


In AI, abstraction is commonly used to account for the use of various levels of
detail in a given representation language, or the ability to change from one level of detail
to another while preserving useful properties. Abstraction has been mainly studied in
problem solving and theorem proving.
9 State the concept of rationality? May-12
Rationality is the capacity to generate maximally successful behavior given the
available information. Rationality also indicates the capacity to compute the perfectly
rational decision given the initially available information. The capacity to select the
optimal combination of computation – sequence plus the action, under the constraint
that the action must be selected by the computation is also rationality.
Perfect rationality constrains an agent's actions to provide the maximum
expectation of success given the information available.

10 What are the functionalities of the agent function? Dec-12


The agent function is a mathematical function which maps each and every possible
percept sequence to a possible action.
The major functionality of the agent function is to generate a possible action for each
and every percept sequence. It helps the agent to get the list of possible actions it can
take. The agent function can be represented in tabular form.

11 Define basic agent program? May-13


The basic agent program is a concrete implementation of the agent function which
runs on the agent architecture. The agent program puts a bound on the length of the percept
sequence and considers only the required percept sequences. The agent program implements
the mapping from percept sequences to actions, which is the external characteristic of the
agent.
The agent program takes as input the current percept from the sensors and returns an action
to the effectors (actuators).

12 what are the four components to define a problem? Define them. May-13
1. initial state: state in which agent starts in.
2. A description of possible actions: description of possible actions which are
available to the agent.
3. The goal test: it is the test that determines whether a given state is goal state.
4. A path cost function: it is the function that assigns a numeric cost (value ) to each
path. The problem-solving agent is expected to choose a cost function that
reflects its own performance measure.
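To make the four components concrete, the following is a minimal Python sketch (the tiny "count up to 3" toy problem and all names are assumptions made purely for illustration):

# The four components of a problem definition (illustrative toy problem:
# move from state 0 to state 3 by +1/-1 steps).
class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state        # 1) initial state
        self.goal_state = goal_state

    def actions(self, state):                     # 2) description of possible actions
        return [+1, -1]

    def result(self, state, action):
        return state + action

    def goal_test(self, state):                   # 3) the goal test
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action, next_state):  # 4) path cost function
        return cost_so_far + 1

p = Problem(0, 3)
print(p.goal_test(p.result(2, +1)))   # -> True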

1. Explain properties of environment. Dec -2009

Properties of Environment
The environment has multifold properties −

1. Fully observable vs Partially Observable


2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible

1. Fully observable vs Partially Observable:

o If an agent sensor can sense or access the complete state of an environment at each
point of time then it is a fully observable environment, else it is partially observable.
o A fully observable environment is easy as there is no need to maintain an internal state
to keep track of the history of the world.
o If an agent has no sensors at all, then the environment is called
unobservable.

2. Deterministic vs Stochastic:

o If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.
o A stochastic environment is random in nature and cannot be determined completely by
an agent.
o In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.

3. Episodic vs Sequential:

o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
o However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.

4. Single-agent vs Multi-agent

o If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
o However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
o The agent design problems in the multi-agent environment are different from single
agent environment.

5. Static vs Dynamic:

o If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
o Static environments are easy to deal because an agent does not need to continue
looking at the world while deciding for an action.
o However for dynamic environment, agents need to keep looking at the world at each
action.
o Taxi driving is an example of a dynamic environment whereas Crossword puzzles are
an example of a static environment.

6. Discrete vs Continuous:

o If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment else it
is called continuous environment.
o A chess game comes under a discrete environment as there is a finite number of moves
that can be performed.
o A self-driving car is an example of a continuous environment.

7. Known vs Unknown

o Known and unknown are not actually features of the environment; they describe the agent's
state of knowledge about how to perform an action.
o In a known environment, the results for all actions are known to the agent. While in
unknown environment, agent needs to learn how it works in order to perform an
action.
o It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.
Components of Planning System
The important components of planning are
Choosing the best rule to apply next, based on the best available heuristic
information.
Applying the chosen rule to compute the new problem state that arises from its
application.
Detecting when a solution has been found.
Detecting dead ends so that they can be abandoned and the system's effort
directed in the correct direction.
Repairing an almost correct solution.

1.Choosing rules to apply:

First isolate a set of differences between the desired goal state and the current state.
Detect rules that are relevant to reduce the differences.
If several rules are found, a variety of heuristic information can be exploited to
choose among them. This technique is based on the means-end analysis method.

2. Applying rules:
Applying the rules is easy.
Each rule specifies the problem state that would result from its application.
We must be able to deal with rules that specify only a small part of the complete problem
state. One way to do this is to describe, for each action, each of the changes it makes to the
state description. A state is described by a set of predicates representing the facts that are
true in that state; each state is represented as a conjunction of predicates. The manipulation of the state
description is done using a resolution theorem prover.
3. Detecting a solution
A planning system has succeeded in finding a solution to a problem when it has
found a sequence of operators that transforms the initial problem state into the goal state. Any
of the corresponding reasoning mechanisms could be used to discover when a solution has
been found.
4. Detecting dead ends
As a planning system is searching for a sequence of operators to solve a particular
problem, it must be able to detect when it is exploring a path that can never lead to a solution.
The same reasoning mechanisms that can be used to detect a solution can often be used for
detecting a dead end.
If the search process is reasoning forward from the initial state, it can prune any path
that leads to a state from which the goal state cannot be reached.
If the search process is reasoning backward from the goal state, it can also terminate
a path either because it is sure that the initial state cannot be reached or little progress is
being made.
In reasoning backward, each goal is decomposed into sub goals. Each of them may
lead to a set of additional sub goals. Sometimes it is easy to detect that there is no way that
all the sub goals in a given set can be satisfied at once. Other paths can be pruned because
they lead nowhere.
5. Repairing an almost correct solution
Solve the sub problems separately and then combine the solution to yield a correct
solution. But it leads to wasted effort.
The other way is to look at the situations that result when the sequence of operations
corresponding to the proposed solution is executed and to compare that situation to the
desired goal. The difference between the initial state and goal state is small. Now the
problem solving can be called again and asked to find a way of eliminating a new difference.
The first solution can then be combined with second one to form a solution to the original
problem.
Wait until as much information as possible is available, and then complete the specification in such a way
that no conflicts arise. This approach is called the least commitment strategy. It can be applied in
a variety of ways.

To defer deciding on the order in which operations can be performed.


Instead of choosing one order in which to satisfy a set of preconditions, we could leave the order
unspecified until the very end. Then we could look at the effects of each of the sub-solutions
to determine the dependencies that exist among them. At that point, an ordering can be
chosen.

Unit-2:

1. How will you measure the problem-solving performance? May 10


Problem solving performance is measured with 4 factors.
1) Completeness – Does the algorithm (solving procedure) surely find a solution if a
solution really exists?
2) Optimality – If multiple solutions exist, does the algorithm return the optimal one among
them?
3) Time requirement.
4) Space requirement.
2. What is the application of BFS? May 10
It is a simple search strategy, which is complete, i.e. it surely gives a solution if one exists. If
the depth of the search tree is small then BFS is the best choice. It is useful in tree search as well as in
graph search.
3. State on which basis search algorithms are chosen? Dec 09
Search algorithms are chosen depending on two components.
1) What the state space looks like – is the state space tree structured or a graph?
The critical factors for the state space are the branching factor and the depth level of that tree or
graph.
2) What is the performance of the search strategy? A complete, optimal search strategy with
better time and space requirement is critical factor in performance of search strategy.
4. Evaluate performance of problem-solving method based on depth-first search algorithm?

Dec 10
DFS algorithm performance measurement is done in four ways –
1) Completeness – It is not complete in general (it can descend forever down an infinite branch); it is complete only in finite state spaces.
2) Optimality – It is not optimal.
3) Time complexity – Its time complexity is O(b^m), where b is the branching factor and m is the maximum depth.
4) Space complexity – Its space complexity is O(bm), since only the current path and the unexpanded siblings along it need to be stored.
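For reference, a minimal Python sketch of depth-first search itself (iterative, using an explicit stack; the adjacency-dictionary graph below is an assumption made for illustration, not part of the original answer):

# Depth-first search using an explicit stack (graph given as an adjacency dict).
def dfs(graph, start, goal):
    stack, visited = [(start, [start])], set()
    while stack:
        node, path = stack.pop()            # LIFO: always expand the deepest node
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            for neighbour in reversed(graph.get(node, [])):
                stack.append((neighbour, path + [neighbour]))
    return None                             # no solution found

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(dfs(graph, "A", "E"))                 # -> ['A', 'C', 'E']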

5. What are the four components to define a problem? Define them? May 13
The four components to define a problem are,
1) Initial state – it is the state in which agent starts in.
2) A description of possible actions – it is the description of possible actions which are
available to the agent.
3) The goal test – it is the test that determines whether a given state is goal (final) state.
4) A path cost function – it is the function that assigns a numeric cost (value) to each path.
The problem-solving agent is expected to choose a cost function that reflects its own
performance measure.

6. State on what basis search algorithms are chosen? Dec 09


Refer Question 3

7. Define the bi-directed search? Dec 13


As the name suggests bi-directional that is two directional searches are made in this searching
technique. One is the forward search which starts from initial state and the other is the
backward search which starts from goal state. The two searches stop when both the search
meet in the middle.

8. List the criteria to measure the performance of search strategies? May 14


Refer Question 3

9. Why problem formulation must follow goal formulation? May 15


Goal based agent is the one which solves the problem. Therefore, while formulating problem
one need to only consider what is the goal to be achieved so that problem formulation is done
accordingly. Hence problem formulation must follow goal formulation.

10. mention how the search strategies are evaluated?


Search strategies are evaluated on following four criteria
1. Completeness: does the search strategy always find a solution, if one exists?
2. Time complexity: how much time does the search strategy take to complete?
3. Space complexity: how much memory does the search strategy consume?
4. Optimality: does the search strategy find the optimal (least-cost) solution?

11. define admissible and consistent heuristics?


Admissible heuristics: a heuristic is admissible if the estimated cost is never more than actual
cost from the current node to the goal node.
Consistent heuristics:
A heuristic is consistent if the cost from the current node to a successor node plus the
estimated cost from the successor node to the goal is less than or equal to estimated cost from
the current node to the goal.
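In standard notation, where h*(n) is the true cost of the cheapest path from node n to a goal and c(n, a, n') is the step cost of reaching successor n' from n by action a, the two conditions can be written as:

Admissible: h(n) ≤ h*(n) for every node n.
Consistent: h(n) ≤ c(n, a, n') + h(n') for every node n and successor n' (with h(goal) = 0, consistency implies admissibility).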

12. What is the use of online search agent in unknown environment? May-15
Ans: Refer Question 6
13. list some of the uninformed search techniques?
The uninformed search strategies are those that do not take into account the
location of the goal. That is these algorithms ignore where they are going until they
find a goal and report success. The three most widely used uninformed search
strategies are
1.depth-first search-it expands the deepest unexpanded node
2.breadth-first search-it expands shallowest unexpanded node
3.lowest -cost-first search (uniform cost search)- it expands the lowest cost node

14. When is the class of problem said to be intractable? Dec– 03


Problems whose algorithms take an unreasonably large amount of resources (time
and/or space) are called intractable.
For example – TSP:
Given a set of N points, one should find the shortest tour which connects all of them.
The algorithm will consider all N! orderings; e.g. for N = 16, 16! is more than 2 × 10^13, which is
impractical for any computer.

15. What is the power of heuristic search? or Dec-04


Why does one go for heuristics search? Dec-05
Heuristic search uses problem-specific knowledge while searching the state space. This
helps to improve average search performance. It uses evaluation functions which
denote the relative desirability (goodness) of expanding a node. This makes the search
more efficient and faster. One should go for heuristic search because it has the power to
solve large, hard problems in affordable time.

16. What are the advantages of heuristic function? Dec- 09,11


A heuristic function ranks alternative paths in various search algorithms, at each
branching step, based on the available information, so that a better path is chosen.
The main advantage of a heuristic function is that it guides which state to explore
next while searching. It makes use of problem-specific knowledge, such as constraints, to
check the goodness of a state to be explored. This drastically reduces the required
searching time.
17. State the reason when hill climbing often gets stuck? May -10
Local maxima are the states where the hill climbing algorithm is sure to get stuck.
A local maximum is a peak that is higher than each of its neighbour states, but lower
than the global maximum. So we have missed the better state here. All the search
effort turns out to be wasted here. It is like a dead end.
18. When a heuristic function h is said to be admissible? Give an admissible heuristic function for
TSP? Dec-10
An admissible heuristic function is one which never overestimates the cost to
reach the goal state. It means that h(n) never exceeds the true cost of reaching the goal from node n.
The admissible heuristic for TSP is
a. Minimum spanning tree.
b. Minimum assignment problem.
19. What do you mean by local maxima with respect to search technique? May -11
A local maximum is a peak that is higher than each of its neighbour states, but lower
than the global maximum, i.e. a local maximum is a tiny hill on the surface whose
peak is not as high as the main peak (which is the optimal solution). Hill climbing fails
to find the optimum solution when it encounters a local maximum. Any small move from
here makes things worse (temporarily). At a local maximum all the search effort
turns out to be wasted. It is like a dead end.
20. How can we avoid ridge and plateau in hill climbing? Dec-12
Ridge and plateau in hill climbing can be avoided using methods like backtracking,
making big jumps. Backtracking and making big jumps help to avoid plateau,
whereas, application of multiple rules helps to avoid the problem of ridges.
21. What is CSP? May-10
CSPs are problems whose state and goal test conform to a standard structure and a very
simple representation. CSPs are defined using a set of variables and a set of constraints
on those variables. The variables take allowed values from a specified domain.
For example – the graph colouring problem.
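As an illustration of the standard CSP structure (variables, domains, constraints), here is a minimal Python sketch of map colouring by backtracking; the three-region map and its names are assumptions made for the example:

# Minimal CSP sketch: colour three regions so that neighbouring regions differ.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}

def consistent(var, value, assignment):
    # Constraint: a region must not share a colour with an already assigned neighbour.
    return all(assignment.get(n) != value for n in neighbours[var])

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value})
            if result:
                return result
    return None                      # no consistent value here: backtrack

print(backtrack({}))   # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue'}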

22. How can minimax also be extended for game of chance?


Dec-10
In a game of chance, we can add an extra level of chance nodes in the game search tree. These nodes
have successors which are the outcomes of the random element.
The minimax algorithm uses the probability P(di) attached to each chance-node outcome di; the
successor function S(N, di) gives the moves from position N for outcome di, and the value of a
chance node is the probability-weighted average of its successors' values (expectiminimax).

1. Discuss any 2 uninformed search methods with examples. Dec 09,Dec 14,May-13,May-17
Breadth First Search (BFS)
Breadth first search is a general technique for traversing a graph. Breadth first search
may use more memory but will always find the shortest path first. In this type of search the
state space is represented in the form of a tree. The solution is obtained by traversing the
tree. The nodes of the tree represent the start state, various intermediate states
and the final state. In this search a queue data structure is used and the traversal is level by level.
Breadth first search expands nodes in order of their distance from the root. It is a
path-finding algorithm that is capable of always finding a solution if one exists. The solution
found is always the shallowest one, which is optimal when all step costs are equal. This is accomplished in a very memory-intensive
manner. Each node in the search tree is expanded breadth-wise at each level.
Concept:
Step 1: Traverse the root node
Step 2: Traverse all neighbours of root node.
Step 3: Traverse all neighbours of neighbours of the root node.
Step 4: This process will continue until we are getting the goal node.
Algorithm:
Step 1: Place the root node inside the queue.
Step 2: If the queue is empty then stops and return failure.
Step 3: If the FRONT node of the queue is a goal node then stop and return
success.
Step 4: Remove the FRONT node from the queue. Process it and find all its
neighbours that are in the ready state, then place them inside the queue in any
order.
Step 5: Go to Step 3.
Step 6: Exit.
Implementation:
Let us implement the above algorithm of BFS by taking the following suitable
example.

Consider the graph in which we take A as the starting node and F as the goal node.
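A minimal Python sketch of the queue-based procedure above; the adjacency-dictionary graph below is an assumption made for illustration (the original example's figure is not reproduced here):

# Breadth-first search with a FIFO queue (graph given as an adjacency dict).
from collections import deque

def bfs(graph, start, goal):
    queue, visited = deque([(start, [start])]), {start}
    while queue:
        node, path = queue.popleft()        # FRONT of the queue
        if node == goal:
            return path                     # shallowest path to the goal
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None                             # queue empty: failure

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": ["F"], "F": []}
print(bfs(graph, "A", "F"))                 # -> ['A', 'C', 'F']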
4. Explain the nature of heuristics with example. What is the effect of heuristics
accuracy?May-13,May-16

We can also call informed search as Heuristics search. It can be classified as below

Best first search

Branch and Bound Search

A* Search

AO* Search

Hill Climbing

Constraint satisfaction

Means end analysis

A heuristic is a technique which makes our search algorithm more efficient. Some heuristics
help to guide a search process without sacrificing any claim to completeness, and some do sacrifice
it.

A heuristic is problem-specific knowledge that decreases the expected search effort. It is a
technique which sometimes works, but not always. A heuristic search algorithm uses information
about the problem to help direct the path through the search space.

These searches use functions that estimate the cost from the current state to the goal,
presuming that such a function is efficient. A heuristic function is a function that maps from problem
state descriptions to a measure of desirability, usually represented as a number.

The purpose of the heuristic function is to guide the search process in the most profitable
direction by suggesting which path to follow first when more than one is available.

Generally heuristic incorporates domain knowledge to improve efficiency over blind


search.

In AI, heuristic has a general meaning and also a more specialized technical meaning.
Generally the term heuristic is used for any advice that is often effective but is not guaranteed to work in
every case.

For example, in the case of the travelling salesman problem (TSP) we may use the heuristic of
choosing the nearest neighbour. A heuristic is a method that provides a better guess about the correct
choice to make at any junction than would be achieved by random guessing.

This technique is useful in solving tough problems which could not be solved in any other
way, where exact solutions would take practically infinite time to compute.

Classifications of heuristic search.

Best First Search


Best first search is an instance of the graph search algorithm in which a node is selected for
expansion based on an evaluation function f(n). Traditionally, the node with the lowest evaluation is
selected for expansion because the evaluation measures the distance to the goal. Best first search can
be implemented within the general search framework via a priority queue, a data structure that
maintains the fringe in ascending order of f values.

This search algorithm serves as a combination of the depth first and breadth first search algorithms.
Best first search is often referred to as a greedy algorithm because it quickly attacks the
most desirable path as soon as its heuristic weight becomes the most desirable.

Concept:

Step 1: Traverse the root node

Step 2: Traverse any neighbor of the root node, that is maintaining a least distance from the
root node and insert them in ascending order into the queue.

Step 3: Traverse any neighbour of a neighbour of the root node that is maintaining a least
distance from the root node, and insert them in ascending order into the queue.

Step 4: This process will continue until we are getting the goal node

Algorithm:

Step 1: Place the starting node or root node into the queue.

Step 2: If the queue is empty, then stop and return failure.


Step 3: If the first element of the queue is our goal node, then stop and return success.

Step 4: Else, remove the first element from the queue. Expand it and compute the estimated
goal distancefor each child. Place the children in the queue in ascending order to the goal
distance. Step 5: Go to step-3

Implementation:

Step 1: Consider the node A as our root node. So the first element of the queue is A, which is
not our goal node, so remove it from the queue and find its neighbours, which are to be inserted in
ascending order. A
Step 2: The neighbors of A are B and C. They will be inserted into the queue in ascending
order. B C A

Step 3: Now B is on the FRONT end of the queue. So calculate the neighbours of B that are
maintaining a least distance from the root. F E D C B Step 4: Now the node F is on the
FRONT end of the queue. But as it has no further children, so remove it from the queue and
proceed further. E D C B Step 5:Now E is the FRONT end. So the children of E are J and K.
Insert them into the queue in ascending order.K J D C E

Step 6:Now K is on the FRONT end and as it has no further children, so remove it and
proceed further J D C K

Step7:Also, J has no corresponding children. So remove it and proceed further.D C J

Step 8:Now D is on the FRONT end and calculates the children of D and put it into the
queue. I C D

Step9:Now I is the FRONT node and it has no children. So proceed further

after removing this node from the queue. CI

Step 10:Now C is the FRONT node .So calculate the neighbours of C that are to be inserted
in ascending order into the queue.G H C

Step 11:Now remove G from the queue and calculate its neighbour that is to insert in
ascending order into the queue. M L H G

Step12:Now M is the FRONT node of the queue which is our goal node. So stop here and
exit. L H M

Advantage:

It is more efficient than BFS and DFS.

The time complexity of best first search is much less than that of breadth first search.

Best first search allows us to switch between paths, gaining the benefits of both breadth
first and depth first search: depth first is good because a solution can be found
without computing all nodes, and breadth first search is good because it does not get trapped
in dead ends.
Disadvantages:
Sometimes, it covers more distance than necessary.
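A minimal Python sketch of greedy best-first search with a priority queue ordered by f(n) = h(n); the graph and heuristic values below are assumptions made for the example:

# Greedy best-first search: always expand the node with the lowest heuristic value.
import heapq

def best_first_search(graph, h, start, goal):
    frontier = [(h[start], start, [start])]      # priority queue ordered by h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            for neighbour in graph.get(node, []):
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}             # assumed heuristic estimates
print(best_first_search(graph, h, "A", "D"))     # -> ['A', 'C', 'D']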
Branch and Bound Search
Branch and Bound is an algorithmic technique which finds the optimal solution by keeping the
best solution found so far. If a partial solution cannot improve on the best, it is abandoned.

Implementation (the worked example below refers to a weighted graph, with A as the start node and F as the goal node):

Step 1:

Compute the cost of each neighbour of the start node A (the cost of A is 0 as it is the starting node):

B: 0+5 = 5
F: 0+9 = 9
C: 0+7 = 7

Here B (5) is the least distance, so B is placed on top of the stack.

Step 2:

Now the stack will be C F B A. As B is on the top of the stack, calculate the neighbours of B:

0+5+4 = 9
0+5+6 = 11

The least distance is D from B (cost 9). So it will be on the top of the stack.

Step 3:

As the top of the stack is D, calculate the neighbours of D (stack: C F D B):

0+5+4+8 = 17
0+5+4+3 = 12

The least distance is F from D (cost 12), and it is our goal node. So stop and return success.

Step 4:

Hence the searching path will be A - B - D - F.

Advantages:

As it finds the minimum path instead of finding the minimum successor, there should not
be any repetition. The time complexity is less compared to other algorithms.
Disadvantages:

The load balancing aspects of the Branch and Bound algorithm make parallelization difficult.

The Branch and Bound algorithm is limited to small-size networks. For large networks,
where the solution search space grows exponentially with the scale of the network, the approach
becomes relatively prohibitive.
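A minimal Python sketch of the branch-and-bound idea (expand the cheapest partial path first and prune any path whose cost already reaches the best complete solution found so far); the weighted graph below is an assumption made for the example:

# Branch and Bound: keep the best complete solution found so far and prune
# any partial path that cannot improve on it.
import heapq

def branch_and_bound(graph, start, goal):
    best_cost, best_path = float("inf"), None
    frontier = [(0, start, [start])]              # (cost so far, node, path)
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if cost >= best_cost:
            continue                              # bound: cannot improve the best solution
        if node == goal:
            best_cost, best_path = cost, path
            continue
        for neighbour, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, neighbour, path + [neighbour]))
    return best_path, best_cost

graph = {"A": [("B", 5), ("C", 7), ("F", 9)],
         "B": [("D", 4)],
         "C": [("F", 6)],
         "D": [("F", 3)]}
print(branch_and_bound(graph, "A", "F"))          # -> (['A', 'F'], 9)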

A* SEARCH

A* is a cornerstone of many AI systems and has been used since it was developed in
1968 by Peter Hart, Nils Nilsson and Bertram Raphael. It is a combination of Dijkstra's algorithm
and best first search. It can be used to solve many kinds of problems. A* finds the shortest
path through a search space to the goal state using a heuristic function. This technique finds minimal-cost
solutions directed toward a goal state, and is called A* search.

In A*, the * is written for optimality purpose. The A* algorithm also finds the lowest cost
path between the start and goal state, where changing from one state to another requires some cost. A*
requires heuristic function to evaluate the cost of path that passes through the particular state.

This algorithm is complete if the branching factor is finite and every action has a fixed cost. A*
requires a heuristic function to evaluate the cost of the path that passes through a particular state. It can be
defined by the following formula:

f(n) = g(n) + h(n)

where g(n): the actual cost of the path from the start state to the current state,

h(n): the estimated cost of the path from the current state to the goal state (the heuristic),

f(n): the estimated cost of the cheapest path from the start state to the goal state through n.

For the implementation of A* algorithm we will use two arrays namely OPEN and

CLOSE.

OPEN:

An array which contains the nodes that have been generated but have not yet been examined.

CLOSE:

An array which contains the nodes that have been examined.

Algorithm:
Step 1: Place the starting node into OPEN and find its f (n) value.

Step 2: Remove the node from OPEN, having smallest f (n) value. If it is a goal node then
stop and return success.

Step 3: Else remove the node from OPEN, find all its successors.

Step 4: Find the f (n) value of all successors; place them into OPEN and

place the removed node into CLOSE.

Step 5: Go to Step-2.

Step 6: Exit.

Implementation:

The implementation of A* algorithm is 8-puzzle game.

Advantages:

J. It is complete and optimal.

K. It is the best one from other techniques. It is used to solve very


complex problems.

L. It is optimally efficient, i.e. there is no other optimal algorithm


guaranteed to expand fewer nodes than A*.
Disadvantages:

P. This algorithm is complete only if the branching factor is finite and every
action has a fixed cost.

Q. The speed of execution of A* search is highly dependent on the accuracy of
the heuristic function that is used to compute h(n). It has complexity
problems.
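A minimal Python sketch of A* with OPEN as a priority queue ordered by f(n) = g(n) + h(n) and CLOSE as a set; the weighted graph and heuristic values below are assumptions made for the example (the notes use the 8-puzzle, which is not reproduced here):

# A* search: f(n) = g(n) + h(n), OPEN as a priority queue, CLOSE as a set.
import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                           # this node has now been examined
        for neighbour, step in graph.get(node, []):
            if neighbour not in closed:
                g2 = g + step
                heapq.heappush(open_list, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}               # assumed admissible heuristic
print(a_star(graph, h, "S", "G"))                  # -> (['S', 'B', 'G'], 5)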

AO* Search: (And-Or) Graph

The depth first search and breadth first search given earlier for OR trees or graphs can be easily
adapted to AND-OR graphs. The main difference lies in the way termination conditions are determined,
since all goals following an AND node must be realized, whereas a single goal node following an OR
node will do. For this purpose we use the AO* algorithm. Like the A* algorithm, here we will use two
arrays and one heuristic function.

OPEN:

It contains the nodes that have been traversed but have not yet been marked solvable or unsolvable.

CLOSE:

It contains the nodes that have already been processed.

Algorithm:

Step 1: Place the starting node into OPEN.

Step 2: Compute the most promising solution tree say T0.

Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and
place it in CLOSE
Step 4:

As the nodes G and H are unsolvable, so place them into CLOSE directly and process the nodes
D and E.

Step 5:

Now we have been reached at our goal state. So place F into CLOSE.

Step 6:

Success and Exit


Advantages:

R. It is an optimal algorithm.

S. It traverses according to the ordering of nodes. It can be used for both OR
and AND graphs.

Disadvantages:

a. Sometimes, for unsolvable nodes, it cannot find the optimal path.

b. Its complexity is higher than that of other algorithms.

5. Explain the following types of hill climbing search techniques Dec-18,


May 16,Dec 17
1. Simple hill climbing
2. Steepest hill climbing
3. Simulated annealing
The hill climbing search algorithm is simply a loop that continuously moves in the direction of
increasing value. It stops when it reaches a "peak" where no neighbour has a higher value. This
algorithm is considered to be one of the simplest procedures for implementing heuristic search.
The name hill climbing comes from the idea that if you are trying to find the top of a hill, you go up
from wherever you are. This heuristic combines the advantages of both depth first and
breadth first searches into a single method.
The name hill climbing is derived from simulating the situation of a person climbing a hill.
The person will try to move forward in the direction of the top of the hill. His movement stops
when he reaches the peak of the hill and no neighbouring point has a higher value of the heuristic function.
Hill climbing uses knowledge about the local terrain, providing a very useful and effective
heuristic for eliminating much of the unproductive search space. The search is guided by a local
evaluation function. Hill climbing is a variant of generate and test in which feedback from the test
procedure decides in which direction the search should proceed. At each point in the search path, a
successor node that appears to lead toward the goal is selected for exploration.

Algorithm:

Step 1: Evaluate the starting state. If it is a goal state then stop and return success.

Step 2: Else, continue with the starting state, considering it as the current state.

Step 3: Continue step 4 until a solution is found, i.e. until there are no new operators left

to be applied to the current state.

Step 4:

a) Select an operator that has not yet been applied to the current state
and apply it to produce a new state.

b) Evaluate the new state:

i. If it is a goal state, then stop and return
success.

ii. If it is better than the current state, then make it the current
state and proceed further.

iii. If it is not better than the current state, then continue in
the loop until a solution is found.

Step 5: Exit.
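A minimal Python sketch of simple hill climbing on a one-dimensional objective; the objective function and neighbourhood used below are assumptions made for the example:

# Simple hill climbing: move to a better neighbour until no neighbour improves
# the current state (may stop at a local maximum).
def hill_climbing(value, neighbours, start):
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current                  # no better neighbour: peak (possibly local)
        current = best

value = lambda x: -(x - 3) ** 2 + 9         # assumed objective with its peak at x = 3
neighbours = lambda x: [x - 1, x + 1]
print(hill_climbing(value, neighbours, 0))  # -> 3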


12.Explain alpha-beta pruning algorithm and the Minmax game playing algorithm with example?
Dec-03,Dec-04,May-10,May-10, May-09, May 17, May 19,Dec-04, May-10, May-10,Dec-10 ,May 17
ALPHA-BETA pruning is a method that reduces the number of nodes explored
in the Minimax strategy. It reduces the time required for the search: the search is restricted so that
no time is wasted searching moves that are obviously bad for the current player.

The exact implementation of alpha-beta keeps track of the best move for each side as it moves
throughout the tree.

We proceed in the same (preorder) way as for the minimax algorithm. For the MIN nodes, the score
computed starts with +infinity and decreases with time.

For MAX nodes, scores computed starts with –infinity and increase with time.

The efficiency of the Alpha-Beta procedure depends on the order in which successors of a node
are examined. If we were lucky, at a MIN node we would always consider the nodes in order
from low to high score and at a MAX node the nodes in order from high to low score. In general
it can be shown that in the most favorable circumstances the alpha-beta search opens as many
leaves as minimax on a game tree with double its depth.

Alpha-Beta algorithm: The algorithm maintains two values, alpha and beta, which represents
the minimum score that the maximizing player is assured of and the maximum score that the
minimizing player is assured of respectively. Initially alpha is negative infinity and beta is
positive infinity. As the recursion progresses the "window" becomes smaller. When beta becomes
less than alpha, it means that the current position cannot be the result of best play by both players
and hence need not be explored further.

Pseudocode for the alpha-beta algorithm.

evaluate (node, alpha, beta)
    if node is a leaf
        return the heuristic value of node
    if node is a minimizing node
        for each child of node
            beta = min (beta, evaluate (child, alpha, beta))
            if beta <= alpha
                return beta
        return beta
    if node is a maximizing node
        for each child of node
            alpha = max (alpha, evaluate (child, alpha, beta))
            if beta <= alpha
                return alpha
        return alpha

Min Max Algorithm

The Min-Max algorithm is applied in two player games, such as tic-tac-toe, checkers,
chess, go, and so on.

There are two players involved, MAX and MIN. A search tree is generated, depth-first, starting with the
current game position up to the end-game positions. Then, the final game positions are evaluated from
MAX's point of view, as shown in Figure 1. Afterwards, the inner node values of the tree are filled
bottom-up with the evaluated values. The nodes that belong to the MAX player receive the maximum
value of their children. The nodes for the MIN player will select the minimum value of their children.

The values represent how good a game move is. So the MAX player will try to select the move
with the highest value in the end. But the MIN player also has something to say about it, and he will
try to select the moves that are better for him, thus minimizing MAX's outcome.

Algorithm

MinMax (GamePosition game) {
    return MaxMove (game);
}

MaxMove (GamePosition game) {
    if (GameEnded(game)) {
        return EvalGameState(game);
    } else {
        best_move <- {};
        moves <- GenerateMoves(game);
        ForEach moves {
            move <- MinMove(ApplyMove(game));
            if (Value(move) > Value(best_move)) {
                best_move <- move;
            }
        }
        return best_move;
    }
}

MinMove (GamePosition game) {
    best_move <- {};
    moves <- GenerateMoves(game);
    ForEach moves {
        move <- MaxMove(ApplyMove(game));
        if (Value(move) < Value(best_move)) {
            best_move <- move;
        }
    }
    return best_move;
}

Optimization


CC. Limit the depth of the tree.

Speed up the algorithm

This all means that sometimes the search can be aborted because we find out that the
search subtree won‘t lead us to any viable answer. This optimization is known as alpha-beta
cutoffs.

The algorithm has two values passed around the tree nodes: the alpha value,
which holds the best MAX value found, and the beta value, which holds
the best MIN value found.

At MAX level, before evaluating each child path, compare the returned value of the
previous path with the beta value. If the value is greater than beta, abort the search for the current
node.

At MIN level, before evaluating each child path, compare the returned value of the previous
path with the alpha value. If the value is less than alpha, abort the search for the current node.
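A minimal Python sketch of minimax with alpha-beta cut-offs on an explicit game tree; the tiny tree of leaf values below is an assumption made for the example:

# Minimax with alpha-beta pruning over a game tree given as nested lists
# (a leaf is a number, an inner node is a list of children).
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return node                              # leaf: static evaluation value
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                            # beta cut-off
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break                            # alpha cut-off
        return value

tree = [[3, 5], [2, 9], [0, 1]]                  # MAX chooses among three MIN nodes
print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 3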

i) Write an algorithm for converting to clause form. (June 2013)


Convert the following well – formed formula into clause form with sequence of steps,
∀x: [Roman(x) ∧ Know(x, Marcus)] → [hate(x, Caesar) ∨ (∀y: ∃z: hate(y, z) →
Thinkcrazy(x, y))] (MAY/JUNE 2016)
To convert the axioms into conjunctive normal form (clause form)

1. Eliminate implications

2. Move negations down to the atomic formulas

3. Purge existential quantifiers

4. Rename variables if necessary

5. Move the universal quantifiers to the left

6. Move the disjunctions down to the literals

7. Eliminate the conjunctions

8. Rename the variables if necessary

9. Purge the universal quantifiers

Example

Consider the sentence

―All music lovers who enjoy Bach either dislike Wagner or think that anyone who
dislikes any composer is a philistine''.

Use enjoy() for enjoying a composer, and dislike() for disliking one.

∀x [musiclover(x) ∧ enjoy(x, Bach) →

dislike(x, Wagner) ∨ (∀y [∃z [dislike(y, z)] → think-philistine(x, y)])]


Conversion

Step 1: Eliminate the implication symbols.

∀x [(musiclover(x) ∧ enjoy(x, Bach)) → dislike(x, Wagner) ∨ (∀y [∃z [dislike(y, z)] → think-philistine(x, y)])]

We eliminate an implication (→) by substituting its equivalent,

e.g. a → b ≡ ¬a ∨ b.

Here 'a' and 'b' can be any predicate logic expressions.

For the Marcus formula from the question above, this gives:

∀x: ¬[Roman(x) ∧ know(x, Marcus)] ∨ [hate(x, Caesar) ∨ (∀y ¬(∃z

hate(y, z)) ∨ thinkcrazy(x, y))]

2. Reduce the scope of ¬.

To reduce the scope we can use three rules:

¬(¬p) ≡ p

De Morgan's laws: ¬(a ∨ b) ≡ ¬a ∧ ¬b

¬(a ∧ b) ≡ ¬a ∨ ¬b

Applying this reduction to our example yields:

∀x [¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [hate(x, Caesar) ∨ (∀y ∀z ¬hate(y, z) ∨

thinkcrazy(x, y))]

3. Change variable names so that each quantifier has a unique name.

We do this in preparation for the next step. As variables are just dummy names, changing a
variable name does not affect the truth value of the wff. For example,

∀x P(x) ∨ ∀x Q(x) will be converted to ∀x P(x) ∨ ∀y Q(y)


4. Move all the quantifiers to the left of the formula without changing their relative
order.

As we already gave each quantifier a unique name in the previous step, this will not cause a
problem.

Performing this on our example we get:

∀x ∀y ∀z [¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [
hate(x, Caesar) ∨ (¬hate(y, z) ∨ thinkcrazy(x, y))]

5. Eliminate existential quantifiers (∃).

We can eliminate an existential quantifier by simply replacing its variable with a reference
to a function that produces the desired value (a Skolem function).

For example, ∃y President(y) can be transformed into the formula President(S1).

If the existential quantifier occurs within the scope of a universal quantifier, then the value that
satisfies the predicate may depend on the values of the universally quantified variables.

For example, ∀x ∃y fatherof(y, x) will be converted to ∀x fatherof(S2(x), x).

6. Drop the prefix.

As we have eliminated all existential quantifiers, all the variables present in the wff are
universally quantified, hence for simplicity we can just drop the prefix and assume that every
variable is universally quantified. From our example we have:

[¬Roman(x) ∨ ¬know(x, Marcus)] ∨ [hate(x, Caesar) ∨ (¬hate(y, z) ∨ thinkcrazy(x, y))]


7. Convert into a conjunction of disjuncts.

As we have no ANDs, we just use the associative property to get rid of the
brackets. (If there were ANDs, we would need to use the distributive property.)

We have:

¬Roman(x) ∨ ¬know(x, Marcus) ∨ hate(x, Caesar) ∨ ¬hate(y, z) ∨ thinkcrazy(x, y)

8. Separate each conjunct into a new clause.

As we did not have ANDs in our example, this step does nothing here, and the final
output of the conversion is:

¬Roman(x) ∨ ¬know(x, Marcus) ∨ hate(x, Caesar) ∨ ¬hate(y, z) ∨ thinkcrazy(x, y)
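For the purely propositional part of this conversion (eliminating implications, moving negations inward and distributing ∨ over ∧), the result can be checked in Python with SymPy; note that this does not cover the quantifier-specific steps (standardizing variables, Skolemization, dropping prefixes), and the formula below is just an illustrative assumption:

# Propositional CNF conversion with SymPy (quantifier handling is not included).
from sympy import symbols, Implies, And, Or, Not
from sympy.logic.boolalg import to_cnf

P, Q, R = symbols("P Q R")
formula = Implies(And(P, Q), Or(R, Not(P)))   # (P & Q) -> (R | ~P)
print(to_cnf(formula))                        # prints an equivalent clause, e.g. R | ~P | ~Q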

Unit-3:

1. What are the limitations in using propositional logic to represent the knowledge base? May-11
Propositional logic has the following limitations in representing a knowledge base.
i. It has limited expressive power.
ii. It cannot directly represent properties of individuals or relations between
individuals.
iii. Generalizations, patterns and regularities cannot easily be represented.
iv. Many rules (axioms) are required to be written so as to allow inference.

2. Name two standard quantifiers. Dec – 09,May – 13


The two standard quantifiers are the universal quantifier (∀) and the existential quantifier (∃).
They are used for expressing properties of entire collections of objects rather than just a single
object.
E.g. ∀x Happy(x) means that "if the universe of discourse is people, then
everyone is happy".
∃x Happy(x) means that "if the universe of discourse is people, then there
is at least one happy person."

3. What is the purpose of unification? May – 12,Dec – 12,Dec - 09


It is for finding substitutions for inference rules, which can make different logical
expressions look identical. It helps to match two logical expressions. Therefore it is used
in many algorithms in first order logic.

4.What is ontological commitment (what exists in the world) of first order logic? Represent the
sentence “Brothers are siblings” in first order logic? Dec - 10
Ontological commitment means what assumptions language makes about the nature if reality.
Representation of "Brothers are siblings" in first order logic is
∀x, y [Brother(x, y) ⇒ Siblings(x, y)]

5.Differentiate between propositional and first order predicate logic? May – 10 , Dec – 11
Following are the comparative differences between propositional logic and first order logic.
1) Propositional logic is less expressive and does not reflect an individual object's properties
explicitly. First order logic is more expressive and can represent an individual object along
with all its properties.
2) Propositional logic cannot represent relationship among objects whereas first order logic
can represent relationship.
3) Propositional logic does not consider generalization of objects where as first order logic
handles generalization.
4) Propositional logic includes sentence letters (A, B, C) and logical connectives, but
no quantifiers.
First order logic has the same connectives as propositional logic, but it also has variables
for individual objects, quantifiers, symbols for functions and symbols for relations.

6.What factors justify whether the reasoning is to be done in forward or backward reasoning?

Dec - 11
The following factors justify whether the reasoning is to be done in forward or backward
reasoning:
a. Is it possible to begin with the start state or the goal state?
b. Is there a need to justify the reasoning?
c. What kind of events trigger the problem solving?
d. In which direction is the branching factor greatest? One should reason in the
direction with the lower branching factor.

7. Define diagnostic rules with example? May – 12

Diagnostic rules are used in first order logic for inference. Diagnostic rules infer
hidden causes from observed effects. They help to deduce hidden facts about the world. For
example, consider the Wumpus world.
The diagnostic rule for finding a 'pit' is
"If a square is breezy, some adjacent square must contain a pit", which is written as
∀s Breezy(s) ⇒ ∃r Adjacent(r, s) ∧ Pit(r).

8. Represent the following sentence in predicate form:

“All the children like sweets” Dec – 12


∀x ∀y child(x) ∧ sweet(y) → likes(x, y).
5. what is Skolemization? May - 13
It is the process of removing existential quantifiers by elimination. It converts a sentence
with existential quantifiers into a sentence without existential quantifiers such that the first
sentence is satisfiable if and only if the second is.
For eliminating an existential quantifier, each occurrence of its variable is replaced by a
Skolem function whose arguments are the variables of the universal quantifiers whose
scope includes the scope of the existential quantifier.
6. Define the first order definite clause? Dec – 13
1) They are disjunctions of literals of which exactly one is positive.
2) A definite clause is either atomic sentence or is an implication whose antecedents (left
hand side clause) is a conjunction of positive literals and consequent (right hand side
clause) is a single positive literal.
For example:
Princess(x) ∧ Beautiful(x) → Goodhearted(x)
Princess(x)
Beautiful(x)
7. Write the generalized Modus ponens Rule? May – 14
1) Modus ponens:
If the sentences P and P → Q are known to be true, then modus ponens lets us infer Q.
For example, suppose we have the statements "If it is raining then the ground will be wet" and "It is
raining". If P denotes "It is raining" and Q denotes "The ground is wet", then the first expression
becomes P → Q. Because it is indeed now raining (P is true), our set of axioms becomes
P → Q
P
Through an application of modus ponens, the fact that "The ground is wet" (Q) may be added
to the set of true expressions.
2) The generalized modus ponens:
For atomic sentences Pi, P'i and q, where there is a substitution θ such that
SUBST(θ, P'i) = SUBST(θ, Pi)
for all i,
P'1, P'2, ..., P'n, (P1 ∧ P2 ∧ ... ∧ Pn ⇒ q)

⊢ SUBST(θ, q)
There are n + 1 premises to this rule: the n atomic sentences P'i and the one implication. The
conclusion is the result of applying the substitution θ to the consequent q.
8. Define atomic sentence and complex sentence? Dec – 14
Atomic sentences
1. An atomic sentence is formed from a predicate symbol followed by a parenthesized list of
terms.
For example: Stepsister (Cindrella, Drizella)
2. Atomic sentences can have complex terms as the arguments.
For example: Married (Father (Cindrella), Mother (Drizella))
3. Atomic sentences are also called atomic expressions, atoms or propositions.
For example: Equal (plus (two, three), five) is an atomic sentence.

Complex sentences

i) Atomic sentences can be connected to each other to form complex sentence.


Logical connectives ¬, ∧, ∨, ⇒, ⇔ can be used to connect atomic sentences.
For example:
¬Princess(Drizella) ∧ Princess(Cindrella)
ii) ¬(foo(two, two, plus(two, three))) ∧ (Equal(plus(three, two), five) ∨ true) is a sentence
because all its components are sentences, appropriately connected by logical operators.
iii) Various sentences in first order logic formed using connectives:
1) If S is a sentence, then so is its negation, ¬S.
2) If S1 and S2 are sentences, then so is their conjunction, S1 ∧ S2.
3) If S1 and S2 are sentences, then so is their disjunction, S1 ∨ S2.
4) If S1 and S2 are sentences, then so is their implication, S1 ⇒ S2.
5) If S1 and S2 are sentences, then so is their equivalence, S1 ⇔ S2.

9. What is Unification? Dec - 14


1) It is the process of finding substitutions for lifted inference rules, which can make different
logical expression to look similar (identical).
2) Unification is a procedure for determining substitutions needed to make two first order logic
expressions match.
3) Unification is important component of all first order logic inference algorithms.
4) The unification algorithm takes two sentences and returns a unifier for them, if one exists.
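A minimal Python sketch of the unification idea for simple term structures; the encoding (variables are strings beginning with '?', compound terms are tuples) is an assumption made for illustration, and the occurs check is omitted:

# Unification sketch: variables are strings beginning with '?', constants are
# other strings, and compound terms are tuples like ('Knows', '?x', 'John').
def unify(x, y, theta=None):
    if theta is None:
        theta = {}
    if theta is False:
        return False
    if x == y:
        return theta
    if isinstance(x, str) and x.startswith("?"):
        return unify_var(x, y, theta)
    if isinstance(y, str) and y.startswith("?"):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is False:
                return False
        return theta
    return False                                  # no unifier exists

def unify_var(var, value, theta):
    if var in theta:
        return unify(theta[var], value, theta)
    theta = dict(theta)
    theta[var] = value                            # extend the substitution
    return theta

print(unify(("Knows", "John", "?x"), ("Knows", "?y", "Mary")))
# -> {'?y': 'John', '?x': 'Mary'}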
10. Differentiate forward chaining and backward chaining? May – 15
Forward chaining is data driven.
It is automatic, unconscious processing.
Ex. – object recognition, routine decisions.
It may do lots of work that is irrelevant to the goal.
Backward chaining is goal driven.
It is appropriate for problem solving.
Ex. – Where are my keys? How do I get into a Ph.D. programme?
The complexity of backward chaining can be much less than linear in the size of the knowledge
base.
11. Define metarules? May - 17

The rules that determine the conflict resolution strategy are called meta rules. Meta rules
define knowledge about how the system will work. For example, meta rules may define that
knowledge from expert1 is to be trusted more than knowledge from expert 2. Meta rules are
treated by the system like normal rules, but they are given higher priority.

Convert the following into Horn clauses. Dec-17

∀x ∀y: cat(x) ∧ fish(y) → likes_to_eat(x, y)

The Horn clause is as follows:

¬cat(x) ∨ ¬fish(y) ∨ likes_to_eat(x, y)

12. Explain following term with reference to prolog programming language :clauses

Clauses: clauses are the structural elements of a program. A Prolog programmer develops a
program by writing a collection of clauses in a text file. The programmer then uses the consult
command, specifying the name of the text file, to load the clauses into the Prolog
environment.

The two types of clauses are facts and rules.

Facts – a fact is an atom or structure followed by a full stop. Examples of valid Prolog syntax
defining facts are: cold. male(homer). and father(homer, bart).
Rules: a rule consists of a head and a body. The head and body are separated by a :- and
followed by a full stop. If the body of a clause is true, then the head of the clause is true.
Examples of valid Prolog syntax for defining rules are: bigger(X, Y) :- X > Y. and
parents(F, M, C) :- father(F, C), mother(M, C).

13. explain following term with refernce to prolog programming language : predicates

Each predicate has a name and zero or more arguments. The predicate name is a Prolog atom;
each argument is an arbitrary Prolog term. A predicate with name pred and N arguments is denoted
by pred/N, which is called a predicate indicator. A predicate is defined by a collection of
clauses.
A clause is either a rule or a fact. The clauses that constitute a predicate denote logical
alternatives: if any clause is true, then the whole predicate is true.
14. explain the following term with reference to prolog programming language : domains

Domains: the arguments to the predicates must belong to known Prolog domains. A
domain can be a standard domain, or it can be one you declare in the domains section. Example:

If you declare a predicate my_predicate(symbol, integer) in the predicates section, like this:

predicates:
my_predicate(symbol, integer)
you don't need to declare its argument domains in the domains section, because symbol
and integer are standard domains. But if you declare a predicate
my_predicate(name, number) in the predicates section, like this:

Predicates:

my_predicate(name,number) you will need to declare suitable domains for name and
number.

Assuming you want these to be symbol and integer respectively, the domain declaration
looks like this.

Domains:
Name=symbol

Number = integer

Predicates:

my_predicate(name,number)
15. explain the following term with reference to prolog programming language :goal

A goal is a statement starting with a predicate and probably followed by its arguments. In
a valid goal, the predicate must have appeared in at least one fact or rule in the consulted
program, and the number of arguments in the goal must be the same as that appearing in the
consulted program. Also, all the arguments (if any) are constants.

The purpose of submitting a goal is to find out whether the statement represented by the
goal is true according to the knowledge database (i.e. the facts and rules in the consulted
program). This is similar to proving a hypothesis – the goal being the hypothesis, the
facts being the axioms and the rules being the theorems.

16. Explain the following term with reference to the Prolog programming language: cut

The cut, in Prolog, is a goal, written as !, which always succeeds but cannot be backtracked past. The Prolog cut predicate, or '!', eliminates choices in a Prolog derivation tree. It is used to prevent unwanted backtracking, for example, to prevent extra solutions being found by Prolog.

The cut should be used sparingly. There is a temptation to insert cuts experimentally into code that is not working correctly.
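A classic, minimal illustration of the cut (the predicate name max_of is invented for this sketch):

    % Once X >= Y has succeeded, the cut commits to the first clause,
    % so backtracking cannot later produce a wrong answer from clause 2.
    max_of(X, Y, X) :- X >= Y, !.
    max_of(_, Y, Y).

With the cut, ?- max_of(5, 3, M). yields only M = 5; without it, backtracking would also (incorrectly) offer M = 3.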

17. Explain the following term with reference to the Prolog programming language: fail

fail is a built-in Prolog predicate with no arguments which, as the name suggests, always fails. It is useful for forcing backtracking and in various other contexts.
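A typical use of fail is the failure-driven loop sketched below, which prints every solution of item/1 (the item facts are invented for this illustration):

    item(apple).
    item(chicken).
    item(peanuts).

    print_all :-
        item(X), write(X), nl,
        fail.            % force backtracking to the next item/1 solution
    print_all.           % succeed once every item has been printed

The query ?- print_all. prints apple, chicken and peanuts on separate lines and then succeeds via the second clause.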



18. Explain the following term with reference to the Prolog programming language: inference engine

Inference engine: Prolog has a built-in backward chaining inference engine which can be used to partially implement some expert systems. Prolog rules are used for the knowledge representation, and the Prolog inference engine is used to derive conclusions. Other portions of the system, such as the user interface, must be coded using Prolog as a programming language. The Prolog inference engine does simple backward chaining: each rule has a goal and a number of sub-goals, and the inference engine either proves or disproves each goal. There is no uncertainty associated with the results.
This rule structure and inference strategy is adequate for many expert system applications. Only the dialogue with the user needs to be improved to create a simple expert system. These features can be used to build a sample application called "birds", which identifies birds.
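To give the flavour of such an application, here is a toy fragment in the same spirit (only an illustrative sketch, not the actual "birds" program; in a real system the observations would come from a user dialogue rather than from stored facts):

    % observations that would normally be asked of the user
    observed(small).
    observed(brown_body).

    % identification rules; Prolog's backward chaining does the inference
    bird(sparrow) :- observed(small), observed(brown_body).
    bird(junco)   :- observed(small), observed(white_tail_edges).

    identify(Bird) :- bird(Bird).

The query ?- identify(B). chains backward through the rules and answers B = sparrow.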

19. Define ontological engineering?

It is the process of representing abstract concepts, such as actions and time, which are related to real world domains. This process is complex and lengthy because real world objects have many different characteristics whose values can differ over time. In such cases ontological engineering generalizes the objects having similar characteristics.

20. Differentiate general purpose ontology from special purpose ontology.

A special purpose ontology considers some basic facts about the world in such a way that they may not be represented in a generalized manner; it provides domain-specific axioms. The general purpose ontology, on the other hand, is applicable to any special purpose domain with the addition of domain-specific axioms; it tries to represent real world abstract concepts in a more generic manner, so as to cover larger domains.
A general purpose ontology unifies and does reasoning for sufficiently large domains and different areas, whereas a special purpose ontology is restricted to a specific problem domain.

1. Write the algorithm for deciding entailment in propositional logic. May 13 Dec 14
REFER Qno 7

2. What is conjunctive normal form of a rule? What is Skolemization? (4) Dec-10


REFER Qno 7
3. Describe in detail the steps involved in the knowledge engineering process? May-10, May-11, May-13, Dec-13
Knowledge Engineering is the process of imitating how a human expert in a specific domain
would act and take decisions. It looks at the metadata (information about a data object that
describes characteristics such as content, quality, and format), structure and processes that are the
basis of how a decision is made or conclusion reached. Knowledge engineering attempts to take
e.g., property “dark red” applies to my car.

6. Make Inferences, draw new conclusions from existing facts.

To satisfy these assumptions about KR, we need a formal notation that allows automated inference and problem solving. One popular choice is the use of logic.

Logic

Logic is concerned with the truth of statements about the world. Generally each
statement is either TRUE or FALSE. Logic includes: Syntax, Semantics and Inference Procedure.
1. Syntax:

Specifies the symbols in the language and how they can be combined to form sentences. The facts about the world are represented as sentences in logic.

2. Semantic:

Specifies how to assign a truth value to a sentence based on its meaning in the world. It specifies what facts a sentence refers to. A fact is a claim about the world, and it may be TRUE or FALSE.

3. Inference Procedure:

Specifies methods for computing new sentences from the existing sentences.

Logic as a KR Language

Logic is a language for reasoning, a collection of rules used while doing logical reasoning. Logic is studied as a KR language in artificial intelligence. Logic is a formal system in which the formulas or sentences have true or false values. The problem of designing a KR language is a tradeoff between that which is:

a. expressive enough to represent important objects and relations in a problem domain, and
b. efficient enough in reasoning and answering questions about implicit information in a reasonable amount of time.

Logics are of different types: Propositional logic, Predicate logic, Temporal logic, Modal logic, Description logic, etc.

They represent things and allow more or less efficient inference. Propositional logic
and Predicate logic are fundamental to all logic. Propositional Logic is the study of
statements and their connectivity. Predicate Logic is the study of individuals and
their properties.
Logic Representation

Logic can be used to represent simple facts. The facts are claims about the world that are True or False. To build a logic-based representation:

a. User defines a set of primitive symbols and the associated semantics.
b. Logic defines ways of putting symbols together so that the user can define legal sentences in the language that represent TRUE facts.
c. Logic defines ways of inferring new sentences from existing ones.

Sentences that are either TRUE or FALSE, but not both, are called propositions. A declarative sentence expresses a statement with a proposition as content; for example, the declarative "snow is white" expresses that snow is white; further, "snow is white" expresses that snow is white is TRUE.

Resolution and Unification algorithm


In propositional logic it is easy to determine that two literals cannot both be true at the same time: simply look for L and ~L. In predicate logic, this matching process is more complicated, since the bindings of variables must be considered.

For example, man(john) and ~man(john) is a contradiction, while man(john) and ~man(Himalayas) is not. Thus in order to determine contradictions we need a matching procedure that compares two literals and discovers whether there exists a set of substitutions that makes them identical. There is a recursive procedure that does this matching; it is called the Unification algorithm.

In the Unification algorithm each literal is represented as a list, where the first element is the name of a predicate and the remaining elements are arguments. An argument may be a single element (atom) or may be another list. For example, we can have literals such as

(tryassassinate Marcus Caesar)
(tryassassinate Marcus (ruler of Rome))

To unify two literals, first check whether their first elements are the same. If so, proceed; otherwise they cannot be unified. For example the literals

(tryassassinate Marcus Caesar)
(hate Marcus Caesar)

cannot be unified. The unification algorithm recursively matches pairs of elements, one pair at a time. The matching rules are:

a. Different constants, functions or predicates cannot match, whereas identical ones can.
b. A variable can match another variable, any constant, or a function or predicate expression, subject to the condition that the function or predicate expression must not contain any instance of the variable being matched (otherwise it will lead to infinite recursion).
c. The substitution must be consistent. Substituting y for x now and then z for x later is inconsistent. (A substitution of y for x is written as y/x.)
The Unification algorithm is listed below as a procedure UNIFY(L1, L2). It returns a list representing the composition of the substitutions that were performed during the match. An empty list NIL indicates that a match was found without any substitutions. If the list contains the single value F, it indicates that the unification procedure failed.

UNIFY(L1, L2)

1. If L1 or L2 is an atom (i.e. not a list), then:
   a. if L1 and L2 are identical, then return NIL
   b. else if L1 is a variable, then:
      if L1 occurs in L2 then return F, else return (L2/L1)
   c. else if L2 is a variable, then:
      if L2 occurs in L1 then return F, else return (L1/L2)
   d. else return F.
2. If length(L1) is not equal to length(L2), then return F.
3. Set SUBST to NIL.
   (At the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2.)
4. For i = 1 to number of elements in L1 do:
   a. call UNIFY with the i-th element of L1 and the i-th element of L2, putting the result in S
   b. if S = F then return F
   c. if S is not equal to NIL then:
      i. apply S to the remainder of both L1 and L2
      ii. SUBST := APPEND(S, SUBST).
5. Return SUBST.
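Prolog performs exactly this kind of matching automatically through its built-in unification, so the behaviour of the procedure can be observed directly at the prompt (unify_with_occurs_check/2 is the standard predicate that also applies the occurs check described in matching rule b above):

    ?- tryassassinate(marcus, caesar) = tryassassinate(marcus, X).
    X = caesar.        % match, with the substitution caesar/X

    ?- tryassassinate(marcus, caesar) = hate(marcus, caesar).
    false.             % different predicate names cannot match

    ?- unify_with_occurs_check(X, f(X)).
    false.             % X occurs inside f(X), so unification fails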

Resolution yields a complete inference algorithm when coupled with any complete search algorithm. Resolution makes use of the inference rules and performs deductive inference. Resolution uses proof by contradiction. One can perform resolution from a Knowledge Base; a Knowledge Base is a collection of facts, or one can even call it a database of all facts.

Resolution works by using the principle of proof by contradiction. To prove the conclusion we negate the conclusion, and the resolution rule is then applied to the resulting clauses.

Each pair of clauses that contains complementary literals is resolved to produce a new clause, which can be added to the set of facts (if it is not already present). This process continues until one of two things happens: either there are no new clauses that can be added, or an application of the resolution rule derives the empty clause. An empty clause shows that the negation of the conclusion is a complete contradiction; hence the negation of the conclusion is invalid or false, and the original assertion is completely valid or true.
Steps for Resolution

1. Convert the given statements into Predicate/Propositional Logic.
2. Convert these statements into Conjunctive Normal Form (CNF).
3. Negate the conclusion (proof by contradiction).
4. Resolve using a resolution tree (unification).

Steps to Convert to CNF (Conjunctive Normal Form)

Every sentence in Propositional Logic is logically equivalent to a conjunction of disjunctions of literals. A sentence expressed as a conjunction of disjunctions of literals is said to be in Conjunctive Normal Form or CNF.

1. Eliminate implication '→':
   a → b = ~a v b
2. Move negation inwards:
   ~(a ^ b) = ~a v ~b ............ De Morgan's Law
   ~(a v b) = ~a ^ ~b ............ De Morgan's Law
   ~(~a) = a
3. Eliminate the Existential Quantifier '∃':
   To eliminate an independent Existential Quantifier, replace the variable by a Skolem constant. This process is called Skolemization.
   Example: ∃y: President(y)
   Here 'y' is an independent quantifier, so we can replace 'y' by any name (say – George Bush). So ∃y: President(y) becomes President(George Bush).
   To eliminate a dependent Existential Quantifier we replace its variable by a Skolem function that accepts the value of 'x' and returns the corresponding value of 'y'.
   Example: ∀x: ∃y: father_of(x, y)
   Here 'y' is dependent on 'x', so we replace 'y' by S(x), giving ∀x: father_of(x, S(x)).
4. Eliminate the Universal Quantifier '∀':
   Just drop the prefix ∀; the sentence is then in PRENEX NORMAL FORM.
5. Eliminate AND '^':
   a ^ b splits the entire clause into two separate clauses, i.e. a and b.
   (a v b) ^ c splits the entire clause into two separate clauses, a v b and c.
   (a ^ b) v c splits the clause into two clauses, i.e. a v c and b v c.
   To eliminate '^', break the clause into two; if you cannot break the clause, distribute the OR 'v' and then break the clause.

Problem Statement:

1. Ravi likes all kinds of food.
2. Apples and chicken are food.
3. Anything anyone eats and is not killed is food.
4. Ajay eats peanuts and is still alive.
5. Rita eats everything that Ajay eats.

Prove by resolution that Ravi likes peanuts.

Step 1: Convert the given statements into Predicate/Propositional Logic

i.   ∀x: food(x) → likes(Ravi, x)
ii.  food(Apple) ^ food(chicken)
iii. ∀a: ∀b: eats(a, b) ^ ~killed(a) → food(b)
iv.  eats(Ajay, Peanuts) ^ alive(Ajay)
v.   ∀c: eats(Ajay, c) → eats(Rita, c)
vi.  ∀d: alive(d) → ~killed(d)
vii. ∀e: ~killed(e) → alive(e)

Conclusion: likes(Ravi, Peanuts)

Step 2: Convert into CNF

i.    ~food(x) v likes(Ravi, x)
ii.   food(Apple)
iii.  food(chicken)
iv.   ~eats(a, b) v killed(a) v food(b)
v.    eats(Ajay, Peanuts)
vi.   alive(Ajay)
vii.  ~eats(Ajay, c) v eats(Rita, c)
viii. ~alive(d) v ~killed(d)
ix.   killed(e) v alive(e)

Conclusion: likes(Ravi, Peanuts)

Step 3: Negate the conclusion

~likes(Ravi, Peanuts)

Step 4: Resolve using a resolution tree

a. ~likes(Ravi, Peanuts) resolves with clause (i) under the substitution Peanuts/x, giving ~food(Peanuts).
b. ~food(Peanuts) resolves with clause (iv) under Peanuts/b, giving ~eats(a, Peanuts) v killed(a).
c. This resolves with clause (v) under Ajay/a, giving killed(Ajay).
d. killed(Ajay) resolves with clause (viii) under Ajay/d, giving ~alive(Ajay).
e. ~alive(Ajay) resolves with clause (vi), giving the empty clause.

Since the empty clause is derived, the negation of the conclusion is a contradiction, and therefore likes(Ravi, Peanuts) is proved.
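The same knowledge base can also be written directly as Prolog clauses; Prolog's own SLD resolution (a restricted form of the resolution rule above) then derives the conclusion. This is only a sketch: the alive/killed pair is collapsed into alive/1 for brevity, and the names are lower-cased as Prolog requires:

    likes(ravi, X) :- food(X).                       % Ravi likes all kinds of food
    food(apple).
    food(chicken).
    food(Y) :- eats(Someone, Y), alive(Someone).     % what a living person eats is food
    eats(ajay, peanuts).
    alive(ajay).
    eats(rita, X) :- eats(ajay, X).                  % Rita eats everything Ajay eats

The query ?- likes(ravi, peanuts). succeeds: Prolog reduces it to food(peanuts), then to eats(Someone, peanuts), alive(Someone), which the facts about Ajay satisfy.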
5. Discuss forward chaining algorithm?
FORWARD CHAINING
Forward chaining: working from the facts to the conclusion. Sometimes called the data-driven approach.
Steps in FC
•To chain forward, match data in working memory against 'conditions' of rules
in the rule base.
•When one of them fires, this is liable to produce more data.
•So the cycle continues up to the conclusion.
Example
•Here are two rules:
•If corn is grown on poor soil, then it will get blackfly.
•If soil hasn't enough nitrogen, then it is poor soil.
Forward chaining algorithm
Repeat
Collect the rule whose condition matches a fact in WM.

Do actions indicated by the rule.


(add facts to WM or delete facts from WM)
Until the problem is solved or no condition matches
Apply the algorithm on Example 2, extended (adding 2 more rules and 1 fact):
Rule R1 : IF hot AND smoky THEN ADD fire
Rule R2 : IF alarm_beeps THEN ADD smoky
Rule R3 : If fire THEN ADD switch_on_sprinklers
Rule R4 : IF dry THEN ADD switch_on_humidifier
Rule R5 : IF sprinklers_on THEN DELETE dry
Fact F1 : alarm_beeps [Given]
Fact F2 : hot [Given]
Fact F3 : dry [Given]
Now, two rules can fire (R2 and R4):
Rule R4 fires: ADD switch_on_humidifier, i.e. the humidifier is on [from F3: dry]
Rule R2 fires: ADD smoky [from F1: alarm_beeps]
Rule R1 fires: ADD fire [from F2: hot, and smoky]
Rule R3 fires: ADD switch_on_sprinklers [from fire]
Rule R5 fires: DELETE dry, i.e. the humidifier is switched off – a conflict! [from the sprinklers being on]
6. Discuss backward chaining algorithm? May-10, Dec-10, May-15, May-13, Dec-16, May-17, Dec-18
BACKWARD CHAINING
Backward chaining: working from the conclusion to the facts. Sometimes called the goal-driven approach. It starts with something to find out, and looks for rules that will help in answering it (goal driven).
Steps in BC
•To chain backward, match a goal in working memory against 'conclusions' of
rules in the rule-base.
•When one of them fires, this is liable to produce more goals.
•So the cycle continues
Example
•Same rules:
•If corn is grown on poor soil, then it will get blackfly.
•If soil hasn't enough nitrogen, then it is poor soil.
■ Backward chaining algorithm
Prove goal G
If G is in the initial facts , it is proven.
Otherwise, find a rule which can be used to conclude G, and
try to prove each of that rule's conditions.
Encoding of rules
Rule R1 : IF hot AND smoky THEN fire
Rule R2 : IF alarm_beeps THEN smoky
Rule R3 : If fire THEN switch_on_sprinklers
Fact F1 : hot [Given]
Fact F2 : alarm_beeps [Given]
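Because Prolog itself chains backward, this rule base can be run as-is once it is written in Prolog syntax (a direct transcription of R1–R3 and the two given facts):

    hot.                                   % Fact F1
    alarm_beeps.                           % Fact F2

    smoky                :- alarm_beeps.   % Rule R2
    fire                 :- hot, smoky.    % Rule R1
    switch_on_sprinklers :- fire.          % Rule R3

The goal ?- switch_on_sprinklers. is proved by chaining backward: R3 needs fire, R1 needs hot (a given fact) and smoky, and R2 derives smoky from the fact alarm_beeps.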
Types of Rules

Three types of rules are mostly used in rule-based production systems.

1. Knowledge Declarative Rules:
These rules state all the facts and relationships about a problem.
Example:
IF inflation rate declines
THEN the price of gold goes down.
These rules are a part of the knowledge base.

2. Inference Procedural Rules:
These rules advise on how to solve a problem, while certain facts are known.
Example:
IF the data needed is not in the system
THEN request it from the user.
These rules are part of the inference engine.

3. Meta rules:
These are rules for making rules. Meta-rules reason about which rules should be considered for firing.
Example:
IF there are rules which do not mention the current goal in their premise, AND there are rules which do mention the current goal in their premise, THEN the former rules should be used in preference to the latter.
Meta-rules direct reasoning rather than actually performing reasoning. Meta-rules specify which rules should be considered and in which order they should be invoked.

FACTS: They represent the real world information.
Inference Engine

The inference engine uses one of several available forms of inferencing. By inferencing we mean the method used in a knowledge-based system to process the stored knowledge and supplied data to produce correct conclusions.

Example:
How old are you?
Subtract the year you were born in from 2014.
The answer will either be exactly right, or one year short.

Dempster/Shafer theory

1. The Dempster-Shafer theory, also known as the theory of belief functions, is a generalization of the Bayesian theory of subjective probability.
2. The Bayesian theory requires probabilities for each question of interest; belief functions allow us to base degrees of belief for one question on probabilities for a related question.
3. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ from probabilities will depend on how closely the two questions are related.
4. The Dempster-Shafer theory owes its name to work by A. P. Dempster (1968) and Glenn Shafer (1976), but the kind of reasoning the theory uses can be found as far back as the seventeenth century.
5. The theory came to the attention of AI researchers in the early 1980s, when they were trying to adapt probability theory to expert systems.
