AI Module 1 QB Solutions
AI seeks to understand the computations required from intelligent behavior and to produce computer
systems that exhibit intelligence. Aspects of intelligence studied by AI include perception,
communication using human languages, reasoning, planning, learning and memory.
The following questions are to be considered before we can start the study of specific AI problems
and solutions:
1. What are the underlying assumptions about intelligence?
2. What kinds of techniques will be useful for solving AI problems?
3. At what level can human intelligence be modelled?
4. How will we know when an intelligent program has been built?
2) List and explain the problem characteristics which must be analyzed before deciding
on a proper heuristic search. (Dec-Jan 2018) (Jun-Jul 2018)
Problem characteristics
● Is the problem decomposable into a set of (nearly) independent smaller or easier
subproblems?
● Can solution steps be ignored or at least undone if they prove unwise?
● Is the problem's universe predictable?
● Is a good solution to the problem obvious without comparison to all other possible
solutions (absolute vs relative)?
● Is the desired solution a state of the world or a path to a state?
● Is a large amount of knowledge absolutely required to solve the problem, or is
knowledge important only to constrain the search?
● Can a computer simply given the problem return the solution, or does solving the
problem require interaction between the computer and a person?
3. Describe the importance of defining the problem as a state space search.
Demonstrate the same with respect to the water jug problem. (Jun-Jul 2018)
The state space for this problem can be described as a set of ordered pairs of integers
(x,y) where,
x represents the number of gallons of water in the 4-gallon jug
y represents the number of gallons of water in the 3-gallon jug.
Values of x can be 0,1,2,3, or 4
Values of y can be 0,1,2, or 3.
Start state is (0,0)
Goal state is (2, n) for any value of n.
Production Rules:
1. (x, y | x < 4) → (4, y)  Fill the 4-gallon jug
2. (x, y | y < 3) → (x, 3)  Fill the 3-gallon jug
3. (x, y | x > 0) → (0, y)  Empty the 4-gallon jug
4. (x, y | y > 0) → (x, 0)  Empty the 3-gallon jug
5. (x, y | x + y >= 4 and y > 0) → (4, y - (4 - x))  Pour from the 3-gallon jug into the 4-gallon jug until it is full
6. (x, y | x + y >= 3 and x > 0) → (x - (3 - y), 3)  Pour from the 4-gallon jug into the 3-gallon jug until it is full
7. (x, y | x + y <= 4 and y > 0) → (x + y, 0)  Pour all the water from the 3-gallon jug into the 4-gallon jug
8. (x, y | x + y <= 3 and x > 0) → (0, x + y)  Pour all the water from the 4-gallon jug into the 3-gallon jug
Solution: (0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
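The state space described above can be searched directly. A minimal Python sketch, assuming the standard fill/empty/pour rules (breadth-first search, so the path returned is a shortest one):

```python
from collections import deque

def successors(state):
    x, y = state                       # x: 4-gallon jug, y: 3-gallon jug
    results = set()
    results.add((4, y))                # fill the 4-gallon jug
    results.add((x, 3))                # fill the 3-gallon jug
    results.add((0, y))                # empty the 4-gallon jug
    results.add((x, 0))                # empty the 3-gallon jug
    pour = min(x, 3 - y)               # pour from 4-gallon into 3-gallon
    results.add((x - pour, y + pour))
    pour = min(y, 4 - x)               # pour from 3-gallon into 4-gallon
    results.add((x + pour, y - pour))
    results.discard(state)             # drop no-op moves
    return results

def solve(start=(0, 0)):
    frontier = deque([[start]])        # queue of paths, BFS order
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == 2:           # goal: 2 gallons in the 4-gallon jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Calling `solve()` returns one shortest sequence of (x, y) states from (0, 0) to a state with x = 2.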
4. What are production systems? List and explain the different classes of production
systems. (Dec-Jan 2018)
A production system is a structure for AI programs that facilitates the search process. A production system consists of:
● A set of rules, each consisting of a left side that determines the applicability of the rule
and a right side that describes the operation to be performed if the rule is applied.
● One or more knowledge/databases that contain the appropriate information for the task.
● A control strategy that specifies the order in which the rules will be compared to the
database and a way of resolving the conflicts when several rules match at once.
● A rule applier.
Classes of production systems:
● Monotonic: the application of a rule never prevents the later application of another rule
that could also have been applied when the first rule was selected.
● Non-monotonic: a system that is not monotonic.
● Partially commutative: if a sequence of rules transforms state x into state y, then any
allowable permutation of those rules also transforms x into y.
● Commutative: a system that is both monotonic and partially commutative.
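As an illustration, a toy production system in Python. The rule names, conditions, and actions are invented for the example; the control strategy here is simply "first applicable rule wins":

```python
# Each rule is (name, left side: applicability test, right side: operation).
rules = [
    ("boil", lambda db: "water" in db and "hot" not in db, lambda db: db | {"hot"}),
    ("brew", lambda db: {"hot", "tea_leaves"} <= db,       lambda db: db | {"tea"}),
]

def run(database, goal):
    """Apply rules to the database until the goal fact appears."""
    while goal not in database:
        # Control strategy: collect applicable rules, resolve conflicts by rule order.
        applicable = [(name, act) for name, cond, act in rules if cond(database)]
        if not applicable:
            return None                # no rule matches: failure
        name, act = applicable[0]
        database = act(database)       # the rule applier fires the chosen rule
    return database

print(run({"water", "tea_leaves"}, "tea"))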
5. Explain the requirements of a good control strategy. Give the algorithms for DFS and
BFS and compare them with a suitable example. (Jun-Jul 2018)
Requirements of a good control strategy:
1. A good control strategy causes motion (Change of state from initial to final)
2. A good control strategy must be systematic.
BFS
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty:
A. Remove the first element from NODE-LIST and call it E. If NODE-LIST was
empty, quit.
B. For each way that each rule can match the state described in E do:
1. Apply the rule to generate a new state.
2. If the new state is a goal state, quit and return this state.
3. Otherwise, add the new state to the end of NODE-LIST.
DFS:
1. If the initial state is a goal state quit and return success.
2. Otherwise, do the following until success or failure is signaled:
A. Generate a successor E of the initial state. If there are no more successors, signal
failure.
B. Call DFS with E as the initial state.
C. If success is returned signal success, otherwise continue in this loop.
Advantages:
DFS:
Requires less memory since only the nodes on the current path are stored.
By chance, DFS may find a solution without examining much of the search space at
all.
BFS:
Will not get trapped exploring a blind alley.
BFS is guaranteed to find a solution if one exists.
A minimal (shortest-path) solution is always found first.
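The two algorithms above can be sketched in Python on a small hypothetical graph; the adjacency dict stands in for "each way that each rule can match":

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"],
         "D": [], "E": ["G"], "G": []}

def bfs(start, goal):
    node_list = deque([[start]])           # NODE-LIST holds whole paths here
    while node_list:
        path = node_list.popleft()         # remove the first element, E
        if path[-1] == goal:
            return path
        for succ in graph[path[-1]]:
            node_list.append(path + [succ])  # add new states to the end
    return None

def dfs(state, goal, path=()):
    path = path + (state,)
    if state == goal:                      # initial state is a goal state
        return list(path)
    for succ in graph[state]:              # generate successors one at a time
        result = dfs(succ, goal, path)     # call DFS with succ as initial state
        if result:
            return result                  # success is propagated back up
    return None
```

On this graph both searches find the path A → C → E → G; on deeper or wider graphs their memory use and the solutions they find first diverge as described above.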
6. Explain the steepest-ascent hill climbing search technique with an algorithm. Comment
on its drawbacks and how to overcome them. (Dec-Jan 2018)
Steepest-Ascent Hill climbing : It first examines all the neighbouring nodes and then
selects the node closest to the solution state as next node.
1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighbouring states, but there is another state also present which is higher than the
local maximum.
Solution: Backtracking technique can be a solution of the local maximum in state space
landscape. Create a list of the promising path so that the algorithm can backtrack the search
space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbouring states of
the current state have the same value; because of this, the algorithm cannot find a best
direction to move. A hill-climbing search may get lost in the plateau area.
Solution: Take bigger or smaller steps while searching, or randomly select a state far away
from the current state, so that the algorithm may reach a non-plateau region.
3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher than
its surrounding areas, but itself has a slope, and cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in several directions at once, we
can overcome this problem.
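A minimal steepest-ascent sketch in Python on a toy one-dimensional objective (the function, start point, and step size are assumptions for illustration). All neighbours are examined and the best is taken; the loop stops at the first state no neighbour improves on, which is exactly where a local maximum traps it:

```python
def objective(x):
    return -(x - 3) ** 2                        # a single peak at x = 3

def steepest_ascent(x, step=0.5):
    while True:
        neighbours = [x - step, x + step]
        best = max(neighbours, key=objective)   # examine ALL neighbours first
        if objective(best) <= objective(x):
            return x                            # no neighbour is better: a (local) maximum
        x = best

print(steepest_ascent(0.0))                     # climbs 0.0 → 0.5 → … → 3.0
```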
Generate-And-Test Algorithm
Generate-and-test is a very simple search algorithm that is guaranteed to find a solution if
done systematically, provided a solution exists.
Algorithm: Generate-And-Test
1.Generate a possible solution.
2.Test to see if this is the expected solution.
3.If the solution has been found quit, else go to step 1.
Generate-and-test, like depth-first search, requires that complete solutions be generated for
testing. In its most systematic form, it is simply an exhaustive search of the problem space.
Solutions can also be generated randomly, but then finding a solution is not guaranteed. This
random approach is known as the British Museum algorithm: finding an object in the British
Museum by wandering randomly.
Hill Climbing:
● A variant of generate-and-test.
● In plain generate-and-test, the test function merely accepts or rejects a solution; in hill
climbing, the test function is augmented with a heuristic function that estimates how
close a given state is to a goal state.
● This feedback from the heuristic function is used to help the generator decide which
direction to move in the search space.
● Also known as greedy local search.
Simple Hill climbing: It examines the neighbouring nodes one by one and selects the first
neighbouring node which optimizes the current cost as next node.
Simulated annealing
Algorithm : Simulated Annealing
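The placeholder above can be filled in with a minimal Python sketch: simulated annealing behaves like hill climbing, except that a move to a worse neighbour is accepted with probability exp(ΔE / T), where the temperature T is lowered over time; this lets the search escape local maxima and plateaux. The objective, step size, and cooling schedule here are assumptions for illustration:

```python
import math
import random

def anneal(objective, x, step=0.5, t=10.0, cooling=0.95, t_min=1e-3):
    random.seed(0)                        # deterministic run for illustration
    best = x
    while t > t_min:
        candidate = x + random.choice([-step, step])
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worsenings with prob. exp(delta / t).
        if delta > 0 or random.random() < math.exp(delta / t):
            x = candidate
        if objective(x) > objective(best):
            best = x                      # remember the best state seen so far
        t *= cooling                      # cooling schedule: fewer bad moves over time
    return best

peak = anneal(lambda v: -(v - 3) ** 2, 0.0)   # toy objective with its peak at 3
```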
7. Give the Best-First search algorithm (A*) and explain how it combines the benefits of
both BFS and DFS.
A* is a graph search algorithm that follows best first search
Tries to combine the advantages of DFS and BFS.
Follow a single path at a time, but switch path whenever some competing path looks
more promising than the current one.
At each step the Best-First-search selects the most promising of the nodes generated so
far using an appropriate heuristic function.
OPEN - Nodes that have been generated and have had the heuristic function applied to them,
but which have not been explored yet. OPEN is actually a priority queue.
CLOSED - Nodes that have been already explored.
A* expands the path that currently appears least expensive, using the function:
f(n)=g(n)+h’(n), where
f(n) = total estimated cost of path through node n
g(n) = cost so far to reach node n
h’(n) = estimated cost from n to goal. This is the heuristic part of the cost function, so it is
like a guess.
1. If OPEN is empty, stop and return failure. Else pick the BESTNODE on OPEN
with the lowest f' value and place it on CLOSED.
2. If BESTNODE is a goal state, return success and stop. Else generate the successors of
BESTNODE.
For each SUCCESSOR do the following:
1. Set SUCCESSOR to point back to BESTNODE (the back links will help to recover
the path).
2. Compute g(SUCCESSOR) = g(BESTNODE) + cost of getting from BESTNODE
to SUCCESSOR.
3. If SUCCESSOR is the same as any node on OPEN, call that node OLD and add
OLD to BESTNODE's successors. Compare g(OLD) and g(SUCCESSOR). If g(SUCCESSOR)
is cheaper, reset OLD's parent link to point to BESTNODE and update g(OLD) and
f'(OLD).
4. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on
CLOSED OLD, and if the new path to OLD is cheaper than the earlier one, set the parent
link and the g and f' values appropriately.
5. If SUCCESSOR was not already on OPEN or CLOSED, put it on OPEN and add
it to the list of BESTNODE's successors. Compute f'(SUCCESSOR) =
g(SUCCESSOR) + h'(SUCCESSOR).
Example:
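A compact Python sketch of the algorithm above on a small hypothetical weighted graph. The edge costs and the heuristic h' are assumptions; OPEN is a priority queue keyed on f' = g + h', and CLOSED is a visited set. This simplified version re-inserts nodes into OPEN rather than relinking parent pointers as steps 3-4 describe:

```python
import heapq

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 5)], "G": []}
h = {"S": 7, "A": 6, "B": 4, "G": 0}      # h'(n): estimated cost from n to the goal

def a_star(start, goal):
    open_list = [(h[start], 0, start, [start])]   # OPEN: priority queue of (f', g, node, path)
    closed = set()                                # CLOSED: already-explored nodes
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # BESTNODE: lowest f' on OPEN
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph[node]:            # generate the successors
            heapq.heappush(open_list,
                           (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None                                   # OPEN empty: failure
```

On this graph, `a_star("S", "G")` returns cost 8 via S → A → B → G, even though the direct edge A → G looked tempting to a pure DFS.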
8. What are AND-OR graphs? Explain how the problem reduction (AO*) algorithm uses
AND-OR graphs in its search procedure. (Jun-Jul 2018)
AND-OR graphs are useful for representing the solution of problems that can be solved by
decomposing them into a set of smaller problems, all of which must then be solved.
Some problems are best represented as achieving subgoals, some of which must be achieved
simultaneously and independently (AND arcs).
Until now, we have dealt only with OR options.
AO* Algorithm:
1. Initialise the graph to the start node.
2. Traverse the graph following the current best path, accumulating nodes that have not yet
been expanded or solved.
3. Pick one of these nodes and expand it. If it has no successors, assign FUTILITY as its
value; otherwise compute f' for each of the successors.
4. If f' is 0 for a node, mark that node as SOLVED.
5. Change the value of f' for the newly expanded node and propagate this change back to
its predecessors.
6. Wherever possible use the most promising routes, and if all descendants of a node are
marked SOLVED, mark the parent node as SOLVED.
7. If the start node is SOLVED or its value is greater than FUTILITY, stop; else repeat
from step 2.
Example:
Example Link:http://artificialintelligence-notes.blogspot.com/2010/07/problem-reduction-
with-ao-algorithm.html
9. What is a constraint satisfaction problem? Explain with an algorithm.
The aim is to choose a value for each variable so that the resulting possible world
satisfies the constraints; we want a model of the constraints. A finite CSP has a finite
set of variables and a finite domain for each variable.
Ex: Cryptarithmetic puzzles, labelling problems
Algorithm:
1. Propagate available constraints. Set OPEN to the set of all objects that must have
values assigned to them. Do until an inconsistency is detected or until OPEN is empty.
Select an object OB from OPEN, strengthen as much as possible the set of
constraints that apply to OB.
If this set is different from the set that was assigned the last time OB was
examined, or if this is the first time OB has been examined, then add to OPEN all
objects that share any constraints with OB.
Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report
the solution.
3. If the union of the constraints discovered above defines a contradiction, then return
failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order
to proceed. To do this, loop until a solution is found or all possible solutions have been
eliminated:
Select the object whose value is not yet determined and select a way of
strengthening the constraints on that object.
Recursively invoke constraint satisfaction with the current set of constraints
augmented by the strengthening constraint just selected.
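Step 4's guess-and-recurse idea is the heart of most CSP solvers. A minimal backtracking sketch in Python, using map colouring as a hypothetical example; assigning a value to a variable is the simplest way of "strengthening the constraints" on an object:

```python
# Variables are regions, domains are colours, and the constraint is that
# neighbouring regions must differ.
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
colours = ["red", "green", "blue"]

def consistent(var, value, assignment):
    return all(assignment.get(n) != value for n in neighbours[var])

def solve(assignment):
    unassigned = [v for v in neighbours if v not in assignment]
    if not unassigned:
        return assignment                 # every object has a value: a solution
    var = unassigned[0]
    for value in colours:                 # guess a way of strengthening constraints
        if consistent(var, value, assignment):
            result = solve({**assignment, var: value})   # recursive invocation
            if result is not None:
                return result
    return None                           # contradiction: backtrack

solution = solve({})
```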
10. Solve the following cryptarithmetic problem: SEND + MORE = MONEY. (Dec-Jan
2018)
The solution and other examples are uploaded in the Module 1 PPT.
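A brute-force Python sketch that confirms the solution. It simply generates-and-tests digit assignments; the only deductions used are that S cannot be 0 (no leading zero) and that the carry into the fifth column forces M = 1:

```python
from itertools import permutations

def solve():
    # Letters in SEND + MORE = MONEY: S E N D M O R Y (8 distinct digits).
    for digits in permutations(range(10), 8):
        if digits[0] == 0 or digits[4] != 1:   # S != 0; the carry forces M = 1
            continue
        a = dict(zip("SENDMORY", digits))
        num = lambda w: int("".join(str(a[c]) for c in w))
        if num("SEND") + num("MORE") == num("MONEY"):
            return num("SEND"), num("MORE"), num("MONEY")

print(solve())   # (9567, 1085, 10652): 9567 + 1085 = 10652
```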
11. Explain means-ends analysis with an example.
It is a problem-solving technique used in AI to limit search.
It is a mixture of the two directions (forward and backward).
It solves the major parts of a problem first and then returns to solve the minor parts.
Important Steps:
Detects the difference between the current state and goal state.
Select various Operators that can be applied for each difference.
Apply the operator at each difference which reduces the difference between the current
state and goal state.
Example:
Algorithm:
1. Compare CURRENT to GOAL. If there is no difference between them, return success.
2. Otherwise, select the most important difference and reduce it by doing the following
until success or failure is signalled:
1. Select an as-yet-untried operator O that is applicable to the current difference. If
there is no such operator, signal failure.
2. Attempt to apply O to CURRENT. Generate descriptions of two states: O_START (a
state in which O's preconditions are satisfied) and O_RESULT (the state that would
result if O were applied in O_START).
3. If (FIRST_PART <- MEA(CURRENT, O_START))
and (LAST_PART <- MEA(O_RESULT, GOAL))
are successful, then signal success and return the result of concatenating
FIRST_PART, O and LAST_PART.
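A toy Python sketch of the algorithm above. The operators, their preconditions, and their effects are invented for the example; states are sets of facts, and the difference is simply the set of goal facts not yet true:

```python
# Each operator is (name, preconditions, facts added) — a hypothetical toy domain.
operators = [
    ("walk_to_door", set(),         {"at_door"}),
    ("open_door",    {"at_door"},   {"door_open"}),
    ("walk_outside", {"door_open"}, {"outside"}),
]

def mea(current, goal, depth=6):
    if goal <= current:                       # no difference: trivially done
        return []
    if depth == 0:
        return None
    for name, pre, add in operators:
        if add & (goal - current):            # O reduces the current difference
            first = mea(current, pre, depth - 1)       # FIRST_PART: reach O_START
            if first is None:
                continue
            o_result = current | pre | add             # state after applying O
            last = mea(o_result, goal, depth - 1)      # LAST_PART: O_RESULT to GOAL
            if last is None:
                continue
            return first + [name] + last               # concatenate the plan
    return None

plan = mea(set(), {"outside"})
print(plan)   # ['walk_to_door', 'open_door', 'walk_outside']
```

The recursion mirrors the algorithm: reaching the chosen operator's preconditions is itself solved by means-ends analysis, then the remainder is solved from the operator's result state.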