Unit I-Ai
Introduction to AI
Artificial Intelligence:
Artificial Intelligence is the ability of a computer to act like a human being.
• Artificial intelligence systems consist of people, procedures, hardware, software, data, and
knowledge needed to develop computer systems and machines that demonstrate the
characteristics of intelligence
Concept of Rationality
A system is rational if it does the “right thing”, given what it knows. Four views of AI:
– Systems that think like humans
– Systems that think rationally
– Systems that act like humans
– Systems that act rationally
Types of AI Tasks
(i) Mundane Tasks
• Perception
• Vision
• Speech
• Natural Language understanding, generation and translation
• Common-sense Reasoning
• Simple reasoning and logical symbol manipulation
• Robot Control
(ii) Formal Tasks
• Games
– Chess
Deep Blue beat Garry Kasparov in 1997
– Backgammon
– Draughts
• Mathematics
– Geometry and Logic
Logic Theorist: a program that proved mathematical theorems; it actually proved several
theorems from classical mathematics textbooks
– Integral Calculus
– Programs such as Mathematica and Mathcad can perform complicated symbolic
integration and differentiation.
(iii) Expert Tasks
• Engineering
– Design
– Fault finding
– Manufacturing
• Planning
• Scientific Analysis
• Medical Diagnosis
• Financial Analysis
• Rule based systems - if (conditions) then action
Agents and environments
• An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators
• Human Sensors:
– eyes, ears, and other sense organs
• Human Actuators:
– hands, legs, mouth, and other body parts
• Robotic Sensors:
– microphones, cameras, and infrared range finders
• Robotic Actuators:
– motors, displays, speakers, etc.
Problem definition
Formal Description of the problem
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which the
problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal states).
4. Specify a set of rules that describe the actions (operations) available.
Problem Formulation
To build a system to solve a problem
1. Define the problem precisely
2. Analyse the problem
3. Isolate and represent the task knowledge that is necessary to solve the problem
4. Choose the best problem-solving technique and apply it to the particular problem.
Example 1: Playing Chess
• To build a program that could “play chess”, we would first have to specify the starting position
of the chess board, the rules that define the legal moves, and the board positions that represent
a win for one side or the other.
• In addition, we must make explicit the previously implicit goal of not only playing a legal
game of chess but also winning the game, if possible.
A typical rule (left side → right side):
White pawn at square(file e, rank 2)
AND square(file e, rank 3) is empty
AND square(file e, rank 4) is empty
→ Move pawn from square(file e, rank 2) to square(file e, rank 4)
Example 2: Water Jug Problem
We are given a 4-gallon jug and a 3-gallon jug; a state is written (x, y), where x is the amount
of water in the 4-gallon jug and y the amount in the 3-gallon jug. Some of the production rules:
8. (x, y) if x+y >= 3 and x > 0 → (x-(3-y), 3)  Pour water from the 4-gallon jug into the
3-gallon jug until the 3-gallon jug is full
9. (x, y) if x+y <= 4 and y > 0 → (x+y, 0)  Pour all the water from the 3-gallon jug into
the 4-gallon jug
10. (x, y) if x+y <= 3 and x > 0 → (0, x+y)  Pour all the water from the 4-gallon jug into
the 3-gallon jug
This requires a control structure that loops through a simple cycle: a rule whose left side
matches the current state is chosen, the change described by its right side is applied, and the
resulting state is checked against the goal state. The desire to find the shortest such sequence
of rules (one solution to the water jug problem) influences the choice of mechanism used to
guide the search for a solution.
Production Systems
A production system consists of:
• A set of rules, each consisting of a left side that determines the applicability of the rule and a
right side that describes the operation to be performed if that rule is applied.
• One or more knowledge/databases that contain whatever information is appropriate for the
particular task. Some parts of the database may be permanent, while other parts of it may
pertain only to the solution of the current problem.
• A control strategy that specifies the order in which the rules will be compared to the database
and a way of resolving the conflicts that arise when several rules match at once.
• A rule applier
To solve a problem:
We must first reduce it to one for which a precise statement can be given. This can be done by
defining the problem’s state space (including the start and goal states) and a set of operators
for moving through that space.
• The problem can then be solved by searching for a path through the space from an initial state
to a goal state.
• The process of solving the problem can usefully be modeled as a production system.
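These components can be sketched for the water jug problem. The rule set below is an assumption based on the standard formulation (fill, empty, and pour actions); the deliberately naive control strategy applies the first matching rule whose result is a new state:

```python
# Production rules for the water jug problem: each rule is a pair
# (condition on state (x, y), action producing the new state).
RULES = [
    (lambda x, y: x < 4,                lambda x, y: (4, y)),            # fill the 4-gallon jug
    (lambda x, y: y < 3,                lambda x, y: (x, 3)),            # fill the 3-gallon jug
    (lambda x, y: x > 0,                lambda x, y: (0, y)),            # empty the 4-gallon jug
    (lambda x, y: y > 0,                lambda x, y: (x, 0)),            # empty the 3-gallon jug
    (lambda x, y: x + y >= 3 and x > 0, lambda x, y: (x - (3 - y), 3)),  # pour 4 into 3 until full
    (lambda x, y: x + y >= 4 and y > 0, lambda x, y: (4, y - (4 - x))),  # pour 3 into 4 until full
    (lambda x, y: x + y <= 4 and y > 0, lambda x, y: (x + y, 0)),        # pour all of 3 into 4
    (lambda x, y: x + y <= 3 and x > 0, lambda x, y: (0, x + y)),        # pour all of 4 into 3
]

def first_applicable(start, goal_test, rules):
    """Naive control strategy: apply the first rule whose left side matches
    and whose result is a state not seen before. The visited set is what
    keeps the search systematic (and guarantees motion)."""
    state, path, seen = start, [start], {start}
    while not goal_test(*state):
        for cond, act in rules:
            if not cond(*state):
                continue
            new = act(*state)
            if new in seen:
                continue
            state = new
            seen.add(new)
            path.append(new)
            break
        else:
            return None  # no rule produces a new state: stuck
    return path

# Reach a state with exactly 2 gallons in the 4-gallon jug.
path = first_applicable((0, 0), lambda x, y: x == 2, RULES)
```

This finds a solution but not necessarily the shortest one, which is exactly the weakness a better control strategy addresses.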
Control Strategies
Control Strategy decides which rule to apply next during the process of searching for a solution to
a problem.
• Requirements for a good Control Strategy
– It should cause motion
In the water jug problem, if we apply the simple control strategy of starting each time from the
top of the rule list and choosing the first applicable rule, we will never move towards a
solution: the same few rules keep being selected, cycling among the same states.
– It should explore the solution space in a systematic manner
If we choose another control strategy, say choosing a rule at random from the applicable rules,
then it certainly causes motion and will eventually lead to a solution. But we may arrive at the
same state several times, because the control strategy is not systematic.
Breadth First Search
Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST
was empty, quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
(First levels of the BFS tree for the water jug problem: the root (0, 0) has children (4, 0) and
(0, 3), and so on level by level.)
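The NODE-LIST algorithm above can be sketched directly, here applied to the water jug problem (the successor function is an assumption that merges the fill, empty and pour rules):

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """NODE-LIST is a FIFO queue; each queue entry records the whole path.
    A visited set prevents re-adding states already generated."""
    node_list = deque([[start]])                  # step 1
    visited = {start}
    while node_list:                              # step 2
        path = node_list.popleft()                # step 2a
        for new_state in successors(path[-1]):    # step 2b(i)
            if goal_test(new_state):
                return path + [new_state]         # step 2b(ii)
            if new_state not in visited:
                visited.add(new_state)
                node_list.append(path + [new_state])  # step 2b(iii): add to end
    return None                                   # NODE-LIST exhausted

def jug_successors(state):
    """All states reachable in one move in the water jug problem."""
    x, y = state
    return {(4, y), (x, 3), (0, y), (x, 0),       # fill / empty a jug
            (min(4, x + y), max(0, x + y - 4)),   # pour 3 into 4
            (max(0, x + y - 3), min(3, x + y))}   # pour 4 into 3

solution = breadth_first_search((0, 0), lambda s: s[0] == 2, jug_successors)
```

Because the queue is processed level by level, the first solution found is a minimal one (six moves for this goal).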
Advantages of BFS
• BFS will not get trapped exploring a blind alley. This contrasts with DFS, which may follow
a single unfruitful path for a very long time, perhaps forever, before the path actually
terminates in a state that has no successors.
• If there is a solution, BFS is guaranteed to find it.
• If there are multiple solutions, then a minimal solution will be found.
Search Algorithms
• Uninformed or blind search strategies use only the information available in the problem
definition
• Informed or heuristic search strategies use additional information. Heuristic tells us
approximately how far the state is from the goal state. Heuristics might underestimate or
overestimate the merit of a state.
Generate and test
Generate-and-test is the simplest of all heuristic search methods.
Algorithm
1. Generate a possible solution. For some problems, this means generating a particular point in
the problem space. For others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the
chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to step 1.
Example - Traveling Salesman Problem (TSP)
• Traveler needs to visit n cities.
• Know the distance between each pair of cities.
• Want to know the shortest route that visits all the cities once.
• TSP - generation of possible solutions is done in lexicographical order of cities:
1. A - B - C - D
2. A - B - D - C
3. A - C - B - D
4. A - C - D - B
5. A - D - C - B
6. A - D - B - C
• n=80 will take millions of years to solve exhaustively!
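Generate-and-test for the TSP can be sketched in a few lines; the four cities and their pairwise distances below are made-up illustrative values:

```python
from itertools import permutations

# Hypothetical symmetric distances between four cities (made-up values).
DIST = {('A', 'B'): 20, ('A', 'C'): 42, ('A', 'D'): 35,
        ('B', 'C'): 30, ('B', 'D'): 34, ('C', 'D'): 12}

def d(u, v):
    return DIST.get((u, v)) or DIST[(v, u)]

def tour_length(tour):
    """Cost of visiting the cities in order and returning to the start."""
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def tsp_generate_and_test(cities):
    """Generate: every ordering of the cities (lexicographic order).
    Test: keep the shortest. There are n! orderings, which is why n = 80
    is hopeless for exhaustive search."""
    best = min(permutations(cities), key=tour_length)
    return best, tour_length(best)

tour, cost = tsp_generate_and_test('ABCD')
```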
Hill Climbing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
- Select and apply a new operator.
- Evaluate the new state:
- If it is a goal state, quit.
- If it is better than the current state, make it the new current state.
Example: 8-puzzle problem
Here h(n) can be the number of misplaced tiles (not including the blank), or the Manhattan
distance heuristic (the sum of each tile’s horizontal and vertical distance from its goal
position); either helps us quickly find a solution to the 8-puzzle.
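Both heuristics are easy to compute. The sketch below assumes boards are flat 9-tuples read row by row, with 0 as the blank and the goal layout shown:

```python
# Two common 8-puzzle heuristics. The goal layout here is an assumption.
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)          # 0 marks the blank

def misplaced_tiles(state):
    """h1(n): number of tiles (not counting the blank) out of place."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def manhattan_distance(state):
    """h2(n): total horizontal plus vertical distance of each tile from its
    goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

start = (1, 2, 3,
         4, 0, 6,
         7, 5, 8)          # tiles 5 and 8 are out of place
```

On this example board both heuristics evaluate to 2; in general the Manhattan distance is at least as large as the misplaced-tiles count and is the more informed of the two.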
Properties of Hill Climbing
• Uses a heuristic that estimates how far away the goal is.
• Is neither optimal nor complete.
• Can be very fast.
Steepest-Ascent Hill Climbing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
- Set SUCC to a state such that any possible successor of the current state will be better than
SUCC (i.e. initialise SUCC to the worst possible state).
- For each operator that applies to the current state, evaluate the new state:
- If it is a goal state, quit.
- If it is better than SUCC, set SUCC to this state.
- If SUCC is better than the current state, set the current state to SUCC.
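The loop above can be condensed into a short sketch; the one-dimensional landscape at the end is a made-up example just to exercise it:

```python
def steepest_ascent(state, successors, value):
    """SUCC starts at the worst possible value and is replaced by the best
    successor found; the climb stops at a local maximum, where no successor
    beats the current state."""
    while True:
        succ, succ_value = None, float('-inf')   # SUCC = worst possible state
        for new_state in successors(state):
            if value(new_state) > succ_value:
                succ, succ_value = new_state, value(new_state)
        if succ is None or succ_value <= value(state):
            return state                          # local maximum (or plateau)
        state = succ

# Toy landscape: climb f(x) = -(x - 7)**2 over the integers, stepping by 1.
f = lambda x: -(x - 7) ** 2
peak = steepest_ascent(0, lambda x: [x - 1, x + 1], f)
```

On this single-peaked landscape the climb always reaches x = 7; on a landscape with several peaks it would stop at whichever local maximum is nearest the start, which is exactly the weakness listed below.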
Disadvantages
• Local maximum
A state that is better than all of its neighbours, but not better than some other states far away.
• Plateau
A flat area of the search space in which all neighbouring states have the same value.
• Ridge
The orientation of the high region, compared to the set of available moves, makes it impossible to
climb up. However, two moves executed serially may increase the height.
Evaluation function
f(n) = g(n) + h(n)
• f(n) = estimated cost of the cheapest solution through n
• g(n) = actual path cost from the start node to node n
• h(n) = estimated cost of the cheapest path from n to a goal node
Algorithm
1. Create a priority queue of search nodes (initially containing just the start state). Priority is
determined by the function f.
2. While queue not empty and goal not found:
(a) Get best state x from the queue.
(b) If x is not goal state:
(i) generate all possible children of x (and save path information with each node).
(ii) Apply f to each new node and add to queue.
(iii) Remove duplicates from queue (using f to pick the best).
Example (route finding on the Romania map): A-S-R-P-B, cost 140 + 80 + 97 + 101 = 418
(Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest).
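The algorithm can be sketched with a heap-based priority queue. The graph fragment and straight-line-distance heuristic below are the usual textbook Romania values (city names abbreviated to their initials):

```python
import heapq

# Fragment of the Romania route-finding map; edge weights are road distances.
GRAPH = {'A': {'S': 140, 'T': 118, 'Z': 75},
         'Z': {'A': 75, 'O': 71},
         'T': {'A': 118, 'L': 111},
         'S': {'A': 140, 'F': 99, 'R': 80, 'O': 151},
         'F': {'S': 99, 'B': 211},
         'R': {'S': 80, 'P': 97, 'C': 146},
         'P': {'R': 97, 'B': 101, 'C': 138},
         'B': {'F': 211, 'P': 101},
         'O': {'Z': 71, 'S': 151},
         'L': {'T': 111},
         'C': {'R': 146, 'P': 138}}
# h(n): straight-line distance to Bucharest.
H = {'A': 366, 'Z': 374, 'T': 329, 'S': 253, 'F': 176,
     'R': 193, 'P': 100, 'B': 0, 'O': 380, 'L': 244, 'C': 160}

def a_star(start, goal):
    """Priority queue ordered by f(n) = g(n) + h(n)."""
    queue = [(H[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while queue:
        f, g, node, path = heapq.heappop(queue)
        if node == goal:
            return path, g
        for nbr, cost in GRAPH[node].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):   # duplicate elimination
                best_g[nbr] = g2
                heapq.heappush(queue, (g2 + H[nbr], g2, nbr, path + [nbr]))
    return None, float('inf')

route, total = a_star('A', 'B')
```

Running it reproduces the route above: A-S-R-P-B with cost 418, rather than the shorter-looking but costlier A-S-F-B (450).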
Performance Analysis
• Time complexity – depends on the heuristic function; exponential in the worst case
• Space complexity – O(b^m), since all generated nodes are kept in memory
• Optimality – yes (on locally finite graphs, with an admissible heuristic)
• Completeness – yes (on locally finite graphs)
Means-Ends Analysis
So far we have seen search strategies that move either in the forward direction or the backward
direction. Means-ends analysis allows both backward and forward searching.
The search process reduces the difference between the current state and the goal state until the
required goal is achieved.
• Solve the major parts of a problem first, then return to the smaller problems when assembling
the final solution – operator sub-goaling
• Example :
– GPS was the first AI program to exploit means-ends analysis.
– STRIPS (a robot planner)
Procedure
1. Until the goal is reached or no more procedures are available:
(a) Describe the current state, the goal state and the differences between the two.
(b) Using the difference between the current state and the goal state, possibly together with
the description of the current state or goal state, select a promising procedure.
(c) Apply the promising procedure and update the current state.
2. If goal is reached then success otherwise failure.
Household robot domain
• Problem: Move a desk with two things on it from location S to location G. Find a sequence
of actions the robot performs to complete the given task.
• Operators are: PUSH, CARRY, WALK, PICKUP, PUTDOWN and PLACE given with
preconditions and results.
(Diagram: PUSH moves the desk from the start location S to the goal location G.)
Algorithm
1. Compare CURRENT and GOAL. If there are no differences between them then return.
2. Otherwise, select the most important difference and reduce it by doing the following until
success or failure is signaled:
(a) Select an as yet untried operator O that is applicable to the current difference. If there are
no such operators, then signal failure.
(b) Attempt to apply O to CURRENT. Generate descriptions of two states:
O-START- a state in which O’s preconditions are specified.
O-RESULT- the state that would result if O were applied in O-START.
(c) If
FIRST-PART ← MEA(CURRENT, O-START)
and
LAST-PART ← MEA(O-RESULT, GOAL)
are successful, then signal success and return the result of concatenating FIRST-PART, O, and
LAST-PART.
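A highly simplified sketch of this recursion, in the spirit of the household-robot domain: states are sets of facts, each operator lists preconditions and added facts. The operator names and facts are illustrative assumptions, and the sketch ignores deleted facts entirely:

```python
# Hypothetical operators: preconditions ('pre') and facts they add ('add').
OPERATORS = {
    'WALK':   {'pre': set(),       'add': {'at-obj'}},       # walk to the object
    'PICKUP': {'pre': {'at-obj'},  'add': {'holding'}},      # pick the object up
    'CARRY':  {'pre': {'holding'}, 'add': {'obj-at-goal'}},  # carry it to the goal
}

def mea(current, goal):
    """Pick one difference, find an operator whose result reduces it, and
    recursively achieve that operator's preconditions (operator sub-goaling)."""
    difference = goal - current
    if not difference:
        return []                                  # no difference: done
    fact = sorted(difference)[0]                   # the difference to reduce
    for name, op in OPERATORS.items():
        if fact in op['add']:
            first_part = mea(current, current | op['pre'])   # CURRENT to O-START
            o_result = current | op['pre'] | op['add']
            last_part = mea(o_result, goal)                  # O-RESULT to GOAL
            if first_part is not None and last_part is not None:
                return first_part + [name] + last_part
    return None                                    # signal failure

plan = mea(set(), {'obj-at-goal'})
```

Starting from an empty state, sub-goaling on CARRY's precondition produces the plan WALK, PICKUP, CARRY.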
OR graph
An OR graph consists entirely of OR nodes; to solve the problem represented by it, you only
need to solve the problem represented by one of its children
(eight-puzzle tree example).
AND GRAPH
An AND graph consists entirely of AND nodes; to solve a problem represented by it, you need
to solve the problems represented by all of its children (Towers of Hanoi example).
AND/OR Graphs
AND-OR graph is useful for certain problems where
• The solution involves decomposing the problem into smaller problems. We then solve these
smaller problems
• An AND/OR graph consists of both AND nodes and OR nodes.
Problem Reduction
Each sub-problem is solved and final solution is obtained by combining solutions of each sub-
problem.
Decomposition generates arcs that we will call AND arcs.
An AND arc may point to any number of successors, all of which must be solved.
Such a structure is called an AND-OR graph rather than simply an AND graph.
To find a solution in an AND-OR graph, we need an algorithm similar to A* with the ability to
handle AND arcs appropriately.
In searching an AND-OR graph, we will also use the value of the heuristic function f for each
node.
Game Playing
Mini-Max Terminology
• backed-up value
• minimax procedure: search down several levels; at the bottom level apply the utility
function, back up values all the way to the root node, and select the move at that node.
Static evaluation function for tic-tac-toe:
e(p) = RCDC - RCDO
where RCDC is the number of rows, columns and diagonals in which the computer could still
win, and RCDO is the number of rows, columns and diagonals in which the opponent could still
win. For terminal positions: “+1” for a win, “-1” for a loss, “0” for a draw.
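The RCDC - RCDO evaluation is easy to compute by enumerating the eight lines of the board. The sketch below assumes a flat 9-cell board read row by row, with '.' marking an empty square:

```python
# A line is still winnable for a player if the other player has no mark in it.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def evaluate(board, computer='X', opponent='O'):
    """e(p) = RCDC - RCDO."""
    rcdc = sum(1 for line in LINES if all(board[i] != opponent for i in line))
    rcdo = sum(1 for line in LINES if all(board[i] != computer for i in line))
    return rcdc - rcdo

# With X in the centre, X can still win all 8 lines but O only the 4 lines
# that avoid the centre, so e(p) = 8 - 4 = 4.
board = ['.', '.', '.',
         '.', 'X', '.',
         '.', '.', '.']
```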
Alpha-Beta Pruning
For the purposes of this minimax walkthrough, the game tree is equivalent to a nested-list
representation of its leaf values.
At the start of the problem, you see only the current state (i.e. the current position of pieces on
the game board). As for upper and lower bounds, all you know is that the node’s value is a
number less than infinity (beta) and greater than negative infinity (alpha). Since these bounds
still contain a valid range, we start the problem by generating the first child state and passing
along the current set of bounds.
We’re still not down to depth 4 (the depth bound in this example), so once again we generate
the first child node and pass along our current alpha and beta values. When we get to the first
node at depth 4, we run our evaluation function on the state and get the value 3.
We pass this value back to the min node above. Since this is a min node, we now know that the
minimax value of this node must be less than or equal to 3; in other words, we change beta to 3.
Next we generate the next child at depth 4, run our evaluation function, and return a value of 17
to the min node above.
Since this is a min node and 17 is greater than 3, this child is ignored. Now we’ve seen all of the
children of this min node, so we return the beta value to the max node above. Since it is a max
node, we now know that its value will be greater than or equal to 3, so we change alpha to 3.
Notice that beta didn’t change. This is because max nodes can only tighten the lower bound.
Also note that while alpha and beta are passed down the tree unchanged, they are not simply
passed back up. Instead, the final value of beta at a min node may change the alpha value of its
parent, and likewise the final value of alpha at a max node may change the beta value of its
parent.
Back at the max node we’re currently evaluating (alpha = 3), we generate its next child, a min
node, and pass alpha = 3 and beta = infinity down to it. Its first child at depth 4 evaluates to 2.
Since this is a min node, we now know that its value will be less than or equal to 2, so we
change beta to 2.
As you can see, there is no longer any overlap between the regions bounded by alpha and beta.
In essence, we’ve discovered that the only way we could find a
solution path at this node is if we found a child node with a value that was both greater than 3 and
less than 2. Since that is impossible, we can stop evaluating the children of this node, and return
the beta value (2) as the value of the node.
Admittedly, we don't know the actual value of the node. There could be a 1 or 0 or -100
somewhere in the other children of this node. But even if there was such a value, searching for it
won’t help us find the optimal solution in the search tree. The 2 alone is enough to make this
subtree fruitless, so we can prune its remaining children and return the value 2.
That’s all there is to alpha-beta pruning!
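The whole walkthrough condenses into a few lines. The nested-list tree below is a hypothetical miniature: its first min node sees 3, then 17 (ignored), and the second min node's first leaf 2 triggers the cut-off, so the 100 leaf is never examined:

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta search over a game tree given as nested lists, where the
    leaves are static-evaluation values."""
    if not isinstance(node, list):            # leaf: apply evaluation function
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)         # max nodes raise the lower bound
            if alpha >= beta:
                break                         # no overlap left: prune
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)               # min nodes lower the upper bound
        if alpha >= beta:
            break                             # no overlap left: prune
    return value

tree = [[3, 17], [2, 100]]                    # MAX root over two MIN children
best = alphabeta(tree, -math.inf, math.inf, True)
```

The root's minimax value is 3: the first min child backs up 3, and the second is cut off as soon as its 2 proves it can never beat that.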