ARTIFICIAL INTELLIGENCE
VI SEMESTER CSE
UNIT-I
1.1 INTRODUCTION
1.1.1 What is AI?  1.1.2 The Foundations of Artificial Intelligence  1.1.3 The History of Artificial Intelligence  1.1.4 The State of the Art
Introduction to AI
1.1.1 What is artificial intelligence?
Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines, especially intelligent computer programs."

The definitions of AI given in various textbooks fall into four approaches, summarized below:

Systems that think like humans: "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
Systems that act like humans: "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
Systems that think rationally: "The study of mental faculties through the use of computer models." (Charniak and McDermott, 1985)
Systems that act rationally: "Computational intelligence is the study of the design of intelligent agents." (Poole et al., 1998)

The four approaches in more detail are as follows:
(a) Acting humanly: The Turing Test approach
To pass the Turing Test, the computer would need natural language processing to communicate, knowledge representation to store what it knows, automated reasoning to draw conclusions, and machine learning to adapt to new circumstances and to detect and extrapolate patterns. To pass the complete Turing Test, the computer will also need computer vision to perceive objects, and robotics to manipulate objects and move about.

(b) Thinking humanly: The cognitive modeling approach
We need to get inside the actual workings of the human mind: through introspection, trying to capture our own thoughts as they go by, and through psychological experiments. Allen Newell and Herbert Simon, who developed GPS, the General Problem Solver, tried to compare the reasoning steps of their program to traces of human subjects solving the same problems. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.
Brains and digital computers perform quite different tasks and have different properties. Table 1.1 shows that there are 10,000 times more neurons in the typical human brain than there are gates in the CPU of a typical high-end computer. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020.

Psychology (1879-present)
The origins of scientific psychology are traced back to the work of the German physiologist Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920). In 1879, Wundt opened the first laboratory of experimental psychology at the University of Leipzig. In the US, the development of computer modeling led to the creation of the field of cognitive science. The field can be said to have started at a workshop in September 1956 at MIT.
Linguistics (1957-present)
Modern linguistics and AI, then, were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.
General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon and Allen Newell to build a universal problem-solver machine. The order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. Thus, GPS was probably the first program to embody the "thinking humanly" approach. At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky.
Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). In 1963, McCarthy started the AI lab at Stanford.
Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests, such as the one in Figure 1.1
Figure 1.1 The Tom Evans ANALOGY program could solve geometric analogy problems as shown.
"It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied." (Herbert Simon, 1957)

Knowledge-based systems: The key to power? (1969-1979)
Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the computer software expert system that it produced. Its primary aim was to help organic chemists identify unknown organic molecules by analyzing their mass spectra and using knowledge of chemistry. It was done at Stanford University by Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg, and Carl Djerassi.
Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of memory.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in Figure 1.2.
o A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth, and other body parts for actuators.
o A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
o A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
Figure 1.2 Agents interact with environments through sensors and actuators.

Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept sequence
An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action.
Agent program
Internally, the agent function for an artificial agent is implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.
To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Figure 1.3. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 1.4.
Percept sequence                      Action
[A, Clean]                            Right
[A, Dirty]                            Suck
[B, Clean]                            Left
[B, Dirty]                            Suck
[A, Clean], [A, Clean]                Right
[A, Clean], [A, Dirty]                Suck

Figure 1.4 Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Figure 1.3.
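This tabulation maps percept sequences to actions, so it can be rendered directly as a dictionary lookup. The sketch below is illustrative rather than from the notes: it assumes percepts are (location, status) tuples, and the "NoOp" default for sequences outside the partial table is an assumption.

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the percept sequence to date

def table_driven_agent(percept):
    # Append the new percept and look up the whole sequence in the table.
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))  # -> Right
print(table_driven_agent(("A", "Dirty")))  # -> Suck

Note that the table is keyed by the entire history, so it grows without bound; this previews the drawbacks of table-driven agents discussed later.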
Rational Agent
A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly. Obviously, doing the right thing is better than doing the wrong thing. The right action is the one that will cause the agent to be most successful.

Performance measures
A performance measure embodies the criterion for success of an agent's behavior. When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well.

Rationality
What is rational at any given time depends on four things:
o The performance measure that defines the criterion of success.
o The agent's prior knowledge of the environment.
o The actions that the agent can perform.
o The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Omniscience, learning, and autonomy
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Doing actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality. Our definition requires a rational agent not only to gather information, but also to learn as much as possible from what it perceives. To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
Task environments
We must think about task environments, which are essentially the "problems" to which rational agents are the "solutions."
Specifying the task environment
The rationality of the simple vacuum-cleaner agent needs specification of
o the performance measure
o the environment
o the agent's actuators and sensors.
PEAS
All these are grouped together under the heading of the task environment. We call this the PEAS (Performance, Environment, Actuators, Sensors) description. In designing an agent, the first step must always be to specify the task environment as fully as possible.

Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer

Figure 1.5 PEAS description of the task environment for an automated taxi.
Episodic vs. sequential. In an episodic environment, each decision is unaffected by previous ones. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions. In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential.

Discrete vs. continuous. The discrete/continuous distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, a discrete-state environment such as a chess game has a finite number of distinct states. Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.).

Single agent vs. multiagent. An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. As one might expect, the hardest case is partially observable, stochastic, sequential, dynamic, continuous, and multiagent.
Figure 1.7 lists the properties of a number of familiar environments.
Agent programs
The agent programs all have the same skeleton: they take the current percept as input from the sensors and return an action to the actuators. Notice the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history. The agent program takes just the current percept as input because nothing more is available from the environment; if the agent's actions depend on the entire percept sequence, the agent will have to remember the percepts.
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequence
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
Figure 1.8 The TABLE-DRIVEN-AGENT program is invoked for each new percept and returns an action each time.

Drawbacks: a table lookup of percept-action pairs defining all possible condition-action rules necessary to interact in an environment suffers from these problems:
o Too big to generate and to store (chess has about 10^120 states, for example)
o No knowledge of non-perceptual parts of the current state
o Not adaptive to changes in the environment; the entire table must be updated if changes occur
o Looping: can't make actions conditional
o Takes a long time to build the table
o No autonomy
o Even with learning, it would take a long time to learn the table entries
Some Agent Types
o Table-driven agents use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
o Simple reflex agents are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.
o Agents with memory have internal state, which is used to keep track of past states of the world.
o Agents with goals are agents that, in addition to state information, have goal information that describes desirable situations. Agents of this kind take future events into consideration.
o Utility-based agents base their decisions on classic axiomatic utility theory in order to act rationally.

Simple Reflex Agent
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent whose agent function is tabulated in Figure 1.4 is a simple reflex agent, because its decision is based only on the current location and on whether it contains dirt.
o Select action on the basis of only the current percept, e.g. the vacuum agent.
o Large reduction in possible percept/action situations.
o Implemented through condition-action rules, e.g.: if dirty then suck.
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
Figure 1.10 A simple reflex agent. It acts according to a rule whose condition matches the current state, as defined by the percept.
Figure 1.11 The agent program for a simple reflex agent in the two-state vacuum environment. This program implements the agent function tabulated in Figure 1.4.
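As a concrete illustration, here is a hedged Python rendering of the SIMPLE-REFLEX-AGENT skeleton above. The rule representation (a condition predicate paired with an action) and the dictionary-valued state are assumptions, since INTERPRET-INPUT and RULE-MATCH are left abstract in the pseudocode.

def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)      # INTERPRET-INPUT
        for condition, action in rules:       # RULE-MATCH: first rule that fires
            if condition(state):
                return action                 # RULE-ACTION
        return "NoOp"
    return agent

# Condition-action rules for the two-state vacuum world of Figure 1.4:
rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
agent = make_simple_reflex_agent(
    rules, lambda p: {"location": p[0], "status": p[1]})
print(agent(("A", "Dirty")))  # -> Suck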
Characteristics
o Only works if the environment is fully observable.
o Lacking history, it can easily get stuck in infinite loops.
o One solution is to randomize its actions.

Model-based reflex agents
The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program. First, we need some information about how the world evolves independently of the agent; for example, that an overtaking car generally will be closer behind than it was a moment ago. Second, we need some information about how the agent's own actions affect the world; for example, that when the agent turns the steering wheel clockwise, the car turns to the right, or that after driving for five minutes northbound on the freeway one is usually about five miles north of where one was five minutes ago. This knowledge about "how the world works", whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world. An agent that uses such a model is called a model-based agent.
Figure 1.12 A model-based reflex agent.

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: rules, a set of condition-action rules
          state, a description of the current world state
          action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action

Figure 1.13 A model-based reflex agent. It keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.
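A minimal Python sketch of the same structure, assuming the world model is supplied as a plain function update_state(state, action, percept); everything else mirrors the pseudocode in Figure 1.13.

def make_model_based_agent(rules, update_state, initial_state):
    state, last_action = initial_state, None

    def agent(percept):
        nonlocal state, last_action
        # UPDATE-STATE folds the percept and the last action into the model.
        state = update_state(state, last_action, percept)
        for condition, action in rules:       # RULE-MATCH
            if condition(state):
                last_action = action          # remember the most recent action
                return action
        last_action = "NoOp"
        return last_action

    return agent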
Goal-based agents
Knowing about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable, for example, being at the passenger's destination. The agent program can combine this with information about the results of possible actions (the same information as was used to update internal state in the reflex agent) in order to choose actions that achieve the goal. Figure 1.14 shows the goal-based agent's structure.
Utility-based agents
Goals alone are not really enough to generate high-quality behavior in most environments. For example, there are many action sequences that will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved. Because "happy" does not sound very scientific, the customary terminology is to say that if one world state is preferred to another, then it has higher utility for the agent.
Figure 1.15 A model-based, utility-based agent. It uses a model of the world, along with a utility function that measures its preferences among states of the world. Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.
Certain goals can be reached in different ways; some ways are better, i.e., have a higher utility. A utility function maps a state (or a sequence of states) onto a real number. Utilities improve on goals in two ways: selecting between conflicting goals, and selecting appropriately among several goals based on the likelihood of success. A sketch of this expected-utility choice follows.
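A minimal sketch of the expected-utility computation described above, assuming the designer supplies an outcome model outcomes(state, action) returning (probability, next_state) pairs and a utility function over states; neither is specified in the notes.

def expected_utility(action, state, outcomes, utility):
    # Average the utility of each outcome state, weighted by its probability.
    return sum(p * utility(s2) for p, s2 in outcomes(state, action))

def choose_action(state, actions, outcomes, utility):
    # Pick the action with the best expected utility.
    return max(actions,
               key=lambda a: expected_utility(a, state, outcomes, utility))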
Figure 1.16 A general model of learning agents.

All agents can improve their performance through learning. A learning agent can be divided into four conceptual components, as shown in Figure 1.16. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences. If the agent is willing to explore a little, it might discover much better actions for the long run. The problem generator's job is to suggest these exploratory actions. This is what scientists do when they carry out experiments.
o An ideal agent always chooses the action which maximizes its expected performance, given its percept sequence so far.
o An agent program maps from percept to action and updates internal state.
o Reflex agents respond immediately to percepts: simple reflex agents and model-based reflex agents.
o Goal-based agents act in order to achieve their goal(s).
o Utility-based agents maximize their own utility function.
o All agents can improve their performance through learning.
What is Search?
Search is the systematic examination of states to find a path from the start/root state to the goal state. The set of possible states, together with the operators defining their connectivity, constitutes the search space. The output of a search algorithm is a solution, that is, a path from the initial state to a state that satisfies the goal test.
Problem-solving agents
A problem-solving agent is a goal-based agent. It decides what to do by finding sequences of actions that lead to desirable states. The agent can adopt a goal and aim at satisfying it. To illustrate the agent's behavior, let us take an example where our agent is in the city of Arad, which is in Romania. The agent has to adopt a goal of getting to Bucharest. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. The agent's task is to find out which sequence of actions will get it to a goal state. Problem formulation is the process of deciding what actions and states to consider, given a goal.
Problem formulation
A problem is defined by four items:
o initial state, e.g., "at Arad"
o successor function S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad -> Zerind, Zerind>, ...}
o goal test, which can be explicit, e.g., x = "at Bucharest", or implicit, e.g., NoDirt(x)
o path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be >= 0

A solution is a sequence of actions leading from the initial state to a goal state.

Figure 1.17 Goal formulation and problem formulation
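The four items translate naturally into a small Python class. This is a sketch, not the notes' code; the abbreviated Romania successor data is illustrative only.

class Problem:
    def __init__(self, initial_state, successors, goal_test, step_cost):
        self.initial_state = initial_state
        self.successors = successors   # state -> list of (action, next_state)
        self.goal_test = goal_test     # state -> bool
        self.step_cost = step_cost     # (state, action, next_state) -> cost >= 0

romania = Problem(
    initial_state="Arad",
    successors=lambda s: {"Arad": [("Arad->Zerind", "Zerind"),
                                   ("Arad->Sibiu", "Sibiu"),
                                   ("Arad->Timisoara", "Timisoara")]}.get(s, []),
    goal_test=lambda s: s == "Bucharest",
    step_cost=lambda s, a, s2: 1,   # e.g., number of actions executed
)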
Search
An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence. The process of looking for a sequence of actions from the current state to the goal state is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the execution phase consists of carrying out the recommended actions. Figure 1.18 shows a simple "formulate, search, execute" design for the agent. Once a solution has been executed, the agent will formulate a new goal.

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  inputs: percept, a percept
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation
  state ← UPDATE-STATE(state, percept)
  if seq is empty then do
      goal ← FORMULATE-GOAL(state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
  action ← FIRST(seq); seq ← REST(seq)
  return action

Figure 1.18 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time.

The agent design assumes the environment is:
o Static: the entire process is carried out without paying attention to changes that might be occurring in the environment.
o Observable: the initial state is known, and the agent's sensors detect all aspects that are relevant to the choice of action.
o Discrete: with respect to the state of the environment and the percepts and actions, so that alternate courses of action can be enumerated.
o Deterministic: the next state of the environment is completely determined by the current state and the actions executed by the agent, so solutions to the problem are a single sequence of actions.

The agent carries out its plan with its eyes closed. This is called an open-loop system, because ignoring the percepts breaks the loop between the agent and the environment.
Figure 1.20 The state space for the vacuum world. Arcs denote actions: L = Left, R = Right, S = Suck.
The 8-puzzle
An 8-puzzle consists of a 3x3 board with eight numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space. The object is to reach the goal state shown in Figure 2.4.

Example: The 8-puzzle

The problem formulation is as follows:
o States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
o Initial state: Any state can be designated as the initial state. It can be noted that any given goal can be reached from exactly half of the possible initial states.
o Successor function: This generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down).
o Goal test: This checks whether the state matches the goal configuration shown in Figure 2.4. (Other goal configurations are possible.)
o Path cost: Each step costs 1, so the path cost is the number of steps in the path.
The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI. This general class is known to be NP-complete. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle (on a 4x4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms. The 24-puzzle (on a 5x5 board) has around 10^25 states, and random instances are still quite difficult to solve optimally with current machines and algorithms. A sketch of the successor function appears below.
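This sketch uses one common representation (an assumption, not from the notes): a state is a tuple of nine entries read row by row on the 3x3 board, with 0 for the blank.

def successors(state):
    # Yield (action, next_state) pairs for the legal blank moves.
    i = state.index(0)                # position of the blank
    row, col = divmod(i, 3)
    for action, delta in (("Up", -3), ("Down", 3), ("Left", -1), ("Right", 1)):
        if (action == "Up" and row == 0) or (action == "Down" and row == 2) or \
           (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue                  # move would leave the board
        board = list(state)
        board[i], board[i + delta] = board[i + delta], board[i]
        yield action, tuple(board)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)    # one possible goal configuration
print(list(successors(goal)))         # two legal moves from this state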
8-queens problem
The goal of the 8-queens problem is to place 8 queens on the chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.) Figure 2.5 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left. An incremental formulation involves operators that augment the state description, starting with an empty state. For the 8-queens problem, this means each action adds a queen to the state. A complete-state formulation starts with all 8 queens on the board and moves them around. In either case, the path cost is of no interest because only the final state counts.
Figure 1.22 8-queens problem

The first incremental formulation one might try is the following:
o States: Any arrangement of 0 to 8 queens on the board is a state.
o Initial state: No queens on the board.
o Successor function: Add a queen to any empty square.
o Goal test: 8 queens are on the board, none attacked.

In this formulation, we have 64 x 63 x ... x 57 ≈ 3 x 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:
o States: Arrangements of n queens (0 <= n <= 8), one per column in the leftmost n columns, with no queen attacking another, are states.
o Successor function: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.

This formulation reduces the 8-queens state space from 3 x 10^14 to just 2,057, and solutions are easy to find.
For 100 queens, the initial formulation has roughly 10^400 states, whereas the improved formulation has about 10^52 states. This is a huge reduction, but the improved state space is still too big for the algorithms to handle. A sketch of the improved formulation appears below.
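Here is a hedged sketch of the improved incremental formulation: a state is a tuple of row indices, one per filled column from the left, and the successor function only adds non-attacked queens. The depth-first enumeration at the end is just a quick demonstration that solutions are easy to find in this 2,057-state space.

def attacks(rows, new_row):
    # Would a queen in the next column, at new_row, be attacked?
    col = len(rows)
    return any(r == new_row or abs(r - new_row) == abs(c - col)
               for c, r in enumerate(rows))

def successors(rows):
    # Add a queen to the leftmost empty column, skipping attacked squares.
    return [rows + (r,) for r in range(8) if not attacks(rows, r)]

def goal_test(rows):
    return len(rows) == 8             # 8 queens placed, none attacking

stack = [()]                          # start with the empty board
while stack:
    state = stack.pop()
    if goal_test(state):
        print(state)                  # a tuple of 8 row indices
        break
    stack.extend(successors(state))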
1.3.2.2 REAL-WORLD PROBLEMS
ROUTE-FINDING PROBLEM
The route-finding problem is defined in terms of specified locations and transitions along links between them. Route-finding algorithms are used in a variety of applications, such as routing in computer networks, military operations planning, and airline travel-planning systems.

AIRLINE TRAVEL PROBLEM
The airline travel problem is specified as follows:
o States: Each is represented by a location (e.g., an airport) and the current time.
o Initial state: This is specified by the problem.
o Successor function: This returns the states resulting from taking any scheduled flight (further specified by seat class and location), leaving later than the current time plus the within-airport transit time, from the current airport to another.
o Goal test: Are we at the destination by some prespecified time?
o Path cost: This depends upon the monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

TOURING PROBLEMS
Touring problems are closely related to route-finding problems, but with an important difference. Consider, for example, the problem "Visit every city at least once", as shown in the Romania map. As with route finding, the actions correspond to trips between adjacent cities. The state space, however, is quite different. The initial state would be "In Bucharest; visited {Bucharest}". A typical intermediate state would be "In Vaslui; visited {Bucharest, Urziceni, Vaslui}". The goal test would check whether the agent is in Bucharest and all 20 cities have been visited.

THE TRAVELLING SALESPERSON PROBLEM (TSP)
The TSP is a touring problem in which each city must be visited exactly once. The aim is to find the shortest tour. The problem is known to be NP-hard. Enormous efforts have been expended to improve the capabilities of TSP algorithms. These algorithms are also used in tasks such as planning the movements of automatic circuit-board drills and of stocking machines on shop floors.

VLSI LAYOUT
A VLSI layout problem requires positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield. The layout problem is split into two parts: cell layout and channel routing.

ROBOT NAVIGATION
Robot navigation is a generalization of the route-finding problem. Rather than a discrete set of routes, a robot can move in a continuous space with an infinite set of possible actions and states. For a circular robot moving on a flat surface, the space is essentially two-dimensional.
When the robot has arms and legs or wheels that also must be controlled, the search space becomes multi-dimensional. Advanced techniques are required to make the search space finite.

AUTOMATIC ASSEMBLY SEQUENCING
Examples include the assembly of intricate objects such as electric motors. The aim in assembly problems is to find an order in which to assemble the parts of some object. If the wrong order is chosen, there will be no way to add some part later without undoing some work already done. Another important assembly problem is protein design, in which the goal is to find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.

INTERNET SEARCHING
In recent years there has been increased demand for software robots that perform Internet searching, looking for answers to questions, for related information, or for shopping deals. The searching techniques consider the Internet as a graph of nodes (pages) connected by links.
Figure 1.23 Partial search trees for finding a route from Arad to Bucharest. Nodes that have been expanded are shaded; nodes that have been generated but not yet expanded are outlined in bold; nodes that have not yet been generated are shown in faint dashed lines.

The root of the search tree is a search node corresponding to the initial state, In(Arad). The first step is to test whether this is a goal state. The current state is expanded by applying the successor function to it, thereby generating a new set of states. In this case, we get three new states: In(Sibiu), In(Timisoara), and In(Zerind). Now we must choose which of these three possibilities to consider further. This is the essence of search: following up one option now and putting the others aside for later, in case the first choice does not lead to a solution.
Search strategy. The general tree-search algorithm is described informally in Figure 1.24.
Tree Search
The choice of which state to expand is determined by the search strategy. There is an infinite number of paths in this state space, so the search tree has an infinite number of nodes.

A node is a data structure with five components:
o STATE: the state in the state space to which the node corresponds;
o PARENT-NODE: the node in the search tree that generated this node;
o ACTION: the action that was applied to the parent to generate the node;
o PATH-COST: the cost, denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers; and
o DEPTH: the number of steps along the path from the initial state.

It is important to remember the distinction between nodes and states. A node is a bookkeeping data structure used to represent the search tree. A state corresponds to a configuration of the world.
Figure 1.25 Nodes are the data structures from which the search tree is constructed. Each has a parent and a state. Arrows point from child to parent.
Fringe
The fringe is the collection of nodes that have been generated but not yet expanded. Each element of the fringe is a leaf node, that is, a node with no successors in the tree. The fringe of each tree consists of those nodes with bold outlines. The collection of these nodes is implemented as a queue. The general tree-search algorithm is shown in Figure 1.26.
Figure 1.26 The general tree-search algorithm

The operations specified in Figure 1.26 on a queue are as follows:
o MAKE-QUEUE(element, ...) creates a queue with the given element(s).
o EMPTY?(queue) returns true only if there are no more elements in the queue.
o FIRST(queue) returns the first element of the queue and removes it from the queue.
o INSERT(element, queue) inserts an element into the queue and returns the resulting queue.
o INSERT-ALL(elements, queue) inserts a set of elements into the queue and returns the resulting queue.

MEASURING PROBLEM-SOLVING PERFORMANCE
The output of a problem-solving algorithm is either failure or a solution. (Some algorithms might get stuck in an infinite loop and never return an output.) An algorithm's performance can be measured in four ways:
o Completeness: Is the algorithm guaranteed to find a solution when there is one?
o Optimality: Does the strategy find the optimal solution?
o Time complexity: How long does it take to find a solution?
o Space complexity: How much memory is needed to perform the search?
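A hedged Python rendering of the general tree-search loop, using the Problem sketch from the problem-formulation section and a (state, action-path) pair in place of the full node structure. The queuing discipline (FIFO here) is what distinguishes one uninformed strategy from another.

from collections import deque

def tree_search(problem):
    fringe = deque([(problem.initial_state, [])])   # MAKE-QUEUE(initial node)
    while fringe:                                   # loop while not EMPTY?
        state, path = fringe.popleft()              # FIRST (FIFO -> breadth-first)
        if problem.goal_test(state):
            return path                             # solution: a list of actions
        for action, child in problem.successors(state):   # expand the node
            fringe.append((child, path + [action])) # INSERT-ALL
    return None                                     # failure

Replacing popleft() with pop() turns the fringe into a stack and the search into depth-first search.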
Figure 1.27 Breadth-first search on a simple binary tree. At each stage, the node to be expanded next is indicated by a marker.

Properties of breadth-first search
Figure 1.29 Time and memory requirements for breadth-first search. The numbers shown assume a branching factor of b = 10; 10,000 nodes/second; 1000 bytes/node.

Time complexity for BFS
Assume every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, we would expand all but the last node at level d, generating b^(d+1) - b nodes at level d+1. Then the total number of nodes generated is

b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).

Every node that is generated must remain in memory, because it is either part of the fringe or an ancestor of a fringe node. The space complexity is, therefore, the same as the time complexity.
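A quick numeric check of this formula (an illustration, not from the notes), for the figure's branching factor b = 10 and a goal at depth d = 5:

b, d = 10, 5
generated = sum(b**k for k in range(1, d + 1)) + (b**(d + 1) - b)
print(generated)   # 1111100 nodes, i.e., on the order of b**(d+1)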
1.3.4.3 DEPTH-FIRST-SEARCH
Depth-first search always expands the deepest node in the current fringe of the search tree. The progress of the search is illustrated in Figure 1.31. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the fringe, so then the search "backs up" to the next shallowest node that still has unexplored successors.
Figure 1.31 Depth-first search on a binary tree. Nodes that have been expanded and have no descendants in the fringe can be removed from memory; these are shown in black. Nodes at depth 3 are assumed to have no successors, and M is the only goal node.

This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, also known as a stack. Depth-first search has very modest memory requirements. It needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as its descendants have been fully explored (refer to Figure 1.31). For a state space with branching factor b and maximum depth m, depth-first search requires storage of only bm + 1 nodes.
Using the same assumptions as Figure 1.29, and assuming that nodes at the same depth as the goal node have no successors, we find that depth-first search would require 118 kilobytes instead of 10 petabytes, a factor of 10 billion times less space.

Drawback of depth-first search
The drawback of depth-first search is that it can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root of the search tree. For example, depth-first search will explore the entire left subtree even if node C is a goal node.

BACKTRACKING SEARCH
A variant of depth-first search called backtracking search uses less memory: only one successor is generated at a time rather than all successors. Only O(m) memory is needed rather than O(bm).
1.3.4.4 DEPTH-LIMITED-SEARCH
The problem of unbounded trees can be alleviated by supplying depth-first search with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem. Depth-limited search will be nonoptimal if we choose l > d. Its time complexity is O(b^l) and its space complexity is O(bl). Depth-first search can be viewed as a special case of depth-limited search with l = infinity.

Sometimes, depth limits can be based on knowledge of the problem. For example, on the map of Romania there are 20 cities. Therefore, we know that if there is a solution, it must be of length 19 at the longest, so l = 19 is a possible choice. However, it can be shown that any city can be reached from any other city in at most 9 steps. This number, known as the diameter of the state space, gives us a better depth limit.

Depth-limited search can be implemented as a simple modification to the general tree-search algorithm or to the recursive depth-first-search algorithm. The pseudocode for recursive depth-limited search is shown in Figure 1.32. It can be noted that the algorithm can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit. Depth-limited search = depth-first search with depth limit l; it returns cutoff if any path is cut off by the depth limit.

function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution/fail/cutoff
  return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution/fail/cutoff
  cutoff-occurred? ← false
  if GOAL-TEST(problem, STATE[node]) then return SOLUTION(node)
  else if DEPTH[node] = limit then return cutoff
  else for each successor in EXPAND(node, problem) do
      result ← RECURSIVE-DLS(successor, problem, limit)
      if result = cutoff then cutoff-occurred? ← true
      else if result != failure then return result
  if cutoff-occurred? then return cutoff else return failure

Figure 1.32 Recursive implementation of depth-limited search
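A Python transcription of the recursive pseudocode above, reusing the (state, action-path) convention of the earlier sketches; the sentinel strings for cutoff and failure are an assumption.

CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(problem, limit):
    return recursive_dls(problem.initial_state, [], problem, limit)

def recursive_dls(state, path, problem, limit):
    if problem.goal_test(state):
        return path                    # SOLUTION(node)
    if len(path) == limit:             # DEPTH[node] = limit
        return CUTOFF
    cutoff_occurred = False
    for action, child in problem.successors(state):
        result = recursive_dls(child, path + [action], problem, limit)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return result
    return CUTOFF if cutoff_occurred else FAILURE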
Figure 1.33 The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found or if the depth-limited search returns failure, meaning that no solution exists.
Figure 1.34 Four iterations of iterative deepening search on a binary tree

Iterative deepening search is not as wasteful as it might seem: because most of the nodes of a search tree are at the bottom level, regenerating the upper levels multiple times adds only a small overhead.
Figure 1.36
In general, iterative deepening is the preferred uninformed search method when there is a large search space and the depth of the solution is not known. A sketch appears below.
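A sketch of iterative deepening built directly on the depth_limited_search sketch above: try limits 0, 1, 2, ... until the search returns something other than cutoff.

from itertools import count

def iterative_deepening_search(problem):
    for limit in count():                       # limit = 0, 1, 2, ...
        result = depth_limited_search(problem, limit)
        if result != CUTOFF:
            return result                       # a solution path, or FAILURE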
Figure 1.37 A schematic view of a bidirectional search that is about to succeed, when a branch from the start node meets a branch from the goal node.
Figure 1.38 Evaluation of search strategies. b is the branching factor; d is the depth of the shallowest solution; m is the maximum depth of the search tree; l is the depth limit. Superscript caveats are as follows: (a) complete if b is finite; (b) complete if step costs >= ε for positive ε; (c) optimal if step costs are all identical; (d) if both directions use breadth-first search.
Repeated states can be the source of great inefficiency: identical subtrees will be explored many times!

Figure 1.39
Figure 1.40
Figure 1.41 The general graph-search algorithm. The closed set can be implemented with a hash table to allow efficient checking for repeated states.

To avoid repeated states:
o Do not return to the previous state.
o Do not create paths with cycles.
o Do not generate the same state twice; store states in a hash table.
This uses more memory in order to check for repeated states: algorithms that forget their history are doomed to repeat it. The idea is to maintain a closed list beside the open list (the fringe).

Strategies for avoiding repeated states
We can modify the general TREE-SEARCH algorithm to include a data structure called the closed list, which stores every expanded node. The fringe of unexpanded nodes is called the open list. If the current node matches a node on the closed list, it is discarded instead of being expanded. The new algorithm is called GRAPH-SEARCH and is much more efficient than TREE-SEARCH. Its worst-case time and space requirements may be much smaller than O(b^d). A sketch appears below.
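A hedged sketch of GRAPH-SEARCH: the earlier tree-search loop plus a closed set (a Python set standing in for the hash table), which requires states to be hashable.

from collections import deque

def graph_search(problem):
    closed = set()                                  # states already expanded
    fringe = deque([(problem.initial_state, [])])   # the open list
    while fringe:
        state, path = fringe.popleft()
        if problem.goal_test(state):
            return path
        if state in closed:
            continue                                # repeated state: discard
        closed.add(state)
        for action, child in problem.successors(state):
            fringe.append((child, path + [action]))
    return None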
Answer: [Right, Suck, Left, Suck] coerces the world into state 7 without any sensors. A belief state is the set of states the agent believes it might be in.

Partial knowledge of states and actions:
o Sensorless or conformant problem: the agent may have no idea where it is; the solution (if any) is a sequence.
o Contingency problem: percepts provide new information about the current state; the solution is a tree or policy; search and execution are often interleaved. If the uncertainty is caused by the actions of another agent, it is an adversarial problem.
o Exploration problem: the states and actions of the environment are unknown.
Contingency example: start in {1,3}. Under Murphy's law, Suck can dirty a clean carpet; local sensing reports dirt and location only. Percept [L, Dirty] gives belief state {1,3}; after [Suck], {5,7}; after [Right], {6,8}; after [Suck] in state 6, the belief state is {8} (success), but [Suck] in state 8 fails. Solution? In belief-state terms, no fixed action sequence guarantees a solution, so we relax the requirement: [Suck, Right, if [R, Dirty] then Suck], selecting actions based on contingencies arising during execution.

Time and space complexity are always considered with respect to some measure of the problem difficulty. In theoretical computer science, the typical measure is the size of the state space. In AI, where the graph is represented implicitly by the initial state and the successor function, the complexity is expressed in terms of three quantities:
o b, the branching factor, or maximum number of successors of any node;
o d, the depth of the shallowest goal node; and
o m, the maximum length of any path in the state space.

Search cost typically depends on the time complexity, but can also include a term for memory usage. Total cost combines the search cost and the path cost of the solution found.