ARTIFICIAL INTELLIGENCE
INTRODUCTION

AI Applications

What is AI ?

Intelligent Agents

Agent Environments

Problem Formulation
COURSE OBJECTIVES

To introduce AI and the key paradigms of AI

Understand core techniques and algorithms of AI: Deep Learning, Machine Learning, Natural Language Processing, Reinforcement Learning, Q-learning, Intelligent Agents, Various Search Algorithms

Understand the basics of knowledge representation

Gain knowledge of problem-solving techniques

How to build an intelligent system


TEXT BOOKS

1. Artificial Intelligence: A Modern Approach, 3rd edition, Stuart Russell and Peter Norvig
2. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th edition, George F. Luger, Pearson Education, 2009
3. Master Machine Learning Algorithms, edition v1.12, Jason Brownlee, eBook, 2017
4. Artificial Intelligence, Patrick H. Winston, 3rd edition, Pearson Education, 1992
AI APPLICATIONS

A GLIMPSE INTO THE FUTURE: HUMANOID ROBOTS

NADINE, ASIMO, ERICA
NADINE: A HUMANOID SOCIAL ROBOT MODELLED AFTER PROF. NADIA THALMANN FROM NTU SINGAPORE (2013)

 Natural skin, realistic hands
 Returns a greeting, makes eye contact, and can remember all the conversations it has had.
 It is able to answer questions autonomously in several languages.
 It simulates emotions both in gestures and facially, depending on the content of the interaction with the user.
 Nadine can recognise persons it has previously seen, and engage in flowing conversation.
 Nadine has been programmed with a "personality", in that its demeanour can change according to what is said to it.
 Nadine has a total of 27 degrees of freedom for facial expressions and upper body movements.
 With persons it has previously encountered, it remembers facts and events related to each person.
 It can assist people with special needs by reading stories, showing images, putting on Skype sessions, sending emails, and communicating with other members of the family.
 It can play the role of a receptionist in an office or be dedicated as a personal coach.
ASIMO: A HUMANOID ROBOT (HONDA, 2000-2018)
NON-HUMANOID ROBOTS

PARO, AIBO, MIRO


IS AI ONLY ABOUT ROBOTS?

 HDFC Bank has developed an AI-based chatbot called EVA (Electronic Virtual Assistant), built by Bengaluru-based Senseforth AI Research.
 Apple's personal assistant, Siri
 Amazon's Alexa
 Tesla vehicles
 Google Maps vs Google Waze
 Ride-sharing apps like Uber, Lyft
 Jarvis by Facebook
 Email categorization by Gmail
 Photo recognition apps
 Face.com Photo Finder and Photo Tagger (now acquired by FB)
 DeepText AI Engine by FB (near-human accuracy, 20 languages, 1000s of posts/sec)
SOME DOMAINS WHERE AI IS INFUSED

 Steering a driverless car, AI autopilots for commercial flights
 Spam filters
 Beating Garry Kasparov in a chess match
 Understanding language
 Healthcare: robotic assistants in surgery, CAD
 Automated customer support: chatbots
 Personalized shopping experience
 Finance: stock brokerage prediction
 Travel and navigation
 Social media
 Smart home devices
 Creative arts: Watson BEAT
 Security and surveillance
AI PROBLEMS

 Playing chess
 Proving mathematical theorems
 Writing poetry
 Driving a car on a crowded street
 Diagnosing diseases
INTRODUCTION TO AI
HOW TO DEFINE AI

WHAT IS ARTIFICIAL INTELLIGENCE?

 What is intelligence?
 What is thinking?
 What is a machine? Is a computer a machine?
 Are humans intelligent?
 Can machines think? If yes, are we machines?
WHAT IS ARTIFICIAL INTELLIGENCE?

 AI is a term coined by John McCarthy.

• Artificial Intelligence is a branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and are able to make decisions.

• Artificial Intelligence is the study of systems that:

Think Humanly            Think Rationally
Act/Behave Humanly       Act/Behave Rationally
ACTING HUMANLY: TURING TEST

 Human beings are intelligent.
 To be called intelligent, a machine must produce responses that are indistinguishable from those of a human.

Alan Turing

• If the interrogator cannot reliably distinguish the human from the computer, then the computer possesses intelligence.
AI TECHNIQUES: REQUIRED CAPABILITIES TO ACT HUMANLY (TURING TEST)

Natural Language Processing: enabling the computer to communicate successfully in English

Knowledge Representation: storing what the computer knows or hears

Automated Reasoning: using the stored information to answer questions and to draw new conclusions

Machine Learning: adapting to new circumstances and detecting and extrapolating patterns

Computer Vision: perceiving objects

Robotics: manipulating objects and moving about
THINK HUMANLY: COGNITIVE MODELING APPROACH

 To get inside the actual workings of human minds.
 Three ways to get inside the mind:
 Introspection: trying to catch our own thoughts as they go by
 Psychological experiments: observing a person in action
 Brain imaging: observing the brain in action

The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
THINKING RATIONALLY: THE LAWS OF THOUGHT APPROACH

 Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is,
irrefutable reasoning processes. His famous syllogisms provided patterns for argument
structures that always gave correct conclusions given correct premises.
 For example, "Socrates is a man; all men are mortal; therefore Socrates is mortal."
 These laws of thought were supposed to govern the operation of the mind, and initiated the field
of logic.
 There are two main obstacles to this approach. First, it is not easy to take informal knowledge
and state it in the formal terms required by logical notation. Second, there is a big difference
between being able to solve a problem "in principle" and doing so in practice.
 Even problems with just a few dozen facts can exhaust the computational resources of any
computer unless it has some guidance as to which reasoning steps to try first.
ACT RATIONALLY: THE RATIONAL AGENT APPROACH

AGENTS

 An agent is just something that acts.
 Agents can be classified according to the environment in which they operate: Intelligent Agents, Human Agents, Robotic Agents, Biological Agents, Software/Computer Agents, Hardware Agents, Interface Agents, Mobile Agents, Reactive Agents, Information Agents, Distributed Artificial Intelligence (DAI) Agents, ...
 Computer agents:
 operate autonomously
 perceive their environment
 persist over a prolonged time period
 adapt to change
 create and pursue goals
AI AGENTS

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. (Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig)

An intelligent agent:
 Must sense
 Must act/react
 Must be autonomous (to some extent)
 Must be rational
AI AGENT COMPONENTS

 Sensors: An agent perceives its environment through sensors. E.g., cameras, infrared range finders
 Actuators: An agent can change the environment through actuators/effectors. E.g., motors
 Percept: The complete set of inputs at a given time for an agent is called a percept.
 Percept sequence: The complete history of everything the agent has perceived.
 Agent function (agent's behaviour): maps from percept histories to actions: f: P* -> A
 Agent program: runs on the physical architecture to produce f. It is a concrete implementation.
 Action: An operation involving an actuator is called an action.
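To make the mapping concrete, here is a minimal Python sketch (the function names and the toy decision rule are ours, not the slides'): the agent function maps the whole percept history to an action, while the agent program is the loop that implements it.

# Minimal sketch: the agent function f: P* -> A maps percept
# histories to actions; the agent program implements it concretely.
# Names and the toy rule are illustrative.

def agent_function(percept_history):
    """f: P* -> A -- this toy f only inspects the latest percept."""
    location, status = percept_history[-1]
    return "Suck" if status == "Dirty" else "Right"

def agent_program(percepts):
    """Concrete implementation: accumulate the history and apply f."""
    history, actions = [], []
    for p in percepts:
        history.append(p)
        actions.append(agent_function(history))
    return actions

print(agent_program([("A", "Dirty"), ("A", "Clean")]))  # ['Suck', 'Right']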
VACUUM-CLEANER WORLD

 Percepts: location and contents, e.g., [A, Dirty]
 Actions: Left, Right, Suck, NoOp
 Agent function: look-up table
A SIMPLE AGENT FUNCTION

Percept sequence                          Action
[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck
[A, Clean], [A, Clean], [A, Clean]        Right
[A, Clean], [A, Clean], [A, Dirty]        Suck
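The look-up table above translates almost directly into code; a minimal sketch (table abbreviated to the rows shown, helper names ours):

# Table-driven vacuum agent: the growing percept sequence is the key.
# Table abbreviated to the first few rows of the slide; names are ours.

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def make_table_driven_agent(table):
    """Return an agent program that looks up the full percept
    sequence seen so far (NoOp when the sequence is not listed)."""
    history = []
    def agent(percept):
        history.append(percept)
        return table.get(tuple(history), "NoOp")
    return agent

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("A", "Dirty")))  # Suck (sequence [A, Clean], [A, Dirty])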

PROPERTIES OF AGENTS

 Rationality
 Autonomy
 Reactivity
RATIONALITY

An agent should "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.

A rational agent always selects an action based on the percept sequence it has received, so as to maximize its (expected) performance measure given that percept sequence and the knowledge it possesses.

Definition (Ideal Rational Agent): For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rationality is different from omniscience: percepts may not supply all relevant information. We can behave rationally even when faced with incomplete information. E.g., in a card game, we don't know the cards of the others.

Rationality is different from being perfect: rationality maximizes expected outcome, while perfection maximizes actual outcome.
AUTONOMY

DEFINITION: The autonomy of an agent is the extent to which its behavior is determined by its own experience (in dynamic, unpredictable environments), rather than by the knowledge of the designer.
 Agents can perform actions in order to modify future percepts so as to obtain useful information: information gathering, exploration.
Extremes:
 No autonomy: ignores environment/data
 Complete autonomy: must act randomly/no program
Example: a baby learning to crawl
Ideal:
 Design agents to have some autonomy
 Possibly become more autonomous with experience
REACTIVITY

DEFINITION: A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful).
Ways to achieve reactivity:
 Reactive architectures
 [Situation - Action] rules
 Layered, behaviour-based architectures
 Deliberative architectures
 Symbolic world model, long-term goals
 Reasoning, planning
 Hybrid architectures
 Reactive layer + deliberative layer
PEAS ANALYSIS

Must first specify the setting for intelligent agent design.
Agents can be described by their PEAS.
A rational agent maximizes the performance measure for its PEAS.
Specifying the task environment is always the first step in designing an agent.

PEAS: Performance, Environment, Actuators, Sensors
PAGE: Percepts, Actions, Goals, Environment
PERFORMANCE MEASURES

Performance measure: an objective criterion for success of an agent's behavior. The performance measure depends on the agent function.

Performance measures of a vacuum-cleaner agent: amount of dirt cleaned up, amount of time taken, amount of electricity consumed, level of noise generated, etc.

Performance measures of a self-driving car: time to reach destination (minimize), safety, predictability of behavior for other agents, reliability, etc.

Performance measure of a game-playing agent: win/loss percentage (maximize), robustness, unpredictability (to "confuse" the opponent), etc.
THE ENVIRONMENT

 What all do we need to specify?
 The action space
 The percept space
 The environment, as a string of mappings from the action space to the percept space
TAXI DRIVER EXAMPLE

Performance Measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: camera, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer
MEDICAL DIAGNOSIS SYSTEM

Performance Measure: healthy patient, minimize costs, lawsuits
Environment: patient, hospital, staff
Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: keyboard (entry of symptoms, findings, patient's answers)
PART-PICKING ROBOT

Performance Measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors
INTERACTIVE ENGLISH TUTOR

Performance Measure: maximize student's score on test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard

EXAMPLES: PAGE DESCRIPTIONS
STRUCTURE OF INTELLIGENT AGENTS

The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture. The architecture might be a plain computer, or it might include special-purpose hardware for certain tasks, such as processing camera images or filtering audio input.

 An agent's structure can be viewed as:
 Agent = Architecture + Agent Program
 Architecture = the machinery that an agent executes on
 Agent Program = an implementation of an agent function (....our focus)
AI AGENT ENVIRONMENT

AGENT ENVIRONMENT TYPES

 Fully observable (vs. partially observable)
 Deterministic (vs. stochastic)
 Episodic (vs. sequential)
 Static (vs. dynamic)
 Discrete (vs. continuous)
 Single agent (vs. multiagent)
ENVIRONMENT TYPES

 Fully observable (vs. partially observable) (accessible vs. inaccessible): An agent's sensors give it access to the complete state of the environment at each point in time. An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state. Most moderately complex environments (for example, the everyday physical world and the Internet) are inaccessible. The more accessible an environment is, the simpler it is to build agents to operate in it.
 Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent (the agent's action uniquely determines the outcome). If the environment is deterministic except for the actions of other agents, then the environment is strategic. The physical world can, to all intents and purposes, be regarded as non-deterministic. Non-deterministic environments present greater problems for the agent designer.
ENVIRONMENT TYPES

 Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. In an episodic environment, the performance of an agent is dependent on a number of discrete episodes, with no link between the performance of the agent in different scenarios. Episodic environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode; it need not reason about the interactions between this and future episodes.
 Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.) A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control; other processes can interfere with the agent's actions (as in concurrent systems theory). The physical world is a highly dynamic environment.
 Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. An environment is discrete if there are a fixed, finite number of actions and percepts in it, e.g., a chess game. Continuous environments have a certain level of mismatch with computer systems, e.g., taxi driving. Discrete environments could in principle be handled by a kind of "lookup table".
 Single agent (vs. multiagent): An agent operating by itself in an environment.
AGENT TYPES

 Basic types in order of increasing generality:
1. Simple reflex agents
2. Model-based reflex agents (reflex agents with state/model)
3. Goal-based agents
4. Utility-based agents
5. Learning agents
SIMPLE REFLEX AGENTS

A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.

if car-in-front-is-braking then initiate-braking
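This condition-action pattern can be sketched directly; the rule set and percept fields below are illustrative, not from the slides:

# Minimal condition-action sketch of a simple reflex agent.
# Rules and percept fields are illustrative.

rules = [
    (lambda p: p["car_in_front_is_braking"], "initiate-braking"),
    (lambda p: p["obstacle_ahead"], "swerve"),
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches
    the *current* percept; no history or internal state is kept."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "no-op"

print(simple_reflex_agent({"car_in_front_is_braking": True,
                           "obstacle_ahead": False}))  # initiate-braking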


SIMPLE REFLEX AGENTS

 Simple, but very limited intelligence.
 The action does not depend on the percept history, only on the current percept.
 Therefore, no memory requirements.
 Infinite loops:
 Suppose the vacuum cleaner does not observe its location. What do you do given percept = Clean? If the agent always goes Left, it gets stuck at A; if always Right, it gets stuck at B -> infinite loop.
 Consider the following more obvious case: from time to time, the driver looks in the rear-view mirror to check on the locations of nearby vehicles.
 When the driver is not looking in the mirror, the vehicles in the next lane are invisible (i.e., the states in which they are present and absent are indistinguishable); but in order to decide on a lane-change maneuver, the driver needs to know whether or not they are there.
 The problem illustrated by this example arises because the sensors do not provide access to the complete state of the world.
 In such cases, the agent may need to maintain some internal state information in order to distinguish between world states that generate the same perceptual input but nonetheless are significantly different. Here, "significantly different" means that different actions are appropriate in the two states.
MODEL-BASED REFLEX AGENTS

They use a model of the world to choose their actions, and they maintain an internal state.
Model: knowledge about "how things happen in the world".
Internal state: a representation of the unobserved aspects of the current state, depending on the percept history.
Updating the state requires information about:
• How the world evolves.
• How the agent's actions affect the world.
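A minimal sketch for the vacuum world (ours, not from the slides; the model assumes Suck cleans the current square):

# Illustrative model-based reflex agent for the vacuum world: it keeps
# an internal model of which squares are clean, so it can act sensibly
# even though each percept only covers the current square. Names are ours.

def make_model_based_agent():
    state = {"A": "Unknown", "B": "Unknown"}  # internal world model

    def agent(percept):
        location, status = percept
        state[location] = status            # update model from the percept
        if status == "Dirty":
            state[location] = "Clean"       # model: Suck cleans the square
            return "Suck"
        other = "B" if location == "A" else "A"
        if state[other] != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"                       # model says everything is clean
    return agent

agent = make_model_based_agent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right (the model doesn't know about B yet)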
GOAL-BASED AGENTS

 They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex agent, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.
 Goal: a description of desirable situations.
 At a road junction, the taxi can turn left, turn right, or go straight on. The right decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information, which describes situations that are desirable.
 The agent program can combine this with information about the results of possible actions (the same information as was used to update internal state in the reflex agent) in order to choose actions that achieve the goal.
 Sometimes this will be simple, when goal satisfaction results immediately from a single action;
 sometimes it will be more tricky, when the agent has to consider long sequences of twists and turns to find a way to achieve the goal. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
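Search is covered in detail later in the module; as a minimal illustration of goal-based action selection (the toy road map, names, and the breadth-first choice are ours):

# Illustrative goal-based choice: the agent combines a model of action
# results with a goal test to pick an action sequence, rather than
# reacting to the current percept alone. All names are hypothetical.

from collections import deque

def plan_to_goal(start, goal_test, successors):
    """Breadth-first search over the model: returns a list of actions
    leading from start to a state satisfying goal_test, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None

# Toy road map: which places each turn leads to (hypothetical).
roads = {"Junction": [("left", "Market"), ("straight", "Airport")],
         "Market": [("straight", "Airport")]}
succ = lambda s: roads.get(s, [])
print(plan_to_goal("Junction", lambda s: s == "Airport", succ))  # ['straight']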
UTILITY-BASED AGENTS
• Goals alone are not really enough to generate high-quality
behavior.
• For example, there are many action sequences that will get the
taxi to its destination, thereby achieving the goal, but some are
quicker, safer, more reliable, or cheaper than others.
• Goals just provide a crude distinction between "happy" and
"unhappy" states, whereas a more general performance
measure should allow a comparison of different world states
(or sequences of states) according to exactly how happy they
would make the agent if they could be achieved.
• Because "happy" does not sound very scientific, the customary
terminology is to say that if one world state is preferred to
another, then it has higher utility for the agent.
• Utility is therefore a function that maps a state onto a real
number, which describes the associated degree of happiness.
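A utility function in this sense is just a state-to-real-number mapping; a toy sketch (the states, fields, and weights below are hypothetical):

# Illustrative utility function: it maps a state onto a real number,
# letting the agent compare routes that all achieve the goal.

def utility(state):
    """Higher is better: reward safety, penalize time and cost."""
    return 2.0 * state["safety"] - 1.0 * state["time"] - 0.5 * state["cost"]

routes = {
    "highway":  {"safety": 0.8, "time": 1.0, "cost": 2.0},
    "backroad": {"safety": 0.9, "time": 2.5, "cost": 0.5},
}
best = max(routes, key=lambda r: utility(routes[r]))
print(best)  # highway: both reach the goal, but it has higher utility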
LEARNING AGENTS

A learning agent in AI is the type of agent that can learn from its past experiences, i.e., it has learning capabilities.
It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has mainly four conceptual components:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
PROBLEM FORMULATION
STATE SPACE REPRESENTATION

 Each state is an abstract representation of the agent's environment; it is an abstraction that denotes a configuration of the agent.
 INITIAL STATE: the description of the starting configuration of the agent.
 An ACTION/OPERATOR takes the agent from one state to another state. A state can have a number of successor states.
 A PLAN is a sequence of actions.
 A GOAL is a description of a set of desirable states of the world. Goal states are often specified by a goal test which any goal state must satisfy.
 PATH COST: a path cost is a positive number; usually, path cost = sum of the step costs.
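These ingredients can be bundled into a small problem definition; a sketch with our own (hypothetical) field names:

# Illustrative rendering of the ingredients above as code: a problem
# bundles the initial state, a successor function (the operators),
# a goal test, and the step costs.

from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class Problem:
    initial_state: Any
    successors: Callable[[Any], Iterable[Tuple[str, Any, float]]]
    goal_test: Callable[[Any], bool]

def path_cost(steps: Iterable[Tuple[str, Any, float]]) -> float:
    """Path cost = sum of the step costs along a plan."""
    return sum(cost for _, _, cost in steps)

# Example: a plan as (operator, resulting state, step cost) triples.
plan = [("Right", "B", 1.0), ("Suck", "B-clean", 1.0)]
print(path_cost(plan))  # 2.0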
PROBLEM FORMULATION

 Problem formulation means choosing a relevant set of states to consider, and a feasible set of operators for moving from one state to another.
 Search is the process of imagining sequences of operators applied to the initial state, and checking which sequence reaches the goal state.
 If you are able to represent the problem in terms of a state space representation, you can solve it using search techniques, which find the set of states and the corresponding actions leading from the initial state S to the final goal state G.
PROBLEM FORMULATION: EXAMPLES

 Vacuum Cleaner
 Missionaries and Cannibals
 Cryptarithmetic
 N-Queens Problem (8-Queens Problem)
 N-Puzzle Problem (8-Puzzle Problem)
 Water Jug Problem
 MLGC Problem (Man, Lion, Goat, Cabbage)
PROBLEM FORMULATION: VACUUM CLEANER

 Let the world contain just two locations. Each location may or may
not contain dirt, and the agent may be in one location or the other.
 There are 8 possible world states
 The agent has three possible actions in this version of the vacuum
world: Left, Right, and Suck.
 Assume, for the moment, that sucking is 100% effective.
 The goal is to clean up all the dirt. That is, the goal is equivalent
to the state set {7,8}.
 Suppose that the agent's sensors give it enough information to tell
exactly which state it is in (i.e., the world is accessible); and
suppose that it knows exactly what each of its actions does. Then it
can calculate exactly which state it will be in after any sequence of
actions. For example, if its initial state is 5, then it can calculate
that the action sequence [Right,Suck] will get to a goal state. This
is the simplest case, which we call a single-state problem.
PROBLEM FORMULATION: MISSIONARIES & CANNIBALS

 3 missionaries and 3 cannibals need to cross a river.
 1 boat that can carry 1 or 2 people.
 Find a way to get everyone to the other side, without ever leaving a group of missionaries in one place outnumbered by the cannibals in that place.
 Links:
http://www.learn4good.com/games/puzzle/boat.htm
https://www.novelgames.com/en/missionaries/
PROBLEM FORMULATION: MISSIONARIES & CANNIBALS

 States: <m, c, b> representing the number of missionaries, the number of cannibals, and the position of the boat on the starting bank
 Initial state: <3, 3, 1>
 Actions: take 1 missionary, 1 cannibal, 2 missionaries, 2 cannibals, or 1 missionary and 1 cannibal across the river
 Transition model: the state after an action
 Goal test: <0, 0, 0>
 Path cost: number of crossings
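A quick sketch of the transition model plus the safety constraint (function names are ours; the state is <m, c, b> counted on the starting bank):

# Illustrative successor generation for missionaries and cannibals:
# a state is safe if missionaries are never outnumbered on either bank
# (unless a bank has no missionaries at all).

def is_safe(m, c):
    """m, c = missionaries/cannibals on the starting bank (0..3)."""
    left_ok = (m == 0) or (m >= c)
    right_ok = (3 - m == 0) or (3 - m >= 3 - c)
    return left_ok and right_ok

def successors(state):
    """All safe states reachable by one boat crossing of 1 or 2 people."""
    m, c, b = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]
    d = -1 if b == 1 else 1          # boat leaves the bank it is on
    for dm, dc in moves:
        m2, c2 = m + d * dm, c + d * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and is_safe(m2, c2):
            yield (m2, c2, 1 - b)

print(list(successors((3, 3, 1))))  # [(3, 2, 0), (3, 1, 0), (2, 2, 0)]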
PROBLEM FORMULATION: CRYPTARITHMETIC

The following formulation is probably the simplest:

States: a cryptarithmetic puzzle with some letters replaced by digits.
Operators: replace all occurrences of a letter with a digit not already appearing in the puzzle.
Goal test: the puzzle contains only digits, and represents a correct sum.
Path cost: zero. All solutions are equally valid.
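As a quick illustration of the goal test, here is a check for the classic SEND + MORE = MONEY puzzle (the puzzle choice and helper names are ours):

# Illustrative goal test for SEND + MORE = MONEY: given a complete
# letter-to-digit assignment, does the puzzle represent a correct sum?

def is_correct_sum(assignment):
    """assignment: letter -> digit; checks the completed puzzle."""
    word = lambda w: int("".join(str(assignment[ch]) for ch in w))
    if assignment["S"] == 0 or assignment["M"] == 0:
        return False                    # no leading zeros
    return word("SEND") + word("MORE") == word("MONEY")

solution = {"S": 9, "E": 5, "N": 6, "D": 7,
            "M": 1, "O": 0, "R": 8, "Y": 2}
print(is_correct_sum(solution))  # True: 9567 + 1085 == 10652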


8-QUEENS PROBLEM

State: an 8x8 chessboard (empty, or with one queen placed per row in the leftmost columns, etc.)
Goal: all queens on the chessboard such that no two queens attack each other
Actions: placing one queen in one of the squares
Cost: can be the number of moves
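The goal test ("no two queens attack each other") is easy to sketch; the state encoding below (one queen per row, cols[r] giving its column) is our choice, not the slides':

# Illustrative 8-queens goal test: no shared column or diagonal.

def no_attacks(cols):
    n = len(cols)
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            same_col = cols[r1] == cols[r2]
            same_diag = abs(cols[r1] - cols[r2]) == abs(r1 - r2)
            if same_col or same_diag:
                return False
    return True

print(no_attacks((0, 4, 7, 5, 2, 6, 1, 3)))  # True: a valid solution
print(no_attacks((0, 1, 2, 3, 4, 5, 6, 7)))  # False: all on one diagonal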
8-PUZZLE PROBLEM

State: a 3x3 pattern of tiles, with each tile having a number [1-8] and one tile blank
Actions: the empty space can move in four directions (movement of the empty space): Up, Down, Right or Left.
The empty space cannot move diagonally and can take only one step at a time.
Cost: can be the number of moves
WATER JUG PROBLEM

 You are given an m-litre jug and an n-litre jug. Both jugs are initially empty. The jugs don't have markings to allow measuring smaller quantities. You have to use the jugs to measure d litres of water, where d is less than n. There is no limitation on water usage.
 State: (X, Y) corresponds to a state where X refers to the amount of water in Jug 1 and Y refers to the amount of water in Jug 2.
 Goal: determine the path from the initial state (xi, yi) to the final state (xf, yf), where (xi, yi) is (0, 0), which indicates both jugs are initially empty, and (xf, yf) indicates a state which could be (0, d) or (d, 0).
 Actions:
 Empty a jug, e.g., (X, Y) -> (0, Y): empty Jug 1
 Fill a jug, e.g., (0, 0) -> (X, 0): fill Jug 1
 Pour water from one jug to the other until one of the jugs is either empty or full, e.g., (X, Y) -> (X-d, Y+d)
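A brief breadth-first sketch over this state space (ours, not from the slides; note the goal test here accepts d litres measured in either jug):

# Illustrative BFS solver for the water jug problem with jug
# capacities m, n and target d, using the actions listed above.

from collections import deque

def water_jug(m, n, d):
    """Return a shortest sequence of states from (0, 0) to d litres."""
    def successors(x, y):
        yield (m, y); yield (x, n)                        # fill a jug
        yield (0, y); yield (x, 0)                        # empty a jug
        pour = min(x, n - y); yield (x - pour, y + pour)  # jug1 -> jug2
        pour = min(y, m - x); yield (x + pour, y - pour)  # jug2 -> jug1

    frontier, parent = deque([(0, 0)]), {(0, 0): None}
    while frontier:
        state = frontier.popleft()
        if d in state:                      # d litres in either jug
            path = []
            while state is not None:
                path.append(state); state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

print(water_jug(4, 3, 2))  # [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]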
SOLVING PROBLEMS BY SEARCHING

Figure: Map of Romania


SOLVING PROBLEMS BY SEARCHING

 Intelligent agents are supposed to act in such a way that the environment goes through a sequence of states that
maximizes the performance measure. 
 In its full generality, this specification is difficult to translate into a successful agent design.
 Imagine our agent in the city of Arad, Romania, toward the end of a touring holiday. 
 The agent has a ticket to fly out of Bucharest the following day. The ticket is nonrefundable, the agent's visa is
about to expire, and after tomorrow, there are no seats available for six weeks. 
 Now the agent's performance measure contains many other factors besides the cost of the ticket and the
undesirability of being arrested and deported. For example, it wants to improve its suntan, improve its Romanian,
take in the sights, and so on. 
SOLVING PROBLEMS BY SEARCHING

 All these factors might suggest any of a vast array of possible actions.


 Given the seriousness of the situation, however, it should adopt the goal of driving to Bucharest. 
 Actions that result in a failure to reach Bucharest on time can be rejected without further consideration. 
 Goals such as this help organize behavior by limiting the objectives that the agent is trying to achieve. 
 Goal formulation, based on the current situation, is the first step in problem solving.
 Having formulated a goal, the agent may also wish to consider other factors that affect the desirability of different ways of achieving the goal.
 ROUTE-FINDING PROBLEM FROM ARAD TO BUCHAREST

Figure: Romania routes


SOLVING PROBLEMS BY SEARCHING

 Problem formulation is the process of deciding what actions and states to consider, and follows goal formulation. 
 Our agent has now adopted the goal of driving to Bucharest, and is considering which town to drive to from Arad.
There are three roads out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves
the goal, so unless the agent is very familiar with the geography of Romania, it will not know which road to
follow.
 If the agent has no additional knowledge, then it is stuck. The best it can do is choose one of the actions at
random.
 But suppose the agent has a map of Romania, either on paper or in its memory.
SOLVING PROBLEMS BY SEARCHING

 The agent can use this information to consider subsequent stages of a hypothetical journey through each of the three towns, to try to find a journey that eventually gets to Bucharest.
 Once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that correspond to the legs of the journey.
 In general, then, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best one.
 This process of looking for such a sequence is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
SOLVING PROBLEMS BY SEARCHING

 Once the solution is found, the actions it recommends can be carried out. This is called the execution phase. 
 Thus, we have a simple "formulate, search, execute" design for the agent.
  After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. 
 It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do, and
then removing that step from the sequence.
  Once the solution has been executed, the agent will find a new goal.
PROBLEM-SOLVING AGENTS
GENERATING ACTION SEQUENCES: ROUTE-FINDING PROBLEM FROM ARAD TO BUCHAREST

Figure: Romania routes


GENERATING ACTION SEQUENCES

Route-finding problem from Arad to Bucharest

 Let the initial state be "in Arad". The first step is to test if this is a goal state. Clearly it is not, but it is important to check so that we can solve trick problems like "starting in Arad, get to Arad."
 Because this is not a goal state, we need to consider some other states. This is done by applying the operators to the current state, thereby generating a new set of states. The process is called expanding the state.
 In this case, we get three new states, "in Sibiu," "in Timisoara," and "in Zerind," because there is a direct one-step route from Arad to each of these three cities.
 If there were only one possibility, we would just take it and continue. But whenever there are multiple possibilities, we must make a choice about which one to consider further.
GENERATING ACTION SEQUENCES

Route-finding problem from Arad to Bucharest contd...


 This is the essence of search—choosing one option and putting the others aside for later, in case the first choice
does not lead to a solution. 
 Suppose we choose Zerind. We check to see if it is a goal state (it is not), and then expand it to get "in Arad" and "in Oradea." We can then choose either of these two, or go back and choose Sibiu or Timisoara.
 We continue choosing, testing, and expanding until a solution is found, or until there are no more states to be
expanded.
 The choice of which state to expand first is determined by the search strategy.
GENERATING ACTION SEQUENCES

Route-finding problem from Arad to Bucharest contd...


 We can look at the search process as building up a search tree that is superimposed over the state space.
 The root of the search tree is a search node corresponding to the initial state. 
 The leaf nodes of the tree correspond to states that do not have successors in the tree, either because they have not
been expanded yet, or because they were expanded, but generated the empty set.
  At each step, the search algorithm chooses one leaf node to expand.
ROUTE-FINDING PROBLEM FROM ARAD TO BUCHAREST: SOME EXPANSIONS
THE GENERAL SEARCH ALGORITHM

 Distinguish between the state space and the search tree.
 For the route-finding problem, there are only 20 states in the state space, one for each city.
 But there are an infinite number of paths in this state space, so the search tree has an infinite number of nodes.
 For example, the branch Arad-Sibiu-Arad continues Arad-Sibiu-Arad-Sibiu-Arad, and so on, indefinitely.
 A good search algorithm avoids following such paths.
DATA STRUCTURES FOR SEARCH TREES

datatype node
components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST

We assume a node is a data structure with five components:
 the state in the state space to which the node corresponds;
 the node in the search tree that generated this node (this is called the parent node);
 the operator that was applied to generate the node;
 the number of nodes on the path from the root to this node (the depth of the node);
 the path cost of the path from the initial state to the node.
DATA STRUCTURES FOR SEARCH TREES

 Difference between node and state:
 A node is a bookkeeping data structure used to represent the search tree for a particular problem instance as generated by a particular algorithm.
 A state represents a configuration (or set of configurations) of the world.
 Thus, nodes have depths and parents, whereas states do not.
 It is quite possible for two different nodes to contain the same state, if that state is generated via two different sequences of actions.
FORMAL VERSION OF THE GENERAL SEARCH ALGORITHM
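The formal algorithm itself appears as a figure on the original slide. As a minimal sketch of the general search loop under that formulation (reusing the Node and child_node sketch above; the queuing function determines the strategy, and the problem representation here is our choice):

# Illustrative general search loop: the queuing function decides the
# strategy, e.g. enqueueing children at the end gives breadth-first search.

def general_search(problem, queuing_fn):
    nodes = [Node(problem["initial_state"])]
    while nodes:
        node = nodes.pop(0)                     # always expand the front node
        if problem["goal_test"](node.state):
            return node                         # solution: trace parents back
        children = [child_node(node, op, s, cost)
                    for op, s, cost in problem["successors"](node.state)]
        nodes = queuing_fn(nodes, children)     # insert per the strategy
    return None                                 # frontier exhausted: failure

queue_at_end   = lambda old, new: old + new     # breadth-first behaviour
queue_at_front = lambda old, new: new + old     # depth-first behaviour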
SEARCH STRATEGIES

 Search strategies are evaluated in terms of four criteria:
 Completeness: is the strategy guaranteed to find a solution when there is one?
 Time complexity: how long does it take to find a solution?
 Space complexity: how much memory does it need to perform the search?
 Optimality: does the strategy find the highest-quality solution when there are several different solutions?
SEARCH STRATEGIES

 Blind Search
 Depth First Search
 Breadth First Search
 Iterative Deepening Search
 Bidirectional Search
 Informed Search
 Constraint Satisfaction
 Adversary Search
THANK YOU
