
Module 1

Introduction:

 Artificial Intelligence is concerned with the design of intelligence in an artificial device.


The term was coined by John McCarthy in 1956.
 Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the world.
 AI is the study of the mental faculties through the use of computational models.
 AI is the study of intellectual/mental processes as computational processes.
 An AI program will demonstrate a high level of intelligence to a degree that equals or exceeds the intelligence required of a human in performing some task.
 AI is unique, sharing borders with Mathematics, Computer Science, Philosophy, Psychology, Biology, Cognitive Science and many others.
 Although there is no clear definition of AI or even of intelligence, it can be described as an attempt to build machines that, like humans, can think and act, and that can learn and use knowledge to solve problems on their own.

Sub Areas of AI:

1) Game Playing
The Deep Blue chess program beat world champion Garry Kasparov.
2) Speech Recognition
PEGASUS, a spoken language interface to American Airlines' EAASY SABRE reservation system, allows users to obtain flight information and make reservations over the telephone. The 1990s saw significant advances in speech recognition, so that limited systems are now successful.
3) Computer Vision
Face recognition programs in use by banks, government, etc. The ALVINN system from
CMU autonomously drove a van from Washington, D.C. to San Diego (all but 52 of 2,849
miles), averaging 63 mph day and night, and in all weather conditions. Handwriting
recognition, electronics and manufacturing inspection, photo interpretation, baggage
inspection, reverse engineering to automatically construct a 3D geometric model.
4) Expert Systems
Application-specific systems that rely on obtaining the knowledge of human experts in an area and programming that knowledge into a system.
a. Diagnostic Systems: MYCIN system for diagnosing bacterial infections of the blood and suggesting treatments. Intellipath pathology diagnosis system (AMA approved). Pathfinder medical diagnosis system, which suggests tests and makes diagnoses. Whirlpool customer assistance center.
b. System Configuration
DEC's XCON system for custom hardware configuration. Radiotherapy treatment planning.
c. Financial Decision Making
Credit card companies, mortgage companies, banks, and the U.S. government employ AI systems to detect fraud and expedite financial transactions. For example, AMEX credit check.
d. Classification Systems
Put information into one of a fixed set of categories using several sources of
information. E.g., financial decision making systems. NASA developed a system for
classifying very faint areas in astronomical images into either stars or galaxies with
very high accuracy by learning from human experts' classifications.
5) Mathematical Theorem Proving
Use inference methods to prove new theorems.
6) Natural Language Understanding
AltaVista's translation of web pages. Translation of Caterpillar truck manuals into 20 languages.

7) Scheduling and Planning
Automatic scheduling for manufacturing. DARPA's DART system, used in the Desert Storm and Desert Shield operations to plan logistics of people and supplies. American Airlines rerouting contingency planner. European Space Agency planning and scheduling of spacecraft assembly, integration and verification.
8) Artificial Neural Networks
9) Machine Learning

Applications of AI:

AI algorithms have attracted close attention from researchers and have been applied successfully to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms consume considerable computation time due to the stochastic nature of their search approaches.

1. Business: financial strategies


2. Engineering: checking designs, offering suggestions to create new products, expert systems for all engineering problems
3. Manufacturing: assembly, inspection and maintenance
4. Medicine: monitoring, diagnosing
5. Education: in teaching
6. Fraud detection
7. Object identification
8. Information retrieval
9. Space shuttle scheduling

Building AI Systems:

1) Perception
Intelligent biological systems are physically embodied in the world and experience the world through their sensors (senses). For an autonomous vehicle, input might be images from a camera and range information from a rangefinder. For a medical diagnosis system, perception is the set of symptoms and test results that have been obtained and input to the system manually.

2) Reasoning
Inference, decision-making, classification from what is sensed and what the internal "model" of the world is. This might be a neural network, a logical deduction system, Hidden Markov Model induction, heuristic search of a problem space, Bayes network inference, genetic algorithms, etc. Includes the areas of knowledge representation, problem solving, decision theory, planning, game theory, machine learning, uncertainty reasoning, etc.
3) Action
Biological systems interact within their environment by actuation, speech, etc. All behavior is
centered around actions in the world. Examples include controlling the steering of a Mars rover or
autonomous vehicle, or suggesting tests and making diagnoses for a medical diagnosis system.
Includes areas of robot actuation, natural language generation, and speech synthesis.
The definitions of AI:

a) "The exciting new effort to make b) "The study of mental faculties


computers think . . . machines with through the use of computational
minds,in the full and literal sense" models" (Charniak and McDermott,
(Haugeland, 1985) 1985)

"The automation of] activities that we "The study of the computations


associate with human thinking, activities that make it possible to perceive,
such as decision-making, problem solving, reason,and act" (Winston, 1992)
learning..."(Bellman, 1978)

4
c) "The art of creating machines that d) "A field of study that seeks to explain
performfunctions that require and emulate intelligent behavior in
intelligence when performed by people" terms of computational processes"
(Kurzweil, 1990) (Schalkoff, 1 990)
"The branch of computer science
"The study of how to make that is concerned with the
computersdo things at which, at the automation of intelligent
moment, people are better" (Rich behavior"
and Knight, 1 (Luger and Stubblefield, 1993)
99 1 )

The definitions on the top, (a) and (b), are concerned with reasoning, whereas those on the bottom, (c) and (d), address behavior. The definitions on the left, (a) and (c), measure success in terms of human performance, and those on the right, (b) and (d), measure against an ideal concept of intelligence called rationality.
Intelligent Systems:
In order to design intelligent systems, it is important to categorize them into four categories (Luger and Stubblefield, 1993; Russell and Norvig, 2003):
1. Systems that think like humans
2. Systems that think rationally
3. Systems that behave like humans
4. Systems that behave rationally

              Human-Like                            Rationally

Think:   Cognitive Science Approach          Laws of Thought Approach
         "Machines that think like humans"   "Machines that think rationally"

Act:     Turing Test Approach                Rational Agent Approach
         "Machines that behave like humans"  "Machines that behave rationally"

Cognitive Science: Think Human-Like

a. Requires a model of human cognition. Precise enough models allow simulation by computers.

b. Focus is not just on behavior and I/O, but also on the reasoning process.

c. Goal is not just to produce human-like behavior but to produce a sequence of steps of the reasoning process, similar to the steps followed by a human in solving the same task.

Laws of thought: Think Rationally

a. The study of mental faculties through the use of computational models; that is, the study of computations that make it possible to perceive, reason and act.

b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.

c. Goal is to formalize the reasoning process as a system of logical rules and procedures of inference.

d. Develop systems of representation to allow inferences like:

"Socrates is a man. All men are mortal. Therefore Socrates is mortal."


Turing Test: Act Human-Like
a. The art of creating machines that perform functions requiring intelligence when performed by people; that is, the study of how to make computers do things which, at the moment, people do better.

b. Focus is on action, and not intelligent behavior centered around the representation of the world

c. Example: Turing Test

o 3 rooms contain: a person, a computer and an interrogator.

o The interrogator can communicate with the other 2 by teletype (to avoid requiring that the machine imitate the appearance or voice of the person).

o The interrogator tries to determine which is the person and which is the machine.

o The machine tries to fool the interrogator into believing that it is the human, and the person also tries to convince the interrogator that he or she is the human.

o If the machine succeeds in fooling the interrogator, then we conclude that the machine is intelligent.

Rational agent: Act Rationally

a. Tries to explain and emulate intelligent behavior in terms of computational processes; that is, it is concerned with the automation of intelligence.

b. Focus is on systems that act sufficiently well, if not optimally, in all situations.

c. Goal is to develop systems that are rational and sufficient.

Agents and Environments:

Fig 2.1: Agents and Environments


Agent:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

 A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for actuators.
 A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
 A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.

Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent function:
Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action.

Agent program
Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Fig 2.1.5. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise move to the other square. A partial tabulation of this agent function is shown in Fig 2.1.6.

Fig 2.1.5: A vacuum-cleaner world with just two locations.


Agent function

Percept Sequence Action

[A, Clean] Right

[A, Dirty] Suck

[B, Clean] Left

[B, Dirty] Suck

[A, Clean], [A, Clean] Right

[A, Clean], [A, Dirty] Suck

Fig 2.1.6: Partial tabulation of a simple agent function for the example: vacuum-cleaner world shown in the
Fig2.1.5

function REFLEX-VACUUM-AGENT([location, status]) returns an action

if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left

Fig 2.1.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept (location, status) and returns an action each time.
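Below is a minimal, runnable Python sketch of this agent. The dictionary-based world and the simulation loop are illustrative assumptions added here; only the condition-action logic comes from the pseudocode above.

def reflex_vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ('A', 'Dirty')
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

# Toy simulation (assumed representation): both squares start dirty.
world = {'A': 'Dirty', 'B': 'Dirty'}
location = 'A'
for step in range(4):
    action = reflex_vacuum_agent((location, world[location]))
    print(step, location, world[location], '->', action)
    if action == 'Suck':
        world[location] = 'Clean'
    else:
        location = 'B' if action == 'Right' else 'A'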

 A rational agent is one that does the right thing. We say that the right action is the one that will cause the agent to be most successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.
We use the term performance measure for the how: the criteria that determine how successful an agent is.

 Example: an agent cleaning a dirty floor
 Performance measure: amount of dirt collected
 When to measure: weekly, for better results

What is rational at any given time depends on four things:


 The performance measure defining the criterion of success
 The agent’s prior knowledge of the environment
 The actions that the agent can perform
 The agent’s percept sequence up to now.

Omniscience, Learning and Autonomy:

 We need to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.
 A rational agent not only gathers information but also learns as much as possible from what it perceives.
 If an agent just relies on the prior knowledge of its designer rather than on its own percepts, then the agent lacks autonomy.
 A system is autonomous to the extent that its behavior is determined by its own experience.
 A rational agent should be autonomous.
E.g., a clock (lacks autonomy):
 No input (percepts)
 Runs only its own algorithm (prior knowledge)
 No learning, no experience, etc.

ENVIRONMENTS:
The performance measure, the environment, and the agent's actuators and sensors come under the heading task environment. We also call this PEAS (Performance, Environment, Actuators, Sensors).

Environment-Types:

1. Accessible vs. inaccessible or Fully observable vs Partially Observable:


If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
2. Deterministic vs. Stochastic:
If the next state of the environment is completely determined by the current state and the actions selected by the agent, then we say the environment is deterministic; otherwise it is stochastic.
3. Episodic vs. nonepisodic:
 The agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occurred in previous episodes.
 Episodic environments are much simpler because the agent does not need to think ahead.
4. Static vs. dynamic.
If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static.

5. Discrete vs. continuous:
If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete. Otherwise, it is continuous.

STRUCTURE OF INTELLIGENT AGENTS

 The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device, which we will call the architecture.
 The architecture might be a plain computer, or it might include special-purpose hardware for certain tasks, such as processing camera images or filtering audio input. It might also include software that provides a degree of insulation between the raw computer and the agent program, so that we can program at a higher level. In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the effectors as they are generated.
 The relationship among agents, architectures, and programs can be summed up as follows:

agent = architecture + program

Agent programs:
 Intelligent agents accept percepts from an environment and generate actions. The early versions of agent programs will have a very simple form (Figure 2.4).
 Each will use some internal data structures that will be updated as new percepts arrive.
 These data structures are operated on by the agent's decision-making procedures to generate an action choice, which is then passed to the architecture to be executed.

Types of agents:

Agents can be grouped into four classes based on their degree of perceived intelligence and capability:
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
Simple reflex agents:
 Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
 The agent function is based on condition-action rules.
 If the condition is true, then the action is taken, else not. This agent function only succeeds when the environment is fully observable.

Model-based reflex agents:


 The Model-based agent can work in a partially observable environment, and track the situation.
 A model-based agent has two important factors:
 Model: It is knowledge about "how things happen in the world," so it is called a Model-based agent.
 Internal State: It is a representation of the current state based on percept history.

Goal-based agents:

 A goal-based agent has an agenda.


 It operates based on a goal in front of it and makes decisions based on how best to reach that goal.
 A goal-based agent operates as a search and planning function, meaning it targets the goal ahead and finds the right action to reach it.
 It is an expansion of the model-based agent.

Utility-based agents:
 A utility-based agent is an agent that acts based not only on what the goal is, but on the best way to reach that goal.
 The utility-based agent is useful when there are multiple possible alternatives, and an agent has to choose in order to perform the best action.
 The term utility can be used to describe how "happy" the agent is.

Problem Solving Agents:
 A problem-solving agent is a goal-based agent.
 Problem-solving agents decide what to do by finding a sequence of actions that lead to desirable states.
Goal Formulation:
It organizes the steps required to formulate/prepare one goal out of the multiple goals available.
Problem Formulation:
It is the process of deciding what actions and states to consider, and it follows goal formulation.
The process of looking for the best sequence of actions to achieve a goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.

Well-defined problems and solutions:
A problem can be defined formally by 4 components:
 The initial state that the agent starts in. In this case, the initial state can be described as In(Arad).
 The possible actions available to the agent, corresponding to each of the states the agent resides in.
For example, ACTIONS(In(Arad)) = {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
Actions are also known as operations.
 A description of what each action does. The formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s.
We also use the term successor to refer to any state reachable from a given state by a single action. For example: RESULT(In(Arad), Go(Zerind)) = In(Zerind).

Together, the initial state, actions and transition model implicitly define the state space of the problem.
State space: the set of all states reachable from the initial state by any sequence of actions.
 The goal test, which determines whether the current state is a goal state. Here, the goal state is {In(Bucharest)}.
 The path cost function, which determines the cost of each path, reflecting the performance measure.
We define the cost function as c(s, a, s'), where s is the current state and a is the action performed by the agent to reach state s'.
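As a sketch, these four components can be written down directly in Python. The fragment of the Romania road map below is only a small assumed subset, kept here for illustration:

# Small assumed fragment of the Romania road map (city: {neighbor: cost}).
ROADS = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
}

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):              # ACTIONS(s)
        return ['Go(%s)' % c for c in self.roads[state]]

    def result(self, state, action):       # RESULT(s, a): the transition model
        return action[3:-1]                # 'Go(Zerind)' -> 'Zerind'

    def goal_test(self, state):            # goal test
        return state == self.goal

    def step_cost(self, s, a, s2):         # c(s, a, s')
        return self.roads[s][s2]

p = RouteProblem('Arad', 'Bucharest', ROADS)
print(p.actions('Arad'))                   # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(p.result('Arad', 'Go(Zerind)'))      # Zerind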

Example –
8 puzzle problem
Initial State

Goal State

 States: a state description specifies the location of each of the eight tiles in one of the nine squares. For efficiency, it is useful to include the location of the blank.
 Actions: the blank moves left, right, up, or down.
 Transition model: given a state and action, this returns the resulting state. For example, if we apply Left to the start state, the resulting state has the 5 and the blank switched.
 Goal test: state matches the goal configuration shown in fig.
 Path cost: each step costs 1, so the path cost is just the length of the path.
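A minimal sketch of this formulation in Python; the tuple encoding (nine entries read row by row, with 0 for the blank) is an assumed representation:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}

def actions(state):
    # Moves that keep the blank on the 3x3 board.
    i = state.index(0)
    acts = []
    if i % 3 > 0: acts.append('Left')
    if i % 3 < 2: acts.append('Right')
    if i >= 3: acts.append('Up')
    if i <= 5: acts.append('Down')
    return acts

def result(state, action):
    # Transition model: swap the blank with the adjacent tile.
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL

# Each step costs 1, so the path cost is the number of moves made.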

State Space Search/Problem Space Search:
The state space representation forms the basis of most of the AI methods.
 Formulate a problem as a state space search by showing the legal problem states, the
legal operators, and the initial and goal states.
 A state is defined by the specification of the values of all attributes of interest in the world.
 An operator changes one state into another; it has a precondition, which is the value of certain attributes prior to the application of the operator, and a set of effects, which are the attributes altered by the operator.
 The initial state is where you start.
 The goal state is the partial description of the solution.

Formal Description of the problem:


1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which the problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal states).
4. Specify a set of rules that describe the actions (operations) available.

State-Space Problem Formulation:

Example: A problem is defined by four items:

1. initial state, e.g., "at Arad"
2. actions or successor function: S(x) = set of action-state pairs, e.g., S(Arad) = {<Arad → Zerind, Zerind>, ...}
3. goal test (or set of goal states), e.g., x = "at Bucharest", Checkmate(x)
4. path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions leading from the initial state to a goal state.

Example: 8-queens problem

1. Initial state: any arrangement of 0 to 8 queens on the board.
2. Operators: add a queen to any square.
3. Goal test: 8 queens on the board, none attacked.
4. Path cost: not applicable, or zero (because only the final state counts; search cost might be of interest).

Search strategies:
Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
Search space: the set of possible solutions a system may have.
Start state: the state from which the agent begins the search.
Goal test: a function which observes the current state and returns whether the goal state is achieved or not.

Properties of Search Algorithms

Which search algorithm one should use will generally depend on the problem domain. There are four important factors to consider:

1. Completeness – Is a solution guaranteed to be found if at least one solution exists?

2. Optimality – Is the solution found guaranteed to be the best (or lowest-cost) solution if there exists more than one solution?

3. Time complexity – The upper bound on the time required to find a solution, as a function of the complexity of the problem.

4. Space complexity – The upper bound on the storage space (memory) required at any point during the search, as a function of the complexity of the problem.

State Spaces versus Search Trees:


 State Space
o Set of valid states for a problem
o Linked by operators
o e.g., 20 valid states (cities) in the Romanian travel problem
 Search Tree
o Root node = initial state
o Child nodes = states that can be visited from the parent
o Note that the depth of the tree can be infinite, e.g., via repeated states
o Partial search tree: the portion of the tree that has been expanded so far
o Fringe: the leaves of the partial search tree, the candidates for expansion

Search trees are the data structure used to search the state space.

Searching
Many traditional search algorithms are used in AI applications. For complex problems, the traditional algorithms are unable to find the solution within practical time and space limits. Consequently, many special techniques have been developed using heuristic functions. The algorithms that use heuristic functions are called heuristic algorithms. Heuristic algorithms are not really intelligent; they appear to be intelligent because they achieve better performance.

Heuristic algorithms are more efficient because they take advantage of feedback from the data to direct the search path.
Uninformed search

Also called blind, exhaustive or brute-force search; it uses no information about the problem to guide the search and therefore may not be very efficient.

Informed Search:

Also called heuristic or intelligent search; it uses information about the problem to guide the search, usually by guessing the distance to a goal state, and is therefore more efficient, but the search may not always be possible.

Uninformed Search (Blind searches):

1. Breadth First Search:

 One simple search strategy is breadth-first search. In this strategy, the root node is expanded first, then all the nodes generated by the root node are expanded next, and then their successors, and so on.
 In general, all the nodes at depth d in the search tree are expanded before the nodes at depth d + 1.

BFS illustrated:

Step 1: Initially frontier contains only one node corresponding to the source state A.

Figure 1
Frontier: A

Step 2: A is removed from the fringe. The node is expanded, and its children B and C are generated. They are placed at the back of the fringe.

Figure 2
Frontier: B C

Step 3: Node B is removed from the fringe and is expanded. Its children D, E are generated and put at the back of the fringe.


Figure 3
Frontier: C D E

Step 4: Node C is removed from the fringe and is expanded. Its children D and G are added to the back of the fringe.
Figure 4
Frontier: D E D G

Step 5: Node D is removed from the fringe. Its children C and F are generated and added to the back of the fringe.

Figure 5
Frontier: E D G C F

Step 6: Node E is removed from the fringe. It has no children.

Figure 6
Frontier: D G C F

Step 7: D is expanded; B and F are put at the back of the fringe.
Figure 7
Frontier: G C F B F

Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the path A C G by following the parent pointers of the node corresponding to G. The algorithm terminates.

Breadth-first search is:

 One of the simplest search strategies.
 Complete. If there is a solution, BFS is guaranteed to find it.
 If there are multiple solutions, then a minimal solution will be found.
 The algorithm is optimal (i.e., admissible) if all operators have the same cost. Otherwise, breadth-first search finds a solution with the shortest path length.
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimality: yes

b – branching factor (maximum number of successors of any node), d – depth of the shallowest goal node, m – maximum length of any path in the search space.

Advantages:

 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.

Disadvantages:
 Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node.
 The breadth-first search algorithm cannot be used effectively unless the search space is quite small.
Applications of the Breadth-First Search Algorithm
GPS navigation systems: Breadth-first search is one of the best algorithms for finding neighboring locations using the GPS system.
Broadcasting: Networking makes use of packets for communication. These packets follow a traversal method to reach the various networking nodes. One of the most commonly used traversal methods is breadth-first search; it is used to communicate broadcast packets across all the nodes in a network.
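A minimal runnable BFS sketch follows. The adjacency list is assumed from the A-G trace above (the original figures are not reproduced), and unlike the hand trace it also skips already-seen states:

from collections import deque

GRAPH = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'],
         'D': ['C', 'F'], 'E': [], 'F': [], 'G': []}   # assumed from the figures

def bfs(graph, start, goal):
    frontier = deque([start])        # FIFO queue of nodes to expand
    parent = {start: None}           # parent pointers for path recovery
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []                # follow parent pointers back (Step 8)
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in graph[node]:
            if child not in parent:  # skip states already generated
                parent[child] = node
                frontier.append(child)
    return None

print(bfs(GRAPH, 'A', 'G'))          # ['A', 'C', 'G']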

Depth- First- Search.


We may sometimes search the goal along the largest depth of the tree, and move up only when
further traversal along the depth is not possible. We then attempt to find alternative offspring
of the parent of the node (state) last visited. If we visit the nodes of a tree using the above
principles to search the goal, the traversal made is called depth first traversal and consequently
the search strategy is called depth first search.

DFS illustrated:

A State Space Graph

Step 1: Initially fringe contains only the node for A.

Figure 1
FRINGE: A

Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.

Figure 2
FRINGE: B C

Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.

Figure 3

FRINGE: D E C

Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.

Figure 4
FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.

Figure 5

FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.

Figure 6


The solution path A-B-D-C-G is returned and the algorithm terminates.

Depth-first search:

1. takes exponential time.
2. If N is the maximum depth of a node in the search space, in the worst case the algorithm will take time O(b^N).
3. The space taken is linear in the depth of the search tree, O(bN).

Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the search tree has infinite depth, the algorithm may not terminate. This can happen if the search space is infinite. It can also happen if the search space contains cycles. The latter case can be handled by checking for cycles in the algorithm. Thus depth-first search is not complete.
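A minimal recursive DFS sketch with the cycle check mentioned above, over the same assumed graph as the BFS sketch:

GRAPH = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'],
         'D': ['C', 'F'], 'E': [], 'F': [], 'G': []}   # same assumed graph

def dfs(graph, node, goal, visited=None):
    # Depth-first search with cycle checking; returns a path or None.
    if visited is None:
        visited = set()
    if node == goal:
        return [node]
    visited.add(node)
    for child in graph[node]:
        if child not in visited:
            path = dfs(graph, child, goal, visited)
            if path:
                return [node] + path
    return None

print(dfs(GRAPH, 'A', 'G'))   # ['A', 'B', 'D', 'C', 'G'], as in the trace above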

Iterative Deepening DFS

 The iterative deepening algorithm is a combination of the DFS and BFS algorithms.
 This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
 The algorithm performs depth-first search up to a certain "depth limit", and keeps increasing the depth limit after each iteration until the goal node is found.

Advantages:
 It combines the benefits of the BFS and DFS algorithms in terms of fast search and memory efficiency.
Disadvantages:
 The main drawback of IDDFS is that it repeats all the work of the previous phase.

Iterative deepening search L=0

Iterative deepening search L=1

Iterative deepening search L=2

Iterative Deepening Search L=3

M is the goal node. So we stop there.

Complete: Yes

Time: O(b^d)

Space: O(bd)

Optimal: Yes, if the step cost = 1 or an increasing function of depth.

Conclusion:

We can conclude that IDS is a hybrid search strategy between BFS and DFS, inheriting their advantages.
IDS is faster than BFS and DFS.
It is said that "IDS is the preferred uninformed search method when there is a large search space and the depth of the solution is not known."
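A minimal sketch of iterative deepening over the same assumed graph: depth-limited DFS is restarted with limits 0, 1, 2, ... until the goal is reached.

GRAPH = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'],
         'D': ['C', 'F'], 'E': [], 'F': [], 'G': []}   # same assumed graph

def depth_limited(graph, node, goal, limit):
    # DFS that refuses to go deeper than `limit`.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph[node]:
        path = depth_limited(graph, child, goal, limit - 1)
        if path:
            return [node] + path
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):   # limits 0, 1, 2, ...
        path = depth_limited(graph, start, goal, limit)
        if path:
            return path
    return None

print(iterative_deepening(GRAPH, 'A', 'G'))   # ['A', 'C', 'G']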

Informed search/Heuristic search

A heuristic is a method that:

 might not always find the best solution, but is guaranteed to find a good solution in reasonable time. By sacrificing completeness it increases efficiency.
 is useful in solving tough problems which
o could not be solved in any other way, or
o whose solutions would take an infinite or very long time to compute.

Calculating the Heuristic Value:

1. Euclidean distance: used to calculate straight-line distance.

2. Manhattan distance: used to calculate vertical/horizontal distance. For example, in the 8-puzzle problem:

Source state:

1 3 2
6 5 4
8 7

Destination state:

1 2 3
4 5 6
7 8

The Manhattan distance is the sum of the number of moves required to move each number from the source state to the destination state.

Number in 8-puzzle:                1  2  3  4  5  6  7  8
No. of moves to reach destination: 0  2  1  2  0  2  2  0

3. Number of misplaced tiles for the 8-puzzle problem:

Source state:

1 3 2
6 5 4
8 7

Destination state:

1 2 3
4 5 6
7 8

Here we just count the number of tiles that have to be moved to reach the goal state. Tiles 1, 5, 8 need not be moved; tiles 2, 3, 4, 6, 7 must be moved, so the heuristic value is 5 (because 5 tiles have to be moved).
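Both heuristics are easy to state in code. A minimal sketch, assuming the same tuple encoding as the earlier 8-puzzle sketch (nine entries read row by row, 0 for the blank):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    # Count tiles (ignoring the blank) that are not in their goal square.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    # Sum over tiles of |row difference| + |column difference|.
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = goal.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# An assumed encoding of a scrambled board:
s = (1, 3, 2, 6, 5, 4, 0, 8, 7)
print(misplaced_tiles(s))   # 5 tiles out of place
print(manhattan(s))         # 8 for this assumed board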

Hill Climbing Algorithm

 The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.
 It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
 Hill climbing is mostly used when a good heuristic is available.
 In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.

The idea behind hill climbing is as follows.

1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 through 4 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
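A minimal sketch of these steps on a toy one-dimensional landscape; the objective function and neighbor relation are illustrative assumptions:

import random

def hill_climb(value, neighbors, start):
    current = start
    while True:
        candidates = neighbors(current)            # step 2
        best = max(candidates, key=value)          # step 3
        if value(best) <= value(current):          # step 4: no uphill move left
            return current                         # step 5
        current = best

f = lambda x: -(x - 7) ** 2                        # assumed landscape, peak at x = 7
nbrs = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
start = random.randint(0, 20)                      # step 1: random starting point
print(hill_climb(f, nbrs, start))                  # 7 on this landscape

On a landscape with several peaks, the same loop would stop at whichever local maximum it climbs first, which is exactly the problem the following sections address.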

Different regions in the state space landscape:

Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another state which is
higher than it.

Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest value of objective
function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Algorithm for Hill Climbing

Problems in Hill Climbing Algorithm:

Simulated annealing search

A hill-climbing algorithm that never makes "downhill" moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk – that is, moving to a successor chosen uniformly at random from the set of successors – is complete, but extremely inefficient. Simulated annealing is an algorithm that combines hill climbing with a random walk in a way that yields both efficiency and completeness.

The simulated annealing algorithm is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1. The probability decreases exponentially with the "badness" of the move – the amount ΔE by which the evaluation is worsened. The probability also decreases as the "temperature" T goes down: bad moves are more likely to be allowed at the start, when the temperature is high, and they become more unlikely as T decreases. One can prove that if the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.
Simulated annealing was first used extensively to solve VLSI layout problems. It has been applied widely to factory scheduling and other large-scale optimization tasks.
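A minimal sketch of the acceptance rule described above, reusing the toy landscape from the hill-climbing sketch; the geometric cooling schedule is an illustrative assumption:

import math, random

def simulated_annealing(value, neighbors, start, t0=10.0, cooling=0.95, steps=500):
    current, T = start, t0
    for _ in range(steps):
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)    # ΔE (positive = improvement)
        # Always accept improvements; accept bad moves with prob e^(ΔE/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        T *= cooling                           # assumed cooling schedule
    return current

f = lambda x: -(x - 7) ** 2
nbrs = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
print(simulated_annealing(f, nbrs, random.randint(0, 20)))   # usually 7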

Best First Search:

 A combination of depth-first and breadth-first searches.
 Depth-first is good because a solution can be found without computing all nodes, and breadth-first is good because it does not get trapped in dead ends.
 Best-first search allows us to switch between paths, thus gaining the benefit of both approaches. At each step the most promising node is chosen. If one of the nodes chosen generates nodes that are less promising, it is possible to choose another at the same level, and in effect the search changes from depth to breadth. If on analysis these are no better, the previously unexpanded node and branch are not forgotten, and the search method reverts to them.

OPEN is a priority queue of nodes that have been evaluated by the heuristic function but which have not yet been expanded into successors. The most promising nodes are at the front.

CLOSED contains nodes that have already been generated; these nodes must be stored because a graph is being used in preference to a tree.

Algorithm:

1. Start with OPEN holding the initial state.

2. Until a goal is found or there are no nodes left on OPEN, do:

 Pick the best node on OPEN.
 Generate its successors.
 For each successor do:
• If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
• If it has been generated before, change the parent if this new path is better, and in that case update the cost of getting to any successor nodes.

3. If a goal is found or no more nodes are left in OPEN, quit; else return to 2.
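A simplified sketch of this loop, with heapq providing the OPEN priority queue ordered by h(n); the toy graph and heuristic values are illustrative assumptions, and for brevity it does not re-parent nodes as step 2 describes:

import heapq

GRAPH = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}   # assumed example
H = {'S': 5, 'A': 2, 'B': 4, 'G': 0}                         # assumed h-values

def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]       # OPEN, ordered by h(n)
    closed = set()                                 # CLOSED
    while open_list:
        _, node, path = heapq.heappop(open_list)   # pick the best node on OPEN
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph[node]:                  # generate successors
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

print(greedy_best_first(GRAPH, H, 'S', 'G'))       # ['S', 'A', 'G']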

Example:

1. It is not optimal.

2. It is incomplete because it can start down an infinite path and never return to try other possibilities.

3. The worst-case time complexity for greedy search is O(b^m), where m is the maximum depth of the search space.

4. Because greedy search retains all nodes in memory, its space complexity is the same as its time complexity.
A* Algorithm

The Best First algorithm is a simplified form of the A* algorithm.

The A* search algorithm (pronounced "A-star") is a tree search algorithm that finds a path from a given initial node to a given goal node (or one passing a given goal test). It employs a "heuristic estimate" which ranks each node by an estimate of the best route that goes through that node. It visits the nodes in order of this heuristic estimate.

Similar to greedy best-first search but is more accurate because A* takes into account the
nodes that have already been traversed.

For A* we note that f = g + h, where:

g is a measure of the distance/cost to go from the initial node to the current node;

h is an estimate of the distance/cost from the current node to a solution.

Thus f is an estimate of how long it takes to go from the initial node to the solution.

Algorithm:

1. Initialize: Set OPEN = (S); CLOSED = ( ); g(s) = 0, f(s) = h(s).

2. Fail: If OPEN = ( ), terminate and fail.

3. Select: Select the minimum-cost state, n, from OPEN; save n in CLOSED.

4. Terminate: If n ∈ G, terminate with success and return f(n).

5. Expand: For each successor, m, of n:

a) If m ∉ [OPEN ∪ CLOSED]:
Set g(m) = g(n) + c(n, m)
Set f(m) = g(m) + h(m)
Insert m in OPEN

b) If m ∈ [OPEN ∪ CLOSED]:
Set g(m) = min{g(m), g(n) + c(n, m)}
Set f(m) = g(m) + h(m)
If f(m) has decreased and m ∈ CLOSED, move m to OPEN.
Description:

 A* begins at a selected node. Applied to this node is the "cost" of entering this node (usually zero for the initial node). A* then estimates the distance to the goal node from the current node. This estimate and the cost are added together to form the heuristic, which is assigned to the path leading to this node. The node is then added to a priority queue, often called "open".
 The algorithm then removes the next node from the priority queue (because of the way
a priority queue works, the node removed will have the lowest heuristic). If the queue is
empty, there is no path from the initial node to the goal node and the algorithm stops. If
the node is the goal node, A* constructs and outputs the successful path and stops.
 If the node is not the goal node, new nodes are created for all admissible adjoining nodes; the exact way of doing this depends on the problem at hand. For each successor node, A* calculates the "cost" of entering the node and saves it with the node. This cost is calculated from the cumulative sum of costs stored with its ancestors, plus the cost of the operation which reached this new node.
 The algorithm also maintains a 'closed' list of nodes whose adjoining nodes have been
checked. If a newly generated node is already in this list with an equal or lower cost, no
further processing is done on that node or with the path associated with it. If a node in
the closed list matches the new one, but has been stored with a higher cost, it is
removed from the closed list, and processing continues on the new node.

 Next, an estimate of the new node's distance to the goal is added to the cost to form the
heuristic for that node. This is then added to the 'open' priority queue, unless an identical node
is found there.
 Once the above three steps have been repeated for each new adjoining node, the original node
taken from the priority queue is added to the 'closed' list. The next node is then popped from
the priority queue and the process is repeated
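A minimal sketch of A* with f = g + h, again using heapq for the open queue; the toy graph, step costs and heuristic values below are illustrative assumptions (not the Romania data):

import heapq

GRAPH = {'S': {'A': 1, 'B': 4}, 'A': {'B': 2, 'G': 12}, 'B': {'G': 3}, 'G': {}}
H = {'S': 7, 'A': 6, 'B': 2, 'G': 0}          # assumed admissible estimates

def a_star(graph, h, start, goal):
    # Open queue holds (f, g, node, path); best_g records the cheapest g
    # found so far per node, standing in for an explicit closed list.
    open_list = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for child, cost in graph[node].items():
            g2 = g + cost                     # g(m) = g(n) + c(n, m)
            if child not in best_g or g2 < best_g[child]:
                best_g[child] = g2            # cheaper path found: (re)open child
                heapq.heappush(open_list,
                               (g2 + h[child], g2, child, path + [child]))
    return None, float('inf')

print(a_star(GRAPH, H, 'S', 'G'))             # (['S', 'A', 'B', 'G'], 6)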

The heuristic costs from each city to Bucharest:

A* search properties:

 The algorithm A* is admissible. This means that, provided a solution exists, the first solution found by A* is an optimal solution. A* is admissible under the following condition:

 Heuristic function: for every node n, h(n) ≤ h*(n).

 A* is also complete.

 A* is optimally efficient for a given heuristic.

 A* is much more efficient than uninformed search.

Constraint Satisfaction Problems


https://www.cnblogs.com/RDaneelOlivaw/p/8072603.html

Sometimes a problem is not embedded in a long set of action sequences but requires picking the
best option from available choices. A good general-purpose problem solving technique is to list
the constraints of a situation (either negative constraints, like limitations, or positive elements
that you want in the final solution). Then pick the choice that satisfies most of the constraints.

Formally speaking, a constraint satisfaction problem (or CSP) is defined by a set of variables, X1, X2, ..., Xn, and a set of constraints, C1, C2, ..., Cm. Each variable Xi has a nonempty domain Di of possible values. Each constraint Ci involves some subset of the variables and specifies the allowable combinations of values for that subset. A state of the problem is defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}. An assignment that does not violate any constraints is called a consistent or legal assignment. A complete assignment is one in which every variable is mentioned, and a solution to a CSP is a complete assignment that satisfies all the constraints. Some CSPs also require a solution that maximizes an objective function.

A CSP can be given an incremental formulation as a standard search problem as follows:

1. Initial state: the empty assignment {}, in which all variables are unassigned.

2. Successor function: a value can be assigned to any unassigned variable, provided that it does not conflict with previously assigned variables.

3. Goal test: the current assignment is complete.

4. Path cost: a constant cost for every step.

Examples:

1. The best-known category of continuous-domain CSPs is that of linear programming problems, where constraints must be linear inequalities forming a convex region.

2. Cryptarithmetic puzzles.

Example: The map coloring problem.

The task is to color each region red, green, or blue in such a way that no neighboring regions have the same color.
To formulate this as a CSP, we define the variables to be the regions: WA, NT, Q, NSW, V, SA, and T. The domain of each variable is the set {red, green, blue}. The constraints require neighboring regions to have distinct colors: for example, the allowable combinations for WA and NT are the pairs {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}. (The constraint can also be represented as the inequality WA ≠ NT.) There are many possible solutions, such as {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.
(Map of Australia showing each of its states and territories.)
Constraint graph: A CSP is usually represented as an undirected graph, called a constraint graph, where the nodes are the variables and the edges are the binary constraints.

(The map-coloring problem represented as a constraint graph.)

A CSP can be viewed as a standard search problem as follows:

> Initial state: the empty assignment {}, in which all variables are unassigned.
> Successor function: a value can be assigned to any unassigned variable, provided that it does not conflict with previously assigned variables.
> Goal test: the current assignment is complete.
> Path cost: a constant cost (e.g., 1) for every step.
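A minimal backtracking sketch for the map-coloring CSP above, following this incremental formulation (assign one variable at a time, trying only non-conflicting values). The adjacency list encodes the neighbor constraints of the Australia map:

NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
DOMAIN = ['red', 'green', 'blue']

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):        # goal test: assignment complete
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in DOMAIN:
        # Successor function: the value must not conflict with any neighbor.
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None

print(backtrack({}))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red',
#  'NSW': 'green', 'V': 'red', 'T': 'red'} -- the solution quoted above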
