AI-UNIT-1 PPT

ARTIFICIAL INTELLIGENCE

P.Dastagiri Reddy
Assistant Professor
SOCSE-1 Department
UNIT-1

UNIT - I Introduction: AI problems, Agents and Environments,


Structure of Agents, Problem Solving Agents
Basic Search Strategies: Problem Spaces, Uninformed Search
(Breadth-First, Depth-First Search, Depth-first with Iterative
Deepening), Heuristic Search (Hill Climbing, Generic Best-First, A*),
Constraint Satisfaction (Backtracking, Local Search)
Introduction:
• AI is a universal field of computer science.
• It is one of the most fascinating technologies, with great scope in the future.
• It aims to make a machine work like a human.

• Artificial --------- "man made"

• Intelligence -------- "thinking power": the ability to understand, think and learn.
AI definition:
• AI is a branch of computer science by which we can create an intelligent machine which can behave like a human, think like a human, and make decisions.
• With AI, you do not need to pre-program a machine to do some work.
• Instead, you can program a machine to work with its own intelligence.

• There are two ideas in the definition:

Intelligence [ability to understand, think & learn]

Artificial device [non-natural]

What is AI
• Artificial Intelligence (AI) is a branch of Science which
deals with helping machines find solutions to complex
problems in a more human-like fashion.
• This generally involves borrowing characteristics from
human intelligence, and applying them as algorithms in
a computer friendly way.
• A more or less flexible or efficient approach can be
taken depending on the requirements established,
which influences how artificial the intelligent behavior
appears
Structured Programming Vs AI
• Structured programming: a program without AI can answer only the "specific" questions it is meant to answer. If you modify a structured program, its entire structure changes.
• Artificial intelligence: an AI program can answer any question belonging to its "generic" type. AI programs are all about modification; they keep absorbing the information provided to them as stimuli for future reference, like the human brain.
Artificial intelligence can be viewed
from a variety of perspectives.
• From the perspective of intelligence, artificial intelligence is making
machines "intelligent" -- acting as we would expect people to act.
• The inability to distinguish computer responses from human
responses is called the Turing test.
• Intelligence requires knowledge.
• Expert problem solving - restricting the domain so that significant
relevant knowledge can be included.
• From a business perspective AI is a set of very powerful tools, and
methodologies for using those tools to solve business problems.
• From a programming perspective, AI includes the study of symbolic
programming, problem solving, and search.
• Typically AI programs focus on symbols rather than numeric
processing.
• Problem solving - achieve goals.
• Search - seldom access a solution directly. Search may include a
variety of techniques.
Why AI:
With the help of AI,
• We can create amazing software and devices which can solve
real-world problems accurately and easily.
• We can create our personal virtual assistants.
• We can build robots which can work in environments where
human survival is at risk.
• AI opens the path for new technologies, new devices and new
opportunities.
Goals:
• Replicate human intelligence
• Solve knowledge-intensive tasks
• An intelligent connection of perception and action.
• Building a machine which can perform tasks that require human intelligence
Providing theorem/algorithm
Plan some surgical operation
Playing chess
Driving car in traffic
• Creating some system which can exhibit intelligent behaviours.
History of AI
• 1943: Early Beginnings
Boolean Circuit model of Brains

• 1950: Turing
Turing’s computing Machinery and Intelligence

• 1956: Birth of AI
Dartmouth Conference: Artificial Intelligence name Adopted

• 1955-1965: Great Enthusiasm


GPS Solver [General Problem Solver]
History of AI
• 1966: Reality Dawns
Realization that many AI problems are intractable
• 1969-1985: Adding Domain Knowledge
Development of knowledge based systems
Success of rule based expert systems
• 1986: Rise of Machine Learning
Neural Networks return to popularity
• 1990: Role of Uncertainty
Bayesian networks as a knowledge representation framework
• 1995: AI as Science [integration of learning, reasoning and knowledge
representation; AI methods used in vision, language and data mining]
Applications of AI
• Gaming − AI plays important role for machine to think of large number
of possible positions based on deep knowledge in strategic games.

• Natural Language Processing − Interact with the computer that


understands natural language spoken by humans

• Expert Systems − Machine or software provide explanation and advice


to the users.
Applications of AI
• Vision Systems − Systems understand, explain, and describe visual input
on the computer
• Speech Recognition − Some AI-based speech recognition systems
have the ability to hear a person speak, express the input as sentences
and understand its meaning.
• Handwriting Recognition − Handwriting recognition software reads
text written on paper, recognizes the shapes of the letters and
converts them into editable text.
• Intelligent Robots − Robots are able to perform the instructions given by a
human.
• Thinking Humanly: The Cognitive modeling Approach
• To say that a given program thinks like a human, we must have some
way of determining how humans think, i.e., we need to get inside the
actual workings of human minds.
• There are 3 ways to do this:
• Through introspection
Trying to catch our own thoughts as they go by
• Through psychological experiments
Once we have a sufficiently precise theory of the mind, it becomes possible to
express the theory as a computer program.
If the program's input/output and timing behaviour matches human behaviour,
that is evidence that it may be thinking like a human.
• Brain Imaging
Observing the brain in action
• Acting Humanly: The Turing test Approach

• The Turing Test, proposed by Alan Turing (1950), was designed to
provide a satisfactory operational definition of intelligence.

• Here the computer is asked questions by a human interrogator.

• The computer passes the test if the human interrogator, after posing
some written questions, cannot tell whether the written responses
come from a person or from a computer.
Acting Humanly
• The computer would need to possess the following capabilities:
• Natural language processing: Enable it to communicate
successfully in English.
• Knowledge representation: Store what it knows or hears.
• Automated reasoning: Use the stored information to answer
questions and to draw new conclusions.
• Machine learning: To adapt to new circumstances and to detect
and extrapolate patterns.
• Computer vision: To perceive objects.
• Robotics: To manipulate objects and move about.
AI is composed of:
• Reasoning: the set of processes that enable us to think logically
• Learning: the activity of gaining knowledge
• Perception: the process of acquiring, interpreting, selecting and
organizing sensory information
• Problem solving: the process of working through the details of a
problem to reach the solution
• Linguistic intelligence: one's ability to use, comprehend, speak and
write verbal and written language
Importance of AI
• Game Playing
• Speech Recognition
• Understanding Natural Language
• Computer Vision(3D)
• Expert Systems(medical)
• Heuristic Classification(fraud detection)
The applications of AI
Consumer Marketing
• Have you ever used any kind of credit/ATM/store card while shopping?
• if so, you have very likely been “input” to an AI algorithm
• All of this information is recorded digitally
• Companies like Nielsen gather this information weekly and search for
patterns
• – general changes in consumer behavior
• – tracking responses to new products
• – identifying customer segments: targeted marketing, e.g., they find
out that consumers with sports cars who buy textbooks respond well
to offers of new credit cards.
• Algorithms (“data mining”) search data for patterns based on
mathematical theories of learning
Applications of AI
Identification Technologies
• ID cards e.g., ATM cards
• can be a nuisance and security risk: cards can be lost, stolen, passwords forgotten, etc
• Biometric Identification, walk up to a locked door
• – Camera
• – Fingerprint device
• – Microphone
• – Computer uses biometric signature for identification
• – Face, eyes, fingerprints, voice pattern
• – This works by comparing data from the person at the door with a stored library
• – Learning algorithms can learn the matching process by analyzing a large library database off-line, and can improve performance
Applications of AI
Intrusion Detection
• Computer security
• - we each have specific patterns of computer use times of day,
lengths of sessions, command used, sequence of commands,
etc
• – would like to learn the "signature" of each authorized user
• – can identify non-authorized users
• How can the program automatically identify users?
• – record user’s commands and time intervals
• – characterize the patterns for each user
• – model the variability in these patterns
• – classify (online) any new user by similarity to stored patterns
Applications of AI
Machine Translation
• Language problems in international business – e.g., at a
meeting of Japanese, Korean, Vietnamese and Swedish
investors, no common language
• – If you are shipping your software manuals to 127 countries,
one solution is to hire translators to translate them
• – it would be much cheaper if a machine could do this!
• How hard is automated translation
• – very difficult!
• – e.g., English to Russian
• – not only must the words be translated, but their meaning
also!
Intelligent Agent:
• Agents in AI:
The study of rational agents and their environments: agents sense the
environment through sensors and act on their environment through
actuators.
Ex: automatic self-driving car
AI agent can have mental properties like:
Knowledge,
Belief and intention etc…
Intelligent Agent’s:
Must Sense
Must Act
Must be Autonomous(to some extent)
Must be rational

Fig 2.1: Agents and Environments


What is an agent?
• An agent can be anything that perceives its environment through sensors and
acts upon that environment through actuators.

• Perceiving ----------- thinking ----------- acting

• Generally, agents can be of 3 types:

Human agent
Robotic agent
Software agent ---- percepts are keystrokes, file contents, etc. (e.g., a running Python program)
• Sensor: a device which detects changes in the environment and
sends the information to other electronic devices.

• Actuator: a part or component of a machine that converts energy
into motion.

• Effector: a device which affects the environment.

Different types of AI agents:
• Agents can be grouped into 5 classes based on their degree of perceived
intelligence and capability:
1. Simple reflex agent
2. Model based reflex agent
3. Goal based agent
4. Utility based agent
5. Learning agent
• Simple reflex agent: works only on the principle of the current
percept, using condition-action rules.

• Model-based reflex agent: works by finding a rule whose condition
matches the current situation. It can handle partially observable
environments.
• Goal-based agent: mainly focuses on reaching the set goal; the
decisions taken by the agent are based on how far it currently is from
the goal (desired state).
• Utility-based agent: most similar to the goal-based agent, but
provides an extra component of utility measurement, which makes it
different by providing a measure of success at a given state.
It is useful when there are multiple possible alternatives and the agent
has to choose.
• Learning agent: learns from past experience.
It starts with basic knowledge and is able to act and adapt automatically
through learning.
• A learning agent mainly has four conceptual components:
1. Learning element
2. Critic
3. Performance element
4. Problem generator
• Example: the vacuum-cleaner world shown in the figure.
This particular world has just two locations: squares A and B. The vacuum agent
perceives which square it is in and whether there is dirt in the square. It can choose
to move left, move right, suck up the dirt, or do nothing. One very simple agent
function is the following: if the current square is dirty, then suck; otherwise move
to the other square. A partial tabulation of this agent function is shown below.

Percept sequence → Action
[A, Clean] → Right
[A, Dirty] → Suck
[B, Clean] → Left
[B, Dirty] → Suck
[A, Clean], [A, Clean] → Right
[A, Clean], [A, Dirty] → Suck
…

Fig: 1. Vacuum cleaner  2. Agent function


• The REFLEX-VACUUM-AGENT program is invoked for each new percept
(location, status) and returns an action each time.
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
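The pseudocode above can be written directly in Python; this is a minimal sketch of the two-square vacuum world (the function name is ours):

```python
def reflex_vacuum_agent(location, status):
    """Simple reflex agent for the two-square vacuum world.

    Acts only on the current percept (location, status),
    following the condition-action rules above."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"
```

This matches the partial tabulation: `reflex_vacuum_agent("A", "Dirty")` returns "Suck", while `reflex_vacuum_agent("A", "Clean")` returns "Right".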

• Task environments:
• We must think about task environments, which are essentially
the "problems" to which rational agents are the "solutions."

• Specifying the task environment (PEAS)


The rationality of the simple vacuum-cleaner agent, needs
specification of
• The performance measure
• The environment
• The agent's actuators and sensors.
PEAS:
All these are grouped together under the heading of the task environment. We call
this the PEAS (Performance, Environment, Actuators, and Sensors)
description. In designing an agent, the first step must always be to specify the task
environment as fully as possible.
Agent type: Taxi driver
• Performance measure: safe, fast, legal, comfortable trip; maximize profits
• Environment: roads, other traffic, pedestrians, customers
• Actuators: steering, accelerator, brake, signal, horn, display
• Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer
• Properties of task environments:

• Fully observable vs. partially observable


• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. Multiagent
Problem Solving Using Search

STATE
SPACE
SEARCH
Problem Solving by Search
• Problem searching:
In general, searching refers to finding the information one needs.
Searching is the most commonly used technique of problem solving in AI.
• Generally to build a system, to solve a problem what we need:
1. Define (Initial situation)
2. Analyzing (techniques)
3. Isolate and represent
4. Choose the best solution
5. Implementation
This is also called the problem space, which defines the various
components that go into creating a resolution for a problem; the
above 5 points are the stages of the problem space.
• For example problem solving in games, “SUDOKU PUZZLE”
It is done by building an AI system.
To do this first we define the problem statement and generating the
solution and keeping the condition in mind.

• Some classic AI problems that are solved with these search techniques are:

1. Chess
2. Travelling salesman problem
3. N-Queens problem
Flow chart:
Problem-Solving Agents: Introduction
• A problem-solving agent is one kind of goal-based agent.
• Problem-solving agents use atomic representations: states of the world
are considered as wholes, with no internal structure visible to the
problem-solving algorithms.

The major difference between a general intelligent agent and a problem-solving agent is:
• An intelligent agent maximizes its performance measure.
• A problem-solving agent finds a sequence of actions.
Ex: shortest-route path algorithm
Functionality of Problem-Solving Agents:
• Goal Formulation:
Problem-solving is about having a goal we want to reach (Ex: we want to
travel from A ------ E)

• Problem Formulation:
A problem formulation is about deciding what actions and states to
consider.
• Search:
The process of finding a sequence of actions that reaches the goal is called search.
• Solution and execute:
Once the solution is found from different aspects through search
recommendation , that sequence of actions will help to carry out the
execution.
• Problem-solving agent now simply designed as:
Formulate------search------execute
A problem can be defined formally by 4 components:
1. Initial state
2. State description
3. Goal test
4. Path cost
Goal Directed Agent
Definitions
Search Problem
Searching Process
State Space
Pegs and Disks
8 Queens
8 Queens Solution
N Queens Problem Formulations 1, 2 and 3
Other Examples:
Different types of search strategies:
• Search strategies can be :
Uninformed search strategies
Informed search strategies
• Uninformed search strategies: These are also called as Blind search. These
search strategies use only the information available in the problem
definition. (This includes Breadth-first search, Uniform cost search, Depth-
first search, Iterative deepening depth-first search and Bidirectional
Search)

Informed search strategies: These are also called as Heuristic search. These
search strategies use other domain specific information. (This includes Hill
climbing, Best-first, Greedy Search and A* Search)
Search strategies classification:
Difference between uninformed and informed
search:
Search Algorithm Terminologies:

• Search
 A search problem may have three factors: search space, start state, goal state
• Search tree
• Actions
• Transition model
• Path cost
• solution
• Optimal solution
Properties of Search Algorithms:

• Following are the four essential properties of search algorithms to compare the efficiency
of these algorithms:

• Completeness: A search algorithm is said to be complete if it guarantees to return a
solution whenever at least one solution exists for any input.

• Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all solutions, then it is said to be an optimal solution.

• Time Complexity: Time complexity is a measure of time for an algorithm to complete its
task.

• Space Complexity: the maximum storage space required at any point during the
search; it grows with the complexity of the problem.
Definition of search problem:
Exploring the state space:
Search Trees:
Breadth-First search (BFS):
• It is the most common search strategy for traversing a tree or graph.
• The algorithm searches breadth-wise in a tree/graph, so it is called
breadth-first search.
• BFS starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to the nodes of
the next level.
• BFS is implemented using a FIFO queue data structure.
Example:
• Let us see how the BFS is following the FIFO order (FIRST IN FIRST OUT):
• A -----start node (check the possibilities for node A) i.e A is first entered into queue
A
BC
CDE
DEFG
EFGH
FGHI
GHIJ
HIJ
IJK
JK
K -----GOAL NODE
• Time complexity: given by the number of nodes traversed in BFS
until the shallowest solution depth.
• Let d be the depth of the shallowest solution and b the branching
factor (successors per node); then
T(b) = 1 + b + b^2 + … + b^d = O(b^d)
Space complexity: given by the memory required to store the nodes
of the frontier:
S(b) = O(b^d)
Completeness: BFS is complete (when the shallowest goal is at a finite depth)
Optimal: yes, when all step costs are equal
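As a sketch, BFS with a FIFO queue can be implemented as follows; the example graph is our own assumption, mirroring the A…K trace above:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level via a FIFO queue.
    `graph` maps each node to its list of successors.
    Returns the path from start to goal, or None if no path exists."""
    frontier = deque([[start]])   # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Assumed example graph, roughly following the trace above
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "E": ["H"], "F": ["I"], "G": ["J"], "I": ["K"]}
```

Here `bfs(graph, "A", "K")` visits A, then B and C, then D, E, F, G, and so on level by level, returning the path ["A", "C", "F", "I", "K"].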
Example BFS:
• Find the route from S to E using BFS.
• Expand all possibilities from each and every node.
• After expanding all possibilities, the tree representation below has
start node S and final node E.
• Based on the path cost, the path should be SBE (or) SCE.
• Let us see how BFS follows FIFO order (FIRST IN FIRST OUT):
S is the start node (check the possibilities for node S), so S is first
entered into the queue.
S
ABC
BCD
CDDE
DDEE
DEEE
EEEE
EEE
EE
Depth-First search
• DFS may be implemented as a recursive or non-recursive algorithm.
• It is a recursive algorithm for traversing a tree/graph.
• It starts from the root node and follows each path to its deepest
node before moving to the next path. That is the reason it is called DFS.
• So it travels in a top-to-bottom direction, unlike BFS.
• DFS uses a stack data structure (LIFO) for its implementation.
• Otherwise the process is similar to the BFS algorithm.
• Advantages:
1. It requires less memory, as it only needs to store the stack of
nodes on the path from the root node to the current node.
2. It can take less time to reach the goal node than the BFS algorithm.

Disadvantages:
1. Many states may re-occur, and there is no guarantee of finding
the solution.
2. DFS goes deep down in the search, and sometimes it may follow
an infinite path.
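A non-recursive DFS sketch using an explicit stack (LIFO); the graph-as-dict representation and the example edges are our assumptions, chosen to match the S-A-B-D-E-C-G walkthrough:

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit LIFO stack.
    Follows one path to its deepest node before backtracking."""
    stack = [[start]]             # stack of paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push successors in reverse so the first listed successor
        # ends up on top of the stack and is explored first
        for succ in reversed(graph.get(node, [])):
            if succ not in visited:
                stack.append(path + [succ])
    return None

# Assumed graph matching the backtracking walkthrough
graph = {"S": ["A", "H"], "A": ["B", "C"], "B": ["D", "E"], "C": ["G"]}
```

`dfs(graph, "S", "G")` visits S, A, B, D, E, C, G in that order and returns the path ["S", "A", "C", "G"].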
Example: DFS with Backtracking
• DFS starts searching from the root node 'S'; visited nodes are pushed
onto the stack.
• S is visited (successor nodes A and H).
• Traverse S → A (visited; successor nodes are B & C).
• Traverse A → B (visited; successor nodes are D & E).
• Traverse B → D (visited; no successor nodes). Since D has no
unexpanded nodes, it is popped from the stack and the search backtracks.
• Back at "B" (already visited), check its successor "E" (visited; no
successor nodes). E is popped and the search backtracks again; B is
also popped once both its successors are done.
• Back at "A", check for successor nodes that are not yet visited:
"A" has successor "C" (visited; successor node G).
• Traverse C → G (visited; no successor nodes).
• "G" is the goal node, and the search stops when the goal is reached.
• The DFS is not finished until the stack is empty: the remaining nodes
G, C, A, S are popped one by one. Before popping a node, check
whether all its successors have been visited; if so, pop it.
Output: S A B D E C G
• Completeness: yes, complete within a finite state space.
• Time complexity: T(n) = 1 + n + n^2 + … + n^m
Hence T(n) = O(n^m)
m ----- maximum depth of any node; n ----- branching factor
• Space complexity: O(b·m)
b ------ branching factor, m ------ maximum depth
• Optimal: non-optimal (it may follow infinite or very deep paths first)
Example 2: DFS
• Start node is "A".
• A → B, S (possible nodes from A).
• Traverse A → B (no further possible nodes, so B is popped).
• Traverse A → S (C, G possible nodes from S).
• Traverse S → C (D, E, F possible nodes from C).
• Traverse C → D (no possible nodes; D is popped).
• Traverse C → E (H possible node from E).
• Traverse E → H (G possible node from H).
• Traverse H → G (F possible node from G).
• Traverse G → F (no further nodes).
• The remaining stack contents (F, G, H, E, C, S, A) are then popped
one by one until the stack is empty.
Output: A B S C D E H G F
DLS: Depth Limited Search
• It is similar to DFS, but with a predetermined depth limit.
• It solves the infinite-path problem of DFS.
• Nodes at the depth limit are treated as if they have no successor nodes.
• DLS terminates under 2 conditions:
One is the standard failure value (the problem has no solution at all).
The second is the cutoff failure value (no solution within the given depth limit).
Advantages:
Memory efficient
• Disadvantages:
• Incomplete (if the goal lies beyond the limit) and not optimal.
• In the given diagram, consider the limit to be level 2: whether the
goal is found or not found within the limit, DLS terminates.
• Time complexity: O(b^L)
• Space complexity: O(b·L)
L ------ depth limit
• Of course, in the diagram we are following the concept of DFS but
not moving beyond the limit; i.e., DLS avoids the infinite paths that
are a disadvantage of DFS.
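A recursive sketch of depth-limited search that distinguishes the two termination conditions described above (the function and graph are our own illustration):

```python
def dls(graph, node, goal, limit):
    """Depth-limited search: DFS that treats nodes at the depth limit
    as having no successors.
    Returns a path to the goal, "cutoff" if the limit was reached
    (cutoff failure), or None if no solution exists (standard failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = dls(graph, succ, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None

# Assumed example graph: goal E lies at depth 3
graph = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
```

With limit 2, `dls(graph, "A", "E", 2)` returns "cutoff" (the goal lies beyond the limit); with limit 3 it returns the path ["A", "B", "D", "E"].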
Iterative deepening depth-first search:
• Iterative deepening is the preferred uninformed search method when
the search space is large and the depth of the solution is not known.
• It is used in combination with DFS to find the best depth limit. This is
done by gradually increasing the limit: first 0, then 1, and so on,
until a goal is found.
• Iterative deepening combines the benefits of DFS and BFS.
• Description:

 It is the search strategy you get when you combine BFS and DFS, thus
combining the advantages of each strategy: the completeness and
optimality of BFS and the modest memory requirements of DFS.
 IDS works by looking for the best search depth d: it starts with depth
limit 0 and performs a depth-limited DFS; if the search fails, it increases
the depth limit by 1 and tries again with depth 1, and so on – first d = 0,
then 1, then 2 and so on – until a depth d is reached where a goal is found.
Example
At depth limit 0
• Only the root node "A" is visited (opened).
• Iterate the depth limit, so limit = 1.
• The possible nodes from 'A' are B, C, D.
• At level 1, "B" (the current node) has adjacent nodes "C" and "E"; "C" is
in the same level 1, but "E" is not visited because it exceeds the limit.
• Now the current node is "C".
• At level 1, "C" has adjacent nodes "B" (already visited) and "F", "G",
which are not in level 1 and are not visited because they exceed the limit.
Algorithm:

procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found

procedure DLS(node, depth)
    if depth = 0 and node is a goal
        return node
    else if depth > 0
        foreach child of node
            found ← DLS(child, depth−1)
            if found ≠ null
                return found
    return null
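The pseudocode above translates almost line for line into Python; this is a sketch with the graph represented as a successor dict, ∞ replaced by a finite max_depth, and an example tree of our own:

```python
def dls(graph, node, goal, depth):
    """Depth-limited DFS used by IDDFS; finds the goal exactly at `depth`."""
    if depth == 0:
        return [node] if node == goal else None
    for child in graph.get(node, []):
        found = dls(graph, child, goal, depth - 1)
        if found is not None:
            return [node] + found
    return None

def iddfs(graph, root, goal, max_depth=50):
    """Iterative deepening: repeat DLS with limits 0, 1, 2, ..."""
    for depth in range(max_depth + 1):
        found = dls(graph, root, goal, depth)
        if found is not None:
            return found
    return None

# Assumed example tree: goal G sits at depth 3
graph = {"A": ["B", "C", "D"], "B": ["E"], "E": ["G"]}
```

`iddfs(graph, "A", "G")` fails at depths 0, 1 and 2, then finds the goal at depth 3 and returns ["A", "B", "E", "G"].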
Performance Measure:
Completeness: IDS, like BFS, is complete when the branching factor b is finite.

Optimality: IDS, like BFS, is optimal when all steps have the same cost.

Time Complexity: N(IDS) = (d)b + (d − 1)b^2 + (d − 2)b^3 + … + (2)b^(d−1) + (1)b^d = O(b^d)

If this search were done with BFS, the total number of generated nodes in
the worst case would be:
N(BFS) = b + b^2 + b^3 + b^4 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
If we consider realistic numbers and use b = 10 and d = 5, the number of
generated nodes in BFS and IDS will be:
N(IDS) = 50 + 400 + 3000 + 20000 + 100000 = 123450
N(BFS) = 10 + 100 + 1000 + 10000 + 100000 + 999990 = 1111100
BFS generates about 9 times as many nodes as IDS.
• Space Complexity:
o IDS is like DFS in its space complexity, taking O(b·d) of memory.

Conclusion:

• We can conclude that IDS is a hybrid search strategy between BFS and
DFS inheriting their advantages.
• IDS is faster than BFS and DFS.
• It is said that "IDS is the preferred uninformed search method when
there is a large search space and the depth of the solution is not
known".
Informed search strategies:

• A heuristic technique helps in solving problems, even though there is no guarantee
that it will not lead in the wrong direction. There are heuristics of general
applicability as well as domain-specific ones. The strategies here are general-purpose
heuristics; in order to use them in a specific domain, they are coupled with some
domain-specific heuristics. There are two major ways in which domain-specific
heuristic information can be incorporated into a rule-based search procedure.

• A heuristic function is a function that maps a problem-state description to a
measure of desirability, usually represented as a number. The value of the
heuristic function at a given node in the search process gives a good estimate of
whether that node is on the desired path to a solution.
• Greedy Best First Search
• Greedy best-first search tries to expand the node that is closest to the
goal, on the grounds that this is likely to lead to a solution quickly.
Thus, it evaluates nodes by using just the heuristic function:
f(n) = h(n)
• Taking the example of Route-finding problems in Romania, the goal is


to reach Bucharest starting from the city Arad. We need to know the
straight-line distances to Bucharest from various cities
• For example, the initial state is In (Arad), and the straight line distance
heuristic hSLD (In (Arad)) is found to be 366. Using the straight-line
distance heuristic hSLD, the goal state can be reached faster.
Evaluation Criterion of Greedy Search

Complete: NO [it can get stuck in loops; complete in a finite space
with repeated-state checking]
Time Complexity: O(b^m) [but a good heuristic can give dramatic
improvement]
Space Complexity: O(b^m) [keeps all nodes in memory]
Optimal: NO

Greedy best-first search is not optimal, and it is incomplete. The
worst-case time and space complexity is O(b^m),
where m is the maximum depth of the search space.
• HILL CLIMBING PROCEDURE:

• Hill Climbing Algorithm

• We will assume we are trying to maximize a function. That is, we are trying to
find a point in the search space that is better than all the others. And by "better"
we mean that the evaluation is higher. We might also say that the solution is of
better quality than all the others.
• The idea behind hill climbing is as follows:

1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
• Algorithm:

function HILL-CLIMBING(problem) returns a solution state
    inputs: problem, a problem
    local variables: current, a node; next, a node
    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        next ← a highest-valued successor of current
        if VALUE[next] < VALUE[current] then return current
        current ← next
    end

• Note that this algorithm does not maintain a search tree; it only returns a
final solution. Also, if two neighbors have the same evaluation and both are
of the best quality, the algorithm will choose between them at random.
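The loop above can be sketched in Python; the toy objective f(x) = −(x − 3)² and its integer neighborhood are our own example, not from the slides:

```python
def hill_climbing(value, neighbors, start):
    """Steepest-ascent hill climbing: move to the highest-valued
    neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current    # no better neighbor: local (or global) maximum
        current = best

# Toy example (assumed): maximize f(x) = -(x - 3)^2 over the integers,
# where each state x has neighbors x - 1 and x + 1
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

Starting from 0, the search climbs through 1 and 2 and stops at 3, the maximum here. On a function with several hills, the same loop would stop at whatever local maximum it reaches first.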

Problems with Hill Climbing

• The main problem with hill climbing (also sometimes called gradient
ascent/descent) is that we are not guaranteed to find the best solution.
In fact, we are not offered any guarantees about the solution; it could
be abysmally bad.
• We may eventually reach a state that has no better neighbours even
though there are better solutions elsewhere in the search space. The
problem just described is called a local maximum.
• Examples of hill-climbing difficulties (see figure):
1. Local maximum: the search starts from an initial point, reaches the
maximum of the first hill and stops, although the target (goal) lies on
another hill.
2. Plateau / flat maximum: a flat region where neighboring states have
the same value, so there is no uphill direction to follow.
3. Ridge: a region higher than its surroundings which cannot be
ascended by single moves in any one direction.
Best First Search:
A combination of depth-first and breadth-first searches.

Depth-first is good because a solution can be found without computing all nodes, and breadth-first is good because it
does not get trapped in dead ends.
Best-first search allows us to switch between paths, thus gaining the benefit of both approaches. At each step the
most promising node is chosen. If one of the chosen nodes generates nodes that are less promising, it is possible to
choose another node at the same level, and in effect the search changes from depth to breadth. The previously
unexpanded branch is not forgotten: if on analysis the new nodes are no better, the search method reverts to it.

OPEN is a priority queue of nodes that have been evaluated by the heuristic function but which have not yet been
expanded into successors. The most promising nodes are at the front.

CLOSED holds nodes that have already been generated; these nodes must be stored because a graph is being
used in preference to a tree.
• Algorithm:

1. Start with OPEN holding the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
   • Pick the best node on OPEN.
   • Generate its successors.
   • For each successor do:
     - If it has not been generated before, evaluate it, add it to OPEN and record
       its parent.
     - If it has been generated before, change the parent if this new path is
       better, and in that case update the cost of getting to any successor nodes.
3. If a goal is found or no more nodes are left in OPEN, quit; else return to step 2.
NODE H(n)
S 13
A 12
B 4
C 7
D 3
E 8
F 2
H 4
I 9
G 0

• OPEN --------- nodes which have not been expanded yet
• CLOSED ------- nodes which are already evaluated
• Initial node: S
• OPEN = [A, B], CLOSED = [S]
• OPEN = [A], CLOSED = [S, B]
• OPEN = [A, E], CLOSED = [S, B, F]
• OPEN = [A, E, H], CLOSED = [S, B, F, G]

• OUTPUT: S-B-F-G
Time complexity = space complexity = O(b^m)
m ---- maximum depth of the search space
Properties:
1. It is not optimal.
2. It is incomplete because it can start down an infinite path and
never return to try other possibilities.
3. The worst-case time complexity for greedy search is O(b^m),
where m is the maximum depth of the search space.
4. Because greedy search retains all nodes in memory, its space
complexity is the same as its time complexity.
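The OPEN/CLOSED scheme above can be sketched with a priority queue ordered by h(n). The h values below are taken from the table above; the edges are an assumption (the original figure is missing), chosen to be consistent with the S-B-F-G trace:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node
    with the lowest heuristic value h(n)."""
    open_list = [(h[start], start, [start])]   # priority queue keyed on h
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph.get(node, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

h = {"S": 13, "A": 12, "B": 4, "C": 7, "D": 3,
     "E": 8, "F": 2, "H": 4, "I": 9, "G": 0}
# Assumed edges consistent with the trace above
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["G", "H"]}
```

Running `greedy_best_first(graph, h, "S", "G")` expands S, then B (h = 4 beats A's 12), then F (h = 2), and reaches G, returning ["S", "B", "F", "G"].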
A* search algorithm
• The Best First algorithm is a simplified form of the A* algorithm.

• The A* search algorithm (pronounced "Ay-star") is a tree search


algorithm that finds a path from a given initial node to a given goal
node (or one passing a given goal test). It employs a "heuristic
estimate" which ranks each node by an estimate of the best route
that goes through that node. It visits the nodes in order of this
heuristic estimate.
• Similar to greedy best-first search but is more accurate because A*
takes into account the nodes that have already been traversed.
• The A* search algorithm finds the shortest path through the search space
using the heuristic function h(n).
• It uses h(n) together with the cost to reach node 'n' from the start state,
g(n).
• The value of a particular path is then
f(n) = g(n) + h(n)
• This algorithm expands a smaller search tree and provides optimal results
faster.
• It is similar to uniform-cost search (which uses only the path cost from one
node to another, i.e., g(n)).
• A* uses the search heuristic as well as the cost to reach the node, combining
both costs as:
• f(n) = g(n) + h(n)
• f(n) is also called the fitness number:
f(n) = estimated cost of the cheapest solution through n
g(n) = cost to reach node 'n' from the start state
h(n) = estimated cost from node 'n' to the goal node
state h(n)
S 5
A 3
B 4
C 2
D 6
G 0
Algorithm (worked example):
• Initial state: 'S'
• S→A: f(n) = g(n) + h(n) = 1 + 3 = 4
• S→G: f(n) = g(n) + h(n) = 10 + 0 = 10
• From "S" we have received f(n) = 4 & 10.
• Compare the two values, take the lowest as the current path, and put the higher value on hold.
• Likewise, repeat until the goal state is reached and the search stops.
• After reaching the goal state, check whether any held value is lower than the goal's f value; if so, continue from that lower-cost path instead.

-----------------------------
• S—A—B: f(n) = g(n) + h(n) = 3 + 4 = 7
• S—A—C: f(n) = g(n) + h(n) = 2 + 2 = 4
-------------
• S—A—C—D: f(n) = g(n) + h(n) = 5 + 6 = 11
• S—A—C—G: f(n) = g(n) + h(n) = 6 + 0 = 6
The cost to reach the goal is 6.

• Output: S---A---C---G
• Advantages:
Better than many other search algorithms
Optimal & complete (with an admissible heuristic)
Can also solve complex problems
• Disadvantages:
Not guaranteed to produce the shortest path if the heuristic is inadmissible
Not practical for various large-scale problems (it keeps many nodes in memory)
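A sketch of A* ordering the frontier by f(n) = g(n) + h(n). The h values match the table above; the edge costs are an assumption reconstructed from the f values in the worked example (the original figure is missing):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the frontier node with the lowest
    f(n) = g(n) + h(n). `graph` maps node -> {successor: step cost}."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, {}).items():
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")

h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
# Assumed edge costs reconstructed from the worked f values
graph = {"S": {"A": 1, "G": 10}, "A": {"B": 2, "C": 1},
         "C": {"D": 3, "G": 4}}
```

`a_star(graph, h, "S", "G")` returns the path ["S", "A", "C", "G"] with total cost 6, matching the worked example: the direct edge S→G (f = 10) is held back in favour of the cheaper route through A and C.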
Constraint Satisfaction Problem
• A constraint satisfaction problem (CSP) is a problem whose solution must
satisfy some limitations or conditions, also known as constraints.

• It consists of the following:

• A finite set of variables which stores the solution (V = {V1, V2, V3,....., Vn})

• A set of discrete values known as domain from which the solution is picked
(D = {D1, D2, D3,.....,Dn})

• A finite set of constraints (C = {C1, C2, C3,......, Cn})
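A minimal backtracking sketch for a CSP; the toy problem (3-coloring three mutually adjacent variables X, Y, Z) is our own example:

```python
def backtrack(assignment, variables, domains, consistent):
    """Backtracking search for a CSP: assign variables one at a time,
    undoing an assignment when no consistent value remains."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]          # undo and backtrack
    return None

# Toy CSP (assumed): three mutually adjacent variables
neighbors = {"X": ["Y", "Z"], "Y": ["X", "Z"], "Z": ["X", "Y"]}

def ok(var, value, assignment):
    """Constraint: adjacent variables must take different values."""
    return all(assignment.get(n) != value for n in neighbors[var])

solution = backtrack({}, ["X", "Y", "Z"],
                     {v: ["red", "green", "blue"] for v in "XYZ"}, ok)
```

With three colors available, the search assigns a distinct color to each variable; with only two colors in the domains, the same call would exhaust every branch and return None.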

