AI-UNIT-1 PPT
P.Dastagiri Reddy
Assistant Professor
SOCSE-1 Department
UNIT-1
• 1950: Turing
Turing's paper "Computing Machinery and Intelligence"
• 1956: Birth of AI
Dartmouth Conference: the name "Artificial Intelligence" adopted
• Perceiving-----------thinking-----------acting
Human agent
Robotic agent
Software agent ---- keystrokes as percepts (e.g., a Python program run with F5)
• Sensor: a device that detects a change in the environment and
sends the information to other electronic devices.
• Task environments:
• We must think about task environments, which are essentially
the "problems" to which rational agents are the "solutions."
PEAS description for the taxi-driver agent:
Agent: Taxi driver
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer
• Properties of task environments:
STATE SPACE SEARCH
Problem Solving by Search
• Problem searching:
In general, searching refers to finding the information one needs.
Searching is the most commonly used technique of problem solving in AI.
• Generally to build a system, to solve a problem what we need:
1. Define (Initial situation)
2. Analyzing (techniques)
3. Isolate and represent
4. Choose the best solution
5. Implementation
This is also called the problem space, which defines the various
components that go into creating a resolution for a problem; the
five points above are the stages of the problem space.
• For example, problem solving in games such as the "SUDOKU PUZZLE"
is done by building an AI system.
To do this, we first define the problem statement and then generate the
solution, keeping the constraints in mind.
The major difference between an intelligent agent and a problem-solving agent is:
• An intelligent agent maximizes the performance measure.
• A problem-solving agent finds a sequence of actions.
Ex: shortest-route-path algorithm
Functionality of Problem-Solving Agents:
• Goal Formulation:
Problem-solving is about having a goal we want to reach (Ex: we want to
travel from A ------ E)
• Problem Formulation:
A problem formulation is about deciding what actions and states to
consider.
• Search:
The process of looking for such a sequence of actions is called search.
• Solution and execute:
Once a solution is found through search, the recommended
sequence of actions is carried out; this is the execution phase.
• Problem-solving agent now simply designed as:
Formulate------search------execute
A problem can be defined formally by 4 components:
1. Initial state
2. Successor function (actions)
3. Goal test
4. Path cost
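The four components above can be sketched as a small Python class; the route-finding graph, state names, and edge costs below are illustrative assumptions, not from the slides:

```python
# Hypothetical route graph: state -> {neighbour: step cost}
ROUTES = {"A": {"B": 1, "C": 4}, "B": {"D": 2}, "C": {"E": 3}, "D": {"E": 5}}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial              # 1. initial state
        self.goal = goal

    def actions(self, state):               # 2. successor function
        return ROUTES.get(state, {})

    def goal_test(self, state):             # 3. goal test
        return state == self.goal

    def path_cost(self, path):              # 4. path cost: sum of step costs
        return sum(ROUTES[a][b] for a, b in zip(path, path[1:]))

problem = RouteProblem("A", "E")
print(problem.goal_test("E"))               # True
print(problem.path_cost(["A", "C", "E"]))   # 4 + 3 = 7
```

A search algorithm only needs these four pieces; the strategies below differ in the order in which they explore the successors.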
Goal Directed Agent
Definitions
Search Problem
Searching Process
State Space
Pegs and Disks
8 Queens
8 Queens Solution
N Queens Problem Formulation 1
N Queens Problem Formulation 2
N Queens Problem Formulation 3
Other Examples:
Different types of search strategies:
• Search strategies can be :
Uninformed search strategies
Informed search strategies
• Uninformed search strategies: These are also called blind search. These
search strategies use only the information available in the problem
definition. (This includes breadth-first search, uniform-cost search,
depth-first search, iterative deepening depth-first search, and
bidirectional search.)
• Informed search strategies: These are also called heuristic search. These
search strategies use additional domain-specific information. (This includes
hill climbing, best-first search, greedy search, and A* search.)
Search strategies classification:
Difference between uninformed and informed
search:
Search Algorithm Terminologies:
• Search
A search problem may have three factors: the search space, the start
state, and the goal state.
• Search tree
• Actions
• Transition model
• Path cost
• Solution
• Optimal solution
Properties of Search Algorithms:
• Following are the four essential properties of search algorithms used to compare
their efficiency:
• Completeness: A search algorithm is complete if it is guaranteed to find a
solution whenever one exists.
• Optimality: A search algorithm is optimal if the solution it finds is guaranteed
to have the lowest path cost among all solutions.
• Time Complexity: A measure of the time an algorithm takes to complete its
task.
• Space Complexity: The maximum storage space required at any point during the
search, as a function of the complexity of the problem.
Definition of search problem:
Exploring the state space:
Search Trees:
Breadth-First search (BFS):
• It is the most common search strategy for traversing a tree or graph.
• This algorithm searches breadth-wise in a tree/graph, so it is called
breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving on
to the nodes of the next level.
• BFS is implemented using a FIFO queue data structure.
Example:
• Let us see how BFS follows the FIFO order (first in, first out):
• A ---- start node (check the possibilities for node A), i.e., A is entered into the queue first.
• Queue contents step by step:
A → BC → CDE → DEFG → EFGH → FGHI → GHIJ → HIJ → IJK → JK → K
• K ---- goal node
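The FIFO behaviour traced above can be sketched in Python; the tree below is an assumption reconstructed from the slide's queue trace:

```python
from collections import deque

# Tree reconstructed from the queue trace (A expands to B,C; B to D,E; ...)
TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H"], "E": ["I"], "F": ["J"], "G": [],
        "H": ["K"], "I": [], "J": [], "K": []}

def bfs(start, goal):
    queue = deque([[start]])        # FIFO queue holding whole paths
    while queue:
        path = queue.popleft()      # dequeue the oldest path (FIFO)
        node = path[-1]
        if node == goal:
            return path
        for child in TREE[node]:    # enqueue successors level by level
            queue.append(path + [child])
    return None

print(bfs("A", "K"))  # ['A', 'B', 'D', 'H', 'K']
```

Because the queue is FIFO, every node at depth d is dequeued before any node at depth d+1, which is exactly the level-by-level order shown in the trace.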
• Time complexity: obtained from the number of nodes BFS traverses
down to the shallowest goal node.
• Let d be the depth of the shallowest goal node.
• Let b be the branching factor (number of successors at every node).
Then T(b) = 1 + b + b² + … + b^d = O(b^d)
Space complexity: given by the memory required to store the nodes.
Then S(b) = O(b^d)
Completeness: BFS is complete (if the shallowest goal node is at a finite depth).
Optimality: Yes, it is optimal (when all step costs are equal).
Example BFS:
• Find the route/path from S to E using BFS.
• Expand all possibilities from each and every node.
• After expanding all possibilities, the tree representation below has
start node S and final node E.
• Based on the path cost, the path should be SBE (or) SCE.
• Let us see how BFS follows the FIFO order (first in, first out):
• S ---- start node (check the possibilities for node S), i.e., S is
entered into the queue first.
Queue contents step by step:
S → ABC → BCD → CDDE → DDEE → DEEE → EEEE → EEE → EE → E ---- goal node
Depth-First search
• DFS may be implemented as a recursive or non-recursive algorithm.
• It is a recursive algorithm for traversing a tree/graph.
• It starts from the root node and follows each path to its greatest
depth before moving to the next path. That is the reason it is
called DFS.
• So it travels in a top-to-bottom direction, not level by level like BFS.
• DFS uses a stack data structure (LIFO) for its implementation.
• The process is otherwise similar to the BFS algorithm.
• Advantages:
1. It requires less memory, as it only needs to store a stack of the
nodes on the path from the root node to the current node.
2. It can take less time to reach the goal node than the BFS algorithm.
Disadvantages:
1. There is a chance of many states re-occurring, and there is no
guarantee of finding the solution.
2. The DFS algorithm goes deep down in its search and sometimes may
go into an infinite loop.
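A minimal DFS sketch with an explicit LIFO stack; the graph is an assumption matching the lecture's backtracking example (S has successors A and H, A has B and C, B has D and E, C has G):

```python
# Illustrative graph from the backtracking example
GRAPH = {"S": ["A", "H"], "A": ["B", "C"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": [], "H": []}

def dfs(start, goal):
    stack = [start]                 # LIFO stack
    visited = []
    while stack:
        node = stack.pop()          # pop the most recently pushed node
        if node in visited:
            continue
        visited.append(node)
        if node == goal:
            return visited          # order in which nodes were visited
        # Push successors in reverse so the leftmost child is popped first
        for child in reversed(GRAPH[node]):
            stack.append(child)
    return None

print(dfs("S", "G"))  # ['S', 'A', 'B', 'D', 'E', 'C', 'G']
```

The visit order S A B D E C G matches the backtracking trace: DFS dives to D, backtracks to E, then backtracks up to A's remaining successor C before reaching the goal G.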
Example: DFS
Backtracking
• DFS starts searching from the root node, so start traversing from root node "S".
• Each visited node is pushed onto the stack while its successor nodes are
checked; S has successors A and H.
• Traverse S → A (visited; successors B and C), then A → B (visited;
successors D and E).
• Traverse B → D. Since D has no nodes to expand, it is popped from the
stack and backtracking starts.
• Traverse back to "B"; since it is already visited, check its successor
node "E" (visited, no successor nodes). Since there are no nodes to
expand, it is popped from the stack and backtracking starts again.
• Now traverse back to "B" and then to "A"; both are visited, so check for
successor nodes that are not yet visited.
• "A" has successor node "C" (visited; its successor is G).
• Traverse C → G (visited, no successor nodes).
• "G" is the goal node, and searching stops when the goal node is reached.
• The DFS is not done yet, because the stack still holds some nodes: only
when all stack elements have been popped out (i.e., the stack is empty)
can we say that DFS has terminated.
• Before popping a node out, check whether all of its successors are
visited; if so, pop the remaining nodes out one by one until the stack
is empty.
Output: SABDECG
• Completeness: Yes, complete within a finite state space.
• Time complexity: T(n) = 1 + b + b² + … + b^m
Hence T(n) = O(b^m)
m ---- maximum depth of any node
b ---- branching factor
• Space complexity: O(bm)
• Optimality: non-optimal (due to possible infinite/deep paths)
Example 2: DFS
• Start node is "A".
• A → B, S (possible nodes from A).
• Traverse from A → B.
• Traverse from A → S (C, G possible nodes from S).
• Traverse from S → C (D, E, F possible nodes from C).
• Traverse from C → D (no possible nodes, just pop it out).
[Stack figure from the slide: F, G, H, E, C, S, A remain on the stack; B and D have been popped.]
Iterative deepening search (IDS):
• It is a search strategy resulting when you combine BFS and DFS, thus
combining the advantages of each strategy: the completeness and
optimality of BFS and the modest memory requirements of DFS.
• IDS works by looking for the best search depth d: it starts with depth
limit 0 and performs a depth-limited DFS; if the search fails, it
increases the depth limit by 1 and tries again with depth 1, and so on
(first d = 0, then 1, then 2, ...) until a depth d is reached at which
a goal is found.
Example
At depth limit 0:
• "A", the root node, is visited (opened).
• Iterate the depth level, so level = 1.
• The possible nodes from "A" are B, C, D.
• In level 1, "B" (the current node) has adjacent nodes "C" and "E",
but only "C" is in the same level 1; "E" is not, because it exceeds the limit.
• Now the current node is "C".
• In level 1, "C" (the current node) has adjacent nodes "B" (already visited)
and "F"; "G" is not in the same level 1 because it exceeds the limit.
Algorithm:
procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found
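A runnable version of the pseudocode, assuming a small illustrative tree and a DLS (depth-limited search) helper:

```python
# Illustrative tree only; any successor mapping would do.
TREE = {"A": ["B", "C", "D"], "B": ["E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}

def dls(node, goal, limit):
    """Depth-limited search: DFS that never descends below `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cutoff reached
    for child in TREE[node]:
        found = dls(child, goal, limit - 1)
        if found is not None:
            return [node] + found
    return None

def iddfs(root, goal, max_depth=20):
    for depth in range(max_depth + 1):   # depth limit 0, 1, 2, ...
        found = dls(root, goal, depth)
        if found is not None:
            return found
    return None

print(iddfs("A", "G"))  # ['A', 'C', 'G']
```

Each iteration re-runs the cheap shallow levels, but because the deepest level dominates the node count, the repeated work changes only the constant factor, not the O(b^d) complexity.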
Optimality: Like BFS, IDS is optimal when all steps are of the same cost.
Conclusion:
• We can conclude that IDS is a hybrid search strategy between BFS and
DFS inheriting their advantages.
• IDS is often faster in practice than BFS and DFS.
• It is said that "IDS is the preferred uninformed search method when
there is a large search space and the depth of the solution is not
known".
Informed search strategies:
• We will assume we are trying to maximize a function. That is, we are trying to
find a point in the search space that is better than all the others. And by "better"
we mean that the evaluation is higher. We might also say that the solution is of
better quality than all the others.
• The idea behind hill climbing is as follows: start from an initial state
and repeatedly move to the neighbouring state with the highest evaluation,
stopping when no neighbour is better.
• It is not guaranteed to find the best solution. In fact, we are not offered
any guarantees about the solution. It could be abysmally bad.
• You can see that we may eventually reach a state that has no better
neighbours while there are better solutions elsewhere in the search space.
The problem we have just described is called a local maximum.
• Example for hill climbing:
[Figure: hill-climbing landscape from the start point to the goal, showing the problem regions: 1. local maximum, 2. plateau/flat maximum, 3. ridge]
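The hill-climbing loop described above can be sketched on a simple one-dimensional objective; the function and the two-neighbour move set are illustrative assumptions:

```python
def objective(x):
    # Single peak at x = 3 (illustrative evaluation function)
    return -(x - 3) ** 2 + 10

def hill_climb(start):
    current = start
    while True:
        neighbours = [current - 1, current + 1]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(current):
            return current      # no better neighbour: a (local) maximum
        current = best

print(hill_climb(0))  # 3
```

On this single-peaked function the loop reaches the global maximum, but on a function with several peaks the same loop would stop at whichever local maximum is uphill from the start point.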
Best First Search:
• Best-first search is a combination of depth-first and breadth-first searches.
• Depth-first is good because a solution can be found without computing all
nodes, and breadth-first is good because it does not get trapped in dead ends.
• Best-first search allows us to switch between paths, thus gaining the benefit
of both approaches. At each step the most promising node is chosen. If one of
the nodes chosen generates nodes that are less promising, it is possible to
choose another at the same level, and in effect the search changes from depth
to breadth. If on analysis these are no better, the previously unexpanded node
and branch are not forgotten, and the search method reverts to them.
• OPEN is a priority queue of nodes that have been evaluated by the heuristic
function but have not yet been expanded into successors. The most promising
nodes are at the front.
• CLOSED holds nodes that have already been generated; these nodes must be
stored because a graph is being used in preference to a tree.
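A minimal sketch of greedy best-first search with an OPEN priority queue and a CLOSED set; the graph and heuristic values are assumptions chosen so that the result matches the slide's output S-B-F-G:

```python
import heapq

# Illustrative graph and heuristic h(n) (smaller = more promising)
ADJ = {"S": ["A", "B"], "A": ["C"], "B": ["F"], "C": [], "F": ["G"], "G": []}
H = {"S": 10, "A": 6, "B": 4, "C": 5, "F": 2, "G": 0}

def greedy_best_first(start, goal):
    open_list = [(H[start], [start])]        # OPEN: (h, path) priority queue
    closed = set()                           # CLOSED: already expanded nodes
    while open_list:
        _, path = heapq.heappop(open_list)   # most promising node first
        node = path[-1]
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in ADJ[node]:
            heapq.heappush(open_list, (H[child], path + [child]))
    return None

print(greedy_best_first("S", "G"))  # ['S', 'B', 'F', 'G']
```

Because the queue is ordered purely by h(n), the search jumps to whichever frontier node currently looks best, regardless of how deep it is.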
• Algorithm:
• OUTPUT: S---B---F---G
Time complexity = space complexity = O(b^m),
where m is the maximum depth of the search space.
Properties:
1. It is not optimal.
2. It is incomplete because it can start down an infinite path and
never return to try other possibilities.
3. The worst-case time complexity for greedy search is O(b^m),
where m is the maximum depth of the search space.
4. Because greedy search retains all nodes in memory, its space
complexity is the same as its time complexity
A* search algorithm
• The Best First algorithm is a simplified form of the A* algorithm.
• S-A-B: f(n) = g(n) + h(n) = 3 + 4 = 7
• S-A-C: f(n) = g(n) + h(n) = 2 + 2 = 4
• S-A-C-D: f(n) = g(n) + h(n) = 5 + 6 = 11
• S-A-C-G: f(n) = g(n) + h(n) = 6 + 0 = 6
The cost to reach the goal is 6.
• Output: S---A---C---G
• Advantages:
Performs better than many other search algorithms.
Optimal and complete (with an admissible heuristic).
Can also solve complex problems.
• Disadvantages:
Not always guaranteed to produce the shortest path (if the heuristic is not admissible).
Not practical for various large-scale problems because of its memory requirements.
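The worked example above can be reproduced with a small A* sketch; the edge costs and heuristic values below are assumptions reconstructed so that f(n) = g(n) + h(n) matches the slide's numbers:

```python
import heapq

# Reconstructed costs: S->A=1, A->B=2, A->C=1, C->D=3, C->G=4 (assumed)
COSTS = {"S": {"A": 1}, "A": {"B": 2, "C": 1},
         "B": {}, "C": {"D": 3, "G": 4}, "D": {}, "G": {}}
# Assumed heuristic values matching the slide: h(B)=4, h(C)=2, h(D)=6, h(G)=0
H = {"S": 5, "A": 4, "B": 4, "C": 2, "D": 6, "G": 0}

def a_star(start, goal):
    open_list = [(H[start], 0, [start])]     # (f = g + h, g, path)
    while open_list:
        f, g, path = heapq.heappop(open_list)  # lowest f expanded first
        node = path[-1]
        if node == goal:
            return path, g                   # goal reached with cost g
        for child, step in COSTS[node].items():
            g2 = g + step
            heapq.heappush(open_list, (g2 + H[child], g2, path + [child]))
    return None, None

path, cost = a_star("S", "G")
print(path, cost)  # ['S', 'A', 'C', 'G'] 6
```

Note how S-A-C (f = 4) is expanded before S-A-B (f = 7), and S-A-C-G (f = 6) wins over S-A-C-D (f = 11), reproducing the slide's trace.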
Constraint Satisfaction Problem
• A constraint satisfaction problem (CSP) is a problem whose solution must
satisfy some limitations or conditions, also known as constraints.
• A finite set of variables that stores the solution: V = {V1, V2, V3, ..., Vn}
• A set of discrete values, known as the domain, from which the solution is
picked: D = {D1, D2, D3, ..., Dn}