Artificial Intelligence
Informed Searches
A* Search
• Nodes are evaluated by the combination of two cost functions:
• f(n) = g(n) + h(n)
• g(n) : cost to reach node n from the start node
• h(n) : estimated cost to reach the goal from node n
• A* is optimal if h(n) is an admissible heuristic (AH)
• An AH never overestimates the cost to reach the
goal
• Heuristic h(n) is consistent
• if h(n) ≤ c(n, a, n′) + h(n′) for every successor n′ (a sketch follows)
• c(n, a, n′) : step cost from n to n′ by action a
• h(n′) : estimated cost from n′ to the goal node
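Below is a minimal Python sketch of this evaluation. The 4×4 grid, unit step costs, and Manhattan-distance heuristic are illustrative assumptions, not from the slides; Manhattan distance is admissible (and consistent) on such a grid.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: order the frontier by f(n) = g(n) + h(n).
    neighbors(n) yields (successor, step_cost) pairs; h(n) estimates
    the remaining cost and must be admissible for optimality."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Illustrative 4x4 grid with unit step costs; h is the Manhattan distance.
def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 4 and 0 <= y + dy < 4:
            yield (x + dx, y + dy), 1

goal = (3, 3)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, neighbors, h))
```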
Hill Climbing Search
• The search loops continually in the direction of
increasing value
• Terminates when it reaches a peak
• No neighbor has a higher value
• No search tree structure is maintained (only the current state is kept)
• Stochastic hill climbing
• Chooses randomly among the uphill moves (neighbors)
• The selection probability can vary with the steepness
of the uphill move
• First-choice hill climbing
• Generates successor states at random
• Generation continues until a successor better than the
current state is found (a sketch of the basic loop follows)
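A minimal Python sketch of the basic (steepest-ascent) loop; the 1-D integer landscape is an illustrative assumption.

```python
import random

def hill_climbing(state, value, neighbors_fn):
    """Steepest-ascent hill climbing: move to the best neighbor
    until no neighbor improves on the current state (a peak)."""
    while True:
        best = max(neighbors_fn(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                      # peak reached
        state = best

# Illustrative 1-D landscape: maximize f(x) = -(x - 7)^2 over the integers.
value = lambda x: -(x - 7) ** 2
neighbors_fn = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(0, 20), value, neighbors_fn))  # -> 7
```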
Simulated annealing
• Named after the heat treatment that alters a material to increase its
ductility and make it more workable
• Maximizes an objective function
• A move is always accepted if it improves the objective
• Otherwise it is accepted with probability less than 1
• The acceptance probability (in the otherwise case) decreases with the
badness of the move
• ΔE = Value[next] − Value[current]
• A negative ΔE measures how bad the move is
• The acceptance probability also decreases as the temperature T goes down
• A bad move may well be allowed initially (high T)
• A bad move is unlikely to be permitted at the end (low T)
• At high temperatures, the parameter space is explored freely
• At lower temperatures, exploration is restricted
• Decreasing T slowly enough (the schedule) finds the global optimum with
probability approaching 1 (a sketch follows)
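A minimal Python sketch of this acceptance rule, using exp(ΔE / T) as the probability of accepting a bad move; the landscape and the cooling schedule are illustrative assumptions.

```python
import math, random

def simulated_annealing(state, value, neighbor, schedule):
    """Accept every improving move; accept a bad move (dE < 0)
    with probability exp(dE / T), which shrinks as T cools."""
    for t in range(1, 100000):
        T = schedule(t)
        if T <= 1e-12:
            return state                      # frozen: stop
        nxt = neighbor(state)
        dE = value(nxt) - value(state)        # dE = Value[next] - Value[current]
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt
    return state

# Illustrative run: maximize f(x) = -(x - 7)^2 with exponential cooling.
value = lambda x: -(x - 7) ** 2
neighbor = lambda x: x + random.choice((-1, 1))
schedule = lambda t: 100 * 0.99 ** t
print(simulated_annealing(0, value, neighbor, schedule))
```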
Local beam Search
1. Begin with K randomly generated states
2. At each step, generate the successors of all K states
3. If a goal is found, the algorithm halts
4. Otherwise, select the K best states from the successors
5. Repeat from step 2 (see the sketch below)
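A minimal Python sketch of steps 1–5; the integer landscape and goal test are illustrative assumptions.

```python
import random

def local_beam_search(k, random_state, value, successors, is_goal):
    """Keep the k best states; expand all of them each step and
    select the k best of the pooled successors."""
    states = [random_state() for _ in range(k)]              # step 1
    while True:
        pool = [s for st in states for s in successors(st)]  # step 2
        for s in pool:
            if is_goal(s):                                   # step 3
                return s
        if not pool:
            return max(states, key=value)
        states = sorted(pool, key=value, reverse=True)[:k]   # step 4, then repeat (step 5)

# Illustrative use: climb toward x = 50 on the integers with K = 2.
print(local_beam_search(
    2, lambda: random.randint(0, 100), lambda x: -abs(x - 50),
    lambda x: [x - 1, x + 1], lambda x: x == 50))
```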
Local beam Search (K=2)
[Figure sequence: a worked example on a tree rooted at Start; node A branches to B (20) and C (18), then to D (16), E (15), F (17), G (13), then to H, I, J, K (values 9–12), and finally to L (8), M (2), and a Goal node; successive slides step through which two states the beam keeps at each level.]
Genetic algorithm (GA)
• Successor states are generated by combining two
parent states
• A GA begins with a set of k randomly generated
states (the population)
• A state is represented as a string over a finite
alphabet
• Each state is rated by an evaluation function
(the fitness function)
• States are selected for reproduction at random, with
probability weighted by the fitness function
• For mating between two states, a crossover point is
chosen at random
• Each location is subject to random mutation with a small
independent probability (a sketch follows)
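A minimal Python sketch of these operators over a binary alphabet; the target-string fitness function and the parameter values are illustrative assumptions.

```python
import random

ALPHABET = "01"
TARGET = "1111111111"                      # illustrative fitness: match this string

def fitness(s):                            # evaluation (fitness) function
    return sum(a == b for a, b in zip(s, TARGET))

def select(pop):                           # random selection weighted by fitness
    return random.choices(pop, weights=[fitness(s) + 1 for s in pop], k=1)[0]

def crossover(x, y):                       # crossover point chosen at random
    c = random.randrange(1, len(x))
    return x[:c] + y[c:]

def mutate(s, p=0.05):                     # independent per-location mutation
    return "".join(random.choice(ALPHABET) if random.random() < p else ch for ch in s)

def genetic_algorithm(k=20, generations=200):
    pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
           for _ in range(k)]              # k randomly generated states
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(k)]
        if max(pop, key=fitness) == TARGET:
            break
    return max(pop, key=fitness)

print(genetic_algorithm())
```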
Minimax Algorithm
• Example game tree: the MAX node S0 has MIN children S01, S02, S03
• Terminal utilities: S01 → (3, 12, 8), S02 → (2, 4, 6), S03 → (14, 5, 2)
• Trace (an implementation sketch follows the trace):
1- V ← Max-Value(S0)
2- V = -∞
3- V ← Max(V = -∞, Min-Value(S01))
4- V = ∞
5- V ← Min(V = ∞, Max-Value(S011))
6- Max-Value(S011) returns the terminal utility 3
7- V ← Min(V = ∞, 3)
8- V ← Min(V = 3, 12)
9- V ← Min(V = 3, 8)
10- return V = 3 after the 3rd call of Max-Value inside the Min function
11- V ← Max(V = 3, Min-Value(S02))
12- V = ∞
13- V ← Min(V = ∞, Max-Value(S021))
14- V ← Min(V = ∞, 2)
15- V ← Min(V = 2, 4)
16- V ← Min(V = 2, 6)
17- return V = 2 to the Max update (step 11)
18- V ← Max(V = 3, Min-Value(S03))
19- V = ∞
20- V ← Min(V = ∞, Max-Value(S031))
21- V ← Min(V = ∞, 14)
22- V ← Min(V = 14, 5)
23- V ← Min(V = 5, 2)
24- return V = 2 to the Max update (step 18)
• Result: Minimax(S0) = Max(3, 2, 2) = 3
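A minimal Python sketch of the Max-Value / Min-Value recursion traced above, run on the same example tree; the dictionary encoding of the tree is an illustrative assumption.

```python
def max_value(state, tree):
    children = tree.get(state)
    if children is None:                 # terminal: state is a utility value
        return state
    v = float("-inf")                    # V = -inf
    for c in children:
        v = max(v, min_value(c, tree))   # V <- Max(V, Min-Value(child))
    return v

def min_value(state, tree):
    children = tree.get(state)
    if children is None:
        return state
    v = float("inf")                     # V = +inf
    for c in children:
        v = min(v, max_value(c, tree))   # V <- Min(V, Max-Value(child))
    return v

# The example tree from the trace: S0 is a MAX node, S01..S03 are MIN nodes.
tree = {"S0": ["S01", "S02", "S03"],
        "S01": [3, 12, 8], "S02": [2, 4, 6], "S03": [14, 5, 2]}
print(max_value("S0", tree))             # -> 3
```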
AlphaBeta Pruning Algorithm
Problems with minimax:
1. It has to visit the state space exhaustively
2. The number of states examined grows exponentially with the number of moves
3. The exponential growth cannot be removed entirely, but many states need never be visited
At a MAX node:
α = largest child utility found so far
β = β of the parent
At a MIN node:
α = α of the parent
β = smallest child utility found so far (a sketch follows)
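A minimal Python sketch of these α/β updates with cutoffs, reusing the minimax example tree; the dictionary encoding is again an illustrative assumption.

```python
def alphabeta(state, tree, alpha, beta, maximizing):
    children = tree.get(state)
    if children is None:                     # terminal: state is a utility value
        return state
    if maximizing:
        v = float("-inf")
        for c in children:
            v = max(v, alphabeta(c, tree, alpha, beta, False))
            alpha = max(alpha, v)            # largest child utility so far (MAX node)
            if alpha >= beta:
                break                        # beta cutoff: prune remaining children
        return v
    else:
        v = float("inf")
        for c in children:
            v = min(v, alphabeta(c, tree, alpha, beta, True))
            beta = min(beta, v)              # smallest child utility so far (MIN node)
            if beta <= alpha:
                break                        # alpha cutoff: prune remaining children
        return v

# Same tree as the minimax example; the leaves 4 and 6 under S02 are never visited.
tree = {"S0": ["S01", "S02", "S03"],
        "S01": [3, 12, 8], "S02": [2, 4, 6], "S03": [14, 5, 2]}
print(alphabeta("S0", tree, float("-inf"), float("inf"), True))   # -> 3
```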
AlphaBeta Pruning Algorithm Example
• Same game tree as the minimax example; at S0, initially α = -∞, β = +∞
• At S01 (MIN): the leaves 3, 12, 8 are examined in turn; β settles at 3,
S01 returns 3, and at S0 α rises to 3
• At S02 (MIN), entered with α = 3: the first leaf gives β = 2 ≤ α, so the
remaining leaves 4 and 6 are pruned and S02 returns 2
• At S03 (MIN), entered with α = 3: β drops from 14 to 5 to 2 as the
leaves are examined; S03 returns 2
• Root value: Max(3, 2, 2) = 3, with two leaves never visited
Arc consistency
• A value x ∈ D(Xi) is deleted if no value y ∈ D(Xj) satisfies the
constraint on the arc (Xi, Xj) (see the sketch below)
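A minimal Python sketch of this deletion rule (the REVISE step used by AC-3); the domains and the Xi < Xj constraint are illustrative assumptions.

```python
def revise(domains, xi, xj, constraint):
    """Delete x from D(Xi) if no y in D(Xj) satisfies constraint(x, y);
    returns True when D(Xi) was changed."""
    removed = {x for x in domains[xi]
               if not any(constraint(x, y) for y in domains[xj])}
    domains[xi] -= removed
    return bool(removed)

# Illustrative constraint Xi < Xj on small integer domains.
domains = {"Xi": {1, 2, 3}, "Xj": {1, 2}}
revise(domains, "Xi", "Xj", lambda x, y: x < y)
print(domains["Xi"])   # {1}: the values 2 and 3 have no support in D(Xj)
```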
Fringe : the collection of nodes that have been generated but not yet expanded
Breadth-first search (BFS)
1. Tree search implementation
2. The fringe is a FIFO queue
3. Nodes newly generated by the successor function are placed at
the end of the fringe
4. Complete if the shallowest goal node is at some finite depth d
5. Time complexity is O(b^(d+1))
a. d = depth of the shallowest goal
b. b = branching factor
6. Space complexity
a. b + b^2 + b^3 + … + b^d + b^(d+1) = O(b^(d+1)) (a sketch follows)
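A minimal Python sketch of BFS with a FIFO fringe; the infinite binary tree over the integers is an illustrative assumption, with the goal at depth d = 3.

```python
from collections import deque

def bfs(start, is_goal, successors):
    """FIFO fringe: newly generated nodes go to the end of the queue,
    so the shallowest goal (depth d) is found first."""
    fringe = deque([[start]])                 # queue of paths
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if is_goal(node):
            return path
        for s in successors(node):
            fringe.append(path + [s])         # place at the END of the fringe
    return None

# Illustrative tree over the integers: node n has children 2n and 2n + 1.
print(bfs(1, lambda n: n == 11, lambda n: [2 * n, 2 * n + 1]))  # [1, 2, 5, 11]
```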
Depth-first search (DFS)
• Tree search implementation
• The fringe is a LIFO stack
• Nodes newly generated by the successor function are
placed at the front of the fringe
• Incomplete if the tree contains an infinite-depth path
• Low memory requirement
• Fully expanded leaf nodes are dropped from the fringe
• Only a single path from the root to a leaf node is stored,
along with the unexpanded siblings along the path
• Space complexity O(bm) (about bm + 1 nodes), for branching factor b
and maximum depth m
• Time complexity O(b^m) (a depth-limited sketch follows)
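A minimal Python sketch of DFS with a LIFO fringe; a small depth limit is included (as in depth-limited search) because the illustrative tree below is infinite.

```python
def dfs(start, is_goal, successors, limit=4):
    """LIFO fringe: newly generated nodes are placed at the front
    (top of the stack); the depth limit guards against infinite paths."""
    fringe = [[start]]                        # stack of paths
    while fringe:
        path = fringe.pop()                   # LIFO: most recent first
        node = path[-1]
        if is_goal(node):
            return path
        if len(path) < limit:                 # depth-limited expansion
            for s in successors(node):
                fringe.append(path + [s])
    return None

# Same illustrative tree as the BFS sketch; the goal 11 lies at depth 3.
print(dfs(1, lambda n: n == 11, lambda n: [2 * n, 2 * n + 1]))
```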
Backtracking search (a variant of DFS)
• Only one successor is generated at a time instead of all
successors
• Each partially expanded node remembers which
successor to generate next
• Space requirement O(m) instead of O(bm)
• Depth-limited search performs DFS with a predefined
depth limit l
Iterative deepening DFS
• Iterative deepening depth-first search
• A variation of DFS
• The depth limit is increased gradually (0, 1, 2, …, d) until
the goal node is found
• Memory requirement O(bd)
• States near the root are regenerated multiple times
• Leaf nodes are generated only once
• Generated nodes:
o d·b + (d − 1)·b^2 + … + (1)·b^d = O(b^d) (a sketch follows)
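A minimal Python sketch of iterative deepening over the same illustrative tree; the limit grows 0, 1, 2, … until the goal is found.

```python
def depth_limited(path, is_goal, successors, limit):
    """Recursive depth-limited DFS over paths."""
    node = path[-1]
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for s in successors(node):
        result = depth_limited(path + [s], is_goal, successors, limit - 1)
        if result is not None:
            return result
    return None

def iddfs(start, is_goal, successors, max_depth=20):
    """Run DFS with limits 0, 1, 2, ...: shallow states are regenerated
    each round, but memory stays at O(bd)."""
    for limit in range(max_depth + 1):
        result = depth_limited([start], is_goal, successors, limit)
        if result is not None:
            return result
    return None

# Same illustrative tree; the goal 11 is found at limit 3.
print(iddfs(1, lambda n: n == 11, lambda n: [2 * n, 2 * n + 1]))
```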
Logical Agents
• Tautology
– A sentence that is true in all models /
assignments
– E.g., P ∨ ¬P
• Contradiction
– A sentence that is false in all models /
assignments
– E.g., P ∧ ¬P
Entailment
• We say that a sentence φ logically entails a sentence ψ (written φ ⊨ ψ) if and only if
every truth assignment that satisfies φ also satisfies ψ. More generally, we say that
a set of sentences Δ logically entails a sentence ψ (written Δ ⊨ ψ) if and only if
every truth assignment that satisfies all of the sentences in Δ also satisfies ψ.
• For example, the sentence p logically entails the sentence (p ∨ q). Since a
disjunction is true whenever one of its disjuncts is true, then (p ∨ q) must be true
whenever p is true. On the other hand, the sentence p does not logically entail (p ∧
q). A conjunction is true if and only if both of its conjuncts are true, and q may be
false. Of course, any set of sentences containing both p and q does logically entail
(p ∧ q).
• Once again, consider the case of (p ∧ q). Although p does not logically entail this
sentence, it is possible that both p and q are true and, therefore, (p ∧ q) is true.
However, the logical entailment does not hold because it is also possible that q is
false and, therefore, (p ∧ q) is false.
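Both examples can be verified mechanically by enumerating truth assignments; here is a minimal Python sketch (the encoding of sentences as lambdas is an illustrative assumption).

```python
from itertools import product

def entails(premise, conclusion):
    """phi |= psi iff every assignment satisfying phi also satisfies psi."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if premise(p, q))

print(entails(lambda p, q: p, lambda p, q: p or q))    # True:  p |= (p v q)
print(entails(lambda p, q: p, lambda p, q: p and q))   # False: p does not entail (p ^ q)
```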
Entailment : Propositional Example

P | Q | P ∧ Q | P ∨ Q | ¬P ∨ P | ¬P ∧ P
T | T |   T   |   T   |   T    |   F
F | T |   F   |   T   |   T    |   F
F | F |   F   |   F   |   T    |   F
T | F |   F   |   T   |   T    |   F
Entailment : 2nd Propositional Example

S# | P | Q | Sentence
1  | T | T |    T
2  | T | F |    F
3  | F | T |    T
4  | F | F |    T
• [1] R2 is stated as a general sentence: if there is a breeze in a cell, then one or more of the
neighboring cells contain a pit.
Inference:
• The table enumerates all the models (assignments) of the KB (sentences)
• 2^n = 2^7 = 128 models, where n = the number of propositions in the KB
Biconditional elimination :
• α ↔ β
• (α → β) ∧ (β → α)
• [1] An inference algorithm that derives only entailed sentences is called sound or truth-preserving
Example inference
• KB : R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5
• R1 : ¬P1,1
• R2 : B1,1 ↔ (P1,2 ∨ P2,1)
• R3 : B2,1 ↔ (P1,1 ∨ P2,2 ∨ P3,1)
• R4 : ¬B1,1
• R5 : B2,1
• Objective : prove there is no pit in [1, 2], i.e., ¬P1,2
• Biconditional elimination on R2
• R6 : (B1,1 → (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) → B1,1)
• And-elimination
• R7 : (P1,2 ∨ P2,1) → B1,1
• Logical equivalence for the contrapositive
• R8 : ¬B1,1 → ¬(P1,2 ∨ P2,1)
• Modus ponens with the percept ¬B1,1 (R4)
• R9 : ¬(P1,2 ∨ P2,1)
• De Morgan's rule
• R10 : ¬P1,2 ∧ ¬P2,1
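The same conclusion can be cross-checked by enumerating all 2^7 = 128 models of the KB; a minimal Python sketch follows (the symbol spellings P11, B11, etc. are illustrative shorthand).

```python
from itertools import product

SYMBOLS = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def kb(m):
    """R1 .. R5 evaluated in model m."""
    return (not m["P11"]                                           # R1
            and (m["B11"] == (m["P12"] or m["P21"]))               # R2
            and (m["B21"] == (m["P11"] or m["P22"] or m["P31"]))   # R3
            and not m["B11"]                                       # R4
            and m["B21"])                                          # R5

def entails(kb, query):
    """KB |= query iff query holds in every model that satisfies KB."""
    for values in product([True, False], repeat=len(SYMBOLS)):
        m = dict(zip(SYMBOLS, values))
        if kb(m) and not query(m):
            return False
    return True

print(entails(kb, lambda m: not m["P12"]))   # True: KB |= no pit in [1, 2]
```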
Resolution
• Unit resolution: a clause and a unit clause complementary to one of its
literals resolve to the clause with that literal removed
• E.g., (l1 ∨ l2 ∨ l3) and ¬l2 resolve to (l1 ∨ l3)
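A minimal Python sketch of this rule, with literals encoded as strings and "~" marking negation (an illustrative assumption).

```python
def unit_resolve(clause, unit):
    """Unit resolution: from a clause and a unit literal complementary
    to one of its literals, infer the clause without that literal."""
    comp = unit[1:] if unit.startswith("~") else "~" + unit
    assert comp in clause, "unit must be complementary to a literal in the clause"
    return [l for l in clause if l != comp]

# Resolving (l1 v l2 v l3) with the unit clause ~l2 yields (l1 v l3).
print(unit_resolve(["l1", "l2", "l3"], "~l2"))   # ['l1', 'l3']
```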
Knowledge Bases (examples 1–5)
• listensToMusic(butch). Fact
• playsAirGuitar(butch):-happy(butch). Rule
• playsAirGuitar(butch):-listensToMusic(butch). Rule
• happy(vincent). Fact
• cares(vincent,mia). Relation
• cares(marcellus,mia). Relation
• cares(pumpkin,honey_bunny). Relation
• cares(honey_bunny,pumpkin). Relation
• jealous(X,Y) : X is jealous of Y if there is some individual Z whom X
cares about, and Y also cares about the same individual Z
• jealous(X,Y):-cares(X,Z),cares(Y,Z). Rule
• For the query ?-jealous(V,U).
• Since cares(vincent,mia) is the first fact in KB5, the first answer is
V = U = vincent
• Typing ; gives the next answer: V = vincent, U = marcellus
• Prolog keeps displaying further values of V and U, one matching pair
at a time
• Until . is pressed to end the enumeration