3 AI
What is Intelligence?
Intelligence is the ability to learn about, learn from, understand, and interact with one's environment.
What is Artificial Intelligence (AI)?
AI is, simply put, a way of making a computer think.
AI is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics associated with intelligence in human behavior.
This requires many processes:
1- Learning: acquiring knowledge and the rules for using that knowledge.
2- Reasoning: using the acquired rules to reach approximate or definite conclusions.
Introduction to Artificial Intelligence
AI Principles:
1- The data structures used in knowledge representation.
2- The algorithms needed to apply that knowledge.
3- The languages and programming techniques used in their implementation.
Characteristics of AI
• Game playing
• Speech recognition
• Understanding natural language
• Computer vision
• Expert systems
• Heuristic classification
Graph Theory:-
A graph consists of a set of a nodes and a set of arcs or links
connecting pairs of nodes. The domain of state space search, the
nodes are interpreted to be stated in problem solving process, and
the arcs are taken to be transitions between states.
Graph theory is our best tool for reasoning about the structure of
objects and relations.
Graph Theory
Nodes={a,b,c,d,e}
Arcs={(a,b), (a,c),(b,c),(b,e),(d,e),(d,c),(e,d)}
[Figure: a tree rooted at node a, with the nodes b-j arranged in levels below it]
Nodes={a,b,c,d,e,f,g,h,i,j}
Arcs={(a,b),(a,c),(a,d),(b,e),(b,f),(c,g),(c,h),(c,i),(d,j)}
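In code, an arc list like this is typically stored as an adjacency list; a minimal Python sketch for the second example graph:

```python
# Adjacency-list representation of the second example graph (a tree).
arcs = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "e"), ("b", "f"),
        ("c", "g"), ("c", "h"), ("c", "i"), ("d", "j")]

graph = {}
for parent, child in arcs:
    graph.setdefault(parent, []).append(child)
    graph.setdefault(child, [])  # leaves get an empty child list

print(graph["a"])  # children of the root
print(graph["c"])
```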
State Space Representation
A state space is represented by a four-tuple [N, A, S, GD], where N is the set of nodes (states), A is the set of arcs (steps between states), S is the nonempty set of start states, and GD is the nonempty set of goal states.
Example (Traveling Salesperson): starting at A, find the shortest path through all the cities, visiting each city exactly once and returning to A.
a b c d e a=375
a b c e d a =425
a b d c e a = 474
……………………
2- Nearest Neighbor Heuristic
At each stage of the circuit, go to the nearest unvisited city. This strategy reduces the complexity to N, so it is highly efficient, but it is not guaranteed to find the shortest path, as the following example shows:
The cost of the nearest-neighbor path a e d b c a = 550, which is not the shortest path; the comparatively high cost of arc (c, a) defeated the heuristic.
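A sketch of the nearest-neighbor heuristic in Python, using a hypothetical distance matrix (not the lecture's figure) chosen so that the greedy tour misses the optimum found by exhaustive search:

```python
# Nearest-neighbor heuristic for a small TSP instance.
# The distance matrix below is a hypothetical example, not the lecture's
# figure; it is chosen so the heuristic misses the optimal tour.
from itertools import permutations

dist = {
    ("a", "b"): 100, ("a", "c"): 200, ("a", "d"): 125, ("a", "e"): 75,
    ("b", "c"): 50, ("b", "d"): 100, ("b", "e"): 125,
    ("c", "d"): 110, ("c", "e"): 125,
    ("d", "e"): 50,
}

def d(u, v):
    return dist.get((u, v)) or dist[(v, u)]  # symmetric distances

def tour_cost(tour):
    return sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def nearest_neighbor(start, cities):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: d(tour[-1], c))  # greedy step
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # return to the starting city
    return tour

cities = ["a", "b", "c", "d", "e"]
nn_tour = nearest_neighbor("a", cities)
best = min((["a"] + list(p) + ["a"] for p in permutations(cities[1:])),
           key=tour_cost)
print(nn_tour, tour_cost(nn_tour))   # greedy tour
print(best, tour_cost(best))         # optimal tour by exhaustive search
```

On this instance the greedy tour happens to follow the same order as in the lecture, a e d b c a, and costs more than the exhaustive optimum.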
This type of search visits the nodes of the tree in a specific order until it reaches the goal. The order can be by breadth, in which case the strategy is called breadth-first search, or by depth, in which case it is called depth-first search.
In breadth-first search, when a state is examined, all of its siblings are examined before any of its children. The space is searched level by level, proceeding all the way across one level before going down to the next level.
Breadth-First Search Algorithm
Begin
  Open := [start];
  Closed := [ ];
  While Open ≠ [ ] do
  Begin
    Remove leftmost state from Open, call it x;
    If x is a goal then return (success)
    Else
    Begin
      Generate children of x;
      Put x on Closed;
      Eliminate children of x already on Open or Closed; (removing repeated nodes)
      Put remaining children on right end of Open
    End
  End;
  Return (failure)
End.
1 – Open= [A]; closed = [ ].
2 – Open= [B, C, D]; closed = [A].
3 – Open= [C, D, E, F]; closed = [B, A].
4 – Open= [D, E, F, G, H]; closed = [C, B, A].
5 – Open= [E, F, G, H, I, J]; closed = [D, C, B, A].
6 – Open= [F, G, H, I, J, K, L]; closed = [E, D, C, B, A].
7 – Open= [G, H, I, J, K, L, M]; closed = [F, E, D, C, B, A].
8 – Open= [H, I, J, K, L, M, N]; closed = [G, F, E, D, C, B, A].
9 – and so on until either U is found or open = [ ].
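The breadth-first algorithm above can be sketched in Python; here it is run on the small tree from the graph-theory example (the A-U graph of the trace is not reproduced in the notes):

```python
from collections import deque

# Breadth-first search following the Open/Closed scheme above,
# run on the small tree from the graph-theory example.
graph = {"a": ["b", "c", "d"], "b": ["e", "f"], "c": ["g", "h", "i"],
         "d": ["j"], "e": [], "f": [], "g": [], "h": [], "i": [], "j": []}

def bfs(start, goal):
    open_list = deque([start])
    closed = []
    while open_list:
        x = open_list.popleft()            # remove leftmost state
        if x == goal:
            return "success", closed
        closed.append(x)                   # put x on closed
        for child in graph[x]:             # generate children of x
            if child not in open_list and child not in closed:
                open_list.append(child)    # right end of open
    return "failure", closed

print(bfs("a", "j"))
```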
In depth-first search, when a state is examined, all of its children and their descendants are examined before any of its siblings.
Depth-first search goes deeper into the search space whenever this is possible; only when no further descendants of a state can be found are its siblings considered.
Depth-First Search Algorithm
Begin
  Open := [start];
  Closed := [ ];
  While Open ≠ [ ] do
  Begin
    Remove leftmost state from Open, call it x;
    If x is a goal then return (success)
    Else
    Begin
      Generate children of x;
      Put x on Closed;
      Eliminate children of x already on Open or Closed; (removing repeated nodes)
      Put remaining children on left end of Open
    End
  End;
  Return (failure)
End.
1 – Open= [A]; closed = [ ].
2 – Open= [B, C, D]; closed = [A].
3 – Open= [E, F, C, D]; closed = [B, A].
4 – Open= [K, L, F, C, D]; closed = [E, B, A].
5 – Open= [S, L, F, C, D]; closed = [K, E, B, A].
6 – Open= [L, F, C, D]; closed = [S, K, E, B, A].
7 – Open= [T, F, C, D]; closed = [L, S, K, E, B, A].
8 – Open= [F, C, D,]; closed = [T, L, S, K, E, B, A].
9 – Open= [M, C, D] (L is already on Closed); closed = [F, T, L, S, K, E, B, A].
10 – and so on until either U is found or open = [ ].
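The same scheme with children placed on the left end of Open gives depth-first search; again run on the small tree from the graph-theory example:

```python
# Depth-first search: same Open/Closed scheme as BFS,
# but children go on the LEFT end of open.
graph = {"a": ["b", "c", "d"], "b": ["e", "f"], "c": ["g", "h", "i"],
         "d": ["j"], "e": [], "f": [], "g": [], "h": [], "i": [], "j": []}

def dfs(start, goal):
    open_list = [start]
    closed = []
    while open_list:
        x = open_list.pop(0)               # remove leftmost state
        if x == goal:
            return "success", closed
        closed.append(x)                   # put x on closed
        children = [c for c in graph[x]
                    if c not in open_list and c not in closed]
        open_list = children + open_list   # left end of open
    return "failure", closed

print(dfs("a", "j"))
```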
A heuristic is a method that might not always find the best solution but is guaranteed to find a good solution in reasonable time. By sacrificing completeness, it increases efficiency. Heuristic search is useful in solving problems which:
• Could not be solved in any other way.
• Have solutions that take an infinite or very long time to compute.
Heuristic search methods are generate-and-test algorithms; among these methods are:
1- Hill Climbing.
2- Best-First Search.
3- The A and A* algorithms.
The idea here is that you don't keep a big list of states around; you keep track of only the one state you are considering and the path that got you there from the initial state. At every state you choose the successor that leads you closest to the goal (according to the heuristic estimate) and continue from there.
The name "Hill Climbing" comes from the idea that you are trying to find the top of a hill, and you go in the direction that is up from wherever you are. This technique often works, but since it uses only local information, it can get stuck at a local maximum.
Hill Climbing Algorithm
Begin
  cs := start state;
  Open := [start];
  Stop := false;
  Path := [start];
  While (not Stop) do
  Begin
    If (cs = goal) then return (Path);
    Generate all children of cs and put them into Open;
    If (Open = [ ]) then Stop := true
    Else
    Begin
      x := cs;
      For each state y in Open do
      Begin
        Compute the heuristic value h(y);
        If y is better than x then x := y
      End;
      If x is better than cs then
      Begin
        cs := x; add cs to Path
      End
      Else Stop := true
    End
  End;
  Return (failure)
End.
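The algorithm can be sketched in Python on a tree whose heuristic values match the worked trace (B2, C3, D1, F2, G4, M4, N5, R4, S4); the value h(A) and "larger h is better" are assumptions:

```python
# Hill climbing on a tree whose h-values match the worked example;
# here LARGER h is treated as better. h("A") is an assumed value.
children = {"A": ["B", "C", "D"], "C": ["F", "G"], "G": ["M", "N"],
            "N": ["R", "S"]}
h = {"A": 0, "B": 2, "C": 3, "D": 1, "F": 2, "G": 4,
     "M": 4, "N": 5, "R": 4, "S": 4}

def hill_climb(start):
    cs, path = start, [start]
    while True:
        kids = children.get(cs, [])
        if not kids:
            return path                        # nowhere left to go
        best = max(kids, key=lambda s: h[s])   # most promising child
        if h[best] > h[cs]:                    # strictly better: move there
            cs = best
            path.append(cs)
        else:
            return path                        # local maximum: stop

print(hill_climb("A"))
```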
Open=[A] Closed=[ ] → select A
Open=[C3, B2, D1] Closed=[A] → select C3
Open=[G4, F2] Closed=[A, C3] → select G4
Open=[N5, M4] Closed=[A, C3, G4] → select N5
Open=[R4, S4] Closed=[A, C3, G4, N5] → select R4
g(n): measures the actual length of the path from the start state to state n.
h(n): a heuristic estimate of the distance from state n to the goal.
The A algorithm orders the states on Open by f(n) = g(n) + h(n).
A-Algorithm: Example 2
A* Algorithm
1. Open=[a4], Closed=[]
2. Open=[c4,b6,d6], Closed=[a4]
3. Open=[e5,f5,b6,d6,g6], Closed=[a4,c4]
4. Open=[f5,b6,d6,g6,h6,i7], Closed=[a4,c4,e5]
5. Open=[j5,b6,d6,g6,h6,j7,k7], Closed=[a4,c4,e5,f5]
6. Open=[l5, b6,d6,g6,h6,j7,k7], Closed=[a4,c4,e5,f5,j5]
7. Open=[m5, b6,d6,g6,h6,j7,k7,n7],Closed=[a4,c4,e5,f5,j5,l5]
8. Success, m=goal!!
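A minimal A* sketch ordering Open by f(n) = g(n) + h(n), on a small hypothetical weighted graph (not the lecture's example):

```python
import heapq

# A* search: order the open list by f(n) = g(n) + h(n).
# Small hypothetical graph (not the lecture's example); h is admissible.
edges = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}

def a_star(start, goal):
    open_heap = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {}
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                              # already expanded cheaper
        best_g[state] = g
        for nxt, cost in edges[state]:
            heapq.heappush(open_heap,
                           (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None                                   # no path found

print(a_star("S", "G"))
```

Note how the direct arc A-G (cost 5) is passed over: the detour through B has a lower f value, so A* finds the cheaper path.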
Draw the path to the goal using the Hill Climbing search algorithm.
3-Puzzle
Example: Consider the 3-puzzle problem, which is a simpler version of the 8-puzzle where the board is 2 × 2 and there are three tiles, numbered 1, 2, and 3. There are four moves: move the blank up, right, down, or left. The cost of each move is 1. Consider this start state:
Draw the entire non-repeating state space for this problem, labeling nodes and arcs clearly.
Assume the goal is:
It is used to describe nouns, adjectives, verbs (actions), and objects.
1- It is hot.
2- It is not hot.
3- If it is raining, then will not go to mountain.
4- The food is good and the service is good.
5- If the food is good and the service is good then
the restaurant is good.
Answer (using propositions such as hot, raining, goMountain, food, service, restaurant):
1- hot
2- ¬hot
3- raining → ¬goMountain
4- food ∧ service
5- (food ∧ service) → restaurant
4.2 The Predicate Calculus (Also known as First-Order Logic):
Local search
Simulated annealing
Tabu search
Metaheuristics
Metaheuristics: a new kind of approximate algorithm has emerged which tries to combine basic heuristic methods in higher-level frameworks aimed at efficiently and effectively exploring a search space. These methods are nowadays commonly called metaheuristics.
The term metaheuristic is derived from the composition of two Greek words: the prefix meta, meaning "beyond, at an upper level", and the verb heuriskein, meaning "to find". Metaheuristics were often called modern heuristics.
A heuristic algorithm typically intends to find a good solution to an optimization problem by trial and error in a reasonable amount of computing time. There is no guarantee of finding the best or optimal solution, though the result may be better than an educated guess. Metaheuristic algorithms are higher-level heuristic algorithms.
Difference between heuristic and metaheuristic:
A heuristic is a solving method for a specific problem (it can benefit from the properties of the problem being solved).
A metaheuristic is a generalized solving method, like the Genetic Algorithm (GA), Tabu Search (TS), etc.
Disadvantage:
The Tabu search approach is to climb the hill in the steepest direction, stop at the top, and then climb downwards to search for another hill to climb. The drawback is that many iterations are spent climbing each hill rather than searching for the tallest hill.
GRASP
The GRASP metaheuristic is an iterative greedy heuristic for solving combinatorial optimization problems. It was introduced in 1989. Each iteration of the GRASP algorithm consists of two steps:
1- Construction: a feasible solution is built using a randomized greedy algorithm.
2- Local search: a local search heuristic is then applied, starting from the constructed solution.
GRASP = Greedy Randomized Adaptive Search Procedure
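A sketch of the two GRASP steps on a toy knapsack instance (the problem, item data, and restricted-candidate-list size are illustrative assumptions, not part of the notes):

```python
import random

# GRASP sketch on a toy knapsack problem: randomized greedy construction
# (restricted candidate list), then a simple swap-based local search.
# Items {name: (value, weight)} and the capacity are assumed for illustration.
items = {"p": (10, 5), "q": (8, 4), "r": (7, 3), "s": (4, 2), "t": (2, 2)}
CAPACITY = 9

def value(sol):  return sum(items[i][0] for i in sol)
def weight(sol): return sum(items[i][1] for i in sol)

def construct(rng, rcl_size=2):
    """Step 1: build a feasible solution with a randomized greedy rule."""
    sol = set()
    while True:
        cand = [i for i in items if i not in sol
                and weight(sol) + items[i][1] <= CAPACITY]
        if not cand:
            return sol
        # Restricted candidate list: the best few by value/weight ratio.
        cand.sort(key=lambda i: items[i][0] / items[i][1], reverse=True)
        sol.add(rng.choice(cand[:rcl_size]))

def local_search(sol):
    """Step 2: improve the constructed solution by single-item swaps."""
    improved = True
    while improved:
        improved = False
        for out in list(sol):
            for inn in items:
                new = (sol - {out}) | {inn}
                if (inn not in sol and weight(new) <= CAPACITY
                        and value(new) > value(sol)):
                    sol, improved = new, True
                    break
            if improved:
                break
    return sol

def grasp(iterations=20, seed=0):
    rng = random.Random(seed)   # fixed seed for reproducibility
    best = set()
    for _ in range(iterations):
        sol = local_search(construct(rng))
        if value(sol) > value(best):
            best = sol
    return best

best = grasp()
print(sorted(best), value(best), weight(best))
```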
Artificial Neural Networks (ANN)
Theory of Neural Networks (NN)
1- Parallelism
2- Capacity for adaptation ("learning rather than programming")
3- Capacity for generalization
4- No explicit problem definition required
5- Abstraction and solving problems with noisy data
6- Ease of construction and learning
7- Distributed memory
8- Fault tolerance
Types of learning
1- Supervised learning:
In supervised learning, at every step the system is informed of the exact output vector. The weights are changed according to a formula (e.g., the delta rule) if the output is unequal to the target. This method can be compared to learning under a teacher, who knows the contents to be learned and regulates them accordingly in the learning procedure.
2- Unsupervised learning:
Here the correct final vector is not specified; instead the weights are changed by random numbers. With the help of an evaluation function one can ascertain whether the output calculated with the changed weights is better than the previous one. If so, the changed weights are stored; otherwise they are forgotten. This type of learning is also called reinforcement learning.
3- Learning through self-organization:
The weights change themselves at every learning step. The change depends upon:
1- The neighborhood of the input pattern.
2- The probability with which each permissible input pattern is offered.
Typical Architecture of NN
1 - Single-layer net
2 - Multilayer net
The figure shown below is an example of a three-layered neural network with two hidden neurons.
Basic Activation Functions
The activation function (sometimes called a transfer function), shown in the figure below, can be a linear or nonlinear function. There are many different types of activation functions; the selection of one type over another depends on the particular problem that the neuron (or neural network) is to solve. The most common types of activation function are:
Alternate nonlinear model of an ANN
Example: a single neuron with inputs x1, x2, x3, weights w1, w2, w3, and output y = f(net).
Sol: net = w1·x1 + w2·x2 + w3·x3 = 0.5·0 + 1·(-0.3) + (-0.7)·0.6 = -0.72
1- If f is linear: y = net = -0.72
2- If f is a hard limiter (on-off): y = -1
3- If f is sigmoid: y = 1/(1 + e^0.72) ≈ 0.33
5- If f is a TLU with b = 0.6, a = 3: y = 0.28
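The worked example can be checked in Python; the split of the numbers into weights (0.5, 1, -0.7) and inputs (0, -0.3, 0.6) is inferred from the arithmetic, not stated explicitly in the notes:

```python
import math

# Worked neuron example: net = sum(w_i * x_i).
# The assignment of weights (0.5, 1, -0.7) and inputs (0, -0.3, 0.6)
# is inferred from the arithmetic in the example (an assumption).
w = [0.5, 1.0, -0.7]
x = [0.0, -0.3, 0.6]
net = sum(wi * xi for wi, xi in zip(w, x))

linear = net                            # linear activation
hard = 1 if net >= 0 else -1            # hard limiter (on-off)
sigmoid = 1 / (1 + math.exp(-net))      # logistic sigmoid

print(net, linear, hard, round(sigmoid, 3))
```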
EX: The output of a simulated neuron using a sigmoid function is 0.5. Find the value of the threshold when the inputs are x1 = 1, x2 = 1.5, x3 = 2.5 and all initial weights have the value 0.2.
Solution: a sigmoid output of 0.5 means its argument is zero, so net - θ = 0. Since net = 0.2·(1 + 1.5 + 2.5) = 1, the threshold is θ = 1.
The Bias
Some networks employ a bias unit as part of every layer except the output layer. These units have a constant activation value of 1 or -1; their weights may be adjusted during learning. The bias unit provides a constant term in the weighted sum, which results in an improvement of the convergence properties of the network.
A bias acts exactly like a weight on a connection from a unit whose activation is always 1. Increasing the bias increases the net input to the unit. If a bias is included, the activation function is typically applied to net = b + Σi xi·wi.
Some authors do not use a bias weight, but instead use a fixed threshold for the activation function.
Learning Algorithms
NNs mimic the way a child learns to identify shapes and colors. NN algorithms are able to adapt continuously, based on current results, to improve performance. Adaptation or learning is an essential feature of NNs, needed to handle the new "environments" that are continuously encountered. In contrast to NN algorithms, traditional statistical techniques are not adaptive, but typically process all training data simultaneously before being used with new data. The performance of a learning procedure depends on many factors, such as:
1- The choice of error function.
2- The net architecture.
3- The types of nodes and possible restrictions on the values of the weights.
4- The activation function.
Convergence of the net:
The AND function can be solved if we modify its representation to express the inputs as well as the targets in bipolar form. Bipolar representation of the inputs and targets allows modification of a weight when the input unit and the target value are both "on" at the same time, and when they are both "off" at the same time; all units will learn whenever there is an error in the output. The Hebb net for the AND function with bipolar inputs and targets is obtained by replacing each 0 with -1.
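The Hebb rule (Δwi = xi·t, Δb = t) applied to the bipolar AND data gives the classic weights w = (2, 2), b = -2; a Python check:

```python
# Hebb learning for AND with bipolar inputs and targets:
# after one pass over the data, the classic result is w = (2, 2), b = -2.
data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

w, b = [0, 0], 0
for (x1, x2), t in data:
    w[0] += x1 * t        # Hebb rule: delta_w_i = x_i * t
    w[1] += x2 * t
    b += t                # delta_b = t

print(w, b)

# Check the trained net classifies all four patterns correctly.
for (x1, x2), t in data:
    net = w[0] * x1 + w[1] * x2 + b
    print((x1, x2), 1 if net >= 0 else -1, "target", t)
```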
Basic Delta Rule (BDR)
Note: computation of the error.
Back-Propagation Training Algorithm
The algorithm is as follows: compute the error at the output layer, then compute the new weight vector.
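A minimal numeric sketch of the basic delta rule, Δw = η(t - y)x for a single linear unit, on an assumed toy task:

```python
# Basic delta rule for a single linear unit: delta_w = eta * (t - y) * x.
# Toy task (an assumption for illustration): learn y = 2x from one sample.
eta = 0.1        # learning rate
w = 0.0          # initial weight
x, t = 1.0, 2.0  # input and target

for _ in range(100):
    y = w * x                # linear unit output
    w += eta * (t - y) * x   # delta-rule weight update

print(w)   # converges toward 2.0
```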
The Hopfield Network
1- Crossover