softcomputing_1
Learning with a Teacher
– works in parallel
– example application: rock-type identification
[Diagram: feature vector (training data) → classifier → classification result; the result of learning sets the classifier parameters (weights)]
No softcomputing: the biological nervous system
Anatomy Foundations (2)
– brain (ca. 1.3 kg) – 2 hemispheres – feeling, thinking, movement
– cerebellum – movement control
Anatomy Foundations (3)
– cerebral cortex – thickness: 2 mm, area: ca. 1.5 m²
scalar description: $y = w_0 + \sum_{j=1}^{M} w_j x_j$

vector description: $y(\tilde{\mathbf{x}}) = \sum_{j=0}^{M} w_j x_j = \mathbf{w}^T \tilde{\mathbf{x}}$

error function: $E(\mathbf{W}) = \frac{1}{2}\sum_{n=1}^{N}\sum_{k=1}^{K}\bigl(y_k(\mathbf{x}^n) - t_k^n\bigr)^2 = \frac{1}{2}\sum_{n=1}^{N}\sum_{k=1}^{K}\Bigl(\sum_{j=0}^{M} w_{kj}\, x_j^n - t_k^n\Bigr)^2$

looking for a minimum of the $E(\mathbf{W})$ function:

$\frac{\partial E(\mathbf{W})}{\partial w_{kj}} = 0 \qquad \forall\, k, j$
Pseudoinverse Algorithm
$\frac{\partial E(\mathbf{W})}{\partial w_{kj}} = \sum_{n=1}^{N}\Bigl(\sum_{j'=0}^{M} w_{kj'}\, x_{j'}^{n} - t_{k}^{n}\Bigr) x_{j}^{n} = 0$

hence, for all $k, j$:

$\sum_{n=1}^{N}\sum_{j'=0}^{M} w_{kj'}\, x_{j'}^{n}\, x_{j}^{n} = \sum_{n=1}^{N} t_{k}^{n}\, x_{j}^{n}$
where:

$\mathbf{X} = \begin{bmatrix} 1 & x_1^1 & \cdots & x_M^1 \\ 1 & x_1^2 & \cdots & x_M^2 \\ \vdots & & & \vdots \\ 1 & x_1^N & \cdots & x_M^N \end{bmatrix}, \quad \mathbf{T} = \begin{bmatrix} t_1^1 & t_2^1 & \cdots & t_K^1 \\ t_1^2 & t_2^2 & \cdots & t_K^2 \\ \vdots & & & \vdots \\ t_1^N & t_2^N & \cdots & t_K^N \end{bmatrix}, \quad \mathbf{W} = \begin{bmatrix} w_{10} & w_{11} & \cdots & w_{1M} \\ w_{20} & w_{21} & \cdots & w_{2M} \\ \vdots & & & \vdots \\ w_{K0} & w_{K1} & \cdots & w_{KM} \end{bmatrix}$
finally:

$\mathbf{X}^T \mathbf{T} = \mathbf{X}^T \mathbf{X}\, \mathbf{W}^T \;\Rightarrow\; \mathbf{W}^T = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{T} = \mathbf{X}^{\dagger}\, \mathbf{T},$

where $\mathbf{X}^{\dagger}$ is the pseudoinverse of $\mathbf{X}$.
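The closed-form pseudoinverse solution can be sketched with NumPy; the toy data set below (targets $t = x_1 + x_2$) is made up for illustration and is not from the lecture:

```python
import numpy as np

# Hypothetical training set: N=4 samples, M=2 features, K=1 output.
X_raw = np.array([[0.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 0.0],
                  [1.0, 1.0]])
T = np.array([[0.0], [1.0], [1.0], [2.0]])   # targets t^n (here: x1 + x2)

# Prepend the bias column of ones, as in the definition of X above.
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])

# W^T = (X^T X)^{-1} X^T T = pinv(X) @ T  -- the pseudoinverse solution.
W_T = np.linalg.pinv(X) @ T

print(W_T.ravel())   # close to [0, 1, 1], i.e. w0=0, w1=1, w2=1
```

`np.linalg.pinv` computes the Moore–Penrose pseudoinverse via SVD, which also handles the case where $\mathbf{X}^T\mathbf{X}$ is singular.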
Gradient-Type Algorithms (1)
iterative approach:
[Figure: successive points x^(τ) descending along the error curve toward its minimum]
steps:
– start with a random weight vector $\mathbf{w}^{(0)}$
– compute a new weight vector following: $\mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} - \eta\, \nabla_{\mathbf{w}} E$
– repeat the process, generating the sequence of weight vectors $\mathbf{w}^{(\tau)}$
– components of the weight vectors are calculated by: $w_{kj}^{(\tau+1)} = w_{kj}^{(\tau)} - \eta\, \frac{\partial E(\mathbf{w})}{\partial w_{kj}}$

error function: $E(\mathbf{w}) = \sum_{n} E^{n}(\mathbf{w}), \quad E^{n}(\mathbf{w}) = \frac{1}{2}\sum_{k=1}^{K}\Bigl(\sum_{j=0}^{M} w_{kj}\, x_j^n - t_k^n\Bigr)^2$
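The batch update above can be sketched in NumPy; the toy data, the learning rate $\eta = 0.1$, and the iteration count are assumed values, not from the lecture:

```python
import numpy as np

def batch_gradient_step(W, X, T, eta):
    """One step of w <- w - eta * dE/dw for the linear model y = W @ x~.

    W: (K, M+1) weights, X: (N, M+1) inputs with bias column, T: (N, K) targets.
    """
    Y = X @ W.T              # network outputs y_k(x^n) for all samples at once
    grad = (Y - T).T @ X     # dE/dW = sum_n (y^n - t^n) (x^n)^T
    return W - eta * grad

# Hypothetical toy data: learn t = x1 + x2 (bias column already prepended).
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [2.0]])

W = np.zeros((1, 3))         # w^(0): zeros here; a random start also works
for _ in range(500):
    W = batch_gradient_step(W, X, T, eta=0.1)
print(W)                     # converges toward [0, 1, 1]
```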
Gradient-Type Algorithms (2)
sequential approach:
$w_{kj}^{(\tau+1)} = w_{kj}^{(\tau)} - \eta\, \frac{\partial E^{n}}{\partial w_{kj}}, \qquad \frac{\partial E^{n}}{\partial w_{kj}} = \bigl(y_k(\mathbf{x}^n) - t_k^n\bigr)\, x_j^n = \delta_k^n\, x_j^n$
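A per-sample sketch of the sequential rule, under the same assumptions as before (made-up toy data, learning rate, and iteration count):

```python
import numpy as np

# Toy data (hypothetical): learn t = x1 + x2, bias column prepended.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [2.0]])

rng = np.random.default_rng(0)
W = np.zeros((1, 3))
eta = 0.1
for _ in range(2000):
    n = rng.integers(len(X))            # pick one training pattern x^n
    delta = W @ X[n] - T[n]             # delta_k^n = y_k(x^n) - t_k^n
    W -= eta * np.outer(delta, X[n])    # w_kj <- w_kj - eta * delta_k^n * x_j^n
print(W)
```

Unlike the batch version, each step uses only one pattern, so the weights follow a noisy but much cheaper path toward the same minimum.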
$a = \sum_{j=0}^{M} w_j x_j$

bipolar: $g(a) = \begin{cases} +1 & \text{for } a \ge 0 \\ -1 & \text{for } a < 0 \end{cases}$ $\qquad$ unipolar: $g(a) = \begin{cases} 1 & \text{for } a \ge 0 \\ 0 & \text{for } a < 0 \end{cases}$
Perceptron (2)
error function gradient $\frac{\partial E}{\partial w_{jk}}$ – does not exist, because $g(a)$ is not differentiable
perceptron criteria:
compare the actual output value $y_i$ with the required output value $d_i$ and:
– if $y_i = d_i$, the weight values $W_{ij}$ and $w_0$ are unchanged
– if $y_i = 0$ and the required value $d_i = 1$, update the weights as follows: $W_{ij}(t+1) = W_{ij}(t) + x_j, \quad b_i(t+1) = b_i(t) + 1$
– (analogously, if $y_i = 1$ and $d_i = 0$, the weights are decreased)

error measure: $E = \sum_{k=1}^{p}\bigl(y_i^{(k)} - d_i^{(k)}\bigr)^2$
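A minimal sketch of this learning rule, assuming the unipolar activation and logical AND as a linearly separable toy task; the combined update $w \leftarrow w + \eta(d - y)x$ covers both correction cases above:

```python
# Minimal perceptron trainer (sketch). The update w <- w + eta*(d - y)*x
# adds x when y=0, d=1 and subtracts x when y=1, d=0; data is logical AND.
def step(a):
    return 1 if a >= 0 else 0          # unipolar step activation

def train_perceptron(samples, eta=1.0, epochs=20):
    w = [0.0, 0.0, 0.0]                # [bias, w1, w2]
    for _ in range(epochs):
        for x1, x2, d in samples:
            x = [1.0, x1, x2]          # prepend the bias input x0 = 1
            y = step(sum(wj * xj for wj, xj in zip(w, x)))
            if y != d:                 # only misclassified patterns move w
                w = [wj + eta * (d - y) * xj for wj, xj in zip(w, x)]
    return w

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(AND)
print([step(w[0] + w[1] * x1 + w[2] * x2) for x1, x2, _ in AND])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees that this loop stops making corrections after finitely many epochs.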
Perceptron – Problems (1)
linear separability: the XOR problem – Minsky & Papert (1969):

In1  In2  Out
 0    0    0
 0    1    1
 1    0    1
 1    1    0

[Figure: classes C1 and C2 in the (X1, X2) plane cannot be separated by a single line y(x)=0; a hidden unit S with input weights w, w and output weights w, w, -2w realizes XOR]
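The two-layer fix sketched on the slide (hidden unit S with weights w, w, feeding the output through −2w) can be checked numerically; w = 1 and the thresholds 1.5w and 0.5w are assumed values chosen so that S computes AND and the output computes XOR:

```python
# XOR via one hidden threshold unit, mirroring the slide's weights
# (w, w into hidden unit S; w, w, -2w into the output). The thresholds
# 1.5*w and 0.5*w are assumptions that make the units act as AND and OR.
def step(a):
    return 1 if a >= 0 else 0

def xor_net(x1, x2, w=1.0):
    s = step(w * x1 + w * x2 - 1.5 * w)                  # S fires only on (1,1): AND
    return step(w * x1 + w * x2 - 2 * w * s - 0.5 * w)   # OR minus 2*AND: XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The hidden unit carves out the (1,1) corner that makes the classes non-separable, which is exactly why a single-layer perceptron fails on XOR while a two-layer network succeeds.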