Soft Computing
LAB MANUAL
Organized by
INDEX
S.NO. LIST OF EXPERIMENTS
10. WAP to implement a heteroassociative neural net for mapping input vectors to output vectors
Experiment No. 1
OUTPUT:
Weight matrix
1 1 -1 -1
1 1 -1 -1
-1 -1 1 1
-1 -1 1 1
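The matrix above is what Hebbian auto-association produces for a single bipolar pattern. A minimal sketch that reproduces it, assuming the training pattern [1 1 -1 -1] (inferred from the output above):
clc;
clear;
x=[1 1 -1 -1];   % assumed bipolar training pattern
w=x'*x;          % Hebb rule: outer product of the pattern with itself
disp('Weight matrix');
disp(w);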
Experiment No. 2
AIM: WAP to plot the logistic, hyperbolic tangent, and identity activation functions.
CODE:
x=-10:0.1:10;
tmp=exp(-x);
y1=1./(1+tmp);          % logistic (sigmoid) function
y2=(1-tmp)./(1+tmp);    % bipolar sigmoid, equal to tanh(x/2)
y3=x;                   % identity function
subplot(1,3,1);
plot(x,y1);
title('Logistic Function');
xlabel('(a)');
axis('square');
subplot(1,3,2);
plot(x,y2);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(1,3,3);
plot(x,y3);
title('Identity Function');
xlabel('(c)');
axis('square');
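Note that y2=(1-exp(-x))./(1+exp(-x)) equals tanh(x/2), so panel (b) shows a hyperbolic-tangent (bipolar sigmoid) activation.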
OUTPUT:
[Figure: (a) the logistic function, (b) the hyperbolic tangent function, and (c) the identity function, each plotted for x from -10 to 10.]
Experiment No. 3
CODE:
OUTPUT:
[Figure: the signal plotted against time [sec] over 0 to 8 s, and below it the prediction error against time [sec] over the same interval.]
Experiment No. 4
AIM: WAP to classify two-dimensional input patterns using the Perceptron learning law or the LMS learning law.
CODE:
clear;
inp=[-2 -2 -1 -1 0 0 0 0 1 1 2 3 3 3 4 4 5 5;2 3 1 4 0 1 2 3 1 0 1 -1 1 2 -2 1 -1 0];
out=[1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0];
choice=input('1: Perceptron Learning Law\n2: LMS Learning Law\nEnter your choice: ');
switch choice
    case 1
        network=newp([-2 5;-2 4],1);
        network=init(network);
        y=sim(network,inp);
        figure,plot(inp,out,inp,y,'o'),title('Before Training');
        axis([-10 20 -2.0 2.0]);
        network.trainParam.epochs=20;
        network=train(network,inp,out);
        y=sim(network,inp);
        figure,plot(inp,out,inp,y,'o'),title('After Training');
        axis([-10 20 -2.0 2.0]);
        disp('Final weight vector and bias values:');
        Weights=network.iw{1}
        Bias=network.b{1}
        Actual_Desired=[y' out']
    case 2
        network=newlin([-2 5;-2 4],1);
        network=init(network);
        y=sim(network,inp);
        network=adapt(network,inp,out);
        y=sim(network,inp);
        disp('Final weight vector and bias values:');
        Weights=network.iw{1}
        Bias=network.b{1}
        Actual_Desired=[y' out']
    otherwise
        error('Wrong Choice');
end
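For choice 1 the perceptron (newp, hard-limit output) is trained in batch for 20 epochs with the perceptron learning rule, so its outputs are hard 0/1 decisions; for choice 2 the linear network (newlin) adapts incrementally with the Widrow-Hoff (LMS) rule and produces real-valued outputs.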
OUTPUT:
Weights =
-1 1
Bias =
Actual_Desired =
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
0 0
0 0
0 0
0 0
0 0
0 0
0 0
0 0
0 0
[Figure: Before Training and After Training plots of the targets and the network response, over x from -10 to 20.]
[Figure: training record, mean absolute error (mae) versus epoch; best training performance is 0.5 at epoch 0.]
Experiment No. 5
CODE:
OUTPUT:
Bottom up Weights
0.5700 0.6667 0.3000
0 0 0.3000
0 0 0.3000
0 0.6667 0.3000
Experiment No. 6
CODE:
OUTPUT:
x=
1 1 -1
1 -1 1
y=
1 -1
-1 1
Experiment No. 7
AIM: WAP to implement a full counterpropagation network (CPN) for a given input pair.
CODE:
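A minimal sketch of the phase-I (Kohonen-layer) weight update of a full counterpropagation network; the input pair, initial weights, and learning rates below are illustrative assumptions, not the experiment's original data:
clc;
clear;
x=[0 1 1 0];                           % assumed input vector
y=[1 0];                               % assumed target vector
v=[0.2 0.8;0.6 0.4;0.1 0.9;0.5 0.5];   % assumed input-to-cluster weights
w=[0.3 0.7;0.4 0.6];                   % assumed target-to-cluster weights
alpha=0.4; beta=0.3;                   % assumed learning rates
% find the winning cluster J by the smallest joint squared distance
for j=1:2
    d(j)=sum((x'-v(:,j)).^2)+sum((y'-w(:,j)).^2);
end
[dmin,J]=min(d);
% move only the winner's weights toward the presented pair
v(:,J)=v(:,J)+alpha*(x'-v(:,J));
w(:,J)=w(:,J)+beta*(y'-w(:,J));
disp('v='); disp(v);
disp('w='); disp(w);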
OUTPUT:
v=
0.4200 0.2000
0.7200 0.2000
0.4400 0.6000
0.1400 0.6000
w=
0.5800 0.3000
0.2800 0.3000
Experiment No. 8
CODE:
OUTPUT:
y=
0 0 1 0
1 1 1 0
Experiment No. 9
CODE:
OUTPUT:
Weight matrix
Columns 1 through 12
0 0 0 0 0 0 0 0 0 0 0 0
Columns 13 through 20
0 0 0 0 0 2 2 2
Bias
Experiment No. 10
AIM: WAP to implement a heteroassociative neural net for mapping input vectors to output vectors.
CODE:
% Heteroassociative neural net for mapping input vectors to output vectors
clc;
clear;
x=[1 1 0 0;1 0 1 0;1 1 1 0;0 1 1 0];
t=[1 0;1 0;0 1;0 1];
w=zeros(4,2);
% Hebb rule: accumulate the outer product of each input with its target
for i=1:4
    w=w+x(i,1:4)'*t(i,1:2);
end
disp('weight matrix');
disp(w);
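As a quick check, w(1,1) is the sum over the four pattern pairs of x(i,1)*t(i,1) = 1*1 + 1*1 + 1*0 + 0*0 = 2, matching the first entry of the matrix below.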
OUTPUT:
weight matrix
2 1
1 2
1 2
0 0
Experiment No. 11
CODE:
% Determine the weights of a network with 4 input and 2 output units using
% Delta Learning Law with f(x)=1/(1+exp(-x)) for the following input-output
% pairs:
%
% Input: [1100]' [1001]' [0011]' [0110]'
% output: [11]' [10]' [01]' [00]'
% Discuss your results for different choices of the learning rate parameters.
% Use suitable values for the initial weights.
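A minimal sketch of one way to code the Delta Learning Law for these pairs; the learning rate, iteration count, and random initial weights below are assumptions:
clc;
clear;
in=[1 1 0 0;1 0 0 1;0 0 1 1;0 1 1 0]';   % one input pattern per column
out=[1 1;1 0;0 1;0 0]';                  % one target pattern per column
wgt=rand(2,4);                           % assumed random initial weights
eta=0.8;                                 % assumed learning rate
for it=1:5000
    for p=1:4
        x=in(:,p); d=out(:,p);
        y=1./(1+exp(-wgt*x));            % f(x)=1/(1+exp(-x))
        delta=(d-y).*y.*(1-y);           % error scaled by f'(net)
        wgt=wgt+eta*delta*x';            % delta-rule update
    end
end
wgt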
OUTPUT:
[1 2 1 3 1;1 0 1 0 2]
wgt =
Experiment No. 12
CODE:
            for k=1:2
                if zin(k)>0
                    b1(k)=b1(k)+alpha*(-1-zin(k));
                    w(1:2,k)=w(1:2,k)+alpha*(-1-zin(k))*x(1:2,i);
                end
            end
        end
    end
end
epoch=epoch+1;
end
disp('weight matrix of hidden layer');
disp(w);
disp('Bias of hidden layer');
disp(b1);
disp('Total Epoch');
disp(epoch);
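The listing above is the tail of an MRI-style Madaline training loop. A minimal self-contained sketch of such a loop, assuming bipolar XOR data, a fixed OR-like output unit, and random initial weights (all assumptions, not the experiment's original data):
clc;
clear;
x=[1 1 -1 -1;1 -1 1 -1];          % assumed bipolar inputs (one pattern per column)
t=[-1 1 1 -1];                    % assumed bipolar XOR targets
w=rand(2,2)-0.5;                  % input-to-hidden weights (assumed random start)
b1=rand(1,2)-0.5;                 % hidden biases
v=[0.5 0.5]; b2=0.5;              % fixed OR-like output unit (MRI rule)
alpha=0.5;
epoch=0; stop=0;
while ~stop && epoch<1000
    stop=1;
    for i=1:4
        zin=x(:,i)'*w+b1;                 % hidden net inputs
        z=sign(zin); z(z==0)=1;           % hidden activations
        yin=z*v'+b2;
        y=sign(yin);
        if y~=t(i)
            stop=0;
            if t(i)==1
                % drive the hidden unit whose net input is closest to zero toward +1
                [m,k]=min(abs(zin));
                b1(k)=b1(k)+alpha*(1-zin(k));
                w(:,k)=w(:,k)+alpha*(1-zin(k))*x(:,i);
            else
                % drive every positive hidden unit toward -1 (as in the fragment above)
                for k=1:2
                    if zin(k)>0
                        b1(k)=b1(k)+alpha*(-1-zin(k));
                        w(:,k)=w(:,k)+alpha*(-1-zin(k))*x(:,i);
                    end
                end
            end
        end
    end
    epoch=epoch+1;
end
disp('weight matrix of hidden layer'); disp(w);
disp('Bias of hidden layer'); disp(b1);
disp('Total Epoch'); disp(epoch);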
OUTPUT:
weight matrix of hidden layer
    0.2812   -2.1031
   -0.6937    0.9719
Bias of hidden layer
   -1.3562   -1.6406
Total Epoch
Experiment No. 13
CODE:
OUTPUT:
Final Weights
    1.2000    1.2000
Final Bias
   -1.2000
Experiment No. 14
AIM: WAP to implement an auto-associative memory using a linear network and test it on noisy inputs.
CODE:
clear;
clc;
p1=[1 1]'; p2=[1 2]';
p3=[-2 -1]'; p4=[2 -2]';
p5=[-1 2]'; p6=[-2 -1]';
p7=[-1 -1]'; p8=[-2 -2]';
% define the input matrix, which is also the target matrix for
% auto-association
P=[p1 p2 p3 p4 p5 p6 p7 p8];
% initialize the network to zero initial weights
net=newlin([min(min(P)) max(max(P)); min(min(P)) max(max(P))],2);
weights=net.iw{1,1}
% set the training goal (zero error)
net.trainParam.goal=0.0;
% number of epochs
net.trainParam.epochs=400;
% target matrix T=P; the default training function for newlin is
% Widrow-Hoff learning
[net,tr]=train(net,P,P);
% weights and bias after the training
W=net.iw{1,1}
B=net.b{1}
Y=sim(net,P);
% Hamming-like distance criterion
criterion=sum(sum(abs(P-Y)')')
% calculate and plot the errors
rs=Y-P;
figure
plot(rs(1,:),rs(2,:),'k*')
legend(['criterion=' num2str(criterion)])
% add some noise to the input and test the network again
test=P+rand(size(P))/10;
Ytest=sim(net,test);
criteriontest=sum(sum(abs(P-Ytest)')')
output=Ytest-P
% plot the errors in the output
figure
plot(output(1,:),output(2,:),'k*')
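Because the target matrix equals the input matrix, a perfect auto-associator is simply the identity mapping. The output below confirms this: W is essentially the 2x2 identity, B is near zero, and the clean-input criterion is on the order of 1e-12, while the noisy test criterion reflects the added noise.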
OUTPUT:
weights =
0 0
0 0
W=
1.0000 -0.0000
-0.0000 1.0000
B=
1.0e-12 *
-0.1682
-0.0100
criterion =
1.2085e-12
criteriontest =
0.9751
output =
Columns 1 through 7
Column 8
0.0800
0.0142
[Figure: scatter plot of the output errors for the clean inputs; both error components are on the order of 1e-13 or smaller.]
[Figure: scatter plot of the output errors for the noisy test inputs (components up to about 0.1), and the training record: mean squared error (mse) versus epoch over 400 epochs.]
[Figure: training-state record; validation checks = 0 at epoch 400.]
[Figure: regression plots; Training R=0.97897 (Output ~= 0.95*Target + 1.1), Validation R=0.92226 (Output ~= 0.91*Target + 1.4), and a further panel with Output ~= 0.94*Target + 1.1.]
Experiment No. 15
AIM: WAP to train a feedforward network to learn the four-input odd-parity (XOR) function.
CODE:
clear
inp=[0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1;0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1;...
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1;0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1];
out=[0 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0];
network=newff([0 1;0 1; 0 1; 0 1],[6 1],{'logsig','logsig'});
network=init(network);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('Before Training');
axis([-5 5 -2.0 2.0]);
network.trainParam.epochs = 500;
network=train(network,inp,out);
y=sim(network,inp);
figure,plot(inp,out,inp,y,'o'),title('After Training');
axis([-5 5 -2.0 2.0]);
Layer1_Weights=network.iw{1};
Layer1_Bias=network.b{1};
Layer2_Weights=network.lw{2};
Layer2_Bias=network.b{2};
Layer1_Weights
Layer1_Bias
Layer2_Weights
Layer2_Bias
Actual_Desired=[y' out'];
Actual_Desired
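The network has one hidden layer of 6 logsig units and a single logsig output; the mu values in the training record below come from the default Levenberg-Marquardt (trainlm) training function that newff assigns.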
OUTPUT:
Layer1_Weights =
-6.0494 -18.5892 -5.9393 5.6923
Layer1_Bias =
-16.0634
5.4848
9.5144
9.6231
7.4340
5.7091
Layer2_Weights =
Layer2_Bias =
18.4268
Actual_Desired =
0.0000 0
1.0000 1.0000
0.9999 1.0000
0.0000 0
1.0000 1.0000
0.0000 0
0.0000 0
0.9998 1.0000
1.0000 1.0000
0.0000 0
0.0000 0
0.9999 1.0000
0.0000 0
1.0000 1.0000
1.0000 1.0000
0.0000 0
[Figure: Before Training and After Training plots of the parity targets and the network output.]
[Figure: training record, mean squared error (mse) versus epoch; best training performance is 3.3016e-09 at epoch 26.]
[Figure: training-state record (gradient = 1.9837e-08 and mu = 1e-13 at epoch 26) and the regression plot with Training R=1, Output ~= 1*Target + 1.6e-05.]
Experiment No. 16
CODE:
% Using the Instar learning law, group all the sixteen possible binary
% vectors of length 4 into four different groups. Use suitable values for
% the initial weights and for the learning rate parameter. Use a 4-unit
% input and 4-unit output network. Select random initial weights in the
% range [0,1]
in=[0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1;0 1 0 0;0 1 0 1;0 1 1 0;0 1 1 1;...
    1 0 0 0;1 0 0 1;1 0 1 0;1 0 1 1;1 1 0 0;1 1 0 1;1 1 1 0;1 1 1 1];
wgt=[0.4 0.1 0.2 0.7;0.9 0.7 0.4 0.7;0.1 0.2 0.9 0.8;0.5 0.6 0.7 0.6];
eta=0.5;
it=3000;
for t=1:it
    for i=1:16
        for j=1:4
            w(j)=in(i,:)*wgt(j,:)';
        end
        [v c]=max(w);
        wgt(c,:)=wgt(c,:)+eta*(in(i,:)-wgt(c,:));
        k=power(wgt(c,:),2);
        f=sqrt(sum(k));
        wgt(c,:)=wgt(c,:)/f;
    end
end
for i=1:16
    for j=1:4
        w(j)=in(i,:)*wgt(j,:)';
    end
    [v c]=max(w);
    if(v==0)
        c=4;
    end
    s=['Input= ' int2str(in(i,:)) ' Group= ' int2str(c)];
    display(s);
end
wgt
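The normalization step (dividing the winning row by f=sqrt(sum(wgt(c,:).^2))) keeps every updated weight vector at unit length, so the competition [v c]=max(w) depends only on the direction of each weight vector rather than its magnitude, as is usual in Instar/competitive learning.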
OUTPUT:
s=
Input= 0 0 0 0 Group= 4
s=
Input= 0 0 0 1 Group= 1
s=
Input= 0 0 1 0 Group= 3
s=
Input= 0 0 1 1 Group= 3
s=
Input= 0 1 0 0 Group= 2
s=
Input= 0 1 0 1 Group= 4
s=
Input= 0 1 1 0 Group= 2
s=
Input= 0 1 1 1 Group= 4
s=
Input= 1 0 0 0 Group= 1
s=
Input= 1 0 0 1 Group= 1
s=
Input= 1 0 1 0 Group= 3
s=
Input= 1 0 1 1 Group= 3
s=
Input= 1 1 0 0 Group= 2
s=
Input= 1 1 0 1 Group= 1
s=
Input= 1 1 1 0 Group= 2
s=
Input= 1 1 1 1 Group= 4
wgt =
Experiment No. 17
AIM: WAP to implement an auto-associative network and test whether a given vector is a known or an unknown vector.
CODE:
clc;
clear;
x=[-1 -1 -1 -1;-1 -1 1 1];
t=[1 1 1 1];
w=zeros(4,4);
% Hebb rule: sum the outer products of the two stored patterns
for i=1:2
    w=w+x(i,1:4)'*x(i,1:4);
end
yin=t*w;
for i=1:4
    if yin(i)>0
        y(i)=1;
    else
        y(i)=-1;
    end
end
disp('The Calculated Weight Matrix');
disp(w);
if x(1,1:4)==y(1:4) | x(2,1:4)==y(1:4)
    disp('the vector is a known vector');
else
    disp('the vector is an unknown vector');
end
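Tracing the code: every column of the weight matrix sums to 4, so yin = t*w = [4 4 4 4] and y = [1 1 1 1], which matches neither stored pattern elementwise; the test vector is therefore reported as unknown.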
OUTPUT:
The Calculated Weight Matrix
     2     2     0     0
     2     2     0     0
     0     0     2     2
     0     0     2     2
the vector is an unknown vector