Soft Computing Lab File
PRACTICAL FILE
AIM-1 Write a MATLAB program to plot a few activation functions that are used in neural networks.
Solution:
%Plot of the logistic, bipolar sigmoid and identity activation functions
x = -10:0.1:10;
tmp = exp(-x);
y1 = 1./(1+tmp);        %logistic (binary sigmoid)
y2 = (1-tmp)./(1+tmp);  %bipolar sigmoid, equal to tanh(x/2)
y3 = x;                 %identity
subplot(231); plot(x, y1); grid on;
axis([min(x) max(x) -2 2]);
title('Logistic Function');
xlabel('(a)');
axis('square');
subplot(232); plot(x, y2); grid on;
axis([min(x) max(x) -2 2]);
title('Hyperbolic Tangent Function');
xlabel('(b)');
axis('square');
subplot(233); plot(x, y3); grid on;
axis([min(x) max(x) min(x) max(x)]);
title('Identity Function');
xlabel('(c)');
axis('square');
AIM-2 Generate ANDNOT function using McCulloch-Pitts neural net by writing a MATLAB program.
Solution:
The truth table for the ANDNOT function (Y = X1 AND NOT X2) is as follows:
X1 X2 Y
0  0  0
0  1  0
1  0  1
1  1  0
%ANDNOT function using McCulloch-Pitts neuron
clear;
clc;
%Getting weights and threshold value
disp('Enter weights');
w1=input('Weight w1=');
w2=input('weight w2=');
disp('Enter Threshold Value');
theta=input('theta=');
y=[0 0 0 0];
x1=[0 0 1 1];
x2=[0 1 0 1];
%target output of the ANDNOT function
z=[0 0 1 0];
con=1;
while con
    %net input for all four input pairs
    zin=x1*w1+x2*w2;
    %apply the threshold: fire (1) when zin >= theta
    for i=1:4
        if zin(i)>=theta
            y(i)=1;
        else
            y(i)=0;
        end
    end
    disp('Output of Net');
    disp(y);
    if y==z
        con=0;
    else
        disp('Net is not learning enter another set of weights and Threshold value');
        w1=input('Weight w1=');
        w2=input('weight w2=');
        theta=input('theta=');
    end
end
disp('McCulloch-Pitts Net for ANDNOT function');
disp('Weights of Neuron');
disp(w1);
disp(w2);
disp('Threshold value');
disp(theta);
Output
Enter weights
Weight w1=1
weight w2=1
Enter Threshold Value
theta=0.1
Output of Net
0 1 1 1
Net is not learning enter another set of weights and Threshold value
Weight w1=1
weight w2=-1
theta=1
Output of Net
0 0 1 0
McCulloch-Pitts Net for ANDNOT function
Weights of Neuron
1
-1
Threshold value
1
AIM-3 Generate XOR function using McCulloch-Pitts neuron by
writing an M-file.
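Solution:
The truth table for the XOR function is:
X1 X2 Y
0  0  0
0  1  1
1  0  1
1  1  0
A single McCulloch-Pitts neuron cannot realize XOR, so the net is built from two hidden units, z1 = x1 AND NOT x2 and z2 = x2 AND NOT x1, whose outputs feed an OR unit y. The original listing is not preserved in this file; the program below is a reconstruction that matches the variable names (w11, w12, w21, w22, v1, v2, theta) and the run shown in the output that follows.
%XOR function using McCulloch-Pitts neuron
clear;
clc;
%Getting weights and threshold value
disp('Enter weights');
w11=input('Weight w11=');
w12=input('weight w12=');
w21=input('Weight w21=');
w22=input('weight w22=');
v1=input('weight v1=');
v2=input('weight v2=');
disp('Enter Threshold Value');
theta=input('theta=');
x1=[0 0 1 1];
x2=[0 1 0 1];
%target output of the XOR function
z=[0 1 1 0];
con=1;
while con
    %hidden units: z1 = x1 ANDNOT x2, z2 = x2 ANDNOT x1
    zin1=x1*w11+x2*w21;
    zin2=x1*w12+x2*w22;
    for i=1:4
        if zin1(i)>=theta
            y1(i)=1;
        else
            y1(i)=0;
        end
        if zin2(i)>=theta
            y2(i)=1;
        else
            y2(i)=0;
        end
    end
    %output unit: y = z1 OR z2
    yin=y1*v1+y2*v2;
    for i=1:4
        if yin(i)>=theta
            y(i)=1;
        else
            y(i)=0;
        end
    end
    disp('Output of Net');
    disp(y);
    if y==z
        con=0;
    else
        disp('Net is not learning enter another set of weights and Threshold value');
        w11=input('Weight w11=');
        w12=input('weight w12=');
        w21=input('Weight w21=');
        w22=input('weight w22=');
        v1=input('weight v1=');
        v2=input('weight v2=');
        theta=input('theta=');
    end
end
disp('McCulloch-Pitts Net for XOR function');
disp('Weights of Neuron Z1');
disp(w11);
disp(w21);
disp('weights of Neuron Z2');
disp(w12);
disp(w22);
disp('weights of Neuron Y');
disp(v1);
disp(v2);
disp('Threshold value');
disp(theta);
Output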
Enter weights
Weight w11=1
weight w12=-1
Weight w21=-1
weight w22=1
weight v1=1
weight v2=1
Enter Threshold Value
theta=1
Output of Net
0 1 1 0
McCulloch-Pitts Net for XOR function
Weights of Neuron Z1
1
-1
weights of Neuron Z2
-1
1
weights of Neuron Y
1
1
Threshold value
1
AIM-4 Write a MATLAB program for Hebb net to classify two-dimensional input patterns.
Solution:
clear;
clc;
%Input patterns: 5x4 pixel maps of the letters E and F (1 = on, -1 = off)
E=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 1 1 1];
F=[1 1 1 1 1 -1 -1 -1 1 1 1 1 1 -1 -1 -1 1 -1 -1 -1];
x(1,1:20)=E;
x(2,1:20)=F;
%initial weights and bias
w(1:20)=0;
%targets: E -> 1, F -> -1
t=[1 -1];
b=0;
%Hebb rule: w = w + x*t, b = b + t
for i=1:2
    w=w+x(i,1:20)*t(i);
    b=b+t(i);
end
disp('Weight matrix');
disp(w);
disp('Bias');
disp(b);
Output
Weight matrix
Columns 1 through 18
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2
Columns 19 through 20
2 2
Bias
0
AIM-5 Write a MATLAB program for perceptron net for an AND function with bipolar inputs and targets.
Solution:
The truth table for the AND function with bipolar inputs and targets is:
X1 X2  Y
-1 -1 -1
-1  1 -1
 1 -1 -1
 1  1  1
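The original listing is not preserved in this file; the program below is a reconstruction of the standard perceptron training rule (on a misclassified pattern, w = w + alpha*t*x and b = b + alpha*t, with a threshold band of width 2*theta around zero) that reproduces the run shown in the output that follows.
%Perceptron for AND function with bipolar inputs and targets
clear;
clc;
x=[-1 -1 1 1;-1 1 -1 1];
t=[-1 -1 -1 1];
%initial weights and bias
w=[0 0];
b=0;
alpha=input('Enter Learning rate=');
theta=input('Enter Threshold value=');
con=1;
epoch=0;
while con
    con=0;
    for i=1:4
        yin=b+x(1,i)*w(1)+x(2,i)*w(2);
        %activation with threshold band [-theta, theta]
        if yin>theta
            y=1;
        elseif yin<-theta
            y=-1;
        else
            y=0;
        end
        %update weights and bias on error
        if y~=t(i)
            con=1;
            for j=1:2
                w(j)=w(j)+alpha*t(i)*x(j,i);
            end
            b=b+alpha*t(i);
        end
    end
    epoch=epoch+1;
end
disp('Perceptron for AND function');
disp('Final Weight matrix');
disp(w);
disp('Final Bias');
disp(b);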
Output
Enter Learning rate=1
Enter Threshold value=0.5
Perceptron for AND function
Final Weight matrix
1 1
Final Bias
-1
AIM-6 Write a MATLAB program for Adaline network for OR function with bipolar inputs and targets.
Solution:
clear all;
clc;
disp('Adaline network for OR function Bipolar inputs and targets');
%input patterns
x1=[1 1 -1 -1];
x2=[1 -1 1 -1];
%bias input
x3=[1 1 1 1];
%target vector
t=[1 1 1 -1];
%initial weights and bias
w1=0.1;w2=0.1;b=0.1;
%learning rate
alpha=0.1;
%error convergence
e=2;
%change in weights and bias
delw1=0;delw2=0;delb=0;
epoch=0;
while(e>1.018)
    epoch=epoch+1;
    e=0;
    for i=1:4
        nety(i)=w1*x1(i)+w2*x2(i)+b;
        %net input and target
        nt=[nety(i) t(i)];
        %delta rule: delta_w = alpha*(t - net)*x
        delw1=alpha*(t(i)-nety(i))*x1(i);
        delw2=alpha*(t(i)-nety(i))*x2(i);
        delb=alpha*(t(i)-nety(i))*x3(i);
        %weight changes (unsuppressed, so printed each step)
        wc=[delw1 delw2 delb]
        %updating of weights
        w1=w1+delw1;
        w2=w2+delw2;
        b=b+delb;
        %new weights
        w=[w1 w2 b]
        %input pattern
        x=[x1(i) x2(i) x3(i)];
        %printing the results obtained
        pnt=[x nt wc w]
    end
    %total squared error over the epoch; training stops at <= 1.018
    for i=1:4
        nety(i)=w1*x1(i)+w2*x2(i)+b;
        e=e+(t(i)-nety(i))^2;
    end
end
AIM-7 Write a MATLAB program to cluster two vectors using Kohonen self-organizing maps.
Solution:
clc;
clear;
%input vectors to be clustered
x=[1 1 0 0;0 0 0 1;1 0 0 0;0 0 1 1];
alpha=0.6;
%initial weight matrix (4 inputs, 2 cluster units)
w=rand(4,2);
disp('Initial weight matrix');
disp(w);
con=1;
epoch=0;
while con
    for i=1:4
        %squared Euclidean distance to each cluster unit
        for j=1:2
            D(j)=0;
            for k=1:4
                D(j)=D(j)+(w(k,j)-x(i,k))^2;
            end
        end
        %winning unit J has the minimum distance
        for j=1:2
            if D(j)==min(D)
                J=j;
            end
        end
        %move the winner's weights toward the input
        w(:,J)=w(:,J)+alpha*(x(i,:)'-w(:,J));
    end
    %decay the learning rate every epoch
    alpha=0.5*alpha;
    epoch=epoch+1;
    if epoch==300
        con=0;
    end
end
disp('Weight Matrix after 300 epochs');
disp(w);
Output
AIM-8 Write a MATLAB program for a full counterpropagation network (one training step).
Solution:
clc;
clear;
%set initial weights (v: input layer to cluster, w: output layer to cluster)
v=[0.6 0.2;0.6 0.2;0.2 0.6;0.2 0.6];
w=[0.4 0.3;0.4 0.3];
%training pair
x=[0 1 1 0];
y=[1 0];
alpha=0.3;
%squared distance of the pair (x,y) to each cluster unit
for j=1:2
    D(j)=0;
    for i=1:4
        D(j)=D(j)+(x(i)-v(i,j))^2;
    end
    for k=1:2
        D(j)=D(j)+(y(k)-w(k,j))^2;
    end
end
%winning unit J has the minimum distance
for j=1:2
    if D(j)==min(D)
        J=j;
    end
end
disp('After one step the weight matrices are');
%move the winner's weights toward x and y
v(:,J)=v(:,J)+alpha*(x'-v(:,J))
w(:,J)=w(:,J)+alpha*(y'-w(:,J))
Output
AIM-9 Write a MATLAB program to implement the ART1 network for clustering binary input vectors.
Solution:
clc;
clear;
%bottom-up weights
b=[0.57 0.0 0.3;0.0 0.0 0.3;0.0 0.57 0.3;0.0 0.47 0.3];
%top-down weights
t=[1 1 0 0;1 0 0 1;1 1 1 1];
%vigilance parameter
vp=0.4;
%L parameter for the bottom-up weight update
L=2;
%input vector
x=[1 0 1 1];
s=x;
ns=sum(s);
%net input to each cluster unit
y=x*b;
con=1;
while con
    %winning unit J has the maximum net input
    for i=1:3
        if y(i)==max(y)
            J=i;
        end
    end
    %candidate activation: input masked by the winner's top-down weights
    x=s.*t(J,:);
    nx=sum(x);
    %vigilance test
    if nx/ns >= vp
        %resonance: update the winner's weights
        b(:,J)=L*x(:)/(L-1+nx);
        t(J,:)=x(1,:);
        con=0;
    else
        %reset: inhibit the winner and search again
        y(J)=-1;
        con=1;
    end
    %stop if every cluster unit has been inhibited
    if y+1==0
        con=0;
    end
end
disp('Top-down Weights');
disp(t);
disp('Bottom-up Weights');
disp(b);
Output
Top-down Weights
1 1 0 0
1 0 0 1
1 1 1 1
Bottom-up Weights
0.5700 0.6667 0.3000
0      0      0.3000
0      0      0.3000
0      0.6667 0.3000
AIM-11 Study of genetic algorithms.
Genetic Algorithms were invented to mimic some of the processes observed in natural
evolution. The father of the original Genetic Algorithm was John Holland, who invented it
in the early 1970s. Genetic Algorithms (GAs) are adaptive heuristic search algorithms
based on the evolutionary ideas of natural selection and genetics. As such, they represent
an intelligent exploitation of a random search used to solve optimization problems.
Although randomized, GAs are by no means random; instead, they exploit historical
information to direct the search into regions of better performance within the search
space. The basic techniques of GAs are designed to simulate processes in natural
systems necessary for evolution, especially those that follow the principle first laid down by
Charles Darwin of "survival of the fittest".
1. Selection Operator
Give preference to better individuals, allowing them to pass on their genes to the next
generation.
The goodness of each individual depends on its fitness.
Fitness may be determined by an objective function or by a subjective judgment.
2. Crossover Operator
The prime factor that distinguishes GAs from other optimization techniques.
Two individuals are chosen from the population using the selection operator.
A crossover site along the bit strings is randomly chosen.
The values of the two strings are exchanged up to this point.
For example, if S1=000000 and S2=111111 and the crossover point is 2, then S1'=110000 and S2'=001111.
The two new offspring created from this mating are put into the next generation of the population.
By recombining portions of good individuals, this process is likely to create even better individuals.
3. Mutation Operator
With some low probability, a portion of the new individuals will have some of their bits
flipped. Its purpose is to maintain diversity within the population and inhibit premature
convergence. Mutation alone induces a random walk through the search space; mutation
and selection (without crossover) create a parallel, noise-tolerant, hill-climbing
algorithm.
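To make the three operators concrete, the following is a minimal MATLAB sketch of a generational GA on fixed-length bit strings. The fitness function (the count of 1 bits), the population size, and the rate values are illustrative choices, not taken from the text above.
%Simple generational GA on bit strings (illustrative parameters)
clear; clc;
popSize=20;   %must be even for the pairing below
nBits=12;
pMut=0.01;    %per-bit mutation probability
maxGen=50;
pop=rand(popSize,nBits)>0.5;   %random initial population
for gen=1:maxGen
    %fitness = number of 1 bits (+1 keeps the roulette wheel well-defined)
    fit=sum(pop,2)+1;
    %Selection: fitness-proportionate (roulette wheel)
    prob=cumsum(fit)/sum(fit);
    newPop=false(popSize,nBits);
    for i=1:2:popSize
        p1=pop(find(prob>=rand,1),:);
        p2=pop(find(prob>=rand,1),:);
        %Crossover: one random site, exchange the bits before it
        c=randi(nBits-1);
        newPop(i,:)  =[p2(1:c) p1(c+1:end)];
        newPop(i+1,:)=[p1(1:c) p2(c+1:end)];
    end
    %Mutation: flip each bit independently with probability pMut
    pop=xor(newPop,rand(popSize,nBits)<pMut);
end
%report the fittest string in the final population
[~,k]=max(sum(pop,2));
disp('Best string after evolution');
disp(double(pop(k,:)));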