Intelligent Control Lab
October 2024
Program 1. Write a MATLAB program to compute addition, subtraction, multiplication, division,
and inverse of given intervals.
%Arithmetic operations on intervals
% Enter the two intervals
al = input('Enter the lower bound of interval [a]: ');
au = input('Enter the upper bound of interval [a]: ');
bl = input('Enter the lower bound of interval [b]: ');
bu = input('Enter the upper bound of interval [b]: ');
%Addition of [a] and [b]
disp('The addition of two intervals is:');
suml = al + bl
sumu = au + bu
%Subtraction of [a] and [b]
disp('The subtraction of two intervals is:');
diffl = al - bu
diffu = au - bl
%Multiplication of [a] and [b]
disp('The multiplication of two intervals is:');
mul = min([al*bl, al*bu, au*bl, au*bu])
muu = max([al*bl, al*bu, au*bl, au*bu])
%Division of [a] by [c], defined only when 0 does not belong to [c]
cl = input('Enter the lower bound of interval [c]: ');
cu = input('Enter the upper bound of interval [c]: ');
if (cl <= 0 && cu >= 0)
    disp('The division of [a]/[c] is: (-inf,inf)');
else
    disp('The division of [a]/[c] is:');
    divl = min([al/cl, al/cu, au/cl, au/cu])
    divu = max([al/cl, al/cu, au/cl, au/cu])
end
%Additive inverse of an interval
disp('The additive inverse of the interval c=[cl,cu] is:');
ail = -cu
aiu = -cl
%Multiplicative inverse of an interval
disp('The multiplicative inverse of the interval c=[cl,cu] is:');
if (cl <= 0 && cu >= 0)
    disp('[-inf,inf]');
else
    mil = 1/cu
    miu = 1/cl
end
Input the following on the command window as per the request and see the result:
Enter the lower bound of interval [a]: -8
Enter the upper bound of interval [a]: 9
Enter the lower bound of interval [b]: -4
Enter the upper bound of interval [b]: 7
Enter the lower bound of interval [c]: 4
Enter the upper bound of interval [c]: 7
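For this sample input, hand calculation gives: [a]+[b] = [-12, 16], [a]-[b] = [-15, 13], [a]*[b] = [-56, 63], and [a]/[c] = [-2, 2.25] (using the min/max division formula above). The additive inverse of [c] is [-7, -4] and its multiplicative inverse is [1/7, 1/4]. These values can be used to check the program's output.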
Program 2. Write a MATLAB program for computing support, height, core, and boundary of a
fuzzy set.
%Support, height, core and boundary of a fuzzy set
clear all;
clc;
x = [-2, -1, 0, 1, 2];
disp('The corresponding membership values are:')
mu = 1./(1 + x.^2); %membership function
disp(mu);
%Support: all x with mu(x) > 0
disp('Support=');
s = x(mu > 0);
disp(s);
%Height: the largest membership value
disp('Height=')
h = max(mu);
disp(h);
%Core: all x with mu(x) = 1
disp('Core=');
c = x(mu == 1);
disp(c);
%Boundary: all x with 0 < mu(x) < 1
disp('Boundary=');
b = x(mu > 0 & mu < 1);
disp(b)
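With the logical indexing above, the program prints mu = [0.2 0.5 1 0.5 0.2], Support = [-2 -1 0 1 2], Height = 1, Core = 0, and Boundary = [-2 -1 1 2], which matches the definitions by hand calculation.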
Program 3: Write a MATLAB program for computing intersection, union, and complement of
two given fuzzy sets.
%Intersection, union and complement of two fuzzy sets
clear all;
clc;
a = input('Enter the elements x in fuzzy set A1=(x,muA1(x)):\n');
mu1 = input('Enter the corresponding membership values in A1:\n');
n = length(a);
b = input('Enter the elements x in fuzzy set A2=(x,muA2(x)):\n');
mu2 = input('Enter the corresponding membership values in A2:\n');
m = length(b);
if n ~= m
    disp('The dimensions of the two fuzzy sets A1 and A2 must be the same');
end
% Standard intersection (elementwise minimum)
muI = min(mu1, mu2);
% Standard union (elementwise maximum)
muU = max(mu1, mu2);
% The loop below computes the same result element-by-element and
% also collects the common elements in c; either version may be used.
k = 1;
for i = 1:n
    c(i) = a(i);
    for j = 1:m
        if (a(i) == b(j))
            muI(k) = min(mu1(i), mu2(j));
            muU(k) = max(mu1(i), mu2(j));
            k = k + 1;
        end
    end
end
disp('The intersection of A1 and A2 is (x,muI(x)):');
disp(' x muI(x)');
disp([c' muI'])
disp('The union of A1 and A2 is (x,muU(x)):');
disp(' x muU(x)');
disp([c' muU'])
% Complement of A1 and A2
muC1 = 1 - mu1;
muC2 = 1 - mu2;
disp('The complement of set A1 is:');
disp(' x muA1c(x)');
disp([a' muC1'])
disp('The complement of set A2 is:');
disp(' x muA2c(x)');
disp([b' muC2'])
Input the following on the command window as per the request and see the result.
Enter the elements x in fuzzy set A1=(x,muA1(x)):
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Enter the corresponding membership values in A1:
[0.1, 0.5, 0.8, 1, 0.7, 0.3, 0.1, 0, 0, 0]
Enter the elements x in fuzzy set A2=(x,muA2(x)):
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Enter the corresponding membership values in A2:
[0, 0.2, 0.8, 1, 1, 0.3, 0, 0, 0, 0]
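For this input, the program should report muI = [0 0.2 0.8 1 0.7 0.3 0 0 0 0] for the intersection and muU = [0.1 0.5 0.8 1 1 0.3 0.1 0 0 0] for the union; each complement is 1 minus the corresponding membership values (hand-checked).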
Program 4: Write a MATLAB program for finding the bounded sum and bounded difference of
given fuzzy sets.
%Bounded sum and bounded difference of two fuzzy sets
clear all;
clc;
a = input('Enter the elements x in fuzzy set A=(x,muA(x)):\n');
mu1 = input('Enter the corresponding membership values in A:\n');
n = length(a);
b = input('Enter the elements x in fuzzy set B=(x,muB(x)):\n');
mu2 = input('Enter the corresponding membership values in B:\n');
m = length(b);
disp('The bounded sum of x having membership values (ms) for the given fuzzy sets is:');
ms = min(1, mu1 + mu2); % vectorized; the loop below is the equivalent element-by-element form
for i = 1:n
    if (a(i) == b(i))
        ms(i) = min(1, mu1(i) + mu2(i));
    end
end
disp(' x ms')
disp([a' ms'])
disp('The bounded difference of x having membership values (md) for the given fuzzy sets is:');
md = max(0, mu1 + mu2 - 1); % vectorized; the loop below is the equivalent form
for i = 1:n
    if (a(i) == b(i))
        md(i) = max(0, mu1(i) + mu2(i) - 1);
    end
end
disp(' x md')
disp([a' md'])
Input the following on the command window as per the request and see the result.
Enter the elements x in fuzzy set A=(x,muA(x)):
[3, 5, 7]
Enter the corresponding membership values in A:
[0.5, 1, 0.6]
Enter the elements x in fuzzy set B=(x,muB(x)):
[3, 5, 7]
Enter the corresponding membership values in B:
[1, 0.6, 0]
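For this input, hand calculation gives the bounded sum ms = min(1, mu1 + mu2) = [1 1 0.6] and the bounded difference md = max(0, mu1 + mu2 - 1) = [0.5 0.6 0], which the program's output should match.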
Program 5: Write a MATLAB program to generate a user-defined triangular fuzzy number or membership function (TFN), trapezoidal fuzzy number (TrFN), and Gaussian fuzzy number (GFN).
%Generate fuzzy numbers viz. TFN, TrFN and GFN
clc; clear all;
% Enter values of the fuzzy numbers TFN, TrFN and GFN
tfn = input('Enter the value of a, b and c for TFN (a,b,c)\n');
trfn = input('Enter the value of a, b, c and d for TrFN (a,b,c,d)\n');
gfn = input('Enter the value of a, sigma1 and sigma2 for GFN (a,sigma1,sigma2)\n');
x = (-10:0.1:10)'; %x values
yt = trimf(x, [tfn(1) tfn(2) tfn(3)]);              %generates the TFN membership function
ytr = trapmf(x, [trfn(1) trfn(2) trfn(3) trfn(4)]); %generates the TrFN membership function
yg = gauss2mf(x, [gfn(2) gfn(1) gfn(3) gfn(1)]);    %generates the GFN membership function
plot(x,yt,'-r','LineWidth',2)
hold on
plot(x,ytr,'-b','LineWidth',2)
plot(x,yg,'-k','LineWidth',2)
xlabel('x')
ylabel('\mu(x)')
ylim([-0.05 1.05])
legend('tfn(a,b,c)','trfn(a,b,c,d)','gfn(a,\sigma_1,\sigma_2)')
Input the following on the command window as per the request and see the result.
Enter the value of a,b and c for TFN (a,b,c)
[-2 1 2]
Enter the value of a, b, c and d for TrFN (a,b,c,d)
[-1 2 4 6]
Enter the value of a, sigma1 and sigma2 for GFN (a,sigma1,sigma2)
[-2 1 3]
Fuzzy Logic Control (FLC) Systems
1. Theory
A fuzzy system is a static non-linear mapping between its inputs and outputs (i.e., it is not a dynamic system). It is assumed that the fuzzy system has inputs ri ∈ Ri, where i = 1, 2, ..., n, and outputs ui ∈ Ui, where i = 1, 2, ..., m, as shown in the figure below. The inputs and outputs are "crisp", that is, they are real numbers, not fuzzy sets. The fuzzification block converts the crisp inputs to fuzzy sets, the inference mechanism uses the fuzzy rules in the rule-base to produce fuzzy conclusions (e.g., the implied fuzzy sets), and the defuzzification block converts these fuzzy conclusions into the crisp outputs.
2. Experiment Procedure
2.1. Experiment – 1
Design a fuzzy logic control system to determine the "Washing time" in minutes for a cloth, given the amount of "Dirt" and "Grease" on the cloth as percentages.
Fuzzification
a. Number of inputs/outputs
For the given system, we have 2 inputs and 1 output.
Inputs: Amount of Dirt (D) and Amount of Grease (G), in percent
Output: Washing time (W), in minutes
b. Size of the universe of discourse
SD = [0, 100] %, SG = [0, 100] % and SW = [0, 60] min
c. Number and shape of fuzzy subsets
i. Amount of Dirt (D)
This input fuzzy set has 3 fuzzy subsets: Small dirt (SD), Medium dirt (MD) and Large dirt (LD).
The washing time has 5 fuzzy subsets: Very small time (VS), Small time (S), Medium time (M), Large time (L), and Very large time (VL).
Step 1: Opening the fuzzy designer tool on MATLAB
Step 2: Selecting the Inference engine for the Fuzzy logic system
Step 4: Rename the input/output variables.
Step 5: Double-click on the input/output variables and adjust the intervals:
a. Dirt: Range = [0 100], 3-MFs
{SD=[-50 0 50], MD=[0 50 100], LD=[50 100 150]}
b. Grease: Range = [0 100], 3-MFs
{SG=[-50 0 50], MG=[0 50 100], LG=[50 100 150]}
c. Wash_time: Range = [0 60], 5-MFs
{VS=[0 0 15], S=[0 15 30], M=[ 15 30 45], L=[30 45 60], VL=[45 60 75]}
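For reference, the same design can also be built programmatically instead of through the GUI. The following is a minimal sketch assuming Fuzzy Logic Toolbox R2018b or later (the mamfis/addInput/addMF/addRule API); the variable and MF names mirror the design above, and only one illustrative rule is shown.
% Minimal sketch: building flc1 programmatically (assumes R2018b+ toolbox API)
fis = mamfis('Name','flc1');
fis = addInput(fis,[0 100],'Name','Dirt');
fis = addMF(fis,'Dirt','trimf',[-50 0 50],'Name','SD');
fis = addMF(fis,'Dirt','trimf',[0 50 100],'Name','MD');
fis = addMF(fis,'Dirt','trimf',[50 100 150],'Name','LD');
fis = addInput(fis,[0 100],'Name','Grease');
fis = addMF(fis,'Grease','trimf',[-50 0 50],'Name','SG');
fis = addMF(fis,'Grease','trimf',[0 50 100],'Name','MG');
fis = addMF(fis,'Grease','trimf',[50 100 150],'Name','LG');
fis = addOutput(fis,[0 60],'Name','Wash_time');
fis = addMF(fis,'Wash_time','trimf',[0 0 15],'Name','VS');
fis = addMF(fis,'Wash_time','trimf',[0 15 30],'Name','S');
fis = addMF(fis,'Wash_time','trimf',[15 30 45],'Name','M');
fis = addMF(fis,'Wash_time','trimf',[30 45 60],'Name','L');
fis = addMF(fis,'Wash_time','trimf',[45 60 75],'Name','VL');
% One illustrative rule; the full rule base comes from Step 6
fis = addRule(fis,"Dirt==LD & Grease==LG => Wash_time=VL");
washTime = evalfis(fis,[60 70]) % e.g., 60% dirt and 70% grease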
Step 6: Constructing the Rule base.
By double clicking on the fuzzy inference block (mamdani) we write the if… then rules.
Once we finish writing the rule base, we export the designed fuzzy system to a file (to save it) and to the workspace (to use it in Simulink).
Double-click on the Fuzzy Logic Controller block and rename it as flc1 as shown below.
b) Implementation of the FLC system using the rule viewer.
Open the fuzzy logic system designer toolbox by typing 'fuzzy' on the MATLAB command window.
Then export the fuzzy file to the workspace (that is, flc1).
Finally, run the two simulations (a and b) and compare the results.
Expected results
Experiment - 2
Step 1: Opening the fuzzy designer tool on MATLAB
Once the i/ps and o/ps are defined, we rename them based on our variable names.
Step 3: Edit the membership function number and interval by double-clicking each i/o variable one by one.
Then we delete the default membership functions and add new MFs based on our needs. In our case, select triangular MFs and set the number of MFs to 7.
Set the ranges for each variable by clicking on them, assuming e=[-6,6], ce=[-1,1] and u=[-9,9], and rename the MFs accordingly: mf1=NB, mf2=NM, mf3=NS, ...
Do this step for all variables e, ce and u. Finally, we obtain such a configuration.
Then we close the membership function editor, and get back to the fuzzy designer.
Step 4: Constructing the Rule base.
By double clicking on the fuzzy inference block (mamdani) we write the if… then rules.
Once we finish writing the rule base, we export the designed fuzzy system to a file (to save it) and to the workspace (to use it in Simulink). Save it as flc2.
Double-click on the Fuzzy Logic Controller block and rename it as flc2 as shown below.
Open the fuzzy logic system designer toolbox by typing 'fuzzy' on the MATLAB command window.
Then export the fuzzy file to the workspace (that is, flc2).
If the input is above a certain threshold, the function changes from one value to another but otherwise remains constant. This implies that the function is not differentiable at the threshold, and elsewhere its derivative is 0. Due to this fact, backpropagation learning, for example, is impossible.
b. Bipolar step function
MATLAB implementation: hardlims
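As a quick illustration, the two step functions can be plotted side by side (a minimal sketch using the toolbox transfer functions hardlim and hardlims):
n = -5:0.1:5;
a1 = hardlim(n);  % binary step: 0 for n < 0, 1 for n >= 0
a2 = hardlims(n); % bipolar step: -1 for n < 0, +1 for n >= 0
plot(n, a1, 'b', n, a2, 'r'); grid on
legend('hardlim','hardlims'); xlabel('n'); ylabel('a')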
Single neuron model: Single-Input Neuron
Weight indices: the first index indicates the neuron the signal goes to, and the second index indicates the source of the signal fed to the neuron. Thus, w1,2 is the weight of the connection to neuron 1 from source 2.
Example:
Let p1 = 2.5, p2 = 3, w1 = 0.5, w2 = -0.7 and b = 0.3, and assume the transfer function of the neuron is hardlim.
Solution
a = hardlim(n) = hardlim( w1p1 + w2p2 + b)
= hardlim( 0.5×2.5 + (-0.7)×3 + 0.3 )
= hardlim( -0.55) = 0.
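The same computation can be checked numerically in MATLAB:
p = [2.5; 3]; w = [0.5 -0.7]; b = 0.3;
n = w*p + b    % n = -0.55
a = hardlim(n) % a = 0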
The training algorithm's parameters, like the error goal, maximum number of epochs (iterations), etc., are defined.
Run the training algorithm.
Simulate the output of the neural network with the measured input data and compare it with the measured outputs.
Final validation must be carried out with independent data.
The MATLAB commands used in the procedure are newff, train, and sim.
1) newff creates a feed-forward backpropagation network object and also automatically initializes the network.
Syntax:
net = newff(P, T, S)
net = newff(PR, [S1 S2 ... SNl], {TF1, TF2, ..., TFNl}, BTF, BLF, PF)
net = newff(P, T, S, TF, BTF, BLF, PF, IPF, OPF, DDF)
Description:
The function takes the following parameters
P - RxQ1 matrix of Q1 representative R-element input vectors.
T - SNxQ2 matrix of Q2 representative SN-element target vectors.
S - Sizes of N-1 hidden layers, S1 to S(N-1), default = [].
PR - Rx2 matrix of min and max values for R input elements.
Si - Number of neurons (size) in the ith layer, i = 1,…, Nl.
Nl - Number of layers.
TFi - Transfer function of ith layer. Default is 'tansig' for hidden layers, and 'purelin' for
output layer. The transfer functions TF{i} can be any differentiable transfer function
such as TANSIG, LOGSIG, or PURELIN.
BTF - Backpropagation training function, default = 'traingdx'.
BLF - Backpropagation learning function, default = 'learngdm'.
PF - Performance function, default = 'mse'.
And returns an N layer feed-forward backpropagation Network.
newff uses a random number generator to create the initial values of the network weights. If neurons should have different transfer functions, they have to be arranged in different layers.
Example:
net = newff(minmax(p), [5, 2], {'tansig','logsig'}, 'traingdm', 'learngdm', 'mse');
2) train: is used to train the network.
Syntax:
net1 = train (net, P, T)
Description:
The function takes the following parameters
net - the initial MLP network generated by newff.
P - the network's measured input vectors.
T - the network targets (measured output vectors), default = zeros.
And returns
net1 - New network object.
The network’s training parameters (net.trainParam) are set to contain the parameters:
trainParam: This property defines the parameters and values of the current training
function.
net.trainParam: The fields of this property depend on the current training function.
The most used of these parameters (components of trainParam) are:
net.trainParam.epochs which tells the algorithm the maximum number of epochs to train.
net.trainParam.show that tells the algorithm how many epochs there should be between
each presentation of the performance.
Training occurs according to trainlm training parameters, shown here with their default
values:
net.trainParam.epochs =100. Maximum number of epochs to train
net.trainParam.show =25. Epochs between showing progress
net.trainParam.goal= 0. Performance goal
net.trainParam.time= inf .Maximum time to train in seconds
net.trainParam.min_grad =1e-6. Minimum performance gradient
net.trainParam.max_fail= 5. Maximum validation failures
Typically, one epoch of training is defined as a single presentation of all input vectors to the
network. The network is then updated according to the results of all those presentations.
Each weight and bias updates according to its learning function after each epoch (one pass
through the entire set of input vectors).
msereg - mean squared error with regularization performance function.
sse - sum squared error performance function.
To prepare a custom network to be trained with mae, set
net.performFcn = 'mae';
To prepare a custom network to be trained with mse, set
net.performFcn = 'mse';
For a perceptron it is the mean absolute error performance function mae. For linear regression
usually the mean squared error performance function mse is used.
Training stops when any of these conditions are met:
The maximum number of epochs (repetitions) is reached.
Performance has been minimized to the goal.
The maximum amount of time has been exceeded.
Validation performance has increased more than max_fail times since the last time it decreased (when using validation).
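For example, these stopping criteria map directly onto the trainParam fields listed above (the numeric values below are only illustrative):
net.trainParam.epochs = 500;   % stop after at most 500 epochs
net.trainParam.goal = 1e-4;    % stop when the performance goal is met
net.trainParam.time = 60;      % stop after 60 seconds of training
net.trainParam.max_fail = 6;   % stop after 6 consecutive validation failures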
train calls the function indicated by net.trainFcn, using the training parameter values indicated
by net.trainParam.
3) sim: is used to simulate the network.
Syntax:
a = sim (net1, P)
Description:
The function takes the following parameters
net1 - final MLP object.
P - input vector
And returns
a - measured output.
To test how well the resulting MLP net1 approximates the data, the sim command is applied. The measured output is a (the simulated output of the MLP network). The error is the difference (e = T - a) at each measured point. The final validation must be done with independent data.
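As a short sketch of this testing step (using the net1, P and T names from above):
a = sim(net1, P); % simulated network output
e = T - a;        % error at each measured point
perf = mean(e.^2) % overall mean squared error (assuming mse is the performance function)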
4) init: is used to initialize the network whenever init is called.
net = init(net)
The initFcn is the function that initializes the weights and biases of the network.
5) adapt: allows a neural network to adapt (change weights and biases on each presentation of an
input).
The trainFcn and adaptFcn are used for the two different learning types:
Batch learning.
Incremental or on-line learning
6) display the name and properties of a neural network’s variables.
display (net)
7) view: View network structure.
view (net);
8) Type net to see the network:
>> net
Examples:
Example 1 (fitting data): Consider the humps function in MATLAB. It is given by
humps(x) = 1/((x - 0.3)^2 + 0.01) + 1/((x - 0.9)^2 + 0.04) - 6
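The data-generation and network-creation lines that precede these parameter settings were lost to the page break; a plausible reconstruction, consistent with the calls that follow, is:
x = 0:0.05:2; P = x; T = humps(x); % training inputs and targets (assumed sampling)
net = newff([0 2], [5,1], {'tansig','purelin'}, 'traingd'); % assumed initial network size; see below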
net.trainParam.lr = 0.05;     % learning rate used in some gradient schemes
net.trainParam.epochs = 1000; % max number of iterations
net.trainParam.goal = 1e-3;   % error tolerance; stopping criterion
%Train network
net1 = train(net, P, T); % Iterates gradient type of loop
% Resulting network is stored in net1
%Convergence curve c is shown below.
% Simulate how good a result is achieved: Input is the same input vector P.
% Output is the output of the neural network, which should be compared with output data
a= sim(net1,P);
% Plot result and compare
plot (P, a-T, P,T); grid;
The fit is quite bad. To solve this problem:
Change the size of the network (bigger size):
net = newff([0 2], [20,1], {'tansig','purelin'}, 'traingd');
Improve the training algorithm performance, or even change the algorithm: try Levenberg-Marquardt, trainlm (a more efficient training algorithm):
net = newff([0 2], [10,1], {'tansig','purelin'}, 'trainlm');
It is clear that the L-M algorithm is significantly faster and is preferable to basic back-propagation.
Try simulating with independent data.
x1=0:0.01:2; P1=x1;y1=humps(x1); T1=y1;
a1= sim (net1, P1);
plot (P1,a1, 'r', P1,T1, 'g', P,T, 'b', P1, T1-a1, 'y');
legend ('a1','T1','T','Error');
The regression plot shows the actual network outputs plotted in terms of the associated target
values. If the network has learned to fit the data well, the linear fit to this output-target relationship
should closely intersect the bottom-left and top-right corners of the plot. If this is not the case then
further training, or training a network with more hidden neurons, would be advisable (correlation
between actual network outputs and the target values).
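To display this plot from code (a short sketch using the toolbox function plotregression, with the variables from the independent-data test above):
a1 = sim(net1, P1);    % network outputs
plotregression(T1, a1) % an R value close to 1 indicates a good fit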
III. Error histogram is used to show how the error sizes are distributed. Typically, most errors
are near zero, with very few errors far from that.
To display the Error histogram, we can use either:
“Error histogram” button in the training tool.
Call ploterrhist function
E = T - A;
ploterrhist (E);
Example 2: Let us take an example of a model having three inputs a, b and c, which generate the output y = 2a + 3b + 5c. The advantage of an ANN is that it requires input and output data only. We therefore collect data for the mathematical equation y = 2a + 3b + 5c; based on this data, the ANN is trained to learn the relationship between input and output. Suppose we generate 100 samples with the following ranges of inputs:
a = [0-1], b = [0-1], c = [0-1]
The random function has the following MATLAB syntax:
a = rand(x,y), where x is the number of rows and y the number of columns. By default, rand generates random values between 0 and 1. For example, to get 10 samples in the range from 10 to 15, use a = (A - B)*rand(x,y) + B, where A is the maximum and B the minimum of the range.
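For instance, to draw 10 samples uniformly from [10, 15] (a10 is a hypothetical variable name):
a10 = (15 - 10)*rand(1,10) + 10;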
Then write the following code in an M-file and run the program. This generates the input and output data on which the NN will be trained.
a = rand(1,100);
b = rand(1,100);
c = rand(1,100);
y = 2*a + 3*b + 5*c;
I = [a; b; c];
T = y;
The next step is to train the neural network. Thus, write the following program, save it and run it. A new GUI window will then open.
I = [a; b; c];
T = y;
net = newff(minmax(I), [3,5,1], {'logsig','tansig','purelin'}, 'trainlm');
net = init(net);
net.trainParam.show = 1;
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-12;
net = train(net, I, T);
Then click on the Performance, Training State and Regression plots to see how well the NN is trained. The NN is trained well when the regression value (R) is greater than 0.9; it is perfectly trained when R equals 1.
Next, see on the Workspace window that the trained NN appears under the variable name net. Right-click it and save it anywhere on your computer.
The next step is testing the performance of the NN by feeding it values with y = sim(net, input).
I = [a; b; c];
T = y;
net = newff(minmax(I), [3,5,1], {'logsig','tansig','purelin'}, 'trainlm');
net = init(net);
net.trainParam.show = 1;
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-12;
net = train(net, I, T);
y = sim(net, [0.5 0.5 0.5]')
Moreover, see the result on the command window and check whether it is right. Additionally, you can test your trained NN by writing the following command on the command window:
y = sim(net, [0.5 0.5 0.5]')
Finally, by putting in a, b and c values you can test your trained NN. However, if you put a = 10, b = 20, c = 30, does your trained NN give the correct result? If not, what is the problem? (Hint: these values lie far outside the [0, 1] range the network was trained on.)
y = sim(net, [10 20 30]')
You can also rewrite the above program using the newer feedforwardnet syntax, as follows.
a = rand(1,100);
b = rand(1,100);
c = rand(1,100);
y = 2*a + 3*b + 5*c;
I = [a; b; c];
T = y;
%New version
net = feedforwardnet([3,5,1], 'trainlm');
net = init(net);
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-12;
net = train(net, I, T);
y = net(I);
error = T - y;
y = sim(net, [0.5 0.5 0.5]')
NB: You always have to know the data range used for ANN training. Examples:
Data for voltage control (min, max values).
Data for frequency control (min, max values).
Data for the speed-control range of a turbine (min, max values).
Example 3: Modeling the logical AND function (a linearly separable classification)
Truth table:
p1 p2 | t
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1
The problem is linearly separable; try to build a one-neuron perceptron network with the following inputs and output.
P = [0 0 1 1; 0 1 0 1]; % training inputs, p = [p1; p2]
T = [0 0 0 1]; % targets
net = newp ([0 1; 0 1], 1);
net = train (net, P, T);
a = sim (net, P);
Type a and T on the command window and check the result; after successful training, a should equal T = [0 0 0 1].
Dividing the Data
When training multilayer networks, the general practice is to first divide the data into three subsets.
The first subset is the training set, which is used for computing the gradient and updating the
network weights and biases. The second subset is the validation set. The error on the validation set
is monitored during the training process. The validation error normally decreases during the initial
phase of training, as does the training set error. However, when the network begins to overfit the
data, the error on the validation set typically begins to rise. The network weights and biases are
saved at the minimum of the validation set error.
There are four functions provided for dividing data into training, validation and test sets: dividerand (the default), divideblock, divideint, and divideind. The data division is normally performed automatically when you train the network.
net.divideParam
The divide function is accessed automatically whenever the network is trained, and is used to divide the data into training, validation and testing subsets. If net.divideFcn is set to 'dividerand' (the default), then the data is randomly divided into the three subsets using the division parameters net.divideParam.trainRatio, net.divideParam.valRatio, and net.divideParam.testRatio. The fraction of data that is placed in the training set is trainRatio/(trainRatio + valRatio + testRatio), with a similar formula for the other two sets. The default ratios for training, testing and validation are 0.7, 0.15 and 0.15, respectively.
Example 1: Generate I/O data.
% Generate I/O data
x= linspace(-pi/2, pi/2, 1000);
y= cos(x)+0.01*randn(1,1000);
% Generate a network with two hidden layers
net= feedforwardnet([4 3]);
% Set up division of data for training, validation, testing
net.divideParam.trainRatio= 70/100; % (default value)
net.divideParam.valRatio= 15/100; % (default value)
net.divideParam.testRatio= 15/100; % (default value)
% Set up the number of training epochs
net.trainParam.epochs = 500; % changeable
% Set up the transfer functions in hidden and output layers
net.layers{1}.transferFcn = 'tansig';  % #1 hidden layer (default function)
net.layers{2}.transferFcn = 'tansig';  % #2 hidden layer (default function)
net.layers{3}.transferFcn = 'purelin'; % output layer (default function)
% Set up weight and bias values
% The NN has 1x4 = 4 weights and 4 biases in layer 1,
% 4x3 = 12 weights and 3 biases in layer 2, and
% 3x1 = 3 weights and 1 bias in output layer 3.
% So, the total number of weight and bias values is 4+4+12+3+3+1 = 27.
% Configure the network for the data so that the weight and bias
% dimensions are known, then set the weights and biases to random
% values (without configure, train would re-initialize them).
net = configure(net, x, y);
net = setwb(net, rand(27,1));
net.IW{1} % check values
net.b{1}
% Train the network
net= train(net,x,y);
% Calculate the network output
ynet= net(x);
% Plot the signals
plot(x,y,'k',x,ynet,'r')
legend('Teaching data','Net output')
Method 2: ANN implementation using MATLAB/Simulink.
In this stage, the first step is to bring back the NN trained before, which you saved.
Click File on MATLAB, then Open; the network will then appear on the Workspace window.
The MATLAB syntax for moving a trained ANN from the command window or workspace to Simulink is the following.
Block Generation
The function gensim generates block descriptions of networks so you can simulate them using
Simulink software.
gensim(net,st)
The second argument to gensim determines the sample time, which is normally chosen to be some
positive real value.
If a network has no delays associated with its input weights or layer weights, this value can be
set to -1. A value of -1 causes gensim to generate a network with continuous sampling.
gensim(net, -1)
This will transfer the NN information to Simulink and at the same time automatically generate a Simulink file with the NN block, as shown below. Write the following on the command window and press Enter:
>> gensim(net, -1)
Click on any layer (custom NN) to see the neural network structure. To test the NN, double-click on the constant block, enter the input values, and test it. Then double-click on the scope to see the result, and compare it with the first method.
As you can see from the above graph, the answer is correct: it is 5, since 2(0.5) + 3(0.5) + 5(0.5) = 5. You can test for various input values.
Method 3: ANN implementation using the GUI tool (nntool).
To start the implementation of the ANN with the GUI tool, write the following syntax on the command window.
>>nntool
y = 2*a + 3*b + 5*c;
I = [a; b; c];
T = y;
%New version
net = feedforwardnet([3,5,1], 'trainlm');
net = init(net);
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-12;
net = train(net, I, T);
y = net(I);
error = T - y;
After running the above program, the input (I) and target (T) data appear on the Workspace window.
Then click Import on the Neural Network/Data Manager (nntool) window, select the input and target data respectively as shown below, and press OK.
After importing the two data sets, click the New button of nntool to create a NN. A new window will then appear, as shown below.
Moreover, do the following steps:
1. Write the name of the network.
2. Select the network type (e.g., feed-forward backpropagation network).
3. Select input data I.
4. Select target data T.
5. Select the training function (e.g., TRAINLM).
6. Choose the performance function (e.g., MSE).
7. Put in the number of layers. For our case we are using three (3) layers, so in front of Number of layers put 3 and press Enter. Pressing Enter here is very important; otherwise the 3 layers will not be generated.
8. Properties for layers:
Choose Layer 1, set the number of neurons to 3, then select the activation function LOGSIG.
Choose Layer 2, set the number of neurons to 5, then select the activation function TANSIG.
Choose Layer 3; the number of neurons is not shown here because it is the single output, but select the activation function PURELIN.
Additionally, click on the Create button to create the NN and click OK. A new network with the name you specified is then created on the nntool window. Finally, double-click on the name of the network on nntool. Then the following will appear.
Then do the following steps again:
1. Click the Train button and select the input and target data.
2. Click on Training Parameters and see the various parameters. If you want, you can change the number of epochs.
3. Click on Train Network, and the following will appear.
4. Then click on the Performance, Training State and Regression plots to see how well the NN is trained.
5. Export the trained NN to the Workspace window: click on Export, then select the name of your network and save it. The name of the network you saved will then appear in the Workspace window.
6. Now the NN is ready for testing. Use the command y = sim(net, [0.5 0.5 0.5]') on the command window, replacing net with the network name you saved, and see the result. The NN is then successfully trained.
How can the trained NN be improved?
By increasing the number of iterations.
By increasing the number of neurons in the hidden layers.
By changing the activation functions.
By changing the training algorithm.
By increasing the number of hidden layers.
Notice: the NN may give different results from run to run because of the random weight initialization, and across computers because of differences in MATLAB version and hardware.
How to get ANN data from Simulink?
Draw the following block diagram for an automatic voltage regulator on MATLAB/Simulink. Use the PID values P = 1, I = 0.25, D = 0.28.
Then change the above Simulink model to the following by adding simout (To Workspace) blocks to the input and output of the PID, in order to get data from Simulink to the workspace. The steps are:
1. Double-click on the To Workspace block at the input of the PID, change the variable name to input, and set the save format to Array.
2. Double-click on the To Workspace block at the output of the PID, change the variable name to output, and set the save format to Array.
After receiving the input and output from Simulink (see the Workspace window, where the input and output data appear), perform the ANN training using the following program.
For example, use the following code; only the first two lines change, because the data now come from Simulink.
y = out.output';
I = out.input';
T = y;
%New version
net = newff(minmax(I), [1,5,1], {'logsig','tansig','purelin'}, 'trainlm');
net = init(net);
net.trainParam.show = 1;
net.trainParam.lr = 0.06;
net.trainParam.epochs = 10000;
net.trainParam.goal = 1e-12;
net = train(net, I, T);
y = net(I);
error = T - y;
Look at the Workspace section: the net variable is available there. It is the trained artificial neural network.
The next step is to move the trained NN from the workspace to Simulink. For this we have the MATLAB command gensim(net, -1). Thus, write gensim(net, -1) on the command window and press Enter. Take the custom neural network block into your Simulink model.
Remove the PID block and replace it with the ANN block as shown below.
Simulate by clicking the scope button and see the voltage graph. The ANN tries to provide a smooth response. Finally, compare the PID and ANN response difference as below.
Double-click on the final scope and see the responses of the PID and the ANN. As you can see, the response of the PID is better than that of the ANN, but it has too much overshoot. After more successful training of the ANN, the response of the ANN can become better. So, try it by yourself.
Backpropagation method for training neural networks
The backpropagation algorithm (BPA) is a supervised learning method for training ANNs, and is one of the most common forms of training techniques.
It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. Let us see the example below for the XOR problem.
MATLAB Program
function test05
% Training XOR using the back propagation algorithm
input = [0 0; 0 1; 1 0; 1 1];
% desired target output of XOR
output = [0; 1; 1; 0];
% Initialize the bias
bias = [-1 -1 -1];
% learning rate or coefficient
coeff = 1;
% number of iterations
iterations = 600;
% initial weights (rows 1-2: hidden neurons, row 3: output neuron)
weights = [0 0 0; 0.1 0.1 0.1; 0.1 0.1 0.1];
for i = 1:iterations
    out = zeros(4,1); % we expect 4 rows of output and 1 column of output
    numIn = length(input(:,1));
    for j = 1:numIn
        % feed-forward calculations
        % Hidden layer
        H1 = bias(1)*weights(1,1) + input(j,1)*weights(1,2) + input(j,2)*weights(1,3);
        % sending data through the sigmoid function
        x2(1) = sigma(H1);
        H2 = bias(2)*weights(2,1) + input(j,1)*weights(2,2) + input(j,2)*weights(2,3);
        x2(2) = sigma(H2);
        % output layer
        x3_1 = bias(3)*weights(3,1) + x2(1)*weights(3,2) + x2(2)*weights(3,3);
        out(j) = sigma(x3_1);
        % adjust delta values of the weights
        % delta = actual*(1 - actual)*(desired - actual)
        delta3_1 = out(j)*(1 - out(j))*(output(j) - out(j));
        % back-propagate delta to the hidden layer
        delta2_1 = x2(1)*(1 - x2(1))*weights(3,2)*delta3_1;
        delta2_2 = x2(2)*(1 - x2(2))*weights(3,3)*delta3_1;
        % new weight calculations
        for k = 1:3
            if k == 1 % bias cases
                weights(1,k) = weights(1,k) + coeff*bias(1,1)*delta2_1;
                weights(2,k) = weights(2,k) + coeff*bias(1,2)*delta2_2;
                weights(3,k) = weights(3,k) + coeff*bias(1,3)*delta3_1;
            else % when k = 2 or 3: input cases to the neurons
                % note: input(j,k-1) is the input connected to weight (:,k)
                weights(1,k) = weights(1,k) + coeff*input(j,k-1)*delta2_1;
                weights(2,k) = weights(2,k) + coeff*input(j,k-1)*delta2_2;
                weights(3,k) = weights(3,k) + coeff*x2(k-1)*delta3_1;
            end
        end
    end
end
disp(out)
a = weights(1,1) % display one of the learned weights
function y = sigma(x)
y = 1/(1 + exp(-x));
Finally, run the algorithm. When you run it, you will not get the expected output. Thus, change the number of iterations to 6000 and look at your output again. So, what happened? Did you get the expected output?
GOOD LUCK!!!!!!!!!!!!!!!