Particle Swarm Optimization in Practice: Optimizing a BP Neural Network

This article uses the particle swarm optimization (PSO) algorithm to search for the optimal weights of a BP neural network, thereby improving the network's performance.

For details on the particle swarm algorithm itself, see: Particle Swarm Optimization Algorithm and Applications - CSDN Blog

Contents

1. BP Neural Network

1.1 Components of a BP Neural Network

1.2 How a BP Neural Network Works

2. Optimization Approach

3. Optimization Results


1. BP Neural Network

1.1 Components of a BP Neural Network

A BP neural network, i.e., a backpropagation neural network, is a multilayer feedforward network trained with the error backpropagation algorithm. It has strong nonlinear mapping ability and is well suited to approximating multidimensional functions.

A BP neural network consists of an input layer, one or more hidden layers, and an output layer. Each neuron is connected to all neurons in the previous layer; signals are passed forward as weighted sums and then transformed by a nonlinear activation function. The core training method is the error backpropagation (BP) algorithm: it computes the error between the network output and the target value and adjusts the network weights by gradient descent so as to minimize that error.

1.2 How a BP Neural Network Works

Forward propagation: the input signal travels from the input layer through the hidden layer(s) to the output layer. At each layer, a neuron's output is computed from the previous layer's outputs and the connection weights, typically as

y_{j} = f\left( \sum_{i} w_{ij} x_{i} + b_{j} \right)

where y_{j} is the output of neuron j, f(\cdot) is the activation function, w_{ij} is the connection weight from neuron i in the previous layer to neuron j, x_{i} is the output of neuron i in the previous layer, and b_{j} is the bias of neuron j.
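As a concrete illustration of this formula, here is a minimal Python sketch of a single layer's forward pass (the article's own code is MATLAB; the function name `forward_layer` and the choice of tanh as the activation are illustrative assumptions, not taken from the article):

```python
import math

def forward_layer(x, W, b):
    """One layer's forward pass: y_j = f(sum_i w_ij * x_i + b_j).

    x    -- outputs of the previous layer
    W    -- W[j][i] is the weight from neuron i (previous layer) to neuron j
    b    -- b[j] is the bias of neuron j
    f    -- tanh, chosen here purely for illustration
    """
    return [math.tanh(sum(w_ji * x_i for w_ji, x_i in zip(W[j], x)) + b[j])
            for j in range(len(b))]
```

With a zero weight row and zero bias the pre-activation is 0, so that neuron outputs tanh(0) = 0.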


Error backpropagation: if the actual output of the output layer does not match the desired output, the network enters the error backpropagation phase. The output error is propagated backward, layer by layer, from the output layer through the hidden layer(s) to the input layer, and is apportioned to all units in each layer. Using the chain rule, the gradient of the error with respect to each weight is computed, and the weights are then updated to reduce the error. A common error function is the mean squared error:

E = \frac{1}{2} \sum_{j} \left( d_{j} - y_{j} \right)^{2}

where y_{j} is the actual output of neuron j and d_{j} is its desired output.
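The error function described in the text can be written as a one-line Python helper (a sketch; the 1/2 factor is conventional because it cancels the exponent's 2 when differentiating):

```python
def mse(d, y):
    """E = 1/2 * sum_j (d_j - y_j)^2 for desired outputs d and actual outputs y."""
    return 0.5 * sum((d_j - y_j) ** 2 for d_j, y_j in zip(d, y))
```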


By repeatedly alternating forward propagation and error backpropagation, the BP network keeps adjusting its weights and biases until the preset number of training iterations is reached or the error falls below a set threshold.
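To make this training cycle concrete, here is a minimal Python sketch for a tiny 1-input/1-output network with one tanh hidden layer and a linear output, trained by stochastic gradient descent on E = 1/2 * (d - y)^2. Everything here (network size, learning rate, activation choice, function names) is an illustrative assumption, not the article's MATLAB setup:

```python
import math
import random

def train_bp(samples, n_hidden=4, lr=0.1, epochs=500, seed=0):
    """Train a 1-input/1-output BP net: tanh hidden layer, linear output."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(n_hidden)]  # input -> hidden
    b1 = [0.0] * n_hidden
    w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]  # hidden -> output
    b2 = 0.0
    for _ in range(epochs):
        for x, d in samples:
            # forward pass
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(n_hidden)]
            y = sum(w2[j] * h[j] for j in range(n_hidden)) + b2
            # backward pass: dE/dy for linear output, chain rule for hidden
            delta_out = y - d
            for j in range(n_hidden):
                delta_h = delta_out * w2[j] * (1 - h[j] ** 2)  # tanh'(z) = 1 - tanh(z)^2
                w2[j] -= lr * delta_out * h[j]
                w1[j] -= lr * delta_h * x
                b1[j] -= lr * delta_h
            b2 -= lr * delta_out
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(n_hidden)]
        return sum(w2[j] * h[j] for j in range(n_hidden)) + b2
    return predict
```

For example, trained on samples of the line d = 2x on [-1, 1], the returned predictor should approximate that mapping.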

2. Optimization Approach

The idea: find the weights w_{ij} and biases b_{j} that minimize the fitness value, i.e., the error function E.

The dimension of each particle is: (number of input nodes × number of hidden nodes) + number of hidden nodes + (number of hidden nodes × number of output nodes) + number of output nodes.

1. Randomly generate the initial weights and biases (the initial particle positions).
2. Compute the fitness value of each particle, and record the personal best Pb and the global best Gb.
3. Update the particles (i.e., the weights and biases) with the PSO update rules, then recompute the fitness values.
4. Update the personal best Pb and the global best Gb, and repeat from step 3 until the stopping criterion is met.
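The four steps above can be sketched as a generic PSO loop. Python is used here for illustration; in the article's MATLAB workflow the fitness call would be the function `fun` shown below, and the inertia/acceleration parameters here are common textbook defaults, not values from the article:

```python
import random

def pso(fitness, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over R^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    # Step 1: random initial positions (weights/biases) and zero velocities
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: initial fitness, personal bests Pb, global best Gb
    Pb = [x[:] for x in X]
    Pb_val = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pb_val[i])
    Gb, Gb_val = Pb[g][:], Pb_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            # Step 3: velocity/position update, then re-evaluate fitness
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (Pb[i][d] - X[i][d])
                           + c2 * r2 * (Gb[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = fitness(X[i])
            # Step 4: update personal and global bests
            if f < Pb_val[i]:
                Pb[i], Pb_val[i] = X[i][:], f
                if f < Gb_val:
                    Gb, Gb_val = X[i][:], f
    return Gb, Gb_val
```

On a simple test function such as the sphere function sum(x_i^2), the returned global best fitness should be close to zero.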

%% Fitness function
function fitness = fun(X, hiddennum, net, Input_train, Out_train)
%% Node counts
inputnum  = size(Input_train, 1);   % number of input-layer nodes
outputnum = size(Out_train, 1);     % number of output-layer nodes
%% Extract weights and biases from the particle position X
w1 = X(1 : inputnum * hiddennum);
B1 = X(inputnum * hiddennum + 1 : inputnum * hiddennum + hiddennum);
w2 = X(inputnum * hiddennum + hiddennum + 1 : inputnum * hiddennum + hiddennum + hiddennum * outputnum);
B2 = X(inputnum * hiddennum + hiddennum + hiddennum * outputnum + 1 : inputnum * hiddennum + hiddennum + hiddennum * outputnum + outputnum);
%% Assign them to the network (note: the properties are IW/LW, case-sensitive)
net.IW{1, 1} = reshape(w1, hiddennum, inputnum );
net.LW{2, 1} = reshape(w2, outputnum, hiddennum);
net.b{1}     = reshape(B1, hiddennum, 1);
net.b{2}     = B2';
%% Train the network
net = train(net, Input_train, Out_train);
%% Simulate on the training set
train_sim = sim(net, Input_train);
%% Fitness: sum of absolute training errors (summed over all outputs and samples)
err = sum(sum(abs(train_sim - Out_train)));
fitness = err;
end

3. Optimization Results

Follow me and send a private message to get the full code.
1. Part of the theory is cited from online sources; if there is any infringement, contact me and I will correct it!
2. Feel free to contact me for collaboration on optimization algorithms!
