ANN - Self Organizing Neural Network (SONN) Learning Algorithm

Last Updated: 10 Jul, 2020

Prerequisite: ANN | Self Organizing Neural Network (SONN)

In the Self Organizing Neural Network (SONN), learning is performed by shifting the weights from inactive connections to active ones. The winning neuron and the neurons in its neighborhood are selected to learn. If a neuron does not respond to a specific input pattern, no learning takes place in that neuron.

Self-Organizing Neural Network Learning Algorithm:

Step 0: Initialize the synaptic weights $W_{ij}$ to random values in a specific interval, such as [-1, 1] or [0, 1]. Assign the topological neighborhood parameters. Define the learning rate $\alpha$ (say, 0.1).

Step 1: Until the termination condition is reached, repeat Steps 2-8.

Step 2: For a randomly chosen input vector $X$ from the set of training samples, perform Steps 3-5.

Step 3: Compute the distance between $X$ and the synaptic weight vector $W_{j}$ of each neuron $j$. For each $j$, the Euclidean distance between the pair of (n x 1) vectors $X$ and $W_{j}$ is given by

\[D(j) = \left\|\mathbf{X}-\mathbf{W}_{j}\right\|=\left[\sum_{i=1}^{n}\left(x_{i}-w_{i j}\right)^{2}\right]^{1 / 2}\]

This is a criterion for measuring the similarity between two vectors: each node (neuron) in the network is evaluated to determine whose weight vector most closely matches the input vector.

Step 4: Select the winning neuron $j_{\mathbf{X}}$ that best matches the input vector $X$, i.e., the neuron for which $D(j)$ is minimum:

\[j_{\mathbf{X}}(p)=\arg\min _{j}\left\|\mathbf{X}-\mathbf{W}_{j}(p)\right\|=\arg\min _{j}\left\{\sum_{i=1}^{n}\left[x_{i}-w_{i j}(p)\right]^{2}\right\}^{1 / 2}, \quad j = 1, 2, \ldots, m\]

where $n$ is the number of neurons in the input layer and $m$ is the number of neurons in the Kohonen layer. The winning node is generally termed the Best Matching Unit (BMU).
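Steps 3-4 can be sketched in a few lines of NumPy (a minimal illustration; the function name `find_bmu` is an assumption, not a standard API):

```python
import numpy as np

# Sketch of Steps 3-4: compute D(j) = ||X - W_j|| for every Kohonen
# neuron j (one row of W per neuron) and return the index of the
# Best Matching Unit (BMU).
def find_bmu(X, W):
    distances = np.linalg.norm(W - X, axis=1)  # Euclidean distance per neuron
    return int(np.argmin(distances))           # winning neuron j_X

# Using the weight vectors from the worked example in this article:
X = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],
              [0.42, 0.70],
              [0.43, 0.21]])
print(find_bmu(X, W))  # index 2, i.e. neuron 3 in 1-based numbering
```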
Step 5: Learning phase: update the synaptic weights. For all nodes $j$ within the neighborhood of the winning neuron, for every $i$:

\[w_{i j}(p+1)=w_{i j}(p)+\Delta w_{i j}(p)\]

where $\Delta w_{i j}(p)$ is the weight correction at iteration $p$. The weight update is based on the competitive learning rule:

\[\Delta w_{i j}(p) = \left\{\begin{array}{cl}\alpha\left(x_{i}-w_{i j}(p)\right), & \text { if neuron } j \text { wins the competition } \\ 0, & \text { if neuron } j \text { loses the competition }\end{array}\right.\]

where $\alpha$ is the learning rate. The neighborhood function is centered on the winner-takes-all neuron $j_{\mathbf{X}}$ at iteration $p$: all neurons within the radius of the BMU are adjusted to make them more similar to the input vector.

Step 6: Update the learning rate $\alpha$ according to the equation

\[\alpha (t + 1) = 0.5 \, \alpha(t)\]

Step 7: At specified times, reduce the radius of the topological neighborhood around the BMU. As the clustering process progresses, the radius of the neighborhood around a cluster unit decreases accordingly.

Step 8: Check the termination condition.
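Steps 5-6 can be sketched as follows (a minimal illustration of the competitive learning rule; the helper names are assumptions, and the neighborhood is simplified here to the single winning neuron):

```python
import numpy as np

def update_weights(W, X, winner, alpha):
    """Competitive learning rule: move only the winner toward X."""
    W = W.copy()
    W[winner] += alpha * (X - W[winner])  # Delta w_ij = alpha * (x_i - w_ij)
    return W                              # losing neurons are unchanged

def decay_learning_rate(alpha):
    # Step 6: alpha(t + 1) = 0.5 * alpha(t)
    return 0.5 * alpha
```

Applying `update_weights` with the example values below (winner = neuron 3, alpha = 0.1) moves $W_{3}$ from [0.43, 0.21] to [0.439, 0.201], which rounds to the [0.44, 0.20] shown in the worked example.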
Example with iterations: Consider a 2-dimensional input vector: \[\mathbf{X}=\left[\begin{array}{l}0.52 \\ 0.12\end{array}\right]\] The initial weight vectors $W_{j}$ are given by \[\mathbf{W}_{1}=\left[\begin{array}{l}0.27 \\ 0.81\end{array}\right] \quad \mathbf{W}_{2}=\left[\begin{array}{l}0.42 \\ 0.70\end{array}\right] \quad \mathbf{W}_{3}=\left[\begin{array}{l}0.43 \\ 0.21\end{array}\right]\] We find the winning (best-matching) neuron $j_{\mathbf{X}}$ using the minimum-distance Euclidean criterion: \[d_{1}=\sqrt{\left(x_{1}-w_{11}\right)^{2}+\left(x_{2}-w_{21}\right)^{2}}=\sqrt{(0.52-0.27)^{2}+(0.12-0.81)^{2}}=0.73\] \[d_{2}=\sqrt{\left(x_{1}-w_{12}\right)^{2}+\left(x_{2}-w_{22}\right)^{2}}=\sqrt{(0.52-0.42)^{2}+(0.12-0.70)^{2}}=0.59\] \[d_{3}=\sqrt{\left(x_{1}-w_{13}\right)^{2}+\left(x_{2}-w_{23}\right)^{2}}=\sqrt{(0.52-0.43)^{2}+(0.12-0.21)^{2}}=0.13\] Neuron 3 is the winner, and its weight vector $W_{3}$ is updated according to the competitive learning rule: \[\Delta w_{13}=\alpha\left(x_{1}-w_{13}\right)=0.1(0.52-0.43)=0.009 \approx 0.01\] \[\Delta w_{23}=\alpha\left(x_{2}-w_{23}\right)=0.1(0.12-0.21)=-0.009 \approx -0.01\] The updated weight vector $W_{3}$ at iteration $(p + 1)$ is calculated as: \[\mathbf{W}_{3}(p+1)=\mathbf{W}_{3}(p)+\Delta \mathbf{W}_{3}(p)=\left[\begin{array}{l}0.43 \\ 0.21\end{array}\right]+\left[\begin{array}{r}0.01 \\ -0.01\end{array}\right]=\left[\begin{array}{l}0.44 \\ 0.20\end{array}\right]\] With each iteration, the weight vector $W_{3}$ of the winning neuron 3 moves closer to the input vector $X$.
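Repeating this update shows the winner's weight vector converging toward the input. The sketch below is a simplified illustration: the winner is held fixed at neuron 3 and the learning rate is kept constant at 0.1, rather than decayed as in Step 6.

```python
import numpy as np

X = np.array([0.52, 0.12])
w3 = np.array([0.43, 0.21])   # winning neuron's initial weights
alpha = 0.1

for p in range(50):
    w3 = w3 + alpha * (X - w3)  # Delta w = alpha * (x - w)

# After many iterations w3 is essentially equal to X
print(np.round(w3, 3))
```

Each step shrinks the gap $(X - W_{3})$ by a factor of $(1 - \alpha) = 0.9$, so the gap decays geometrically to zero.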
Author: goodday451999