Unit 2, 4,5,3 SC

Associative Neural Network

(Covered in separate notes.)

Hopfield Network

Fuzzy Logic

A fuzzy set is a set whose members may have different grades of membership in the interval [0, 1]. The grade of membership, also called the membership value, is a number between 0 and 1 that represents the degree to which an element belongs to the set; it is assigned by the set's membership function.
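As a concrete illustration, a triangular membership function is one common way to assign such grades. The fuzzy set "warm" and its breakpoints below are invented for the example:

```python
def triangular_mu(x, a, b, c):
    """Triangular membership function: 0 at a, rising to 1 at peak b, falling to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a temperature of 22 belongs to the fuzzy set "warm"
print(triangular_mu(22, 15, 25, 35))  # 0.7
```

Any value strictly between the breakpoints gets a partial membership; values at or outside `a` and `c` get 0, and the peak `b` gets 1.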
Genetic Algorithm and Search Space

Introduction to Evolutionary Computing: Evolutionary computing is an approach introduced in the 1960s by I. Rechenberg. One of its most well-known forms is the Genetic Algorithm (GA), invented by John Holland in 1975. In his book Adaptation in Natural and Artificial Systems, Holland introduced the idea of using the concepts of evolution and natural selection (i.e., "survival of the fittest") to solve optimization and search problems.

In essence, the genetic algorithm is a heuristic method that mimics the process of natural evolution to solve
problems by finding the best solutions among many possible ones.

Search Space:

The search space (or state space) refers to the set of all possible solutions to a given problem. Each point in this
space represents a potential solution. The goal of a genetic algorithm is to navigate this search space to find the
best solution based on a predefined fitness function.

Each possible solution in the search space is evaluated using the fitness function, which gives a score (fitness value) indicating how "good" the solution is at solving the problem. GAs search the space for the optimal or near-optimal solution, though they do not always guarantee finding the best one.

For example, consider searching for the minimum of a function. A genetic algorithm would explore various
points in the search space, looking for the point with the smallest value, which corresponds to the best (minimum)
solution.
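A minimal sketch of this idea, using an invented function f(x) = x² and a fitness score that rewards smaller function values:

```python
def f(x):
    return x ** 2          # function to minimize

def fitness(x):
    # For minimization, define fitness so that smaller f(x) scores higher.
    return 1.0 / (1.0 + f(x))

# A handful of points in the search space; the GA would explore many such points.
candidates = [-3.0, -1.0, 0.5, 2.0]
best = max(candidates, key=fitness)
print(best)  # 0.5 (closest to the true minimum at x = 0)
```

A real GA would not enumerate candidates like this; it evolves them, but the fitness function plays exactly this role of ranking points in the search space.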

Features of Genetic Algorithms (GAs)


Genetic algorithms have several key characteristics that distinguish them from other optimization methods:

1. Stochastic Nature: GAs are stochastic, meaning they rely heavily on randomness. Both the selection of
individuals for reproduction and the reproduction process itself use random techniques to simulate natural
evolution.
2. Population-Based: GAs maintain a population of solutions rather than a single solution at any point in
time. This allows for greater diversity and enables better exploration of the search space. By recombining
solutions, GAs can generate offspring that may outperform their parents.
3. Parallelism: The population-based nature of GAs makes them ideal for parallel computing, where
multiple solutions can be evaluated and processed simultaneously. This improves efficiency in large-scale
problems.
4. Robustness: GAs are robust and adaptable, meaning they perform well across a wide range of problems.
They do not require prior knowledge about the search space or problem structure, making them versatile
tools for complex optimization problems.

Evolutionary Algorithms

The principles of genetic algorithms have inspired other evolutionary algorithms, such as evolution strategies
and genetic programming. These algorithms, which share the common concept of mimicking natural evolution,
are often collectively referred to as Evolutionary Algorithms. Their ability to solve complex and diverse
problems has made them widely applicable across various fields.

However, it's important to recognize that while GAs are powerful, they are not perfect. GAs do not guarantee
finding the global optimum (the absolute best solution), but instead aim for "good enough" solutions, especially
when the search space is unknown or difficult to navigate.

Evolution and Optimization Example: Basilosaurus

To understand how evolution leads to optimization, consider the example of the Basilosaurus, an ancient whale
species that lived around 45 million years ago. Basilosaurus initially had physical traits that were not fully
adapted to its aquatic environment. Over time, through a process of natural selection, adaptations such as
shorter limbs and longer fingers helped improve its swimming ability, allowing it to hunt more effectively.
This gradual improvement through generations mirrors the way GAs optimize solutions. Just as beneficial traits
become more common over time in a species, good solutions become more prevalent over generations in a GA.
Similarly, over time, GAs "evolve" better solutions by combining favorable characteristics from previous
iterations.

How Genetic Algorithms Work

The basic idea behind GAs is that within a population of solutions, the potential for the best solution exists but
may not be immediately obvious. By simulating processes like reproduction, mutation, and natural selection, GAs
can discover new solutions that are better suited to the problem.

Key Operators in GAs:

1. Crossover (Recombination): Crossover is akin to sexual reproduction. It takes two parent solutions
(genotypes) and creates a new solution by combining parts of both parents. In nature, crossover involves
cutting and splicing DNA from two parents to form offspring. This operation allows the offspring to
inherit characteristics from both parents, potentially resulting in better solutions.
2. Mutation: Mutation introduces random changes to a solution's genes (values). While most mutations in
nature are harmful or neutral, in GAs, a few well-placed mutations can help the algorithm explore new
parts of the search space and prevent it from getting stuck in local optima.
3. Selection: Based on their fitness values, solutions are selected for reproduction. Solutions with higher
fitness have a better chance of being chosen to pass their genes to the next generation, simulating the
"survival of the fittest."

Through these operators, GAs evolve better solutions over time, eventually converging on an optimal or
near-optimal solution for the given problem.
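The interplay of these three operators can be sketched in a few lines. The OneMax task (maximize the number of 1s in a bitstring) and all parameter choices below are illustrative stand-ins, not from the source:

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)  # OneMax: count the 1s

def select(pop):
    # Tournament of size 2: the fitter of two random individuals wins.
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover: splice the parents at a random cut.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.01):
    # Flip each bit with small probability to preserve diversity.
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(30)]

print(max(fitness(ind) for ind in pop))  # best fitness, usually close to 20
```

Even this tiny loop shows the pattern: selection biases reproduction toward fit parents, crossover recombines them, and mutation keeps the population from collapsing prematurely.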

Limitations of GAs

Despite their versatility, GAs are not a magic solution for all problems:

1. Local Optima: GAs may not always find the global optimum solution; instead, they may settle for a good
enough solution, especially in complex or large search spaces.
2. Specialized Algorithms: For some specific problems, specialized algorithms may outperform GAs in both speed and accuracy. In these cases, GAs can be combined with other techniques to create hybrid methods that exploit the GA's generality and the other algorithm's precision.

Conclusion

Genetic algorithms are inspired by the process of natural evolution. They work by evolving a population of
solutions using selection, crossover, and mutation to solve optimization problems. While GAs are robust and
versatile, they are not guaranteed to find the best solution for every problem but are excellent for exploring
complex search spaces and finding solutions where other methods might fail.

Their application extends to areas such as image processing, scheduling problems, and even artificial intelligence,
proving the strength of evolution as a powerful problem-solving tool.

Genetic Algorithms

Genetic Algorithms (GAs) are adaptive heuristic search algorithms belonging to the larger class of evolutionary algorithms. They are based on the ideas of natural selection and genetics: an intelligent exploitation of random search that uses historical data to direct the search into regions of better performance in the solution space. GAs are commonly used to generate high-quality solutions to optimization and search problems.

Genetic algorithms simulate the process of natural selection: species that can adapt to changes in their environment survive, reproduce, and pass on to the next generation. In simple words, they simulate "survival of the fittest" among individuals of consecutive generations to solve a problem. Each generation consists of a population of individuals, and each individual represents a point in the search space and a possible solution. Each individual is represented as a string of characters/integers/floats/bits, analogous to a chromosome.

Foundation of Genetic Algorithms


Genetic algorithms are based on an analogy with the genetic structure and behavior of chromosomes of the
population. Following is the foundation of GAs based on this analogy –

1. Individuals in the population compete for resources and mates.
2. The most successful (fittest) individuals mate and create more offspring than others.
3. Genes from the fittest parents propagate through the generations; sometimes parents create offspring that are better than either parent.
4. Thus each successive generation becomes better suited to its environment.

Search Space

Fitness Score

A fitness score is assigned to each individual and measures its ability to "compete". Individuals with optimal (or near-optimal) fitness scores are sought.

Operators of Genetic Algorithms

Once the initial generation is created, the algorithm evolves it using the following operators:
1) Selection Operator: The idea is to give preference to individuals with good fitness scores and allow them to pass their genes to successive generations.
2) Crossover Operator: This represents mating between individuals. Two individuals are selected using the selection operator, and crossover sites are chosen at random. The genes at these crossover sites are then exchanged, creating completely new individuals (offspring).
3) Mutation Operator: The key idea is to insert random genes into offspring to maintain diversity in the population and avoid premature convergence.
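A minimal sketch of the crossover and mutation operators just described; the chromosomes, crossover site, and mutated gene below are arbitrary examples:

```python
def one_point_crossover(p1, p2, site):
    # Exchange the genes after the crossover site to create two offspring.
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

def mutate(chrom, pos, new_gene):
    # Insert a gene at pos to maintain diversity (pos and new_gene
    # would normally be chosen at random).
    out = chrom[:]
    out[pos] = new_gene
    return out

c1, c2 = one_point_crossover([1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0], 3)
print(c1, c2)            # [1, 1, 1, 0, 0, 0] [0, 0, 0, 1, 1, 1]
print(mutate(c1, 0, 0))  # [0, 1, 1, 0, 0, 0]
```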

Selection in Genetic Algorithms (GA):

Selection is a key operation in GAs, where individuals (solutions) are chosen from the population based
on their fitness to produce offspring for the next generation. The goal is to favor fitter individuals,
increasing the chance of finding optimal solutions over time.

Types of Selection:

1. Roulette Wheel Selection:


○ Individuals are selected proportionally to their fitness.
○ Higher fitness gives a higher probability of being chosen, like spinning a weighted roulette wheel.
2. Random Selection:
○ Individuals are chosen randomly, without considering fitness.
○ It introduces diversity but doesn’t prioritize better solutions.
3. Rank Selection:
○ Individuals are ranked based on their fitness.
○ Selection is based on rank rather than fitness value, reducing bias towards overly fit
individuals.
4. Tournament Selection:
○ A set number of individuals are selected randomly, and the fittest among them is chosen.
○ Provides balance between exploration and exploitation.
5. Boltzmann Selection:
○ Selection probability depends on both fitness and a temperature parameter (from
simulated annealing).
○ Initially allows more diversity, then gradually favors fitter solutions as temperature
decreases.

Each method affects how quickly and effectively the GA converges to optimal solutions.
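For instance, roulette wheel selection can be sketched as follows. The population and fitness values are invented; with one individual holding 80% of the total fitness, it should be picked roughly 80% of the time:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    # Spin a wheel where each individual's slice is proportional to its fitness.
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if pick <= cumulative:
            return individual
    return population[-1]

random.seed(1)
pop = ["A", "B", "C"]
fits = [1.0, 1.0, 8.0]   # C holds 80% of the wheel
picks = [roulette_select(pop, fits) for _ in range(1000)]
print(picks.count("C") / 1000)  # roughly 0.8
```

Tournament selection, by contrast, needs no global fitness sum: it just draws a few individuals at random and keeps the fittest, as sketched in the GA loop earlier.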

Introduction to Hybrid Soft Computing Techniques

Soft computing techniques, including neural networks, fuzzy systems, and genetic algorithms, are
inspired by biological computation and nature’s problem-solving methods. Each of these approaches is
powerful in specific areas but has limitations that can be overcome by combining them into hybrid
systems. These hybrid systems leverage the strengths of each technique, creating more flexible and
robust problem-solving methods.

1. Neural Networks (NNs):


These mimic the human nervous system's ability to learn and adapt. NNs excel in recognizing
patterns and learning from examples, but they are often "black boxes," meaning it’s hard to
explain how they make decisions. They are covered in chapters 2–6 of the book, focusing on types
like supervised and unsupervised learning networks, associative memory networks, and others.
2. Fuzzy Systems:
Fuzzy logic deals with uncertainty by formulating fuzzy rules to manage problems where
boundaries are not clearly defined. This technique allows gradual transitions between states (e.g.,
partial membership in a set) but struggles with automatic rule generation. Fuzzy systems are
discussed in chapters 7–14, covering fuzzy sets, relations, membership functions, fuzzy
arithmetic, and control systems.
3. Genetic Algorithms (GAs):
GAs are optimization algorithms inspired by natural evolution. They use mechanisms like
selection, crossover, and mutation to find optimal solutions. Chapter 15 explains the fundamental
GA operators and their applications in optimization.

Hybrid Soft Computing Systems


Hybrid systems combine two or more of these soft computing techniques to overcome the limitations of
using them individually. The main goal of hybridization is to take advantage of each technique’s
strengths while mitigating its weaknesses.

● Neural Networks are excellent at learning but weak in explaining decision-making.


● Fuzzy Systems are strong in explaining their reasoning but struggle with automatic rule generation.
● By combining NNs with fuzzy systems or GAs, hybrid systems can optimize performance and
provide better problem-solving capabilities.

Importance of Hybrid Systems

Hybrid soft computing systems are used in diverse fields like engineering design, medical diagnosis,
stock market analysis, and process control. For instance:

● NNs can handle pattern recognition while fuzzy systems can provide rule-based decision-making.
● GAs can optimize fuzzy membership functions, ensuring better performance in uncertain
environments.

However, hybrid systems must be carefully designed, as combining techniques doesn’t always guarantee
better results.

Neuro-Fuzzy Hybrid Systems

Neuro-fuzzy hybrid systems combine neural networks with fuzzy logic, creating systems that benefit
from the learning ability of NNs and the interpretability of fuzzy systems. J.S.R. Jang proposed this
model, known as the Adaptive Neuro-Fuzzy Inference System (ANFIS).

● In Neuro-Fuzzy Systems (NFS), fuzzy rules (such as IF-THEN rules) are used to approximate
functions, and NNs help fine-tune the fuzzy parameters.
● These systems balance interpretability and accuracy. While fuzzy models are interpretable, they
sometimes lack precision. NFS allows for both readable rules and learning from data to improve
accuracy.

There are two major types of fuzzy models in neuro-fuzzy hybrid systems:
1. Linguistic Fuzzy Modeling: Focuses on interpretability (e.g., Mamdani model).
2. Precise Fuzzy Modeling: Focuses on accuracy (e.g., Takagi-Sugeno-Kang model).
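A minimal sketch of precise (Takagi-Sugeno-Kang) fuzzy inference. The Gaussian membership functions and rule parameters below are invented for illustration; in ANFIS, these are precisely the parameters the neural component tunes from data:

```python
import math

def gaussian_mu(x, c, sigma):
    # Gaussian membership function centered at c.
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno_output(x, rules):
    # Takagi-Sugeno-Kang inference: each rule (c, sigma, p, q) reads
    # "IF x is about c THEN y = p*x + q"; the crisp output is the
    # firing-strength-weighted average of the rule consequents.
    weights = [gaussian_mu(x, c, s) for c, s, _, _ in rules]
    outputs = [p * x + q for _, _, p, q in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

rules = [(0.0, 1.0, 1.0, 0.0),   # IF x near 0 THEN y = x
         (5.0, 1.0, 0.0, 5.0)]   # IF x near 5 THEN y = 5
print(round(sugeno_output(0.0, rules), 3))  # 0.0
print(round(sugeno_output(5.0, rules), 3))  # 5.0
```

Between the rule centers, the output blends the two consequents smoothly, which is why this model trades some interpretability for accuracy.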

Characteristics of Neuro-Fuzzy Systems

The architecture of an NFS typically has three layers:

● Input Layer: Represents input variables.


● Rule Layer: Corresponds to fuzzy rules.
● Output Layer: Produces the final output.

Neuro-fuzzy systems can be initialized with predefined fuzzy rules, or the rules can be learned using
data-driven methods. The learning process adjusts fuzzy rules and membership functions to fit the input
data, optimizing the system’s performance.

Types of Neuro-Fuzzy Systems

1. Cooperative Neuro-Fuzzy Systems:


In these systems, the neural network and the fuzzy system work independently; the neural network learns parameters for the fuzzy system from data, and the learned parameters can be applied offline or online to improve performance.
○ Example: Learning fuzzy set parameters or rule weights using NNs.
2. General Neuro-Fuzzy Hybrid Systems (NFHS):
In general NFHS, the fuzzy system is interpreted as a specialized neural network. Fuzzy rules are
encoded as neurons, and fuzzy sets are represented by connection weights. This tight integration
allows for the advantages of both approaches.
Applications and Challenges

Neuro-fuzzy hybrid systems are applied in fields such as control systems, signal processing, medical
diagnosis, and stock market predictions. Despite their advantages, care must be taken in applying these
systems. Hybridization doesn't always lead to better solutions, and inappropriate applications may yield
poor results. Successful implementations require careful tuning of the systems based on the problem at
hand.

Adaptive Neuro-Fuzzy Inference System (ANFIS)

ANFIS, a popular NFS model in MATLAB, uses the structure of a fuzzy inference system combined
with neural network learning. It adjusts parameters like membership functions to optimize system
performance. ANFIS is widely used for tasks like system modeling and control.

This summary provides a clear understanding of hybrid soft computing techniques, particularly the
neuro-fuzzy hybrid systems and their practical applications.

Fusion Approach of Multispectral Images with SAR for Flood Area Analysis
Flooding is one of the most destructive natural disasters, especially in monsoon regions prone to sudden
floods caused by storms or phenomena like El Niño and La Niña. The damage from floods can be
significant, affecting the environment, human lives, and property. Therefore, accurate methods are
needed to monitor and assess flood-affected areas. One effective approach is combining multispectral
data and Synthetic Aperture Radar (SAR) imagery, each providing complementary insights into the
flooded landscape.

Importance of Multispectral and SAR Data

1. Multispectral Images:
○ These images capture data at different wavelengths of the electromagnetic spectrum. They
are useful in land cover analysis, as they can detect vegetation, soil, water, and other
surface features.
○ For flood analysis, techniques like the Normalized Difference Vegetation Index (NDVI),
calculated from multispectral data, help assess vegetation health and changes. This is
useful for identifying areas impacted by floods.
2. SAR Images:
○ SAR uses radar waves to capture images. It is particularly useful for flood detection
because radar can penetrate through clouds and provide data even in poor weather
conditions. SAR is sensitive to the moisture content of surfaces, making it effective at
identifying waterlogged or flooded areas.
○ SAR's backscattering effect can distinguish between water and other surfaces, which is
essential during flood events.
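The NDVI is computed per pixel from the near-infrared (NIR) and red reflectances; the reflectance values below are illustrative:

```python
def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); healthy vegetation scores high,
    # while open water typically scores negative, which helps delineate
    # flooded areas in multispectral imagery.
    return (nir - red) / (nir + red)

print(ndvi(0.5, 0.1))   # ~0.667: vigorous vegetation
print(ndvi(0.02, 0.1))  # ~-0.667: open water
```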

Fusion of Multispectral and SAR Images

Since multispectral and SAR data provide different types of information, combining them (image fusion)
enhances the accuracy of flood detection and analysis. This fusion takes advantage of the strengths of
both data types:

● Multispectral data provides detailed information about land cover.


● SAR data detects changes in water and moisture, which is crucial for flood detection.
The study referenced here used satellite data from the JERS-1 SAR and OPS sensors. SAR data was
collected before and after a flood caused by tropical storm Zira in Surat Thani Province, Thailand, in
1997. The fusion of these images allowed a more detailed and reliable assessment of the flooded areas.

Image Fusion Techniques

In image fusion, spatial and spectral data are integrated to enhance the classification of images and
improve feature recognition. Two main methods exist:

1. Spatial Domain Methods: These focus on combining spatial features from different images.
2. Spectral Domain Methods: These focus on combining the spectral characteristics of images,
often used in applications like color space transformation.

In this case, the Intensity-Hue-Saturation (IHS) model was used for fusion. This technique converts
the images into a color space model, making it easier to blend multispectral and SAR data to identify
flood areas.
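A sketch of the intensity-substitution idea behind IHS fusion, using Python's HSV color space as a stand-in for the IHS model (the pixel values are invented): the multispectral pixel supplies hue and saturation, while the normalized SAR value replaces the intensity channel.

```python
import colorsys

def ihs_fuse(rgb_pixel, sar_intensity):
    # Convert the multispectral pixel to HSV, substitute the intensity
    # (value) channel with the normalized SAR backscatter, and convert back.
    h, s, _ = colorsys.rgb_to_hsv(*rgb_pixel)
    return colorsys.hsv_to_rgb(h, s, sar_intensity)

fused = ihs_fuse((0.2, 0.6, 0.4), 0.9)
print(fused)  # approximately (0.3, 0.9, 0.6): same hue, SAR-driven brightness
```

In a real workflow this substitution runs per pixel over co-registered, resampled images; HSV is only one of several IHS-style transforms used in practice.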

Neural Network Classification for Flood Analysis

To classify the flood areas, a machine learning approach using artificial neural networks (ANNs) was
employed. The multilayer perceptron (MLP), a type of neural network based on the backpropagation
algorithm, was used for this purpose. The MLP consists of layers of nodes (neurons) where:

● The input layer receives data such as pixel values from the images.
● The hidden layers process this data to identify patterns.
● The output layer gives the classification result, indicating whether a particular area is flooded or
not.

This neural network model helps in recognizing complex and noisy patterns, which is critical for
accurately identifying flood zones.

Methodology for SAR and Multispectral Image Fusion

Here is a step-by-step breakdown of the methodology used in this study:

1. SAR Data Preprocessing:


○ The SAR data, initially in 16-bit format, was reduced to 8-bit using linear scaling, which
made it easier to process. This was done to create a dataset with 256 intensity values.
○ Wavelet decomposition was applied to the SAR images to filter out noise, such as the
common "speckle" effect in radar images, and retain useful data for the neural network
algorithm.
2. Resolution Matching:
○ The resolution of SAR data (12.5m x 12.5m) was resampled to match the resolution of the
OPS (Optical Sensor) data (25m x 25m). This ensured that the images were aligned
correctly for fusion.
3. Image Registration and Correction:
○ All the images were geometrically corrected to ensure they were in the same spatial
reference frame, making it possible to overlay and fuse them.
4. Data Fusion and Neural Network Classification:
○ After fusion, the satellite images were used as input for neural network classification.
Separate classifications were performed for the fused data and the original non-fused data
for comparison.
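Step 1's reduction from 16-bit to 8-bit data can be sketched as follows, assuming a simple min-max linear stretch over the raw values:

```python
def scale_to_8bit(values):
    # Linearly rescale raw 16-bit digital numbers into the 0-255 range,
    # yielding the 256 intensity levels mentioned above.
    lo, hi = min(values), max(values)
    return [round((v - lo) * 255 / (hi - lo)) for v in values]

print(scale_to_8bit([0, 32768, 65535]))  # [0, 128, 255]
```

Real preprocessing pipelines often clip outliers before stretching so that speckle spikes do not compress the useful dynamic range.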

Results of the Study

The results of the neural network classification showed that:

● Fused data provided better classification accuracy compared to non-fused data, confirming that
combining multispectral and SAR images enhances flood area detection.
● Multitemporal SAR data was especially useful for flood monitoring, as it provided detailed
insights into the water extent over time.
● The OPS data added important information about the land cover, which complemented the flood
area analysis.

Conclusion

By fusing multispectral and SAR data, this method provided a reliable and enhanced approach to flood
area classification. The use of neural networks further improved the accuracy of the analysis, allowing
for better identification and monitoring of flood zones. This fusion approach holds great potential for
applications in flood management, disaster response, and environmental monitoring.
Optimization of Traveling Salesman Problem (TSP) Using Genetic Algorithm

The Traveling Salesman Problem (TSP) is a well-known optimization challenge where a traveler must
visit each city in a given list exactly once, returning to the starting city, while minimizing the total
distance traveled. The complexity arises because the number of possible routes (solutions) grows
factorially with the number of cities. As a result, solving TSP exactly for large numbers of cities is
computationally infeasible due to the vast search space. It is categorized as an NP-hard problem,
meaning there's no known algorithm to solve it efficiently in polynomial time for all cases.

To address this, Genetic Algorithms (GAs) offer a powerful heuristic approach that can find good,
near-optimal solutions much faster than brute-force methods. GAs are inspired by the process of natural
selection and evolution, using techniques like selection, crossover (recombination of solutions), and
mutation to iteratively improve solutions over generations.

Genetic Algorithm Overview

A Genetic Algorithm works by generating an initial population of candidate solutions and evolving this
population over several generations. Each solution, representing a specific route for the TSP, is evaluated
based on its fitness, which in this case is inversely proportional to the total distance of the route (i.e.,
shorter routes are more fit).

Here’s how the basic process of a Genetic Algorithm unfolds for solving the TSP:

1. Initialization: Start with a randomly generated population of solutions (routes). In this case, the
population size is typically set to 100 routes.
2. Selection: Solutions are chosen for reproduction based on their fitness. Solutions with shorter
routes have a higher probability of being selected. The roulette wheel selection method is
commonly used, where fitter solutions are more likely to be picked for mating.
3. Crossover (Reproduction): New solutions (children) are created by combining two selected
parent solutions. Crossover operators determine how the parents’ routes are combined to form
offspring, which will inherit features (city order or edges) from both parents.
4. Mutation: After crossover, small changes are made to some solutions to introduce variability.
This helps the population escape local optima by ensuring diversity. Mutation rates are typically
low (around 1%).
5. Replacement: The new generation of solutions replaces the old one, and the process repeats for a
predetermined number of generations (e.g., 1000 generations).
6. Termination: The algorithm terminates either when the maximum number of generations is
reached or when the population converges to a solution.

The objective of applying a GA to TSP is to minimize the length of the route. The best solution in the
final population is usually close to the optimal solution.
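A sketch of the route-length evaluation that underlies the fitness function; the city coordinates are invented and distances are Euclidean:

```python
import math

def route_length(route, coords):
    # Total closed-tour distance: sum of consecutive legs plus the
    # return leg back to the starting city.
    total = 0.0
    for i in range(len(route)):
        x1, y1 = coords[route[i]]
        x2, y2 = coords[route[(i + 1) % len(route)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def fitness(route, coords):
    # Shorter routes are fitter (inverse of tour length).
    return 1.0 / route_length(route, coords)

coords = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
print(route_length([0, 1, 2, 3], coords))  # 4.0 (perimeter of the unit square)
```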

Crossover Operators (Binary Operators)

In Genetic Algorithms, crossover operators combine two parent solutions to produce offspring. Several
crossover methods have been tested for the TSP:

1. Uniform Order-Based Crossover (OCX):


○ This method ensures that 50% of the cities in the child’s route come from one parent, with
the remaining cities coming from the other parent. A random binary mask determines
which positions in the child route are filled from the first parent, while the remaining
positions are filled from the second parent in the order of cities not already chosen.
○ Example: If Parent 1 is (1 2 3 4) and Parent 2 is (4 3 2 1), the child might become (1 4 3 2)
using a random mask.
2. Heuristic Order-Based Crossover (HCX):
○ This method favors edges with shorter distances between cities. It uses a distance map to
prioritize shorter routes in offspring. The first part of the child route consists of edges with
better fitness from one parent, and the remaining cities are filled in the order from the
second parent.
○ This approach attempts to pass on beneficial edges, similar to how advantageous traits are
inherited in biological evolution.
3. Edge Recombination (ER):
○ This method preserves as many edges as possible from the parents. Each city in the child
route is chosen based on an "edge list" derived from both parents, ensuring that most of the
edges (connections between cities) from the parents are passed down to the child.
○ This method is particularly suited to TSP, as the edges (or connections between cities) are
critical to determining the length of the route.
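A minimal sketch of the uniform order-based crossover described in item 1, with the binary mask fixed for illustration (it would normally be drawn at random):

```python
def uniform_order_crossover(p1, p2, mask):
    # Positions where mask is 1 keep the city from parent 1; the remaining
    # positions are filled with parent 2's unused cities, in parent-2 order.
    kept = {city for city, m in zip(p1, mask) if m}
    fill = iter(city for city in p2 if city not in kept)
    return [city if m else next(fill) for city, m in zip(p1, mask)]

# Mask (1, 0, 0, 1) keeps cities 1 and 4 from Parent 1; the middle
# positions are filled with Parent 2's unused cities (3, then 2).
print(uniform_order_crossover([1, 2, 3, 4], [4, 3, 2, 1], [1, 0, 0, 1]))  # [1, 3, 2, 4]
```

Note that the child is always a valid permutation: every city appears exactly once, which is the key constraint any TSP crossover must preserve.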
Mutation Operators (Unary Operators)

Mutation introduces random changes to a solution, allowing for exploration of new parts of the search space:

1. Reciprocal Exchange:
○ This mutation swaps two randomly selected cities in the route. For example, in a route (1 2
3 4 5), swapping cities 2 and 4 might result in (1 4 3 2 5). This small change can lead to
new, potentially better solutions.
2. Inversion:
○ This mutation reverses a section of the route between two randomly chosen points. For
example, if the route is (1 2 3 4 5 6) and the inversion is applied between cities 2 and 5, the
result might be (1 5 4 3 2 6). This operator is useful in flipping sections of the route and
exploring new configurations.
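Both mutation operators are straightforward to sketch; the positions below reproduce the examples given above (in practice they are chosen at random):

```python
def reciprocal_exchange(route, i, j):
    # Swap the cities at positions i and j.
    out = route[:]
    out[i], out[j] = out[j], out[i]
    return out

def inversion(route, i, j):
    # Reverse the sub-route between positions i and j (inclusive).
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

print(reciprocal_exchange([1, 2, 3, 4, 5], 1, 3))  # [1, 4, 3, 2, 5]
print(inversion([1, 2, 3, 4, 5, 6], 1, 4))         # [1, 5, 4, 3, 2, 6]
```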

Results of Genetic Algorithm on TSP

In experiments, different crossover and mutation methods were tested on a TSP with 14 cities, and the
results showed the following:

● OCX (Uniform Order-Based Crossover) with Inversion Mutation performed the best overall,
consistently producing the shortest routes.
● Edge Recombination (ER) was the fastest at finding near-optimal solutions, often requiring
fewer generations to achieve good results.
● Reciprocal Exchange Mutation was effective at finding the optimal solution with higher
frequency than the inversion method.
● Heuristic Order-Based Crossover (HCX), which favored shorter edge lengths, performed well
but did not outperform OCX or ER consistently.

All genetic operators significantly outperformed random solutions and brute force methods, showing that
GAs are effective at solving TSP quickly and efficiently, especially for problems of this size.

Conclusion
Genetic Algorithms provide an efficient way to solve the Traveling Salesman Problem by evolving a
population of solutions through selection, crossover, and mutation. Among the tested methods, OCX and
ER with reciprocal exchange mutation were the most effective, finding optimal or near-optimal solutions
in significantly less time than exhaustive search methods. GAs strike a balance between exploration
(testing new solutions) and exploitation (refining good solutions), making them a powerful tool for
tackling NP-hard problems like the TSP.

Unit 3
Counter Propagation Network

Counterpropagation networks are multilayer networks built from combinations of input, output, and clustering layers.

The applications of counterpropagation nets include data compression, function approximation, and pattern association.

The counter propagation network is basically constructed from an instar-outstar model.

This model is a three layer neural network that performs input-output data mapping,
producing an output vector y in response to input vector x, on the basis of competitive
learning.

The three layers in an instar-outstar model are the input layer, the hidden(competitive) layer
and the output layer.

Outstar and Instar

An instar responds maximally to a single input vector. An outstar produces a single (multi-dimensional) output d when stimulated with a binary value x.

There are two stages in the training process of a counterpropagation net. In the first stage, the input vectors are clustered. In the second stage, the weights from the cluster-layer units to the output units are tuned to obtain the desired response.

There are two types of counter propagation net:

1. Full counterpropagation network

2. Forward-only counterpropagation network

Full counterpropagation network

Full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a look-up table. The full CPN works best if the inverse function exists and is continuous. The vectors x and y propagate through the network in a counterflow manner to yield the output vectors x* and y*.

Architecture of Full CPN:


The four major components of the instar-outstar model are the input layer, the instar, the competitive layer, and the outstar. For each node in the input layer there is an input value xi. All the instars are grouped into a layer called the competitive layer. Each instar responds maximally to a group of input vectors in a different region of space. The outstar has all the nodes of the output layer and a single node in the competitive layer; it looks like the fan-out of a node.


Forward only CPN
A simplified version of the full CPN is the forward-only CPN. It uses only the x vectors to form the clusters on the Kohonen units during phase I training. Input vectors are presented to the input units; first the weights between the input layer and the cluster layer are trained, and then the weights between the cluster layer and the output layer are trained. This is a specialized competitive network in which the targets are known.
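The two-phase training just described can be sketched on a toy one-dimensional problem; the data, initial weights, and learning rate below are invented for illustration:

```python
def nearest(weights, x):
    # Competition: index of the cluster unit whose weight vector is closest to x.
    return min(range(len(weights)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(weights[j], x)))

# Toy data: two clusters on a line, each with a known target output.
data = [([0.0], [0.0]), ([0.1], [0.0]), ([0.9], [1.0]), ([1.0], [1.0])]
kohonen = [[0.2], [0.8]]   # input -> cluster (Kohonen) weights
outstar = [[0.5], [0.5]]   # cluster -> output (outstar) weights
alpha = 0.3

# Phase I: cluster the x vectors with winner-take-all Kohonen learning.
for _ in range(20):
    for x, _ in data:
        j = nearest(kohonen, x)
        kohonen[j] = [w + alpha * (xi - w) for w, xi in zip(kohonen[j], x)]

# Phase II: tune the winning unit's outstar weights toward the desired response.
for _ in range(20):
    for x, y in data:
        j = nearest(kohonen, x)
        outstar[j] = [w + alpha * (yi - w) for w, yi in zip(outstar[j], y)]

print([round(v[0], 2) for v in outstar])  # [0.0, 1.0]
```

Each cluster unit ends up owning one region of input space, and its outstar weights converge to the target output for that region, which is exactly the adaptive look-up-table behavior described above.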

https://blog.oureducation.in/cpn-counterpropagation-network/
