Unit 2, 4,5,3 SC
Hopfield Network: in my notes.
Fuzzy Logic
A fuzzy set is a set that allows its members to have different grades of membership, given by a membership function taking values in the interval [0, 1]. The membership value is a number between 0 and 1 that represents the degree to which an element belongs to the set.
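As a minimal illustration (not from the original notes), the sketch below defines a triangular membership function that maps a crisp input to a membership grade in [0, 1]; the set boundaries a, b, c used here are arbitrary assumptions.

```python
# Minimal sketch: a triangular membership function that returns
# the degree to which x belongs to a fuzzy set defined by (a, b, c).

def triangular(x, a, b, c):
    """Membership grade of x for the triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: a fuzzy set "warm" peaking at 25 degrees
print(triangular(22, 15, 25, 35))   # 0.7 -> partially "warm"
```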
Genetic Algorithm and Search Space
In essence, the genetic algorithm is a heuristic method that mimics the process of natural evolution to solve
problems by finding the best solutions among many possible ones.
Search Space:
The search space (or state space) refers to the set of all possible solutions to a given problem. Each point in this
space represents a potential solution. The goal of a genetic algorithm is to navigate this search space to find the
best solution based on a predefined fitness function.
Each possible solution in the search space is evaluated using the fitness function, which gives a score (fitness value) indicating how "good" the solution is at solving the problem. GAs search the space for an optimal or near-optimal solution, though they do not always guarantee finding the best one.
For example, consider searching for the minimum of a function. A genetic algorithm would explore various
points in the search space, looking for the point with the smallest value, which corresponds to the best (minimum)
solution.
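To make this concrete, here is a small hypothetical sketch: the search space is assumed to be the interval [-10, 10], the function to minimise is f(x) = x^2, and the fitness function rewards points with smaller f(x).

```python
import random

# Hypothetical example: the search space is [-10, 10] and the goal is to
# minimise f(x) = x**2.  Fitness is higher for smaller f(x).

def f(x):
    return x ** 2

def fitness(x):
    return 1.0 / (1.0 + f(x))        # maps smaller f(x) to larger fitness

# A population is just a sample of points from the search space.
population = [random.uniform(-10, 10) for _ in range(20)]
best = max(population, key=fitness)  # the fittest point found so far
print(best, f(best))
```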
1. Stochastic Nature: GAs are stochastic, meaning they rely heavily on randomness. Both the selection of
individuals for reproduction and the reproduction process itself use random techniques to simulate natural
evolution.
2. Population-Based: GAs maintain a population of solutions rather than a single solution at any point in
time. This allows for greater diversity and enables better exploration of the search space. By recombining
solutions, GAs can generate offspring that may outperform their parents.
3. Parallelism: The population-based nature of GAs makes them ideal for parallel computing, where
multiple solutions can be evaluated and processed simultaneously. This improves efficiency in large-scale
problems.
4. Robustness: GAs are robust and adaptable, meaning they perform well across a wide range of problems.
They do not require prior knowledge about the search space or problem structure, making them versatile
tools for complex optimization problems.
Evolutionary Algorithms
The principles of genetic algorithms have inspired other evolutionary algorithms, such as evolution strategies
and genetic programming. These algorithms, which share the common concept of mimicking natural evolution,
are often collectively referred to as Evolutionary Algorithms. Their ability to solve complex and diverse
problems has made them widely applicable across various fields.
However, it's important to recognize that while GAs are powerful, they are not perfect. GAs do not guarantee
finding the global optimum (the absolute best solution), but instead aim for "good enough" solutions, especially
when the search space is unknown or difficult to navigate.
To understand how evolution leads to optimization, consider the example of the Basilosaurus, an ancient whale
species that lived around 45 million years ago. Basilosaurus initially had physical traits that were not fully
adapted to its aquatic environment. Over time, through a process of natural selection, adaptations such as
shorter limbs and longer fingers helped improve its swimming ability, allowing it to hunt more effectively.
This gradual improvement through generations mirrors the way GAs optimize solutions. Just as beneficial traits become more common in a species over time, good solutions become more prevalent over the generations of a GA, which "evolves" better solutions by combining favorable characteristics from previous iterations.
The basic idea behind GAs is that within a population of solutions, the potential for the best solution exists but
may not be immediately obvious. By simulating processes like reproduction, mutation, and natural selection, GAs
can discover new solutions that are better suited to the problem.
1. Crossover (Recombination): Crossover is akin to sexual reproduction. It takes two parent solutions
(genotypes) and creates a new solution by combining parts of both parents. In nature, crossover involves
cutting and splicing DNA from two parents to form offspring. This operation allows the offspring to
inherit characteristics from both parents, potentially resulting in better solutions.
2. Mutation: Mutation introduces random changes to a solution's genes (values). While most mutations in
nature are harmful or neutral, in GAs, a few well-placed mutations can help the algorithm explore new
parts of the search space and prevent it from getting stuck in local optima.
3. Selection: Based on their fitness values, solutions are selected for reproduction. Solutions with higher
fitness have a better chance of being chosen to pass their genes to the next generation, simulating the
"survival of the fittest."
Through these operators, GAs evolve better solutions over time, eventually converging on an optimal or
near-optimal solution for the given problem.
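The three operators can be sketched as follows. This is an illustrative assumption, not a prescribed implementation: it uses bit-string genotypes, fitness-proportionate selection, single-point crossover, and bit-flip mutation.

```python
import random

# Sketch of the three operators on bit-string genotypes (illustrative only).

def select(population, fitness):
    """Fitness-proportionate ('roulette wheel') selection of one parent."""
    return random.choices(population, weights=[fitness(p) for p in population])[0]

def crossover(p1, p2):
    """Single-point crossover: splice the prefix of one parent onto the suffix of the other."""
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:]

def mutate(genotype, rate=0.01):
    """Flip each bit with a small probability to keep exploring the search space."""
    return [1 - g if random.random() < rate else g for g in genotype]

def next_generation(population, fitness):
    """Breed a whole new population from the current one."""
    return [mutate(crossover(select(population, fitness), select(population, fitness)))
            for _ in population]
```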
Limitations of GAs
Despite their versatility, GAs are not a magic solution for all problems:
1. Local Optima: GAs may not always find the global optimum solution; instead, they may settle for a good
enough solution, especially in complex or large search spaces.
2. Specialized Algorithms: For some specific problems, specialized algorithms may outperform GAs in terms of both speed and accuracy. In these cases, GAs might be used in combination with other techniques, creating hybrid methods that take advantage of the GA's generality and the other algorithm's precision.
Conclusion
Genetic algorithms are inspired by the process of natural evolution. They work by evolving a population of
solutions using selection, crossover, and mutation to solve optimization problems. While GAs are robust and
versatile, they are not guaranteed to find the best solution for every problem but are excellent for exploring
complex search spaces and finding solutions where other methods might fail.
Their application extends to areas such as image processing, scheduling problems, and even artificial intelligence,
proving the strength of evolution as a powerful problem-solving tool.
Genetic Algorithms
Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger class of evolutionary algorithms. Genetic algorithms are based on the ideas of natural selection and genetics. They are an intelligent exploitation of random search, provided with historical data to direct the search into regions of better performance in the solution space. They are commonly used to generate high-quality solutions for optimization and search problems.
Genetic algorithms simulate the process of natural selection, which means that the species that can adapt to changes in their environment survive, reproduce and go on to the next generation. In simple words, they simulate "survival of the fittest" among individuals of consecutive generations to solve a problem. Each generation consists of a population of individuals, and each individual represents a point in the search space and a possible solution. Each individual is represented as a string of characters/integers/floats/bits. This string is analogous to the Chromosome.
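As one illustration of the chromosome analogy (the bit-string encoding and value range below are assumptions), an individual can be stored as a string of bits and decoded to a point in the search space before its fitness is evaluated.

```python
# Illustrative encoding: a chromosome is a bit string, decoded to a real
# number in a chosen range [low, high] for fitness evaluation.

def decode(chromosome, low=-10.0, high=10.0):
    """Map a bit-string chromosome to a real value in [low, high]."""
    as_int = int("".join(map(str, chromosome)), 2)
    return low + (high - low) * as_int / (2 ** len(chromosome) - 1)

chromosome = [1, 0, 1, 1, 0, 1, 0, 0]   # one individual (8-bit string)
print(decode(chromosome))               # the point in the search space it represents
```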
Search Space
Fitness Score
A fitness score is given to each individual, which shows the ability of that individual to "compete". Individuals with an optimal (or near-optimal) fitness score are sought.
Once the initial generation is created, the algorithm evolves the generation using the following operators:
1) Selection Operator: The idea is to give preference to the individuals with good fitness scores and allow them to pass their genes to successive generations.
2) Crossover Operator: This represents mating between individuals. Two individuals are selected using a selection operator and crossover sites are chosen randomly. The genes at these crossover sites are then exchanged, creating a completely new individual (offspring). For example, crossing the parents 11|010 and 01|101 at site 2 produces the offspring 11101 and 01010.
3) Mutation Operator: The key idea is to insert random genes into the offspring to maintain diversity in the population and avoid premature convergence. For example, flipping one bit of 11101 may give 10101.
Operators in GA
Selection is a key operation in GAs, where individuals (solutions) are chosen from the population based
on their fitness to produce offspring for the next generation. The goal is to favor fitter individuals,
increasing the chance of finding optimal solutions over time.
Types of Selection: common methods include roulette-wheel (fitness-proportionate) selection, tournament selection and rank-based selection; two of these are sketched below.
Each method affects how quickly and effectively the GA converges to optimal solutions.
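The following is an illustrative sketch of two common selection schemes, assuming the population is given as a list of (individual, fitness) pairs.

```python
import random

# Two common selection schemes, assuming `scored` is a list of
# (individual, fitness) pairs (illustrative sketch only).

def roulette_wheel(scored):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(f for _, f in scored)
    pick, running = random.uniform(0, total), 0.0
    for individual, f in scored:
        running += f
        if running >= pick:
            return individual
    return scored[-1][0]

def tournament(scored, k=3):
    """Pick the fittest of k randomly chosen individuals."""
    return max(random.sample(scored, k), key=lambda pair: pair[1])[0]
```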
Soft computing techniques, including neural networks, fuzzy systems, and genetic algorithms, are
inspired by biological computation and nature’s problem-solving methods. Each of these approaches is
powerful in specific areas but has limitations that can be overcome by combining them into hybrid
systems. These hybrid systems leverage the strengths of each technique, creating more flexible and
robust problem-solving methods.
Hybrid soft computing systems are used in diverse fields like engineering design, medical diagnosis,
stock market analysis, and process control. For instance:
● NNs can handle pattern recognition while fuzzy systems can provide rule-based decision-making.
● GAs can optimize fuzzy membership functions, ensuring better performance in uncertain
environments.
However, hybrid systems must be carefully designed, as combining techniques doesn’t always guarantee
better results.
Neuro-fuzzy hybrid systems combine neural networks with fuzzy logic, creating systems that benefit
from the learning ability of NNs and the interpretability of fuzzy systems. J.S.R. Jang proposed this
model, known as the Adaptive Neuro-Fuzzy Inference System (ANFIS).
● In Neuro-Fuzzy Systems (NFS), fuzzy rules (such as IF-THEN rules) are used to approximate
functions, and NNs help fine-tune the fuzzy parameters.
● These systems balance interpretability and accuracy. While fuzzy models are interpretable, they
sometimes lack precision. NFS allows for both readable rules and learning from data to improve
accuracy.
There are two major types of fuzzy models in neuro-fuzzy hybrid systems:
1. Linguistic Fuzzy Modeling: Focuses on interpretability (e.g., Mamdani model).
2. Precise Fuzzy Modeling: Focuses on accuracy (e.g., Takagi-Sugeno-Kang model).
Neuro-fuzzy systems can be initialized with predefined fuzzy rules, or the rules can be learned using
data-driven methods. The learning process adjusts fuzzy rules and membership functions to fit the input
data, optimizing the system’s performance.
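As a rough illustration of precise (Takagi-Sugeno-Kang) fuzzy modeling, the sketch below evaluates a zero-order TSK system with two rules. The Gaussian membership parameters and rule consequents are placeholder assumptions; in a neuro-fuzzy system these are exactly the parameters that the learning procedure would tune.

```python
import numpy as np

# Minimal sketch (not the ANFIS implementation) of a zero-order
# Takagi-Sugeno-Kang system with two IF-THEN rules.

def gaussian(x, centre, width):
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

def tsk_output(x):
    # Rule 1: IF x is "low"  THEN y = 0.0
    # Rule 2: IF x is "high" THEN y = 1.0
    w1, w2 = gaussian(x, 0.0, 1.0), gaussian(x, 5.0, 1.0)   # firing strengths
    y1, y2 = 0.0, 1.0                                       # rule consequents
    return (w1 * y1 + w2 * y2) / (w1 + w2)                  # weighted average

print(tsk_output(2.5))   # halfway between the two consequents
```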
Neuro-fuzzy hybrid systems are applied in fields such as control systems, signal processing, medical
diagnosis, and stock market predictions. Despite their advantages, care must be taken in applying these
systems. Hybridization doesn't always lead to better solutions, and inappropriate applications may yield
poor results. Successful implementations require careful tuning of the systems based on the problem at
hand.
ANFIS, a popular NFS model in MATLAB, uses the structure of a fuzzy inference system combined
with neural network learning. It adjusts parameters like membership functions to optimize system
performance. ANFIS is widely used for tasks like system modeling and control.
This summary provides a clear understanding of hybrid soft computing techniques, particularly the
neuro-fuzzy hybrid systems and their practical applications.
Fusion Approach of Multispectral Images with SAR for Flood Area Analysis
Flooding is one of the most destructive natural disasters, especially in monsoon regions prone to sudden
floods caused by storms or phenomena like El Niño and La Niña. The damage from floods can be
significant, affecting the environment, human lives, and property. Therefore, accurate methods are
needed to monitor and assess flood-affected areas. One effective approach is combining multispectral
data and Synthetic Aperture Radar (SAR) imagery, each providing complementary insights into the
flooded landscape.
1. Multispectral Images:
○ These images capture data at different wavelengths of the electromagnetic spectrum. They
are useful in land cover analysis, as they can detect vegetation, soil, water, and other
surface features.
○ For flood analysis, techniques like the Normalized Difference Vegetation Index (NDVI), calculated from the multispectral bands, help assess vegetation health and changes. This is useful for identifying areas impacted by floods (a small computation sketch follows this list).
2. SAR Images:
○ SAR uses radar waves to capture images. It is particularly useful for flood detection
because radar can penetrate through clouds and provide data even in poor weather
conditions. SAR is sensitive to the moisture content of surfaces, making it effective at
identifying waterlogged or flooded areas.
○ SAR's backscattering effect can distinguish between water and other surfaces, which is
essential during flood events.
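A small sketch of the NDVI computation mentioned in the list above; the `nir` and `red` band arrays are assumed inputs from the multispectral image.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel from two band arrays.

def ndvi(nir, red):
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Healthy vegetation gives NDVI near 1; water gives values near or below 0.
print(ndvi([0.5, 0.1], [0.1, 0.3]))
```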
Since multispectral and SAR data provide different types of information, combining them (image fusion) enhances the accuracy of flood detection and analysis. This fusion takes advantage of the strengths of both data types.
In image fusion, spatial and spectral data are integrated to enhance the classification of images and
improve feature recognition. Two main methods exist:
1. Spatial Domain Methods: These focus on combining spatial features from different images.
2. Spectral Domain Methods: These focus on combining the spectral characteristics of images,
often used in applications like color space transformation.
In this case, the Intensity-Hue-Saturation (IHS) model was used for fusion. This technique converts
the images into a color space model, making it easier to blend multispectral and SAR data to identify
flood areas.
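A rough sketch of substitution-style fusion is shown below. It uses the HSV colour space as a simple stand-in for the IHS transform, replacing the intensity channel of the multispectral composite with the SAR band; the array names and shapes are assumptions, not the paper's exact procedure.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

# Substitution-style fusion sketch: swap the intensity (value) channel of the
# multispectral composite for the SAR backscatter image.

def ihs_like_fusion(ms_rgb, sar):
    """ms_rgb: (H, W, 3) multispectral composite in [0, 1]; sar: (H, W) in [0, 1]."""
    hsv = rgb_to_hsv(ms_rgb)
    hsv[..., 2] = sar              # replace intensity with the SAR band
    return hsv_to_rgb(hsv)

ms = np.random.rand(4, 4, 3)       # toy multispectral composite
sar = np.random.rand(4, 4)         # toy SAR band
fused = ihs_like_fusion(ms, sar)
```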
To classify the flood areas, a machine learning approach using artificial neural networks (ANNs) was
employed. The multilayer perceptron (MLP), a type of neural network based on the backpropagation
algorithm, was used for this purpose. The MLP consists of layers of nodes (neurons) where:
● The input layer receives data such as pixel values from the images.
● The hidden layers process this data to identify patterns.
● The output layer gives the classification result, indicating whether a particular area is flooded or
not.
This neural network model helps in recognizing complex and noisy patterns, which is critical for
accurately identifying flood zones.
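As an illustrative sketch of the classification step, the snippet below trains scikit-learn's MLPClassifier (a backpropagation-trained multilayer perceptron) on placeholder data; in the real study, each row would hold fused pixel values and the label would mark the pixel as flooded or not flooded.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: 200 pixels, 4 fused band values each, binary flood labels.
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, size=200)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
mlp.fit(X, y)                        # backpropagation training
print(mlp.predict(X[:5]))            # flooded (1) / not flooded (0) per pixel
```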
● Fused data provided better classification accuracy compared to non-fused data, confirming that
combining multispectral and SAR images enhances flood area detection.
● Multitemporal SAR data was especially useful for flood monitoring, as it provided detailed
insights into the water extent over time.
● The OPS data added important information about the land cover, which complemented the flood
area analysis.
Conclusion
By fusing multispectral and SAR data, this method provided a reliable and enhanced approach to flood
area classification. The use of neural networks further improved the accuracy of the analysis, allowing
for better identification and monitoring of flood zones. This fusion approach holds great potential for
applications in flood management, disaster response, and environmental monitoring.
Optimization of Traveling Salesman Problem (TSP) Using Genetic Algorithm
The Traveling Salesman Problem (TSP) is a well-known optimization challenge where a traveler must
visit each city in a given list exactly once, returning to the starting city, while minimizing the total
distance traveled. The complexity arises because the number of possible routes (solutions) grows
factorially with the number of cities. As a result, solving TSP exactly for large numbers of cities is
computationally infeasible due to the vast search space. It is categorized as an NP-hard problem,
meaning there's no known algorithm to solve it efficiently in polynomial time for all cases.
To address this, Genetic Algorithms (GAs) offer a powerful heuristic approach that can find good,
near-optimal solutions much faster than brute-force methods. GAs are inspired by the process of natural
selection and evolution, using techniques like selection, crossover (recombination of solutions), and
mutation to iteratively improve solutions over generations.
A Genetic Algorithm works by generating an initial population of candidate solutions and evolving this
population over several generations. Each solution, representing a specific route for the TSP, is evaluated
based on its fitness, which in this case is inversely proportional to the total distance of the route (i.e.,
shorter routes are more fit).
Here’s how the basic process of a Genetic Algorithm unfolds for solving the TSP:
1. Initialization: Start with a randomly generated population of solutions (routes). In this case, the
population size is typically set to 100 routes.
2. Selection: Solutions are chosen for reproduction based on their fitness. Solutions with shorter
routes have a higher probability of being selected. The roulette wheel selection method is
commonly used, where fitter solutions are more likely to be picked for mating.
3. Crossover (Reproduction): New solutions (children) are created by combining two selected
parent solutions. Crossover operators determine how the parents’ routes are combined to form
offspring, which will inherit features (city order or edges) from both parents.
4. Mutation: After crossover, small changes are made to some solutions to introduce variability.
This helps the population escape local optima by ensuring diversity. Mutation rates are typically
low (around 1%).
5. Replacement: The new generation of solutions replaces the old one, and the process repeats for a
predetermined number of generations (e.g., 1000 generations).
6. Termination: The algorithm terminates either when the maximum number of generations is
reached or when the population converges to a solution.
The objective of applying a GA to TSP is to minimize the length of the route. The best solution in the
final population is usually close to the optimal solution.
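A minimal sketch of the route-length and fitness evaluation, assuming cities are given as (x, y) coordinates and a route is a permutation of city indices.

```python
import math

# Fitness for TSP: shorter closed tours get higher fitness.

def route_length(route, cities):
    """Total length of the closed tour, returning to the starting city."""
    return sum(math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def fitness(route, cities):
    return 1.0 / route_length(route, cities)   # shorter route -> higher fitness

cities = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(route_length([0, 1, 2, 3], cities))      # 4.0 for this square tour
```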
In Genetic Algorithms, crossover operators combine two parent solutions to produce offspring. Several crossover methods have been tested for the TSP, including uniform order-based crossover (OCX), heuristic order-based crossover (HCX) and edge recombination (ER), whose results are compared below.
Mutation introduces random changes to a solution, allowing exploration of new parts of the search space. Both mutation operators below are also sketched in code after this list:
1. Reciprocal Exchange:
○ This mutation swaps two randomly selected cities in the route. For example, in a route (1 2
3 4 5), swapping cities 2 and 4 might result in (1 4 3 2 5). This small change can lead to
new, potentially better solutions.
2. Inversion:
○ This mutation reverses a section of the route between two randomly chosen points. For
example, if the route is (1 2 3 4 5 6) and the inversion is applied between cities 2 and 5, the
result might be (1 5 4 3 2 6). This operator is useful in flipping sections of the route and
exploring new configurations.
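The two mutation operators can be sketched as follows (illustrative only), acting on a route given as a list of city numbers.

```python
import random

# Reciprocal exchange and inversion mutation for TSP routes.

def reciprocal_exchange(route):
    """Swap two randomly chosen cities in the route."""
    i, j = random.sample(range(len(route)), 2)
    route = route[:]
    route[i], route[j] = route[j], route[i]
    return route

def inversion(route):
    """Reverse the segment between two randomly chosen positions."""
    i, j = sorted(random.sample(range(len(route)), 2))
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

# e.g. inversion([1, 2, 3, 4, 5, 6]) can yield [1, 5, 4, 3, 2, 6]
# when the chosen positions bracket cities 2 through 5.
```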
In experiments, different crossover and mutation methods were tested on a TSP with 14 cities, and the
results showed the following:
● OCX (Uniform Order-Based Crossover) with Inversion Mutation performed the best overall,
consistently producing the shortest routes.
● Edge Recombination (ER) was the fastest at finding near-optimal solutions, often requiring
fewer generations to achieve good results.
● Reciprocal Exchange Mutation was effective at finding the optimal solution with higher
frequency than the inversion method.
● Heuristic Order-Based Crossover (HCX), which favored shorter edge lengths, performed well
but did not outperform OCX or ER consistently.
All genetic operators significantly outperformed random solutions and brute force methods, showing that
GAs are effective at solving TSP quickly and efficiently, especially for problems of this size.
Conclusion
Genetic Algorithms provide an efficient way to solve the Traveling Salesman Problem by evolving a
population of solutions through selection, crossover, and mutation. Among the tested methods, OCX and
ER with reciprocal exchange mutation were the most effective, finding optimal or near-optimal solutions
in significantly less time than exhaustive search methods. GAs strike a balance between exploration
(testing new solutions) and exploitation (refining good solutions), making them a powerful tool for
tackling NP-hard problems like the TSP.
Unit 3
Counter Propagation Network
Counterpropagation networks are multilayer networks based on combinations of the input, output, and clustering layers.
The applications of counterpropagation nets are data compression, function approximation and pattern association.
This model is a three-layer neural network that performs input-output data mapping, producing an output vector y in response to an input vector x, on the basis of competitive learning.
The three layers in an instar-outstar model are the input layer, the hidden (competitive) layer and the output layer.
The outstar produces a single (multidimensional) output d when stimulated with a binary value x.
There are two stages involved in the training process of a counterpropagation net. In the first stage, the input vectors are clustered, i.e. the weights from the input units to the cluster (Kohonen) units are trained. In the second stage of training, the weights from the cluster layer units to the output units are adjusted to produce the desired outputs.
Full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a look-up table. The full CPN works best if the inverse function exists and is continuous. The vectors x and y propagate through the network in a counterflow manner to yield output vectors x* and y*, which approximate x and y. The connections from the input layer to the competitive layer form the instars, and the connections from the competitive layer to the output layer form the outstar.
For each node in the input layer there is an input value xi. All the instars are grouped into a layer called the competitive layer, and each instar responds maximally to a group of input vectors in a distinct region of the input space. An outstar model has all the nodes in the output layer and a single node in the competitive layer. The input vectors are used to form the clusters on the Kohonen units during phase I training. In the case of forward-only CPN, only the input vectors x are presented to the input units: first, the weights between the input layer and the cluster layer are trained; then the weights between the cluster layer and the output layer are trained. This is a specific competitive network, with the target known. A minimal training sketch is given below.
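Under the usual description of forward-only CPN training, a minimal two-phase sketch looks like this; the layer sizes, learning rates and winner-take-all rule below are assumptions for illustration, not the exact procedure from the source.

```python
import numpy as np

# Minimal forward-only CPN sketch: phase I clusters the inputs on the Kohonen
# units, phase II tunes the cluster-to-output (outstar) weights toward the
# known targets.

def train_cpn(X, Y, n_clusters=4, alpha=0.2, beta=0.2, epochs=50):
    rng = np.random.default_rng(0)
    W = rng.random((n_clusters, X.shape[1]))     # input -> cluster (instar) weights
    V = rng.random((n_clusters, Y.shape[1]))     # cluster -> output (outstar) weights

    for _ in range(epochs):                      # phase I: cluster the input vectors
        for x in X:
            j = np.argmin(np.linalg.norm(W - x, axis=1))   # winning cluster unit
            W[j] += alpha * (x - W[j])

    for _ in range(epochs):                      # phase II: tune cluster -> output weights
        for x, y in zip(X, Y):
            j = np.argmin(np.linalg.norm(W - x, axis=1))
            V[j] += beta * (y - V[j])
    return W, V

def predict(x, W, V):
    return V[np.argmin(np.linalg.norm(W - x, axis=1))]
```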
https://blog.oureducation.in/cpn-counterpropagation-network/