
University Institute of Technology, RGPV

Department of Computer Science and Engineering

SOFT COMPUTING
LAB FILE
CS-801

SUBMITTED TO:
Dr. Shikha Agrawal

SUBMITTED BY:                          ENROLLMENT NO.:

Kavery Pandey 0101CS201059


Index

No.  Problem Description                                                              Date of Experiment   Date of Submission

1    Develop a program for the Breadth First Search algorithm.
2    Develop a program for the AO* search algorithm.
3    Develop a program for the A* search algorithm.
4    Develop a program for solving the N-Queens problem.
5    Develop a program for pattern recognition using an ADALINE network.
6    Develop a program to implement the XOR function in a MADALINE neural network.
7    Construct a program for training an Artificial Neural Network using the Error Back Propagation algorithm.
8    Develop a program to implement the LMS and Perceptron learning rules.
9    Develop a program that illustrates the training and working of a Hopfield network.
10   Develop a program that illustrates the training and working of a Full CPN with input pairs.
11   Develop a program to cluster four vectors using an ART1 network.
12   Develop a program to implement a heteroassociative neural net for mapping inputs.
13   Develop a program illustrating all operations on fuzzy sets.
14   Develop a program to maximize the function f(x) = x^2 using a Genetic Algorithm.
1. Develop a program for Breadth First Search algorithm.

To implement the Breadth-First Search (BFS) algorithm in C++, we'll typically use a queue to
manage the set of next nodes to visit. You also need a way to represent the graph, usually
through adjacency lists or an adjacency matrix, and an array or vector to keep track of visited
nodes.

Here's an example of how you can implement BFS in C++ using an adjacency list and the standard library's queue:
#include <iostream>
#include <vector>
#include <queue>

using namespace std;

// Function to perform BFS on the graph
void bfs(vector<vector<int>>& graph, int start) {
    int n = graph.size();              // Number of vertices in the graph
    vector<bool> visited(n, false);    // Track visited vertices
    queue<int> q;                      // Queue for BFS

    // Start by visiting the starting node
    visited[start] = true;
    q.push(start);

    cout << "BFS starting from vertex " << start << ":\n";
    while (!q.empty()) {
        // Dequeue a vertex from the queue
        int v = q.front();
        q.pop();
        cout << v << " ";

        // Go through all nodes adjacent to v
        for (int adj : graph[v]) {
            if (!visited[adj]) {
                visited[adj] = true;
                q.push(adj);
            }
        }
    }
    cout << endl;
}

int main() {
    // Example graph represented as an adjacency list
    vector<vector<int>> graph = {
        {1, 2},     // Adjacency list for vertex 0
        {0, 3, 4},  // Adjacency list for vertex 1
        {0, 4},     // Adjacency list for vertex 2
        {1, 5},     // Adjacency list for vertex 3
        {1, 2, 5},  // Adjacency list for vertex 4
        {3, 4}      // Adjacency list for vertex 5
    };

    // Perform BFS from vertex 0
    bfs(graph, 0);
    return 0;
}
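With the example adjacency list above, the traversal from vertex 0 should print the vertices in the order 0 1 2 3 4 5, since neighbours are enqueued in the order they appear in each list.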
2. Develop a program for AO* search algorithm.

The AO* (And-Or graph search) algorithm is a heuristic search used to find an optimal solution graph in a graph that includes both "AND" nodes and "OR" nodes. It is particularly useful for goal-directed problem solving on graphs that represent both conjunctive (AND) and disjunctive (OR) subproblems.
The AO* algorithm is useful for searching and-or graphs where some nodes ("AND
nodes") require all of their child nodes to be solved, while other nodes ("OR nodes")
require only one of their child nodes to be solved to consider them as solved. The objective
is often to find the best (lowest cost) solution graph that leads from a start node to goal
nodes.

#include <iostream>
#include <vector>
#include <unordered_map>
#include <queue>
#include <climits>
#include <set>
#include <algorithm>

using namespace std;

struct Node {
    int id;
    vector<pair<int, int>> children; // Pair: (child id, cost)
    int h;    // Heuristic value
    int cost; // Cost from start
    bool operator<(const Node& other) const {
        return cost + h > other.cost + other.h; // Min-heap ordered by f = cost + h
    }
};

class AOSearch {
public:
    unordered_map<int, Node> graph;
    priority_queue<Node> open;
    set<int> closed;

    Node aoStar(int startId) {
        Node& start = graph[startId];
        start.cost = 0;
        open.push(start);
        while (!open.empty()) {
            Node current = open.top();
            open.pop();
            if (isGoal(current)) {
                return current;
            }
            closed.insert(current.id);
            expandNode(current);
        }
        return Node{-1}; // Return an error node if no solution is found
    }

private:
    void expandNode(Node& node) {
        vector<int> costs(node.children.size(), INT_MAX);
        for (size_t i = 0; i < node.children.size(); ++i) {
            auto& child = node.children[i];
            Node& childNode = graph[child.first];
            int totalCost = node.cost + child.second;
            if (closed.find(childNode.id) == closed.end()) { // If not in closed
                if (totalCost < childNode.cost) {
                    childNode.cost = totalCost;
                    open.push(childNode);
                }
            }
            costs[i] = totalCost;
        }
        // Update the cost of the node to the minimum of its children's costs
        node.cost = *min_element(costs.begin(), costs.end());
    }

    bool isGoal(Node& node) {
        return node.children.empty(); // Assuming goal nodes have no children
    }
};

int main() {
    // Create nodes
    Node n1{1, {{2, 1}, {3, 4}}, 7, INT_MAX};
    Node n2{2, {{4, 5}}, 5, INT_MAX};
    Node n3{3, {}, 0, INT_MAX}; // Goal node
    Node n4{4, {}, 0, INT_MAX}; // Goal node

    // Set up the graph
    AOSearch ao;
    ao.graph[1] = n1;
    ao.graph[2] = n2;
    ao.graph[3] = n3;
    ao.graph[4] = n4;

    // Perform AO* search
    Node result = ao.aoStar(1);
    if (result.id != -1) {
        cout << "Goal node found with ID: " << result.id
             << ", Cost: " << result.cost << endl;
    } else {
        cout << "No solution found." << endl;
    }
    return 0;
}
3. Develop a program for A* search algorithm.

The A* (A-star) algorithm is a popular pathfinding and graph traversal algorithm widely used in computer science, especially in areas like AI and game programming. It finds the shortest path from a start node to a goal node by always expanding the node with the lowest f(n) = g(n) + h(n), where g(n) is the cost of the path found so far and h(n) is the estimated cost from n to the goal.
#include <iostream>
#include <vector>
#include <queue>
#include <unordered_map>
#include <cmath>
#include <climits>

using namespace std;

struct Node {
    int id;
    double f, g, h; // f = g + h; g is the cost from start, h is the heuristic to the goal
    Node(int id, double g, double h) : id(id), g(g), h(h) {
        f = g + h;
    }
};

// Comparator for the priority queue (min-heap on f)
struct CompareNode {
    bool operator()(Node const& a, Node const& b) {
        return a.f > b.f;
    }
};

// Heuristic function for A* (Euclidean distance)
double heuristic(int x1, int y1, int x2, int y2) {
    return sqrt(pow(x2 - x1, 2) + pow(y2 - y1, 2));
}

// A* search algorithm
void AStarSearch(vector<vector<pair<int, double>>>& graph, int start, int goal,
                 vector<int>& coords) {
    priority_queue<Node, vector<Node>, CompareNode> openSet;
    vector<double> gScore(graph.size(), INT_MAX); // Cost from start to each node
    gScore[start] = 0;
    openSet.push(Node(start, 0, heuristic(coords[2 * start], coords[2 * start + 1],
                                          coords[2 * goal], coords[2 * goal + 1])));

    vector<int> cameFrom(graph.size(), -1); // Track the path

    while (!openSet.empty()) {
        Node current = openSet.top();
        openSet.pop();
        if (current.id == goal) {
            cout << "Path found, cost: " << current.g << endl;
            int pathNode = goal;
            while (pathNode != -1) {
                cout << pathNode << " <- ";
                pathNode = cameFrom[pathNode];
            }
            cout << "start" << endl;
            return;
        }
        for (auto& neighbor : graph[current.id]) {
            int neighborId = neighbor.first;
            double tentative_gScore = current.g + neighbor.second;
            if (tentative_gScore < gScore[neighborId]) {
                cameFrom[neighborId] = current.id;
                gScore[neighborId] = tentative_gScore;
                double h = heuristic(coords[2 * neighborId], coords[2 * neighborId + 1],
                                     coords[2 * goal], coords[2 * goal + 1]);
                openSet.push(Node(neighborId, tentative_gScore, h));
            }
        }
    }
    cout << "No path found." << endl;
}

int main() {
    // Example graph (as an adjacency list) and coordinates for each node
    // (coords[2*id] is x, coords[2*id + 1] is y)
    vector<vector<pair<int, double>>> graph = {
        {{1, 1.0}, {2, 4.0}},
        {{2, 2.0}, {3, 5.0}},
        {{3, 1.0}},
        {{4, 1.0}},
        {}
    };
    vector<int> coords = {0, 0, 1, 1, 2, 2, 3, 3, 4, 4}; // x and y coordinates for nodes 0..4

    // Start A* search from node 0 to node 4
    AStarSearch(graph, 0, 4, coords);
    return 0;
}
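With the sample graph and coordinates above, the search should report a path cost of 5 and print the chain 4 <- 3 <- 2 <- 1 <- 0 <- start, i.e. the route 0 -> 1 -> 2 -> 3 -> 4.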
4. Develop a program for solving the N-Queens problem.

The N-Queens problem is a classic puzzle where the goal is to place N queens on
an N×N chessboard such that no two queens threaten each other. This means that
no two queens can share the same row, column, or diagonal.

Here’s a simple and efficient solution using backtracking in C++. This method
uses recursion to try to place queens on the board and backtracks when no legal
moves are left.
#include <iostream>
#include <vector>
#include <string>
#include <cmath> // For abs()

using namespace std;

class NQueenSolver {
private:
    int N;
    vector<vector<string>> solutions;
    vector<int> position;

    // Utility function to check if the position (row, col) is safe
    bool isSafe(int row, int col) {
        for (int i = 0; i < row; ++i) {
            // Check for the same column and diagonals
            if (position[i] == col || abs(position[i] - col) == abs(i - row))
                return false;
        }
        return true;
    }

    // Utility to convert the current solution to a readable form
    void storeSolution() {
        vector<string> solution(N, string(N, '.'));
        for (int i = 0; i < N; ++i) {
            solution[i][position[i]] = 'Q';
        }
        solutions.push_back(solution);
    }

    // Function to place queens recursively
    void solveNQueensRec(int row) {
        if (row == N) {
            storeSolution();
            return;
        }
        for (int col = 0; col < N; ++col) {
            if (isSafe(row, col)) {
                position[row] = col;
                solveNQueensRec(row + 1);
                // Explicit backtracking is not needed: position[row] is simply overwritten
            }
        }
    }

public:
    NQueenSolver(int N) : N(N), position(N) {}

    vector<vector<string>> solve() {
        solveNQueensRec(0);
        return solutions;
    }
};

int main() {
    int N = 8; // Change N to solve for different board sizes
    NQueenSolver solver(N);
    vector<vector<string>> solutions = solver.solve();
    cout << "Number of solutions for " << N << "-Queens: " << solutions.size() << endl;
    for (const auto& solution : solutions) {
        for (const string& row : solution) {
            cout << row << endl;
        }
        cout << endl; // Separate different solutions
    }
    return 0;
}
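For N = 8, the program should report 92 solutions, the well-known count for the standard 8-Queens puzzle.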
5. Develop a program for pattern recognition using an ADALINE network.
ADALINE (Adaptive Linear Neuron) is essentially a single-layer neural network with a linear transfer function that is used for tasks like classification and pattern recognition.
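The training loop below applies the delta (Widrow-Hoff) rule: after each sample the weights move against the gradient of the squared error, w_j <- w_j + eta * (t - y) * x_j, with the bias treated as an extra weight whose input is fixed at 1.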

#include <iostream>
#include <vector>
#include <cmath>
#include <numeric>

using namespace std;

class ADALINE {
private:
    vector<double> weights;
    double learning_rate;
    int epochs;

public:
    ADALINE(double lr, int ep) : learning_rate(lr), epochs(ep) {}

    void fit(const vector<vector<double>>& X, const vector<int>& y) {
        weights.assign(X[0].size() + 1, 0.0); // +1 for the bias weight
        for (int epoch = 0; epoch < epochs; ++epoch) {
            double total_error = 0.0;
            for (size_t i = 0; i < X.size(); ++i) {
                double output = predict(X[i], false);
                double error = y[i] - output;
                // Update weights
                weights[0] += learning_rate * error; // bias weight update
                for (size_t j = 0; j < X[i].size(); ++j) {
                    weights[j + 1] += learning_rate * error * X[i][j];
                }
                total_error += error * error;
            }
            cout << "Epoch " << (epoch + 1) << "/" << epochs
                 << ", Error: " << total_error / 2.0 << endl;
        }
    }

    double predict(const vector<double>& x, bool threshold = true) {
        double activation = weights[0]; // bias
        for (size_t i = 0; i < x.size(); ++i) {
            activation += weights[i + 1] * x[i];
        }
        if (threshold) {
            return (activation >= 0.0) ? 1 : -1;
        }
        return activation;
    }
};

int main() {
    vector<vector<double>> X = {
        {1, 2}, {2, 3}, {3, 2}, {4, 3},
        {5, 3}, {6, 2}, {6, 4}, {7, 1}
    };
    vector<int> y = {1, 1, 1, 1, -1, -1, -1, -1};

    ADALINE adaline(0.001, 10);
    adaline.fit(X, y);

    // Testing the classifier
    vector<double> new_point = {5, 3};
    cout << "Prediction for (5, 3): " << adaline.predict(new_point) << endl;
    return 0;
}
6. Develop a program to implement the XOR function in a MADALINE Neural Network.

MADALINE (Many ADALINEs) is an artificial neural network that extends the basic ADALINE network. It consists of multiple ADALINE units organized in layers and is typically used for pattern classification tasks that are not linearly separable, such as the XOR problem.
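Because XOR is not linearly separable, a single ADALINE cannot represent it. The sketch below therefore hard-codes one workable decomposition (an assumed choice, not the only one): with bipolar (+1/-1) inputs and a step activation, one hidden unit computes x1 AND NOT x2, the other computes NOT x1 AND x2, and the output unit ORs the two hidden outputs.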
#include <iostream>
#include <vector>

using namespace std;

class MADALINENN {
private:
    vector<vector<double>> weights_hidden; // Weights for the hidden-layer neurons
    vector<double> bias_hidden;            // Biases for the hidden layer
    vector<double> weights_output;         // Weights for the output neuron
    double bias_output;                    // Bias for the output neuron

    // Activation function: bipolar step function
    int activation(double x) {
        return x >= 0 ? 1 : -1;
    }

public:
    // Constructor to initialize the network with fixed weights that realize XOR
    MADALINENN() {
        // Hidden unit 0 fires for (x1 AND NOT x2); hidden unit 1 fires for (NOT x1 AND x2)
        weights_hidden = {{1, -1}, {-1, 1}};
        bias_hidden = {-0.5, -0.5};
        // The output neuron ORs the two hidden units
        weights_output = {1, 1};
        bias_output = 1;
    }

    // Predict function to process the inputs through the network
    int predict(vector<int>& inputs) {
        vector<double> hidden_outputs;

        // Compute outputs from the hidden layer
        for (size_t i = 0; i < weights_hidden.size(); i++) {
            double sum = 0;
            for (size_t j = 0; j < inputs.size(); j++) {
                sum += inputs[j] * weights_hidden[i][j];
            }
            sum += bias_hidden[i];
            hidden_outputs.push_back(activation(sum));
        }

        // Compute the output from the output neuron
        double output_sum = 0;
        for (size_t i = 0; i < hidden_outputs.size(); i++) {
            output_sum += hidden_outputs[i] * weights_output[i];
        }
        output_sum += bias_output;
        return activation(output_sum);
    }
};

int main() {
    MADALINENN nn; // Create an instance of the MADALINE neural network

    // Define bipolar input vectors for the XOR problem
    vector<vector<int>> inputs = {
        {1, 1}, {1, -1}, {-1, 1}, {-1, -1}
    };

    // Expected outputs for XOR
    vector<int> expected_outputs = {-1, 1, 1, -1};

    // Test the network on the XOR inputs
    for (size_t i = 0; i < inputs.size(); i++) {
        int output = nn.predict(inputs[i]);
        cout << "Input: (" << inputs[i][0] << ", " << inputs[i][1]
             << ") => Output: " << output
             << ", Expected: " << expected_outputs[i] << endl;
    }
    return 0;
}
7. Construct a program for training an Artificial Neural Network using the Error Back Propagation algorithm.

Below is a short and simple Python implementation of an artificial neural network (ANN)
that uses the error backpropagation algorithm to learn the XOR function. This example
assumes some familiarity with the NumPy library for numerical operations.
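In the code below the gradient steps follow the standard rule for a sigmoid network: the output delta is d_out = (t - y_pred) * y_pred * (1 - y_pred), the hidden delta is d_hidden = (d_out . W_out^T) * h * (1 - h), and each weight matrix is increased by the learning rate times the product of its input activations and the corresponding deltas (with the biases accumulating the column sums of the deltas).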
import numpy as np

# Sigmoid activation function and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

# Input datasets
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
expected_output = np.array([[0], [1], [1], [0]])

epochs = 10000
lr = 0.1
inputLayerNeurons, hiddenLayerNeurons, outputLayerNeurons = 2, 2, 1

# Random weight and bias initialization
hidden_weights = np.random.uniform(size=(inputLayerNeurons, hiddenLayerNeurons))
hidden_bias = np.random.uniform(size=(1, hiddenLayerNeurons))
output_weights = np.random.uniform(size=(hiddenLayerNeurons, outputLayerNeurons))
output_bias = np.random.uniform(size=(1, outputLayerNeurons))

# Training algorithm
for _ in range(epochs):
    # Forward propagation
    hidden_layer_activation = np.dot(inputs, hidden_weights)
    hidden_layer_activation += hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_activation)

    output_layer_activation = np.dot(hidden_layer_output, output_weights)
    output_layer_activation += output_bias
    predicted_output = sigmoid(output_layer_activation)

    # Backpropagation
    error = expected_output - predicted_output
    d_predicted_output = error * sigmoid_derivative(predicted_output)

    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)

    # Updating weights and biases
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * lr
    output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * lr
    hidden_weights += inputs.T.dot(d_hidden_layer) * lr
    hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * lr

# Print final weights, biases, and outputs for verification
print("Final hidden weights: ", hidden_weights)
print("Final hidden bias: ", hidden_bias)
print("Final output weights: ", output_weights)
print("Final output bias: ", output_bias)
print("Output from neural network after 10,000 epochs: ", predicted_output)
8. Develop a program to implement the LMS and Perceptron Learning Rules.
These algorithms are foundational for learning in neural networks, with LMS being a
method used in linear regression and adaptive filtering, and the Perceptron being one of
the earliest neural network architectures.
1. LMS Algorithm
The LMS algorithm, also known as the Widrow-Hoff learning rule, minimizes the mean squared error between the desired output and the predicted output of the network. It is often used in adaptive filters.
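Concretely, the update applied inside the loop below is w <- w + 2 * eta * e * x, where e = d - y is the difference between the desired and predicted output; the factor 2 comes from differentiating the squared error e^2 with respect to the weights.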
import numpy as np

def lms_learning(x, d, learning_rate=0.01, epochs=10):
    # x: input dataset (features)
    # d: desired output
    # Initialize weights
    w = np.zeros(x.shape[1])

    for _ in range(epochs):
        for i in range(x.shape[0]):
            # Calculate the prediction
            y = np.dot(x[i], w)
            # Calculate the error
            error = d[i] - y
            # Update weights
            w += 2 * learning_rate * error * x[i]

    return w

# Example usage
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # Input dataset
d = np.array([0, 0, 0, 1])                      # Desired output
weights = lms_learning(x, d)
print("Learned weights:", weights)
2. Perceptron Learning Rule
The Perceptron is a binary classifier that updates its weights until it classifies the training examples correctly. Its learning rule is one of the simplest forms of neural network training.

import numpy as np

def perceptron_learning(x, labels, learning_rate=0.1, epochs=10):
    # x: input dataset (features)
    # labels: class labels
    # Initialize weights
    w = np.zeros(x.shape[1])

    for _ in range(epochs):
        for i in range(x.shape[0]):
            # Calculate the prediction
            activation = np.dot(x[i], w)
            y = 1 if activation >= 0 else 0
            # Calculate the error
            error = labels[i] - y
            # Update weights
            w += learning_rate * error * x[i]

    return w

# Example usage
x = np.array([[1, 0], [1, 1], [2, 0], [2, 1]])  # Input dataset
labels = np.array([0, 1, 0, 1])                 # Class labels
weights = perceptron_learning(x, labels)
print("Learned weights:", weights)
9. Develop a program that illustrates the training and working of a Hopfield network.

A Hopfield Network is a form of recurrent artificial neural network that serves as a content-
addressable ("associative") memory system with binary threshold nodes. It uses Hebbian
learning to store specific patterns or memories. When presented with a part of a stored
memory or a noisy version of it, the network can reconstruct the original memory.
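Training below uses the Hebbian prescription W = sum over stored patterns of p p^T with the diagonal zeroed, and recall repeatedly sets each unit to the sign of its net input. Each such asynchronous update can only keep the energy E = -1/2 x^T W x the same or lower, which is why the state settles towards a stored pattern.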

Here is a basic Python program to illustrate the training and retrieval process in a Hopfield
network. The example will demonstrate how to store and recall patterns:

import numpy as np

class HopfieldNetwork:
    def __init__(self, size):
        self.size = size
        self.weights = np.zeros((size, size))

    def train(self, patterns):
        # Hebbian learning: accumulate the outer product of each pattern with itself
        for pattern in patterns:
            self.weights += np.outer(pattern, pattern)
        np.fill_diagonal(self.weights, 0)  # Zero out the diagonal elements

    def recall(self, pattern, steps=10):
        # Asynchronously update each unit for a fixed number of sweeps
        for _ in range(steps):
            for i in range(self.size):
                raw_activation = np.dot(self.weights[i], pattern)
                pattern[i] = 1 if raw_activation >= 0 else -1
        return pattern

    def energy(self, pattern):
        return -0.5 * np.dot(pattern, np.dot(self.weights, pattern))

# Define patterns to store in the network
pattern1 = np.array([1, 1, -1, -1, 1])
pattern2 = np.array([-1, -1, 1, 1, -1])

# Initialize the Hopfield network
network = HopfieldNetwork(size=5)

# Train the network with the defined patterns
network.train([pattern1, pattern2])

# Test the network by recalling a stored pattern from a noisy cue
noisy_pattern1 = np.array([1, -1, -1, -1, 1])  # A noisy version of pattern1
recalled_pattern1 = network.recall(noisy_pattern1)
print("Recalled Pattern:", recalled_pattern1)

# Check the energy of the recalled pattern
print("Energy of the recalled pattern:", network.energy(recalled_pattern1))
10. Develop a program that illustrates the training and working of a Full CPN neural network with input pairs.

Developing a program that illustrates the training and working of a full Counterpropagation Network (CPN) on input-output pairs involves understanding its two main components:

1. Kohonen Layer (Self-Organizing Map) - This unsupervised component maps input vectors onto a competitive layer where neurons compete for activation until a winner is identified.
2. Outstar Layer (Grossberg Layer) - This supervised component learns to associate the output of the competitive layer with the desired output.
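In the sketch below, for each training pair the winning Kohonen neuron is the one whose weight vector is closest to the input (smallest Euclidean distance), and only the winner is updated: w <- w + alpha * (x - w) for its Kohonen weights and v <- v + beta * (y - v) for its Grossberg weights, with both learning rates decayed after every epoch.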

import numpy as np

class CPN:
    def __init__(self, input_dim, num_neurons, output_dim):
        # Initialize weights for the Kohonen layer (competitive)
        self.kohonen_weights = np.random.rand(num_neurons, input_dim)
        # Initialize weights for the Outstar layer (Grossberg)
        self.grossberg_weights = np.random.rand(num_neurons, output_dim)

        self.learning_rate_kohonen = 0.1
        self.learning_rate_grossberg = 0.1
        self.decay = 0.99  # Decay rate for the learning rates

    def train(self, input_data, output_data, epochs=10):
        for epoch in range(epochs):
            for input_vec, output_vec in zip(input_data, output_data):
                # Find the winning neuron in the Kohonen layer
                distances = np.linalg.norm(self.kohonen_weights - input_vec, axis=1)
                winner_index = np.argmin(distances)

                # Update Kohonen weights (move towards the input vector)
                self.kohonen_weights[winner_index] += self.learning_rate_kohonen * (
                    input_vec - self.kohonen_weights[winner_index])

                # Update Grossberg weights (move towards the output vector)
                self.grossberg_weights[winner_index] += self.learning_rate_grossberg * (
                    output_vec - self.grossberg_weights[winner_index])

            # Decay the learning rates
            self.learning_rate_kohonen *= self.decay
            self.learning_rate_grossberg *= self.decay

    def predict(self, input_vec):
        # Find the winning neuron in the Kohonen layer
        distances = np.linalg.norm(self.kohonen_weights - input_vec, axis=1)
        winner_index = np.argmin(distances)

        # Return the corresponding Grossberg output
        return self.grossberg_weights[winner_index]

# Example usage
input_dim = 2
num_neurons = 5
output_dim = 1

# Create a CPN instance
cpn = CPN(input_dim, num_neurons, output_dim)

# Train on simple data
input_data = np.array([[0.1, 0.2], [0.8, 0.9], [0.1, 0.5], [0.3, 0.2], [0.9, 0.8]])
output_data = np.array([[0.2], [0.9], [0.5], [0.3], [1.0]])
cpn.train(input_data, output_data, epochs=50)

# Test
test_input = np.array([0.05, 0.1])
predicted_output = cpn.predict(test_input)
print("Predicted output:", predicted_output)
11. Develop a program to cluster four vectors using an ART1 network.

Adaptive Resonance Theory (ART) networks are a class of neural networks that aim to
cluster input patterns in a way that is stable over time. ART1, in particular, is designed to
work with binary input vectors. It’s an unsupervised learning model that can retain learned
patterns without being affected by new input patterns, thanks to its stability-plasticity
characteristic.
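For reference, textbook ART1 accepts the winning category only if the overlap between its prototype and the input is a large enough fraction of the input, |x AND w_J| / |x| >= rho; if the test fails, that category is inhibited and the search continues, which is what keeps earlier categories from being overwritten. The sketch below implements a simplified form of this vigilance test and of the fast-learning weight update.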
Here, I'll provide an example of an implementation of an ART1 network in Python.

import numpy as np

class ART1:
    def __init__(self, n, m, rho):
        """
        n: number of input neurons
        m: number of output (category) neurons
        rho: vigilance parameter, 0 < rho <= 1
        """
        self.n = n
        self.m = m
        self.rho = rho
        self.W = np.ones((m, n))                         # Bottom-up weights from input to output
        self.W /= np.sum(self.W, axis=1)[:, np.newaxis]  # Normalize each category row

    def fit(self, data, epochs=100):
        for _ in range(epochs):
            for x in data:
                inhibited = set()
                while len(inhibited) < self.m:
                    # Compute activations, ignoring inhibited categories
                    T = np.dot(self.W, x)
                    for j in inhibited:
                        T[j] = -np.inf
                    # Competition: the most active category wins
                    winner = np.argmax(T)

                    # Vigilance test: the fraction of the input covered by the
                    # winner's active positions must reach the vigilance level
                    active = (self.W[winner] > 0).astype(float)
                    if np.dot(active, x) / np.sum(x) >= self.rho:
                        # Resonance: fast-learning weight update (L = 2)
                        match = np.minimum(self.W[winner], x)
                        self.W[winner] = 2 * match / (1 + np.sum(match))
                        break
                    # Vigilance failed: inhibit this category and search again
                    inhibited.add(winner)

    def predict(self, data):
        result = []
        for x in data:
            T = np.dot(self.W, x)
            result.append(int(np.argmax(T)))
        return result

# Define the four binary vectors to cluster
data = np.array([
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0]
])

# Create an ART1 network instance
n = 5      # Input size
m = 2      # Number of categories
rho = 0.5  # Vigilance parameter
art1 = ART1(n, m, rho)

# Train the network
art1.fit(data)

# Cluster the data
clusters = art1.predict(data)
print("Clusters:", clusters)
12. Develop a program to implement a heteroassociative neural net for mapping inputs.

A heteroassociative neural network is designed to map a set of input vectors to a set of output vectors, where the inputs and outputs are different but related in some way. This type of neural network is useful for tasks like pattern recognition, where an input needs to be associated with a specific output that is not merely a binary or similar state but could be a complex pattern or vector.

import numpy as np

class HeteroassociativeNetwork:
    def __init__(self, input_size, output_size):
        # Initialize the weight matrix with zeros
        self.weights = np.zeros((output_size, input_size))

    def train(self, input_vectors, output_vectors):
        """Train the network using the Hebbian learning rule (outer product)."""
        for inp, out in zip(input_vectors, output_vectors):
            self.weights += np.outer(out, inp)

    def predict(self, input_vector):
        """Predict the output by applying the weight matrix to the input."""
        activation = np.dot(self.weights, input_vector)
        # Use a sign function as the activation for bipolar output mapping
        return np.where(activation > 0, 1, -1)

# Example usage
if __name__ == "__main__":
    # Define input and output patterns
    input_vectors = np.array([
        [1, -1, 1],
        [-1, 1, -1],
        [1, 1, -1]
    ])
    output_vectors = np.array([
        [1, -1],
        [-1, 1],
        [1, 1]
    ])

    # Create a heteroassociative network instance
    net = HeteroassociativeNetwork(input_size=3, output_size=2)

    # Train the network
    net.train(input_vectors, output_vectors)

    # Test the network
    test_vector = np.array([1, -1, 1])
    predicted_output = net.predict(test_vector)
    print("Input:", test_vector)
    print("Predicted Output:", predicted_output)
13. Develop a program for illustrating all operations on fuzzy sets.

To develop a program that illustrates all basic operations on fuzzy sets, we need to cover a
range of operations that include union, intersection, complement, and others that are central to
fuzzy set theory. Fuzzy set operations mimic classical set operations but are defined to handle the
degree of membership which ranges between 0 and 1.
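The standard definitions used below are: union mu_{A union B}(x) = max(mu_A(x), mu_B(x)), intersection mu_{A intersect B}(x) = min(mu_A(x), mu_B(x)), complement mu_{A'}(x) = 1 - mu_A(x), subset A is a subset of B iff mu_A(x) <= mu_B(x) for every x, and Cartesian product mu_{A x B}(x, y) = min(mu_A(x), mu_B(y)).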
Let's build a Python program to handle these operations for fuzzy sets.

class FuzzySet:
    def __init__(self, elements):
        # elements: a dictionary mapping each element to its membership grade
        self.elements = elements

    def union(self, other):
        """Perform the union of two fuzzy sets (max of membership grades)."""
        result = {}
        for elem, grade in self.elements.items():
            result[elem] = max(grade, other.elements.get(elem, 0))
        for elem, grade in other.elements.items():
            if elem not in result:
                result[elem] = grade
        return FuzzySet(result)

    def intersection(self, other):
        """Perform the intersection of two fuzzy sets (min of membership grades)."""
        result = {}
        for elem, grade in self.elements.items():
            if elem in other.elements:
                result[elem] = min(grade, other.elements[elem])
        return FuzzySet(result)

    def complement(self):
        """Perform the complement of a fuzzy set."""
        result = {elem: 1.0 - grade for elem, grade in self.elements.items()}
        return FuzzySet(result)

    def is_subset(self, other):
        """Check if this fuzzy set is a subset of another fuzzy set."""
        for elem, grade in self.elements.items():
            if elem not in other.elements or grade > other.elements[elem]:
                return False
        return True

    def cartesian_product(self, other):
        """Calculate the Cartesian product of two fuzzy sets."""
        result = {}
        for elem1, grade1 in self.elements.items():
            for elem2, grade2 in other.elements.items():
                result[(elem1, elem2)] = min(grade1, grade2)
        return result

    def __str__(self):
        return str(self.elements)

# Example usage
setA = FuzzySet({'a': 0.5, 'b': 0.3, 'c': 0.9})
setB = FuzzySet({'a': 0.7, 'b': 0.1, 'd': 0.4})

print("Set A:", setA)
print("Set B:", setB)
print("Union of A and B:", setA.union(setB))
print("Intersection of A and B:", setA.intersection(setB))
print("Complement of A:", setA.complement())
print("Is A a subset of B?", setA.is_subset(setB))
print("Cartesian Product of A and B:", setA.cartesian_product(setB))
14. Develop a program to maximize the function f(x) = x^2 using a Genetic Algorithm.
To maximize a function using a genetic algorithm (GA), we simulate the process of natural selection, where the fittest individuals are selected for reproduction in order to produce the offspring of the next generation. For the given function f(x) = x^2, we aim to find the value of x that maximizes this function over a given range, say between -10 and 10.

import numpy as np

def decode(binary, range_min, range_max, bits):
    """Decode a binary string to a decimal value within the specified range."""
    decimal = int(binary, 2)
    max_decimal = 2**bits - 1
    return range_min + (decimal / max_decimal) * (range_max - range_min)

def fitness(x):
    """Fitness function f(x) = x^2."""
    return x**2

def initialize_population(size, bits):
    """Generate an initial population of random binary strings."""
    return [''.join(np.random.choice(['0', '1']) for _ in range(bits)) for _ in range(size)]

def select(population, fitnesses, num_parents):
    """Select the individuals with the highest fitness scores."""
    return list(np.array(population)[np.argsort(fitnesses)[-num_parents:]])

def crossover(parent1, parent2):
    """Single-point crossover between two parents."""
    point = np.random.randint(1, len(parent1))
    return parent1[:point] + parent2[point:], parent2[:point] + parent1[point:]

def mutate(individual, mutation_rate):
    """Randomly flip individual bits."""
    individual = list(individual)
    for i in range(len(individual)):
        if np.random.rand() < mutation_rate:
            individual[i] = '1' if individual[i] == '0' else '0'
    return ''.join(individual)

def genetic_algorithm(range_min, range_max, bits=16, population_size=100,
                      generations=100, mutation_rate=0.01):
    """Run a genetic algorithm to maximize f(x)."""
    population = initialize_population(population_size, bits)
    best_solution = None
    best_fitness = float('-inf')

    for _ in range(generations):
        decoded = [decode(ind, range_min, range_max, bits) for ind in population]
        fitnesses = [fitness(x) for x in decoded]

        # Track the best solution found so far
        if max(fitnesses) > best_fitness:
            best_fitness = max(fitnesses)
            best_solution = decoded[np.argmax(fitnesses)]

        parents = select(population, fitnesses, population_size // 2)
        next_population = []

        # Generate offspring via crossover and mutation
        while len(next_population) < population_size:
            parent1, parent2 = np.random.choice(parents, 2, replace=False)
            child1, child2 = crossover(parent1, parent2)
            next_population.extend([mutate(child1, mutation_rate),
                                    mutate(child2, mutation_rate)])

        population = next_population

    return best_solution, best_fitness

# Parameters
RANGE_MIN = -10
RANGE_MAX = 10
BITS = 16  # Number of bits per individual

# Run the genetic algorithm
best_x, best_f = genetic_algorithm(RANGE_MIN, RANGE_MAX, BITS)
print(f"Best x found: {best_x}")
print(f"Best f(x) = x^2 found: {best_f}")
