Non-Dominated Sorting Genetic Algorithm 2 (NSGA-II)
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is a widely used algorithm for multi-objective optimization. It is renowned for its efficiency in handling large populations and its ability to maintain diversity among solutions. NSGA-II utilizes a fast non-dominated sorting approach, elitism, and a crowding distance mechanism to ensure a well-distributed Pareto front.
This article will explore the foundational concepts of genetic algorithms and multi-objective optimization, emphasizing the significance of NSGA-II.
Understanding Multi-Objective Optimization
Multi-objective optimization involves the simultaneous optimization of two or more conflicting objectives. Unlike single-objective optimization, where the focus is on finding a single best solution, multi-objective optimization aims to find a set of solutions that represent the best trade-offs among the objectives. These solutions form what is known as the Pareto front: the set of solutions for which no objective can be improved without worsening at least one other objective.
Genetic algorithms can be adapted for multi-objective optimization by modifying their selection mechanisms to account for multiple objectives. This adaptation often involves the use of non-dominated sorting and crowding distance techniques to ensure a diverse and well-distributed set of solutions along the Pareto front.
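To make the idea of dominance concrete, the following is a minimal sketch of a dominance check under a minimization convention (the dominates helper and the vector representation are illustrative, not part of the implementation shown later in this article, which negates its objectives and compares with > instead):
Python
# Illustrative sketch: Pareto-dominance check, assuming all objectives are minimized.
def dominates(p, q):
    # p dominates q if p is no worse in every objective
    # and strictly better in at least one.
    return all(pi <= qi for pi, qi in zip(p, q)) and \
           any(pi < qi for pi, qi in zip(p, q))

print(dominates([1.0, 2.0], [2.0, 3.0]))  # True: better in both objectives
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False: neither point dominates the other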
What is Non-Dominated Sorting Genetic Algorithm 2 (NSGA-II)?
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) was developed by Kalyanmoy Deb and his colleagues, first presented in 2000 and described in full in their 2002 IEEE Transactions on Evolutionary Computation paper. It was designed to address the limitations of its predecessor, NSGA, particularly in terms of computational efficiency and the ability to maintain diversity among solutions.
Key Concepts in NSGA-II
1. Non-dominated Sorting
Non-dominated sorting is a technique used to classify a population of solutions into different levels of Pareto fronts. A solution dominates another if it is at least as good in every objective and strictly better in at least one; a solution is non-dominated if no other solution in the population dominates it. The first front contains the non-dominated solutions, and subsequent fronts are formed by removing the previous fronts' solutions and finding the next set of non-dominated solutions. This sorting is crucial for identifying and preserving high-quality solutions in multi-objective optimization.
Process of Non-dominated Sorting
- Initialization: Assign each solution an initial rank and create an empty list for each front.
- Domination Check: For each solution, determine how many solutions dominate it and which solutions it dominates.
- Front Assignment: Solutions not dominated by any others are assigned to the first front. The process is repeated for subsequent fronts by removing already assigned solutions; a minimal sketch of this peeling process follows the list.
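As referenced above, here is a minimal sketch of the front-peeling process under a minimization convention (naive_non_dominated_sort is an illustrative name; the faster bookkeeping variant actually used by NSGA-II is implemented later in this article):
Python
# Illustrative sketch: naive non-dominated sorting by repeatedly peeling off
# the current set of non-dominated points (minimization convention).
def naive_non_dominated_sort(points):
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point stays in the current front if nothing else remaining dominates it.
        current = [i for i in remaining
                   if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(current)
        remaining = [i for i in remaining if i not in current]
    return fronts

print(naive_non_dominated_sort([(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)]))
# [[0, 1, 2], [3], [4]]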
2. Crowding Distance
Crowding distance is a measure used to maintain diversity among solutions within a Pareto front. It ensures that solutions are well-distributed across the objective space by favoring those in less crowded regions. This helps avoid clustering of solutions and provides a more comprehensive set of trade-off solutions.
Calculation of Crowding Distance
- Initialization: Set the crowding distance of all solutions to zero.
- Sorting: For each objective, sort the solutions based on their objective values.
- Boundary Assignment: Assign an infinite crowding distance to boundary solutions.
- Distance Calculation: For each solution, calculate the crowding distance based on the normalized difference in objective values of neighboring solutions, as captured by the formula below.
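Concretely, the crowding distance of an interior solution i in a front is the sum, over the M objectives, of the normalized gap between its two neighbors in the ordering by each objective, with boundary solutions assigned an infinite distance:

$$d_i = \sum_{m=1}^{M} \frac{f_m^{(i+1)} - f_m^{(i-1)}}{f_m^{\max} - f_m^{\min}}$$

where $f_m^{(i+1)}$ and $f_m^{(i-1)}$ are the objective-m values of the solutions adjacent to i when the front is sorted by objective m, and $f_m^{\max} - f_m^{\min}$ normalizes each objective's range.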
3. Fast Non-dominated Sorting Algorithm
The fast non-dominated sorting algorithm efficiently sorts the population into Pareto fronts, reducing the computational complexity from the O(MN³) of the original NSGA to O(MN²), where M is the number of objectives and N is the population size. The steps involved are:
- Initialize Fronts: Create an empty list for each Pareto front.
- Dominance Count: For each solution, count the number of solutions that dominate it and list the solutions it dominates.
- First Front Assignment: Assign solutions with zero dominance count to the first front.
- Subsequent Fronts: For each solution in the current front, reduce the dominance count of the solutions it dominates. If any solution's dominance count becomes zero, assign it to the next front.
- Repeat: Continue the process until all solutions are assigned to a front.
Working of NSGA-II
The Non-Dominated Sorting Genetic Algorithm II (NSGA-II) is a widely used algorithm for solving multi-objective optimization problems. The following steps outline its working mechanism:
Step 1: Initialization of Population
- Randomly generate an initial population of individuals.
- Each individual is represented by a chromosome, a vector of decision variables.
Step 2: Evaluation of Fitness Functions
- Evaluate the fitness of each individual based on multiple objective functions.
- Determine the objective values to be minimized or maximized for each individual.
Step 3: Non-Dominated Sorting
- Sort the population into different non-domination levels (fronts).
- The first front consists of individuals that are not dominated by any other individuals.
- The second front consists of individuals dominated only by those in the first front, and so on.
Step 4: Calculation of Crowding Distance
- Calculate the crowding distance for each individual within each front.
- The crowding distance measures the density of individuals surrounding a particular individual in the objective space.
- Individuals with larger crowding distances are preferred to promote diversity.
Step 5: Selection of Parents for Crossover
- Use binary tournament selection to choose parents for crossover (a minimal sketch of this comparison follows the list):
- Randomly select two individuals.
- Compare their ranks (non-domination levels).
- Select the individual with the better (lower) non-domination rank.
- If both belong to the same front, select the one with the larger crowding distance.
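Here is a minimal sketch of this crowded-comparison tournament (the names tournament_select, rank, and crowd are illustrative; the demo implementation later in this article pairs parents uniformly at random instead):
Python
import random

# Illustrative sketch: binary tournament with the crowded-comparison operator.
# rank[i] is individual i's front index; crowd[i] is its crowding distance.
def tournament_select(population, rank, crowd):
    i = random.randrange(len(population))
    j = random.randrange(len(population))
    if rank[i] != rank[j]:
        # Lower front index (better non-domination rank) wins.
        return population[i] if rank[i] < rank[j] else population[j]
    # Same front: prefer the less crowded individual (larger crowding distance).
    return population[i] if crowd[i] >= crowd[j] else population[j]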
Step 6: Application of Crossover and Mutation Operators
- Crossover:
- Perform crossover on selected parents to generate offspring.
- Exchange portions of chromosomes between parents to create new individuals.
- Mutation:
- Apply mutation to offspring by introducing small random changes in their chromosomes.
- Maintain genetic diversity and explore new areas of the solution space.
Step 7: Creation of the Next Generation
- Combine the parent and offspring populations to form an intermediate population.
- Perform non-dominated sorting on the combined population.
- Select the best individuals based on non-domination rank and crowding distance to form the next generation.
Step 8: Termination Criteria
- Repeat the process of selection, crossover, mutation, and generation update until a termination criterion is met.
- Common termination criteria include a fixed number of generations, a convergence threshold, or a maximum computational budget.
- The final population contains the Pareto-optimal solutions representing the best trade-offs among the objectives.
Implementing NSGA-II in Python
The following code demonstrates the implementation of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) in Python. We will break down the code into steps for better understanding.
Step 1: Import Libraries
Import the necessary libraries for mathematical calculations, random number generation, and plotting.
Python
import math
import random
import matplotlib.pyplot as plt
Step 2: Define Objective Functions
Define the objective functions that we aim to optimize. Here, we maximize two concave functions, f1(x) = -x² and f2(x) = -(x - 2)², whose individual optima lie at x = 0 and x = 2 respectively, so the Pareto-optimal trade-offs fall in the interval [0, 2].
Python
# First function to optimize
def function1(x):
    return -x**2


# Second function to optimize
def function2(x):
    return -(x - 2)**2
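Evaluating the two extremes illustrates the conflict between the objectives: each x value that maximizes one function penalizes the other (a quick check, assuming the definitions above):
Python
# x = 0 maximizes function1 but not function2; x = 2 does the opposite.
print(function1(0), function2(0))  # 0 -4
print(function1(2), function2(2))  # -4 0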
Step 3: Helper Functions
Define helper functions to find the index of a value, sort by values, and perform non-dominated sorting.
Python
# Function to find the index of a value in a list (-1 if absent)
def index_of(a, lst):
    try:
        return lst.index(a)
    except ValueError:
        return -1


# Function to sort the members of list1 in ascending order of their values
def sort_by_values(list1, values):
    sorted_list = []
    values_copy = values[:]
    while len(sorted_list) != len(list1):
        min_index = index_of(min(values_copy), values_copy)
        if min_index in list1:
            sorted_list.append(min_index)
        values_copy[min_index] = math.inf
    return sorted_list
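For example, given the indices of a front and the full list of objective values, sort_by_values returns the front members in ascending order of their values (a quick check, assuming the definitions above):
Python
print(sort_by_values([0, 1, 2], [5.0, 1.0, 3.0]))  # [1, 2, 0]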
Step 4: Fast Non-Dominated Sorting
Implement the fast non-dominated sorting algorithm to classify the population into different Pareto fronts.
Python
# Function to carry out NSGA-II's fast non-dominated sort
# (both objectives are maximized here, since the demo objectives were negated)
def fast_non_dominated_sort(values1, values2):
    S = [[] for _ in range(len(values1))]   # S[p]: solutions dominated by p
    n = [0 for _ in range(len(values1))]    # n[p]: count of solutions dominating p
    rank = [0 for _ in range(len(values1))]
    front = [[]]
    for p in range(len(values1)):
        for q in range(len(values1)):
            # p dominates q: no worse in both objectives, strictly better in one
            if (values1[p] > values1[q] and values2[p] > values2[q]) or \
               (values1[p] >= values1[q] and values2[p] > values2[q]) or \
               (values1[p] > values1[q] and values2[p] >= values2[q]):
                S[p].append(q)
            # q dominates p
            elif (values1[q] > values1[p] and values2[q] > values2[p]) or \
                 (values1[q] >= values1[p] and values2[q] > values2[p]) or \
                 (values1[q] > values1[p] and values2[q] >= values2[p]):
                n[p] += 1
        if n[p] == 0:
            rank[p] = 0
            if p not in front[0]:
                front[0].append(p)
    i = 0
    while front[i]:
        Q = []
        for p in front[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    rank[q] = i + 1
                    if q not in Q:
                        Q.append(q)
        i += 1
        front.append(Q)
    del front[-1]  # the final appended front is always empty
    return front
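A quick sanity check on three hand-picked points (recall that both objectives are maximized here): points 0 and 1 do not dominate each other, and both dominate point 2:
Python
f1 = [3.0, 2.0, 1.0]
f2 = [1.0, 2.0, 0.5]
print(fast_non_dominated_sort(f1, f2))  # [[0, 1], [2]]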
Step 5: Crowding Distance Calculation
Calculate the crowding distance to maintain diversity within each Pareto front.
Python
# Function to calculate crowding distance within one front
# (distances are indexed by sorted position, a simplification of the
# canonical per-solution bookkeeping from the NSGA-II paper)
def crowding_distance(values1, values2, front):
    distance = [0 for _ in range(len(front))]
    sorted1 = sort_by_values(front, values1[:])  # front members sorted by objective 1
    sorted2 = sort_by_values(front, values2[:])  # front members sorted by objective 2
    distance[0] = distance[-1] = float('inf')    # boundary solutions are always kept
    for k in range(1, len(front) - 1):
        distance[k] += (values1[sorted1[k + 1]] - values1[sorted1[k - 1]]) / (max(values1) - min(values1))
        distance[k] += (values2[sorted2[k + 1]] - values2[sorted2[k - 1]]) / (max(values2) - min(values2))
    return distance
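A small check on a three-point front (assuming the definitions above): the boundary points receive infinite distance, while the middle point accumulates one full normalized range per objective:
Python
# Point 1 lies between points 0 and 2 in both objectives.
vals1 = [1.0, 2.0, 4.0]
vals2 = [5.0, 3.0, 1.0]
print(crowding_distance(vals1, vals2, [0, 1, 2]))  # [inf, 2.0, inf]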
Step 6: Genetic Operators
Define the crossover and mutation operators used to generate offspring. The crossover here arithmetically combines two parents into a single child, and the mutation replaces a solution with a uniformly random point in the search range with probability 0.1.
Python
# Function to carry out the crossover: arithmetically combine two parents
# into one offspring, then pass it through mutation
def crossover(a, b):
    r = random.random()
    if r > 0.5:
        return mutation((a + b) / 2)
    else:
        return mutation((a - b) / 2)


# Function to carry out the mutation operator: with probability 0.1, replace
# the solution with a uniformly random point in [min_x, max_x]
# (min_x and max_x are defined in the main program below)
def mutation(solution):
    mutation_prob = random.random()
    if mutation_prob < 0.1:
        solution = min_x + (max_x - min_x) * random.random()
    return solution
Step 7: Initialization
Initialize the population with random values within a specified range.
Python
# Main program starts here
pop_size = 20
max_gen = 50
# Initialization
min_x = -55
max_x = 55
solution = [min_x + (max_x - min_x) * random.random() for _ in range(pop_size)]
gen_no = 0
# Tracking progress for visualization
progress = []
Step 8: NSGA-II Algorithm Loop
Run the NSGA-II algorithm for a specified number of generations.
Python
while gen_no < max_gen:
    # Evaluate the current population
    function1_values = [function1(solution[i]) for i in range(pop_size)]
    function2_values = [function2(solution[i]) for i in range(pop_size)]
    non_dominated_sorted_solution = fast_non_dominated_sort(function1_values[:], function2_values[:])
    print(f"The best front for Generation number {gen_no} is")
    for value in non_dominated_sorted_solution[0]:
        print(round(solution[value], 3), end=" ")
    print("\n")

    # Store progress for visualization
    progress.append((function1_values, function2_values))

    crowding_distance_values = []
    for i in range(len(non_dominated_sorted_solution)):
        crowding_distance_values.append(crowding_distance(function1_values[:], function2_values[:], non_dominated_sorted_solution[i][:]))

    # Generate offspring until the combined population is twice the original size
    solution2 = solution[:]
    while len(solution2) != 2 * pop_size:
        a1 = random.randint(0, pop_size - 1)
        b1 = random.randint(0, pop_size - 1)
        solution2.append(crossover(solution[a1], solution[b1]))

    # Evaluate and sort the combined parent + offspring population
    function1_values2 = [function1(solution2[i]) for i in range(2 * pop_size)]
    function2_values2 = [function2(solution2[i]) for i in range(2 * pop_size)]
    non_dominated_sorted_solution2 = fast_non_dominated_sort(function1_values2[:], function2_values2[:])
    crowding_distance_values2 = []
    for i in range(len(non_dominated_sorted_solution2)):
        crowding_distance_values2.append(crowding_distance(function1_values2[:], function2_values2[:], non_dominated_sorted_solution2[i][:]))

    # Environmental selection: fill the next generation front by front,
    # taking solutions with larger crowding distance first within each front
    new_solution = []
    for i in range(len(non_dominated_sorted_solution2)):
        non_dominated_sorted_solution2_1 = [index_of(non_dominated_sorted_solution2[i][j], non_dominated_sorted_solution2[i]) for j in range(len(non_dominated_sorted_solution2[i]))]
        front22 = sort_by_values(non_dominated_sorted_solution2_1[:], crowding_distance_values2[i][:])
        front = [non_dominated_sorted_solution2[i][front22[j]] for j in range(len(non_dominated_sorted_solution2[i]))]
        front.reverse()  # largest crowding distance first
        for value in front:
            new_solution.append(value)
            if len(new_solution) == pop_size:
                break
        if len(new_solution) == pop_size:
            break
    solution = [solution2[i] for i in new_solution]
    gen_no += 1
Step 9: Visualization
Plot the final front and the progress over generations to visualize the optimization process.
Python
# Evaluate and plot the final front
function1_values = [function1(solution[i]) for i in range(pop_size)]
function2_values = [function2(solution[i]) for i in range(pop_size)]
plt.xlabel('Function 1', fontsize=15)
plt.ylabel('Function 2', fontsize=15)
plt.title('Final Front')
plt.scatter(function1_values, function2_values)
plt.show()

# Visualize the progress over generations
for gen, (f1_vals, f2_vals) in enumerate(progress):
    plt.figure(figsize=(10, 6))
    plt.scatter(f1_vals, f2_vals)
    plt.xlabel('Function 1', fontsize=15)
    plt.ylabel('Function 2', fontsize=15)
    plt.title(f'Generation {gen}')
    plt.show()
Output: the script prints the best front at each generation, then shows a scatter plot of the final Pareto front followed by one plot per generation illustrating how the population converges toward the front.
Advantages of NSGA-II
- Efficiency and Speed: Fast non-dominated sorting and elitism improve computational efficiency and solution quality.
- Diverse Solutions: Crowding distance ensures a well-distributed set of Pareto-optimal solutions.
- Flexibility: Applicable to a wide range of multi-objective problems across various domains.
- No Sharing Parameter: Simplifies implementation by using crowding distance instead of a sharing parameter.
- Wide Adoption: Proven performance and extensive use in academia and industry.
- Elitist Strategy: Preserves the best solutions, accelerating convergence.
- Constraint Handling: Effectively manages constraints, prioritizing feasible solutions.
Limitations of NSGA-II
- Computational Complexity: High computational cost and memory usage for large populations or many objectives.
- Parameter Sensitivity: Performance depends on appropriate settings for crossover, mutation rates, and population size.
- Lack of Adaptation: Fixed parameters may not perform well in dynamic environments.
- High-Dimensional Challenges: Difficulty in maintaining diversity and efficient convergence in problems with many objectives.
Applications of NSGA-II
- Engineering Design: Optimizes structural and automotive designs for multiple criteria like strength, weight, and cost.
- Finance: Portfolio optimization balancing risk and return.
- Scheduling: Efficient scheduling in manufacturing and logistics.
- Environmental Management: Balances economic and ecological objectives in resource management.
- Network Design: Optimizes network topologies for performance and reliability.
Conclusion
NSGA-II is a powerful tool for multi-objective optimization, known for its efficiency, ability to maintain diversity, and flexibility across various domains. While it has some limitations, such as computational complexity and parameter sensitivity, its advantages and wide range of applications make it a leading algorithm in the field of evolutionary multi-objective optimization.