Salah Article Published
https://doi.org/10.1007/s11227-024-06614-8
Abstract
Community detection is crucial for understanding the structure and function of bio-
logical, social, and technological systems. This paper presents a novel algorithm,
fast local move iterated greedy (FLMIG), which enhances the Louvain Prune heuris-
tic using an iterated greedy (IG) framework to maximize modularity in non-overlap-
ping communities. FLMIG combines efficient local optimization from the fast local
move heuristic with iterative refinement through destruction and reconstruction
phases. A key refinement step ensures that detected communities remain internally
connected, addressing limitations of previous methods. The algorithm is scalable,
parameter-light, and performs efficiently on large networks. Comparative evalu-
ations against state-of-the-art methods, such as Leiden, iterated carousel greedy,
and Louvain Prune algorithms, show that FLMIG delivers statistically comparable
results with lower computational complexity. Extensive experiments on synthetic
and real-world networks confirm FLMIG’s ability to detect high-quality communi-
ties while maintaining robust performance across various network sizes, particularly
improving modularity and execution time in large-scale networks.
* Lyazid Toumi
[email protected]; [email protected]
Salaheddine Taibi
[email protected]
Salim Bouamama
[email protected]
1 Department of Computer Science, University Sétif 1 - Ferhat ABBAS, 19000 Sétif, Algeria
2 Mechatronics Laboratory (LMETR), Optics and Precision Mechanics Institute, University Sétif 1 - Ferhat ABBAS, 19000 Sétif, Algeria
182 Page 2 of 39 S. Taibi et al.
1 Introduction
2 Related works
2.1 Problem definition
Let G = (V, E) be an undirected, unweighted graph with vertex set V, where n = |V|, and edge set E, where m = |E|. An edge (i, j) ∈ E exists if and only if there is a connection between vertex i and vertex j. The graph G can be represented by an adjacency matrix A ∈ {0, 1}^{n×n}, where A_{ij} = 1 if there is an edge (i, j) ∈ E and A_{ij} = 0 otherwise. Let N_i = {j | (i, j) ∈ E} be the set of neighbors of vertex i and let d_i = |N_i| = Σ_{j=1}^{n} A_{ij} be the degree of i. The community detection problem with respect to G asks for a partition C = {C_1, C_2, …, C_k} such that ⋃_{r=1}^{k} C_r = V and all subsets of C are pairwise disjoint, that is, for all 1 ≤ i < j ≤ k it holds that C_i ∩ C_j = ∅.
The commonly used metric in the context of community detection for evaluating the quality of a given partition C is the modularity function [38], known as the Newman–Girvan modularity. It takes values between -1 and 1 and can be formally defined as:

Q(C) = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{d_i d_j}{2m} \right) \delta(C_i, C_j) \quad (1)
In the above equation, Aij is the element in the adjacency matrix A of the graph G. di
and dj are the degrees of vertices i and j, respectively. m is the total number of edges
in the graph. 𝛿(Ci , Cj ) is the Kronecker delta function [42], which is 1 if Ci = Cj (i.e.,
vertices i and j belong to the same community), and 0 otherwise.
This metric measures the quality of network community divisions by comparing
edge density within communities to that of a random graph. Higher modularity val-
ues indicate stronger, more meaningful community structures. Several variations of
the modularity function have been proposed in the literature; for further details, refer
to [24].
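As an illustration, Eq. (1) can be computed directly from the adjacency matrix; the following sketch (our own helper, not the authors' code) assumes an undirected, unweighted graph:

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity Q(C) of Eq. (1).

    A           : symmetric 0/1 adjacency matrix (numpy array)
    communities : dict mapping vertex index -> community label
    """
    m = A.sum() / 2.0          # total number of edges
    d = A.sum(axis=1)          # vertex degrees d_i
    n = A.shape[0]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:   # Kronecker delta term
                q += A[i, j] - d[i] * d[j] / (2.0 * m)
    return q / (2.0 * m)
```

For two disjoint triangles split into their two natural communities, this yields Q = 0.5.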
Consequently, the goal of the community detection problem is to find a parti-
tion of V (also known as graph clustering) into subsets (or communities) such that
the resulting partition optimizes the modularity measure. However, the problem of
maximizing modularity has been proved to be NP-hard, both in the general case and
with the restriction to cuts [9].
2.2.1 Heuristic methods
A key advantage of this algorithm is that it does not require prespecifying the number
of communities. However, due to its greedy approach, the algorithm may converge
to local optima, potentially resulting in suboptimal community partitions.
An improvement of this greedy approach is the Louvain algorithm [4], which also relies on modularity but proceeds as a two-phase iterative process; it often runs in nearly linear time O(m), and its performance can depend on the order in which nodes are visited. In the first phase, each node in a graph is initially treated as its own community.
own community. The algorithm calculates the initial modularity and then attempts to
move each node to a neighboring community if it increases modularity. If the modu-
larity gain is zero or negative, the node remains in its original community. This pro-
cess repeats until no further modularity improvements are possible. The second phase,
known as graph reduction or rebuilding, involves aggregating communities formed in
the first phase into new nodes (supernodes). This process is repeated iteratively until no
further increase in the network’s modularity can be achieved. The algorithm scales well
with the number of edges, making it suitable for very large networks.
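For reference, a stock Louvain run is readily available off the shelf, e.g., in networkx (assuming networkx ≥ 2.8, which provides `louvain_communities`):

```python
import networkx as nx

# Zachary's karate club, a classic community detection benchmark
G = nx.karate_club_graph()
# the two Louvain phases (local moving + aggregation) are iterated internally
communities = nx.community.louvain_communities(G, seed=42)
Q = nx.community.modularity(G, communities)
```

The returned `communities` form a partition of the node set, and `Q` is the modularity of Eq. (1) for that partition.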
Liu and Murata [28] developed a modularity-specialized label propagation algo-
rithm (LPAm) and improved it by incorporating a multi-step greedy agglomerative
algorithm (MSG) that merges multiple pairs of communities simultaneously. They
demonstrated that the enhanced version, called the advanced modularity-specialized
label propagation algorithm (LPAm+), outperforms the original LPAm.
Waltman et al. [51] introduced the smart local moving (SLM) algorithm, a modu-
larity-based community detection method for large networks. While it is similar to the
Louvain algorithm, there is a key difference: Before constructing a reduced network,
each community in the current structure is first treated as a subnetwork. Subsequently,
the local moving heuristic is applied to identify new communities within each subnet-
work. To begin, each node in the subnetwork is assigned to its own singleton commu-
nity, after which the heuristic optimizes the modularity function for that subnetwork.
Traag [48] proposed reducing the theoretical runtime of the Louvain algorithm
from O(m) to O(n log k), where k is the average degree, by moving nodes to a ran-
dom neighboring community during the first phase instead of the best neighbor-
ing community. This adjustment, known as the random neighbor move (RNM),
enhances the algorithm’s flexibility in exploring various solutions, though it may
slightly compromise the quality of the final result.
Ozaki et al. [40] presented the Louvain Prune algorithm, which reduces the runt-
ime of the Louvain algorithm by restricting the number of neighboring communi-
ties considered during the modularity optimization step in the first phase rather than
evaluating all possible neighboring communities. Experimental results demonstrate
that the Louvain Prune algorithm, leveraging the fast local move (FLM) operation,
can speed up the Louvain algorithm by up to tenfold.
Another significant advancement over the Louvain method is the Leiden algorithm [49], which addresses some of Louvain's key limitations and benefits from several prior enhancements by integrating techniques such as SLM [51], FLM [40], and
RNM [48]. The Louvain algorithm can sometimes generate poorly connected com-
munities, containing multiple disconnected subgraphs within a single community.
In contrast, the Leiden algorithm incorporates a refinement phase that ensures all
detected communities are internally well connected. Despite these additional steps, the Leiden algorithm remains faster than the Louvain algorithm in practice.
2.2.2 Metaheuristic methods
Furthermore, to reduce computing time and enhance the accuracy of MSIG [44],
Li et al. [25] integrated the Louvain algorithm [4] into the IG framework. This inte-
gration enabled the Louvain algorithm to reconstruct the partially destroyed solution
during the reconstruction phase. Additionally, a local search procedure was applied
after the reconstruction phase to further improve the algorithm’s performance.
Liu et al. [27] presented an iterated local search (ILS) algorithm that generates
an initial feasible solution with GCP [44] and performs local search using a modu-
larity-specialized label propagation algorithm from [28]. In this study, the authors
showed that, compared to MAGA-Net [26], the Louvain algorithm [4], and DBA
[45], ILS exhibits outstanding performance in detecting communities.
A whale optimization-based community detection algorithm (WOCDA) was
proposed by Zhang et al. [56] to discover communities in synthetic and real-world
networks. Three optimization operations, shrinking encircling, spiral updating, and
random searching, are designed for community detection. Experimental results dem-
onstrated that WOCDA can discover more accurate partitions than DPSO [10] and
BA [20], although it requires more computation time. Additionally, the efficiency of
WOCDA decreases as the number of nodes grows, due to the random search opera-
tion becoming more time-consuming.
In conclusion, while the metaheuristic approaches available in the literature for
detecting communities in complex networks have shown experimental effectiveness,
most of them require the adjustment of several parameters and are not highly scal-
able for large networks due to the considerable computing time needed. Therefore,
there is still room for improvement in their performance on real-world networks.
3 Proposed approach
Our proposed approach, named fast local move iterated greedy (FLMIG), is built
upon the iterated greedy (IG) framework [19] to address the community detection
problem. The IG algorithm was first introduced by Ruiz and Stützle [43] for address-
ing permutation flow shop scheduling problems. Since then, IG methods have dem-
onstrated considerable success in solving a wide range of NP-hard combinatorial
optimization problems [31], including the community detection problem [22, 25,
44] and other graph-based problems [6–8, 11].
IG is a straightforward stochastic iterative process, known for its minimal control
parameters and efficiency without requiring significant problem-specific knowledge,
unlike more complex heuristic methods. It starts with an initial solution and alter-
nates between two key phases: destruction and reconstruction.
Algorithm 1 provides a pseudo-code outline of the main steps of the FLMIG approach for solving the community detection problem. It operates as follows: it begins by seeding the algorithm with an initial solution generated by the Generate_initial_solution procedure, which is based on fast local move (FLM) [40]. This initial solution serves as
a starting point for further improvement through iterative cycles of destruction and
reconstruction phases. These cycles continue until a specific termination criterion is
achieved.
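The overall control flow of Algorithm 1 can be sketched generically as follows (the callable names mirror the procedure names in the text but are our own placeholders, not the authors' exact interfaces):

```python
import random

def iterated_greedy(initial, destroy, reconstruct, accept, quality,
                    max_iter=100, seed=0):
    """Generic IG skeleton: start from an initial solution, then alternate
    destruction and reconstruction, tracking the best solution found."""
    rng = random.Random(seed)
    S = initial(rng)
    best = S
    for _ in range(max_iter):
        S_dest, removed = destroy(S, rng)           # destruction phase
        S_new = reconstruct(S_dest, removed, rng)   # reconstruction phase
        if quality(S_new) > quality(best):
            best = S_new                            # keep the incumbent
        S = accept(S, S_new, rng)                   # e.g. SA-like criterion
    return best
```

Any toy maximization problem can exercise the skeleton; FLMIG instantiates the callables with FLM-based construction, node removal, and the acceptance rule of Sect. 3.5.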
3.1 Generate_initial_solution procedure
Fig. 1 Graphical illustration of the different components of the FLMIG algorithm applied to the karate
club network
3.2 Fast_local_move procedure
The Fast_Local_Move procedure is part of the first phase of the Louvain Prune algo-
rithm [40], which has been shown to reduce computational time by up to 90% com-
pared to the original algorithm while preserving solution quality. We have adapted the
greedy heuristic to construct a complete solution, either from scratch or by rebuilding
from partially destroyed solutions. Furthermore, the method includes randomization to
promote greater diversity in the produced solutions. Algorithm 3 outlines the specific
steps involved in this process, which starts by initializing a set L containing all nodes
from V when generating the initial solution. In contrast, when the procedure is called to
rebuild a partially destroyed solution, L contains only the removed nodes.
It then iterates over the components of L in a randomized order until the set is empty
and no further improvements in modularity can be detected. At each iteration, a node v
is selected uniformly at random from L. The modularity gain ΔQ is calculated for each
possible move of v to any neighboring community that contains at least one node adja-
cent to v. The community Cbest, which yields the greatest increase in ΔQ, is chosen as
the new community for v (lines 7–10). After this, all neighbors of v that are neither in
Cbest nor already in L are added to L (lines 14–15). The modularity gain ΔQ(v, C) from
assigning node v to community C is formally defined as [4, 40]:
\Delta Q(v, C) = \frac{|N_v \cap C|}{m} - \frac{d_v \sum_{u \in C} d_u}{2m^2} \quad (2)

where |N_v ∩ C| represents the number of neighbors of node v that belong to community C, and Σ_{u∈C} d_u is the sum of degrees of all nodes in C.
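A minimal Python sketch of this fast-local-move pass (our own helper names and data layout, assuming an adjacency-list graph; not the authors' implementation) might look like:

```python
import random
from collections import defaultdict

def fast_local_move(adj, m, comm, L, seed=0):
    """Fast-local-move pass in the spirit of Algorithm 3: only nodes in the
    work list L are processed; when a node changes community, its neighbors
    outside the new community are pushed back onto L.

    adj  : dict node -> set of neighbors (undirected, unweighted)
    m    : total number of edges
    comm : dict node -> community label (modified in place)
    L    : iterable of nodes to (re)process
    """
    rng = random.Random(seed)
    deg = {v: len(adj[v]) for v in adj}
    tot = defaultdict(int)                 # sum of degrees per community
    for v in adj:
        tot[comm[v]] += deg[v]
    L = list(L)
    in_L = set(L)
    while L:
        v = L.pop(rng.randrange(len(L)))   # pick a node uniformly at random
        in_L.discard(v)
        links = defaultdict(int)           # |N_v ∩ C| per neighboring community
        for u in adj[v]:
            links[comm[u]] += 1
        cur = comm[v]
        tot[cur] -= deg[v]                 # temporarily remove v
        # Delta-Q of Eq. (2): |N_v ∩ C| / m - d_v * tot(C) / (2 m^2)
        best, best_gain = cur, links[cur] / m - deg[v] * tot[cur] / (2 * m * m)
        for c, l in links.items():
            g = l / m - deg[v] * tot[c] / (2 * m * m)
            if g > best_gain:
                best, best_gain = c, g
        tot[best] += deg[v]
        if best != cur:
            comm[v] = best
            for u in adj[v]:               # re-queue affected neighbors
                if comm[u] != best and u not in in_L:
                    L.append(u)
                    in_L.add(u)
    return comm
```

Because each accepted move strictly increases modularity and modularity is bounded, the loop terminates once L is exhausted.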
3.3 Destruction_solution procedure
3.4 Reconstruction_heuristic procedure
As detailed in algorithm 5, the reconstruction phase consists of two steps. The first
step involves applying the procedure Fast_local_move to the partially destructed
solution Sdest based on the set of removed nodes, denoted as Ldest , that contains the
nodes that were taken out of the original solution during the destruction phase.
Although this procedure rebuilds the solution to restore its quality, it can result
in locally disconnected communities [49], where some nodes within the same
community are not fully connected. Identifying and addressing these local dis-
connections are essential for improving the accuracy of any community detection
algorithm.
The second step applies an improved variant of the Louvain Prune algorithm [40], incorporating a refinement_procedure designed
to address badly connected communities. A community is considered well con-
nected if there is a path between any pair of nodes within it. So, this proce-
dure checks each community’s connectivity and, if necessary, partitions it into
well-connected subcommunities. Once the disconnected parts are identified, the
algorithm attempts to merge these subcommunities using a simple local move,
reassessing both modularity and connectivity. Table 1 shows which dataset the
refinement procedure could be applied to.
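The connectivity check at the heart of the refinement can be sketched as follows (a simplified illustration of the idea, assuming an adjacency-list graph; not the authors' exact procedure, which additionally attempts to re-merge the resulting subcommunities):

```python
def split_disconnected(adj, comm):
    """Split every community whose induced subgraph is not connected
    into its connected components.

    adj  : dict node -> set of neighbors
    comm : dict node -> community label
    returns a refined dict node -> new community label
    """
    groups = {}                              # community label -> member set
    for v, c in comm.items():
        groups.setdefault(c, set()).add(v)
    refined, label = {}, 0
    for members in groups.values():
        seen = set()
        for start in members:
            if start in seen:
                continue
            # DFS restricted to this community's induced subgraph
            stack, component = [start], set()
            while stack:
                v = stack.pop()
                if v in component:
                    continue
                component.add(v)
                stack.extend(u for u in adj[v]
                             if u in members and u not in component)
            seen |= component
            for v in component:              # each component gets its own label
                refined[v] = label
            label += 1
    return refined
```

A community whose members span two disconnected parts is thus replaced by two well-connected subcommunities.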
After the refinement step, communities are aggregated into supernodes, creat-
ing a smaller and more abstract version of the graph, which accelerates subse-
quent iterations. Aggregation compresses the graph by merging all nodes within
the same community into a single node (called a supernode), with edges between
communities updated based on the original graph’s connections. This process
builds a hierarchical structure of communities across different levels, ensuring
well-defined communities, even at larger scales.
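The aggregation step can be sketched as follows (our own minimal illustration; edge weights between supernodes are accumulated from the original graph, with each undirected edge visited from both endpoints):

```python
from collections import defaultdict

def aggregate(adj, comm):
    """Contract each community into a supernode.

    adj  : dict node -> set of neighbors (undirected, unweighted)
    comm : dict node -> community label
    returns dict supernode -> {supernode -> weight}; since every undirected
    edge is seen from both endpoints, an intra-community self-loop weight
    equals twice the number of internal edges.
    """
    agg = defaultdict(lambda: defaultdict(int))
    for v, nbrs in adj.items():
        for u in nbrs:
            agg[comm[v]][comm[u]] += 1
    return {c: dict(w) for c, w in agg.items()}
```

For two triangles joined by one bridge edge, the aggregated graph has two supernodes with self-loop weight 6 (three internal edges each) and an inter-supernode weight of 1 in each direction.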
The improved Louvain Prune algorithm iterates through these three phases,
adjusting the community structure at each hierarchical level, until no further
improvement in modularity is possible (Table 1).
Table 1  Applicability of the refinement procedure to each real-world dataset (✓ = applied, × = not applied)
Dataset       Refinement
Karate        ×
Dolphins      ×
Polbooks      ×
Football      ×
lesmis        ×
Adjnoun       ×
Jazz          ×
Metabolic     ×
NetScience    ×
PGP           ✓
as-22july06   ✓
com-DBLP      ✓
com-Amazon    ✓
Complex network community discovery using fast local move… Page 13 of 39 182
Table 2  Overview information of real-world networks
Class  Dataset  |V|  |E|  Avg degree  Avg CC
CSRP Karate [53] 34 78 4.588 0.588
Dolphins [32] 62 160 5.129 0.2859
Football [17] 115 613 10.661 0.4033
Jazz [18] 198 2742 27.70 0.633
CMRP Polbooks [37] 105 441 8.4 0.4875
lesmis [21] 77 254 6.597 0.736
Adjnoun [37] 112 425 7.589 0.190
Metabolic [14] 453 2040 8.940 0.655
CLRP NetScience [37] 1589 2742 3.451 0.6377
PGP [5] 10680 24316 4.5 0.2659
as-22july06 [35] 22963 48436 4.21 0.2304
DBLP [46] 317080 1049866 5.530 0.732
Amazon [46] 334863 925872 6.622 0.430
3.5 Select_next_solution procedure
After a new solution is reconstructed, the algorithm decides whether to accept the
new solution or continue with the current solution for the next iteration. This deci-
sion is based on a simulated annealing-like criterion [47, 50]. Even if the new solu-
tion is worse, it may still be accepted with a certain probability (especially in early
iterations) to avoid getting stuck in local optima. This is similar to the simulated
annealing technique, which combines both probabilistic acceptance and gradual
cooling (temperature reduction) to enhance the quality of solutions through con-
trolled exploration.
A new solution S∗ is accepted if it has higher modularity than the current solution S, i.e., Q(S∗) > Q(S). If S∗ is not better, it can still be accepted with a probability given by:
p(T, S, S^*) = \exp\left( \frac{Q(S^*) - Q(S)}{T} \right) \quad (3)
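A minimal sketch of this acceptance rule (our own helper; the temperature schedule used by FLMIG is not reproduced here):

```python
import math
import random

def accept(Q_new, Q_cur, T, rng=random):
    """SA-like acceptance of Eq. (3): improvements are always accepted;
    a worse solution is accepted with probability exp((Q_new - Q_cur) / T)."""
    if Q_new > Q_cur:
        return True
    return rng.random() < math.exp((Q_new - Q_cur) / T)
```

As T is gradually reduced over the iterations, worse solutions are accepted less and less often, shifting the search from exploration to exploitation.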
4 Experimental evaluation
All tests were conducted on an Intel Core i5-13600KF CPU clocked at 5.1 GHz with 16 GB of RAM, running Ubuntu 22.04 LTS. We implemented the fast local move iterated greedy (FLMIG) algorithm in Python.1 The Leiden (LDN) algorithm was run using the leidenalg Python package,2 the Louvain Prune (LVNP) algorithm using a Python implementation,3 and the iterated carousel greedy (ICG) algorithm using a Python implementation.4
4.2 Problem instances
1 https://github.com/salahinfo/Iterated-greedy-algorithm-for-community-detction.git
2 https://github.com/vtraag/leidenalg
3 https://github.com/salahinfo/Prune_louvain-algorithm.git
4 https://github.com/salahinfo/Iterated-greedy-algorithm-for-community-detction.git
Fig. 2 (a) Impact of 𝛽 values on the modularity value. (b) Impact of 𝛽 values on the computational time
4.3 Algorithm tuning
The optimal performance of the FLMIG algorithm relies heavily on the effective tuning of the parameters 𝛽 and 𝜖. The source code of the FLMIG algorithm is available at https://github.com/salahinfo/FLMIG_algorithm.
To determine suitable values, we conducted comprehensive experiments on multiple real-world networks. This approach was crucial in identifying the most favorable values for 𝛽, and we observed that the computational outcomes were consistent across multiple networks. An illustrative example is the Lesmis network, where the impact of various 𝛽 values (ranging from 0.1 to 0.9) on both modularity and computational time was thoroughly examined. To assess its influence, we recorded the average modularity for each 𝛽 configuration.
A set of experiments was conducted to analyze the efficiency of the FLMIG algorithm compared to the LDN, LVNP, and ICG algorithms on the CSRP, CMRP, and CLRP problem sets. For all algorithms, the best modularity value, the average modularity value, and the standard deviation of the modularity across 10 independent runs were reported.
Tables 6, 7, and 8 present the results for the problem sets CSRP, CMRP, and
CLRP, respectively. Each column lists the dataset name, and for each algorithm, the
table displays the best (maximum) Q value, the average Q value, and the standard
deviation of Q across 10 independent runs. In each table, the bold values represent
the best (maximum) values for both Best and Avg columns in each case. The last
row shows the execution time in seconds. At the bottom of the tables, the Kruskal–Wallis (K–W) test is employed to statistically compare the performance of the four algorithms across several datasets, using a significance level of 0.01. When
the test reveals significant differences, Dunn’s test with a Bonferroni correction
(DcB test) is then applied to identify which specific algorithms differ significantly in
performance. This approach provides clearer insights into how each algorithm ranks
in terms of performance across different datasets.
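With SciPy, this two-stage statistical protocol can be reproduced as follows (the modularity samples below are illustrative toy numbers, not the paper's results):

```python
from scipy.stats import kruskal

# modularity over 10 independent runs per algorithm (toy data for illustration)
flmig = [0.42, 0.41, 0.42, 0.43, 0.42, 0.41, 0.42, 0.42, 0.43, 0.42]
ldn   = [0.42, 0.42, 0.42, 0.42, 0.42, 0.42, 0.42, 0.42, 0.42, 0.42]
icg   = [0.38, 0.37, 0.39, 0.38, 0.38, 0.37, 0.38, 0.39, 0.38, 0.38]

# stage 1: Kruskal-Wallis H-test across all groups
H, p = kruskal(flmig, ldn, icg)
significant = p < 0.01
# stage 2 (only if significant): Dunn's test with Bonferroni correction,
# e.g. scikit_posthocs.posthoc_dunn([...], p_adjust='bonferroni'),
# identifies which specific pairs of algorithms differ
```

Here the ICG samples are clearly separated from the others, so the omnibus test rejects the null hypothesis of equal performance.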
Table 6 The class of smaller-size real-world problem set (CSRP): Performance indicators (Best, Avg,
and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best and Avg highlighted for each
case)
Algorithm Karate Dolphins Football Jazz
Table 6 compares the performance of the FLMIG against three algorithms, LDN,
LVNP, and ICG, on four smaller-size real-world problem sets (Karate, Dolphins,
Football, and Jazz). FLMIG consistently delivers strong results, obtaining the best
performance on Karate, Dolphins, Football, and Jazz datasets with minimal vari-
ability (low Std values). Its average values are close to the best, demonstrating sta-
ble performance. The execution time of FLMIG is quite low compared to the other
algorithms, making it an efficient choice. LDN matches FLMIG in terms of best
values for most datasets and performs consistently, as indicated by zero or near-zero
standard deviations. It slightly underperforms FLMIG in terms of average values but
remains highly competitive. However, LDN takes more time to compute, especially
on larger datasets like Jazz. LVNP offers reasonable performance but lags behind
FLMIG and LDN in both best and average results, particularly on the Karate and
Table 7 The class of moderate-size real-world problem set (CMRP): Performance indicators (Best,
Avg, and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best (maximum) and Avg
highlighted in bold for each case)
Algorithm Polbooks Lesmis Metabolic Adjnoun
Table 7 compares the performance of the four algorithms, FLMIG, LDN, LVNP,
and ICG, on moderate-size real-world problem sets (Polbooks, Lesmis, Meta-
bolic, Adjnoun). FLMIG shows strong results, consistently providing the best
performance across all datasets. For Polbooks, Lesmis, and Metabolic, it achieves
the best values, with minimal standard deviation, indicating stable performance.
However, on Adjnoun, it performs slightly worse in average and standard devia-
tion, showing more variability. Its execution time remains reasonably low across
all datasets. LDN performs nearly on par with FLMIG in Polbooks and Lesmis,
matching its best and average scores. However, in Metabolic and Adjnoun, it
underperforms slightly, with lower best and average scores, particularly in Meta-
bolic. Despite this, LDN remains stable (low Std) and maintains a low computa-
tional time, similar to FLMIG. LVNP lags behind the other algorithms in terms
Table 8 shows the performance of four algorithms, FLMIG, LDN, LVNP, and ICG
on large-size real-world problem sets (Netscience, PGP, as-22july06, Amazon,
and Dblp). FLMIG shows competitive results, achieving the best performance on
Netscience and Dblp. However, it falls short in PGP, as-22july06, and Amazon,
where LDN outperforms it. FLMIG’s average values are consistently close to
its best, indicating stability. However, its execution time increases dramatically
Table 8 The class of large-size real-world problem set (CLRP) results: Performance indicators (Best,
Avg, and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best (maximum) and Avg
highlighted in bold for each case)
Algorithm Netscience PGP as-22july06 Amazon Dblp
with the size of the dataset, particularly for Amazon and Dblp. LDN is the best-
performing algorithm overall, providing the best results in four out of five datasets
(PGP, as-22july06, Amazon, and Dblp). It outperforms FLMIG and LVNP in terms
of accuracy and offers reasonable execution times, significantly faster than FLMIG
in larger datasets like Amazon and Dblp. The algorithm also exhibits low standard
deviation, demonstrating reliable performance across runs. LVNP performs worse
than both FLMIG and LDN, especially in the as-22july06 and Amazon datasets,
where its best and average values are lower. However, it has the shortest execution
times in most datasets, making it a computationally efficient option, though with
a trade-off in accuracy. ICG is the weakest performer in terms of both best and
average values, across all datasets. Although it shows stability (low STD) on smaller
datasets, its execution times are exceedingly high, particularly for the larger datasets,
making it impractical for large-scale applications.
Experiments are extended (i.e., scaled up) to further analyze the effectiveness of FLMIG against LDN, LVNP, and ICG. As in the previous section, modularity is used as the quality measure. In these tests, the mixing parameter 𝜇 has been increased from 0.1 to 0.8 (in 0.1 increments, yielding a total of 8 different cases). Again, the best (maximum) modularity value, the average modularity value, and the standard deviation of the modularity across 10 independent runs were reported.
In each table, the bold values represent the best (maximum) values for both Best and
Avg columns in each case. These metrics offer a comprehensive view of the algo-
rithms’ effectiveness and stability. Other parameters are kept the same (see Sect. 4.3
for details). The results of the scalability experiments for the four problem sets CSSP-a, CSSP-b, CMSP, and CLSP are presented in Tables 9, 10, 11, and 12 (see Sect. 4.3 for table details). The additional row 𝜇 in the tables represents the value of the mixing parameter.
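These synthetic instances follow the standard LFR-style convention in which 𝜇 is the fraction of a node's edges that fall outside its own community. As an illustration (not the paper's exact generator or parameter settings), such benchmarks can be produced with networkx:

```python
import networkx as nx

# illustrative LFR parameters; the paper's exact settings may differ.
# mu controls the fraction of inter-community edges per node.
G = nx.LFR_benchmark_graph(
    n=250, tau1=3, tau2=1.5, mu=0.1,
    average_degree=5, min_community=20, seed=10,
)
# the generator attaches the ground-truth community to each node
truth = {frozenset(G.nodes[v]["community"]) for v in G}
```

Larger 𝜇 blurs the community structure, which is why all algorithms' modularity values fall as 𝜇 grows in Tables 9–12.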
Table 9 The class of smaller-size synthetic problem set (CSSP-a): Performance indicators (Best,
Avg, and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best (maximum) and Avg
highlighted in bold for each case)
Algorithm        0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8
FLMIG Best 0.839626 0.738568 0.643158 0.534936 0.441307 0.318925 0.253626 0.243817
Avg 0.839626 0.738568 0.643158 0.534936 0.440893 0.313826 0.249834 0.238976
Std 0 0 0 0 0.000929 0.005563 0.002396 0.002447
Time 0.1961 0.2113 0.2412 0.2783 0.3247 0.5701 0.8736 0.9277
LDN Best 0.839626 0.738568 0.643158 0.534936 0.441307 0.307815 0.245941 0.231467
Avg 0.839626 0.738568 0.643158 0.534936 0.441307 0.306736 0.239857 0.230813
Std 0 0 0 0 0 0.001393 0.004198 0.000230
Time 0.0577 0.0576 0.0600 0.0582 0.0644 0.0709 0.0769 0.0810
LVNP Best 0.839626 0.738568 0.643158 0.534936 0.441307 0.308713 0.244525 0.231606
Avg 0.839626 0.738568 0.643158 0.534936 0.438452 0.301291 0.236869 0.227867
Std 0 0 0 0 0.002809 0.005486 0.003760 0.002262
Time 0.1222 0.1300 0.1548 0.1779 0.2093 0.2431 0.2896 0.3339
ICG Best 0.824425 0.723015 0.615761 0.511867 0.428280 0.333609 0.237661 0.220880
Avg 0.819451 0.718738 0.612248 0.504382 0.421417 0.332860 0.231470 0.218145
Std 0.001748 0.002922 0.003079 0.004203 0.004609 0.000701 0.003160 0.001875
Time 100.0765 101.0765 102.0765 103.0765 104.0765 105.0765 106.0765 105.0870
K–W test  H-statistics  38.7097 37.9624 37.9624 37.9562 31.1414 30.3814 30.2007 35.6698
P-value 0 0 0 0 0.000001 0.000001 0.000001 0
DcB test  FLMIG-LDN   1 1 1 1 1 0.572372 0.057672 0.206096
FLMIG-LVNP  1 1 1 1 0.539340 0.024229 0.002499 0.001544
FLMIG-ICG   0.000002 0.000003 0.000003 0.000003 0.000037 0.122271 0 0
LDN-LVNP    1 1 1 1 0.148968 1 1 0.743016
LDN-ICG     0.000002 0.000003 0.000003 0.000003 0.000002 0.000402 0.030616 0.001544
LVNP-ICG    0.000002 0.000003 0.000003 0.000003 0.028530 0.000001 0.376811 0.206096
The performance of four algorithms, FLMIG, LDN, LVNP, and ICG, on a smaller
synthetic problem set (CSSP-a), is summarized in Table 9, using metrics such as
best result, average result, standard deviation, and execution time across eight differ-
ent mixing parameters 𝜇 (ranging from 0.1 to 0.8).
FLMIG consistently achieves the best outcomes, excelling in both best and
average scores. It performs well across all problem sizes, with zero standard devi-
ation until 𝜇 = 0.5, where slight variations begin to appear. However, FLMIG’s
execution time increases substantially as problem size grows, reaching up to
0.9277 s at the largest size. LDN shows similar performance to FLMIG in terms
of best and average scores, but its standard deviation becomes noticeable from
𝜇 = 0.6 onwards. Notably, LDN executes much faster than FLMIG, making it
more efficient, especially for smaller problems. LVNP follows a similar trend to
LDN, though its performance slightly diminishes with increasing problem size.
While its average scores are generally lower than LDN’s, its execution time is
faster than FLMIG’s but slower than LDN’s. The growing standard deviation
from 𝜇i = 0.5 onward suggests a decline in result stability as problems scale. ICG,
on the other hand, consistently performs worse than the others, with the lowest
best and average scores and higher standard deviations across most problem sizes.
Its execution time is significantly longer, exceeding 100 s for larger problems.
The K–W test reveals statistically significant differences among the algorithms
(P-value = 0), indicating unequal performance. The pairwise DcB test confirms these distinctions, showing that
FLMIG significantly outperforms ICG (P-value ≈ 0), while FLMIG and LDN are
more comparable. Nevertheless, the statistical tests suggest significant differences
Table 10 The class of smaller-size synthetic problem set (CSSP-b): Performance indicators (Best,
Avg, and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best (maximum) and Avg
highlighted in bold for each case)
Algorithm        0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8
FLMIG Best 0.858805 0.762107 0.665136 0.570323 0.474956 0.372765 0.278217 0.245098
Avg 0.858805 0.762081 0.665113 0.569743 0.473554 0.366934 0.276318 0.239118
Std 0 0.000066 0.000040 0.001038 0.000955 0.003463 0.001289 0.003618
Time 0.1971 0.2112 0.2396 0.2765 0.3313 0.4842 0.7575 0.8969
LDN Best 0.858805 0.762100 0.665136 0.570323 0.474956 0.372143 0.271915 0.234195
Avg 0.858805 0.762100 0.665057 0.570323 0.474821 0.372143 0.271915 0.233921
Std 0 0 0.000028 0 0.000142 0 0 0.000441
Time 0.0577 0.0576 0.0600 0.0582 0.0644 0.0709 0.0769 0.0810
LVNP Best 0.858805 0.762107 0.665136 0.570341 0.474574 0.369127 0.268105 0.229174
Avg 0.858805 0.762084 0.664660 0.569969 0.473473 0.364591 0.264248 0.225876
Std 0 0.000054 0.001107 0.000606 0.000856 0.003348 0.003923 0.002631
Time 0.1318 0.1519 0.1669 0.1987 0.2282 0.2896 0.3156 0.3337
ICG Best 0.855528 0.746994 0.649884 0.554922 0.462516 0.367022 0.271155 0.221206
Avg 0.855211 0.743145 0.646670 0.551989 0.459599 0.365536 0.268267 0.215962
Std 0.000219 0.002306 0.002722 0.002146 0.001871 0.001126 0.001700 0.002415
Time 100.0345 101.0345 102.0345 103.0345 104.0345 105.0345 106.0345 106.6540
K–W test  H-statistics  29.1648 26.0862 26.0862 26.3369 31.5534 21.6483 35.2654 36.6468
P-value 0.000002 0.000009 0.000009 0.000008 0.000001 0.000077 0 0
DcB test  FLMIG-LDN   1 0.507196 0.507196 0.421602 0.064538 0.037162 0.323349 0.330563
FLMIG-LVNP  1 1 1 1 1 1 0 0.000693
FLMIG-ICG   0.000026 0.000007 0.000007 0.009689 0.013338 1 0.000204 0
LDN-LVNP    1 1 1 0.966506 0.033525 0.000310 0.002173 0.316239
LDN-ICG     0.000107 0.000107 0.010614 0.000004 0 0.000365 0.159795 0.000810
LVNP-ICG    0.000097 0.000097 0.000929 0.002208 0.027224 1 1 0.360834
between all algorithm pairs, particularly for larger problem sizes, where ICG
consistently underperforms. In summary, FLMIG is the most accurate but also
the slowest algorithm, while LDN strikes a balance between speed and accuracy.
LVNP is slightly less stable, and ICG is the least effective overall. Figure 6
provides a graphical representation of Table 9.
Table 10 reports the best and average values, standard deviations (Std), and running time across the mixing parameter varying from 𝜇 = 0.1 to 𝜇 = 0.8. FLMIG per-
forms best overall, with higher scores for both the best and average metrics across
all problem sizes. Its performance is consistent, with a very low or zero standard
deviation, but its execution time increases as the problem size grows, reaching up
to 0.8969 s at 𝜇 = 0.8. FLMIG demonstrates a balance between accuracy and efficiency,
though it is slower compared to LDN and LVNP. LDN follows FLMIG closely
in terms of best and average performance. While it has lower standard deviations
than FLMIG in some cases, it is more efficient in terms of execution time, con-
sistently outperforming FLMIG in this regard. However, LDN’s best performance
falls slightly short of FLMIG’s, particularly in the larger problem sizes. LVNP
generally delivers slightly worse results than both FLMIG and LDN. The standard
deviation increases as the mixing parameter grows, suggesting reduced stability,
particularly from 𝜇 = 0.6 onwards. LVNP's execution time is faster than
FLMIG’s but slower than LDN’s, positioning it as a middle-ground option in
terms of efficiency. ICG underperforms compared to the other algorithms, with
lower best and average scores, particularly at the larger problem sizes. While it
is the least accurate, its stability is slightly better than LVNP’s in some cases, but
its execution time is significantly higher, exceeding 100 s for most problem sizes,
making it inefficient for larger problems.
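For context, the mixing parameter 𝜇 varied in these experiments is the fraction of each node's edges that point outside its own community: 𝜇 = 0.1 yields well-separated communities, while 𝜇 = 0.8 leaves almost no detectable structure. A minimal, illustrative computation of the empirical mixing parameter is sketched below (a dict-of-lists adjacency is assumed; this is not the paper's code):

```python
def mixing_parameter(adj, community_of):
    """Empirical mixing parameter of a partitioned undirected graph.

    adj          : dict mapping each node to a list of its neighbours
    community_of : dict mapping each node to a community label

    Returns the fraction of edge endpoints that cross community
    boundaries: close to 0 means well-separated communities, close
    to 1 means essentially no community structure.
    """
    external = total = 0
    for u, neighbours in adj.items():
        for v in neighbours:
            total += 1                 # each undirected edge is counted twice
            if community_of[u] != community_of[v]:
                external += 1          # endpoint of a boundary-crossing edge
    return external / total


# Two triangles joined by a single bridge edge (2-3):
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(mixing_parameter(adj, labels), 4))  # → 0.1429
```

At 𝜇 ≈ 0.14 the two triangles are still clearly distinguishable; the benchmarks above sweep this quantity up to 0.8, where community recovery becomes very hard.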
The K–W test indicates statistically significant differences between the
algorithms, as evidenced by the very low P-values (close to 0). The DcB test
further shows that FLMIG differs significantly from ICG, while its results are
statistically comparable to those of LDN and LVNP in most cases. In summary,
FLMIG emerges as the most
accurate algorithm, while LDN is the fastest with acceptable accuracy. LVNP
strikes a balance but becomes less stable at larger problem sizes, and ICG, though
relatively stable, is the least accurate and highly inefficient. Figure 7 provides a
graphical representation of Table 10.
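For reference, the K–W (Kruskal–Wallis) H statistic reported in these tables pools the runs of all algorithms, ranks them, and measures how far each algorithm's mean rank deviates from the overall mean rank; a large H (small P-value) rejects the hypothesis that all algorithms sample the same distribution. A compact pure-Python sketch of the statistic follows (midranks for ties, with the tie-correction factor omitted; in practice one would call scipy.stats.kruskal):

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples.

    Ties receive midranks; the usual tie-correction factor is omitted,
    which is harmless for effectively continuous scores such as
    modularity or NMI values.
    """
    pooled = sorted(chain.from_iterable(groups))
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0   # midrank of positions i+1 .. j
        i = j
    n = len(pooled)
    s = sum(sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3.0 * (n + 1)

# Well-separated samples yield a large H; identical samples yield H = 0.
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6]), 4))  # → 3.8571
```

The pairwise DcB entries in the tables are then Bonferroni-adjusted post-hoc p-values over the same pooled ranks, so a reported value of 1 indicates no detectable difference between that pair of algorithms.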
Table 11 presents the performance of the FLMIG algorithm against the competitor
algorithms LDN, LVNP, and ICG on a moderate-size synthetic problem set (CMSP).
The metrics assessed include best and average values, standard deviations (Std),
and running time across the mixing parameter varying from 𝜇 = 0.1 to 𝜇 = 0.8.
Each algorithm demonstrates different strengths and weaknesses based on per-
formance and computational efficiency; FLMIG delivers the best overall accu-
racy with both the highest best and average values. It remains stable across prob-
lem sizes, exhibiting very low standard deviations. However, its execution time
increases significantly as the problem size grows, making it the slowest of the
algorithms apart from ICG. FLMIG is best suited for scenarios where accuracy is
paramount, but it might not be optimal for time-sensitive applications. LDN provides
nearly identical performance to FLMIG in terms of best and average values, with
slightly lower accuracy but much faster execution times. Its standard deviations
are similarly low, reflecting consistent performance. LDN emerges as a balanced
solution, offering both accuracy and speed, especially for larger datasets where
FLMIG might be too slow. LVNP performs well on smaller problem sizes but
begins to lag behind FLMIG and LDN as the size increases. Its results show
higher standard deviations at larger problem sizes, indicating that its performance
becomes less stable. While faster than FLMIG, LVNP is not as efficient as LDN,
positioning it as a middle-ground algorithm. ICG, in contrast, struggles with both
accuracy and speed. Its best and average values are consistently lower than the
other algorithms, particularly at larger problem sizes. Additionally, ICG's
execution time exceeds 1000 s, making it highly impractical for larger
problems.
Statistical tests confirm that these performance differences are statistically
significant. The K–W test yields low P-values, indicating that the observed
differences are not random. The DcB test further illustrates that FLMIG outperforms
ICG in nearly all cases, while LDN and LVNP are more competitive with each
other, showing minimal differences in some instances. In conclusion, FLMIG is the
best choice for maximum accuracy, LDN offers an excellent balance between speed
Table 11 The class of moderate-size synthetic problem set (CMSP): Performance indicators (Best, Avg, and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best (maximum) and Avg highlighted in bold for each case)

Algorithm       0.1       0.2       0.3       0.4       0.5       0.6       0.7       0.8
FLMIG  Best     0.892135  0.793371  0.694420  0.594839  0.497151  0.399434  0.302507  0.252637
       Avg      0.892112  0.793355  0.694368  0.594748  0.497028  0.399127  0.300336  0.251290
and performance, LVNP can be suitable for smaller problem sizes, and ICG lags
significantly in both accuracy and speed, making it the least viable option. Figure 8
provides a graphical representation of Table 11.
Table 12 evaluates the performance of the FLMIG algorithm against the three algo-
rithms, LDN, LVNP, and ICG on a large-size synthetic problem set (CLSP). The
metrics assessed include best and average values, standard deviations (Std), and run-
ning time across the mixing parameter varying from 𝜇 = 0.1 to 𝜇 = 0.8.
FLMIG consistently performs well in terms of accuracy, delivering high best and
average values, especially at smaller problem sizes. However, its execution time
increases steadily as problem size grows, although it remains relatively manageable.
It maintains low standard deviations, indicating stable performance across different
problem instances. LDN achieves comparable performance to FLMIG, often slightly
Table 12 The class of large-size synthetic problem set (CLSP): Performance indicators (Best, Avg, and Std) for FLMIG, LDN, LVNP, and ICG (with the best values for Best (maximum) and Avg highlighted in bold for each case)

Algorithm              0.1         0.2         0.3         0.4         0.5         0.6         0.7         0.8
FLMIG  Best            0.900740    0.801933    0.703334    0.604998    0.506247    0.407568    0.310474    0.251754
       Avg             0.900726    0.801910    0.703313    0.604952    0.506165    0.407514    0.310260    0.249895
       Std             0.000010    0.000015    0.000014    0.000024    0.000039    0.000036    0.000127    0.001770
       Time            37.2287     61.3815     92.7853     126.394     185.8676    230.5278    315.2526    650.8977
LDN    Best            0.900783    0.801960    0.703339    0.604956    0.506138    0.407691    0.310800    0.238483
       Avg             0.900762    0.801951    0.703326    0.604922    0.506086    0.407621    0.310636    0.237278
       Std             0.000014    0.000007    0.000010    0.000021    0.000034    0.000060    0.000109    0.001227
       Time            7.7707      8.4898      8.9789      9.5936      10.2837     11.1313     12.6324     15.456
LVNP   Best            0.900733    0.801936    0.703317    0.604995    0.506213    0.407373    0.307396    0.234140
       Avg             0.900723    0.801912    0.703294    0.604952    0.506145    0.407307    0.307233    0.231671
       Std             0.000006    0.000015    0.000016    0.000031    0.000033    0.000043    0.000146    0.002141
       Time            26.0225     50.7933     80.4861     116.6926    163.9604    204.8564    270.098     277.9552
ICG    Best            0.898475    0.798715    0.699222    0.599709    0.498933    0.393703    0.274978    0.155503
       Avg             0.898466    0.798700    0.699213    0.599666    0.498865    0.393395    0.274425    0.154058
       Std             0.000013    0.000019    0.000010    0.000050    0.000039    0.000246    0.000405    0.001210
       Time            10000.0765  10001.0765  10002.0765  10003.0765  10004.0765  10005.0765  10006.0765  10005.0875
K–W test  H-statistics 33.1070     32.7263     30.1873     25.6668     30.7068     35.6327     36.4405     36.5854
          P-value      0           0           0.000001    0.000011    0.000001    0           0           0
DcB test  FLMIG-LDN    0.047049    0.021870    1           0.490539    0.027856    0.599883    0.365189    0.334696
          FLMIG-LVNP   1           1           0.814306    1           1           0.244172    0.320263    0.000783
          FLMIG-ICG    0.012435    0.029567    0.000670    0.000050    0.000004    0.000451    0.000725    0
outperforming it in best and average values. Moreover, LDN's execution time stays
remarkably low, growing from under 8 s to only about 15 s across the whole range,
which makes it the most practical choice for very large datasets. Its standard
deviations are also low, signifying consistent behavior. LVNP delivers competitive
accuracy but tends to lag slightly behind FLMIG and LDN as the problem size
increases, and its execution time grows considerably at larger problem sizes
(reaching roughly 278 s), though it remains well below FLMIG's. ICG struggles in
comparison to the other algorithms, showing the lowest best and average values. It
is by far the slowest, with execution times above 10,000 s across all problem
sizes. Its accuracy is also less stable, reflected in higher standard deviations.
Statistical tests confirm significant performance differences between the
algorithms. The K–W test shows very low P-values, indicating that the differences in
performance are not due to chance. The DcB test further demonstrates that FLMIG
outperforms ICG in almost all cases, with LDN and LVNP offering competitive
results in certain instances. In summary, FLMIG and LDN are the top performers overall.
6 Conclusion
In this paper, we have introduced a new algorithm called the fast local move iterated
greedy (FLMIG) as an efficient approach to improve the detection of communities
in both synthetic and real-world networks. By combining fast local move heuris-
tics with an iterated greedy framework, FLMIG maximizes modularity while main-
taining scalability and simplicity. Extensive experimental results demonstrate that
FLMIG effectively identifies high-quality communities across both synthetic and
real-world networks. The algorithm shows comparable or superior performance to
state-of-the-art methods, including the Leiden and Louvain Prune algorithms, par-
ticularly in large networks.
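For readers less familiar with the objective, modularity Q compares the fraction of intra-community edges with the fraction expected under a degree-preserving random rewiring of the graph. A small self-contained sketch of the computation is given below (dict-of-lists adjacency assumed; this is illustrative, not the paper's implementation):

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph.

    adj         : dict mapping each node to a list of its neighbours
    communities : iterable of disjoint node sets covering the graph

    Q = sum over communities c of  L_c / m - (d_c / 2m)^2,
    where L_c is the number of intra-community edges, d_c the total
    degree of c, and m the number of edges in the graph.
    """
    two_m = sum(len(nbrs) for nbrs in adj.values())          # equals 2m
    q = 0.0
    for comm in communities:
        comm = set(comm)
        d_c = sum(len(adj[u]) for u in comm)                 # total degree of c
        l_c = sum(1 for u in comm for v in adj[u] if v in comm) / 2
        q += l_c / (two_m / 2) - (d_c / two_m) ** 2
    return q


# Two triangles joined by one bridge edge: the natural split scores well.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(round(modularity(adj, [{0, 1, 2}, {3, 4, 5}]), 4))  # → 0.3571
```

FLMIG's local-move phase repeatedly evaluates the change in this quantity when a single node switches community, accepting moves that increase Q.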
FLMIG’s efficacy is substantially enhanced by its innovative components, includ-
ing the enhanced Louvain Prune algorithm. These characteristics guarantee that
the algorithm effectively manages the exploration and exploitation stages, resulting
in the detection of communities of superior quality. Moreover, FLMIG’s minimal
parameter tuning and iterative refinement process ensure that the detected communi-
ties remain internally connected, addressing a key limitation of earlier approaches.
The computational results validate the robustness, scalability, and effectiveness of
the proposed algorithm, making it a valuable tool for network analysis across vari-
ous domains.
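The internal-connectivity refinement mentioned above can be pictured as a post-pass that replaces every community by the connected components of its induced subgraph, so no reported community is internally disconnected. A minimal sketch under an assumed dict-of-lists adjacency (illustrative only, not the paper's code):

```python
def split_disconnected(adj, communities):
    """Replace each community by the connected components of its induced
    subgraph, so that every returned community is internally connected."""
    refined = []
    for comm in communities:
        unseen = set(comm)
        while unseen:                      # one BFS per component
            seed = unseen.pop()
            component, frontier = {seed}, [seed]
            while frontier:
                u = frontier.pop()
                for v in adj[u]:
                    if v in unseen:        # same community, not yet visited
                        unseen.discard(v)
                        component.add(v)
                        frontier.append(v)
            refined.append(component)
    return refined


# Community {0, 1, 2, 3} has no edge between {0, 1} and {2, 3}: it is split.
adj = {0: [1], 1: [0], 2: [3], 3: [2]}
print(sorted(sorted(c) for c in split_disconnected(adj, [{0, 1, 2, 3}])))
# → [[0, 1], [2, 3]]
```

Splitting a disconnected community can only raise or preserve modularity, since the expected-edge penalty term is convex in community degree, which is why such a pass is safe to apply after every iteration.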
In future work, FLMIG could be extended to handle overlapping communities
and further optimized to reduce computational time for very large datasets, with par-
allel processing being a potential avenue for speeding up calculations. Expanding
the algorithm to include temporal and multiplex networks, along with other com-
plex systems, would enhance its applicability and impact within network science.
Additionally, the algorithm shows promise for broader applications, such as diffu-
sion source identification, containment, and influence maximization in incomplete
networks. Exploring its use in fields like epidemic modeling, network security, and
personalized recommendation systems would showcase its versatility.
Acknowledgements This project is supported by the Algerian Directorate General of Scientific Research
and Technological Development (DGSRTD).