0% found this document useful (0 votes)
85 views30 pages

Enhancing Stock Market Forecasting - A Hybrid Model For Accurate Prediction of S&Amp P 500 and CSI 300 Future Prices - 1-s2.0-S0957417424022474-Main

This paper presents a hybrid model, MEMD-AO-LSTM, for predicting stock prices of the S&P 500 and CSI 300 indices, demonstrating significant predictive accuracy through advanced machine learning techniques. The model incorporates multivariate empirical mode decomposition and the Aquila optimizer to enhance prediction reliability by effectively analyzing complex stock market data. Performance metrics indicate that this approach outperforms traditional methods and is adaptable to various market conditions, providing valuable insights for investors.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
85 views30 pages

Enhancing Stock Market Forecasting - A Hybrid Model For Accurate Prediction of S&Amp P 500 and CSI 300 Future Prices - 1-s2.0-S0957417424022474-Main

This paper presents a hybrid model, MEMD-AO-LSTM, for predicting stock prices of the S&P 500 and CSI 300 indices, demonstrating significant predictive accuracy through advanced machine learning techniques. The model incorporates multivariate empirical mode decomposition and the Aquila optimizer to enhance prediction reliability by effectively analyzing complex stock market data. Performance metrics indicate that this approach outperforms traditional methods and is adaptable to various market conditions, providing valuable insights for investors.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 30

Expert Systems With Applications 260 (2025) 125380

Contents lists available at ScienceDirect

Expert Systems With Applications

journal homepage: www.elsevier.com/locate/eswa

Enhancing stock market Forecasting: A hybrid model or accurate


prediction o S&P 500 and CSI 300 uture prices
Qing Ge *
International Business School, Yunnan University o Finance and Economics, Kunming, Yunnan, 650221, China

A R T I C L E I N F O A B S T R A C T

Keywords: This paper investigates the challenging domain o stock market prediction, a signicant aspect o nancial
Financial Markets markets. It ocuses on developing predictive models to orecast stock prices accurately, vital or mitigating losses
Multivariate Empirical Mode Decomposition and maximizing gains amidst the inherent unpredictability and volatility o the market. The study compre-
Aquila Optimizer
hensively analyzes various predictive models, including time series analysis and advanced machine learning
S&P 500
CSI 300
techniques. It highlights the superiority o ensemble or hybrid models in enhancing prediction reliability. Central
Stock price to this research is the development o a model incorporating detailed data collection, thorough analysis, and
state-o-the-art machine learning methods, achieving notable predictive accuracy. This approach underscores the
benets o data-centric strategies in today’s rapidly evolving business environment and the widespread appli-
cability o predictive analytics. The model outperorms conventional methods by decomposing time series into
simpler components and optimizing hyperparameters, thereby enhancing prediction accuracy, as demonstrated
by perormance testing on the S&P 500 and CSI 300 indices. The RMSE, MAE, and R2 values o the MEME-AO-
LSTM model are 27.12, 19.43, and 0.992, respectively, which serve as evidence o this. The model’s general-
izability and high perormance are demonstrated by its ecacy in a variety o major markets, including the
NASDAQ 100, Nikkei 225, FTSE, DAX, SSE, and KOSPI. Additionally, the model’s adaptability under diverse
market conditions is demonstrated through its evaluation o its robustness in response to signicant events, such
as the economic stimulus responses to the COVID-19 pandemic and the geopolitical tensions resulting rom the
tension and confict between Russia and Ukraine. Consequently, the proposed methodology has the potential to
help investors achieve substantial and advantageous returns.

1. Introduction linear, and noisy system (Ahangar et al., 2010). It makes it challenging
to predict prices in the uture with accuracy and precision. Due to this
1.1. Motivations noise, identiying important patterns in historical price data may be
dicult. Due to the requent non-stationarity o stock prices, their sta-
Stock price orecasting is a challenging and complex task or com- tistical properties, such as mean and variance, are prone to change over
panies, investors, and equity traders to anticipate uture earnings (Ali time(Z. Li, Yu, et al., 2023; Mintarya et al., 2023). Because o this, using
et al., 2023). Since it can be infuenced by a variety o elements, such as traditional time series analysis techniques becomes challenging. Finding
macroeconomic policies, stock market decisions, the capital fows o key a strategy that can solve these issues and deliver reliable results is the
rms, and changes in ownership that make it a non-parametric, non- objective. Over the years, numerous techniques have been ormulated to

Abbreviations: ANN, Articial neural networks; AO, Aquila optimizer; BiLSTM, Bidirectional long short-term memory; BPNN, Back propagation neural network;
CNN, Convolutional neural network; CWT, Continuous wavelet transorms; DL, Deep learning; DWT, Discrete wavelet transorms; EEMD, Ensemble empirical mode
decomposition; EMD, Empirical mode decomposition; EWT, Empirical wavelet transorms; GRU, Gated recurrent unit; IMF, Intrinsic mode unction; LSTM, Long
short-term memory; MAE, Mean absolute error; MAPE, Mean absolute percentage error; MEMD, Multivariate empirical mode decomposition; MLP, Multilayer
perceptron; MSE, Mean squared error; RBF, Radial basis unction; RMSE, Root mean square error; SDTP, Series Decomposition Transormer with Period-correlation;
SLFFN, Single hidden layer eedorward network; SLFN, Single Layer Feed-orward Network; SSE, Shanghai stock exchange; Std., Standard deviation; SVR, Support
vector regression; SZSE, Shenzhen stock exchange; TNN, Traditional neural networks; VMD, Variational mode decomposition.
* Corresponding author.
E-mail address: zz1670@ynue.edu.cn.

https://doi.org/10.1016/j.eswa.2024.125380
Received 3 January 2024; Received in revised orm 8 August 2024; Accepted 9 September 2024
Available online 11 September 2024
0957-4174/© 2024 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
Q. Ge Expert Systems With Applications 260 (2025) 125380

orecast outcomes in various elds (Nti et al., 2020). These methods 1.2. Related works
involve a range o approaches such as statistical analysis, machine
learning, neural networks, and deep learning. Through these methods, it Machine learning algorithms have progressively gained popularity in
has become possible to make accurate predictions in diverse areas o the eld o stock market prediction in recent years. To resolve the issue
application. o stock price orecasting, Tao et al. (2024) implemented a Series
This research utilizes cutting-edge methodologies to illustrate the Decomposition Transormer with Period-correlation (SDTP) model. To
potential o technology to transorm nancial orecasting. Traditional improve the accuracy and generalizability o orecasting, the SDTP
predictive models requently ail to eectively manage the intricacy o model employs a period-correlation mechanism and series decomposi-
stock market data. To address these challenges, this study is to introduce tion layers to more eectively identiy relationships within historical
a novel hybrid approach that is highly accurate in predicting the stock data and learn stock market trends. The VGC-GAN, a multi-graph con-
market. The proposed hybrid method, MEMD-AO-LSTM, consists o the volutional adversarial ramework, was proposed by Ma et al. (2024) to
multivariate empirical mode decomposition (MEMD), Aquila optimizer predict stock prices. Historical stock data is utilized by the model to
(AO), and long short-term memory (LSTM). LSTM, which is a deep generate numerous correlation graphs, which provide a comprehensive
learning method, was developed to overcome typical recurrent neural understanding o inter-stock correlations rom a variety o perspectives.
network gradient exploding and vanishing problems and capture long- To optimize predictive perormance, the ramework implements a
term connections in sequential input (Van Houdt et al., 2020). For Generative Adversarial Network that is supplemented with a MSE. The
instance, Baek (2024) introduced a deep learning technique (CNN- importance o stock markets in the global nancial system and their
LSTM) that is based on genetic algorithm optimization to orecast the impact on economic growth and stability were examined by Upadhyay
closing stock price o the ollowing day. This technique is designed to et al. (2023). To enhance the precision o stock value orecasting, their
address the challenges o stock market volatility, the necessity o veri- research employed deep learning algorithms. The potential o deep
ed data, and the optimal selection o eatures. The study employed learning to create a more dependable stock market environment was
convolutional neural networks (CNN) to extract eatures associated with investigated through a comparative analysis o the ecacy o LSTM and
stock price prediction, and LSTM networks to capture the long-term RNN algorithms in stock price estimation. To evaluate the ecacy o
trajectory o the input time series data. Botunac et al. (2024) conduct- these models, historical market data was obtained rom the Alpha Vault
ed an investigation into the potential benets o integrating LSTM API. LSTM displayed a higher degree o precision in predicting stock
models into conventional trading strategies to ascertain whether these prices than RNN, which encountered specic obstacles. To address the
modications result in improved perormance. The results o the study complexities and volatility o the contemporary stock market, Bhandari
demonstrated that trading strategies that employed LSTM models out- et al. (2022) created a predictive model that employs a Long Short-Term
perormed traditional strategies, underscoring the substantial advantage Memory neural network to predict the closing price o the S&P 500
o utilizing LSTM models or market prediction and trading decision- index the ollowing day. To obtain a thorough understanding o the
making. market’s behavior, the investigation meticulously selected a well-
Among deep learning models, LSTM provided the best results. Ater balanced combination o nine predictors that included undamental
being optimized by the AO method, this model received modied and market data, macroeconomic data, and technical indicators. Standard
selected data and perormed the training and testing process. A novel assessment metrics were employed to compare the perormances o both
swarm intelligence method called AO was initially suggested by Abua- single-layer and multi-layer LSTM models, which were developed using
ligah et al. (2021) or the solution o optimization issues. During hunting these input variables. In comparison to the multi-layer LSTM models, the
trips, AO relies on the our primary hunting techniques used by the experimental results indicated that the single-layer LSTM model oered
Aquila bird. The Aquila bird is able to switly switch between dierent a more accurate prediction. The challenge that investors ace when
hunting tactics thanks to the combination o its sharp hooks, sturdy eet, endeavoring to precisely time stock transactions, a actor that may lead
and great speed (S. Wang et al., 2021). For instance, a methodology was to nancial losses, was investigated by Hani’ah et al. (2023). Machine
developed by R. Liu et al. (2024) to address the challenges o accurately learning was employed to incorporate characteristics extracted rom
predicting nancial outcomes by incorporating the AO procedure with Google Trends data, technical indicators, and stock price data in a me-
support vector regression (SVR). Yiming (2024) suggested a method or thodology or predicting stock prices. To anticipate uture stock prices,
improving the predictive accuracy o extreme gradient boosting in stock three requently employed machine learning algorithms were imple-
market orecasting by incorporating it with a variety o optimization mented: Support Vector Regression (SVR), Multilayer Perceptron (MLP),
techniques such as AO. and Multiple Linear Regression (MLR). In the prediction o Indonesian
The S&P 500 and CSI 300 stock indices are used as the model’s main stock prices, SVR outperormed MLP and MLR with an average MAPE o
sources o data. To lessen the complexity o the data, there are many 0.50 %, as indicated by the test results. SVR was capable o predicting
decomposition strategies that can be used. However, most o the tradi- stock prices that were in close proximity to their actual values. The
tional decomposition methods can only handle single time series and are prediction o time series or nancial markets was investigated by Xia
unable to handle multivariate time series. I these decomposition et al. (2021) as an area o growing interest to investors and researchers.
methods are used to handle multivariate time series in isolation, the They present a ramework to orecast stock market behavior that in-
resultant modes will suer rom loss o correlation (Huang, Hasan, et al., corporates wavelet coherence, multiscale decomposition, and SVR. To
2022). Multivariate empirical mode decomposition can analyze many enhance the accuracy o predictions or multidimensional nonlinear
time series concurrently while maintaining the intrinsic relationship data, they implemented SVR ater extracting valuable inormation rom
between various modes. As a result, the modes o multivariate time se- unprocessed data through preprocessing. To evaluate the ramework’s
ries correlate with one another, making them excellent or investigating eectiveness, comparative experiments were implemented using the
multivariable connections. For instance, the objective o Huang, Deng, Dow Jones Index and the Shanghai Composite Index. Pangestu et al.
et al. (2022) was to improve the accuracy o stock price index ore- (2024) investigated the intricate matter o stock price prediction,
casting by utilizing the capabilities o MEMD to model the dependencies emphasizing the importance o eective methods or obtaining accurate
between pertinent variables. The modeling ramework employed in the predictions. Employing daily historical data, they proposed the use o
study was SVR based on MEMD, which simultaneously decomposed machine learning methods to orecast the stock prices o Apple Inc. The
covariates such as the opening price, highest price, lowest price, closing grid search method was implemented to optimize hyperparameters,
price, and trading volume o a stock index. such as cost, epsilon, kernel, and intercept t, with a k-value o 5. The
importance o stock market prediction in providing investors with in-
sights into uture stock prospects and optimizing prots was emphasized

2
Q. Ge Expert Systems With Applications 260 (2025) 125380

Table 1
Statistics description o the dataset or the S&P 500 index.
Count Mean Std. Min Max Skew Kurtosis

Open 2517 2741.91 872.933 1426.19 4804.51 0.676063 0.64677


High 2517 2756.636 879.4021 1461.89 4818.62 0.675695 0.65484
Low 2517 2725.856 865.6957 1426.19 4780.04 0.676603 0.64035
Volume 2517 3.89E+09 9.65E+08 1.3E+09 9.98E+09 1.68757 5.036371
Close 2517 2742.26 872.6929 1457.15 4796.56 0.676501 0.6474

by Ahuja et al. (2023). They orecasted stock prices by employing three learning techniques and time series analysis, to demonstrate the
prominent regression techniques: SVR, Random Forest, and Linear superiority o hybrid models in stock price orecasting.
Regression, in conjunction with historical data. The criteria or selecting • Demonstration o Decomposition Benefts: The study improves
these machine-learning algorithms were their pervasive use and the high the prediction reliability o the developed models by decomposing
precision o the outcomes. Pagliaro (2023) suggested a classication- time series data into manageable components, thereby resolving the
based method to enhance stock market predictions by reducing ore- challenges posed by the non-stationary and non-linear properties o
casting errors. The study introduced a method that utilizes an Extra stock market data.
Trees Classier to predict stock returns by utilizing technical indicators • Optimization Techniques or Improved Perormance: The
as inputs. The objective was to determine the percentage dierence research validates the ecacy o AO optimization in enhancing
between the closing price and the closing price ater 10 trading days or model perormance, demonstrating its potential or optimizing
120 companies in a variety o industries. hyperparameters in predictive models.
• Insights into Market Dynamics: The study oers practical impli-
2. Research gaps, main contributions, and novelties cations or traders and investors in various markets by comparing the
perormance o models on the S&P 500 and CSI 300 indices, thereby
Accurate price orecasts are essential or nancial stability and suc- providing valuable insights into how market dynamics infuence
cess despite the complexity and ambiguity o stock market prediction. predictive accuracy.
Traders, investors, and analysts are required to make inormed decisions • Comprehensive Market Analysis: The research involves extensive
in response to nancial market volatility. Reliable prediction is chal- testing across a variety o major markets, including the S&P 500, CSI
lenging due to the unpredictable nature o stock values. Accurate ore- 300, NASDAQ 100, Nikkei 225, FTSE, DAX, SSE, and KOSPI, to
casting is crucial or strategic planning, risk mitigation, and nancial demonstrate the model’s generalizability and high perormance in
gain. diverse market conditions.
• Robustness to Market Events: The study assesses the model’s
a) Research Gaps adaptability to signicant economic events, such as the COVID-19
• Insufcient Examination o Hybrid Models: Although the existing pandemic and geopolitical tensions resulting rom the tension and
literature has investigated a variety o individual predictive models confict between Russia and Ukraine, thereby conrming the model’s
or stock market orecasting, there is a dearth o comprehensive reliability under a variety o market conditions.
studies on the ecacy o hybrid or ensemble models. By developing c) Novelties
and analyzing the perormance o the MEMD-AO-LSTM model, this • MEMD-AO-LSTM Model Presenting: The MEMD-AO-LSTM model
research addresses this disparity and oers insights into their supe- is a signicant advancement in data analysis or stock market pre-
rior predictive capabilities. diction, owing to its highly accurate outcomes.
• Limited Utilization o Decomposition Techniques: Decomposi- • Cross-Market Analysis: The study introduces a novel approach to
tion techniques have not been thoroughly implemented in numerous comparing predictive models across two distinct market indices (S&P
studies to address the non-stationary and non-linear characteristics 500 and CSI 300), thereby acilitating a more comprehensive
o stock market data. This research addresses this gap by illustrating comprehension o the distinctive dynamics o each market and their
the ecacy o decomposing time series data into simpler compo- infuence on prediction models.
nents to improve the accuracy o predictions. • Incorporation o Advanced Techniques: The research provides a
• Optimization Techniques in Predictive Modeling: The applica- comprehensive approach to stock market prediction by integrating
tion o advanced optimization techniques to improve the ecacy o advanced machine learning techniques, decomposition methods, and
machine learning models in stock price prediction is notably lacking. optimization algorithms, potentially upgrading existing
This study examines the eectiveness o AO optimization, illus- methodologies.
trating its benecial infuence on the perormance o predictive • Versatility in Diverse Markets: To demonstrate the model’s robust
models. perormance and adaptability, its ecacy was evaluated in six
• Comparative Analysis o Diverse Markets: The perormance o additional markets: NASDAQ 100, Nikkei 225, FTSE, DAX, SSE, and
models across various market dynamics is oten not ully compre- KOSPI.
hended due to the predominant emphasis on single market indices in • Resilience in Specifc Market Conditions: The model’s adapt-
previous research. This study addresses this issue by contrasting the ability under diverse market conditions was underscored by its
predictive accuracy o models on both the S&P 500 and CSI 300 evaluation o its perormance in response to specic events,
indices, emphasizing the impact o market characteristics on model including the economic stimulus packages during the COVID-19
perormance. pandemic and the tension and confict between Russia and Ukraine.
b) Contributions
• Development o Advanced Predictive Models: The study presents
the MEMD-AO-LSTM model, which achieves a high degree o accu-
racy in stock price prediction by incorporating state-o-the-art ma- 2.1. Paper organization
chine learning methods and an advanced optimization technique.
• Comprehensive Model Evaluation: The investigation comprehen- The second section o the study provides inormation about methods
sively assesses a variety o predictive models, such as machine and materials. The third section presents the proposed ramework ar-
chitecture and its main stages. The ourth section presents the

3
Q. Ge Expert Systems With Applications 260 (2025) 125380

Table 2
Statistics description o the dataset or the CSI 300 index.
Count Mean Std. Min Max Skew Kurtosis

Open 2432 3659.598 852.7103 2079.87 5922.07 0.028693 0.61841


High 2432 3689.974 858.7159 2118.79 5930.91 0.024831 0.61369
Low 2432 3629.838 843.8259 2023.17 5747.66 0.020918 0.62602
Volume 2432 136253.3 86763.68 35,740 686,440 2.413249 7.438108
Close 2432 3662.548 852.204 2086.97 5807.72 0.023471 0.61705

Fig. 1. Visual interpretation o candlestick charts.

experimental outcomes and discusses them. Lastly, the th section de- number o actors, such as supply and demand, economic data, and news
bates the conclusions o the study. events. The ultimate price at which a nancial asset is bought or sold at
the end o a trading day or session is known as the market closing price.
3. Materials and methods This price refects the asset’s nal trade beore the market or exchange
shuts down or the day. It is the price that investors and traders use to
3.1. Data description assess the perormance o the day and make inormed decisions about
their next investments.
In this particular research endeavor, data was sourced rom the In along with the opening and closing prices, high and low prices are
Yahoo Finance website. The dataset encompasses pertinent inormation also important to consider while trading nancial products. The highest
pertaining to the CSI 300 and S&P 500 indices or renowned corpora- price at which a stock has traded during a trading session or time period
tions in the China and the United State o America, respectively. The is reerred to as the high price or high. It is an essential indicator o
inormation captured encompasses the open price, closing price, highest market sentiment since it shows the level o demand or the stock at its
price, lowest price, and trading volume or both indices on a daily basis. peak. The low price, on the other hand, reers to the lowest price at
This data was collected over a period spanning rom the start o 2013 which a stock has traded throughout the course o a trading session or
until the conclusion o 2022, resulting in a voluminous ten-year dataset. timerame. For traders, it is an essential metric since it provides data on
Tables 1 and 2 provide statistical insights into the dataset, such as count, the stock’s supply as well as potential support levels. In general, the
maximum, minimum, mean, skewness, standard deviation (Std.), and open, close, high, and low prices must be tracked by anybody intending
kurtosis. to trade nancial products. They provide helpul inormation on the
Throughout trading sessions or days, nancial assets such as stocks, market’s condition, supply and demand, and potential trading oppor-
bonds, commodities, or currency pairings are introduced with a xed tunities. Trading volume reers to the total number o shares that have
open or opening price. This price represents the initial transaction or changed hands or a certain company over a specied period o time,
the item during that particular session. In essence, how trades are con- requently a trading day. Increased interest in or volatility in the stock
ducted throughout the day is determined by price. Ater the market may be indicated by an increase in trading volume. Keeping track o
opens, the price o the nancial item will fuctuate depending on a trading volume may be useul or both investors and traders as it enables

4
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 2. The main stages o the data preprocessing algorithm.

them to identiy market trends and make inormed decisions. Table 1 perormance o China’s domestic stock market. The CSI 300 is widely
shows the statistical results o the dataset or the data obtained rom S&P regarded as one o the most important benchmarks or the Chinese stock
500 index, and Table 2 shows the statistical results o the dataset or the market and is an invaluable resource or those looking to understand and
data obtained rom CSI 300 index. invest in it. As such, it oers a comprehensive and detailed insight into
the workings o China’s stock market, making it an indispensable tool
or investors and analysts alike.
3.2. Explanation o trading markets CSI 300 & S&P 500 The S&P 500, or Standard & Poor’s 500, is an index o the stock
market that gauges the perormance o the top 500 American publicly
The China Securities Index 300, or CSI 300 as it is commonly reerred listed rms. This index is widely regarded as one o the most signicant
to, is a crucial stock market index in China. It represents the top 300 equity indices globally, serving as a reliable barometer or the overall
publicly traded businesses on the Shanghai Stock Exchange (SSE) and health and progress o the American stock market. S&P Global, a
Shenzhen Stock Exchange (SZSE), making it a reliable measure o the

5
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 3. Visual representation o the division o dataset into training and testing or (a) S&P 500 and (b) CSI 300 markets.

reputable nancial services rm that provides nancial market indices price. Typically, the candle is lled with green hue. A hollow candlestick
and data, is responsible or maintaining and publishing the S&P 500. It is typically shown in red with a black border is drawn i the close price is
an essential point o reerence or investors, analysts, and nancial higher than the open price. A candle’s body, oten reerred to as its
proessionals. By keeping an eye on the S&P 500, investors can make hollow or lled section, might be long, normal, or short in relation to the
inormed decisions about their investment strategies and stay ahead o line above or below it. The term shadows, tails, or wicks reers to the
the curve in the dynamic and ever-changing world o nance. long or short lines that indicate the high or low-price range above and
below the actual body. The top o the upper shadow indicates the price
3.3. Candlestick description that is highest or that specic day, while the bottom o the lower tail
indicates the price that is lowest. The body could or might not have
Candlestick chart combines the best eatures o a line and bar chart in wicks, tails, or shadows.
that each bar shows the range o price movement over the specied time
period. The most common application or it is in technical analysis o 3.4. Describe the data preparation process
currency and equities price trends. The open, high, low, and closing
prices o the day are used to build a candlestick as demonstrated in Data preparation, which is displayed in Fig. 2, is an essential
Fig. 1. component o any project that involves substantial datasets. At this
A ull candlestick is drawn i the open price is higher than the close stage, the raw data is transormed into a ormat that is both more usable

6
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 4. The overall structure o the models: (a) MLP, (b) ELM, (c) RBF, (d) RNN, (e) GRU, () BPNN, (g) Bi-LSTM, (h) Transormer.

Fig. 5. Visual presentation o the general ramework and step-by-step illustration o the proposed model.

7
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 6. Visual representation o methods and strategies used by Aquila or hunting.

and o higher quality by standardizing and eliminating any superfuous prevent the algorithm rom becoming ensnared in local minima. The
elements. normalization equation employed is as ollows:
(X  Xmin)
• Data Cleaning: This entails the removal o null, missing, and un- XScaled = (1)
(Xmax  Xmin)
known values, as well as the correction o any discrepancies. It is
imperative to guarantee consistency and precision during this phase
Normalization improves the quality o predictions and reduces the
in order to generate trustworthy insights.
training time o vector machines, thereby supporting vector machines
• Forward and Backward Filling: This method continues to propa-
and enhancing the perormance o machine learning algorithms.
gate the most recent non-null value until it encounters another non-
null value. It is especially benecial or bridging the voids in a
• Data Partitioning: The dataset obtained rom both indexes is
dataset where the uture data points have not yet been recorded, but
divided into train and test, as shown in Fig. 3. 80 % o the dataset is
the late values are comparable to the early values. On the other hand,
designated or training, while the remaining 20 % is designated or
backward lling utilizes the subsequent observed non-null value to
testing. This partitioning is indispensable or the assessment o the
replace the previous null values. This is appropriate when historical
machine learning model’s eectiveness. We can estimate the model’s
data trends are refected in subsequent data points.
perormance under real-world conditions and identiy any over-
• Scaling: These procedures are indispensable or guaranteeing that
tting by training it on the training data and evaluating it on the test
the data is consistent. The data was normalized using Min-Max
data.
scalers, which rescaled it to a range (typically 0 to 1) to ensure
consistency across all processed data. Normalization is especially
crucial or machine learning models, particularly those that are
based on gradient descent, as it can expedite convergence and

8
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 7. Aquila Optimization Flowchart.

3.5. Explanation o artifcial neural network models that o the single hidden layer eedorward neural network (Jiang et al.,
2023). Weights and biases rom the input layer are randomly assigned
3.5.1. Multilayer perceptron by ELM (Jiang et al., 2023), as shown in Fig. 4(b).
In MLP, three layers o articial neurons are employed. The typical
construction o a three-layer network consists o an input layer at the 3.5.3. Radial basis unctions
top, one or more hidden layers in the center o the tiered structure, and A subset o articial neural networks called RBFs, as seen in Fig. 4(c),
an output layer at the bottom. The input dataset will be routed via a are in charge o using distance to estimate the similarity o data. A eed-
series o input and hidden layers to produce an output parameter that orward articial neural network o sorts, an RBF network consists o
will be combined with hidden layer computations (Y. Liu et al., 2023). three layers: an input layer, a hidden layer, and an output layer. The
The general organization o MLP is shown in Fig. 4(a). neurons in the buried layer are activated by RBFs (Kumar, 2024).

3.5.2. Extreme learning Machine 3.5.4. Recurrent neural networks


Extreme Learning Machine is a collection o machine learning ap- With its loop-based architecture, as seen in Fig. 4(d), recurrent
proaches based on eedorward neural networks (Jiang et al., 2023). neural networks can retain pertinent data over extended periods o time
With one exception—the training phase o the conventional neural (Van Houdt et al., 2020). Within the network, data is transerred
network no longer employs the gradient-based technique (backward internally rom one timestep to the next. A labeled training dataset’s
propagation)—the network structure o the ELM model is the same as actual values and predicted values are compared to calculate the cost or

9
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 8. The overall structure o the LSTM.

error o the RNN during training. By keeping the network’s weights and hierarchical neuron groups. While encoders’ eedorward neural net-
biases updated until the lowest value is reached, the error is reduced as works have the same number o parameters, their unctions vary. Re-
much as possible. sidual connections surround the two sub models ater layer
normalization. A decoder produces a foating-point vector list. A simple,
3.5.5. Gated recurrent unit ully connected neural network layer, the linear layer, projects decoder
The GRU algorithm unction, which is displayed in Fig. 4(e), similar stack output as a logarithmic vector (Zhang et al., 2022).
to the LSTM method, except it makes use o a hidden state called the
update gate, which combines the orget and input gates into one (Van 3.6. Data decomposition methods
Houdt et al., 2020).
3.6.1. Empirical mode decomposition
3.5.6. Back propagation neural network The single-channel technique known as Empirical Mode Decompo-
The prediction technique used in this work is Back Propagation sition (EMD) was introduced or processing non-stationary and non-
Neural Network (BPNN), which can extract non-linear correlations rom linear signals (Y. Chen et al., 2022). The EMD splits the signal up into
the input components with good interpretability while avoiding local multiple amplitudes and requency-modulated zero-mean signals, which
convergence (L. Chen et al., 2023). According to Fig. 4(), the input are reerred to as intrinsic mode unctions (IMFs). In contrast to tradi-
layer, the hidden layer, and the output layer make up the three layers o tional decomposition techniques, the EMD represents the signal as an
BPNN, a traditional neural network model that runs rom input to expansion o basic unctions that are dependent on the input signal and
output. are calculated by an iterative process known as siting.

3.5.7. Bidirectional long Short-Term memory 3.6.2. Ensemble Empirical mode decomposition
To investigate the inherent relationships between the data, Bidirec- IMFs, a signal processing and time series analysis tool, are used to
tional Long Short-Term Memory (Bi-LSTM), which is displayed in Fig. 4 decompose a complicated signal into smaller components. An addition
(g), can spread both orward and backward (Z. Li, Xu, et al., 2023). An known as Empirical Mode Decomposition was developed to address
inverse LSTM layer and a orward LSTM layer comprise the Bi-LSTM some o the limitations o EMD. EEMD improves on EMD by introducing
model. The input sequences are learned sequentially in the orward an ensemble approach. Instead o relying on a single set o white noise-
LSTM layer to mine the sequential dependencies o the data. The input added signals as in EMD to address mode mixing, EEMD creates many
sequences are learned in the inverse order orm in the inverse LSTM realizations o the input signal by adding numerous realizations o white
layer to mine the inverse order dependencies o the data (Z. Li, Xu, et al., noise. Each realization is then separately dissected using EMD. To obtain
2023). the nal IMF components and residual, the corresponding IMFs and
residual are averaged over the group o decompositions (Y. Chen et al.,
3.5.8. Transormer 2022).
The Transormer model, shown in Fig. 4(h), could be parallelized
with the ewest consecutive operations (Zhang et al., 2022). The mod- 3.6.3. Variational mode decomposition
el has two layered encoders and decoders with similar compositions. To decompose a given signal into a collection o IMFs with various
The encoder’s input fows via a sel-attention layer to help it see addi- temporal scales, signal processing experts employ Variational mode
tional data in the sequence. The output is routed to a ully connected decomposition (VMD). When working with non-stationary signal-
eedorward neural network with a straightorward architecture and s—where the statistical characteristics o the signal fuctuate over

10
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 9. Leakage measurements o the decomposition methods or the CSI 300 dataset.

11
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 10. Leakage measurements o the decomposition methods or the S&P 500 dataset.

12
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 11. Decomposition results o WT or all variables, the dataset or CSI 300 index.

time—the VMD is very helpul. It seeks to separate out the undamental 4. Proposed ramework
elements or modes o the signal that best represent the prevailing trends
or oscillatory behavior over a range o scales. A mode with a certain As demonstrated in Fig. 5, in most o the research presented in order
requency and time-varying amplitude is represented by each IMF (H. Li to provide a hybrid model or the purpose o prediction, it is very
et al., 2020; Moreno et al., 2024). important to prepare the data rst. CSI 300 and S&P 500 have been the
two important indicators to collect data to present this research. The
3.6.4. Wavelet transorms second step involves the decomposition o data into smaller and un-
The signal can be decomposed into multiple resolution-level com- derstandable parts. Decomposing the data by using ve methods, among
ponents, commonly reerred to as mother wavelets, using wavelet which the MEMD method provides better results than other methods.
unctions (Tiwari & Chatterjee, 2010). The wavelet transorm simulta- Data processing and presenting the results o each o these models has
neously collects the location and time o the signal, while the classic led to the selection o a preerred model or data development. In this
Fourier transorm just receives the requency inormation. Continuous work, nine models were selected rom articial neural network and deep
wavelet transorm (CWT) and discrete wavelet transorm (DWT) are the learning methods, among these models, the model LSTM provided the
two primary types o wavelets transorm (Song et al., 2022). best results. Then, in the next step, the LSTM model was optimized by an
optimization algorithm which is AO. Fig. 5 shows the general procedure
o the presented work stages.

13
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 12. Decomposition results o VMD or all variables, the dataset or CSI 300 index.

4.1. Multivariate empirical mode decomposition to adjust the parameters o the LSTM (Abualigah et al., 2021).
The optimal solution’s estimation can be obtained by,
A single time series may be divided into many requency domain  
h1,1 ⋯ h1,j h1, Dim 1 h1, Dim
modes using decomposition methods that are more suited or uture  h2,1 ⋯ h2,j ⋯ h2, Dim 
machine learning. These decomposition techniques, however, are 
 ⋯ ⋯ hi,j ⋯ ⋯


limited to dealing with single time series and are unable to deal with H=   (2)
 ⋮ ⋮ ⋮ ⋮ ⋮ 

multivariate time series. The generated modes will experience a loss o  hN1,1 ⋯ hN1,j ⋯ hN1, Dim 
correlation i these decomposition methods are utilized to treat multi- hN,1 ⋯ hN,j hN, Dim 1 hN, Dim
variate time series in isolation (Huang, Hasan, et al., 2022). Rehman &
Mandic (2010) presented the MEMD, which can simultaneously eval- The dynamic object identication segmentation problem has Dim
uate several time series while keeping the intrinsic link between distinct dimensionality. Total issue solutions are N. The optimal answer is H. The
modes. Multivariate time series are ideal or examining links between value obtained during the ith solution is denoted as Hi .
many variables since their modes correlate with one another (Rehman &  
Mandic, 2010). Hij = rand × Uj  Lj + Lj , i = 1, 2, ⋯., N; j = 1, 2, ⋯., Dim (3)

The randomly generated integer Lj is the jth lower border, and UCj is the
4.2. Aquila optimization jth upper boundary.
(i) The Numerical Expression o AO:
Figs. 6 and 7 illustrate the Aquilla Optimization, which is employed AO hunting behavior includes expanded exploration, encircling,

14
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 13. Decomposition results o EMD or all variables, the dataset or CSI 300 index.

expanded exploitation, and narrowed exploitation. 4.3. Long Short-Term memory


(ii) H1: Expanded Exploration:
The targeted ghosts are picked like Aquila chooses the hunting area. LSTM design adds non-linear, data-dependent controls into the RNN
 t cell to avoid the gradient o the loss unction rom vanishing. A type o
H1 (t + 1) = Hbest (t) × 1  + (HM (t)  Hbest (t) × rand ) (4) recurrent neural network architecture called Long Short-Term Memory,
T
sometimes reerred to as LSTM was developed to solve a requent RNN
An initial search yield H1 (t +1). But the best prior answer is Hbest (t) . problem with collecting long-term connections in sequential input (Van
  Houdt et al., 2020). The LSTM enhances the conventional recurrent
Analysis o the particular item is possible. Follow the term 1  Tt or
neural network with a memory store and Gates components as shown in
each iteration to control the exploration, and utilize Hm (t) to obtain the Fig. 8.
position average value o the continuing solutions.

1 ∑N 5. Interpretation and review o results


HM (t) = Hi (t), ∀j = 1, 2, ⋯., Dim (5)
N i=1
5.1. Evaluation metrics
T represents the maximum iterations.
Perormance indicators in this study included the R2 coecient o
determination, the mean absolute error, mean absolute percentage error
and the root mean square error. The mathematical ormulas or these
measures are as ollows:

15
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 14. Decomposition results o EEMD or all variables, the dataset or CSI 300 index.

√̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅̅
 decomposition techniques such as EMD, EEMD, WT, VMD, and MEMD
n
 (yi  ̂ y i )2 are employed. These methods are nonlinear, which means that they are
i=1
RMSE = (6) susceptible to fuctuations in time series data, such as changes in value
n
or series length. I not eectively managed, this vulnerability has the
(
n ⃒ ⃒) potential to generate IMFs and leak inormation. This issue was resolved
1∑ ⃒yi  ̂
y i ⃒⃒
MAPE = ⃒ × 100 (7) through a comparative study that employed two methods to deconstruct
n i=1 ⃒ yi ⃒ the CSI300 and S&P500 time series data. The time series o character-
n istics o both markets were decomposed into IMFs prior to dividing the
i=1 |yi yi |
̂ data into training and testing sets. Each set was decomposed ater the
MAE = (8)
n time series was partitioned into training and testing sets. The real and
n decomposed data dierence values were compared to evaluate these
y i )2
(yi  ̂ approaches, as illustrated in Figs. 9 and 10. The original data, decom-
R2 = 1  i=1
n 2
(9)
i=1 (yi  y) posed series, and leakage or the CSI300 and S&P500 markets are
depicted in these gures.
The global decomposition approach exhibits minimal dierence
5.2. Data decomposition values in the Wavelet Transorm or both the training and assessment
sets o the CSI300 Market. In the testing set, the distinct decomposition
To deal with non-stationary nancial time series data, signal

16
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 15. Decomposition results o MEMD or all variables, the dataset or CSI 300 index.

approach demonstrated a slightly higher leakage. The testing set were minimal dierence values in EMD’s global approach, whereas the
exhibited signicant leakage when EEMD was decomposed separately, testing set exhibited a signicant increase in the separate approach.
whereas the global decomposition approach demonstrated reduced Compared to the separate approach, which exhibited moderate leakage
leakage. VMD was consistent, and minimal dierence values was in the testing set, MEMD’s global decomposition approach exhibited
observed in both methods, with the global decomposition having a extremely low leakage and dierence values.
modest advantage. The leakage was negligible in the global approach, The results o the analysis suggest that the global decomposition
but it was evident in the testing set or the discrete approach, according preceded by dataset division is generally more eective in reducing data
to EMD. In contrast to the separate decomposition approach, which leakage. The global decomposition approach guarantees a consistent set
demonstrated moderate dierence values in the testing set, MEMD’s o IMFs, thereby reducing the likelihood o leakage and preserving the
global decomposition approach demonstrated extremely low dierence model’s integrity. Conversely, the discrete decomposition approach in-
values. The global decomposition approach employed by WT resulted in troduces variability between the training and testing sets, which leads to
minimal dierence values or the S&P500 Market, whereas the separate increased leakage. This method may result in the training data being
decomposition approach resulted in a higher level o leakage in the overtted and the testing data being poorly generalized as a result o the
testing set. The EEMD method, like the CSI300, exhibited a signicant variations in the derived IMFs. The global decomposition approach has
amount o leakage in the testing set or the distinct decomposition been chosen and implemented or signal decomposition in nancial time
approach. The global approach was marginally more eective, while series analysis in accordance with the results. This approach mitigates
both approaches exhibited minimal dierence values in VMD. There the likelihood o inormation leakage and establishes a more dependable

17
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 16. Decomposition results o WT or all variables, the dataset or S&P 500 index.

oundation or the development o predictive models. variables in the time series data was complete and conclusive as a result
As part o the research, a comprehensive analysis was conducted o the application o MEDM, oering trustworthy inormation to
utilizing ve distinct methodologies, namely EMD, EEMD, VMD, WT, decision-makers. The decomposition results o wavelet transorm
and MEMD. Each o these techniques was applied with precision and applied to ve variables associated with the CSI 300 index dataset are
care to ensure the most accurate and reliable results possible. The depicted in Fig. 11. The analysis o data across various scales or reso-
ndings o this investigation shed light on important insights and trends lutions is acilitated by wavelet transorms, which oer insights into the
related to the research topic in question. The time series data under underlying dynamics o the index at multiple levels o detail. The CSI
consideration is made up o a number o dierent actors, including 300′s eatures were decomposed into nine IMFs (IMF1-IMF8, Residual)
volume and the open, closing, high, and low prices. The MEMD by the WT.
approach yields the best results when these analyzers’ output is Fig. 12 illustrates the Variational Mode Decomposition results or a
compared. The MEMD approach was used to start studying this non- variety o variables associated with the CSI 300 index, with a particular
stationary and non-linear data. This signal processing approach splits emphasis on eatures such as open price, high price, low price, trading
the time series into multiple segments, known as IMF, depending on volume, and close price. An IMF is represented by each plot, which
their band and range. By doing this, the data is changed into a typically captures a specic requency component o the original time series. The
stable condition, acilitating and improving analysis. The isolated high-requency IMFs may capture daily noise or minor fuctuations,
components were then organized or urther examination in descending whereas the lower-requency IMFs may refect more signicant trends
requency and ascending wavelength order. The data was meticulously and cycles. The VMD decomposed the open price and trading volume
examined using signal processing techniques, such as MEMD, to ensure eatures o the CSI 300 into nine IMFs (IMF1-IMF8, Residual), while the
accurate and trustworthy ndings. The decomposition o the chosen high price, low price, and close price eatures were decomposed into

18
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 17. Decomposition results o VMD or all variables, the dataset or S&P 500 index.

eight IMFs (IMF1-IMF7, Residual). nonlinear time series data. The EEMD decomposed the trading volume
Fig. 13 illustrates the Empirical Mode Decomposition results or a eature o the CSI 300 into 12 IMFs (IMF1-IMF11, Residual), while the
variety o variables associated with the CSI 300 index, with a particular open, high price, low price, and close price eatures were decomposed
emphasis on eatures such as open price, high price, low price, trading into 11 IMFs (IMF1-IMF10, Residual).
volume, and close price. Daily market noise or minor fuctuations are The decomposition results o Multivariate Empirical Mode Decom-
captured by high-requency IMFs. The market’s immediate responses to position applied to a variety o variables within the dataset that are
new inormation or short-term events are refected in them. Lower- associated with the CSI 300 index are depicted in Fig. 15. The traditional
requency IMFs are indicative o more substantial market trends or cy- EMD’s capabilities are enhanced by MEMD, which addresses its limita-
cles, which may be associated with longer-term economic or sectoral tions in the handling o multivariate data and the prevention o mode
dynamics. The residual component captures the slowest changing mixing across dierent channels. MEMD is uniquely suited or datasets
portion o the signal, which requently corresponds to the underlying in which multiple interrelated variables (such as various stock prices and
trend and represents the trend o the time series data. The EMD volumes) infuence each other, as it enables the simultaneous decom-
decomposed the CSI 300′s eatures into eight IMFs (including Residual, position o multivariate time series data. This method guarantees a
IMF1-IMF7). consistent mode alignment across all variables, thereby improving the
The decomposition results o Ensemble Empirical Mode Decompo- analysis’s relevance and accuracy. The CSI 300′s eatures were decom-
sition or a variety o variables in the dataset that are associated with the posed by the MEMD into nine IMFs, which include Residual, IMF1-IMF8.
CSI 300 index are illustrated in Fig. 14. Addressing some o the inherent The wavelet transorms decomposition results or ve variables
limitations o the original EMD method, such as mode mixing, the uti- associated with the S&P 500 index dataset are illustrated in Fig. 16.
lization o EEMD enables a more robust analysis o non-stationary and Insights into the underlying dynamics o the index at multiple levels o

19
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 18. Decomposition results o EMD or all variables, the dataset rom the S&P 500 index.

detail are provided by wavelet transorms, which are particularly illustrated in Fig. 18. An EMD decomposes a time series into a collection
eective or analyzing data across various scales or resolutions. The S&P o IMFs that are simple oscillatory modes at varying scales. This
500′s eatures, which encompass metrics such as open price, high price, approach is highly ecient and adaptive in its ability to extract mean-
low price, trading volume, and close price, are decomposed using WT in ingul signals rom noisy data without the need or a stationary basis, as
this gure. The WT decomposed the high, low, and close price eatures each IMF is directly derived rom the data. The S&P 500′s eatures were
o the S&P 500 into eight IMFs (IMF1-IMF7, Residual), the open price decomposed by the EMD into nine IMFs, which include Residual, IMF1-
into nine (IMF1-IMF8, Residual), and the trading volume into ten (IMF1- IMF8.
IMF9, Residual). The Ensemble Empirical Mode Decomposition results or a variety o
The decomposition results o Variational Mode Decomposition variables rom the S&P 500 index dataset are depicted in Fig. 19. IMF
applied to a variety o variables associated with the S&P 500 index unctions encompass a wide range o components, rom high-requency
dataset are illustrated in Fig. 17. VMD is a sophisticated technique or details that capture daily market fuctuations to low-requency compo-
the decomposition o non-stationary and non-linear signals into a nents that may refect underlying economic trends or business cycles.
sequence o band-limited IMFs. These IMFs oer a clear and concise The residual trend is typically depicted in the nal plot o each sequence,
understanding o the index’s underlying dynamics across a variety o which represents the overall trend o the market data that is not
requencies. The open, high, low, and close price eatures o the S&P 500 accounted or by the typical cyclical components. This is crucial or
were decomposed into ten IMFs (IMF1-IMF9, Residual) and the trading comprehending the market’s long-term trajectory or the underlying
volume into 11 (IMF1-IMF10, Residual) by the VMD. growth trends. The EEMD decomposed the S&P 500′s eatures into eight
The decomposition results o Empirical Mode Decomposition or a IMFs, including Residual, IMF1-IMF7.
variety o variables associated with the S&P 500 index dataset are The decomposition results o Multivariate Empirical Mode

20
Q. Ge Expert Systems With Applications 260 (2025) 125380

Fig. 19. Decomposition results o EEMD or all variables, the dataset rom the S&P 500 index.

Decomposition or several variables associated with the S&P 500 index hyperparameter denition. In the current study, the AO optimizer was
dataset are illustrated in Fig. 20. The S&P 500′s eatures were decom- employed to optimize the hyperparameters o the LSTM model. The
posed by the MEMD into ten IMFs, which include Residual, IMF1-IMF9. population size was set to 50 and the epoch size to 200, with the Mean
The residual trend or the open price is relatively stable, with a slight Squared Error (MSE), used as the objective unction. The optimal values
upward trajectory over the index range, indicating a gradual increase in or the LSTM’s hyperparameters identied through AO are presented in
the opening prices over the analyzed period. The high price trend ex- Table 3.
hibits an upward trend, similar to the open price, albeit with more
apparent fuctuations. The low-price trend exhibits a slower rate o
upward movement in comparison to the high and open prices, sug- 5.4. The results o the developed models
gesting that the lows are increasing at a slower pace. The residual trend
o the trading volume appears to be relatively consistent, with only Within this particular section, have conducted a thorough compari-
minor fuctuations. Similar to the open and high prices, the close price son o distinct models. In order to oer a comprehensive analysis, this
trend indicates a gradual increase. paper has presented the overall and specic ndings or each model in
both Tables 4 and 5 and Figs. 21 and 22. To eectively test the capa-
bilities o these models, the same dataset was utilized to predict both the
5.3. Hyperparameter setting CSI 300 and the S&P 500 index.
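The optimization loop described above can be sketched as follows. This is not the author's implementation: the full Aquila Optimizer of Abualigah et al. (2021) uses several distinct candidate-update strategies, whereas the stand-in search below only mimics its alternation between exploration around the best solution and local exploitation. The Keras objective, window shapes, and reduced population/iteration counts are assumptions made purely so the sketch runs quickly; only the search bounds mirror Table 3.

```python
# Simplified sketch (not the paper's code): AO-style search over LSTM
# hyperparameters bounded as in Table 3, scored by validation MSE.
import numpy as np
import tensorflow as tf

# (name, lower bound, upper bound), mirroring Table 3
BOUNDS = [("hidden_units", 2, 64), ("batch_size", 2, 16),
          ("epochs", 100, 2000), ("learning_rate", 1e-4, 1.0)]

def build_and_score(params, x_tr, y_tr, x_va, y_va):
    """Train a small LSTM with one candidate setting and return validation MSE."""
    units, batch, epochs, lr = params
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(int(units), input_shape=x_tr.shape[1:]),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=float(lr)),
                  loss="mse")
    # Epochs are capped so the sketch stays cheap; the study trains fully.
    model.fit(x_tr, y_tr, epochs=min(int(epochs), 20),
              batch_size=int(batch), verbose=0)
    return model.evaluate(x_va, y_va, verbose=0)

def simplified_aquila_search(score_fn, pop_size=4, iters=2, seed=0):
    """Toy stand-in for AO: global moves toward the best early, local moves later."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[1] for b in BOUNDS], dtype=float)
    hi = np.array([b[2] for b in BOUNDS], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, len(BOUNDS)))
    fitness = np.array([score_fn(p) for p in pop])
    for t in range(iters):
        best = pop[fitness.argmin()]
        for i in range(pop_size):
            if t < iters // 2:   # "expanded exploration" phase
                cand = best * (1 - t / iters) + (pop.mean(0) - best) * rng.random()
            else:                # "narrowed exploitation" phase
                cand = best + rng.normal(0, 0.1, len(BOUNDS)) * (hi - lo)
            cand = np.clip(cand, lo, hi)
            f = score_fn(cand)
            if f < fitness[i]:
                pop[i], fitness[i] = cand, f
    return pop[fitness.argmin()], fitness.min()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=(200, 10, 5)).astype("float32")  # 200 windows, 10 steps, 5 features
    y = (x[:, -1, 3:4] + rng.normal(0, 0.1, (200, 1))).astype("float32")
    split = 160
    score = lambda p: build_and_score(p, x[:split], y[:split], x[split:], y[split:])
    best, mse = simplified_aquila_search(score)
    print("best [units, batch, epochs, lr]:", best, "validation MSE:", mse)
```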
5.4. The results of the developed models

Within this section, a thorough comparison of the developed models has been conducted. In order to offer a comprehensive analysis, the overall and specific findings for each model are presented in Tables 4 and 5 and Figs. 21 and 22. To test the capabilities of these models effectively, the same dataset was utilized to predict both the CSI 300 and the S&P 500 index.

In both the training and testing sets, the MEMD-AO-LSTM model consistently outperformed the other models in all metrics for both indices.


Fig. 20. Decomposition results of MEMD for all variables, the dataset from the S&P 500 index.

Table 3
Hyperparameter settings obtained using the AO algorithm.
Hyperparameter         | Lower bound | Upper bound | Optimal value
Number of hidden units | 2           | 64          | 32
Batch size             | 2           | 16          | 16
Epoch                  | 100         | 2000        | 200
Learning rate          | 0.0001      | 1           | 0.02

The model explains 99.20 % of the variance in the data for the S&P 500 index, as evidenced by an R2 of 0.9920. Its high accuracy in percentage terms is evidenced by the extremely low MAPE of 0.47, while the low RMSE of 27.12 indicates that the model's predictions are exceedingly near to the actual values. Its precision is further emphasized by the MAE of 19.43. Although the performance metrics for the CSI 300 index are slightly less impressive than those of the S&P 500, the MEMD-AO-LSTM still exhibits strong predictive power, with an R2 of 0.9891, RMSE of 56.66, MAPE of 0.92, and MAE of 42.76. These findings suggest that the model is capable of effectively managing the complexities of the CSI 300 index, albeit with a slightly lower level of accuracy than for the S&P 500 index. Several factors associated with market dynamics can be used to explain the varying performance of the models on the S&P 500 and CSI 300 indices. As a large and mature market, the S&P 500 is more efficient due to its higher liquidity and greater availability of information (Balcilar et al., 2021; Z. Wang et al., 2022).


Table 4
The obtained results of the developed models for the S&P 500 index.
Model          | Train: R2 / RMSE / MAPE / MAE    | Test: R2 / RMSE / MAPE / MAE
MLP            | 0.8457 / 203.85 / 7.38 / 185.89  | 0.8353 / 120.27 / 2.29 / 97.96
ELM            | 0.9073 / 158.00 / 6.92 / 147.83  | 0.8888 / 100.89 / 2.03 / 86.19
BPNN           | 0.9078 / 157.07 / 6.87 / 147.80  | 0.9023 / 94.36 / 1.93 / 84.91
RBF            | 0.9192 / 147.49 / 6.09 / 137.04  | 0.9089 / 89.45 / 1.77 / 74.20
RNN            | 0.9544 / 110.77 / 4.75 / 101.81  | 0.9436 / 71.86 / 1.55 / 64.21
Transformer    | 0.9596 / 106.47 / 4.59 / 97.46   | 0.9584 / 63.95 / 1.18 / 50.97
GRU            | 0.9642 / 98.21 / 4.18 / 89.94    | 0.9616 / 59.19 / 1.09 / 45.09
Bi-LSTM        | 0.9683 / 94.36 / 3.87 / 84.36    | 0.9678 / 54.36 / 0.99 / 42.72
LSTM           | 0.9738 / 85.29 / 3.30 / 78.79    | 0.9705 / 51.93 / 0.96 / 39.58
WT-LSTM        | 0.9712 / 88.69 / 3.41 / 80.36    | 0.9701 / 52.64 / 0.97 / 39.99
VMD-LSTM       | 0.9757 / 83.82 / 3.21 / 75.94    | 0.9719 / 48.03 / 0.93 / 38.67
EMD-LSTM       | 0.9808 / 73.02 / 2.82 / 67.46    | 0.9769 / 46.00 / 0.89 / 37.21
EEMD-LSTM      | 0.9839 / 65.83 / 1.99 / 45.51    | 0.9822 / 40.31 / 0.78 / 32.51
MEMD-LSTM      | 0.9905 / 50.46 / 1.84 / 40.51    | 0.9893 / 31.24 / 0.57 / 23.61
AO-LSTM        | 0.9815 / 71.74 / 2.89 / 68.83    | 0.9751 / 47.78 / 1.01 / 42.32
MEMD-AO-LSTM   | 0.9941 / 39.84 / 1.28 / 30.48    | 0.9920 / 27.12 / 0.47 / 19.43

Table 5
The outcomes of the developed models for the CSI 300 index.
Model          | Train: R2 / RMSE / MAPE / MAE    | Test: R2 / RMSE / MAPE / MAE
MLP            | 0.8281 / 306.45 / 6.75 / 237.28  | 0.8241 / 214.17 / 4.04 / 183.17
ELM            | 0.8734 / 262.98 / 5.93 / 206.41  | 0.8689 / 196.38 / 2.73 / 136.07
RBF            | 0.8977 / 236.42 / 5.89 / 187.38  | 0.8929 / 177.47 / 2.45 / 121.20
BPNN           | 0.8998 / 223.52 / 5.81 / 180.73  | 0.8980 / 171.12 / 2.32 / 117.43
RNN            | 0.9349 / 188.61 / 4.50 / 158.13  | 0.9296 / 143.89 / 1.87 / 91.38
Transformer    | 0.9511 / 154.09 / 3.48 / 132.88  | 0.9501 / 120.65 / 1.76 / 86.04
Bi-LSTM        | 0.9594 / 149.69 / 3.11 / 123.20  | 0.9581 / 108.75 / 1.67 / 75.11
GRU            | 0.9595 / 148.72 / 3.03 / 115.61  | 0.9527 / 111.03 / 1.71 / 79.55
LSTM           | 0.9699 / 128.16 / 2.69 / 91.63   | 0.9659 / 100.10 / 1.32 / 63.57
WT-LSTM        | 0.9701 / 126.53 / 2.64 / 91.06   | 0.9663 / 97.46 / 1.28 / 62.61
VMD-LSTM       | 0.9729 / 119.49 / 2.43 / 88.81   | 0.9717 / 86.09 / 1.23 / 60.57
EMD-LSTM       | 0.9794 / 106.05 / 2.26 / 83.90   | 0.9757 / 79.63 / 1.17 / 57.11
EEMD-LSTM      | 0.9850 / 90.66 / 2.02 / 74.97    | 0.9828 / 71.21 / 1.18 / 54.06
MEMD-LSTM      | 0.9940 / 57.38 / 1.51 / 51.24    | 0.9854 / 65.52 / 1.05 / 49.28
AO-LSTM        | 0.9715 / 125.00 / 2.21 / 79.80   | 0.9691 / 89.80 / 1.60 / 73.15
MEMD-AO-LSTM   | 0.9970 / 40.24 / 0.86 / 29.90    | 0.9891 / 56.66 / 0.92 / 42.76
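For reference, the four assessment criteria reported in Tables 4 and 5 can be computed with the standard formulas below; this is a generic NumPy sketch (with MAPE expressed in percent), not code taken from the paper.

```python
# Generic formulas for the assessment criteria used in Tables 4 and 5.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,                           # coefficient of determination
        "RMSE": float(np.sqrt(np.mean(err ** 2))),             # root mean squared error
        "MAPE": float(100.0 * np.mean(np.abs(err / y_true))),  # in percent
        "MAE": float(np.mean(np.abs(err))),                    # mean absolute error
    }

# Example with a toy three-day prediction series.
print(regression_metrics([4000.0, 4050.0, 4100.0], [3990.0, 4060.0, 4095.0]))
```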

Fig. 21. Results and visual comparison of developed models for the S&P 500 index via assessment criteria.


Fig. 22. Results and visual comparison of developed models for the CSI 300 index via assessment criteria.

This often leads to more predictable patterns that sophisticated models such as the MEMD-AO-LSTM can capitalize on. On the other hand, the CSI 300, which is indicative of a less mature market, may demonstrate increased volatility and reduced efficiency, which makes it more difficult to predict accurately (Jin et al., 2022). The S&P 500 is dependent on the broader U.S. economy and is susceptible to macroeconomic policies, corporate earnings, and global economic conditions, which are more predictable and stable (Z. Wang et al., 2022). Additional volatility and unpredictability may be introduced by the CSI 300's susceptibility to domestic economic policies, regulatory changes, and market sentiments that are unique to China (Jia et al., 2023). More stable price movements may result from the behavior of market participants in the U.S. market, which is distinguished by institutional investors with sophisticated strategies (Balcilar et al., 2021). As a result of increased retail investor activity in the Chinese market, volatility may increase and trends may become less predictable (Jia et al., 2023).

The MEMD-AO-LSTM model is a robust framework for stock price prediction that capitalizes on the strengths of the long short-term memory network, Aquila optimization, and multivariate empirical mode decomposition. The integration of each component results in a synergistic effect that improves predictive performance, as each component offers distinct advantages.

MEMD is essential for managing the non-linear and non-stationary nature of financial market data. MEMD isolates underlying trends and cyclical patterns by decomposing complex time series into IMFs and filtering out noise (Deng et al., 2022; Yao et al., 2023). This decomposition allows the model to concentrate on the more informative aspects of the data, thereby enhancing the accuracy of its predictions. The simultaneous decomposition of multiple time series is a critical feature of MEMD, as it is capable of handling multivariate data (Deng et al., 2022; Yao et al., 2023); this matters because numerous variables influence stock prices. The decomposition of MEMD provides a higher level of clarity, which enables more precise learning and prediction in subsequent modeling stages.

AO, which is motivated by the hunting strategies of the Aquila bird, dynamically balances the exploration and exploitation phases to efficiently navigate the parameter space (Abualigah et al., 2021). This nature-inspired algorithm dynamically adjusts the search process in accordance with the current performance, thereby preventing the common issue of becoming trapped in local minima (Abualigah et al., 2021). AO's adaptive learning guarantees the efficient fine-tuning of parameters, resulting in enhanced accuracy and convergence speed. AO improves the training process by imitating natural hunting strategies, which leads to improved predictive accuracy and generalization. The theoretical advantage of the AO is its ability to improve the efficiency and effectiveness of the training process, which is essential for achieving optimal model performance.

LSTM networks represent long-term dependencies and sequential patterns in time series data (Van Houdt et al., 2020). Traditional RNNs are plagued by the vanishing gradient problem; LSTMs, however, are equipped with memory cells that retain information for extended periods. The ability of LSTMs to learn and retain long-term dependencies is crucial for the precise prediction of stock prices. The model's capacity to capture intricate temporal patterns is improved by the gating mechanisms of LSTMs, which regulate the flow of information and ensure that pertinent data is retained while irrelevant data is discarded. In the context of financial time series prediction, LSTMs are particularly effective due to their capacity to process sequences of data points while preserving temporal order and context.
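The gating mechanism referred to above follows the standard LSTM formulation (see, e.g., Van Houdt et al., 2020); the equations below restate that textbook form for reference and are not reproduced from this paper. Here σ is the logistic sigmoid, ⊙ denotes the element-wise product, x_t is the input at time t, and h_t and c_t are the hidden and cell states.

```latex
% Standard LSTM cell (textbook formulation, stated here for reference only)
\begin{aligned}
f_t &= \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right) && \text{(forget gate)} \\
i_t &= \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right) && \text{(input gate)} \\
o_t &= \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right) && \text{(output gate)} \\
\tilde{c}_t &= \tanh\left(W_c x_t + U_c h_{t-1} + b_c\right) && \text{(candidate cell state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(memory cell update)} \\
h_t &= o_t \odot \tanh\left(c_t\right) && \text{(hidden state)}
\end{aligned}
```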
The synergistic effect of the MEMD-AO-LSTM model is the result of the integration of MEMD, AO, and LSTM, where the strengths of each component complement one another. By ensuring that the input data to the LSTM network is clean and structured, MEMD's decomposition capabilities enable the LSTM to concentrate on learning meaningful patterns rather than noise. AO optimizes the training process by dynamically adjusting parameters, guaranteeing that the LSTM network effectively captures the complex patterns in the decomposed data. This comprehensive modeling approach effectively manages the complexity and volatility of stock price data, resulting in more precise and dependable predictions.
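How the decomposed series are handed to the LSTM is not spelled out line by line in the paper. A common arrangement in decomposition-based LSTM forecasters, sketched below with illustrative names and an assumed look-back window of 20 days, stacks the IMFs of all variables as input features and frames them into fixed-length windows of shape (samples, timesteps, features) for the network.

```python
# Illustrative sketch (assumed design, not the paper's code): frame stacked IMFs
# into look-back windows suitable for an LSTM predicting the next close price.
import numpy as np

def make_windows(imf_matrix, target, lookback=20):
    """imf_matrix: (n_samples, n_features) array of IMFs stacked across variables.
    target: (n_samples,) series to forecast, e.g. the close price.
    Returns x of shape (n_windows, lookback, n_features) and next-step targets y."""
    x, y = [], []
    for t in range(lookback, len(target)):
        x.append(imf_matrix[t - lookback:t])  # previous `lookback` time steps
        y.append(target[t])                   # value one step ahead
    return np.asarray(x, dtype="float32"), np.asarray(y, dtype="float32")

# Example with random stand-in data: 500 days, 5 variables x 8 IMFs = 40 features.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 40))
close = rng.normal(size=500).cumsum()
x, y = make_windows(features, close)
print(x.shape, y.shape)  # (480, 20, 40) (480,)
```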
o MEMD, as it is capable o handling multivariate data (Deng et al., According to Figs. 23 and 24, the MEMD-AO-LSTM model exhibits a
2022; Yao et al., 2023). This is due to the act that there are numerous high degree o accuracy in tracking the actual price trends o the S&P
variables that infuence stock prices. The decomposition o MEMD 500. The actual data lines align closely with the lines that represent
provides a higher level o clarity, which enables more precise learning predicted data. LSTM and MEMD-AO-LSTM models exhibit a particu-
and prediction in subsequent modeling stages. AO, which is motivated larly tight t, indicating their ecacy in capturing the temporal de-
by the hunting strategies o the Aquila bird, dynamically balances the pendencies in the stock index data. Divergences between the predicted
exploration and exploitation phases to eciently navigate the param- and actual data are evident, particularly during periods o signicant
eter space (Abualigah et al., 2021). This algorithm, which is inspired by price fuctuations, as evidenced by MLP, BPNN, and ELM. The RBF ex-
nature, dynamically adjusts the search process in accordance with the hibits a signicant degree o deviation in prediction, particularly in the
current perormance, thereby preventing the common issue o becoming presence o sharp peaks or troughs. LSTM, GRU, Transormer, Bi-LSTM,
trapped in local minima (Abualigah et al., 2021). AO’s adaptive learning and RNN models appear to be more adept at managing unseen data, as
guarantees the ecient ne-tuning o parameters, resulting in enhanced they are able to more closely monitor the actual price fuctuations.


Fig. 23. The outcome of the suggested model in comparison to the actual data for the S&P 500 index during training.

Furthermore, they demonstrate some smoothing or lag effects, which are common in recurrent structures that occasionally average out anomalies or rapid changes in the data. When it came to the testing phase, the MEMD-AO-LSTM model outperformed all others, closely replicating the actual data even at critical junctures. Thanks to an improved architecture that effectively captures deep temporal structures and prevents overfitting, it demonstrates a sophisticated ability to model and predict complex time series data.

The models' ability to consistently monitor the price movements of the CSI 300 during training is demonstrated in Figs. 25 and 26. Complicated patterns can be observed in this less dynamic market using the LSTM and MEMD-AO-LSTM models.


Fig. 24. The outcome of the suggested model in comparison to the actual data for the S&P 500 index during testing.

Significant disparities exist between the actual data and the MLP and ELM models, particularly during rapid market fluctuations. BPNN and RBF are subject to fluctuations, particularly during periods of abrupt market movement. LSTM, Transformer, Bi-LSTM, RNN, and GRU models are more adept at adapting to unseen data, thereby aligning themselves more closely with market movements. In a less dynamic market, however, smoothing effects and latency may obscure significant peaks and troughs. As illustrated in Figs. 25 and 26, recurrent models and advanced variations such as MEMD-AO-LSTM outperform MLP, ELM, and RBF in testing.


Fig. 25. The outcome of the suggested model in comparison to the actual data for the CSI 300 index during training.

a number o critical events, such as the COVID-19 pandemic and its bolstered investor condence, resulting in an increase in stock indices
extensive consequences. Between 2020 and 2021, governments world- across a multitude o countries (Atri et al., 2023; Oanh, 2022). The
wide implemented substantial economic stimulus packages to mitigate widespread distribution o vaccines was a critical actor in the restora-
the economic downturn precipitated by the pandemic. The S&P 500 tion o economic stability and the stimulation o market growth. In
index experienced a signicant increase o over 16 % in 2020 and over contrast, specic events resulted in signicant market declines. In
26 % in 2021 as a result o these measures (Gao et al., 2023; Olayungbo February 2022, the tension and confict between Russia and Ukraine
et al., 2024). In addition, the rapid acceleration o global vaccination resulted in a 10 % decline in major global indices, including China’s CSI,
campaigns signicantly alleviated pandemic-related concerns and Germany’s DAX, and France’s CAC 40 (Andrada-Félix et al., 2024;


Fig. 26. The outcome of the suggested model in comparison to the actual data for the CSI 300 index during testing.

Table 6
Performance of the MEMD-AO-LSTM model for the other markets.
Metrics | NASDAQ 100 | NIKKEI 225 | FTSE   | DAX    | SSE    | KOSPI
R2      | 0.9911     | 0.9883     | 0.9879 | 0.9869 | 0.9909 | 0.9894
RMSE    | 149        | 113.89     | 26.84  | 131.80 | 16.53  | 3.16
MAPE    | 0.94       | 0.41       | 0.29   | 0.72   | 0.39   | 0.08
MAE     | 117.15     | 91.19      | 22.31  | 102.72 | 12.63  | 2.56


Furthermore, the FTSE 100 and S&P 500 both experienced declines of approximately 6 % (Andrada-Félix et al., 2024). The vulnerability of global markets to sudden and significant events was emphasized by these geopolitical tensions. The high accuracy of the model in predicting market trends during both the bullish (rising) and bearish (falling) phases is evident in the accompanying Figs. 24 and 26, which demonstrate the model's effectiveness in a variety of market conditions. They also illustrate the profound influence of significant events, such as the COVID-19 pandemic and geopolitical conflicts, on market performance, as evidenced by the CSI 300 and S&P 500 indices. The MEMD-AO-LSTM exhibits high accuracy in predicting both bullish and bearish market phases, demonstrating the robustness of these forecasting techniques in the presence of global economic disruptions.

The proposed MEMD-AO-LSTM model's generalizability and robustness across a variety of global stock markets, such as the NASDAQ 100, Nikkei 225, FTSE, DAX, SSE, and KOSPI, are underscored by the results presented in Table 6. These performance metrics illustrate the model's adaptability and efficacy in a variety of market conditions.

The MEMD-AO-LSTM model could be advantageous to financial institutions, portfolio managers, high-frequency trading firms, economists, policymakers, and corporations. Portfolio managers can employ this model to make informed decisions regarding asset allocation, stock selection, and transaction timing, thereby maximizing returns and minimizing risks to create more profitable and robust investment portfolios. Firms that engage in high-frequency trading may incorporate the model into their algorithms to enhance trade execution, reduce transaction costs, improve trade timing, and increase profitability. Financial institutions can utilize the model to more effectively evaluate and mitigate the risks associated with their stock portfolios, thereby enabling proactive adjustments in risk exposure and improved overall risk management.

6. Conclusion

Stock trading is one of the most significant and frequently discussed financial topics. Investors are perpetually seeking methods to anticipate future trends in order to mitigate losses and optimize profits, as stock prices are unpredictable and volatile. This study investigated the intricate and dynamic realm of stock prediction through the utilization of predictive models on different markets, evaluating prediction models based on machine learning and time series analysis. The results indicate that a hybrid model can provide more accurate predictions. The primary results of the investigation are as follows:

• The daily S&P 500 and CSI 300 datasets, which include open, high, low, and close prices, as well as trading volume, were collected and prepared for this study.
• The MEMD-AO-LSTM model outperformed all of the other developed models in terms of effectiveness and error measurement, achieving RMSE, MAE, and R2 values of 27.12, 19.43, and 0.992 on the test set.
• It was found that the S&P 500 produced much better results than the CSI 300, even though the suggested model produced accurate results for both markets. Given that the CSI 300 is significantly less dynamic than the S&P 500, the performance discrepancy may be explained by the unique market dynamics of each market.
• To showcase the model's strong performance and versatility across different markets, its effectiveness was also tested in six additional markets: the NASDAQ 100, Nikkei 225, FTSE, DAX, SSE, and KOSPI. In these markets, the proposed model achieved R2 values of 0.9911, 0.9883, 0.9879, 0.9869, 0.9909, and 0.9894, respectively.
• To demonstrate the model's resilience under specific market conditions, its performance was evaluated in response to two specific events: the economic stimulus packages implemented to alleviate the economic downturn caused by the COVID-19 pandemic, and the tension and conflict between Russia and Ukraine.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

References

Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A. A., Al-qaness, M. A. A., & Gandomi, A. H. (2021). Aquila Optimizer: A novel meta-heuristic optimization algorithm. Computers & Industrial Engineering, 157, Article 107250. https://doi.org/10.1016/j.cie.2021.107250
Ahangar, R. G., Yahyazadehfar, M., & Pournaghshband, H. (2010). The comparison of methods artificial neural network with linear regression using specific variables for prediction stock price in Tehran stock exchange. arXiv preprint arXiv:1003.1457.
Ahuja, R., Kumar, Y., Goyal, S., Kaur, S., Sachdeva, R. K., & Solanki, V. (2023). Stock price prediction by applying machine learning techniques. 2023 International Conference on Emerging Smart Computing and Informatics (ESCI), 1–5.
Ali, M., Khan, D. M., Alshanbari, H. M., & El-Bagoury, A.-A.-A.-H. (2023). Prediction of complex stock market data using an improved hybrid EMD-LSTM model. Applied Sciences, 13(3), 1429.
Andrada-Félix, J., Fernández-Rodríguez, F., & Sosvilla-Rivero, S. (2024). A crisis like no other? Financial market analogies of the COVID-19-cum-Ukraine war crisis. The North American Journal of Economics and Finance, 74, Article 102194. https://doi.org/10.1016/j.najef.2024.102194
Atri, H., Teka, H., & Kouki, S. (2023). Does US full vaccination against COVID-19 immunize correspondingly S&P500 index: Evidence from the NARDL approach. Heliyon, 9(4), e15332. https://doi.org/10.1016/j.heliyon.2023.e15332
Baek, H. (2024). A CNN-LSTM stock prediction model based on genetic algorithm optimization. Asia-Pacific Financial Markets, 31(2), 205–220. https://doi.org/10.1007/s10690-023-09412-z
Balcilar, M., Ozdemir, Z. A., & Ozdemir, H. (2021). Dynamic return and volatility spillovers among S&P 500, crude oil, and gold. International Journal of Finance & Economics, 26(1), 153–170.
Bhandari, H. N., Rimal, B., Pokhrel, N. R., Rimal, R., Dahal, K. R., & Khatri, R. K. C. (2022). Predicting stock market index using LSTM. Machine Learning with Applications, 9, Article 100320. https://doi.org/10.1016/j.mlwa.2022.100320
Botunac, I., Bosna, J., & Matetić, M. (2024). Optimization of traditional stock market strategies using the LSTM hybrid approach. Information, 15(3). https://doi.org/10.3390/info15030136
Chen, L., Wu, T., Wang, Z., Lin, X., & Cai, Y. (2023). A novel hybrid BPNN model based on adaptive evolutionary Artificial Bee Colony Algorithm for water quality index prediction. Ecological Indicators, 146, Article 109882. https://doi.org/10.1016/j.ecolind.2023.109882
Chen, Y., Zhao, P., Zhang, Z., Bai, J., & Guo, Y. (2022). A stock price forecasting model integrating complementary ensemble empirical mode decomposition and independent component analysis. International Journal of Computational Intelligence Systems, 15(1). https://doi.org/10.1007/s44196-022-00140-2
Deng, C., Huang, Y., Hasan, N., & Bao, Y. (2022). Multi-step-ahead stock price index forecasting using long short-term memory model with multivariate empirical mode decomposition. Information Sciences, 607, 297–321.
Gao, J., Li, H., & Lu, Z. (2023). Impact of COVID-19 on investor sentiment in China's stock markets. Heliyon, 9(10), e20801. https://doi.org/10.1016/j.heliyon.2023.e20801
Hani'ah, M., Abdullah, M. Z., Sabilla, W. I., Akbar, S., & Shaara, D. R. (2023). Google Trends and technical indicator based machine learning for stock market prediction. MATRIK: Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer, 22(2), 271–284. doi: 10.30812/Matrik.V22i2.
Jia, W., Liuyang, H. E., & Xu, W. (2023). Multi-scale dynamic hedging of CSI 300 Index futures based on EMD-DCC-GARCH. Operations Research and Management Science, 32(9), 200.
Jiang, H., Chen, Y., Jiang, H., Ni, Y., & Su, H. (2023). A granular sigmoid extreme learning machine and its application in a weather forecast. Applied Soft Computing, 147, Article 110799. https://doi.org/10.1016/j.asoc.2023.110799
Jin, L., Yuan, X., Long, J., Li, X., & Lian, F. (2022). Price discovery in the CSI 300 Index derivatives markets. Journal of Futures Markets, 42(7), 1352–1368.
Kumar, R. (2024). Recurrent context layered radial basis function neural network for the identification of nonlinear dynamical systems. Neurocomputing, 580, Article 127524. https://doi.org/10.1016/j.neucom.2024.127524


Li, H., Liu, T., Wu, X., & Chen, Q. (2020). An optimized VMD method and its applications in bearing fault diagnosis. Measurement, 166, Article 108185. https://doi.org/10.1016/j.measurement.2020.108185
Liu, R., Yang, Z., Su, J., & Cao, Y. (2024). A hybrid framework for evaluating financial market price: An analysis of the Hang Seng Index case study. International Journal of Advanced Computer Science & Applications, 15(6).
Liu, Y., Sayed, B. T., Sivaraman, R., Alshahrani, S. M., Venkatesan, K., Thajudeen, K. Y., Al-Bahrani, M., Hadrawi, S. K., & Yasin, G. (2023). Novel and robust machine learning model to optimize biodiesel production from algal oil using CaO and CaO/Al2O3 as catalyst: Sustainable green energy. Environmental Technology & Innovation, 30, Article 103018. https://doi.org/10.1016/j.eti.2023.103018
Ma, D., Yuan, D., Huang, M., & Dong, L. (2024). VGC-GAN: A multi-graph convolution adversarial network for stock price prediction. Expert Systems with Applications, 236, Article 121204. https://doi.org/10.1016/j.eswa.2023.121204
Mintarya, L. N., Halim, J. N. M., Angie, C., Achmad, S., & Kurniawan, A. (2023). Machine learning approaches in stock market prediction: A systematic literature review. Procedia Computer Science, 216, 96–102. https://doi.org/10.1016/j.procs.2022.12.115
Moreno, S. R., Seman, L. O., Stefenon, S. F., dos Santos Coelho, L., & Mariani, V. C. (2024). Enhancing wind speed forecasting through synergy of machine learning, singular spectral analysis, and variational mode decomposition. Energy, 292, Article 130493. https://doi.org/10.1016/j.energy.2024.130493
Nti, I. K., Adekoya, A. F., & Weyori, B. A. (2020). A systematic review of fundamental and technical analysis of stock market predictions. Artificial Intelligence Review, 53(4), 3007–3057.
Oanh, T. T. K. (2022). The impact of COVID-19 vaccination on stock market: Is there any difference between developed and developing countries? Heliyon, 8(9), e10718. https://doi.org/10.1016/j.heliyon.2022.e10718
Olayungbo, D. O., Zhuparova, A., Al-Faryan, M. A. S., & Ojo, M. S. (2024). Global oil price and stock markets in oil exporting and European countries: Evidence during the Covid-19 and the Russia-Ukraine war. Research in Globalization, 8, Article 100199. https://doi.org/10.1016/j.resglo.2024.100199
Pagliaro, A. (2023). Forecasting significant stock market price changes using machine learning: Extra Trees classifier leads. 1–23.
Pangestu, R. A., Vitianingsih, A. V., Kacung, S., Maukar, A. L., & Noertjahyana, A. (2024). Comparative analysis of Support Vector Regression and Linear Regression models to predict Apple Inc. share prices. Indonesian Journal of Artificial Intelligence and Data Mining, 7(1), 148–156.
Rehman, N., & Mandic, D. P. (2010). Multivariate empirical mode decomposition. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(2117), 1291–1302.
Song, C., Chen, X., Xia, W., Ding, X., & Xu, C. (2022). Application of a novel signal decomposition prediction model in minute sea level prediction. Ocean Engineering, 260, Article 111961. https://doi.org/10.1016/j.oceaneng.2022.111961
Tao, Z., Wu, W., & Wang, J. (2024). Series decomposition Transformer with period-correlation for stock market index prediction. Expert Systems with Applications, 237, Article 121424.
Tiwari, M. K., & Chatterjee, C. (2010). Development of an accurate and reliable hourly flood forecasting model using wavelet–bootstrap–ANN (WBANN) hybrid approach. Journal of Hydrology, 394(3–4), 458–470.
Upadhyay, N. K., Singh, V., Singh, S., & Khanna, P. (2023). Enhancing stock market predictability: A comparative analysis of RNN and LSTM models for retail investors. Journal of Management and Service Science (JMSS), 3(1), 1–9.
Van Houdt, G., Mosquera, C., & Nápoles, G. (2020). A review on the long short-term memory model. Artificial Intelligence Review, 53, 5929–5955.
Wang, S., Jia, H., Abualigah, L., Liu, Q., & Zheng, R. (2021). An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes, 9(9). https://doi.org/10.3390/pr9091551
Wang, Z., Bouri, E., Ferreira, P., Shahzad, S. J. H., & Ferrer, R. (2022). A grey-based correlation with multi-scale analysis: S&P 500 VIX and individual VIXs of large US company stocks. Finance Research Letters, 48, Article 102872.
Xia, L., Liu, X., & Wang, L. (2021). Forecasting framework using hybrid modeling and support vector regression. Journal of Physics: Conference Series, 1746(1), 012014. https://doi.org/10.1088/1742-6596/1746/1/012014
Yao, Y., Zhang, Z., & Zhao, Y. (2023). Stock index forecasting based on multivariate empirical mode decomposition and temporal convolutional networks. Applied Soft Computing, 142, Article 110356. https://doi.org/10.1016/j.asoc.2023.110356
Yiming, L. U. (2024). Review and analysis of financial market movements: Google stock case study. International Journal of Advanced Computer Science & Applications, 15(4).
Zhang, Q., Qin, C., Zhang, Y., Bao, F., Zhang, C., & Liu, P. (2022). Transformer-based attention network for stock movement prediction. Expert Systems with Applications, 202, Article 117239. https://doi.org/10.1016/j.eswa.2022.117239
