
Research Article

Published: 2024-10-24
https://doi.org/10.20935/AcadEnergy7381

Short-term power consumption forecasting using neural networks with first- and second-order differencing
Meftah Elsaraiti1,*

Academic Editors: Marcelo Godoy Simões, Amjad Anvari-Moghaddam

Abstract
Electricity consumption forecasting is critical for efficient energy management and planning. Traditional time series models, such as
ARIMA (AutoRegressive Integrated Moving Average), have been widely used due to their simplicity and interpretability. However, they
often struggle with capturing the nonlinearity and complexity inherent in real-world data, especially in the presence of high seasonal
variability. Recent advancements in machine learning, particularly long short-term memory (LSTM) networks, have addressed some
of these limitations by leveraging neural network architectures capable of learning complex temporal dependencies. Nevertheless, both
ARIMA and LSTM models can fall short in certain contexts, especially when dealing with abrupt changes and seasonal patterns. Recent
research has focused on enhancing model sensitivity to these elements by incorporating first- and second-order variations, significantly
improving predictive accuracy.

Keywords: neural networks, short-term forecasting, power consumption, time series

Citation: Elsaraiti M. Short-term power consumption forecasting using neural networks with first- and second-order differencing.
Academia Green Energy 2024;1. https://doi.org/10.20935/AcadEnergy7381

1. Introduction
Accurately predicting electricity consumption is difficult due to complex statistical features, such as seasonality, trends, sudden changes, and a gradual decline in the autocorrelation function [1]. Traditional time series models, such as ARIMA (AutoRegressive Integrated Moving Average), have been widely used due to their simplicity and interpretability. However, they often struggle with capturing the nonlinearity and complexity inherent in real-world data, especially in the presence of high seasonal variability. Recent advancements in machine learning, particularly long short-term memory (LSTM) networks, have addressed some of these limitations by leveraging neural network architectures capable of learning complex temporal dependencies. Nevertheless, both ARIMA and LSTM models can fall short in certain contexts, especially when dealing with abrupt changes and seasonal patterns.

The accuracy of ARIMA forecasts depends on several critical factors, including the quality and completeness of the historical data, the duration of the forecasting horizon, and the judicious selection of parameters for the ARIMA model. It is imperative to conduct a thorough analysis of the historical data, ensuring its relevance and integrity, and meticulously fine-tune the parameters of the ARIMA model to achieve precise forecasts [2]. Many studies aim to enhance forecasting methods to better capture various patterns in time series data. Advances in computing hardware now allow for solving complex equations with large datasets, such as those used in neural networks. However, time series methods like ARIMA can also provide effective approximations with minimal computational resources [3]. The accuracy of electrical load predictions is highly influenced by the noise level in the observed signal. Before making predictions, it is crucial to carefully examine the time series for noise. It has been shown that applying preliminary filtration to the time series enhances the effectiveness of the classic Box–Jenkins approach [4].

ARIMA and artificial neural networks (ANNs) were used as forecasting models to predict day-ahead electric load for a power utility's dataset [5]. Unlike previous studies that used difference error estimation for verification, this study demonstrates the advantages of the ANN methodology through analysis of variance [6]. Multivariate techniques and time series analysis are often suggested for forecasting electricity consumption, but they require a substantial amount of historical data to achieve accurate predictions [7]. Various factors, including climate change, influence the energy demands of power grids at local, national, and global levels. Accurate estimation of energy demand and consumption requires effective analysis of multivariable data. Studies suggest that LSTM and convolutional neural network (CNN) models can effectively model electricity demand [8]. A deep neural network, specifically an LSTM network, is proposed for short-term electricity consumption forecasting due to its effectiveness in handling sequential data like time series [9]. The literature indicates that prediction

1Division of Engineering, Higher Institute for Sciences and Technology, Misurata, Libya.
*email: [email protected]

ACADEMIA GREEN ENERGY 2024, 1 1 of 12


https://www.academia.edu/journals/academia-green-energy/about https://doi.org/10.20935/AcadEnergy7381

techniques largely rely on forecasting future data by learning from past data. Both traditional and artificial intelligence (AI)-based models are commonly used for energy prediction [10].

For short-term forecasts, recurrent neural network (RNN) and LSTM models performed similarly to ARIMA and outperformed all other models for medium- and long-term forecasts [11]. In multivariate time series analysis, an LSTM model is utilized to predict energy consumption based on historical data. The process starts with data merging and preprocessing to prepare it for the LSTM model. Various batch sizes and two optimizers are then evaluated and compared [12]. In a comparison of support vector regression (SVR), LSTM, gated recurrent unit (GRU), CNN-LSTM, and CNN-GRU models for predicting energy consumption in smart residential homes, empirical results showed that as the amount of data increased, the performance of the machine learning model SVR significantly declined, while deep learning models maintained better performance [13].

The growing number of smart and Internet of Things devices that gather data both inside and outside buildings plays a crucial role in enhancing the accuracy and ease of predicting energy consumption. By utilizing this data, it is valuable to explore whether advanced AI techniques, such as neural networks and machine learning, can effectively forecast energy usage and, potentially, indoor conditions [14]. Deep learning techniques effectively handle complex nonlinear features through multiple layers of processing. Two common algorithms for time series prediction are the RNN and the backpropagation neural network (BPNN) [15]. A variety of machine learning models, such as linear regression, k-nearest neighbors, XGBoost, random forests, and ANNs, were trained and tested for predicting energy consumption. The models were based on a full year of hourly energy usage data, which had been preprocessed to handle outliers and missing values [16]. The effectiveness of machine learning algorithms was analyzed in predicting electricity consumption. These algorithms were also evaluated for their role in load management, demand response, and their ability to select important variables from high-dimensional datasets. The analysis highlighted the accuracy of these models in predicting electricity market pricing [17]. An advanced machine learning model was developed to predict household electricity consumption. This model utilizes a multidimensional time series dataset that includes energy usage patterns, customer characteristics, and weather data [18].

To enhance model sensitivity to seasonal changes, this study explored the use of first- and second-order differences. First-order differences highlight changes between consecutive observations, making trends and shifts more apparent. Second-order differences, which involve the differences of the first differences, further accentuate these variations and can help in detecting acceleration or deceleration in trends. These techniques have been found to improve the model's ability to capture abrupt changes and seasonal effects, providing a richer representation of the underlying data dynamics. To address the complexity and nonlinearity of real-world electricity consumption data, traditional models like ARIMA and more recent developments like LSTM have been employed. However, both have limitations. ARIMA, with its reliance on linearity and stationarity assumptions, often falls short in capturing intricate patterns in the data. LSTM, while better at handling nonlinearity and sequential dependencies, can still struggle with very complex relationships and high levels of noise. To improve the accuracy of electricity consumption modeling, in this paper, a method for calculating first- and second-order differences is studied, supplemented by additional layers of differencing. This process enhances the model's ability to capture sudden changes and trends in the data. Moreover, integrating neural network analysis, particularly deep learning architectures, provides a more robust framework. These neural networks can learn and model complex, nonlinear interactions within the data, making them particularly suited for predicting consumption patterns influenced by numerous and diverse factors. The combination of differenced series and deep learning models offers a more powerful and flexible toolset for handling the inherent complexities in electricity consumption data, thereby improving prediction accuracy and reliability. This integrated approach allows for the extraction of more nuanced features and relationships, facilitating more accurate forecasting and deeper insights into the factors driving consumption trends.

The paper is organized into different sections, starting with a review of previous works on the topic of electricity load forecasting. Section 3 discusses techniques used in time series forecasting, including the proposed method for building the neural network model. Section 4 provides information on the data and analysis methods used in the study. In Section 5, the results of the proposed neural network model are presented and compared to the results of the ARIMA and LSTM models. Finally, in Section 6, the paper concludes with a discussion of future recommendations for improving the accuracy of the proposed method and its potential applications in energy management and planning.

2. Previous works

Forecasting studies, a subset of data mining, involve utilizing existing data to predict trends and behavioral patterns that could be applicable to unforeseen events. These models are constructed by identifying correlations among variables from historical data, with the aim of leveraging these patterns to anticipate probable outcomes in forthcoming scenarios. The efficacy of forecasting models heavily relies on the quality of the underlying data. Incomplete, erroneous, or biased data can compromise the accuracy of predictions. Thus, meticulous data selection and preprocessing are imperative prior to model development [19]. In recent years, there has been a growing interest in leveraging time series analysis to predict future data, notably in short-term electricity demand forecasting. Various methodologies have been employed for this task, encompassing AI and statistical techniques. AI methods, notably neural networks and deep learning algorithms, have garnered significant attention due to their adeptness in managing intricate and nonlinear relationships among variables [20]. The accuracy of ARIMA forecasts depends on several critical factors, including the quality and completeness of the historical data, the duration of the forecasting horizon, and the judicious selection of parameters for the ARIMA model. It is imperative to conduct a thorough analysis of the historical data, ensuring its relevance and integrity, and meticulously fine-tune the parameters of the ARIMA model to achieve precise forecasts [2].

In forecasting energy consumption, various techniques were employed, such as ARIMA, seasonal autoregressive integrated moving average (SARIMA), and LSTM. The findings indicate that LSTM outperformed ARIMA and SARIMA in predicting energy consumption on a daily basis [21]. Combining conventional time series methodologies with ARIMA models can offer a feasible approach for demand forecasting [22]. Previous studies have


extensively explored the prediction of power loads for the subsequent day. To address this challenge, several methodologies have been proposed. Among them, time series analysis stands out [23–25]. ANNs exhibit remarkable adaptability, enabling them to discern and model intricate nonlinear relationships within time series data, rendering them invaluable for forecasting purposes. Conversely, traditional parametric models, such as the Box–Jenkins and ARIMA models, presume linear processes underlying time series data, potentially limiting their ability to capture complex nonlinear patterns [26]. Furthermore, advancements in ANN theory have significantly contributed to the advancement of power consumption prediction. Researchers have delved into various aspects, including architectures, activation functions, and training algorithms, aiming to enhance the performance of ANNs in load forecasting [27, 28].

In numerous studies, ANNs have been employed to predict hourly demand series in various regions and countries, such as Rio de Janeiro, England, and Wales. These predictions were then compared with the performance of statistical models like ARIMA, which are valued for their simplicity and ease of implementation and have demonstrated effectiveness across diverse scenarios [29]. To forecast loads using ANNs, a typical approach involves utilizing the preceding 48 half-hourly electrical load data points as input variables to train the ANN model. Subsequently, the trained model is utilized to forecast upcoming loads for each half-hour interval [30]. Additionally, ANNs have been effectively utilized to forecast short-term load variations in photovoltaic (PV) systems, with consideration given to various levels of spatial aggregation. Notably, the findings underscore the utility of time series forecasting in enhancing grid operation by optimizing energy flow [31]. Using the backpropagation method, one study forecasts the daily energy usage and monthly energy consumption for a mining company [32]. Previous research on short-term electrical load forecasting indicates that ANNs outperform time series models in long-term projections [33]. Furthermore, hybrid ANN models have been explored and implemented in short-term electricity prediction to address limitations encountered with traditional statistical methods like linear regression and ARIMA when applied to nonlinear time series data. These hybrid models leverage the strengths of both ANNs and conventional methods to enhance accuracy and performance in time series forecasting [34]. Seasonality is explicitly addressed as a category within time series forecasting models, particularly leveraging the univariate LSTM neural network [35]. LSTM, among the latest and most significant deep learning methodologies, has demonstrated effectiveness in electrical load prediction across multiple studies employing ANNs [36]. One study compared the performance of an ANN methodology against the traditional autoregressive moving average (ARMA) time series method in estimating total electrical load within a short-term window of one day. The findings revealed that the ANN outperformed ARMA, yielding superior results [37]. For short-term forecasting, the seasonal autoregressive integrated moving average (SARIMA) technique has been compared with ANNs; the ANN model employed a multilayer perceptron trained with the Levenberg–Marquardt algorithm [38]. Additionally, to predict monthly electricity output, an LSTM deep learning approach was compared with two traditional machine learning methods: support vector machines and multilayer perceptrons. The results demonstrated the superior performance of the LSTM model [39]. For short-term load forecasting, a hybrid model combining ARIMA and ANN approaches was developed. Comparative analysis against the exclusive utilization of ARIMA revealed the superior performance of the hybrid model [40].

3. Methods

3.1. Classical models

Classical models like the moving average (MA) model, autoregressive (AR) model, and ARMA are widely employed for time series forecasting. These models prove particularly effective for analyzing stationary time series data characterized by consistent mean and variance over time. However, when confronted with nonstationary time series data exhibiting trends or seasonality, the ARMA model alone falls short. In such scenarios, the ARIMA or SARIMA model comes into play. The ARIMA(p, d, q) model incorporates a differencing step of order d to render the time series stationary, allowing subsequent modeling with an ARMA(p, q) framework. Meanwhile, the SARIMA(p, d, q)(P, D, Q)s model introduces a seasonal component to capture periodic variations within the time series, in addition to the differencing step and the ARMA(p, q) model.

3.1.1. Pure autoregressive model (AR)

A pure AR model of order p, denoted as AR(p), can be represented mathematically as follows:

y_t = β + α_1 y_{t−1} + α_2 y_{t−2} + … + α_p y_{t−p} + ε_t    (1)

where β is a constant, y_t is the value of the time series at time t, y_{t−1}, y_{t−2}, …, y_{t−p} are the lagged values of y_t (the previous p values), and α_1, α_2, …, α_p are the coefficients of the lagged values of the time series estimated by the model.

3.1.2. Pure moving average model (MA)

A pure MA model of order q, denoted as MA(q), can be represented mathematically as follows:

y_t = μ + θ_1 ε_{t−1} + θ_2 ε_{t−2} + … + θ_q ε_{t−q} + ε_t    (2)

where y_t is the value of the time series at time t, μ is the mean of the series, θ_1, θ_2, …, θ_q are the parameters of the model representing the weights of the past errors, and ε_t is a white noise error term at time t.

3.1.3. Autoregressive integrated moving average model (ARIMA)

An ARIMA model is a commonly used time series forecasting model that combines AR and MA components. To make the time series stationary (i.e., to eliminate trends and seasonality), the model involves differencing the series at least once. The "d" parameter in the ARIMA model denotes the number of times the series is differenced. The equation for an ARIMA(p, d, q) model is

(1 − φ_1 L − φ_2 L² − … − φ_p L^p)(1 − L)^d x_t = (1 − θ_1 L − θ_2 L² − … − θ_q L^q) ε_t    (3)

where x_t is the value of the time series at time t, L is the lag operator, φ_1, φ_2, …, φ_p are the AR parameters, θ_1, θ_2, …, θ_q are the MA parameters, d is the degree of differencing, and ε_t is a white noise error term at time t.

The ARIMA model is commonly estimated using maximum likelihood estimation or similar techniques. These estimated parameters enable forecasting for future periods, providing valuable insights into potential trends or patterns. Additionally, incorporating exogenous variables or seasonality components can enhance the predictive capabilities of the model, offering a more comprehensive understanding of the data dynamics.

3.1.4. Order of differencing

If the series is nonstationary, has a trend, and is seasonal, it must be converted to a stationary form using the finite difference approach, not only for the present values but also for the seasonal components. The order of differencing d and the order of the seasonal differencing component D are regarded as settled when the initially nonstationary series becomes stationary after seasonal differencing of order D, followed by differencing of order d. The minimum differencing required to obtain a near-stationary series that roams around a defined mean is the correct order of differencing, and this is confirmed when the autocorrelation function (ACF) plot quickly approaches zero. If the autocorrelations are positive for many lags (10 or more), the series requires further differencing. Conversely, if the lag-1 autocorrelation is too negative, the series is likely over-differenced. Mathematically, the differencing operation can be represented as follows:

Δ^d x_t = (1 − B)^d x_t,  d = 0, 1, 2, …    (4)

where B is the backshift operator, which shifts the time series backward by one period, and Δ is the differencing operator.

3.2. Neural network time series model

Traditional forecasting methods often struggle to accurately capture underlying patterns and behaviors in data, particularly when dealing with randomness and periodicity. To address these challenges, researchers and practitioners continually explore innovative techniques and strategies to enhance prediction accuracy and consistency [41]. Traditional forecasting models often rely on the assumption of a linear relationship between past and future values within a time series. These models estimate coefficients based on historical data, assuming that future behavior will follow similar patterns. However, real-world data often exhibit nonlinear relationships, which traditional models may fail to capture effectively. In such cases, neural networks offer a powerful alternative. By leveraging their capacity to approximate complex nonlinear functions, neural networks can better capture the intricate dynamics within the data [42, 43]. A neural network model is implemented to create a predictive model for a time series, such as a nonstationary series with trends and seasonality, as follows:

x_t = f(x_{t−1}, x_{t−2}, …, x_{t−n}, x_{t−s−1}, x_{t−s−2}, …, x_{t−s−n}, …, x_{t−ms−1}, x_{t−ms−2}, …, x_{t−ms−n}, u_t)    (5)

The method for reducing the time series to a stationary form, as described in the scientific literature, is used to determine the values of the n, s, and m dependencies.

3.2.1. Forming a training sample from the original time series values

3.2.1.1. For a model of the form

x_t = f(x_{t−1}, x_{t−2}, …, x_{t−n})    (6)

the sample is formed as follows:

(x_{t−n}, …, x_{t−2}, x_{t−1}, x_t),  t = n + 1, …, L    (7)

3.2.1.2. For a model of the form

x_t = f(x_{t−1}, x_{t−2}, …, x_{t−n}, x_{t−s−1}, x_{t−s−2}, …, x_{t−s−n}, …, x_{t−ms−1}, x_{t−ms−2}, …, x_{t−ms−n}, u_t)    (8)

the sample is formed as follows:

(x_{t−ms−n}, …, x_{t−ms−2}, x_{t−ms−1}, …, x_{t−s−n}, …, x_{t−s−2}, x_{t−s−1}, …, x_{t−n}, …, x_{t−2}, x_{t−1}, u_t, x_t),  t = ms + n + 1, …, L    (9)

Here, the parameter L is equal to the maximum amount of data (rows) in the original sample.

3.3. Long short-term memory

RNNs and LSTMs stand out as two of the most widely used neural network architectures, particularly adept at handling sequential data like time series. Their key strength lies in their capacity to retain a form of "memory" from prior inputs through feedback loops. This enables them to proficiently capture intricate dependencies between past and future values within a time series, even when these relationships are nonlinear and intricate [44]. LSTMs emerge as a potent tool for handling sequential data, showcasing notable advantages over traditional RNNs and other variants of RNNs in terms of performance [45]. RNNs represent a pivotal architecture within neural networks, characterized by a directed sequence of connections between elements. This structural design empowers RNNs to adeptly process sequences of events in time or sequential spatial series, rendering them invaluable for handling sequential data types, such as text or time series data. An evolution within this realm is the incorporation of LSTM cells into RNNs, engineered specifically to mitigate the challenge of handling long-term dependencies inherent in sequential data. The LSTM cell is equipped with a suite of gating mechanisms that enable it to selectively update its memory cell, thereby circumventing issues like vanishing or exploding gradients during training [46, 47]. Figure 1 shows the architecture of a simple LSTM network for regression.

Figure 1 • Long short-term memory network architecture.

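To make the gating mechanisms of Section 3.3 concrete, here is a single LSTM cell step in pure Python with scalar weights. This is a pedagogical sketch only; real LSTM layers use learned weight matrices over vector inputs, and the toy weight values below are assumptions, not values from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One LSTM update with forget (f), input (i), and output (o) gates.
    w is a dict of scalar weights; real cells use matrices over [x, h_prev]."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g   # memory cell: selectively kept past + new info
    h = o * math.tanh(c)     # hidden state exposed to the next time step
    return h, c

# Toy weights (all 0.5) and one step on input x = 1.0 from a zero state
w = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = lstm_cell_step(1.0, 0.0, 0.0, w)
```

Because the cell state is updated additively (f · c_prev + i · g) rather than by repeated multiplication, gradients flowing back through time are far less prone to vanishing, which is the property the section describes.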

3.4. Evaluation criteria

Three evaluation criteria are utilized to examine the correctness of predicted results in various contexts, particularly in regression and time series forecasting models. The mean absolute error (MAE) is a widely used metric for assessing the performance of regression models. It quantifies the average absolute difference between the predicted and actual values, providing a straightforward measure of prediction accuracy. The root mean square error (RMSE), like MAE, is commonly employed in evaluating regression models and time series forecasts. It calculates the square root of the average squared difference between predicted and actual values, thus giving more weight to large errors. The mean absolute percentage error (MAPE) is another valuable metric for evaluating the accuracy of time series forecasts. It expresses the average percentage difference between predicted and actual values, offering insights into the relative accuracy of forecasts with respect to the scale of actual values. However, caution is warranted when dealing with small or zero values, as MAPE can yield infinite or undefined results under such circumstances. While each metric provides valuable insights into model performance, the choice of evaluation criteria should align with the specific requirements and objectives of the analysis. Utilizing multiple metrics is often beneficial for obtaining a comprehensive understanding of model performance across various dimensions [48]. The following are the formulas for calculating the three evaluation criteria:

MAE = (1/n) Σ |y_i − y_p|    (10)

RMSE = √[(1/n) Σ (y_i − y_p)²]    (11)

MAPE = (1/n) Σ |(y_i − y_p) / y_i| × 100%    (12)

where n refers to the total number of test sets, y_i refers to the actual load value, and y_p refers to the predicted load value.

4. Data and analysis

To implement the proposed method, an algorithm was developed and executed using MATLAB R2019b. This algorithm facilitated the creation of a neural network model specifically designed to predict electricity consumption. The dataset used in this study consists of hourly total net load data (measured in megawatts) for Nova Scotia, Canada. The data, generously provided by Nova Scotia Power, is available through their dataset link, referred to as the "Nova Scotia Power Dataset". The dataset covers the period from January 1, 2016, to December 12, 2017, and includes 17,544 entries. Each entry comprises information on the date, time, and corresponding load values. The analysis performed on this dataset involved decomposing the time series into its fundamental components: trend, seasonality, and random fluctuations. Figure 2 specifically showcases a portion of this decomposition for the month of December 2017, highlighting the various components of the series.
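The three criteria of equations (10)–(12) can be computed directly; the following is an illustrative Python sketch (the study itself used MATLAB):

```python
def mae(actual, predicted):
    """Mean absolute error, equation (10)."""
    n = len(actual)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / n

def rmse(actual, predicted):
    """Root mean square error, equation (11)."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

def mape(actual, predicted):
    """Mean absolute percentage error, equation (12); assumes no zero actuals."""
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

# Toy check with three points: absolute errors are 10, 10, and 0
actual = [100.0, 200.0, 400.0]
predicted = [110.0, 190.0, 400.0]
# mae -> 20/3 ≈ 6.67; rmse -> sqrt(200/3) ≈ 8.16; mape -> (10% + 5% + 0%)/3 = 5%
```

Note how RMSE exceeds MAE on the same errors, reflecting the extra weight it gives to large deviations, and how MAPE would be undefined if any actual value were zero.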

Figure 2 • Original series (December 2017).

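The decomposition shown in Figure 2 was produced in MATLAB; as an illustrative stand-in, a classical additive decomposition with a 24-hour period can be sketched in pure Python. The simplified centered moving average below is an assumption for clarity, not the paper's exact procedure:

```python
def decompose_additive(series, period=24):
    """Split a series into trend (moving average over one cycle), seasonal
    (mean deviation per position in the cycle), and remainder components."""
    n = len(series)
    half = period // 2
    # Trend: mean over one full cycle (a plain 24-term window for simplicity;
    # textbook versions half-weight the endpoints for even periods).
    trend = [None] * n
    for t in range(half, n - half):
        trend[t] = sum(series[t - half:t + half]) / period
    # Seasonal: average detrended value at each position within the cycle
    buckets = [[] for _ in range(period)]
    for t in range(n):
        if trend[t] is not None:
            buckets[t % period].append(series[t] - trend[t])
    seasonal = [sum(b) / len(b) if b else 0.0 for b in buckets]
    # Remainder: whatever trend and seasonality do not explain
    remainder = [series[t] - trend[t] - seasonal[t % period]
                 if trend[t] is not None else None for t in range(n)]
    return trend, seasonal, remainder

# Toy hourly series: a flat level of 100 plus a repeating 24-hour pattern
series = [100 + (t % 24) for t in range(96)]
trend, seasonal, remainder = decompose_additive(series, 24)
# The trend is flat, the hour-of-day pattern lands in seasonal, remainder ≈ 0.
```

Averaging over exactly one cycle cancels the seasonal component, which is why the periodic part separates cleanly from the trend here.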
This method begins by computing the first- and second-order differences of the time series data. The first-order difference is obtained by subtracting each data point from the previous one, effectively capturing the rate of change in the series. The second-order difference is then calculated by taking the difference of consecutive first-order differences, highlighting the acceleration or deceleration in the rate of change. These computed differences are integrated into a neural network architecture by treating them as additional input features alongside the original time series data. The decomposition analysis, illustrated in Figure 2, reveals that the time series data contain a variable trend, indicating both upward and downward shifts over time. Additionally, the analysis identifies a clear seasonal pattern that recurs every 24 hours. This seasonal cycle is crucial for understanding daily variations in electricity consumption and is instrumental in improving the accuracy of the predictive model. Figures 3 and 4 illustrate the absolute values of the first and second differences of the original time series over one month, respectively. In time series analysis, the first difference refers to the difference between each observation and the previous one. By taking the absolute value of these differences, we obtain the magnitude of change between consecutive data points, regardless of whether the change was an increase or a decrease. Figure 3, therefore, depicts the magnitude of these changes for one month in the original series, highlighting how much each data point differs from the one before it. The second difference goes a step further by calculating the difference between each first difference and its preceding first difference. This essentially measures the change


in the rate of change, or how the first differences themselves vary over time. Taking the absolute value of the second difference gives us a sense of the magnitude of changes in the rate of change. Figure 4 showcases this for one month, indicating how the variability in the magnitude of changes evolves. Analyzing the first and second differences of a time series provides valuable insights into the dynamics of the data. A high magnitude in the first difference suggests significant changes between consecutive observations, indicating high volatility or variability in the data. Similarly, large magnitudes in the second difference reveal fluctuations in the rate of change, pointing to periods of acceleration or deceleration in the trend. By examining these differences, one can better understand the stability or instability of the time series, as well as the presence of underlying patterns or anomalies.

Figure 3 • First difference of original series (December 2017).

Figure 4 • Second difference of original series (December 2017).

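The first- and second-order differences plotted in Figures 3 and 4 follow directly from equation (4); a small Python sketch (the study itself used MATLAB, and the toy values below are made up for illustration):

```python
def diff(series):
    """First-order difference x_t - x_{t-1}, equation (4) with d = 1."""
    return [b - a for a, b in zip(series, series[1:])]

load = [520.0, 540.0, 535.0, 560.0, 600.0]  # toy hourly load values (MW)
d1 = diff(load)    # first difference: rate of change between hours
d2 = diff(d1)      # second difference: change in the rate of change
abs_d1 = [abs(v) for v in d1]  # magnitudes, as plotted in Figure 3
abs_d2 = [abs(v) for v in d2]  # magnitudes, as plotted in Figure 4
# d1 -> [20.0, -5.0, 25.0, 40.0]; d2 -> [-25.0, 30.0, 15.0]
```

Each pass of differencing shortens the series by one point, and applying `diff` twice is exactly the d = 2 case of equation (4): (1 − B)² x_t = x_t − 2 x_{t−1} + x_{t−2}.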
5. Results and discussions

The neural network model for predicting electrical load was developed using the backpropagation method. This approach involved training the model with a range of input variables derived from the original time series data. Key variables included in the input were the first- and second-order differences, which capture changes and rates of change in the data, as well as the date and time, which account for temporal patterns and trends. To ensure the model was robust and capable of accurately predicting electrical load, a variety of training samples was created. The neural network was trained with different configurations, specifically by altering the number of neurons in the hidden layer. The purpose of varying the hidden neurons was to explore how different network complexities affect the model's ability to learn and generalize from the data. The dataset was split into two parts for the training process: 70% of the data was used to train the model, while the remaining 30% was reserved for testing its predictive performance. This division allowed for a thorough evaluation of how well the model could generalize to unseen data. Several error metrics, including MAE, RMSE, and MAPE, were used to assess the accuracy and effectiveness of each network configuration. Table 1 summarizes the performance of the different neural network configurations, varying by the number of neurons in the hidden layer.
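A minimal sketch of this training setup, assuming scikit-learn; the feature matrix here is synthetic stand-in data (the study used differences and calendar features), and the hidden-layer sizes mirror the configurations compared in Table 1:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative stand-ins for diff1, diff2, and an hour-of-day feature.
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)

# 70% training / 30% testing, as in the study; shuffle=False keeps time order.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)

for n_hidden in (4, 8, 12, 16, 20):  # hidden-layer sizes tried in Table 1
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver="adam",
                         max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print(n_hidden, round(model.score(X_te, y_te), 3))
```

`MLPRegressor` trains a feed-forward network by backpropagation, matching the approach described, though the study's exact implementation and hyperparameters are not given.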


Table 1 • Performance criteria for the different model configurations

            Number of neurons in the hidden layer
            4         8         12        16        20
MAE         100.92    108.55    98.70     104.00    95.00
RMSE        120.39    129.13    116.95    126.72    111.76
MAPE (%)    5.906     6.34      5.81      6.09      5.62

The analysis of these results indicates that the optimal model configuration for predicting electrical load was achieved with 20 neurons in the hidden layer. This configuration produced the lowest MAE, RMSE, and MAPE, demonstrating its superior accuracy in forecasting electrical load compared to the other configurations tested. Figure 5 provides a visual representation of the variation in MAPE values across different numbers of hidden layer neurons. The plot shows that the MAPE decreases as the number of neurons increases, reaching its lowest point with 20 neurons. This suggests that a more complex model with 20 hidden neurons can better capture the underlying patterns in the data, leading to more precise predictions. The results from this analysis underscore the importance of selecting an appropriate number of hidden neurons to optimize model performance and ensure reliable load forecasts.

Figure 5 • Mean absolute percentage error plot concerning hidden layer neurons.
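The three error metrics reported in Table 1 have standard definitions, which can be written compactly as follows; the `y_true`/`y_pred` arrays are illustrative only:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes the load never touches zero."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 190.0, 420.0])
print(mae(y_true, y_pred))   # 13.33...
print(rmse(y_true, y_pred))  # 14.14...
print(mape(y_true, y_pred))  # 6.67 (%)
```

MAPE is scale-free, which is why it is convenient for comparing configurations, but it is undefined wherever the actual load is zero.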

Figure 6 depicts the neural network model that has been applied to predict electrical load, with its performance assessed by comparing actual observed values (continuous line) and predicted values (dashed line) during a testing phase. The model's predictions extend 24 hours ahead, specifically targeting December 31, 2017. The close alignment between the observed and predicted values throughout the day indicates a generally robust predictive capability, underscoring the model's efficacy in forecasting electrical load.

Figure 6 • The results of comparing the test values with predicted values of the time series generated by the neural network model.

The model tends to overestimate the load during the early morning hours. This may be due to the inherent variability in electrical consumption during these hours, possibly influenced by factors such as reduced industrial activity or household energy use patterns. The slight overestimation suggests the model might not fully capture these nuances, potentially due to limited training data for this specific period or the complexity of underlying patterns not entirely reflected in the model's features. Conversely, the model underestimates the load from mid-morning to late evening. This period typically includes peak hours for both residential and commercial energy consumption. The underestimation might indicate a need for a better representation of peak load drivers within the model, such as temperature fluctuations, special events, or socioeconomic factors that can influence energy demand. The model shows a notable improvement in accuracy during the last two hours of the day. This suggests that the neural network has a strong grasp of the daily load pattern's tapering off, likely due to more predictable and stable consumption patterns as activities wind down. The overall acceptable accuracy of the model, despite minor deviations, suggests its viability as a forecasting tool. The close tracking of the actual trend, particularly in the evening hours, indicates that the neural network can effectively learn and predict daily load patterns. This capability is crucial for various practical applications.

5.1. Comparison of the proposed model

In this particular case study involving a nonstationary time series with inherent seasonality, the time series was analyzed using two different modeling methods: a classical ARIMA model and a neural network model. The specific ARIMA model utilized is denoted as ARIMA (1, 1, 1)(1, 1, 1)24, which has been identified as the optimal configuration for daily load prediction. The comparative analysis between these two approaches, depicted in Figure 7, illustrates the strengths and limitations of each method. The ARIMA (1, 1, 1)(1, 1, 1)24 model is a traditional statistical approach often employed for time series forecasting. This model is particularly noted for its ability to handle autoregressive (AR) components, differencing (to achieve stationarity), and moving average (MA) components, along with seasonal adjustments. However, despite its capacity to follow the general trend of the electrical load data, the ARIMA model struggles with the intricacies of the time series. This limitation is partly due to the model's inherent assumptions, such as the requirement for stationarity and linearity in the data. When the underlying data exhibits nonlinear or nonstationary behavior, the ARIMA model's accuracy can diminish, particularly over longer forecast horizons. This degradation in performance is attributed to the model's reliance on past values, which can introduce noise and uncertainty, leading to less reliable predictions. In contrast, the neural network model, particularly a deep learning architecture, demonstrates a superior ability to capture complex patterns and variations in the data. Neural networks are not constrained by assumptions of stationarity or linearity, allowing them to better adapt to the inherent complexities and nonlinear relationships present in real-world time series data. This flexibility enables neural networks to make more robust and precise forecasts, even in challenging environments where traditional models might fail.

Figure 7 • A comparison between the neural network and seasonal autoregressive integrated moving average model.

Specifically, the results from the neural network were further compared with those from an LSTM regression network, an advanced form of RNN known for its effectiveness in capturing long-term dependencies in sequential data. The LSTM model was trained using specific parameters, including an initial learning rate of 0.01, to optimize the training process and minimize errors such as the RMSE and loss. The analysis clearly indicates the superior performance of neural networks over traditional ARIMA models in handling complex and nonstationary time series data. While ARIMA models are suitable for short-term predictions and simpler datasets, their limitations become apparent with more complex data structures. Neural networks, with their ability to learn and generalize from data, provide a more powerful tool for accurate and reliable forecasting in such scenarios. This comparison underscores the growing importance of advanced machine learning techniques in modern data analysis, especially in fields requiring nuanced understanding and prediction of complex patterns.

Figure 8 showcases the effect of the initial learning rate on the training process of an LSTM regression network. The chosen learning rate, set at 0.01, plays a pivotal role in determining the efficiency and effectiveness of the model's training. Hyperparameter tuning, particularly of the learning rate, is critical in neural network optimization as it influences the convergence speed and the stability of the learning process. A well-selected learning rate helps in efficiently navigating the loss landscape, minimizing the training error, and preventing issues such as overfitting or underfitting. The learning rate dictates the magnitude of updates to the model's weights with each iteration. A learning rate that is too high can cause the training process to overshoot minima, potentially leading to divergence, while a rate that is too low may result in an unnecessarily prolonged training process, getting stuck in local minima or taking excessive time to converge. The results in Figure 9 highlight the necessity of experimenting with various hyperparameter configurations to find the optimal balance that maximizes model performance and ensures smooth training progression.
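The learning-rate behavior described above can be seen even in one-dimensional gradient descent on L(w) = w²; the rates and the loss function here are illustrative, not the study's settings:

```python
def gradient_descent(lr: float, steps: int = 50, w: float = 1.0) -> float:
    """Minimize L(w) = w^2 with the update w <- w - lr * dL/dw."""
    for _ in range(steps):
        w -= lr * 2 * w  # gradient of w^2 is 2w
    return w

print(abs(gradient_descent(0.01)))  # small steps: steady but slow progress
print(abs(gradient_descent(0.5)))   # well-chosen rate: reaches the minimum
print(abs(gradient_descent(1.5)))   # overshoots each step: |w| grows, diverges
```

Each update multiplies `w` by (1 − 2·lr), so any rate above 1.0 flips the sign and grows the magnitude every step, which is exactly the divergence failure mode discussed.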

Figure 8 • Training process with a learning rate of 0.01 and a time step test of 24 hours.
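A comparable LSTM regression setup can be sketched in PyTorch. The 24-step input windows and the Adam initial learning rate of 0.01 mirror the description above, while the hidden size, batch, and data are illustrative assumptions, not the study's exact implementation:

```python
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, 24 time steps, 1 feature)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict the next hour from the last state

torch.manual_seed(0)
model = LSTMRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # initial learning rate
loss_fn = nn.MSELoss()

x = torch.randn(32, 24, 1)  # illustrative batch of 24-hour windows
y = torch.randn(32, 1)
for _ in range(5):          # a few updates; RMSE = sqrt(MSE) is what Figure 8 tracks
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))
```

Taking only the last time step's hidden state for the regression head is one common design for many-to-one forecasting; sequence-to-sequence variants are also possible.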

Figure 9 provides a comparative analysis of the hourly load predictions in Nova Scotia, juxtaposing the actual data against forecasts made by both the proposed neural network model and the LSTM model. The graph illustrates a striking difference in the accuracy and precision of the two models. The LSTM model exhibits a superior ability to closely follow the actual load data, demonstrating its advanced capability in capturing the temporal dependencies and complex patterns inherent in time series data. LSTM networks, a specialized type of RNN, are particularly well suited for sequential data due to their architecture, which includes memory cells that can retain information over extended time periods. This allows the LSTM to model long-term dependencies more effectively than standard neural networks. In contrast, the proposed neural network model, while capable, does not align as closely with the real data. This discrepancy suggests that the LSTM model's architecture provides a more nuanced and accurate representation of the data's temporal dynamics. The LSTM's ability to maintain a memory of previous inputs allows it to more accurately predict future values, particularly in the presence of seasonality and trend components, which are common in load forecasting.

Figure 9 • Real and forecasted load for 24 hours ahead using neural network and long short-term memory models.
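A visual comparison like the one in Figure 9 can be quantified by scoring each model's forecast against the actual series and ranking by error; the four-hour arrays below are illustrative numbers, not the Nova Scotia data:

```python
import math

actual = [520.0, 480.0, 505.0, 550.0]
forecasts = {                      # hypothetical forecasts per model
    "Neural network": [510.0, 470.0, 515.0, 560.0],
    "ARIMA":          [460.0, 430.0, 450.0, 500.0],
    "LSTM":           [518.0, 482.0, 507.0, 548.0],
}

def metrics(pred):
    """Return (MAE, RMSE, MAPE%) of a forecast against the actual series."""
    errs = [a - p for a, p in zip(actual, pred)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    mape = 100 * sum(abs(e) / a for e, a in zip(errs, actual)) / len(errs)
    return mae, rmse, mape

ranked = sorted(forecasts, key=lambda name: metrics(forecasts[name])[0])
print(ranked[0])  # lowest MAE first -> "LSTM"
```

With these toy numbers the ranking (LSTM, then the neural network, then ARIMA) happens to match the ordering reported in the text, but the magnitudes carry no relation to the study's results.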

The analysis underscores the importance of selecting appropriate hyperparameters, such as the learning rate, during the training of neural network models. Proper tuning can significantly impact model performance and the accuracy of predictions. Additionally, the comparison between the neural network and LSTM models highlights the superior performance of LSTM networks in handling complex time series data. The LSTM's structure, which effectively captures long-term dependencies, proves advantageous in scenarios requiring precise forecasting, as demonstrated by its closer alignment with actual hourly load data. This comparison reinforces the utility of advanced neural architectures like LSTM in achieving high accuracy in time series forecasting tasks.

5.2. Performance calculation using error metrics

The performance evaluation of the neural network, ARIMA, and LSTM models is conducted using 24-hour load data and forecasted load data. The comparison is quantified using three commonly used metrics: MAE, MAPE, and RMSE. These metrics provide a comprehensive view of the models' accuracy and reliability in forecasting. The results are summarized in Table 2, which outlines the performance of each model across the three metrics.

Table 2 • Comparison of mean absolute error, root mean square error, and mean absolute percentage error using different models

Name              MAE         MAPE (%)    RMSE
Neural network    95.00       5.62        111.76
ARIMA             1,017.72    60.79       1,018.33
LSTM              23.07       1.36        25.77

ARIMA, autoregressive integrated moving average; LSTM, long short-term memory; MAE, mean absolute error; MAPE, mean absolute percentage error; RMSE, root mean square error.

The comparative analysis reveals a clear hierarchy in model performance, with the LSTM model outperforming both the neural network and ARIMA models across all metrics. The ARIMA model, while traditionally effective for short-term forecasting, shows significant limitations in handling complex, long-term datasets. This is evident from its high RMSE, MAE, and MAPE values, which indicate substantial forecast errors. The neural network model, though more accurate than ARIMA, still falls short compared to the LSTM model, particularly in terms of RMSE and MAE. However, its MAPE value suggests that it can still provide reasonably accurate short-term forecasts, making it a viable option in scenarios where LSTM's computational requirements or complexity may not be feasible. The LSTM model stands out as the most effective for this task, leveraging its architecture's ability to capture long-term dependencies and complex patterns in time series data. Its low error metrics across all categories demonstrate its robustness and reliability in forecasting, making it the preferred choice for scenarios demanding high precision and accuracy in predictions. This analysis underscores the importance of selecting an appropriate model based on the specific characteristics of the data and the forecasting requirements, highlighting the advantages of advanced neural network architectures like LSTM in tackling complex time series problems.

6. Conclusions

The study provides a comprehensive examination of electrical load prediction using neural network models, with a particular emphasis on the backpropagation method. It systematically explores different configurations of input variables and the number of neurons in the hidden layer to develop an optimized model. Through extensive experimentation and evaluation, the research identifies that a model with a hidden layer of 20 neurons achieves the best performance. This configuration shows superior accuracy in predicting electrical load, as indicated by lower error metrics, such as MAE, RMSE, and MAPE. The performance analysis of the model reveals promising results, demonstrating the model's ability to closely follow the actual trends in electrical load. While there are minor deviations during certain periods, the overall accuracy of the model is satisfactory, suggesting its potential for practical load prediction applications. Moreover, the study compares the proposed neural network model with traditional ARIMA and advanced LSTM models. Although the ARIMA model shows some forecasting capability, its accuracy is significantly lower than that of the neural network, especially for long-term predictions. The LSTM model, however, outperforms both the proposed neural network and ARIMA models in terms of accuracy and tracking capability. Nonetheless, the neural network model still provides adequate performance for short-term load forecasting tasks. Overall, the research highlights the advantages of using advanced machine learning techniques, such as neural networks, for time series forecasting, particularly in complex data scenarios where traditional models like ARIMA may fall short. The study also emphasizes the critical role of hyperparameter tuning in achieving optimal performance in neural network models.

Funding

The author declares no financial support for the research, authorship, or publication of this article.

Author contributions

The author confirms sole responsibility for this work. The author approves of this work and takes responsibility for its integrity.

Conflict of interest

The author declares no conflict of interest.

Data availability statement

Data supporting these findings are available within the article, at https://doi.org/10.20935/AcadEnergy7381, or upon request.

Institutional review board statement

Not applicable.

Informed consent statement

Not applicable.

Additional information

Received: 2024-05-22
Accepted: 2024-10-05
Published: 2024-10-24


Academia Green Energy papers should be cited as Academia Green Energy 2024, ISSN 2998-3665, https://doi.org/10.20935/AcadEnergy7381. The journal's official abbreviation is Acad. Energy.

Publisher's note

Academia.edu Journals stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Copyright

© 2024 copyright by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).