0% found this document useful (0 votes)
2 views

Deep Learning

This paper presents a Bayesian optimized deep learning approach for accurately estimating the state of charge (SOC) of lithium-ion batteries used in electric vehicles. It highlights the importance of hyperparameter tuning and the selection of input parameters, demonstrating that the proposed method using BiLSTM with 70 hidden neurons achieves SOC estimations with less than 2% root mean square error. The study validates the model across various temperatures and emphasizes its reliability for battery management systems.

Uploaded by

vedhanAyaki
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
2 views

Deep Learning

This paper presents a Bayesian optimized deep learning approach for accurately estimating the state of charge (SOC) of lithium-ion batteries used in electric vehicles. It highlights the importance of hyperparameter tuning and the selection of input parameters, demonstrating that the proposed method using BiLSTM with 70 hidden neurons achieves SOC estimations with less than 2% root mean square error. The study validates the model across various temperatures and emphasizes its reliability for battery management systems.

Uploaded by

vedhanAyaki
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 20

Received 8 February 2024, accepted 17 March 2024, date of publication 21 March 2024, date of current version 27 March 2024.

Digital Object Identifier 10.1109/ACCESS.2024.3380188

A Bayesian Optimized Deep Learning Approach


for Accurate State of Charge Estimation
of Lithium Ion Batteries Used for
Electric Vehicle Application
SELVARAJ VEDHANAYAKI AND VAIRAVASUNDARAM INDRAGANDHI
Vellore Institute of Technology, Vellore 632014, India
Corresponding author: Vairavasundaram Indragandhi ([email protected])
This work was supported by the Royal Academy of Engineering, U.K., under Grant TSP-2325-5-IN_172.

ABSTRACT Battery technology used in Electric Vehicles has recently drawn numerous researchers’
attention. Monitoring of battery condition, especially the state of charge, is necessary to ensure the safe
and reliable operation of the battery. Even though researchers have proposed numerous SOC estimation
techniques, exploration is still required to find a suitable technique that can adapt versatile lithium-ion
battery chemistries. Deep learning (DL) is a well-known machine learning strategy that has been shown
to outperform many other approaches for SOC estimation in recent studies. However, choosing the right
hyperparameters and appropriate use of suitable input parameters is crucial to get the best performance out
of DL models. Currently, researchers use well-established heuristics approaches to choose hyperparameters
by manual tuning or using thorough search techniques like grid search and random search. This leads
the models to be inefficient and less accurate. This paper suggests a methodical, automated procedure for
choosing hyperparameters using a Bayesian optimisation algorithm. In addition to that, average voltage and
average current are used as the important input parameters along with battery parameters (current, voltage
and temperature) for accurate SOC prediction as they involve the past and present history of voltages and
load conditions, respectively. The proposed methods are validated and tested for varying hidden neuron
count with four different datasets involving different temperatures, namely, -10◦ C, 0◦ C, 10◦ C and 25◦ C.
The findings demonstrate that, for all three RNN types (LSTM, GRU and BiLSTM), the ideal configuration
yields SOC estimations with less than 2% root mean square and 5% maximum error. Among the three,
BiLSTM with 70 hidden neurons estimates SOC with reduced estimation error compared to other methods.
By utilizing the suggested approach, battery management systems that monitor the condition of batteries in
various environmental circumstances can become more reliable.

INDEX TERMS Electric vehicle, battery management system, state of charge, long short term memory,
gated recurrent unit, bilayer LSTM.

I. INTRODUCTION focused on 100% zero-emission vehicles met to develop prin-


Countries create energy-saving and emission-reduction tech- ciples to accomplish the Paris Agreement goals by 2040 [1].
nology to reduce carbon dioxide emissions and environ- In [2], the researcher states that car electrification and renew-
mental repercussions like climate change, sea level rise, able energy sources are promising solutions to the energy
greenhouse effect, and biodiversity loss. COP26 in Glasgow, crisis and 40% GHGE reduction. In 2021, EV sales reached
UK, addressed these energy crisis challenges. Government 6.75 million units, up 108% from 2020, since they minimize
leaders from several countries, business people, and groups car emissions and store renewable energy [3].
Current energy storage methods in transportation include
The associate editor coordinating the review of this manuscript and lithium-ion, nickel-cobalt, lead acid, and nickel-cadmium
approving it for publication was Giambattista Gruosso . batteries [4]. Lithium-ion batteries are preferred for their

2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License.
43308 For more information, see https://creativecommons.org/licenses/by/4.0/ VOLUME 12, 2024
S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

higher specific power, energy density, longevity, and lower


self-discharge rate [5]. Lithium-ion batteries have NMC,
NCA, LFP, LCO, LMO, and LTO chemical compositions.
Li-ion battery characteristics are compared in Figure 1.
The figure demonstrates that nickel cobalt aluminium oxide
(NCA) batteries have the best specific energy and power.
Reduced manufacturing cost is a key factor in Li-ion battery
adoption across industries. Although Li-ion batteries have
many benefits, they need a safe operating zone. Since Li-ion
batteries use a charge transfer reaction to store energy, regular
use of the battery causes problems with degradation. These
problems include the loss of active materials and lithium
inventory, the formation and breakdown of Solid Electrolyte
Interface film, and a deposit of metallic lithium in the anode.
Thus, exceeding lithium-ion battery packs’ tolerance will
damage them and make them dangerous [6]
A BMS is a software and hardware controller that improves
battery life and performance. Figure 2 depicts BMS schemat-
ically. Estimating the State of Charge, State of Health, cell FIGURE 1. Application Comparison of lithium-ion batteries preferred
balance, charge and discharge control, and thermal and power for EV.

flow management are essential battery management system


activities. SOC estimate is crucial among all the functions [7].
SOC is defined as a ratio of the battery’s remaining capac- feature information from input data. In order to automatically
ity to the rated capacity at a specific condition by the US extract the internal representation from the input signal and
Advanced Battery Consortium (USABC) [8]. estimate LiB SOC, a DNN-based end-to-end estimator is
Due to their unique diffusion method and complex elec- able to perform automatically. Recent years have seen the
trochemical reaction, Li-ion batteries’ rated capacity will not development of a multitude of DNN-based SOC estimate
match the manufacturer’s rating. The battery’s rated capacity approaches, including LSTM, GRU, and BiLSTM [15], [16],
will also change with age, temperature, and environmental [17], [18]. A major advantage of RNN-based SOC estimate
conditions. Along with manufacturing faults, potentiomet- methods over conventional approaches is: No requirement
ric, amperometric, and conductometric sensor limitations for operating-characteristic battery models. Self-learning its
affect SOC estimation [9]. Numerous methods, including the weight and bias eliminate the need for hand engineering and
Open Circuit Voltage method, the Ampere Hour method, parametrization.
model-based methods, filter-based methods, observer-based One of the major concerns of the data-driven method is the
methods, and data-based methods, have been presented for selection of model hyperparameters, namely, learning rate,
the purpose of providing an accurate assessment of voltage number of hidden units, hidden neurons, batch size, epochs,
over current (SOC). activation function and dropout rate. The inappropriate selec-
The estimation of SOC is critical in BMS. Because of tion of hyperparameters leads to a reduction in prediction
their recursive nature, many articles proposing SOC esti- accuracy [19]. Researchers employ a trial-and-error approach
mates use Kalman filter-based algorithms [10], [11], [12]. for hyperparameter selection. Training computation demands
The main disadvantage of Kalman filter-based approaches is make empirical hyperparameter selection for deep learning
that they necessitate precise battery modelling and param- models time-consuming and difficult. The search space for
eter identification. Because ML approaches do not impose DL hyperparameters is exponentially large, making trial-and-
chemical or electrical models, they provide an alternative to error evaluation challenging and time-consuming [20]. The
precise SOC prediction in light of the significant challenges model’s performance is also influenced by the input param-
associated with battery modelling [13]. Most of the ML eters used for training. Existing research has concentrated
methods use SVM and ANN. However, these methods have on computing the SOC; nevertheless, it is still necessary to
certain limitations that reduce SOC estimating performance. discover which input attributes are more important in the
The manual construction of characteristics from raw signal calculation of the SOC [21].
data at the input data level requires a lot of labour and Hence, in this paper, the Bayesian optimisation algorithm
skill. Despite their limited analytical capability and inability is introduced for hyperparameter tuning of RNN algorithms
to handle high-dimensional data, shallow learning architec- (LSTM, BiLSTM and GRU), thereby overcoming the draw-
tures are used at the model scale [14]. Through the use of back of the trial and error approach. The impact of variation
multi-layer nonlinear transformations, deep learning has the in the hidden neuron in the estimation accuracy is anal-
potential to construct deep neural networks (DNNs), which ysed for LSTM, BiLSTM and GRU. Along with the battery
possess the ability to hierarchically extract complicated input parameters (current, voltage and temperature), in this

VOLUME 12, 2024 43309


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 2. Functions of Battery Management System in Electrical Vehicle System.

FIGURE 3. Classification of data-driven SOC estimation methods [4].

proposed work, average voltage and average current are also regarding the present and past load connected to the battery.
considered as the input parameters for SOC estimation. Since The significant contribution of the paper is
the average voltage is the average of present and past volt- • Three deep learning algorithms (LSTM, GRU and
ages, it can provide more information about the previous SOC BiLSM) with varied numbers of hidden neurons
condition. Similarly, average current can provide information are examined, and their architectures and principles

43310 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

TABLE 1. Parameter comparison of the proposed method with existing literature.

FIGURE 4. LSTM Structure [17], [36].

FIGURE 6. Figure BiLSTM architecture [36].

of manually establishing the network parameters and


helps determine the best network parameters to increase
network performance.
• To verify the proposed model’s accuracy at various oper-
ating temperatures (−10◦ C, 0◦ C, 10◦ C and 25◦ C), the
FIGURE 5. GRU Structure [19], [26].
data set obtained from Hamilton’s McMaster Univer-
sity is used. The RMSE of the model obtained during
are described to elucidate their benefits for SOC training and testing is compared with each other and
estimation. found that BiLSTM has better performance compared
• Bayesian optimization algorithm-based hyperparameter to LSTM and GRU for the selected input parame-
tuning technique is proposed to compensate for the flaw ters (voltage, current, temperature, average temperature

VOLUME 12, 2024 43311


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 7. Flowchart of SOC estimation using DL algorithms.

FIGURE 8. RMSE and Loss function of LSTM-based network trained with 30-neuron.

and average current) under varying numbers of hidden nonlinear features of the battery, the need for battery mod-
neurons. elling increases the time and computing complexity [24].
In modern times, data-based SOC estimation techniques
have been highly preferred by researchers for accurate SOC
II. LITERATURE REVIEW prediction. The recent data-driven methods applied for the
Over the past decades, numerous SOC estimation tech- SOC estimation technique are shown in Figure 3. Since SOC
niques have been proposed. The coulomb counting approach estimation approaches based on deep learning (LSTM, GRU,
and the lookup table method are the strategies that are DNN, BiLSTM) can directly map sampled battery opera-
considered to be conventional techniques. However, both tional signals (e.g., current and voltage) to SOC and eliminate
methods have their limitations in serving as the better option the necessity of laborious battery modelling or feature engi-
for the SOC estimation in EVs [22], [23]. To overcome neering, researchers are carrying out intense research in this
these drawbacks, numerous model-based, observer-based and field [14].
filter-based approaches have been proposed. The Kalman In [16], the LSTM with 500 hidden units is proposed for
filter is one of the filter-based techniques that has become SOC estimation. The model was validated using a public
more important for determining the battery’s SOC and dataset and obtained reduced estimation error at varying oper-
SOH. Despite the fact that these methods demonstrate the ating conditions. LSTM combined with UKF was proposed

43312 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 9. RMSE and training time of LSTM network with 30, 50, 70 and 100 hidden neuron.

in [25], which reduces the noise and improves estimation the effect of auxiliary loads like heating and air conditioning
accuracy. In [26], the researcher proposed SOC estimation should have been taken into account. Numerous studies [21]
using GRU, which estimates with the reduced MAE of discovered that an electric vehicle’s energy consumption is
0.86%. GRU contains 1000 hidden units connected to an FC influenced by various factors, including traffic, road ele-
layer with 50 nodes. The researcher validated the proposed vation, auxiliary loads, wind direction and speed, ambient
model using two different public datasets. In this work, the temperature, and the starting battery’s level of charge. The
researcher utilized a trial-and-error approach to find the opti- past and present voltage also act as an important factor for
mal parameter. accurate SOC estimation. Therefore, for accurate calculation
Similarly, in [20], CNN-GRU-based SOC estimation was of SOC, the impact of all these elements can be considered.
proposed. In this work also trial and error approach is To fill this research gap, along with battery voltage, current
used to find the optimal hyperparameter. BiLSTM, BiGRU, and temperature, average voltage and average current are
and stacked LSTM-based SOC estimation have also been used as an input parameter for accurate SOC estimation in
proposed to estimate SOC accurately [27]. Many research this study. In the future, the study will be elaborated by
articles have also been proposed that focus on the devel- considering various environmental and road conditions as the
opment of algorithms for optimal hyperparameter selec- input parameter for estimation.
tion of RNN. The researchers have used the Ensemble
algorithm [18], Particle Swarm optimisation algorithm [28], III. OVERVIEW OF DEEP LEARNING ALGORITHM
genetic algorithm [29], Momentum-based optimizer [30] DL algorithms are highly preferred in various disciplines
and Nesterov optimizer [31] for the optimal hyperparameter of EV applications, namely in energy management, predic-
selection. However, the major limitation of these methods is tion of charging demand, estimation of SOH [12], vehicle
that the hyperparameter selection is based on selection heuris- detection [38], [39], [40], [41], cell balancing [5], thermal
tics. Hence, in the proposed work, the Bayesian optimisation management, and so on. In this section, three types of RNN,
technique is used for the optimal selection of hyperparam- namely LSTM, GRU and BiLSTM, applied for the estimation
eters that provide increased flexibility and a unified model of SOC of EV battery is discussed.
capable of predicting SOC more accurately under varying
ambient temperatures. A. LONG SHORT-TERM MEMORY
The estimation of SOC based on DL algorithms not only The existing RNN cannot handle lengthy input sequences due
depends on the optimal hyperparameter selection but also the to explosion problems and the gradient vanishing problem.
correlation of input parameters with the output SOC. From Therefore, an advanced gated RNN known as an LSTM is
Table 1, it is observed that most of the researchers have proposed to handle extended input sequences. Although the
considered only battery voltage, current and temperature as core modules or units of the LSTM and RNN networks are
the input parameters for the estimation of SOC. Though these composed differently, they share the same topological struc-
methods have shown encouraging results, there is still room ture. Figure 4 depicts the architecture of the LSTM network.
for development as these models have only taken into account Mapping input sequences to output sequences allows it to
a portion of the variables that could influence predictions and characterize nonlinear dynamic systems. The gating system
have not taken environmental factors into account. In [37], that regulates neural information processing is added inside

VOLUME 12, 2024 43313


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 10. a. Predicted and targeted SOC of LSTM network with 30 hidden neurons obtained during
testing. b: Predicted and target SOC of LSTM network with 50 hidden neurons obtained during testing.
c: Predicted and target SOC of LSTM network with 70 hidden neurons obtained during testing.

43314 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 10. (Continued.) d. Predicted and target SOC of LSTM network with 100 hidden neurons
obtained during testing.

each LSTM unit, which otherwise has the same input and B. GATED RECURRENT UNIT
output as the RNN unit. The old and new states are combined Li et al. proposed the Gated Recurrent Unit (GRU) in 2014
linearly to form the network’s state, with some old states still [26]. GRU can achieve long-term sequential dependence and
existing and flowing. RNN, on the other hand, updates and a straightforward internal structure in all enhanced RNNs.
replaces the state value entirely at each time step. The input, While the LSTM-based RNN model has demonstrated the
forget, and output gates are the three gates that make up an highest level of performance across various machine learning
LSTM. Three gates regulate the information in the cell at tasks, its gating mechanism has resulted in significant com-
time step t: the forget gate f (t) manages the cell state and plexity. In contrast to the LSTM-based model, the GRU-based
decides whether to remove information from the cell state. model requires less memory and is more effective at elim-
The input gate i(t) updates the state value and decides whether inating gradients due to its simpler structure and fewer
data should be written to the cell state. The final output gate parameters. Figure 5 depicts the GRU structure.
o(t) generates cell state c(t) and determines which data is z(t) is defined as an ‘‘update gate’’ that scales the value into
transmitted as the hidden state output. The candidate state [0, 1]. The amount of new input that should be used to update
g (t) is used to decide what information is written to the cell the hidden state is decided by the update gate. The ‘‘reset
state. The expression for various gates of LSTM has been gate’’ is denoted by the symbol r(t). It resembles the LSTM
stated in expressions 1 to 7. The hidden state h (t) is intended forget gate. Reset vector ‘‘r’’ specifies the extent to which the
to encode a type of characterization of the data from the prior hidden state ought to be forgotten. Candidates’ hidden
previous time step, whereas the cell state c(t) is intended to state determines the historical data stored. It is usually known
encode an aggregate of the data from all previously processed as the GRU cell’s memory component and was estimated
time steps. The output of each cell in LSTM is generated from the reset gate. The expression for various gates and
through output state y (t) .σ represents the sigmoidal activa- states of GRU is given from equations 8 and 12. These include
tion function. The model input weights (Wxi , Wxf , and Wxo ), the following: x(t) is the current hidden layer node’s input.
recurrent weights (Whi , Whf ,and Who ), and biases (bi , bf , and h(t) is the current hidden state, and h(t-1)is the output of the
bo ) are represented by the LSTM parameter matrices. previously hidden layer node. GRU cell output y (t) depen-
dents on updated hidden state h (t). The model input weights
Input gate i (t) = σ (Wxi xt + Whi ht−1 + bi ) (1)
(Wxr , Wxz , and Wxg ), recurrent weights (Whr , Whz ,and Whg ),
Forget gate f (t) = σ Wxf xt + Whf ht−1 + bf

(2) and biases (br and bz ) are represented by the GRU parameter
Candidate gate g (t) = σ Wxg xt + Whg ht−1 + bg matrices. σ Represents the sigmoidal activation function.

(3)
Output gate o (t) = σ (Wxo xt + Who ht−1 + bo ) (4) Reset gate r (t) = σ (Wxr xt + Whr ht−1 + br ) (8)
Cell state c (t) = (f (t) .c (t − 1)) + (i (t) .g (t)) (5) Update gate z (t) = σ (Wxz xt + Whz ht−1 + bz ) (9)
Hidden state h (t) = o (t) .(σ c (t)) Candidate state g (t) = σ Wxg xt + bg + (W hg ht−1 .r (t))

(6)
Output state y (t) = σ (Wy (h (t)) + by (7) (10)

VOLUME 12, 2024 43315


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 11. a. RMSE between predicted and target SOC of LSTM for varied hidden neuron and temperature. b.
Max Error between predicted and target SOC of LSTM for varied hidden neuron and temperature.

Hidden state h (t) = ((1 − z (t)).g (t)) + (z (t) .h (t − 1)) regarding forward LSTM are stated in expressions 13 to 18,
(11) for the backward LSTM from 19 to 24 and the final output
gate is given in expression 25. The calculations are similar to
Output y (t) = σ (Wy (h (t)) + by (12) −−→
LSTM. The hidden state output of forward LSTM h (t) and
←−−
backward LSTM h (t) together calculates the BiLSTM cell
C. BILAYERED LSTM output y (t).
The term Bidirectional LSTM, also known as BiLSTM, Forward LSTM
−→ −→−−→ − →
refers to a sequence model that has two LSTM layers: Input gate = σ Wxi xt + Whi ht−1 + bi (13)
one for forward-processing input and another for backward-
−−→  −→ −→−−→ − → 
processing input [36]. Two unidirectional LSTMs comprise Forget gate f (t) = σ Wxf xt + Whf ht−1 + bf (14)
the bidirectional LSTM architecture, which processes the −−→  −→ −→−−→ − → 
sequence forward and backward. The BiLSTM structure is Candidate state g (t) = σ Wxg xt + Whg ht−1 + bg (15)
shown in Figure 6. This architecture could see two distinct −−→ −→ −→−−→ − →
LSTM networks, one receiving the token sequence in its Output gate o (t) = σ Wxo xt + Who ht−1 + bo (16)
original order and the other inverted. The final output is the −−→ −−→ −−−−−→ −→ −−→
Cell state c (t) = (f (t).c (t − 1)) + (i (t).g (t)) (17)
sum of the probabilities from each LSTM network, each −−→ −−→ −−→
producing a probability vector as its output. The expressions Hidden state h (t) = o (t).(σ c (t)) (18)

43316 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 12. RMSE and loss function of GRU-based network trained with 30 neuron.

FIGURE 13. RMSE and training time of GRU network with 30, 50, 70 and 100 hidden neuron.

Backward LSTM D. HYPERPARAMETERS’ ROLE IN THE DL ALGORITHM


Hyperparameter tuning, sometimes referred to as hyperpa-
←− ←− ←−←−− ← −
Input gate ı(t) = σ Wxl xt + Whl ht−1 + bl (19) rameter optimisation, is the process of determining which
←− ←− ←−←−− ← − hyperparameters are optimal to utilize. The model is opti-
Forget gate f (t) = σ Wxf xt + Whf ht−1 + bf (20) mized via the application of optimisation parameters. The
←− ←− ←−←−− ← − most important hyperparameters of DL algorithms are learn-
Candidate state g(t) = σ Wxg xt + Whg ht−1 + bg (21) ing rate, number of hidden neurons, hidden units, batch size,
←− ←− ←−←−− ← − dropout rate, epochs, optimizer and activation function.
Output gate o(t) = σ Wxo xt + Who ht−1 + bo (22)
• Learning rate: The optimizer’s step size during each
←− ←− ←−−−− ← − ←−
Cell state c(t) = (f (t) · c(t − 1)) + (l(t).g(t)) (23) training iteration is controlled by this hyperparameter.
←− ←− ←− An excessively high learning rate can cause instability
Hidden state h(t) = o(t) · (σ c(t)) (24)
 −→ ←− and divergence, whereas an excessively modest learning
Output state y(t) = σ Wy h(t), h(t) + by (25) rate might cause sluggish convergence.

VOLUME 12, 2024 43317


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 14. a. Predicted and target SOC of GRU network with 30 hidden neurons obtained during testing. b: Predicted
and target SOC of GRU network with 50 hidden neurons obtained during testing. c: Predicted and target SOC of GRU
network with 70 hidden neurons obtained during testing.

43318 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 14. (Continued.) d. Predicted and target SOC of GRU network with 100 hidden neurons obtained during testing.

• Epochs: The number of times the model is trained using Tester channel, which provided voltage and current accu-
the whole training dataset is indicated by this hyperpa- racy within 0.1% of full scale. The training data is a single
rameter. Although adding more epochs can enhance the sequence of experimental data taken during a driving cycle in
model’s performance, if done carelessly, it could result which the battery-powered electric vehicle was at 25 degrees
in overfitting. Celsius. Around 6 lakh data is used for testing, and 39000 for
• Number of layers: This hyperparameter establishes the validation. Four experimental data sequences, at four distinct
model’s depth, which can greatly affect its intricacy and temperatures, −10◦ C, 0◦ C, ten ◦ C and 25◦ C, consisting of
capacity for learning. around 30000 data in each set, obtained during driving cycles
• Number of layers: The hyperparameter that controls the are included in the test data.
model’s width and affects its ability to depict intricate The selection of hyperparameters, namely the number
relationships in the data is the number of nodes per layer. of hidden neurons, hidden layers, learning rate, activation
• Activation function: By adding nonlinearity to the function and so on, plays a major role in the perfor-
model, this hyperparameter enables the model to learn mance of RNN. An increase in hyperparameters leads to
intricate decision limits. Rectified Linear Unit (ReLU), an exponential increase in the search space. In addition,
sigmoid, and tanh are examples of common activation each hyperparameter influences the other, and hence, the
functions. negotiation of these parameters may lead to suboptimal solu-
• Dropout rate: A dropout layer must be present in con- tions. Conventionally, grid search and random search are
junction with each LSTM layer. This layer lessens the the methods used for hyperparameter tuning. However, due
sensitivity to particular weights of the individual neu- to their vast space usage, high time consumption to train
rons by avoiding randomly chosen neurons, which helps a single model and computationally expensive nature, they
prevent overfitting during training. Dropout layers can are not highly preferred. Hence probabilistic model, namely
be applied to input layers but not output layers, as this Bayesian Optimization (BO), is preferred for hyperparameter
could cause issues with the model’s output and the error tuning.
computation. Twenty per cent is a good place to start, In contrast to GS and RS, BO bases its determination of
but the dropout rate should be kept low (up to fifty per future evaluation points on the outcomes of past assessments.
cent). It is commonly acknowledged that a 20% value A surrogate model and an acquisition function are two essen-
is the optimal balance between mitigating the risk of tial elements that BO employs to identify the subsequent
overfitting and maintaining model accuracy. hyper-parameter configuration. All currently observed points
are to be fitted into the objective function by the surro-
IV. DATASET PREPARATION AND SELECTION OF gate model. The acquisition function balances the trade-off
HYPERPARAMETER between exploration and exploitation to select the usage of
The data for training and testing is obtained from Hamilton’s various points after getting the prediction distribution of the
McMaster University. In an eight cubic foot thermal cham- probabilistic surrogate model. The variants of the Bayesian
ber, a brand-new 3Ah LG HG2 cell was tested using a optimizer are the tree-structured Parzen estimator and the
75 amp, 5-volt Digatron Firing Circuits Universal Battery Gaussian Process estimator. In the proposed work BO-GP

VOLUME 12, 2024 43319


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 15. a. RMSE between predicted and target SOC of GRU for varied hidden neuron and temperature. b: Max
Error between predicted and target SOC of GRU for varied hidden neuron and temperature.

is used due to their ability to reduce the mean square error TABLE 2. Training parameters of DL algorithms.
during estimation.
Twenty random searches and ten iterations of Bayesian
optimization are used in this instance. Upon optimization,
0.001 and 0.2 are the determined learning and dropout rates.
The effect of variation in the number of hidden neurons in
the DL algorithm is considered for analysis in this work. The
various other parameter set during the algorithm’s execution
is listed in Table 2.

V. EXPERIMENTAL RESULT AND DISCUSSION


From the theoretical analysis, it is found that RNN algorithms
(LSTM, GRU and BiLSTM) can be best suitable for per-
forming SOC estimation compared to other SOC estimation
algorithms. The experiment is conducted in Matlab Software fully connected layers, and a clippedRelu layer followed by a
installed in a single PC system. The proposed architecture regression layer. In this research, various parameter settings
includes a single Sequence input layer, an RNN layer, two for SOC estimation are constructed, and the effects of varying

43320 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 16. RMSE and Loss function of BiLSTM network trained with 30-neuron.

FIGURE 17. RMSE and training time of BiLSTM network with 30, 50, 70 and 100 hidden neuron.

hidden layer neuron numbers on the model are specifically function. To prevent the exploding of the gradient, the value
discussed to investigate the effects of these settings on the of the gradient threshold is set to be one and to avoid the
model estimation performance. padding of sequence; the minimum batch size is maintained
Following the establishment of the structure, the network to be 1. Mean Square Error is selected as the loss function.
has to be trained. Prior to training, the input data set is The general architecture of the proposed DL algorithms for
normalized using min-max normalization function to scale SOC estimation using LSTM is shown in Figure 7.
the input values between 0 and 1. This process is followed
by the implementation of Bayesian optimisation network.
To optimize the DL networks during training process, the A. LSTM-BASED SOC ESTIMATION
Adam optimization algorithm is used. In the proposed work, The performance of the proposed LSTM network for estimat-
the number of input features is 5, and the output response ing SOC is analyzed through performance matrices, namely
is 1. The Tanh function is used as the state activation func- RMSE and loss function (MSE). Initially, the proposed net-
tion, and the sigmoid function is used as the gate activation work is trained with 30 hidden neurons. The obtained training

VOLUME 12, 2024 43321


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 18. a. Predicted and target SOC of BiLSTM network with 30 hidden neurons obtained during testing.
b. Predicted and target SOC of BiLSTM network with 50 hidden neurons obtained during testing. c. Predicted and
target SOC of BiLSTM network with 70 hidden neurons obtained during testing.

43322 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 18. (Continued.) d. Predicted and target SOC of BiLSTM network with 100 hidden neurons obtained
during testing.

results are plotted in figure 8. The obtained RMSE during a clippedRelu layer and a regression output layer, as illus-
training is 0.024, and the training time is 7 minutes. The num- trated in Figure 7. The battery SOC at the moment is the
ber of hidden neurons is varied and analyzed to analyse the output of the GRU network model, which takes current,
proposed network’s performance in varied hyperparameter voltage, average current, average voltage and temperature
tuning environments. The number of hidden neurons varies measurement signals as inputs. The nonlinear relationship
from 30, 50, 70 and 100. between inputs and SOC was found in the training set. The
The RMSE value of obtained networks and their corre- leaky ReLU layer provided an execution threshold. Any input
sponding training time are shown in Figure 9. The figure value less than zero was primarily multiplied by a fixed factor
shows that the number of hidden layer neurons increases, and coefficient.
estimation accuracy increases. Compared to the other models, Like LSTM, Different GRU models with varying numbers
estimation accuracy is maximum when the number of hidden of hidden neurons (30, 50, 70, and 100) are built in this
layer neurons is 70; it begins to decline when the number of proposed work. There is a maximum epoch of 250 and a
hidden layer neurons reaches 100. To prove the generalization validation frequency of 30. The RMSE and loss function
of the network, the trained networks are tested with four acquired during the GRU model’s 30-hidden neuron training
different data sets obtained at varied temperatures (−10 ◦ C, are shown in Figures 12 respectively.
0 ◦ C, 10 ◦ C and 25 ◦ C) consisting of nearly 30000 data in Figures 13 display the RMSE value of the developed
each set. networks along with the corresponding training time. It is
The estimated outcomes of the model, based on the quan- evident from the figure that as the quantity of hidden layer
tity of various hidden neurons at (−10 ◦ C, 0 ◦ C, 10 ◦ C, neurons rises, so does the estimation accuracy. When there are
and 25 ◦ C, respectively), are shown in figures 10a to 10d. 50 hidden layer neurons, estimation accuracy is at its highest
Figures 11a and 11b show the associated errors. Furthermore, compared to the other models; it starts to decrease when there
the figure shows that, at room temperature, the curve repre- are 100 hidden layer neurons. The more hidden units there
senting the estimation results is relatively smooth; however, are, the longer it takes to analyze the data.
the degree of fitting with the actual measurement curve is Figures 14a to 14d show the discrepancy between the
relatively poor, resulting in low estimation accuracy at higher trained network’s predicted and targeted SOC after testing it
or lower temperatures, particularly near the end of the dis- using four distinct data sets. Plots of the RMSE and MAE
charge, the curve representing the model estimation result for test data at four distinct temperatures are shown in Fig-
is relatively steep, but the overall degree of fitting with the ures 15a and 15b. The figure indicates that at lower RMSE
actual measurement curve is appropriate. and MAE values, there is a higher chance of an accurate
SOC prediction. Furthermore, the figure shows that, at room
temperature, the curve representing the estimation results is
B. GRU-BASED SOC ESTIMATION relatively smooth; however, the degree of fitting with the
GRU-RNN network model comprises a sequence input layer, actual measurement curve is relatively poor, resulting in low
a GRU network layer, a fully connected layer with neurons, estimation accuracy at higher or lower temperatures.

VOLUME 12, 2024 43323


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

FIGURE 19. a. RMSE between predicted and target SOC of BiLSTM for varied hidden neuron and temperature. b. Max
Error between predicted and target SOC of BiLSTM for varied hidden neuron and temperature.

C. BILSTM-BASED SOC ESTIMATION STM before training the model are:


SOC estimation using bidirectional long short-term memory • Data normalization: preprocessing Data to avid the scale
(Bi-LSTM) has been performed. The forward and backward that influences the indicators. In the proposed work,
temporal dependencies of battery sequential data can be cap- data normalization is carried out through the zero-centre
tured by the bidirectional LSTM, in contrast to the standard normalization technique.
unidirectional LSTM. The state is not shared by two LSTMs • Building the model: As with LSTM, BiLSTM also
moving in opposite directions. The forward LSTM’s output includes a sequence input layer, clipped Relu layer, fully
state is exclusively transmitted to the forward LSTM, while connected layer BiLSTM layer and regression layer.
the backward LSTM’s output state is exclusively transmitted • Selection of Optimizer: Adam optimizer is selected to
to the backward LSTM. It is not possible to connect the for- prevent the gradient from explosion.
ward and backward LSTMs directly. At every time step, the • The Selection of hyperparameters includes the number
input sequence is passed to the forward and backward LSTM of hidden layers, hidden neurons, hidden units, epochs,
layers, respectively, and the corresponding states are used to batch size, iteration count, validation frequency and
generate the outputs. The two outputs are then combined and Dropout rate.
integrated into the final output by connecting to the output In BiLSTM, the width of the layer is determined by the
layer. number of neurons that are present in the hidden layer. The

43324 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

TABLE 3. Overall performance comparison of the proposed algorithm during training and testing.

TABLE 4. Performance comparison of the proposed method with the existing literature.

work that is being proposed comprises the construction of hidden neurons are depicted in Figure 16. Figure 17 illustrates
various BiLSTM models, each of which is comprised of the amount of time that the models need to complete their
two hidden layers that contain a different number of hidden training, as well as the related root mean square error (RMSE)
neurons (30, 50, 70, and 100). With a validation frequency for various hidden neurons. The picture makes it abundantly
of thirty, the maximum epoch that can be used is set to be clear that the estimation accuracy improves in proportion to
250. The root mean square error (RMSE) and loss function the number of neurons in the hidden layer. As compared to
found during the training of the BiLSTM model with thirty the other models, the estimation accuracy reaches its highest

VOLUME 12, 2024 43325


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

point at fifty and seventy neurons in the hidden layer, and it 25◦ C. The results show that the optimal design produces
starts to decrease at one hundred neurons in the hidden layer. SOC estimations with less than 2% root mean square and
The difference between the anticipated and targeted SOC 5% maximum error for all three types of RNNs (LSTM,
is given in Figures 18a to 18d. This evaluation was performed GRU and BiLSTM). Out of the three, BiLSTM estimates
on the trained network using four distinct data sets. The SOC with less estimation error than the other two approaches
RMSE and MAE charts for the test data are displayed in since each fully connected layer has seventy hidden neurons.
Figures 19a and 19b. These plots are displayed at four dif- (RMSE = 0.12% & MaxE = 1.327% at 0 ◦ C; RMSE = 0.16%
ferent temperatures. The image illustrates that the probability at 10 ◦ C; MaxE = 0.822% at 25 ◦ C; RMSE = 0.15% at
of making an accurate prediction of the SOC increases as 25 ◦ C; MaxE = 0.933% at −10 ◦ C; RMSE = 0.12% &
the RMSE and MAE values decrease. In addition, the picture MaxE = 1.156%). The experimental results show that the
illustrates that the curve that corresponds to the estimation estimation accuracy is high at room temperature but gets
findings is generally smooth when the temperature is at room reduced when the temperature becomes very low. This is due
temperature. However, when the temperature is lower, the to the nonlinear characteristics of the battery. Estimation of
degree of fitting with the actual measurement curve is rather SOC with varied hidden neurons is analyzed, and it was found
bad, which results in a low estimation accuracy. that the number of the neurons should neither be too small
The overall performance comparison of the proposed nor too high as they lead to overfitting problems. Hence,
Bayesian-optimized deep learning methods is shown in 70 hidden neuron counts give the best performance in the
Table 3. From the table, the significant observations can be proposed work compared to 30 and 100 neurons.
concluded that In future work, numerous input parameters for the accurate
1. The estimation accuracy is high at the room tempera- estimation of SOC by considering environmental conditions,
ture but gets reduced when the temperature becomes wind speed, vehicle velocity, and road conditions and apply-
very low. This is due to the nonlinear characteristics of ing attention mechanisms to select the input parameters that
the battery. provide more information regarding output SOC will be done.
2. The number of neurons should be manageable, as they The selected input parameters will be validated using DL
lead to overfitting and underfitting problems. Hence, algorithms, and their performance will be compared to the
70 hidden neuron counts give the best performance in estimation made by graph convolutional networks.
the proposed work compared to 30 and 100 neurons.
Table 4 presents the results of a comparison between the ACKNOWLEDGMENT
suggested method and the one that is currently in use. Upon The authors would like to acknowledge the support from
examination of the table, it is evident that the deep learning the School of Electrical Engineering, Vellore Institute of
algorithms that have been suggested and optimized using the Technology, Vellore, India. They would also like to thank the
Bayesian algorithm have a greater level of accuracy. When Management of Vellore Institute of Technology, Vellore for
compared to Bayesian-optimized LSTM and GRU, the per- the facilities provided during the execution of this work.
formance of BiLSTM in comparison is superior.
REFERENCES
VI. CONCLUSION [1] M. Lennan and E. Morgera, ‘‘The Glasgow climate conference (COP26),’’
This paper proposes a Bayesian-optimized deep learning Int. J. Mar. Coastal Law, vol. 37, no. 1, pp. 137–151, 2022.
[2] Y. Khawaja, N. Shankar, I. Qiqieh, J. Alzubi, O. Alzubi,
method for SOC estimation. From the experimental results, M. K. Nallakaruppan, and S. Padmanaban, ‘‘Battery management
it is found that the performance of DL algorithms increases solutions for Li-ion batteries based on artificial intelligence,’’ Ain Shams
with the selection of optimal hyperparameters. The proposed Eng. J., vol. 14, no. 12, 2023, Art. no. 102213.
work offers several contributions. Initially, the proposed [3] R. Irle, ‘‘EV-volumes-The electric vehicle world sales database,’’ Glob. EV
Sales, 2021.
approach can estimate the SOC of the battery without having [4] F. Nadeem, S. M. S. Hussain, P. K. Tiwari, A. K. Goswami, and T. S. Ustun,
prior knowledge of the battery model and the requirement of ‘‘Comparative review of energy storage systems, their roles, and impacts
filters and observers. on future power systems,’’ IEEE Access, vol. 7, pp. 4555–4585, 2019.
[5] V. Selvaraj and I. Vairavasundaram, ‘‘Flyback converter employed
Secondly, the employment of a Bayesian optimisation non-dissipative cell equalization in electric vehicle lithium-ion batter-
algorithm for the selection of optimal hyperparameters ies,’’ e-Prime-Adv. Elect. Eng., Electron. Energy, vol. 5, Sep. 2023,
reduces the estimation error and increases the system Art. no. 100278.
efficiency. [6] P. U. Nzereogu, A. D. Omah, F. I. Ezema, E. I. Iwuoha, and A. C. Nwanya,
‘‘Anode materials for lithium-ion batteries: A review,’’ Appl. Surf. Sci. Adv.,
Finally, the performance of the proposed work is analysed vol. 9, Jun. 2022, Art. no. 100233.
for varying numbers of hidden neurons. The data set for [7] L. Wang, X. Zhao, Z. Deng, and L. Yang, ‘‘Application of electrochemical
analysis was obtained from Hamilton’s McMaster Univer- impedance spectroscopy in battery management system: State of charge
estimation for aging batteries,’’ J. Energy Storage, vol. 57, Jan. 2023,
sity and maps the relationship of input parameters (current, Art. no. 106275.
voltage, temperature, average voltage, average current and [8] J. P. Christophersen, ‘‘Battery test manual for electric vehicles, revision
temperature) to the output SOC. The experimental analysis is 3,’’ Idaho Nat. Lab., Idaho Falls, ID, USA, Tech. Rep. INL/EXT-15-34184,
carried out in MATLAB software. The suggested techniques 2015.
[9] M. J. Lain and E. Kendrick, ‘‘Understanding the limitations of lithium
undergo validation and testing on four distinct datasets at ion batteries at high rates,’’ J. Power Sour., vol. 493, May 2021,
four distinct temperature ranges: −10◦ C, 0◦ C, 10◦ C, and Art. no. 229690.

43326 VOLUME 12, 2024


S. Vedhanayaki, V. Indragandhi: Bayesian Optimized DL Approach

[10] J. Liu and X. Liu, ‘‘An improved method of state of health prediction for [31] Z. Zhang, Z. Dong, H. Lin, Z. He, M. Wang, Y. He, X. Gao, and M. Gao,
lithium batteries considering different temperature,’’ J. Energy Storage, ‘‘An improved bidirectional gated recurrent unit method for accurate state-
vol. 63, Jul. 2023, Art. no. 107028. of-charge estimation,’’ IEEE Access, vol. 9, pp. 11252–11263, 2021.
[11] S. Vedhanayaki and V. Indragandhi, ‘‘Certain investigation and implemen- [32] W. He, N. Williard, C. Chen, and M. Pecht, ‘‘State of charge estimation
tation of Coulomb counting based unscented Kalman filter for state of for Li-ion batteries using neural network modeling and unscented Kalman
charge estimation of lithium-ion batteries used in electric vehicle appli- filter-based error cancellation,’’ Int. J. Elect. Power Energy Syst., vol. 62,
cation,’’ Int. J. Thermofluids, vol. 18, May 2023, Art. no. 100335. pp. 783–791, Nov. 2014.
[12] K. Qian and X. Liu, ‘‘Hybrid optimization strategy for lithium-ion battery’s [33] E. Chemali, P. J. Kollmeyer, M. Preindl, and A. Emadi, ‘‘State-of-charge
state of charge/health using joint of dual Kalman filter and modified sine- estimation of Li-ion batteries using deep neural networks: A machine
cosine algorithm,’’ J. Energy Storage, vol. 44, Dec. 2021, Art. no. 103319. learning approach,’’ J. Power Sources, vol. 400, pp. 242–255, Oct. 2018.
[13] H. Ben Sassi, F. Errahimi, N. Es-Sbai, and C. Alaoui, ‘‘Comparative study [34] D. N. T. How, M. A. Hannan, M. S. H. Lipu, K. S. M. Sahari,
of ANN/KF for on-board SOC estimation for vehicular applications,’’ P. J. Ker, and K. M. Muttaqi, ‘‘State-of-charge estimation of Li-ion battery
J. Energy Storage, vol. 25, Oct. 2019, Art. no. 100822. in electric vehicles: A deep neural network approach,’’ IEEE Trans. Ind.
[14] V. Selvaraj and I. Vairavasundaram, ‘‘A comprehensive review of state Appl., vol. 56, no. 5, pp. 5565–5574, Sep. 2020.
of charge estimation in lithium-ion batteries used in electric vehicles,’’ [35] Y. Tian, R. Lai, X. Li, L. Xiang, and J. Tian, ‘‘A combined method for
J. Energy Storage, vol. 72, Nov. 2023, Art. no. 108777. state-of-charge estimation for lithium-ion batteries using a long short-term
memory network and an adaptive cubature Kalman filter,’’ Appl. Energy,
[15] J. Tian, C. Chen, W. Shen, F. Sun, and R. Xiong, ‘‘Deep learning frame-
vol. 265, May 2020, Art. no. 114789.
work for lithium-ion battery state of charge estimation: Recent advances
[36] C. Bian, H. He, and S. Yang, ‘‘Stacked bidirectional long short-term
and future perspectives,’’ Energy Storage Mater., vol. 61, Aug. 2023,
memory networks for state-of-charge estimation of lithium-ion batteries,’’
Art. no. 102883.
Energy, vol. 191, Jan. 2020, Art. no. 116538.
[16] E. Chemali, P. J. Kollmeyer, M. Preindl, R. Ahmed, and A. Emadi, ‘‘Long
[37] S. Elmi and K.-L. Tan, ‘‘DeepFEC: Energy consumption prediction under
short-term memory networks for accurate state-of-charge estimation of Li-
real-world driving conditions for smart cities,’’ in Proc. Web Conf., 2021,
ion batteries,’’ IEEE Trans. Ind. Electron., vol. 65, no. 8, pp. 6730–6739,
pp. 1880–1890.
Aug. 2018.
[38] A. Gomaa, M. M. Abdelwahab, M. Abo-Zahhad, T. Minematsu, and
[17] D. Liu, L. Li, Y. Song, L. Wu, and Y. Peng, ‘‘Hybrid state of charge R.-I. Taniguchi, ‘‘Robust vehicle detection and counting algorithm
estimation for lithium-ion battery under dynamic operating conditions,’’ employing a convolution neural network and optical flow,’’ Sensors,
Int. J. Elect. Power Energy Syst., vol. 110, pp. 48–61, Sep. 2019. vol. 19, no. 20, p. 4588, Oct. 2019.
[18] B. Xiao, Y. Liu, and B. Xiao, ‘‘Accurate state-of-charge estimation [39] A. Gomaa, T. Minematsu, M. M. Abdelwahab, M. Abo-Zahhad, and
approach for lithium-ion batteries by gated recurrent unit with ensemble R. Taniguchi, ‘‘Faster CNN-based vehicle detection and counting strat-
optimizer,’’ IEEE Access, vol. 7, pp. 54192–54202, 2019. egy for fixed camera scenes,’’ Multimedia Tools Appl., vol. 81, no. 18,
[19] P. Eleftheriadis, A. Dolara, and S. Leva, ‘‘An overview of data-driven pp. 25443–25471, 2022.
methods for the online state of charge estimation,’’ in Proc. IEEE Int. [40] Y. Chang, Z. Tu, W. Xie, B. Luo, S. Zhang, H. Sui, and J. Yuan, ‘‘Video
Conf. Environ. Electr. Eng. IEEE Ind. Commercial Power Syst. Eur. anomaly detection with spatio-temporal dissociation,’’ Pattern Recognit.,
(EEEIC/I&CPS Europe), Jun. 2022, pp. 1–6. vol. 122, Feb. 2022, Art. no. 108213.
[20] Z. Huang, F. Yang, F. Xu, X. Song, and K.-L. Tsui, ‘‘Convolutional gated [41] A. Gomaa, M. M. Abdelwahab, and M. Abo-Zahhad, ‘‘Efficient vehicle
recurrent unit–recurrent neural network for state-of-charge estimation of detection and tracking strategy in aerial videos by employing morphologi-
lithium-ion batteries,’’ IEEE Access, vol. 7, pp. 93139–93149, 2019. cal operations and feature points motion analysis,’’ Multimedia Tools Appl.,
[21] Z. Yi and P. H. Bauer, ‘‘Effects of environmental factors on electric vehicle vol. 79, no. 35, pp. 26023–26043, 2020.
energy consumption: A sensitivity analysis,’’ IET Electr. Syst. Transp.,
vol. 7, no. 1, pp. 3–13, Mar. 2017.
[22] F. Mohammadi, ‘‘Lithium-ion battery state-of-charge estimation based on
an improved Coulomb-counting algorithm and uncertainty evaluation,’’ SELVARAJ VEDHANAYAKI received the B.E.
J. Energy Storage, vol. 48, Apr. 2022, Art. no. 104061. degree in electrical and electronics engineering
[23] J. Meng, M. Ricco, G. Luo, M. Swierczynski, D.-I. Stroe, A.-I. Stroe, and from Anna University and the M.E. degree in
R. Teodorescu, ‘‘An overview and comparison of online implementable power systems from the Government College
SOC estimation methods for lithium-ion battery,’’ IEEE Trans. Ind. Appl., of Technology, Coimbatore. She is currently a
vol. 54, no. 2, pp. 1583–1591, Mar. 2018. Research Scholar with the School of Electrical
[24] K. Qian, X. Liu, Y. Wang, X. Yu, and B. Huang, ‘‘Modified dual extended Engineering, VIT University, Vellore, Tamil Nadu.
Kalman filters for SOC estimation and online parameter identification of Her research interests include power converters,
lithium-ion battery via modified gray wolf optimizer,’’ Proc. Inst. Mech.
artificial intelligence, and electric vehicle.
Eng., D, J. Automobile Eng., vol. 236, no. 8, pp. 1761–1774, 2022.
[25] F. Yang, S. Zhang, W. Li, and Q. Miao, ‘‘State-of-charge estimation of
lithium-ion batteries using LSTM and UKF,’’ Energy, vol. 201, Jun. 2020,
Art. no. 117664.
[26] C. Li, F. Xiao, and Y. Fan, ‘‘An approach to state of charge estimation VAIRAVASUNDARAM INDRAGANDHI recei-
of lithium-ion batteries based on recurrent neural networks with gated
ved the B.E. degree in electrical and electron-
recurrent unit,’’ Energies, vol. 12, no. 9, p. 1592, Apr. 2019.
ics engineering from Bharathidasan University,
[27] P. Eleftheriadis, S. Leva, and E. Ogliari, ‘‘Bayesian hyperparameter opti-
in 2004, the M.E. degree in power electronics
mization of stacked bidirectional long short-term memory neural network
and drives from Anna University, Chennai, India,
for the state of charge estimation,’’ Sustain. Energy, Grids Netw., vol. 36,
Dec. 2023, Art. no. 101160. and the Doctor of Philosophy degree. She accom-
[28] C. Menos-Aikateriniadis, I. Lamprinos, and P. S. Georgilakis, ‘‘Particle plished the thesis for her Ph.D. entitled ‘‘Analysis
swarm optimization in residential demand-side management: A review and Modeling of Multi-Port DC-DC Boost Con-
on scheduling and control algorithms for demand response provision,’’ verter for Hybrid Power Generation System’’ with
Energies, vol. 15, no. 6, p. 2211, Mar. 2022. Anna University. She is currently a Professor with
[29] L. Chen, Z. Wang, Z. Lü, J. Li, B. Ji, H. Wei, and H. Pan, ‘‘A novel state- the School of Electrical Engineering, VIT University, Vellore, Tamil Nadu.
of-charge estimation method of lithium-ion batteries combining the grey She has been engaged in research work for the past 15 years in the area
model and genetic algorithms,’’ IEEE Trans. Power Electron., vol. 33, of power electronics, drives and renewable energy systems. Her research
no. 10, pp. 8797–8807, Oct. 2018. interests include power converters, artificial intelligence, and power quality.
[30] M. Jiao, D. Wang, and J. Qiu, ‘‘A GRU-RNN based momentum optimized She was awarded the Gold Medal for the achievement of the university’s first
algorithm for SOC estimation,’’ J. Power Sour., vol. 459, May 2020, rank.
Art. no. 228051.

VOLUME 12, 2024 43327

You might also like