
Parameter Identification for Electrochemical Models of Lithium-Ion Batteries Using Bayesian Optimization

Jianzong Pi ∗, Samuel Filgueira da Silva ∗∗, Mehmet Fatih Ozkan ∗∗, Abhishek Gupta ∗, Marcello Canova ∗∗

∗ Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH (e-mail: [email protected], [email protected])
∗∗ Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH (e-mail: [email protected], [email protected], [email protected])

arXiv:2405.10750v1 [eess.SY] 17 May 2024

Abstract: Efficient parameter identification of electrochemical models is crucial for accurate monitoring and control of lithium-ion cells. This process becomes challenging when applied to complex models that rely on a considerable number of interdependent parameters that affect the output response. Gradient-based and metaheuristic optimization techniques, although previously employed for this task, are limited by their lack of robustness, high computational costs, and susceptibility to local minima. In this study, Bayesian Optimization is used for tuning the dynamic parameters of an electrochemical equivalent circuit battery model (E-ECM) for a nickel-manganese-cobalt (NMC)-graphite cell. The performance of the Bayesian Optimization is compared with baseline methods based on gradient-based and metaheuristic approaches. The robustness of the parameter optimization method is tested by performing verification using an experimental drive cycle. The results indicate that Bayesian Optimization outperforms Gradient Descent and PSO optimization techniques, achieving reductions in average testing loss of 28.8% and 5.8%, respectively. Moreover, Bayesian optimization significantly reduces the variance in testing loss by 95.8% and 72.7%, respectively.

Keywords: Parameter identification, Bayesian Optimization, Electrochemical Models, Lithium-ion Batteries.

1. INTRODUCTION

Lithium ion (li-ion) batteries are at the forefront of energy storage technology and play an important role in promoting electrification in transportation (Cano et al. (2018); Chen et al. (2019)). These energy storage devices have had a significant impact on industries such as consumer electronics and electric vehicles because of their high energy density, long operating lifespan, and low maintenance requirements (Xu et al. (2023); Buhmann and Criado (2023)). An in-depth understanding and accurate modeling of li-ion battery behavior is essential for optimizing performance and ensuring reliability in practical applications.

Physics-based (electrochemical) battery models are essential tools for understanding, accurately predicting and monitoring lithium-ion batteries (Wu et al. (2021)), as they employ sophisticated mathematical formulations to elucidate the physical mechanisms governing mass and charge transport within the li-ion cell electrodes and electrolyte solution. Parameters in electrochemical models are linked to physical, chemical and material properties of the cell components, such as electrolyte diffusion coefficient and rate constants, and their numerical values are crucial as they directly influence the accuracy and ability of the model to predict the cell behavior (Seals et al. (2022)). By fine-tuning these parameters, electrochemical models can more precisely replicate the physical and chemical processes within the cell, ultimately leading to improved prediction of voltage and capacity, enhancing the overall battery performance and reliability.
On the other hand, parameter identification in electrochemical models presents significant challenges, notably the large number of parameters that need to be identified simultaneously, as exemplified in studies like Forman et al. (2012). Traditional numerical gradient-based optimization algorithms (Boyd and Vandenberghe (2004)) are commonly used in data-driven optimization tasks, but they exhibit two main drawbacks. First, they are prone to converging to local minima, with their effectiveness significantly dependent on the initial point in the descent; this compromises the robustness of the tuning process with respect to the initialization. Secondly, calculating the gradient (or sub-gradient) numerically can be computationally intensive and time consuming when the objective function is complex to evaluate. More recently, reinforcement learning algorithms have shown promise in identifying optimal parameters in real time through interaction with the system (Unagar et al. (2021)). However, a significant drawback of these algorithms is their reliance on large datasets, which, in the context of our specific application, are impractical to obtain within a reasonable timeframe. This highlights the need for more efficient and adaptable optimization strategies in this domain.

Numerous model-free optimization techniques, including evolutionary algorithms (Li et al. (2016)) and particle swarm optimization (PSO) (Varga et al. (2013); Noel (2012); Miranda et al. (2023); Cavalca and Fernandes (2018); Dangwal and Canova (2021)), were employed to improve both the speed of convergence and the resilience of model calibration. The primary hurdles include the considerable computational time required for function evaluation and gradient computation, the chance of converging to local minima, and the robustness of convergence points to randomly initialized optimization points.

Bayesian optimization (Mockus (2005)) is a powerful tool for globally optimizing black-box functions that are computationally expensive to evaluate. Bayesian optimization provides an efficient and systematic approach to finding the global optimum by leveraging probabilistic models to systematically explore and exploit within the search domain. Bayesian optimization enables the optimization of complex systems efficiently and robustly. The method has been widely applied to hyperparameter tuning in Wu et al. (2019); Cho et al. (2020), model selection in Malkomes et al. (2016), robotics control in Martinez-Cantin (2017), and so on. In these applications, the key advantage of Bayesian optimization is its ability to provide robust and high-quality solutions with fewer evaluations of the objective function, which is particularly beneficial in situations where each evaluation is costly and time consuming.

The goal of this study is to apply the Bayesian optimization approach to the multi-parameter optimization problem for electrochemical models of li-ion cells, and to test its performance and robustness. The study utilizes a reduced-order electrochemical model adapted from Seals et al. (2022). The analysis also considers gradient-based approaches (such as gradient descent) and model-free methods (such as PSO), evaluating their relative performance and robustness in the parameter identification tasks.

2. OVERVIEW OF LI-ION CELL MODEL EQUATIONS

In this work, the Electrochemical Equivalent Circuit Model (E-ECM) is used as a computationally friendly model to reduce computation time when performing parameter identification, resulting in a shorter optimization process (Seals et al. (2022)). The E-ECM is based on order reduction, linearization and simplification of the governing equations of the Extended Single-Particle Model (ESPM), leading to a mathematical form that allows for fast computing while maintaining the relationship between the model parameters and the physical processes described by the original model equations. The constitutive equations of the E-ECM are summarized for clarity. Readers can refer to Seals et al. (2022) for a complete description of the assumptions and derivation framework.

Fig. 1. Schematic of the E-ECM model. [Anode, separator, and cathode.]

The terminal voltage V of the E-ECM model is given by:

V(t) = U_p(c_{se,p}, t) - U_n(c_{se,n}, t) - (\eta_p(c_{se,p}, t) - \eta_n(c_{se,n}, t)) + \phi_{diff}(t) + \phi_{ohm}(t) - I(t) R_c    (1)

The kinetic overpotential \eta_i and the exchange current density i_{0,i} are defined as:

\eta_i(t) = \frac{\bar{R} T_0}{F} \frac{-J_i I(t)}{i_{0,i}}    (2)

i_{0,i}(t) = \exp\left[ \frac{E_{i_{0,i}}}{\bar{R}} \left( \frac{1}{T_{ref}} - \frac{1}{T(t)} \right) \right] F k_i \sqrt{c_i (c_{max,i} - c_i) c_{e,i}}    (3)

where i = p, n. The Padé approximation is used to calculate the lithium surface concentration c_i for the solid phase diffusion:

c_i(s) = c_{i0} + \frac{-R_i}{3 F \epsilon_{am,i} L_i A} \left( G_b(s) + G_d(s) \right) I(s)    (4)

where the transfer function for the bulk concentration G_b(s) and the transfer function for the diffusion dynamics G_d(s) are expressed as:

G_b(s) = \frac{\frac{2 R_i}{7 D_i} s + \frac{3}{R_i}}{\frac{1}{35} \frac{R_i^2}{D_i} s^2 + s}    (5)

G_d(s) = \frac{\frac{R_i}{5 D_i}}{\frac{1}{35} \frac{R_i^2}{D_i} s + 1}    (6)

The potential drop in the electrolyte solution, \phi_e, is analogously given by:

\phi_e(s) = \frac{C_1}{D_e} \left( G_{pos}(s) + G_{neg}(s) \right) I(s)    (7)

where C_1 is defined as a function of the cell parameters, as shown in Eq. (8), and G_{pos} (Eq. (9)) and G_{neg} (Eq. (10)) are the positive and negative transfer functions:

C_1 = \frac{-L_{cell}}{A_s F^2 c_{e0}} 2 R T (1 - t_0^+)(1 + \beta) \left[ 0.601 - 0.24 \sqrt{\frac{c_{e0}}{1000}} + 0.982 \left( 1 - 0.0052 (T_0 - T_{ref}) \right) \left( \frac{c_{e0}}{1000} \right)^{1.5} \right]    (8)

G_{pos}(s) = \frac{0.124 \gamma_p}{\frac{0.1052 L_{cell}^2}{D_e} s + 1}    (9)

G_{neg}(s) = \frac{0.117 \gamma_n}{\frac{0.0997 L_{cell}^2}{D_e} s + 1}    (10)

Last, the potential drop due to ohmic losses, \phi_{ohm}, is expressed by Eq. (11), where \kappa is a function of the initial electrolyte concentration c_{e0} and cell temperature T:

\phi_{ohm}(t) = \frac{-I(t) L_{cell}}{\kappa A}    (11)
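To make the role of the Padé transfer functions in Eqs. (4)-(6) concrete, the short Python sketch below simulates the bulk and diffusion contributions to the surface concentration of one electrode with scipy.signal. All numerical values (particle radius, diffusion coefficient, electrode geometry, current pulse, sign convention) are illustrative assumptions, not the NMC-graphite cell parameters identified in this study.

```python
import numpy as np
from scipy import signal

# Placeholder electrode parameters (assumed for illustration only)
R_i = 5.0e-6      # particle radius [m]
D_i = 1.0e-14     # solid-phase diffusion coefficient [m^2/s]
F = 96485.0       # Faraday constant [C/mol]
eps_am, L_i, A = 0.6, 7.0e-5, 0.1   # active-material fraction, electrode thickness [m], area [m^2]
c_i0 = 2.5e4      # initial solid-phase lithium concentration [mol/m^3]

# Eq. (5): bulk-concentration transfer function Gb(s)
G_b = signal.TransferFunction([2 * R_i / (7 * D_i), 3 / R_i],
                              [R_i**2 / (35 * D_i), 1.0, 0.0])
# Eq. (6): diffusion-dynamics transfer function Gd(s)
G_d = signal.TransferFunction([R_i / (5 * D_i)],
                              [R_i**2 / (35 * D_i), 1.0])

# Constant-current pulse as the input excitation I(t) (assumed magnitude and sign)
t = np.linspace(0.0, 600.0, 601)
I = np.full_like(t, 5.0)

# Eq. (4): surface concentration from the combined response of Gb and Gd
_, y_b, _ = signal.lsim(G_b, U=I, T=t)
_, y_d, _ = signal.lsim(G_d, U=I, T=t)
c_surf = c_i0 - R_i / (3 * F * eps_am * L_i * A) * (y_b + y_d)

print(f"surface concentration after {t[-1]:.0f} s: {c_surf[-1]:.1f} mol/m^3")
```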

Sensitivity analysis has been previously used to rank the parameters of li-ion battery models with respect to their effect on the terminal voltage output response (Jin et al. (2018); Park et al. (2018)). In Dangwal and Canova (2021), seven parameters are highlighted for holding high influence on model response in transient conditions. In this current work, three of those parameters are evaluated for the implementation of the multi-parameter identification process: electrolyte diffusion coefficient De, cathode rate constant kp, and anode rate constant kn.

3. BAYESIAN OPTIMIZATION

Bayesian Optimization is an efficient method for finding the maximum or minimum of an objective function that is expensive to evaluate. The method combines a probabilistic model, typically a Gaussian Process, to estimate the function and an acquisition function to decide where to sample next. This approach effectively balances exploration of unknown areas and exploitation of promising regions, making it ideal for tasks like hyperparameter tuning in machine learning, where evaluations (such as simulating a high-fidelity model) are costly and time-consuming.

3.1 Notations

Let Θ ⊂ R^n be the parameter space and T := [0, T]. Let L∞(T), Cb(T) be the collections of almost everywhere bounded and continuous bounded functions defined on T, respectively.

3.2 Preliminaries

Fact 1. (Conditional Multivariate Gaussian). Consider two multivariate Gaussian random variables X ∼ N(µ_X, Σ_X) ∈ R^p and Y ∼ N(µ_Y, Σ_Y) ∈ R^m. Then the distribution of X | (Y = y) is multivariate Gaussian with

mean vector = µ_X + Σ_{XY} Σ_{YY}^{-1} (y − µ_Y),
covariance matrix = Σ_{XX} − Σ_{XY} Σ_{YY}^{-1} Σ_{YX},

where Σ_{XY} ∈ R^{p×m}, Σ_{YX} ∈ R^{m×p} are the covariance matrices between X and Y.
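As a quick numerical illustration of Fact 1 (not part of the original paper), the snippet below evaluates the conditioning formulas for a pair of scalar jointly Gaussian variables; all numbers are arbitrary.

```python
# Scalar instance of Fact 1 (p = m = 1), purely illustrative values
mu_X, mu_Y = 1.0, -2.0
Sigma_XX, Sigma_YY, Sigma_XY = 2.0, 1.5, 0.9   # Var(X), Var(Y), Cov(X, Y)

y_obs = 0.0  # observed value of Y

cond_mean = mu_X + Sigma_XY / Sigma_YY * (y_obs - mu_Y)   # conditional mean of X | Y = y_obs
cond_var = Sigma_XX - Sigma_XY / Sigma_YY * Sigma_XY      # conditional variance of X | Y = y_obs

print(f"E[X | Y = {y_obs}] = {cond_mean:.2f}")    # 1 + 0.6 * 2 = 2.20
print(f"Var[X | Y = {y_obs}] = {cond_var:.2f}")   # 2 - 0.54 = 1.46
```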
Gaussian Process. Let f : Θ → R be the objective function. A Gaussian process over domain Θ is a collection of random variables (f(θ))_{θ∈Θ} such that each finite subcollection (f(θ_i))_{i=1}^N is jointly Gaussian with mean E[f(θ_i)] := µ(θ_i) and covariance E[(f(θ_i) − µ(θ_i))(f(θ_j) − µ(θ_j))] := k(θ_i, θ_j). Let k be the kernel function that determines the covariance structure. Assume k : Θ × Θ → R to be positive definite: the kernel evaluation matrix [k(θ_i, θ_j)]_{i,j} ∈ R^{N×N} is positive semi-definite for every finite collection (θ_i)_{i=1}^N. In practice, one may pick k to be the Gaussian (or squared exponential) kernel:

k(θ, θ′) = exp( −‖θ − θ′‖² / 2 ).

The Gaussian process defined above is denoted as GP(µ(·), k(·, ·)).

The process of inference with function evaluations is as follows: first, set the prior of f to be GP(0, k(·, ·)). Then, randomly pick s points (θ_1, ..., θ_s), run the function evaluations on θ_1, ..., θ_s and record f(θ_1), ..., f(θ_s). Then, one can construct the kernel covariance matrix K ∈ R^{s×s}, where [K]_{ij} = k(θ_i, θ_j).

For a new point θ_{s+1}, Fact 1 implies f(θ_{s+1}) is Gaussian with mean µ(θ_{s+1}) = k^T K^{-1} [f(θ_1), ..., f(θ_s)]^T and variance σ²(θ_{s+1}) = k(θ_{s+1}, θ_{s+1}) − k^T K^{-1} k, where the vector k := [k(θ_1, θ_{s+1}), ..., k(θ_s, θ_{s+1})]^T ∈ R^s. See (Garnett, 2023, pp. 16-26) for a detailed discussion of inference with Gaussian processes.
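The posterior mean and variance formulas above translate directly into a few lines of NumPy. The sketch below is a minimal illustration using the unit-lengthscale Gaussian kernel defined in Section 3.2 on a toy one-dimensional function; it is not the implementation used in this study.

```python
import numpy as np

def sq_exp_kernel(a, b):
    """Squared-exponential kernel k(theta, theta') = exp(-||theta - theta'||^2 / 2)."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-0.5 * np.sum(d**2, axis=-1))

def gp_posterior(theta_train, f_train, theta_query, jitter=1e-10):
    """Posterior mean and variance at query points, conditioned on (theta_train, f_train)."""
    K = sq_exp_kernel(theta_train, theta_train) + jitter * np.eye(len(theta_train))
    k_star = sq_exp_kernel(theta_train, theta_query)   # shape (s, q)
    alpha = np.linalg.solve(K, f_train)                # K^{-1} f
    mean = k_star.T @ alpha
    v = np.linalg.solve(K, k_star)                     # K^{-1} k
    var = sq_exp_kernel(theta_query, theta_query).diagonal() - np.sum(k_star * v, axis=0)
    return mean, np.maximum(var, 0.0)

# Toy example: condition a zero-mean GP prior on three evaluations of a 1-D function.
theta_train = np.array([[0.1], [0.5], [0.9]])
f_train = np.sin(3.0 * theta_train).ravel()
theta_query = np.array([[0.3], [0.7]])
mu, sigma2 = gp_posterior(theta_train, f_train, theta_query)
print(mu, sigma2)
```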
Acquisition Function. Let D be the dataset containing the observations {θ_i, f(θ_i)}_i collected so far. Let a : Θ → R_+ be the acquisition function that determines which parameter is to be evaluated next; specifically, the algorithm determines the next point to evaluate by the proxy optimization θ_{N+1} = arg max_{θ∈Θ} a(θ). The acquisition function applied in this experimental study is the expected improvement (Jones et al. (1998)): suppose f̃ is the minimal value of f observed so far; then the expected improvement is defined as:

a_{EI}(θ) := E[max(0, f̃ − f(θ)) | D].

Other commonly used acquisition functions are probability of improvement, Bayesian expected loss, Thompson sampling and so on (for further details, see Frazier (2018)). Combining the Gaussian process and the acquisition function, the Bayesian optimization algorithm is shown in Algorithm 1.

Algorithm 1 Bayesian Optimization
Require: Gaussian prior on f, acquisition function a.
  Sample s_0 points and observe {θ_i, f(θ_i)}_{i=1}^{s_0}.
  for i = 1, ..., S do
    Update posterior on objective function f.
    Update acquisition function a.
    Choose θ_i = arg max_{θ∈Θ} a(θ).
    Evaluate objective function and observe (θ_i, f(θ_i)).
  end for
  return Observed θ_i that minimizes f.
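A minimal NumPy version of Algorithm 1 is sketched below. It reuses the gp_posterior helper from the Gaussian-process sketch above and the closed-form expected improvement for a noise-free Gaussian posterior. The placeholder objective, the unit box domain, and the random-candidate maximization of the acquisition function are simplifying assumptions introduced here; the study itself evaluates the battery-model calibration loss defined in Section 4.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def objective(theta):
    """Placeholder expensive objective (stands in for the E-ECM calibration loss)."""
    return float(np.sum((theta - 0.3)**2) + 0.1 * np.sin(10.0 * theta.sum()))

def expected_improvement(mu, sigma2, f_best):
    """Closed-form EI for minimization: E[max(0, f_best - f(theta)) | D]."""
    sigma = np.sqrt(np.maximum(sigma2, 1e-12))
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(bounds, n_init=5, n_iter=45, n_candidates=2000):
    """Algorithm 1: random initial design, then pick the arg-max of EI among random candidates."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    thetas = lo + (hi - lo) * rng.random((n_init, dim))
    fvals = np.array([objective(t) for t in thetas])
    for _ in range(n_iter):
        cand = lo + (hi - lo) * rng.random((n_candidates, dim))
        mu, s2 = gp_posterior(thetas, fvals, cand)   # GP posterior from the previous sketch
        theta_next = cand[np.argmax(expected_improvement(mu, s2, fvals.min()))]
        thetas = np.vstack([thetas, theta_next])
        fvals = np.append(fvals, objective(theta_next))
    best = np.argmin(fvals)                          # return observed point that minimizes f
    return thetas[best], fvals[best]

theta_star, f_star = bayes_opt(bounds=[(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)])
print("best parameters:", theta_star, "loss:", f_star)
```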

4. PROBLEM FORMULATION

Denote the physical Li-ion cell as Ā : L∞(T) → Cb(T), which outputs the terminal cell voltage as a time series V ∈ Cb(T), and represent an E-ECM as a nonlinear operator A : Θ × L∞(T) → Cb(T). For every fixed parameter θ ∈ Θ and every input excitation u ∈ L∞(T), A_θ outputs the model terminal cell voltage as a time series V_m ∈ Cb(T). Assuming that experimental input excitation profiles u_1, ..., u_K and Ā(u_1), ..., Ā(u_K) are available, the following optimization problem is formulated:

θ* = arg min_{θ ∈ Θ} \sum_{k=1}^{K} ‖A(θ, u_k) − Ā(u_k)‖_2^2 =: f(θ)    (12)

Evaluation of this objective function is costly because it involves computing a time series output by solving the set of differential and algebraic equations underlying the E-ECM. Furthermore, determining the gradients of this objective function poses an even greater challenge, due to the large number of samples required to numerically compute the gradient with respect to the optimization variables.

The Bayesian optimization method addresses the aforementioned challenges. Since Gaussian processes are used as the function approximator, the following assumption is required:

Assumption 1. The mapping θ ↦ f(θ) = \sum_{k=1}^{K} ‖A(θ, u_k) − Ā(u_k)‖_2^2 is continuous over Θ.
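Eq. (12) maps directly onto a short helper like the one sketched below. The simulate_eecm callable standing in for the operator A(θ, u), as well as the measured voltage traces, are placeholders and are not provided here; the actual E-ECM solver and experimental data are described in Sections 2 and 5.

```python
import numpy as np

def calibration_loss(theta, current_profiles, measured_voltages, simulate_eecm):
    """Eq. (12): sum over excitation profiles of the squared terminal-voltage error.

    theta              -- parameter vector (k_p, k_n, D_e)
    current_profiles   -- list of input current time series u_1, ..., u_K
    measured_voltages  -- list of measured voltage time series A_bar(u_k)
    simulate_eecm      -- callable (theta, u) -> model voltage time series A(theta, u);
                          placeholder for the E-ECM solver, not defined in this sketch
    """
    loss = 0.0
    for u_k, v_meas in zip(current_profiles, measured_voltages):
        v_model = simulate_eecm(theta, u_k)
        loss += float(np.sum((v_model - v_meas) ** 2))
    return loss
```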
5. CASE STUDY

In the optimization framework defined by Equation (12), the parameter space Θ can be either a one-dimensional interval or a multi-dimensional box. This work considers a set of three model parameters θ = (kp, kn, De) to optimize concurrently. Thus, Θ represents a three-dimensional box containing the lower and upper limits of the parameters.

The optimization of the objective function in Eq. (12) is conducted using a training set comprising a dynamic, step-wise current profile, depicted in the upper panel of Figure 2. This profile, named RCID, spans the entire feasible C-rate and state of charge (SOC) range of a li-ion cell. To assess the generalization power of the optimized parameters, the results are evaluated on a different duty cycle formed by repetitions of a drive cycle profile, as shown in the lower panel of Figure 2.

Fig. 2. The RCID (training set) and drive cycle (verification set) input current profiles. [Upper panel: RCID at 25 °C, Current (A) vs. Time (s); lower panel: drive cycle at 45 °C, Current (A) vs. Time (s).]
6. RESULTS AND DISCUSSION

The training and testing losses of the Bayesian optimization method applied to the multi-parameter identification problem are evaluated by benchmarking against two common methods, namely gradient descent and PSO. For each method, the starting point is initialized uniformly at random within Θ. To ensure fairness, the number of function evaluations is capped at 50 and 10 repetitions are executed for each method. Mean and variance values for training time, training losses, and testing losses are computed and summarized in Table 1.

Additionally, the voltage error for the two input profiles (RCID and drive cycle) is visualized. The tuned parameters corresponding to the median training loss are utilized to generate the error plots depicted in Figures 3 and 4. It is evident that Bayesian optimization more accurately predicts the terminal voltage than the gradient descent and PSO methods, due to its superior tuning of the electrochemical parameters in these tests.

As depicted in Table 1, the Bayesian optimization method surpasses both gradient descent and PSO in both training and testing losses, with improvements of 28.8% and 5.8% on average, respectively. Moreover, Bayesian optimization significantly reduces the variance in the training and testing losses by 95.8% and 72.7%, respectively.

To further showcase the robustness of the Bayesian optimization method on test sets, the box plot in Figure 5 illustrates the testing loss evaluated across the 10 repetitions for the drive cycle.
Table 1. Comparison among gradient descent, PSO and Bayesian optimization in optimization time, and training and testing losses.

                          Time (seconds)         Training loss (V²)     Testing loss (V²)
                          Mean       Variance    Mean       Variance    Mean       Variance
Gradient Descent          215.865    7625.6      33.956     3.662       0.848      0.071
PSO                       580.115    433.45      32.04      0.233       0.641      0.011
Bayesian Optimization     540.395    17.337      31.556     0.007       0.604      0.003
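As a sanity check (not taken from the paper's code), the percentages quoted in the text can be reproduced from the testing-loss columns of Table 1:

```python
# Testing-loss statistics from Table 1
gd_mean, gd_var = 0.848, 0.071     # gradient descent
pso_mean, pso_var = 0.641, 0.011   # PSO
bo_mean, bo_var = 0.604, 0.003     # Bayesian optimization

print(f"mean loss reduction vs. gradient descent: {(gd_mean - bo_mean) / gd_mean:.1%}")   # ~28.8%
print(f"mean loss reduction vs. PSO:              {(pso_mean - bo_mean) / pso_mean:.1%}") # ~5.8%
print(f"variance reduction vs. gradient descent:  {(gd_var - bo_var) / gd_var:.1%}")      # ~95.8%
print(f"variance reduction vs. PSO:               {(pso_var - bo_var) / pso_var:.1%}")    # ~72.7%
```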

Fig. 3. Cell terminal voltage error in RCID test among converged parameters in gradient descent, PSO and Bayesian optimization (data collected at T = 25 °C). [Panels: Gradient descent, Particle Swarm Optimization, Bayesian Optimization; Error (V) vs. Time (s); RCID at 25 °C.]

Fig. 4. Cell terminal voltage error in drive cycle test among converged parameters in gradient descent, PSO and Bayesian optimization (data collected at T = 45 °C). [Panels: Gradient descent, Particle Swarm Optimization, Bayesian Optimization; Error (V) vs. Time (s); drive cycle at 45 °C.]

Fig. 5. Box plot for the testing losses. [Test loss (V²) for Bayesian Optimization, Gradient Descent, and PSO.]

The results indicate that the gradient descent method shows higher mean and median losses, and greater variability, compared to the Bayesian optimization and PSO methods, indicating less consistent performance across test cases. On the other hand, both Bayesian optimization and PSO have narrower ranges compared to gradient descent, indicating more stable performance within a tighter range of losses. It is worth highlighting that the Bayesian optimization method demonstrates better performance consistency compared to PSO by showing less variability in testing losses.

The findings collectively demonstrate that Bayesian optimization emerges as a promising approach for multi-parameter identification of the electrochemical battery model, excelling in both performance and robustness.

7. CONCLUSION

This study addresses the challenge of multi-dimensional parameter tuning in electrochemical battery modeling by formulating the problem as a large-scale optimization and applying the Bayesian optimization method.

Compared to conventional methods such as particle swarm optimization (PSO) and gradient descent, Bayesian optimization exhibits significant improvements. It not only enhances training performance and robustness but also reduces training losses more effectively, achieving average improvements of 28.8% over gradient descent and 5.8% over PSO. Furthermore, Bayesian optimization demonstrates a reduction in variance by 95.8% and 72.7% relative to gradient descent and PSO, respectively. These findings underscore the potential of Bayesian optimization as a robust algorithm for parameter tuning in battery models.

Potential future work may include expanding the optimization to larger sets of parameters. Furthermore, the Bayesian optimization algorithm could be augmented by integrating sensitivity information in the objective function.
As indicated by Makrygiorgos et al. (2023); Ament et al. (2024), leveraging sensitivity information can enhance Bayesian optimization by adjusting acquisition functions and altering the approach to exploring new points. By incorporating these concepts, there is potential to accelerate the optimization process, ultimately improving both performance and robustness in the multi-parameter optimization task.

ACKNOWLEDGEMENTS

The authors wish to acknowledge the Honda Research Institute for the inspiring discussions and helpful feedback that contributed to the research presented in this paper.

REFERENCES

Ament, S., Daulton, S., Eriksson, D., Balandat, M., and Bakshy, E. (2024). Unexpected improvements to expected improvement for bayesian optimization. Advances in Neural Information Processing Systems, 36.
Boyd, S.P. and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Buhmann, K.M. and Criado, J.R. (2023). Consumers' preferences for electric vehicles: The role of status and reputation. Transportation Research Part D: Transport and Environment, 114, 103530.
Cano, Z.P., Banham, D., Ye, S., Hintennach, A., Lu, J., Fowler, M., and Chen, Z. (2018). Batteries and fuel cells for emerging electric vehicle markets. Nature Energy, 3(4), 279–289. doi:10.1038/s41560-018-0108-1.
Cavalca, D.L. and Fernandes, R.A. (2018). Gradient-based mechanism for pso algorithm: A comparative study on numerical benchmarks. In 2018 IEEE Congress on Evolutionary Computation (CEC), 1–7. IEEE.
Chen, W., Liang, J., Yang, Z., and Li, G. (2019). A review of lithium-ion battery for electric vehicle applications and beyond. Energy Procedia, 158, 4363–4368. doi:10.1016/j.egypro.2019.01.783.
Cho, H., Kim, Y., Lee, E., Choi, D., Lee, Y., and Rhee, W. (2020). Basic enhancement strategies when using bayesian optimization for hyperparameter tuning of deep neural networks. IEEE Access, 8, 52588–52608.
Dangwal, C. and Canova, M. (2021). Parameter identification for electrochemical models of lithium-ion batteries using sensitivity analysis. ASME Letters in Dynamic Systems and Control, 1(4), 041014.
Forman, J.C., Moura, S.J., Stein, J.L., and Fathy, H.K. (2012). Genetic identification and fisher identifiability analysis of the doyle–fuller–newman model from experimental cycling of a lifepo4 cell. Journal of Power Sources, 210, 263–275.
Frazier, P.I. (2018). A tutorial on bayesian optimization. arXiv preprint arXiv:1807.02811.
Garnett, R. (2023). Bayesian optimization. Cambridge University Press.
Jin, N., Danilov, D.L., Van den Hof, P.M., and Donkers, M. (2018). Parameter estimation of an electrochemistry-based lithium-ion battery model using a two-step procedure and a parameter sensitivity analysis. International Journal of Energy Research, 42(7), 2417–2430.
Jones, D.R., Schonlau, M., and Welch, W.J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13, 455–492.
Li, T., Sun, X., Lu, Z., Wu, Y., et al. (2016). A novel multiobjective optimization method based on sensitivity analysis. Mathematical Problems in Engineering, 2016.
Makrygiorgos, G., Paulson, J.A., and Mesbah, A. (2023). Gradient-enhanced bayesian optimization via acquisition ensembles with application to reinforcement learning. IFAC-PapersOnLine, 56(2), 638–643.
Malkomes, G., Schaff, C., and Garnett, R. (2016). Bayesian optimization for automated model selection. In Advances in Neural Information Processing Systems, volume 29.
Martinez-Cantin, R. (2017). Bayesian optimization with adaptive kernels for robot control. In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE.
Miranda, M.H., Silva, F.L., Lourenço, M.A., Eckert, J.J., and Silva, L.C. (2023). Particle swarm optimization of elman neural network applied to battery state of charge and state of health estimation. Energy, 285, 129503.
Mockus, J. (2005). The bayesian approach to global optimization. In System Modeling and Optimization: Proceedings of the 10th IFIP Conference New York City, USA, August 31–September 4, 1981, 473–481. Springer.
Noel, M.M. (2012). A new gradient based particle swarm optimization algorithm for accurate computation of global minimum. Applied Soft Computing, 12(1), 353–359.
Park, S., Kato, D., Gima, Z., Klein, R., and Moura, S. (2018). Optimal experimental design for parameterization of an electrochemical lithium-ion battery model. Journal of The Electrochemical Society, 165(7), A1309.
Seals, D., Ramesh, P., D'Arpino, M., and Canova, M. (2022). Physics-based equivalent circuit model for lithium-ion cells via reduction and approximation of electrochemical model. SAE International Journal of Advances and Current Practices in Mobility, 4(2022-01-0701), 1154–1165.
Unagar, A., Tian, Y., Chao, M.A., and Fink, O. (2021). Learning to calibrate battery models in real-time with deep reinforcement learning. Energies, 14(5), 1361.
Varga, T., Király, A., and Abonyi, J. (2013). Improvement of pso algorithm by memory-based gradient search—application in inventory management. In Swarm Intelligence and Bio-Inspired Computation, 403–422. Elsevier.
Wu, J., Chen, X.Y., Zhang, H., Xiong, L.D., Lei, H., and Deng, S.H. (2019). Hyperparameter optimization for machine learning models based on bayesian optimization. Journal of Electronic Science and Technology, 17(1), 26–40.
Wu, L., Liu, K., and Pang, H. (2021). Evaluation and observability analysis of an improved reduced-order electrochemical model for lithium-ion battery. Electrochimica Acta, 368, 137604.
Xu, J., Cai, X., Cai, S., Shao, Y., Hu, C., Lu, S., and Ding, S. (2023). High-energy lithium-ion batteries: recent progress and a promising future in applications. Energy & Environmental Materials, 6(5), e12450.
