CH0003 Nguyen v1
ABSTRACT
We present model reduction techniques for parametrized nonlinear partial differential
equations (PDEs). The main ingredients of our approach are reduced basis (RB) spaces
to provide rapidly convergent approximations to the parametric manifold; Galerkin pro-
jection of the underlying PDEs onto the RB space to provide reduction in the number
of degrees of freedom; and empirical interpolation schemes to provide rapid evaluation
of the nonlinear terms associated with the Galerkin projection. We devise a first-order
empirical interpolation method to construct an inexpensive and stable interpolation of the
nonlinear terms. We consider two different hyper-reduction strategies: hyper-reduction
followed by linearization, and linearization followed by hyper-reduction. We employ
the proposed techniques to develop reduced-order models for a nonlinear convection-
diffusion-reaction problem. Numerical results are presented to illustrate the accuracy,
efficiency, and stability of the reduced-order models.
KEYWORDS
Model reduction, Reduced order model, Reduced basis method, Empirical interpolation,
Hyper-reduction, Galerkin projection, Proper orthogonal decomposition, Finite element
method, Nonlinear PDEs
1.1 INTRODUCTION
Numerous applications in engineering and science require repeated solutions of
parametrized partial differential equations (PDEs). This occurs in the context
of design, optimization, control, and uncertainty quantification. Numerical approx-
1. I would like to thank Professor Peraire of MIT and Professor Patera of MIT for many invaluable
contributions to this work. I would also like to thank Professor Yvon Maday of University Paris
VI for many helpful discussions. This work was supported by the U.S. Department of Energy
under Contract No. DE-NA0003965, the Air Force Office of Scientific Research under Grant No.
FA9550-22-1-0356, the National Science Foundation under grant number NSF-PHY-2028125,
and the MIT Portugal program under the seed grant number 6950138.
2. An integral whose integrand is a nonlinear function of the state variables is called a nonlinear integral.
Model reduction techniques for parametrized nonlinear partial differential equations Chapter | 1 3
We shall assume that $\mathcal{N}_q$ is large enough that $\bm{I}_{\mathcal{N}}(\bm{\mu})$ can be considered indistinguishable from $\bm{I}(\bm{\mu})$ for any $\bm{\mu} \in \mathcal{D}$. Following the custom in the reduced-basis community, $\bm{I}_{\mathcal{N}}(\bm{\mu})$ is referred to as the "truth" approximation to the exact integrals $\bm{I}(\bm{\mu})$. Gaussian quadrature is general in the sense that its quadrature points and weights are independent of the integrand. Nonetheless, its computational expense renders it inefficient for use in model reduction techniques. Hyper-reduction techniques aim to reduce the (online) computational cost of evaluating the parametrized integrals. Existing hyper-reduction techniques can be broadly classified into empirical quadrature methods, empirical interpolation methods, and integral interpolation methods.
$$\bm{I}_K(\bm{\mu}) = \sum_{k=1}^{K} \omega_k^g\, g(u(\bar{\bm{x}}_k^g, \bm{\mu}))\, \bm{\varphi}(\bar{\bm{x}}_k^g) = \bm{D}_{NK}^g\, \bm{b}_K^g(\bm{\mu}). \quad (1.3)$$
Here $(\bar{\bm{x}}_k^g, \omega_k^g)$, $1 \le k \le K$, are empirical quadrature points and weights, which depend on the parametrized integrand and are chosen such that the integration error $\| \bm{I}_K(\bm{\mu}) - \bm{I}_{\mathcal{N}}(\bm{\mu}) \|$ is sufficiently small for any $\bm{\mu} \in \mathcal{D}$, and such that $K \ll \mathcal{N}_q$. Note that $\bm{D}_{NK}^g \in \mathbb{R}^{N \times K}$ has entries $D_{NK,nk}^g = \omega_k^g \varphi_n(\bar{\bm{x}}_k^g)$ and $\bm{b}_K^g(\bm{\mu}) \in \mathbb{R}^K$ has entries $b_{K,k}^g(\bm{\mu}) = g(u(\bar{\bm{x}}_k^g, \bm{\mu}))$ for $1 \le n \le N$, $1 \le k \le K$.
The quadrature points and weights are determined in the offline stage. The online cost of evaluating $\bm{I}_K(\bm{\mu})$ for any $\bm{\mu} \in \mathcal{D}$ is $O(NK)$.
The first scheme of this type was introduced by An et al. [3] in the context of computer graphics applications. The method determines a reduced set of quadrature points and associated positive weights so that the integration error is minimized over a set of representative samples of the integrand. This method was further extended and introduced into the model reduction community by Farhat et al. [37, 38] and Hernández et al. [48]. In [100], Ryu and Boyd propose an $\ell_1$ framework for empirical quadrature. This quadrature framework is further developed by DeVore et al. [29] and extended to the parametric context in [84]. The $\ell_1$ norm yields quadrature rules that are sparse. Furthermore, the offline problem can be cast as a linear program (LP), which is efficiently treated by a simplex method [84, 116]. Empirical quadrature methods offer a unique advantage in that the Jacobian matrix of the resulting ROM inherits the spectral properties of its FOM counterpart [38, 48]. In essence, if the FE stiffness matrix is symmetric and positive definite, then so is its reduced-order counterpart. This desirable feature might not be found in other hyper-reduction techniques.
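As a concrete sketch of the offline construction of such a sparse quadrature rule, the following toy example selects a reduced set of points by a non-negative least-squares solve. This stands in for the $\ell_1$/linear-programming formulation discussed above; the integrand family, parameter samples, and tolerances are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical setup: Nq "truth" quadrature points on [0, 1] with uniform
# weights, and a parametrized family of integrands g(x; mu) = exp(-mu * x).
Nq = 200
x = np.linspace(0.0, 1.0, Nq)
w_truth = np.full(Nq, 1.0 / Nq)
mus = np.linspace(1.0, 10.0, 30)   # training parameter samples

# Each row of Q holds one training integrand at the truth points;
# b holds the corresponding truth integrals.
Q = np.exp(-np.outer(mus, x))
b = Q @ w_truth

# Solve min ||Q w - b|| subject to w >= 0. NNLS returns at most as many
# nonzero weights as training integrands, mimicking the sparse selection
# of empirical quadrature (the references use an l1/LP formulation).
w, _ = nnls(Q, b)
keep = w > 1e-12
print(f"{keep.sum()} of {Nq} points retained")

# Verify the reduced rule on an unseen parameter.
mu_test = 5.5
I_truth = np.exp(-mu_test * x) @ w_truth
I_eq = np.exp(-mu_test * x[keep]) @ w[keep]
assert abs(I_truth - I_eq) < 1e-6
```

Because the reduced rule reproduces the training integrals, and the exponential family is smooth in the parameter, the rule remains accurate at parameters not in the training set.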
The linear system (1.5) can be solved for the coefficient vector $\bm{a}_M(\bm{\mu})$ as follows:
$$\bm{a}_M(\bm{\mu}) = (\bm{B}_M^g)^{-1}\, \bm{b}_M^g(\bm{\mu}), \quad (1.6)$$
which is well defined because the basis functions are linearly independent.
The surrogate integrals (1.7) are computed by using an offline-online decomposition. In the offline stage, we construct the basis set $\Psi_M^g$ and the interpolation point set $T_M^g$, and compute and store the matrix $\bm{D}_{NM}^g$. The offline stage can be costly due to the construction of the basis set and the computation of the matrix $\bm{D}_{NM}^g$. In the online stage, for any given $\bm{\mu} \in \mathcal{D}$, we evaluate $b_{M,m}^g(\bm{\mu}) = g(u(\bm{x}_m^g, \bm{\mu}))$, $1 \le m \le M$, and calculate $\bm{I}_M(\bm{\mu})$ from (1.7). The online cost of evaluating $\bm{I}_M(\bm{\mu})$ for any $\bm{\mu} \in \mathcal{D}$ is only $O(MN)$. The approximation accuracy depends crucially on the basis set $\Psi_M^g$ and the interpolation point set $T_M^g$.
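A minimal sketch of this online stage follows, assuming the offline quantities (the matrix $\bm{D}_{NM}^g$ and the interpolation matrix with entries $\psi_m^g(\bm{x}_k^g)$) have already been computed and stored. The stand-in model for $u$ and the choice $g(u) = e^u$ are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical offline data for an EIM surrogate with N test functions
# and M interpolation points.
N, M = 12, 6
D_NM = rng.standard_normal((N, M))          # stand-in for the matrix D_NM^g
B_M = np.tril(rng.standard_normal((M, M)))  # interpolation matrix psi_m(x_k):
np.fill_diagonal(B_M, 1.0)                  # lower triangular, unit diagonal

x_pts = np.linspace(0.0, 1.0, M)            # stand-in interpolation points

def surrogate_integrals(mu):
    """Online stage: I_M(mu) = D_NM B_M^{-1} b_M(mu), at O(M*N) cost."""
    u = np.sin(mu * x_pts)                  # u(x_k, mu) at the M points (toy model)
    b_M = np.exp(u)                         # nonlinear function g(u) = exp(u)
    a_M = np.linalg.solve(B_M, b_M)         # interpolation coefficients
    return D_NM @ a_M                       # surrogate integrals, length N

I_M = surrogate_integrals(mu=2.0)
print(I_M.shape)   # (12,)
```

Because the interpolation matrix is lower triangular, the small solve costs only $O(M^2)$, so the dominant online cost is the $O(MN)$ matrix-vector product.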
Empirical interpolation focuses on the approximation of the parametrized
nonlinear function 𝑔(𝑢(𝒙, 𝝁)), whereas empirical quadrature aims to directly
approximate the integral of the parametrized integrands 𝑔(𝑢(𝒙, 𝝁))𝝋(𝒙). Em-
pirical quadrature has online complexity of 𝑂 (𝐾 𝑁), where 𝐾 is the number
of empirical quadrature points. Hence, both approaches offer linear scaling in
terms of the number of interpolation/quadrature points. However, since empiri-
cal quadrature constructs quadrature points based on the integrands, 𝐾 may scale
with the number of integrands 𝑁.
The empirical interpolation method (EIM) was first proposed in the seminal paper [8] for constructing the basis set and the interpolation point set, and for developing efficient RB approximations of non-affine PDEs. Shortly thereafter, the empirical interpolation method was extended to develop efficient ROMs for nonlinear PDEs [42]. Since the pioneering work [8, 42], the EIM has been widely used to construct efficient ROMs of nonaffine and nonlinear PDEs for different applications [42, 72, 31, 70, 54, 53, 24]. Rigorous a posteriori error bounds for the empirical interpolation method were developed by Eftang et al. [34].
Several attempts have been made to extend the EIM in diverse ways. The best-points interpolation method (BPIM) [74, 75] employs proper orthogonal decomposition to generate the basis set and a least-squares method to compute the interpolation point set. The discrete empirical interpolation method (DEIM) [22] is a discrete variant of the empirical interpolation method: it considers a collection of vectors arising from the spatial discretization of parameter-dependent functions and selects a subset of vectors and associated interpolation indices. The generalized empirical interpolation method (GEIM) [66, 67] generalizes the EIM concept by replacing the pointwise function evaluations with more general measures defined as linear functionals. More recently, the first-order empirical interpolation method (FOEIM) [79] makes use of partial derivatives of the parametrized nonlinear function to construct the basis functions and interpolation points. In Section 1.3, we extend this new hyper-reduction method to approximate the parametrized integrals using higher-order partial derivatives.
$$\bm{I}_P(\bm{\mu}) = \bm{D}_{NP}\, \bm{I}_{\mathcal{N}}(\bm{\mu}), \quad (1.8)$$
which allows us to obtain the first interpolation point and the first basis function
$$\bm{x}_1^g = \arg \sup_{\bm{x} \in \Omega} |\xi_{j_1}(\bm{x})|, \qquad \psi_1^g(\bm{x}) = \xi_{j_1}(\bm{x})/\xi_{j_1}(\bm{x}_1^g). \quad (1.11)$$
We set $\Psi_1^g = \mathrm{span}\{\psi_1^g\}$ and $T_1^g = \{\bm{x}_1^g\}$. For $M = 2, \ldots, N$, we solve the linear systems
$$\sum_{m=1}^{M-1} \psi_m^g(\bm{x}_k^g)\, \sigma_{nm} = \xi_n(\bm{x}_k^g), \quad 1 \le k \le M-1, \quad (1.12)$$
and set
$$\bm{x}_M^g = \arg \sup_{\bm{x} \in \Omega} |r_M(\bm{x})|, \qquad \psi_M^g(\bm{x}) = r_M(\bm{x})/r_M(\bm{x}_M^g), \quad (1.14)$$
$$G(u, \zeta) = g(\zeta) + \frac{\partial g(\zeta)}{\partial u}\, (u - \zeta). \quad (1.21)$$
Taking $u = \zeta_m$ and $\zeta = \zeta_n$, where $(\zeta_m, \zeta_n)$ are any pair of functions in $W_N^u$,
$$\eta_k = g(\zeta_n, \bm{\mu}_n) + \frac{\partial g(\zeta_n, \bm{\mu}_n)}{\partial u}\, (\zeta_m - \zeta_n) + \frac{\partial g(\zeta_n, \bm{\mu}_n)}{\partial \bm{\mu}} \cdot (\bm{\mu}_m - \bm{\mu}_n). \quad (1.24)$$
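The pairing of snapshots in (1.24) can be sketched as follows for a toy scalar problem: $N$ solution snapshots yield $N^2$ first-order snapshots. The snapshot functions, parameter values, and the choice of $g$ are illustrative assumptions, not from the text.

```python
import numpy as np

# Toy setting: N solution snapshots zeta_n(x) at parameters mu_n, a nonlinear
# function g(u, mu) = mu * u**2, and its partial derivatives. Names follow
# the symbols in (1.24); the snapshots themselves are made up.
x = np.linspace(0.0, 1.0, 101)
mus = np.array([0.5, 1.0, 2.0])
zetas = [np.sin(mu * np.pi * x) for mu in mus]     # pretend FOM solutions

g = lambda u, mu: mu * u**2
g_u = lambda u, mu: 2.0 * mu * u                   # dg/du
g_mu = lambda u, mu: u**2                          # dg/dmu

# First-order snapshots over all pairs (m, n):
#   eta = g(zeta_n) + g_u(zeta_n)(zeta_m - zeta_n) + g_mu(zeta_n)(mu_m - mu_n)
etas = []
for zn, mn in zip(zetas, mus):
    for zm, mm in zip(zetas, mus):
        etas.append(g(zn, mn) + g_u(zn, mn) * (zm - zn) + g_mu(zn, mn) * (mm - mn))

print(len(etas))   # N^2 = 9 snapshots from N = 3 FOM solutions
```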
$$\min_{\varphi} \sum_{i=1}^{N^2} \Big\| \eta_i - \frac{(\eta_i, \varphi)}{\|\varphi\|^2}\, \varphi \Big\|^2, \quad (1.25)$$
subject to the constraint $\|\varphi\|^2 = 1$. It is shown in [56] (see Chapter 3) that the problem (1.25) is equivalent to solving the eigenfunction equation
$$\frac{1}{N^2} \sum_{i=1}^{N^2} (\eta_i, \varphi)\, \eta_i = \lambda\, \varphi. \quad (1.26)$$
$$\varphi = \sum_{i=1}^{N^2} a_i\, \eta_i. \quad (1.27)$$
$$\bm{C} \bm{a} = \lambda \bm{a}, \quad (1.28)$$
We solve the eigenvalue problem (1.28) for the first $K$ eigenvalues and eigenvectors, from which the POD basis functions $\varphi_k$, $1 \le k \le K$, are constructed by (1.27). We then introduce the following space
$$\Phi_K^g := \mathrm{span}\{\phi_k = \sqrt{\lambda_k}\, \varphi_k,\ 1 \le k \le K\}. \quad (1.29)$$
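The eigenproblem (1.26)-(1.29) is typically solved via the method of snapshots, as in the following sketch. The synthetic snapshots, the grid, and the discrete inner product are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Method of snapshots: build the correlation matrix C_ij = (eta_i, eta_j)/N2,
# solve C a = lambda a (eq. (1.28)), assemble varphi_k from (1.27), and form
# the scaled POD basis phi_k = sqrt(lambda_k) varphi_k of (1.29).
N2, n_grid, K = 9, 101, 4
x = np.linspace(0.0, 1.0, n_grid)
dx = x[1] - x[0]
etas = np.array([np.sin((i + 1) * np.pi * x) + 0.01 * rng.standard_normal(n_grid)
                 for i in range(N2)])

C = (etas @ etas.T) * dx / N2          # (eta_i, eta_j) via uniform-weight quadrature
lam, A = np.linalg.eigh(C)             # eigh returns ascending eigenvalues
lam, A = lam[::-1], A[:, ::-1]         # sort descending

pod = []
for k in range(K):
    phi = A[:, k] @ etas               # varphi_k = sum_i a_i eta_i  (eq. (1.27))
    phi /= np.sqrt((phi @ phi) * dx)   # normalize to ||varphi_k|| = 1
    pod.append(np.sqrt(lam[k]) * phi)  # scaled basis function of (1.29)

print(len(pod), lam[:K])
```

The correlation matrix has size $N^2 \times N^2$ regardless of the grid resolution, which keeps the eigenproblem small.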
and set
$$\bm{x}_1^g = \arg \sup_{\bm{x} \in \Omega} |\phi_{j_1}(\bm{x})|, \qquad \psi_1^g(\bm{x}) = \phi_{j_1}(\bm{x})/\phi_{j_1}(\bm{x}_1^g). \quad (1.31)$$
For $M = 2, \ldots, K$, we solve the linear systems
$$\sum_{m=1}^{M-1} \psi_m^g(\bm{x}_k^g)\, \sigma_{lm} = \phi_l(\bm{x}_k^g), \quad 1 \le k \le M-1,\ 1 \le l \le K, \quad (1.32)$$
we then find
$$j_M = \arg \max_{1 \le l \le K} \Big\| \phi_l(\bm{x}) - \sum_{m=1}^{M-1} \sigma_{lm}\, \psi_m^g(\bm{x}) \Big\|_{L^\infty(\Omega)}, \quad (1.33)$$
and set
$$\bm{x}_M^g = \arg \sup_{\bm{x} \in \Omega} |r_M(\bm{x})|, \qquad \psi_M^g(\bm{x}) = r_M(\bm{x})/r_M(\bm{x}_M^g), \quad (1.34)$$
where the residual function $r_M(\bm{x})$ is given by
$$r_M(\bm{x}) = \phi_{j_M}(\bm{x}) - \sum_{m=1}^{M-1} \sigma_{j_M m}\, \psi_m^g(\bm{x}); \quad (1.35)$$
finally, we define
$$\Psi_M^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}, \qquad T_M^g = \{\bm{x}_1^g, \ldots, \bm{x}_M^g\}. \quad (1.36)$$
𝑔 𝑔 𝑔 𝑔
The outputs of the first-order EIP are Ψ𝑀 and 𝑇𝑀 satisfying Ψ𝑀 −1 ⊂ Ψ𝑀 and
𝑔 𝑔
𝑇𝑀 −1 ⊂ 𝑇𝑀 . In practice, the supremum sup 𝒙∈Ω |𝑟 𝑀 (𝒙)| is computed on the set
of quadrature points on all elements in the mesh. In other words, the interpolation
points are selected from the quadrature points.
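The greedy selection (1.31)-(1.36) can be sketched as follows on a discrete grid. For simplicity the first index $j_1$ is taken to be the first basis function, and the input basis is a made-up family of smooth functions rather than an actual LT-POD basis.

```python
import numpy as np

def eim_greedy(Phi, M):
    """Greedy interpolation-point selection in the spirit of (1.31)-(1.36).

    Phi : (K, n) array whose rows are basis functions phi_l sampled on a grid.
    Returns the interpolation indices and the (M, n) array of psi functions.
    """
    idx = [int(np.argmax(np.abs(Phi[0])))]        # first point from phi_{j_1}
    Psi = [Phi[0] / Phi[0, idx[0]]]               # first normalized basis function
    for _ in range(2, M + 1):
        P = np.array(Psi)                         # current psi functions
        B = P[:, idx].T                           # psi_m(x_k): lower triangular
        Sigma = np.linalg.solve(B, Phi[:, idx].T).T   # coefficients sigma_lm (1.32)
        R = Phi - Sigma @ P                       # interpolation residuals, all l
        j = int(np.argmax(np.abs(R).max(axis=1)))     # worst-approximated index (1.33)
        r = R[j]
        k = int(np.argmax(np.abs(r)))             # new interpolation point (1.34)
        idx.append(k)
        Psi.append(r / r[k])                      # normalized residual (1.34)-(1.35)
    return idx, np.array(Psi)

# Toy basis: K smooth functions on a uniform grid.
x = np.linspace(0.0, 1.0, 201)
Phi = np.array([np.cos(l * np.pi * x) for l in range(1, 7)])
idx, Psi = eim_greedy(Phi, M=4)
B = Psi[:, idx].T   # interpolation matrix: lower triangular, unit diagonal
print(idx)
```

By construction each new $\psi_M$ vanishes at all previously selected points and equals one at its own point, which is why the interpolation matrix is lower triangular with unit diagonal.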
For any given parameter sample set $S_N$, the first-order EIM algorithm constructs nested sets of interpolation points $T_M^g$, $1 \le M \le K$, and nested subspaces $\Psi_M^g$, $1 \le M \le K$. Hence, the first-order EIM leverages first-order partial derivatives to generate $K$ interpolation points and $K$ basis functions. For the same sample set $S_N$, the original EIM can only generate $N$ interpolation points and $N$ basis functions. If we want to generate $K > N$ interpolation points with the original EIM, then we must expand the parameter sample set to include $K$ parameter points. Unfortunately, this will demand $K$ solutions of the underlying FOM. For the nonlinear PDEs considered herein, the computational complexity of ROMs via empirical interpolation in the online stage is $O(MN^2 + N^3)$ per Newton iteration. While the online cost scales cubically with $N$, it scales linearly with $M$. Therefore, we can gainfully use $M > N$ to obtain stable and accurate ROMs. The first-order EIM makes it possible to construct ROMs with $M > N$ without increasing the parameter sample set $S_N$.
This theorem implies that the procedure yields unique interpolation points and linearly independent basis functions as long as $M$ is less than or equal to the dimension of the function space used to construct the basis functions and the interpolation points. Furthermore, the procedure reorders members of the function space in such a way that $\Psi_M^g = \mathrm{span}\{\phi_{j_1}, \ldots, \phi_{j_M}\} = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}$. Hence, the procedure allows for selecting a subset of basis functions from a larger set.
If $\hat{d}_n(\mathcal{G})$ converges to zero as $n$ goes to infinity as fast as $d_n(\mathcal{G})$, then the interpolation procedure $\mathcal{I}_n(\mathcal{X}_n, T_n)$ is stable and accurate.
The interpolation $n$-width raises two important questions: Is there a constructive optimal selection of the interpolation points? Is there a constructive optimal construction of the approximation subspaces? The first-order EIM provides a positive answer to the first question by generating a unique set of interpolation points that yields a stable and unique interpolant. The first-order EIM also provides a positive answer to the second question by using first-order partial derivatives to construct good approximation spaces. Since $\Psi_M^g$ converges rapidly to $\Phi_K^g$ as $M$ tends to $K$, the first-order EIM can yield good approximation spaces for the parametric manifold $\mathcal{G}$. Indeed, we note from the first-order EIM procedure that
$$\|q - \mathcal{I}_M[q]\|_{L^\infty(\Omega)} \le \|\phi_{j_{M+1}} - \mathcal{I}_M[\phi_{j_{M+1}}]\|_{L^\infty(\Omega)}, \quad \forall q \in \Phi_K^g. \quad (1.43)$$
This last quantity is one of the outputs of the first-order EIP and plays the role of an a priori error estimate. The convergence of $\|\phi_{j_{M+1}} - \mathcal{I}_M[\phi_{j_{M+1}}]\|_{L^\infty(\Omega)}$ as $M$ increases can give a sense of the convergence of the interpolation error.
𝑔
Proof. Since by assumption 𝑔(𝑢(𝒙, 𝝁)) ∈ Ψ𝑀+𝑃 , we have
𝑀+𝑃
∑︁
𝑔
𝑔(𝑢(𝒙, 𝝁)) − 𝑔 𝑀 (𝒙, 𝝁) = 𝜅 𝑚 ( 𝝁) 𝜓 𝑚 (𝒙),
𝑚=1
Model reduction techniques for parametrized nonlinear partial differential equations Chapter | 1 15
which yields
𝑀+𝑃
∑︁
𝑔 𝑔 𝑔 𝑔
𝜓 𝑚 (𝒙𝑖 ) 𝜅 𝑚 ( 𝝁) = 𝑔(𝑢(𝒙𝑖 , 𝝁)) − 𝑔 𝑀 (𝒙𝑖 , 𝝁), 1 ≤ 𝑖 ≤ 𝑀 + 𝑃.
𝑚=1
𝑔 𝑔 𝑔 𝑔
Since 𝑔(𝑢(𝒙𝑖 , 𝝁)) − 𝑔 𝑀 (𝒙𝑖 , 𝝁) = 0, 1 ≤ 𝑖 ≤ 𝑀 and the matrix 𝜓 𝑚 (𝒙𝑖 ) is lower
triangular with unity diagonal, we have 𝜅 𝑚 ( 𝝁) = 0, 1 ≤ 𝑚 ≤ 𝑀. Therefore, the
above system reduces to the following system
𝑃
∑︁
𝑔 𝑔 𝑔 𝑔
𝜓 𝑀+ 𝑗 (𝒙 𝑀+𝑖 ) 𝜅 𝑀+ 𝑗 ( 𝝁) = 𝑔(𝑢(𝒙 𝑀+𝑖 , 𝝁)) − 𝑔 𝑀 (𝒙 𝑀+𝑖 , 𝝁), 1 ≤ 𝑖 ≤ 𝑃.
𝑗=1
The desired result directly follows from taking the 𝐿 ∞ (Ω) norm on both sides,
𝑔
using the triangle inequality, and ∥𝜓 𝑀+ 𝑗 (𝒙) ∥ 𝐿 ∞ (Ω) = 1, 1 ≤ 𝑗 ≤ 𝑃.
The operation count of evaluating the error estimator (1.45) is only 𝑂 (𝑃2 ).
Hence, the error estimator is very inexpensive. The integration errors are
bounded by
As shown in Figure 1.1(a), the parameter points in $S_{N_{\max}}$ are mainly distributed around the corner $(0.05, 0.05)$ of the parameter domain. We consider three different values of $M$, namely, $M = N$, $M = 2N$, and $M = 3N$, for the first-order EIM. The interpolation points are plotted in Figure 1.1(b) for $M = 2N = 128$. We note that the interpolation points are largely allocated along the top and right boundaries of the physical domain $\Omega$. These results are expected because $u(\bm{x}, \bm{\mu})$ develops boundary layers along the top and right boundaries when $\bm{\mu}$ is small.
FIGURE 1.1 Distribution of the parameter sample set $S_{N_{\max}}$ in the parameter domain (a), and distribution of the interpolation point set $T_M^g$ ($M = 2N = 128$) in the physical domain (b).
$$\varepsilon_M = \frac{1}{N_{\rm test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \varepsilon_M(\bm{\mu}), \qquad \delta_M = \frac{1}{N_{\rm test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} |I(\bm{\mu}) - I_M(\bm{\mu})|$$
as the average interpolation error and the average integration error, respectively.
We display in Figure 1.2 $\varepsilon_M$ and $\Lambda_M$ as a function of $N$ for the EIM and the first-order EIM (FOEIM) with $M = N$, $M = 2N$, and $M = 3N$. We observe that $\varepsilon_M$ converges rapidly with $N$, while the Lebesgue constant $\Lambda_M$ grows slowly with $N$. We see from Figure 1.2(a) that FOEIM ($M = N$) yields smaller interpolation errors than EIM, which can be attributed to the use of partial derivatives. We also observe that FOEIM ($M = 2N$) yields significantly smaller interpolation errors than both EIM and FOEIM ($M = N$). Indeed, the interpolation errors for FOEIM ($M = 2N$) are several orders of magnitude smaller than those for EIM. Increasing $M$ to $3N$ reduces the interpolation errors even further.
Table 1.1 shows the average interpolation and integration errors of the first-
order EIM for different values of 𝑁 and for 𝑀 = 𝑁, 𝑀 = 2𝑁, and 𝑀 = 3𝑁.
We see from Table 1.1 that the errors drop rapidly as 𝑁 increases. Increasing
𝑀 from 𝑀 = 𝑁 to 𝑀 = 2𝑁 and 𝑀 = 3𝑁 considerably reduces the errors. We
observe that the integration errors are one or two orders of magnitude less than
FIGURE 1.2 The average interpolation error (a) and the Lebesgue constant (b) as a function of $N$ for the EIM and the first-order EIM.
the interpolation errors. To verify how sharp the error estimators are, we define
$$\hat{\kappa}_{M,P} = \frac{1}{N_{\rm test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \frac{\hat{\varepsilon}_{M,P}(\bm{\mu})}{\varepsilon_M(\bm{\mu})}, \qquad \kappa_{M,P} = \frac{1}{N_{\rm test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \frac{\hat{\delta}_{M,P}(\bm{\mu})}{|I(\bm{\mu}) - I_M(\bm{\mu})|}$$
          M = N                M = 2N               M = 3N
  N     ε_M       δ_M        ε_M       δ_M        ε_M       δ_M
  4   2.17e-2   1.89e-3    8.58e-3   4.89e-4    4.69e-3   4.66e-4
  9   4.66e-3   2.01e-4    1.73e-3   1.80e-4    4.70e-4   1.72e-5
 16   1.28e-3   1.02e-4    1.85e-4   6.86e-6    4.38e-5   9.04e-7
 25   3.32e-4   1.15e-5    2.90e-5   2.81e-7    7.21e-6   3.18e-8
 36   1.16e-4   3.15e-6    4.57e-6   8.98e-8    5.91e-7   4.23e-9
 49   3.24e-5   5.71e-7    7.79e-7   1.25e-8    1.08e-7   1.25e-9
 64   7.74e-6   2.19e-7    1.48e-7   3.10e-9    1.99e-8   1.67e-10
 81   2.25e-6   3.00e-8    4.10e-8   6.00e-10   4.09e-9   1.56e-11
TABLE 1.1 Average interpolation and integration errors of the first-order EIM for different values of N and for M = N, M = 2N, and M = 3N.
             M = N                M = 2N               M = 3N
  N     κ̂_{M,P}  κ_{M,P}     κ̂_{M,P}  κ_{M,P}     κ̂_{M,P}  κ_{M,P}
  4      2.31    145.17       2.02    120.97       1.69    100.91
  9      2.00    271.79       1.47     44.32       1.77    128.01
 16      2.15    105.97       1.51    358.13       1.98   1726.29
 25      1.88    142.02       2.01   1443.84       1.86    789.49
 36      1.45    514.87       2.15   2023.45       1.71    863.79
 49      1.83    327.36       1.81    588.78       1.80    250.53
 64      2.06    267.40       1.95    139.57       2.28    215.74
 81      1.60    393.80       1.83    182.06       1.77    353.96
TABLE 1.2 Average effectivities for the interpolation and integration errors of the first-order EIM for different values of N and for M = N, M = 2N, and M = 3N. Note that P = 4 is used to compute the error estimates.
Because the dimension of the FE space $X$ is large, the FOM (1.50) may be expensive for the many-query context that requires repeated evaluation of the input-output relationship. Model reduction techniques are needed to provide rapid yet accurate predictions of the input-output relationship induced by the parametric FOM. For nonlinear PDEs, the model reduction process is carried out in two steps. In the first step, Galerkin (or, more generally, Petrov-Galerkin) projection is used to project the underlying FOM onto a low-dimensional subspace. In the second step, a hyper-reduction method is employed to reduce the computational cost of evaluating the nonlinear terms. We describe the first step next.
the increment $\delta u_N(\bm{\mu}) \in W_N^u$ as the solution of the following linear system
$$\bar{a}(\delta u_N(\bm{\mu}), v; \bm{\mu}) = -\bar{r}(v; \bm{\mu}), \quad \forall v \in W_N^u, \quad (1.52)$$
where the bilinear form $\bar{a}$ is given by
$$\bar{a}(w, v; \bm{\mu}) \equiv \int_\Omega \nabla w \cdot \nabla v - \int_\Omega w\, \bm{\mu} \cdot \nabla v + \int_\Omega g_u(\bar{u}_N)\, w\, v. \quad (1.53)$$
Here $g_u(\cdot) = \partial g(\cdot)/\partial u$ denotes the first-order partial derivative. The bar symbol on the bilinear form and linear functional signifies their dependency on the current iterate $\bar{u}_N(\bm{\mu})$.
We express $\delta u_N(\bm{\mu}) = \sum_{n=1}^{N} \delta\alpha_{N,n}(\bm{\mu})\, \zeta_n$ and choose test functions $v = \zeta_j$, $1 \le j \le N$, in (1.52) to obtain the linear system in matrix form
$$\bm{J}_N(\bm{\mu})\, \delta\bm{\alpha}_N(\bm{\mu}) = -\bm{r}_N(\bm{\mu}), \quad (1.55)$$
where $\bm{J}_N(\bm{\mu}) \in \mathbb{R}^{N \times N}$ and $\bm{r}_N(\bm{\mu}) \in \mathbb{R}^N$ have entries
$$J_{N,ij}(\bm{\mu}) = \bar{a}(\zeta_j, \zeta_i; \bm{\mu}), \qquad r_{N,i}(\bm{\mu}) = \bar{r}(\zeta_i; \bm{\mu}), \quad i, j = 1, \ldots, N. \quad (1.56)$$
Both the matrix $\bm{J}_N(\bm{\mu})$ and the vector $\bm{r}_N(\bm{\mu})$ must be computed at each Newton iteration since they depend on the current iterate $\bar{u}_N(\bm{\mu})$. However, they are computationally expensive to form due to the presence of nonlinear integrals in both the bilinear form $\bar{a}$ and the functional $\bar{r}$. Consequently, although the linear system (1.55) is small, it is computationally expensive due to the $\mathcal{N}$-dependent complexity of forming $\bm{J}_N(\bm{\mu})$ and $\bm{r}_N(\bm{\mu})$. As a result, the RB approximation does not offer a significant speedup over the FE approximation.
We devise two different hyper-reduction approaches to deal with the nonlinear
terms. The first approach is hyper-reduction followed by linearization: the first-
order EIM is applied to approximate the nonlinear integrals in the nonlinear
system (1.51); Newton’s method is then used to linearize the resulting system.
The second approach is linearization followed by hyper-reduction: the first-order
EIM is applied to approximate the nonlinear integrals in the bilinear form (1.53)
and the linear functional (1.54) of the linear system (1.52), which results from
the linearization of the nonlinear system (1.51) by Newton’s method. We are
going to describe the first approach.
$$\bm{\beta}_M^g(\bm{\mu}) = (\bm{B}_M^g)^{-1}\, \bm{b}_M^g(\bm{\mu}). \quad (1.57)$$
Here $B_{M,km}^g = \psi_m^g(\bm{x}_k^g)$ and $b_{M,k}^g(\bm{\mu}) = g(u_N(\bm{x}_k^g, \bm{\mu}))$ for $1 \le k, m \le M$. We compute $T_M^g = \{\bm{x}_m^g\}_{m=1}^M$ and $\Psi_M^g = \mathrm{span}\{\psi_m^g(\bm{x}),\ 1 \le m \le M\}$ by applying the EIM to the LT-POD space $\Phi_K^g$ defined in (1.29).
By replacing the nonlinear terms in (1.51) with their interpolants, we arrive at the following ROM: for any $\bm{\mu} \in \mathcal{D}$, we find $u_{N,M}(\bm{\mu}) \in W_N^u$ as the solution of
$$\int_\Omega \nabla u_{N,M} \cdot \nabla v - \int_\Omega u_{N,M}\, \bm{\mu} \cdot \nabla v + \sum_{m=1}^{M} \beta_{M,m}^g(\bm{\mu}) \int_\Omega \psi_m^g\, v = 0, \quad \forall v \in W_N^u. \quad (1.58)$$
By expressing $u_{N,M}(\bm{\mu}) = \sum_{n=1}^{N} \alpha_{N,n}(\bm{\mu})\, \zeta_n$ and choosing test functions $v = \zeta_j$, $1 \le j \le N$, in (1.58), we arrive at the algebraic system
$$(\bm{A}_N - \mu_1 \bm{C}_N^1 - \mu_2 \bm{C}_N^2)\, \bm{\alpha}_N(\bm{\mu}) + \bm{G}_{NM}^g\, \bm{\beta}_M(\bm{\mu}) = 0, \quad (1.59)$$
where
$$A_{N,jn} = \int_\Omega \nabla \zeta_n \cdot \nabla \zeta_j, \qquad C_{N,jn}^i = \int_\Omega \frac{\partial \zeta_j}{\partial x_i}\, \zeta_n, \qquad G_{NM,jm}^g = \int_\Omega \psi_m^g\, \zeta_j, \quad (1.60)$$
of the nonlinear terms based on the first-order EIM. This step is known as hyper-reduction. Although the system (1.62) is nonlinear, it is purely algebraic in the sense that it does not contain any integrals. Since the number of unknowns is equal to the RB dimension $N$, the system can be efficiently solved by using Newton's method to linearize it. Thus, the hyper-reduction step precedes the linearization step.
We use Newton's method to linearize (1.62) at a given iterate $\bar{\bm{\alpha}}_N(\bm{\mu})$, and thus arrive at the following linear system
where
$$I_{MN,mn}(\bar{\bm{\alpha}}_N(\bm{\mu})) = g_u\Big( \sum_{j=1}^{N} \bar{\alpha}_{N,j}(\bm{\mu})\, E_{MN,mj} \Big)\, E_{MN,mn}. \quad (1.66)$$
Note that $g_u(\cdot) = \partial g(\cdot)/\partial u$ is the first-order derivative. We solve the linear system (1.63) for $\delta\bm{\alpha}_N(\bm{\mu})$ and update the RB vector $\bm{\alpha}_N(\bm{\mu})$. Upon convergence of the Newton iterations, we calculate the RB output as
$$s_{N,M}(\bm{\mu}) = \sum_{n=1}^{N} L_{N,n}\, \alpha_{N,n}(\bm{\mu}), \qquad L_{N,n} = \ell(\zeta_n), \quad 1 \le n \le N. \quad (1.67)$$
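A toy sketch of such a hyper-reduced Newton iteration follows, for a system with the same structure as (1.59): the residual involves $g$ evaluated only at $M$ interpolation points, and the Jacobian is assembled from precomputed matrices via the chain rule as in (1.66). All operators here are random stand-ins, not outputs of an actual offline stage.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy hyper-reduced system:  A alpha + Q g(E alpha) = f,
# with Q = G_NM (B_M)^{-1} precomputed offline and E_{MN,mn} = zeta_n(x_m).
N, M = 8, 16
A = np.eye(N) * 4.0 + 0.1 * rng.standard_normal((N, N))
Q = 0.1 * rng.standard_normal((N, M))
E = rng.standard_normal((M, N)) / N
f = rng.standard_normal(N)

g = np.exp       # nonlinear function g(u)
g_u = np.exp     # its first-order derivative

alpha = np.zeros(N)
for it in range(20):
    u_pts = E @ alpha                       # u_N at the M interpolation points
    r = A @ alpha + Q @ g(u_pts) - f        # purely algebraic residual
    if np.linalg.norm(r) < 1e-12:
        break
    # Jacobian via chain rule: J = A + Q diag(g_u(E alpha)) E
    J = A + Q @ (g_u(u_pts)[:, None] * E)
    alpha -= np.linalg.solve(J, r)

print(it, np.linalg.norm(r))
```

Forming the Jacobian costs $O(MN^2)$ and the solve $O(N^3)$, consistent with the per-iteration complexity quoted earlier.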
where
$$G_u(\zeta_m, \zeta_n) = \frac{\partial g(\zeta_n)}{\partial u} + \frac{\partial^2 g(\zeta_n)}{\partial u^2}\, (\zeta_m - \zeta_n). \quad (1.69)$$
Next, we compute $T_M^{g_u} = \{\bm{x}_m^{g_u}\}_{m=1}^M$ and $\Psi_M^{g_u} = \mathrm{span}\{\psi_m^{g_u}(\bm{x}),\ 1 \le m \le M\}$ by applying the first-order EIM to the LT space $W_{N^2}^{g_u}$. Finally, the approximation of $g_u(\bar{u}_N(\bm{\mu}))$ is given by
$$\mathcal{I}_M[g_u(\bar{u}_N(\bm{\mu}))] = \sum_{m=1}^{M} \beta_{M,m}^{g_u}(\bm{\mu})\, \psi_m^{g_u}(\bm{x}), \quad (1.70)$$
where
$$\bm{\beta}_M^{g_u}(\bm{\mu}) = (\bm{B}_M^{g_u})^{-1}\, \bm{b}_M^{g_u}(\bm{\mu}). \quad (1.71)$$
Here $B_{M,km}^{g_u} = \psi_m^{g_u}(\bm{x}_k^{g_u})$ and $b_{M,k}^{g_u}(\bm{\mu}) = g_u(\bar{u}_N(\bm{x}_k^{g_u}, \bm{\mu}))$ for $1 \le k, m \le M$.
We replace $g_u(\bar{u}_N(\bm{\mu}))$ in (1.53) with the interpolant $\mathcal{I}_M[g_u(\bar{u}_N(\bm{\mu}))]$ to arrive at the following bilinear form
$$\bar{a}(w, v; \bm{\mu}) = \int_\Omega \nabla w \cdot \nabla v - \int_\Omega w\, \bm{\mu} \cdot \nabla v + \sum_{m=1}^{M} \beta_{M,m}^{g_u}(\bm{\mu}) \int_\Omega \psi_m^{g_u}\, w\, v. \quad (1.72)$$
where
$$\bm{\beta}_K^g(\bm{\mu}) = (\bm{B}_K^g)^{-1}\, \bm{b}_K^g(\bm{\mu}). \quad (1.77)$$
Here $B_{K,km}^g = \psi_m^g(\bm{x}_k^g)$ and $b_{K,k}^g(\bm{\mu}) = g(\bar{u}_N(\bm{x}_k^g, \bm{\mu}))$ for $1 \le k, m \le K$. Note that we use $K$ interpolation points to construct the interpolant of the nonlinear function $g(\bar{u}_N(\bm{\mu}))$ in (1.54). It thus follows that the RB residual vector $\bm{r}_N(\bm{\mu}) \in \mathbb{R}^N$ is given by
where, for $1 \le n \le N$, $1 \le k \le K$, we have
$$\bm{Q}_{NK} = \bm{G}_{NK}^g (\bm{B}_K^g)^{-1}, \qquad E_{KN,kn} = \zeta_n(\bm{x}_k^g), \qquad G_{NK,nk}^g = \int_\Omega \psi_k^g\, \zeta_n. \quad (1.79)$$
In the online stage, we form the RB residual vector at a computational cost of $O(NK)$. Hence, the computational cost of forming the RB residual vector is significantly lower than that of forming the RB Jacobian matrix. This allows us to use $K$ interpolation points to maximize the accuracy of the resulting ROM without increasing the online complexity.
$$\bar{\epsilon}_{N,M}^s = \frac{1}{N_{\rm Test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \epsilon_{N,M}^s(\bm{\mu}), \qquad \bar{\epsilon}_{N,M}^u = \frac{1}{N_{\rm Test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \epsilon_{N,M}^u(\bm{\mu}), \quad (1.80)$$
and
$$\bar{\eta}_{N,M}^s = \frac{1}{N_{\rm Test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \eta_{N,M}^s(\bm{\mu}), \qquad \bar{\eta}_{N,M}^u = \frac{1}{N_{\rm Test}} \sum_{\bm{\mu} \in S_{\rm Test}^g} \eta_{N,M}^u(\bm{\mu}). \quad (1.81)$$
FIGURE 1.4 Convergence of the average error in the solution (a) and output (b) for the standard
RB approximation and the HL-ROM. Since the LH-ROM results are almost identical to those of the
standard RB approximation, they are not shown.
We present in Figure 1.4 the average errors of the solution and the output as a function of $N$ for the RB approximation and the HL-ROM. As $M$ increases, $\bar{\epsilon}_{N,M}^s$ (respectively, $\bar{\epsilon}_{N,M}^u$) converges to $\bar{\epsilon}_N^s$ (respectively, $\bar{\epsilon}_N^u$). The HL-ROM
            M = N              M = 2N             M = 3N             M = 2N
  N     η̄^u_{N,M} η̄^s_{N,M}  η̄^u_{N,M} η̄^s_{N,M}  η̄^u_{N,M} η̄^s_{N,M}  η̄^u_{N,M} η̄^s_{N,M}
 16      1.84   11.95       1.66    4.32       1.43    2.2        1.00    1.02
 25      4.46   20.93       1.44    7.21       1.16    3.45       1.02    1.03
 36      4.97   20.31       1.94   14.32       1.07    7.10       1.05    1.03
 49     10.05   31.42       3.49   10.42       1.45    3.49       1.01    1.10
 64     10.87   27.82       2.60   20.32       1.44    6.76       1.00    1.11
 81     20.55   25.60       3.06   20.19       1.20    6.37       1.02    1.03
TABLE 1.3 Average effectivities for the hyper-reduced ROMs. The first three column groups are results of the HL-ROM, while the last column group corresponds to the LH-ROM.
solution behaviors not encountered during the offline stage. In response, adaptive model reduction has emerged as a solution to the limitations posed by global models relying on linear subspace approximations. The offline adaptivity approach partitions the parameter domain and constructs an individualized ROM for each partition [32, 47, 49, 69]. This adaptive partitioning enables reduced spaces of lower dimension compared to global strategies and is particularly effective for problems exhibiting vastly different behaviors in distinct regions of the parameter domain. The main limitation of the offline adaptivity strategy lies in its a priori nature: the construction of the different reduced-order models is done during the offline phase. Online adaptivity techniques have been developed to overcome this limitation by updating the RB space during the online phase according to various criteria associated with changes of the system dynamics in parameters and time [52, 85, 86, 102, 119]. The multiscale RB method [11, 73] and the static condensation RB element method [35, 59, 106] have been developed to handle problems in which the coefficients of the differential operators are characterized by a large number of independent parameters. Boyaval et al. [12, 13] develop a reduced basis approach for solving variational problems with stochastic parameters and for reducing the variance in Monte Carlo simulations of stochastic differential equations [111, 110].
The application of model reduction methods to convection-dominated problems and wave propagation problems might lead to poor approximations because traveling waves, moving shocks, sharp gradients, and discontinuities exhibit slowly decaying Kolmogorov $n$-widths and do not admit low-dimensional representations. A family of model reduction methods has been developed to deal with transport phenomena efficiently by recasting the problem in a coordinate frame where it is more amenable to low-dimensional approximation. A common approach is to construct the RB approximation of the form
$$u_N(\bm{x}, \bm{\mu}) = \sum_{n=1}^{N} \alpha_{N,n}(\bm{\mu})\, \zeta_n(\bm{\phi}(\bm{x}, \bm{\mu})), \quad (1.82)$$
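A minimal illustration of the ansatz (1.82): composing a fixed reference basis function with a parameter-dependent mapping lets a single mode track a traveling profile. The Gaussian profile and the shift map below are illustrative choices, not from the text.

```python
import numpy as np

# Transported ansatz: u_N(x, mu) = sum_n alpha_n zeta_n(phi(x, mu)).
# Here phi is a simple shift, so one reference mode follows the wave.
x = np.linspace(0.0, 1.0, 401)
zeta = lambda y: np.exp(-200.0 * (y - 0.5) ** 2)   # reference profile
phi = lambda x, mu: x - mu                          # transport map (shift)

def u_rb(x, mu, alphas=(1.0,)):
    return sum(a * zeta(phi(x, mu)) for a in alphas)

# The same single basis function represents the wave at every parameter:
for mu in (0.0, 0.2, 0.4):
    peak = x[np.argmax(u_rb(x, mu))]
    print(mu, round(peak, 2))   # peak sits at 0.5 + mu
```

Without the mapping, capturing the shifted profile across all parameters would require many linear modes; with it, one mode suffices.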
[1] Babak Maboudi Afkham and Jan S. Hesthaven. Structure preserving model reduction of
parametric Hamiltonian systems. SIAM Journal on Scientific Computing, 39(6):A2616–
A2644, 2017.
[2] Marzieh Alireza Mirhoseini and Matthew J. Zahr. Model reduction of convection-dominated
partial differential equations via optimization-based implicit feature tracking. Journal of
Computational Physics, 473, 2023.
[3] Steven S. An, Theodore Kim, and Doug L. James. Optimizing cubature for efficient integration
of subspace deformations. ACM Transactions on Graphics, 27(5):165, 2008.
[4] J. P. Argaud, B. Bouriquet, H. Gong, Y. Maday, and O. Mula. Stabilization of (G)EIM in
Presence of Measurement Noise: Application to Nuclear Reactor Physics. In Lecture Notes
in Computational Science and Engineering, volume 119, pages 133–145, 2017.
[5] Patricia Astrid, Siep Weiland, Karen Willcox, and Ton Backx. Missing point estimation in
models described by proper orthogonal decomposition. IEEE Transactions on Automatic
Control, 53(10):2237–2251, 2008.
[6] C. Audouze, F. de Vuyst, and P. B. Nair. Reduced-order modeling of parameterized PDEs using
time-space-parameter principal component analysis. International Journal for Numerical
Methods in Engineering, 80(8):1025–1057, 2009.
[7] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera. An ’empirical interpolation’ method:
application to efficient reduced-basis discretization of partial differential equations. Comptes
Rendus Mathematique, 339(9):667–672, 2004.
[8] Maxime Barrault, Yvon Maday, Ngoc Cuong Nguyen, and Anthony T. Patera. An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations. Comptes Rendus Mathematique, 339(9):667–672, 2004.
[9] W. J. Beyn and V. Thümmler. Freezing solutions of equivariant evolution equations. SIAM
Journal on Applied Dynamical Systems, 3(2):85–116, 2004.
[10] Peter Binev, Albert Cohen, Wolfgang Dahmen, Ronald Devore, Guergana Petrova, and Prze-
myslaw Wojtaszczyk. Convergence rates for greedy algorithms in reduced basis methods.
SIAM Journal on Mathematical Analysis, 43(3):1457–1472, 2011.
[11] S Boyaval. Reduced-Basis Approach for Homogenization beyond the Periodic Setting. SIAM
Multiscale Modeling & Simulation, 7(1):466–494, 2008.
[12] Sébastien Boyaval, Claude Le Bris, Tony Lelièvre, Yvon Maday, Ngoc Cuong Nguyen,
and Anthony T Patera. Reduced basis techniques for stochastic problems. Archives of
Computational Methods in Engineering, 17:435–454, 2010.
[13] Sébastien Boyaval, Claude Le Bris, Yvon Maday, Ngoc Cuong Nguyen, and Anthony T. Patera. A reduced basis approach for variational problems with stochastic parameters: Application to heat conduction with variable Robin coefficient. Computer Methods in Applied Mechanics and Engineering, 198(41-44):3187–3206, 2009.
[14] Patrick Buchfink, Ashish Bhatt, and Bernard Haasdonk. Symplectic Model Order Reduction
with Non-Orthonormal Bases. Mathematical and Computational Applications, 24(2):43,
2019.
[15] A. Buffa, Y. Maday, A. T. Patera, C. Prud’homme, and G. Turinici. A priori convergence
of the Greedy algorithm for the parametrized reduced basis method. ESAIM: Mathematical
Modelling and Numerical Analysis, 46(03):595–603, 2012.
[16] Nicolas Cagniart, Yvon Maday, and Benjamin Stamm. Model order reduction for problems
with large convection effects. In Computational Methods in Applied Sciences, volume 47,
pages 131–150. 2019.
[17] Kevin Carlberg, Charbel Bou-Mosleh, and Charbel Farhat. Efficient non-linear model reduc-
tion via a least-squares Petrov-Galerkin projection and compressive tensor approximations.
[35] Jens L. Eftang and Anthony T. Patera. Port reduction in parametrized component static
condensation: Approximation and a posteriori error estimation. International Journal for
Numerical Methods in Engineering, 96(5):269–302, 2013.
[36] R. Everson and L. Sirovich. Karhunen-Loeve procedure for gappy data. Journal of the Optical Society of America A, 12(8):1657–1664, 1995.
[37] Charbel Farhat, Philip Avery, Todd Chapman, and Julien Cortial. Dimensional reduction
of nonlinear finite element dynamic models with finite rotations and energy-based mesh
sampling and weighting for computational efficiency. International Journal for Numerical
Methods in Engineering, 98(9):625–662, 2014.
[38] Charbel Farhat, Todd Chapman, and Philip Avery. Structure-preserving, stability, and ac-
curacy properties of the energy-conserving sampling and weighting method for the hyper
reduction of nonlinear finite element dynamic models. International Journal for Numerical
Methods in Engineering, 102(5):1077–1110, 2015.
[39] D. Galbally, K. Fidkowski, K. Willcox, and O. Ghattas. Non-linear model reduction for un-
certainty quantification in large-scale inverse problems. International Journal for Numerical
Methods in Engineering, 81(12):1581–1608, 2010.
[40] Zhen Gao, Qi Liu, Jan S. Hesthaven, Bao Shan Wang, Wai Sun Don, and Xiao Wen.
Non-intrusive reduced order modeling of convection dominated flows using artificial neural
networks with application to Rayleigh-Taylor instability. Communications in Computational
Physics, 30(1):97–123, 2021.
[41] Yuezheng Gong, Qi Wang, and Zhu Wang. Structure-preserving Galerkin POD reduced-
order modeling of Hamiltonian systems. Computer Methods in Applied Mechanics and
Engineering, 315:780–798, 2017.
[42] Martin A. Grepl, Yvon Maday, Ngoc C. Nguyen, and Anthony T. Patera. Efficient reduced-
basis treatment of nonaffine and nonlinear partial differential equations. Mathematical Mod-
elling and Numerical Analysis, 41(3):575–605, 2007.
[43] Martin A. Grepl and Anthony T. Patera. A posteriori error bounds for reduced-basis approx-
imations of parametrized parabolic partial differential equations. Mathematical Modelling
and Numerical Analysis, 39(1):157–181, 2005.
[44] Mengwu Guo and Jan S. Hesthaven. Reduced order modeling for nonlinear structural analysis
using Gaussian process regression. Computer Methods in Applied Mechanics and Engineer-
ing, 341:807–826, 2018.
[45] B. Haasdonk and M. Ohlberger. Reduced basis method for finite volume approximations of
parametrized linear evolution equations. Mathematical Modelling and Numerical Analysis,
42(3):277–302, 2008.
[46] Bernard Haasdonk. Convergence rates of the POD-greedy method. Mathematical Modelling
and Numerical Analysis, 47(3):859–873, 2013.
[47] Bernard Haasdonk, Markus Dihlmann, and Mario Ohlberger. A training set and multiple bases
generation approach for parameterized model reduction based on adaptive grids in parameter
space. Mathematical and Computer Modelling of Dynamical Systems, 17(4):423–442, 2011.
[48] J. A. Hernández, M. A. Caicedo, and A. Ferrer. Dimensional hyper-reduction of nonlinear
finite element models via empirical cubature. Computer Methods in Applied Mechanics and
Engineering, 313:687–722, 2017.
[49] Martin Hess, Alessandro Alla, Annalisa Quaini, Gianluigi Rozza, and Max Gunzburger. A
localized reduced-order modeling approach for PDEs with bifurcating solutions. Computer
Methods in Applied Mechanics and Engineering, 351:379–403, 2019.
[50] J. S. Hesthaven and S. Ubbiali. Non-intrusive reduced order modeling of nonlinear problems
using neural networks. Journal of Computational Physics, 363:55–78, 2018.
[51] Jan S. Hesthaven and Cecilia Pagliantini. Structure-preserving reduced basis methods for
Poisson systems. Mathematics of Computation, 90(330):1701–1740, 2021.
[52] Jan S. Hesthaven, Cecilia Pagliantini, and Nicolo Ripamonti. Rank-adaptive structure-
preserving model order reduction of Hamiltonian systems. ESAIM: Mathematical Modelling
and Numerical Analysis, 56(2):617–650, 2022.
[53] Jan S. Hesthaven, Cecilia Pagliantini, and Gianluigi Rozza. Reduced basis methods for
time-dependent problems. Acta Numerica, 31:265–345, 2022.
[54] Jan S. Hesthaven, Benjamin Stamm, and Shun Zhang. Efficient greedy algorithms for high-
dimensional parameter spaces with applications to empirical interpolation and reduced basis
methods. ESAIM: Mathematical Modelling and Numerical Analysis, 48(1):259–283, 2014.
[55] R. Loek Van Heyningen, Ngoc Cuong Nguyen, Patrick Blonigan, and Jaime Peraire. Adaptive
model reduction of high-order solutions of compressible flows via optimal transport, 2023.
[56] Philip Holmes, John L. Lumley, Gahl Berkooz, and Clarence W. Rowley. Turbulence, Coher-
ent Structures, Dynamical Systems and Symmetry. Cambridge University Press, 2012.
[57] D. B. P. Huynh and A. T. Patera. Reduced basis approximation and a posteriori error estimation
for stress intensity factors. International Journal for Numerical Methods in Engineering,
72(10):1219–1259, 2007.
[58] D. B. P. Huynh, D. J. Knezevic, Y. Chen, J. S. Hesthaven, and A. T. Patera. A natural-norm
Successive Constraint Method for inf-sup lower bounds. Computer Methods in Applied
Mechanics and Engineering, 199(29-32):1963–1975, 2010.
[59] Dinh Bao Phuong Huynh, David J. Knezevic, and Anthony T. Patera. A static condensation
reduced basis element method: Approximation and a posteriori error estimation. Mathematical
Modelling and Numerical Analysis, 47(1):213–251, 2013.
[60] Angelo Iollo and Damiano Lombardi. Advection modes by optimal mass transfer. Physical
Review E - Statistical, Nonlinear, and Soft Matter Physics, 89(2), 2014.
[61] Mark Kärcher, Zoi Tokoutsi, Martin A. Grepl, and Karen Veroy. Certified Reduced Basis
Methods for Parametrized Elliptic Optimal Control Problems with Distributed Controls.
Journal of Scientific Computing, 75(1):276–307, 2018.
[62] P. Kerfriden, O. Goury, T. Rabczuk, and S. P. A. Bordas. A partitioned model order reduction
approach to rationalise computational expenses in nonlinear fracture mechanics. Computer
Methods in Applied Mechanics and Engineering, 256:169–188, 2013.
[63] David J. Knezevic, Ngoc Cuong Nguyen, and Anthony T. Patera. Reduced basis approxima-
tion and a posteriori error estimation for the parametrized unsteady Boussinesq equations.
Mathematical Models and Methods in Applied Sciences, 21(7):1415–1442, 2011.
[64] David J. Knezevic and Anthony T. Patera. A certified reduced basis method for the Fokker-
Planck equation of dilute polymeric fluids: FENE dumbbells in extensional flow. SIAM Journal
on Scientific Computing, 32(2):793–817, 2010.
[65] Sanjay Lall, Petr Krysl, and Jerrold E. Marsden. Structure-preserving model reduction for
mechanical systems. In Physica D: Nonlinear Phenomena, volume 184, pages 304–318,
2003.
[66] Y. Maday, O. Mula, A.T. Patera, and M. Yano. The Generalized Empirical Interpolation
Method: Stability theory on Hilbert spaces with an application to the Stokes equation.
Computer Methods in Applied Mechanics and Engineering, 287:310–334, 2015.
[67] Yvon Maday and Olga Mula. A generalized empirical interpolation method: Application of
reduced basis techniques to data assimilation. In Springer INdAM Series, volume 4, pages
221–235. Springer, 2013.
[68] Yvon Maday, Ngoc Cuong Nguyen, Anthony T. Patera, and George S. H. Pau. A general
multipurpose interpolation procedure: The magic points. Communications on Pure and
Applied Analysis.
[85] Benjamin Peherstorfer and Karen Willcox. Dynamic data-driven reduced-order models.
Computer Methods in Applied Mechanics and Engineering, 291:21–41, 2015.
[86] Benjamin Peherstorfer and Karen Willcox. Online adaptive model reduction for nonlinear
systems via low-rank updates. SIAM Journal on Scientific Computing, 37(4):A2123–A2150,
2015.
[87] Liqian Peng and Kamran Mohseni. Geometric model reduction of forced and dissipative
Hamiltonian systems. In 2016 IEEE 55th Conference on Decision and Control, CDC 2016,
pages 7465–7470, 2016.
[88] Liqian Peng and Kamran Mohseni. Symplectic model reduction of Hamiltonian systems.
SIAM Journal on Scientific Computing, 38(1):A1–A27, 2016.
[89] Federico Pichi, Francesco Ballarin, Gianluigi Rozza, and Jan S. Hesthaven. An artificial neural
network approach to bifurcating phenomena in computational fluid dynamics. Computers
and Fluids, 254, 2023.
[90] Annika Radermacher and Stefanie Reese. POD-based model reduction with empirical in-
terpolation applied to nonlinear elasticity. International Journal for Numerical Methods in
Engineering, 107(6):477–495, 2016.
[91] S. S. Ravindran. A reduced-order approach for optimal control of fluids using proper orthog-
onal decomposition. International Journal for Numerical Methods in Fluids, 34(5):425–448,
2000.
[92] J. Reiss, P. Schulze, J. Sesterhenn, and V. Mehrmann. The shifted proper orthogonal de-
composition: A mode decomposition for multiple transport phenomena. SIAM Journal on
Scientific Computing, 40(3):A1322–A1344, 2018.
[93] Donsub Rim, Scott Moe, and Randall J. LeVeque. Transport reversal for model reduction of
hyperbolic partial differential equations. SIAM/ASA Journal on Uncertainty Quantification,
6(1):118–150, 2018.
[94] C. W. Rowley, T. Colonius, and R. M. Murray. Model reduction for compressible flows using
POD and Galerkin projection. Physica D: Nonlinear Phenomena, 189(1-2):115–129, 2004.
[95] Clarence W. Rowley, Ioannis G. Kevrekidis, Jerrold E. Marsden, and Kurt Lust. Reduction
and reconstruction for self-similar dynamical systems. Nonlinearity, 16(4):1257–1275, 2003.
[96] Clarence W. Rowley and Jerrold E. Marsden. Reconstruction equations and the Karhunen-
Loève expansion for systems with symmetry. Physica D: Nonlinear Phenomena, 2000.
[97] G. Rozza, D. B. P. Huynh, and A. T. Patera. Reduced basis approximation and a posteri-
ori error estimation for affinely parametrized elliptic coercive partial differential equations:
Application to transport and continuum mechanics. Archives of Computational Methods in
Engineering, 15(4):229–275, 2008.
[98] Gianluigi Rozza. Reduced-basis methods for elliptic equations in sub-domains with a poste-
riori error bounds and adaptivity. Applied Numerical Mathematics, 55(4):403–424, 2005.
[99] David Ryckelynck. A priori hyperreduction method: An adaptive approach. Journal of
Computational Physics, 202(1):346–366, 2005.
[100] Ernest K. Ryu and Stephen P. Boyd. Extensions of Gauss Quadrature Via Linear Programming.
Foundations of Computational Mathematics, 15(4):953–971, 2015.
[101] B. Sanderse. Non-linearly stable reduced-order models for incompressible flow with energy-
conserving finite volume methods. Journal of Computational Physics, 421, 2020.
[102] Themistoklis P. Sapsis and Pierre F. J. Lermusiaux. Dynamically orthogonal field equations
for continuous stochastic dynamical systems. Physica D: Nonlinear Phenomena, 238(23-
24):2347–2360, 2009.
[103] Alexander Schein, Kevin T. Carlberg, and Matthew J. Zahr. Preserving general physical
properties in model reduction of dynamical systems via constrained-optimization projection.