
Chapter 1

Model reduction techniques for parametrized nonlinear partial differential equations

Ngoc Cuong Nguyen^{a,1}

^{a} Center for Computational Engineering, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, 02139, USA

ABSTRACT
We present model reduction techniques for parametrized nonlinear partial differential
equations (PDEs). The main ingredients of our approach are reduced basis (RB) spaces
to provide rapidly convergent approximations to the parametric manifold; Galerkin pro-
jection of the underlying PDEs onto the RB space to provide reduction in the number
of degrees of freedom; and empirical interpolation schemes to provide rapid evaluation
of the nonlinear terms associated with the Galerkin projection. We devise a first-order
empirical interpolation method to construct an inexpensive and stable interpolation of the
nonlinear terms. We consider two different hyper-reduction strategies: hyper-reduction
followed by linearization, and linearization followed by hyper-reduction. We employ
the proposed techniques to develop reduced-order models for a nonlinear convection-
diffusion-reaction problem. Numerical results are presented to illustrate the accuracy,
efficiency, and stability of the reduced-order models.

KEYWORDS
Model reduction, Reduced order model, Reduced basis method, Empirical interpolation,
Hyper-reduction, Galerkin projection, Proper orthogonal decomposition, Finite element
method, Nonlinear PDEs

1.1 INTRODUCTION
Numerous applications in engineering and science require repeated solutions of
parametrized partial differential equations (PDEs). This occurs in the context
of design, optimization, control, and uncertainty quantification. Numerical approx-

1. I would like to thank Professor Peraire of MIT and Professor Patera of MIT for many invaluable
contributions to this work. I would also like to thank Professor Yvon Maday of University Paris
VI for many helpful discussions. This work was supported by the U.S. Department of Energy
under Contract No. DE-NA0003965, the Air Force Office of Scientific Research under Grant No.
FA9550-22-1-0356, the National Science Foundation under grant number NSF-PHY-2028125,
and the MIT Portugal program under the seed grant number 6950138.


imation of the parametrized PDEs can be achieved by standard discretization methods such as finite element (FE), finite difference (FD), and finite volume (FV) methods, which will be referred to as full order models (FOMs). However, FOMs typically require a very large number of degrees of freedom to obtain sufficiently accurate solutions. Computing many solutions in such large-scale settings often leads to unmanageable demands on computational resources. Alleviation of
this computational burden is the main motivation for developing reduced order
models (ROMs), which are low-dimensional models that are significantly faster
to evaluate than the underlying FOM while maintaining comparable accuracy.
Nonlinear PDEs describe a wide variety of physical phenomena in fluid
mechanics, solid mechanics, quantum mechanics, and electromagnetics. They
often lead to computationally demanding simulations for FOMs due to their high
dimensionality and nonlinear nature. The development of inexpensive ROMs
for parametrized nonlinear PDEs entails three sequential stages: (1) construction
of reduced basis (RB) spaces, (2) projection of the underlying PDEs onto the
RB spaces, and (3) approximation of the nonlinear terms. Proper orthogonal
decomposition (POD) [43, 45, 46, 53, 91, 94, 114] and greedy sampling [10, 15,
30] are widely used for constructing RB spaces. The projection stage consists
in approximating the state variables within a RB space, and then projecting
the residual equation onto the same RB space (Galerkin projection) or onto a
different RB space (Petrov-Galerkin projection). This operation naturally leads
to a significant reduction in the number of degrees of freedom for the resulting
ROM, which is often several orders of magnitude less than that of the underlying
FOM. However, for general nonlinear PDEs, the computational cost of evaluating
the projection-based ROM still depends on the size of the underlying FOM due
to the presence of nonlinear integrals.² Evaluating the nonlinear integrals
in projection-based ROMs via Gaussian quadrature requires a large number of
integration points. Hyper-reduction methods [3, 4, 5, 7, 19, 22, 36, 37, 48, 74,
99, 113, 116] offer an efficient alternative by reducing the number of integration
points and thus providing more efficient computation of the nonlinear integrals.
This chapter presents model reduction techniques for parametrized nonlinear
PDEs. It delves into projection-based methods and hyper-reduction techniques.
The chapter focuses on a new hyper-reduction technique that makes use of
partial derivatives to approximate the nonlinear terms. In particular, we devise
a first-order empirical interpolation method to construct an inexpensive and
stable interpolation of the nonlinear terms by using their partial derivatives. We
describe two different hyper-reduction strategies: hyper-reduction followed by
linearization and linearization followed by hyper-reduction. We demonstrate
that the second strategy allows for more accurate ROMs than the first strategy.
We employ the proposed techniques to develop ROMs for steady-state nonlinear
PDEs. We present numerical results to illustrate the accuracy, efficiency, and

2. An integral whose integrand is a nonlinear function of the state variables is called a nonlinear integral.

stability of our hyper-reduced ROMs, and compare their performance to FOMs and to ROMs without hyper-reduction. We end the chapter with remarks on open questions related to a posteriori error estimation for nonlinear problems, ROMs for convection-dominated problems, and structure-preserving ROMs.

1.2 HYPER-REDUCTION METHODS


1.2.1 Parametrized integrals
Let $\Omega \subset \mathbb{R}^d$ be the physical domain in which the spatial point $\boldsymbol{x}$ resides, and let $\mathcal{D} \subset \mathbb{R}^P$ be the parameter domain in which our $P$-tuple parameter point $\boldsymbol{\mu}$ resides. Let $u(\boldsymbol{x}, \boldsymbol{\mu}) \in L^\infty(\Omega \times \mathcal{D})$ be a function of space and parameters with sufficient regularity, and let $g(u(\boldsymbol{x}, \boldsymbol{\mu})) \in L^\infty(\Omega \times \mathcal{D})$ be a nonlinear function of $u$. More generally, $g$ can also depend explicitly on $\boldsymbol{x}$ and $\boldsymbol{\mu}$. Let $\boldsymbol{\varphi}(\boldsymbol{x}) = (\varphi_1(\boldsymbol{x}), \ldots, \varphi_N(\boldsymbol{x}))$ be a parameter-independent vector-valued function, where $\varphi_n(\boldsymbol{x}) \in L^\infty(\Omega)$, $1 \le n \le N$, are known. We are interested in evaluating parametrized integrals: given $\boldsymbol{\mu} \in \mathcal{D}$, find $\boldsymbol{I}(\boldsymbol{\mu}) \in \mathbb{R}^N$ as

$$\boldsymbol{I}(\boldsymbol{\mu}) = \int_\Omega g(u(\boldsymbol{x}, \boldsymbol{\mu}))\,\boldsymbol{\varphi}(\boldsymbol{x})\,d\boldsymbol{x}. \qquad (1.1)$$

As discussed later, these parametrized integrals stem from projection-based model reduction of parametrized nonlinear PDEs. Within this context, fast and accurate computation of the parametrized integrals is a crucial factor in enabling efficient model reduction.
Gaussian quadrature is one of the most widely used approaches for computing $\boldsymbol{I}(\boldsymbol{\mu})$. First, the physical domain $\Omega$ is discretized by a finite element mesh $\mathcal{T}_h := \{K_n, 1 \le n \le \mathcal{N}_e\}$ of $\mathcal{N}_e$ simple elements. Let $(\bar{\boldsymbol{x}}_n, \omega_n)$, $1 \le n \le \mathcal{N}_q$, be the quadrature points and weights on the elements of $\mathcal{T}_h$, where $\mathcal{N}_q$ is the total number of quadrature points. An approximation to $\boldsymbol{I}(\boldsymbol{\mu})$ via Gaussian quadrature is given by

$$\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu}) = \sum_{n=1}^{\mathcal{N}_q} \omega_n\, g(u(\bar{\boldsymbol{x}}_n, \boldsymbol{\mu}))\,\boldsymbol{\varphi}(\bar{\boldsymbol{x}}_n). \qquad (1.2)$$

We shall assume that $\mathcal{N}_q$ is large enough that $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ can be considered indistinguishable from $\boldsymbol{I}(\boldsymbol{\mu})$ for any $\boldsymbol{\mu} \in \mathcal{D}$. Following the custom in the reduced-basis community, $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ is referred to as the "truth" approximation to the exact integrals $\boldsymbol{I}(\boldsymbol{\mu})$. Gaussian quadrature possesses generality in that its quadrature points and weights are independent of the integrand. Nonetheless, its computational expense renders it inefficient for use within model reduction techniques.
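To make the computational pattern concrete, the following sketch evaluates the truth approximation (1.2) with NumPy. The arrays and callables (`xq`, `wq`, `u`, `g`, `phi`) are hypothetical stand-ins for the quadrature data, the state, the nonlinearity, and the test functions defined above; they are not tied to any particular FE library.

```python
import numpy as np

def truth_integrals(mu, xq, wq, u, g, phi):
    """Evaluate the truth approximation (1.2): a weighted sum over all
    quadrature points of the nonlinear integrand times the test functions.

    xq  : (Nq, d) quadrature points, wq : (Nq,) quadrature weights,
    u   : callable u(xq, mu) -> (Nq,) state values at the points,
    g   : callable applied entrywise to the state values,
    phi : (Nq, N) test functions tabulated at the quadrature points.
    """
    gu = g(u(xq, mu))            # (Nq,) nonlinear integrand values
    return phi.T @ (wq * gu)     # (N,) integrals; cost scales with Nq * N

# Minimal usage example on a 1D grid with a single test function:
if __name__ == "__main__":
    xq = np.linspace(0.0, 1.0, 1001)[:, None]
    wq = np.full(xq.shape[0], 1.0 / xq.shape[0])
    u = lambda x, mu: np.tanh((1.0 - x[:, 0]) / mu)
    g = lambda v: np.exp(-v**2)
    phi = np.ones((xq.shape[0], 1))
    print(truth_integrals(0.1, xq, wq, u, g, phi))
```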
Hyper-reduction techniques aim to reduce the (online) computational cost of
evaluating the parametrized integrals. Existing hyper-reduction techniques can
be broadly classified as empirical quadrature methods, empirical interpolation
methods, and integral interpolation methods.

1.2.2 Empirical quadrature methods


Various methods have been introduced to approximate $\boldsymbol{I}(\boldsymbol{\mu})$ by an empirical quadrature rule. The central idea is to construct quadrature points and associated weights specifically tailored to the parametrized integrals as follows:

$$\boldsymbol{I}_K(\boldsymbol{\mu}) = \sum_{k=1}^{K} \omega_k^g\, g(u(\bar{\boldsymbol{x}}_k^g, \boldsymbol{\mu}))\,\boldsymbol{\varphi}(\bar{\boldsymbol{x}}_k^g) = \boldsymbol{D}_{NK}\,\boldsymbol{b}_K^g(\boldsymbol{\mu}). \qquad (1.3)$$

Here $(\bar{\boldsymbol{x}}_k^g, \omega_k^g)$, $1 \le k \le K$, are empirical quadrature points and weights, which depend on the parametrized integrand and are chosen such that the integration error $\|\boldsymbol{I}_K(\boldsymbol{\mu}) - \boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})\|$ is sufficiently small for any $\boldsymbol{\mu} \in \mathcal{D}$, and such that $K \ll \mathcal{N}_q$. Note that $\boldsymbol{D}_{NK} \in \mathbb{R}^{N \times K}$ has entries $D_{NK,nk} = \omega_k^g\,\varphi_n(\bar{\boldsymbol{x}}_k^g)$ and $\boldsymbol{b}_K^g(\boldsymbol{\mu}) \in \mathbb{R}^K$ has entries $b_{K,k}^g(\boldsymbol{\mu}) = g(u(\bar{\boldsymbol{x}}_k^g, \boldsymbol{\mu}))$ for $1 \le n \le N$, $1 \le k \le K$.
The quadrature points and weights are determined in the offline stage. The online cost of evaluating $\boldsymbol{I}_K(\boldsymbol{\mu})$ for any $\boldsymbol{\mu} \in \mathcal{D}$ is then $O(NK)$.
The first scheme of this type was introduced by An et al. in [3] in the context
of computer graphics applications. The method involves determining a reduced
set of quadrature points and associated positive weights so that the integration
error is minimized over a set of representative samples of the integrand. This
method was further extended and introduced into the model reduction community by Farhat et al. [37, 38] and Hernández et al. [48]. In [100], Ryu and Boyd propose an ℓ¹ framework for the empirical quadrature. This quadrature framework is further developed in DeVore et al. [29] and extended to the parametric context in [84]. The ℓ¹ formulation yields quadrature rules that are sparse. Furthermore, the
offline problem can be cast as a linear program (LP) which is efficiently treated
by a simplex method [84, 116]. Empirical quadrature methods offer a unique
advantage in the sense that the Jacobian matrix of the resulting ROM inherits
the spectral properties of its FOM counterpart [38, 48]. In essence, if the FE
stiffness matrix is symmetric and positive definite, so will be its reduced-order
counterpart. This desirable feature might not be found in other hyper-reduction
techniques.
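As a simplified illustration of the offline construction, the sketch below selects a sparse set of nonnegative weights by nonnegative least squares so that the reduced rule reproduces the truth integrals of a set of training integrands. This NNLS variant is a stand-in for the ℓ¹/LP formulations cited above, and the training matrix `F` and tolerance `tol` are assumptions of the example.

```python
import numpy as np
from scipy.optimize import nnls

def empirical_quadrature_weights(F, wq, tol=1e-10):
    """Select sparse nonnegative quadrature weights (one simple variant).

    F  : (Ntrain, Nq) integrand samples f_i(x_n) at the truth points,
    wq : (Nq,) truth quadrature weights.
    Solves min ||F w - b||_2 subject to w >= 0, where b holds the truth
    integrals; zero entries of w drop the corresponding quadrature points.
    """
    b = F @ wq                      # truth integrals of training snapshots
    w, residual = nnls(F, b)        # nonnegative least squares
    keep = np.nonzero(w > tol)[0]   # surviving (empirical) points
    return keep, w[keep], residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Nq, Ntrain = 500, 30
    wq = np.full(Nq, 1.0 / Nq)
    F = rng.standard_normal((Ntrain, Nq))
    keep, w, res = empirical_quadrature_weights(F, wq)
    print(f"{keep.size} points kept out of {Nq}, residual {res:.2e}")
```

Note that the number of selected points tends to scale with the number of training integrands, consistent with the remark on the scaling of K below.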

1.2.3 Empirical interpolation methods


Empirical interpolation is an alternative approach for computing the parametrized integrals. In this approach, we first construct a set of basis functions $\Psi_M^g \equiv \mathrm{span}\{\psi_m^g(\boldsymbol{x}), 1 \le m \le M\}$ and a set of interpolation points $T_M^g \equiv \{\boldsymbol{x}_1^g, \ldots, \boldsymbol{x}_M^g\}$ in the offline stage. We then define an interpolant of the parametrized nonlinear function $g$ as

$$g_M(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{m=1}^{M} a_m(\boldsymbol{\mu})\,\psi_m^g(\boldsymbol{x}), \qquad (1.4)$$

where the coefficients $a_m(\boldsymbol{\mu})$, $1 \le m \le M$, are found as the solution of the following linear system:

$$\sum_{m=1}^{M} \psi_m^g(\boldsymbol{x}_k^g)\,a_m(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_k^g, \boldsymbol{\mu})), \quad 1 \le k \le M. \qquad (1.5)$$

The linear system (1.5) can be solved for the coefficient vector $\boldsymbol{a}_M(\boldsymbol{\mu})$ as follows:

$$\boldsymbol{a}_M(\boldsymbol{\mu}) = \left(\boldsymbol{B}_M^g\right)^{-1} \boldsymbol{b}_M^g(\boldsymbol{\mu}), \qquad (1.6)$$

where $\boldsymbol{B}_M^g \in \mathbb{R}^{M \times M}$ has entries $B_{M,km}^g = \psi_m^g(\boldsymbol{x}_k^g)$ and $\boldsymbol{b}_M^g(\boldsymbol{\mu}) \in \mathbb{R}^M$ has entries $b_{M,k}^g(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_k^g, \boldsymbol{\mu}))$ for $1 \le k, m \le M$. The interpolant $g_M(\boldsymbol{x}, \boldsymbol{\mu})$ serves as a surrogate for $g$ to yield an approximation to $\boldsymbol{I}(\boldsymbol{\mu})$ as follows:

$$\boldsymbol{I}_M(\boldsymbol{\mu}) = \sum_{m=1}^{M} a_m(\boldsymbol{\mu}) \int_\Omega \psi_m^g(\boldsymbol{x})\,\boldsymbol{\varphi}(\boldsymbol{x})\,d\boldsymbol{x} = \boldsymbol{C}_{NM}\,\boldsymbol{a}_M(\boldsymbol{\mu}) = \boldsymbol{D}_{NM}\,\boldsymbol{b}_M^g(\boldsymbol{\mu}). \qquad (1.7)$$

Here $\boldsymbol{C}_{NM} \in \mathbb{R}^{N \times M}$ is a parameter-independent matrix with entries $C_{NM,nm} = \int_\Omega \psi_m^g(\boldsymbol{x})\,\varphi_n(\boldsymbol{x})\,d\boldsymbol{x}$, $1 \le m \le M$, $1 \le n \le N$, which are computed using Gaussian quadrature. Note that $\boldsymbol{D}_{NM} = \boldsymbol{C}_{NM}\left(\boldsymbol{B}_M^g\right)^{-1}$ is also parameter-independent.

The surrogate integrals (1.7) are computed by means of an offline-online decomposition. In the offline stage, we construct the basis set $\Psi_M^g$ and the interpolation point set $T_M^g$, and compute and store the matrix $\boldsymbol{D}_{NM}$. The offline stage can be costly due to the construction of the basis set and the computation of the matrix $\boldsymbol{D}_{NM}$. In the online stage, for any given $\boldsymbol{\mu} \in \mathcal{D}$, we evaluate $b_{M,m}^g(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_m^g, \boldsymbol{\mu}))$, $1 \le m \le M$, and calculate $\boldsymbol{I}_M(\boldsymbol{\mu})$ from (1.7). The online cost of evaluating $\boldsymbol{I}_M(\boldsymbol{\mu})$ for any $\boldsymbol{\mu} \in \mathcal{D}$ is only $O(MN)$. The approximation accuracy depends crucially on the basis set $\Psi_M^g$ and the interpolation point set $T_M^g$.
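The online stage of (1.5)-(1.7) amounts to $M$ pointwise evaluations and one small dense product. A minimal sketch follows, assuming the offline matrix $\boldsymbol{D}_{NM}$ has been precomputed and that `u` and `g` can be evaluated pointwise (all names hypothetical):

```python
import numpy as np

def eim_online(mu, x_int, D, u, g):
    """Online EIM evaluation of the surrogate integrals (1.7).

    x_int : (M, d) interpolation points T_M^g,
    D     : (N, M) precomputed matrix D_NM = C_NM (B_M^g)^{-1},
    u, g  : callables giving the state and the nonlinearity pointwise.
    Cost: M evaluations of g(u(.)) plus one O(N M) matrix-vector product.
    """
    b = g(u(x_int, mu))     # b_M^g(mu), cf. (1.6)
    return D @ b            # I_M(mu) = D_NM b_M^g(mu), cf. (1.7)
```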
Empirical interpolation focuses on the approximation of the parametrized
nonlinear function 𝑔(𝑢(𝒙, 𝝁)), whereas empirical quadrature aims to directly
approximate the integral of the parametrized integrands 𝑔(𝑢(𝒙, 𝝁))𝝋(𝒙). Em-
pirical quadrature has online complexity of 𝑂 (𝐾 𝑁), where 𝐾 is the number
of empirical quadrature points. Hence, both approaches offer linear scaling in
terms of the number of interpolation/quadrature points. However, since empiri-
cal quadrature constructs quadrature points based on the integrands, 𝐾 may scale
with the number of integrands 𝑁.
The empirical interpolation method (EIM) was first proposed in the seminal paper [8] for constructing the basis set and the interpolation point set, and for developing efficient RB approximations of non-affine PDEs. Shortly thereafter, the empirical interpolation method was extended to develop efficient ROMs for nonlinear PDEs [42]. Since the pioneering work [8, 42], the EIM has been widely used to construct efficient ROMs of nonaffine and nonlinear PDEs for different applications [42, 72, 31, 70, 54, 53, 24]. Rigorous a posteriori error bounds for the empirical interpolation method were developed by Eftang et al. [34].
Several attempts have been made to extend the EIM in diverse ways. The best-points interpolation method (BPIM) [74, 75] employs proper orthogonal decomposition to generate the basis set and a least-squares method to compute the interpolation point set. The discrete empirical interpolation method (DEIM) [22] is a discrete variant of the empirical interpolation method: it considers a collection of vectors arising from the spatial discretization of parameter-dependent functions and selects a subset of vectors and associated interpolation indices. The generalized empirical interpolation method (GEIM) [66, 67] generalizes the EIM concept by replacing the pointwise function evaluations with more general measures defined as linear functionals. More recently, the first-order empirical interpolation method (FOEIM) [79] makes use of partial derivatives of the parametrized nonlinear function to construct the basis functions and interpolation points. In Section 1.3, we extend this new hyper-reduction method to approximate the parametrized integrals using higher-order partial derivatives.

1.2.4 Integral interpolation methods


This class of hyper-reduction methods focuses on interpolating the parametrized integrals rather than the parametrized integrands. In particular, for any $\boldsymbol{\mu} \in \mathcal{D}$, an approximation to $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ is given by

$$\boldsymbol{I}_P(\boldsymbol{\mu}) = \boldsymbol{D}_{NP}\,\boldsymbol{I}_{\mathcal{N}}^P(\boldsymbol{\mu}), \qquad (1.8)$$

where $\boldsymbol{D}_{NP} \in \mathbb{R}^{N \times P}$ is a parameter-independent interpolation matrix and $\boldsymbol{I}_{\mathcal{N}}^P(\boldsymbol{\mu}) \in \mathbb{R}^P$ is the subvector of $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ at $P$ interpolation indices. This class of methods is only effective if $\varphi_n$, $1 \le n \le N$, are FE shape functions. In this case, $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ is a FE vector in high dimensions. Owing to the locality of the FE shape functions, each entry of the vector $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ is computed using Gauss quadrature on the few elements that surround the degree of freedom associated with that entry. Therefore, computing only $P$ entries of $\boldsymbol{I}_{\mathcal{N}}(\boldsymbol{\mu})$ can be considerably faster than computing the whole vector if $P$ is relatively small.
The interpolation matrix is obtained by applying POD to snapshots of the
parametrized integral vector 𝑰 N ( 𝝁). The set of interpolation indices can be
determined using procedures such as the EIM, BPIM, DEIM, discrete BPIM,
and missing point estimation method. The idea behind this integral interpolation
approach has its roots in the landmark work of Everson and Sirovich [8] for
reconstruction of “gappy” data, and was historically the first method for dealing
with nonlinear terms in model order reduction. It has been adopted by other
researchers in the context of nonlinear model reduction [17, 23, 31, 39, 62, 90].

1.3 FIRST-ORDER EMPIRICAL INTERPOLATION METHOD


Most hyper-reduction methods discussed in the previous section do not use derivative information to handle nonlinear terms, with the exception of the recent work [79]. In [79], it is shown that using first-order partial derivatives of the nonlinear terms in the empirical interpolation procedure can significantly improve the performance of ROMs for parametrized nonlinear PDEs. In this section, we extend the previous work [79] to higher-order derivative information.

1.3.1 Empirical interpolation procedure


We review the empirical interpolation procedure (EIP) [68] for constructing hierarchical basis functions and interpolation points. The input of the EIP is a set of parameter samples $S_N = \{\boldsymbol{\mu}_1 \in \mathcal{D}, \ldots, \boldsymbol{\mu}_N \in \mathcal{D}\}$. The sample set $S_N$ allows us to define two separate RB spaces:

$$W_N^u = \mathrm{span}\{\zeta_n(\boldsymbol{x}) \equiv u(\boldsymbol{x}, \boldsymbol{\mu}_n),\ 1 \le n \le N\}, \qquad (1.9a)$$
$$W_N^g = \mathrm{span}\{\xi_n(\boldsymbol{x}) \equiv g(\zeta_n(\boldsymbol{x})),\ 1 \le n \le N\}. \qquad (1.9b)$$

In the model reduction context, these RB spaces are constructed by solving the underlying FOM for all parameter points in $S_N$. Hence, $u(\boldsymbol{x}, \boldsymbol{\mu}_n)$, $1 \le n \le N$, represent the numerical solutions of the parametrized nonlinear PDEs. The RB space $W_N^u$ will also be used to generate ROMs via Galerkin projection.

The EIP proceeds as follows. First, we find an index $j_1$ given by

$$j_1 = \arg\max_{1 \le j \le N} \|\xi_j\|_{L^\infty(\Omega)}, \qquad (1.10)$$

which allows us to obtain the first interpolation point and the first basis function:

$$\boldsymbol{x}_1^g = \arg\sup_{\boldsymbol{x} \in \Omega} |\xi_{j_1}(\boldsymbol{x})|, \qquad \psi_1^g(\boldsymbol{x}) = \xi_{j_1}(\boldsymbol{x})/\xi_{j_1}(\boldsymbol{x}_1^g). \qquad (1.11)$$

We set $\Psi_1^g = \mathrm{span}\{\psi_1^g\}$ and $T_1^g = \{\boldsymbol{x}_1^g\}$. For $M = 2, \ldots, N$, we solve the linear systems

$$\sum_{m=1}^{M-1} \psi_m^g(\boldsymbol{x}_k^g)\,\sigma_{nm} = \xi_n(\boldsymbol{x}_k^g), \quad 1 \le k \le M-1, \qquad (1.12)$$

for $n = 1, \ldots, N$; we find the index $j_M$ from

$$j_M = \arg\max_{1 \le n \le N} \Big\|\xi_n(\boldsymbol{x}) - \sum_{m=1}^{M-1} \sigma_{nm}\,\psi_m^g(\boldsymbol{x})\Big\|_{L^\infty(\Omega)}, \qquad (1.13)$$

and set

$$\boldsymbol{x}_M^g = \arg\sup_{\boldsymbol{x} \in \Omega} |r_M(\boldsymbol{x})|, \qquad \psi_M^g(\boldsymbol{x}) = r_M(\boldsymbol{x})/r_M(\boldsymbol{x}_M^g), \qquad (1.14)$$

where the residual function $r_M(\boldsymbol{x})$ is given by

$$r_M(\boldsymbol{x}) = \xi_{j_M}(\boldsymbol{x}) - \sum_{m=1}^{M-1} \sigma_{j_M m}\,\psi_m^g(\boldsymbol{x}). \qquad (1.15)$$

In essence, the interpolation point $\boldsymbol{x}_M^g$ and the basis function $\psi_M^g(\boldsymbol{x})$ are the maximum point and the normalization of the residual function $r_M(\boldsymbol{x})$ that results from interpolating $\xi_{j_M}(\boldsymbol{x})$ with the previous interpolation points and basis functions.
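In a discrete implementation, the functions $\xi_n$ are stored as columns of a matrix sampled at the discretization points, and the sup and arg max in (1.10)-(1.15) become array reductions over those points. A minimal sketch, with a hypothetical snapshot matrix `Xi`:

```python
import numpy as np

def eim_greedy(Xi, M):
    """Discrete EIP: Xi is (Npts, Ns); columns are snapshots xi_n on a grid.

    Returns interpolation point indices and the basis matrix Psi,
    following (1.10)-(1.15) with L-infinity norms taken over grid points.
    """
    j = np.argmax(np.max(np.abs(Xi), axis=0))      # (1.10) best snapshot
    p = np.argmax(np.abs(Xi[:, j]))                # (1.11) first point
    Psi = Xi[:, [j]] / Xi[p, j]                    # first (normalized) basis
    pts = [p]
    for _ in range(1, M):
        B = Psi[pts, :]                            # lower triangular (Thm. 1)
        sigma = np.linalg.solve(B, Xi[pts, :])     # interpolate, cf. (1.12)
        R = Xi - Psi @ sigma                       # residuals, cf. (1.15)
        j = np.argmax(np.max(np.abs(R), axis=0))   # (1.13) worst snapshot
        p = np.argmax(np.abs(R[:, j]))             # (1.14) new point
        Psi = np.hstack([Psi, R[:, [j]] / R[p, j]])
        pts.append(p)
    return np.array(pts), Psi
```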
The outputs of the EIP are $\Psi_M^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}$ and $T_M^g = \{\boldsymbol{x}_1^g, \ldots, \boldsymbol{x}_M^g\}$, which are hierarchical in the following sense: $\Psi_{M-1}^g \subset \Psi_M^g$ and $T_{M-1}^g \subset T_M^g$. It is shown in [68] that the EIP is well-defined, since both $T_M^g$ and $\Psi_M^g$ are uniquely defined. Furthermore, the interpolation is exact for any function in $\Psi_M^g$, that is,

$$\|\xi(\boldsymbol{x}) - \mathcal{I}_M[\xi]\|_{L^\infty(\Omega)} = 0, \quad \forall \xi \in \Psi_M^g. \qquad (1.16)$$

Here $\mathcal{I}_M[\xi]$ denotes the interpolant of $\xi$ using $\Psi_M^g$ and $T_M^g$. Moreover, it follows from the EIP that the following inequality holds:

$$\|\xi - \mathcal{I}_M[\xi]\|_{L^\infty(\Omega)} \le \|\xi_{j_{M+1}} - \mathcal{I}_M[\xi_{j_{M+1}}]\|_{L^\infty(\Omega)}, \quad \forall \xi \in W_N^g. \qquad (1.17)$$

Let $\mathcal{G} := \{g(u(\boldsymbol{x}, \boldsymbol{\mu})) : \boldsymbol{\mu} \in \mathcal{D}\}$ be the parametric manifold. If the subspace $W_N^g$ satisfies $\sup_{q \in \mathcal{G}} \inf_{\xi \in W_N^g} \|q - \xi\|_{L^\infty(\Omega)} \le \epsilon$ for small $\epsilon$, we have

$$\|q - \mathcal{I}_M[q]\|_{L^\infty(\Omega)} \le \|\xi_{j_{M+1}} - \mathcal{I}_M[\xi_{j_{M+1}}]\|_{L^\infty(\Omega)} \equiv \varepsilon_M[\xi_{j_{M+1}}], \quad \forall q \in \mathcal{G}. \qquad (1.18)$$

This last quantity is one of the outputs of the EIP and plays the role of an a priori error estimate. The convergence of $\varepsilon_M[\xi_{j_{M+1}}]$ as $M$ increases gives a sense of the convergence of the interpolation error.
It is shown in [8] that if $g(u) \in \Psi_{M+1}^g$, then the interpolation error is bounded by

$$\|g(u) - \mathcal{I}_M[g]\|_{L^\infty(\Omega)} \le |g(u(\boldsymbol{x}_{M+1}^g, \boldsymbol{\mu})) - \mathcal{I}_M[g](\boldsymbol{x}_{M+1}^g, \boldsymbol{\mu})| \equiv \hat{\varepsilon}_M(\boldsymbol{\mu}). \qquad (1.19)$$

The quantity $\hat{\varepsilon}_M(\boldsymbol{\mu})$ is an a posteriori estimate for the interpolation error. As a result, if $g \in \Psi_{M+1}^g$, then the integration error is bounded by

$$\|\boldsymbol{I}(\boldsymbol{\mu}) - \boldsymbol{I}_M(\boldsymbol{\mu})\|_{\ell^\infty} \le \hat{\varepsilon}_M(\boldsymbol{\mu}) \max_{1 \le n \le N} \int_\Omega |\varphi_n(\boldsymbol{x})|\,d\boldsymbol{x}. \qquad (1.20)$$

Of course, in general $g \notin \Psi_{M+1}^g$, and hence our estimators are not rigorous upper bounds. However, they are very inexpensive to compute.
In practice, the empirical interpolation procedure is carried out over a set of discretization points on the physical domain $\Omega$. There are two different options for the choice of the discretization points: (i) nodal points on all elements in the mesh, and (ii) quadrature points on all elements in the mesh. The quadrature points yield a more accurate approximation of the parametrized integrals than the nodal points. Hence, the interpolation points are selected from the set of quadrature points on all elements in the mesh.

For any given sample set $S_N$, the number of interpolation points in $T_M^g$ and basis functions in $\Psi_M^g$ cannot exceed $N$. In order to achieve a desired accuracy, one may need to choose the sample set $S_N$ conservatively large. In the context of model reduction, a large sample set $S_N$ incurs a high computational cost in the offline stage, because we must compute $N$ solutions of the underlying FOM to construct the function spaces defined in (1.9). Therefore, we want to keep $S_N$ as small as possible, while being able to generate more than $N$ interpolation points and basis functions. Toward this end, we are going to extend the EIP.

1.3.2 First-order empirical interpolation procedure


The main idea of the first-order empirical interpolation method is based on the first-order Taylor expansion of $g(u)$ about any function $\zeta$:

$$G(u, \zeta) = g(\zeta) + \frac{\partial g(\zeta)}{\partial u}(u - \zeta). \qquad (1.21)$$

Taking $u = \zeta_m$ and $\zeta = \zeta_n$, where $(\zeta_m, \zeta_n)$ is any pair of functions in the space $W_N^u$ defined in (1.9a), we arrive at

$$G(\zeta_m, \zeta_n) = g(\zeta_n) + \frac{\partial g(\zeta_n)}{\partial u}(\zeta_m - \zeta_n) \qquad (1.22)$$

for $m, n = 1, \ldots, N$. These functions define the following Lagrange-Taylor (LT) space:

$$W_{N^2}^g = \mathrm{span}\{\eta_k \equiv G(\zeta_m, \zeta_n),\ k = m + N(n-1),\ 1 \le m, n \le N\}. \qquad (1.23)$$

If the function $g$ depends explicitly on both $u$ and $\boldsymbol{\mu}$, then we define the LT space as $W_{N^2}^g = \mathrm{span}\{\eta_k, 1 \le k \le N^2\}$, where

$$\eta_k = g(\zeta_n, \boldsymbol{\mu}_n) + \frac{\partial g(\zeta_n, \boldsymbol{\mu}_n)}{\partial u}(\zeta_m - \zeta_n) + \frac{\partial g(\zeta_n, \boldsymbol{\mu}_n)}{\partial \boldsymbol{\mu}} \cdot (\boldsymbol{\mu}_m - \boldsymbol{\mu}_n) \qquad (1.24)$$

for $k = m + N(n-1)$ and $m, n = 1, \ldots, N$. Here $\boldsymbol{\mu}_n$, $1 \le n \le N$, are the parameter points in the sample set $S_N$.
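A sketch of assembling the $N^2$ LT snapshots (1.22) from the $N$ state snapshots follows; the snapshot matrix `Z` and the callables `g` and `g_u` (the derivative $\partial g/\partial u$) are assumptions of the example.

```python
import numpy as np

def lagrange_taylor_snapshots(Z, g, g_u):
    """Form the N^2 LT snapshots G(zeta_m, zeta_n) of (1.22).

    Z   : (Npts, N) state snapshots zeta_n sampled on the grid,
    g   : nonlinear function, g_u : its first derivative (callables).
    Returns an (Npts, N^2) matrix with column k = m + N*n (0-based).
    """
    Npts, N = Z.shape
    W = np.empty((Npts, N * N))
    for n in range(N):
        gz, dgz = g(Z[:, n]), g_u(Z[:, n])       # evaluate once per zeta_n
        for m in range(N):
            W[:, m + N * n] = gz + dgz * (Z[:, m] - Z[:, n])
    return W
```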
Next, we apply proper orthogonal decomposition (POD) to the LT space $W_{N^2}^g$ to generate an orthogonal basis set. The POD determines the basis set that minimizes the following mean squared error:

$$\min_{\varphi} \sum_{i=1}^{N^2} \Big\| \eta_i - \frac{(\eta_i, \varphi)}{\|\varphi\|^2}\,\varphi \Big\|^2, \qquad (1.25)$$

subject to the constraint $\|\varphi\|^2 = 1$. It is shown in [56] (see Chapter 3) that the problem (1.25) is equivalent to solving the eigenfunction equation

$$\frac{1}{N^2} \sum_{i=1}^{N^2} (\eta_i, \varphi)\,\eta_i = \lambda\,\varphi. \qquad (1.26)$$

The method of snapshots [105] expresses a typical eigenfunction $\varphi$ as a linear combination of the snapshots:

$$\varphi = \sum_{i=1}^{N^2} a_i\,\eta_i. \qquad (1.27)$$

Inserting (1.27) into (1.26), we obtain the following eigenvalue problem:

$$\boldsymbol{C}\,\boldsymbol{a} = \lambda\,\boldsymbol{a}, \qquad (1.28)$$

where $\boldsymbol{C} \in \mathbb{R}^{N^2 \times N^2}$ is the correlation matrix with entries $C_{ii'} = \frac{1}{N^2}(\eta_i, \eta_{i'})$, $1 \le i, i' \le N^2$. The eigenproblem (1.28) is solved for the first $K$ eigenvalues and eigenvectors, from which the POD basis functions $\varphi_k$, $1 \le k \le K$, are constructed by (1.27). We then introduce the following space:

$$\Phi_K^g := \mathrm{span}\{\phi_k = \lambda_k\,\varphi_k,\ 1 \le k \le K\}. \qquad (1.29)$$

Note that the eigenvalues are sorted in descending order $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_K$. The integer $K$ is chosen as the smallest integer such that $\sum_{k=K+1}^{N^2} \lambda_k \big/ \sum_{n=1}^{N^2} \lambda_n \le \epsilon$ for some small $\epsilon$.
POD captures the most dominant modes in the LT space to enable accurate representation with fewer basis functions. In particular, the LT-POD space $\Phi_K^g$ can accurately approximate any function in $W_{N^2}^g$ even though the dimension $K$ may be significantly less than $N^2$. We will apply the EIM procedure to $\Phi_K^g$ instead of $W_{N^2}^g$. Hence, POD reduces the computational cost of the EIM procedure by a factor of $N^2/K$, while generating optimal basis functions.
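A compact sketch of this POD step via the method of snapshots is given below. As an assumption of the example, the continuous inner product is replaced by the unweighted Euclidean product on the grid, and the modes are normalized to unit norm (the scaling by the eigenvalues in (1.29) can be applied afterward).

```python
import numpy as np

def pod_method_of_snapshots(W, eps=1e-8):
    """POD of the snapshot matrix W (Npts, Ns) via the correlation matrix (1.28).

    Returns the first K modes, with K chosen by the energy criterion
    sum_{k>K} lambda_k / sum_k lambda_k <= eps.
    """
    Ns = W.shape[1]
    C = (W.T @ W) / Ns                        # correlation matrix C_ii'
    lam, A = np.linalg.eigh(C)                # ascending eigenvalues
    lam, A = lam[::-1], A[:, ::-1]            # reorder to descending
    lam = np.clip(lam, 0.0, None)             # guard tiny negative round-off
    energy = np.cumsum(lam) / lam.sum()
    K = min(int(np.searchsorted(energy, 1.0 - eps)) + 1, lam.size)
    Phi = W @ A[:, :K]                        # modes via (1.27)
    Phi /= np.linalg.norm(Phi, axis=0)        # normalize columns
    return Phi, lam[:K]
```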
We now apply the EIM described in the previous subsection to the LT-POD space $\Phi_K^g$ defined in (1.29) to obtain interpolation points and basis functions. First, we find

$$j_1 = \arg\max_{1 \le j \le K} \|\phi_j\|_{L^\infty(\Omega)}, \qquad (1.30)$$

and set

$$\boldsymbol{x}_1^g = \arg\sup_{\boldsymbol{x} \in \Omega} |\phi_{j_1}(\boldsymbol{x})|, \qquad \psi_1^g(\boldsymbol{x}) = \phi_{j_1}(\boldsymbol{x})/\phi_{j_1}(\boldsymbol{x}_1^g). \qquad (1.31)$$

For $M = 2, \ldots, K$, we solve the linear systems

$$\sum_{m=1}^{M-1} \psi_m^g(\boldsymbol{x}_k^g)\,\sigma_{lm} = \phi_l(\boldsymbol{x}_k^g), \quad 1 \le k \le M-1,\ 1 \le l \le K; \qquad (1.32)$$

we then find

$$j_M = \arg\max_{1 \le l \le K} \Big\|\phi_l(\boldsymbol{x}) - \sum_{m=1}^{M-1} \sigma_{lm}\,\psi_m^g(\boldsymbol{x})\Big\|_{L^\infty(\Omega)}, \qquad (1.33)$$

and set

$$\boldsymbol{x}_M^g = \arg\sup_{\boldsymbol{x} \in \Omega} |r_M(\boldsymbol{x})|, \qquad \psi_M^g(\boldsymbol{x}) = r_M(\boldsymbol{x})/r_M(\boldsymbol{x}_M^g), \qquad (1.34)$$

where the residual function $r_M(\boldsymbol{x})$ is given by

$$r_M(\boldsymbol{x}) = \phi_{j_M}(\boldsymbol{x}) - \sum_{m=1}^{M-1} \sigma_{j_M m}\,\psi_m^g(\boldsymbol{x}); \qquad (1.35)$$

finally, we define

$$\Psi_M^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}, \qquad T_M^g = \{\boldsymbol{x}_1^g, \ldots, \boldsymbol{x}_M^g\}. \qquad (1.36)$$
The outputs of the first-order EIP are $\Psi_M^g$ and $T_M^g$, satisfying $\Psi_{M-1}^g \subset \Psi_M^g$ and $T_{M-1}^g \subset T_M^g$. In practice, the supremum $\sup_{\boldsymbol{x} \in \Omega} |r_M(\boldsymbol{x})|$ is computed over the set of quadrature points on all elements in the mesh. In other words, the interpolation points are selected from the quadrature points.

For any given parameter sample set $S_N$, the first-order EIM algorithm constructs nested sets of interpolation points $T_M^g$, $1 \le M \le K$, and nested subspaces $\Psi_M^g$, $1 \le M \le K$. Hence, the first-order EIM leverages first-order partial derivatives to generate $K$ interpolation points and $K$ basis functions. For the same sample set $S_N$, the original EIM can generate only $N$ interpolation points and $N$ basis functions. If we want to generate $K > N$ interpolation points with the original EIM, then we must expand the parameter sample set to include $K$ parameter points. Unfortunately, this demands $K$ solutions of the underlying FOM. For the nonlinear PDEs considered herein, the online computational complexity of ROMs via empirical interpolation is $O(MN^2 + N^3)$ per Newton iteration. While the online cost scales cubically with $N$, it scales only linearly with $M$. Therefore, we can gainfully use $M > N$ to obtain stable and accurate ROMs. The first-order EIM makes it possible to construct ROMs with $M > N$ without increasing the parameter sample set $S_N$.

1.3.3 Stability of the first-order empirical interpolation


The first-order EIM has all the desirable properties of the EIM. It is well-defined in the sense that the basis functions are linearly independent and the matrix $\boldsymbol{B}_M^g$ with entries $B_{M,ij}^g = \psi_j^g(\boldsymbol{x}_i^g)$, $1 \le i, j \le M$, is invertible. The interpolant of the parametrized nonlinear function $g(u(\boldsymbol{x}, \boldsymbol{\mu}))$ is defined as

$$g_M(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{m=1}^{M} a_m(\boldsymbol{\mu})\,\psi_m^g(\boldsymbol{x}), \qquad (1.37)$$

where the coefficients $a_m(\boldsymbol{\mu})$, $1 \le m \le M$, are given by

$$\boldsymbol{a}_M(\boldsymbol{\mu}) = \left(\boldsymbol{B}_M^g\right)^{-1} \boldsymbol{b}_M^g(\boldsymbol{\mu}). \qquad (1.38)$$

Here $\boldsymbol{b}_M^g(\boldsymbol{\mu}) \in \mathbb{R}^M$ has entries $b_{M,m}^g(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_m^g, \boldsymbol{\mu}))$ for $1 \le m \le M$. In what follows, we denote the interpolant of any function $v$ by $\mathcal{I}_M[v]$ to simplify the notation. We first obtain an intermediate result:
𝑔 𝑔 𝑔
Lemma 1. Assume that $\Psi_{M-1}^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_{M-1}^g\}$ is of dimension $M-1$ and that $\boldsymbol{B}_{M-1}^g$ is invertible. Then $\mathcal{I}_{M-1}[v] = v$ for any $v \in \Psi_{M-1}^g$. In other words, the interpolation is exact for all $v$ in $\Psi_{M-1}^g$.

Proof. Let $\boldsymbol{\beta}_{M-1} \in \mathbb{R}^{M-1}$ be the coefficient vector that defines the interpolant $\mathcal{I}_{M-1}[v]$. Any $v \in \Psi_{M-1}^g$ can be expressed as $v(\boldsymbol{x}) = \sum_{m=1}^{M-1} \gamma_{M-1,m}\,\psi_m^g(\boldsymbol{x})$. Taking $\boldsymbol{x} = \boldsymbol{x}_m^g$ gives $v(\boldsymbol{x}_m^g) = \sum_{j=1}^{M-1} \gamma_{M-1,j}\,\psi_j^g(\boldsymbol{x}_m^g)$, $1 \le m \le M-1$. It thus follows from the invertibility of $\boldsymbol{B}_{M-1}^g$ that $\boldsymbol{\gamma}_{M-1} = \boldsymbol{\beta}_{M-1}$. Hence, we have $\mathcal{I}_{M-1}[v] = v$.

We can then prove the following theorem.

Theorem 1. Assume that the dimension of the LT-POD space $\Phi_K^g$ is $K$. Then, for any $M \le K$, the space $\Psi_M^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}$ is of dimension $M$. In addition, the matrix $\boldsymbol{B}_M^g$ is lower triangular with unity diagonal.

Proof. We proceed by induction. Clearly, $\Psi_1^g = \mathrm{span}\{\psi_1^g\}$ is of dimension 1 and the matrix $\boldsymbol{B}_1^g = 1$ is invertible. Next we assume that $\Psi_{M-1}^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_{M-1}^g\}$ is of dimension $M-1$ and the matrix $\boldsymbol{B}_{M-1}^g$ is invertible. We must then prove that (i) $\Psi_M^g = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}$ is of dimension $M$, and (ii) the matrix $\boldsymbol{B}_M^g$ is invertible. To prove (i), we note from the "arg max" construction (1.33) and the assumption of the theorem that $\|r_M(\boldsymbol{x})\|_{L^\infty(\Omega)} > 0$. Hence, if $\dim(\Psi_M^g) \ne M$, we have $\phi_{j_M} \in \Psi_{M-1}^g$ and thus $\|r_M(\boldsymbol{x})\|_{L^\infty(\Omega)} = 0$ by Lemma 1; the latter contradicts $\|r_M(\boldsymbol{x})\|_{L^\infty(\Omega)} > 0$. To prove (ii), we note from the construction procedure (1.30)-(1.35) that $B_{M,ij}^g = r_j(\boldsymbol{x}_i^g)/r_j(\boldsymbol{x}_j^g) = 0$ for $i < j$; that $B_{M,ij}^g = r_j(\boldsymbol{x}_i^g)/r_j(\boldsymbol{x}_j^g) = 1$ for $i = j$; and that $|B_{M,ij}^g| = |r_j(\boldsymbol{x}_i^g)/r_j(\boldsymbol{x}_j^g)| \le 1$ for $i > j$, since $\boldsymbol{x}_j^g = \arg\max_{\boldsymbol{x} \in \Omega} |r_j(\boldsymbol{x})|$, $1 \le j \le M$. Hence, $\boldsymbol{B}_M^g$ is lower triangular with unity diagonal.

This theorem implies that the procedure yields unique interpolation points and linearly independent basis functions as long as $M$ does not exceed the dimension of the function space used to construct the basis functions and interpolation points. Furthermore, the procedure reorders members of the function space in such a way that $\Psi_M^g = \mathrm{span}\{\phi_{j_1}, \ldots, \phi_{j_M}\} = \mathrm{span}\{\psi_1^g, \ldots, \psi_M^g\}$. Hence, the procedure allows for selecting a subset of basis functions from a larger set.

1.3.4 Convergence of the first-order empirical interpolation


The convergence analysis of the interpolation procedure involves the Lebesgue constant, as follows.

Theorem 2. The interpolation error $\varepsilon_M(\boldsymbol{\mu}) \equiv \|g(u(\boldsymbol{x}, \boldsymbol{\mu})) - g_M(\boldsymbol{x}, \boldsymbol{\mu})\|_{L^\infty(\Omega)}$ is bounded by

$$\varepsilon_M(\boldsymbol{\mu}) \le (1 + \Lambda_M) \inf_{v \in \Psi_M^g} \|g(u(\boldsymbol{x}, \boldsymbol{\mu})) - v\|_{L^\infty(\Omega)}, \qquad (1.39)$$

where $\Lambda_M$ is the Lebesgue constant

$$\Lambda_M = \sup_{\boldsymbol{x} \in \Omega} \sum_{j=1}^{M} \Big| \sum_{m=1}^{M} \psi_m^g(\boldsymbol{x})\,\big[(\boldsymbol{B}_M^g)^{-1}\big]_{mj} \Big|. \qquad (1.40)$$

Moreover, the Lebesgue constant satisfies $\Lambda_M \le 2^M - 1$.

This result was proven in [8]. The last term on the right-hand side of the above inequality is known as the best approximation error. Although the upper bound on the Lebesgue constant is very pessimistic, it can be attained in some extreme cases [68].
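On a discretization grid, the Lebesgue constant (1.40) can be evaluated directly from the sampled basis and the interpolation indices, as in the following sketch (array names hypothetical):

```python
import numpy as np

def lebesgue_constant(Psi, pts):
    """Evaluate the Lebesgue constant (1.40) on the discretization grid.

    Psi : (Npts, M) EIM basis sampled on the grid,
    pts : (M,) interpolation point indices into the grid.
    The cardinal functions are h_j = sum_m psi_m [B^{-1}]_{mj}.
    """
    B = Psi[pts, :]                      # (M, M), lower triangular
    H = Psi @ np.linalg.inv(B)           # cardinal functions h_j(x)
    return np.max(np.sum(np.abs(H), axis=1))
```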
We recall the parametric manifold $\mathcal{G} := \{g(u(\boldsymbol{x}, \boldsymbol{\mu})) : \boldsymbol{\mu} \in \mathcal{D}\}$. The best approximation of an element $q \in \mathcal{G}$ in some finite dimensional space $X_n$ of dimension $n$ is given by the orthogonal projection onto $X_n$, namely, $q^* = \arg\inf_{p \in X_n} \|q - p\|_{L^\infty(\Omega)}$. In many cases, the evaluation of the best approximation may be costly and requires knowledge of $q$ over the entire domain $\Omega$. Thus, interpolation serves as an inexpensive surrogate for the best approximation. The $n$-width for an interpolation process is defined by

$$\hat{d}_n(\mathcal{G}) = \inf_{X_n} \sup_{q \in \mathcal{G}} \|q - \mathcal{I}_n(X_n, T_n)[q]\|_{L^\infty(\Omega)}, \qquad (1.41)$$

where $\mathcal{I}_n(X_n, T_n)[q]$ denotes an interpolant of $q$ in the linear subspace $X_n$ using $n$ interpolation points in $T_n$. The interpolation $n$-width measures the extent to which $\mathcal{G}$ may be interpolated by the interpolation procedure $\mathcal{I}_n(X_n, T_n)$. The interpolation $n$-width $\hat{d}_n(\mathcal{G})$ is an upper bound on the Kolmogorov $n$-width

$$d_n(\mathcal{G}) = \inf_{X_n} \sup_{q \in \mathcal{G}} \inf_{p \in X_n} \|q - p\|_{L^\infty(\Omega)}. \qquad (1.42)$$

If $\hat{d}_n(\mathcal{G})$ converges to zero as $n$ goes to infinity as fast as $d_n(\mathcal{G})$ does, then the interpolation procedure $\mathcal{I}_n(X_n, T_n)$ is stable and accurate.

The interpolation $n$-width raises two important questions: Is there a constructive optimal selection of the interpolation points? Is there a constructive optimal construction of the approximation subspaces? The first-order EIM provides a positive answer to the first question by generating a unique set of interpolation points that yield a stable and unique interpolant. The first-order EIM also provides a positive answer to the second question by using first-order partial derivatives to construct good approximation spaces. Since $\Psi_M^g$ converges rapidly to $\Phi_K^g$ as $M$ tends to $K$, the first-order EIM can yield good approximation spaces for the parametric manifold $\mathcal{G}$. Indeed, we note from the first-order EIM procedure that

$$\|q - \mathcal{I}_M[q]\|_{L^\infty(\Omega)} \le \|\phi_{j_{M+1}} - \mathcal{I}_M[\phi_{j_{M+1}}]\|_{L^\infty(\Omega)}, \quad \forall q \in \Phi_K^g. \qquad (1.43)$$

This last quantity is one of the outputs of the first-order EIP and plays the role of an a priori error estimate. The convergence of $\|\phi_{j_{M+1}} - \mathcal{I}_M[\phi_{j_{M+1}}]\|_{L^\infty(\Omega)}$ as $M$ increases gives a sense of the convergence of the interpolation error.

1.3.5 Error estimate of the first-order empirical interpolation


The convergence analysis of the first-order empirical interpolation provides an estimate for the number of interpolation points needed to achieve a specific accuracy in the offline stage. However, it does not provide an estimate of the interpolation error for any given parameter $\boldsymbol{\mu}$ in the online stage. The following error estimate was obtained in [8].

Proposition 1. If $g(u(\boldsymbol{x}, \boldsymbol{\mu})) \in \Psi_{M+1}^g$, then the interpolation error $\varepsilon_M(\boldsymbol{\mu}) \equiv \|g(u(\boldsymbol{x}, \boldsymbol{\mu})) - g_M(\boldsymbol{x}, \boldsymbol{\mu})\|_{L^\infty(\Omega)}$ is bounded by

$$\varepsilon_M(\boldsymbol{\mu}) \le \hat{\varepsilon}_M(\boldsymbol{\mu}) \equiv |g(u(\boldsymbol{x}_{M+1}^g, \boldsymbol{\mu})) - g_M(\boldsymbol{x}_{M+1}^g, \boldsymbol{\mu})|. \qquad (1.44)$$

Of course, in general $g(u(\boldsymbol{x}, \boldsymbol{\mu})) \notin \Psi_{M+1}^g$, and hence the error estimator $\hat{\varepsilon}_M(\boldsymbol{\mu})$ is not quite an upper bound. Indeed, $\hat{\varepsilon}_M(\boldsymbol{\mu})$ is in fact a lower bound of the interpolation error $\varepsilon_M(\boldsymbol{\mu})$, since $|g(u(\boldsymbol{x}_{M+1}^g, \boldsymbol{\mu})) - g_M(\boldsymbol{x}_{M+1}^g, \boldsymbol{\mu})| \le \|g(u(\boldsymbol{x}, \boldsymbol{\mu})) - g_M(\boldsymbol{x}, \boldsymbol{\mu})\|_{L^\infty(\Omega)}$. We extend the above result to improve the error estimate as follows.
Theorem 3. If $g(u(\boldsymbol{x}, \boldsymbol{\mu})) \in \Psi_{M+P}^g$ for some $P \in \mathbb{N}^+$, then the interpolation error $\varepsilon_M(\boldsymbol{\mu}) \equiv \|g(u(\boldsymbol{x}, \boldsymbol{\mu})) - g_M(\boldsymbol{x}, \boldsymbol{\mu})\|_{L^\infty(\Omega)}$ is bounded by

$$\varepsilon_M(\boldsymbol{\mu}) \le \hat{\varepsilon}_{M,P}(\boldsymbol{\mu}) \equiv \sum_{j=1}^{P} |e_j(\boldsymbol{\mu})|, \qquad (1.45)$$

where $e_j(\boldsymbol{\mu})$, $1 \le j \le P$, solve the following linear system:

$$\sum_{j=1}^{P} \psi_{M+j}^g(\boldsymbol{x}_{M+i}^g)\,e_j(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_{M+i}^g, \boldsymbol{\mu})) - g_M(\boldsymbol{x}_{M+i}^g, \boldsymbol{\mu}), \quad 1 \le i \le P. \qquad (1.46)$$

Proof. Since by assumption $g(u(\boldsymbol{x}, \boldsymbol{\mu})) \in \Psi_{M+P}^g$, we have

$$g(u(\boldsymbol{x}, \boldsymbol{\mu})) - g_M(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{m=1}^{M+P} \kappa_m(\boldsymbol{\mu})\,\psi_m^g(\boldsymbol{x}),$$

which yields

$$\sum_{m=1}^{M+P} \psi_m^g(\boldsymbol{x}_i^g)\,\kappa_m(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_i^g, \boldsymbol{\mu})) - g_M(\boldsymbol{x}_i^g, \boldsymbol{\mu}), \quad 1 \le i \le M+P.$$

Since $g(u(\boldsymbol{x}_i^g, \boldsymbol{\mu})) - g_M(\boldsymbol{x}_i^g, \boldsymbol{\mu}) = 0$ for $1 \le i \le M$, and the matrix $\psi_m^g(\boldsymbol{x}_i^g)$ is lower triangular with unity diagonal, we have $\kappa_m(\boldsymbol{\mu}) = 0$, $1 \le m \le M$. Therefore, the above system reduces to

$$\sum_{j=1}^{P} \psi_{M+j}^g(\boldsymbol{x}_{M+i}^g)\,\kappa_{M+j}(\boldsymbol{\mu}) = g(u(\boldsymbol{x}_{M+i}^g, \boldsymbol{\mu})) - g_M(\boldsymbol{x}_{M+i}^g, \boldsymbol{\mu}), \quad 1 \le i \le P.$$

It follows from Theorem 1 that $e_j(\boldsymbol{\mu}) = \kappa_{M+j}(\boldsymbol{\mu})$, $1 \le j \le P$, and thus we obtain

$$g(u(\boldsymbol{x}, \boldsymbol{\mu})) - g_M(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{j=1}^{P} e_j(\boldsymbol{\mu})\,\psi_{M+j}^g(\boldsymbol{x}).$$

The desired result follows directly from taking the $L^\infty(\Omega)$ norm on both sides, using the triangle inequality, and noting that $\|\psi_{M+j}^g(\boldsymbol{x})\|_{L^\infty(\Omega)} = 1$, $1 \le j \le P$.

The operation count of evaluating the error estimator (1.45) is only $O(P^2)$. Hence, the error estimator is very inexpensive. The integration errors are bounded by

$$|I_n(\boldsymbol{\mu}) - I_{M,n}(\boldsymbol{\mu})| \le \hat{\varepsilon}_{M,P}(\boldsymbol{\mu})\,C_{\varphi_n} \equiv \hat{\delta}_{M,P}(\boldsymbol{\mu}), \quad 1 \le n \le N, \qquad (1.47)$$

where $C_{\varphi_n} = \int_\Omega |\varphi_n|$ are parameter-independent and thus pre-computed.
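A sketch of evaluating the estimator of Theorem 3 follows: solve the small lower-triangular system (1.46) and sum the magnitudes of its solution, per (1.45). The inputs (basis samples, point indices, and the values of $g$ and of the interpolant at the first $M+P$ points) are assumed available from the online stage.

```python
import numpy as np
from scipy.linalg import solve_triangular

def foeim_error_estimate(Psi, pts, g_vals, gM_vals, M, P):
    """A posteriori estimate (1.45)-(1.46) for the FOEIM interpolant.

    Psi     : (Npts, M+P) basis, pts : (M+P,) interpolation indices,
    g_vals  : g(u(x, mu)) at the M+P points, gM_vals : interpolant there.
    The P x P system matrix is lower triangular with unit diagonal (Thm. 1).
    """
    A = Psi[pts[M:M + P], M:M + P]            # psi_{M+j}(x_{M+i})
    rhs = g_vals[M:M + P] - gM_vals[M:M + P]  # residual at the extra points
    e = solve_triangular(A, rhs, lower=True, unit_diagonal=True)  # O(P^2)
    return np.sum(np.abs(e))                  # estimate (1.45)
```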

1.3.6 Numerical results


   
We consider the parametrized function $u(\boldsymbol{x}, \boldsymbol{\mu}) = x_1 x_2 \tanh\!\big(\tfrac{1-x_1}{\mu_1}\big) \tanh\!\big(\tfrac{1-x_2}{\mu_2}\big)$ for $\boldsymbol{x} \in \Omega \equiv (0,1)^2$ and $\boldsymbol{\mu} \in \mathcal{D} \equiv [0.05, 1]^2$. We are interested in evaluating the parametrized integral $I(\boldsymbol{\mu}) = \int_\Omega g(u(\boldsymbol{x}, \boldsymbol{\mu}))\,\varphi(\boldsymbol{x})\,d\boldsymbol{x}$, where $g(\cdot)$ is the Gaussian function $g(u) = \exp(-u^2)$ and $\varphi(\boldsymbol{x}) = 1$. We present numerical results obtained using both the EIM and the first-order EIM to approximate the parametrized integral $I(\boldsymbol{\mu})$.
We choose for $S_{N_{\max}}$ a deterministic grid of $N_{\max} = 9 \times 9$ parameter points over $\mathcal{D}$, and generate a sequence of nested sample sets $S_N \subset S_{N_{\max}}$ for $N = 2 \times 2, 3 \times 3, \ldots, 9 \times 9$ by using the following logarithmic distribution:

$$y(x) = a + (b - a)\,\frac{\exp(\alpha(x - a)/(b - a)) - 1}{\exp(\alpha) - 1},$$

for $x \in [a, b]$ with $a = 0.05$, $b = 1$, and $\alpha = 2$. The function $y(x)$ maps a uniform grid into a logarithmic grid that is clustered toward $0.05$.
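For concreteness, a short sketch of the mapping and the tensorized sample grid (parameter names match the text):

```python
import numpy as np

def log_grid(n, a=0.05, b=1.0, alpha=2.0):
    """Map a uniform grid on [a, b] to a grid clustered toward a."""
    x = np.linspace(a, b, n)
    return a + (b - a) * np.expm1(alpha * (x - a) / (b - a)) / np.expm1(alpha)

# Tensorized 9 x 9 sample set over D = [0.05, 1]^2:
g1 = log_grid(9)
S = np.array([(m1, m2) for m1 in g1 for m2 in g1])
```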

As shown in Figure 1.1(a), the parameter points in $S_{N_{\max}}$ are mainly distributed around the corner $(0.05, 0.05)$ of the parameter domain. We consider three different values of $M$, namely, $M = N$, $M = 2N$, and $M = 3N$, for the first-order EIM. The interpolation points are plotted in Figure 1.1(b) for $M = 2N = 128$. We note that the interpolation points are largely allocated along the top and right boundaries of the physical domain $\Omega$. These results are expected because $u(\boldsymbol{x}, \boldsymbol{\mu})$ develops boundary layers along the top and right boundaries when $\boldsymbol{\mu}$ is small.

FIGURE 1.1 Distribution of the parameter sample set $S_{N_{\max}}$ in the parameter domain (a), and distribution of the interpolation point set $T_M^g$ ($M = 2N = 128$) in the physical domain (b).

We now introduce a uniform grid of size $N_{\rm Test} = 30 \times 30$ as a parameter test sample $S_{\rm Test}^g$. We then define

$$\varepsilon_M = \frac{1}{N_{\rm Test}} \sum_{\boldsymbol{\mu} \in S_{\rm Test}^g} \varepsilon_M(\boldsymbol{\mu}), \qquad \delta_M = \frac{1}{N_{\rm Test}} \sum_{\boldsymbol{\mu} \in S_{\rm Test}^g} |I(\boldsymbol{\mu}) - I_M(\boldsymbol{\mu})|$$

as the average interpolation error and the average integration error, respectively.
We display in Figure 1.2 $\varepsilon_M$ and $\Lambda_M$ as functions of $N$ for the EIM and the first-order EIM (FOEIM) with $M = N$, $M = 2N$, and $M = 3N$. We observe that $\varepsilon_M$ converges rapidly with $N$, while the Lebesgue constant $\Lambda_M$ grows slowly with $N$. We see from Figure 1.2(a) that FOEIM ($M = N$) yields smaller interpolation errors than EIM, which can be attributed to the use of partial derivatives. We also observe that FOEIM ($M = 2N$) yields significantly smaller interpolation errors than both EIM and FOEIM ($M = N$). Indeed, the interpolation errors for FOEIM ($M = 2N$) are several orders of magnitude smaller than those for EIM. Increasing $M$ to $3N$ reduces the interpolation errors even further.
Table 1.1 shows the average interpolation and integration errors of the first-
order EIM for different values of 𝑁 and for 𝑀 = 𝑁, 𝑀 = 2𝑁, and 𝑀 = 3𝑁.
We see from Table 1.1 that the errors drop rapidly as 𝑁 increases. Increasing
𝑀 from 𝑀 = 𝑁 to 𝑀 = 2𝑁 and 𝑀 = 3𝑁 considerably reduces the errors. We
observe that the integration errors are one or two orders of magnitude less than

FIGURE 1.2 The average interpolation error (a) and the Lebesgue constant (b) as functions of $N$ for the EIM and the first-order EIM.

the interpolation errors. To verify how sharp the error estimators are, we define

$$\hat{\kappa}_{M,P} = \frac{1}{N_{\rm Test}} \sum_{\boldsymbol{\mu} \in S_{\rm Test}^g} \frac{\hat{\varepsilon}_{M,P}(\boldsymbol{\mu})}{\varepsilon_M(\boldsymbol{\mu})}, \qquad \kappa_{M,P} = \frac{1}{N_{\rm Test}} \sum_{\boldsymbol{\mu} \in S_{\rm Test}^g} \frac{\hat{\delta}_{M,P}(\boldsymbol{\mu})}{|I(\boldsymbol{\mu}) - I_M(\boldsymbol{\mu})|},$$

where $\hat{\varepsilon}_{M,P}(\boldsymbol{\mu})$ is the estimate for the interpolation error $\varepsilon_M(\boldsymbol{\mu})$, while $\hat{\delta}_{M,P}(\boldsymbol{\mu})$ is the estimate for the integration error. Here we use $P = 4$ to compute the error estimates. We display in Table 1.2 $\hat{\kappa}_{M,P}$ and $\kappa_{M,P}$ for different values of $N$ and for $M = N$, $M = 2N$, and $M = 3N$. Since $\hat{\kappa}_{M,P}$ is greater than 1 and less than 2.5, the estimate for the interpolation error is rigorous and very tight. However, since $\kappa_{M,P}$ is greater than 100, the estimate for the integration error is not tight. Because the integration errors converge very fast and are very small, effectivities of $O(100)$ or even $O(1000)$ are still acceptable.

        M = N                 M = 2N                M = 3N
 N      ε_M       δ_M        ε_M       δ_M        ε_M       δ_M
 4      2.17e-2   1.89e-3    8.58e-3   4.89e-4    4.69e-3   4.66e-4
 9      4.66e-3   2.01e-4    1.73e-3   1.80e-4    4.70e-4   1.72e-5
16      1.28e-3   1.02e-4    1.85e-4   6.86e-6    4.38e-5   9.04e-7
25      3.32e-4   1.15e-5    2.90e-5   2.81e-7    7.21e-6   3.18e-8
36      1.16e-4   3.15e-6    4.57e-6   8.98e-8    5.91e-7   4.23e-9
49      3.24e-5   5.71e-7    7.79e-7   1.25e-8    1.08e-7   1.25e-9
64      7.74e-6   2.19e-7    1.48e-7   3.10e-9    1.99e-8   1.67e-10
81      2.25e-6   3.00e-8    4.10e-8   6.00e-10   4.09e-9   1.56e-11
TABLE 1.1 Average interpolation and integration errors of the first-order EIM for different values of N and for M = N, M = 2N, and M = 3N.

        M = N                 M = 2N                M = 3N
 N      κ̂_{M,P}  κ_{M,P}    κ̂_{M,P}  κ_{M,P}    κ̂_{M,P}  κ_{M,P}
 4      2.31     145.17     2.02     120.97     1.69     100.91
 9      2.00     271.79     1.47     44.32      1.77     128.01
16      2.15     105.97     1.51     358.13     1.98     1726.29
25      1.88     142.02     2.01     1443.84    1.86     789.49
36      1.45     514.87     2.15     2023.45    1.71     863.79
49      1.83     327.36     1.81     588.78     1.80     250.53
64      2.06     267.40     1.95     139.57     2.28     215.74
81      1.60     393.80     1.83     182.06     1.77     353.96
TABLE 1.2 Average effectivities for the interpolation and integration errors of the first-order EIM for different values of N and for M = N, M = 2N, and M = 3N. Note that P = 4 is used to compute the error estimates.

1.4 MODEL REDUCTION TECHNIQUES


1.4.1 Parametrized nonlinear full order model
We consider the following parametrized nonlinear PDE:

$$-\nabla^2 u^{\rm e} + \nabla \cdot (\boldsymbol{\mu}\,u^{\rm e}) + g(u^{\rm e}) = 0 \quad \text{in } \Omega, \qquad (1.48)$$

with homogeneous Dirichlet condition $u^{\rm e} = 0$ on $\partial\Omega$. Here $\Omega = (0,1)^2$ is the unit square domain, while $\boldsymbol{\mu} = (\mu_1, \mu_2)$ is the parameter vector in the parameter domain $\mathcal{D} \equiv [0, 30] \times [0, 30]$. The vector $\boldsymbol{\mu}$ defines the convection term, while the scalar function $g(u^{\rm e}(\boldsymbol{x}, \boldsymbol{\mu}))$ is the reaction term. The reaction term is the following nonlinear function of the state variable $u^{\rm e}$:

$$g(u^{\rm e}) = -2\pi \exp(\sin(2\pi u^{\rm e})). \qquad (1.49)$$

For simplicity of exposition, the convection term is assumed to be independent of $\boldsymbol{x}$ and $u$. We can also treat problems in which the convection terms are functions of $u$, $\boldsymbol{\mu}$, and $\boldsymbol{x}$.

Let $X \subset H_0^1(\Omega)$ be a finite element (FE) approximation space of dimension $\mathcal{N}$, where $H_0^1(\Omega)$ is the Hilbert space of square-integrable functions that vanish on the boundary and whose derivatives are also square integrable. In particular, we consider $X = \{v \in H_0^1(\Omega) : v|_T \in \mathbb{P}^3(T), \forall T \in \mathcal{T}_h\}$, where $\mathbb{P}^3(T)$ is the space of polynomials of degree 3 on an element $T \in \mathcal{T}_h$ and $\mathcal{T}_h$ is a finite element grid of $40 \times 40$ quadrilaterals. The dimension of the FE space $X$ is $\mathcal{N} = 14641$. The FE approximation $u(\boldsymbol{\mu}) \in X$ of the exact solution $u^{\rm e}(\boldsymbol{\mu})$ is the solution of

$$\int_\Omega \nabla u \cdot \nabla v - \int_\Omega u\,\boldsymbol{\mu} \cdot \nabla v + \int_\Omega g(u)\,v = 0, \quad \forall v \in X. \qquad (1.50)$$

The output of interest is evaluated as $s(\boldsymbol{\mu}) = \ell(u(\boldsymbol{\mu}))$, where $\ell(v) \equiv \int_\Omega v$ is a linear functional. Figure 1.3 shows four instances of $u(\boldsymbol{x}, \boldsymbol{\mu})$ corresponding to the four corners of the parameter domain.

FIGURE 1.3 Four instances of the parametrized solution 𝑢( 𝒙, 𝝁).

Because the dimension of the FE space $X$ is large, the FOM (1.50) may be expensive in the many-query context, which requires repeated evaluations of the input-output relationship. Model reduction techniques are needed to provide rapid yet accurate predictions of the input-output relationship induced by the parametric FOM. For nonlinear PDEs, the model reduction process is carried out in two steps. In the first step, Galerkin (or, more generally, Petrov-Galerkin) projection is used to project the underlying FOM onto a low-dimensional subspace. In the second step, a hyper-reduction method is employed to reduce the computational cost of evaluating the nonlinear terms. We describe the first step next.

1.4.2 Reduced basis approximation


We introduce the parameter sample set $S_N = \{\boldsymbol{\mu}_1 \in \mathcal{D}, \ldots, \boldsymbol{\mu}_N \in \mathcal{D}\}$ and the associated RB space $W_N^u = \mathrm{span}\{\zeta_j \equiv u(\boldsymbol{\mu}_j), 1 \le j \le N\}$, where $u(\boldsymbol{\mu}_j)$ is the FE solution at $\boldsymbol{\mu} = \boldsymbol{\mu}_j$. We then orthonormalize the $\zeta_j$, $1 \le j \le N$, with respect to $(\cdot, \cdot)_X$ so that $(\zeta_i, \zeta_j)_X = \delta_{ij}$, $1 \le i, j \le N$. The RB approximation is obtained by a standard Galerkin projection: given $\boldsymbol{\mu} \in \mathcal{D}$, we find $u_N(\boldsymbol{\mu}) \in W_N^u$ as the solution of the following nonlinear system:

$$\int_\Omega \nabla u_N \cdot \nabla v - \int_\Omega u_N\,\boldsymbol{\mu} \cdot \nabla v + \int_\Omega g(u_N)\,v = 0, \quad \forall v \in W_N^u. \qquad (1.51)$$

We then evaluate the RB output as $s_N(\boldsymbol{\mu}) = \ell(u_N(\boldsymbol{\mu}))$. To solve (1.51), we use Newton's method to linearize it at a current iterate $\bar{u}_N(\boldsymbol{\mu}) \in W_N^u$. Thus, we find the increment $\delta u_N(\boldsymbol{\mu}) \in W_N^u$ as the solution of the following linear system:

$$\bar{a}(\delta u_N(\boldsymbol{\mu}), v; \boldsymbol{\mu}) = -\bar{r}(v; \boldsymbol{\mu}), \quad \forall v \in W_N^u, \qquad (1.52)$$

where the bilinear form $\bar{a}$ is given by

$$\bar{a}(w, v; \boldsymbol{\mu}) \equiv \int_\Omega \nabla w \cdot \nabla v - \int_\Omega w\,\boldsymbol{\mu} \cdot \nabla v + \int_\Omega g_u(\bar{u}_N)\,w\,v, \qquad (1.53)$$

and the linear functional $\bar{r}$ is given by

$$\bar{r}(v; \boldsymbol{\mu}) \equiv \int_\Omega \nabla \bar{u}_N \cdot \nabla v - \int_\Omega \bar{u}_N\,\boldsymbol{\mu} \cdot \nabla v + \int_\Omega g(\bar{u}_N)\,v. \qquad (1.54)$$

Here $g_u(\cdot) = \partial g(\cdot)/\partial u$ denotes the first-order partial derivative. The bar on the bilinear form and the linear functional signifies their dependence on the current iterate $\bar{u}_N(\boldsymbol{\mu})$.
We express $\delta u_N(\boldsymbol{\mu}) = \sum_{n=1}^{N} \alpha_{N,n}(\boldsymbol{\mu})\,\zeta_n$ and choose test functions $v = \zeta_j$, $1 \le j \le N$, in (1.52) to obtain the linear system in matrix form:

$$\boldsymbol{J}_N(\boldsymbol{\mu})\,\delta\boldsymbol{\alpha}_N(\boldsymbol{\mu}) = -\boldsymbol{r}_N(\boldsymbol{\mu}), \qquad (1.55)$$

where $\boldsymbol{J}_N(\boldsymbol{\mu}) \in \mathbb{R}^{N \times N}$ and $\boldsymbol{r}_N(\boldsymbol{\mu}) \in \mathbb{R}^N$ have entries

$$J_{N,ij}(\boldsymbol{\mu}) = \bar{a}(\zeta_j, \zeta_i; \boldsymbol{\mu}), \qquad r_{N,i}(\boldsymbol{\mu}) = \bar{r}(\zeta_i; \boldsymbol{\mu}), \quad i, j = 1, \ldots, N. \qquad (1.56)$$

Both the matrix $\boldsymbol{J}_N(\boldsymbol{\mu})$ and the vector $\boldsymbol{r}_N(\boldsymbol{\mu})$ must be computed at each Newton iteration, since they depend on the current iterate $\bar{u}_N(\boldsymbol{\mu})$. They are computationally expensive to form due to the presence of nonlinear integrals in both the bilinear form $\bar{a}$ and the functional $\bar{r}$. Consequently, although the linear system (1.55) is small, it is computationally expensive because of the $\mathcal{N}$-dependent complexity of forming $\boldsymbol{J}_N(\boldsymbol{\mu})$ and $\boldsymbol{r}_N(\boldsymbol{\mu})$. As a result, the RB approximation does not offer a significant speedup over the FE approximation.
We devise two different hyper-reduction approaches to deal with the nonlinear
terms. The first approach is hyper-reduction followed by linearization: the first-
order EIM is applied to approximate the nonlinear integrals in the nonlinear
system (1.51); Newton’s method is then used to linearize the resulting system.
The second approach is linearization followed by hyper-reduction: the first-order
EIM is applied to approximate the nonlinear integrals in the bilinear form (1.53)
and the linear functional (1.54) of the linear system (1.52), which results from
the linearization of the nonlinear system (1.51) by Newton’s method. We are
going to describe the first approach.

1.4.3 Hyper-reduction followed by linearization


We approximate $g(u_N(\boldsymbol{x}, \boldsymbol{\mu}))$ in (1.51) with $g_M(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{m=1}^{M} \beta_{M,m}^g(\boldsymbol{\mu})\,\psi_m^g(\boldsymbol{x})$, where $\boldsymbol{\beta}_M^g(\boldsymbol{\mu})$ is given by

$$\boldsymbol{\beta}_M^g(\boldsymbol{\mu}) = \left(\boldsymbol{B}_M^g\right)^{-1} \boldsymbol{b}_M^g(\boldsymbol{\mu}). \qquad (1.57)$$

Here $B_{M,km}^g = \psi_m^g(\boldsymbol{x}_k^g)$ and $b_{M,k}^g(\boldsymbol{\mu}) = g(u_N(\boldsymbol{x}_k^g, \boldsymbol{\mu}))$ for $1 \le k, m \le M$. We compute $T_M^g = \{\boldsymbol{x}_m^g\}_{m=1}^M$ and $\Psi_M^g = \mathrm{span}\{\psi_m^g(\boldsymbol{x}), 1 \le m \le M\}$ by applying the EIM to the LT-POD space $\Phi_K^g$ defined in (1.29).
By replacing the nonlinear terms in (1.51) with their interpolants, we arrive at the following ROM: for any $\boldsymbol{\mu} \in \mathcal{D}$, find $u_{N,M}(\boldsymbol{\mu}) \in W_N^u$ as the solution of

$$\int_\Omega \nabla u_{N,M} \cdot \nabla v - \int_\Omega u_{N,M}\,\boldsymbol{\mu} \cdot \nabla v + \sum_{m=1}^{M} \beta_{M,m}^g(\boldsymbol{\mu}) \int_\Omega \psi_m^g\,v = 0, \quad \forall v \in W_N^u. \qquad (1.58)$$

By expressing $u_{N,M}(\boldsymbol{\mu}) = \sum_{n=1}^{N} \alpha_{N,n}(\boldsymbol{\mu})\,\zeta_n$ and choosing test functions $v = \zeta_j$, $1 \le j \le N$, in (1.58), we arrive at the algebraic system

$$(\boldsymbol{A}_N - \mu_1 \boldsymbol{C}_N^1 - \mu_2 \boldsymbol{C}_N^2)\,\boldsymbol{\alpha}_N(\boldsymbol{\mu}) + \boldsymbol{G}_{NM}\,\boldsymbol{\beta}_M^g(\boldsymbol{\mu}) = 0, \qquad (1.59)$$

where

$$A_{N,jn} = \int_\Omega \nabla\zeta_n \cdot \nabla\zeta_j, \qquad C_{N,jn}^i = \int_\Omega \zeta_n\,\frac{\partial\zeta_j}{\partial x_i}, \qquad G_{NM,jm} = \int_\Omega \psi_m^g\,\zeta_j, \qquad (1.60)$$

for $1 \le j, n \le N$, $1 \le m \le M$, $1 \le i \le 2$. It follows from $u_N(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{n=1}^{N} \alpha_{N,n}(\boldsymbol{\mu})\,\zeta_n(\boldsymbol{x})$ that

$$\boldsymbol{b}_M^g(\boldsymbol{\mu}) = g(\boldsymbol{E}_{MN}\,\boldsymbol{\alpha}_N(\boldsymbol{\mu})), \qquad E_{MN,mn} = \zeta_n(\boldsymbol{x}_m^g). \qquad (1.61)$$

Substituting (1.57) and (1.61) into (1.59) yields

$$(\boldsymbol{A}_N - \mu_1 \boldsymbol{C}_N^1 - \mu_2 \boldsymbol{C}_N^2)\,\boldsymbol{\alpha}_N(\boldsymbol{\mu}) + \boldsymbol{Q}_{NM}\,g(\boldsymbol{E}_{MN}\,\boldsymbol{\alpha}_N(\boldsymbol{\mu})) = 0, \qquad (1.62)$$

where $\boldsymbol{Q}_{NM} = \boldsymbol{G}_{NM}\left(\boldsymbol{B}_M^g\right)^{-1}$. The system (1.62) results from the approximation of the nonlinear terms based on the first-order EIM. This step is known as hyper-reduction. Although the system (1.62) is nonlinear, it is purely algebraic in the sense that it does not contain any integrals. Since the number of unknowns is equal to the RB dimension $N$, the system can be solved efficiently by using Newton's method to linearize it. Thus, the hyper-reduction step precedes the linearization step.
We use Newton's method to linearize (1.62) at a given iterate $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$, and thus arrive at the following linear system:

$$\boldsymbol{J}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))\,\delta\boldsymbol{\alpha}_N(\boldsymbol{\mu}) = -\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})), \qquad (1.63)$$

where

$$\boldsymbol{J}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})) = (\boldsymbol{A}_N - \mu_1 \boldsymbol{C}_N^1 - \mu_2 \boldsymbol{C}_N^2) + \boldsymbol{Q}_{NM}\,\boldsymbol{I}_{MN}(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})), \qquad (1.64)$$

$$\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})) = (\boldsymbol{A}_N - \mu_1 \boldsymbol{C}_N^1 - \mu_2 \boldsymbol{C}_N^2)\,\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}) + \boldsymbol{Q}_{NM}\,g(\boldsymbol{E}_{MN}\,\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})). \qquad (1.65)$$

Here, for $1 \le m \le M$ and $1 \le n \le N$, we have

$$I_{MN,mn}(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})) = g_u\Big(\sum_{j=1}^{N} \bar{\alpha}_{N,j}(\boldsymbol{\mu})\,E_{MN,mj}\Big)\,E_{MN,mn}. \qquad (1.66)$$

Note that $g_u(\cdot) = \partial g(\cdot)/\partial u$ is the first-order derivative. We solve the linear system (1.63) for $\delta\boldsymbol{\alpha}_N(\boldsymbol{\mu})$ and update the RB vector $\boldsymbol{\alpha}_N(\boldsymbol{\mu})$. Upon convergence of the Newton iterations, we calculate the RB output as

$$s_{N,M}(\boldsymbol{\mu}) = \sum_{n=1}^{N} L_{N,n}\,\alpha_{N,n}(\boldsymbol{\mu}), \qquad L_{N,n} = \ell(\zeta_n), \quad 1 \le n \le N. \qquad (1.67)$$

Since $\boldsymbol{L}_N$, $\boldsymbol{A}_N$, $\boldsymbol{C}_N^1$, $\boldsymbol{C}_N^2$, $\boldsymbol{E}_{MN}$, and $\boldsymbol{Q}_{NM}$ are independent of $\boldsymbol{\mu}$, they can be computed in the offline stage.

The offline and online stages of the hyperreduction-then-linearization approach are summarized in Algorithm 1 and Algorithm 2, respectively. The offline stage is expensive and performed once. All the quantities computed in the offline stage are independent of $\boldsymbol{\mu}$. In the online stage, the RB output $s_{N,M}(\boldsymbol{\mu})$ is calculated for any $\boldsymbol{\mu} \in \mathcal{D}$. The computational cost of the online stage is $O(n_{\rm Newton}(MN^2 + N^3))$, where $n_{\rm Newton}$ is the number of Newton iterations. The online stage can be executed many times.

Algorithm 1 Offline stage of the hyperreduction-then-linearization approach.

Require: The parameter sample set $S_N = \{\boldsymbol{\mu}_j, 1 \le j \le N\}$.
Ensure: $\boldsymbol{L}_N, \boldsymbol{A}_N, \boldsymbol{C}_N^1, \boldsymbol{C}_N^2, \boldsymbol{E}_{MN}, \boldsymbol{Q}_{NM}$.
1: Solve the parametric FOM (1.50) for each $\boldsymbol{\mu}_j \in S_N$ to obtain $u(\boldsymbol{\mu}_j)$.
2: Construct the RB space $W_N^u = \mathrm{span}\{\zeta_j \equiv u(\boldsymbol{\mu}_j), 1 \le j \le N\}$.
3: Construct the LT-POD space $\Phi_K^g$ defined by (1.29).
4: Apply the EIP to the LT-POD space to obtain $\Psi_M^g$ and $T_M^g$.
5: Form and store $\boldsymbol{L}_N, \boldsymbol{A}_N, \boldsymbol{C}_N^1, \boldsymbol{C}_N^2, \boldsymbol{E}_{MN}, \boldsymbol{Q}_{NM}$.

Hence, as required in the many-query or real-time contexts, the online com-


plexity is independent of N , which is the dimension of the FOM. If 𝑁, 𝑀 ≪ N ,
then we expect computational savings of several orders of magnitude relative to
both the FOM and the RB approximation described earlier. It is important to
note that the computational complexity scales linearly with 𝑀. Therefore, it is
advantageous to increase 𝑀 to make the resulting ROM more accurate.

Algorithm 2 Online stage of the hyperreduction-then-linearization approach.

Require: Parameter point $\boldsymbol{\mu} \in \mathcal{D}$ and initial guess $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$.
Ensure: RB output $s_{N,M}(\boldsymbol{\mu})$ and updated coefficients $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$.
1: Form the RB vector $\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$ from (1.65).
2: Form $\boldsymbol{I}_{MN}(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$ from (1.66).
3: Form the RB matrix $\boldsymbol{J}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$ from (1.64).
4: Solve $\boldsymbol{J}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))\,\delta\boldsymbol{\alpha}_N(\boldsymbol{\mu}) = -\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$.
5: Update $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}) \leftarrow \bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}) + \delta\boldsymbol{\alpha}_N(\boldsymbol{\mu})$.
6: If $\|\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))\| \le \epsilon$, then calculate $s_{N,M}(\boldsymbol{\mu}) = \boldsymbol{L}_N^T \bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$ and stop.
7: Otherwise, go back to Step 1.
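A compact sketch of this online stage in NumPy is given below, assuming the offline matrices of Algorithm 1 are available as dense arrays (all names hypothetical). The Jacobian assembly follows (1.64) and (1.66) directly.

```python
import numpy as np

def hl_rom_online(mu, alpha, A, C1, C2, Q, E, L, g, g_u,
                  tol=1e-10, maxit=50):
    """Online stage of the HL-ROM (Algorithm 2), per (1.62)-(1.67).

    A, C1, C2 : (N, N) offline matrices; Q : (N, M); E : (M, N); L : (N,).
    alpha     : (N,) initial guess; g, g_u : nonlinearity and derivative.
    """
    Kmat = A - mu[0] * C1 - mu[1] * C2          # parameter-affine linear part
    for _ in range(maxit):
        uM = E @ alpha                           # state at interpolation points
        r = Kmat @ alpha + Q @ g(uM)             # residual (1.65)
        if np.linalg.norm(r) <= tol:
            break
        J = Kmat + Q @ (g_u(uM)[:, None] * E)    # Jacobian (1.64), (1.66)
        alpha = alpha + np.linalg.solve(J, -r)   # Newton update
    return L @ alpha, alpha                      # output (1.67), coefficients
```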

1.4.4 Linearization followed by hyper-reduction


This approach employs the first-order EIM to treat the nonlinear integrals in the linearized system (1.52)-(1.54). To approximate the nonlinear term $g_u(\bar{u}_N(\boldsymbol{\mu}))$ in the bilinear form $\bar{a}$ defined in (1.53), we first construct the following LT space:

$$W_{N^2}^{g_u} = \mathrm{span}\{G_u(\zeta_m, \zeta_n),\ 1 \le m, n \le N\}, \qquad (1.68)$$

where

$$G_u(\zeta_m, \zeta_n) = \frac{\partial g(\zeta_n)}{\partial u} + \frac{\partial^2 g(\zeta_n)}{\partial u^2}(\zeta_m - \zeta_n). \qquad (1.69)$$

Next, we compute $T_M^{g_u} = \{\boldsymbol{x}_m^{g_u}\}_{m=1}^M$ and $\Psi_M^{g_u} = \mathrm{span}\{\psi_m^{g_u}(\boldsymbol{x}), 1 \le m \le M\}$ by applying the first-order EIM to the LT space $W_{N^2}^{g_u}$. Finally, the approximation of $g_u(\bar{u}_N(\boldsymbol{\mu}))$ is given by

$$\mathcal{I}_M[g_u(\bar{u}_N(\boldsymbol{\mu}))] = \sum_{m=1}^{M} \beta_{M,m}^{g_u}(\boldsymbol{\mu})\,\psi_m^{g_u}(\boldsymbol{x}), \qquad (1.70)$$

where

$$\boldsymbol{\beta}_M^{g_u}(\boldsymbol{\mu}) = \left(\boldsymbol{B}_M^{g_u}\right)^{-1} \boldsymbol{b}_M^{g_u}(\boldsymbol{\mu}). \qquad (1.71)$$

Here $B_{M,km}^{g_u} = \psi_m^{g_u}(\boldsymbol{x}_k^{g_u})$ and $b_{M,k}^{g_u}(\boldsymbol{\mu}) = g_u(\bar{u}_N(\boldsymbol{x}_k^{g_u}, \boldsymbol{\mu}))$ for $1 \le k, m \le M$.

We replace $g_u(\bar{u}_N(\boldsymbol{\mu}))$ in (1.53) with the interpolant $\mathcal{I}_M[g_u(\bar{u}_N(\boldsymbol{\mu}))]$ to arrive at the following bilinear form:

$$\bar{a}(w, v; \boldsymbol{\mu}) = \int_\Omega \nabla w \cdot \nabla v - \int_\Omega w\,\boldsymbol{\mu} \cdot \nabla v + \sum_{m=1}^{M} \beta_{M,m}^{g_u}(\boldsymbol{\mu}) \int_\Omega \psi_m^{g_u}\,w\,v. \qquad (1.72)$$

It thus follows that the RB Jacobian matrix $\boldsymbol{J}_N(\boldsymbol{\mu}) \in \mathbb{R}^{N \times N}$ is given by

$$J_{N,nj}(\boldsymbol{\mu}) = \int_\Omega \nabla\zeta_j \cdot \nabla\zeta_n - \int_\Omega \zeta_j\,\boldsymbol{\mu} \cdot \nabla\zeta_n + \sum_{m=1}^{M} \beta_{M,m}^{g_u}(\boldsymbol{\mu}) \int_\Omega \psi_m^{g_u}\,\zeta_j\,\zeta_n, \qquad (1.73)$$

which can be written in matrix form as

$$\boldsymbol{J}_N(\boldsymbol{\mu}) = (\boldsymbol{A}_N - \mu_1 \boldsymbol{C}_N^1 - \mu_2 \boldsymbol{C}_N^2) + \sum_{m=1}^{M} \beta_{M,m}^{g_u}(\boldsymbol{\mu})\,\boldsymbol{D}_N^m. \qquad (1.74)$$

Here the matrices $\boldsymbol{D}_N^m \in \mathbb{R}^{N \times N}$ are given by

$$D_{N,nj}^m = \int_\Omega \psi_m^{g_u}\,\zeta_j\,\zeta_n, \quad 1 \le n, j \le N,\ 1 \le m \le M. \qquad (1.75)$$

The computational cost of forming $\boldsymbol{J}_N(\boldsymbol{\mu})$ in the online stage is $O(MN^2)$.
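Since the $\boldsymbol{D}_N^m$ are parameter-independent, they can be stored as a third-order tensor in the offline stage and contracted against $\boldsymbol{\beta}_M^{g_u}(\boldsymbol{\mu})$ online. A minimal sketch (the tensor layout is an assumption):

```python
import numpy as np

def lh_rom_jacobian(mu, beta, A, C1, C2, D):
    """Assemble the LH-ROM Jacobian (1.74) in O(M N^2) operations.

    beta : (M,) interpolation coefficients beta^{g_u}_M(mu) from (1.71),
    D    : (M, N, N) precomputed tensors D^m_N of (1.75).
    """
    J = A - mu[0] * C1 - mu[1] * C2               # parameter-affine part
    J = J + np.einsum("m,mnj->nj", beta, D)       # sum_m beta_m D^m_N
    return J
```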


We first consider the exact calculation of the RB residual vector $\boldsymbol{r}_N(\boldsymbol{\mu})$ by evaluating $\bar{r}(\zeta_n; \boldsymbol{\mu})$, $1 \le n \le N$, from (1.54). In this case, the RB Jacobian matrix is approximate, whereas the RB residual vector is exact. Hence, the model reduction process is equivalent to applying a quasi-Newton method to the Galerkin projection of the nonlinear FOM. Provided that the quasi-Newton iterations converge, the resulting ROM would be the same as the RB approximation described in Section 1.4.2. Although the resulting ROM is more efficient than the RB approximation, its online complexity remains dependent on $\mathcal{N}$ due to the exact calculation of the RB residual vector.
To recover online $\mathcal{N}$-independence for the linearization-followed-by-hyper-reduction approach, we replace the nonlinear function in the residual (1.54) with its interpolant counterpart to obtain

$$\bar{r}(v; \boldsymbol{\mu}) = \int_\Omega \nabla\bar{u}_N \cdot \nabla v - \int_\Omega \bar{u}_N\,\boldsymbol{\mu} \cdot \nabla v + \sum_{k=1}^{K} \beta_{K,k}^g(\boldsymbol{\mu}) \int_\Omega \psi_k^g\,v, \qquad (1.76)$$

where

$$\boldsymbol{\beta}_K^g(\boldsymbol{\mu}) = \left(\boldsymbol{B}_K^g\right)^{-1} \boldsymbol{b}_K^g(\boldsymbol{\mu}). \qquad (1.77)$$

Here $B_{K,km}^g = \psi_m^g(\boldsymbol{x}_k^g)$ and $b_{K,k}^g(\boldsymbol{\mu}) = g(\bar{u}_N(\boldsymbol{x}_k^g, \boldsymbol{\mu}))$ for $1 \le k, m \le K$. Note that we use $K$ interpolation points to construct the interpolant of the nonlinear function $g(\bar{u}_N(\boldsymbol{\mu}))$ in (1.54). It thus follows that the RB residual vector $\boldsymbol{r}_N(\boldsymbol{\mu}) \in \mathbb{R}^N$ is given by

$$\boldsymbol{r}_N(\boldsymbol{\mu}) = (\boldsymbol{A}_N - \mu_1 \boldsymbol{C}_N^1 - \mu_2 \boldsymbol{C}_N^2)\,\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}) + \boldsymbol{Q}_{NK}\,g(\boldsymbol{E}_{KN}\,\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})), \qquad (1.78)$$

where, for $1 \le n \le N$ and $1 \le k \le K$, we have

$$\boldsymbol{Q}_{NK} = \boldsymbol{G}_{NK}\left(\boldsymbol{B}_K^g\right)^{-1}, \qquad E_{KN,kn} = \zeta_n(\boldsymbol{x}_k^g), \qquad G_{NK,nk} = \int_\Omega \psi_k^g\,\zeta_n. \qquad (1.79)$$

In the online stage, we form the RB residual vector at a computational cost of $O(NK)$. Hence, the cost of forming the RB residual vector is significantly lower than that of forming the RB Jacobian matrix. This allows us to use $K$ interpolation points to maximize the accuracy of the resulting ROM without increasing the online complexity.

Algorithm 3 Offline stage of the linearization-then-hyperreduction approach.

Require: The parameter sample set $S_N = \{\boldsymbol{\mu}_j, 1 \le j \le N\}$.
Ensure: $\boldsymbol{L}_N, \boldsymbol{A}_N, \boldsymbol{C}_N^1, \boldsymbol{C}_N^2, \boldsymbol{E}_{KN}, \boldsymbol{Q}_{NK}, \boldsymbol{D}_N^m, 1 \le m \le M$.
1: Solve the parametric FOM (1.50) for each $\boldsymbol{\mu}_j \in S_N$ to obtain $u(\boldsymbol{\mu}_j)$.
2: Construct the RB space $W_N^u = \mathrm{span}\{\zeta_j \equiv u(\boldsymbol{\mu}_j), 1 \le j \le N\}$.
3: Apply the first-order EIP to the LT space $W_{N^2}^g$ to obtain $\Psi_K^g$ and $T_K^g$.
4: Apply the first-order EIP to the LT space $W_{N^2}^{g_u}$ to obtain $\Psi_M^{g_u}$ and $T_M^{g_u}$.
5: Form $\boldsymbol{L}_N, \boldsymbol{A}_N, \boldsymbol{C}_N^1, \boldsymbol{C}_N^2, \boldsymbol{E}_{KN}, \boldsymbol{Q}_{NK}, \boldsymbol{D}_N^m, 1 \le m \le M$.

Algorithm 4 Online stage of the linearization-then-hyperreduction approach.

Require: Parameter point $\boldsymbol{\mu} \in \mathcal{D}$ and initial guess $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$.
Ensure: RB output $s_{N,M}(\boldsymbol{\mu})$ and updated coefficients $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$.
1: Form the RB vector $\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$ from (1.78).
2: Compute $\boldsymbol{\beta}_M^{g_u}(\boldsymbol{\mu})$ from (1.71).
3: Form the RB matrix $\boldsymbol{J}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$ from (1.74).
4: Solve $\boldsymbol{J}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))\,\delta\boldsymbol{\alpha}_N(\boldsymbol{\mu}) = -\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))$.
5: Update $\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}) \leftarrow \bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}) + \delta\boldsymbol{\alpha}_N(\boldsymbol{\mu})$.
6: If $\|\boldsymbol{r}_N(\bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu}))\| \le \epsilon$, then calculate $s_{N,M}(\boldsymbol{\mu}) = \boldsymbol{L}_N^T \bar{\boldsymbol{\alpha}}_N(\boldsymbol{\mu})$ and stop.
7: Otherwise, go back to Step 1.

The offline and online stages of the linearization-then-hyperreduction approach are summarized in Algorithm 3 and Algorithm 4, respectively. In addition to approximating the nonlinear function $g(u_N(\boldsymbol{x}, \boldsymbol{\mu}))$, the approach also approximates its derivative $g_u(u_N(\boldsymbol{x}, \boldsymbol{\mu}))$. It thus differs from the hyperreduction-then-linearization approach, which approximates only the nonlinear function $g(u_N(\boldsymbol{x}, \boldsymbol{\mu}))$. The computational cost of the online stage is $O(n_{\text{Newton}}(MN^2 + N^3))$, which is the same as that of the hyperreduction-then-linearization approach. Although the two approaches have the same online cost, the linearization-then-hyperreduction approach is more accurate: because the RB residual vector is approximated with a larger number of interpolation points and basis functions, it can be as accurate as the RB approximation described in Section 1.4.2, while being many times faster owing to its $\mathcal{N}$-independent online complexity.
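As a concrete illustration of Algorithms 3 and 4, the following sketch assembles the online Newton iteration around the residual evaluation shown earlier. The Jacobian assembly is only schematic: we assume (1.74) combines the parameter-dependent linear part with the precomputed matrices $\boldsymbol{D}^m_N$ weighted by the interpolation coefficients $\boldsymbol{\beta}^{g_u}_M(\boldsymbol{\mu})$ of (1.71); the arrays `B_Mu` and `E_MN`, standing in for the first-order EIP data of $g_u$, are hypothetical placeholders.

```python
import numpy as np

def online_stage(mu, alpha0, data, g, g_u, tol=1e-10, max_iter=50):
    """Online stage of the LH-ROM (Algorithm 4), sketched.

    `data` collects the offline quantities of Algorithm 3; the key names
    are illustrative stand-ins for the chapter's notation.
    """
    alpha = alpha0.copy()
    A_mu = data["A_N"] - mu[0] * data["C1_N"] - mu[1] * data["C2_N"]
    for _ in range(max_iter):
        # Step 1: RB residual (1.78).
        r = A_mu @ alpha + data["Q_NK"] @ g(data["E_KN"] @ alpha)
        # Step 6: convergence check.
        if np.linalg.norm(r) <= tol:
            break
        # Step 2: interpolation coefficients of g_u (assumed form of (1.71)).
        beta = np.linalg.solve(data["B_Mu"], g_u(data["E_MN"] @ alpha))
        # Step 3: RB Jacobian (assumed form of (1.74)).
        J = A_mu + sum(b * D for b, D in zip(beta, data["D_N"]))
        # Steps 4-5: Newton update.
        alpha = alpha + np.linalg.solve(J, -r)
    s = data["L_N"] @ alpha  # RB output s_{N,M}(mu) = L_N^T alpha
    return s, alpha
```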

1.4.5 Numerical results


We present numerical results for the model problem in Section 1.4.1. To assess the accuracy of the output approximation, we define the following output errors
$$
\epsilon^s_N(\boldsymbol{\mu}) = |s(\boldsymbol{\mu}) - s_N(\boldsymbol{\mu})|, \qquad \epsilon^s_{N,M}(\boldsymbol{\mu}) = |s(\boldsymbol{\mu}) - s_{N,M}(\boldsymbol{\mu})|
$$

for the standard RB approximation and the hyper-reduced ROMs, respectively. Similarly, we introduce the following errors to assess the solution approximation
$$
\epsilon^u_N(\boldsymbol{\mu}) = \|u(\boldsymbol{\mu}) - u_N(\boldsymbol{\mu})\|_X, \qquad \epsilon^u_{N,M}(\boldsymbol{\mu}) = \|u(\boldsymbol{\mu}) - u_{N,M}(\boldsymbol{\mu})\|_X.
$$
In general, we expect $\epsilon^s_{N,M}(\boldsymbol{\mu}) \ge \epsilon^s_N(\boldsymbol{\mu})$ and $\epsilon^u_{N,M}(\boldsymbol{\mu}) \ge \epsilon^u_N(\boldsymbol{\mu})$. The effectivities, defined as
$$
\eta^s_{N,M}(\boldsymbol{\mu}) = \frac{\epsilon^s_{N,M}(\boldsymbol{\mu})}{\epsilon^s_N(\boldsymbol{\mu})}, \qquad \eta^u_{N,M}(\boldsymbol{\mu}) = \frac{\epsilon^u_{N,M}(\boldsymbol{\mu})}{\epsilon^u_N(\boldsymbol{\mu})},
$$
measure the accuracy of the hyper-reduced ROMs relative to the standard RB approximation. If the effectivities are close to unity, the hyper-reduced ROMs can be considered as accurate as the standard RB approximation. Next, we introduce a uniform grid of size $N_{\text{Test}} = 30 \times 30$ as a parameter test sample $S^g_{\text{Test}}$, and define
$$
\bar{\epsilon}^s_{N,M} = \frac{1}{N_{\text{Test}}} \sum_{\boldsymbol{\mu} \in S^g_{\text{Test}}} \epsilon^s_{N,M}(\boldsymbol{\mu}), \qquad \bar{\epsilon}^u_{N,M} = \frac{1}{N_{\text{Test}}} \sum_{\boldsymbol{\mu} \in S^g_{\text{Test}}} \epsilon^u_{N,M}(\boldsymbol{\mu}), \tag{1.80}
$$
and
$$
\bar{\eta}^s_{N,M} = \frac{1}{N_{\text{Test}}} \sum_{\boldsymbol{\mu} \in S^g_{\text{Test}}} \eta^s_{N,M}(\boldsymbol{\mu}), \qquad \bar{\eta}^u_{N,M} = \frac{1}{N_{\text{Test}}} \sum_{\boldsymbol{\mu} \in S^g_{\text{Test}}} \eta^u_{N,M}(\boldsymbol{\mu}). \tag{1.81}
$$

The quantities $\bar{\epsilon}^s_N$ and $\bar{\epsilon}^u_N$ are defined analogously via $\epsilon^s_N(\boldsymbol{\mu})$ and $\epsilon^u_N(\boldsymbol{\mu})$, respectively. In what follows, HL-ROM refers to the hyper-reduction followed by linearization approach, whereas LH-ROM refers to the linearization followed by hyper-reduction approach.
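A short sketch of how the averaged output quantities in (1.80) and (1.81) could be tabulated over the test grid is given below; `s_truth`, `s_rb`, and `s_rom` are hypothetical callables returning $s(\boldsymbol{\mu})$, $s_N(\boldsymbol{\mu})$, and $s_{N,M}(\boldsymbol{\mu})$, respectively.

```python
import numpy as np

def average_output_metrics(test_grid, s_truth, s_rb, s_rom):
    """Average output error (1.80) and effectivity (1.81) over a test grid."""
    err_NM, eff = [], []
    for mu in test_grid:
        s = s_truth(mu)
        e_N = abs(s - s_rb(mu))    # eps^s_N(mu)
        e_NM = abs(s - s_rom(mu))  # eps^s_{N,M}(mu)
        err_NM.append(e_NM)
        eff.append(e_NM / e_N)     # eta^s_{N,M}(mu)
    return np.mean(err_NM), np.mean(eff)
```

The solution-error counterparts follow the same pattern with the $X$-norm in place of the absolute value.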

FIGURE 1.4 Convergence of the average error in the solution (a) and output (b) for the standard
RB approximation and the HL-ROM. Since the LH-ROM results are almost identical to those of the
standard RB approximation, they are not shown.

We present in Figure 1.4 the average errors of the solution and the output as a function of $N$ for the RB approximation and the HL-ROM. As $M$ increases, $\bar{\epsilon}^s_{N,M}$ (respectively, $\bar{\epsilon}^u_{N,M}$) converges to $\bar{\epsilon}^s_N$ (respectively, $\bar{\epsilon}^u_N$). The HL-ROM with $M = 3N$ is almost as accurate as the RB approximation, while the LH-ROM with $M = 2N$ is almost identical to the RB approximation. We display in Table 1.3 $\bar{\eta}^s_{N,M}$ and $\bar{\eta}^u_{N,M}$ as a function of $N$. We observe that the average effectivities decrease toward unity as $M$ increases. Furthermore, the LH-ROM yields smaller effectivities than the HL-ROM. Since the effectivities of the LH-ROM are very close to unity, the LH-ROM is almost indistinguishable from the RB approximation.

       |      M = N      |      M = 2N     |      M = 3N     |  M = 2N (LH-ROM)
  N    |  η̄^u     η̄^s   |  η̄^u     η̄^s   |  η̄^u     η̄^s   |  η̄^u     η̄^s
  16   |  1.84   11.95   |  1.66    4.32   |  1.43    2.2    |  1.00    1.02
  25   |  4.46   20.93   |  1.44    7.21   |  1.16    3.45   |  1.02    1.03
  36   |  4.97   20.31   |  1.94   14.32   |  1.07    7.10   |  1.05    1.03
  49   | 10.05   31.42   |  3.49   10.42   |  1.45    3.49   |  1.01    1.10
  64   | 10.87   27.82   |  2.60   20.32   |  1.44    6.76   |  1.00    1.11
  81   | 20.55   25.60   |  3.06   20.19   |  1.20    6.37   |  1.02    1.03

TABLE 1.3 Average effectivities $\bar{\eta}^u_{N,M}$ and $\bar{\eta}^s_{N,M}$ for the hyper-reduced ROMs. The first three column pairs correspond to the HL-ROM; the last column pair corresponds to the LH-ROM.

We present in Table 1.4 the online computational times to calculate $s_N(\boldsymbol{\mu})$ and $s_{N,M}(\boldsymbol{\mu})$ as a function of $N$. The values are normalized with respect to the computational time of the truth approximation output $s(\boldsymbol{\mu})$. The computational saving is significant: for a relative output accuracy of about $10^{-6}$ ($N = 49$, $M = 98$), the reduction in online cost exceeds a factor of 1000; this is mainly because the matrix assembly of the nonlinear terms for the truth approximation is computationally very expensive. The standard RB approximation has computational times similar to those of the truth FE approximation, and is between 100 and 1000 times slower than the hyper-reduced ROMs.

       |  FEM   |   RBA    |  HL-ROM   |  HL-ROM   |  HL-ROM   |  LH-ROM
  N    |  s(μ)  |  s_N(μ)  |  M = N    |  M = 2N   |  M = 3N   |  M = 2N
  16   |   1    | 5.18e-1  |  2.50e-4  |  2.80e-4  |  3.12e-4  |  2.77e-4
  25   |   1    | 5.48e-1  |  3.99e-4  |  4.58e-4  |  5.28e-4  |  4.16e-4
  36   |   1    | 5.95e-1  |  5.66e-4  |  6.32e-4  |  7.17e-4  |  5.80e-4
  49   |   1    | 6.33e-1  |  8.29e-4  |  1.00e-3  |  1.23e-3  |  9.23e-4
  64   |   1    | 6.95e-1  |  1.46e-3  |  1.65e-3  |  2.39e-3  |  1.63e-3
  81   |   1    | 7.84e-1  |  1.61e-3  |  1.90e-3  |  2.27e-3  |  1.91e-3

TABLE 1.4 Online computational times (normalized with respect to the time to compute $s(\boldsymbol{\mu})$ using the FE method) for the standard RB approximation and the hyper-reduced ROMs.

1.5 CONCLUDING REMARKS

This chapter focuses on hyper-reduction methods for constructing efficient and


accurate approximations of nonlinear operators. Additionally, it explores two
distinct approaches aimed at generating efficient ROMs through hyper-reduction
methods. The linearization-then-hyperreduction approach typically outperforms
the hyperreduction-then-linearization approach. Its superior accuracy and effi-
ciency stem from the flexibility to employ a broader spectrum of interpolation
points and basis functions, enabling a very accurate approximation of the resid-
ual operator. Consequently, this approach showcases the potential to achieve
accuracy levels akin to projection-based ROMs without hyper-reduction. This
chapter has not addressed a number of emerging topics in model reduction.
Indeed, the development of model reduction methods is a very active area of
research filled with open questions concerning the stability, accuracy, efficiency,
and robustness of the reduced models. These factors are fundamental in deter-
mining the practical applicability and reliability of these models across diverse
fields and applications.
Model reduction methods might yield unstable or incorrect solutions if the
reduced model does not inherit the full model’s geometric structures. These
structures encompass conservation laws, symmetries, coercivity, symplecticity,
reversibility, and invariants of motion, transcending specific coordinate repre-
sentations. Recognizing the importance of maintaining the geometric structures,
structure-preserving model reduction methods [1, 14, 20, 18, 21, 41, 51, 65, 88,
87, 101, 103] aim to construct reduced models that accurately retain certain
geometric structures from the full model. This insistence on preserving certain
geometric structures not only ensures stability within the reduced models, but
also often leads to enhanced accuracy. Although hyper-reduction methods have led to the successful construction of inexpensive low-dimensional models, little work has been done on retaining specific structures of the nonlinear operators in the hyper-reduction step. Peng and Mohseni proposed in [88] a symplectic discrete empirical interpolation method (SDEIM) that applies DEIM to the
nonlinear Hamiltonian gradient. Chaturantabut et al. proposed in [21] a vari-
ation of the DEIM that preserves the Hamiltonian structure by approximating
the nonlinear Hamiltonian velocity field in the space where the DEIM projec-
tion is orthogonal. The energy-conserving sampling and weighting (ECSW)
scheme guarantees exact preservation of the gradient structure by mapping the
nonlinear terms into the RB space via structure-preserving projection and then
approximating the resulting reduced terms [38].
Traditional model reduction techniques rely on global approximations of the
solution space. However, when faced with complex problems, several significant
challenges emerge: (1) the Kolmogorov 𝑛-width decays so slowly that there
might not be a low-dimensional representation of the solution manifold; (2) it is
computationally expensive to generate enough snapshots for high-dimensional
parameter domains; (3) the reduced models should be robust enough to handle
solution behaviors not encountered during the offline stage. In response, adaptive
model reduction has emerged as a solution to the limitations posed by global
models relying on linear subspace approximations. The offline adaptivity approach partitions the parameter domain and constructs an individualized ROM for each partition [32, 47, 49, 69]. This adaptive partitioning enables reduced spaces of
lower dimension compared to global strategies and is particularly effective for
problems exhibiting vastly different behaviors in distinct regions of the parameter
domain. The main limitation of the offline adaptivity strategy lies in its a priori nature: the construction of the different reduced-order models is completed during the offline phase. Online adaptivity techniques have been developed
to overcome this limitation by updating the RB space during the online phase
according to various criteria associated with changes of the system dynamics
in parameters and time [52, 85, 86, 102, 119]. The multiscale RB method [11, 73] and the static condensation RB element method [35, 59, 106] have been developed to handle problems in which the coefficients of the differential operators are characterized by a large number of independent parameters. Boyaval et al. [12, 13] developed a reduced basis approach for solving variational problems with stochastic parameters and for reducing the variance in Monte Carlo simulations of stochastic differential equations [110, 111].
The application of model reduction methods to convection-dominated problems and wave propagation problems might lead to poor approximations because traveling waves, moving shocks, sharp gradients, and discontinuities exhibit slowly decaying Kolmogorov $n$-widths and do not admit low-dimensional representations. A family of model reduction methods has been developed to deal
with transport phenomena efficiently by recasting the problem in a coordinate
frame where it is more amenable to low-dimensional approximation. A common
approach is to construct the RB approximation of the form
$$
u_N(\boldsymbol{x}, \boldsymbol{\mu}) = \sum_{n=1}^{N} \alpha_{N,n}(\boldsymbol{\mu})\, \zeta_n(\boldsymbol{\phi}(\boldsymbol{x}, \boldsymbol{\mu})), \tag{1.82}
$$

where 𝝓(·, 𝝁) : Ω → Ω represents a suitable change of coordinates of the


spatial domain Ω. The key idea is to find a parameter-dependent map 𝝓(·, 𝝁)
that transforms the reference coordinates 𝒙 to make the snapshots smooth and
regular in the new coordinates $\boldsymbol{y} = \boldsymbol{\phi}(\boldsymbol{x}, \boldsymbol{\mu})$. Several methods have been proposed to construct such a map. Rowley and Marsden proposed a POD-Galerkin
method in a shifted frame of reference determined by using template fitting and
reconstruction [96]. This method was extended to decompose the full model
solution into a group component 𝝓(𝒙) and a frozen solution 𝑣 = 𝑢(𝝓) [9, 82, 95].
The frozen solution is kept as time-invariant as possible by introducing a set of
algebraic constraints called phase conditions to determine the group component.
The optimal transport methods compute the mapping by minimizing the Wasserstein distance derived from the Monge–Kantorovich formulation [60] or by solving the Monge–Ampère equation [55, 81]. Shifted POD [92], transport reversal [93], and
transport snapshot [71] methods introduce time-dependent shifts of the snapshot


matrix based on the dominant transport velocities of the problem. In [2, 16],
the map is determined by minimizing the fully discrete equation residual in the
spirit of shock-fitting methods [117, 118]. The registration method constructs the mapping from a set of snapshots with the aim of obtaining low-dimensional representations of the mapped solution manifold [107]. The method is based on a
nonlinear non-convex minimization of the difference between a reference state
and the mapped snapshots at the training parameters.
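To illustrate the structure of the mapped approximation (1.82), the sketch below evaluates a one-dimensional RB expansion in shifted coordinates with the purely illustrative map $\phi(x, \mu) = x - c\mu$; the constant-speed shift stands in for the problem-specific maps produced by the methods cited above.

```python
def mapped_rb_eval(x, mu, alphas, zetas, c=1.0):
    """Evaluate u_N(x, mu) = sum_n alpha_{N,n}(mu) * zeta_n(phi(x, mu))
    for the illustrative 1D shift map phi(x, mu) = x - c * mu."""
    y = x - c * mu  # mapped coordinate y = phi(x, mu)
    return sum(a * zeta(y) for a, zeta in zip(alphas, zetas))
```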
With advancements in data analytics, machine learning, and artificial in-
telligence, data-driven ROMs have received considerable attention in recent
years. Data-driven ROMs generate reduced models without accessing the mathematical operators of the underlying FOM. They represent an alternative to
projection-based ROMs in which the underlying FOM is projected onto a RB
space of a much smaller dimension by means of Galerkin or Petrov-Galerkin
projection. Both data-driven and projection-based methods start with a set of
full-order solutions, which are compressed to construct a low-dimensional repre-
sentation of the solution manifold by using POD. Their difference lies in how the
expansion coefficients are computed. Data-driven methods infer the expansion
coefficients by using a regression model such as radial basis function regression
[6, 27, 55, 112, 115], Gaussian process regression [44, 76, 83], and artificial
neural networks [50, 40, 89]. They are non-intrusive in the sense that FOM
can be considered as a black box that produces the snapshots upon which a
data-driven ROM is built. Data-driven ROMs are easier to implement since
one can make use of any available simulation codes for data generation. How-
ever, data-driven ROMs usually lack conservation of physical principles and rigorous error certification. Another major limitation is the large number of snapshots needed for accurate approximations, which can be prohibitively
expensive for high-dimensional complex systems.
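As a minimal illustration of this non-intrusive workflow, the sketch below compresses a snapshot matrix with POD (via the SVD) and fits a Gaussian RBF regression from parameters to expansion coefficients; the data shapes, kernel choice, and kernel width are illustrative assumptions, and no regularization is applied.

```python
import numpy as np

def build_data_driven_rom(S, P, n_modes=10, width=1.0):
    """Non-intrusive ROM sketch: POD compression + RBF coefficient regression.

    S : (n_dof, n_snap) snapshot matrix produced by a black-box FOM
    P : (n_snap, n_param) training parameter points
    """
    # POD basis from the SVD of the snapshot matrix.
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    V = U[:, :n_modes]                      # POD basis (n_dof, n_modes)
    A = V.T @ S                             # training coefficients

    # Gaussian RBF interpolation of the map mu -> alpha(mu).
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * width**2))      # kernel matrix (n_snap, n_snap)
    W = np.linalg.solve(K, A.T)             # regression weights

    def predict(mu):
        k = np.exp(-((P - mu) ** 2).sum(-1) / (2.0 * width**2))
        return V @ (W.T @ k)                # reconstructed u_N(mu)

    return predict
```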
A posteriori error estimation offers a unique advantage by instilling confidence
in the outputs of the reduced model and allowing practitioners to determine the
appropriate size of the reduced model based on specific accuracy requirements.
Rigorous a posteriori error estimation has been extensively developed for linear and weakly nonlinear PDEs [25, 26, 28, 33, 57, 58, 59, 61, 63, 64, 72, 77, 78, 80, 97, 98, 104, 108, 109]. However, a posteriori error estimation for highly nonlinear
PDEs poses considerable challenges. In such cases, the analysis frequently
produces error estimates that are overly conservative or, in some instances, proves
impractical due to limitations in existing analytical tools. While efforts have
been made to advance in this direction, addressing these complexities requires
delving deeper into the fundamental principles underlying these methods. The
quest for a robust error estimation framework for time-dependent or nonlinear problems remains an ongoing research effort. Although some progress has been made,
substantial gaps persist and demand new ideas to establish more practical and
reliable error bounds.
REFERENCES

[1] Babak Maboudi Afkham and Jan S. Hesthaven. Structure preserving model reduction of
parametric Hamiltonian systems. SIAM Journal on Scientific Computing, 39(6):A2616–
A2644, 2017.
[2] Marzieh Alireza Mirhoseini and Matthew J. Zahr. Model reduction of convection-dominated
partial differential equations via optimization-based implicit feature tracking. Journal of
Computational Physics, 473, 2023.
[3] Steven S. An, Theodore Kim, and Doug L. James. Optimizing cubature for efficient integration
of subspace deformations. ACM Transactions on Graphics, 27(5):165, 2008.
[4] J. P. Argaud, B. Bouriquet, H. Gong, Y. Maday, and O. Mula. Stabilization of (G)EIM in
Presence of Measurement Noise: Application to Nuclear Reactor Physics. In Lecture Notes
in Computational Science and Engineering, volume 119, pages 133–145, 2017.
[5] Patricia Astrid, Siep Weiland, Karen Willcox, and Ton Backx. Missing point estimation in
models described by proper orthogonal decomposition. IEEE Transactions on Automatic
Control, 53(10):2237–2251, 2008.
[6] C. Audouze, F. de Vuyst, and P. B. Nair. Reduced-order modeling of parameterized PDEs using
time-space-parameter principal component analysis. International Journal for Numerical
Methods in Engineering, 80(8):1025–1057, 2009.
[7] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera. An ’empirical interpolation’ method:
application to efficient reduced-basis discretization of partial differential equations. Comptes
Rendus Mathematique, 339(9):667–672, 2004.
[8] Maxime Barrault, Yvon Maday, Ngoc Cuong Nguyen, and Anthony T. Patera. An ‘em-
pirical interpolation’ method: application to efficient reduced-basis discretization of partial
differential equations. Comptes Rendus Mathematique, 339(9):667–672, nov 2004.
[9] W. J. Beyn and V. Thümmler. Freezing solutions of equivariant evolution equations. SIAM
Journal on Applied Dynamical Systems, 3(2):85–116, 2004.
[10] Peter Binev, Albert Cohen, Wolfgang Dahmen, Ronald Devore, Guergana Petrova, and Prze-
myslaw Wojtaszczyk. Convergence rates for greedy algorithms in reduced basis methods.
SIAM Journal on Mathematical Analysis, 43(3):1457–1472, 2011.
[11] S Boyaval. Reduced-Basis Approach for Homogenization beyond the Periodic Setting. SIAM
Multiscale Modeling & Simulation, 7(1):466–494, 2008.
[12] Sébastien Boyaval, Claude Le Bris, Tony Lelièvre, Yvon Maday, Ngoc Cuong Nguyen,
and Anthony T Patera. Reduced basis techniques for stochastic problems. Archives of
Computational Methods in Engineering, 17:435–454, 2010.
[13] Sébastien Boyaval, Claude Le Bris, Yvon Maday, Ngoc Cuong Nguyen, and Anthony T. Patera.
A reduced basis approach for variational problems with stochastic parameters: Application
to heat conduction with variable Robin coefficient. Computer Methods in Applied Mechanics
and Engineering, 198(41-44):3187–3206, sep 2009.
[14] Patrick Buchfink, Ashish Bhatt, and Bernard Haasdonk. Symplectic Model Order Reduction
with Non-Orthonormal Bases. Mathematical and Computational Applications, 24(2):43,
2019.
[15] A. Buffa, Y. Maday, A. T. Patera, C. Prud’homme, and G. Turinici. A priori convergence
of the Greedy algorithm for the parametrized reduced basis method. ESAIM: Mathematical
Modelling and Numerical Analysis, 46(03):595–603, 2012.
[16] Nicolas Cagniart, Yvon Maday, and Benjamin Stamm. Model order reduction for problems
with large convection effects. In Computational Methods in Applied Sciences, volume 47,
pages 131–150. 2019.
[17] Kevin Carlberg, Charbel Bou-Mosleh, and Charbel Farhat. Efficient non-linear model reduc-
tion via a least-squares Petrov-Galerkin projection and compressive tensor approximations.
International Journal for Numerical Methods in Engineering, 86(2):155–181, 2011.


[18] Kevin Carlberg, Youngsoo Choi, and Syuzanna Sargsyan. Conservative model reduction for
finite-volume models. Journal of Computational Physics, 2018.
[19] Kevin Carlberg, Charbel Farhat, Julien Cortial, and David Amsallem. The GNAT method for
nonlinear model reduction: Effective implementation and application to computational fluid
dynamics and turbulent flows. Journal of Computational Physics, 242:623–647, 2013.
[20] Kevin Carlberg, Ray Tuminaro, and Paul Boggs. Preserving lagrangian structure in non-
linear model reduction with application to structural dynamics. SIAM Journal on Scientific
Computing, 37(2):B153–B184, 2015.
[21] S. Chaturantabut, C. Beattie, and S. Gugercin. Structure-preserving model reduction for
nonlinear port-Hamiltonian systems. SIAM Journal on Scientific Computing, 38(5):B837–
B865, 2016.
[22] Saifon Chaturantabut and Danny C. Sorensen. Nonlinear model reduction via discrete em-
pirical interpolation. SIAM Journal on Scientific Computing, 32(5):2737–2764, 2010.
[23] Saifon Chaturantabut and Danny C. Sorensen. Application of POD and DEIM on dimension
reduction of non-linear miscible viscous fingering in porous media. Mathematical and
Computer Modelling of Dynamical Systems, 17(4):337–353, 2011.
[24] Yanlai Chen, Sigal Gottlieb, Lijie Ji, and Yvon Maday. An EIM-degradation free reduced basis
method via over collocation and residual hyper reduction-based error estimation. Journal of
Computational Physics, 444:110545, 2021.
[25] Yanlai Chen, Jan S. Hesthaven, Yvon Maday, and Jerónimo Rodríguez. Improved successive constraint method based a posteriori error estimate for reduced basis approximation of 2D Maxwell's problem. ESAIM: Mathematical Modelling and Numerical Analysis, 43:1099–1116, 2009.
[26] Yanlai Chen, Jan S. Hesthaven, Yvon Maday, and Jerónimo Rodríguez. Certified Reduced
Basis Methods and Output Bounds for the Harmonic Maxwell’s Equations. SIAM Journal on
Scientific Computing, 32(2):970–996, jan 2010.
[27] David S. Ching, Patrick J. Blonigan, Francesco Rizzi, and Jeffrey A. Fike. Reduced Order
Modeling of Hypersonic Aerodynamics with Grid Tailoring. In AIAA Science and Technology
Forum and Exposition, AIAA SciTech Forum 2022, 2022.
[28] Simone Deparis. Reduced basis error bound computation of parameter-dependent Navier-
stokes equations by the natural norm approach. SIAM Journal on Numerical Analysis,
46(4):2039–2067, 2008.
[29] Ronald DeVore, Simon Foucart, Guergana Petrova, and Przemyslaw Wojtaszczyk. Computing
a Quantity of Interest from Observational Data. Constructive Approximation, 49(3):461–508,
2019.
[30] Ronald DeVore, Guergana Petrova, and Przemyslaw Wojtaszczyk. Greedy Algorithms for
Reduced Bases in Banach Spaces. Constructive Approximation, 37(3):455–466, 2013.
[31] Martin Drohmann, Bernard Haasdonk, and Mario Ohlberger. Reduced basis approximation
for nonlinear parametrized evolution equations based on empirical operator interpolation.
SIAM Journal on Scientific Computing, 34(2):A937–A969, 2012.
[32] J Eftang, A Patera, and E Ronquist. An "hp" Certified Reduced Basis Method for Parametrized
Elliptic Partial Differential Equations. SIAM Journal on Scientific Computing, 32(6):3170–
3200, 2010.
[33] J. L. Eftang, D. B.P. Huynh, D. J. Knezevic, and A. T. Patera. A two-step certified reduced
basis method. Journal of Scientific Computing, 51(1):28–58, 2012.
[34] Jens L. Eftang, Martin A. Grepl, and Anthony T. Patera. A posteriori error bounds for the
empirical interpolation method. Comptes Rendus Mathematique, 348(9-10):575–579, 2010.
[35] Jens L. Eftang and Anthony T. Patera. Port reduction in parametrized component static
condensation: Approximation and a posteriori error estimation. International Journal for
Numerical Methods in Engineering, 96(5):269–302, 2013.
[36] R. Everson and L. Sirovich. Karhunen-Loeve procedure for gappy data. Journal of the Optical Society of America A, 12(8):1657–1664, 1995.
[37] Charbel Farhat, Philip Avery, Todd Chapman, and Julien Cortial. Dimensional reduction
of nonlinear finite element dynamic models with finite rotations and energy-based mesh
sampling and weighting for computational efficiency. International Journal for Numerical
Methods in Engineering, 98(9):625–662, 2014.
[38] Charbel Farhat, Todd Chapman, and Philip Avery. Structure-preserving, stability, and ac-
curacy properties of the energy-conserving sampling and weighting method for the hyper
reduction of nonlinear finite element dynamic models. International Journal for Numerical
Methods in Engineering, 102(5):1077–1110, 2015.
[39] D. Galbally, K. Fidkowski, K. Willcox, and O. Ghattas. Non-linear model reduction for un-
certainty quantification in large-scale inverse problems. International Journal for Numerical
Methods in Engineering, 81(12):1581–1608, 2010.
[40] Zhen Gao, Qi Liu, Jan S. Hesthaven, Bao Shan Wang, Wai Sun Don, and Xiao Wen.
Non-intrusive reduced order modeling of convection dominated flows using artificial neural
networks with application to Rayleigh-Taylor instability. Communications in Computational
Physics, 30(1):97–123, 2021.
[41] Yuezheng Gong, Qi Wang, and Zhu Wang. Structure-preserving Galerkin POD reduced-
order modeling of Hamiltonian systems. Computer Methods in Applied Mechanics and
Engineering, 315:780–798, 2017.
[42] Martin A. Grepl, Yvon Maday, Ngoc C. Nguyen, and Anthony T. Patera. Efficient reduced-
basis treatment of nonaffine and nonlinear partial differential equations. Mathematical Mod-
elling and Numerical Analysis, 41(3):575–605, aug 2007.
[43] Martin A. Grepl and Anthony T. Patera. A posteriori error bounds for reduced-basis approx-
imations of parametrized parabolic partial differential equations. Mathematical Modelling
and Numerical Analysis, 39(1):157–181, 2005.
[44] Mengwu Guo and Jan S. Hesthaven. Reduced order modeling for nonlinear structural analysis
using Gaussian process regression. Computer Methods in Applied Mechanics and Engineer-
ing, 341:807–826, 2018.
[45] B Haasdonk and M Ohlberger. Reduced basis method for finite volume approximations of
parametrized linear evolution equations. Mathematical Modelling and Numerical Analysis
(M2AN), 42(3):277–302, 2008.
[46] Bernard Haasdonk. Convergence rates of the POD-greedy method. Mathematical Modelling and Numerical Analysis, 47(3):859–873, 2013.
[47] Bernard Haasdonk, Markus Dihlmann, and Mario Ohlberger. A training set and multiple bases
generation approach for parameterized model reduction based on adaptive grids in parameter
space. Mathematical and Computer Modelling of Dynamical Systems, 17(4):423–442, 2011.
[48] J. A. Hernández, M. A. Caicedo, and A. Ferrer. Dimensional hyper-reduction of nonlinear
finite element models via empirical cubature. Computer Methods in Applied Mechanics and
Engineering, 313:687–722, 2017.
[49] Martin Hess, Alessandro Alla, Annalisa Quaini, Gianluigi Rozza, and Max Gunzburger. A
localized reduced-order modeling approach for PDEs with bifurcating solutions. Computer
Methods in Applied Mechanics and Engineering, 351:379–403, 2019.
[50] J. S. Hesthaven and S. Ubbiali. Non-intrusive reduced order modeling of nonlinear problems
using neural networks. Journal of Computational Physics, 363:55–78, 2018.
[51] Jan S. Hesthaven and Cecilia Pagliantini. Structure-preserving reduced basis methods for
Poisson systems. Mathematics of Computation, 90(330):1701–1740, 2021.
[52] Jan S. Hesthaven, Cecilia Pagliantini, and Nicolo Ripamonti. Rank-adaptive structure-
preserving model order reduction of Hamiltonian systems. ESAIM: Mathematical Modelling
and Numerical Analysis, 56(2):617–650, 2022.
[53] Jan S. Hesthaven, Cecilia Pagliantini, and Gianluigi Rozza. Reduced basis methods for
time-dependent problems. Acta Numerica, 31:265–345, 2022.
[54] Jan S. Hesthaven, Benjamin Stamm, and Shun Zhang. Efficient greedy algorithms for high-
dimensional parameter spaces with applications to empirical interpolation and reduced basis
methods. ESAIM: Mathematical Modelling and Numerical Analysis, 48(1):259–283, 2014.
[55] R Loek Van Heyningen, Ngoc Cuong Nguyen, Patrick Blonigan, and Jaime Peraire. Adaptive
model reduction of high-order solutions of compressible flows via optimal transport, 2023.
[56] Philip Holmes, John L. Lumley, Gahl Berkooz, and Clarence W. Rowley. Turbulence, Coher-
ent Structures, Dynamical Systems and Symmetry. Cambridge University Press, 2012.
[57] D. B.P. Huynh and A. T. Patera. Reduced basis approximation and a posteriori error estimation
for stress intensity factors. International Journal for Numerical Methods in Engineering,
72(10):1219–1259, 2007.
[58] D.B.P. Huynh, D.J. Knezevic, Y. Chen, J.S. Hesthaven, and A.T. Patera. A natural-norm
Successive Constraint Method for inf-sup lower bounds. Computer Methods in Applied
Mechanics and Engineering, 199(29-32):1963–1975, jun 2010.
[59] Dinh Bao Phuong Huynh, David J. Knezevic, and Anthony T. Patera. A Static conden-
sation Reduced Basis Element method : Approximation and a posteriori error estimation.
Mathematical Modelling and Numerical Analysis, 47(1):213–251, 2013.
[60] Angelo Iollo and Damiano Lombardi. Advection modes by optimal mass transfer. Physical
Review E - Statistical, Nonlinear, and Soft Matter Physics, 89(2), 2014.
[61] Mark Kärcher, Zoi Tokoutsi, Martin A. Grepl, and Karen Veroy. Certified Reduced Basis
Methods for Parametrized Elliptic Optimal Control Problems with Distributed Controls.
Journal of Scientific Computing, 75(1):276–307, 2018.
[62] P. Kerfriden, O. Goury, T. Rabczuk, and S. P.A. Bordas. A partitioned model order reduction
approach to rationalise computational expenses in nonlinear fracture mechanics. Computer
Methods in Applied Mechanics and Engineering, 256:169–188, 2013.
[63] David J. Knezevic, Ngoc Cuong Nguyen, and Anthony T. Patera. Reduced basis approxima-
tion and a posteriori error estimation for the parametrized unsteady Boussinesq equations.
Mathematical Models and Methods in Applied Sciences, 21(7):1415–1442, 2011.
[64] David J. Knezevic and Anthony T. Patera. A certified reduced basis method for the fokker-
planck equation of dilute polymeric fluids: Fene dumbbells in extensional flow. SIAM Journal
on Scientific Computing, 32(2):793–817, 2010.
[65] Sanjay Lall, Petr Krysl, and Jerrold E. Marsden. Structure-preserving model reduction for
mechanical systems. In Physica D: Nonlinear Phenomena, volume 184, pages 304–318,
2003.
[66] Y. Maday, O. Mula, A.T. Patera, and M. Yano. The Generalized Empirical Interpolation
Method: Stability theory on Hilbert spaces with an application to the Stokes equation.
Computer Methods in Applied Mechanics and Engineering, 287:310–334, apr 2015.
[67] Yvon Maday and Olga Mula. A generalized empirical interpolation method: Application of
reduced basis techniques to data assimilation. In Springer INdAM Series, volume 4, pages
221–235. Springer, 2013.
[68] Yvon Maday, Ngoc Cuong Nguyen, Anthony T. Patera, and George S.H. Pau. A general
multipurpose interpolation procedure: The magic points. Communications on Pure and
Applied Analysis, 8(1):383–404, oct 2009.


[69] Yvon Maday and Benjamin Stamm. Locally adaptive greedy approximations for anisotropic
parameter reduced basis spaces. SIAM Journal on Scientific Computing, 35(6), 2013.
[70] Andrea Manzoni, Alfio Quarteroni, and Gianluigi Rozza. Shape optimization for viscous flows
by reduced basis methods and free-form deformation. International Journal for Numerical
Methods in Fluids, 70(5):646–670, 2012.
[71] Nirmal J. Nair and Maciej Balajewicz. Transported snapshot model order reduction approach
for parametric, steady-state fluid flows containing parameter-dependent shocks. International
Journal for Numerical Methods in Engineering, 117(12):1234–1262, 2019.
[72] N. C. Nguyen. A posteriori error estimation and basis adaptivity for reduced-basis approx-
imation of nonaffine-parametrized linear elliptic partial differential equations. Journal of
Computational Physics, 227(2):983–1006, dec 2007.
[73] N. C. Nguyen. A multiscale reduced-basis method for parametrized elliptic partial differential
equations with multiple scales. Journal of Computational Physics, 227(23):9807–9822, 2008.
[74] N. C. Nguyen, A. T. Patera, and J. Peraire. A ’best points’ interpolation method for efficient
approximation of parametrized functions. International Journal for Numerical Methods in
Engineering, 73(4):521–543, jan 2008.
[75] N. C. Nguyen and J. Peraire. An efficient reduced-order modeling approach for non-linear
parametrized partial differential equations. International Journal for Numerical Methods in
Engineering, 76(1):27–55, oct 2008.
[76] N. C. Nguyen and J. Peraire. Gaussian functional regression for output prediction: Model
assimilation and experimental design. Journal of Computational Physics, 309:52–68, mar
2016.
[77] N. C. Nguyen, G. Rozza, D. B.P. Huynh, and A. T. Patera. Reduced basis approximation
and a posteriori error estimation for parametrized parabolic pdes: Application to real-time
bayesian parameter estimation. In Biegler, Biros, Ghattas, Heinkenschloss, Keyes, Mallick,
Tenorio, van Bloemen Waanders, and Willcox, editors, Large-Scale Inverse Problems and
Quantification of Uncertainty, pages 151–177. John Wiley and Sons, UK, 2010.
[78] N. C. Nguyen, K. Veroy, and A. T. Patera. Certified Real-Time Solution of Parametrized
Partial Differential Equations. In S Yip, editor, Handbook of Materials Modeling, pages
1523–1559. Kluwer Academic Publishing, 2004.
[79] Ngoc Cuong Nguyen and Jaime Peraire. Efficient and accurate nonlinear model reduction via
first-order empirical interpolation. Journal of Computational Physics, 494:112512, 2023.
[80] Ngoc-Cuong Nguyen, Gianluigi Rozza, and Anthony T. Patera. Reduced basis approximation
and a posteriori error estimation for the time-dependent viscous Burgers’ equation. Calcolo,
46(3):157–185, jun 2009.
[81] Ngoc Cuong Nguyen, Jordi Vila-Pérez, and Jaime Peraire. An adaptive viscosity regulariza-
tion approach for the numerical solution of conservation laws: Application to finite element
methods. Journal of Computational Physics, 494:112507, 2023.
[82] Mario Ohlberger and Stephan Rave. Nonlinear reduced basis approximation of parameterized
evolution equations via the method of freezing. Comptes Rendus Mathematique, 351(23-
24):901–906, 2013.
[83] Giulio Ortali, Nicola Demo, and Gianluigi Rozza. A Gaussian Process Regression approach
within a data-driven POD framework for engineering problems in fluid dynamics. Mathe-
matics In Engineering, 4(3):1–16, 2022.
[84] Anthony T. Patera and Masayuki Yano. Une procédure de quadrature empirique par pro-
grammation linéaire pour les fonctions à paramètres. Comptes Rendus Mathematique,
355(11):1161–1167, 2017.
[85] Benjamin Peherstorfer and Karen Willcox. Dynamic data-driven reduced-order models.
Computer Methods in Applied Mechanics and Engineering, 291:21–41, 2015.
[86] Benjamin Peherstorfer and Karen Willcox. Online adaptive model reduction for nonlinear
systems via low-rank updates. SIAM Journal on Scientific Computing, 37(4):A2123–A2150,
2015.
[87] Liqian Peng and Kamran Mohseni. Geometric model reduction of forced and dissipative
Hamiltonian systems. In 2016 IEEE 55th Conference on Decision and Control, CDC 2016,
pages 7465–7470, 2016.
[88] Liqian Peng and Kamran Mohseni. Symplectic model reduction of Hamiltonian systems.
SIAM Journal on Scientific Computing, 38(1):A1–A27, 2016.
[89] Federico Pichi, Francesco Ballarin, Gianluigi Rozza, and Jan S. Hesthaven. An artificial neural
network approach to bifurcating phenomena in computational fluid dynamics. Computers
and Fluids, 254, 2023.
[90] Annika Radermacher and Stefanie Reese. POD-based model reduction with empirical in-
terpolation applied to nonlinear elasticity. International Journal for Numerical Methods in
Engineering, 107(6):477–495, 2016.
[91] S. S. Ravindran. A reduced-order approach for optimal control of fluids using proper orthog-
onal decomposition. International Journal for Numerical Methods in Fluids, 34(5):425–448,
2000.
[92] J. Reiss, P. Schulze, J. Sesterhenn, and V. Mehrmann. The shifted proper orthogonal de-
composition: A mode decomposition for multiple transport phenomena. SIAM Journal on
Scientific Computing, 40(3):A1322–A1344, 2018.
[93] Donsub Rim, Scott Moe, and Randall J. LeVeque. Transport reversal for model reduction of
hyperbolic partial differential equations. SIAM-ASA Journal on Uncertainty Quantification,
6(1):118–150, 2018.
[94] C W Rowley, T Colonius, and R M Murray. Model reduction for compressible flows using
POD and Galerkin projection. Physica D. Nonlinear Phenomena, 189(1-2):115–129, 2004.
[95] Clarence W. Rowley, Ioannis G. Kevrekidis, Jerrold E. Marsden, and Kurt Lust. Reduction
and reconstruction for self-similar dynamical systems. Nonlinearity, 16(4):1257–1275, 2003.
[96] Clarence W. Rowley and Jerrold E. Marsden. Reconstruction equations and the Karhunen-
Loève expansion for systems with symmetry. Physica D: Nonlinear Phenomena, 2000.
[97] G. Rozza, D. B. P. Huynh, and A. T. Patera. Reduced basis approximation and a posteri-
ori error estimation for affinely parametrized elliptic coercive partial differential equations:
Application to transport and continuum mechanics. Archives Computational Methods in
Engineering, 15(4):229–275, 2008.
[98] Gianluigi Rozza. Reduced-basis methods for elliptic equations in sub-domains with a poste-
riori error bounds and adaptivity. Applied Numerical Mathematics, 55(4):403–424, 2005.
[99] David Ryckelynck. A priori hyperreduction method: An adaptive approach. Journal of
Computational Physics, 202(1):346–366, 2005.
[100] Ernest K. Ryu and Stephen P. Boyd. Extensions of Gauss Quadrature Via Linear Programming.
Foundations of Computational Mathematics, 15(4):953–971, 2015.
[101] B. Sanderse. Non-linearly stable reduced-order models for incompressible flow with energy-
conserving finite volume methods. Journal of Computational Physics, 421, 2020.
[102] Themistoklis P. Sapsis and Pierre F.J. Lermusiaux. Dynamically orthogonal field equations
for continuous stochastic dynamical systems. Physica D: Nonlinear Phenomena, 238(23-
24):2347–2360, 2009.
[103] Alexander Schein, Kevin T. Carlberg, and Matthew J. Zahr. Preserving general physical
properties in model reduction of dynamical systems via constrained-optimization projection.
International Journal for Numerical Methods in Engineering, 122(14):3368–3399, 2021.


[104] S. Sen, K. Veroy, D. B. P. Huynh, S. Deparis, N. C. Nguyen, and A. T. Patera. "Natural norm" a posteriori error estimators for reduced basis approximations. Journal of Computational Physics, 217(1):37–62, 2006.
[105] L Sirovich. Turbulence and the Dynamics of Coherent Structures, Part 1: Coherent Structures.
Quarterly of Applied Mathematics, 45(3):561–571, oct 1987.
[106] Kathrin Smetana. A new certification framework for the port reduced static condensation
reduced basis element method. Computer Methods in Applied Mechanics and Engineering,
283:352–383, 2015.
[107] Tommaso Taddei. A registration method for model order reduction: Data compression and
geometry reduction. SIAM Journal on Scientific Computing, 42(2):A997–A1027, 2020.
[108] K. Veroy and A. T. Patera. Certified real-time solution of the parametrized steady incompress-
ible Navier-Stokes equations: Rigorous reduced-basis a posteriori error bounds. International
Journal for Numerical Methods in Fluids, 47(8-9):773–788, 2005.
[109] Karen Veroy, Dimitrios V. Rovas, and Anthony T. Patera. A posteriori error estimation for
reduced-basis approximation of parametrized elliptic coercive partial differential equations:
"Convex inverse" bound conditioners. ESAIM - Control, Optimisation and Calculus of
Variations, 8:1007–1028, 2002.
[110] F. Vidal-Codina, N. C. Nguyen, M. B. Giles, and J. Peraire. A model and variance reduction
method for computing statistical outputs of stochastic elliptic partial differential equations.
Journal of Computational Physics, 297:700–720, 2015.
[111] F. Vidal-Codina, N. C. Nguyen, M. B. Giles, and J. Peraire. An empirical interpolation and
model-variance reduction method for computing statistical outputs of parametrized stochastic
partial differential equations. SIAM J. Uncertainty Quantification, Submitted, 2015.
[112] S. Walton, O. Hassan, and K. Morgan. Reduced order modelling for unsteady fluid flow
using proper orthogonal decomposition and radial basis functions. Applied Mathematical
Modelling, 37(20-21):8930–8945, 2013.
[113] K. Willcox. Unsteady Flow Sensing and Estimation via the Gappy Proper Orthogonal De-
composition. Computers and Fluids, 35:208–226, 2006.
[114] K. Willcox and J. Peraire. Balanced model reduction via the proper orthogonal decomposition. AIAA Journal, 40(11):2323–2330, 2002.
[115] D. Xiao, F. Fang, C. Pain, and G. Hu. Non-intrusive reduced-order modelling of the Navier-
Stokes equations based on RBF interpolation. International Journal for Numerical Methods
in Fluids, 79(11):580–595, 2015.
[116] Masayuki Yano and Anthony T. Patera. An LP empirical quadrature procedure for reduced
basis treatment of parametrized nonlinear PDEs. Computer Methods in Applied Mechanics
and Engineering, 344:1104–1123, 2019.
[117] M. J. Zahr and P. O. Persson. An optimization-based approach for high-order accurate
discretization of conservation laws with discontinuous solutions. Journal of Computational
Physics, 365:105–134, 2018.
[118] M. J. Zahr, A. Shi, and P. O. Persson. Implicit shock tracking using an optimization-based
high-order discontinuous Galerkin method. Journal of Computational Physics, 410:109385,
2020.
[119] Ralf Zimmermann, Benjamin Peherstorfer, and Karen Willcox. Geometric subspace updates
with applications to online adaptive nonlinear model reduction. SIAM Journal on Matrix
Analysis and Applications, 39(1):234–261, 2018.
