DEEPXDE: A DEEP LEARNING LIBRARY FOR SOLVING DIFFERENTIAL EQUATIONS
LU LU∗ , XUHUI MENG∗ , ZHIPING MAO∗ , AND GEORGE EM KARNIADAKIS∗†
Abstract. Deep learning has achieved remarkable success in diverse applications; however, its
use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an
overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the
neural network using automatic differentiation. The PINN algorithm is simple, and it can be applied
to different types of PDEs, including integro-differential equations, fractional PDEs, and stochastic
PDEs. Moreover, PINNs solve inverse problems as easily as forward problems. We propose a new
residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. For
pedagogical reasons, we compare the PINN algorithm to a standard finite element method. We also
present a Python library for PINNs, DeepXDE, which is designed to serve both as an education
tool to be used in the classroom and as a research tool for solving problems in computational
science and engineering. DeepXDE supports complex-geometry domains based on the technique
of constructive solid geometry, and enables the user code to be compact, resembling closely the
mathematical formulation. We introduce the usage of DeepXDE and its customizability, and we
also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different
examples. More broadly, DeepXDE contributes to the more rapid development of the emerging
Scientific Machine Learning field.
Key words. education software, DeepXDE, differential equations, deep learning, physics-
informed neural networks, scientific machine learning
1. Introduction. In the last 15 years, deep learning in the form of deep neural
networks (NNs) has been used very effectively in diverse applications [20], such as
computer vision and natural language processing. Despite the remarkable success in
these and related areas, deep learning has not yet been widely used in the field of
scientific computing. However, more recently, solving partial differential equations
(PDEs) via deep learning has emerged as a potentially new sub-field under the name
of Scientific Machine Learning (SciML) [3].
To solve a PDE via deep learning, a key step is to constrain the neural network
to minimize the PDE residual, and several approaches have been proposed to ac-
complish this. Compared to the traditional mesh-based methods, such as the finite
difference method (FDM) and the finite element method (FEM), deep learning could
be a mesh-free approach by taking advantage of automatic differentiation [30],
and could break the curse of dimensionality [28, 12]. Among these approaches, some
can only be applied to particular types of problems, such as image-like input domains [16, 21, 39] or parabolic PDEs [4, 13]. Some researchers adopt the variational
form of PDEs and minimize the corresponding energy functional [10, 14]. However,
not all PDEs can be derived from a known functional, and thus Galerkin type pro-
jections have also been considered [22]. Alternatively, one could use the PDE in
strong form directly [9, 33, 18, 19, 5, 32, 30]; in this form, automatic differentiation
could be used directly to avoid truncation errors and the numerical quadrature errors
of variational forms. This strong form approach was introduced in [30], coining the
term physics-informed neural networks (PINNs). An attractive feature of PINNs is
that they can be used to solve inverse problems with minimal changes to the code for
forward problems [30, 31]. In addition, PINNs have been further extended to solve
integro-differential equations (IDEs), fractional differential equations (FDEs) [25], and
stochastic differential equations (SDEs) [38, 36, 24, 37].
In this paper, we present various PINN algorithms implemented in a Python
library DeepXDE¹, which is designed to serve both as an education tool to be used
in the classroom and as a research tool for solving problems in computational
science and engineering (CSE). DeepXDE can be used to solve multi-physics problems,
and supports complex-geometry domains based on the technique of constructive solid
geometry (CSG), hence avoiding tedious and time-consuming computational geometry
tasks. By using DeepXDE, time-dependent PDEs can be solved as easily as steady
states by only defining the initial conditions. In addition to the main workflow of
DeepXDE, users can readily monitor and modify the solution process via callback
functions, e.g., monitoring the Fourier spectrum of the neural network solution, which
can reveal the learning mode of the NN (Figure 2). Last but not least, DeepXDE is
designed to make the user code stay compact and manageable, resembling closely the
mathematical formulation.
The paper is organized as follows. In section 2, after briefly introducing deep
neural networks, we present the algorithm, approximation theory, and error analysis
of PINNs, and make a comparison between PINNs and FEM. We then discuss how to
use PINNs to solve integro-differential equations and inverse problems. In addition,
we propose the residual-based adaptive refinement (RAR) method to improve the
training efficiency of PINNs. In section 3, we introduce the usage of our library,
DeepXDE, and its customizability. In section 4, we demonstrate the capability of
PINNs and the user-friendliness of DeepXDE for five different examples. Finally, we conclude
the paper in section 5.
1 Source code is published under the Apache License, Version 2.0 on GitHub: https://github.com/lululxvi/deepxde
We consider the following PDE parameterized by λ for the solution u(x), with x = (x1, . . . , xd) defined on a domain Ω ⊂ R^d:

f(x; ∂u/∂x1, . . . , ∂u/∂xd; ∂²u/∂x1∂x1, . . . , ∂²u/∂x1∂xd; . . . ; λ) = 0,  x ∈ Ω,    (2.1)

with suitable boundary conditions

B(u, x) = 0 on ∂Ω.

Fig. 1. Schematic of a PINN for solving the diffusion equation ∂u/∂t = λ ∂²u/∂x² with mixed boundary
conditions (BC) u(x, t) = gD(x, t) on ΓD ⊂ ∂Ω and ∂u/∂n(x, t) = gR(u, x, t) on ΓR ⊂ ∂Ω. The initial
condition (IC) is treated as a special type of boundary condition. Tf and Tb denote the two sets of
residual points for the equation and BC/IC.
The algorithm of PINN [19, 30] is shown in Procedure 2.1, and visually in the
schematic of Figure 1, solving a diffusion equation ∂u/∂t = λ ∂²u/∂x² with mixed boundary
conditions u(x, t) = gD(x, t) on ΓD ⊂ ∂Ω and ∂u/∂n(x, t) = gR(u, x, t) on ΓR ⊂ ∂Ω. We
explain each step as follows. In a PINN, we first construct a neural network û(x; θ)
as a surrogate of the solution u(x), which takes the input x and outputs a vector with
the same dimension as u. Here, θ = {W^ℓ, b^ℓ}_{1≤ℓ≤L} is the set of all weight matrices
and bias vectors in the neural network û. One advantage of PINNs by choosing neural
networks as the surrogate of u is that we can take the derivatives of û with respect
to its input x by applying the chain rule for differentiating compositions of functions
using the automatic differentiation (AD), which is conveniently integrated in machine
learning packages, such as TensorFlow [1] and PyTorch [26].
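As a concrete illustration (a minimal sketch that is not part of DeepXDE itself), the following PyTorch code computes the first and second derivatives of a small network with respect to its input via reverse-mode AD; the network architecture and the sample points are arbitrary placeholders.

import torch

# A small fully connected network serving as the surrogate û(x; θ).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
x = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)
u = net(x)
# First derivative dû/dx; create_graph=True keeps the graph so that
# higher-order derivatives (and gradients with respect to θ) can follow.
du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
# Second derivative d²û/dx², obtained by differentiating dû/dx once more.
d2u_dx2 = torch.autograd.grad(du_dx, x, torch.ones_like(du_dx), create_graph=True)[0]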
In the next step, we need to restrict the neural network û to satisfy the physics
imposed by the PDE and boundary conditions. It is hard to restrict û in the whole
domain; instead, we restrict û on some scattered points, i.e., the training data
T = {x1, x2, . . . , x|T|} of size |T|. In addition, T comprises two sets Tf ⊂ Ω and
Tb ⊂ ∂Ω, which are the points in the domain and on the boundary, respectively. We
refer to Tf and Tb as the sets of “residual points”.
To measure the discrepancy between the neural network û and the constraints,
we consider the loss function defined as the weighted summation of the L2 norm of
residuals for the equation and boundary conditions:
L(θ; T) = wf Lf(θ; Tf) + wb Lb(θ; Tb),    (2.2)

where

Lf(θ; Tf) = (1/|Tf|) Σ_{x∈Tf} ‖f(x; ∂û/∂x1, . . . , ∂û/∂xd; ∂²û/∂x1∂x1, . . . , ∂²û/∂x1∂xd; . . . ; λ)‖₂²,

Lb(θ; Tb) = (1/|Tb|) Σ_{x∈Tb} ‖B(û, x)‖₂²,
and wf and wb are the weights. The loss involves derivatives, such as the partial
derivative ∂ û/∂x1 or the normal derivative at the boundary ∂ û/∂n = ∇û · n, which
are handled via AD.
In the last step, the procedure of searching for a good θ by minimizing the loss
L(θ; T ) is called “training”. Considering the fact that the loss is highly nonlinear and
non-convex with respect to θ [6], we usually minimize the loss function by gradient-
based optimizers, such as gradient descent, Adam [17], and L-BFGS [8].
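To make the loss (2.2) and the training step concrete, here is a minimal PyTorch sketch for the diffusion equation of Figure 1 with Dirichlet boundary/initial data only; the sampled points, network size, loss weights, and number of iterations are placeholders, and this is a sketch of the algorithm rather than the DeepXDE implementation.

import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
lam, w_f, w_b = 1.0, 1.0, 1.0                   # diffusion coefficient and loss weights
xt_f = torch.rand(100, 2, requires_grad=True)   # residual points T_f, columns (x, t)
xt_b = torch.rand(30, 2)                        # boundary/initial points T_b
g_b = torch.zeros(30, 1)                        # Dirichlet data g_D at T_b (placeholder)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(10000):
    u = net(xt_f)
    grads = torch.autograd.grad(u, xt_f, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt_f, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    f = u_t - lam * u_xx                         # PDE residual ∂û/∂t − λ ∂²û/∂x²
    loss = w_f * (f ** 2).mean() + w_b * ((net(xt_b) - g_b) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()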
In the algorithm of PINN introduced above, we enforce soft constraints of bound-
ary/initial conditions through the loss Lb . This approach can be used for complex
domains and any type of boundary conditions. On the other hand, it is possible to
enforce hard constraints for simple cases [18]. For example, when the boundary con-
dition is u(0) = u(1) = 0 with Ω = [0, 1], we can simply choose the surrogate model
as û(x) = x(x − 1)N (x) to satisfy the boundary condition automatically, where N (x)
is a neural network.
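As a sketch of this hard-constraint construction (with an arbitrary network standing in for N(x)), the surrogate can be written as an output transform that vanishes at x = 0 and x = 1 by construction, so no boundary loss term is needed:

import torch

N = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def u_hat(x):
    # û(x) = x (x − 1) N(x) satisfies u(0) = u(1) = 0 automatically.
    return x * (x - 1) * N(x)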
We note that it is very flexible to choose the residual points T , and here we
provide three possible strategies:
2.3. Approximation theory and error analysis for PINNs. One funda-
mental question related to PINNs is whether there exists a neural network satisfying
both the PDE and the boundary conditions, i.e., whether there exists a neu-
ral network that can simultaneously and uniformly approximate a function and its
partial derivatives. To address this question, we first introduce some notation. Let
Zd+ be the set of d-dimensional nonnegative integers. For m = (m1 , . . . , md ) ∈ Zd+ ,
[Diagram: ũT → uT → uF → u, with the optimization error Eopt between ũT and uT, the generalization error Egen between uT and uF, and the approximation error Eapp between uF and u.]
Fig. 3. Illustration of errors of a PINN. The total error consists of the approximation error,
the optimization error, and the generalization error. Here, u is the PDE solution, uF is the best
function close to u in the function space F , uT is the neural network whose loss is at a global
minimum, and ũT is the function obtained by training a neural network.
we set |m| := m1 + ... + md and define the differential operator

Dm := ∂^|m| / (∂x1^m1 ... ∂xd^md).
The approximation error Eapp measures how closely uF can approximate u. The
generalization error Egen is determined by the number/locations of residual points
in T and the capacity of the family F. Neural networks of larger size have smaller
approximation errors but could lead to higher generalization errors, a phenomenon known as the bias-variance tradeoff.
                   PINN                                   FEM
Basis function     Neural network (nonlinear)             Piecewise polynomial (linear)
Parameters         Weights and biases                     Point values
Training points    Scattered points (mesh-free)           Mesh points
PDE embedding      Loss function                          Algebraic system
Parameter solver   Gradient-based optimizer               Linear solver
Errors             Eapp, Egen and Eopt (subsection 2.3)   Approximation/quadrature errors
and then we use a PINN to solve the following PDE instead of the original equation:

dy/dx + y(x) ≈ Σ_{i=1}^{n} w_i e^{t_i(x) − x} y(t_i(x)).
PINNs can also be easily extended to solve FDEs [25] and SDEs [38, 36, 24, 37], but
we do not discuss such cases here due to the page limit.
Fig. 4. Schematic illustrating the modification of the PINN algorithm for solving integro-
differential equations. We employ the automatic differentiation technique to analytically derive
the integer-order derivatives, and we approximate integral operators numerically using standard
methods. (The figure is reproduced from [25].)
2.6. PINNs for solving inverse problems. In inverse problems, there are
some unknown parameters λ in (2.1), but we have extra information at some
points Ti ⊂ Ω besides the differential equation and boundary conditions:
I(u, x) = 0, for x ∈ Ti .
PINNs solve inverse problems as easily as forward problems. The only difference
between solving forward and inverse problems is that we add an extra loss term to
(2.2):
L(θ, λ; T) = wf Lf(θ, λ; Tf) + wb Lb(θ, λ; Tb) + wi Li(θ, λ; Ti),

where

Li(θ, λ; Ti) = (1/|Ti|) Σ_{x∈Ti} ‖I(û, x)‖₂².
We then optimize θ and λ together, and our solution is θ ∗ , λ∗ = arg minθ,λ L(θ, λ; T ).
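The joint optimization of θ and λ can be illustrated with a toy problem that is not one of the examples in this paper: identify λ in dû/dx = λû from a few observations of u(x) = e^{2x} (true λ = 2). The network size, point counts, and iteration number below are placeholders.

import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
lam = torch.nn.Parameter(torch.tensor(0.0))       # unknown parameter λ, trained jointly with θ

x_f = torch.linspace(0, 1, 50).reshape(-1, 1).requires_grad_(True)   # residual points T_f
x_i = torch.linspace(0, 1, 5).reshape(-1, 1)                         # observation points T_i
u_i = torch.exp(2 * x_i)                                             # observed values of u

optimizer = torch.optim.Adam(list(net.parameters()) + [lam], lr=1e-3)
for step in range(10000):
    u = net(x_f)
    u_x = torch.autograd.grad(u, x_f, torch.ones_like(u), create_graph=True)[0]
    loss_f = ((u_x - lam * u) ** 2).mean()        # equation residual L_f
    loss_i = ((net(x_i) - u_i) ** 2).mean()       # extra data-misfit term L_i
    loss = loss_f + loss_i
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(lam))                                 # should approach the true value 2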
Procedure 2.2 RAR for improving the distribution of residual points for training.
Step 1 Select the initial residual points T , and train the neural network for a limited
number of iterations.
Step 2 Estimate the mean PDE residual Er in (2.3) by Monte Carlo integration,
i.e., by the average of values at a set of randomly sampled locations S =
{x1 , x2 , . . . , x|S| }:
Er ≈ (1/|S|) Σ_{x∈S} ‖f(x; ∂û/∂x1, . . . , ∂û/∂xd; ∂²û/∂x1∂x1, . . . , ∂²û/∂x1∂xd; . . . ; λ)‖.
Step 3 Stop if Er < E0 . Otherwise, add m new points with the largest residuals in S
to T , and go to Step 2.
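A compact sketch of Procedure 2.2 is given below; train, residual, and sample_domain are hypothetical helper functions standing in for the user's training loop, the pointwise residual evaluation (returning a 1-D array of residual magnitudes), and a random sampler in Ω, and the default thresholds are placeholders.

import numpy as np

def rar(train, residual, sample_domain, T, E0=0.01, m=10, n_sample=10000):
    # Step 1: train on the initial residual points for a limited number of iterations.
    train(T)
    while True:
        # Step 2: Monte Carlo estimate of the mean residual E_r over random locations S.
        S = sample_domain(n_sample)
        r = residual(S)
        if r.mean() < E0:                # Step 3: stop once the mean residual is small enough.
            return T
        worst = np.argsort(r)[-m:]       # the m locations with the largest residuals
        T = np.vstack([T, S[worst]])     # add them to T, retrain, and re-estimate E_r
        train(T)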
[Flowchart: the geometry, differential equations, and boundary/initial conditions define the PDE problem, which is combined with the neural net and passed through Model.compile(...), Model.train(..., callbacks=...), and Model.predict(...).]
Fig. 5. Flowchart of DeepXDE corresponding to Procedure 3.1. The white boxes define the
PDE problem and the training hyperparameters. The blue boxes combine the PDE problem and
training hyperparameters in the white boxes. The orange boxes are the three steps (from right to
left) to solve the PDE.
Fig. 6. Examples of constructive solid geometry (CSG) in 2D. (left) A and B represent the
rectangle and circle, respectively. The union A|B, difference A − B, and intersection A&B are
constructed from A and B. (right) A complex geometry (top) is constructed from a polygon, a
rectangle and two circles (bottom) through the union, difference, and intersection operations. This
capability is included in the module geometry of DeepXDE.
DeepXDE can also be used to approximate functions from a dataset with constraints, and to
approximate functions from multi-fidelity data using the method proposed in [23].
3.2. Customizability. All the components of DeepXDE are loosely coupled,
and thus DeepXDE is well-structured and highly configurable. In this subsection, we
discuss how to customize DeepXDE to meet new demands.
3.2.1. Geometry. As we introduced above, DeepXDE already supports seven
basic geometries and the CSG technique. However, it is still possible that the user
needs a new geometry that cannot be constructed in DeepXDE. In this situation,
a new geometry can be defined as shown in Procedure 3.2.
Procedure 3.2 Customization of the new geometry module MyGeometry. The class
methods should only be implemented as needed.
class MyGeometry(Geometry):
    def inside(self, x):
        """Check if x is inside the geometry."""
    def on_boundary(self, x):
        """Check if x is on the geometry boundary."""
    def boundary_normal(self, x):
        """Compute the unit normal at x for Neumann or Robin boundary conditions."""
    def periodic_point(self, x, component):
        """Compute the periodic image of x for periodic boundary condition."""
    def uniform_points(self, n, boundary=True):
        """Compute the equispaced point locations in the geometry."""
    def random_points(self, n, random="pseudo"):
        """Compute the random point locations in the geometry."""
    def uniform_boundary_points(self, n):
        """Compute the equispaced point locations on the boundary."""
    def random_boundary_points(self, n, random="pseudo"):
        """Compute the random point locations on the boundary."""
Procedure 3.4 Customization of the callback MyCallback. Here, we only show how
to add functions to be called at the beginning/end of every epoch. Similarly, we can
call functions at the other training stages, such as at the beginning of training.
class MyCallback(Callback):
    def on_epoch_begin(self):
        """Called at the beginning of every epoch."""
    def on_epoch_end(self):
        """Called at the end of every epoch."""
−∆u(x, y) = 1, (x, y) ∈ Ω,
u(x, y) = 0, (x, y) ∈ ∂Ω.
We choose 1200 and 120 random points drawn from a uniform distribution as Tf
and Tb , respectively. The PINN solution is given in Figure 7B. For comparison, we
also present the numerical solution obtained by using the spectral element method
(SEM) [15] (Figure 7A). The result of the absolute error is shown in Figure 7C.
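For reference, a DeepXDE script for this Poisson problem could look roughly as follows. This is a sketch assuming the interface of a recent DeepXDE release (module names such as dde.icbc and dde.nn, and arguments such as iterations, may differ from the 2019 version described in this paper); the rectangular domain is a placeholder since the actual domain Ω of this example is not restated here, and the network size is arbitrary.

import deepxde as dde

def pde(x, u):
    u_xx = dde.grad.hessian(u, x, i=0, j=0)
    u_yy = dde.grad.hessian(u, x, i=1, j=1)
    return -u_xx - u_yy - 1                      # residual of −Δu = 1

geom = dde.geometry.Rectangle([0, 0], [1, 1])    # placeholder domain Ω
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)

data = dde.data.PDE(geom, pde, bc, num_domain=1200, num_boundary=120)
net = dde.nn.FNN([2] + [50] * 4 + [1], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=50000)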
4.2. RAR for Burgers equation. We consider the Burgers equation:
∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²,  x ∈ [−1, 1], t ∈ [0, 1],
u(x, 0) = − sin(πx), u(−1, t) = u(1, t) = 0.
Fig. 7. Example 4.1. Comparison of the PINN solution with the solution obtained by using
spectral element method (SEM). (A) The SEM solution uSEM, (B) the PINN solution uNN, (C)
the absolute error |uSEM − uNN|.
Fig. 8. Example 4.2. Comparisons of the PINN solutions with and without RAR. (A) The
cyan, green, and red lines represent the reference solution of u from [30], the PINN solution
without RAR, and the PINN solution with RAR at t = 0.9, respectively. For the finite difference (FD)
method, 200 × 100 = 20000 spatial-temporal grid points are used to achieve a good solution (blue
line). If only 60 × 40 = 2400 points are used, the FD solution has large oscillations around the
discontinuity (brown line). (B) L2 relative error versus the number of residual points. The red solid
line and shaded region correspond to the mean and one-standard-deviation band for the L2 relative
error of PINN with RAR, respectively. The blue dashed line is the mean and one-standard-deviation
for the error of PINN using 2540 random residual points. The mean and standard deviation are
obtained from 10 runs with random initial residual points.
4.3. Inverse problem for the Lorenz system. Consider the parameter iden-
tification problem of the following Lorenz system
dx/dt = ρ(y − x),   dy/dt = x(σ − z) − y,   dz/dt = xy − βz,
with the initial condition (x(0), y(0), z(0)) = (−8, 7, 27), where ρ, σ and β are the three
parameters to be identified from the observations at certain times. The observations
are produced by solving the above system to t = 3 using Runge-Kutta (4,5) with
the underlying true parameters (ρ, σ, β) = (10, 15, 8/3). We choose 400 uniformly
distributed random points and 25 equispaced points as the residual points Tf and Ti ,
respectively. The evolution trajectories of ρ, σ and β are presented in Figure 9A, with
the final identified values of (ρ, σ, β) = (10.002, 14.999, 2.668).
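The synthetic observations described above can be reproduced, for instance, with SciPy's Runge-Kutta (4,5) integrator; the 25 observation times below are placeholders for the equispaced points Ti.

import numpy as np
from scipy.integrate import solve_ivp

rho, sigma, beta = 10.0, 15.0, 8.0 / 3.0           # true parameters (ρ, σ, β)

def lorenz(t, X):
    x, y, z = X
    # dx/dt = ρ(y − x), dy/dt = x(σ − z) − y, dz/dt = xy − βz
    return [rho * (y - x), x * (sigma - z) - y, x * y - beta * z]

t_obs = np.linspace(0, 3, 25)
sol = solve_ivp(lorenz, [0, 3], [-8, 7, 27], method="RK45", t_eval=t_obs)
observations = sol.y.T                              # (x, y, z) at the 25 observation times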
Fig. 9. Examples 4.3 and 4.4. The identified values of (A) the Lorenz system and (B) diffusion-
reaction system converge to the true values during the training process. The parameter values are
scaled for plotting.
∂CA/∂t = D ∂²CA/∂x² − kf CA CB²,   ∂CB/∂t = D ∂²CB/∂x² − 2kf CA CB²,   x ∈ [0, 1], t ∈ [0, 10],

CA(x, 0) = CB(x, 0) = e^{−20x},   CA(0, t) = CB(0, t) = 1,   CA(1, t) = CB(1, t) = 0,
where D = 2 × 10−3 is the effective diffusion coefficient, and kf = 0.1 is the effective
reaction rate. Because D and kf depend on the pore structure and are difficult to
measure directly, we estimate D and kf based on 40000 observations of the concen-
trations CA and CB in the spatio-temporal domain. The identified D (1.98 × 10−3 )
and kf (0.0971) are displayed in Figure 9B, which agree well with their true values.
4.5. Volterra IDE. Here, we consider the first-order integro-differential equa-
tion of the Volterra type in the domain [0, 5]:
dy/dx + y(x) = ∫_0^x e^{t−x} y(t) dt,   y(0) = 1,
with the exact solution y(x) = e^{−x} cosh x. We solve this IDE using the method in
subsection 2.5, and approximate the integral using Gauss-Legendre quadrature of
degree 20. The L2 relative error is 2 × 10−3 , and the solution is shown in Figure 10.
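As a quick check of the setup (independent of the PINN itself), the exact solution indeed satisfies the IDE when the integral is approximated with a 20-point Gauss-Legendre rule mapped from [−1, 1] to [0, x]:

import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(20)

def y(t):
    return np.exp(-t) * np.cosh(t)                 # exact solution y(x) = e^{-x} cosh x

def dy(t):
    return np.exp(-t) * (np.sinh(t) - np.cosh(t))  # its derivative, equal to -e^{-2x}

for x in [0.5, 2.0, 5.0]:
    t = 0.5 * x * (nodes + 1)                      # quadrature nodes mapped to [0, x]
    integral = 0.5 * x * np.sum(weights * np.exp(t - x) * y(t))
    print(x, dy(x) + y(x), integral)               # the last two columns should agree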
Fig. 10. Example 4.5. The PINN algorithm for solving the Volterra IDE. The blue solid line is the
exact solution, and the red dashed line is the numerical solution from the PINN. Twelve equispaced residual
points (black dots) are used.
While we enforce the strong form of PDEs, which is easy to implement via automatic
differentiation, alternative weak/variational forms may also be effective, although they
require the use of quadrature grids. Many other extensions for multi-physics and
multi-scale problems are possible across different scientific disciplines by creatively
designing the loss function and introducing suitable solution spaces. For instance, in
the five examples we present here, we only assume data on scattered points; however,
in geophysics or biomedicine we may have mixed data in the form of images and point
measurements. In this case, we can design a composite neural network consisting of
one convolutional neural network and one PINN sharing the same set of parameters,
and minimize the total loss which could be a weighted summation of multiple losses
from each neural network.
REFERENCES
[17] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, in International
Conference on Learning Representations, 2015.
[18] I. E. Lagaris, A. Likas, and D. I. Fotiadis, Artificial neural networks for solving ordinary and
partial differential equations, IEEE Transactions on Neural Networks, 9 (1998), pp. 987–
1000.
[19] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, Neural-network methods for bound-
ary value problems with irregular boundaries, IEEE Transactions on Neural Networks, 11
(2000), pp. 1041–1049.
[20] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, 521 (2015), p. 436.
[21] Z. Long, Y. Lu, X. Ma, and B. Dong, PDE-net: Learning PDEs from data, in International
Conference on Machine Learning, 2018, pp. 3214–3222.
[22] A. J. Meade Jr and A. A. Fernandez, The numerical solution of linear ordinary differen-
tial equations by feedforward neural networks, Mathematical and Computer Modelling, 19
(1994), pp. 1–25.
[23] X. Meng and G. E. Karniadakis, A composite neural network that learns from multi-fidelity
data: Application to function approximation and inverse PDE problems, arXiv preprint
arXiv:1903.00104, (2019).
[24] M. A. Nabian and H. Meidani, A deep neural network surrogate for high-dimensional random
partial differential equations, arXiv preprint arXiv:1806.02957, (2018).
[25] G. Pang, L. Lu, and G. E. Karniadakis, fPINNs: Fractional physics-informed neural net-
works, SIAM Journal on Scientific Computing, (2019), to appear.
[26] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison,
L. Antiga, and A. Lerer, Automatic differentiation in pytorch, (2017).
[27] A. Pinkus, Approximation theory of the MLP model in neural networks, Acta Numerica, 8
(1999), pp. 143–195.
[28] T. Poggio, H. Mhaskar, L. Rosasco, B. Miranda, and Q. Liao, Why and when can deep-but
not shallow-networks avoid the curse of dimensionality: a review, International Journal of
Automation and Computing, 14 (2017), pp. 503–519.
[29] N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. A. Hamprecht, Y. Ben-
gio, and A. Courville, On the spectral bias of neural networks, arXiv preprint
arXiv:1806.08734, (2018).
[30] M. Raissi, P. Perdikaris, and G. E. Karniadakis, Physics-informed neural networks: A deep
learning framework for solving forward and inverse problems involving nonlinear partial
differential equations, Journal of Computational Physics, 378 (2019), pp. 686–707.
[31] M. Raissi, A. Yazdani, and G. E. Karniadakis, Hidden fluid mechanics: A Navier-Stokes
informed deep learning framework for assimilating flow visualization data, arXiv preprint
arXiv:1808.04327, (2018).
[32] J. Sirignano and K. Spiliopoulos, DGM: A deep learning algorithm for solving partial dif-
ferential equations, Journal of Computational Physics, 375 (2018), pp. 1339–1364.
[33] B. P. van Milligen, V. Tribaldos, and J. Jiménez, Neural network differential equation and
plasma equilibrium solver, Physical Review Letters, 75 (1995), p. 3594.
[34] N. Winovich, K. Ramani, and G. Lin, ConvPDE-UQ: Convolutional neural networks with
quantified uncertainty for heterogeneous elliptic partial differential equations on varied
domains, Journal of Computational Physics, 394 (2019), pp. 263–279.
[35] Z.-Q. J. Xu, Y. Zhang, T. Luo, Y. Xiao, and Z. Ma, Frequency principle: Fourier analysis
sheds light on deep neural networks, arXiv preprint arXiv:1901.06523, (2019).
[36] L. Yang, D. Zhang, and G. E. Karniadakis, Physics-informed generative adversarial net-
works for stochastic differential equations, arXiv preprint arXiv:1811.02033, (2018).
[37] D. Zhang, L. Guo, and G. E. Karniadakis, Learning in modal space: Solving time-dependent
stochastic PDEs using physics-informed neural networks, arXiv preprint arXiv:1905.01205,
(2019).
[38] D. Zhang, L. Lu, L. Guo, and G. E. Karniadakis, Quantifying total uncertainty in physics-
informed neural networks for solving forward and inverse stochastic problems, arXiv
preprint arXiv:1809.08327, (2018).
[39] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, and P. Perdikaris, Physics-constrained deep
learning for high-dimensional surrogate modeling and uncertainty quantification without
labeled data, arXiv preprint arXiv:1901.06314, (2019).
[40] B. Zoph and Q. V. Le, Neural architecture search with reinforcement learning, arXiv preprint
arXiv:1611.01578, (2016).