
ABSTRACT

Title of Dissertation: QUANTUM ALGORITHMS FOR LINEAR AND NONLINEAR DIFFERENTIAL EQUATIONS

Jin-Peng Liu
Doctor of Philosophy, 2022

Dissertation Directed by: Professor Andrew M. Childs, Department of Computer Science

Quantum computers are expected to dramatically outperform classical computers for certain computational problems. Originally developed for simulating quantum physics, quantum algorithms have subsequently been developed to address diverse computational challenges.

There has been extensive previous work on linear dynamics and discrete models, including Hamiltonian simulation and systems of linear equations. However, for more complex, realistic problems characterized by differential equations, the capability of quantum computing is far from well understood. One fundamental challenge is the substantial difference between the linear dynamics of a system of qubits and real-world systems with continuum and nonlinear behaviors.

My research is concerned with mathematical aspects of quantum computing. In this dissertation, I focus mainly on the design and analysis of quantum algorithms for differential equations. Systems of linear ordinary differential equations (ODEs) and linear elliptic partial differential equations (PDEs) are ubiquitous in natural and social science, engineering, and medicine. I propose a variety of quantum algorithms based on finite difference methods and spectral methods for producing the quantum encoding of the solutions, with an exponential improvement in the precision over previous quantum algorithms.

Nonlinear differential equations exhibit rich phenomena in many domains but are notoriously difficult to solve. Whereas previous quantum algorithms for general nonlinear equations have been severely limited due to the linearity of quantum mechanics, I give the first efficient quantum algorithm for nonlinear differential equations with sufficiently strong dissipation. I also establish a lower bound, showing that nonlinear differential equations with sufficiently weak dissipation have worst-case complexity exponential in time, giving an almost tight classification of the quantum complexity of simulating nonlinear dynamics.

Overall, utilizing advanced linear algebra techniques and nonlinear analysis, I attempt to build a bridge between classical and quantum mechanics, understand and optimize the power of quantum computation, and discover new quantum speedups over classical algorithms with provable guarantees.


QUANTUM ALGORITHMS FOR LINEAR AND
NONLINEAR DIFFERENTIAL EQUATIONS

by

Jin-Peng Liu

Dissertation submitted to the Faculty of the Graduate School of the


University of Maryland, College Park in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
2022

Advisory Committee:
Professor Andrew M. Childs, Chair/Advisor
Professor Alexey V. Gorshkov
Professor Carl A. Miller
Professor Konstantina Trivisa
Professor Xiaodi Wu
© Copyright by
Jin-Peng Liu
2022
Acknowledgments

I would like to express my gratitude to my advisor, Andrew M. Childs, for his invaluable guidance, encouragement, and support. Andrew is an incredible researcher as well as an amazing advisor from whom I can constantly learn. Without Andrew, I cannot imagine how I would have gotten through the challenges of my studies. He taught me all I know about becoming an independent researcher, from solving research problems and writing papers to delivering professional presentations. I will never forget the days when we worked on problems, talked about my career path, and had lunch together on occasion. I am proud to be a member of the Childs group.

I am grateful to Alexey Gorshkov, Carl Miller, Konstantina Trivisa, and Xiaodi Wu for serving as my dissertation committee members. Many thanks to Konstantina for her invaluable advice and warm-hearted assistance throughout my applied mathematics studies and during my post-doctoral applications. I would like to thank Furong Huang for serving on my preliminary exam committee and for discussing machine learning topics.

The work described in this dissertation is the product of collaborations with many people. I cherish the opportunities to work with Dong An, Di Fang, Herman Kolden, Hari Krovi, Tongyang Li, Noah Linden, Nuno Loureiro, Ashley Montanaro, Aaron Ostrander, Changpeng Shao, Chunhao Wang, Chenyi Zhang, and Ruizhe Zhang. Although most of the collaborations were virtual, I benefited greatly from their diverse research backgrounds and areas of expertise. I hope to meet everyone in person in the future.

At the University of Maryland, I had the privilege of learning from many fantastic faculty members, including Gorjan Alagic, John Baras, Alexander Barg, Jacob Bedrossian, James Duncan, Howard Elman, Tom Goldstein, Lise-Marie Imbert-Gérard, Pierre-Emmanuel Jabin, Richard La, Brad Lackey, David Levermore, Yi-Kai Liu, Ricardo Nochetto, and Eitan Tadmor. I appreciate the instructors' willingness to share their knowledge and skills with me, as well as to widen my horizons.

At the Joint Center for Quantum Information and Computer Science (QuICS), I had in-depth interactions with many quantum information colleagues: Chen Bai, Aniruddha Bapat, Charles Cao, Shouvanik Chakrabarti, Nai-Hui Chia, Ze-Pei Cian, Abhinav Deshpande, Bill Fefferman, Hong Hao Fu, Andrew Guo, Kaixin Huang, Shih-Han Hung, Jiaqi Leng, Yuxiang Peng, Eddie Schoute, Yuan Su, Minh Tran, Chiao-Hsuan Wang, Daochen Wang, Guoming Wang, Xin Wang, Yidan Wang, Xinyao Wu, Penghui Yao, Qi Zhao, Daiwei Zhu, Guanyu Zhu, and Shaopeng Zhu. We had wonderful days at QuICS. I would also like to thank Andrea Svejda, the QuICS coordinator. Whenever I was looking for assistance, she was always there to offer help.

I appreciated the support from the QISE-NET Award, which allowed me to collaborate with Microsoft Quantum researchers Guang Hao Low and Stephen Jordan. Special thanks to Stephen for his generous assistance during my graduate studies, whether he was at the University of Maryland or at Microsoft. I enjoyed my internship at the Amazon Web Services (AWS) Center for Quantum Computing in the summer of 2021, and I am grateful to Fernando Brandao, Earl Campbell, Michael Kastoryano, and Nicola Pancotti for their mentorship. Over that summer, I had a lot of fun speaking with AWS scientists Steve Flammia, Sam McArdle, Ash Milstead, and Martin Schuetz, as well as talking with my fellow interns Alexander Delzell, Hsin-Yuan Huang, Noah Shutty, Thomas Bohdanowicz, and Kianna Wan.

I enjoyed many unforgettable visits to Berkeley, Harvard, MIT, Caltech, Chicago, Colorado, and Shenzhen. In the winter of 2019 and the spring of 2020, I appreciated the hospitality of my hosts at the University of California, Berkeley, the Lawrence Berkeley National Laboratory, and the Simons Institute for the Theory of Computing. I would like to thank Lin Lin and Chao Yang for the numerous discussions of many valuable ideas, as well as the many challenging questions. I greatly appreciated Lin's strong support during my post-doctoral applications. I am also grateful to Umesh Vazirani for organizing the Simons programs and for inviting me to participate. In the winter of 2021, I had the pleasure of visiting the MIT Center for Theoretical Physics and the Plasma Science and Fusion Center in Boston. I was delighted to interact with Soonwon Choi, Isaac Chuang, Edward Farhi, Aram Harrow, and Seth Lloyd. Indeed, my graduate studies relied heavily on Harrow and Lloyd's results. It is an honor for me to collaborate with and compete against MIT folks. In the winter of 2021, I was also thrilled to interview with the Harvard Quantum Initiative, where I had the opportunity to speak with Arthur Jaffe, Mikhail Lukin, and Susanne Yelin. In these years, I also enjoyed attending conferences hosted by Caltech, Chicago, Colorado, and Shenzhen. I would like to express my gratitude to many individuals for their friendliness and hospitality during my visits.

I enjoyed numerous enlightening discussions with many incredible quantum information researchers apart from those mentioned above: Scott Aaronson, Anurag Anshu, Ryan Babbush, Dominic Berry, Adam Bouland, Sergey Bravyi, Paola Cappellaro, Steve Flammia, David Gosset, Stuart Hadfield, Matthew Hastings, Patrick Hayden, Zhengfeng Ji, Robin Kothari, Debbie Liang, Shunlong Luo, Jarrod McClean, Christopher Monroe, Michele Mosca, John Preskill, Yun Shang, Fang Song, Nikitas Stamatopoulos, Nathan Wiebe, Thomas Vidick, Beni Yoshida, Henry Yuen, Bei Zeng, William Zeng, and Shenggeng Zheng. I wish I could be as smart as these amazing people.

I would also like to thank my colleagues and friends in the broader quantum information community: Kaifeng Bu, Chenfeng Cao, Ningping Cao, Lijie Chen, Mo Chen, Yifang Chen, Rui Chao, Andrea Coladangelo, David Ding, Yulong Dong, Xun Gao, András Gilyén, Cupjin Huang, Yichen Huang, Hong-Ye Hu, Jiala Ji, Jiaqing Jiang, Ce Jin, Gushu Li, Jianqiang Li, Yinan Li, Jiahui Liu, Jinguo Liu, Junyu Liu, Mengke Liu, Qipeng Liu, Yunchao Liu, Yupan Liu, Ziwen Liu, Chuhan Lu, Di Luo, Chinmay Nirkhe, Luowen Qian, Yihui Quek, Yixin Shen, Qichen Song, Ewin Tang, Yuanjia Wang, Yadong Wu, Zhujing Xu, Yuxiang Yang, Jiahao Yao, Cong Yu, Zhan Yu, Haimeng Zhang, Jiayu Zhang, Yuxuan Zhang, Zhendong Zhang, Zijian Zhang, Chen Zhao, Chunlu Zhou, Hengyun Zhou, Shangnai Zhou, Sisi Zhou, and Jiamin Zhu. I wish you all the best in your future endeavors.

During my studies at the University of Maryland, I enjoyed illuminating discussions with fellow graduate students in the Applied Mathematics & Statistics, and Scientific Computation (AMSC) Program and the Department of Mathematics: Stephanie Allen, Christopher Dock, Alexis Boleda, Muhammed Elgebali, Luke Evans, Blake Fritz, Siming He, Gareth Johnson, Sophie Kessler, Wenbo Li, Ying Li, Yiran Li, Jiaxing Liang, Yuchen Luo, Jingcheng Lu, Michael Rawson, Tengfei Su, Manyuan Tao, Cem Unsal, Peng Wan, Qiong Wu, Shuo Yang, Anqi Ye, Yiran Zhang, and Yi Zhou. These five years could not have been so enjoyable without you. I would like to thank Jessica Sadler, the AMSC program coordinator, for offering help during my graduate years. I also enjoyed insightful interactions with many fellow graduate students in the Department of Computer Science: Jingling Li, Yanchao Sun, Jiahao Su, and Xuchen You. I would like to thank Jingling in particular for her thoughtfulness and heartwarming encouragement. Thank you for being there, and I will cherish every moment with you.

I would especially like to thank Yanwen Zhang for giving me the courage to believe in myself, so that I could complete my Ph.D. and pursue my dream of becoming a successful professor. Yanwen, I will never stop moving forward, as you encouraged.

Finally, I would like to thank my parents for their endless love, faith, and support. You are the true heroes.
Table of Contents

Acknowledgements
Table of Contents
List of Tables
List of Figures

Chapter 1: Introduction: quantum scientific computation
1.1 Notations and terminologies
1.2 Hamiltonian simulations
1.3 Quantum linear system algorithms
1.4 Quantum algorithms for linear differential equations
1.5 Quantum algorithms for nonlinear differential equations

Chapter 2: High-precision quantum algorithms for linear ordinary differential equations
2.1 Introduction
2.2 Spectral method
2.3 Linear system
2.4 Solution error
2.5 Condition number
2.6 Success probability
2.7 State preparation
2.8 Main result
2.9 Boundary value problems
2.10 Discussion

Chapter 3: High-precision quantum algorithms for linear elliptic partial differential equations
3.1 Introduction
3.2 Linear PDEs
3.3 Finite difference method
3.3.1 Linear system
3.3.2 Condition number
3.3.3 Error analysis
3.3.4 FDM algorithm
3.3.5 Boundary conditions via the method of images
3.4 Multi-dimensional spectral method
3.4.1 Quantum shifted Fourier transform and quantum cosine transform
3.4.2 Linear system
3.4.3 Condition number
3.4.4 State preparation
3.4.5 Main result
3.5 Discussion and open problems

Chapter 4: Efficient quantum algorithms for dissipative nonlinear differential equations
4.1 Introduction
4.2 Quadratic ODEs
4.3 Quantum Carleman linearization
4.4 Algorithm analysis
4.4.1 Solution error
4.4.2 Condition number
4.4.3 State preparation
4.4.4 Measurement success probability
4.4.5 Proof of Theorem 4.1
4.5 Lower bound
4.5.1 Hardness of state discrimination
4.5.2 State discrimination with nonlinear dynamics
4.5.3 Proof of Theorem 4.2
4.6 Applications
4.7 Discussion

Chapter 5: Conclusion and future work

Bibliography
List of Tables

3.1 Summary of the time complexities of classical and quantum algorithms for d-dimensional PDEs with error tolerance ϵ. Portions of the complexity in bold represent the best known dependence on that parameter.
List of Figures

4.1 Integration of the forced viscous Burgers equation using Carleman linearization on a classical computer (source code available at https://github.com/hermankolden/CarlemanBurgers). The viscosity is set so that the Reynolds number Re = U0 L0 / ν = 20. The parameters nx = 16 and nt = 4000 are the numbers of spatial and temporal discretization intervals, respectively. The corresponding Carleman convergence parameter is R = 43.59. Top: initial condition and solution plotted at a third of the nonlinear time, (1/3) Tnl = L0 / (3 U0). Bottom: ℓ2 norm of the absolute error between the Carleman solutions at various truncation levels N (left), and the convergence of the corresponding time-maximum error (right).
Chapter 1: Introduction: quantum scientific computation

Quantum computing exploits quantum-mechanical phenomena such as superposition and entanglement to perform computation. Quantum computers have the potential to dramatically outperform classical computers at solving diverse computational challenges. Quantum scientific computation is a fast-growing multidisciplinary field that combines classical numerical analysis with advanced quantum technologies to model, analyze, and solve complex problems arising in physics, chemistry, biology, engineering, social sciences, and beyond. Originally developed for simulating quantum physics, various quantum algorithms have been proposed to address scientific computing problems by performing linear algebra in Hilbert space. For such problems, quantum algorithms are expected to provide polynomial and even exponential speedups.

This chapter covers the basic notations and terminologies of quantum computing, followed by quantum algorithms for scientific computation problems, including Hamiltonian simulation, linear systems, and differential equations.

1.1 Notations and terminologies

To aid comprehension of the remainder of the dissertation, we provide a brief overview of the notations and terminologies of quantum computing. More details are available in standard textbooks, such as Nielsen and Chuang, Quantum Computation and Quantum Information [1], and Watrous, The Theory of Quantum Information [2].

Quantum mechanics can be formulated in terms of linear algebra. Throughout this dissertation, we consider a finite-dimensional complex vector space C^N equipped with an inner product (the Hilbert space). We usually take N = 2^n for some non-negative integer n. We use the Dirac notation |ψ⟩ to represent a quantum state, and ⟨ϕ| = |ϕ⟩† to represent its Hermitian conjugate. The scalar ⟨ϕ|ψ⟩ gives the inner product of |ψ⟩ and |ϕ⟩. For a quantum state, we always assume ⟨ψ|ψ⟩ = 1, i.e., |ψ⟩ is a unit complex vector. We also let {|j⟩}_{j=1}^{N} be the standard basis of the space. The j-th entry of |ψ⟩ can be written as ⟨j|ψ⟩.

Given two quantum states |ψ1⟩ ∈ C^{N1} and |ψ2⟩ ∈ C^{N2}, their tensor product can be written as |ψ1⟩|ψ2⟩ = |ψ1⟩ ⊗ |ψ2⟩ ∈ C^{N1 N2}, where ⊗ denotes the Kronecker product. One quantum bit (one qubit) is a quantum state in C^2, and the tensor product of one-qubit states |ψj⟩ forms an n-qubit state |ψ1⟩ ⊗ · · · ⊗ |ψn⟩ ∈ C^{2^n}.
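As a concrete illustration (a toy sketch, not from the dissertation; the helper name `kron` and the chosen states are ours), the Kronecker product of two state vectors can be computed directly:

```python
import math

def kron(u, v):
    # Kronecker product of two state vectors: (u ⊗ v)[i*len(v)+j] = u[i]*v[j]
    return [ui * vj for ui in u for vj in v]

# Two single-qubit states: |0> and |+> = (|0> + |1>)/sqrt(2)
zero = [1.0, 0.0]
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]

state = kron(zero, plus)  # a 2-qubit state in C^4, still a unit vector
```

Note that the dimension multiplies (2 × 2 = 4), matching C^{N1 N2} above.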
An n-qubit quantum gate U ∈ C^{2^n × 2^n} is a unitary matrix, i.e., U†U = I, where I is the identity matrix. It is used to map one n-qubit state to another, i.e., U : |ψ⟩ → |ψ′⟩. A sequence of quantum gates composed as a product forms a (reversible) quantum logic circuit.

The universality of two-qubit gates means that every n-qubit gate can be written as a composition of a sequence of two-qubit gates. Therefore, we usually count the number of two-qubit gates as the gate complexity of quantum algorithms.

For quantum measurement, we consider a quantum observable that corresponds to a Hermitian matrix M. It has the spectral decomposition M = Σ_m λm Pm, where Pm is the projection operator onto the eigenspace associated with the eigenvalue λm, i.e., Pm² = Pm. We usually assume Σ_m Pm = I. When the quantum state |ψ⟩ is measured by M, the outcome is λm with probability pm = ⟨ψ|Pm|ψ⟩, with Σ_m pm = 1. After the measurement, the post-measurement state is Pm|ψ⟩/√pm.
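These measurement rules reduce to a simple calculation for a single qubit measured in the computational basis (a minimal sketch; the state and variable names are our own, not from the text):

```python
import math

# |psi> = a|0> + b|1>, measured with projectors P_0 = |0><0| and P_1 = |1><1|
psi = [3/5, 4j/5]  # amplitudes with |a|^2 + |b|^2 = 1

# p_m = <psi|P_m|psi> reduces to the squared magnitude of each amplitude
p0 = abs(psi[0])**2
p1 = abs(psi[1])**2

# Post-measurement state for outcome 1: P_1|psi> / sqrt(p_1)
post1 = [0.0, psi[1] / math.sqrt(p1)]
```

The probabilities sum to 1 and the post-measurement state is again a unit vector.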

We now discuss the notations for norms. For a vector a = [a1, a2, . . . , an] ∈ R^n, we denote the vector ℓp norm as

∥a∥p := (Σ_{k=1}^{n} |ak|^p)^{1/p}.  (1.1)

For a matrix A ∈ R^{n×n}, we denote the operator norm ∥·∥p,q induced by the vector ℓp and ℓq norms as

∥A∥p,q := sup_{x≠0} ∥Ax∥q / ∥x∥p,   ∥A∥p := ∥A∥p,p.  (1.2)

For a continuous scalar function f(t) : [0, T] → R, we denote the L∞ norm as

∥f∥∞ := max_{t∈[0,T]} |f(t)|.  (1.3)

For a continuous scalar function u(x, t) : Ω × [0, T] → R, where Ω ⊂ R^d, for a fixed t, the Lp norm of u(·, t) is given by

∥u(·, t)∥_{Lp(Ω)} := (∫_Ω |u(x, t)|^p dx)^{1/p}.  (1.4)

In particular, when no subscript is used on a vector, matrix, or function norm, we mean ∥·∥ = ∥·∥2 by default.
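Definition (1.1) can be checked numerically with a few lines (a toy sketch; the helper name is ours):

```python
def lp_norm(a, p):
    # Vector l_p norm from definition (1.1)
    return sum(abs(x)**p for x in a) ** (1.0 / p)

a = [3.0, -4.0]
l1 = lp_norm(a, 1)  # |3| + |-4| = 7
l2 = lp_norm(a, 2)  # sqrt(9 + 16) = 5, the default norm used in this dissertation
```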

For real functions f, g : R → R, we write f = O(g) if there exists c > 0 such that |f(τ)| ≤ c|g(τ)| for all τ ∈ R. We write f = Ω(g) if g = O(f), and f = Θ(g) if both f = O(g) and g = O(f). We use Õ to suppress logarithmic factors in asymptotic expressions, i.e., f = Õ(g) if f = O(g poly(log g)).

1.2 Hamiltonian simulations

Simulating quantum physics is one of the primary applications of quantum computers [3]. The first explicit quantum simulation algorithm was proposed by Lloyd [4] using product formulas, and numerous quantum algorithms for quantum simulation have been developed since then [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], with various applications ranging from quantum field theory [28, 29] to quantum chemistry [18, 30, 31] and condensed matter physics [32].

We first introduce the quantum oracles. We assume there is a state preparation oracle Oψ that produces an N-dimensional quantum state |ψ⟩, i.e.,

Oψ |0⟩ = |ψ⟩.  (1.5)

We then assume there is a sparse matrix oracle OH that computes the locations and values of the nonzero entries of an N × N sparse Hermitian matrix H. In detail, on input (j, l), OH gives the location of the l-th nonzero entry in row j, denoted k, and then gives the value Hj,k, i.e.,

OH (|j⟩|k⟩|0⟩) = |j⟩|k⟩|Hj,k⟩.  (1.6)

The quantum oracle OH is reversible, and it allows access to different input elements (j, l) in superposition, which is essential for quantum computing. Here we require that the number of nonzero entries of this matrix in every row and column be at most s, where s is much smaller than the dimension N.
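Classically, the sparse-access model in (1.6) amounts to a row-wise lookup of nonzero entries. A minimal sketch (the storage scheme and the function name `oracle_H` are our own, purely illustrative):

```python
# Nonzero entries of a small sparse Hermitian matrix H, stored per row as
# sorted (column, value) pairs; rows[j][l] is the l-th nonzero of row j.
rows = {
    0: [(0, 2.0), (1, 1.0)],
    1: [(0, 1.0), (1, 2.0)],
}

def oracle_H(j, l):
    # Classical analogue of O_H: on input (j, l), return the column index k
    # of the l-th nonzero entry in row j (0-indexed) and the value H[j][k].
    k, value = rows[j][l]
    return k, value
```

The quantum oracle performs the same lookup coherently, on superpositions of inputs (j, l).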

We count the number of queries to these oracles as the query complexity. If such oracles can be implemented with one- and two-qubit gates, we usually count the number of these one- and two-qubit gates as the gate complexity of quantum algorithms.

We state the Hamiltonian simulation problem as follows.

Problem 1.1. In the Hamiltonian simulation problem, we consider an N-dimensional Hamiltonian system

i (d/dt)|ψ(t)⟩ = H(t)|ψ(t)⟩,   |ψ(0)⟩ = |ψin⟩.  (1.7)

Given the ability to prepare a quantum state |ψ(0)⟩, a sparse matrix oracle to provide the locations and values of the nonzero entries of a Hamiltonian H, and an evolution time T, the goal is to produce a quantum state that is ϵ-close to |ψ(T)⟩ in ℓ2 norm.

When the Hamiltonian H is time-independent, we have the closed-form solution |ψ(T)⟩ = e^{-iHT}|ψ(0)⟩, so a quantum simulation algorithm attempts to implement an approximation of the evolution operator e^{-iHT}.

This is a difficult problem for classical computation, since a classical computer cannot even represent the quantum state efficiently: all classical algorithms require time complexity at least Ω(N) to explicitly store all the entries of the quantum state. In quantum computation, we intend to design a sequence of quantum gates (a quantum circuit) that evolves |ψ(0)⟩ to |ψ(T)⟩ while lowering the cost dramatically.

In Problem 1.1, we aim to produce the final state |ψ(T)⟩ with query and gate complexity poly(log N), an exponential improvement over classical simulation.

The first explicit Hamiltonian simulation algorithm, proposed by Lloyd, is based on product formulas [4]. Specifically, if H is a time-independent k-local Hamiltonian, i.e., H = Σ_j Hj where each Hj acts on k = O(1) qubits, then the evolution operator e^{-iHt} for a short time t can be well approximated by the Lie-Trotter formula,

e^{-iHt} = Π_j e^{-iHj t} + O(t²).  (1.8)

Each operator e^{-iHj t} can be efficiently implemented on a quantum computer. For a long time t, we can divide the time interval into r subintervals, on each of which we simulate the operators e^{-iHj t/r}, and finally approximate e^{-iHt} by

e^{-iHt} = (Π_j e^{-iHj t/r})^r + O(t²/r).  (1.9)

Taking r = O(∥H∥² t²/ϵ) ensures that the error between the exact and approximated normalized states is at most ϵ in ℓ2 norm. High-order product formulas give better approximations: using the 2k-th order Suzuki formula, Berry, Ahokas, Cleve, and Sanders showed that the number of exponentials required for an approximation with error at most ϵ is 5^{2k} ∥H∥^{1+1/2k} t^{1+1/2k} / ϵ^{1/2k} [33]. A large body of work has substantially developed fast quantum algorithms based on product formulas [33, 34, 35, 36, 37, 38, 39, 40].
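The O(t²/r) scaling in (1.9) can be seen numerically on a toy single-qubit example (our own choice, not from the dissertation): take H = X + Z, for which H² = 2I gives a closed form for the exact evolution, and compare it with the first-order Trotterized evolution as r grows.

```python
import math

# Pauli matrices and identity as 2x2 nested lists
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_pow(A, r):
    R = [[1, 0], [0, 1]]
    for _ in range(r):
        R = matmul(R, A)
    return R

def exp_pauli(P, theta):
    # For P with P^2 = I: exp(-i*theta*P) = cos(theta) I - i sin(theta) P
    c, s = math.cos(theta), math.sin(theta)
    return [[c * I2[i][j] - 1j * s * P[i][j] for j in range(2)] for i in range(2)]

def exact_evolution(t):
    # H = X + Z satisfies H^2 = 2I, so exp(-iHt) = cos(sqrt(2) t) I - i sin(sqrt(2) t) H/sqrt(2)
    w = math.sqrt(2)
    c, s = math.cos(w * t), math.sin(w * t)
    return [[c * I2[i][j] - 1j * (s / w) * (X[i][j] + Z[i][j]) for j in range(2)] for i in range(2)]

def trotter(t, r):
    # First-order Lie-Trotter approximation: (exp(-iXt/r) exp(-iZt/r))^r
    step = matmul(exp_pauli(X, t / r), exp_pauli(Z, t / r))
    return mat_pow(step, r)

def max_entry_error(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

t = 1.0
err_10 = max_entry_error(trotter(t, 10), exact_evolution(t))
err_100 = max_entry_error(trotter(t, 100), exact_evolution(t))
# err_100 is roughly err_10 / 10, consistent with the O(t^2/r) bound
```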

More recently, Hamiltonian simulation algorithms based on post-Trotter methods have emerged. Berry, Childs, Cleve, Kothari, and Somma proposed high-precision Hamiltonian simulation by implementing linear combinations of unitaries (LCU), with complexity t poly(log(t/ϵ)) [35, 36], an exponential improvement over Trotter-based Hamiltonian simulation with respect to ϵ. Low and Chuang developed Hamiltonian simulation based on quantum signal processing and qubitization [41, 42] with complexity O(t + log(1/ϵ)/log log(1/ϵ)), realizing an optimal tradeoff in terms of t and ϵ.

Note that the Hamiltonian simulation problem can be regarded as a particular initial value problem for a differential equation. A more general problem will be introduced later.

1.3 Quantum linear system algorithms

Quantum computers are expected to outperform classical computers at characterizing the solution of an N-dimensional linear system in Hilbert space. Originating with the algorithm of Harrow, Hassidim, and Lloyd [43], advanced quantum linear system algorithms [43, 44, 45, 46, 47, 48, 49, 50, 51] have been well developed to provide a quantum state encoding the solution with complexity poly(log N). Such algorithms have been widely applied to address high-dimensional problems governed by linear differential equations [52, 53, 54, 55, 56, 57, 58, 59, 60, 61].

We introduce the oracles for the quantum linear system problem. Given an N-dimensional vector b, we assume there is a state preparation oracle Ob, as defined in (1.5), that produces an N-dimensional quantum state |b⟩, where |b⟩ is proportional to the vector b. Given an N × N sparse matrix A, we then assume there is a sparse matrix oracle OA, as defined in (1.6). In detail, on input (j, l), OA gives the location of the l-th nonzero entry in row j, denoted k, and then gives the value Aj,k. Here we require that the number of nonzero entries of this matrix in every row and column be at most s, where s is much smaller than the dimension N.

We state the quantum linear system problem as follows.

Problem 1.2. In the quantum linear system problem, we consider an N-dimensional linear system

Ax = b.  (1.10)

Given the ability to prepare a quantum state proportional to the vector b, and a sparse matrix oracle to provide the locations and values of the nonzero entries of a matrix A, the goal is to produce a quantum state that is ϵ-close to the normalized A^{-1}b in ℓ2 norm.

It takes time at least Ω(N) for a classical computer (and even for a quantum computer) to explicitly write down every entry of an N-dimensional vector. In Problem 1.2, we instead aim to design a quantum circuit that provides a quantum state encoding the ℓ2-normalized solution A^{-1}b, with query and gate complexity poly(log N).

The first quantum linear system algorithm (QLSA), known as the HHL algorithm, was proposed by Harrow, Hassidim, and Lloyd [43]. The algorithm requires that A be Hermitian so that it can be converted into a unitary operator. If A is not Hermitian, the linear system in Problem 1.2 is modified as

⎛ 0   A ⎞ ⎛ 0 ⎞   ⎛ b ⎞
⎝ A†  0 ⎠ ⎝ x ⎠ = ⎝ 0 ⎠.  (1.11)

The algorithm requires that a quantum state |b⟩ proportional to the vector b be given.

Let {λj, |νj⟩}_{j=1}^{N} be the eigenvalues and eigenvectors of A, and let |b⟩ be expanded in the eigenbasis of A as |b⟩ = Σ_j βj |νj⟩.

The HHL algorithm queries the unitary operator e^{iAt} with the sparse matrix oracle. Hamiltonian simulation techniques are employed to apply e^{iAt}, for a superposition of different times t, to |b⟩. The HHL algorithm then estimates the corresponding λj for each eigencomponent of |b⟩¹, yielding the state

Σ_j βj |0⟩|νj⟩ → Σ_j βj |λj⟩|νj⟩,  (1.12)

where each λj is stored in the ancilla register. Next, the HHL algorithm performs a controlled rotation to multiply λj^{-1} onto each eigencomponent,

Σ_j βj |λj⟩|νj⟩ → Σ_j βj λj^{-1} |λj⟩|νj⟩.  (1.13)

Noticing that A^{-1}|b⟩ = Σ_j βj λj^{-1} |νj⟩, we have thus prepared the inverse solution encoded in the quantum state.

This algorithm achieves query and gate complexity κ² poly(log N)/ϵ, where N is the dimension, ϵ measures the error tolerance between the exact and approximated normalized states in ℓ2 norm, and κ is the condition number of the matrix A.
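The eigenbasis manipulations in (1.12)-(1.13) can be mimicked classically on a toy 2 × 2 Hermitian system (a minimal sketch of the arithmetic only, not the quantum algorithm; the matrix, vector, and variable names are our own choices):

```python
import math

# Toy Hermitian system Ax = b, solved by mimicking the eigenbasis steps of HHL
A = [[2.0, 1.0], [1.0, 2.0]]
b = [1.0, 0.0]

# Eigenpairs (lambda_j, |nu_j>) of A, worked out by hand for this example
eigs = [(1.0, [1 / math.sqrt(2), -1 / math.sqrt(2)]),
        (3.0, [1 / math.sqrt(2),  1 / math.sqrt(2)])]

# Expand |b> in the eigenbasis: beta_j = <nu_j|b>, as in (1.12)
betas = [sum(v[i] * b[i] for i in range(2)) for _, v in eigs]

# Multiply each component by 1/lambda_j, as in the controlled rotation (1.13)
x = [sum((beta / lam) * v[i] for (lam, v), beta in zip(eigs, betas))
     for i in range(2)]

# The quantum state encodes the l2-normalized solution A^{-1} b / ||A^{-1} b||
norm = math.sqrt(sum(xi * xi for xi in x))
x_state = [xi / norm for xi in x]
```

Here x recovers A^{-1}b = (2/3, -1/3), and x_state is its unit-norm encoding.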

Subsequent work has substantially improved the performance of the HHL algorithm. Regarding the dependence on ϵ, Childs, Kothari, and Somma exponentially improved the scaling from poly(1/ϵ) to poly(log(1/ϵ)) by utilizing linear combinations of unitaries (LCU) [46]. The LCU approach approximates the inverse of A by a Fourier series or a Chebyshev polynomial, which can be implemented by linear combinations of unitary operations with cost poly(log(1/ϵ)). Gilyén, Su, Low, and Wiebe provided an alternative approach based on the quantum singular value transformation (QSVT) that approximates the inverse of A with the same ϵ scaling [47].

¹The HHL algorithm uses quantum phase estimation (QPE) [62] to estimate the eigenvalues. More details can be found in [1].

The dependence on the condition number κ has also attracted considerable attention. Although the best-known classical linear system solver, based on conjugate gradient descent, can produce all N entries of the solution with cost scaling as κ, it is known that a quantum algorithm must make at least κ queries to encode the solution of Problem 1.2 with poly(log N) complexity [43]. Compared to the HHL algorithm, Ambainis adopted variable time amplitude amplification (VTAA) to improve the κ² scaling to linear scaling [44], but at the price of a worse 1/ϵ³ scaling. Childs, Kothari, and Somma combined VTAA with the LCU approach introduced above to reach κ poly(log N, log(1/ϵ)) [46]. Alternatively, quantum adiabatic approaches [45, 48, 49, 51] have recently been investigated to reach the same complexity without using VTAA.

1.4 Quantum algorithms for linear differential equations

Models governed by ordinary differential equations (ODEs) and partial differential equations (PDEs) arise extensively in natural and social science, medicine, and engineering. Such equations characterize physical and biological systems that exhibit a wide variety of complex phenomena, including turbulence and chaos. By utilizing QLSAs, quantum algorithms offer the prospect of rapidly characterizing the solutions of high-dimensional systems of linear ODEs [52, 53, 54] and PDEs [55, 56, 57, 58, 59, 60, 61].

We introduce the oracles for the quantum linear ODE problem. Given an N -

dimensional vector u(0), we assume there is a state preparation oracle O0 as defined in

(1.5) that produces an N -dimensional quantum state |u(0)⟩, where |u(0)⟩ is proportional

to the vector u(0). Given an N -dimensional vector f (t) with a specific t, we assume there

is a state preparation oracle Of such that

Of (|t⟩|0⟩) = |t⟩|f (t)⟩, (1.14)

which produces an N -dimensional quantum state |f (t)⟩ proportional to f (t). Given an

N × N sparse matrix A(t) with a specific t, we assume there is a sparse matrix oracle OA

as defined in (1.6). In detail, on input (t, j, l), OA gives the location of the l-th nonzero

entry in row j, denoted as k. Then the oracle OA gives the value Aj,k (t), i.e.

OA (|t⟩|j⟩|k⟩|0⟩) = |t⟩|j⟩|k⟩|Aj,k (t)⟩. (1.15)

Here we require that the number of nonzero entries of this matrix in every row and column be

at most s for any time t, where s is much smaller than the dimension N .

We informally state the quantum linear ODE problem as follows.

Problem 1.3. In the quantum linear ODE problem, we are given a system of d-dimensional

differential equations
du(t)/dt = A(t)u(t) + f (t). (1.16)

Given the ability to prepare a quantum state proportional to the initial condition u(0) and

the inhomogeneity f (t) with a specific t, a sparse matrix oracle to provide the locations

and values of nonzero entries of a matrix A(t) with a specific t, and an evolution time T ,

the goal is to produce a quantum state that is ϵ-close to the normalized u(T ) in ℓ2 norm.

Problem 2.1 gives a formal statement of Problem 1.3.

Berry presented the first quantum algorithm for general linear ODEs [52]. This

work explicitly considered the time-independent case, assuming that real parts of the

eigenvalues of A are non-positive, whereas in principle, it is natural to extend it to general

time-dependent ODEs. Berry’s algorithm discretized the system of differential equations

into small time intervals as a system of linear equations using the Euler method or high-

order linear multistep methods [63, 64], for which QLSAs can be applied to produce

an approximated encoded solution. For instance, the first-order forward Euler method

approximates the time derivative at the point x(t) as

dx(t)/dt = (x(t + h) − x(t))/h + O(h). (1.17)

A kth-order linear multistep method can reduce the error to O(h^k). This approach

achieves complexity poly-logarithmic in the dimension d. However, when solving an

equation over the interval [0, T ], the number of iterations is T /h = Θ(ϵ^{−1/k}) for fixed k,

offering a total complexity poly(1/ϵ) even using high-precision QLSAs.
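The Θ(ϵ^{−1/k}) step count is easy to observe classically for k = 1. The following sketch (our own illustration; the function names are not from the text) integrates dx/dt = Ax by forward Euler and checks that the global error scales as O(h) = O(T/steps):

```python
import numpy as np

def euler_solve(A, x0, T, steps):
    """Forward Euler for dx/dt = A x: repeatedly apply x <- (I + h A) x."""
    h = T / steps
    step_matrix = np.eye(len(x0)) + h * A
    x = x0.copy()
    for _ in range(steps):
        x = step_matrix @ x
    return x

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # eigenvalues -1, -2: non-positive real parts
x0 = np.array([1.0, 1.0])
T = 2.0

# exact solution x(T) = exp(A T) x0 via eigendecomposition (A is diagonalizable)
w, V = np.linalg.eig(A)
exact = (V @ np.diag(np.exp(w * T)) @ np.linalg.inv(V) @ x0).real

for steps in (100, 200, 400, 800):
    err = np.linalg.norm(euler_solve(A, x0, T, steps) - exact)
    print(steps, err)   # doubling the step count roughly halves the error
```

Reaching error ϵ therefore takes Θ(1/ϵ) Euler steps, which is why a high-precision QLSA alone cannot rescue the ϵ-dependence of such time-stepping schemes.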

Reference [53] improved Berry’s result from poly(1/ϵ) to poly(log(1/ϵ)) by using

a high-precision QLSA based on linear combinations of unitaries [46] to solve a linear

system that encodes a truncated Taylor series. However, this approach assumes that A(t)

and f (t) are time-independent so that the solution of the ODE can be written as an explicit

series, and it is unclear how to generalize the algorithm to time-dependent ODEs. Prior to

the work [54], it was an open problem whether there was a quantum algorithm

for time-dependent ODEs with complexity poly(log(1/ϵ)).

In Chapter 2, we introduce the quantum spectral method with such an improvement

[54]. Our main contribution is to implement a method that uses a global approximation of

the solution instead of locally discretizing the ODEs into small time intervals. We do this

by developing a quantum version of so-called spectral methods, a technique from classical


numerical analysis that represents the components of the solution u(t)_i ≈ Σ_j c_{ij} ϕ_j (t) as

linear combinations of basis functions ϕj (t) expressing the time dependence. Specifically,

we implement a Chebyshev pseudospectral method [65, 66] using a high-precision QLSA.

This approach approximates the solution by a truncated Chebyshev series with undetermined

coefficients and solves for those coefficients using a linear system that interpolates the

differential equations. According to the convergence theory of spectral methods, the

solution error decreases exponentially provided the solution is sufficiently smooth [67,

68]. We use the LCU-based QLSA to solve this linear system with high precision [46].

To analyze the algorithm, we upper bound the solution error and condition number of the

linear system and lower bound the success probability of the final measurement. Overall,

we show that the total complexity of this approach is poly(log(1/ϵ)) for general time-

dependent ODEs. We give a formal statement of the main result in Theorem 2.1.

It is natural to extend the ordinary differential equations to the partial differential

equations (PDEs) by involving multivariate derivatives. Prominent examples include

Maxwell’s equations for electromagnetism, Boltzmann’s equation and the Fokker-Planck

equation in thermodynamics, and Schrödinger’s equation in continuum quantum mechanics.

For solving PDEs on a digital computer, it is common to consider a system of linear

equations that approximates the PDE on the grid space, and produce the solution on those

grid points within the ℓ2 discretization error ϵ.

We introduce the oracles for the quantum linear PDE problem. Given an N -dimensional

vector f (x) with a specific x, we assume there is a state preparation oracle Of as defined

in (1.14) that produces an N -dimensional quantum state |f (x)⟩, where |f (x)⟩ is proportional

to the vector f (x). Given N × N sparse matrices Aj1 j2 (x), Aj (x), and A0 (x) with

a specific x, we then assume there are sparse matrix oracles as defined in (1.15). For

instance, Aj1 j2 (x) is modeled by a sparse matrix oracle that, on input (m, l), gives the

location of the l-th nonzero entry in row m, denoted as n, and gives the value Aj1 j2 (x)m,n .

We informally state the quantum linear PDE problem as follows.

Problem 1.4. In the quantum linear PDE problem, we are given a system of second-order

d-dimensional equations

Σ_{j1,j2=1}^{d} A_{j1 j2}(x) ∂²u(x)/∂x_{j1}∂x_{j2} + Σ_{j=1}^{d} A_j(x) ∂u(x)/∂x_j + A_0(x)u(x) = f (x). (1.18)

Given the ability to prepare a quantum state proportional to the inhomogeneity f (x),

sparse matrix oracles to provide the locations and values of nonzero entries of a matrix

Aj1 j2 (x), Aj (x), and A0 (x) on a set of interpolation nodes x, the goal is to produce

a quantum state |u(x)⟩ that is ϵ-close to the normalized u(x) on a set of interpolation

nodes x in ℓ2 norm.

Problem 3.1 gives a formal statement of Problem 1.4, for which we consider additional

technical assumptions introduced in Chapter 3.

The discretized solution u(x) on a set of interpolation nodes x is a multi-dimensional

vector function. If each spatial coordinate has n discrete values, then n^d points are needed

to discretize a d-dimensional problem. Simply producing the solution on these grid points

takes time Ω(n^d).

Compared to classical algorithms, quantum algorithms can produce a quantum

state proportional to the solution on the grid, which requires only poly(d, log n) space.
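A back-of-envelope comparison makes the gap concrete (the specific numbers are our own, for illustration): with n = 100 grid values per coordinate and d = 20 dimensions, a classical grid representation stores 10^40 values, while indexing the same grid in superposition takes only d⌈log₂ n⌉ qubits.

```python
import math

n, d = 100, 20                         # grid values per coordinate, dimensions
classical_values = n ** d              # grid points a classical method must store
qubits = d * math.ceil(math.log2(n))   # qubits needed to index the same grid
print(classical_values)                # 10**40 values
print(qubits)                          # 140 qubits
```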

There are a large variety of quantum algorithms using different numerical schemes and

the QLSA to reach poly(d, 1/ϵ) [55, 56, 57, 58, 59, 60, 61], because of the additional

approximation errors in the numerical schemes. It was unknown how to achieve

poly(d, log(1/ϵ)) prior to the work [59].

In Chapter 3, we introduce two classes of quantum algorithms for linear elliptic

PDEs with such an improvement [59]. Our first algorithm is based on a quantum version

of the FDM approach: we use a finite-difference approximation to produce a system

of linear equations and then solve that system using the QLSA. We analyze our FDM

algorithm as applied to Poisson’s equation under periodic, Dirichlet, and Neumann boundary

conditions. Whereas previous FDM approaches [69, 70] considered fixed orders of truncation,

we adapt the order of truncation depending on ϵ, inspired by the classical adaptive FDM

[71]. As the order increases, the eigenvalues of the FDM matrix approach the eigenvalues

of the continuous Laplacian, allowing for more precise approximations. The main algorithm

we present uses the quantum Fourier transform (QFT) and takes advantage of the high-

precision LCU-based QLSA [46]. This quantum adaptive FDM approach produces a

quantum state that approximates the solution of Poisson's equation with complexity

d^{6.5} poly(log d, log(1/ϵ)).
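The mechanism behind adapting the truncation order can be seen classically from the symbols of periodic finite-difference stencils: as the order increases, the stencil's eigenvalue on the Fourier mode e^{ikx} approaches the eigenvalue k² of the continuous operator −d²/dx². A small sketch (our own illustration, using the standard second- and fourth-order central stencils):

```python
import numpy as np

def fdm_symbol(k, h, order):
    """Eigenvalue of the periodic central-difference approximation to -d^2/dx^2
    on the Fourier mode e^{ikx}."""
    if order == 2:        # stencil (-1, 2, -1)/h^2
        return (2 - 2 * np.cos(k * h)) / h**2
    if order == 4:        # stencil (1, -16, 30, -16, 1)/(12 h^2)
        return (30 - 32 * np.cos(k * h) + 2 * np.cos(2 * k * h)) / (12 * h**2)
    raise ValueError("unsupported order")

k, h = 3.0, 0.05
exact = k**2                                # eigenvalue of the continuous -d^2/dx^2
err2 = abs(fdm_symbol(k, h, 2) - exact)     # O(h^2) error
err4 = abs(fdm_symbol(k, h, 4) - exact)     # O(h^4) error, much closer
print(err2, err4)
```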

We also propose a quantum algorithm for more general second-order elliptic PDEs

under periodic or non-periodic Dirichlet boundary conditions. This algorithm is based on

the quantum spectral method [54] that globally approximates the solution of a PDE by a

truncated Fourier or Chebyshev series with undetermined coefficients, and then finds the

coefficients by solving a linear system. This system is exponentially large in d, so solving

it is infeasible for classical algorithms but feasible in a quantum context. To be able to

apply the QLSA efficiently, we show how to make the system sparse using variants of the

quantum Fourier transform. Our bound on the condition number of the linear system uses

global strict diagonal dominance, and introduces a factor in the complexity that measures

the extent to which this condition holds. We give a complexity of d² poly(log(1/ϵ)) for

producing a quantum state approximating the solution of general second-order elliptic

PDEs with Dirichlet boundary conditions.

Both of these approaches have complexity poly(d, log(1/ϵ)), providing optimal

dependence on ϵ and an exponential improvement over classical methods with respect

to d. We state our main results in Theorem 3.1 and Theorem 3.2.

1.5 Quantum algorithms for nonlinear differential equations

We now turn attention to the nonlinear generalization of Problem 1.3. We focus

here on differential equations with nonlinearities that can be expressed with quadratic

polynomials. Note that polynomials of degree higher than two, and even more general

nonlinearities, can be reduced to the quadratic case by introducing additional variables

[72, 73]. The quadratic case also directly includes many archetypal models, such as the

logistic equation in biology, the Lorenz system in atmospheric dynamics, and the Navier–

Stokes equations in fluid dynamics.
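To illustrate the reduction to quadratic form [72, 73] on a toy example (our own construction, not from the text): du/dt = −u³ becomes quadratic after adjoining the variable v = u², since then du/dt = −uv and dv/dt = 2u·(du/dt) = −2v². A quick numerical check that the two systems agree:

```python
import numpy as np

def rk4(rhs, y0, T, steps):
    """Classical fourth-order Runge-Kutta integrator."""
    h = T / steps
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2)
        k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

cubic = lambda y: np.array([-y[0] ** 3])                    # du/dt = -u^3
quad = lambda y: np.array([-y[0] * y[1], -2 * y[1] ** 2])   # u' = -u v, v' = -2 v^2

u_T = rk4(cubic, [1.0], 1.0, 1000)[0]
u_quad, v_quad = rk4(quad, [1.0, 1.0], 1.0, 1000)           # v(0) = u(0)^2 = 1
print(u_T, u_quad)          # the two u-trajectories agree
print(v_quad, u_quad ** 2)  # and v remains equal to u^2 along the way
```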

We introduce the oracles for the quantum quadratic PDE problem. Given an N -

dimensional vector u(0), we assume there is a state preparation oracle O0 as defined in

(1.5) that produces an N -dimensional quantum state |u(0)⟩, where |u(0)⟩ is proportional

to the vector u(0). Given N × N sparse matrices F2 , F1 , and F0 (t) with a specific t, we

then assume there are sparse matrix oracles as defined in (1.6) and (1.15). For instance,

F2 is modeled by a sparse matrix oracle OF2 that, on input (j, l), gives the location of the

l-th nonzero entry in row j, denoted as k, and gives the value (F2 )j,k .

We informally state the quantum quadratic ODE problem as follows.

Problem 1.5. In the quantum quadratic ODE problem, we are given a system of N -

dimensional differential equations

du(t)/dt = F2 u^{⊗2}(t) + F1 u(t) + F0 (t). (1.19)

Given the ability to prepare a quantum state proportional to the initial condition u(0) and

sparse matrix oracles to provide the locations and values of nonzero entries of matrices

F2 , F1 , and F0 (t) for any specified t, and an evolution time T , the goal is to produce a

quantum state that is ϵ-close to the normalized u(T ) in ℓ2 norm.

Problem 4.1 gives a formal statement of Problem 1.5.

Early work on quantum algorithms for differential equations already considered the

nonlinear case by Leyton and Osborne [74]. It gave a quantum algorithm that simulates

the nonlinear ODE by storing and maintaining multiple copies of the solution. In each

iteration from t → t + ∆t, it consumes multiple copies of |x(t)⟩ to represent the nonlinearity

F (x(t)) to obtain one copy of |x(t + ∆t)⟩. The complexity of this approach is polynomial

in the logarithm of the dimension but exponential in the evolution time, scaling as O(1/ϵ^T )

due to exponentially increasing resources used to maintain sufficiently many copies of the

solution throughout the evolution.

Recently, heuristic quantum algorithms for nonlinear ODEs have been studied.

Reference [75] explores a linearization technique known as the Koopman–von Neumann

method that might be amenable to the quantum linear system algorithm. In [76], the

authors provide a high-level description of how linearization can help solve nonlinear

equations on a quantum computer. However, neither paper makes precise statements

about concrete implementations or running times of quantum algorithms. The recent

preprint [77] also describes a quantum algorithm to solve a nonlinear ODE by linearizing

it using a different approach from the one taken here. However, that work likewise makes no

precise statements about concrete implementations or rigorous time complexities of quantum

algorithms. The authors also do not describe how barriers such as those of [78] could be

avoided in their approach.

While quantum mechanics is described by linear dynamics, possible nonlinear modifications

of the theory have been widely studied. Generically, such modifications enable quickly

solving hard computational problems (e.g., solving unstructured search among n items

in time poly(log n)), making nonlinear dynamics exponentially difficult to simulate in

general [78, 79, 80]. Therefore, constructing efficient quantum algorithms for general

classes of nonlinear dynamics has been considered largely out of reach.

Prior to the work [81], it was a long-standing open problem

whether quantum computing can efficiently characterize nonlinear differential equations.

In Chapter 4, we design and analyze a quantum algorithm that overcomes this

limitation using Carleman linearization [73, 82, 83]. This approach embeds polynomial

nonlinearities into an infinite-dimensional system of linear ODEs, and then truncates it

to obtain a finite-dimensional linear approximation. We discretize the finite ODE system

in time using the forward Euler method and solve the resulting linear equations with the

quantum linear system algorithm [46, 84]. We control the approximation error of this

approach by combining a novel convergence theorem with a bound for the global error

of the Euler method. Furthermore, we upper bound the condition number of the linear

system and lower bound the success probability of the final measurement. Subject to the

condition R < 1, where the quantity R characterizes the relative strength of the nonlinear

and dissipative linear terms, we show that the total complexity of this quantum Carleman

linearization algorithm is sT²q poly(log T, log n, log(1/ϵ))/ϵ, where s is the sparsity, T

is the evolution time, q quantifies the decay of the final solution relative to the initial

condition, n is the dimension, and ϵ is the allowed error (see Theorem 4.1). In the regime

R < 1, this is an exponential improvement over [74], which has complexity exponential

in T . We state our main algorithmic result in Theorem 4.1.
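A minimal classical sketch of Carleman linearization (our own illustrative example, not the algorithm's implementation): for the scalar equation du/dt = F₁u + F₂u² with dissipative F₁ < 0, the variables y_k = u^k obey the infinite linear system dy_k/dt = kF₁y_k + kF₂y_{k+1}; truncating at level N and setting y_{N+1} := 0 gives a finite linear ODE whose first component approximates u(T), with error decreasing in N:

```python
import numpy as np

F1, F2, u0, T = -1.0, 0.2, 0.5, 1.0      # dissipative F1 < 0, weak nonlinearity

def carleman_u(N, steps=2000):
    """Truncated Carleman system for du/dt = F1 u + F2 u^2:
    y_k' = k F1 y_k + k F2 y_{k+1} for k = 1..N, closing with y_{N+1} := 0."""
    C = np.zeros((N, N))
    for k in range(1, N + 1):
        C[k - 1, k - 1] = k * F1
        if k < N:
            C[k - 1, k] = k * F2
    y = np.array([u0 ** k for k in range(1, N + 1)])   # y_k(0) = u(0)^k
    h = T / steps
    for _ in range(steps):                             # RK4 on the linear system y' = C y
        k1 = C @ y
        k2 = C @ (y + h / 2 * k1)
        k3 = C @ (y + h / 2 * k2)
        k4 = C @ (y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[0]                                        # approximates u(T)

# closed-form solution of this Bernoulli equation, for comparison
exact = 1.0 / ((1.0 / u0 + F2 / F1) * np.exp(-F1 * T) - F2 / F1)
for N in (2, 4, 8):
    print(N, abs(carleman_u(N) - exact))   # truncation error shrinks rapidly with N
```

In the quantum algorithm the resulting linear ODE is discretized by forward Euler and handed to the QLSA rather than integrated classically; this sketch only exhibits the linearization and its convergence.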

We also provide a quantum lower bound for the worst-case complexity of simulating

strongly nonlinear dynamics, demonstrating that the algorithm’s condition R < 1 cannot

be significantly improved in general. Following the approach of [78, 79], we construct

a protocol for distinguishing two states of a qubit driven by a certain quadratic ODE.

Provided R ≥ √2, this procedure distinguishes states with overlap 1 − ϵ in time poly(log(1/ϵ)).

Since nonorthogonal quantum states are hard to distinguish, this implies a lower bound

on the complexity of the quantum ODE problem. We state our main lower bound result

in Theorem 4.2.

Our quantum algorithm could potentially be applied to study models governed

by quadratic ODEs arising in biology and epidemiology as well as in fluid and plasma

dynamics. In particular, the celebrated Navier–Stokes equation with linear damping,

which describes many physical phenomena, can be treated by our approach provided

the Reynolds number is sufficiently small. We also note that while the formal validity of

our arguments assumes R < 1, we find in one numerical experiment that our proposed

approach remains valid for larger R.

The remainder of this dissertation is outlined as follows. Chapter 2 covers high-

precision quantum algorithms for linear ordinary differential equations. Chapter 3 presents

high-precision quantum algorithms for linear elliptic partial differential equations. Chapter 4

introduces an efficient quantum algorithm for dissipative nonlinear differential equations.

Finally, we conclude the results of the dissertation and discuss future work in Chapter 5.

Chapter 2: High-precision quantum algorithms for linear ordinary differential

equations

2.1 Introduction

In this chapter, we focus on systems of first-order linear ordinary differential equations

(ODEs).¹ As earlier introduced in Problem 1.3, such equations can be written in the form

dx(t)/dt = A(t)x(t) + f (t) (2.1)

where t ∈ [0, T ] for some T > 0, the solution x(t) ∈ Cd is a d-dimensional vector, and

the system is determined by a time-dependent matrix A(t) ∈ Cd×d and a time-dependent

inhomogeneity f (t) ∈ Cd . Provided A(t) and f (t) are continuous functions of t, the

initial value problem (i.e., the problem of determining x(t) for a given initial condition

x(0)) has a unique solution [85].

Recent work has developed quantum algorithms with the potential to extract information

about solutions of systems of differential equations even faster than is possible classically.

This body of work grew from the quantum linear systems algorithm (QLSA) [84], which

produces a quantum state proportional to the solution of a sparse system of d linear


¹This chapter is based on the paper [54].

equations in time poly(log d). We have introduced the quantum linear system algorithm

in Chapter 1.

Berry presented the first efficient quantum algorithm for general linear ODEs [52].

His algorithm represents the system of differential equations as a system of linear equations

using a linear multistep method and solves that system using the QLSA. This approach

achieves complexity logarithmic in the dimension d and, by using a high-order integrator,

close to quadratic in the evolution time T . While this method could in principle be applied

to handle time-dependent equations, the analysis of [52] only explicitly considers the

time-independent case for simplicity.

Since it uses a finite difference approximation, the complexity of Berry’s algorithm

as a function of the solution error ϵ is poly(1/ϵ) [52]. Reference [53] improved this to

poly(log(1/ϵ)) by using a high-precision QLSA based on linear combinations of unitaries

[46] to solve a linear system that encodes a truncated Taylor series. However, this approach

assumes that A(t) and f (t) are time-independent so that the solution of the ODE can be

written as an explicit series, and it is unclear how to generalize the algorithm to time-

dependent ODEs.

Most of the aforementioned algorithms use a local approximation: they discretize

the differential equations into small time intervals to obtain a system of linear equations or

linear differential equations that can be solved by the QLSA or Hamiltonian simulation.

For example, the central difference scheme approximates the time derivative at the point

x(t) as
dx(t)/dt = (x(t + h) − x(t − h))/(2h) + O(h²). (2.2)

High-order finite difference or finite element methods can reduce the error to O(h^k),

where k − 1 is the order of the approximation. However, when solving an equation over

the interval [0, T ], the number of iterations is T /h = Θ(ϵ^{−1/k}) for fixed k, giving a

total complexity that is poly(1/ϵ) even using high-precision methods for the QLSA or

Hamiltonian simulation.

For ODEs with special structure, some prior results already show how to avoid

a local approximation and thereby achieve complexity poly(log(1/ϵ)). When A(t) is

anti-Hermitian and f (t) = 0, we can directly apply Hamiltonian simulation [37]; if

A and f are time-independent, then [53] uses a Taylor series to achieve complexity

poly(log(1/ϵ)). However, the case of general time-dependent linear ODEs had remained

elusive.

In this chapter, we use a nonlocal representation of the solution of a system of

differential equations to give a new quantum algorithm with complexity poly(log(1/ϵ))

even for time-dependent equations. While this is an exponential improvement in the

dependence on ϵ over previous work, it does not necessarily give an exponential runtime

improvement in the context of an algorithm with classical output. In general, statistical

error will introduce an overhead of poly(1/ϵ) when attempting to measure an observable

with precision ϵ. However, achieving complexity poly(log(1/ϵ)) can result in a polynomial

improvement in the overall running time. In particular, if an algorithm is used as a

subroutine k times, we should ensure error O(1/k) for each subroutine to give an overall

algorithm with bounded error. A subroutine with complexity poly(log(1/ϵ)) can potentially

give significant polynomial savings in such a case.

Time-dependent linear differential equations describe a wide variety of systems in

science and engineering. Examples include the wave equation and the Stokes equation

(i.e., creeping flow) in fluid dynamics [86], the heat equation and the Boltzmann equation

in thermodynamics [87, 88], the Poisson equation and Maxwell’s equations in electromagnetism

[89, 90], and of course Schrödinger’s equation in quantum mechanics. Moreover, some

nonlinear differential equations can be studied by linearizing them to produce time-dependent

linear equations (e.g., the linearized advection equation in fluid dynamics [91]).

We focus our discussion on first-order linear ODEs. Higher-order ODEs can be

transformed into first-order ODEs by standard methods. Also, by discretizing space,

PDEs with both time and space dependence can be regarded as sparse linear systems

of time-dependent ODEs. Thus we focus on an equation of the form (2.1) with initial

condition

x(0) = γ (2.3)

for some specified γ ∈ Cd . We assume that A(t) is s-sparse (i.e., has at most s nonzero

entries in any row or column) for any t ∈ [0, T ]. Furthermore, we assume that A(t), f (t),

and γ are provided by black-box subroutines (which serve as abstractions of efficient

computations). In particular, following essentially the same model as in [53] (see also

Section 1.1 of [46]), suppose we have an oracle OA (t) that, for any t ∈ [0, T ] and any

given row or column specified as input, computes the locations and values of the nonzero

entries of A(t) in that row or column. We also assume oracles Ox and Of (t) that, for

any t ∈ [0, T ], prepare normalized states |γ⟩ and |f (t)⟩ proportional to γ and f (t), and

that also compute ∥γ∥ and ∥f (t)∥, respectively. Given such a description of the instance,

the goal is to produce a quantum state ϵ-close to |x(T )⟩ (a normalized quantum state

proportional to x(T )).

As mentioned above, our main contribution is to implement a method that uses a

global approximation of the solution. We do this by developing a quantum version of so-

called spectral methods, a technique from classical numerical analysis that (approximately)
represents the components of the solution x(t)_i ≈ Σ_j c_{ij} ϕ_j (t) as linear combinations

of basis functions ϕj (t) expressing the time dependence. Specifically, we implement a

Chebyshev pseudospectral method [65, 66] using the QLSA. This approach approximates

the solution by a truncated Chebyshev series with undetermined coefficients and solves

for those coefficients using a linear system that interpolates the differential equations.

According to the convergence theory of spectral methods, the solution error decreases

exponentially provided the solution is sufficiently smooth [67, 68]. We use the LCU-based

QLSA to solve this linear system with high precision [46]. To analyze the algorithm,

we upper bound the solution error and condition number of the linear system and lower

bound the success probability of the final measurement. Overall, we show that the total

complexity of this approach is poly(log(1/ϵ)) for general time-dependent ODEs. Informally,

we show the following:

Theorem 2.1 (Informal). Consider a linear ODE (2.1) with given initial conditions.

Assume A(t) is s-sparse and diagonalizable, and Re(λi (t)) ≤ 0 for all eigenvalues of

A(t). Then there exists a quantum algorithm that produces a state ϵ-close in ℓ2 norm

to the exact solution, succeeding with probability Ω(1), with query and gate complexity

O(s∥A∥T poly(log(s∥A∥T /ϵ))).

In addition to initial value problems (IVPs), our approach can also address boundary

value problems (BVPs). Given an oracle for preparing a state α|x(0)⟩+β|x(T )⟩ expressing

a general boundary condition, the goal of the quantum BVP is to produce a quantum state

ϵ-close to |x(t)⟩ (a normalized state proportional to x(t)) for any desired t ∈ [0, T ].

We also give a quantum algorithm for this problem with complexity poly(log(1/ϵ)), as

follows:

Theorem 2.2 (Informal). Consider a linear ODE (2.1) with given boundary conditions.

Assume A(t) is s-sparse and diagonalizable, and Re(λi (t)) ≤ 0 for all eigenvalues of

A(t). Then there exists a quantum algorithm that produces a state ϵ-close in ℓ2 norm

to the exact solution, succeeding with probability Ω(1), with query and gate complexity

O(s∥A∥⁴T⁴ poly(log(s∥A∥T /ϵ))).

We give formal statements of Theorem 2.1 and Theorem 2.2 in Section 2.8 and

Section 2.9, respectively. Note that the dependence of the complexity on ∥A∥ and T is

worse for BVPs than for IVPs. This is because a rescaling approach that we apply for

IVPs (introduced in Section 2.3) cannot be extended to BVPs.

The remainder of this chapter is organized as follows. Section 2.2 introduces the

spectral method and Section 2.3 shows how to encode it into a quantum linear system.

Then Section 2.4 analyzes the exponential decrease of the solution error, Section 2.5

bounds the condition number of the linear system, Section 2.6 lower bounds the success

probability of the final measurement, and Section 2.7 describes how to prepare the initial

quantum state. We combine these bounds in Section 2.8 to establish the main result.

We then extend the analysis for initial value problems to boundary value problems in

Section 2.9. Finally, we conclude in Section 2.10 with a discussion of the results and

some open problems.

2.2 Spectral method

Spectral methods provide a way of solving differential equations using global

approximations [67, 68]. The main idea of the approach is as follows. First, express an

approximate solution as a linear combination of certain basis functions with undetermined

coefficients. Second, construct a system of linear equations that such an approximate

solution should satisfy. Finally, solve the linear system to determine the coefficients of

the linear combination.

Spectral methods offer a flexible approach that can be adapted to different settings

by careful choice of the basis functions and the linear system. A Fourier series provides an

appropriate basis for periodic problems, whereas Chebyshev polynomials can be applied

more generally. The linear system can be specified using Gaussian quadrature (giving a

spectral element method or Tau method), or one can simply interpolate the differential

equations using quadrature nodes (giving a pseudo-spectral method) [68]. Since general

linear ODEs are non-periodic, and interpolation facilitates constructing a straightforward

linear system, we develop a quantum algorithm based on the Chebyshev pseudo-spectral

method [65, 66].

In this approach, we consider a truncated Chebyshev approximation x(t) of the

exact solution x̂(t), namely

x_i(t) = Σ_{k=0}^{n} c_{i,k} T_k (t), i ∈ [d]_0 := {0, 1, . . . , d − 1} (2.4)

for any n ∈ Z+ . Here T_k (t) = cos(k arccos t) is the Chebyshev polynomial of the first

kind. (See [54, Appendix A] for its properties.) The coefficients ci,k ∈ C for all i ∈ [d]0

and k ∈ [n + 1]0 are determined by demanding that x(t) satisfies the ODE and initial

conditions at a set of interpolation nodes {t_l}_{l=0}^{n} (with 1 = t_0 > t_1 > · · · > t_n = −1),

where x(t0 ) and x(tn ) are the initial and final states, respectively. In other words, we

require
dx(t_l)/dt = A(t_l)x(t_l) + f (t_l), ∀ l ∈ [n + 1], t ∈ [−1, 1], (2.5)

and

xi (t0 ) = γi , i ∈ [d]0 . (2.6)

We choose the domain [−1, 1] in (2.5) because this is the natural domain for Chebyshev

polynomials. Correspondingly, in the following section, we rescale the domain of initial

value problems to be [−1, 1]. We would like to be able to increase the accuracy of the

approximation by increasing n, so that

∥x̂(t) − x(t)∥ → 0 as n → ∞. (2.7)

There are many possible choices for the interpolation nodes. Here we use the

Chebyshev-Gauss-Lobatto quadrature nodes, t_l = cos(lπ/n) for l ∈ [n + 1]_0 , since these

nodes achieve the highest convergence rate among all schemes with the same number

of nodes [63, 64]. These nodes also have the convenient property that T_k (t_l ) = cos(klπ/n),

making it easy to compute the values xi (tl ).
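This node property follows directly from T_k(t) = cos(k arccos t), and is easy to verify numerically (a quick sketch of our own):

```python
import numpy as np

n = 8
l = np.arange(n + 1)
nodes = np.cos(l * np.pi / n)              # Chebyshev-Gauss-Lobatto nodes t_l
for k in range(n + 1):
    direct = np.cos(k * np.arccos(nodes))  # T_k(t_l) from the definition of T_k
    closed = np.cos(k * l * np.pi / n)     # the claimed closed form cos(k l pi / n)
    assert np.allclose(direct, closed)
print("T_k(t_l) = cos(k l pi / n) holds for all k, l up to n =", n)
```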

To evaluate the condition (2.5), it is convenient to define coefficients c′i,k for i ∈ [d]0

and k ∈ [n + 1]0 such that
dx_i(t)/dt = Σ_{k=0}^{n} c′_{i,k} T_k (t). (2.8)

We can use the differential property of Chebyshev polynomials,

2T_k (t) = T′_{k+1}(t)/(k + 1) − T′_{k−1}(t)/(k − 1), (2.9)

to determine the transformation between ci,k and c′i,k . Based on the property of derivatives

of Chebyshev polynomials (as detailed in [54, Appendix A]), we have

c′_{i,k} = Σ_{j=0}^{n} [D_n]_{kj} c_{i,j} , i ∈ [d]_0 , k ∈ [n + 1]_0 , (2.10)

where Dn is the (n + 1) × (n + 1) upper triangular matrix with nonzero entries

[D_n]_{kj} = 2j/σ_k , k + j odd, j > k, (2.11)

where

σ_k := 2 if k = 0, and σ_k := 1 if k ∈ [n] := {1, 2, . . . , n}. (2.12)

Using this expression in (2.5), (2.10), and (2.11), we obtain the following linear

equations:

Σ_{k=0}^{n} T_k (t_l )c′_{i,k} = Σ_{j=0}^{d−1} A_{ij}(t_l ) Σ_{k=0}^{n} T_k (t_l )c_{j,k} + f (t_l )_i , i ∈ [d]_0 , l ∈ [n + 1]_0 . (2.13)

We also demand that the Chebyshev series satisfies the initial condition xi (1) = γi for all

i ∈ [d]0 . This system of linear equations gives a global approximation of the underlying

system of differential equations. Instead of locally approximating the ODE at discretized

times, these linear equations use the behavior of the differential equations at the n + 1

times {t_l}_{l=0}^{n} to capture their behavior over the entire interval [−1, 1].
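The differentiation matrix of (2.10)–(2.12) can be checked numerically against numpy's Chebyshev-series derivative routine (a sketch of our own; `chebder` computes the exact derivative coefficients of a Chebyshev series):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

n = 10
D = np.zeros((n + 1, n + 1))               # the matrix D_n of (2.11)-(2.12)
for k in range(n + 1):
    sigma_k = 2.0 if k == 0 else 1.0
    for j in range(k + 1, n + 1):
        if (k + j) % 2 == 1:               # nonzero only for j > k with k + j odd
            D[k, j] = 2.0 * j / sigma_k

rng = np.random.default_rng(0)
c = rng.standard_normal(n + 1)             # coefficients of a random Chebyshev series
c_prime = np.zeros(n + 1)
c_prime[:n] = Ch.chebder(c)                # reference derivative coefficients
print(np.allclose(D @ c, c_prime))         # D_n maps c to the coefficients of dx/dt
```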

Our algorithm solves this linear system using the high-precision QLSA [46]. Given

an encoding of the Chebyshev coefficients cik , we can obtain the approximate solution

x(t) as a suitable linear combination of the cik , a computation that can also be captured

within a linear system. The resulting approximate solution x(t) is close to the exact

solution x̂(t):

Lemma 2.1 (Lemma 19 of [67]). Let x̂(t) ∈ C^{r+1}(−1, 1) be the solution of the differential

equations (2.1) and let x(t) satisfy (2.5) and (2.6) for {t_l = cos(lπ/n)}_{l=0}^{n}. Then there is a

constant C, independent of n, such that

max_{t∈[−1,1]} ∥x̂(t) − x(t)∥ ≤ C max_{t∈[−1,1]} ∥x̂^{(n+1)}(t)∥/n^{r−2} . (2.14)

This shows that the convergence behavior of the spectral method is related to the

smoothness of the solution. For a solution in C^{r+1}, the spectral method approximates the

solution with n = poly(1/ϵ). Furthermore, if the solution is smoother, we have an even

tighter bound:

Lemma 2.2 (Eq. (1.8.28) of [68]). Let x̂(t) ∈ C ∞ (−1, 1) be the solution of the differential

equations (2.1) and let x(t) satisfy (2.5) and (2.6) for {t_l = cos(lπ/n)}_{l=0}^{n}. Then

max_{t∈[−1,1]} ∥x̂(t) − x(t)∥ ≤ √(2/π) max_{t∈[−1,1]} ∥x̂^{(n+1)}(t)∥ (e/(2n))^n . (2.15)

For simplicity, we replace the value √(2/π) by the upper bound of 1 in the following

analysis.

This result implies that if the solution is in C ∞ , the spectral method approximates

the solution to within ϵ using only n = poly(log(1/ϵ)) terms in the Chebyshev series.

Consequently, this approach gives a quantum algorithm with complexity poly(log(1/ϵ)).
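This super-exponential convergence can be checked classically: interpolating any fixed smooth function at the Chebyshev points and measuring the maximum error exhibits the decay of Lemma 2.2, so n = poly(log(1/ϵ)) terms suffice for error ϵ. The test function below is an arbitrary illustrative choice.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate a C-infinity function at t_l = cos(l*pi/n) for increasing n
# and record the maximum deviation on a fine grid.
f = lambda t: np.exp(t) * np.sin(2.0 * t)      # hypothetical smooth target
t_fine = np.linspace(-1.0, 1.0, 1001)

errors = []
for n in (4, 8, 16):
    nodes = np.cos(np.arange(n + 1) * np.pi / n)
    coeffs = C.chebfit(nodes, f(nodes), n)     # degree-n interpolant through n+1 points
    errors.append(np.max(np.abs(C.chebval(t_fine, coeffs) - f(t_fine))))
print(errors)   # shrinks faster than any power of 1/n
```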

2.3 Linear system

In this section we construct a linear system that encodes the solution of a system of

differential equations via the Chebyshev pseudospectral method introduced in Section 2.2.

We consider a system of linear, first-order, time-dependent ordinary differential equations,

and focus on the following initial value problem. This is a formal statement of Problem 1.3.

Problem 2.1. In the quantum ODE problem, we are given a system of equations

dx(t)/dt = A(t)x(t) + f(t)   (2.16)

where x(t) ∈ Cd , A(t) ∈ Cd×d is s-sparse, and f (t) ∈ Cd for all t ∈ [0, T ]. We

assume that Aij , fi ∈ C ∞ (0, T ) for all i, j ∈ [d]. We are also given an initial condition

x(0) = γ ∈ Cd . Given oracles that compute the locations and values of nonzero entries

of A(t) for any t,² and that prepare normalized states |γ⟩ proportional to γ and |f(t)⟩

proportional to f (t) for any t ∈ [0, T ], the goal is to output a quantum state |x(T )⟩ that

is ϵ-close to the normalized x(T ) in ℓ2 norm.

Without loss of generality, we rescale the interval [0, T] onto [−1, 1] by the linear
map t ↦ 1 − 2t/T. Under this rescaling, we have d/dt ↦ −(2/T) d/dt, so A ↦ −(T/2)A, which

can increase the spectral norm. To reduce the dependence on T —specifically, to give

an algorithm with complexity close to linear in T —we divide the interval [0, T ] into

subintervals [0, Γ1 ], [Γ1 , Γ2 ], . . . , [Γm−1 , T ] with Γ0 := 0, Γm := T . Each subinterval

[Γh , Γh+1 ] for h ∈ [m]0 is then rescaled onto [−1, 1] with the linear map Kh : [Γh , Γh+1 ] →

[−1, 1] defined by
K_h : t ↦ 1 − 2(t − Γ_h)/(Γ_{h+1} − Γ_h),   (2.17)

which satisfies Kh (Γh ) = 1 and Kh (Γh+1 ) = −1. To solve the overall initial value

problem, we simply solve the differential equations for each successive interval (as encoded

into a single system of linear equations).

Now let τ_h := |Γ_{h+1} − Γ_h| and define

A_h(t) := −(τ_h/2) A(K_h(t)),   (2.18)
x_h(t) := x(K_h(t)),   (2.19)
f_h(t) := −(τ_h/2) f(K_h(t)).   (2.20)
² A(t) is modeled by a sparse matrix oracle O_A that, on input (j, l), gives the location of the l-th nonzero entry in row j, denoted as k, and gives the value A(t)_{j,k}.

Then, for each h ∈ [m]0 , we have the rescaled differential equations

dx_h/dt = A_h(t) x_h(t) + f_h(t)   (2.21)

for t ∈ [−1, 1] with the initial conditions

x_h(1) = γ for h = 0, and x_h(1) = x_{h−1}(−1) for h ∈ [m].   (2.22)

By taking

τ_h ≤ 2 / max_{t∈[Γ_h,Γ_{h+1}]} ∥A(t)∥   (2.23)

where ∥·∥ denotes the spectral norm, we can ensure that ∥Ah (t)∥ ≤ 1 for all t ∈ [−1, 1].

In particular, it suffices to take

τ := max_{h∈{0,1,...,m−1}} τ_h ≤ 2 / max_{t∈[0,T]} ∥A(t)∥.   (2.24)
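As a quick numerical sanity check of this choice (with a hypothetical time-independent test generator A), picking m = ⌈T∥A∥/2⌉ subintervals gives τ ≤ 2/∥A∥ and hence a rescaled generator of norm at most 1:

```python
import numpy as np

# Choose the number of subintervals m so that tau = T/m <= 2/||A||,
# making the rescaled generator A_h = -(tau/2) A satisfy ||A_h|| <= 1.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))        # hypothetical test generator
T = 50.0

normA = np.linalg.norm(A, 2)           # spectral norm
m = int(np.ceil(T * normA / 2.0))
tau = T / m
A_h = -(tau / 2.0) * A
assert np.linalg.norm(A_h, 2) <= 1.0 + 1e-12
print(m)
```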

Having rescaled the equations to use the domain [−1, 1], we now apply the Chebyshev

pseudospectral method. Following Section 2.2, we substitute the truncated Chebyshev series of x(t) into the differential equations with interpolating nodes {t_l = cos(lπ/n) : l ∈ [n]}, giving the linear system

dx(t_l)/dt = A_h(t_l) x(t_l) + f_h(t_l),   h ∈ [m]_0, l ∈ [n + 1]   (2.25)

with initial condition

x(t0 ) = γ. (2.26)

Note that in the following, terms with l = 0 refer to this initial condition.

We now describe a linear system

L|X⟩ = |B⟩ (2.27)

that encodes the Chebyshev pseudospectral approximation and uses it to produce an

approximation of the solution at time T .

The vector |X⟩ ∈ C^{m+p+1} ⊗ C^d ⊗ C^{n+1} represents the solution in the form

|X⟩ = ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} ∑_{l=0}^{n} c_{i,l}(Γ_{h+1}) |hil⟩ + ∑_{h=m}^{m+p} ∑_{i=0}^{d−1} ∑_{l=0}^{n} x_i |hil⟩   (2.28)

where ci,l (Γh+1 ) are the Chebyshev series coefficients of x(Γh+1 ) and xi := x(Γm )i is the

ith component of the final state x(Γm ).

The right-hand-side vector |B⟩ represents the input terms in the form

|B⟩ = ∑_{h=0}^{m−1} |h⟩|B(f_h)⟩   (2.29)

where

|B(f_h)⟩ = ∑_{i=0}^{d−1} γ_i |i0⟩ + ∑_{i=0}^{d−1} ∑_{l=1}^{n} f_h(cos(lπ/n))_i |il⟩,   h ∈ [m − 1].   (2.30)

Here γ is the initial condition and f_h(cos(lπ/n))_i is the ith component of f_h at the interpolation point t_l = cos(lπ/n).

We decompose the matrix L in the form

L = ∑_{h=0}^{m−1} |h⟩⟨h| ⊗ (L_1 + L_2(A_h)) + ∑_{h=1}^{m} |h⟩⟨h−1| ⊗ L_3 + ∑_{h=m}^{m+p} |h⟩⟨h| ⊗ L_4 + ∑_{h=m+1}^{m+p} |h⟩⟨h−1| ⊗ L_5.   (2.31)

We now describe each of the matrices Li for i ∈ [5] in turn.

The matrix L_1 is a discrete representation of dx/dt, satisfying

|h⟩⟨h| ⊗ L_1 |X⟩ = ∑_{i=0}^{d−1} ∑_{k=0}^{n} T_k(t_0) c_{i,k} |hi0⟩ + ∑_{i=0}^{d−1} ∑_{l=1}^{n} ∑_{k,r=0}^{n} T_k(t_l) [D_n]_{kr} c_{i,r} |hil⟩   (2.32)

(recall from (2.8) and (2.10) that D_n encodes the action of the time derivative on a Chebyshev expansion). Thus L_1 has the form

L_1 = ∑_{i=0}^{d−1} ∑_{k=0}^{n} T_k(t_0) |i0⟩⟨ik| + ∑_{i=0}^{d−1} ∑_{l=1}^{n} ∑_{k,r=0}^{n} cos(klπ/n) [D_n]_{kr} |il⟩⟨ir|   (2.33)
    = I_d ⊗ (|0⟩⟨0| P_n + ∑_{l=1}^{n} |l⟩⟨l| P_n D_n)   (2.34)

where the interpolation matrix is a discrete cosine transform matrix:

P_n := ∑_{l,k=0}^{n} cos(klπ/n) |l⟩⟨k|.   (2.35)

The matrix L_2(A_h) discretizes A_h(t), i.e.,

|h⟩⟨h| ⊗ L_2(A_h) |X⟩ = − ∑_{i,j=0}^{d−1} ∑_{l=1}^{n} ∑_{k=0}^{n} A_h(t_l)_{ij} T_k(t_l) c_{j,k} |hil⟩.   (2.36)

Thus

L_2(A_h) = − ∑_{i,j=0}^{d−1} ∑_{l=1}^{n} ∑_{k=0}^{n} A_h(t_l)_{ij} cos(klπ/n) |il⟩⟨jk|   (2.37)
         = − ∑_{l=1}^{n} A_h(t_l) ⊗ |l⟩⟨l| P_n.   (2.38)

Note that if Ah is time-independent, then

L2 (Ah ) = −Ah ⊗ Pn . (2.39)

The matrix L_3 combines the Chebyshev series coefficients c_{i,l} to produce x_i for each i ∈ [d]_0. To express the final state x(−1), L_3 represents the linear combination x_i(−1) = ∑_{k=0}^{n} c_{i,k} T_k(−1) = ∑_{k=0}^{n} (−1)^k c_{i,k}. Thus we take

L_3 = ∑_{i=0}^{d−1} ∑_{k=0}^{n} (−1)^k |i0⟩⟨ik|.   (2.40)

Notice that L3 has zero rows for l ∈ [n].

When h = m, L4 is used to construct xi from the output of L3 for l = 0, and to

repeat xi n times for l ∈ [n]. When m+1 ≤ h ≤ m+p, both L4 and L5 are used to repeat

xi (n + 1)p times for l ∈ [n]. This repetition serves to increase the success probability of

the final measurement. In particular, we take

L_4 = − ∑_{i=0}^{d−1} ∑_{l=1}^{n} |i, l⟩⟨i, l−1| + ∑_{i=0}^{d−1} ∑_{l=0}^{n} |i, l⟩⟨i, l|   (2.41)

and

L_5 = − ∑_{i=0}^{d−1} |i0⟩⟨in|.   (2.42)

In summary, the linear system is as follows. For each h ∈ [m]0 , (L1 +L2 (Ah ))|X⟩ =

|Bh ⟩ solves the differential equations over [Γh , Γh+1 ], and the coefficients ci,l (Γh+1 ) are

combined by L3 into the (h + 1)st block as initial conditions. When h = m, the final

coefficients ci,l (Γm ) are combined by L3 and L4 into the final state with coefficients xi ,

and this solution is repeated (p + 1)(n + 1) times by L4 and L5 .
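The role of L_3 can be checked directly in a few lines: applied to a block of Chebyshev coefficients, it deposits x_i(−1) = ∑_k (−1)^k c_{i,k} in the l = 0 slot of each component, which the next time block then reads as its initial condition. The dimensions below are hypothetical.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Dense sketch of L3 from (2.40) acting on coefficients ordered as |i>|l>.
d, n = 3, 6
L3 = np.zeros((d * (n + 1), d * (n + 1)))
for i in range(d):
    for k in range(n + 1):
        L3[i * (n + 1), i * (n + 1) + k] = (-1.0) ** k   # entries |i,0><i,k|

rng = np.random.default_rng(1)
c = rng.standard_normal(d * (n + 1))
out = L3 @ c
for i in range(d):
    ci = c[i * (n + 1):(i + 1) * (n + 1)]
    # T_k(-1) = (-1)^k, so the l = 0 slot equals the series value at t = -1.
    assert np.isclose(out[i * (n + 1)], C.chebval(-1.0, ci))
```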

2.4 Solution error

In this section, we bound how well the solution of the linear system defined above

approximates the actual solution of the system of differential equations.

Lemma 2.3. For the linear system L|X⟩ = |B⟩ defined in (2.27), let x be the approximate

ODE solution specified by the linear system and let x̂ be the exact ODE solution. Then

for n sufficiently large, the error in the solution at time T satisfies

∥x̂(T) − x(T)∥ ≤ m max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ e^{n+1}/(2n)^n.   (2.43)

Proof. First we carefully choose n satisfying

n ≥ ⌈(e/2) log(ω) / log(log(ω))⌉   (2.44)

where

ω := (m + 1) max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ / ∥γ∥   (2.45)

to ensure that

max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n ≤ 1/(m + 1).   (2.46)

According to the quantum spectral method defined in Section 2.3, we solve

dx/dt = A_h(t) x(t) + f_h(t),   h ∈ [m]_0.   (2.47)

We denote the exact solution by x̂(Γ_{h+1}), and we let x(Γ_{h+1}) = ∑_{i=0}^{d−1} ∑_{l=0}^{n} (−1)^l c_{i,l}(Γ_{h+1}) |i⟩,

where c_{i,l}(Γ_{h+1}) is defined in (2.28). Define

∆h+1 := ∥x̂(Γh+1 ) − x(Γh+1 )∥. (2.48)

For h = 0, Lemma 2.2 implies

∆_1 = ∥x̂(Γ_1) − x(Γ_1)∥ ≤ max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n.   (2.49)

For h ∈ [m], the error in the approximate solution of dx/dt = A_h(t)x(t) + f_h(t) has two contributions: the error from the linear system and the error in the initial condition. We let x̃(Γ_{h+1}) denote the solution of the linear system (L_1 + L_2(A_h)) |x̃(Γ_{h+1})⟩ = |B(f_h)⟩ under the initial condition x̂(Γ_h). Then

∆_{h+1} ≤ ∥x̂(Γ_{h+1}) − x̃(Γ_{h+1})∥ + ∥x̃(Γ_{h+1}) − x(Γ_{h+1})∥.   (2.50)

The first term can be bounded using Lemma 2.2, giving

∥x̂(Γ_{h+1}) − x̃(Γ_{h+1})∥ ≤ max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n.   (2.51)

The second term comes from the initial error ∆_h, which is transported through the linear system. Let

E_{h+1} = Ê_{h+1} + δ_{h+1}   (2.52)

where E_{h+1} is the solution of the linear system with input ∆_h and Ê_{h+1} is the exact solution of dx/dt = A_{h+1}(t)x(t) + f_{h+1}(t) with initial condition x(Γ_h) = ∆_h. Then by Lemma 2.2,

∥δ_{h+1}∥ = ∥Ê_{h+1} − E_{h+1}∥ ≤ (∆_h/∥γ∥) max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n,   (2.53)

so

∥x̃(Γ_{h+1}) − x(Γ_{h+1})∥ ≤ ∆_h + (∆_h/∥γ∥) max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n.   (2.54)

Thus, we have an inequality recurrence for bounding ∆_h:

∆_{h+1} ≤ (1 + max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n) ∆_h + max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n.   (2.55)

Now we iterate h from 1 to m. Equation (2.46) implies

max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n ≤ 1/(m + 1) ≤ 1/m,   (2.56)

so

(1 + max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n)^{m−1} ≤ (1 + 1/m)^m ≤ e.   (2.57)

Therefore

∆_m ≤ (1 + max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n)^{m−1} ∆_1
      + ∑_{h=1}^{m−1} (1 + max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n)^{h−1} max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n
    ≤ (1 + 1/m)^{m−1} ∆_1 + (m − 1)(1 + 1/m)^{m−1} max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ (e/2n)^n   (2.58)
    ≤ max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ e^{n+1}/(2n)^n + (m − 1) max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ e^{n+1}/(2n)^n
    = m max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ e^{n+1}/(2n)^n,

which shows that the solution error decreases exponentially with n. In other words, the

linear system approximates the solution with error ϵ using n = poly(log(1/ϵ)).

Note that for time-independent differential equations, we can directly estimate ∥x̂^{(n+1)}(t)∥ using

x̂^{(n+1)}(t) = A_h^{n+1} x̂(t) + A_h^n f_h.   (2.59)

Writing A_h = V_h Λ_h V_h^{−1} where Λ_h = diag(λ_0, ..., λ_{d−1}), we have e^{A_h} = V_h e^{Λ_h} V_h^{−1}. Thus the exact solution of the time-independent equation with initial condition x̂(1) = γ is

x̂(t) = e^{A_h(1−t)} γ + (e^{A_h(1−t)} − I) A_h^{−1} f_h
     = V_h e^{Λ_h(1−t)} V_h^{−1} γ + V_h (e^{Λ_h(1−t)} − I) Λ_h^{−1} V_h^{−1} f_h.   (2.60)

Since Re(λ_i) ≤ 0 for all eigenvalues λ_i of A_h for i ∈ [d]_0, we have ∥e^{Λ_h}∥ ≤ 1. Therefore

∥x̂(t)∥ ≤ κ_V (∥γ∥ + 2∥f_h∥).   (2.61)

Furthermore, since max_{h,t} ∥A_h(t)∥ ≤ 1, we have

max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ ≤ max_{t∈[0,T]} (∥x̂(t)∥ + ∥f_h(t)∥)
                             ≤ κ_V (∥γ∥ + 3∥f_h∥)   (2.62)
                             ≤ κ_V (∥γ∥ + 2τ∥f∥).

Thus the solution error satisfies

∥x̂(T) − x(T)∥ ≤ m κ_V (∥γ∥ + 2τ∥f∥) e^{n+1}/(2n)^n.   (2.63)

Note that, although we represent the solution differently, this bound is similar to the

corresponding bound in [53, Theorem 6].

2.5 Condition number

We now analyze the condition number of the linear system.

Lemma 2.4. Consider an instance of the quantum ODE problem as defined in Problem 2.1. For all t ∈ [0, T], assume A(t) can be diagonalized as A(t) = V(t)Λ(t)V^{−1}(t) for some Λ(t) = diag(λ_0(t), ..., λ_{d−1}(t)), with Re(λ_i(t)) ≤ 0 for all i ∈ [d]_0. Let κ_V := max_{t∈[0,T]} κ_V(t) be an upper bound on the condition number of V(t). Then for m, p ∈ Z^+ and n sufficiently large, the condition number of L in the linear system (2.27) satisfies

κ_L ≤ (πm + p + 2)(n + 1)^{3.5} (2κ_V + e∥γ∥).   (2.64)

Proof. We begin by bounding the norms of some operators that appear in the definition

of L. First we consider the l_∞ norm of D_n since this is straightforward to calculate:

∥D_n∥_∞ := max_{1≤i≤n} ∑_{j=0}^{n} |[D_n]_{ij}| = n(n+2)/2 for n even, and (n+1)²/2 − 2 for n odd.   (2.65)

Thus we have the upper bound

∥D_n∥ ≤ √(n+1) ∥D_n∥_∞ ≤ (n+1)^{2.5}/2.   (2.66)
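The closed form (2.65) can be verified numerically by building D_n column by column (column k holds the Chebyshev coefficients of T_k', obtained here from NumPy's chebder) and comparing the largest absolute row sum with the formula:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# D_n maps Chebyshev coefficients of x to those of dx/dt.
def build_Dn(n):
    D = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        e = np.zeros(n + 1); e[k] = 1.0
        dk = C.chebder(e)                 # coefficients of T_k'
        D[:len(dk), k] = dk
    return D

for n in (4, 5, 8, 9):
    max_row_sum = np.max(np.abs(build_Dn(n)).sum(axis=1))
    predicted = n * (n + 2) / 2 if n % 2 == 0 else (n + 1) ** 2 / 2 - 2
    assert max_row_sum == predicted
```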

Next we upper bound the spectral norm of the discrete cosine transform matrix P_n:

∥P_n∥² ≤ max_{0≤l≤n} ∑_{k=0}^{n} cos²(klπ/n) ≤ max_{0≤l≤n} {n + 1} = n + 1.   (2.67)

Therefore

∥P_n∥ ≤ √(n + 1).   (2.68)

Thus we can upper bound the norm of L_1 as

∥L_1∥ ≤ ∥D_n∥ ∥P_n∥ ≤ (n + 1)³/2.   (2.69)

Next we consider the spectral norm of L_2(A_h) for any h ∈ [m]_0. We have

L_2(A_h) = − ∑_{l=1}^{n} A_h(t_l) ⊗ |l⟩⟨l| P_n.   (2.70)

Since the eigenvalues of each A_h(t_l) for l ∈ [n + 1]_0 are all eigenvalues of

∑_{l=0}^{n} A_h(t_l) ⊗ |l⟩⟨l|,   (2.71)

we have

∥∑_{l=1}^{n} A_h(t_l) ⊗ |l⟩⟨l|∥ ≤ ∥∑_{l=0}^{n} A_h(t_l) ⊗ |l⟩⟨l|∥ ≤ max_{t∈[−1,1]} ∥A_h(t)∥ ≤ 1   (2.72)

by (2.23). Therefore

∥L_2(A_h)∥ ≤ ∥P_n∥ ≤ √(n + 1).   (2.73)

By direct calculation, we have

∥L_3∥ = √(n + 1),   (2.74)
∥L_4∥ ≤ 2,   (2.75)
∥L_5∥ = 1.   (2.76)

Thus, for n ≥ 5, we find

∥L∥ ≤ (n + 1)³/2 + √(n + 1) + √(n + 1) + 2 + 1 ≤ (n + 1)³.   (2.77)

Next we upper bound ∥L^{−1}∥. By definition,

∥L^{−1}∥ = sup_{∥|B⟩∥≤1} ∥L^{−1}|B⟩∥.   (2.78)

We express |B⟩ as

|B⟩ = ∑_{h=0}^{m+p} ∑_{l=0}^{n} ∑_{i=0}^{d−1} β_{hil} |hil⟩ = ∑_{h=0}^{m+p} ∑_{l=0}^{n} |b_{hl}⟩   (2.79)

where |b_{hl}⟩ := ∑_{i=0}^{d−1} β_{hil} |hil⟩ satisfies ∥|b_{hl}⟩∥² = ∑_{i=0}^{d−1} |β_{hil}|² ≤ 1. For any fixed h ∈ [m + p + 1]_0 and l ∈ [n + 1]_0, we first upper bound ∥L^{−1}|b_{hl}⟩∥ and use this to upper bound the norm of L^{−1} applied to linear combinations of such vectors.

Recall that the linear system comes from (2.13), which is equivalent to

∑_{k=0}^{n} T_k'(t_r) c_{i,k}(Γ_h) = ∑_{j=0}^{d−1} A_h(t_r)_{ij} ∑_{k=0}^{n} T_k(t_r) c_{j,k}(Γ_h) + f_h(t_r)_i,   i ∈ [d]_0, r ∈ [n + 1]_0.   (2.80)

For fixed h ∈ [m + p + 1]_0 and r ∈ [n + 1]_0, define vectors x_{hr}, x'_{hr} ∈ C^d with

(x_{hr})_i := ∑_{k=0}^{n} T_k(t_r) c_{i,k}(Γ_h),   (x'_{hr})_i := ∑_{k=0}^{n} T_k'(t_r) c_{i,k}(Γ_h)   (2.81)

for i ∈ [d]_0. We claim that x_{hr} = x'_{hr} = 0 for any r ≠ l. Combining only the equations from (2.80) with r ≠ l gives the system

x'_{hr} = A_h(t_r) x_{hr}.   (2.82)

Consider a corresponding system of differential equations

dx̂_{hr}(t)/dt = A_h(t_r) x̂(t) + b   (2.83)

where x̂_{hr}(t) ∈ C^d for all t ∈ [−1, 1]. The solution of this system with b = 0 and initial condition x̂_{hr}(1) = 0 is clearly x̂_{hr}(t) = 0 for all t ∈ [−1, 1]. Then the nth-order truncated Chebyshev approximation of (2.83), which should satisfy the linear system (2.82) by (2.4) and (2.5), is exactly x_{hr}. Using Lemma 2.3 and observing that x̂^{(n+1)}(t) = 0, we have

x_{hr} = x̂_{hr}(t) = 0.   (2.84)

When t = t_l, we let |B⟩ = |b_{hl}⟩ denote the first nonzero vector. Combining only the equations from (2.80) with r = l gives the system

x'_{hl} = A_h(t_l) x_{hl}.   (2.85)

Consider a corresponding system of differential equations

dx̂_{hr}(t)/dt = A_h(t_r) x̂(t) + b,   (2.86)

with γ = b_{h0}, b = 0 for l = 0; or γ = 0, b = b_{hl} for l ∈ [n].

Using the diagonalization A_h(t_l) = V(t_l) Λ_h(t_l) V^{−1}(t_l), we have e^{A_h(t_l)} = V(t_l) e^{Λ_h(t_l)} V^{−1}(t_l). Thus the exact solution of the differential equations (2.83) with r = l and initial condition x̂_{hr}(1) = γ is

x̂_{hr}(t) = e^{A_h(t_l)(1−t)} γ + (e^{A_h(t_l)(1−t)} − I) A_h(t_l)^{−1} b
          = V(t_l) e^{Λ_h(t_l)(1−t)} V^{−1}(t_l) γ + V(t_l) (e^{Λ_h(t_l)(1−t)} − I) Λ_h(t_l)^{−1} V^{−1}(t_l) b.   (2.87)

According to equation (2.46) in the proof of Lemma 2.3, we have

x_{hl} = x̂_{hl}(−1) + δ_{hl}   (2.88)

where

∥δ_{hl}∥ ≤ max_{t∈[0,T]} ∥x̂_{hl}^{(n+1)}(t)∥ e^{n+1}/(2n)^n ≤ e∥γ∥/(m + 1).   (2.89)

Now for h ∈ [m + 1]_0, we take x_{hl} to be the initial condition γ for the next subinterval to obtain x_{(h+1)l}. Using (2.87) and (2.88), starting from γ = b_{h0}, b = 0 for l = 0, we find

x_{ml} = V(t_l) (∏_{j=1}^{m−h+1} e^{2Λ_h(t_l)}) V^{−1}(t_l) γ + ∑_{k=0}^{m−h} V(t_l) (∏_{j=1}^{k} e^{2Λ_h(t_l)}) V^{−1}(t_l) δ_{(m−k)l}.   (2.90)

Since ∥Λ_h(t_l)∥ ≤ ∥Λ∥ ≤ 1 and Λ_h(t_l) = diag(λ_0, ..., λ_{d−1}) with Re(λ_i) ≤ 0 for i ∈ [d]_0, we have ∥e^{2Λ_h(t_l)}∥ ≤ 1. Therefore

∥x_{hl}∥ ≤ ∥x_{ml}∥ ≤ κ_V(t_l)∥b_{hl}∥ + (m − h + 1) κ_V(t_l)∥δ_{hl}∥ ≤ κ_V(t_l) + e∥γ∥ ≤ κ_V + e∥γ∥.   (2.91)

On the other hand, with γ = 0, b = b_{hl} for l ∈ [n], we have

x_{ml} = V(t_l) (∏_{j=1}^{m−h} e^{2Λ_h(t_l)}) (e^{2Λ_h(t_l)} − I) Λ_h(t_l)^{−1} V^{−1}(t_l) b
         + ∑_{k=0}^{m−h} V(t_l) (∏_{j=1}^{k} e^{2Λ_h(t_l)}) V^{−1}(t_l) δ_{(m−k)l},   (2.92)

so

∥x_{hl}∥ ≤ 2κ_V(t_l)∥b_{hl}∥ + (m − h + 1) κ_V(t_l)∥δ_{hl}∥ ≤ 2κ_V(t_l) + e∥γ∥ ≤ 2κ_V + e∥γ∥.   (2.93)

For h ∈ {m, m + 1, ..., m + p}, according to the definition of L_4 and L_5, we similarly have

∥x_{hl}∥ = ∥x_{ml}∥ ≤ 2κ_V + e∥γ∥.   (2.94)

According to (2.87), x̂_{hl}(t) is a monotonic function of t ∈ [−1, 1], which implies

∥x̂_{hl}(t)∥² ≤ max{∥x̂_{hl}(−1)∥², ∥x̂_{hl}(1)∥²} ≤ (2κ_V + e∥γ∥)².   (2.95)

Using the identity

∫_{−1}^{1} dt/√(1 − t²) = π,   (2.96)

we have

∫_{−1}^{1} ∥x̂_{hl}(t)∥² dt/√(1 − t²) ≤ (2κ_V + e∥γ∥)² ∫_{−1}^{1} dt/√(1 − t²) = π (2κ_V + e∥γ∥)².   (2.97)

Consider the Chebyshev expansion of x̂_{hl}(t) as in (2.4):

x̂_{hl}(t) = ∑_{i=0}^{d−1} ∑_{l=0}^{∞} c_{i,l}(Γ_{h+1}) T_l(t).   (2.98)

By the orthogonality of Chebyshev polynomials (as specified in [54, Appendix A]), we have

∫_{−1}^{1} ∥x̂_{hl}(t)∥² dt/√(1 − t²) = ∫_{−1}^{1} ∥∑_{i=0}^{d−1} ∑_{l=0}^{∞} c_{i,l}(Γ_{h+1}) T_l(t)∥² dt/√(1 − t²)
    = ∑_{i=0}^{d−1} ∑_{l=1}^{∞} c²_{i,l}(Γ_{h+1}) + 2 ∑_{i=0}^{d−1} c²_{i,0}(Γ_{h+1}) ≥ ∑_{i=0}^{d−1} ∑_{l=1}^{n} c²_{i,l}(Γ_{h+1}) + 2 ∑_{i=0}^{d−1} c²_{i,0}(Γ_{h+1}).   (2.99)

Using (2.97), this gives

∑_{i=0}^{d−1} ∑_{l=0}^{n} c²_{i,l}(Γ_{h+1}) ≤ ∫_{−1}^{1} ∥x̂_{hl}(t)∥² dt/√(1 − t²) ≤ π (2κ_V + e∥γ∥)².   (2.100)

Now we compute ∥|X⟩∥, summing the contributions from all c_{i,r}(Γ_h) and x_{mr}, and notice that c_{i,r} = 0 and x_{mr} = 0 for all r ≠ l, giving

∥|X⟩∥² = ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} c²_{i,l}(Γ_{h+1}) + (p + 1)∥x_{ml}∥²
       ≤ πm (2κ_V + e∥γ∥)² + (p + 1)(κ_V + e∥γ∥)²   (2.101)
       ≤ (πm + p + 1)(2κ_V + e∥γ∥)².

Finally, considering all h ∈ [m + p + 1]_0 and l ∈ [n + 1]_0, from (2.79) we have

∥|B⟩∥² = ∑_{h=0}^{m+p} ∑_{l=0}^{n} ∥|b_{hl}⟩∥² ≤ 1,   (2.102)

so

∥L^{−1}∥² = sup_{∥|B⟩∥≤1} ∥L^{−1}|B⟩∥² = sup_{∥|B⟩∥≤1} ∑_{h=0}^{m+p} ∑_{l=0}^{n} ∥L^{−1}|b_{hl}⟩∥²
          ≤ (πm + p + 1)(m + p + 1)(n + 1)(2κ_V + e∥γ∥)²   (2.103)
          ≤ (πm + p + 1)²(n + 1)(2κ_V + e∥γ∥)²,

and therefore

∥L^{−1}∥ ≤ (πm + p + 1)(n + 1)^{0.5} (2κ_V + e∥γ∥).   (2.104)

Finally, combining (2.77) and (2.104) gives

κ_L = ∥L∥ ∥L^{−1}∥ ≤ (πm + p + 1)(n + 1)^{3.5} (2κ_V + e∥γ∥)   (2.105)

as claimed.

2.6 Success probability

We now evaluate the success probability of our approach to the quantum ODE

problem.

Lemma 2.5. Consider an instance of the quantum ODE problem as defined in Problem 2.1

with the exact solution x̂(t) for t ∈ [0, T ], and its corresponding linear system (2.27)

with m, p ∈ Z+ and n sufficiently large. When applying the QLSA to this system, the
probability of measuring a state proportional to |x(T)⟩ = ∑_{i=0}^{d−1} x_i |i⟩ is

P_measure ≥ (p + 1)(n + 1) / (πmq² + (p + 1)(n + 1)),   (2.106)
where xi is defined in (2.28), τ is defined in (2.24), and

q := max_{t∈[0,T]} ∥x̂(t)∥ / ∥x(T)∥.   (2.107)

Proof. After solving the linear system (2.27) using the QLSA, we measure the first and

third registers of |X⟩ (as defined in (2.28)). We decompose this state as

|X⟩ = |Xbad ⟩ + |Xgood ⟩, (2.108)

where

|X_bad⟩ = ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} ∑_{l=0}^{n} c_{i,l}(Γ_{h+1}) |hil⟩,   (2.109)

|X_good⟩ = ∑_{h=m}^{m+p} ∑_{i=0}^{d−1} ∑_{l=0}^{n} x_i |hil⟩.   (2.110)

When the first register is observed in some h ∈ {m, m + 1, . . . , m + p} (no matter

what outcome is seen for the third register), we output the second register, which is then

in a normalized state proportional to the final state:

|X_measure⟩ = |x(T)⟩ / ∥|x(T)⟩∥,   (2.111)

with

|x(T)⟩ = ∑_{i=0}^{d−1} x_i |i⟩ = ∑_{i=0}^{d−1} ∑_{k=0}^{n} c_{i,k} T_k(t) |i⟩.   (2.112)

Notice that

∥|x(T)⟩∥² = ∑_{i=0}^{d−1} x_i²   (2.113)

and

∥|X_good⟩∥² = (p + 1)(n + 1) ∑_{i=0}^{d−1} x_i² = (p + 1)(n + 1) ∥|x(T)⟩∥².   (2.114)

Considering the definition of q, the contribution from time interval h under the rescaling (2.17), and the identity (2.96), we have

q² ∥x(T)∥² = max_{t∈[0,T]} ∥x̂(t)∥² = (1/π) ∫_{−1}^{1} (dτ/√(1 − τ²)) max_{t∈[0,T]} ∥x̂(t)∥²
           ≥ (1/π) ∫_{−1}^{1} (dτ/√(1 − τ²)) max_{t∈[Γ_h,Γ_{h+1}]} ∥x̂(t)∥²
           = (1/π) ∫_{−1}^{1} (dτ/√(1 − τ²)) max_{t∈[−1,1]} ∥x̂_h(t)∥²   (2.115)
           ≥ (1/π) ∫_{−1}^{1} ∥x̂_h(t)∥² dt/√(1 − t²),

where x̂_h(t) is the solution of (2.47) with the rescaling in (2.19). By the orthogonality of Chebyshev polynomials (as specified in [54, Appendix A]),

q² ∥x(T)∥² ≥ (1/π) ∫_{−1}^{1} ∥x̂_h(t)∥² dt/√(1 − t²) = (1/π) ∫_{−1}^{1} (∑_{i=0}^{d−1} ∑_{k=0}^{∞} c_{i,k}(Γ_{h+1}) T_k(t))² dt/√(1 − t²)
           = (1/π) (∑_{i=0}^{d−1} ∑_{k=1}^{∞} c²_{i,k}(Γ_{h+1}) + 2 ∑_{i=0}^{d−1} c²_{i,0}(Γ_{h+1})) ≥ (1/π) ∑_{i=0}^{d−1} ∑_{k=0}^{n} c²_{i,k}(Γ_{h+1}).   (2.116)

For all h ∈ [m]_0, we have

m q² ∥x(T)∥² ≥ ∑_{h=0}^{m−1} (1/π) ∑_{i=0}^{d−1} ∑_{k=0}^{n} c²_{i,k}(Γ_{h+1}) = (1/π) ∥|X_bad⟩∥²,   (2.117)

and therefore

∥|X_good⟩∥² = (p + 1)(n + 1) ∥x(T)∥² ≥ ((p + 1)(n + 1)/(πmq²)) ∥|X_bad⟩∥².   (2.118)

Thus we see that the success probability of the measurement satisfies

P_measure ≥ (p + 1)(n + 1) / (πmq² + (p + 1)(n + 1))   (2.119)

as claimed.
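The effect of the padding parameter p on the bound (2.119) is easy to tabulate; with hypothetical values of n, m, and q, increasing p from 0 to a multiple of m drives the lower bound on the success probability toward a constant:

```python
import math

# Lower bound (2.119) on the measurement success probability as a
# function of the number of repeated final blocks p.
n, m, q = 16, 20, 5.0                  # hypothetical parameters
def p_measure(p):
    good = (p + 1) * (n + 1)
    return good / (math.pi * m * q ** 2 + good)

probs = [p_measure(p) for p in (0, m, 5 * m)]
print([round(x, 3) for x in probs])    # increases with p
```

This is why taking p = O(m) suffices to make the success probability Ω(1) up to the amplitude amplification used later.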

2.7 State preparation

We now describe a procedure for preparing the vector |B⟩ in the linear system

(2.27) (defined in (2.29) and (2.30)) using the given ability to prepare the initial state of

the system of differential equations. We also evaluate the complexity of this procedure.

Lemma 2.6. Consider state preparation oracles acting on a state space with basis vectors

|h⟩|i⟩|l⟩ for h ∈ [m]0 , i ∈ [d]0 , l ∈ [n]0 , where m, d, n ∈ N, encoding an initial condition

γ ∈ C^d and function f_h(cos(lπ/n)) ∈ C^d as in (2.30). Specifically, for any h ∈ [m]_0 and l ∈ [n], let O_x be a unitary oracle that maps |0⟩|0⟩|0⟩ to a state proportional to |0⟩|γ⟩|0⟩ and |h⟩|ϕ⟩|l⟩ to |h⟩|ϕ⟩|l⟩ for any |ϕ⟩ orthogonal to |0⟩; let O_f(h, l) be a unitary that maps |h⟩|0⟩|l⟩ to a state proportional to |h⟩|f_h(cos(lπ/n))⟩|l⟩ and maps |0⟩|ϕ⟩|0⟩ to |0⟩|ϕ⟩|0⟩ for any |ϕ⟩ orthogonal to |0⟩. Suppose ∥γ∥ and ∥f_h(cos(lπ/n))∥ are known. Then the normalized quantum state

|B⟩ ∝ |0⟩|γ⟩|0⟩ + ∑_{h=0}^{m−1} ∑_{l=1}^{n} |h⟩|f_h(cos(lπ/n))⟩|l⟩   (2.120)

can be prepared with gate and query complexity O(mn).

Proof. We normalize the components of the state using the coefficients

b_{00} = ∥γ∥ / √(∥γ∥² + ∑_{l=1}^{n} ∥f_h(cos(lπ/n))∥²),
b_{hl} = ∥f_h(cos(lπ/n))∥ / √(∥γ∥² + ∑_{l=1}^{n} ∥f_h(cos(lπ/n))∥²),   h ∈ [m]_0, l ∈ [n]   (2.121)

so that

∑_{h=0}^{m−1} ∑_{l=0}^{n} b²_{hl} = 1.   (2.122)

First we perform a unitary transformation mapping

|0⟩|0⟩|0⟩ ↦ b_{00}|0⟩|0⟩|0⟩ + b_{01}|0⟩|0⟩|1⟩ + ··· + b_{(m−1)n}|m − 1⟩|0⟩|n⟩.   (2.123)

This can be done in time complexity O(mn) by standard techniques [92]. Then we perform O_x and O_f(h, l) for all h ∈ [m]_0, l ∈ [n], giving

|0⟩|γ⟩|0⟩ + ∑_{h=0}^{m−1} ∑_{l=1}^{n} |h⟩|f_h(cos(lπ/n))⟩|l⟩   (2.124)

using O(mn) queries.
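A classical sketch of this amplitude table: collecting ∥γ∥ and the node norms ∥f_h(cos(lπ/n))∥ and normalizing gives the rotation data for the O(mn)-gate preparation circuit. Note that, for the whole state to be normalized as in (2.122), the table below is normalized over all (h, l) jointly; γ and f are hypothetical random data.

```python
import numpy as np

# Amplitude table in the spirit of (2.121): slot (0, 0) holds ||gamma||
# and slot (h, l) for l >= 1 holds ||f_h(cos(l*pi/n))||.
m, n, d = 3, 4, 2
rng = np.random.default_rng(3)
gamma = rng.standard_normal(d)
f = rng.standard_normal((m, n, d))     # f[h, l-1] ~ f_h at node t_l

table = np.zeros((m, n + 1))
table[0, 0] = np.linalg.norm(gamma)
table[:, 1:] = np.linalg.norm(f, axis=2)
b = table / np.linalg.norm(table)      # amplitudes b_{hl}
assert np.isclose(np.sum(b ** 2), 1.0)     # matches (2.122)
```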

2.8 Main result

Having analyzed the solution error, condition number, success probability, and state

preparation procedure for our approach, we are now ready to establish the main result.

Theorem 2.1. Consider an instance of the quantum ODE problem as defined in Problem 2.1. Assume A(t) can be diagonalized in the form A(t) = V(t)Λ(t)V^{−1}(t) where Λ(t) = diag(λ_0(t), ..., λ_{d−1}(t)) with Re(λ_i(t)) ≤ 0 for each i ∈ [d]_0 and t ∈ [0, T]. Then there exists a quantum algorithm that produces a state x(T)/∥x(T)∥ ϵ-close to x̂(T)/∥x̂(T)∥ in l_2 norm, succeeding with probability Ω(1), with a flag indicating success, using

O(κ_V s∥A∥T q poly(log(κ_V s∥A∥g′T/ϵg)))   (2.125)

queries to oracles O_A(h, l) (a sparse matrix oracle for A_h(t_l) as defined in (2.18)) and O_x and O_f(h, l) (as defined in Lemma 2.6). Here ∥A∥ := max_{t∈[0,T]} ∥A(t)∥; κ_V := max_t κ_V(t), where κ_V(t) is the condition number of V(t); and

g := ∥x̂(T)∥,   g′ := max_{n∈N} max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥,   q := max_{t∈[0,T]} ∥x̂(t)∥/∥x(T)∥.   (2.126)

The gate complexity is larger than the query complexity by a factor of poly(log(κ_V ds∥A∥g′T/ϵ)).

Proof. We first present the algorithm and then analyze its complexity.

Statement of the algorithm. First, we choose m to guarantee

∥A∥T/2m ≤ 1.   (2.127)

Then, as in Section 2.3, we divide the interval [0, T] into small subintervals [0, Γ_1], [Γ_1, Γ_2], ..., [Γ_{m−1}, T] with Γ_0 = 0, Γ_m = T, and define

τ := max_{0≤h≤m−1} {τ_h},   τ_h := |Γ_{h+1} − Γ_h| = T/m.   (2.128)

Each subinterval [Γ_h, Γ_{h+1}] for h ∈ [m − 1] is mapped onto [−1, 1] with a linear mapping K_h satisfying K_h(Γ_h) = 1, K_h(Γ_{h+1}) = −1:

K_h : t ↦ 1 − 2(t − Γ_h)/(Γ_{h+1} − Γ_h).   (2.129)

We choose

n = ⌈(e/2) max{log(Ω)/log(log(Ω)), log(ω)/log(log(ω))}⌉   (2.130)

where

Ω := g′em/δ = g′em(1 + ϵ)/(gϵ)   (2.131)

and

ω := (g′/∥γ∥)(m + 1).   (2.132)

Since max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ ≤ g′, by Lemma 2.3, this choice guarantees

∥x̂(T) − x(T)∥ ≤ m max_{t∈[0,T]} ∥x̂^{(n+1)}(t)∥ e^{n+1}/(2n)^n ≤ δ   (2.133)

and

max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n ≤ 1/(m + 1).   (2.134)
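To see how this prescription behaves numerically, the sketch below (with hypothetical m, g′, δ) finds the smallest n for which the error factor in (2.133) drops below δ and compares it with the log Ω / log log Ω formula; the two agree up to a modest constant factor.

```python
import math

# Error factor from (2.133) and the asymptotic choice of n from (2.130).
m, gp, delta = 10, 1e3, 1e-10          # hypothetical values
err = lambda n: m * gp * math.e ** (n + 1) / (2 * n) ** n

n_star = next(n for n in range(2, 200) if err(n) <= delta)
Omega = gp * math.e * m / delta
n_formula = (math.e / 2) * math.log(Omega) / math.log(math.log(Omega))
print(n_star, round(n_formula, 1))
```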

Now ∥x̂(T) − x(T)∥ ≤ δ implies

∥x̂(T)/∥x̂(T)∥ − x(T)/∥x(T)∥∥ ≤ δ / min{∥x̂(T)∥, ∥x(T)∥} ≤ δ/(g − δ) =: ϵ,   (2.135)

so we can choose such n to ensure that the normalized output state is ϵ-close to x̂(T )/∥x̂(T )∥.

Following Section 2.3, we build the linear system L|X⟩ = |B⟩ (see (2.27)) that encodes the quantum spectral method. By Lemma 2.4, the condition number of this linear system is at most (πm + p + 1)(n + 1)^{3.5}(2κ_V + e∥γ∥). Then we use the QLSA from reference [46] to obtain a normalized state |X⟩ and measure the first and third register of |X⟩ in the standard basis. If the measurement outcome for the first register belongs to

S = {m, m + 1, ..., m + p},   (2.136)

we output the state of the second register, which is a normalized state |x(T)⟩/∥|x(T)⟩∥ satisfying (2.135). By Lemma 2.5, the probability of this event happening is at least (p + 1)(n + 1)/(πmq² + (p + 1)(n + 1)). To ensure m + p = O(∥A∥T), we can choose

p = O(m) = O(∥A∥T),   (2.137)

so we can achieve success probability Ω(1) with O(q/√n) repetitions of the above procedure.

Analysis of the complexity. The matrix L is an (m + p + 1)d(n + 1) × (m + p + 1)d(n + 1) matrix with O(ns) nonzero entries in any row or column. By Lemma 2.4 and our choice of parameters, the condition number of L is O(κ_V(m + p)n^{3.5}). Consequently, by Theorem 5 of [46], the QLSA produces the state |x(T)⟩ with

O(κ_V(m + p)n^{4.5} s poly(log(κ_V mns/δ))) = O(κ_V s∥A∥T poly(log(κ_V s∥A∥g′T/ϵg)))   (2.138)

queries to the oracles O_A(h, l), O_x, and O_f(h, l), and its gate complexity is larger by a factor of poly(log(κ_V mnds/δ)). Using O(q/√n) steps of amplitude amplification to achieve success probability Ω(1), the overall query complexity of our algorithm is

O(κ_V(m + p)n⁴ sq poly(log(κ_V mns/δ))) = O(κ_V s∥A∥T q poly(log(κ_V s∥A∥g′T/ϵg))),   (2.139)

and the gate complexity is larger by a factor of

poly(log(κV ds∥A∥g ′ T /ϵg)) (2.140)

as claimed.

In general, g ′ could be unbounded above as n → ∞. However, we could obtain a

useful bound in such a case by solving the implicit equations (2.133) and (2.134).

Note that for time-independent differential equations, we can replace g ′ by ∥γ∥ +

2τ ∥f ∥ as shown in (2.62). In place of (2.131) and (2.132), we choose

Ω := (∥γ∥ + 2τ∥f∥)emκ_V/δ = (∥γ∥ + 2τ∥f∥)emκ_V(1 + ϵ)/(gϵ)   (2.141)

and

ω := ((∥γ∥ + 2τ∥f∥)/∥γ∥)(m + 1)κ_V.   (2.142)

By Lemma 2.3, this choice guarantees

∥x̂(T) − x(T)∥ ≤ max_{t∈[−1,1]} ∥x̂(t) − x(t)∥ ≤ m κ_V(∥γ∥ + 2τ∥f∥) e^{n+1}/(2n)^n ≤ δ   (2.143)

and

max_{t∈[0,T]} (∥x̂^{(n+1)}(t)∥/∥γ∥) (e/2n)^n ≤ (κ_V(∥γ∥ + 2τ∥f∥)/∥γ∥) (e/2n)^n ≤ 1/(m + 1).   (2.144)

Thus we have the following:

Corollary 2.1. For time-independent differential equations, under the same assumptions

of Theorem 2.1, there exists a quantum algorithm using


O(κ_V s∥A∥T q poly(log(κ_V sγ∥A∥∥f∥T/ϵg)))   (2.145)

queries to OA (h, l), Ox , and Of (h, l). The gate complexity of this algorithm is larger than

its query complexity by a factor of poly(log(κV dsγ∥A∥∥f ∥T /ϵ)).

The complexity of our algorithm depends on the parameter q defined in (2.126),

which characterizes the decay of the final state relative to the initial state. As discussed

in Section 8 of [53], it is unlikely that the dependence on q can be significantly improved,

since renormalization of the state effectively implements postselection and an efficient

procedure for performing this would have the unlikely consequence BQP = PP.

We also require the real parts of the eigenvalues of A(t) to be non-positive for all t ∈

[0, T ] so that the solution cannot grow exponentially. This requirement is essentially the

same as in the time-independent case considered in [53] and improves upon the analogous

condition in [52] (which requires an additional stability condition). Also as in [53], our

algorithm can produce approximate solutions for non-diagonalizable A(t), although the

dependence on ϵ degrades to poly(1/ϵ). For further discussion of these considerations,

see Sections 1 and 8 of [53].

2.9 Boundary value problems

So far we have focused on initial value problems (IVPs). Boundary value problems

(BVPs) are another widely studied class of differential equations that appear in many

applications, but that can be harder to solve than IVPs.

Consider a sparse, linear, time-dependent system of differential equations as in

Problem 2.1 but with a constraint on some linear combination of the initial and final

states:

Problem 2.2. In the quantum BVP, we are given a system of equations

dx(t)/dt = A(t)x(t) + f(t),   (2.146)

where x(t) ∈ Cd , A(t) ∈ Cd×d is s-sparse, and f (t) ∈ Cd for all t ∈ [0, T ], and a

boundary condition αx(0) + βx(T ) = γ with α, β, γ ∈ Cd . Suppose there exists a unique

solution x̂ ∈ C ∞ (0, T ) of this boundary value problem. Given oracles that compute the

locations and values of nonzero entries of A(t) for any t, and that prepare quantum states

α|x(0)⟩+β|x(T )⟩ = |γ⟩ and |f (t)⟩ for any t, the goal is to output a quantum state |x(t∗ )⟩

that is proportional to x(t∗ ) for some specified t∗ ∈ [0, T ].

As before, we can rescale [0, T ] onto [−1, 1] by a linear mapping. However, since

we have boundary conditions at t = 0 and t = T , we cannot divide [0, T ] into small

subintervals. Instead, we directly map [0, T ] onto [−1, 1] with a linear map K satisfying

K(0) = 1 and K(T ) = −1:


K : t ↦ 1 − 2t/T.   (2.147)

Now the new differential equations are

dx/dt = −(T/2)(A(t)x + f(t)).   (2.148)

If we define A_K(t) := −(T/2)A(t) and f_K(t) = −(T/2)f(t), we have

dx/dt = A_K(t)x(t) + f_K(t)   (2.149)

for t ∈ [−1, 1]. Now the boundary condition takes the form

αx(1) + βx(−1) = γ. (2.150)

Since we only have one solution interval, we need to choose a larger order n of the

n = ∥A∥T ⌈(e/2) max{log(Ω)/log(log(Ω)), log(ω)/log(log(ω))}⌉   (2.151)

where Ω and ω are the same as in Theorem 2.1.

As in Section 2.3, we approximate x(t) by a finite Chebyshev series with interpolating nodes {t_l = cos(lπ/n) : l ∈ [n]} and thereby obtain a linear system

dx(t_l)/dt = A_K(t_l)x(t_l) + f(t_l),   l ∈ [n]   (2.152)

with the boundary condition

αx(t0 ) + βx(tn ) = γ. (2.153)

Observe that the linear equations have the same form as in (2.25). Instead of (2.26),

the term with l = 0 encodes the condition (2.153) expanded in a Chebyshev series, namely

∑_{k=0}^{n} α_i c_{i,k} T_k(t_0) + ∑_{k=0}^{n} β_i c_{i,k} T_k(t_n) = γ_i   (2.154)

for each i ∈ [d]_0. Since T_k(t_0) = 1 and T_k(t_n) = (−1)^k, this can be simplified as

∑_{k=0}^{n} (α_i + (−1)^k β_i) c_{i,k} = γ_i.   (2.155)

If α_i + (−1)^k β_i = 0, the coefficient of |il⟩⟨ik| in L_2(A_K) is zero; if α_i + (−1)^k β_i ≠ 0, without loss of generality, both sides of this equality can be divided by α_i + (−1)^k β_i to guarantee that the terms with l = 0 can be encoded as in (2.26).
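As with the initial value problem, the boundary-value collocation system (2.152)–(2.155) can be sketched classically for a scalar test equation: the row l = 0 carries the combined boundary condition, and the remaining rows collocate the ODE. The parameters a, α, β, γ, and n below are hypothetical.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Solve dx/dt = a*x on [-1, 1] with alpha*x(1) + beta*x(-1) = gamma.
a, alpha, beta, gamma, n = -0.7, 1.0, 2.0, 3.0, 16
nodes = np.cos(np.arange(n + 1) * np.pi / n)

M = np.zeros((n + 1, n + 1))
rhs = np.zeros(n + 1)
for k in range(n + 1):
    e = np.zeros(n + 1); e[k] = 1.0
    M[1:, k] = C.chebval(nodes[1:], C.chebder(e)) - a * C.chebval(nodes[1:], e)
    M[0, k] = alpha + (-1.0) ** k * beta   # row (2.155): alpha*T_k(1) + beta*T_k(-1)
rhs[0] = gamma

coef = np.linalg.solve(M, rhs)
# Exact solution x(t) = s*exp(a*t) with s fixed by the boundary condition.
s = gamma / (alpha * np.exp(a) + beta * np.exp(-a))
t = np.linspace(-1.0, 1.0, 101)
assert np.max(np.abs(C.chebval(t, coef) - s * np.exp(a * t))) < 1e-8
```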

Now this system can be written in the form of equation (2.27) with m = 1. Here

L, |X⟩, and |B⟩ are the same as in (2.31), (2.28), and (2.29), respectively, with m = 1,

except for adjustments to L3 that we now describe.


The matrix L_3 represents the linear combination x_i(t*) = ∑_{k=0}^{n} c_{i,k} T_k(t*). Thus we take

L_3 = ∑_{i=0}^{d−1} ∑_{k=0}^{n} T_k(t*) |i0⟩⟨ik|.   (2.156)

Since |Tk (t∗ )| ≤ 1, we have

∥L3 ∥ ≤ n + 1, (2.157)

and it follows that Lemma 2.3 also holds for boundary value problems. Similarly, Lemma 2.4

still holds with m = 1.

We are now ready to analyze the complexity of the quantum BVP algorithm. The

matrix L defined above is a (p + 2)d(n + 1) × (p + 2)d(n + 1) matrix with O(ns) nonzero

entries in any row or column, with condition number O(κ_V pn^{3.5}). By Lemma 2.5 with

p = O(1), O(q/√n) repetitions suffice to ensure success probability Ω(1). By (2.151), n

is linear in ∥A∥T and poly-logarithmic in Ω and ω. Therefore, we have the following:

Theorem 2.2. Consider an instance of the quantum BVP as defined in Problem 2.2.

Assume A(t) can be diagonalized in the form A(t) = V(t)Λ(t)V^{−1}(t) where Λ(t) = diag(λ_0(t), ..., λ_{d−1}(t)) with Re(λ_i(t)) ≤ 0 for each i ∈ [d]_0 and t ∈ [0, T]. Then there exists a quantum algorithm that produces a state x(t*)/∥x(t*)∥ ϵ-close to x̂(t*)/∥x̂(t*)∥ in l_2 norm, succeeding with probability Ω(1), with a flag indicating success, using

O(κ_V s∥A∥⁴T⁴ q poly(log(κ_V s∥A∥g′T/ϵg)))   (2.158)

queries to OA (h, l), Ox , and Of (h, l). Here ∥A∥, κV , g, g ′ and q are defined as in

Theorem 2.1. The gate complexity is larger than the query complexity by a factor of

poly(log(κV ds∥A∥g ′ T /ϵ)).

As for initial value problems, we can simplify this result in the time-independent

case.

Corollary 2.2. For a time-independent boundary value problem, under the same assumptions as Theorem 2.2, there exists a quantum algorithm using

$$O\Big(\kappa_V\, s\,\|A\|^4\, T^4\, q\,\operatorname{poly}\big(\log(\kappa_V s\gamma\|A\|\|f\|T/\epsilon g)\big)\Big) \qquad (2.159)$$

queries to OA (h, l), Ox , and Of (h, l). The gate complexity of this algorithm is larger than

its query complexity by a factor of poly(log(κV dsγ∥A∥∥f ∥T /ϵ)).

2.10 Discussion

In this chapter, we presented a quantum algorithm to solve linear, time-dependent

ordinary differential equations. Specifically, we showed how to employ a global approximation

based on the spectral method as an alternative to the more straightforward finite difference

method. Our algorithm handles time-independent differential equations with almost the

same complexity as [53], but unlike that approach, can also handle time-dependent differential equations. Compared to [52], our algorithm improves the complexity of solving time-dependent linear differential equations from poly(1/ϵ) to poly(log(1/ϵ)).

This work raises several natural open problems. First, our algorithm must assume that the solution is smooth. If the solution is in C^r, the solution error is O(1/n^{r−2}) by Lemma 2.1. Can we improve the complexity to poly(log(1/ϵ)) under such weaker smoothness assumptions?

Second, the complexity of our algorithm is logarithmic in the parameter g ′ defined

in (2.126), which characterizes the amount of fluctuation in the solution. However,

the query complexity of Hamiltonian simulation is independent of that parameter [34,

35]. Can we develop quantum algorithms for general differential equations with query

complexity independent of g ′ ?

Third, our algorithm has nearly optimal dependence on T , scaling as O(T poly(log T )).

According to the no-fast-forwarding theorem [33], the complexity must be at least linear

in T , and indeed linear complexity is achievable for the case of Hamiltonian simulation

[93]. Can we handle general differential equations with complexity linear in T ? Furthermore,

can we achieve an optimal tradeoff between T and ϵ as shown for Hamiltonian simulation

in [42]?

Chapter 3: High-precision quantum algorithms for linear elliptic partial

differential equations

3.1 Introduction

In this chapter, we study high-precision quantum algorithms for linear elliptic partial

differential equations (PDEs).¹ As introduced earlier in Problem 1.4, the problem we

address can be stated as follows: Given a linear PDE with boundary conditions and an

error parameter ϵ, output a quantum state that is ϵ-close to one whose amplitudes are

proportional to the solution of the PDE at a set of grid points in the domain of the PDE.

We focus on elliptic PDEs, and we assume a technical condition that we call global strict

diagonal dominance (defined in (3.8)).

Our first algorithm is based on a quantum version of the FDM approach: we use

a finite-difference approximation to produce a system of linear equations and then solve

that system using the QLSA. We analyze our FDM algorithm as applied to Poisson’s

equation (which automatically satisfies global strict diagonal dominance) under periodic,

Dirichlet, and Neumann boundary conditions. Whereas previous FDM approaches [69,

70] considered fixed orders of truncation, we adapt the order of truncation depending on

ϵ, inspired by the classical adaptive FDM [71]. As the order increases, the eigenvalues
¹ This chapter is based on the paper [59].

of the FDM matrix approach the eigenvalues of the continuous Laplacian, allowing for

more precise approximations. The main algorithm we present uses the quantum Fourier

transform (QFT) and takes advantage of the high-precision LCU-based QLSA [46]. We

first consider periodic boundary conditions, but by restricting to appropriate subspaces,

this approach can also be applied to homogeneous Dirichlet and Neumann boundary

conditions. We state our result in Theorem 3.1, which (informally) says that this quantum

adaptive FDM approach produces a quantum state approximating the solution of Poisson’s

equation with complexity d^{6.5} poly(log d, log(1/ϵ)).

We also propose a quantum algorithm for more general second-order elliptic PDEs

under periodic or non-periodic Dirichlet boundary conditions. This algorithm is based on

quantum spectral methods [54]. The spectral method globally approximates the solution

of a PDE by a truncated Fourier or Chebyshev series (which converges exponentially

for smooth functions) with undetermined coefficients, and then finds the coefficients by

solving a linear system. This system is exponentially large in d, so solving it is infeasible

for classical algorithms but feasible in a quantum context. To be able to apply the QLSA

efficiently, we show how to make the system sparse using variants of the quantum Fourier

transform. Our bound on the condition number of the linear system uses global strict

diagonal dominance, and introduces a factor in the complexity that measures the extent

to which this condition holds. We state our result in Theorem 3.2, which (informally)

gives a complexity of d^2 poly(log(1/ϵ)) for producing a quantum state approximating the

solution of general second-order elliptic PDEs with Dirichlet boundary conditions.

Both of these approaches have complexity poly(d, log(1/ϵ)), providing optimal

dependence on ϵ and an exponential improvement over classical methods as a function of

the spatial dimension d. Bounding the complexities of these algorithms requires analyzing

how d and ϵ affect the condition numbers of the relevant linear systems (finite difference

matrices and matrices relating the spectral coefficients) and accounting for errors in the

approximate solution provided by the QLSA. Furthermore, the complexities of both approaches

scale logarithmically with high-order derivatives of the solution and the inhomogeneity.

The detailed complexity dependence is presented in Theorem 3.1 and Theorem 3.2, and

is further discussed in Section 3.5.

Table 3.1 compares the performance of our approaches to other classical and quantum

algorithms for PDEs. Compared to classical algorithms, quantum algorithms improve the

dependence on spatial dimension from exponential to polynomial (with the significant

caveat that they produce a different representation of the solution). Compared to previous

quantum FDM/FEM/FVM algorithms [57, 69, 70, 94], the quantum adaptive FDM and

quantum spectral method improve the error dependence from poly(1/ϵ) to poly(log(1/ϵ)).

Our approaches achieve the best known dependence on the parameter ϵ for the Poisson

equation with homogeneous boundary conditions. Furthermore, our quantum spectral

method approach not only achieves the best known dependence on d and ϵ for elliptic

PDEs with inhomogeneous Dirichlet boundary conditions, but also improves the dependence

on d for the Poisson equation with inhomogeneous Dirichlet boundary conditions, as

compared to previous quantum algorithms.

The remainder of the chapter is structured as follows. Section 3.2 introduces technical

details about linear PDEs and formally states the problem we solve. Section 3.3 covers

our FDM algorithm for Poisson’s equation. Section 3.4 details the spectral algorithm for

elliptic PDEs. Finally, Section 3.5 concludes with a brief discussion of the results, their possible applications, and some open problems.

Classical algorithms:
| Algorithm | Equation | Boundary conditions | Complexity |
| FDM/FEM/FVM | general | general | poly((1/ϵ)^d) |
| Adaptive FDM/FEM [71] | general | general | poly((log(1/ϵ))^d) |
| Spectral method [67, 68] | general | general | poly((log(1/ϵ))^d) |
| Sparse grid FDM/FEM [95, 96] | general | general | poly((1/ϵ)(log(1/ϵ))^d) |
| Sparse grid spectral method [97, 98] | elliptic | general | poly(log(1/ϵ)(log log(1/ϵ))^d) |

Quantum algorithms:
| Algorithm | Equation | Boundary conditions | Complexity |
| FEM [57] | Poisson | homogeneous | poly(d, 1/ϵ) |
| FDM [69] | Poisson | homogeneous Dirichlet | d poly(log d, 1/ϵ) |
| FDM [70] | wave | homogeneous | d^{5/2} poly(1/ϵ) |
| FVM [94] | hyperbolic | periodic | d poly(1/ϵ) |
| Adaptive FDM [59] | Poisson | periodic, homogeneous | d^{13/2} poly(log d, log(1/ϵ)) |
| Spectral method [59] | Poisson | homogeneous Dirichlet | d poly(log d, log(1/ϵ)) |
| Spectral method [59] | elliptic | inhomogeneous Dirichlet | d^2 poly(log(1/ϵ)) |

Table 3.1: Summary of the time complexities of classical and quantum algorithms for d-dimensional PDEs with error tolerance ϵ. Portions of the complexity in bold represent the best known dependence on that parameter.

3.2 Linear PDEs

In this chapter, we focus on systems of linear PDEs. Such equations can be written

in the form

L (u(x)) = f (x), (3.1)

where the variable x = (x1 , . . . , xd ) ∈ Cd is a d-dimensional vector, the solution u(x) ∈

C and the inhomogeneity f (x) ∈ C are scalar functions, and L is a linear differential

operator acting on u(x). In general, L can be written as a linear combination of u(x) and its derivatives. A linear differential operator L of order h has the form

$$\mathcal{L}(u(x)) = \sum_{\|j\|_1 \le h} A_j(x)\,\frac{\partial^j}{\partial x^j}\,u(x), \qquad (3.2)$$

where j = (j_1, . . . , j_d) is a d-dimensional non-negative vector with ∥j∥_1 = j_1 + · · · + j_d ≤ h, A_j(x) ∈ C, and

$$\frac{\partial^j}{\partial x^j}u(x) = \frac{\partial^{j_1}}{\partial x_1^{j_1}}\cdots\frac{\partial^{j_d}}{\partial x_d^{j_d}}\,u(x). \qquad (3.3)$$

The problem reduces to a system of linear ordinary differential equations (ODEs) when

d = 1. For d ≥ 2, we call (3.1) a (multi-dimensional) PDE.

For example, systems of first-order linear PDEs can be written in the form

$$\sum_{j=1}^{d} A_j(x)\frac{\partial u(x)}{\partial x_j} + A_0(x)u(x) = f(x), \qquad (3.4)$$

where A_j(x), A_0(x), f(x) ∈ C for j ∈ [d] := {1, . . . , d}. Similarly, systems of second-order linear PDEs can be expressed in the form

$$\sum_{j_1,j_2=1}^{d} A_{j_1 j_2}(x)\frac{\partial^2 u(x)}{\partial x_{j_1}\partial x_{j_2}} + \sum_{j=1}^{d} A_j(x)\frac{\partial u(x)}{\partial x_j} + A_0(x)u(x) = f(x), \qquad (3.5)$$

where A_{j_1,j_2}(x), A_j(x), A_0(x), f(x) ∈ C for j_1, j_2, j ∈ [d]. A well-known second-order linear PDE is the Poisson equation

$$\Delta u(x) := \sum_{j=1}^{d}\frac{\partial^2}{\partial x_j^2}\,u(x) = f(x). \qquad (3.6)$$

A linear PDE of order h is called elliptic if its differential operator (3.2) satisfies

$$\sum_{\|j\|_1 = h} A_j(x)\,\xi^j \ne 0 \qquad (3.7)$$

for all nonzero ξ^j = ξ_1^{j_1} · · · ξ_d^{j_d} with ξ_1, . . . , ξ_d ∈ R and all x. Note that ellipticity only depends on the highest-order terms. When h = 2, the linear PDE (3.5) is called a second-order elliptic PDE if and only if A_{j_1 j_2}(x) is positive-definite or negative-definite for any x. In particular, the Poisson equation (3.6) is a second-order elliptic PDE.

We consider a class of elliptic PDEs that also satisfy the condition

$$C := 1 - \sum_{j_1=1}^{d}\frac{1}{|A_{j_1,j_1}(x)|}\sum_{j_2\in[d]\setminus\{j_1\}}|A_{j_1,j_2}(x)| > 0 \qquad (3.8)$$

for all x. We call this condition global strict diagonal dominance, since it is a strengthening of the standard (strict) diagonal dominance condition

$$d - \sum_{j_1=1}^{d}\frac{1}{|A_{j_1,j_1}(x)|}\sum_{j_2\in[d]\setminus\{j_1\}}|A_{j_1,j_2}(x)| > 0. \qquad (3.9)$$

Observe that (3.8) holds for the Poisson equation (3.6) with C = 1.
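The constant C of (3.8) is straightforward to compute when the coefficients are constant. A minimal classical sketch (the function name is ours, and we take A to be a constant d × d matrix rather than x-dependent for simplicity):

```python
# Compute the global strict diagonal dominance constant C of (3.8)
# for a constant coefficient matrix A. (In general A may depend on x,
# and C must be positive uniformly in x.)

def gsdd_constant(A):
    d = len(A)
    off = sum(
        sum(abs(A[j1][j2]) for j2 in range(d) if j2 != j1) / abs(A[j1][j1])
        for j1 in range(d)
    )
    return 1 - off

# Poisson's equation has A = I, so C = 1, as noted above.
identity = [[1.0, 0.0], [0.0, 1.0]]
print(gsdd_constant(identity))  # 1.0

# A mildly non-diagonal example still satisfying the condition:
A = [[2.0, 0.3], [0.2, 1.0]]
print(gsdd_constant(A))  # 1 - (0.3/2 + 0.2/1) = 0.65
```

A matrix failing the condition would return C ≤ 0, signaling that the condition-number analysis of Section 3.4.3 does not apply.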

In this chapter, we focus on the following boundary value problem. This is a formal

statement of Problem 1.4.

Problem 3.1. In the quantum PDE problem, we are given a system of second-order elliptic equations

$$\mathcal{L}(u(x)) = \sum_{\|j\|_1 = 2} A_j\,\frac{\partial^j}{\partial x^j}\,u(x) = \sum_{j_1,j_2=1}^{d} A_{j_1 j_2}\frac{\partial^2 u(x)}{\partial x_{j_1}\partial x_{j_2}} = f(x) \qquad (3.10)$$

satisfying the global strict diagonal dominance condition (3.8), where the variable x = (x_1, . . . , x_d) ∈ D = [−1, 1]^d is a d-dimensional vector, the inhomogeneity f(x) ∈ C is a scalar function of x satisfying f(x) ∈ C^∞, and the linear coefficients A_j ∈ C. We are also given boundary conditions u(x) = γ(x) for x ∈ ∂D, or ∂u(x)/∂x_j |_{x_j=±1} = γ(x)|_{x_j=±1} for x ∈ ∂D, where γ(x) ∈ C^∞. We assume there exists a weak solution û(x) ∈ C for the boundary value problem (see Reference [99, Section 6.1.2]). Given sparse matrix oracles that provide the locations and values of the nonzero entries of the matrices A_{j_1 j_2}(x), A_j(x), and A_0(x) on a set of interpolation nodes x,² and that prepare normalized states |γ(x)⟩ and |f(x)⟩ whose amplitudes are proportional to γ(x) and f(x) on a set of interpolation nodes x, the goal is to output a quantum state |u(x)⟩ that is ϵ-close to the normalized u(x) on a set of interpolation nodes x in ℓ² norm.
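The sparse-access convention assumed above can be mirrored by a classical stand-in. The following sketch only illustrates the input/output behavior of such an oracle as described in the footnote; the class, its storage format, and the example matrix are our own illustrative choices:

```python
# A classical stand-in for the sparse matrix oracle: on input (m, l),
# return the column index of the l-th nonzero entry in row m together
# with its value. Rows are stored as sorted lists of (column, value)
# pairs; this dictionary-of-rows layout is illustrative only.

class SparseOracle:
    def __init__(self, rows):
        # rows: {row index m: [(column index, value), ...], sorted by column}
        self.rows = rows

    def query(self, m, l):
        col, val = self.rows[m][l]
        return col, val

# A 3 x 3 tridiagonal example (like a 1D discretized Laplacian).
oracle = SparseOracle({
    0: [(0, -2.0), (1, 1.0)],
    1: [(0, 1.0), (1, -2.0), (2, 1.0)],
    2: [(1, 1.0), (2, -2.0)],
})
print(oracle.query(1, 0))  # (0, 1.0): the first nonzero of row 1 is in column 0
```

The quantum oracle answers the same queries in superposition; the point of the sketch is only the (m, l) ↦ (column, value) interface.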

3.3 Finite difference method

We now describe our first approach to quantum algorithms for linear PDEs, based

on the finite difference method (FDM). Using this approach, we show the following.

Theorem 3.1. There exists a quantum algorithm that outputs a state ϵ-close to |u⟩ that

runs in time

$$\tilde{O}\left(d^{6.5}\log^{4.5}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\sqrt{\log\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\Big/\epsilon\right)}\,\right) \qquad (3.11)$$

and makes

$$\tilde{O}\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\sqrt{\log\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\Big/\epsilon\right)}\,\right) \qquad (3.12)$$
queries to the oracle for f.

² For instance, A_{j_1 j_2}(x) is modeled by a sparse matrix oracle that, on input (m, l), gives the location of the l-th nonzero entry in row m, denoted as n, and gives the value A_{j_1 j_2}(x)_{m,n}.

To show this, we first construct a linear system corresponding to the finite difference

approximation of Poisson’s equation with periodic boundary conditions and bound the

error of this high-order FDM in Section 3.3.1 (Lemma 3.1). Then we bound the condition

number of this system in Section 3.3.2 (Lemma 3.2 and Lemma 3.3) and bound the

error of approximation in Section 3.3.3 (Lemma 3.4). We use these results to give an

efficient quantum algorithm in Section 3.3.4, establishing Theorem 3.1. We conclude by

discussing how to use the method of images to apply this algorithm for Neumann and

Dirichlet boundary conditions in Section 3.3.5.

The FDM approximates the derivative of a function f at a point x in terms of the

values of f on a finite set of points near x. Generally there are no restrictions on where

these points are located relative to x, but they are typically taken to be uniformly spaced

points with respect to a certain coordinate. This corresponds to discretizing [−1, 1]d

(or [0, 2π)d ) to a d-dimensional rectangular lattice (where we use periodic boundary

conditions).

For a scalar field, in which u(x) ∈ C, the canonical elliptic PDE is Poisson’s

equation (3.6), which we consider solving on [0, 2π)d with periodic boundary conditions.

This also implies results for the domain Ω = [−1, 1]^d under Dirichlet (u(∂Ω) = 0) and Neumann (n̂·∇u(∂Ω) = 0, where n̂ denotes the normal direction to ∂Ω, which for the domain Ω = [−1, 1]^d is equivalent to ∂u/∂x_j |_{x_j=±1} = 0 for j ∈ [d]) boundary conditions.

3.3.1 Linear system

To approximate the second derivatives appearing in Poisson’s equation, we apply

the central finite difference formula of order 2k. Taking xj = jh for a lattice with spacing

h, this formula gives the approximation

$$f''(0) \approx \frac{1}{h^2}\sum_{j=-k}^{k} r_j f(jh), \qquad (3.13)$$

where the coefficients are [6, 100]


$$r_j := \begin{cases} \dfrac{2(-1)^{j+1}(k!)^2}{j^2\,(k-j)!\,(k+j)!}, & j \in [k], \\[2mm] -2\sum_{j=1}^{k} r_j, & j = 0, \\[2mm] r_{-j}, & j \in -[k]. \end{cases} \qquad (3.14)$$

We leave the dependence on k implicit in this notation. The following lemma characterizes

the error of this formula.

Lemma 3.1 ([6, Theorem 7]). Let k ≥ 1 and suppose f(x) ∈ C^{2k+1} for x ∈ R. Define the coefficients r_j as in (3.14). Then

$$\frac{\mathrm{d}^2 u(x_0)}{\mathrm{d}x^2} = \frac{1}{h^2}\sum_{j=-k}^{k} r_j f(x_0 + jh) + O\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\left(\frac{eh}{2}\right)^{2k-1}\right), \qquad (3.15)$$

where

$$\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\| := \max_{y\in[x_0-kh,\,x_0+kh]}\left|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}(y)\right|. \qquad (3.16)$$
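The coefficients (3.14) and the convergence promised by Lemma 3.1 are easy to verify numerically. A minimal sketch (function names are ours):

```python
from math import factorial, exp

def fd_coeffs(k):
    """Coefficients r_j of the order-2k central difference formula (3.14)."""
    r = {}
    for j in range(1, k + 1):
        r[j] = (2 * (-1) ** (j + 1) * factorial(k) ** 2
                / (j ** 2 * factorial(k - j) * factorial(k + j)))
        r[-j] = r[j]  # r_{-j} = r_j
    r[0] = -2 * sum(r[j] for j in range(1, k + 1))
    return r

def second_derivative(f, x0, h, k):
    """Approximate f''(x0) as in (3.13)."""
    r = fd_coeffs(k)
    return sum(r[j] * f(x0 + j * h) for j in range(-k, k + 1)) / h ** 2

# k = 2 reproduces the classical 4th-order stencil (-1/12, 4/3, -5/2, 4/3, -1/12).
print([fd_coeffs(2)[j] for j in range(-2, 3)])

# (e^x)'' = e^x, so the approximation at x0 = 0 should be close to 1,
# with error shrinking rapidly in k and h as the lemma suggests.
print(abs(second_derivative(exp, 0.0, 0.05, k=3) - 1.0))
```

Doubling k (at fixed h) or halving h makes the printed error drop sharply, which is the behavior the adaptive choice of k later exploits.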

Since we assume periodic boundary conditions and apply the same FDM formula at each lattice site, the matrices we consider are circulant. Define the 2n × 2n matrix S to have entries S_{i,j} = δ_{i,j+1 mod 2n}. If we represent the solution u(x) as a vector u = Σ_{j=1}^{2n} u(πj/n) e_j, then we can approximate Poisson's equation using a central difference formula as

$$Lu = \frac{1}{h^2}\left(r_0 I + \sum_{j=1}^{k} r_j\,(S^{j} + S^{-j})\right)u = f, \qquad (3.17)$$

where f = Σ_{j=1}^{2n} f(πj/n) e_j. The solution u corresponds exactly with the quantum

state we want to produce, so we do not have to perform any post-processing such as

in Reference [70] and other quantum differential equation algorithms. The matrix in

this linear system is just the finite difference matrix, so it suffices to bound its condition

number and approximation error (whereas previous quantum algorithms involved more

complicated linear systems).

3.3.2 Condition number

The following lemma characterizes the condition number of a circulant Laplacian

on 2n points.

Lemma 3.2. For k < (6/π²)^{1/3} n^{2/3}, the matrix L = r_0 I + Σ_{j=1}^{k} r_j (S^j + S^{-j}) with r_j as in (3.14) has condition number κ(L) = O(n²).

Proof. We first upper bound ∥L∥ using Gershgorin's circle theorem [101] (a similar argument appears in Reference [6]). Note that

$$|r_j| = \frac{2(k!)^2}{j^2\,(k-j)!\,(k+j)!} \le \frac{2}{j^2}, \qquad (3.18)$$

since

$$\frac{(k!)^2}{(k-j)!\,(k+j)!} = \frac{k(k-1)\cdots(k-j+1)}{(k+j)(k+j-1)\cdots(k+1)} < 1. \qquad (3.19)$$

The radii of the Gershgorin discs are

$$2\sum_{j=1}^{k}|r_j| \le 2\sum_{j=1}^{k}\frac{2}{j^2} \le \frac{2\pi^2}{3}. \qquad (3.20)$$

The discs are centered at r_0, and

$$|r_0| \le 2\sum_{j=1}^{k}|r_j| \le \frac{2\pi^2}{3}, \qquad (3.21)$$

so ∥L∥ ≤ 4π²/3.

To lower bound ∥L^{−1}∥ we lower bound the (absolute value of the) smallest non-zero eigenvalue of L (since by construction the all-ones vector is a zero eigenvector). Let ω := exp(πi/n). Since L is circulant, its eigenvalues are

$$\begin{aligned}
\lambda_l &= r_0 + \sum_{j=1}^{k} r_j\,(\omega^{lj} + \omega^{-lj}) \qquad &(3.22)\\
&= r_0 + \sum_{j=1}^{k} 2r_j\cos\!\left(\frac{\pi l j}{n}\right) &(3.23)\\
&= r_0 + \sum_{j=1}^{k} 2r_j\left(1 - \frac{\pi^2 l^2 j^2}{2n^2} + \frac{(\pi c_j)^4}{4!\,n^4}\cos\!\left(\frac{\pi c_j}{n}\right)\right) &(3.24)\\
&= \sum_{j=1}^{k} 2r_j\left(-\frac{\pi^2 l^2 j^2}{2n^2} + \frac{(\pi c_j)^4}{4!\,n^4}\cos\!\left(\frac{\pi c_j}{n}\right)\right), &(3.25)
\end{aligned}$$

where the c_j ∈ [0, lj] arise from the Taylor remainder theorem, and the last step uses r_0 = −2Σ_{j=1}^{k} r_j from (3.14). Using (3.18), we have

$$\left|\lambda_1 + \frac{\pi^2}{n^2}\sum_{j=1}^{k} r_j j^2\right| \le \frac{\pi^4 k^3}{6n^4}. \qquad (3.26)$$

We now compute the sum

$$\begin{aligned}
-\sum_{j=1}^{k} r_j j^2 &= \sum_{j=1}^{k} j^2\,\frac{2(-1)^j(k!)^2}{j^2\,(k+j)!\,(k-j)!} \qquad &(3.27)\\
&= 2(k!)^2\sum_{j=1}^{k}\frac{(-1)^j}{(k+j)!\,(k-j)!} &(3.28)\\
&= \frac{2(k!)^2}{(2k)!}\sum_{j=1}^{k}(-1)^j\binom{2k}{k+j} &(3.29)\\
&= \frac{2(k!)^2}{(2k)!}\sum_{j=k+1}^{2k}(-1)^{j+k}\binom{2k}{j} &(3.30)\\
&= (-1)^k\frac{(k!)^2}{(2k)!}\sum_{j=0,\,j\ne k}^{2k}(-1)^{j}\binom{2k}{j} &(3.31)\\
&= (-1)^k\frac{(k!)^2}{(2k)!}\left((1-1)^{2k} - (-1)^k\binom{2k}{k}\right) &(3.32)\\
&= -1. &(3.33)
\end{aligned}$$

Therefore, we have

$$\lambda_1 \le -\frac{\pi^2}{n^2} + \frac{\pi^4 k^3}{6n^4}. \qquad (3.34)$$

Finally, we see that

$$\begin{aligned}
\kappa(L) &= \|L\|\,\|L^{-1}\| \qquad &(3.35)\\
&\le \frac{4\pi^2}{3}\left(\frac{\pi^2}{n^2} - \frac{\pi^4 k^3}{6n^4}\right)^{-1} &(3.36)\\
&= \frac{4}{3}\,n^2\left(1 - \frac{\pi^2 k^3}{6n^2}\right)^{-1}, &(3.37)
\end{aligned}$$

which is O(n²) provided k < (6/π²)^{1/3} n^{2/3}.

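Since the eigenvalues (3.23) are explicit, the bound (3.37) can be checked numerically. A small sketch (reusing the coefficients (3.14); function names are ours):

```python
from math import factorial, cos, pi

def fd_coeffs(k):
    # coefficients r_j from (3.14)
    r = {}
    for j in range(1, k + 1):
        r[j] = (2 * (-1) ** (j + 1) * factorial(k) ** 2
                / (j ** 2 * factorial(k - j) * factorial(k + j)))
    r[0] = -2 * sum(r.values())
    return r

def circulant_eigenvalues(n, k):
    # lambda_l = r_0 + sum_j 2 r_j cos(pi l j / n), as in (3.23),
    # for the circulant Laplacian L on 2n points
    r = fd_coeffs(k)
    return [r[0] + sum(2 * r[j] * cos(pi * l * j / n) for j in range(1, k + 1))
            for l in range(2 * n)]

n, k = 64, 4  # k is well below (6/pi^2)^(1/3) n^(2/3)
lam = circulant_eigenvalues(n, k)
nonzero = [abs(x) for x in lam if abs(x) > 1e-9]   # drop the zero mode
kappa = max(nonzero) / min(nonzero)

# the computed condition number should respect the bound (3.37),
# and the smallest nonzero eigenvalue should be close to pi^2/n^2
bound = (4 / 3) * n ** 2 / (1 - pi ** 2 * k ** 3 / (6 * n ** 2))
print(kappa, bound)
```

Running this for several n confirms the κ(L) = Θ(n²) scaling that drives the QLSA cost.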
In d dimensions, a similar analysis holds.

Lemma 3.3. For k < (6/π²)^{1/3} n^{2/3}, let L := r_0 I + Σ_{j=1}^{k} r_j (S^j + S^{-j}) with r_j as in (3.14). The matrix L′ := L ⊗ I^{⊗(d−1)} + I ⊗ L ⊗ I^{⊗(d−2)} + · · · + I^{⊗(d−1)} ⊗ L has condition number κ(L′) = O(dn²).

Proof. By the triangle inequality for spectral norms, ∥L′∥ ≤ d∥L∥. Since L has zero-sum rows by construction, the all-ones vector lies in its kernel, and thus the smallest non-zero eigenvalue of L is the same as that of L′. Therefore we have

$$\kappa(L') \le \frac{4}{3}\,dn^2\left(1 - \frac{\pi^2 k^3}{6n^2}\right)^{-1}, \qquad (3.38)$$

which is O(dn²) provided k < (6/π²)^{1/3} n^{2/3}.

3.3.3 Error analysis

There are two types of error relevant to our analysis: the FDM error and the QLSA

error. We assume that we are able to perfectly generate states proportional to f . The FDM

errors arise from the remainder terms in the finite difference formulas and from inexact

approximations of the eigenvalues.

We introduce several states for the purpose of error analysis. Let |u⟩ be the quantum state that is proportional to u = Σ_{j∈Z_{2n}^d} u(πj/n) ⊗_{i=1}^{d} e_{j_i} for the exact solution of the differential equation. Let |ū⟩ be the state output by a QLSA that exactly solves the linear system. Let |ũ⟩ be the state output by a QLSA with error. Then the total error of approximating |u⟩ by |ũ⟩ is bounded by

$$\||u\rangle - |\tilde u\rangle\| \le \||u\rangle - |\bar u\rangle\| + \||\bar u\rangle - |\tilde u\rangle\| \qquad (3.39)$$
$$= \epsilon_{\mathrm{FDM}} + \epsilon_{\mathrm{QLSA}}, \qquad (3.40)$$

and without loss of generality we can take ϵFDM and ϵQLSA to be of the same order of

magnitude.

Lemma 3.4. Let u(x) be the exact solution of (Σ_{i=1}^{d} d²/dx_i²) u(x) = f(x). Let u ∈ R^{(2n)^d} encode the exact solution in the sense that u = Σ_{j∈Z_{2n}^d} u(πj/n) ⊗_{i=1}^{d} e_{j_i}. Let ū ∈ R^{(2n)^d} be the exact solution of the FDM linear system (1/h²) L′ ū = f, where L′ is a d-dimensional (2k)th-order Laplacian as above with k < (6/π²)^{1/3} n^{2/3}, and f = Σ_{j∈Z_{2n}^d} f(πj/n) ⊗_{i=1}^{d} e_{j_i}. Then ∥u − ū∥ ≤ O(2^{d/2} n^{(d/2)−2k+1} ∥d^{2k+1}u/dx^{2k+1}∥ (e²/4)^k).

Proof. The remainder term of the central difference formula is O(∥d^{2k+1}u/dx^{2k+1}∥ h^{2k−1}(e/2)^{2k}), so

$$\frac{1}{h^2}L'u = f + O\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\left(\frac{eh}{2}\right)^{2k-1}\right)\epsilon, \qquad (3.41)$$

where ϵ is a (2n)^d-dimensional vector whose entries are O(1). This implies

$$\frac{1}{h^2}L'(u - \bar u) = O\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\left(\frac{eh}{2}\right)^{2k-1}\right)\epsilon, \qquad (3.42)$$

and therefore

$$\begin{aligned}
\|u - \bar u\| &= O\!\left(\left(\frac{eh}{2}\right)^{2k+1}\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\,\|(L')^{-1}\epsilon\|\right) \qquad &(3.43)\\
&= O\!\left((2n)^{d/2}\left(\frac{eh}{2}\right)^{2k+1}\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\Big/\lambda_1\right). &(3.44)
\end{aligned}$$

By Lemma 3.2 we have λ₁ = Θ(1/n²), and since h = Θ(1/n), we have

$$\|u - \bar u\| = O\!\left(2^{d/2}\,n^{(d/2)-2k+1}\left(\frac{e}{2}\right)^{2k}\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\right), \qquad (3.45)$$

as claimed.
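As a purely classical sanity check of this error analysis, the periodic system (1/h²)L′ū = f can be solved in one dimension by diagonalizing the circulant matrix with a DFT, anticipating the QFT-based diagonalization used in the next subsection. The sketch below (function names ours) uses f(x) = sin x on [0, 2π), whose zero-mean solution of u″ = f is −sin x:

```python
from cmath import exp as cexp
from math import factorial, cos, sin, pi

def fd_coeffs(k):
    # coefficients r_j from (3.14)
    r = {}
    for j in range(1, k + 1):
        r[j] = (2 * (-1) ** (j + 1) * factorial(k) ** 2
                / (j ** 2 * factorial(k - j) * factorial(k + j)))
    r[0] = -2 * sum(r.values())
    return r

def solve_periodic_poisson(f_vals, k):
    """Solve (1/h^2) L u = f on N uniform points of [0, 2*pi) via the DFT."""
    N = len(f_vals)          # N = 2n grid points
    h = 2 * pi / N           # grid spacing
    r = fd_coeffs(k)
    # eigenvalues of the circulant L, cf. (3.23)
    lam = [r[0] + sum(2 * r[j] * cos(2 * pi * l * j / N) for j in range(1, k + 1))
           for l in range(N)]
    fhat = [sum(f_vals[m] * cexp(-2j * pi * l * m / N) for m in range(N)) / N
            for l in range(N)]
    # invert mode by mode; the zero mode (all-ones vector) is projected out
    uhat = [h * h * fhat[l] / lam[l] if abs(lam[l]) > 1e-12 else 0.0
            for l in range(N)]
    return [sum(uhat[l] * cexp(2j * pi * l * m / N) for l in range(N)).real
            for m in range(N)]

N, k = 32, 3
xs = [2 * pi * m / N for m in range(N)]
u = solve_periodic_poisson([sin(x) for x in xs], k)
err = max(abs(u[m] + sin(xs[m])) for m in range(N))  # exact solution is -sin(x)
print(err)
```

With the order-6 stencil (k = 3) the error is already far below the grid spacing, illustrating why the adaptive choice of k pays off.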

3.3.4 FDM algorithm

To apply QLSAs, we must consider the complexity of simulating Hamiltonians

that correspond to Laplacian FDM operators. For periodic boundary conditions, the

Laplacians are circulant, so they can be diagonalized by the QFT F (or a tensor product of

QFTs for the multi-dimensional Laplacian L′ ), i.e., D = F † LF is diagonal. In this case

the simplest way to simulate exp(iLt) is to perform the inverse QFT, apply controlled

phase rotations to implement exp(iDt), and perform the QFT. Reference [102] shows how

to exactly implement arbitrary diagonal unitaries on m qubits using O(2^m) gates. Since

we consider Laplacians on n lattice sites, simulating exp(iLt) takes O(n) gates with

the dominant contribution coming from the phase rotations (alternatively, the methods

of Reference [103] or Reference [104] could also be used). Using this Hamiltonian

simulation algorithm in a QLSA for the FDM linear system gives us the following theorem.

We restate Theorem 3.1 as follows.

Theorem 3.1. There exists a quantum algorithm that outputs a state ϵ-close to |u⟩ that

runs in time

$$\tilde{O}\left(d^{6.5}\log^{4.5}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\sqrt{\log\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\Big/\epsilon\right)}\,\right) \qquad (3.11)$$

and makes

$$\tilde{O}\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\sqrt{\log\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\Big/\epsilon\right)}\,\right) \qquad (3.12)$$

queries to the oracle for f .

Proof. We use the Fourier series based QLSA from Reference [46]. By Theorem 3 of that work, the QLSA makes O(κ√log(κ/ϵ_QLSA)) uses of a Hamiltonian simulation algorithm and of the oracle for the inhomogeneity. For Hamiltonian simulation we use d parallel QFTs and phase rotations as described in Reference [102], for a total of O(dnκ√log(κ/ϵ_QLSA)) gates. The condition number for the d-dimensional Laplacian scales as κ = O(dn²).

We take ϵ_FDM and ϵ_QLSA to be of the same order and just write ϵ. Then the QLSA has time complexity O(d²n³√log(dn²/ϵ)) and query complexity O(dn²√log(dn²/ϵ)). The

adjustable parameters are the number of lattice sites n and the order 2k of the finite

difference formula. To keep the error below the target error of ϵ we require

$$2^{d/2}\,n^{(d/2)-2k+1}\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|\left(\frac{e}{2}\right)^{2k} = O(\epsilon), \qquad (3.46)$$

or equivalently,

$$-\frac{d}{2} + \left(2k - 1 - \frac{d}{2}\right)\log(n) - 2k\log(e/2) = \Omega\!\left(\log\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\right). \qquad (3.47)$$

Now we focus on the choice of the adjustable parameters n and k depending on ϵ. This procedure is inspired by the classical adaptive FDM [71], so we call it the adaptive FDM approach. We must have 2k − 1 > d/2 for the left-hand side of (3.47) to be positive for large n. Indeed, we find the best performance by taking k as large as possible subject to the assumption of Lemma 3.2, i.e., k = cn^{2/3} where c := (6/π²)^{1/3}. For this choice of k and for n

sufficiently large, (3.47) is equivalent to

$$k\log(n) = c\,n^{2/3}\log(n) = \Omega\!\left(\log\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\right). \qquad (3.48)$$

To satisfy the condition 2cn^{2/3} − 1 > d/2, we must have n = Ω(d^{3/2}). Combining this observation with (3.48), we choose

$$n = \Theta\!\left(d^{3/2}\log^{3/2}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\right) \qquad (3.49)$$

so that

$$k = c\,n^{2/3} = \Theta\!\left(d\log\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\right). \qquad (3.50)$$

The QLSA then has the stated time complexity

$$\tilde{O}\!\left(d^2 n^3\sqrt{\log(dn^2/\epsilon)}\right) = O\!\left(d^{6.5}\log^{4.5}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\sqrt{\log\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\Big/\epsilon\right)}\,\right), \qquad (3.51)$$

and makes

$$\tilde{O}\!\left(dn^2\sqrt{\log(dn^2/\epsilon)}\right) = O\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\sqrt{\log\!\left(d^{4}\log^{3}\!\left(\left\|\frac{\mathrm{d}^{2k+1}u}{\mathrm{d}x^{2k+1}}\right\|/\epsilon\right)\Big/\epsilon\right)}\,\right) \qquad (3.52)$$

queries to the oracle for f.

This can be compared to the cost of using the conjugate gradient method to solve the same linear system classically. The sparse conjugate gradient algorithm for an N × N matrix has time complexity O(Ns√κ log(1/ϵ)). Here N = Θ(n^d), s = dk = cdn^{2/3}, and κ = O(dn²), so the time complexity is O(d^{4+3d/2} log(1/ϵ) log^{5/2+3d/2}(∥d^{2k+1}u/dx^{2k+1}∥/ϵ)). Alternatively, d fast Fourier transforms could be used, although this will generally take Ω(n^d) = Ω(d^{3d/2} log^{3d/2}(∥d^{2k+1}u/dx^{2k+1}∥/ϵ)) time.
/ϵ)) time.

3.3.5 Boundary conditions via the method of images

We can apply the method of images to deal with homogeneous Neumann and

Dirichlet boundary conditions using the algorithm for periodic boundary conditions described

above. In the method of images, the domain [−1, 1] is extended to include all of R, and

the boundary conditions are related to symmetries of the solutions. For a pair of Dirichlet boundary conditions there are two symmetries: the solutions are anti-symmetric about −1

(i.e., f (−x − 1) = −f (x − 1)) and anti-symmetric about 1 (i.e., f (1 + x) = −f (1 − x)).

Continuity and anti-symmetry about −1 and 1 imply f (−1) = f (1) = 0, and furthermore

that f (x) = 0 for all odd x ∈ Z and that f (x + 4) = f (x) for all x ∈ R. For Neumann

boundary conditions, the solutions are instead symmetric about −1 and 1, which also

implies f (x + 2) = f (x) for all x ∈ R.

We would like to combine the method of images with the FDM to arrive at finite

difference formulas for this special case. In both cases, the method of images implies

that the solutions are periodic, so without loss of generality we can consider a lattice on

[0, 2π) instead of a lattice on R. It is useful to think of this lattice in terms of the cycle

graph on 2n vertices, i.e., (V, E) = (Z2n , {(i, i + 1) | i ∈ Z2n }), which means that the

vectors encoding the solution u(x) will lie in R2n . Let each vector ej correspond to the

vertex j. Then we divide R2n into a symmetric and an anti-symmetric subspace, namely

span{ej + e2n+1−j }nj=1 and span{ej − e2n+1−j }nj=1 , respectively. Vectors lying in the

symmetric subspace correspond to solutions that are symmetric about 0 and π, so they

obey Neumann boundary conditions at 0 and π; similarly, vectors in the anti-symmetric

space correspond to solutions obeying Dirichlet boundary conditions at 0 and π.
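This subspace structure is easy to verify directly: a symmetric circulant stencil commutes with the reflection j ↦ 2n + 1 − j, so it maps each of the two subspaces to itself. A minimal check with the k = 1 stencil (1, −2, 1) (function names are ours):

```python
# Verify that the circulant second-difference matrix on the 2n-cycle
# preserves the symmetric subspace span{e_j + e_{2n+1-j}} and the
# anti-symmetric subspace span{e_j - e_{2n+1-j}} (1-indexed as in the text).

def apply_circulant_laplacian(v):
    N = len(v)
    return [v[(i - 1) % N] - 2 * v[i] + v[(i + 1) % N] for i in range(N)]

def reflect(v):
    # the reflection j -> 2n + 1 - j, written for 0-indexed lists
    return v[::-1]

n = 6
N = 2 * n
# an arbitrary vector and its (anti)symmetrized versions
w = [float(i * i % 7) for i in range(N)]
sym = [a + b for a, b in zip(w, reflect(w))]
anti = [a - b for a, b in zip(w, reflect(w))]

Ls = apply_circulant_laplacian(sym)
La = apply_circulant_laplacian(anti)
print(max(abs(a - b) for a, b in zip(Ls, reflect(Ls))))   # output stays symmetric
print(max(abs(a + b) for a, b in zip(La, reflect(La))))   # output stays anti-symmetric
```

Because the matrix never mixes the two subspaces, restricting the QLSA to a fixed symmetry sector, as done above via the |±⟩ ancilla, is consistent.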

Restricting to a subspace of vectors reduces the size of the FDM vectors and matrices

we consider, and the symmetry of that subspace indicates how to adjust the coefficients.

If the FDM linear system is L″u″ = f″, then L″ has entries

$$L''_{i,j} = \begin{cases} r_{|i-j|} \pm r_{i+j-1}, & i \le k, \\ r_{|i-j|}, & k < i \le n-k, \\ r_{|i-j|} \pm r_{2n-i-j+1}, & n-k \le i, \end{cases} \qquad (3.53)$$

where + (−) is chosen for Neumann (Dirichlet) boundary conditions and, due to the truncation order k, r_j = 0 for any j > k. This is similar to how Laplacian coefficients are modified when imposing boundary conditions in discrete variable representations [105].

For the purpose of solving the new linear systems using quantum algorithms, we

still treat these cases as obeying periodic boundary conditions. We assume access to an

oracle that produces states |f ′′ ⟩ proportional to the inhomogeneity f ′′ (x). Then we apply

the QLSA for periodic boundary conditions using |f ′′ ⟩|±⟩ to encode the inhomogeneity,

which will output solutions of the form |u′′ ⟩|±⟩. Here the ancillary state is chosen to be

|+⟩ (|−⟩) for Neumann (Dirichlet) boundary conditions.

Typically, the (second-order) graph Laplacian for the path graph with Dirichlet

boundary conditions has diagonal entries that are all equal to 2; however, using the above

specification for the entries of L leads to the (1, 1) and (n, n) entries being 3 while the

rest of the diagonal entries are 2.

To reproduce this case, we consider an alternative subspace restriction used in

Reference [106] to diagonalize the Dirichlet graph Laplacian. In this case it is easiest to

consider the lattice of a cycle graph on 2n + 2 vertices, where the vertices 0 and n + 1 are

selected as boundary points where the field takes the value 0. The relevant antisymmetric

subspace is now span{e_j − e_{2n+2−j}}_{j=1}^{n} (which has no support on e_0 and e_{n+1}).

If we again write the linear system as L″u″ = f″, then the Laplacian has entries

$$L''_{i,j} = \begin{cases} r_{|i-j|} - r_{i+j}, & i \le k, \\ r_{|i-j|}, & k < i \le n-k, \\ r_{|i-j|} - r_{2n-i-j+2}, & n-k \le i. \end{cases}$$

We again assume access to an oracle producing states proportional to f″(x); however, we assume that this oracle operates in a Hilbert space with one additional dimension compared to the previous approaches (i.e., whereas previously we considered implementing U, here we consider implementing $\begin{pmatrix} U & 0 \\ 0^T & 1 \end{pmatrix}$). With this oracle we again prepare the state |f″⟩|−⟩ and solve Poisson's equation for periodic boundary conditions to output a state |u″⟩|−⟩ (where |u″⟩ lies in an (n + 1)-dimensional Hilbert space but has no support on the (n + 1)st basis state).

3.4 Multi-dimensional spectral method

We now turn our attention to the spectral method for multi-dimensional PDEs.

Since interpolation facilitates constructing a straightforward linear system, we develop

a quantum algorithm based on the pseudo-spectral method [67, 68, 107] for second-

order elliptic equations with global strict diagonal dominance, under various boundary

conditions. Using this approach, we show the following.

Theorem 3.2. Consider an instance of the quantum PDE problem as defined in Problem 3.1

with Dirichlet boundary conditions (3.81). Then there exists a quantum algorithm that

produces a state in the form of (3.82) whose amplitudes are proportional to u(x) on a set

of interpolation nodes x (with respect to the uniform grid nodes for periodic boundary

conditions or the Chebyshev-Gauss-Lobatto quadrature nodes for non-periodic boundary

conditions, as defined in (3.60)), where u(x)/∥u(x)∥ is ϵ-close to û(x)/∥û(x)∥ in l2

norm for all nodes x, succeeding with probability Ω(1), with a flag indicating success,

using

$$\left(\frac{d\,\|A\|_\Sigma}{C\,\|A\|_*} + q\,d\right)\operatorname{poly}\!\big(\log(g'/g\epsilon)\big) \qquad (3.54)$$

queries to oracles as defined in Section 3.4.4. Here ∥A∥_Σ := Σ_{∥j∥₁≤h} ∥A_j∥, ∥A∥_* := Σ_{j=1}^{d} |A_{j,j}|, C > 0 is defined in (3.8), and

$$g := \min_{x}\|\hat u(x)\|, \qquad g' := \max_{x}\max_{n\in\mathbb{N}}\|\hat u^{(n+1)}(x)\|, \qquad (3.55)$$

$$q := \sqrt{\frac{\sum_{\|k\|_\infty\le n}\sum_{j=1}^{d}\hat f_k^2 + (A_{j,j}\hat\gamma_k^{j+})^2 + (A_{j,j}\hat\gamma_k^{j-})^2}{\sum_{\|k\|_\infty\le n}\sum_{j=1}^{d}\big(\hat f_k + A_{j,j}\hat\gamma_k^{j+} + A_{j,j}\hat\gamma_k^{j-}\big)^2}}. \qquad (3.56)$$

The gate complexity is larger than the query complexity by a factor of poly(log(d∥A∥Σ /ϵ)).

After introducing the method, we discuss the complexity of the quantum shifted

Fourier transform (Lemma 3.5) and the quantum cosine transform (Lemma 3.6) in Section 3.4.1.

These transforms are used as subroutines in our algorithm. Then we construct a linear

system whose solution encodes the solution of the PDE in Section 3.4.2, analyze its

condition number in Section 3.4.3 (Lemma 3.10, established using Lemma 3.7, Lemma 3.8,

and Lemma 3.9), and consider the complexity of state preparation in Section 3.4.4 (Lemma 3.11).

Finally, we prove our main result (Theorem 3.2) in Section 3.4.5.

In the spectral approach, we approximate the exact solution û(x) by a linear combination of basis functions

$$u(x) = \sum_{\|k\|_\infty \le n} c_k\,\phi_k(x) \qquad (3.57)$$

for some n ∈ Z₊. Here k = (k_1, . . . , k_d) with k_j ∈ [n + 1]_0 := {0, 1, . . . , n}, c_k ∈ C, and

$$\phi_k(x) = \prod_{j=1}^{d}\phi_{k_j}(x_j), \qquad j \in [d]. \qquad (3.58)$$

We choose different basis functions for the case of periodic boundary conditions

and for the more general case of non-periodic boundary conditions. When the boundary

conditions are periodic, the algorithm implementation is more straightforward, and in

some cases (e.g., for the Poisson equation), can be faster. Specifically, for any kj ∈

[n + 1]0 and xj ∈ [−1, 1], we take

$$\phi_{k_j}(x_j) = \begin{cases} e^{i(k_j - \lfloor n/2\rfloor)\pi x_j}, & \text{periodic conditions}, \\ T_{k_j}(x_j) := \cos(k_j \arccos x_j), & \text{non-periodic conditions}. \end{cases} \qquad (3.59)$$

Here T_k is the degree-k Chebyshev polynomial of the first kind.

The coefficients ck are determined by demanding that u(x) satisfies the ODE and

boundary conditions at a set of interpolation nodes {χl = (χl1 , . . . , χld )}∥l∥∞≤n with

lj ∈ [n + 1]0 , where


2lj

− 1, periodic conditions,


 n+1
χlj = (3.60)

cos πlj ,

non-periodic conditions.

n

88
Here {2l/(n + 1) − 1 : l ∈ [n + 1]₀} are called the uniform grid nodes, and {cos(πl/n) : l ∈ [n + 1]₀} are called the Chebyshev-Gauss-Lobatto quadrature nodes.
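As a quick numerical illustration (an addition of ours, not part of the dissertation's algorithm), both node families take a few lines of NumPy; at the Chebyshev-Gauss-Lobatto nodes the Chebyshev basis T_k(x) = cos(k arccos x) reduces to pure cosines, which is what later makes a cosine transform applicable:

```python
import numpy as np

n = 8
l = np.arange(n + 1)

# Uniform grid nodes (periodic case) and Chebyshev-Gauss-Lobatto nodes (non-periodic case).
uniform_nodes = 2 * l / (n + 1) - 1
cgl_nodes = np.cos(np.pi * l / n)

# At the CGL nodes, T_k(chi_l) = cos(pi*k*l/n), so evaluating the Chebyshev
# series at these nodes is exactly a discrete cosine transform.
k = 3
Tk = np.cos(k * np.arccos(cgl_nodes))
assert np.allclose(Tk, np.cos(np.pi * k * l / n))
```

The same identity holds for every degree k ∈ [n + 1]₀, since arccos(cos(πl/n)) = πl/n on [0, π].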

We require the numerical solution u(x) to satisfy

L (u(χl )) = f (χl ), ∀ lj ∈ [n + 1]0 , j ∈ [d]. (3.61)

We would like to be able to increase the accuracy of the approximation by increasing n,

so that

∥û(x) − u(x)∥ → 0 as n → ∞. (3.62)

The convergence behavior of the spectral method is related to the smoothness of the solution. For a solution in C^{r+1}, the spectral method approximates the solution to within ϵ using n = poly(1/ϵ). Furthermore, if the solution is in C^∞, the spectral method approximates the solution to within ϵ using only n = poly(log(1/ϵ)) [68]. Since we require k_j ∈ [n + 1]₀ for all j ∈ [d], we have (n + 1)^d terms in total. Consequently, a classical pseudo-spectral method solves multi-dimensional PDEs with complexity poly(log^d(1/ϵ)). Such classical spectral methods rapidly become infeasible since the number of coefficients (n + 1)^d grows exponentially with d.

Here we develop a quantum algorithm for multi-dimensional PDEs. The algorithm

applies techniques from the quantum spectral method for ODEs [54]. However, in the

case of PDEs, the linear system to be solved is non-sparse. We address this difficulty

using a quantum transform that restores sparsity.

3.4.1 Quantum shifted Fourier transform and quantum cosine transform

The well-known quantum Fourier transform (QFT) can be regarded as an analogue

of the discrete Fourier transform (DFT) acting on the amplitudes of a quantum state. The

QFT maps the (n + 1)-dimensional quantum state v = (v0 , v1 , . . . , vn ) ∈ Cn+1 to the

state v̂ = (v̂0 , v̂1 , . . . , v̂n ) ∈ Cn+1 with

v̂_l = (1/√(n + 1)) Σ_{k=0}^n exp(2πikl/(n + 1)) v_k,  l ∈ [n + 1]₀.  (3.63)

In other words, the QFT is the unitary transform

F_n := (1/√(n + 1)) Σ_{k,l=0}^n exp(2πikl/(n + 1)) |l⟩⟨k|.  (3.64)

Here we also consider the quantum shifted Fourier transform (QSFT), an analogue

of the classical shifted discrete Fourier transform, which maps v ∈ Cn+1 to v̂ ∈ Cn+1

with

v̂_l = (1/√(n + 1)) Σ_{k=0}^n exp(2πi(k − ⌊n/2⌋)(l − (n + 1)/2)/(n + 1)) v_k,  l ∈ [n + 1]₀.  (3.65)

In other words, the QSFT is the unitary transform

F_n^s := (1/√(n + 1)) Σ_{k,l=0}^n exp(2πi(k − ⌊n/2⌋)(l − (n + 1)/2)/(n + 1)) |l⟩⟨k|.  (3.66)
We define the multi-dimensional QSFT by the tensor product, namely

F_n^s := (1/√((n + 1)^d)) Σ_{∥k∥∞,∥l∥∞≤n} ∏_{j=1}^d exp(2πi(k_j − ⌊n/2⌋)(l_j − (n + 1)/2)/(n + 1)) |l_1⟩…|l_d⟩⟨k_1|…⟨k_d|,  (3.67)

where k = (k_1, …, k_d) and l = (l_1, …, l_d) are d-dimensional vectors with k_j, l_j ∈ [n + 1]₀.

The QSFT can be efficiently implemented as follows:

Lemma 3.5. The QSFT Fns defined by (3.66) can be performed with gate complexity

O(log n log log n). More generally, the d-dimensional QSFT Fns defined by (3.67) can be

performed with gate complexity O(d log n log log n).

Proof. The unitary matrix Fns can be written as the product of three unitary matrices

Fns = Sn Fn Rn , (3.68)

where
R_n = Σ_{k=0}^n exp(−2πik((n + 1)/2)/(n + 1)) |k⟩⟨k|  (3.69)

and
S_n = Σ_{l=0}^n exp(−2πi⌊n/2⌋(l − (n + 1)/2)/(n + 1)) |l⟩⟨l|.  (3.70)

It is well known that Fn can be implemented with gate complexity O(log n log log n), and

it is straightforward to implement Rn and Sn with gate complexity O(log n). Thus the

total complexity is O(log n log log n).

We rewrite v in the form

v = Σ_{∥k∥∞≤n} v_k |k_1⟩…|k_d⟩,  (3.71)

where v_k ∈ C with k = (k_1, …, k_d), and each k_j ∈ [n + 1]₀ for j ∈ [d]. The unitary matrix

Fns can be written as the tensor product

F_n^s = ⊗_{j=1}^d F_n^s.  (3.72)

Performing the multi-dimensional QSFT is equivalent to performing the one-dimensional

QSFT on each register. Thus, the gate complexity of performing Fns is O(d log n log log n).
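As a sanity check (our own illustration, not from the dissertation), the decomposition F_n^s = S_n F_n R_n of Lemma 3.5 can be verified numerically for a small instance:

```python
import numpy as np

n = 7          # states have dimension n + 1 = 8
N = n + 1
idx = np.arange(N)

# Matrices from eqs. (3.64), (3.66), (3.69), and (3.70).
F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)
Fs = np.exp(2j * np.pi * np.outer(idx - N / 2, idx - n // 2) / N) / np.sqrt(N)
R = np.diag(np.exp(-2j * np.pi * idx * ((N / 2) / N)))
S = np.diag(np.exp(-2j * np.pi * (n // 2) * (idx - N / 2) / N))

assert np.allclose(Fs, S @ F @ R)                 # F_n^s = S_n F_n R_n
assert np.allclose(Fs @ Fs.conj().T, np.eye(N))   # the QSFT is unitary
```

Since R_n and S_n are diagonal phase matrices, the check also confirms that the shifted transform costs only two extra layers of single-register phase gates beyond the ordinary QFT.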

Another efficient quantum transformation is the quantum cosine transform (QCT)

[108, 109]. The QCT can be regarded as an analogue of the discrete cosine transform

(DCT). The QCT maps v ∈ Cn+1 to v̂ ∈ Cn+1 with

v̂_l = √(2/n) Σ_{k=0}^n δ_k δ_l cos(klπ/n) v_k,  l ∈ [n + 1]₀,  (3.73)

where

δ_l := 1/√2 for l = 0, n, and δ_l := 1 for l ∈ [n − 1].  (3.74)
In other words, the QCT is the orthogonal transform

C_n := √(2/n) Σ_{k,l=0}^n δ_l δ_k cos(klπ/n) |l⟩⟨k|.  (3.75)

Again we define the multi-dimensional QCT by the tensor product, namely

C_n := (2/n)^{d/2} Σ_{∥k∥∞,∥l∥∞≤n} ∏_{j=1}^d δ_{k_j} δ_{l_j} cos(k_j l_j π/n) |l_1⟩…|l_d⟩⟨k_1|…⟨k_d|,  (3.76)

where k = (k1 , . . . , kd ) and l = (l1 , . . . , ld ) are d-dimensional vectors with kj , lj ∈

[n + 1]0 .
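For concreteness (our own check, using the conventions of (3.73)–(3.75)), the matrix C_n is the orthonormal DCT-I of size n + 1, and its orthogonality — hence unitarity as a quantum transform — is easy to confirm numerically:

```python
import numpy as np

n = 8
idx = np.arange(n + 1)

# Endpoint weights delta_l from eq. (3.74).
delta = np.ones(n + 1)
delta[0] = delta[n] = 1 / np.sqrt(2)

# QCT matrix C_n from eq. (3.75): an orthonormal DCT-I of size n + 1.
Cn = np.sqrt(2 / n) * np.outer(delta, delta) * np.cos(np.pi * np.outer(idx, idx) / n)
assert np.allclose(Cn @ Cn.T, np.eye(n + 1))  # orthogonal, hence unitary
```

Note that C_n is also symmetric, so it equals its own inverse, which is convenient when alternating between the coefficient basis and the node basis.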

The classical DCT on (n + 1)-dimensional vectors takes Θ(n log n) gates, while

the QCT on (n + 1)-dimensional quantum states can be implemented with complexity

poly(log n). According to Theorem 1 of Reference [108], the gate complexity of performing C_n is O(log² n). We observe that this can be improved as follows.

Lemma 3.6. The quantum cosine transform Cn defined by (3.75) can be performed

with gate complexity O(log n log log n). More generally, the multi-dimensional QCT Cn

defined by (3.76) can be performed with gate complexity O(d log n log log n).

Proof. According to the quantum circuit in Figure 2 of Reference [108], C_n can be decomposed into a QFT F_{n+1}, a permutation

P_n = Σ_{k=0}^n |(k + 1) mod (n + 1)⟩⟨k|,  (3.77)
and additional operations with O(1) cost. The QFT F_{n+1} has gate complexity O(log n log log n). We then consider an alternative way to implement P_n that improves over the approach in [110].

The permutation Pn can be decomposed as

P_n = F_n T_n F_n^{−1},  (3.78)

where F_n is the Fourier transform (3.64) and T_n = Σ_{k=0}^n e^{−2πik/(n+1)} |k⟩⟨k| is diagonal. The gate complexities of performing F_n and T_n are O(log n log log n) and O(log n), respectively.

It follows that Cn can be implemented with circuit complexity O(log n log log n).

The matrix Cn can be written as the tensor product

C_n = ⊗_{j=1}^d C_n.  (3.79)

As in Lemma 3.5, performing the multi-dimensional QCT is equivalent to performing a

QCT on each register. Thus, the gate complexity of performing Cn is O(d log n log log n).
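The identity P_n = F_n T_n F_n^{−1} from the proof can also be checked directly: with T_n = Σ_k e^{−2πik/(n+1)}|k⟩⟨k|, conjugation by the Fourier transform yields exactly the cyclic shift |k⟩ ↦ |(k + 1) mod (n + 1)⟩ (a small NumPy check of our own):

```python
import numpy as np

n = 7
N = n + 1
idx = np.arange(N)

F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)  # QFT, eq. (3.64)
T = np.diag(np.exp(-2j * np.pi * idx / N))                    # diagonal T_n from (3.78)

# Conjugating the diagonal phase by the Fourier transform yields the cyclic
# permutation P_n: ones on the subdiagonal plus the top-right corner.
P = F @ T @ np.linalg.inv(F)
shift = np.roll(np.eye(N), 1, axis=0)
assert np.allclose(P, shift)
```

This is the classical "shift in one basis = phase in the conjugate basis" duality, which is exactly what makes P_n implementable with only one QFT pair and one layer of diagonal phases.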

3.4.2 Linear system

In this section we introduce the quantum PDE solver for the problem (3.1). We

construct a linear system that encodes the solution of (3.1) according to the pseudo-

spectral method introduced above, using the QSFT/QCT introduced in Section 3.4.1 to

ensure sparsity.

We consider a linear PDE problem (Problem 3.1) with periodic boundary conditions

u(x + 2v) = u(x) ∀ x ∈ D, ∀ v ∈ Zd (3.80)

or non-periodic Dirichlet boundary conditions

u(x) = γ(x) ∀ x ∈ ∂D. (3.81)

According to the elliptic regularity theorem (Theorem 6 in Section 6.3 of Reference [99]),

there exists a unique solution û(x) in C ∞ for Problem 3.1.

We now show how to apply the Fourier and Chebyshev pseudo-spectral methods to

this problem. Our goal is to obtain the quantum state

|u⟩ ∝ Σ_{∥k∥∞,∥l∥∞≤n} c_k ϕ_k(χ_l) |l_1⟩…|l_d⟩,  (3.82)

where ϕk (χl ) is defined by (3.58) using (3.59) for the appropriate boundary conditions

(periodic or non-periodic). This state corresponds to a truncated Fourier/Chebyshev

approximation and is ϵ-close to the exact solution û(χl ) with n = poly(log(1/ϵ)) [68].

Note that this state encodes the values of the solution at the interpolation nodes (3.60)

appropriate to the boundary conditions (the uniform grid nodes in the Fourier approach,

for periodic boundary conditions, and the Chebyshev-Gauss-Lobatto quadrature nodes in

the Chebyshev approach, for non-periodic boundary conditions).

Instead of developing our algorithm for the standard basis, we aim to produce a

state
|c⟩ ∝ Σ_{∥k∥∞≤n} c_k |k_1⟩…|k_d⟩  (3.83)

that is the inverse QSFT/QCT of |u⟩. We then apply the QSFT/QCT to transform back

into the interpolation node basis.

The truncated spectral series of the inhomogeneity f (x) and the boundary conditions

γ(x) can be expressed as


f(x) = Σ_{∥k∥∞≤n} f̂_k ϕ_k(x)  (3.84)

and
γ(x) = Σ_{∥k∥∞≤n} γ̂_k ϕ_k(x),  (3.85)

respectively. We define quantum states |f ⟩ and |γ⟩ by interpolating the nodes {χl }

defined by (3.60) as

|f⟩ ∝ Σ_{∥k∥∞,∥l∥∞≤n} ϕ_k(χ_l) f̂_k |l_1⟩…|l_d⟩,  (3.86)

and
|γ⟩ ∝ Σ_{∥k∥∞,∥l∥∞≤n} ϕ_k(χ_l) γ̂_k |l_1⟩…|l_d⟩,  (3.87)

respectively. These are the states that we assume we can produce using oracles. We

perform the multi-dimensional inverse QSFT/QCT to obtain the states

|f̂⟩ ∝ Σ_{∥k∥∞≤n} f̂_k |k_1⟩…|k_d⟩,  (3.88)

and
|γ̂⟩ ∝ Σ_{∥k∥∞≤n} γ̂_k |k_1⟩…|k_d⟩.  (3.89)

Having defined these states, we now detail the construction of the linear system. At

a high level, we construct two linear systems: one system Ax = f (where x corresponds

to (3.83)) describes the differential equation, and another system Bx = g describes the

boundary conditions. We combine these into a linear system with the form

Lx = (A + B)x = f + g. (3.90)

Even though we do not impose the two linear systems separately, we show that there exists

a unique solution of (3.90) (which is therefore the solution of the simultaneous equations

Ax = f and Bx = g), since we show that L has full rank, and indeed we upper bound its

condition number in Section 3.4.3.

Part of this linear system will correspond to just the differential equation

L(u(χ_l)) = Σ_{∥j∥₁=2} A_j (∂^j/∂x^j) u(χ_l) = f(χ_l),  (3.91)

while another part will come from imposing the boundary conditions on ∂D = ∪_{j∈[d]} ∂D_j, where ∂D_j := {x ∈ D | x_j = ±1} is a (d − 1)-dimensional subspace. More specifically,

the boundary conditions

u(χl ) = γ(χl ) ∀ χl ∈ ∂D (3.92)

can be expressed as conditions on each boundary:

u(x_1, …, x_{j−1}, +1, x_{j+1}, …, x_d) = γ^{j+},  x ∈ ∂D_j, j ∈ [d],
u(x_1, …, x_{j−1}, −1, x_{j+1}, …, x_d) = γ^{j−},  x ∈ ∂D_j, j ∈ [d].  (3.93)

3.4.2.1 Linear system from the differential equation

To evaluate the matrix corresponding to the differential operator from (3.91), it is convenient to define coefficients c_k^{(j)} for ∥k∥∞ ≤ n such that

(∂^j/∂x^j) u(x) = Σ_{∥k∥∞≤n} c_k^{(j)} ϕ_k(x)  (3.94)

for some fixed j ∈ Nd (as we explain below, such a decomposition exists for the choices of

basis functions in (3.59)). Using this expression, we obtain the following linear equations

for c_k^{(j)}:

Σ_{∥j∥₁=2} Σ_{∥k∥∞,∥l∥∞≤n} A_j ϕ_k(χ_l) c_k^{(j)} |l_1⟩…|l_d⟩ = Σ_{∥k∥∞,∥l∥∞≤n} ϕ_k(χ_l) f̂_k |l_1⟩…|l_d⟩.  (3.95)

To determine the transformation between c_k and c_k^{(j)}, we can make use of the differential properties of Fourier and Chebyshev series, namely

(d/dx) e^{ikπx} = ikπ e^{ikπx}  (3.96)

and

2T_k(t) = T′_{k+1}(t)/(k + 1) − T′_{k−1}(t)/(k − 1),  (3.97)

respectively. We have

c_k^{(j)} = Σ_{∥r∥∞≤n} [D_n^{(j)}]_{kr} c_r,  ∥k∥∞ ≤ n,  (3.98)

where D_n^{(j)} can be expressed as the tensor product

D_n^{(j)} = D_n^{j_1} ⊗ D_n^{j_2} ⊗ ⋯ ⊗ D_n^{j_d},  (3.99)

with j = (j_1, …, j_d). The matrix D_n for the Fourier basis functions in (3.59) can be written as the (n + 1) × (n + 1) diagonal matrix with entries

[D_n]_{kk} = i(k − ⌊n/2⌋)π.  (3.100)

As detailed in Appendix A of Reference [54], the matrix Dn for the Chebyshev polynomials

in (3.59) can be expressed as the (n + 1) × (n + 1) upper triangular matrix with nonzero

entries

[D_n]_{kr} = 2r/σ_k,  k + r odd, r > k,  (3.101)

where

σ_k := 2 if k = 0, and σ_k := 1 if k ∈ [n].  (3.102)
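To see (3.98)–(3.102) in action (an illustration of ours, not from the text), the matrix D_n built from (3.101)–(3.102) maps the Chebyshev coefficients of u to those of u′, which can be checked against NumPy's own Chebyshev differentiation:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

n = 6
sigma = np.ones(n + 1)
sigma[0] = 2  # sigma_k from eq. (3.102)

# Upper triangular differentiation matrix from eq. (3.101).
D = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    for r in range(k + 1, n + 1):
        if (k + r) % 2 == 1:
            D[k, r] = 2 * r / sigma[k]

# D maps Chebyshev coefficients of u to Chebyshev coefficients of u'.
rng = np.random.default_rng(0)
c = rng.standard_normal(n + 1)
assert np.allclose((D @ c)[:n], cheb.chebder(c))
```

For example, the column for T_2 gives 4T_1, matching (T_2)′ = (2x² − 1)′ = 4x.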

Substituting (3.99) into (3.95), with Dn defined by (3.100) in the periodic case or

(3.101) in the non-periodic case, and performing the multi-dimensional inverse QSFT/QCT

(for a reason that will be explained in the next section), we obtain the following linear

equations for cr :

Σ_{∥j∥₁=2} Σ_{∥k∥∞,∥r∥∞≤n} A_j [D_n^{(j)}]_{kr} c_r |k_1⟩…|k_d⟩ = Σ_{∥k∥∞≤n} f̂_k |k_1⟩…|k_d⟩.  (3.103)

Notice that the matrices (3.100) and (3.101) are not full rank. More specifically, there exists at least one zero row in the matrix of (3.103) when using either (3.100) (k = ⌊n/2⌋) or (3.101) (k = n). To obtain an invertible linear system, we next introduce the boundary conditions.
3.4.2.2 Adding the linear system from the boundary conditions

When we use the form (3.82) of u(x) to write linear equations describing the

boundary conditions (3.93), we obtain a non-sparse linear system. Thus, for each x ∈

∂Dj in (3.93), we perform the (d − 1)-dimensional inverse QSFT/QCT on the d − 1

registers except the jth register to obtain the linear equations

Σ_{∥k∥∞≤n, k_j=n} c_k |k_1⟩…|k_d⟩ = Σ_{∥k∥∞≤n, k_j=n} γ̂_k^{j+} |k_1⟩…|k_d⟩,
Σ_{∥k∥∞≤n, k_j=n−1} (−1)^{k_j} c_k |k_1⟩…|k_d⟩ = Σ_{∥k∥∞≤n, k_j=n−1} γ̂_k^{j−} |k_1⟩…|k_d⟩  (3.104)

for all j ∈ [d], where the values of kj indicate that we place these constraints in the last

two rows with respect to the jth coordinate. We combine these equations with (3.103) to

obtain the linear system

Σ_{∥j∥₁=2} Σ_{∥k∥∞,∥r∥∞≤n} A_j [D̄_n^{(j)}]_{kr} c_r |k_1⟩…|k_d⟩ = Σ_{∥k∥∞≤n} Σ_{j=1}^d (A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−} + f̂_k) |k_1⟩…|k_d⟩,  (3.105)

where

D̄_n^{(j)} = D_n^{(j)} + G_n^{(j)} if ∥j∥₁ = 2, ∥j∥∞ = 2;  D̄_n^{(j)} = D_n^{(j)} if ∥j∥₁ = 2, ∥j∥∞ = 1,  (3.106)

with G_n^{(j)} defined below. In other words, D̄_n^{(j)} = D_n^{(j)} + G_n^{(j)} for each j that has exactly one entry equal to 2 and all other entries 0, whereas D̄_n^{(j)} = D_n^{(j)} for each j that has exactly two entries equal to 1 and all other entries 0. Here G_n^{(j)} can be expressed as the tensor product

G_n^{(j)} = I^{⊗(r−1)} ⊗ G_n ⊗ I^{⊗(d−r)},  (3.107)

where the rth entry of j is 2 and all other entries are 0. For the Fourier case in (3.59) used

for periodic boundary conditions, Dn comes from (3.100), and the nonzero entries of Gn

are

[Gn ]⌊n/2⌋,k = 1, k ∈ [n + 1]0 . (3.108)

Alternatively, for the Chebyshev case in (3.59) used for non-periodic boundary conditions,

Dn comes from (3.101), and the nonzero entries of Gn are

[G_n]_{n,k} = 1,  k ∈ [n + 1]₀,
[G_n]_{n−1,k} = (−1)^k,  k ∈ [n + 1]₀.  (3.109)

The system (3.105) has the form of (3.90). For instance, the matrix in (3.90) for Poisson's equation (3.6) is

L_Poisson := D̄_n^{(2,0,…,0)} + D̄_n^{(0,2,…,0)} + ⋯ + D̄_n^{(0,0,…,2)}  (3.110)
= ⊕_{j=1}^d D_n^{(2)} = D_n^{(2)} ⊗ I^{⊗(d−1)} + I ⊗ D_n^{(2)} ⊗ I^{⊗(d−2)} + ⋯ + I^{⊗(d−1)} ⊗ D_n^{(2)}.  (3.111)

For periodic boundary conditions, using (3.98), (3.100), and (3.108), the second-order differentiation matrix D_n^{(2)} has nonzero entries

[D_n^{(2)}]_{k,k} = −((k − ⌊n/2⌋)π)²,  k ∈ [n + 1]₀\{⌊n/2⌋},
[D_n^{(2)}]_{⌊n/2⌋,k} = 1,  k ∈ [n + 1]₀.  (3.112)

For non-periodic boundary conditions, using (3.98), (3.101), and (3.109), D_n^{(2)} has nonzero entries

[D_n^{(2)}]_{kr} = Σ_{l=k+1, k+l odd, l+r odd}^{r−1} [D_n]_{kl} [D_n]_{lr} = Σ_{l=k+1, k+l odd, l+r odd}^{r−1} (2l/σ_k)(2r/σ_l) = r(r² − k²)/σ_k,  k + r even, r > k + 1,
[D_n^{(2)}]_{n,k} = 1,  k ∈ [n + 1]₀,
[D_n^{(2)}]_{n−1,k} = (−1)^k,  k ∈ [n + 1]₀.  (3.113)

We discuss the invertible linear system (3.105) and upper bound its condition number

in the following section.

3.4.3 Condition number

We now analyze the condition number of the linear system. We begin with two

lemmas bounding the singular values of the matrices (3.112) and (3.113) that appear in

the linear system.

Lemma 3.7. Consider the case of periodic boundary conditions. Then for n ≥ 4, the largest and smallest singular values of D_n^{(2)} defined in (3.112) satisfy

σ_max(D_n^{(2)}) ≤ (2n)^{2.5},
σ_min(D_n^{(2)}) ≥ 1/√2.  (3.114)

Proof. By direct calculation of the l∞ norm (i.e., the maximum absolute row sum) of (3.112), for n ≥ 4, we have

∥D_n^{(2)}∥∞ ≤ ((n + 1)π/2)² ≤ (2n)².  (3.115)

Then the inverse of the matrix (3.112) is

[(D_n^{(2)})^{−1}]_{k,k} = −1/((k − ⌊n/2⌋)π)²,  k ∈ [n + 1]₀\{⌊n/2⌋},
[(D_n^{(2)})^{−1}]_{⌊n/2⌋,k} = 1/((k − ⌊n/2⌋)π)²,  k ∈ [n + 1]₀\{⌊n/2⌋},
[(D_n^{(2)})^{−1}]_{⌊n/2⌋,⌊n/2⌋} = 1,  (3.116)

as can easily be verified by a direct calculation.

By direct calculation of the Frobenius norm of (3.116), we have

∥(D_n^{(2)})^{−1}∥²_F ≤ 1 + 2 Σ_{k=1}^∞ 2/(k⁴π⁴) = 1 + (4/π⁴)(π⁴/90) ≤ 2.  (3.117)

Thus we have the result in (3.114):

σ_max(D_n^{(2)}) ≤ √(n + 1) ∥D_n^{(2)}∥∞ ≤ (2n)^{2.5},
σ_min(D_n^{(2)}) ≥ 1/∥(D_n^{(2)})^{−1}∥_F ≥ 1/√2  (3.118)

as claimed.
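These bounds are easy to confirm numerically for a small periodic instance (a check of ours, not part of the proof):

```python
import numpy as np

n = 8
m = n // 2

# Second-order Fourier differentiation matrix with the constraint row, eq. (3.112).
D2 = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    if k != m:
        D2[k, k] = -(((k - m) * np.pi) ** 2)
D2[m, :] = 1.0

s = np.linalg.svd(D2, compute_uv=False)
assert s[0] <= (2 * n) ** 2.5        # sigma_max bound of Lemma 3.7
assert s[-1] >= 1 / np.sqrt(2)       # sigma_min bound of Lemma 3.7
```

The constraint row at k = ⌊n/2⌋ is what removes the zero row of the pure differentiation matrix, so that the smallest singular value is bounded away from zero.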

Lemma 3.8. Consider the case of non-periodic boundary conditions. Then the largest and smallest singular values of D_n^{(2)} defined in (3.113) satisfy

σ_max(D_n^{(2)}) ≤ n⁴,
σ_min(D_n^{(2)}) ≥ 1/16.  (3.119)

Proof. By direct calculation of the Frobenius norm of (3.113), we have

∥D_n^{(2)}∥²_F ≤ n² max_{k,r} (r(r² − k²)/σ_k)² ≤ n² · n⁶ = n⁸.  (3.120)

Next we upper bound ∥(D_n^{(2)})^{−1}∥. By definition,

∥(D_n^{(2)})^{−1}∥ = sup_{∥b∥≤1} ∥(D_n^{(2)})^{−1} b∥.  (3.121)

Given any vector b satisfying ∥b∥ ≤ 1, we estimate ∥x∥ defined by the full-rank linear system

D_n^{(2)} x = b.  (3.122)

Since D_n^{(2)} is the sum of the upper triangular matrix D_n² and (3.109), the coordinates x_2, …, x_n are determined only by the coordinates b_0, …, b_{n−2}. So we first focus on the partial system

D_n^{(2)} [0, 0, x_2, …, x_n]^T = [b_0, …, b_{n−2}, 0, 0]^T.  (3.123)

Given the same b, we also define the vector y by

D_n [0, y_1, …, y_{n−1}, 0]^T = [b_0, …, b_{n−2}, 0, 0]^T,  (3.124)

where each coordinate of y can be expressed by

b_k = Σ_{l=1}^{n−1} [D_n]_{kl} y_l = Σ_{l=1, k+l odd, l>k}^{n−1} (2l/σ_k) y_l,  k ∈ [n − 1]₀.  (3.125)

Using this equation with k = l − 1 and k = l + 1, we can express y_l in terms of b_{l−1} and b_{l+1}:

y_l = (σ_{l−1}/(2l)) (b_{l−1} − (1/σ_{l−1}) b_{l+1}),  l ∈ [n − 1],  (3.126)

where we let bn−1 = bn = 0. Thus we have

Σ_{l=1}^{n−1} y_l² = Σ_{l=1}^{n−1} (σ_{l−1}/(2l))² (b_{l−1} − (1/σ_{l−1}) b_{l+1})²
≤ Σ_{l=1}^{n−1} (σ²_{l−1}/(4l²)) (1 + 1/σ²_{l−1}) (b²_{l−1} + b²_{l+1})
≤ (5/4)(b_0² + b_2²) + (1/8) Σ_{l=2}^{n−1} (b²_{l−1} + b²_{l+1})
≤ 2 Σ_{l=0}^{n−2} b_l².  (3.127)

We notice that y also satisfies

[0, y1 , . . . , yn−1 , 0]T = Dn [0, 0, x2 , . . . , xn ]T , (3.128)

where each coordinate of y can be expressed by

y_l = Σ_{r=1}^n [D_n]_{lr} x_r = Σ_{r=1, l+r odd, r>l}^n (2r/σ_l) x_r,  l ∈ [n − 1].  (3.129)

Substituting the (r − 1)st and the (r + 1)st equations of (3.129), we can express x in terms of y:

x_r = (σ_{r−1}/(2r)) (y_{r−1} − (1/σ_{r−1}) y_{r+1}),  r ∈ [n]\{1},  (3.130)

where we let yn = yn+1 = 0. Similarly, according to (3.130), we also have

Σ_{l=2}^n x_l² ≤ 2 Σ_{l=1}^{n−1} y_l².  (3.131)

Then we calculate x_0² + x_1² based on the last two equations of (3.122), (3.127), and (3.130), giving
x_0² + x_1² = (1/2)[(x_0 + x_1)² + (x_0 − x_1)²]
= (1/2)[(b_n − Σ_{l=2}^n x_l)² + (b_{n−1} − Σ_{l=2}^n (−1)^l x_l)²]
= (1/2)[(b_n − Σ_{l=2}^n (σ_{l−1}/(2l))(y_{l−1} − (1/σ_{l−1}) y_{l+1}))² + (b_{n−1} − Σ_{l=2}^n (−1)^l (σ_{l−1}/(2l))(y_{l−1} − (1/σ_{l−1}) y_{l+1}))²]
≤ (1/2)(1 + Σ_{l=2}^n σ²_{l−1}/(4l²)) [b_n² + Σ_{l=2}^n (y_{l−1} − (1/σ_{l−1}) y_{l+1})² + b_{n−1}² + Σ_{l=2}^n (y_{l−1} − (1/σ_{l−1}) y_{l+1})²]
≤ (1/2)(1 + (1/4) Σ_{l=2}^n 1/l²) [b_n² + b_{n−1}² + Σ_{l=2}^n (1 + 1/σ²_{l−1})(y²_{l−1} + y²_{l+1})]
≤ (1/2)(1 + π²/24) [b_n² + b_{n−1}² + 4 Σ_{l=1}^{n−1} y_l²]
≤ b_n² + b_{n−1}² + 8 Σ_{l=0}^{n−2} b_l².  (3.132)

Thus, based on (3.127), (3.131), and (3.132), the inequality

Σ_{l=0}^n x_l² = x_0² + x_1² + Σ_{l=2}^n x_l²
≤ b_n² + b_{n−1}² + 8 Σ_{l=0}^{n−2} b_l² + 4 Σ_{l=0}^{n−2} b_l²
= b_n² + b_{n−1}² + 12 Σ_{l=0}^{n−2} b_l² ≤ 12  (3.133)

holds for any vectors b satisfying ∥b∥ ≤ 1. Thus

∥(D_n^{(2)})^{−1}∥ = sup_{∥b∥≤1} ∥x∥ ≤ √12 < 16.  (3.134)

Altogether, we have

σ_max(D_n^{(2)}) ≤ ∥D_n^{(2)}∥_F ≤ n⁴,
σ_min(D_n^{(2)}) ≥ 1/∥(D_n^{(2)})^{−1}∥ ≥ 1/16  (3.135)

as claimed in (3.119).

Using these two lemmas, we first upper bound the condition number of the linear

system for Poisson’s equation, and then extend the result to general elliptic PDEs.

For the case of the Poisson equation, we use the following simple bounds on the

extreme singular values of a Kronecker sum.

Lemma 3.9. Let

L = ⊕_{j=1}^d M_j = M_1 ⊗ I^{⊗(d−1)} + I ⊗ M_2 ⊗ I^{⊗(d−2)} + ⋯ + I^{⊗(d−1)} ⊗ M_d,  (3.136)

where {M_j}_{j=1}^d are square matrices. If the largest and smallest singular values of M_j satisfy

σ_max(M_j) ≤ s_j^max,
σ_min(M_j) ≥ s_j^min,  (3.137)
respectively, then the condition number of L satisfies

κ_L ≤ (Σ_{j=1}^d s_j^max) / (Σ_{j=1}^d s_j^min).  (3.138)

Proof. We bound the singular values of the matrix exponential exp(M_j) by

σ_max(exp(M_j)) ≤ e^{s_j^max},
σ_min(exp(M_j)) ≥ e^{s_j^min}  (3.139)

using (3.137). The singular values of the Kronecker product ⊗_{j=1}^d exp(M_j) are

σ_{k_1,…,k_d}(⊗_{j=1}^d exp(M_j)) = ∏_{j=1}^d σ_{k_j}(exp(M_j)),  (3.140)

where σ_{k_j}(exp(M_j)) are the singular values of the matrix exp(M_j) for each j ∈ [d], and k_j runs from 1 to the dimension of M_j. Using the property of the Kronecker sum that

exp(L) = exp(⊕_{j=1}^d M_j) = ⊗_{j=1}^d exp(M_j),  (3.141)

we bound the singular values of the matrix exponential of (3.136) by

σ_max(exp(L)) ≤ e^{Σ_{j=1}^d s_j^max},
σ_min(exp(L)) ≥ e^{Σ_{j=1}^d s_j^min}.  (3.142)

Finally, taking the matrix logarithm in (3.142), we bound the singular values of L by

σ_max(L) ≤ Σ_{j=1}^d s_j^max,
σ_min(L) ≥ Σ_{j=1}^d s_j^min.  (3.143)

Thus the condition number of L satisfies

κ_L ≤ (Σ_{j=1}^d s_j^max) / (Σ_{j=1}^d s_j^min)  (3.144)

as claimed.
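For symmetric positive definite blocks — where singular values coincide with eigenvalues, and the eigenvalues of a Kronecker sum are exactly sums of block eigenvalues — the bound (3.138) holds with equality. A small NumPy check of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two random symmetric positive definite blocks M_1 and M_2.
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
M1 = A @ A.T + np.eye(3)
M2 = B @ B.T + np.eye(3)

# Kronecker sum L = M_1 (+) M_2, as in eq. (3.136) with d = 2.
I = np.eye(3)
L = np.kron(M1, I) + np.kron(I, M2)

s1 = np.linalg.svd(M1, compute_uv=False)
s2 = np.linalg.svd(M2, compute_uv=False)
bound = (s1[0] + s2[0]) / (s1[-1] + s2[-1])  # right-hand side of eq. (3.138)
assert np.isclose(np.linalg.cond(L), bound)
```

This is the clean special case; the lemma's content is that the same ratio still serves as an upper bound in the setting used here.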

This lemma easily implies a bound on the condition number of the linear system for

Poisson’s equation:

Corollary 3.1. Consider an instance of the quantum PDE problem as defined in Problem 3.1

for Poisson’s equation (3.6) with Dirichlet boundary conditions (3.81). Then for n ≥ 4,

the condition number of LPoisson in the linear system (3.90) satisfies

κLPoisson ≤ (2n)4 . (3.145)

Proof. The matrix in (3.90) for Poisson’s equation (3.6) is LPoisson defined in (3.111). For

both the periodic and the non-periodic case, we have

σ_max(D_n^{(2)}) ≤ n⁴,
σ_min(D_n^{(2)}) ≥ 1/16  (3.146)

by Lemma 3.7 and Lemma 3.8. Let M_j = D_n^{(2)} for j ∈ [d] in (3.136), and apply Lemma 3.9 with s_j^max = n⁴ and s_j^min = 1/16 in (3.138). Then the condition number of L_Poisson is bounded by

κ_{L_Poisson} ≤ σ_max(D_n^{(2)}) / σ_min(D_n^{(2)}) ≤ (2n)⁴  (3.147)

as claimed.
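The corollary can be spot-checked numerically in the periodic case for d = 2 (our own sanity check, not part of the dissertation):

```python
import numpy as np

n, d = 8, 2
m = n // 2

# Periodic second-order differentiation matrix, eq. (3.112).
D2 = np.diag([-(((k - m) * np.pi) ** 2) if k != m else 0.0 for k in range(n + 1)])
D2[m, :] = 1.0

# L_Poisson for d = 2 is the Kronecker sum D2 (+) D2, eq. (3.111).
I = np.eye(n + 1)
L = np.kron(D2, I) + np.kron(I, D2)
assert np.linalg.cond(L) <= (2 * n) ** 4  # Corollary 3.1: kappa <= (2n)^4
```

In practice the observed condition number is far below the (2n)⁴ bound, which is stated generously so that it also covers the non-periodic (Chebyshev) case.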

We now consider the condition number of the linear system for general elliptic

PDEs.

Lemma 3.10. Consider an instance of the quantum PDE problem as defined in Problem 3.1

with Dirichlet boundary conditions (3.81). Then for n ≥ 4, the condition number of L in

the linear system (3.90) satisfies

κ_L ≤ (∥A∥_Σ / (C∥A∥_*)) (2n)⁴,  (3.148)

where ∥A∥_Σ := Σ_{∥j∥₁≤2} |A_j| = Σ_{j₁,j₂=1}^d |A_{j₁,j₂}|, ∥A∥_* := Σ_{j=1}^d |A_{j,j}|, and C > 0 is defined in (3.8).

Recall that C quantifies the extent to which the global strict diagonal dominance

condition holds.

Proof. According to (3.105), the matrix in (3.90) is

L = Σ_{∥j∥₁=2} A_j D̄_n^{(j)}.  (3.149)

We upper bound the spectral norm of the matrix L by

∥L∥ ≤ Σ_{∥j∥₁=2} |A_j| ∥D̄_n^{(j)}∥.  (3.150)

For the matrix D̄_n^{(j)} defined by (3.106), Lemma 3.7 (in the periodic case) and Lemma 3.8 (in the non-periodic case) give the inequality

∥D̄_n^{(j)}∥ ≤ n⁴,  (3.151)

so we have

∥L∥ ≤ Σ_{∥j∥₁=2} |A_j| n⁴ = ∥A∥_Σ n⁴.  (3.152)

Next we lower bound ∥Lξ∥ for any ∥ξ∥ = 1.

It is non-trivial to directly compute the singular values of a sum of non-normal

matrices. Instead, we write L as a sum of terms L1 and L2 , where L1 is a tensor sum

similar to (3.111) that can be bounded by Lemma 3.9, and L2 is a sum of tensor products

that are easily bounded. Specifically, we have

L_1 = A_{1,1} D_n^{(2)} ⊗ I^{⊗(d−1)} + ⋯ + A_{d,d} I^{⊗(d−1)} ⊗ D_n^{(2)},
L_2 = L − L_1.  (3.153)

The ellipticity condition (3.7), Σ_{∥j∥₁=2} A_j ξ^j ≠ 0 for all ξ ≠ 0, can only hold if the A_{j,j} for j ∈ [d] are either all positive or all negative; we consider A_{j,j} > 0 without loss of generality, so

∥A∥_* = Σ_{j=1}^d |A_{j,j}| = Σ_{j=1}^d A_{j,j}.  (3.154)

Also, the global strict diagonal dominance condition (3.8) simplifies to

C = 1 − Σ_{j₁=1}^d (1/A_{j₁,j₁}) Σ_{j₂∈[d]\{j₁}} |A_{j₁,j₂}| > 0,  (3.155)

where 0 < C ≤ 1.
We now upper bound ∥L_2 L_1^{−1}∥ by bounding ∥D_n^{(j)} L_1^{−1}∥ for each j = (j_1, …, j_d) that has exactly two entries equal to 1 and all other entries 0. Specifically, consider j_{r_1} = j_{r_2} = 1 for r_1, r_2 ∈ [d], r_1 ≠ r_2, and j_r = 0 for r ∈ [d]\{r_1, r_2}. We denote

L^{(j)} := I^{⊗(r_1−1)} ⊗ D_n² ⊗ I^{⊗(d−r_1)} + I^{⊗(r_2−1)} ⊗ D_n² ⊗ I^{⊗(d−r_2)}.  (3.156)

We first upper bound ∥D_n^{(j)} v∥ by (1/2)∥L^{(j)} v∥. Notice that the matrices D_n^{(j)} and L^{(j)} share the same singular vectors. For k ∈ [n + 1]₀, we let v_k and λ_k denote the right singular vectors and corresponding singular values of D_n, respectively. Then the right singular vectors of D_n^{(j)} and L^{(j)} are v_k := ⊗_{j=1}^d v_{k_j}, where k = (k_1, …, k_d) with k_j ∈ [n + 1]₀ for j ∈ [d]. For any vector v = Σ_{∥k∥∞≤n} α_k v_k, we have

∥D_n^{(j)} v∥² = Σ_{∥k∥∞≤n} |α_k|² ∥D_n^{(j)} v_k∥² = Σ_{∥k∥∞≤n} |α_k|² (λ_{k_{r_1}} λ_{k_{r_2}})²,  (3.157)

∥L^{(j)} v∥² = Σ_{∥k∥∞≤n} |α_k|² ∥L^{(j)} v_k∥² = Σ_{∥k∥∞≤n} |α_k|² (λ²_{k_{r_1}} + λ²_{k_{r_2}})²,  (3.158)

which implies ∥D_n^{(j)} v∥ ≤ (1/2)∥L^{(j)} v∥ by the inequality of arithmetic and geometric means

(also known as the AM-GM inequality). Since this holds for any vector v, we have

∥D_n^{(j)} L_1^{−1}∥ ≤ (1/2) ∥L^{(j)} L_1^{−1}∥.  (3.159)

Next we upper bound ∥D_n² u∥ by ∥D_n^{(2)} u∥. For any vector u = [u_0, …, u_n]^T, define two vectors w = [w_0, …, w_n]^T and w̄ = [w̄_0, …, w̄_n]^T such that

D_n² [u_0, …, u_n]^T = [w_0, …, w_n]^T  (3.160)

and

D_n^{(2)} [u_0, …, u_n]^T = [w̄_0, …, w̄_n]^T.  (3.161)

Notice that w_{⌊n/2⌋} = 0 and w_k = w̄_k for k ∈ [n + 1]₀\{⌊n/2⌋} for periodic conditions, and w_{n−1} = w_n = 0 and w_k = w̄_k for k ∈ [n + 1]₀\{n − 1, n} for non-periodic conditions. Thus, for any vector u,

∥D_n² u∥² = ∥w∥² = Σ_{k=0}^n w_k² ≤ Σ_{k=0}^n w̄_k² = ∥w̄∥² = ∥D_n^{(2)} u∥².  (3.162)

Therefore,

∥L^{(j)} L_1^{−1}∥ ≤ Σ_{s=1}^2 ∥I^{⊗(r_s−1)} ⊗ D_n² ⊗ I^{⊗(d−r_s)} L_1^{−1}∥ ≤ Σ_{s=1}^2 ∥I^{⊗(r_s−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1}∥.  (3.163)

We also have

∥D_n^{(j)} L_1^{−1}∥ ≤ (1/2) Σ_{s=1}^2 ∥I^{⊗(r_s−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1}∥.  (3.164)

We can rewrite I^{⊗(r_s−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1} in the form

I^{⊗(r_s−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_s)} ( Σ_{h=1}^d A_{h,h} I^{⊗(r_h−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_h)} )^{−1}.  (3.165)

The matrices I^{⊗(r_h−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_h)} share the same singular values and singular vectors, so

∥I^{⊗(r_s−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1}∥ = max_k [ λ_{k_{r_s}} / (Σ_{h=1}^d A_{h,h} λ_{k_h}) ] < 1/A_{r_s,r_s},  (3.166)

where λ_{k_h} are singular values of I^{⊗(r_h−1)} ⊗ D_n^{(2)} ⊗ I^{⊗(d−r_h)} for k_h ∈ [n + 1]₀, h ∈ [d]. This implies

∥D_n^{(j)} L_1^{−1}∥ ≤ (1/2) (1/A_{r_1,r_1} + 1/A_{r_2,r_2}).  (3.167)

Using (3.155) and considering each instance of D_n^{(j)} in L_2, we have

∥L_2 L_1^{−1}∥ ≤ Σ_{j₁≠j₂} |A_{j₁,j₂}| ∥D_n^{(j)} L_1^{−1}∥ ≤ Σ_{j₁=1}^d (1/A_{j₁,j₁}) Σ_{j₂∈[d]\{j₁}} |A_{j₁,j₂}| ≤ 1 − C.  (3.168)

Since L and L_1 are invertible, ∥L_2 L_1^{−1}∥ ≤ 1 − C < 1, and by Lemma 3.9 applied to ∥L_1^{−1}∥, we have

∥L^{−1}∥ = ∥(L_1 + L_2)^{−1}∥ ≤ ∥(I + L_2 L_1^{−1})^{−1}∥ ∥L_1^{−1}∥ ≤ ∥L_1^{−1}∥ / (1 − ∥L_2 L_1^{−1}∥) ≤ (16/∥A∥_*) / C = 16 / (C∥A∥_*).  (3.169)

Thus we have

κ_L = ∥L∥ ∥L^{−1}∥ ≤ (∥A∥_Σ / (C∥A∥_*)) (2n)⁴  (3.170)

as claimed.

3.4.4 State preparation

We now describe a state preparation procedure for the vector f + g in the linear

system (3.90).

Lemma 3.11. Let Of be a unitary oracle that maps |0⟩|0⟩ to a state proportional to

|0⟩|f ⟩, and |ϕ⟩|0⟩ to |ϕ⟩|0⟩ for any |ϕ⟩ orthogonal to |0⟩; let Ox be a unitary oracle that

maps |0⟩|0⟩ to |0⟩|0⟩, |j⟩|0⟩ to a state proportional to |j⟩|γ j+ ⟩ for j ∈ [d], and |j + d⟩|0⟩

to a state proportional to |j + d⟩|γ j− ⟩ for j ∈ [d]. Suppose ∥|f ⟩∥, ∥|γ j+ ⟩∥, ∥|γ j− ⟩∥ and

Aj,j for j ∈ [d] are known. Define the parameter

q := √( Σ_{∥k∥∞≤n} Σ_{j=1}^d [f̂_k² + (A_{j,j} γ̂_k^{j+})² + (A_{j,j} γ̂_k^{j−})²] / Σ_{∥k∥∞≤n} |Σ_{j=1}^d (f̂_k + A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−})|² ).  (3.171)

Then the normalized quantum state

|B⟩ ∝ Σ_{∥k∥∞≤n} Σ_{j=1}^d (f̂_k + A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−}) |k_1⟩…|k_d⟩,  (3.172)

with coefficients defined as in (3.88) and (3.89), can be prepared with gate and query complexity O(qd² log n log log n).

Proof. Starting from the initial state |0⟩|0⟩, we first perform a unitary transformation U satisfying

U|0⟩ = ( ∥|f⟩∥ |0⟩ + Σ_{j=1}^d A_{j,j} ∥|γ^{j+}⟩∥ |j⟩ + Σ_{j=1}^d A_{j,j} ∥|γ^{j−}⟩∥ |j + d⟩ ) / √( ∥|f⟩∥² + Σ_{j=1}^d (A²_{j,j} ∥|γ^{j+}⟩∥² + A²_{j,j} ∥|γ^{j−}⟩∥²) )  (3.173)

on the first register to obtain

[ ( ∥|f⟩∥ |0⟩ + A_{1,1} ∥|γ^{1+}⟩∥ |1⟩ + ⋯ + A_{d,d} ∥|γ^{d−}⟩∥ |2d⟩ ) / √( ∥|f⟩∥² + Σ_{j=1}^d (A²_{j,j} ∥|γ^{j+}⟩∥² + A²_{j,j} ∥|γ^{j−}⟩∥²) ) ] |0⟩.  (3.174)
∥|f ⟩∥ + j=1 Aj,j ∥|γ ⟩∥ + Aj,j ∥|γ ⟩∥ j− 2

This can be done in time O(2d + 1) by standard techniques [92]. Then we apply Ox and

Of to obtain

|0⟩|f⟩ + A_{1,1} |1⟩|γ^{1+}⟩ + ⋯ + A_{d,d} |2d⟩|γ^{d−}⟩
∝ Σ_{∥k∥∞,∥l∥∞≤n} ϕ_k(χ_l) (f̂_k |0⟩ + A_{1,1} γ̂_k^{1+} |1⟩ + ⋯ + A_{d,d} γ̂_k^{d−} |2d⟩) |l_1⟩…|l_d⟩,  (3.175)

according to (3.86) and (3.87). We then perform the d-dimensional inverse QSFT (for

periodic boundary conditions) or inverse QCT (for non-periodic boundary conditions) on

the last d registers, obtaining

Σ_{∥k∥∞≤n} (f̂_k |0⟩ + A_{1,1} γ̂_k^{1+} |1⟩ + ⋯ + A_{d,d} γ̂_k^{d−} |2d⟩) |k_1⟩…|k_d⟩.  (3.176)

Finally, observe that if we measure the first register in a basis containing the uniform superposition |0⟩ + |1⟩ + ⋯ + |2d⟩ (say, the Fourier basis) and obtain the outcome corresponding to the uniform superposition, we produce the state

Σ_{∥k∥∞≤n} Σ_{j=1}^d (f̂_k + A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−}) |k_1⟩…|k_d⟩.  (3.177)

Since this outcome occurs with probability 1/q 2 , we can prepare this state with probability

close to 1 using O(q) steps of amplitude amplification. According to Lemma 3.5 and

Lemma 3.6, the d-dimensional (inverse) QSFT or QCT can be performed with gate

complexity O(d log n log log n). Thus the total gate and query complexity is

O(qd² log n log log n).

Alternatively, if it is possible to directly prepare the quantum state |B⟩, then we

may be able to avoid the factor of q in the complexity of the overall algorithm.

3.4.5 Main result

Having analyzed the condition number and the state preparation procedure for our approach, we are now ready to establish the main result, Theorem 3.2, as follows.

Theorem 3.2. Consider an instance of the quantum PDE problem as defined in Problem 3.1

with Dirichlet boundary conditions (3.81). Then there exists a quantum algorithm that

produces a state in the form of (3.82) whose amplitudes are proportional to u(x) on a set

of interpolation nodes x (with respect to the uniform grid nodes for periodic boundary

conditions or the Chebyshev-Gauss-Lobatto quadrature nodes for non-periodic boundary

conditions, as defined in (3.60)), where u(x)/∥u(x)∥ is ϵ-close to û(x)/∥û(x)∥ in l2

norm for all nodes x, succeeding with probability Ω(1), with a flag indicating success,

using

( d∥A∥_Σ/(C∥A∥_*) + qd² ) poly(log(g′/gϵ))  (3.54)

queries to oracles as defined in Section 3.4.4. Here ∥A∥_Σ := Σ_{∥j∥₁≤2} |A_j|, ∥A∥_* := Σ_{j=1}^d |A_{j,j}|, C > 0 is defined in (3.8), and

g := min_x ∥û(x)∥,  g′ := max_x max_{n∈N} ∥û^{(n+1)}(x)∥,  (3.55)

q = √( Σ_{∥k∥∞≤n} Σ_{j=1}^d [f̂_k² + (A_{j,j} γ̂_k^{j+})² + (A_{j,j} γ̂_k^{j−})²] / Σ_{∥k∥∞≤n} |Σ_{j=1}^d (f̂_k + A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−})|² ).  (3.56)

The gate complexity is larger than the query complexity by a factor of poly(log(d∥A∥Σ /ϵ)).

Proof. We analyze the complexity of the algorithm presented in Section 3.4.2.

First we choose

n := ⌈log(Ω)/log(log(Ω))⌉,  (3.178)

where

Ω = g′(1 + ϵ)/(gϵ).  (3.179)
By Eq. (1.8.28) of Reference [107], this choice guarantees

(n+1) en g′ gϵ
∥û(x) − u(x)∥ ≤ max ∥û (x)∥ n
≤ = =: δ. (3.180)
x (2n) Ω 1+ϵ

Now ∥û(x) − u(x)∥ ≤ δ implies

∥û(x)/∥û(x)∥ − u(x)/∥u(x)∥∥ ≤ δ / min{∥û(x)∥, ∥u(x)∥} ≤ δ/(g − δ) = ϵ,  (3.181)

so we can choose n to ensure that the normalized output state is ϵ-close to û(x)/∥û(x)∥.

As described in Section 3.4.2, the algorithm uses the high-precision QLSA from

Reference [46] and the multi-dimensional QSFT/QCT (and its inverse). According to

Lemma 3.5 and Lemma 3.6, the d-dimensional (inverse) QSFT or QCT can be performed

with gate complexity O(d log n log log n). According to Lemma 3.11, the query and gate

complexity for state preparation is O(qd² log n log log n).

For the linear system Lx = f + g in (3.90), the matrix L is an (n + 1)d × (n + 1)d

matrix with (n + 1) or (n + 1)d nonzero entries in any row or column for periodic or

non-periodic conditions, respectively. According to Lemma 3.10, the condition number of L is upper bounded by (∥A∥_Σ/(C∥A∥_*))(2n)⁴. Consequently, by Theorem 5 of Reference [46], the QLSA produces a state proportional to x with O((d∥A∥_Σ/(C∥A∥_*))(2n)⁵) queries to the oracles,

and its gate complexity is larger by a factor of poly(log(d∥A∥Σ n)). Using the value of n

specified in (3.178), the overall query complexity of our algorithm is

( d∥A∥_Σ/(C∥A∥_*) + qd² ) poly(log(g′/gϵ)),  (3.182)

and the gate complexity is

(d∥A∥_Σ/(C∥A∥_*)) poly(log(d∥A∥_Σ/ϵ)) + qd² poly(log(g′/gϵ)),  (3.183)

which is larger by a factor of poly(log(d∥A∥Σ /ϵ)), as claimed.

Note that we can establish a more efficient algorithm in the special case of the

Poisson equation with homogeneous boundary conditions. In this case, ∥A∥Σ = ∥A∥∗ =

d and C = 1. Under homogeneous boundary conditions, the complexity of state preparation

can be reduced to d poly(log(g ′ /gϵ)), since we can remove 2d applications of the QSFT

or QCT for preparing a state depending on the boundary conditions, and since γ = 0

there is no need to postselect on the uniform superposition to incorporate the boundary

conditions. In summary, the query complexity of the Poisson equation with homogeneous

boundary conditions is

$$d\, \mathrm{poly}(\log(g'/g\epsilon)); \tag{3.184}$$

again the gate complexity is larger by a factor of poly(log(d∥A∥Σ /ϵ)).

3.5 Discussion and open problems

We have presented high-precision quantum algorithms for d-dimensional PDEs

using the FDM and spectral methods. These algorithms use high-precision QLSAs to

solve Poisson’s equation and other second-order elliptic equations. Whereas previous

algorithms scaled as poly(d, 1/ϵ), our algorithms scale as poly(d, log(1/ϵ)).

This work raises several natural open problems. First, for the quantum adaptive

FDM, we only deal with Poisson’s equation with homogeneous boundary conditions.

Can we apply the adaptive FDM to other linear equations or to inhomogeneous boundary

conditions? The quantum spectral algorithm applies to second-order elliptic PDEs with

Dirichlet boundary conditions. Can we generalize it to other linear PDEs with Neumann

or mixed boundary conditions? Also, can we develop algorithms for space- and time-

dependent PDEs? These cases are more challenging since the quantum Fourier transform

cannot be directly applied to ensure sparsity. Finally, can we improve the dependence on

d?

Second, the complexity scales logarithmically with high-order derivatives (of the

inhomogeneity or solution) for both the adaptive FDM and the spectral method. In

particular, Theorem 3.1 shows that the complexity of the quantum adaptive FDM scales

logarithmically with $\frac{d^{2k+1}u}{dx^{2k+1}}$, and Theorem 3.2 shows that the complexity of the quantum

spectral method is poly(log g ′ ), where g ′ upper bounds ∥û(n+1) (x)∥ (see (3.55)). Such a

logarithmic dependence on high-order derivatives of the solution is typical for classical

algorithms, including the classical adaptive FDM (see for example Theorem 7 of Reference [6])

and spectral methods (see for example Eq. (1.8.28) of Reference [107]), both of which

have the same logarithmic dependence on $\frac{d^{2k+1}u}{dx^{2k+1}}$ and $g'$. This logarithmic dependence

means that the algorithm is efficient even when faced with a highly oscillatory solution

with an exponentially large derivative.

However, the query complexity of time-dependent Hamiltonian simulation only

depends on the first-order derivatives of the Hamiltonian [34, 35]. Can we develop

quantum algorithms for PDEs with query complexity independent of high-order derivatives,

and thereby establish an unexpected advantage of quantum algorithms for PDEs?

Third, can we use quantum algorithms for PDEs as a subroutine of other quantum

algorithms? For example, some PDE algorithms have state preparation steps that require

inverting finite difference matrices (such as Reference [70] using certain oracles for the

initial conditions); are there other scenarios in which state preparation can be done using

the solution of another system of PDEs? Can quantum algorithms for PDEs be applied to

other algorithmic tasks, such as optimization?

Finally, how should these algorithms be applied? While PDEs have broad applications,

much more work remains to understand the extent to which quantum algorithms can be

of practical value. Answering this question will require careful consideration of various

technical aspects of the algorithms. In particular: What measurements give useful information

about the solutions, and how can those measurements be efficiently implemented? How

should the oracles encoding the equations and boundary conditions be implemented in

practice? And with these aspects taken into account, what are the resource requirements

for quantum computers to solve classically intractable problems related to PDEs?

Chapter 4: Efficient quantum algorithms for dissipative nonlinear differential

equations

4.1 Introduction

In this chapter, we study efficient quantum algorithms for dissipative nonlinear differential equations.¹ As earlier introduced in Problem 1.5, we focus here on differential equations

with nonlinearities that can be expressed with quadratic polynomials, as described in

(4.1). Note that polynomials of degree higher than two, and even more general nonlinearities,

can be reduced to the quadratic case by introducing additional variables [72, 73]. The

quadratic case also directly includes many archetypal models, such as the logistic equation

in biology, the Lorenz system in atmospheric dynamics, and the Navier–Stokes equations

in fluid dynamics.

As discussed in Chapter 1, quantum algorithms offer the prospect of rapidly characterizing

the solutions of high-dimensional systems of linear ODEs [52, 53, 54] and PDEs [55,

56, 57, 58, 59, 60, 61]. Such algorithms can produce a quantum state proportional to

the solution of a sparse (or block-encoded) n-dimensional system of linear differential

equations in time poly(log n) using the quantum linear system algorithm [84].

Early work on quantum algorithms for differential equations already considered the
¹This chapter is based on the paper [81].
nonlinear case [74]. It gave a quantum algorithm for ODEs that simulates polynomial

nonlinearities by storing multiple copies of the solution. The complexity of this approach

is polynomial in the logarithm of the dimension but exponential in the evolution time,

scaling as $O(1/\epsilon^T)$ due to exponentially increasing resources used to maintain sufficiently

many copies of the solution to represent the nonlinearity throughout the evolution.

Recently, heuristic quantum algorithms for nonlinear ODEs have been studied.

Reference [75] explores a linearization technique known as the Koopman–von Neumann

method that might be amenable to the quantum linear system algorithm. In [76], the

authors provide a high-level description of how linearization can help solve nonlinear

equations on a quantum computer. However, neither paper makes precise statements

about concrete implementations or running times of quantum algorithms. The recent

preprint [77] also describes a quantum algorithm to solve a nonlinear ODE by linearizing

it using a different approach from the one taken here. However, a proof of correctness of

their algorithm involving a bound on the condition number and probability of success is

not given. The authors also do not describe how barriers such as those of [78] could be

avoided in their approach.

While quantum mechanics is described by linear dynamics, possible nonlinear modifications

of the theory have been widely studied. Generically, such modifications enable quickly

solving hard computational problems (e.g., solving unstructured search among n items

in time poly(log n)), making nonlinear dynamics exponentially difficult to simulate in

general [78, 79, 80]. Therefore, constructing efficient quantum algorithms for general

classes of nonlinear dynamics has been considered largely out of reach.

We design and analyze a quantum algorithm that overcomes this limitation using

Carleman linearization [73, 82, 83]. This approach embeds polynomial nonlinearities into

an infinite-dimensional system of linear ODEs, and then truncates it to obtain a finite-

dimensional linear approximation. The Carleman method has previously been used in

the analysis of dynamical systems [111, 112, 113] and the design of control systems

[114, 115, 116], but to the best of our knowledge it has not been employed in the context of

quantum algorithms. We discretize the finite ODE system in time using the forward Euler

method and solve the resulting linear equations with the quantum linear system algorithm

[46, 84]. We control the approximation error of this approach by combining a novel

convergence theorem with a bound for the global error of the Euler method. Furthermore,

we provide an upper bound for the condition number of the linear system and lower

bound the success probability of the final measurement. Subject to the condition R < 1,

where the quantity R (defined in Problem 4.1 below) characterizes the relative strength

of the nonlinear and dissipative linear terms, we show that the total complexity of this

quantum Carleman linearization algorithm is $sT^2 q\, \mathrm{poly}(\log T, \log n, \log 1/\epsilon)/\epsilon$, where $s$

is the sparsity, T is the evolution time, q quantifies the decay of the final solution relative

to the initial condition, n is the dimension, and ϵ is the allowed error (see Theorem 4.1).

In the regime R < 1, this is an exponential improvement over [74], which has complexity

exponential in T .

Note that the solution cannot decay exponentially in T for the algorithm to be

efficient, as captured by the dependence of the complexity on q—a known limitation

of quantum ODE algorithms [53]. For homogeneous ODEs with R < 1, the solution

necessarily decays exponentially in time (see equation (4.30)), so the algorithm is not

asymptotically efficient. However, even for solutions with exponential decay, we still

find an improvement over the best previous result $O(1/\epsilon^T)$ [74] for sufficiently small $\epsilon$.

Thus our algorithm might provide an advantage over classical computation for studying

evolution for short times. More significantly, our algorithm can handle inhomogeneous

quadratic ODEs, for which it can remain efficient in the long-time limit since the solution

can remain asymptotically nonzero (for an explicit example, see the discussion just before

the proof of Lemma 4.2), or can decay slowly (i.e., q can be poly(T )). Inhomogeneous

equations arise in many applications, including for example the discretization of PDEs

with nontrivial boundary conditions.

We also provide a quantum lower bound for the worst-case complexity of simulating

strongly nonlinear dynamics, showing that the algorithm’s condition R < 1 cannot be

significantly improved in general (Theorem 4.2). Following the approach of [78, 79], we

construct a protocol for distinguishing two states of a qubit driven by a certain quadratic

ODE. Provided R ≥ 2, this procedure distinguishes states with overlap 1 − ϵ in time

poly(log(1/ϵ)). Since nonorthogonal quantum states are hard to distinguish, this implies

a lower bound on the complexity of the quantum ODE problem.

Our quantum algorithm could potentially be applied to study models governed

by quadratic ODEs arising in biology and epidemiology as well as in fluid and plasma

dynamics. In particular, the celebrated Navier–Stokes equation with linear damping,

which describes many physical phenomena, can be treated by our approach provided

the Reynolds number is sufficiently small. We also note that while the formal validity of

our arguments assumes R < 1, we find in one numerical experiment that our proposed

approach remains valid for larger R (see Section 4.6).

We emphasize that, as in related quantum algorithms for linear algebra and differential

equations, instantiating our approach requires an implicit description of the problem that

allows for efficient preparation of the initial state and implementation of the dynamics.

Furthermore, since the output is encoded in a quantum state, readout is restricted to

features that can be revealed by efficient quantum measurements. More work remains

to understand how these methods might be applied, as we discuss further in Section 4.7.

The remainder of this chapter is structured as follows. Section 4.2 introduces

the quantum quadratic ODE problem. Section 4.3 presents the Carleman linearization

procedure and describes its performance. Section 4.4 gives a detailed analysis of the

quantum Carleman linearization algorithm. Section 4.5 establishes a quantum lower

bound for simulating quadratic ODEs. Section 4.6 describes how our approach could

be applied to several well-known ODEs and PDEs and presents numerical results for the

case of the viscous Burgers equation. Finally, we conclude with a discussion of the results

and some possible future directions in Section 4.7.

4.2 Quadratic ODEs

We focus on an initial value problem described by the n-dimensional quadratic

ODE

$$\frac{du}{dt} = F_2 u^{\otimes 2} + F_1 u + F_0(t), \qquad u(0) = u_{\mathrm{in}}. \tag{4.1}$$

Here $u = [u_1, \ldots, u_n]^T \in \mathbb{R}^n$, $u^{\otimes 2} = [u_1^2, u_1 u_2, \ldots, u_1 u_n, u_2 u_1, \ldots, u_n u_{n-1}, u_n^2]^T \in \mathbb{R}^{n^2}$, each $u_j = u_j(t)$ is a function of $t$ on the interval $[0, T]$ for $j \in [n] := \{1, \ldots, n\}$, $F_2 \in \mathbb{R}^{n \times n^2}$ and $F_1 \in \mathbb{R}^{n \times n}$ are time-independent matrices, and the inhomogeneity $F_0(t) \in \mathbb{R}^n$ is a $C^1$ continuous function of $t$. We let $\|\cdot\|$ denote the spectral norm.

The main computational problem we consider is as follows.

Problem 4.1. In the quantum quadratic ODE problem, we consider an n-dimensional

quadratic ODE as in (4.1). We assume F2 , F1 , and F0 are s-sparse (i.e., have at most s

nonzero entries in each row and column), F1 is diagonalizable, and that the eigenvalues

λj of F1 satisfy Re (λn ) ≤ · · · ≤ Re (λ1 ) < 0. We parametrize the problem in terms of

the quantity
 
$$R := \frac{1}{|\mathrm{Re}(\lambda_1)|} \left( \|u_{\mathrm{in}}\| \|F_2\| + \frac{\|F_0\|}{\|u_{\mathrm{in}}\|} \right). \tag{4.2}$$

For some given T > 0, we assume the values Re (λ1 ), ∥F2 ∥, ∥F1 ∥, ∥F0 (t)∥ for each

t ∈ [0, T ], and ∥F0 ∥ := maxt∈[0,T ] ∥F0 (t)∥, ∥F0′ ∥ := maxt∈[0,T ] ∥F0′ (t)∥ are known, and

that we are given oracles OF2 , OF1 , and OF0 that provide the locations and values of the

nonzero entries of F2 , F1 , and F0 (t) for any specified t, respectively, for any desired row

or column.² We are also given the value ∥uin∥ and an oracle Ox that maps |00 . . . 0⟩ ∈ Cn

to a quantum state proportional to uin . Our goal is to produce a quantum state |u(T )⟩

that is ϵ-close to the normalized u(T ) for some given T > 0 in ℓ2 norm.

When $F_0(t) = 0$ (i.e., the ODE is homogeneous), the quantity $R = \frac{\|u_{\mathrm{in}}\|\|F_2\|}{|\mathrm{Re}(\lambda_1)|}$ is

qualitatively similar to the Reynolds number, which characterizes the ratio of the (nonlinear)

convective forces to the (linear) viscous forces within a fluid [117, 118]. More generally,

R quantifies the combined strength of the nonlinearity and the inhomogeneity relative to

dissipation.

Note that without loss of generality, given a quadratic ODE satisfying (4.1) with
²For instance, F1 is modeled by a sparse matrix oracle OF1 that, on input (j, l), gives the location of the l-th nonzero entry in row j, denoted as k, and gives the value (F1)j,k.
R < 1, we can modify it by rescaling u → γu with a suitable constant γ to satisfy

∥F2 ∥ + ∥F0 ∥ < |Re (λ1 )| (4.3)

and

∥uin ∥ < 1, (4.4)

with R left unchanged by the rescaling. We use this rescaling in our algorithm and

its analysis. With this rescaling, a small R implies both small ∥uin ∥∥F2 ∥ and small

∥F0∥/∥uin∥ relative to |Re(λ1)|.
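The rescaling argument is straightforward to check numerically. Under $u \to \gamma u$, the norms transform as $\|F_2\| \to \|F_2\|/\gamma$, $\|F_0\| \to \gamma\|F_0\|$, and $\|u_{\mathrm{in}}\| \to \gamma\|u_{\mathrm{in}}\|$. The sketch below (with arbitrary illustrative values for the norms, not taken from any particular system) confirms that $R$ is invariant and that a suitable $\gamma$ enforces (4.3) and (4.4):

```python
import numpy as np

# Illustrative scalar data (assumed values for the norms, for demonstration):
F2, F0, re_l1, u_in = 0.3, 0.2, -1.0, 0.8

def R(nF2, nF0, re_l1, nu):
    # R from (4.2), in terms of ||F2||, ||F0||, Re(lambda_1), ||u_in||
    return (nu * nF2 + nF0 / nu) / abs(re_l1)

assert R(F2, F0, re_l1, u_in) < 1           # the dissipative regime

# Rescaling u -> gamma*u maps ||F2|| -> ||F2||/gamma, ||F0|| -> gamma*||F0||,
# ||u_in|| -> gamma*||u_in||; R is left unchanged.
gamma = 0.99 / u_in                          # makes gamma*||u_in|| = 0.99 < 1
F2r, F0r, u_r = F2 / gamma, gamma * F0, gamma * u_in
assert abs(R(F2r, F0r, re_l1, u_r) - R(F2, F0, re_l1, u_in)) < 1e-12
assert u_r < 1                               # condition (4.4)
assert F2r + F0r < abs(re_l1)                # condition (4.3)
print("R is invariant under rescaling; (4.3) and (4.4) hold")
```

Here $\gamma$ is chosen so that $\gamma\|u_{\mathrm{in}}\|$ is just below 1; by continuity, this also satisfies (4.3) whenever $R < 1$.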

4.3 Quantum Carleman linearization

It is challenging to directly simulate quadratic ODEs using quantum computers,

and indeed the complexity of the best known quantum algorithm is exponential in the

evolution time T [74]. However, for a dissipative nonlinear ODE without a source,

any quadratic nonlinear effect will only be significant for a finite time because of the

dissipation. To exploit this, we can create a linear system that approximates the initial

nonlinear evolution within some controllable error. After the nonlinear effects are no

longer important, the linear system properly captures the almost-linear evolution from

then on.

We develop such a quantum algorithm using the concept of Carleman linearization

[73, 82, 83]. Carleman linearization is a method for converting a finite-dimensional

system of nonlinear differential equations into an infinite-dimensional linear one. This is

achieved by introducing powers of the variables into the system, allowing it to be written

as an infinite sequence of coupled linear differential equations. We then truncate the

system to N equations, where the truncation level N depends on the allowed approximation

error, giving a finite linear ODE system.

Let us describe the Carleman linearization procedure in more detail. Given a system

of quadratic ODEs (4.1), we apply the Carleman procedure to obtain the system of linear

ODEs
$$\frac{d\hat{y}}{dt} = A(t)\hat{y} + b(t), \qquad \hat{y}(0) = \hat{y}_{\mathrm{in}} \tag{4.5}$$

with the tri-diagonal block structure

      
$$\frac{d}{dt}\begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \hat{y}_3 \\ \vdots \\ \hat{y}_{N-1} \\ \hat{y}_N \end{bmatrix} = \begin{bmatrix} A_1^1 & A_2^1 & & & & \\ A_1^2 & A_2^2 & A_3^2 & & & \\ & A_2^3 & A_3^3 & A_4^3 & & \\ & & \ddots & \ddots & \ddots & \\ & & & A_{N-2}^{N-1} & A_{N-1}^{N-1} & A_N^{N-1} \\ & & & & A_{N-1}^N & A_N^N \end{bmatrix} \begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \hat{y}_3 \\ \vdots \\ \hat{y}_{N-1} \\ \hat{y}_N \end{bmatrix} + \begin{bmatrix} F_0(t) \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}, \tag{4.6}$$

where $\hat{y}_j = u^{\otimes j} \in \mathbb{R}^{n^j}$, $\hat{y}_{\mathrm{in}} = [u_{\mathrm{in}}; u_{\mathrm{in}}^{\otimes 2}; \ldots; u_{\mathrm{in}}^{\otimes N}]$, and $A_{j+1}^j \in \mathbb{R}^{n^j \times n^{j+1}}$, $A_j^j \in \mathbb{R}^{n^j \times n^j}$, $A_{j-1}^j \in \mathbb{R}^{n^j \times n^{j-1}}$ for $j \in [N]$ satisfying

$$A_{j+1}^j = F_2 \otimes I^{\otimes j-1} + I \otimes F_2 \otimes I^{\otimes j-2} + \cdots + I^{\otimes j-1} \otimes F_2, \tag{4.7}$$
$$A_j^j = F_1 \otimes I^{\otimes j-1} + I \otimes F_1 \otimes I^{\otimes j-2} + \cdots + I^{\otimes j-1} \otimes F_1, \tag{4.8}$$
$$A_{j-1}^j = F_0(t) \otimes I^{\otimes j-1} + I \otimes F_0(t) \otimes I^{\otimes j-2} + \cdots + I^{\otimes j-1} \otimes F_0(t). \tag{4.9}$$
Note that A is a (3N s)-sparse matrix. The dimension of (4.5) is

$$\Delta := n + n^2 + \cdots + n^N = \frac{n^{N+1} - n}{n - 1} = O(n^N). \tag{4.10}$$
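The block definitions (4.7)–(4.9) can be verified directly: by the product rule, $d(u \otimes u)/dt$ must equal $A_1^2 u + A_2^2 u^{\otimes 2} + A_3^2 u^{\otimes 3}$. A small numerical sketch with random data for $n = 2$ (the matrices below are arbitrary test data):

```python
import numpy as np

# Verify the j = 2 row of the Carleman system: with y2 = u (x) u,
# dy2/dt = A^2_1 u + A^2_2 u^{(x)2} + A^2_3 u^{(x)3}, using (4.7)-(4.9).
rng = np.random.default_rng(1)
n = 2
F2 = rng.normal(size=(n, n**2))
F1 = rng.normal(size=(n, n))
F0 = rng.normal(size=n)
u = rng.normal(size=n)
I = np.eye(n)

du = F2 @ np.kron(u, u) + F1 @ u + F0          # right-hand side of (4.1)
lhs = np.kron(du, u) + np.kron(u, du)          # product rule for d(u (x) u)/dt

A21 = np.kron(F0.reshape(n, 1), I) + np.kron(I, F0.reshape(n, 1))  # (4.9)
A22 = np.kron(F1, I) + np.kron(I, F1)                              # (4.8)
A23 = np.kron(F2, I) + np.kron(I, F2)                              # (4.7)
rhs = A21 @ u + A22 @ np.kron(u, u) + A23 @ np.kron(u, np.kron(u, u))

assert np.allclose(lhs, rhs)
print("block equations (4.7)-(4.9) reproduce d(u (x) u)/dt")
```

The same identity holds for every level $j$, which is exactly why the infinite Carleman hierarchy is closed and linear.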

To construct a system of linear equations, we next divide [0, T ] into m = T /h time

steps and apply the forward Euler method on (4.5), letting

$$y^{k+1} = [I + A(kh)h] y^k + b(kh) \tag{4.11}$$

where $y^k \in \mathbb{R}^{\Delta}$ approximates $\hat{y}(kh)$ for each $k \in [m+1]_0 := \{0, 1, \ldots, m\}$, with $y^0 = y_{\mathrm{in}} := \hat{y}(0) = \hat{y}_{\mathrm{in}}$, and letting all $y^k$ be equal for $k \in [m+p+1]_0 \setminus [m+1]_0$, for some

sufficiently large integer p. (It is unclear whether another discretization could improve

performance, as discussed further in Section 4.7.) This gives an (m + p + 1)∆ × (m +

p + 1)∆ linear system

L|Y ⟩ = |B⟩ (4.12)

that encodes (4.11) and uses it to produce a numerical solution at time T , where

$$L = \sum_{k=0}^{m+p} |k\rangle\langle k| \otimes I - \sum_{k=1}^{m} |k\rangle\langle k-1| \otimes [I + A((k-1)h)h] - \sum_{k=m+1}^{m+p} |k\rangle\langle k-1| \otimes I \tag{4.13}$$

and

$$|B\rangle = \frac{1}{\sqrt{B_m}} \left( \|y_{\mathrm{in}}\| |0\rangle \otimes |y_{\mathrm{in}}\rangle + \sum_{k=1}^{m} \|b((k-1)h)\| |k\rangle \otimes |b((k-1)h)\rangle \right) \tag{4.14}$$

with a normalizing factor $B_m$. Observe that the system (4.12) has the lower triangular structure

$$\begin{bmatrix} I & & & & & & \\ -[I+A(0)h] & I & & & & & \\ & \ddots & \ddots & & & & \\ & & -[I+A((m-1)h)h] & I & & & \\ & & & -I & I & & \\ & & & & \ddots & \ddots & \\ & & & & & -I & I \end{bmatrix} \begin{bmatrix} y^0 \\ y^1 \\ \vdots \\ y^m \\ y^{m+1} \\ \vdots \\ y^{m+p} \end{bmatrix} = \begin{bmatrix} y_{\mathrm{in}} \\ b(0) \\ \vdots \\ b((m-1)h) \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \tag{4.15}$$
In the above system, the first $n$ components of $y^k$ for $k \in [m+p+1]_0$ (i.e., $y_1^k$) approximate the exact solution $u(T)$, up to normalization. We apply the high-precision quantum linear system algorithm (QLSA) [46] to (4.12) and postselect on $k$ to produce $y_1^k/\|y_1^k\|$ for some $k \in [m+p+1]_0 \setminus [m]_0$. The resulting error is at most

$$\epsilon := \max_{k \in [m+p+1]_0 \setminus [m]_0} \left\| \frac{u(T)}{\|u(T)\|} - \frac{y_1^k}{\|y_1^k\|} \right\|. \tag{4.16}$$

This error includes contributions from both Carleman linearization and the forward Euler

method. (The QLSA also introduces error, which we bound separately. Note that we

could instead apply the original QLSA [84] instead of its subsequent improvement [46],

but this would slightly complicate the error analysis and might perform worse in practice.)
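To make the construction concrete, the following classical sketch assembles the lower-triangular system (4.15) for a scalar ($n = 1$) homogeneous example with truncation $N = 2$, solves it with a dense classical solver standing in for the QLSA, and checks that the solution blocks reproduce the forward Euler iterates (4.11). All numerical values are illustrative:

```python
import numpy as np

# Scalar (n = 1) homogeneous example, Carleman truncation N = 2:
# d(u^j)/dt = j*f1*u^j + j*f2*u^(j+1) gives A = [[f1, f2], [0, 2*f1]].
f1, f2, u0 = -1.0, 0.2, 0.5
A = np.array([[f1, f2], [0.0, 2 * f1]])
D = 2                          # Delta: dimension of each time block
m, p, h = 20, 5, 0.05

# Assemble the lower-triangular system (4.15).
dim = (m + p + 1) * D
L = np.eye(dim)
for k in range(1, m + 1):
    L[k*D:(k+1)*D, (k-1)*D:k*D] = -(np.eye(D) + A * h)
for k in range(m + 1, m + p + 1):
    L[k*D:(k+1)*D, (k-1)*D:k*D] = -np.eye(D)
B = np.zeros(dim)
B[:D] = [u0, u0**2]            # y^0 = y_in; b((k-1)h) = 0 here

Y = np.linalg.solve(L, B)      # classical stand-in for the QLSA

# Compare against the forward Euler iteration (4.11) directly.
y = np.array([u0, u0**2])
for _ in range(m):
    y = (np.eye(D) + A * h) @ y
assert np.allclose(Y[m*D:(m+1)*D], y)          # block y^m matches
assert np.allclose(Y[(m+p)*D:(m+p+1)*D], y)    # padding blocks repeat y^m
```

The $p$ trailing identity blocks simply copy $y^m$ forward, which is what boosts the postselection success probability in the quantum algorithm.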

Now we state our main algorithmic result.

Theorem 4.1 (Quantum Carleman linearization algorithm). Consider an instance of the

quantum quadratic ODE problem as defined in Problem 4.1, with its Carleman linearization

as defined in (4.5). Assume R < 1. Let

$$g := \|u(T)\|, \qquad q := \frac{\|u_{\mathrm{in}}\|}{\|u(T)\|}. \tag{4.17}$$

There exists a quantum algorithm producing a state that approximates u(T )/∥u(T )∥ with

error at most ϵ ≤ 1, succeeding with probability Ω(1), with a flag indicating success,

using at most

$$\frac{sT^2 q \left[ (\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\| \right]}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|) g \epsilon}\, \mathrm{poly}\!\left( \log\!\left( \frac{sT \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{(1 - \|u_{\mathrm{in}}\|) g \epsilon} \right) \Big/ \log(1/\|u_{\mathrm{in}}\|) \right) \tag{4.18}$$

queries to the oracles $O_{F_2}$, $O_{F_1}$, $O_{F_0}$ and $O_x$. The gate complexity is larger than the query complexity by a factor of $\mathrm{poly}\!\left( \log\!\left( nsT \|F_2\| \|F_1\| \|F_0\| \|F_0'\| / (1 - \|u_{\mathrm{in}}\|) g \epsilon \right) / \log(1/\|u_{\mathrm{in}}\|) \right)$.


Furthermore, if the eigenvalues of F1 are all real, the query complexity is

$$\frac{sT^2 q \left[ (\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\| \right]}{g \epsilon}\, \mathrm{poly}\!\left( \log\!\left( \frac{sT \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{g \epsilon} \right) \Big/ \log(1/\|u_{\mathrm{in}}\|) \right) \tag{4.19}$$

and the gate complexity is larger by a factor of $\mathrm{poly}\!\left( \log\!\left( nsT \|F_2\| \|F_1\| \|F_0\| \|F_0'\| / g \epsilon \right) / \log(1/\|u_{\mathrm{in}}\|) \right)$.

4.4 Algorithm analysis

In this section we establish several lemmas and use them to prove Theorem 4.1.

4.4.1 Solution error

The solution error has three contributions: the error from applying Carleman linearization

to (4.1), the error in the time discretization of (4.5) by the forward Euler method, and the

error from the QLSA. Since the QLSA produces a solution with error at most ϵ with

complexity poly(log(1/ϵ)) [46], we focus on bounding the first two contributions.

4.4.1.1 Error from Carleman linearization

First, we provide an upper bound for the error from Carleman linearization for

arbitrary evolution time T . To the best of our knowledge, the first and only explicit

bound on the error of Carleman linearization appears in [73]. However, they only consider

homogeneous quadratic ODEs; and furthermore, to bound the error for arbitrary T , they

assume the logarithmic norm of F1 is negative (see Theorems 4.2 and 4.3 of [73]), which

is too strong for our case. Instead, we give a novel analysis under milder conditions,

providing the first convergence guarantee for general inhomogeneous quadratic ODEs.

We begin with a lemma that describes the decay of the solution of (4.1).

Lemma 4.1. Consider an instance of the quadratic ODE (4.1), and assume R < 1 as

defined in (4.2). Let

$$r_\pm := \frac{-\mathrm{Re}(\lambda_1) \pm \sqrt{\mathrm{Re}(\lambda_1)^2 - 4\|F_2\|\|F_0\|}}{2\|F_2\|}. \tag{4.20}$$

Then r± are distinct real numbers with 0 ≤ r− < r+ , and the solution u(t) of (4.1)

satisfies ∥u(t)∥ < ∥uin ∥ < r+ for any t > 0.

Proof. Consider the derivative of ∥u(t)∥. We have

$$\frac{d\|u\|^2}{dt} = u^\dagger F_2 (u \otimes u) + (u^\dagger \otimes u^\dagger) F_2^\dagger u + u^\dagger (F_1 + F_1^\dagger) u + u^\dagger F_0(t) + F_0(t)^\dagger u \le 2\|F_2\|\|u\|^3 + 2\,\mathrm{Re}(\lambda_1)\|u\|^2 + 2\|F_0\|\|u\|. \tag{4.21}$$

If $\|u\| \neq 0$, then

$$\frac{d\|u\|}{dt} \le \|F_2\|\|u\|^2 + \mathrm{Re}(\lambda_1)\|u\| + \|F_0\|. \tag{4.22}$$

Letting $a = \|F_2\| > 0$, $b = \mathrm{Re}(\lambda_1) < 0$, and $c = \|F_0\| \ge 0$, we consider a 1-dimensional quadratic ODE

$$\frac{dx}{dt} = ax^2 + bx + c, \qquad x(0) = \|u_{\mathrm{in}}\|. \tag{4.23}$$

Since $R < 1 \Leftrightarrow -b > a\|u_{\mathrm{in}}\| + \frac{c}{\|u_{\mathrm{in}}\|}$, the discriminant satisfies

$$b^2 - 4ac > \left( a\|u_{\mathrm{in}}\| + \frac{c}{\|u_{\mathrm{in}}\|} \right)^2 - 4a\|u_{\mathrm{in}}\| \cdot \frac{c}{\|u_{\mathrm{in}}\|} = \left( a\|u_{\mathrm{in}}\| - \frac{c}{\|u_{\mathrm{in}}\|} \right)^2 \ge 0. \tag{4.24}$$

Thus, $r_\pm$ defined in (4.20) are distinct real roots of $ax^2 + bx + c$. Since $r_- + r_+ = -\frac{b}{a} > 0$ and $r_- r_+ = \frac{c}{a} \ge 0$, we have $0 \le r_- < r_+$. We can rewrite the ODE as

$$\frac{dx}{dt} = ax^2 + bx + c = a(x - r_-)(x - r_+), \qquad x(0) = \|u_{\mathrm{in}}\|. \tag{4.25}$$

Letting $y = x - r_-$, we obtain an associated homogeneous quadratic ODE

$$\frac{dy}{dt} = -a(r_+ - r_-)y + ay^2 = ay[y - (r_+ - r_-)], \qquad y(0) = \|u_{\mathrm{in}}\| - r_-. \tag{4.26}$$

Since the homogeneous equation has the closed-form solution

$$y(t) = \frac{r_+ - r_-}{1 - e^{a(r_+ - r_-)t}\left[1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-)\right]}, \tag{4.27}$$

the solution of the inhomogeneous equation can be obtained as

$$x(t) = \frac{r_+ - r_-}{1 - e^{a(r_+ - r_-)t}\left[1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-)\right]} + r_-. \tag{4.28}$$

Therefore we have

$$\|u(t)\| \le \frac{r_+ - r_-}{1 - e^{a(r_+ - r_-)t}\left[1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-)\right]} + r_-. \tag{4.29}$$

Since $R < 1 \Leftrightarrow a\|u_{\mathrm{in}}\| + \frac{c}{\|u_{\mathrm{in}}\|} < -b \Leftrightarrow a\|u_{\mathrm{in}}\|^2 + b\|u_{\mathrm{in}}\| + c < 0$, $\|u_{\mathrm{in}}\|$ is located between the two roots $r_-$ and $r_+$, and thus $1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-) < 0$. This implies the bound in (4.29) decreases from $x(0) = \|u_{\mathrm{in}}\|$, so we have $\|u(t)\| < \|u_{\mathrm{in}}\| < r_+$ for any $t > 0$.
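The closed form (4.28) is easy to sanity-check numerically. The sketch below (with arbitrary coefficients $a, b, c$ satisfying the $R < 1$ condition) integrates the scalar comparison ODE (4.23) with a fourth-order Runge–Kutta step and compares against (4.28):

```python
import numpy as np

# Scalar comparison ODE (4.23): dx/dt = a x^2 + b x + c, with
# illustrative coefficients a > 0, b < 0, c >= 0 in the regime R < 1.
a, b, c, x0 = 0.2, -1.0, 0.1, 0.5
assert a * x0 + c / x0 < -b                      # equivalent to R < 1

disc = np.sqrt(b**2 - 4 * a * c)
r_minus, r_plus = (-b - disc) / (2 * a), (-b + disc) / (2 * a)   # roots, cf. (4.20)

def x_closed(t):                                 # closed form (4.28)
    w = 1 - np.exp(a * (r_plus - r_minus) * t) * (1 - (r_plus - r_minus) / (x0 - r_minus))
    return (r_plus - r_minus) / w + r_minus

# RK4 integration of (4.23)
f = lambda x: a * x**2 + b * x + c
x, h, T = x0, 1e-3, 3.0
for _ in range(int(round(T / h))):
    k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
    x += h/6 * (k1 + 2*k2 + 2*k3 + k4)

assert abs(x - x_closed(T)) < 1e-8               # closed form matches the integrator
assert r_minus < x < x0 < r_plus                 # monotone decay toward r_-
```

Consistent with the proof, the trajectory decreases monotonically from $x_0$ toward the attracting root $r_-$, staying strictly below $r_+$.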

We remark that $\lim_{t \to \infty} \frac{d\|u\|}{dt} = 0$ since $\frac{d\|u\|}{dt} < 0$ and $\|u(t)\| \ge 0$, so $u(t)$ approaches a stationary point of the right-hand side of (4.1) (called an attractor in the theory of dynamical systems).

Note that for a homogeneous equation (i.e., $\|F_0\| = 0$), this shows that the dissipation inevitably leads to exponential decay. In this case we have $r_- = 0$, so (4.29) gives

$$\|u(t)\| \le \frac{\|u_{\mathrm{in}}\| r_+}{e^{a r_+ t}(r_+ - \|u_{\mathrm{in}}\|) + \|u_{\mathrm{in}}\|}, \tag{4.30}$$

which decays exponentially in $t$.

On the other hand, as mentioned in the introduction, the solution of a dissipative inhomogeneous quadratic ODE can remain asymptotically nonzero. Here we present an example of this. Consider a time-independent uncoupled system with $\frac{du_j}{dt} = f_2 u_j^2 + f_1 u_j + f_0$, $j \in [n]$, with $u_j(0) = x_0 > 0$, $f_2 > 0$, $f_1 < 0$, $f_0 > 0$, and $R < 1$. We see that each $u_j(t)$ decreases from $x_0$ to $x_1 := \frac{-f_1 - \sqrt{f_1^2 - 4 f_2 f_0}}{2 f_2} > 0$, with $0 < x_1 < u_j(t) < x_0$. Hence, the norm of $u(t)$ is bounded as $0 < \sqrt{n}\, x_1 < \|u(t)\| < \sqrt{n}\, x_0$ for any $t > 0$. In general, it is hard to lower bound $\|u(t)\|$, but the above example shows that a nonzero inhomogeneity can prevent the norm of the solution from decreasing to zero.

We now give an upper bound on the error of Carleman linearization.

Lemma 4.2. Consider an instance of the quadratic ODE (4.1), with its corresponding

Carleman linearization as defined in (4.5). As in Problem 4.1, assume that the eigenvalues

λj of F1 satisfy Re (λn ) ≤ · · · ≤ Re (λ1 ) < 0. Assume that R defined in (4.2) satisfies

R < 1. Then for any j ∈ [N ], the error ηj (t) := u⊗j (t) − ŷj (t) satisfies

$$\|\eta_j(t)\| \le \|\eta(t)\| \le tN\|F_2\|\|u_{\mathrm{in}}\|^{N+1}. \tag{4.31}$$

Proof. The exact solution $u(t)$ of the original quadratic ODE (4.1) satisfies

$$\frac{d}{dt}\begin{bmatrix} u \\ u^{\otimes 2} \\ u^{\otimes 3} \\ \vdots \\ u^{\otimes(N-1)} \\ u^{\otimes N} \\ \vdots \end{bmatrix} = \begin{bmatrix} A_1^1 & A_2^1 & & & & & \\ A_1^2 & A_2^2 & A_3^2 & & & & \\ & A_2^3 & A_3^3 & A_4^3 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & A_{N-2}^{N-1} & A_{N-1}^{N-1} & A_N^{N-1} & \\ & & & & A_{N-1}^N & A_N^N & \ddots \\ & & & & & & \ddots \end{bmatrix} \begin{bmatrix} u \\ u^{\otimes 2} \\ u^{\otimes 3} \\ \vdots \\ u^{\otimes(N-1)} \\ u^{\otimes N} \\ \vdots \end{bmatrix} + \begin{bmatrix} F_0(t) \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \end{bmatrix}, \tag{4.32}$$

and the approximated solution $\hat{y}_j(t)$ satisfies (4.6). Comparing these equations, we have

$$\frac{d\eta}{dt} = A(t)\eta + \hat{b}(t), \qquad \eta(0) = 0 \tag{4.33}$$

with the tri-diagonal block structure

$$\frac{d}{dt}\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \\ \eta_{N-1} \\ \eta_N \end{bmatrix} = \begin{bmatrix} A_1^1 & A_2^1 & & & & \\ A_1^2 & A_2^2 & A_3^2 & & & \\ & A_2^3 & A_3^3 & A_4^3 & & \\ & & \ddots & \ddots & \ddots & \\ & & & A_{N-2}^{N-1} & A_{N-1}^{N-1} & A_N^{N-1} \\ & & & & A_{N-1}^N & A_N^N \end{bmatrix} \begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \\ \eta_{N-1} \\ \eta_N \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ A_{N+1}^N u^{\otimes(N+1)} \end{bmatrix}. \tag{4.34}$$
Consider the derivative of ∥η(t)∥. We have

$$\frac{d\|\eta\|^2}{dt} = \eta^\dagger (A(t) + A^\dagger(t)) \eta + \eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta. \tag{4.35}$$

For $\eta^\dagger (A(t) + A^\dagger(t)) \eta$, we bound each term as

$$\eta_j^\dagger A_{j+1}^j \eta_{j+1} + \eta_{j+1}^\dagger (A_{j+1}^j)^\dagger \eta_j \le 2j\|F_2\|\|\eta_{j+1}\|\|\eta_j\|,$$
$$\eta_j^\dagger [A_j^j + (A_j^j)^\dagger] \eta_j \le 2j\,\mathrm{Re}(\lambda_1)\|\eta_j\|^2, \tag{4.36}$$
$$\eta_j^\dagger A_{j-1}^j \eta_{j-1} + \eta_{j-1}^\dagger (A_{j-1}^j)^\dagger \eta_j \le 2j\|F_0\|\|\eta_{j-1}\|\|\eta_j\|$$

using the definitions in (4.7)–(4.9).

Define a matrix $G \in \mathbb{R}^{N \times N}$ with nonzero entries $G_{j-1,j} = j\|F_0\|$, $G_{j,j} = j\,\mathrm{Re}(\lambda_1)$, and $G_{j+1,j} = j\|F_2\|$, and let $\bar{\eta} := (\|\eta_1\|, \ldots, \|\eta_N\|)^T \in \mathbb{R}^N$, so that $\|\bar{\eta}\| = \|\eta\|$. Then (4.36) gives

$$\eta^\dagger (A(t) + A^\dagger(t)) \eta \le \bar{\eta}^T (G + G^T) \bar{\eta}. \tag{4.37}$$

Since $\|F_2\| + \|F_0\| < |\mathrm{Re}(\lambda_1)|$, $G$ is strictly diagonally dominant and thus the eigenvalues $\nu_j$ of $G$ satisfy $\mathrm{Re}(\nu_N) \le \cdots \le \mathrm{Re}(\nu_1) < 0$. Thus we have

$$\bar{\eta}^T (G + G^T) \bar{\eta} \le 2\,\mathrm{Re}(\nu_1)\|\eta\|^2. \tag{4.38}$$

For $\eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta$, we have

$$\eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta \le 2\|\hat{b}\|\|\eta\| \le 2\|A_{N+1}^N\| \|u^{\otimes(N+1)}\| \|\eta\|. \tag{4.39}$$

Since $\|A_{N+1}^N\| = N\|F_2\|$ and $\|u^{\otimes(N+1)}\| = \|u\|^{N+1} \le \|u_{\mathrm{in}}\|^{N+1}$, we have

$$\eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta \le 2N\|F_2\|\|u_{\mathrm{in}}\|^{N+1}\|\eta\|. \tag{4.40}$$

Using (4.37), (4.38), and (4.40) in (4.35), we find

$$\frac{d\|\eta\|^2}{dt} \le 2\,\mathrm{Re}(\nu_1)\|\eta\|^2 + 2N\|F_2\|\|u_{\mathrm{in}}\|^{N+1}\|\eta\|, \tag{4.41}$$

so elementary calculus gives

$$\frac{d\|\eta\|}{dt} = \frac{1}{2\|\eta\|} \frac{d\|\eta\|^2}{dt} \le \mathrm{Re}(\nu_1)\|\eta\| + N\|F_2\|\|u_{\mathrm{in}}\|^{N+1}. \tag{4.42}$$

Solving the differential inequality as an equation with $\eta(0) = 0$ gives us a bound on $\|\eta\|$:

$$\|\eta(t)\| \le \int_0^t e^{\mathrm{Re}(\nu_1)(t-s)} N\|F_2\|\|u_{\mathrm{in}}\|^{N+1}\, ds = N\|F_2\|\|u_{\mathrm{in}}\|^{N+1} \int_0^t e^{\mathrm{Re}(\nu_1)(t-s)}\, ds. \tag{4.43}$$

Finally, using

$$\int_0^t e^{\mathrm{Re}(\nu_1)(t-s)}\, ds = \frac{1 - e^{\mathrm{Re}(\nu_1)t}}{|\mathrm{Re}(\nu_1)|} \le t \tag{4.44}$$

(where we used the inequality $1 - e^{at} \le -at$ with $a < 0$), (4.43) gives the bound

$$\|\eta_j(t)\| \le \|\eta(t)\| \le tN\|F_2\|\|u_{\mathrm{in}}\|^{N+1} \tag{4.45}$$

as claimed.

Note that (4.44) can be bounded alternatively by

$$\int_0^t e^{\mathrm{Re}(\nu_1)(t-s)}\, ds = \frac{1 - e^{\mathrm{Re}(\nu_1)t}}{|\mathrm{Re}(\nu_1)|} \le \frac{1}{|\mathrm{Re}(\nu_1)|}, \tag{4.46}$$

and thus $\|\eta_j(t)\| \le \|\eta(t)\| \le \frac{N\|F_2\|\|u_{\mathrm{in}}\|^{N+1}}{|\mathrm{Re}(\nu_1)|}$. We select (4.44) because it avoids including an additional parameter $\mathrm{Re}(\nu_1)$.

We also give an improved analysis that works for homogeneous quadratic ODEs

(F0 (t) = 0) under milder conditions. This analysis follows the proof in [73] closely.

Corollary 4.1. Under the same setting of Lemma 4.2, assume F0 (t) = 0 in (4.1). Then

for any j ∈ [N ], the error ηj (t) := u⊗j (t) − ŷj (t) satisfies

$$\|\eta_j(t)\| \le \|u_{\mathrm{in}}\|^j R^{N+1-j}. \tag{4.47}$$

For j = 1, we have the tighter bound

$$\|\eta_1(t)\| \le \|u_{\mathrm{in}}\| R^N \left( 1 - e^{\mathrm{Re}(\lambda_1)t} \right)^N. \tag{4.48}$$

Proof. We again consider $\eta$ satisfying (4.33). Since $F_0(t) = 0$, (4.33) reduces to a time-independent ODE with an upper triangular block structure,

$$\frac{d\eta_j}{dt} = A_j^j \eta_j + A_{j+1}^j \eta_{j+1}, \qquad j \in [N-1] \tag{4.49}$$

and

$$\frac{d\eta_N}{dt} = A_N^N \eta_N + A_{N+1}^N u^{\otimes(N+1)}. \tag{4.50}$$

We proceed by backward substitution. Since $\eta_N(0) = 0$, we have

$$\eta_N(t) = \int_0^t e^{A_N^N (t - s_0)} A_{N+1}^N u^{\otimes(N+1)}(s_0)\, ds_0. \tag{4.51}$$

For $j \in [N]$, (4.8) gives $\|e^{A_j^j t}\| = e^{j\,\mathrm{Re}(\lambda_1) t}$ and (4.7) gives $\|A_{j+1}^j\| = j\|F_2\|$. By Lemma 4.1, $\|u^{\otimes(N+1)}\| = \|u\|^{N+1} \le \|u_{\mathrm{in}}\|^{N+1}$. We can therefore upper bound (4.51) by

$$\|\eta_N(t)\| \le \int_0^t \|e^{A_N^N (t - s_0)}\| \cdot \|A_{N+1}^N u^{\otimes(N+1)}(s_0)\|\, ds_0 \le N\|F_2\|\|u_{\mathrm{in}}\|^{N+1} \int_0^t e^{N\,\mathrm{Re}(\lambda_1)(t - s_0)}\, ds_0. \tag{4.52}$$

For $j = N-1$, (4.49) gives

$$\frac{d\eta_{N-1}}{dt} = A_{N-1}^{N-1} \eta_{N-1} + A_N^{N-1} \eta_N. \tag{4.53}$$

Again, since $\eta_{N-1}(0) = 0$, we have

$$\eta_{N-1}(t) = \int_0^t e^{A_{N-1}^{N-1}(t - s_1)} A_N^{N-1} \eta_N(s_1)\, ds_1, \tag{4.54}$$

which has the upper bound

$$\begin{aligned} \|\eta_{N-1}(t)\| &\le \int_0^t \|e^{A_{N-1}^{N-1}(t - s_1)}\| \cdot \|A_N^{N-1} \eta_N(s_1)\|\, ds_1 \\ &\le (N-1)\|F_2\| \int_0^t e^{(N-1)\,\mathrm{Re}(\lambda_1)(t - s_1)} \|\eta_N(s_1)\|\, ds_1 \\ &\le N(N-1)\|F_2\|^2 \|u_{\mathrm{in}}\|^{N+1} \int_0^t \int_0^{s_1} e^{(N-1)\,\mathrm{Re}(\lambda_1)(t - s_1)} e^{N\,\mathrm{Re}(\lambda_1)(s_1 - s_0)}\, ds_0\, ds_1, \end{aligned} \tag{4.55}$$

where we used (4.52) in the last step. Iterating this procedure for j = N − 2, . . . , 1, we

find

$$\begin{aligned} \|\eta_j(t)\| \le {} & \frac{N!}{(j-1)!} \|F_2\|^{N+1-j} \|u_{\mathrm{in}}\|^{N+1} \int_0^t \int_0^{s_{N-j}} \cdots \int_0^{s_2} \int_0^{s_1} e^{j\,\mathrm{Re}(\lambda_1)(t - s_{N-j})}\, e^{(j+1)\,\mathrm{Re}(\lambda_1)(s_{N-j} - s_{N-1-j})} \\ & \cdots e^{(N-1)\,\mathrm{Re}(\lambda_1)(s_2 - s_1)}\, e^{N\,\mathrm{Re}(\lambda_1)(s_1 - s_0)}\, ds_0 \cdots ds_{N-j} \\ = {} & \frac{N!}{(j-1)!} \|F_2\|^{N+1-j} \|u_{\mathrm{in}}\|^{N+1} \int_0^t \int_0^{s_{N-j}} \cdots \int_0^{s_2} \int_0^{s_1} e^{\mathrm{Re}(\lambda_1)\left(jt - Ns_0 + \sum_{k=1}^{N-j} s_k\right)}\, ds_0 \cdots ds_{N-j}. \end{aligned} \tag{4.56}$$

Finally, using

$$\int_0^{s_{k+1}} e^{(N-k)\,\mathrm{Re}(\lambda_1)(s_{k+1} - s_k)}\, ds_k = \frac{1 - e^{(N-k)\,\mathrm{Re}(\lambda_1) s_{k+1}}}{(N-k)|\mathrm{Re}(\lambda_1)|} \le \frac{1}{(N-k)|\mathrm{Re}(\lambda_1)|} \tag{4.57}$$

for $k = 0, \ldots, N-j$, (4.56) can be bounded by

$$\|\eta_j(t)\| \le \frac{N!}{(j-1)!} \|F_2\|^{N+1-j} \|u_{\mathrm{in}}\|^{N+1} \frac{(j-1)!}{N! |\mathrm{Re}(\lambda_1)|^{N+1-j}} = \frac{\|u_{\mathrm{in}}\|^{N+1} \|F_2\|^{N+1-j}}{|\mathrm{Re}(\lambda_1)|^{N+1-j}} = \|u_{\mathrm{in}}\|^j R^{N+1-j}. \tag{4.58}$$

For $j = 1$, the bound can be further improved. By Lemma 5.2 of [73], if $a \neq 0$,

$$\int_0^{s_N} \cdots \int_0^{s_2} \int_0^{s_1} e^{a\left(-Ns_0 + \sum_{k=1}^{N} s_k\right)}\, ds_0\, ds_1 \cdots ds_{N-1} = \frac{(e^{a s_N} - 1)^N}{N!\, a^N}. \tag{4.59}$$

With $s_N = t$ and $a = \mathrm{Re}(\lambda_1)$, we find

$$\begin{aligned} \|\eta_1(t)\| &\le N! \|F_2\|^N \|u_{\mathrm{in}}\|^{N+1} \int_0^{s_N} \cdots \int_0^{s_2} \int_0^{s_1} e^{\mathrm{Re}(\lambda_1)\left(-Ns_0 + \sum_{k=1}^{N} s_k\right)}\, ds_0\, ds_1 \cdots ds_{N-1} \\ &\le N! \|F_2\|^N \|u_{\mathrm{in}}\|^{N+1} \frac{(e^{\mathrm{Re}(\lambda_1)t} - 1)^N}{N!\, \mathrm{Re}(\lambda_1)^N} \\ &= \|u_{\mathrm{in}}\| R^N \left( 1 - e^{\mathrm{Re}(\lambda_1)t} \right)^N, \end{aligned} \tag{4.60}$$

which is tighter than the $j = 1$ case in (4.58).
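For a concrete check of (4.48), consider the scalar homogeneous logistic-type equation $du/dt = f_1 u + f_2 u^2$ (so $\lambda_1 = f_1$ and $R = u_0 f_2 / |f_1|$), whose solution is known in closed form and whose truncated Carleman matrix has entries $A_{j,j} = j f_1$, $A_{j,j+1} = j f_2$. A numerical sketch with illustrative coefficients:

```python
import numpy as np

# Homogeneous scalar test: du/dt = f1*u + f2*u^2, R = u0*f2/|f1| < 1.
f1, f2, u0 = -1.0, 0.25, 0.5
R = u0 * f2 / abs(f1)
N = 3                                     # Carleman truncation level

# Truncated Carleman matrix: d(u^j)/dt = j*f1*u^j + j*f2*u^(j+1).
A = np.zeros((N, N))
for j in range(1, N + 1):
    A[j-1, j-1] = j * f1
    if j < N:
        A[j-1, j] = j * f2

# Integrate the truncated linear system accurately with RK4.
y = np.array([u0**j for j in range(1, N + 1)])
h, T = 1e-3, 2.0
rhs = lambda y: A @ y
for _ in range(int(round(T / h))):
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Closed-form solution of the logistic-type equation (Bernoulli substitution).
u_exact = f1 * u0 * np.exp(f1 * T) / (f1 + f2 * u0 * (1 - np.exp(f1 * T)))
eta1 = abs(u_exact - y[0])

bound_cor = u0 * R**N * (1 - np.exp(f1 * T))**N   # Corollary 4.1, eq. (4.48)
bound_lem = T * N * f2 * u0**(N + 1)              # Lemma 4.2, eq. (4.31)
assert eta1 <= bound_cor + 1e-10
assert eta1 <= bound_lem
```

In this example both bounds hold, with the Corollary 4.1 bound roughly two orders of magnitude tighter than the Lemma 4.2 bound, illustrating the exponential decay of the truncation error in $N$ when $R < 1$.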

While Problem 4.1 makes some strong assumptions about the system of differential

equations, they appear to be necessary for our analysis. Specifically, the conditions

Re (λ1 ) < 0 and R < 1 are required to ensure arbitrary-time convergence.

Since the Euler method for (4.5) is unstable if Re (λ1 ) > 0 [52, 119], we only

consider the case Re (λ1 ) ≤ 0. If Re (λ1 ) = 0, (4.59) reduces to

$$\int_0^{s_N} \cdots \int_0^{s_2} \int_0^{s_1} e^{a\left(-Ns_0 + \sum_{k=1}^{N} s_k\right)}\, ds_0\, ds_1 \cdots ds_{N-1} = \frac{t^N}{N!}, \tag{4.61}$$

giving the error bound

$$\|\eta_1(t)\| \le \|u_{\mathrm{in}}\| (\|u_{\mathrm{in}}\| \|F_2\| t)^N \tag{4.62}$$

instead of (4.60). Then the error bound can be made arbitrarily small for a finite time by

increasing N, but after t > 1/(∥uin∥∥F2∥), the error bound diverges.

Furthermore, if R ≥ 1, Bernoulli’s inequality gives

\[
\|u_{\mathrm{in}}\|\bigl(1-Ne^{\operatorname{Re}(\lambda_1)t}\bigr)
\le \|u_{\mathrm{in}}\|\bigl(1-Ne^{\operatorname{Re}(\lambda_1)t}\bigr)R^N
\le \|u_{\mathrm{in}}\|\bigl(1-e^{\operatorname{Re}(\lambda_1)t}\bigr)^N R^N,
\tag{4.63}
\]
where the right-hand side upper bounds ∥η1 (t)∥ as in (4.60). Assuming ∥η1 (t)∥ =

∥u(t) − ŷ1 (t)∥ is smaller than ∥uin ∥, we require

\[
N = \Omega\bigl(e^{|\operatorname{Re}(\lambda_1)|t}\bigr).
\tag{4.64}
\]

In other words, to apply (4.48) for the Carleman linearization procedure, the truncation

order given by Lemma 4.2 must grow exponentially with t.



In fact, we prove in Section 4.5 that for R ≥ 2, no quantum algorithm (even one

based on a technique other than Carleman linearization) can solve Problem 4.1 efficiently.

It remains open to understand the complexity of the problem for 1 ≤ R < 2.

On the other hand, if R < 1, both (4.47) and (4.48) decrease exponentially with N ,

making the truncation efficient. We discuss the specific choice of N in (4.149) below.
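The convergence for R < 1 can be observed numerically on a one-dimensional instance, where the Carleman matrix is upper bidiagonal with diagonal entries jλ₁ and superdiagonal entries jF₂. This is a minimal sketch, not from the dissertation; the values λ₁ = −1, F₂ = 0.5, u_in = 0.8 (so R = 0.4 < 1) are hypothetical, and the truncation error of the first component is compared with the bound (4.60).

```python
import numpy as np

# Hypothetical 1-D instance of (4.1): du/dt = lam*u + f2*u^2, with R = f2*u0/|lam| = 0.4 < 1.
lam, f2, u0, T = -1.0, 0.5, 0.8, 2.0

def u_exact(t):
    # closed-form solution, obtained from the substitution v = 1/u
    return 1.0 / ((1.0/u0 + f2/lam) * np.exp(-lam*t) - f2/lam)

def carleman_y1(N, steps=400):
    # truncated Carleman system: d/dt y_j = j*lam*y_j + j*f2*y_{j+1}, with y_{N+1} dropped
    A = np.zeros((N, N))
    for j in range(N):
        A[j, j] = (j + 1) * lam
        if j + 1 < N:
            A[j, j + 1] = (j + 1) * f2
    y = np.array([u0 ** (j + 1) for j in range(N)])
    h = T / steps
    for _ in range(steps):                  # classical RK4 integration, accurate enough here
        k1 = A @ y
        k2 = A @ (y + 0.5*h*k1)
        k3 = A @ (y + 0.5*h*k2)
        k4 = A @ (y + h*k3)
        y = y + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return y[0]

R = f2 * u0 / abs(lam)
for N in (3, 5, 7):
    err = abs(carleman_y1(N) - u_exact(T))
    bnd = u0 * R**N * (1 - np.exp(lam*T))**N    # the bound (4.60)
    print(N, err, bnd)
```

The truncation error shrinks geometrically in N and stays below the bound, consistent with the analysis above.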

4.4.1.2 Error from forward Euler method

Next, we provide an upper bound for the error incurred by approximating (4.5) with

the forward Euler method. This problem has been well studied for general ODEs. Given

an ODE dz/dt = f(z) on [0, T] with a right-hand side f that is an L-Lipschitz continuous

function of z, the global error of the solution is upper bounded by e^{LT}, although in

most cases this bound overestimates the actual error [120]. To remove the exponential

dependence on T in our case, we derive a tighter bound for time discretization of (4.5) in

Lemma 4.3 below. This lemma is potentially useful for other ODEs as well and can be

straightforwardly adapted to other problems.

Lemma 4.3. Consider an instance of the quantum quadratic ODE problem as defined in

Problem 4.1, with R < 1 as defined in (4.2). Choose a time step

 
\[
h \le \min\left\{\frac{1}{N\|F_1\|},\;
\frac{2\bigl(|\operatorname{Re}(\lambda_1)|-\|F_2\|-\|F_0\|\bigr)}{N\bigl(|\operatorname{Re}(\lambda_1)|^2-(\|F_2\|+\|F_0\|)^2+\|F_1\|^2\bigr)}\right\}
\tag{4.65}
\]
in general, or
\[
h \le \frac{1}{N\|F_1\|}
\tag{4.66}
\]

if the eigenvalues of F1 are all real. Suppose the error from Carleman linearization η(t)

as defined in Lemma 4.2 is bounded by

\[
\|\eta(t)\| \le \frac{g}{4},
\tag{4.67}
\]

where g is defined in (4.17). Then the global error from the forward Euler method (4.11)

on the interval [0, T ] for (4.5) satisfies

\[
\|\hat y_1(T) - y_1^m\| \le \|\hat y(T) - y^m\| \le 3N^{2.5}\,Th\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr].
\tag{4.68}
\]

Proof. We define a linear system that locally approximates the initial value problem (4.5)

on [kh, (k + 1)h] for k ∈ [m]₀ as
\[
z^k = [I + A((k-1)h)h]\,\hat y((k-1)h) + b((k-1)h)h,
\tag{4.69}
\]
where ŷ(t) is the exact solution of (4.5). For k ∈ [m], we denote the local truncation error by
\[
e^k := \|\hat y(kh) - z^k\|
\tag{4.70}
\]
and the global error by
\[
g^k := \|\hat y(kh) - y^k\|,
\tag{4.71}
\]
where y^k in (4.11) is the numerical solution. Note that g^m = ∥ŷ(T) − y^m∥.

For the local truncation error, we Taylor expand ŷ(kh) about (k − 1)h with a second-order Lagrange remainder, giving
\[
\hat y(kh) = \hat y((k-1)h) + \hat y'((k-1)h)h + \frac{\hat y''(\xi)h^2}{2}
\tag{4.72}
\]
for some ξ ∈ [(k − 1)h, kh]. Since ŷ′((k − 1)h) = A((k − 1)h)ŷ((k − 1)h) + b((k − 1)h) by (4.5), we have
\[
\hat y(kh) = [I + A((k-1)h)h]\,\hat y((k-1)h) + b((k-1)h)h + \frac{\hat y''(\xi)h^2}{2} = z^k + \frac{\hat y''(\xi)h^2}{2},
\tag{4.73}
\]
and thus the local error (4.70) can be bounded as
\[
e^k = \|\hat y(kh) - z^k\| = \frac{h^2}{2}\|\hat y''(\xi)\| \le \frac{Mh^2}{2},
\tag{4.74}
\]
where M := max_{t∈[0,T]} ∥ŷ″(t)∥.

By the triangle inequality, the global error (4.71) can therefore be bounded as
\[
g^k = \|\hat y(kh) - y^k\| \le \|\hat y(kh) - z^k\| + \|z^k - y^k\| \le e^k + \|z^k - y^k\|.
\tag{4.75}
\]
Since y^k and z^k are obtained from the same linear system with different right-hand sides, we have the upper bound
\[
\|z^k - y^k\| = \bigl\|[I + A((k-1)h)h]\,[\hat y((k-1)h) - y^{k-1}]\bigr\|
\le \|I + A((k-1)h)h\|\cdot\|\hat y((k-1)h) - y^{k-1}\|
= \|I + A((k-1)h)h\|\,g^{k-1}.
\tag{4.76}
\]
In order to provide an upper bound for ∥I + A(t)h∥ for all t ∈ [0, T], we write
\[
I + A(t)h = H_2 + H_1 + H_0(t),
\tag{4.77}
\]
where
\[
H_2 = \sum_{j=1}^{N-1} |j\rangle\langle j+1| \otimes A_j^{j+1}h,
\tag{4.78}
\]
\[
H_1 = I + \sum_{j=1}^{N} |j\rangle\langle j| \otimes A_j^{j}h,
\tag{4.79}
\]
\[
H_0(t) = \sum_{j=2}^{N} |j\rangle\langle j-1| \otimes A_j^{j-1}h.
\tag{4.80}
\]
We provide upper bounds separately for ∥H₂∥, ∥H₁∥, and ∥H₀∥ := max_{t∈[0,T]} ∥H₀(t)∥, and use the bound max_{t∈[0,T]} ∥I + A(t)h∥ ≤ ∥H₂∥ + ∥H₁∥ + ∥H₀∥.

The eigenvalues of A_j^j consist of all j-term sums of the eigenvalues of F₁. More precisely, they are {Σ_{ℓ∈[j]} λ_{I_ℓ^j}}_{I^j∈[n]^j}, where {λ_ℓ}_{ℓ∈[n]} are the eigenvalues of F₁ and I^j ∈ [n]^j is a j-tuple of indices. The eigenvalues of H₁ are thus {1 + h Σ_{ℓ∈[j]} λ_{I_ℓ^j}}_{I^j∈[n]^j, j∈[N]}.

With J := max_{ℓ∈[n]} |Im(λ_ℓ)|, we have
\[
\Bigl|1+h\sum_{\ell\in[j]}\lambda_{I_\ell^j}\Bigr|^2
= \Bigl(1+h\sum_{\ell\in[j]}\operatorname{Re}(\lambda_{I_\ell^j})\Bigr)^2 + \Bigl(h\sum_{\ell\in[j]}\operatorname{Im}(\lambda_{I_\ell^j})\Bigr)^2
\le 1 - 2Nh|\operatorname{Re}(\lambda_1)| + N^2h^2\bigl(|\operatorname{Re}(\lambda_1)|^2 + J^2\bigr),
\tag{4.81}
\]
where we used Nh|Re(λ₁)| ≤ 1 by (4.65). Therefore
\[
\|H_1\| = \max_{j\in[N]}\max_{I^j\in[n]^j}\Bigl|1+h\sum_{\ell\in[j]}\lambda_{I_\ell^j}\Bigr|
\le \sqrt{1 - 2Nh|\operatorname{Re}(\lambda_1)| + N^2h^2\bigl(|\operatorname{Re}(\lambda_1)|^2 + J^2\bigr)}.
\tag{4.82}
\]
We also have
\[
\|H_2\| = \Bigl\|\sum_{j=1}^{N-1}|j\rangle\langle j+1|\otimes A_j^{j+1}h\Bigr\| \le \max_{j\in[N]}\|A_j^{j+1}\|h \le N\|F_2\|h
\tag{4.83}
\]
and
\[
\|H_0\| = \max_{t\in[0,T]}\Bigl\|\sum_{j=2}^{N}|j\rangle\langle j-1|\otimes A_j^{j-1}h\Bigr\|
\le \max_{t\in[0,T]}\max_{j\in[N]}\|A_j^{j-1}\|h \le N\max_{t\in[0,T]}\|F_0(t)\|h \le N\|F_0\|h.
\tag{4.84}
\]

Using the bounds (4.82)–(4.84), we aim to select the value of h to ensure
\[
\max_{t\in[0,T]}\|I + A(t)h\| \le \|H_2\| + \|H_1\| + \|H_0\| \le 1.
\tag{4.85}
\]
The assumption (4.65) implies
\[
h \le \frac{2\bigl(|\operatorname{Re}(\lambda_1)|-\|F_2\|-\|F_0\|\bigr)}{N\bigl(|\operatorname{Re}(\lambda_1)|^2-(\|F_2\|+\|F_0\|)^2+J^2\bigr)}
\tag{4.86}
\]
(note that the denominator is non-zero due to (4.3)). Then we have
\[
N^2h^2\bigl(|\operatorname{Re}(\lambda_1)|^2-(\|F_2\|+\|F_0\|)^2+J^2\bigr) \le 2Nh\bigl(|\operatorname{Re}(\lambda_1)|-\|F_2\|-\|F_0\|\bigr)
\]
\[
\implies \sqrt{1-2Nh|\operatorname{Re}(\lambda_1)|+N^2h^2\bigl(|\operatorname{Re}(\lambda_1)|^2+J^2\bigr)} \le 1-N(\|F_2\|+\|F_0\|)h,
\tag{4.87}
\]
so maxt∈[0,T ] ∥I + A(t)h∥ ≤ 1 as claimed.
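The implication above reduces to scalar arithmetic and can be verified directly. In this sketch all norm values are hypothetical (chosen to satisfy the assumption that |Re(λ₁)| exceeds ∥F₂∥ + ∥F₀∥), and the step size is taken slightly below the limits in (4.65).

```python
import numpy as np

# Hypothetical norms with |Re(lambda_1)| > ||F2|| + ||F0|| and ||F1|| >= |Re(lambda_1)|.
re_l1, f1, f2, f0, J, N = 2.0, 2.2, 0.4, 0.3, 1.0, 8

h86 = 2*(re_l1 - f2 - f0) / (N*(re_l1**2 - (f2 + f0)**2 + J**2))  # right-hand side of (4.86)
h = 0.9 * min(h86, 1.0/(N*f1))                                    # a step size allowed by (4.65)

H1 = np.sqrt(1 - 2*N*h*re_l1 + (N*h)**2 * (re_l1**2 + J**2))      # bound on ||H1||, cf. (4.82)
H2_H0 = N*(f2 + f0)*h                                             # bounds (4.83) and (4.84) combined
print(H1 + H2_H0)   # stays below 1, as required by (4.85)
```

With these (assumed) values the combined bound ∥H₂∥ + ∥H₁∥ + ∥H₀∥ stays strictly below 1, so the Euler iteration is non-expansive.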

The choice (4.65) can be improved if an upper bound on J is known. In particular, if J = 0, (4.86) simplifies to
\[
h \le \frac{2}{N\bigl(|\lambda_1|+\|F_2\|+\|F_0\|\bigr)},
\tag{4.88}
\]
which is satisfied by (4.66) using ∥F₂∥ + ∥F₀∥ < |λ₁| ≤ ∥F₁∥.

Using this in (4.76), we have
\[
\|z^k - y^k\| \le \|I + A((k-1)h)h\|\,g^{k-1} \le g^{k-1}.
\tag{4.89}
\]

Plugging (4.89) into (4.75) iteratively, we find
\[
g^k \le g^{k-1} + e^k \le g^{k-2} + e^{k-1} + e^k \le \cdots \le \sum_{j=1}^{k} e^j, \qquad k \in [m+1]_0.
\tag{4.90}
\]
Using (4.74), this shows that the global error from the forward Euler method is bounded by
\[
\|\hat y_1(kh) - y_1^k\| \le \|\hat y(kh) - y^k\| = g^k \le \sum_{j=1}^{k} e^j \le \frac{Mkh^2}{2},
\tag{4.91}
\]
and when k = m, mh = T,
\[
\|\hat y_1(T) - y_1^m\| \le g^m \le \frac{Mmh^2}{2} = \frac{MTh}{2}.
\tag{4.92}
\]

Finally, we remove the dependence on M = max_{t∈[0,T]} ∥ŷ″(t)∥. Since
\[
\hat y''(t) = [A(t)\hat y(t) + b(t)]' = A(t)\hat y'(t) + A'(t)\hat y(t) + b'(t)
= A(t)[A(t)\hat y(t) + b(t)] + A'(t)\hat y(t) + b'(t),
\tag{4.93}
\]
we have
\[
\|\hat y''(t)\| \le \|A(t)\|^2\|\hat y(t)\| + \|A(t)\|\|b(t)\| + \|A'(t)\|\|\hat y(t)\| + \|b'(t)\|.
\tag{4.94}
\]
Maximizing each term for t ∈ [0, T], we have
\[
\max_{t\in[0,T]}\|A(t)\| = \max_{t\in[0,T]}\Bigl\|\sum_{j=1}^{N-1}|j\rangle\langle j+1|\otimes A_j^{j+1} + \sum_{j=1}^{N}|j\rangle\langle j|\otimes A_j^{j} + \sum_{j=2}^{N}|j\rangle\langle j-1|\otimes A_j^{j-1}\Bigr\|
\le N(\|F_2\|+\|F_1\|+\|F_0\|),
\tag{4.95}
\]
\[
\max_{t\in[0,T]}\|A'(t)\| = \max_{t\in[0,T]}\Bigl\|\sum_{j=2}^{N}|j\rangle\langle j-1|\otimes (A_j^{j-1})'\Bigr\| \le N\max_{t\in[0,T]}\|F_0'(t)\| \le N\|F_0'\|,
\tag{4.96}
\]
\[
\max_{t\in[0,T]}\|b(t)\| = \max_{t\in[0,T]}\|F_0(t)\| \le \|F_0\|,
\tag{4.97}
\]
\[
\max_{t\in[0,T]}\|b'(t)\| = \max_{t\in[0,T]}\|F_0'(t)\| \le \|F_0'\|,
\tag{4.98}
\]
and using ∥u∥ ≤ ∥u_in∥ < 1, ∥η_j(t)∥ ≤ ∥η(t)∥ ≤ g/4 < ∥u_in∥/4 by (4.67), and R < 1, we have
\[
\|\hat y(t)\|^2 \le \sum_{j=1}^{N}\|\hat y_j(t)\|^2 = \sum_{j=1}^{N}\|u^{\otimes j}(t) - \eta_j(t)\|^2 \le 2\sum_{j=1}^{N}\bigl(\|u^{\otimes j}(t)\|^2 + \|\eta_j(t)\|^2\bigr)
\le 2\sum_{j=1}^{N}\Bigl(\|u_{\mathrm{in}}\|^{2j} + \frac{\|u_{\mathrm{in}}\|^2}{16}\Bigr) < 2\sum_{j=1}^{N}\Bigl(1+\frac{1}{16}\Bigr)\|u_{\mathrm{in}}\|^2 < 4N\|u_{\mathrm{in}}\|^2 < 4N
\tag{4.99}
\]
for all t ∈ [0, T]. Therefore, substituting the bounds (4.95)–(4.99) into (4.94), we find
\[
M \le \max_{t\in[0,T]}\bigl\{\|A(t)\|^2\|\hat y(t)\| + \|A(t)\|\|b(t)\| + \|A'(t)\|\|\hat y(t)\| + \|b'(t)\|\bigr\}
\le 2N^{2.5}(\|F_2\|+\|F_1\|+\|F_0\|)^2 + N(\|F_2\|+\|F_1\|+\|F_0\|)\|F_0\| + 2N^{1.5}\|F_0'\| + \|F_0'\|
\le 6N^{2.5}\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2 + \|F_0'\|\bigr].
\tag{4.100}
\]
Thus, (4.92) gives
\[
\|\hat y_1(kh) - y_1^k\| \le 3N^{2.5}kh^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2 + \|F_0'\|\bigr],
\tag{4.101}
\]
and when k = m, mh = T,
\[
\|\hat y_1(T) - y_1^m\| \le 3N^{2.5}Th\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2 + \|F_0'\|\bigr]
\tag{4.102}
\]

as claimed.
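The mechanism of Lemma 4.3 — once ∥I + Ah∥ ≤ 1, the global Euler error grows only linearly in T and is O(h) — can be illustrated on a small linear system. This sketch is illustrative only; the two-dimensional dissipative system and the step sizes are hypothetical choices, not data from the dissertation.

```python
import numpy as np

A = np.diag([-1.0, -2.0])            # hypothetical dissipative system (Re(lambda) < 0)
b = np.array([1.0, 0.5])
y0 = np.array([0.2, 0.4])
T = 4.0

def exact(t):
    # variation-of-constants solution of y' = A y + b for diagonal A
    lam = np.diag(A)
    return (y0 + b/lam) * np.exp(lam*t) - b/lam

def euler_error(h):
    m = int(round(T / h))
    y = y0.copy()
    for _ in range(m):
        y = (np.eye(2) + A*h) @ y + b*h   # forward Euler step, cf. (4.11)
    return np.linalg.norm(y - exact(T))

e1, e2 = euler_error(0.02), euler_error(0.01)
print(e1, e2, e1/e2)   # halving h roughly halves the error (ratio close to 2)
```

Note that with these values ∥I + Ah∥ ≤ 1, so no e^{LT} growth appears; the error stays well below the MTh/2 estimate of (4.92).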

4.4.2 Condition number

Now we upper bound the condition number of the linear system.

Lemma 4.4. Consider an instance of the quantum quadratic ODE problem as defined in

Problem 4.1. Apply the forward Euler method (4.11) with time step (4.65) to the Carleman

linearization (4.5). Then the condition number of the matrix L defined in (4.12) satisfies

κ ≤ 3(m + p + 1). (4.103)

Proof. We begin by upper bounding ∥L∥. We write

L = L1 + L2 + L3 , (4.104)

where
\[
L_1 = \sum_{k=0}^{m+p} |k\rangle\langle k| \otimes I,
\tag{4.105}
\]
\[
L_2 = -\sum_{k=1}^{m} |k\rangle\langle k-1| \otimes [I + A((k-1)h)h],
\tag{4.106}
\]
\[
L_3 = -\sum_{k=m+1}^{m+p} |k\rangle\langle k-1| \otimes I.
\tag{4.107}
\]

Clearly ∥L1 ∥ = ∥L3 ∥ = 1. Furthermore, ∥L2 ∥ ≤ maxt∈[0,T ] ∥I + A(t)h∥ ≤ 1 by (4.85),

which follows from the choice of time step (4.65). Therefore,

∥L∥ ≤ ∥L1 ∥ + ∥L2 ∥ + ∥L3 ∥ ≤ 3. (4.108)

Next we upper bound

\[
\|L^{-1}\| = \sup_{\||B\rangle\| \le 1} \|L^{-1}|B\rangle\|.
\tag{4.109}
\]
We express |B⟩ as
\[
|B\rangle = \sum_{k=0}^{m+p} \beta_k |k\rangle = \sum_{k=0}^{m+p} |b^k\rangle,
\tag{4.110}
\]
where |b^k⟩ := β_k|k⟩ satisfies
\[
\sum_{k=0}^{m+p} \||b^k\rangle\|^2 = \||B\rangle\|^2 \le 1.
\tag{4.111}
\]
Given any |b^k⟩ for k ∈ [m + p + 1]₀, we define
\[
|Y^k\rangle := L^{-1}|b^k\rangle = \sum_{l=0}^{m+p} \gamma_l^k |l\rangle = \sum_{l=0}^{m+p} |Y_l^k\rangle,
\tag{4.112}
\]
where |Y_l^k⟩ := γ_l^k|l⟩. We first upper bound ∥|Y^k⟩∥ = ∥L⁻¹|b^k⟩∥, and then use this to upper bound ∥L⁻¹|B⟩∥.

We consider two cases. First, for fixed k ∈ [m + 1]₀, we directly calculate |Y_l^k⟩ for each l ∈ [m + p + 1]₀ by the forward Euler method (4.11), giving
\[
|Y_l^k\rangle =
\begin{cases}
0, & l \in [k]_0; \\
|b^k\rangle, & l = k; \\
\prod_{j=k}^{l-1} [I + A(jh)h]\, |b^k\rangle, & l \in [m+1]_0 \setminus [k+1]_0; \\
\prod_{j=k}^{m-1} [I + A(jh)h]\, |b^k\rangle, & l \in [m+p+1]_0 \setminus [m+1]_0.
\end{cases}
\tag{4.113}
\]
Since max_{t∈[0,T]} ∥I + A(t)h∥ ≤ 1 by (4.85), (4.112) gives
\[
\||Y^k\rangle\|^2 = \sum_{l=0}^{m+p} \||Y_l^k\rangle\|^2 \le \sum_{l=k}^{m} \||b^k\rangle\|^2 + \sum_{l=m+1}^{m+p} \||b^k\rangle\|^2
\le (m+p+1-k)\||b^k\rangle\|^2 \le (m+p+1)\||b^k\rangle\|^2.
\tag{4.114}
\]
Second, for fixed k ∈ [m + p + 1]₀ \ [m + 1]₀, similarly to (4.113), we directly calculate |Y_l^k⟩ using (4.11), giving
\[
|Y_l^k\rangle =
\begin{cases}
0, & l \in [k]_0; \\
|b^k\rangle, & l \in [m+p+1]_0 \setminus [k]_0.
\end{cases}
\tag{4.115}
\]
Similarly to (4.114), we have (again using (4.112))
\[
\||Y^k\rangle\|^2 = \sum_{l=0}^{m+p} \||Y_l^k\rangle\|^2 = \sum_{l=k}^{m+p} \||b^k\rangle\|^2 = (m+p+1-k)\||b^k\rangle\|^2 \le (m+p+1)\||b^k\rangle\|^2.
\tag{4.116}
\]
Combining (4.114) and (4.116), for any k ∈ [m + p + 1]₀, we have
\[
\||Y^k\rangle\|^2 = \|L^{-1}|b^k\rangle\|^2 \le (m+p+1)\||b^k\rangle\|^2.
\tag{4.117}
\]
By the definition of |Y^k⟩ in (4.112), (4.117) gives
\[
\|L^{-1}\|^2 = \sup_{\||B\rangle\| \le 1} \|L^{-1}|B\rangle\|^2 \le (m+p+1) \sup_{\||b^k\rangle\| \le 1} \|L^{-1}|b^k\rangle\|^2 \le (m+p+1)^2,
\tag{4.118}
\]
and therefore
\[
\|L^{-1}\| \le m+p+1.
\tag{4.119}
\]
Finally, combining (4.108) with (4.119), we conclude
\[
\kappa = \|L\|\,\|L^{-1}\| \le 3(m+p+1)
\tag{4.120}
\]

as claimed.
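Lemma 4.4 can be checked numerically by assembling the block-bidiagonal matrix L of (4.104)–(4.107) for a small instance. The block A below is a hypothetical constant matrix chosen so that ∥I + Ah∥ ≤ 1; this is a sketch of the structure, not the Carleman matrix itself.

```python
import numpy as np

d, m, p, h = 2, 5, 5, 0.1
A = np.array([[-1.0, 0.2], [0.0, -2.0]])      # hypothetical block with ||I + A h|| <= 1
E = np.eye(d) + A * h
assert np.linalg.norm(E, 2) <= 1.0            # precondition of Lemma 4.4, cf. (4.85)

n_blocks = m + p + 1
L = np.eye(n_blocks * d)
for k in range(1, n_blocks):
    blk = E if k <= m else np.eye(d)          # Euler blocks (4.106), then copies of the final state (4.107)
    L[k*d:(k+1)*d, (k-1)*d:k*d] = -blk

kappa = np.linalg.cond(L, 2)
print(kappa, 3 * (m + p + 1))   # condition number does not exceed 3(m+p+1)
```

The numerical condition number stays below the bound 3(m + p + 1) = 33 of (4.103).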

4.4.3 State preparation

We now describe a procedure for preparing the right-hand side |B⟩ of the linear

system (4.12), whose entries are composed of |yin ⟩ and |b((k − 1)h)⟩ for k ∈ [m].

The initial vector yin is a direct sum over spaces of different dimensions, which is

cumbersome to prepare. Instead, we prepare an equivalent state that has a convenient

tensor product structure. Specifically, we embed yin into a slightly larger space and

prepare the normalized version of

\[
z_{\mathrm{in}} = \bigl[u_{\mathrm{in}}\otimes v_0^{\otimes(N-1)};\; u_{\mathrm{in}}^{\otimes 2}\otimes v_0^{\otimes(N-2)};\;\ldots;\; u_{\mathrm{in}}^{\otimes N}\bigr],
\tag{4.121}
\]

where v0 is some standard vector (for simplicity, we take v0 = |0⟩). If uin lives in a vector

space of dimension n, then z_in lives in a space of dimension Nn^N while y_in lives in a

slightly smaller space of dimension Δ = n + n² + ⋯ + n^N = (n^{N+1} − n)/(n − 1). Using

standard techniques, all the operations we would otherwise apply to yin can be applied

instead to zin , with the same effect.

Lemma 4.5. Assume we are given the value ∥uin ∥, and let Ox be an oracle that maps

|00 . . . 0⟩ ∈ Cn to a normalized quantum state |uin ⟩ proportional to uin . Assume we

are also given the values ∥F0 (t)∥ for each t ∈ [0, T ], and let OF0 be an oracle that

provides the locations and values of the nonzero entries of F0 (t) for any specified t.

Then the quantum state |B⟩ defined in (4.14) (with yin replaced by zin ) can be prepared

using O(N ) queries to Ox and O(m) queries to OF0 , with gate complexity larger by a

poly(log N, log n) factor.

Proof. We first show how to prepare the state
\[
|z_{\mathrm{in}}\rangle = \frac{1}{\sqrt{V}}\sum_{j=1}^{N}\|u_{\mathrm{in}}\|^{j}\,|j\rangle|u_{\mathrm{in}}\rangle^{\otimes j}|0\rangle^{\otimes(N-j)},
\tag{4.122}
\]
where
\[
V := \sum_{j=1}^{N}\|u_{\mathrm{in}}\|^{2j}.
\tag{4.123}
\]
This state can be prepared using N queries to the initial state oracle O_x, applied in superposition to the intermediate state
\[
|\psi_{\mathrm{int}}\rangle := \frac{1}{\sqrt{V}}\sum_{j=1}^{N}\|u_{\mathrm{in}}\|^{j}\,|j\rangle\otimes|0\rangle^{\otimes N}.
\tag{4.124}
\]
To efficiently prepare |ψ_int⟩, notice that
\[
|\psi_{\mathrm{int}}\rangle = \frac{\|u_{\mathrm{in}}\|}{\sqrt{V}}\sum_{j_0,j_1,\ldots,j_{k-1}=0}^{1}\;\prod_{\ell=0}^{k-1}\|u_{\mathrm{in}}\|^{j_\ell 2^{\ell}}\,|j_0 j_1\ldots j_{k-1}\rangle\otimes|0\rangle^{\otimes N},
\tag{4.125}
\]
where k := log₂ N (assuming for simplicity that N is a power of 2) and j_{k−1} … j₁ j₀ is the k-bit binary expansion of j − 1. Observe that
\[
|\psi_{\mathrm{int}}\rangle = \bigotimes_{\ell=0}^{k-1}\Bigl(\frac{1}{\sqrt{V_\ell}}\sum_{j_\ell=0}^{1}\|u_{\mathrm{in}}\|^{j_\ell 2^{\ell}}\,|j_\ell\rangle\Bigr)\otimes|0\rangle^{\otimes N},
\tag{4.126}
\]
where
\[
V_\ell := 1 + \|u_{\mathrm{in}}\|^{2^{\ell+1}}.
\tag{4.127}
\]
(Notice that ∏_{ℓ=0}^{k−1} V_ℓ = V/∥u_in∥².) Each tensor factor in (4.126) is a qubit state that can be produced in constant time. Overall, we prepare these k = log₂ N qubit states and then apply O_x N times.
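The product structure (4.126)–(4.127) can be verified with a classical tensor-product computation. In this sketch, a hypothetical scalar x stands in for ∥u_in∥, and the little-endian kron ordering gives bit ℓ the weight 2^ℓ.

```python
import numpy as np

x, k = 0.7, 3                       # x plays the role of ||u_in||; N = 2**k = 8
N = 2 ** k

# build the k single-qubit factors (1/sqrt(V_l)) (|0> + x^{2^l} |1>), cf. (4.126)-(4.127)
state = np.array([1.0])
for l in range(k):
    q = np.array([1.0, x ** (2 ** l)])
    q /= np.linalg.norm(q)          # norm^2 = 1 + x^{2^{l+1}} = V_l
    state = np.kron(q, state)       # little-endian: bit l has weight 2^l

# the amplitude of |j-1> should be x^j / sqrt(V) with V = sum_{j=1}^N x^{2j}, cf. (4.124)
V = sum(x ** (2 * j) for j in range(1, N + 1))
target = np.array([x ** j / np.sqrt(V) for j in range(1, N + 1)])
print(np.allclose(state, target))   # → True
```

This confirms that k = log₂ N independent qubit rotations suffice to produce the amplitudes ∥u_in∥^j of (4.124).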

We now discuss how to prepare the state
\[
|B\rangle = \frac{1}{\sqrt{B_m}}\Bigl(\|z_{\mathrm{in}}\|\,|0\rangle\otimes|z_{\mathrm{in}}\rangle + \sum_{k=1}^{m}\|b((k-1)h)\|\,|k\rangle\otimes|b((k-1)h)\rangle\Bigr),
\tag{4.128}
\]
in which we replace y_in by z_in in (4.14), and define
\[
B_m := \|z_{\mathrm{in}}\|^2 + \sum_{k=1}^{m}\|b((k-1)h)\|^2.
\tag{4.129}
\]
This state can be prepared using the above procedure for |0⟩ ↦ |z_in⟩ and m queries to O_{F₀} with t = (k − 1)h that implement |0⟩ ↦ |b((k − 1)h)⟩ for k ∈ {1, …, m}, applied in superposition to the intermediate state
\[
|\phi_{\mathrm{int}}\rangle = \frac{1}{\sqrt{B_m}}\Bigl(\|z_{\mathrm{in}}\|\,|0\rangle\otimes|0\rangle + \sum_{k=1}^{m}\|b((k-1)h)\|\,|k\rangle\otimes|0\rangle\Bigr).
\tag{4.130}
\]
Here the queries are applied conditionally upon the value in the first register: we prepare |z_in⟩ if the first register is |0⟩ and |b((k − 1)h)⟩ if the first register is |k⟩ for k ∈ {1, …, m}. We can prepare |ϕ_int⟩ (i.e., perform a unitary transformation mapping |0⟩|0⟩ ↦ |ϕ_int⟩) in time complexity O(m) [92] using the known values of ∥u_in∥ and ∥b((k − 1)h)∥.

Overall, we use O(N ) queries to Ox and O(m) queries to OF0 to prepare |B⟩. The

gate complexity is larger by a poly(log N, log n) factor.

4.4.4 Measurement success probability

After applying the QLSA to (4.12), we perform a measurement to extract a final

state of the desired form. We now consider the probability of this measurement succeeding.

Lemma 4.6. Consider an instance of the quantum quadratic ODE problem defined in Problem 4.1, with the QLSA applied to the linear system (4.12) using the forward Euler method (4.11) with time step (4.65). Suppose the error from Carleman linearization satisfies ∥η(t)∥ ≤ g/4 as in (4.67), and the global error from the forward Euler method as defined in Lemma 4.3 is bounded by
\[
\|\hat y(T) - y^m\| \le \frac{g}{4},
\tag{4.131}
\]
where g is defined in (4.17). Then the probability of measuring a state |y₁^k⟩ for k ∈ [m + p + 1]₀ \ [m + 1]₀ satisfies
\[
P_{\mathrm{measure}} \ge \frac{p+1}{9(m+p+1)Nq^2},
\tag{4.132}
\]
where q is also defined in (4.17).

Proof. The idealized quantum state produced by the QLSA applied to (4.12) has the form
\[
|Y\rangle = \sum_{k=0}^{m+p}|y^k\rangle|k\rangle = \sum_{k=0}^{m+p}\sum_{j=1}^{N}|y_j^k\rangle|j\rangle|k\rangle,
\tag{4.133}
\]
where the states |y^k⟩ and |y_j^k⟩ for k ∈ [m + p + 1]₀ and j ∈ [N] are subnormalized to ensure ∥|Y⟩∥ = 1.

We decompose the state |Y⟩ as
\[
|Y\rangle = |Y_{\mathrm{bad}}\rangle + |Y_{\mathrm{good}}\rangle,
\tag{4.134}
\]
where
\[
|Y_{\mathrm{bad}}\rangle := \sum_{k=0}^{m-1}\sum_{j=1}^{N}|y_j^k\rangle|j\rangle|k\rangle + \sum_{k=m}^{m+p}\sum_{j=2}^{N}|y_j^k\rangle|j\rangle|k\rangle, \qquad
|Y_{\mathrm{good}}\rangle := \sum_{k=m}^{m+p}|y_1^k\rangle|1\rangle|k\rangle.
\tag{4.135}
\]
Note that |y₁^k⟩ = |y₁^m⟩ for all k ∈ {m, m + 1, …, m + p}. We lower bound
\[
P_{\mathrm{measure}} := \frac{\||Y_{\mathrm{good}}\rangle\|^2}{\||Y\rangle\|^2} = \frac{(p+1)\,\||y_1^m\rangle\|^2}{\||Y\rangle\|^2}
\tag{4.136}
\]
by lower bounding the terms of the product
\[
\frac{\||y_1^m\rangle\|^2}{\||Y\rangle\|^2} = \frac{\||y_1^m\rangle\|^2}{\||y_1^0\rangle\|^2}\cdot\frac{\||y_1^0\rangle\|^2}{\||Y\rangle\|^2}.
\tag{4.137}
\]
First, according to (4.67) and (4.131), the exact solution u(T) and the approximate solution y₁^m defined in (4.12) satisfy
\[
\|u(T) - y_1^m\| \le \|u(T) - \hat y_1(T)\| + \|\hat y_1(T) - y_1^m\| \le \|\eta(T)\| + \|\hat y(T) - y^m\| \le \frac{g}{2}.
\tag{4.138}
\]
Since y₁⁰ = (y_in)₁ = u_in, using (4.138), we have
\[
\frac{\||y_1^m\rangle\|}{\||y_1^0\rangle\|} = \frac{\|y_1^m\|}{\|u_{\mathrm{in}}\|} \ge \frac{\|u(T)\| - \|u(T)-y_1^m\|}{\|u_{\mathrm{in}}\|} \ge \frac{g - \|u(T)-y_1^m\|}{\|u_{\mathrm{in}}\|} \ge \frac{g}{2\|u_{\mathrm{in}}\|} = \frac{1}{2q}.
\tag{4.139}
\]
Second, we upper bound ∥y^k∥² by
\[
\|y^k\|^2 = \|\hat y(kh) - [\hat y(kh) - y^k]\|^2 \le 2\bigl(\|\hat y(kh)\|^2 + \|\hat y(kh) - y^k\|^2\bigr).
\tag{4.140}
\]
Using ∥ŷ(t)∥² < 4N∥u_in∥² by (4.99), ∥ŷ(kh) − y^k∥ ≤ ∥ŷ(T) − y^m∥ ≤ g/4 < ∥u_in∥/4 by (4.131), and R < 1, we have
\[
\|y^k\|^2 \le 2\Bigl(4N\|u_{\mathrm{in}}\|^2 + \frac{\|u_{\mathrm{in}}\|^2}{16}\Bigr) < 9N\|u_{\mathrm{in}}\|^2.
\tag{4.141}
\]
Therefore
\[
\frac{\||y_1^0\rangle\|^2}{\||Y\rangle\|^2} = \frac{\||y_1^0\rangle\|^2}{\sum_{k=0}^{m+p}\||y^k\rangle\|^2} \ge \frac{\|u_{\mathrm{in}}\|^2}{9N(m+p+1)\|u_{\mathrm{in}}\|^2} = \frac{1}{9N(m+p+1)}.
\tag{4.142}
\]
Finally, using (4.139) and (4.142) in (4.137) and (4.136), we have
\[
P_{\mathrm{measure}} \ge \frac{p+1}{9(m+p+1)Nq^2}
\tag{4.143}
\]
as claimed.

Choosing m = p, we have P_measure = Ω(1/(Nq²)). Using amplitude amplification, O(√N q) iterations suffice to succeed with constant probability.

4.4.5 Proof of Theorem 4.1

Proof. We first present the quantum Carleman linearization (QCL) algorithm and then

analyze its complexity.

The QCL algorithm. We start by rescaling the system to satisfy (4.3) and (4.4). Given a

quadratic ODE (4.1) satisfying R < 1 (where R is defined in (4.2)), we define a scaling

factor γ > 0, and rescale u to

\[
\bar u := \gamma u.
\tag{4.144}
\]
Replacing u by ū in (4.1), we have
\[
\frac{\mathrm{d}\bar u}{\mathrm{d}t} = \frac{1}{\gamma}F_2\,\bar u^{\otimes 2} + F_1\bar u + \gamma F_0(t), \qquad
\bar u(0) = \bar u_{\mathrm{in}} := \gamma u_{\mathrm{in}}.
\tag{4.145}
\]
We let F̄₂ := F₂/γ, F̄₁ := F₁, and F̄₀(t) := γF₀(t) so that
\[
\frac{\mathrm{d}\bar u}{\mathrm{d}t} = \bar F_2\,\bar u^{\otimes 2} + \bar F_1\bar u + \bar F_0(t), \qquad \bar u(0) = \bar u_{\mathrm{in}}.
\tag{4.146}
\]
Note that R is invariant under this rescaling, so R < 1 still holds for the rescaled equation.
Concretely, we take³
\[
\gamma = \frac{1}{\sqrt{\|u_{\mathrm{in}}\|\,r_+}}.
\tag{4.147}
\]
After rescaling, the new quadratic ODE satisfies ∥ū_in∥ r̄₊ = γ²∥u_in∥ r₊ = 1. Since ∥ū_in∥ < r̄₊ by Lemma 4.1, we have r̄₋ < ∥ū_in∥ < 1 < r̄₊, so (4.4) holds. Furthermore, 1 lies between the two roots r̄₋ and r̄₊, which implies ∥F̄₂∥ · 1² − |Re(λ₁)| · 1 + ∥F̄₀∥ < 0 as shown in Lemma 4.1, so (4.3) holds for the rescaled problem.
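These rescaling identities reduce to scalar arithmetic and can be sanity-checked numerically. The norm values in this sketch are hypothetical (chosen so that R < 1); r₊ denotes the larger root of ∥F₂∥x² − |Re(λ₁)|x + ∥F₀∥ = 0 as in the surrounding discussion.

```python
import numpy as np

f2, lam1, f0, uin = 0.3, -2.0, 0.2, 1.5       # hypothetical norms with R < 1

def R(f2, lam1, f0, uin):
    # the quantity R of (4.2)
    return (f2 * uin + f0 / uin) / abs(lam1)

def r_plus(f2, lam1, f0):
    # larger root of f2*x^2 - |lam1|*x + f0 = 0
    return (abs(lam1) + np.sqrt(lam1**2 - 4*f2*f0)) / (2*f2)

gamma = 1.0 / np.sqrt(uin * r_plus(f2, lam1, f0))   # scaling factor (4.147)
f2b, f0b, uinb = f2/gamma, gamma*f0, gamma*uin      # rescaled norms

print(R(f2, lam1, f0, uin), R(f2b, lam1, f0b, uinb))  # equal: R is scale-invariant
print(uinb * r_plus(f2b, lam1, f0b))                   # equals 1 after rescaling
```

Both checks succeed: R is unchanged, and the rescaled initial norm times the rescaled upper root is exactly 1.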

Having performed this rescaling, we henceforth assume that (4.3) and (4.4) are

satisfied. We then introduce the choice of parameters as follows. Given g and an error

bound ϵ ≤ 1, we define
\[
\delta := \frac{g\epsilon}{1+\epsilon} \le \frac{g}{2}.
\tag{4.148}
\]

Given ∥u_in∥, ∥F₂∥, and Re(λ₁) < 0, we choose
\[
N = \left\lceil\frac{\log(2T\|F_2\|/\delta)}{\log(1/\|u_{\mathrm{in}}\|)}\right\rceil = \left\lceil\frac{\log(2T\|F_2\|/\delta)}{\log(r_+)}\right\rceil.
\tag{4.149}
\]
Since ∥u_in∥/δ > 1 by (4.148) and g < ∥u_in∥, Lemma 4.2 gives
\[
\|u(T) - \hat y_1(T)\| \le \|\eta(T)\| \le TN\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1} = TN\|F_2\|\Bigl(\frac{1}{r_+}\Bigr)^{N+1} \le \frac{\delta}{2}.
\tag{4.150}
\]

Thus, (4.67) holds since δ ≤ g/2.

Now we discuss the choice of h. On the one hand, h must satisfy (4.65) to satisfy

the conditions of Lemma 4.3 and Lemma 4.4. On the other hand, Lemma 4.3 gives the
³ In fact, one can show that any γ ∈ [1/r₊, 1/∥u_in∥] suffices to satisfy (4.3) and (4.4).
upper bound

\[
\|\hat y_1(T) - y_1^m\| \le 3N^{2.5}Th\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr] \le \frac{g\epsilon}{4} \le \frac{g\epsilon}{2(1+\epsilon)} = \frac{\delta}{2}.
\tag{4.151}
\]

This also ensures that (4.131) holds since δ ≤ g/2. Thus, we choose


\[
h \le \min\left\{\frac{g\epsilon}{12N^{2.5}T\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]},\;
\frac{1}{N\|F_1\|},\;
\frac{2\bigl(|\operatorname{Re}(\lambda_1)|-\|F_2\|-\|F_0\|\bigr)}{N\bigl(|\operatorname{Re}(\lambda_1)|^2-(\|F_2\|+\|F_0\|)^2+\|F_1\|^2\bigr)}\right\}
\tag{4.152}
\]

to satisfy (4.65) and (4.151).

Combining (4.150) with (4.151), we have

\[
\|u(T) - y_1^m\| \le \|u(T) - \hat y_1(T)\| + \|\hat y_1(T) - y_1^m\| \le \delta.
\tag{4.153}
\]

Thus, (4.138) holds since δ ≤ g/2. Using

\[
\left\|\frac{u(T)}{\|u(T)\|} - \frac{y_1^m}{\|y_1^m\|}\right\| \le \frac{\|u(T)-y_1^m\|}{\min\{\|u(T)\|,\|y_1^m\|\}} \le \frac{\|u(T)-y_1^m\|}{g-\|u(T)-y_1^m\|}
\tag{4.154}
\]
and (4.153), we obtain
\[
\left\|\frac{u(T)}{\|u(T)\|} - \frac{y_1^m}{\|y_1^m\|}\right\| \le \frac{\delta}{g-\delta} = \epsilon,
\tag{4.155}
\]
i.e., the normalized output state is ϵ-close to u(T)/∥u(T)∥.

We follow the procedure in Lemma 4.5 to prepare the state |B⟩. We apply the QLSA [46] to the linear system (4.12) with m = p = ⌈T/h⌉, giving a solution |Y⟩. We then perform a measurement to obtain a normalized state |y_j^k⟩ for some k ∈ [m + p + 1]₀ and j ∈ [N]. By Lemma 4.6, the probability of obtaining a state |y₁^k⟩ for some k ∈ [m + p + 1]₀ \ [m + 1]₀, giving the normalized vector y₁^m/∥y₁^m∥, is
\[
P_{\mathrm{measure}} \ge \frac{p+1}{9(m+p+1)Nq^2} \ge \frac{1}{18Nq^2}.
\tag{4.156}
\]
By amplitude amplification, we can achieve success probability Ω(1) with O(√N q) repetitions of the above procedure.

Analysis of the complexity. By Lemma 4.5, the right-hand side |B⟩ in (4.12) can be prepared with O(N) queries to O_x and O(m) queries to O_{F₀}, with gate complexity larger by a poly(log N, log n) factor. The matrix L in (4.12) is an (m + p + 1)Δ × (m + p + 1)Δ matrix with O(Ns) nonzero entries in any row or column. By Lemma 4.4 and our choice of parameters, the condition number of L is at most
\[
\begin{aligned}
3(m+p+1) &= O\left(\frac{N^{2.5}T^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{\delta} + NT\|F_1\|
+ \frac{NT\bigl[|\operatorname{Re}(\lambda_1)|^2-(\|F_2\|+\|F_0\|)^2+\|F_1\|^2\bigr]}{2\bigl(|\operatorname{Re}(\lambda_1)|-\|F_2\|-\|F_0\|\bigr)}\right)\\
&= O\left(\frac{N^{2.5}T^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{g\epsilon} + \frac{NT\|F_1\|^2}{(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|)}\right)\\
&= O\left(\frac{N^{2.5}T^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|)g\epsilon}\right).
\end{aligned}
\tag{4.157}
\]

Here we use ∥F₂∥ + ∥F₀∥ < |Re(λ₁)| ≤ ∥F₁∥ and
\[
2\bigl(|\operatorname{Re}(\lambda_1)|-\|F_2\|-\|F_0\|\bigr) > \Bigl(\|u_{\mathrm{in}}\|+\frac{1}{\|u_{\mathrm{in}}\|}-2\Bigr)(\|F_2\|+\|F_0\|)
= \frac{1}{\|u_{\mathrm{in}}\|}(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|) > (1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|).
\tag{4.158}
\]
The first inequality above follows by summing |Re(λ₁)| > ∥F₂∥∥u_in∥ + ∥F₀∥/∥u_in∥ and |Re(λ₁)| = ∥F₂∥r₊ + ∥F₀∥/r₊, where r₊ = 1/∥u_in∥. Consequently, by Theorem 5 of

[46], the QLSA produces the state |Y⟩ with
\[
\frac{N^{3.5}sT^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|)g\epsilon}\,
\operatorname{poly}\!\left(\log\frac{NsT\|F_2\|\|F_1\|\|F_0\|\|F_0'\|}{(1-\|u_{\mathrm{in}}\|)g\epsilon}\right)
= \frac{sT^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|)g\epsilon}\,
\operatorname{poly}\!\left(\log\Bigl(\frac{sT\|F_2\|\|F_1\|\|F_0\|\|F_0'\|}{(1-\|u_{\mathrm{in}}\|)g\epsilon}\Bigr)\Big/\log(1/\|u_{\mathrm{in}}\|)\right)
\tag{4.159}
\]
queries to the oracles O_{F₂}, O_{F₁}, O_{F₀}, and O_x. Using O(√N q) steps of amplitude amplification to achieve success probability Ω(1), the overall query complexity of our algorithm is
\[
\frac{N^{4}sT^2q\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|)g\epsilon}\,
\operatorname{poly}\!\left(\log\frac{NsT\|F_2\|\|F_1\|\|F_0\|\|F_0'\|}{(1-\|u_{\mathrm{in}}\|)g\epsilon}\right)
= \frac{sT^2q\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{(1-\|u_{\mathrm{in}}\|)^2(\|F_2\|+\|F_0\|)g\epsilon}\,
\operatorname{poly}\!\left(\log\Bigl(\frac{sT\|F_2\|\|F_1\|\|F_0\|\|F_0'\|}{(1-\|u_{\mathrm{in}}\|)g\epsilon}\Bigr)\Big/\log(1/\|u_{\mathrm{in}}\|)\right)
\tag{4.160}
\]
and the gate complexity exceeds this by a factor of poly(log(nsT∥F₂∥∥F₁∥∥F₀∥∥F₀′∥/((1 − R)gϵ))/log(1/∥u_in∥)).

If the eigenvalues λ_j of F₁ are all real, by (4.66), the condition number of L is at most
\[
3(m+p+1) = O\left(\frac{N^{2.5}T^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{\delta} + NT\|F_1\|\right)
= O\left(\frac{N^{2.5}T^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{g\epsilon}\right).
\tag{4.161}
\]
Similarly, the QLSA produces the state |Y⟩ with
\[
\frac{sT^2\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{g\epsilon}\,
\operatorname{poly}\!\left(\log\Bigl(\frac{sT\|F_2\|\|F_1\|\|F_0\|\|F_0'\|}{g\epsilon}\Bigr)\Big/\log(1/\|u_{\mathrm{in}}\|)\right)
\tag{4.162}
\]
queries to the oracles O_{F₂}, O_{F₁}, O_{F₀}, and O_x. Using amplitude amplification to achieve success probability Ω(1), the overall query complexity of the algorithm is
\[
\frac{sT^2q\bigl[(\|F_2\|+\|F_1\|+\|F_0\|)^2+\|F_0'\|\bigr]}{g\epsilon}\,
\operatorname{poly}\!\left(\log\Bigl(\frac{sT\|F_2\|\|F_1\|\|F_0\|\|F_0'\|}{g\epsilon}\Bigr)\Big/\log(1/\|u_{\mathrm{in}}\|)\right)
\tag{4.163}
\]
and the gate complexity is larger by poly(log(nsT∥F₂∥∥F₁∥∥F₀∥∥F₀′∥/(gϵ))/log(1/∥u_in∥))


as claimed.

4.5 Lower bound

In this section, we establish a limitation on the ability of quantum computers to

solve the quadratic ODE problem when the nonlinearity is sufficiently strong. We quantify

the strength of the nonlinearity in terms of the quantity R defined in (4.2). Whereas there

is an efficient quantum algorithm for R < 1 (as shown in Theorem 4.1), we show here

that the problem is intractable for R ≥ 2.


Theorem 4.2. Assume R ≥ 2. Then there is an instance of the quantum quadratic

ODE problem defined in Problem 4.1 such that any quantum algorithm for producing

a quantum state approximating u(T )/∥u(T )∥ with bounded error must have worst-case

time complexity exponential in T .

We establish this result by showing how the nonlinear dynamics can be used to

distinguish nonorthogonal quantum states, a task that requires many copies of the given

state. Note that since our algorithm only approximates the quantum state corresponding

to the solution, we must lower bound the query complexity of approximating the solution

of a quadratic ODE.

4.5.1 Hardness of state discrimination

Previous work on the computational power of nonlinear quantum mechanics shows

that the ability to distinguish non-orthogonal states can be applied to solve unstructured

search (and other hard computational problems) [78, 79, 80]. Here we show a similar

limitation using an information-theoretic argument.

Lemma 4.7. Let |ψ⟩, |ϕ⟩ be states of a qubit with |⟨ψ|ϕ⟩| = 1 − ϵ. Suppose we are

either given a black box that prepares |ψ⟩ or a black box that prepares |ϕ⟩. Then any

bounded-error protocol for determining whether the state is |ψ⟩ or |ϕ⟩ must use Ω(1/ϵ)

queries.

Proof. Using the black box k times, we prepare states with overlap (1 − ϵ)^k. By the well-known relationship between fidelity and trace distance, these states have trace distance at most √(1 − (1 − ϵ)^{2k}) ≤ √(2kϵ). Therefore, by the Helstrom bound (which states that the

advantage over random guessing for the best measurement to distinguish two quantum

states is given by their trace distance [121]), we need k = Ω(1/ϵ) to distinguish the states

with bounded error.
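The quantitative step in the proof — k copies give overlap (1 − ϵ)^k and hence trace distance at most √(2kϵ) — can be checked for pure qubit states, for which the trace distance equals √(1 − F²). The values of ϵ and k below are hypothetical.

```python
import numpy as np

eps, k = 0.01, 30                    # hypothetical overlap deficit and number of copies

# two pure qubit states with overlap 1 - eps
theta = np.arccos(1 - eps)
psi = np.array([1.0, 0.0])
phi = np.array([np.cos(theta), np.sin(theta)])

# k copies: fidelity (1-eps)^k; for pure states the trace distance is sqrt(1 - F^2)
F = abs(psi @ phi) ** k
trace_dist = np.sqrt(1 - F**2)
print(trace_dist, np.sqrt(2 * k * eps))   # trace_dist <= sqrt(2 k eps)
```

The bound follows from Bernoulli's inequality, (1 − ϵ)^{2k} ≥ 1 − 2kϵ, and matches the numerical values.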

4.5.2 State discrimination with nonlinear dynamics

Lemma 4.7 can be used to establish limitations on the ability of quantum computers

to simulate nonlinear dynamics, since nonlinear dynamics can be used to distinguish

nonorthogonal states. Whereas previous work considers models of nonlinear quantum

dynamics (such as the Weinberg model [79, 80] and the Gross-Pitaevskii equation [78]),

here we aim to show the difficulty of efficiently simulating more general nonlinear ODEs—

in particular, quadratic ODEs with dissipation—using quantum algorithms.

Lemma 4.8. There exists an instance of the quantum quadratic ODE problem as defined in Problem 4.1 with R ≥ 2, and two states of a qubit with overlap 1 − ϵ (for 0 < ϵ < 1 − 3/√10) as possible initial conditions, such that the two final states after evolution time T = O(log(1/ϵ)) have an overlap no larger than 3/√10.

Proof. Consider a 2-dimensional system of the form

\[
\frac{\mathrm{d}u_1}{\mathrm{d}t} = -u_1 + ru_1^2, \qquad
\frac{\mathrm{d}u_2}{\mathrm{d}t} = -u_2 + ru_2^2,
\tag{4.164}
\]

for some r > 0, with an initial condition u(0) = [u1 (0); u2 (0)] = uin satisfying ∥uin ∥ = 1.

According to the definition of R in (4.2), we have R = r, so henceforth we write this

parameter as R. The analytic solution of (4.164) is
\[
u_1(t) = \frac{1}{R - e^{t}\bigl(R - 1/u_1(0)\bigr)}, \qquad
u_2(t) = \frac{1}{R - e^{t}\bigl(R - 1/u_2(0)\bigr)}.
\tag{4.165}
\]

When u₂(0) > 1/R, u₂(t) is finite within the domain
\[
0 \le t < t^{*} := \log\left(\frac{R}{R - 1/u_2(0)}\right);
\tag{4.166}
\]
when u₂(0) = 1/R, we have u₂(t) = 1/R for all t; and when u₂(0) < 1/R, u₂(t) goes to 0 as t → ∞. The behavior of u₁(t) depends similarly on u₁(0).

Without loss of generality, we assume u1 (0) ≤ u2 (0). For u2 (0) ≥ u1 (0) > 1/ R,

both u1 (t) and u2 (t) are finite within the domain (4.166), which we consider as the domain

of u(t).

Now we consider 1-qubit states that provide inputs to (4.164). Given a sufficiently

small ϵ > 0, we first define θ ∈ (0, π/4) by

\[
2\sin^2\frac{\theta}{2} = \epsilon.
\tag{4.167}
\]

We then construct two 1-qubit states with overlap 1 − ϵ, namely
\[
|\phi(0)\rangle = \frac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)
\tag{4.168}
\]
and
\[
|\psi(0)\rangle = \cos\Bigl(\theta+\frac{\pi}{4}\Bigr)|0\rangle + \sin\Bigl(\theta+\frac{\pi}{4}\Bigr)|1\rangle.
\tag{4.169}
\]

Then the overlap between the two initial states is
\[
\langle\phi(0)|\psi(0)\rangle = \cos\theta = 1 - \epsilon.
\tag{4.170}
\]
The initial overlap (4.170) is larger than the target overlap 3/√10 in Lemma 4.8 provided ϵ < 1 − 3/√10. For simplicity, we denote
\[
v_0 := \cos\Bigl(\theta+\frac{\pi}{4}\Bigr), \qquad w_0 := \sin\Bigl(\theta+\frac{\pi}{4}\Bigr),
\tag{4.171}
\]

and let v(t) and w(t) denote solutions of (4.164) with initial conditions v(0) = v₀ and w(0) = w₀, respectively. Since w₀ > 1/R, we see that w(t) increases with t, satisfying
\[
\frac{1}{R} \le \frac{1}{\sqrt{2}} < w_0 < w(t),
\tag{4.172}
\]
and
\[
v(t) < w(t)
\tag{4.173}
\]
for any time 0 < t < t*, whatever the behavior of v(t).

We now study the outputs of our problem. For the state |ϕ(0)⟩, the initial condition

for (4.164) is [1/√2; 1/√2]. Thus, the output for any t ≥ 0 is
\[
|\phi(t)\rangle = \frac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr).
\tag{4.174}
\]

For the state |ψ(0)⟩, the initial condition for (4.164) is [v0 ; w0 ]. We now discuss

how to select a terminal time T to give a useful output state |ψ(T )⟩. For simplicity, we

denote the ratio of w(t) and v(t) by
\[
K(t) := \frac{w(t)}{v(t)}.
\tag{4.175}
\]
Noticing that w(t) goes to infinity as t approaches t*, while v(t) remains finite within (4.166), there exists a terminal time T such that⁴
\[
K(T) \ge 2.
\tag{4.176}
\]
The normalized output state at this time T is
\[
|\psi(T)\rangle = \frac{1}{\sqrt{K(T)^2+1}}\bigl(|0\rangle + K(T)|1\rangle\bigr).
\tag{4.177}
\]

Combining (4.174) with (4.177), the overlap of |ϕ(T)⟩ and |ψ(T)⟩ is
\[
\langle\phi(T)|\psi(T)\rangle = \frac{K(T)+1}{\sqrt{2K(T)^2+2}} \le \frac{3}{\sqrt{10}}
\tag{4.178}
\]
⁴ More concretely, we take v_max := max{v₀, v(t*)}, which upper bounds v(t) on the domain [0, t*); here v(t*) is finite since v₀ < w₀. Then there exists a terminal time T such that w(T) = 2v_max, and hence K(T) = w(T)/v(T) ≥ 2.
using (4.176).

Finally, we estimate the evolution time T, which is implicitly defined by (4.176). We can upper bound its value by t*. According to (4.166), we have
\[
T < t^{*} = \log\left(\frac{R}{R-\frac{1}{w_0}}\right) < \log\left(\frac{\sqrt{2}}{\sqrt{2}-\frac{1}{w_0}}\right)
\tag{4.179}
\]
since the function log(x/(x − c)) decreases monotonically with x for x > c > 0. Using (4.170) to rewrite this expression in terms of ϵ, we have
\[
T < t^{*} < \log\left(\frac{\sqrt{2}}{\sqrt{2}-\frac{1}{\sin(\theta+\pi/4)}}\right) = \log\left(1+\frac{1}{\sqrt{2\epsilon-\epsilon^2}-\epsilon}\right),
\tag{4.180}
\]
which scales like ½ log(1/(2ϵ)) as ϵ → 0. Therefore T = O(log(1/ϵ)) as claimed.
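The construction in this proof can be reproduced numerically from the analytic solution (4.165). This sketch is illustrative: ϵ = 0.01 and the bisection parameters are hypothetical choices, and the terminal time T is found as the first time at which K(t) ≥ 2.

```python
import numpy as np

Rr, eps = 2.0, 0.01                          # nonlinearity R = 2 and a hypothetical epsilon
theta = np.arccos(1 - eps)                   # so that cos(theta) = 1 - eps, cf. (4.167)
v0, w0 = np.cos(theta + np.pi/4), np.sin(theta + np.pi/4)

def sol(u0, t):
    # analytic solution (4.165) of du/dt = -u + R u^2
    return 1.0 / (Rr - np.exp(t) * (Rr - 1.0/u0))

t_star = np.log(Rr / (Rr - 1.0/w0))          # blow-up time (4.166) of the w component

# bisect for the terminal time T with K(T) = w(T)/v(T) >= 2, cf. (4.176)
lo, hi = 0.0, t_star - 1e-9
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if sol(w0, mid) / sol(v0, mid) >= 2.0:
        hi = mid
    else:
        lo = mid
T = hi

K = sol(w0, T) / sol(v0, T)
overlap = (K + 1) / np.sqrt(2 * K**2 + 2)    # overlap (4.178) of the two final states
print(T, K, overlap)                          # overlap does not exceed 3/sqrt(10)
```

The nonlinear dissipative evolution drives the initially almost-parallel states to a constant final separation well before the blow-up time t*.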

4.5.3 Proof of Theorem 4.2

We now establish our main lower bound result.

Proof. As introduced in the proof of Lemma 4.8, consider the quadratic ODE (4.164); the

two initial states of a qubit |ϕ(0)⟩ and |ψ(0)⟩ defined in (4.168) and (4.169), respectively;

and the terminal time T defined in (4.176).

Suppose we have a quantum algorithm that, given a black box to prepare a state

that is either |ϕ(0)⟩ or |ψ(0)⟩, can produce quantum states |ϕ′ (T )⟩ or |ψ ′ (T )⟩ that are

within distance δ of |ϕ(T )⟩ and |ψ(T )⟩, respectively. Since by Lemma 4.8, |ϕ(T )⟩ and

|ψ(T )⟩ have constant overlap, the overlap between |ϕ′ (T )⟩ and |ψ ′ (T )⟩ is also constant

for sufficiently small δ. More precisely, we have
\[
\langle\phi(T)|\psi(T)\rangle \le \frac{3}{\sqrt{10}}
\tag{4.181}
\]
by (4.178), which implies
\[
\bigl\|\,|\phi(T)\rangle - |\psi(T)\rangle\,\bigr\| \ge \sqrt{2\Bigl(1-\frac{3}{\sqrt{10}}\Bigr)} > 0.32.
\tag{4.182}
\]

We also have
\[
\bigl\|\,|\phi(T)\rangle - |\phi'(T)\rangle\,\bigr\| \le \delta,
\tag{4.183}
\]
and similarly for |ψ(T)⟩. These three inequalities give us
\[
\begin{aligned}
\bigl\|\,|\phi'(T)\rangle - |\psi'(T)\rangle\,\bigr\|
&= \bigl\|\,(|\phi(T)\rangle - |\psi(T)\rangle) - (|\phi(T)\rangle - |\phi'(T)\rangle) - (|\psi'(T)\rangle - |\psi(T)\rangle)\,\bigr\|\\
&\ge \bigl\|\,|\phi(T)\rangle - |\psi(T)\rangle\,\bigr\| - \bigl\|\,|\phi(T)\rangle - |\phi'(T)\rangle\,\bigr\| - \bigl\|\,|\psi'(T)\rangle - |\psi(T)\rangle\,\bigr\|\\
&> 0.32 - 2\delta,
\end{aligned}
\tag{4.184}
\]

which is at least a constant for (say) δ < 0.15.

Lemma 4.7 therefore shows that preparing the states |ϕ′ (T )⟩ and |ψ ′ (T )⟩ requires

time Ω(1/ϵ), as these states can be used to distinguish the two possibilities with bounded

error. By Lemma 4.8, this time is 2Ω(T ) . This shows that we need at least exponential

simulation time to approximate the solution of arbitrary quadratic ODEs to within sufficiently

small bounded error when R ≥ 2.

Note that exponential time is achievable since our QCL algorithm can solve the

problem by taking N to be exponential in T , where N is the truncation level of Carleman

linearization. (The algorithm of Leyton and Osborne also solves quadratic differential

equations with complexity exponential in T , but requires the additional assumptions that

the quadratic polynomial is measure-preserving and Lipschitz continuous [74].)

4.6 Applications

Due to the assumptions of our analysis, our quantum Carleman linearization algorithm

can only be applied to problems with certain properties. First, there are two requirements

to guarantee convergence of the inhomogeneous Carleman linearization: the system must

have linear dissipation, manifested by Re(λ1 ) < 0; and the dissipation must be sufficiently

stronger than both the nonlinear and the forcing terms, so that R < 1. Dissipation

typically leads to an exponentially decaying solution, but for the dependency on g and

q in (4.160) to allow an efficient algorithm, the solution cannot exponentially approach

zero.

However, this issue does not arise if the forcing term F0 resists the exponential

decay towards zero, instead causing the solution to decay towards some non-zero (possibly

time-dependent) state. The norm of the state that is exponentially approached can possibly

decay towards zero, but this decay itself must happen slower than exponentially for the

algorithm to be efficient.5

We now investigate possible applications that satisfy these conditions. First we
present an application governed by ordinary differential equations, and then we present
possible applications in partial differential equations.

5 Also note that the QCL algorithm might provide an advantage over classical computation for
homogeneous equations in cases where only evolution for a short time is of interest.

Several physical systems can be represented in terms of quadratic ODEs. Examples

include models of interacting populations of predators and prey [122], dynamics of chemical

reactions [123, 124], and the spread of an epidemic [125]. We now give an example of

the latter, based on the epidemiological model used in [126] to describe the early spread

of the COVID-19 virus.

The so-called SEIR model divides a population of P individuals into four components:

susceptible (PS ), exposed (PE ), infected (PI ), and recovered (PR ). We denote the rate

of transmission from an infected to a susceptible person by rtra , the typical time until

an exposed person becomes infectious by the latent time Tlat , and the typical time an

infectious person can infect others by the infectious time Tinf . Furthermore we assume that

there is a flux Λ of individuals constantly refreshing the population. This flux corresponds

to susceptible individuals moving into, and individuals of all components moving out of,

the population, in such a way that the total population remains constant.

To ensure that there is sufficiently strong linear decay to guarantee Carleman convergence,

we also add a vaccination term to the PS component. We choose an approach similar to

that of [127], but denote the vaccination rate, which is approximately equal to the fraction

of susceptible individuals vaccinated each day, by rvac . The model is then

dPS/dt = −Λ PS/P − rvac PS + Λ − rtra PS PI/P    (4.185)
dPE/dt = −Λ PE/P − PE/Tlat + rtra PS PI/P    (4.186)
dPI/dt = −Λ PI/P + PE/Tlat − PI/Tinf    (4.187)
dPR/dt = −Λ PR/P + rvac PS + PI/Tinf.    (4.188)
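Summing the right-hand sides of (4.185)–(4.188) gives Λ(1 − (PS + PE + PI + PR)/P), which vanishes when the total population equals P. As a quick numerical sanity check, the sketch below integrates the system with forward Euler and verifies this conservation; the parameter values are illustrative placeholders, not those of [126].

```python
# Forward-Euler integration of the SEIR system (4.185)-(4.188).
# Parameter values here are illustrative, not those of [126].
P = 1e7                                    # total population
rtra, Tlat, Tinf = 0.13, 5.2, 2.3          # transmission rate; latent/infectious times
rvac, Lam = 0.2, 1.0                       # vaccination rate; flux (individuals/day)

PS, PE, PI, PR = 0.99 * P, 0.005 * P, 0.005 * P, 0.0
h, steps = 0.01, 1000                      # 10 days of simulated time

for _ in range(steps):
    dPS = -Lam * PS / P - rvac * PS + Lam - rtra * PS * PI / P
    dPE = -Lam * PE / P - PE / Tlat + rtra * PS * PI / P
    dPI = -Lam * PI / P + PE / Tlat - PI / Tinf
    dPR = -Lam * PR / P + rvac * PS + PI / Tinf
    PS, PE, PI, PR = PS + h * dPS, PE + h * dPE, PI + h * dPI, PR + h * dPR

total = PS + PE + PI + PR                  # stays equal to P up to rounding
```

Because the derivatives sum to Λ(1 − total/P), the total population is a fixed point of both the continuous dynamics and the Euler iteration.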

The sum of equations (4.185)–(4.188) shows that P = PS + PE + PI + PR is a

constant. Hence we do not need to include the equation for PR in our analysis, which

is crucial since the PR component would have introduced positive eigenvalues. The

matrices corresponding to (4.1) are then

F0 = (Λ, 0, 0)ᵀ,

F1 = [ −Λ/P − rvac       0                  0
       0                 −Λ/P − 1/Tlat      0
       0                 1/Tlat             −Λ/P − 1/Tinf ],    (4.189)

F2 = (rtra/P) [ 0 0 −1 0 0 0 0 0 0
                0 0  1 0 0 0 0 0 0
                0 0  0 0 0 0 0 0 0 ].    (4.190)

Since F1 is a triangular matrix, its eigenvalues are located on its diagonal, so
Re(λ1) = −Λ/P − min{rvac, 1/Tlat, 1/Tinf}. Furthermore we can bound P/√3 ≤
∥uin∥ ≤ P, ∥F0∥ = Λ, and ∥F2∥ = √2 rtra/P, so

R ≤ (√2 rtra + √3 Λ/P) / (min{rvac, 1/Tlat, 1/Tinf} + Λ/P).    (4.191)


We see that the condition for guaranteed convergence of Carleman linearization is √2 rtra <
min{rvac, 1/Tlat, 1/Tinf} − (√3 − 1)Λ/P. Essentially, the Carleman method only converges if

the (nonlinear) transmission is sufficiently slower than the (linear) decay.

To assess how restrictive this assumption is, we consider the SEIR parameters

used in [126]. Note that they also included separate components for asymptomatic and

hospitalized persons, but to simplify the analysis we include both of these components in

the PI component. In their work, they considered a city with approximately P = 107

inhabitants. In a specific period, they estimated a travel flux6 of Λ ≈ 0 individuals

per day, latent time Tlat = 5.2 days, infectious time Tinf = 2.3 days, and transmission

rate rtra ≈ 0.13 days−1 . We let the initial condition be dominated by the susceptible

component so that ∥uin ∥ ≈ P , and we assume7 that rvac > 1/Tlat ≈ 0.19 days−1 . With

the stated parameters, a direct calculation gives R = 0.956, showing that the assumptions

of our algorithm can correspond to some real-world problems that are only moderately

nonlinear.
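The value R = 0.956 quoted above can be reproduced directly from the bound (4.191). The sketch below uses the stated parameters, taking the small non-zero travel flux Λ = 1 individual per day from footnote 6 and assuming a vaccination rate rvac = 0.20 days⁻¹, slightly above 1/Tlat (any value satisfying rvac > 1/Tlat gives the same minimum).

```python
import math

# Parameters from [126] as quoted in the text; rvac = 0.20 is an assumed
# value satisfying rvac > 1/Tlat (see footnote 7).
P = 1e7                      # population of the city
rtra = 0.13                  # transmission rate, days^-1
Tlat, Tinf = 5.2, 2.3        # latent and infectious times, days
rvac = 0.20                  # assumed vaccination rate, days^-1
Lam = 1.0                    # small non-zero travel flux, individuals/day

# Bound (4.191) on the Carleman convergence parameter
R = (math.sqrt(2) * rtra + math.sqrt(3) * Lam / P) / (
    min(rvac, 1 / Tlat, 1 / Tinf) + Lam / P)
```

With these numbers the minimum is attained by 1/Tlat ≈ 0.192 days⁻¹, giving R ≈ 0.956 < 1.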

While the example discussed above has only a constant number of variables, this

example can be generalized to a high-dimensional system of ODEs that models the early
spread over a large number of cities with interaction, similar to what is done in [128] and
[129].

6 Since we require that u(t) does not approach 0 exponentially, we can assume that the travel flux is
some non-zero, but small, value, e.g., Λ = 1 individual per day.
7 This example arguably corresponds to quite rapid vaccination, and is chosen here such that R remains
smaller than one, as required to formally guarantee convergence of the Carleman method. However,
as shown in the upcoming example of the Burgers equation, larger values of R might still allow for
convergence in practice, suggesting that our algorithm might handle lower values of the vaccination rate.

Other examples of high-dimensional ODEs arise from the discretization of certain

PDEs. Consider, for example, equations for u(r, t) of the type

∂t u + (u · ∇)u + βu = ν∇²u + f.    (4.192)

with the forcing term f being a function of both space and time. This equation can be cast

in the form of (4.1) by using standard discretizations of space and time. Equations of the

form (4.192) can represent Navier–Stokes-type equations, which are ubiquitous in fluid

mechanics [118], and related models such as those studied in [130, 131, 132] to describe

the formation of large-scale structure in the universe. Similar equations also appear in

models of magnetohydrodynamics (e.g., [133]), or the motion of free particles that stick

to each other upon collision [134]. In the inviscid case, ν = 0, the resulting Euler-type

equations with linear damping are also of interest, both for modeling micromechanical

devices [135] and for their intimate connection with viscous models [136].

As a specific example, consider the one-dimensional forced viscous Burgers equation

∂t u + u ∂x u = ν ∂x²u + f,    (4.193)

which is the one-dimensional case of equation (4.192) with β = 0. Equation (4.193) is

often used as a simple model of convective flow [117]. For concreteness, let the initial

condition be u(x, 0) = U0 sin(2πx/L0 ) on the domain x ∈ [−L0 /2, L0 /2], and use

Figure 4.1: Integration of the forced viscous Burgers equation using Carleman linearization
on a classical computer (source code available at https://github.com/hermankolden/CarlemanBurgers).
The viscosity is set so that the Reynolds number Re = U0 L0 /ν = 20. The parameters
nx = 16 and nt = 4000 are the numbers of spatial and temporal discretization intervals,
respectively. The corresponding Carleman convergence parameter is R = 43.59. Top:
initial condition and solution plotted at a third of the nonlinear time, (1/3)Tnl = L0/(3U0).
Bottom: l2 norm of the absolute error between the Carleman solutions at various truncation
levels N (left), and the convergence of the corresponding time-maximum error (right).

Dirichlet boundary conditions u(−L0/2, t) = u(L0/2, t) = 0. We force this equation
using a localized off-center Gaussian with a sinusoidal time dependence,8 given by
f(x, t) = U0 exp(−(x − L0/4)²/(2(L0/32)²)) cos(2πt). To solve this equation using the
Carleman method, we discretize the spatial domain into nx points and use central
differences for the derivatives to get

∂t u_i = ν (u_{i+1} − 2u_i + u_{i−1})/∆x² − (u_{i+1}² − u_{i−1}²)/(4∆x) + f_i    (4.194)

with ∆x = L0 /(nx − 1). This equation is of the form (4.1) and can thus generate the

Carleman system (4.6). The resulting linear ODE can then be integrated using the forward

Euler method, as shown in Figure 4.1. In this example, the viscosity ν is defined such that

the Reynolds number Re := U0 L0 /ν = 20, and nx = 16 spatial discretization points

were sufficient to resolve the solution. The figure compares the Carleman solution with

the solution obtained via direct integration of (4.194) with the forward Euler method (i.e.,

without Carleman linearization). By inserting the matrices F0 , F1 , and F2 corresponding

to equation (4.194) into the definition of R (4.2), we find that Re(λ1 ) is indeed negative

as required, given Dirichlet boundary conditions, but the parameters used in this example

result in R ≈ 44. Even though this does not satisfy the requirement R < 1 of the QCL

algorithm, we see from the absolute error plot in Figure 4.1 that the maximum absolute

error over time decreases exponentially as the truncation level N is incremented (in this

example, the maximum Carleman truncation level considered is N = 4). Surprisingly,

this suggests that in this example, the error of the classical Carleman method converges

exponentially with N , even though R > 1.
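The same convergence-in-N behavior can be reproduced on a much smaller example. The sketch below (toy coefficients chosen so that R < 1; this is not the Burgers system) builds the truncated Carleman matrix for a scalar quadratic ODE du/dt = au² + bu + c and integrates it with forward Euler, comparing against direct Euler integration of the nonlinear equation.

```python
import numpy as np

# Toy dissipative quadratic ODE: du/dt = a*u^2 + b*u + c, with b < 0.
a, b, c = 0.2, -1.0, 0.05
u0, T, nt = 0.8, 4.0, 4000
h = T / nt

def carleman_system(N):
    """Truncated Carleman matrix for variables y_k = u^k, k = 1..N:
    dy_k/dt = k*a*y_{k+1} + k*b*y_k + k*c*y_{k-1}, dropping y_{N+1}."""
    A = np.zeros((N, N))
    f = np.zeros(N)
    for k in range(1, N + 1):
        A[k - 1, k - 1] = k * b
        if k >= 2:
            A[k - 1, k - 2] = k * c
        if k < N:
            A[k - 1, k] = k * a   # coupling to y_{k+1}; truncated at k = N
    f[0] = c                       # k = 1 term c*y_0 with y_0 = 1
    return A, f

def carleman_euler(N):
    y = np.array([u0 ** k for k in range(1, N + 1)])
    A, f = carleman_system(N)
    for _ in range(nt):
        y = y + h * (A @ y + f)
    return y[0]                    # approximation to u(T)

# Reference: forward Euler applied directly to the nonlinear ODE.
u = u0
for _ in range(nt):
    u = u + h * (a * u * u + b * u + c)

errs = [abs(carleman_euler(N) - u) for N in (1, 2, 3, 4)]
```

In this strongly dissipative regime the truncation error shrinks by roughly a constant factor with each additional Carleman level, mirroring the exponential convergence in N seen for the Burgers example.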


8 Note that this forcing does not satisfy the general conditions for efficient implementation of our
algorithm since it is not sparse. However, we expect that the algorithm can still be implemented efficiently
for a structured non-sparse forcing term such as in this example.
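For concreteness, the discretized right-hand side (4.194) with homogeneous Dirichlet boundaries can be sketched as follows; this is an illustrative reimplementation, not the code from the repository cited in Figure 4.1, and the forcing is omitted.

```python
import numpy as np

def burgers_rhs(u, nu, dx, f):
    """Right-hand side of (4.194): central differences for the diffusion and
    (conservative-form) advection terms, with u held at 0 on both boundaries."""
    du = np.zeros_like(u)
    du[1:-1] = (nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
                - (u[2:]**2 - u[:-2]**2) / (4.0 * dx)
                + f[1:-1])
    return du

# Setup matching the text: nx = 16 grid points, Re = U0*L0/nu = 20.
U0, L0, nx = 1.0, 1.0, 16
nu = U0 * L0 / 20.0
dx = L0 / (nx - 1)
x = np.linspace(-L0 / 2, L0 / 2, nx)
u = U0 * np.sin(2.0 * np.pi * x / L0)          # initial condition
f = np.zeros(nx)                               # forcing omitted in this sketch

u_next = u + 1e-4 * burgers_rhs(u, nu, dx, f)  # one small forward-Euler step
```

A single small Euler step with f = 0 should not increase the l2 norm of this initial condition: the viscous term is dissipative, and the central advection stencil is nearly norm-preserving.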
4.7 Discussion

In this chapter we have presented a quantum Carleman linearization (QCL) algorithm

for a class of quadratic nonlinear differential equations. Compared to the previous approach

of [74], our algorithm improves the complexity from an exponential dependence on T to a

nearly quadratic dependence, under the condition R < 1 as defined in (4.2). Qualitatively,

this means that the system must be dissipative and that the nonlinear and inhomogeneous

effects must be small relative to the linear effects. We have also provided numerical results

suggesting the classical Carleman method may work on certain PDEs that do not strictly

satisfy the assumption R < 1. Furthermore, we established a lower bound showing that

for general quadratic differential equations with R ≥ √2, quantum algorithms must have

worst-case complexity exponential in T . We also discussed several potential applications

arising in biology and fluid and plasma dynamics.

It is natural to ask whether the result of Theorem 4.1 can be achieved with a

classical algorithm, i.e., whether the assumption R < 1 makes differential equations

classically tractable. Clearly a naive integration of the truncated Carleman system (4.6) is

not efficient on a classical computer since the system size is Θ(n^N). Furthermore, it

is unlikely that any classical algorithm for this problem can run in time polylogarithmic

in n. If we consider Problem 4.1 with dissipation that is small compared to the total

evolution time, but let the nonlinearity and forcing be even smaller such that R < 1, then

in the asymptotic limit we have a linear differential equation with no dissipation. Hence

any classical algorithm that could solve Problem 4.1 could also solve non-dissipative

linear differential equations, which is a BQP-hard problem even when the dynamics are

unitary [137]. In other words, an efficient classical algorithm for this problem would

imply efficient classical algorithms for any problem that can be solved efficiently by a

quantum computer, which is considered unlikely.



Our upper and lower bounds leave a gap in the range 1 ≤ R < √2, for which we

do not know the complexity of the quantum quadratic ODE problem. We hope that future

work will close this gap and determine for which R the problem can be solved efficiently

by quantum computers in the worst case.

Furthermore, the complexity of our algorithm has nearly quadratic dependence on

T, namely T² poly(log T). It is unknown whether the complexity for quadratic ODEs

must be at least linear or quadratic in T . Note that sublinear complexity is impossible in

general because of the no-fast-forwarding theorem [33]. However, it should be possible

to fast-forward the dynamics in special cases, and it would be interesting to understand

the extent to which dissipation enables this.

The complexity of our algorithm depends on the parameter q defined in Theorem 4.1,

which characterizes the decay of the final solution relative to the initial condition. This

restricts the utility of our result, since we must have a suitable initial condition and

terminal time such that the final state is not exponentially smaller than the initial state.

However, it is unlikely that such a dependence can be significantly improved, since

renormalization of the state can be used to implement postselection, which would imply

the unlikely consequence BQP = PP (see Section 8 of [53] for further discussion).

As discussed in the introduction, the solution of a homogeneous dissipative equation

necessarily decays exponentially in time, so our method is not asymptotically efficient.

However, for inhomogeneous equations the final state need not be exponentially smaller

than the initial state even in a long-time simulation, suggesting that our algorithm could

be especially suitable for models with forcing terms.

It is possible that variations of the Carleman linearization procedure could increase

the accuracy of the result. For instance, instead of using just tensor powers of u as

auxiliary variables, one could use other nonlinear functions. Several previous papers on

Carleman linearization have suggested using multidimensional orthogonal polynomials

[73, 138]. They also discuss approximating higher-order terms with lower-order ones in

(4.6) instead of simply dropping them, possibly improving accuracy. Such changes would

however alter the structure of the resulting linear ODE, which could affect the quantum

implementation.

The quantum part of the algorithm might also be improved. In this chapter we limit

ourselves to the first-order Euler method to discretize the linearized ODEs in time. This

is crucial for the analysis in Lemma 4.3, which states that the global error increases at most

linearly with T . To establish this result for the Euler method, it suffices to choose the

time step (4.65) to ensure ∥I + Ah∥ ≤ 1, and then estimate the growth of global error

by (4.92). However, it is unclear how to give a similar bound for higher-order numerical

schemes. If this obstacle could be overcome, the error dependence of the complexity

might be improved.
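For intuition, the condition ∥I + Ah∥ ≤ 1 is easy to check numerically in the special case of a symmetric negative-definite A, where it reduces to h ≤ 2/|λ|max. The matrix below is a random illustration under that assumption, not the Carleman matrix of (4.6).

```python
import numpy as np

# Random symmetric negative-definite A with eigenvalues in [-2.0, -0.5].
rng = np.random.default_rng(0)
lam = -rng.uniform(0.5, 2.0, size=8)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = Q @ np.diag(lam) @ Q.T

# The eigenvalues of I + h*A are 1 + h*lam_i, which all lie in [-1, 1]
# exactly when h <= 2/|lam|_max; take the largest admissible step.
h = 2.0 / np.abs(lam).max()
nrm = np.linalg.norm(np.eye(8) + h * A, 2)     # spectral norm, <= 1
```

Doubling the step beyond this threshold pushes an eigenvalue of I + hA outside the unit interval, so the Euler iteration would amplify errors.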

It is also natural to ask whether our approach can be improved by taking features

of particular systems into account. Since the Carleman method has only received limited

attention and has generally been used for purposes other than numerical integration, it

seems likely that such improvements are possible. In fact, the numerical results discussed

in Section 4.6 (see in particular Figure 4.1) suggest that the condition R < 1 is not a

strict requirement for the viscous Burgers equation, since we observe convergence even

though R ≈ 44. This suggests that some property of equation (4.193) makes it more

amenable to Carleman linearization than our current analysis predicts. We leave a detailed

investigation of this for future work.

A related question is whether our algorithm can efficiently simulate systems exhibiting

dynamical chaos. The condition R < 1 might preclude chaos, but we do not have a proof

of this. More generally, the presence or absence of chaos might provide a more fine-

grained picture of the algorithm’s efficiency.

When contemplating applications, it should be emphasized that our approach produces

a state vector that encodes the solution without specifying how information is to be

extracted from that state. Simply producing a state vector is not enough for an end-to-end

application since the full quantum state cannot be read out efficiently. In some cases it

may be possible to extract useful information by sampling a simple observable, whereas in

other cases, more sophisticated postprocessing may be required to infer a desired property

of the solution. Our method does not address this issue, but can be considered as a

subroutine whose output will be parsed by subsequent quantum algorithms. We hope that

future work will address this issue and develop end-to-end applications of these methods.

Finally, the algorithm presented in this chapter might be extended to solve related

mathematical problems on quantum computers. Obvious candidates include initial value

problems with time-dependent coefficients and boundary value problems. Carleman methods

for such problems are explored in [83], but it is not obvious how to implement those

methods in a quantum algorithm. It is also possible that suitable formulations of problems

in nonlinear optimization or control could be solvable using related techniques.

Chapter 5: Conclusion and future work

In this dissertation, we developed an understanding of the design, analysis, and
applications of quantum algorithms for differential equations.

In Chapter 2, we have presented high-precision quantum algorithms to solve linear,

time-dependent, d-dimensional systems of ordinary differential equations. Specifically,

we showed how to employ a global approximation based on the spectral method as

an alternative to the more straightforward finite difference method. Compared to the

previous work [52], our algorithm improves the complexity of solving time-dependent

linear differential equations from poly(1/ϵ) to poly(log(1/ϵ)).

In Chapter 3, we have presented high-precision quantum algorithms for d-dimensional

partial differential equations. We developed quantum adaptive finite difference methods

for Poisson’s equation, and generalized the quantum spectral method for general second-

order elliptic equations. Whereas previous algorithms scaled as poly(d, 1/ϵ), our algorithms

scale as poly(d, log(1/ϵ)).

In Chapter 4, we have presented the first efficient quantum algorithm for a class of

nonlinear differential equations, exponentially improving the dependence of the evolution

time over previous quantum algorithms. The key ingredient we adopted and modified is

the Carleman linearization, which provides a long-time stable linear approximation to the

nonlinear system. We developed quantum Carleman linearization (QCL) for dissipative

nonlinear differential equations provided the condition R < 1. Here the quantity R is used

to quantify the ℓ2 relative strength of the nonlinearity and forcing to the linear dissipation.

The complexity of this approach is Õ(T²q/ϵ), where T is the evolution time, ϵ is the
allowed error, and q measures the decay of the solution. Moreover, we also provided

a quantum lower bound for the worst-case exponential time complexity of simulating

general nonlinear dynamics given R ≥ √2. Beyond the worst-case complexity analysis,

the numerical experiments for the Burgers equation revealed that Carleman linearization

could be practicable for certain nonlinear problems even for larger R.

Our work has attracted interest from communities in quantum physics, computer
science, applied mathematics, fluid and plasma dynamics, and other fields. Detailed open
questions and future directions for each topic can be found in the discussion
sections of Chapter 2, Chapter 3, and Chapter 4. In general, it is of great interest for us to
find answers to the following questions:

On the theory side, can we offer exponential quantum speedups for more

general differential equations with provable guarantees? It is possible that the assumptions

on the models can be relaxed. For linear ODEs and PDEs, we wonder if we can develop

high-precision quantum algorithms under weaker smoothness assumptions, and if we can

relax the elliptic condition for second-order linear PDEs. And for nonlinear differential

equations, we wonder whether the weak nonlinearity condition R < 1 can be relaxed.

Numerical results in Chapter 4 suggest that R < 1 is not a strict requirement for the

viscous Burgers equation, since we observe convergence even though R ≈ 44. This

suggests that some properties of nonlinear systems make them more amenable to Carleman

linearization than our current analysis predicts. We also wonder whether the complexity

of quantum algorithms for quadratic ODEs, Õ(T²q/ϵ), can be improved. It is implausible to

significantly reduce the dependence on q, since renormalization of the state can implement

postselection, which could imply the unlikely consequence BQP = PP. The error

dependence might be improved if the quantum adaptive FDM or the quantum spectral

method could be applied to the nonlinear case, as a rigorous bound on the condition

number of the resulting quantum linear system is required. It is unclear whether a linear

or even sublinear dependence on T is achievable. While there is a no-fast-forwarding

theorem for Hamiltonian simulations, it might be feasible to fast-forward certain initial

and boundary value problems of linear and nonlinear differential equations. Although
previous studies focus on performance in terms of the worst-case error, we
may find further improvements by considering the average-case error for random

initial inputs of typical differential equations. Finally, it is also likely that the techniques

presented in this dissertation could inspire new quantum algorithms for related problems

in optimization, control, and machine learning.

On the applications side, can we investigate end-to-end quantum applications

for differential equations and related real-world problems? For the preprocessing procedure,

we are concerned with how to efficiently prepare the initial states by loading classical

data. The ability to prepare such quantum states requires the implementation of quantum

random access memory (qRAM), which is widely thought to be impractical. But for

some particular initial conditions such as the Gaussian distribution, it is promising to

generate the initial states by a certain efficient routine instead of involving qRAM. For

the postprocessing procedure, we are greatly interested in extracting meaningful classical

observables of practical interest. Typical instances include the scattering cross section

from a wave propagation, and the free energy from a physical or biological system. Our

algorithms only produce a state vector that encodes the solution, which is not enough for

an end-to-end application since the full quantum state cannot be read out efficiently. In

some circumstances, it may be possible to extract useful information by sampling a simple

observable, whereas in other cases, more sophisticated postprocessing may be required

to infer the desired property of the solution. For producing such classical readouts by

quantum computers, we can investigate the complexity and determine whether an exponential

quantum speedup is possible. In addition, we are interested in the smallest system that

could be used to demonstrate proof-of-principle versions of these algorithms. It is of

great interest to estimate the implementation cost (e.g. the one- or two-qubit gate counts)

for early fault-tolerant quantum computers. We hope that future work will address these

issues toward end-to-end applications.

Bibliography

[1] Michael A. Nielsen and Isaac L. Chuang. Quantum computation and quantum
information. Cambridge University Press, 2002.

[2] John Watrous. The theory of quantum information. Cambridge University Press,
2018.

[3] Richard P. Feynman. Simulating physics with computers. International Journal of


Theoretical Physics, 21(6):467–488, 1982.

[4] Seth Lloyd. Universal quantum simulators. Science, pages 1073–1078, 1996.

[5] Ivan Kassal, Stephen P. Jordan, Peter J. Love, Masoud Mohseni, and Alán
Aspuru-Guzik. Polynomial-time quantum algorithm for the simulation of chemical
dynamics. Proceedings of the National Academy of Sciences, 105(48):18681–
18686, 2008. arXiv:0801.2986.

[6] Ian D. Kivlichan, Nathan Wiebe, Ryan Babbush, and Alán Aspuru-Guzik.
Bounding the costs of quantum simulation of many-body physics in real space.
Journal of Physics A: Mathematical and Theoretical, 50(30):305301, 2017.

[7] Borzu Toloui and Peter J. Love. Quantum algorithms for quantum chemistry based
on the sparsity of the CI-matrix, 2013. arXiv:1312.2579.

[8] Ryan Babbush, Dominic W. Berry, Yuval R. Sanders, Ian D. Kivlichan, Artur
Scherer, Annie Y. Wei, Peter J. Love, and Alán Aspuru-Guzik. Exponentially
more precise quantum simulation of fermions in the configuration interaction
representation. Quantum Science and Technology, 3(1):015006, 2017.

[9] Ryan Babbush, Dominic W. Berry, Jarrod R. McClean, and Hartmut Neven.
Quantum simulation of chemistry with sublinear scaling in basis size. Npj
Quantum Information, 5(1):1–7, 2019.

[10] Yuan Su, Dominic W Berry, Nathan Wiebe, Nicholas Rubin, and Ryan Babbush.
Fault-tolerant quantum simulations of chemistry in first quantization. PRX
Quantum, 2(4):040332, 2021. arXiv:2105.12767.

[11] Alán Aspuru-Guzik, Anthony D. Dutoi, Peter J. Love, and Martin Head-Gordon.
Simulated quantum computation of molecular energies. Science, 309(5741):1704–
1707, 2005. arXiv:quant-ph/0604193.

[12] James D. Whitfield, Jacob Biamonte, and Alán Aspuru-Guzik. Simulation of


electronic structure Hamiltonians using quantum computers. Molecular Physics,
109(5):735–750, 2011. arXiv:1001.3855.

[13] Jacob T. Seeley, Martin J. Richard, and Peter J. Love. The Bravyi-Kitaev
transformation for quantum computation of electronic structure. The Journal of
Chemical Physics, 137(22):224109, 2012. arXiv:1208.5986.

[14] Dave Wecker, Bela Bauer, Bryan K. Clark, Matthew B. Hastings, and Matthias
Troyer. Gate-count estimates for performing quantum chemistry on small quantum
computers. Physical Review A, 90(2):022305, 2014. arXiv:1312.1695.

[15] Matthew B. Hastings, Dave Wecker, Bela Bauer, and Matthias Troyer.
Improving quantum algorithms for quantum chemistry. Quantum Information &
Computation, 15(1-2):1–21, 2015. arXiv:1403.1539.

[16] David Poulin, Matthew B. Hastings, David Wecker, Nathan Wiebe, Andrew C.
Doherty, and Matthias Troyer. The Trotter step size required for accurate quantum
simulation of quantum chemistry. Quantum Information & Computation, 15(5-
6):361–384, 2015. arXiv:1406.4920.

[17] Jarrod R. McClean, Ryan Babbush, Peter J. Love, and Alán Aspuru-Guzik.
Exploiting locality in quantum computation for quantum chemistry. The Journal
of Physical Chemistry Letters, 5(24):4368–4380, 2014.

[18] Ryan Babbush, Jarrod McClean, Dave Wecker, Alán Aspuru-Guzik, and Nathan
Wiebe. Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation.
Physical Review A, 91(2):022311, 2015. arXiv:1410.8159.

[19] Markus Reiher, Nathan Wiebe, Krysta M. Svore, Dave Wecker, and Matthias
Troyer. Elucidating reaction mechanisms on quantum computers. Proceedings of
the National Academy of Sciences, 114(29):7555–7560, 2017. arXiv:1605.03590.

[20] Ryan Babbush, Dominic W. Berry, Ian D. Kivlichan, Annie Y. Wei, Peter J.
Love, and Alán Aspuru-Guzik. Exponentially more precise quantum simulation
of fermions in second quantization. New Journal of Physics, 18(3):033032, 2016.

[21] Mario Motta, Erika Ye, Jarrod R. McClean, Zhendong Li, Austin J. Minnich,
Ryan Babbush, and Garnet Kin-Lic Chan. Low rank representations for quantum
simulation of electronic structure. npj Quantum Information, 7(1):1–7, 2021.

[22] Earl Campbell. Random compiler for fast Hamiltonian simulation. Physical
Review Letters, 123(7):070503, 2019. arXiv:1811.08017.

[23] Dominic W. Berry, Craig Gidney, Mario Motta, Jarrod R. McClean, and Ryan
Babbush. Qubitization of arbitrary basis quantum chemistry leveraging sparsity
and low rank factorization. Quantum, 3:208, 2019. arXiv:1902.02134.

[24] Andrew M. Childs, Yuan Su, Minh C. Tran, Nathan Wiebe, and Shuchen
Zhu. Theory of Trotter error with commutator scaling. Physical Review X,
11(1):011020, 2021.

[25] Vera von Burg, Guang Hao Low, Thomas Häner, Damian S. Steiger, Markus
Reiher, Martin Roetteler, and Matthias Troyer. Quantum computing enhanced
computational catalysis. Physical Review Research, 3(3):033055, 2021.

[26] Joonho Lee, Dominic Berry, Craig Gidney, William J. Huggins, Jarrod R.
McClean, Nathan Wiebe, and Ryan Babbush. Even more efficient quantum
computations of chemistry through tensor hypercontraction. PRX Quantum,
2(3):030305, 2021. arXiv:2011.03494.

[27] Yuan Su, Hsin-Yuan Huang, and Earl T. Campbell. Nearly tight Trotterization of
interacting electrons. Quantum, 5:495, 2021. arXiv:2012.09194.

[28] Stephen P. Jordan, Keith S.M. Lee, and John Preskill. Quantum algorithms for
quantum field theories. Science, 336(6085):1130–1133, 2012.

[29] John Preskill. Simulating quantum field theory with a quantum computer. In The
36th Annual International Symposium on Lattice Field Theory, volume 334, page
024. SISSA Medialab, 2019. arXiv:1811.10085.

[30] Bela Bauer, Sergey Bravyi, Mario Motta, and Garnet Kin-Lic Chan. Quantum
algorithms for quantum chemistry and quantum materials science. Chemical
Reviews, 120(22):12685–12717, 2020. arXiv:2001.03685.

[31] Yudong Cao, Jonathan Romero, Jonathan P. Olson, Matthias Degroote, Peter D.
Johnson, Mária Kieferová, Ian D. Kivlichan, Tim Menke, Borja Peropadre, Nicolas
P. D. Sawaya, et al. Quantum chemistry in the age of quantum computing.
Chemical Reviews, 119(19):10856–10915, 2019. arXiv:1812.09976.

[32] Ryan Babbush, Nathan Wiebe, Jarrod McClean, James McClain, Hartmut Neven,
and Garnet Kin-Lic Chan. Low-depth quantum simulation of materials. Physical
Review X, 8(1):011044, 2018.

[33] Dominic W. Berry, Graeme Ahokas, Richard Cleve, and Barry C. Sanders.
Efficient quantum algorithms for simulating sparse Hamiltonians. Communications
in Mathematical Physics, 270(2):359–371, 2007. arXiv:quant-ph/0508139.

[34] David Poulin, Angie Qarry, Rolando D. Somma, and Frank Verstraete. Quantum
simulation of time-dependent Hamiltonians and the convenient illusion of Hilbert
space. Physical Review Letters, 106(17):170501, 2011. arXiv:1102.1360.

[35] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and
Rolando D. Somma. Exponential improvement in precision for simulating sparse
Hamiltonians. Forum of Mathematics, Sigma, 5:e8, 2017. arXiv:1312.1414.

[36] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and
Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor
series. Physical Review Letters, 114(9):090502, 2015. arXiv:1412.4687.

[37] Dominic W. Berry, Andrew M. Childs, and Robin Kothari. Hamiltonian simulation
with nearly optimal dependence on all parameters. In IEEE 56th Annual
Symposium on Foundations of Computer Science, pages 792–809. IEEE, 2015.
arXiv:1501.01715.

[38] Gilles Brassard, Peter Hoyer, Michele Mosca, and Alain Tapp. Quantum amplitude
amplification and estimation. Contemporary Mathematics, 305:53–74, 2002.
arXiv:quant-ph/0005055.

[39] Dominic W. Berry and Leonardo Novo. Corrected quantum walk for optimal
Hamiltonian simulation. arXiv:1606.03443, 2016.

[40] Leonardo Novo and Dominic W. Berry. Improved Hamiltonian simulation via a
truncated Taylor series and corrections. arXiv:1611.10033, 2016.

[41] Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization.
Quantum, 3:163, 2019. arXiv:1610.06546.

[42] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by
quantum signal processing. Physical Review Letters, 118(1):010501, 2017.
arXiv:1606.02685.

[43] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for
linear systems of equations. Physical Review Letters, 103(15):150502, 2009.
arXiv:0811.3171.

[44] Andris Ambainis. Variable time amplitude amplification and quantum algorithms
for linear algebra problems. In 29th Symposium on Theoretical Aspects of
Computer Science, volume 14, pages 636–647. LIPIcs, 2012. arXiv:1010.4458.

[45] Dong An and Lin Lin. Quantum linear system solver based on time-optimal
adiabatic quantum computing and quantum approximate optimization algorithm,
2019. arXiv:1909.05500.

[46] Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum


algorithm for systems of linear equations with exponentially improved dependence
on precision. SIAM Journal on Computing, 46(6):1920–1950, 2017.
arXiv:1511.02306.

[47] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular
value transformation and beyond: exponential improvements for quantum matrix
arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on
Theory of Computing, pages 193–204, 2019. arXiv:1806.01838.

[48] Lin Lin and Yu Tong. Optimal quantum eigenstate filtering with application to
solving quantum linear systems. Quantum, 4:361, 2020. arXiv:1910.14596.

[49] Yiğit Subaşı, Rolando D. Somma, and Davide Orsucci. Quantum algorithms for
systems of linear equations inspired by adiabatic quantum computing. Physical
Review Letters, 122(6):060504, 2019. arXiv:1805.10549.

[50] Yu Tong, Dong An, Nathan Wiebe, and Lin Lin. Fast inversion, preconditioned
quantum linear system solvers, and fast evaluation of matrix functions, 2020.
arXiv:2008.13295.

[51] Pedro Costa, Dong An, Yuval R. Sanders, Yuan Su, Ryan Babbush, and
Dominic W. Berry. Optimal scaling quantum linear systems solver via discrete
adiabatic theorem, 2021. arXiv:2111.08152.

[52] Dominic W. Berry. High-order quantum algorithm for solving linear differential
equations. Journal of Physics A: Mathematical and Theoretical, 47(10):105301,
2014. arXiv:1010.2745.

[53] Dominic W. Berry, Andrew M. Childs, Aaron Ostrander, and Guoming Wang.
Quantum algorithm for linear differential equations with exponentially
improved dependence on precision. Communications in Mathematical Physics,
356(3):1057–1081, 2017. arXiv:1701.03684.

[54] Andrew M. Childs and Jin-Peng Liu. Quantum spectral methods for differential
equations. arXiv:1901.00961, 2019.

[55] B. David Clader, Bryan C. Jacobs, and Chad R. Sprouse. Preconditioned
quantum linear system algorithm. Physical Review Letters, 110(25):250504,
2013. arXiv:1301.2340.

[56] Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre
Kais. Quantum algorithm and circuit design solving the Poisson equation. New
Journal of Physics, 15(1):013021, 2013. arXiv:1207.2485.

[57] Ashley Montanaro and Sam Pallister. Quantum algorithms and the finite element
method. Physical Review A, 93(3):032324, 2016. arXiv:1512.05903.

[58] Pedro C. S. Costa, Stephen Jordan, and Aaron Ostrander. Quantum algorithm
for simulating the wave equation. Physical Review A, 99(1):012323, 2019.
arXiv:1711.05394.

[59] Andrew M. Childs, Jin-Peng Liu, and Aaron Ostrander. High-precision quantum
algorithms for partial differential equations, 2020. arXiv:2002.07868.

[60] Alexander Engel, Graeme Smith, and Scott E. Parker. Quantum algorithm for the
Vlasov equation. Physical Review A, 100(6):062315, 2019. arXiv:1907.09418.

[61] Noah Linden, Ashley Montanaro, and Changpeng Shao. Quantum vs. classical
algorithms for solving the heat equation. arXiv:2004.06516, 2020.

[62] Alexei Y. Kitaev. Quantum measurements and the Abelian stabilizer problem,
1995. arXiv:quant-ph/9511026.

[63] David Kahaner, Cleve Moler, and Stephen Nash. Numerical methods and software.
Prentice-Hall, Inc., 1989.

[64] Rainer Kress. Numerical Analysis, volume 181. Springer-Verlag, 1998.

[65] Esmail Babolian and Mohammad Mahdi Hosseini. A modified spectral method for
numerical solution of ordinary differential equations with non-analytic solution.
Applied Mathematics and Computation, 132(2-3):341–351, 2002.

[66] Mohammad Mahdi Hosseini. A modified pseudospectral method for numerical
solution of ordinary differential equations systems. Applied Mathematics
and Computation, 176(2):470–475, 2006.

[67] Călin Ioan Gheorghiu. Spectral methods for differential problems. Casa Cărţii de
Ştiinţă Cluj-Napoca, 2007.

[68] Jie Shen, Tao Tang, and Li-Lian Wang. Spectral methods: algorithms, analysis
and applications, volume 41. Springer Science & Business Media, 2011.

[69] Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre
Kais. Quantum algorithm and circuit design solving the Poisson equation. New
Journal of Physics, 15(1):013021, 2013. arXiv:1207.2485.

[70] Pedro C. S. Costa, Stephen Jordan, and Aaron Ostrander. Quantum algorithm
for simulating the wave equation. Physical Review A, 99(1):012323, 2019.
arXiv:1711.05394.

[71] Ivo Babuška and Manil Suri. The h-p version of the finite element method
with quasiuniform meshes. Mathematical Modelling and Numerical Analysis,
21(2):199–238, 1987.

[72] Edward H. Kerner. Universal formats for nonlinear ordinary differential systems.
Journal of Mathematical Physics, 22(7):1366–1371, 1981.

[73] Marcelo Forets and Amaury Pouly. Explicit error bounds for Carleman
linearization, 2017. arXiv:1711.02552.

[74] Sarah K. Leyton and Tobias J. Osborne. A quantum algorithm to solve nonlinear
differential equations. arXiv:0812.4423, 2008.

[75] Ilon Joseph. Koopman-von Neumann approach to quantum simulation of nonlinear
classical dynamics, 2020. arXiv:2003.09980.

[76] Ilya Y. Dodin and Edward A. Startsev. On applications of quantum computing
to plasma simulations, 2020. arXiv:2005.14369.

[77] Seth Lloyd, Giacomo De Palma, Can Gokler, Bobak Kiani, Zi-Wen Liu, Milad
Marvian, Felix Tennie, and Tim Palmer. Quantum algorithm for nonlinear
differential equations, 2020. arXiv:2011.06571.

[78] Andrew M. Childs and Joshua Young. Optimal state discrimination and
unstructured search in nonlinear quantum mechanics. Physical Review A,
93(2):022314, 2016. arXiv:1507.06334.

[79] Daniel S. Abrams and Seth Lloyd. Nonlinear quantum mechanics implies
polynomial-time solution for NP-complete and #P problems. Physical Review
Letters, 81(18):3992, 1998. arXiv:quant-ph/9801041.

[80] Scott Aaronson. NP-complete problems and physical reality. ACM SIGACT News,
36(1):30–52, 2005. arXiv:quant-ph/0502072.

[81] Jin-Peng Liu, Herman Øie Kolden, Hari K. Krovi, Nuno F. Loureiro, Konstantina
Trivisa, and Andrew M. Childs. Efficient quantum algorithm for dissipative
nonlinear differential equations, 2020. arXiv:2011.03185.

[82] Torsten Carleman. Application de la théorie des équations intégrales linéaires aux
systèmes d’équations différentielles non linéaires. Acta Mathematica, 59(1):63–
87, 1932.

[83] Krzysztof Kowalski and Willi-Hans Steeb. Nonlinear Dynamical Systems and
Carleman Linearization. World Scientific, 1991.

[84] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for
linear systems of equations. Physical Review Letters, 103(15):150502, 2009.
arXiv:0811.3171.

[85] Tom M. Apostol. Calculus, Volume II: Multi-variable calculus and linear
algebra, with applications to differential equations and probability.
Wiley, 1969.

[86] Sangtae Kim and Seppo J. Karrila. Microhydrodynamics: principles and selected
applications. Courier Corporation, 2013.

[87] R. K. Michael Thambynayagam. The diffusion handbook: applied solutions for
engineers. McGraw Hill Professional, 2011.

[88] Stewart Harris. An introduction to the theory of the Boltzmann equation. Courier
Corporation, 2004.

[89] John David Jackson. Classical electrodynamics, 1999.

[90] Bo Thidé. Electromagnetic field theory. Upsilon Books, Uppsala, 2004.

[91] R. Byron Bird. Transport phenomena. Appl. Mech. Rev., 55(1):R1–R4, 2002.

[92] Vivek V. Shende, Stephen S. Bullock, and Igor L. Markov. Synthesis of
quantum-logic circuits. IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems, 25(6):1000–1010, 2006.
arXiv:quant-ph/0406176.

[93] Andrew M. Childs. On the relationship between continuous- and discrete-time
quantum walk. Communications in Mathematical Physics, 294(2):581–603, 2010.
arXiv:0810.0312.

[94] François Fillion-Gourdeau and Emmanuel Lorin. Simple digital quantum
algorithm for symmetric first-order linear hyperbolic systems. Numerical
Algorithms, 82(3):1009–1045, 2019. arXiv:1705.09361.

[95] Hans-Joachim Bungartz and Michael Griebel. Sparse grids. Acta Numerica,
13:147–269, 2004.

[96] Christoph Zenger. Sparse grids, 1991. https://www5.in.tum.de/pub/zenger91sg.pdf.

[97] Jie Shen and Haijun Yu. Efficient spectral sparse grid methods and applications
to high-dimensional elliptic problems. SIAM Journal on Scientific Computing,
32(6):3228–3250, 2010.

[98] Jie Shen and Haijun Yu. Efficient spectral sparse grid methods and applications
to high-dimensional elliptic equations II. Unbounded domains. SIAM Journal on
Scientific Computing, 34(2):A1141–A1164, 2012.

[99] Lawrence C. Evans. Partial differential equations (2nd ed.). American
Mathematical Society, 2010.

[100] Jianping Li. General explicit difference formulas for numerical differentiation.
Journal of Computational and Applied Mathematics, 183(1):29–52, 2005.

[101] Roger A. Horn and Charles R. Johnson. Matrix analysis. Cambridge University
Press, 2012.

[102] Norbert Schuch and Jens Siewert. Programmable networks for quantum
algorithms. Physical Review Letters, 91(2):027902, 2003. arXiv:quant-
ph/0303063.

[103] Jonathan Welch, Daniel Greenbaum, Sarah Mostame, and Alán Aspuru-Guzik.
Efficient quantum circuits for diagonal unitaries without ancillas. New Journal
of Physics, 16(3):033040, 2014. arXiv:1306.3991.

[104] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and
Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor
series. Physical Review Letters, 114(9):090502, 2015. arXiv:1412.4687.

[105] Daniel T. Colbert and William H. Miller. A novel discrete variable representation
for quantum mechanical reactive scattering via the S-matrix Kohn method. The
Journal of Chemical Physics, 96(3):1982–1991, 1992.

[106] Daniel Spielman. Rings, paths, and Cayley graphs (course notes), 2014.
http://www.cs.yale.edu/homes/spielman/561/lect05-15.pdf.

[107] Tao Tang. Spectral and high-order methods with applications. Science Press
Beijing, 2006.

[108] Andreas Klappenecker and Martin Rötteler. Discrete cosine transforms on
quantum computers. In Proceedings of the 2nd International Symposium on
Image and Signal Processing and Analysis, pages 464–468, 2001.
arXiv:quant-ph/0111038.

[109] Martin Rötteler, Markus Püschel, and Thomas Beth. Fast signal transforms for
quantum computers. In Proceedings of the Workshop on Physics and Computer
Science, pages 31–43, 1999.

[110] Markus Püschel, Martin Rötteler, and Thomas Beth. Fast quantum Fourier
transforms for a class of non-abelian groups. In International Symposium on
Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, pages 148–
159, 1999. arXiv:quant-ph/9807064.

[111] Willi-Hans Steeb and F. Wilhelm. Non-linear autonomous systems of
differential equations and Carleman linearization procedure. Journal of
Mathematical Analysis and Applications, 77(2):601–611, 1980.

[112] Roberto F. S. Andrade. Carleman embedding and Lyapunov exponents. Journal
of Mathematical Physics, 23(12):2271–2275, 1982.

[113] Gerasimos Lyberatos and Christos A. Tsiligiannis. A linear algebraic method for
analysing Hopf bifurcation of chemical reaction systems. Chemical Engineering
Science, 42(5):1242–1244, 1987.

[114] Roger Brockett. The early days of geometric nonlinear control. Automatica,
50(9):2203–2224, 2014.

[115] Andreas Rauh, Johanna Minisini, and Harald Aschemann. Carleman linearization
for control and for state and disturbance estimation of nonlinear dynamical
processes. IFAC Proceedings Volumes, 42(13):455–460, 2009.

[116] Alfredo Germani, Costanzo Manes, and Pasquale Palumbo. Filtering of differential
nonlinear systems via a Carleman approximation approach. In Proceedings of the
44th IEEE Conference on Decision and Control, pages 5917–5922, 2005.

[117] Johannes M. Burgers. A mathematical model illustrating the theory of
turbulence. In Advances in Applied Mechanics, volume 1, pages 171–199.
Elsevier, 1948.

[118] Pierre Gilles Lemarié-Rieusset. The Navier-Stokes problem in the 21st
century. CRC Press, 2018.

[119] Germund G. Dahlquist. A special stability problem for linear multistep
methods. BIT Numerical Mathematics, 3(1):27–43, 1963.

[120] Kendall E. Atkinson. An introduction to numerical analysis. John Wiley &
Sons, 2008.

[121] Carl W. Helstrom. Quantum detection and estimation theory. Journal of
Statistical Physics, 1:231–252, 1969.

[122] Paul Waltman. Competition models in population biology. SIAM, 1983.

[123] Elliott W. Montroll. On coupled rate equations with quadratic
nonlinearities. Proceedings of the National Academy of Sciences,
69(9):2532–2536, 1972.

[124] Alessandro Ceccato, Paolo Nicolini, and Diego Frezzato. Recasting the
mass-action rate equations of open chemical reaction networks into a
universal quadratic format. Journal of Mathematical Chemistry,
57:1001–1018, 2019.

[125] Fred Brauer and Carlos Castillo-Chavez. Mathematical models in population
biology and epidemiology, volume 2. Springer, 2012.

[126] Chaolong Wang, Li Liu, Xingjie Hao, Huan Guo, Qi Wang, Jiao Huang, Na He,
Hongjie Yu, Xihong Lin, An Pan, Sheng Wei, and Tangchun Wu. Evolving
epidemiology and impact of non-pharmaceutical interventions on the outbreak
of coronavirus disease 2019 in Wuhan, China. Journal of the American
Medical Association, 323(19):1915–1923, 2020.

[127] Gul Zaman, Yong Han Kang, and Il Hyo Jung. Stability analysis and optimal
vaccination of an SIR epidemic model. Biosystems, 93(3):240–249, 2008.

[128] Derdei Bichara, Yun Kang, Carlos Castillo-Chávez, Richard Horan, and
Charles Perrings. SIS and SIR epidemic models under virtual dispersal,
March 2015.

[129] Rany Qurratu Aini, Deden Aldila, and Kiki Sugeng. Basic reproduction
number of a multi-patch SVI model represented as a star graph topology,
October 2018.

[130] Lev Kofman, Dmitri Pogosian, and Sergei Shandarin. Structure of the
universe in the two-dimensional model of adhesion. Monthly Notices of the
Royal Astronomical Society, 242(2):200–208, 1990.

[131] Sergei F. Shandarin and Yakov B. Zeldovich. The large-scale structure of
the universe: turbulence, intermittency, structures in a self-gravitating
medium. Reviews of Modern Physics, 61(2):185–220, 1989.

[132] Massimo Vergassola, Bérengère Dubrulle, Uriel Frisch, and Alain Noullez.
Burgers' equation, devil's staircases and the mass distribution for
large-scale structures. Astronomy and Astrophysics, 289:325–356, 1994.

[133] P. A. Davidson. An Introduction to Magnetohydrodynamics. Cambridge Texts in
Applied Mathematics. Cambridge University Press, 2001.

[134] Yann Brenier and Emmanuel Grenier. Sticky particles and scalar conservation
laws. SIAM Journal on Numerical Analysis, 35(6):2317–2328, 1998.

[135] Min-Hang Bao. Micro mechanical transducers: pressure sensors,
accelerometers and gyroscopes. Elsevier, 2000.

[136] Constantine M. Dafermos. Hyperbolic conservation laws in continuum
physics, volume 3. Springer, 2005.

[137] Richard P. Feynman. Quantum mechanical computers. Optics News, 11:11–20,
1985.

[138] Richard Bellman and John M. Richardson. On some questions arising in the
approximate solution of nonlinear differential equations. Quarterly of Applied
Mathematics, 20:333–339, 1963.
