Intro To Quantum Computing

Contents

0    Introduction
1    Complex Arithmetic
2    Real Vector Spaces
3    Matrices
4    Hilbert Space
5    Linear Transformations
9    The Qubit
10   Tensor Products
17   Quantum Oracles

List of Figures

List of Tables
Chapter 0
Introduction
0.1 About This Book

0.1.1 The Course
This book accompanies CS 83A, the first of a three-quarter quantum computing sequence offered to students during their second or third years at Foothill Community College in Los Altos Hills, California. This first course focuses on quantum computing basics under the assumption that we have noise-free quantum components with which to build our circuits. The subsequent courses deal with advanced algorithms and quantum computing in the presence of noise, specifically error correction and quantum encryption.
This is not a survey course; it skips many interesting aspects of quantum computing. On the other hand, it is in-depth. The focus is on doing. My hope is that some
of you will apply the hard skills you learn here to discover quantum algorithms
of your own. However, even if this is the only course you take on the subject, the
computational tools you learn will be applicable far beyond the confines of quantum
information theory.
0.1.2 About This Introduction

This short introduction contains samples of the math symbolism we will learn later in the course. These equations are intended only to serve as a taste of what's ahead. You are not expected to know what the expressions and symbols mean yet, so don't panic. All will be revealed in the next few weeks. Consider this introduction to be a no-obligation sneak preview of things to come.
0.2 Qubits: A First Look

0.2.1 The State of a Qubit

A qubit holds a combination of the two classical bit values, a state we write as

    α|0⟩ + β|1⟩ .

The symbol |0⟩ corresponds to the classical 0 value, while |1⟩ is associated with a classical value of 1. Meanwhile, the symbols α and β stand for numbers that express how much 0 and how much 1 are present in the qubit.
We'll make this precise shortly, but the idea that you can have a teaspoon of 0 and a tablespoon of 1 contained in a single qubit immediately puts us on alert that we are no longer in the world of classical computing. This eerie concept becomes all the more magical when you consider that a qubit exists on a sub-atomic level (as a photon or the spin state of an electron, for example), orders of magnitude smaller than the physical embodiment of a single classical bit, which requires about a million atoms (or, in research labs, as few as 12).
That an infinitely small entity such as a qubit can store so much more information
than a bulky classical bit comes at a price, however.
0.2.2 A Tale of Two Experiments

A Classical Experiment

If 100 classical one-bit memory locations are known to all hold the same value (call it x until we know what that value is), then they all hold x = 0 or all hold x = 1. If we measure the first location and find it to be 1, then we will have determined that all 100 must hold a 1 (because of the assumption that all 100 locations are storing the exact same value). Likewise, if we measure a 0, we'd know that all 100 locations contain the value 0. Measuring the other 99 locations would confirm our conclusion.
Everything is logical.
A Quantum Experiment
Qubits are a lot more slippery. Imagine a quantum computer capable of storing qubits. In this hypothetical we can inspect the contents of any memory location in our computer by attaching an output meter to that location and reading a result off the meter.

Let's try that last experiment in our new quantum computer. We load 100 qubit memory locations with 100 identically prepared qubits. Identically prepared means that each qubit has the exact same value, call it |ψ⟩. (Never mind that I haven't explained what the value of a qubit means; it has some meaning, and I'm asking you to imagine that all 100 have the same value.)

Next, we use our meter to measure the first location. As in the classical case, we discover that the output meter registers either a 0 or a 1. That's already a disappointment. We were hoping to get some science-fictiony-looking measurement from a qubit, especially one with a name like |ψ⟩. Never mind; we carry on. Say the location gives us a measurement of 1.

Summary to this point. We loaded up all 100 locations with the same qubit, peered into the first location, and saw that it contained an ordinary 1.

What should we expect if we measure the other 99 locations? Answer: We have no idea what to expect.
And the surprises keep coming:

• the measurement will have permanently destroyed the original state we prepared, leaving it in a classical condition of either 0 or 1, with no more magical superposition left in there;

• as already stated, we know nothing (well, almost nothing, but that's for another day) about the measurement outcomes of the other 99 supposedly identically prepared locations; and

• most bizarre of all, in certain situations, measuring the state of any one of these qubits will cause another qubit in a different computer, room, planet or galaxy to be modified without the benefit of wires, radio waves or time.
0.2.3
Such wild behavior is actually well managed using quantum mechanics, the mathematical symbol-manipulation game that was invented in the early 20th century to
help explain and predict the behavior of very, very small things. I cited the truculent nature of qubits in this introduction as a bit of a sensationalism to both scare
and stimulate you. We can work with these things very easily despite their unusual
nature.
The challenge for us is that quantum mechanics and its application to information and algorithms is not something one can learn in a week or two. But one can
learn it in a few months, and thats what were going to do in this course. Ive prepared a sequence of lessons which will walk you through the fascinating mathematics
and quantum mechanics needed to understand the new algorithms. Because it takes
hard, analytical work, quantum computing isnt for everyone. But my hope is that
some among you will find this volume an accessible first step from which you can go
on to further study and eventually invent quantum algorithms of your own.
0.2.4 What α and β Mean

So as not to appear too secretive, I'll give you a taste of what α and β roughly mean for the state |ψ⟩. They tell us the respective probabilities that we would obtain a reading of either a 0 or a 1 were we to look into the memory location where |ψ⟩ is stored. (In our quantum jargon, this is called measuring the state |ψ⟩.) If, for example, the values happened to be

    α = 1/2    and    β = √3/2 ,

then the probability of measuring a 0 would be (1/2)² = 1/4 = 25%, and the probability of measuring a 1 would be (√3/2)² = 3/4 = 75%. In general, measuring the state

    α|0⟩ + β|1⟩

yields

    0   with probability |α|²    and
    1   with probability |β|² .
This is far from the whole story, as we'll learn in our very first lecture, but it gives you a feel for how the probabilistic nature of the quantum world can be both slippery and quantitative at the same time. We don't know what we'll get when we query a quantum memory register, but we do know what the probabilities will be.
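To make the probabilities concrete, here is a minimal Python sketch (my illustration, not part of the course materials; the helper names are my own) that converts a qubit's two amplitudes into measurement probabilities and simulates a few measurements:

    import math
    import random

    def probabilities(alpha, beta):
        """Return (P(0), P(1)) for the state alpha|0> + beta|1>."""
        p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
        assert math.isclose(p0 + p1, 1.0), "amplitudes must satisfy |a|^2 + |b|^2 = 1"
        return p0, p1

    def measure(alpha, beta):
        """Simulate one measurement: collapse to 0 or 1 with the proper odds."""
        p0, _ = probabilities(alpha, beta)
        return 0 if random.random() < p0 else 1

    alpha, beta = 1 / 2, math.sqrt(3) / 2             # the example above
    print(probabilities(alpha, beta))                 # (0.25, 0.75)
    print([measure(alpha, beta) for _ in range(10)])  # mostly 1s, about 25% 0s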
0.3 The Payoff

0.3.1 Early Results

If we're going to expend time studying math and quantum mechanics, we should expect something in return. The field is evolving rapidly, with new successes and failures being reported weekly. However, there are a few established results which are incontrovertible, and they are the reason so much effort is being brought to bear on quantum computer design.
Of the early results, perhaps the most dramatic is Shor's period-finding algorithm, and it is this that I have selected as the endpoint of our first volume. It provides a basis for factoring extremely large numbers in a reasonable time when such feats would take classical computers billions of years. The applications, once implemented, are profound. However, the consequences may give one pause; network security as we know it would become obsolete.
Fortunately, or so we believe, there are different quantum techniques that offer
alternatives to current network security and which could render it far more secure
than it is today. (These require additional theory, beyond the basics that we learn in
volume 1 and will be covered in the sequel.)
There are also less sensational, but nevertheless real, improvements that have been discovered. Grover's search algorithm for unsorted linear lists, while offering a modest speed-up over classical searches, is attractive merely because of the ubiquity of search in computing. Related search techniques that look for items over a network are being discovered now and promise to replicate such results for graphs rather than list structures.

Quantum teleportation and superdense coding are among the simplest applications of quantum computing, and they provide a glimpse into possible new approaches to more efficient communication. We'll get to these in this volume.
0.3.2 The State of the Hardware

Quantum computers don't exist yet. There is production-grade hardware that appears to leverage quantum behavior but does not exhibit the simple qubit processing needs of the early (or indeed most of the current) quantum algorithms in computer science. On the other hand, many university rigs possess the right stuff for quantum algorithms, but they are years away from having the stability and/or size to appear in manufactured form.
The engineers and physicists are doing their part.
The wonderful news for us computer scientists is that we don't have to wait. Regardless of what the hardware ultimately looks like, we already know what it will do. That's because it is based on the most fundamental, firmly established and (despite my scary-sounding lead-in) surprisingly simple quantum mechanics. We know what a qubit is, how a quantum logic gate will affect it, and what the consequences of reading qubit registers are. There is nothing preventing us from designing algorithms right now.
0.4 Our Two Tasks

Given that we can strap ourselves in and start work immediately, we should be clear on the tasks at hand. There are two.

• Circuit Design. We know what the individual components will be, even if they don't exist yet. So we must gain some understanding and proficiency in the assembly of these parts to produce full circuits.

• Algorithm Design. Because quantum mechanics is probabilistic by nature, we'll have to get used to the idea that the circuits don't always give us the answer right away. In some algorithms they do, but in others, we have to send the same inputs into the same circuits many times and let the laws of probability play out. This requires us to analyze the math so we can know whether we have a fighting chance of our algorithm converging to an answer with adequate error tolerance.
0.4.1 Circuit Design
Classical
Classical logic gates are relatively easy to understand. An AND gate, for example,
has a common symbol and straightforward truth table that defines it:
    x   y   x ∧ y
    0   0     0
    0   1     0
    1   0     0
    1   1     1
You were introduced to logic like this in your first computer science class. After about
20 minutes of practice with various input combinations, you likely absorbed the full
meaning of the AND gate without serious incident.
Quantum
A quantum logic gate requires significant vocabulary and symbolism to even define,
never mind apply. If you promise not to panic, Ill give you a peek. Of course, youll
be trained in all the math and quantum mechanics in this course before we define
such a circuit officially. By then, youll be eating quantum logic for breakfast.
We'll take the example of something called a second-order Hadamard gate. We would start by first considering the second-order qubit on which the gate operates. Such a thing is symbolized using the mysterious ket notation and a column of four numbers,

    |ψ⟩²  =  ( α, β, γ, δ )ᵗ .
Next, we would send this qubit through the Hadamard gate using the symbolism

    |ψ⟩²  ⟶  [ H⊗2 ]  ⟶  H⊗2 |ψ⟩² .

Although it means little to us at this stage, the diagram shows the qubit |ψ⟩² entering the Hadamard gate, and another, H⊗2 |ψ⟩², coming out.

Finally, rather than a truth table, we will need a matrix to describe the behavior of the gate. Its action on our qubit would be the result of matrix multiplication (another operation we will learn),

    H⊗2 |ψ⟩²  =  (1/2) ⎡ 1  1  1  1 ⎤ ⎛ α ⎞   =  (1/2) ⎛ α + β + γ + δ ⎞
                       ⎢ 1 −1  1 −1 ⎥ ⎜ β ⎟            ⎜ α − β + γ − δ ⎟
                       ⎢ 1  1 −1 −1 ⎥ ⎜ γ ⎟            ⎜ α + β − γ − δ ⎟
                       ⎣ 1 −1 −1  1 ⎦ ⎝ δ ⎠            ⎝ α − β − γ + δ ⎠ .
Again, we see that there is a lot of unlearned symbolism, and not the kind that can be explained in a few minutes or hours. We'll need weeks. But the weeks will be packed with exciting and useful information that you can apply to all areas of engineering and science, not just quantum computing.
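If you like to see the arithmetic run, the following NumPy sketch (mine, with assumed sample amplitudes) builds the second-order Hadamard matrix exactly as displayed above and applies it to a four-entry qubit column:

    import numpy as np

    # The second-order Hadamard gate: the 4x4 matrix shown above.
    H2 = 0.5 * np.array([[1,  1,  1,  1],
                         [1, -1,  1, -1],
                         [1,  1, -1, -1],
                         [1, -1, -1,  1]])

    # A sample (normalized) second-order qubit (alpha, beta, gamma, delta)^t.
    psi = np.array([0.5, 0.5, 0.5, 0.5])

    out = H2 @ psi                           # matrix multiplication: H2 |psi>
    print(out)                               # [1. 0. 0. 0.]
    print(np.allclose(H2 @ H2, np.eye(4)))   # the gate undoes itself: True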
0.4.2 Algorithms
In quantum computing, we first design a small circuit using the components that are
(or will one day become) available to us. An example of such a circuit in diagram
form (with no explanation offered today) is
    |0⟩ⁿ —(A)— H⊗n —(B)—⎡    ⎤—(C)— H⊗n —→ (measurement: actual)
                        ⎢ Uf ⎥
    |0⟩ⁿ ———————————————⎣    ⎦——————————→ (measurement: conceptual)

There are access points, A, B and C, to assist with the analysis of the circuit. When we study these circuits in a few weeks, we'll be following the state of a qubit as it makes its way through each access point.
Deterministic

The algorithm may be deterministic, in which case we get an answer immediately. The final steps in the algorithm might read:

• We run the circuit one time only and measure the output.
• If we read a zero, the function is constant.
• If we read any non-zero value, the function is balanced.

This will differ from a corresponding classical algorithm that requires, typically, many evaluations of the circuit (or in computer language, many loop passes).

Probabilistic

Or perhaps our algorithm will be probabilistic, which means that once in a blue moon it will yield an incorrect answer. The final steps, then, might be:

• If the above loop ended after n + T full passes, we failed.
• Otherwise, we succeeded and have solved the problem with error probability < 1/2^T and with a big-O time complexity of O(n³), i.e., in polynomial time.

Once Again: I don't expect you to know about time complexity or probability yet. You'll learn it all here.
Whether deterministic or probabilistic, we will be designing circuits and their
algorithms that can do things faster than their classical cousins.
0.5 Perspective

Quantum computing does not promise to do everything better than classical computing. In fact, the majority of our processing needs will almost certainly continue to be met more efficiently with today's bit-based logic. We are designing new tools for currently unsolvable problems, not to fix things that are currently unbroken.
0.6 Charting Your Course

Most students will find that a cover-to-cover reading of this book does not match their individual preparation or goals. One person may skip the chapters on complex arithmetic and linear algebra, while another may devote considerable (and, I hope, pleasurable) time luxuriating in those subjects. You will find your path in one of three ways:
1. Self-Selection. The titles of chapters and sections are visible in the click-able
table of contents. You can use them to evaluate whether a set of topics is likely
to be worth your time.
2. Chapter Introductions. The first sentence of some chapters or sections may
qualify them as optional, intended for those who want more coverage of a specific topic. If any such optional component is needed in the later volumes
accompanying CS 83B or CS 83C, the student will be referred back to it.
3. Tips Found at the Course Site. Students enrolled in CS 83A at Foothill
College will have access to the course web site where weekly modules, discussion
forums and private messages will contain individualized navigation advice.
Let's begin by learning some math.
Chapter 1
Complex Arithmetic
1.1 Complex Numbers and the Qubit

Recall the generic state of a qubit,

    α|0⟩ + β|1⟩ .

But what are these numbers α and β? If you are careful not to take it too seriously, you can imagine them to be numbers between 0 and 1, where small values mean less probable or a small dose and larger values mean more probable or a high dose. So a particular qubit value, call it |ψ₀⟩, defined so that α is large and β is small, would mean a large amount of the classical bit 0 and a small amount of the classical bit 1. I'm intentionally using pedestrian terminology because it's going to take us a few weeks to rigorously define all this. However, I can reveal something immediately: real numbers will not work for α or β. The vagaries of quantum mechanics require that these numbers be taken from the richer pool of complex numbers.
Our quest to learn quantum computing takes us through the field of quantum mechanics, and the first step in that effort must always be a mastery of complex arithmetic. Today we check off that box. And even if you've studied it in the past, our treatment today might include a few surprises (results we'll be using repeatedly) like Euler's formula, how to sum complex roots-of-unity, the complex exponential function and polar forms. So without further ado, let's get started.
1.2 The Complex Number System

1.2.1 The Imaginary Unit i

Flip the sign in the innocent equation x² − 1 = 0 to get

    x² + 1  =  0 ,
and we no longer have any solutions in the real numbers. Yet we need such solutions
in physics, engineering and indeed, in every quantitative field from economics to
neurobiology.
The problem is that the real numbers do not constitute a complete field, a term
that expresses the fact that there are equations that have no solutions in that number
system. We can force the last equation to have a solution by royal decree: we declare
the number
i
1
to be added to R. It is called an imaginary number. Make sure you understand the
meaning here. We are not computing a square root. We are defining a new number
whose name is i, and proclaiming that it have the property that
i2
1 .
1.2.2 The Definition of C

However, we get lucky. The number i can't just be thrown in without also specifying how we will respond when someone wants to add or multiply it by a number like 3 or -71.6. And once we do that, we start proliferating new combinations called complex numbers. Each such non-trivial combination (i.e., one with an i in it) will be a solution to some equation that doesn't have a real zero. Here are a few examples of the kinds of new numbers we will get: 3 + 2i, −71.6i, 5 − i.
1.2.3 The Complex Plane

Since every complex number is defined by an ordered pair of real numbers, (a, b), where it is understood that

    (a, b)  ⟷  a + bi ,

we have a natural way to represent each such number on a plane, whose x-axis is the real axis (which expresses the value a), and whose y-axis is the imaginary axis (which expresses the value b) (figure 1.1). This looks a lot like the real Cartesian plane, R². We will often name complex numbers with single letters, as in

    c = a + ib ,    w = u + iv .
In quantum computing, our complex numbers are usually coefficients of the computational basis states (a new term for those special symbols |0⟩ and |1⟩ we have been toying with), in which case we may use Greek letters α or β for the complex numbers,

    |ψ⟩  =  α|0⟩ + β|1⟩ .

This notation emphasizes the fact that the complex numbers are scalars of the complex vector space under consideration.

[Note. If terms like vector space or scalar are new to you, fear not. I'm not officially defining them yet, and we'll have a full lecture on them. I just want to start exposing you to some vocabulary early.]
Equality of Two Complex Numbers

The criteria for two complex numbers to be equal follows the template set by two points in R² being equal: both coordinates must be equal. If

    z = x + iy    and    w = u + iv ,

then

    z = w    ⟺    x = u  and  y = v .

1.2.4 The Arithmetic Operations
Addition and subtraction are performed component-wise,

    (a + ib) ± (c + id)  =  (a ± c) + i(b ± d) ,

while multiplication comes from expanding (a + ib)(c + id) and applying i² = −1. Division is carried out by multiplying numerator and denominator by the conjugate quantity c − id:

    (a + ib) / (c + id)  =  [ (a + ib)(c − id) ] / [ (c + id)(c − id) ]
                         =  [ (ac + bd) + i(bc − ad) ] / ( c² + d² )
                         =  (ac + bd)/(c² + d²)  +  i (bc − ad)/(c² + d²) ,    where c² + d² ≠ 0 .

A special consequence of this is the oft-cited identity

    1/i  =  −i .

[Exercise. Prove it.]
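Python has complex numbers built in, so we can sanity-check these rules; here is a quick sketch of mine (arbitrary sample values) verifying the division formula and the identity 1/i = −i:

    import cmath

    a, b = 3.0, -2.0        # z = a + ib
    c, d = 5.0, 4.0         # w = c + id
    z, w = complex(a, b), complex(c, d)

    # Division via the conjugate formula derived above.
    denom = c**2 + d**2
    by_formula = complex((a*c + b*d) / denom, (b*c - a*d) / denom)

    print(cmath.isclose(z / w, by_formula))   # True
    print(1 / 1j)                             # -1j, i.e., 1/i = -i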
Addition (or subtraction) can be pictured as the vectorial sum of the two complex numbers (see figure 1.2). However, multiplication is more easily visualized when we write the numbers in polar form, a representation coming shortly.

1.2.5 C is a Field

A number system that has addition and multiplication replete with the usual properties is called a field. What we have outlined above is the fact that C is, like R, a field. When you have a field, you can then create a vector space over that field by taking n-tuples of numbers from that field. Just as we have real n-dimensional vector spaces, Rⁿ, we can as easily create n-dimensional vector spaces over C, which we call Cⁿ. (We have a whole lesson devoted to defining real and complex vector spaces.)
1.3 Two Pictures of a Complex Number

1.3.1 The Correspondence Between C and R²

We already saw that each complex number has two aspects to it: the real term and the term that has the i in it. This creates a natural correspondence between C and R²,

    x + iy  ⟷  (x, y) .
1.3.2 Axes and Polar Coordinates

The axes can still be referred to as the x-axis and y-axis, but they are more commonly called the real axis and imaginary axis. The number i sits on the imaginary axis, one unit above the real axis. The number -3 is three units to the left of the imaginary axis on the real axis. The number 1 + i is in the first quadrant. [Exercise. Look up that term and describe what the second, third and fourth quadrants are.]

Besides using (x, y) to describe z, we can use the polar representation suggested by the polar coordinates (r, θ) of the complex plane (figure 1.3),

    x + iy  =  r ( cos θ + i sin θ ) .

Figure 1.3: The connection between Cartesian and polar coordinates of a complex number.
Terminology

    x = Re(z) ,    the real part of z ,
    y = Im(z) ,    the imaginary part of z ,
    r = |z| ,      the modulus of z ,
    θ = arg z ,    the argument (or just arg) of z .

Numbers that lie on the imaginary axis, such as 3i, −i, 900i, etc., are called pure imaginaries.

Note. Modulus will be discussed more fully in a moment. For now, |z| = r can be taken as a definition or just terminology with the definition to follow.
1.3.3 The Complex Conjugate

If

    z  =  x + iy ,

then its complex conjugate (or just conjugate) is designated and defined as

    z*  ≡  x − iy .

Geometrically, this is like reflecting z across the x (real) axis (figure 1.4).
It is easy to show that conjugation distributes across sums and products, i.e.,

    (w z)*  =  w* z*    and    (w + z)*  =  w* + z* .

These little factoids will come in handy when we study kets, bras and Hermitian conjugates in a couple weeks.

[Exercise. Prove both assertions. What about quotients?]
The Modulus of a Complex Number

Just as in the case of R², the modulus of the complex z is the length of the line segment (in the complex plane) from 0 to z, that is,

    |z|  ≡  √( Re(z)² + Im(z)² )  =  √( x² + y² ) .

A short computation shows that multiplying z by its conjugate, z*, results in a non-negative real number, the square of the modulus:

    z z*  =  x² + y²  =  |z|² .
1.4 Transcendental Functions

The term transcendental function has a formal definition, but for our purposes it means functions like sin x, cos x, eˣ, sinh x, etc. It's time to talk about how they are defined and relate to complex arithmetic.

1.4.1 The Exponential Function

From calculus, you may have learned that the real exponential function, exp(x) = eˣ, can be expressed (by some authors, defined) in terms of an infinite sum, the Taylor series

    eˣ  =  Σ_{n=0}^{∞} xⁿ/n! .
This suggests that we can define a complex exponential function that has a similar expansion, only for a complex z rather than a real x.

Complex Exponential of a Pure Imaginary Number and Euler's Formula

We start by defining a new function of a purely imaginary number, iθ, where θ is the real angle (or arg) of the number,

    exp(iθ)  ≡  Σ_{n=0}^{∞} (iθ)ⁿ/n! .

(A detail that I am skipping is the proof that this series converges to a complex number for all real θ. But believe me, it does.)
Let's expand the sum, but first, an observation about increasing powers of i:

    i⁰ = 1 ,   i¹ = i ,   i² = −1 ,   i³ = −i ,   i⁴ = 1 ,   i⁵ = i ,   . . . ,   iⁿ⁺⁴ = iⁿ .
Apply these powers of i to the infinite sum:

    e^{iθ}  =  1 + iθ/1! − θ²/2! − iθ³/3! + θ⁴/4! + iθ⁵/5! − θ⁶/6! − iθ⁷/7! + θ⁸/8! + . . . .
Rearrange the terms so that all the real terms are together and all the imaginary terms are together,

    e^{iθ}  =  ( 1 − θ²/2! + θ⁴/4! − θ⁶/6! + θ⁸/8! − . . . )
             + i ( θ/1! − θ³/3! + θ⁵/5! − θ⁷/7! + . . . ) .

You may recognize the two parenthetical expressions as the Taylor series for cos θ and sin θ, and we pause to summarize this result of profound and universal importance.
Euler's Formula

    e^{iθ}  =  cos θ + i sin θ .

Combine this with the Pythagorean identity,

    cos² θ + sin² θ  =  1 ,

and you arrive at one of the most necessary and widely used facts in all of physics and engineering,

    | e^{iθ} |  =  1 ,    for real θ .

[Exercise. Prove this last equality without recourse to Euler's formula, using exponential identities alone.]
Euler's formula tells us how to visualize the exponential of a pure imaginary. If we think of θ as time, then e^{iθ} is a speck (if you graph it in the complex plane) traveling around the unit circle counter-clockwise at 1 radian-per-second (see Figure 1.6).
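Euler's formula is easy to spot-check numerically. A tiny sketch (mine, not the text's) compares both sides at a few angles and confirms |e^{iθ}| = 1:

    import cmath, math

    for theta in (0.0, math.pi / 6, math.pi / 2, 2.0, math.pi):
        lhs = cmath.exp(1j * theta)                      # e^{i theta}
        rhs = complex(math.cos(theta), math.sin(theta))  # cos + i sin
        assert cmath.isclose(lhs, rhs, abs_tol=1e-12)
        print(f"theta={theta:5.3f}  |e^(i theta)| = {abs(lhs):.12f}")  # always 1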
1.4.2 cos and sin in Terms of the Exponential

Now try this: plug −θ (minus theta) into Euler's formula, then add (or subtract) the resulting equation to (or from) the original formula. Because of the trigonometric identities

    sin(−θ) = −sin(θ)    and    cos(−θ) = cos(θ) ,

the so-called oddness and evenness of sin and cos, respectively, you would quickly discover the first (or second) equality below:

    cos θ  =  ( e^{iθ} + e^{−iθ} ) / 2 ,
    sin θ  =  ( e^{iθ} − e^{−iθ} ) / (2i) .

These appear often in physics and engineering, and we'll be relying on them later in the course.
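Both identities submit to the same kind of numerical check; a throwaway sketch of mine samples random angles:

    import cmath, math, random

    for _ in range(5):
        t = random.uniform(-10.0, 10.0)
        cos_t = (cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2
        sin_t = (cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j)
        assert cmath.isclose(cos_t, math.cos(t), abs_tol=1e-12)
        assert cmath.isclose(sin_t, math.sin(t), abs_tol=1e-12)
    print("cos and sin recovered from exponentials at all sampled angles")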
1.4.3 The Exponential of an Arbitrary Complex Number

We have defined exp() for pure imaginaries as an infinite sum, and we already knew that exp() for reals was an infinite sum, so we combine the two to define exp() for any complex number. If z = x + iy,

    exp(z)  ≡  e^z  ≡  e^{x+iy}  ≡  eˣ e^{iy} .

We can do this because each factor on the far right is a complex number (the first, of course, happens to also be real), so we can take their product.

For completeness, we should note (this requires proof, not supplied) that everything we have done leads to the promised Taylor expansion for e^z as a function of a complex z, namely,

    exp(z)  =  e^z  =  Σ_{n=0}^{∞} zⁿ/n! .

This definition implies, among other things, the correct behavior of exp(z) with regard to addition of exponents, that is,

    exp(z + w)  =  exp(z) exp(w) ,    i.e.,    e^{z+w}  =  e^z e^w .
As a quick application, let's derive the angle-addition formulas by expanding e^{i(A+B)} two ways:

    e^{i(A+B)}   =  cos(A + B) + i sin(A + B) ,    and
    e^{iA} e^{iB}  =  (cos A + i sin A)(cos B + i sin B)
                   =  (cos A cos B − sin A sin B) + i (sin A cos B + cos A sin B) .

Because of the law of exponents we know that the LHSs of the last two equations are equal, so their RHSs must also be equal. Finally, equate the real and imaginary parts:

    cos(A + B)  =  cos A cos B − sin A sin B ,
    sin(A + B)  =  sin A cos B + cos A sin B .            QED
1.4.4 Complex cos and sin

We won't really need these, but let's record them for posterity. For a complex z, we have

    cos z  =  1 − z²/2! + z⁴/4! − z⁶/6! + z⁸/8! − . . .

and

    sin z  =  z/1! − z³/3! + z⁵/5! − z⁷/7! + . . . .
1.4.5 The Polar Form

(For the record, the two series above combine, just as in the real case, to give e^{iz} = cos z + i sin z.)

We are comfortable expressing a complex number as the sum of its real and imaginary parts,

    z  =  x + iy .

But from the equivalence of the Cartesian and polar coordinates of a complex number seen in figure 1.3 and expressed by

    x + iy  =  r ( cos θ + i sin θ ) ,

we can use the Euler formula on the RHS to obtain the very useful

    z  =  r e^{iθ} .
The latter version gives us a variety of important identities. If z and w are two complex numbers expressed in polar form,

    z  =  r e^{iθ}    and    w  =  s e^{iφ} ,

then we have

    z w  =  r s e^{i(θ+φ)}    and    z / w  =  (r/s) e^{i(θ−φ)} .
Notice how the moduli multiply or divide and the args add or subtract (figure 1.7). The same notation handles conjugates and reciprocals:

    z*  =  r e^{−iθ}    and    1/z  =  (1/r) e^{−iθ} .

In fact, that first equation is so useful, I'll restate it slightly. For any real number θ,

    ( r e^{iθ} )*  =  r e^{−iθ} .

[Exercise. Using polar notation, find a short proof that the conjugate of a product (quotient) is the product (quotient) of the conjugates. (This is an exercise you may have done above using more ink.)]
A common way to use these relationships is through equivalent identities that put the emphasis on the modulus and arg, separately,

    |z w|    =  |z| |w| ,          arg(z w)    =  arg z + arg w ,
    |z / w|  =  |z| / |w| ,        arg(z / w)  =  arg z − arg w ,
    |z*|     =  |z| ,              arg(z*)     =  − arg z .

[Exercise. Verify all of the last dozen or so polar identities using the results of the earlier sections.]
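The modulus and arg identities can be verified mechanically as well; here is a short sketch of mine using Python's cmath (the sample values are arbitrary):

    import cmath, math

    z, w = 3 - 2j, -1 + 4j

    assert math.isclose(abs(z * w), abs(z) * abs(w))
    assert math.isclose(abs(z / w), abs(z) / abs(w))
    # args add modulo 2*pi; comparing via exp sidesteps branch-cut issues
    assert cmath.isclose(cmath.exp(1j * cmath.phase(z * w)),
                         cmath.exp(1j * (cmath.phase(z) + cmath.phase(w))))
    assert cmath.isclose(z.conjugate(), abs(z) * cmath.exp(-1j * cmath.phase(z)))
    print("polar identities verified")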
1.5 Roots of Unity

1.5.1 N Distinct Solutions to z^N = 1
Here is something you don't see much in the real numbers. The equation

    z^N  =  1 ,

with a positive integer, N, has either one or two real roots (zeros), depending on the parity of N. That is, if N is odd, the only zero is 1. If N is even, there are two zeros, -1 and 1.

In complex algebra, things are different. We're going to see that there are N distinct solutions to this equation. But first, we take a step back.
Consider the complex number (in polar form),

    ω_N  ≡  e^{i(2π/N)} .

ω_N is usually written

    e^{2πi/N} ,

but I grouped the non-i factor in the exponent so you could clearly see that ω_N was of the form e^{iθ} that we just finished studying. Indeed, knowing that θ = 2π/N is a real number allows us to use Euler's formula,

    e^{iθ}  =  cos θ + i sin θ ,

to place each ω_N on the unit circle at angle 2π/N. For example, ω₄ = i, and for N > 4, as N increases (5, 6, 7, etc.), ω_N marches clockwise along the upper half of the unit circle approaching 1 (but never reaching it). For example, ω₁₀₀₀ is almost indistinguishable from 1, just above it.
For any N > 0, we can see that

    (ω_N)^N  =  ( e^{i(2π/N)} )^N  =  e^{i2π}  =  1 ,

so ω_N deserves to be called an Nth root-of-unity. In fact, we should call this the primitive Nth root-of-unity to distinguish it from its siblings which we'll meet in a moment. Finally, we can see that ω_N is a non-real (when N > 2) solution to the equation

    z^N  =  1 .

Often, when we are using the same N and ω_N for many pages, we omit the subscript and use the simpler

    ω  ≡  e^{2πi/N} .
Taking successive powers of the primitive root produces the list

    e^{i(2π/N)} ,  e^{i2(2π/N)} ,  e^{i3(2π/N)} ,  . . . ,  e^{iN(2π/N)} = e^{i2π} = 1 .

(See Figure 1.8.) These are also Nth roots-of-unity, generated by taking powers of the primitive Nth root.

[Exercise. Why are they called Nth roots-of-unity? Hint: Raise any one of them to the Nth power.]
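A few lines of Python (my illustration; N = 8 is arbitrary) generate the N roots-of-unity from the primitive one and confirm that each solves z^N = 1:

    import cmath

    N = 8
    omega = cmath.exp(2j * cmath.pi / N)      # the primitive Nth root-of-unity
    roots = [omega ** k for k in range(N)]    # its N successive powers

    for z in roots:
        assert cmath.isclose(z ** N, 1, abs_tol=1e-9)   # each is an Nth root of 1

    distinct = {(round(z.real, 9), round(z.imag, 9)) for z in roots}
    print(len(distinct))                      # 8: the N powers are all different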
1.5.2 Euler's Identity

By looking at the fourth roots-of-unity, you can get some interesting relationships:

    e^{iπ/2}  =  i ,      e^{2πi}  =  1 ,      and      e^{iπ}  =  −1 .

(See Figure 1.9.) That last equation is also known as Euler's identity (distinct from Euler's formula). Sometimes it is written in the form

    e^{iπ} + 1  =  0 .
1.5.3 Summation Notation

There's a common notation used to write out long sums called the summation notation. We'll use it throughout the course, beginning with the next subsection, so let's formally introduce it here. Instead of using the ellipsis (. . .) to write a long sum, as in

    a₁ + a₂ + a₃ + . . . + aₙ ,

we symbolize like this:

    Σ_{k=1}^{n} aₖ .

The index starting the sum is indicated below the large Greek sigma (Σ), and the final index in the sum is placed above it. For example, if we wanted to start the sum from a₀ and end at a_{n−1}, we would write

    Σ_{k=0}^{n−1} aₖ .

When the final index is understood, we may show only the start,

    Σ_{k=0} aₖ ,

where the start of the sum can be anything we want it to be: 0, 1, 5 or even −∞.

[Exercise. Write out the sums

    1 + 2 + 3 + . . . + 1999    and    0 + 2 + 4 + . . . + 2N ,

using summation notation.]
1.5.4 Sums of Roots-of-Unity

We finish this tutorial on complex arithmetic by presenting some facts about roots-of-unity that will come in handy in a few weeks. You can take some of these as exercises to confirm that you have mastered the ability to calculate with complex numbers.

For the remainder of this section, let's use the shorthand that I advertised earlier and call the primitive Nth root-of-unity ω, omitting the subscript N, which will be implied.

We have seen that ω, as well as all of its integral powers,

    ω ,  ω² ,  . . . ,  ω^{N−1} ,  ω^N = ω⁰ = 1 ,

are solutions to z^N − 1 = 0.
Exercises

(a) Show that when 0 ≤ l < N,

    ω^{l(N−1)} + ω^{l(N−2)} + . . . + ω^{l} + 1  =  { N ,  l = 0
                                                      0 ,  0 < l < N .

(b) Show that when −N ≤ l ≤ N,

    ω^{l(N−1)} + ω^{l(N−2)} + . . . + ω^{l} + 1  =  { N ,  l = 0, ±N
                                                      0 ,  −N < l < N, l ≠ 0 .

    Hint: Prove that, for all l, ω^{l+N} = ω^{l}, and apply the last result.

(c) Show that for any integer l,

    ω^{l(N−1)} + ω^{l(N−2)} + . . . + ω^{l} + 1  =  { N ,  l ≡ 0 (mod N)
                                                      0 ,  l ≢ 0 (mod N) .

    Hint: Add (or subtract) an integral multiple of N to (or from) l to bring it into the interval [−N, N), and call l′ the new value of l. Argue that ω^{l′(N−k)} = ω^{l(N−k)} for each term, so this doesn't change the value of the sum. Finally, apply the last result.

(d) Show that for 0 ≤ j < N and 0 ≤ m < N,

    ω^{(j−m)(N−1)} + ω^{(j−m)(N−2)} + . . . + ω^{(j−m)} + 1  =  { N ,  j = m
                                                                 0 ,  j ≠ m .
You will see the Kronecker delta, δ_{jm} (equal to 1 when j = m and 0 otherwise), throughout the course (and beyond), starting immediately, as I rewrite result (d) using it:

    Σ_{k=0}^{N−1} ω^{(j−m)k}  =  N δ_{jm} .
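Result (d) is the workhorse identity behind many interference arguments later in the book; it, too, can be spot-checked in a few lines (my sketch, with N = 7 chosen arbitrarily):

    import cmath

    N = 7
    omega = cmath.exp(2j * cmath.pi / N)

    for j in range(N):
        for m in range(N):
            total = sum(omega ** ((j - m) * k) for k in range(N))
            expected = N if j == m else 0
            assert cmath.isclose(total, expected, abs_tol=1e-9)
    print("sum over k of omega^((j-m)k) equals N * delta_jm")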
Chapter 2
Real Vector Spaces
2.1 Introduction

Recall the expression for the general state of a qubit,

    α|0⟩ + β|1⟩ ,

where last lecture revealed two of the symbols, α and β, to be complex numbers. Today, we add a little more information about it:

    The qubit, |ψ⟩, is a vector quantity.
Every vector lives in a world of other vectors, all of which bear certain similarities. That world is called a vector space, and two of the similarities that all vectors in any given vector space share are

• its dimension (i.e., is it two dimensional? three dimensional? 10 dimensional?), and

• the kinds of ordinary numbers or scalars that support the vector space operations (i.e., does it use real numbers? complex numbers? the tiny set of integers {0, 1}?).

In this lecture we'll restrict our study to real vector spaces, those whose scalars are the real numbers. We'll get to complex vector spaces in a couple days. As for dimension, we'll start by focusing on two-dimensional vector spaces, and follow that by meeting some higher dimensional spaces.
2.2 The Vector Space R²

R², sometimes referred to as Euclidean 2-space, will be our poster child for vector spaces, and we'll see that everything we learn about R² applies equally well to higher dimensional vector spaces like R³ (three-dimensional), R⁴ (four-dimensional) or Rⁿ (n-dimensional, for any positive integer, n).

With that overview, let's get back down to earth and define the two-dimensional real vector space R².
2.2.1 The Axioms

Every vector space has some rules, called the axioms, that define its objects and the way in which those objects can be combined. When you learned about the integers, you were introduced to the objects (. . . , −3, −2, −1, 0, 1, 2, 3, . . .) and the rules (2 + 2 = 4, (−3)(2) = −6, etc.). We now define the axioms (the objects and rules) for the special vector space R².
The Objects
A vector space requires two sets of things to make sense: the scalars and the vectors.
Scalars
A vector space is based on some number system. For example, R2 is built on the
real numbers, R. These are the scalars of the vector space. In math lingo the scalars
are referred to as the underlying field.
Vectors
The other set of objects that constitute a vector space are the vectors. In the case
of R2 they are ordered pairs,
    r = (3, 7)ᵗ ,    a = (500, 1 + π)ᵗ ,    x̂ = (1, 0)ᵗ ,    ŷ = (0, 1)ᵗ .

You'll note that I use boldface to name the vectors. That's to help distinguish a vector variable name like r, x̂ or v, from a scalar name, like a, x or α. Also, we will usually consider vectors to be written as columns, such as (3, 7)ᵗ (a column shown in transposed form to save space), not rows like (3, 7), although this varies by author and context.
A more formal and complete description of the vectors in R² is provided using set notation,

    R²  ≡  { (x, y)ᵗ  |  x, y ∈ R } .

(See figure 2.1.) This is somewhat incomplete, though, because it only tells what the objects are; we still owe the rules for combining them.
Vector addition is done component-wise, and scalar multiplication scales each component,

    c (x₁, y₁)ᵗ  =  (c x₁, c y₁)ᵗ .

[Exercise. Using the vectors r and a from above together with x̂ = (1, 0)ᵗ, (a) verify the associativity of vector addition using these three vectors, (b) multiply each by the scalar 1/π, and (c) verify the distributivity using the first two vectors and the scalar 1/π.]
2.2.2 Optional Equipment: The Inner Product

The vector spaces we encounter will have a metric, a way to measure distances. The metric is a side effect of a dot product, and we define this optional feature now.

[Caution. The phrase optional equipment means that not every vector space has an inner product, not that this is optional reading. It is crucial that you master this material for quantum mechanics and quantum computing, since all of our vector spaces will have inner products and we will use them constantly.]
Dot (or Inner) Product

When a vector space has this feature, it provides a way to multiply two vectors in order to produce a scalar,

    v · w  ↦  c .

In R² this is called the dot product, but in other contexts it may be referred to as an inner product. There can be a difference between a dot product and an inner product (in complex vector spaces, for example), so don't assume the terms are synonymous. However, for R² they are, with both defined by

    (x₁, y₁)ᵗ · (x₂, y₂)ᵗ  ≡  x₁x₂ + y₁y₂ .
Inner products can be defined differently in different vector spaces. However they are defined, they must obey certain properties to get the title inner or dot product. I won't burden you with them all, but one that is very common is a distributive property,

    v · ( w₁ + w₂ )  =  v · w₁ + v · w₂ .

[Exercise. Look up and list another property that an inner product must obey.]

[Exercise. Prove that the dot product in R², as defined above, obeys the distributive property.]
When we get to Hilbert spaces, there will be more to say about this.
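Here is a small plain-Python sketch of mine implementing this dot product and checking the distributive property on sample vectors:

    def dot(v, w):
        """Dot product in R^2: (x1, y1) . (x2, y2) = x1*x2 + y1*y2."""
        return v[0] * w[0] + v[1] * w[1]

    def add(v, w):
        return (v[0] + w[0], v[1] + w[1])

    v, w1, w2 = (3.0, 7.0), (500.0, 2.0), (1.0, 0.0)

    # Distributivity: v . (w1 + w2) == v . w1 + v . w2
    print(dot(v, add(w1, w2)) == dot(v, w1) + dot(v, w2))   # True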
Length (or Modulus or Norm)

An inner product, when present in a vector space, confers each vector with a length (a.k.a. modulus or norm), denoted by either |v| or ‖v‖. The length (in most situations, including ours) of each vector must be a non-negative real number, even for complex vector spaces coming later. So,

    |v|  ≥  0 .

For v = (x, y)ᵗ, the length comes from the dot product,

    |v|  ≡  √( v · v )  =  √( x² + y² ) .

Orthogonality

When two vectors, v and w, satisfy

    v · w  =  0 ,

we say that they are orthogonal or mutually perpendicular. In the relatively visualizable spaces R² and R³, we can imagine the line segments from 0 to v and from 0 to w forming right-angles with one another. (See figure 2.4.)
A true inner product must also be positive definite: only the zero vector may have zero length,

    v ≠ 0    ⟹    ‖v‖ > 0 .

When a proposed inner product fails to meet these conditions, it is often not granted the status inner (or dot) product but is instead called a pairing. When we come across a pairing, I'll call it to your attention, and we can take appropriate action.
[Exercise. Assume

    r = (3, 7)ᵗ ,    a = (500, 1 + π)ᵗ    and    x̂ = (1, 0)ᵗ .

Compute the norm of each vector and the dot product of each pair.]
2.2.3 A Higher-Dimensional Example: R³

Even though we have not learned what dimension means, we have an intuition that R² is somehow two-dimensional: we graph its vectors on paper and it seems flat. Let's take a leap now and see what it's like to dip our toes into the higher dimensional vector space R³ (over R). R³ is the set of all triples (or 3-tuples) of real numbers,

    R³  ≡  { (x, y, z)ᵗ  |  x, y, z ∈ R } .

(This only tells us what the vectors are, but I'm telling you now that the scalars continue to be R.)
Three sample vectors:

    (2, 1, 3.5)ᵗ ,    (0, 1, 0)ᵗ ,    (π/2, 100, 9)ᵗ .

It's harder to graph these objects, but it can be done using three-D sketches (see figure 2.5).

[Exercise. Repeat some of the examples and exercises we did for R² for this richer vector space. In particular, define vector addition, scalar multiplication, dot products, etc.]
2.2.4 A Preview of Complex Vector Spaces

We'll roll out complex vector spaces in their full glory when we come to the Hilbert space lesson, but it won't hurt to put our cards on the table now. The most useful vector spaces in this course will be ones in which the scalars are the complex numbers. The simplest example is C².

Definition. C² is the set of all ordered pairs of complex numbers,

    C²  ≡  { (x, y)ᵗ  |  x, y ∈ C } .

You can verify that this is a vector space and guess its dimension and how the inner product is defined. Then check your guesses online or look ahead in these lectures. All I want to do here is introduce C² so you'll be ready to see vectors that have complex components.
2.3 Basis

If someone were to ask, what is the single most widely used mathematical concept in quantum mechanics and quantum computing?, I would answer, the vector basis. (At least that's my answer today.) It is used or implied at every turn, so much so that one forgets it's there. But it is there, always. Without a solid understanding of what a basis is, how it affects our window into a problem, and how different bases relate to one another, one cannot participate in the conversation. It's time to check off the what is a basis box.

Incidentally, we are transitioning now to properties that are not axioms. They are consequences of the axioms, i.e., we can prove them.
2.3.1 Linear Combinations

Whenever you have a finite set of two or more vectors, you can combine them using both scalar multiplication and vector addition. For example, with two vectors, v, w, and two scalars, a and b, we can form

    a v + b w .
Mathematicians call it a linear combination of v and w. Physicists call it a superposition of the two vectors. Superposition or linear combination, the idea is the same. We are weighting each of the vectors, v and w, by scalar weights, a and b, respectively, then adding the results. In a sense, the two scalars tell the relative amounts of each vector that we want in our result. (However, if you lean too heavily on that metaphor, you will find yourself doing some fast talking when your students ask you about negative numbers and complex scalars, so don't take it too far.)

The concept extends to sets containing more than two vectors. Say we have a finite set of n vectors {vₖ} and corresponding scalars {cₖ}. Now a linear combination would be expressed either long-hand or using summation notation,

    u  =  c₀v₀ + c₁v₁ + . . . + c_{n−1}v_{n−1}  =  Σ_{k=0}^{n−1} cₖ vₖ .
2.3.2 Spanning Sets and Bases
One can find a subset of the vectors (usually a very tiny fraction of them compared
to the infinity of vectors in the space) which can be used to generate all the other
vectors through linear combinations. When we have such a subset that is, in a sense,
minimal (to be clarified), we call it a basis for the space.
The Natural Basis

In R², only two vectors are needed to produce, through linear combination, all the rest. The most famous basis for this space is the standard (or natural or preferred) basis, which I'll call A for now,

    A  =  { x̂ , ŷ }  =  { (1, 0)ᵗ , (0, 1)ᵗ } .
For example, the vector (15, 3)ᵗ can be expressed as the linear combination

    (15, 3)ᵗ  =  15 x̂ + 3 ŷ .

Note. In the diagram that follows, the vector pictured is not intended to be (15, 3)ᵗ. (See figure 2.6.)
Every vector in R² can be captured this way, e.g., 3x̂ + 2ŷ, so A spans R². Not every small set spans, however. Consider the following pair of vectors in R³,

    A′′  =  { (1, 0, 0)ᵗ , (0, 1, 0)ᵗ } .
Since we cannot express (3, 1, 2)ᵗ as a linear combination of these two, i.e.,

    (3, 1, 2)ᵗ  ≠  x (1, 0, 0)ᵗ + y (0, 1, 0)ᵗ

for any x, y ∈ R, A′′ does not span the set of vectors in R³. In other words, A′′ is not complete.
[Exercise. Find two vectors, such that if either one (individually) were added to A′′, the augmented set would span the space.]

[Exercise. Find two vectors, such that, if either one were added to A′′, the augmented set would still fail to be a spanning set.]
Definition of Basis
We now know enough to formally define a vector space basis.
Basis. A basis is a set of vectors that is linearly independent and
complete (spans the space).
Another way to phrase it is that a basis is a minimal spanning set, meaning we can't remove any vectors from it without losing the spanning property.
Theorem. All bases for a given vector space have the same number of elements.
This is easy to prove (you can do it as an [exercise] if you wish). One consequence
is that all bases for R2 must have two vectors, since we know that the natural basis
has two elements. Similarly, all bases for R3 must have three elements.
Definition. The dimension of a vector space is the number of elements in any
basis.
[Exercise. Describe some vector space (over R) that is 10-dimensional. Hint: The set of five-tuples, { (x₀, x₁, x₂, x₃, x₄)ᵗ | xₖ ∈ R }, forms a five-dimensional vector space over R.]
Here is an often-used fact that we should prove.

Theorem. If a set of vectors {vₖ} is orthonormal, it is necessarily linearly independent.

Proof. We'll assume the theorem is false and arrive at a contradiction. So, we pretend that {vₖ} is an orthonormal collection, yet one of them, say v₀, is a linear combination of the others,

    v₀  =  Σ_{k=1}^{n−1} cₖ vₖ ,

where not all the cₖ can be 0, since that would imply that v₀ = 0, which cannot be a member of any orthonormal set (remember from earlier?). By orthonormality v₀ · vₖ = 0 for all k ≠ 0, but of course v₀ · v₀ = 1, so we get the following chain of equalities:

    1  =  v₀ · v₀  =  v₀ · ( Σ_{k=1}^{n−1} cₖ vₖ )  =  Σ_{k=1}^{n−1} cₖ ( v₀ · vₖ )  =  0 ,

a contradiction.    QED
Notice that even if the vectors were merely orthogonal, which is weaker than being orthonormal, they would still have to be linearly independent.
Alternate Bases

There are many different pairs of vectors in R² which can be used as a basis. Every basis has exactly two vectors (by our theorem). Here is an alternate basis for R²:

    B  =  { b₀ , b₁ }  =  { (1, 1)ᵗ , (4, −1)ᵗ } .

For example, the vector (15, 3)ᵗ can be expressed as the linear combination

    (15, 3)ᵗ  =  (27/5) b₀ + (12/5) b₁ .

[Exercise. Multiply this out to verify that the coefficients, 27/5 and 12/5, work for that vector and basis.]
And here is yet a third basis for R²:

    C  =  { c₀ , c₁ }  =  { (√2/2, √2/2)ᵗ , (−√2/2, √2/2)ᵗ } .

This time the same vector is expressed as

    (15, 3)ᵗ  =  9√2 c₀ − 6√2 c₁ .

[Exercise. Multiply this out to verify that the coefficients, 9√2 and −6√2, work for that vector and basis.]

Note. In the diagrams, the vector pictured is not intended to be (15, 3)ᵗ. (See figures 2.7 and 2.8.)
Orthonormal Bases

The bases A and C are special: their vectors are mutually perpendicular and have unit length, a combination known as orthonormality. Not so for B:

    x̂ · x̂  =  ŷ · ŷ  =  1 ,    but    b₀ · b₀  =  2  ≠  1 ,    ([Exercise])
    x̂ · ŷ  =  0 ,              but    b₀ · b₁  =  3  ≠  0 .

If they are mutually perpendicular but do not have unit length, they are almost as useful. Such a basis is called an orthogonal basis. If you get a basis to be orthogonal, your hard work is done; you simply divide each basis vector by its norm in order to make it orthonormal.
2.3.3 Coordinates of Vectors

When we expand a vector v along the natural basis,

    v  =  vₓ x̂ + v_y ŷ ,

its coordinates relative to the natural basis are vₓ and v_y. In other words, the coordinates are just weighting factors needed to expand that vector in that given basis.

Vocabulary. Sometimes the term coefficient is used instead of coordinate.

If we have a different basis, like B = {b₀, b₁}, and we expand v along that basis,

    v  =  v₀ b₀ + v₁ b₁ ,

then v₀ and v₁ are its coordinates relative to B.

Computing coordinates is especially easy in an orthonormal basis like C. Write v = α₀ c₀ + α₁ c₁ and dot both sides with c₀:

    c₀ · v  =  c₀ · ( α₀ c₀ + α₁ c₁ )
            =  α₀ ( c₀ · c₀ ) + α₁ ( c₀ · c₁ )
            =  α₀ (1) + α₁ (0)  =  α₀ .
[Exercise. Justify each equality in this last derivation using the axioms of vector
space and assumption of orthonormality of C.]
[Exercise. This trick works almost as well with an orthogonal basis which does
not happen to be orthonormal. We just have to add a step; when computing the
expansion coefficient for the basis vector, ck , we must divide the dot product by |ck |2 .
Prove this and give an example.]
Thus, dotting by c₀ produced the expansion coefficient α₀. Likewise, to find α₁, just dot v with c₁.
For the specific vector v = (15, 3)ᵗ and the basis C, let's verify that this actually works for, say, the 0th expansion coefficient:

    c₀ · v  =  (√2/2, √2/2)ᵗ · (15, 3)ᵗ  =  15 (√2/2) + 3 (√2/2)  =  9√2   ✓

The reason we could add the check-mark is that this agrees with the expression we had earlier for the vector v expanded along the C basis.
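In code, each coordinate is one dot product. A NumPy sketch of mine reproduces both C-basis coefficients of v = (15, 3)ᵗ and reassembles the vector:

    import numpy as np

    v = np.array([15.0, 3.0])
    c0 = np.array([np.sqrt(2) / 2,  np.sqrt(2) / 2])   # the orthonormal basis C
    c1 = np.array([-np.sqrt(2) / 2, np.sqrt(2) / 2])

    alpha0 = c0 @ v      # dot with c0 ->  9*sqrt(2) ~ 12.7279
    alpha1 = c1 @ v      # dot with c1 -> -6*sqrt(2) ~ -8.4853
    print(alpha0, alpha1)

    # Reassemble v from its coordinates to confirm the expansion.
    print(np.allclose(alpha0 * c0 + alpha1 * c1, v))   # True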
Remark. We actually did not have to know things in terms of the natural basis A in order for this to work. If we had known the coordinates of v in some other basis (it doesn't even have to be orthonormal), say B, and we also knew coordinates of the C basis vectors with respect to B, then we could have done the same thing.

[Exercise. If you're up for it, prove this.]
2.3.4 Does the Dot Product Formula Depend on the Basis?

The definition of inner product assumed that the n-tuples were, themselves, the vectors and not some coordinate representation expanded along a specific basis. Now that we know a vector can have different coordinates relative to different bases, we ask is the inner product formula that I gave independent of basis?, i.e., can we use the coordinates (dₓ, d_y) and (eₓ, e_y) relative to some basis, rather than the numbers in the raw vector ordered pair, to compute the inner product using the simple dₓeₓ + d_y e_y? In general the answer is no.
[Exercise. Compute the length of the vector (15, 3)ᵗ by dotting it with itself. Now do the same thing, but this time compute using that vector's coordinates relative to the three bases A, B and C through use of the imputed formula given above. Do you get the same answers? Which bases' coordinates give the right inner-product answer?]
However, when working with orthonormal bases, the answer is yes, one can use
coordinates relative to that basis, instead of the pure vector coordinates, and apply
the simple formula to the coordinates.
[Exercise. Explain the results of the last exercise in light of this new assertion.]
[Exercise. See if you can prove the last assertion.]
Note. There is a way to use non-orthonormal basis coordinates to compute dot products, but one must resort to a more elaborate matrix multiplication, the details of which we shall skip (but it's a nice [Exercise] should you wish to attempt it).
2.4 Subspaces
The set of vectors that are scalar multiples of a single vector, such as

    { a (2, 1)ᵗ  |  a ∈ R }  =  { (a, .5a)ᵗ  |  a ∈ R } ,

is, itself, a vector space. It can be called a subspace of the larger space R². As an exercise, you can confirm that any two vectors in this set, when added together, produce a third vector which is also in the set. Same with scalar multiplication. So the subspace is said to be closed under vector addition and scalar multiplication. In fact, that's what we mean by a subspace.
Vector Subspace. A subspace of a (parent) vector space is a subset of
the parent vectors that is closed under the vector/scalar operations.
2.5 Higher-Dimensional Vector Spaces: Rⁿ

What we just covered lays the groundwork for more exotic yet commonly used vector spaces in physics and engineering. That's why I first listed the ideas and facts in the more familiar setting of R² (and R³). If you can abstract these ideas, rather than just memorize their use in R² and R³, it will serve you well, even in this short quantum computing course.
Axioms

We can extend directly from ordered pairs or triples to ordered n-tuples for any positive integer n:

    Rⁿ  ≡  { (x₀, x₁, . . . , x_{n−1})ᵗ  |  xₖ ∈ R, k = 0 to n−1 } .

Set n = 10 and you are thinking about a 10-dimensional space.
The underlying field for Rⁿ is the same real number system, R, that we used for R². The components are named x₀, x₁, x₂, x₃, . . ., instead of x, y, z, . . . (although, when you deal with relativity or most engineering problems, you'll use the four-dimensional symbols x, y, z and t, the last meaning time).

[Exercise. Define vector addition and scalar multiplication for Rⁿ following the lead set by R² and R³.]
Inner Product

The dot product is defined as you would expect. If

    a  =  (a₀, a₁, . . . , a_{n−1})ᵗ    and    b  =  (b₀, b₁, . . . , b_{n−1})ᵗ ,

then

    a · b  ≡  Σ_{k=0}^{n−1} aₖ bₖ .
The natural basis also generalizes,

    A  =  { (1, 0, . . . , 0)ᵗ ,  (0, 1, 0, . . . , 0)ᵗ ,  . . . ,  (0, . . . , 0, 1)ᵗ }
       =  { x̂₀ , x̂₁ , . . . , x̂_{n−1} } .

Any alternate basis,

    B  =  { b₀ , b₁ , . . . , b_{n−1} } ,

for Rⁿ will therefore have n vectors. The orthonormal property would be satisfied by the alternate basis B if and only if

    bₖ · bⱼ  =  δₖⱼ .
We defined the last symbol, kj , in our complex arithmetic lesson, but since many of
our readers will be skipping one or more of the early chapters, Ill reprise the definition
here.
Kronecker Delta. kj , the Kronecker delta, is the mathematical way
to express anything that is to be 0 unless the index k = j, in which case
it is 1,
(
1, if k = j
.
kj =
0, otherwise
Expressing any vector, v, in terms of a basis, say B, looks like
v =
n
X
k bk ,
k=1
and all the remaining properties and definitions follow exactly as in the case of the
smaller dimensions.
Computing Expansion Coefficients
I explained how to compute the expansion coefficient of an arbitrary vector along
an orthonormal basis. This move is done so frequently in a variety of contexts in
quantum mechanics and electromagnetism that it warrants being restated here in the
more general cases.
When the basis is orthonormal, we can find the expansion coefficients for a vector
v =
n
X
k bk ,
k=1
by dotting v with the basis vectors one-at-a-time. In practical terms, this means we
dot with bj to get j ,
bj v = bj
n
X
(k bk ) =
k=1
n
X
k (bj bk ) =
k=1
n
X
k=1
n
X
k=1
70
bj (k bk )
k jk = j .
2.6
More Exercises
71
Chapter 3
Matrices
3.1
If a qubit is the replacement for the classical bit, what replaces the classical logic
gate? I gave you a sneak peek at the answer in the introductory lesson. There, I
first showed you the truth table of the conventional logic gate known to all computer
science freshmen as the AND gate (symbol ).
x
xy
Then, I mentioned that in the quantum world these truth tables get replaced by
something more abstract, called matrices. Like the truth table, a matrix contains
the rules of engagement when a qubit steps into its foyer. The matrix for a quantum
operator that well study later in the quarter is
1 1
1
1
1
1 1 1 1 ,
1 1 1
21
1 1 1
1
which represents a special gate called the second order Hadamard operator. Well
meet that officially in a few weeks.
Our job today is to define matrices formally and learn the specific ways in which
they can be manipulated and combined with the vectors we met the previous chapter.
72
3.2
Definitions
Definition of a Matrix. A matrix is a rectangular array of numbers,
variables or pretty much anything. It has rows and columns. Each
matrix has a particular size expressed as [# rows] [# columns],
for example, 2 2, 3 4, 7 1, 10 10, etc.
1 2 3 4
5 6 7 8
9 10 11 12
A 4 2 matrix (call it B) might be:
1
3
5
7
3.2.1
2
4
6
8
Notation
column 1
3.3
Matrix Multiplication
Matrix multiplication will turn out to be sensitive to order. In math lingo, it is not
a commutative operation,
AB 6= BA .
73
Therefore, its important that we note the order of the product in the definition.
First, we can only multiply the matrices A (n p) and B (q m) if p = q. So we will
only define the product AB for two matrices of sizes, A (n p) and B (p m). The
size of the product will be n m. Symbolically,
(n p) (p m)
(n m) .
Note that the inner dimension gets annihilated, leaving the outer dimensions to
determine the size of the product.
3.3.1
Row Column
We start by defining the product of a row by a column, which is just a short way of
saying a (1 l) matrix times an (l 1) matrix, i.e., two 1-dimensional matrices. It
is the simple dot product of the two entitles as if they were vectors,
5
6
1, 2, 3, 4
7 = (1)(5) + (2)(6) + (3)(7) + (4)(8)
8
= 18 .
This is the definition of matrix multiplication in the special case when the first is a
column vector and the second is a row vector. But that definition is used repeatedly
to generate the product of two general matrices, coming up next. Heres an example
when the vectors happen to have complex numbers in them.
5
6
1, i, 3, 2 3i
4i = (1)(5) + (i)(6) + (3)(4i) + (2 3i)(8i)
8
= 19 + 10i .
[For Advanced Readers. As you see, this is just the sum of the simple products
of the coordinates, even when the numbers are complex. For those of you already
familiar with complex inner products (a topic we will cover next week), please note
that this is not a complex inner product; we do not take the complex conjugate of either
vector. Even if the matrices are complex numbers, we take the ordinary complex
product of the corresponding elements and add them.]
3.3.2
1-dimensional matrices, we must define the (kl)th element of the answer matrix to
be the dot product of the kth row of A with the lth column of B. Lets look at it
graphically before we see the formal definition.
We illustrate the computation of the 1-1 element of an answer matrix in Figure 3.1
and the computation of the 2-2 element of the same answer matrix in Figure 3.2.
Figure 3.1: Dot-product of the first row and first column yields element 1-1
Figure 3.2: Dot-product of the second row and second column yields element 2-2
The Formal Definition of Matrix Multiplication. If A is an n p matrix,
and B is a p m matrix, then C = AB is an n m matrix, whose (kl)th element is
given by
Ckl = (AB)kl
p
X
j=1
where k = 1, . . . , n and l = 1, . . . , m.
75
Akj Bjl ,
(3.1)
[Exercise. Fill in the rest of the elements in the product matrix, C, above.]
[Exercise. Compute the products
1 2 1
1 2 3
2 0 4 3 1 1
5 5 5
0 0 1
and
1 2
1
2 0 1
2 1 1 4 3 1
0 0
5
0 5 0
1 1
3
1
.
1
1
[Exercise. Compute the first product above in the opposite order. Did you get
the same answer?]
While matrix multiplication may not be commutative, it is associative,
A(BC) = (AB)C,
a property that is used frequently.
3.3.3
1 2i 3
4
4
3 1 1 2.5i = 11 + 2.5i .
0 0 1
1
1
Some Observations
Position. You cannot put the vector on the left of the matrix; the dimensions
would no longer be compatible. You can, however, put the transpose of the
vector on the left of the matrix, vt A. That does make sense and it has an
answer (see exercise, below).
Linear Transformation. Multiplying a vector by a matrix produces another
vector. It is our first example of something called a linear transformation,
which is a special kind of mapping that sends vectors to vectors. Our next
lecture covers linear transformations in depth.
76
[Exercise. Using the v and A from the last exercise, compute the product vt A.
[Exercise. Using the same A as above, let the vector w (1, 2.5i, 1)t and
compute the product Aw, then
compute A(v + w), and finally
compare A(v + w) with Av + Aw. Is this a coincidence? ]
3.4
Matrix Transpose
2
t
i
2, i, 3
=
3
t
1
2 = (1, 2, 3)
3
That was a special case of the more general operation, namely, taking the transpose
of an entire matrix. The transpose operation creates a new matrix whose rows are
the columns of the original matrix (and whose columns are the rows of the original).
More concisely, if A is the name of our original n m matrix, its transpose, At , is the
m n matrix defined by
( Akl )t
( Alk ) .
1 2
5 6i
9 10
1
2
2 1
5
0
1
3 4
2
7 8 =
3
11 12
4
t
1
0 i
2
1 4 =
0
5 0
i
t
5 9
6i 10
7 11
8 12
2 5
1 0
1 5
4 0
[Exercise. Make up two matrices, one square and one not square, and show the
transpose of each.]
77
3.5
Matrices can be added (component-wise) and multiplied by a scalar (apply the scalar
to all nm elements in the matrix). Im going to let you be the authors of this section
in two short exercises.
[Exercise. Make these definitions precise using a formula and give an example of
each in a 3 3 case.]
[Exercise. Show that matrix multiplication is distributive over addition and both
associative and commutative in combination with scalar multiplication, i.e.,
A (c B1 + B2 )
3.6
=
=
c (AB1 ) + (AB2 )
(AB2 ) + c (AB1 ) . ]
Zero. A matrix whose elements are all 0 is called a zero matrix, and can be written
as 0 or ( 0 ), e.g.,
0 0
0 0 0
0 0
0 = 0 0 0
or
(0) =
0 0
0 0 0
0 0
Clearly, when you add the zero matrix to another matrix, it does not change anything
in the other matrix. When you multiply the zero matrix by another matrix, including
a vector, it will squash it to all 0s.
( 0 )A = ( 0 )
and
( 0 )v = 0
1 0 0 0
1 0 0
0 1 0 0
1 = 0 1 0
or
1 =
0 0 1 0
0 0 1
0 0 0 1
The identity matrix has the property that when you apply it to (multiply it with)
78
1 0 0 0
1
2i 0 1
1
2i 0 1
0 1 0 0 2 1 1
4
4
= 2 1 1
0 0 1 0 5
5
0
5
0
0
5
0
0 0 0 1
3 i 2 2 2
3 i 2 2 2
1 0 0 0
1
2i 0 1
1
2i 0 1
2 1 1
4
4
0 1 0 0 = 2 1 1
5
5
0
5
0 0 0 1 0
0
5
0
3 i 2 2 2
0 0 0 1
3 i 2 2 2
Notice that multiplication by a unit matrix has the same (non) effect whether it
appears on either side of its co-multiplicand. The rule for both matrices and vectors
is
1M = M 1 = M,
1 v = v and
vt 1 = vt .
In words, the unit matrix is the multiplicative identity for matrices.
3.7
Determinants
Associated with every square matrix is a scalar called its determinant. There is very
little physics, math, statistics or any other science that we can do without a working
knowledge of the determinant. Lets check that box now.
3.7.1
Determinant of a 2 2 Matrix
=
=
(i)(5i) (1 + i)( 6)
5
6 i 6
(5 6) i 6 .
79
3.7.2
Determinant of a 3 3 Matrix
Well give an explicit definition of a 3 3 matrix and this will suggest how to proceed
to the n n case.
a b c
d e f a e f b d f + c d e
g h
h i
g i
g h i
= a minor of a
b minor of b
+ c minor of c
(Sorry, I had to use the variable name i, not for the 1, but to mean the 3-3 element
of the matrix, since I ran out of reasonable letters.) The latter defines the minors of
a matrix element to be the determinant of the smaller matrix constructed by crossing
out that elements row and column (See Figure 3.3.)
80
3.7.3
Determinant of an n n Matrix
The 33 definition tells us to proceed recursively for any square matrix of size n.
We define its determinant as an alternating sum of the first row elements times their
minors,
det(A) = A
A11 minor of A11
A12 minor of A12
+ A13 minor of A13
+
n
X
k+1
=
(1) A1k minor of A1k .
k=1
I think you know whats coming. Why row 1 ? No reason at all (except that every
square matrix has a first row). In the definition above we would say that we expanded
the determinant along the first row, but we could have expanded it along any row
or any column, for that matter and gotten the same answer.
However, there is one detail that has to be adjusted if we expand along some other
row (or column). The expression
(1)k+1
has to be changed if we expand along the jth row (column) rather than the first row
(column). The 1 above becomes j,
(1)k+j ,
giving the formula
det(A)
n
X
k+j
(1) Ajk minor of Ajk ,
k=1
3.7.4
Determinants of Products
det(AB)
det(A) det(B) .
We dont need to prove this or do exercises, but make a mental note as we need it in
the following sections.
81
3.8
Matrix Inverses
Since we have the multiplicative identity (i.e., 1) for matrices we can ask whether,
given an arbitrary square matrix A, the inverse of A can be found. That is, can we
find a B such that AB = BA = 1? The answer is sometimes. Not all matrices have
inverses. If A does have an inverse, we say that A is invertible or non- singular, and
write its inverse as A1 . Shown in a couple different notations, just to encourage
flexibility, a matrix inverse must satisfy (and is defined by)
M 1 M
A1 A
=
=
M M 1 = 1 or
A A1 = I .
Here is one of two little theorems that well need when we introduce quantum
mechanics.
Little Inverse Theorem A. If M v = 0 for some non-zero vector
v, then M has no inverse.
Proof by Contradiction. Well assume that M 1 exists and reach a contradiction. Let v be some non-zero vector that M sends to 0. Then,
1
v = Iv =
M M v = M 1 (M v) = M 1 0 = 0,
contradicting the choice of v 6= 0.
QED
3.9
3.9.1
x1
4x2 x5 + 3.2x1
+ x2 + x3 + x4 + x5
10x4 22x1
=
=
=
19 + 5x3
1
85 x2 + x1
To solve the system uniquely, one needs to have exactly at least as many equations as
there are unknowns. So the above system does not have a unique solution (although
you can find some simpler relationships between the variables if you try). Even if you
do have exactly the same number of equations as unknowns, there still may not be a
unique solution since one of the equations might to cite one possibility be a mere
multiple of one of the others, adding no new information. Each equation has to add
new information it must be independent of all the others.
Matrix Equations
We can express this system of equations concisely using the language of matrix multiplication,
x1
19
3.2 4 5 0 1
x
2
1 1 1 1 1 x3 = 1 .
23 1 0 10 0 x4
85
x5
If we were able to get two more relationships between the variables, independent of
these three, we would have a complete system represented by a square 5 5 matrix
on the LHS. For example,
3.2 4 5 0 1
x1
19
1
1 1
1
1 x2
1
23 1 0 10
0
x3 = 85 .
2.5 .09 50
0
1 x4
2 0 .83 1 17
x5
4
Setting M = the 5 5 matrix on the left, v = the vector of unknowns, and c = the
vector of constants on the right, this becomes
Mv
=
83
c.
How can we leverage the language of matrices to get a solution? We want to know
what all the xk are. Thats the same as having a vector equation in which v is all
alone on the LHS,
v
d,
and the d on the RHS is a vector of constants. The scent of a solution should be
wafting in the breeze. If M v = c were a scalar equation we would divide by M .
Since its a matrix equation there are two differences:
1. Instead of dividing by M , we multiply each side of the equation (on the left) by
M 1 .
2. M 1 may not even exist, so this only works if M is non-singular.
If M is non-singular, and we can calculate M 1 , we apply bullet 1 above,
M 1 M v = M 1 c
1 v = M 1 c
v = M 1 c
This leaves us with two action items.
1. Determine whether or not M is invertible (non-singular).
2. If it is, compute the inverse, M 1 .
We know how to do item 1; if det(M ) is non-zero, it can be inverted. For item 2,
we have to learn how to solve a system of linear equations, something we are perfectly
situated to do given the work we have already done in this lecture.
3.9.2
Cramers Rule
Todays technique, called Cramers rule, is a clean and easy way to invert a matrix,
but not used much in practice due to its proclivity toward round-off error and poor
computer performance. (Well learn alternatives based on so-called normal forms and
Gaussian elimination when we get into Simons and Shors quantum algorithms). But
Cramers rule is very cute, so lets get enlightened.
Cramers Rule. A system of linear equations,
Mv
c,
(like the 5 5 system above) can be solved uniquely for the unknowns
xk det(M ) 6= 0. In that case each xk is given by
det Mk
,
det M
where Mk is the matrix M with its kth column replaced by the constant
vector c (see Figure 3.4).
xk =
84
167077 6= 0 .
=
=
=
=
=
635709
199789
423127
21996.3
176281
635709
167077
3.80488 .
[Exercise. Compute the other four xk and confirm that any one of the equations
in the original system holds for these five values.]
Computing Matrix Inverses Using Cramers Rule
As noted, Cramers rule is not very useful for writing software, but we can apply it
to the problem of finding a matrix inverse, especially when dealing with small, 2 2,
85
Our goal is to solve this matrix equation. We do so in two parts. First, we break this
into an equation which only involves the first column of the purported inverse,
a b
e
1
=
.
c d
g
0
This is exactly the kind of 2-equation linear system we have already conquered.
Cramers rule tells us that it has a solution (since det(M ) 6= 0) and the solution
is given by
1 b
det
0 d
e =
det M
and
a 1
det
c 0
g =
.
det M
The same moves can be used on the second column of M 1 , to solve
a b
f
0
=
.
c d
h
1
[Exercise. Write down the corresponding quotients that give f and h.]
Example. We determine the invertibility of M and, if invertible, compute its
inverse, where
12 1
M =
.
15 1
The determinant is
12 1
15 1
12 15
86
27 6= 0,
12 1
det
15 0
det M
=
15
,
27
which we dont simplify ... yet. Continuing on to solve for f and h results in the final
inverse matrix
1 1 1
1/27 1/27
1
,
M
=
=
15/27 12/27
27 15 12
and we can see why we did not simply the expression of g.
[Exercise. For the above example, confirm that M M 1 = Id.]
[Exercise. Compute the inverse of
M
1 2
3 4
1 0 2
3 4 0
2 0 0
is non-singular, then compute its inverse using Cramers rule. Check your work.]
Completing the Proof of the Big Inverse Theorem
Remember this
Big Inverse Theorem. A matrix is non-singular (invertible) its determinant 6= 0.
87
?
Your proved in an exercise. Now you can do in another exercise.
[Exercise. Using Cramers rule as a starting point, prove that
det(M ) 6= 0 M is non-singular .
Hint. We just did it in our little 2 2 and 3 3 matrices. ]
88
Chapter 4
Hilbert Space
4.1
4.1.1
|0i + |1i .
In the sneak peek given in the introduction I leaked the meaning of the two numbers
and . They embody the probabilistic nature of quantum bits. While someone might
have prior knowledge that a qubit with the precise value |0i + |1i (whatever
that means) was sitting in a memory location, if we tried to read the value of that
location that is, if we measured it we would see evidence of neither nor , but
instead observe only a classical bit; our measurement device would register one of two
possible outcomes: 0 or 1. Yet and do play a role here. They tell us our
measurement would be
0
with probability
||2
with probability
||2 .
and
Obviously, theres a lot yet to understand about how this unpredictable behavior can
be put to good use, but for today we move a step closer by adding the following clue:
The qubit shown above is a vector having unit length, and the , are
its coordinates in what we shall see is the vector spaces natural basis
{ |0i , |1i }.
In that sense, a qubit seems to correspond, if not physically at least mathematically,
to a vector in some vector space where and are two scalars of the system.
89
4.1.2
4.2
Since we are secure enough in the vocabulary of real vector spaces, theres no need
to dilly-dally around with 2- or 3-dimensional spaces. We can go directly to the fully
general n-dimensional complex space, which we call Cn .
Its scalars are the complex numbers, C, and its vectors are n-tuples of complex
numbers,
c
0
c
1
n
C
.. , ck C, k = 0 to n 1
n1
Except for the inner product, everything else works like the real space Rn with
the plot twist that the components are complex. In fact, the natural basis is actually
identical to Rn s basis.
[Exercise. Prove that the same natural basis works for both Rn and Cn .]
There are, however, other bases for Cn which have no counterpart in Rn , since
their components can be complex.
[Exercise. Drum up a basis for Cn that has no real counterpart. Then find a
vector in Cn which is not in Rn but whose coordinates relative to this basis are all
real.]
90
4.3
So, whats wrong with Rn s dot product in the complex case? If we were to define it
as we do in the real case, for example as in R2 ,
x1
x2
= x1 x 2 + y 1 y 2 ,
y1
y2
wed have a problem with lengths of vectors which we want to be 0. Recall that
lengths are defined by dotting a vector with itself (Ill remind you about the exact
details in a moment). To wit, for a complex a,
aa
?
=
n1
X
ak 2
k=0
is not necessarily real, never mind non-negative (try a vector whose components are
all 1 + i). When it is real, it could still be negative:
5i
5i
= 25 + 9 = 16 .
3
3
We dont want to have complex, imaginary or negative lengths typically, so we need
a different definition of dot product for complex vector spaces. We define the product
ab
n1
X
ak b k
n1
X
(ak ) bk .
k=0
k=0
When defined in this way, some authors prefer the term inner product, reserving
dot product for the real vector space analog (although some authors say complex dot
product, so you have to adapt.)
Notation. An alternative notation for the (complex) inner product is sometimes
used,
ha, bi ,
and in quantum mechanics, youll always see
ha | bi .
Caution #1. The complex inner product is not commutative, i.e.,
ha | bi
6=
91
hb | ai .
hb | ai .
[Exercise. Show that the definition of complex inner product implies the above
result.]
Caution #2. The complex inner product can be defined by conjugating the bk s,
rather than the ak s, and this would produce a different result, one which is the complex
conjugate of our defined inner product. However, physicists and we conjugate the
left-hand vectors coordinates because it produces nicer looking formulas.
Example. Let
1+i
a =
3
1 2i
b =
.
5i
and
Then
ha | bi
=
=
=
=
(1 + i) (1 2i) + (3 ) 5i
(1 i) (1 2i) + (3) 5i
(1 i 2i 2) + 15i
1 + 12i .
ha | bi + ha | b0 i ,
and the same is true in the first position. For the record, we should collect two more
properties that apply to the all-important complex inner product.
It is linear in the second position. If c is a complex scalar,
c ha | bi
ha | cbi .
hc a | bi .
4.3.1
The inner product on Cn confers it with a metric, that is, a way to measure things.
There are two important concepts that emerge:
1. Norm. The norm of vector, now repurposed to our current complex vector
space, is defined by
v
v
u n1
u n1
uX
uX
2
t
kak
|ak |
= t (ak ) ak .
k=0
k=0
With the now correct definition of inner product we get the desired behavior when
we compute a norm,
kak
ha | ai
n1
X
(ak ) ak
k=0
n1
X
|ak |2
k=0
0.
Since the length (norm or modulus) of a, kak, is the non-negative square root of this
value, once again all is well: lengths are real and non-negative.
More Notation. You may see the modulus for a vector written in normal (not
bold) face, as in
a
kak .
kak .
=
11 .
[Exercise. Compute the norm of b from that same example, above.]
93
n1
X
ck ak .
k=1
a contradiction.
4.3.2
0,
k=1
QED
Expansion Coefficients
n1
X
j hbk | bj i ,
j=0
kj ,
the last sum collapses to the desired k . However, we had to be careful to place our v
on the right side of the inner product. Otherwise, we would not get the kth expansion
coefficient, but its .
[Exercise. Fill in the blank].
Example. In C we expand v =
A
{
e0 ,
e1 }
1+i
1i
=
94
along the natural basis A,
1
0
,
.
0
1
(We do this easy example because the answer is obvious: the coordinates along A
should match the pure vector components, otherwise we have a problem with our
technique. Lets see ...)
We seek
v0
.
v1 A
The dotting trick says
v0
=
=
h
e0 | vi = (e00 ) v0 + (e01 ) v1
1 (1 + i) + 0 (1 i) = 1 + i ,
v1
=
=
h
e1 | vi = (e10 ) v0 + (e11 ) v1
0 (1 + i) + 1 (1 i) = 1 i ,
and
so
1+i
1i
=
1+i
1i
,
A
as expected (X). Of course that wasnt much fun since natural basis vector components were real (0 and 1), thus there was nothing to conjugate. Lets do one with a
little crunch.
1+i
2
Example. In C we expand the same v =
along the basis B,
1i
B
n
o
0 , b
1
b
2/2
i
2/2
,
.
2/2
i 2/2
First, we confirm that this basis is orthonormal (because the dot-product trick only
works for orthonormal bases).
D E
0 b
1
b
= (b00 ) b10 + (b01 ) b11
b0 b
0
=
=
=
and
D
E
1 b
1
b
=
=
=
E
0 v
b
= (b00 ) v0 + (b01 ) v1
( 2/2) (1 + i) + ( 2/2) (1 i)
2,
D
=
=
=
and
v1
b1 v
= (b10 ) v0 + (b11 ) v1
(i 2/2) (1 + i) + (i 2/2) (1 i)
2,
D
=
=
=
so
1+i
1i
=
2, .
2, B
2 b0 +
2 b1
=
=
=
2/2
i
2/2
2
+
2
2/2
i 2/2
1
i
+
1
i
1+i
X
1i
[Exercise. Work in C2 and use the same v as above to get its coordinates relative
to the basis C,
1 1+i
1
2+i
C {
c0 ,
c1 } =
,
.
1
3
15 3 + i
Before starting, demonstrate that this basis is orthonormal.]
96
4.4
4.4.1
Hilbert Space
Definitions
97
Figure 4.1: The Cauchy sequence 1 k1 k=2 has its limit in [0, 1]
Figure 4.2: The Cauchy sequence 1 k1 k=2 does not have its limit in (0, 1)
Notation
When I want to emphasize that were working in a Hilbert space, Ill use the letter
H just as I use terms like R3 or Cn to denote real or complex vector spaces. H could
take the form of a C2 or a Cn , and I normally wont specify the dimension of H until
we get into tensor products.
4.4.2
98
form a vector space. We can define an inner product for any two such functions, f
and g, using
Z b
hf | gi
f (x) g(x) dx ,
a
and this inner-product will give a distance and norm that satisfy the completeness
criterion. Hilbert spaces very much like these are used to model the momentum and
position of sub-atomic particles.
4.4.3
We will be making implicit use of the following consequences of real and complex
inner-product spaces as defined above, so its good to take a moment and meditate
on each one.
Triangle Inequality
Both the real dot product of Rn and the complex inner product of Cn satisfy the
triangle inequality condition: For any vectors, x, y and z,
dist(x, z)
dist(x, y) + dist(y, z) .
Pictured in R2 , we can see why this is called the triangle inequality. (See Figure 4.3).
[Exercise. Pick three vectors in C3 and verify that the triangle inequality is
satisfied. Do this at least twice, once when the three vectors do not all lie on the
same complex line, { x C } and once when all three do lie on the same line
and y is between x and z.]
Cauchy-Schwarz Inequality
A more fundamental property of inner-product spaces is the Cauchy-Schwarz inequality which says that any two vectors, x and y, of an inner-product space satisfy
|hx | yi|2
99
kxk2 kyk2 .
4.5
4.5.1
so we may as well choose a vector of length one (kvk = 1) that is on the same ray as
either (or both) of those two; we dont have to distinguish between any of the infinite
number of vectors on that ray.
This equivalence of all vectors on a given ray makes the objects in our mathematical model not points in H, but rays through the origin of H.
= a
b a
101
[Exercise. Elaborate.]
Dividing any one of them by its norm will produce a unit vector (vector with
modulus one) which also represents the same ray. Using the simplest representative,
2
2
!
2
i
i
a
5
,
=
= p
=
i
kak
5
(2)(2) + (i)(i)
5
often written as
1
2
.
i
a
.
kak
Figure 4.5: Dividing a vector by its norm yields a unit vector on the same ray
Computing a unit length alternative to a given vector in H turns out to be of
universal applicability in quantum mechanics, because it makes possible the computation of probabilities for each outcome of a measurement. The fact that a vector has
norm = 1 corresponds to the various possible measurement probabilities adding to 1
(100% chance of getting some measurement). Well see all this soon enough.
Caution. Figures 4.4 and 4.5 suggest that once you know the magnitude of a,
that will nail-it-down as a unique C3 representative at that distance along the ray.
This is far from true. Each point pictured on the ray, itself, constitutes infinitely
many different n-tuples in Cn , all differing by a factor of ei , for some real .
102
Example (continued). We have seen that a= (2, i)t has norm 5. A different
i/6
i/6
Cn representative for this H-space vector
is
e
a.
We
can
easily
see
that
e
a
i/6
has the same length as a. In words, e
= 1 (prove it or go back and review your
complex arithmetic module), so multiplying by ei/6 , while changing the Cn vector,
will not change its modulus (norm). Thus, that adjustment not only produces a
different representative, it does so without changing the norm. Still, it doesnt hurt
to calculate norm of the product the long way, just for exercise:
!
i/6
2 (cos /6 + i sin /6)
2e
i/6
e
a =
=
i ei/6
i (cos /6 + i sin /6)
2 cos /6 + 2i sin /6
=
1 sin /6 + i cos /6
!
2 23 + 2i 12
=
1 21 + i 23
!
3 + i
=
21 + i 2 3
!
1 2 3 + 2i
=
2 1 + i 3
I did the hard part: I simplified the rotated vector (we call the act of multiplying by
ei for real a rotation because of the geometric implication which you can imagine,
look-up, or simply accept).
All thats left to do in this computation is calculate the
norm and see that it is 5.
[Exercise. Close the deal.]
4.5.2
The states we are to model correspond to unit vectors in H. We all get that, now.
But what are the implications?
A state has to be a normalizable vector, which 0 is not. 0 will be the only
vector in H that does not correspond to a physical quantum state.
As we already hammered home, every other vector does correspond to some
state, but it isnt the only vector for that state. Any other vector that is a
scalar multiple of it represents the same state.
The mathematical entity that we get if we take the collection of all rays, {[a]}, as
its points, is a new construct with the name: complex projective sphere. This
new entity is, indeed, in one-to-one correspondence with the quantum states,
but ...
103
... the complex projective sphere is not a vector space, so we dont want to go
too far in attempting to define it; any attempt to make a formal mathematical
entity just so that it corresponds, one-to-one, with the quantum states it models
results in a non-vector space. Among other things, there is no 0-vector in such
a projective sphere, thus, no vector addition.
The Drill. With this in mind, we satisfy ourselves with the following process. It
may lack concreteness until we start working on specific problems, but it should give
you the feel for whats in store.
1. Identify a unit vector, v
H, corresponding to the quantum state of our
problem.
2. Sometimes we will work with a scalar multiple, v =
v H = Cn , because it
simplifies our computations and we know that this vector lies on the same ray
as the original. Eventually, well re-normalize by dividing by to bring it back
to the projective sphere.
3. Often we will apply a unitary transformation, U , directly to v
. U v
will be
a unit vector because, by definition (upcoming lecture on linear transformations), unitary transformations preserve distances. Thus, U keeps vectors on
the projective sphere.
4. In all cases, we will take care to apply valid operations to our unit state vector,
making sure that we end up with an answer which is also a unit vector on the
projective sphere.
4.5.3
Why?
There one question (at least) that you should ask and demand be answered.
Why is this called a projective sphere? Good question. Since the states
of our quantum system are rays in H, and we would prefer to visualize vectors as
points, not rays, we go back to the underlying Cn and project the entire ray (maybe
collapse would be a better word) onto the surface of an n-dimensional sphere (whose
real dimension is actually 2(n 1), but never mind that). We are projecting all those
representatives onto a single point on the complex n-sphere. (See Figure 4.5.) Caution: Each point on that sphere still has infinitely many representatives impossible
to picture due to a potential scalar factor ei , for real .]
None of this is to say that scalar multiples, a.k.a. phase changes, never matter.
When we start combining vectors in H, their relative phase will become important,
and so we shall need to retain individual scalars associated with each component
n-tuple. Dont be intimidated; well get to that in cautious, deliberate steps.
104
4.6
Almost There
We have one last math lesson to dance through after which we will be ready to
learn graduate level quantum mechanics (and do so without any prior knowledge of
undergraduate quantum mechanics). This final topic is linear transformations. Rest
up. Then attack it.
105
Chapter 5
Linear Transformations
5.1
5.1.1
0 i
.
i 0
depending which basis we are using. For example, there is a basis in which the same
logic gate has a different matrix, i.e.,
1 0
Y =
.
0 1
This should disturb you. If a matrix defines a quantum logic gate, and there can be
more than one matrix describing that gate, how can we ever know anything?
There is a more fundamental concept than the matrix of linear algebra, that of
the linear transformation. As youll learn today, a linear transformation is the basisindependent entity that describes a logic gate. While a linear transformations matrix
will change depending on which underlying basis we use to construct it, its life-giving
linear transformation remains fixed. You can wear different clothes, but underneath
youre still you.
5.1.2
There are many reasons we care about linear transformations. Ill list a few to give
you some context, and well discover more as we go.
Every algorithm in quantum computing will require taking a measurement. In
quantum mechanics, a measurement is associated with a certain type of linear
transformation. Physicists call it by the name Hermitian operator, which well
define shortly. Such beasts are, at their core, linear transformations. [Beware:
I did not say that taking a measurement was a linear transformation; it is not.
I said that every measurement is associated with a linear transformation.]
In quantum computing well be replacing our old logic gates (AND, XOR, etc.)
with a special kind of quantum gate whose basis independent entity is called a
unitary operator. But a unitary operator is nothing more than a linear transformation which has some additional properties.
We often want to express the same quantum state or qubit in different bases.
One way to convert the coordinates of the qubit from one basis to another is to
subject the coordinate vector to a linear transformation.
5.2
5.2.1
Linear Transformations are the verbs of vector spaces. They can map vectors of one
vector space, V, into a different space, W,
T
V W ,
107
or they can move vectors around, keeping them in the same space,
T
V V .
They describe actions that we take on our vectors. They can move a vector by
mapping it onto another vector. They can also be applied to vectors that dont move
at all. For example, we often want to expand a vector along a basis that is different
from the one originally provided, and linear transformations help us there as well.
5.2.2
A linear transformation, T , is a map from a vector space (the domain) into itself or
into another vector space (the range),
T
V W
(W could be V) ,
convert a vector (having a certain direction and length) into a different vector (having
a different direction and length).
Notation. Sometimes the parentheses are omitted when applying a linear transformation to a vector:
Tv
T (v) .
1(v) v (Identity)
0(v)
Sc (v)
Pk (v)
Pn (v)
D()
Z x
(f )
0 (Zero)
c v (Scale) (F igure 5.2)
vk x
k (Projection onto x
k ) (F igure 5.3)
(v n
) n
(Projection onto n
) (F igure 5.4)
0
(Differentiation)
Z x
TA (v) Av
109
3
Figure 5.3: Projection onto the direction z, a.k.a. x
[(c v1 + v2 ) n
] n
= [(c v1 ) n
+ v2 n
] n
[c (v1 n
) + v2 n
] n
= [c (v1 n
)] n
+ [v2 n
] n
c [(v1 n
)] n
+ [v2 n
] n
=
=
A(v) + A(w)
cA(v).
and
Therefore,
TA (v) Av
is linear. This is the linear transformation, TA , induced by the matrix A. Well look
at it more closely in a moment.
[Exercise. Prove these two claims about matrix-multiplication.]
110
2y
T1 : y
7
z
2z
x
2ix
T2 : y 7
3y
z
(4 1i)z
x+2
x
2i
T3 : y 7 y +
z
z+ 2
x
y
x
T4 : y
7
z
z
x
xy
y
T5 : y
7
z
z2
x
0
T6 : y 7 xyz
z
0
Which are linear, which are not? Support each claim with a proof or counter example.]
5.3
Basis vectors play a powerful role in the study of linear transformations as a consequence of linearity.
Lets pick any basis for our domain vector space (it doesnt have to be the natural
basis or even be orthonormal),
B = {b1 , b2 , ...} .
The definition of basis means we can express any vector as a linear combination
of the bk s (uniquely) using the appropriate scalars, k , from the underlying field.
Now, choose any random vector from the space and expand it along this basis (and
remember there is only one way to do this for every basis),
v
n
X
k bk .
k=1
We now apply T to v and make use of the linearity (the sum could be infinite) to get
!
n
n
X
X
k bk
=
k T (bk )
Tv = T
k=1
k=1
111
What does this say? It tells us that if we know what T does to the basis vectors,
we know what it does to all vectors in the space. Lets say T is some hard-todetermine function which is actually not known analytically (by formula), but by
experimentation we are able to determine its action on the basis. We can extend that
knowledge to any vector because the last result tells us that the coordinates of the
vector combined with the known values of T on the basis are enough. In short, the
small set of vectors
{T (b1 ) , T (b2 ) , ...}
completely determines T .
5.3.1
We now apply the theory to a linear transformation which seems to rotate points by
90 counter-clockwise in R2 . Lets call the rotation R/2 , since /2 radians is 90 .
The plan
We must first believe that a rotation is linear. I leave this to you.
[Exercise. Argue, heuristically, that R/2 is linear. Hint: Use the equations that
define linearity, but proceed intuitively, since you dont yet have a formula for R/2 .]
Next, we will use geometry to easily find the action of T on the two standard basis
vectors, {
x, y
}. Finally, well extend that to all of R2 , using linearity. Now for the
details.
The Details
Figures 5.5 and 5.6 show the result of the rotation on the two basis vectors.
=
=
112
and
x,
=
=
=
R/2 (vx x
+ vy y
)
vx R/2 (
x) + vy R/2 (
y)
vx y
vy x
.
From knowledge of the linear transformation on the basis alone, we were able to derive
a formula applicable to the entire space. Stated in column vector form,
y
x
R/2
=
,
y
x
and we have our formula for all space (assuming the natural basis coordinates). Its
that easy.
[Exercise. Develop the formula for a counter-clockwise rotation (again in R2 )
through an arbitrary angle, . Show your derivation based on its effect on the natural
basis vectors.]
[Exercise. What is the formula for a rotation, Rz, /2 , about the z-axis in R3
through a 90 angle, counter-clockwise when looking down from the positive z-axis.
Show your derivation based on its effect on the natural basis vectors, {
x, y
,
z}.]
5.4
5.4.1
You showed in one of your exercises above that any matrix A (containing scalar
constants) induces a linear transformation TA . Specifically, say A is a complex matrix
of size m n and v is a vector in Cn . Then the formula
TA (v)
Av
defines a mapping
TA
Cn Cm ,
113
which turns out to be linear. As you see, both A, and therefore, TA , might map
vectors into a different-sized vector space; sometimes m > n, sometimes m < n and
sometimes m = n.
5.4.2
We can go the other way. Starting with a linear transformation, T , we can construct
a matrix MT , that represents it. There is a little fine print here which well get to in
a moment (and you may sense it even before we get there).
For simplicity, assume we are working in a vector space and using some natural
basis A = {ak }. (Think {
x, y
,
z} of R3 .) So every vector can be expanded along A
by
v
n
X
k ak .
k=1
We showed that the action of T on the few vectors in basis A completely determines
its definition on all vectors v. This happened because of linearity,
Tv
n
X
k T (ak ) .
k=1
Lets write the sum in a more instructive way as the formal (but not quite legal) dot
product of a (row of vectors) with a (column of scalars). Shorthand for the above
sum then becomes
1
!
2
Tv =
T (a1 ) , T (a2 ) , . . . , T (an ) .. .
.
n
Now we expand each vector T (ak ) vertically into its coordinates relative to the same
basis, A, and we will have a legitimate product,
(T a1 )1 (T a2 )1 (T an )1
1
(T a1 )2 (T a2 )2 (T an )2 2
..
..
.. .. .
.
.
.
.
.
. .
(T a1 )n (T a2 )n . . . (T an )n
n
Writing ajk in place of (T ak )j , we have the
a11 a12
a21 22
T v = ..
..
.
.
an1 an2
simpler statement,
a1n
1
a2n 2
.. ,
. . . ..
.
.
ann
n
114
which reveals that T is nothing more than multiplication by a matrix made up of the
constants ajk .
Executive Summary. To get a matrix, MT , for any linear transformation, T ,
form a matrix whose columns are T applied to each basis vector.
We can then multiply any vector v by the matrix MT = ajk to get T (v).
Notation
Because this duality between matrices and linear transformations is so tight, we rarely
bother to distinguish one from the other. If we start with a linear transformation, T ,
we just use T as its matrix (and do away with the notation MT ). If we start with a
matrix, A, we just use A as its induced linear transformation, (and do away with the
notation TA ).
Example
Weve got the formula for a linear transformation that rotates vectors counter-clockwise
in R2 , namely,
x
y
R/2
=
.
y
x
To compute its matrix relative to the natural basis, {
x, y
}, we form
1
0
MR/2 =
R/2 (
x) , R/2 (
y)
=
R/2
, R/2
0
1
0 1
=
.
1 0
We can verify that it works by multiplying this matrix by an arbitrary vector,
0 1
x
0 x + (1) y
y
=
=
,
1 0
y
1x + 0y
x
as required.
[Exercise. Show that the matrix for the scaling transformation ,
S3i (v)
3i v ,
is
MS3i
3i 0
,
0 3i
and verify that it works by multiplying this matrix by an arbitrary vector to recover
the definition of S3i .]
115
5.4.3
The same goes for T s matrix. If I want to describe T in a particular basis, I will
say something like
T = T |A = T |B ,
and now I dont even need to use MT , since the |A or |B implies we are talking about
a matrix.
So the basis-free statement,
w
T (v) ,
T |A v|A ,
w|B
T |B v|B .
116
{a1 , a2 , . . .}
1
0
0 1
, , ... .
..
...
There was nothing special about the preferred basis in this formula; if we had any
basis even one that was non-orthonormal the formula would still be
!
T
=
T (b1 ) , T (b2 ) , . . . , T (bn )
.
B
The way to see this most easily is to first note that in any basis B, each basis vector
bk , when expressed in its own B-coordinates, looks exactly like the kth preferred basis
element, i.e.,
0
..
.
bk
= 1 kth element .
.
B
..
0
B
B
B
.
..
0
Example
The transformation
x
x
T :
7
y
(3i) y
117
in the context of the vector space C2 is going to have the preferred-basis matrix
1 0
MT =
.
0 3i
[Exercise. Prove it.]
Lets see what T looks like when expressed in the non-orthogonal basis
2
1
C =
,
.
0
1
[Exercise. Prove that this C is a basis for C2 .]
Using our formula, we get
T
T (c1 ) , T (c2 )
!
C
We compute each T (ck ) , k = 1, 2.
C
First up, T (c1 ) :
C
T (c1 )
2
T
0
2
.
0
Everything needs to be expressed in the C basis, so we show that for this vector:
2
2
1
1
= 1
,
+ 0
=
1
0 C
0
0
so this last column vector will be the first column of our matrix.
Next, T (c2 ) :
C
T (c2 )
1
T
1
1
.
3i
We have to express this, too, in the C basis, a task that requires a modicum of algebra:
1
2
1
=
+
.
3i
0
1
Solving this system should be no problem for you (exercise), giving,
1 3i
2
3i ,
118
so
T (c2 )
1
3i
!
3i
13i
2
5.4.4
1
0
!
.
3i
13i
2
When we have an orthonormal basis, things get easy. We start our analysis by working
in a natural, orthonormal basis, where all vectors and matrices are simply written
down without even thinking about the basis, even though its always there, lurking
behind the page. Today, were using A to designate the natural basis, so
v = v|A
would have the same coordinates on both sides of the equations, perhaps
1+i 2
.
3.2 i
x
x
0
2/2
2/2 0
.
=
0
2/2 A
0
2/2
y
y
0
2/2
This is true for any T and the preferred basis in R2 . To illustrate, lets take the upper
left component. Its given by the dot product
T11
h
x | T (
x)i .
119
=
=
=
h
y | T (
x)i ,
h
x | T (
y)i
h
y | T (
y)i .
and
In other words,
T
h
x | T (
x)i
h
x | T (
y)i
h
y | T (
x)i
h
y | T (
y)i
!
.
To make things crystal clear, lets rename the natural basis vectors
A
{
e1 ,
e2 },
T11 T12
!
=
T21 T22
h
e1 | T (
e1 )i
h
e1 | T (
e2 )i
h
e2 | T (
e1 )i
h
e2 | T (
e2 )i
!
.
But this formula, and the logic that led to it, would work for any orthonormal basis,
not just A, and in any vector space, not just R2 .
Summary. The jkth matrix element for the transformation, T , in an orthonormal
basis,
B
{ b1 , b2 , . . . . bn }
is given by
Tjk
E
j T (b
k ) ,
b
so
E
b1 T (b1 )
D
E
2 T (b
1 )
b
D
T
D
E
b1 T (b2 )
D
E.
2 T (b
2 )
b
Not only that, but we dont even have to start with a preferred basis to express our
T and B that are used in the formula. As long as T and B are both expressed in
the same basis say someD third
we can use the coordinates and matrix elements
C E
120
Example 1
Lets represent the scaling transformation
2/2
0
.
S2/2 =
0
2/2 A
in a basis that we encountered in a previous lesson (which was named C then, but
well call B today, to make clear the application of the above formulas),
2/2 , 2/2
B {b1 , b2 } =
.
2/2
2/2
We just plug in (intermediate B labels omitted):
D
E D
E
b
T
(
b
)
b
T
(
b
)
1
1
1
2
= D
S2/2
E D
E
B
2 T (b
1 )
2 T (b
2 )
b
b
b1 22
D
2 2
b
2
D
=
b1
*
2
b
*
=
1
b
1
b
1
2
1
2
!+
1
2
1
2
!+
2
2
2
2
D
1 2
b
2
D
2 2
b
2
2
b
2
b
!+
1
2
1
b
12
*
!+
b2 1
2
*
.
B
Thats surprising (or not). The matrix is the same in the B basis as it is in the A
basis.
1) Does it make sense?
2) Is this going to be true for all orthonormal bases and all transformations?
These are the kinds of questions you have to ask yourself when you are manipulating
mathematical symbols in new and unfamiliar territory. Ill walk you through it.
1) Does it make sense?
121
3
2/2
10
in both bases.
First well compute it in the A-basis and transform the resulting output vector
to the B-basis.
When weve done that, well start over, but this time first convert the starting
vector and matrix into B-basis coordinates and use those to compute the result,
giving us an answer in terms of the B-basis.
Well compare the two answers to see if they are equal.
By picking a somewhat random-looking vector, (3, 10)t , and discovering that
both T |A and T |B turn it into equivalent output vectors, we will be satisfied that the
result makes sense; apparently it is true that the matrix for S2/2 looks the same
when viewed in both bases. But we havent tested that, so lets do it.
A-Basis. In the A basis we already know that the output vector must be
3 2/2
,
5 2
A
because thats just the application of the transformation to the innate vector, which
is the same as applying the matrix to the preferred coordinates. We transform the
output vector into B-basis:
3 2/2
b1
5 2
3 2/2
=
5 2
3 2/2
A
b2
5 2
B
!
!
7
32 + 5
2
=
=
3
13
+5
2
2
B
B
3
B-Basis. Now, do it again, but this time convert the input vector,
, to
10
B-coordinates, and run it through S2/2 B . First the input vector:
3
b
1
10
=
10 A
3
b2
10
B
3 2 2 + 102 2
3 2
10 2
+
2
2
122
=
B
!
7 2
2
13 2
2
B
Finally, we apply S2/2 B to these B coordinates:
S2/2
!
7 2
2
13 2
2
B
2
2
0
!
7
2
.
13
2
2
2
!
7 2
2
13 2
2
And ... we get the same answer. Apparently there is no disagreement, and other tests
would give the same
results, so it seems we made no mistake when we derived the
matrix for S2/2 B and got the same answer as S2/2 A .
This is not a proof, but it easy enough to do (next exercise). A mathematicallyminded student might prefer to just do the proof and be done with it, while an
applications-oriented person may prefer just to test it on a random vector to see if
s/he is on the right track.
[Exercise. Prove that for any v, you get the same result for S2/2 (v) whether
you use coordinates in the A basis or in the B basis, thus confirming once-and-for-all
that the matrix of this scaling transformation is the same in both bases.]
2) Do we always get the same matrix?
Certainly not otherwise would I have burdened you with a formula for computing
the matrix of a linear transformation in an arbitrary basis?
Example 2
We compute the matrices for a projection transformation,
2/2
Pn (v) (v n
) n
,
where n
=
.
2/2
in the two bases, above.
The A Matrix. Ill use the common notation introduced earlier,
ek , for the kth
natural basis vector.
Pn
!
=
h
e1 | Pn (
e1 )i
h
e1 | Pn (
e2 )i
h
e2 | Pn (
e1 )i
h
e2 | Pn (
e2 )i
Since
Pn (
e1 )
Pn (
e2 )
1/2
1/2
1/2
1/2
123
and
!
.
1/2 1/2
1/2 1/2
b2 Pn (b1 )
D
E
1 Pn (b
2 )
b
D
E .
2 Pn (b
2 )
b
B
1 and Pn (b
1 ) have to be expressed in this same
Remember, for this to work, both b
basis, and we almost always express them in the B basis, since thats how we know
everything. Now,
2/2
and
Pn (b1 ) =
2/2
0
2 ) =
Pn (b
0
(exercise: prove it), so
Pn
1 0
0 0 B
(exercise: prove it), a very different matrix than Pn .
A
[Exercise. Using (3, 10) (or a general v) confirm (prove) that the matrices
Pn A and Pn B produce output coordinates of the identical intrinsic output vector,
Pn (3, 10)t . Hint: use an argument similar to the one above.]
5.5
5.5.1
M jk = Mkj
.
124
In other words, we form the transpose of M , then take the complex conjugate of every
element in that matrix.
Examples.
1i
3
0
2.5
7 + 7i
6
7i
88
1+i
0
6
3
7i
2.5
7 7i 88
1+i
5
3 2i
2.5
7 7i
99
ei
1i
5
3 + 2i
99
2.5
7 + 7i ei
1 + i
=
1 i , 2 + 2i
2 2i
!
3 7i
3 + 7i , 2
=
2
The Adjoint of a Linear Transformation
Because linear transformations always have a matrix associated with them (once we
have established a basis), we can carry the definition of adjoint easily over to linear
transformations.
Given a linear transformation, T with matrix MT (in some agreed-upon basis), its
adjoint, T , is the linear transformation defined by the matrix (multiplication) MT .
It can be stated in terms of its action on an arbitrary vector,
T (v)
(MT ) v ,
for all v V .
[Food for Thought. This definition requires that we have a basis established,
otherwise we cant get a matrix. But is the adjoint of a linear transformation, in this
definition, going to be different for different bases? Hit the CS 83A discussion forums,
please.]
5.5.2
Unitary Operators
Uv
Quantum computing uses a much richer class of logical operators (or gates) than
classical computing. Rather than the relatively small set of gates: XOR, AND,
NAND, etc., in classical computing, quantum computers have infinitely many different
logic gates that can be applied. On the other hand, the logical operators of quantum
computing are of a special form not required of classical computing; they must be
unitary, i.e., reversible.
Definition and Examples
Theoretical Definition. A linear transformation, U , is unitary (a.k.a.
a unitary operator) if it preserves inner products, i.e.,
h Uv | Uw i
hv|wi ,
Notation. It is common to use the letter, U , rather than T , for a unitary operator
(in the absence of a more specific designation, like R ).
Because of this theorem, we can use any of the three conditions as the definition
of unitarity, and for practical reasons, we choose the third.
Practical Definition #1. A linear transformation, U , is unitary (a.k.a. a
unitary operator ) if its matrix (in any orthonormal basis) has orthonormal columns
(or equivalently orthonormal rows), i.e., U is unitary for any orthonormal basis,
B
{bk }nk=1 ,
the column vectors of U B satisfy
h U (bj ) | U (bk ) i
jk .
Note that we only need to verify this condition for a single orthonormal basis (exercise,
below).
The Matrix of a Unitary Operator. A Matrix, M , is called unitary if its
adjoint is also its inverse, i.e.,
M M
MM
1.
UU
1.
sin
cos
0,
and dotting a column with itself (well just do the first column to demonstrate) gives
cos
cos
= cos2 + sin2 = 1 ,
sin
sin
so, again, we get orthonormality.
Phase Changes. A transformation which only modifies the arg of each coordinate,
i
x
e x
,
=
,
y
ei y
has the matrix
,
i
e
0
.
0 ei
Because this is a complex matrix, we have to apply the full inner-product machinery,
which requires that we not forget to take conjugates. The inner product of the two
columns is easy enough,
i
0
e
= ei 0 + 0 ei = 0 ,
0 ei
but do notice that we had to take the complex conjugate of the first vector even
though failing to have done so would have been an error that still gave us the right
answer. The inner product of a column with itself (well just show the first column)
gives
i i
e
e
= ei ei + 0 0
0 0
=
e0 + 0
1,
and we have orthonormality. Once again, the complex conjugate was essential in the
computation.
Some Non-Unitary Operators
Scaling by a Non-Unit Vector. Scaling by
the matrix
c
Sc =
0
128
whose columns are orthogonal (do it), but whose column vectors are not unit length,
since
c c
= c c + 0 0 = |c|2 6= 1 ,
0 0
by construction.
Note. In our projective Hilbert spaces, such transformations dont really exist,
since we consider all vectors which differ by a scalar multiple to be the same entity
(state). While this example is fine for learning, and true for non-projective Hilbert
spaces, it doesnt represent a real operator in quantum mechanics.
A Projection Operator. Another example we saw a moment ago is the projection onto a vectors 1-dimensional subspace in R2 (although this will be true of such
a projection in Rn , n 2). That was
2/2
Pn (v) (v n
) n
,
where n
=
.
2/2
We only need look at Pn (v) in either of the two bases for which we computed its
matrices to see that this is not unitary. Those are done above, but you can fill in a
small detail:
[Exercise. Show that both matrices for this projection operator fail to have
orthonormal columns.]
A Couple Loose Ends
has orthonormal
[Exercise. Prove
that,
for
orthonormal
bases
A
and
B,
if
U
A
columns, then U B does, too. Thus, the matrix condition for unitarity is independent
of orthonormal basis.]
[Exercise. Prove that is not true that a unitary U will have a matrix with
orthonormal columns for all bases. Do this by providing a counter-example as follows.
1. Use the rotation in R/2 in R2 and the non-orthogonal basis
1
2
,
= { d1 , d2 }
D =
0
1
.
2. Write down the matrix for R/2 D using the previously developed formula, true
for all bases,
#
"
R/2 D =
R/2 (d1 ) ,
R/2 (d2 )
D
whose four components are, of course, unknown to us. Well express them
temporarily as
.
D
129
3. Computing the matrix in the previous step requires a little calculation: you
cannot use the earlier trick
D
E
j T (b
k ) ,
Tjk
=
b
B
taken after we have applied R/2 ). You can easily confirm that (in natural
coordinates),
2
0
R/2
=
,
2 A
0 A
by drawing a picture. Now, get the D-coordinates of this vector by solving the
A-coordinate equation,
0
2
1
=
+
2
0
1
for and .
4. Even though this will immediately tell you that R/2 D is not orthonormal, go
on to get the full matrix by solving for the second column. (, )t , and showing
that neither column is normalized and the two columns are not orthogonal.]
5.5.3
Hermitian Operators
While unitary operators are the linear transformations that can be used to represent
quantum logic gates, there are types of operators associated with the measurements
of a quantum state. These are the called Hermitian operators.
Preview Characterization. In quantum mechanics, a Hermitian operator will
be an operator that is associated with some observable, that is, something about
the system, such as velocity, momentum, spin that we can imagine measuring. For
quantum computer scientists, the observable will be state of a quantum bit.
We will fully explore the connection between Hermitian operators and observables in the very next lesson (quantum mechanics), but right now lets least see the
mathematical definition of a Hermitian operator.
130
M.
1 0
3 0
0 2 1 0
3 1 3 0
0 0
0
1
1i
1 + i
7 + 7i
2.5
7 7i
88
and
are Hermitian.]
[Exercise. Explain why the
1+i
0
3
2.5
7 7i
matrices
6
7i
88
and
1 0
3 0
0 2 1 0
3 1 3 0
5.6
Well you did it. After a few weeks of intense but rewarding math, you are ready to
learn some fully caffeinated quantum mechanics. Ill walk you through it in a single
chapter that will occupy us for about a week at which time youll be fully a certified
quantum mechanic.
131
Chapter 6
The Experimental Basis of
Quantum Computing
6.1
This is the first of a three-chapter lesson on quantum mechanics. For our purposes
in CS 83A, only the second of the three the next chapter contains the essential
formalism that well be using in our algorithms. However, a light reading of this first
chapter will help frame the theory that comes next.
6.2
6.2.1
Quantum mechanics is not quantum physics. Rather, it is the collection of mathematical tools used to analyze physical systems which are, to the best of anyones
ability to test, known to behave according to the laws of quantum physics.
As computer scientists, we will not concern ourselves with how the engineers
implement the physical hardware that exhibits predictable quantum behavior (any
more than we needed to know how they constructed a single classical bit capable of
holding a 1 or a 0 out of beach sand in order to write classical software or algorithms).
Certainly this is of great interest, but it wont interfere with our ability to play our
part in moving the field of quantum information forward. As of this writing, we dont
know which engineering efforts will result in the first generation of true quantum
hardware, but our algorithms should work regardless of the specific solution they end
up dropping at our doorstep.
That said, a brief overview of the physics will provide some stabilizing terra firma
to which the math can be moored.
132
6.2.2
Heres the set-up. We have a physical system, call it S (thats a script S), and an
apparatus that can measure some property of S . Also, we have it on good authority
that S behaves according to quantum weirdness; 100 years of experimentation has
confirmed certain things about the behavior of S and its measurement outcomes.
Here are some examples the last few may be unfamiliar to you, but I will elaborate,
shortly.
• The system is a proton and the measurement is the velocity (or momentum) of the proton.

• The system is a proton and the measurement is the position of the proton.

• The system is a hydrogen atom (one proton + one electron) and the measurement is the potential energy state of the atom.

• The system is an electron and the measurement is the z-component magnitude of the electron's spin.

• The system is an electron and the measurement is the x-component magnitude of the electron's spin.

• The system is an electron and the measurement is the magnitude of the electron's spin projected onto the direction n̂.
In each case, we are measuring a real number that our apparatus somehow is
capable of detecting. In practice, the apparatus is usually measuring something related
to our desired quantity, and we follow that with a computation to get the value of
interest (velocity, momentum, energy, z-component of spin, etc.).
6.2.3
One usually studies momentum and position as the relevant measurable quantities, especially in a first course in quantum mechanics. For us, an electron's spin will be a better choice.
A Reason to Learn Quantum Mechanics Through Spin
The Hilbert spaces that we get if we measure momentum or position are infinite-dimensional vector spaces, and the corresponding linear combinations become integrals rather than sums. We prefer to avoid calculus in this course. For spin, however, our vector spaces are two-dimensional, about as simple as they come. Sums work great.
6.3
6.3.1
6.3.2
Translating our imperfect model into the language of math, we initially define an electron's spin (state) to be the vector S, which embodies two ideas:

1. first, its quantity of angular momentum (how heavy it is combined with how fast it's rotating), which I will call S, and

2. second, the orientation (or direction) of its imagined rotational axis, which I will call n̂_S or sometimes just n̂.

The first entity, S, is a scalar. The second, n̂_S, can be represented by a unit vector that points in the direction of the rotational axis (where we adjudicate up vs. down by a right-hand rule, which I will let you recall from any one of your early math classes). (See Figure 6.2.)
So, the total spin vector will be written

    S = (Sx, Sy, Sz)^t,
Figure 6.2: A classical idea for spin: A 3-D direction and a scalar magnitude
and we can break it into its two aspects: its scalar magnitude,

    S ≡ |S| = √( Sx² + Sy² + Sz² ),

and a unit vector that embodies only its orientation (direction),

    n̂_S = S / |S|.
It turns out that the magnitude of an electron's spin never changes; for the record, its value is (√3/2)ℏ, where ℏ is a tiny number known as Planck's constant. (In street terminology, that's spin 1/2.) After today, we won't rely on the exact expression, but for now we will keep it on the books by making explicit the relationship

    S = (√3/2) ℏ n̂_S.

The constancy of its magnitude leaves the electron's spin orientation, n̂_S, as the only spin-related entity that can change from moment to moment or electron to electron.
6.3.3
Spherical Representation
Figure 6.3: Polar and azimuthal angles for the (unit) spin direction
You may have noticed that we don't need all three components nx, ny and nz. Since n̂_S is unit length, the third can be derived from the other two. [Exercise. How?] A common way to express spin direction using only two real numbers is through the so-called polar and azimuthal angles, θ and φ (see Figure 6.3).
In the language of spherical coordinates, the vector n̂_S is written (1, θ, φ)_Sph, for example (1, .615, π/4)_Sph, where the first coordinate is always 1 because n̂_S has unit length. The two remaining coordinates are the ones we just defined, the angles depicted in Figure 6.3.

θ and φ will be important alternatives to the Euclidean coordinates (nx, ny, nz), especially as we study the Bloch sphere, density matrices and mixed states, topics in the next quantum computing course, CS 83B.
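If you want to experiment with the two coordinate systems, the conversion from (θ, φ) back to the Euclidean components is three lines of trigonometry. A sketch in Python with NumPy (the function name is mine):

    import numpy as np

    def spherical_to_cartesian(theta, phi):
        # Unit vector n-hat with polar angle theta and azimuthal angle phi.
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    n = spherical_to_cartesian(0.615, np.pi / 4)
    print(n, np.linalg.norm(n))   # the norm is 1, as required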
6.4
We proceed along classical grounds and, with the aid of some experimental physicists,
design an experiment.
6.4.1
The Experiment
We prepare a bunch of electrons in equiprobable random states, take some measurements of their spin states, and compare the results with what we would expect classically. Here are the details:

1. The States. Let's instruct our experimental physicists to prepare an electron soup: billions upon billions of electrons in completely random spin orientations. No one direction (or range of directions) should be more represented than any other.
we expect Sz to vary between +(√3/2)ℏ and −(√3/2)ℏ. For example, in one extreme case, we could find an electron whose spin is oriented straight up, i.e., in the positive z-direction, with Sz = +(√3/2)ℏ possessing 100% of the vector's length. Similar logic implies the detection of other electrons oriented straight down (Sz = −(√3/2)ℏ).
6.4.2
We get no such cooperation from nature. In fact, we only and always get one of two z-values of spin: Sz = +ℏ/2 and Sz = −ℏ/2. Furthermore, the two readings appear to be somewhat random, occurring with about equal likelihood and no pattern:
    +ℏ/2, −ℏ/2, +ℏ/2, +ℏ/2, −ℏ/2, +ℏ/2, . . .
Figure 6.7: The measurements force Sz to snap into one of two values.
6.4.3
There are two surprises here, each revealing its own quantum truth that we will be forced to accept.

Surprise #1. There are infinitely many quantum spin states available for electrons to secretly experience. Yet when we measure the z-component of spin, the uncooperative particle always reports that this value is either +ℏ/2 or −ℏ/2, each choice occurring with about equal likelihood. We will call the z-component of spin the observable Sz (observable indicating that we can measure it) and accept from the vast body of experimental evidence that measuring the observable Sz forces the spin state to collapse such that its Sz snaps to either one of the two allowable values, called eigenvalues of the observable Sz. We'll call the +ℏ/2 outcome the +z outcome (or just the (+) outcome) and the −ℏ/2 outcome the −z (or the (−)) outcome.
Surprise #2. Even if we can somehow accept the collapse of the infinity of random states into the two measurable ones, we cannot help but wonder why the electron's projection onto the z-axis is not the entire length of the vector, that is, either straight up at +(√3/2)ℏ or straight down at −(√3/2)ℏ. The electron stubbornly wants to give us only a fraction of that amount, 58%. This corresponds to two groups: the up group, which forms the angle 55° (0.955 rad) with the positive z-axis, and the down group, which forms that same 55° angle with the negative z-axis. The explanation for not being able to get a measurement that has the full length (√3/2)ℏ is hard to describe without a more complete study of quantum mechanics. Briefly, it is due to the Heisenberg uncertainty principle. If the spin were to collapse to a state that was any closer to the vertical z-axis, we would have too much simultaneous knowledge about its x- and y-components (too close to 0) and its z-component (too close to (√3/2)ℏ). (See Figure 6.8.) This would violate Heisenberg, which requires that the combined variation of these observables be larger than a fixed constant. Therefore, Sz must give up some of its claim on the full spin magnitude, (√3/2)ℏ, resulting in a shorter Sz, specifically (1/2)ℏ.

Figure 6.8: Near-vertical spin measurements would give illegally accurate knowledge of Sx, Sy and Sz.
6.4.4
A Follow-Up to Experiment #1
Before we move on to a new experimental set-up, we do one more test. The physicists tell us that a side-effect of the experimental design was that the electrons which measured +z are now separated from those that measured −z, and without building any new equipment, we can subject each group to a follow-up Sz measurement. The follow-up test, like the original, will not add any forces or torques that could change the Sz (although we are now beginning to wonder, given the results of the first set of trials). (See Figure 6.9.)
6.4.5
We are reassured by the results. All of the electrons in the (+) group faithfully measure +z, whether done immediately after the first test or after a period of time. Similarly, the Sz value of the (−) group is unchanged; those rascals continue in their down-z orientation. We have apparently created two special states for the electrons which are distinguishable and verifiable. (See Figure 6.10.)
6.4.6
6.4.7
The imagery of a spinning charged massive object has to be abandoned. Either our classical physics doesn't work at this subatomic level, or the mechanism of electron spin is not what we thought it was, or both. Otherwise, we would have observed a continuum of Sz values in our measurements. Yet the experiments that we have been doing are designed to measure spin as if it were a 3-dimensional vector, and the electrons are responding to something in our apparatus during the experiments, so while the model isn't completely accurate, the representation of spin by a vector, S (or equivalently, two angular parameters, θ and φ, and one scalar S), may still have some merit. We're going to stop pretending that anything is really spinning, but we'll stick with the vector quantity S and its magnitude S, just in case there's some useful information in there.
6.5
The output of our first experiment will serve as the input of our second. (See Figure 6.11.)
6.5.1
The Experiment
We will take only the electrons that collapsed to |+⟩ as input to a new apparatus. We observed that this was essentially half of the original group, the other half consisting of the electrons that collapsed into the state |−⟩ (which we now throw away).

Our second apparatus is going to measure the x-axis projection. Let's repeat this using the same bullet points as we did for the first experiment.

1. The States. The input electrons are in a specific state, |+⟩, whose z-spins always point (as close as possible to) up. This is in contrast to the first experiment, where the electrons were randomly oriented.
Figure 6.14: Viewed from top left, the classical range of x-projection of spin.
Clinging desperately to those classical ideas which have not been ruled out by the first experiment, we imagine Sx and Sy to be in any relative amounts that complement |Sz|, now firmly fixed at ℏ/2. This would allow values for those two components anywhere up to the remaining length, √(S² − Sz²).
6.5.2
As before, our classical expectations are dashed. We get one of two x-values of spin: Sx = +ℏ/2 or Sx = −ℏ/2. And again the two readings occur randomly with near-equal probability:

    +ℏ/2, +ℏ/2, −ℏ/2, +ℏ/2, −ℏ/2, −ℏ/2, . . .
Also, when we subject each output group to further Sx tests, we find that after the
first Sx collapse each group is locked in its own state as long as we only test Sx .
6.5.3
Figure 6.15: A guess about the states of two groups after experiment #2
6.5.4
A Follow-Up to Experiment #2
Using the two detectors the physicists built, we home in on one of the two output groups of experiment #2, choosing (for variety) the |−⟩x group. We believe it to contain only electrons with Sz = (+) and Sx = (−). In case you are reading too fast, that's z-up and x-down. To confirm, we run this group through a third test that measures Sz again, fully anticipating a boring run of (+) readings.
6.5.5
    +ℏ/2, −ℏ/2, −ℏ/2, +ℏ/2, −ℏ/2, +ℏ/2, . . .
We are speechless. There are now equal numbers of z-up and z-down spins in a group of electrons that we initially selected from a purely z-up pool. (See Figure 6.16.) Furthermore, the physicists assure us that nothing in the second apparatus could have disturbed the electrons' z-spins.
6.5.6
Even if we accept the crudeness of only two measurable states per observable, we still can't force both Sz and Sx into specific states at the same time. Measuring Sx destroys all information about Sz. (See Figure 6.17.)

After more experimentation, we find this to be true of any pair of distinct directions, Sx, Sy, Sz, or even Sn, where n̂ is some arbitrary direction onto which we project S. There is no such thing as knowledge of three independent directional components such as Sx, Sy, and Sz. The theorists will say that Sz and Sx (and any such pair of different directions) are incompatible observables. Even if we lower our expectations to accept only discrete (in fact, binary) outcomes |+⟩, |+⟩x, |−⟩y, etc., we still can't know or prepare two at once.

[Exception. If we select for |+⟩x (or any |+⟩n) followed by a selection for its polar opposite, |−⟩x (or |−⟩n), there will be predictably zero electrons in the final output.]
Looking for a way to organize what we have seen, we might guess that the |−⟩x state is built from the two z-states,

    |−⟩x = c+ |+⟩ + c− |−⟩,

where c+ and c− are scalar weights which express how much |+⟩ and |−⟩ constitute |−⟩x. Furthermore, it would make sense that the scalars c+ and c− have equal magnitude if they are to reflect the observed (roughly) equal number of z-up and z-down spins detected by the third apparatus when testing the |−⟩x group.
We can push this even further. Because we are going to be working with normalized vectors (recall the projective sphere in Hilbert space?), it will turn out that their common magnitude will be 1/√2. For this particular combination of vector states,

    |−⟩x = (1/√2) |+⟩ − (1/√2) |−⟩ = ( |+⟩ − |−⟩ ) / √2.

In words (that we shall make precise in a few moments), the |−⟩x vector can be expressed as a linear combination of the Sz vectors |+⟩ and |−⟩. This hints at the idea that |+⟩ and |−⟩ form a basis for a very simple 2-dimensional Hilbert space, the foundation of all quantum computing.
6.5.7
The classical spin vector S is crumbling before our eyes. In its place, a new model is emerging: that of a two-dimensional vector space whose two basis vectors appear to be the two z-spin states |+⟩ and |−⟩, which represent a quantum z-up and quantum z-down, respectively. This is a difficult transition to make, and I'm asking you to accept the concept without trying too hard to visualize it in your normal three-dimensional world view. Here are three counter-intuitive ideas, the seeds of which are present in the recent outcomes of experiment #2 and its follow-up:
1. Rather than electron spin being modeled by classical three-dimensional unit vectors

    (Sx, Sy, Sz)^t = (1, θ, φ)_Sph

in a real vector space with basis { x̂, ŷ, ẑ }, we are heading toward a model where spin states are represented by two-dimensional unit vectors

    (c+, c−)^t

in a complex vector space with basis { |+⟩z, |−⟩z }.
2. In contrast to classical spin, where the unit vectors with z-components +1 and −1 are merely scalar multiples of the same basis vector ẑ,

    (0, 0, +1)^t = (+1) ẑ   and   (0, 0, −1)^t = (−1) ẑ,

we are positing a model in which the two polar-opposite z-spin states, |+⟩ = |+⟩z and |−⟩ = |−⟩z, are linearly independent of one another.
3. In even starker contrast to classical spin, where the unit vector x̂ is linearly independent of ẑ, the experiments seem to suggest that the unit vector |−⟩x can be formed by taking a linear combination of |+⟩ and |−⟩, specifically

    |−⟩x = ( |+⟩ − |−⟩ ) / √2.
But don't give up on the spherical coordinates θ and φ just yet. They have a role to play, and when we study expectation values and the Bloch sphere, you'll see what that role is. Meanwhile, we have one more experiment to perform.
6.6
6.6.1
We have learned that measuring one of the scalars related to spin, Sz, Sx or any Sn, has two effects:
6.6.2
The Experiment
We can prepare a pure state that is between |+⟩ and |−⟩ if we are clever. We direct our physicists to rotate the first apparatus relative to the second apparatus by an angle θ counter-clockwise from the z-axis (the axis of rotation being the y-axis, which ensures that the x-z plane is rotated into itself; see Figure 6.18).
Figure 6.18: A spin direction with polar angle θ from +z, represented by |ψ⟩

Let's call this rotated state |ψ⟩, just so it has a name that is distinct from |+⟩ and |−⟩.

If we only rotate by a tiny θ, we have a high dose of |+⟩ and a small dose of |−⟩ in our rotated state, |ψ⟩. On the other hand, if we rotate by nearly 180° (π radians), |ψ⟩ would have mostly |−⟩ and very little |+⟩ in it. Before this lesson ends, we'll prove that the right way to express the relationship between θ and the relative amounts of |+⟩ and |−⟩ contained in |ψ⟩ is

    |ψ⟩ = cos(θ/2) |+⟩ + sin(θ/2) |−⟩.
By selecting the same (+) group coming out of the first apparatus (but now tilted at an angle θ) as input into the second apparatus, we have effectively changed our input states going into the second apparatus from purely |+⟩ to purely |ψ⟩.

We now measure Sz, the spin projected onto the z-axis. The exact features of what I just described can be stated using the earlier three-bullet format.

1. The States. This time the input electrons are in a specific state, |ψ⟩, whose z-spin direction forms an angle θ with the z-axis (and, for specificity, whose spherical coordinate for the azimuthal angle, φ, is 0). (See Figure 6.19.)
Figure 6.19: The prepared state for experiment #3, prior to measurement
2. The Measurement. We follow the preparation of this rotated state with an
Sz measurement.
3. The Classical Expectation. We've been around the block enough to realize that we shouldn't expect a classical result. If this were a purely classical situation, the spin magnitude, (√3/2)ℏ, would lead to Sz = (√3/2)ℏ cos(θ). But we already know the largest Sz ever reads is (1/2)ℏ, so maybe we attenuate that number by cos θ and predict (ℏ/2) cos(θ). Those are the only two ideas we have at the moment. (See Figure 6.20.)
6.6.3
It is perhaps not surprising that we always read one of two z-values of spin: Sz = +ℏ/2 and Sz = −ℏ/2. The two readings occur somewhat randomly:

    +ℏ/2, +ℏ/2, −ℏ/2, +ℏ/2, −ℏ/2, +ℏ/2, . . .
However, closer analysis reveals that they are not equally likely. As we try different θs and tally the results, we get the summary shown in Figure 6.21: the (+) outcome occurs with frequency

    cos²(θ/2) × 100%,

and the (−) outcome with frequency

    sin²(θ/2) × 100%.
Notice how nicely this agrees with our discussion of experiment #2. There, we prepared a |−⟩x state to go into the final Sz tester. |−⟩x is intuitively 90° from the z-axis, so in that experiment our θ was 90°. That would make θ/2 = 45°, whose cosine and sine are both √2/2. For these values, the formula above gives a predicted 50% (i.e., equal) frequency to each outcome, (+) and (−), and that's exactly what we found when we measured Sz starting with |−⟩x electrons.
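We can rehearse the statistics (though not the physics) with a toy simulation: draw (+) outcomes with the claimed probability cos²(θ/2) and check the tally. A sketch in Python/NumPy, using the experiment-#2 geometry θ = 90°:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.pi / 2
    p_plus = np.cos(theta / 2) ** 2      # predicted frequency of the (+) outcome

    outcomes = rng.random(100_000) < p_plus
    print(outcomes.mean())               # about 0.5: the 50-50 split we observed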
6.6.4
Apparently, no matter what quantum state our electron's spin is in, we always measure the magnitude projected onto an axis as ±ℏ/2. We suspected this after the first two experiments, but now firmly believe it. The only thing we haven't tried is to rotate the first apparatus into its most general orientation, one that includes a non-zero azimuthal φ, but that would not change the results.
This also settles a debate that you might have been waging, mentally. Is spin, prior to an Sz measurement, actually in some combination of the states |+⟩ and |−⟩? Yes. Rotating the first apparatus relative to the second apparatus by a particular θ has a physical impact on the outcomes. Even though the electrons collapse into one of |+⟩ and |−⟩ after the measurement, there is a difference between |ψ⟩ = .45|+⟩ + .89|−⟩ and |ψ′⟩ = .71|+⟩ + .71|−⟩: the first produces 20% (+) measurements, and the second produces 50% (+) measurements.
6.6.5
    c+ |+⟩n + c− |−⟩n ,
6.6.6
We're now close enough to an actual model of quantum spin-1/2 particles to skip the game of making a minor adjustment to our evolving picture. In the next lesson we'll add the new information about the meaning of the scalar coefficients and the probabilities they reflect, and give the full Hilbert space description of spin-1/2 physics motivated by our three experiments.

Also, we're ready to leave the physical world, at least until we study time evolution. We can thank our experimental physicists and let them get back to their job of designing a quantum computer. We have a model to build and algorithms to design.
6.7
Onward to Formalism
In physics, the word formalism refers to the abstract mathematical notation necessary to scribble predictions about what a physical system, such as a quantum computer, will do if we build it. For our purposes, the formalism is what we need to know, and that's the content of the next two chapters. They will provide a strict but limited set of rules that we can use to accurately understand and design quantum algorithms.

Of those chapters, only the next (the second of these three) is necessary for CS 83A, but what you have just read has prepared you with a sound intuition about the properties and techniques that constitute the formalism of quantum mechanics, especially in the case of the spin-1/2 model used in quantum computing.
Chapter 7
Time Independent Quantum
Mechanics
7.1
This is the second in a three-chapter introduction to quantum mechanics and the only chapter that is required for the first course, CS 83A. A brief and light reading of the prior lesson, in which we introduced about a half dozen conceptual experiments, will help give this chapter context. However, even if you skipped that on first reading, you should still find the following to be self-contained.
Our goal today is twofold.
1. We want you to master the notation used by physicists and computer scientists
to scribble, calculate and analyze quantum algorithms and their associated logic
circuits.
2. We want you to be able to recognize and make practical use of the direct correspondence between the math and the physical quantum circuitry.
By developing this knowledge, you will learn how manipulating symbols on paper
affects the design and analysis of actual algorithms, hardware logic gates and measurements of output registers.
7.2
Let's do some quantum mechanics. We'll be stating a series of properties (I'll call them traits) that will either be unprovable but experimentally verified postulates, or provable consequences of those postulates. We'll be citing them all in our study of quantum computing.
In this lesson, time will not be a variable; the physics and mathematics pertain
to a single instant.
7.2.1
electron #47 to account for the spin of the atom as a whole. (Two other facts recommend the use of silver atoms. First, we can't use charged particles, since the so-called Lorentz force would overshadow the subtle spin effects. Second, silver atoms are heavy enough that their deflection can be calculated based solely on classical equations.)
We Prepare a Fixed Initial State. An atom can be prepared in a spin state associated with any pre-determined direction, n̂, prior to subjecting it to a final Stern-Gerlach tester. We do this by selecting a |+⟩ electron from a preliminary Stern-Gerlach Sz tester, then orienting a second tester in an n̂ direction relative to the original.
The Measurements and Outcomes. The deflection of the silver atom is detected as it hits a collector plate at the far end of the last apparatus, giving us the measurements ±ℏ/2, and therefore the collapsed states |+⟩x, |−⟩n, etc. The results correspond precisely with our experiments #1, #2 and #3 discussed earlier.
Stern-Gerlach is the physical system to keep in mind as you study the math that
follows.
7.3
7.3.1
For any physical system, S, we associate a Hilbert space, H. Each physical state in S corresponds to some ray in H, or, stated another way, a point on the projective sphere (all unit vectors) of H:

    physical state in S   ⟷   v ∈ H,  |v| = 1.

The Hilbert space H is often called the state space of the system S, or just the state space.
Vocabulary and Notation. Physicists and quantum computer scientists alike express the vectors in the state space using ket notation. This means a vector in state space is written

    |ψ⟩,

and is usually referred to as a ket. The Greek letter ψ is typically used to label any old state. As needed, we will be replacing it with specific and individual labels when we want to differentiate two state vectors, express something known about the vector, or discuss a famous vector that is universally labeled. Examples we will encounter include

    |ψa⟩ ,  |uk⟩ ,  |+⟩ ,  and  |+⟩y .
When studying Hilbert spaces, I mentioned that a single physical state corresponds to an infinite number of vectors, all on the same ray, so we typically choose a normalized representative having unit length. It's the job of quantum physicists to describe how to match the physical states with normalized vectors, and ours as quantum computer scientists to understand and respect the correspondence.
7.3.2
Based on our experiments involving spin-1/2 systems from the (optional) previous chapter, we were led to the conclusion that any spin state can be described by a linear combination of two special states |+⟩ and |−⟩. If you skipped that chapter, this will serve to officially define those two states. They are the natural basis kets of a 2-dimensional complex Hilbert space, H. In other words, we construct a simple vector space of complex ordered pairs with the usual complex inner product and decree the natural basis to be the two measurable states |+⟩ and |−⟩ in S. Symbolically,

    H = { (α, β)^t : α, β ∈ C },   with

    |+⟩ ≡ (1, 0)^t   and   |−⟩ ≡ (0, 1)^t.
In this regime, any physical spin state |ψ⟩ in S can be expressed as a normalized vector expanded along this natural basis using

    |ψ⟩ = (α, β)^t = α |+⟩ + β |−⟩,

where

    |α|² + |β|² = 1.

The length requirement reflects that physical states reside on the projective sphere of H.
[Exercise. Demonstrate that { |+⟩, |−⟩ } is an orthonormal pair. Caution: Even though this may seem trivial, be sure you are using the complex, not the real, inner product.]
The Orthonormality Expressions

In the heat of a big quantum computation, basis kets will kill each other off, turning themselves into the scalars 0 and 1, because the last exercise says that

    ⟨+|+⟩ = ⟨−|−⟩ = 1   and
    ⟨+|−⟩ = ⟨−|+⟩ = 0.

While this doesn't rise to the level of trait, memorize it. Every quantum mechanic relies on it.
[Exercise. Demonstrate that the set { |+⟩, |−⟩ } forms a basis (the z-basis) for H. Hint: Even though only the projective sphere models S, we still have to account for the entire expanse of H, including all the vectors off the unit sphere, if we are going to make claims about spanning the space.]
The x-Basis for H

Let's complete a thought we started in our discussion of experiment #2 from last time. We did a little hand-waving to suggest

    |−⟩x = ( |+⟩ − |−⟩ ) / √2,

but now we can make this official (or, if you skipped the last chapter, let this serve as the definition of two new kets):

    |−⟩x ≡ (1/√2) (1, −1)^t,

that is, it is the vector in H whose coordinates along the z-basis are as shown. We may as well define the |+⟩x vector. It is

    |+⟩x = ( |+⟩ + |−⟩ ) / √2 = (1/√2) (1, 1)^t.

[Exercise. Demonstrate that { |+⟩x, |−⟩x } is an orthonormal pair.]

[Exercise. Demonstrate that the set { |+⟩x, |−⟩x } forms a basis (the x-basis) for H.]
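Both exercises can be spot-checked numerically; the one subtlety is remembering to conjugate the left vector of a complex inner product. A sketch in Python/NumPy:

    import numpy as np

    plus_x  = np.array([1,  1]) / np.sqrt(2)
    minus_x = np.array([1, -1]) / np.sqrt(2)

    # np.vdot conjugates its first argument: exactly the complex inner product.
    print(np.vdot(plus_x, plus_x))    # 1.0 -> unit length
    print(np.vdot(plus_x, minus_x))   # 0.0 -> orthogonal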
7.3.3
There may be two questions (at least) that you are inclined to ask.
1. Why do we need a complex vector space as opposed to a real one to represent
spin?
2. Why do spin states have to live on the projective sphere? Why not any point
in H or perhaps the sphere of radius 94022?
I can answer item 1 now (and item 2 further down the page). Obviously, there was nothing magical about the z-axis or the x-axis. I could have selected any direction in which to start my experiments at the beginning of the last lesson and then picked any other axis for the second apparatus. In particular, I might have selected the same z-axis for the first measurement, but used the y-axis for the second one. Our interpretation of the results would then have suggested that |−⟩y contain equal parts |+⟩ and |−⟩,

    |−⟩y = c+ |+⟩ + c− |−⟩,

and similarly for |+⟩y. If we were forced to use real scalars, we would have to pick the same two scalars ±1/√2 for c± (although we could choose which got the + and which got the − sign, a meaningless difference). We'd end up with

    |−⟩y = (1/√2) (1, −1)^t   (warning: not true),

which would force them to be identical to the vectors |+⟩x and |−⟩x, with perhaps the order of the vectors reversed. But this can't be true, since the x-kets and the y-kets can no more be identical to each other than either to the z-kets, and certainly neither is identical to the z-kets. (If they were, then repeated testing would never have split the original |+⟩ into two equal groups, |+⟩x and |−⟩x.) So there are just not enough real numbers to form a third pair of basis vectors in the y-direction, distinct from the x-basis and the z-basis.
If we allow complex scalars, the problem goes away. We can define

    |−⟩y ≡ (1/√2) (1, −i)^t.

Now all three pairs are totally different orthonormal bases for H, yet each one contains equal amounts of |+⟩ and |−⟩.
7.4
Trait #1 seemed natural after understanding the experimental outcomes of the spin-1/2 apparatus. Trait #2 is going to require that we cross a small abstract bridge, but I promise, you have all the math necessary to get to the other side.
7.4.1
    T_A : H → H,   linear, with   T_A = T_A†.
[Review. T_A = T_A† is the Hermitian condition. The Hermitian condition on a matrix M means that M's adjoint (conjugate transpose), M†, is the same as M. The Hermitian condition on a linear transformation means that its matrix (in any basis) is its own adjoint. Thus, Trait #2 says that the matrix, T_A, which purports to correspond to the observable A, must be self-adjoint. We'll reinforce all this in the first example.]
Don't feel bad if you didn't expect this kind of definition; nothing we did above suggested that there were linear transformations behind the observables Sz, Sx, etc., never mind that they had to be Hermitian. I'm telling you now: there are, and they do. This is an experimentally verified observation, not something we can prove, so it is up to the theorists to guess the operator that corresponds to a specific observable. Also, it is not obvious what we can do with the linear transformation of an observable, even if we discover what it is. All in due time.
7.4.2
The Observable Sz
The linear transformation for Sz, associated with the z-component of electron spin (a concept from the optional previous chapter), is represented by the matrix

    Sz = [ ℏ/2     0
            0   −ℏ/2 ]  =  (ℏ/2) [ 1   0
                                   0  −1 ].
Blank stares from the audience.

Never fear. You'll soon understand why multiplication by this matrix is the operator (in H) chosen to represent the observable Sz (in the physical system S). And if you did not read that chapter, just take this matrix as the definition of Sz.
Notation. Sometimes we abandon the constant ℏ/2 and use a simpler matrix, to which we give the name σz, a.k.a. the Pauli spin matrix (in the z-direction),

    σz ≡ [ 1   0
           0  −1 ].
Meanwhile, the least we can do is to confirm that the matrix for Sz is Hermitian.
[Exercise. Prove the matrix for Sz is Hermitian.]
We will now start referring to
i) the observable spin projected onto the z-axis,
7.5
This trait will add some math vocabulary and a bit of quantum computational skill
to our arsenal.
7.5.1
The only possible measurement outcomes of an observable quantity A are special real numbers, a1, a2, a3, . . ., associated with the operator's matrix. The special numbers a1, a2, a3, . . . are known as eigenvalues of the matrix.

[Note to Physicists. I'm only considering finite or countable eigenvalues because our study of quantum computing always has only finitely many special ak. In physics,
7.5.2
Given any complex or real matrix M, we can often find certain special non-zero vectors, called eigenvectors, for M. An eigenvector u (for M) has the property that

    u ≠ 0   and   M u = a u,  for some (possibly complex) scalar a.
There are two facts that I will state without proof. (They are easy enough to be exercises.)

Uniqueness. Non-degenerate eigenvalues have unique (up to scalar multiple) eigenvectors. Degenerate eigenvalues do not have unique eigenvectors.

Diagonality. When the eigenvectors of a matrix, M, form a basis for the vector space, we call it an eigenbasis for the space. M, expressed as a matrix in its own eigenbasis, is a diagonal matrix (0 everywhere except on the diagonal from position 1-1 to n-n).
[Exercise. Prove one or both of these facts.]
7.5.3
Let's examine M = Sz. Most vectors, say (1, 2)^t for example, are not eigenvectors of Sz:

    (ℏ/2) [ 1   0 ; 0  −1 ] (1, 2)^t  =  (ℏ/2) (1, −2)^t  ≠  a (1, 2)^t

for any complex scalar, a. However, the vector (1, 0)^t is an eigenvector, as

    (ℏ/2) [ 1   0 ; 0  −1 ] (1, 0)^t  =  (ℏ/2) (1, 0)^t

demonstrates. It also tells us that ℏ/2 is the eigenvalue associated with the vector (1, 0)^t, which is exactly what Trait #3 requires.
[Exercise. Show that (0, 1)^t and −ℏ/2 form another eigenvector-eigenvalue pair for Sz.]
This confirms that Trait #3 works for Sz ; we have identified the eigenvalues for
Sz and they do, indeed, represent the only measurable values of the observable Sz in
our experiments.
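NumPy will confirm the whole story at once. A sketch (with ℏ set to 1 for convenience):

    import numpy as np

    hbar = 1.0
    Sz = (hbar / 2) * np.array([[1,  0],
                                [0, -1]])

    vals, vecs = np.linalg.eigh(Sz)   # eigh is the solver for Hermitian matrices
    print(vals)                       # [-0.5  0.5], i.e., -hbar/2 and +hbar/2
    print(vecs)                       # columns: the eigenvectors (0,1)^t and (1,0)^t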
All of this results in a more informative variant of Trait #3, which I'll call Trait #3′.

Trait #3′: The only possible measurement outcomes of an observable, A, are the solutions {ak} to the eigenvector-eigenvalue equation

    T_A |uk⟩ = ak |uk⟩.
The values {ak} are always real and are called the eigenvalues of the observable, while their corresponding kets, {|uk⟩}, are called the eigenkets. If each eigenvalue has a unique eigenket associated with it, the observable is called non-degenerate. On the other hand, if there are two or more different eigenkets that make the equation true for the same eigenvalue, that eigenvalue is called a degenerate eigenvalue, and the observable is called degenerate.

You may be wondering why we can say that the eigenvalues of an observable are always real when we have mentioned that, for a general matrix operator, we can get complex eigenvalues. This is related to the theoretical definition of an observable, which requires it to be of a special form that always has real eigenvalues.
7.6
We take a short detour to acquire the skills needed to compute eigenvalues for any
observable.
The Eigenvalue Theorem. Given a matrix M, its eigenvalues are the solutions, λ, of the equation

    det( M − λI ) = 0.
Proof. If u is an eigenvector of M with eigenvalue a, then

    M u = a u = a I u,   so   (M − aI) u = 0.

Keeping in mind that eigenvectors are always non-zero, we have shown that the matrix M − aI maps a non-zero u into 0. But that's the hypothesis of the Little Inverse Theorem B of our matrix lesson, so we get

    det( M − aI ) = 0.   QED
7.6.1
For the observable Sy, whose matrix is

    Sy = (ℏ/2) [ 0  −i
                 i   0 ],

the theorem gives

    0 = det( Sy − λI ) = det [ −λ     −iℏ/2
                               iℏ/2    −λ   ]  =  λ² − ℏ²/4,

so the eigenvalues are λ = ±ℏ/2.
Of course, we knew the answer, because we did the experiments (and in fact, the
theoreticians crafted the Sy matrix based on the results of the experimentalists). Now
comes the fun part. We want to figure out the eigenvectors for these eigenvalues. Get
ready to do your first actual quantum mechanical calculation.
Eigenvector for (+ℏ/2). The eigenvector has to satisfy

    Sy u = +(ℏ/2) u,

that is,

    (ℏ/2) [ 0  −i ; i  0 ] (v1, v2)^t = (ℏ/2) (v1, v2)^t,

or

    −i v2 = v1   and   i v1 = v2.
There are two equations in two unknowns. Wrong. There are four unknowns (each coordinate, vk, is a complex number, defined by two real numbers). This is somewhat expected, since we know that the solution will be a ray of vectors all differing by a complex scalar factor. We can solve for any one of the vectors on this ray as a first step. We do this by guessing that this ray has some non-zero first coordinate (and if we guess wrong, we would try again, the second time knowing that it must therefore have a non-zero second coordinate [Exercise. Why?]). Using this guess, we can pick v1 = 1, since any non-zero first coordinate can be made to equal 1 by a scalar multiple of the entire vector. With this we get the complex equations
    −i v2 = 1   and   i · 1 = v2,

revealing that v2 = i, so

    u = (1, i)^t,
which we must (always) normalize by projecting onto the unit (projective) sphere:

    u / |u| = (1/√2) (1, i)^t = ( |+⟩ + i|−⟩ ) / √2.

The last equality is the expression of u explicitly in terms of the z-basis { |+⟩, |−⟩ }.
Alternate Method. We got lucky in that once we substituted 1 for v1, we were able to read off v2 immediately. Sometimes the equation is messier, and we need to do a little work. In that case, naming the real and imaginary parts of v2 helps:

    v2 = a + bi,
and substituting this into the original equations containing v2, above, gives

    −i (a + bi) = 1   and   i = a + bi,

or

    b − ai = 1   and   i = a + bi.
Let's solve the second equation for a, then substitute into the first, as in

    a = i − bi,
    b − (i − bi) i = 1,
    1 = 1.
What does this mean? It means that we get a very agreeable second equation; the b disappears, resulting in a true identity (a tautology, to the logician). We can therefore let b be anything. Again, when given a choice, choose 1. So b = 1, and substituting that into any of the earlier equations gives a = 0. Thus,

    v2 = a + bi = 0 + 1i = i,

the same result we got instantly the first time. We would then go on to normalize u as before.
Wrong Guess for v1? If, however, after substituting for a and solving the first equation, b disappeared and produced a falsehood (like 0 = 1), then no b would be suitable. That would mean our original choice of v1 = 1 was not defensible; v1 could not have been a non-zero value. We would simply change that assumption, set v1 = 0, and go on to solve for v2 (either directly or by solving for a and b to get it). This time, we would be certain to get a solution. In fact, any time you end up facing a contradiction (3 = 4) instead of a tautology (7 = 7), your original guess for v1 has to be changed. Just redefine v1 (if you chose 1, change it to 0) and everything will work out.
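Here is the same calculation done numerically, as a sanity check on the hand computation (ℏ = 1; note that a solver may hand back the eigenvector multiplied by an arbitrary phase, which is exactly the ray freedom we discussed):

    import numpy as np

    Sy = 0.5 * np.array([[0, -1j],
                         [1j,  0]])

    vals, vecs = np.linalg.eigh(Sy)
    print(vals)                            # [-0.5  0.5]
    u = vecs[:, 1]                         # eigenvector for +1/2
    print(u)                               # proportional to (1, i)/sqrt(2)
    print(np.allclose(Sy @ u, 0.5 * u))    # True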
7.6.2
Roll up your sleeves and do it all for the third primary direction.
[Exercise. Compute the eigenvalues and eigenvectors for the observable Sx .]
Congratulations. You are now doing full-strength quantum mechanics (not
quantum physics, but, yes, quantum mechanics).
7.6.3
Spoiler Alert. These are the results of the last two exercises:

    Sx :   +ℏ/2 ↔ (1/√2) (1, 1)^t ,     −ℏ/2 ↔ (1/√2) (1, −1)^t
    Sy :   +ℏ/2 ↔ (1/√2) (1, i)^t ,     −ℏ/2 ↔ (1/√2) (1, −i)^t
Expressed explicitly in terms of the z-basis vectors we find

    Sz :   +ℏ/2 ↔ |+⟩ ,                            −ℏ/2 ↔ |−⟩
    Sx :   +ℏ/2 ↔ |+⟩x = ( |+⟩ + |−⟩ )/√2 ,        −ℏ/2 ↔ |−⟩x = ( |+⟩ − |−⟩ )/√2
    Sy :   +ℏ/2 ↔ |+⟩y = ( |+⟩ + i|−⟩ )/√2 ,       −ℏ/2 ↔ |−⟩y = ( |+⟩ − i|−⟩ )/√2
We saw the x-kets and y-kets before, when we were trying to make sense out of the 50-50 split of a |−⟩x state into the two states |+⟩z and |−⟩z. Now, the expressions re-emerge as the result of a rigorous calculation of the eigenvectors for the observables Sx and Sy. Evidently, the eigenvectors of Sx are the same two vectors that you showed (in an exercise) were an alternative orthonormal basis for H, and likewise for the eigenvectors of Sy.
Using these expressions along with the distributive property of inner products, it is easy to show orthonormality relations like

    x⟨+|+⟩x = 1   and   x⟨+|−⟩x = 0.

[Exercise. Prove the above two equalities as well as the remaining combinations that demonstrate that both the x-basis and y-basis are each (individually) orthonormal.]
7.7
Thus far, we have met a simple 2-dimensional Hilbert space, H, whose vectors correspond to states of a spin-1/2 physical system, S. The correspondence is not 1-to-1, since an entire ray of vectors in H corresponds to the same physical state, but we are learning to live with that by remembering that we can and will normalize all vectors whenever we want to see a proper representative of that ray on the unit (projective) sphere. In addition, we discovered that our three most common observables, Sz, Sx and Sy, correspond to operators whose eigenvalues are all ±ℏ/2 and whose
eigenvectors form three different 2-vector bases for the 2-dimensional H. Each of the bases is an orthonormal basis.

I'd like to distill two observations and award them collectively the title of trait.
7.7.1
figure out a better mathematical Hilbert space to model S. Similarly, if there were a measurable quantity for which we could not identify a linear operator, we have not properly modeled S.
7.7.2
While we've been doggedly using the z-axis kets to act as a basis for our 2-dimensional state space, we know we can use the x-kets or y-kets. Let's take the x-kets as an example.

First and most easily, when we express any basis vector in its own coordinates, the results always look like natural basis coordinates, i.e., (1, 0)^t or (0, 1)^t. This was an exercise back in our linear algebra lecture, but you can take a moment to digest it again. So, if we were to switch to using the x-basis for our state space vectors, we would surely see that

    |+⟩x = (1, 0)^t_x   and   |−⟩x = (0, 1)^t_x.
How would our familiar z-kets now look in this new basis? You can do this by starting with the expressions for the x-kets in terms of |+⟩ and |−⟩ that we already have,

    |+⟩x = ( |+⟩ + |−⟩ )/√2   and   |−⟩x = ( |+⟩ − |−⟩ )/√2,

and solving for the z-kets in terms of the x-kets. It turns out that doing so results in déjà vu,

    |+⟩ = ( |+⟩x + |−⟩x )/√2   and   |−⟩ = ( |+⟩x − |−⟩x )/√2.
It's a bit of a coincidence, and this symmetry is not quite duplicated when we express the y-kets in terms of |+⟩x and |−⟩x. I'll do one, and you can do the other as an exercise.
|+⟩y in the x-Basis. The approach is to first write down |+⟩y in terms of |+⟩ and |−⟩ (already known), then replace those two z-basis kets with their x-representation:

    |+⟩y = ( |+⟩ + i|−⟩ ) / √2
         = ( ( |+⟩x + |−⟩x )/√2 + i ( |+⟩x − |−⟩x )/√2 ) / √2
         = ( (1+i) |+⟩x + (1−i) |−⟩x ) / 2
         = ( (1+i)/2 ) ( |+⟩x − i |−⟩x )
         ≅ ( |+⟩x − i |−⟩x ) / √2,

where, in the last (normalizing) step, we discarded the overall factor (1+i)/√2, a unit-magnitude scalar that only moves us along the same ray, i.e., the same physical state.
7.7.3
Expressing |+⟩ or |−⟩ in a different basis, like the x-basis, is just a special case of something you already know about. Any vector whose coordinates are given in the preferred basis, like

    |ψ⟩ = α |+⟩ + β |−⟩ = (α, β)^t,

has an alternate representation when its coefficients are expanded along a different basis, say the x-basis,

    |ψ⟩ = α′ |+⟩x + β′ |−⟩x = (α′, β′)^t_x.
If we have everything expressed in the preferred basis, and then one day we find ourselves in need of all our vectors' names in the alternate (say x) basis, how do we do it? Well, there is one way I didn't develop (yet), and it involves finding a simple matrix by which you could multiply all the preferred vectors to produce their coefficients expanded along the alternate basis. But we usually don't need this matrix. Since we are using orthonormal bases, we can convert to the alternate coordinates directly using the dot-with-the-basis-vector trick. Typically, we only have one or two vectors whose coordinates we need in this alternate basis, so we just apply that trick.
For example, to get a state vector, |ψ⟩, expressed in the x-basis, we just form the two inner products,

    |ψ⟩ = ( x⟨+|ψ⟩ , x⟨−|ψ⟩ )^t,

where x⟨+|ψ⟩ means we are taking the complex inner product of |ψ⟩ on the right with the x-basis ket |+⟩x on the left. (Don't forget that we have to take the Hermitian conjugate of the left vector for complex inner products.)
We are implying by context that the column vector on the RHS is expressed in x-coordinates, since that's the whole point of the paragraph. But if we want to be super explicit, we could attach the subscript x to the column,

    |ψ⟩ = ( x⟨+|ψ⟩ , x⟨−|ψ⟩ )^t_x,

with or without a long vertical line marking the subscript, depending on author and time-of-day.
Showing the same thing in terms of the x-kets explicitly, we get

    |ψ⟩ = x⟨+|ψ⟩ |+⟩x + x⟨−|ψ⟩ |−⟩x
        = ⟨ (1/√2)(1, 1)^t , |ψ⟩ ⟩ |+⟩x + ⟨ (1/√2)(1, −1)^t , |ψ⟩ ⟩ |−⟩x .

Notice that the coordinates of the three vectors, |±⟩x and |ψ⟩, are expressed in the preferred z-basis. We can compute inner products in any orthonormal basis, and since we happen to know everything in the z-basis, why not? Try not to get confused. We are looking for the coordinates in the x-basis, so we need to dot with the x-basis vectors, but we use the z-coordinates of those vectors (and the z-coordinates of the state |ψ⟩) to compute those two scalars.
Example. The (implied z-spin) state vector

    |ψ⟩ = ( (1+i)/√6 , −√2/√3 )^t

has x-coordinates

    x⟨+|ψ⟩ = (1/√2) ( (1+i)/√6 − √2/√3 ) = (−1 + i)/√12,
    x⟨−|ψ⟩ = (1/√2) ( (1+i)/√6 + √2/√3 ) = (3 + i)/√12,

so

    |ψ⟩ = ( (−1+i)/√12 , (3+i)/√12 )^t_x.
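The dot-with-the-basis-vector trick is one line of code per coefficient. A sketch in Python/NumPy reproducing the example above:

    import numpy as np

    psi     = np.array([(1 + 1j) / np.sqrt(6), -np.sqrt(2) / np.sqrt(3)])
    plus_x  = np.array([1,  1]) / np.sqrt(2)
    minus_x = np.array([1, -1]) / np.sqrt(2)

    # x-coordinates of psi: complex inner products with the x-basis kets
    # (vdot conjugates its first argument, as required).
    print(np.vdot(plus_x,  psi))   # (-1+i)/sqrt(12)
    print(np.vdot(minus_x, psi))   # ( 3+i)/sqrt(12)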
7.8
7.8.1
Our casual lesson about conversion from the z-basis to the x-basis has brought us to one of the most computationally useful tools in quantum mechanics. It's something that we use when doodling with pen and paper to work out problems and construct algorithms, so we don't want to miss the opportunity to establish it formally, right now. We just saw that

    |ψ⟩ = x⟨+|ψ⟩ |+⟩x + x⟨−|ψ⟩ |−⟩x,
which was true because { |+⟩x, |−⟩x } formed an orthonormal basis for our state space. In systems with higher dimensions we have a more general formula. Where do we get state spaces that have dimensions higher than two? A spin-1 system (photons) has 3 dimensions; a spin-3/2 system (the delta particle) has 4 dimensions; and later, when we get into multi-qubit systems, we'll be taking tensor products of our humble 2-dimensional H to form 8-dimensional or larger state spaces. And the state spaces that model position and momentum are infinite-dimensional (but don't let that scare you; they are actually just as easy to work with; we use integrals instead of sums).
If we have an n-dimensional state space, then we have an orthonormal basis for that space, say

    { |uk⟩ },  k = 1, . . ., n.

The |uk⟩ basis may or may not be a preferred basis; it doesn't matter. Using the dot-product trick we can always expand any state in that space, say |ψ⟩, along the basis, just like we did for the x-basis in 2 dimensions. Only now we have a larger sum,

    |ψ⟩ = Σ_{k=1}^{n} ⟨uk|ψ⟩ |uk⟩.
This is a weighted sum of the uk-kets by the scalars ⟨uk|ψ⟩. There is no law that prevents us from placing the scalars on the right side of the vectors, as in

    |ψ⟩ = Σ_{k=1}^{n} |uk⟩ ⟨uk|ψ⟩ = ( Σ_{k=1}^{n} |uk⟩⟨uk| ) |ψ⟩.
Look at what we have. We are subjecting any state vector |ψ⟩ to something that looks like an operator and getting that same state vector back again. In other words, that fancy-looking operator-sum is nothing but an identity operator, 1:

    Σ_{k=1}^{n} |uk⟩⟨uk| = 1.
This simple relation, called the completeness or closure relation, can be applied by inserting the sum into any state equation without changing it, since it is the same as inserting an identity operator (an identity matrix) into an equation involving vectors. We'll use it a little in this course, CS 83A, and a lot in the next courses, CS 83B and CS 83C, here at Foothill College.
[Exercise. Explain how the sum Σ_k |uk⟩⟨uk| is, in fact, a linear transformation that can act on a vector |ψ⟩. Hint: After applying it to |ψ⟩ and distributing, each term is just an inner product (resulting in a scalar) times a vector. Thus, you can analyze a simple inner product first and later take the sum, invoking the properties of linearity.]
This is worthy of its own trait.
7.8.2
Any orthonormal basis { |uk⟩ } for our Hilbert space H satisfies the closure relation

    Σ_{k=1}^{n} |uk⟩⟨uk| = 1.
In particular, the eigenvectors of an observable will always satisfy the closure relation. We won't work any examples, as this would take us too far afield, and in our simple 2-dimensional H there's not much to see, but we now have it on the record.
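In coordinates, each |uk⟩⟨uk| is an outer product (a matrix), and closure says the outer products of an orthonormal basis sum to the identity matrix. A quick numeric check using the x-basis:

    import numpy as np

    plus_x  = np.array([1,  1]) / np.sqrt(2)
    minus_x = np.array([1, -1]) / np.sqrt(2)

    # |u><u| is the outer product of the ket with its own conjugate.
    total = np.outer(plus_x, plus_x.conj()) + np.outer(minus_x, minus_x.conj())
    print(np.allclose(total, np.eye(2)))   # True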
7.9
There are two versions of this one, and we only need the simpler of the two, which holds in the case of non-degenerate eigenvalues, the situation that we will be using for our spin-1/2 state spaces. (Also, you'll note our traits are no longer in sync with the postulates, an inevitable circumstance since there are more traits than postulates.)
7.9.1
If a system is in the (normalized) state |ψ⟩, and this state is expanded along the eigenbasis { |uk⟩ } of some observable, A, i.e.,

    |ψ⟩ = Σ_{k=1}^{n} ck |uk⟩,

then the probability that a measurement of A produces the eigenvalue ak (collapsing the state into |uk⟩) is the squared magnitude of that outcome's amplitude, |ck|².

Probability Example 1

Expand the state |+⟩ along the Sx eigenbasis:

    |+⟩ = ( |+⟩x + |−⟩x ) / √2.
The amplitude of each of the two possible outcome states (the two eigenstates |+⟩x and |−⟩x) is 1/√2. This tells us that each eigenstate outcome is equally likely and, in fact, determined by

    P( Sx = +ℏ/2  given  |+⟩ ) = | 1/√2 |² = 1/2,
while

    P( Sx = −ℏ/2  given  |+⟩ ) = 1/2.
Notice that the two probabilities add to 1. Is this a happy coincidence? I think not. The first postulate of QM (our Trait #1) guarantees that we are using unit vectors to correspond to system states. If we had a non-unit vector that was supposed to represent that state, we'd need to normalize it first before attempting to compute the probabilities.
Probability Example 2
The follow-up to Experiment #2 was the most shocking to us, so we should see how it is predicted by Trait #6. The starting point for this measurement was the output of the second apparatus, specifically the x-down group: |−⟩x. We then measured Sz, so we need to know the coefficients of the state |−⟩x along the Sz eigenbasis. We computed this already, and they are contained in

    |−⟩x = ( |+⟩ − |−⟩ ) / √2.
The arithmetic we just did works exactly the same here, and our predictions are the same:

    P( Sz = +ℏ/2  given  |−⟩x ) = | 1/√2 |² = 1/2

and

    P( Sz = −ℏ/2  given  |−⟩x ) = | −1/√2 |² = 1/2.
Notice that, despite the amplitude's negative sign, the probabilities still come out non-negative.

[Exercise. Analyze the Sy measurement probabilities of an electron in the state |−⟩. Be careful. This time we have a complex number to conjugate.]
Probability Example 3
The z-spin coordinates of a state, |ψ⟩, are given by

    |ψ⟩ = ( (1+i)/√6 , −√2/√3 )^t.
The probability that an Sz measurement yields +ℏ/2 is the squared magnitude of the first coordinate,

    P( Sz = +ℏ/2  given  |ψ⟩ ) = | (1+i)/√6 |² = 2/6 = 1/3.
The probability of detecting an x-DOWN spin starting with that same |ψ⟩ requires that we project that state along the x-basis,

    |ψ⟩ = c+ |+⟩x + c− |−⟩x.

However, since we only care about the x-down state, we can just compute the |−⟩x coefficient, which we do using the dot-product trick.
    c− = ⟨ (1/√2) (1, −1)^t , ( (1+i)/√6 , −√2/√3 )^t ⟩
       = (1/√2) ( (1+i)/√6 + √2/√3 )
       = (1 + i + 2) / √12
       = (3 + i) / √12.
The x-down probability is therefore

    |c−|² = |3 + i|² / 12 = (9 + 1) / 12 = 10/12 = 5/6.
[Exercise. Compute the −z and +x spin probabilities for this |ψ⟩ and confirm that they complement their respective partners that we computed above. Explain what I mean by complementing their partners.]

[Exercise. Compute the +y and −y spin probabilities for this |ψ⟩.]
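All three examples reduce to the same two-step recipe: dot the state with an eigenket, then take the squared magnitude. A sketch of Example 3 in Python/NumPy:

    import numpy as np

    psi     = np.array([(1 + 1j) / np.sqrt(6), -np.sqrt(2) / np.sqrt(3)])
    plus_z  = np.array([1, 0])
    minus_x = np.array([1, -1]) / np.sqrt(2)

    # P(outcome) = |<eigenket | psi>|^2
    print(abs(np.vdot(plus_z,  psi)) ** 2)   # 1/3  (z-up)
    print(abs(np.vdot(minus_x, psi)) ** 2)   # 5/6  (x-down)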
7.10
After the first experiment of the last lesson, in which we measured the z-spin of random electrons and noted that the measurement caused them to split into two equal groups, one having all z-up states and the other having all z-down states, we did a follow-up. We tested Sz again on each output group separately. In case you missed it, here's what happened. When we re-tested the |+⟩ group, the results always gave us +z readings, and when we re-tested the |−⟩ group we always got −z readings. It was as if, after the first measurement had divided our electrons into two equal groups of +z and −z spins, any subsequent tests suggested that the two groups were frozen into their two respective states, as long as we only tested Sz.

However, the moment we tested a different observable, like Sx, we disturbed that z-axis predictability. So the collapse of the system into the Sz state was only valid as long as we continued to test that one observable. This is our next trait.
7.10.1
We can, for example, prepare the state

    |+⟩x = ( |+⟩ + |−⟩ ) / √2,

and realize that we have prepared a state which, with respect to the z-spin observable, is not a basis state but a linear combination, a superposition, of the two z-basis states (with equally weighted components). This kind of state preparation will be very important for quantum algorithms, because it represents starting out in a state which is neither 0 nor 1, but a combination of the two. This allows us to work with a single state in our quantum processor and get two results for the price of one. A single qubit and a single machine cycle will simultaneously produce answers for both 0 and 1.
But, after we have prepared one of these states, how do we go about giving it to a quantum processor, and what is a quantum processor? That's answered in the first quantum computing lesson, coming up any day now. Today, we carry on with pure quantum mechanics to acquire the full quiver of q-darts.
7.11
We now have a complete set of numbers and mathematical entities that give us all
there is to know about any quantum system. We know
1. the possible states |ψ⟩ of the system (vectors in our state space),

2. the possible outcomes of a measurement of an observable of that system (the eigenvalues, ak, of the observable),

3. the eigenvector states (a.k.a. eigenbasis), |uk⟩, into which the system collapses after we detect a value, ak, of that observable, and

4. the probabilities associated with each eigenvalue outcome (specifically, |ck|², which are derived from the amplitudes, ck, of |ψ⟩ expanded along the eigenbasis |uk⟩).
In words, the system is in a state of probabilities. We can only get certain special outcomes of measurement. Once we measure, the system collapses into a special state associated with that special outcome. The probability that this measurement occurs is predicted by the coefficients of the state's eigenvector expansion.
7.12
We take a short break from physics to talk about the universal notation made popular
by the famous physicist Paul Dirac.
7.12.1
It's time to formalize something we've been using loosely up to this point: the bracket, or bra-ket, notation. The physicist's expression for a complex inner product,

    ⟨v|w⟩ ,  ⟨ψ|φ⟩ ,  ⟨uk|ψ⟩ ,  ⟨+|ψ⟩ ,

can be viewed, not as a combination of two vectors in the same vector space, but rather as two vectors, each from a different vector space. Take, for example,

    ⟨φ|ψ⟩.
The RHS of the inner product, |ψ⟩, is the familiar vector in our state space, or ket space. Nothing new there. But the LHS, ⟨φ|, is to be thought of as a vector from a new vector space, called the bra space (mathematicians call it the dual space). The bra space is constructed by taking the conjugate transpose of the vectors in the ket space, that is,

    |ψ⟩ = (α, β)^t   ⟼   ⟨ψ| = (α*, β*).

Meanwhile, the scalars for the bra space are the same: the complex numbers, C.
Examples

Here are some kets (not necessarily normalized) and their corresponding bras:

    |ψ⟩ = ( 1+i , 2−2i )^t            ⟨ψ| = ( 1−i , 2+2i )
    |φ⟩ = ( 3/2 , −i )^t              ⟨φ| = ( 3/2 , i )
    |χ⟩ = (1/√2) ( 5i , 0 )^t         ⟨χ| = (1/√2) ( −5i , 0 )
    |ω⟩ = (1/√2) ( 1 , −1 )^t         ⟨ω| = (1/√2) ( 1 , −1 )
    |+⟩   ⟷   ⟨+|
    |−⟩y  ⟷   y⟨−|

[Exercise. Show that the bra of |+⟩y = ( |+⟩ + i|−⟩ )/√2 is ( ⟨+| − i⟨−| )/√2. Hint: It's probably easier to do this without reading a hint, but if you're stuck, write out the LHS as a single column vector and take the conjugate transpose. Meanwhile, the RHS can be constructed by forming the bras for the two z-basis vectors (again using coordinates) and combining them. The two efforts should result in the same vector.]
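In coordinates, forming a bra really is a single operation, conjugate-then-transpose, which is two method calls in Python/NumPy:

    import numpy as np

    ket = np.array([[1 + 1j],
                    [2 - 2j]])   # a column vector: a ket
    bra = ket.conj().T           # its bra: the conjugate transpose
    print(bra)                   # [[1.-1.j  2.+2.j]]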
Vocabulary and Notation

Notice that bras are written as row vectors, which is why we call them conjugate transposes of kets. The dagger (†) is used to express the fact that a ket and bra bear this adjoint relationship to one another:

    |ψ⟩† = ⟨ψ|   and   ⟨ψ|† = |ψ⟩.
This should sound very familiar. Where have we seen conjugate transpose before? Answer: when we defined the adjoint of a matrix. We even used the same dagger (†) notation. In fact, you saw an example in which the matrix had only one row (or one column), i.e., a vector. (See the lesson on linear transformations.) This is the same operation: conjugate transpose.

Be careful not to say that a ket is the complex conjugate of a bra. Complex conjugation is used for scalars only. Again, we say that a bra is the adjoint of the ket (and vice versa). If you want to be literal, you can always say conjugate transpose. An alternate term physicists like to use is Hermitian conjugate; the bra is the Hermitian conjugate of the ket.
Example

Let's demonstrate that the sum of two bras is also a bra. What does that even mean? If ⟨ψ| and ⟨φ| are bras, they must be the adjoints of two kets,

    |ψ⟩ = (α, β)^t   with   ⟨ψ| = (α*, β*),
    |φ⟩ = (γ, δ)^t   with   ⟨φ| = (γ*, δ*).

Their sum,

    ⟨ψ| + ⟨φ| = ( α* + γ* , β* + δ* ),

is the conjugate transpose of the ket

    ( |ψ⟩ + |φ⟩ ).

That's all a bra needs to be: the Hermitian conjugate of some ket. So the sum is a bra.
There is (at least) one thing we must confirm. As always, when we define anything in terms of coordinates, we need to be sure that the definition is independent of our choice of basis (since coordinates arise from some basis). I won't prove this, but you may choose to do so as an exercise.
[Exercise. Pick any three axioms of a vector space and prove that the bras in
the bra space obey them.]
[Exercise. Show that the definition of bra space is independent of basis.]
Remain Calm. There is no cause for alarm. Bra space is simply a device that allows us to manipulate the equations without making mistakes. It gives us the ability to talk about the LHS and the RHS of an inner product individually and symmetrically, unattached to the inner product.
Elaborate Example

We will use the bra notation to compute the inner product of two somewhat complicated kets,

    c |ψ⟩ + |φ⟩   on the left,   and   ( d |χ⟩ − f |ω⟩ ) / g   on the right,

where c, d, f and g are some complex scalars. The idea is very simple. We first take the Hermitian conjugate of the intended left vector by

1. turning all of the kets (in that left vector) into bras,
2. taking the complex conjugate of any scalars (in that left vector) which are outside a ket,
3. forming the desired inner product, and
4. using the distributive property to combine the component kets and bras.

Applying steps 1 and 2 on the left ket produces the bra

    c* ⟨ψ| + ⟨φ|.

Step 3 gives us

    ( c* ⟨ψ| + ⟨φ| ) ( d |χ⟩ − f |ω⟩ ) / g,
Simple Example

    y⟨+|+⟩ = ( ( |+⟩ + i|−⟩ )/√2 )† |+⟩
           = ( ( ⟨+| − i⟨−| )/√2 ) |+⟩
           = ( ⟨+|+⟩ − i⟨−|+⟩ ) / √2
           = 1/√2.

The first thing we did was to express |+⟩y in the z-basis without converting it to a bra. Then, we used the techniques just presented to convert that larger expression into a bra. From there, it was a matter of distributing the individual kets and bras and letting them neutralize each other. Normally, we would perform the first two steps at once, as the next example demonstrates.
    y⟨−|+⟩x = ( ( ⟨+| + i⟨−| )/√2 ) ( ( |+⟩ + |−⟩ )/√2 )
            = ( ⟨+|+⟩ + i⟨−|+⟩ + ⟨+|−⟩ + i⟨−|−⟩ ) / 2
            = (1 + i)/2.
[Exercise. Compute

    y⟨+|+⟩x   and   x⟨−|+⟩y . ]
Summary

The bra space is a different vector space from the ket (our state) space. It is, however, an exact copy (an isomorphism) of the state space in the case of finite dimensions, which is all we ever use in quantum computing. You now have enough chops to prove this easily, so I leave it as an ...

[Exercise. Prove that the adjoint of the ket basis is a basis for bra space. Hint: Start with any bra. Find the ket from which it came (this step is not always possible in infinite-dimensional Hilbert space, as your physics instructors will tell you). Expand that ket in any state space basis, then . . . .]
7.12.2
We now leverage the earlier definition of a matrix adjoint and extend our ability to translate expressions from the ket space to the bra space (and back). If A is an operator (or matrix) in our state space (not necessarily Hermitian; it could be any operator), then its adjoint A† can be viewed as an operator (or matrix) in the bra space,

    A† : ⟨ψ| ⟼ ⟨ψ′|.
A† operates on bras, but since bras are row vectors, it has to operate on the right, not the left:

    ⟨ψ| A† = ⟨ψ′|.

And the symmetry we would like to see is that the output ⟨ψ′| to which A† maps ⟨ψ| is the bra corresponding to the ket A|ψ⟩. That dizzying sentence translated into symbols is

    ⟨ψ| A† = ( A|ψ⟩ )†.
Example. Start with our familiar 2-dimensional state space and consider the
operator,
i i
A =
.
1 0
Its adjoint is
A
i 1
.
i 0
h| A = ( 1 + i , 3 )
i 0
( 1 + 2i , 1 + i ) .
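A quick numeric check of this example, assuming nothing beyond numpy; the state is the row vector from the text.

```python
import numpy as np

A = np.array([[1j, -1j],
              [1,   0 ]])
A_dag = A.conj().T                      # the adjoint: conjugate transpose

bra = np.array([1 + 1j, 3])             # <psi| as a row vector
print(bra @ A_dag)                      # -> [1.+2.j  1.+1.j]

# The symmetry the text describes: the bra of A|psi> is <psi| A-dagger.
psi = bra.conj()                        # the ket |psi> from which the bra came
assert np.allclose((A @ psi).conj(), bra @ A_dag)
```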
7.12.3

The reason we added adjoint operators into the mix was to supply the final key to the process of converting any combination of state space kets into bras, and any combination of bras into kets. Say we desire to convert the ket expression

$c\,A\,|\psi\rangle \;+\; d\,\langle + | \phi\rangle\;|\psi\rangle$

into its bra counterpart. The rules will guide us, and they work for expressions far more complex with equal ease.

We state the rules, then we will try them out on this expression. This calls for a new trait.

7.12.4 The Adjoint Conversion Rules

- The terms of a sum can be (but don't have to be) left in the same order.
- The order of factors in a product is reversed.
- Kets are converted to bras (i.e., take the adjoints).
- Bras are converted to kets (i.e., take the adjoints).
- Operators are converted to their adjoints.
- Scalars are converted to their complex conjugates.
- (Covered by the above, but stated separately anyway:) Inner products are reversed.
- When done (for readability only), rearrange each product so that the scalars are on the left of the vectors.

If we apply the adjoint conversion rules, except for the readability step, to the above combination, we get

$\langle\psi|\,A^\dagger\, c^* \;+\; \langle\psi|\;\langle\phi | +\rangle\; d^*\,,$

which we rearrange to

$c^*\,\langle\psi|\,A^\dagger \;+\; d^*\,\langle\phi | +\rangle\,\langle\psi|\,.$

You'll get fast at this with practice. I don't want to spend any more real estate on the topic, since we don't apply it very much in our first course, CS 83A, but here are a couple of exercises that will take care of any lingering urges.

[Exercise. Use the rules to convert the resulting bra of the last example back into a ket. Confirm that you get the ket we started with.]

[Exercise. Create a wild ket expression consisting of actual literal scalars, matrices and column vectors. Use the rules to convert it to a bra. Then use the same rules to convert the bra back to a ket. Confirm that you get the ket you started with.]
7.13
Expectation Values
Say we have modeled some physical quantum system, S, with a Hilbert space, H. Imagine, further, that we want to study some observable, A, that has (all non-degenerate) eigenvalues $\{a_k\}$ with corresponding eigenkets $\{\,|u_k\rangle\,\}$. Most importantly, we assume that we can prepare many identical copies of S, all in the same state $|\psi\rangle$. (We did this very thing by selecting only z-up electrons in a Stern-Gerlach-like apparatus, for example.) We now look at our state expanded along the A eigenbasis,

$|\psi\rangle \;=\; \sum_k c_k\,|u_k\rangle\,.$

How do the amplitudes, $c_k$, and their corresponding probabilities, $|c_k|^2$, make themselves felt by us human experimenters?

The answer to this question starts by taking many repeated measurements of the observable A on these many identical states $|\psi\rangle$ and recording our results.

[Exercise. Explain why we can't get the same results by repeating the A measurements on a single system S in state $|\psi\rangle$.]
7.13.1

We'll take a large number, N, of measurements, record them, and start doing elementary statistics on the results. We label the measurements we get using

$m_j \;\equiv\; \text{the } j\text{th measurement of } A\,,$

and form their average,

$\overline{m} \;=\; \frac{1}{N}\sum_{j=1}^{N} m_j\,.$

If we take a large enough N, what do we expect this average to be? The answer comes from the statistical axiom called the law of large numbers, which says that this value will approach the expectation value, $\mu$, as $N \to \infty$; that is,

$\lim_{N\to\infty}\; \overline{m} \;=\; \mu\,.$

This is good and wonderful, but I have not yet defined the expectation value $\mu$. Better do that, fast.

[Note. I should really have labeled $\overline{m}$ with N, as in $\overline{m}_N$, to indicate that each average depends on the number of measurements taken, as we are imagining that we can do the experiment with larger and larger N. But you understand this without the extra notation.]
7.13.2 The Expectation Value

We define the expectation value to be the probability-weighted sum of the possible measurement outcomes,

$\mu \;\equiv\; \sum_k |c_k|^2\, a_k\,.$

In case you don't see why this has the feeling of an expectation value (something we might expect from a typical measurement, if we were forced to place a bet), read it in English:

The first measurable value times the probability of getting that value
plus
the second measurable value times the probability of getting that value
plus
... .

In physics, rather than using the Greek letter $\mu$, the notation for expectation value focuses attention on the observable we are measuring, A:

$\langle A \rangle_{|\psi\rangle} \;\equiv\; \sum_k |c_k|^2\, a_k\,.$

Fair Warning. In quantum mechanics, you will often see the expectation value of the observable A written without the subscript $|\psi\rangle$, as

$\langle A \rangle\,,$

but this doesn't technically make sense. There is no such thing as an expectation value for an observable that applies without some assumed state; you must know which $|\psi\rangle$ has been prepared prior to doing the experiment. If this is not obvious to you, look up at the definition one more time: we don't have any $c_k$ to use in the formula unless there is a $|\psi\rangle$ in the room, because

$|\psi\rangle \;=\; \sum_k c_k\,|u_k\rangle\,.$

When authors suppress the subscript state on the expectation value, it's usually because the context strongly implies the state, or the state is explicitly described earlier and applies until further notice.

Calculating an expectation value tells us one way we can use the amplitudes. This, in turn, acts as an approximation of the average, $\overline{m}$, of a set of experimental measurements on multiple systems in the identical state.
7.13.3

It seems like we should be done with this section. We have a formula for the expectation value, $\langle A \rangle_{|\psi\rangle}$, so what else is there to do? It turns out that computing that sum isn't always as easy or efficient as computing the value a different way.

7.13.4 The Expectation Value Theorem

The expectation value of A in the state $|\psi\rangle$ is given by

$\langle A \rangle_{|\psi\rangle} \;=\; \langle\, \psi \,|\, A \,|\, \psi \,\rangle\,.$

The A on the RHS can be thought of either as the operator representing A or as the matrix for that operator. The expression can be organized in various ways, all of which result in the same real number. For instance, we can first compute $A\,|\psi\rangle$ and then take the inner product,

$\langle\psi|\;\big( A\,|\psi\rangle \big)\,,$

or we can first apply A to the bra $\langle\psi|$, and then dot it with the ket,

$\big( \langle\psi|\,A \big)\;|\psi\rangle\,.$

If you do it this way, be careful not to take the adjoint of A. Just because we apply an operator to a bra does not mean we have to take its Hermitian conjugate. The formula says to use A, not $A^\dagger$, regardless of which vector we feed it.

[Exercise. Prove that the two interpretations of $\langle\,\psi\,|\,A\,|\,\psi\,\rangle$ are equal by expressing everything in component form with respect to any basis. Hint: it's just (a row vector) $\times$ (a matrix) $\times$ (a column vector), so multiply it out both ways.]
Proof of the Expectation Value Theorem. This is actually one way to prove the last exercise nicely. Express everything in the A-basis (i.e., the eigenkets of A, which Trait #4 tells us form a basis for the state space H). We already know that

$|\psi\rangle \;=\; \sum_k c_k\,|u_k\rangle \;=\; \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_n \end{pmatrix}_{A\text{-basis}}\,,$

which means (by our adjoint conversion rules)

$\langle\psi| \;=\; \sum_k c_k^{\,*}\,\langle u_k| \;=\; \big(\,c_1^*,\; c_2^*,\; c_3^*,\; \ldots,\; c_n^*\,\big)_{A\text{-basis}}\,.$

Finally, what's A in its own eigenbasis? We know that any basis vector expressed in that basis's coordinates has a preferred-basis look, $(1, 0, 0, 0, \ldots)^t$, $(0, 1, 0, 0, \ldots)^t$, etc. To that, add the definition of an eigenvector and eigenvalue,

$M\,\mathbf{u} \;=\; a\,\mathbf{u}\,,$

and you will conclude that, in its own eigenbasis, the matrix for A is 0 everywhere except along its diagonal, which holds the eigenvalues:

$A \;=\; \begin{pmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{pmatrix}_{A\text{-basis}}\,.$

[Oops. I just gave away the answer to one of today's exercises.] We now have all our players in coordinate form, so

$\langle\,\psi\,|\,A\,|\,\psi\,\rangle \;=\; \big(c_1^*, c_2^*, \ldots, c_n^*\big) \begin{pmatrix} a_1 & & \\ & \ddots & \\ & & a_n \end{pmatrix} \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} \;=\; \big(c_1^*, c_2^*, \ldots, c_n^*\big)\begin{pmatrix} a_1 c_1 \\ \vdots \\ a_n c_n \end{pmatrix} \;=\; \sum_k |c_k|^2\,a_k\,. \qquad\text{QED}$

7.13.5 Examples

I'll do a few examples to demonstrate how this looks in our 2-dimensional world.
The Expectation Value of Sz Given the State |+⟩

This is a great sanity check, since we know from Trait #7 (the fifth postulate of QM) that $S_z$ will always report a $+\hbar/2$ with certainty if we start in that state. Let's confirm it:

$\langle +\,|\,S_z\,|\,+\rangle \;=\; (1,\;0)\;\frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} \;=\; +\frac{\hbar}{2}\,.$
That was painless. Notice that this result is weaker than what we already knew
from Trait #7. This is telling us that the average reading will approach the (+)
eigenvalue in the long run, but in fact every measurement will be (+).
[Exercise. Show that the expectation value $\langle -\,|\,S_z\,|\,-\rangle = -\hbar/2$.]
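Here is a minimal numpy check of the theorem and of this example; to keep the numbers clean, the matrices are written in units of $\hbar/2$ (an assumption made only for the sketch), and the test state is an arbitrary invention.

```python
import numpy as np

Sz = np.array([[1, 0],
               [0, -1]])                 # S_z in units of hbar/2
plus  = np.array([1, 0])                 # |+>
minus = np.array([0, 1])                 # |->

expval = lambda state, obs: state.conj() @ obs @ state
print(expval(plus, Sz))                  # -> +1, i.e. +hbar/2
print(expval(minus, Sz))                 # -> -1, i.e. -hbar/2

# The theorem: <psi|A|psi> equals sum_k |c_k|^2 a_k in A's eigenbasis.
psi = np.array([0.6, 0.8j])              # a normalized example state
assert np.isclose(expval(psi, Sz), 0.6**2 * (+1) + 0.8**2 * (-1))
```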
The Expectation Value of Sx Given the State |−⟩x

Once again, we know the answer should be $-\hbar/2$, because we're starting in an eigenstate of $S_x$, but we will do the computation in the z-basis, which involves a wee bit more arithmetic and will serve to give us some extra practice:

$_x\langle -\,|\,S_x\,|\,-\rangle_x \;=\; \frac{1}{\sqrt2}\,(1,\;-1)\;\frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\;\frac{1}{\sqrt2}\begin{pmatrix} 1 \\ -1 \end{pmatrix} \;=\; \frac{\hbar}{4}\,(1,\;-1)\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \end{pmatrix} \;=\; \frac{\hbar}{4}\,(1,\;-1)\begin{pmatrix} -1 \\ 1 \end{pmatrix} \;=\; -\frac{\hbar}{2}\,.$

[Exercise. Show that the expectation value $_x\langle +\,|\,S_x\,|\,+\rangle_x = +\hbar/2$.]
The Expectation Value of Sz Given the State |−⟩y

Expanding this state along the z-basis,

$|-\rangle_y \;=\; \frac{|+\rangle - i\,|-\rangle}{\sqrt2}\,,$

we see that the probability for each outcome is 1/2. Over time, half will result in an $S_z$ measurement of $+\hbar/2$, and half will give us $-\hbar/2$, so the average should be close to 0. Let's verify that:

$_y\langle -\,|\,S_z\,|\,-\rangle_y \;=\; \frac{1}{\sqrt2}\,(1,\;+i)\;\frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\;\frac{1}{\sqrt2}\begin{pmatrix} 1 \\ -i \end{pmatrix} \;=\; \frac{\hbar}{4}\,(1,\;+i)\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ -i \end{pmatrix} \;=\; \frac{\hbar}{4}\,(1,\;+i)\begin{pmatrix} 1 \\ i \end{pmatrix} \;=\; 0\,.$
7.14
We are done with all the math and physics necessary to do rigorous quantum computing at the basic level. We'll be adding a few lessons on math as the course progresses, but for now you're ready to dive into the lectures on single qubit systems and early algorithms.

The next chapter is a completion of our quantum mechanics primer that covers the basics of time evolution; that is, it describes the laws by which quantum systems evolve over time. It isn't required for CS 83A, but you'll need it for the later courses in the sequence. You can skip it if you are so inclined or, if you opt in immediately, it'll provide the final postulates and traits that comprise a complete study of quantum formalism, including the all-important Schrödinger equation.

Whether you choose to go directly to the chapter on qubits or first learn the essentials of the time-dependent Schrödinger equation, you're in for a treat. Both subjects provide a sense of purpose and completion to all the hard work we've done up to this point.

Either way, it's about time.
Chapter 8
Time Dependent Quantum Mechanics
8.1
Once a quantum system is put into a known state, it will inevitably evolve over time. This might be a result of the natural laws of physics taking their toll on the undisturbed system, or it may be that we are intentionally subjecting the system to a modifying force such as a quantum logic gate. In either case, the transformation is modeled by a linear operator that is unitary.

We've seen unitary matrices already, and even if you skip this chapter, you're equipped to go on to study qubits and quantum logic, because the matrices and vectors involved do not have a time variable, t, in their respective coefficients; they are all complex constants.

But there are unitary transformations and quantum state vectors in which the elements themselves are functions of time, and it is that kind of evolution we will study today. It could represent the noise inherent in a system, or it may just be a predictable change that results from the kind of hardware we are using.
8.2 The Hamiltonian

8.2.1 Total Energy

It turns out that the time evolution of a quantum system is completely determined by the total energy of the system (potential plus kinetic). There is a name for this quantity: the Hamiltonian. However, the term is used differently in quantum mechanics than in classical mechanics, as we'll see in a moment.

8.2.2

We have to figure out a way to express the energy of a system S using pen and paper so we can manipulate symbols, work problems and make predictions about how the system will look at 5 PM if we know how it started out at 8 AM. It sounds like a daunting task, but the 20th-century physicists gave us a conceptually simple recipe for the process. The first step is to define a quantum operator, a matrix for our state space, that corresponds to the total energy. We'll call this recipe ...
Trait #10 (Constructing the Hamiltonian)

To construct an operator, H, that represents the energy of a quantum system:

1. Express the classical energy, $\mathcal{H}$, formulaically in terms of basic classical concepts (e.g., position, momentum, angular momentum, etc.). You will have $\mathcal{H}$ on the LHS of a formula and all the more basic terms on the RHS.

2. Replace the occurrences of the classical variables on the RHS by their (well known) quantum operators, and replace the symbol for classical energy, $\mathcal{H}$, on the LHS by its quantum symbol H.

Vocabulary. Although the total energy, whether classical or quantum, is a scalar, the Hamiltonian in the quantum case is typically the operator associated with the (measurement of) that scalar, while the classical Hamiltonian continues to be synonymous with the scalar itself. As you can see from reading Trait #10, to distinguish the classical Hamiltonian, which works for macroscopic quantities, from the quantum Hamiltonian, we use $\mathcal{H}$ for classical and H for quantum.

This is a bit hard to visualize until we do it. Also, it assumes we have already been told what operators are associated with the basic physical concepts on the RHS. As it happens, there are very few such basic concepts, and their quantum operators are well known. For example, 3-dimensional positional coordinates (x, y, z) correspond to three simple quantum operators X, Y and Z. Because I have not burdened you with the Hilbert space that models position and momentum, I can't give you a meaningful and short example using those somewhat familiar concepts, but in a moment you'll see every detail in our spin-1/2 Hilbert space, which is all we really care about.
8.3
8.3.1
in relation to the magnetic field. (You may challenge that I forgot to account for the rotational kinetic energy, but an electron has no spatial extent, so there is no classical moment of inertia, and therefore no rotational kinetic energy.) We want to build a classical energy equation, so we treat spin as a classical 3-dimensional vector representing the intrinsic angular momentum, $(s_x, s_y, s_z)^t$. A dot product between this vector and the magnetic field vector, B, expresses this potential energy and yields the following classical Hamiltonian,

$\mathcal{H} \;=\; -\gamma\, \mathbf{B}\cdot\mathbf{S}\,,$

where $\gamma$ is a scalar known by the impressive name gyromagnetic ratio, whose value is not relevant at the moment. (We are temporarily viewing the system as if it were classical in order to achieve step 1 in Trait #10, but please understand that it already has one foot in the quantum world simply by the inclusion of the scalar $\gamma$. Since scalars don't affect us, this apparent infraction doesn't disturb the process.)

Defining the z-Direction. The dot product only cares about the relationship between two vectors,

$\mathbf{v}\cdot\mathbf{w} \;=\; \|\mathbf{v}\|\,\|\mathbf{w}\|\,\cos\theta\,,$

where $\theta$ is the angle between them, so we can rotate the pair as a fixed assembly, that is, preserving the angle $\theta$. Therefore, let's establish a magnetic field (with magnitude B) pointing in the +z-direction,

$\mathbf{B} \;=\; B\,\hat{z} \;=\; \begin{pmatrix} 0 \\ 0 \\ B \end{pmatrix}\,,$

and let the spin vector, S, go along for the rotational ride. This does not produce a unique direction for the spin, but we only care about the polar angle $\theta$, which we have constrained to remain unchanged. Equivalently, we can define the z-direction to be wherever our B field points. Either way, we get a very neat simplification.

The classical spin has well-defined real-valued components $s_x$, $s_y$ and $s_z$ (not operators yet),

$\mathbf{S} \;=\; \begin{pmatrix} s_x \\ s_y \\ s_z \end{pmatrix}\,,$

and we use this vector to evaluate the dot product above. Substituting gives

$\mathcal{H} \;=\; -\gamma\begin{pmatrix} 0 \\ 0 \\ B \end{pmatrix}\cdot\begin{pmatrix} s_x \\ s_y \\ s_z \end{pmatrix} \;=\; -\gamma B\, s_z\,.$

This completes step 1 in Trait #10, and I can finally show you how easy it is to do step 2.
8.3.2 A Quantum Hamiltonian

Step 2 of the recipe replaces the classical component $s_z$ by its quantum operator $S_z$, and the classical $\mathcal{H}$ by H, producing the quantum Hamiltonian

$H \;=\; -\gamma B\, S_z\,.$
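A minimal numpy sketch of this Hamiltonian, assuming $\hbar = \gamma = B = 1$ (arbitrary illustrative units, my choice, not the text's):

```python
import numpy as np

# Build H = -gamma*B*S_z and read off the energy eigenpairs.
hbar, gamma, B = 1.0, 1.0, 1.0
Sz = (hbar / 2) * np.array([[1, 0],
                            [0, -1]])
H = -gamma * B * Sz

energies, kets = np.linalg.eigh(H)       # eigh: H is Hermitian
print(energies)                          # -> [-gamma*B*hbar/2, +gamma*B*hbar/2]
print(kets)                              # columns are |+> and |->, respectively
```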
8.4
8.4.1
Because the operator H is a scalar multiple of our well-studied $S_z$, we can easily find its eigenvectors (also known as the energy eigenkets) and eigenvalues; the eigenvectors are the same $|\pm\rangle$, and to get their respective eigenvalues we simply multiply those of $S_z$ by $-\gamma B$. We'd better prove this, as it might not be obvious to everyone. We know that the eigenvalue-eigenvector pairs for $S_z$ are given by

$S_z\,|+\rangle \;=\; +\frac{\hbar}{2}\,|+\rangle\,, \qquad |+\rangle \;\leftrightarrow\; +\frac{\hbar}{2}\,,$

and

$S_z\,|-\rangle \;=\; -\frac{\hbar}{2}\,|-\rangle\,, \qquad |-\rangle \;\leftrightarrow\; -\frac{\hbar}{2}\,,$

but we now know that $S_z$ and H bear the scalar relationship

$S_z \;=\; -\frac{1}{\gamma B}\,H\,.$

Substituting this into the $S_z$ eigenvector expressions gives

$-\frac{1}{\gamma B}\,H\,|+\rangle \;=\; +\frac{\hbar}{2}\,|+\rangle \qquad\text{and}\qquad -\frac{1}{\gamma B}\,H\,|-\rangle \;=\; -\frac{\hbar}{2}\,|-\rangle\,,$

or

$H\,|+\rangle \;=\; -\frac{\gamma B\,\hbar}{2}\,|+\rangle \qquad\text{and}\qquad H\,|-\rangle \;=\; +\frac{\gamma B\,\hbar}{2}\,|-\rangle\,.$

But this says that H has the same two eigenvectors as $S_z$, only they are associated with different eigenvalues,

$|+\rangle \;\leftrightarrow\; -\frac{\gamma B\,\hbar}{2} \qquad\text{and}\qquad |-\rangle \;\leftrightarrow\; +\frac{\gamma B\,\hbar}{2}\,.$
8.4.2 Allowable Energies

Let's take a short side-trip to give the crucial postulate that expresses, in full generality, how any quantum state evolves based on the system's Hamiltonian operator, H.
8.5
8.5.1
We now allow the state to carry a time dependence, written $|\psi(t)\rangle$, and expand it along an observable's eigenbasis with time-dependent coefficients,

$|\psi(t)\rangle \;=\; \sum_{k=1}^{n} c_k(t)\,|u_k\rangle\,.$

Everything we did earlier still holds if we freeze time at some $t_0$. We would then evaluate the system at that instant as if it were not time-dependent and we were working with the fixed state

$|\psi_0\rangle \;\equiv\; |\psi(t_0)\rangle \;=\; \sum_{k=1}^{n} c_k^0\,|u_k\rangle\,, \qquad c_k^0 \;\equiv\; c_k(t_0)\,.$

To get to time $t = t_0$ (or any future $t > 0$), though, we need to know the exact formula for those coefficients, $c_k(t)$, so we can plug in $t = t_0$ and produce this fixed state. That's where the sixth postulate of quantum mechanics comes in.
Trait #12 (The Time-Dependent Schrödinger Equation)

The time evolution of a state vector is governed by the Schrödinger equation

$i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle \;=\; H(t)\,|\psi(t)\rangle\,.$

Notice that the Hamiltonian can change from moment to moment, although it does not always do so. For our purposes, it is not time dependent, so we get a simpler form of the Schrödinger equation,

$i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle \;=\; H\,|\psi(t)\rangle\,.$
8.5.2
You are ready to solve your first Schrödinger equation. We consider the system of a stationary electron in a uniform z-up directed B-field, whose Hamiltonian we have already cracked (meaning we solved the time-independent Schrödinger equation). Because it's so exciting, I'm going to summarize the set-up for you. The players are

- our state vector, $|\psi(t)\rangle$, with its two expansion coefficients, the unknown functions $c_1(t)$ and $c_2(t)$,

$|\psi(t)\rangle \;=\; c_1(t)\,|+\rangle + c_2(t)\,|-\rangle \;=\; \begin{pmatrix} c_1(t) \\ c_2(t) \end{pmatrix}_z\,,$

- the Hamiltonian,

$H \;=\; -\gamma B\,\frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\,,$

- and the Schrödinger equation,

$i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle \;=\; H\,|\psi(t)\rangle\,.$
This is equivalent to

$i\hbar\begin{pmatrix} \frac{d}{dt}c_1 \\[2pt] \frac{d}{dt}c_2 \end{pmatrix} \;=\; \begin{pmatrix} -\gamma B\,\frac{\hbar}{2}\,c_1 \\[2pt] +\gamma B\,\frac{\hbar}{2}\,c_2 \end{pmatrix}\,,$

or

$\frac{d}{dt}c_1 \;=\; \frac{i\gamma B}{2}\,c_1 \qquad\text{and}\qquad \frac{d}{dt}c_2 \;=\; -\frac{i\gamma B}{2}\,c_2\,.$

These are both instances of the elementary differential equation

$\frac{dx}{dt} \;=\; kx\,,$

whose solutions are

$x(t) \;=\; C\,e^{kt}\,,$

one solution for each complex constant C. (If you didn't know that, you can verify it now by differentiating the last equation.) The constant C is determined by the initial condition at time $t = 0$,

$C \;=\; x(0)\,.$

In our case the two constants k are

$k \;=\; \pm\,\frac{i\gamma B}{2}\,,$

so

$c_1(t) \;=\; C_1\,e^{\,it(\gamma B/2)} \qquad\text{and}\qquad c_2(t) \;=\; C_2\,e^{-it(\gamma B/2)}\,.$
Say our electron starts out at $t = 0$ in the state

$|\psi(0)\rangle \;=\; \alpha_0\,|+\rangle + \beta_0\,|-\rangle\,,$

our starting state. In other words, the initial conditions for our two equations are

$c_1(0) \;=\; \alpha_0 \qquad\text{and}\qquad c_2(0) \;=\; \beta_0\,,$

where we are saying $\alpha_0$ and $\beta_0$ are the two scalar coefficients of the state $|\psi\rangle$ at time $t = 0$. That gives us the constants

$C_1 \;=\; \alpha_0\,, \qquad C_2 \;=\; \beta_0\,,$

so

$c_1(t) \;=\; \alpha_0\,e^{\,it(\gamma B/2)} \qquad\text{and}\qquad c_2(t) \;=\; \beta_0\,e^{-it(\gamma B/2)}\,,$

and the fully evolved state is

$|\psi(t)\rangle \;=\; \alpha_0\,e^{\,it(\gamma B/2)}\,|+\rangle \;+\; \beta_0\,e^{-it(\gamma B/2)}\,|-\rangle\,.$
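A minimal check of this closed-form solution against direct evolution by the propagator $e^{-iHt/\hbar}$, assuming scipy is available and using $\hbar = \gamma = B = 1$ with an invented start state:

```python
import numpy as np
from scipy.linalg import expm

hbar, gamma, B = 1.0, 1.0, 1.0
H = -gamma * B * (hbar / 2) * np.array([[1, 0],
                                        [0, -1]])
alpha0, beta0 = 0.6, 0.8j                # arbitrary normalized start state
psi0 = np.array([alpha0, beta0])

for t in [0.3, 1.7, 4.0]:
    exact = np.array([alpha0 * np.exp(1j * gamma * B * t / 2),
                      beta0  * np.exp(-1j * gamma * B * t / 2)])
    evolved = expm(-1j * H * t / hbar) @ psi0   # propagator applied to psi(0)
    assert np.allclose(evolved, exact)
```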
We pause to consider how this can be generalized to any situation (in the case of finite or enumerable eigenvalues). I'll introduce an odd notation that physicists have universally adopted, namely that the eigenket associated with eigenvalue a will be that same a inside the ket symbol, i.e., $|a\rangle$.

1. We first solved the system's Hamiltonian (the time-independent Schrödinger equation, if you like) to get the allowable energies, $\{E_k\}$, and their associated eigenkets, $\{\,|E_k\rangle\,\}$.

2. Next, we expanded the initial state, $|\psi\rangle$, along the energy basis,

$|\psi\rangle \;=\; \sum_k c_k\,|E_k\rangle\,.$

3. Finally, the evolved coefficients are obtained by attaching a phase factor to each term,

$c_k(t) \;=\; c_k\,e^{-itE_k/\hbar}\,.$
8.5.3
Stationary States
Notice what this implies if our initial state happens to be one of the energy eigenstates. In our spin-1/2 system, that would be either $|+\rangle$ or $|-\rangle$. Take $|\psi\rangle = |+\rangle$. The result is

$|\psi(t)\rangle \;=\; e^{\,it(\gamma B/2)}\,|+\rangle\,.$

Introducing the shorthand $\omega_t \equiv \gamma B\,t/2$, the time evolution merely causes a phase factor of $e^{i\omega_t}$ to appear. But remember that our state space does not differentiate between scalar multiples of state vectors, so

$|\psi(t)\rangle \;=\; e^{i\omega_t}\,|+\rangle$

is the same state as $|+\rangle$ for every t. (The same goes for $|-\rangle$.) Its one and only expansion coefficient changes by a factor of $e^{i\omega_t}$, whose square magnitude is

$c_1(t)\,c_1(t)^* \;=\; e^{i\omega_t}\,e^{-i\omega_t} \;=\; e^0 \;=\; 1$

regardless of t. This is big enough to call a trait.
Trait #13 (Stationary States)
An eigenstate of the Hamiltonian operator evolves in such a way that its measurement
outcome does not change; it remains in the same eigenstate.
For this reason, eigenstates are often called stationary states of the system.
8.5.4
Does this mean that we can throw away the phase factors $e^{i\omega_t}$ for times $t \neq 0$? It may surprise you that the answer is a big fat no. Let's take the example of an initial state that is not an energy eigenket. One of the y-eigenstates will suffice:

$|\psi\rangle \;=\; |-\rangle_y \;=\; \frac{|+\rangle - i\,|-\rangle}{\sqrt2}\,,$

whose coefficients are

$c_1 \;=\; \frac{1}{\sqrt2} \qquad\text{and}\qquad c_2 \;=\; \frac{-i}{\sqrt2}\,.$
Allow this state to evolve for a time t according to the Schrödinger equation:

$|\psi(t)\rangle \;=\; \frac{e^{\,it(\gamma B/2)}\,|+\rangle \;-\; i\,e^{-it(\gamma B/2)}\,|-\rangle}{\sqrt2}\,.$

Factoring out the ignorable global phase $e^{\,it(\gamma B/2)}$ leaves the equivalent state

$\frac{|+\rangle \;-\; i\,e^{-it(\gamma B)}\,|-\rangle}{\sqrt2}\,,$

whose coefficients are

$c_1(t) \;=\; \frac{1}{\sqrt2} \qquad\text{and}\qquad c_2(t) \;=\; \frac{-i\,e^{-it(\gamma B)}}{\sqrt2}\,.$

Comparing this to the original state, we see that while the coefficient of $|+\rangle$ never changes, the coefficient of $|-\rangle$ changes depending on t. To dramatize this, consider the time $t_0 = \pi/(\gamma B)$ (an impossibly tiny fraction of a second if you were to compute it using real values for $\gamma$ and a typical B-field):

$|\psi(t_0)\rangle \;=\; \frac{|+\rangle - i\,e^{-i\pi}\,|-\rangle}{\sqrt2} \;=\; \frac{|+\rangle - i\,(-1)\,|-\rangle}{\sqrt2} \;=\; \frac{|+\rangle + i\,|-\rangle}{\sqrt2} \;=\; |+\rangle_y\,.$

We started out in state $|-\rangle_y$, and a fraction of a second later found ourselves in state $|+\rangle_y$. That's a pretty dramatic change. If we were to measure $S_y$ initially we would, with certainty, receive a $-\hbar/2$ reading. Wait a mere $\pi/(\gamma B)$ seconds, and a measurement would result, with equal certainty, in the opposite value, $+\hbar/2$.
8.5.5

We've set the stage for a technique that applies throughout quantum mechanics. We'll call it a trait.

Trait #14 (Evolution of Any Observable)

To determine the time-evolved probability of the outcomes of any observable, A, starting in the initial state $|\psi\rangle$:

1. compute the energy eigenvalues and eigenkets for the system, $\{\,E_k \leftrightarrow |E_k\rangle\,\}$, by solving the time-independent Schrödinger equation, $H\,|E_k\rangle = E_k\,|E_k\rangle$;

2. expand the initial state along the energy eigenbasis, $|\psi\rangle = \sum_k c_k\,|E_k\rangle$;

3. attach the phase factors to get the evolved state, $|\psi(t)\rangle = \sum_k c_k\,e^{-itE_k/\hbar}\,|E_k\rangle$;

4. dot $|\psi(t)\rangle$ with the A-eigenket of the outcome you care about to get that outcome's amplitude; and

5. take the square magnitude of the amplitude to get that outcome's probability at time t.

For our electron, steps 1 through 3 give the state we already computed,

$|\psi(t)\rangle \;=\; \frac{|+\rangle - i\,e^{-it(\gamma B)}\,|-\rangle}{\sqrt2}\,.$

(I used $|\psi(t)\rangle$, rather than the somewhat confusing $|-(t)\rangle_y$, to designate the state's dependence on time.)

That's the official answer to the question "how does $|-\rangle_y$ evolve?", but to see how we would use this information, we have to pick an observable we are curious about and apply Trait #14, steps 4 and 5.
Let's ask about $S_y$, the y-projection of spin, specifically the probability of measuring a $|+\rangle_y$ at time t. Step 4 says to dot the time-evolved state with the vector $|+\rangle_y$, so the amplitude is

$c_{+y} \;=\; {}_y\langle + \,|\, \psi(t)\rangle\,.$

I'll help you read it: the left vector of the inner product is the $+\hbar/2$ eigenket of the operator $S_y$, namely $|+\rangle_y$, independent of time. The right vector of the inner product is our starting state, $|-\rangle_y$, but evolved to a later time, t.

Because everything is expressed in terms of the z-basis, we have to be sure we stay in that realm. The z-coordinates of $|+\rangle_y$ are obtained from our familiar

$|+\rangle_y \;=\; \frac{|+\rangle + i\,|-\rangle}{\sqrt2} \;=\; \begin{pmatrix} 1/\sqrt2 \\ i/\sqrt2 \end{pmatrix}_z\,.$

I added the subscript z on the RHS to emphasize that we are displaying the vector $|+\rangle_y$ in z-coordinates, as usual. If we are to use this on the left side of a complex inner product, we have to take the conjugate of all components. This is easy to see in coordinate form,

$_y\langle +| \;=\; \big(\, 1/\sqrt2\,,\; -i/\sqrt2 \,\big)_z\,,$

but let's see how we can avoid looking inside the vector by applying our adjoint conversion rules to the expression defining $|+\rangle_y$ to create a bra for this vector. I'll give you the result, and you can supply the (very few) details as an ...

[Exercise. Show that

$_y\langle +| \;=\; \frac{\langle +| - i\,\langle -|}{\sqrt2}\,.$ ]
Getting back to the computation of the amplitude, $c_{+y}$, substitute the computed values into the inner product to get

$c_{+y} \;=\; \left(\frac{\langle +| - i\,\langle -|}{\sqrt2}\right)\left(\frac{|+\rangle - i\,e^{-it(\gamma B)}\,|-\rangle}{\sqrt2}\right) \;=\; \frac{\langle + | +\rangle - e^{-i\gamma B t}\,\langle - | -\rangle}{2} \;=\; \frac{1 - e^{-i\gamma B t}}{2}\,.$

(The last two equalities made use of the orthonormality of any observable's eigenbasis (Trait #4).)

Finally, step 5 says that the probability of measuring $S_y = +\hbar/2$ at any time t is

$|c_{+y}|^2 \;=\; c_{+y}\,c_{+y}^{\,*} \;=\; \left(\frac{1 - e^{-i\gamma B t}}{2}\right)\left(\frac{1 - e^{\,i\gamma B t}}{2}\right)\,,$

where we used the fact (see the complex number lecture) that, for real $\theta$,

$\big(e^{i\theta}\big)^* \;=\; e^{-i\theta}\,.$

Let's simplify the notation by setting $\theta = \gamma B\,t$ and expand:

$|c_{+y}|^2 \;=\; \frac{2 - e^{i\theta} - e^{-i\theta}}{4} \;=\; \frac12 \;-\; \frac12\left(\frac{e^{i\theta} + e^{-i\theta}}{2}\right) \;=\; \frac12 \;-\; \frac12\cos\theta\,.$

In other words,

$P\big(\,S_y = +\tfrac{\hbar}{2} \text{ at time } t\,\big) \;=\; \frac12 \;-\; \frac12\cos(\gamma B\,t)\,.$
As you can see, the probability of measuring an up-y state oscillates between 0 and 1 sinusoidally over time. Note that this is consistent with our initial state at time $t = 0$: $\cos 0 = 1$, so the probability of measuring $+\hbar/2$ is zero; it had to be, since we started in state $|-\rangle_y$, and when you are in an eigenstate ($|-\rangle_y$), the measurement of the observable corresponding to that eigenstate ($S_y$) is guaranteed to be the eigenstate's eigenvalue ($-\hbar/2$). Likewise, if we test precisely at $t = \pi/(\gamma B)$, we get $\frac12 - \frac12(-1) = 1$, a certainty that we will detect $+\hbar/2$, the $|+\rangle_y$ eigenvalue.

We can stop the clock at times between those two extremes to get any probability we like.

[Exercise. What is the probability of measuring $S_y(t) = +\hbar/2$ at the (chronologically ordered) times

(a) $t = \pi/(6\gamma B)$,
(b) $t = \pi/(4\gamma B)$,
(c) $t = \pi/(3\gamma B)$,
(d) $t = \pi/(2\gamma B)$? ]

[Exercise. Do the same analysis to get the probability that $S_y$, measured at time t, will be $-\hbar/2$. Confirm that at any time t, the two probabilities add to 1.]
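A minimal numeric sketch of this oscillation, assuming scipy and the same illustrative units as before ($\hbar = \gamma = B = 1$):

```python
import numpy as np
from scipy.linalg import expm

# Reproduce P(S_y -> +hbar/2 at time t) = 1/2 - 1/2 cos(gamma*B*t).
gamma, B = 1.0, 1.0
H = -gamma * B * 0.5 * np.array([[1, 0], [0, -1]])
minus_y = np.array([1, -1j]) / np.sqrt(2)     # starting state |->_y
plus_y  = np.array([1,  1j]) / np.sqrt(2)     # eigenket we test against

for t in np.linspace(0, 2 * np.pi, 9):
    psi_t = expm(-1j * H * t) @ minus_y
    prob = abs(plus_y.conj() @ psi_t) ** 2    # Trait #14, steps 4 and 5
    assert np.isclose(prob, 0.5 - 0.5 * np.cos(gamma * B * t))
```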
8.6
Larmor Precession
We complete this lecture by combining expectation values with time evolution to get a famous result which tells us how we can relate the 3-dimensional real vector of a classical angular momentum to the quantum spin-1/2 electron, whose state is a 2-dimensional complex vector.
8.6.1
We assume that we have many electrons in the same initial spin state, and we look at that state's expansion along the z-basis,

$|\psi\rangle \;=\; c_1\,|+\rangle + c_2\,|-\rangle\,, \qquad |c_1|^2 + |c_2|^2 \;=\; 1\,.$

We let the state (of any one of these systems, since they are all the same) evolve for a time t. We have already solved the Schrödinger equation and found that the evolved state at that time will be

$|\psi(t)\rangle \;=\; c_1\,e^{\,it(\gamma B/2)}\,|+\rangle \;+\; c_2\,e^{-it(\gamma B/2)}\,|-\rangle \;=\; \begin{pmatrix} c_1\,e^{\,it(\gamma B/2)} \\ c_2\,e^{-it(\gamma B/2)} \end{pmatrix}\,.$

Next, write the two constant amplitudes in polar form,

$c_1 \;=\; c\,e^{i\theta_1} \qquad\text{and}\qquad c_2 \;=\; s\,e^{i\theta_2}\,,$

with non-negative real moduli c and s, so that

$|\psi(t)\rangle \;=\; \begin{pmatrix} c\,e^{i\theta_1}\,e^{\,it(\gamma B/2)} \\ s\,e^{i\theta_2}\,e^{-it(\gamma B/2)} \end{pmatrix}\,.$

Then we multiply by the unit scalar $e^{-i\left(\frac{\theta_1 + \theta_2}{2}\right)}$ to get a more balanced equivalent state,

$|\psi(t)\rangle \;=\; \begin{pmatrix} c\,e^{\,i\left(\frac{\theta_1 - \theta_2}{2}\right)}\,e^{\,it(\gamma B/2)} \\ s\,e^{-i\left(\frac{\theta_1 - \theta_2}{2}\right)}\,e^{-it(\gamma B/2)} \end{pmatrix}\,.$

Define

$\omega \;\equiv\; \gamma B \qquad\text{and}\qquad \phi_0 \;\equiv\; \theta_1 - \theta_2$

to get the simple and balanced Hilbert space representative of our state,

$|\psi(t)\rangle \;=\; \begin{pmatrix} c\,e^{\,i\phi_0/2}\,e^{\,i\omega t/2} \\ s\,e^{-i\phi_0/2}\,e^{-i\omega t/2} \end{pmatrix} \;=\; \begin{pmatrix} c\,e^{\,i\,(\omega t + \phi_0)/2} \\ s\,e^{-i\,(\omega t + \phi_0)/2} \end{pmatrix}\,.$

We get a nice simplification by using the notation

$\Omega(t) \;\equiv\; \frac{\omega t + \phi_0}{2}$

to express our evolving state very concisely as

$|\psi(t)\rangle \;=\; \begin{pmatrix} c\,e^{\,i\,\Omega(t)} \\ s\,e^{-i\,\Omega(t)} \end{pmatrix}\,.$
A Convenient Angle

There is one last observation before we start to compute. Since $|\psi\rangle$ is normalized,

$|c|^2 + |s|^2 \;=\; 1\,,$

the amplitudes c and s have moduli (absolute values) that are consistent with the sine and cosine of some angle. Furthermore, we can name that angle anything we like. Call it $\theta/2$, for reasons that will become clear in about 60 seconds. [Start of 60 seconds.]

We have proclaimed the angle to be such that

$c \;=\; \cos\frac{\theta}{2} \qquad\text{and}\qquad s \;=\; \sin\frac{\theta}{2}\,.$

The double-angle identities then give

$2\,c\,s \;=\; 2\sin\frac{\theta}{2}\cos\frac{\theta}{2} \;=\; \sin\theta \qquad\text{and}\qquad c^2 - s^2 \;=\; \cos^2\frac{\theta}{2} - \sin^2\frac{\theta}{2} \;=\; \cos\theta\,.$

By letting $\theta/2$ be the common angle that we used to represent c and s (instead of, say, $\theta$), we ended up with plain old $\theta$ on the RHS of these formulas, which is the form we'll need. [End of 60 seconds.]

Although we'll start out using c and s for the moduli of $|\psi\rangle$'s amplitudes, we'll eventually want to make these substitutions when the time comes. The angle $\theta$ will have a geometric significance.
8.6.2
We now compute the three expectation values at time t, starting with $S_z$:

$\langle \psi(t) \,|\, S_z \,|\, \psi(t)\rangle \;=\; \big(\, c\,e^{-i\Omega(t)},\;\; s\,e^{\,i\Omega(t)} \,\big)\;\frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} c\,e^{\,i\Omega(t)} \\ s\,e^{-i\Omega(t)} \end{pmatrix} \;=\; \frac{\hbar}{2}\big( c^2 - s^2 \big) \;=\; \frac{\hbar}{2}\cos\theta\,.$

We can draw some quick conclusions from this (and subtler ones later).

1. The expectation value of $S_z$ does not change with time.

2. If $|\psi\rangle = |+\rangle$, i.e., c = 1 and s = 0, the expectation value is $+\hbar/2$, as it must be, since $|+\rangle$ is a stationary state and so always yields a (+) measurement.

3. If $|\psi\rangle = |-\rangle$, i.e., c = 0 and s = 1, the expectation value is $-\hbar/2$, again consistent with $|-\rangle$ being the other stationary state, the one that always yields a (−) measurement.
Next, $S_x$:

$\langle \psi(t) \,|\, S_x \,|\, \psi(t)\rangle \;=\; \big(\, c\,e^{-i\Omega(t)},\;\; s\,e^{\,i\Omega(t)} \,\big)\;\frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} c\,e^{\,i\Omega(t)} \\ s\,e^{-i\Omega(t)} \end{pmatrix} \;=\; \big(\, c\,e^{-i\Omega(t)},\;\; s\,e^{\,i\Omega(t)} \,\big)\;\frac{\hbar}{2}\begin{pmatrix} s\,e^{-i\Omega(t)} \\ c\,e^{\,i\Omega(t)} \end{pmatrix} \;=\; \frac{\hbar}{2}\big( cs\,e^{-2i\Omega(t)} + cs\,e^{\,2i\Omega(t)} \big)\,.$

This is nice, but a slight rearrangement should give you a brilliant idea:

$\langle \psi(t) \,|\, S_x \,|\, \psi(t)\rangle \;=\; cs\,\hbar\left(\frac{e^{\,2i\Omega(t)} + e^{-2i\Omega(t)}}{2}\right) \;=\; cs\,\hbar\,\cos\big(2\,\Omega(t)\big)\,,$

which, after undoing the substitutions for cs and $\Omega(t)$ we set up in the convenient angle section, looks like

$\langle \psi(t) \,|\, S_x \,|\, \psi(t)\rangle \;=\; cs\,\hbar\,\cos(\omega t + \phi_0) \;=\; \frac{\hbar}{2}\,\sin\theta\,\cos(\omega t + \phi_0)\,.$

Once again there are quick sanity checks. If $|\psi\rangle$ is either stationary state, $|+\rangle$ or $|-\rangle$, the factor cs vanishes and $\langle S_x\rangle = 0$. That's as it should be; for example,

$|+\rangle \;=\; \frac{|+\rangle_x + |-\rangle_x}{\sqrt2}\,,$

so we would expect a roughly equal collapse into the $|+\rangle_x$ and $|-\rangle_x$ states, averaging to 0. We've already established that the two kets $|\pm\rangle$ are stationary states of H, so whatever holds at time $t = 0$ holds for all time.
The last of the three is $S_y$:

$\langle \psi(t) \,|\, S_y \,|\, \psi(t)\rangle \;=\; \big(\, c\,e^{-i\Omega(t)},\;\; s\,e^{\,i\Omega(t)} \,\big)\;\frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} c\,e^{\,i\Omega(t)} \\ s\,e^{-i\Omega(t)} \end{pmatrix} \;=\; \big(\, c\,e^{-i\Omega(t)},\;\; s\,e^{\,i\Omega(t)} \,\big)\;\frac{\hbar}{2}\begin{pmatrix} -is\,e^{-i\Omega(t)} \\ \;\,ic\,e^{\,i\Omega(t)} \end{pmatrix}$

$\;=\; \frac{\hbar}{2}\,i\,cs\big( e^{\,2i\Omega(t)} - e^{-2i\Omega(t)} \big) \;=\; -\,cs\,\hbar\,\sin\big(2\,\Omega(t)\big) \;=\; -\,cs\,\hbar\,\sin(\omega t + \phi_0) \;=\; -\,\frac{\hbar}{2}\,\sin\theta\,\sin(\omega t + \phi_0)\,.$

And once more, cs = 0 for the two stationary states, consistent with

$|+\rangle \;=\; \frac{|+\rangle_y + |-\rangle_y}{\sqrt2} \qquad\text{and}\qquad |-\rangle \;=\; \frac{|+\rangle_y - |-\rangle_y}{i\,\sqrt2}\,,$

each an equal mix of the two y-eigenstates, whose measurements average to 0.

Now collect the three results into the vector

$\mathbf{s}(t) \;\equiv\; \begin{pmatrix} \langle S_x \rangle_{|\psi(t)\rangle} \\ \langle S_y \rangle_{|\psi(t)\rangle} \\ \langle S_z \rangle_{|\psi(t)\rangle} \end{pmatrix}\,.$

This $\mathbf{s}(t)$ is a true 3-dimensional (time-dependent) vector whose real coordinates are the three expectation values, $\langle S_x\rangle$, $\langle S_y\rangle$ and $\langle S_z\rangle$, at time t.

In the previous sections we showed that

$\mathbf{s}(t) \;=\; \frac{\hbar}{2}\begin{pmatrix} \sin\theta\,\cos(\omega t + \phi_0) \\ -\sin\theta\,\sin(\omega t + \phi_0) \\ \cos\theta \end{pmatrix}\,.$

If this is not speaking to you, drop the factor of $\hbar/2$ and set $\phi(t) = \omega t + \phi_0$. What we get is the 3-dimensional unit vector

$\begin{pmatrix} \sin\theta\,\cos\phi(t) \\ -\sin\theta\,\sin\phi(t) \\ \cos\theta \end{pmatrix}\,,$

which has constant polar angle $\theta$ and an azimuthal angle that sweeps uniformly around the z-axis.
8.6.3

Let's summarize this as a statement of Larmor precession. For a spin-1/2 electron in state $|\psi(t)\rangle$ within a uniform magnetic field B pointing in the +z direction, the Schrödinger equation tells us that its expectation value vector, $\mathbf{s}(t)$, evolves according to the formula

$\mathbf{s}(t) \;=\; \frac{\hbar}{2}\begin{pmatrix} \sin\theta\,\cos(\omega t + \phi_0) \\ -\sin\theta\,\sin(\omega t + \phi_0) \\ \cos\theta \end{pmatrix}\,,$

where

- $\theta$ is the angle whose half-angle carries the amplitude moduli, $c = \cos\frac{\theta}{2}$ and $s = \sin\frac{\theta}{2}$;

- $\theta/2$ ranges from 0 (when it defines the up-z state, $|+\rangle$) to $\pi/2$ (when it defines the down-z state, $|-\rangle$), allowing $\theta$ to range from 0 to $\pi$ in $\mathbb{R}^3$;

- $\omega$ is the Larmor frequency, defined by the magnitude of the B-field and the constant $\gamma$ (the gyromagnetic ratio), $\omega = \gamma B$; and

- $\phi_0$ is the initial phase, $\phi_0 = \theta_1 - \theta_2$.
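A minimal numeric check of the precession formula, assuming scipy and $\hbar = \gamma = B = 1$ (illustrative units; the cone angle and phase are invented):

```python
import numpy as np
from scipy.linalg import expm

hbar = gamma = B = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]])
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]])
H  = -gamma * B * Sz
omega = gamma * B

theta, phi0 = 1.1, 0.4                      # arbitrary cone angle and phase
psi0 = np.array([np.cos(theta / 2) * np.exp(1j * phi0 / 2),
                 np.sin(theta / 2) * np.exp(-1j * phi0 / 2)])

for t in [0.0, 0.9, 2.5]:
    psi = expm(-1j * H * t / hbar) @ psi0
    s = np.real([psi.conj() @ S @ psi for S in (Sx, Sy, Sz)])
    expected = hbar / 2 * np.array([np.sin(theta) * np.cos(omega * t + phi0),
                                    -np.sin(theta) * np.sin(omega * t + phi0),
                                    np.cos(theta)])
    assert np.allclose(s, expected)         # s(t) sweeps a cone about z
```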
8.7
Chapter 9
The Qubit
$|\psi\rangle \;=\; \alpha\,|0\rangle \;+\; \beta\,|1\rangle$

9.1
All the hard work is done. You have mastered the math, spin-physics and computational quantum mechanics needed to begin doing quantum computer science.

While useful to our understanding of quantum mechanics, the spin-1/2 physical system, S, is no longer useful to us as computer scientists. We will work entirely inside its state space H from this point forward, applying the postulates and traits of quantum mechanics directly to that abstract system, indifferent to the particular S that the Hilbert space H is modeling.

Although in practical terms a quantum bit, or qubit, is a variable superposition of two basis states in H of the form shown at the top of this chapter, formally qubits, as well as classical bits, are actually vector spaces. You read that correctly. A single qubit is not a vector, but an entire vector space. We'll see how that works shortly.

To get oriented, we'll do a quick review of classical bits in the new formalism, then we'll retrace our steps for qubits. Let's jump in.

9.2

9.2.1

In order to study the qubit, let's establish a linguistic foundation by defining the more familiar classical bit.
(The informal approach would now list the usual truth tables for the basic operators on bits: the conjunction $x \wedge y$, the negation $\neg x$ (or $\overline{x}$), and the disjunction $x \vee y$.) We could go on like this to define more operators and their corresponding logic gates. However, it's a little disorganized and will not help you jump to a qubit, so let's try something a little more formal.

Our short formal development of classical logic in this lesson will be restricted to the study of unary operators. We are learning about a single qubit today, which is analogous to one classical bit, and operators that act on only one classical bit, i.e., unary operators. It sounds a little boring, I know, but that's because in classical logic, unary operators are boring (there are only four). But as you are about to see, in the quantum world there are infinitely many different unary operators, all useful.
9.3
9.3.1
You are familiar with $\mathbb{R}^2$, the vector space of ordered pairs of real numbers. I'd like to introduce you to an infinitely simpler vector space, $\mathbf{B} = \mathbb{B}^2$.

The Tiny Field $\mathbb{B}$

In place of the field $\mathbb{R}$ of real numbers, we define a new field of numbers,

$\mathbb{B} \;\equiv\; \{\,0,\; 1\,\}\,.$
That's right, just the set containing two numbers. But we allow the set to have addition, $\oplus$, and multiplication, $\odot$, defined by

$\oplus$ is addition mod-2:
$0 \oplus 0 = 0$, $\;0 \oplus 1 = 1$, $\;1 \oplus 0 = 1$, $\;1 \oplus 1 = 0$.

$\odot$ is ordinary multiplication:
$0 \odot 0 = 0$, $\;0 \odot 1 = 0$, $\;1 \odot 0 = 0$, $\;1 \odot 1 = 1$.

Of course $\oplus$ is nothing other than the familiar XOR operation, although in this context we also get negative mod-2 numbers ($-1 = 1$) and subtraction mod-2 ($0 - 1 = 0 \oplus (-1) = 1$), should we need them.
The Vector Space $\mathbf{B}$

We define $\mathbf{B} = \mathbb{B}^2$ to be the vector space whose scalars come from $\mathbb{B}$ and whose vectors (objects) are ordered pairs of numbers from $\mathbb{B}$. This vector space is so small I can list all of its vectors:

$\mathbf{B} \;=\; \mathbb{B}^2 \;=\; \left\{\; \begin{pmatrix}0\\0\end{pmatrix},\; \begin{pmatrix}0\\1\end{pmatrix},\; \begin{pmatrix}1\\0\end{pmatrix},\; \begin{pmatrix}1\\1\end{pmatrix} \;\right\}\,.$

I'm not going to bother proving that $\mathbf{B}$ obeys all the properties of a vector space, and you don't have to either. But if you are interested, it's a fun ...

[Exercise. Show that $\mathbb{B}$ obeys the properties of a field (multiplicative inverses, distributive properties, etc.) and that $\mathbf{B}$ obeys the properties of a vector space.]
The Mod-2 Inner Product

Not only that, but there is an "inner product,"

$\begin{pmatrix}x_1\\y_1\end{pmatrix} \odot \begin{pmatrix}x_2\\y_2\end{pmatrix} \;=\; x_1 x_2 \,\oplus\, y_1 y_2\,.$

You'll notice something curious about this inner product (which is why I put it in quotes and used the operator $\odot$ rather than $\cdot$). The vector $(1, 1)^t$ is a non-zero vector which is orthogonal to itself. Don't let this bother you. In computer science, such non-standard inner products exist. (Mathematicians call them pairings, since they are not positive definite, i.e., it is not true that $v \neq 0 \Rightarrow \|v\| > 0$. We'll just call $\odot$ a weird inner product.) This one will be the key to Simon's algorithm, presented later in the course. The inner product gives rise to a modulus, or length, for vectors through the same mechanism we used in $\mathbb{R}^2$,

$\|\mathbf{x}\| \;=\; |\mathbf{x}| \;\equiv\; \sqrt{\,\mathbf{x} \odot \mathbf{x}\,}\,,$
where I have shown two different notations for modulus. With this definition we find

$\left\|\begin{pmatrix}0\\1\end{pmatrix}\right\| = \left\|\begin{pmatrix}1\\0\end{pmatrix}\right\| = 1 \qquad\text{and}\qquad \left\|\begin{pmatrix}0\\0\end{pmatrix}\right\| = \left\|\begin{pmatrix}1\\1\end{pmatrix}\right\| = 0\,.$

The strange but necessary equality on the lower right is a consequence of this oddball inner product on $\mathbf{B} = \mathbb{B}^2$.

[Exercise. Verify the above moduli.]

[Notation. Sometimes I'll use an ordinary dot for the inner product, $(x_1, y_1)^t \cdot (x_2, y_2)^t$, instead of the circle dot, $(x_1, y_1)^t \odot (x_2, y_2)^t$. When you see a vector on each side of $\cdot$, you'll know that we really mean the mod-2 inner product, not mod-2 multiplication.]
Dimension of $\mathbf{B}$

If $\mathbf{B}$ is a vector space, what is its dimension, and what is its natural basis? That's not hard to guess. The usual suspects will work. It's a short exercise.
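A minimal numpy sketch of $\mathbf{B} = \mathbb{B}^2$ and its weird inner product; the helper names `add` and `dot` are my own inventions.

```python
import numpy as np

def add(v, w):          # vector addition: component-wise XOR (mod-2 sum)
    return (v + w) % 2

def dot(v, w):          # mod-2 inner product: x1*x2 XOR y1*y2
    return int(v @ w % 2)

v00, v01, v10, v11 = (np.array(b) for b in [(0, 0), (0, 1), (1, 0), (1, 1)])

print(dot(v01, v01))    # -> 1  (unit "length")
print(dot(v11, v11))    # -> 0  (the non-zero vector orthogonal to itself!)
print(add(v01, v11))    # -> [1 0]
```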
9.3.2
Multiple Bits

If we have several bits, we have several copies of $\mathbf{B}$, and we can name each with a variable like x, y or z. We can assign values to these variables with the familiar syntax

$x = [1]\,,\quad y = [1]\,,\quad z = [0]\,,\quad \text{etc.}$
A classical bit (uncommitted to a value) can also be viewed as a variable linear combination of the two basis vectors in $\mathbf{B}$.

Alternate Definition of Bit. A bit is a variable superposition of the two natural basis vectors of $\mathbf{B}$,

$x \;=\; \alpha\,[0] \;+\; \beta\,[1]\,, \qquad\text{where}\quad \alpha^2 \oplus \beta^2 \;=\; 1\,.$

Since $\alpha$ and $\beta$ are scalars of $\mathbb{B}$, they can only be 0 and 1, so the normalization condition implies exactly one of them is 1 and the other is 0.
Two questions are undoubtedly irritating you.
1. Is there any benefit, outside of learning quantum computation, of having such
an abstract and convoluted definition of a classical bit?
2. How will this definition help us grasp the qubit of quantum computing?
Keep reading.
9.3.3
We will only consider unary (i.e., one bit) operators in this section. Binary operators
come later.
Unary Operators
Definition of Logical Unary Operator. A logical unary operator (or one-bit operator) is a linear transformation of $\mathbf{B}$ that maps normalized vectors to other normalized vectors.
This is pretty abstract, but we can see how it works by looking at the only four logical
unary operators in sight.
- The constant-[0] operator: $A(x) \equiv [0]$. This maps any bit into the 0-bit. (Don't forget, in $\mathbf{B}$, the 0-bit is not the 0-vector; it is the unit vector $(1, 0)^t$.) Using older, informal truth tables, we would describe this operator by saying that both inputs, $x = [0]$ and $x = [1]$, produce the output $[0]$. In our new formal language, the constant-[0] operator corresponds to the linear transformation whose matrix is

$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\,,$

since, for any unit vector (bit value) $(\alpha, \beta)^t$, we have

$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}\alpha \oplus \beta\\ 0\end{pmatrix} \;=\; \begin{pmatrix}1\\0\end{pmatrix} \;=\; [0]\,,$

the second-from-last equality due to the fact that, by the normalization requirement on bits, exactly one of $\alpha$ and $\beta$ must be 1.
- The constant-[1] operator: $A(x) \equiv [1]$. This maps any bit into the 1-bit; both inputs produce the output $[1]$. Its matrix is the mirror image of the last one:

$\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}0\\ \alpha \oplus \beta\end{pmatrix} \;=\; \begin{pmatrix}0\\1\end{pmatrix} \;=\; [1]\,.$
- The negation (or NOT) operator: $A(x) \equiv \neg x$. This maps any bit into its logical opposite. It corresponds to

$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\,, \qquad \neg x \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\,x\,.$

[Exercise. Using the formula, verify that $\neg[0] = [1]$ and $\neg[1] = [0]$.]

- The identity operator: $A(x) \equiv \mathbf{1}x = x$. This maps any bit into itself and corresponds to

$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\,.$

[Exercise. Perform the matrix multiplication to confirm that $\mathbf{1}[0] = [0]$ and $\mathbf{1}[1] = [1]$.]
Linear Transformations that are Not Unary Operators

Apparently, any linear transformation on $\mathbf{B}$ other than the four listed above will not correspond to a logical unary operator. For example, the zero operator

$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$

isn't listed. This makes sense, since it does not map normalized vectors to normalized vectors. For example,

$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\,[0] \;=\; \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} \;=\; \begin{pmatrix}0\\0\end{pmatrix} \;=\; \mathbf{0}\,,$

not a unit vector. Another example is the matrix

$\begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}\,,$

since it maps the bit $[1] = (0, 1)^t$ to the zero vector:

$\begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix} \;=\; \begin{pmatrix}0\\0\end{pmatrix} \;=\; \mathbf{0}\,.$

[Exercise. Show that the matrix

$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$

does not represent a logical operator by finding a unit vector that it maps to a non-unit vector, thus violating the definition of a logical unary operator.]

Defining logical operators as matrices provides a unified way to represent the seemingly random and unrelated four logical operators that we usually define using isolated truth tables. There's a second advantage to this characterization.
Reversibility

Apply the constant-[1] operator to a general bit,

$\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}0\\1\end{pmatrix}\,,$

and there's no way to reconstruct $(\alpha, \beta)^t$ reliably from the constant bit $[1]$; it has erased information that can never be recovered. Thus, the operator, and its associated logic gate, is called irreversible.

Of the four logical operators on one classical bit, only two of them are reversible: the identity, $\mathbf{1}$, and negation, $\neg$. In fact, to reverse them, they can each be reapplied to the output to get back the original input bit value with 100% reliability.

Characterization of Reversible Operators. An operator is reversible $\Leftrightarrow$ its matrix is unitary.
Example. We show that the matrix for $\neg$ is unitary by looking at its columns. The first column is

$\begin{pmatrix}0\\1\end{pmatrix}\,,$

which

- has unit length, since $(0,1)^t \odot (0,1)^t = 1$, and

- is orthogonal to the second column vector, since $(0,1)^t \odot (1,0)^t = 0$.

Reversibility is a very powerful property in quantum computing, so our ability to characterize it with a simple criterion such as unitarity is of great value. This is easy to check in the classical case, since there are only four unary operators: two have unitary matrices and two do not.

[Exercise. Show that the matrix for $\mathbf{1}$ is unitary and that the two constant operators' matrices are not.]

An Anomaly with Unitary Matrices in $\mathbf{B}$

We learned that unitary transformations have the following equivalent properties:

1. they preserve the lengths of vectors, $\|Av\| = \|v\|$, and

2. they preserve inner products, $\langle Av \,|\, Aw\rangle = \langle v \,|\, w\rangle$.
9.4
The Qubit
We are finally ready to go quantum. At each turn, there will be a classical concept and vocabulary available for comparison, because we invested a few minutes studying the formal definitions in that familiar context.

9.4.1 Quantum Bits
In quantum computing we give the spin-1/2 eigenstates new names:

$|+\rangle \;\leftrightarrow\; |0\rangle$
$|-\rangle \;\leftrightarrow\; |1\rangle$
$|+\rangle_x \;\leftrightarrow\; |0\rangle_x$
$|-\rangle_x \;\leftrightarrow\; |1\rangle_x$
$|+\rangle_y \;\leftrightarrow\; |0\rangle_y$
$|-\rangle_y \;\leftrightarrow\; |1\rangle_y$

Alternate x-Basis Notation. Many authors use the shorter notation for the x-basis,

$|0\rangle_x \;\equiv\; |+\rangle \qquad\text{and}\qquad |1\rangle_x \;\equiv\; |-\rangle\,,$

but I will eschew that for the time being; $|+\rangle$ and $|-\rangle$ already have a z-basis meaning in ordinary quantum mechanics, and using them for the x-basis too soon will cause confusion. However, be prepared for me to call $|+\rangle$ and $|-\rangle$ into action as x-basis CBS, particularly when we need the variable x for another purpose.
Computational Basis States. Instead of using the term eigenbasis, computer scientists refer to computational basis states (or CBS when I'm in a hurry). For example, we don't talk about the eigenbasis of $S_z$, $\{\,|+\rangle, |-\rangle\,\}$. Rather, we speak of the preferred computational basis, $\{\,|0\rangle, |1\rangle\,\}$. You are welcome to imagine it as being associated with the observable $S_z$, but we really don't care what physical observable led to this basis. We only care about the Hilbert space H, not the physical system, S, from which it arose. We don't even know what kind of physics will be used to build quantum computers (yet). Whatever physical hardware is used, it will give us the 2-D Hilbert space H.

Alternate bases like $\{\,|0\rangle_x, |1\rangle_x\,\}$, $\{\,|0\rangle_y, |1\rangle_y\,\}$ or even $\{\,|0\rangle_{\hat{n}}, |1\rangle_{\hat{n}}\,\}$ for some direction $\hat{n}$, when needed, are also called computational bases, but we usually qualify them using the term alternate computational basis. We still have the short-hand terms z-basis, x-basis, etc., which avoid the naming conflict altogether. These alternate computational bases are still defined by their expansions in the preferred z-basis ($|1\rangle_x = \frac{|0\rangle - |1\rangle}{\sqrt2}$, e.g.), and all the old relationships remain.
9.4.2
9.4.3

A more realistic working definition of qubit parallels that of the alternative formal definition of bit.

Working Definition of Qubit. A qubit is a variable superposition of the two computational basis states,

$|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle\,, \qquad\text{where}\quad |\alpha|^2 + |\beta|^2 \;=\; 1\,.$

I used the word variable to call attention to the fact that the qubit stores a value, but is not the value, itself.
Global Phase Factors

We never forget that even if we have a normalized state,

$|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle\,,$

the scaled state

$e^{i\theta}\,|\psi\rangle \;=\; e^{i\theta}\alpha\,|0\rangle + e^{i\theta}\beta\,|1\rangle\,, \qquad \theta \text{ real}\,,$

still resides on the projective sphere, and since it is a scalar multiple of $|\psi\rangle$, it is a valid representative of the same state or qubit value. We would say $|\psi\rangle$ and $e^{i\theta}|\psi\rangle$ differ by an overall, or global, phase factor, a condition that does not change anything, but can be used to put $|\psi\rangle$ into a more workable form.

Comparison with Classical Logic

The alternate/working expressions for bits in the two regimes look similar:

Classical: $\;x = \alpha\,[0] + \beta\,[1]$, with $\alpha^2 \oplus \beta^2 = 1$;

Quantum: $\;|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle$, with $|\alpha|^2 + |\beta|^2 = 1$.

In the classical case, $\alpha$ and $\beta$ could only be 0 or 1, and they could not be the same (normalizability), leading to only two possible states for x, namely $[0]$ or $[1]$. In the quantum case, $\alpha$ and $\beta$ can be any of an infinite combination of complex scalars, leading to an infinite number of distinct values for the state $|\psi\rangle$.
9.4.4
What's the big idea behind qubits? They are considerably more difficult to define and study than classical bits, so we deserve to know what we are getting for our money.

Parallel Processing

The main motivation comes from our Trait #6, the fourth postulate of quantum mechanics. It tells us that until we measure the state $|\psi\rangle$, it has a probability of landing in either state, $|0\rangle$ or $|1\rangle$, the exact details given by the magnitudes of the complex amplitudes $\alpha$ and $\beta$.

As long as we don't measure $|\psi\rangle$, it is like the Centaur of Greek mythology: part $|0\rangle$ and part $|1\rangle$; when we process this beast with quantum logic gates, we will be sending both alternative binary values through the hardware in a single pass. That will change it to another normalized state vector (say $|\psi'\rangle$), which has different amplitudes, and that can be sent through further logic gates, again retaining the potential to be part $|0\rangle$ and part $|1\rangle$, but with different probabilities.

Classical Computing as a Subset

Classical logic can be emulated using quantum gates.

This is slightly less obvious than it looks, and we'll have to study how one constructs even a simple AND gate using quantum logic. Nevertheless, we can correctly forecast that the computational basis states, $|0\rangle$ and $|1\rangle$, will correspond to the classical bit values $[0]$ and $[1]$. These two lonely souls, however, are now swimming in a continuum of non-classical states.

Measurement Turns Qubits into Bits

Well, this is not such a great selling point. If qubits are so much better than bits, why not leave them alone? Worse still, our Trait #7, the fifth postulate of quantum mechanics, means that even if we don't want to collapse the qubit into a computational basis state, once we measure it, we will have done just that. We will lose the exquisite subtleties of $\alpha$ and $\beta$, turning the entire state into a $|0\rangle$ or $|1\rangle$.

Once you go down this line of thought, you begin to question the entire enterprise. We can't get any answers if we don't test the output states, and if they always collapse, what good did the amplitudes do? This skepticism is reasonable. For now, I can only give you some ideas, and ask you to wait to see the examples.

1. We can do a lot of processing below the surface of the quantum ocean, manipulating the quantum states without attempting an information-destroying measurement that brings them up for air until the time is right.

2. We can use Trait #6, the fourth postulate, in reverse: rather than looking at amplitudes as a prediction of experimental outcomes, we can view the relative distribution of several measurement outcomes to guess at the amplitudes of the output state.

3. By preparing our states and quantum logic carefully, we can load the dice so that the measurements we eventually take reveal the answer we seek with high probability.
9.5
9.5.1
9.5.2
We'll examine every angle of this first example. It is precisely because of its simplicity that we can easily see the important differences between classical and quantum computational logic.
The Quantum NOT (or QNOT) Operator, X

The QNOT operator swaps the amplitudes of any state vector. It corresponds to

$X \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\,,$

the same matrix that represents the NOT operator, $\neg$, of classical computing. The difference here is not in the operator, but in the vast quantity of qubits to which we can apply it. Using $|\psi\rangle = (\alpha, \beta)^t$, as we will for this entire lecture, we find

$X\,|\psi\rangle \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}\beta\\\alpha\end{pmatrix}\,.$

In the special case of a CBS ket, we find that this does indeed change the state from $|0\rangle$ to $|1\rangle$ and vice versa.

[Exercise. Using the formula, verify that $X|0\rangle = |1\rangle$ and $X|1\rangle = |0\rangle$.]

We might show the effect of the gate right on the circuit diagram,

$\alpha\,|0\rangle + \beta\,|1\rangle \;\longrightarrow\; \boxed{X} \;\longrightarrow\; \beta\,|0\rangle + \alpha\,|1\rangle\,.$

The input state is placed on the left of the gate symbol and the output state on the right.
Computational Basis State Emphasis

Because any linear operator is completely determined by its action on a basis, you will often see an operator like X defined only on the CBS states, and you are expected to know that this should be extended to the entire H using linearity. In this case, the letters x and y are usually used to label a CBS, so the gate is drawn as

$|x\rangle \;\longrightarrow\; \boxed{X} \;\longrightarrow\; |\neg x\rangle\,, \qquad x \in \{0, 1\}\,.$

This expresses the two possible input states and says that $X|0\rangle = |\neg 0\rangle = |1\rangle$, while $X|1\rangle = |\neg 1\rangle = |0\rangle$. Using alternative notation,

$|x\rangle \;\longmapsto\; |\overline{x}\rangle\,.$

In fact, you'll often see the mod-2 operator $\oplus$ for some CBS logic. If we used that to define X, it would look like this:

$|x\rangle \;\longmapsto\; |1 \oplus x\rangle\,.$

[Exercise. Why does the last expression result in a logical negation of the CBS?]
You Must Remember This. The operators (, , etc.) used inside the kets
on the variables x and/or y apply only to the binary values 0 or 1 that label the basis
states. They make no sense for general states. We must extend linearly to the rest
of our Hilbert space.
Sample Problem

Given the definition of the bit-flip operator, X, in terms of the CBS $|x\rangle$,

$X: |x\rangle \;\longmapsto\; |\neg x\rangle\,,$

what is the action of X on an arbitrary state $|\psi\rangle$, and what is the matrix for X?

Expand $|\psi\rangle$ along the computational basis and apply X:

$X\,|\psi\rangle \;=\; X\,\big(\alpha\,|0\rangle + \beta\,|1\rangle\big) \;=\; \alpha\,X|0\rangle + \beta\,X|1\rangle \;=\; \alpha\,|\neg 0\rangle + \beta\,|\neg 1\rangle \;=\; \alpha\,|1\rangle + \beta\,|0\rangle \;=\; \beta\,|0\rangle + \alpha\,|1\rangle\,.$

While the problem didn't ask for it, let's round out the study by viewing X in terms of a ket's CBS coordinates, $(\alpha, \beta)^t$. For that, we can read it directly off the final derivation of $X|\psi\rangle$, above, or apply the matrix, which I will do now:

$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}\beta\\\alpha\end{pmatrix}\,.$

One Last Time: In the literature, most logic gates are defined in terms of $|x\rangle$, $|y\rangle$, etc. This is only the action on the CBS, and it is up to us to fill in the blanks, get the action on the general $|\psi\rangle$ and produce the matrix for the gate.
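Here is a minimal numpy sketch of QNOT in action, with a simulated measurement count at the two access points; the state and the shot count are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 1], [1, 0]])

alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])            # state at point A (before the gate)
out = X @ psi                            # state at point B: amplitudes swapped
assert np.allclose(out, [beta, alpha])

def sample(state, shots=1000):
    """Count how many of `shots` simulated measurements collapse to |0>."""
    p0 = abs(state[0]) ** 2
    return int(np.sum(rng.random(shots) < p0))

print(sample(psi), sample(out))          # roughly 360 vs 640 zeros
```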
Comparison with Classical Logic

We've seen that there is at least one classical operator, NOT, that has an analog, QNOT, in the quantum world. Their matrices look the same and they affect the classical bits, $[0]$/$[1]$, and the corresponding CBS counterparts, $|0\rangle$/$|1\rangle$, identically, but that's where the parallels end.

What about the other three classical unary operators? You can probably guess that the quantum identity,

$\mathbf{1} \;=\; \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\,,$

exhibits the same similarities and differences as the QNOT did with the NOT. The matrices are identical and have the same effect on classical/CBS kets, but beyond that, the operators work in different worlds.

[Exercise. Express the gate (i.e., circuit) definition of $\mathbf{1}$ in terms of a CBS $|x\rangle$ and give its action on a general $|\psi\rangle$. Discuss other differences between the classical and quantum identity.]

That leaves the two constant operators, the [0]-op and the [1]-op. The simple answer is there are no quantum counterparts for these. The reason gets to the heart of quantum computation.

Unitarity is Not Optional

The thing that distinguished $\neg$ and $\mathbf{1}$ from the [0]-op and the [1]-op in the classical case was unitarity. The first two were unitary and the last two were not. How did those last two even sneak into the classical operator club? They had the property that they preserved the lengths of vectors. That's all an operator requires. But these constant ops were not reversible, which implied that their matrices were not unitary. The quirkiness of certain operators being length-preserving yet non-unitary was a fluke of nature caused by the strange mod-2 inner product on $\mathbf{B}$. It allowed a distinction between length-preservation and unitarity.

In quantum computing, no such distinction exists. The self-same requirement that operators map unit vectors to other unit vectors in H forces operators to be
unitary. If we try, for example, to recycle the classical constant-[0] matrix in H, it fails to preserve lengths:

$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} \tfrac{1}{\sqrt2} \\[2pt] \tfrac{1}{\sqrt2} \end{pmatrix} \;=\; \begin{pmatrix} \tfrac{2}{\sqrt2} \\[2pt] 0 \end{pmatrix} \;=\; \begin{pmatrix} \sqrt2 \\ 0 \end{pmatrix}\,,$

not a unit vector. To see a different approach, we show it using a putative operator, A, defined by its CBS action, one that ignores the CBS input, always answering with a $|0\rangle$:

$A: |x\rangle \;\longmapsto\; |0\rangle\,.$

Extending by linearity to the normalized state $\frac{|0\rangle + |1\rangle}{\sqrt2}$ gives

$A\left(\frac{|0\rangle + |1\rangle}{\sqrt2}\right) \;=\; \frac{A|0\rangle + A|1\rangle}{\sqrt2} \;=\; \frac{|0\rangle + |0\rangle}{\sqrt2} \;=\; \sqrt2\,|0\rangle\,,$

again not a unit vector. There simply is no room in quantum computing for constant operators.

Measurement

Consider a circuit that sends $|\psi\rangle$ through a QNOT gate, with an access point A just before the gate and an access point B just after it:

$|\psi\rangle \;\overset{A}{\longrightarrow}\; \boxed{X} \;\overset{B}{\longrightarrow}$

Of course, we know from Trait #7 (fifth postulate of QM) that once we measure the state it collapses into a CBS, so we cannot measure both points on the same sample of our system. If we measure at A, the system will collapse into either $|0\rangle$ or $|1\rangle$, and we will no longer have $|\psi\rangle$ going into X. So the way to interpret this diagram is to visualize many different copies of the system in the same state. We measure some at access point A and others at access point B. The math tells us that the state is

$\alpha\,|0\rangle + \beta\,|1\rangle \;\text{ at point } A \qquad\text{and}\qquad \beta\,|0\rangle + \alpha\,|1\rangle \;\text{ at point } B\,.$

So, if we measure 1000 identical states at point A, we will get

$\#\text{ of } 0\text{-measurements} \;\approx\; 1000\,|\alpha|^2 \qquad\text{and}\qquad \#\text{ of } 1\text{-measurements} \;\approx\; 1000\,|\beta|^2\,,$

by Trait #6, the fourth QM postulate. However, if we do not measure those states, but instead send them through X and only then measure them (at B), the same trait applied to the output expression tells us that the probabilities will be swapped:

$\#\text{ of } 0\text{-measurements} \;\approx\; 1000\,|\beta|^2 \qquad\text{and}\qquad \#\text{ of } 1\text{-measurements} \;\approx\; 1000\,|\alpha|^2\,.$
Composition of Gates

Recall from the lesson on linear transformations that any unitary matrix, U, satisfies

$U^\dagger\, U \;=\; U\, U^\dagger \;=\; \mathbf{1}\,,$

where $U^\dagger$ is the conjugate transpose, a.k.a. adjoint, of U. In other words, its adjoint is also its inverse.

Because unitarity is required of all quantum gates, in particular QNOT, we know that

$X^\dagger X \;=\; X X^\dagger \;=\; \mathbf{1}\,.$

But the matrix for X tells us that it is self-adjoint:

$X^\dagger \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}^{\!\dagger} \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \;=\; X\,.$

Combining the two facts, $X X = \mathbf{1}$: two QNOT gates in series restore any input,

$X\,X\,|\psi\rangle \;=\; |\psi\rangle\,,$

which can be verified either algebraically, by multiplying the vectors and matrices, or experimentally, by taking lots of sample measurements on identically prepared states. Algebraically, for example, the state of the qubit at points A, B and C of the circuit

$|\psi\rangle \;\overset{A}{\longrightarrow}\; \boxed{X} \;\overset{B}{\longrightarrow}\; \boxed{X} \;\overset{C}{\longrightarrow}$

will be $\alpha|0\rangle + \beta|1\rangle$ at A, $\beta|0\rangle + \alpha|1\rangle$ at B, and $\alpha|0\rangle + \beta|1\rangle$ once again at C.
This leads to the expectation that, for all quantum gates, as long as we don't measure anything we can keep sending the output of one into the input of another, and while the states will be transformed, the exquisite detail of the qubit's amplitudes remains intact. The amplitudes may be hidden in algebraic changes caused by the quantum gates, but they will be retrievable due to the unitarity of our gates. However, make a measurement, and we will have destroyed that information; measurements are not unitary operations.

We've covered the only two classical gates that have quantum alter egos. Let's go on to meet some one-bit quantum gates that have no classical counterpart.
9.5.3

If the bit flip, X, is the operator that coincides with the physical observable $S_x$, then the phase flip, Z, turns out to be the Pauli matrix $\sigma_z$ associated with the $S_z$ observable.

The Z operator, whose matrix is defined to be

$Z \;=\; \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\,,$

negates the second amplitude of a state vector, leaving the first unchanged:

$Z\,|\psi\rangle \;=\; \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}\alpha\\-\beta\end{pmatrix}\,.$

Since $-1 = e^{i\pi}$, this amounts to shifting the relative phase of the state by $\pi$ radians; on a general state it sends $\alpha\,|0\rangle + \beta\,|1\rangle$ to $\alpha\,|0\rangle - \beta\,|1\rangle$, and on a CBS ket it takes the compact form

$Z: |x\rangle \;\longmapsto\; (-1)^x\,|x\rangle\,.$
Measurement

It is interesting to note that Z has no measurement consequences on a single qubit, since the magnitudes of both amplitudes remain unchanged by the gate. The circuit

$|\psi\rangle \;\overset{A}{\longrightarrow}\; \boxed{Z} \;\overset{B}{\longrightarrow}$

yields the same probabilities, $|\alpha|^2$ and $|\beta|^2$, at both access points. Therefore, both $|\psi\rangle$ and $Z|\psi\rangle$ will have identical measurement likelihoods; if we have 1000 electrons or photons in spin state $|\psi\rangle$ and 1000 in $Z|\psi\rangle$, a measurement of all of them will throw about $|\alpha|^2 \cdot 1000$ into state $|0\rangle$ and $|\beta|^2 \cdot 1000$ into state $|1\rangle$.

However, two states that have the same measurement probabilities are not necessarily the same state.

The relative phase difference between $|\psi\rangle$ and $Z|\psi\rangle$ can be felt the moment we try to combine (incorporate into a larger expression) either state using superposition. Mathematically, we can see this by noticing that

$\frac{|\psi\rangle + |\psi\rangle}{\sqrt2} \;=\; \sqrt2\;|\psi\rangle$

is not a new state at all (a single state is inherently linearly dependent with itself), while a genuinely distinct normalized state can be formed from the pair $|\psi\rangle$ and $Z|\psi\rangle$; for instance, with $|\psi\rangle = \frac{|0\rangle + |1\rangle}{\sqrt2}$,

$\frac{|\psi\rangle + Z\,|\psi\rangle}{\sqrt2} \;=\; \frac{(\alpha + \alpha)\,|0\rangle + (\beta - \beta)\,|1\rangle}{\sqrt2} \;=\; \frac{2\alpha\,|0\rangle}{\sqrt2} \;=\; |0\rangle\,.$
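A minimal numpy sketch of exactly this: identical single-qubit statistics, but a superposition that exposes the phase (the example state is the $(|0\rangle + |1\rangle)/\sqrt2$ instance used just above).

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
psi = np.array([1, 1]) / np.sqrt(2)
zpsi = Z @ psi

assert np.allclose(np.abs(psi) ** 2, np.abs(zpsi) ** 2)   # same probabilities

combo = (psi + zpsi) / np.sqrt(2)                         # superpose the pair
print(combo)                                              # -> [1, 0], i.e. |0>
```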
9.5.4

After seeing the X and Z operators, we are compelled to wonder about the operator Y that corresponds to the Pauli matrix $\sigma_y$ associated with the $S_y$ observable. It is just as important as those, but has no formal name (we'll give it one in a moment). Its action on a general state is

$Y\,|\psi\rangle \;=\; \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}-i\beta\\ \;\,i\alpha\end{pmatrix} \;=\; i\begin{pmatrix}-\beta\\ \alpha\end{pmatrix}\,,$

that last equality accomplished by factoring out the innocuous unit scalar, i.

Notation and Vocabulary

Although Y has no official name, we will call it the bit-and-phase flip, because it flips both the bit and the relative phase, simultaneously.

The symbol Y is used to represent this operator because it is identical to the Pauli matrix $\sigma_y$.

Gate Symbol and Circuits

In a circuit diagram, we use a boxed Y to express a Y gate. Its full effect on a general state in diagram form is

$\alpha\,|0\rangle + \beta\,|1\rangle \;\longrightarrow\; \boxed{Y} \;\longrightarrow\; -i\beta\,|0\rangle + i\alpha\,|1\rangle\,.$

This may seem less than a compelling justification for the name, but up to the overall unit scalar i, the output, $-\beta\,|0\rangle + \alpha\,|1\rangle$, has its amplitudes both swapped (the bit flip) and given a relative sign change (the phase flip). Measurement-wise, the circuit

$|\psi\rangle \;\overset{A}{\longrightarrow}\; \boxed{Y} \;\overset{B}{\longrightarrow}$

causes the probabilities, $|\alpha|^2$ and $|\beta|^2$, to get swapped at access point B.
9.5.5

While all these quantum gates are essential, the one having the most far-reaching consequences and personifying the essence of quantum logic is the Hadamard gate.

The Hadamard operator, H, is defined by the matrix

$H \;=\; \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\,.$

Traditionally, we first address its effect on the CBS states,

$H\,|0\rangle \;=\; \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix} \;=\; \frac{|0\rangle + |1\rangle}{\sqrt2}$

and

$H\,|1\rangle \;=\; \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix} \;=\; \frac{|0\rangle - |1\rangle}{\sqrt2}\,.$

We immediately recognize that this rotates the z-basis kets onto the x-basis kets,

$H\,|0\rangle \;=\; |0\rangle_x \qquad\text{and}\qquad H\,|1\rangle \;=\; |1\rangle_x\,.$

On a general state,

$H\,|\psi\rangle \;=\; \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \frac{1}{\sqrt2}\begin{pmatrix}\alpha + \beta\\ \alpha - \beta\end{pmatrix}\,,$

which can be grouped

$H\,|\psi\rangle \;=\; \left(\frac{\alpha + \beta}{\sqrt2}\right)|0\rangle \;+\; \left(\frac{\alpha - \beta}{\sqrt2}\right)|1\rangle\,,$

or, in circuit form,

$\alpha\,|0\rangle + \beta\,|1\rangle \;\longrightarrow\; \boxed{H} \;\longrightarrow\; \frac{\alpha + \beta}{\sqrt2}\,|0\rangle + \frac{\alpha - \beta}{\sqrt2}\,|1\rangle\,.$

Computational Basis State Emphasis

It turns out to be very useful to express H in compact computational basis form. I'll give you the answer, and let you prove it for yourself:

$H: |x\rangle \;\longmapsto\; \frac{|0\rangle + (-1)^x\,|1\rangle}{\sqrt2}\,.$

[Exercise. Show that this formula gives the right result on each of the two CBS kets.]
Measurement

Compared to the previous gates, the Hadamard gate has a more complex and subtle effect on measurement probabilities. The circuit

$|\psi\rangle \;\overset{A}{\longrightarrow}\; \boxed{H} \;\overset{B}{\longrightarrow}$

yields probabilities $|\alpha|^2$ and $|\beta|^2$ at point A, but

$P\big(H|\psi\rangle \searrow |0\rangle\big) \;=\; \frac{|\alpha + \beta|^2}{2} \qquad\text{and}\qquad P\big(H|\psi\rangle \searrow |1\rangle\big) \;=\; \frac{|\alpha - \beta|^2}{2}$

at point B.

Notation. The diagonal arrow, $\searrow$, is to be read "when measured, collapses to."

To make this concrete, here are the transition probabilities of a few input states (details left as an exercise).

$|\psi\rangle = |0\rangle = \binom{1}{0}$: before, $P(\searrow|0\rangle) = 1$, $P(\searrow|1\rangle) = 0$; after, $P(\searrow|0\rangle) = 1/2$, $P(\searrow|1\rangle) = 1/2$.

$|\psi\rangle = |1\rangle_x = \binom{1/\sqrt2}{-1/\sqrt2}$: before, $P(\searrow|0\rangle) = 1/2$, $P(\searrow|1\rangle) = 1/2$; after, $P(\searrow|0\rangle) = 0$, $P(\searrow|1\rangle) = 1$.

$|\psi_0\rangle = \binom{\sqrt{.3}}{\,i\sqrt{.7}\,}$: before, $P(\searrow|0\rangle) = .3$, $P(\searrow|1\rangle) = .7$; after, $P(\searrow|0\rangle) = 1/2$, $P(\searrow|1\rangle) = 1/2$.

$|\psi_1\rangle = \binom{1/2}{\sqrt3/2}$: before, $P(\searrow|0\rangle) = 1/4$, $P(\searrow|1\rangle) = 3/4$; after, $P(\searrow|0\rangle) = \frac{2+\sqrt3}{4}$, $P(\searrow|1\rangle) = \frac{2-\sqrt3}{4}$.
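A minimal numpy recomputation of this table (the state names $|\psi_0\rangle$ and $|\psi_1\rangle$ follow the reconstruction above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

states = {
    "|0>":     np.array([1, 0]),
    "|1>_x":   np.array([1, -1]) / np.sqrt(2),
    "|psi_0>": np.array([np.sqrt(.3), 1j * np.sqrt(.7)]),
    "|psi_1>": np.array([1 / 2, np.sqrt(3) / 2]),
}
for name, s in states.items():
    before = np.abs(s) ** 2           # collapse probabilities at point A
    after = np.abs(H @ s) ** 2        # collapse probabilities at point B
    print(f"{name}: before {np.round(before, 3)}, after {np.round(after, 3)}")
# |psi_1> after: [(2+sqrt(3))/4, (2-sqrt(3))/4], about [0.933, 0.067]
```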
9.5.6
The phase-flip gate, Z, is a special case of a more general (relative) phase shift operation in which the coefficient of $|1\rangle$ is shifted by $\theta = \pi$ radians. There are two other common shift amounts, $\pi/2$ (the S operator) and $\pi/4$ (the T operator). Beyond that, we use the most general amount, any $\theta$ (the $R_\theta$ operator). Of course, they can all be defined in terms of $R_\theta$, so we'll define that one first.

The phase shift operator, $R_\theta$, is defined by the matrix

$R_\theta \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}\,,$

where $\theta$ is any real number. It leaves the coefficient of $|0\rangle$ unchanged and shifts (or rotates) $|1\rangle$'s coefficient by a relative angle $\theta$, whose meaning we will discuss in a moment. Here's the effect on a general state:

$R_\theta\,|\psi\rangle \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{pmatrix}\begin{pmatrix}\alpha\\\beta\end{pmatrix} \;=\; \begin{pmatrix}\alpha\\ e^{i\theta}\beta\end{pmatrix}\,.$

The operators S and T are defined in terms of $R_\theta$:

$S \;\equiv\; R_{\pi/2} \;=\; \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \qquad\text{and}\qquad T \;\equiv\; R_{\pi/4} \;=\; \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}\,.$

Vocabulary

S is called the phase gate, and, in an apparent naming error, T is referred to as the $\pi/8$ gate; but this is not actually an error as much as it is a change of notation. Like a state vector, any unitary operator on state space can be multiplied by a unit scalar with impunity. We've seen the reason, but to remind you: state vectors are rays in state space, all vectors on the ray considered to be the same state. Since we are also working on the projective sphere, any unit scalar not only represents the same state, but keeps the vector on the projective sphere.

With that in mind, if we want to see a more balanced version of T, we multiply it by $e^{-i\pi/8}$ and get the equivalent operator

$T \;\cong\; \begin{pmatrix} e^{-i\pi/8} & 0 \\ 0 & e^{i\pi/8} \end{pmatrix}\,,$

whose exponents display the $\pm\pi/8$ responsible for the name.

Measurement

These phase shift gates leave the probabilities alone for single-qubit systems, just as the phase flip gate, Z, did. You can parallel the exposition presented for Z in the current situation to verify this.
9.6
9.6.1
Basis Conversion
Every quantum gate is unitary, a fact that has two important consequences, the first
of which we have already noted.
1. Quantum gates are reversible; their adjoints can be used to undo their action.
2. Quantum gates map any orthonormal CBS to another orthonormal CBS.
The second item is true using logic from our linear algebra lesson as follows. Unitary
operators preserve inner products. For example, since the natural CBS { |0i , |1i }
which we can also express using the more general notation { |xi }1x=0 satisfies the
orthonormality relation
hx | yi
xy
(kronecker delta) ,
xU U y
xy .
(Ill remind you that the LHS of last equation is nothing but the inner product of
U |yi with U |xi expressed in terms of the adjoint conversion rules introduced in our
lesson on quantum mechanics).
That tells us that { U |0i , U |1i } are orthonormal, and since the dimension of
H = 2 they also span the space. In other words, they form an orthonormal basis,
as claimed.
Lets call this the basis conversion property of unitary transformations and make
it a theorem.
Theorem (Basis Conversion Property). If U is a unitary operator
and A is an orthonormal basis, then U (A), i.e., the image of vectors A
under U , is another orthonormal basis, B.
[Exercise. Prove the theorem for any dimension, N . Hint: Let bk = U (ak )
be the kth vector produced by subjecting ak A to U . Review the inner productpreserving property of a unitary operator and apply that to any two vectors bk
and bj in the image of A. What does that say about the full set of vectors B =
U (A)? Finally, what do you know about the number of vectors in the basis for an
N -dimensional vector space?]
241
7 |1i
7 |0i .
7 |0ix
7 |1ix ,
and, since every quantum gates adjoint is its inverse, and H is self-adjoint (easy
[Exercise]), it works in the reverse direction as well,
H : |0ix
H : |1ix
7 |0i
7 |1i .
[Exercise. Identify the unitary operator that has the effect of converting between
the z-basis and the y-basis.]
9.6.2
Combining Gates
We can place two or more one-qubit gates in series to create desired results as the
next two sections will demonstrate.
An x-Basis QNOT
The QNOT gate (a.k.a. X), swaps the two CBS states, but only relative to the z-basis,
because thats how we defined it. An easy experiment shows that this is not true of
another basis, such as the x-basis:
|0i |1i
X |0i X |1i
X |1ix = X
=
2
2
|1i |0i
= |1ix
=
= |1ix .
2
[Exercise. To what is the final equality due? What is X |0ix ?]
242
[Exercise. Why is this not a surprise? Hint: Revive your quantum mechanics
knowledge. X is proportional to the matrix for the observable, Sx , whose eigenvectors
are |0ix and |1ix , by definition. What is an eigenvector?]
If we wanted to construct a gate, QN OTx , that does have the desired swapping
effect on the x-basis we could approach it in a number of ways, two of which are
1. brute force using linear algebra, and
2. a gate-combination using the basis-transforming power of the H-gate.
Ill outline the first approach, leaving the details to you, then show you the second
approach in its full glory.
Brute Force. We assert the desired behavior by declaring it to be true (and
confirming that our guess results in a unitary transformation). Here, that means
stating that QN OTx |0ix = |1ix and QN OTx |1ix = |0ix . Express this as two
equations involving the matrix for QN OTx and the x-basis kets in coordinate form
(everything in z-basis coordinates, of course). Youll get four simultaneous equations
for the four unknown matrix elements of QN OTx . This creates a definition in terms
of the natural CBS. We confirm its unitary and weve got our gate.
[Exercise. Fill in the details for QN OTx .]
Gate Combination. We know that QNOT (i.e., X) swaps the z-CBS, and H
converts between z-CBS and x-CBS. So we use H to map the x-basis to the z-basis,
apply X to the z-basis and convert the results back to the x-basis:
H
1
2
1
1
1 1
1 1
1
1
1
2
2 0
0 2
=
1 0
,
0 1
and we have our matrix. Its easy to confirm that the matrix swaps the x-CBS kets
and is identical to the matrix we would get using brute force (Ill let you check that).
243
Circuit Identities
The above example has a nice side-effect. By comparing the result with one of our
basic gates, we find that H X H is equivalent to Z,
H
There are more circuit identities that can be generated, some by looking at the
matrices, and others by thinking about the effects of the constituent gates and confirming your guess through matrix multiplication.
Here is one you can verify,
H
The last pattern is true for any quantum logic gate, U , which is self-adjoint because
then U 2 = U U = 1, the first equality by self-adjoint-ness and the second by
unitarity.
Some operator equivalences are not shown in gate form, but rather using the
algebraic operators. For example
XZ
i Y
or
XY Z
ZY X
i1.
Thats because the algebra shows a global phase factor which may appear awkward
in gate form yet is still important if the combination is to be used in a larger circuit.
As you may recall, even though a phase factor may not have observable consequences
on the state alone, if that state is combined with other states prior to measurement,
the global phase factor can turn into a relative phase difference, which does have
observable consequences.
I will finish by reminding you that the algebra and the circuit are read in opposite
order. Thus
XY Z
i1.
i1
This completes the basics of Qubits and their unary operators. There is one final
topic that every quantum computer scientist should know. It is not going to be used
much in this course, but will appear in CS 83B and CS 83C. It belongs in this chapter,
so consider it recommended, but not required.
244
9.7
9.7.1
Our goal is to find a visual 3-D representation for the qubits in H. To that end, we
will briefly allude to the lecture on quantum mechanics.
If you studied the optional time evolution of a general spin state corresponding
to a special physical system an electron in constant magnetic field B you learned
that the expectation value of all three observables formed a real 3-D time-evolving
vector,
hSx i|(t)i
9.7.2
c1 |0i + c2 |1i .
Rewriting |i
1 +2
2
i( 1 2 2 )
ce
.
1 2
s ei( 2 )
1 2
2
245
s2
1,
so c and s can be equated with the sine and cosine of some angle which we call 2 , i.e.,
c
sin .
2
cos
and
9.7.3
The three logic gates, X, Y and Z, are represented by unitary matrices, but they also
happen to be Hermitian. This authorizes us to consider them observables defined by
those matrices. In fact, we already related them to the observables Sx , Sy and Sz ,
the spin measurements along the principal axes, notwithstanding the removal of the
factor ~2 . Therefore, each of these observables has an expectation value a prediction
about the average of many experiments in which we repeatedly send lots of qubits
in the same state, |i, through one of these gates, measure the output (causing a
collapse), compute the average for many trials, and do so for all three gates X, Y
and Z. We define a real 3-D vector, s, that collects the expectation value into one
column, to be
hXi|i
s hY i|i .
hZi|i
At the end of the quantum mechanics lecture we essentially computed these expectation values. If you like, go back and plug t = 0 into the formula there. Aside from
the factor of ~2 (caused by the difference between X/Y /Z and Sx /Sy /Sz ), you will get
sin cos
s = sin sin .
cos
By defining c and s in terms of 2 , we ended up with expectation values that had the
whole angle, , in them. This is a unit vector in R3 whose spherical coordinates are
(1, , )t , i.e., it has a polar angle and azimuthal angle . It is a point on the
unit sphere.
246
9.7.4
n |
n| = 1
=
is called the Bloch sphere when the coordinates of each point on the sphere n
(x, y, z)t are interpreted as the three expectation values hXi, hY i and hZi for some
on the Bloch
qubit state, |i. Each qubit value, |i, in H corresponds to a point n
sphere.
=
If we use spherical coordinates to represent points on the sphere, then n
t
(1, , ) corresponds to the |i = |0i + |1i in our Hilbert space H according to
!
1,
cos 2 ei
i
= ,
H.
n
Bloch sphere |i =
sin
e
2
Sph
Now we see that a polar angle, , of a point on the Bloch sphere gives the magnitudes
of its corresponding qubit coordinates, but not directly; when is the polar angle,
/2 is used (through sine and cosine) for the qubit coordinate magnitudes.
247
Chapter 10
Tensor Products
V W
10.1
While quantum unary gates are far richer in variety and applicability than classical
unary gates, there is only so much fun one can have with circuit elements, like
U
which have a single input. We need to combine qubits (a process called quantum
entanglement), and to do that well need gates that have, at a minimum, two inputs,
U
(You may notice that there are also two outputs, an inevitable consequence of unitarity that well discuss in this hour.)
In order to feed two qubits into a binary quantum gate, we need a new tool to
help us calculate, and that tool is the tensor product of the two single qubit state
spaces
H H.
The concepts of tensors are no harder to master if we define the general tensor product
of any two vector spaces, V and W of dimensions l and m, respectively,
V W ,
and this approach will serve us well later in the course. We will then apply what we
learn by setting V = W = H.
248
10.2
10.2.1
Definitions
Whenever we construct a new vector space like V W , we need to handle the required
equipment. That means
1. specifying the scalars and vectors,
2. defining vector addition and scalar multiplication, and
3. confirming all the required properties.
Items 1 and 2 are easy, and we wont be overly compulsive about item 3, so it
should not be too painful. Well also want to cover the two normally optional but
for us required topics,
4. defining the inner product and
5. establishing the preferred basis.
If you find this sort of abstraction drudgery, think about the fact that tensors are the
requisite pillars of many fields including structural engineering, particle physics and
general relativity. Your attention here will not go unrewarded.
Overview
The new vector space is based on the two vector spaces V (dimension = l) and W
(dimension = m) and is called tensor product of V and W , written V W . The new
space will turn out to have dimension = lm, the product of the two component space
dimensions).
249
The Scalars of V W
Both V and W must have a common scalar set in order to form their inner product,
and that set will be the scalar set for V W . For real vector spaces like R2 and R3 ,
the scalars for
R2 R3
would then be R. For the Hilbert spaces of quantum physics (and quantum computing), the scalars are C.
The Vectors in V W
Vectors of the tensor product space are formed in two stages. I like to Compartmental -ize them as follows:
1. We start by populating V W with the formal symbols
v w
consisting of one vector v from V and another vector w from W . v w is called
the tensor product of the two vectors v and w. There is no way to further merge
these two vectors; v w is as concise as their tensor product gets.
For example, in the case of (5, 6)t R2 and (, 0, 3)t R3 , they provide the
tensor product vector
5
0 R2 R3 ,
6
3
with no further simplification possible (notwithstanding the natural basis coordinate representation that we will get to, below).
Caution. No one is saying that all these formal products vw are distinct from
one another. When we define the operations, well see there is much duplication.
2. The formal vector products constructed in step 1 produce only a small subset
of the tensor product space V W . The most general vector is a finite sum of
such symbols.
The full space V W consists of all finite sums of the form
X
vk wk , with
k
vk V and wk W.
250
i
1
1i
1+i
1i
2
2
i
6 + 6
2 .
3
0 +
7
3i
3i
2
4
0
i
Although this particular sum cant be further simplified as a combination of
separable tensors, we can always simplify long sums so that there are at most
lm terms in them. Thats because (as well learn) there are lm basis vectors for
the tensor product space.
Second Caution. Although these sums produce all the vectors in V W ,
they do so many times over. In other words, it will not be true that every sum
created in this step is a distinct tensor from every other sum.
Vocabulary
Product Space. The tensor product of two vector spaces is sometimes referred
to as the product space.
Tensors. Vectors in the product space are sometimes called tensors, emphasizing that they live in the tensor product space of two vector spaces. However,
they are still vectors.
Separable Tensors. Those vectors in V W which arise in the step 1 are
called separable tensors; they can be separated into two component vectors,
one from each space, whose product is the (separable) tensor. Step 2 presages
that most tensors in the product space are not separable. Separable tensors are
sometimes called pure or simple.
Tensor Product. We can use the term tensor product to mean either the
product space or the individual separable tensors. Thus V W is the tensor
product of two spaces, while v w is the (separable) tensor product of two
vectors.
Vector Addition
This operation is built-into the definition of a tensor; since the general tensor is the
sum of separable tensors, adding two of them merely produces another sum, which
is automatically a tensor. The twist, if we can call it that, is how we equate those
sums which actually represent the same tensor. This all expressed in the following
two bullets.
251
0 =
v k wk
vj0 wj0 ,
which simply expresses the fact that a sum of two finite sums is itself a finite
sum and therefore agrees with our original definition of a vector object in the
product space. The sum may need simplification, but it is a valid object in the
product space.
The tensor product distributes over sums in the component space,
(v + v0 ) w = v w + v0 w
v (w + w0 ) = v w + v w0 .
and
(cv) w
v (cw) .
c(v0 w0 ) + c(v1 w1 ) .
Because any tensor can be written as the finite sum of separable tensors, this
requirement covers the balance of the product space. You can distribute c over
as large a sum as you like.
Order. While we can place c on either the left or right of a vector, it is usually
placed on the left, as in ordinary vector spaces.
252
hv | v0 i hw | w0 i ,
where on the RHS is scalar multiplication. This only defines the tensor product of
two separable tensors, however we extend this to all tensors by asserting a distributive
property (or if you prefer the terminology, by extending linearly). For example,
h v w | v0 w0 + v00 w00 i
hv w | v0 w0 i + hv w | v00 w00 i .
[Exercise. Compute the inner product
*
5
0
6
3
in R2 R3
+
1
0
2
1
3
[Exercise. Prove that the definition of inner product satisfies all the usual requirements or a dot or inner product. Be sure to cover distributivity and positive
definiteness.]
The Natural Basis for V W
Of all the aspects of tensor products, the one that we will use most frequently is the
preferred basis.
Tensor Product Basis Theorem (Orthonormal Version). If V has
dimension l with orthonormal basis
n
ol1
vk
,
k=0
253
1
0
0
1
1
1
0 ,
1 ,
0 ,
0
0
0
0
0
1
1
0
0
0
0
0
0 ,
1 ,
0
1
1
1
0
0
1
Proof of Basis Theorem. Ill guide you through the proof, and you can fill in
the gaps as an exercise if you care to.
Spanning. A basis must span the space. We need to show that any tensor can
be expressed as a linear combination of the alleged basis vectors vk wj . This is an
easy two parter:
1. Any v V can be expanded along the V - basis as
X
v =
k v k ,
and any w W can be expanded along the W - basis as
X
w =
j wj ,
which implies that any separable tensor v w can be expressed
X
vw =
k j (vk wj ) .
[Exercise. Prove the last identity by applying linearity (distributive properties
of the tensor product space).]
2. Any tensor is a sum of separable tensors, so item 1 tells us that it, too, can
be expressed as a linear combination of vk wj . [Exercise. Demonstrate this
algebraically.]
254
=
=
v k wj
ol1, m1
j, k = 0, 0
from V and W .
If V and W each have an inner product (true for any of our vector spaces) this
theorem follows immediately from the orthonormal basis theorem.
[Exercise. Give a one sentence proof of this theorem based on the orthonormal
product basis theorem.]
The theorem is still true, even if the two spaces dont have inner products, but
we wont bother with that version.
Practical Summary of the Tensor Product Space
While we have outlined a rigorous construction for a tensor product space, it is usually
good enough for computer scientists to characterize the produce space in terms of the
tensor basis.
255
ckj (vk wj ) .
k=0
j=0
vk
n1
wj
m1
and
k=0
j=0
The sums, products and equivalence of tensor expressions are defined by the required
distributive and commutative properties listed earlier, but can often be taken as the
natural rules one would expect.
Conventional Order of Tensor Basis
While not universal, when we need to list the tensor basis linearly, the most common
convention is to let the left basis index increment slowly and the right increment
quickly. It is V -major / W -minor format if you will, an echo of the row-major
(column-minor ) ordering choice of arrays in computer science,
n
v0 w0 , v0 w1 , v0 w2 , . . . , v0 wm1 ,
v1 w0 , v1 w1 , v1 w2 , . . . , v1 wm1 ,
..
.
o
vl1 w0 , vl1 w1 , vl1 w2 , vl1 wm1 .
You might even see these basis tensors labeled using the shorthand like kj ,
n
00 , 01 , 02 , . . . , 0(m1) ,
10 , 11 , 12 , . . . , 1(m1) ,
..
.
o
(l1)0 , (l1)1 , (l1)2 , . . . , (l1)(m1) .
256
10.2.2
c0
d0
c1 d1
v w = .. .. .
. .
cl1
dm1
[A Reminder. We are numbering staring with 0, rather than 1, now that we are in
computing lessons.]
Im going to give you the answer immediately, and allow you to skip the explanation if you are in a hurry.
c0 d 0
c0 d 1
d0
..
.
1
c0 ..
c0 dm1
.
cd
1 0
dm1
cd
1 1
..
.
d
0
d
c1 dm1
1
c0
d0
c1 .
c1 d 1
.
c
d
2
0
.. .. =
c2 d 1 .
dm1
. .
..
.
cl1
dm1
cd
..
2 m1
.
d
0
d1
c
c d
l1 ..
l1 0
.
c d
l1 1
dm1
..
.
cl1 dm1
257
5
0
15
5
0 =
6 .
6
3
0
18
[Exercise. Give the coordinate representation of the tensor
1
1+i
2
6
3
3i
4
in the natural tensor basis.]
Explanation. Ill demonstrate the validity of the formula in the special case
R R3 . The derivation will work for any V and W (as the upcoming exercise
shows).
2
c0 d0
0
+ c0 d1
1
0
0
0
0
1
0
0
0
+ c1 d0
0 + c1 d1
1
1
1
0
0
0
1
c0 d 2
0
0
1
0
0
c1 d 2
0 .
1
1
Next, identify each of the basis tensor products with their symbolic vk (for R2 ) and
wj (for R3 ) to see it more clearly,
c0 d0 (v0 w0 )
+ c1 d0 (v1 w0 )
+
+
c0 d1 (v0 w1 )
c1 d1 (v1 w1 )
258
+
+
c0 d2 (v0 w2 )
c1 d2 (v1 w2 ) .
The basis tensors are listed in the conventional (V -major / W -minor format) allowing us to write down the coordinates of the tensor,
c0 d 0
c0 d 1
c0 d 2
c1 d 0 ,
c1 d 1
c1 d 2
as claimed.
QED
lm ,
which dont make reference to the two component vector spaces or the inherited V major / W -minor ordering we decided to use. However, for this to be useful, we need
an implied correspondence between these vectors and the inherited basis
n
v0 w0 , v0 w1 , v0 w2 , . . . , v0 wm1 ,
v1 w0 , v1 w1 , v1 w2 , . . . , v1 wm1 ,
...
o
This is all fine, as long as we remember that those self-referential tensor bases assume some agreed-upon ordering system, and in this course that will be the V -major
ordering.
To illustrate this, say we are working with the basis vector
0
0
0
R2 R3 .
0
1
0
In the rare times when we need to relate this back to our original vector spaces we
would count: The 1 is in position 4 (counting from 0), and relative to R2 and R3 this
means
4
1 3 + 1,
1
0
1
0
1
0
which we can confirm by multiplying out the RHS as we learned in the last section
0
0
1
1
0
00
0 1
0 0
1 0
1 1
10
0
0
0
.
0
1
0
This will be easy enough with our 2-dimensional component spaces H, but if you
arent prepared for it, you might find yourself drifting aimlessly when faced with a
long column basis tensor and dont know what to do with it.
260
0
1
.. .
.
lm2
lm1
The only thing worth noting here is the correspondence between this and the component spaces V and W . If we are lucky enough to have a separable tensor in our
hands, this would have the special form
c0 d 0
c0 d 1
c0 d 2
..
.
c1 d 0
c1 d 1
c1 d 2 ,
.
..
c d
2 0
c d
2 1
c d
2 2
..
.
and we might be able to figure out the component vectors from this. However, in
general, we dont have separable tensors. All we can say is that this tensor is a linear
combination of the lm basis vectors, and just accept that it has the somewhat random
components, i which we might label simply
0
1
.. ,
.
lm2
lm1
261
00
01
02
..
.
0(m1)
10
11
12
.
. ,
.
1(m1)
20
21
22
.
..
2(m1)
..
.
with the awareness that these components kj may not be products of two factors
ck dj originating in two vectors (c0 , . . . , cl1 )t and (d0 , . . . , dm1 )t .
One thing we do know: Any tensor can be written as a weighted-sum of, at most,
lm separable tensors. (If this is not immediately obvious, please review the tensor
product basis theorem.)
Example. Lets compute the coordinates of the non-separable tensor
1
0
3
1
=
2
+
1
6
0
1
in the natural basis. We
component and adding,
1
3 2
= +
6 2
0 1
1
3
0
6
1
+ 1
6
0
12
0
6
0
3
7
1 + 3
6 .
12
6
Tensors as Matrices
If the two component spaces have dimension l and m, we know that the product space
has dimension lm. More recently weve been talking about how these lm coordinates
262
00
01
02
0(m1)
10
11
12
1(m1)
..
,
..
..
..
.
.
.
.
.
.
.
(l1)0 (l1)1 (l1)2 (l1)(m1)
c0 d0
c0 d 1
c0 d2 c0 dm1
c1 d0
c1 d 1
c1 d2 c1 dm1
..
.
..
..
..
...
.
.
.
.
cl1 d0 cl1 d1 cl1 d2 cl1 dm1
This matrix is not to be interpreted as a linear transformation of either component
space it is just a vector in the product space. (It does have a meaning as scalarvalued function, but well leave that as a topic for courses in relativity, particle physics
or structural engineering.)
Sometimes the lm column vector model serves tensor imagery best, while other
times the l m matrix model works better. Its good to be ready to use either one
as the situation demands.
10.3
The product space is a vector space, and as such supports linear transformations.
Everything we know about them applies here: they have matrix representations in
the natural basis, some are unitary, some not, some are Hermitian, some not, some
have inverses, some dont, etc. The only special and new topic that confronts us is
how the linear transformation of the two component spaces, V and W , inform those
of the product space, V W .
The situation will feel familiar. Just as tensors in V W fall into two main
classes, those that are separable (products, v w, of two vectors) and those that
arent (they are built from sums of separable tensors), the operators on V W have
a corresponding breakdown.
10.3.1
Separable Operators
Av Bw ,
[A B] + [A B]
[A B] (c)
c [A B] .
and
v0 + 2v1
v1
and B be defined on R3 by
Bw
w0
w1 .
w2
w0
v0 + 2v1
[A B](v w) =
w1 ,
v1
w2
and this is extended linearly to general tensors. To get specific, we apply A B to
the tensor
1
0
3
1
=
2
+
1
6
0
1
264
to get
[A B]
0
3 + (12)
1+0
2
+
6
0
2
0
9
1
6
0
2
0
1
9
1
[A B] =
2
+
1 .
6
0
1
We can always forsake the separable components and instead express this as a column
vector in the product space by adding the two separable tensors,
0
9
1
[A B] =
2 +
6
0
2
9
0
9
18
17
2
9
9 2
=
+ 0 = 6 .
6
12
0
12
6 2
0
6 2
10.3.2
We can write down the matrix for any separable operator using a method very similar
to that used to produce the coordinate representation of a separable tensor product of
vectors. As you recall, we used a V -major format which multiplied the entire vector
w by each coordinate of v to build the result. The same will work here. We use an
A-major format, i.e., the left operators coordinates change more slowly than the
right operators. Stated differently, we multiply each element of the left matrix by
265
a00
a10
..
.
the right.
a01 a0(l1)
b00 b01
b00 b01
b00
a b10 b11 a b10
01
00
..
.. . .
..
.
.
.
b00 b01
b00
b10
a10 b10 b11
a11
..
.. . .
..
.
.
.
.
..
.
b0(m1)
b1(m1)
..
..
.
.
b01
b11
.. . .
b01
b11
.. . .
.
.
..
..
.
.
This works based on a V -major column format for the vectors in the product space.
If we had used a W -major column format, then we would have had to define the
product matrix using a B-major rule rather than the A-major rule given above.
Example. The matrices for the A and B of our last example are given by
0 0
1 2
A =
and B = 0 0 ,
0 1
0 0
so the tensor product transformation A B is immediately written down as
0 0
2 0 0
0 0
0 2 0
0 0
0 0 2
.
AB =
0 0 0
0 0
0 0 0
0 0
0 0 0
0 0
Example. Weve already applied the definition of a tensor operator directly to
compute the above operator applied to the tensor
3
7
1
0
1 + 3
3
1
=
2
+
1
=
6
0
6
1
12
6
266
9
17
9 2
[A B] =
12
6 2
It now behooves us to do a sanity check. We must confirm that multiplication of
by the imputed matrix for A B will produce the same result.
0 0
2 0 0
3
0 0
0 2 0
0 0
7
0
0
2
1 + 3
[A B] =
6 = ?
0 0 0
0 0
12
0 0 0
0 0
6
0 0 0
0 0
[Exercise. Do the matrix multiplication, fill in the question mark, and see if it
agrees with the column tensor above.]
10.3.3
As with the vectors in the product space (i.e., tensors), we can represent any operator
on the product space as sum of no more than (lm)2 separable operators, since there
are that many elements in a tensor product matrix. The following exercise will make
this clear.
[Exercise. Let the dimensions of our two component spaces be l and m. Then
pick two integers p and q in the range 0 p, q < lm. Show that the matrix
Ppq
which has a 1 in position (p, q) and 0 in all other positions, is separable. Hint: You
need to find an l l matrix and an m m matrix whose tensor product has 1 in the
right position and 0s everywhere else. Start by partitioning Ppq into sub-matrices of
size m m. Which sub-matrix does the lonely 1 fall into? Where in that m m
sub-matrix does that lonely 1 fall? ]
[Exercise. Show that the set of all (lm)2 matrices
n
o
Ppq 0 p, q < lm ,
spans the set of linear transformations on V W .]
267
10.3.4
Before we move on to multi-qubit systems, here are a few more things for you may
wish to ponder.
[Exercise. Is the product of unitary operators unitary in the product space?]
[Exercise. Is the product of Hermitian operators Hermitian in the product
space?]
[Exercise. Is the product of invertible operators invertible in the product space?]
268
Chapter 11
Two Qubits and Binary Quantum
Gates
|i2
11.1
269
11.2
11.2.1
The reason that we go tensor for a two-qubit system is that the two bits may
become entangled (to be defined below). That forces us to treat two bits as if they
were a single state of a larger state space rather than keep them separate.
Definition of Two Qubits. A Two-qubit system is (any copy of ) the
entire product space H H.
Definition of a Two-Qubit Value. The value or state of a twoqubit system is any unit (or normalized) vector in H H.
In other words, two qubits form a single entity the tensor product space H H
whose value can be any vector (which happens also to be a tensor) on the projective
sphere of that product space.
The two-qubit entity itself is not committed to any particular value until we say
which specific unit-vector in H H we are assigning it.
Vocabulary and Notation
Two qubits are often referred to as a bipartite system. This term is inherited from
physics in which a composite system of two identical particles (thus bi-parti -te) can
be entangled.
To distinguish the two otherwise identical component Hilbert spaces I may use
subscripts, A for the left-space and B for the right space,
HA HB .
Another notation you might see emphasizes the order of the tensor product, that is,
the number of component spaces in our current case, two,
H(2) .
In this lesson, we are concerned with order-2 products, with a brief but important
section on order-3 products at the very end.
Finally, note that in the lesson on tensor products, we used the common abstract
names V and W for our two component spaces. In quantum computation the component spaces are usually called A and B. For example, whereas in that lecture I
talked about V -major ordering for the tensor coordinates, Ill now refer to A-major
ordering.
270
11.2.2
First and foremost, we need to establish symbolism for the computation basis states
(CBS ) of our product space. These states correspond to the two-bits of classical
computing, and they allow us to think of two ordinary bits as being embedded within
the rich continuum of a quantum bipartite state space.
Symbols for Basis Vectors
The tensor product of two 2-D vector spaces has dimension 2 2 = 4. Its inherited
preferred basis vectors are the separable products of the component space vectors,
{ |0i |0i , |0i |1i , |1i |0i , |1i |1i } .
These are the CBS of the bipartite system. There are some shorthand alternatives in
quantum computing.
|0i |0i |0i |0i |00i
|0i2
|1i2
|2i2
|3i2
All three of the alternatives that lack the symbol are seen frequently in computer
science, and we will switch between them freely based on the emphasis that the
context requires.
The notation of the first two columns admits the possibility of labeling each of
the component kets with the H from whence it came, A or B, as in
|0iA |0iB |0iA |0iB
|0iA |1iB |0iA |1iB
etc.
I will often omit the subscripts A and B when the context is clear and include them
when I want to emphasize which of the two component spaces the vectors comes from.
The labels are always expendable since the A-space ket is the one on the left and the
B-space ket is the one on the right. I will even include and/or omit them in the same
string of equalities, since it may be clear in certain expressions, but less so in others:
U
|0iA + |1iA |1iB
= U |0i |1i + |1i |1i = |0i |0i + |0i |1i
= |0iA |0iB + |1iB
(This is an equation we will develop later in this lesson.)
271
coordinates
|0i |0i
|0i2
1
0
0
|0i |1i
|1i2
0
1
0
|1i |0i
|2i2
0
0
1
|1i |1i
|3i2
0
0
0
This table introduces an exponent-like notation, |xi2 , which is needed mainly in the
encoded form, since an integer representation for a CBS does not disclose its tensor
order (2 in this case) to the reader, while the other representations clearly imply that
we are looking at two-qubits.
11.2.3
The CBS are four special separable tensors. There is an infinity of other separable
tensors in H H of the form
|i |i |i |i .
Note that, unlike the CBS symbolism, there is no further alternative notation for a
general separable tensor. In particular, |i makes no sense.
11.2.4
.
|0i |0ix = |0iA
2
The basis we use as input(s) to the quantum circuit and along which we measure
the output of the same circuit is what we mean when we speak of the computational
basis. By convention, this is the z-basis. It corresponds to the classical bits
|0i |0i [00]
|0i |1i [01]
|1i |0i [10]
|1i |1i [11]
Nevertheless, in principle there is nothing preventing us from having hardware that
measures output along a different orthonormal basis.
[Future Note. In fact, in the later courses, CS 83B and 83C, we will be considering measurements which are not only made with reference to a different observable
(than Sz ) but are not even bases: they are neither orthonormal nor linearly independent. These go by the names general measurement operators or positive operator
valued measures, and are the theoretical foundation for encryption and error correction.]
273
Example
We expand the separable state
|0ix |1iy
along the (usual) computational basis in H H.
|0i + |1i
|0i i |1i
|0ix |1iy =
2
2
|00i i |01i + |10i i |11i
.
2
As a sanity check we compute the modulus-squared of the product directly, i.e., by
summing the magnitude-squared of the four amplitudes,
2
1
1
i
i
1
1
i
i
=
+
+
+
|0ix |1iy
2
2
2
2
2
2
2
2
=
1,
This is the kind of test you can perform in the midst of a large computation to
be sure you havent made an arithmetic error.
We might have computed the modulus-squared of the tensor product using one
of two other techniques, and it wont hurt to confirm that we get the same 1 as an
answer. The techniques are
the definition of inner product in the tensor space (the product of component
inner-products),
2
= h |0ix |1iy |0ix |1iy i
|0ix |1iy
=
h0 | 0ix
h1 | 1iy
11
X,
or the adjoint conversion rules to form the left bra for the inner product,
2
=
h1|
h0|
|0i
|1i
=
h1|
h0
|
0i
|1iy
|0ix |1iy
y
x
x
y
y
x
x
=
h1 | 1iy
X .
However, neither of these would have been as thorough a check of our arithmetic as
the first approach.
[Exercise. Expand the separable state
.1 |0i + i .9 |1i
i .7 |0i + .3 |1i
How the Second Order x-Basis Looks when Expressed in the Natural Basis
If we combine the separable expression of the x-CBS kets,
|+i |+i , |+i |i , |i |+i ,
and
|i |i .
with the expansion of each component ket along the natural basis,
|+i
H |0i
|0i + |1i
|i
H |1i
|0i |1i
,
2
and
the four x-kets look like this, when expanded along the natural basis:
|+i |+i
|+i |i
|i |+i
|i |i
11.2.5
The exponent 2 on the LHS is, as mentioned earlier, a clue to the reader that |i
lives in a second-order tensor product space, a detail that might not be clear without
looking at the RHS. In particular, nothing is being squared.
275
11.2.6
This brings us to the common definition of a two-qubit system, which avoids the
above formalism.
Alternative Definition of Two Qubits. Two qubits are represented by a variable superposition of the four tensor basis vectors of HH,
|i2
1.
|i2
or
We may also use alternate notation for scalars, especially when we prepare for higherorder product spaces:
|i2
or
|i2
11.3
11.3.1
:
11.3.2
Well go through the checklist on a not-particularly-practical gate, but one that has
the generality needed to cover future gates.
:
The Symbol
Every binary qubit gate has two input lines, one for each input qubit, and two output
lines, one for each output qubit. The label for the unitary transformation associated
with the gate, say U , is placed inside a box connected to its inputs and outputs.
U
Although the data going into the two input lines can become entangled inside the
gate, we consider the top half of the gate to be a separate register from the lower
half. This can be confusing to new students, as we cant usually consider each output
line to be independent of its partner the way the picture suggests. More (a lot more)
about this shortly.
Vocabulary. The top input/output lines form an upper A register (or A channel )
while the bottom form a lower B register (or B channel ).
A-register out
A-register in
B-register in
B-register out
277
| yi
U
|yi
| x yi
It is very important to treat the LHS as a single two-qubit input state, not two
separate single qubits, and likewise with the output. In other words, it is really
saying
U |xi |yi
| yi | x yi
or, using shorter notation,
U |xi |yi
| yi | x yi
Furthermore, |xi |yi only represents the four CBS, so we have to extend this linearly
to the entire Hilbert space.
Lets make this concrete. Taking one of the four CBS, say |10i, the above definition
tells us to substitute 1 x and 0 y, to get the gates output,
U |1i |0i
= | 0i | 1 0i = |1i |0i .
[Exercise. Compute the effect of U on the other three CBS.]
: The Matrix
In our linear transformation lesson, we proved that the matrix MT that represents
an operator T can be written by applying T to each basis vector, ak , and placing the
answer vectors in the columns of the matrix,
!
MT
T (v) is then just MT v. Applying the technique to U and the CBS {|xi |yi} we get
!
MU
278
Each of these columns must be turned into the coordinate representation in the
inherited tensor basis of the four U -values. Lets compute them. (Spoiler alert: this
was the last exercise):
0
0
MU
0
0
0
1
1
0
0
0
0
0
1
0
0
1
,
0
0
which is, indeed, unitary. Incidentally, not every recipe you might conjure for the four
values U (|xi |yi) will produce a unitary matrix and therefore not yield a reversible
and thus valid quantum gate. (We learned last time that non-unitary matrices do
not keep state vectors on the projective sphere and therefore do not correspond to
physically sensible quantum operations.)
[Exercise. Go through the same steps on the putative operator defined by
U |xi |yi
|x yi | x yi .
Is this matrix unitary? Does U constitute a realizable quantum gate?]
|i2 : Behavior on General State
The general state is a superposition of the four basis kets,
|i2
0 1
0 0
U |i2 =
0 0
1 0
=
0
0
1
0
0
0
Evidently, U leaves the amplitude of the CBS ket, |1i |0i, alone and permutes the
other three amplitudes.
& : Measurement
[Note: Everything in this section, and in general when we speak of measurement,
assumes that we are measuring the states relative to the natural computational basis,
the z-basis, unless explicitly stated otherwise. This is { |0i , |1i } for a single register
and { |00i , |01i , |10i , |11i } for both registers. We can measure relative to other
CBSs, but then some of the results below cannot be applied exactly as stated. Of
course, I would not finish the day without giving you an example.]
Now that we know what U does to input states we can make some basic observations about how it changes the measurement probabilities. Consider a states
amplitudes both before and after the application of the gate (access points P and Q,
respectively):
(
|i2
U |i2
collapses it to |00i with probability ||2 . Meanwhile, a look at the U |i2 s amplitudes
(point Q),
U |i2
reveals that measuring the output there will land it on the state |00i with probability
||2 . This was the inputs probability of landing on |01i prior to the gate; U has
shifted the probability that a ket will register a 01 on our meter to the probability
that it will register a 00. In contrast, a glance at |is pre- and post-U amplitudes
of the CBS |10i tells us that the probability of this state being measured after U is
the same as before: ||2 .
Measurement of Separable Output States. By looking at the expansion
coefficients of the general output state, we can usually concoct a simple input state
that produces
a separable
output. For example, taking = = 0 gives a separable
input, |0iA + |1iA |1iB , as well as the following separable output:
U
|0iA + |1iA
|1iB
= U |0i |1i + |1i |1i = |0i |0i + |0i |1i
= |0iA |0iB + |1iB
|0i + |1i
|0i
U
|1i
|0i + |1i
Q
Measuring the A-register at the output (point Q) will yield a 0 with certainty
(the coefficient of the separable CBS component, |0i, is 1) yet will tell us nothing
about the B-register, which has a ||2 probability of yielding a 0 and ||2 chance of
yielding a 1, just as it did before we measured A. Similarly, measuring B at point
Q will collapse that output register into one of the two B-space CBS states (with
the probabilities ||2 and ||2 ) but will not change a subsequent measurement of the
A-register output, still certain to show us a 0.
(If this seems as though Im jumping to conclusions, it will be explained formally
when we get the Born rule, below.)
A slightly less trivial separable output state results from the input,
!
!
!
!
2
6
2
6
|00i +
|01i +
|10i +
|11i
|i2 =
4
4
4
4
!
|0iA + |1iA
|0iB + 3 |1iB
.
=
2
2
(As it happens, this input state is separable, but thats not required to produce a
separable output state, the topic of this example. I just made it so to add a little
symmetry.)
281
The output state can be written down instantly by permuting the amplitudes
according to U s formula,
!
!
!
!
6
6
2
2
U |i2 =
|00i +
|01i +
|10i +
|11i
4
4
4
4
!
3 |0iA + |1iA
|0iB + |1iB
,
2
2
and I have factored it for you, demonstrating the output states separability. Measuring either output register at access point Q,
|0i + |1i
|0i + 3 |1i
2
3 |0i + |1i
2
|0i + |1i
2
Q
has a non-zero probability of yielding one of the two CBS states for its respective H,
but it wont affect the measurement of the other output register. For example, measuring the B-qubit-out will land it in it |0iB or |1iB with equal probability. Regardless
of which result we get, it will not affect a future measurement of the A-qubit-out which
has a 3/4 chance of measuring 0 and 1/4 chance of showing us a 1.
Measuring One Register of a Separable State. This is characteristic of
separable states, whether they be input or output. Measuring either register does not
affect the probabilities of the other register. It only collapses the component vector of
the tensor, leaving the other vector un-collapsed.
11.3.3
Quantum Entanglement
= |0iA
,
2
282
0 1 0 0
1
1
1 1
1 0
0 0 0 1
U |i2 =
=
0 0 1 0
2 0
2 0
1 0 0 0
0
1
|0iA |0iB + |1iA |1iB
,
2
clearly not factorable. Furthermore, unlike the separable output states we have studied, a measurement of either register forces its partner to collapse.
|0i
|0i + |1i
Q
For example, if we measure Bs output, and find it to be in state |1iB , since the
output ket has only one CBS tensor associated with that |1iB , namely |1iA |1iB , as
we can see from its form
|0i |0i + |1i |1i
,
2
we are forced to conclude that the A-register must have collapsed into its |1i state. If
this is not clear to you, imagine that the A-register had not collapsed to |1i. It would
then be possible to measure a 0 in the A-register. However, such a turn of events
would have landed a |1i in the B-register and |0i in the A-register, a combination
that is patently absent from the output kets CBS expansion, above.
Stated another way (if you are still unsure), there is only one bipartite state here,
and if, when expanded along the CBS basis, one of the four CBS kets is missing
from that expansion that CBS ket has a zero probability of being the result of a
measurement collapse. Since |0i |1i is not in the expansion, this state is not accessible
through a measurement. (And by the way, the same goes for |1i |0i.)
Definition of Quantum Entanglement
An entangled state in a product space is one that is not separable.
283
Non-Locality
Entangled states are also said to be non-local, meaning that if you are in a room
with only one of the two registers, you do not have full control over what happens
to the data there; an observer of the other register in a different room may measure
his qubit and affect your data even though you have done nothing. Furthermore, if
you measure the data in that register, your efforts are not confined to your room but
extend to the outside world where the other register is located. Likewise, separable
states are considered local, since they do allow full segregation of the actions on
separate registers. Each observer has total control of the destiny of his register, and
his actions dont affect the other observer.
The Entanglement Connection
Non-separable states are composed of entangled constituents. While each constituent
may be physically separated in space and time from its partner in the other register,
the two parts do not have independent world lines. Whatever happens to one affects
the other.
This is the single most important and widely used phenomenon in quantum computing, so be sure to digest it well.
Partial Collapse
In this last example a measurement and collapse of one register completely determined
the full and unique state of the output. However, often things are subtler. Measuring
one register may have the effect of only partially collapsing its partner. Well get to
that when we take up the Born rule.
Measurements Using a Different CBS
Everything weve done in the last two sections is true as long as we have a consistent
CBS from start to finish. The definition of U , its matrix and the measurements have
all used the same CBS. But funny things happen if we use a different measurement
basis than the one used to define the operator or express its matrix. Look for an
example in a few minutes.
Now lets do everything again, this time for a famous gate.
11.3.4
This is the simplest and most commonly used binary gate in quantum computing.
284
:
The Symbol
The A-register is often called the control bit, and the B-register the target bit,
control bit
target bit
|xi
|x yi
|yi
When viewed on the tiny set of four CBS tensors, it appears to leave the A-register
unchanged and to negate the B-register qubit or leave it alone, based on whether the
A-register is |1i or |0i:
(
|yi ,
if x = 0
|yi 7
| yi ,
if x = 1
Because of this, the gate is described as a controlled-NOT operator. The A-register is
called the control bit or control register, and the B-register is the target bit or target
register. We cannot use this simplistic description on a general state, however, as the
next sections will demonstrate.
285
: The Matrix
We compute the column vectors of the matrix by applying CNOT to the CBS tensors
to get
!
MCNOT
!
|00i , |01i , |11i , |10i
1
0
0
0
0
1
0
0
0
0
0
1
0
0
1
0
we get
CNOT |i2
0
1
0
0
0
0
0
1
0
1
0
1
0
0
0
A Meditation. If you are tempted to read on, feeling that you understand
everything we just covered, see how quickly you can answer this:
[Exercise. The CNOT is said to leave the source register unchanged and flip
the target register only if the source register input is |1i. Yet the matrix for CNOT
seems to always swap the last two amplitudes, , of any ket. Explain this.]
Caution. If you cannot do the last exercise, you should not continue reading, but
review the last few sections or ask a colleague for assistance until you see the light.
This is an important consequence of what we just covered. It is best that you apply
that knowledge to solve it rather than my blurting out the answer for you.
286
& : Measurement
First, well consider the amplitudes before and after the application of CNOT (access
points P and Q, respectively):
(
|i2
CNOT |i2
will yield a 00 with probability ||2 and 01 with probability ||2 . A post-gate
measurement of CN OT |i2 (point Q),
CNOT |i2
will yield those first two readings with the same probabilities since their kets respective amplitudes are not changed by the gate. However, the probabilities of getting a
10 vs. a 11 reading are swapped. They go from ||2 and ||2 before the gate to
||2 and ||2 , after.
Separable Output States. Theres nothing new to say here, as we have covered
all such states in our learning example. Whenever we have a separable output state,
measuring one register has no affect on the other register. So while a measurement
of A causes it to collapse, B will continue to be in a superposition state until we
measure it (and vice versa).
Quantum Entanglement for CNOT
A separable bipartite state into CNOT gate does not usually result in a separable
state out of CNOT. To see this, consider the separable state
|0i + |1i
2
|0i
|i
= |0ix |0i =
2
going into CNOT:
|0i + |1i
|0i
?
?
287
When presented with a superposition state into either the A or B register, back
away very slowly from your circuit diagram. Turn, instead, to the linear algebra,
which never lies. The separable state should be resolved to its tensor basis form by
distributing the product over the sums,
1
|0i + |1i
1
1
1
CNOT (|00i) + CNOT (|10i)
2
2
1
1
|00i + |11i
2
2
|00i + |11i
.
2
This is the true output of the gate for the presented input. It is not separable as is
obvious by its simplicity; there are only two ways we might factor it: pulling out an
A-ket (a vector in the first H space) or pulling out a B-ket (a vector in the second H
space), and neither works.
(Repeat of ) Definition. An entangled state in a product space is
one that is not separable.
Getting back to the circuit diagram, we see there is nothing whatsoever we can
place in the question marks that would make that circuit sensible. Anything we
might try would make it appear as though we had a separable product on the RHS,
which we do not. The best we can do is consolidate the RHS of the gate into a single
bipartite qubit, indeed, an entangled state.
|0i + |1i
2
|0i
|00i + |11i
With an entangled output state such as this, measuring one output register causes the
collapse of both registers. We use this property frequently when designing quantum
algorithms.
Individual Measurement of Output Registers. Although we may have an
entangled state at the output of a gate, we are always allowed to measure each
288
register separately. No one can stop us from doing so; the two registers are distinct
physical entities at separate locations in the computer (or universe). Entanglement
and non-locality mean that the registers are connected to one another. Our intent to
measure one register must be accompanied by the awareness that, when dealing with
an entangled state, doing so will affect the other registers data.
When CNOTs Control Register gets a CBS ...
Now, consider the separable state
|i
=
|1i |0ix
|1i
|0i + |1i
|1i
|0i + |1i
?
?
I have chosen the A-register input to be |1i for variety. It could have been |0i with
the same (as of yet, undisclosed) outcome. The point is that this time our A-register
is a CBS while the B-register is a superposition. We know from experience to ignore
the circuit diagram and turn to the linear algebra.
1
|0i + |1i
1
|1i
= |10i + |11i ,
2
2
2
and we apply CNOT
1
1
CNOT |10i + |11i
2
2
=
=
=
1
1
CNOT (|10i) + CNOT (|11i)
2
2
1
1
|11i + |10i
2
2
|1i + |0i
|1i
.
2
Aha separable. Thats because the control-bit (the A-register) is a CBS; it does not
change during the linear application of CNOT so will be conveniently available for
factoring at the end. Therefore, for this input, we are authorized to label the output
registers, individually.
|1i
|0i + |1i
|1i
|0i + |1i
289
The two-qubit output state is unchanged. Not so fast. You have to do an ...
[Exercise. We are told that a |1i going into CNOTs control register means
we flip the B-register bit. Yet, the output state of this binary gate is the same as
the
Explain. Hint: Try the same example with a B-register input of
input state.
.3 |0i + .7 |1i. ]
[Exercise. Compute CNOT of an input tensor |1i |1ix . Does CNOT leave this
state unchanged?]
Summary. A CBS ket going into the control register (A) of a CNOT gate allows
us to preserve the two registers at the output: we do, indeed, get a separable state
out, with the control register output identical to the control register input. This is
true even if a superposition goes into the target register (B). If a superposition goes
into the control register, however, all bets are off (i.e., entanglement emerges at the
output).
What the Phrase The A-Register is Unchanged Really Means
The A, or control, register of the CNOT gate is said to be unaffected by the CNOT
gate, although this is overstating the case; it gives the false impression that a separable
bipartite state into CNOT results in a separable state out, which we see is not the
case. Yet, there are are at least two ways to interpret this characterization.
1. When a CBS state (of the preferred, z-basis) is presented to CNOTs A-register,
the output state is, indeed, separable, with the A-register unchanged.
2. If a non-trivial superposition state is presented to CNOTs A-register, the measurement probabilities (relative to the z-basis) of the A-register output are preserved.
We have already demonstrated item 1, so lets look now at item 2. The general
state, expressed along the natural basis is
|i2
290
one outcome is |0iB and of the other outcome is |1iB ). Therefore, to get the overall
probability that A collapses to |0iA , we simply add those two probabilities:
P (A-reg output & |0i) = P CNOT |i & |00i
+ P CNOT |i & |01i
=
||2 + ||2 ,
which is exactly the same probability of measuring a 0 on the input, |i, prior to
applying CNOT.
[Exercise. We did not compute the probability of measuring a 0 on the input,
|i. Do that that to confirm the claim.]
[Exercise. What trait of QM (and postulate) tells us that the individual probabilities are ||2 and ||2 ?]
[Exercise. Compute the probabilities of measuring a 1 in the A-register both
before and after the CNOT gate. Caution: This doesnt mean that we would measure
the same prepared state before and after CNOT. Due to the collapse of the state after
any measurement, we must prepare many identical states and measure some before
and others after then examine the outcome frequencies to see how the experimental
probabilities compare.]
Measuring Using a Different CBS
Now we come to the example that I promised: measuring in a basis different from the
one used to define and express the matrix for the gate. Lets present the following
four bipartite states
|00ix , |01ix , |10ix , and |11ix
to the input of CNOT and look at the output (do a measurement) in terms of the
x-basis (which consist of those four tensors).
|00ix : This one is easy because the z-coordinates are all the same.
|00ix
|0ix |0ix
1
|0i + |1i
1
|0i + |1i
1
2 1
2
2
1
CNOT applied to |00ix swaps the last two z-coordinates, which are identical,
so it is unchanged.
CNOT |00ix
|00ix
291
|01ix
|0ix |1ix
1
|0i + |1i
|0i |1i
1
1
=
2 1
2
2
1
1
1 0 0 0
0 1 0 0 1 1
CNOT |01ix =
0 0 0 1 2 1
1
0 0 1 0
1
1
2 1
1
From here its easier to expand along the z-basis so we can factor,
1
CNOT |01ix =
|0i |0i |0i |1i |1i |0i + |1i |1i
2
1
=
|0i |0i |1i |0i |0i |1i + |1i |1i
2
1
|0i |1i |0i
|0i |1i |1i
=
2
1
=
|0i |1i |0i |1i
2
|0i |1i
|0i |1i
=
= |1ix |1ix = |11ix .
2
2
What is this? Looking at it in terms of the x-basis it left the B-register unchanged at |1ix but flipped the A-register from |0ix to |1ix .
Looking back at the |00ix case, we see that when the B-register held a qubit in
the Sx =0 state, the A-register was unaffected relative to the x-basis.
This looks suspiciously as though the B-register is now the control bit and A is
the target bit and, in fact, a computation of the remaining two cases, |10ix and
|11ix , would bear this out.
|10ix : CNOT |10ix
|10ix [Exercise.]
|01ix [Exercise.]
292
=
=
=
=
|00ix
|11ix
|10ix
|01ix ,
demonstrating that, in the x-basis, the B-register is the control and the A is the
target.
[Preview. We are going to revisit this in a circuit later today. For now, well call
it an upside-down action of the CNOT gate relative to the x-basis and later see how
to turn it into an actual upside-down CNOT gate for the natural CBS kets. When
we do that, well call it CNOT, because it will be controlled from bottom-up in
the z-basis. So for, however, weve only produced this bottom-up behavior in the
x-basis, so the gate name does not change.]
What About Measurement? I advertised this section as a study of measurement, yet all I did so far was make observations about the separable components of
the output which was an eye-opener in itself. Still, lets bring it back to the topic
of measurement.
Take any of the input states, say |10ix . Then the above results say that
CNOT |10ix
|10ix .
To turn this into a statement about measurement probabilities, we dot the output
state with the x-basis kets to get the four amplitudes. By orthogonality of CBS,
x h00 | 10ix
x h01 | 10ix
x h10 | 10ix
x h11 | 10ix
0,
producing the measurement probability of 100% for the state, |10ix . In other words,
for this state which has a B input of 0 (in x-coordinates) its output remains 0
with certainty, while As 1 (again, x-coordinates) is unchanged, also with certainty.
On the other hand, the input state |11ix gave us an output of |01ix , so the output
amplitudes become
x h00 | 01ix
x h01 | 01ix
x h10 | 01ix
x h11 | 01ix
0.
Here the input whose B x-basis component is 1 turns into an output with the B
x-basis component remaining 1 (with certainty) and an A x-basis input 1 becoming
flipped to 0 at the output (also with certainty).
This demonstrates that such statements like The A-register is left unchanged
or The A register is the control qubit, are loosey-goosey terms that must be taken
with a grain of salt. They are vague for non-separable states (as we saw, earlier) and
patently false for measurements in alternate CBSs.
293
11.3.5
We construct the second order Hadamard gate by forming the tensor product of two
first order gates, so wed better first review that operator.
Review of the First Order Hadamard Gate
Recall that the first order Hadamard gate operates on the 2-dimensional Hilbert space,
H, of a single qubit according to its effect on the CBS states
H |0i
|0ix
|0i + |1i
H |1i
|1ix
|0i |1i
,
2
and
.
2
1 1
,
1 1
H |i =
|0i +
|1i .
2
2
:
Definition. The second order Hadamard gate, also called the two-qubit or binary
Hadamard gate, is the tensor product of two single-qubit Hadamard gates,
H 2
H H.
Notation. You will see both H H and H 2 when referring to the second order
Hadamard gate. In a circuit diagram, it looks like this
H 2
when we want to show the separate A and B register, or like this
/
H 2
294
when we want to condense the input and output pipes into a multi-pipe. However, it
is often drawn as two individual H gates applied in parallel,
H
H
|xi |yi : Action on the CBS
When a two-qubit operator is defined as a tensor product of two single-qubit operators, we get its action on CBS kets free-of-charge, compliments of the definition of
product operator. Recall that the tensor product of two operators T₁ and T₂ requires
that its action on separable tensors be

    [T₁ ⊗ T₂] (v ⊗ w) ≡ T₁v ⊗ T₂w .

This forces the action of H ⊗ H on the H(2) computational basis state |0⟩ |0⟩ (for
example) to be

    [H ⊗ H] |0⟩ |0⟩ = H|0⟩ ⊗ H|0⟩ = ( (|0⟩ + |1⟩)/√2 ) ⊗ ( (|0⟩ + |1⟩)/√2 ) .
Separability of CBS Output. Let's pause a moment to appreciate that when an
operator in H(2) is a pure product of individual operators, as this one is, CBS states
always map to separable states. We can see this in the last result, and we know it will
happen for the other three CBS states.
CBS Output in z-Basis Form. Separable or not, it's always good to have
the basis-expansion of the gate output for the four CBS kets. Multiplying it out for
H⊗2 (|0⟩ |0⟩), we find

    [H ⊗ H] |0⟩ |0⟩ = (1/2) ( |0⟩|0⟩ + |0⟩|1⟩ + |1⟩|0⟩ + |1⟩|1⟩ ) = (1/2) ( 1, 1, 1, 1 )ᵗ .

Doing the same thing for all four CBS kets, we get the identities

    H⊗2 |0⟩|0⟩ = (1/2) ( |00⟩ + |01⟩ + |10⟩ + |11⟩ ) ,
    H⊗2 |0⟩|1⟩ = (1/2) ( |00⟩ − |01⟩ + |10⟩ − |11⟩ ) ,
    H⊗2 |1⟩|0⟩ = (1/2) ( |00⟩ + |01⟩ − |10⟩ − |11⟩ )   and
    H⊗2 |1⟩|1⟩ = (1/2) ( |00⟩ − |01⟩ − |10⟩ + |11⟩ ) .
Condensed Form #1. There is a single-formula version of these four CBS results,
and it is needed when we move to three or more qubits, so we had better develop it
now. However, it takes a little explanation, so we allow a short side trip.

First, let's switch to encoded notation (0 ↔ 00, 1 ↔ 01, 2 ↔ 10 and 3 ↔ 11),
and view the above in the equivalent form,

    H⊗2 |0⟩² = (1/2) ( |0⟩² + |1⟩² + |2⟩² + |3⟩² ) ,
    H⊗2 |1⟩² = (1/2) ( |0⟩² − |1⟩² + |2⟩² − |3⟩² ) ,
    H⊗2 |2⟩² = (1/2) ( |0⟩² + |1⟩² − |2⟩² − |3⟩² )   and
    H⊗2 |3⟩² = (1/2) ( |0⟩² − |1⟩² − |2⟩² + |3⟩² ) .

Next, I will show (with your help) that all four of these can be summarized by the
single equation (note that x goes from the encoded value 0 to 3)

    H⊗2 |x⟩² = (1/2) Σ_{y=0}^{3} (−1)^{x ⊙ y} |y⟩² ,

where the operator ⊙ in the exponent of (−1) is the mod-2 dot product that was
defined for the unusual vector space, B², during the lesson on single qubits. I'll
repeat it here in the current context. For two CBS states,

    x = x₁x₀   and   y = y₁y₀ ,

where the RHS represents the two-bit string of the CBS (00, 01, etc.), we define

    x ⊙ y ≡ x₁y₁ ⊕ x₀y₀ ,
that is, we multiply corresponding bits and take their mod-2 sum. To get overly
explicit, here are a few computed mod-2 dot products:

    x = x₁x₀     y = y₁y₀     x ⊙ y
    3 = 11       3 = 11         0
    1 = 01       1 = 01         1
    0 = 00       2 = 10         0
    1 = 01       3 = 11         1
    1 = 01       2 = 10         0
I'll now confirm that the last expression presented above for H⊗2 |x⟩², when applied
to the particular CBS |3⟩² = |1⟩ |1⟩, gives the right result:

    H⊗2 |3⟩² = (1/2) Σ_{y=0}^{3} (−1)^{3 ⊙ y} |y⟩²
             = (1/2) ( |0⟩² − |1⟩² − |2⟩² + |3⟩² )
             = (1/2) ( |00⟩ − |01⟩ − |10⟩ + |11⟩ ) = H⊗2 |11⟩ ,

which matches the final of our four CBS equations for H⊗2. Now, for the help I
warned you would be giving me.

[Exercise. Show that this formula produces the remaining three CBS expressions
for H⊗2.]
We now present the condensed form using summation notation,

    H⊗2 |x⟩² = (1/2) Σ_{y=0}^{3} (−1)^{x ⊙ y} |y⟩² .

Most authors typically don't use the ⊙ for the mod-2 dot product, but stick with a
simpler ·, and add some verbiage to the effect that "this is a mod-2 dot product ...",
in which case you would see it as

    H⊗2 |x⟩² = (1/2) Σ_{y=0}^{3} (−1)^{x · y} |y⟩² .
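[Code Aside. For readers who like to verify algebra numerically, here is a minimal Python/numpy sketch of condensed form #1; the snippet and its helper names are my own illustration, not part of the lesson proper.]

    import numpy as np

    H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)                       # the binary Hadamard, H (x) H

    def mod2_dot(x: int, y: int) -> int:
        # x (.) y = x1*y1 XOR x0*y0, for 2-bit encoded values x and y
        return bin(x & y).count("1") % 2

    for x in range(4):
        ket_x = np.zeros(4); ket_x[x] = 1    # the CBS ket |x> as coordinates
        rhs = np.array([(-1) ** mod2_dot(x, y) for y in range(4)]) / 2
        assert np.allclose(H2 @ ket_x, rhs)  # matches H2 |x> on all four kets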
Condensed Form #2. You may have noticed that the separability of the CBS
outputs, combined with the expression we already had for a single-qubit Hadamard,

    H |x⟩ = ( |0⟩ + (−1)^x |1⟩ ) / √2 ,
gives us another way to express H⊗2 in terms of the CBS. Once again, invoking the
definition of a product operator, we get

    [H ⊗ H] |x⟩ |y⟩ = H|x⟩ ⊗ H|y⟩
                    = ( (|0⟩ + (−1)^x |1⟩)/√2 ) ⊗ ( (|0⟩ + (−1)^y |1⟩)/√2 ) ,

which, with much less fanfare than condensed form #1, produces a nice separable
circuit diagram definition for the binary Hadamard:

    |x⟩ ──[      ]── ( |0⟩ + (−1)^x |1⟩ ) / √2
    |y⟩ ──[ H⊗2 ]── ( |0⟩ + (−1)^y |1⟩ ) / √2 ,

which I will call condensed form #2.
[Caution: The y in this circuit diagram labels the B-register CBS ket, a totally
different use from its indexing of the summation formula in condensed form #1.
However, in both condensed forms the use of y is standard.]
The reason for the hard work of condensed form #1 is that it is easier to generalize
to higher order qubit spaces, and proves more useful in those contexts.
[Exercise. Multiply out form #2 to show it in terms of the four CBS tensor kets.]
[Exercise. The result of the last exercise will look very different from condensed
form #1. By naming your bits and variables carefully, demonstrate that both results
produce the same four scalar amplitudes standing next to the CBS tensor terms.]
H⊗2 : The Matrix
This time we have two different approaches available. As before, we can compute
the column vectors of the matrix by applying H⊗2 to the CBS tensors. However,
the separability of the operator allows us to use the theory of tensor products to
write down the product matrix based on the two component matrices. The need for
frequent sanity checks wired into our collective computer science mindset induces us
to do both.
Method #1: Tensor Theory. Using the standard A-major (B-minor) method, a
separable operator A ⊗ B can be written down as a block matrix: each element a_jk
of A is replaced by the block a_jk B,

    A ⊗ B = [ a₀₀ B     a₀₁ B     ···   a₀(l−1) B
              a₁₀ B     a₁₁ B     ···   a₁(l−1) B
                ⋮          ⋮       ⋱         ⋮      ] .

Specializing to A = B = H gives

    H ⊗ H = (1/√2) [ H     H
                     H    −H ]

          = (1/2) [ 1    1    1    1
                    1   −1    1   −1
                    1    1   −1   −1
                    1   −1   −1    1 ] .

Method #2: Applying the Operator to the CBS. Alternatively, we compute
the column vectors by applying H⊗2 to the four CBS tensors and loading the results
into the columns of the matrix. Using our four expressions for H⊗2 |x⟩ |y⟩ presented
initially, we learn
    H⊗2 |00⟩ = (1/2) ( |00⟩ + |01⟩ + |10⟩ + |11⟩ ) = (1/2) (  1,  1,  1,  1 )ᵗ ,
    H⊗2 |01⟩ = (1/2) ( |00⟩ − |01⟩ + |10⟩ − |11⟩ ) = (1/2) (  1, −1,  1, −1 )ᵗ ,
    H⊗2 |10⟩ = (1/2) ( |00⟩ + |01⟩ − |10⟩ − |11⟩ ) = (1/2) (  1,  1, −1, −1 )ᵗ   and
    H⊗2 |11⟩ = (1/2) ( |00⟩ − |01⟩ − |10⟩ + |11⟩ ) = (1/2) (  1, −1, −1,  1 )ᵗ .
Therefore,

    M_{H⊗2} = ( H⊗2|00⟩ , H⊗2|01⟩ , H⊗2|10⟩ , H⊗2|11⟩ )

            = (1/2) [ 1    1    1    1
                      1   −1    1   −1
                      1    1   −1   −1
                      1   −1   −1    1 ] ,

in agreement with Method #1. Applying this matrix to the general state
|ψ⟩² = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩ yields

    H⊗2 |ψ⟩² = (1/2) ( α+β+γ+δ ,  α−β+γ−δ ,  α+β−γ−δ ,  α−β−γ+δ )ᵗ ,
which shows the result of applying the two-qubit Hadamard to any state.
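[Code Aside. A quick numpy confirmation of this general-state formula; the snippet is my own illustration, and the particular amplitudes are an arbitrary unit-length choice.]

    import numpy as np

    H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)

    a, b, c, d = 0.5, 0.5j, -0.5, 0.5        # alpha, beta, gamma, delta
    psi = np.array([a, b, c, d])             # a normalized two-qubit state

    expected = np.array([a + b + c + d,
                         a - b + c - d,
                         a + b - c - d,
                         a - b - c + d]) / 2
    assert np.allclose(H2 @ psi, expected)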
Measurement
There is no concise phrase we can use to describe how the binary Hadamard affects
measurement probabilities of a general state. We must be content to describe it in
terms of the algebra. For example, testing at point P (just before the gate) in

    |ψ⟩² ──(P)──[ H⊗2 ]──(Q)── H⊗2 |ψ⟩²

collapses |ψ⟩² to |00⟩ with probability |α|² (as usual). Waiting, instead, to take the
measurement of H⊗2 |ψ⟩² (point Q) would produce a collapse to that same |00⟩ with
the probability

    | (α + β + γ + δ)/2 |²  =  (α + β + γ + δ)(α + β + γ + δ)* / 4 ,
an accurate but relatively uninformative fact.
Measurement of CBS Output States. However, there is something we can
say when we present any of the four CBS tensors to our Hadamard. Because it's
separable, we can see its output clearly:

    |x⟩ ──[      ]── ( |0⟩ + (−1)^x |1⟩ ) / √2
    |y⟩ ──[ H⊗2 ]── ( |0⟩ + (−1)^y |1⟩ ) / √2 .

The four possible outputs are therefore

    |0⟩x |0⟩x ,   |0⟩x |1⟩x ,   |1⟩x |0⟩x   and   |1⟩x |1⟩x ,

or in alternate notation,

    |+⟩|+⟩ ,   |+⟩|−⟩ ,   |−⟩|+⟩   and   |−⟩|−⟩ .
Since H converts to-and-from the bases {|0⟩, |1⟩} and {|0⟩x, |1⟩x} in H(1), it is a
short two-liner to confirm that the separable H⊗2 converts to-and-from their induced
second order counterparts in H(2). I'll show this for one of the four CBS kets and you
can do the others. Let's take the third z-basis ket, |1⟩ |0⟩:

    H⊗2 |1⟩ |0⟩ = [H ⊗ H] ( |1⟩ ⊗ |0⟩ ) = ( H|1⟩ ) ⊗ ( H|0⟩ )
                = |1⟩x ⊗ |0⟩x = |−⟩|+⟩ .
In summary,

    H⊗2 |00⟩ = |++⟩ ,        H⊗2 |++⟩ = |00⟩ ,
    H⊗2 |01⟩ = |+−⟩ ,        H⊗2 |+−⟩ = |01⟩ ,
    H⊗2 |10⟩ = |−+⟩   and    H⊗2 |−+⟩ = |10⟩   and
    H⊗2 |11⟩ = |−−⟩ ,        H⊗2 |−−⟩ = |11⟩ .

11.4 Measuring Along an Alternate Basis
In the earlier study, when we were applying the naked CNOT gate directly to kets,
I described the final step as measuring the output relative to the x-basis. At the
time it was a bit of an informal description. How would it be done practically?
11.4.1 Measuring Along the x-Basis
We apply the observation of the last section. When we are ready to measure along
the x-basis, we insert the quantum gate that turns the x-basis into the z-basis, then
measure in our familiar z-basis. For a general circuit represented by a single operator
U we would use

    ──[   U   ]──[ H ]──(meter)
    ──[       ]──[ H ]──(meter)

to measure along the x-basis. (The meter symbols imply a natural z-basis measurement, unless otherwise stated.)
11.4.2 Measuring Along a General Basis

More generally, assume we have some alternate orthonormal basis, call it C,

    C = { |0′⟩ , |1′⟩ } ,

of our first order Hilbert space, H(1). Let's also assume we have the unary operator,
call it T, that converts from the natural z-CBS to the C-CBS,

    T : z-basis → C ,    T |0⟩ = |0′⟩ ,   T |1⟩ = |1′⟩ .

Because T is unitary, its inverse is T†, meaning that T† takes a C-CBS back to a
z-CBS,

    T† : C → z-basis ,    T† |0′⟩ = |0⟩ ,   T† |1′⟩ = |1⟩ .
Moving up to our second order Hilbert space, H(2), the operator that converts from
the induced 4-element z-basis to the induced 4-element C-basis is T ⊗ T, while to go
in the reverse direction we would use T† ⊗ T†,

    [T† ⊗ T†] |00′⟩ = |00⟩ ,    [T† ⊗ T†] |01′⟩ = |01⟩ ,
    [T† ⊗ T†] |10′⟩ = |10⟩ ,    [T† ⊗ T†] |11′⟩ = |11⟩ .

To measure a bipartite output state along the induced C-basis, we just apply T† ⊗ T†
instead of the Hadamard gate H ⊗ H at the end of the circuit:

    ──[   U   ]──[ T† ]──(meter)
    ──[       ]──[ T† ]──(meter)
Question to Ponder

How do we vocalize the measurement? Are we measuring the two qubits in the C-basis
or in the z-basis? It is crucial to our understanding that we tidy up our language.
There are two ways we can say it and they are equivalent, but each is very carefully
worded, so please meditate on them.

1. If we apply the T† gates first, then the subsequent measurement will be in the z-basis.

2. If we are talking about the original output registers of U before applying the T†
gates to them, we would say we are measuring that pair along the C-basis. This
version has built into it the implication that we are first sending the qubits
through the T†s, then measuring them in the z-basis.
How We Interpret the Final Results

If we happen to know that, just after the main circuit but before the final basis-transforming T† ⊗ T†, we had a C-CBS state, then a final reading that showed the
binary number x (x = 0, 1, 2 or 3) on our meter would imply that the original
bipartite state before the T†s was |x′⟩. No collapse takes place when we have a CBS
ket and measure in that CBS basis.

However, if we had some superposition state of C-CBS kets, |ψ⟩², at the end of our
main circuit but before the basis-transforming T† ⊗ T†, we have to describe things
probabilistically. Let's express this superposition as

    |ψ⟩² = Σ_{x=0}^{3} c_x |x′⟩ .

By linearity,

    [T† ⊗ T†] |ψ⟩² = Σ_{x=0}^{3} c_x |x⟩ ,

a result that tells us the probabilities of detecting the four natural CBS kets on our
final meters are the same as the probabilities of our having detected the corresponding
C-CBS kets prior to the final T† ⊗ T† gate. Those probabilities are, of course, |c_x|²,
for x = 0, 1, 2 or 3.
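[Code Aside. Here is a small numpy sketch of this bookkeeping, with T = H standing in for the basis-changing operator; the choice of example is mine, picked because H is unitary and takes the z-basis to the x-basis.]

    import numpy as np

    H     = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # T = H, so T-dag = H
    T2    = np.kron(H, H)                    # T (x) T : z-basis -> C-basis
    T2dag = T2.conj().T                      # T-dag (x) T-dag : C -> z

    c   = np.array([0.5, 0.5, 0.5, -0.5])    # the c_x along the C-basis
    psi = T2 @ c                             # |psi>^2 = sum_x c_x |x'>

    # After the final T-dag (x) T-dag, the z-basis amplitudes are the c_x,
    # so the meters report each x with probability |c_x|^2:
    assert np.allclose(T2dag @ psi, c)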
11.4.3 Operating a Circuit Entirely in the Alternate Basis

If we wanted to operate the circuit entirely using the alternate CBS, i.e., giving it
input states defined using alternate CBS coordinates as well as measuring along the
alternate basis, we would first create a circuit (represented here by a single operator
U) that works in terms of the z-basis, then surround it with the basis-converting T†
and T gates,

    ──[ T† ]──[   U   ]──[ T ]──
    ──[ T† ]──[       ]──[ T ]── ,

the T†s translating the incoming C-basis kets into z-basis kets ahead of U, and the
Ts translating U's output back.
11.4.4 A Look Ahead

We'll be using both separable and non-separable basis conversion operators today
and in future lessons.
11.5 More Binary Gates

We've seen three examples of two-qubit unitary operators: a general learning example,
the CNOT and the H⊗2. There is a tier of binary gates which are derivative of one
or more of those three, and we can place this tier in a kind of secondary category
and thereby leverage and/or parallel the good work we've already done.
11.5.1 The Controlled-U Gate

The Symbol

The controlled-U gate is drawn like the CNOT gate with CNOT's ⊕ operation replaced by the unary operator, U, that we wish to control:

    ──────●──────
    ────[ U ]────

For example, a controlled-bit flip (= controlled-QNOT) operator would be written
with an X in the box, whereas a controlled-phase shift operator, with a shift angle
of θ, would carry an R_θ there.
The A-register maintains its role of control bit/register, and the B-register the target
register, or perhaps more appropriately, target (unary) operator:

    control register  ──────●──────
    target operator   ────[ U ]────

There is no accepted name for the operator, but I'll use the notation CU (i.e.,
CX, C(R_θ), etc.) in this course.
|x⟩ |y⟩ : Action on the CBS

It is easiest and most informative to give the effect on the CBS when we know which
specific operator, U, we are controlling. Yet, even for a general U we can give a formal
expression using the power (exponent) of the matrix U. As with ordinary integer
exponents, U^n simply means multiply U by itself n times, with the usual convention
that U⁰ = 1:

    CU |x⟩ |y⟩ = |x⟩ ⊗ U^x |y⟩ .

In particular,

    CU |00⟩ = |00⟩ ,        CU |10⟩ = |1⟩ ⊗ U|0⟩ ,
    CU |01⟩ = |01⟩ ,        CU |11⟩ = |1⟩ ⊗ U|1⟩ .
The Matrix

The column vectors of the matrix are given by applying CU to the CBS tensors, to
get

    M_CU = ( |00⟩ , |01⟩ , |1⟩ ⊗ U|0⟩ , |1⟩ ⊗ U|1⟩ )

         = [ 1    0    0     0
             0    1    0     0
             0    0   U₀₀   U₀₁
             0    0   U₁₀   U₁₁ ] .
[Exercise. Prove that this is a separable operator for some U and non-separable
for others. Hint: What did we say about CNOT in this regard?]
[Exercise. Confirm unitarity.]
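[Code Aside. A numpy sketch of M_CU for an arbitrary unary U, with a numerical unitarity check; the helper controlled() is an invention of this snippet, not standard terminology.]

    import numpy as np

    def controlled(U: np.ndarray) -> np.ndarray:
        # 4x4 matrix of CU in the A-major CBS order |00>, |01>, |10>, |11>
        M = np.eye(4, dtype=complex)
        M[2:, 2:] = U                       # U occupies the lower-right block
        return M

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    R = np.diag([1, np.exp(1j * 0.7)])      # a phase-shift gate, theta = 0.7

    for U in (X, R):
        CU = controlled(U)
        assert np.allclose(CU.conj().T @ CU, np.eye(4))   # CU is unitary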
|ψ⟩² : Behavior on a General State

Applying CU to the general state

    |ψ⟩² = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩ ,

we get

    CU |ψ⟩² = M_CU ( α, β, γ, δ )ᵗ = ( α ,  β ,  U₀₀γ + U₀₁δ ,  U₁₀γ + U₁₁δ )ᵗ

            = α|0⟩|0⟩ + β|0⟩|1⟩ + (U₀₀γ + U₀₁δ)|1⟩|0⟩ + (U₁₀γ + U₁₁δ)|1⟩|1⟩ .
Measurement

There's nothing new we have to add here, since measurement probabilities of the
target register will depend on the specific U being controlled, and we have already
established that the measurement probabilities of the control register are unaffected
(see the CNOT discussion).
The Controlled-Z Gate

Let's get specific by looking at CZ.

The Symbol

    ──────●──────
    ────[ Z ]────

Recall the unary Z gate's effect on a CBS ket,

    Z |x⟩ = (−1)^x |x⟩ ;

this actually helps us understand the CZ action using a similar expression:

    CZ |x⟩ |y⟩ = (−1)^{xy} |x⟩ |y⟩ .

[Exercise. Prove this formula is correct. Hint: Apply the wordy definition of a
controlled Z-gate (leaves the B-reg alone if ... and applies Z to the B-reg if ...) to
the four CBS tensors and compare what you get with the formula (−1)^{xy} |x⟩ |y⟩ for
each of the four combinations of x and y.]
Explicitly,

    CZ |00⟩ = |00⟩ ,
    CZ |01⟩ = |01⟩ ,
    CZ |10⟩ = |10⟩   and
    CZ |11⟩ = −|11⟩ ,

whose coordinate vectors are the columns e₁, e₂, e₃ and −e₄.
The Matrix

The column vectors of the matrix were produced above, so we can write it down
instantly:

    CZ = [ 1   0   0    0
           0   1   0    0
           0   0   1    0
           0   0   0   −1 ] .
[Exercise. Prove that this (a) is not a separable operator and (b) is unitary.]
|ψ⟩² : Behavior on a General State

Applying CZ to the general state

    |ψ⟩² = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩ ,

we get

    CZ |ψ⟩² = ( α, β, γ, −δ )ᵗ = α|00⟩ + β|01⟩ + γ|10⟩ − δ|11⟩ .
Measurement

As we noticed with the unary Z-operator, the probabilities of measurement (along the
preferred CBS) are not affected by the controlled-Z. However, the state is modified.
You can demonstrate this the same way we did for the unary Z: combine |ψ⟩² and
CZ |ψ⟩² with a second tensor and show that you can produce distinct results.
Swapping Roles in Controlled Gates

We could have (and still can) turn any of our controlled gates upside down:

    ────[ U ]────
    ──────●──────

Now the B-register takes on the role of control, and the A-register becomes the target:

    target operator   ────[ U ]────
    control register  ──────●──────

Let's refer to this version of a controlled gate using the notation (caution: not
seen outside of this course) C↑U.

Everything has to be adjusted accordingly if we insist (which we do) on continuing to call the upper register the A-register, producing A-major (B-minor) matrices
and coordinate representations. For example, the action on the CBS becomes

    C↑U |x⟩ |y⟩ = U^y |x⟩ ⊗ |y⟩ ,
and the matrix we obtain (make sure you can compute this) is

    C↑U = [ 1     0    0     0
            0    U₀₀   0    U₀₁
            0     0    1     0
            0    U₁₀   0    U₁₁ ] ,

where the U_jk are the four matrix elements of M_U.
With this introduction, try the following exercise.

[Exercise. Prove that the binary quantum gates CZ and C↑Z are identical;
that is, show that a controlled-Z with the control on top equals a controlled-Z with
the control on the bottom. Hint: If they are equal on the CBS, they are equal, period.]
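[Code Aside. The exercise can also be checked numerically; this sketch of mine builds C↑Z from the swapped-roles matrix pattern above and compares it with CZ.]

    import numpy as np

    Z  = np.diag([1, -1])
    CZ = np.diag([1, 1, 1, -1])

    # C-up-Z assembled from the general C-up-U matrix pattern:
    CZ_up = np.array([[1, 0,       0, 0      ],
                      [0, Z[0, 0], 0, Z[0, 1]],
                      [0, 0,       1, 0      ],
                      [0, Z[1, 0], 0, Z[1, 1]]])
    assert np.allclose(CZ, CZ_up)           # identical gates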
11.5.2 Separable Binary Gates

The binary Hadamard gate, H ⊗ H, is just one example of many possible separable
gates which, by definition, are constructed as the tensor product of two unary operators. Take any two single-qubit gates, say X and H. We form the product operator,
and use tensor theory to immediately write down its matrix,

    X ⊗ H = [ 0·M_H    1·M_H
              1·M_H    0·M_H ]

          = (1/√2) [ 0    0    1    1
                     0    0    1   −1
                     1    1    0    0
                     1   −1    0    0 ] .
On separable inputs the two channels act independently,

    |ψ⟩ ──[ X ]── X|ψ⟩
    |φ⟩ ──[ H ]── H|φ⟩ ,

demonstrating the complete isolation of each channel, but only when separable states
are sent to the gate. If an entangled state goes in, an entangled state will come out.
The channels are still separate, but the input and output must be displayed as unified
states,

    |ψ⟩² ──[ X ⊗ H ]── (X ⊗ H) |ψ⟩² .
Separable Operators on Entangled States

A single-qubit operator, U, applied to one qubit of an entangled state is equivalent
to a separable binary operator like 1 ⊗ U or U ⊗ 1. Such local operations will
generally modify both qubits of the entangled state, so a separable operator's effect
is not restricted to only one channel for such states. To reinforce this concept, let U
be any unary operator on the A-space and form the separable operator

    U ⊗ 1 : H_A ⊗ H_B → H_A ⊗ H_B .

Let's apply this to a general |ψ⟩², allowing the possibility that this is an entangled
state.

    (U ⊗ 1) |ψ⟩² = [U ⊗ 1] ( α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩ )
                 = α U|0⟩ ⊗ |0⟩ + β U|0⟩ ⊗ |1⟩ + γ U|1⟩ ⊗ |0⟩ + δ U|1⟩ ⊗ |1⟩ .

Replacing U|0⟩ and U|1⟩ with their expansions along the CBS (and noting that the
matrix elements, U_jk, are the weights), we get

    (U ⊗ 1) |ψ⟩² = α ( U₀₀|0⟩ + U₁₀|1⟩ ) |0⟩ + β ( U₀₀|0⟩ + U₁₀|1⟩ ) |1⟩
                 + γ ( U₀₁|0⟩ + U₁₁|1⟩ ) |0⟩ + δ ( U₀₁|0⟩ + U₁₁|1⟩ ) |1⟩ .

Now, distribute and collect terms for the four CBS tensor basis, to see that

    (U ⊗ 1) |ψ⟩² = ( U₀₀ α + U₀₁ γ ) |00⟩ + ··· ,

and without even finishing the regrouping we see that the amplitude of |00⟩ can be
totally different from its original amplitude, proving that the new entangled state is
different, and potentially just as entangled.
[Exercise. Complete the unfinished expression I started for (U ⊗ 1) |ψ⟩², collect
terms for the four CBS kets, and combine the square-magnitudes of the |0⟩_B terms
to get the probability of B measuring a 0. Do the same for the |1⟩_B terms to get the
probability of B measuring a 1. Might the probabilities of these measurements be
affected by applying U to the A-register?]
Vocabulary. This is called performing a local operation on one qubit of an
entangled pair.
The Matrix. It will be handy to have the matrix for the operator U ⊗ 1 at
the ready for future algorithms. It can be written down instantly using the rules for
separable operator matrices (see the lesson on tensor products),

    U ⊗ 1 = [ U₀₀    0    U₀₁    0
               0    U₀₀    0    U₀₁
              U₁₀    0    U₁₁    0
               0    U₁₀    0    U₁₁ ] .
Example. Alice and Bob each share one qubit of the entangled pair

    |β₀₀⟩ ≡ ( |00⟩ + |11⟩ ) / √2 = (1/√2) ( 1, 0, 0, 1 )ᵗ ,

which you'll notice I have named |β₀₀⟩ for reasons that will be made clear later today
when we get to the Bell states. Alice will hold the A-register qubit (on the left of
each product) and Bob the B-register qubit, so you may wish to view the state using
the notation

    ( |0⟩_A |0⟩_B + |1⟩_A |1⟩_B ) / √2 .

Alice sends her qubit (i.e., the A-register) through a local QNOT operator,

    X = [ 0   1
          1   0 ] ,

and Bob does nothing. This describes the full local operator

    X ⊗ 1
applied to the entire bipartite state. We want to know the effect this has on the
total entangled state, so we apply the matrix for X ⊗ 1 to the state. Using our
pre-computed matrix for the general U ⊗ 1 with U = X, we get

    X ⊗ 1 |β₀₀⟩ = [ 0  0  1  0        (1/√2)        0
                    0  0  0  1          0      =   1/√2
                    1  0  0  0          0          1/√2
                    0  1  0  0 ]      (1/√2)         0

                = ( |01⟩ + |10⟩ ) / √2 .
We could have gotten the same result, perhaps more quickly, by distributing X ⊗ 1
over the superposition and using the identity for separable operators,

    [S ⊗ T] (v ⊗ w) = S(v) ⊗ T(w) .

Either way, the result will be used in our first quantum algorithm (superdense coding),
so it's worth studying carefully. To help you, here is an exercise.

[Exercise. Using the entangled state |β₀₀⟩ as input, prove that the local operators
Z ⊗ 1 and iY ⊗ 1 applied on Alice's end produce the entangled output states
( |00⟩ − |11⟩ )/√2 and ( |01⟩ − |10⟩ )/√2, respectively.]
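[Code Aside. A numpy version of the X ⊗ 1 example, and a start on the exercise; the snippet is my own illustration.]

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    I = np.eye(2)
    beta00 = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
    beta01 = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

    # Alice's local QNOT changes the entire entangled pair:
    assert np.allclose(np.kron(X, I) @ beta00, beta01)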
11.6 Measuring One Qubit of an Entangled Pair

Consider again the entangled state

    |β₀₀⟩ = ( |00⟩ + |11⟩ ) / √2 .

If we measure the A-register of this state, it will of course collapse to either |0⟩_A or
|1⟩_A, as all good single qubits must. That, in turn, forces the B-register into its version
of that same state, |0⟩_B or |1⟩_B, meaning if A collapsed to |0⟩_A, B would collapse to
|0⟩_B, and likewise for |1⟩; there simply are no terms present in |β₀₀⟩'s CBS expansion
in which A and B are in different states. In more formal language, the CBS terms in
which the A qubit and B qubit differ have zero amplitude and therefore zero collapse
probability.
To make sure there is no doubt in your mind, consider a different state,

    |ψ⟩² = ( |01⟩ + |10⟩ ) / √2 .

This time, if we measured the A-register and it collapsed to |0⟩_A, that would force
the B-register to collapse to |1⟩_B.

[Exercise. Explain this last statement. What happens to the B-register if we
measure the A-register and it collapses to |1⟩_A?]
All of this is an example of a more general phenomenon in which we measure one
qubit of an entangled bipartite state.
11.6.1 The Born Rule for Bipartite States

Take a general bipartite state,

    |ψ⟩² = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩ ,

and rearrange it so that either the A-kets or B-kets are factored out of common terms.
Let's factor the A-kets for this illustration:

    |ψ⟩² = |0⟩_A ( α|0⟩_B + β|1⟩_B ) + |1⟩_A ( γ|0⟩_B + δ|1⟩_B ) .

(I labeled the state spaces of each ket to reinforce which kets belong to which register,
but position implies this information even without the labels. I will often label a
particular step in a long computation when I feel it helps, leaving the other steps
unlabeled.)

What happens if we measure A and get a 0? Since there is only one term which
matches this state, namely,

    |0⟩ ( α|0⟩ + β|1⟩ ) ,

we are forced to conclude that the B-register is left in the non-normalized state

    α|0⟩ + β|1⟩ .
There are a couple things that may be irritating you at this point.

1. We don't actually have a QM postulate that seems to suggest this claim.

2. We are not comfortable that B's imputed state is not normalized.

We are on firm ground, however, because when the postulates of quantum mechanics
are presented in their full generality, both of these concerns are addressed. The
fifth postulate of QM (Trait #7), which addresses post-measurement collapse, has
a generalization sometimes called the generalized Born rule. For the present, we'll
satisfy ourselves with a version that applies only to a bipartite state's one-register
measurement. We'll call it the . . .
Trait #15 (Born Rule for Bipartite States): If a bipartite state is factored
relative to the A-register,

    |ψ⟩² = |0⟩ ( α|0⟩ + β|1⟩ ) + |1⟩ ( γ|0⟩ + δ|1⟩ ) ,

a measurement of the A-register will cause the collapse of the B-register according to

    A ↘ 0   ⟹   B ↘ ( α|0⟩ + β|1⟩ ) / √( |α|² + |β|² )   and

    A ↘ 1   ⟹   B ↘ ( γ|0⟩ + δ|1⟩ ) / √( |γ|² + |δ|² ) .

Note how this handles the non-normality of the state α|0⟩ + β|1⟩: we divide through
by the norm √( |α|² + |β|² ). (The same is seen for the alternative state γ|0⟩ + δ|1⟩.)
Vocabulary. We'll call this simply the Born Rule.

Trait #15 has a partner which tells us what happens if we first factor out the
B-kets and measure the B-register.

[Exercise. State the Born Rule when we factor and measure the B-register.]
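[Code Aside. Trait #15 is easy to encode; this little function is my own illustration. It returns the probability of an A-register outcome and the collapsed, normalized B-register state.]

    import numpy as np

    def born_measure_A(alpha, beta, gamma, delta, outcome):
        pair = (alpha, beta) if outcome == 0 else (gamma, delta)
        prob = abs(pair[0]) ** 2 + abs(pair[1]) ** 2
        return prob, np.array(pair) / np.sqrt(prob)

    # (|00> + |11>)/sqrt(2): an A-reading of 0 (probability 1/2) leaves B in |0>
    prob, b = born_measure_A(1/np.sqrt(2), 0, 0, 1/np.sqrt(2), outcome=0)
    assert np.isclose(prob, 0.5) and np.allclose(b, [1, 0])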
Checking the Born Rule in Simple Cases

You should always confirm your understanding of a general rule by trying it out on
simple cases to which you already have an answer.

Example. The state we encountered,

    |ψ⟩² = ( |00⟩ + |11⟩ ) / √2 ,

has

    α = δ = 1/√2   and   β = γ = 0 ,

so if we measure the A-register and find it to be in the state |0⟩_A (by a measurement
of 0 on that register), the Born rule tells us that the state remaining in the B-register
should be

    ( α|0⟩ + β|1⟩ ) / √( |α|² + |β|² ) = ( (1/√2)|0⟩ + 0|1⟩ ) / √( 1/2 + 0 ) = |0⟩ ,  ✓

as expected.

[Exercise. Use this technique to prove that if we start in the same state and,
instead, measure the B-register with a result of 1, then the A-register is left in the
state |1⟩_A with certainty.]
Example. Let's take a somewhat less trivial bipartite state. Consider

    |ψ⟩² = ( |00⟩ + |01⟩ + |10⟩ − |11⟩ ) / 2 ,

and imagine that we test the B-register and find that it decided to collapse to |0⟩_B.
To see what state this leaves the A-register in, factor out the B-kets of the original
to get an improved view of |ψ⟩²,

    |ψ⟩² = ( ( |0⟩_A + |1⟩_A ) |0⟩_B + ( |0⟩_A − |1⟩_A ) |1⟩_B ) / 2 .

Examination of this expression tells us that the A-register corresponding to |0⟩_B is
some normalized representation of the vector |0⟩ + |1⟩. Let's see if the Born rule
gives us that result. The expression's four scalars are

    α = β = γ = 1/2   and   δ = −1/2 ,

so a B-register collapse to |0⟩_B will, according to the Born rule, leave the A-register
in the state

    ( (1/2)|0⟩ + (1/2)|1⟩ ) / √( (1/2)² + (1/2)² ) = ( |0⟩ + |1⟩ ) / √2 ,

again, the expected normalized state.

[Exercise. Show that a measurement of B ↘ 1 for the same state results in an
A-register collapse to ( |0⟩ − |1⟩ ) / √2.]
Application of the Born Rule to a Gate Output

The Born rule gets used in many important quantum algorithms, so there's no danger
of over-doing our practice. Let's take the separable gate 1 ⊗ H, whose matrix you
should (by now) be able to write down blindfolded,

    1 ⊗ H = (1/√2) [ 1    1    0    0
                     1   −1    0    0
                     0    0    1    1
                     0    0    1   −1 ] .

Hand it the most general |ψ⟩² = ( α, β, γ, δ )ᵗ, which will produce gate output

    (1/√2) ( α + β ,  α − β ,  γ + δ ,  γ − δ )ᵗ .

We measure the B-register and get a 1. To see what's left in the A-register, we
factor the B-kets at the output,

    1 ⊗ H |ψ⟩² = (1/√2) [ ( (α+β)|0⟩ + (γ+δ)|1⟩ ) |0⟩
                        + ( (α−β)|0⟩ + (γ−δ)|1⟩ ) |1⟩ ] .

(At this point, we have to pause to avoid notational confusion. The α of the Born
rule is actually our current (α − β), with similar unfortunate name conflicts for the
other three Born variables, all of which I'm sure you can handle.) The Born rule says
that the A-register will collapse to

    ( (α−β)|0⟩ + (γ−δ)|1⟩ ) / √( |α−β|² + |γ−δ|² ) ,

which is as far as we need to go, although it won't hurt for you to do the ...

[Exercise. Simplify this and show that it is a normal vector.]
11.7 Multi-Gate Circuits

We have all the ingredients to make countless quantum circuits from the basic binary
gates introduced above. We'll start with the famous Bell states.

11.7.1 The Bell States

There are four pairwise entangled states, known as Bell states or EPR pairs (for the
physicists Bell, Einstein, Podolsky and Rosen who discovered their special qualities).
We've already met one; here are all four:

    |β₀₀⟩ ≡ ( |00⟩ + |11⟩ ) / √2 ,
    |β₀₁⟩ ≡ ( |01⟩ + |10⟩ ) / √2 ,
    |β₁₀⟩ ≡ ( |00⟩ − |11⟩ ) / √2   and
    |β₁₁⟩ ≡ ( |01⟩ − |10⟩ ) / √2 .
The notation I have adopted above is after Nielsen and Chuang, but physicists also
use the alternative symbols

    |β₀₀⟩ = |Φ⁺⟩ ,   |β₀₁⟩ = |Ψ⁺⟩ ,   |β₁₀⟩ = |Φ⁻⟩   and   |β₁₁⟩ = |Ψ⁻⟩ .

In addition, I am not using the superscript 2 to denote a bipartite state |β₀₀⟩², since
the double index on β₀₀ tells the story.
The circuit that produces these four states using the standard CBS basis for H(2)
as inputs is

    |x⟩ ──[ H ]──●──
    |y⟩ ─────────⊕──   ⟹   |β_xy⟩ ,

which can be seen as a combination of a unary Hadamard gate with a CNOT gate.
We could emphasize that this is a binary gate in its own right by calling it BELL and
boxing it,

    |x⟩ ──[      ]──
    |y⟩ ──[ BELL ]──   ⟹   |β_xy⟩ .

In concrete terms, the algebraic expression,

    BELL |x⟩ |y⟩ = |β_xy⟩ ,

is telling us that

    BELL |0⟩ |0⟩ = |β₀₀⟩ ,
    BELL |0⟩ |1⟩ = |β₀₁⟩ ,
    BELL |1⟩ |0⟩ = |β₁₀⟩   and
    BELL |1⟩ |1⟩ = |β₁₁⟩ .
The matrix follows from the two constituent gates:

    BELL = (CNOT) (H ⊗ 1)

         = [ 1  0  0  0        (1/√2) [ 1   0    1   0
             0  1  0  0                 0   1    0   1
             0  0  0  1                 1   0   −1   0
             0  0  1  0 ]               0   1    0  −1 ]

         = (1/√2) [ 1   0    1   0
                    0   1    0   1
                    0   1    0  −1
                    1   0   −1   0 ] ,

whose four columns are, as promised, the coordinate vectors of |β₀₀⟩, |β₀₁⟩, |β₁₀⟩
and |β₁₁⟩. As a spot check,

    BELL |10⟩ = (1/√2) ( 1, 0, 0, −1 )ᵗ = ( |00⟩ − |11⟩ ) / √2 = |β₁₀⟩ .
This might be a good time to appreciate one of today's earlier observations: a local
(read separable) operation on an entangled state changes the entire state, affecting
both qubits of the entangled pair.
BELL as a Basis Transforming Operator

The operator BELL takes natural CBS kets to the four Bell kets, the latter shown to
be an orthonormal basis for H(2). But that's exactly what we call a basis transforming
operator. Viewed in this light BELL, like H⊗2, can be used when we want to change
our basis. Unlike H⊗2, however, BELL is not separable (a recent exercise) and not
its own inverse (to be proven in a few minutes).

Measuring Along the Bell Basis. We saw that to measure along any basis,
we find the binary operator, call it S, that takes the z-basis to the other basis and
use S† prior to measurement. Thus, to measure along the BELL basis (and we will,
next lecture), we plug in BELL† for S†:

    ──[ BELL† ]──(meter)
    ──[        ]──(meter) .

And what is BELL†? Using the adjoint conversion rules, and remembering that the
order of operators in the circuit is opposite of that in the algebra, we find

    BELL† = [ (CNOT) (H ⊗ 1) ]† = (H ⊗ 1)† (CNOT)† = (H ⊗ 1) (CNOT) ,

the final equality a consequence of the fact that CNOT and H ⊗ 1 are both self-adjoint.
[Exercise. Prove this last claim using the matrices for these two binary gates.]

In other words, we just reverse the order of the two sub-operators that comprise
BELL. This makes the circuit diagram for BELL† come out to be

    ──●──[ H ]──
    ──⊕───────── .
The matrix for BELL† is easy to derive since we just take the transpose (everything's
real so no complex conjugation necessary):

    BELL† = (1/√2) [ 1   0   0    1
                     0   1   1    0
                     1   0   0   −1
                     0   1  −1    0 ] .
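[Code Aside. A numpy check (my own sanity test) that BELL = (CNOT)(H ⊗ 1) really produces Bell kets and that its adjoint undoes them.]

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    BELL = CNOT @ np.kron(H, np.eye(2))

    ket10  = np.array([0, 0, 1, 0])                    # |10>
    beta10 = np.array([1, 0, 0, -1]) / np.sqrt(2)      # (|00> - |11>)/sqrt(2)
    assert np.allclose(BELL @ ket10, beta10)           # BELL |10> = |beta10>
    assert np.allclose(BELL.conj().T @ beta10, ket10)  # BELL-dag undoes it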
11.7.2 The Upside-Down CNOT
Earlier today we demonstrated that the CNOT gate, when presented and measured
relative to the x-basis, behaved upside down. Now, I'm going to show you how to
turn this into a circuit that creates an upside-down CNOT in the standard z-basis
CBS.

First, the answer. Here is your circuit, which we'll call C↑NOT, because it is
controlled from bottom-up:

    |x⟩ ──[ H ]──●──[ H ]──  |x ⊕ y⟩
    |y⟩ ──[ H ]──⊕──[ H ]──  |y⟩ .
To verify the claim, multiply the three matrices:

    C↑NOT = H⊗2 · CNOT · H⊗2

          = (1/2) [ 1  1  1  1     [ 1 0 0 0      (1/2) [ 1  1  1  1
                    1 −1  1 −1       0 1 0 0              1 −1  1 −1
                    1  1 −1 −1       0 0 0 1              1  1 −1 −1
                    1 −1 −1  1 ]     0 0 1 0 ]            1 −1 −1  1 ]

          = [ 1   0   0   0
              0   0   0   1
              0   0   1   0
              0   1   0   0 ] .

This is an easy matrix to apply to the four CBS kets with the following results:

    C↑NOT : |00⟩ ↦ |00⟩ ,
    C↑NOT : |01⟩ ↦ |11⟩ ,
    C↑NOT : |10⟩ ↦ |10⟩   and
    C↑NOT : |11⟩ ↦ |01⟩ .
We can now plainly see that the B-register is controlling the A-register's QNOT
operation, as claimed.
Interpreting the Upside-Down CNOT Circuit

We now have two different studies of the upside-down CNOT. The first study concerned the naked CNOT and resulted in the observation that, relative to the x-CBS,
the B-register controlled the QNOT (X) operation on the A-register; thus it looks
upside-down if you are an x-basis ket. The second and current study concerns a new
circuit that has the CNOT surrounded by Hadamard gates and, taken as a whole, is
a truly upside-down CNOT viewed in the ordinary z-CBS. How do these two studies
compare?

The key to understanding this comes from our recent observation that H⊗2 can
be viewed as a way to convert between the x-basis and the z-basis (in either direction,
since it's its own inverse).

Thus, we use the first third of the three-part circuit to let H⊗2 take z-CBS kets
to x-CBS kets. Next, we allow CNOT to act on the x-basis, which we saw from
our earlier study caused the B-register to be the control qubit (because, and only
because, we are looking at x-basis kets). The output will be x-CBS kets (since we put
x-CBS kets into the central CNOT). Finally, in the last third of the circuit we let
H⊗2 convert the x-CBS kets back to z-CBS kets.
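[Code Aside. The whole three-part sandwich is a one-liner to verify in numpy; the snippet is mine.]

    import numpy as np

    H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    # Columns give the images of |00>, |01>, |10>, |11> under C-up-NOT:
    CNOT_up = np.array([[1, 0, 0, 0],
                        [0, 0, 0, 1],
                        [0, 0, 1, 0],
                        [0, 1, 0, 0]])
    assert np.allclose(H2 @ CNOT @ H2, CNOT_up)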
11.8 Circuits with More Than Two Qubits

We will officially introduce n-qubit systems for n > 2 next week, but we can find ways
to use binary qubit gates in circuits that have more than two inputs immediately, as
long as we operate on no more than two qubits at a time. This will lead to our
first quantum algorithms.
11.8.1 Third Order Tensor Products

If a second order product space is the tensor product of two vector spaces,

    W = A ⊗ B ,

then it's easy to believe that a third order product space would be constructed from
three component spaces,

    W = A ⊗ B ⊗ C .

This can be formalized by relying on the second order construction, and applying it
twice, e.g.,

    W = (A ⊗ B) ⊗ C .

It's actually less confusing to go through our order-2 tensor product development and
just extend all the definitions so that they work for three component spaces. For
example, taking a page from our tensor product lecture, we would start with the
formal vector symbols

    a ⊗ b ⊗ c

and produce all finite sums of these things. Then, as we did for the A ⊗ B product
space, we could define tensor addition and scalar multiplication. The basic concepts
extend directly to third order product spaces. I'll cite a few highlights.
• The dimension of the product space, W, is the product of the three dimensions,

      dim(W) = dim(A) · dim(B) · dim(C) .

• The basis tensors of W are built from the component bases,

      w_jkl ≡ a_j ⊗ b_k ⊗ c_l ,

  where {a_j}, {b_k} and {c_l} are the bases of the three component spaces.

• The vectors (tensors) in the product space are uniquely expressed as superpositions of these basis tensors, so that a typical tensor in W can be written

      w = Σ_{j,k,l} c_jkl ( a_j ⊗ b_k ⊗ c_l ) ,

  where c_jkl are the amplitudes of the CBS kets, scalars which we had been naming
  α, β, γ, etc. in a simpler era.
• A separable operator on the product space is one that arises from three component operators, T_A, T_B and T_C, each defined on its respective component space,
  A, B and C. This separable tensor operator is defined first by its action on
  separable order-3 tensors,

      [T_A ⊗ T_B ⊗ T_C] ( a ⊗ b ⊗ c ) ≡ T_A a ⊗ T_B b ⊗ T_C c ,

  and since the basis tensors are of this form, that establishes the action of
  T_A ⊗ T_B ⊗ T_C on the basis, which in turn extends the action to the whole space.

If any of this seems hazy, I encourage you to refer back to the tensor product
lecture and fill in details so that they extend to three component spaces.

[Exercise. Replicate the development of an order-2 tensor product space from
our past lecture to order-3 using the above definitions as a guide.]
11.8.2 Three Qubits

Definition of Three Qubits. Three qubits are, collectively, the tensor product
space H ⊗ H ⊗ H, and the value of those qubits can be any tensor having unit length.

Vocabulary and Notation. Three qubits are often referred to as a tripartite
system.

To distinguish the three identical component spaces, we sometimes use subscripts,

    H_A ⊗ H_B ⊗ H_C .

The order of the tensor product, this time three, can be used to label the state space:
H(3).
11.8.3 The Preferred CBS for Three Qubits

The tensor product of three 2-D vector spaces has dimension 2 · 2 · 2 = 8. Its
inherited preferred basis tensors are the separable products of the component space
vectors,

    { |0⟩|0⟩|0⟩ , |0⟩|0⟩|1⟩ , |0⟩|1⟩|0⟩ , |0⟩|1⟩|1⟩ ,
      |1⟩|0⟩|0⟩ , |1⟩|0⟩|1⟩ , |1⟩|1⟩|0⟩ , |1⟩|1⟩|1⟩ } .
These can be abbreviated to the two-symbol kets |000⟩, |001⟩, ..., |111⟩, or, in encoded
(base-10) form,

    |0⟩³ , |1⟩³ , |2⟩³ , |3⟩³ , |4⟩³ , |5⟩³ , |6⟩³ and |7⟩³ .

The notation of the first two columns admits the possibility of labeling each of the
component kets with the H from which it came, A, B or C,

    |0⟩_A |0⟩_B |0⟩_C ,   |0⟩_A |0⟩_B |1⟩_C ,   etc.

The densest of the notations expresses the CBS ket as an integer from 0 to 7. We
reinforce this correspondence and add the coordinate representation of each basis ket:
    |000⟩ = |0⟩³ = ( 1, 0, 0, 0, 0, 0, 0, 0 )ᵗ ,
    |001⟩ = |1⟩³ = ( 0, 1, 0, 0, 0, 0, 0, 0 )ᵗ ,
    |010⟩ = |2⟩³ = ( 0, 0, 1, 0, 0, 0, 0, 0 )ᵗ ,
    |011⟩ = |3⟩³ = ( 0, 0, 0, 1, 0, 0, 0, 0 )ᵗ ,
    |100⟩ = |4⟩³ = ( 0, 0, 0, 0, 1, 0, 0, 0 )ᵗ ,
    |101⟩ = |5⟩³ = ( 0, 0, 0, 0, 0, 1, 0, 0 )ᵗ ,
    |110⟩ = |6⟩³ = ( 0, 0, 0, 0, 0, 0, 1, 0 )ᵗ   and
    |111⟩ = |7⟩³ = ( 0, 0, 0, 0, 0, 0, 0, 1 )ᵗ .
Note that the exponent 3 is needed mainly in the encoded form, since an
integer representation for a CBS does not disclose its tensor order (3) to the reader,
while the other representations clearly reveal that the context is three qubits.
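[Code Aside. The correspondence between |abc⟩ and the standard basis vector of C⁸ can be generated mechanically; the snippet is mine.]

    import numpy as np

    ket = {0: np.array([1, 0]), 1: np.array([0, 1])}

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                triple = np.kron(np.kron(ket[a], ket[b]), ket[c])
                x = 4 * a + 2 * b + c          # the encoded value
                e_x = np.zeros(8); e_x[x] = 1  # the x-th standard basis vector
                assert np.allclose(triple, e_x)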
The Channel Labels. We will use the same labeling scheme as before, but more
input lines means more labels. For three lines, we would name the registers A, B and
C:

    A-register in  ──[     ]──  A-register out
    B-register in  ──[  U  ]──  B-register out
    C-register in  ──[     ]──  C-register out

An Example Circuit. Consider

    A ──●──────────────
    B ──⊕────[      ]──
    C ────────[ H⊗2 ]──

with access points P (at the input), Q (after the CNOT) and R (after the H⊗2).
This circuit is receiving an order-3 tensor at its inputs. The first two registers, A and
B, get a (potentially) entangled bipartite state |ψ⟩² and the third, C, gets a single
qubit, |φ⟩. We analyze the circuit at the three access points, P, Q and R.
A Theoretical Approach. We'll first do an example that is more general than
we normally need, but provides a surefire fallback technique if we are having a hard
time. We'll give the input states the most general form,

    |ψ⟩² = α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩   and   |φ⟩ = e|0⟩ + f|1⟩

(writing e and f for the C-register amplitudes).

Access Point P. The state entering the circuit is the product

    |ψ⟩² ⊗ |φ⟩ .
Access Point Q. The first gate is a CNOT applied only to the entangled |ψ⟩²,
so the overall effect is just

    [CNOT ⊗ 1] ( |ψ⟩² ⊗ |φ⟩ ) = ( CNOT |ψ⟩² ) ⊗ |φ⟩

by the rule for applying a separable operator to a separable state. (Although the first
two qubits are entangled, when the input is grouped as a tensor product of H(2) ⊗ H,
|ψ⟩² ⊗ |φ⟩ is recognized as a separable second-order tensor.)

Applying the CNOT explicitly, we get

    ( CNOT |ψ⟩² ) ⊗ |φ⟩ = ( α|00⟩ + β|01⟩ + δ|10⟩ + γ|11⟩ ) ⊗ ( e|0⟩ + f|1⟩ )

        = αe|000⟩ + αf|001⟩ + βe|010⟩ + βf|011⟩
        + δe|100⟩ + δf|101⟩ + γe|110⟩ + γf|111⟩ .

We needed to multiply out the separable product in preparation for the next phase
of the circuit, which appears to operate on the last two registers, B and C.
Access Point R. The final operator is local to the last two registers, and takes
the form

    1 ⊗ H⊗2 .

Although it feels as though we might be able to take a short cut, the intermediate
tripartite state and final operator are sufficiently complicated to warrant treating it as
a fully entangled three-qubit state and just doing the big matrix multiplication. The
matrix is block-diagonal,

    1 ⊗ H⊗2 = (1/2) [ 1  1  1  1  0  0  0  0
                      1 −1  1 −1  0  0  0  0
                      1  1 −1 −1  0  0  0  0
                      1 −1 −1  1  0  0  0  0
                      0  0  0  0  1  1  1  1
                      0  0  0  0  1 −1  1 −1
                      0  0  0  0  1  1 −1 −1
                      0  0  0  0  1 −1 −1  1 ] ,

and applying it to the access point Q coordinates ( αe, αf, βe, βf, δe, δf, γe, γf )ᵗ
produces the access point R state

    (1/2) [ ( αe + αf + βe + βf ) |000⟩ + ( αe − αf + βe − βf ) |001⟩
          + ( αe + αf − βe − βf ) |010⟩ + ( αe − αf − βe + βf ) |011⟩
          + ( δe + δf + γe + γf ) |100⟩ + ( δe − δf + γe − γf ) |101⟩
          + ( δe + δf − γe − γf ) |110⟩ + ( δe − δf − γe + γf ) |111⟩ ] .
A Concrete Example. Now let's run the specific input

    |ψ⟩² = ( |00⟩ + |11⟩ ) / √2   and   |φ⟩ = |1⟩

through the same circuit. For reference, I'll repeat the circuit for this specific input:

    ( |00⟩ + |11⟩ ) / √2  {  A ──●──────────────
                             B ──⊕────[      ]──
    |1⟩                      C ────────[ H⊗2 ]── .
Access Point Q. Apply the two-qubit CNOT gate to registers A and B, which
has the overall effect of applying CNOT ⊗ 1 to the full tripartite tensor:

    [CNOT ⊗ 1] ( |ψ⟩² ⊗ |φ⟩ ) = ( CNOT |ψ⟩² ) ⊗ ( 1 |φ⟩ )

        = CNOT ( ( |00⟩ + |11⟩ ) / √2 ) ⊗ |1⟩

        = ( ( CNOT|00⟩ + CNOT|11⟩ ) / √2 ) ⊗ |1⟩

        = ( ( |00⟩ + |10⟩ ) / √2 ) ⊗ |1⟩

        = ( |001⟩ + |101⟩ ) / √2

        = ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ |01⟩ ,

where the final factorization anticipates the gate that acts on the B and C registers.
Access Point R. Finally we apply the second two-qubit gate H⊗2 to the B and
C registers, which has the overall effect of applying 1 ⊗ H⊗2 to the full tripartite
state. The factorization we found makes this an easy separable proposition,

    [1 ⊗ H⊗2] ( ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ |01⟩ ) = ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ H⊗2 |01⟩ .

Referring back to the second order Hadamard on the two states in question, i.e.,

    H⊗2 |01⟩ = (1/2) ( |00⟩ − |01⟩ + |10⟩ − |11⟩ ) ,

we find that

    [1 ⊗ H⊗2] ( ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ |01⟩ )

        = ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ (1/2) ( |00⟩ − |01⟩ + |10⟩ − |11⟩ ) .
On the other hand, if we had been keen enough to remember that H⊗2 is just the
separable H ⊗ H, which has a special effect on a separable state like |01⟩ = |0⟩ ⊗ |1⟩,
we could have gotten to the factored form faster using

    [1 ⊗ H⊗2] ( ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ |0⟩ ⊗ |1⟩ )

        = ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ ( H|0⟩ ) ⊗ ( H|1⟩ )

        = ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ ( ( |0⟩ + |1⟩ ) / √2 ) ⊗ ( ( |0⟩ − |1⟩ ) / √2 ) .

The moral is, don't worry about picking the wrong approach to a problem. If your
math is sound, you'll get to the end zone either way.
Double Checking Our Work. Let's pause to see how this compares with the
general formula. Relative to the general |ψ⟩² and |φ⟩, the specific amplitudes we have
in this example are

    α = δ = 1/√2 ,   f = 1   and   β = γ = e = 0 ,

which causes the general expression

    (1/2) [ ( αe + αf + βe + βf ) |000⟩ + ( αe − αf + βe − βf ) |001⟩
          + ( αe + αf − βe − βf ) |010⟩ + ( αe − αf − βe + βf ) |011⟩
          + ( δe + δf + γe + γf ) |100⟩ + ( δe − δf + γe − γf ) |101⟩
          + ( δe + δf − γe − γf ) |110⟩ + ( δe − δf − γe + γf ) |111⟩ ]

to reduce to

    (1/(2√2)) ( |000⟩ − |001⟩ + |010⟩ − |011⟩ + |100⟩ − |101⟩ + |110⟩ − |111⟩ ) .

A short multiplication reveals this to be the answer we got for the input
( ( |00⟩ + |11⟩ ) / √2 ) ⊗ |1⟩, without the general formula.
[Exercise. Verify that this is equal to the directly computed output state. Hint:
If you start with the first state we got, prior to the factorization, there is less to do.]

[Exercise. This had better be a normalized state, as we started with unit vectors
and applied unitary gates. Confirm it.]
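[Code Aside. Both exercises fall out of a direct numpy simulation of the circuit; the sketch is my own.]

    import numpy as np

    H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I2 = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    gate1 = np.kron(CNOT, I2)              # CNOT on A,B ; identity on C
    gate2 = np.kron(I2, np.kron(H, H))     # identity on A ; H2 on B,C

    beta00  = np.array([1, 0, 0, 1]) / np.sqrt(2)
    state_P = np.kron(beta00, np.array([0, 1]))   # ((|00>+|11>)/sqrt 2) (x) |1>

    state_R  = gate2 @ gate1 @ state_P            # access point R
    expected = np.array([1, -1, 1, -1, 1, -1, 1, -1]) / (2 * np.sqrt(2))
    assert np.allclose(state_R, expected)         # matches the hand computation
    assert np.isclose(np.linalg.norm(state_R), 1) # and it is a unit vector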
The Born Rule for Tripartite Systems

The Born rule can be generalized to any dimension and stated in many ways. For
now, let's state the rule for an order-three Hilbert space with registers A, B and C,
and in a way that favors factoring out the AB-registers.

Trait #15′ (Born Rule for Tripartite States): If we have a tripartite state
that can be expressed as the sum of four terms,

    |ψ⟩³ = Σ_{k=0}^{3} |k⟩²_AB ⊗ |ω_k⟩_C ,

each of which is the product of a distinct CBS ket for H_A ⊗ H_B and some general
first order (typically un-normalized) ket |ω_k⟩ in the space H_C, then if we measure
the first two registers, thus forcing their collapse into one of the four basis states,

    |0⟩²_AB ,  |1⟩²_AB ,  |2⟩²_AB ,  |3⟩²_AB ,

the C-register will be left in a normalized state associated with the measured CBS:

    A⊗B ↘ |0⟩²   ⟹   C ↘ |ω₀⟩ / √⟨ω₀ | ω₀⟩ ,
    A⊗B ↘ |1⟩²   ⟹   C ↘ |ω₁⟩ / √⟨ω₁ | ω₁⟩ ,
    A⊗B ↘ |2⟩²   ⟹   C ↘ |ω₂⟩ / √⟨ω₂ | ω₂⟩   and
    A⊗B ↘ |3⟩²   ⟹   C ↘ |ω₃⟩ / √⟨ω₃ | ω₃⟩ .
Note the prime (′) in the tripartite Trait #15′, to distinguish this from the un-primed Trait #15 for bipartite systems. Also, note that I suppressed the state-space
subscript labels A, B and C where they are understood by context.

We'll use this form of the Born rule for quantum teleportation in our next lecture.
11.8.4 End of Lesson

This was a landmark week, incorporating all the basic ideas of quantum computing.
We are now ready to study our first quantum algorithms.
Chapter 12

First Quantum Algorithms

12.1 Three Algorithms

This week we will see how quantum circuits and their associated algorithms can be
used to achieve results impossible in the world of classical digital computing. We'll
cover three topics:

  • Superdense Coding
  • Quantum Teleportation
  • Deutsch's Algorithm

The first two demonstrate quantum communication possibilities, and the third provides a learning framework for many quantum algorithms which execute faster (in
a sense) than their classical counterparts.
12.2 Superdense Coding

12.2.1 A Single Qubit's Information Content

A single qubit,

    |ψ⟩ = α|0⟩ + β|1⟩ ,

seems like it holds an infinite amount of information. After all, α and β are complex
numbers, and even though you can't choose them arbitrarily (|α|² + |β|² must be 1),
the mere fact that α can be any complex number whose magnitude is ≤ 1 means it
could be an unending sequence of never-repeating digits, like 0.4193980022903....
If a sender A (an assistant quantum researcher named Alice) could pack |ψ⟩ with
that α (and compatible β) and send it off in the form of a single photon to a receiver
B (another helper whose name is Bob) a few time zones away, A would be sending
an infinite string of digits to B encoded in that one sub-atomic particle.
The problem, of course, arises when B tries to look inside the received
state. All he can do is measure it once and only once (Trait #7, the fifth postulate
of QM), at which point he gets a 0 or 1 and both α and β are wiped off the face
of the Earth. That one measurement tells B very little.

[Exercise. But it does tell him something. What?]

In short, to communicate |α|, A would have to prepare and send an infinite
number of identical states, then B would have to receive, test and record them. Only
then would B know |α| and |β| (although neither α nor β). This is no better than
classical communication.

We have to lower our sights.
A Most Modest Wish

We are wondering what information, exactly, A (Alice) can send B (Bob) in the
form of a single qubit. We know it's not infinite. At the other extreme is the most
modest super-classical capability we could hope for: two classical bits for the price
of one. I think that we need no lecture to affirm the claim that, in order to send a
two-digit binary message, i.e., one of

    0 = 00 ,   1 = 01 ,   2 = 10   or   3 = 11 ,

we would have to send more than one classical bit; we'd need two. Can we pack at
least this meager amount of classical information into one qubit with the confidence
that B would be able to read the message?
12.2.2 The Protocol

We can, and to do so, we use the four Bell states (EPR pairs) from the two qubit
lecture,

    |β₀₀⟩ ≡ ( |00⟩ + |11⟩ ) / √2 ,
    |β₀₁⟩ ≡ ( |01⟩ + |10⟩ ) / √2 ,
    |β₁₀⟩ ≡ ( |00⟩ − |11⟩ ) / √2   and
    |β₁₁⟩ ≡ ( |01⟩ − |10⟩ ) / √2 .
(We can manufacture the first of these by sending |0⟩ |0⟩ through the BELL gate,

    |0⟩ ──[      ]──
    |0⟩ ──[ BELL ]──   ⟹   |β₀₀⟩ ,

as we learned.) A takes the A-register of the entangled state |β₀₀⟩ and B takes the
B-register. B gets on a plane, placing his qubit in the overhead bin, and travels a
few time zones away. This can all be done long before the classical two-bit message
is selected by A, but it has to be done. It can even be done by a third party who
sends the first qubit of this EPR pair to A and the second to B.
Defense of Your Objection. The sharing of this qubit does not constitute
sending more than one qubit of information (that phase is yet to come), since it is
analogous to establishing a radio transmission protocol or message envelope, which
would have to be done even with classical bits. It is part of the equipment that B
and A use to communicate data, not the data itself.
Notation. In the few cases where we need it (and one is coming up), let's build
some notation. When a potentially entangled two-qubit state is separated physically
into two registers or by two observers, we need a way to talk about each individual
qubit. We'll use

    ( |ψ⟩² )_A

for the A-register (or A's) qubit, and

    ( |ψ⟩² )_B

for the B-register (or B's) qubit. Note that, unless |ψ⟩² happens to be separable
(and |β₀₀⟩ is clearly not), we will be faced with the reality that

    |ψ⟩² ≠ ( |ψ⟩² )_A ⊗ ( |ψ⟩² )_B .
[Note. This does not mean that the A-register and B-register can't exist in physically
independent locations and be measured or processed independently by different observers. As we learned, one observer can modify or measure either qubit individually.
What it does mean is that the two registers are entangled, so modifying or measuring
one will affect the other. Together they form a single state.]

With this language, the construction and distribution of each half of the entangled
|β₀₀⟩ to A and B can be symbolized by

    ( |β₀₀⟩ )_A   goes to   A ,
    ( |β₀₀⟩ )_B   goes to   B .
Alice Encodes Her Message. When A is ready to send her two-bit message,
she applies one of four local operators to her half of the pair, according to the table

    Message    A Applies    Equivalent Binary Gate    Resulting State
      00       (nothing)          1 ⊗ 1                   |β₀₀⟩
      01          X               X ⊗ 1                   |β₀₁⟩
      10          Z               Z ⊗ 1                   |β₁₀⟩
      11          iY             iY ⊗ 1                   |β₁₁⟩

(The ⊗1s in the equivalent binary gate column reflect the fact that B is not touching his half of |β₀₀⟩, which is effectively the identity operation as far as the B-register
is concerned.)

And how do we know that the far right column is the result of A's local operation?
We apply the relevant matrix to |β₀₀⟩ and read off the answer (see the section Four Bell
States from One in the two qubit lecture).
Compatibility Note. Most authors ask A to apply Z if she wants to encode
01 and X if she wants to encode 10, but doing so results in the state |β₁₀⟩ for
01 and |β₀₁⟩ for 10, not a very nice match-up, and is why I chose to present the algorithm with those two gates swapped. Of course, it really doesn't matter which of the
four operators A uses for each encoding, as long as B uses the same correspondence
to decode.
If we encapsulate the four possible operators into one symbol, SD (for Super
Dense), which takes on the proper operation based on the message to be encoded,
A's job is to apply the local circuit

    ( |β₀₀⟩ )_A ──[ SD ]── [ ( SD ⊗ 1 ) |β₀₀⟩ ]_A .

Notice that to describe A's half of the output state, we need to first show the full
effect of the bipartite operator and only then restrict attention to A's qubit. We
cannot express it as a function of A's input, ( |β₀₀⟩ )_A, alone.
Alice Sends Her Qubit to Bob. A now physically transmits her qubit,

    [ ( SD ⊗ 1 ) |β₀₀⟩ ]_A ,

to B. Combined with the half he has been holding all along, B is in possession of
the full bipartite state

    ( SD ⊗ 1 ) |β₀₀⟩ ,

so he can now measure both qubits to determine which of the four Bell states he has.
Once that's done he reads the earlier table from right-to-left to recover the classical
two-bit message.
Refresher: Measuring Along the Bell Basis. Since this is the first time we
will have applied it in an algorithm, I'll summarize one way that B can measure
his entangled state along the Bell basis. When studying two qubit logic we learned
that to measure a bipartite state along a non-standard basis (call it C), we find the
binary operator that takes the z-basis to the other basis, call it S, and use S† prior
to measurement:

    ──[ S† ]──(meter)
    ──[     ]──(meter) .

For the Bell basis, S = BELL, so S† = BELL† = (H ⊗ 1)(CNOT). Adding the
measurement symbols (the meters) along the z-basis, the circuit becomes

    (one of the four Bell states)  ──[ BELL† ]──(meter)
                                   ──[        ]──(meter) .
In terms of matrices, B subjects his two-qubit state to the matrix for BELL† (also
computed last time),

    BELL† = (1/√2) [ 1   0   0    1
                     0   1   1    0
                     1   0   0   −1
                     0   1  −1    0 ] .
Bob's Action and Conclusion. Post-processing with the BELL† gate turns
the four Bell states into four z-CBS kets; if B follows that gate with a z-basis measurement and sees a 01, he will conclude that he had received the Bell state |β₀₁⟩
from A, and likewise for the other states. So his role, after receiving the qubit sent
by A, is to

1. apply BELL† to his two qubits, and

2. read the encoded message according to his results using the table

    B measures    Message
        00           00
        01           01
        10           10
        11           11

In other words, the application of the BELL† gate allowed B to interpret his z-basis
measurement reading xy as the message, itself.
The following exercise should help crystallize the algorithm.

[Exercise. Assume A wants to send the message 11.

i) Apply iY ⊗ 1 to |β₀₀⟩ and confirm that you get |β₁₁⟩ out.

ii) Multiply the 4 × 4 matrix for BELL† by the 4 × 1 state vector for |β₁₁⟩ to show
that B recovers the message 11.]
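[Code Aside. Here is an end-to-end numpy simulation of superdense coding, following the encoding table above; the snippet is my own sketch, not part of the lesson proper.]

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.diag([1, -1])
    iY = Z @ X                                   # the identity iY = ZX
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    BELL = CNOT @ np.kron(H, I)
    beta00 = np.array([1, 0, 0, 1]) / np.sqrt(2)

    SD = {"00": I, "01": X, "10": Z, "11": iY}   # Alice's local encoder

    for msg, U in SD.items():
        sent = np.kron(U, I) @ beta00            # Alice acts on her half only
        outcome = BELL.conj().T @ sent           # Bob applies BELL-dag
        probs = np.abs(outcome) ** 2
        assert np.isclose(probs[int(msg, 2)], 1) # Bob reads msg with certainty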
A Circuit Representation of Superdense Coding

We can get a circuit for the overall superdense coding algorithm by adding some new
notation.

Classical Wires. Double lines (=) indicate the transfer of classical bits. We use
them to move one or more ordinary digits within a circuit.

With this notation, the superdense coding circuit can be expressed, schematically, as

    x, y ══════●
               │
    A-reg: (|β₀₀⟩)_A ──[ SD ]──┐
                               ├──[ BELL† ]──(meters)──  |x⟩ |y⟩ .
    B-reg: (|β₀₀⟩)_B ──────────┘

The notation tells the story. A uses her two-bit classical message xy (traveling on
the double lines) to control (filled circles) which of the four operations (SD = 1, X,
Z or iY) she will apply to her qubit. After sending her qubit to B, B measures both
qubits along the Bell basis to recover the message xy, now sitting in the output
registers in natural z-basis form |x⟩ |y⟩.
[Exercise. Measurement involves collapse and uncertainty. Why is B so certain
that his two measurements will always result in a true reproduction of the message
xy sent by A? Hint: For each of the four possible messages, what bipartite state
is he holding at the moment of measurement?]
This can actually be tightened up. You've seen several unary operator identities
in the single qubit lecture, one of which was XZ = −iY. A slight revision of this
(verify as an exercise) is

    ZX = iY ,

which enables us to define the elusive SD operation: we place a controlled-X gate and
a controlled-Z gate in the A-channel under A's supervision. Each gate is controlled
by one of the two classical bits in her message. They work just like a quantum
controlled-U gate, only simpler: if the classical control bit is 1, the target operation
is applied; if the bit is 0, it is not.

    x ══════════════╗
    y ══════╗       ║
    A-reg: (|β₀₀⟩)_A ──[ X ]──[ Z ]──┐
                                     ├──[ BELL† ]──(meters)──  |x⟩ |y⟩
    B-reg: (|β₀₀⟩)_B ────────────────┘

(the classical bit y controls the X gate, and x controls the Z gate). For example, if
both bits are 1, both gates get applied and result in the desired behavior:
11 ↦ ZX = iY.
[Exercise. Remind us why the gates X and Z appear reversed in the circuit
relative to the algebraic identity iY = ZX.]
The Significance of Superdense Coding
This technique may not seem tremendously applicable considering its unimpressive
2-bit to 1-bit compression, but consider sending a large classical message, even one
that is already as densely compressed as classical logic will allow. This is a 2-to-1
improvement over the best classical technique when applied to the output of classical
compression. The fact that we have to send lots of entangled Bell states before our
message takes nothing away from our ability to send information in half the time (or
space) as before.
12.3 Quantum Teleportation

We re-enlist the help of our two most excellent researchers, Alice and Bob, and continue to refer to them by their code names A and B.

In superdense coding, A sent B one qubit, |ψ⟩ = α|0⟩ + β|1⟩, in order that he
could reconstruct two classical bits. Quantum teleportation is the mirror image of this
process. A wants to send B the qubit, |ψ⟩, by sending him just two classical bits of
information.
Giving Teleportation Context

You might ask why she doesn't simply send B the one qubit and be done with it.
Why be so indirect and translate the quantum information into classical bits? There
are many answers, two of which I think are important.

1. As a practical matter, it may be impossible, or at least difficult, for A to
send B qubit information due to its instability and/or expense. By contrast,
humans have engineered highly reliable and economical classical communication
channels. Sending two bits is child's play.

2. Sending the original qubit rather than two classical bits is somewhat beside the
point. The very fact A can get the infinitely precise data embedded in the
continuous scalars α and β to B by sending something as crude as an integer from 0
to 3 should come as unexpectedly marvelous news, and we want to know why
and how this can be done.
Caveats

There is the usual caveat. Just because B gets the qubit doesn't mean he can know
what it is. He can no more examine its basis coefficients than A (or anyone in her
local lab who didn't already know their values) could. What we are doing here is
getting the qubit over to B's lab so he can use it on his end for any purpose that A
could have (before the teleportation).

And then there's the unusual caveat. In the process of executing the teleportation,
A loses her copy of |ψ⟩. We'll see why as we describe the algorithm.
An Application of the Born Rule

I like to take any reasonable opportunity to restate important tools so as to establish
them in your mind. The Born rule is so frequently used that it warrants such a review
before we apply it to teleportation.

The Born rule for 3-qubit systems (Trait #15′) tells us (and I am paraphrasing
with equally precise expressions) that if we have a tripartite state which can be
expressed as the sum of four terms (where the first factors of each term are AB-basis
kets),

    |ψ⟩³ = Σ_{k=0}^{3} |k⟩²_AB |ω_k⟩_C ,

then an AB-register measurement along the natural basis will force the corresponding
C-register collapse according to

    A⊗B ↘ |0⟩²_AB   ⟹   C ↘ |ω₀⟩_C / ‖ |ω₀⟩_C ‖ ,
    A⊗B ↘ |1⟩²_AB   ⟹   C ↘ |ω₁⟩_C / ‖ |ω₁⟩_C ‖ ,
    etc.

There are two consequences that will prepare us for understanding quantum teleportation as well as anticipating other algorithms that might employ this special
technique.
Consequence #1. The rule works for any orthonormal basis in channels A and
B, not just the natural basis. Whichever basis we choose for the first two registers A
and B, it is along that basis that we must make our two-qubit measurements. So, if
we use the Bell basis, { |β_jk⟩ }, then a state in the form

    |ψ⟩³ = Σ_{jk} |β_jk⟩_AB |ω_jk⟩_C ,

when measured along that basis, will force the corresponding C-register collapse according to

    A⊗B ↘ |β₀₀⟩_AB   ⟹   C ↘ |ω₀₀⟩_C / ‖ |ω₀₀⟩_C ‖ ,
    A⊗B ↘ |β₀₁⟩_AB   ⟹   C ↘ |ω₀₁⟩_C / ‖ |ω₀₁⟩_C ‖ ,
    etc.

This follows from Trait #7, Post-Measurement Collapse, which tells us that AB
will collapse to one of the four CBS states regardless of which CBS we use, forcing
C into the state that is glued to its partner in the above expansion.

The division by each ‖ |ω_jk⟩ ‖ (or, if you prefer, √⟨ω_jk | ω_jk⟩ ) is necessary because
the overall tripartite state, |ψ⟩³, can only be normalized when the |ω_jk⟩ have non-unit
(in fact < 1) lengths.

[Exercise. We already know that the |β_jk⟩ are four normalized CBS kets. Show that
if the |ω_jk⟩ were normal vectors in H_C, then |ψ⟩³ would not be a normal vector. Hint:
Write down ³⟨ψ | ψ⟩³ and apply orthonormality of the Bell states.]
Consequence #2. If we know that the four general states, |ω_jk⟩, are just four
variations of a single known state, we may be able to glean even more specific information about the collapsed C-register. To cite the example needed today, say we
know that all four |ω_jk⟩ use the same two scalar coordinates, α and β, only in slightly
different combinations,

    |ψ⟩³ = |β₀₀⟩_AB ( α|0⟩_C + β|1⟩_C ) / 2  +  |β₀₁⟩_AB ( β|0⟩_C + α|1⟩_C ) / 2
         + |β₁₀⟩_AB ( α|0⟩_C − β|1⟩_C ) / 2  +  |β₁₁⟩_AB ( −β|0⟩_C + α|1⟩_C ) / 2 .

(Each denominator 2 is needed to produce a normal state |ψ⟩³; we cannot absorb
it into α and β, as those scalars are fixed by the normalized |ψ⟩ to be teleported.
However, the Born rule tells us that the collapse of the C-register will get rid of this
factor, leaving only one of the four numerators in the C-register.) Such a happy state-of-affairs will allow us to convert any of the four collapsed states in the C-register to
the one state,

    α|0⟩ + β|1⟩ ,

by mere application of a simple unary operator. For example, if we find that AB
collapses to |β₀₀⟩ (by reading a 00 on our measuring apparatus), then C will have
already collapsed to the state α|0⟩ + β|1⟩. Or, if AB collapses to |β₁₁⟩ (meter
reads 11), then we apply the operator iY to C to recover α|0⟩ + β|1⟩, because

    iY ( −β|0⟩ + α|1⟩ ) = [ 0   1      ( −β )      ( α )
                           −1   0 ]    (  α )   =  ( β )   =  α|0⟩ + β|1⟩ .

You'll refer back to these two facts as we unroll the quantum teleportation algorithm.
12.3.1 The Teleportation Algorithm

We continue to exploit the EPR pairs, which I list again for quick reference:

    |β₀₀⟩ ≡ ( |00⟩ + |11⟩ ) / √2 ,
    |β₀₁⟩ ≡ ( |01⟩ + |10⟩ ) / √2 ,
    |β₁₀⟩ ≡ ( |00⟩ − |11⟩ ) / √2   and
    |β₁₁⟩ ≡ ( |01⟩ − |10⟩ ) / √2 .

As before, A and B share the entangled pair |β₀₀⟩, and this time A also holds the
qubit to be teleported,

    |ψ⟩_C = α|0⟩_C + β|1⟩_C .

The subscript C indicates that we have a qubit separate from the two entangled
qubits already created and distributed to our two messengers, a qubit which lives in
its own space with its own (natural) CBS basis { |0⟩_C , |1⟩_C }.
By tradition for this algorithm, we place the C-channel above the A/B-channels:

    |ψ⟩_C        ── register C ──
    |β₀₀⟩_AB  {  ── register A ──
                 ── register B ── .

The full system state going in is therefore

    |ψ⟩³ = |ψ⟩_C ⊗ |β₀₀⟩_AB = ( α|0⟩_C + β|1⟩_C ) ⊗ ( |00⟩_AB + |11⟩_AB ) / √2 .
The Plan

A starts by teleporting a qubit to B. No information is actually sent. Rather, A does
something to her entangled qubit, ( |β₀₀⟩ )_A, which instantaneously modifies B's qubit,
( |β₀₀⟩ )_B, faster than the speed of light. This is the meaning of the word teleportation.
She then follows that up by taking a measurement of her two qubits, getting two
classical bits of information: the outcomes of the two register readings. Finally, she
sends the result of that measurement as a classical two-bit message to B (sorry, we
have to obey Einstein's speed limit for this part). B will use the two classical bits
he receives from Alice to tweak his qubit (already modified by Alice's teleportation)
into the desired state, |ψ⟩.
A Expresses the System State in the Bell Basis (No Action Yet)

In the z-basis, all the information about |ψ⟩ is contained in A's C-register. She
wants to move that information over to B's B-register. Before she even does anything
physical, she can accomplish most of the hard work by just rearranging the tripartite
state |ψ⟩³ in a factored form expanded along a CA Bell-basis rather than a CA
z-basis. In other words, we'd like to see

    |ψ⟩³  ?=  Σ_{jk} |β_jk⟩_CA |ω_jk⟩_B ,

where the |ω_jk⟩_B are (for the moment) four unknown B-channel states. We can
only arrive at such a CA Bell basis expression if the two channels A and C become
entangled, which they are not, initially. We'll get to that.

In our short review of the Born rule, above, I gave you a preview of the actual
expression we'll need. This is what we would like/wish/hope for:

    |ψ⟩³  ?=  |β₀₀⟩_CA ( α|0⟩_B + β|1⟩_B ) / 2  +  |β₀₁⟩_CA ( β|0⟩_B + α|1⟩_B ) / 2
            + |β₁₀⟩_CA ( α|0⟩_B − β|1⟩_B ) / 2  +  |β₁₁⟩_CA ( −β|0⟩_B + α|1⟩_B ) / 2 .

Indeed, if we could accomplish that, then A would only have to measure her two
qubits along the Bell basis, forcing a collapse into one of the four Bell states and,
by the Born rule, collapsing B's register into the one of his four matching states. A
glance at the above expressions reveals that this gets us 99.99% of the way toward
placing |ψ⟩ into B's B-register, i.e., manufacturing |ψ⟩_B, a teleported twin to Alice's
original |ψ⟩. We'll see how B gets the last .01% of the way there, but first, we prove
the validity of the hoped-for expansion.
We begin with the desired expression and reduce it to the expression we know
to be our actual starting point, |i3 . (Warning: After the first expression, Ill be
346
dropping the state-space subscripts A/B/C and letting position do the job.)
$$|\beta_{00}\rangle_{CA}\,\frac{\alpha|0\rangle_B + \beta|1\rangle_B}{2} + |\beta_{01}\rangle_{CA}\,\frac{\beta|0\rangle_B + \alpha|1\rangle_B}{2} + |\beta_{10}\rangle_{CA}\,\frac{\alpha|0\rangle_B - \beta|1\rangle_B}{2} + |\beta_{11}\rangle_{CA}\,\frac{-\beta|0\rangle_B + \alpha|1\rangle_B}{2}$$

$$=\; \frac{1}{2\sqrt{2}}\Big[\, \big(|00\rangle + |11\rangle\big)\big(\alpha|0\rangle + \beta|1\rangle\big) \;+\; \big(|01\rangle + |10\rangle\big)\big(\beta|0\rangle + \alpha|1\rangle\big)$$
$$\qquad\quad +\; \big(|00\rangle - |11\rangle\big)\big(\alpha|0\rangle - \beta|1\rangle\big) \;+\; \big(|01\rangle - |10\rangle\big)\big(-\beta|0\rangle + \alpha|1\rangle\big) \,\Big]\,.$$

Half the terms cancel and the other half reinforce to give

$$=\; \frac{1}{2\sqrt{2}}\Big(\, 2\alpha\,|000\rangle \;+\; 2\beta\,|100\rangle \;+\; 2\alpha\,|011\rangle \;+\; 2\beta\,|111\rangle \,\Big)$$
$$=\; \frac{1}{\sqrt{2}}\Big[\, \big(\alpha|0\rangle + \beta|1\rangle\big)\,|00\rangle \;+\; \big(\alpha|0\rangle + \beta|1\rangle\big)\,|11\rangle \,\Big]$$
$$=\; \big(\alpha|0\rangle + \beta|1\rangle\big)\; \frac{|00\rangle + |11\rangle}{\sqrt{2}} \;=\; |\psi\rangle_C\, |\beta_{00}\rangle_{AB}\,,$$
a happy ending. This was Alice's original formulation of the tripartite state in terms of the z-basis, so it is indeed the same as the Bell expansion we were hoping for.
Next, we take action to make use of this alternate formulation of our system state. (Remember, we haven't actually done anything yet.) Alice measures her registers C and A along the Bell basis of the expansion

$$|\Psi\rangle^3 \;=\; |\beta_{00}\rangle_{CA}\,\frac{\alpha|0\rangle_B + \beta|1\rangle_B}{2} + |\beta_{01}\rangle_{CA}\,\frac{\beta|0\rangle_B + \alpha|1\rangle_B}{2} + |\beta_{10}\rangle_{CA}\,\frac{\alpha|0\rangle_B - \beta|1\rangle_B}{2} + |\beta_{11}\rangle_{CA}\,\frac{-\beta|0\rangle_B + \alpha|1\rangle_B}{2}\,,$$

or, more explicitly, she applies the Bell basis-transforming gate BELL† to registers C and A and then measures both in the natural z-basis:

[Circuit: registers C and A pass through the gate BELL† — a CNOT controlled by C followed by an H on register C — and are then measured.]
After applying the gate (but before the measurement), the original tripartite state, |Ψ⟩³, will be transformed to

$$\big(\text{BELL}^\dagger \otimes 1\big)\, |\Psi\rangle^3 \;=\; |00\rangle_{CA}\,\frac{\alpha|0\rangle_B + \beta|1\rangle_B}{2} + |01\rangle_{CA}\,\frac{\beta|0\rangle_B + \alpha|1\rangle_B}{2} + |10\rangle_{CA}\,\frac{\alpha|0\rangle_B - \beta|1\rangle_B}{2} + |11\rangle_{CA}\,\frac{-\beta|0\rangle_B + \alpha|1\rangle_B}{2}\,,$$

since BELL† maps each Bell ket |β_xy⟩ back to the corresponding z-basis ket |xy⟩.
Earlier, I admonished you not to expect to see — or be allowed to write — a non-separable bipartite state broken into two parts, each going into the individual channels of a binary gate. Rather, we need to consider it as a non-separable entity going into both channels at once, as in:

[Circuit: the whole state |ψ⟩² enters both channels of a binary gate U, producing U|ψ⟩².]

However, the new notation that I have provided today — "one half of an entangled qubit," (|ψ⟩²)_A for the A-register's qubit and (|ψ⟩²)_B for the B-register's — allows us to write these symbols as individual inputs into either input of a binary quantum gate without violating the cautionary note. Why? Because earlier, the separate inputs we disallowed were individual components of a separable tensor (when no such separable tensor existed). We were saying that you cannot mentally place a tensor symbol, ⊗, between the two individual inputs. Here, the individual symbols are not elements in the two component spaces, and there is no danger of treating them as separable components of a bipartite state, and no ⊗ is implied.
A Sends Her Measurement Results to B

A now has a two (classical) bit result of her measurement: xy = "00", "01", "10" or "11". She sends xy to B through a classical channel, which takes time to get there.
Depending on that outcome, B's qubit has landed in one of the four states

$$\alpha|0\rangle_B + \beta|1\rangle_B\,, \qquad \beta|0\rangle_B + \alpha|1\rangle_B\,, \qquad \alpha|0\rangle_B - \beta|1\rangle_B \qquad\text{or}\qquad -\beta|0\rangle_B + \alpha|1\rangle_B\,.$$

This happened as a result of A's Bell basis measurement (instantaneous, faster-than-light transfer, ergo teleportation). That's the 99.99% I spoke of earlier. To get the final .01% of the way there, he needs to look at the two classical bits he received (which took time to reach him). They tell him which of those four states his qubit landed in. If it's anything other than "00" he needs to apply a local unary operator to his B-register to fix it up, so it will be in the original |ψ⟩. The rule is
    B Receives | B Applies | B Recovers
    -----------|-----------|-----------
       00      | (nothing) |    |ψ⟩
       01      |     X     |    |ψ⟩
       10      |     Z     |    |ψ⟩
       11      |    iY     |    |ψ⟩
[Circuit QT (quantum teleportation): |ψ⟩ enters register C; registers A and B start in |β₀₀⟩; after Alice's Bell measurement and Bob's controlled corrections, |ψ⟩ emerges from register B.]

The circuit says that after taking the measurements (the meter symbols), A radios the classical data (double lines and wavy lines) to B, who uses it to control (filled circles) which of the four operations he will apply to his qubit.
Once again, we use the identity

$$ZX \;=\; iY\,.$$

(Don't forget that operators are applied from left-to-right in circuits, but right-to-left in algebra.)
Many authors go a step further and add the initial gate that creates the AB-channel Bell state |β₀₀⟩ from CBS kets:

[Circuit: two |0⟩ kets enter a Hadamard-plus-CNOT pair, emerging jointly as |β₀₀⟩,]
which leads to the complete circuit:

[Circuit: C: |ψ⟩, A: |0⟩, B: |0⟩ in; Bell-state preparation on A and B; Alice's Bell measurement on C and A; classical wires to Bob's controlled corrections; |ψ⟩ out on register B.]

The tripartite state going into the entire circuit is transformed by various gates and measurements along the way. It continues to exist as a tripartite state to the very end, but you may not recognize it as such due to the classical wires and transmission of classical information around access points R and S, seemingly halting the qubit flow to their right. Yet the full order-3 state lives on. It is simply unnecessary to show the full state beyond that point, because registers C and A, after collapse, will contain one of the four CBS kets, |x⟩_C |y⟩_A, for xy = 00, 01, 10 or 11. But those two registers never change after the measurement, and when Bob applies an operator to his local register B, say iY perhaps, he will be implicitly applying the separable operator 1 ⊗ 1 ⊗ iY to the full separable tripartite state.
[Exercise. Using natural coordinates for everything, compute the state of the vector |Ψ⟩³ as it travels through the access points P–U: |Ψ⟩³_P, |Ψ⟩³_Q, |Ψ⟩³_R, |Ψ⟩³_S, |Ψ⟩³_T and |Ψ⟩³_U. For points S, T and U you will have to know what measurement A reads and sends to B, so do those three points twice, once for a reading of CA = "01" and once for a reading of CA = "11". HINT: Starting with the easy point P, apply transformations carefully to the basis kets using separable notation like (1 ⊗ BELL) or (BELL ⊗ 1). When you get to post-measurement classical pipes, apply the Born Rule, which will select exactly one term in the sum.]
Once we have the idea to entangle channels A and C by converting to the Bell basis (perhaps driven by the fact that one of the Bell states is in the AB register pair), we end up with a state in the general form

$$|\Psi\rangle^3 \;=\; \sum_{j,k\,=\,0}^{1} |\beta_{jk}\rangle_{CA}\, |\phi_{jk}\rangle_B\,.$$

Without even looking at any of the four |φ_jk⟩ kets in the B-channel, we are convinced that 100% of the |ψ⟩ information is now sitting inside that register, waiting to be tapped. Why?

The reason is actually quite simple.

Quantum gates — including basis transformations — are always unitary and thus reversible. If the Bell-basis operator had failed to transfer all the |ψ⟩ information into the B-register, then, since none of it is left in the CA-registers (they contain only Bell states), there would be no hope of getting an inverse gate to recover our starting state, which holds the full |ψ⟩. Thus, producing an expression that leaves channels A and C bereft of any trace of |ψ⟩ information must necessarily produce a B-channel that contains it all.
12.4 Boolean Functions and Quantum Oracles

Superdense coding and quantum teleportation may seem more like applications of quantum computation than quantum algorithms. They enable us to transmit data in a way that is not possible using classical engineering. In contrast, Deutsch's little problem really feels algorithmic in nature and, indeed, its solution provides the template for the many quantum algorithms that succeed it.

We take a short side-trip to introduce Boolean functions, then construct our first quantum oracles, used in Deutsch's and later algorithms.
12.4.1 Boolean Functions

Classical unary gates — of which there are a grand total of four — contain both reversible operators (1 and ¬) and irreversible ones (the [0]-op and [1]-op). Quantum operators require reversibility due to their unitarity, so there are no quantum analogs for the latter two. Binary gates provide even more examples of irreversible classical operations for which there are no quantum counterparts.

These are specific examples of a general phenomenon that is more easily expressed in terms of classical Boolean functions.

Boolean Functions. A Boolean function is a function that has one or more binary digits (0 or 1) as input, and one binary digit as output.
From Unary Gates to Boolean Functions

A classical unary gate takes a single classical bit in and produces a single classical bit out. In the language of functions, it is nothing other than a Boolean function of one bit, i.e.,

$$f : \{0, 1\} \longrightarrow \{0, 1\}\,.$$

For example, the NOT gate corresponds to f(x) = ¬x for x ∈ B, and the [1]-op to the constant g(x) = 1 for x ∈ B.

Likewise, a classical binary gate is a Boolean function of two bits, f(x, y), described by a truth table that assigns an output bit to each of the four inputs (0, 0), (0, 1), (1, 0) and (1, 1).
12.4.2 Quantum Oracles

Although our quantum algorithms will use quantum gates, they will often have to incorporate the classical functions that are at the center of our investigations. But how can we do this when all quantum circuits are required to use unitary — and therefore reversible — gates? There is a well known classical technique for turning an otherwise irreversible function into one that is reversible. The technique pre-dates quantum computing, but we'll look at it only in the quantum context; if you're interested in the classical analog, you can mentally down-convert ours by ignoring its superposition capability and focusing only on the CBS inputs.
Oracles for Unary Functions

Suppose we are given a black box that computes some unary function, f(x), even one that may be initially unknown to us. The term black box suggests that we don't know what's on the inside or how it works.

[Black box: classical bit x in, classical bit f(x) out.]

It can be shown that, using this black box along with certain fundamental quantum gates, one can build a new gate that

• takes two bits in,
• has two bits out,
• is unitary (and therefore reversible),
• computes the function f when presented the proper input, and
• does so with the same efficiency (technically, the same computational complexity, a term we will define in a later lesson) as the black box f, whose irreversible function we want to reproduce.

We won't describe how this works but, instead, take it as a given and call the new, larger circuit U_f, the quantum oracle for f. Its action on CBS kets and its circuit diagram are defined by

    Data register:    |x⟩  →  |x⟩
                          U_f
    Target register:  |y⟩  →  |y ⊕ f(x)⟩

which also gives the name data register to the A-channel and target register to the B-channel.

First, notice that the output of the target register is a CBS; inside the ket we are XOR-ing two classical binary values, y and f(x), producing another binary value which, in turn, defines a CBS ket: either |0⟩ or |1⟩.
Example. We compute the matrix for U_f when f(x) = 0, the constant (and irreversible) [0]-op. Starting with the construction of the matrix of any linear transformation and moving on from there,

$$U_f \;=\; \Big(\, U_f|00\rangle,\; U_f|01\rangle,\; U_f|10\rangle,\; U_f|11\rangle \,\Big)$$
$$=\; \Big(\, |0\rangle\,|0 \oplus f(0)\rangle,\; |0\rangle\,|1 \oplus f(0)\rangle,\; |1\rangle\,|0 \oplus f(1)\rangle,\; |1\rangle\,|1 \oplus f(1)\rangle \,\Big)$$
$$=\; \Big(\, |0\rangle\,|f(0)\rangle,\; |0\rangle\,|\overline{f(0)}\rangle,\; |1\rangle\,|f(1)\rangle,\; |1\rangle\,|\overline{f(1)}\rangle \,\Big)\,,$$

where we use the alternate notation for negation, ā = ¬a. So far, everything we did applies to the quantum oracle for any function f, so we'll put a pin in it for future use. Now, going on to apply it to f = [0]-op,

$$U_{[0]\text{-op}} \;=\; \Big(\, |0\rangle|0\rangle,\; |0\rangle|1\rangle,\; |1\rangle|0\rangle,\; |1\rangle|1\rangle \,\Big) \;=\; \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}\,,$$

an interesting result in its own right, U_{[0]-op} = 1, but nothing to which we should attribute any deep meaning. Do note, however, that such a nice result makes it self-evident that U_f is not only unitary but its own inverse — as we show next, it always will be.
U_f is Always its Own Inverse. We compute on the tensor CBS, and the result will be extensible to the entire H ⊗ H by linearity:

$$U_f\, U_f\, |xy\rangle \;=\; U_f \big(\, |x\rangle\, |y \oplus f(x)\rangle \,\big) \;=\; |x\rangle\, \big|\, y \oplus f(x) \oplus f(x) \,\big\rangle \;=\; |x\rangle |y\rangle \;=\; |xy\rangle\,. \quad \text{QED}$$

[Exercise. Why is f(x) ⊕ f(x) = 0?]
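As a concrete check on the definition, the following numpy sketch — my own, not from the text — builds the 4×4 matrix U_f column by column from the rule U_f|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩ for each of the four unary functions, and verifies that every one is unitary and its own inverse. (Note that U_[0]-op comes out as the identity, matching the example above.)

```python
import numpy as np

def unary_oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)>, with basis order |00>,|01>,|10>,|11>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1  # column |xy> -> |x, y^f(x)>
    return U

unary_fns = {"[0]-op": lambda x: 0, "[1]-op": lambda x: 1,
             "identity": lambda x: x, "negation": lambda x: 1 - x}

for name, f in unary_fns.items():
    U = unary_oracle(f)
    assert np.allclose(U @ U, np.eye(4))    # its own inverse
    assert np.allclose(U.T @ U, np.eye(4))  # unitary (U is real)
    print(name, "->\n", U.astype(int))
```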
U_f Computes f(x). This is a simple consequence of the circuit definition, because if we plug y = 0 into the target register, we get

    Data register:    |x⟩  →  |x⟩
                          U_f
    Target register:  |0⟩  →  |f(x)⟩ .
Oracles for Binary Functions

[Black box: classical bits x₀, x₁ in, classical bit f(x₀, x₁) out.]

This time, we assume that circuit theory enables us to build a three-in, three-out oracle, U_f, defined by

    |x₀⟩  →  |x₀⟩
    |x₁⟩  →  |x₁⟩        (U_f)
    |y⟩   →  |y ⊕ f(x₀, x₁)⟩

usually shortened by using the encoded form of the CBS kets, |x⟩², where x ∈ {0, 1, 2, 3}:

    |x⟩²  →  |x⟩²        (U_f)
    |y⟩   →  |y ⊕ f(x)⟩

Plugging y = 0 again computes the function: the output is |x⟩² |f(x)⟩.
[Exercise. Compute the quantum oracle (in matrix form) for the classical AND
gate.]
12.5 Deutsch's Problem

Our first quantum algorithm answers a question about an unknown unary function f(x). It does not find the exact form of this function, but seeks only to answer a general question about its character. Specifically, we ask whether the function is one-to-one (distinct inputs produce distinct outputs) or constant (both inputs are mapped to the same output).

Obviously, we can figure this out by evaluating both f(0) and f(1), after which we would know the answer, not to mention have a complete description of f. But the point is to see what we can learn about f without doing both evaluations of the function; we only want to do one evaluation. In a classical world, if we only get to query f once, we have to choose between inputs 0 or 1, and getting the output for our choice will not tell us whether the function is one-to-one or constant.

All the massive machinery we have accumulated in the past weeks can be brought to bear on this simple problem very neatly to demonstrate how quantum parallelism will beat classical computing in certain problems. It will set the stage for all quantum algorithms.
12.5.1 Balanced and Constant Functions

For this and a subsequent algorithm, we will define a property that a Boolean function might (or might not) have. We continue to assume that function means Boolean function.

Balanced Function. A balanced function is one that takes on the value 0 for exactly half of the possible inputs (and therefore 1 on the other half).
Two examples of balanced functions of two inputs are XOR and 1_y : (x, y) ↦ y:

    (x, y) | XOR(x, y)        (x, y) | 1_y(x, y)
    (0, 0) |    0             (0, 0) |    0
    (0, 1) |    1             (0, 1) |    1
    (1, 0) |    1             (1, 0) |    0
    (1, 1) |    0             (1, 1) |    1
Two unbalanced functions of two inputs are AND and the [1]-op:

    (x, y) | AND(x, y)        (x, y) | [1](x, y)
    (0, 0) |    0             (0, 0) |    1
    (0, 1) |    0             (0, 1) |    1
    (1, 0) |    0             (1, 0) |    1
    (1, 1) |    1             (1, 1) |    1
Constant Functions. Constant functions are functions that always produce the same output regardless of the input. There are only two constant functions for any number of inputs: the [0]-op and the [1]-op. See the truth table for the [1]-op, above; the truth table for the [0]-op would, of course, have 0s in the right column instead of 1s.
Balanced and Constant Unary Functions

There are only four unary functions, so the terms balanced and constant might seem heavy-handed. The two constant functions are obviously the [0]-op and the [1]-op, and the other two are balanced. In fact, the balanced unary functions already have a term that describes them: one-to-one. There's an even simpler term for balanced functions in the unary case: not constant. To see this, let's lay all of our cards on the table, pun intended.

    x | [0](x) | 1(x) | ¬(x) | [1](x)
    0 |   0    |  0   |  1   |   1
    1 |   0    |  1   |  0   |   1

So exactly two of our unary ops are constant, and the other two are balanced = one-to-one = not constant.
The reason we complicate things by adding the vocabulary constant vs. balanced is that we will eventually move on to functions of more than one input, and in those cases,

• not all functions will be either balanced or one-to-one (e.g., the binary AND function is neither), and
• balanced functions will not be one-to-one (e.g., the binary XOR function is balanced but not one-to-one).
Deutsch's Problem

We are now ready to state Deutsch's problem using vocabulary that will help when we go to higher-input functions.

Deutsch's Problem. Given an unknown unary function that we are told is either balanced or constant, determine which it is in one query of the quantum oracle, U_f.

Notice that we are not asking to determine the exact function, just which category it belongs to. Even so, we cannot do it classically in a single query.
12.5.2 Deutsch's Algorithm

The algorithm consists of building a circuit and measuring the A-register once. That's it. Our conclusion about f is determined by the result of the measurement: if we get a "0" the function is constant, if we get "1" the function is balanced. We will have gotten an answer about f with only one query of the oracle and thereby obtained a 2× improvement over a classical algorithm.

The Circuit

We combine the quantum oracle for f with a few Hadamard gates in a very small circuit:

[Circuit: data register |0⟩ → H → U_f → H → measure; target register |1⟩ → H → U_f → (ignore).]

Because there are only four unary functions, the temptation is to simply plug each one into U_f and confirm our claim. That's not a bad exercise (which I'll ask you to do), but let's understand how one arrives at this design so we can use the ideas in other algorithms.
The Main Ideas Behind Deutsch's Solution

Classical computing is embedded in quantum computing when we restrict our attention to the finite number of CBS kets that swim around in the infinite quantum ocean of the full state space. For the simplest Hilbert space imaginable — the first-order space, H = H^(1) — those CBS kets consist of the two natural basis vectors {|0⟩, |1⟩}, corresponding to the classical bits [0] and [1]. We should expect that no improvements to classical computing can be achieved by using only z-basis states (i.e., the CBS) for any algorithm or circuit. Doing so would be using our Hilbert space as though it were the finite set {|0⟩, |1⟩}, a state of affairs that does nothing but simulate the classical world.

There are two quantum techniques that motivate the algorithm.

#1: Quantum Parallelism. This is the more general of the two ideas and will be used in all our algorithms. Any non-trivial superposition of the two CBS kets,

$$\alpha|0\rangle + \beta|1\rangle\,, \qquad \text{both } \alpha \text{ and } \beta \;\neq\; 0\,,$$

takes us off this classical plane into quantum hyperspace where all the fun happens. When we send such a non-trivial superposition through the quantum oracle, we are implicitly processing both z-basis kets — and therefore both classical states, [0] and [1] — simultaneously. This is the first big idea that fuels quantum computing and explains how it achieves its speed improvements. (The second big idea is quantum entanglement, but we'll feature that one a little later.)
The practical impact of this technique in Deutsch's algorithm is that we'll be sending a perfectly balanced (or maximally mixed) superposition,

$$|0\rangle_x \;=\; \frac{1}{\sqrt{2}}\,|0\rangle + \frac{1}{\sqrt{2}}\,|1\rangle\,,$$

through the data register (the A-channel) of the oracle, U_f.
#2: The Phase Kick-Back Trick. This isn't quite as generally applicable as quantum parallelism, but it plays a role in several algorithms, including some we'll meet later in the course. It goes like this. If we feed the other maximally mixed state,

$$|1\rangle_x \;=\; \frac{1}{\sqrt{2}}\,|0\rangle - \frac{1}{\sqrt{2}}\,|1\rangle\,,$$

into the target register (the B-channel) of U_f, we can transfer — or kick back — 100% of the information about the unknown function f(x) from the B-register output to the A-register output.

You've actually experienced this idea earlier today when you studied quantum teleportation. Recall that by merely rearranging the initial configuration of our input state we were able to effect a seemingly magical transfer of |ψ⟩ from one channel to the other. In the current context, presenting the x-basis ket |1⟩_x to the target register will have a similar effect.
Temporary Change in Notation
Because we are going to make heavy use of the x-basis kets here and the variable x is
being used as the Boolean input to the function f (x), I am going to call into action
our alternate x-basis notation,
|+i
|i
|0ix
and
|1ix .
Uf
|1i
(ignore)
363
We recognize H as the operator that takes z-basis kets to x-basis kets, thus manufacturing a |+⟩ (i.e., |0⟩_x) for the data register input and a |−⟩ (i.e., |1⟩_x) for the target register input,

    |0⟩ → H → |+⟩
    |1⟩ → H → |−⟩ .

In other words, the Hadamard gate converts the two natural basis kets (easy states to prepare) into superposition inputs for the quantum oracle. The top gate sets up quantum parallelism for the circuit, and the bottom one sets up the phase kick-back. For reference, algebraically these two gates perform

$$H|0\rangle \;=\; \frac{|0\rangle + |1\rangle}{\sqrt{2}} \;=\; |+\rangle \qquad \text{and} \qquad H|1\rangle \;=\; \frac{|0\rangle - |1\rangle}{\sqrt{2}} \;=\; |-\rangle\,.$$

Next, we track these prepared inputs through the oracle.
Step 1. CBS Into Both Channels. We creep up slowly on our result by first considering a CBS ket into both registers, a result we know immediately by definition of U_f:

    Data register:    |x⟩  →  |x⟩
                          U_f
    Target register:  |y⟩  →  |y ⊕ f(x)⟩

or algebraically,

$$U_f\big(|x\rangle |y\rangle\big) \;=\; |x\rangle\, |y \oplus f(x)\rangle\,.$$
Step 2. CBS Into Data and Superposition Into Target. We stick with a CBS |x⟩ going into the data register, but now allow the superposition |−⟩ to go into the target:

$$U_f\big(|x\rangle \otimes |-\rangle\big) \;=\; U_f\left( |x\rangle\, \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right) \;=\; |x\rangle \left( \frac{|f(x)\rangle - |\overline{f(x)}\rangle}{\sqrt{2}} \right).$$

This amounts to

$$U_f\big(|x\rangle \otimes |-\rangle\big) \;=\; \begin{cases} \;|x\rangle\, \dfrac{|0\rangle - |1\rangle}{\sqrt{2}}\,, & \text{when } f(x) = 0 \\[2ex] \;|x\rangle\, \dfrac{|1\rangle - |0\rangle}{\sqrt{2}}\,, & \text{when } f(x) = 1 \end{cases} \;\;=\;\; |x\rangle\, (-1)^{f(x)}\, \frac{|0\rangle - |1\rangle}{\sqrt{2}}\,.$$

Since it's a scalar, (−1)^{f(x)} can be moved to the left and be attached to the A-register's |x⟩, a mere rearrangement of the terms:

$$U_f\big(|x\rangle \otimes |-\rangle\big) \;=\; \Big( (-1)^{f(x)}\, |x\rangle \Big) \otimes |-\rangle\,.$$
Although we have a ways to go, let's pause to summarize what we have accomplished so far.

• Quantum Mechanics: This is a non-essential, theoretical observation to test your memory of our quantum mechanics lesson. We have proven that |x⟩|−⟩ is an eigenvector of U_f with eigenvalue (−1)^{f(x)} for x = 0, 1.

• Quantum Computing: The information about f(x) is encoded — kicked back — in the A (data) register's output. That's where we plan to look for it in the coming step. Viewed this way, the B-register retains no useful information; just like in teleportation, a rearrangement of the data sometimes creates a perceptual shift of information from one channel to another, which we can exploit by measuring along a different basis — something we will do in a moment.
Step 3. Superpositions Into Both Registers. Finally, we want the state |+⟩ to go into the data register so we can process both f(0) and f(1) in a single pass. The effect is to present the separable |+⟩|−⟩ to the oracle and see what comes out. Applying linearity to the last result, we get

$$U_f\big(|+\rangle |-\rangle\big) \;=\; U_f\left( \frac{|0\rangle + |1\rangle}{\sqrt{2}}\; |-\rangle \right) \;=\; \frac{U_f\big(|0\rangle|-\rangle\big) + U_f\big(|1\rangle|-\rangle\big)}{\sqrt{2}} \;=\; \frac{(-1)^{f(0)}|0\rangle + (-1)^{f(1)}|1\rangle}{\sqrt{2}}\; |-\rangle\,.$$

By combining the phase kick-back with quantum parallelism, we've managed to get an expression containing both f(0) and f(1) in the A-register. We now ask the question that Deutsch posed, in the context of this simple expression: What is the difference between the balanced case (f(0) ≠ f(1)) and the constant case (f(0) = f(1))? Answer: When constant, the two terms in the numerator have the same sign, and when balanced, they have different signs, to wit,

$$U_f\big(|+\rangle |-\rangle\big) \;=\; \begin{cases} \;(\pm 1)\; \dfrac{|0\rangle + |1\rangle}{\sqrt{2}}\; |-\rangle\,, & \text{if } f(0) = f(1) \\[2ex] \;(\pm 1)\; \dfrac{|0\rangle - |1\rangle}{\sqrt{2}}\; |-\rangle\,, & \text{if } f(0) \neq f(1) \end{cases}$$

We don't care about a possible overall phase factor of (−1) in front of all this, since it's a unit scalar in a state space. Dumping it and noticing that the A-register has x-basis kets in both cases, we get the ultimate simplification: the oracle's output is

$$\big(\, |+\rangle \;\text{or}\; |-\rangle \,\big) \;\otimes\; |-\rangle\,.$$
Measurement

We only care about the A-register, since the B-register will always collapse to |−⟩. The conclusion?

After measuring the A-register along the x-basis, if we collapse to |+⟩, f is constant, and if we collapse to |−⟩, f is balanced.

Of course an x-basis measurement is nothing more than a z-basis measurement after applying the x ↔ z basis-transforming unitary H. This explains the insertion of the final Hadamard gate in the upper right (dashed box). In summary, we run the circuit

[Circuit: |0⟩ → H → U_f → H (dashed) → measure; |1⟩ → H → U_f → (ignore)]

one time only and measure the data register output in the natural basis.

• If we read 0, f is constant.
• If we read 1, f is balanced.
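For readers who want to "plug each one into U_f and confirm" by machine, here is a small numpy simulation — my own sketch, not the author's code — of the full circuit on |0⟩|1⟩ for all four unary functions. The probability of reading 1 in the data register comes out 0 for the constant functions and 1 for the balanced ones.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def unary_oracle(f):                        # U_f |x>|y> = |x>|y ^ f(x)>
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

for name, f in [("[0]-op", lambda x: 0), ("[1]-op", lambda x: 1),
                ("identity", lambda x: x), ("negation", lambda x: 1 - x)]:
    state = np.kron([1, 0], [0, 1])         # |0>|1>
    state = np.kron(H, H) @ state           # left Hadamards: |+>|->
    state = unary_oracle(f) @ state         # one oracle query
    state = np.kron(H, np.eye(2)) @ state   # final H on the data register
    p1 = state[2] ** 2 + state[3] ** 2      # P(data register reads 1)
    print(f"{name}: P(read 1) = {p1:.0f} ->",
          "balanced" if p1 > 0.5 else "constant")
```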
This may not seem like a game-changing result — a quantum speed-up of 2× in a problem that is both trivial and without any real-world application — but it demonstrates that there is a difference between quantum computing and classical computing. It also lays the groundwork for the more advanced algorithms to come.

12.6 Looking Ahead

We're just getting started, though. Next time we attack a general n-qubit computer and two algorithms that work on that kind of system, so get ready for more fun.
Chapter 13
Multi-Qubit Systems and
Algorithms
13.1 Introduction

This week we generalize our work with one- and two-qubit computation to include n qubits for any integer n > 2. We'll start by defining nth order tensor products, a natural extension of what we did for 2nd and 3rd order products, and then apply that to nth order state spaces, H^(n).

It is common for non-math-major undergraduates to be confounded by higher order tensors due to the vast number of coordinates and large dimensions, so I will give you a running start by doing a short recap of both the second and third order tensor products first. This will establish a pattern that should make the larger orders go down more smoothly.
13.2 Tensor Products, in Review

13.2.1 Second Order

A second order tensor product space W = A ⊗ B consists of the vectors w expressible as

$$w \;=\; \sum_{k=0,\; j=0}^{d_A - 1,\; d_B - 1} c_{kj}\, \big( a_k \otimes b_j \big)\,,$$

where the c_kj are the scalar weights and also serve as the coordinates of w along the induced basis.

The separable basis tensors appearing in the above linear combination are the d_A · d_B vectors

$$\Big\{\, a_k \otimes b_j \;\Big|\; k = 0, \ldots, (d_A - 1) \;\text{ and }\; j = 0, \ldots, (d_B - 1) \,\Big\}\,,$$

induced by the two component bases,

$$\big\{ a_k \big\}_{k=0}^{d_A - 1} \qquad \text{and} \qquad \big\{ b_j \big\}_{j=0}^{d_B - 1}\,.$$

The sums, products and equivalence of tensor expressions were defined by the required distributive and commutative properties, but can often be taken as the natural rules one would expect.

Separable Operators in the Product Space

A separable operator on the product space is one that arises from two component operators, T_A and T_B, each defined on its respective component space, A and B. This separable tensor operator is defined first by its action on separable order-2 tensors,

$$\big[ T_A \otimes T_B \big] (a \otimes b) \;\equiv\; T_A(a) \otimes T_B(b)\,,$$

and since the basis tensors are of this form, it establishes the action of T_A ⊗ T_B on the basis which, in turn, extends the action to the whole space.
13.2.2 Third Order

We also outlined the same process for a third-order tensor product space

$$W \;=\; A \otimes B \otimes C$$

in order to acquire the vocabulary needed to present a few of the early quantum algorithms involving three channels. Here is a summary of that section.

The vectors w of this space are the weighted sums

$$w \;=\; \sum_{k=0,\; j=0,\; l=0}^{d_A - 1,\; d_B - 1,\; d_C - 1} c_{kjl}\, \big( a_k \otimes b_j \otimes c_l \big)\,,$$

where the c_kjl are the scalar weights (or coordinates) that define w.

Separable Operators in the Product Space

A separable operator on the product space is one that arises from three component operators, T_A, T_B and T_C, each defined on its respective component space, A, B and C. This separable tensor operator is defined first by its action on separable order-3 tensors,

$$\big[ T_A \otimes T_B \otimes T_C \big] (a \otimes b \otimes c) \;\equiv\; T_A(a) \otimes T_B(b) \otimes T_C(c)\,,$$

and since the basis tensors are of this form, that establishes the action of T_A ⊗ T_B ⊗ T_C on the basis which, in turn, extends the action to the whole space.
13.2.3 General Order

We now formally generalize these concepts to any order tensor product space

$$W \;=\; A_0 \otimes A_1 \otimes \cdots \otimes A_{n-2} \otimes A_{n-1}\,,$$

whose component spaces have dimensions

$$\dim(A_0) = d_0\,,\quad \dim(A_1) = d_1\,,\quad \ldots\,,\quad \dim(A_{n-2}) = d_{n-2} \quad\text{and}\quad \dim(A_{n-1}) = d_{n-1}\,,$$

so that

$$\dim(W) \;=\; d_0\, d_1 \cdots d_{n-2}\, d_{n-1} \;=\; \prod_{k=0}^{n-1} d_k\,,$$

which seems really big (and is big in fields like general relativity), but for us each component space is H, which has dimension two, so dim(W) will be the still large but at least palatable number 2ⁿ.

Objects of the Product Space and Induced Basis

The vectors — a.k.a. tensors — of the space consist of those w expressible as weighted sums of the separable basis

$$\Big\{\, a_{0 k_0} \otimes a_{1 k_1} \otimes a_{2 k_2} \otimes \cdots \otimes a_{(n-1)\, k_{n-1}} \,\Big\}_{k_0,\, k_1,\, \ldots,\, k_{n-1} \,=\, 0,\, 0,\, \ldots,\, 0}^{d_0 - 1,\; d_1 - 1,\; \ldots,\; d_{n-1} - 1}\,,$$

that is,

$$w \;=\; \sum c_{k_0 k_1 \ldots k_{n-1}}\;\; a_{0 k_0} \otimes a_{1 k_1} \otimes a_{2 k_2} \otimes \cdots \otimes a_{(n-1)\, k_{n-1}}\,.$$

This notation is an order of magnitude more general than we need, but it is good to have down for reference. We'll see that the expression takes on a much more manageable form when we get into the state spaces of quantum computing.

The sums, products and equivalence of tensor expressions have definitions analogous to their lower-order prototypes. You'll see examples as we go.
Notation. The product space and separable operators on it are often abbreviated

$$W \;=\; \bigotimes_{k=0}^{n-1} A_k \;\;\Big(\text{also written } \prod_{k=0}^{n-1} A_k\Big) \qquad \text{and} \qquad T \;=\; \bigotimes_{k=0}^{n-1} T_k \;\;\Big(\text{also written } \prod_{k=0}^{n-1} T_k\Big)\,,$$

and a separable operator acts on separable tensors component-wise,

$$\left[ \bigotimes_{k=0}^{n-1} T_k \right] \left( \bigotimes_{k=0}^{n-1} v_k \right) \;=\; \bigotimes_{k=0}^{n-1} T_k(v_k)\,.$$
13.3 n-Qubit Systems

The next step in this lecture is to define the precise state space we need for a quantum computer that supports n qubits. I won't back up all the way to two qubits as I did for the tensor product, but a short recap of three qubits will be a boon to understanding n qubits.

13.3.1 Three Qubits, in Review

A three qubit system is modeled by a third order tensor product of three identical copies of our friendly spin-1/2 Hilbert space, H. We can use either the order-notation or component space label notation to signify the product space,

$$H^{(3)} \;=\; H_A \otimes H_B \otimes H_C\,.$$

The dimension is 2 · 2 · 2 = 8.
Three Qubit CBS and Coordinate Convention

The natural three qubit tensor basis is constructed by forming all possible separable products from the component space basis vectors, and we continue to use our CBS ket notation. The CBS for H^(3) is therefore

$$\Big\{\; |0\rangle|0\rangle|0\rangle,\; |0\rangle|0\rangle|1\rangle,\; |0\rangle|1\rangle|0\rangle,\; |0\rangle|1\rangle|1\rangle,\; |1\rangle|0\rangle|0\rangle,\; |1\rangle|0\rangle|1\rangle,\; |1\rangle|1\rangle|0\rangle,\; |1\rangle|1\rangle|1\rangle \;\Big\}\,,$$

with the often used shorthand options

    |0⟩|0⟩|0⟩ ≡ |0⟩⊗|0⟩⊗|0⟩ ≡ |000⟩ ≡ |0⟩³
    |0⟩|0⟩|1⟩ ≡ |001⟩ ≡ |1⟩³
    |0⟩|1⟩|0⟩ ≡ |010⟩ ≡ |2⟩³
    |0⟩|1⟩|1⟩ ≡ |011⟩ ≡ |3⟩³
    |1⟩|0⟩|0⟩ ≡ |100⟩ ≡ |4⟩³
    |1⟩|0⟩|1⟩ ≡ |101⟩ ≡ |5⟩³
    |1⟩|1⟩|0⟩ ≡ |110⟩ ≡ |6⟩³
    |1⟩|1⟩|1⟩ ≡ |111⟩ ≡ |7⟩³ .

The notation of the first two columns admits the possibility of labeling each of the component kets with the H from which it came, A, B or C,

    |0⟩_A |0⟩_B |0⟩_C ≡ |0⟩_A ⊗ |0⟩_B ⊗ |0⟩_C ,
    |0⟩_A |0⟩_B |1⟩_C ≡ |0⟩_A ⊗ |0⟩_B ⊗ |1⟩_C ,   etc.

The densest of the notations expresses the CBS ket as an integer from 0 to 7. This can all be summarized by looking at the third order coordinate representation of these eight tensors:
$$|000\rangle = |0\rangle^3 \sim \begin{pmatrix}1\\0\\0\\0\\0\\0\\0\\0\end{pmatrix}, \quad |001\rangle = |1\rangle^3 \sim \begin{pmatrix}0\\1\\0\\0\\0\\0\\0\\0\end{pmatrix}, \quad \ldots\,, \quad |111\rangle = |7\rangle^3 \sim \begin{pmatrix}0\\0\\0\\0\\0\\0\\0\\1\end{pmatrix}.$$

In general, |x⟩³ has a single 1 in position x (counting from 0) and 0s everywhere else.
A typical three qubit value is a normalized superposition of the eight CBS, e.g.,

$$\frac{|2\rangle^3 + e^{99i}\,|3\rangle^3 + i\,|7\rangle^3}{\sqrt{3}}\,, \qquad \frac{|3\rangle^3 + |5\rangle^3}{\sqrt{2}}\,,$$

or, in general,

$$\sum_{k=0}^{7} c_k\, |k\rangle^3\,, \qquad \text{where} \qquad \sum_{k=0}^{7} |c_k|^2 \;=\; 1\,.$$
13.3.2 Three Qubit Logic Gates: the Toffoli Gate

Order three quantum logic gates are unitary operators on H^(3). They can be constructed by taking, for example, separable products of lower order gates, but the most important example for us — the Toffoli gate — is not separable.

• The Symbol. The A and B registers are the control bits, and the C register the target bit — two terms that will become clear in the next bullet point. At times I'll use all caps, as in TOFFOLI, to name the gate in order to give it the status of its simpler cousin, CNOT.

• Action on the CBS. The TOFFOLI gate has the following effect on the computational basis states:

    |x⟩  →  |x⟩
    |y⟩  →  |y⟩
    |z⟩  →  |(x ∧ y) ⊕ z⟩

In terms of the eight CBS tensors, it leaves the A and B registers unchanged and negates the C register qubit or leaves it alone based on whether the AND of the control bits is 1 or 0:

$$|z\rangle \;\longmapsto\; \begin{cases} \;|z\rangle\,, & \text{if } x \wedge y = 0 \\ \;|\overline{z}\rangle\,, & \text{if } x \wedge y = 1 \end{cases}$$

It is a controlled-NOT operator, but the control consists of two bits rather than one.

Remember, not every CBS definition we can drum up will result in a unitary operator, especially when we start defining the output kets in terms of arbitrary classical operations. In an exercise during your two qubit lesson you met a bipartite gate which seemed simple enough but turned out not to be unitary. So we must confirm this property in the next bullet.
• The Matrix. We compute the column vectors of the matrix by applying TOFFOLI to the CBS tensors to get

$$M_{\text{TOFFOLI}} \;=\; \begin{pmatrix} 1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&1&0 \end{pmatrix},$$

which is an identity matrix until we reach the last two rows (columns), where it swaps those rows (columns). It is unitary.
[Exercise. Prove that this is not a separable operator.]
Applied to a general state, the Toffoli matrix therefore just swaps the last two coordinates:

$$M_{\text{TOFFOLI}} \begin{pmatrix} c_0\\ c_1\\ c_2\\ c_3\\ c_4\\ c_5\\ c_6\\ c_7 \end{pmatrix} \;=\; \begin{pmatrix} c_0\\ c_1\\ c_2\\ c_3\\ c_4\\ c_5\\ c_7\\ c_6 \end{pmatrix}.$$
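Here's a quick numpy confirmation of both facts — my own sketch, not the text's: the matrix is the 8×8 identity with its last two rows swapped, it is unitary, and it swaps the last two coordinates of any state vector.

```python
import numpy as np

TOFFOLI = np.eye(8)
TOFFOLI[[6, 7]] = TOFFOLI[[7, 6]]         # swap the rows for |110> and |111>

assert np.allclose(TOFFOLI.T @ TOFFOLI, np.eye(8))   # unitary
c = np.arange(8.0)                        # stand-in coordinates c0..c7
print(TOFFOLI @ c)                        # [0 1 2 3 4 5 7 6]: c6 and c7 swapped
```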
This is as far as we need to go on the Toffoli gate. Our interest here is in higher order
gates that are separable products of unary gates.
13.3.3 n Qubits

An n qubit system lives in the nth order tensor product of n copies of H,

$$H^{(n)} \;=\; \overbrace{H \otimes H \otimes \cdots \otimes H}^{n \text{ copies}} \;=\; \bigotimes_{k=0}^{n-1} H\,.$$
The natural CBS kets, in both binary and encoded-integer shorthand, are

    |0···000⟩ ≡ |0⟩ⁿ
    |0···001⟩ ≡ |1⟩ⁿ
    |0···010⟩ ≡ |2⟩ⁿ
    |0···011⟩ ≡ |3⟩ⁿ
        ⋮
    |1···110⟩ ≡ |2ⁿ−2⟩ⁿ
    |1···111⟩ ≡ |2ⁿ−1⟩ⁿ .
Dimension of H^(n)

As you can tell by counting, there are 2ⁿ basis tensors in the product space, which makes sense because the dimension of the product space is the product of the dimensions of the component spaces; since dim(H) = 2, dim(H^(n)) = 2 · 2 ⋯ 2 = 2ⁿ. ✓

For nth order CBS kets we usually label each component ket using the letter x with its corresponding space label,

$$|x_{n-1}\rangle\, |x_{n-2}\rangle\, |x_{n-3}\rangle \cdots |x_0\rangle\,, \qquad x_k \in \{0, 1\}\,,$$

or, in condensed form,

$$|x_{n-1}\, x_{n-2}\, \ldots\, x_3\, x_2\, x_1\, x_0\rangle \;\equiv\; |x\rangle^n\,, \qquad x \in \{0, 1, 2, 3, \ldots, 2^n - 1\}\,.$$
For example, with n = 5,

    |0⟩⁵  = |00000⟩ ,
    |1⟩⁵  = |00001⟩ ,
    |2⟩⁵  = |00010⟩ ,
    |8⟩⁵  = |01000⟩ ,
    |23⟩⁵ = |10111⟩ ,

and, in general,

    |x⟩⁵ = |x₄x₃x₂x₁x₀⟩ .
13.3.4 n Qubit Logic Gates

Quantum logic gates of order n > 3 are nothing more than unitary operators of order n > 3, which we defined above. There's no need to say anything further about a general nth order logic gate. Instead, let's get right down to the business of describing the specific example that will pervade the remainder of the course.

The nth Order Hadamard Gate, H^⊗n

We generalize the two-qubit Hadamard gate, H^⊗2, to n qubits naturally. It is the local operator that behaves like n individual unary H gates if presented with a separable input state:

[Circuit: n parallel lines, each passing through its own H gate — n copies.]
Described in terms of the CBS,

$$H^{\otimes n}\, |x_{n-1}\, x_{n-2} \cdots x_1\, x_0\rangle \;=\; H|x_{n-1}\rangle \otimes H|x_{n-2}\rangle \otimes \cdots \otimes H|x_1\rangle \otimes H|x_0\rangle\,.$$

Recall the second order result,

$$H^{\otimes 2}\, |x\rangle^2 \;=\; \frac{1}{2} \sum_{y=0}^{3} (-1)^{x \odot y}\, |y\rangle^2\,,$$

where ⊙ stands for the mod-2 dot product. It is only a matter of expending more time and graphite to prove that this turns into the higher order version,

$$H^{\otimes n}\, |x\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{x \odot y}\, |y\rangle^n\,.$$

Putting this result into the circuit diagram gives us

$$|x\rangle^n \;\longrightarrow\; \boxed{H^{\otimes n}} \;\longrightarrow\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{x \odot y}\, |y\rangle^n\,.$$
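The claimed formula is easy to verify by brute force for small n. The following numpy sketch (my own, using the bit-ordering convention |x_{n−1} ... x₀⟩ from above) builds H^⊗n by repeated Kronecker products and compares each of its columns against the (−1)^{x⊙y} prediction.

```python
import numpy as np

n = 4
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)                     # H (x) H (x) ... (x) H

def dot_mod2(x, y):
    """Mod-2 dot product of the bit patterns of integers x and y."""
    return bin(x & y).count("1") % 2

for x in range(2 ** n):
    predicted = np.array([(-1) ** dot_mod2(x, y)
                          for y in range(2 ** n)]) / 2 ** (n / 2)
    assert np.allclose(Hn[:, x], predicted)  # column x of Hn is H^(x)n |x>
print("formula verified for n =", n)
```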
Vector Notation

We'll sometimes present the formula using vector dot products. If x and y are considered to be vectors of 1s and 0s, we represent them using boldface x and y,

$$x \;\sim\; \mathbf{x} \;=\; \begin{pmatrix} x_{n-1}\\ x_{n-2}\\ \vdots\\ x_1\\ x_0 \end{pmatrix}, \qquad y \;\sim\; \mathbf{y} \;=\; \begin{pmatrix} y_{n-1}\\ y_{n-2}\\ \vdots\\ y_1\\ y_0 \end{pmatrix}.$$

When so expressed, the dot product between vector x and vector y is understood to be the mod-2 dot product,

$$\mathbf{x} \cdot \mathbf{y} \;=\; \sum_{k=0}^{n-1} x_k\, y_k \pmod 2\,.$$

This results in an equivalent form of the Hadamard gate using vector notation,

$$H^{\otimes n}\, |\mathbf{x}\rangle \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{\mathbf{y}=0}^{2^n - 1} (-1)^{\mathbf{x} \cdot \mathbf{y}}\, |\mathbf{y}\rangle^n\,.$$
Starting from

$$|+\rangle \;\equiv\; |0\rangle_x \qquad \text{and} \qquad |-\rangle \;\equiv\; |1\rangle_x\,,$$

the induced x-basis CBS can also be written without using the letter x as a label,

$$|+\rangle|+\rangle \cdots |+\rangle|+\rangle\,,\;\; |+\rangle|+\rangle \cdots |+\rangle|-\rangle\,,\;\; |+\rangle|+\rangle \cdots |-\rangle|+\rangle\,,\;\; \ldots\,,\;\; |-\rangle|-\rangle \cdots |-\rangle|-\rangle\,.$$

Since H converts to and from the x and z bases in H, it is easy to confirm that the separable H^⊗n converts to-and-from these two bases in H^(n).

[Exercise. Do it.]
Notation. In order to make the higher order x-CBS kets of H^(n) less confusing (we need x as an encoded integer specifying the CBS state), I'm going to call to duty some non-standard notation that I introduced in our two qubit lecture: I'll use the subscript ± to indicate a CBS relative to the x-basis:

$$|y\rangle^n_\pm\,.$$

That is, if y is an integer from 0 to 2ⁿ − 1, when you see the subscript ± on the CBS ket, you know that its binary representation is telling us which x-basis (not z-basis) ket it represents. So,

    |0⟩ⁿ± = |+⟩|+⟩ ⋯ |+⟩|+⟩ ,
    |1⟩ⁿ± = |+⟩|+⟩ ⋯ |+⟩|−⟩ ,
    |2⟩ⁿ± = |+⟩|+⟩ ⋯ |−⟩|+⟩ ,
    |3⟩ⁿ± = |+⟩|+⟩ ⋯ |−⟩|−⟩ ,
    etc.

(Without the subscript ±, of course, we mean the usual z-basis CBS.) This frees up the variable x for use inside the ket, and the conversion fact just mentioned reads

$$H^{\otimes n}\, |x\rangle^n \;=\; |x\rangle^n_\pm\,.$$
Written as separable products, these states display n component factors, e.g. (for n = 5),

$$\overbrace{|0\rangle|1\rangle|1\rangle|0\rangle|1\rangle}^{n \text{ components}} \qquad \text{or} \qquad \overbrace{|+\rangle|-\rangle|-\rangle|+\rangle|-\rangle}^{n \text{ components}}\,.$$

But when expanded along any basis these states have 2ⁿ components (because the product space is 2ⁿ dimensional). From our linear algebra and tensor product lessons we recall that a basis vector, b_k, expanded along its own basis, B, contains a single 1 and the rest 0s. In coordinate form that looks like

$$b_k \;\sim\; \begin{pmatrix} 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{pmatrix}_{\!B} \leftarrow k\text{th element}\,.$$
This column vector is very tall in the current context, whether a z-basis ket,

$$|x\rangle^n \;=\; \begin{pmatrix} 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{pmatrix}_{\!z} \leftarrow x\text{th element} \qquad (2^n \text{ rows})\,,$$

or an x-basis ket expressed along the x-basis,

$$|x\rangle^n_\pm \;=\; \begin{pmatrix} 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{pmatrix}_{\!\pm} \leftarrow x\text{th element} \qquad (2^n \text{ rows})\,.$$

What does |x⟩ⁿ± look like in z-coordinates? Actually, we know, because it is the H^⊗n which turns the z-CBS into an x-CBS,

$$|x\rangle^n \;\longrightarrow\; \boxed{H^{\otimes n}} \;\longrightarrow\; |x\rangle^n_\pm\,,$$

and we have already seen the result of H^⊗n applied to any |x⟩ⁿ, namely,

$$|x\rangle^n_\pm \;=\; H^{\otimes n}\, |x\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{x \odot y}\, |y\rangle^n\,,$$

so in z-coordinates

$$|x\rangle^n_\pm \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \begin{pmatrix} \pm 1\\ \pm 1\\ \vdots\\ \pm 1 \end{pmatrix}_{\!z} \qquad (2^n \text{ rows})\,.$$
But we can do better. Not all possible combinations of +1 and −1 will appear in an x-basis ket's column vector (not counting the scalar factor (1/√2)ⁿ). An x-CBS ket, |x⟩ⁿ±, will have exactly the same number of +'s as −'s in its expansion (and +1s, −1s in its coordinate vector) except for |0⟩ⁿ±, which has all +'s (+1s). How do we know this?

We start by looking at the lowest dimension, H = H^(1), where there were two easy-to-grasp x-kets in z-basis form,

$$|+\rangle \;=\; \frac{|0\rangle + |1\rangle}{\sqrt{2}} \qquad \text{and} \qquad |-\rangle \;=\; \frac{|0\rangle - |1\rangle}{\sqrt{2}}\,.$$

The claim is easily confirmed here with only two kets to check. Stepping up to second order, the x-kets expanded along the z-basis were found to be

$$|+\rangle|+\rangle = \tfrac{1}{2}\big(|00\rangle + |01\rangle + |10\rangle + |11\rangle\big)\,, \qquad |+\rangle|-\rangle = \tfrac{1}{2}\big(|00\rangle - |01\rangle + |10\rangle - |11\rangle\big)\,,$$
$$|-\rangle|+\rangle = \tfrac{1}{2}\big(|00\rangle + |01\rangle - |10\rangle - |11\rangle\big)\,, \qquad |-\rangle|-\rangle = \tfrac{1}{2}\big(|00\rangle - |01\rangle - |10\rangle + |11\rangle\big)\,,$$

where — except for |0⟩ⁿ±, which has all plus signs — the sum always has equal numbers of +'s and −'s.

[Caution. This doesn't mean that every sum with an equal number of positive and negative coefficients is necessarily an x-CBS ket; there are still more ways to distribute the +'s and −'s equally than there are CBS kets, so the distribution of the plus and minus signs has to be even further restricted if the superposition above is to represent an x-basis ket. But just knowing that all x-CBS tensors, when expanded along the z-basis, are balanced in this sense, will help us understand and predict quantum circuits.]
Proof of Lemma. We already know that the lemma is true for first and second order state spaces, because we are staring directly into the eyes of the two x-bases, above. But let's see why the Hadamard operators tell the same story. The matrix for H^⊗2, which is used to convert the second order z-basis to an x-basis, is

$$H \otimes H \;=\; \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -1\\ 1 & 1 & -1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}.$$

If we forget about the common factor 1/2, it has a first column of +1s, and all its remaining columns have equal numbers of +1s and −1s. If we apply H^⊗2 to |0⟩² = (1, 0, 0, 0)ᵗ we get the first column, all +1s. If we apply it to any |x⟩², for x > 0 — say |2⟩² = (0, 0, 1, 0)ᵗ — we get one of the other columns, each one of which has an equal number of +1s and −1s.
To reproduce this claim for any higher order Hadamard, we just show that the matrix for H^⊗n (which generates the nth order x-basis) will also have all +1s in the left column and equal numbers of +1s and −1s in the other columns. This is done formally by recursion, but we can get the gist by noting how we extend the claim from n = 2 to n = 3. By definition,

$$H^{\otimes 3} \;=\; H \otimes H \otimes H \;=\; H \otimes H^{\otimes 2}\,,$$

or, in terms of matrices,

$$H^{\otimes 3} \;=\; \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \;\otimes\; \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -1\\ 1 & 1 & -1 & -1\\ 1 & -1 & -1 & 1 \end{pmatrix}.$$

By our technique for calculating tensor product matrices, we know that the matrix on the right will appear four times in the 8 × 8 product matrix, with the lower right copy being negated (due to the −1 in the lower right of the smaller left matrix). To wit,

$$H^{\otimes 3} \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!3} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\\ 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\\ 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\ 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\\ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\\ 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \end{pmatrix}.$$

Therefore, except for the first column (all +1s), the tensor product's columns will all be a doubling (vertical stacking) of the columns of the balanced 4 × 4 (or a negated 4 × 4). Stacking two balanced columns above one another produces a column that is twice as tall, but still balanced. QED

[Exercise. Give a rigorous proof by showing how one extends an order (n − 1) Hadamard matrix to an order n Hadamard matrix.]
13.3.5 Quantum Oracles for n-Input Functions

We saw that quantum oracles for functions of a single Boolean variable helped produce some early quantum algorithms. Now that we are graduating to n qubit circuits useful in studying Boolean functions of n binary inputs, it's time to upgrade our definition of quantum oracles to cover multi-input f's. (We retain the assumption that our functions produce only a single Boolean output value — they're not vector functions.)

We are given a black box for an n-input Boolean function, f(x_{n−1}, x_{n−2}, ..., x₁, x₀):

[Black box: classical bits x_{n−1}, x_{n−2}, ..., x₁, x₀ in; classical bit f(x_{n−1}, x_{n−2}, ..., x₁, x₀) out.]

The corresponding (n+1)-qubit quantum oracle is defined on CBS kets by

    Data register:    |x⟩ⁿ  →  |x⟩ⁿ
                           U_f
    Target register:  |y⟩   →  |y ⊕ f(x)⟩

and plugging y = 0 computes the function: the output is |x⟩ⁿ |f(x)⟩.

We assume (it does not follow from the definition) that the oracle is of the same spatial circuit complexity as f(x), i.e., it grows in size at the same rate as f grows relative to the number of inputs, n. This is usually demonstrated to be true for common individual functions by manually presenting circuits that implement oracles for those functions.
13.4 The Deutsch-Jozsa Problem

Deutsch's algorithm enabled us to see how quantum computing could solve a problem faster than classical computing, but the speed-up was limited to 2× — forget about the expense required to build the quantum circuit; it's not enough to justify the investment. We now restate Deutsch's problem for functions of n Boolean inputs and in that context call it the Deutsch-Jozsa Problem. We will find that the classical deterministic solution grows (in time) exponentially as n increases, not counting the increasing oracle size, while the quantum algorithm we present next has a constant time solution. This is a significant speed-up relative to the oracle and does give us reason to believe that quantum algorithms may be of great value. (The precise meaning of relative vs. absolute speed-up will be presented in our up-coming lesson devoted to quantum oracles, but we'll discuss a couple of different ways to measure the speed-up informally later today.)

We continue to study functions that have Boolean inputs and outputs, specifically n binary inputs and one binary output,

$$f : \{0, 1\}^n \longrightarrow \{0, 1\}\,.$$

The Deutsch-Jozsa Problem. Given an unknown function,

$$f(x_{n-1}, x_{n-2}, \ldots, x_1, x_0)\,,$$

of n inputs that we are told is either balanced or constant, determine which it is in one query of the quantum oracle, U_f.
13.4.1 Deutsch-Jozsa Algorithm

The algorithm consists of building a circuit very similar to that in Deutsch's circuit and measuring the data register once. Our conclusion about f is the same as in the unary case: if we get a "0" the function is constant, if we get anything else the function is balanced. We'll analyze the speed-up after we prove this claim.

The Circuit

We replace the unary Hadamard gates of Deutsch's circuit with nth order Hadamard gates to accommodate the wider data register lines, but otherwise, the circuit layout is organized the same:

[Circuit: data register |0⟩ⁿ → H^⊗n → U_f → H^⊗n → measure; target register |1⟩ → H → U_f → (ignore).]

The H and H^⊗n operators take z-basis kets to x-basis kets in the first order H and the nth order H^(n) spaces, respectively, thus manufacturing a |0⟩ⁿ± for the data register input and a |−⟩ for the target register input,

    |0⟩ⁿ → H^⊗n → |0⟩ⁿ±
    |1⟩  → H    → |−⟩ .
The top gate sets up quantum parallelism and the bottom sets up the phase kick-back. For reference, here is the algebra:

$$H^{\otimes n}\, |0\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{0 \odot y}\, |y\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} |y\rangle^n$$

and

$$H\, |1\rangle \;=\; \frac{|0\rangle - |1\rangle}{\sqrt{2}} \;=\; |-\rangle\,.$$

Next, we track these prepared inputs through the oracle.
We'll do it in stages, as before, to avoid confusion and be sure we don't make mistakes.

Step 1. CBS Into Both Channels. When a natural CBS ket goes into both registers, the definition of U_f tells us what comes out:

    Data register:    |x⟩ⁿ  →  |x⟩ⁿ
                           U_f
    Target register:  |y⟩   →  |y ⊕ f(x)⟩

algebraically,

$$U_f\big(|x\rangle^n |y\rangle\big) \;=\; |x\rangle^n\, |y \oplus f(x)\rangle\,.$$

Step 2. CBS Into Data and Superposition Into Target. Keeping a CBS |x⟩ⁿ in the data register but feeding |−⟩ to the target,

$$U_f\big(|x\rangle^n \otimes |-\rangle\big) \;=\; U_f\left( |x\rangle^n\, \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right) \;=\; |x\rangle^n \left( \frac{|f(x)\rangle - |\overline{f(x)}\rangle}{\sqrt{2}} \right).$$
This amounts to

$$U_f\big(|x\rangle^n \otimes |-\rangle\big) \;=\; \begin{cases} \;|x\rangle^n\, \dfrac{|0\rangle - |1\rangle}{\sqrt{2}}\,, & \text{when } f(x) = 0 \\[2ex] \;|x\rangle^n\, \dfrac{|1\rangle - |0\rangle}{\sqrt{2}}\,, & \text{when } f(x) = 1 \end{cases} \;\;=\;\; |x\rangle^n\, (-1)^{f(x)}\, \frac{|0\rangle - |1\rangle}{\sqrt{2}} \;=\; \Big( (-1)^{f(x)}\, |x\rangle^n \Big) \otimes |-\rangle\,,$$

and once again the information about f(x) is converted into an overall phase factor in the data register, (−1)^{f(x)} |x⟩ⁿ.

From a circuit standpoint, we have accomplished

    Data register:    |x⟩ⁿ  →  (−1)^{f(x)} |x⟩ⁿ
                           U_f
    Target register:  |−⟩   →  |−⟩ .
Step 3. Superpositions Into Both Registers. Finally, we send the full output of H^⊗n |0⟩ⁿ,

$$|0\rangle^n_\pm \;=\; |{+}{+}{+} \cdots {+}{+}\rangle\,,$$

into the data register so we can process f(x) for all x in a single pass and thereby leverage quantum parallelism. The net effect is to present the separable |0⟩ⁿ± |−⟩ to the oracle. Applying linearity to the last result, we find

$$U_f\big(|0\rangle^n_\pm\, |-\rangle\big) \;=\; U_f\left( \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} |y\rangle^n \right] |-\rangle \right) \;=\; \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} U_f\big(|y\rangle^n |-\rangle\big) \;=\; \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, |y\rangle^n \right] |-\rangle\,.$$

All that remains is to apply the final H^⊗n to the data register and see why its output reveals the answer. To that end, we consider how it changes the state at access point P (just after the oracle) into a state at the final access point Q:

[Circuit: |0⟩ⁿ → H^⊗n → U_f → (P) → H^⊗n → (Q) → measure; |1⟩ → H → U_f → (ignore).]

Since the target register remains |−⟩ throughout and is of no further interest,
it is the output of the data register we will test. The final H^⊗n produces the output

$$H^{\otimes n} \left( \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, |y\rangle^n \right) \;=\; \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, H^{\otimes n} |y\rangle^n$$
$$=\; \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)} \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{z=0}^{2^n - 1} (-1)^{y \odot z}\, |z\rangle^n \right]$$
$$=\; \frac{1}{2^n} \sum_{z=0}^{2^n - 1} \underbrace{\left( \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, (-1)^{y \odot z} \right)}_{G(z)} |z\rangle^n\,,$$

where we have regrouped the sum and defined a scalar function, G(z), of the summation index z. So, the final output is an expansion along the z-basis,

$$\frac{1}{2^n} \sum_{z=0}^{2^n - 1} G(z)\, |z\rangle^n\,.$$
We now look only at the coefficient, G(0)/2ⁿ, of the very first CBS ket, |0⟩ⁿ. This will tell us something about the other 2ⁿ − 1 CBS coefficients, G(z), for z > 0. We break it into two cases.

• f is constant. In this case, f(y) is the same for all y, either 0 or 1; call it c. We evaluate the coefficient of |0⟩ⁿ in the expansion, namely G(0)/2ⁿ:

$$\frac{G(0)}{2^n} \;=\; \frac{1}{2^n} \sum_{y=0}^{2^n - 1} (-1)^c\, (-1)^{y \odot 0} \;=\; \frac{(-1)^c}{2^n}\; 2^n \;=\; \pm 1\,,$$

thereby forcing the coefficients of all other z-basis kets in the expansion to be 0 (why?). So in the constant case we have a CBS ket |0⟩ⁿ at access point Q with certainty and are therefore guaranteed to get a reading of 0 if we measure the state.

• f is balanced. This time the coefficient of |0⟩ⁿ in the expansion is

$$\frac{G(0)}{2^n} \;=\; \frac{1}{2^n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, (-1)^{y \odot 0} \;=\; \frac{1}{2^n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)} \;=\; 0\,,$$

because f balanced means exactly half the terms in the sum are +1 and the other half are −1. So in the balanced case we can never get a reading of 0.
In summary, we run the circuit

[Circuit: |0⟩ⁿ → H^⊗n → U_f → H^⊗n → measure; |1⟩ → H → U_f → (ignore)]

one time only and measure the data register output in the natural basis.

• If we read 0, then f is constant.
• If we read x for any other x (i.e., x ∈ [1, 2ⁿ − 1]), then f is balanced.
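A complete end-to-end simulation fits in a few lines of numpy. This is my own sketch, not the course's code: f is supplied as a lookup table of length 2ⁿ, the oracle acts on n + 1 qubits (data = high-order bits, target = low-order bit), and the data register reads 0 exactly when f is constant.

```python
import numpy as np

def deutsch_jozsa(f_table):
    n = int(np.log2(len(f_table)))
    N = 2 ** (n + 1)
    U = np.zeros((N, N))                       # U_f |x>|y> = |x>|y ^ f(x)>
    for x, fx in enumerate(f_table):
        for y in (0, 1):
            U[2 * x + (y ^ fx), 2 * x + y] = 1
    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H1
    for _ in range(n - 1):
        Hn = np.kron(Hn, H1)
    state = np.zeros(N); state[1] = 1          # |0...0>|1>
    state = np.kron(Hn, H1) @ state            # Hadamards on every line
    state = U @ state                          # one oracle query
    state = np.kron(Hn, np.eye(2)) @ state     # final H^(x)n, data lines only
    p_zero = state[0] ** 2 + state[1] ** 2     # P(data register reads 0...0)
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch_jozsa([0, 0, 0, 0]))             # -> constant
print(deutsch_jozsa([0, 1, 1, 0]))             # XOR, -> balanced
```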
13.4.2 Time Complexity

Classically, a deterministic answer requires that we evaluate f

$$2^{n-1} + 1$$

times. That is, we'd plug just over half of the possible x values into f (say, x = 0, 1, 2, ..., 2^{n−1}), and if the outputs were all the same, we'd know the function must be constant. If any two were distinct, we'd know it is balanced. Of course, we may get lucky and find that f(0) ≠ f(1), in which case we can declare victory (balanced) very quickly, but we cannot count on that. We could be very unlucky and get the same output for the first 2ⁿ/2 computations, only to know the answer with certainty on the (2ⁿ/2 + 1)st (if it's the same as the others: constant, if not: balanced).

While we have not had our official lecture on time complexity, we can see that as the number of binary inputs, n, grows, the number of required evaluations of f, 2^{n−1} + 1, grows exponentially with n. However, when we consider that there are N = 2ⁿ encoded integers that are allowed inputs to f, then as N grows, the number of evaluations of f, N/2 + 1, grows only linearly with N.

The classical problem has a deterministic solution which is exponential in n (the number of binary inputs), or linear in N = 2ⁿ (the number of integer inputs).
Probabilistic. If we are willing to accept a small probability of error, the classical story changes: use the "M-and-guess" method, in which we evaluate f at M randomly chosen inputs and, if all M outputs agree, declare the function constant (otherwise, balanced). The only failure mode is that all trial outcomes were the Same yet f was actually Balanced, an event whose probability is

$$P(S \cap B) \;=\; 2 \cdot \left( \frac{1}{2} \right)^{\!M} \cdot\; \frac{1}{2} \;=\; \frac{1}{2^M}\,.$$

The factor of 2 out front is due to the fact that the error on a balanced function can occur two different ways, all 1s or all 0s. The final factor 1/2 is a result of an assumption — which could be adjusted if not true — that we are getting a constant function or balanced function with equal likelihood.

So we decide beforehand the error probability we are willing to accept, say some ε ≪ 1, and select M so that

$$\frac{1}{2^M} \;<\; \varepsilon\,.$$

This will allow our classical algorithm to complete (with the same tiny error probability, ε) in a fixed number of evaluations, M, of the function f, regardless of the number of inputs, n. To give you an idea,

$$P(S \cap B) \;\approx\; \begin{cases} \;0.000001\,, & \text{for } M = 20 \\ \;9 \times 10^{-16}\,, & \text{for } M = 50 \end{cases}$$
Since the error probability does not increase with increasing n, the classical probabilistic algorithm has a constant time solution, meaning that we can solve the problem with the same time complexity as the quantum Deutsch-Jozsa algorithm. (We will define terms like complexity and constant time precisely very soon, but you get the general idea.) Therefore, no realistic speed-up is gained using quantum computing if we accept a vanishingly small error probability.

This does not diminish the importance of the deterministic solution, which does show a massive computational speed increase, but we must always temper our enthusiasm with a dose of reality.
13.4.3 A Second Look at the Final Hadamard

I'd like to offer a slightly more elaborate but also more illustrative argument for the final Hadamard gate and how we might guess that it is the correct way to complete the circuit (dashed box):

[Circuit: |0⟩ⁿ → H^⊗n → U_f → (P) → H^⊗n (dashed) → (Q) → measure; |1⟩ → H → U_f → (ignore).]

Recall that just after the oracle, at access point P,
the data register was in the state

$$\left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, |y\rangle^n\,.$$

The constant case guarantees that we land in the x-CBS state |0⟩ⁿ± (up to an overall sign). The balanced case suggests that we might end up in one of the other x-CBS states, |x⟩ⁿ±, for x > 0. Let's pretend that in the balanced case we are lucky enough to land exactly in one of those other CBS states. If so, when we measure at access point P along the x-basis,

1. a measurement of 0 would imply that f was constant, and
2. a measurement of x, for x > 0, would imply that f was balanced.

This is because measuring any CBS state along its own basis gives, with 100% probability, the value of that state; that state's amplitude is 1 and all the rest of the CBS states' amplitudes are 0.
The Bad News. Alas, we are not able to assert that all balanced f's will produce x-CBS kets, since there are more ways to distribute the + and − signs equally than there are x-CBS kets.

The Good News. We do know something that will turn out to be pivotal: a balanced f will never have the CBS ket |0⟩ⁿ± in its expansion. Let's prove it.
If we give the data register's state at access point P the name |ψ⟩ⁿ,

$$|\psi\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, |y\rangle^n\,,$$

then, in z-coordinates,

$$|\psi\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \begin{pmatrix} \pm 1\\ \pm 1\\ \vdots\\ \pm 1 \end{pmatrix} \qquad \text{with equal numbers of } +1 \text{ and } -1\,.$$

Furthermore, its |0⟩ⁿ± coefficient is given by the dot-with-the-basis-ket trick (all coefficients are real, so we can use a simple dot product),

$$\;_\pm\langle 0 \,|\, \psi \rangle^n \;=\; \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \begin{pmatrix} 1\\ 1\\ \vdots\\ 1 \end{pmatrix} \right] \cdot \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \begin{pmatrix} \pm 1\\ \pm 1\\ \vdots\\ \pm 1 \end{pmatrix} \right].$$

Aside from the scalar factors (1/√2)ⁿ, the left vector has all 1s, while the right vector has half +1s and half −1s, i.e., their dot product is 0: we are assured that there is no presence of the 0th x-CBS ket |0⟩ⁿ± in the expansion of a balanced f's data register output. ✓
We have shown that the amplitude of the data register's |0⟩ⁿ± is 0 whenever f is balanced, and we already knew that its amplitude is ±1 whenever f is constant, so measuring at access point P along the x-basis will

• collapse to |0⟩ⁿ± if f is constant, guaranteed, and
• never collapse to |0⟩ⁿ± if f is balanced, guaranteed.

Conclusion: If we measure along the x-basis at access point P, a reading of 0 means constant and a reading of x, for any x > 0, means balanced.
Measurement

The x-basis measurement we seek is nothing more than a z-basis measurement after applying the nth order x ↔ z basis-transforming unitary, H^⊗n. This explains the final nth order Hadamard gate in the upper right (dashed box):

[Circuit: |0⟩ⁿ → H^⊗n → U_f → H^⊗n (dashed) → measure; |1⟩ → H → U_f → (ignore).]
13.5 The Bernstein-Vazirani Problem

This time the unknown function is promised to be of the form

$$f(x) \;=\; a \odot x$$

for some fixed, unknown n-bit integer a; the problem is to determine a.

13.5.1 Bernstein-Vazirani Algorithm

The algorithm uses the same circuit with the same inputs and the same single data register measurement as Deutsch-Jozsa. However this time, instead of asking whether we see a 0 or a non-0 at the output, we look at the full output: its value will be our desired unknown, a.
The Circuit

For quick reference, here it is again:

[Circuit: |0⟩ⁿ → H^⊗n → U_f → H^⊗n → measure; |1⟩ → H → U_f → (ignore).]

The |0⟩ⁿ going into the top register provides the quantum parallelism, and the |1⟩ into the bottom offers a phase kick-back that transfers information about f from the target output to the data output.
Preparing the Oracle's Input: The Two Left Hadamard Gates

Same as Deutsch-Jozsa. The first part of the circuit prepares the states that are needed for quantum parallelism and phase kick-back,

$$H^{\otimes n}\, |0\rangle^n \;=\; \left( \frac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} |y\rangle^n \qquad \text{and} \qquad H\, |1\rangle \;=\; \frac{|0\rangle - |1\rangle}{\sqrt{2}} \;=\; |-\rangle\,.$$

Applying the oracle to these inputs yields the same phase kick-back expression as in Deutsch-Jozsa,
namely,

$$U_f\big( |0\rangle^n_\pm\, |-\rangle \big) \;=\; \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{f(y)}\, |y\rangle^n \right] |-\rangle \;=\; \left[ \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{a \odot y}\, |y\rangle^n \right] |-\rangle\,,$$

and the final H^⊗n on the data register gives

$$H^{\otimes n} \left( \left( \tfrac{1}{\sqrt{2}} \right)^{\!n} \sum_{y=0}^{2^n - 1} (-1)^{a \odot y}\, |y\rangle^n \right) \;=\; \frac{1}{2^n} \sum_{z=0}^{2^n - 1} \underbrace{\left( \sum_{y=0}^{2^n - 1} (-1)^{a \odot y}\, (-1)^{y \odot z} \right)}_{G(z)} |z\rangle^n\,,$$

which also defines a scalar function, G(z), used to simplify the analysis. So, the final output is an expansion along the z-basis,

$$\frac{1}{2^n} \sum_{z=0}^{2^n - 1} G(z)\, |z\rangle^n\,.$$

We evaluate the amplitude G(z)/2ⁿ in two cases.
• z = a. Here

$$\frac{G(a)}{2^n} \;=\; \frac{1}{2^n} \sum_{y=0}^{2^n - 1} (-1)^{a \odot y}\, (-1)^{y \odot a} \;=\; \frac{1}{2^n} \sum_{y=0}^{2^n - 1} 1 \;=\; \frac{2^n}{2^n} \;=\; 1\,.$$

• z ≠ a. We don't even have to sweat the computation of the amplitudes for the other kets, because once we know that |a⟩ has amplitude 1, the others have to be 0. (Why?)

We have shown that at access point Q, the CBS state |a⟩ is sitting in the data register. Since it is a CBS state, it won't collapse to anything other than what it already is, and we are guaranteed to get a reading of a, our sought-after n-bit binary number.
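The Deutsch-Jozsa simulator needs only a one-line change of f to demonstrate this (again, a sketch of my own, not the course's code): with f(x) = a ⊙ x, the single measurement returns a itself.

```python
import numpy as np

def bernstein_vazirani(a, n):
    f = [bin(a & x).count("1") % 2 for x in range(2 ** n)]  # f(x) = a . x mod 2
    N = 2 ** (n + 1)
    U = np.zeros((N, N))                      # the usual oracle U_f
    for x, fx in enumerate(f):
        for y in (0, 1):
            U[2 * x + (y ^ fx), 2 * x + y] = 1
    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H1
    for _ in range(n - 1):
        Hn = np.kron(Hn, H1)
    state = np.zeros(N); state[1] = 1         # |0>^n |1>
    state = np.kron(Hn, H1) @ state
    state = U @ state                         # one oracle query
    state = np.kron(Hn, np.eye(2)) @ state
    return int(np.argmax(state ** 2)) // 2    # data-register reading

n = 5
for a in (0b10111, 0b01000):
    assert bernstein_vazirani(a, n) == a
print("a recovered exactly, in one oracle query")
```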
Time Complexity

Because the quantum circuit evaluates U_f only once, this is a constant time solution. What about the classical solution?

Deterministic. Classically we would need a full n evaluations of f in order to get all n coordinates of a. That is, we would use the input value

$$\mathbf{e}_k \;=\; \begin{pmatrix} 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{pmatrix} \leftarrow k\text{th element}$$

in order to compute the kth coordinate of a based on

$$f(\mathbf{e}_k) \;=\; \mathbf{a} \odot \mathbf{e}_k \;=\; \begin{pmatrix} a_{n-1}\\ \vdots\\ a_k\\ \vdots\\ a_0 \end{pmatrix} \odot \begin{pmatrix} 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{pmatrix} \;=\; a_k\,.$$

After n passes we would have all n coordinates of a and be done. Thus, the classical algorithm grows linearly with the number of inputs n. This kind of growth is called linear growth or linear time complexity: it requires longer to process more inputs, but if you double the number of inputs, it only requires twice as much time. This is not as bad as the exponential growth of the classical deterministic Deutsch-Jozsa algorithm.
13.6 The Generalized Born Rule

We can't end this lesson without providing a final generalization of Traits #15 and #15′, the Born rule for bipartite and tripartite systems. We'll call it Trait #15″, the generalized Born rule. The sentiment is the same as its smaller order cousins'. In rough language it says that when we have a special kind of sum of separable states from two high-dimensional spaces, A and B, an A-measurement will cause the overall state to collapse to one of the separable terms, thereby selecting the B-state of that term.

13.6.1 Statement of the Rule

Assume that we have an (n + m)th order state, |ψ⟩^{n+m}, in the product space A ⊗ B = H^(n) ⊗ H^(m), with the property that |ψ⟩^{n+m} can be written as the following kind of sum:

$$|\psi\rangle^{n+m} \;=\; |0\rangle^n_A\, |\omega_0\rangle^m_B \;+\; |1\rangle^n_A\, |\omega_1\rangle^m_B \;+\; \cdots \;+\; |2^n - 1\rangle^n_A\, |\omega_{2^n - 1}\rangle^m_B\,.$$

In this special form, notice that each term is a separable product of a distinct CBS ket from A and some general state from B, i.e., the kth term is

$$|k\rangle^n_A\, |\omega_k\rangle^m_B\,.$$
We know by QM Trait #7 (post-measurement collapse) that if we measure the A-register, the state of the component space A = H^(n) must collapse to one of the CBS states, call it

$$|k_0\rangle^n_A\,.$$

The generalized Born rule assures us that this will force the component space B = H^(m) to collapse to the matching state,

$$|\omega_{k_0}\rangle^m_B\,,$$

only it will become normalized after the collapse to the equivalent

$$\frac{|\omega_{k_0}\rangle^m}{\sqrt{\langle \omega_{k_0} \,|\, \omega_{k_0} \rangle}}\,.$$

(Note that in the last expression I suppressed the superscripts m in the denominator and the subscripts B everywhere, to avoid clutter.)
Discussion

The assumption of this rule is that the component spaces A and B are in an entangled state which can be expanded as a sum, all terms of which have A-basis factors. Well, any state in A ⊗ B can be expressed this way; all we have to do is expand it along the full 2^{n+m} product basis kets, then collect terms having like A-basis kets and factor out the common ket in each term. So the assumption isn't so much about the state |ψ⟩^{n+m} as it is about how the state is written.

The next part reminds us that when an observer of the state space A takes a measurement along the natural basis, her only possible outcomes are one of the 2ⁿ basis kets,

$$\big\{\, |0\rangle^n_A\,,\; |1\rangle^n_A\,,\; |2\rangle^n_A\,,\; \ldots\,,\; |2^n - 1\rangle^n_A \,\big\}\,,$$

so only one term in the original sum survives. That term tells us what a B-state space observer now has before him:

$$A \text{ reads } 0 \;\Longrightarrow\; B \text{ holds } \frac{|\omega_0\rangle^m}{\sqrt{\langle \omega_0 | \omega_0 \rangle}}\,, \qquad A \text{ reads } 1 \;\Longrightarrow\; B \text{ holds } \frac{|\omega_1\rangle^m}{\sqrt{\langle \omega_1 | \omega_1 \rangle}}\,, \qquad \ldots$$

and, in general,

$$A \text{ reads } k \;\Longrightarrow\; B \text{ holds } \frac{|\omega_k\rangle^m}{\sqrt{\langle \omega_k | \omega_k \rangle}} \qquad \text{for } k = 0, 1, \ldots, 2^n - 1\,.$$
This does not tell us what a B-state observer would measure, however, since the state he is left with, call it

$$\frac{|\omega_{k_0}\rangle^m}{\sqrt{\langle \omega_{k_0} | \omega_{k_0} \rangle}}\,,$$

is not assumed to be a CBS ket of the B space. It is potentially a superposition of CBS kets itself, so it has a wide range of possible collapse probabilities. However, just knowing the amplitudes of the state |ω_{k₀}⟩ᵐ_B (after normalization) narrows down what B is likely to find. This measurement collapse — forced by an observer of A, but experienced by an observer of the entangled B — is the crux of the remainder of the course.

Size of the Superposition. We've listed the sum as potentially having the maximum number of 2ⁿ terms, based on the underlying assumption that each term has an A-basis ket. However, it often has fewer terms, in which case the rule still applies, only then there are fewer collapse possibilities.

Role of A and B. There was nothing special about the component state space A. We could have expanded the original state, |ψ⟩^{n+m}, in such a way that each term in the sum had a B-space CBS ket, |k⟩ᵐ_B, and the A-space partners were general states, |ω_k⟩ⁿ_A.
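The rule is easy to exercise numerically. In this numpy sketch (mine, with arbitrary made-up amplitudes), a random real state of H^(2) ⊗ H^(3) is stored as a 4×8 array whose row k holds the unnormalized partner |ω_k⟩; measuring A samples k₀ with probability ⟨ω_{k₀}|ω_{k₀}⟩ and leaves B holding the normalized row.

```python
import numpy as np
rng = np.random.default_rng(7)

n, m = 2, 3                              # A = H(2), B = H(3)
psi = rng.normal(size=(2 ** n, 2 ** m))  # psi[k, j]: coeff of |k>_A |j>_B
psi /= np.linalg.norm(psi)               # normalize the full state

# row k is the (unnormalized) partner |omega_k>; P(A reads k) = <omega_k|omega_k>
p = (psi ** 2).sum(axis=1)
k0 = rng.choice(2 ** n, p=p)             # A's measurement collapse

b_state = psi[k0] / np.linalg.norm(psi[k0])   # B's post-collapse state
print("A read", k0, "with prob", round(p[k0], 3), "; B holds", b_state.round(3))
```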
13.7
Chapter 14
Probability Theory
14.1
14.1.1
In the few quantum algorithms we've had, the quantum circuit solved the problem in a single pass of the circuit: one query of the oracle. We didn't need probability for that. We did use probability, informally, to estimate how long a classical algorithm would take to complete if we allowed for experimental error. In the Deutsch-Jozsa problem, when we considered a classical solution using the M-and-guess method, we expressed the event of failure (all trial outcomes were the Same yet — read "and" — f is Balanced) by

$$S \cap B\,,$$

and argued intuitively that the probability of this happening was

$$P(S \cap B) \;\approx\; \frac{1}{2^M}\,.$$

That analysis was necessary to evaluate the worthiness of the quantum alternative's constant-time solution.
14.1.2

Soon, we will study quantum algorithms that require multiple queries of a quantum circuit and result in an overall performance which is non-deterministic, i.e., probabilistic. Here's a preview of a small section of Simon's quantum algorithm:

    ... (previous steps) ...
    Repeat the following loop at most n + T times, or until we get n − 1 linearly independent vectors, whichever comes first.
14.2

When we build and analyze quantum circuits, we'll be throwing around the terms event and probability. Events (word-like things) are more fundamental than probabilities (number-like things). Here is an informal description (rigor to follow).

14.2.1 Events
14.2.2 Probabilities

Probabilities are the numeric likelihoods that certain events will occur. They are always numbers between 0 and 1, inclusive. If the probability of an event is 0, it cannot occur; if it is 1, it will occur with 100% certainty; if it is .7319, it will occur 73.19% of the time; and so on. We express the probabilities of events using P() notation, like so:

$$P(\text{some event}) \;=\; .338 \qquad \text{or} \qquad P(\text{another event}) \;=\; .021\,.$$

In the heat of an analysis, we'll be using the script letters that symbolize the events under consideration. They will be defined in context, so you'll always know what they stand for. The corresponding probabilities of the events will be expressed, then, using syntax like

$$P(E) \;>\; .99996\,, \qquad P(I) \;=\; .1803\,, \qquad \text{or} \qquad P(C) \;=\; .99996\,.$$

14.3 A Quantum Coin Flip
Many probability lessons use the familiar coin flip as a source of elementary examples. But we quantum computer scientists can use a phenomenon which behaves exactly like a coin flip but is much more interesting: a measurement of the quantum superposition state
$$\frac{|0\rangle + |1\rangle}{\sqrt{2}}\,.$$
If we prepare this state by applying a Hadamard gate, H, to the basis state $|0\rangle$, our coin is waiting at the output gate. In other words, the coin is $H|0\rangle$:
$$|0\rangle \;\longrightarrow\; \boxed{H} \;\longrightarrow\; \frac{|0\rangle + |1\rangle}{\sqrt{2}}\,.$$
[Exercise. We recognize this state under an alias; it also goes by the name $|+\rangle$. Recall why.]
Measuring the output state $H|0\rangle$ is our actual toss. It causes the state to collapse to either $|0\rangle$ or $|1\rangle$, which we would experience by seeing a 0 or 1 on our meter:
$$\frac{|0\rangle + |1\rangle}{\sqrt{2}} \;\searrow\; |0\rangle \qquad\text{or}\qquad \frac{|0\rangle + |1\rangle}{\sqrt{2}} \;\searrow\; |1\rangle\,.$$
Here, the collapse arrows land on the respective CBS kets (the eigenvectors). Since both amplitudes are $1/\sqrt{2}$, the probabilities are
$$P(\text{measuring } 0) \;=\; \left|\frac{1}{\sqrt{2}}\right|^2 \;=\; \frac{1}{2}\,, \qquad\text{and}\qquad P(\text{measuring } 1) \;=\; \left|\frac{1}{\sqrt{2}}\right|^2 \;=\; \frac{1}{2}\,.$$
So we have a perfectly good coin. Suddenly, learning probability theory seems a lot more appealing, and as a bonus we'll be doing a little quantum computing along the way.
[Exercise. We could have used $H|1\rangle$ as our coin. Explain why.]
[Exercise. The above presupposes that we will measure the output state along the z-basis, $\{|0\rangle, |1\rangle\}$. What happens if, instead, we measure the same state along the x-basis, $\{|+\rangle, |-\rangle\}$? Can we use this method as a fair coin flip?]
14.4
We now give the definitions of events and probabilities a bit more rigor.
14.4.1
Outcomes
[Figure: ten quantum coins, #0 through #9, each of which is the measured output of a Hadamard circuit, $|0\rangle \to H \to \frac{|0\rangle + |1\rangle}{\sqrt{2}}$.]

One way to define the outcomes of this experiment is by the total number of 0s observed:

Total 0s = 0
Total 0s = 1
Total 0s = 2
...
Total 0s = 10
Again, we could abbreviate this by saying the possible outcomes are 0 through 10.

One problem with this definition of outcome is that some outcomes are more likely than others. It is usually beneficial to define outcomes so that they are all equally, or nearly equally, likely. So we change our outcomes to be the ten-tuples consisting of all the individual results:
Results were (0,0,0,0,0,0,0,0,0,0)
Results were (0,0,0,0,0,0,0,0,0,1)
Results were (0,0,0,0,0,0,0,0,1,0)
..
.
Results were (1,1,1,1,1,1,1,1,1,1)
There are now a lot more outcomes ($2^{10} = 1024$), but they each have the same likelihood of happening. (If you don't believe me, list the eight outcomes for three coins and start flipping.) A shorter way to describe the second breakdown of outcomes is: a possible outcome has the form
$$(x_0,\ x_1,\ x_2,\ x_3,\ x_4,\ x_5,\ x_6,\ x_7,\ x_8,\ x_9)\,,$$
where $x_k$ is the kth measurement result.
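Since each measured quantum coin behaves exactly like a fair classical coin, we can rehearse both outcome definitions with a short classical simulation. The following sketch is my own illustration (none of its names come from the text); it performs one run of the ten qubit coin toss and reports the result both as a ten-tuple and as a total count of 0s:

    #include <array>
    #include <iostream>
    #include <random>
    using namespace std;

    int main()
    {
        // Each measured H|0> behaves like a fair coin: 0 or 1, each with probability 1/2.
        mt19937 gen(random_device{}());
        bernoulli_distribution coin(0.5);

        array<int, 10> outcome;      // one run: the ten-tuple (x0, x1, ..., x9)
        int totalZeros = 0;
        for (int k = 0; k < 10; k++)
        {
            outcome[k] = coin(gen) ? 1 : 0;
            if (outcome[k] == 0)
                totalZeros++;
        }

        // Report the outcome in both forms discussed above.
        cout << "Ten-tuple outcome: (";
        for (int k = 0; k < 10; k++)
            cout << outcome[k] << (k < 9 ? ", " : ")\n");
        cout << "Total 0s = " << totalZeros << endl;
    }

Running it repeatedly would show the ten-tuples appearing uniformly, while the totals cluster around 5, which is exactly the non-uniformity of the first outcome definition.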
14.4.2
While there are many ways to partition all the possible results of your experiment,
there is usually one obvious way that presents itself. In the ten qubit coin toss, we saw
that the second alternative had some advantages. But there are actually requirements
that make certain ways of dividing up the would-be outcomes illegal. To be legal, a
partition of outcomes must satisfy two conditions.
1. Outcomes must be mutually exclusive (a.k.a. disjoint).
2. Outcomes must collectively represent every possible result of the experiment.
[Exercise. Explain why the two ways we defined the outcomes of the ten qubit
coin toss are both legal.]
14.4.3
It's natural to believe that organizing the ten qubit coin toss by looking at each individual qubit outcome (such as "the 4th qubit measured a 0") would give a reasonable partition of the experiment. After all, there are ten individual circuits. Here is the (unsuccessful) attempt at using that as our outcome set.

The outcomes are to be the individual measurement results
$$\begin{aligned} \mathcal{Z}_3 \;&\leftrightarrow\; \text{measurement of qubit \#3 is 0}\,,\\ \mathcal{Z}_8 \;&\leftrightarrow\; \text{measurement of qubit \#8 is 0}\,,\\ \mathcal{O}_5 \;&\leftrightarrow\; \text{measurement of qubit \#5 is 1}\,,\\ \mathcal{O}_0 \;&\leftrightarrow\; \text{measurement of qubit \#0 is 1}\,, \end{aligned}$$
and so on.

(Z would mean that the event detects a Zero, while O, script-O, means the event detects a One. Meanwhile, the subscript indicates which of the ten measurements we are describing.)

There are ten measurements, each one can be either zero or one, and the above organization produces 20 alleged outcomes. However, this does not satisfy the two requirements of outcomes.

[Exercise. Explain why.]
14.4.4
Events
Definitions
Event. An event is a subset of outcomes.
Simple Event. An event that contains exactly one outcome is called a
simple event (a.k.a. elementary event).
Please recognize that outcomes are not events. A set containing an outcome is an
event (a simple one, to be precise).
Compound Event. An event that contains more than one outcome is
called a compound event.
Describing Events
Events can be described either by the actual sets, using set notation, or by an English (or French or Vietnamese) sentence.

Examples of simple event descriptions for our ten qubit coin toss experiment are

{ (0, 0, 1, 1, 1, 0, 0, 0, 1, 1) }
{ (1, 1, 1, 1, 1, 1, 0, 0, 1, 0) }
{ (0, 0, 0, 0, 0, 0, 0, 0, 0, 0) }
"The first five qubits measure 0 and the last five measure 1."
"All ten qubits measure 1."
"The qubit measurements alternate."

Examples of compound event descriptions for our ten qubit coin toss experiment are

{ (0, 0, 1, 1, 1, 0, 0, 0, 1, 1), (1, 1, 1, 1, 1, 1, 0, 0, 1, 0) }
"The first five qubits measure 0."
"The fourth qubit measures 1."
[Exercise. Describe five simple events, and five compound events. Use some
set notation and some natural English descriptions. You can use set notation that
leverages formulas rather than listing all the members, individually.]
14.4.5
Sample Space. The sample space is the set of all possible outcomes. It is referred to using the Greek letter Ω.

In our ten qubit coin toss, Ω is the set of all ordered ten-tuples consisting of 0s and 1s,
$$\Omega \;=\; \bigl\{\, (x_0, x_1, x_2, \dots, x_9) \;\bigm|\; x_k \in \{0, 1\} \,\bigr\}\,.$$

At the other extreme, there is the null event.

Null Event. The event consisting of no outcomes, a.k.a. the empty set, is the null event. It is represented by the symbol ∅.
14.4.6
Set Operations
Unions
One way to express Ω is as a compound event consisting of the set union of simple events. Let's do that for the ten qubit coin flip using big-∪ notation, which is just like summation notation, Σ, only for unions:
$$\Omega \;=\; \bigcup_{x_k \in \{0,1\}} \bigl\{\, (x_0, x_1, x_2, \dots, x_9) \,\bigr\}\,.$$

Example. We would like to represent the event, $\mathcal{F}$, that the first four quantum coin flips in our ten qubit experiment are all the same. One expression would be
$$\mathcal{F} \;=\; \bigcup_{w,\, x_k \in \{0,1\}} \bigl\{\, (w, w, w, w, x_0, x_1, \dots, x_5) \,\bigr\}\,.$$

Example. We would like to represent the event, $\mathcal{F}'$, that the first four quantum coin flips in our ten qubit experiment are all the same, but the first five are not all the same. One expression would be
$$\mathcal{F}' \;=\; \bigcup_{w,\, x_k \in \{0,1\}} \bigl\{\, (w, w, w, w, w \oplus 1, x_0, x_1, \dots, x_4) \,\bigr\} \;=\; \mathcal{F} - \mathcal{E}\,,$$
where $\mathcal{E}$ is the event that the first five flips are all the same, and the minus sign is the set difference defined next.
Differences
What if we wanted to discuss the ten qubit coin flip event, $\mathcal{D}$, which had odd sums but whose first four flips are not all equal? We could leverage our definitions of $\mathcal{O}$ and $\mathcal{F}$ and use difference notation, − or \, like so:
$$\mathcal{D} \;=\; \mathcal{O} - \mathcal{F} \qquad\text{or}\qquad \mathcal{D} \;=\; \mathcal{O} \backslash \mathcal{F}\,.$$
That notation instructs the reader to start with $\mathcal{O}$, then remove all the outcomes that satisfy (or are in) $\mathcal{F}$.
Complements and the ¬ Operator

When we start with the entire sample space, Ω, and subtract an event, $\mathcal{S}$,
$$\Omega - \mathcal{S}\,,$$
we use a special term and notation: the complement of $\mathcal{S}$, written in several ways, depending on the author or context,
$$\mathcal{S}'\,, \qquad \overline{\mathcal{S}}\,, \qquad \mathcal{S}^c\,, \qquad\text{or}\qquad \neg\,\mathcal{S}\,.$$
All are usually read "not S," and the last reprises the logical negation operator, ¬.
14.5
This is a good time to introduce other ways to view the outcomes of this ten qubit
experiment that will help us when we get to some quantum algorithms later in the
course.
A 10-Dimensional Space With Coordinates 0 and 1
For each outcome,
$$(x_0, x_1, x_2, \dots, x_9)\,,$$
we look at it as a ten-component vector whose coordinates are 0 or 1. The vectors can be written in either row or column form,
$$(x_0, x_1, x_2, \dots, x_9) \;=\; \begin{pmatrix} x_0\\ x_1\\ x_2\\ \vdots\\ x_9 \end{pmatrix}.$$
An outcome is already in a vector-like format, so it's not hard to see the correspondence. For example, the outcome $(0, 1, 0, \dots, 1)$ corresponds to the vector
$$\begin{pmatrix} 0\\ 1\\ 0\\ \vdots\\ 1 \end{pmatrix}.$$
This seems pretty natural, but as usual, I'll complicate matters by introducing the addition of two such vectors. While adding vectors is nothing new, the concept doesn't seem to have much meaning when you think of them as outcomes. (What does it mean to add two outcomes of an experiment?) Let's not dwell on that for the moment but proceed with the definition. We add vectors in this space by taking their component-wise mod-2 sum or, equivalently, their component-wise XOR,
$$\begin{pmatrix} 0\\ 1\\ 0\\ \vdots\\ 1 \end{pmatrix} \;+\; \begin{pmatrix} 1\\ 1\\ 1\\ \vdots\\ 1 \end{pmatrix} \;=\; \begin{pmatrix} 0 \oplus 1\\ 1 \oplus 1\\ 0 \oplus 1\\ \vdots\\ 1 \oplus 1 \end{pmatrix} \;=\; \begin{pmatrix} 1\\ 0\\ 1\\ \vdots\\ 0 \end{pmatrix}.$$
To make this a vector space, I'd have to tell you its scalars (just the two numbers 0 and 1), the operations on the scalars (simple multiplication, and mod-2 addition, ⊕), etc. Once the details were filled in we would have the ten dimensional vectors mod-2, or $(\mathbb{Z}_2)^{10}$. The fancy name expresses the fact that the vectors have 10 components (the superscript 10 in $(\mathbb{Z}_2)^{10}$), in which each component comes from the set {0, 1} (the subscript 2 in $(\mathbb{Z}_2)^{10}$). You might recall from our formal treatment of classical bits that we had a two dimensional mod-2 vector space, B = B², which in this new notation is $(\mathbb{Z}_2)^2$.

We can create mod-2 vectors of any dimension, of course, like the five-dimensional $(\mathbb{Z}_2)^5$ or the more general n-dimensional $(\mathbb{Z}_2)^n$. The number of components (10, 5, or n) tells you the dimension of the vector space.
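In code, component-wise mod-2 addition is simply the XOR of bit arrays, which is one reason this vector space is so friendly to computer scientists. Here is a minimal sketch of my own (using std::bitset; nothing here comes from the text) showing addition in $(\mathbb{Z}_2)^{10}$:

    #include <bitset>
    #include <iostream>
    #include <string>
    using namespace std;

    int main()
    {
        // Two vectors in (Z_2)^10, written as bit strings (leftmost char = highest index).
        bitset<10> x(string("0100000001"));
        bitset<10> y(string("1111111111"));

        // Component-wise mod-2 sum = component-wise XOR.
        bitset<10> sum = x ^ y;

        cout << x << " + " << y << " = " << sum << "  (mod 2)" << endl;
    }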
The Integers from 0 to $2^{10} - 1$

A second view of a ten qubit coin flip is that of a ten bit integer from 0 to 1023, constructed by concatenating all the results,
$$x_0\, x_1\, x_2 \cdots x_9\,.$$

    binary                integer
    00...000              0
    00...001              1
    00...010              2
    00...011              3
    00...100              4
      ...                 ...
    x0 x1 x2 ... x9       x
14.6
14.6.1
Another way to say this is that no vector in the set can be expressed as a linear combination of the others.
[Exercise. Prove that the last statement is equivalent to the definition.]
[Exercise. Show that the zero vector can never be a member of a linearly independent set.]
[Exercise. Show that a singleton (a set consisting of any single non-zero vector)
is a linearly independent set.]
Definition of Span of a Set of Vectors (Review)
The span of a set of vectors is the (usually larger) set consisting of all vectors that
can be constructed by taking linear combinations of the original set.
The Span of a Set of Vectors. The span of a set of m vectors $S = \{v_k\}_{k=0}^{m-1}$ is
$$\bigl\{\, c_0 v_0 + c_1 v_1 + \cdots + c_{m-1} v_{m-1} \;\bigm|\; c_k \text{ are scalars} \,\bigr\}\,.$$
The set S does not have to be a linearly independent set. If it is not, then it means
we can omit one or more of its vectors without reducing its span.
[Exercise. Prove it.]
When we say that a vector, w, is in the span of S, we mean that w can be written as a linear combination of the $v_k$'s in S. Again, this does not require that the original $\{v_k\}$ be linearly independent.
[Exercise. Make this last definition explicit using formulas.]
When a vector, w, is not in the span of S, adding w to S will increase Ss span.
14.6.2
Because the only scalars available to weight each vector in a mod-2 linear combination are 0 and 1, the span of any set of mod-2 vectors reduces to simple sums.

The Span of a Set of Mod-2 Vectors. The span of a set of m mod-2 vectors $S = \{v_k\}_{k=0}^{m-1}$ is
$$\bigl\{\, v_{l_0} + v_{l_1} + \cdots + v_{l_s} \;\bigm|\; v_{l_k} \in S \,\bigr\}\,.$$
Abstract Example

Say we have four mod-2 vectors (of any dimension),
$$S \;=\; \{\, v_0,\ v_1,\ v_2,\ v_3 \,\}\,.$$
Because the scalars are only 0 or 1, the vectors in the span are all possible sums of these vectors. If one of the vectors doesn't appear in a sum, that's just a way to say its corresponding weighting scalar is 0. If it does appear, then its weighting scalar is 1. Here are some vectors in the span of S:
$$0\,, \qquad v_2\,, \qquad v_0 + v_2\,, \qquad v_0 + v_1\,.$$
[Exercise. For a mod-2 vector space, how many vectors are in the span of the empty set, ∅? How many are in the span of {0}? How many are in the span of a set consisting of a single (specific) non-zero vector? How many are in the span of a set of two (specific) linearly independent vectors? Bonus: How many are in the span of a set of m (specific) linearly independent vectors? Hint: If you're stuck, the next examples will help sort things out.]
Concrete Example

Consider the two five-dimensional vectors in $(\mathbb{Z}_2)^5$,
$$(1, 0, 0, 1, 1) \qquad\text{and}\qquad (1, 1, 1, 1, 0)\,.$$
Their span is all possible sums (and let's not forget to include the zero vector):
$$\begin{aligned} 0 \;&=\; (0, 0, 0, 0, 0)\,,\\ &\phantom{=}\;\; (1, 0, 0, 1, 1)\,,\\ &\phantom{=}\;\; (1, 1, 1, 1, 0)\,, \text{ and}\\ (1, 0, 0, 1, 1) + (1, 1, 1, 1, 0) \;&=\; (0, 1, 1, 0, 1)\,. \end{aligned}$$
Thus, a vector w is linearly independent of the original two exactly when it is not in that set,
$$\mathbf{w} \;\notin\; \bigl\{\, 0,\ (1, 0, 0, 1, 1),\ (1, 1, 1, 1, 0),\ (0, 1, 1, 0, 1) \,\bigr\}\,,$$
or, taking the complement, when
$$\mathbf{w} \;\in\; \overline{\bigl\{\, 0,\ (1, 0, 0, 1, 1),\ (1, 1, 1, 1, 0),\ (0, 1, 1, 0, 1) \,\bigr\}}\,.$$
(I used the over-bar, $\overline{E}$, rather than the $E^c$ notation to denote complement, since the small c tends to get lost in this situation.)

How many vectors is this? Count all the vectors in the space ($2^5 = 32$) and subtract the four that we know to be linearly dependent. That makes $32 - 4 = 28$ such w independent of the original two.
Discussion. This analysis didn't depend on which two vectors were in the original linearly independent set. If we had started with any two distinct non-zero vectors, they would be independent and there would be 28 ways to extend them to a set of three independent vectors. Furthermore, the only role played by the dimension 5 was that we subtracted the 4 from $2^5 = 32$ to get 28. If we had been working in the 7-dimensional $(\mathbb{Z}_2)^7$ and asked the same question starting with two specific vectors, we would have arrived at the conclusion that there were $2^7 - 4 = 128 - 4 = 124$ vectors independent of the first two. If we started with two linearly independent 10-dimensional vectors, we would have gotten $2^{10} - 4 = 1024 - 4 = 1020$ choices. And if we had used n-dimensional vectors, $2^n - 4$.
14.6.3
We're trying to learn how to count, a skill that every combinatorial mathematician needs to master, and one that even we computer scientists would do well to develop.
Finding a Third Vector that is Not in the Span of Two Random Vectors
Let's describe an event that is more general than the one in the last example.

$\mathcal{E}$ ≡ the event that, after selecting two 5-dimensional mod-2 vectors x and y at random, a third selection w will not be in the span of the first two.

Notation. Since we are selecting vectors in a specific sequence, we'll use the notation
$$\langle\, x,\ y \,\rangle$$
to represent the event where x is the first pick and y is the second pick. (To count accurately, we must consider order, which is why we don't use braces: { or }.) Similarly, the selection of the third w after the first two could be represented by
$$\langle\, x,\ y,\ w \,\rangle\,.$$
How many outcomes are in event $\mathcal{E}$?

This time, the first two vectors are not a fixed pair, nor are they required to be linearly independent. We simply want to know how many ways the third vector (or qubit coin flip, if you are remembering how these vectors arose) can avoid being in the span of the first two. No other restrictions.
We break E into two major cases:
1. x and y form a linearly independent set (mostly answered), and
2. x and y are not linearly independent. This case contains three sub-cases:
(i) Both x and y are 0,
(ii) exactly one of x and y is 0, and
(iii) x = y, but neither is 0.
Let's do the larger case 2 first, then come back and finish up what we started earlier to handle case 1.

Harder Case 2. For x and y not linearly independent, we count each sub-case as follows.

(i) There is only one configuration of x and y in this sub-case, namely $\langle 0, 0 \rangle$. In such a situation, the only thing we require of w is that it not be 0. There are $32 - 1 = 31$ such w. Therefore, there are 31 simple events, $\langle 0, 0, w \rangle$, in this case.

(ii) In this sub-case there are 31 configurations of the form $\langle 0, y \rangle_{y \ne 0}$ and 31 of the form $\langle x, 0 \rangle_{x \ne 0}$. That's a total of 62 ways that random choices of x and y can lead us to this sub-case. Meanwhile, for each such configuration there are $32 - 2 = 30$ ways for w to be different from x and y. Putting it together, there are $62 \times 30 = 1860$ simple events in this sub-case.

(iii) There are 31 configurations of $x = y \ne 0$ in this sub-case, namely $\langle x, x \rangle_{x \ne 0}$. Meanwhile, for each such configuration, any w that is neither 0 nor x will work. There are $32 - 2 = 30$ such w. Putting it together, there are $31 \times 30 = 930$ simple events in this sub-case.

Summarizing, the number of events with x and y not linearly independent and w not in their span is $31 + 1860 + 930 = 2821$.
Easier Case 1. We get into this situation with $\langle x, y \rangle$, where $x \ne y$ and neither is 0. That means the first choice can't be 0, so there are 31 possibilities for x, and the second one can't be 0 or x, so there are 30 choices left for y. That's $31 \times 30$ ways to get into this major case. Meanwhile, there are 28 w's not in the span for each of those individual outcomes (the result of the last section), providing the linearly-independent case with $31 \times 30 \times 28 = 26040$ simple events.

Combining Both Cases. We add the two major cases to get $2821 + 26040 = 28861$ outcomes in event $\mathcal{E}$. If you are thinking of this as three five qubit coin flip outcomes (15 individual flips, total), there are 28861 ways in which the third group of five will be linearly independent of the first two.
Is this likely to happen? How many possible flips are there in the sample space Ω? The answer is that there are $2^{15}$ (or, if you prefer, $32^3$) $= 32768$. (The latter expression comes from 3 five qubit events, each event coming from a set of $2^5 = 32$ outcomes.) That means 28861 of the 32768 possible outcomes will result in the third outcome not being in the span of the first two. A simple division tells us that this will happen about 88% of the time.
A third vector selected at random from (Z2 )5 is far more likely to be independent of any two previously selected vectors than it is to be in their
span.
This was a seat-of-the-pants calculation. We'd better get some methodology so we don't have to work so hard every time we need to count.
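Since $(\mathbb{Z}_2)^5$ has only 32 elements, seat-of-the-pants counts like this are also easy to verify by brute force. The throwaway program below is my own check, not part of the text; it encodes each vector as an int 0..31 (so vector addition is the bitwise XOR), uses the fact that the span of {x, y} is always the set {0, x, y, x ⊕ y}, and enumerates all $32^3$ ordered triples:

    #include <iostream>
    using namespace std;

    int main()
    {
        // Vectors in (Z_2)^5 are the ints 0..31; mod-2 vector addition is ^ (XOR).
        int favorable = 0, total = 0;
        for (int x = 0; x < 32; x++)
            for (int y = 0; y < 32; y++)
                for (int w = 0; w < 32; w++)
                {
                    total++;
                    // span{x, y} = {0, x, y, x^y}; count w avoiding it
                    if (w != 0 && w != x && w != y && w != (x ^ y))
                        favorable++;
                }
        cout << favorable << " of " << total << " triples: "
             << 100.0 * favorable / total << "%" << endl;
        // Expected: 28861 of 32768 triples, about 88%.
    }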
14.7
14.7.1
The Axioms

A probability, $P(\mathcal{E})$, is assigned to each event in the collection of events, $\{\mathcal{E}\}$, subject to three axioms:

1. $P(\mathcal{E}) \ge 0$ for every event $\mathcal{E}$.

2. $P(\Omega) = 1$.

3. For mutually exclusive (disjoint) events $\{\mathcal{E}_k\}$,
$$P\!\left(\bigcup_{k=0}^{n-1} \mathcal{E}_k\right) \;=\; \sum_{k=0}^{n-1} P(\mathcal{E}_k)\,.$$

[From these, one derives easy consequences, e.g., $P(\varnothing) = 0$, and if $\mathcal{E} \subseteq \mathcal{F}$, then $P(\mathcal{E}) \le P(\mathcal{F})$.]
14.7.2
Definitions
It may not always be obvious how to assign probabilities to events even when they
are simple events. However, when the sample space is finite and all simple events are
equiprobable, we can always do it. We just count and divide.
Caution. There is no way to prove that simple events are equiprobable. This is something we deduce by experiment. For example, the probability of a coin flip coming up tails (or the z-spin measurement of an electron in state $|0\rangle_x$ being +1) is said to be .5, but we don't know that it is for sure. We conclude it to be so by doing lots of experiments.

Size of an Event. The size of an event, written $|\mathcal{E}|$, is the number of outcomes that constitute it,
$$|\mathcal{E}| \;\equiv\; \#\{\text{outcomes} \in \mathcal{E}\}\,.$$

Probability of an Event. When the sample space is finite and its simple events equiprobable,
$$P(\mathcal{E}) \;=\; \frac{|\mathcal{E}|}{|\Omega|}\,.$$
This is not a definition, but a consequence of the axioms plus the assumption that
simple events are finite and equiprobable.
Example 1
In the ten qubit coin flip, consider the event, $\mathcal{F}$, in which the first four results are all equal. We compute the probability by counting how many simple events (or, equivalently, how many outcomes) meet that criterion. What we know about these outcomes is that the first four are the same, so they are either all 0 or all 1.

All 0. These outcomes have the form $(0, 0, 0, 0, x_0, x_1, \dots, x_5)$, and the number of them is the same as the number of integers of the form $x_0 x_1 x_2 x_3 x_4 x_5$. That's 000000 through 111111, or 0 through 63: 64 outcomes.

All 1. These outcomes have the form $(1, 1, 1, 1, x_0, x_1, \dots, x_5)$: another 64 outcomes.

Together, that's 128 outcomes, so
$$P(\mathcal{F}) \;=\; \frac{128}{1024} \;=\; \frac{1}{8} \;=\; .125\,.$$
Example 2

Next, consider the event $\widehat{\mathcal{F}}$ in which the first four individual flips come up the same (as in Example 1), but with the additional constraint that the fifth qubit be different from those four. Now the outcomes fall into two categories:

Four 0s followed by a 1: $(0, 0, 0, 0, 1, x_0, x_1, \dots, x_4)$, and

Four 1s followed by a 0: $(1, 1, 1, 1, 0, x_0, x_1, \dots, x_4)$.

Again, we look at the number of possibilities for the free-range bits $x_0, \dots, x_4$, which is 32 for each of the two categories, making the number of outcomes in this event $32 + 32 = 64$, so the probability becomes
$$P(\widehat{\mathcal{F}}\,) \;=\; \frac{64}{1024} \;=\; \frac{1}{16} \;=\; .0625\,.$$
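Both examples are just count-and-divide, so they can also be checked mechanically. This little sketch is my own (not the author's); it encodes each ten-tuple in the bits of an int and counts the outcomes in $\mathcal{F}$ and $\widehat{\mathcal{F}}$:

    #include <iostream>
    using namespace std;

    int main()
    {
        int countF = 0, countFhat = 0;
        for (int t = 0; t < 1024; t++)           // bit k of t plays the role of x_k
        {
            int x0 = (t >> 0) & 1, x1 = (t >> 1) & 1,
                x2 = (t >> 2) & 1, x3 = (t >> 3) & 1,
                x4 = (t >> 4) & 1;
            bool firstFourEqual = (x0 == x1 && x1 == x2 && x2 == x3);
            if (firstFourEqual)                   // event F
                countF++;
            if (firstFourEqual && x4 != x0)       // event F-hat
                countFhat++;
        }
        cout << "P(F)     = " << countF    / 1024.0 << endl;   // 0.125
        cout << "P(F-hat) = " << countFhat / 1024.0 << endl;   // 0.0625
    }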
Example 3
We do a five qubit coin flip twice. That is, we measure five quantum states once, producing a mod-2 vector, $x = (x_0, x_1, \dots, x_4)$, then repeat, getting a second vector, $y = (y_0, y_1, \dots, y_4)$. It's like doing one ten qubit coin flip, but we are organizing things naturally into two equal parts. Instead of the outcomes being single vectors with ten mod-2 components, outcomes are pairs of vectors, each member of the pair having five mod-2 components,
$$\Bigl\{\, \bigl( (x_0, x_1, \dots, x_4),\ (y_0, y_1, \dots, y_4) \bigr) \;\Bigm|\; x_k, y_k \in \{0, 1\} \,\Bigr\} \;=\; \bigl\{\, \langle x, y \rangle \;\bigm|\; x, y \in (\mathbb{Z}_2)^5 \,\bigr\}\,.$$

Consider the event, $\mathcal{I}$, in which the two vectors form a linearly independent set. We've already discussed the exact conditions for a set of two mod-2 vectors to be linearly independent:

neither vector is zero (0), and

the two vectors are distinct, i.e., $x \ne y$.

We compute $P(\mathcal{I})$ by counting events. For an outcome $\langle x, y \rangle \in \mathcal{I}$, x can be any non-0 five-tuple, $(x_0, x_1, \dots, x_4)$, and y must be different from both 0 and x. That gives $31 \times 30 = 930$ outcomes, so
$$P(\mathcal{I}) \;=\; \frac{|\mathcal{I}|}{|\Omega|} \;=\; \frac{930}{1024} \;\approx\; .908\,.$$
As you can see, it is very likely that two five qubit coin flips will produce a linearly
independent set; it happens > 90% of the time.
[Exercise. Revise the above example so that we take three, rather than two, five
qubit coin flips. Now the sample space is all triples of these five-tuples, (x, y, w).
What is the probability of the event, T , that all three flip-tuples are linearly
independent? Hint: We already covered the more lenient case in which x and y were
allowed to be any two vectors and w was not in their span. Repeat that analysis but
exclude the cases where x and y formed a linearly dependent set. ]
[Exercise. Make up three interesting event descriptions in this experiment and
compute the probability of each.]
14.8
14.8.1
Unions
The third axiom tells us about the probability of unions of disjoint events, $\{\mathcal{E}_k\}$, namely,
$$P\!\left(\bigcup_{k=0}^{n-1} \mathcal{E}_k\right) \;=\; \sum_{k=0}^{n-1} P(\mathcal{E}_k)\,.$$
What happens when the events are not mutually exclusive? A simple diagram in the case of two events tells the story. If we were to add the probabilities of two events, we would count their intersection twice, so we must subtract it once:
$$P(\mathcal{E} \cup \mathcal{F}) \;=\; P(\mathcal{E}) + P(\mathcal{F}) - P(\mathcal{E} \cap \mathcal{F})\,.$$
When there are more than two sets, the intersections involve more combinations and get harder to write out. But the concept is the same, and all we need to know is that
$$P\!\left(\bigcup_{k=0}^{n-1} \mathcal{E}_k\right) \;=\; \sum_{k=0}^{n-1} P(\mathcal{E}_k) \;-\; P(\text{various intersections})\,.$$
The reason this is always enough information is that we will be using the formula to bound the probability from above, so the equation, as vague as it is, clearly implies
$$P\!\left(\bigcup_{k=0}^{n-1} \mathcal{E}_k\right) \;\le\; \sum_{k=0}^{n-1} P(\mathcal{E}_k)\,.$$

14.8.2
Conditional Probability
Very often we will want to know the probability of some event, $\mathcal{E}$, under the assumption that another event, $\mathcal{F}$, is true. For example, we might want to know the probability that three quantum coin flips are linearly independent under the assumption that the first two are (known to be) linearly independent. The notation for the event "E given F" is
$$\mathcal{E} \,\big|\, \mathcal{F}\,,$$
and the notation for the event's probability is
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr)\,.$$
This is something we can count using common sense. Start with our formula for an event in a finite sample space of equiprobable simple events (always the setting for us),
$$P(\mathcal{E}) \;=\; \frac{|\mathcal{E}|}{|\Omega|}\,.$$
Next, think about what it means to say "under the assumption that another event, F, is true." It means that our sample space, Ω, suddenly shrinks to $\mathcal{F}$, and in that smaller sample space we are interested in the probability of the event $\mathcal{E} \cap \mathcal{F}$, so
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr) \;=\; \frac{|\mathcal{E} \cap \mathcal{F}|}{|\mathcal{F}|}\,, \qquad \text{whenever } \mathcal{F} \ne \varnothing\,.$$
Bayes' Law

Study this last expression until it makes sense. Once you have it, divide the top and bottom by the size of our original sample space, $|\Omega|$, to produce the equivalent identity,
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr) \;=\; \frac{P(\mathcal{E} \cap \mathcal{F})}{P(\mathcal{F})}\,, \qquad \text{whenever } P(\mathcal{F}) > 0\,.$$
This is often taken to be the definition of conditional probability, but we can view it as a natural consequence of the meaning of the phrase "E given F." It is also a simplified form of Bayes' law, and I will often refer to it as Bayes' law (or rule or formula), since this simple version is all we will ever need.
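As a quick sanity check of the formula, here is a tiny made-up example (mine, not from the text): in the ten qubit coin flip, let $\mathcal{F}$ be "the first flip is 0" and $\mathcal{E}$ be "the first two flips are 0." Then $\mathcal{E} \subseteq \mathcal{F}$, so $\mathcal{E} \cap \mathcal{F} = \mathcal{E}$ and
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr) \;=\; \frac{P(\mathcal{E} \cap \mathcal{F})}{P(\mathcal{F})} \;=\; \frac{1/4}{1/2} \;=\; \frac{1}{2}\,,$$
which matches the direct count: the shrunken sample space $\mathcal{F}$ has 512 outcomes, of which exactly 256 also have the second flip equal to 0.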
14.8.3
Statistical Independence

Intuitively, two events $\mathcal{E}$ and $\mathcal{F}$ are statistically independent when knowledge that $\mathcal{F}$ occurred tells us nothing new about $\mathcal{E}$, i.e.,
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr) \;=\; P(\mathcal{E})\,.$$
Combining this with
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr) \;=\; \frac{P(\mathcal{E} \cap \mathcal{F})}{P(\mathcal{F})}\,, \qquad \text{whenever } P(\mathcal{F}) > 0\,,$$
we arrive at
$$P(\mathcal{E} \cap \mathcal{F}) \;=\; P(\mathcal{E})\, P(\mathcal{F})\,,$$
which, by the way, is true even for the degenerate case, $\mathcal{F} = \varnothing$. This is the official definition of statistically independent events, and working backwards you would derive the intuitive meaning that we started with. In words, two events are independent $\iff$ the probability of the intersection is equal to the product of the probabilities.
Multiple Independent Events

The idea carries over to any number of events, although the notation becomes thorny. It's easier to first say it in words, then show the formula. In words, the probability of the intersection of any sub-collection of the events is the product of their individual probabilities:
$$P\!\left(\bigcap_i \mathcal{E}_{k_i}\right) \;=\; \prod_i P(\mathcal{E}_{k_i})\,.$$

14.8.4
I'll list a few useful consequences of the definitions and the formulas derived above. Unless otherwise noted, they are all easy to prove, and you can select any of these as an exercise.

Events
$$(\mathcal{E} \cup \mathcal{F})^c \;=\; \mathcal{E}^c \cap \mathcal{F}^c$$
$$\mathcal{E} \cap (\mathcal{F} \cup \mathcal{G}) \;=\; (\mathcal{E} \cap \mathcal{F}) \cup (\mathcal{E} \cap \mathcal{G})$$

Probabilities
$$P(\mathcal{E}) \;=\; P(\mathcal{E} \cap \mathcal{F}) \;+\; P(\mathcal{E} \cap \mathcal{F}^c)$$
$$P(\mathcal{E} \cap \mathcal{F}) \;=\; P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr)\, P(\mathcal{F})$$
Example 1

In our ten qubit coin flip, what is the probability that the 3rd, 6th, and 9th flips are identical?

We'll call the event $\mathcal{E}$. It is the union of two disjoint events, the first requiring that all three flips be 0, $\mathcal{Z}$, and the second requiring that all three be 1, $\mathcal{O}$:
$$\mathcal{E} \;=\; \mathcal{Z} \cup \mathcal{O}\,, \qquad \text{with } \mathcal{Z} \cap \mathcal{O} = \varnothing\,,$$
so
$$P(\mathcal{E}) \;=\; P(\mathcal{Z}) + P(\mathcal{O})\,.$$
The individual flips are statistically independent, so, for example,
$$P(\mathcal{O}) \;=\; P(\mathcal{O}_3 \cap \mathcal{O}_6 \cap \mathcal{O}_9) \;=\; P(\mathcal{O}_3)\, P(\mathcal{O}_6)\, P(\mathcal{O}_9) \;=\; .5 \times .5 \times .5 \;=\; .125\,,$$
and likewise for $P(\mathcal{Z})$. Therefore,
$$P(\mathcal{E}) \;=\; P(\mathcal{Z}) + P(\mathcal{O}) \;=\; .125 + .125 \;=\; .25\,.$$
Example 2

We do the five qubit coin flip five times. We examine the probability of the event $\mathcal{I}_5$, defined as "the five five-tuples are linearly independent." Our idea is to write $P(\mathcal{I}_5)$ as a product of conditional probabilities. Let
$$\mathcal{I}_j \;\equiv\; \text{flips } \#0,\ \#1,\ \dots,\ \#(j-1) \text{ are linearly independent.}$$
Our goal is to compute the probability of $\mathcal{I}_5$.

(It will now be convenient to use the over-bar notation $\overline{\mathcal{E}}$ to denote complement.) Combining the basic identity,
$$P(\mathcal{I}_5) \;=\; P\bigl(\mathcal{I}_5 \cap \mathcal{I}_4\bigr) \;+\; P\bigl(\mathcal{I}_5 \cap \overline{\mathcal{I}_4}\,\bigr)$$
(exercise: why?), with the observation that
$$P\bigl(\mathcal{I}_5 \cap \overline{\mathcal{I}_4}\,\bigr) \;=\; 0\,,$$
we can peel off one conditional factor at a time to get
$$P(\mathcal{I}_5) \;=\; \prod_{j=1}^{5} P\bigl(\mathcal{I}_j \,\big|\, \mathcal{I}_{j-1}\bigr)\,,$$
where the j = 1 term contains the curious event $\mathcal{I}_0$. This corresponds to no coin flip, i.e., no vector being selected or tested. It's different from the zero vector $(0, 0, 0, 0, 0)^t$, which is an actual flip possibility; $\mathcal{I}_0$ is no flip at all, i.e., the empty set, ∅. But we know that ∅ is always linearly independent, because it vacuously satisfies the condition that one cannot produce the zero vector as a linear combination of vectors from the set: there are no vectors to combine. So $P(\mathcal{I}_0) = 1$. Thus, the last factor in the product is just
$$P\bigl(\mathcal{I}_1 \,\big|\, \mathcal{I}_0\bigr) \;=\; P(\mathcal{I}_1)\,,$$
but it's cleaner to include the conditional probability (the LHS of the above) in all factors when using product notation.
[Note. We'll be computing the individual factors in Simon's algorithm. This is as far as we need to take it today.]
14.9
Some texts and papers use the alternate notation ∧ instead of ∩ for intersections, and ∨ instead of ∪ for unions. When ∧ and ∨ are used, ¬ is typically the negation (complement) operator of choice, so the three go together. This is seen more commonly in electrical engineering and computer science than in math and physics. I introduced the concepts in this lesson using the more traditional ∩, ∪, and ᶜ notation, but we'll use both in upcoming lectures. Normally, I will reserve ∩, ∪, and ᶜ for non-event sets (like sets of integers) and ∧, ∨, and ¬ for events.

We'll take this opportunity to repeat some of the more important results needed in future lectures using the new notation.
14.9.1
Disjoint Events

For mutually exclusive events, the third axiom reads
$$P\!\left(\bigvee_{k=0}^{n-1} \mathcal{E}_k\right) \;=\; \sum_{k=0}^{n-1} P(\mathcal{E}_k)\,.$$

14.9.2

Any event, $\mathcal{F}$, and its complement, $\overline{\mathcal{F}}$, partition the space, leading to the identity
$$P(\mathcal{E}) \;=\; P\bigl(\mathcal{E} \wedge \mathcal{F}\bigr) \;+\; P\bigl(\mathcal{E} \wedge \overline{\mathcal{F}}\,\bigr)\,.$$
14.9.3
Bayes' Law

Bayes' law (or rule or formula) in its simple, special case, can be expressed as
$$P\bigl(\mathcal{E} \,\big|\, \mathcal{F}\bigr) \;=\; \frac{P(\mathcal{E} \wedge \mathcal{F})}{P(\mathcal{F})}\,, \qquad \text{whenever } P(\mathcal{F}) > 0\,.$$
14.9.4
Statistical Independence

In the new notation, events $\mathcal{E}$ and $\mathcal{F}$ are statistically independent when
$$P(\mathcal{E} \wedge \mathcal{F}) \;=\; P(\mathcal{E})\, P(\mathcal{F})\,.$$
14.10
Application to Deutsch-Jozsa
In a recent lecture, I gave you an informal argument that the Deutsch-Jozsa problem had a constant time solution using a classical algorithm if we accept a small but constant error. We now have the machinery to give a rigorous derivation and also show two different sampling strategies. It is a bit heavy handed to apply such rigorous machinery to what seemed to be a relatively obvious result, but the practice that we get provides a good warm-up for times when the answers are not so obvious. Such times await in our upcoming lessons on Simon's and Shor's algorithms.

First, a summary of the classical Deutsch-Jozsa algorithm:

The M-and-Guess Algorithm. Let M be some positive integer. Given a Boolean function, f(x), of $x \in [\,0,\ 2^n - 1\,]$, which is either balanced or constant, we evaluate f(x) M times, each time at a random $x \in [\,0,\ 2^n - 1\,]$. If we get two different outputs, $f(x') \ne f(x'')$, we declare f balanced. If we get the same output for all M trials, we declare f constant.

The only way we fail is if the function is balanced (event $\mathcal{B}$) yet all M trials produce the same output (event $\mathcal{S}$). This is the event
$$\mathcal{S} \cap \mathcal{B}\,,$$
whose probability we must compute.

Assumption. Since we only care about the length of the algorithm as the number of possible encoded inputs, $2^n$, gets very large, there is no loss of generality if we assume $M < 2^n$. To be sure, we can adjust the algorithm so that in those cases in which $M \ge 2^n$ we sample f non-randomly at all $2^n$ input values. In those cases we will know f completely after the M trials and will have zero error. It is only when $M < 2^n$ that we have to do the analysis.
14.10.1
The way the algorithm is stated, we are admitting the possibility of randomly selecting the same x more than once during our M trials. This has two consequences in the balanced case, the only case that could lead to error:

1. If we have a balanced function, it results in a very simple and consistent probability for obtaining a 1 (or 0) in any single trial, namely $P = \frac{1}{2}$.

2. It is a worst case scenario. It produces a larger error than we would get if we were to recast the algorithm to be smarter. So, if we can guarantee a small constant error for all n in this case, it will be even better if we improve the algorithm slightly.

The algorithm as stated uses a sampling with replacement technique and is the version that I summarized in the original presentation. We'll dispatch that rigorously first and move on to a smarter algorithm in the section that follows.
14.10.2
Assume f is balanced, and let $\mathcal{O}_k$ be the event that the kth trial produces a 1. The event $\mathcal{O}$, that all M trials produce 1s, is then the intersection
$$\mathcal{O} \;=\; \bigcap_{k=0}^{M-1} \mathcal{O}_k\,.$$
With $\mathcal{Z}$ (all M trials produce 0s) defined analogously, the failure event $\mathcal{S}$, in which all trials produce the same output, is
$$\mathcal{S} \;=\; \mathcal{O} \cup \mathcal{Z} \qquad \text{(disjoint)}\,,$$
so
$$\begin{aligned} P(\mathcal{S}) \;&=\; P(\mathcal{O} \cup \mathcal{Z}) \;=\; P(\mathcal{O}) + P(\mathcal{Z}) \;=\; 2\, P(\mathcal{O})\\ &=\; 2\, P\!\left(\bigcap_{k=0}^{M-1} \mathcal{O}_k\right) \;=\; 2 \prod_{k=0}^{M-1} P(\mathcal{O}_k)\,. \end{aligned}$$
The first line uses the mutual exclusivity of $\mathcal{O}$ and $\mathcal{Z}$, while the last line relies on the statistical independence of the $\{\mathcal{O}_k\}$. Plugging in $\frac{1}{2}$ for the terms in the product, we find that
$$P(\mathcal{S}) \;=\; 2 \prod_{k=0}^{M-1} \frac{1}{2} \;=\; 2 \left(\frac{1}{2}\right)^{M} \;=\; \left(\frac{1}{2}\right)^{M-1}\,.$$
14.10.3

We might be tempted to say that $\left(\frac{1}{2}\right)^{M-1}$ is the probability of failure in our M-and-guess algorithm, but that would be rash. We computed it under the assumption of a balanced f. To see the error clearly, imagine that our function provider gives us a balanced function only 1% of the time. For the other 99% of the functions, when we guess "constant" we will be doing so on a constant function and will be correct; our chances of being wrong in this case are diminished to
$$.01 \left(\frac{1}{2}\right)^{M-1}.$$
To see how this comes out of our theory, we give the correct expression for a wrong guess. Let $\mathcal{W}$ be the event consisting of a wrong guess. It happens when both f is balanced ($\mathcal{B}$) AND we get M identical results ($\mathcal{S}$),
$$\mathcal{W} \;=\; \mathcal{S} \cap \mathcal{B}\,.$$
14.10.4
Now let's make the obvious improvement of avoiding duplicate evaluations of f for the randomly selected x. Once we generate a random x, we somehow take it out of contention for the next random selection. (I won't go into the details of how we guarantee this, but you can surmise that it will only add a constant time penalty, something independent of n, to the algorithm.)

[Exercise. Write an algorithm that produces M distinct x values at random. You can assume a random number generator that returns many more than M distinct values (with possibly some repeats), since a typical random number generator will return at least 32,768 different ints, while M is on the order of 20 or 50. Hint: It's okay if your algorithm depends on M, since M is not a function of n. It could even be on the order of $M^2$ or $M^3$, so long as it does not rely on n.]
Intuitively, this should reduce the error because every time we remove another x whose f(x) = 1 from our pot, there are fewer x's capable of leading to that 1, while the full complement of $\frac{2^n}{2}$ x's that give f(x) = 0 are still present. Thus, the chances of getting f(x) = 1 on future draws should diminish with each trial from the original value of $\frac{1}{2}$. We prove this by doing a careful count.
Analysis Given a Balanced f
As before, we will do the heavy lifting assuming the sample space Ω = $\mathcal{B}$, and when we're done we can just multiply by 1/2 (which assumes equally likely reception of balanced vs. constant functions).
M = 2 Samples
The $\{\mathcal{O}_k\}$ are no longer independent. To see how to deal with that, we'll look at M = 2, which only contains the two events $\mathcal{O}_0$ and $\mathcal{O}_1$.

The first trial is the easiest. Clearly,
$$P(\mathcal{O}_0) \;=\; \frac{1}{2}\,.$$
For the second trial, one x with f(x) = 1 is out of contention, leaving $2^n - 1$ candidates, of which $\frac{2^n}{2} - 1$ still produce a 1:
$$P\bigl(\mathcal{O}_1 \,\big|\, \mathcal{O}_0\bigr) \;=\; \frac{\frac{2^n}{2} - 1}{2^n - 1} \;=\; \frac{2^{n-1} - 1}{2^n - 1}\,,$$
making
$$P(\mathcal{O}) \;=\; P\bigl(\mathcal{O}_1 \,\big|\, \mathcal{O}_0\bigr)\, P(\mathcal{O}_0) \;=\; \frac{2^{n-1} - 1}{2^n - 1} \cdot \frac{1}{2}\,.$$
Let's note that the first factor is strictly smaller than the with-replacement value $\frac{1}{2}$, so we are already doing better.
M = 3 Samples

You may be able to guess what will happen next. Here we go.
$$P(\mathcal{O}) \;=\; P(\mathcal{O}_2 \cap \mathcal{O}_1 \cap \mathcal{O}_0) \;=\; P\bigl(\mathcal{O}_2 \,\big|\, [\mathcal{O}_1 \cap \mathcal{O}_0]\bigr)\; P(\mathcal{O}_1 \cap \mathcal{O}_0)\,.$$
We know the value $P(\mathcal{O}_1 \cap \mathcal{O}_0)$, having computed it in the M = 2 case, so direct your attention to the factor $P\bigl(\mathcal{O}_2 \,\big|\, [\mathcal{O}_1 \cap \mathcal{O}_0]\bigr)$. The event $\mathcal{O}_1 \cap \mathcal{O}_0$ means there are now two fewer x's left that would cause f(x) = 1, and the total number of x's from which to choose now stands at $2^n - 2$, so
$$P\bigl(\mathcal{O}_2 \,\big|\, [\mathcal{O}_1 \cap \mathcal{O}_0]\bigr) \;=\; \frac{\frac{2^n}{2} - 2}{2^n - 2} \;=\; \frac{2^{n-1} - 2}{2^n - 2}\,.$$
Continuing in this manner for all M samples, we get
$$P(\mathcal{O}) \;=\; P(\mathcal{O}_{M-1} \cap \cdots \cap \mathcal{O}_2 \cap \mathcal{O}_1 \cap \mathcal{O}_0) \;=\; \left(\frac{2^{n-1} - (M-1)}{2^n - (M-1)}\right) \cdots \left(\frac{2^{n-1} - 2}{2^n - 2}\right) \left(\frac{2^{n-1} - 1}{2^n - 1}\right) \left(\frac{2^{n-1}}{2^n}\right) \;=\; \prod_{k=0}^{M-1} \frac{2^{n-1} - k}{2^n - k}\,.$$
This covers the eventuality of getting all 1s when f is balanced, and if we include the alternate way to get unlucky, all 0s, we have
$$P(\mathcal{S} \text{ without replacement}) \;=\; 2 \prod_{k=0}^{M-1} \frac{2^{n-1} - k}{2^n - k}\,.$$
Meanwhile, the with-replacement algorithm gave us
$$P(\mathcal{S} \text{ with replacement}) \;=\; 2 \prod_{k=0}^{M-1} \frac{1}{2}\,,$$
so we want to confirm our suspicion that we have improved our chances of guessing correctly. We compare the current case with this past case and ask
$$2 \prod_{k=0}^{M-1} \frac{2^{n-1} - k}{2^n - k} \;\;\overset{?}{<}\;\; 2 \prod_{k=0}^{M-1} \frac{1}{2}\,.$$
Intuitively we already guessed that it must be less, but we can now confirm this with hard figures. We simply prove that each term in the left product is less than (or, in one case, equal to) each term in the right product.

We want to show that
$$\frac{2^{n-1} - k}{2^n - k} \;\le\; \frac{1}{2}\,.$$
Now is the time we realize that the old grade school fraction test they taught us in our childhood, usually called cross-multiplication,
$$\frac{a}{b} \;\le\; \frac{c}{d} \quad\Longleftrightarrow\quad ad \;\le\; cb\,,$$
actually has some use. We apply it by asking
$$\begin{aligned} 2\,\bigl(2^{n-1} - k\bigr) \;&\overset{?}{\le}\; 2^n - k\,,\\ 2^n - 2k \;&\overset{?}{\le}\; 2^n - k\,,\\ -2k \;&\overset{?}{\le}\; -k\,. \end{aligned}$$
The answer is yes. In fact, except for k = 0, where both sides are equal, the LHS is strictly less than the RHS. Thus, the product, and therefore the probability of error, is actually smaller in the without replacement algorithm. The constant time result of the earlier algorithm therefore guarantees a constant time result here, but we should have fewer wrong guesses now.
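To see the improvement numerically, here is a small sketch of my own (the choice n = 10 is arbitrary and not from the text) that evaluates both failure probabilities for several M:

    #include <cmath>
    #include <iostream>
    using namespace std;

    int main()
    {
        const double N = pow(2.0, 10);   // 2^n with n = 10
        for (int M = 2; M <= 10; M += 2)
        {
            double withRepl = pow(0.5, M - 1);        // 2 * (1/2)^M = (1/2)^(M-1)
            double withoutRepl = 2.0;                 // 2 * prod (2^{n-1}-k)/(2^n-k)
            for (int k = 0; k < M; k++)
                withoutRepl *= (N / 2.0 - k) / (N - k);
            cout << "M = " << M
                 << "   with replacement: "    << withRepl
                 << "   without replacement: " << withoutRepl << endl;
        }
    }

Each "without replacement" figure comes out slightly below its "with replacement" counterpart, exactly as the term-by-term comparison predicts.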
14.11

This section will turn out to be critically important to our analysis of some quantum algorithms ahead. I'm going to define a few terms here before our official coverage, but they're easy, and we'll give these new terms full air time in future lectures.
14.11.1
Non-Deterministic Algorithms
At the start of this lesson, I alluded to a type of algorithm, A, that was non-deterministic (or, as I sometimes say to avoid the hyphen, probabilistic). This means that given any acceptably small error tolerance, ε > 0, the probability that A will give an inaccurate result is < ε.

We'll say that A is probabilistic with error tolerance ε.

There is a theorem that we can easily state and prove which will help us in our future quantum algorithms. It gives a simple condition under which a probabilistic algorithm will succeed in constant time. First, a few more definitions.
14.11.2
14.11.3
Looping Algorithms
Often in quantum computing, we have an algorithm that repeats an identical measurement (test, experiment) in a loop, and that measurement can be categorized in one of two ways: success ($\mathcal{S}$) or failure ($\mathcal{F}$). Assume that a measurement only needs to succeed one time, in any of the loop passes, to end the algorithm with a declaration of victory: total success. Only if it fails on all loop passes is the algorithm considered to have failed. Finally, the events $\mathcal{S}$ and $\mathcal{F}$ for any one loop pass are usually statistically independent of the outcomes on previous loop passes, a condition we will assume is met.

We'll say that A is a looping algorithm.
14.11.4
Say we have a looping algorithm A of size N that is probabilistic and can be shown to complete with the desired confidence (error tolerance ε) in a fixed number of loop passes, T, where T is independent of N.

We'll call A a constant time algorithm.
14.11.5
We can now state the theorem, which I'll call the CTC theorem for looping algorithms ("CTC" for "constant time complexity"). By the way, this is my terminology. Don't try to use it in mixed company.
The CTC Theorem for Looping Algorithms. Assume that A is a probabilistic, looping algorithm having size N. If we can show that the probability of success for a single loop pass is bounded away from zero by an amount p that is independent of N, i.e.,
$$P(\mathcal{S}) \;\ge\; p \;>\; 0\,, \qquad \text{for all loop passes } k \ge 1\,,$$
then A is a constant time algorithm.

Proof. We are allowing the algorithm to have an error with probability ε. Pick T such that
$$(1 - p)^T \;<\; \varepsilon\,,$$
a condition we can guarantee for large enough T since (1 − p) < 1. Note that p being independent of the size N implies that T is also independent of N. After having established T, we repeat A's loop T times. The event of failure of our algorithm at the completion of all T loop passes, which we'll call $\mathcal{F}_{\text{tot}}$, can be expressed in terms of the individual loop-pass failures, $\mathcal{F}_k$:
$$\mathcal{F}_{\text{tot}} \;=\; \bigwedge_{k=1}^{T} \mathcal{F}_k\,.$$
Since the events are statistically independent, we can convert this to a probability using a simple product,
$$P(\mathcal{F}_{\text{tot}}) \;=\; \prod_{k=1}^{T} P(\mathcal{F}_k) \;\le\; (1 - p)^T \;<\; \varepsilon\,.$$
We have shown that we can get A to succeed with failure probability < ε if we allow A's loop to proceed a fixed number of times, T, independent of its size, N. This is the definition of constant time complexity.

QED
Explicit Formula for T
To solve for T explicitly, we turn our condition on the integer T into an equality on a real number t,
$$(1 - p)^t \;=\; \varepsilon \quad\Longleftrightarrow\quad t \;=\; \log_{1-p}(\varepsilon)\,,$$
then pick any integer T > t. Of course, taking a log having a non-standard base like 1 − p, which is some real number between 0 and 1, is not usually a calculator-friendly proposition; calculators, not to mention programming language math APIs, tend to give us the option of only $\log_2$ or $\log_{10}$. No problem, because ...

[Exercise. Show that
$$\log_A x \;=\; \frac{\log x}{\log A}\,,$$
where the logs on the RHS are both base 2, both base 10, or both any other base for that matter.]

Using the exercise to make the condition on t a little more palatable,
$$t \;=\; \frac{\log(\varepsilon)}{\log(1 - p)}\,,$$
and combining that with the need for an integer T > t, we offer a single formula for T,
$$T \;=\; \left\lfloor \frac{\log(\varepsilon)}{\log(1 - p)} \right\rfloor + 1\,,$$
where $\lfloor x \rfloor$ is notation for the floor of x, the greatest integer $\le x$.
For example, for one (unstated) pair of values p and ε, the formula gives
$$T \;=\; \lfloor 6900.8452188 \rfloor + 1 \;=\; 6900 + 1 \;=\; 6901\,,$$
while for another it gives
$$T \;=\; \lfloor 44.67478762 \rfloor + 1 \;=\; 44 + 1 \;=\; 45\,.$$
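The formula translates directly into one line of code. The sketch below is my own; the example values of p and ε are arbitrary and are not the (unstated) ones behind the two numeric results above:

    #include <cmath>
    #include <iostream>
    using namespace std;

    // Smallest integer T with (1-p)^T < eps:  T = floor(log(eps)/log(1-p)) + 1.
    int loopBound(double p, double eps)
    {
        return static_cast<int>(floor(log(eps) / log(1.0 - p))) + 1;
    }

    int main()
    {
        cout << loopBound(0.5,  0.001)    << endl;   // 10 passes
        cout << loopBound(0.25, 0.000001) << endl;   // 49 passes
    }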
Chapter 15
Computational Complexity
15.1
Algorithms are often described by their computational complexity. This is a quantitative expression of how the algorithm's time and space requirements grow as the data set or problem size grows. We've already seen examples where the separation between the quantum and classical algorithms favors quantum, but the arguments given at the time were a little hand-wavy; we didn't have formal definitions to lean on. We filled in our loosely worded probability explanations by supplying a lesson on basic probability theory, and now it's time to do the same with computational complexity.
15.2
15.2.1
The way an algorithm's running time grows as the size of the problem increases (its growth rate) is known in scientific disciplines as its time complexity. To get a feel for different growth rates, let's assume that it acts on data sets (not function inputs or a number to be factored). The following do not define the various time complexities we are about to study, but they do give you a taste of their consequences.
Constant Time Complexity
The algorithm does not depend on the size of the data set, N . It appears to
terminate in a fixed running time (C seconds) no matter how large N is. Such
an algorithm is said to have constant time complexity (or be a constant-time
algorithm).
Polynomial Time Complexity
The algorithm takes C seconds to process N data items. We double N and the running time seems to double: it takes 2C seconds to process 2N items. If we apply it to 8N data items, the running time seems to take 8C seconds. Here, the algorithm exhibits linear time complexity (or is a linear algorithm).

The algorithm takes C seconds to process N data items. We double N and now the running time seems to quadruple: it takes $C \cdot 2^2 = 4C$ seconds to process 2N items. If we apply it to 8N data items, the running time seems to take $C \cdot 8^2 = 64C$ seconds. Now, the algorithm will likely have quadratic time complexity (or is a quadratic algorithm).

The algorithm takes C seconds to process N data items. We double N and now the running time seems to increase by a factor of $2^3$: it takes $C \cdot 2^3 = 8C$ seconds to process 2N items. If we apply it to 8N data items, the running time seems to take $C \cdot 8^3 = 512C$ seconds. Now, the algorithm will likely have cubic time complexity.

The previous examples (constant, linear, quadratic) all fall into the general category of polynomial time complexity, which includes growth rates limited by some fixed power of N ($N^2$ for quadratic, $N^3$ for cubic, $N^5$ for quintic, etc.).
Non-Polynomial Time Complexity

Sometimes we can't find a p such that $N^p$ reflects the growth in time as the data grows. We need a different functional form. Examples include logarithmic and exponential growth. I won't give an example of the former here; we'll define it rigorously in the next section. But here's what exponential growth feels like.

The algorithm processes N items in C seconds (assume N is large enough that C > 1). When we double N, the running time seems to take $C^2$ seconds. If we apply it to 3N data items, the running time takes $C^3$ seconds, and for 7N data items, $C^7$ seconds. This is longer/slower/worse than polynomial complexity of any polynomial degree. Now, the algorithm probably has exponential growth.

(This last example doesn't describe every exponential growth algorithm, by the way, but an algorithm satisfying this for some C > 1 would likely be exponential.)
15.2.2
I've limited our discussion to the realm of time. However, if we were allowed to make hardware circuitry that grows as the data set grows (or to utilize a larger number of processors from a very large existing farm of computers), then we might not need more time. We would be trading time complexity for space complexity.

The general term that describes the growth rate of the algorithm in both time and space is computational complexity.

For the purposes of our course, we'll usually take the hardware out of the picture when measuring complexity and only consider time. There are two reasons for this:

1. Our interest will usually be in the relative speed-up of quantum over classical methods. For that, we will be using a hardware black box that does the bulk of the computation. We will be asking the question: how much time does the quantum algorithm save us over the classical algorithm when we use the same, or spatially equivalent, black boxes in both regimes?

2. Even when we take space into consideration, the circuitry in our algorithms for this course will grow linearly at worst (often logarithmically), and we have much bigger fish to fry. We're trying to take a very expensive exponential algorithm, classically, and find a polynomial algorithm using quantum computation. Therefore, the linear or logarithmic growth of the hardware will be overpowered by the time cost in both cases and is therefore ignorable.
For example, the circuit we used for both Deutsch-Jozsa and Bernstein-Vazirani,

[Circuit: the data register $|0\rangle^n$ passes through $H^{\otimes n}$, then the oracle $U_f$ (together with the ancilla $|1\rangle$), then a final $H^{\otimes n}$; the ancilla output is ignored.]

had n + 1 inputs (and outputs), so it grows linearly with n (and only logarithmically with the encoded $N = 2^n$). Furthermore, since this is true for both classical and quantum algorithms, such growth can be ignored when we compare the two regimes.
15.2.3
Notation
To kick things off, we establish the following symbolism for the time taken by an algorithm, Q, to deal with a problem of size N (where you can continue to think of N as the amount of data, while keeping in mind it might be the number of inputs or the size of a number to be factored):
$$T_Q(N)\,.$$
15.3
Big-O Growth
15.3.1
When you begin to parse the meaning of an algorithm's running time, you quickly come to a realization. Take a simple linear search algorithm on some random (unsorted) data array, myArray[k]. We will plow through the array from element 0 through element N − 1 and stop if and when we find the search key, x:
    // Linear search: scan until the key is found or the array is exhausted.
    for (k = 0, found = false;  k < N && !found;  k++)
    {
        if (x == myArray[k])
        {
            found = true;
            foundPosition = k;
        }
    }

    if (found)
        cout << x << " found at position " << foundPosition << endl;
If x is in location myArray[0], the algorithm terminates instantly, independent of the array size: constant time. If it is in the last location, myArray[N-1] (or not in the list at all), the algorithm will take N − 1 steps to complete, a time that increases linearly with N.

If we can't even adjudicate the speed of an algorithm for a single data set, how do we categorize it in terms of all data sets of a fixed size? We do so by asking three or four types of more nuanced questions. The most important category of question is: what happens in the worst case? This kind of time complexity is called big-O.

To measure big-O time complexity, we stack the cards against ourselves by constructing the worst possible data set that our algorithm could encounter. In the search example above, that would be the case in which x was in the last position searched. This clears up the ambiguity about where we might find it and tells us that the big-O complexity is going to be linear. But wait: we haven't officially defined what it means to be linear.
15.3.2
Definition of Big-O

We compare $T_Q(N)$ against ordinary functions of N. One such comparison function might be
$$f(N) \;=\; N^2 + 3N + 75\,.$$
Another might be
$$f(N) \;=\; N \log N + 2\,.$$
We wish to compare the growth rate of $T_Q(N)$ with the function $f(N)$. We say that
$$T_Q(N) \;=\; O\bigl(f(N)\bigr)$$
if there exist a constant c > 0 and an integer $n_0$ such that
$$T_Q(N) \;\le\; c\,|f(N)| \qquad \text{for all } N \ge n_0\,.$$
This means that while $T_Q(N)$ might start out being much greater than $c|f(N)|$ for small N, eventually it improves as N increases to the extent that $T_Q(N)$ will become and stay $\le c\,|f(N)|$ for all N once we get past $N = n_0$.
Note. Since our comparison function, f (x), will always be non-negative, I will
drop the absolute value signs in many of the descriptions going forward.
15.3.3
Now we can officially define terms like quadratic or exponential growth.

    f(N)                                growth terminology
    --------------------------------------------------------
    1                                   constant
    log N                               logarithmic
    log^2 N                             log-squared
    N                                   linear
    N log N                             N log N
    N^2                                 quadratic
    N^3                                 cubic
    N^k,  k >= 0 an integer             polynomial
    N^k log^l N,  k, l >= 0 integers    (also) polynomial
    2^N                                 exponential
    N!                                  factorial

15.3.4
You'll note that the table of common big-O terminology doesn't contain functions like $10N^2$ or $N^2 + N$. There's a good reason for this.

Ignore a Constant Factor K

Instead of declaring an algorithm to be O(1000N), we will say it is O(N) (i.e., linear). Instead of $O(1.5N^3)$ we will call it $O(N^3)$ (i.e., cubic). Here's why.

Theorem. If $T_Q(N) = O\bigl(K f(N)\bigr)$, for some constant K, then $T_Q(N) = O\bigl(f(N)\bigr)$.

The theorem says we can ignore constant factors and use the simplest version of the function possible, i.e., $O(N^2)$ vs. $O(3.5N^2)$.

[Exercise. Prove it. Hint: It's easy.]
For Polynomial Big-O Complexity, Ignore All but the Highest Power Term

We only need monomials, never binomials or beyond, when declaring a big-O. For example, if $T_Q$ is $O(N^4 + N^2 + N + 1)$, it's more concisely $O(N^4)$. Here's why.

Lemma. If j and k are non-negative integers satisfying j < k, then $N^j \le N^k$, for all $N \ge 1$.

[Exercise. Prove it. Hint: It's easy.]
Now we prove the main claim of the section: for big-O, we can ignore all but the highest power term in a polynomial.

Theorem. If
$$T_Q \;=\; O\!\left(\sum_{j=0}^{k} a_j N^j\right),$$
then
$$T_Q \;=\; O\bigl(N^k\bigr)\,.$$

Proof. By hypothesis, there are constants c and $n_0$ with $T_Q(N) \le c \sum_{j=0}^{k} a_j N^j$ for all $N \ge n_0$. Pick a positive number a greater than all the coefficients $a_j$, which will allow us to write
$$c \sum_{j=0}^{k} a_j N^j \;\le\; c\,a \sum_{j=0}^{k} N^j\,.$$
By the lemma, each $N^j \le N^k$ (for $N \ge 1$), so
$$c\,a \sum_{j=0}^{k} N^j \;\le\; c\,a \sum_{j=0}^{k} N^k \;=\; c\,a\,(k + 1)\, N^k\,, \qquad \text{for all } N \ge n_0\,,$$
and $c\,a\,(k + 1)$ is just another constant, establishing $T_Q = O(N^k)$.

QED
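As a quick illustration (my own numbers, not the author's): if a timing analysis produces $T_Q = O(5N^3 + 2N + 7)$, this theorem lets us discard the lower-order terms to get $O(5N^3)$, and the constant-factor theorem then reduces that to the final, canonical $O(N^3)$.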
15.4
Ω Growth

15.4.1
Definition of Ω

Turning big-O around, we say that
$$T_Q(N) \;=\; \Omega\bigl(f(N)\bigr)$$
if there exist a constant c > 0 and an integer $n_0$ such that
$$T_Q(N) \;\ge\; c\,|f(N)| \qquad \text{for all } N \ge n_0\,.$$
This means that while $T_Q(N)$ might start out being much smaller than $c|f(N)|$ for small N, eventually it will degrade as N increases to the extent that $T_Q(N)$ will become and stay $\ge c\,|f(N)|$ for all N once we get to $N = n_0$.

There are similar theorems to those we proved for big-O complexity that would apply to Ω time complexity, but they are not critical for our needs, so I'll leave them as an exercise.

[Exercise. State and prove lemmas and theorems analogous to the ones we proved for big-O, but applicable to Ω growth.]
15.5
Θ Growth

Next, we express the fact that an algorithm Q is said to grow at exactly (a term which is not universally used because it could be misinterpreted) the same rate as some mathematical expression f(N) using the notation
$$T_Q(N) \;=\; \Theta\bigl(f(N)\bigr)\,.$$
We mean that $T_Q$ grows neither faster nor slower than f(N). In words, we say the timing for algorithm Q is "theta f(N)." Officially,
$$T_Q(N) = \Theta\bigl(f(N)\bigr) \quad\Longleftrightarrow\quad \text{both } T_Q(N) = O\bigl(f(N)\bigr) \text{ and } T_Q(N) = \Omega\bigl(f(N)\bigr)\,.$$
Ideally, this is what we want to know about an algorithm. Sometimes, when programmers informally say an algorithm is big-O of N or N log N, they really mean that it is Θ of N or N log N, because they have actually narrowed down the growth rate to being precisely linear or N log N. Conversely, if a programmer says an algorithm is linear or logarithmic or N log N, we don't know what they mean without qualification by one of the categories, usually big-O or Θ.
15.6
Little-o Growth
Less frequently, you may come across little-o notation, that is, $T_Q = o\bigl(f(N)\bigr)$. This simply means that we not only have an upper bound in f(N), but this upper bound is, in some sense, too high. One way to say this is that
$$T_Q(N) = o\bigl(f(N)\bigr) \quad\Longleftrightarrow\quad T_Q(N) = O\bigl(f(N)\bigr)\,, \text{ but } T_Q(N) \ne \Theta\bigl(f(N)\bigr)\,.$$
15.7
Computational theorists use the term easy to refer to a problem whose (known) algorithm has polynomial time complexity. We don't necessarily bother specifying the exact power of the bounding monomial; usually the powers are on the order of five or less. However, in this course we'll prove exactly what they are when we need to.

Hard problems are ones whose only known algorithms have exponential time complexity.

A large part of the promise of quantum computing is that it can use quantum parallelism and entanglement to take problems that are classically hard and find quantum algorithms that are easy. This is called exponential speed-up (sometimes qualified with the terms relative or absolute).

For the remainder of the course we will only tackle two remaining algorithms, but they will be whoppers. They both exhibit exponential speed-up.
15.8
Wrap-Up
This section was a necessarily brief and incomplete study of time complexity, because we only needed the most fundamental aspects of big-O and Θ growth for the most obvious and easy-to-state classes: exponential growth, polynomial growth (and a few cases of N log N growth). When we need them, the analysis we do should be self-explanatory, especially with this short section of definitions on which you can fall back.
Chapter 16
Computational Basis States and
Modular Arithmetic
$$|15 \oplus 3\rangle^4 \;=\; |12\rangle^4 \;=\; |1100\rangle \;=\; (1, 1, 0, 0)^t$$

16.1
In the quantum computing literature you'll encounter alternative ways to express the states of a qubit or the inner workings of an algorithm. If you're schooled only in one style, you might be puzzled when you come across unfamiliar verbiage, especially when the author changes dialects in mid-utterance. Today, I want to consolidate a few different ways of talking about Hilbert space and computational basis states. This will prepare you for the algorithms ahead and enable you to read more advanced papers which assume the reader can make such transitions seamlessly. It's also convenient to have this single resource to consult if you're reading a derivation and suddenly have a queasy feeling as the notation starts to get away from you.
16.2
16.2.1
We build the n qubit state space as the tensor product of n single qubit spaces,
$$\mathcal{H}^{(n)} \;\equiv\; \overbrace{\mathcal{H} \otimes \mathcal{H} \otimes \cdots \otimes \mathcal{H}}^{n} \;=\; \bigotimes_{k=0}^{n-1} \mathcal{H}\,.$$
The computational basis states, or CBS, of this product space are the $2^n$ vectors of the form
$$|x\rangle^n \;=\; \bigotimes_{k=0}^{n-1} |x_k\rangle\,,$$
where each $|x_k\rangle$ is either $|0\rangle$ or $|1\rangle$, i.e., a CBS of the kth one qubit space. We index in decreasing order from $x_{n-1}$ to $x_0$ because we'll want the right-most bit to correspond to the least significant bit of the binary number $x_{n-1} \cdots x_1 x_0$.
Different CBS Notations
One shorthand we have used in the past for this CBS is
$$|x_{n-1}\rangle\, |x_{n-2}\rangle \cdots |x_1\rangle\, |x_0\rangle\,,$$
and two other common notations we'll need are the decimal integer (encoded) form,
$$|x\rangle^n\,, \qquad x \in \{0, 1, 2, 3, \dots, 2^n - 1\}\,,$$
and its binary representation,
$$|x_{n-1}\, x_{n-2} \dots x_3\, x_2\, x_1\, x_0\rangle\,, \qquad x_k \in \{0, 1\}\,.$$
For example, for n = 3,
$$|0\rangle^3 = |000\rangle\,, \quad |1\rangle^3 = |001\rangle\,, \quad |2\rangle^3 = |010\rangle\,, \quad |3\rangle^3 = |011\rangle\,, \quad |4\rangle^3 = |100\rangle\,,$$
and, in general,
$$|x\rangle^3 \;=\; |x_2\, x_1\, x_0\rangle\,.$$
The 2-dimensional $\mathcal{H} = \mathcal{H}^{(1)}$ and its $2^n$-dimensional products $\mathcal{H}^{(n)}$ are models we use for quantum computing. However, the problems that arise naturally in math and computer science are based on simpler number systems. Let's have a look at two such systems and show that they are equivalent to the CBS of our Hilbert space(s), $\mathcal{H}$.
16.2.2
Z2 )n
The Second Environment: The Finite Group (Z
Simple Integers
We're all familiar with the integers, $\mathbb{Z}$,
$$\mathbb{Z} \;\equiv\; \{\, \dots, -3, -2, -1, 0, 1, 2, 3, \dots \,\}\,, \qquad \text{usual } +\,,$$
where I have explicitly stated the operation of interest, namely ordinary addition. (We're not particularly interested in multiplication at this time.) This is sometimes called the group of integers, and as a set we know it's infinite, stretching toward ±∞ in the two directions.
Stepping Stone: $\mathbb{Z}_N$, or mod-N Arithmetic

Another group that you may not have encountered in a prior course is the finite group $\mathbb{Z}_N$, consisting of only the N integers from 0 to N − 1,
$$\mathbb{Z}_N \;\equiv\; \{\, 0, 1, 2, \dots, N - 1 \,\}\,, \qquad + \text{ is } (+ \bmod N)\,,$$
and this time we are using addition modulo N as the principal operation; if $x + y \ge N$, we bring it back into the set by taking its remainder after dividing by N:
$$x + y \pmod{N} \;\equiv\; (x + y)\ \%\ N\,.$$
Negation is defined by
$$-x \pmod{N} \;\equiv\; (N - x)\,,$$
and subtraction is defined using the above two definitions, as you would expect:
$$x - y \pmod{N} \;\equiv\; \bigl(x + (-y)\bigr)\ \%\ N\,.$$
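For the record, here is a minimal sketch of my own showing how these three mod-N definitions look in code (the helper names are mine, and the extra "% N" in the negation guards the x = 0 case):

    #include <iostream>
    using namespace std;

    int addModN(int x, int y, int N) { return (x + y) % N; }      // x + y (mod N)
    int negModN(int x, int N)        { return (N - x) % N; }      // -x    (mod N)
    int subModN(int x, int y, int N) { return addModN(x, negModN(y, N), N); }

    int main()
    {
        const int N = 8;
        cout << addModN(5, 6, N) << endl;   // 11 % 8 = 3
        cout << negModN(3, N)    << endl;   // 8 - 3  = 5
        cout << subModN(2, 5, N) << endl;   // 2 - 5 = -3 = 5 (mod 8)
    }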
The case we care about most is N = 2:
$$\mathbb{Z}_2 \;\equiv\; \{\, 0, 1 \,\}\,, \qquad \oplus \text{ is } (+ \bmod 2)\,.$$
Its entire addition (and subtraction) table is tiny:
$$\begin{aligned} 1 \oplus 0 \;&=\; 1\,,\\ 0 \oplus 1 \;&=\; 1\,,\\ 1 \oplus 1 \;&=\; 0\,,\\ -0 \;&=\; 0\,,\\ -1 \;&=\; 1\,,\\ 0 - 1 \;&=\; 0 \oplus 1 \;=\; 1\,. \end{aligned}$$
Of course ⊕ is nothing other than the familiar XOR operation, although in this context we get subtraction and negative mod-2 numbers defined as well. Also, while we should be consistent and give subtraction its own mod-2 symbol, the last example shows that we can, and usually do, use the ordinary − operator, even in mod-2 arithmetic. If there is the potential for confusion, we would tag on the parenthetical "(mod 2)."
An Old Friend with a New Name
We studied $\mathbb{Z}_2$ under a different name during our introductory lecture on a single qubit (although you may have skipped that optional section; it was a study of classical bits and classical gates using the formal language of vector spaces in preparation for defining the qubit). At the time we didn't use the symbolism $\mathbb{Z}_2$ for the two-element group, but called it B, the two-element field on which we built the vector space B for the formal definition of classical bits and operators.

The elements of $\mathbb{Z}_2$ correspond to the CBS kets of the single qubit space $\mathcal{H}^{(1)}$:
$$0 \;\longleftrightarrow\; |0\rangle\,, \qquad 1 \;\longleftrightarrow\; |1\rangle\,.$$
I hasten to add that this connection does not go beyond the 1:1 correspondence listed above and, in particular, does not extend to the mod-2 addition in $\mathbb{Z}_2$ vs. the vector addition in $\mathcal{H}$; those are totally separate and possess no similarities. Also, only the basis states in $\mathcal{H}$ are part of this correspondence; the general state, $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, has no place in the analogy. As tenuous as it may seem, this connection will help us in the upcoming analysis.
Second Environment Completed: $(\mathbb{Z}_2)^n$ with Arithmetic

As a set, $(\mathbb{Z}_2)^n$ is simply the n-tuples that have either 0 or 1 as their coordinates, that is,
$$(\mathbb{Z}_2)^n \;\equiv\; \bigl\{\, (x_{n-1}, x_{n-2}, \dots, x_2, x_1, x_0) \;\bigm|\; \text{each } x_k = 0 \text{ or } 1 \,\bigr\}\,,$$
or, in column vector notation,
$$\left\{\, \begin{pmatrix} 0\\ \vdots\\ 0\\ 0 \end{pmatrix},\ \begin{pmatrix} 0\\ \vdots\\ 0\\ 1 \end{pmatrix},\ \begin{pmatrix} 0\\ \vdots\\ 1\\ 0 \end{pmatrix},\ \dots,\ \begin{pmatrix} x_{n-1}\\ x_{n-2}\\ \vdots\\ x_1\\ x_0 \end{pmatrix},\ \dots \,\right\}.$$
Notice that we label the 0th coordinate on the far right, or bottom, and the (n − 1)st coordinate on the far left, or top. This facilitates the association of these vectors with binary number representations (coming soon), in which the LSB is on the right and the MSB is on the left (as in binary 1000 = 8, while 0001 = 1).
The additive operation stems from the $\mathbb{Z}_2$ in its name: it's the component-wise mod-2 addition or, equivalently, XOR, e.g.,
$$\begin{pmatrix} 0\\ 1\\ 1\\ 1\\ 0\\ 1 \end{pmatrix} \oplus \begin{pmatrix} 0\\ 1\\ 1\\ 1\\ 1\\ 0 \end{pmatrix} \;=\; \begin{pmatrix} 0\\ 0\\ 0\\ 0\\ 1\\ 1 \end{pmatrix}.$$
I'll usually write the elements of $(\mathbb{Z}_2)^n$ in boldface, as in x or y, to emphasize the vector point-of-view.

Common Notation. This set is often written $\{0, 1\}^n$, especially when we don't care about the addition operation, only the n-tuples of 0s and 1s that are used as inputs to Boolean functions.
Z2 )n is a Vector Space
(Z
I've already started calling the objects in (Z₂)^n vectors, and this truly is an official designation. Just as R^n is an n-dimensional vector space over the reals, and H(n) is a 2^n-dimensional vector space over the complex numbers, (Z₂)^n is a vector space over Z₂. The natural question arises: what does this even mean?

You know certain things about all vector spaces, a few of which are

• There is some kind of scalar multiplication, cv.
• There is always some basis and therefore a dimension.
• There is often an inner product defining orthogonality.

All this is true of (Z₂)^n, although the details will be defined as they crop up. For now, we only care about the vector notation and ⊕ addition. That is, unless you want to do this ...
[Exercise. Describe all of the above and anything else that needs to be confirmed
to authorize us to call (Z2 )n a vector space.]
Caution. For general N , (ZN )n is not a vector space. The enlightened among
you can help the uninitiated understand this in your forums, but it is not something
we will need. What kind of N will lead to a vector space?
Recall. Once again, think back to the vector space that we called B = B². It was the formal structure that we used to define classical bits. Using the more conventional language of this lesson, B is the four-element, 2-dimensional vector space (Z₂)².
Connection Between (Z₂)^n and H(n)
Let's punch up our previous analogy, bringing it into higher dimensions. We relate the n-component vectors, (Z₂)^n, and the multi-qubit space, H = H(n). As before,
the dimension of H(n), 2^n, equals the size of the group (Z₂)^n, also 2^n. The vectors in (Z₂)^n correspond nicely to the CBS in H(n):

    (Z₂)^n                                  H(n)
    (0, 0, . . . , 0, 0, 0)^t        ⟷     |00⋯000⟩
    (0, 0, . . . , 0, 0, 1)^t        ⟷     |00⋯001⟩
    (0, 0, . . . , 0, 1, 0)^t        ⟷     |00⋯010⟩
    (0, 0, . . . , 0, 1, 1)^t        ⟷     |00⋯011⟩
    (0, 0, . . . , 1, 0, 0)^t        ⟷     |00⋯100⟩
            ⋮                                  ⋮
    (x_{n−1}, . . . , x₂, x₁, x₀)^t  ⟷     |x_{n−1}⋯x₂x₁x₀⟩

Again, there is no connection between the respective addition operations, and the correspondence does not include superposition states of H(n). Still, the basis states in H(n) line up with the vectors in (Z₂)^n, and that's the important thing to remember.
16.2.3
One of our stepping stones above included the finite group Z_N with mod-N + as its additive operation,

    Z_N ≡ { 0, 1, 2, . . . , N − 1 }.

Now we're going to make two modifications to this. First, we'll restrict the size, N, to powers of 2, i.e., N = 2^n, for some n,

    Z_{2^n} ≡ { 0, 1, 2, . . . , 2^n − 1 },

where we view each element, x = Σ_{k=0}^{n−1} x_k 2^k, as a sum of powers-of-2.
The second change will be to the addition operation. It will be neither normal addition nor mod-N addition. Instead, we define x + y using the bit-wise ⊕ operator.

    If    x     =  x_{n−1} x_{n−2} ⋯ x₂ x₁ x₀,
    and   y     =  y_{n−1} y_{n−2} ⋯ y₂ y₁ y₀,
    then  x ⊕ y ≡  (x_{n−1} ⊕ y_{n−1}) ⋯ (x₂ ⊕ y₂)(x₁ ⊕ y₁)(x₀ ⊕ y₀).
Note that the RHS of the last line is not a product, but the binary representation of the number using its base-2 digits (e.g., 11010001101). Another way to say it is

    x ⊕ y  ≡  Σ_{k=0}^{n−1} (x_k ⊕ y_k) 2^k.
To eliminate any confusion between this group and the same set under ordinary mod-N (mod-2^n) addition, let's call ours by its full name,

    (Z_{2^n}, ⊕).
Examples of addition in (Z_{2^n}, ⊕) are

    1 ⊕ 1    =  0,
    1 ⊕ 2    =  3,
    1 ⊕ 5    =  4,
    4 ⊕ 5    =  1,
    5 ⊕ 11   =  14,
    13 ⊕ 13  =  0,  and
    15 ⊕ 3   =  12.
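These sums are nothing more than the bitwise XOR operator on ints, so they are easy to sanity-check in code. A minimal Python sketch (my illustration, not part of the text):

    # Each (Z_{2^n}, ⊕) sum above is the bitwise XOR operator ^ on ints.
    for x, y, want in [(1, 1, 0), (1, 2, 3), (1, 5, 4), (4, 5, 1),
                       (5, 11, 14), (13, 13, 0), (15, 3, 12)]:
        assert (x ^ y) == want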
Since x ⊕ x = 0, every element is its own negative: −x = x. In vector form,

    x = (x_{n−1} x_{n−2} ⋯ x₁ x₀)   ⟷   x = (x_{n−1}, x_{n−2}, . . . , x₁, x₀)^t,

and subtraction coincides with addition:

    x ⊖ y  ≡  x ⊕ y.
[Exercise. For those of you fixating on the vector space aspect of (Z₂)^n, you may as well satisfy your curiosity by writing down why this makes Z_{2^n} a vector space over Z₂, one that is isomorphic to (effectively the same as) (Z₂)^n.]
16.2.4

Interchangeable Notation of H(n), (Z₂)^n and (Z_{2^n}, ⊕)
In practical terms, the relationship between the above three environments allows us to use bit-vectors,

    (1, 1, 0, 1, 0)^t,

binary number strings,

    11010,

or plain old encoded ints,

    26,
interchangeably, at will. One way we'll take advantage of this is by using plain int notation in our kets. For n = 5, for example, we might write any of the four equivalent expressions,

    |26⟩  =  |11010⟩  =  |1⟩|1⟩|0⟩|1⟩|0⟩  =  |1⟩ ⊗ |1⟩ ⊗ |0⟩ ⊗ |1⟩ ⊗ |0⟩,

usually the first or second. Also, we may add notation to designate the number of qubits under consideration, as in

    |26⟩^5  =  |3⟩^2 ⊗ |2⟩^3.
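If you want to see this last equivalence concretely, here is a small Python sketch (my own illustration; the helper cbs is hypothetical, not from the text) that builds CBS kets as natural-basis coordinate vectors and checks the tensor factorization numerically:

    import numpy as np

    def cbs(x, n):
        """|x> as a 2^n-dimensional natural-basis column vector."""
        v = np.zeros(2 ** n)
        v[x] = 1.0
        return v

    # |26>_5 = |3>_2 (tensor) |2>_3, as coordinate vectors:
    assert np.array_equal(cbs(26, 5), np.kron(cbs(3, 2), cbs(2, 3)))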
Hazard

Why are these last two equivalent? We must be careful not to confuse the encoded CBS notation as if it gave coordinates of CBS kets in the natural basis; it does not. In other words, |3⟩^2 expressed in natural tensor coordinates is not

    (1, 1)^t.

It can't be, for two reasons:

a) |3⟩^2 is a 4-dimensional vector and requires four, not two, coordinates to express it, and

b) |3⟩^2 is a CBS, and any CBS ket expressed in its own (natural) basis can have only a single 1 coordinate, the balance of the column consisting of 0 coordinates.
The correct expression of |3⟩^2 in tensor coordinates is

    (0, 0, 0, 1)^t,

as can be seen if we construct it from its component tensor coordinates using

    (0, 1)^t ⊗ (0, 1)^t  =  (0, 0, 0, 1)^t.
Therefore, to answer this last question (why are the last two expressions equivalent?) we must first express all vectors in terms of natural coordinates. That would produce three column vectors (for |3⟩^2, |2⟩^3 and |26⟩^5), each with a single 1 in its respective column, and only then could we compute the tensor product of two of them, demonstrating that it was equal to the third.
To head off another possible source of confusion, we must understand why the tensor product dimension of the two component vectors is not 2 · 3 = 6, contradicting a possibly (and incorrectly) hoped-for result of 5. Well, the dimensions of these spaces are not 2, 3, 5 or even 6. Remember that the exponent to the upper-right of the ket designates the order of the Hilbert space. Meanwhile, the dimension of each space is 2^order, so these dimensions are actually 2², 2³ and 2⁵. Now we can see that the product space dimension, 32, equals the product of the two component dimensions, 4 · 8, as it should.
Sums inside Kets

Most importantly, if x and y are two elements in Z_{2^n}, we may take their mod-2 sum inside a ket,

    |x ⊕ y⟩,

which means that we are first forming x ⊕ y, as defined in (Z_{2^n}, ⊕), and then using that n-bit answer to signify the CBS associated with it, e.g.,

    |1 ⊕ 5⟩     =  |4⟩,
    |5 ⊕ 11⟩    =  |14⟩,
    |15 ⊕ 3⟩^4  =  |12⟩^4,  and
    |21 ⊕ 21⟩   =  |0⟩.
As a final example of the interchangeability, recall the expansion of the x-basis CBS ket along the z-basis,

    |x̃⟩^n  =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{x⊙y} |y⟩^n,

where ⊙ is the mod-2 dot product based on the individual binary digits in the base-2 representation of x and y,

    x ⊙ y  ≡  Σ_{k=0}^{n−1} x_k y_k (mod 2),
    with x ⟷ x = (x_{n−1}, x_{n−2}, . . . , x₀)^t  and  y ⟷ y = (y_{n−1}, y_{n−2}, . . . , y₀)^t.

In vector form, where the dot product between vector x and vector y is also assumed to be the mod-2 dot product,

    |x̃⟩^n  =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{x·y} |y⟩^n.

This demonstrates the carefree change of notation often encountered in this and other quantum computing presentations.
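For the computationally inclined, the mod-2 dot product is easy to compute directly from the encoded integers. A minimal Python sketch (my illustration, assuming nothing beyond the definitions above):

    def dot2(x, y):
        """Mod-2 dot product of encoded ints: parity of the bitwise AND."""
        return bin(x & y).count("1") % 2

    # The amplitude sign attached to |y> in the expansion of |x~> is (-1)**dot2(x, y):
    assert dot2(0b110, 0b011) == 1    # exactly one shared 1-bit, so sign -1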
We have completed our review and study of the different notation and language used for CBS. As we move forward, we'll want to add more vocabulary that straddles these three mathematical systems, most notably periodicity, but that's best deferred until we get to the algorithms which require it.
Chapter 17
Quantum Oracles
[Figure: the circuit for Simon's problem. The data register |0⟩^n passes through H^{⊗n}, into U_f, then through a final H^{⊗n}; the target register |0⟩^n feeds U_f directly. The braced gate U_f is the Quantum Oracle.]
17.1
We've seen oracles in our circuits for Deutsch, Deutsch-Jozsa and Bernstein-Vazirani, but today we will focus on the oracle itself, not a specific algorithm. Our goals will be to

• extend our input size to cover any dimension for each of the oracle's two channels,

• get a visual classification of the matrix for an oracle, and

• define relativized and absolute time complexity, two different ways of measuring a quantum algorithm's improvement over a classical algorithm.

The last item relies on an understanding of the oracle's time complexity, which is why it is included in this chapter.

We'll continue to use U_f to represent the oracle for a Boolean function, f. Even as we widen the input channels today, U_f will still have two general inputs: an upper, A or data register, and a lower, B or target register.
At the top of the page I've included a circuit that solves Simon's problem (coming soon). It reminds us how the oracle relates to the surrounding gates and contains a wider (n-qubit) input to the target than we've seen up to now.
17.2
At the heart of our previous quantum circuits has lurked the unitary transformation we've been calling the quantum oracle or just the oracle. Other gates may be wired into the circuit around the oracle, but they are usually standard parts that we pull off the shelf, like CNOT, Hadamard and other simple gates. The oracle, on the other hand, is custom designed for the problem to be solved. It typically involves some function, f, that the circuit and its algorithm are meant to explore/discover/categorize.
17.2.1
The simplest quantum oracle is one that arises from a unary Boolean f. We defined such a U_f in the Deutsch circuit. Its action on a general CBS |x⟩|y⟩ is shown in the following circuit:

    [Circuit: |x⟩ → U_f → |x⟩ (data);  |y⟩ → U_f → |y ⊕ f(x)⟩ (target)]

In terms of the effect that the oracle has on the CBS |x⟩|y⟩, which we know to be shorthand for |x⟩ ⊗ |y⟩, the oracle can be described as

    |x⟩ |y⟩   →  U_f  →   |x⟩ |y ⊕ f(x)⟩.
There are a number of things to establish at the start, some review, others new.
Initial Remarks (Possibly Review)
1. ⊕ is the mod-2 addition:

    0 ⊕ 0  =  0
    0 ⊕ 1  =  1
    1 ⊕ 0  =  1
    1 ⊕ 1  =  0
3. The separable state |x⟩|y⟩ coming in from the left represents a very special and restricted input: a CBS. Whenever any gate is defined in terms of CBS, we must remember to use linearity and extend the definition to the entire Hilbert space. We do this by expanding a general ket, such as a 4-dimensional |ψ⟩^2, along the CBS,

    |ψ⟩^2  =  Σ_{k=0}^{3} c_k |k⟩^2,

reading off the output for each of the CBS kets from our oracle description, and combining those using the complex amplitudes, c_k.

4. When a CBS is presented to the input of an oracle like U_f, the output happens to be a separable state (something not true for general unitary gates, as we saw with the BELL operator). In this case, the separable output is |x⟩ |y ⊕ f(x)⟩. Considering the last bullet, we can't expect such a nice separable product when we present the oracle with some non-basis state, |ψ⟩^2, at its inputs. Take care not to make the mistake of using the above template directly on non-CBS inputs.
Initial Remarks (Probably New)
Oracles are often called black boxes, because we computer scientists don't care how the physicists and engineers build them or what's inside. However, when we specify an oracle using any definition (the above being only one such example), we have to check that certain criteria are met.

1. The definition of U_f's action on the CBS inputs as described above must result in unitarity. Should you come across a putative oracle with a slightly off-beat definition, a quick check of unitarity might be in order: a so-called sanity check.
2. The above circuit is for two one-qubit inputs (that's a total of 4 dimensions for our input and output states) based on a function, f, that has one bit in and one bit out. After studying this easy case, we'll have to extend the definition and confirm unitarity for

    • multi-qubit input registers taking CBS of the form |x⟩^n and |y⟩^m, and
    • an f with domain and range larger than the set {0, 1}.

3. The function that we specify needs to be easy to compute in the complexity sense. A quantum circuit won't likely help us if a computationally hard function is inside an oracle. While we may be solving hard problems, we need to find easy functions on which to build our circuits. This means the functions have to be computable in polynomial time.

4. Even if the function is easy, the quantum oracle still may be impractical to build in the near future of quantum computing.
17.2.2
The nice thing about studying single-bit functions, f, and their two-qubit oracles, U_f, is that we don't have to work in the abstract. There are so few options, we can compute each one according to the definitions. The results often reveal patterns that will hold in the more general cases.

Notation Reminder. If a is a single binary digit, ¬a = ā is its logical negation (AKA the bit-flip or not operation),

    ¬a (or ā)  ≡  { 0, if a = 1,
                    1, if a = 0.
A short scribble should convince you that 0 ⊕ a = a and 1 ⊕ a = ā. Therefore, for the constant function f(x) ≡ 1 (the case analyzed next),

    U_f (|x⟩ |y⟩)  =  |x⟩ |y ⊕ 1⟩  =  |x⟩ |ȳ⟩.

17.2.3

(Remember, |·⟩^2 does not mean ket squared, but is an indicator that this is an n-fold tensor product state, where n = 2.)
The Matrix for U_f

It's always helpful to write down the matrix for any linear operator. At the very least it will usually reveal whether or not the operator is unitary, even though we suspect without looking at the matrix that U_f is unitary, since it is real and is its own inverse. However, self-invertibility does not always unitarity make, so it's safest to confirm unitarity by looking at the matrix.

This is a transformation from 4 dimensions to 4 dimensions, so we need to express the 4-dimensional basis kets as coordinates. Let's review the connection between the four CBS kets of H(2) and their natural basis coordinates:
    component    encoded    coordinate
    |0⟩|0⟩       |0⟩^2      (1, 0, 0, 0)^t
    |0⟩|1⟩       |1⟩^2      (0, 1, 0, 0)^t
    |1⟩|0⟩       |2⟩^2      (0, 0, 1, 0)^t
    |1⟩|1⟩       |3⟩^2      (0, 0, 0, 1)^t
To obtain the matrix, express each U_f(|x⟩|y⟩) as a column vector for each tensor CBS, |k⟩^2, k = 0, . . . , 3:

    ( U_f|0⟩^2   U_f|1⟩^2   U_f|2⟩^2   U_f|3⟩^2 )  =  ( |1⟩^2   |0⟩^2   |3⟩^2   |2⟩^2 )

        =   [ 0 1 0 0 ]
            [ 1 0 0 0 ]
            [ 0 0 0 1 ]
            [ 0 0 1 0 ]
Aha. These rows (or columns) are clearly orthonormal, so the matrix is unitary, meaning the operator is unitary.

It will be useful to express this matrix in terms of the 2 × 2 Pauli matrix, σ_x, associated with the X gate.
    U_f  =  [ σ_x    0  ]
            [  0    σ_x ]
This form reveals a pattern that you may find useful going forward. Whenever a
square matrix M can be broken down into smaller unitary matrices along its diagonal
(0s assumed elsewhere), M will be unitary.
[Exercise. Prove the last statement.]
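The whole observation is easy to test numerically. Here is a minimal Python sketch (my illustration; oracle_matrix is a hypothetical helper) that builds U_f from the CBS rule |x⟩|y⟩ ↦ |x⟩|y ⊕ f(x)⟩ and confirms both the unitarity and the block-diagonal shape for the constant-1 function analyzed above:

    import numpy as np

    def oracle_matrix(f, n):
        """U_f on n data qubits + 1 target: |x>|y> -> |x>|y XOR f(x)>."""
        U = np.zeros((2 ** (n + 1), 2 ** (n + 1)))
        for x in range(2 ** n):
            for y in range(2):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
        return U

    sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
    U = oracle_matrix(lambda x: 1, 1)           # f(x) = 1, the case above
    assert np.allclose(U @ U.T, np.eye(4))      # unitary (real + orthogonal)
    assert np.allclose(U[:2, :2], sigma_x) and np.allclose(U[2:, 2:], sigma_x)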
17.2.4
For the identity function, f(x) = x, the same procedure applied to

    U_f (|x⟩ |y⟩)  =  |x⟩ |y ⊕ x⟩

produces the matrix

    [ 1 0 0 0 ]
    [ 0 1 0 0 ]     =     [ 1    0  ]
    [ 0 0 0 1 ]           [ 0   σ_x ]
    [ 0 0 1 0 ]

This time, we've enlisted the help of the 2 × 2 identity matrix, 1. Again, we see how U_f acts on its inputs while confirming that it is unitary.
17.2.5
There are only two more 1-bit functions left to analyze. One of them, f (x) 0 was
covered in our first quantum algorithms lesson under the topic of Oracle for the [0]-op.
We found its matrix to be
Uf
1.
Uf
17.3
17.3.1
We will be building oracles based on classical Boolean functions that might have many inputs and one output,

    f : { 0, 1 }^n → { 0, 1 },

or many inputs and many outputs,

    f : { 0, 1 }^n → { 0, 1 }^m.

To facilitate this, we'll need more compact language than { 0, 1 }^n, especially as we will be adding these multi-bit items both inside and outside our kets. We have such vocabulary in our quiver thanks to the previous lesson on CBS and modular arithmetic.
17.3.2
As a set, the domain may be written either as the integers

    Z_{2^n}  =  { 0, 1, 2, 3, . . . , 2^n − 1 },

or as the bit-vectors

    (Z₂)^n  =  { (0, . . . , 0, 0)^t,  (0, . . . , 0, 1)^t,  . . . ,  (1, . . . , 1, 1)^t },

with ⊕ being component-wise mod-2 addition.
The CBS of H(n) are

    { |x⟩^n,   x = 0, 1, 2, 3, . . . , 2^n − 1 }.

Seen below, we can use either encoded or binary form to write these basis kets:

    |0⟩^n      =  |0⋯000⟩,
    |1⟩^n      =  |0⋯001⟩,
    |2⟩^n      =  |0⋯010⟩,
    |3⟩^n      =  |0⋯011⟩,
    |4⟩^n      =  |0⋯100⟩,
       ⋮
    |2^n−1⟩^n  =  |1⋯111⟩.

17.3.3
This review might be more than we need, but it will nip a few potentially confusing situations in the bud, the biggest being the notation

    |y ⊕ f(x)⟩

when y and f(x) are more than just labels for the CBS of the 2-dimensional H, 0 and 1. Of course, when they are 0 and 1, we know what to do:

    |1 ⊕ 0⟩  =  |1⟩   or
    |1 ⊕ 1⟩  =  |0⟩.

But when we are in a higher dimensional Hilbert space that comes about by studying a function sending Z_{2^n} into Z_{2^m}, we'll remember the above correspondence. For example,

    |15 ⊕ 3⟩^4  =  |12⟩^4   and
    |21 ⊕ 21⟩   =  |0⟩.
With that little review, we can analyze our intermediate and advanced multi-qubit
oracles.
17.4
17.4.1
Circuit

    [Circuit: |x⟩^n → U_f → |x⟩^n (data);  |y⟩ → U_f → |y ⊕ f(x)⟩ (target)]
17.4.2

The general mapping we found so useful in the simple case continues to work for us here. Even though x is now in the larger domain, Z_{2^n}, f(x) is still restricted to 0 or 1, so we can reuse the familiar identities with a minor change (the first separable component gets an order-n exponent),

    U_f (|x⟩^n |y⟩)  =  |x⟩^n |y ⊕ f(x)⟩.
17.4.3

Analyzing U_f for x = 0

For the moment, we won't commit to a specific f, but we will restrict our attention to the input x = 0.

    f(0)    input          U_f (|0⟩^n |y⟩)
    0       |0⟩^n |0⟩      |0⟩^n |0⟩
    0       |0⟩^n |1⟩      |0⟩^n |1⟩
    1       |0⟩^n |0⟩      |0⟩^n |1⟩
    1       |0⟩^n |1⟩      |0⟩^n |0⟩

This is a little different from our previous table. Rather than completely determining a 4 × 4 matrix for a particular f, it gives us two possible 2 × 2 sub-matrices depending on the value of f(0).
1. When f(0) = 0, the first two columns of the matrix for U_f will be (see upper two rows of table):

    ( U_f|0⟩^{n+1}   U_f|1⟩^{n+1} )  =  ( |0⟩^{n+1}   |1⟩^{n+1} )

        =   [ 1 0 | ? ]
            [ 0 1 | ? ]
            [ 0 0 | ? ]
            [ ⋮ ⋮ | ? ]
            [ 0 0 | ? ]

i.e., the 2 × 2 identity, 1, sits in the top-left corner, with 0s below it and as-yet-undetermined entries (?) in the remaining columns.

2. When f(0) = 1, the first two columns of the matrix for U_f will be (see lower two rows of table):

        =   [ 0 1 | ? ]
            [ 1 0 | ? ]
            [ 0 0 | ? ]
            [ ⋮ ⋮ | ? ]
            [ 0 0 | ? ]

i.e., the Pauli matrix σ_x sits in the top-left corner instead.
17.4.4

We continue studying a non-specific f and, like the x = 0 case, work with a fixed x. However, this time x ∈ Z_{2^n} can be any element in that domain. If it helps, you can think of a small x, like x = 1, 2 or 3. The mappings that will drive the math become

    f(x)    input          U_f (|x⟩^n |y⟩)
    0       |x⟩^n |0⟩      |x⟩^n |0⟩
    0       |x⟩^n |1⟩      |x⟩^n |1⟩
    1       |x⟩^n |0⟩      |x⟩^n |1⟩
    1       |x⟩^n |1⟩      |x⟩^n |0⟩

We computed the first two columns of the matrix for U_f before, and now we compute two columns of U_f further to the right.

[In Case You Were Wondering. Why did the single value x = 0 produce two columns in the matrix? Because there were two possible values for y, 0 and 1, which gave rise to two different basis kets |0⟩^n|0⟩ and |0⟩^n|1⟩. It was those two kets that we subjected to U_f to produce the first two columns of the matrix. Same thing here, except now the two basis kets that correspond to the fixed x under consideration are |x⟩^n|0⟩ and |x⟩^n|1⟩, and they will produce matrix columns 2x and 2x + 1.]
1. When f(x) = 0, columns 2x and 2x + 1 of the matrix for U_f will be (see upper two rows of table):

    ( U_f(|x⟩^n|0⟩)   U_f(|x⟩^n|1⟩) )  =  ( |x⟩^n|0⟩   |x⟩^n|1⟩ ),

i.e., columns 2x and 2x + 1 are all 0s except for the 2 × 2 identity block

        [ 1 0 ]
        [ 0 1 ]

occupying rows 2x and 2x + 1.
2. When f(x) = 1, columns 2x and 2x + 1 of the matrix for U_f will be (see lower two rows of table): all 0s except for the block

        [ 0 1 ]
        [ 1 0 ]    =   σ_x

occupying rows 2x and 2x + 1.
As you can see, for any x the two columns starting with column 2x contain all 0s away from the diagonal and are either the 2 × 2 identity 1 or the Pauli matrix σ_x on the diagonal, giving the matrix the overall form

    U_f  =  [ [1 or σ_x]                                  ]
            [            [1 or σ_x]            0          ]
            [                        ⋱                    ]
            [       0                     [1 or σ_x]      ]
17.5
17.5.1
    [Circuit: |x⟩^n → U_f → |x⟩^n;  |y⟩^m → U_f → |y ⊕ f(x)⟩^m]

which arises when the function under study is an f that maps Z_{2^n} → Z_{2^m}. In this case it only makes sense to have an m-qubit B register, as pictured above; otherwise the sum inside the bottom right output, |y ⊕ f(x)⟩, would be ill-defined.

Sometimes m = n

Often, for multi-valued f, we can arrange things so that m = n, and in that case the circuit will be

    [Circuit: |x⟩^n → U_f → |x⟩^n;  |y⟩^n → U_f → |y ⊕ f(x)⟩^n]
17.5.2

In this case, f(x) ∈ { 0, 1, 2, 3 } for each x ∈ Z_{2^n}. There will be (n + 2) qubits going into U_f (n qubits into the A register and 2 qubits into the B register).

    [Circuit: |x⟩^n → U_f → |x⟩^n;  |y⟩^2 → U_f → |y ⊕ f(x)⟩^2]

This gives a total of 2^{n+2} possible input CBS going into the system, making the dimension of the overall tensor product space, H(n+2), 2^{n+2}. The matrix for U_f will, therefore, have size 2^{n+2} × 2^{n+2}.
17.5.3

We follow the pattern set in the intermediate case and look at a non-specific f but work with a fixed x ∈ Z_{2^n}. Because we are assuming m = 2, this leaves four possible images for this one x, namely f(x) = 0, 1, 2 or 3. As we'll see, this leads to a 4 × 4 sub-matrix for each fixed x. First, the B register sum inside the ket unpacks bit-wise:

    |y ⊕ f(x)⟩^2  =  | y₁y₀ ⊕ f(x)₁f(x)₀ ⟩^2  =  | y₁ ⊕ f(x)₁ ⟩ | y₀ ⊕ f(x)₀ ⟩.

The second line is the definition of ⊕ in (Z₂)², and the ( )( ) inside the ket is not multiplication but the binary expansion of the number y₁y₀ ⊕ f(x)₁f(x)₀. Thus, our four combinations of the 2-bit number y with the fixed value f(x) become
    f(x)    U_f (|x⟩^n |y⟩^2),  for y = 0, 1, 2, 3

    0       |x⟩^n|0⟩^2,   |x⟩^n|1⟩^2,   |x⟩^n|2⟩^2,   |x⟩^n|3⟩^2
    1       |x⟩^n|1⟩^2,   |x⟩^n|0⟩^2,   |x⟩^n|3⟩^2,   |x⟩^n|2⟩^2
    2       |x⟩^n|2⟩^2,   |x⟩^n|3⟩^2,   |x⟩^n|0⟩^2,   |x⟩^n|1⟩^2
    3       |x⟩^n|3⟩^2,   |x⟩^n|2⟩^2,   |x⟩^n|1⟩^2,   |x⟩^n|0⟩^2
1. When f(x) = 0 (see topmost four rows of table), columns 4x through 4x + 3 of the matrix for U_f hold the 4 × 4 identity block

        [ 1 0 0 0 ]
        [ 0 1 0 0 ]
        [ 0 0 1 0 ]
        [ 0 0 0 1 ]

in rows 4x through 4x + 3, with 0s everywhere above and below.
2. Let's skip to the case f(x) = 3, and use the table to calculate columns 4x through 4x + 3 of the matrix for U_f (see bottommost four rows of table):

    ( U_f(|x⟩^n|0⟩^2)   U_f(|x⟩^n|1⟩^2)   U_f(|x⟩^n|2⟩^2)   U_f(|x⟩^n|3⟩^2) )
        =  ( |x⟩^n|1⟩|1⟩   |x⟩^n|1⟩|0⟩   |x⟩^n|0⟩|1⟩   |x⟩^n|0⟩|0⟩ ),

which translates to the 4 × 4 block

        [ 0 0 0 1 ]
        [ 0 0 1 0 ]
        [ 0 1 0 0 ]
        [ 1 0 0 0 ]

in rows (and columns) 4x through 4x + 3, with 0s elsewhere.
The remaining two cases produce, by the same computation, the blocks

    f(x) = 1:   1 ⊗ σ_x        and        f(x) = 2:   σ_x ⊗ 1,

while the f(x) = 3 block above is σ_x ⊗ σ_x and the f(x) = 0 block is 1 ⊗ 1.
17.5.4

Now that we've analyzed a few oracles, we can see why all U_f will be unitary. In fact, you might have noticed this even before having studied the examples. Here are the key points, and I'll let you fill in the details as an [Exercise].

• Every column is a CBS ket, since it is the separable product of CBS kets.

• All CBS kets are normal vectors (all coordinates are 0 except one, which is 1).

• If two columns were identical, U_f would map two different CBS kets to the same CBS ket.

• Any matrix that maps two different CBS kets to the same CBS ket cannot be invertible.

• Since U_f is its own inverse, it is invertible, so by the last two bullets, all columns are distinct unit vectors.

• We conclude that distinct columns have their solitary 1 in different positions.

• The inner product of these columns with themselves is 1 and with other columns is 0: the matrix has orthonormal columns.

QED

Of course, we learn more by describing the form of the matrices, so the activities of this chapter have value beyond proving unitarity.
17.5.5

The four possible 4 × 4 blocks just catalogued are exactly the tensor products

    { 1, σ_x } ⊗ { 1, σ_x }  =  { 1 ⊗ 1,   1 ⊗ σ_x,   σ_x ⊗ 1,   σ_x ⊗ σ_x }.

Each component of the tensor product, 1 and σ_x, were the two possible 2 × 2 matrices that appeared along the diagonal of U_f for m = 1.
The tensor product of unitary matrices is easily shown to be unitary. And one could proceed inductively to demonstrate that the U_f for output size m + 1 consists of sub-matrices along the diagonal, each of which is a tensor product of the potential sub-matrices available to the U_f for size m. This will give both unitarity as well as the visual that we have already predicted.

[Caution. If we designate some function with 2^n inputs and 2^m outputs as f_{n,m}, we are not saying that U_{f_{n,m+1}} = U_{f_{n,m}} ⊗ U_{g_{n,m}} for two smaller oracles; it isn't. That statement doesn't even track, logically, since it would somehow imply we could generate an arbitrary U_f, which can be exponentially complex, out of repeated products of something very simple. Rather, we merely noted the fact that the sub-matrices along the diagonal are, individually, tensor products of the next-smaller m's possible diagonal sub-matrices. We still need the full details of f to compute all the smaller sub-matrices, which will be different (in general) from one another.]
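A quick numerical confirmation of the m = 2 block structure, in the same Python sketch style as before (my illustration, not from the text):

    import numpy as np

    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    I2 = np.eye(2)

    # The four possible 4x4 diagonal blocks are {1, sigma_x} (x) {1, sigma_x},
    # and the block for a given f(x) sends |y> to |y XOR f(x)>:
    blocks = {0: np.kron(I2, I2), 1: np.kron(I2, sx),
              2: np.kron(sx, I2), 3: np.kron(sx, sx)}
    for fx, B in blocks.items():
        for y in range(4):
            assert B[y ^ fx, y] == 1.0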
17.6
In this course, we are keeping things as simple as possible while attempting to provide the key ideas in quantum computation. To that end, we'll make only one key classification of oracles used in algorithms. You will undoubtedly explore a more rigorous and theoretical classification in your advanced studies.
Complexity of the Oracle
The above constructions all demonstrated that we can take an arbitrary function, f, and, in theory, represent it as a reversible gate associated with a unitary matrix. If you look at the construction, though, you'll see that any function which requires a full table of 2^n values to represent it (if there is no clear analytical short-cut we can use to compute it) will likewise end up with a similarly complicated oracle. The oracle would need an exponentially large number of gates (relative to the number of binary inputs, n). An example will come to us in the form of Simon's algorithm, where we have a Z_{2^n}-periodic function (notation to be defined) and seek to learn its period.

However, there are functions which we know have polynomial complexity and can be realized with a correspondingly simple oracle. An example of this kind of oracle is that which appears in Shor's factoring algorithm (not necessarily Shor's period-finding algorithm). In a factoring algorithm, we know a lot about the function that we are trying to crack and can analyze its specific form, proving it to be O(n³). Its oracle will also be O(n³).
Two Categories of a Quantum Algorithm's Complexity

Therefore, whenever presenting a quantum algorithm, one should be clear on which of the two kinds of oracles is available to the circuit. That distinction will be made using the following, somewhat informal (and unconventional) language.

• Relativized Time Complexity - This is the time complexity of a {circuit + algorithm} without knowledge of the oracle's design. If we were later given the complexity of the oracle, we would have to modify any prior analysis to account for it. Until that time we can only speak of the algorithm and circuit around the oracle. We can say that some algorithm is O(n³), e.g., but that doesn't mean it will be when we wire up the oracle. It is O(n³) relative to the oracle.

• Absolute Time Complexity - This is the time complexity of a {circuit + algorithm} with knowledge of the oracle's design. We will know and incorporate the oracle's complexity into the final {circuit + algorithm}'s complexity.
Chapter 18
Simon's Algorithm for Period-Finding
18.1
Simon's algorithm represents a turning point in both the history of quantum computer science and in every student's study of the field. It is the first algorithm that we study which represents a substantial advance in relativized time complexity vs. classical computing. The problem is exponential classically, even if we allow an approximate rather than a deterministic solution. In contrast, the quantum algorithm is very fast, O(log⁵ N), where N is the size of the domain of f. In fact, you will see a specific algorithm, more detailed than those usually covered, which has complexity O(log³ N). We'll prove all this after studying the quantum circuit.

[Before going any further, make sure you didn't overlook the word relativized in the first paragraph. As you may recall from a past lecture, it means that we don't have knowledge, in general, about the performance of the circuit's oracle U_f. Simon's quantum algorithm gives us a lower bound for the relative complexity, but that would have to be revised upward if we ended up with an oracle that has a larger "big-oh." This is not the fault of QC; the kinds of periodic functions covered in Simon's treatment are arbitrarily complex. The function that we test may have a nice O(n) or O(n²) implementation, in which case we'll be in great shape. If not, it will become our bottleneck. That said, even with a polynomial-fast oracle, there is no classical algorithm that can achieve a polynomial time solution, so the quantum solution is still a significant result.]
While admittedly a toy problem, in the sense that it's not particularly useful, it contains the key ingredients of the relevant algorithms that follow, most notably Shor's quantum factoring and encryption-breaking. Even better, Simon's algorithm is free of the substantial mathematics required by an algorithm like Shor's and thus embodies the essence of quantum computing (in a noise-free environment) without distracting complexities.
In the problem at hand, we are given a function and asked to find its period. However, the function is not a typical mapping of real or complex numbers, and the period
is not the thing that you studied in your calculus or trigonometry classes. Therefore,
a short review of periodicity, and its different meanings in distinct environments, is
in order.
18.2 Periodicity

18.2.1 Ordinary Periodicity

A function on the reals, the complex numbers, or the integers,

    f : { R, C, Z } → S      (S could be R, Z₂, etc.),

is called periodic if there is a unique smallest a > 0 (called the period) with

    f(x + a)  =  f(x)    for all x in the domain.
18.2.2 (Z₂)^n Periodicity

Let's change things up a bit. We define a different sort of periodicity which respects not ordinary addition, but mod-2 addition.

A function defined on (Z₂)^n,

    f : (Z₂)^n → S      (S is typically Z or (Z₂)^m, m ≤ n − 1),

is called (Z₂)^n periodic with period a ≠ 0 if

    f(x) = f(y)   ⟺   y = x ⊕ a.
18.2.3 Z_{2^n} Periodicity

The same notion in encoded integer form: f defined on Z_{2^n} is Z_{2^n} periodic with period a if

    f(x) = f(y)   ⟺   y = x ⊕ a.

Examples of (Z₂)^n Periodicity
We implied that the range of f could be practically any set, S, but for the next few examples, let's consider functions that map Z_{2^n} into itself,

    f : Z_{2^n} → Z_{2^n}.

For n = 5, an example is the "2nd-bit collapse-to-1" function,

    f(x)  =  (x₄, x₃, 1, x₁, x₀)^t  =  x₄ x₃ 1 x₁ x₀.

If n = 5, k = 4, and the constant was 0, we'd have a 4th-bit collapse-to-0,

    g(x)  =  (0, x₃, x₂, x₁, x₀)^t  =  0 x₃ x₂ x₁ x₀.

Let's show why the first is (Z₂)⁵ periodic with period 4, and this will tell us why all the others of its ilk are periodic (with possibly a different period), as well.
Notation - Denote the bit-flip operation on the kth bit, x_k, by

    ¬x_k  ≡  { 0, if x_k = 1,
               1, if x_k = 0.

[Exercise. Demonstrate that you can effect a bit-flip on the kth bit of x using x ⊕ 2^k.]
I claim that the 2nd-bit collapse-to-1 function, f, is (Z₂)^n periodic with period a = 4. We must show

    f(x) = f(y)   ⟺   y = x ⊕ 4.
(⟸) Assume y = x ⊕ 4. Then y agrees with x except in bit 2, which is flipped:

    x = (x₄, x₃, x₂, x₁, x₀)^t    and    y = (x₄, x₃, ¬x₂, x₁, x₀)^t.

Then,

    f(x)  =  (x₄, x₃, 1, x₁, x₀)^t,

but, also,

    f(y)  =  (x₄, x₃, 1, x₁, x₀)^t,

so f(x) = f(y).
(⟹) Assume f(x) = f(y) for x ≠ y. Since f does not modify any bits other than bit 2, we conclude y_k = x_k, except for (possibly) bit k = 2. But since x ≠ y, some bit must be different between them, so it has to be bit 2. That is,

    y  =  (y₄, y₃, y₂, y₁, y₀)^t  =  (x₄, x₃, ¬x₂, x₁, x₀)^t  =  x ⊕ 4,

showing that y = x ⊕ 4.

QED
Of course, we could have collapsed the 2nd bit to 0, or used any other bit, and gotten the same result. A single-bit collapse is (Z₂)^n periodic with period 2^k, where k is the bit being collapsed.
Note - With single-bit collapses, there are always exactly two numbers from the domain Z_{2^n} that map into the same range number in Z_{2^n}. With the example, f, just discussed,

    00100, 00000   ↦   00100,
    10101, 10001   ↦   10101,
    11111, 11011   ↦   11111,

and so on.
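In code, both the periodicity and the two-to-one property of a single-bit collapse are one-liners. A minimal Python sketch (mine, for concreteness):

    from collections import Counter

    def f(x):
        """The 2nd-bit collapse-to-1 function on 5-bit inputs."""
        return x | 0b00100

    a = 0b00100                      # the claimed period, 4
    assert all(f(x) == f(x ^ a) for x in range(32))
    # ... and f is exactly two-to-one, as the Note observes:
    assert all(c == 2 for c in Counter(map(f, range(32))).values())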
Collapsing More than One Bit Not Periodic

A collapse of two or more bits can never be periodic, as we now show. For illustration, let's stick with n = 5, but consider a simultaneous collapse-to-1 of both the 2nd and 0th bit,

    f(x)  =  (x₄, x₃, 1, x₁, 1)^t  =  x₄ x₃ 1 x₁ 1.

In this situation, for any x in the domain of f, you can find three others that map to the same f(x). For example,

    00000 = 0
    00001 = 1
    00100 = 4     ↦    00101 = 5.
    00101 = 5
If there were some period, a, for this function, it would have to work for the first two listed above, meaning f(0) = f(1). For that to be true, we'd need 1 = 0 ⊕ a, which forces a = 1. But a = 1 won't work when you consider the first and third x, above: f(0) = f(4), yet 4 ≠ 0 ⊕ 1.

As you can see, this comes about because there are too many x's that get mapped to the same f(x).

Let's summarize: bit collapsing gives us a periodic function only if we collapse exactly one bit (any bit) to a constant (either 0 or 1). However, a function which is a multi-bit-collapser (preserving the rest) can never be periodic.

So, what are the other periodic functions in the Z_{2^n} milieu?
18.2.4

Picture two initially empty bins, R and Q, alongside the source bin S = Z_{2^n}. We're going to be moving numbers from the source, S, into R and Q according to this plan:

1. Pick any x ∈ S = Z_{2^n}. Call it r₀ (0, because it's our first pick).

2. Generate r₀'s partner q₀ = r₀ ⊕ a (partner, because periodicity guarantees that f maps both r₀ and q₀ to the same image value and also that there is no other x ∈ Z_{2^n} which maps to that value).

3. Toss r₀ into bin R and its partner, q₀ (which you may have to dig around in S to find), into bin Q.

4. We have reduced the population of the source bin, S, by two: S = S − {r₀, q₀}. Keep going . . . .

5. Pick a new x from what's left of S. Call it r₁ (1, because it's our second pick).

6. Generate r₁'s partner q₁ = r₁ ⊕ a. (Again, we know that f maps both r₁ and q₁ to the same image value, and there is no other x ∈ Z_{2^n} which maps to that value.)

7. Toss r₁ into bin R and its partner, q₁, into bin Q. S is further reduced by two and is now S = S − {r₀, q₀, r₁, q₁}.

8. Repeat this activity, each pass moving one value from bin S into bin R and its partner into bin Q, until we have none of the original domain numbers left in S.

9. When we're done, half of the x's from dom(f) will have ended up in R and the other half in Q. (However, since we chose the first of each pair at random, this was not a unique division of dom(f), but that's not important.)
Here is the picture when we're done with the above process:

    Z_{2^n}  =  R ∪ Q  =  { . . . , r_k, . . . } ∪ { . . . , q_k, . . . },

where R and Q are of equal size, and

    f(r_k)  =  f(q_k).
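The binning procedure is easy to mechanize. Here is a Python sketch (the helper partition_domain is hypothetical, illustrating the plan above):

    def partition_domain(f, a, n):
        """Split Z_2^n into bins R and Q, with q = r XOR a and f(r) == f(q)."""
        S = set(range(2 ** n))
        R, Q = [], []
        while S:
            r = S.pop()       # plays the role of the random pick
            q = r ^ a         # r's unique partner
            S.remove(q)
            assert f(r) == f(q)
            R.append(r)
            Q.append(q)
        return R, Q

    # Using the 2nd-bit collapse-to-1 example, with a = 4:
    R, Q = partition_domain(lambda x: x | 0b00100, 0b00100, 5)
    assert len(R) == len(Q) == 16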
18.3 Simon's Problem

We've developed all the vocabulary and intuition necessary to understand the statement of Simon's problem, and the quantum fun can now begin.

Statement of Simon's Problem

    Let f : (Z₂)^n → (Z₂)^n be (Z₂)^n periodic. Find a.

(I made a bold because I phrased the problem in terms of the vector space (Z₂)^n; had I used the integer notation of Z_{2^n}, I would have said "find a," without the bold.)

It's not really necessary that ran(f) be the same (Z₂)^n as that of its domain. In fact, the range is quite irrelevant. However, the assumption facilitates the learning process, and once we have our algorithm we can lose it.
Let's summarize some of the consequences of (Z₂)^n periodicity.

1. f(y) = f(x)   ⟺   y = x ⊕ a
2. f(y) = f(x)   ⟺   a = x ⊕ y
3. f is two-to-one on (Z₂)^n

We can also state this in the equivalent Z_{2^n} language.

1. f(y) = f(x)   ⟺   y = x ⊕ a
2. f(y) = f(x)   ⟺   a = x ⊕ y
3. f is two-to-one on Z_{2^n}

Pay particular attention to bullet 2. If we get a single pair that map to the same f(x) = f(y), we will have our a. This will be used in the classical (although not quantum) analysis.
18.4

18.4.1 The Circuit

A bird's-eye view of the total circuit will give us an idea of what's ahead.

    [Circuit: |0⟩^n → H^{⊗n} → U_f → H^{⊗n};  |0⟩^n → U_f]

You see a familiar pattern. There are two multi-dimensional registers: the upper (which I will call the A register, the data register, or even the top line, at my whim), and the lower (which I will call the B register, target register, or bottom line, correspondingly).
This is almost identical to the circuits of our recent algorithms, with the following changes:

• The target channel is hatched, reflecting that it has n component lines rather than one.

• We are sending a |0⟩^n into the bottom instead of a |1⟩ (or even |1⟩^n).

• We seem to be doing a measurement of both output registers rather than ignoring the target.

In fact, that third bullet concerning measuring the bottom register will turn out to be conceptual rather than actual. We could measure it, and it would cause the desired collapse of the upper register; however, our analysis will reveal that we really don't have to. Nevertheless, we will keep it in the circuit to facilitate our understanding, label it as conceptual, and then abandon the measurement in the end when we are certain it has no practical value.
Here is the picture I'll be using for the remainder of the lesson.

    [Circuit: |0⟩^n → H^{⊗n} → U_f → H^{⊗n} → measurement (actual);  |0⟩^n → U_f → measurement (conceptual)]

[Note: I am suppressing the hatched quantum wires to produce a cleaner circuit. Since every channel has n lines built into it and we clearly see the kets and operators labeled with the exponent n, the hatched wires no longer serve a purpose.]
18.4.2 The Plan

We will prepare a couple of CBS kets for input to our circuit; this time both will be |0⟩^n. The data channel (top) will first encounter a multi-dimensional Hadamard gate to create a familiar superposition at the top. This sets up quantum parallelism, which we found to be pivotal in past algorithms. The target channel's |0⟩^n will be sent directly into the oracle without pre-processing. This is the first time we will have started with a |0⟩ rather than a |1⟩ in this channel, a hint that we're not going to get a phase kick-back today. Instead, the generalized Born rule (QM Trait #15) will turn out to be our best friend.

[Preview: When we expect to achieve our goals by applying the Born rule to a superposition, the oracle's target register should normally be fed a |0⟩^n rather than a |1⟩^n.]

After the oracle, both registers will become entangled.

At that point, we conceptually test the B register output. This causes a collapse of both the top and bottom lines' states (from the Born rule), enabling us to know something about the A register. We'll analyze the A register's output which resulted from this conceptual B register measurement and discover that it has very special properties. Post-processing the A register output by a second, re-organizing Hadamard gate will seal the deal.

In the end, we may as well have measured the A register to begin with, since quantum entanglement authorizes the collapse using either line, and the A register is what we really care about.
Strategy

Our strategy will be to load the dice by creating a quantum circuit that spits out measured states which are orthogonal to the period a, i.e., z ⊙ a = 0, mod-2. (This is not a true orthogonality, as we'll see, but everyone uses the term and so shall we. We'll discover that states which are "orthogonal" to a can often include the vector a, itself, another reason for the extra care with which we analyze our resulting orthogonal states.)

That sounds paradoxical; after all, we are looking for a, so why search for states orthogonal to it? Sometimes in quantum computing, it's easier to back into the desired solution by sneaking up on it indirectly, and this turns out to be the case in Simon's problem. You can try to think of ways to get a more directly, and if you find an approach that works with better computational complexity, you may have discovered a new quantum algorithm. Let us know.
Because the states orthogonal to a are so much more likely than those that are not, we will quickly get a linearly independent set of n − 1 equations with n − 1 unknowns, namely, a ⊙ w_k = 0, for k = 0, . . . , n − 2. We then augment this system instantly (using a direct classical technique) with an nth linearly independent equation, at which point we can solve the full, non-degenerate n × n system for a using fast and well-known techniques.
18.5
We need to break the circuit into sections to analyze it. Here is the segmentation:

    [Circuit, segmented: |0⟩^n → H^{⊗n} → U_f → H^{⊗n} → measurement (actual);  |0⟩^n → U_f → measurement (conceptual)]
18.6
We now travel carefully through the circuit and analyze each component and its consequence.

18.6.1

This stage of the circuit is identical in both logic and intent with the Deutsch-Jozsa and Bernstein-Vazirani circuits; it sets up quantum parallelism by producing a perfectly mixed superposition state, enabling the oracle to act on f(x) for all possible x, simultaneously.

    [Circuit, first segment highlighted: the initial H^{⊗n} on the data register]

Hadamard, H^{⊗n}, in H(n)
It never hurts to review the general definition of a gate like H^{⊗n}. For any CBS |x⟩^n, the 2^n-dimensional Hadamard gate is expressed in encoded form using the formula

    H^{⊗n} |x⟩^n  =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{x⊙y} |y⟩^n,

where ⊙ is the mod-2 dot product. Today, I'll be using the alternate vector notation,

    H^{⊗n} |x⟩^n  =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{x·y} |y⟩^n,

where the dot product between vector x and vector y is also assumed to be the mod-2 dot product. In the circuit, we have

    |x⟩^n   →  H^{⊗n}  →   (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{x·y} |y⟩^n,

and for our actual input, x = 0, every sign (−1)^{0·y} = 1, so

    |0⟩^n   →  H^{⊗n}  →   (1/√2)^n Σ_{y=0}^{2^n−1} |y⟩^n,

or, returning to the usual computational basis notation, |x⟩^n, for the summation index,

    |0⟩^n   →  H^{⊗n}  →   (1/√2)^n Σ_{x=0}^{2^n−1} |x⟩^n.
You'll recognize the output state of this Hadamard operator as the nth order x-basis CBS ket, |0̃⟩^n. It reminds us that Hadamard gates not only provide quantum parallelism but also double as a z ↔ x basis-conversion operator.
18.6.2

    [Circuit, second segment highlighted: the gate U_f, braced as the Quantum Oracle]
Due to the increased B channel width, we had better review the precise definition of the higher dimensional oracle. It's based on CBS kets going in,

    |x⟩^n |y⟩^n   →  U_f  →   |x⟩^n |y ⊕ f(x)⟩^n,

and from there we extend to general input states, linearly. We actually constructed the matrix of this oracle and proved it to be unitary in our lesson on quantum oracles. Today, we need only consider the case of y = 0:

    |x⟩^n |0⟩^n   →  U_f  →   |x⟩^n |f(x)⟩^n.
In Words

We are

1. taking the B register CBS input |y⟩^n, which is |0⟩^n in this case, and extracting the integer representation of y, namely 0,

2. applying f to the integer x (of the A register CBS |x⟩^n) to form f(x),

3. noting that both 0 and f(x) are ∈ Z_{2^n},

4. forming the mod-2 sum of these two integers, 0 ⊕ f(x), which, of course, is f(x), and

5. using the result to define the output of the oracle's B register, |f(x)⟩^n.

6. Finally, we recognize the output to be a separable state of the two output registers, |x⟩^n |f(x)⟩^n.

Just to remove any lingering doubts, assume n = 5, x = 18, and f(18) = 7. Then the above process yields

1. |0⟩^5  ⇝  0,
2. |18⟩^5  ⇝  18,  with 18 ↦ f(18) = 7,
3. 0 ⊕ 7 = 7, so the B register output is |7⟩^5, and the full output state is |18⟩^5 |7⟩^5.
18.6.3

Next, we invoke linearity for the maximally mixed superposition state |0̃⟩^n going into the oracle's top register.

Reminder. I've said it before, but it's so important in a first course such as this that I'll repeat myself. The bottom register's output is not f applied to the superposition state. f only has meaning over its domain Z_{2^n}, which corresponds to the finite set of z-basis CBS kets { |x⟩ }. It has no meaning when applied to sums, especially weighted sums (by real or complex amplitudes) of these preferred CBSs.

By linearity, U_f distributes over all the terms in the maximally mixed input |0̃⟩^n, at which point we apply the result of the last subsection, namely

    U_f ( |x⟩^n |0⟩^n )  =  |x⟩^n |f(x)⟩^n,

to the individual summands to find that

    (1/√2)^n Σ_{x=0}^{2^n−1} |x⟩^n |0⟩^n   →  U_f  →   (1/√2)^n Σ_{x=0}^{2^n−1} |x⟩^n |f(x)⟩^n.

But this is

    (1/√2)^n ( |0⟩^n_A |f(0)⟩^n_B  +  |1⟩^n_A |f(1)⟩^n_B  +  ⋯  +  |2^n−1⟩^n_A |f(2^n−1)⟩^n_B ).
Each of the 2^n orthogonal terms in the superposition has amplitude (1/√2)^n, so the probability that a measurement by A will collapse the superposition to any one of them is |(1/√2)^n|² = 1/2^n.

[Exercise. Why are the terms orthogonal? Hint: inner product of tensors.]

[Exercise. Look at the sum in our specific situation: Σ_{x=0}^{2^n−1} |x⟩^n |f(x)⟩^n. QM
18.6.4
It's time to use the fact that f is Z_{2^n} periodic with (unknown) period a to help us rewrite the output of the oracle's B register prior to the conceptual measurement. Z_{2^n} periodicity tells us that the domain can be partitioned (in more than one way) into two disjoint sets, R and Q,

    Z_{2^n}  =  R ∪ Q  =  { . . . , x, . . . } ∪ { . . . , x ⊕ a, . . . }.

Cosets

Q can be written as R ⊕ a; that is, we translate the entire subset R by adding a number to every one of its members, resulting in a new subset, Q. Mathematicians say that Q = R ⊕ a is a coset of the set R. Notice that R is a coset of itself since R = R ⊕ 0. In this case, the two cosets, R and Q = R ⊕ a, partition the domain of f into two equal and distinct sets.
18.6.5

Our original expression for the oracle's complete entangled output was

    (1/√2)^n Σ_{x=0}^{2^n−1} |x⟩^n |f(x)⟩^n,

but our new partition of the domain will give us a propitious way to rewrite this. Each element x ∈ R has a unique partner x ⊕ a ∈ Q, and f maps both to the same value, f(x). Using this fact, we only need to sum the B register output over R (half as big as Z_{2^n}) and include both x and x ⊕ a in each term,

    (1/√2)^n Σ_{x=0}^{2^n−1} |x⟩^n |f(x)⟩^n
        =  (1/√2)^{n−1} Σ_{x∈R} [ ( |x⟩^n + |x ⊕ a⟩^n ) / √2 ] |f(x)⟩^n.

The upshot is that we can apply the Born rule in reverse; we'll be measuring the B register and forcing the A register to collapse into one of its binomial superpositions. Let's do it. But first, we should give recognition to a reusable design policy.
The Lesson: |0⟩^n into the Oracle's B Register

This all worked because we chose to send the CBS |0⟩^n into the oracle's B register. Any other CBS into that channel would not have created the nice terms |x⟩^n |f(x)⟩^n of the oracle's entangled output. After factoring out terms that had common |f(x)⟩^n components in the B register, we were in a position to collapse along the B-basis and pick out the attached sum in the A register.

Remember this. It's a classic trick that can be tried when we want to select a small subset of A register terms from the large, perfectly mixed superposition in that register. It will typically lead to a probabilistic outcome that won't necessarily settle the algorithm in a single evaluation of the oracle, but we expect it to give a valuable result that can be combined with a few more evaluations (loop passes) of the oracle. This is the lesson we learn today that will apply next time, when we study Shor's period-finding algorithm.

In contrast, when we were looking for a deterministic solution in algorithms like Deutsch-Jozsa and Bernstein-Vazirani, we fed a |1⟩ into the B register and used the phase kick-back to give us an answer in a single evaluation.

    |1⟩ into oracle's B register     ⇝    phase kick-back,
    |0⟩^n into oracle's B register   ⇝    Born rule.
18.7
18.7.1
Although we won't really need to do so, let's imagine what happens if we were to apply the generalized Born rule now, using the rearranged sum (that turned the B channel into the CBS channel).

    [Circuit, with the conceptual B register measurement highlighted]
Each B register measurement of f(x) will be attached to not one, but two, input A register states. Thus, measuring B first, while collapsing A, actually produces merely a superposition in that register, not a single, unique x from the domain. It narrows things down considerably, but not completely,

    (1/√2)^{n−1} Σ_{x∈R} [ ( |x⟩^n + |x ⊕ a⟩^n ) / √2 ] |f(x)⟩^n
        ⇝    [ ( |x₀⟩^n + |x₀ ⊕ a⟩^n ) / √2 ]  |f(x₀)⟩^n.

Here, ⇝ means "collapses to."
Well, that's good, great and wonderful, but if, after measuring the post-oracle B register, we were to measure line A, it would collapse to one of two states, |x₀⟩ or |x₀ ⊕ a⟩, but we wouldn't know which, nor would we know its unsuccessful companion (the one to which the state didn't collapse). There seems to be no usable information here. As a result, we don't measure A ... yet.

Let's name the collapsed but unmeasured superposition state in the A register |ψ_{x₀}⟩^n, since it is determined by the measurement f(x₀) of the collapsed B register,

    |ψ_{x₀}⟩^n  ≡  ( |x₀⟩^n + |x₀ ⊕ a⟩^n ) / √2.
Guiding Principle: Narrow the Field. We stand back and remember this
stage of the analysis for future use. Although a conceptual measurement of B does not
produce an individual CBS ket in register A, it does result in a significant narrowing
of the field. This is how the big remaining quantum algorithms in this course will
work.
18.7.2

In an attempt to coax the superposition ket |ψ_{x₀}⟩^n ∈ H(n) to cough up useful information, we take H^{⊗n} |ψ_{x₀}⟩^n. This requires that we place an H^{⊗n} gate at the output of the oracle's A register:

    [Circuit, with the post-oracle H^{⊗n} on the A register highlighted]

[Apology. I can't offer a simple reason why anyone should be able to intuit a Hadamard as the post-oracle operator we need. Unlike Deutsch-Jozsa, today we are not measuring along the x-basis, our motivation for the final H^{⊗n} back then. However, there is a small technical theorem about a Hadamard applied to a binomial superposition of CBSs of the form ( |x⟩^n + |y⟩^n ) / √2 which is relevant, and perhaps this inspired Simon and his compadres.]
Continuing under the assumption that we measure an f(x₀) at the B register output, thus collapsing both registers, we go on to work with the resulting superposition |ψ_{x₀}⟩ in the A register. Let's track its progress as we apply the Hadamard gate to it. As with all quantum gates, H^{⊗n} is linear, so it moves past the sum, and we get

    H^{⊗n} |ψ_{x₀}⟩^n  =  ( H^{⊗n}|x₀⟩^n + H^{⊗n}|x₀ ⊕ a⟩^n ) / √2,

with

    H^{⊗n}|x₀⟩^n       =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{y·x₀} |y⟩^n    and
    H^{⊗n}|x₀ ⊕ a⟩^n   =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{y·(x₀ ⊕ a)} |y⟩^n
                        =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{y·x₀} (−1)^{y·a} |y⟩^n,

so

    H^{⊗n}|x₀⟩^n + H^{⊗n}|x₀ ⊕ a⟩^n
        =  (1/√2)^n Σ_{y=0}^{2^n−1} (−1)^{y·x₀} [ 1 + (−1)^{y·a} ] |y⟩^n.

18.7.3

Dividing by the √2 we set aside,

    H^{⊗n} |ψ_{x₀}⟩^n  =  (1/√2)^{n+1} Σ_{y=0}^{2^n−1} (−1)^{y·x₀} [ 1 + (−1)^{y·a} ] |y⟩^n.

Now,

    1 + (−1)^{y·a}  =  { 0, if y · a = 1 (mod 2),
                         2, if y · a = 0 (mod 2),

so we can omit all those 0 terms which correspond to y · a = 1 (mod 2), leaving

    H^{⊗n} [ ( |x₀⟩^n + |x₀ ⊕ a⟩^n ) / √2 ]
        =  (1/√2)^{n−1} Σ_{y·a = 0 (mod 2)} (−1)^{y·x₀} |y⟩^n.
[Exercise. Show that, for any fixed number a ∈ Z_{2^n} (or its equivalent vector a ∈ (Z₂)^n), the set of all x with x ⊙ a = 0 (or x with x · a = 0) is exactly half the numbers (vectors) in the set.]
Mod-2 Orthogonality vs. Hilbert Space Orthogonality

Avoid confusion. Don't forget that these dot products (like y · a) are mod-2 dot products of vectors in (Z₂)^n. This has nothing to do with the Hilbert space inner product ⁿ⟨y | a⟩ⁿ, an operation on quantum states.
For example, with n = 4, if

    a  =  5  =  (0, 1, 0, 1)^t  =  (a₃, a₂, a₁, a₀)^t,

then

    { y | y · a = 0 (mod 2) }
        =  { (y₃, 0, y₁, 0)^t }  ∪  { (y₃, 1, y₁, 1)^t },    y₁, y₃ ∈ {0, 1},

which consists of eight of the original 2⁴ = sixteen (Z₂)⁴ vectors associated with the CBSs.

Therefore, there are 2^{n−1} = 2³ = 8 terms in the sum, exactly normalized by 1/√(2^{n−1}):

    H^{⊗n} [ ( |x₀⟩^n + |x₀ ⊕ a⟩^n ) / √2 ]
        =  (1/√2)^{n−1} Σ_{y·a = 0 (mod 2)} (−1)^{y·x₀} |y⟩^n.
All of the vectors in the final A register superposition are orthogonal to a, so we can now safely measure that mixed state and get some great information:

    [Circuit: A register measurement  ⇝  |y₀⟩, y₀ ⊥ a]

We don't know which y₀ among the 2^{n−1} y's we will measure; that depends on the whimsy of the collapse, and they're all equally likely. However, we just showed that they're all orthogonal to a, including y₀.
Warning(s): y = 0 Possible

This is one possible snag. We might get a 0 when we test the A register. It is a possible outcome, since 0 · a = 0 is mod-2 orthogonal to everything. The probability is low (1/2^{n−1}), and you can test for it and throw it back if that happens. We'll account for this in our probabilistic analysis, further down. While we're at it, remember that a, itself, might get measured, but that's okay. We won't know it's a, and the fact that it might be won't change a thing that follows.
18.7.4

We haven't yet spoken of the actual A register measurement without first measuring the B register. Now that you've seen the case with the simplifying B register measurement, you should be able to follow this full development that omits that step.

If we don't measure B first, then we can't say we have collapsed into any particular f(x₀) state. So the oracle's output must continue to carry the full entangled summation

    (1/√2)^{n−1} Σ_{x∈R} [ ( |x⟩^n + |x ⊕ a⟩^n ) / √2 ] |f(x)⟩^n,

and we must apply the final A register Hadamard to all of our Σ_{x∈R} expressions.
    ( H^{⊗n} ⊗ 1^{⊗n} ) [ (1/√2)^{n−1} Σ_{x∈R} ( ( |x⟩^n + |x ⊕ a⟩^n ) / √2 ) |f(x)⟩^n ]

        =  (1/√2)^{n−1} Σ_{x∈R} [ H^{⊗n} ( ( |x⟩^n + |x ⊕ a⟩^n ) / √2 ) ] |f(x)⟩^n

        =  (1/√2)^{n−1} Σ_{x∈R} [ ( H^{⊗n}|x⟩^n + H^{⊗n}|x ⊕ a⟩^n ) / √2 ] |f(x)⟩^n.
We can now cite the still-valid result that led to expressing the Hadamard of each binomial fraction as a sum over the y orthogonal to a:

        =  (1/√2)^{n−1} Σ_{x∈R} [ (1/√2)^{n−1} Σ_{y·a = 0 (mod 2)} (−1)^{y·x} |y⟩^n ] |f(x)⟩^n

        =  (1/2)^{n−1} Σ_{y·a = 0 (mod 2)}  Σ_{x∈R} (−1)^{y·x} |y⟩^n |f(x)⟩^n.
While our double sum has more overall terms than before, they are all confined to those y which are (mod-2) orthogonal to a. In fact, we don't have to apply the Born rule this time, because all that we're claiming is an A register collapse to one of the 2^{n−1} CBS kets |y⟩^n, which we get compliments of quantum mechanics: third postulate + post-measurement collapse.

Therefore, when we measure only the A register of this larger superposition, the collapsed state is still some y orthogonal to a.
    [Circuit: A register measurement  ⇝  |y⟩, y ⊥ a]
A Small Change in Notation

Because I often use the variable y for general CBS states, |y⟩, or summation variables, Σ_y, I'm going to switch to the variable z for the measured orthogonal output state, as in |z⟩, z ⊥ a. We'll then have a mental cue for the rest of the lecture, where z will always be a mod-2 vector orthogonal to a. With this last tweak, our final circuit result is

    [Circuit: A register measurement  ⇝  |z⟩, z ⊥ a]
18.8
In a single application of the circuit, we have found our first z, with z ⊥ a. We would like to find n − 1 such vectors, all linearly independent as a set, so we anticipate sampling the circuit several more times. How long will it take us to be relatively certain we have n − 1 independent vectors? We explore this question next.
I will call the set of all vectors orthogonal to a either a^⊥ (if using vector notation) or a^⊥ (if using Z_{2^n} notation). It is pronounced "a-perp."

[Exercise. Working with the vector space (Z₂)^n, show that a^⊥ is a vector subspace. (For our purposes, it's enough to show that it is closed under the ⊕ operation.)]

[Exercise. Show that those vectors which are not orthogonal to a do not form a subspace.]
18.9 Simon's Algorithm

18.9.1

[Notational Alert. By now, we're all fluent in translating between the vectors (a, w_k) of (Z₂)^n and the encoded decimals (a, w_k) of Z_{2^n} notation, so be ready to see me switch between the two depending on the one I think will facilitate your comprehension. I'll usually use encoded decimals and give you notational alerts when using vectors.]
What we showed is that we can find a vector orthogonal to a in a single application of our circuit. We need, however, to find not one, but n − 1, linearly-independent vectors, z, that are orthogonal to a. Does repeating the process n − 1 times do the trick? Doing so would certainly manufacture

    { z₀, z₁, z₂, . . . , z_{n−2} }    with    z_k ⊥ a    for k = 0, . . . , n − 2;

however, that's not quite good enough. Some of the z_k might be a linear combination of the others or even be repeats. For this to work, we'd need each one to not only be orthogonal to a, but linearly independent of all the others as well.
Pause for an Example

If n = 4, and

    a  =  5  =  (0, 1, 0, 1)^t,

suppose that the circuit produced the three vectors

    (0, 1, 0, 1)^t,    (0, 0, 1, 0)^t    and    (0, 1, 1, 1)^t

after three circuit invocations. While all three are orthogonal to a, they do not form a linearly-independent set (the third is the ⊕-sum of the first two). In this case, 3 = n − 1 was not adequate. Furthermore, a fourth or fifth might not even be enough (if, say, we got some repeats of these three).
Therefore, we must perform this process m times, m ≥ n − 1, until we have n − 1 linearly-independent vectors. (We don't have to say that they must be orthogonal to a, since the circuit construction already guarantees this.) How large must m be, and can we ever be sure we will succeed?
18.9.2 The Algorithm

We provide a general algorithm in this section. In the sections that follow, we'll see that we are guaranteed to succeed in polynomial time O(n⁴) with probability arbitrarily close to 1. Finally, I'll tweak the algorithm just a bit and arrive at an implementation that achieves O(n³) performance.

Whether it's O(n⁴), O(n³) or any similar big-O complexity, it will be a significant relative speed-up over classical computing, which has exponential growth, even if we accept a non-deterministic classical solution. (We'll prove that in the final section of this lesson.) The problem is hard, classically, but easy quantum mechanically.
Here is Simon's algorithm for Z_{2^n}-period finding. To be sure that it gives a polynomial time solution, we must eventually verify that

1. we only have to run the circuit a polynomial number of times (in n) to get n − 1 linearly independent z's which are orthogonal to a with arbitrarily good confidence, and

2. the various classical tasks, like checking that a set of vectors is linearly independent and solving a series of n equations, are all of polynomial complexity in n, individually.

We'll do all that in the following sections, but right now let's see the algorithm:
• Select an integer, T, which reflects an acceptable failure probability of 1/2^T.

• Initialize a set W to the empty set. W will eventually contain a growing number of (Z₂)^n vectors.

• Repeat the following loop at most n + T times (a code sketch of this loop follows the list).

    1. Apply Simon's circuit.
    2. Measure the output of the final H^{⊗n} to get z.
    3. Use a classical algorithm to determine whether or not z is linearly dependent on the vectors in W.
       If it is independent, name it w_j, where j is the number of elements already stored in W, and add w_j to W.
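Here is that sketch in Python (my illustration; rank_mod2 is a hypothetical helper, and the circuit-sampling stub sample_z stands in for a real quantum device, e.g. the classical simulator defined earlier via lambda: sample_z(a, n)):

    import numpy as np

    def rank_mod2(M):
        """Rank of a 0/1 matrix over Z_2, by Gaussian elimination."""
        M = np.array(M, dtype=np.uint8) % 2
        rank = 0
        for c in range(M.shape[1]):
            pivots = [r for r in range(rank, M.shape[0]) if M[r, c]]
            if not pivots:
                continue
            M[[rank, pivots[0]]] = M[[pivots[0], rank]]
            for r in range(M.shape[0]):
                if r != rank and M[r, c]:
                    M[r] ^= M[rank]
            rank += 1
        return rank

    def simons_loop(sample_z, n, T):
        """Collect n-1 independent z's in at most n+T runs; None on failure."""
        W = []
        for _ in range(n + T):
            z = [int(b) for b in format(sample_z(), "0%db" % n)]
            if rank_mod2(W + [z]) == len(W) + 1:   # z independent of W?
                W.append(z)
            if len(W) == n - 1:
                return W
        return None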
18.9.3 Strange Behavior

This section is not required but may add insight and concreteness to your understanding of the concepts.

[Notational Alert. I'll use vector notation (a, w_k) rather than encoded decimal (a, w_k) to underscore the otherwise subtle concepts.]

We know that (Z₂)^n is not your grandparents' vector space; it has a dot-product that is not positive definite (e.g., a · a might = 0), which leads to screwy things. Nothing that we can't handle, but we have to be careful.
We'll be constructing a basis for (Z₂)^n in which the first n − 1 vectors, w₀, . . . , w_{n−2}, are orthogonal to a, while the nth vector, w_{n−1}, is not. In general, this basis will not be and cannot be an orthonormal basis. It is sometimes, but not usually. To see cases where it cannot be, consider a period a that is orthogonal to itself. In such a case, since a ∈ a^⊥, we might have a = w_k for some k < n − 1, say a = w₀. Therefore, we end up with

    w₀ · w₀       =  a · a        =  0    and
    w₀ · w_{n−1}  =  a · w_{n−1}  =  1,

so the usual dot-with-basis trick for reading off coordinates fails: for an expansion v = Σ_{k=0}^{n−1} c_k w_k, in general

    c_k  ≠  v · w_k.

This is not a big deal, since we will not need the trick, but it's a good mental exercise to acknowledge naturally occurring vector spaces that give rise to non-positive definite pairings and be careful not to use those pairings as if they possessed our usual properties.
Example #1

Take n = 4 and

    a  =  (0, 1, 0, 1)^t.

We may end up with a basis that uses the 3-dimensional subspace, a^⊥, generated by the three vectors

    (1, 0, 0, 0)^t,   (0, 0, 1, 0)^t,   (0, 1, 0, 1)^t    ⟶   a^⊥,

augmented by the nth vector not orthogonal to a,

    w_{n−1}  =  w₃  =  (0, 1, 0, 0)^t.

These four vectors form a possible outcome of our algorithm when applied to (Z₂)⁴ with a period of a = (0, 1, 0, 1)^t, and you can confirm the odd claims I made above.
Example #2

If you'd like more practice with this, try the self-orthogonal

    a  =  (1, 1, 1, 1)^t,

with three vectors orthogonal to a,

    (1, 0, 0, 1)^t,   (1, 1, 0, 0)^t,   (1, 1, 1, 1)^t    ⟶   a^⊥,

augmented by

    w_{n−1}  =  w₃  =  (1, 0, 0, 0)^t.

This isn't an orthonormal basis, nor will the dot-with-basis trick work.

[Exercise. Create some of your own examples in which you do, and do not, get an orthonormal basis that results from the algorithm's desired outcome of n − 1 basis vectors being orthogonal to a and the nth, not.]
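You can check Example #1's claims mechanically. A tiny Python sketch (mine, not part of the text):

    import numpy as np

    def dot2(u, v):
        """Mod-2 dot product on (Z_2)^n bit-vectors."""
        return int(np.dot(u, v)) % 2

    a = np.array([0, 1, 0, 1])
    w = [np.array([1, 0, 0, 0]),     # w0, in a-perp
         np.array([0, 0, 1, 0]),     # w1, in a-perp
         np.array([0, 1, 0, 1]),     # w2 = a itself, yet a . a == 0
         np.array([0, 1, 0, 0])]     # w3, NOT orthogonal to a
    assert [dot2(a, wk) for wk in w] == [0, 0, 0, 1]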
18.10
18.10.1
This section contains our officially sanctioned proof that one will get the desired n − 1 vectors, orthogonal to a, with polynomial complexity in n. It is a very straightforward argument that considers sampling the circuit n + T times. While it has several steps, each one is comprehensible and contains the kind of arithmetic every quantum computer scientist should be able to reproduce.

Theorem. If we randomly select m + T samples from (Z₂)^m, the probability that these samples will contain a linearly-independent subset of m vectors is > 1 − (1/2)^T.

Notice that the constant T is independent of m, so the process of selecting the m independent vectors is O(m + T) = O(m), not counting any sub-algorithms or arithmetic we have to apply in the process (which we'll get to). We'll be using m = n − 1, the dimension of a^⊥ (≅ (Z₂)^{n−1}).
18.10.2
z0
z00
z01
z0(m1)
z1
z10
z11
z1(m1)
z2
z20
z
21
2(m1)
=
.
..
..
..
..
.
.
.
.
.
.
.
zm+T 1
z(m+T 1)0 z(m+T 1)1 z(m+T 1)(m1)
The number of independent rows is the row rank of this matrix, and by elementary linear algebra, the row rank = column rank. So, lets change our perspective
and think of this matrix as set of m column vectors, each of dimension m + T .
We would be done if we could show that all m of column vectors
z00
z01
z0(m1)
z10 z11
z1(m1)
z20 z21
z2(m1)
,
,...,
..
..
..
.
.
.
z(m+T 1)0
z(m+T 1)1
z(m+T 1)(m1)
c0 ,
c1 ,
cm1
were linearly independent with probability > 1 (1/2)T +1 . (That would mean
the column rank was m.)
[This row rank = column rank trick has other applications in quantum computing, and, in particular, will be used when we study Neumarks construction for
orthogonalizing a set of general measurements in the next course. Neumarks
construction is a conceptual first step towards noisy-system analysis.]
Step 2: Express the probability that all m column vectors, ck , are
independent as a product of m conditional probabilities.
[Note: This is why we switched to column rank. The row vectors, we know, are
not linearly independent after taking T + m > m samples, but we can sample
many more than m samples and continue to look at the increasingly longer
column vectors, eventually making those linearly independent.]
Let
I (j) event that c0 , c1 , . . . cj1 are linearly-independent.
Our goal is to compute the probability of I (m). Combining the basic identity,
P I (m)
= P I (m) I (m 1)
+ P I (m) I (m 1) ,
513
we can write
P I (m)
= P I (m) I (m 1)
= P I (m) I (m 1) P I (m 1) ,
(The j = 1 term might look a little strange, because it refers to the undefined
I (0), but it makes sense if we view the event I (0), i.e., that in a set of no
vectors theyre all linearly-independent, as vacuously true. Therefore, we see
that I (0) can be said to have probability 1, so the j = 1 term reduces to
P ( I (1) ), without a conditional, in agreement with the line above.)
Step 3: Compute the j th factor, P I (j) I (j 1) , in this product.
P I (j) I (j 1)
= 1 P I (j) I (j 1)
But what does I (j) I (j 1) mean? It means two things:
1. It assumes the first j 1 vectors, {c0 , . . . , cj2 }, are independent and
therefore span the largest space possible for j 1 vectors, namely, one of
size 2j1 , and
2. The jth vector, cj1 , is the span of the first j 1 vectors {c0 , . . . , cj2 }.
This probability is computed by counting the number of ways we can select a
vector from the entire space of 2m+T vectors (remember, our column vectors
have m + T coordinates) that also happen to be in the span of the first j 1
vectors (a subspace of size 2j1 ).
[Why size 2j1 ? Express the first j 1 column vectors in their own coordinates, i.e., in a basis that starts with them, then adds more basis vectors to get
the full basis for (Z2 )m+T . Recall that basis vectors expanded along themselves
514
always have a single 1 coordinate sitting in a column of 0s. Looked at this way,
how many distinct vectors can be formed out of various sums of the original
j 1?]
The probability we seek is, by definition of probability, just the ratio
2j1
2m+T
m+T j+1
1
,
2
so
P I (j) I (j 1)
m+T j+1
1
,
2
for j = 1, 2, . . . , m.
Sanity Check. Now is a good time for the computer scientists first line of
defense when coming across a messy formula: the sanity check.
Does this make sense for j = 1, the case of a single vector c0 ? The formula tells
us that the chances of getting a single, linearly-independent vector is
P I (1)
m+T 1+1
1
2
m+T
1
.
2
Wait, shouldnt the first vector be 100% certain? No, we might get unlucky and
pick the 0-vector with probability 1/2m+T , which is exactly what the formula
predicts.
That was too easy, so lets do one more. What about j = 2? The first vector,
c0 , spans a space of two vectors (as do all non-zero single vectors in (Z2 )m ).
The chances of picking a second vector from this set would be 2/(size of the
space), which is 2/2m+T = 1/(2m+T 1 ). The formula predicts that we will get
a second independent vector, not in the span of c0 with probability
P I (2) I (1)
m+T 2+1
1
2
m+T 1
1
,
2
exactly the complement of the probability that c1 just happening to get pulled
from the set of two vectors spanned by c0 , as computed.
So, spot testing supports the correctness of the derived formula.
Step 4: Plug the expression for the jth factor (step 3) back into the
full probability formula for all m vectors (step 2).
In step 2, we decomposed the probability of success, as a product,
P I (m)
m
Y
j=1
515
P I (j) I (j 1) .
m+T j+1
1
.
2
m
Y
m+T j+1 !
1
,
2
j=1
m
Y
T +i !
1
.
2
i=1
i=1
p
X
i .
i=1
Proof by Induction.
Case p = 1:
1 1
1 1 .
Consider any p > 1 and assume the claim is true for p 1. Then
p
Y
(1 i )
(1 p )
i=1
p1
Y
(1 i )
i=1
(1 p ) 1
1
p1
X
!
i
i=1
p1
i p +
i=1
p
>
p1
X
p i
i=1
i .
X QED
i=1
Step 6: Apply the lemma to the conclusion of step 4 to finish off the
proof. Using m for the p of the lemma, we obtain
T +i !
m
Y
1
P I (m)
=
1
2
i=1
m T +i
X
1
1
2
i=1
#
T "X
m i
1
1
= 1
.
2
2
i=1
But that big bracketed sum on the RHS is a bunch of distinct and positive
powers of 1/2, which can never add up to more than 1 (think binary floating
point numbers like .101011 or .00111 or .1111111), so that sum is < 1, i.e.,
" m #
X 1 i
< 1 , so
2
i=1
#
T
T "X
m i
1
1
1
, and we get
<
2
2
2
i=1
#
T "X
T
m i
1
1
1
1
> 1
.
2
2
2
i=1
Combining the results of the last two equation blocks we conclude
P I (m)
>
T
1
1
.
2
This proves that the column vectors, cj , are linearly-independent with probability greater than 1 1/2T , and therefore the row vectors, our zk also have
at least m linearly independent vectors among them (row rank = column rank,
remember?), with that same lower-bound probability.
QED
18.10.3
Summary of Argument 1
517
18.10.4
This argument is seen frequently and is more straightforward than our preferred one,
but it gives a weaker result in absolute terms. That is, it gives an unjustifiably
conservative projection for the number of samples required to achieve n 1 linearly
independent vectors. Of course, this would not affect the performance of an actual
algorithm, since all we are doing in these proofs is showing that well get linear
independence fast. An actual quantum circuit would be indifferent to how quickly
we think it should give us n 1 independent vectors; it would reveal them in a time
frame set by the laws of nature, not what we proved or didnt prove. Still, its nice to
predict the convergence to linear independence accurately, which this version doesnt
do quite as well as the first. Due to its simplicity and prevalence in the literature, I
include it.
Here are the steps. Well refer back to the first proof when we need a result that
was already proven there.
Theorem. If we randomly select m samples from Z2m , the probability that
we have selected a complete, linearly-independent (and therefore basis) set
is > 1/4.
This result (once proved) estimates a probability of at least 1 1/4T of getting
a linearly independent set of vectors after sampling Z2m mT times. The reason we
have to take take the product, mT , is that the theorem only computes the probability
that results when we take exactly m samples; it does not address the trickier math
for overlapping sample sets or a slowly changing sample set that would come from
adding one new sample and throwing away an old one. Nevertheless, it proves O(m)
complexity.
Keep in mind that well be applying this theorem to m = n 1, the dimension of
a (
= Z2n1 ).
18.10.5
[Notation. As with the first proof, well use boldface vector notation, z (Z2 )m .]
Pick m vectors, { z0 , z1 , z2 , . . . zm1 }, at random from (Z2 )m .
Step 1: Express the probability that the m vectors, zk , are independent as a product of m conditional probabilities.
Let
I (j) event that z0 , z1 , . . . zj1 are linearly-independent.
Our goal is to compute the probability of I (m).
518
m
Y
P I (j) I (j 1) .
j=1
(If interested, see argument 1, step 2 to account for the fact that I (0)
can be said to have probability 1, implying that the j = 1 term reduces to
P ( I (1) ), without a conditional.)
Step 2: Compute the j th factor, P I (j) I (j 1) , in this product.
P I (j) I (j 1)
= 1 P I (j) I (j 1)
But what does I (j) I (j 1) mean? It means two things:
1. It assumes the first j 1 vectors, {z0 , . . . , zj2 }, are independent and
therefore span the largest space possible for j 1 vectors, namely, one of
size 2j1 , and
2. The jth vector, zj1 , is the span of the first j 1 vectors {z0 , . . . , zj2 }.
This probability is computed by counting the number of ways we can select a
vector from the entire space of 2m vectors which happens to also be in the span
of the first j 1 vectors, a subspace of size 2j1 . But thats just the ratio
mj+1
1
2j1
=
,
m
2
2
so
P I (j) I (j 1)
mj+1
1
,
2
for j = 1, 2, . . . , m.
Sanity Check. Does this make sense for j = 1, the case of a single vector z0 ?
The formula tells us that the chances of getting a single, linearly-independent
vector is
m1+1
m
1
1
= 1
.
P I (1)
= 1
2
2
Wait, shouldnt the first vector be 100% certain? No, we might get unlucky
and pick the 0-vector with probability 1/2m , which is exactly what the formula
predicts.
That was too easy, so lets do one more. What about j = 2? The first vector,
z0 , spans a space of two vectors (as do all non-zero single vectors in (Z2 )m ).
The chances of picking a second vector from this set would be 2/(size of the
519
space), which is 2/2m = 1/(2m1 ). The formula predicts that we will get a
second independent vector, not in the span of z0 with probability
m2+1
m1
1
1
= 1
,
P I (2) I (1)
= 1
2
2
exactly the complement of the probability that z1 just happening to get pulled
from the set of two vectors spanned by z0 , as computed.
So, spot testing supports the formulas claim.
Step 3: Plug the expression for the jth factor (step 2) back into the
full probability formula for all m vectors (step 1).
In step 1, we decomposed the probability of success, as a product,
P I (m)
m
Y
P I (j) I (j 1) .
j=1
mj+1
1
.
2
m
Y
j=1
mj+1 !
1
,
2
m
Y
i=1
i !
1
.
2
Y
1
1
2
i=1
From here one could just quote a result from the theory of mathematical q-series,
namely that this infinite product is about .28879. As an alternative, there are
520
some elementary proofs that involve taking the natural log of the product,
splitting it into a finite sum plus and infinite error sum, then estimating the
error. Well accept the result without further ado, which implies
i !
m
Y
1
P I (m)
=
1
2
i=1
>
.25
18.10.6
Summary of Argument 2
18.10.7
Both proofs give the correct O(n) complexity (of the algorithms quantum processing)
for finding n 1 independent vectors spanning a in the context of Z2n .
The second proof is appealing in its simplicity, but part of that simplicity is due
to its handing off a key result to the number theorists. (I have found no sources that
give all the details of the > 1/4 step). Also, the number of circuit samples argument
2 requires for a given level of confidence is, while still O(n), many times larger than
one really needs, and, in particular, many more than the first proof. This is because
it does not provide probabilities for overlapping sample sets, but rather tosses out
all n 1 samples of any set that fails. One can adjust the argument to account for
this conditional dependence, but thats a different argument; if we know that one
set is not linearly independent, then reusing its vectors requires trickier math than
this proof covers. Of course, this is mainly because it is only meant to be a proof of
polynomial time complexity, and not a blueprint for implementation.
521
For example, for n = 10, T = 10, the second proof predicts that 10 9 = 90
samples would produce at least one of the 10 sets to be linearly independent with
probability greater than 1 (3/4)10 .943686. In contrast, the first proof would only
ask for 9 + 10 = 19 samples to get confidence > .999023. Thats greater confidence
with fewer samples.
18.11
18.11.1
As far as we have come, we cant claim victory yet, especially after having set the
rather high bar of proving, not just uttering, the key facts that lead to our end result.
We have a quantum circuit with O(n) gates which we activate O(n) times to find
the unknown period, a, with arbitrarily high probability. Weve agreed to consider
the time needed to operate that portion of the quantum algorithm and ignore the
circuits linear growth. Therefore, we have thus far accounted for a while loop which
requires O(n) passes to get a.
However, there are steps in the sampling process where we have implicitly used
some non-trivial classical algorithms that our classical computers must execute. We
need to see where they fit into the big picture and incorporate their costs. There are
two general areas:
1. The test for mod-2 linear independence in (Z2 )n that our iterative process has
used throughout.
2. The cost of solving the system of n mod-2 equations,
(
0, k = 0, . . . , n 2
wk a =
1, k = n 1
For those of you who will be skipping the details in the next sections, Ill reveal
the results now:
1. The test for mod-2 linear independence is handled using mod-2 Gaussian elimination which we will show to be O(n3 ).
2. Solving the system of n mod-2 equations is handled using back substitution
which we will show to be O(n2 ).
3. The two classical algorithms will be applied in series, so we only need to count
the larger of the two, O(n3 ). Together, they are applied once for each quantum
sample, already computed to be O(n), resulting in a nested count of O(n4 ).
522
4. Well tweak the classical tools by integrating them into Simons algorithm so
that their combined cost is only O(n2 ), resulting in a final count of O(n3 ).
Conclusion. Our implementation of Simons algorithm has a growth rate of
O(n3 ). It is polynomial fast.
18.12
18.12.1
Conveniently, both remaining classical tasks are addressed by an age old and well
documented technique in linear algebra for solving systems of linear equations called
Gaussian elimination with back substitution. Its a mouthful but easy to learn and,
as with all the ancillary math we have been forced to cover, applicable throughout
engineering. In short, a system of linear equations
5x + y + 2z + w
x y + z + 2w
x + 2y 3z + 7w
=
=
=
7
10
3
x
5 1
2 1
7
1 1 1 2 y = 10
z
1 2 3 7
3
w
As the example shows, theres no requirement that the system have the same number
of equations as unknowns; the fewer equations, the less you will know about the solutions. (Instead of the solution being a unique vector like (x, y, z, w)t = (3, 0, 7, 2)t ,
it might be a relation between the components, like ( , 4, .5, 3 )t , with free
to roam over R). Nevertheless, we can apply our techniques to any sized system.
We break it into the two parts,
Gaussian elimination, which produces a matrix with 0s in the lower left triangle,
and
back substitution, which uses that matrix to solve the system of equations as
best we can, meaning that if there are not enough equations, we might only get
relations between unknowns, rather than unique numbers.
523
18.12.2
Gaussian Elimination
3 3 0 5
0 6 2 4 , or
0 0 5 7
reduced row echelon form, which is echelon form with the additional requirement that the first non-zero element in each row be 1, e.g.,
1 1 0
5/3
0 1 1/3 2/3 .
0 0 1 7/5
In our case, where all the values are integers mod-2 (just 0 and 1), the two are actually
equivalent: all non-zero values are 1, automatically.
Properties of Echelon Forms
Reduced or not, row echelon forms have some important properties that we will need.
Lets first list them, then have a peek at how one uses Gaussian elimination (GE), to
convert any matrix to an echelon form.
Geometrically, the diagonal, under which
a square matrix:
0 0
.. ..
. .
0 0
..
.
0
..
.
When the the matrix is not square, the diagonal is geometrically visualized
524
0
.
.
.
0
.
..
0
0
..
.
..
.
0
0
..
.
0
0
..
.
..
0
..
.
0
..
.
0
0
..
.
..
.
..
.
0
..
.
In any case, a diagonal element is one that sits on position (k, k), for some k.
The first non-zero element on row k is to the right of the first non-zero element
of row k1, but it might be two or more positions to the right. Ill use a reduced
form which has the special value, 1, occupying the first non-zero element in a
row to demonstrate this. Note the extra 0s, underlined, that come about as a
result of some row being pushed to the right in this way.
0 1
0 0 0 0 1
0 0 0 0 0 1
.. .. .. .. .. .. . .
..
. . . . . .
.
.
0 0 0 0 0 0 [0/1/]
Any all-zero rows necessarily appear at the bottom of the matrix.
[Exercise. Show this follows from the definition.]
All non-zero row vectors in the matrix are, collectively, a linear independent
set.
[Exercise. Prove it.]
If we know that there are no all-zero row vectors in the echelon form, then the
number of rows number of columns.
[Exercise. Prove it.]
Including the RHS Constant Vector in the Gaussian Elimination Process
When using GE to solve systems of equations, we have to be careful that the equations
that the reduced echelon form represents are equivalent to the original equations, and
to that end we have to modify the RHS column vector, e.g., (7, 10, 3)t of our
example, as we act on the matrix on the LHS. We thus start the festivities by placing
525
the RHS constant vector in the same house as the LHS matrix, but in a room of
its own,
5 1
2 1 7
1 1 1 2 10 .
1 2 3 7 3
We will modify both the matrix and the vector at the same time, eventually resulting
in the row-echelon form,
3 3 0 5 13
0 6 2 4 832 ,
15
0 0 5 7 17
5
or, if we went further, in the reduced row echelon form
1 1 0 35
13
3
0 1 1 2 832 .
3
3
45
0 0 1 75 17
25
The Three Operations that Produce Echelon Forms
There are only three legal operations that we need to consider when performing GE,
1. swapping two rows,
2. multiplying a row by a nonzero value, and
3. adding a multiple of one one row to another.
[Exercise. Prove that these operations produce equations that have the identical
solution(s) as the original equations.]
526
For example,
5 1
2 1 7
7
5 1 2
1
52nd row
1 1 1 2 10 5 5 5 10 50
3
1 2 3 7 3
1 2 3 7
5 1 2
1
7
0 6 3 9 43
3
1 2 3 7
add 1st to 2nd
3
1
2
3
7
swap 1st and 3rd
0 6 3 9 43
7
5 1 2
1
3
1
2
3
7
add 51st to 3rd
0 6 3 9 43
0 9 17 34 22
1 2 3 7
3
add 32 2nd to 3rd
0 6 3 9 43 ,
0 0 25
95
85
2
2
2
etc. (These particular operations may not lead to the echelon forms, above; theyre
just illustrations of the three rules.)
The Cost of Decimal-Based Gaussian Elimination
GE is firmly established in the literature, so for those among you who are interested,
Ill prescribe web search to dig up the exact sequence of operations needed to produce
a row reduced echelon form. The simplest algorithms with no short-cuts use O(n3 )
operations, where an operation is either addition or multiplication, and n is the
larger of the matrixs two dimensions. Some special techniques can improve that, but
it is always worse than O(n2 ), so well be satisfied with the simpler O(n3 ).
To that result we must incorporate the cost of each multiplication and addition
operation. For GE, multiplication could involve increasingly large numbers and, if
incorporated into the
full accounting, would change the complexity to slightly better
2
3
than O n (log m) , where m is the absolute value of the largest integer involved.
Addition is less costly and done in series with the multiplications so does not erode
performance further.
The Cost of Mod-2 Gaussian Elimination
For mod-2 arithmetic, however, we can express the complexity without the extra variable, m. Our matrices consist of only 0s and 1s, so the multiplication in operations
2 and 3 reduce to either the identity (1 a row) or producing a row of 0s (0 a
row). Therefore, we ignore the multiplicative cost completely. Likewise, the addition
527
18.12.3
Back Substitution
c00
c01
...
c0(n1)
b0
x0
c10
b1
c11
...
c1(n1)
x1
=
..
.. ,
..
.
.
.
.
.
.
.
.
.
.
.
.
c(n1)0 c(n1)1 . . . c(n1)(n1)
assumed to be of maximal rank, n
This would result is a reduced row
0 0
1 c023
0 0
1
. .
.
..
..
.. ..
.
0 0
0
0
0
bn1
xn1
c00(n1)
. . . c00(n2)
b00
. . . c01(n2)
c01(n1)
b01
. . . c02(n2)
c02(n1)
b02
. . . c03(n2)
c03(n1)
b03
,
..
..
..
..
.
.
.
.
...
1
c0(n2)(n1) b0n2
...
0
1
b0n1
where, the c0kj and b0k are not the original constants in the equation, but the ones
obtained after applying GE. In reduced echelon form, we see that the (n 1)st
unknown, xn1 , can be read off immediately, as
xn1
b0n1 .
b0n2 ,
which can be solved for xn2 (one equation, one unknown). Once solved, we substitute
these numbers into the equation above, getting another equation with one unknown.
This continues until all rows display the answer to its corresponding xk , and the
system is solved.
The Cost of Decimal-Based Back Substitution
The bottom row has no operations: it is already solved. The second-from-bottom has
one multiplication and one addition. The third-from-bottom has two multiplications
and two additions. Continuing in this manner and adding things up, we get
1 + 2 + 3 + ... + (n 1) =
(n 1) n
2
18.12.4
The Total Cost of the Classical Techniques for Solving Mod-2 Systems
We have shown that Gaussian elimination and back substitution in the mod-2 environment have time complexities, O(n3 ) and O(n2 ), respectively. To solve a system of
n mod-2 equation the two methods can be executed in series, so the dominant O(n3 )
will cover the entire expense.
However, this isnt exactly how Simons algorithm uses these classical tools, so we
need to count in a way that precisely fits our needs.
18.13
We now show how these time tested techniques can be used to evaluate the classical
post processing costs in Simons algorithm. The eventual answer we will get is this:
529
They will be used in-series, so we only need to count the larger of the two, O(n3 ),
and these algorithms are applied once for each quantum sample, already computed
to be O(n), resulting in a nested count of O(n4 ).
18.13.1
Linear Independence
0 0
0
1
.
.
.
w
w
w
2(n3)
2(n2)
2(n1)
.
..
..
..
.. . .
..
..
..
.
.
.
.
.
.
.
.
0 0
0
0 ...
1
w(m1)(n2) w(m1)(n1)
2. Observations. Notice that m < (n 1), since the vectors in W are independent, by assumption, and if m were equal to n 1, we would already have a
maximally independent set of vectors known to be orthogonal to a and would
have stopped sampling the circuit. That means that there are at least two more
columns than there are rows: the full space is n-dimensional, and we have n 2
or fewer linearly independent vectors so far.
As a consequence, one or more rows (the second, in the above example) skips
to the right more than one position relative to the row above it and/or the final
row in the matrix has its leading 1 in column n 3 or greater.
3. Put z at the bottom of the stack and re-apply GE.
If z is linearly independent of W this will result in a new non-zero
vector row. Replace the set W with the new set whose coordinates are the
rows of the new reduced-echelon matrix. These new row vectors are in the
span of the old W plus z added. [Caution. It is possible that none of
the original wk or z explicitly appear in the new rows, which come from
530
GE applied to those vectors. All that matters is that the span of the new
rows is the same as the span of W plus z, which GE ensures.] We have
increased our set of linearly independent vectors by one.
If z is not linearly independent of W the last row will contain all 0s.
Recover the original W (or, if you like, replace it with the new reduced
matrix row vectors, leaving off the final 0 row the two sets will span the
same space and be linearly independent). You are ready to grab another z
based on the outer-loop inside which this linear-independence test resides.
[Exercise. Explain why all the claims in this step are true.]
4. Once n 1 vectors populate the set W =
complete. We call the associated row-reduced
w0
w1
W =
..
wn2
and W satisfies the matrix-vector product equation
W a
0,
18.13.2
wn1
(0, 0, . . . , 0, 1, 0, . . . , 0)
and place this new wn1 directly below wk , pushing all the vectors in the
old rows k + 1 and greater down to accommodate the insertion. Call the
augmented matrix, W 0 .
Before:
wk
0
0
1
w
w
w
W =
23
24
25
0 0
0
0
1 w35
0 0
0
0
0
1
After:
W0
0 0
1 w23 w24
0 0
0
1
0
0 0
0
0
1
0 0
0
0
0
w05
w15
w25
0
wn1
w35
1
(0, 0, . . . , 0, 1, ) .
Define
wn1
20
(0, 0, . . . , 0, 0, 1) ,
and place this new wn1 after wn2 last old row, making wn1 the new
bottom row of W . Call the augmented matrix, W 0 .
Before:
0 0
0
1 w34 w35
0 0
0
0
1 w45 wn2
532
After:
0 0
1 w23
0 0
0
1
0 0
0
0
0 0
0
0
w04
w14
w24
w34
1
0
w05
w15
w25
w35
w45
wn1
1
0,
insert a 1 into the position corresponding to the new row in W . Push any 0s
down, as needed, to accommodate this 1. It will now be an n-dimensional vector
corresponding to 2k for some k.
0
0
0
0
0 1
0
0
0
0
0
That will produce a full set of n linearly independent vectors for Z2n in reduced
echelon form, which we call W 0 .
Cost of Completing the Basis
The above process consists of a small number of O(n) operations, each occurring in
series with each other and with the loops that come before. Therefore, its O(n) adds
nothing to the previous complexity, which now stands at O(n4 ).
533
18.13.3
We are, metaphorically, 99% of the way to finding the period of f . We want to solve
0
0
.
.
.
W a = 1 ,
.
..
0
0
but W is already in reduced-echelon form. We need only apply mod-2 back-substitution
to extract the solution vector, a.
Cost of Back Substitution to the Algorithm
This is an O(n2 ) activity done in series with the loops that come before. Therefore,
its O(n2 ) adds nothing to the previous complexity, which still stands at O(n4 ).
18.13.4
We have accounted for the classical cost of testing the linear independence and solving
the system of equations. In the process, we have demonstrated that it increases the
complexity by a factor n3 , making the full algorithm O(n4 ), not counting the oracle,
whose complexity is unknown to us.
We will do better, though, by leveraging mod-2 shortcuts that are integrated into
Simons algorithm. Youll see.
The melody that keeps repeating in our head, though, is the footnote that this
entire analysis is relative to the quantum oracle Uf , the reversible operator associated
with the Z2n periodic function, f . Its complexity is that of the black box for f , itself.
We do not generally know that complexity, and it may well be very bad. Fortunately,
in some special cases of great interest, we know enough about the function to be able
to state that it has polynomial time complexity, often of a low polynomial order.
18.14
Adjusted Algorithm
18.14.1
We now integrate Gaussian elimination into Simons algorithm during the test for
linear independence. Here is the step, as originally presented:
534
1 0 1
0 0 0
0 0 0
0 0 0
0
1
0
0
1
1
1
0
0
1
0
1
0
1
0
0
1
0
0
1
536
Summary
We are not applying GE all at once to a single matrix. Rather, we are doing an O(n2 )
operation after each quantum sample that keeps the accumulated W set in eternal
echelon-reduced form. So, its a custom O(n2 ) algorithm nested within the quantum
O(n) loop, giving an outer complexity of O(n3 ).
18.14.2
Finally, we integrate back-substitution into the final step of the original algorithm.
The original algorithms final step was
Otherwise, we succeeded. Add an nth vector, wn1 , which is linearly independent to this set (and therefore not orthogonal to a, by a previous exercise), done
easily using a simple classical observation, demonstrated below. This produces
a system of n independent equations satisfying
(
0, k = 0, . . . , n 2
wk a =
1, k = n 1
which has a unique non-zero solution.
Replace this with the new final step,
Otherwise, we succeeded and W is an (n1)n matrix in reduced echelon form.
Add an nth row vector, wn1 , which is linearly independent to W s rows (and
therefore not orthogonal to a), using the process described in Solving the Final
Set of Linear Equations, above. That was an O(n) process that produced an
n n W , also in reduced echelon form. We now have a system of n independent
equations satisfying
(
0, k = 0, . . . , n 2
wk a =
1, k = n 1
which is already in reduced echelon form. Solve it using only back-substitution,
which is O(n2 ).
The take-away is that we have already produced the echelon form as part of the
linear-independence tests, so we are positioned to solve the system using only backsubstitution, O(n2 ).
18.14.3
We detailed two adjustments to the original algorithm. The first was the test for
linear independence using a mod-2 step that simulateously resulted in GE at the end
all the quantum sampling. The second was the solution of the system of equations.
537
18.15
.
Classically, this problem is hard, that is, deterministically, we certainly need a
number of trials that increases exponentially in n to get the period, and even if we
are satisfied with a small error, we would still need to take an exponential number
of samples to achieve that (and not just any exponential number of samples, but a
really big one). Lets demonstrate all this.
18.15.1
Recall that the domain can be partitioned (in more than one way) into two disjoint
sets, R and Q,
Z2n =
R
Q
= { , x, } { , x a, } ,
with f one-to-one on R and Q, individually. We pick xs at random (avoiding duplicates) and plug each one into f or a classical oracle of f if you like,
x
Classical f
f (x) .
If we sample f any fewer than (half the domain size) + 1, that is, (2n /2) + 1 =
2n1 + 1, times, we may be unlucky enough that all of our outputs, f (x), are images
of x R (or all Q), which would produce all distinct functional values. There is
no way to determine what a is if we dont get a duplicate output, f (x0 ) = f (x00 ) for
some distinct x0 , x00 dom(f ). (Once we do, of course, a = x0 x00 , but until that
time, no dice.)
Therefore, we have to sample 2n1 + 1 times, exponential in n, to be sure we get
at least one x from R and one from Q, thus producing a duplicate.
538
18.15.2
1 .
The functional dependence m = m(, n) is just a way to express that we are allowed
to let the number of samples, m, depend on both how small an error we want and
also how big the domain of f is. For example, if we could show that m = 21/ worked,
then since that is not dependent on n the complexity would be constant time. On the
other hand, if we could only prove that an m = n4 21/ worked, then the algorithm
would be O(n4 ).
The function dependence we care about does not involve , only n, so really, we
are interested in m = m(n).
What we will show is that even if we let m be a particular function that grows
exponentially in n, we wont succeed. Thats not to say every exponentially increasing
sample size which is a function of n would fail we already know that if we chose
2n1 + 1 we will succeed with certainty. But well see that some smaller exponential
function of n, specifically m = 2n/4 , will not work, and if that wont work then no
polynomial dependence on n, which necessarily grows more slowly than m = 2n/4 ,
has a chance, either.
An Upper Bound for Getting Repeats in m Samples
For the moment, we wont concern ourselves with whether or not m is some function
of or n. Instead, lets compute an upper bound on the probability of getting a
repeat when sampling a classical oracle m times. That is, well get the probability
as a function of m, alone. Afterwards, we can stand back and see what kind of
dependence m would require on n in order that the deck be stacked in our favor.
We are looking for the probability that at least two samples f (xi ), f (xj ) are equal
when choosing m distinct inputs, {x0 , x1 , . . . , xm1 }. Well call that event E . The
more specific event that some pair of inputs, xi and xj , yield equal f (x)s, will be
referred to as Eij . Since Eij and Eji are the same event, we only have to list it once,
so we only consider the cases where i < j. Clearly,
m1
Eij
i, j=0
i<j
The probability of a union is the sum of the probabilities of the individual events
539
m1
X
P (Eij )
i, j=0
i<j
m1
X
P (Eij Ekl . . .)
various
combinations
P (Eij ) .
i, j=0
i<j
The number of unordered pairs, {i, j}, i 6= j, when taken from m things is (look up
n choose k if you have never seen this)
m
m!
m (m 1)
=
=
.
2
2!(m 2)!
2
This is exactly the number of events Eij that we are counting since our condition,
0 i < j m 1, is in 1-to-1 correspondence with the set of unordered pairs {i, j},
i and j between 0 and m 1, inclusive and i 6= j.
Meanwhile, the probability that an individual pair produces the same f value is
just the probability that we choose the second one, xj , in such a way that it happens
to be exactly xi a. Since were intentionally not going to pick xi a second time this
leaves 2n 1 choices, of which only one is xi a, so that gives
1
.
P (Eij ) =
n
2 1
Therefore, weve computed the number of elements in the sum, m(m 1)/2, as well
as the probability of each element in the sum, 1/(2n 1), so we plug back into our
inequality to get
1
m (m 1)
n
.
P (E )
2
2 1
[Exercise. We know that when we sample m = 2n1 + 1 times, we are certain to
get a duplicate. As a sanity check, make sure that plugging this value of m into the
derived inequality gives an upper bound that is no less than one. Any value 1 will
be consistent with our result. ]
This is the first formula we sought. We now go on to see what this implies about
how m would need to depend on n to give a decent chance of obtaining a.
What the Estimate Tells Us about m = m(n)
To get our feet wet, lets imagine the pipe dream that we can use an m that is
independent of n. The bound we proved,
m (m 1)
1
P (E )
n
,
2
2 1
tells us that any such m (say an integer m > 1/1000000 ) is going to have an exponentially small probability as n . So that settles that, at least.
540
=
=
=
<
m (m 1)
1
2n/4 (2n/4 1)
1
n
=
n
2
2 1
2
2 1
1
2n/2
1
2n/2 2n/4
n
<
n
2
2 1
2
2 1
n/2
n/2
1
2
1
2
n
<
n
2 2 1
2 2 2n/2
1
2n/2
1
1
n/2 n/2
=
n/2
2 2 (2 1)
2 2 1
1
,
n/2
2 1
541
Chapter 19
Real and Complex Fourier Series
19.1
Our previous quantum algorithms made propitious use of the nth order Hadamard
transform, H n , but our next algorithm will require something a little higher octane.
The fundamental rules apply: a gate is still a gate and must be unitary. As such, it
can be viewed as a basis change at one moment and a tool to turn a separable input
state like |0in into a superposition, the next. Well have occasion to look at it in both
lights.
Our objective in the next four lessons is to study the quantum Fourier transform
a.k.a. the QFT . This is done in three classical chapters and one quantum chapter.
Reading and studying all three classical chapters will best prepare you for the fourth
QFT chapter, but you can cherry pick, skim, or even skip one or more of the three
depending on your interest and prior background.
An aggressive short cut might be to try starting with the third, the Discrete and
Fast Fourier Transforms, read that for general comprehension, then see if it gives you
enough of a foundation to get through the fourth QFT chapter.
If you have time and want to learn (or review) the classic math that leads to
QFT , the full path will take you through some beautiful topics:
19.2
Fourier Series apply to functions that are either periodic or are only defined on
a bounded interval. Well first define the terms, periodicity, bounded domain and
compact support, then we can get on with Fourier Series.
19.2.1
A function f whose domain is (nearly) all the real numbers, R, is said to be periodic
if there is a unique smallest positive real number T , such that
f (x + T )
f (x),
for all x.
sin (x),
for all x,
and 2 is the smallest positive number with this property, so a = 2 is its period. 4
and 12 satisfy the equality, but theyre not as small as 2, so theyre not periods.
Its graph manifests this periodicity in the form of repetition (Figure 19.1).
I said that the domain could be nearly all real numbers, because its fine if the
function blows-up or is undefined on some isolated points. A good example is the
tangent function, y = tan x, whose period is half that of sin x but is undefined for
/2, 3/2, etc. (Figure 19.2).
543
Figure 19.2: The function y = tan x blows-up at isolated points but is still periodic
(with period )
19.2.2
A function which is defined only over a bounded interval of the real numbers (like
[0, 100] or [, ]) is said to have bounded domain. An example is:
x2 ,
if x [1, 3)
f (x) =
undefined,
otherwise
1, included both or neither. It all depends on our particular goal and function.
Half-open intervals, closed on the left and open on the right, are the most useful to
us.]
A subtly different concept is that of compact support. A function might be defined
on a relatively large set, say all (or most) real numbers, but happen to be zero outside
a bounded interval. In this case we prefer to say that it has compact support.
The previous example, extended to all R, but set to zero outside [1, 3) is
2
x , if x [1, 3)
f (x) =
0,
otherwise
(See Figure 19.4)
Terminology
The support of the function is the closure of the domain where f 6= 0. In the last
function, although f is non-zero only for [1, 0) (0, 3), we include the two points 0
and 3 in its support since they are part of the closure of the set where f is non-zero.
(I realize that I have not defined closure, and I wont do so rigorously. For us, closure
means adding back any points which are right next to places where f is non-zero,
like 0 and 3 in the last example.)
Figure 19.4: Graph of a function defined everywhere, but whose support is [1, 3],
the closure of [1, 0) (0, 3)
19.2.3
If a function is periodic, with period T , once you know it on any half-open interval of
length T , say [0, T ) or [T /2, T /2), you automatically know it for all x, complements
of f (x) = f (x + T ). So we could restrict our attention to the interval. [T /2, T /2)
imagining, if it suited us, that the function was undefined off that interval. Our
understanding of the function on this interval would tell us everything there is to
545
know about the function elsewhere, since the rest of the graph of the function is just
a repeated clone of what we see on this small part.
Likewise, if we had a (non-periodic) function with bounded domain, say [a, b], we
could throw away b to make it a half-open interval [a, b) (we dont care about f at one
point, anyway). We then convert that to an induced periodic function by insisting
that f (x) = f (x + T ), for T b a. This defines f everywhere off that interval, and
the expanded function agrees with f on its original domain, but is now periodic with
period T = b a.
As a result of this duality between periodic functions and functions with bounded
domain, I will be interchanging the terms periodic and bounded domain at will over
the next few sections, choosing whichever one best fits the context at hand.
19.3
19.3.1
Definitions
Figure 19.6: A function with bounded domain that can be expressed as a Fourier
series (support width = 2)
546
Until further notice we confine ourselves to functions with domain R and range
R.
Any well-behaved function of the real numbers that is either periodic (See Figure 19.5), or has bounded domain (See Figure 19.6), can be expressed as a sum of
sines and cosines. This is true for any period or support width, T , but we normally
simplify things by taking T = 2.
The Real Fourier Series. The real Fourier series of a well-behaved
periodic function with period 2 is the sum
f (x) =
X
X
1
+
an cos nx +
bn sin nx .
a0
2
n=1
n=1
The sum on the RHS of this equation is called the Fourier Series of the function f .
The functions of x (that is, {sin nx}n , {cos nx}n and the constant function 1/2) that
appear in the sum are sometimes called the Fourier basis functions.
Study this carefully for a moment. There is a constant term out front (a0 /2),
which simply shifts the functions graph up or down, vertically. Then, there are two
infinite sums involving cos nx and sin nx. Each term in those sums has a coefficient
some real number an or bn in front of it. In thirty seconds well see what all that
means.
[The term well-behaved could take us all week to explore, but every function
that you are likely to think of, that we will need, or that comes up in physics and
engineering, is almost certainly going to be well-behaved.]
19.3.2
Each sinusoid in the Fourier series has a certain frequency associated with it: the n
in sin nx or cos nx. The larger the n, the higher the frequency of that sine or cosine.
(See Figure 19.7)
Of course, not every term will have the same amount of that particular frequency. Thats where the coefficients in front of the sines and cosines come into
play. The way we think about the collection of coefficients, {an , bn }, in the Fourier
expansion is summarized in the bulleted list, below.
When small -n coefficients like a0 , a1 , b1 or b2 are large in magnitude, the function
possesses significant low frequency characteristics (visible by slowly changing,
large curvature in the graph of f ).
When the higher -n coefficients like a50 , b90 or b1000 are large in magnitude, the
function has lots of high frequency characteristics (busy squiggling) going on.
The coefficients, {an } and {bn } are often called the weights or amplitudes of
their respective basis functions (in front of which they stand). If |an | is large,
547
Figure 19.7: A low frequency (n = 1 : sin x) and high frequency (n = 20 : sin 20x)
basis function in the Fourier series
theres a lot of cos nx needed in the recipe to prepare a meal of f (x) (same
goes for |bn | and sin nx). Each coefficient adds just the right about of weight of
its corresponding sinusoid to build f .
As mentioned, the functions {sin nx} and {cos nx} are sometimes called the
Fourier basis functions, at other times the normal modes, and in some contexts
the Fourier eigenfunctions. Whatever we call them, they represent the individual ingredients used to build the original f out of trigonometric objects, and
the weights instruct the chef how much of each function to add to the recipe:
a pinch of cos 3x, a quart of sin 5x, three tablespoons of sin 17x, etc.
Caution: A sharp turn (f 0 blows up or there is a jump discontinuity) at even a
single domain point is a kind of squiggliness, so the function may appear smooth
except for one or two angled or cornered points, but those points require lots of
high frequencies in order to be modeled by the Fourier series.
548
19.3.3
To bring all this into focus, we look at the Fourier series of a function that is about
a simple as you can imagine,
f (x) = x,
x [, ).
X
2
n+1
x =
(1)
sin nx .
n
n=1
Finite Approximations
While the Fourier sum is exact (for well-behaved f ) the vagaries of hardware require
that we merely approximate it by taking only a partial sum that ends at some finite
n = N < . For our f under consideration, the first 25 coefficients of the sines are
shown in Figure 19.10 and graphed in Figure 19.11.
The Spectrum
Collectively, the Fourier coefficients (or their graph) is called the spectrum of f . It is
a possibly infinite list (or graph) of the weights of the various frequencies contained
in f .
Viewed in this way, the coefficients, themselves, represent a new function, F (n).
The Fourier Series as an Operator Mapping Functions to Functions
The Fourier mechanism is a kind of operator, FS, applied to f (x) to get a new
function, F (n), which is also called the spectrum.
550
F (n) = FS [f (x)]
l
{an , bn }
The catch is, this new function, F , is only defined on the non-negative integers. In
fact, if you look closely, its really two separate functions of integers, a(n) = an and
b(n) = bn . But thats okay we only want to get comfortable with the idea that
the Fourier operator takes one function, f (x), domain R, and produces another
function, its spectrum F (n), domain Z0 .
f : R R
F : Z0 R
FS
f 7 F
F contains every ounce of information of the original f , only expressed in a different
form.
Computing Fourier Coefficients
The way to produce the Fourier coefficients, {an , bn } of a function, f , is through these
easy formulas (that I wont derive),
Z
1
f (x) dx , n = 0 ,
a0 =
Z
1
an =
f (x) cos nx dx , n > 0 , and
Z
1
bn =
f (x) sin nx dx , n > 0 .
They work for functions which have period 2 or bounded domain [, ). For some
other period T , we would need to multiply or divide T /(2) in the right places (check
on-line or see if you can derive the general formula).
Using these formulas, you can do lots of exercises, computing the Fourier series of
various functions restricted to the interval [, ).
In practice, we cant build circuits or algorithms that will generate an infinite
sum of frequencies, but its easy enough to stop after any finite number of terms.
Figure 19.12 shows what we get if we stop the sum after three terms.
Its not very impressive, but remember, we are using only three sines/cosines to
approximate a diagonal line. Not bad, when you think of it that way. Lets take the
first 50 terms and see what we get (Figure 19.13).
Now we understand how Fourier series work. We can see the close approximation to
the straight line near the middle of the domain and also recognize the high frequency
551
552
X
X
1
a0
+
an cos nx +
bn sin nx
2
n=1
n=1
always produces a function of x which is periodic on the entire real line, even if we
started with (and only care about) a function, f (x) with bounded domain. The RHS
of this equation matches the original f over its original domain, but the domain
of the RHS may be larger. To illustrate this, if we were modeling the function
f (x) = x2 , restricted to [, ), the Fourier series would converge on the entire real
line, R, beyond the original domain (See Figure 19.15).
Figure 19.15: f (x) has bounded domain, but its Fourier expansion is periodic.
The way to think about and deal with this is to simply ignore the infinite number of
periods magnanimously afforded by the Fourier series expression (as a function of x)
and only take the one period that lies above f s original, bounded domain.
Compact Support, Alone, is Not Enough
In contrast, if we defined a function which is defined over all R but had compact
support, [, ], it would not have a Fourier series; no single weighted sum of sinusoids
could build this function, because we cant reconstruct the flat f (x) = 0 regions on
the left and right with a single (even infinite) sum. We can break it up into three
regions, and deal with each separately, but thats a different story.
553
19.4
19.4.1
Definitions
We continue to study real functions of a real variable, f (x), which are either periodic
or have a bounded domain. We still want to express them as a weighted sum of special
pure frequency functions. I remind you, also, that we are restricting our attention
to functions with period (or domain length) 2, but our results will apply to functions
having any period T if we tweak them using factors of T or 1/T in the right places.
To convert from sines and cosines to complex numbers, one formula should come
to mind: Eulers formula,
ei = cos + i sin .
Solving this for cosine and sine, we find:
ei + ei
2
ei ei
sin =
2i
cos =
While I wont show the four or five steps, explicitly, we can apply these equivalences
to the real Fourier expression for f ,
f (x) =
X
X
1
bn sin nx,
an cos nx +
+
a0
2
n=1
n=1
a0
X
X
1
1
1
+
(an ibn )einx +
(an + ibn )einx .
2
2
2
n=1
n=1
Now, let n runneth negative to form our Complex Fourier Series of the (same) function f .
The Complex Fourier Series. The Complex Fourier series of a
well-behaved periodic function with period 2 is the sum
f (x) =
cn einx ,
n =
where
2 (an ibn ) ,
cn 21 (an + ibn ) ,
1
a ,
2 0
554
n>0
n<0 .
n=0
The complex Fourier series is a cleaner sum than the real Fourier expansion which
uses sinusoids, and it allows us to deal with all the coefficients at once when doing
computations. The price we pay is that the coefficients, cn , are generally complex
(not to mention the exponentials, themselves). However, even when they are all
complex, the sum is still real. We have been and continue to be interested in
real-valued functions of a real variable. The fact that we are using complex functions
and coefficients to construct a real-valued function does not change our focus.
19.4.2
I expressed the cn of the complex Fourier series in terms of the an and bn of the real
Fourier series. That was to demonstrate that this new form existed, not to encourage
you to first compute the real Fourier series and, from that, compute the cn of the
complex form. The formula for computing the complex spectrum, {cn } is
Z
1
einx f (x) dx .
cn =
2
We can learn a lot by placing the complex Fourier expansion of our periodic f on the
same line as the (new, explicit) expression for the cn .
Z
X
1
inx
einx f (x) dx
f (x) =
cn e ,
where cn =
2
n =
Make a mental note of the following observations by confirming, visually, that theyre
true.
We are expressing f as a weighted-sum (weights cn ) of complex basis functions
einx . But the integral is also a kind of sum, so we are simultaneously expressing
the cn as a weighted-sum (weights f (x)) of complex basis functions einx
Under the first sum, the x in the nth basis function, einx , is fixed (thats the
number at which we are evaluating f ), but n is a summation variable; under
the second integration, it is the n of einx which is fixed (thats the index at
which we are evaluting the coefficient, cn ), and x is the itegration variable. So
the roles of x and n are swapped.
The sequence of complex weights, {cn }, is nothing more than a function c(n)
on the set of all integers, Z. This emphasizes that not only is f () a function of
x, but c() is a function of n. This way of thinking makes the above expression
look even more symmetric
f (x) =
c(n) einx ,
n =
while,
1
c(n) =
2
555
f (x) einx dx .
c(n)
is, conceptually, its own inverse. You do (very roughly) the same thing to get the
spectrum from the function as you do to build the function from its spectrum.
Example
Lets expandf (x) = x along the complex Fourier basis.
Our goal is to find the complex coefficients, cn , that make the following true:
x =
cn einx .
n=
=
=
=
=
=
1 1
in
in
in
in
in
e
+
e
+
e
e
2 n2
1 1
[in (2 cos n) (2i sin n)]
2 n2
1 1
2i [n (cos n) 0]
2 n2
i
cos n
n
i
(1)n .
n
i
(1)n einx .
n
n=
n6=0
556
19.5
19.5.1
This short section will be very useful in motivating the approach to Shors periodfinding algorithm. In order to use it, Ill temporarily need the letter f to mean
frequency (in keeping with the classical scientific literature), so were going to call our
periodic function under study g(x).
Weve been studying periodic functions, g(x) of real x which have periods T = 2,
and we have shown how to express them as a sum of either real sines and cosines,
g(x) =
X
X
1
a0
+
an cos nx +
bn sin nx ,
2
n=1
n=1
or complex exponentials,
g(x) =
cn einx .
n =
Each term in these sums has a certain frequency: the n. You may have gotten the
impression that the term frequency only applies to functions of the form sin (nx),
cos (nx) or e(nx) . If so, Id like to disabuse you of that notion (for which my presentation was partly responsible). In fact any periodic function has a frequency, even
those which are somewhat arbitrary looking.
Well relax the requirement that our periodic functions have period T = 2. That
was merely a convenience to make its Fourier sum take on a standard form. For the
moment, we dont care about Fourier series.
We will define frequency twice, first in the usual way and then using a common
alternative.
19.5.2
Ordinary Frequency
We know what the period, T , of a periodic g(x) means. The frequency, f , of a periodic
g(x) is just the reciprocal of the period,
f
1
.
T
S
Figure 19.17: f = .1 only reveals one tenth of period in [.5, .5)
The take-away here is that
f T
and if you know gs period, you know its frequency (and vice versa).
19.5.3
Angular Frequency
When we ask the question How many periods fit an interval of length l? theres
nothing forcing us to choose l = 1. Thats a common choice in physics only because
it produces answers per second, per unit or per radian of some cyclic phenomenon.
If, instead, we wanted to express things per cycle or per revolution we would choose
l = 2. Its a slightly larger interval, so the same function would squeeze 6+ times as
many periods into it; if you were to repeat something 10 times in the space (or time)
of one unit or radian, you would get 62.8 repetitions in the span of a full revolution
of 2 units or radians.
558
2
.
T
2 f .
The relationship between period and angular frequency has the same interpretation
and form as that for ordinary frequency with the qualitatively unimportant change
that the number 1 now becomes 2. In particular, if you know the functions angular
frequency, you know its period, courtesy of
T
559
2 .
Chapter 20
The Continuous Fourier Transform
20.1
This is the second of three classical chapters in Fourier theory meant to prepare you
for the quantum Fourier transform or QFT . It fits into the full path according to:
Chapter 19 [Real Fourier Series Complex Fourier Series]
20.2
20.2.1
Non-Periodic Functions
Fourier series assumed (and required) that a function, f , was either periodic or
restricted to a bounded domain before we could claim it was expressible as a weightedsum of frequencies. A natural question to ask is whether we can find such weightedsum expansions for non-periodic functions defined over all R, with or without compact
560
20.2.2
We begin with a periodic f and restate the duality between it and its (complex)
Fourier series,
f (x) =
c(n) einx ,
n =
l
1
c(n) =
2
20.2.3
f (x) einx dx .
The price well have to pay in order to express non-periodic functions as a weighted
sum of frequencies is that the Fourier basis will no longer be a discrete set of functions, {einx }nZ , indexed by an integer, n Z, and neither will their corresponding
weights, {cn }nZ be discretely indexable. Instead, n will have to be replaced by a real
number, s. This means c(n) is going to be a full-fledged function of the real numbers,
c(s), < s < .
The above formulas have to be modified in the following ways:
The integer n must become a real number s.
The sum will have to turn into an integral.
The limits of integration have to be changed from to .
561
20.2.4
This weighting function, c(s), is computable from f (x) using the companion formula,
Z
1
c(s) =
f (x) eisx dx .
2
It is this last function of s that we call the Fourier Transform, or FT , of the function
f (x), and it is usually denoted by the capital letter of the function we are transforming,
in this case F (s),
Z
1
f (x) eisx dx .
F (s) =
2
Notation
We denote the Fourier transform operator using FT , as in
F = FT (f ),
FT
f F
or, using the script notation F ,
F = F (f ),
F
f F .
20.2.5
We also consider the inverse of the Fourier transform, which allows us to recover f
from F
f = FT 1 (F ),
f = F 1 (F ).
562
or
20.2.6
or
f (x).
=
563
f (x).
20.2.7
Unlike Fourier series, where practically any reasonable periodic function possessed a
Fourier series, the same cannot be said of Fourier transforms. Since our function is
564
now free-range over all of R, like a hyper-chicken in a billion acre ranch, it might
go anywhere. We have to be careful that the Fourier integral converges. One oft cited
sufficient condition is that f (x) be absolutely integrable, i.e.,
Z
|f (x)| dx < .
As you can see, simple functions. like f (x) = x or f (x) = x2 x3 + 1 dont pass
the absolute-integrability test. We need functions that tend to zero strongly at both
, like a pulse or wavelet that peters-out at both sides. Some pictures to help you
visualize the graphs of square integrable functions are seen in figure 20.3. The main
characteristic is that they peter-out towards .
(The functions that I am claiming possess a Fourier transform are the fk (x) =
k2 (x), since these have already been squared, and are ready to be declared absolutelyintegrable, a stronger requirement than square-integrable.)
Figure 20.3: Square-integrable wavefunctions from Wikipedia StationaryStatesAnimation.gif, leading to the absolutely integrable k2 (x)
20.3
Learning to Compute
Lets do a couple even functions, since they give real-valued Fourier transforms which
are easy to graph.
20.3.1
1, |x| .5
0, everywhere else
565
=
e
e
=
is 2
is 2
.5
r
r
i(.5s)
2 1 e
2 sin .5s
ei(.5s)
=
=
.
s
2i
s
That wasnt so bad. You can see both functions graphs in Figure 20.4. They demonstrate that while f is restricted to a compact support, F requires the entire real line for
its full definition. Well see that this is no accident, and has profound consequences.
20.3.2 Example 2: Gaussian

Next, consider the Gaussian

\[ f(x) \;=\; N e^{-x^2/(2\sigma^2)}\,. \]

$N$ is the height of its peak at the origin, and $\sigma$ is called the standard deviation, which conveys what percentage of the total area under $f$ falls between $\pm k\sigma$, for $k = 1, 2, 3$, or any multiple we like. When $k = 3$, 99.7% of the area is covered. (Figure 20.5 demonstrates this.)
Its Fourier transform is

\[ F(s) \;=\; \frac{N}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-x^2/(2\sigma^2)}\, e^{-isx}\, dx, \]

which can be computed by completing the square and using a polar-coordinates trick (look it up; it's fun). The result is

\[ F(s) \;=\; N\sigma\, e^{-s^2\sigma^2/2}, \]
which, if you look carefully, is another Gaussian, but now with a different height and standard deviation. The standard deviation is now $1/\sigma$, rather than $\sigma$. Loosely speaking, if the spread of $f$ is wide, the spread of $F$ is narrow, and vice versa.
20.4 The Delta Function

20.4.1 Definition
The delta function, $\delta(x)$, is an idealized object satisfying

\[ \delta(x) \;=\; \begin{cases} \text{``}\infty\text{''}, & \text{if } x = 0 \\ 0, & \text{otherwise} \end{cases} \]

and

\[ \int_{-\infty}^{\infty} \delta(x)\, dx \;=\; 1\,. \]
(Since $\delta$ is 0 away from the origin, the limits of integration can be chosen to be any interval that contains the origin.)
The delta function is also known as the Dirac delta function, after the physicist Paul Dirac, who introduced it into the literature. It can't be graphed, exactly, since it requires information not visible on a page, but it has a graphic representation as shown in figure 20.6.
20.4.2 The Delta Function as a Limit

There are many ways to make this notation rigorous, the simplest of which is to visualize $\delta(x)$ as the limit of a sequence of functions, $\{\delta_n(x)\}$, the $n$th box function defined by

\[ \delta_n(x) \;=\; \begin{cases} n, & \text{if } x \in \left[ -\frac{1}{2n},\, \frac{1}{2n} \right] \\ 0, & \text{otherwise.} \end{cases} \]
Each $\delta_n(x)$ satisfies the integration requirement, and as $n \to \infty$, $\delta_n(x)$ becomes arbitrarily narrow and tall, maintaining its unit area all the while. (See Figure 20.7.)

Figure 20.7: Sequence of box functions that approximate the delta function with increasing accuracy

And if we're not too bothered by the imprecision of informality, we accept the definition

\[ \delta(x) \;\equiv\; \lim_{n\to\infty} \delta_n(x)\,. \]

In fact, the converging family of functions $\{\delta_n(x)\}$ serves a dual purpose. In a computer we would select an $N$ large enough to provide some desired level of accuracy and use $\delta_N(x)$ as an approximation for $\delta(x)$, thus creating a true function (no infinities involved) which can be used for computations, yet still has the properties we require.
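A minimal sketch of that idea in code (the function name is mine):

#include <cmath>

// Box-function approximation delta_n(x): height n on [-1/(2n), 1/(2n)], else 0.
// For large n this is a computable stand-in for the delta function.
double deltaBox(double x, int n)
{
   return (std::fabs(x) <= 1.0 / (2.0 * n)) ? n : 0.0;
}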
20.4.3 A Smoother Approximating Family

While simple, the previous definition lacks smoothness. Any function that has unit area and is essentially zero away from the origin will work. A smoother family of Gaussian functions, indexed by a real parameter $\lambda$, is

\[ \delta_\lambda(x) \;=\; \sqrt{\frac{\lambda}{\pi}}\; e^{-\lambda x^2}, \qquad \delta(x) \;\equiv\; \lim_{\lambda \to \infty} \delta_\lambda(x)\,. \]
Figure 20.8: Sequence of smooth functions that approximate the delta function with
increasing accuracy
Using the integration tricks mentioned earlier, you can confirm that this family has the integration properties needed.
20.4.4 The Sifting Property

Among the many properties of the delta function is its ability to pick out (sift) an individual value $f(x_0)$ of any function $f$ at the domain point $x_0$ (no matter how misbehaved $f$ is),

\[ f(x_0) \;=\; \int_{-\infty}^{\infty} f(x)\, \delta(x - x_0)\, dx\,, \]

or (equivalently),

\[ f(x_0) \;=\; \int_{x_0 - \epsilon}^{x_0 + \epsilon} f(x)\, \delta(x - x_0)\, dx\,. \]
You can prove this by doing the integration on the approximating sequence $\{\delta_n(x)\}$ and taking the limit.

This sifting property is useful in its own right, but it also gives another way to express the delta function,

\[ \delta(x) \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{isx}\, ds\,. \]
It looks weird, I know, but you have all the tools to prove it. Here are the steps:

1. Compute $\mathcal{F}\big(\delta(x - x_0)\big)$ with the help of the sifting property.
2. Take $\mathcal{F}^{-1}$ of both sides.
3. Set $x_0 = 0$.
[Exercise. Show these steps explicitly.]
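For reference, here is how those steps play out under our conventions (a sketch, not a rigorous proof):

\[ \text{Step 1:}\quad \mathcal{F}\big[\delta(x - x_0)\big](s) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \delta(x - x_0)\, e^{-isx}\, dx \;=\; \frac{e^{-isx_0}}{\sqrt{2\pi}}\,. \]

\[ \text{Step 2:}\quad \delta(x - x_0) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{e^{-isx_0}}{\sqrt{2\pi}}\, e^{isx}\, ds \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{is(x - x_0)}\, ds\,. \]

\[ \text{Step 3:}\quad x_0 = 0 \;\Longrightarrow\; \delta(x) \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{isx}\, ds\,.\ \checkmark \]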
20.5 Fourier Transforms of Non-Integrable Functions
It turns out to be convenient to take Fourier transforms of functions that are not absolutely integrable. However, not only are such functions not guaranteed to have converging FT integrals, the ones we want to transform in fact do not have converging FT integrals. That's not going to stop us, though, because we just introduced a non-function function, $\delta(x)$, which will be at the receiving end when we start with a not-so-well-behaved $f$.
20.5.1
Example 3: A Constant
\[ \mathcal{F}\big[\, f(x) = 1 \,\big](s) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} 1 \cdot e^{-isx}\, dx\,. \]
The integral looks very much like an expression of the delta function. (Compare it to the last of our many definitions of $\delta(x)$, about half a page up.) If the exponent did not have that minus sign, the integral would be exactly $2\pi\,\delta(s)$, making the answer $\sqrt{2\pi}\,\delta(s)$. That suggests that we use integration by substitution, setting $x' = -x$, and wind up with an integral that does match this last expression of the delta function. That would work: [Exercise. Try it.]

For an interesting alternative, let's simply guess that the minus sign in the exponent doesn't matter, implying the answer would still be $\sqrt{2\pi}\,\delta(s)$. We then test our hypothesis by taking $\mathcal{F}^{-1}\big[ \sqrt{2\pi}\,\delta(s) \big]$ and confirming that it gives us back $f(x) = 1$. Watch.
\[ \mathcal{F}^{-1}\Big[ \sqrt{2\pi}\,\delta(s) \Big](x) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \sqrt{2\pi}\,\delta(s)\, e^{isx}\, ds \;=\; \int_{-\infty}^{\infty} \delta(s)\, e^{isx}\, ds \;=\; \int_{-\infty}^{\infty} \delta(s - 0)\, e^{isx}\, ds \;=\; e^{i0x} \;=\; 1.\ \checkmark \]
The last line comes about by applying the sifting property of $\delta(s)$ to the function $e^{isx}$ at the point $x_0 = 0$.

We have a Fourier transform pair,

\[ 1 \;\overset{\mathcal{F}}{\longleftrightarrow}\; \sqrt{2\pi}\,\delta(s)\,, \]

and we could have started with a $\delta(x)$ in the spatial domain, which would have given a constant in the frequency domain. (The delta function apparently has equal amounts of all frequencies.)
20.5.2
Example 4: A Cosine
Next we try $f(x) = \cos x$. This is done by first solving Euler's formula for cosine, then using the definition of $\delta(x)$,

\[ \begin{aligned} \mathcal{F}[\cos x](s) \;&=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \cos(x)\, e^{-isx}\, dx \\ &=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{e^{ix} + e^{-ix}}{2}\, e^{-isx}\, dx \\ &=\; \frac{1}{2\sqrt{2\pi}} \left( \int_{-\infty}^{\infty} e^{ix(1-s)}\, dx \;+\; \int_{-\infty}^{\infty} e^{-ix(1+s)}\, dx \right) \\ &=\; \frac{1}{2\sqrt{2\pi}} \Big( 2\pi\,\delta(1-s) \;+\; 2\pi\,\delta(1+s) \Big) \\ &=\; \sqrt{\frac{\pi}{2}}\, \Big( \delta(1-s) + \delta(1+s) \Big)\,. \end{aligned} \]
20.5.3 Example 5: A Sine

$f(x) = \sin x$ can be derived exactly the same way that we did the cosine, only here we solve Euler's formula for sine, giving

\[ \mathcal{F}[\sin x](s) \;=\; i\,\sqrt{\frac{\pi}{2}}\, \Big( \delta(1+s) - \delta(1-s) \Big)\,. \]
(Notice that this transform is purely imaginary.) What do you remember from the last few sections that would have predicted this result?

Does the FT of $\sin x$ (or $\cos x$) make sense? It should. The spectrum of $\sin x$ needs only one frequency to represent it: $|s| = 1$. (We get two impulses, of course, because there's an impulse at $s = \pm 1$, but there's only one magnitude.) If we had done $\sin ax$ instead, we would have seen the impulses appear at $s = \pm a$, instead of $\pm 1$, which agrees with our intuition that $\sin ax$ requires only one frequency to represent it. The Fourier coefficients (weights) are zero everywhere except for $s = \pm a$.

(Use this reasoning to explain why a constant function has an FT consisting of one impulse at 0.)

As an exercise, you can throw constants into any of the functions whose FTs we computed, above. For example, try doing $A \sin(2nx)$.
20.6 Properties of the Fourier Transform
There are some oft-cited facts about the Fourier Transform that we present in this section. The first is one we'll need in this course, and the others are results that you'll probably use if you take courses in physics or engineering.
20.6.1
Translation Invariance
One aspect of the FT we will find useful is its shift property. For real $\alpha$ (the only kind of number that makes sense when $f$ happens to be a function of $\mathbb{R}$),

\[ f(x - \alpha) \;\overset{\mathcal{F}}{\longleftrightarrow}\; e^{-is\alpha}\, F(s)\,. \]

This can be stated in various equivalent forms, one of which is

\[ e^{i\alpha x} f(x) \;\overset{\mathcal{F}}{\longleftrightarrow}\; F(s - \alpha)\,, \]

where in the last version we do need to state that $\alpha$ is real, since $F$ is generally a complex-valued function.
If you translate (move five feet or delay by two seconds) the function in the spatial or time domain, it causes a benign phase shift (by $e^{-is\alpha}$) in the frequency domain. Seen in reverse, if you translate all the frequencies by a constant, this only multiplies the spatial or time signal by a unit vector in $\mathbb{C}$, again, something that can usually be ignored (although it may not make sense if you are only considering real $f$). This means that the translation in one domain has no measurable effect on the magnitude of the signal in the other.

This is a kind of invariance, because we don't care as much about $e^{-i\alpha s} F(s)$ as we do its absolute-value-squared, and in that case

\[ \left| e^{-i\alpha s} F(s) \right|^2 \;=\; \left| e^{-i\alpha s} \right|^2 |F(s)|^2 \;=\; |F(s)|^2\,. \]
Since we'll be calculating probabilities of quantum states, and probabilities are the amplitudes' absolute-values-squared, this says that both $f(x)$ and $f(x + \alpha)$ have Fourier transforms which possess the same absolute values and therefore the same probabilities. We'll use this in Shor's quantum period-finding algorithm.
20.6.2 Plancherel's Theorem

There's an interesting and crucial fact about $f$ and $F$: the areas under their squared magnitudes are equal. This turns out to be true of all functions and their Fourier transforms, and the result is called Plancherel's Theorem.

Plancherel's Theorem. For any Fourier transform pair, $F$ and $f$, we have

\[ \int_{-\infty}^{\infty} |f(x)|^2\, dx \;=\; \int_{-\infty}^{\infty} |F(s)|^2\, ds\,. \]
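As a quick sanity check, apply this to Example 1, using the known integral $\int_{-\infty}^{\infty} \sin^2(as)/s^2\, ds = \pi a$:

\[ \int_{-\infty}^{\infty} |f(x)|^2\, dx \;=\; \int_{-.5}^{.5} 1\, dx \;=\; 1, \qquad \int_{-\infty}^{\infty} |F(s)|^2\, ds \;=\; \frac{2}{\pi} \int_{-\infty}^{\infty} \frac{\sin^2(.5s)}{s^2}\, ds \;=\; \frac{2}{\pi} \cdot \frac{\pi}{2} \;=\; 1\,.\ \checkmark \]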
20.6.3
Convolution
One last property that is heavily used in engineering, signal processing, math and
physics is the convolution theorem.
A convolution is a binary operator on two functions that produces a third function. Say we have an input signal (maybe an image), $f$, and a filter function $g$ that we want to apply to $f$. Think of $g$ as anything you want to do to the signal. Do you want to reduce the salt-and-pepper noise in the image? There's a $g$ for that. Do you want to make the image high contrast? There's a $g$ for that. How about looking at only vertical edges (which a robot would care about when slewing its arms)? There's another $g$ for that. The filter $g$ is applied to the signal $f$ to get the output signal, which we denote $f * g$ and call the convolution of $f$ and $g$: The convolution of $f$ and $g$, written $f * g$, is the function defined by

\[ [f * g](x) \;\equiv\; \int_{-\infty}^{\infty} f(\xi)\, g(x - \xi)\, d\xi\,. \]
The simplest filter to imagine is one that smooths out the rough edges (noise). This is done by replacing $f(x)$ with a function $h(x)$ which, at each $x$, is the average over some interval containing $x$, say $\pm 2$ from $x$. With this idea, $h(10)$ would be

\[ h(10) \;=\; K \int_{8}^{12} f(\xi)\, d\xi\,, \qquad\text{and similarly}\qquad h(10.1) \;=\; K \int_{8.1}^{12.1} f(\xi)\, d\xi\,. \]

(Here $K$ is some normalizing constant like 1/(sample interval).) If $f$ had lots of change from any $x$ to its close neighbors (noise), $|f(10) - f(10.1)|$ could be quite large. But $|h(10) - h(10.1)|$ will be small, since the two numbers are integrals over almost the same interval around $x = 10$. This is sometimes called a running average and is used to track financial markets by filtering out the moment-to-moment or day-to-day noise. Well, this is nothing more than a convolution of $f$ and $g$, where $g(x) = K$ for $|x| \le$ (# days to avg.), and 0 everywhere else ($K$ often chosen to be 1/(# days to avg.)).
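Here is a minimal discrete sketch of that running-average filter (the function name and array framing are mine, not the text's):

#include <vector>

// Discrete running average: convolve the signal with a box filter of width
// (2*halfWidth + 1) whose entries are all K = 1/(window size).
std::vector<double> movingAverage(const std::vector<double>& f, int halfWidth)
{
   int n = static_cast<int>(f.size());
   std::vector<double> h(n, 0.0);
   double K = 1.0 / (2 * halfWidth + 1);
   for (int x = 0; x < n; x++)
      for (int d = -halfWidth; d <= halfWidth; d++)
      {
         int idx = x + d;
         if (idx >= 0 && idx < n)        // ignore samples past the ends
            h[x] += K * f[idx];
      }
   return h;
}

Convolving with this box g is exactly the averaging h(x) described above, up to how one treats the ends of the array.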
20.6.4 The Convolution Theorem

The convolution theorem tells us that rather than apply a convolution to two functions directly, we can get it by taking the ordinary point-by-point multiplication of their Fourier transforms, something that is actually easier and faster due to fast algorithms to compute transforms.

The Convolution Theorem.

\[ f * g \;=\; \sqrt{2\pi}\; \mathcal{F}^{-1}\big[\, \mathcal{F}(f)\, \mathcal{F}(g) \,\big] \]
20.7 Period and Frequency

Let's take a moment to review the relationship between the period and frequency of the particularly pure periodic function $\sin nx$, for some integer $n$. For $\sin nx$, the period is $T = 2\pi/n$ and the (angular) frequency is $f = n$, so

\[ T f \;=\; 2\pi\,, \]

reminding us that if we know the period we know the (angular) frequency, and vice versa.
20.8
Applications
20.8.1
The list of applications of the FT is impressive. It ranges from cleaning up noisy audio signals to applying special filters in digital photography, and it's used throughout communications.
20.8.2
In quantum mechanics, every student studies a Gaussian wave-packet, which has the
same form as that of our example, but with an interpretation: f = is a particles position-state, with (x) being an amplitude. We have seen that the magnitude
squared of the amplitudes tell us the probability that the particle has a certain position. So, if represents a wave-packet in position space, then |(x)|2 reveals relative
likelihoods that the particle is at position x. Meanwhile = F (), as we learned a
moment ago, is the same state in terms of momentum. If we want the probabilities
for momentum, we would graph (s).
(Figures 20.15 and 20.16 show two different wave-packet Gaussians after taking
their absolute value-squared and likewise for their Fourier transforms.
Figure 20.16: A more localized Gaussian with 2 = 1/7 and its Fourier transform
The second pair represents an initial narrower spread of its position state compared
to the first. The uncertainty of its position has become smaller. But notice what
happened to its Fourier transform, which is the probability density for momentum.
Its uncertainty has become larger (a wider spread). As we set up the experiment to
pin down its position, any measurement of its momentum will be less certain. You are
looking at the actual mathematical expression of the Heisenberg uncertainty principle
in the specific case where the two observables are position and momentum.
20.9
Summary
This is a mere fraction of the important techniques and properties of the Fourier transform, and I'd like to dig deeper, but we're on a mission. If you're interested, I recommend researching the sampling theorem and the Nyquist frequency.

And with that, we wrap up our overview of the classical Fourier series and Fourier transform. Our next step on the ladder to the QFT is the discrete Fourier transform.
Chapter 21
The Discrete and Fast Fourier
Transforms
21.1
This is the last of three classical chapters in Fourier theory, meant to prepare you for the quantum Fourier transform. It fits into the full path according to:

Chapter 19 [Real Fourier Series → Complex Fourier Series] → Chapter 20 [Fourier Transform] → Chapter 21 [DFT → FFT]
21.2
21.2.1 Functions Mapping $\mathbb{Z}_N \to \mathbb{C}$
A function $f$ on $\mathbb{Z}_N$ assigns a complex number to each of $0, 1, \ldots, N-1$, and we can convey all the information about a particular function using an array, or vector,

\[ f \;\longleftrightarrow\; \begin{pmatrix} f(0) \\ f(1) \\ \vdots \\ f(N-1) \end{pmatrix} \;=\; \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_{N-1} \end{pmatrix}. \]
In other words, the function $f$ can be viewed as a vector in $\mathbb{C}^N$ (or possibly $\mathbb{R}^N$). You will see me switch freely between functional notation, $f(k)$, and vector notation, $f_k$ or $c_k$. The vectors can be 2-, 3-, or $N$-dimensional. They may model a 2-D security cam image, a 3-D printer job or an $N$-dimensional quantum system.

Applicability of Complex Vectors

Since $N$-dimensional quantum systems are the stuff of this course, we'll need the vectors to be complex: Quantum mechanics requires complex scalars in order to accurately model physical systems and create relative phase differences of superposition states. Although our initial vector coordinates might be real (in fact, they may come from the tiny set $\{0, 1\}$), we'll still want to think of them as living in $\mathbb{C}$. Indeed, our operators and gates will convert such coordinates into complex numbers.
21.2.2
The definitions and results of the classical FT carry over nicely to the DFT.

Primitive Nth Roots are Central

We start by reviving our notation for the complex primitive Nth root of unity,

\[ \omega_N \;\equiv\; e^{2\pi i/N}. \]

(I say the primitive Nth root because in this course I only consider this one number to hold that title. It removes ambiguity and simplifies the discussion.) When clear from the context, we'll suppress the subscript $N$ and simply use $\omega$,

\[ \omega \;\equiv\; \omega_N\,. \]

From the primitive Nth root, we generate all $N$ of the roots (including 1, itself):

\[ 1,\; \omega,\; \omega^2,\; \omega^3,\; \ldots,\; \omega^{N-1} \]

or

\[ 1,\; e^{2\pi i/N},\; e^{4\pi i/N},\; e^{6\pi i/N},\; \ldots,\; e^{(N-1)2\pi i/N}. \]

These roots will be central to our definition of the DFT.
Figure 21.3: Primitive 5th root of 1 (the thick radius) and the four other 5th roots it
generates
Recap of the Continuous Fourier Transform

Let's look again at the way the FT maps (mostly continuous) functions to other (mostly continuous) functions. The FT, also written $\mathcal{F}$, was defined as a map between functions,

\[ F = \mathcal{F}(f), \qquad f \;\overset{\mathcal{F}}{\longrightarrow}\; F, \]

which produced $F$ from $f$ using the formula

\[ F(s) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, e^{-isx}\, dx\,. \]

It's easy to forget what the FT is and think of it merely as the above formula, so I'll pester you by emphasizing the reason for this definition: we wanted to express $f(x)$ as a weighted sum of frequencies, $s$, the weights being $F(s)$,

\[ f(x) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} F(s)\, e^{isx}\, ds\,. \]
Adapting the Definition to the DFT

To define the DFT, we start with the FT and make the following adjustments.

The continuous FT expands $f$ over a family of pure-frequency functions parametrized by $s$,

\[ \varphi_s(x) \;\equiv\; e^{isx}. \]

Let's rewrite the spectrum, $F$, using the symbolism of this $s$-parameter family, $\varphi_s(x)$, in place of $e^{isx}$,

\[ F(s) \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x)\, \varphi_s^{*}(x)\, dx\,. \]

In the discrete case, we want functions of the index $k$, each of whose constant frequency is parametrized by the discrete parameter $j$. The $N$th roots of unity provide the ideal surrogate for the continuously parametrized $\varphi_s$. To make the analogy to the FT as true as possible, I will take the negative of all the roots' exponents (which produces the same $N$ roots of unity, but in a different order), and define

\[ \varphi_j(k) \;\equiv\; \omega_N^{-jk} \;=\; \omega^{-jk}. \]

The $N$ functions $\{\varphi_j\}_{j=0}^{N-1}$ replace the infinite $\{\varphi_s\}_{s\in\mathbb{R}}$. In other words, the continuous basis functions of $x$, $e^{isx}$, parametrized by $s$, become $N$ vectors,

\[ v_j \;=\; \begin{pmatrix} v_{j0} \\ v_{j1} \\ \vdots \\ v_{jk} \\ \vdots \\ v_{j(N-1)} \end{pmatrix} \;=\; \begin{pmatrix} \omega^{-j\cdot 0} \\ \omega^{-j\cdot 1} \\ \vdots \\ \omega^{-jk} \\ \vdots \\ \omega^{-j(N-1)} \end{pmatrix}, \qquad j = 0, 1, \ldots, N-1, \]

where $k$ is the coordinate index and $j$ is the parameter that labels each vector.
The DFT of the vector $f = (f_k)$ is the vector $F = (F_j)$ whose coordinates are

\[ F_j \;\equiv\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} f_k\, \omega^{-jk}, \qquad \text{for } j = 0, 1, \ldots, N-1\,, \]

written

\[ F \;\equiv\; DFT(f) \;\equiv\; DFT[f]\,. \]

The $j$th coordinate of the output can be also expressed in various equivalent ways,

\[ F_j \;\equiv\; [DFT(f)]_j \;\equiv\; DFT(f)_j \;\equiv\; DFT[f]_j\,. \]

The last two lack surrounding parentheses or brackets, but the subscript $j$ still applies to the entire output vector, $F$.
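The definition translates directly into a (slow, $O(N^2)$) routine; this sketch assumes the rootsOfUnity() helper sketched earlier:

#include <cmath>
#include <complex>
#include <vector>

// Direct O(N^2) evaluation of F_j = (1/sqrt(N)) * sum_k f_k * omega^{-jk},
// a literal transcription of the definition.
std::vector<std::complex<double>> dft(const std::vector<std::complex<double>>& f)
{
   int N = static_cast<int>(f.size());
   std::vector<std::complex<double>> roots = rootsOfUnity(N);
   std::vector<std::complex<double>> F(N);
   for (int j = 0; j < N; j++)
   {
      std::complex<double> sum = 0.0;
      for (int k = 0; k < N; k++)
         sum += f[k] * std::conj(roots[(j * k) % N]);   // omega^{-jk} = conj(omega^{jk})
      F[j] = sum / std::sqrt(static_cast<double>(N));
   }
   return F;
}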
Note on Alternate Definitions
Just as with the FT , the DFT has many variants all essentially equivalent but
producing slightly different constants
or minus signs in the results. In one case the
forward DFT has no factor of 1/ 2, but the reverse DFT contains a full 1/(2). In
another, the exponential is positive. To make things more confusing, a third version
has a positive exponent of , but is defined to have the minus sign built-into it, so
the overall definition is actually the same as ours. Be ready to see deviations as you
perambulate the literature.
Inverse DFT

Our expectation is that $\{F_j\}$ so defined will provide the weighting factors needed to make an expansion of $f$ as a weighted sum of the frequencies $\varphi_j$,

\[ f_k \;=\; \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} F_j\, \omega^{kj}\,, \]

and we can confirm it by substituting the definition of $F_j$ into the RHS:

\[ \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} F_j\, \omega^{kj} \;=\; \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} \left( \frac{1}{\sqrt{N}} \sum_{m=0}^{N-1} f_m\, \omega^{-mj} \right) \omega^{kj} \;=\; \frac{1}{N} \sum_{m=0}^{N-1} f_m \left( \sum_{j=0}^{N-1} \omega^{(k-m)j} \right). \]

From exercise (d) of the section Roots of Unity (at the end of our complex arithmetic lecture) the sum in parentheses collapses to $N\,\delta_{km}$, so the double sum becomes

\[ \frac{1}{N} \sum_{m=0}^{N-1} f_m\, \big( N\,\delta_{km} \big) \;=\; f_k\,. \qquad \text{QED} \]
21.3 The DFT as a Matrix

Being a linear map of $\mathbb{C}^N$ to itself, the DFT can be packaged as an $N \times N$ matrix whose $(j, k)$ entry is $\omega^{-jk}/\sqrt{N}$,

\[ W \;=\; \frac{1}{\sqrt{N}} \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega^{-1} & \omega^{-2} & \cdots & \omega^{-(N-1)} \\ 1 & \omega^{-2} & \omega^{-4} & \cdots & \omega^{-2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{-(N-1)} & \omega^{-2(N-1)} & \cdots & \omega^{-(N-1)(N-1)} \end{pmatrix}. \]

Now, we can express the DFT of the vector $(f_k)$ as

\[ DFT[f] \;=\; W\, (f_k)\,, \]

since the $j$th coordinate of the matrix-vector product is exactly

\[ \big[ W (f_k) \big]_j \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} f_k\, \omega^{-jk} \;=\; F_j\,. \]
21.4
Properties of DFT
All the results for the continuous FT carry over to the DFT . (If you skipped the
continuous FT chapter, take these as definitions without worry.)
21.4.1 Convolution

The discrete convolution of two vectors is

\[ [f * g]_k \;\equiv\; \sum_{l=0}^{N-1} f_l\, g_{k-l}\,, \]

with the index $k - l$ taken mod $N$. The convolution theorem for the continuous FT holds in the discrete case, where we still have a way to compute the (this time discrete) convolution by using the DFT:

\[ f * g \;=\; \sqrt{N}\; DFT^{-1}\big[\, DFT(f)\, DFT(g) \,\big] \]
21.4.2 Translation Invariance

Translation invariance, a.k.a. the shift property, holds in the discrete case in the form of

\[ f_{k-l} \;\overset{DFT}{\longmapsto}\; \omega^{-lk}\, F_k\,. \]

As you can see, it's the same idea: a spatial or time translation in one domain (shifting the index by $l$) corresponds to a phase shift (multiplication by a root of unity) in the other domain.
21.4.3 Time Complexity

The formula

\[ [DFT(f)]_j \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} f_k\, \omega^{-jk} \]

tells us that we have $N$ complex terms to add and multiply for each resulting component $F_j$ in the spectrum. Since the spectrum consists of $N$ coordinates, that's $N^2$ complex operations, each of which consists of two or three real operations, so it's still $O(N^2)$ in terms of real operations. Furthermore, we often use fixed-point (fixed precision) arithmetic in computers, making the real sums and products independent of the size, $N$, of the vector. Thus, the DFT is $O(N^2)$, period.

You might argue that as $N$ increases, so will the precision needed by each floating point multiplication or addition, which would require that we incorporate the number of digits of precision, $m$, into the growth estimate, causing it to be approximately $O(N^2 m^2)$. However, we'll stick with $O(N^2)$ and call this the time complexity relative to the arithmetic operations, i.e., above them. This is simpler, often correct, and will give us a fair basis for comparison with the upcoming fast Fourier transform and quantum Fourier transform.
21.5

A DFT applies only to functions defined on the bounded domain of integers $\mathbb{Z}_N = \{0, 1, 2, \ldots, N-1\}$. Within that interval of $N$ domain numbers, if the function repeats itself many times, that is, if it has a period $T$, where $T \ll N$ ($T$ is much less than $N$), then the function would exhibit periodicity relative to the domain. That, in turn, implies it has an associated frequency, $f$. In the continuous cases, $T$ and $f$ are related by one of two common relationships, either

\[ T f \;=\; 1 \qquad\text{or}\qquad T f \;=\; 2\pi\,. \]

One could actually make the constant on the RHS different from 1 or $2\pi$, although it's rarely done. But in the discrete case we do use a different constant, namely, $N$. We still have the periodicity condition that

\[ f(k + T) \;=\; f(k)\,, \]

for all $k$ (when both $k$ and $k + T$ are in the domain), but now the frequency is defined by

\[ T f \;=\; N\,. \]
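For instance, anticipating the examples below:

\[ N = 128,\quad T = 8 \qquad\Longrightarrow\qquad f \;=\; \frac{N}{T} \;=\; \frac{128}{8} \;=\; 16\,. \]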
If you took the DFT of a 128-element signal with no repeating pattern, for instance one whose rows begin

( .5 , 0 , .25 , .15 , .3 , 0 , .1 ,
0 , 0 , .1 , 0 , .25 , .3 , 0 , .6 ,
0 , 0 , .25 , .15 , .3 , 0 , .1 ,
0 , 0 , .1 , .35 , .3 , 0 , .1 ,
.3 , 0 , .5 , .45 , .3 , 0 , .1 ,
0 , 0 , .25 , .15 , .4 , .6 , .6 ,
0 , 0 , .2 , .15 , .3 , .6 , .1 ,
0 , 0 , .25 , .15 , .3 , 0 , .1 ,
0 , 1 , .6 , .15 , .1 , 0 , .1 ,
0 , 0 , .25 , .15 , .3 , 0 , .1 ,
0 , 2 , .25 , .75 , .2 , 0 , .1 ,
0 , 0 , .25 , .05 , .1 , 0 , .6 ,
0 , 0 , .25 , .0 , .3 , 0 , .1 ,
.25 , 0 , .25 , .15 , .0 , .6 , .1 ,
0 , .5 , .25 , .0 , .3 , 0 , .1 ,
0 , 0 , .25 , .15 , .3 , 0 , .1 ) ,
you would see no dominant frequency, which agrees with the fact that the function is not periodic. (See figure 21.4.)
By contrast, the next vector is periodic: it repeats the same eight-element block sixteen times,

( .1 , 0 , 0 , .25 , .15 , .3 , 0 , .1 ,
  .1 , 0 , 0 , .25 , .15 , .3 , 0 , .1 ,
  ... (the same block, fourteen more times) ...
  .1 , 0 , 0 , .25 , .15 , .3 , 0 , .1 ) ,

and its spectrum shows spikes only at multiples of its frequency (see figure 21.5). A third vector is zero everywhere except for a single spike repeated every eight slots, again a period of $T = 8$,
and this one has a DFT in which all the non-zero frequencies in the spectrum have
the same amplitudes, as seen in figure 21.6.
For all these examples I used $N = 128$, and for the two periodic cases, the period was $T = 8$ (look at the vectors), which would make the frequency $f = N/8 = 128/8 = 16$. You can see that all of the non-zero amplitudes in the spectrum (i.e., the DFT) are multiples of 16: 16, 32, 48, etc. (There is a phantom spike at 128, but that's just the zero-frequency term, since $128 \equiv 0 \pmod{128}$.)
21.6 The Fast Fourier Transform

21.6.1 Benefit

Although merely a computational technique to speed up the DFT, the fast Fourier transform, or FFT, is well worthy of our attention. Among its accolades,

1. it is short and easy to derive,
2. it has wide application to circuit and algorithm design,
3. it improves the DFT's $O(N^2)$ to $O(N \log N)$, a speed-up that changes the computational time of some large arrays from over an hour to a fraction of a second, and
4. the recursive nature of the solution sets the stage for studying the quantum Fourier transform (QFT) circuit.
21.6.2 Cost

A slightly sticky requirement is that it only operates on vectors which have exactly $N = 2^n$ components, but there are easy work-arounds. One is to simply pad a deficient $f$ with enough 0s to bring it up to the next power of two. For example, we might upgrade a 5-vector to an 8-vector like so:

\[ \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ f_3 \\ f_4 \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ f_3 \\ f_4 \\ 0 \\ 0 \\ 0 \end{pmatrix} \]
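In code, the padding work-around is a few lines (a sketch; the helper name is mine):

#include <complex>
#include <vector>

// Pad a signal with zeros up to the next power of two.
void padToPowerOfTwo(std::vector<std::complex<double>>& f)
{
   std::size_t n = 1;
   while (n < f.size())
      n <<= 1;               // smallest power of two >= f.size()
   f.resize(n, 0.0);         // append zeros; existing entries are untouched
}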
21.7
The development of a truly fast FFT proceeds in two stages. In this section, we
develop a recursive relation for the algorithm. In the next section, we show how
to turn that into a non-recursive, i.e., iterative, solution which is where the actual
speed-up happens.
21.7.1 Splitting into Even and Odd Sub-Arrays

We partition $f$ into its even-indexed and odd-indexed halves,

\[ f^{even} = \begin{pmatrix} f_0 \\ f_2 \\ f_4 \\ \vdots \\ f_{N-2} \end{pmatrix} \qquad\text{and}\qquad f^{odd} = \begin{pmatrix} f_1 \\ f_3 \\ f_5 \\ \vdots \\ f_{N-1} \end{pmatrix}. \]

In terms of coordinates,

\[ f_k^{even} \;\equiv\; f_{2k} \qquad\text{and}\qquad f_k^{odd} \;\equiv\; f_{2k+1}\,, \]

so the DFT splits accordingly,

\[ \begin{aligned} \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} f_k\, \omega^{-jk} \;&=\; \frac{1}{\sqrt{N}} \left( \sum_{k=0}^{\frac{N}{2}-1} f_k^{even}\, \omega^{-j(2k)} \;+\; \sum_{k=0}^{\frac{N}{2}-1} f_k^{odd}\, \omega^{-j(2k+1)} \right) \\ &=\; \frac{1}{\sqrt{N}} \left( \sum_{k=0}^{\frac{N}{2}-1} f_k^{even}\, \omega^{-j(2k)} \;+\; \omega^{-j} \sum_{k=0}^{\frac{N}{2}-1} f_k^{odd}\, \omega^{-j(2k)} \right) \\ &=\; \frac{1}{\sqrt{N}} \left( \sum_{k=0}^{\frac{N}{2}-1} f_k^{even}\, \big(\omega^2\big)^{-jk} \;+\; \omega^{-j} \sum_{k=0}^{\frac{N}{2}-1} f_k^{odd}\, \big(\omega^2\big)^{-jk} \right). \end{aligned} \]
21.7.2
(e.g., ( 8 a) = 4 a ). If we rewrite the final sum using 0 for 2 , N 0 for N/2 and
labeling the outside the odd sum as an N th root, things look very interesting:
!
0 1
N0 1
NX
1 X even 0 jk
1
1
jk
j
DFT [f ]j =
+ N
fk ( )
fkodd ( 0 )
0
0
2
N k=0
N k=0
We recognize each sum on the RHS as an order N/2 DFT , so lets go ahead and
label them as such, using an exponent label to help identify the orders. We get
1
j
DFT (N/2) f odd j
DFT (N ) [f ]j = DFT (N/2) [f even ]j + N
2
Since the j on the LHS can go from 0 (N 1), while both DFT s on the RHS are
only size N/2, we have to remind ourselves that we consider all these functions to be
periodic when convenient.
Start of Side-Trip

Let's clarify that last statement by following a short sequence of maneuvers. First turn an $N/2$-dimensional vector, call it $g$, into a periodic vector that is $N$- or even infinite-dimensional by assigning values to the excess coordinates based on the original $N/2$ with the help of an old trick,

\[ g(p + N/2) \;\equiv\; g(p)\,. \]

End of Side-Trip

The upshot is that the $j$ on the RHS can always be taken modulo the size of the vectors on the RHS, $N/2$. We'll add that detail for utter clarity:
\[ DFT^{(N)}[f]_j \;=\; \frac{1}{\sqrt{2}} \left( DFT^{(N/2)}\big[f^{even}\big]_{(j \bmod N/2)} \;+\; \omega_N^{-j}\, DFT^{(N/2)}\big[f^{odd}\big]_{(j \bmod N/2)} \right) \]

Finally, let's clear the smoke using shorthand like $F^{(N)} = DFT^{(N)}(f)$, $F_E = F^{even}$ and $F_O = F^{odd}$. The final form is due to Danielson and Lanczos, and dates back to 1942.
21.7.3 The Danielson-Lanczos Recursion Relation

\[ F^{(N)}_j \;=\; \frac{1}{\sqrt{2}} \left[ \Big(F_E^{(N/2)}\Big)_{(j \bmod N/2)} \;+\; \omega_N^{-j}\, \Big(F_O^{(N/2)}\Big)_{(j \bmod N/2)} \right] \]

We have reduced the computation of a size-$N$ DFT to that of two size-$(N/2)$ DFTs (and a constant-time multiplication and addition). Because we can do this all the way down to a size-1 DFT (which is just the identity operation; check it out), we are able to compute $F_j$ in $\log N$ iterations, each one a small, fixed number of complex additions and multiplications.
We're Not Quite There Yet

This is promising: We have to compute $N$ output coordinates, and Danielson-Lanczos tells us that we can get each one using what appears to be $\log N$ operations, so it seems like we have an $O(N \log N)$ algorithm.

Not so fast (literally and figuratively).

1. The cost of partitioning $f$ into $f^{even}$ and $f^{odd}$, unfortunately, does require running through the full array at each recursion level, so that's a deal breaker.

2. We can fix the above by passing the array in-place and just adding a couple parameters, start and gap, down each recursion level. This obviates the need to partition the arrays, but each time we compute a single output value, we still end up accessing and adding all $N$ of the original elements (do a little example to compute $F_3$ for an 8-element $f$).

3. Recursion has its costs, as any computer science student well knows, and there are many internal expenses that can ruin our efficiency even if we manage to fix the above two items, yet refuse to abandon recursion.

In fact, it took some 20 years before someone (Tukey and Cooley get the credit) figured out how to leverage this recursion relation to break the $O(N^2)$ barrier.
21.7.4 A Recursive Implementation

The client invokes setInSig() and toStringOut() to transfer the signals between it and the object, and also calls calcFftRecursive() to do the actual DFT computation. The client also uses a simple eight-element input signal for testing,

\[ \{f_k\}_{k=0}^{7} \;=\; \{\, 0,\ .1,\ .2,\ .3,\ .4,\ .5,\ .6,\ .7 \,\}\,. \]

Although the class proves to be only a slow $O(N^2)$ solution, we should not shrug off its details; as computer scientists, we need to have reliable benchmarks for future comparison and proof-of-correctness coding runs. Thus, we want to look inside class FftUtil and then test it.

The publicly exposed calcFftRecursive() called by the client leans on a private helper not seen by the client: calcFftRecWorker(). First the definition of the public member method:
// public method that client uses to request FFT computation
// assumes signal is loaded into private array inSig[]
bool FftUtil::calcFftRecursive()
{
   int k;

   // check for fatal allocation errors
   if (inSig == NULL || outSig == NULL)
      return false;

   // calculate FFT(k) for each k using recursive helper method
   for (k = 0; k < fftSize; k++)
      outSig[k] = (1 / sqrt(fftSize))
                  * calcFftRecWorker(inSig, k, fftSize, 1, 1);

   return true;
}
Here's the private recursive helper, which you should compare with the Danielson-Lanczos relation. It is a direct implementation which emerges naturally from that formula.
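(A sketch of such a helper; the parameter roles, the Complex typedef, and the body are assumptions of this sketch rather than the author's exact listing.)

// Returns the un-normalized j-th Danielson-Lanczos output for the sub-signal
// of length 'size' that begins at 'start' and strides through inSig[] by 'gap'.
// The 1/sqrt(2) per level is deferred: the caller applies 1/sqrt(N) once.
Complex FftUtil::calcFftRecWorker(Complex inSig[], int j, int size,
                                  int start, int gap)
{
   if (size == 1)
      return inSig[start];              // a size-one DFT is the identity

   // even terms live at start, start + 2*gap, ...; odd terms at start + gap, ...
   Complex even = calcFftRecWorker(inSig, j % (size/2), size/2, start,       gap*2);
   Complex odd  = calcFftRecWorker(inSig, j % (size/2), size/2, start + gap, gap*2);

   // twiddle = omega^{-j} for omega = e^{2 pi i / size}
   const double PI = 3.14159265358979323846;
   double angle = -2.0 * PI * j / size;

   return even + Complex(cos(angle), sin(angle)) * odd;
}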
I'll let you analyze the code to show that it does predict $O(N^2)$ big-O timing, but we will verify it using benchmarks in the next few lines.

The output confirms that we are getting the correct values.

/* ===================== sample run =====================

IN SIGNAL (recursive)
0    0.1  0.2  0.3
0.4  0.5  0.6  0.7
----------------
And we loop through different input-array sizes to see how the time complexity shakes out:

FFT size  1024 : ...
FFT size  2048 : ...
FFT size  4096 : ...
FFT size  8192 : ...
FFT size 16384 : ...
The pattern is unmistakable: doubling the array size causes the time to grow fourfold. This is classic $N^2$ time complexity. Besides the growth rate, the absolute times (four seconds for a modest-sized signal) are unacceptable.
21.8 The True FFT

Strictly speaking, the above code is not an FFT, since it is, frankly, just not that fast. A true FFT is an algorithm that produces $N \log N$ performance based on non-recursive techniques.

We now study the improved FFT algorithm, which consists of two main phases, bit-reversal and iterative array-building. We'll gain insight into these two sub-algorithms by previewing the very short high-level FFT code that invokes them.
21.8.1 The High-Level Method

To class FftUtil, we add public and private instance methods that will compute the DFT using a non-recursive algorithm. The highest-level public method will be the (forward) FFT, called calcFft(). It consists of three method calls:

1. Bit Reversal. The first implements bit-reversal by creating a new indexing window into our data. That is, it generates a utility array of indexes that will help us reorder the input signal. This utility array is independent of the input signal, but it does depend on the FFT size, $N$. This phase has the overall effect of reordering the input signal.

2. Iterative Array Building. The second applies an iterative algorithm that builds up from the newly ordered input array to ultimately replace it by the output DFT.

3. Normalization. Finally, we multiply the result by a normalizing factor.
Here's a bird's-eye view of the method.

// public method that client uses to request FFT computation
// assumes signal is loaded into private array inSig[]
bool FftUtil::calcFft()
{
   // throwing exception or single up-front test has slight appeal, but
   // following is safer for future changes to constituent methods

   if ( !copyInSigToAuxSigBitRev() )
      return false;

   if ( !combineEvenOdd(false) )
      return false;

   if ( !normalize() )
      return false;

   return true;
}
21.8.2 Bit-Reversal

The big break in our quest for speed comes when we recognize that the recursive algorithm leads, at its deepest nested level, to many tiny order-one arrays, and that happens after $\log N$ method calls. This is the end of the recursion, at which point we compute each of these order-one DFTs manually. It's the infamous escape valve of recursion. But computing the DFT of those size-one arrays is trivial:

\[ DFT^{(1)}(\,\{c\}\,) \;=\; \{c\}\,, \]

that is, the DFT of any single-element array is itself (apply the definition). So we don't really have to go all the way down to that level; there's nothing to do there. (Those size-one DFTs are already done, even if they are in a mixed-up order in our input signal.) Instead we can halt recursion when we have size-two arrays, at which point we compute the order-two DFTs explicitly. Take a look:

\[ \Big[ F^{(2)}_{EEOE\ldots OE} \Big]_j \;=\; \frac{1}{\sqrt{2}} \left[ f_p + (-1)^j f_q \right], \]

for some $p$ and $q$, gives us the $j$th component of the size-two DFTs. $F^{(2)}_{EEOE\ldots OE}$ represents one of the many order-two DFTs that result from the recursion relation by taking increasingly smaller even and odd sub-arrays in our recursive descent from size $N$ down to size two.
The Plan: Knowing that the first-order DFTs are just the original input signal's array elements (whose exact positions are unclear at the moment), our plan is to work not from the top down, recursively, but from the bottom up with the original array values, and build up from there. In other words, instead of recursing down from size $N$ to size one, we iterate up from size one to size $N$. To do that we need to get the original signal, $\{f_k\}$, in the right order in preparation for this rebuilding process.

So our first task is to re-order the input array so that every pair $f_p$ and $f_q$ that we want to combine to get a size-two DFT end up next to one another. While at it, we'll make sure that once they are computed, all size-two pairs which need to be combined to get the fourth-order DFTs will also be adjacent, and so on. This reordering is called bit-reversal. The reason for that name will be apparent shortly.

Let's start with an input signal of size $8 = 2^3$ that we wish to transform, and define it such that it'll be easy to track:

\[ \{f_k\}_{k=0}^{7} \;=\; \{\, 0,\ .1,\ .2,\ .3,\ .4,\ .5,\ .6,\ .7 \,\}\,. \]
We start at the top and, using the Danielson-Lanczos recursion relation, see how the original $f$ decomposes into two four-element even-odd sets, then four two-element even-odd sets, and finally eight singleton sets. Figure 21.7 shows how we separate the original eight-element $f^{(8)}$ into $f_E^{(4)}$ and $f_O^{(4)}$, each of length 4.
Figure 21.7: Going from an 8-element array to two 4-element arrays (one even and one odd)

Figure 21.8: Decomposing the even 4-element array, $f_E^{(4)}$, into two 2-element arrays (one even and one odd)
This time, for variety, we'll recurse on the odd sub-array, $f_{EO}^{(2)}$ (figure 21.9).
(Figures 21.9 through 21.11 carry the decomposition down to adjacent singleton pairs such as $\{f_0, f_4\}$, $\{f_2, f_6\}$, $\{f_1, f_5\}$ and $\{f_3, f_7\}$.)
Now, we want more than just adjacent pairs; we'd like the two-element DFTs that they generate to also be next to one another. Each of these pairs has to be positioned properly with respect to the rest. Now is the time for us to stand on the shoulders of the giants who came before and write down the full ordering we seek. This is shown in figure 21.12. (Confirm that the above pairs are adjacent.)

What you are looking at in figure 21.12 is the bit-reversal arrangement. It's so named because in order to get $f_6$, say, into its correct position for transform building, we reverse the bits of the integer index 6 (relative to the size of the overall transform, 8, which is three bits). It's easier to see than say: $6 = 110$ reversed is $011 = 3$, and indeed you will find the original $f_6 = .6$ ends up in position 3 of the bit-reversed array:
index (binary)   reversed bits   new position
0 (000)      →      000      =      0
1 (001)      →      100      =      4
2 (010)      →      010      =      2
3 (011)      →      110      =      6
4 (100)      →      001      =      1
5 (101)      →      101      =      5
6 (110)      →      011      =      3
7 (111)      →      111      =      7
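A driver for the helper below can be sketched in a couple of lines (the names buildRevIndex and revIndex are assumptions; only reverseOneInt survives verbatim):

// Fill a utility table so that revIndex[k] is k with its bits reversed.
void FftUtil::buildRevIndex()
{
   for (int k = 0; k < fftSize; k++)
      revIndex[k] = reverseOneInt(k);

   return;
}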
int FftUtil::reverseOneInt(int inVal)
{
   int retVal, logSize, inValSave;

   inValSave = inVal;

   // inVal and retVal are array locations, and size of the array is fftSize
   retVal = 0;
   for (logSize = fftSize >> 2; logSize > 0; logSize >>= 1)
   {
      retVal |= (inVal & 1);
      retVal <<= 1;
      inVal >>= 1;
   }

   // adjusts for off-by-one last half of array
   if (inValSave >= (fftSize >> 1))
      retVal++;

   return retVal;
}
Time Complexity

The driver method has a simple loop of size $N$, making it $O(N)$. In that loop, it calls the helper, which careful inspection reveals to be $O(\log N)$: the loop in that helper is managed by the statement logSize >>= 1, which halves the size of the array each pass, an action that always means log growth. Since this is a nesting of two loops, the full complexity is $O(N \log N)$.

Maybe Constant Time? This is done in series with the second phase of FFT rebuilding, so this complexity and that of the next phase do not multiply; we will take the slower of the two. But the story gets better. Bit reversal need only be done once for any size $N$ and can be skipped when new FFTs of the same size are computed. It prepares a static array that is independent of the input signal. In that sense, it is really a constant-time operation for a given FFT of order $N$.

Either way you look at it, this preparation code won't affect the final complexity, since we're about to see that the next phase is also $O(N \log N)$ and the normalization phase is only $O(N)$, making the full algorithm $O(N \log N)$ whether or not we count bit reversal.
21.8.3 Iterative Array Building

We continue to use the Danielson-Lanczos recursion relation to help guide our next steps. Once the input array is bit-reversed, we need to build up from there. Figure 21.13 shows how we do this at the first level, using the singleton values to build the order-two DFTs.

Note that since we are building a two-element DFT, we use the 2nd root of unity, a.k.a. $-1$. After we do this for all the pairs, thus replacing all the singletons with

Figure 21.13: $\big[ F_{EO}^{(2)} \big]_j = \big[ F_{EOE}^{(1)} \big]_{j \bmod 1} + (-1)^j\, \big[ F_{EOO}^{(1)} \big]_{j \bmod 1}$

two-element DFTs, we repeat the process at the next level: we build the four-element arrays from these two-element arrays. Figure 21.14 shows the process on one of the two four-element arrays, $F_E^{(4)}$. This time, we are using the 4th root of unity, $-i$, as the multiplier of the odd term.
Figure 21.14: $\big[ F_E^{(4)} \big]_j = \big[ F_{EE}^{(2)} \big]_{j \bmod 2} + (-i)^j\, \big[ F_{EO}^{(2)} \big]_{j \bmod 2}$, for $j = 0, 1, 2, 3$,
which replaces those eight one-element DFTs with four two-element DFTs. The code to do that isn't too bad.
Two-Element DFTs from Singletons:

// this computes the DFT of length 2 from the DFTs of length 1 using
//    F0 = f0 + f1
//    F1 = f0 - f1
// and does so, pairwise (after bit-reversal re-ordering).
// It represents the first iteration of the loop, but has the concepts
// of the recursive FFT formula.
// the roots[0], ..., roots[fftSize-1] are the nth roots in normal order
// which implies that -1 would be found at position fftSize/2

rootPos = fftSize/2;   // identifies location of the -1 = omega in first pass

for (base = 0; base < fftSize; base += 2)
{
   for (j = 0; j < 2; j++)
   {
      arrayPos = (j * (fftSize - rootPos)) % fftSize;   // -j * omega
      outSig[base + j] = inSig[base] + roots[arrayPos] * inSig[base + 1];
   }
}
If we were to apply only this code, we would be replacing the adjacent pairs (after bit-reversal) by their order-two DFTs. For an input signal $\{f_k\}_{k=0}^{7}$, the second iteration would make the analogous replacements:

- base += 2 becomes base += 4,
- rootPos = fftSize/2 becomes rootPos = fftSize/4,
- j < 2 becomes j < 4, and
- the operands inSig[base] and inSig[base + 1] become inSig[base + (j % 2)] and inSig[base + 2 + (j % 2)].

I'll let you write out the second iteration of the code that will combine DFTs of length two and produce DFTs of length four. After doing that exercise, one can see that the literals 1, 2, 4, etc. should be turned into a variable, groupsize, over which we loop (by surrounding the above code in an outer groupsize-loop). The result would be the final method.
Private Workhorse method combineEvenOdd():
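(A sketch of that workhorse; the body is my reconstruction of the iterative build-up just described, not the author's exact listing. It assumes the bit-reversed signal is already in outSig[], roots[m] = e^{2 pi i m / fftSize}, and a Complex typedef.)

bool FftUtil::combineEvenOdd(bool inverse)
{
   if (outSig == NULL)
      return false;

   for (int groupSize = 2; groupSize <= fftSize; groupSize *= 2)
   {
      int half    = groupSize / 2;
      int rootPos = fftSize / groupSize;    // roots[rootPos] is this level's omega

      for (int base = 0; base < fftSize; base += groupSize)
      {
         // copy the group so each output reads pre-update values
         std::vector<Complex> scratch(outSig + base, outSig + base + groupSize);

         for (int j = 0; j < groupSize; j++)
         {
            int arrayPos = (j * (fftSize - rootPos)) % fftSize;   // -j * omega
            outSig[base + j] = scratch[j % half]
                             + roots[arrayPos] * scratch[half + j % half];
         }
      }
   }
   // (the 'inverse' flag would flip the exponent's sign; omitted in this sketch)
   return true;
}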
Time Complexity

At first glance it might look like a triple-nested loop, leading to some horrific cubic performance. Upon closer examination we are relieved to find that

- the outer loop is a doubling of groupsize until it reaches $N$, essentially a $\log N$ proposition, and
- the two inner loops, together, touch each of the $N$ array positions exactly once per pass of the outer loop, an $O(N)$ proposition,

so the method is $O(N \log N)$.
21.8.4 Normalization

To complete the trio, we have to write the normalize() method. It's a simple linear-complexity loop, not adding to the growth of the algorithm.

The Normalize Method

bool FftUtil::normalize()
{
   double factor;
   int k;

   if (outSig == NULL)
      return false;

   factor = 1. / sqrt(mSize);
   for (k = 0; k < mSize; k++)
      outSig[k] = outSig[k] * factor;

   return true;
}
21.8.5
Overall Complexity
We have three methods in series, and the most costly of them is O(N log N ), making
the entire FFT O(N log N ).
21.8.6 Software Testing

The only thing left to do is time this and compare with the recursive approach. Here's the output:

FFT size  1024 : ...
FFT size  2048 : ...
FFT size  4096 : ...
FFT size  8192 : ...
FFT size 16384 : ...
FFT size 32768 : ...
FFT size 65536 : ...
It is slightly slower than linear, the difference being the expected factor of $\log N$ (although we couldn't tell that detail from the above times). Not only is this evidence of the $N \log N$ time complexity, but it is orders of magnitude faster than the recursive algorithm (4.076 seconds vs. .003 seconds for a 16k array). We finally have our true FFT.
This gives us more (far more) than enough classical background in order to tackle
the quantum Fourier transform.
Chapter 22
The Quantum Fourier Transform
22.1
Today you'll meet the capstone of our four-chapter sequence, the quantum Fourier transform, or QFT.

Chapter 19 [Real Fourier Series → Complex Fourier Series] → Chapter 20 [Fourier Transform] → Chapter 21 [DFT → FFT] → Chapter 22 [QFT]
22.2
Definitions
22.2.1 From $\mathbb{C}^{2^n}$ to $H^{(n)}$

We know that the discrete Fourier transform of order $N$, or DFT (its order usually implied by context), is a special operator that takes an $N$-dimensional complex vector $f = (f_k)$ to another complex vector $\widetilde{f} = (\widetilde{f}_j)$. In symbols,

\[ DFT: \mathbb{C}^N \to \mathbb{C}^N, \qquad DFT(f) \;\mapsto\; \widetilde{f}\,, \]

while the FFT, being a fast implementation of the same map, requires the dimension to be a power of two,

\[ FFT: \mathbb{C}^{2^n} \to \mathbb{C}^{2^n}, \qquad FFT[(f_k)] \;\mapsto\; (\widetilde{f}_j)\,. \]
We maintain this restriction on $N$ and continue to take $n$ to be $\log N$ (base 2 always implied).

We'll build the definition of the QFT atop our firm foundation of the DFT, so we start with

\[ DFT: \mathbb{C}^{2^n} \to \mathbb{C}^{2^n}, \]

a $2^n$th-order mapping of $\mathbb{C}^{2^n}$ to itself, and use that to define the $2^n$th-order QFT acting on an $n$th-order Hilbert space,

\[ QFT: H^{(n)} \to H^{(n)}\,. \]

[Order. The word order, when describing the $DFT = DFT^{(N)}$, means $N$, the dimension of the underlying space, while the same word, order, when applied to a tensor product space $H^{(n)}$ is $n$, the number of component single-qubit spaces in the product. The two orders are not the same: $N = 2^n$ or, equivalently, $n = \log N$.]
22.2.2
22.2.3
Let's make sure we understand these concepts before defining the QFT by reprising an example from our past, the Hadamard operator.

First Order Hadamard

We used method 1 to define $H = H^{\otimes 1}$, by

\[ H|0\rangle \;=\; \frac{|0\rangle + |1\rangle}{\sqrt{2}}\,, \qquad H|1\rangle \;=\; \frac{|0\rangle - |1\rangle}{\sqrt{2}}\,. \]

If

\[ |\psi\rangle \;=\; \alpha |0\rangle + \beta |1\rangle\,, \]

then, by linearity,

\[ H|\psi\rangle \;=\; \frac{\alpha + \beta}{\sqrt{2}}\, |0\rangle \;+\; \frac{\alpha - \beta}{\sqrt{2}}\, |1\rangle\,. \]

For the $n$th-order Hadamard we found, on the CBS,

\[ H^{\otimes n}\, |x\rangle^n \;=\; \frac{1}{\sqrt{2^n}} \sum_{y=0}^{2^n - 1} (-1)^{x \odot y}\, |y\rangle^n\,. \]
22.2.4
As long as we're careful to check that our definition provides a linear and unitary transformation, we are free to use a state's coordinates to define it. Consider a general state $|\psi\rangle^n$ and its preferred basis amplitudes (a.k.a. coordinates or coefficients),

\[ |\psi\rangle^n \;\longleftrightarrow\; (c_x)_{x=0}^{N-1} \;=\; \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_{N-1} \end{pmatrix}, \qquad N = 2^n. \]

We describe how the order-$N$ $QFT = QFT^{(N)}$ acts on the $2^n$ coefficients, and this will define the QFT for any state in $H^{(n)}$.
Concept Definition of the Order-N QFT

If

\[ |\psi\rangle^n \;\longleftrightarrow\; (c_x)_{x=0}^{N-1}\,, \]

then

\[ QFT^{(N)}\, |\psi\rangle^n \;\longleftrightarrow\; (\widetilde{c}_y)_{y=0}^{N-1} \;\equiv\; DFT\big[ (c_x) \big]\,. \]

In words, starting from $|\psi\rangle^n$, we form the vector of its amplitudes, $(c_x)$; we treat $(c_x)$ like an ordinary complex vector of size $2^n$; we take its $DFT^{(N)}$ to get another vector $(\widetilde{c}_y)$; we declare the coefficients $\{\widetilde{c}_y\}$ to be the amplitudes of our desired output state, $QFT^{(N)}|\psi\rangle^n$. The end.
Expressing the Order of the QFT

If we need it, we'll display the QFT's order in the superscript with the notation $QFT^{(N)}$. Quantum computer scientists don't usually specify the order in diagrams, so we'll often go with a plain QFT and remember that it operates on an order-$n$ Hilbert space having dimension $N = 2^n$.
Explicit Definition of the Order-N QFT

The concept definition, expressed formulaically, says that if

\[ |\psi\rangle \;=\; \sum_{x=0}^{N-1} c_x\, |x\rangle^n\,, \]

then

\[ QFT\, |\psi\rangle \;=\; \sum_{y=0}^{N-1} \widetilde{c}_y\, |y\rangle^n\,. \]

We can really feel the Fourier transform concept when we view the states as complex vectors $|\psi\rangle^n = c = (c_x)$ of size $2^n$ to which we subject the standard DFT,

\[ QFT\, |\psi\rangle^n \;=\; \sum_{y=0}^{N-1} \big[ DFT(c) \big]_y\, |y\rangle^n\,. \]

The definition assumes the reader can compute a DFT, so we'd better unwind our definition by expressing the QFT explicitly. The $y$th coordinate of the output QFT is produced using

\[ \big[ QFT\, |\psi\rangle^n \big]_y \;=\; \widetilde{c}_y \;=\; \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} c_x\, \omega^{yx}\,, \]
where $\omega = \omega_N$ is the primitive $N$th root of unity. The $y$th coordinate can also be obtained using the dot-with-the-basis-vector trick,

\[ \big[ QFT\, |\psi\rangle^n \big]_y \;=\; \langle y |\, QFT\, |\psi\rangle^n\,, \]

so you might see this notation, rather than the subscript, used by physics-oriented authors. For example, it could appear in the definition or even computation of the $y$th coordinate of the QFT,

\[ \langle y |\, QFT\, |\psi\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} c_x\, \omega^{yx}\,. \]

We'll stick with the subscript notation, $\big[ QFT\, |\psi\rangle^n \big]_y$, for now.
Notation and Convention

Hold the phone. Doesn't the DFT have a negative root exponent, $\omega^{-jk}$? The way I defined it, yes. As I already said, there are two schools of thought regarding forward vs. reverse Fourier transforms and DFTs. I really prefer the negative exponent for the forward transform because it arises naturally when decomposing a function into its frequencies. But there is only one school when defining the QFT, and it has a positive root exponent in the forward direction (and negative in the reverse). Therefore, I have to switch conventions.

[If you need more specifics, here are three options. (i) You can go back and define the forward DFT using a positive exponent from the start ... OR ... (ii) You can consider the appearance of DFT in the above expressions as motivational but rely only on the explicit formulas for the formal definition of the QFT without anxiety about the exponent's sign difference ... OR ... (iii) You can preserve our original DFT but modify the definition of QFT by replacing DFT with $DFT^{-1}$ everywhere in this section.]

Anyway, we won't be referring to the DFT, only the QFT, effective immediately, so the discrepancy starts and ends here.
To summarize the general-state definition: if

\[ |\psi\rangle \;=\; \sum_{x=0}^{N-1} c_x\, |x\rangle^n\,, \]

then

\[ QFT\, |\psi\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \sum_{x=0}^{N-1} c_x\, \omega^{yx}\, |y\rangle^n\,. \]

Alternatively, we might have started by defining the QFT on the CBS alone,

\[ QFT\, |x\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \omega^{yx}\, |y\rangle^n\,, \]

from which we would get the definition for arbitrary states by applying linearity to their CBS expansion. Our task at hand is to make sure

1. the definitions are equivalent, and
2. they produce a linear, unitary operator.

Vetting a putative operator like this represents a necessary step in demonstrating the viability of our ideas, so it's more than just an academic exercise. You may be doing this on your own operators some day. Imagine future students studying the [your name here] transform. It can happen.
Equivalence of the Definitions

Step 1) Agreement on CBS. We'll show that the coefficient definition, $QFT_{ours}$, agrees with the typical CBS definition, $QFT_{cbs}$, on the CBS. Consider the CBS $|x\rangle^n$. Its amplitudes are $c_k = \delta_{kx}$,

\[ |x\rangle^n \;=\; \sum_{k=0}^{N-1} c_k\, |k\rangle^n \;=\; \sum_{k=0}^{N-1} \delta_{kx}\, |k\rangle^n\,. \]

Now apply our definition to this ket and see where it leads:

\[ QFT_{ours}\, |x\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \sum_{k=0}^{N-1} c_k\, \omega^{yk}\, |y\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \sum_{k=0}^{N-1} \delta_{kx}\, \omega^{yk}\, |y\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \omega^{yx}\, |y\rangle^n\,. \qquad \text{QED} \]
Step 2) Linearity. We must also show that $QFT_{ours}$ is linear. Once we do that, we'll know that both it and $QFT_{cbs}$ (linear by construction) not only agree on the CBS but are both linear, forcing the two to be equivalent over the entire $H^{(n)}$.

The definition of $QFT_{ours}$ in its expanded form is

\[ QFT\, |\psi\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \sum_{x=0}^{N-1} c_x\, \omega^{yx}\, |y\rangle\,, \]
so, in coordinates,

\[ \begin{pmatrix} \widetilde{c}_0 \\ \widetilde{c}_1 \\ \widetilde{c}_2 \\ \vdots \end{pmatrix} \;=\; \frac{1}{\sqrt{N}} \begin{pmatrix} \sum_x c_x\, \omega^{0x} \\ \sum_x c_x\, \omega^{1x} \\ \sum_x c_x\, \omega^{2x} \\ \vdots \end{pmatrix} \;=\; \frac{1}{\sqrt{N}} \begin{pmatrix} 1 & 1 & 1 & \cdots \\ 1 & \omega & \omega^2 & \cdots \\ 1 & \omega^2 & \omega^4 & \cdots \\ 1 & \omega^3 & \omega^6 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \end{pmatrix}. \]

Multiplication by a fixed matrix is manifestly linear, so $QFT_{ours}$ is linear. Calling that matrix

\[ M_{QFT} \;\equiv\; \frac{1}{\sqrt{N}} \begin{pmatrix} 1 & 1 & 1 & \cdots \\ 1 & \omega & \omega^2 & \cdots \\ 1 & \omega^2 & \omega^4 & \cdots \\ 1 & \omega^3 & \omega^6 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \]
we can use $M_{QFT}$ to confirm that the QFT is unitary: if the matrix is unitary, so is the operator. (Caution. This is only true when the basis in which the matrix is expressed is orthonormal, which $\{|x\rangle^n\}$ is.) We need to show that

\[ \big( M_{QFT} \big)^{\dagger}\, M_{QFT} \;=\; \mathbf{1}\,, \]

so let's take the dot product of row $x$ of $(M_{QFT})^{\dagger}$ with column $y$ of $M_{QFT}$. Row $x$ of the adjoint holds the conjugated entries $\omega^{-kx}/\sqrt{N}$, so the product is

\[ \sum_{k=0}^{N-1} \frac{\omega^{-kx}}{\sqrt{N}} \cdot \frac{\omega^{ky}}{\sqrt{N}} \;=\; \frac{1}{N} \sum_{k=0}^{N-1} \omega^{k(y-x)} \;=\; \frac{1}{N}\, \big( \delta_{xy}\, N \big) \;=\; \delta_{xy}\,. \qquad \text{QED} \]
22.3 Properties of the QFT

22.3.1 Shift Property

The DFT's shift property,

\[ f_{k-l} \;\overset{DFT}{\longmapsto}\; \omega^{-lk}\, F_k\,, \]

when plugged into the definition of the QFT, results in a quantum translation invariance: shifting the CBS label multiplies each output coordinate by a unit-magnitude phase,

\[ QFT\, |x + z\rangle^n \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \omega^{zy}\, \omega^{yx}\, |y\rangle^n\,. \]

We lose the minus exponent of $\omega$ because the QFT uses a positive exponent in its forward direction.
22.3.2 The QFT vs. the Hadamard

You may find it enlightening to see some similarities and differences between the Hadamard operator and the QFT.

We are working in an $N = 2^n$-dimensional Hilbert space, $H^{(n)}$. We always signify this on the Hadamard operator using a superscript, as in $H^{\otimes n}$. On the other hand, when we specify the order of the QFT, we do so using the superscript $(N)$, as in $QFT^{(N)}$. In what follows, I'll continue to omit the superscript for the QFT initially and only bring it in when needed.
How do these two operators compare on the CBS?
nth Order Hadamard

\[ H^{\otimes n}\, |x\rangle^n \;=\; \frac{1}{\sqrt{2^n}} \sum_{y=0}^{2^n-1} (-1)^{x \odot y}\, |y\rangle^n\,, \]

where $x \odot y$ is the mod-2 dot product based on the individual binary digits in the base-2 representation of $x$ and $y$. Now $-1 = \omega_2$ ($e^{2\pi i/2} = e^{\pi i} = -1$, $\checkmark$), so let's replace the $-1$, above, with its symbol as the square root of unity,

\[ H^{\otimes n}\, |x\rangle^n \;=\; \frac{1}{\sqrt{2^n}} \sum_{y=0}^{2^n-1} \omega_2^{\,x \odot y}\, |y\rangle^n\,. \]
N-Dimensional QFT

The definition of the $N$th-order QFT for $N = 2^n$ is

\[ QFT^{(N)}\, |x\rangle^n \;=\; \frac{1}{\sqrt{2^n}} \sum_{y=0}^{2^n-1} \omega^{yx}\, |y\rangle^n\,. \]

In the smallest case, $N = 2$, the primitive root is $\omega_2 = -1$, and
\[ QFT^{(2)}\, |x\rangle \;=\; \frac{1}{\sqrt{2}} \sum_{y=0}^{1} (-1)^{yx}\, |y\rangle \;=\; \frac{1}{\sqrt{2}} \Big( (-1)^{0\cdot x}\, |0\rangle + (-1)^{1\cdot x}\, |1\rangle \Big) \;=\; \begin{cases} \dfrac{|0\rangle + |1\rangle}{\sqrt{2}}\,, & x = 0 \\[6pt] \dfrac{|0\rangle - |1\rangle}{\sqrt{2}}\,, & x = 1 \end{cases} \]

which is exactly the first-order Hadamard.

22.3.3 The Fourier Basis
Any quantum gate is unitary by necessity (see the lecture on single qubits), and a unitary operator acting on an orthonormal basis produces another orthonormal basis. I'll restate the statute (from that lecture) that provides the rigor for all this.

Theorem (Basis Conversion Property). If $U$ is a unitary operator and $A$ is an orthonormal basis, then $U(A)$, i.e., the image of the vectors $A$ under $U$, is another orthonormal basis, $B$.

Applying $QFT^{(2^n)}$ to the preferred $z$-basis in $H^{(n)}$ will give us another basis for $H^{(n)}$. We'll write it as $\{\, |\widetilde{x}\rangle^n \,\}$, where

\[ |\widetilde{x}\rangle^n \;\equiv\; QFT^{(2^n)}\, |x\rangle^n\,. \]

We'll refer to this as the quantum Fourier basis or, once we are firmly back in purely quantum territory, simply as the Fourier basis or frequency basis. It will be needed in Shor's period-finding algorithm.

The Take-Away

Where we used $H^{\otimes n}$ to our advantage for Simon's $(\mathbb{Z}_2)^n$-periodicity, we anticipate using the QFT to achieve a similar effect when we work on Shor's ordinary integer periodicity.
22.4 Toward a QFT Circuit

Going from an operator definition to an efficient (polynomial growth complexity) quantum circuit is always a challenge. We will approach the problem in bite-sized pieces.
22.4.1
Notation
We write an $n$-qubit CBS ket as a tensor product of its binary digits,

\[ |x\rangle^n \;=\; |x_{n-1}\, x_{n-2} \cdots x_0 \rangle \;=\; \bigotimes_{k=n-1}^{0} |x_k\rangle\,, \qquad x_k \in \{0, 1\}\,. \]

For example, in $H^{(3)}$,

\[ |0\rangle^3 = |000\rangle\,, \quad |1\rangle^3 = |001\rangle\,, \quad |2\rangle^3 = |010\rangle\,, \quad |3\rangle^3 = |011\rangle\,, \quad |4\rangle^3 = |100\rangle\,, \]

and, in general,

\[ |x\rangle^3 \;=\; |x_2\, x_1\, x_0\rangle\,, \qquad\text{where}\qquad x \;=\; \sum_{k=0}^{n-1} x_k\, 2^k\,. \]
22.4.2 Factoring the QFT of a CBS

We begin with the definition of the QFT's action on a CBS and gradually work towards the expression that will reveal the circuit. It will take a few screens, so here we go. (I'm going to move the constant $1/\sqrt{N}$ to the LHS to reduce syntax.)

\[ \begin{aligned} \sqrt{N}\; QFT^{(N)}\, |x\rangle^n \;&=\; \sum_{y=0}^{N-1} \omega^{xy}\, |y\rangle^n \\ &=\; \sum_{y=0}^{N-1} \omega^{\,x \sum_{k=0}^{n-1} y_k 2^k}\, |y_{n-1} \ldots y_1 y_0\rangle \\ &=\; \sum_{y=0}^{N-1} \left( \prod_{k=0}^{n-1} \omega^{\,x\, y_k\, 2^k} \right) |y_{n-1} \ldots y_1 y_0\rangle\,. \end{aligned} \]

(I displayed the order, $N$, explicitly in the LHS's $QFT^{(N)}$, something that will come in handy in about two screens.) To keep the equations from overwhelming us, let's symbolize the inside product by

\[ \omega^{xy} \;=\; \prod_{k=0}^{n-1} \omega^{\,x\, y_k\, 2^k}\,, \]

so

\[ \sqrt{N}\; QFT^{(N)}\, |x\rangle^n \;=\; \sum_{y=0}^{N-1} \omega^{xy}\, |y_{n-1} \ldots y_1 y_0\rangle\,. \]
Separating a DFT into even and odd sub-arrays led to the FFT algorithm, and we try that here in the hope of a similar profit. For $N = 8$,

\[ \sqrt{8}\; QFT\, |x\rangle^3 \;=\; \sum_{\substack{y=0 \\ y\ \mathrm{even}}}^{7} \omega^{xy}\, |y_2 y_1 y_0\rangle \;+\; \sum_{\substack{y=0 \\ y\ \mathrm{odd}}}^{7} \omega^{xy}\, |y_2 y_1 y_0\rangle\,. \]

For even $y$ the least-significant bit is $y_0 = 0$, so

\[ \omega^{xy} \;=\; \omega^{\,x \cdot 0 \cdot 1} \prod_{k=1}^{2} \omega^{\,x\, y_k\, 2^k} \;=\; \prod_{k=1}^{2} \omega^{\,x\, y_k\, 2^k}\,. \]

Evidently, the $\omega^{xy}$ in the $y$-even group can start the product at $k = 1$ rather than $k = 0$, since the $k = 0$ factor is 1. We rewrite the even sum with this new knowledge:

\[ \sum\nolimits_{\text{$y$-even group}} \;=\; \sum_{\substack{y=0 \\ y\ \mathrm{even}}}^{7} \left( \prod_{k=1}^{2} \omega^{\,x\, y_k\, 2^k} \right) |y_2 y_1\rangle\, |0\rangle\,. \]

Now that $|y_0\rangle = |0\rangle$ has been factored from the sum, we can run through the even $y$ more efficiently by

- halving the $y$-sum from $\sum_{\mathrm{even}}^{7}$ to $\sum_{\mathrm{all}}^{3}$,
- replacing $|y_2 y_1\rangle \to |y_1 y_0\rangle$,
- shifting the $k$-product down-by-1 so $\prod_{1}^{2} \to \prod_{0}^{1}$, and
- replacing $2^k \to 2^{k+1}$.
(Take a little time to see why these adjustments make sense.) Applying the bullets gives

\[ \sum\nolimits_{\text{$y$-even group}} \;=\; \sum_{y=0}^{3} \left( \prod_{k=0}^{1} \omega^{\,x\, y_k\, 2^{k+1}} \right) |y_1 y_0\rangle\, |0\rangle \;=\; \sum_{y=0}^{3} \left( \prod_{k=0}^{1} \big(\omega^2\big)^{\,x\, y_k\, 2^{k}} \right) |y_1 y_0\rangle\, |0\rangle\,. \]
There's one final reduction to be made. While we successfully halved the size of $y$ inside the kets from its original $0 \to 7$ range to the smaller interval $0 \to 3$, the $x$ in the exponent still roams free in the original set. How do we get it to live in the same smaller world as $y$? The key lurks here:

\[ \big(\omega^2\big)^{\,x\, y_k\, 2^{k}}\,. \]

The even sub-array rearrangement precipitated a 4th root of unity, $\omega^2$, rather than the original 8th root, $\omega$. This enables us to replace any $x > 3$ with $x - 4$, bringing it back into the $0 \to 3$ range without affecting the computed values. To see why, do the following short exercise.

[Exercise. For $4 \le x \le 7$ write $x = 4 + p$, where $0 \le p \le 3$. Plug $4 + p$ in for $x$ in the above exponent and simplify, leveraging the fact that $\omega^8 = 1$.]

The bottom line is that we can replace $x$ with $(x \bmod 4)$ and the equality still holds true,

\[ \sum\nolimits_{\text{$y$-even group}} \;=\; \sum_{y=0}^{3} \left( \prod_{k=0}^{1} \big(\omega^2\big)^{(x \bmod 4)\, y_k\, 2^{k}} \right) |y_1 y_0\rangle\, |0\rangle\,. \]

In general,

\[ \sum\nolimits_{\text{$y$-even group}} \;=\; \sum_{y=0}^{N/2-1} \left( \prod_{k=0}^{n-2} \big(\omega^2\big)^{(x \bmod N/2)\, y_k\, 2^{k}} \right) |y_{n-2}\, y_{n-3} \ldots y_0\rangle\, |0\rangle\,. \]
Compare this with the defining expansion

\[ \sqrt{N}\; QFT^{(N)}\, |x\rangle^n \;=\; \sum_{y=0}^{N-1} \left( \prod_{k=0}^{n-1} \omega^{\,x\, y_k\, 2^k} \right) |y_{n-1} \ldots y_1 y_0\rangle\,, \]

and we are encouraged to see the order-$N/2$ QFT staring us in the face,

\[ \sum\nolimits_{\text{$y$-even group}} \;=\; \sqrt{\tfrac{N}{2}}\; \Big( QFT^{(N/2)}\, \big|\, x \bmod (N/2) \,\big\rangle^{(n-1)} \Big)\, |0\rangle\,. \]
2
Weve expressed the even group as a QFT whose order N/2, half the original N .
The scent of recursion (and success) is in the air. Now, lets take a stab at the odd
group.
Analysis of y-odd Group
The least significant bit, y0 , of the all terms in the y-odd group is always 1, so for
terms in this group,
y0
0
xy0 2
xy
x11
2
Y
x,
=
xyk 2k
k=0
2
Y
xyk 2 .
k=1
We separated the $\omega^x$ factor from the rest so that we could start the product at $k = 1$ to align our analysis with the $y$-even group, above. We rewrite the odd sum using this adjustment:

\[ \sum\nolimits_{\text{$y$-odd group}} \;=\; \omega^{x} \sum_{\substack{y=0 \\ y\ \mathrm{odd}}}^{7} \left( \prod_{k=1}^{2} \omega^{\,x\, y_k\, 2^k} \right) |y_2 y_1\rangle\, |1\rangle\,. \]
Now that $|y_0\rangle = |1\rangle$ and $\omega^x$ have both been factored from the sum, we run through the odd $y$ by

- halving the $y$-sum from $\sum_{\mathrm{odd}}^{7}$ to $\sum_{\mathrm{all}}^{3}$,
- replacing $|y_2 y_1\rangle \to |y_1 y_0\rangle$,
- shifting the $k$-product down-by-1 so $\prod_{1}^{2} \to \prod_{0}^{1}$, and
- replacing $2^k \to 2^{k+1}$.
These bullets give us

\[ \sum\nolimits_{\text{$y$-odd group}} \;=\; \omega^{x} \sum_{y=0}^{3} \left( \prod_{k=0}^{1} \big(\omega^2\big)^{\,x\, y_k\, 2^{k}} \right) |y_1 y_0\rangle\, |1\rangle\,, \]

and we follow it by the same replacement, $(x \bmod 4) \leftarrow x$, that worked for the $y$-even group (and works here, too):

\[ \sum\nolimits_{\text{$y$-odd group}} \;=\; \omega^{x} \sum_{y=0}^{3} \left( \prod_{k=0}^{1} \big(\omega^2\big)^{(x \bmod 4)\, y_k\, 2^{k}} \right) |y_1 y_0\rangle\, |1\rangle\,. \]

In general,

\[ \sum\nolimits_{\text{$y$-odd group}} \;=\; \omega^{x} \sum_{y=0}^{N/2-1} \left( \prod_{k=0}^{n-2} \big(\omega^2\big)^{(x \bmod N/2)\, y_k\, 2^{k}} \right) |y_{n-2}\, y_{n-3} \ldots y_0\rangle\, |1\rangle\,. \]

Once again, we are thrilled to see an $(N/2)$-order QFT emerge from the fray,

\[ \sum\nolimits_{\text{$y$-odd group}} \;=\; \omega^{x}\, \sqrt{\tfrac{N}{2}}\; \Big( QFT^{(N/2)}\, \big|\, x \bmod (N/2) \,\big\rangle^{(n-1)} \Big)\, |1\rangle\,. \]
The Recursion Relation

Combine the $y$-even group and the $y$-odd group to get

\[ \begin{aligned} \sqrt{N}\; QFT\, |x\rangle^n \;&=\; \sum\nolimits_{\text{$y$-even group}} \;+\; \sum\nolimits_{\text{$y$-odd group}} \\ &=\; \sqrt{\tfrac{N}{2}}\, \Big( QFT^{(N/2)}\, \big| x \bmod (N/2) \big\rangle^{(n-1)} \Big)\, |0\rangle \;+\; \omega^{x}\, \sqrt{\tfrac{N}{2}}\, \Big( QFT^{(N/2)}\, \big| x \bmod (N/2) \big\rangle^{(n-1)} \Big)\, |1\rangle \\ &=\; \sqrt{\tfrac{N}{2}}\, \Big( QFT^{(N/2)}\, \big| x \bmod (N/2) \big\rangle^{(n-1)} \Big) \big( |0\rangle + \omega^{x}\, |1\rangle \big)\,. \end{aligned} \]
The binomial $|0\rangle + \omega^x |1\rangle$ had to end up on the right of the sum because we were peeling off the least-significant $|0\rangle$ and $|1\rangle$ in the even-odd analysis; tensor products are not commutative. This detail leads to a slightly annoying but easily handled wrinkle in the end. You'll see.

Dividing out the $\sqrt{N}$, using $2^n$ for $N$ and rearranging, we get an even clearer picture.
\[ QFT^{(2^n)}\, |x\rangle^n \;=\; \Big( QFT^{(2^{n-1})}\, \big|\, x \bmod 2^{n-1} \big\rangle^{n-1} \Big) \left( \frac{|0\rangle + \omega_{2^n}^{x}\, |1\rangle}{\sqrt{2}} \right) \]

Compare this with the Danielson-Lanczos Recursion Relation for the DFT, which we turned into an FFT by unwinding recursion. In our current context it's even easier, because we have only one, not two, recursive calls to unwind.
If we apply the same math to the lower-order QFT on the RHS and plug the result into the last equation, using $x$ for the $x \bmod (N/2)$, we find

\[ QFT^{(2^n)}\, |x\rangle^n \;=\; \Big( QFT^{(2^{n-2})}\, | x \rangle^{n-2} \Big) \left( \frac{|0\rangle + \omega_{2^{n-1}}^{x}\, |1\rangle}{\sqrt{2}} \right) \left( \frac{|0\rangle + \omega_{2^{n}}^{x}\, |1\rangle}{\sqrt{2}} \right)\,. \]

Now let recursion off its leash. Each iteration pulls a factor of $|0\rangle + \omega_{2^k}^{x}\, |1\rangle$ out (and to the right) of the lower-dimensional $QFT^{(2^k)}$ until we get to $QFT^{(2)}$, which would be the final factor on the left,

\[ QFT^{(2^n)}\, |x\rangle^n \;=\; \prod_{k=1}^{n} \frac{|0\rangle + \omega_{2^k}^{x}\, |1\rangle}{\sqrt{2}}\,. \]
First, admire the disappearance of those pesky $x$ factors, so any anxiety about $x \bmod N/k$ is now lifted. Next, note that the RHS is written in terms of different roots-of-unity, $\omega_2, \omega_4, \ldots, \omega_{2^n} = \omega$. However, they can all be written as powers of $\omega = \omega_N$,

\[ \omega_{2^k} \;=\; \omega^{2^{n-k}}\,, \]

so

\[ QFT^{(2^n)}\, |x\rangle^n \;=\; \prod_{k=1}^{n} \frac{|0\rangle + \omega^{2^{n-k} x}\, |1\rangle}{\sqrt{2}}\,, \]

or, written as an explicit tensor product,

\[ QFT^{(2^n)}\, |x\rangle^n \;=\; \bigotimes_{k=1}^{n} \frac{|0\rangle + \omega^{2^{n-k} x}\, |1\rangle}{\sqrt{2}}\,. \]
I won't use this notation, since it scares people, and when you multiply kets by kets everyone knows that tensors are implied. So, I'll use the $\prod$ notation for all products, and you can infuse the tensor interpretation mentally when you see that the components are all qubits.

However, it's still worth remarking that this is a tensor product of kets from the individual 2-dimensional $H$ spaces (of which there are $n$) and as such results in a separable state in the $N$-dimensional $H^{(n)}$. This is a special way, different from the expansion along the CBS, to express a state in this high-dimensional Hilbert space. But you should not be left with the impression that we were entitled to find a factored representation. Most states in $H^{(n)}$ cannot be factored; they're not separable. The result we derived is that when taking the QFT of a CBS we happily end up with a separable state.

The factored representation and the CBS expansion each give different information about the output state, and it may not always be obvious how the coefficients or factors of the two relate (without doing the math).
A simple example is the equivalence of a factored representation and the CBS expansion of the following $|\psi\rangle^2$ in a two-qubit system ($n = 2$, $N = 4$):

\[ |\psi\rangle^2 \;=\; \frac{|00\rangle - |01\rangle + |10\rangle - |11\rangle}{2} \;=\; \left( \frac{|0\rangle + |1\rangle}{\sqrt{2}} \right) \left( \frac{|0\rangle - |1\rangle}{\sqrt{2}} \right). \]

Here we have both the CBS expansion of $QFT\,|x\rangle^2$ and the separable view.
In the $N$-dimensional case, the two different forms can be shown side-by-side,

\[ QFT^{(2^n)}\, |x\rangle^n \;=\; \prod_{k=1}^{n} \frac{|0\rangle + \omega^{2^{n-k} x}\, |1\rangle}{\sqrt{2}} \;=\; \frac{1}{\sqrt{2^n}} \sum_{y=0}^{2^n - 1} \omega^{xy}\, |y\rangle^n\,. \]
Of course, there can only be (at most) n factors in the separable factorization, while
there will be up to 2n terms in the CBS expansion.
The reason I bring this up is that the (separable) factorization is more relevant
to the QFT than it was to the FFT because we are basing our quantum work on
the supposition that there will be quantum gates in the near future. These gates
are unitary operators applied to the input CBS qubit-by-qubit, which is essentially a
tensor product construction.
Let's see how we can construct an actual QFT circuit from such unitary operators.
22.4.3 A QFT Circuit for Three Qubits

For n = 3 (N = 8, ω = ω₈), the factored form reads

$$QFT^{(8)}\,|x\rangle^{(3)} \;=\; \frac{|0\rangle+\omega^{4x}|1\rangle}{\sqrt 2}\;\otimes\;\frac{|0\rangle+\omega^{2x}|1\rangle}{\sqrt 2}\;\otimes\;\frac{|0\rangle+\omega^{x}|1\rangle}{\sqrt 2}\,.$$
Good things come to those who calmly examine each factor, separately. Work from
left (most-significant output qubit) to right (least-significant output qubit).
The First (Most-Significant) Output Factor
We already know that this is H|x₀⟩, but we'll want to re-derive that fact in a way that can be used as a template for the other two factors. ω is an 8th root of unity, so the coefficient of |1⟩ in the numerator can be derived from

$$\omega^{4x} \;=\; \omega^{4(4x_2+2x_1+x_0)} \;=\; \left(\omega^{8}\right)^{2x_2+x_1}\,\omega^{4x_0} \;=\; (-1)^{x_0},$$

which means

$$\frac{|0\rangle+\omega^{4x}|1\rangle}{\sqrt 2} \;=\;
\begin{cases}
\dfrac{|0\rangle+|1\rangle}{\sqrt 2}\,, & x_0 = 0\\[2ex]
\dfrac{|0\rangle-|1\rangle}{\sqrt 2}\,, & x_0 = 1
\end{cases}
\;=\; H|x_0\rangle\,.$$
This was the most-significant qubit factor of the output ket (the one on the far left of the product). Let's refer to the output ket as |x̃⟩ and its most significant separable factor (the one at the far left of our product) as |x̃₂⟩. We can then rewrite the last equation as

$$|\tilde x_2\rangle \;=\; H|x_0\rangle\,.$$
[Don't be lulled into thinking this is a computational basis element, though. Unlike the input state, |x⟩ = |x₂⟩|x₁⟩|x₀⟩, which is a product of CBS and therefore itself a tensor CBS, the output |x̃⟩, while a product of states, to be sure, is not comprised of factors which are CBS in their 2-D homes. Therefore, the product |x̃⟩ is not a CBS in the 2ⁿ-dimensional product space.]
Summary: By expressing x as powers-of-2 in the most-significant output factor, |x̃₂⟩, we were able to watch the higher powers dissolve because they turned into 1. That left only the lowest power of ω, namely ω⁴ = (−1), which, in turn, produced a Hadamard effect on the least significant bit of the input ket, |x₀⟩. We'll do this for the other two factors with the sober acceptance that each time, fewer high powers will disappear.
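If you want to see the high powers dissolve numerically, here is a tiny check, my own illustration (numpy assumed), that the most-significant output factor really is H|x₀⟩ for every x:

    # Check (illustrative): (|0> + w^{4x}|1>)/sqrt(2) = H|x0> for all x in [0, 8).
    import numpy as np

    w = np.exp(2j * np.pi / 8)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    ket = lambda b: np.eye(2)[b]          # |0> or |1> as a vector

    for x in range(8):
        x0 = x & 1
        factor = np.array([1, w ** (4 * x)]) / np.sqrt(2)
        assert np.allclose(factor, H @ ket(x0))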
But first, let's stand back and admire our handiwork. We have an actual circuit element that generates the most significant separable factor for QFT^{(8)}|x⟩:

    |x₀⟩ ──[H]── |x̃₂⟩
First, wow. Second, do you remember that I said pulling the least-significant kets toward the right during factorization would introduce a small wrinkle? You're looking at it: the least significant input, |x₀⟩, produced the most significant output factor, |x̃₂⟩.

The Middle Factor

The middle output factor has an ω^{2x} in its numerator. Since ω² = i, we can write

$$\omega^{2x} \;=\; i^{\,4x_2+2x_1+x_0} \;=\; (-1)^{x_1}\,(i)^{x_0},$$

which means

$$\frac{|0\rangle+\omega^{2x}|1\rangle}{\sqrt 2} \;=\;
\begin{cases}
H|x_1\rangle\,, & x_0 = 0\\[1ex]
\dfrac{|0\rangle+(-1)^{x_1}(i)\,|1\rangle}{\sqrt 2}\,, & x_0 = 1\,.
\end{cases}$$
The good news is that if x₀ = 0, then the factor i^{x₀} becomes 1 and we are left with a Hadamard operator applied to the middle input ket, |x₁⟩.

The bad news is that if x₀ = 1, we see no obvious improvement in the formula.

Fixing the bad news is where I need you to focus all your attention and patience, as it is the key to everything and takes only a few more neurons. Let's go ahead and take H|x₁⟩, regardless of whether x₀ is 0 or 1. If x₀ was 0, we guessed right, but if it was 1, what do we have to do to patch things up? Not much, it turns out.

Let's compare the actual state we computed (wrong, if x₀ was 1) with the one we wanted (right, no matter what) and see how they differ. Writing them in coordinate form will do us a world of good.
What we got when applying H to |x₁⟩:

$$\frac{1}{\sqrt 2}\begin{pmatrix}1\\ (-1)^{x_1}\end{pmatrix}$$

... but if x₀ = 1, we really wanted:

$$\frac{1}{\sqrt 2}\begin{pmatrix}1\\ (-1)^{x_1}\,i\end{pmatrix}$$

How do we transform

$$\begin{pmatrix}1\\ (-1)^{x_1}\end{pmatrix} \;\longmapsto\; \begin{pmatrix}1\\ (-1)^{x_1}\,i\end{pmatrix}\;?$$

Answer: multiply by

$$R_1 \;\equiv\; \begin{pmatrix}1 & 0\\ 0 & i\end{pmatrix}:$$

$$\begin{pmatrix}1 & 0\\ 0 & i\end{pmatrix}\begin{pmatrix}1\\ (-1)^{x_1}\end{pmatrix} \;=\; \begin{pmatrix}1\\ (-1)^{x_1}\,i\end{pmatrix}\,.$$
Now we have the more pleasant formula for the second factor,

$$\frac{|0\rangle+\omega^{2x}|1\rangle}{\sqrt 2} \;=\;
\begin{cases}
H|x_1\rangle\,, & x_0 = 0\\
R_1\,H|x_1\rangle\,, & x_0 = 1\,.
\end{cases}$$
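A corresponding numerical confirmation of the second-factor formula, again my own sketch with numpy (not part of the text), runs through all eight inputs:

    # Check (illustrative): (|0> + w^{2x}|1>)/sqrt(2) equals H|x1> when
    # x0 = 0 and R1 H|x1> when x0 = 1.
    import numpy as np

    w = np.exp(2j * np.pi / 8)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    R1 = np.array([[1, 0], [0, 1j]])
    ket = lambda b: np.eye(2)[b]

    for x in range(8):
        x0, x1 = x & 1, (x >> 1) & 1
        factor = np.array([1, w ** (2 * x)]) / np.sqrt(2)
        expected = (R1 @ H @ ket(x1)) if x0 else (H @ ket(x1))
        assert np.allclose(factor, expected)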
A Piece of the Circuit
We found that the two most-significant factors of the (happily separable) QFT|x⟩ could be computed using the formulas

$$|\tilde x_2\rangle \;=\; H|x_0\rangle\,, \qquad\text{and}\qquad
|\tilde x_1\rangle \;=\;
\begin{cases}
H|x_1\rangle\,, & x_0 = 0\\
R_1\,H|x_1\rangle\,, & x_0 = 1\,.
\end{cases}$$
In words:

1. We apply H to the two least significant kets, |x₀⟩ and |x₁⟩, unconditionally, since they will always be used in the computation of the final two most-significant factors of QFT|x⟩.

2. We conditionally apply another operator, R₁, to the result of H|x₁⟩ in the eventuality that x₀ = 1.

3. Although we apply all this to the two least significant input kets, |x₁⟩|x₀⟩, what we get is the most-significant portion of the output state's factorization, |x̃₂⟩|x̃₁⟩ (not the least-significant, so we must be prepared to do some swapping before the day is done).
Item 2 suggests using a controlled-R₁ gate, where bit x₀ is the control. If x₀ = 0, the operator being controlled is not applied, but if x₀ = 1, it is. This leads to the following circuit element:

    |x₁⟩ ─────[R₁]── ...
                │
    |x₀⟩ ───────●─── ...

Let's add the remaining components one-at-a-time. First, we want to apply the unconditional Hadamard gate to |x₁⟩. As our formulas indicate, this is done before R₁ (R₁H|x₁⟩ is applied right-to-left). Adding this element, together with the unconditional Hadamard on |x₀⟩ that produces |x̃₂⟩, we get:

    |x₁⟩ ──[H]──[R₁]─────── |x̃₁⟩
                 │
    |x₀⟩ ────────●───[H]─── |x̃₂⟩
That completes the circuit element for the two most-significant separable output
factors. We can now get back to analyzing the logic of our instructional n = 3 case
and see how we can incorporate the last of our three factors.
The Last (Least-Significant) Factor
The rightmost output factor, |x̃₀⟩, has an ω with no exponent in the numerator,

$$\omega^{x} \;=\; (-1)^{x_2}\,(i)^{x_1}\,(\omega)^{x_0},$$

which means

$$\frac{|0\rangle+\omega^{x}|1\rangle}{\sqrt 2} \;=\;
\begin{cases}
\dfrac{|0\rangle+(-1)^{x_2}(i)^{x_1}\,|1\rangle}{\sqrt 2}\,, & x_0 = 0\\[2ex]
\dfrac{|0\rangle+(-1)^{x_2}(i)^{x_1}(\omega)\,|1\rangle}{\sqrt 2}\,, & x_0 = 1\,.
\end{cases}$$

This time, while the output factor does not reduce to something as simple as H|x₂⟩ in any case, when x₀ = 0 it does look like the expression we had for the middle factor, except applied here to |x₂⟩|x₁⟩ rather than the |x₁⟩|x₀⟩ of the middle factor. In other words, when x₀ = 0 this least significant factor reduces to

$$\frac{|0\rangle+(-1)^{x_2}(i)^{x_1}\,|1\rangle}{\sqrt 2}\,.$$
This suggests that, if x₀ = 0, we apply the same exact logic to |x₂⟩|x₁⟩ that we used for |x₁⟩|x₀⟩ in the middle case. That logic would be (if x₀ = 0)

$$\frac{|0\rangle+\omega^{x}|1\rangle}{\sqrt 2} \;=\;
\begin{cases}
H|x_2\rangle\,, & x_1 = 0\\
R_1\,H|x_2\rangle\,, & x_1 = 1\,.
\end{cases}$$

Therefore, in the special case where x₀ = 0, the circuit that works for |x̃₀⟩ looks like the one that worked for |x̃₁⟩, applied, this time, to qubits 1 and 2:

    |x₂⟩ ──[H]──[R₁]── ...
                 │
    |x₁⟩ ────────●──── ...
To patch this up, we have to adjust for the case in which x₀ = 1. The state we just generated with this circuit was

$$\frac{1}{\sqrt 2}\begin{pmatrix}1\\ (-1)^{x_2}(i)^{x_1}\end{pmatrix}$$

... but if x₀ = 1, we really wanted:

$$\frac{1}{\sqrt 2}\begin{pmatrix}1\\ (-1)^{x_2}(i)^{x_1}(\omega)^{x_0}\end{pmatrix}$$

How do we transform

$$\begin{pmatrix}1\\ (-1)^{x_2}(i)^{x_1}\end{pmatrix} \;\longmapsto\; \begin{pmatrix}1\\ (-1)^{x_2}(i)^{x_1}(\omega)^{x_0}\end{pmatrix}\;?$$

Answer: multiply by

$$R_2 \;\equiv\; \begin{pmatrix}1 & 0\\ 0 & \omega\end{pmatrix}:$$

$$\begin{pmatrix}1 & 0\\ 0 & \omega\end{pmatrix}\begin{pmatrix}1\\ (-1)^{x_2}(i)^{x_1}\end{pmatrix} \;=\; \begin{pmatrix}1\\ (-1)^{x_2}(i)^{x_1}(\omega)^{x_0}\end{pmatrix}\,.$$

Taken all together, the least-significant output factor is
$$\frac{|0\rangle+\omega^{x}|1\rangle}{\sqrt 2} \;=\;
\begin{cases}
H|x_2\rangle\,, & x_0 = 0,\ x_1 = 0\\
R_1\,H|x_2\rangle\,, & x_0 = 0,\ x_1 = 1\\
R_2\,H|x_2\rangle\,, & x_0 = 1,\ x_1 = 0\\
R_2\,R_1\,H|x_2\rangle\,, & x_0 = 1,\ x_1 = 1\,.
\end{cases}$$
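All four cases can be bundled into one expression, (|0⟩+ω^x|1⟩)/√2 = R₂^{x₀} R₁^{x₁} H|x₂⟩, and checked numerically. This is my own sketch (numpy assumed), not part of the text:

    # Check (illustrative): the least-significant output factor equals
    # R2^{x0} R1^{x1} H |x2> in all four (x0, x1) cases.
    import numpy as np

    w = np.exp(2j * np.pi / 8)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    R1 = np.array([[1, 0], [0, 1j]])
    R2 = np.array([[1, 0], [0, w]])
    mpow = np.linalg.matrix_power
    ket = lambda b: np.eye(2)[b]

    for x in range(8):
        x0, x1, x2 = x & 1, (x >> 1) & 1, (x >> 2) & 1
        factor = np.array([1, w ** x]) / np.sqrt(2)
        assert np.allclose(factor, mpow(R2, x0) @ mpow(R1, x1) @ H @ ket(x2))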
The corresponding circuit element, with x₁ controlling R₁ and x₀ controlling R₂, is:

    |x₂⟩ ──[H]──[R₁]──[R₂]── |x̃₀⟩
                 │     │
    |x₁⟩ ────────●─────┼──── ...
                       │
    |x₀⟩ ──────────────●──── ...
The Full Circuit for N = 8 (n = 3)
In the previous section, we obtained the exact result for the least-significant output factor, |x̃₀⟩:

    |x₂⟩ ──[H]──[R₁]──[R₂]── |x̃₀⟩
                 │     │
    |x₁⟩ ────────●─────┼──── ...
                       │
    |x₀⟩ ──────────────●──── ...

In the section prior, we derived the circuit for the two most-significant output factors, |x̃₁⟩ and |x̃₂⟩:

    |x₁⟩ ──[H]──[R₁]─────── |x̃₁⟩
                 │
    |x₀⟩ ────────●───[H]─── |x̃₂⟩
All that's left to do is combine them. The precaution we take is to defer applying any operator to an input ket until after that ket has been used to control any R-gates needed by its siblings. That suggests that we place the |x̃₁⟩, |x̃₂⟩ circuit elements to the right of the |x̃₀⟩ circuit element, and so we do:

    |x₂⟩ ──[H]──[R₁]──[R₂]──────────────────── |x̃₀⟩
                 │     │
    |x₁⟩ ────────●─────┼───[H]──[R₁]────────── |x̃₁⟩
                       │         │
    |x₀⟩ ──────────────●─────────●───[H]────── |x̃₂⟩
Prior to celebration, we have to symbolize the somewhat trivial circuitry for reordering the output. While trivial, it has a linear (in n = log N) cost, but it adds nothing to the time complexity, as we'll see.

    |x₂⟩ ──[H]──[R₁]──[R₂]──────────────────┐           ┌── |x̃₂⟩
                 │     │                    │           │
    |x₁⟩ ────────●─────┼───[H]──[R₁]────────┤  reverse  ├── |x̃₁⟩
                       │         │          │           │
    |x₀⟩ ──────────────●─────────●───[H]────┘           └── |x̃₀⟩
You are looking at the complete QFT circuit for a 3-qubit system.

Before we leave this case, let's make one notational observation. We defined R₁ to be the matrix that patched-up the |x̃₁⟩ factor, and R₂ to be the matrix that patched-up the |x̃₀⟩ factor. Let's look at those gates along with the only other gate we needed, H:

$$R_1 \;=\; \begin{pmatrix}1 & 0\\ 0 & i\end{pmatrix}, \qquad R_2 \;=\; \begin{pmatrix}1 & 0\\ 0 & \omega\end{pmatrix}, \qquad H \;=\; \frac{1}{\sqrt 2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}.$$

The lower right-hand element of each matrix is a root-of-unity, so let's see all three matrices again, this time with that lower right element expressed as a power of ω:

$$R_1 \;=\; \begin{pmatrix}1 & 0\\ 0 & \omega^2\end{pmatrix}, \qquad R_2 \;=\; \begin{pmatrix}1 & 0\\ 0 & \omega\end{pmatrix}, \qquad H \;=\; \frac{1}{\sqrt 2}\begin{pmatrix}1 & 1\\ 1 & \omega^4\end{pmatrix}.$$
This paves the way to generalizing to a QFT of any size.
22.4.4 The General QFT Circuit

Minus the re-ordering component at the far right, here's the QFT circuit we designed for n = 3:

    |x₂⟩ ──[H]──[R₁]──[R₂]────────────────────
                 │     │
    |x₁⟩ ────────●─────┼───[H]──[R₁]──────────
                       │         │
    |x₀⟩ ──────────────●─────────●───[H]──────
[Exercise. Go through the steps that got us this circuit, but add a fourth qubit to get the n = 4 (QFT^{(16)}) circuit.]

It doesn't take too much imagination to guess what the circuit would be for any n:

    |x_{n-1}⟩ ──[H]──[R₁]──[R₂]── ⋯ ──[R_{n-1}]───────────────────────── ⋯
                      │     │             │
    |x_{n-2}⟩ ────────●─────┼──── ⋯ ──────┼───[H]──[R₁]── ⋯ ──[R_{n-2}]── ⋯
                            │             │         │
    |x_{n-3}⟩ ──────────────●──── ⋯ ──────┼─────────●──── ⋯
        ⋮                                 │                    ⋱
    |x₀⟩ ─────────────────────────────────●───────────── ⋯ ───────[H]────
That's good and wonderful, but we have not defined R_k for k > 2 yet. However, the final observation of the n = 3 case study suggested that it should be

$$R_k \;\equiv\; \begin{pmatrix}1 & 0\\ 0 & \omega^{2^{\,n-k-1}}\end{pmatrix}.$$

You can verify this by analyzing it formally, but it's easiest to just look at the extreme cases. No matter what n is, we want R₁'s lower-right element to be i, and R_{n-1}'s to be ω (compare our n = 3 case study directly above), and you can verify that for k = 1 and k = n − 1, that's indeed what we get.
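To convince yourself that the generalized circuit really computes QFT^{(2ⁿ)}, you can assemble it gate-by-gate in software. The sketch below is my own, not the author's (numpy assumed; the helper names h_gate, cphase and bit_reverse are mine): it applies an H to each line, then the controlled-R_k gates, then the final reordering, and compares the result against the CBS-expansion definition.

    # Sketch (illustrative): build the n-qubit QFT exactly as the circuit
    # does -- H plus controlled-R_k gates, then bit reversal -- and compare
    # it with the direct definition QFT[y, x] = w^(xy)/sqrt(N).
    import numpy as np

    def h_gate(n, q):
        # H applied to qubit q (q = 0 is the least significant bit)
        N = 2 ** n
        U = np.zeros((N, N), dtype=complex)
        for y in range(N):
            b = (y >> q) & 1
            U[y & ~(1 << q), y] = 1 / np.sqrt(2)
            U[y | (1 << q), y] = (-1) ** b / np.sqrt(2)
        return U

    def cphase(n, control, target, phase):
        # controlled phase: multiply amplitude by `phase` when both bits are 1
        N = 2 ** n
        U = np.eye(N, dtype=complex)
        for y in range(N):
            if (y >> control) & 1 and (y >> target) & 1:
                U[y, y] = phase
        return U

    def bit_reverse(n):
        # the trivial reordering stage at the far right of the circuit
        N = 2 ** n
        U = np.zeros((N, N))
        for y in range(N):
            U[int(format(y, f"0{n}b")[::-1], 2), y] = 1
        return U

    def qft_circuit(n):
        N = 2 ** n
        U = np.eye(N, dtype=complex)
        for i in range(n - 1, -1, -1):         # process the top line first
            U = h_gate(n, i) @ U
            for k in range(1, i + 1):          # R_k controlled from k lines below;
                # R_k's lower-right entry is w^(2^(n-k-1)) = exp(2 pi i / 2^(k+1))
                U = cphase(n, i - k, i, np.exp(2j * np.pi / 2 ** (k + 1))) @ U
        return bit_reverse(n) @ U

    n = 3
    N = 2 ** n
    w = np.exp(2j * np.pi / N)
    qft_direct = np.array([[w ** (x * y) for x in range(N)]
                           for y in range(N)]) / np.sqrt(N)
    assert np.allclose(qft_circuit(n), qft_direct)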
Well, we have defined and designed the QFT circuit out of small, unitary gates. Since you'll be using the QFT in a number of circuit designs, you need to be able to cite its computational complexity, which we do now.
22.5 Computational Complexity of the QFT
This is the easy part, because we have the circuit diagrams to lean on.

Each gate is a single unitary operator. Some are 2-qubit gates (the controlled-R_k gates) and some are single qubit gates (the H gates). But they are all constant time and constant size, so we just add them up.

The topmost input line, starting at |x_{n-1}⟩, has n gates (count the two-qubit controlled-R_k's as single gates in this top line, but you don't have to count their control nodes when you get to them in the lines below). As we move down to the lower lines, observe that each line has one fewer gate than the one above, until we get to the final line, starting at |x₀⟩, which has only one gate. That's
$$\sum_{k=1}^{n} k \;=\; \frac{n(n+1)}{2} \quad\text{gates.}$$
The circuit complexity for this is O(n²). Adding on a circuit that reverses the order can only add an additional O(n) gates, but in series, not nested, so that does not affect the circuit complexity. Therefore, we are left with a computational factor of

$$O(n^2) \;=\; O(\log^2 N)\,.$$
You might be tempted to compare this with the O(N log N) performance of the FFT, but that's not really an apples-to-apples comparison, for several reasons:

1. Our circuit computes QFT|x⟩ⁿ for only one of the N = 2ⁿ basis states. We'd have to account for the algorithm time required to repeat N passes through the circuit, which (simplistically) brings it to O(N log² N). While this can be improved, the point remains: our result above would have to be multiplied by something to account for all N output basis states.

2. If we were thinking of using the QFT to compute the DFT, we'd need to calculate the N complex amplitudes, {c̃_k}, from the inputs {c_k}. They don't appear in our analysis because we implicitly considered the special N amplitudes {ω^{xy}}_{y=0}^{N-1} that define the CBS. Fixing this feels like an O(N) proposition.

3. Even if we could repair the above with clever redesign, we have the biggest obstacle: the output coefficients which hold the DFT information are amplitudes. Measuring the output state collapses them, destroying their quantum superposition.
Although we cannot (yet) use the QFT to directly compute a DFT with growth smaller than the FFT, we can still use it to our advantage in quantum circuits, as we will soon discover.
[Accounting for Precision. You may worry that increasingly precise matrix multiplies will be needed in the 2 × 2 unitary matrices as n increases. This is a valid concern. Fixed precision will only get us so far until our ability to generate and compute with increasingly higher roots-of-unity will be tapped out. So the constant time unitary gates are relative to, or above, the primitive complex (or real) multiplications and additions. We would have to make some design choices to either limit n to a maximum useable size or else account for these primitive arithmetic operations in the circuit complexity. We'll take the first option: our n will remain below some maximum n₀; we build our circuitry and algorithm to be able to handle adequate precision for that n₀ and all n ≤ n₀. This isn't so hard to do in our current problem since n never gets too big: n = log₂ N, where N is the true size of our problem, so we won't be needing arbitrarily large n's in practice. If that doesn't work for us in some future problem, we can toss in the extra complexity factors and they will usually still produce satisfactory polynomial big-Os for problems that are classically exponential.]
Further Improvements

I'll finish by mentioning, without protracted analysis, a couple of ways the above circuits can be simplified and/or accelerated:
1. If we are willing to destroy the quantum states and measure the output qubits immediately after performing the QFT (something we are willing to do in most of our algorithms), then the two-qubit (controlled-R_k) gates can be replaced with 1-qubit gates. This is based on the idea that, rather than construct a controlled-R_k gate, we instead measure the controlling qubit first and then apply R_k based on the outcome of that measurement. This sounds suspicious, I know: we're still doing a conditional application of a gate. However, a controlled-R_k does not destroy the controlling qubit and contains all the conditional logic inside the quantum gate, whereas measuring a qubit and then applying a 1-qubit gate based on its outcome moves the controlling aspect from inside the quantum gate to the outer classical logic. It is much easier to build stable conditioned one-qubit gates than two-qubit controlled gates. Do note, however, that this does not improve the computational complexity.
2. If m-bit accuracy is enough, where m < n, then we get improved complexity. In this case, we just ignore the least-significant (n − m) output qubits. That amounts to tossing out the top (n − m) lines, leaving only m channels to compute. The new complexity is now O(m²) rather than O(n²).
Chapter 23
Shor's Algorithm
23.1
23.1.1
Shor's algorithms are the crown jewels of elementary quantum information theory. They demonstrate that a quantum computer, once realized, will be able to handle some practical applications that are beyond the reach of the fastest existing supercomputers, the most dramatic being the factoring of large numbers.

As you may know from news sources or academic reports, the inability of computers to factor astronomically large integers on a human timescale is the key to RSA encryption, and RSA encryption secures the Internet. Shor's quantum factoring algorithm should be able to solve the problem in minutes or seconds and would be a disruptive technology should a quantum computer be designed and programmed to implement it. Meanwhile, quantum encryption, an advanced topic that we study in the next course, offers a possible alternative to Internet security that could replace RSA when the time comes.
23.1.2
There are many ways to present Shor's results, and it is easy to become confused about what they all say. We'll try to make things understandable by dividing our study into two parts, adumbrated by two observations.

1. Shor's algorithm for period-finding is a relativized (read: not absolute) exponential speed-up over a classical counterpart. Like Simon's algorithm, there are periodic functions that do not have polynomial-fast oracles. In those cases the polynomial complexity of the {circuit + algorithm} around the oracle will not help.

2. Shor's algorithm for factoring not only makes use of the period-finding algorithm, but also provides an oracle for a specific function that is polynomial-time, making the entire {circuit + algorithm} an absolute exponential speed-up over the classical version.
23.1.3

Recall two facts about periodic functions from our Fourier lectures.

1. Period and frequency are inversely related,

$$\text{(period)} \times \text{(frequency)} \;=\; \text{constant},$$

where the constant is usually 1 or 2π for continuous functions and the vector size, M, for discrete functions.

2. If a discrete function is periodic, its spectrum DFT(f) will have values which are mostly small or zero except at domain points that are multiples of the frequency (see Figure 23.1).
Figure 23.1: The spectrum of a vector with period 8 and frequency 16 = 128/8
In very broad and slightly inaccurate terms, this suggests we query the spectrum of
our function, f (x), ascertain its fundamental frequency, m, (the first non-zero spike)
and from it get the period, a = M/m.
But what does it mean to query the frequency? That's code for "take a post-oracle measurement in the Fourier basis." We learned that measuring along a non-preferred basis is actually applying the operator that converts the preferred basis to the alternate basis, and for frequencies of periodic functions, that gate is none other than the QFT.

Well, this sounds easy, and while it may motivate the use of the QFT, figuring out how to use it and what to test will consume the next two weeks.
How We'll Set Up the State Prior to Applying the QFT

Another thing we saw in our Fourier lectures was that we get the cleanest, easiest-to-analyze spectrum of a periodic function when we start with a pure periodic function in the spatial (or time) domain, and then apply the transform. In the continuous case that was a sinusoid or exponential, e.g., sin 3x (Figure 23.2). In the discrete case it is a vector like the following, with 128 components:
$$\bigl(\,0,\;0,\;0,\;.25,\;0,\;0,\;0,\;0,\;0,\;0,\;0,\;.25,\;0,\;\ldots\,,\;0,\;.25,\;0,\;0,\;0,\;0\,\bigr)^T,$$

with .25 in the sixteen positions x ≡ 3 (mod 8), i.e., x = 3, 11, 19, ..., 123, and 0 everywhere else.
Such overtly periodic vectors have DFTs in which all the non-zero frequencies in the spectrum have the same amplitudes, as shown in Figure 23.3.
Figure 23.3: The spectrum of a purely periodic vector with period 8 and frequency
16 = 128/8
We will process our original function so that it produces a purely periodic cousin with the same period by

1. putting a maximally mixed state into the oracle's A register to enable quantum parallelism, and

2. conceptually collapsing the superposition at the output of the oracle by taking a B register measurement and applying the generalized Born rule.

This will leave a pure periodic vector in the A register which we can send through a post-processing QFT gate and, finally, measure. We may have to do this more than once, thereby producing several measurements. To extract m, and thus a, from the measurements we will apply some beautiful mathematics.
The Final Approach

After first defining the kind of periodicity that Shor's work addresses, we begin the final leg of our journey, which will take us through some quantum and classical terrain. When we're done, you will have completed the first phase in your study of quantum computation and will be ready to move on to more advanced topics.

The math that accompanies Shor's algorithms is significant; it spans areas as diverse as Fourier analysis, number theory, complex arithmetic and trigonometry. We have covered each of these subjects completely. If you find yourself stuck on some detail, please search the table of contents in this volume for a pointer to the relevant section.
23.2 Injective Periodicity
23.2.1
The kind of periodicity Shor's algorithm addresses can be expressed in terms of functions defined over all the integers, Z, or over just a finite group of integers like Z_M. The two definitions are equivalent, but it helps to define periodicity both ways so we can speak freely in either dialect. Here we'll consider all integers, and in the next subsection we'll deal with Z_M.

We've discussed ordinary periodicity, like that of the function sin x, as well as (Z₂)ⁿ periodicity, studied in Simon's algorithm. Shor's periodicity is much closer to ordinary periodicity, but has one twist that gives it a pinch of Simon's more exotic variety.
A function defined on Z,

$$f: \mathbf{Z} \longrightarrow S, \qquad S \subseteq \mathbf{Z},$$

is called periodic injective with period a > 0 if

$$f(x) \;=\; f(y) \quad\Longleftrightarrow\quad x \equiv y \pmod a\,.$$
The term injective will be discussed shortly. Because of the if and only if (⟺) in the definition, we don't need to say "smallest" or "unique" for a. Those conditions follow naturally.
23.2.2
A little consideration of the previous definition should convince you that any periodic injective function with period a can be confined to a finite subset of Z which contains the interval [0, a). To feel f's periodicity, though, we'd want M to contain at least a few copies of a inside it, i.e., we would like M > 3a or M > 1000a. It helps if we assume that we do know such an M, even if we don't know the period, a, and let f be defined on Z_M, rather than the larger Z. The definition of periodic injective in this setting would be as follows.

A function defined on Z_M,

$$f: \mathbf{Z}_M \longrightarrow S, \qquad S \subseteq \mathbf{Z}_M,$$

is called periodic injective with period a > 0 if

$$f(x) \;=\; f(y) \quad\Longleftrightarrow\quad x \equiv y \pmod a\,.$$
23.2.3
The term injective is how mathematicians say 1-to-1. Also, "periodic injective" is seen in the quantum computing literature, so I think it's worth using (rather than the rag-tag "1-to-1-periodicity," which is a hyphen extravaganza). But what, exactly, are we claiming to be injective? A periodic function is patently non-injective, mapping multiple domain points to the same image value. Where is there 1-to-1-ness? It derives from the following fact, which is a direct consequence of the definition:

The if-and-only-if (⟺) condition in the definition of periodic injective implies that, when restricted to the set [0, a) = {0, 1, 2, ..., a − 1}, f is 1-to-1. The same is true of any set of ≤ a consecutive integers in the domain.
[Exercise. Prove it.]
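Because the definition is a clean if-and-only-if, it can also be tested mechanically. Here is a small checker, my own illustration in plain Python; the particular f is hypothetical, chosen only because it is 1-to-1 on [0, a):

    # Check (illustrative): f on Z_M is periodic injective with period a
    # exactly when f(x) = f(y) <=> x = y (mod a).
    def is_periodic_injective(f, M, a):
        return all((f(x) == f(y)) == (x % a == y % a)
                   for x in range(M) for y in range(M))

    M, a = 32, 8
    f = lambda x: (5 * (x % a) + 3) % 17   # hypothetical f, 1-to-1 on [0, a)
    print(is_periodic_injective(f, M, a))  # True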
23.3
Figure 23.6: We add the weak assumption that 2 (or more) a-intervals fit into [0, M)
In fact, when we apply Shor's periodicity algorithm to RSA encryption-breaking or the factoring problem, this added assumption will automatically be satisfied. Usually, we have an even stronger assumption, namely that a ≪ M.

In the general case, we'll see that even if we are only guaranteed that a ≤ .9999M, we still end up proving that Shor's problem is solvable in polynomial time by a quantum circuit. But there's no reason to be so conservative. Even a < M/2 is overdoing it for practical purposes, yet it makes all of our estimates precise and easy to prove without any hand-waving. Also, it costs us nothing algorithmically, as we will also see.
Figure 23.8: Our proof will also work for only one a interval in [0, M )
23.3.1
As stated, the problem has relatively few moving parts: an unknown period, a; a known upper bound for a, M; and the injective periodicity property. To facilitate the circuit and algorithm, we'll have to add a few more letters: n, N and r. Here are their definitions.
A Power of 2, 2ⁿ, for the Domain Size
Let n be the exponent that establishes the closest power-of-2 above M²,

$$2^{n-1} \;<\; M^2 \;\le\; 2^n\,.$$

We'll use the integer interval [0, 1, ..., 2ⁿ − 1] as our official domain for f, and we'll let N be the actual power-of-2,

$$N \;\equiv\; 2^n\,.$$

Since M > 2a, we are guaranteed that [0, N − 1] will contain at least as many size-a intervals within it as [0, M − 1] did.
[You're worried about how to define f beyond the original domain limit M? Stop worrying. It's not our job to define f, just to discover its period. We know that f is periodic with period a, even though we don't know a yet. That means its definition can be extended to all of Z. So we can take any size domain we want. Stated another way, we assume our oracle can compute f(x) for any x.]
The reason for bracketing M² like this only becomes apparent as the plot unfolds. Don't be intimidated into believing anyone could predict we would need these exact bounds,

$$\frac{N}{2} \;<\; M^2 \;\le\; N\,.$$

(The remaining letter, r, will be the width of the oracle's B register, whose values live in [0, 2^r − 1]; we'll pin it down when we meet the circuit.)

Taking logs of the bracketing inequality,

$$n - 1 \;<\; \log M^2 \;\le\; \log N\,, \qquad\text{or}\qquad n - 1 \;<\; 2\log M \;\le\; n\,.$$

The big-O of every expression in this equation will kill the constants and weaken < to ≤, producing

$$O(\log N) \;\le\; O(\log M) \;\le\; O(\log N)\,,$$

and since the far left and far right are equal, they both equal the middle, i.e.,

$$O(\log N) \;=\; O(\log M)\,.$$

Therefore, a growth rate of any polynomial function of these two will also be equal,

$$O(\log^p N) \;=\; O(\log^p M)\,.$$

Thus, bracketing M² between N/2 and N allows us to use N to compute the complexity and later replace it with M. Specifically, we'll eventually compute a big-O of log³ N for Shor's algorithm, implying a complexity of log³ M.
23.3.2 The Z_N ↔ (Z₂)ⁿ CBS Connection
We're all full of energy and eager to start wiring this baby up, but there is one final precaution we should take lest we find ourselves adrift in a sea of math. On the one hand, the problem lives in the world of ordinary integer arithmetic. We are dealing with simple functions and sums like

$$f(18) \;=\; 6\,, \qquad\text{and}\qquad 6 + 7 \;=\; 13\,.$$
On the other hand, we will be working with a quantum circuit which relies on mod-2 arithmetic, most notably the oracle's B register output,

$$|y \oplus f(x)\rangle^{(r)},$$

or, potentially more confusing, ordinary arithmetic inside a ket, as in the expression

$$|x + ja\rangle^{(n)}.$$

There's no need to panic. The simple rule is that when you see +, use ordinary addition, and when you see ⊕, use mod-2.
|y ⊕ f(x)⟩^r. We're familiar with the mod-2 sum and its use inside a ket, especially when we are expressing U_f's target register. The only very minor adjustment we'll need to make arises from the oracle's B channel being r qubits wide (where 2^r is f's range size) instead of the same n qubits of the oracle's A channel (where 2ⁿ is f's domain size). We'll be careful when we come to that.

|x + ja⟩ⁿ. As for ordinary addition inside the kets, this will come about when we partition the domain into mutually exclusive cosets, a process that I'll describe shortly. The main thing to be aware of is that the sum must not extend beyond the dimension of the Hilbert space in which the ket lives, namely 2ⁿ. That's necessary since an integer x inside a ket |x⟩ⁿ represents a CBS state, and there are only 2ⁿ of those, |0⟩ⁿ, ..., |2ⁿ − 1⟩ⁿ. We'll be sure to obey that rule, too.
Okay, I've burdened you with eye protection, seat belts and other safety equipment, and I know you're bursting to start building something. Let's begin.
23.4
23.4.1 The Circuit

    |0⟩ⁿ ──[H⊗ⁿ]──┤      ├──[QFT^(N)]── (measure: actual)
                  │ U_f  │
    |0⟩^r ────────┤      ├───────────── (measure: conceptual)

[Note: As with Simon, I suppressed the hatching of the quantum wires so as to produce a cleaner looking circuit. The A channel has n lines, and the B channel has r lines, as evinced by the kets and operators which are labeled with the exponents n, N and r.]
There are two multi-dimensional registers, the upper A register and the lower B register. A side-by-side comparison of Shor's and Simon's circuits reveals two differences:

1. The post-oracle A register is processed by a quantum Fourier transform instead of a multi-order Hadamard gate.

2. Less significant is the size of the B register. Rather than it being an n-fold tensor space of the same dimension, 2ⁿ, as the A register, it is an r-fold space, with a smaller dimension, 2^r. This reflects the fact that we know f to be periodic, with period a, forcing the number of distinct image values of f to be exactly a because of injective periodicity. Well, a < M < M² ≤ 2ⁿ, so there you have it. These images can be reassigned to fit into a vector space of dimension smaller, usually much smaller, than A's 2ⁿ. (Remember that we don't care what the actual images are, sheep or neutrinos, so they may as well be 0, 1, ..., 2^r − 1.) An exact value for r may be somewhat unclear at this point; all we know is that it need never be more than n, and we'd like to reserve the right to give it a different value by using a distinct variable name.
23.4.2 The Plan
Target Channel. The bottom line forwards its |0⟩^r directly on to the quantum oracle's B register, a move that (we saw with Simon) anticipates an application of the generalized Born rule.

At that point, we conceptually test the B register output, causing a collapse of both registers (Born rule). We'll analyze what's left in the collapsed A register's output (with the help of a re-organizing QFT gate). We'll find that only a very small and special set of measurement results are likely. And like Simon's algorithm, we may need more than one sampling of the circuit to get an adequate collection of useful outputs on the A-line, but it'll come very quickly due to the probabilities.
Strategy

Up to now, I've been comparing Shor to Simon. There's an irony, though, when we come to trying to understand the application of the final post-oracle, pre-measurement gate. It was quite difficult to give a simple reason why a final Hadamard gate did the trick for Simon's algorithm (I only alluded to a technical lemma back then). But the need for a final QFT, as we have already seen, is quite easy to understand: we want the period, so we measure in the Fourier basis to get the fundamental frequency, m. Measuring in the Fourier basis means applying a z basis-to-Fourier basis transformation, i.e., a QFT. m gets us a, and we go home early.

One wrinkle rears its head when we look at the spectrum of a periodic function, even one that is pure in the sense described above. While the likely (or in some cases only) measurement possibilities may be limited to a small subset {cm}_{c=0}^{a-1}, where m = N/a is the frequency associated with the period a, we don't know which cm we will measure; there are a of them and they are all about equally likely. You'll see why we should expect to get lucky.
Figure 23.10: Eight highly probable measurement results, cm, for N = 128 and a = 8
So, while we'll know we have measured a multiple cm of the frequency, we won't know which multiple. As it happens, if we are lucky enough to get a multiple c that has the bonus feature of being relatively prime (coprime) to a, we'll be able to use it to find a.

A second wrinkle is that despite what I've led you to believe through my pictures, the likely measurements aren't exact multiples cm of the frequency, m. Instead they will be a values y_c, for c = 0, 1, ..., (a − 1), which are very close to cm. We'll have to find out how to lock-on to the nearby cm associated with our measured y_c. Still, when we do, a c coprime to a will be the most desirable multiple that will lead to a.
Two Forks

As we proceed, we'll get to a fork in the road. If we take the right fork, we'll find an easy option. However, the left fork will require much more detailed math. That's the hard option. In the easy case the spectrum measurement will yield an exact multiple, cm, of m that I spoke of above. The harder, general case will give us only a y_c close to cm. Then we'll have to earn our money and use some math to hop from the y_c on which we landed to the nearby cm that we really want.

That's the plan. It may not be a perfect plan, but I think that's what I like about it.
23.5

Here, again, is the circuit:

    |0⟩ⁿ ──[H⊗ⁿ]──┤      ├──[QFT^(N)]── (measure: actual)
                  │ U_f  │
    |0⟩^r ────────┤      ├───────────── (measure: conceptual)

Since many of the sections are identical to what we've done earlier, the analysis is also the same. However, I'll repeat the discussion of those common parts to keep this lecture somewhat self-contained.
23.6
23.6.1

I suppose we would be well advised to make certain we know what the state looks like at access point A before we tackle point B, and that stage of the circuit is identical to Simon's; it sets up quantum parallelism by producing a perfectly mixed state, enabling the oracle to act on f(x) for all possible x, simultaneously.
Hadamard, H^{⊗n}, in H^{(n)}
Even though we are only going to apply the 2ⁿ-dimensional Hadamard gate to the simple input |0⟩ⁿ, let's review the effect it has on any CBS |x⟩ⁿ:

$$|x\rangle^{(n)} \;\xrightarrow{\;H^{\otimes n}\;}\; \frac{1}{\sqrt{2^n}}\sum_{y=0}^{2^n-1}(-1)^{x\,\cdot\,y}\,|y\rangle^{(n)},$$

where the dot product between vector x and vector y is the mod-2 dot product. When applied to |0⟩ⁿ, this reduces to

$$|0\rangle^{(n)} \;\xrightarrow{\;H^{\otimes n}\;}\; \frac{1}{\sqrt{2^n}}\sum_{y=0}^{2^n-1}|y\rangle^{(n)},$$

or, returning to the usual computational basis notation |x⟩ⁿ for the summation,

$$|0\rangle^{(n)} \;\xrightarrow{\;H^{\otimes n}\;}\; \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle^{(n)}.$$

The output state of this Hadamard operator is the nth order x-basis CBS ket, |+⟩ⁿ = H^{⊗n}|0⟩ⁿ, reminding us that Hadamard gates provide both quantum parallelism as well as a z ↔ x basis conversion operator.
23.6.2

The next stage is the quantum oracle, U_f, the boxed element in the middle of the circuit.
The only difference between this and Simon's oracle is the width of the oracle's B register. Today, it is r qubits wide, where r will be (typically) smaller than the n of the A register. On a CBS input it behaves like so:

$$|x\rangle^{(n)}\,|0\rangle^{(r)} \;\xrightarrow{\;U_f\;}\; |x\rangle^{(n)}\,|0 \oplus f(x)\rangle^{(r)} \;=\; |x\rangle^{(n)}\,|f(x)\rangle^{(r)}.$$

23.6.3

Applied to our full superposition, linearity gives

$$\left(\frac{1}{\sqrt 2}\right)^{n}\sum_{x=0}^{2^n-1}|x\rangle^{(n)}|0\rangle^{(r)} \;\xrightarrow{\;U_f\;}\; \left(\frac{1}{\sqrt 2}\right)^{n}\sum_{x=0}^{2^n-1}|x\rangle^{(n)}|f(x)\rangle^{(r)},$$
which we see is a weighted sum of separable products, all weights being equal to (1/√2)ⁿ. The headline is this: the output is a superposition of separable terms |x⟩ⁿ|f(x)⟩^r, exactly the form the generalized Born rule needs,

$$|\psi\rangle^{(n+r)} \;=\; \frac{\displaystyle\sum_{x=0}^{2^n-1}|x\rangle^{(n)}\,|f(x)\rangle^{(r)}}{(\sqrt 2)^{n}}\,.$$
23.7

At this point, we will split the discussion into two parallel analyses: the easy case, in which the period a divides N evenly, and the general (hard) case, in which it does not.
23.8
Well be using two classical concepts heavily when we justify Shors quantum algorithm, and well also need these result for time complexity estimation.
Basic Notation
We express that fact that one integer, c, divides another integer, a, evenly (i.e. with
remainder 0) using the notation
c a .
Also, we will symbolize the non-negative integers using the notation
Z0 .
Now, assume we have two distinct non-negative integers, a and b, i.e.,
a, b Z0 ,
a > b,
23.8.1

The greatest common divisor of a and b, gcd(a, b), is the largest integer, c, with c | a and c | b.

23.8.2

a ⊥ b is my shorthand for "a is coprime to b," i.e., gcd(a, b) = 1, and a ⊥̸ b is my shorthand for "a is not coprime to b."
23.9 First Fork: Easy Case (a | N)

We now consider the special case in which a is a divisor of N = 2ⁿ (in symbols, a | N). This implies a = 2^l. Immediately, we recognize that there's really no need for a quantum algorithm in this situation, because we can test for periodicity using classical means by simply trying 2^l for l = 1, 2, 3, ..., (n − 1), which constitutes O(log N) trials. In the case of factoring, to which we'll apply period-finding in a later lecture, we'll see that each trial requires a computation of f(x) = y^x (mod N) ∈ O(log⁴ N), y some constant (to be revealed later). So the classical approach is O(log N) relative to the oracle, and O(log⁵ N) absolute, including the oracle, all without the help of a quantum circuit. However, the quantum algorithm in this easy case lays the foundation for the difficult case that follows, so we will develop it now and confirm that QC at least matches the O(log⁵ N) classical complexity (we'll do a little better).

Figure 23.11: Easy case covers a | N, exactly
23.9.1
It's time to use the fact that f is periodic injective with (unknown) period a to help rewrite the output of the oracle's B register prior to the conceptual measurement. Injective periodicity tells us that the domain can be partitioned (in more than one way) into many disjoint cosets of size a, each of which provides a 1-to-1 sub-domain for f. Furthermore, because we are in the easy case, these cosets fit exactly into the big interval [0, N). Here's how.

$$[\,0,\,N-1\,] \;=\; [\,0,\,ma-1\,] \;=\; R \;\cup\; (R+a) \;\cup\; (R+2a) \;\cup\; \cdots \;\cup\; \bigl(R+(m-1)a\bigr),$$

where

$$\begin{aligned}
R \;&\equiv\; [\,0,\,a-1\,] \;=\; \{0, 1, 2, \ldots, a-1\},\\
a \;&=\; \text{period of } f, \text{ and}\\
m \;&=\; N/a \;=\; \text{the number of times } a \text{ divides } N = 2^n.
\end{aligned}$$

Definition of Coset. R + ja is called the jth coset of R.

We rewrite this decomposition relative to a typical element, x, in the base coset R:

$$[\,0,\,N-1\,] \;=\; \bigl\{0, 1, \ldots, x, \ldots, a-1\bigr\} \,\cup\, \bigl\{a, 1+a, \ldots, x+a, \ldots, 2a-1\bigr\} \,\cup\, \cdots
\;=\; \bigcup_{j=0}^{m-1}\Bigl\{x+ja \Bigm| x \in [0,a)\Bigr\}
\;=\; \bigcup_{x=0}^{a-1}\Bigl\{x+ja\Bigr\}_{j=0}^{m-1}.$$
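The decomposition is easy to demonstrate concretely. This is my own illustration (plain Python), using the simplest periodic injective stand-in, f(x) = x mod a:

    # Demonstration (illustrative): [0, N) splits into m = N/a disjoint
    # cosets R + ja, and f is constant exactly on the orbits {x + ja}.
    N, a = 32, 8
    m = N // a
    cosets = [set(range(j * a, (j + 1) * a)) for j in range(m)]
    assert set().union(*cosets) == set(range(N))

    f = lambda x: x % a
    for x in range(a):
        orbit = {x + j * a for j in range(m)}
        assert {f(v) for v in orbit} == {f(x)}   # one image per orbit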
23.9.2
It seems like a long time since we saw the original expression for the oracle's output, so let's write it down again. It was

$$\left(\frac{1}{\sqrt 2}\right)^{n}\sum_{x=0}^{2^n-1}|x\rangle^{(n)}\,|f(x)\rangle^{(r)}.$$

Our new partition of the domain gives us a nice way to express this. Each element x ∈ R has a unique partner in each of the cosets, R + ja, satisfying

$$\left.
\begin{array}{c}
x\\
x+a\\
x+2a\\
\vdots\\
x+ja\\
\vdots\\
x+(m-1)a
\end{array}
\right\}
\;\xrightarrow{\;f\;}\; f(x)\,, \qquad x \in R\,.$$

Using this fact (and keeping in mind that N = 2ⁿ) we only need to sum over the a elements in R and include all the x + ja siblings in each term's A register factor:

$$\frac{1}{\sqrt N}\sum_{x=0}^{N-1}|x\rangle^{(n)}|f(x)\rangle^{(r)}
\;=\; \frac{1}{\sqrt N}\sum_{x=0}^{a-1}\sum_{j=0}^{m-1}|x+ja\rangle^{(n)}|f(x)\rangle^{(r)}
\;=\; \sqrt{\frac{m}{N}}\;\sum_{x=0}^{a-1}\left(\frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x+ja\rangle^{(n)}\right)|f(x)\rangle^{(r)}.$$
I moved a factor of 1/√m to the right of the outer sum so we could see that

1. each term in that outer sum is a normalized state (there are m CBS terms in the inner sum, and each inner term has an amplitude of 1/√m), and

2. the common amplitude remaining on the outside produces a normalized state overall (there are a normalized terms in the outer sum, and each term has an amplitude of √(m/N) = 1/√a).
23.9.3
Although we won't really need to do so, let's imagine what happens if we were to apply the generalized Born rule now, using the rearranged sum in which the B channel plays the role of Born's CBS factors and the A channel holds the general factors. (This is the conceptual measurement of the B register in the circuit.)

As the last sum demonstrated, each B register measurement of f(x) will be attached to not one, but m, input A register states. Thus, measuring B first, while collapsing A, merely produces a superposition of m states in that register, not a single, unique x from the domain. It narrows things down, but not enough to measure:
$$\sqrt{\frac{m}{N}}\;\sum_{x=0}^{a-1}\left(\frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x+ja\rangle^{(n)}\right)|f(x)\rangle^{(r)}
\;\rightsquigarrow\;
\left(\frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x_0+ja\rangle^{(n)}\right)|f(x_0)\rangle^{(r)}.$$

Here, ⇝ means "collapses to." We give the A register's post-collapse state a name,

$$|\overline{x_0}\rangle^{(n)} \;\equiv\; \frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x_0+ja\rangle^{(n)}.$$
Figure 23.13: The spectrum of a purely periodic vector with period 8 and frequency
16 = 128/8
The Details

We'll get our ideal result if we can produce an A register measurement, cm, where c is coprime to a. The following two thoughts will guide us. First, the shift property of the QFT will turn the sum x₀ + ja into phase factors involving a root-of-unity,

$$QFT^{(N)}\,|x_0+ja\rangle^{(n)} \;=\; \frac{1}{\sqrt N}\sum_{y=0}^{N-1}\omega^{(x_0+ja)\,y}\,|y\rangle^{(n)}.$$

Second, sums of roots-of-unity enjoy massive cancellation, which we'll exploit shortly.
23.9.4
We now apply the QFT to the collapsed A register. The QFT, being a linear operator, distributes over sums, so it passes right through the Σ:

$$QFT^{(N)}\left(\frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x_0+ja\rangle^{(n)}\right) \;=\; \frac{1}{\sqrt m}\sum_{j=0}^{m-1}QFT^{(N)}\,|x_0+ja\rangle^{(n)}.$$

Each summand is

$$QFT^{(N)}\,|x_0+ja\rangle^{(n)} \;=\; \frac{1}{\sqrt N}\sum_{y=0}^{N-1}\omega^{(x_0+ja)y}\,|y\rangle^{(n)} \;=\; \frac{1}{\sqrt N}\sum_{y=0}^{N-1}\omega^{x_0y}\,\omega^{jay}\,|y\rangle^{(n)},$$

so

$$QFT^{(N)}\left(\frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x_0+ja\rangle^{(n)}\right)
\;=\; \frac{1}{\sqrt m}\sum_{j=0}^{m-1}\frac{1}{\sqrt N}\sum_{y=0}^{N-1}\omega^{x_0y}\omega^{jay}\,|y\rangle^{(n)}
\;=\; \frac{1}{\sqrt{mN}}\sum_{y=0}^{N-1}\omega^{x_0y}\left(\sum_{j=0}^{m-1}\omega^{jay}\right)|y\rangle^{(n)}.$$

We can measure it at any time, and we next look at what the probabilities say we will see when we do.
Foregoing the B Register Measurement. Although we analyzed this under the assumption of a B measurement, an A channel measurement really doesn't care about a "conceptual" B channel measurement. The reasoning is the same as in Simon's algorithm. If we don't measure B first, the oracle's output must continue to carry the full entangled summation

$$\sqrt{\frac{m}{N}}\;\sum_{x=0}^{a-1}\left(\frac{1}{\sqrt m}\sum_{j=0}^{m-1}|x+ja\rangle^{(n)}\right)|f(x)\rangle^{(r)}.$$

This would add an extra outer-nested sum over x ∈ [0, a), paired with the kets |f(x)⟩^r, to our summary expression above, making it the full oracle output, not just that of the A register. Even leaving B unmeasured, the algebraic simplification we get below will still take place inside the big parentheses above for each x, and the probabilities won't be affected. (Also, note that an A register collapse to one specific |x₀ + ja⟩ⁿ will implicitly select a unique |f(x₀)⟩ in the B register.) With this overview, try carrying this complete sum through the next section if you'd like to see its (non-)effect on the outcome.
23.9.5
We are now in an excellent position to analyze this final A register superposition and see much of it disappear as a result of some of the properties of roots-of-unity that we covered in a past lecture. After that, we can analyze the probabilities, which will lead to the algorithm. We proceed in five steps that will

1. identify a special set of a elements, C = {y_c = cm}_{c=0}^{a-1}, of certain measurement likelihood,

2. observe that each of the y_c = cm will be measured with equal likelihood,

3. prove that a random selection from [0, a − 1] will be coprime-to-a 50% of the time,

4. observe that a y = cm associated with c coprime-to-a will be measured with probability ≥ 1/2, and

5. measure a y = cm associated with c coprime-to-a with arbitrarily high confidence in constant time complexity.
23.9.6
After the (conceptual) measurement/collapse of the B register to state |f(x₀)⟩^r, the post-QFT A register was left in the state

$$QFT^{(N)}\,|\overline{x_0}\rangle^{(n)} \;=\; \frac{1}{\sqrt{mN}}\sum_{y=0}^{N-1}\omega^{x_0y}\left(\sum_{j=0}^{m-1}\omega^{jay}\right)|y\rangle^{(n)}.$$

We look at the inner sum in parentheses in a moment. First, let's recap some facts about ω.

ω ≡ ω_N was our primitive Nth root of unity, so

$$\omega^N \;=\; 1\,.$$

Because m is the number of times a divides (evenly) into N,

$$m \;=\; N/a\,,$$

we conclude

$$1 \;=\; \omega^N \;=\; \omega^{am} \;=\; \left(\omega^a\right)^m.$$

In other words, we have shown that

$$\omega^a \;=\; \omega_m$$

is the primitive mth root of unity. Using ω_m in place of ω^a in the above sum produces a form that we have seen before (lecture Complex Arithmetic for Quantum Computing, section Roots of Unity, exercise (d)):

$$\sum_{j=0}^{m-1}\omega^{jay} \;=\; \sum_{j=0}^{m-1}\omega_m^{\,jy} \;=\;
\begin{cases}
m, & \text{if } y \equiv 0 \pmod m\\
0, & \text{if } y \not\equiv 0 \pmod m\,.
\end{cases}$$

This causes a vast quantity of the terms in the QFT output (the double sum) to disappear; only 1-in-m survives:

$$QFT^{(N)}\,|\overline{x_0}\rangle^{(n)} \;=\; \sqrt{\frac{m}{N}}\sum_{y\,\equiv\,0\ (\mathrm{mod}\ m)}\omega^{x_0y}\,|y\rangle^{(n)}.$$

The surviving y are exactly the multiples of m,

$$y \;=\; cm\,, \qquad c = 0, 1, \ldots, a-1\,.$$

This defines the special set of size a which will be certain to contain our measured y:
$$C \;\equiv\; \{\,cm\,\}_{c=0}^{a-1}\,.$$
23.9.7

The amplitudes of the surviving terms all have magnitude √(m/N) = 1/√a, so the post-QFT state can be written

$$QFT^{(N)}\,|\overline{x_0}\rangle^{(n)} \;=\; \frac{1}{\sqrt a}\sum_{c=0}^{a-1}\omega^{x_0\,cm}\,|cm\rangle^{(n)}.$$
Example
Consider a function that has period 8 = 2³ defined on a domain of size 128 = 2⁷. Our problem variables for this function become

$$n = 7\,, \qquad N = 2^n = 128\,, \qquad a = 2^3 = 8\,, \qquad\text{and}\qquad m \;=\; \frac{N}{a} \;=\; \frac{2^7}{2^3} \;=\; 16\,.$$

Let's say that we measured the B register and got the value f(x₀) corresponding to x₀ = 3. According to the above analysis, the full pre-tested superposition,

$$\frac{1}{\sqrt{128}}\sum_{x=0}^{127}|x\rangle^{(7)}\,|f(x)\rangle^{(r)},$$

collapses to an A register state whose 128 coordinates are
$$\bigl(\,0,\;0,\;0,\;.25,\;0,\;0,\;0,\;0,\;0,\;0,\;0,\;.25,\;0,\;\ldots\,,\;0,\;.25,\;0,\;0,\;0,\;0\,\bigr)^T,$$

with .25 = 1/√16 in the sixteen positions x ≡ 3 (mod 8), i.e., x = 3, 11, 19, ..., 123, and 0 elsewhere.
We are interested in learning the period but can't really look at the individual coordinates of this vector, it being a superposition which will collapse unpredictably. But the analysis tells us that its QFT will produce a vector with only eight non-zero amplitudes,

$$QFT^{(128)}\,|\overline{3}\rangle^{(7)} \;=\; \sqrt{\frac{m}{N}}\sum_{y\,\equiv\,0\ (\mathrm{mod}\ 16)}\omega^{3y}\,|y\rangle^{(7)} \;=\; \frac{1}{\sqrt 8}\sum_{c=0}^{7}\omega^{3\cdot 16c}\,|16c\rangle^{(7)}.$$
Match this with the graph to verify the spikes are at positions 16, 32, 48, etc. (Due to an artifact of the graphing software, the 0 frequency appears after the array at phantom position 128.) If we can use this set to glean the frequency m = 16, we will be able to determine the period a = N/m = 128/16 = 8.

Measuring along the frequency basis means applying the basis-transforming QFT after the oracle, and explains its presence in the circuit.
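You can reproduce the eight spikes with an ordinary FFT. The sketch below is my own (numpy assumed); note that np.fft.fft uses the root e^{-2πi/N} where our QFT uses e^{+2πi/N}, but the magnitudes, and hence the support, are identical.

    # Check (illustrative): the collapsed A-register vector has spectral
    # support only on multiples of m = 16.
    import numpy as np

    N, a, x0 = 128, 8, 3
    m = N // a
    v = np.zeros(N, dtype=complex)
    v[x0::a] = 1 / np.sqrt(m)                 # sixteen entries of .25

    spectrum = np.fft.fft(v) / np.sqrt(N)     # unitary normalization
    support = np.nonzero(~np.isclose(np.abs(spectrum), 0))[0]
    print(support)                            # [  0  16  32  48  64  80  96 112]
    assert all(y % m == 0 for y in support)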
23.9.8

(Step II, in brief: the a surviving amplitudes ω^{x₀cm}/√a all have the same magnitude, 1/√a, so each y_c = cm is measured with equal likelihood, 1/a.)

23.9.9

(Step III, in brief: since a is a power of 2 in this easy case, the c ∈ [0, a) coprime to a are exactly the odd ones, i.e., half of them.)
Our next goal will be to demonstrate that we not only measure some value in the set C = {cm} in constant time, but that we can expect to get a special subset B ⊆ C in constant time, namely those cm corresponding to c ⊥ a (i.e., c coprime to a). This will enable us to find m (details shortly), and once we have m we get the period a instantly from a = N/m.

In Step I, we proved that the likelihood of measuring one of the special C = {y_c} = {cm} was 100%. Then, in Steps II and III, we demonstrated that

1. each of the cm will be measured with equal likelihood, and

2. the probability of selecting a number that is coprime to a at random from the numbers between 0 and a − 1 is (in this special easy case) 1/2, i.e., a constant, independent of a.

We combine all this with the help of a little probability theory. The derivation may seem overly formal given the simplicity of the probabilities in this easy case, but it sets the stage for the difficult case, where we will certainly need the formalism.

First we reprise some notation and introduce some new:

$$\begin{aligned}
C \;&\equiv\; \{cm\}_{c=0}^{a-1}\,,\\
B \;&\equiv\; \{\, y_b \mid y_b = bm \in C \ \text{and}\ b \perp a \,\} \qquad (\text{Note: } B \subseteq C)\,,\\
P(C) \;&=\; P\bigl(\text{we measure some } y \in C\bigr) \;=\; 1\,,\\
P(B \mid C) \;&=\; P\bigl(\text{we measure some } y \in B, \text{ given that the measured } y \text{ is known to be} \in C\bigr)\\
&=\; P(c \perp a) \;=\; 1 - P(c \not\perp a) \;=\; \frac{1}{2}\,,
\end{aligned}$$

and, since P(C) = 1, also P(B) = P(B | C) = 1/2.
23.9.10
We repeat the circuit until we measure a y_c = cm with c ⊥ a, that is, a y_c ∈ B, with high probability. In a moment, we'll see why it's so important to achieve this coprime condition.

Example: If we measure the output of our quantum circuit repeatedly, insisting on a y_c with c ⊥ a with P > .999999, or error tolerance ε = 10⁻⁶, we would need

$$T \;=\; \left\lfloor \frac{\log(10^{-6})}{\log(.5)} \right\rfloor + 1 \;=\; \left\lfloor \frac{-6}{-.30103} \right\rfloor + 1 \;=\; 19 + 1 \;=\; 20$$

measurements.

We can instruct our algorithm to cycle 100 times, to get an even better confidence, but if the data determined that only eight measurements were needed to find the desired cm, then it would return with a successful cm after eight passes. Our hyper-conservative estimate costs us nothing.
23.9.11
Suppose the circuit's final measurement yields y = cm with c ⊥ a. Then the Euclidean algorithm hands us m, because

$$m' \;\equiv\; \gcd(N,\,cm) \;=\; \gcd(am,\,cm) \;=\; m \qquad\Longrightarrow\qquad a \;=\; \frac{N}{m}\,.$$

But this only happens if we measure y ∈ B, which is why finding such a y will give us our period and therefore solve Shor's problem.

How do we know whether we measured a cm ∈ B? We don't, but we try m′ anyway. We manufacture an a′ using m′, by

$$a' \;\equiv\; \frac{N}{m'}\,,$$

hoping a′ = a, which only happens when m′ = m (and, after all, finding a, not m, is our goal). We test this by asking whether f(x + a′) = f(x) for any x, and x = 1 will do. a is the only number that will produce an equality here, by our requirement of injective periodicity. If it does, we're done. If not, we try T − 1 more times because, with .999999 probability, we'll succeed after T = 20 times, no matter how big M (or N) is.

As for the cost of the test f(x + a′) = f(x), that depends on f. We are only asserting relativized polynomial complexity for period-finding, but we know (by a future chapter) that for factoring, f will be polynomial fast (O(log⁴(M)), and maybe better).
We can now present the algorithm.

Shor-like Algorithm (Easy Case)

1. Select an integer, T, that reflects an acceptable failure rate based on any known aspects of the period. E.g., for a failure tolerance of .000001, we might choose T = 20.

2. Repeat the following loop at most T times.

   a. Apply Shor's circuit.
   b. Measure the output of the QFT and get cm.
   c. Compute m′ = EA(N, cm), and set a′ ≡ N/m′.
   d. Test a′: If f(1 + a′) = f(1), then a′ = a; (success) break from the loop.
   e. Otherwise continue to the next pass of the loop.

3. If the above loop ended naturally (i.e., not from the break) after T full passes, we failed. Otherwise, we have found a.
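The classical post-processing can be rehearsed without any quantum hardware by faking the measurement as a uniform draw of c from [0, a). This simulation is my own sketch (plain Python); the function find_period_easy and the stand-in f are hypothetical:

    # Simulation (illustrative) of the easy-case loop: draw c at random to
    # stand in for measuring y = cm, then post-process with gcd and test.
    import math, random

    def find_period_easy(f, N, a_true, T=20):
        m = N // a_true
        for _ in range(T):
            c = random.randrange(a_true)     # fake measurement outcome y = cm
            m_prime = math.gcd(N, c * m)     # = m exactly when gcd(c, a) = 1
            a_prime = N // m_prime
            if f(1 + a_prime) == f(1):       # injectivity makes this test decisive
                return a_prime
            # otherwise, continue to the next pass
        return None                          # failed all T passes

    N, a = 128, 8
    f = lambda x: x % a
    print(find_period_easy(f, N, a))         # 8, with probability > .999999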
23.10 Second Fork: General Case (a ∤ N)

Figure 23.15: There is (possibly) a remainder for N/a, called the excess
23.10.1
Like the easy case, f's injective periodicity helps us rewrite the output of the oracle's B register prior to the conceptual measurement. The domain can still be partitioned into many disjoint cosets of size a, each of which provides a 1-to-1 sub-domain for f, but now there is a final coset which may not be complete:

Figure 23.16: [0, N) is the union of distinct cosets of size a, except for the last

Express [0, N − 1] as the union of a union,

$$[\,0,\,N-1\,] \;=\; \underbrace{[\,0,\,a-1\,]\cup[\,a,\,2a-1\,]\cup\cdots\cup[\,(m-1)a,\,ma-1\,]}_{R\;\cup\;(R+a)\;\cup\;\cdots\;\cup\;(R+(m-1)a)} \;\cup\; \overbrace{[\,ma,\,N-1\,]}^{\text{excess}},$$

where

$$\begin{aligned}
R \;&\equiv\; [\,0,\,a-1\,] \;=\; \{0, 1, 2, \ldots, a-1\},\\
a \;&=\; \text{period of } f, \text{ and}\\
m \;&=\; \lfloor N/a\rfloor\,, \text{ the number of times } a \text{ divides (unevenly) into } N.
\end{aligned}$$

Cosets. As before, R + ja is called the jth coset of R, but now we also have

$$\bigl[R + ma\bigr] \,\cap\, [\,0,\,N-1\,]\,,$$

the partial mth coset of R, from ma to N − 1. In terms of a typical element x of R,

$$[\,0,\,N-1\,] \;=\; \bigcup_{x=0}^{a-1}\bigl\{x+ja\bigr\}_{j=0}^{m-1} \;\cup\; \bigcup_{x=0}^{N-ma-1}\bigl\{x+ma\bigr\},$$

but now we had to slap on the partial coset, {x + ma}, to account for the possible overflow.
Notation to Deal with the Partial Coset

We have to be careful about counting the family members of each element x ∈ R, i.e., those x + ja who map to the same f(x) by periodicity. We sometimes have a member in the last, partial, coset, and sometimes not. If x is among the first few integers of R, i.e., [0, N − ma), then there will be m + 1 partners (including x) among its kin. However, if x is among the latter integers of R, i.e., [N − ma, a), then there will be only m partners (including x) among its kin.

We'll use m̃ to be either m or m + 1, depending on x:

$$\widetilde m \;=\;
\begin{cases}
m + 1, & \text{for the first few } x \text{ in } [0, a-1]\,, \text{ i.e., } x < N - ma\\
m, & \text{for the remaining } x \text{ in } [0, a-1]\,.
\end{cases}$$
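m̃ is easy to tabulate; here is my own quick check (plain Python) for the N = 128, a = 10 example coming up:

    # Check (illustrative): for each x in R, the orbit size is m + 1 when
    # x < N - ma and m otherwise.
    N, a = 128, 10
    m = N // a                               # 12
    for x in range(a):
        m_tilde = len(range(x, N, a))        # members of {x, x+a, ...} below N
        assert m_tilde == (m + 1 if x < N - m * a else m)
    # x = 0..7 have 13 members; x = 8, 9 have 12.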
23.10.2
The oracle's output is, as always,

$$\left(\frac{1}{\sqrt 2}\right)^{n}\sum_{x=0}^{2^n-1}|x\rangle^{(n)}\,|f(x)\rangle^{(r)},$$

and our new partition of the domain gives us a nice way to rewrite this. First, note that

$$\left.
\begin{array}{c}
x\\
x+a\\
x+2a\\
\vdots\\
x+(m-1)a\\
\text{and sometimes}\ldots\\
x+ma
\end{array}
\right\}
\;\xrightarrow{\;f\;}\; f(x)\,, \qquad x \in R\,.$$

Now we make use of our flexible notation, m̃, to keep the expressions neat without sacrificing precision of logic:

$$\frac{1}{\sqrt N}\sum_{x=0}^{N-1}|x\rangle^{(n)}|f(x)\rangle^{(r)}
\;=\; \left(\approx\!\sqrt{\frac{\widetilde m}{N}}\,\right)\sum_{x=0}^{a-1}\left(\frac{1}{\sqrt{\widetilde m}}\sum_{j=0}^{\widetilde m-1}|x+ja\rangle^{(n)}\right)|f(x)\rangle^{(r)}.$$
The factor of 1/√m̃ inside the sum normalizes each term in the outer sum. However, the common amplitude remaining on the outside is harder to symbolize in a formula, which is why I used ≈ to describe it. (m̃ doesn't even make good sense outside the sum, but it gives us an idea of what the normalization factor is.) It turns out that we don't care about its exact value. It will be some number between √(m/N) and √((m+1)/N), the precise value being whatever is needed to normalize the overall state.
23.10.3
We've seen that it helps to imagine a measurement of the oracle's B register's output. Each B register measurement of f(x) will be attached to not one, but m̃, input A register states. The generalized Born rule tells us that measuring B will cause the collapse of A into a superposition of m̃ CBS states, narrowing things down considerably:

$$\left(\approx\!\sqrt{\frac{\widetilde m}{N}}\,\right)\sum_{x=0}^{a-1}\left(\frac{1}{\sqrt{\widetilde m}}\sum_{j=0}^{\widetilde m-1}|x+ja\rangle^{(n)}\right)|f(x)\rangle^{(r)}
\;\rightsquigarrow\;
\left(\frac{1}{\sqrt{\widetilde m}}\sum_{j=0}^{\widetilde m-1}|x_0+ja\rangle^{(n)}\right)|f(x_0)\rangle^{(r)}.$$

Here, ⇝ means "collapses to," and we again write the A register's post-collapse state as

$$|\overline{x_0}\rangle^{(n)} \;\equiv\; \frac{1}{\sqrt{\widetilde m}}\sum_{j=0}^{\widetilde m-1}|x_0+ja\rangle^{(n)}.$$
Figure 23.20: The spectrum of a purely periodic vector with period 10 and frequency
12.8 = 128/10
The situation appears grim.

Let's look at the bright side, though. This picture of a typical DFT applied to an N-dimensional vector, 0 except for amplitudes 1/√m̃ at m̃ time-domain points, suggests that there are still only a frequencies, y₀, y₁, ..., y_{a−1}, which have large magnitudes. And there's even more reason for optimism. Those likely y_k appear to be at least close to multiples of the "integer-ized" frequency m, i.e., they are near frequency domain points of the form cm. (Due to an artifact of the graphing software, the 0 frequency appears after the array at phantom position N = 128.)
23.10.4
The QFT is applied to the conceptually semi-collapsed |x̄₀⟩ⁿ at the output of the oracle's A register. The linear QFT passes through the Σ:

$$QFT^{(N)}\left(\frac{1}{\sqrt{\widetilde m}}\sum_{j=0}^{\widetilde m-1}|x_0+ja\rangle^{(n)}\right) \;=\; \frac{1}{\sqrt{\widetilde m}}\sum_{j=0}^{\widetilde m-1}QFT^{(N)}\,|x_0+ja\rangle^{(n)}.$$

Each summand is

$$QFT^{(N)}\,|x_0+ja\rangle^{(n)} \;=\; \frac{1}{\sqrt N}\sum_{y=0}^{N-1}\omega^{(x_0+ja)y}\,|y\rangle^{(n)} \;=\; \frac{1}{\sqrt N}\sum_{y=0}^{N-1}\omega^{x_0y}\,\omega^{jay}\,|y\rangle^{(n)},$$

so

$$QFT^{(N)}\,|\overline{x_0}\rangle^{(n)}
\;=\; \frac{1}{\sqrt{\widetilde m N}}\sum_{y=0}^{N-1}\omega^{x_0y}\left(\sum_{j=0}^{\widetilde m-1}\omega^{jay}\right)|y\rangle^{(n)}.$$

In this expression, the normalizing factor 1/√(m̃N) is precise. That's in contrast to the pre-collapsed state, in which we had an approximate factor outside the full sum. The B register measurement picked out one specific x₀, which had a definite m̃ associated with it. Whether it was m or m + 1 doesn't matter. It is one of the two, and that value is used throughout this expression.

The next several sections explore what the probabilities say we will see when we measure this state. And while we analyzed it under the assumption of a prior B measurement, the upper (A) channel measurement won't care about that conceptual measurement, as we'll see. We continue the analysis as if we had measured the B channel.
23.10.5
This general case, which I scared you into thinking would be a mathematical horror story, has been a relative cakewalk so far. About all we had to do was replace the firm m with the slippery m̃, and everything went through without incident. That's about to change.

In the easy case, we were able to make the majority of the terms in our sum vanish (all but 1-in-m). Let's review how we did that. We noted that ω ≡ ω_N is the primitive Nth root, so

$$\omega^N \;=\; 1\,.$$

Then we replaced N with ma, to get

$$\omega^{ma} \;=\; 1\,,$$

and realized that this implied that ω^a was a primitive mth root of unity. From there we were able to get massive cancellation due to the facts we developed about sums of roots-of-unity.

The problem, now, is that we cannot replace N with ma. We have an m̃, but even resolving that to m or m + 1 won't work, because neither one divides N evenly (by the general-case hypothesis). So we'll never be able to manufacture an mth root of unity at this point, and cannot watch those big sums dissolve before our eyes. So sad.

We can still get what we need, though, and have fun with math, so let's rise to the challenge.

As with the easy case, our job is to analyze the final (post-QFT) A register superposition. While none of the terms will politely disappear the way they did in the easy case, we will find that certain y states will be much more likely than others, and this will be our savior.

Computing the final measurement probabilities will require the following five steps:

1. identify (without proof) a special set of a elements, C = {y_c}_{c=0}^{a-1}, of high measurement likelihood,

2. prove that the values in C = {y_c}_{c=0}^{a-1} have high measurement likelihood,

3. associate {y_c}_{c=0}^{a-1} with {c/a}_{c=0}^{a-1},

4. observe that a y_c associated with c coprime-to-a will be measured with probability bounded below by a constant, and

5. measure such a y_c with arbitrarily high confidence in constant time complexity.
23.10.6
In this step, we will merely describe the subset of y that we want to measure. In the next step, we'll provide the proof.

In the easy case we measured y which had the special form

$$y \;=\; cm\,, \qquad c = 0, 1, 2, \ldots, a-1\,,$$

with 100% certainty in a single measurement. From there we tested whether the c was coprime to a (which it was with high probability), and so on. This time we can't be 100% sure of anything, even after post-processing with the QFT, but that's normal for quantum algorithms; we often have to work the numbers and be satisfied to get what we want in constant or polynomial time. I claim that in the general case we will measure an equally small subset of y, again a in all, that we label

$$y \;=\; y_c\,, \qquad c = 0, 1, 2, \ldots, a-1\,.$$

To define the y_c, divide the frequency axis into a half-open intervals of width a, centered on the multiples of N:

$$\left[\,0-\frac{a}{2},\;0+\frac{a}{2}\,\right), \quad \left[\,N-\frac{a}{2},\;N+\frac{a}{2}\,\right), \quad \left[\,2N-\frac{a}{2},\;2N+\frac{a}{2}\,\right), \quad \ldots, \quad \left[\,(a-1)N-\frac{a}{2},\;(a-1)N+\frac{a}{2}\,\right).$$

Each interval contains exactly one integral multiple, y·a, of a in it. We'll label the multiplier that gets us into the cth interval y_c. (y₀ is easily seen to be 0.)

$$y_0\,a \;\in\; \left[\,-\frac{a}{2},\;+\frac{a}{2}\,\right), \qquad \ldots, \qquad y_c\,a \;\in\; \left[\,cN-\frac{a}{2},\;cN+\frac{a}{2}\,\right), \qquad \ldots, \qquad y_{a-1}\,a \;\in\; \left[\,(a-1)N-\frac{a}{2},\;(a-1)N+\frac{a}{2}\,\right).$$
For a concrete example, take a function with period a = 10 on the same domain of size N = 128 (so m = ⌊128/10⌋ = 12), and suppose the B register measurement corresponds to x₀ = 3. Since 3 + 12 · 10 = 123 < 128, this x₀ has m̃ = 13 members in its orbit, and the full superposition

$$\frac{1}{\sqrt{128}}\sum_{x=0}^{127}|x\rangle^{(7)}\,|f(x)\rangle^{(r)}$$

collapses to an A register vector whose 128 coordinates are
$$\bigl(\,0,\;0,\;0,\;.27735,\;0,\;\ldots\,,\;0,\;.27735,\;0,\;0,\;0,\;0\,\bigr)^T,$$

with .27735 ≈ 1/√13 in the thirteen positions x ≡ 3 (mod 10), i.e., x = 3, 13, 23, ..., 123, and 0 elsewhere.
We are interested in learning f's period and, like the easy case, the way to get at it is by looking at this state vector's spectrum, so we take the QFT. Now, unlike the easy case, this vector's QFT will not create lots of 0 amplitudes; generally all N = 128 of them will be non-zero. That's because the resulting sum

$$QFT^{(128)}\,|\overline 3\rangle^{(7)} \;=\; \frac{1}{\sqrt{\widetilde m N}}\sum_{y=0}^{N-1}\omega^{3y}\left(\sum_{j=0}^{\widetilde m-1}\omega^{jay}\right)|y\rangle^{(7)}$$

did not admit any cancellations or simplification. Instead, the claim, which we will prove in the next section, is that only a = 10 of them will be likely: the special {y_c}, for c = 0, 1, 2, ..., 9, that we described in our last analysis. Let's put our money where our mouth is and at least show this to be true for the one function under consideration.

We take three y_c as examples: y₄, y₅ and y₆. We'll do this in two stages. First we identify the three y_c values. Next, we graph the probabilities of QFT^{(128)}|3̄⟩⁷ around those three values to see how they compare with nearby y values.
Stage 1. Compute y₄, y₅ and y₆

c = 4: The center of the interval is 4N = 4 · 128 = 512. We seek y₄ such that

$$10\,y_4 \in [\,507,\;517\,) \quad\Longrightarrow\quad y_4 = 51\,, \qquad\text{since}\quad 10\cdot 51 = 510 \in [\,507,\;517\,)\,.$$

c = 5: The center of the interval is 5N = 5 · 128 = 640. We seek y₅ such that

$$10\,y_5 \in [\,635,\;645\,) \quad\Longrightarrow\quad y_5 = 64\,, \qquad\text{since}\quad 10\cdot 64 = 640 \in [\,635,\;645\,)\,.$$

c = 6: The center of the interval is 6N = 6 · 128 = 768. We seek y₆ such that

$$10\,y_6 \in [\,763,\;773\,) \quad\Longrightarrow\quad y_6 = 77\,, \qquad\text{since}\quad 10\cdot 77 = 770 \in [\,763,\;773\,)\,.$$
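In general y_c is just cN/a rounded to the nearest integer, as this small check of the Stage-1 arithmetic shows (my own sketch, plain Python):

    # Check (illustrative): y_c is the unique integer with a*y_c inside
    # [cN - a/2, cN + a/2), i.e. y_c = round(cN / a).
    N, a = 128, 10
    for c in range(a):
        y_c = round(c * N / a)
        assert c * N - a / 2 <= a * y_c < c * N + a / 2
        if c in (4, 5, 6):
            print(c, y_c)                    # 4 51, 5 64, 6 77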
681
2
7
(128)
Stage 2. Look at the Graph of
QFT
|3 i
Around These Three y
Here is a portion of the graph of the QFT s absolute value squared showing
the probabilities of measuring over 35 different y value in the frequency domain. It
exhibits in a dramatic way how much more likely the yc are to be detected than
are the non-yc values. (See Figure 23.24.) Even though the non-yc have non-zero
(mod N ) .
23.10.7
We gave a convincing argument that the a distinct frequency domain values {y_c} constructed in the last section do produce highly likely measurement results, but we didn't even try to prove it. It's time to do that now. This is the messiest math of the lecture. I'll try to make it clear by offering pictures and gradual steps.

When we last checked, the (conceptual) measurement/collapse of the B register to state |f(x₀)⟩ left the post-QFT A register in the state

$$\frac{1}{\sqrt{\widetilde m N}}\sum_{y=0}^{N-1}\omega^{x_0y}\left(\sum_{j=0}^{\widetilde m-1}\omega^{jay}\right)|y\rangle^{(n)}.$$

The probability of measuring the state |y⟩ is the amplitude's magnitude squared; since |ω^{x₀y}| = 1,

$$P\bigl(\text{measurement yields } y\bigr) \;=\; \frac{1}{\widetilde m N}\left|\sum_{j=0}^{\widetilde m-1}\omega^{jay}\right|^2.$$

Letting

$$\rho \;\equiv\; \omega^{ay}\,,$$
the inner sum is geometric,

$$\sum_{j=0}^{\widetilde m-1}\rho^{\,j} \;=\; 1 + \rho + \rho^2 + \cdots + \rho^{\widetilde m-1} \;=\; \frac{\rho^{\widetilde m}-1}{\rho-1}\,.$$

Writing ρ = e^{iθ_y} with

$$\theta_y \;\equiv\; \frac{2\pi a y}{N}\,,$$

this becomes

$$\sum_{j=0}^{\widetilde m-1}\rho^{\,j} \;=\; \frac{e^{i\theta_y\widetilde m}-1}{e^{i\theta_y}-1}\,,
\qquad\text{so}\qquad
P\bigl(\text{measurement yields } y\bigr) \;=\; \frac{1}{\widetilde m N}\left|\frac{e^{i\theta_y\widetilde m}-1}{e^{i\theta_y}-1}\right|^2.$$

The next several screens are filled with the math to estimate the magnitude of the fraction

$$\left|\frac{e^{i\theta_y\widetilde m}-1}{e^{i\theta_y}-1}\right|.$$

We will do it by bounding numerator and denominator, separately. It's a protracted side-trip because I go through each step slowly, so don't be intimidated by the length of the derivation.
A General Bound for ei 1
It will help to bracket the expression:
?
i
e 1
An Upper Bound for ei 1
In the complex plane, ei 1 is the length of the chord from 1 to ei , and this is
always the arc length from 1, counterclockwise, to ei , all on the unit circle. But
the arc length is, by definition, the (absolute value of the) angle, , itself, so:
i
e 1 || .
684
1 (cos + i sin ) .
2
2
sin
cos = cos
2
2
sin = 2 cos
sin
2
2
So
i
1 e
=
=
2
1
cos
sin
2
2
+
i 2 cos
sin
.
2
2
2
2
1 cos
+ sin
2
2
i 2 cos
sin
2
2
2
i 2 cos
sin
2 sin
2
2
2
2 sin
sin
i cos
2
2
2
2
Noting that
2
sin
i
cos
= 1,
2
2
we have shown that
i
e 1
2 sin
.
2
685
2
This gives us a lower bound for ei 1:
2 ||
i
e 1 ,
for [, ] .
i
e 1
|| ,
for [, ] .
686
So, lets first convert the exponentials in the numerator and denominator using the
above relation between those special {yc } (which we hope to measure with high probability) and their corresponding mod N equivalents, the {b
yc }, close to the origin.
ei yc
= ei (2ayc /N )
= ei (2byc /N )
= ei (2 [ ybc +cN ]/ N )
= ei (2byc /N ) ei 2 c
= ei ybc /a
This allows us to rewrite the absolute value of the amplitudes we wish to estimate as
iy m
i m/a
e e 1
ybc e 1
= e
eiy 1
eiybc /a 1 .
We want a lower bound for this magnitude. This way we can see that the likelihood
of measuring these relatively few ys is high. To that end, we get
a lower bound for the numerator, and
an upper bound for the denominator.
Upper Bound for Denominator
The denominator is easy. We derived an upper bound for all angles, , so:
i /a
e ybc 1 ybc .
a
Lower Bound for Numerator
The numerator is
i m/a
e ybc e 1
i (2by / N ) m
e
c
e
1 .
Remember that m
e is sometimes m and sometimes m + 1.
f is m
Sub-Case 1: m
When m
e is m, we have things easy: 2mb
yc / N (, ) because
a
a
ybc <
,
2
2
or
am
2m
am
ybc . <
.
N
N
N
And, since
am
< 1,
N
687
we get
<
2m
ybc . < .
N
2 |2mb
yc / N |
.
2ay
,
N
we write it as
i 2my / N
c
e
1
2 m
y
b
c
a
.
Combining the bounds for the numerator and denominator (in the case where m
e = m),
we end up with
m ,
i m/a
e ybc e 1
ybc
2 a ybc
= 2m
eiybc /a 1
a
f is m + 1 (Deferred)
Sub-Case 2: m
That only worked for m
e = m; the bounds argument we presented wont hold up when
m
e = m+1, but well deal with that kink in a moment. Lets pretend the above bound
works for all m,
e both m and m + 1, and finish computing the probabilities under this
white lie. Then well come back and repair the argument to also include m
e = m + 1.
f = m Bounds
Simulating the Probabilities Using the m
If the above bounds worked for all m,
e we could show that this leads to a desired
lower bound for our probabilities of getting a desired y C in constant time using
the following argument.
For any y, we said
P (measurement yields y)
2
e
1
1 eiy m
eiy 1 ,
mN
e
but for y = yc , one of the (a) special ys that lie in the neighborhoods of the cN s,
we can substitute our new-found lower bound for the magnitude of the fraction.
(Remember, we are allowing that the bound holds for all m,
e even though we only
proved it for m
e = m), so
P (measurement yields yc )
688
1 4m
e2
mN
e
2
m 4
N 2
In this hard case, allowing a|N , we defined m to be the unique integer satisfying
ma N < (m + 1)a, or quoting only the latter inequality,
(m + 1)a
>
N.
Its now time to harvest that weak additional assumption requiring at least two
periods of a to fit into M ,
a
<
M
,
2
<
N
.
2
>
1
.
2
[Exercise. Do it.]
We can plug that result into our probability estimate to get
P (measurement yields one of the yc )
>
2
.
2
1,
0 as m
4
,
2
Our last lower bound for the p of success, 2/ 2 > 0, was independent of M (or
N or a), so by repeating the random measurement a fixed number, T , times, we can
assure we measure one of those yc with any desired level of confidence. This follows
from the CTC theorem for looping algorithms (end of probability lesson), but for a
derivation that does not rely on any theorems, compute directly,
P (we do not get a yc after T measurements)
=
T
^
!
measurement Tk yields a yc
k=1
T
Y
P measurement Tk yields a yc
k=1
T
Y
1 P measurement Tk yields a yc
k=1
T
Y
k=1
2
1 2
The last product can be made arbitrarily small by making T large enough, independent of N , M , a, etc. This would prove the claim of Step 2 if we could only use the
bound we got for m
e = m. But we cant. So we must soldier on ...
f = m + 1 Bounds
Repeating the Estimates when m
We now repair the convenient-but-incorrect assumption that m
e is always m. To do
so, lets repeat the estimates, but do so for m
e = m + 1. This is a little harder
sub-case. When were done, well combine both cases.
Remember where m
e came from, and why it could be either m or m + 1. In our
general (hard) case, a does not divide N evenly; the first few x R = [0, a 1] will
generate m + 1 mod-a relatives within [0, N 1] that map to the same f (x), while
the last several x R = [0, a 1] will only produce m such mod-a relatives within
[0, N 1]. m
e represented however many mod-a relatives of x fit into the [0, N 1]
interval: m for some x, and m + 1 for others.
We retrace our steps.
The probability of measuring the state |yi is the amplitudes magnitude squared,
P (measurement yields y)
690
2
m1
e
X
1
jay
mN
e
j=0
2
e
1
1 eiy m
eiy 1 .
mN
e
So far, were okay; we had not yet made any assumption about the particular choice
of m.
e To bound this probability, we went on to get an estimate for the fraction
i m/a
e ybc e 1
eiybc /a 1 .
The denominators upper bound worked for any , so no change needed there. But
the numerators lower bound has to be recomputed, this time, under the harsher
assumption that m
e = m + 1.
Earlier, we showed that 2mb
yc / N was confined to the interval (, ), which
gave us our desired result. Now, however, we replace m with m + 1, and well see
that 2(m + 1)b
yc / N wont be restricted to (, ). What, exactly, are its limits?
Start, as before,
a
a
ybc <
,
2
2
a(m + 1)
2(m + 1)
a(m + 1)
ybc <
.
N
N
N
By our working assumption, a/N < 2 (which continues to reap benefits) we can assert
that
a(m + 1)
N
am
a
+
N
N
<
1 +
1
,
2
2
But now our = 2b
yc (m + 1)/N lives in the enlarged interval [(3/2), (3/2)],
so that general result no longer applies. Sigh. We have to go back and get new
general result for this larger interval. It turns out that old bound merely needs to be
multiplied by a constant:
2 ||
K
2 sin
,
2
3
3
for , ,
2
2
2 sin 3
4
3
691
.4714 .
Figure 23.28: |sin(x/2)| lies above |Kx/| in the interval ( 1.5, 1.5 )
Again, this can be done using calculus and solving K/ = sin (/2) for K. Visually,
you we see where the graphs of the sine and the line intersect, which confirms that
assertion. Summarizing the general result in the expanded interval,
i
3
3
e 1 = 2 sin 2K ||
for , .
2
2
2
This gives us the actual bound we seek in the case m
e = m + 1, namely
i 2(m+1)by / N
2K |2(m + 1)b
yc / N |
c
e
1
.
Combining the bounds for the numerator and denominator (in the case where m
e =
m + 1),
m+1 ,
i m/a
e ybc e 1
ybc
2K
ybc
a
eiybc /a 1
a
2K(m + 1)
2K m
e
.
f = m and
Finishing the Probability Estimates by Combining Both Cases m
f =m+1
m
Remember that when m
e = m, we had the stronger bound:
i m/a
e ybc e 1
2m
2m
e
=
,
eiybc /a 1
692
so we can use the new, weaker, bound to cover both cases. For all m,
e both m and
m + 1, we have
i m/a
e ybc e 1
2K m
e
,
eiybc /a 1
for
K
2 sin 3
4
3
.4714 .
e2
1 4K 2 m
mN
e
2
m 4K 2
.
N 2
am 4K 2
N 2
1 4K 2
2 2
.04503 .
>
2K 2
2
m
,
N
2
.
2
>
>
4
,
2
0 as m .
Putting these two observations together, we conclude that when a << N , the lower
bound for any measurement is 4/ 2 . (any means it doesnt matter whether
the state to which the second register collapsed, |f (x0 )i, is associated with an x0 for
which there were m or m + 1 mod-a equivalents in [0, N 1]).
693
So we have both a hard bound assuming worst case scenarios (the period, a, cycles
no more than twice in M ) and the more likely scenario, a << M . Symbolically,
P (measurement yields one of the yc )
.40528,
typically
In the worst case, the smaller lower bound doesnt change a thing; its still a constant
probability boundeed away from zero, independent of N, M, a, etc., and it still gives us
a constant time, T , for detecting one of our special yc with arbitrarily high confidence.
This, again, follows form the CTC Theorem for looping algorithms, or you can simply
apply probability theory directly.
In practice, a << M , so we can use .40528 as our constant p of success bounded
away from 0. If we are satisfied with an = 106 of failure, the CTC theorem tells
us that the number of passes of our circuit would be
log (106 )
6
T =
+ 1 =
+ 1 = 26 + 1 = 27 .
log (.59472)
.2256875
If we sampled y 27 times our chances of not measuring at least one yc is less than of
one in a million.
This completes the proof of the fact that we will measure one of the a yc in constant
time. Our next task is to demonstrate that by measuring relatively few of these yc we
will be able to determine the period. This will be broken into small, bite-sized steps.
23.10.8
a1
STEP III: Associate {yc }a1
c=0 with {c/a}c=0
We now know that well measure one of those yc with very good probability if we
sample enough times (O(1) complexity). But, whats our real goal here? Wed like to
get back to the results of the easy case where we found a number c that was coprime
to a in constant time and used that to compute a. In the general case, however, c
is merely an index of the yc , not a multiple cm, of m. What can we hope to know
about the c which just indexes the set {yc }? Youd be surprised.
In this step, we demonstrate that each of these special, likely-measured yc values is
bound tightly to the fraction c/a in a special way: yc /N will turn out to be extremely
(and uniquely) close to c/a. This, in itself, should feel a little like magic: somehow
the index of the likely-measured set of ys shows up in the numerator of a fraction
that is close to yc /N . Lets pull back the curtain.
Do you remember those (relatively small) half-open intervals of width a around
694
the points cN ,
h
h
a
a
a
a
, +
,
N , N+
,
2
2 h
2
2
a
a
cN , cN +
2h
2
a
a
(a 1)N , (a 1)N +
,
2
2
yc a
<
cN
c
1
a
2N
yc
N
<
c
1
+
.
a
2N
cN
a
,
2
then divide by N ,
1
.
2N
< M 2 N,
695
1
.
2M 2
=
q a, k c and l c + 1, and you get the aforementioned
c/a (c + 1)/a.
p
q
kq lp
pq
kq lp
M2
1
.
M2
QED
This lemma tells us that c/a is not only the best fractional approximation to yc with
denominator a, but its best among all fractions having denominators M . For
letting n/d be any fraction with denominator d M . The lemma says
c
n
1
,
a
d
M2
which places n/d squarely outside the 1/ (2M 2 ) neighborhood that contains both c/a
and yc /N .
696
Conclusion: Of all fractions n/d with denominator d M , c/a is the only one
that lies in the neighborhood of radius 1/ (2M 2 ) around yc /N . Thus yc /N strongly
selects c/a and vice versa.
[Interesting Observation. We showed that
yc
N
is uniquely close to
c
.
a
is uniquely close to
c f req .
Our math doesnt require that we recognize this fact, but it does provide a nice
parallel with the easy case, in which our measured {yc } were exact multiples, {cm},
of the true integer frequency, f req = m.]
23.10.9
697
nk
dk
= x.
nk0
dk0
nk
dk
K
(the convergents for x) .
k=0
7. When x = p/q is a rational number, CFA will complete in O(log3 q). (Sharper
bounds exist, but this is enough for our purposes, and is easy to explain.)
Procedure for Using CFA to Produce c/a
We apply CFA to x = yc /N and = 1/(2M 2 ).
Claim. CFA will produce and return the unique c/a within 1/(2M 2 ) of yc /N .
Proof. a < M , so
1
2M 2
<
698
1
,
2a2
a
N
<
1
,
2M 2
c
yc
a
N
<
1
.
2a2
so
As this is the hypothesis of bullet 6, c/a must appear among the convergents of
yc /N . Since it is within 1/(2M 2 ) of yc /N , we know that CFA will terminate when it
reaches c/a, if not before.
We now show that CFA cannot terminate before its loop produces c/a. If CFA
returned a convergent nk /dk that preceded c/a, we would have
1
n
k
x
dk
2M 2
by bullet 4. But since the dk are strictly increasing (bullet 5), and we are saying
that the algorithm terminated before getting to c/a, then
dk < a .
That would give us a second fraction, nk /dk with denominator dk < M within
1/(2M 2 ) of yc /N , a title uniquely held by c/a (from Step III). Therefore, when we
give CFA the inputs x = yc /N and = 1/(2M 2 ), it must produce and return c/a.
QED
CFA is O(log3 M )
By Bullet 7 CFA has time complexity O(log3 N ), which we have already established
to be equivalent to O(log3 M ).
23.10.10
In Steps I and II we proved that the likelihood of measuring one of the special
{yc }a1
c=0 was always bounded below by a constant, independent of N . (The constant
may depend on how many periods, a, fit into M , but for RSA encryption-breaking,
we will see that the special case we require will assure us at least two periods, and
normally, many more).
In the easy case, the {yc } were all equally likely, and we also knew that there were
exactly a/2 coprimes < a. We combined those two facts to put the proof to bed.
Here, neither condition holds, so we have to work harder (which is why this isnt
called the easy case).
699
What we can say is that, from Steps III and IV, each of the measured {yc }s
leads in constant time, with any predetermined confidence, to a partner fraction
in {c/a} with the help of some O(log3 M ) logic provided by CFA.
In this step, we demonstrate that not only do we measure some yc (constant time)
and get a partner, {c/a} (O(log3 M )), but that we can even expect to get a special
subset B {yc } {c/a} in constant time, namely those yc corresponding to c a
(i.e., c coprime to a). This will enable us to extract the period a from c/a.
We do it all in three steps, the first two of which correspond to the missing
conditions we enjoyed in the easy case:
Stated loosely, in our quantum circuit, the difference between measuring the
least likely yc and the most likely yc is a fixed ratio independent of a. (This
corresponds to the equi-probabilities of the easy case.)
The probability of selecting a number, c, which is coprime to a, at random,
from the numbers between 0 and a 1 is constant, independent of a. (This
corresponds to the 50% likelihood of the easy case.)
in
We combine the first two bullets to show that the probability of getting a c a
any single pass of the circuit is bounded away from 0 by a constant independent
of the size of the algorithm. Thats the requirement of the CTC theorem for
looping algorithms and therefore guarantees we obtain such a c with small error
tolerance in constant time, i.e., after a fixed number of loop passes independent
of N .
Proof of First Bullet
This can be demonstrated by retooling an analysis analysis we already did. We
established that the amplitude squared of a general |yi in our post-QFT A register
was
2
e
1
1 eiy m
P (measurement yields y) =
eiy 1 .
mN
e
There are clearly two measurement-dependent parameters that will affect this probability: m,
e which is either m or m + 1, depending on the collapsed state of the second
register, and y , which depends on the measured y. When a << M the probabilities
are very close to being uniform, but to avoid hand-waving, lets go with our worst-case
scenario the assumption that we added to Shors hypothesis in order to get hard
estimates for all our bounds: M > 2a, but not necessarily any larger.
When we computed our lower bound on the probability of getting a yc , we used an
inequality that contained an angle under the assumption that was restricted to
an interval wider than [, ], spefically [(3/2), (3/2)]. The inequality we found
applicable was
700
2 ||
K
2 sin
,
2
3
3
for , ,
2
2
2 sin 3
4
3
.4714 .
The key to our current predicament is to get an upper bound on measuring any (even
the most likely) yc . This will lead us to get an inequality for general restricted to a
narrower range, namely, [/2, /2]. This can be easily determined using the same
graphing or calculus techniques, and is
Figure 23.31: |sin(x/2)| lies above |Lx/| in the interval ( .4714, .4714 )
2 ||
L
2 sin
,
2
h i
for ,
,
2 2
2 sin
1.4142 .
=
.
mN
e eiy 1
mN
e eiybc /a 1
We want an upper bound for the magnitude of the fractional term. To that end, we
get an upper bound for the numerator and a lower bound for the denominator.
701
i (2by / N )
c
e
1 ,
but 2b
yc / N (/2, /2) because
a
a
ybc <
2
2
or
a
N
2
a
ybc . < .
N
N
And since
a
< 1/2 ,
N
we get
<
ybc . <
.
2
N
2
This allows us to invoke the latest lower bound just mentioned for [/2, /2]:
i 2by / N
e c
1
2 |2b
yc / N |
, L = 2 sin 1.4142 .
2ay
N
we can write
i 2y / N
e c
1
702
2 |ybc /a|
.
2m
e
.
L
Finally,
P (measurement yields yc )
1 4L2 m
e2
mN
e
2
m + 1 4L2
.
N
2
e2
1 4K 2 m
mN
e
2
m 4K 2
,
N 2
to get
P (least-likely yc )
P (most-likely yc )
mK 2
.
(m + 1)L2
Our assumption has been that a M/2 so m = (integer quotient) N/a is > 2 (usually
much greater). This, and the estimates for L 1.4142 and K .4714, result in a
ratio which is independent of a, M or N , between the probability of measuring the
least likely yc to that of measuring the most likely yc .
P (least-likely yc )
P (most-likely yc )
mK 2
(m + 1)L2
2K 2
.072 .
3L2
QED
This covers the bullet about the ratio of the least likely and the most likely yc .
Note-to-file. if a << M < N (i.e., m gets very large), as is often the case, Both
K and L will be close to 1 (review the derivations and previous notes-to-file). This
means all yc are roughly equi-probable. Also m/(m + 1) 1. Taken together, the
ratio of the least likely to most likely is approximately 1.
Summarizing, we have a hard minimum for the worst case scenario (only two
intervals of size a fit into [0, M )) as well as an expected minimum for the more
realistic one (a << M ). That is,
P (most-likely yc )
1 ,
typically
703
a)
P (c
=
P
2c 2a 3c 3a 5c 5a . . .
...
a)
P (c
pk c pk a
P
finite
^
...
!
p c p a
P p c p a
p prime
p prime
Since (pc pa) is true for p > a or p > c, the probabilities for those higher primes,
p, are all 1, which is why the product is finite.
Next, we compute these individual probabilities. For a fixed prime, p, the probability that it divides an arbitrary non-negative c chosen randomly from all non-negative
integers is actually independent of c,
1
P p c = .
p
[Exercise. Justify this.] This is also true for pa, so,
P
p c p a
1
p2
p c p a
and
1
1
.
p2
704
us,
1
P p c < ,
p
1
p c p a
<
p2
p c p a
P
P
and
1
1
.
p2
Finally, we plug this result back into the full product, to get
a) =
P (c
finite
Y
P (p|c p|a)
p prime
p prime
1
p2
= (2),
where (s) is the most famous function you never heard of, the Riemann zeta function,
defined by
Y
(s) =
p prime
1
ps
in Euler product form. The value of (2) has to be handed-off to the mathematical
annals, and well simply quote the result,
(2)
.607 .
a)
P (c
(2)
.607. QED
That proves our second bullet, which is all we will need, but notice what it implies.
Since
a)
P ( c
<
.393 ,
705
{yc }a1
c=0
a}
{ yb yb C and b
(Note: B C.)
|B|
|C|
P (yc )
P (C)
P (B)
P (B|C)
P we measure some y B
given that
the measured y is known to be C
We would like a lower bound on the probability of measuring a yc which also has the
property that its associated c is coprime to a. In symbols, we would like to show:
Claim: P (B)
Proof:
P (B)
=
=
P (B|C) P (C)
,
!
X
X
P (yb )
P (yc ) P (C)
cC
bB
Let
yBmin y B with P (yBmin ) minimum over B
yBmax y B with P (yBmax ) maximum over B
yCmin , yCmax same, except over all of C
If there is more than one y that produce minimum or maximum probabilities, choose
706
!
X
P (yb )
P (yc )
cC
bB
|B| P (yBmin )
|C| P (yCmax )
|B| P (yCmin )
|C| P (yCmax )
q .072 ,
so
P (B)
q .072 P (C) .
From the proof of the second bullet of this step, q > .607, and from Step II P (C) >
.04503, so
P (B)
This is independent of a, M, N, etc. and allows us to apply the CTC theorem for
looping algorithms to aver that after
a fixed number, T , of applications of the quantum
circuit we will produce a yc with c a with any desired error tolerance. Well compute
T in a moment.
Remember that we used a worst case bounds above. As we demonstrated, normally the ratio .072 is very close to 1, and P (C) > .40528, so we can expect a better
constant lower bound:
P (B)
Conclusion of Step V
After an adequate number of measurements (independent of a, M ), which produce
yc1 , yc2, . . . , ycT , we can expect at least one of the yck = yc to correspond to c/a,
with c a with high probability.
Examples that Use Different Assumptions about a
How many passes, T , do we need to get an error tolerance of, say = 106 (one in
a million)? It depends on the number of times our period, a, fits into the interval
[0, N ). Under the worst case assumption that we formally required only two we
would need a much larger number of whacks as the circuit than in a typical problem
that fits hundreds of periods in the interval. Lets see the difference.
The p of success bounded away from 0 for a single pass of our algorithms loop
in the CTC theorem, along with the error tolerance , gives us the number of required
passes, the formula provided by the theorem,
log ()
T =
+ 1.
log (1 p)
707
Worst Case (P (B) .002): We solved this near the end of the probability
lesson and we found
log (106 )
T =
+ 1 = 6901 ,
log (.998)
or, more briefly and conservatively, 7000 loop passes.
Typical Case (P (B) .266):
This was also an example in the earlier
chapter,
log (106 )
+ 1 = 45 ,
T =
log (.734)
or, rounding up, 50 loop passes.
Its important to remember that the datas actual probability doesnt care
about us. We can instruct our algorithm to cycle 7000 times, but if the data
determined that only 15 loop passes were needed to find the desired c/a, then it
would return with a successful c/a after 15 passes. Our hyper-conservative estimate
costs us nothing.
On the other hand, if we are worried that a < N/2 is too risky, and want to allow
for only one period of a fitting into M or N , the math works. For example, we could
require merely a < .999N and still get constant time bounds for all probabilities. Just
repeat the analysis replacing 1/2 with .999 to find the more conservative bounds. You
would still find that proofs all worked, albeit with P (B) > a constant much smaller
than .002.
23.10.11
23.10.12
Other than a discussion of Euclids algorithm for computing the greatest common
divisor and some facts about continued fractions (both covered in this volume) you
have completed a rather in-depth development of Shors periodicity algorithm for
quantum computers.
You studied the circuit, the algorithm and computational complexity for quantum
period-finding.
There remains the question of how quantum period-finding can be applied to RSA
encryption-breaking, which is a form of order-finding. Before we close out this first
course, Ill cover that as well.
710
Chapter 24
Euclidean Algorithm and
Continued Fractions
24.1
The Euclidean algorithm is a method for computing the greatest common divisor
of two integers. The technique was described (although probably not invented) by
the Greek mathematician Euclid in about 300 B.C. Two thousand years later it
continues to find application in many computational tasks, one of which is Shors
quantum period finding algorithm. In fact, we apply it twice in this context, once
directly, then again indirectly when we apply a second, centuries old technique called
continued fractions.
In this lesson, we will study both of these tools and compute their time complexities.
24.2
24.2.1
The Euclidean Algorithm (EA) takes two positive integers, P and Q, and returns the
largest integer that divides both P and Q evenly (i.e., without remainder). Its output
is called the greatest common divisor of P and Q and is written as a function of its
two input integers, gcd(P, Q). Of special note, we will learn that
EA(P, Q) O(log3 X), where X is the larger of the two integers passed to it
(although sharper/subtler bounds exist), and
EA will be used as a basis for our Continued Fractions Algorithm, CFA.
711
24.2.2
Long Division
A long division algorithm (LDA) for integers A and B, both > 0, produces A B in
the form of quotient, q and remainder, r satisfying
A
qB + r.
The big-O time complexity of LDA(A, B) in its simplest form is O(log2 X), where X
is the larger of {A, B}. If you research this, youll find it given as O(N 2 ), where N is
the number of digits in the larger of {A, B}, but that makes N = log10 X, and log10
has the same complexity as log2 .
24.2.3
Although the final result is independent of which number occupies the first parameter
position, we will assume P is first, as this will affect the intermediate calculations
which are used by CFA, coming up next.
I will describe the algorithm without proof it is well documented and very easyto-follow in all the short descriptions you will find online.
General Idea
To produce EA(P, Q), for P, Q > 0, we start by applying long division to P, Q.
LDA(P, Q) returns a quotient, q, and remainder, r, satisfying
P
qQ + r.
Notice that either r = 0, in which case Q|P and we are are done (gcd(P, Q) = Q),
or else we have two new integers to work with: Q and r, where, now, Q > r. We
re-apply long division, this time to the the inputs Q and r to get Q r, and again,
examine the new remainder (call it r0 ),
Q
q 0 r + r0 .
Like before, if r0 = 0 we are done (gcd(P, Q) = r), and if not, we keep going. This
continues until we get a remainder, re = 0, at which point the gcd is the integer
standing next to (being multiplied by) its corresponding quotient, qe, in the most
recent long division (just as Q was standing next to q in the initial division, or r
was standing next to q 0 in the second division).
Lets add some indexing to our general idea before we give the complete algorithm. Define
r0 P,
r1 Q,
712
and
q0
r2
q0 r1 + r2 .
q1 r2 + r3 .
24.2.4
The Algorithm
EA(P, Q):
Initialize
r0 = P
r1 = Q
Loop over k, starting from k = 0
Compute rk rk+1 using LDA(rk , rk+1 ) to compute qk and rk+2
rk
qk rk+1 + rk+2 .
until rk+2 = 0
Return gcd = rk+1
713
Example
W3 use EA to compute gcd(285, 126)
r0 = 285
r1 = 126
r0 = q0 r1 + r2
285 = 2 126 + 33
r2 6= 0, so compute r1 r2
r1 = q1 r2 + r3
126 = 3 33 + 27
r2 6= 0, so compute r2 r3
r2 = q2 r3 + r4
33 = 1 27 + 6
r3 6= 0, so compute r3 r4
r3 = q3 r4 + r5
27 = 4 6 + 3
r4 6= 0, so compute r4 r5
r4 = q4 r5 + r6
6 = 23 + 0
r5 = 0, so return gcd = r4
gcd = 3
24.2.5
Time Complexity of EA
qQ + r.
Notice that
r <
P
.
2
qk rk+1 + rk+2 ,
714
gives
rk
.
2
rk+2 <
In other words, every two divisions, we have reduced rk by half, forcing the evenindexed rs to become 0 in at most 2 log P iterations. Therefore, the number of steps
is O(2 log P ) = O(log P ). (Incidentally, the same argument also works for Q, since
Q = r1 , spawning the odd-indexed rs. So, whichever is smaller, P or Q, gives a tighter
bound on the number of steps. However, we dont need that level of subtlety.) We
have shown that the EA main loops complexity is O(log X), where X is the larger
(or either) of P , Q.
Next, we note that each loop iteration, calls the LDA(rk , rk+1 ), which is O(log2 rk ),
since rk > rk+1 . For all k, rk < X (larger of P and Q), so LDA(rk , rk+1 ) is also in
the more conservative class, O(log X).
Combining the last two observations, we conclude that the overall complexity of
EA(P, Q) is the product of complexities of loop, O(log X), and the LDA within the
loop, O(log2 X), X being the larger of P and Q. Symbolically,
EA(P, Q) O(log3 X),
X = larger of {P, Q}.
24.3
24.3.1
Continued Fractions
Although we wont deal with such dizzying entities directly, you need to see an example of at least one continued fraction so we can define the more useful derivative
(convergents) and present the CFA algorithm. A continued fraction is a nested construct (possibly infinite) of sums and quotients of integers in the form
1
a0 +
a1 +
a2 +
a3 +
.
1
a4 +
..
285
= 2 +
126
1
1
3 +
1 +
1
2
You might ask why we bother with a continued fraction of an (already) rational number, x. The special form of a continued fraction leads to important approximations
to x that are crucial in our proof of the quantum period-finding algorithm.
4 +
An Irrational Example
A famously simple irrational continued fraction is
x =
2 = 1 +
1
1
2 +
2 +
2 +
.
1
.
2 + ..
I will not prove any the above claims or most of what comes below. The derivations
are widely and clearly published in elementary number theory texts and short webpages on continued fractions. But I will organize the key facts that are important to
Shors Algorithm.
24.3.2
Before we abandon the ugly nested construct above, lets look at one easy way to get
the ak if x is rational. This will tell us something about the computational complexity
of our upcoming continued fractions algorithm (CFA).
716
P
,
Q
the {ak } in its continued fraction expansion are exactly the unique {qk }
of the Euclidean Algorithm, EA(P, Q), for finding the gcd(P, Q).
In other words, we already have an algorithm for expanding a rational number as a
continued fraction. Its called the Euclidean algorithm and we already presented it.
We just grab qk from EA and were done.
(Thats why I made a distinction between the first parameter, P and the second,
Q, and also why I labeled the individual qk in the EA, which we didnt seem to need
at the time.)
Time Complexity for Computing a Rational (Finite) Continued Fraction
Theorem. The time complexity for computing all the ak for a rational x
is O(log3 X), where X is the larger of {P, Q}.
Proof : Since the {ak } of continued fractions are just the {qk } of the EA, and
we have proved that the EA O(log3 X), where X is the larger of P and Q, the
conclusion follows.
QED
24.3.3
An iterative algorithm is typically used when programming CFA. We use the notation
bxc
greatest integer x.
The CF Method
Specify a termination condition (e.g., a maximum number of loop passes or a
continued fraction within a specified of the target x, etc.).
Loop over k starting at k = 0 and incrementing until we reach the maximum
number of loop passes or we break sooner due to a hit detected in the loop
body.
1. ak bxc
2. f rac x bxc
3. If f rac = 0, break from loop; weve found x.
4. x 1/f rac
Return the sequence {ak }.
717
24.3.4
a0 +
a1 +
...
ak1 +
.
718
.
1
ak
Example
For the rational x = 285/126, whose continued fraction and {ak } we computed earlier,
you can verify that the convergents are
n0
d0
2
,
1
n1
d1
7
,
3
n2
d2
9
,
4
n3
d3
43
,
19
n4
d4
95
.
42
and
Notice that the final convergent is our original x in reduced form. This is very important for our purposes. Figures 24.1 and 24.2 show two graphs of these convergents at
different zoom levels.
1. The convergents converge very fast every two convergents are much closer
together than the previous two, and
2. The convergents bounce back-and-forth around the target, x, alternating lessthan-x values and greater-than-x values.
Before making this precise, lets see if it holds for a second example.
Example
For the rational x = 11490/16384 the convergents are:
n0
d0
0
,
1
n1
d1
1
,
1
n2
d2
2
,
3
n3
d3
5
,
7
n4
d4
7
,
10
n5
d5
54
,
77
n6
d6
1897
,
2705
n7
d7
5745
=
8192
and
11490
.
16384
This time, well need more zoom levels because the later convergents are so close to
x they are impossible to see. Figures 24.3 through 24.7 show various stages of the
convergents.
720
721
24.3.5
d0 1
d 1 a1
Loop over k starting at k = 2 and iterating until k = K, the final index of the
sequence {ak } returned by the CF method.
1. nk ak nk1 + nk2
2. dk ak dk1 + dk2
Return the sequence {nk /dk }.
24.3.6
The convergents {nk /dk } have the following properties which are derived in most
beginning tutorials in very few steps.
1. For any real number, x, the convergents approach x,
nk
lim
= x.
x
dk
2. For rational x, the above limit is finite, i.e., there will be a K < , with
nK /dK = x exactly, and no more fractions are produced for k > K.
3. They alternate above and below x,
> x, if k is odd
nk
=
(assuming {nk /dk } =
6 x) .
dk
< x, if k is even
4. Each nk /dk (if not the final convergent which is exactly x) differs from x by no
more than 1/(dk dk+1 ),
n
1
k
x
.
dk
dk dk+1
722
5. For k > 0, nk /dk is the best approximation to x of all fractions with denominator
dk ,
n
k
x
x n , for all d dk .
dk
d
6. Consecutive convergents differ from each other by exactly 1/(dk dk+1 ),
nk1
nk
1
dk1 dk = dk dk+1 .
7. The denominators {dk } are strictly increasing and, if x is rational, are all the
denominator of x (whether or not x was given to us in reduced form).
8. When x = P/Q is a rational number, computation of the all convergents is
O(log3 X), where X is the larger of {P, Q}. (This follows from the fact that
the convergent algorithms above are based on EA as well as the details of the
loops used.)
There is one not-so-easy-to-prove fact that we will need. It can be found in An
Introduction to the Theory of Numbers by Hardy and Wright (Oxford U. Press) as
Theorem 184. The proof is rather involved.
If a fraction n/d differs from x by less than 1/(2d2 ) then n/d will appear in the
list of convergents for x. Symbolically:
If
n
x
d
<
1
2d2
then
n
d
24.3.7
nk0
dk 0
nk
dk
K
(the convergents for x) .
k=0
Our version of a convergent-generating algorithm, which I call CFA, will take as input
parameters a rational target x and requested degree of accuracy . CFA(x, ) will
return n/d, the first convergent (i.e., that with the smallest index k) to x within
of x. It simply wraps the previous algorithm into an envelope that returns a single
fraction (rather than all the convergents).
1. To our previous algorithm for generating the convergents, pass x along with the
terminating condition that it stop looping when it detects that |x nk /dk | .
723
2. Return nk /dk .
Thats all there is to it.
Depending on and x, CFA(x, ) either returns n/d = nK /dK = x, exactly, as its
final convergent or an -approximation n/d 6= x, but within of it.
724
Chapter 25
From Period-Finding to Factoring
25.1
25.2
1. M even, and
2. M = pk , k > 1, is a power of some prime.
Why the Two Cases Are Easy
The test M even? entails a simple examination of the least significant bit (0 or 1),
trivially fast. Meanwhile, there are easy classical methods that determine whether
M = pk for some prime p and produce such a p in the process (thus providing a
divisor of M ).
In fact, we can dispose of a larger class of M : those M for which M = q k , k > 1
for any integer q < M , prime or not, and produce q in the process all using classical
machinery. If we detected that case, it would provide a factor q, do so without
requiring Shors quantum circuit, and cover the more restrictive condition 2, in which
q is a prime.
So why does the second condition only seek to eliminate the case in which M is
a power of some prime p before embarking on our quantum algorithm rather than
using classical methods to test and bypass the larger class of M that are powers of
any integer, q? First, eliminating only those M that are powers of a single prime
is all that the quantum algorithm actually requires. So once we have disposed of
that possibility, we are authorized to move on to Shors quantum algorithm. Second,
knowing we can move on after confirming M 6= pk , for a p prime, gives us options.
We can ask the number theorists to provide a very fast answer to the question
is M = pk , p prime, and let the quantum algorithm scoop up the remaining
cases (which include M = q k , q not prime).
Alternatively, we can apply a fast classical method to search for a q (prime or
not) that satisfies M = q k , thus avoiding the quantum algorithm in a larger
class of M .
One of the above two paths may be faster than the other in any particular {hardware
+ software} implementation, so knowing that we can go either way gives us choices.
Now lets outline why either of the two tests
M = qk , k > 1
or
M = pk , k > 1, p prime
can be dispatched classically.
Classical Algorithm for Larger Class: M = q k ,
k>1
Any such power k would have to satisfy k < log2 M since p > 2 (weve eliminated M
even). Therefore, for every k < log2 M we compute the integral part
j k
k
M
q=
726
(something for which fast algorithms exist) and test whether q k = M . If it does, q is
our divisor, and we have covered the case M = q k , k > 1, for any integer q without
resorting to quantum computation. The time complexity is polynomial fast because
the outer loop has only log2 M passes (one for each k),
j k
k
even a slow, brute force method to compute q =
M has a polynomial
big-Oh, and
taking the power q k is also polynomial fast. (Moreover, the implementation of
previous bullet can be designed to absorb this computation, obviating it.)
[Exercise. Design an algorithm that implements these bullets and derive its
big-Oh.]
Classical Algorithm for Smaller Class: M = pk ,
k > 1, p Prime
But if one wanted to also know whether the produced q in the above process was
prime, one could use an algorithm like AKS (do a search) which has been shown to
be log polynomial, better than O(log 8 (#digits in M )) = O(log 8 (logM )), in M .
This approach was based on first finding a q with M = q k , then going on to
determine whether q was prime. Thats not efficient, and I presented it only because
the components are easy, off-the-shelf results that can be combined to prove the
classical solution is polynomial fast. In practice we would seek a solution that tests
whether M is a power of a prime directly, using some approach that was faster than
testing whether it is a power of a general integer, q.
Why We Eliminate the Two Cases
The reason we quickly dispose of these two cases is that the reduction of factoring
to period-finding, described next, will not work for either one. However, we now
understand why we can be comfortable assuming M is neither even nor a power of a
single prime and can proceed based on that supposition.
25.3
A Sufficient Condition
The next step is to change the the factoring problem into a proposition with which
we can work more easily.
Claim. We will obtain a q M if we can find an x with the property that
x2 = 1 (mod M ), for
x 6= 1 (mod M ) .
727
Proof
x2
= 1 (mod M )
x2 1 = 0 (mod M )
2
M (x 1)
M (x 1) (x + 1) .
That can only happen if M has a factor, p > 1, in common with one or both of (x1)
and (x + 1), i.e.,
pM and p(x 1) or
pM and p(x + 1) .
That factor, p, cannot be M , itself, for if it did, either
M (x 1), contrary to x 6= +1 (mod M ) or
M (x + 1), contrary to x 6= 1 (mod M ) ,
both, outlawed by the hypothesis of the claim. Define
(
gcd(M, x 1), if common factor p(x 1) or
q
gcd(M, x + 1), if common factor p(x + 1) .
Whichever of the above two cases is true (and we just proved at least one must be),
we have produced a q with q M .
QED
The time complexity of gcd(M, k), M k is shown in another lecture to be
O(log3 M ), so once we have x, getting q is fast.
Before we find x, we take a short diversion to describe something called order
f inding.
25.4
Pick a y at random
from ZM {0, 1} = {2, 3, 4, . . . , M 1}. Either y will be
coprime to M (y M ) or it wont.
If y M were done, because q gcd(M, y) will be our desired factor of
M (and we dont even need to look for an x). An O(M 3 ) application of GCD
will reveal this fact by either returning a q > 1 (were done) or result in 1 (we
continue).
728
If y M we go on to find the order of y, defined next and which leads to the
rest of the algorithm.
We therefore assume that y M , since if it were not we would have lucked upon the
first bullet and factored M that would be a third easy case we might encounter.
While it may not be obvious, finding the order of y in ZM will be the key. In
this section we define order and learn how we compute it with the help of Shors
quantum period-finding; the final section will explain how doing so factors M .
Order. The order of y ZM , also called the order of y (mod M ), is
the smallest positive integer, b > 1 such that
yb
(mod M ) .
(mod M ) .
For each pair, take k 0 to the the larger of the two, and write this last equality as
y k = y k+b
(mod M ),
b k 0 k > 0.
We just argued that there are infinitely many k > 1 (with potentially a different b > 0
for each k) for which the above holds. There must be a smallest b that satisfies this
among all the pairs. (Once you find one pair, take the k and b for that pair. Keep
looking for other pairs with different ks and smaller bs. You cant do this indefinitely,
since eventually youd reach b = 0. It doesnt matter how long this takes we only
need the existence of such a b, not to produce it, physically.) Assume this last equality
represents that smallest b > 0 for any k which makes it true. That means, there exists
a pair with
y k y k+b = 0
(mod M ),
Factoring,
yk 1 yb = 0
M y k (1 y b )
M (1 y b ) .
729
(mod M )
The last step holds because y k M , since we are working inside the major case in
which we were unlucky enough to pick a y M . Were done (proving that an order b
of y exists) because the final equality means
yb
(mod M ) .
Define
a
b + 1,
(mod M ),
a minimal .
(mod M ) ,
whenever
x x0 < a .
(Review the definition of b and its minimality to verify this extra condition.)
Well also be using the fact that the period, a, is less than M . Heres why we can
assume so. The order of y, b = a 1, has to divide M , because the order of every
element in a finite group divides evenly into the size of the group, M in this case (see
elementary group theory, if youd like to research it). So, either b = M or b M/2.
In the former case, a = M 1, and we dispose of that possibility instantly by testing
whether M 1 is the period of f (x) = y x (mod M ) by evaluating it for any x and
x + (M 1). That only leaves the case b M/2 a < M .
Enter Quantum Computing We have a ZM periodic function, f (x) = y x
(mod M ), with unknown period, a < M . This is the hypothesis of Shors quantum
algorithm which we have already proved can be applied in log3 M time. This is exactly
where we would use our quantum computer in the course of factoring M .
We have picked a y ZM {0, 1} at random, defined a function based on that
y, and found its period, a. That gave us the order of y in ZM . The next, and final,
step is to demonstrate how we use the order, a, to factor M .
25.5
and do so efficiently (in polynomial time), we will get our factor q of M . We did
manage to leverage quantum period-finding to efficiently get the order of a randomly
selected y ZM , so our job is to use that order, a, without using excessive further
computation, to factor M . There are the three cases to consider:
1. a is even, and y a/2 6= 1 (mod M ).
2. a is even, and y a/2 = 1 (mod M ).
3. a is odd.
Case 1: Even a, with y a/2 6= 1 (mod M )
Claim.
y a/2
x
satisfies our sufficient condition.
Proof
x2 = y a , where a = order of y
x2 = 1 (mod M ).
(mod M ), so
6=
(mod M ),
6=
+1
(mod M ),
6=
1 (mod M ). Our
and well have shown that this x satisfies our sufficient condition. Proceed by contradiction. What would happen if
x
+1
(mod M ) ?
y a/2
+1
(mod M ), i.e.,
y (a/2)+1
Then
(mod M ) .
If a = 2, then
y2
y
=
=
y
1
(mod M ), i.e.,
(mod M ),
731
ya
y (a/2)+1
(mod M ) .
That contradicts that a is the order of y (mod )M , because the order is the smallest
integer with that property, by construction.
QED
That dispatches the first case; we have found an x which satisfies the sufficient
condition needed to find a factor, q, of M .
Cases 2 and 3: Even a, with y a/2 = 1 (mod M ) or Odd a
I combine these two cases because, while they are both possible, we rely on results
from number theory (one being the Chinese Remainder Theorem) which tell us that
the probability of both cases, taken together is never more than .5, i.e.,
1
.
2
This result is independent of M . That means that if we repeatedly pick y at random,
T times, the chances that we are unlucky enough to get case 2 or case 3 in all T trials
is
!
T
T
Y
^
1
1
P
case 2 case 3
=
.
2
2T
k=1
k=1
P ( case 2 case 3 )
25.6
There is one final detail we have not addressed. Shors quantum period-finding was
only as fast as its weakest link. Since the algorithm is log3 M only counting the logic
exterior to the quantum oracle, Uf , then any f which has a larger growth rate would
erode Shors performance accordingly. In other words, it has relativized exponential
speed up. If it is to have absolute speed-up over the classical case we must show that
the oracle, itself, is polynomial time in M . Im happy to inform you that in the case
of the factoring problem, the function f (x) = y x (mod M ) is, indeed, log3 M .
732
25.7
y ZM and we also saw that a < M , so we only need to consider computing y x for
both y, x < M . Since M < M 2 <= N = 2n , we can express both x and y as a sum
of powers-of-2 with, at most, n = log N terms. Lets do that for x:
x
n1
X
xk 2k ,
k=0
yk
n1
Y
xk 2k
y xk 2
k=0
k
25.7.1
Complexity of Step 1
k
y xk 2
y2
xk
,
k1
Starting with the second element, y, each element in this array is the square of the
one before. Thats a total of n 2 multiplications (we get 1 and y for free). Thus, to
produce the entire array it costs O(log N ) multiplications, with each multiplication
733
(by above note) O(log2 N ). Thats a total complexity of O(log3 N ). This is done
once for each factor in the product
n1
Y
y2
xk
k=0
To complete the computation of each of the k factors, we raise one of our pre-computed
k
y 2 to the xk power. Thats is xk multiplications for each factor. Wait a minute xk
is a binary digit, either 0 or 1, so this is nothing other than a choice; for each n we
tag on an if-statement to finish off the computation for that factor. Therefore, the
computation of each factor remains O(log3 N ).
There are n factors to compute, so this tags on another log N magnitude to the
bunch, producing a final cost for step 1 of O(log4 N ). However, have not done the
big product yet . . . .
25.7.2
Complexity of Step 2
Each binary product in the big is O(log2 N ). The big has n 1 such products,
so the final product (after computing the factors) is O((n 1) log2 N ) = O(log3 N ).
Combining Both Results
The evaluation of all n factors, O(log4 N ), is computed in series with the final product, O(log3 N ), not nested, so the slower of the two, O(log4 N ), determines the full
complexity of the oracle. Note that this was a lazy and coarse computation, utilizing
simple multiplication algorithms and a straightforward build of the f (x) = y x (mod
M ), function, and we can certainly do a little better.
As we demonstrated when covering Shors algorithm, the relationship between M
and N (N/2 < M 2 N ) implies that this is the equal to O(log4 M )
25.7.3
735
List of Figures
1
22
23
1.1
32
1.2
34
1.3
36
1.4
conjugation as reflection . . . . . . . . . . . . . . . . . . . . . . . . .
37
1.5
38
1.6
40
1.7
43
1.8
46
1.9
47
2.1
A vector in R2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
53
2.2
Vector addition in R . . . . . . . . . . . . . . . . . . . . . . . . . . .
2
54
2.3
Scalar multiplication in R . . . . . . . . . . . . . . . . . . . . . . . .
55
2.4
Orthogonal vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . .
56
2.5
A vector in R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
58
2.6
61
2.7
64
2.8
65
3.1
Dot-product of the first row and first column yields element 1-1 . . .
75
3.2
Dot-product of the second row and second column yields element 2-2
75
3.3
80
3.4
85
4.1
The Cauchy sequence 1 k1 k=2 has its limit in [0, 1] . . . . . . . .
98
736
4.2
The Cauchy sequence 1 k1 k=2 does not have its limit in (0, 1) . .
4.3
4.4
4.5
Dividing a vector by its norm yields a unit vector on the same ray
5.1
5.2
5.3
3 . . . . . . . . . . . . . . . . 110
Projection onto the direction z, a.k.a. x
5.4
. . . . . . . . . . . . . . . . 110
Projection onto an arbitrary direction n
5.5
Rotation of x
counter-clockwise by /2 . . . . . . . . . . . . . . . . . 112
5.6
Rotation of y
counter-clockwise by /2 . . . . . . . . . . . . . . . . . 113
6.1
6.2
6.3
Polar and azimuthal angles for the (unit) spin direction . . . . . . . . 136
6.4
6.5
6.6
6.7
6.8
6.9
98
. . . . . . 101
. 102
. . . . . . . . . . . . . . . . . . . . . . 135
. . 136
. . 150
19.2 The function y = tan x blows-up at isolated points but is still periodic
(with period ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
19.3 Graph of a function defined only for x [1, 3) . . . . . . . . . . . . 544
19.4 Graph of a function defined everywhere, but whose support is [1, 3],
the closure of [1, 0) (0, 3) . . . . . . . . . . . . . . . . . . . . . . 545
19.5 A periodic function that can be expressed as a Fourier series . . . . . 546
19.6 A function with bounded domain that can be expressed as a Fourier
series (support width = 2) . . . . . . . . . . . . . . . . . . . . . . . 546
19.7 A low frequency (n = 1 : sin x) and high frequency (n = 20 : sin 20x)
basis function in the Fourier series . . . . . . . . . . . . . . . . . . . . 548
19.8 f (x) = x, defined only on bounded domain [, ) . . . . . . . . . . 549
19.9 f (x) = x as a periodic function with fundamental interval [, ): . 549
19.10First 25 Fourier coefficients of f (x) = x . . . . . . . . . . . . . . . . . 550
19.11Graph of the Fourier coefficients of f (x) = x . . . . . . . . . . . . . . 550
19.12Fourier partial sum of f (x) = x to n = 3 . . . . . . . . . . . . . . . . 552
19.13Fourier partial sum of f (x) = x to n = 50
. . . . . . . . . . . . . . . 552
. . 566
j (mod 2)
j (mod 2)
23.1 The spectrum of a vector with period 8 and frequency 16 = 128/8 . . 640
23.2 sin(3x) and its spectrum . . . . . . . . . . . . . . . . . . . . . . . . . 640
23.3 The spectrum of a purely periodic vector with period 8 and frequency
16 = 128/8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
23.4 Graph of two periods of a periodic injective function . . . . . . . . . . 642
23.5 Example of a periodic function that is not periodic injective . . . . . 644
23.6 We add the weak assumption that 2(+) a-intervals fit into [0, M ) . . 645
23.7 Typical application provides many a-intervals in [0, M ) . . . . . . . . 646
23.8 Our proof will also work for only one a interval in [0, M ) . . . . . . . 646
23.9 N = 2n chosen so (N/2, N ] bracket M 2 . . . . . . . . . . . . . . . . . 647
23.10Eight highly probable measurement results, cm, for N = 128 and a = 8 650
23.11Easy case covers aN , exactly . . . . . . . . . . . . . . . . . . . . . . 655
23.12[0, N ) is the union of distinct cosets of size a . . . . . . . . . . . . . . 655
739
. . . . . . . . . 679
740
List of Tables
741