Introduction
1 Integral Theorems
1.1 Gradient, Divergence and Curl
1.1.1 Vector Identities
1.1.2 Vector Potentials
1. Integral Theorems
$$\vec{\nabla}\times\mathbf{E} = -\frac{1}{c}\,\frac{\partial\mathbf{B}}{\partial t}$$
The shortest way to write (and easiest way to remember) gradient, divergence and curl uses
the symbol “⃗∇”, which is a differential operator like $\frac{\partial}{\partial x}$. It is defined by
$$\vec{\nabla} = \hat{\imath}\,\frac{\partial}{\partial x} + \hat{\jmath}\,\frac{\partial}{\partial y} + \hat{k}\,\frac{\partial}{\partial z}$$
and is called “del” or “nabla”. Here are the definitions.
1. The gradient of a scalar-valued function f (x, y, z) is the vector field
$$\mathrm{grad}\, f = \vec{\nabla} f = \frac{\partial f}{\partial x}\,\hat{\imath} + \frac{\partial f}{\partial y}\,\hat{\jmath} + \frac{\partial f}{\partial z}\,\hat{k}$$
Note that the input, f , for the gradient is a scalar-valued function, while the output,⃗∇ f ,
is a vector-valued function.
1 Good shorthand is not only briefer, but also aids understanding “of the forest by hiding the trees”.
2. The divergence of a vector field $\mathbf{F}(x,y,z) = F_1\,\hat{\imath} + F_2\,\hat{\jmath} + F_3\,\hat{k}$ is the scalar-valued function
$$\mathrm{div}\,\mathbf{F} = \vec{\nabla}\cdot\mathbf{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}$$
3. The curl of a vector field F(x, y, z) is the vector field
$$\mathrm{curl}\,\mathbf{F} = \vec{\nabla}\times\mathbf{F} = \Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)\hat{\imath} - \Big(\frac{\partial F_3}{\partial x} - \frac{\partial F_1}{\partial z}\Big)\hat{\jmath} + \Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)\hat{k}$$
4. The Laplacian² of a scalar-valued function f (x, y, z) is the scalar-valued function
$$\Delta f = \vec{\nabla}^2 f = \vec{\nabla}\cdot\vec{\nabla} f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}$$
The Laplacian of a vector field F(x, y, z) is the vector field
$$\Delta\mathbf{F} = \vec{\nabla}^2\mathbf{F} = \vec{\nabla}\cdot\vec{\nabla}\mathbf{F} = \frac{\partial^2\mathbf{F}}{\partial x^2} + \frac{\partial^2\mathbf{F}}{\partial y^2} + \frac{\partial^2\mathbf{F}}{\partial z^2}$$
Note that the Laplacian maps either a scalar-valued function to a scalar-valued function, or a vector-valued function to a vector-valued function.
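These definitions are easy to sanity-check with a computer algebra system. The following is a minimal sketch using Python's sympy.vector module (our choice of tool; the sample f and F are ours, not from the text):

```python
# A quick sanity check of the definitions of grad, div, curl and the Laplacian.
from sympy.vector import CoordSys3D, gradient, divergence, curl
from sympy import sin, exp

N = CoordSys3D('N')                  # Cartesian frame with base scalars x, y, z
x, y, z = N.x, N.y, N.z

f = y*sin(x) + exp(z)                # a sample scalar-valued function
F = x*y*N.i + y*z*N.j + z*x*N.k      # a sample vector field

print(gradient(f))                   # y*cos(x) i + sin(x) j + exp(z) k
print(divergence(F))                 # x + y + z
print(curl(F))                       # -y i - z j - x k
print(divergence(gradient(f)))       # the Laplacian of f: -y*sin(x) + exp(z)
```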
The gradient, divergence and Laplacian all have obvious generalizations to dimensions
other than three. That is not the case for the curl. It does have a, far from obvious,
generalization, which uses differential forms. Differential forms are well beyond our scope,
but are introduced in the optional §??.
As an example of an application in which both the divergence and curl appear, we have
Maxwell’s equations³ ⁴ ⁵, which form the foundation of classical electromagnetism.
$$\begin{aligned}
\vec{\nabla}\cdot\mathbf{E} &= 4\pi\rho\\
\vec{\nabla}\cdot\mathbf{B} &= 0\\
\vec{\nabla}\times\mathbf{E} + \frac{1}{c}\,\frac{\partial\mathbf{B}}{\partial t} &= \mathbf{0}\\
\vec{\nabla}\times\mathbf{B} - \frac{1}{c}\,\frac{\partial\mathbf{E}}{\partial t} &= \frac{4\pi}{c}\,\mathbf{J}
\end{aligned}$$
Here E is the electric field, B is the magnetic field, ρ is the charge density, J is the current
density and c is the speed of light.
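As a brief worked example of these operators in action (a standard consequence of Maxwell's equations, not stated in the text above): taking the divergence of the fourth equation and using $\vec{\nabla}\cdot(\vec{\nabla}\times\mathbf{B}) = 0$ (Theorem 1.1.7.a below) together with the first equation yields conservation of charge,
$$0 = \vec{\nabla}\cdot(\vec{\nabla}\times\mathbf{B}) = \frac{1}{c}\,\frac{\partial}{\partial t}\big(\vec{\nabla}\cdot\mathbf{E}\big) + \frac{4\pi}{c}\,\vec{\nabla}\cdot\mathbf{J} = \frac{4\pi}{c}\Big(\frac{\partial\rho}{\partial t} + \vec{\nabla}\cdot\mathbf{J}\Big) \qquad\text{so that}\qquad \frac{\partial\rho}{\partial t} + \vec{\nabla}\cdot\mathbf{J} = 0$$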
2 Pierre-Simon Laplace (1749–1827) was a French mathematician and astronomer. He is also the Laplace
of Laplace’s equation, the Laplace transform, and the Laplace-Bayes estimator. He was Napoleon’s examiner
when Napoleon attended the École Militaire in Paris.
3 To be picky, these are Maxwell's equations in the absence of a material medium and in Gaussian units.
4 One important consequence of Maxwell's equations is that electromagnetic radiation, like light, propagates at the speed c.
5 In a poll of physicists, Maxwell was voted the third greatest physicist of all time. Only Newton and Einstein beat him.
Memory Aid. Most of the vector identities (in fact all of them except Theorem 1.1.1.e,
Theorem 1.1.3.d and Theorem 1.1.7) are really easy to guess. Just combine the conventional
linearity and product rules with the facts that
◦ if the left hand side is a vector (scalar), then the right hand side must also be a vector
(scalar) and
◦ the only valid products of two vectors are the dot and cross products and
◦ the product of a scalar with either a scalar or a vector cannot be either a dot or cross
product and
◦ A × B = −B × A. (The cross product is antisymmetric.)
For example, consider Theorem 1.1.2.c, which says $\vec{\nabla}\cdot(f\,\mathbf{F}) = (\vec{\nabla} f)\cdot\mathbf{F} + f\,\vec{\nabla}\cdot\mathbf{F}$.
◦ The left hand side, ⃗∇ · ( f F), is a scalar, so the right hand side must also be a scalar.
◦ The left hand side, ⃗∇ · ( f F), is a derivative of the product of f and F, so, mimicking
the product rule, the right hand side will be a sum of two terms, one with F multiplying
a derivative of f , and one with f multiplying a derivative of F.
◦ The derivative acting on f must be ⃗∇ f , because ⃗∇ · f and ⃗∇ × f are not well-defined.
To end up with a scalar, rather than a vector, we must take the dot product of ⃗∇ f and
F. So that term is (⃗∇ f ) · F.
◦ The derivative acting on F must be either ⃗∇ · F or ⃗∇ × F. We also need to multiply by
the scalar f and end up with a scalar. So the derivative must be a scalar, i.e. ⃗∇ · F and
that term is f {⃗∇ · F}.
◦ Our final guess is $\vec{\nabla}\cdot(f\,\mathbf{F}) = (\vec{\nabla} f)\cdot\mathbf{F} + f\,\vec{\nabla}\cdot\mathbf{F}$, which, thankfully, is correct.
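The guess can also be confirmed symbolically, for completely generic f and F. Here is a short sketch using Python's sympy.vector module (an illustration only; the generic functions f, F1, F2, F3 are placeholders we introduce):

```python
# Checking the guess div(f F) = (grad f) . F + f div F for generic f and F.
from sympy.vector import CoordSys3D, gradient, divergence
from sympy import Function, simplify

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = Function('f')(x, y, z)                       # generic scalar function
F = (Function('F1')(x, y, z)*N.i
     + Function('F2')(x, y, z)*N.j
     + Function('F3')(x, y, z)*N.k)              # generic vector field

lhs = divergence(f*F)
rhs = gradient(f).dot(F) + f*divergence(F)
print(simplify(lhs - rhs))                       # 0
```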
Proof of Theorems 1.1.1, 1.1.2, 1.1.3, 1.1.6 and 1.1.7. All of the proofs (except for
those of Theorem 1.1.7.c,d, which we will return to later) consist of
◦ writing out the definition of the left hand side and
◦ writing out the definition of the right hand side and
◦ observing (possibly after a little manipulation) that they are the same.
For Theorem 1.1.1.a,b, Theorem 1.1.2.a,b, Theorem 1.1.3.a,b and Theorem 1.1.6.a,b,
the computation is trivial — one line per identity, if one uses some efficient notation.
Rename the coordinates x, y, z to $x_1, x_2, x_3$ and the standard unit basis vectors î, ĵ, k̂ to $\hat{\imath}_1, \hat{\imath}_2, \hat{\imath}_3$. Then $\vec{\nabla} = \sum_{n=1}^{3}\hat{\imath}_n\frac{\partial}{\partial x_n}$ and the proof of, for example, Theorem 1.1.2.a is
$$\vec{\nabla}\cdot(\mathbf{F}+\mathbf{G}) = \sum_{n=1}^{3}\frac{\partial}{\partial x_n}\,\hat{\imath}_n\cdot(\mathbf{F}+\mathbf{G}) = \sum_{n=1}^{3}\frac{\partial}{\partial x_n}\,\hat{\imath}_n\cdot\mathbf{F} + \sum_{n=1}^{3}\frac{\partial}{\partial x_n}\,\hat{\imath}_n\cdot\mathbf{G} = \vec{\nabla}\cdot\mathbf{F} + \vec{\nabla}\cdot\mathbf{G}$$
For Theorem 1.1.1.c,d, Theorem 1.1.2.c, Theorem 1.1.3.c and Theorem 1.1.6.c, the
computation is easy — a few lines per identity. For example, the proof of Theorem
1.1.3.c is
$$\begin{aligned}
\vec{\nabla}\times(f\,\mathbf{F}) &= \sum_{n=1}^{3}\frac{\partial}{\partial x_n}\,\hat{\imath}_n\times(f\,\mathbf{F}) = \sum_{n=1}^{3}\frac{\partial}{\partial x_n}\,f\,\{\hat{\imath}_n\times\mathbf{F}\}\\
&= \sum_{n=1}^{3}\frac{\partial f}{\partial x_n}\,\{\hat{\imath}_n\times\mathbf{F}\} + f\sum_{n=1}^{3}\frac{\partial}{\partial x_n}\,\{\hat{\imath}_n\times\mathbf{F}\}\qquad\text{(by Theorem ??.b)}\\
&= (\vec{\nabla} f)\times\mathbf{F} + f\,\vec{\nabla}\times\mathbf{F}
\end{aligned}$$
The similar verifications of Theorems 1.1.1.c,d, 1.1.2.c and 1.1.6.c are left as exercises.
The latter two are parts (a) and (c) of Question PROB4prb vector identity in Section
4.1 of the CLP-4 problem book.
For Theorem 1.1.2.d, the computation is also easy if one uses the fact that
$$\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}$$
which is Lemma 1.1.1.a below. The verification of Theorem 1.1.2.d is part (b) of
Question PROB4prb vector identity in Section 4.1 of the CLP-4 problem book.
That leaves the proofs of Theorem 1.1.1.e, Theorem 1.1.3.d, Theorem 1.1.7.a,b,c,d,e,
which we write out explicitly.
Theorem 1.1.1.e:
First write out the left hand side as
$$\vec{\nabla}(\mathbf{F}\cdot\mathbf{G}) = \sum_{n=1}^{3}\hat{\imath}_n\,\frac{\partial}{\partial x_n}(\mathbf{F}\cdot\mathbf{G}) = \sum_{n=1}^{3}\hat{\imath}_n\,\frac{\partial\mathbf{F}}{\partial x_n}\cdot\mathbf{G} + \sum_{n=1}^{3}\hat{\imath}_n\,\mathbf{F}\cdot\frac{\partial\mathbf{G}}{\partial x_n}$$
Then rewrite the vector identity of Lemma 1.1.1.b (proven below) as
$$(\mathbf{c}\cdot\mathbf{a})\,\mathbf{b} = \mathbf{a}\times(\mathbf{b}\times\mathbf{c}) + (\mathbf{b}\cdot\mathbf{a})\,\mathbf{c}$$
Applying it once with $\mathbf{b}=\hat{\imath}_n$, $\mathbf{c}=\frac{\partial\mathbf{F}}{\partial x_n}$, $\mathbf{a}=\mathbf{G}$ and once with $\mathbf{b}=\hat{\imath}_n$, $\mathbf{c}=\frac{\partial\mathbf{G}}{\partial x_n}$, $\mathbf{a}=\mathbf{F}$ gives
$$\begin{aligned}
\vec{\nabla}(\mathbf{F}\cdot\mathbf{G}) &= \sum_{n=1}^{3}\Big\{\mathbf{G}\times\Big(\hat{\imath}_n\times\frac{\partial\mathbf{F}}{\partial x_n}\Big) + (\mathbf{G}\cdot\hat{\imath}_n)\frac{\partial\mathbf{F}}{\partial x_n}\Big\} + \sum_{n=1}^{3}\Big\{\mathbf{F}\times\Big(\hat{\imath}_n\times\frac{\partial\mathbf{G}}{\partial x_n}\Big) + (\mathbf{F}\cdot\hat{\imath}_n)\frac{\partial\mathbf{G}}{\partial x_n}\Big\}\\
&= \mathbf{G}\times(\vec{\nabla}\times\mathbf{F}) + (\mathbf{G}\cdot\vec{\nabla})\mathbf{F} + \mathbf{F}\times(\vec{\nabla}\times\mathbf{G}) + (\mathbf{F}\cdot\vec{\nabla})\mathbf{G}
\end{aligned}$$
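For readers who like to double-check such identities mechanically, here is a sketch in Python's sympy.vector that verifies Theorem 1.1.1.e for generic F and G; the helper dir_deriv, which computes (A · ⃗∇)B componentwise, is our own construction:

```python
# Checking grad(F.G) = G x curl F + (G.del)F + F x curl G + (F.del)G.
from sympy.vector import CoordSys3D, Vector, gradient, curl
from sympy import Function, simplify

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
basis = (N.i, N.j, N.k)

def field(name):
    """A generic vector field with placeholder component functions."""
    return (Function(name + '1')(x, y, z)*N.i
            + Function(name + '2')(x, y, z)*N.j
            + Function(name + '3')(x, y, z)*N.k)

def dir_deriv(A, B):
    """(A . del) B: the derivative of each component of B along A."""
    out = Vector.zero
    for e in basis:
        out += A.dot(gradient(B.dot(e))) * e
    return out

F, G = field('F'), field('G')
lhs = gradient(F.dot(G))
rhs = (G.cross(curl(F)) + dir_deriv(G, F)
       + F.cross(curl(G)) + dir_deriv(F, G))
diff = lhs - rhs
print([simplify(diff.dot(e)) for e in basis])   # [0, 0, 0]
```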
Theorem 1.1.3.d:
We use the same trick. Write out the left hand side as
$$\vec{\nabla}\times(\mathbf{F}\times\mathbf{G}) = \sum_{n=1}^{3}\hat{\imath}_n\times\frac{\partial}{\partial x_n}(\mathbf{F}\times\mathbf{G}) = \sum_{n=1}^{3}\hat{\imath}_n\times\Big(\frac{\partial\mathbf{F}}{\partial x_n}\times\mathbf{G}\Big) + \sum_{n=1}^{3}\hat{\imath}_n\times\Big(\mathbf{F}\times\frac{\partial\mathbf{G}}{\partial x_n}\Big)$$
Applying Lemma 1.1.1.b, with $\mathbf{a}=\hat{\imath}_n$, to each of the two terms then gives
$$\begin{aligned}
\vec{\nabla}\times(\mathbf{F}\times\mathbf{G}) &= \sum_{n=1}^{3}\Big\{(\mathbf{G}\cdot\hat{\imath}_n)\frac{\partial\mathbf{F}}{\partial x_n} - \Big(\frac{\partial\mathbf{F}}{\partial x_n}\cdot\hat{\imath}_n\Big)\mathbf{G}\Big\} + \sum_{n=1}^{3}\Big\{\Big(\frac{\partial\mathbf{G}}{\partial x_n}\cdot\hat{\imath}_n\Big)\mathbf{F} - (\mathbf{F}\cdot\hat{\imath}_n)\frac{\partial\mathbf{G}}{\partial x_n}\Big\}\\
&= (\mathbf{G}\cdot\vec{\nabla})\mathbf{F} - (\vec{\nabla}\cdot\mathbf{F})\,\mathbf{G} + (\vec{\nabla}\cdot\mathbf{G})\,\mathbf{F} - (\mathbf{F}\cdot\vec{\nabla})\mathbf{G}
\end{aligned}$$
Theorem 1.1.7.a:
Substituting in
$$\vec{\nabla}\times\mathbf{F} = \Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big)\hat{\imath} - \Big(\frac{\partial F_3}{\partial x} - \frac{\partial F_1}{\partial z}\Big)\hat{\jmath} + \Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)\hat{k}$$
gives
$$\begin{aligned}
\vec{\nabla}\cdot(\vec{\nabla}\times\mathbf{F}) &= \frac{\partial}{\partial x}\Big(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}\Big) - \frac{\partial}{\partial y}\Big(\frac{\partial F_3}{\partial x} - \frac{\partial F_1}{\partial z}\Big) + \frac{\partial}{\partial z}\Big(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\Big)\\
&= \frac{\partial^2 F_3}{\partial x\,\partial y} - \frac{\partial^2 F_2}{\partial x\,\partial z} - \frac{\partial^2 F_3}{\partial y\,\partial x} + \frac{\partial^2 F_1}{\partial y\,\partial z} + \frac{\partial^2 F_2}{\partial z\,\partial x} - \frac{\partial^2 F_1}{\partial z\,\partial y}\\
&= 0
\end{aligned}$$
because the two $F_3$ terms cancel each other, as do the two $F_2$ terms and the two $F_1$ terms, by the equality of mixed partial derivatives.
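The same cancellation can be checked by machine. A minimal sketch with sympy.vector, for a completely generic F:

```python
# Verifying Theorem 1.1.7.a, div(curl F) = 0, for generic components F1, F2, F3.
# The cancellation is exactly the mixed-partials cancellation in the proof above.
from sympy.vector import CoordSys3D, divergence, curl
from sympy import Function, simplify

N = CoordSys3D('N')
F = (Function('F1')(N.x, N.y, N.z)*N.i
     + Function('F2')(N.x, N.y, N.z)*N.j
     + Function('F3')(N.x, N.y, N.z)*N.k)

print(simplify(divergence(curl(F))))   # 0
```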
Theorem 1.1.7.b:
Substituting in
$$\vec{\nabla} f = \frac{\partial f}{\partial x}\,\hat{\imath} + \frac{\partial f}{\partial y}\,\hat{\jmath} + \frac{\partial f}{\partial z}\,\hat{k}$$
gives
$$\vec{\nabla}\times\vec{\nabla} f = \Big(\frac{\partial}{\partial y}\frac{\partial f}{\partial z} - \frac{\partial}{\partial z}\frac{\partial f}{\partial y}\Big)\hat{\imath} - \Big(\frac{\partial}{\partial x}\frac{\partial f}{\partial z} - \frac{\partial}{\partial z}\frac{\partial f}{\partial x}\Big)\hat{\jmath} + \Big(\frac{\partial}{\partial x}\frac{\partial f}{\partial y} - \frac{\partial}{\partial y}\frac{\partial f}{\partial x}\Big)\hat{k} = \mathbf{0}$$
Theorem 1.1.7.c:
By Theorem 1.1.2.c, followed by Theorem 1.1.2.d,
$$\vec{\nabla}\cdot\big(f\,(\vec{\nabla} g\times\vec{\nabla} h)\big) = \vec{\nabla} f\cdot(\vec{\nabla} g\times\vec{\nabla} h) + f\,\vec{\nabla}\cdot(\vec{\nabla} g\times\vec{\nabla} h) = \vec{\nabla} f\cdot(\vec{\nabla} g\times\vec{\nabla} h) + f\,\big\{(\vec{\nabla}\times\vec{\nabla} g)\cdot\vec{\nabla} h - \vec{\nabla} g\cdot(\vec{\nabla}\times\vec{\nabla} h)\big\}$$
and the term in braces vanishes because $\vec{\nabla}\times\vec{\nabla} g = \vec{\nabla}\times\vec{\nabla} h = \mathbf{0}$ by Theorem 1.1.7.b.
Theorem 1.1.7.d:
By Theorem 1.1.2.c,
$$\vec{\nabla}\cdot(f\,\vec{\nabla} g - g\,\vec{\nabla} f) = \big\{(\vec{\nabla} f)\cdot(\vec{\nabla} g) + f\,\vec{\nabla}\cdot(\vec{\nabla} g)\big\} - \big\{(\vec{\nabla} g)\cdot(\vec{\nabla} f) + g\,\vec{\nabla}\cdot(\vec{\nabla} f)\big\} = f\,\vec{\nabla}^2 g - g\,\vec{\nabla}^2 f$$
Theorem 1.1.7.e:
$$\vec{\nabla}\times(\vec{\nabla}\times\mathbf{F}) = \Big(\sum_{l=1}^{3}\hat{\imath}_l\frac{\partial}{\partial x_l}\Big)\times\Big(\sum_{m=1}^{3}\hat{\imath}_m\frac{\partial}{\partial x_m}\Big)\times\Big(\sum_{n=1}^{3}\hat{\imath}_n F_n\Big) = \sum_{l,m,n=1}^{3}\hat{\imath}_l\times(\hat{\imath}_m\times\hat{\imath}_n)\,\frac{\partial^2 F_n}{\partial x_l\,\partial x_m}$$
By Lemma 1.1.1.b, $\hat{\imath}_l\times(\hat{\imath}_m\times\hat{\imath}_n) = (\hat{\imath}_n\cdot\hat{\imath}_l)\,\hat{\imath}_m - (\hat{\imath}_m\cdot\hat{\imath}_l)\,\hat{\imath}_n = \delta_{l,n}\,\hat{\imath}_m - \delta_{l,m}\,\hat{\imath}_n$, where⁶
$$\delta_{m,n} = \begin{cases} 1 & \text{if } m = n\\ 0 & \text{if } m \neq n\end{cases}$$
6 $\delta_{m,n}$ is called the Kronecker delta function. It is named after the German number theorist and logician Leopold Kronecker (1823–1891). He is reputed to have said “God made the integers. All else is the work of man.”
Hence
$$\begin{aligned}
\vec{\nabla}\times(\vec{\nabla}\times\mathbf{F}) &= \sum_{l,m,n=1}^{3}\delta_{l,n}\,\hat{\imath}_m\,\frac{\partial^2 F_n}{\partial x_l\,\partial x_m} - \sum_{l,m,n=1}^{3}\delta_{l,m}\,\hat{\imath}_n\,\frac{\partial^2 F_n}{\partial x_l\,\partial x_m}\\
&= \sum_{m,n=1}^{3}\hat{\imath}_m\,\frac{\partial}{\partial x_m}\frac{\partial F_n}{\partial x_n} - \sum_{m,n=1}^{3}\hat{\imath}_n\,\frac{\partial^2 F_n}{\partial x_m^2}\\
&= \vec{\nabla}(\vec{\nabla}\cdot\mathbf{F}) - \vec{\nabla}^2\mathbf{F}
\end{aligned}$$
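As with the other identities, this one can be verified symbolically. Below is a sketch with sympy.vector; the componentwise vector Laplacian helper is our own, valid in Cartesian coordinates where the Laplacian acts component by component:

```python
# Verifying Theorem 1.1.7.e: curl(curl F) = grad(div F) - laplacian(F).
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl
from sympy import Function, simplify, diff

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
basis = (N.i, N.j, N.k)

F = (Function('F1')(x, y, z)*N.i
     + Function('F2')(x, y, z)*N.j
     + Function('F3')(x, y, z)*N.k)

def vec_laplacian(G):
    """Componentwise Laplacian of a vector field, in Cartesian coordinates."""
    out = Vector.zero
    for e in basis:
        g = G.dot(e)
        out += (diff(g, x, 2) + diff(g, y, 2) + diff(g, z, 2)) * e
    return out

# curl(curl F) - grad(div F) + laplacian(F) should vanish identically.
residual = curl(curl(F)) - gradient(divergence(F)) + vec_laplacian(F)
print([simplify(residual.dot(e)) for e in basis])   # [0, 0, 0]
```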
Lemma 1.1.1. For any vectors a, b and c,
(a) $\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}$
(b) $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{c}\cdot\mathbf{a})\,\mathbf{b} - (\mathbf{b}\cdot\mathbf{a})\,\mathbf{c}$
Proof. (a) Here are two proofs. For the first, just write out both sides
$$\begin{aligned}
\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) &= (a_1, a_2, a_3)\cdot(b_2 c_3 - b_3 c_2\,,\; b_3 c_1 - b_1 c_3\,,\; b_1 c_2 - b_2 c_1)\\
&= a_1 b_2 c_3 - a_1 b_3 c_2 + a_2 b_3 c_1 - a_2 b_1 c_3 + a_3 b_1 c_2 - a_3 b_2 c_1\\
(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} &= (a_2 b_3 - a_3 b_2\,,\; a_3 b_1 - a_1 b_3\,,\; a_1 b_2 - a_2 b_1)\cdot(c_1, c_2, c_3)\\
&= a_2 b_3 c_1 - a_3 b_2 c_1 + a_3 b_1 c_2 - a_1 b_3 c_2 + a_1 b_2 c_3 - a_2 b_1 c_3
\end{aligned}$$
The two right hand sides contain the same six terms, just written in different orders. For the second proof, express both sides in terms of determinants:
$$\begin{aligned}
\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) &= (a_1, a_2, a_3)\cdot\det\begin{bmatrix}\hat{\imath} & \hat{\jmath} & \hat{k}\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3\end{bmatrix}\\
&= a_1\det\begin{bmatrix}b_2 & b_3\\ c_2 & c_3\end{bmatrix} - a_2\det\begin{bmatrix}b_1 & b_3\\ c_1 & c_3\end{bmatrix} + a_3\det\begin{bmatrix}b_1 & b_2\\ c_1 & c_2\end{bmatrix}\\
&= \det\begin{bmatrix}a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3\end{bmatrix}\\[1ex]
(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} &= \det\begin{bmatrix}\hat{\imath} & \hat{\jmath} & \hat{k}\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{bmatrix}\cdot(c_1, c_2, c_3)\\
&= c_1\det\begin{bmatrix}a_2 & a_3\\ b_2 & b_3\end{bmatrix} - c_2\det\begin{bmatrix}a_1 & a_3\\ b_1 & b_3\end{bmatrix} + c_3\det\begin{bmatrix}a_1 & a_2\\ b_1 & b_2\end{bmatrix}\\
&= \det\begin{bmatrix}c_1 & c_2 & c_3\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{bmatrix}
\end{aligned}$$
Exchanging two rows in a determinant changes the sign of the determinant. Moving
the top row of a 3 × 3 determinant to the bottom row requires two exchanges of rows.
So the two 3 × 3 determinants are equal.
(b) The proof is not exceptionally difficult — just write out both sides and grind.
Substituting in
$$\mathbf{b}\times\mathbf{c} = (b_2 c_3 - b_3 c_2)\,\hat{\imath} - (b_1 c_3 - b_3 c_1)\,\hat{\jmath} + (b_1 c_2 - b_2 c_1)\,\hat{k}$$
gives, for the left hand side,
$$\begin{aligned}
\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) &= \det\begin{bmatrix}\hat{\imath} & \hat{\jmath} & \hat{k}\\ a_1 & a_2 & a_3\\ b_2 c_3 - b_3 c_2 & -b_1 c_3 + b_3 c_1 & b_1 c_2 - b_2 c_1\end{bmatrix}\\
&= \hat{\imath}\,\big[a_2(b_1 c_2 - b_2 c_1) - a_3(-b_1 c_3 + b_3 c_1)\big]\\
&\quad - \hat{\jmath}\,\big[a_1(b_1 c_2 - b_2 c_1) - a_3(b_2 c_3 - b_3 c_2)\big]\\
&\quad + \hat{k}\,\big[a_1(-b_1 c_3 + b_3 c_1) - a_2(b_2 c_3 - b_3 c_2)\big]
\end{aligned}$$
On the other hand, the right hand side
$$\begin{aligned}
(\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c} &= (a_1 c_1 + a_2 c_2 + a_3 c_3)(b_1\hat{\imath} + b_2\hat{\jmath} + b_3\hat{k}) - (a_1 b_1 + a_2 b_2 + a_3 b_3)(c_1\hat{\imath} + c_2\hat{\jmath} + c_3\hat{k})\\
&= \hat{\imath}\,\big[a_1 b_1 c_1 + a_2 b_1 c_2 + a_3 b_1 c_3 - a_1 b_1 c_1 - a_2 b_2 c_1 - a_3 b_3 c_1\big]\\
&\quad + \hat{\jmath}\,\big[a_1 b_2 c_1 + a_2 b_2 c_2 + a_3 b_2 c_3 - a_1 b_1 c_2 - a_2 b_2 c_2 - a_3 b_3 c_2\big]\\
&\quad + \hat{k}\,\big[a_1 b_3 c_1 + a_2 b_3 c_2 + a_3 b_3 c_3 - a_1 b_1 c_3 - a_2 b_2 c_3 - a_3 b_3 c_3\big]\\
&= \hat{\imath}\,\big[a_2 b_1 c_2 + a_3 b_1 c_3 - a_2 b_2 c_1 - a_3 b_3 c_1\big]\\
&\quad + \hat{\jmath}\,\big[a_1 b_2 c_1 + a_3 b_2 c_3 - a_1 b_1 c_2 - a_3 b_3 c_2\big]\\
&\quad + \hat{k}\,\big[a_1 b_3 c_1 + a_2 b_3 c_2 - a_1 b_1 c_3 - a_2 b_2 c_3\big]
\end{aligned}$$
The last formula that we had for the left hand side is the same as the last formula we
had for the right hand side. ■
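Both parts of the lemma are polynomial identities in the nine components, so a computer algebra check is immediate. A sketch with plain sympy matrices (the symbol names are ours):

```python
# A symbolic check of Lemma 1.1.1; since both identities are polynomial in the
# nine coordinates, expand() is enough to exhibit the cancellation.
from sympy import symbols, Matrix

a = Matrix(symbols('a1:4'))
b = Matrix(symbols('b1:4'))
c = Matrix(symbols('c1:4'))

# (a)  a . (b x c) = (a x b) . c
print((a.dot(b.cross(c)) - a.cross(b).dot(c)).expand())           # 0
# (b)  a x (b x c) = (c . a) b - (b . a) c
print((a.cross(b.cross(c)) - c.dot(a)*b + b.dot(a)*c).expand())   # zero column vector
```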
[Screening tests] We have seen the vector identity Theorem 1.1.7.b before. It says
that if a vector field F is of the form F = ⃗∇φ for some function φ (that is, if F is conservative), then
$$\vec{\nabla}\times\mathbf{F} = \vec{\nabla}\times(\vec{\nabla}\varphi) = \mathbf{0}$$
Conversely, we have also seen, in Theorem ??, that, if F is defined and has continuous
first order partial derivatives on all of R3 , and if ⃗∇ × F = 0, then F is conservative.
The vector identity Theorem 1.1.7.b is our screening test for conservativeness.
Because its right hand side is zero, the vector identity Theorem 1.1.7.a is suggestive.
It says that if a vector field F is of the form F = ⃗∇ × A for some vector field A, then
$$\vec{\nabla}\cdot\mathbf{F} = \vec{\nabla}\cdot(\vec{\nabla}\times\mathbf{A}) = 0$$
As an example, consider the two Maxwell's equations
$$\vec{\nabla}\cdot\mathbf{B} = 0\qquad\qquad \vec{\nabla}\times\mathbf{E} + \frac{1}{c}\,\frac{\partial\mathbf{B}}{\partial t} = \mathbf{0}$$
that we saw in Example 1.1. The first equation implies that (assuming B is sufficiently
smooth) there is a vector field A, called the magnetic potential, with B = ⃗∇ × A.
Substituting this into the second equation gives
$$0 = \vec{\nabla}\times\mathbf{E} + \frac{1}{c}\,\frac{\partial}{\partial t}\,\vec{\nabla}\times\mathbf{A} = \vec{\nabla}\times\Big(\mathbf{E} + \frac{1}{c}\,\frac{\partial\mathbf{A}}{\partial t}\Big)$$
So $\mathbf{E} + \frac{1}{c}\frac{\partial\mathbf{A}}{\partial t}$ passes the screening test of Theorem 1.1.7.b and there is a function φ
(called the electric potential) with
$$\mathbf{E} + \frac{1}{c}\,\frac{\partial\mathbf{A}}{\partial t} = -\vec{\nabla}\varphi$$
We have put in the minus sign just to provide compatibility with the usual physics
terminology.
Problem: Let $\mathbf{r}(x, y, z) = x\,\hat{\imath} + y\,\hat{\jmath} + z\,\hat{k}$ and let ψ(x, y, z) be an arbitrary function. Verify that
$$\vec{\nabla}\cdot\big(\mathbf{r}\times\vec{\nabla}\psi\big) = 0$$
Solution: By the vector identity Theorem 1.1.2.d, with F = r and G = ⃗∇ψ,
$$\vec{\nabla}\cdot\big(\mathbf{r}\times\vec{\nabla}\psi\big) = (\vec{\nabla}\times\mathbf{r})\cdot\vec{\nabla}\psi - \mathbf{r}\cdot(\vec{\nabla}\times\vec{\nabla}\psi)$$
By the vector identity Theorem 1.1.7.b, the second term is zero. Now since
$$\vec{\nabla}\times\mathbf{r} = \Big(\frac{\partial z}{\partial y} - \frac{\partial y}{\partial z}\Big)\hat{\imath} - \Big(\frac{\partial z}{\partial x} - \frac{\partial x}{\partial z}\Big)\hat{\jmath} + \Big(\frac{\partial y}{\partial x} - \frac{\partial x}{\partial y}\Big)\hat{k} = \mathbf{0}$$
the first term is also zero, as desired. Indeed, $\vec{\nabla}\cdot\big(\mathbf{r}\times\vec{\nabla}\psi\big) = 0$ holds for any curl-free $\mathbf{r}(x, y, z)$.
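The identity can also be confirmed directly, without invoking any of the theorems. A one-line check with sympy.vector, for a generic placeholder ψ:

```python
# Checking the Problem above: div(r x grad psi) = 0 for a generic psi.
from sympy.vector import CoordSys3D, gradient, divergence
from sympy import Function, simplify

N = CoordSys3D('N')
r = N.x*N.i + N.y*N.j + N.z*N.k              # the position vector field
psi = Function('psi')(N.x, N.y, N.z)         # an arbitrary smooth function

print(simplify(divergence(r.cross(gradient(psi)))))   # 0
```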
A vector field A is said to be a vector potential for the vector field B if
$$\mathbf{B} = \vec{\nabla}\times\mathbf{A}$$
As we saw in Example 1.1.1, if a vector field B has a vector potential, then the vector
identity Theorem 1.1.7.a implies that ⃗∇ · B = 0. This fact deserves to be called a
theorem.
Theorem 1.1.8 — Screening test for vector potentials. If there exists a vector
potential for the vector field B, then
$$\vec{\nabla}\cdot\mathbf{B} = 0$$
Of course, we’ll consider the converse soon. Also note that the vector potential, when
it exists, is far from unique. Two vector fields A and à are both vector potentials for
the same vector field if and only if
$$\vec{\nabla}\times\mathbf{A} = \vec{\nabla}\times\tilde{\mathbf{A}} \iff \vec{\nabla}\times(\mathbf{A} - \tilde{\mathbf{A}}) = \mathbf{0}$$
That is, if and only if the difference A − Ã passes the conservative field screening test
of Theorems ?? and ??. In particular, if A is one vector potential for a vector field B
(i.e. if B = ⃗∇ × A), and if ψ is any function, then
$$\vec{\nabla}\times(\mathbf{A} + \vec{\nabla}\psi) = \vec{\nabla}\times\mathbf{A} + \vec{\nabla}\times\vec{\nabla}\psi = \mathbf{B}$$
by the vector identity Theorem 1.1.7.b. That is, A + ⃗∇ψ is another vector potential
for B.
To simplify computations, we can always choose ψ so that, for example, the third component of $\mathbf{A} + \vec{\nabla}\psi$, namely $(\mathbf{A} + \vec{\nabla}\psi)\cdot\hat{k} = A_3 + \frac{\partial\psi}{\partial z}$, is zero — just choose $\psi = -\int A_3\,\mathrm{d}z$. We have just proven that if the vector field B has a vector potential, then, in particular, there is a vector potential A for B with $A_3 = 0$.
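The gauge-fixing recipe is easy to see in action. Below is a sketch with sympy.vector; the sample potential A is our own choice, not from the text:

```python
# Gauge transformation A -> A + grad psi, with psi = -integral of A3 dz: the
# third component of the new potential vanishes, while curl A (that is, B)
# is unchanged.
from sympy.vector import CoordSys3D, gradient, curl
from sympy import integrate

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

A = y*z*N.i + x*z*N.j + x**2*N.k        # one vector potential (sample choice)
psi = -integrate(A.dot(N.k), z)         # psi = -x**2 * z
A_new = A + gradient(psi)

print(A_new.dot(N.k))                   # 0: the third component is gauged away
print(curl(A_new) - curl(A))            # 0: both potentials give the same B
```

Replacing A by A + ⃗∇ψ changes nothing about B, which is exactly the non-uniqueness discussed above.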