
Divergence, Curl, Grad


Tanay Ghosh
Copyright © 2018 Tanay Ghosh

Research Scholar, AUS

First release, April 2018


Contents

1 Integral Theorems
1.1 Gradient, Divergence and Curl
1.1.1 Vector Identities
1.1.2 Vector Potentials
1. Integral Theorems

1.1 Gradient, Divergence and Curl


“Gradient, divergence and curl”, commonly called “grad, div and curl”, refer to a very widely
used family of differential operators and related notations that we’ll get to shortly. We will
later see that each has a “physical” significance. But even if they were only shorthand1 , they
would be worth using. For example, one of Maxwell’s equations (relating the electric field
E and the magnetic field B) written without the use of this notation is
( ∂E₃/∂y − ∂E₂/∂z ) î − ( ∂E₃/∂x − ∂E₁/∂z ) ĵ + ( ∂E₂/∂x − ∂E₁/∂y ) k̂ = − (1/c) ( ∂B₁/∂t î + ∂B₂/∂t ĵ + ∂B₃/∂t k̂ )
The same equation written using this notation is

⃗∇ × E = − (1/c) ∂B/∂t
The shortest way to write (and easiest way to remember) gradient, divergence and curl uses
the symbol “⃗∇”, which is a differential operator like ∂/∂x. It is defined by

⃗∇ = î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z
and is called “del” or “nabla”. Here are the definitions.
1. The gradient of a scalar-valued function f (x, y, z) is the vector field
grad f = ⃗∇ f = ∂f/∂x î + ∂f/∂y ĵ + ∂f/∂z k̂
Note that the input, f , for the gradient is a scalar-valued function, while the output,⃗∇ f ,
is a vector-valued function.
1 Good shorthand is not only more brief, but also aids understanding “of the forest by hiding the trees”.

2. The divergence of a vector field F(x, y, z) is the scalar-valued function


div F = ⃗∇ · F = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z
Note that the input, F, for the divergence is a vector-valued function, while the output,
⃗∇ · F, is a scalar-valued function.
3. The curl of a vector field F(x, y, z) is the vector field
curl F = ⃗∇ × F = ( ∂F₃/∂y − ∂F₂/∂z ) î − ( ∂F₃/∂x − ∂F₁/∂z ) ĵ + ( ∂F₂/∂x − ∂F₁/∂y ) k̂
Note that the input, F, for the curl is a vector-valued function, and the output, ⃗∇ × F,
is again a vector-valued function.
4. The Laplacian2 of a scalar-valued function f (x, y, z) is the scalar-valued function

∆f = ⃗∇² f = ⃗∇ · ⃗∇ f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²
The Laplacian of a vector field F(x, y, z) is the vector field
∆F = ⃗∇² F = ⃗∇ · ⃗∇F = ∂²F/∂x² + ∂²F/∂y² + ∂²F/∂z²
Note that the Laplacian maps either a scalar-valued function to a scalar-valued func-
tion, or a vector-valued function to a vector-valued function.
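To make these definitions concrete, here is a small computational sketch; it assumes the SymPy library, and the sample fields f and F are made up for illustration.

```python
# A sketch (not from the text): evaluating grad, div, curl and the
# Laplacian for sample fields with SymPy's vector module.
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')              # Cartesian frame; basis vectors N.i, N.j, N.k
x, y, z = N.x, N.y, N.z

f = x*y**2*z                     # a scalar-valued function (made up)
F = x*y*N.i + y*z*N.j + z*x*N.k  # a vector field (made up)

print(gradient(f))               # grad f: a vector field
print(divergence(F))             # div F: a scalar, here x + y + z
print(curl(F))                   # curl F: a vector field
print(divergence(gradient(f)))   # Laplacian of f, computed as div(grad f)
```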
The gradient, divergence and Laplacian all have obvious generalizations to dimensions other than three. That is not the case for the curl: it does have a (far from obvious) generalization, which uses differential forms. Differential forms are well beyond our scope, but are introduced in the optional §??.
As an example of an application in which both the divergence and curl appear, we have
Maxwell’s equations3,4,5, which form the foundation of classical electromagnetism.
⃗∇ · E = 4πρ
⃗∇ · B = 0
⃗∇ × E + (1/c) ∂B/∂t = 0
⃗∇ × B − (1/c) ∂E/∂t = (4π/c) J
Here E is the electric field, B is the magnetic field, ρ is the charge density, J is the current
density and c is the speed of light.
2 Pierre-Simon Laplace (1749–1827) was a French mathematician and astronomer. He is also the Laplace
of Laplace’s equation, the Laplace transform, and the Laplace-Bayes estimator. He was Napoleon’s examiner
when Napoleon attended the Ecole Militaire in Paris.
3 To be picky, these are Maxwell’s equations in the absence of a material medium and in Gaussian units.
4 One important consequence of Maxwell’s equations is that electromagnetic radiation, like light, propagates at the speed of light.


5 James Clerk Maxwell (1831–1879) was a Scottish mathematical physicist. In a poll of prominent physicists, Maxwell was voted the third greatest physicist of all time. Only Newton and Einstein beat him.

1.1.1 Vector Identities


Two computationally extremely important properties of the derivative d/dx are linearity and the product rule:

d/dx [ a f(x) + b g(x) ] = a f′(x) + b g′(x)
d/dx [ f(x) g(x) ] = f′(x) g(x) + f(x) g′(x)
Gradient, divergence and curl also have properties like these, which indeed stem (often
easily) from them. First, here are the statements of a bunch of them. (A memory aid and
proofs will come later.) In fact, here are a very large number of them. Many are included
just for completeness. Only a relatively small number are used a lot. They are in red.

Theorem 1.1.1 — Gradient Identities.
(a) ⃗∇( f + g) = ⃗∇ f + ⃗∇g
(b) ⃗∇(c f ) = c ⃗∇ f, for any constant c
(c) ⃗∇( f g) = (⃗∇ f )g + f (⃗∇g)
(d) ⃗∇( f /g) = [ g ⃗∇ f − f ⃗∇g ] / g², at points x where g(x) ≠ 0
(e) ⃗∇(F · G) = F × (⃗∇ × G) − (⃗∇ × F) × G + (G · ⃗∇)F + (F · ⃗∇)G
Hereᵃ
(G · ⃗∇)F = G₁ ∂F/∂x + G₂ ∂F/∂y + G₃ ∂F/∂z
ᵃ This is really the only definition that makes sense. For example G · (⃗∇F) does not make sense because you can’t take the gradient of a vector-valued function.

Theorem 1.1.2 — Divergence Identities.
(a) ⃗∇ · (F + G) = ⃗∇ · F + ⃗∇ · G
(b) ⃗∇ · (cF) = c ⃗∇ · F, for any constant c
(c) ⃗∇ · ( f F) = (⃗∇ f ) · F + f ⃗∇ · F
(d) ⃗∇ · (F × G) = (⃗∇ × F) · G − F · (⃗∇ × G)

Theorem 1.1.3 — Curl Identities.
(a) ⃗∇ × (F + G) = ⃗∇ × F + ⃗∇ × G
(b) ⃗∇ × (cF) = c ⃗∇ × F, for any constant c
(c) ⃗∇ × ( f F) = (⃗∇ f ) × F + f ⃗∇ × F
(d) ⃗∇ × (F × G) = F(⃗∇ · G) − (⃗∇ · F)G + (G · ⃗∇)F − (F · ⃗∇)G
Here
(G · ⃗∇)F = G₁ ∂F/∂x + G₂ ∂F/∂y + G₃ ∂F/∂z


Theorem 1.1.6 — Laplacian Identities.
(a) ⃗∇²( f + g) = ⃗∇² f + ⃗∇² g
(b) ⃗∇²(c f ) = c ⃗∇² f, for any constant c
(c) ⃗∇²( f g) = f ⃗∇² g + 2 ⃗∇ f · ⃗∇g + g ⃗∇² f

Theorem 1.1.7 — Degree Two Identities.
(a) ⃗∇ · (⃗∇ × F) = 0 (divergence of curl)
(b) ⃗∇ × (⃗∇ f ) = 0 (curl of gradient)
(c) ⃗∇ · [ f (⃗∇g × ⃗∇h) ] = ⃗∇ f · (⃗∇g × ⃗∇h)
(d) ⃗∇ · ( f ⃗∇g − g ⃗∇ f ) = f ⃗∇² g − g ⃗∇² f
(e) ⃗∇ × (⃗∇ × F) = ⃗∇(⃗∇ · F) − ⃗∇² F (curl of curl)
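Before the memory aid, a quick sanity check may help; the following sketch (assuming SymPy) verifies Theorem 1.1.2.c and Theorem 1.1.7.a symbolically for arbitrarily chosen sample fields.

```python
# A sketch: symbolic check of two of the identities above with SymPy.
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2*y + z                   # sample scalar function (made up)
F = x*y*N.i + y*z*N.j + z*x*N.k  # sample vector field (made up)

# Theorem 1.1.2.c: div(fF) = (grad f) . F + f div F
lhs = divergence(f*F)
rhs = gradient(f).dot(F) + f*divergence(F)
print(simplify(lhs - rhs))       # prints 0

# Theorem 1.1.7.a: div(curl F) = 0
print(simplify(divergence(curl(F))))  # prints 0
```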

Memory Aid. Most of the vector identities (in fact all of them except Theorem 1.1.1.e,
Theorem 1.1.3.d and Theorem 1.1.7) are really easy to guess. Just combine the conventional
linearity and product rules with the facts that
◦ if the left hand side is a vector (scalar), then the right hand side must also be a vector
(scalar) and
◦ the only valid products of two vectors are the dot and cross products and
◦ the product of a scalar with either a scalar or a vector cannot be either a dot or cross
product and
◦ A × B = −B × A. (The cross product is antisymmetric.)
For example, consider Theorem 1.1.2.c, which says ⃗∇ · ( f F) = (⃗∇ f ) · F + f ⃗∇ · F.
◦ The left hand side, ⃗∇ · ( f F), is a scalar, so the right hand side must also be a scalar.
◦ The left hand side, ⃗∇ · ( f F), is a derivative of the product of f and F, so, mimicking
the product rule, the right hand side will be a sum of two terms, one with F multiplying
a derivative of f , and one with f multiplying a derivative of F.
◦ The derivative acting on f must be ⃗∇ f , because ⃗∇ · f and ⃗∇ × f are not well-defined.
To end up with a scalar, rather than a vector, we must take the dot product of ⃗∇ f and
F. So that term is (⃗∇ f ) · F.
◦ The derivative acting on F must be either ⃗∇ · F or ⃗∇ × F. We also need to multiply by
the scalar f and end up with a scalar. So the derivative must be a scalar, i.e. ⃗∇ · F and
that term is f {⃗∇ · F}.
◦ Our final guess is ⃗∇ · ( f F) = (⃗∇ f ) · F + f ⃗∇ · F, which, thankfully, is correct.

Proof of Theorems 1.1.1, 1.1.2, 1.1.3, 1.1.6 and 1.1.7. All of the proofs (except for
those of Theorem 1.1.7.c,d, which we will return to later) consist of
◦ writing out the definition of the left hand side and
◦ writing out the definition of the right hand side and
◦ observing (possibly after a little manipulation) that they are the same.
For Theorem 1.1.1.a,b, Theorem 1.1.2.a,b, Theorem 1.1.3.a,b and Theorem 1.1.6.a,b,
the computation is trivial — one line per identity, if one uses some efficient notation.
Rename the coordinates x, y, z to x₁, x₂, x₃ and the standard unit basis vectors î, ĵ, k̂ to î₁, î₂, î₃. Then ⃗∇ = ∑ₙ₌₁³ îₙ ∂/∂xₙ and the proof of, for example, Theorem 1.1.2.a is

⃗∇ · (F + G) = ∑ₙ₌₁³ ∂/∂xₙ [ îₙ · (F + G) ]
            = ∑ₙ₌₁³ ∂/∂xₙ [ îₙ · F ] + ∑ₙ₌₁³ ∂/∂xₙ [ îₙ · G ] = ⃗∇ · F + ⃗∇ · G

For Theorem 1.1.1.c,d, Theorem 1.1.2.c, Theorem 1.1.3.c and Theorem 1.1.6.c, the
computation is easy — a few lines per identity. For example, the proof of Theorem
1.1.3.c is
⃗∇ × ( f F) = ∑ₙ₌₁³ îₙ × ∂/∂xₙ ( f F) = ∑ₙ₌₁³ îₙ × ( ∂f/∂xₙ F + f ∂F/∂xₙ )
           = ∑ₙ₌₁³ ∂f/∂xₙ ( îₙ × F ) + f ∑ₙ₌₁³ îₙ × ∂F/∂xₙ     (by Theorem ??.b)
           = (⃗∇ f ) × F + f ⃗∇ × F
The similar verifications of Theorems 1.1.1.c,d, 1.1.2.c and 1.1.6.c are left as exercises.
The latter two are parts (a) and (c) of Question PROB4prb vector identity in Section
4.1 of the CLP-4 problem book.
For Theorem 1.1.2.d, the computation is also easy if one uses the fact that
a · (b × c) = (a × b) · c
which is Lemma 1.1.1.a below. The verification of Theorem 1.1.2.d is part (b) of
Question PROB4prb vector identity in Section 4.1 of the CLP-4 problem book.

That leaves the proofs of Theorem 1.1.1.e, Theorem 1.1.3.d, Theorem 1.1.7.a,b,c,d,e,
which we write out explicitly.

Theorem 1.1.1.e:
First write out the left hand side as
⃗∇(F · G) = ∑ₙ₌₁³ îₙ ∂/∂xₙ (F · G) = ∑ₙ₌₁³ îₙ ( ∂F/∂xₙ · G ) + ∑ₙ₌₁³ îₙ ( F · ∂G/∂xₙ )

Then rewrite a × (b × c) = (c · a)b − (b · a)c, which is Lemma 1.1.1.b below, as

(c · a)b = a × (b × c) + (b · a)c

Applying it once with b = îₙ, c = ∂F/∂xₙ, a = G and once with b = îₙ, c = ∂G/∂xₙ, a = F gives

⃗∇(F · G) = ∑ₙ₌₁³ [ G × ( îₙ × ∂F/∂xₙ ) + (G · îₙ) ∂F/∂xₙ ] + ∑ₙ₌₁³ [ F × ( îₙ × ∂G/∂xₙ ) + (F · îₙ) ∂G/∂xₙ ]
         = G × (⃗∇ × F) + (G · ⃗∇)F + F × (⃗∇ × G) + (F · ⃗∇)G

Theorem 1.1.3.d:
We use the same trick. Write out the left hand side as
⃗∇ × (F × G) = ∑ₙ₌₁³ îₙ × ∂/∂xₙ (F × G) = ∑ₙ₌₁³ îₙ × ( ∂F/∂xₙ × G ) + ∑ₙ₌₁³ îₙ × ( F × ∂G/∂xₙ )

Applying a × (b × c) = (c · a)b − (b · a)c, which is Lemma 1.1.1.b below, gives

⃗∇ × (F × G) = ∑ₙ₌₁³ [ Gₙ ∂F/∂xₙ − ∂Fₙ/∂xₙ G ] + ∑ₙ₌₁³ [ ∂Gₙ/∂xₙ F − Fₙ ∂G/∂xₙ ]
            = (G · ⃗∇)F − (⃗∇ · F)G + (⃗∇ · G)F − (F · ⃗∇)G

Theorem 1.1.7.a:
Substituting in

⃗∇ × F = ( ∂F₃/∂y − ∂F₂/∂z ) î − ( ∂F₃/∂x − ∂F₁/∂z ) ĵ + ( ∂F₂/∂x − ∂F₁/∂y ) k̂

gives

⃗∇ · (⃗∇ × F) = ∂/∂x ( ∂F₃/∂y − ∂F₂/∂z ) − ∂/∂y ( ∂F₃/∂x − ∂F₁/∂z ) + ∂/∂z ( ∂F₂/∂x − ∂F₁/∂y )
            = ∂²F₃/∂x∂y − ∂²F₂/∂x∂z − ∂²F₃/∂y∂x + ∂²F₁/∂y∂z + ∂²F₂/∂z∂x − ∂²F₁/∂z∂y
            = 0

because the terms cancel in pairs: the two F₃ terms cancel, the two F₂ terms cancel and the two F₁ terms cancel, the mixed second partial derivatives being equal.

Theorem 1.1.7.b:

Substituting in

⃗∇ f = ∂f/∂x î + ∂f/∂y ĵ + ∂f/∂z k̂

gives

⃗∇ × (⃗∇ f ) = ( ∂²f/∂y∂z − ∂²f/∂z∂y ) î − ( ∂²f/∂x∂z − ∂²f/∂z∂x ) ĵ + ( ∂²f/∂x∂y − ∂²f/∂y∂x ) k̂ = 0

Theorem 1.1.7.c:
By Theorem 1.1.2.c, followed by Theorem 1.1.2.d,
⃗∇ · [ f (⃗∇g × ⃗∇h) ] = ⃗∇ f · (⃗∇g × ⃗∇h) + f ⃗∇ · (⃗∇g × ⃗∇h)
                    = ⃗∇ f · (⃗∇g × ⃗∇h) + f [ (⃗∇ × ⃗∇g) · ⃗∇h − ⃗∇g · (⃗∇ × ⃗∇h) ]

By Theorem 1.1.7.b, ⃗∇ × ⃗∇g = ⃗∇ × ⃗∇h = 0, so

⃗∇ · [ f (⃗∇g × ⃗∇h) ] = ⃗∇ f · (⃗∇g × ⃗∇h)

Theorem 1.1.7.d:
By Theorem 1.1.2.c,
⃗∇ · ( f ⃗∇g − g ⃗∇ f ) = [ (⃗∇ f ) · (⃗∇g) + f ⃗∇ · (⃗∇g) ] − [ (⃗∇g) · (⃗∇ f ) + g ⃗∇ · (⃗∇ f ) ]
                     = f ⃗∇² g − g ⃗∇² f

Theorem 1.1.7.e:
⃗∇ × (⃗∇ × F) = ( ∑ₗ₌₁³ îₗ ∂/∂xₗ ) × [ ( ∑ₘ₌₁³ îₘ ∂/∂xₘ ) × ( ∑ₙ₌₁³ îₙ Fₙ ) ]
            = ∑ₗ,ₘ,ₙ₌₁³ îₗ × ( îₘ × îₙ ) ∂²Fₙ/∂xₗ∂xₘ

Using a × (b × c) = (c · a)b − (b · a)c, we have

îₗ × ( îₘ × îₙ ) = ( îₗ · îₙ ) îₘ − ( îₗ · îₘ ) îₙ = δₗ,ₙ îₘ − δₗ,ₘ îₙ

where6

δₘ,ₙ = 1 if m = n, and δₘ,ₙ = 0 if m ≠ n

6 δₘ,ₙ is called the Kronecker delta function. It is named after the German number theorist and logician Leopold Kronecker (1823–1891). He is reputed to have said “God made the integers. All else is the work of man.”

Hence
⃗∇ × (⃗∇ × F) = ∑ₗ,ₘ,ₙ₌₁³ δₗ,ₙ îₘ ∂²Fₙ/∂xₗ∂xₘ − ∑ₗ,ₘ,ₙ₌₁³ δₗ,ₘ îₙ ∂²Fₙ/∂xₗ∂xₘ
            = ∑ₘ,ₙ₌₁³ îₘ ∂/∂xₘ ( ∂Fₙ/∂xₙ ) − ∑ₘ,ₙ₌₁³ îₙ ∂²Fₙ/∂xₘ²
            = ⃗∇(⃗∇ · F) − ⃗∇² F

Lemma 1.1.1 (a) a · (b × c) = (a × b) · c
            (b) a × (b × c) = (c · a)b − (b · a)c

Proof. (a) Here are two proofs. For the first, just write out both sides

a · (b × c) = (a₁, a₂, a₃) · (b₂c₃ − b₃c₂, b₃c₁ − b₁c₃, b₁c₂ − b₂c₁)
            = a₁b₂c₃ − a₁b₃c₂ + a₂b₃c₁ − a₂b₁c₃ + a₃b₁c₂ − a₃b₂c₁
(a × b) · c = (a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁) · (c₁, c₂, c₃)
            = a₂b₃c₁ − a₃b₂c₁ + a₃b₁c₂ − a₁b₃c₂ + a₁b₂c₃ − a₂b₁c₃

and observe that they are the same.


For the second proof, we again write out both sides, but this time we express them in
terms of determinants.

a · (b × c) = (a₁, a₂, a₃) · det[ î ĵ k̂ ; b₁ b₂ b₃ ; c₁ c₂ c₃ ]
            = a₁ det[ b₂ b₃ ; c₂ c₃ ] − a₂ det[ b₁ b₃ ; c₁ c₃ ] + a₃ det[ b₁ b₂ ; c₁ c₂ ]
            = det[ a₁ a₂ a₃ ; b₁ b₂ b₃ ; c₁ c₂ c₃ ]

(a × b) · c = det[ î ĵ k̂ ; a₁ a₂ a₃ ; b₁ b₂ b₃ ] · (c₁, c₂, c₃)
            = c₁ det[ a₂ a₃ ; b₂ b₃ ] − c₂ det[ a₁ a₃ ; b₁ b₃ ] + c₃ det[ a₁ a₂ ; b₁ b₂ ]
            = det[ c₁ c₂ c₃ ; a₁ a₂ a₃ ; b₁ b₂ b₃ ]

Exchanging two rows in a determinant changes the sign of the determinant. Moving
the top row of a 3 × 3 determinant to the bottom row requires two exchanges of rows.
So the two 3 × 3 determinants are equal.
(b) The proof is not exceptionally difficult — just write out both sides and grind.
Substituting in
b × c = (b₂c₃ − b₃c₂) î − (b₁c₃ − b₃c₁) ĵ + (b₁c₂ − b₂c₁) k̂

gives, for the left hand side,

a × (b × c) = det[ î ĵ k̂ ; a₁ a₂ a₃ ; b₂c₃ − b₃c₂  −b₁c₃ + b₃c₁  b₁c₂ − b₂c₁ ]
            = î [ a₂(b₁c₂ − b₂c₁) − a₃(−b₁c₃ + b₃c₁) ]
            − ĵ [ a₁(b₁c₂ − b₂c₁) − a₃(b₂c₃ − b₃c₂) ]
            + k̂ [ a₁(−b₁c₃ + b₃c₁) − a₂(b₂c₃ − b₃c₂) ]

On the other hand, the right hand side is

(a · c)b − (a · b)c = (a₁c₁ + a₂c₂ + a₃c₃)(b₁î + b₂ĵ + b₃k̂) − (a₁b₁ + a₂b₂ + a₃b₃)(c₁î + c₂ĵ + c₃k̂)
            = î [ a₁b₁c₁ + a₂b₁c₂ + a₃b₁c₃ − a₁b₁c₁ − a₂b₂c₁ − a₃b₃c₁ ]
            + ĵ [ a₁b₂c₁ + a₂b₂c₂ + a₃b₂c₃ − a₁b₁c₂ − a₂b₂c₂ − a₃b₃c₂ ]
            + k̂ [ a₁b₃c₁ + a₂b₃c₂ + a₃b₃c₃ − a₁b₁c₃ − a₂b₂c₃ − a₃b₃c₃ ]
            = î [ a₂b₁c₂ + a₃b₁c₃ − a₂b₂c₁ − a₃b₃c₁ ]
            + ĵ [ a₁b₂c₁ + a₃b₂c₃ − a₁b₁c₂ − a₃b₃c₂ ]
            + k̂ [ a₁b₃c₁ + a₂b₃c₂ − a₁b₁c₃ − a₂b₂c₃ ]
The last formula that we had for the left hand side is the same as the last formula we
had for the right hand side. ■
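Since Lemma 1.1.1 is used repeatedly above, here is a quick numerical spot-check (a sketch with NumPy; the three vectors are arbitrary, and a spot-check is of course not a proof):

```python
# A sketch: numerical spot-check of Lemma 1.1.1 with NumPy.
import numpy as np

a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0])
c = np.array([-2.0, 1.0, 5.0])

# (a) a . (b x c) = (a x b) . c
print(np.dot(a, np.cross(b, c)), np.dot(np.cross(a, b), c))  # equal values

# (b) a x (b x c) = (c . a) b - (b . a) c
print(np.cross(a, np.cross(b, c)))
print(np.dot(c, a)*b - np.dot(b, a)*c)                       # same vector
```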
Example (Screening tests). We have seen the vector identity Theorem 1.1.7.b before. It says
that if a vector field F is of the form F = ⃗∇φ for some function φ (that is, if F
is conservative), then

⃗∇ × F = ⃗∇ × (⃗∇φ) = 0

Conversely, we have also seen, in Theorem ??, that, if F is defined and has continuous
first order partial derivatives on all of R3 , and if ⃗∇ × F = 0, then F is conservative.
The vector identity Theorem 1.1.7.b is our screening test for conservativeness.
Because its right hand side is zero, the vector identity Theorem 1.1.7.a is suggestive.
It says that if a vector field F is of the form F = ⃗∇ × A for some vector field A,
then

⃗∇ · F = ⃗∇ · (⃗∇ × A) = 0

When F = ⃗∇ × A, A is called a vector potential for F. We shall see in Theorem
??, below, that, conversely, if F(x) is defined and has continuous first order partial
derivatives on all of R³, and if ⃗∇ · F = 0, then F has a vector potential7. The vector
identity Theorem 1.1.7.a is indeed another screening test.
As an example, consider Maxwell’s equations

⃗∇ · B = 0
⃗∇ × E + (1/c) ∂B/∂t = 0
that we saw in Example 1.1. The first equation implies that (assuming B is sufficiently
smooth) there is a vector field A, called the magnetic potential, with B = ⃗∇ × A.
Substituting this into the second equation gives

0 = ⃗∇ × E + (1/c) ∂/∂t (⃗∇ × A) = ⃗∇ × ( E + (1/c) ∂A/∂t )

So E + (1/c) ∂A/∂t passes the screening test of Theorem 1.1.7.b and there is a function φ
(called the electric potential) with

E + (1/c) ∂A/∂t = −⃗∇φ
We have put in the minus sign just to provide compatibility with the usual physics
terminology.
Problem: Let r(x, y, z) = x î + y ĵ + z k̂ and let ψ(x, y, z) be an arbitrary function.
Verify that

⃗∇ · ( r × ⃗∇ψ ) = 0


Solution. By the vector identity Theorem 1.1.2.d,

⃗∇ · ( r × ⃗∇ψ ) = (⃗∇ × r) · ⃗∇ψ − r · [ ⃗∇ × (⃗∇ψ) ]

By the vector identity Theorem 1.1.7.b, the second term is zero. Now since

⃗∇ × r = ( ∂z/∂y − ∂y/∂z ) î − ( ∂z/∂x − ∂x/∂z ) ĵ + ( ∂y/∂x − ∂x/∂y ) k̂ = 0

the first term is also zero. Indeed ⃗∇ · ( r × ⃗∇ψ ) = 0 holds for any curl-free r(x, y, z).
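For a concrete check of this problem, here is a sketch (assuming SymPy) with an arbitrarily chosen ψ:

```python
# A sketch: checking div(r x grad(psi)) = 0 for a concrete psi.
from sympy import simplify, sin
from sympy.vector import CoordSys3D, gradient, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

r = x*N.i + y*N.j + z*N.k        # the position field r of the problem
psi = x*y**2 + sin(z)            # an arbitrary test function (made up)

print(simplify(divergence(r.cross(gradient(psi)))))  # prints 0
```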


[Optional] Problem: Let r(x, y, z) = x î + y ĵ + z k̂ and let ψ(x, y, z) be an arbitrary
function. Verify that

⃗∇²( r × ⃗∇ψ ) = r × ⃗∇²(⃗∇ψ)


7 Does this remind you of Theorem ??? It should.



Solution. By the vector identity Theorem 1.1.7.e with F = r × ⃗∇ψ,

⃗∇²( r × ⃗∇ψ ) = ⃗∇ [ ⃗∇ · ( r × ⃗∇ψ ) ] − ⃗∇ × [ ⃗∇ × ( r × ⃗∇ψ ) ]

The first term vanishes by Example 1.1.1. By the vector identity Theorem 1.1.3.d,
with F = r and G = ⃗∇ψ, the second term, including the minus sign, is

−⃗∇ × [ ⃗∇ × ( r × ⃗∇ψ ) ] = −⃗∇ × [ r (⃗∇ · ⃗∇ψ) − (⃗∇ · r) ⃗∇ψ + ( (⃗∇ψ) · ⃗∇ ) r − ( r · ⃗∇ )(⃗∇ψ) ]

INCOMPLETE. Contact Dr. T. Ghosh for the complete solution.
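Although the derivation is left incomplete above, here is a concrete spot-check (a sketch, assuming SymPy) of the stated identity for one arbitrarily chosen ψ; it is not the complete solution, only evidence that the identity holds.

```python
# A sketch: spot-checking Laplacian(r x grad(psi)) = r x Laplacian(grad(psi)).
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

def vec_laplacian(V):
    # Componentwise Laplacian of a vector field, via div(grad(component)).
    lap = lambda s: divergence(gradient(s))
    return lap(V.dot(N.i))*N.i + lap(V.dot(N.j))*N.j + lap(V.dot(N.k))*N.k

r = x*N.i + y*N.j + z*N.k
psi = x**2*y*z + z**3            # an arbitrary test function (made up)

lhs = vec_laplacian(r.cross(gradient(psi)))
rhs = r.cross(vec_laplacian(gradient(psi)))
for e in (N.i, N.j, N.k):        # each component of lhs - rhs should be 0
    print(simplify((lhs - rhs).dot(e)))
```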


Let f (x, y, z) be a function, F(x, y, z) be a vector field, and C be a constant. Let
(x0 , y0 , z0 ) be a point on the level surface f (x, y, z) = C. We shall now verify that the
components of F(x, y, z) normal to and tangential to f (x, y, z) = C at (x0 , y0 , z0 ) are
(F · ⃗∇ f ) ⃗∇ f / |⃗∇ f |² and ⃗∇ f × ( F × ⃗∇ f ) / |⃗∇ f |², evaluated at (x₀, y₀, z₀), respectively.
Since ⃗∇ f is normal to the surface f = C, it is obvious that
◦ (F · ⃗∇ f ) ⃗∇ f / |⃗∇ f |² is normal to f = C and
◦ ⃗∇ f × ( F × ⃗∇ f ) / |⃗∇ f |² is perpendicular to ⃗∇ f and hence tangential to f = C, so that
◦ it suffices to show that (F · ⃗∇ f ) ⃗∇ f / |⃗∇ f |² and ⃗∇ f × ( F × ⃗∇ f ) / |⃗∇ f |² add up to F.
By Lemma 1.1.1.b,

(F · ⃗∇ f ) ⃗∇ f / |⃗∇ f |² + ⃗∇ f × ( F × ⃗∇ f ) / |⃗∇ f |²
    = (F · ⃗∇ f ) ⃗∇ f / |⃗∇ f |² + [ |⃗∇ f |² F − (⃗∇ f · F) ⃗∇ f ] / |⃗∇ f |²
    = F

as desired.
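The following numerical sketch (NumPy; the values of ⃗∇ f and F at the point are made up) spot-checks this decomposition:

```python
# A sketch: normal/tangential decomposition of F relative to grad f.
import numpy as np

grad_f = np.array([1.0, 2.0, 2.0])   # stand-in for grad f at (x0, y0, z0)
F = np.array([3.0, -1.0, 4.0])       # stand-in for F at the same point

n2 = np.dot(grad_f, grad_f)          # |grad f|^2
normal = np.dot(F, grad_f) * grad_f / n2
tangential = np.cross(grad_f, np.cross(F, grad_f)) / n2

print(normal + tangential)           # recovers F
print(np.dot(tangential, grad_f))    # 0: tangential part is perpendicular to grad f
```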

1.1.2 Vector Potentials


We’ll now further explore the vector potentials that were introduced in Example 1.1.1.
First, here is the formal definition.
The vector field A is said to be a vector potential for the vector field B if

B = ⃗∇ × A

As we saw in Example 1.1.1, if a vector field B has a vector potential, then the vector
identity Theorem 1.1.7.a implies that ⃗∇ · B = 0. This fact deserves to be called a
theorem.
Theorem 1.1.8 — Screening test for vector potentials. If there exists a vector
potential for the vector field B, then
⃗∇ · B = 0

Of course, we’ll consider the converse soon. Also note that the vector potential, when
it exists, is far from unique. Two vector fields A and à are both vector potentials for
the same vector field if and only if
⃗∇ × A = ⃗∇ × Ã ⇐⇒ ⃗∇ × (A − Ã) = 0

That is, if and only if the difference A − Ã passes the conservative field screening test
of Theorems ?? and ??. In particular, if A is one vector potential for a vector field B
(i.e. if B = ⃗∇ × A), and if ψ is any function, then

⃗∇ × (A + ⃗∇ψ) = ⃗∇ × A + ⃗∇ × ⃗∇ψ = B

by the vector identity Theorem 1.1.7.b. That is, A + ⃗∇ψ is another vector potential
for B.
To simplify computations, we can always choose ψ so that, for example, the third
component of A + ⃗∇ψ, namely (A + ⃗∇ψ) · k̂ = A₃ + ∂ψ/∂z, is zero: just choose
ψ = −∫ A₃ dz. We have just proven that if the vector field B has a vector potential, then,
in particular, there is a vector potential A for B with8 A₃ = 0.
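Here is a sketch (assuming SymPy) of that gauge choice in action; the starting potential A is made up for illustration:

```python
# A sketch: from one vector potential A, build another with zero third
# component by subtracting grad(psi) with psi = the integral of A3 dz.
from sympy import integrate
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

A = -y*N.i + x*N.j + x*y*N.k     # one vector potential (made up)
psi = -integrate(A.dot(N.k), z)  # psi = -int A3 dz = -x*y*z here
A_new = A + gradient(psi)

print(A_new.dot(N.k))            # 0: the third component is gone
print(curl(A_new) - curl(A))     # the zero vector: both give the same B
```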

Wish you all the best, Tanay Ghosh


8 There is nothing special about the subscript 3 here. By precisely the same argument, we could come up with another vector potential whose second component is zero, and with a third vector potential whose first component is zero.
