Book
— a Hands-On Course
Preface
This book grew out of a series of lectures the first author, professor
of geodesy at Aalto University’s Department of the Built Environment,
held as a guest lecturer at Bahir Dar University’s Institute of Land
Administration in Ethiopia. The intensive course, delivered full-time over two weeks, was intended to give the students hands-on experience using digital aerial photogrammetry software in the service of mapping the country for economic development. We did so using two different digital-photogrammetry software packages: the commercial package ERDAS Imagine™, which contains LPS, the Leica Photogrammetry Suite; and
the open-source package e-foto, which was developed in an academic
environment in Brazil. The hardware, fifteen digital photogrammetry
workstations with ERDAS pre-installed, was an earlier donation by SIDA,
the Swedish development co-operation agency.
Ethiopia is hardly the only country that needs, for its development, modern map-making technology and personnel who understand its operation and use. This is the motive for developing the lecturing material into a book, aimed at giving the reader everything (short of a laptop) needed to get started.
Acknowledgements
Contents
Chapters
» 1. Introduction
» 2. Aerial photogrammetry and its history
» 3. Aerial photography and mapping
» 4. Camera types and camera calibration
» 5. Flight plan
» 6. Setting up an e-foto project file
» 7. Interior orientation
» 8. Exterior orientation
» 9. Aerotriangulation
» 10. Stereo restitution
» 11. Measuring in aerial images
» 12. Digital Elevation Models
» 13. Orthophoto mapping
» 14. Applications
» 15. Datums in geodesy and photogrammetry
» 16. Advanced subjects
» A. Rotation matrices
Preface
List of Tables
1. Introduction
1.1 Learning outcomes
1.2 Imagery provided
1.3 Supporting material provided
1.4 Software provided
1.5 Background materials
5. Flight plan
5.1 Problem description
5.2 Example
5.3 Practicalities
6.6 Ground control points
6.7 E-foto, reference systems and map projections
6.8 Concluding remarks
7. Interior orientation
7.1 Theory
7.2 Execution
7.3 Interior orientation with e-foto
7.4 Debugging interior orientation
8. Exterior orientation
8.1 Theory
8.2 Observation equations
8.3 The rotation matrix
8.4 Linearization
8.5 Computing the partials
8.6 Full observation equations
8.7 Initial approximate values for exterior orientation parameters
8.8 Exterior orientation using e-foto
8.9 Doing exterior orientation
8.10 Completing exterior orientation
8.11 About geometric strength
8.12 Ground control points
9. Aerotriangulation
9.1 Description
9.2 Observation equations
9.3 Tie points
9.4 Aerotriangulation in e-foto
9.5 Debugging an aerotriangulation adjustment
14.4 Maps in natural resource management
14.5 Water resources, hydro power, height systems
Bibliography
Index
List of Figures
6.6 Flight data entry form.
6.7 Flight geometry.
6.8 Project manager of e-foto.
6.9 Project entry form for an image.
6.10 A filled-out Points tab for the Rio de Janeiro project.
6.11 Ground control point list for the Finnish project.
12.16 Saving the digital surface model file.
List of Tables
1. Introduction
Ever since man has studied the precise figure of the Earth and the lands
on her surface, he has dreamt of rising above the Earth in order to
see in a single view what otherwise can only be inferred by laborious
measurements and calculations. The invention of aviation — first by
lighter-than-air airships, later by motorized aircraft — has made this
dream a reality. During the last century, aerial mapping and aerial measurement — photogrammetry, the science and art of performing precise geodetic measurements from photographic images — have played a critical role in mapping countries aspiring to economic progress and prosperity.
Much of the equipment needed in photogrammetry has until recently been
highly specialized, difficult and expensive to produce. This continues to
be the case for the aerial vehicles used, although there, the development
of small and inexpensive unmanned aerial vehicles (UAV) is opening up
new perspectives. In processing the aerial imagery obtained, however, new opportunities have been created by the digitization of the photogrammetric tool chain.
This booklet offers an introduction to digital aerial photogrammetry
based on freely-available software running on ordinary personal comput-
ers. It is intended to be suitable for self-study, but may also be part of
an academic curriculum. In addition to this text, an Intel-based 64-bit
personal computer or laptop, and the Open Source software intended
to be used with the text, only a pair of anaglyphic glasses (red-cyan) is
needed.
Using aerial photogrammetry for constructing orthophoto maps is mis-
sion critical for the development of a cadastral system, as well as in
support of large infrastructural construction works, in many rapidly in-
dustrializing and urbanizing economies. Also zoning, the regulation of
Imagery provided
They will leave the course with a broader understanding of the history
of photogrammetry, its various applications (cadastral, base mapping,
construction support, zoning, etc.) and technologies (traditional, laser
scanning, space data, etc.).
The total course load is estimated at 80 hours, or 3 ECTS credits.
Figure 1.2. The Finnish Ringroad II, Helsinki area, imagery (5).
(c) The camera calibration data for Bahir Dar is included in this
text, table 4.4.
Software provided
maps.
(c) For Bahir Dar: GCP-bahir-dar.txt. The list for Bahir Dar is
also given in table 6.1.
2. Aerial photogrammetry and its history
1 Camera obscura, Latin for “dark room”, an ancient device for capturing perspective images.
Historical overview
Figure 2.1. Aerial photography. Images are colour coded. The dashed line is the aircraft path connecting the image exposure locations — i.e., the locations in space where the camera optical centre was when the image was exposed — obtaining two strips of two images each. Each terrain point will show in at least two images.
2 Willem Schermerhorn (1894 – 1977) was the father of Dutch (and Dutch Indies!) photogrammetry, internationally renowned. He also served as Dutch prime minister.
3 Artillery General Vilho Nenonen (1883 – 1960) invented, together with academician Yrjö Väisälä, the horizon camera bearing his name, a photogrammetric camera capturing not only the terrain underneath but also the horizon; a first attempt at in-flight camera orientation determination, well predating current inertial measurement units.
4 Yrjö Väisälä (1891 – 1971) was an academician, astronomer, physicist and universal genius who also contributed to Finnish photogrammetry.
Figure 2.2. An array of Multiplex devices. Photo © The National Map, U.S. Geological Survey.
aerial photogrammetry.
Before the computer era, the processing of aerial imagery was done
in large, expensive optomechanical devices; analogue computers, as it
were. The era of the personal computer has relegated all of this to
history, the stuff of science museums; in a way this is a pity, as this
application stretched the use of optics and fine mechanics to the limit
— and sometimes beyond it. But today, anybody with a €500 laptop, €0.50 stereo glasses, smart software (which may be free!) and a little perseverance and willingness to learn can become the operator of a
designed to help you make this journey.
Theoretical preliminaries
Figure 2.3. A traditional stereo plotter, now on display, in Japan. Note the two big hand wheels and the foot wheel for moving the “floating mark” horizontally and vertically, respectively. Photo © National Geospatial Technical Operations Center, U.S. Geological Survey. http://ngtoc.usgs.gov/images/kari-japan07.jpg.
[Figure 2.4. Photogrammetric workflow, simplified. Blue, input data; black, processes; red, outputs. Stages: extract image co-ordinates, interior orientation, measure tie points, exterior orientation, bundle block adjustment, compute terrain co-ordinates, digital terrain model, orthorectification, stereo feature extraction, orthophoto map.]
variance-covariance, or variance, matrix:
$$ \operatorname{Var}\{\ell\} = \Sigma_{\ell\ell} = \begin{bmatrix} \sigma_1^2 & & & \\ & \sigma_2^2 & & \\ & & \ddots & \\ & & & \sigma_n^2 \end{bmatrix}. $$
This matrix is very often diagonal, meaning that the different obser-
vations ℓ i are assumed statistically independent of each other. Each
diagonal element represents the square of the standard error of the corresponding observation: e.g., if the standard error of observation ℓ_i is ±0.01 mm, then σ_i² = (0.01 mm)² = 10⁻⁴ mm².
Very often the standard errors of all the observations ℓ_i are similar, and are all very small numbers. In this case it is advantageous to write
$$ Q_{\ell\ell} = I_n $$
$$ \ell + v = F(\hat{x}), \qquad (2.1) $$
in which
◦ ℓ is the vector of observations as discussed,
◦ x̂ is the vector of least-squares estimates of the unknowns x. The
“hat” symbol is generally used to indicate statistical estimates,
◦ v is the vector of residuals, i.e., the difference between the actually
observed values ℓ and the “theoretical observed values” ℓ̂ = F (x̂)
Ó » .î á
14 A ERIAL PHOTOGRAMMETRY AND ITS HISTORY
Here we see the inverse of the weight coefficient matrix, called the weight matrix:
$$ P = Q_{\ell\ell}^{-1}. $$
Use of this form considers that the different observations ℓ i may have dif-
ferent uncertainties, and therefore should be assigned different weights
in the least-squares calculation: the greater the uncertainty of an obser-
vation, the lower its weight should be.
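As a small numeric sketch of the diagonal case (the standard errors are hypothetical, not from the text):

```python
# Diagonal Sigma_ll, Q_ll and weight matrix P = Q_ll^(-1) for independent
# observations; standard errors in micrometres to keep the numbers exact.
sigmas_um = [10.0, 10.0, 20.0]            # standard errors of ell_1..ell_3
variances = [s * s for s in sigmas_um]    # diagonal of Sigma_ll
sigma0_sq = 100.0                         # variance of unit weight, sigma_0^2
q = [v / sigma0_sq for v in variances]    # diagonal of Q_ll
p = [1.0 / qi for qi in q]                # diagonal of P: the noisier third
print(p)                                  # observation gets a lower weight
```

The third observation, twice as uncertain, receives a quarter of the weight, exactly as the inverse-variance rule requires.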
About the vector of unknowns x, it typically contains such things as
1. the location co-ordinates and orientation angles of the camera at
the moment each image was taken
2. the location co-ordinates of any auxiliary points that are visible
in multiple images, so-called tie points, chosen and measured in
the images to strengthen the geometric interconnection between
images
3. the location co-ordinates of any other interesting but unknown
points appearing in two or more images.
2.4.3 Linearization
The general form of the observation equations, eq. 2.1, is non-linear, and
no ready methods exist for obtaining the least-squares solution x̂ from
a given ℓ, Q ℓℓ . Before use in actual computations we have to linearize
these equations, which is done as follows.
[Figure: linearization of a function y = f(x) around the approximate value x⁽⁰⁾: within the linearization interval, y ≈ y⁽⁰⁾ + a (x − x⁽⁰⁾).]
in which
$$ J_F = \begin{bmatrix} \dfrac{\partial F_1}{\partial x_1} & \dfrac{\partial F_1}{\partial x_2} & \cdots & \dfrac{\partial F_1}{\partial x_n} \\ \dfrac{\partial F_2}{\partial x_1} & \dfrac{\partial F_2}{\partial x_2} & \cdots & \dfrac{\partial F_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial F_n}{\partial x_1} & \dfrac{\partial F_n}{\partial x_2} & \cdots & \dfrac{\partial F_n}{\partial x_n} \end{bmatrix} $$
$$ \Delta\ell = \ell - \ell^{(0)}, \qquad \Delta\hat{x} = \hat{x} - x^{(0)}, \qquad A = J_F\!\left(x^{(0)}\right), $$
this becomes
$$ \Delta\ell + v = A\,\Delta\hat{x}, \qquad (2.4) $$
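As a sketch of how the linearized observation equations are used in practice, the following pure-Python example iterates eq. 2.4 on a small hypothetical problem, locating a point from distances to three known stations; all coordinate values and the unit-weight assumption are illustrative, not from the text:

```python
import math

# Hypothetical example: unknowns x = (x, y) of a point, observations ell =
# distances to three known stations; F is nonlinear, so we iterate eq. 2.4.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = (30.0, 40.0)  # used only to simulate the observed values
ell = [math.hypot(truth[0] - sx, truth[1] - sy) for sx, sy in stations]

def F(x):
    """The nonlinear model: predicted distances F(x)."""
    return [math.hypot(x[0] - sx, x[1] - sy) for sx, sy in stations]

def jacobian(x):
    """Design matrix A = J_F(x): partials of each distance w.r.t. (x, y)."""
    return [[(x[0] - sx) / d, (x[1] - sy) / d]
            for (sx, sy), d in zip(stations, F(x))]

x = [10.0, 10.0]  # initial approximate values x(0)
for _ in range(10):
    dl = [l - f for l, f in zip(ell, F(x))]  # Delta-ell = ell - F(x(0))
    A = jacobian(x)
    # Normal equations (A^T A) Delta-x = A^T Delta-ell, unit weights P = I,
    # solved directly as a 2-by-2 system.
    n11 = sum(a[0] * a[0] for a in A)
    n12 = sum(a[0] * a[1] for a in A)
    n22 = sum(a[1] * a[1] for a in A)
    b1 = sum(a[0] * d for a, d in zip(A, dl))
    b2 = sum(a[1] * d for a, d in zip(A, dl))
    det = n11 * n22 - n12 * n12
    x = [x[0] + (n22 * b1 - n12 * b2) / det,
         x[1] + (n11 * b2 - n12 * b1) / det]

print(x)  # converges to the simulated true point
```

Each pass recomputes the approximate values, so the linearization point improves until the correction Δx̂ becomes negligible; this is the same iteration structure used in exterior orientation and aerotriangulation, only with many more unknowns.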
2.5 Content of this text
This tutorial is accompanied by hands-on exercises using the software
e-foto, an open-source digital photogrammetric work station package
developed in Brazil (Brito et al., 2011). This software is more limited
in scope than commercially available packages, but allows many useful
photogrammetric workflows. It has the merit of being free and thus
accessible to many. Also, it is simpler in structure and less automated,
making it more suitable for teaching and self-study. Finally, it is available
for other operating systems besides Windows™.
Somewhat complementary to this are open-source software packages
for remote sensing and geographic information systems (GIS) use. An
example of the latter is Quantum GIS (Linifiniti Consulting CC, 2008,
maintained)5 , also called QGIS. This open-source package is more like a
framework connecting other open-source products for more specific tasks.
It is very extensive and unfortunately rather tricky to install due to its
many dependencies. Like the commercial package ERDAS Imagine™6, it integrates a geo-referencing solution, which however is based on the open-source library package proj4 (http://trac.osgeo.org/proj/), which may also be used separately. This library, in turn, is based on the global compilation of reference systems — thousands of them! — originally maintained by the European Petroleum Survey Group (EPSG) and found at http://www.spatialreference.org/.
Further useful open-source packages are GDAL, which converts a great
many raster and vector image formats used in remote sensing and ge-
ographic information systems into each other, and Monteverdi/OTB, a
package for processing remote-sensing imagery, especially from satellites.
5 This excellent open-access training manual was originally developed in South Africa.
6 A product of Intergraph™.
7 When self-compiling, the source tree will automatically compile a 64-bit version on
64-bit Linux.
Introduction to e-foto
$ e-foto
[Figure: e-foto module workflow. Start e-foto; decision New? (Y/N); decision Digital? (Y/N); modules: Interior orientation, Exterior orientation, Phototriangulation, DEM extraction, Stereo plotting; output: 3-D co-ordinates.]
chapter.
3. Aerial photography and mapping
3.1 Atmospheric reduction
If aerial imaging is done from a significant height above the ground, atmospheric refraction also needs to be modelled. For vertical imaging, the correction model used will be symmetric around the vertical axis: a standard-atmosphere model may be used, which will be quite good enough in view of the smallness of this correction.
The refractive index n of air can be obtained from any of a number
of models called “standard atmospheres”. For our purpose, correcting
vertical photogrammetry, we may use an approximate expression.
We assume the atmosphere to be horizontally stratified, figure 3.2, and
apply Snell1 ’s law, figure 3.1. The stratification can be approximated by a
sequence of discrete steps in refractive index n, see figure 3.2. Then we
[Figure 3.1. Refraction at a refractive-index interface: incoming ray at angle α′, refracted ray at angle α, direction change δα; the points Q, R, S, T and the apparent uplift are as used in the derivation below.]
1 Willebrord Snell van Rooyen, “Snellius”, 1580 – 1626, Dutch astronomer, physicist,
mathematician and geodesist.
[Figure 3.2. Stratified atmosphere: camera at flight height Z₀, refractive-index interface at height Z′, terrain at height Z, refractive index n.]
have, with Snell’s law for a single refractive-index interface, i.e., a step change in refractive index by a ratio of n:
$$ \frac{PQ}{PR} = \frac{PS/\cos\alpha}{PT\cos\alpha} = \frac{PS}{PT\cos^2\alpha} = \frac{PT - QT\cos\delta\alpha}{PT\cos^2\alpha} \approx \frac{PT - QT}{PT\cos^2\alpha}. $$
Here we have assumed that the change in direction δα is small compared to the ray direction α itself.
Snell’s law gives
$$ \frac{n}{n'} = \frac{\sin\alpha'}{\sin\alpha} = \frac{RT/QT}{RT/PT} = \frac{PT}{QT}, $$
So
$$ \frac{PQ}{PR} = \frac{PT - \dfrac{n'}{n}\,PT}{PT\cos^2\alpha} = \frac{1 - \dfrac{n'}{n}}{\cos^2\alpha} $$
$$ \Longrightarrow\quad PQ(\alpha) = PR\,\frac{1 - \dfrac{n'}{n}}{\cos^2\alpha} = PR\,\frac{n - n'}{n\cos^2\alpha} \approx PR\,\frac{n - n'}{\cos^2\alpha} = -PR\,\frac{\Delta n}{\cos^2\alpha}, $$
for n close to unity. In the small-angle limit for α we have
$$ PQ(0) = -\Delta n\; PR \qquad (3.1) $$
and
$$ PQ(\alpha) = \frac{PQ(0)}{\cos^2\alpha}. \qquad (3.2) $$
This is the apparent uplift of a terrain point, expressed in the height
difference PR between terrain and refractive-index interface, produced
by a refractive-index increment at this interface. In the following we write
this height difference as PR = Z ′ − Z .
Now, for a number of interfaces k = 1, …, K, expression 3.1 should be summed to obtain the total uplift:
$$ PQ_{\mathrm{tot}}(0) = -\sum_{k=1}^{K} PR_k\,\Delta n_k, $$
and in the limit for a continuous refractive-index gradient, this becomes an integral:
$$ PQ_{\mathrm{tot}}(0) = -\int_{n(0)}^{n(Z_0)} PR(n)\, dn = -\int_0^{Z_0} PR(Z')\,\frac{dn(Z')}{dZ'}\, dZ'. $$
and this is the integral we have to evaluate all the way through the atmosphere²:
$$ \int \underbrace{(Z' - Z)}_{u}\,\underbrace{\frac{C}{\tau}\exp\left(-\frac{Z'}{\tau}\right)}_{v'} dZ' = \underbrace{(Z' - Z)}_{u}\underbrace{\left[-C\exp\left(-\frac{Z'}{\tau}\right)\right]}_{v} - \int \underbrace{1}_{u'}\cdot\underbrace{\left[-C\exp\left(-\frac{Z'}{\tau}\right)\right]}_{v} dZ' = $$
$$ = -(Z' - Z)\,C\exp\left(-\frac{Z'}{\tau}\right) - \tau\,C\exp\left(-\frac{Z'}{\tau}\right) = -\left[(Z' - Z) + \tau\right] C\exp\left(-\frac{Z'}{\tau}\right) $$
2 Consider that a homogeneous air density (the same everywhere as on the camera
height) wouldn’t have any effect at all on the imaging geometry. It’s the increase in air
density downward from the camera that causes refraction.
using integration by parts, annotated in the u, v formalism: ∫ u v′ dx = [uv] − ∫ u′ v dx. C is a still unknown constant. Now from sea level Z = 0 to outer space Z₀ = ∞ this becomes
$$ I_0^{\infty}(0) = \int_0^{\infty} (Z' - 0)\,\frac{C}{\tau}\exp\left(-\frac{Z'}{\tau}\right) dZ' = C\tau, $$
and we know this, the total atmospheric delay in zenith for visible light from sea level to space, to be approximately δZ₀ ≈ 2.4 m. It follows that C = δZ₀ · τ⁻¹.
Now the same integral, not from sea level to outer space, but from terrain level Z to flight level Z₀, will be given by
$$ I_Z^{Z_0}(0) = \int_Z^{Z_0} PQ(0)\, dZ' = \int_Z^{Z_0} (Z' - Z)\,\frac{C}{\tau}\exp\left(-\frac{Z'}{\tau}\right) dZ' = $$
$$ = \left[-\left((Z' - Z) + \tau\right) C\exp\left(-\frac{Z'}{\tau}\right)\right]_Z^{Z_0} = $$
$$ = \delta Z_0 \left[\exp\left(-\frac{Z}{\tau}\right) - \left(\frac{Z_0 - Z}{\tau} + 1\right)\exp\left(-\frac{Z_0}{\tau}\right)\right]. $$
Now the radial offset in the image becomes, introducing again the dependence on α of the apparent uplift found above, equation 3.2 (the factor cos α/(Z₀ − Z) is the inverse slant distance, the factor sin α the foreshortening):
$$ \delta\alpha_{\mathrm{atm}} = \frac{\cos\alpha}{Z_0 - Z}\cdot\sin\alpha\cdot I_Z^{Z_0}(\alpha) = \frac{\sin\alpha\cos\alpha}{Z_0 - Z}\cdot\frac{\delta Z_0}{\cos^2\alpha}\left(\exp\left(-\frac{Z}{\tau}\right) - \left(\frac{Z_0 - Z}{\tau} + 1\right)\exp\left(-\frac{Z_0}{\tau}\right)\right) $$
$$ \Longrightarrow\quad \delta r_{\mathrm{atm}} = \frac{c}{\cos^2\alpha}\,\delta\alpha_{\mathrm{atm}} = \frac{c\tan\alpha}{\cos^2\alpha}\cdot\frac{\delta Z_0}{Z_0 - Z}\left(\cdots\right) = \frac{r}{\cos^2\alpha}\cdot\frac{\delta Z_0}{Z_0 - Z}\left(\exp\left(-\frac{Z}{\tau}\right) - \left(\frac{Z_0 - Z}{\tau} + 1\right)\exp\left(-\frac{Z_0}{\tau}\right)\right). \qquad (3.3) $$
Here, c is the calibrated focal length of the camera, and r = c tan α the radial distance of the image point from the principal point. We may use, e.g., τ = 8.4 km and δZ₀ = 2.4 m. These values may be adjusted to better correspond to known local conditions during imaging.
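Equation 3.3 is straightforward to evaluate numerically. The sketch below uses the τ and δZ₀ values just quoted; the camera constant and image radius in the example call are hypothetical:

```python
import math

def delta_r_atm(r_mm, c_mm, Z, Z0, tau=8400.0, dZ0=2.4):
    """Radial image offset (mm) due to atmospheric refraction, eq. 3.3.

    r_mm: radial image distance, c_mm: calibrated focal length (both mm);
    Z, Z0: terrain and flight heights in metres; tau, dZ0 as in the text.
    """
    tan_a = r_mm / c_mm                    # r = c tan(alpha)
    cos2_a = 1.0 / (1.0 + tan_a ** 2)      # cos^2(alpha)
    bracket = (math.exp(-Z / tau)
               - ((Z0 - Z) / tau + 1.0) * math.exp(-Z0 / tau))
    return r_mm / cos2_a * dZ0 / (Z0 - Z) * bracket

# Example: c = 152 mm, image point at r = 100 mm, sea-level terrain,
# flying height 5000 m; the offset comes out at several micrometres.
offset_mm = delta_r_atm(100.0, 152.0, 0.0, 5000.0)
print(f"{offset_mm * 1000:.1f} micrometres")
```

At typical flying heights the offset is of the order of micrometres in the image plane, small but not negligible against a 5 µm pixel.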
It is not clear from looking at it, but the right-hand side in eq. 3.3 is
to first order linear in Z0 − Z and vanishes at Z0 − Z = 0. This is most
easily seen by noting that the expression in square brackets, and its
first derivative, vanish at Z0 = Z , making it a to first order quadratic
expression in Z₀ − Z there:
$$ \frac{d}{dZ_0}\left[\exp\left(-\frac{Z}{\tau}\right) - \left(\frac{Z_0 - Z}{\tau} + 1\right)\exp\left(-\frac{Z_0}{\tau}\right)\right]_{Z_0 = Z} = $$
$$ = \left[-\frac{1}{\tau}\exp\left(-\frac{Z_0}{\tau}\right) + \frac{Z_0 - Z}{\tau^2}\exp\left(-\frac{Z_0}{\tau}\right) + \frac{1}{\tau}\exp\left(-\frac{Z_0}{\tau}\right)\right]_{Z_0 = Z} = 0. $$
An alternative is the formula given in Kraus:
$$ \delta r_{\mathrm{atm}} = \frac{r}{\cos^2\alpha}\,K, $$
with
$$ K = \frac{0.00241}{Z_0}\left(\frac{Z_0^2}{Z_0^2 - 6 Z_0 + 250} - \frac{Z^2}{Z^2 - 6 Z + 250}\right). \qquad (3.4) $$
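Eq. 3.4 can be coded in a few lines as well. The magnitudes of the constants suggest that the heights are to be entered in kilometres; that unit assumption, and the example heights, should be checked against the original source:

```python
def kraus_K(Z_km, Z0_km):
    """Correction factor K of eq. 3.4; heights assumed to be in km."""
    return 0.00241 / Z0_km * (
        Z0_km ** 2 / (Z0_km ** 2 - 6.0 * Z0_km + 250.0)
        - Z_km ** 2 / (Z_km ** 2 - 6.0 * Z_km + 250.0))

# Sea-level terrain, flying height 5 km:
K = kraus_K(0.0, 5.0)
print(K)  # the radial offset is then delta_r = r * K / cos^2(alpha)
```

For r = 100 mm this K gives an offset of a few micrometres, the same order of magnitude as the exponential model of eq. 3.3.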
Some theoretical background to this is given in Gyer (1996)3 .
In figure 3.3 we compare our own simple, horizontally stratified ex-
ponential model above with the model of Gyer (1996) — which gives
tabulated values that were computed based upon the Standard Atmo-
sphere 1976 (Public Domain Aeronautical Software, 2014) and spherical
3 Gyer gives an approximate formula, equation 24, for a horizontally stratified atmosphere (in our notation):
$$ \delta\alpha_{\mathrm{atm}} = \frac{\tan\alpha}{Z_0 - Z}\int_Z^{Z_0} \frac{\left(n(Z') - n(Z_0)\right)\left(n(Z') + n(Z_0)\right)}{2\,n^2(Z_0)}\, dZ' \approx \frac{\tan\alpha}{Z_0 - Z}\int_Z^{Z_0} \left(n(Z') - n(Z_0)\right) dZ'. $$
With the exponential model
$$ n(Z') - 1 = \frac{\delta Z_0}{\tau}\exp\left(-\frac{Z'}{\tau}\right) $$
we obtain for the integral above
$$ \int_Z^{Z_0}\left(n(Z') - n(Z_0)\right) dZ' = \delta Z_0\left(\left[-\exp\left(-\frac{Z'}{\tau}\right)\right]_Z^{Z_0} - \frac{Z_0 - Z}{\tau}\exp\left(-\frac{Z_0}{\tau}\right)\right) = $$
$$ = \delta Z_0\left(\exp\left(-\frac{Z}{\tau}\right) - \exp\left(-\frac{Z_0}{\tau}\right) - \frac{Z_0 - Z}{\tau}\exp\left(-\frac{Z_0}{\tau}\right)\right) = \delta Z_0\left(\exp\left(-\frac{Z}{\tau}\right) - \left(\frac{Z_0 - Z}{\tau} + 1\right)\exp\left(-\frac{Z_0}{\tau}\right)\right). $$
This is seen to lead to a result identical to eq. 3.3, though derived in a very different way.
[Figure 3.3. Refraction models compared: refraction in arc seconds against flying height, 0 – 25 km. Curves: exponential model (8.5 km, 2.4 m), Gyer table 3, Kraus eq. 4.5-2.]
geometry —, and with the formula 3.4 given in Kraus. We see that, for common aircraft heights, our model performs no worse than the differences seen between the more sophisticated models. And this is before taking into account the influence of air pressure variations and climatic temperature differences.
Earth curvature reduction and local project reference
[Figure 3.4. Earth-curvature reduction; the labels P′, Z and H are as discussed in the text below.]
…ing geocentric co-ordinates (X, Y, Z) onto a map projection surface, e.g., the UTM (Universal Transverse Mercator) projection on the geocentric GRS80 ellipsoid, for a chosen central meridian.
The Earth-curvature effect, like the atmospheric effect, is radially symmetric: it produces a vertical offset of
$$ \delta Z_{\mathrm{curv}} = \frac{d^2}{2R}, $$
which in the image becomes visible as
$$ \delta r_{\mathrm{curv}} = \frac{r}{Z_0 - Z}\,\delta Z_{\mathrm{curv}} = \frac{r}{Z_0 - Z}\,\frac{d^2}{2R} = \frac{r^3}{f^2}\,\frac{Z_0 - Z}{2R}, $$
with f the camera focal length, d = r (Z₀ − Z)/f the distance on the ground from the camera footpoint, and R the radius of the Earth.
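The same offset as a sketch in code; the mean Earth radius and the example camera values are assumptions for illustration:

```python
def delta_r_curv(r_mm, f_mm, Z, Z0, R=6_371_000.0):
    """Earth-curvature image offset: delta_r = (r^3 / f^2) (Z0 - Z) / (2R).

    r_mm, f_mm in mm; heights Z, Z0 in metres; R the (assumed mean)
    Earth radius in metres. Returns the radial offset in mm.
    """
    return r_mm ** 3 / f_mm ** 2 * (Z0 - Z) / (2.0 * R)

# Camera f = 152 mm, image radius 100 mm, flying height 5000 m:
offset_mm = delta_r_curv(100.0, 152.0, 0.0, 5000.0)
print(f"{offset_mm:.4f} mm")
```

At these values the curvature offset amounts to a few hundredths of a millimetre, somewhat larger than the refraction offset of eq. 3.3 at the same geometry.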
Applying an Earth-curvature reduction makes it possible to do the adjustment computation of a photogrammetric block in local map projection and vertical co-ordinates, “as if” these were three-dimensional rectangular co-ordinates. Figure 3.4 shows how this works: the co-ordinates in which point P is given, X, Y, H, do not together form a rectangular
system. “Reducing” the observations means that we pretend to have
observed point P ′ instead, which actually is at a vertical height H above
the tangent plane (you can think of it as the plane onto which the map
projection projects the Earth’s surface), not the curved sea-level reference
surface or geoid. And now the geometry is rectangular and high-school
geometry and the Pythagoras theorem etc. just work!
This is somewhat dangerous, however, and only as correct as the as-
sumption that the geoidal surface in the project area resembles a sphere.
Also one should always keep in the back of one’s mind that, in these
co-ordinates proper, the rectangular geometry is actually not valid.
GNSS and IMU measurements in aerial acquisition
As our project co-ordinates relate to the local terrain situation, the
following conversions are therefore necessary:
$$ \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} \stackrel{\text{ellipsoid}}{\longleftrightarrow} \begin{bmatrix} \varphi \\ \lambda \\ h \end{bmatrix} \qquad \begin{cases} \begin{bmatrix} \varphi \\ \lambda \end{bmatrix} \xrightarrow{\text{map proj.}} \begin{bmatrix} X \\ Y \end{bmatrix} \\[1ex] h \xrightarrow{\text{geoid model}} H = Z \end{cases} $$
Here,
◦ the information on the map projection used in defining the map
co-ordinate system of the ground control points, and of the whole
aerial mapping project, must be provided correctly to the processing
software, and
◦ a good enough geoid model must also be provided in order to convert
the GNSS-produced heights h, which are relative to the reference
ellipsoid, to the heights above sea level Z (or H ) used locally in the
project.
The inertial measurement unit (IMU) again typically gives orientation angles (Euler angles) (ω, φ, κ). The pitch and roll angles ω, φ are relative to the local horizon. However, the heading (yaw) angle κ is referenced to the direction of the local meridian, whereas what we need is a reference to the Map North of the map projection used (remember we use map projection co-ordinates X, Y as our working co-ordinates for the aerial mapping project), i.e., the direction of the projection’s central meridian.
The difference between the two is known as the meridian convergence,
and it can be approximately computed by
$$ \delta\kappa = (\lambda - \lambda_0)\sin\varphi, \qquad \kappa = \kappa_{\mathrm{IMU}} + \delta\kappa, $$
see figure 3.5. This gives us the orientation angles κ that we can
use, e.g., as input for our aerotriangulation computation, either as ini-
tial or approximate values, or — with appropriate weights — as prior
constraints.
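As a sketch of this correction (the zone and location values in the example are hypothetical, chosen merely to resemble a UTM zone around the project area):

```python
import math

def heading_to_map_north(kappa_imu_deg, lon_deg, lon0_deg, lat_deg):
    """Apply the meridian convergence: delta-kappa = (lambda - lambda0) sin(phi).

    All angles in degrees; lon0_deg is the central meridian of the
    projection. Returns (corrected kappa, delta-kappa).
    """
    d_kappa = (lon_deg - lon0_deg) * math.sin(math.radians(lat_deg))
    return kappa_imu_deg + d_kappa, d_kappa

# Example: central meridian 39 degrees E, aircraft near 11.6 degrees N,
# 37.4 degrees E, IMU heading 90 degrees (illustrative values only).
kappa, d_kappa = heading_to_map_north(90.0, 37.4, 39.0, 11.6)
print(round(d_kappa, 4), round(kappa, 4))
```

Note that the formula works unchanged in degrees, since δκ is the product of an angle difference and a dimensionless sine.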
No corrections to the other two orientation angles are needed, as they
refer to the aircraft axes, not any meridian direction. Also they describe
[Figure 3.5. Meridian convergence and transformation of IMU angles to Map North: central meridian λ₀, local meridian λ, angles κ_IMU, δκ and κ, map axes X (= E) and Y (= N) in the map plane. In this diagram it is assumed for the definition of κ that the operator is facing port, see section 8.9.]
deviations from the local vertical, which coincides everywhere — after applying the Earth-curvature reduction! — with the rectangular or Cartesian Z-axis of our computation geometry.
4. Camera types and camera calibration
4.1 Camera and sensor types, spectral areas
As camera types we may distinguish:
1. Traditional (frame) cameras, using light-sensitive emulsion on film
2. Digital cameras, using one or several CCD arrays
3. “Pushbroom” or “whiskbroom” sensors
A very popular size for these aerial cameras is focal length 6 inches
/ 152 mm, image size 9 × 9 inches / 22.5 × 22.5 cm. If the resolution is
0.01 mm, i.e., the pixel size is 0.02 mm (using the rule of thumb that
resolution is half the pixel size) this means an image of 11 250 × 11 250
pixels. These cameras have four or eight fiducial marks around the edges
and corners of the image field, which get exposed on the film of each
frame to allow interior orientation.
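The pixel-count arithmetic above, as a quick check:

```python
# 22.5 cm film frame scanned at a pixel size of 0.02 mm
# (twice the 0.01 mm resolution, per the rule of thumb above).
image_side_mm = 225.0
pixel_size_mm = 0.02
pixels_per_side = round(image_side_mm / pixel_size_mm)
print(pixels_per_side)  # 11250, as in the text
```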
Because digital cameras of sufficiently large format have appeared only very recently, there are large bodies of aerial imagery in existence which were collected on film, but which nevertheless are today processed digitally. Analogue (film) imagery is converted to digital imagery using
scanners. These are similar to the scanners used for converting your old
holiday pictures to computer format, but
◦ their image field is way larger, and
For a long time, these cameras typically offered smaller image sizes than
the traditional cameras, having both a shorter focal length and a smaller
image field. In practical use this means that one has to fly more flight
lines at a lower aircraft altitude, which costs money. Only recently have
digital aerial cameras offered an image size in pixels similar to traditional
film cameras.
An example of a modern digital aerial camera is the Vexcel UltraCam™
Eagle (Gruber et al., 2012), which offers an image size of 104 × 68 mm, at
a pixel size of 5.2 µm (micrometre), i.e., 0.0052 mm, giving 20 010 × 13 080
pixels, fully comparable to traditional film cameras. The focal length for
panchromatic (i.e., black-and-white) image capture is 210 mm. Colour is
provided using separate camera units with lower resolution.
Digital cameras typically use a CCD (charge-coupled device) array
as their image sensor. The challenge is to make the image sensor field
of view suitably large, given that it is a semiconductor chip cut from a
silicon die of given, limited size. One approach is mosaicking multiple
CCD devices, or even multiple cameras, into one. It will be clear that this
creates calibration challenges. A more obvious approach, also taken in
the UltraCam, is making everything smaller, starting with the pixel size.
Recently (2015), Leica introduced their DMC III digital aerial camera, the first large-format camera to use CMOS technology instead of CCD
(Mueller and Neumann, 2016). Contrary to the UltraCam, it provides
the panchromatic image using a single camera unit with a focal length of
92 mm, a pixel size of 3.9 µm, and an image size of no less than 25 728 ×
14 592 pixels. As the name says, this is a greytone image; additional
camera units are used at a focal length of 45 mm and a pixel size of
6.0 µm, image size 8 956 × 6 708 pixels, adding red, green, blue and near
infrared at lower resolution.
It is clear that digital image collection offers its own, significant advan-
tages:
1. No need to purchase film media, only a lot of hard-disk storage
space is needed.
2. The workflow is cut short, especially eliminating manual stages: no
more processing of films followed by scanning.
3. No contribution to image noise from film grain.
4. Digital sensors have a greater dynamic range, i.e., details both
within very dark and very bright areas in the same image are
captured.
5. Digital sensors also capture small contrasts better and don’t compress them like film does. This leads to the effective resolution achieved being superior to that of scanned film images of nominally better pixel resolution.
6. More flexibility in collecting spectral information. One can sepa-
rately collect pan-spectral (black-and-white) information at high
resolution, and colour information at lower resolution. And one is
not limited to the traditional RGB visual colours plus near infrared:
one can specify and collect any spectral band, and any number of
them, up to hyperspectral imaging (as in remote sensing). The
UltraCam doesn’t use this possibility yet though.
In recent years, the field of aerial mapping has almost completely
switched to using digital cameras.
Of these, worth mentioning separately are the SPOT and Landsat pushbroom sensors. As this is satellite data, resolution typically is in the tens of metres, and they are of limited usefulness for geometric photogrammetry. Yet, e.g., Google Earth™ is largely based on imagery from Landsat 7’s ETM+ sensor, its panchromatic channel with a resolution of 15 m. This data, which covers the globe, is freely available.
A pushbroom sensor works by using a line of sensors perpendicular to
the flight direction of the satellite. A whiskbroom sensor instead uses
a single sensor with a small, rapidly rotating mirror, which allows the
sensor to scan a line on the Earth’s surface which is also perpendicular to
the direction of flight of the satellite. As the satellite moves, successive
lines are scanned, together building an image in a strip or band on both
sides of the ground path. As such, there are no individual images, rather
a long, unbroken stream of imagery. Also, instead of orientation elements
for individual images, exterior orientation derives orientation elements as
a continuous function of either time, or distance traveled by the satellite
along its track. This is a very different paradigm.
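The continuous-orientation paradigm can be sketched as follows: orientation elements are stored at discrete epochs and interpolated for the acquisition time of each scan line. The sample times and values below are invented for illustration; real processing fits smooth polynomial or spline models rather than the simple linear interpolation used here.

```python
import numpy as np

# Invented orientation samples for a pushbroom sensor: time [s] and one
# exterior orientation element, here the along-track position X0 [m].
t_samples = np.array([0.0, 1.0, 2.0, 3.0])
X0_samples = np.array([0.0, 7000.0, 14000.0, 21000.0])  # ~7 km/s orbital motion

def X0_at(t):
    """Orientation element at the acquisition time of one scan line.
    Linear interpolation is the simplest stand-in for the smooth
    time-dependent models used in practice."""
    return float(np.interp(t, t_samples, X0_samples))

print(X0_at(1.5))  # halfway between the second and third samples: 10500.0
```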
A technology that only recently has come of age and may warrant consid-
eration, is the use of an unmanned aerial vehicle as the sensor platform.
This has the advantage of being much less expensive than flying a real
aircraft, but at the cost of a limited payload capacity. For example, traditional aerial cameras are far too bulky for such a platform. But if the smaller image size of a commercial off-the-shelf digital camera is acceptable for the application, usually from a lower flying height, this is certainly an option. For geodetically precise mapping of large areas it is, however, less suitable.
1 Often one speaks of “camera resection”, as the term “calibration” may also refer to
radiometric calibration.
Camera calibration
Calibration alone is not sufficient for this, but it is a necessary condition.
Calibration is done in a special laboratory set-up.
The simplest of cameras are metric cameras; in the absence of imaging
errors, what they do is project points in the terrain through a single
point, the optical centre, onto a flat imaging surface where an emulsion
or light sensitive optical array is placed. For such an ideal camera, three
parameters, or sets of parameters, need to be determined in calibration:
◦ the focal length of the camera — this also determines the scale at
which the aerial mapping takes place, and its resolution on the
ground
◦ the location xP , yP of the principal point of the camera in relation
to the set of fiducial marks at the edge of the image plane
◦ the locations, in millimetres of x and y, of the fiducial marks in the camera co-ordinate system.
Normally, however, the objective lens of the camera is not quite optically
perfect; experience has shown that remaining distortion in the image
tends to be radially symmetric around the principal point, and can be
described by a small number of parameters. These are also included in
the camera’s calibration certificate, as a table of offsets in the image plane
away from the principal point. See the below calibration results for an
example.
From this table it is possible to derive the radial distortion by fitting a
simple polynomial to the table values, as follows:
r′ = r + k₀ r + k₁ r³ + k₂ r⁵ + . . .
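Such a fit is linear in the coefficients and can be done by least squares. A sketch, with invented table values in the style of table 4.2:

```python
import numpy as np

# Hypothetical calibration-table values (cf. table 4.2): radius r [mm]
# along a semi-diagonal, and measured radial distortion r' - r [µm].
r = np.array([10.0, 30.0, 50.0, 70.0, 90.0, 110.0, 130.0, 148.0])
dr = np.array([1.2, 3.0, 2.1, -2.5, -6.0, -4.0, 3.5, 9.0]) * 1e-3  # -> mm

# The model r' - r = k0 r + k1 r^3 + k2 r^5 is linear in the k's,
# so the coefficients follow from a linear least-squares fit:
A = np.column_stack([r, r**3, r**5])
k, *_ = np.linalg.lstsq(A, dr, rcond=None)
print(k)  # fitted k0, k1, k2
```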
[Figure: laboratory calibration set-up. A collimator projects an illuminated grid into the camera at angle α; the grid is imaged on film, and the radial distortion r′ − r is measured as a function of r from the principal point PP.]
2 Duane Brown (1929 – 1994) was an eminent American photogrammetrist and geodesist,
remembered especially for his contributions to camera calibration.
D TABLE 4.1. Calibration data for the Finnish camera: fiducial marks relative to FC, “fiducial cross”. “FC” refers to the central cross, the origin of all (x, y) co-ordinates. Eight fiducial marks were used, in the geometry given below right, and also in figure 7.1 right side.
x′ = x + x (k₁ r² + k₂ r⁴ + . . .) + (P₁ (r² + 2x²) + 2P₂ xy) (1 + P₃ r² + . . .),
y′ = y + y (k₁ r² + k₂ r⁴ + . . .) + (2P₁ xy + P₂ (r² + 2y²)) (1 + P₃ r² + . . .).
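As a sketch, the model can be applied to a measured point as follows. The coefficient values are those of the digital camera calibration data given later in this chapter; the test point is invented, and the higher-order factor (1 + P₃ r² + . . .) is omitted since no P₃ value is tabulated.

```python
def brown_conrady(x, y, k, p):
    """Brown-Conrady distortion of camera co-ordinates (x, y) [mm],
    relative to the principal point. k = (k1, k2, k3) are the radial,
    p = (P1, P2) the decentring coefficients; the factor
    (1 + P3 r^2 + ...) is omitted, as no P3 is given."""
    r2 = x * x + y * y
    radial = k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    xd = x + x * radial + p[0] * (r2 + 2 * x * x) + 2 * p[1] * x * y
    yd = y + y * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * y * y)
    return xd, yd

# Coefficients of the digital camera discussed later in this chapter
# (units: powers of mm); the test point (20, 15) mm is invented.
k = (-2.4153e-5, 8.9472e-9, 2.6584e-12)
p = (1.4468e-6, -3.3601e-6)
print(brown_conrady(20.0, 15.0, k, p))
```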
D TABLE 4.2. Calibration data for the Finnish camera: radial distortion in micrometres (µm). Tabulated separately for the four semi-diagonals. Normally these values are fitted with a model as described above, yielding parameters kᵢ, Pⱼ. Right, numbering of the semi-diagonals used.
The calibration data for this camera is given in table 4.3. It is seen
that the pixel size is quite a bit larger, 0.08 mm, than for the Finnish
camera, 0.02 mm, see sub-section 4.1.1. The original negatives being of
the same size — some 22 cm — this difference is due to coarser scanning.
It may also be that the digital images, which are offered by the e-foto
development team as test images, have been reduced after scanning for
D TABLE 4.3. Calibration data for the Brazilian camera. Units assumed are powers of mm throughout. Four fiducial marks were used, in the geometry given below right, and also in figure 7.1 left side.
better manageability.
The data for the camera used in the Bahir Dar imaging flight is given in
table 4.4. This is a fairly small digital camera.
We see that the sensor size in the cross-flight (x) direction is 4490 pixels, i.e., about 40 % of the sensor size of the (scanned) Finnish imagery, which is no less than 11 000 pixels (see section 4.1.1). This means that mapping a given area at the same ground resolution will require some two and a half times as many flight lines, making the aerial imaging part of the project correspondingly more expensive.
Now, in reality, this camera will be much smaller and lighter than a big
D TABLE 4.4. Calibration data for the camera used in the Bahir Dar imaging flight.

Camera: IDM_300_ICS2_Calibrated
Sensor size (pixels): 4490 × 3364
Pixel size (mm): 0.012 × 0.012
Focal length (mm): 55.1802
Distortion coefficients:
k₁  −2.4153 · 10⁻⁵
k₂   8.9472 · 10⁻⁹
k₃   2.6584 · 10⁻¹²
P₁   1.4468 · 10⁻⁶
P₂  −3.3601 · 10⁻⁶
film camera like the Leica RC30, which can weigh in at 100 kg and consume hundreds of watts of electric power from the aircraft’s on-board power system³. Therefore, it will be possible to fly it on a much lighter aircraft, or even an unmanned aerial vehicle, which will bring down costs a lot.
This remains true even considering that a denser network of ground con-
trol points may be required to achieve good geometric strength. Another
way to strengthen the geometric strength of the network is to include a
large number of tie points, e.g., by automatic tie-point generation.
3 http://w3.leica-geosystems.com/downloads123/zz/airborne/rc30/documentations/
RC30_product_description.pdf.
D 5 Flight plan
D 5.1 Problem description
When the purpose is to geometrically map an area, that fixes the basics
of the flight plan: the imaging will be vertical (i.e., the camera looking
straight down, rather than oblique), in a block consisting of a number
of parallel strips of overlapping images. The critical parameter to be
known is the resolution on the ground to be strived for: together with the
resolution on the film, typically of order micrometres, we then obtain the
image scale.
Say we want a resolution on the ground of 0.1 m. The film resolution
being 0.01 mm, we obtain as the image scale the ratio of these, 1 : 10 000.
If we then further know that the camera focal length is c = 150 mm — a
typical value, but alternatives exist —, we can also directly compute the
flight height: H = 10 000 × 0.15 m = 1500 m.
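The scale-and-height computation above, as a one-line sketch with the worked example’s numbers:

```python
ground_res = 0.10     # required resolution on the ground [m]
film_res   = 0.01e-3  # resolution on the film [m] (0.01 mm)
c          = 0.150    # camera focal length [m]

S = ground_res / film_res  # image scale number: the scale is 1 : S
H = S * c                  # flying height [m]
print(round(S), round(H))
```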
We can now also compute how big an area will appear on each image: if it measures 9 inches (22.5 cm) a side, it will photograph a square 2250 m on a side, i.e., 2.25² km² or about 5 km². But this doesn’t yet take the
overlap into account. In a very crude, approximate formula:
A ≈ (Sd)² (1 − Ox)(1 − Oy) = D² (1 − Ox)(1 − Oy),
where d is the image linear size (i.e., 22.5 cm), S = 10 000 is the image scale number (i.e., the scale is 1 : S), D = Sd is the size of the image area on the ground (i.e., 2250 m), and Ox, Oy are the longitudinal and transversal overlaps, respectively. This formula is asymptotically valid for large
(many strips, many frames per strip) blocks.
Choosing Ox = 0.60 and Oy = 0.30 we obtain for the effective terrain area mapped by one image, with this camera at this resolution:
A ≈ 1.4 km².
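The effective-area formula as a sketch, with the worked example’s numbers:

```python
S = 10_000            # image scale number
d = 0.225             # image side [m] (9 inch film)
Ox, Oy = 0.60, 0.30   # longitudinal and transversal overlaps

D = S * d                       # image side on the ground [m]
A = D**2 * (1 - Ox) * (1 - Oy)  # effective new area per image [m^2]
print(A / 1e6)                  # in km^2, about 1.4
```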
D FIGURE 5.1. Relationship between flight height H, camera focal length c, film or image resolution (typically one-half of pixel size) and required resolution on the ground.
Knowing the total size of the area now gives you the amount of film you
are going to consume. Figuring out the number of strips, the number
of images per strip, and the total flight distance — remember to take
generous turns between strips! — takes further work; see the example
below. By now, you should be set up to invite tenders from aerial mapping
companies.
D 5.2 Example
As an example we consider a target area of 3 × 5 km. See figure 5.2. This
kind of picture is definitely worth drawing.
Firstly let us assume that we fly along the longest direction of the area,
length D x = 5 km. This is generally preferable, as it means that the least
amount of time is lost to unproductive turns of the aircraft. Then, we
have 5 km to cover with images that are 2.25 km broad. But, remember
that the images have to overlap by 60%. This means that the first and
the last images are “extra”. The other images are offset by (1 − Ox) · d from each other; this distance is also the distance between the camera exposure centres and is known as the image or frame separation.
The positions of the left edges of the images span the range [0, Dx − d].
D FIGURE 5.2. Flight plan for the example: a 3 × 5 km target area covered by two strips of six images each (numbered 1–6 and, on the return leg, 7–12), each image 2.25 km on a side on the ground, with 60 % longitudinal and 30 % transversal overlap.
This gives us the formula for the number of images (exposures) in one flight strip:

nx = (Dx − d) / ((1 − Ox) · d) + 1 + 2,
where the extra number 2 represents the extra first and last images in the strip (needed to produce stereo view over the whole target area), and the number 1 the fact that the number of exposure positions on the closed interval [0, Dx − d] is one more than the number of offset lengths (1 − Ox) · d that fit into Dx − d.
Thus the number of images in a flight strip will be

nx = (5 km − 2.25 km) / 0.9 km + 3 = 6.06.
One is tempted to round to 6, by, e.g., slightly reducing the longitudinal
overlap. Generally speaking, one should instead round upward to the
nearest integer.
Obviously the number of strips needs to be 2 in order to cover the 3 km width of the area. Then there will be an overlap of at most

Oy = (2 × 2.25 km − 3 km) / 2.25 km = 0.67.
As only 30 %, i.e., Oy = 0.3, is required, we are again left with “edges” of 2 × 2.25 km − 3 km − 0.3 × 2.25 km = 0.825 km total, or 412 m on each side.
A general formula for the number of strips, analogous to the above one for the number of images per strip, is

ny = (Dy − d) / ((1 − Oy) · d) + 1,
[Figure 5.3: the target area inside the area covered by imagery, with the extra coverage ΔDx at the edges.]
with Dy the width of the target area. This yields ny = 1.48, to be rounded upward to 2.
Both nx and ny should be rounded upward to the nearest integer. Their product nx ny equals the total number of images (frames, exposures) for the whole project.
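The two counting formulas, with the upward rounding the text recommends, as a sketch (the example’s 6.06 then becomes 7):

```python
from math import ceil

def n_images_per_strip(Dx, d, Ox):
    """(Dx - d)/((1 - Ox) d) + 1 exposure positions, plus 2 extra
    frames for stereo coverage at the strip ends, rounded upward."""
    return ceil((Dx - d) / ((1 - Ox) * d) + 1 + 2)

def n_strips(Dy, d, Oy):
    """Number of flight strips needed to cover the width Dy."""
    return ceil((Dy - d) / ((1 - Oy) * d) + 1)

nx = n_images_per_strip(5000.0, 2250.0, 0.60)  # 6.06 rounds up to 7
ny = n_strips(3000.0, 2250.0, 0.30)            # 1.48 rounds up to 2
print(nx, ny, nx * ny)
```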
If the computed n x and n y are not integers but need to be rounded
upward, the imagery will cover more area on the ground than the target
area, and there will be “space left”, see figure 5.3. The formula for this
“space left”, on the left and right, and upper and lower, edges, is now
“space taken by images” minus “space taken by overlaps” minus again the
real dimension of the target area. As follows:
ΔDx = (nx − 2) d − (nx − 3) Ox d − Dx = ((nx − 2)(1 − Ox) + Ox) d − Dx,
ΔDy = ny d − (ny − 1) Oy d − Dy = (ny (1 − Oy) + Oy) d − Dy,
half of which is the shift, in both directions, needed to place the target
area in the middle of the coverage area. This helps to safeguard full
coverage also in the presence of small deviations from the nominal flight
plan.
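The margin formulas as a sketch. With the rounded-up nx = 7 the x margin comes out at 850 m, and the y margin reproduces the 825 m (2 × 412 m) found above:

```python
def margins(Dx, Dy, d, Ox, Oy, nx, ny):
    """'Space left' at the block edges after rounding nx, ny upward."""
    dDx = ((nx - 2) * (1 - Ox) + Ox) * d - Dx
    dDy = (ny * (1 - Oy) + Oy) * d - Dy
    return dDx, dDy

# Example: 3 x 5 km area, d = 2250 m, nx = 7 images, ny = 2 strips.
dDx, dDy = margins(5000.0, 3000.0, 2250.0, 0.60, 0.30, 7, 2)
print(dDx / 2, dDy / 2)  # the shifts that centre the target area
```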
The flight time between exposures is obtained directly from the image separation ∆x_exp = (1 − Ox) d, as follows:

∆t_exp = ∆x_exp / v,
where v is the aircraft velocity. Multiply by the total number of exposures nx ny, and one obtains the total photography time.
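With the example numbers (image separation 900 m) and the 400 km/h aircraft speed assumed in the exercises below, this gives:

```python
d = 2250.0      # image side on the ground [m]
Ox = 0.60       # longitudinal overlap
v = 400 / 3.6   # aircraft speed [m/s], assuming 400 km/h as in the exercises

dx_exp = (1 - Ox) * d     # image separation [m]
dt_exp = dx_exp / v       # time between exposures [s]
t_photo = 7 * 2 * dt_exp  # total photography time [s] for 14 frames
print(dt_exp, t_photo)
```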
We want every point in the whole target area to appear in two pho-
tographs at least. This means that a border area around the target area
will be photographed on single images of the project. It is wise to have
ground control points also in this border area, as this strengthens the
control of the camera location and orientation for these edge exposures.
D 5.3 Practicalities
Both the above examples assume that the target area of measurement is rectangular. Sometimes it is, but rarely exactly so. For approximately
rectangular areas, one may try to enclose the area in a rectangle, and
then proceed as above. For irregularly-shaped areas, there is no other
technique than the manual one.
1. Put a map of the area on your desk, with the outline of the target
area drawn on it.
2. Cut, from paper or cardboard, squares that are the same size as an
image on the ground, but reduced to map scale.
3. Mark on the squares the positions of forward and sideways overlap
by straight lines.
4. Now, arrange the squares into a proper geometry, with full, stereo
coverage, and overlaps in both directions as specified. Use pieces of
sticky material to keep the squares in place.
5. You may need to experiment a little to find the best solution. Do not
just minimize the number of images taken, try also to minimize the
number of strips, and thus, the number of aircraft turns needed.
6. Record your result, with numbered images, on a planning document
and planning map blank. Thus one obtains a flight map, which can
be given to the pilot and the photographer of the imaging flight.
The flight map should contain the flight lines (with the direction of
flight indicated) and the individual exposure locations.
In an area with strong mountain slopes (Nepal!), it is not a good idea to fly up and down the slope, so this constrains the flight direction: rather, fly along the height contours at constant height.
About the timing of photography: in moderate climate zones it is wise
to schedule for early spring, when the snow has left but deciduous trees
are still leafless. In the tropics, avoid the rainy (cloudy) season.
The uncertainty of weather is something to always take into account.
[Figure: the exercise target area, 10 km × 20 km, with an airstrip nearby.]
the aircraft. Which flying direction do you think is best?
◦ Draw ground control and tie points into your sketch to create a
good geometry for the aerotriangulation. Explain what makes the
solution strong.
◦ How many total images will you have to take? Estimate cost for
this film material. Discuss colour vs. greytone.
◦ Choose a suitable aircraft (discuss!), flying speed, cost.
◦ Compute hours in the air assuming 400 km/h flying speed. Assume that the airstrip is close to the project area, so you can ignore approach and return flights. Estimate cost assuming 100 000 credits per hour.
Now we have, for the same area and ground resolution requirements,
a different camera: a digital camera (see subsection 4.3.3) with a focal
length of 55.18 mm and a pixel size of 0.012 mm. The camera image field
is 4490 × 3364 pixels, with the higher pixel count being in the cross-track
direction, as is customary with digital imaging.
◦ We again have an area to be mapped of 10 × 20 km, see picture.
◦ We again require a point accuracy (ground resolution) of ±5 cm on
the ground.
Furthermore
◦ Images have zero cost (no film is used).
◦ Still, the aircraft flies at a speed of 400 km/h, and one hour of flying costs 100 000 credits.
Answer the following questions:
◦ What has to be now your flying height?
◦ How broad, in metres, is a single flight strip on the ground?
◦ Assume 60% / 20% overlap. How many flight strips are needed to
cover the area?
◦ Compute again hours in the air assuming 400 km/h flying speed, again assuming the airstrip is close to the project area. Estimate cost assuming 100 000 credits per hour. Compare with the previous result.
Question: how realistic is the assumption that the same aircraft will be
used for this case as in the first case? Read carefully chapter 4.
D 6 Setting up an e-foto project file
Note that the e-foto system is integrated in such a way that data from a previous module is carried forward to the next module inside the project definition file, which has the file name extension .epp. This means that before executing a module you must load the project definition file; after completing the module, you must save the file again, so that it is available to the following modules. These load and save options are available under the Project menu item.
In figure 6.1, we see the different tabs that the Project Manager module
offers:
◦ Project Header, already filled out with data for this project
◦ Terrain data
◦ Sensor: here the camera calibration data is entered
◦ Flight data
◦ Images: here the path names to the digital images are given
◦ Points: here, terrain points and their co-ordinates are entered, typi-
cally by loading them from a simple text file.
Before trying to change the information in any of the tabs, you need to
press the Edit button. When finished, you press the OK button — or, to
throw away a failed edit, the Cancel button.
D 6.2 Terrain
Here we enter general information on the area to be mapped. The inter-
esting fields are:
GRS (Geodetic Reference System): this is pre-filled with WGS84, but
other options are SAD69 (South American Datum 1969) and SIR-
GAS2000 (Sistema de Referencia Geocéntrico para las Américas).
Also Corrego Allegre (a Brazilian system) is provided for in the
code. Unfortunately all this is hardwired at present.
Outside South America, WGS84 is the prudent choice, and may be taken as an alias for, e.g., ITRFyy (International Terrestrial Reference Frame), ETRFyy (European Terrestrial Reference Frame) or some similar regional geocentric reference frame. The co-ordinate differences among all these frames are on the sub-metre level,
D 6.3 Sensor
Here we enter first of all the camera description and properties. The
camera focal length and co-ordinates of the principal point are taken from
the calibration certificate of the camera.
Then, lower on the same page (use the scroll bar on the right), we
may enter the distortion model coefficients kᵢ, Pᵢ according to the Brown-Conrady model (Brown, 1966). Enter only if these have been determined
for your camera.
D 6.4 Flight
Here is entered the photogrammetric flight information, to the extent
that it is known.
The software uses only the Nominal Scale field for setting up the initial
approximate values of the geometry — the Nominal Flight Altitude value
is not actually used. It is probably wise to enter it though. The values
shown for longitudinal overlap (60%) and for transversal overlap (30%)
are typical for aerial mapping flights and are known to give good results.
Here you should enter the values actually used in this project’s image
acquisition flight:
◦ Longitudinal Overlap is the overlap of two successive image frames obtained during the flight within a single strip of images,
D FIGURE 6.5. Entry of fiducial-mark co-ordinates in the camera frame, from the camera calibration certificate.
[Figure: flight paths, showing the longitudinal overlap between successive images within a strip and the transversal overlap between strips.]
D 6.5 Images
The Images tab allows the importation (i.e., setting the path names where
to find them) of digital images. You can import the whole set in one go.
Note 1: When re-loading a project file, e-foto may complain that it cannot
find the images you have just imported. If this happens, open the
sub-tab for every image, click on the Edit button at the bottom, and
use Select Image to re-set the path to that image. Press OK. After
that, all should work OK.
Note 2: When loading a project file that was downloaded from the Inter-
net — like the example project files provided by the e-foto project
web page — you should also not expect these file pointers to be
correct, as you will have placed your image file in a location on
your own hard disk, which the project file does not know anything
about. You will have to go through the Select image procedure for
each image.
Note 3: If you are using the e-foto version of June 30, 2016, you need to
rotate the images in a graphic editor like the GIMP. The angle of
rotation should be the approximate exterior orientation angle κ.
For more details, we refer to section 8.9 on exterior orientation.
D FIGURE 6.8. Project manager of e-foto. Here, interior orientation of three images has been completed, but exterior orientation has not yet begun.
Note 4: For older versions of e-foto (older than about 2014) you must use the following workaround for a bug in the software: if you don’t have GPS or INS data with your images, make sure to
1. enable the “Ground Coordinates. . . ” check box; and
2. choose type: “Not Considered”.
3. Do the same for Inertial Navigation System (INS) data, see below.
The need to do this is due to a bug / illogicity in these e-foto versions.
Technical remarks:
You might think that if your images are in colour, you could convert
them to greytone, which would make it easier for e-foto to handle them
especially on a small laptop. However, although the files will be only one third of the original size, e-foto internally still stores them as 32 bits per pixel, so this doesn’t really help much.
A better idea is to reduce the images to one quarter size by the following
command:
$ convert <inputfile> -resize 50% <outputfile>
but remember that then you also lose half of your resolution on the
ground. For training with the software this is OK, but not for production
work. The convert command belongs to the ImageMagick package, which
you need to have installed.
Since 2015, only 64-bit versions of the software are offered for download,
both for Linux (Debian, Ubuntu) and for Windows.
E-foto, reference systems and map projections
D FIGURE 6.10. A filled-out Points tab for the Rio de Janeiro project.
4. Height
and the fields are separated by a tab symbol (Control-I or ASCII 9).
Remember also that the decimal separator should be a point, not a
comma, as the e-foto software has not been localized at this point.
Technical remark: the co-ordinate data for the control points must be
given in this order: first Easting, then Northing (like in mathemat-
ics, first x, then y). Geodesists do it the other way around: first
Northing ( x), then Easting ( y). If your file is formatted like that,
you need to interchange columns:
$ awk ’{ t=$2 ; $2=$3; $3=t; print }’ inputfile > outputfile
D FIGURE 6.11. Ground control point list in the vi text editor for the Finnish project.
D TABLE 6.1. The ground control points in Bahir Dar. Note that the co-ordinates, obtained by hand-held GPS, are rounded to whole metres.
and regional geodetic datums and map projections — in fact, the whole
set compiled by the EPSG, the European Petroleum Survey Group, as
documented at, e.g., http://www.spatialreference.org.
On the other hand, realistically speaking, these traditional geodetic
datums are often non-geocentric and of limited precision, being based
on measurements made before the satellite era. If this is the case, you
wouldn’t be able to use GNSS-provided aircraft co-ordinates as such
anyway, but would have to allow the software always to adjust these to
agree with the system the ground control points are in. But then, there
is no real reason to go through a complicated transformation: just choose,
in e-foto’s Image tab, for every image, to disable the cross mark at “Ground
Coordinates of exposure station centre (optical center of sensor)”. Then, all
computations will be done in local co-ordinates and the geodetic reference
system and map projection settings will be ignored.
D TABLE 6.2. Editing the project file, in order to “forget” an image’s exterior orientation. Here it is image number (“key”) 4 that is edited out in two places. This works for the version of June 2016.
<exteriorOrientation>
...
<imageEO type="spatialResection" image_key="4">
<Xa>
<X0 uom="#m">41736.60185948185</X0>
<Y0 uom="#m">73101.76725801102</Y0>
<Z0 uom="#m">621.2541973006183</Z0>
<phi uom="#rad">0.02360511107121352</phi>
<omega uom="#rad">-0.0150353187767421</omega>
<kappa uom="#rad">1.55154177028697</kappa>
</Xa>
</imageEO>
...
</exteriorOrientation>
<spatialResections>
...
<imageSR image_key="4">
<iterations>5</iterations>
<converged>false</converged>  <!-- changed from "true" -->
...
</imageSR>
...
</spatialResections>
Concluding remarks
in a text file.
If you know what you are doing (and, just in case, take a back-up copy
of the file), you can edit this file in an ordinary text editor. In Ubuntu
Linux this means vim, emacs or gedit. See table 6.2.
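As a sketch, the same two edits of table 6.2 can also be made programmatically with Python’s standard-library XML parser. The snippet below operates on a minimal invented stand-in for the .epp structure (the real file contains much more, and its root element name here is assumed); always work on a back-up copy of the real file.

```python
import xml.etree.ElementTree as ET

# Minimal invented stand-in for the relevant parts of an .epp file
# (see table 6.2; the root element name is assumed for illustration).
epp = """<project>
  <exteriorOrientation>
    <imageEO type="spatialResection" image_key="4"><Xa/></imageEO>
  </exteriorOrientation>
  <spatialResections>
    <imageSR image_key="4"><converged>true</converged></imageSR>
  </spatialResections>
</project>"""

root = ET.fromstring(epp)

# Edit 1: remove the exterior-orientation record of image key 4.
eo = root.find("exteriorOrientation")
for rec in eo.findall("imageEO[@image_key='4']"):
    eo.remove(rec)

# Edit 2: mark the corresponding spatial resection as not converged.
conv = root.find(".//imageSR[@image_key='4']/converged")
conv.text = "false"

print(ET.tostring(root, encoding="unicode"))
```

The modified tree could then be written back with ET.ElementTree(root).write(); since e-foto may be sensitive to the file’s exact formatting, compare against the back-up before overwriting.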
D 7 Interior orientation
In a film image obtained with an analogue camera, i.e., an emulsion
on a glass or plastic film base, film co-ordinates (ξ, η) (xi, eta) were
traditionally measured, in metric units, in a co-ordinate measurement
machine. These give the positions of points in the image obtained in a
frame connected to the edges of the film. For a scanned image, i.e., an image file, each pixel has a pair of row and column numbers. These can be extracted by software, and converted to metric image co-ordinates using the known resolution (in pixels per mm) or pixel size (in µm) of the image.
This effectively simulates a co-ordinate measuring machine.
Interior orientation is the process of expressing co-ordinates measured on the photographic plate or the scanned digital image, so-called film or image co-ordinates, in the camera co-ordinate frame, as so-called plate co-ordinates. It establishes the transformation between these two co-ordinate frames. This is done by
using the fiducial marks, markers all around the edge of the camera’s
image field, which are photographed onto the emulsion together with the
terrain image at the moment of exposure. Typically there are either four,
or eight, fiducial marks.
The origin of the camera frame is the principal point, the projection of
the optical centre of the camera onto the image plane.
After the interior orientation transformation is derived, it may be used to convert the film or scanned-image co-ordinates (ξ, η) of any point in the image to rectangular metric camera co-ordinates (x, y). e-foto does this automatically in the background when one is measuring point co-ordinates in the imagery.
D FIGURE 7.1. Fiducial marks of a camera. There may be four (left, cf. the Brazilian camera) or eight (right, cf. the Finnish camera) fiducial marks.
D 7.1 Theory
The relationship between
1. film co-ordinates (ξ, η) — typically obtained in a film measurement
device, or in case of digital imagery, by dividing, in the photogram-
metry workstation, the position in pixels in both co-ordinate di-
rections by the known pixels-per-centimetre value of the image —
and
2. camera co-ordinates ( x, y) referring to the camera’s principal point,
can be described in a number of ways, from the very simple to the very
complex. A simple model would be a similarity transformation:
x = x₀ + K (ξ cos θ + η sin θ),
y = y₀ + K (−ξ sin θ + η cos θ).
Here, the unknowns of the relationship are a scale factor K, a rotation angle θ, and a translation vector (x₀, y₀)ᵀ. This vector describes the
offset between the origin (like the bottom left corner) of the photographic
plate or digital image co-ordinate system, and the principal point of the
camera. The principal point is the orthogonal projection of the optical
centre of the camera objective lens onto the image plane.
However, more common is an affine transformation:
x = x₀ + K (a ξ + b η),
y = y₀ + K (c ξ + d η),
usually written

x = a₀ + (1 + a₁) ξ + a₂ η,
y = b₀ + b₁ ξ + (1 + b₂) η,
with six free parameters. This allows, besides a rotation and scaling, also
for the image to be skewed, even in two directions. This means that it
depicts a tiny square in pixel space as a tiny diamond or rectangle in the
camera co-ordinate frame, and a tiny circle as a tiny ellipse.
In other words, the pixel sizes may be different in the x and y directions,
and the directions of the rows and columns of pixels may not be precisely
perpendicular to each other. Both of these distortions may happen if the
film material is not absolutely stable but can stretch; or more likely, if the
manufacturing precision of the film scanning device is less than perfect.
Note that, as both the measured film or digital-image co-ordinates
and the camera co-ordinates are in metric units, and the axes of both
are roughly parallel (with each other and with the edges of the image)
it follows that all four a₁, a₂, b₁, b₂ are typically small numbers. The offsets a₀ and b₀ are larger, denoting the vectorial offset from the image measurement origin (typically an image corner) to the camera co-ordinate origin, the principal point in the middle of the image.
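As a sketch, the six affine parameters can be estimated from matched fiducial-mark co-ordinates by linear least squares. The fiducial co-ordinates below are invented: the film frame is simply the camera frame shifted by 113 mm, with no rotation, scale or skew.

```python
import numpy as np

def fit_affine(xi_eta, xy):
    """Estimate the six parameters (a0, a1, a2, b0, b1, b2) of
        x = a0 + (1 + a1) xi + a2 eta
        y = b0 + b1 xi + (1 + b2) eta
    from matched fiducial-mark co-ordinates, by linear least squares."""
    xi, eta = xi_eta[:, 0], xi_eta[:, 1]
    A = np.column_stack([np.ones_like(xi), xi, eta])
    ax, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
    bx, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)
    return ax[0], ax[1] - 1.0, ax[2], bx[0], bx[1], bx[2] - 1.0

# Invented example: camera co-ordinates [mm] of four fiducial marks,
# and "measured" film co-ordinates shifted by 113 mm (origin in the
# image corner).
xy = np.array([[106.0, 0.0], [0.0, 106.0], [-106.0, 0.0], [0.0, -106.0]])
xi_eta = xy + 113.0
print(fit_affine(xi_eta, xy))  # close to (-113, 0, 0, -113, 0, 0)
```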
D 7.2 Execution
In preparation for interior orientation, the co-ordinates of the fiducial
marks as given in the calibration certificate (e.g., table 4.1) must have
D FIGURE 7.2. Camera principal point, camera co-ordinates x, y, and plate co-ordinates ξ, η.
Interior orientation with e-foto
presence of large residuals will indicate that we did something
wrong.
2. A value for the variance of unit weight σ₀², as well as a value for its square root, the standard error of unit weight, also called root-mean-square error or RMSE, σ₀. These numbers are also a measure of how well the interior orientation has succeeded. They too tell us, if they are unduly large, that we must have done something wrong.
If we find a very large σ0 value, the first thing to do is look at the table
of residuals. Often the presence of a very large residual for one of the
co-ordinates, or both, of one fiducial mark will indicate something wrong
with that fiducial mark. Check for clerical errors in the calibration data
entry; then, re-do the measurement of that point, and re-do the interior
orientation. Now σ0 should be suitably small. Do not accept your result
until this is the case.
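The quantity σ₀ can be sketched as follows, computed from the fiducial-mark residuals with a redundancy of (number of observed co-ordinates) minus (number of parameters); the residual values are invented.

```python
import numpy as np

def sigma0(residuals, n_params=6):
    """Standard error of unit weight: sqrt(v'v / (n_obs - n_params)).
    For an affine interior orientation there are 6 parameters, and two
    observed co-ordinates per fiducial mark."""
    v = np.asarray(residuals).ravel()
    return np.sqrt(v @ v / (v.size - n_params))

# Invented residuals [mm] of an 8-fiducial-mark interior orientation
# (an x and a y residual per mark):
v = np.array([[0.008, -0.012], [0.005, 0.010], [-0.007, 0.009],
              [0.011, -0.006], [-0.009, 0.004], [0.006, -0.010],
              [-0.008, 0.007], [0.004, -0.002]])
print(sigma0(v))  # of order 0.01 mm
```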
If you start the e-foto software, the start screen comes up (figure 2.6). The main menu offers the options Project, Execute and Help. Using the Interior Orientation module requires that you load a project file created and saved by the Project Manager module: its new-project set-up process includes entering the photogrammetric camera calibration certificate data and the paths to the digital images to be used. If you haven’t done so yet, we refer you to the tutorial on the Project Manager module.
1. Select the Project menu item from the main menu.
2. Load the previously saved project definition file using the “Load file”,
“Last project”, or “Open file” option, depending on e-foto version. The
file has an extension .epp. See figure 7.3.
If you select the project file, the screen shown in figure 6.1 will come
up.
Now choose Execute ▷ Interior Orientation. You’re ready to go!
A window will be presented asking you to select your image, figure 7.4.
Please select the first image in your set (this will be the one presented for
selection).
Next, you have to identify the fiducial marks according to their num-
bers.
Now you should move the mouse to, and click on, every fiducial mark in turn. The following steps must be executed in sequence, for every fiducial mark:
72 I NTERIOR ORIENTATION
5. Verify that, in the fiducial mark co-ordinate list below the image, the row of the table belonging to the current fiducial mark is the active one; if not, click on the row you wish the co-ordinates to be entered into.
6. Now move the mouse cursor precisely on top of the fiducial mark,
and click. The co-ordinate values for this fiducial mark are entered
into the co-ordinate list below the image, and the list cursor jumps
to the next row.
7. Clicking on the Fit View toolbar button should give you back the over-
view of the full image, ready to proceed to the next fiducial-mark
measurement.
Repeat for every fiducial mark, until all marks (four or eight) have
been measured.
Then:
1. Press the toolbar icon Interior Orientation. Note, in the output, the
standard error of unit weight (also known as root-mean-square
error or RMSE) σ0 . Your experience with this imagery should tell
you how small this number can be if everything is correct, usually
of order 0.01 mm.
2. Press the toolbar icon View report, and the tab “V”. This will display
the residuals of the fiducial mark co-ordinates. They should all be
suitably small.
3. If you are happy, press Accept.
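The σ0 shown in the report falls out of the least-squares fit of your fiducial measurements. As a minimal sketch of the computation, assuming the usual six-parameter affine transformation between pixel and calibrated plate co-ordinates (e-foto's actual model may differ, and all numbers below are invented):

```python
import math

def gauss_solve(M, b):
    """Solve the small normal-equations system M x = b by Gaussian elimination."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def interior_orientation(pixels, plate):
    """Fit xi = a0 + a1*col + a2*row and eta = b0 + b1*col + b2*row,
    then compute sigma0 = sqrt(sum of squared residuals / (2p - 6))."""
    rows = [[1.0, col, row] for col, row in pixels]
    def lsq(target):
        N = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
        u = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(3)]
        return gauss_solve(N, u)
    a = lsq([p[0] for p in plate])
    b = lsq([p[1] for p in plate])
    ss = 0.0
    for (col, row), (xi, eta) in zip(pixels, plate):
        ss += (a[0] + a[1] * col + a[2] * row - xi) ** 2
        ss += (b[0] + b[1] * col + b[2] * row - eta) ** 2
    dof = 2 * len(pixels) - 6   # four fiducials: 8 observations, 6 unknowns
    return a, b, math.sqrt(ss / dof)

# Four corner fiducials: pixel (col, row) and calibrated plate co-ordinates (mm).
pixels = [(100, 100), (22900, 100), (100, 22900), (22900, 22900)]
plate = [(-113.0, 113.0), (113.0, 113.0), (-113.0, -113.0), (113.0, -113.0)]
a, b, sigma0 = interior_orientation(pixels, plate)
print(round(sigma0, 6))   # consistent measurements: sigma0 near zero
```

A blunder in one fiducial measurement inflates σ0 far above the 0.01 mm order mentioned above, which is exactly the signal to re-measure that mark.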
◦ You have to repeat the interior orientation procedure for all images
in your set.
◦ After every image, or whenever you need to interrupt your work,
remember to save your intermediate results by exiting Interior Orien-
tation (by closing its main window by clicking on the little cross in
the corner) and then, in the main menu, clicking Project ▷ Save file.
◦ You may interrupt your work at any time. Interior orientation
results that have been approved by clicking Accept in the Interior
Orientation report, and after that, saved in the Project menu, will be
preserved.
◦ If you are unhappy about the quality of some of your work (it
improves with training!), you may return to Interior Orientation, open
the problem image, and re-do the fiducial marks you are unhappy with
— you may select the fiducial mark to be updated by clicking on the
appropriate row in the fiducial mark co-ordinate list — then re-accept
and re-save.
Under the Project main menu item in the Images tab you can see which
images have their interior orientation completed and saved into the
project file, figure 6.8.
8. Exterior orientation
Exterior orientation, or spatial resection, refers to establishing the con-
nection between terrain co-ordinates — three-dimensional, say ( X , Y , Z ) ,
with the Z axis pointing up — and the camera co-ordinates ( x, y) for one
given image. It requires a sufficient number of ground control points
i = 1 . . . n to be given both in the terrain, ( X i , Yi , Z i ) , i = 1 . . . n, and on
the photographic plate, (ξ i , η i ) , i = 1 . . . n, or equivalently, in camera
co-ordinates ( x i , yi ) .
8.1 Theory
The unknowns, or elements, or parameters, to be recovered by exterior
orientation are:
◦ The location co-ordinates ( X 0 , Y0 , Z0 ) of the camera, in the same
terrain co-ordinate system also used for the ground control points.
◦ The camera orientation angles (ω, ϕ, κ) also relative to the axes
( X , Y , Z ) of the terrain co-ordinate system, defined by a “corkscrew
rule”: ω rotates, around the X axis, Y towards Z ; ϕ rotates, around
the Y axis, Z towards X ; and κ rotates around the Z axis, X
toward Y . The ( X , Y , Z ) axes triad is right-handed, i.e., a corkscrew
turning from X to Y moves forward along the Z axis.
In the usual situation in aerial photography, where the camera looks
straight down and the terrain underneath has only small height (ele-
vation) differences, one can show that while recovery of the camera’s
height co-ordinate Z0 , and its rotation angle κ around the Z axis, can be
done precisely, both the horizontal co-ordinates X0, Y0 and the tilt angles
ω, ϕ cannot be recovered very precisely: the recovered values are much
more uncertain and are correlated or anti-correlated, X0 with ϕ, Y0 with
ω. It is only because aerial cameras have such a wide view angle that
these correlated elements can be separated at all.
Figure 8.1. Exterior orientation elements. These together describe the
relationship between the location of a terrain point in terrain
co-ordinates, and the corresponding image point in camera co-ordinates.
The location of the camera optical centre is given as a vector in the
terrain co-ordinate frame:

X0 = [ X0  Y0  Z0 ]ᵀ.

Similarly the image point is given in the camera co-ordinate frame:

x = [ x  y  c ]ᵀ,
where c is the calibrated focal length, and x, y the measured image co-
ordinates after reduction for camera calibration and interior orientation.
Now, in case the camera is looking straight down and the aircraft is
flying due East — with the camera operator sitting on starboard! —,
there is the following relationship between these:
x = −(c / (Z0 − Z)) (X − X0) = (c / (Z − Z0)) (X − X0),
Ó » .î á
78 E XTERIOR ORIENTATION
where Z is the height of the terrain point above sea level, and Z0 that of
the exposure optical centre above sea level. Thus, Z0 − Z is the height
of the camera above the terrain. The ratio s = 1 : S = c / (Z0 − Z) is an
important number called the scale factor of the exposure. The inverse
number S = 1/s, typically a large number, is called the scale number.
Similarly for both camera co-ordinates:

x = c (X − X0) / (Z − Z0),
y = c (Y − Y0) / (Z − Z0).
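A quick numeric illustration of scale factor and scale number (the flight parameters are invented):

```python
c = 0.153         # calibrated focal length in metres (a typical wide-angle camera)
Z0 = 1500.0       # height of the optical centre above sea level, m
Z = 100.0         # terrain height, m
s = c / (Z0 - Z)  # scale factor, the ratio 1 : S
S = 1.0 / s       # scale number
print(round(S, 1))   # about 9150: 1 mm in the image is about 9.15 m in the terrain
```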
For a camera in a general orientation we transform to camera-aligned
co-ordinates by a rotation matrix R:

X = R (X − X0),

where

    [ r11  r12  r13 ]   [ e1ᵀ ]
R = [ r21  r22  r23 ] = [ e2ᵀ ]
    [ r31  r32  r33 ]   [ e3ᵀ ].

So, the rows form an orthonormal basis {e1, e2, e3} of the space. They are in
fact oriented along the axes of the (x, y, c) camera co-ordinate frame, as
depicted in figure 8.1.
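The rows e1, e2, e3 are written out in full later in the text (in the aerotriangulation chapter); a small numerical sketch, with arbitrary angle values, confirming that they indeed form an orthonormal basis:

```python
import math

def rotation_rows(omega, phi, kappa):
    """The rows e1, e2, e3 of the rotation matrix R(omega, phi, kappa)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    e1 = (cp * ck, so * sp * ck + co * sk, so * sk - co * sp * ck)
    e2 = (-cp * sk, co * ck - so * sp * sk, co * sp * sk + so * ck)
    e3 = (sp, -so * cp, co * cp)
    return e1, e2, e3

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1, e2, e3 = rotation_rows(0.02, -0.015, 0.3)
# Each row has unit length, and the rows are mutually perpendicular:
checks = [dot(e1, e1) - 1, dot(e2, e2) - 1, dot(e3, e3) - 1,
          dot(e1, e2), dot(e2, e3), dot(e1, e3)]
print(max(abs(v) for v in checks))   # numerically zero
```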
Now, for a realistic, (ω, ϕ, κ)-rotated camera we obtain the collinearity
equations:
x = c X/Z = c ⟨e1 · (X − X0)⟩ / ⟨e3 · (X − X0)⟩ =

       r11(ω,ϕ,κ)(X − X0) + r12(ω,ϕ,κ)(Y − Y0) + r13(ω,ϕ,κ)(Z − Z0)
  = c ──────────────────────────────────────────────────────────────── ,
       r31(ω,ϕ,κ)(X − X0) + r32(ω,ϕ,κ)(Y − Y0) + r33(ω,ϕ,κ)(Z − Z0)
1 The origin of these new co-ordinates is the camera optical centre, so formally one could
say that X0 = 0.
y = c Y/Z = c ⟨e2 · (X − X0)⟩ / ⟨e3 · (X − X0)⟩ =

       r21(ω,ϕ,κ)(X − X0) + r22(ω,ϕ,κ)(Y − Y0) + r23(ω,ϕ,κ)(Z − Z0)
  = c ──────────────────────────────────────────────────────────────── .   (8.1)
       r31(ω,ϕ,κ)(X − X0) + r32(ω,ϕ,κ)(Y − Y0) + r33(ω,ϕ,κ)(Z − Z0)
The common denominator is the third component of the rotated difference
vector:

⟨e3 · (X − X0)⟩ = sin ϕ (X − X0) − sin ω cos ϕ (Y − Y0) + cos ω cos ϕ (Z − Z0).
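The collinearity equations 8.1 can be evaluated directly once the matrix elements r11 ... r33 are written out (they are given in full later in the text). A small sketch with invented values; in the nadir case it reduces to the simple relationship given earlier:

```python
import math

def camera_coords(c, X0, Y0, Z0, omega, phi, kappa, X, Y, Z):
    """Collinearity equations 8.1: terrain point -> camera co-ordinates (x, y)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    fx = cp * ck * dX + (so * sp * ck + co * sk) * dY + (so * sk - co * sp * ck) * dZ
    fy = -cp * sk * dX + (co * ck - so * sp * sk) * dY + (co * sp * sk + so * ck) * dZ
    g = sp * dX - so * cp * dY + co * cp * dZ
    return c * fx / g, c * fy / g

# Nadir-pointing camera (all angles zero), 1000 m above flat terrain:
x, y = camera_coords(0.15, 0.0, 0.0, 1000.0, 0.0, 0.0, 0.0, 500.0, -200.0, 0.0)
print(round(x, 6), round(y, 6))   # reduces to x = c(X - X0)/(Z - Z0) = -0.075, y = 0.03
```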
2 The rows of the matrix R are here written as column vectors, just in order to save
horizontal page space.
For small angles ω, ϕ, κ, the rotation matrix becomes approximately

     [  1    κ   −ϕ ]
R ≈  [ −κ    1    ω ]
     [  ϕ   −ω    1 ].
8.4 Linearization
The expressions 8.4, 8.5 need to be linearized with respect to all six
unknowns X0, Y0, Z0, ω, ϕ, κ. Doing this requires determining the 2 × 6 partial
derivatives ∂x/∂X0 ... ∂y/∂κ. Deriving these partials, which form the
design matrix A, is a lot of painstaking work but not really difficult³.
The observation equations system (for a single point's two camera
co-ordinates) then becomes

[ Δx + vx ]   [ ∂x/∂X0  ∂x/∂Y0  ∂x/∂Z0  ∂x/∂ω  ∂x/∂ϕ  ∂x/∂κ ]
[ Δy + vy ] = [ ∂y/∂X0  ∂y/∂Y0  ∂y/∂Z0  ∂y/∂ω  ∂y/∂ϕ  ∂y/∂κ ] · [ ΔX̂0  ΔŶ0  ΔẐ0  Δω̂  Δϕ̂  Δκ̂ ]ᵀ,   (8.6)

the matrix of partials being the design matrix A.
3 It can even be done numerically, avoiding the risk of error in an analytical derivation.
This is a useful correctness check on your code.
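The numerical differentiation mentioned in the footnote can be sketched with central differences; here it is checked, for the nadir case, against the analytic partial ∂x/∂Z0 = c (X − X0)/(Z − Z0)² (all values illustrative):

```python
import math

def x_coord(c, X0, Y0, Z0, omega, phi, kappa, X, Y, Z):
    """The collinearity x co-ordinate of equation 8.1."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    fx = cp * ck * dX + (so * sp * ck + co * sk) * dY + (so * sk - co * sp * ck) * dZ
    g = sp * dX - so * cp * dY + co * cp * dZ
    return c * fx / g

def num_partial(f, params, i, h):
    """Central-difference partial derivative of f with respect to params[i]."""
    p1, p2 = list(params), list(params)
    p1[i] += h
    p2[i] -= h
    return (f(*p1) - f(*p2)) / (2 * h)

# Nadir case: compare against the analytic dx/dZ0 = c (X - X0) / (Z - Z0)^2.
params = [0.15, 0.0, 0.0, 1000.0, 0.0, 0.0, 0.0, 500.0, -200.0, 0.0]
numeric = num_partial(x_coord, params, 3, 1e-4)   # index 3 is Z0
analytic = 0.15 * 500.0 / (0.0 - 1000.0) ** 2
print(abs(numeric - analytic) < 1e-9)   # True
```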
The matrix of partial derivatives — to be evaluated at the best available
approximate values for the unknowns — is the design matrix A:

A = [ ∂x/∂X0  ∂x/∂Y0  ∂x/∂Z0  ∂x/∂ω  ∂x/∂ϕ  ∂x/∂κ ]
    [ ∂y/∂X0  ∂y/∂Y0  ∂y/∂Z0  ∂y/∂ω  ∂y/∂ϕ  ∂y/∂κ ].   (8.7)

For example, writing x = c f/g, with f and g the numerator and denominator
of the collinearity expression, we obtain for p = X0

∂x/∂X0 = −(1/g) c cos κ + (f/g²) c ϕ,

and for p = κ (note that g does not contain κ):

∂x/∂κ = (1/g) c (−sin κ (X − X0) + cos κ (Y − Y0) + (ω cos κ + ϕ sin κ)(Z − Z0)),
and so forth. . . In matrix notation, the observation equations read

ℓ + v = A x̂,
with
ℓ = [ Δx ]   v = [ vx ]   A = [ ∂x/∂X0  ∂x/∂Y0  ∂x/∂Z0  ∂x/∂ω  ∂x/∂ϕ  ∂x/∂κ ]
    [ Δy ],      [ vy ],      [ ∂y/∂X0  ∂y/∂Y0  ∂y/∂Z0  ∂y/∂ω  ∂y/∂ϕ  ∂y/∂κ ],

and⁴

x̂ = [ ΔX̂0  ΔŶ0  ΔẐ0  Δω̂  Δϕ̂  Δκ̂ ]ᵀ.
Now of course in reality we have more than one ground control point: if
we have p control points, we will have an observation vector of length
n = 2p, and the A matrix will have two rows for each point, being
2p × 6 in size. If 2p ≥ 6, i.e., we have at least three control points, the
problem will be well posed, and for 2p > 6 it is even overdetermined.
As an example we give the observation matrix for the case of four
ground control points:

[ ℓ1 ]   [ v1 ]   [ A1 ]
[ ℓ2 ] + [ v2 ] = [ A2 ] x̂,
[ ℓ3 ]   [ v3 ]   [ A3 ]
[ ℓ4 ]   [ v4 ]   [ A4 ]

where

ℓi = [ Δxi ]   vi = [ vx,i ]   Ai = [ ∂xi/∂X0  ∂xi/∂Y0  ∂xi/∂Z0  ∂xi/∂ω  ∂xi/∂ϕ  ∂xi/∂κ ]
     [ Δyi ],       [ vy,i ],       [ ∂yi/∂X0  ∂yi/∂Y0  ∂yi/∂Z0  ∂yi/∂ω  ∂yi/∂ϕ  ∂yi/∂κ ].
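The whole resection can be sketched as a Gauss-Newton iteration on simulated data. This is a toy illustration, not e-foto's implementation: the design matrix is built by central-difference numerical partials, all co-ordinates are invented, and no blunder detection or convergence test is included.

```python
import math

def project(c, X0, Y0, Z0, om, ph, kp, X, Y, Z):
    """Collinearity equations 8.1: terrain point -> camera co-ordinates."""
    so, co = math.sin(om), math.cos(om)
    sp, cp = math.sin(ph), math.cos(ph)
    sk, ck = math.sin(kp), math.cos(kp)
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    fx = cp * ck * dX + (so * sp * ck + co * sk) * dY + (so * sk - co * sp * ck) * dZ
    fy = -cp * sk * dX + (co * ck - so * sp * sk) * dY + (co * sp * sk + so * ck) * dZ
    g = sp * dX - so * cp * dY + co * cp * dZ
    return c * fx / g, c * fy / g

def gauss_solve(M, b):
    """Solve the 6x6 normal equations by Gaussian elimination with pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def resection(c, gcps, obs, p, iters=25, h=1e-6):
    """Gauss-Newton on the six unknowns p = [X0, Y0, Z0, omega, phi, kappa]."""
    for _ in range(iters):
        A, l = [], []
        for (X, Y, Z), (xo, yo) in zip(gcps, obs):
            xc, yc = project(c, *p, X, Y, Z)
            l += [xo - xc, yo - yc]          # linearized observations Delta-x, Delta-y
            rx, ry = [], []
            for i in range(6):               # numerical partials, one column at a time
                pp, pm = p[:], p[:]
                pp[i] += h
                pm[i] -= h
                xp, yp = project(c, *pp, X, Y, Z)
                xm, ym = project(c, *pm, X, Y, Z)
                rx.append((xp - xm) / (2 * h))
                ry.append((yp - ym) / (2 * h))
            A += [rx, ry]
        N = [[sum(row[i] * row[j] for row in A) for j in range(6)] for i in range(6)]
        u = [sum(row[i] * li for row, li in zip(A, l)) for i in range(6)]
        p = [pi + di for pi, di in zip(p, gauss_solve(N, u))]
    return p

# Simulated single image: true elements and four well-spread GCPs.
truth = [100.0, -50.0, 1000.0, 0.02, -0.01, 0.10]
c = 0.15
gcps = [(-500.0, -500.0, 10.0), (600.0, -400.0, 0.0),
        (-450.0, 550.0, 20.0), (500.0, 500.0, 5.0)]
obs = [project(c, *truth, X, Y, Z) for X, Y, Z in gcps]
est = resection(c, gcps, obs, [0.0, 0.0, 900.0, 0.0, 0.0, 0.0])
print([round(v, 4) for v in est])   # converges to the simulated true elements
```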
One might think that the above observation equations 8.4 and 8.5 could
be further simplified by omitting ω and ϕ (both being ≈ 0):

x ≈ c (cos κ (X − X0) + sin κ (Y − Y0)) / (Z − Z0),   (8.8)
y ≈ c (−sin κ (X − X0) + cos κ (Y − Y0)) / (Z − Z0).   (8.9)
However, this will not work because then, in the linearized observation
equations, all the partial derivatives in the columns belonging to ϕ and
4 Here, we write the column vector x̂ as a transposed row vector, in order to save vertical
page space.
ω will vanish, and therefore it will not be possible to estimate values for
Δϕ̂ and Δω̂ — and the least-squares solution will not converge for these
unknowns!
However, these equations are useful nevertheless: they allow us to
derive a set of crude approximate values for the exterior orientation
parameters, that can be used to start their iterative improvement. We
write the simplified equations 8.8, 8.9 in matrix form:

[ x ]              [  cos κ  sin κ ] [ X − X0 ]
[ y ] = c/(Z − Z0) [ −sin κ  cos κ ] [ Y − Y0 ],

or, inverted, with the scale number S = 1/s = (Z − Z0)/c:

[ X − X0 ]     [ cos κ  −sin κ ] [ x ]
[ Y − Y0 ] = S [ sin κ   cos κ ] [ y ],

or

[ X ]   [ X0 ]   [ S cos κ  −S sin κ ] [ x ]
[ Y ] = [ Y0 ] + [ S sin κ   S cos κ ] [ y ],   (8.10)
and

κ⁽⁰⁾ = atan2 (x4, x3),
Z0⁽⁰⁾ = Z + c √(x3² + x4²).

⁵ This way of writing the problem is actually very similar to solving the parameters of a
Helmert transformation in two dimensions, with [X0 Y0]ᵀ the translation vector, κ
the rotation, and the flight height Z0 as a “quasi-scaling”.
We write these “observation equations” 8.10 for all ground control points
i = 1...n within the image, and construct the ordinary least-squares
solution x = (AᵀA)⁻¹ Aᵀ ℓ. Then, the initial approximate values for the
individual exterior orientation elements are extracted as follows:

X0⁽⁰⁾ = x1,
Y0⁽⁰⁾ = x4,
κ⁽⁰⁾ = atan2 (−x3, x2) = atan2 (x5, x6),   (8.11)
Z0⁽⁰⁾ = Z ± c √(x2² + x3²) = Z ± c √(x5² + x6²).
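A sketch, on simulated data, of how equations 8.10 and 8.11 yield the crude approximate values (all numbers are invented; the terrain is taken flat at the mean height Z):

```python
import math

def gauss_solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def initial_values(points, c, Z_mean):
    """Fit X = x1 + x2 xi + x3 eta, Y = x4 + x5 xi + x6 eta to the ground
    control points, then extract approximate values as in equations 8.11.
    points: list of (xi, eta, X, Y); Z_mean: mean terrain height."""
    rows, rhs = [], []
    for xi, eta, X, Y in points:
        rows.append([1.0, xi, eta, 0.0, 0.0, 0.0])
        rhs.append(X)
        rows.append([0.0, 0.0, 0.0, 1.0, xi, eta])
        rhs.append(Y)
    N = [[sum(r[i] * r[j] for r in rows) for j in range(6)] for i in range(6)]
    u = [sum(r[i] * t for r, t in zip(rows, rhs)) for i in range(6)]
    x = gauss_solve(N, u)
    kappa0 = math.atan2(x[4], x[5])   # = atan2(x5, x6)
    S = math.hypot(x[1], x[2])        # scale number, sqrt(x2^2 + x3^2)
    return x[0], x[3], Z_mean + c * S, kappa0

# Simulated image: scale number 9000, kappa = 0.3 rad, c = 0.15 m.
S_t, k_t, X0_t, Y0_t, c, Z_t = 9000.0, 0.30, 4000.0, 2500.0, 0.15, 100.0
img = [(-0.07, -0.06), (0.08, -0.05), (-0.06, 0.07), (0.07, 0.08)]
points = []
for xi, eta in img:
    X = X0_t + S_t * (math.cos(k_t) * xi - math.sin(k_t) * eta)
    Y = Y0_t + S_t * (math.sin(k_t) * xi + math.cos(k_t) * eta)
    points.append((xi, eta, X, Y))
X0, Y0, Z0, kappa0 = initial_values(points, c, Z_t)
print(round(X0, 3), round(Y0, 3), round(Z0, 3), round(kappa0, 4))
# 4000.0 2500.0 1450.0 0.3
```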
If you select the project file, the screen shown in figure 6.1 will come up.
Now choose Execute ▷ Spatial Resection. You’re ready to go!
The main screen that comes up is in figure 8.3. Note the toolbar, figure
8.4. In the point co-ordinate table, figure 8.5, one sees the ground control
points with originally only their terrain co-ordinates given.

⁶ The fact that, mathematically, there is also a negative solution explains why one
sometimes sees near-convergence to an otherwise sensible-looking solution, with small
residuals, but a negative flight height. . .

Figure 8.3. Exterior Orientation main screen. Note the toolbar on top and the
list of terrain points at the bottom.
(a) find it on the image on your screen using the information pro-
vided. It typically looks like a white square made of cardboard
or painted on the roadtop, see the left picture in figure 8.7.
Alternatively, the point is not clearly marked in the terrain
but identified by a description, see the right picture in this
figure.
(b) zoom in if necessary using the Zoom toolbar button: click and
drag the mouse to form a yellow outline that will be magnified
(c) activate point marking by clicking the “Measure mark” icon on
the left of the toolbar
(d) check once more that the point you are about to measure is
the active row in the point co-ordinate table; click on the row
number to activate the correct row if necessary
(e) place the crosshairs precisely over the point marker; if it is
shown as big pixels, estimate the centre visually
(f) click.
(g) As in interior orientation, also here you can correct a wrong
measurement by activating the row for the point in the co-
ordinate table, and re-measuring.
Figure 8.6. The “flight direction” — more accurately, “camera azimuth” — dial.
The angle that the dial sets is κ⁽⁰⁾, the initial or approximate value
of the κ angle of camera orientation around the Z axis.
This again is defined as the angle, counted positively (counterclock-
wise), from the terrain co-ordinate frame's X axis — which the e-foto
software calls E, for Easting — to the camera co-ordinate frame's
x axis (or, as the e-foto software calls it, ξ). This latter axis points to
the right edge of the image.
Now, whether the right edge of the image is on the side of the
nose or the tail of the aircraft will depend on whether the camera
operator sits on starboard or port with respect to the camera. . . as
follows:
◦ If the operator sits on starboard, facing port, and the upper
edge of the images is on port, κ will be the heading of the
aircraft, counted counterclockwise from the East.
◦ If the operator sits on port, facing starboard, and the upper
edge of the images is on starboard, then κ will be the aircraft
heading reckoned counter-clockwise from the West.
. . . but of course you can always just experiment, until you find a
κ(0) that works. . .
A better way of understanding κ is that it is the angle over which
the images are rotated, in the positive or counter-clockwise direc-
tion, from the “standard orientation” in which the upper edge of the
image is North, and the right edge East⁷. Perhaps “camera azimuth”
would be a suitable name.
Note: if you are using the e-foto version from June 30, 2016, it
doesn’t actually have a “flight direction” dial. For this version,
you have to rotate the images before input using a graphical
editor. You have to do this in the Project Manager’s Images
tab, see section 6.5. The angle of rotation is the same κ(0)
discussed above. This is necessary to achieve convergence of
the exterior orientation iteration.
7 This means that e-foto’s way of calling this dial “flight direction” is misleading: it may
be precisely opposite to the direction of flight!
Figure 8.7. Detail with ground control point. Left, Helsinki; right, Rio de
Janeiro.

Figure 8.8. Overview picture of all terrain points in this image. Only the
points marked green are activated and will be used for exterior
orientation. The red points are deactivated. There is a tick box
for activating or deactivating every point in the leftmost column
of the point co-ordinates list, see figure 8.5.
Figure 8.10. Exterior orientation precision and ground control point geometry.
co-ordinates will not be noticed: the solution is not robust. This is gener-
ally not acceptable. Four points offer a minimum of overdetermination,
provided no three points lie on a straight line.
On the right hand side of figure 8.10 are two examples of a near-singular
or ill-conditioned geometry:
◦ if all points are clustered together, a set of exterior orientation
parameters will be obtained that represents only the immediate
neighbourhood of this cluster well. At some distance, the quality of
the solution becomes rapidly poorer.
◦ If all points are on a straight line, then again close to this line the
solution will be good, but away from the line, its quality gets poorer
quickly.
9. Aerotriangulation
9.1 Description
Aerotriangulation (also known as phototriangulation) is the joint adjust-
ment of a project set or block of aerial photographs, together with tie-
point information from the terrain, to obtain a three-dimensional model
of the area mapped. It requires a number of preparatory steps: interior
orientation for each image; the co-ordinates of points with known ground
co-ordinates, so-called ground control points (GCPs), visible in one or
more of the images; as well as tie points, which can be identified in
overlapping images and serve to interconnect them.
Numerical aerotriangulation is typically performed by bundle block
adjustment, i.e., a least-squares adjustment in which the projective rela-
tionships between image and terrain co-ordinates serve as observation
equations. Often-used numerical techniques, in view of the very large
system of equations formed, are iterative methods such as conjugate
gradients and steepest descent, or an implementation known as the
Levenberg-Marquardt algorithm. This algorithm is essentially¹ a lin-
earized, iterative solution algorithm stabilized by a technique similar to
Tikhonov regularization.
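The damping at the heart of such a stabilized iteration is easy to sketch: each step solves the normal equations with a constant λ added to the diagonal, a Tikhonov-style term; λ → 0 gives plain least squares. A toy illustration, not e-foto's or ERDAS's actual solver:

```python
def gauss_solve(M, b):
    """Solve the small dense system M x = b (Gaussian elimination with pivoting)."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def lm_step(A, l, lam):
    """One Levenberg-Marquardt-style step: solve (A^T A + lam I) dx = A^T l."""
    n = len(A[0])
    N = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    for i in range(n):
        N[i][i] += lam    # Tikhonov-style damping of the normal matrix
    u = [sum(row[i] * li for row, li in zip(A, l)) for i in range(n)]
    return gauss_solve(N, u)

# Overdetermined, consistent toy system: fit a line y = a + b t through three points.
A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
l = [2.0, 3.0, 4.0]
print([round(v, 6) for v in lm_step(A, l, 0.0)])    # undamped: exact fit a = 1, b = 1
print([round(v, 6) for v in lm_step(A, l, 10.0)])   # damped: solution shrunk toward zero
```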
Contrary to exterior orientation, which is done one image at a time,
aerotriangulation uses all images, and their ground control points, to-
gether. Therefore the situation may arise where some of the images
cannot be externally oriented on their own, but, thanks to the addition
of tie points, the block adjustment is successful. Much depends on the
geometry of the tie points chosen, as well as the overlap between the
images in the block, both in the flight direction and perpendicular to it.
1 https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
Figure 9.1. Tie point geometry in aerotriangulation. The crosses form a tie
point geometry that is sufficient; adding the circles will make it
excellent. Also drawn are four ground control points (GCPs) as
triangles.
If the tie point geometry of a photogrammetric block is sufficient, or
even good, this means that the relative orientations of the images within
the block — that is, the relative locations of their exposure centres, and
the orientation angles of the cameras at exposure — are well constrained
and can be well determined by the aerotriangulation.
However, absolute orientation of the block as a whole requires in ad-
dition ground control points. See again figure 9.1, where four proposed
ground control points (GCPs) are drawn as triangles. Four points are suf-
ficient, as they are for the exterior orientation of a single image (chapter
8), and provide some redundancy. Theoretically, the absolute minimum
number of GCPs to constrain the six parameters of absolute orientation
of the block is three points not on a straight line. The lack of redundancy
makes this unacceptable, however.
Modern photogrammetric software like ERDAS Imagine can generate lots
of tie points automatically using advanced image correlation techniques.
Unfortunately e-foto does not have this capability.
As an example, take a block of two images A and B, with ground control
point 1 measured in image A only, point 2 in both images, and point 3 in
image B only. The system of observation equations ℓ + v = A x̂ then reads

[ ℓ1A ]   [ v1A ]   [ A1A   0  ]
[ ℓ2A ] + [ v2A ] = [ A2A   0  ]  [ x̂A ]
[ ℓ2B ]   [ v2B ]   [  0   A2B ]  [ x̂B ],   (9.1)
[ ℓ3B ]   [ v3B ]   [  0   A3B ]

the dimensions being 8 × 1, 8 × 1, 8 × 12 and 12 × 1 respectively.
Here, e.g., each observation ℓ_ji stands for the linearized camera co-
ordinates, i.e., Δx_ji, Δy_ji, measured for ground control point j in image
i. It thus consists of two real-number elements, and the total ℓ vector of
eight.
Each sub-matrix A_ji contains the partials of the camera co-ordinates of
point j in image i with respect to the exterior orientation elements of
image i:

A_ji = [ ∂x_ji/∂X0,i  ∂x_ji/∂Y0,i  ∂x_ji/∂Z0,i  ∂x_ji/∂ωi  ∂x_ji/∂ϕi  ∂x_ji/∂κi ]   (9.2)
       [ ∂y_ji/∂X0,i  ∂y_ji/∂Y0,i  ∂y_ji/∂Z0,i  ∂y_ji/∂ωi  ∂y_ji/∂ϕi  ∂y_ji/∂κi ].
The vector of unknowns is partitioned by image:

x̂ = [ x̂A ]
    [ x̂B ],

with

x̂A = [ ΔX̂0A  ΔŶ0A  ΔẐ0A  Δω̂A  Δϕ̂A  Δκ̂A ]ᵀ,   x̂B = [ ΔX̂0B  ΔŶ0B  ΔẐ0B  Δω̂B  Δϕ̂B  Δκ̂B ]ᵀ.
Then, e.g.,

X̂0A = X0A⁽⁰⁾ + ΔX̂0A,
ϕ̂B = ϕB⁽⁰⁾ + Δϕ̂B,
and so on. Now, the approximate values for the observations are obtained
by propagating the approximate unknowns through the non-linear, exact
camera model equations, based on the exact matrix R given in equation
8.2:
x_ji⁽⁰⁾ = c (fx)_ji⁽⁰⁾ / g_ji⁽⁰⁾,   y_ji⁽⁰⁾ = c (fy)_ji⁽⁰⁾ / g_ji⁽⁰⁾,

with

(fx)_ji⁽⁰⁾ = ⟨e1⁽⁰⁾ · (Xj − X0,i⁽⁰⁾)⟩ =
   = cos ϕi⁽⁰⁾ cos κi⁽⁰⁾ (Xj − X0,i⁽⁰⁾) +
   + (sin ωi⁽⁰⁾ sin ϕi⁽⁰⁾ cos κi⁽⁰⁾ + cos ωi⁽⁰⁾ sin κi⁽⁰⁾)(Yj − Y0,i⁽⁰⁾) +
   + (sin ωi⁽⁰⁾ sin κi⁽⁰⁾ − cos ωi⁽⁰⁾ sin ϕi⁽⁰⁾ cos κi⁽⁰⁾)(Zj − Z0,i⁽⁰⁾),

(fy)_ji⁽⁰⁾ = ⟨e2⁽⁰⁾ · (Xj − X0,i⁽⁰⁾)⟩ =
   = −cos ϕi⁽⁰⁾ sin κi⁽⁰⁾ (Xj − X0,i⁽⁰⁾) +
   + (cos ωi⁽⁰⁾ cos κi⁽⁰⁾ − sin ωi⁽⁰⁾ sin ϕi⁽⁰⁾ sin κi⁽⁰⁾)(Yj − Y0,i⁽⁰⁾) +
   + (cos ωi⁽⁰⁾ sin ϕi⁽⁰⁾ sin κi⁽⁰⁾ + sin ωi⁽⁰⁾ cos κi⁽⁰⁾)(Zj − Z0,i⁽⁰⁾),

g_ji⁽⁰⁾ = ⟨e3⁽⁰⁾ · (Xj − X0,i⁽⁰⁾)⟩ =
   = sin ϕi⁽⁰⁾ (Xj − X0,i⁽⁰⁾) −
   − sin ωi⁽⁰⁾ cos ϕi⁽⁰⁾ (Yj − Y0,i⁽⁰⁾) +
   + cos ωi⁽⁰⁾ cos ϕi⁽⁰⁾ (Zj − Z0,i⁽⁰⁾).
Tie points may be any points that show in two or more images and are
well defined. Often, roof corners of buildings make good tie points.
If you are manually measuring tie points, follow these simple rules:
1. Measure a set of tie points that — together with the ground control
points — are well spread out over the area to be mapped. It is
useless to create tie points that are too close together or too close to
a ground control point.
2. Only measure real points, uniquely defined in three dimensions.
A “point” that should not be used as a tie point is shown in figure
12.5, an intersection of road lines on two different levels. Also, don’t
measure points that might move between exposures: cars, animals,
shadows, trees waving in the wind.
9.3.2 Theory
If you introduce tie points, this will lead to the further introduction of
three co-ordinate unknowns for every tie point. These will appear in
the vector x̂ in equation 9.1, and corresponding elements will appear
in the design matrix A . Note that these elements, for tie point k, are
now obtained by partial differentiation with respect to the three tie-point
co-ordinates X k , Yk , Z k , where k appears in the place of j in equations
9.2:
ℓki + vki = Aki x̂k,

with

ℓki = [ Δxki  Δyki ]ᵀ,

the camera co-ordinates for tie point k in image i,

x̂k = [ X̂k  Ŷk  Ẑk ]ᵀ
the (unknown) terrain co-ordinates of the tie point, and

Aki = [ ∂xki/∂Xk  ∂xki/∂Yk  ∂xki/∂Zk ]
      [ ∂yki/∂Xk  ∂yki/∂Yk  ∂yki/∂Zk ].

For this, we need to partially differentiate

        cos κi (Xk − X0i) + sin κi (Yk − Y0i) + (ωi sin κi − ϕi cos κi)(Zk − Z0i)
xki = c ─────────────────────────────────────────────────────────────────────── ,
               ϕi (Xk − X0i) − ωi (Yk − Y0i) + (Zk − Z0i)

        −sin κi (Xk − X0i) + cos κi (Yk − Y0i) + (ϕi sin κi + ωi cos κi)(Zk − Z0i)
yki = c ──────────────────────────────────────────────────────────────────────── .
               ϕi (Xk − X0i) − ωi (Yk − Y0i) + (Zk − Z0i)
Every tie point thus produces two rows in the system of observation
equations for every image in which it is visible, and three elements in the
vector of unknowns irrespective of the number of images in which it is
visible. This means that the design matrix A also gains two rows per
image, and three columns.
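The bookkeeping of the preceding paragraphs can be sketched as a simple count (the scenario below, two images with four GCP measurements each and three tie points seen in both images, is invented):

```python
def block_dimensions(n_images, gcp_measurements, tie_visibility):
    """Row and column counts of the bundle-block design matrix A.
    gcp_measurements: number of (image, ground control point) measurements;
    tie_visibility: per tie point, the number of images it is measured in."""
    n_obs = 2 * gcp_measurements + 2 * sum(tie_visibility)  # two rows per measurement
    n_unk = 6 * n_images + 3 * len(tie_visibility)          # 6 per image, 3 per tie point
    return n_obs, n_unk

rows, cols = block_dimensions(n_images=2, gcp_measurements=8, tie_visibility=[2, 2, 2])
print(rows, cols)   # 28 21: overdetermined, 28 observations for 21 unknowns
```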
Aerotriangulation in e-foto

See figure 9.2 for the main window. Constructing the model requires
1. a sufficient number of known points or ground control points, the
terrain co-ordinates of which were loaded from file in the Project
Manager’s Points tab, and for which camera co-ordinates may have
been measured and saved in the Exterior Orientation module.
2. However, these points may or may not suffice for connecting each
image strongly enough with the other images to allow a strong
determination of the model parameters. Therefore, additionally we
may need to determine photogrammetric tie points, i.e., points that
can be seen in two or more images and thus “tie them together”.
These points should be evenly distributed throughout the overlap
area of the pictures. These tie points will be numbered successively
after the ground control points: PF24, PF25, etc. Unlike control
points, tie points have no known co-ordinates; however, in the course
of the aerotriangulation, terrain co-ordinates will be determined
(estimated) for them, and these will be saved into the Project Manager
module’s Points list.
In order to do phototriangulation, one brings up the main screen by
choosing Phototriangulation from the Execute main menu. There are many
panels in the main window; only two images are displayed at a time, the
Left Image and the Right Image, which can be selected separately.
The user interface is very similar to that of Exterior Orientation: also
here, for some versions of e-foto, one first has to specify a “flight direction”
using a dial-like interface (figure 8.6), invoked by clicking on the aircraft-
with-arrow toolbar icon. Use here the same directions you used for the
individual photographs in Exterior Orientation2 . Note that, as a block of
images used for phototriangulation may consist of multiple strips in which
the aircraft flew in opposite directions, you may have to enter different
values for different images. Setting these directions correctly will make
the phototriangulation adjustment computation converge rapidly. See
figure 9.4, showing the exterior orientation elements computed after only
eight iteration steps.
After setting the flight directions, the computation icon — showing an
aircraft in the act of photographing the terrain — is no longer greyed out.
Click on it.
The module offers a window (figure 9.5) where you can choose which
images to take along in the computation, and which points, both ground
control and tie points. You can move any or all of these between the left
window — which lists what is available — and the right window — listing
what is actually used — before proceeding with the calculation.
You will have to choose the number of iterations allowed, as well as
the precision criteria applied to judge if the computation has converged,
separately for camera location co-ordinates and for camera orientation
angles. These usually do not have to be changed; only the number of
iterations may be increased if convergence is slow.
It may happen that the module complains that the geometry is not
strong enough, and that tie points need to be added; the toolbar offers
buttons for doing this.
2 And consider that, also here, the angle to be entered is κ⁽⁰⁾, which may be either the
flight direction or the direction opposite to it. . .
Figure 9.6. Aerotriangulation point list. Points in two images are listed, with
the list of all points (control points and tie points) in the middle.

Debugging an aerotriangulation adjustment
points. If this condition is not met, you should measure more tie points
in your image pairs.
It is clear from the above that, for successful debugging, it is good to
have an abundance of ground control points! So, include all points you
can find that have their terrain co-ordinates measured, and if you are
yourself responsible for measuring them, make a generous measurement
plan. The time spent on ground measurements will pay itself back when
processing the imagery.
1. Firstly, find the individual images that contain four or more ground
control points. Perform exterior orientation (chapter 8). At least
some of these images should give a correct result, as judged by the
root-mean-square error (RMSE). If not, i.e., if all images give poor
results, you must suspect that there is something wrong with the
general set-up of e-foto (chapter 6); go through everything carefully
again.
2. For those images that do not produce a good result, go through each
of the ground control points (GCP), checking that
(a) the given terrain co-ordinates in the GCP table are correct,
and
(b) the image co-ordinates measured in exterior orientation are
correct, and especially that the points were all identified cor-
rectly in the image.
3. If that doesn’t help, you should remove each of the ground control
points in turn from the computation, to see if this gives a better
result. If you get a good result by removing a GCP, chances are
that it is precisely that point which is in error. This technique is
called “cross-validation”. Of course you can only do this if you have
a sufficient number of GCPs appearing on the image, i.e., five at
least3 .
4. Next, for those images for which the above treatment is inapplicable,
find pairs of images that contain four or more control points, and
a sufficient number of common points — measure additional tie
points if you need to. Perform the aerotriangulation computation
on those pairs. Again, your aim should be to get a correct result,
as judged by the RMSE.
3 Because, if you only have four GCPs, removing one of them, any one, will leave you
with three points only, which will always allow exterior orientation to be performed
without producing any contradictions, even if errors are present. . .
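The cross-validation idea of step 3 can be illustrated on a toy adjustment, with a least-squares plane fit standing in for the full orientation (all points are invented, with a deliberate 5 m blunder planted):

```python
import math

def gauss_solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def fit_rmse(pts):
    """RMSE of a least-squares plane Z = a + b X + c Y through (X, Y, Z) points."""
    rows = [[1.0, X, Y] for X, Y, _ in pts]
    t = [Z for _, _, Z in pts]
    N = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    u = [sum(r[i] * ti for r, ti in zip(rows, t)) for i in range(3)]
    a, b, c = gauss_solve(N, u)
    return math.sqrt(sum((a + b * X + c * Y - Z) ** 2 for X, Y, Z in pts) / len(pts))

# Five fictitious control points on the plane Z = 100 + 0.01 X - 0.02 Y,
# with a 5 m blunder planted in point number 2 (true value 80, recorded 85):
pts = [(0.0, 0.0, 100.0), (1000.0, 0.0, 110.0), (0.0, 1000.0, 85.0),
       (1000.0, 1000.0, 90.0), (500.0, 500.0, 95.0)]
for i in range(len(pts)):
    subset = pts[:i] + pts[i + 1:]
    print(i, round(fit_rmse(subset), 3))
# Only leaving out point 2 makes the misfit collapse to zero: that is the blunder.
```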
10. Stereo restitution
Traditionally, aerial photographs have been acquired and processed in
pairs, allowing the formation of scenes that can be viewed in stereo by
the human eye. In the traditional photogrammetric approach, a pair of
photographs is placed inside a stereo restitution instrument that allows
them to be precisely oriented relative to each other, so that a stereoscopic
image would appear to the human eye. Then, the human operator would
be able by further mechanical manipulation to move a marker or cursor
within both pictures, effectively moving it three-dimensionally within the
viewed scene, and allowing it to be placed on chosen terrain details. The
mechanics of the stereo restitution instrument would then allow three-
dimensional co-ordinates to be recovered for plotting or map-making.
Today, this whole process is done digitally, using photographs in dig-
ital form. The formation of the 3-D scene is done computationally and
presented on a computer screen. Stereo vision, however, the sensation
of depth and three-dimensionality, continues to require the human eye-
brain combination, although there, too, computing technology is making
progress. . .
have known co-ordinates. It suffices that they are visible in both images;
e.g., corners of roofs are well defined and typically used.
1 One should still require these three points to not be on one line. More than three points
would even be better, and the wider they are distributed throughout the model area, the
better.
Figure 10.1. Eliminating the y-direction parallax. In the top figure, there is a
large y parallax, and stereo vision is not possible. In the bottom
figure, the right-side image has been moved in the y direction to
eliminate the parallax.
if everything else is in order, you will suddenly see the stereo image
appear.
We can also describe this more formally: the epipolar lines in both
images must be brought into superimposition and parallel to the eye base,
see figure 10.2.
Also the x-direction parallax needs to be eliminated, but this is less
critical as the human eyes are more flexible in converging. See figure
10.3.
After eliminating the parallax in both directions, you can not only view
the image pair in stereo, but more importantly, you can view the cursor
in stereo. This is called the floating mark. While, on a personal computer
work station, the cursor can be moved left and right using the mouse, the
vertical motion within the model is typically tied to the mouse wheel2 .
Figure 10.2. Epipolar plane and lines in a pair of aerial photographs. The
epipolar plane contains the ground point, the two optical centres
and the two image points of the ground point.
and move it up and down to land softly on the terrain, while it is also
horizontally positioned on a chosen terrain feature. As you move around
the image, you will notice that the parallax elimination may have to be
repeated.
FIGURE 10.4. The floating mark above, at, and below the object surface.
Before starting the work, you should do some basic settings, cf. Figure
10.6:
Pair: the image pair you are working on
Stereo mode: choose Anaglyph here. These are the inexpensive red and
cyan coloured glasses we use for stereo viewing (the alternative
Do not touch the other settings. Direct ▷ Reverse may be useful if the
images were loaded the wrong way round (or you folded your cardboard
glasses the wrong way).
In the stereoplotter toolbar (figure 10.7) we have
◦ Zoom, which lets you place a little yellow rectangle around the area
you want to zoom into: mouse down for the top left corner, drag, and
mouse up again for the bottom right corner (using mouse button 1)
◦ Fit view, which will give you back a view on the full photographs
◦ Move, which lets you move the images in two directions by clicking and dragging
◦ Mark lets you set a marker in either image. For accuracy, make sure
that the image is enlarged enough (by using Zoom).
TABLEAU 10.1. Editing panel. Buttons: Export features as text (export features in text format); Remove feature; End feature; Remove point; Exit Stereoplotter.
2. Check that the two images in the anaglyph superposition are placed
correctly with respect to each other. Remove any parallax in the y
direction completely, and in the x direction enough, so that you can
comfortably see the stereo effect.
3. Select the feature’s type and sub-type, and give it a name (if you
don’t, e-foto will just number your features of each type sequen-
tially). It is a good idea to register all features of the same type
together, so you have to enter this data only once. It is entered into
the middle pane.
4. Activate the Mark button in the toolbar (the crosshair icon) and
place the crosshair cursor on the first point of the feature — in
three dimensions. You should see the crosshair “hanging” over the
terrain, in mid-air. Move it sideways by moving the mouse, and
vertically by rolling your mouse’s wheel.
5. Now, click to register the point.
6. Note that the three-dimensional terrain co-ordinates (“object co-
ordinates”) X , Y , Z are displayed immediately under the anaglyph
superposition.
After this, you can proceed to measure all the points of this feature,
assuming that they are all at the same height. For the roof of a building
this will be the case. For other objects this may not be assumed: e.g., a
street will generally vary in height. Then, you have to use the mouse
wheel to place the floating mark on every feature point before measuring
it.
Next, we click on Add new feature and Insert point mode. Also, on the toolbar the crosshair (“Mark”) button should be selected.
Position the crosshairs on the first corner of the station building using
the mouse, move the mark vertically using the mouse wheel until it is at
the same level as the station roof, and click.
Now you should have the four point co-ordinate sets in the feature list:
3 You may verify this by measuring a ground control point as a point feature.
poorly guarded secret that with a little training, one can place the
floating mark three-dimensionally even without any glasses at all.
One uses, instead of anaglyphic glasses, the “Detailview” window
on the right hand side of the screen. It enlarges the same detail
seen in the left and right images, and, after precisely eliminating
the y parallax (by SHIFT-Mouse-1 or CTRL-Mouse-1 with the hand
cursor), one uses the mouse wheel to position the twin cursors on
the same point in both images. Consider, though, that the refresh
of the Detailview window is a bit slow, so give it time.
The method can be sped up by exploiting the circumstance that the
points of a rooftop outline are usually all on the same level. This
means that one can use the hand cursor to travel from point to
point without using the mouse wheel. This is how Figure 10.9 was
produced.
11. Measuring in aerial images
11.1 Image pyramids
It is typical for photogrammetric software that uses statistical analysis
techniques on its image data to construct an image pyramid first. This
means that, in addition to the original image, an image is constructed at
half the resolution, occupying one quarter of the disk space; then, one at
one-quarter of the resolution; etc. See figure 11.1. One has to agree on a
technique to derive the pixel values for every layer of the pyramid from
the layer directly below it: simplest is just averaging 2 × 2 pixels. More
sophisticated methods exist too.
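The 2 × 2 averaging scheme can be sketched in a few lines of Python with NumPy (a minimal illustration, assuming image dimensions that are powers of two; not the code of any particular photogrammetry package):

```python
import numpy as np

def build_pyramid(image, min_size=1):
    """Build an image pyramid by repeated 2 x 2 averaging.

    Each level has half the resolution of the one below it, and so
    occupies one quarter of the storage. For simplicity, the image
    dimensions are assumed to be powers of two.
    """
    levels = [image]
    while min(image.shape) > min_size:
        # Average each non-overlapping 2 x 2 block of pixels.
        image = 0.25 * (image[0::2, 0::2] + image[1::2, 0::2]
                        + image[0::2, 1::2] + image[1::2, 1::2])
        levels.append(image)
    return levels

# A 4 x 4 test image: each level halves the resolution.
img = np.arange(16, dtype=float).reshape(4, 4)
pyramid = build_pyramid(img)
print([level.shape for level in pyramid])   # [(4, 4), (2, 2), (1, 1)]
```

The whole pyramid takes at most a third more storage than the original image, as the geometric series 1 + 1/4 + 1/16 + . . . converges to 4/3.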
Note that e-foto does not use image pyramids. The widely used ERDAS
LPS software however does.
$$
\operatorname{Corr}\{\Delta x, \Delta y\} = \frac{\operatorname{Cov}\{\Delta x, \Delta y\}}{\sqrt{\operatorname{Var}\{\Delta x\}\,\operatorname{Var}\{\Delta y\}}}, \tag{11.1}
$$
with
$$
\operatorname{Cov}\{\Delta x, \Delta y\} \overset{\text{def}}{=} \int_{-d}^{d}\!\!\int_{-d}^{d} \bigl(g_1(x, y) - \overline{g_1}\bigr)\bigl(g_2(x + \Delta x, y + \Delta y) - \overline{g_2}\bigr) \,\mathrm{d}x\,\mathrm{d}y,
$$
$$
\operatorname{Var}\{\Delta x\} \overset{\text{def}}{=} \int_{-d}^{d}\!\!\int_{-d}^{d} \bigl(g_1(x, y) - \overline{g_1}\bigr)^2 \,\mathrm{d}x\,\mathrm{d}y,
$$
$$
\operatorname{Var}\{\Delta y\} \overset{\text{def}}{=} \int_{-d}^{d}\!\!\int_{-d}^{d} \bigl(g_2(x + \Delta x, y + \Delta y) - \overline{g_2}\bigr)^2 \,\mathrm{d}x\,\mathrm{d}y,
$$
1 Folks who have studied statistics will notice the similarity with the statistical definition
of correlation: here, the place of the expectancy operator is taken by the double integral
over the patch area. The overbar notation used for average is also just shorthand for
that.
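In discrete form, with the double integrals replaced by sums over the pixels of the patch, the correlation of equation (11.1) can be sketched as follows (Python with NumPy, a minimal illustration):

```python
import numpy as np

def correlation(patch1, patch2):
    # Discrete analogue of equation (11.1): subtract each patch's
    # average (the overbar notation) and form the normalized
    # cross-correlation over the patch area.
    g1 = patch1 - patch1.mean()
    g2 = patch2 - patch2.mean()
    return (g1 * g2).sum() / np.sqrt((g1 ** 2).sum() * (g2 ** 2).sum())
```

For identical patches the value is 1; the value is also insensitive to a difference in brightness or contrast between the two patches. Searching over the offsets Δx, Δy for the maximum of this value finds the best match.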
The pyramid approach helps here: perform the correlation calculation
first on the smallest image at the top of the pyramid, to find an approxi-
mate vector $\begin{bmatrix}\Delta x_1 & \Delta y_1\end{bmatrix}^{\mathsf T}$; then, use this to limit the search in the next
lowest layer of the pyramid to the small square
$$
\begin{bmatrix} \Delta x_2 \\ \Delta y_2 \end{bmatrix} = \begin{bmatrix} \Delta x_1 \\ \Delta y_1 \end{bmatrix} + \begin{bmatrix} i \\ j \end{bmatrix}, \qquad i, j = -1 \ldots 1.
$$
In this way, the computational work is limited, even for a generous patch
semi-size d , and of order log n rather than n2 .
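One refinement step of this coarse-to-fine search might look as follows (a sketch: np.roll, which wraps around the image edges, stands in for proper patch extraction, acceptable for small offsets):

```python
import numpy as np

def correlation(g1, g2):
    # Normalized correlation of two equal-size patches, cf. eq. (11.1).
    a = g1 - g1.mean()
    b = g2 - g2.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def refine(img1, img2, dx, dy):
    """One coarse-to-fine step: try the 3 x 3 square of offsets around
    the approximate (dx, dy), as in the formula above, and keep the
    best-correlating one."""
    def score(offset):
        sx, sy = offset
        # Shift img2 back by the candidate offset and compare.
        shifted = np.roll(np.roll(img2, -sy, axis=0), -sx, axis=1)
        return correlation(img1, shifted)
    candidates = [(dx + i, dy + j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    return max(candidates, key=score)
```

Repeating this step once per pyramid layer visits only a handful of candidate offsets per layer, which is where the logarithmic operation count comes from.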
Here ∆ x, ∆ y are the co-ordinate offsets between the centres of the patches,
α (alpha) describes the difference in contrast, and β (beta) the difference
in brightness. The parameters a, b, c, d together describe the geometrical
affine transformation.
Once homologous point pairs have been identified, one can search
for nearby points by recursively applying the same procedure, stepping
in four different directions. The range of ∆ x, ∆ y values to look at will
be limited if the terrain is smooth. In this way, the whole area can be
matched.
detail, but involves the building of a pyramid of images, produced from the
original image by applying Gaussian blurs of widths in increasing powers
of 2, and subtracting, at every step, the similarly constructed Gaussian-blur
image for the next lower power of 2 (“difference of Gaussians”, see
figure 11.2). This neatly separates the details on the various spatial
scales, defined by a binary “scale number” k, into layers of a pyramid.
In this pyramid, there are three co-ordinates: the x and y pixel co-ordinates of the original image, and the “blurring scale” k. A point is
declared a point of interest if this “difference-of-Gaussians” value assumes
a local extremum in this three-dimensional space.
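A sketch of the construction in Python with NumPy (the kernel widths are illustrative, and real implementations of SIFT differ in many details):

```python
import numpy as np

def gaussian_kernel(sigma):
    # Discrete 1-D Gaussian, truncated at 3 sigma and normalized.
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: filter the rows, then the columns.
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def dog_stack(img, sigmas=(1, 2, 4, 8)):
    """Difference-of-Gaussians layers: blur at widths in increasing
    powers of 2 and subtract, at every step, the blur at the next
    lower power of 2."""
    blurred = [blur(img, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```

A point (x, y, k) would then be declared a point of interest where this stack attains a local extremum over all three co-ordinates.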
An alternative variant of SIFT, using wavelets instead of differences of
Gaussians, is also popular. Wavelets are well suited for this hierarchical
decomposition by scale in powers of 2.
12. Digital Elevation Models
12.1 Infrastructure applications
Height information of good quality, geographical distribution and spatial
resolution is a resource that is essential for the development of a country,
more precisely, for the construction of its infrastructure. The reason for
this is that fluids, e.g., water, flow under the influence of gravity, and
the direction in which they flow, and the energy that is released when
they flow, depend on the height differences in the terrain. This applies
to drinking or irrigation water, naturally flowing water from rainfall, in
rivers and in the soil, and to sewage. It even applies to traffic, requiring
roads and railroads to be built with only moderate changes in height
along their trajectories.
There are a number of techniques for establishing a height system, or
vertical datum, in a country. More traditional techniques include precise
levelling, which however is laborious. More rapid, modern techniques
use satellite positioning; however, in order to produce the proper kind
of heights related to the gravity field, i.e., orthometric or normal heights
“above sea level”, one needs a physical model of the figure of sea level, i.e.,
a geoid model. We shall discuss this problem later on.
The detailed description or modelling of heights in a smaller area often
takes the form of digital elevation models, or DEMs. These DEMs are
needed in connection with large infrastructure projects: roads, railroads,
bridges, reservoir dams, irrigation works, mobile telephony link towers,
etc. etc.
DEMs are typically raster files describing a larger or smaller area. An
alternative format is triangulation, e.g., Delaunay1 triangulation: points
are given, connected by edges that form triangles approximating the shape
1 Boris Delaunay (1890 – 1980) was a Russian mathematician, mostly remembered for
[Figure: a height surface h(x, y) over the x and y axes, with sample points marked.]
from some oblique direction, making the relief visible2 . The merit of this
approach is that it visualizes not just the heights but also the slopes of
the terrain.
Of course the most obvious technique for representing terrain heights
is as a perspective image, showing it as a bird’s eye view, again from some
oblique direction.
Both the shading technique and the perspective technique exploit deep
properties of the human vision system.
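The shading technique can be sketched as follows (a minimal hillshade computation in Python with NumPy, not the algorithm of any particular package): the brightness of each cell is the cosine of the angle between the local surface normal and the direction towards the light source.

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, elevation_deg=45.0, cellsize=1.0):
    """Shaded relief of a DEM, illuminated from the given azimuth and
    elevation angle (a conventional default is light from the
    north-west, 45 degrees above the horizon)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    # Terrain slopes from finite differences.
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    # Unnormalized surface normal is (-dz/dx, -dz/dy, 1).
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(dem)
    norm = np.sqrt(nx ** 2 + ny ** 2 + nz ** 2)
    # Unit vector pointing towards the light source.
    lx = np.cos(el) * np.sin(az)
    ly = np.cos(el) * np.cos(az)
    lz = np.sin(el)
    return np.clip((nx * lx + ny * ly + nz * lz) / norm, 0.0, 1.0)
```

Because the brightness depends on the slopes rather than on the heights themselves, this is precisely why shading makes the relief visible.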
from which they are derived. Constructing the raster requires going
through a pair of images and finding corresponding locations, so-called
homologous points. This is a challenging task to do automatically or
semi-automatically. One proceeds in the following steps:
1. One starts by providing a number of seed points, points for which
three-dimensional terrain co-ordinates X , Y , Z are known. These
could be, e.g., the ground control points of the aerial photography
block.
2. The software then undertakes a search process outward from the
seed points, seeking neighbouring little patches of terrain visible
in both images. Correspondence between patches can be established,
e.g., by correlating the patches. Then the terrain height is found
for which this correlation is maximal. The new point added to the
data base is then the centre of this patch. This can be done with
good computational efficiency because the height is expected not
to change much from that of the previous point. Nevertheless the
number of numerical operations involved is huge, due to the large
number of pixels in each image.
3. In order to cut down on the calculation effort, one may restrict
the calculation to a sparser grid if it is known that the terrain is
reasonably smooth.
4. After a grid of points with known heights has been formed through-
out the area, an interpolation is undertaken creating a smooth
model surface with a height value for every pixel in the DEM.
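The steps above can be caricatured as follows (a toy sketch, not e-foto's actual algorithm: a generic score function stands in for the image-correlation maximization of step 2, and for each new grid point only heights within a small band around the neighbouring point's height are tried):

```python
from collections import deque

def grow_dem(width, height, seeds, score, h_step=1.0, band=3):
    """Region-growing DEM extraction (toy sketch).

    seeds: list of (x, y, h) with known terrain heights.
    score: function (x, y, h) -> matching quality, standing in for
           the image correlation of the two patches at height h.
    """
    dem = {}
    queue = deque()
    for x, y, h in seeds:
        dem[(x, y)] = h
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        h0 = dem[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in dem:
                # Only heights near the neighbour's height are tried,
                # which keeps the search cheap on smooth terrain.
                trials = [h0 + i * h_step for i in range(-band, band + 1)]
                dem[(nx, ny)] = max(trials, key=lambda h: score(nx, ny, h))
                queue.append((nx, ny))
    return dem

# Toy "terrain": an inclined plane; the score peaks at the true height.
true_height = lambda x, y: 0.5 * x + 0.25 * y
score = lambda x, y, h: -abs(h - true_height(x, y))
dem = grow_dem(8, 8, seeds=[(0, 0, 0.0)], score=score, h_step=0.25)
```

Starting from a single seed in one corner, the whole 8 × 8 grid is filled in, each height found within the narrow search band around its already determined neighbour.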
[Figure: the DEM Extraction toolbar. Buttons: Interpolate, Open seed editor, DEM to grid, Open stereo plotter, Load extracted DEM, Save DEM, Automatic DEM extraction, Load interpolated DEM, Abort current operation, Save DEM, Exit / done.]
The “seed editor” that comes up when clicking on the button is shown
in Figure 12.6. E-foto generates a small number of “seed points” auto-
matically, actually the ground control points. The user interface should
be rather familiar by now. The idea is to place a number of seeds,
homologous points in each stereo pair, from which the software moves
gradually outward, in a search process of areal expansion, finding further
homologous point pairs with their three-dimensional co-ordinates to be
used for building the DEM. If the seed points generated automatically
are not sufficient, one can manually add more.
Good seed points meet the following requirements:
1. They are spread evenly throughout the area of overlap of the image
pair. Place new seed points especially where there are large
empty areas between the already existing seed points.
2. They are sharply defined in the terrain: e.g., the corners of buildings
or other structures, but at ground level, or corners of white road
markings like pedestrian crossings.
3. Make sure the surrounding terrain is smooth, so the areal expansion
process will work. Don’t use points on the roofs of buildings3 ; they
won’t allow areal expansion into the surrounding terrain.
4. Don’t measure “quasi-points”, like in figure 12.5, which are not
really unique three-dimensional points, as may occur at non-level
road crossings. Also, don’t measure anything that moves between
3 One can however load the information from feature extraction (the Stereoplotter module)
into DEM Extraction.
FIGURE 12.6. Seed editor main window. The seed points are in yellow, and are listed in the seed table bottom left. Seed points can be added using the buttons bottom right.
exposures, like cars, animals, trees in the wind, or even the shadows
of lamp-posts!
An important feature not shown in the figure is the tick box in the mid-
dle marked “Show matching pairs”. If you tick this box, the result of the
previous DEM extraction / areal expansion run — i.e., the homologuous
points already found — will be displayed as red crosses; see figure 12.7.
This allows you to judge in which areas further seed points are needed to
make a more complete coverage possible.
FIGURE 12.7. Seed editor with “Show matching pairs” activated, so the red areas show the coverage already obtained using areal expansion from the given set of seed points. Several seed points have been added to improve coverage.
FIGURE 12.8. The areal expansion process, from a seed point outward by growth steps. In this image, the growing step is two pixels, so a sparse grid of determined heights is constructed that still needs to be interpolated, see section 12.7. The pink background shows obtained coverage, like the red in figure 12.7.
The further menus of DEM Extraction are depicted below. Each contains
default values for the parameters, with which the algorithm is known
from experience to function well in general. Of course, more adventurous
users can experiment with these values.
FIGURE 12.12. Result of Areal Expansion (Region Growing), for the Rio de Janeiro area.
fields, savannah or desert. Forest might work, but the outcome could be
that the DEM contains the forest canopy rather than the ground.
One available option is to import a DEM that has been produced by
other means, e.g., airborne laser scanning or synthetic aperture radar
(SAR) from aircraft or satellite.
For urban areas, the obvious alternative is to import a feature file that
has been produced by the Stereoplotter module, containing the three-di-
mensional descriptors of point, line and polygon features like buildings,
roads, parks etc. The DEM Extraction module can then interpolate an
elevation model from this, good enough for use in orthophoto production.
See figure 12.15.
For an urban area, a digital elevation model (DEM) will contain many
buildings, which is not what we usually call “terrain”. So, if we are inter-
ested in generating a DTM, a “digital terrain model”, that represents
13. Orthophoto mapping

13.1 Rationale
Before a set of aerial photographs, obtained as part of an aerial mapping
project, can be used for actually drawing a map, it is necessary to extract
the information in them in the form of three-dimensional co-ordinates.
This is typically done by going through the sequence of interior and
exterior orientation and formation of a stereo model, using given points
in the terrain of which the true geocentric co-ordinates are known. After
this, three-dimensional co-ordinates of arbitrary terrain points can be
recovered from the photographs.
Digital elevation models or DEMs are a prerequisite for orthophoto
mapping.
If we know the height of a point in the photograph above the refer-
ence surface (e.g., sea level), we may use this information to derive the
true horizontal position ( X , Y ) from the position in the photograph, and
from our knowledge of the absolute position ( X 0 , Y0 , Z0 ) and absolute
orientation of the camera optical centre. See figure 13.1.
Another way to look at this is that we want to map the landscape as it
would look from infinitely far away, looking straight down from the zenith.
This is known in cartography as an orthographic projection (Greek: ortho
= straight).
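As a toy sketch of this idea in Python (a perfectly vertical photograph is assumed, so the camera's rotation matrix is omitted; nearest-neighbour resampling is used; all parameter names are illustrative):

```python
import numpy as np

def orthophoto(photo, dem, X0, Y0, Z0, f, pix, res):
    """For every output cell (X, Y), look up the terrain height Z in
    the DEM, project the ground point through the optical centre
    (X0, Y0, Z0) into the photo plane (focal length f, pixel size
    pix), and copy the nearest photo pixel into the ortho image.
    Ground co-ordinates of DEM cell (ix, iy) are (ix * res, iy * res).
    """
    ny, nx = dem.shape
    rows, cols = photo.shape
    ortho = np.zeros(dem.shape)
    for iy in range(ny):
        for ix in range(nx):
            X, Y, Z = ix * res, iy * res, dem[iy, ix]
            s = f / (Z0 - Z)          # projection scale at this height
            c = int(round((X - X0) * s / pix)) + cols // 2
            r = int(round((Y - Y0) * s / pix)) + rows // 2
            if 0 <= r < rows and 0 <= c < cols:
                ortho[iy, ix] = photo[r, c]
    return ortho

# Over flat terrain at Z = 0 and with this geometry, the photo maps
# one-to-one onto the ortho grid.
photo = np.arange(64.0).reshape(8, 8)
ortho = orthophoto(photo, np.zeros((8, 8)),
                   X0=4.0, Y0=4.0, Z0=1.0, f=1.0, pix=1.0, res=1.0)
```

The height-dependent scale s is what removes the “falling over” of elevated objects: a larger terrain height Z moves the sampling position outward in the photograph, to where the elevated point was actually imaged.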
The procedure for doing so may be automated, allowing the automatic
drawing, by optical projection, of the aerial photograph onto a light-
sensitive emulsion with every pixel being in the correct ( X , Y ) position.
This method is known as orthophoto mapping.
According to the World Bank, orthophoto mapping is the preferred
method for cadastral mapping (Timo Linkola, personal communication;
see also, e.g., Hailu and Harris (2014)).
FIGURE 13.1. Orthophoto mapping: the camera optical centre and the correction in the image. Notice how buildings are “falling over” outside the image centre (photo detail from Rio de Janeiro). The correction in the image, computed using a digital elevation model of the terrain, is applied in an orthophoto mapping instrument.
14. Applications

14.1 Cadastral system, zoning, and aerial mapping
In a modern, high-investment economy, real estate property and real
rights or encumbrances on it — such as mortgage — are essential finan-
cial instruments. They need to be reliably recorded, including property
boundaries. For this, there is the cadastral system. The precision with
which boundaries need to be recorded, depends on the value of the land:
in areas of low land value, such as deserts, the precision requirement is
low, whereas in areas of high value, such as urban areas and city centres,
the precision requirement will be high. This distinction is expressed as
various measurement classes.
As one example we give here the classification presented in the Finnish
regulatory document “Drafting a Local Zoning Map”, also known as
JHS 185. See table 14.1.
It is possible to use aerial mapping techniques for taking inventory of
real-estate boundaries, if these boundaries are already marked in the
terrain by physical features. Such features may be hedges, stone fences,
streams, or whatever physical distinguishing marks that are somewhat
permanent and visible from the air. This is the so-called “General Bound-
aries” system which is in use in many Anglo-Saxon countries. It works
well provided the precision required is not very great, and if the system
is supported by law and agreement by land owners to place or grow these
boundary markers between their properties and accept their legitimacy.
Advantages are speed and economy.
Where the land is more valuable and mapping precision requirements
are higher, greater precision and definiteness of boundary markers are
required, and proper surveys have to be carried out. This can be done with
traditional geodetic survey techniques, but also from the air. However,
TABLEAU 14.1. Measurement classes according to JHS 185. Survey areas are divided into three measurement classes. The measurement class defines the accuracy of measurement and graphical rendering.
Measurement class 1: Urban areas where land is very valuable and where there is a valid
local zoning plan with a binding parcel division, or a construction ban awaiting the
drafting of such a plan.
In surveys intended to be incorporated into the municipal GIS for use in technical
planning requiring great accuracy, a higher accuracy level may be used (measurement
class 1e).
Map scale 1 : 500 or 1 : 1000. Parcel boundary co-ordinate accuracy (mean error or
standard deviation) ±0.085 m.
Measurement class 2: Urban areas for which the local zoning plan to be drafted does not
require binding parcel division.
Map scale 1 : 1000 or 1 : 2000. Parcel boundary co-ordinate accuracy ±0.14 m.
Measurement class 3: Areas zoned as lake or sea shore, lake or sea shore areas, and other
areas where the land is clearly more valuable than agricultural or forest land, e.g.,
so-called dispersed settlements (https://en.wikipedia.org/wiki/Dispersed_settlement).
Map scale is usually 1 : 2000. Parcel boundary co-ordinate accuracy ±0.21 m.
A scale of 1 : 4000 or 1 : 5000 may be accepted if a zoning plan can be drafted with such
a map without essentially compromising the requirements to be placed on it.
Measurement class 4: All other (unzoned) areas.
A digital map has no scale. The accuracy of the collected data corresponds to that on a map of
a certain scale. One should not print out graphical products from a digital data base at a scale
greater than the accuracy of the data allows.
In choosing the measurement method and the scale of the map, one must take into account
surveys carried out earlier in the area or its surroundings, and the extent and character of the
area.
the work of marking (signalizing) the boundaries so they are visible from
the air can be substantial. It may be speedier and cheaper to directly
measure the boundaries using, e.g., an automated tacheometer and prism
staff, or GPS in real-time kinematic (RTK) mode. The latter requires a
second GPS receiver or base station on a known location nearby, adding
to cost. A careful comparison of logistics and costs, both of hardware and
of labour, should be made before the choice of method.
14.2 Land-use inventory and aerial mapping
Here the techniques discussed in the previous chapter find application.
◦ Mapping current land use patterns
◦ Assessing demand on irrigation water, transport capacity, electric
grid power, . . .
◦ Agricultural production estimates for domestic use and export,
preparation for shortages due to drought
◦ Urbanization, also uncontrolled, and related infrastructure de-
mand.
15. Datums in geodesy and photogrammetry
out that there usually are things that need to be fixed in order to arrive
at a unique solution.
E.g., when adjusting — i.e., computing — a levelling network, one must
conventionally fix the height of one point in order to find unique height
values for all the points in the network. This is an example of a datum
defect: a number that must be provided before a unique solution becomes
possible. Typically, the point will be at sea level, close to a tide gauge
monitoring local sea-level variations. The height value chosen for the
point will make mean sea level at this tide gauge equal to zero. This
conventional choice defines a geodetic datum.
For two- or three-dimensional co-ordinate reference systems, the datum
fixing issue becomes more complicated, as we shall see.
15.3 Geocentricity
In geodesy and photogrammetry we express co-ordinates, of ground con-
trol points, of the photographing aircraft, or of the terrain being mapped,
in some co-ordinate reference frame. This frame will be uniquely tied to
the Earth by measurements, and by fixing the geodetic datum.
Traditionally, datums were always local, or at best national or regional.
This was because measurement technologies available were terrestrial
in nature. Using theodolites and spirit levels, one could only measure
within small areas on the Earth’s surface. Thus, these techniques could
not provide an overview over large areas, or of the Earth as a whole.
This changed with the introduction of satellite positioning technologies,
i.e., GNSS, Global Navigation Satellite Systems. The first modern such
system was the American GPS, the Global Positioning System. These
technologies allow the creation of geocentric datums, tied to the Earth’s
centre of mass as their origin, and to the Earth’s rotation axis for their
orientation.
Such global datums are based on so-called terrestrial reference systems
(TRS), e.g., the International Terrestrial Reference System (ITRS) of the
International Association of Geodesy (IAG). This system has been the
basis for a series of global datums or realizations called International
Terrestrial Reference Frames (ITRF).
15.4 Connecting local and global reference frames
The problem arises of how to connect these global, geocentric reference
frames or datums — which are typically highly accurate — with the
traditional, non-geocentric, often national, datums. This connection must
be made in situations where, e.g., the imaging aircraft has been positioned
in flight using GNSS technology, but the co-ordinates of ground control
points visible in the imagery are provided in a traditional datum. In
the days before satellite positioning you wouldn’t have to worry about
this. Nowadays, however, GNSS on board the aircraft produces geocentric
locations (X₀, Y₀, Z₀) of the aircraft for the moment each image was
exposed. These locations must be transformed before use to map positions
X , Y and heights above sea level Z in the same reference frame as used for
the ground control points. And this is also the reference frame — for both
map co-ordinates and heights above sea level — that the photogrammetric
mapping mission will use for its end product.
There are two issues here:
1. in which co-ordinate reference frame the ground control is given,
and
2. in which co-ordinate reference frame we wish the final result of
the mapping mission to be presented.
Relatively simple techniques based on local rectangular co-ordinates are
still commonly used when all point co-ordinates are given in the same
datum. Typically this will be a local and non-geocentric datum. Then,
many approximations are valid due to the limited size of the study area,
without loss of precision.
[Figure: topocentric co-ordinates. The topocentric axes x, y, z at observation point C, the azimuth A and zenith angle ζ of target point P, and the geocentric axes X, Y, Z at origin O.]
The transformation between a point's topocentric spherical co-ordinates
and its topocentric rectangular co-ordinates is
$$
\begin{bmatrix} x \\ y \\ z \end{bmatrix} =
\begin{bmatrix} s \cos A \sin\zeta \\ s \sin A \sin\zeta \\ s \cos\zeta \end{bmatrix}.
$$
The inverse transformation is
$$
s = \sqrt{x^2 + y^2 + z^2}, \qquad \zeta = \arccos\frac{z}{s}, \qquad A = 2 \arctan\frac{y}{x + \sqrt{x^2 + y^2}}.
$$
The last formula is known as the half-angle formula and avoids the problem of finding the correct quadrant2 for the azimuth angle A. The result
is in the interval (−180◦ , 180◦ ], and negative values may be incremented
by 360◦ to make them positive.
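Both directions of the transformation can be sketched as follows (Python; rect_to_spherical uses atan2, which solves the same quadrant problem):

```python
from math import atan2, cos, sin, sqrt, radians, isclose

def spherical_to_rect(s, A, zeta):
    """Topocentric spherical co-ordinates (distance s, azimuth A,
    zenith angle zeta, angles in radians) to rectangular (x, y, z)."""
    return (s * cos(A) * sin(zeta),
            s * sin(A) * sin(zeta),
            s * cos(zeta))

def rect_to_spherical(x, y, z):
    """Inverse transformation. atan2(y, x) returns the azimuth in the
    correct quadrant, in the interval (-pi, pi]."""
    s = sqrt(x * x + y * y + z * z)
    return s, atan2(y, x), atan2(sqrt(x * x + y * y), z)

# Round trip: a target 100 m away, azimuth 30 deg, zenith angle 60 deg.
x, y, z = spherical_to_rect(100.0, radians(30.0), radians(60.0))
s, A, zeta = rect_to_spherical(x, y, z)
```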
$$
\mathbf{x} = R_1 R_2 R_3 \left( \mathbf{X} - \mathbf{X}_C \right),
$$
2 In computer code, one may use the function atan2 ( y, x) for the same purpose.
FIGURE 15.2. From the geocentric to the topocentric system: the astronomical co-ordinates Φ, Λ, the plumbline, and the geoid profile. The matrix R₁ mentioned in the text is left out here.
$$
\mathbf{X} = \mathbf{X}_C + R_3^{\mathsf T} R_2^{\mathsf T} R_1^{\mathsf T}\, \mathbf{x},
$$
as can be easily derived by multiplying the first equation from the left by
the matrix $R_1^{\mathsf T} = R_1^{-1}$, then by the matrix $R_2^{\mathsf T}$, and then by the matrix $R_3^{\mathsf T}$,
and finally by moving $\mathbf{X}_C$ to the other side.
$$
R_3 = \begin{bmatrix} \cos\Lambda & \sin\Lambda & 0 \\ -\sin\Lambda & \cos\Lambda & 0 \\ 0 & 0 & 1 \end{bmatrix}.
$$
Seen from the direction of the z axis we see (figure 15.3) that
$$
\begin{aligned}
x' &= x \cos\Lambda + y \sin\Lambda, \\
y' &= -x \sin\Lambda + y \cos\Lambda.
\end{aligned}
$$
[Figure 15.3: the rotation about the z axis by the angle Λ, taking the axes x, y into x′, y′.]
$$
R_2 = \begin{bmatrix} -\sin\Phi & 0 & \cos\Phi \\ 0 & 1 & 0 \\ +\cos\Phi & 0 & \sin\Phi \end{bmatrix}. \tag{15.2}
$$
[Figure: plumbline deflection. The plumbline, with astronomical co-ordinates Φ, Λ, the ellipsoidal normal, with geodetic co-ordinates ϕ, λ, and the geodetic measurement network.]
This means that, in total, five (5) parameters must be fixed con-
ventionally to starting values in a datum point in order to obtain
a datum definition. In this way the reference ellipsoid surface on
which the geodetic calculations are done is positioned in space.
An example of such a geodetic datum is ED50 (European Datum
1950), which used the Frauenkirche, a church in Munich, Germany,
as its datum point.
2. We minimize the regional plumbline deflections and geoid heights
in a least-squares sense:
$$
\text{minimize } \sum_{i=1}^{n} \left( \xi_i^2 + \eta_i^2 \right) \quad\text{and}\quad \sum_{i=1}^{n} N_i^2,
$$
3 Not to be confused with the terrain co-ordinates of the camera optical centre, for which
we use the same notation.
4 Here, ρ is the length of one radian expressed in degrees, ρ ≈ 57.3°; multiplied by 3600
it gives the length of one radian in seconds of arc, ρ″ ≈ 206 265.
[Figure 15.5: the North American datum (NAD) and the European datum (ED50, ED87), each tied to the geoid on its own continent, with no measurements across the North Atlantic; the centres of their reference ellipsoids are offset from the centre of mass of the Earth.]
the centre of mass of the Earth. And this offset will be different for every
different geodetic datum, see figure 15.5.
Modern datums or co-ordinate reference frames are based on satellite
positioning techniques. These include the international geodetic commu-
nity's International Terrestrial Reference Frames (ITRF), like ITRF2008,
or in Europe, EUREF89, and the various WGS84 datums created by the
United States Defense Department’s operators of the Global Positioning
System GPS. For such datums, due to the use of satellites orbiting the
Earth, the co-ordinate origin will automatically be in the Earth’s centre
of mass. Locations are thus always obtained geocentrically. Only the
uncertainties of the measurement process itself, including the satellite
orbit determination, may still cause offsets from geocentricity on the
several centimetre level.
the network, and its orientation, are fixed with the aid of additional,
astronomical information. This datum fix is done by
1. fixing, for either a single point, or for an ensemble of points, the
direction of the ellipsoidal normal ϕ, λ and the height h from the
ellipsoid. Or alternatively, by
2. fixing the location of the centre X 0 , Y0 , Z0 of the reference ellipsoid
used, relative to the Earth’s centre of mass.
Here we will show how these two alternative ways of datum fixing are
related to each other.
Rectangular geocentric co-ordinates may be written as follows, in spherical approximation:
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = (R + h) \begin{bmatrix} \cos\varphi \cos\lambda \\ \cos\varphi \sin\lambda \\ \sin\varphi \end{bmatrix} + \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix},
$$
where ϕ, λ, h are latitude, longitude and height above the reference ellipsoid, and the centre of this sphere is located at $\begin{bmatrix} X_0 & Y_0 & Z_0 \end{bmatrix}^{\mathsf T}$.
The precisely geocentric co-ordinates $\begin{bmatrix} X & Y & Z \end{bmatrix}^{\mathsf T}$ of the same point
can now be written out in two different datums (i.e., referred to two
different reference ellipsoids). In other words, the vector expressions
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \left( R + h^{(1)} \right) \begin{bmatrix} \cos\varphi^{(1)} \cos\lambda^{(1)} \\ \cos\varphi^{(1)} \sin\lambda^{(1)} \\ \sin\varphi^{(1)} \end{bmatrix} + \begin{bmatrix} X_0^{(1)} \\ Y_0^{(1)} \\ Z_0^{(1)} \end{bmatrix}
$$
and
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \left( R + h^{(2)} \right) \begin{bmatrix} \cos\varphi^{(2)} \cos\lambda^{(2)} \\ \cos\varphi^{(2)} \sin\lambda^{(2)} \\ \sin\varphi^{(2)} \end{bmatrix} + \begin{bmatrix} X_0^{(2)} \\ Y_0^{(2)} \\ Z_0^{(2)} \end{bmatrix}
$$
must be identical. Here, the superscripts (1) and (2) denote geodetic co-ordinates ϕ, λ, h computed in two different datums, on two different reference
ellipsoids, with their origins $\begin{bmatrix} X_0 & Y_0 & Z_0 \end{bmatrix}^{\mathsf T}$ in two different geocentric
locations. We assume here that the orientations of the co-ordinate axes
are the same in both datums.
Subtracting the first expression from the second yields
$$\left(R + h^{(2)}\right)\begin{bmatrix} \cos\varphi^{(2)}\cos\lambda^{(2)} \\ \cos\varphi^{(2)}\sin\lambda^{(2)} \\ \sin\varphi^{(2)} \end{bmatrix} - \left(R + h^{(1)}\right)\begin{bmatrix} \cos\varphi^{(1)}\cos\lambda^{(1)} \\ \cos\varphi^{(1)}\sin\lambda^{(1)} \\ \sin\varphi^{(1)} \end{bmatrix} + \begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix} = \Delta\left\{ (R + h)\begin{bmatrix} \cos\varphi\cos\lambda \\ \cos\varphi\sin\lambda \\ \sin\varphi \end{bmatrix}\right\} + \begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix} = 0,$$
where $\Delta$ denotes the difference, datum (2) minus datum (1). Linearizing the $\Delta\{\cdot\}$ term with respect to $h$, $\varphi$ and $\lambda$ gives
$$\begin{bmatrix} \cos\varphi\cos\lambda & -(R+h)\sin\varphi\cos\lambda & -(R+h)\cos\varphi\sin\lambda \\ \cos\varphi\sin\lambda & -(R+h)\sin\varphi\sin\lambda & +(R+h)\cos\varphi\cos\lambda \\ \sin\varphi & +(R+h)\cos\varphi & 0 \end{bmatrix}\begin{bmatrix} \Delta h \\ \Delta\varphi \\ \Delta\lambda \end{bmatrix} + \begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix} = 0.$$
[Figure 15.6. A datum transformation between two reference ellipsoids: the ellipsoidal normals with directions ϕ₁, λ₁ and ϕ₂, λ₂; the plumbline with direction Φ, Λ; ellipsoidal heights h₁, h₂; orthometric height H; the geoid; geoid undulations N₁, N₂; and plumbline deflections ξ₁, ξ₂.]
Because $h \ll R$:
$$\begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix} \approx \begin{bmatrix} -\cos\varphi\cos\lambda & +R\sin\varphi\cos\lambda & +R\cos\varphi\sin\lambda \\ -\cos\varphi\sin\lambda & +R\sin\varphi\sin\lambda & -R\cos\varphi\cos\lambda \\ -\sin\varphi & -R\cos\varphi & 0 \end{bmatrix}\begin{bmatrix} \Delta h \\ \Delta\varphi \\ \Delta\lambda \end{bmatrix} = $$
$$= \begin{bmatrix} -\cos\varphi\cos\lambda & +R\sin\varphi\cos\lambda & +R\cos\varphi\sin\lambda \\ -\cos\varphi\sin\lambda & +R\sin\varphi\sin\lambda & -R\cos\varphi\cos\lambda \\ -\sin\varphi & -R\cos\varphi & 0 \end{bmatrix}\begin{bmatrix} \Delta N \\ -\Delta\xi \\ -\Delta\eta/\cos\varphi \end{bmatrix}.$$
This equation gives the relationship between the amount of non-geocentricity of the reference ellipsoid used, and the datum defined by it in the point $(\varphi, \lambda, h)$. At the same time it is also the equation by which the differences $\Delta h, \Delta\varphi, \Delta\lambda$ between two geodetic datums in a starting point may be converted to translation components of the origin, $\Delta X_0, \Delta Y_0, \Delta Z_0$: in other words, the equation by which transformation parameters can be converted from their topocentric to their geocentric form.
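The conversion can be sketched in code as follows; function name and Earth-radius value are our own, and the spherical, h ≪ R approximation above is used:

```python
import numpy as np

R_EARTH = 6371000.0  # assumed mean Earth radius, m

def datum_shift(phi, lam, dh, dphi, dlam, R=R_EARTH):
    """Origin-shift components (dX0, dY0, dZ0) from the datum
    differences (dh, dphi, dlam) at the point (phi, lam);
    angles in radians, heights in metres."""
    M = np.array([
        [-np.cos(phi) * np.cos(lam),  R * np.sin(phi) * np.cos(lam),  R * np.cos(phi) * np.sin(lam)],
        [-np.cos(phi) * np.sin(lam),  R * np.sin(phi) * np.sin(lam), -R * np.cos(phi) * np.cos(lam)],
        [-np.sin(phi),               -R * np.cos(phi),                0.0],
    ])
    return M @ np.array([dh, dphi, dlam])
```

A useful sanity check is that the result agrees, to first order, with minus the change of the geocentric position vector $(R+h)(\cos\varphi\cos\lambda,\ \cos\varphi\sin\lambda,\ \sin\varphi)^{\mathsf T}$.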
We have also expressed the topocentric translation vector $\begin{bmatrix} \Delta h & \Delta\varphi & \Delta\lambda \end{bmatrix}^{\mathsf T}$ in the alternative form $\begin{bmatrix} \Delta N & -\Delta\xi & -\Delta\eta/\cos\varphi \end{bmatrix}^{\mathsf T}$, in which appear the geoid undulation N and the deflections of the plumbline
ξ, η, all three evaluated in the datum point. See figure 15.6. Note that⁵
N = h−H
ξ = Φ−ϕ
η = (Λ − λ) cos ϕ
where
N geoid undulation from the reference ellipsoid (“geoid height”),
h height of the point above the ellipsoid (“ellipsoidal height”),
H height of the point above the geoid (“orthometric height”),
ξ, η deflections of the plumbline, i.e., the differences in direction be-
tween the astronomical vertical (“plumbline”) and the normal on
the reference ellipsoid — ξ in the North-South and η in the West-
East direction,
Φ ,Λ astronomical latitude and longitude (direction of the plumbline),
ϕ,λ geodetic latitude and longitude (direction of the ellipsoidal normal).
Because Φ, Λ and H are independent of the reference ellipsoid chosen⁶, the differences between the two datums satisfy
$$\Delta N = \Delta h, \qquad \Delta\xi = -\Delta\varphi, \qquad \Delta\eta = -\Delta\lambda\,\cos\varphi.$$
5 The extra factor cos ϕ in the η equation comes from the meridian convergence. At
higher latitudes a given change in longitude ∆λ corresponds to an ever smaller distance
on the Earth’s surface, and thus to an ever smaller change in the direction of the vertical.
6 Orthometric height H can be determined by spirit levelling from the coast (together
with gravimetric measurements for reduction), and astronomical latitude and longitude
Φ, Λ by astronomical measurements. Neither technique assumes a reference ellipsoid.
Helmert transformations in three dimensions

Inverting the relationship above, the topocentric datum-difference components are obtained from the geocentric ones:
$$\begin{bmatrix} \Delta h \\ \Delta\varphi \\ \Delta\lambda \end{bmatrix} = \frac{1}{R}\begin{bmatrix} -R\cos\varphi\cos\lambda & -R\cos\varphi\sin\lambda & -R\sin\varphi \\ +\sin\varphi\cos\lambda & +\sin\varphi\sin\lambda & -\cos\varphi \\ +\sin\lambda/\cos\varphi & -\cos\lambda/\cos\varphi & 0 \end{bmatrix}\begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix},$$
or, using $\Delta N = \Delta h$, $R\,\Delta\xi = -R\,\Delta\varphi$ and $R\,\Delta\eta = -R\cos\varphi\,\Delta\lambda$:
$$\begin{bmatrix} \Delta N \\ R\,\Delta\xi \\ R\,\Delta\eta \end{bmatrix} = \begin{bmatrix} -\cos\varphi\cos\lambda & -\cos\varphi\sin\lambda & -\sin\varphi \\ -\sin\varphi\cos\lambda & -\sin\varphi\sin\lambda & +\cos\varphi \\ -\sin\lambda & +\cos\lambda & 0 \end{bmatrix}\begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix}.$$
A general three-dimensional Helmert, or similarity, transformation between two rectangular co-ordinate frames reads
$$\mathbf{X}' = \mu R\,(\mathbf{X} - \Delta\mathbf{X}_0), \qquad (15.3)$$
7 Also referred to as the “North, East, Up” form — though here, arbitrarily, “Up” comes
first.
8 Note the similarity with the collinearity equations 8.1 for connecting terrain and
camera co-ordinates! There too, we have a shift vector, a rotation matrix, and a scale
factor.
Here
$$\Delta\mathbf{X}_0 = \begin{bmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{bmatrix} = \begin{bmatrix} X_0' \\ Y_0' \\ Z_0' \end{bmatrix} - \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}$$
is the shift between the origins of the X and X′ systems, μ is a scale factor and R a rotation matrix. Furthermore,
$$R_1(\alpha_1) \approx \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & \alpha_1 \\ 0 & -\alpha_1 & 1 \end{bmatrix}, \qquad R_2(\alpha_2) \approx \begin{bmatrix} 1 & 0 & -\alpha_2 \\ 0 & 1 & 0 \\ \alpha_2 & 0 & 1 \end{bmatrix}, \qquad R_3(\alpha_3) \approx \begin{bmatrix} 1 & \alpha_3 & 0 \\ -\alpha_3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
9 An example from aerial photogrammetry is the camera tilt angles ω, ϕ, which are
always small, so the camera z, or c, axis will be almost parallel with the local terrain
vertical axis Z , or H .
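These approximations are easily checked against the exact rotation matrices. A small sketch (function names ours), using R₁ as the example:

```python
import numpy as np

def R1_exact(a):
    """Exact rotation matrix about the first (x) axis, angle a in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def R1_small(a):
    """Small-angle approximation, as in the text."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0,   a],
                     [0.0,  -a, 1.0]])
```

For a = 10⁻³ rad the two agree to about a²/2 = 5 · 10⁻⁷, and the exact matrix is orthogonal.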
Case: transformation between ED50 and EUREF89

When all $\alpha_i$ are small, one may also assume that all products $\alpha_i\alpha_j \approx 0$, and it follows that
$$R = R_1 R_2 R_3 \approx \begin{bmatrix} 1 & \alpha_3 & -\alpha_2 \\ -\alpha_3 & 1 & \alpha_1 \\ \alpha_2 & -\alpha_1 & 1 \end{bmatrix} = I + \begin{bmatrix} 0 & \alpha_3 & -\alpha_2 \\ -\alpha_3 & 0 & \alpha_1 \\ \alpha_2 & -\alpha_1 & 0 \end{bmatrix},$$
and, writing $\mu = 1 + \Delta\mu$,
$$\mu R \approx I + \begin{bmatrix} \Delta\mu & \alpha_3 & -\alpha_2 \\ -\alpha_3 & \Delta\mu & \alpha_1 \\ \alpha_2 & -\alpha_1 & \Delta\mu \end{bmatrix}.$$
Among the estimated ED50 → EUREF89 transformation parameters are the rotation angles

α₂  0.109″  ± 0.106″
α₃  0.068″  ± 0.112″

In component form, the small-angle Helmert transformation reads
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = (1 + \Delta\mu)\begin{bmatrix} 1 & \alpha_3 & -\alpha_2 \\ -\alpha_3 & 1 & \alpha_1 \\ \alpha_2 & -\alpha_1 & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix}.$$
10 For other parts of Europe, slightly different values apply, determined by fits to modern
GNSS measurements within those territories.
Transformations between pairs of modern geodetic datums

statistically significant at the three-sigma (3σ) level. The rotation angles
α1 , α2 , α3 are not significant at this level, and one could argue that they
could be left out without seriously degrading the solution.
In linearized form such a transformation reads
$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} 0 & -\alpha_3 & \alpha_2 \\ \alpha_3 & 0 & -\alpha_1 \\ -\alpha_2 & \alpha_1 & 0 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix},$$
where the transformation parameters may themselves be time dependent:
P ( t) = P ( t 0 ) + ( t − t 0 ) Ṗ,
where t is the epoch for which the co-ordinates in datum (2) must be
computed, and t₀ is the epoch of tabulation of the parameter on the web
site¹¹ of the IERS (International Earth Rotation and Reference Systems
Service).
Since the advent of satellite positioning, local and national datums, too, are usually created using this technology. The global satellite-based datums, of type ITRFyy, are seldom directly suitable for national or regional use. The user community expects the co-ordinates of points in the local datum to be fixed, i.e., not to change with time. This means that, for the highest geodetic precision, the local datum frame must be attached to the local tectonic plate, which typically moves at a few centimetres per year.
The European situation may serve as an example. Europe is located on the Eurasian tectonic plate. The co-ordinate reference system used
(by agreement within EUREF, the IAG Subcommission for the Euro-
pean Reference Frame) is called ETRS, or ETRS89. ETRS stands for
European Terrestrial Reference System. National datums for individual
European countries, created by national GNSS measurement campaigns,
are realizations of this system.
However, geodetic satellite positioning will give locations in the same
system as in which the precise orbital elements of the GPS satellites are
given, so-called precise ephemeris, e.g., ITRF2005. Here, ITRF stands
for International Terrestrial Reference Frame, and 2005 is the year of
determination or realization of this reference frame. Then, the following
transformation formula is commonly used (note the slightly different
notation):
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}^{\mathrm{ETRS89}}\!\!(t) = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}^{\mathrm{ITRF2005}}\!\!(t) + \begin{bmatrix} T_1 \\ T_2 \\ T_3 \end{bmatrix} + \begin{bmatrix} 0 & -\dot R_3 & \dot R_2 \\ \dot R_3 & 0 & -\dot R_1 \\ -\dot R_2 & \dot R_1 & 0 \end{bmatrix}^{\mathrm{ETRS89}} \times (t - 1989.0) \times \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}^{\mathrm{ITRF2005}}\!\!(t),$$
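In code the transformation is a one-liner around the antisymmetric rate matrix. A sketch in which the translation T and the rotation rates Ṙ₁, Ṙ₂, Ṙ₃ are placeholder inputs; the actual values for each ITRF realization are published by EUREF:

```python
import numpy as np

def itrf_to_etrs89(x_itrf, t, T, Rdot, t0=1989.0):
    """Transform co-ordinates (m) given in an ITRF frame at epoch t
    (decimal years) to ETRS89.  T: translation vector (m); Rdot:
    rotation rates (rad/yr).  Both are placeholders here, to be taken
    from the published EUREF tables for the ITRF realization used."""
    x = np.asarray(x_itrf, float)
    R1d, R2d, R3d = Rdot
    W = np.array([[0.0,  -R3d,  R2d],
                  [R3d,   0.0, -R1d],
                  [-R2d,  R1d,  0.0]])
    return x + np.asarray(T, float) + (t - t0) * (W @ x)
```

Note that the rate term grows linearly with the time elapsed since the reference epoch 1989.0.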
Local and global datums; cases
KKJ: This system is strongly non-geocentric: the offsets of the centre of
the International Ellipsoid of 1924 it uses, relative to the centre
of the ellipsoid of WGS84, are (Ollikainen, 1993) ∆ X = 93 m, ∆Y =
103 m, ∆ Z = 123 m. These offsets are in the direction WGS84 →
KKJ. Additionally there are non-zero rotations and a non-unity
scale factor.
A further complication is that, while KKJ is a map projection sys-
tem (the Gauss-Krüger projection on the International Ellipsoid
of 1924) based upon the European datum ED50, in the map plane
there is a further Helmert transformation applied to achieve ap-
proximate compatibility (on the metre level) with an older system
in use before 1970, when KKJ was officially introduced.
The “raw” transformation between KKJ and WGS84 only achieves
an accuracy on the metre level. Thus, the transformation must be
considered approximate. The National Land Survey has however
created a triangulated affine transformation (between KKJ and
EUREF-FIN, see below) based upon a Delaunay triangulation
of the Finnish territory, which achieves better (several cm level)
precision.
EUREF-FIN: This system is geocentric and based on satellite posi-
tioning. There are no translation parameters (shifts of origin)
between WGS84 and EUREF-FIN, but there is a rotation matrix:
EUREF-FIN is the Finnish national realization of ETRS89, the
European Terrestrial Reference System, which at epoch 1989.0 co-
incided with ITRS (the international geodetic community’s version
of WGS84), but has since rotated away due to the tectonic motion
of the Eurasian plate of the Earth’s crust, which ETRS eliminates.
EUREF-FIN co-ordinates can be transformed to and from “WGS84”
— i.e., to and from any realization of ITRS — with high precision.
The map projection used for mapping the whole country is UTM
(Universal Transverse Mercator) zone 35 (central meridian 27◦ )
on the GRS80 reference ellipsoid.
Adindan: This datum, too, is strongly non-geocentric: ∆X =
−162 m, ∆Y = −12 m, ∆Z = 206 m (in the direction Adindan
→ WGS84) for the Ethiopian territory (Thomas Dubois, personal
comm. April 30, 2013). This transformation is conventionally
12 For the older, traditional Adindan datum, older documents list ∆ X = −166 m, ∆Y =
−15 m, ∆ Z = 204 m, with uncertainties of 5, 5, and 3 m. Rotation parameters and scale
were presumably estimated and deemed insignificant against their uncertainties.
13 . . . at least at the epoch of its determination, not many years ago. Over time, the
movement of the African plate will assert itself, and Ethiopia will have to define a,
presumably AFREF-based, precise national geodetic datum in which this effect has
been eliminated.
14 In practice one chooses the ITRF datum in which also the GNSS satellites’ precise
ephemeris used were provided by the international geodetic community.
The Proj4 map-projection and datum software

[Figure. The mutual offsets of the non-geocentric Adindan datum, on the Clarke 1880 ellipsoid, and the geocentric WGS84 datum (GRS80 ellipsoid): 162 m in X, 12 m in Y and 206 m in Z, with UTM zone 37 (central meridian 39° E) indicated.]
Figure 15.8. EGM08 geoid for East Africa and the Middle East. For scale, the geoid depression south of India is approximately −100 m. © U.S. National Geospatial-Intelligence Agency. http://earth-info.nga.mil/GandG/wgs84/gravitymod/egm2008/
+towgs84=ΔX,ΔY,ΔZ,α1″,α2″,α3″,Δμppm,

to be used as (http://proj.maptools.org/gen_parms.html):
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{\mathrm{WGS84}} = \left(1 + 10^{-6}\,\Delta\mu_{\mathrm{ppm}}\right)\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{\mathrm{local}} + \frac{2\pi}{360 \cdot 60 \cdot 60}\begin{bmatrix} 0 & -\alpha_3'' & \alpha_2'' \\ \alpha_3'' & 0 & -\alpha_1'' \\ -\alpha_2'' & \alpha_1'' & 0 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{\mathrm{local}} + \begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix}.$$
Note that $\Delta\mu_{\mathrm{ppm}}$ is given in ppm (parts per million) and the rotations $\alpha_i''$ in seconds of arc; the prefactors in the formula convert them to dimensionless and radian form, respectively.
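The formula, including both unit conversions, can be sketched as follows (function name ours):

```python
import numpy as np

ARCSEC_TO_RAD = 2 * np.pi / (360 * 60 * 60)  # seconds of arc -> radians

def apply_towgs84(x_local, dX, dY, dZ, a1, a2, a3, dmu_ppm):
    """Apply a proj4-style +towgs84 seven-parameter transformation:
    translations in metres, rotations a1..a3 in seconds of arc,
    scale difference dmu in parts per million."""
    x = np.asarray(x_local, float)
    W = ARCSEC_TO_RAD * np.array([[0.0, -a3,  a2],
                                  [a3,  0.0, -a1],
                                  [-a2,  a1,  0.0]])
    return (1 + 1e-6 * dmu_ppm) * x + W @ x + np.array([dX, dY, dZ])
```

As a plausibility check, a pure scale difference of 1 ppm changes a 1000 km co-ordinate by one metre.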
The projection is defined as a generic transverse Mercator (+proj=tmerc) with a re-scaling factor of one (+k=1), i.e., plain Gauss-Krüger, a central meridian of 27° East (+lon_0=27), a “false Easting” of 500 km (+x_0=3500000; note that the “3” is the Finnish zone number, which is prepended), and all on the International (+ellps=intl) or Hayford reference ellipsoid. . .
The equivalent definition for Adindan would be (http://epsg.io/20137)
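For concreteness, the two definition strings might look as follows. A sketch only: the +towgs84 parameter values quoted are the commonly published EPSG sets (EPSG:2393 for KKJ zone 3, EPSG:20137 for Adindan / UTM zone 37N) and should be verified against the EPSG registry before use.

```
# KKJ zone 3 (Gauss-Krüger, central meridian 27° E, International ellipsoid):
+proj=tmerc +lat_0=0 +lon_0=27 +k=1 +x_0=3500000 +y_0=0 +ellps=intl
    +towgs84=-96.062,-82.428,-121.753,4.801,0.345,-1.376,1.496 +units=m

# Adindan / UTM zone 37N (Clarke 1880 ellipsoid):
+proj=utm +zone=37 +ellps=clrk80 +towgs84=-166,-15,204,0,0,0,0 +units=m
```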
16. Advanced subjects

16.1 Airborne laser scanning (ALS)

16.1.1 Airborne laser scan terrain height surveys
Airborne laser scanning has in a short time become the method of choice for building detailed, precise digital elevation models (DEMs) over large areas.
In airborne laser scanning, three-dimensional terrain data is directly
geolocated geocentrically with the aid of a GNSS/IMU assembly, i.e., a
satellite positioning receiver (GNSS) plus an inertial measurement unit,
together tracking geocentric location and attitude of the aircraft in real
time.
The laser scanner is typically of the whiskbroom type, which changes the nature of the exterior orientation problem: exterior orientation needs to be performed for every measurement epoch, many times a second. Doing this with ground control points would be very challenging: GNSS/IMU is indispensable. It is however possible to specify verification points in the terrain, or even verification surfaces, e.g., Dahlqvist et al. (2011).
Generally, the on-board GNSS system measures the location of the
GNSS antenna — more precisely, the electromagnetic centre of the an-
tenna, which must be determined by antenna calibration. This centre
may not be well defined, especially close to the large metal body of the
aircraft which causes reflections of the GNSS signal (“multipath”). GNSS
antennas are mounted on top of the aircraft fuselage for a good view of
the sky.
The laser scanning device on the other hand is mounted on the bottom
of the aircraft, for a good view of the ground. The inertial measurement
unit, or IMU, which, integrated with GNSS, mainly keeps track of the ro-
tational motions of the aircraft platform, can be mounted anywhere. One
should however try to place all three units, laser scanner, GNSS receiver,
[Figure 16.1. Geometry of airborne laser scanning: laser scanner, GNSS antenna and IMU with their mutual offset δx₀ and misalignment angles δω, δϕ, δκ; optical centre X₀, Y₀, Z₀; terrain co-ordinates X, Y, Z. All three devices are mounted close together on the aircraft.]
1. A location bias triad δx₀, δy₀, δz₀ between the GNSS antenna location and the laser scanner’s origin.

2. A rotational bias triad

   δω = ωIMU − ωALS,
   δϕ = ϕIMU − ϕALS,
   δκ = κIMU − κALS.
All these six offsets could in principle be determined by careful measurement, but this has turned out to be difficult¹. In practice, field calibration is applied, in which these quantities are simply added as unknowns to be determined. This means that, unlike the situation in traditional photogrammetry, where we have one set of exterior orientation unknowns

(X₀)ᵢ, (Y₀)ᵢ, (Z₀)ᵢ, ωᵢ, ϕᵢ, κᵢ

for every image exposure i, here we have one set of six unknowns

δx₀, δy₀, δz₀, δω, δϕ, δκ
1 E.g., the aircraft airframe deforms between standing on the ground and being airborne,
and with changes in the amount of fuel and payload on board.
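Schematically, once the six calibration unknowns are known, each epoch’s laser return can be georeferenced by composing them with the GNSS/IMU solution. A sketch under simplified assumptions; the axis conventions and all names are ours, and real systems differ in detail:

```python
import numpy as np

def georeference(x_gnss, R_imu, lever_arm, R_boresight, r_laser):
    """Ground co-ordinates of one laser return: GNSS antenna position
    x_gnss (m, terrain frame), IMU attitude matrix R_imu (body ->
    terrain axes), lever-arm vector from antenna to scanner origin
    (body axes), boresight misalignment matrix, and raw laser vector
    r_laser (scanner axes).  Schematic model only."""
    return (np.asarray(x_gnss, float)
            + R_imu @ (np.asarray(lever_arm, float)
                       + R_boresight @ np.asarray(r_laser, float)))
```

With identity attitude and boresight matrices, the result is simply the antenna position plus lever arm plus laser vector.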
Airborne geophysical surveys
Figure 16.2. The tracks of the Ethiopian Airborne Gravity Survey of 2008. The gravity anomalies in mGal (10⁻⁵ m/s²) are colour coded. Total flight path is 90 000 km.
$$\mathbf{a}_i = 2\mathbf{x}_i - \mathbf{x}_{i-1} - \mathbf{x}_{i+1},$$
and the part of this acting in the local vertical direction $\mathbf{n}_i$ at that point of the track is simply $a_i = \langle \mathbf{n}_i \cdot \mathbf{a}_i \rangle$.
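In code, the vertical acceleration estimate at a track point might look as follows; the division by Δt² is our addition (the text’s formula omits the sampling interval), and the text’s sign convention is kept:

```python
import numpy as np

def vertical_acceleration(x_prev, x_i, x_next, n_i, dt):
    """Second-difference acceleration estimate at track point i,
    projected on the local vertical unit vector n_i; dt is the
    sampling interval in seconds."""
    a = (2 * np.asarray(x_i, float) - np.asarray(x_prev, float)
         - np.asarray(x_next, float)) / dt**2
    return float(np.asarray(n_i, float) @ a)
```

For a track in free fall along z = −½ g t², this estimate recovers +g on the local vertical.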
Note that the gravity values obtained, typically expressed as gravity anomalies², are at flight height, while most users (geophysicists, geodesists) are interested in values at terrain height. Transforming the complete set of measurements from flight height to terrain height (or sea level) is called (harmonic) downward continuation, and belongs to the domain of physical geodesy. The accuracy of the gravity anomalies at terrain level is slightly poorer than at flight level.
measurement. Gravity anomalies contain the locally rapidly varying deviations of true
gravity from the smooth model gravities of the normal field, and thus extract the local
features of interest from the global overall field.
A. Rotation matrices

A.1 Introduction
Whenever we change the orientation of the axes of a co-ordinate system, this amounts, written in rectangular co-ordinates, to multiplication by a rotation matrix.

Let us investigate the matter in two dimensions, in the (x, y) plane, figure A.1.

The new x co-ordinate, after rotation of the axes by an angle α, is
$$x'_P = OU = OR\cos\alpha,$$
where
$$OR = OS + SR = x_P + PS\tan\alpha = x_P + y_P\tan\alpha.$$
By substitution we find
$$x'_P = (x_P + y_P\tan\alpha)\cos\alpha = x_P\cos\alpha + y_P\sin\alpha.$$
Similarly, the new y co-ordinate is
$$y'_P = OT = OV\cos\alpha,$$
[Figure A.1. Rotation of the co-ordinate axes (x, y) by an angle α into (x′, y′), with the auxiliary points O, P, Q, R, S, T, U, V used in the derivation.]
where
$$OV = OQ - VQ = y_P - PQ\tan\alpha = y_P - x_P\tan\alpha,$$
so substitution yields
$$y'_P = (y_P - x_P\tan\alpha)\cos\alpha = -x_P\sin\alpha + y_P\cos\alpha,$$
or, together, in matrix form:
$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}.$$
The place of the minus sign in this matrix is easiest to ascertain by making a paper sketch: draw both pairs of axes, mark the angle α, and infer graphically whether, for a point on the positive x axis (i.e., y = 0), the new y′ co-ordinate is positive or negative.
In the above case
$$y' = \begin{bmatrix} -\sin\alpha & \cos\alpha \end{bmatrix}\begin{bmatrix} x \\ 0 \end{bmatrix} = -\sin\alpha \cdot x,$$
i.e., y′ < 0 for α > 0. So the minus sign is indeed in the lower left corner of the matrix.
In three dimensions, a rotation by α about the z axis reads
$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix},$$
or compactly $\mathbf{r}' = R\,\mathbf{r}$, in which
$$R = \begin{bmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad S = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & \sin\beta \\ 0 & -\sin\beta & \cos\beta \end{bmatrix}, \qquad \mathbf{r} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \quad \text{etc.}$$
If a second rotation S, by an angle β about the new x axis, is applied to the result, so that $\mathbf{r}'' = S\,\mathbf{r}'$,
[Figure A.2. Two rotations in succession: first R, by angle α about the z axis, taking (x, y, z) into (x′, y′, z′ = z), then S, by angle β about the x′ axis, taking them into (x″ = x′, y″, z″); the compound rotation is SR.]
then (associativity):
r′′ = S (R r) = (SR ) r,
i.e., the matrices themselves may be multiplied with each other to obtain
the compound transformation.
Remember that in general
$$RS \neq SR:$$
matrix multiplication is not commutative.

Orthogonal matrices

Rotation matrices are orthogonal:
$$R R^{\mathsf T} = R^{\mathsf T} R = I; \qquad (A.1)$$
We can also say that the columns $R_{\cdot i}$ of a rotation matrix are orthonormal: their norm (length) is 1 and they are mutually orthogonal. This can be seen for the case of our example matrix:
$$\|R_{\cdot 1}\| = \sqrt{\textstyle\sum_{i=1}^{2} R_{i1}^2} = \sqrt{\cos^2\alpha + (-\sin\alpha)^2} = 1,$$
$$\|R_{\cdot 2}\| = \sqrt{\textstyle\sum_{i=1}^{2} R_{i2}^2} = \sqrt{\sin^2\alpha + \cos^2\alpha} = 1,$$
$$\langle R_{\cdot 1} \cdot R_{\cdot 2} \rangle = \textstyle\sum_{i=1}^{2} R_{i1} R_{i2} = \cos\alpha \cdot \sin\alpha + (-\sin\alpha) \cdot \cos\alpha = 0.$$
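These identities are quickly verified numerically; a small sketch for the two-dimensional example matrix:

```python
import numpy as np

alpha = 0.3
R = np.array([[np.cos(alpha),  np.sin(alpha)],
              [-np.sin(alpha), np.cos(alpha)]])

# column norms are 1, the columns are mutually orthogonal, and R R^T = I:
assert np.isclose(np.linalg.norm(R[:, 0]), 1.0)
assert np.isclose(np.linalg.norm(R[:, 1]), 1.0)
assert np.isclose(R[:, 0] @ R[:, 1], 0.0)
assert np.allclose(R @ R.T, np.eye(2))
```

The same check works unchanged for any rotation angle.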
A mirroring matrix inverts one of the co-ordinate axes. For example,
$$M_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$
maps a point $(x, y, z)$ into $(x, y, -z)$, whereas a rotation about the z axis leaves $z' = z$.
Both M and P differ from rotation matrices in that their determinant is −1, whereas for rotation matrices it is +1. The determinant of the full inversion matrix X = −I is (−1)ⁿ, with n the number of dimensions (in the above example, 3). A determinant of −1 means that the transformation changes a right-handed co-ordinate axes frame into a left-handed one, and conversely.
If we multiply, e.g., M₂ and P₁₂, we obtain
$$M_2 P_{12} = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
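A numerical check, taking M₂ as the mirroring of the y axis and P₁₂ as the interchange of the first two axes (an assumption consistent with the product shown in the text):

```python
import numpy as np

M2 = np.diag([1.0, -1.0, 1.0])           # mirror the y axis
P12 = np.array([[0.0, 1.0, 0.0],         # interchange the x and y axes
                [1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])

product = M2 @ P12
# both factors have determinant -1; their product is a proper rotation:
assert np.isclose(np.linalg.det(M2), -1.0)
assert np.isclose(np.linalg.det(P12), -1.0)
assert np.isclose(np.linalg.det(product), 1.0)
assert np.allclose(product, [[0.0, 1.0, 0.0],
                             [-1.0, 0.0, 0.0],
                             [0.0, 0.0, 1.0]])
```

The product is in fact a rotation by 90° about the z axis.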
Bibliography
Tulu Besha Bedada. Absolute geopotential height system for Ethiopia. PhD
thesis, University of Edinburgh, 2010.
http://www.era.lib.ed.ac.uk/handle/1842/4726. 177, 181
Jorge Luis Nunes e Silva Brito, Rafael Alves Aguiar, Marcelo Teixeira Silveira,
Luiz Carlos Teixeira Coelho Filho, Irving da Silva Badolato, Paulo
André Batista Pupim, Patrícia Farias Reolon, João Araujo Ribeiro, Jonas
Ribeiro, Orlando Bernardo Filho, and Guilherme Lúcio Abelha Mota.
E-FOTO: Development of an Open-Source Educational Digital
Photogrammetric Workstation. pages 356–361, Barcelona, Spain, October
2011. ISBN 978-1-61208-165-6. URL http://www.thinkmind.org/index.php?
view=article&articleid=icsea_2011_15_10_10194. 2, 17
Satu Dahlqvist, Petri Rönnholm, Panu Salo, and Martin Vermeer. Evaluating
the Correctness of Airborne Laser Scanning Data Heights Using
Vehicle-Based RTK and VRS GPS Observations. Remote Sensing, 3(9):
1902–1913, 2011. ISSN 2072-4292. doi: 10.3390/rs3091902. URL
http://www.mdpi.com/2072-4292/3/9/1902. 179
Zerfu Hailu and David Harris. Rural land registration in Ethiopia increased
transparency for 26,000,000 land holders. In Annual World Bank Conference
on Land and Poverty, Washington DC, 2014.
http://www.oicrf.org/pdf.asp?ID=13778. 145
JUHTA. Drafting a Local Zoning Map (in Finnish). Web page, Advisory Board
for Information Management in Public Administration, JHS 185.
http://www.jhs-suositukset.fi/suomi/jhs185, read May 25, 2016. xv, 150
Linifiniti Consulting CC. The Free Quantum GIS Training Manual, 2008,
maintained. URL https://docs.qgis.org/2.8/en/docs/training_manual/. CC BY
4.0, Accessed May 8, 2019. 17
M.T. Silveira. Considerações Técnicas sobre o Submódulo de Extração do MDE
da Versão Integrada do e-foto (versão educacional). Technical report, Rio de
Janeiro, 2011. 133
Index
A B
Adindan datum (Ethiopia), 60, 93, 177 Bahir Dar, Ethiopia, 4, 39, 61
adjustment balloon, 8
least-squares, 95 base map, topographic, 9
theory, 11 base mapping, 3
aerial camera base network, geodetic, 152
analogue, 65 basics of photogrammetry, 2
digital, 66 beta βB (Greek letter), 125
aerial mapping, 149, 151 block, photogrammetric, 41
aerial survey, 152 boundary
aerotriangulation, 2, 29, 47, 94–97, 101, parcel, 150
106–108 real-estate, 149
numerical, 95 Brown, Duane, 36
affine transformation, 67, 125 bundle block adjustment, 95
African Reference Frame (AFREF), 174
airborne geophysical survey, 181 C
airborne gravity survey, 181 cadastral mapping, 145
airborne laser scanning (ALS), 2, 28, 141, cadastral system, 1, 149
144, 146, 152, 174, 179 camera, 8
aircraft aerial
body co-ordinates, 180 traditional, 34
motorized, 1 digital
airship off the shelf, 34
lighter than air, 1 digital photogrammetric, 32
alpha αA (Greek letter), 22, 125 height, 23, 75
anaglyph superposition, 118 location, 14, 45, 97, 103, 109
anaglyphic glasses, 1, 8 metric, 35
antenna calibration, GNSS, 179 orientation, 9, 14, 45, 75, 79, 88, 103,
apparent uplift, 23 106, 109
approximate value, 15, 29, 53, 80, 88, 98, traditional photogrammetric, 31
99, 174 camera calibration, 2, 31, 35, 66, 74, 77
associativity model, 12
of matrix multiplication, 187 camera co-ordinates, 65–67, 69, 75, 77, 80,
atmosphere, effect of, 12 88
atmospheric refraction, 21 linearized, 97
Automatic Extraction, 137 of feature point, 114
aviation, 8 of ground control point, 101
azimuth, geodetic, 156 of tie point, 100
Ethiopian Airborne Gravity Survey, 177, Gauss-Krüger map projection, 173, 177
181 GDAL (software), 17
ETM+ sensor, 34 gedit (text editor), 63
ETRS89, 172 General Boundaries system, 149
Euler angles, 28, 29, 79 geocentric co-ordinates
EUREF89, 162, 169 rectangular, 163
EUREF-FIN, 173 geodesy, 1
European Datum 1950 (ED50), 161, 169 physical, 183
European Petroleum Survey Group geodetic measurement, 153, 160
(EPSG), 17, 61 geodetic network, 153
European Reference Frame, IAG geodetic survey, 149
Subcommission for the (EUREF), geographic information system (GIS), 17
169, 172 geography
European Terrestrial Reference System physical, 130
(ETRS89), 172, 173 geoid height, 160, 161, 166
Extensible Mark-up Language (XML), 62 geoid model, 29, 129, 152
exterior orientation, 2, 56, 75, 85, 86, 91, geometry, 7, 8
95, 97, 106, 107, 145, 179, 181 geophysical survey, 151
computation, 87 airborne, 181
elements, 34, 75, 103, 104, 113 glass plate, photographic, 7
Exterior Orientation module (e-foto), 56, 85, glasses
101, 103, 113, 121, 133 anaglyphic, 121
polarized, 115
F Global Navigation Satellite Systems
feature, 110, 116, 117 (GNSS), 28, 92, 144, 152, 154,
feature extraction, 2, 134 155, 157, 167, 174, 179, 181, 182
feature file, 141 Global Positioning System (GPS), 28, 154,
feature matching, automatic, 124 162
feature recognition, automatic, 127 GNSS
fiducial cross (FC), 37 geodetic measurement, 93
fiducial mark, 35, 37, 65–74 relative measurement, 92
field calibration, 181 static measurement, 92
figure of the Earth, 1 gravimetry
film co-ordinates, 65, 66, 68, 69 airborne, 181
film grain, 33 sea, 181
film, photographic, 7 gravity anomaly, 182, 183
flight direction, 45 ground control point (GCP), 26, 28, 29, 40,
dial, 87 47, 75, 81, 84, 85, 92, 96, 97, 100,
flight line, 32, 39 101, 107, 121, 133, 134, 181
flight map, 45 marking, 93
flight plan, 2, 41 ground control points (GCP), 91, 95
flight strip, photogrammetric, 41 ground control points, multiple, 82
floating mark, 8, 11, 111, 131 GRS80 (ellipsoid), 27, 173
floor, forest, 144
fluid flow, 129 H
FM International Oy FINNMAP (mapping half-angle formula, 157
company), 4 hand cursor, 122
focal length, 27, 31, 32, 35, 41, 46 handwheel, 114
calibrated, 36, 77 Heerbrugg, Switzerland, 38
footwheel, 114 height
Frauenkirche (Munich), 161 above sea level, 26, 129
free fall, acceleration of, 182 normal, 26, 129
orthometric, 26, 129
G height colour, 130
Gauss, Carl Friedrich, 11 height contour, 130
Gaussian blur, 126 Helmert transformation
Nepal, 45 photography, aerial, 9
North, East, Up (NEU) system, 156, 167 phototriangulation, 58, 95
Phototriangulation module (e-foto), 104, 113,
O 114, 121, 133
observation equations, 12–14, 16 pitch, 29
aerotriangulation, 95, 97, 101 pixel, 7
camera calibration, 36 pixel array, 7
exterior orientation, 76, 78, 81 plate tectonics, 171
initial values for exterior orientation, plumbline, 160, 161, 182
84 direction, 166
linearized, 82 plumbline deflection, 160, 166
observations regional, 161
linearized, 100 point of interest, 126
observations, vector of, 11, 13, 84 port (nautical), 88
omega ωΩ (Greek letter), 28 precise ephemeris, 93, 172
operator, of aerial camera, 77, 88 principal point, 35, 36, 65–67
optical centre, 35 proj4 (software), 17, 175
optics, 8 project definition file, 49, 62, 69, 85, 91
orbit determination Project Manager module (e-foto), 49, 56, 62,
satellite, 162 69, 85, 86, 89, 101, 102
orientation pushbroom sensor, 31, 34
absolute, 110 Pythagoras theorem, 27
relative, 97, 109
orthogonality Q
of rotation matrices, 187 quadratic form, 14
orthographic projection, 145 quality control, of networks, 14
orthometric height, 166 quality, of least-squares solution, 76
orthophoto map, 1, 2, 142 Quantum GIS (software), 17
orthophoto mapping, 145, 146 quotient rule, of differentation, 81
orthophoto production, 141
ortho-rectification, 2, 7, 142, 143, 147, 148 R
overdetermination radial distortions, 35
absolute orientation, 110 radiometry, 7
exterior orientation, 76, 91 rainfall, 129
overlap raster file, 129
longitudinal, 41, 43, 53 raster format, 131
of images, 41 real estate, 149
transversal, 41, 53 realization, of a co-ordinate reference
system, 154
P real-time kinematic (RTK) GPS, 150
parallax, 110, 111, 118 rectangular co-ordinates, 153, 185
partial derivative, computation, 81 rectangular geometry, 27
Pearson correlation, 136, 138 Red-Green-Blue (RGB),
permutation, 188 visual primary colours, 33
personal computer, 1, 10, 17 redundancy, 97
perspective, 8 reference ellipsoid, 28, 29, 159
perspective drawing, 8 reference frame, 2
perspective image, 131 geocentric, 28
phi ϕΦ (Greek letter), 28, 159 reference system
photogrammetry, 1, 5, 7 vertical, 151
aerial, 7, 10, 12 refractive index, 21
digital, 1, 17 Region Growing step, 137
history, 3, 7 relief shading, 130
numerical, 11 remote sensing, 7
soft-copy, 32 resection, spatial, 75, 85
terrestrial, 7 reservoir dam, 129, 151, 152
V
variance matrix, 13
of unknown estimates, 16
verification point, 179
verification surface, 179
vertical photogrammetry, 80
vi (text editor), 60
vim (text editor), 63
Vinci, Leonardo da, 7
Väisälä, Yrjö, 9
W
wavelet, 126
weather
uncertainty, 46
weight, 14
weight coefficient matrix, 13
weight matrix, 14
whiskbroom sensor, 31, 34, 143, 179
Windows, 5, 18, 58
work station, photogrammetric, 2, 10
workflow, photogrammetric, 10, 17, 33, 144
World Bank, 145
World Geodetic System 1984 (WGS84), 93
X
xi ξΞ (Greek letter), 65, 160
Y
yaw, 29
Z
zenith angle, 156
zeta ζZ (Greek letter), 156
zoning, 1, 149