Mathematical Foundations of Photogrammetry
Summary. Photogrammetry uses photographic cameras to obtain information about the 3D world. The basic
principle of photogrammetric measurement is straightforward: recording a light ray in a photographic image
corresponds to observing a direction from the camera to the 3D scene point where the light was reflected or
emitted. From this relation, procedures have been derived to orient cameras relative to each other or relative
to a 3D object coordinate frame and to reconstruct unknown 3D objects through triangulation. The chapter
provides a compact, gentle introduction to the fundamental geometric relations that underlie image-based 3D
measurement.
Introduction
The goal of photogrammetry is to obtain information about the physical environment from images. This chapter is dedicated to the mathematical relations that allow one to extract geometric 3D measurements from 2D perspective images.[1] Its aim is to give a brief and gentle overview for students or researchers in neighboring disciplines. For a more extensive treatment the reader is referred to textbooks such as Hartley and Zisserman (2004) and McGlone (2013).
The basic principle of measurement with photographic cameras—and many other optical
instruments—is simple: light travels along (approximately) straight rays; these rays are
recorded by the camera; thus, the sensor measures directions in 3D space. The fundamental
geometric relation of photogrammetry is thus a simple collinearity constraint: a 3D world
point, its image in the camera, and the camera’s projection center must all lie on a straight
line. The following discussion is restricted to the most common type of camera, the so-called
perspective camera, which has a single center of projection and captures light on a flat sensor
plane. It should however be pointed out that the model is valid for all cameras with a single
center of projection (appropriately replacing only the mapping from image points to rays in
space), and extensions exist to noncentral projections along straight rays (e.g., Pajdla, 2002).
In a physical camera the light-sensitive sensor plane is located behind the projection center, and the image is captured upside down (the "upside-down configuration"). However, there exists a symmetrical setup with the image plane located in front of the camera, as in a slide projector (the "upright configuration"), for which the resulting image is geometrically identical. For convenience the latter configuration is used here, with the image plane between object and camera.

[1] Beyond geometric measurement, photogrammetry also includes the semantic interpretation of images and the derivation of physical object parameters from the observed radiometric intensities. The methodological basis for these tasks is a lot broader and less coherent, and is not treated here.
Fig. 1. Coordinate systems: X are the 3D object coordinates of a point; the 3D camera coordinates of the same point are x̃; the 2D image coordinates of its projection are x.
Transposition of a vector or matrix is written X^T, and the cross-product between two 3-vectors is denoted either as x × y, or using the cross-product matrix [x]_× y, where for x = [u, v, w]^T

[x]_× = \begin{bmatrix} 0 & -w & v \\ w & 0 & -u \\ -v & u & 0 \end{bmatrix} .
The Kronecker product between two vectors or matrices is denoted X ⊗ Y, such that

X ⊗ Y = \begin{bmatrix} x_{11} Y & x_{12} Y & \dots & x_{1n} Y \\ x_{21} Y & x_{22} Y & \dots & x_{2n} Y \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} Y & x_{m2} Y & \dots & x_{mn} Y \end{bmatrix} .   (1)
Some further matrix operators are required: the determinant det(X), the vectorization vec(X) = x = [X_{11}, X_{12}, ..., X_{mn}]^T and the diagonal matrix (here for a 3-dimensional example)

diag(a, b, c) = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} .
Single-view geometry
The collinearity equation
The mapping with an ideal perspective camera can be decomposed into two steps, namely the exterior orientation, which transforms object coordinates into camera coordinates, and the subsequent projection from camera coordinates into the image, described by the interior orientation. The exterior orientation is achieved by a translation from the object coordinate origin to the origin of the camera coordinate system (i.e., the projection center), followed by a rotation which aligns the axes of the two coordinate systems. With the Euclidean object coordinates X^e_0 = [X_0, Y_0, Z_0]^T of the projection center and the 3 × 3 rotation matrix R this reads as

x̃ = M X = \begin{bmatrix} R & 0 \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} I & -X^e_0 \\ 0^T & 1 \end{bmatrix} X .   (2)
Let us now have a closer look at the camera coordinate system. In this system, the image
plane is perpendicular to the z-axis. The z-axis is also called the principal ray and intersects
the image plane in the principal point, which has the camera coordinates x̃_H = t̃ · [0, 0, c, 1]^T
and the image coordinates x = t · [x_H, y_H, 1]^T. The distance c between the projection center
and the image plane is the focal length (or camera constant). The perspective mapping from
camera coordinates to image coordinates then reads
x = [K | 0] x̃ = \begin{bmatrix} c & 0 & x_H & 0 \\ 0 & c & y_H & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x̃ .   (3)
This relation holds if the image coordinate system has no shear (orthogonal axes, respectively
pixel raster) and isotropic scale (same unit along both axes, respectively square pixels). If a
shear s and a scale difference m are present, they amount to an affine distortion of the image
coordinate system, and the camera matrix becomes
K = \begin{bmatrix} c & c\,s & x_H \\ 0 & c(1+m) & y_H \\ 0 & 0 & 1 \end{bmatrix} .   (4)
Concatenating the exterior orientation (2) with the projection (3) yields the collinearity equation in projective form,

x = P X = K R [I | -X^e_0] X .   (5)

If an object point X and its image x are both given at an arbitrary projective scale, they will only satisfy the relation up to a constant factor. To verify the constraint, i.e. check whether x is the projection of X, one can use the relation

x × P X = [x]_× P X = 0 .   (6)

Note that due to the projective formulation only two of the three rows of this equation are linearly independent.
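To make the chain from (2) to (6) concrete, the following sketch builds a projection matrix from an assumed interior and exterior orientation, projects an object point, and evaluates the collinearity check (6). It is a minimal illustration in Python/NumPy; all numerical values and helper names are chosen freely for the example.

```python
import numpy as np

def cross_matrix(x):
    """Skew-symmetric matrix [x]_x of a 3-vector x = [u, v, w]."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def projection_matrix(K, R, X0):
    """P = K R [I | -X0], eq. (5)."""
    return K @ R @ np.hstack([np.eye(3), -X0.reshape(3, 1)])

# Assumed interior orientation: focal length c = 1500, principal point (640, 480).
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 480.0],
              [0.0, 0.0, 1.0]])
# Assumed exterior orientation: rotation about the x-axis, projection center X0.
a = np.deg2rad(10.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(a), np.sin(a)],
              [0.0, -np.sin(a), np.cos(a)]])
X0 = np.array([2.0, 1.0, -10.0])
P = projection_matrix(K, R, X0)

# Project a homogeneous object point (eq. 5) and dehomogenize.
X = np.array([1.0, 2.0, 30.0, 1.0])
x = P @ X
print(x[:2] / x[2])

# Collinearity check (6): [x]_x P X vanishes (up to floating-point noise).
print(cross_matrix(x) @ P @ X)
```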
Given a projection matrix P it is often necessary to extract the interior and exterior orientation parameters. To that end, observe that

P = K R [I | -X^e_0] = [M | m] ,  with  M = K R ,  m = -M X^e_0 .   (7)

The translation part of the exterior orientation immediately follows from X^e_0 = -M^{-1} m. Moreover, the rotation must by definition be an orthonormal matrix, and the calibration must be an upper triangular matrix. Both properties are preserved by matrix inversion; hence the two matrices can be found by QR decomposition of M^{-1} = R^T K^{-1} (or, more efficiently, by RQ decomposition of M).
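One possible way to carry out this decomposition with standard linear-algebra routines is sketched below (Python with NumPy and SciPy; the function name and the sign-fixing strategy are illustrative choices, not a prescribed implementation).

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection_matrix(P):
    """Split P = [M | m] into calibration K, rotation R and projection center X0."""
    M, m = P[:, :3], P[:, 3]
    # P is homogeneous; pick its sign so that det(M) > 0, which makes the
    # recovered rotation proper (det R = +1).
    if np.linalg.det(M) < 0:
        M, m = -M, -m
    K, R = rq(M)                       # M = K R, K upper triangular, R orthonormal
    S = np.diag(np.sign(np.diag(K)))   # RQ is unique only up to these signs
    K, R = K @ S, S @ R
    X0 = -np.linalg.solve(M, m)        # X0 = -M^{-1} m
    return K / K[2, 2], R, X0

# Usage: recover the parameters of a synthetic camera.
K_true = np.array([[1200.0, 0.0, 500.0], [0.0, 1200.0, 400.0], [0.0, 0.0, 1.0]])
R_true = np.eye(3)
X0_true = np.array([0.0, 0.0, -5.0])
P = K_true @ R_true @ np.hstack([np.eye(3), -X0_true.reshape(3, 1)])
K, R, X0 = decompose_projection_matrix(P)
print(np.allclose(K, K_true), np.allclose(R, R_true), np.allclose(X0, X0_true))
```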
Nonlinear errors
Equation (5) is valid for ideal perspective cameras. Real physical cameras obey the model
only approximately, mainly because the light is collected with the help of lenses rather than
entering through an ideal, infinitely small projection center (“pinhole”). The light observed
on a real camera’s sensor did not travel there from the object point along a perfectly straight
path, which leads to errors if one uses only the projective model.
In the image of a real camera we cannot measure the ideal image coordinates x^e, but rather the ones displaced by the nonlinear image distortion,

x̌^e = x^e + Δx(x^e, q) ,   (8)

with q as the parameters of the model that describes the distortion. A simple example would be a radially symmetric lens distortion around the principal point,

Δx = (q_2 r^2 + q_4 r^4) \frac{x^e - x^e_H}{r} ,   r = ‖x^e - x^e_H‖ .   (9)
Here, the different physical or empirical distortion models are not further discussed. Instead,
the focus is on how to compensate the effect when given a distortion model and its parame-
ters q.
The corrections ∆x vary across the image, which means that they depend on the (ideal)
image coordinates x. This may be represented by the mapping
x̌ = H(x) x = \begin{bmatrix} 1 & 0 & Δx(x, q) \\ 0 & 1 & Δy(x, q) \\ 0 & 0 & 1 \end{bmatrix} x .   (10)
The overall mapping from object points to observable image points, including nonlinear
distortions, is now
x̌ = P̌(x)X = H(x)PX . (11)
Note that the nonlinear distortions ∆x(xe , q) are a property of the camera, i.e., they are part
of the interior orientation, together with the calibration matrix K.
Equation (11) forms the basis for the correction of nonlinear distortions. The computation is
split into two steps. Going from object point to image point, one first projects the object point
to an ideal image point, x = PX, and then applies the distortion, x̌ = H(x)x. Note that for
practical purposes the (linear) affine distortion parameters s and m of the image coordinate
system are often also included in H(x) rather than in K.
For photogrammetric operations the inverse relation is needed, i.e., one measures the co-
ordinates x̌ and wants to convert them to ideal ones x, in order to use them as inputs for
procedures based on collinearity, such as orientation and triangulation. Often it even makes
sense to remove the distortion and synthetically generate perspective (straight line preserv-
ing) images as a basis for further processing. To correct the measured coordinates,
x = H^{-1}(x) x̌ = \begin{bmatrix} 1 & 0 & -Δx(x, q) \\ 0 & 1 & -Δy(x, q) \\ 0 & 0 & 1 \end{bmatrix} x̌ ,   (12)
one would need to already know the ideal coordinate one is searching for, so as to evaluate
H(x). One thus resorts to an iterative scheme, starting from x ≈ x̌. Usually at most one
iteration is required, because the nonlinear distortions vary slowly across the image.
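The iteration can be sketched as follows. The radial model of eq. (9), with assumed coefficients q2 and q4, serves as the distortion function; the helper names and numerical values are illustrative.

```python
import numpy as np

def radial_distortion(x, xH, q2, q4):
    """Displacement of eq. (9): radially symmetric distortion around xH."""
    d = x - xH
    r = np.linalg.norm(d)
    if r == 0.0:
        return np.zeros(2)
    return (q2 * r**2 + q4 * r**4) * d / r

def undistort(x_measured, xH, q2, q4, iterations=3):
    """Invert the distortion as in eq. (12) by fixed-point iteration, starting at x = x_measured."""
    x = x_measured.astype(float).copy()
    for _ in range(iterations):
        x = x_measured - radial_distortion(x, xH, q2, q4)
    return x

# Example: distort an ideal point, then recover it from the "measured" one.
xH = np.array([640.0, 480.0])        # principal point (assumed)
q2, q4 = 1e-6, 1e-12                 # assumed distortion coefficients
x_ideal = np.array([900.0, 700.0])
x_measured = x_ideal + radial_distortion(x_ideal, xH, q2, q4)
print(undistort(x_measured, xH, q2, q4), x_ideal)
```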
Two-view geometry
From a single image of an unknown scene, no 3D measurements can be derived, because
the third dimension (the depth along the ray) is lost during projection. The photogrammetric
measurement principle is to acquire multiple images from different viewpoints and measure
corresponding points, meaning image points which are the projections of the same physical
object point. From correspondences one can reconstruct the 3D coordinates via triangulation.
The minimal case of two views forms the nucleus for this approach.
A direct consequence of the collinearity constraint is the coplanarity constraint for two cam-
eras: the viewing rays through corresponding image points must be coplanar, because they
intersect in the 3D point. It follows that even if only the relative orientation between the
two cameras is known one can reconstruct a (projectively distorted) straight-line-preserving
model of the 3D world, by intersecting corresponding rays, and that if additionally the in-
terior orientations are known (the cameras are calibrated), one can reconstruct an angle-
preserving model of the world in the same way. The scale of such a photogrammetric model
cannot be determined without external reference, because scaling up and down the two ray
bundles together does not change the coplanarity.
Fig. 2. The coplanarity constraint: the two projection rays ξ, ξ′ must lie in one plane, which also contains the baseline t and the object point X. As a consequence, possible correspondences to an image point x must lie on the epipolar line l′ and vice versa. All epipolar lines l intersect in the epipole e, the image of the other camera's projection center.
Now let us look at a pair of corresponding image points x in the first image and x′ in the second image (Fig. 2). The coplanarity constraint (or epipolar constraint) states that the two corresponding rays in object space must lie in a plane. By construction that plane also contains the baseline t = X′^e_0 - X^e_0 between the projection centers. From (2) and (3) the ray direction through x in object space (in Euclidean coordinates) is ξ = R^T K^{-1} x, and similarly for the second camera ξ′ = R′^T K′^{-1} x′. Coplanarity between the three vectors implies

ξ^T [t]_× ξ′ = x^T K^{-T} R [t]_× R′^T K′^{-1} x′ = x^T F x′ = 0 .   (13)
The matrix F is called the fundamental matrix and completely describes the relative orienta-
tion. It has the following properties:
• l = F x′ is the epipolar line to x′, i.e., the image of the ray ξ′ in the first camera. Corresponding points to x′ must lie on that line, x^T l = 0. Conversely, l′ = F^T x is the epipolar line to x.
• The left null-space of F is the epipole e of the first image, i.e. the image of the second projection center X′_0 in which all epipolar lines intersect, F^T e = 0. Conversely, the right null-space of F is the epipole of the second image.
• F is singular with rank 2, because [t]_× has rank 2. Accordingly, F maps points to lines. It thus has seven degrees of freedom (nine entries determined only up to a common scale factor, minus one rank constraint).
The coplanarity constraint is linear in the elements of F and bilinear in the image coordinates,
which is the basis for directly estimating the relative orientation.
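The epipolar entities can be computed directly from a given fundamental matrix. The sketch below constructs F for an assumed synthetic camera pair, evaluates the constraint (13), computes an epipolar line, and extracts the epipoles as null-spaces via SVD (Python/NumPy; all values are illustrative).

```python
import numpy as np

def cross_matrix(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

# Assumed relative orientation: identical calibration, identity rotations,
# baseline t along the x-axis.
K = np.array([[1000.0, 0.0, 500.0], [0.0, 1000.0, 500.0], [0.0, 0.0, 1.0]])
R1, R2 = np.eye(3), np.eye(3)
t = np.array([1.0, 0.0, 0.0])

# Fundamental matrix of eq. (13): F = K^{-T} R1 [t]_x R2^T K'^{-1}.
Kinv = np.linalg.inv(K)
F = Kinv.T @ R1 @ cross_matrix(t) @ R2.T @ Kinv

# Project a 3D point into both cameras (first camera at the origin).
X = np.array([0.3, -0.2, 5.0])
x1 = K @ R1 @ X
x2 = K @ R2 @ (X - t)
print(x1 @ F @ x2)        # epipolar constraint (13), ~0

# Epipolar line in image 1 belonging to x2; x1 must lie on it.
l1 = F @ x2
print(x1 @ l1)

# Epipoles as null-spaces: F^T e1 = 0 (first image), F e2 = 0 (second image).
e1 = np.linalg.svd(F.T)[2][-1]
e2 = np.linalg.svd(F)[2][-1]
print(F.T @ e1, F @ e2)
```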
If the interior orientations of both cameras are known (the cameras have been calibrated), the epipolar constraint can also be written in camera coordinates. The ray from the projection center to x in the camera coordinate system is given by η = K^{-1} x, and similarly η′ = K′^{-1} x′. With these direction vectors the epipolar constraint reads

η^T R [t]_× R′^T η′ = η^T E η′ = 0 .   (14)
The matrix E is called the essential matrix and completely describes the relative orienta-
tion between calibrated cameras, i.e., their relative rotation and the direction of the relative
translation (the baseline). It has the following properties:
• E has rank 2. Additionally, its two nonzero singular values are equal. E has five degrees of freedom, corresponding to the relative orientation of an angle-preserving photogrammetric model (three for the relative rotation, two for the baseline direction).
• The constraint between calibrated rays is still linear in the elements of E and bilinear
in the image coordinates.
For completeness it shall be mentioned that beyond coplanarity, further constraints, the so-called
trifocal constraints, exist between triplets of images: if a corresponding straight line has been
observed in three images, then the planes formed by the associated projection rays must all
intersect in a single 3D line; see, for example, Hartley (1997) and McGlone (2013). This
topic is not further treated here.
Absolute orientation
A photogrammetric model reconstructed from relatively oriented images is related to the object coordinate frame by the absolute orientation: a 3D similarity transformation with seven parameters (one scale, three rotation, three translation parameters), which can be estimated from at least three ground control points.
Analytical operations
The input to the photogrammetric process consists of raw images, or rather of the coordinates measured in those images. This section describes methods to estimate unknown parameters from image coordinates, using the models developed above.
Single-image orientation
The complete orientation of a single image has 11 unknowns (5 for the interior orientation
and 6 for the exterior orientation). An image point affords two observations; thus, at least
six ground control points in the object coordinate system and their corresponding image
points are required. An algebraic solution, known as the Direct Linear Transform or DLT
(Abdel-Aziz and Karara, 1971), is obtained directly from equation (6), which can be reordered as

([x]_× ⊗ X^T) p = 0 ,

with the vector p = vec(P) = [P_{11}, P_{12}, ..., P_{34}]^T. For each control point two of the three equations are linearly independent. Selecting two such equations for each of N ≥ 6 ground control points and stacking them yields a homogeneous linear system A_{2N×12} p = 0, which is solved with singular value decomposition to obtain the projection matrix P. Note that the
DLT fails if all control points are coplanar and is unstable if they are nearly coplanar. A
further critical configuration, albeit of rather theoretical interest, is if all control points lie on
a twisted cubic curve (Hartley and Zisserman, 2004).
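A bare-bones DLT might look as follows (Python/NumPy). It omits the numerical conditioning (shifting and scaling of the coordinates) that is advisable for real data; the control points in the usage part are synthetic.

```python
import numpy as np

def cross_matrix(x):
    return np.array([[0.0, -x[2], x[1]], [x[2], 0.0, -x[0]], [-x[1], x[0], 0.0]])

def dlt(image_points, object_points):
    """Estimate the 3x4 projection matrix P from N >= 6 correspondences.

    image_points:  (N, 3) homogeneous image coordinates
    object_points: (N, 4) homogeneous object coordinates
    """
    rows = []
    for x, X in zip(image_points, object_points):
        # Constraint (6), reordered as ([x]_x kron X^T) p = 0; only the first
        # two of its three rows are linearly independent.
        rows.extend(np.kron(cross_matrix(x), X.reshape(1, 4))[:2])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)    # null vector = entries of P, row by row

# Usage: recover a synthetic camera from six control points in general position.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0.0, 320.0, 100.0],
                   [0.0, 800.0, 240.0, 50.0],
                   [0.0, 0.0, 1.0, 2.0]])
Xs = np.hstack([rng.uniform(-2, 2, (6, 2)), rng.uniform(4, 9, (6, 1)), np.ones((6, 1))])
xs = (P_true @ Xs.T).T
P_est = dlt(xs, Xs)
print(np.allclose(P_est / P_est[2, 3], P_true / P_true[2, 3]))
```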
The direct algebraic solution is not geometrically optimal, but can serve as a starting value for
an iterative estimation of the optimal Euclidean camera parameters; see (McGlone, 2013).
A further frequent orientation procedure is to determine the exterior orientation of a sin-
gle camera with known interior orientation. Three noncollinear control points are needed to
determine the six unknowns. The procedure is known as spatial resection or P3P problem.
Fig. 3. Spatial resection: the image coordinates together with the interior orientation give rise to three rays in camera coordinates, forming a triangular pyramid. Applying the cosine law on each of the pyramid's faces relates the pairwise angles α, β, γ between the rays and the known distances a, b, c between the object points to the pyramid's sides s_1, s_2, s_3. Solving for these three lengths yields the camera coordinates of the three points.
The solution (Grunert, 1841) is sketched in Figure 3. With the known interior orientation, the three image points are converted to rays in camera coordinates, η_i = K^{-1} x_i, i = 1 ... 3. The pairwise angles between these rays are determined via cos α = η_2^T η_3 / (|η_2| |η_3|), etc. In the object coordinate system, the distances between the points are determined, a = |X^e_2 - X^e_3|, etc. The three triangles containing the projection center now give rise to constraints

a^2 = s_2^2 + s_3^2 - 2 s_2 s_3 cos α ,
b^2 = s_1^2 + s_3^2 - 2 s_1 s_3 cos β ,
c^2 = s_1^2 + s_2^2 - 2 s_1 s_2 cos γ .

This system can be reduced to a polynomial of degree four, so that up to four real solutions exist (see Haralick et al, 1994, for a review of solution strategies). Once the lengths s_1, s_2, s_3 are known, the camera coordinates of the three control points follow, and the exterior orientation is obtained as the rigid transformation between their camera and object coordinates.
Relative orientation
The relative orientation of two uncalibrated cameras is described by the fundamental matrix F. Analogous to the DLT, the epipolar constraint (13) can be reordered into a linear homogeneous equation in the entries f = vec(F),

(x^T ⊗ x′^T) f = 0 .

Stacking this equation for at least eight point correspondences and solving the system with singular value decomposition yields an estimate F̂. Due to measurement noise, this estimate will in general not fulfill the rank constraint and therefore will not be a fundamental matrix. To correct this, one can find the nearest (according to the Frobenius norm) rank-2 matrix by decomposing F̂ with SVD and nullifying the smallest singular value,

F = U diag(s_1, s_2, 0) V^T ,  where  F̂ = U diag(s_1, s_2, s_3) V^T .
This so-called "8-point algorithm" (Longuet-Higgins, 1981) can be used in equivalent form to estimate the essential matrix between two calibrated cameras. Reordering (14) to

(η^T ⊗ η′^T) e = 0   (20)

yields the entries of e = vec(Ê), and the solution is corrected to the nearest essential matrix by enforcing the constraints on the singular values,

E = U diag(1, 1, 0) V^T ,  where  Ê = U diag(s_1, s_2, s_3) V^T .   (21)
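A bare-bones version of the 8-point algorithm with the subsequent rank correction is sketched below for the fundamental matrix (Python/NumPy). Coordinate normalization, which is important for noisy real data, is omitted, and the synthetic test data are illustrative.

```python
import numpy as np

def eight_point(x1, x2):
    """Fundamental matrix from N >= 8 correspondences with x1^T F x2 = 0.

    x1, x2: (N, 3) homogeneous points in the first and second image.
    """
    # One row (x^T kron x'^T) of the design matrix per correspondence.
    A = np.array([np.kron(a, b) for a, b in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F_hat = Vt[-1].reshape(3, 3)            # row-major entries of the estimate
    # Enforce the rank-2 constraint: nullify the smallest singular value.
    U, s, Vt = np.linalg.svd(F_hat)
    return U @ np.diag([s[0], s[1], 0.0]) @ Vt

# Usage with noise-free synthetic correspondences (first camera at the origin).
rng = np.random.default_rng(1)
K = np.array([[1000.0, 0.0, 500.0], [0.0, 1000.0, 500.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.0])
X = np.hstack([rng.uniform(-1, 1, (8, 2)), rng.uniform(4, 8, (8, 1))])
x1 = (K @ X.T).T                 # first image
x2 = (K @ (X - t).T).T           # second image (identical attitude, shifted by t)
F = eight_point(x1, x2)
print(max(abs(a @ F @ b) for a, b in zip(x1, x2)))   # small epipolar residuals
```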
If only seven correspondences are available, the stacked equation system has a two-dimensional null-space spanned by two vectors f_1 and f_2, so that every candidate solution can be written as

f(δ) = f_1 + δ f_2   (22)

with arbitrary δ. To find a fundamental matrix (i.e., a rank-deficient matrix) in that null-space, one introduces the nine elements of f(δ) into the determinant constraint det(F) = 0 and analytically expands the determinant with Sarrus' rule. This results in a cubic equation for δ, and consequently in either one or three solutions for F.
Following the same idea, a "5-point algorithm" exists for the calibrated case (Nistér, 2004): stacking (20) for five correspondences leads to a 4-dimensional null-space, in which the solution can be parameterized as

e(δ, ε, ζ) = δ e_1 + ε e_2 + ζ e_3 + e_4 .   (23)

Again this can be substituted back into the determinant constraint. Furthermore, it can be shown that the additional constraints on the essential matrix can be written

E E^T E - (1/2) trace(E E^T) E = 0 ,   (24)

in which one can also substitute (23). Through further—rather cumbersome—algebraic variable substitutions, one arrives at a 10th-order polynomial in ζ, which is solved numerically. For the (up to 10) real roots, one then recovers δ and ε, and thus E, through back-substitution.
The fundamental matrix is ambiguous if all points are coplanar in object space. The corresponding equations become singular in that case and unstable near it. In contrast, the essential matrix does not suffer from that problem and in fact can be estimated from only four correspondences if they are known to be coplanar (Wunderlich, 1982).
Naturally, once initial values for the relative orientation parameters are available, an iter-
ative solution exists to find the geometrically optimal solution for an arbitrary number of
correspondences; see (McGlone, 2013).
Having determined E, it is in many cases necessary to extract explicit relative orientation
parameters (rotation and translation direction) for the image pair. Given the singular value
decomposition (21) and the two auxiliary matrices
W = \begin{bmatrix} 0 & ±1 & 0 \\ ∓1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} ,  Z = \begin{bmatrix} 0 & ±1 & 0 \\ ∓1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} ,   (25)

the relative rotation and the baseline direction can be recovered as

[t]_× = U Z U^T ,  R = U W V^T ,   (26)

which can be easily verified by checking E = [t]_× R. The sign ambiguities in W and Z give
rise to four combinations, corresponding to all combinations of the “upright” and “upside-
down” camera configurations for the two images. The correct one is found by checking in
which one a 3D object point is located in front of both cameras.
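The following sketch enumerates the four candidate decompositions and selects the one that places a triangulated point in front of both cameras. It follows the convention used above, i.e., the first camera is the reference and E ~ [t]_× R; the helper names and the test configuration are illustrative (Python/NumPy).

```python
import numpy as np

def cross_matrix(t):
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

def triangulate(P1, P2, eta1, eta2):
    """Linear triangulation of one point from two cameras in camera coordinates."""
    A = np.vstack([cross_matrix(eta1) @ P1, cross_matrix(eta2) @ P2])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def decompose_essential(E, eta1, eta2):
    """Recover R and the baseline direction t (E ~ [t]_x R) from an essential matrix.

    eta1, eta2: one pair of corresponding calibrated rays, used to pick the
    candidate that puts the triangulated point in front of both cameras.
    """
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:        # make both orthogonal factors proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            # Camera 1 is the reference; camera 2 has attitude R and center t,
            # i.e. it maps object points X to camera coordinates R^T (X - t).
            P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = np.hstack([R.T, (-R.T @ t).reshape(3, 1)])
            X = triangulate(P1, P2, eta1, eta2)
            if X[2] > 0 and (R.T @ (X - t))[2] > 0:
                return R, t
    raise ValueError("no candidate places the point in front of both cameras")

# Usage: synthetic configuration with a rotation about the y-axis and baseline t_true.
a = np.deg2rad(12.0)
R_true = np.array([[np.cos(a), 0.0, np.sin(a)], [0.0, 1.0, 0.0], [-np.sin(a), 0.0, np.cos(a)]])
t_true = np.array([1.0, 0.1, 0.2])
E = cross_matrix(t_true) @ R_true
X = np.array([0.5, -0.3, 6.0])               # a point in front of both cameras
eta1, eta2 = X, R_true.T @ (X - t_true)      # corresponding calibrated rays
R, t = decompose_essential(E, eta1, eta2)
print(np.allclose(R, R_true), np.allclose(t, t_true / np.linalg.norm(t_true)))
```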
Reconstruction of 3D points
For photogrammetric 3D reconstruction the camera orientations are in fact only an unavoid-
able by-product, whereas the actual goal is to reconstruct 3D points (note, however, that the
opposite is true for image-based navigation). The basic operation of reconstruction is trian-
gulation of 3D object points from cameras with known orientations Pi . A direct algebraic
solution is found from the collinearity constraint in the form (6). Each image point gives rise
to
([x_i]_× P_i) X = 0 ,   (27)
of which two rows are linearly independent. Stacking the equations leads to an equation system for the object point X. Solving it with SVD yields a unique solution for two cameras P_1, P_2, and a (projective) least-squares solution for more than two cameras.
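A minimal version of this linear triangulation for an arbitrary number of oriented cameras could look as follows (Python/NumPy; the cameras in the usage part are synthetic).

```python
import numpy as np

def cross_matrix(x):
    return np.array([[0.0, -x[2], x[1]], [x[2], 0.0, -x[0]], [-x[1], x[0], 0.0]])

def triangulate(projections, image_points):
    """Linear triangulation (27): stack two rows of [x_i]_x P_i per image, then SVD."""
    A = np.vstack([cross_matrix(x)[:2] @ P for P, x in zip(projections, image_points)])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]           # Euclidean object coordinates

# Usage: two synthetic normalized cameras observing the same point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # shifted by the baseline
X_true = np.array([0.2, -0.1, 4.0, 1.0])
x1, x2 = P1 @ X_true, P2 @ X_true
print(triangulate([P1, P2], [x1, x2]), X_true[:3])
```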
A geometrically optimal solution for two views exists, which involves numerically solving
a polynomial of degree 6 (Hartley and Sturm, 1997). Iterative solutions for ≥3 views also
exist. In general the algebraic solution (27) is a good approximation if one employs proper numerical conditioning.
Most applications require a network of >2 images to cover the object of interest.[3] By combinations of the elementary operations described above, the relative orientation of all images in a common coordinate system can be found: either one can chain two-view relative orientations together while estimating the relative scale from a few object points, or one can generate a photogrammetric model from two views and then iteratively add additional views to it by alternating single-image orientation with triangulation of new object points. Absolute
orientation is accomplished (either at the end or at an intermediate stage) by estimating the
3D similarity transform that aligns the photogrammetric model with known ground control
points in the object coordinate system.
Obviously such an iterative procedure will lead to error buildup. In most applications, the
image network is thus polished with a global least-squares optimization of all unknown pa-
rameters. The specialization of least-squares adjustment to photogrammetric ray bundles,
using the collinearity constraint as functional model, is called bundle adjustment (Brown,
1958; Triggs et al, 1999; McGlone, 2013). Adjustment proceeds in the usual way: the con-
straints y = f (x) between observations y and unknowns x are linearized at the approximate
solution x0 , leading to an overdetermined equation system δy = J δx. The equations are
solved in a least-squares sense,
N δx = n ,  N = J^T S_{yy}^{-1} J ,  n = J^T S_{yy}^{-1} δy ,   (28)
with Syy the covariance matrix of the observations. The approximate solution is then updated,
x1 = x0 + δx, and the procedure iterated until convergence.
In order to yield geometrically optimal solutions, the collinearity constraint is first trans-
formed to Euclidean space by removing the projective scale. Denoting cameras by index j,
object points by index i, and the rows of the projection matrix by P^{(1)}, P^{(2)}, P^{(3)}, we get

x^e_{ij} = \frac{P^{(1)}_j X_i}{P^{(3)}_j X_i} ,  y^e_{ij} = \frac{P^{(2)}_j X_i}{P^{(3)}_j X_i} .   (29)
These equations must then be linearized for all observed image points w.r.t. the orienta-
tion parameters contained in the Pj as well as the 3D object point coordinates Xi . More-
over, equations for the ground control points, as well as additional measurements such as
GPS/IMU observations for the projection centers, are added.
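As an illustration of the functional model, the following sketch evaluates the reprojection residuals of eq. (29) for a set of observations; the Jacobian J required in (28) would then be obtained by differentiating this function analytically or numerically. Data structures and names are illustrative only (Python/NumPy).

```python
import numpy as np

def reprojection_residuals(projections, object_points, observations):
    """Euclidean reprojection errors, eq. (29), for all observed image points.

    projections:   dict j -> 3x4 projection matrix P_j
    object_points: dict i -> homogeneous object point X_i (4-vector)
    observations:  list of tuples (i, j, x_observed, y_observed)
    """
    residuals = []
    for i, j, x_obs, y_obs in observations:
        u = projections[j] @ object_points[i]
        residuals.append(x_obs - u[0] / u[2])   # x residual of point i in image j
        residuals.append(y_obs - u[1] / u[2])   # y residual
    return np.array(residuals)

# Usage with one camera and two object points (illustrative numbers).
P = {0: np.array([[800.0, 0.0, 320.0, 0.0], [0.0, 800.0, 240.0, 0.0], [0.0, 0.0, 1.0, 0.0]])}
X = {0: np.array([0.1, 0.2, 5.0, 1.0]), 1: np.array([-0.3, 0.0, 4.0, 1.0])}
obs = [(0, 0, 336.0, 272.0), (1, 0, 260.0, 240.0)]
print(reprojection_residuals(P, X, obs))    # exact observations give zero residuals
```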
For maximum accuracy, it is also common to regard interior orientation parameters (including nonlinear distortions) as observations of a specified accuracy rather than as constants and to estimate their values during bundle adjustment. This so-called self-calibration can take different forms: e.g., for crowd-sourced amateur images, it is usually required to estimate the focal length and radial distortion of each individual image, whereas for professional aerial imagery, it is common to use a single set of interior orientation parameters for all images but include more complex nonlinear distortion coefficients. For details about GPS/IMU integration, self-calibration etc. see (McGlone, 2013).

[3] In aerial photogrammetry, the network is often called an "image block," since the images are usually recorded on a regular raster.
The normal equations for photogrammetric networks are often extremely large (up to >10^6 unknowns). However, they are also highly structured and very sparse (<1% nonzero coefficients), which can be exploited to efficiently solve them. The most common procedure is to eliminate the largest portion of the unknowns, namely, the 3D object point coordinates, with the help of the Schur complement. Let index x denote object point coordinates and index q all other unknown parameters; then the normal equations can be written as

\begin{bmatrix} N_{xx} & N_{xq} \\ N_{xq}^T & N_{qq} \end{bmatrix} \begin{bmatrix} δx_x \\ δx_q \end{bmatrix} = \begin{bmatrix} n_x \\ n_q \end{bmatrix} .   (30)
Inverting Nxx is cheap because it is block diagonal with individual (3×3)-blocks for each ob-
ject point. Using that fact the normal equations can efficiently be reduced to a much smaller
system:
N̄ δx_q = n̄ ,  with  N̄ = N_{qq} - N_{xq}^T N_{xx}^{-1} N_{xq} ,  n̄ = n_q - N_{xq}^T N_{xx}^{-1} n_x .   (31)
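The reduction (30)-(31) can be written compactly as below (Python/NumPy; dense matrices for clarity, whereas a production implementation would exploit the block-diagonal structure of N_xx and the sparsity of N_xq). The toy system in the usage part is random and serves only to check the algebra against a direct solve.

```python
import numpy as np

def schur_reduce(N_xx, N_xq, N_qq, n_x, n_q):
    """Eliminate the object-point unknowns, eq. (31)."""
    # N_xx is block diagonal (one 3x3 block per point), so its inverse is cheap
    # in a real implementation; here it is formed densely for clarity.
    N_xx_inv = np.linalg.inv(N_xx)
    N_bar = N_qq - N_xq.T @ N_xx_inv @ N_xq
    n_bar = n_q - N_xq.T @ N_xx_inv @ n_x
    return N_bar, n_bar

def solve_normal_equations(N_xx, N_xq, N_qq, n_x, n_q, damping=0.0):
    """Solve eq. (30) via the Schur complement, with optional LM damping as in (32)."""
    N_bar, n_bar = schur_reduce(N_xx, N_xq, N_qq, n_x, n_q)
    dq = np.linalg.solve(N_bar + damping * np.eye(len(n_q)), n_bar)
    dx = np.linalg.solve(N_xx, n_x - N_xq @ dq)   # back-substitution for the points
    return dx, dq

# Usage: a random toy system; the first 6 unknowns play the role of object point
# coordinates, the last 3 of camera parameters.
rng = np.random.default_rng(2)
J = rng.normal(size=(20, 9))
N, n = J.T @ J, J.T @ rng.normal(size=20)
dx, dq = solve_normal_equations(N[:6, :6], N[:6, 6:], N[6:, 6:], n[:6], n[6:])
print(np.allclose(np.concatenate([dx, dq]), np.linalg.solve(N, n)))
```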
The standard way to solve the reduced normal equations is to adaptively dampen the equation system with the Levenberg-Marquardt method (Levenberg, 1944; Nocedal and Wright, 2006) for better convergence, i.e., the equation system is modified to

(N̄ + λ I_q) δx_q = n̄ ,   (32)

with I_q the identity matrix of appropriate size and the damping factor λ depending on the success of the previous iteration. The system (32) is then reduced to a triangular form with variants of Cholesky factorization. Using recursive partitioning and equation solvers which exploit sparsity, it is possible to perform bundle adjustment for photogrammetric networks with >10'000 cameras.
Due to automatic tie-point measurement as well as the sheer size of modern photogrammet-
ric campaigns, blunders—mainly incorrect tie point matches—are unavoidable in practice.
Therefore, bundle adjustment routinely employs robust methods such as iterative reweighted
least squares (IRLS) (e.g., Huber, 1981) to defuse, and subsequently eliminate, gross outliers.
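A typical reweighting step is sketched below, using the Huber weight function as one common choice; the threshold is an assumed value. The weights would down-weight the corresponding observations (e.g., by scaling the rows of the design matrix) in the next adjustment iteration.

```python
import numpy as np

def huber_weights(residuals, k=1.5):
    """IRLS weights of the Huber estimator: 1 inside the threshold, k/|r| outside."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    w[r > k] = k / r[r > k]
    return w

# Normalized residuals (in units of their standard deviation); the last one is a blunder.
residuals = np.array([0.3, -0.8, 1.2, -0.5, 25.0])
print(huber_weights(residuals))   # the blunder is down-weighted to 1.5/25 = 0.06
```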
Conclusion
A brief summary has been given of the elementary geometry underlying photogrammetric
modeling, as well as the mathematical operations for image orientation and image-based
3D reconstruction. The theory of photogrammetry started to emerge in the nineteenth century; most of it was developed in the twentieth century. The geometric relations that govern
the photographic imaging process, and their inversion for 3D measurement purposes, are
nowadays well understood; the theory is mature and has been compiled—in much more de-
tail than here—in several excellent textbooks (e.g., Hartley and Zisserman, 2004; McGlone,
2013; Luhmann et al, 2014). Still, some important findings, such as the direct solution for
relative orientation of calibrated cameras, are surprisingly recent.
References
Abdel-Aziz YI, Karara HM (1971) Direct linear transformation from comparator coordinates into object space
coordinates in close-range photogrammetry. In: Proceedings of the Symposium on Close-Range Pho-
togrammetry, American Society of Photogrammetry
Brown DC (1958) A solution to the general problem of multiple station analytical stereotriangulation. Tech. Rep. RCA-MTP Data Reduction Technical Report No. 43, Patrick Air Force Base
Das GB (1949) A mathematical approach to problems in photogrammetry. Empire Survey Review 10(73):131–
137
Förstner W (2010) Minimal representations for uncertainty and estimation in projective spaces. In: Proceedings
of the Asian Conference on Computer Vision, Springer Lecture Notes in Computer Science, vol 6493
Grunert JA (1841) Das Pothenot'sche Problem in erweiterter Gestalt; nebst Bemerkungen über seine Anwendung in der Geodäsie. In: Grunert Archiv der Mathematik und Physik 1, pp 238–248
Haralick RM, Lee CN, Ottenberg K, Nölle M (1994) Review and analysis of solutions of the three point
perspective pose estimation problem. International Journal of Computer Vision 13(3):331–356
Hartley R, Zisserman A (2004) Multiple view geometry in computer vision, 2nd edn. Cambridge University
Press
Hartley RI (1994) Projective reconstruction and invariants from multiple images. IEEE Transactions on Pattern
Analysis and Machine Intelligence 16(10):1036–1041
Hartley RI (1997) Lines and points in three views and the trifocal tensor. International Journal of Computer
Vision 22(2):125–140
Hartley RI, Sturm P (1997) Triangulation. Computer Vision and Image Understanding 68(2):146–157
Huber PJ (1981) Robust Statistics. Wiley
Levenberg K (1944) A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics 2:164–168
Longuet-Higgins HC (1981) A computer algorithm for reconstructing a scene from two projections. Nature
293:133–135
Luhmann T, Robson S, Kyle S, Boehm J (2014) Close-range photogrammetry and 3D imaging. De Gruyter
McGlone JC (ed) (2013) Manual of Photogrammetry, sixth edn. American Society for Photogrammetry and Remote Sensing
Nistér D (2004) An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern
Analysis and Machine Intelligence 26(6):756–777
Nocedal J, Wright SJ (2006) Numerical Optimization, 2nd edn. Springer
Pajdla T (2002) Stereo with oblique cameras. International Journal of Computer Vision 47(1-3):161–170
Semple JG, Kneebone GT (1952) Algebraic projective geometry. Oxford University Press
Triggs B, McLauchlan PF, Hartley RI, Fitzgibbon AW (1999) Bundle adjustment – a modern synthesis. In: Vision Algorithms: Theory and Practice, Springer Lecture Notes in Computer Science, vol 1883
von Sanden H (1908) Die Bestimmung der Kernpunkte in der Photogrammetrie. PhD thesis, Universität
Göttingen
Wunderlich W (1982) Rechnerische Rekonstruktion eines ebenen Objektes aus zwei Photographien. Mitteilungen des Geodätischen Instituts der TU Graz 40:265–377