Introduction to Photogrammetry
T. Schenk
[email protected]
1 Introduction 1
1.1 Preliminary Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Definitions, Processes and Products . . . . . . . . . . . . . . . . . . 3
1.2.1 Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Photogrammetric Products . . . . . . . . . . . . . . . . . . . 5
Photographic Products . . . . . . . . . . . . . . . . . . . . . 5
Computational Results . . . . . . . . . . . . . . . . . . . . . 5
Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Photogrammetric Procedures and Instruments . . . . . . . . . 6
1.3 Historical Background . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Film-based Cameras 11
2.1 Photogrammetric Cameras . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.2 Components of Aerial Cameras . . . . . . . . . . . . . . . . 12
Lens Assembly . . . . . . . . . . . . . . . . . . . . . . . . . 12
Inner Cone and Focal Plane . . . . . . . . . . . . . . . . . . 13
Outer Cone and Drive Mechanism . . . . . . . . . . . . . . . 14
Magazine . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.3 Image Motion . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.4 Camera Calibration . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.5 Summary of Interior Orientation . . . . . . . . . . . . . . . . 19
2.2 Photographic Processes . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.1 Photographic Material . . . . . . . . . . . . . . . . . . . . . 20
2.2.2 Photographic Processes . . . . . . . . . . . . . . . . . . . . . 21
Exposure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Colors and Filters . . . . . . . . . . . . . . . . . . . . . . . . 22
Processing Color Film . . . . . . . . . . . . . . . . . . . . . 23
2.2.3 Sensitometry . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.4 Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.5 Resolving Power . . . . . . . . . . . . . . . . . . . . . . . . 26
3 Digital Cameras 29
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1.1 Camera Overview . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.2 Multiple frame cameras . . . . . . . . . . . . . . . . . . . . 31
3.1.3 Line cameras . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.4 Camera Electronics . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.5 Signal Transmission . . . . . . . . . . . . . . . . . . . . . . 34
3.1.6 Frame Grabbers . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 CCD Sensors: Working Principle and Properties . . . . . . . . . . . . 34
3.2.1 Working Principle . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.2 Charge Transfer . . . . . . . . . . . . . . . . . . . . . . . . . 37
Linear Array With Bilinear Readout . . . . . . . . . . . . . . 37
Frame Transfer . . . . . . . . . . . . . . . . . . . . . . . . . 37
Interline Transfer . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.3 Spectral Response . . . . . . . . . . . . . . . . . . . . . . . 38
6 Measuring Systems 71
6.1 Analytical Plotters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.1.2 System Overview . . . . . . . . . . . . . . . . . . . . . . . . 71
Stereo Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Translation System . . . . . . . . . . . . . . . . . . . . . . . 72
Measuring and Recording System . . . . . . . . . . . . . . . 73
User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Electronics and Real-Time Processor . . . . . . . . . . . . . 75
Host Computer . . . . . . . . . . . . . . . . . . . . . . . . . 76
Auxiliary Devices . . . . . . . . . . . . . . . . . . . . . . . . 76
6.1.3 Basic Functionality . . . . . . . . . . . . . . . . . . . . . . . 76
Model Mode . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Comparator Mode . . . . . . . . . . . . . . . . . . . . . . . 77
6.1.4 Typical Workflow . . . . . . . . . . . . . . . . . . . . . . . . 77
Definition of System Parameters . . . . . . . . . . . . . . . . 77
Definition of Auxiliary Data . . . . . . . . . . . . . . . . . . 78
Definition of Project Parameters . . . . . . . . . . . . . . . . 78
Interior Orientation . . . . . . . . . . . . . . . . . . . . . . . 78
Relative Orientation . . . . . . . . . . . . . . . . . . . . . . 79
Absolute Orientation . . . . . . . . . . . . . . . . . . . . . . 79
6.1.5 Advantages of Analytical Plotters . . . . . . . . . . . . . . . 79
6.2 Digital Photogrammetric Workstations . . . . . . . . . . . . . . . . . 79
6.2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Digital Photogrammetric Workstation and Digital Photogrammetry Environment . . . . 81
6.2.2 Basic System Components . . . . . . . . . . . . . . . . . . . 82
6.2.3 Basic System Functionality . . . . . . . . . . . . . . . . . . 84
Storage System . . . . . . . . . . . . . . . . . . . . . . . . . 85
Viewing and Measuring System . . . . . . . . . . . . . . . . 86
Stereoscopic Viewing . . . . . . . . . . . . . . . . . . . . . . 88
Roaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.3 Analytical Plotters vs. DPWs . . . . . . . . . . . . . . . . . . . . . . 94
Chapter 1
Introduction
Figure 1.1: Time gap between research, development and operational use of a new method or instrument.
A good example is aerial triangulation. The mathematical foundation was laid in the fifties, the first programs became available in the late sixties, but it took another decade before they were widely used in photogrammetric practice.
There are only a few manufacturers of photogrammetric equipment. The two
leading companies are Leica (a recent merger of the former Swiss companies Wild
and Kern), and Carl Zeiss of Germany (before unification there were two separate
companies: Zeiss Oberkochen and Zeiss Jena).
Photogrammetry and remote sensing are two related fields. This is also manifest
in national and international organizations. The International Society of Photogram-
metry and Remote Sensing (ISPRS) is a non-governmental organization devoted to
the advancement of photogrammetry and remote sensing and their applications. It
was founded in 1910. Members are national societies representing professionals and
specialists of photogrammetry and remote sensing of a country. Such a national
organization is the American Society of Photogrammetry and Remote Sensing
(ASPRS).
The principal difference between photogrammetry and remote sensing is in the application: while photogrammetrists produce maps and precise three-dimensional positions of points, remote sensing specialists analyze and interpret images for deriving information about the earth's land and water areas. As depicted in Fig. 1.2, both disciplines are also related to Geographic Information Systems (GIS) in that they provide GIS with essential information. Quite often, the core of topographic information is produced by photogrammetrists in the form of a digital map.
ISPRS adopted the metric system and we will be using it in this course. Where appropriate, we will occasionally use English units, particularly with regard to focal lengths of cameras. Despite considerable effort there is, unfortunately, no unified nomenclature. We follow as closely as possible the terms and definitions laid out in (1). Students who are interested in a more thorough treatment of photogrammetry are referred to (2), (3), (4), (5). Finally, some of the leading journals are mentioned. The official journal published by ISPRS is called Photogrammetry and Remote Sensing. ASPRS' journal, Photogrammetric Engineering and Remote Sensing, PERS, appears monthly, while Photogrammetric Record, published by the British Society of Photogrammetry and Remote Sensing, appears six times a year. Another renowned journal is Zeitschrift für Photogrammetrie und Fernerkundung.
Figure 1.2: Relationship between photogrammetry, remote sensing and GIS.

1.2 Definitions, Processes and Products
The name "photogrammetry" is derived from the three Greek words phos or phot, which means light, gramma, which means letter or something drawn, and metrein, to measure.
In order to avoid an abstract definition and to get a quick grasp of the complex field of photogrammetry, we adopt a systems approach. Fig. 1.3
illustrates the idea. In the first place, photogrammetry is considered a black box.
The input is characterized by obtaining reliable information through processes of
recording patterns of electromagnetic radiant energy, predominantly in the form of
photographic images. The output, on the other hand, comprises photogrammetric
products generated within the black box whose functioning we will unravel during
this course.
geometric information involves the spatial position and the shape of objects. It is
the most important information source in photogrammetry.
physical information refers to properties of electromagnetic radiation, e.g., radiant
energy, wavelength, and polarization.
semantic information is related to the meaning of an image. It is usually obtained
by interpreting the recorded data.
temporal information is related to the change of an object in time, usually obtained
by comparing several images which were recorded at different times.
As indicated in Table 1.1, the remotely sensed objects may range from planets to portions of the earth's surface, to industrial parts, historical buildings or human bodies. The generic name for data acquisition devices is sensor, consisting of an optical and a detector system. The sensor is mounted on a platform. The most typical sensors are cameras, where photographic material serves as the detector. They are mounted on airplanes, the most common platforms. Table 1.1 summarizes the different objects and platforms and associates them with different applications of photogrammetry.

Table 1.1: Different areas of specialization of photogrammetry, their objects and sensor platforms.
Photographic Products
Photographic products are derivatives of single photographs or composites of overlapping photographs. Fig. 1.4 depicts the typical case of photographs taken by an aerial
camera. During the time of exposure, a latent image is formed which is developed to
a negative. At the same time diapositives and paper prints are produced.
Enlargements may be quite useful for preliminary design or planning studies. A better approximation to a map is a rectification. A plane rectification involves just tipping and tilting the diapositive so that it is parallel to the ground. If the ground has relief, then the rectified photograph still has errors. Only a differentially rectified photograph, better known as an orthophoto, is geometrically identical with a map.

Composites are frequently used as a first base for general planning studies. Photomosaics are best known, but composites with orthophotos, called orthophoto maps, are also used, especially now with the possibility to generate them with methods of digital photogrammetry.
Computational Results
Aerial triangulation is a very successful application of photogrammetry. It delivers 3-D positions of points, measured on photographs, in a ground control coordinate system, e.g., the state plane coordinate system.
Profiles and cross sections are typical products for highway design where
earthwork quantities are computed. Inventory calculations of coal piles or mineral
deposits are
other examples which may require profile and cross section data. The most popular
form for representing portions of the earth’s surface is the DEM (Digital Elevation
Model). Here, elevations are measured at regularly spaced grid points.
Maps
Maps are the most prominent product of photogrammetry. They are produced at
various scales and degrees of accuracies. Planimetric maps contain only the
horizontal position of ground features while topographic maps include elevation data,
usually in the form of contour lines and spot elevations. Thematic maps emphasize
one particular feature, e.g., transportation network.
A digitized photograph contains an enormous amount of data; a map depicting the same scene will only have a few thousand bytes of data. Consequently, another important task is data reduction.
The information we want to represent on a map is explicit. By that we mean that all data is labeled: a point or a line has an associated attribute which says something about the type and meaning of the point or line. This is not the case for an image; a pixel has no attribute associated with it which would tell us what feature it belongs to. Thus, the relevant information is only implicitly available. Making information explicit amounts to identifying and extracting those features which must be represented on the map.
Finally, we refer back to Fig. 1.3 and point out the various instruments that are used to perform the tasks described above. A rectifier is a kind of copy machine for making plane rectifications. In order to generate orthophotos, an orthophoto projector is required. A comparator is a precise measuring instrument which lets you measure points on a diapositive (photo coordinates). It is mainly used in aerial triangulation. In order to measure 3-D positions of points in a stereo model, a stereo plotting instrument, or stereo plotter for short, is used. It performs the transformation from central projection to orthogonal projection in an analog fashion. This is the reason why these instruments are sometimes, less formally, called analog plotters. An analytical plotter establishes the transformation computationally. Both types of plotters are mainly used to produce maps, DEMs and profiles.
A recent addition to photogrammetric instruments is the softcopy workstation. It is
the first tangible product of digital photogrammetry. Consequently, it deals with
digital imagery rather than photographs.
Figure: major development phases of photogrammetry, from the first generation (1850, invention of photography) through analog photogrammetry, analytical photogrammetry (1950, invention of the computer) and digital photogrammetry (2000).
References
[1] Multilingual Dictionary of Remote Sensing and Photogrammetry, ASPRS, 1983,
p. 343.
[2] Manual of Photogrammetry, ASPRS, 4th Ed., 1980, p. 1056.
[3] Moffitt, F.H. and E. Mikhail, 1980. Photogrammetry, 3rd Ed., Harper & Row Publishers, NY.
[4] Wolf, P., 1980. Elements of Photogrammetry, McGraw Hill Book Co., NY.
[5] Kraus, K., 1994. Photogrammetry, Ferd. Dümmler Verlag, Bonn.
Chapter 2
Film-based Cameras
2.1 Photogrammetric Cameras

2.1.1 Introduction
In the beginning of this chapter we introduced the term sensing device as a generic name for devices that sense and record radiometric energy. Fig. 2.1 shows a classification of the different types of sensing devices.
An example of an active sensing device is radar. An operational system
sometimes used for photogrammetric applications is the side looking airborne radar
(SLAR). Its chief advantage is the fact that radar waves penetrate clouds and haze.
An antenna, attached to the belly of an aircraft, directs microwave energy to the side, perpendicular to the direction of flight. The incident energy on the ground is
scattered and partially reflected. A portion of the reflected energy is received at the
same antenna. The time elapsed between energy transmitted and received can be used
to determine the distance between antenna and ground.
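The range computation implied here is simple enough to sketch. The following snippet (with an invented echo delay, not a value from the text) converts the measured round-trip time of the radar pulse into the antenna-to-ground distance:

# Sketch: slant range from the round-trip time of a radar pulse.
# The echo delay covers antenna -> ground -> antenna, so the one-way
# distance is c*t/2. The 40 microsecond delay is an invented example.
SPEED_OF_LIGHT = 299_792_458.0      # m/s

def slant_range(echo_delay_s):
    """One-way antenna-to-ground distance from the round-trip delay."""
    return SPEED_OF_LIGHT * echo_delay_s / 2.0

print(slant_range(40e-6))           # roughly 5996 m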
Passive systems fall into two categories: image forming systems and spectral data
systems. We are mainly interested in image forming systems which are further subdivided into framing systems and scanning systems. In a framing system, data are
acquired all at one instant, whereas a scanning system obtains the same information
sequentially, for example scanline by scanline. Image forming systems record radiant
energy at different portions of the spectrum. The spatial position of recorded radiation
refers to a specific location on the ground. The imaging process establishes a
geometric and radiometric relationship between spatial positions of object and image
space.
Of all the sensing devices used to record data for photogrammetric applications,
the photographic systems with metric properties are the most frequently employed.
They are grouped into aerial cameras and terrestrial cameras. Aerial cameras are also
called cartographic cameras. In this section we are only concerned with aerial
cameras. Panoramic cameras are examples of non-metric aerial cameras. Fig. 2.2(a)
depicts an aerial camera.
2.1.2 Components of Aerial Cameras
Lens Assembly
The lens assembly, also called lens cone, consists of the camera lens (objective), the
diaphragm, the shutter and the filter. The diaphragm and the shutter control the
exposure. The camera is focused for infinity; that is, the image is formed in the focal
plane.
Fig. 2.3 shows cross sections of lens cones with different focal lengths. Super-
wide-angle lens cones have a focal length of 88 mm (3.5 in). The other extreme are
narrow-angle cones with a focal length of 610 mm (24 in). Between these two
extremes are wide-angle, intermediate-angle, and normal-angle lens cones, with focal
lengths of
153 mm (6 in), 213 mm (8.25 in), and 303 mm (12 in), respectively. Since the film format does not change, the angle of coverage, or field for short, changes, as well as the scale. The most relevant data are compiled in Table 2.1. Refer also to Fig. 2.4, which illustrates the different configurations.

Figure 2.2: (a) Aerial camera Aviophot RC20 from Leica; (b) schematic diagram of an aerial camera.
Super-wide angle lens cones are suitable for medium to small scale applications
because the flying height, H , is much lower compared to a normal-angle cone (same
photo scale assumed). Thus, the atmospheric effects, such as clouds and haze, are
much less a problem. Normal-angle cones are preferred for large-scale applications of
urban areas. Here, a super-wide angle cone would generate much more occluded
areas, particularly in built-up areas with tall buildings.
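The trade-off between angular coverage and flying height can be sketched numerically. The snippet below assumes the standard 230 mm x 230 mm film format and computes the field of view across the format side together with the flying height needed for a given photo scale; the exact figures of Table 2.1 may differ slightly.

import math

FORMAT_SIDE = 0.23           # film format side in m (assumed standard format)

def field_angle_deg(f):
    """Angle of coverage across the format side for focal length f (m)."""
    return math.degrees(2.0 * math.atan(FORMAT_SIDE / (2.0 * f)))

def flying_height(f, scale_number):
    """Flying height above ground for photo scale 1:scale_number."""
    return f * scale_number

for name, f in [("super-wide", 0.088), ("wide", 0.153), ("normal", 0.303)]:
    print(f"{name:10s} f={f*1000:3.0f} mm  field={field_angle_deg(f):5.1f} deg"
          f"  H(1:10000)={flying_height(f, 10000):6.0f} m")

For the same photo scale of 1:10,000 the super-wide-angle cone flies at roughly 880 m while the normal-angle cone flies at about 3,030 m, which is the behavior described above.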
Auxiliary information is recorded on the film at the time of exposure. Such information includes the date and time, altimeter data, the photo number, and a level bubble.
Magazine
Obviously, the magazine holds the film, both exposed and unexposed. A film roll is 120 m long and provides 475 exposures. The magazine is also called a film cassette. It is detachable, allowing magazines to be interchanged during a flight mission.
Figure 2.4: Angular coverage, photo scale and ground coverage of cameras with different focal lengths.
During the exposure time t the aircraft, flying with velocity v, moves a distance D = v t with respect to the ground. The corresponding image distance is d = D/m, where m is the photo scale. We have

d = v t / m = v t f / H    (2.1)

with f the focal length and H the flying height.
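A small numerical sketch of Eq. 2.1, with assumed flight parameters (not taken from the text), shows the order of magnitude of the forward image motion:

# Sketch of Eq. 2.1: forward image motion d = v*t*f/H.
# The flight parameters below are assumptions for illustration only.
v = 300 / 3.6        # aircraft speed: 300 km/h expressed in m/s
t = 1 / 300.0        # exposure time in s
f = 0.153            # focal length in m (wide-angle cone)
H = 1500.0           # flying height above ground in m

d = v * t * f / H    # image motion in m
print(f"image motion: {d * 1e6:.1f} micrometers")   # about 28 micrometers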
Example:

Image motion caused by vibrations in the airplane can also be computed using Eq. 2.1. For that case, vibrations are expressed as a time rate of change of the camera axis (angle/sec). Suppose the camera axis vibrates by 2°/sec. This corresponds to a distance Dv = 2 H/ρ = 52.3 m. Since this "displacement" occurs in one second, it can be considered a velocity. In our example, this velocity is 188.4 km/h, corresponding to an image motion of 18 µm. Note that in this case, the direction of image motion is random.
As the example demonstrates, image motion may considerably decrease the image
quality. For this reason, modern aerial cameras try to eliminate image motion. There
are different mechanical/optical solutions, known as image motion compensation. The
forward image motion can be reduced by moving the film during exposure such that
the
image of an object does not move with respect to the emulsion. Since the direction of
image motion caused by vibration is random, it cannot be compensated by moving the
film. The only measure is a shock absorbing camera mount.
2.1.4 Camera Calibration

Camera calibration determines the elements of interior orientation, among them:

1. The position of the perspective center with respect to the fiducial marks.
2. The coordinates of the fiducial marks or distances between them so that coordi-
nates can be determined.
4. The radial and decentering distortion of the lens assembly, including the origin
of radial distortion with respect to the fiducial system.
There are several ways to calibrate the camera. After assembling the camera, the
manufacturer performs the calibration under laboratory conditions. Cameras should
be calibrated once in a while because stress, caused by temperature and pressure differences of an airborne camera, may change some of the interior orientation
elements. Laboratory calibrations are also performed by specialized government
agencies.
Figure 2.6: Two views of a goniometer with installed camera, ready for calibration.
Now, the measurement part of the calibration procedure begins. The telescope is
aimed at the grid intersections of the grid plate, viewing through the camera. The
angles subtended at the rear nodal point between the camera axis and the grid
intersections are obtained by subtracting from the circle readings the zero position
(reading to the collimator before the camera is installed). This is repeated for all grid
intersections along the four semi diagonals.
Having determined the angles αi, we can compute the distances di from the center of the grid plate (PPA) to the corresponding grid intersections i by Eq. 2.2:

di = f tan(αi)    (2.2)

dri = dgi − di    (2.3)

The computed distances di are compared with the known distances dgi of the grid plate. The differences dri result from the radial distortion of the lens assembly.
Radial distortion arises from a change of lateral magnification as a function of the
distance from the center.
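A minimal sketch of Eqs. 2.2 and 2.3, using invented goniometer readings and grid-plate distances, looks like this:

import math

# Sketch of Eqs. 2.2 and 2.3: radial distortion from goniometer measurements.
# The angles and grid distances below are made-up illustration values.
f = 153.000                                   # nominal focal length in mm
alphas_deg = [7.5, 15.0, 22.5, 30.0]          # measured angles to grid intersections
grid_dist = [20.148, 41.005, 63.370, 88.345]  # known distances on the grid plate (mm)

for alpha, dg in zip(alphas_deg, grid_dist):
    d = f * math.tan(math.radians(alpha))     # Eq. 2.2: computed distance
    dr = dg - d                               # Eq. 2.3: radial distortion
    print(f"alpha={alpha:5.1f} deg  d={d:8.3f} mm  dr={dr:+7.3f} mm")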
The differences dri are plotted against the distances di . Fig. 2.7(a) shows the
result. The curves for the four semi diagonals are quite different and it is desirable to
make them as symmetrical as possible to avoid working with four sets of distortion
values. This is accomplished by changing the origin from the PPA to a different
point, called the principal point of symmetry (PPS). The effect of this change of the
origin is shown in Fig. 2.7(b). The four curves are now similar enough and the
average curve represents the direction-independent distortion. The distortion values
for this average curve are denoted by dr̄i.
Figure 2.7: Radial distortion curves for the four semi-diagonals (a). In (b) the curves
are made symmetrical by shifting the origin to PPS. The final radial distortion curve
in (c) is obtained by changing the focal length from f to c.
The average curve is not yet well balanced with respect to the horizontal axis. The
next step involves a rotation of the distortion curve such that drmin = |drmax |. A
change of the focal length will rotate the average curve. The focal length with this
desirable property is called calibrated focal length, c. Through the remainder of the
text, we will be using c instead of f , that is, we use the calibrated focal length and not
the optical focal length.
After completion of all measurements, the grid plate is replaced by a
photosensitive plate. The telescope is rotated to the zero position and the reticule is
projected through
the lens onto the plate where it marks the PPA. At the same time the fiducial marks
are exposed. The processed plate is measured and the position of the PPA is
determined with respect to the fiducial marks.
Figure 2.8: Illustration of interior orientation. EP and AP are entrance and exit pupils; they intersect the optical axis at the perspective centers O and Op. The mathematical perspective center Om is determined such that the angles at O and Om become as similar as possible. Point Ha, also known as the principal point of autocollimation, PPA, is the vertical drop of Om to the image plane B. The distance from Om to Ha is the calibrated focal length c.
2.1.5 Summary of Interior Orientation

1. The position of the perspective center is given by the PPA and the calibrated
focal length c. The bundle rays through projection center and image points
resemble most closely the bundle in object space, defined by the front nodal
point and points on the ground.
2. The radial distortion curve contains the information necessary for correcting
image points that are displaced by the lens due to differences in lateral
magnification. The origin of the symmetrical distortion curve is at the principal
point of symmetry PPS. The distortion curve is closely related to the calibrated
focal length.
3. The position of the PPA and PPS is fixed with reference to the fiducial system.
The intersection of opposite fiducial marks indicates the fiducial center FC. The
three centers lie within a few microns. The fiducial marks are determined by
distances measured along the side and diagonally.
Modern aerial cameras are virtually distortion free. A good approximation for the
interior orientation is to assume that the perspective center is at a distance c from the
fiducial center.
H = E t    (2.4)
where E is the irradiance as defined in section 2.1.4, and t the exposure time. H
is determined by the exposure time and the aperture stop of the lens system
(compare vignetting diagrams in Fig. 2.16). For fast moving platforms (or objects),
the exposure time should be kept short to prevent blurring. In that case, a small f-
number must be chosen so that enough energy interacts with the emulsion. The
disadvantage with this setting is an increased influence of aberrations.
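This trade-off can be sketched with a small computation. Assuming the image irradiance E scales roughly with 1/N² (N being the f-number), keeping H = E t constant while shortening the exposure time requires opening the aperture accordingly; the numbers below are illustrative only.

import math

def new_f_number(n_old, t_old, t_new):
    """f-number that keeps E*t constant when t changes from t_old to t_new,
    under the assumption that E is proportional to 1/N**2."""
    return n_old * math.sqrt(t_new / t_old)

print(new_f_number(8.0, 1/150, 1/300))   # about 5.7, i.e. roughly one stop wider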
The sensitive elements of the photographic emulsion are microscopic crystals with
diameters from 0.3 µm to 3.0 µm. One crystal is made up of 10^10 silver halide ions.
When radiant energy is incident upon the emulsion it is either reflected, refracted or
absorbed. If the energy of the photons is sufficient to liberate an electron from a
bound state to a mobile state then it is absorbed, resulting in a free electron which
combines quickly with a silver halide ion to a silver atom.
The active product of exposure is a small aggregate of silver atoms on the surface
or in the interior of the crystal. This silver speck acts as a catalyst for the development
reaction where the exposed crystals are completely reduced to silver whereas the un-
exposed crystals remain unchanged. The exposed but undeveloped film is called
latent image. In the most sensitive emulsions only a few photons are necessary for
forming a
developable image. Therefore the amplifying factor is on the order of 10^9, one of the largest amplifications known.
Sensitivity
The sensitivity can be defined as the extent to which photographic material reacts to radiant energy. Since this is a function of wavelength, sensitivity is a spectral quantity.
Fig. 2.11 provides an overview of emulsions with different sensitivity.
Figure 2.11: Spectral sensitivity of emulsions: color blind, orthochromatic, panchromatic and infrared (wavelength 0.3 to 0.9 µm).
Silver halide emulsions are inherently sensitive only to ultraviolet and blue. In order for the silver halide to absorb energy at longer wavelengths, dyes are added.
The three color sensitive emulsion layers differ in the dyes that are added to silver
halide. If no dyes are added the emulsion is said to be color blind. This may be
desirable for paper prints because one can work in the dark room with red light
without affecting the latent image. Of course, color blind emulsions are useless for
aerial film because they would only react to blue light, which is scattered most, causing a diffuse image without contrast.
In orthochromatic emulsions the sensitivity is extended to include the green
portion of the visible spectrum. Panchromatic emulsions are sensitive to the entire
visible spectrum; infrared film includes the near infrared.
Fig. 2.12 illustrates the concept of natural color and false color film material. A
natural color film is sensitive to radiation of the visible spectrum. The layer that is
struck first by radiation is sensitive to red, the middle layer is sensitive to green, and
the third layer is sensitive to blue. During the development process the situation
becomes reversed; that is, the red layer becomes transparent for red light. Wherever
green was incident the red layer becomes magenta (white minus green); likewise,
blue changes to yellow. If this developed film is viewed under white light, the original
colors are perceived.
A closer examination of the right side of Fig. 2.12 reveals that the sensitivity of
the film is shifted towards longer wavelengths. A yellow filter prevents blue light
from interacting with the emulsion. The topmost layer is now sensitive to near infrared, the middle layer to red and the third layer is sensitive to green. After
developing the film, red corresponds to infrared, green to red, and blue to green.
This explains the name false color film: vegetation reflects infrared most. Hence,
forest, trees and meadows appear red.
2.2.3 Sensitometry
Sensitometry deals with the measurement of sensitivity and other characteristics of photographic material. The density D is defined as the degree of blackening of an exposed film; it can be measured by a densitometer.
Figure 2.12: Concept of processing natural color film (left) and false color film (right).
D = log(O)    (2.5)

O = Ei / Et    (2.6)

T = Et / Ei = 1/O    (2.7)

H = E t    (2.8)
where O is the opacity, T the transmittance, Ei the irradiance incident on the emulsion, Et the transmitted irradiance, and H the exposure as defined in Eq. 2.4.
Figure 2.13: Characteristic curve of a photographic emulsion: density D plotted against the logarithm of exposure.
The exposed crystals carry silver specks and are reduced to black silver during development: a bright spot in the scene appears dark in the negative.
The characteristic curve begins at a threshold value, called fog. An unexposed
film should be totally transparent when reduced during the development process. This
is not the case because the base of the film has a transmittance smaller than unity.
Additionally, the transmittance of the emulsion with unexposed material is smaller
than unity. Both factors contribute to fog. The lower part of the curve, between points 1 and 2, is called the toe region. Here, the exposure is not enough to cause a readable image. The next region, corresponding to correct exposure, is characterized by a straight line (between points 2 and 3). That is, the density increases linearly with the logarithm of exposure. The slope of the straight line is called gamma or contrast. A film with a slope of 45° is perceived as truly presenting the contrast in the scene. A
film with a higher gamma exaggerates the scene contrast. The contrast is not only
dependent on the emulsion but also on the development time. If the same latent image
is kept longer in the development process, its characteristic curve becomes flatter.
The straight portion of the characteristic curve ends in the shoulder region where
the density no longer increases linearly. In fact, there is a turning point, solarization,
where D decreases with increasing exposure (point 4 in Fig. 2.13). Clearly, this
region is associated with over exposure.
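A compact sketch of the straight-line portion of the characteristic curve, with invented values for fog, gamma and the toe point, illustrates how density grows with the logarithm of exposure:

# Sketch: density in the linear part of the characteristic curve,
# D = D_fog + gamma * (logH - logH_toe). All numbers are assumed
# illustration values, not from a real emulsion.
D_fog, gamma, logH_toe = 0.2, 1.0, 1.2   # gamma = 1 corresponds to a 45 deg slope

def density(log_exposure):
    return D_fog + gamma * (log_exposure - logH_toe)

for logH in (1.5, 2.0, 2.5):
    print(f"log H = {logH:.1f}  ->  D = {density(logH):.2f}")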
2.2.4 Speed
The size and the density of the silver halide crystals suspended in the gelatine of the emulsion vary. The larger the crystal size, the higher the probability that it is struck by photons during the exposure time. Fewer photons are necessary to cause a latent image. Such a film would be called faster because the latent image is obtained in a shorter time period compared to an emulsion with smaller crystal size. In other words, a faster film requires less exposure to produce a developable image.
Chapter 3

Digital Cameras
3.1 Overview
The popular term "digital camera" is rather informal and may even be misleading because the output is in many cases an analog signal. A more generic term is solid-state camera. Other frequently used terms include CCD camera. Though these terms obviously refer to the type of sensing elements, they are often used in a more generic sense.
The chief advantage of digital cameras over the classical film-based cameras is
the instant availability of images for further processing and analysis. This is essential
in real-time applications (e.g. robotics, certain industrial applications, biomechanics, etc.).
Another advantage is the increased spectral flexibility of digital cameras. The
major drawback is the limited resolution or limited field of view.
Digital cameras have been used for special photogrammetric applications since the
early seventies. However, vidicon-tube cameras available at that time were not very
accurate because the imaging tubes were not stable. This disadvantage was eliminated
with the appearance of solid-state cameras in the early eighties. The charge-coupled
device provides high stability and is therefore the preferred sensing device in today’s
digital cameras.
The most distinct characteristic of a digital camera is the image sensing device.
Because of its popularity we restrict the discussion to solid-state sensors, in particular
to charge coupled devices (CCD).
The sensor is glued to a ceramic substrate and covered by a glass. Typical chip
sizes are 1/2 and 2/3 inches with as many as 2048 × 2048 sensing elements.
However, sensors with fewer than 1K × 1K elements are more common. Fig. 3.1
depicts a line sensor (a) and a 2D sensor chip (b).
The dimension of a sensing element is smaller than 10 µm, with an insulation
space of a few microns between them. This can easily be verified when considering
the physical dimensions of the chip and the number of elements.
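The verification mentioned above amounts to dividing the chip dimension by the number of elements. A short sketch, assuming a 2/3-inch chip with an active width of about 8.8 mm (an assumption, since the exact active area varies by manufacturer):

chip_width_mm = 8.8      # assumed active width of a 2/3-inch chip
pixels_across = 1024     # assumed number of elements across that width

pitch_um = chip_width_mm / pixels_across * 1000.0
print(f"element pitch: about {pitch_um:.1f} micrometers")   # about 8.6 um, i.e. below 10 um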
Figure 3.2: Functional block diagram of a solid-state camera. A real camera may not
have all components. The diagram is simplified, e.g. external signals received by the
camera are not shown.
The optics component includes lens assembly and filters, such as an infrared blocking
filter to limit the spectral response to the visible spectrum. Many cameras use a C-
mount for the lens. Here, the distance between mount and image plane is 17.526 mm.
As an option, the optics subsystem may comprise a shutter.
The most distinct characteristic of an electronic camera is the image sensing
device. Section 3.2 provides an overview of charge-coupled devices.
The solid-state sensor, positioned in the image plane, is glued on a ceramic
substrate. The sensing elements (pixels) are either arranged in a linear array or a
frame array. Linear arrays are used for aerial cameras while close range applications,
including mobile mapping systems, employ frame array cameras.
The accuracy of a solid-state camera depends a great deal on the accuracy and
stability of the sensing elements, for example on the uniformity of the sensor element
spacing and the flatness of the array. From the manufacturing process we can expect
an accuracy of 1/10th of a micron. Considering a sensor element, size 10 µm, the
regularity amounts to 1/100. Camera calibration and measurements of the position
and the spacing of sensor elements confirm that the regularity is between 1/50th and
1/100th of the spacing.
The voltage generated by the sensor’s read out mechanism must be amplified for
further processing, which begins with converting the analog signal to a digital signal.
This is not only necessary for producing a digital output, but also for signal and image
processing. The functionality of these two components may range from rudimentary
to very sophisticated in a real camera.
You may consider the first two components (optics and solid-state sensor) as
image capture, the amplifiers and ADC as image digitization, and signal and image
processing as image restoration. A few examples illustrate the importance of image
restoration. The dark current can be measured and subtracted so that only its noise
signal component remains; defective pixels can be determined and an interpolated
signal can be output; the contrast can be changed (Gamma correction); and image
compression may be applied. The following example demonstrates the need for data
compression.
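The need for compression can be made plausible with a back-of-the-envelope computation; the scanning parameters below are assumptions chosen only for illustration.

# Sketch: raw data volume of a digitized aerial photograph compared with a
# modest digital frame. The 230 mm format, 15 micrometer pixel size and
# 8 bits per pixel are assumed illustration values.
def raw_size_mb(side_mm, pixel_um, bytes_per_pixel=1):
    pixels = (side_mm * 1000.0 / pixel_um) ** 2
    return pixels * bytes_per_pixel / 2**20

print(f"scanned aerial photo: {raw_size_mb(230, 15):7.0f} MB")   # about 224 MB
print(f"2048 x 2048 frame:    {2048*2048/2**20:7.0f} MB")        # 4 MB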
3.1.3 Line Cameras

A line camera uses one or more linear sensor arrays. In a 3-line camera, three sensor lines are mounted in the image plane in fore, nadir and aft position (see Fig. 3.4(a)). With this configuration, triple coverage of the surface is obtained. Examples of 3-line cameras include Leica's ADS40. It is also possible to implement the multiple line concept by having convergent lenses for every line, as depicted in Fig. 3.4(b).
A well-known example of a one-line camera is SPOT. The linear array consists of
7,000 sensing elements. Stereo is obtained by overlapping strips obtained from
adjacent orbits.
Fig. 3.5 shows the overlap configuration obtained with a 3-Line camera.
Figure 3.4: Schematic diagram of a 3-line camera. In (a), 3 sensor lines are mounted
on the image plane in fore, nadir and aft locations. An alternative solution is using 3
convergent cameras, each with a single line mounted in the center (b).
Figure 3.6: Development of CCD sensors: sensor size in pixels (resolution) and pixel size in microns.
The charge-coupled device (CCD) was invented in 1970. The first CCD line
sensor contained 96 pixels; today, chips with over 50 million pixels are commercially
available.
Fig. 3.6 on the preceding page illustrates the astounding development of CCD sensors
over a period of 25 years. The sensor size in pixels is usually loosely termed
resolution, giving rise to confusion since this term has a different meaning in
photogrammetry1 .
Figure 3.7: Schematic diagram of CCD detector. In (a) a photon with an energy
greater than the bandgap of the semiconductor generates an electron-hole pair. The
electron e is attracted by the positive voltage of the electrode while the mobile hole
moves toward the ground. The collected electrons together with the electrode form a
capacitor. In (b) this basic arrangement is repeated many times to form a linear array.
Suppose EMR is incident on the device. Photons with an energy greater than the
band gap energy of the semiconductor may be absorbed in the depletion region,
creating an electron-hole pair. The electron—referred to as photon electron—is
attracted by the positive charge of the metal electrode and remains in the depletion
region while the mobile hole moves toward the electrical ground. As a result, a
charge accumulates at opposite sides of the insulator. The maximum charge depends
on the voltage applied to the electrode. Note that the actual charge is proportional to
the number of absorbed photons under the electrode.
The band gap energy of silicon corresponds to the energy of a photon with a
wavelength of 1.1 µm. Lower energy photons (but still exceeding the band gap) may
penetrate the depletion region and be absorbed outside. In that case, the generated
electron-hole pair may recombine before the electron reaches the depletion region.
We realize that not every photon generates an electron that is accumulated at the
capacitor site. Consequently, the quantum efficiency is less than unity.
¹Resolution refers to the minimum distance between two adjacent features, or the minimum size of a feature, which can be detected by photogrammetric data acquisition systems. For photography, this distance is usually expressed in line pairs per millimeter (lp/mm).
An ever increasing number of capacitors are arranged into what is called a CCD array. Fig. 3.7(b) illustrates the concept of a one-dimensional array (called a linear
array) that may consist of thousands of capacitors, each of which holds a charge
proportional to the irradiance at each site. It is customary to refer to these capacitor
sites as detector pixels, or pixels for short. Two-dimensional pixel arrangements in
rows and columns are called full-frame or staring arrays.
Figure 3.8: Principle of charge transfer. The top row shows a linear array of accumu-
lated charge packets. Applying a voltage greater than V1 of electrode 1 momentarily
pulls charge over to the second electrode (middle row). Repeating this operation in a
sequential fashion eventually moves all packets to the final electrode (drain) where
the charge is measured.
The next step is concerned with transferring and measuring the accumulated
charge. The principle is shown in Fig. 3.8. Suppose that the voltage of electrode i+1 is momentarily made larger than that of electrode i. In that case, the negative
charge under electrode i is pulled over to site i+1, below electrode i+1, provided that
adjacent depletion regions overlap. Now, a sequence of voltage pulses will cause a
sequential movement of the charges across all pixels to the drain (last electrode)
where each packet of charge can be measured. The original location of the pixel
whose charge is being measured in the drain is directly related to the time when a
voltage pulse was applied.
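A toy sketch of this sequential read-out idea (ignoring the multi-phase electrode details of a real device) can be written as follows:

# Toy model of sequential charge transfer: each clock step moves every
# charge packet one site toward the drain, where it is measured.
# Purely illustrative; real devices use parallel multi-phase clocking.
def read_out(packets):
    """Shift packets toward the drain (end of list) and collect them in order."""
    measured = []
    for _ in range(len(packets)):
        measured.append(packets[-1])      # packet at the drain is measured
        packets = [0] + packets[:-1]      # all other packets move one site over
    return measured

print(read_out([5, 9, 2, 7]))   # read-out order: 7, 2, 9, 5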
Several ingenious solutions for transferring the charge accurately and quickly
have been developed. It is beyond the scope of this book to describe the transfer
technology in any detail. The following is a brief summary of some of the methods.
3.2.2 Charge Transfer

Linear Array With Bilinear Readout
As sketched in Fig. 3.9, a linear array (CCD shift register) is placed on both sides of
the single line of detectors. Since these two CCD arrays are also light sensitive, they
must be shielded. After integration, the charge accumulated in the active detectors is
transferred to the two shift registers during one clock period. The shift registers are
read out in a serial fashion as described above. If the readout time is equal to the
integration time, then this sensor may operate continuously without a shutter. This
principle, known as push broom, is put to advantage in line cameras mounted on
moving platforms to provide continuous coverage of the object space.
Figure 3.9: Principle of linear array with bilinear readout. The accumulated charge is
transferred during one pixel clock from the active detectors to the adjacent shift
registers, from where it is read out sequentially.
Frame Transfer
You can visualize a frame transfer imager as consisting of two identical arrays. The
active array accumulates charges during integration time. This charge is then
transferred to the storage array, which must be shielded since it is also light sensitive.
During the transfer, charge is still accumulating in the active array, causing a slightly
smeared image.
The storage array is read out serially, line by line. The time necessary to read out the storage array far exceeds the integration time. Therefore, this architecture requires a mechanical shutter. The shutter offers the advantage that the smearing effect is suppressed.
Interline Transfer
Fig. 3.10 on the following page illustrates the concept of interline transfer arrays.
Here, the columns of active detectors (pixels) are separated by vertical transfer
registers. The accumulated charge in the pixels is transferred at once and then read
out serially. This again allows an open shutter operation, assuming that the read out
time does not exceed the integration time.
Since the CCD detectors of the transfer register are also sensitive to irradiance, they must be shielded. This, in turn, reduces the light-sensitive portion of the chip area; the ratio of sensitive area to total area is called the fill factor.
Figure 3.10: Principle of interline transfer. The charge accumulated in the active detectors is shifted during one pixel clock into the adjacent (shielded) vertical transfer registers, from where it is read out serially through the readout register and sense node.
The interline transfer imager as described here has a fill factor of 50%. Consequently, longer integration times are required to capture an image. To increase the fill factor, microlenses may be used. In front of every pixel is a lens that directs the light incident on an area defined by adjacent active pixels to the (smaller) pixel.
Figure 3.11: Spectral response of CCD sensors. In an ideal silicon detector all
photons exceeding the band gap energy generate electrons. Front illuminated sensors
have a lower quantum efficiency than back illuminated sensors because part of the
incident flux may be absorbed or redirected by the electrodes (see text for details).
To sense radiation at longer wavelengths, a semiconductor material with the corresponding bandgap energy must be selected. This leads to hybrid CCD arrays where the semiconductor and the CCD mechanism are two separate components.
Chapter 4
Properties of Aerial Photography
4.1 Introduction
Aerial photography is the basic data source for making maps by photogrammetric means. The photograph is the end result of the data acquisition process discussed in the previous chapter. Actually, the net result of any photographic mission is a set of photographic negatives. Of prime importance for measuring and interpretation are the positive reproductions from the negatives, called diapositives.
Many factors determine the quality of aerial photography, such as
• photographic material
• development process
In this chapter we describe the types of aerial photographs, their geometrical properties and their relationship to object space.
panchromatic black and white This is the most widely used type of emulsion for photogrammetric mapping.

color Color photography is mainly used for interpretation purposes. Recently, color is increasingly being used for mapping applications.

infrared black and white Since infrared is less affected by haze, it is used in applications where weather conditions may not be as favorable as for mapping missions (e.g. intelligence).

false color This is particularly useful for interpretation, mainly for analyzing vegetation (e.g. crop disease) and water pollution.
4.3.1 Definitions
Fig. 4.2 shows a diapositive in near vertical position. The following definitions apply:
perspective center C calibrated perspective center (see also camera calibration, inte-
rior orientation).
focal length c calibrated focal length (see also camera calibration, interior orientation).
camera axis C-PP axis defined by the projection center C and the principal point PP.
The camera axis represents the optical axis. It is perpendicular to the image plane.
Figure 4.2: Tilted photograph in diapositive position and ground control coordinate
system.
nadir point N also called photo nadir point, is the intersection of the vertical (plumb line) from the perspective center with the photograph.

ground nadir point N intersection of the vertical from the perspective center with the earth's surface.

swing angle s is the angle at the principal point measured from the +y-axis counterclockwise to the nadir N.

azimuth α is the angle at the ground nadir N measured from the +Y-axis in the ground system counterclockwise to the intersection O of the camera axis with the ground surface. It is the azimuth of the trace of the principal plane in the XY-plane of the ground system.
principal line the nadir point N and the principal point PP are on the principal line. The principal line is oriented in the direction of steepest inclination of the tilted photograph.

true horizon line intersection of a horizontal plane through the perspective center with the photograph or its extension. The horizon line falls within the extent of the photograph only for high oblique photographs.
During the camera calibration process the projection center in image space is
changed to a new position, called the calibrated projection center. As discussed in
2.6, this is necessary to achieve close similarity between the image and object bundle.
Figure 4.4: Flight height, flight altitude and scale of aerial photograph.
The photograph scale varies from point to point. For example, the scale mP for point P can easily be determined as the ratio of the image distance CP' to the object distance CP:

mP = CP' / CP    (4.2)

CP' = √(xP² + yP² + c²)    (4.3)
The magnitude of relief displacement for a true vertical photograph can be determined by the following equation

d = rT Δh / H = rB Δh / (H − Δh)    (4.5)

where rT = √(xT² + yT²), rB = √(xB² + yB²), and Δh is the elevation difference of the two points on a vertical. Eq. 4.5 can be used to determine the elevation Δh of a vertical object:

Δh = d H / rT    (4.6)
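A short numerical sketch of Eqs. 4.5 and 4.6, with invented values, shows how the displacement and the recovered height are related:

# Sketch of Eqs. 4.5 and 4.6: relief displacement on a truly vertical
# photograph and the object height derived from it. Numbers are illustrative.
H = 1500.0        # flying height above the base of the object, in m
dh = 60.0         # height of the object (elevation difference), in m
r_T = 0.090       # radial image distance to the top of the object, in m

d = r_T * dh / H                 # Eq. 4.5: relief displacement in the image
dh_back = d * H / r_T            # Eq. 4.6: height recovered from d
print(f"displacement d = {d*1000:.2f} mm, recovered dh = {dh_back:.1f} m")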
The direction of relief displacement is radial with respect to the nadir point N ,
independent of camera tilt.
Chapter 5
Elements of Analytical Photogrammetry
Figure 5.1: In (a) the data acquisition process is depicted. In (b) we illustrate the
reconstruction process.
In this chapter we describe these procedures and the mathematical models, except
aerotriangulation (block adjustment) which will be treated later. For one and the same
procedure, several mathematical models may exist. They differ mainly in the degree
of complexity, that is, how closely they describe physical processes. For example, a
similarity transformation is a good approximation to describe the process of
converting measured coordinates to photo-coordinates. This simple model can be
extended to describe more closely the underlying measuring process. With a few
exceptions, we will not address the refinement of the mathematical model.
Figure 5.2: Definition of the photo-coordinate system: FC fiducial center, PP principal point, PS point of symmetry, c calibrated focal length, p image vector.

The photo-coordinates of an image point, together with the calibrated focal length, define the image vector

p = (xp, yp, −c)ᵀ    (5.1)
Note that for a diapositive the third component is negative. This changes to a positive value if the negative is used instead of the diapositive.
5.3.1 Similarity Transformation

The most simple mathematical model for interior orientation is a similarity transformation with the four parameters: translation vector t, scale factor s, and rotation angle α.
If we consider a11, a12, xt, yt as parameters, then the above equations are linear in the parameters. Consequently, they can be directly used as observation equations for a least-squares adjustment. Two observation equations are formed for every point known in
both coordinate systems. Known points in the photo-coordinate system are the
fiducial marks. Thus, computing the parameters of the interior orientation amounts to
measuring the fiducial marks (in the measuring system; measuring systems are discussed in the next chapter).
Actually, the fiducial marks are known with respect to the fiducial center.
Therefore, the process just described will determine parameters with respect to the
fiducial coordinate system xf, yf. Since the origin of the photo-coordinate system is
known in the fiducial system (x0 , y0 ), the photo-coordinates are readily obtained by
the translation
x = xf − x0 (5.6)
y = yf − y0 (5.7)
as indicated in Fig. 5.3(b). The skew angle expresses the nonperpendicularity. Also, the scale is different between the two axes. We have
xf = a11 xm + a12 ym − xt    (5.8)

yf = a21 xm + a22 ym − yt    (5.9)
Eqs. 5.8 and 5.9 are also linear in the parameters. As in the case of a similarity transformation, these equations can be directly used as observation equations. With four fiducial marks we obtain eight equations, leaving a redundancy of two.
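A minimal least-squares sketch of this adjustment, using the sign convention of Eqs. 5.8 and 5.9 and invented fiducial and machine coordinates, could look like this:

import numpy as np

# Sketch: estimating the six affine parameters of Eqs. 5.8 and 5.9 from
# four measured fiducial marks (8 equations, redundancy 2). All coordinates
# below are invented illustration values in mm.
xm = np.array([110.01, 330.02, 329.98, 109.99])   # machine coordinates
ym = np.array([115.00, 114.98, 335.01, 335.03])
xf = np.array([-106.0, 106.0, 106.0, -106.0])     # calibrated fiducial coordinates
yf = np.array([-106.0, -106.0, 106.0, 106.0])

A = np.zeros((8, 6))
A[0::2, 0] = xm; A[0::2, 1] = ym; A[0::2, 4] = -1.0   # xf = a11*xm + a12*ym - xt
A[1::2, 2] = xm; A[1::2, 3] = ym; A[1::2, 5] = -1.0   # yf = a21*xm + a22*ym - yt
obs = np.empty(8); obs[0::2] = xf; obs[1::2] = yf

params, *_ = np.linalg.lstsq(A, obs, rcond=None)
a11, a12, a21, a22, xt, yt = params
print("a11, a22:", a11, a22, "  xt, yt:", xt, yt)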
dr = p0 r + p1 r³ + p2 r⁵ + · · ·    (5.15)
The coefficients pi are found by fitting the polynomial curve to the distortion
values. Eq. 5.15 is a linear observation equation. For every distortion value, an
observation equation is obtained.
These equations are based on a model atmosphere defined by the US Air Force.
The flying height H and the ground elevation h must be in units of kilometers.
The elevations refer to the datum and not to the XY plane of the cartesian coordinate system. It would be quite awkward to produce the map in the cartesian system and then transform it to the target system. Therefore, during map compilation, the photo-coordinates are "corrected" so that conjugate bundle rays intersect in object space at positions related to the reference sphere.
1. Insert the diapositive into the measuring system (e.g. comparator, analytical
plotter) and measure the fiducial marks in the machine coordinate system xm,
ym. Compute the transformation parameters with a similarity or affine
transformation. The transformation establishes a relationship between the
measuring system and the fiducial coordinate system.
2. Translate the fiducial system to the photo-coordinate system (Eqs. 5.6 and 5.7).
3. Correct the photo-coordinates for radial distortion. The radial distortion drp for point P is obtained from the calibrated distortion curve, for example by evaluating the polynomial of Eq. 5.15 at the radial distance of the point.
4. Correct the photo-coordinates for refraction, according to Eqs. 5.16 and 5.17. This correction is negative. The displacement caused by refraction is a functional relationship dref = f(H, h, r, c). With a flying height H = 2,000 m and an elevation above ground h = 500 m we obtain for a wide angle camera (c ≈ 0.15 m) a correction of −4 µm for r = 130 mm. An extreme example is a superwide angle camera, H = 9,000 m, h = 500 m, where dref = −34 µm for the same point.
5. Correct for earth curvature only if the control points (elevations) are not in a
cartesian coordinate system or if a map is compiled. Using the extreme
example as above, we obtain dearth = 65 µm. Since this correction has the
opposite sign of the refraction, the combined correction for refraction and earth
curvature would be dcomb = 31 µm. The correction due to earth curvature is
larger than the correction for refraction.
The problem of establishing the six orientation parameters of the camera can conveniently be solved by the collinearity model. This model expresses the condition that the perspective center C, the image point Pi, and the object point Po must lie on a straight line (see Fig. 5.8). If the exterior orientation is known, then the image vector pi and the vector q in object space are collinear:
pi = (1/λ) q    (5.19)

As depicted in Fig. 5.8, vector q is the difference between the two point vectors c and p. For satisfying the collinearity condition, we rotate and scale q from object to image space. We have

pi = (1/λ) R q = (1/λ) R (p − c)    (5.20)
with R an orthogonal rotation matrix composed of the three rotation angles ω, φ and κ. Writing Eq. 5.20 out in components yields
x = (1/λ) [(XP − XC) r11 + (YP − YC) r12 + (ZP − ZC) r13]    (5.22)

y = (1/λ) [(XP − XC) r21 + (YP − YC) r22 + (ZP − ZC) r23]    (5.23)

−c = (1/λ) [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.24)
By dividing the first by the third and the second by the third equation, the scale factor 1/λ is eliminated, leading to the following two collinearity equations:

x = −c [(XP − XC) r11 + (YP − YC) r12 + (ZP − ZC) r13] / [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.25)

y = −c [(XP − XC) r21 + (YP − YC) r22 + (ZP − ZC) r23] / [(XP − XC) r31 + (YP − YC) r32 + (ZP − ZC) r33]    (5.26)
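A compact sketch of the collinearity equations as a projection routine is given below. The rotation matrix uses one common ω, φ, κ convention, which may differ in detail from the definition of R in the text, and all numeric values are invented:

import numpy as np

def rotation_matrix(omega, phi, kappa):
    """One common omega-phi-kappa convention (R = R_kappa @ R_phi @ R_omega);
    the text's definition of R may order the rotations differently."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return R_kappa @ R_phi @ R_omega

def project(P, C, angles, c):
    """Photo coordinates (x, y) of object point P for camera at C, focal length c,
    following Eqs. 5.25 and 5.26 (x = -c*q1/q3, y = -c*q2/q3 with q = R(P - C))."""
    R = rotation_matrix(*angles)
    q = R @ (P - C)
    return -c * q[0] / q[2], -c * q[1] / q[2]

C = np.array([1000.0, 2000.0, 1500.0])            # perspective center (m), invented
P = np.array([1100.0, 2150.0, 80.0])              # object point (m), invented
print(project(P, C, (0.01, -0.02, 0.05), 0.153))  # photo coordinates in m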
Figure 5.9: The concept of model space (a) and model coordinate system (b).
The definition of the model coordinate system requires seven parameters, just as in the transformation of 3-D cartesian systems. The decision on how to introduce the parameters depends on the application; one definition of the model coordinate system may be more suitable for a specific purpose than another. In the following subsections, different definitions will be discussed.
Now the orientation of a stereopair amounts to determining the exterior orientation parameters of both photographs with respect to the model coordinate system. From single photo resection, we recall that the collinearity equations form a suitable mathematical model to express the exterior orientation. We have the following functional relationship between observed photo-coordinates and orientation parameters:

x, y = f(X′C, Y′C, Z′C, ω′, φ′, κ′, X″C, Y″C, Z″C, ω″, φ″, κ″, X1, Y1, Z1, · · · , Xn, Yn, Zn)    (5.28)

with the first six arguments the exterior orientation of the first photograph, the next six that of the second photograph, and X1, Y1, Z1, · · · , Xn, Yn, Zn the coordinates of the n model points.
Here f refers to Eqs. 5.25 and 5.26. Every point measured in one photo-coordinate system renders two equations. The same point must also be measured in the second photo-coordinate system. Thus, for one model point we obtain 4 equations, or 4n equations for n object points. On the other hand, n unknown model points lead to 3n parameters. Together with the 12 exterior orientation elements of both photographs, and minus the 7 parameters we have eliminated by defining the model coordinate system, the total number of parameters is 12 + 3n − 7. By equating the number of equations with the number of parameters we obtain the minimum number of points, nmin, which we need to measure for solving the orientation problem: 4 nmin = 12 + 3 nmin − 7, that is, nmin = 5.
with f⁰ denoting the function evaluated with the initial estimates of the parameters.
For a point Pi, i = 1, · · · , n we obtain the following four generic observation equations:

rx′i = (∂f/∂X′C) ΔX′C + (∂f/∂Y′C) ΔY′C + · · · + f⁰ − x′i
ry′i = (∂f/∂X′C) ΔX′C + (∂f/∂Y′C) ΔY′C + · · · + f⁰ − y′i
rx″i = (∂f/∂X″C) ΔX″C + (∂f/∂Y″C) ΔY″C + · · · + f⁰ − x″i
ry″i = (∂f/∂X″C) ΔX″C + (∂f/∂Y″C) ΔY″C + · · · + f⁰ − y″i    (5.31)
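The sketch below illustrates the structure of these linearized observation equations for a single point, using numerical partial derivatives of a generic f. The pinhole-style f (rotation order, signs, focal length) is an illustrative assumption; in a real adjustment the rows of all points and both photographs are stacked, and the seven model-defining parameters are removed from the parameter list, as discussed next.

```python
import numpy as np

def photo_xy(p, c=0.153):
    """Generic f: photo coordinates of one point from the parameter vector
    p = (XC, YC, ZC, omega, phi, kappa, X, Y, Z). Conventions are assumptions."""
    XC, YC, ZC, om, ph, ka, X, Y, Z = p
    co, so, cp, sp, ck, sk = np.cos(om), np.sin(om), np.cos(ph), np.sin(ph), np.cos(ka), np.sin(ka)
    R = (np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
         @ np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
         @ np.array([[1, 0, 0], [0, co, -so], [0, so, co]]))
    d = R @ np.array([X - XC, Y - YC, Z - ZC])
    return np.array([-c * d[0] / d[2], -c * d[1] / d[2]])

def linearized_rows(p0, observed, eps=1e-6):
    """Design matrix A and misclosure w = f0 - observed for one point:
    the residuals are r = A*delta + w (numerical partial derivatives)."""
    f0 = photo_xy(p0)
    A = np.zeros((2, p0.size))
    for j in range(p0.size):
        dp = p0.copy()
        dp[j] += eps
        A[:, j] = (photo_xy(dp) - f0) / eps
    return A, f0 - observed

p0 = np.array([0.0, 0.0, 1500.0, 0.0, 0.0, 0.0, 120.0, -80.0, 60.0])
A, w = linearized_rows(p0, observed=np.array([0.0126, -0.0084]))
# Gauss-Newton step A*delta = -w (under-determined for a single point; a real
# adjustment stacks the equations of all measured points on both photographs).
delta = np.linalg.lstsq(A, -w, rcond=None)[0]
```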
As mentioned earlier, the definition of the model coordinate system reduces the
number of parameters by seven. Several techniques exist to consider this in the least
squares approach.
1. The simplest approach is to eliminate the parameters from the parameter list.
We will use this approach for discussing the dependent and independent
relative orientation.
With 5 points we obtain 20 observation equations. On the other hand, there are 5 exterior orientation parameters and 5 × 3 model coordinates. Usually more than 5 points are measured. The redundancy is r = n − 5. The typical case of relative orientation on a stereoplotter with the 6 von Gruber points leads only to a redundancy of one. It is highly recommended to measure more points, say 12, in which case we find r = 7.

Figure 5.10: Definition of the model coordinate system and orientation parameters in the dependent relative orientation. The five parameters are the base components by, bz and the three rotation angles (about the x-, y- and z-axes) of the second photograph.
With a non-linear mathematical model we need to be concerned with suitable approximations to ensure that the iterative least-squares solution converges. In the case of the dependent relative orientation we have

f⁰ = f(ymC⁰, zmC⁰, ω⁰, φ⁰, κ⁰, xm1⁰, ym1⁰, zm1⁰, · · · , xmn⁰, ymn⁰, zmn⁰)    (5.33)
The initial estimates for the five exterior orientation parameters are set to zero for aerial applications, because the orientation angles are smaller than five degrees, and xmC ≫ ymC, xmC ≫ zmC, so that ymC⁰ = zmC⁰ = 0. Initial positions for the model points can be estimated from the corresponding measured photo-coordinates. If the scale of the model coordinate system approximates the scale of the photo-coordinate system, we estimate the initial model points by

xmi⁰ ≈ xi
ymi⁰ ≈ yi    (5.34)
zmi⁰ ≈ zi
The dependent relative orientation leaves one of the photographs unchanged; the other one is oriented with respect to the unchanged system. This is advantageous for joining successive photographs of a strip. In this fashion, all photographs of a strip can be joined into the coordinate system of the first photograph.
5.5.3 Independent Relative Orientation
Fig. 5.11 illustrates the definition of the model coordinate system in the independent
relative orientation.
Figure 5.11: Definition of the model coordinate system and orientation parameters in the independent relative orientation. The five parameters are the rotation angles about the y′- and z′-axes of the first photograph and the rotation angles about the x″-, y″- and z″-axes of the second photograph.
The origin is identical to one of the photo-coordinate systems, e.g. in Fig. 5.11 it is the primed system. The orientation is chosen such that the positive xm-axis passes through the perspective center of the other photo-coordinate system. This requires determining two rotation angles in the primed photo-coordinate system. Moreover, it eliminates the base components by, bz. The rotation about the x-axis (ω) is set to zero. This means that the ym-axis is in the x-y plane of the photo-coordinate system. The scale is chosen by defining xmC = bx.

With this definition of the model coordinate system we have eliminated the position of both perspective centers and one rotation angle. The functional model is analogous to Eq. 5.33, with the parameters φ′, κ′, ω″, φ″, κ″ and the model point coordinates.
The number of equations, number of parameters and the redundancy are the same
as in the dependent relative orientation. Also, the same considerations regarding
initial estimates of parameters apply.
Note that the exterior orientation parameters of both types of relative orientation
are related. For example, the rotation angles φ , κ can be computed from the spatial
direction of the base in the dependent relative orientation.
φ = arctan(zmC / bx)    (5.36)

κ = arctan(ymC / (bx² + zmC²)^1/2)    (5.37)
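Eqs. 5.36 and 5.37 amount to a two-line computation; the sketch below evaluates them for illustrative base components (the numerical values are assumptions).

```python
import math

def angles_from_base(bx, ymc, zmc):
    """phi and kappa (Eqs. 5.36, 5.37) from the base components of the
    dependent relative orientation."""
    phi = math.atan(zmc / bx)
    kappa = math.atan(ymc / math.hypot(bx, zmc))
    return phi, kappa

phi, kappa = angles_from_base(bx=0.092, ymc=0.0021, zmc=-0.0015)
print(math.degrees(phi), math.degrees(kappa))   # about -0.93 and 1.31 degrees
```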
5.5.4 Direct Orientation
In the direct orientation, the model coordinate system becomes identical with the ground system, for example, a State Plane coordinate system (see Fig. 5.12). Since such systems are already defined, we cannot introduce a priori information about exterior orientation parameters as in both cases of relative orientation. Instead we use information about some of the object points. Points with known coordinates are called control points. A point with all three coordinates known is called a full control point. If only X and Y are known then we have a planimetric control point. Obviously, with an elevation control point we know only the Z coordinate.
Suppose, for example, that points 1 and 2 are planimetric control points and points 3, 4 and 5 are elevation control points. The functional model then reads

x, y = f(X′C, Y′C, Z′C, ω′, φ′, κ′, X″C, Y″C, Z″C, ω″, φ″, κ″, Z1, Z2, X3, Y3, X4, Y4, X5, Y5)    (5.38)

with the first twelve arguments the exterior orientation elements of the two photographs and Z1, Z2, X3, Y3, X4, Y4, X5, Y5 the unknown coordinates of the partial control points.
The Z -coordinates of the planimetric control points 1 and 2 are not known and
thus remain in the parameter list. Likewise, X − Y -coordinates of elevation control
points 3, 4, 5 are parameters to be determined. Let us check the number of
observation equations for this particular case. Since we measure the five partial control
points on both
photographs we obtain 20 observation equations. The number of parameters amounts
to 12 exterior orientation elements and 8 coordinates. So we have just enough
equations to solve the problem. For every additional point 4 more equations and 3
parameters are added. Thus, the redundancy increases linearly with the number of
points measured. Additional control points increase the redundancy even more, e.g. a full control point by 4, an elevation control point by 2.
As in the case of relative orientation, the mathematical model of the direct orientation is also based on the collinearity equations. Since it is non-linear in the parameters, we need good approximations to assure convergence. The estimation of initial values for the exterior orientation parameters may be accomplished in different ways. To estimate XC⁰, YC⁰, for example, one could perform a 2-D transformation of the photo-coordinates to the planimetric control points. This would also result in a good estimation of κ⁰ and of the photo scale, which in turn can be used to estimate ZC⁰ = scale · c. For aerial applications we set ω⁰ = φ⁰ = 0. With these initial values of the exterior orientation one can compute approximations Xi⁰, Yi⁰ of the object points, where Zi⁰ = haver (the average terrain elevation).
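The 2-D transformation mentioned above can be sketched as follows. The similarity-transformation parameterization, the function name and the example numbers are assumptions; the point is that the translation, rotation and scale of a best-fit 2-D similarity transformation of the photo coordinates onto the planimetric control points yield usable values for XC⁰, YC⁰, κ⁰ and, via the photo scale, for ZC⁰ = scale · c.

```python
import numpy as np

def initial_exterior_orientation(photo_xy, ground_XY, c):
    """Estimate XC0, YC0, ZC0 and kappa0 from a 2-D similarity transformation
    [X, Y]^T = s * R(kappa) * [x, y]^T + [tx, ty]^T, solved linearly with
    a = s*cos(kappa), b = s*sin(kappa)."""
    x, y = photo_xy[:, 0], photo_xy[:, 1]
    A = np.zeros((2 * len(x), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    a, b, tx, ty = np.linalg.lstsq(A, ground_XY.reshape(-1), rcond=None)[0]
    scale = np.hypot(a, b)
    kappa0 = np.arctan2(b, a)
    # For a near-vertical photograph the ground position of the perspective centre
    # is close to the transformed principal point (x = y = 0), i.e. the translation.
    return tx, ty, scale * c, kappa0

photo = np.array([[-0.070, 0.065], [0.080, 0.071], [0.075, -0.060], [-0.066, -0.073]])
ground = 10000.0 * photo + np.array([5000.0, 8000.0])        # synthetic 1:10,000 case
print(initial_exterior_orientation(photo, ground, c=0.153))  # ZC0 comes out near 1530 m
```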
Note that the minimum number of points to be measured in the relative orientation
is 5. With the direct orientation, we need only three points assuming that two are full
control points. For orienting stereopairs with respect to a ground system, there is no
need to first perform a relative orientation followed by an absolute orientation. This
traditional approach stems from analog instruments where it is not possible to
perform a direct orientation by mechanical means.
5.5.5 Absolute Orientation

p = s R pm − t    (5.39)
where pm = [xm, ym, zm]T is the point vector in the model coordinate system, p = [X, Y, Z]T the vector in the ground control system pointing to the object point P, and t = [Xt, Yt, Zt]T the translation vector between the origins of the two coordinate systems. The rotation matrix R rotates vector pm into the ground control system and s, the scale factor, scales it accordingly. The 7 parameters to be determined comprise the 3 rotation angles of the orthogonal rotation matrix R, 3 translation parameters, and one scale factor.
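A minimal sketch of applying Eq. 5.39 is given below; the sign convention for t follows the text, and the numbers are purely illustrative. In general, two full control points and one additional elevation provide the seven equations needed to determine the seven parameters.

```python
import numpy as np

def model_to_ground(pm, s, R, t):
    """7-parameter transformation of Eq. 5.39: p = s * R * pm - t, applied to
    an (n x 3) array of model points."""
    return s * (np.asarray(pm, float) @ np.asarray(R, float).T) - np.asarray(t, float)

pm = np.array([[0.090, 0.000, 0.040],      # model coordinates, roughly at photo scale
               [0.000, 0.080, 0.050]])
ground = model_to_ground(pm, s=10000.0, R=np.eye(3), t=[-5000.0, -8000.0, -100.0])
print(ground)   # [[5900. 8000.  500.] [5000. 8800.  600.]]
```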
6 Measuring Systems
Stereo Viewer
The viewing system closely resembles that of a stereocomparator, particularly the binocular system with high-quality optics, zoom lenses, and image rotation. Also, the measuring mark and the illumination system are refined versions of stereocomparator components. Fig. 6.3 shows a typical viewer with the binocular system, the stages, and the knobs for adjusting the magnification, illumination and image rotation.
The size of the stages must allow for measuring aerial photographs. Some instruments offer larger stage sizes, for example 18 × 9 in., to accommodate panoramic imagery.
An important part of the stereo viewer is the measuring and recording system.
As discussed in the previous section, the translation of the stages, the measuring and
recording is all combined by employing either linear encoders or spindles.
Translation System
In order to move the measuring mark from one point to another either the viewing
system must move with respect to a stationary measuring system, or the measuring
system, including photograph, moves against a fixed viewing system. Most x-y-
comparators have a moving stage system. The carrier plate on which the diapositive
is clamped, moves against a pair of fixed glass scales and the fixed viewing system
(compare also Fig. 6.5).
In most cases, the linear translation is accomplished by purely mechanical means.
Fig. 6.4 depicts some typical translation guides. Various forms of bearings are used to
Figure 6.3: Stereo viewer of the Planicomp P-3 analytical plotter from Zeiss.
reduce friction and wear and tear. An interesting solution is air bearings. The air is pumped through small orifices located on the facing side of one of two flat surfaces. This results in a thin, uniform layer of air separating the two surfaces, providing smooth motion.

The force to produce motion is most often generated by threaded spindles or precision lead screws. Coarse positioning is most conveniently accomplished by a free-moving cursor. After clamping the stages, a pair of handwheels allows for precise positioning.
With spindles, the translation is directly related to the number of revolutions and the spindle pitch. Full revolutions are counted on a coarse scale, while the fractional part is usually interpreted on a separate, more accurate scale.
To record the measurements automatically, an analog to digital (A/D) conversion
is necessary because the x-y-readings are analog in nature. Today, A/D converters are
based on solid state electronics. They are very reliable, accurate and inexpensive.
Fig. 6.5 illustrates one of several concepts for the A/D conversion process, using
linear encoders. The grating of the glass scales is 40 µm. Light from the source L
transmits through the glass scale and is reflected at the lower surface of the plate
carrier. A photo diode senses the reflected light by converting it into a current that
can be measured. Depending on the relative position of plate carrier and scale, more
or less light is reflected. As can be seen from Fig. 6.5 there are two extreme positions
where either no light or all light is reflected. Between these two extreme positions the
amount of reflected light depends linearly on the movement of the plate carrier. Thus,
the precise position is found by linear interpolation.
User Interface
With user interface we refer to the communication devices an operator has available to work on an analytical plotter. These devices can be associated with the following functional groups:
viewer control buttons permit changing the magnification, illumination and image rotation.

pointing devices are necessary to drive the measuring mark to specific locations, e.g. fiducial marks, control points or features to be digitized. Pointing devices include handwheels, footdisk, mouse, trackball and cursor. A typical configuration consists of a special cursor with an additional button to simulate z-movement (see Fig. 6.6). Handwheels and footdisk are usually offered as an option to provide the familiar environment of a stereoplotter.

digitizing devices are used to record the measuring mark together with additional information such as identifiers, graphical attributes and feature codes. For obvious reasons, digitizing devices are usually in close proximity to pointing devices. For example, the cursor is often equipped with additional recording buttons. Digitizing devices may also come in the form of foot pedals, a typical solution found with stereoplotters. A popular digitizing device is the digitizing tablet that is mainly used to enter graphical information. Another solution is the function keyboard; it provides less flexibility, however.
Real-Time Processor

The main task of the real-time processor is to control the user interface and to perform the computation of stage coordinates from model coordinates in real time. This involves executing the collinearity equations and the inverse interior orientation at a rate of 50 to 100 times per second.
Host Computer
The separation of real-time computations from more general computational tasks makes the analytical plotter a device-independent peripheral with which the host communicates via a standard interface. The task of the host computer is to assist the operator in performing photogrammetric procedures such as the orientation of a stereomodel and its digitization.

The rapid performance increase of personal computers (PCs) and their relatively low price make them the natural choice for the host computer. Other hosts typically used are UNIX workstations.
Auxiliary Devices
Depending on the type of instrument, auxiliary devices may be optionally available to increase the functionality. One such device is the superpositioning system. Here, the current digitizing status is displayed on a small, high-resolution monitor. The display is injected into the optical path so that the operator sees the digitized map superimposed on the stereomodel. This is very helpful for quickly checking the completeness and correctness of graphical information.
Model Mode
Suppose we have set up a model. That is, the diapositives of a stereopair are placed
on the stages and are oriented. The task is now to move the measuring mark to
locations of interest, for example to features we need to digitize. How do the stages
move to the conjugate location?
The measuring mark, together with the binoculars, remain fixed. As a
consequence, the stages must move to go from one point to another. New positions
are indicated by the pointing devices, for example by moving the cursor in the
direction of the new point. The cursor position is constantly read by the real-time
processor. The analog signal is converted to a 3-D location. One can think of
moving the cursor in the 3-D model space. The 3-D model position is immediately
converted to stage coordinates. This is accomplished by first computing photo-
coordinates with the collinearity equations, followed by computing stage coordinates
with the inverse interior orientation. We have symbolically

x, y = f(ext. or., X, Y, Z, c)
xm′, ym′ = f(int. or.′, x′, y′)
xm″, ym″ = f(int. or.″, x″, y″)
These equations symbolize the classical real-time loop of analytical plotters. The
real-time processor is constantly reading the user interface. Changes in the pointing
devices are converted to model coordinates X, Y, Z which, in turn, are transformed
to stage coordinates xm, ym that are then submitted to the stage motors. This loop
is repeated at least 50 times per second to provide smooth motion. It is important to
realize that the pointing devices do not directly move the stages. Alternatively, model
coordinates can also be provided by the host computer.
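One pass of this loop can be sketched as follows. The affine inverse interior orientation and all numerical values are assumptions; the point is the order of the two transformations (collinearity equations first, then stage coordinates), executed for each stage every time the real-time processor reads the pointing devices.

```python
import numpy as np

def model_to_stage(X, Y, Z, ext_or, int_or, c=0.153):
    """Model coordinates -> photo coordinates (collinearity equations) ->
    stage coordinates (inverse interior orientation, here a 2-D affine)."""
    XC, YC, ZC, R = ext_or
    d = R @ np.array([X - XC, Y - YC, Z - ZC])
    x, y = -c * d[0] / d[2], -c * d[1] / d[2]
    A, b = int_or
    return A @ np.array([x, y]) + b

# Illustrative orientation data for one photograph/stage (vertical photo, stage in mm);
# the loop would evaluate this 50-100 times per second for each of the two stages.
ext_or = (1000.0, 2000.0, 1600.0, np.eye(3))
int_or = (np.array([[1000.0, 0.0], [0.0, 1000.0]]), np.array([120.0, 120.0]))
print(model_to_stage(1010.0, 1995.0, 100.0, ext_or, int_or))
```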
Comparator Mode
Clearly, the model mode requires the parameters of both the exterior and the interior orientation. These parameters are only known after a successful interior and relative orientation. Prior to that, the analytical plotter operates in the comparator mode. The same principle as explained above applies. The real-time processor still reads the position of the pointing devices. Instead of the orientation parameters, approximations are used. For example, the 5 parameters of relative orientation are set to zero, with the same assumptions as discussed earlier for the initial values of the relative orientation. Since only rough estimates for the orientation parameters are used, conjugate locations are only approximate. The precise determination of conjugate points is obtained by clearing the parallaxes, exactly in the same way as with stereocomparators. Again, the pointing devices do not drive the stages directly.
6.1.4 Typical Workflow

In this section we describe a typical workflow, beginning with the definition of parameters, performing the orientations, and entering applications. Note that the communication is exclusively through the host computer, preferably by using a graphical user interface (GUI), such as Microsoft Windows.
Definition of Auxiliary Data

Here we include information that is necessary to conduct the orientation procedures. For the interior orientation, camera parameters are needed. This involves the calibrated focal length, the coordinates of the principal point, the coordinates of the fiducial marks, and the radial distortion. Different software varies in the degree of comfort and flexibility of entering data. For example, in most camera calibration protocols the coordinates of the fiducial marks are not explicitly available. They must be computed from distances measured between them. In that case, the host software should allow for entering distances; otherwise the user is required to compute coordinates.

For the absolute orientation, control points are necessary. It is preferable to enter the control points prior to performing the absolute orientation. Also, it should be possible to import a ground control file if it already exists, say from surveying computations. Camera data and control points should be independent of project data because several projects may use the same information.
Interior Orientation

The interior orientation begins with placing the diapositives on the stages. Sometimes, the accessibility of the stages is limited, especially when they are parked at certain positions. In that case, the system should move the stages into a position of best accessibility. After having set all the necessary viewer control buttons, a few parameters and options must be defined. This includes entering the camera file names and the choice of transformation to be used for the interior orientation. The system is now ready for measuring the fiducial marks. Based on the information in the camera file, approximate stage coordinates are computed for the stages to drive to. The fine positioning is performed with one of the pointing devices.

With every measurement, improved positions of the next fiducial mark can be computed. For example, the first measurement allows determining a better translation vector. After the second measurement, an improved value for the rotation angle is computed. In that fashion, the stages drive closer to the true position of every new fiducial mark. After the set of fiducial marks specified in the calibration protocol is measured, the transformation parameters are computed and displayed, together with statistical results, such as residuals and standard deviation. Needless to say, throughout the interior orientation the system is in comparator mode.
Upon acceptance, the interior orientation parameters are downloaded to the real-time processor.
Relative Orientation

The relative orientation requires first a successful interior orientation. Prior to the measuring phase, certain parameters must be defined, for example the number of parallax points and the type of orientation (e.g. independent or dependent relative orientation). The analytical plotter is still in comparator mode. The stages are now directed to approximate locations of conjugate points, which are regularly distributed across the model. The approximate positions are computed according to the considerations discussed in the previous section. Now, the operator selects a suitable point for clearing the parallaxes. This is accomplished by locking one stage and moving the other one only, until the point is parallax-free.

After six points are measured, the parameters of relative orientation are computed and the results are displayed. If the computation is successful, the parameters are downloaded to the RT processor and a model is established. At that time, the analytical plotter switches to the model mode. Now, the operator moves in an oriented model. To measure additional points, the system changes automatically to comparator mode to force the operator to clear the parallaxes.

It is good practice to include the control points in the measurements and computations of the relative orientation. Also, it is advisable to measure twelve or more points.
Absolute Orientation
The absolute orientation requires a successful interior and relative orientation. In case
the control points are measured during the relative orientation, the system
immediately computes the absolute orientation. As soon as the minimum control
information is measured, the system computes approximate locations for additional
control points and positions the stages accordingly.
The following comparison (columns, from left to right: analytical plotter, computer-assisted analog stereoplotter, purely analog stereoplotter) summarizes the main differences between the instrument types:

                           analytical plotter   analog (computer-assisted)   analog
accuracy
  instrument               2 µm                 ≥ 10 µm                      ≥ 10 µm
  image refinement         yes                  no                           no
drive to
  FM, control points       yes                  no                           no
  profiles                 yes                  yes                          yes
  DEM grid                 yes                  no                           no
photography
  projection system        any                  only central                 only central
  size                     ≤ 18 × 9 in.         ≤ 9 × 9 in.                  ≤ 9 × 9 in.
orientations
  computer assistance      high                 medium                       none
  time                     10 minutes           30 minutes                   1 hour
  storing parameters       yes                  yes                          no
  range of or. parameters  unlimited            ω, φ ≤ 5°                    ω, φ ≤ 5°
map compilation
  CAD systems              many                 few                          none
  time                     20 %                 30 %                         100 %
6.2.1 Background
Great strides have been made in digital photogrammetry during the past few years due
to the availability of new hardware and software, such as powerful image processing
workstations and vastly increased storage capacity. Research and development efforts
resulted in operational products that are increasingly being used by government orga-
nizations and private companies to solve practical photogrammetric problems. We are
witnessing the transition from conventional to digital photogrammetry. DPWs play a
key role in this transition.
Other information, such as the graphical user interface, is displayed on the second monitor. As an option to the 3-D pointing device (trackball), the system can be equipped with handwheels to simulate more closely the operation of a classical instrument.
The main characteristic of Intergraph’s ImageStation Z is the 28-inch panoramic
monitor that provides a large field of view for stereo display (see Fig. 6.9, label 1).
Liquid crystal glasses (label 3) ensure high-quality stereo viewing. The infrared
emitter on top of the monitor (label 4) provides synchronization of the glasses and
allows group viewing. The 3-D pointing device (label 6) allows freehand digitizing
and the 10 buttons facilitate easy menu selection.
CPU the central processing unit should be reasonably fast considering the amount of computations to be performed. Many processes lend themselves to parallel processing. Parallel processing machines are available at reasonable prices. However, software that takes advantage of them is still a rare commodity, which prevents a more widespread use of such workstations.
Figure 6.8: Typical digital photogrammetric workstation. The system shown
here offers optional handwheels to emulate operation on classical
photogrammetric plotters. Courtesy LH Systems, Inc., San Diego,
CA.
OS the operating system should be 32 bit based and suitable for real-time processing.
UNIX satisfies these needs; in fact, UNIX based workstations were the systems
of choice for DPWs until the emergence of Windows 95 and NT that make PCs
a serious competitor of UNIX based workstations.
main memory due to the large amount of data to be processed, sufficient memory
should be available. Typical DPW configurations have 64 MB, or more, of
RAM.
storage system must accommodate the efficient storage of several images. It usually
consists of a fast access storage device, e.g. hard disks, and mass storage media
with slower access times. Sec. 6.2.3 discusses the storage system in more
detail.
graphic system the graphics display system is another crucial component of the DPW.
The purpose of the display processor is to fetch data, such as raster (images)
or vector data (GIS), process and store it in the display memory and update the
monitor. The display system also handles the mouse input and the cursor.
3-D viewing system is a distinct component of a DPW usually not found in other workstations. It should allow viewing a photogrammetric model comfortably and possibly in color. For a human operator to see stereoscopically, the left and right image must be separated. Sec. 6.2.3 discusses the principles of stereo viewing.
3-D measuring device is used for stereo measurements by the operator. The solution
may range from a combination of a 2-D mouse and trackball to an elaborate
device with several programmable function buttons.
user interface may consist of hardware components such as keyboard, mouse, and
auxiliary devices like handwheels and footwheels (to emulate an analytical
plotter environment). A crucial component is the graphical user interface
(GUI).
1. Archiving: store and access images, including image compression and decompression.
[Figure: main hardware components of a DPW (CPU/OS, main memory, storage system, graphic system, 3-D viewing system, 3-D measuring device, network, and peripherals such as printer and plotter).]
A detailed discussion about the entire system functionality is beyond the scope of
this book. We will focus on the storage system, on the display and measuring system,
and on roaming.
Storage System
A medium size project in photogrammetric mapping contains hundreds of
photographs. It is not uncommon to deal with thousands of photographs in large
projects. Assuming digital images with 16 K × 16 K resolution (pixel size approx.
13 µm), a storage capacity of 256 MB per uncompressed black and white image is
required. Considering a compression rate of three, we arrive at the typical number of 80 MB per image. Storing a medium size project on-line thus places heavy demands on the storage system.
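These storage figures follow from simple arithmetic; the 1,000-photograph project below is just one illustration of a "medium size" project.

```python
pixels = (16 * 1024) ** 2                 # 16K x 16K scan, about 13 um pixel size
raw_mb = pixels / 2 ** 20                 # 256 MB uncompressed, 8 bit per pixel
compressed_mb = raw_mb / 3                # ~85 MB at a compression rate of three (text: 80 MB)
project_gb = 1000 * compressed_mb / 1024  # a 1,000-photograph project: ~83 GB on-line
print(raw_mb, round(compressed_mb), round(project_gb))
```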
Photogrammetry is not the only imaging application with high demands on storage, however. In medical imaging, for example, image libraries of terabyte size are typical. Other examples of high storage demand applications include weather tracking and monitoring, compound document management, and interactive video. These applications drive the development of mass storage technology. The following storage media are typically used:
hard disks: are an obvious choice, because of fast access and high performance capabilities. However, the high cost of disk space would make it economically infeasible to store entire projects on disk drives. Therefore, hard disk drives are typically used for interactive and real-time applications, such as roaming or displaying spatially related images.

optical disks: have slower access times and lower data transfer rates, but at lower cost (e.g. $10 to $15 per GB, depending on technology). The classical CD-ROM and CD-R (writable) with a capacity of approximately 0.65 GB can hold only one stereomodel. A major effort is being devoted to increasing this capacity by an order of magnitude and to making the medium rewritable. Until such systems become commercially available (including accepted standards), CDs are used mostly as a distribution medium.

magnetic tape: offers the lowest media cost per GB (up to two orders of magnitude less than hard disk drives). Because of its slow, sequential access, magnetic tapes are primarily used as backup devices. Recent advances in tape technology, however, make this device a viable option for on-line imaging applications. Juke boxes with Exabyte or DLT (digital linear tape) cartridges (capacity of 20 to 40 GB per medium) lend themselves to on-line image libraries with capacities of hundreds of gigabytes.
Table 6.2: Zoom values and the diameter of the corresponding film area that appears in the oculars.

    zoom                       5×    6×    10×    15×    20×
    film area diameter [mm]    40    32    21     14     10

The larger the magnification, the smaller the field of view. Table 6.2 lists zoom values and the size of the corresponding film area that appears in the oculars. Feature extraction (compilation) is usually performed with a magnification of 8 to 10 times. With higher magnification, the graininess of the film reduces the quality of stereoviewing. It is also worth pointing out that stereoscopic viewing requires a minimum field of view.
Let us now compare the viewing capabilities of analytical plotters with that of
DPWs. First, we realize that this function is performed by the graphics subsystem,
that is, by the monitor(s). To continue with the previous example of a film with 70
lp/mm resolution, viewed 10 × magnified, we read from Table 6.2 that the
corresponding area on the film has a diameter of 20 mm. To preserve the high film
resolution it ought to be digitized with a pixelsize of approximately 6 µm (1000/(2
× 70)). It follows that the monitor should display more than 3K × 3K pixels.
Monitors with this sort of resolution do not exist or are prohibitively expensive,
particularly when considering color imagery and true color rendition (24+ bit planes).
If we relax the high resolution requirements and assume that images are digitized
with a pixelsize of 15 µm, then a monitor with the popular resolution of 1280 × 1024
would display an area that is quite comparable to that of analytical plotters.
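This comparison can be retraced with a few lines of arithmetic. The sampling-based pixel size comes out near 7 µm, whereas the text quotes approximately 6 µm; either value leads to a monitor requirement of roughly 3K × 3K pixels.

```python
film_res = 70                                  # film resolution [lp/mm]
pixel_um = 1000.0 / (2 * film_res)             # ~7 um to preserve the film resolution
field_mm = 20.0                                # film diameter seen at 10x (Table 6.2)
pixels_needed = field_mm * 1000.0 / pixel_um   # ~2800 -> roughly a 3K x 3K monitor
relaxed_field_mm = 1280 * 0.015                # 19.2 mm displayed at 15 um per pixel
print(round(pixels_needed), relaxed_field_mm)
```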
Magnification, known under the more popular terms zooming in/out, is achieved
by changing the ratio of number of image pixels displayed to the number of monitor
pixels. To zoom in, more monitor pixels are used than image pixels. As a
consequence, the size of the image viewed decreases and stereoscopic viewing may
be affected.
The analogy to the floating point mark of analytical plotters is the three
dimensional cursor that is created by using a pattern of pixels, such as a cross or a
circle. The cursor must be generated by bitplane(s) that are not used for displaying the
image. The cursor moves in increments of pixels, which may appear jerky compared
to the smooth motion of analytical plotters. One advantage of cursors, however, is that
they can be represented in any desirable shape and color.
The accuracy of interactive measurements depends on how well you can identify
a feature, on the resolution, and on the cursor size. Ultimately, the pixelsize sets the
lower limit. Assuming that the maximum error is 2 pixels, the standard deviation is
approximately 0.5 pixel. A better sub-pixel accuracy can be obtained in two ways. A
straight-forward solution is to use more monitor pixels than image pixels. Fig. 6.11(a)
exemplifies the situation. Suppose we use 3 × 3 monitor pixels to display one image
pixel. The standard deviation of a measurement is now 0.15 image pixels3 . As
pointed out earlier, using more monitor pixels for displaying an image pixel reduces
the size of the field of view. In the example above, only an area of 6 mm would be
seen—hardly enough to support stereopsis.
Figure 6.11: Two solutions to sub-pixel accuracy measurements. In (a), an image pixel is dis-
played to m monitor pixels, m > 1. The cursor moves in increments of monitor
pixels, corresponding to 1/m image pixels. In (b) the image is moved under the
fixed cursor position in increments smaller than an image pixel. This requires
resampling the image at sub-pixel locations.
Stereoscopic Viewing
An essential component of a DPW is the stereoscopic viewing system (even though
a number of photogrammetric operations can be performed monoscopically). For a
human operator to see stereoscopically, the left and right image must be separated.
3
As before, the standard deviation is assumed to be 0.5 monitor pixel. We then obtain in the image
domain an accuracy of 0.5 × 1/3 image pixel.
4
Polarization absorbs half of the light. Another half is lost because the image is only viewed during half
of the time usually available when viewing in monoscopic mode.
separation   implementation
spatial      2 monitors + stereoscope
             1 monitor + stereoscope (split screen)
             2 monitors + polarization
spectral     anaglyphic
             polarization
temporal     alternate display of left and right image, synchronized by polarization
In the second solution, the shutters of the goggles are synchronized with the alternating display by an infrared emitter, usually mounted on top of the monitor (Fig. 6.9 on page 84 shows an example). Understandably, the goggles are heavier and more expensive compared to the simple polarizing glasses of the first solution. On the other hand, with the first solution the polarizing screen and the monitor are a tightly coupled unit, offering less flexibility in the selection of monitors.
Roaming
Roaming refers to moving the 3-D pointing device. This can be accomplished in two
ways. In the simpler solution, the cursor moves on the screen according to the move-
ments of the pointing device (e.g. mouse) by the operator. The preferred solution,
however, is to keep the cursor locked in the screen center, which requires redisplaying
the images. This is similar to the operation of analytical plotters where the floating
point mark is always in the center of the field of view.
The following discussion refers to the second solution. Suppose we have a stereo DPW with a 1280 × 1024 resolution, true color monitor, and imagery digitized to a 15 µm pixelsize (or approximately 16K × 16K pixels). Let us now freely roam within a stereomodel, much as we would do on an analytical plotter, and analyze the consequences in terms of transfer rates and memory size.
Fig. 6.14 schematically depicts the storage and graphic systems. The essential
components of the graphic system include the graphics processor, the display
memory,
Figure 6.13: Schematic diagram of the temporal separation of the left and right
image of a stereopair for stereoscopic viewing. In (a), a polariz-
ing screen is mounted in front of the display. Another solution
is sketched in (b). The screen is viewed through eyewear with
alternating shutters. See text for detailed explanations.
the digital-to-analog converter (DAC), and the display device (CRT monitor in our
case). The display memory contains the portion of the image that is displayed on the
monitor. Usually, the display memory is larger than the screen resolution to allow
roaming in real-time. As soon as we roam out of the display memory, new image
data must be fetched from disk and transmitted to the graphics system.
Graphic systems come in the form of high-performance graphics boards, such as
RealiZm or Vitec boards. These state-of-the-art graphics systems are as complex as
the system CPU. The interaction of the graphics system with the entire DPW, e.g.
requesting new image data, is a critical measure of system performance.
Factors such as storage organization, bandwidths, and additional processing cause
delays in the stereo display. Let us further reflect on these issues.
With an image compression rate of three, approximately 240 MB are required to
store one color image. Consequently, a 24 GB mass storage system could store 100
images on-line. By the same token, a hard disk with 2.4 GB capacity could hold 10
compressed color images.
Since we request true color display, approximately 2 × 4 MB are required to hold the
Figure 6.14: Schematic diagram of storage system, graphic system and display.
two images of the stereomodel6 . As discussed in the previous section, the left and
right image must be displayed alternately at a frequency of 120 Hz to obtain an
acceptable model7. The bandwidth of the display memory amounts to 1280 × 1024 × 3 × 120 ≈ 472 MB/sec. Only high speed, dual port memory, such as VRAM (video RAM), satisfies
such high transfer rates. For less demanding operations, such as storing programs or
fonts, less expensive memory is used in high performance graphic workstations.
At what rate should one be able to roam? Skilled operators can trace contour lines
at a speed of 20 mm/sec. A reasonable request is that the display on the monitor
should be “crossed” within 2 seconds, in any direction. This translates to 1280 × 0.015/2 ≈ 10 mm/sec in our example. Some state a maximum roam rate of 200 pixels/sec on
Intergraph’s ImageStation Z softcopy workstation. As soon as we begin to move the
pointing device, new portions of the model must be displayed. To avoid immediate
disk transfer, the display memory is larger than the monitor, usually four times.
Thus, we can roam without problems within a distance twice as long as the screen
window at the cost of increased display memory size (32 MB of VRAM in our
example).
Suppose we move the cursor with a speed of 10 mm/sec toward one edge. When
will we hit the edge of the display memory? Assuming we begin at the center, after
one second the edge is reached and the display memory must be updated with new
data. To assure continuous roaming, at least within one stereomodel, the display
memory must be updated before the screen window reaches the limit. The new
position of the window is predicted by analyzing the roaming trajectory. A look-
ahead algorithm determines the most likely positions and triggers the loading of
image data through the hierarchy
6 1280 × 1024 × 3 Bytes = 3,932,160 Bytes.
7 Screen flicker is most noticeable far out in one's vision periphery. Therefore, large screen sizes require higher refresh rates. Studies indicate that for 17-inch screens refresh rates of 75 Hz are acceptable. For DPWs larger monitors are required; therefore, with a refresh rate of 60 Hz for one image we still experience annoying flicker at the edges.
of the storage system.
Referring again to our example, we have one second to completely update the display memory. Given its size of 32 MB, data must be transferred at a rate of 32 MB/sec from the hard disk via the system bus to the display memory. The bottlenecks are the interfaces, particularly the hard disk interface. Today's systems do not offer such bandwidths, except perhaps SCSI-2 devices8. A PCI interface (peripheral component interconnect) on the graphics system will easily accommodate the required bandwidth.
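The bandwidth figures of this and the preceding paragraphs follow directly from the display geometry (decimal megabytes, as in the text; the one-second refill interval is the example used above).

```python
w, h, bytes_px, refresh = 1280, 1024, 3, 120
screen_bytes = w * h * bytes_px                  # 3,932,160 bytes (~4 MB) per image
dac_bandwidth = screen_bytes * refresh / 1e6     # ~472 MB/sec into the display
display_memory = 2 * 4 * screen_bytes / 1e6      # two images, each 4x the screen: ~31 MB (text: 32 MB)
refill_rate = display_memory / 1.0               # refill within about one second -> ~32 MB/sec
print(round(dac_bandwidth), round(display_memory), round(refill_rate))
```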
A possible solution around the hard disk bottleneck is to dedicate system memory
for storing an even larger portion of the stereomodel, serving as sort of a relay station
between hard disk and display memory. This caching technique, widely used by operating systems to increase the efficiency of data transfer from disk to memory, offers additional flexibility to the roaming prediction scheme. It is quite unlikely that we
will move the pointing device with a constant velocity across the entire model
(features to be digitized are usually confined to rather small areas). That is, the
content of the system memory does not change rapidly.
Fig. 6.15 depicts the different windows related to the size of a digital image. In
our example, the size of the display window is 19.2 mm × 15.4 mm, the display
memory size is 4× larger, and the dedicated system memory again could be 4×
larger. Finally, the hard disk holds more than one stereopair.
Figure 6.15: Schematic diagram of the different windows related to the size of
an image. Real-time roaming is possible within the display
memory. System memory holds a larger portion of the image. The
location is predicted by analyzing the trajectory of recent cursor
movements.
8 Fast wide SCSI-2 devices, available as options, sustain transfer rates of 20 MB/sec. This would be sufficient for roaming within a b/w stereo model.
6.3 Analytical Plotters vs. DPWs
Among the many potentials of DPWs is the possibility to increase the user base. To illustrate this point, compare the skill level of an operator working on a stereoplotter, an analytical plotter, and a digital photogrammetric workstation. There is clearly a trend away from very special photogrammetric know-how toward more generally available know-how on how to use a computer. That stereo models can be viewed without optical