Photogrammetry DLS

Photogrammetry is the science of obtaining geometric properties of objects from photographic images, including aerial and terrestrial applications for mapping and surveying. It encompasses techniques like orthophotos, mosaics, and stereoscopic viewing, and is utilized in various fields such as urban planning, geology, and disaster management. The document also details types of photogrammetry, camera classifications, and principles of photography, including lens characteristics and digital image representation.

Photogrammetry

Definitions
Photogrammetry
• Photogrammetry is the practice of determining the geometric properties of objects
from photographic images.
• It’s the making of precise measurements from photographs; the making of maps from
photographs, especially from aerial surveying.
• The science of using aerial photography and other remote sensing imagery to obtain
measurement of natural and man-made features on the earth.
• The science of deducing the physical dimensions of objects from measurements on
images (usually photographs) of the objects.
• Is the Art, Science and Technology of obtaining reliable information about physical
objects and the environment through processes of recording, measuring and
interpreting photographic images

Orthophoto
An orthophoto is a picture of the ground that has undergone rectification, i.e., the removal of scale and shape distortions.

Mosaic
Mosaics are images constructed from a block of overlapping photographs which are trimmed
and joined.

Stereoscopic Viewing
Perception of depth through binocular vision (simultaneous viewing with both eyes) is called
stereoscopic viewing.

Stereoscopy
It’s a technique for creating or enhancing the illusion of depth in an image through binocular
vision.

Stereopairs
Two photographs captured from different positions that overlap with each other.

Air Base
Distance between two exposure stations

Scale
The ratio of the distance between two points on a photo to the actual distance between the
same two points on the ground.
Another method used to determine the scale of a photo is to find the ratio between the
camera's focal length and the plane's altitude above the ground being photographed.
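Both scale relationships above can be sketched in a few lines of Python; the focal length, flying height, and measured photo distance below are hypothetical values chosen for illustration:

```python
def photo_scale(focal_length, flying_height):
    """Photo scale S = f / H', with H' the flying height above ground."""
    return focal_length / flying_height

def ground_distance(photo_distance, scale):
    """Ground distance recovered from a distance measured on the photo."""
    return photo_distance / scale

# A 152 mm lens flown 3040 m above the ground gives a scale of 1:20,000.
s = photo_scale(0.152, 3040.0)    # 0.00005, i.e., 1/20,000
d = ground_distance(0.05, s)      # 5 cm on the photo -> 1000 m on the ground
```

The same `ground_distance` step is what the "simplest example" in the field-applications list describes: measuring a distance on the image and dividing by the known scale.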

Field Application of Photogrammetry
• Used to conduct topographical survey or engineering surveys.
• Suitable for mountainous and hilly terrain with little vegetation.
• Used for geological mapping which includes identification of land forms, rock type &
rock structures.
• Used for projects demanding higher accuracy, since it provides accurate
measurements.
• Used in urban and regional planning applications.
• Used mostly in Planning/designing in transport planning, bridge, pipeline,
hydropower, urban planning, security and strategic planning, disaster management,
natural resources management, city models, conservation of archaeological sites etc.
• Its applications include satellite tracking of relative positional changes in all Earth environments (e.g., tectonic motions).
• The quantitative results of photogrammetry are used to guide and match the results of computational models of natural systems, thus helping to confirm or invalidate new theories, to design novel vehicles, and to develop new methods for predicting and controlling the consequences of earthquakes, tsunamis, and other natural hazards.
• Photogrammetry also helps for the solving of triangulation, trilateration and
multidimensional scaling.
• In the simplest example, the distance between two points that lie on a plane parallel to
the photographic image plane can be determined by measuring their distance on the
image, if the scale (s) of the image is known.

Types of Photogrammetry
Photogrammetry may be divided into three main groups:
a) Aerial Photogrammetry - It’s the branch of photogrammetry that deals with images
taken with sensors mounted on airborne platforms i.e., aircrafts.
b) Close range (or Terrestrial) Photogrammetry - It’s a branch of photogrammetry in
which photographs are taken from camera station at a fixed position on or near the
ground.
c) Space/Satellite/Extra Terrestrial photogrammetry - It’s a branch of photogrammetry in
which photographs are taken from cameras mounted on space orbiting satellites e.g.,
Ikonos, Quickbird and GeoEye.
Aerial Photogrammetry may also be classified into three categories:
(i). Analog Photogrammetry
(ii). Analytical Photogrammetry
(iii). Digital Photogrammetry

Photography

Terrestrial Photogrammetry.
The photographs are taken by means of a phototheodolite which is combination of a camera
and a theodolite.
The term close-range photogrammetry is generally used for terrestrial photographs having
object distances up to about 300 m.
Terrestrial photography may be static (photos of stationary objects) or dynamic (photos of
moving objects)

Types of Terrestrial Photography


Static photography
Slow, fine-grained, high-resolution films may be used and the pictures taken with long
exposure times.
Stereopairs can be obtained by using a single camera and making exposures at both ends of a
baseline.
Dynamic photography
It requires fast films and rapid shutter speeds.
If Stereopairs of dynamic occurrences are required, two cameras located at the ends of a
baseline must make simultaneous exposures.

Applications of Terrestrial and Close-Range Photogrammetry


Topographic mapping - application is limited to small areas and special situations such as
deep gorges or rugged mountains that are difficult to map from aerial photography.
Medicine - X-ray photogrammetry is utilized in measuring sizes and shapes of body parts,
recording tumor growth, studying the development of fetuses, locating foreign objects within
the body

Terrestrial Cameras
They fall into two general classifications:
a) Metric camera
They have fiducial marks built into their focal planes, or have calibrated CCD arrays, which
enable accurate recovery of their principal points.
Metric cameras are stably constructed and completely calibrated before use. Phototheodolites
and stereometric cameras are two special types of metric cameras.
Phototheodolite - A phototheodolite is an instrument that incorporates a metric camera with
a surveyor’s theodolite. With this instrument, precise establishment of the direction of the
optical axis can be made.
Stereometric cameras - A stereometric camera system consists of two identical metric
cameras which are mounted at the ends of a bar of known length. The optical axes of the cameras are oriented perpendicular to the bar and parallel with each other. The length of the
bar provides a known baseline length between the cameras, which is important for controlling
scale.
b) Non-metric camera
Nonmetric cameras are manufactured for photography where geometric accuracy
requirements are generally not considered paramount.
These cameras do not contain fiducial marks, but they can be modified to include them.

The Phototheodolite
It's a combination of a camera and a theodolite mounted on the same tripod. While taking the photographs, the camera axes at successive stations are kept parallel to each other.
It consists of:
• A camera box of fixed focus type
• A hollow rectangular frame placed vertically to the rear side
• The sensitized photographic plate.
Camera box is supported on the tripod and is furnished with an inner and outer axis each of
which is fitted with a clamp and fine adjusting screw.
The graduated horizontal circle carries verniers reading to single minutes. These are supported on a levelling head carrying three foot screws.
On the top of the box, a telescope is fitted. The telescope can be rotated in a vertical plane
about a horizontal axis and is fitted with vertical arc with verniers, clamp & slow motion
screw.
The line of sight of the telescope is set in the same vertical plane as the optical axis of
camera.

Types of Aerial Photographs


According to the direction of the camera axis at the time of exposure aerial photographs may
be classified into:
a) Vertical photographs
b) Oblique photographs

Vertical photographs
These photographs are taken from the air with the axis of the camera vertical or nearly
vertical.

A truly vertical aerial photograph is rarely obtainable because of unavoidable angular rotations, or tilts, of the aircraft. The allowable tolerance is usually ±3° between the plumb line and the camera axis.

When the camera axis is unintentionally tilted slightly from vertical, the resulting photograph
is called a tilted photograph.
Characteristics of Vertical Photographs
a) The camera axis is perpendicular to the surface of the earth.
b) It covers a relatively smaller area than oblique photographs.
c) The shape of the ground covered on a single vertical photo closely approximates a
square or rectangle.
d) Being a view from above, it gives an unfamiliar view of the ground.
e) Distance and directions may approach the accuracy of maps if taken over flat terrain.
f) Relief is not readily apparent.

Taking Vertical Aerial Photographs


Photographs are usually taken along a series of parallel passes, called flight strips.
As illustrated in Fig. 1-9, the photographs are normally exposed in such a way that the area
covered by each successive photograph along a flight strip overlaps part of the ground
coverage of the previous photo. This overlap in successive photos along the flight strip is
called end lap. The overlapping pair of photos is called a stereopair.

The positions of the camera at each exposure, e.g., positions 1, 2, 3 of Fig. 1-9, are called the
exposure stations, and the altitude of the camera at exposure time is called the flying height.
Adjacent flight strips are photographed so that there is also a lateral overlapping of ground
coverage between strips. This condition, as illustrated in Fig. 1-10, is called side lap, and it is
normally held at approximately 30 percent.
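The overlap geometry can be sketched numerically. The 60 percent end lap used below is a commonly quoted figure (the text itself specifies only the roughly 30 percent side lap), and the format size, lens, and flying height are hypothetical:

```python
def ground_coverage(format_size, focal_length, flying_height):
    """Ground length covered by one photo side: format size divided by scale."""
    return format_size * flying_height / focal_length

def air_base(coverage, end_lap):
    """Distance between successive exposure stations for a given end lap."""
    return coverage * (1 - end_lap)

def strip_spacing(coverage, side_lap):
    """Distance between adjacent flight strips for a given side lap."""
    return coverage * (1 - side_lap)

# 230 mm square format, 152 mm lens, 1520 m above ground:
g = ground_coverage(0.23, 0.152, 1520.0)   # 2300 m covered per photo side
b = air_base(g, 0.60)                      # 920 m between exposure stations
w = strip_spacing(g, 0.30)                 # 1610 m between flight strips
```

Here `air_base` is exactly the "air base" defined earlier: the distance between two exposure stations.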

Reasons for Overlaps


• Arrangement of mosaic
• Remove errors due to distortion, displacement, and tilt.
• To facilitate stereoscopic viewing.
• Avoid repetition of aerial survey

Oblique Photographs
These are photographs taken from the air with the axis of the camera intentionally tilted from the vertical. An oblique photograph covers a larger area of the ground, but the clarity of detail diminishes toward the far end of the photograph.
Depending upon the angle of obliquity, oblique photographs may be further divided into two
categories.
Low oblique photographs
An oblique photograph which does not show the horizon, is known as low oblique
photograph.

High oblique photograph
An oblique photograph which is sufficiently tilted to show the horizon is known as a high
oblique photograph.

Characteristics of Oblique Photographs


1. A low oblique photograph covers a relatively smaller area than a high oblique photograph.
2. The ground area covered is a trapezoid, although the photograph itself is square or rectangular. Hence a uniform scale is not applicable, and direction (azimuth) cannot be measured.
3. The relief is discernible but distorted.

Differences between Vertical and Tilted Photographs


1. Vertical: the tilt angle is zero. Tilted: the tilt angle is not zero.
2. Vertical: swing and azimuth are undefined. Tilted: swing and azimuth are defined.
3. Vertical: scale is affected by relief only. Tilted: scale is affected by relief and by the magnitude and angular orientation of the tilt.
4. Vertical: relief displacements are radial from the nadir point, which coincides with the principal point. Tilted: relief displacements occur along radial lines from the nadir point.

Principles of Photography and Imaging

Lenses
A simple lens consists of a piece of optical glass that has been ground so that it has either two
spherical surfaces or one spherical surface and one flat surface.
Its primary function is to gather light rays from object points and bring them to focus at some
distance on the opposite side of the lens. A lens accomplishes this function through the
principles of refraction.
A lens gathers an entire pencil of rays from each object point instead of only a single ray.

When an object is illuminated, each point in the object reflects a bundle of light rays. A lens
placed in front of the object gathers a pencil of light rays from each point’s bundle of rays
and brings these rays to focus on the image plane. The image is inverted by the lens.
The optical axis of a lens is defined as the line joining the centers of curvature of the
spherical surfaces of the lens (points O1 and O2 of Fig. 2-7). In this figure R1 and R2 are the radii of the lens surfaces, and the optical axis is the line O1O2. Light rays that are parallel to
the optical axis as they enter a lens come to focus at F, the focal point of the lens.

The distance from the focal point to the center of a lens is f, the focal length of the lens. A
plane perpendicular to the optical axis passing through the focal point is called the plane of
infinite focus, or simply the focal plane.
The following equation, called the lens formula, expresses the relationship of object distance o and image distance i to the focal length f of a converging lens:

1/o + 1/i = 1/f
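Assuming the standard thin-lens relation 1/o + 1/i = 1/f, solving for the image distance can be sketched as follows (the 50 mm lens and 5 m object distance are hypothetical):

```python
def image_distance(focal_length, object_distance):
    """Solve the lens formula 1/o + 1/i = 1/f for the image distance i."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# A 50 mm lens focused on an object 5 m away:
i = image_distance(0.050, 5.0)   # about 50.5 mm; i approaches f as o grows
```

Note that as the object distance grows very large, the image distance converges to the focal length, which is why aerial cameras are fixed at infinity focus.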

Two points called nodal points must be defined for thick lenses. These points, termed the incident (front) nodal point and the emergent (rear) nodal point, lie on the optical axis.
They have the property that conceptually, any light ray directed toward the incident nodal
point passes through the lens and emerges on the other side in a direction parallel to the
original incident ray and directly away from the emergent nodal point.

The imperfections that degrade the sharpness of the image are termed aberrations.

Lens distortions, on the other hand, do not degrade image quality but deteriorate the
geometric quality (or positional accuracy) of the image. Lens distortions are classified as
either symmetric radial or decentering.
Resolution or resolving power of a lens is the ability of the lens to show detail.
The depth of field of a lens is the range in object distance that can be accommodated by a lens
without introducing significant image deterioration. For a given lens, depth of field can be
increased by reducing the size of the lens opening (aperture).
Vignetting and falloff are lens characteristics which cause resultant images to appear brighter
in the center than around the edges. Compensation can be provided for these effects in the
lens design itself, by use of an antivignetting filter in the camera, or through lighting
adjustments in the printing process

Illuminance
Illuminance of any photographic exposure is the brightness or amount of light received per
unit area on the image plane surface during exposure. A common unit of illuminance is the
meter-candle.
Illuminance is proportional to the amount of light passing through the lens opening during
exposure, and this is proportional to the area of the opening. Since the area of the lens
opening is πd^2/4, illuminance is proportional to the variable d^2, the square of the diameter of the lens opening.
Image distance i is another factor which affects illuminance. The amount of illuminance is
inversely proportional to the square of distance from the aperture.
At the center of a photograph, the image distance equals the focal length, so illuminance is proportional to d^2/f^2. The square root of this term, d/f, is called the brightness factor:

brightness factor = d/f    (2-5)

The inverse of Eq. (2-5) is the f-stop, also called the f-number. In equation form, the f-stop is the ratio of the focal length to the diameter of the aperture:

f-stop = f/d
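A minimal sketch of these relationships, assuming illuminance varies as (d/f)^2, i.e., as 1/(f-stop)^2; the numeric values are hypothetical:

```python
def f_stop(focal_length, aperture_diameter):
    """f-number: focal length divided by aperture diameter (f/d)."""
    return focal_length / aperture_diameter

def relative_illuminance(f_number):
    """Illuminance is proportional to (d/f)^2 = 1 / f_number**2."""
    return 1.0 / f_number ** 2

n = f_stop(0.152, 0.038)   # a 152 mm lens with a 38 mm aperture is f/4
# Opening from f/8 to f/5.6 roughly doubles the illuminance, so the
# shutter speed can be halved for the same total exposure.
ratio = relative_illuminance(5.6) / relative_illuminance(8.0)
```

This is the aperture/shutter trade-off discussed in the next section: each full f-stop step changes illuminance by about a factor of 2.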

Relationship of Aperture and Shutter Speed


Total exposure of photographic film is the product of illuminance and time of exposure. Its
unit is metercandle-seconds.
Illuminance is regulated by varying f-stop settings on the camera, while time of exposure is
set by varying the shutter speed.

With a lens camera, as the diameter of the aperture increases the depth of field becomes less
and lens distortions become more severe.
In photographing rapidly moving objects or in making exposures from a moving vehicle such
as an airplane, a fast shutter speed is essential, to reduce image motion. In this situation a
small f-stop setting corresponding to a large-diameter lens opening would be necessary for
sufficient exposure.

Digital Images
A digital image is a computer-compatible pictorial rendition in which the image is divided
into pixels.
The image consists of an array of integers, often referred to as digital numbers, each
quantifying the gray level, or degree of darkness, at a particular element. Pixel values range
from 0 (dark black) to 255 (bright white).
Numbers in the range 0 to 255 can be accommodated by 1 byte, which consists of 8 binary digits, or bits. An 8-bit value can store 2^8 = 256 distinct values. An image with a pixel grid of 72 rows by 72 columns would require a total of 72 × 72 = 5184 bytes of computer storage.
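The storage arithmetic can be checked with a small sketch (grayscale, one byte per pixel, as in the example above):

```python
def image_storage_bytes(rows, cols, bits_per_pixel=8):
    """Uncompressed storage for a grayscale image, one value per pixel."""
    return rows * cols * bits_per_pixel // 8

levels = 2 ** 8                      # 256 gray levels from one 8-bit byte
size = image_storage_bytes(72, 72)   # 5184 bytes, matching the text
```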

Digital images are produced through a process referred to as discrete sampling. In this
process a pixel is “sensed” to determine the amount of electromagnetic energy given off by the corresponding patch of surface on the object. Discrete sampling of an image has two fundamental characteristics, geometric resolution and radiometric resolution.
fundamental characteristics, geometric resolution and radiometric resolution.
Geometric (or spatial) resolution refers to the physical size of an individual pixel, with
smaller pixel sizes corresponding to higher geometric resolution.
Radiometric resolution can be broken down into quantization and spectral resolution.
Quantization refers to the conversion of the amplitude of the original electromagnetic energy
(analog signal) into a number of discrete levels (digital signal). In lower-level quantizations,
large areas appear homogeneous and subtle tonal variations can no longer be detected. An image quantized to 2 levels is referred to as a binary image.
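A minimal sketch of requantization, assuming 8-bit input values; the sample gray values are hypothetical:

```python
def quantize(value, levels, max_value=255):
    """Requantize an 8-bit gray value into a smaller number of levels."""
    step = (max_value + 1) / levels
    return int(value // step)

# 2-level quantization yields a binary image: 0 for dark, 1 for bright.
binary = [quantize(v, 2) for v in (10, 100, 150, 240)]   # [0, 0, 1, 1]
```

Lowering `levels` reproduces the effect described above: neighboring gray values collapse into the same level and subtle tonal variations disappear.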

Spectral resolution is the sensor's ability to accurately represent an object's spectral response pattern in the bands involved.

Color Image Representation


RGB System
The color of any pixel can be represented by three-dimensional coordinates in what is known
as BGR (or RGB) color space.
Assume for example that the digital number in each channel is quantified as an 8-bit value,
which can range from 0 to 255. The color of a pixel is thus represented by an ordered triplet
consisting of a blue value, a green value, and a red value, each of which can range from 0 to
255. This can be represented by the three-dimensional axis system

Intensity-hue-saturation (IHS) system
This system can be defined as a set of cylindrical coordinates in which the height, angle, and
radius represent intensity, hue, and saturation, respectively. The axis of the cylinder is the
gray line which extends from the origin of the color cube to the opposite corner where the
levels of blue, green, and red are maximum.

• Intensity represents overall brightness irrespective of color
• Hue (the actual color) represents the specific mixture of wavelengths that define the
color
• Saturation represents the purity of the color, i.e., its freedom from dilution by gray
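Python's standard colorsys module implements the closely related HSV transform, which can illustrate these ideas; here HSV's value plays the role of intensity. This is an illustration of the concept, not the exact cylindrical IHS transform described above:

```python
import colorsys

def rgb8_to_hsv(r, g, b):
    """Convert 8-bit RGB to hue, saturation, value (each in [0, 1])."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

gray = rgb8_to_hsv(128, 128, 128)   # saturation 0: lies on the gray line
red = rgb8_to_hsv(255, 0, 0)        # hue 0, fully saturated, full value
```

Mid-gray maps to zero saturation, confirming that the gray line of the color cube is the axis of the cylinder.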

Focal Plane and Fiducial Marks
Focal Plane
The focal plane of an aerial camera is the plane in which all incident light rays are brought to
focus.
The focal plane is defined by the upper surface of the focal-plane frame. This is the surface
upon which the film emulsion rests when an exposure is made.
Fiducial Marks
Camera fiducial marks are usually four or eight in number, and they are situated in the middle
of the sides of the focal plane opening, in its corners, or in both locations.
Fiducial marks (or fiducials) serve to establish a reference xy photo coordinate system for
image locations on the photograph.
In essence, fiducials are two-dimensional control points whose xy coordinates are precisely
and accurately determined as a part of camera calibration.
Lines joining opposite fiducials intersect at a point called the indicated principal point.
The true principal point is the point in the focal plane where a line from the rear nodal point
of the camera lens, perpendicular to the focal plane, intersects the focal plane.
Cameras have been perfected to compensate for image motion that occurs because of the
forward movement of the aircraft during the time that the shutter is open. Forward-motion
compensation (FMC) is usually accomplished by moving the film slightly across the focal
plane during exposure, in the direction of, and at a rate just equal to, the rate of image
movement.
Importance of Fiducials
• They provide a coordinate reference for the principal point and image points.
• Fiducials allow for correction of film distortion (shrinkage and expansion).

Shutters
Short exposure times reduce:
• The effects of aircraft vibrations on image quality.
• Chances of blurred images
Shutters used in aerial cameras are generally classified as either between-the-lens shutters or
focal-plane shutters.
Between-the-lens shutters
Between-the-lens shutters are placed in the airspace between the elements of the camera
lens. Common types of between-the-lens shutters are the leaf type, blade type, and rotating-
disk type.

The leaf type consists of five or more leaves mounted on pivots and spaced around the
periphery of the diaphragm. When the shutter is tripped, the leaves rotate about their pivots to
the open position, remain open the desired time, and then snap back to the closed position.

The blade-type shutter consists of four blades, two for opening and two for closing. Its
operation is similar to that of a guillotine. When the shutter is triggered, the two thin opening
plates or blades move across the diaphragm to open the shutter. When the desired exposure
time has elapsed, two closing blades close it.
The rotating-disk type of shutter consists of a series of continuously rotating disks. Each disk
has a cutaway section, and when these cutaways mesh, the exposure is made.

Camera Mounts
The camera mount is the mechanism used to attach the camera to the aircraft.
Its purpose is to constrain the angular alignment of the camera so that the optical axis is
vertical and the format is squarely aligned with the direction of travel.
A minimal mount is equipped with dampener devices which prevent (or at least reduce)
aircraft vibrations from being transmitted to the camera, and a mechanism that allows
rotation in azimuth to correct for crab. Crab is a disparity in the orientation of the camera
with respect to the aircraft’s actual travel direction.
Drift is the pilot’s failure to fly along planned flight lines.
Crab reduces the stereoscopic ground coverage of aerial photos.

Some camera mounts provide gyro stabilization of the camera. Gyroscopic devices counteract
the rotational movements of the aircraft. Control is provided in three directions:
• rotation about the longitudinal axis(roll)
• rotation about the transverse axis (pitch)
• rotation about the optical axis (yaw or drift)

Digital Mapping Cameras


A digital image is an array of pixels in which the brightness of a scene at each discrete
location has been quantified.
Digital imaging devices record data by using solid-state detectors to sense reflected
electromagnetic energy. A common type of solid-state detector in current use is the charge-
coupled device (CCD).
Operating Principle of CCD
At a specific pixel location, the CCD element is exposed to incident light energy, and it
builds up an electric charge proportional to the intensity of the incident light. The electric
charge is subsequently amplified and converted from analog to digital form.

Digital-Frame Camera
It consists of a two-dimensional array of CCD elements, called a full-frame sensor. The
sensor is mounted in the focal plane of a single-lens camera.
Acquisition of an image exposes all CCD elements simultaneously, thus producing the digital
image.

Light rays from all points in the scene pass through the center of the lens before reaching the
CCD elements, thus producing the same type of point-perspective image as would have
occurred if film were used.
Some digital-frame cameras combine multiple image subsets to produce a composite image
that has the geometric characteristics of a single-frame image.

An image is acquired by rapidly triggering four in-line sensors that combine to acquire nine image subsets. The four sensors are aligned with the flight line, and by precisely timing the image acquisitions to correspond to the aircraft velocity, all of the images will effectively be taken from a common location.

Linear Array Sensors


The geometric characteristics of a linear array sensor are different from those of a single-lens
frame camera.
A linear array sensor acquires an image by sweeping a line of detectors across the terrain and
building up the image.
A linear array sensor consists of a one-dimensional array or strip of CCD elements mounted
in the focal plane of a single-lens camera. Since the two-dimensional image is acquired in a
sweeping fashion, the image is not exposed simultaneously.
At a particular instant, light rays from all points along a perpendicular to the vehicle
trajectory pass through the center of the lens before reaching the CCD elements, thus
producing a single row of the two-dimensional image. An instant later, the vehicle has
advanced to its position for the next contiguous row, and the pixels of this row are imaged.
The sensor proceeds in this fashion until the entire image is acquired.

Since the image was acquired at a multitude of points along the line of the vehicle’s
trajectory, the resulting geometry is called line perspective.
Another aspect of linear array sensors is that unlike frame sensors, they lack the ability to
provide stereoscopic coverage in the direction of flight. This problem can be solved by
equipping the sensor with multiple linear arrays pointed both forward and aft.

By using the images acquired by the forward-looking and backward-looking arrays, objects
on the ground are imaged from two different vantage points, thus providing the required
stereo view.
Differences between Frame Cameras and Linear Array Sensors
1. Frame cameras provide stereoscopic coverage directly. Linear array sensors do not directly provide stereoscopic coverage.
2. Frame cameras acquire the two-dimensional image simultaneously. Linear array sensors acquire the two-dimensional image in a sweeping fashion.

Camera Calibration
Cameras are calibrated to determine precise and accurate values for a number of constants.
These constants, generally referred to as the elements of interior orientation, are needed so
that accurate spatial information can be determined from photographs.
Camera calibration methods may be classified into one of three basic categories: laboratory
methods, field methods, and stellar methods.

Elements of interior orientation
The elements of interior orientation which can be determined through camera calibration are
as follows:
1. Calibrated focal length (CFL).
This is the focal length that produces an overall mean distribution of lens distortion.
It represents the distance from the rear nodal point of the lens to the principal point of the
photograph.
2. Symmetric radial lens distortion.
This is the symmetric component of distortion that occurs along radial lines from the
principal point.
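Symmetric radial distortion is commonly modeled as an odd-power polynomial in the radial distance r from the principal point. A sketch under that assumption follows; the coefficients k1 and k2 are purely illustrative, since real values come from camera calibration:

```python
def radial_distortion(r, k1, k2):
    """Symmetric radial distortion delta_r = k1*r**3 + k2*r**5 (a common
    odd-power polynomial model; coefficients come from calibration)."""
    return k1 * r**3 + k2 * r**5

def corrected_radius(r, k1, k2):
    """Subtract the modeled distortion from a measured radial distance."""
    return r - radial_distortion(r, k1, k2)

# Illustrative coefficients, not from any real calibration report:
dr = radial_distortion(100.0, 1e-8, 0.0)   # 0.01 units of shift at r = 100
```

Because the model is odd-powered and grows with r, the correction is largest toward the corners of the format, consistent with the resolution behavior described later in the calibration section.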

3. Decentering lens distortion.


This is the lens distortion that remains after compensation for symmetric radial lens
distortion. Decentering distortion can be further broken down into asymmetric radial and
tangential lens distortion components. These distortions are caused by imperfections in the
manufacture and alignment of the lens system.

4. Principal point location.
This is specified by coordinates of the principal point given with respect to the x and y
coordinates of the fiducial marks.
For a digital camera, the principal point is nominally located at the center of the CCD array,
but calibration can determine the offset from this location.
The calibrated principal point (also known as the point of best symmetry) is the point whose
position is determined as a result of the camera calibration.

5. Fiducial mark coordinates.


These are the x and y coordinates of the fiducial marks which provide the two-dimensional
positional reference for the principal point as well as images on the photograph.
A digital camera does not have fiducial marks so these values are not determined from its
calibration. Instead, the dimensions and effective shape of the CCD array are sometimes
determined as part of the calibration.
The method by which the rows or columns of CCD elements are electronically sampled may
cause a difference in the effective pixel dimensions in the x versus y directions.
In addition to the determination of the elements of interior orientation, several other
characteristics of the camera are often measured.
Resolution
It’s the sharpness or crispness with which a camera can produce an image. It is determined
for various distances from the principal point.
Due to lens characteristics, highest resolution is achieved near the center, and lowest is at the
corners of the photograph.
Focal-plane flatness
It’s the deviation of the platen from a true plane. The platen should be nearly a true plane,
generally not deviating by more than 0.01 mm.
Shutter efficiency
It’s the ability of the shutter to open instantaneously, remain open for the specified exposure
duration, and close instantaneously

Laboratory Methods of Camera Calibration


The multicollimator method and the goniometer method are two types of laboratory
procedures of camera calibration.
Multicollimator method
The multicollimator method consists of photographing, onto a glass plate, images projected
through a number of individual collimators mounted in perpendicular vertical planes.

A single collimator consists of a lens with a cross mounted in its plane of infinite focus.
Therefore, light rays carrying the image of the cross are projected through the collimator lens
and emerge parallel. When these light rays are directed toward the lens of an aerial camera,
the cross will be perfectly imaged on the camera’s focal plane.
The equivalent focal length (EFL) is a computed value based on the distances from the center
point g to each of the four nearest collimator crosses.
The camera to be calibrated is placed so that its focal plane is perpendicular to the central
collimator axis and the front nodal point of its lens is at the intersection of all collimator axes.
In this orientation, image g of the central collimator, which is called the principal point of
autocollimation, occurs very near the principal point, and also very near the intersection of
lines joining opposite fiducials.

The position of the center collimator cross (principal point of autocollimation) typically
serves as the origin of the photo coordinate system for film cameras. In a digital camera, the
coordinates of the CCD elements are typically determined relative to an origin at one corner
of the array. In this case, the principal point of autocollimation will have nonzero coordinates.

Goniometer Method
Consists of centering a precision grid plate in the camera focal plane.
The grid is illuminated from the rear and projected through the camera lens in the reverse
direction. The angles at which the projected grid rays emerge are measured with a
goniometer, a device similar to a surveyor’s theodolite.
CFL and lens distortion parameters are then computed with a mathematical model similar to
that used in the multicollimator approach.

Stellar and Field Methods of Camera Calibration


In the stellar method, a target array consisting of identifiable stars is photographed, and the
instant of exposure is recorded.
Right ascensions and declinations of the stars can be obtained from an ephemeris for the
precise instant of exposure so that the angles subtended by the stars at the camera station
become known. Then these are compared to the angles obtained from precise measurements
of the imaged stars.
A drawback of this method is that since the rays of light from the stars pass through the
atmosphere, compensation must be made for atmospheric refraction. On the other hand, there
will be a large number of stars distributed throughout the camera format, enabling a more
precise determination of lens distortion parameters.

Differences between Film Cameras & Digital Cameras
1. Film cameras have fiducial marks; digital cameras do not.
2. Film cameras record data via photosensitive film; digital cameras use solid-state detectors.
3. In film cameras the principal point of autocollimation serves as the origin of the photo coordinate system; in digital cameras the coordinates of the CCD elements originate at one corner of the array.

Aerial Cameras and Other Imaging Devices


Aerial imaging devices can be categorized according to how the image is formed;
a) Frame cameras (or frame sensors)
They’re devices that acquire the image simultaneously over the entire format.
Frame cameras generally employ shutters which open and allow light from the field of view
to illuminate a two-dimensional (usually rectangular) image plane before closing.
b) Strip cameras, linear array sensors, or pushbroom scanners
These imaging devices sense only a strip of the field of view at a given time and require that
the device move or sweep across the area being photographed in order to acquire a two-
dimensional image.
c) Flying spot scanners or whiskbroom scanners
These devices build an image by detecting only a small spot at a time, requiring movements
in two directions (sweep and scan) in order to form a two-dimensional image

Metric Cameras for Aerial Mapping


Single-lens frame cameras are often classified according to their angular field of view. The
angular field of view is the angle α subtended at the rear nodal point of the camera lens by
the diagonal d of the picture format.

Classifications according to angular field of view are:
a) Normal angle (up to 75°)
b) Wide angle (75° to 100°)
c) Superwide angle (greater than 100°)
Angular field of view may be calculated as follows:

α = 2 tan⁻¹ (d / 2f)    (3-1)

For a nominal 152-mm-focal-length camera with a 230-mm-square format, the format diagonal is d = 230√2 ≈ 325.3 mm, and the angular field of view is

α = 2 tan⁻¹ (325.3 / (2 × 152)) ≈ 93.9° (wide angle)
Characteristics of Single-Lens Camera


• From Eq. (3-1), it is seen that angular field of view increases as focal length
decreases.
• Short focal lengths, therefore, yield wider ground coverage at a given flying height
than longer focal lengths.
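These relationships are easy to check numerically. A short Python sketch (the function names are my own) computes the angular field of view from the diagonal-and-focal-length relation above and applies the classification listed earlier:

```python
import math

def angular_field_of_view(f_mm, format_side_mm):
    """Angular field of view (degrees) of a square-format frame camera.

    Computed from alpha = 2 * arctan(d / (2f)), where d is the format
    diagonal and f the focal length.
    """
    d = format_side_mm * math.sqrt(2)          # diagonal of square format
    return math.degrees(2 * math.atan(d / (2 * f_mm)))

def classify(alpha_deg):
    """Classify per the angular-field-of-view categories above."""
    if alpha_deg > 100:
        return "superwide angle"
    if alpha_deg >= 75:
        return "wide angle"
    return "normal angle"

alpha = angular_field_of_view(152, 230)   # nominal 152-mm lens, 230-mm format
print(round(alpha, 1), classify(alpha))
```

Shortening the focal length in the call above immediately widens the computed angle, which is the point made in the bullets.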

Main Parts of Frame Aerial Cameras


The three basic components or assemblies of a frame aerial camera are the magazine, the
camera body, and the lens cone assembly.

Camera Magazine
The camera magazine houses the reels which hold exposed and unexposed film, and it also
contains the film-advancing and film-flattening mechanisms.
Film flattening may be accomplished in any of the following four ways:
a) by applying tension to the film during exposure
b) by pressing the film firmly against a flat focal-plane glass that lies in front of the film
c) by applying air pressure into the airtight camera cone, thereby forcing the film against
a flat plate (platen) lying behind the focal plane
d) by drawing the film tightly up against a vacuum plate whose surface lies in the focal
plane.
Camera Body
The camera body houses the drive mechanism. The drive mechanism operates the camera
through its cycle; the cycle consists of
a) advancing the film
b) flattening the film
c) cocking the shutter
d) tripping the shutter.
The camera body also contains carrying handles, mounting brackets, and electrical
connections.

Lens Cone Assembly
The lens cone assembly contains the lens, shutter, and diaphragm. The lens cone assembly
also contains an inner cone or spider. The inner cone rigidly supports the lens assembly and
focal plane in a fixed relative position.
Camera Lens
The camera lens is the most important (and most expensive) part of an aerial camera.
It gathers light rays from the object and brings them to focus in the focal plane behind the
lens.
Filter
The filter serves three purposes:
a) It reduces the effect of atmospheric haze
b) it helps provide uniform light distribution over the entire format
c) it protects the lens from damage and dust.
Shutter and diaphragm
The shutter and diaphragm together regulate the amount of light which will expose the photograph.
The shutter controls the length of time that light is permitted to pass through the lens.
The diaphragm regulates the f-stops of the camera by varying the size of the aperture to control the amount of light passing through the lens. Typically, f-stops of aerial cameras range from about f/4 down to f/22.

Composition of Photographic Materials


Characteristics of Photographic Emulsions
Photographic films consist of two parts: emulsion and backing or support. The emulsion
contains light-sensitive silver halide crystals. These are placed on the backing or support in a
thin coat, as shown in Fig. 2-12. The support material is usually paper, plastic film, or glass.

An emulsion that has been exposed to light contains an invisible image of the object, called
the latent image. When the latent image is developed, areas of the emulsion that were
exposed to intense light turn to free silver and become black. Areas that received no light become white if the support is white paper. (They become clear if the support is glass or transparent plastic film.)
The greater the exposure, the greater the percentage of black in the mixture and hence the
darker the shade of gray.
The degree of darkness of developed images is a function of the total exposure (product of
illuminance and time) that originally sensitized the emulsion to form the latent image.
The degree of darkness of a developed emulsion is called its density. The greater the density,
the darker the emulsion.
Density of a developed emulsion on a transparent film can be determined by subjecting the
film to a light source, and then comparing the intensity of incident light upon the film to that
which passes through (transmitted light).

Density D is the common logarithm of the ratio of incident to transmitted light, D = log₁₀(1/T), where T is the transmittance. A density value of zero corresponds to a completely transparent film (T = 1), whereas a film that allows 1 percent of the incident light to pass through (T = 0.01) has a density of 2.
A plot of density on the ordinate versus logarithm of exposure on the abscissa for a given
emulsion produces a curve called the characteristic curve

Characteristic curves are useful in describing the characteristics of photographic emulsions.


The slope of the straight-line portion of the curve, for example, is a measure of the contrast of the film. The steeper the slope, the greater the contrast (change in density for a given range of exposure).
Film is said to be more sensitive and faster when it requires less light for proper exposure.
Faster films can be used advantageously in photographing rapidly moving objects.
As sensitivity and grain size increase, the resulting image becomes coarse and resolution
(sharpness or crispness of the picture) is reduced. Thus, for highest pictorial quality, such as
portrait work, slow, fine-grained emulsions are preferable.

Processing and Printing Black-and-White Photographs


The five-step darkroom procedure for processing an exposed black-and-white emulsion is as
follows:
1. Developing.
The exposed emulsion is placed in a chemical solution called developer. The action of the
developer causes grains of silver halide that were exposed to light to be reduced to free black
silver. The free silver produces the blacks and shades of gray of which the image is composed
2. Stop bath.
When proper darkness and contrast of the image have been attained in the developing stage, it
is necessary to stop the developing action. This is done with a stop bath - an acidic solution
which neutralizes the basic developer solution.
3. Fixing.
Undeveloped grains could turn black upon exposure to light if they were not removed. To
prevent further developing which would ruin the image, the undeveloped silver halide grains
are dissolved out in the fixing solution.
4. Washing.
The emulsion is washed in clean running water to remove any remaining chemicals.
5. Drying.
The emulsion is dried to remove the water from the emulsion and backing material.

The result obtained from developing black-and-white film is a negative. It derives its name
from the fact that it is reversed in tone and geometry from the original scene that was
photographed; i.e., black objects appear white and vice versa, and images are inverted.
A positive print is obtained by passing light through the negative onto another emulsion. This
reverses tone and geometry again, thereby producing an image in which those two
characteristics are true. The configuration involved in this process may be either contact or
projection printing.

Contact Printing
In contact printing the emulsion side of a negative is placed in direct contact with the
unexposed emulsion contained on printing material. Together these are placed in a contact
printer and exposed with the emulsion of the positive facing the light source.

In contact printing, the positive that is obtained is the same size as the negative from which it
was made.

Projection Printing
In this process, the negative is placed in the projector of the printer and illuminated from
above. Light rays carry images c and d, for example, from the negative, through the projector
lens, and finally to their locations C and D on the positive, which is situated on the easel
plane beneath the projector.

The emulsion of the positive, having been exposed, is then processed in the manner
previously described.

Besides using printing paper, positives may also be prepared on plastic film or glass plates. In
photogrammetric terminology, positives prepared on glass plates or transparent plastic
materials are called diapositives.

Spectral Sensitivity of Emulsions


Variations in electromagnetic energy are classified according to variations in their
wavelengths or frequencies of propagation.

Black-and-white emulsions composed of untreated silver halides are sensitive only to blue
and ultraviolet energy. Reflected light from a red object, for example, will not produce an
image on such an emulsion.

Emulsions sensitive to blue, green, and red are called panchromatic. Emulsions can also be
made to respond to energy in the near-infrared range. These emulsions are called infrared, or
IR. Infrared films make it possible to obtain photographs of energy that is invisible to the
human eye.

Filters
Filters placed in front of camera lenses also allow only certain wavelengths of energy to pass
through the lens and expose the film. If the filter is red, it blocks passage of blue and green
wavelengths and allows only red to pass.
Atmospheric haze is largely caused by the scattering of ultraviolet and short blue
wavelengths. Haze filters block passage of objectionable scattered short wavelengths (which
produce haze) and prevent them from entering the camera and exposing the film. Because of
this advantage, haze filters are almost always used on aerial cameras.

Color Film
Color emulsions consist of three layers of silver halides.

A blue-blocking filter is built into the emulsion between the top two layers, thus preventing
blue light from exposing the bottom two layers. The result is three layers sensitive to blue,
green, and red light, respectively, from top to bottom.
The first step of color developing accomplishes essentially the same result as the first step of
black-and-white developing.
The exposed halides in each layer are turned into black crystals of silver. The remainder of
the process depends on whether the film is color negative or color reversal film.
Like normal color film, color IR film also has three emulsion layers, each sensitive to a
different part of the spectrum.

The top layer is sensitive to ultraviolet, blue, and green energy. The middle layer has its
sensitivity peak in the red portion of the spectrum, but it, too, is sensitive to ultraviolet and
blue light. The bottom layer is sensitive to ultraviolet, blue, and infrared.
With color IR film and a yellow filter, any objects that reflect infrared energy appear red on
the final processed picture. Objects that reflect red energy appear green, and objects
reflecting green energy appear blue. It is this misrepresentation of color which accounts for
the name false color.

Image Measurements and Refinements


Coordinate Systems for Image Measurements
For metric cameras with side fiducial marks, the commonly adopted reference system for
photographic coordinates, is the rectangular axis system formed by joining opposite fiducial
marks with straight lines.

The x axis is usually arbitrarily designated as the fiducial line most nearly parallel with the direction of flight, positive in the direction of flight. The origin of the coordinate system is the intersection of fiducial lines. This point is often called the indicated principal point.
Fiducial marks serve as control points from which the photo coordinate axis system can be
determined.
The position of any image on a photograph, such as point a of Fig. 4-1, is given by its
rectangular coordinates xa and ya, where xa is the perpendicular distance from the y axis to a
and ya is the perpendicular distance from the x axis to a.
On digital images, coordinates are often expressed as row and column numbers of individual
pixels. If pixel dimensions are known, the dimensionless row and column values can be
converted to linear measurements.
Photographic distance ab of Fig. 4-1, for example, may be calculated from rectangular coordinates as follows:

ab = √[(xb − xa)² + (yb − ya)²]
Sources of Errors in Photo Coordinates


The major sources of systematic errors are
1 (a) Film distortions due to shrinkage, expansion, and lack of flatness
(b) CCD array distortions due to electrical signal timing issues or lack of flatness of the
chip surface
2 (a) Failure of photo coordinate axes to intersect at the principal point
(b) Failure of principal point to be aligned with center of CCD array
3. Lens distortions
4. Atmospheric refraction distortions

5. Earth curvature distortion

Distortions of Photographic Films and Papers


Photo coordinates contain small errors due to shrinkage or expansion of the photographic
materials that support the emulsion of the negative and positive.
Type of emulsion support: Paper media are generally much less stable than film.
Flatness of the camera platen: Photogrammetric equations derived for applications
involving frame cameras assume a flat image plane, any lack of flatness will result in film
distortions.
Dimensional change: Small changes in film size occur during processing and storage.
Dimensional change during storage may be held to a minimum by maintaining constant
temperature and humidity in the storage room.

Image Plane Distortion


The nominal amount of shrinkage or expansion present in a photograph can be determined by
comparing measured photographic distances between opposite fiducial marks with their
corresponding values determined in camera calibration.
Photo coordinates can be corrected based on the desired level of accuracy.
For lower levels of accuracy the following approach may be used. If xm and ym are measured
fiducial distances on the positive, and xc and yc are corresponding calibrated fiducial
distances, then the corrected photo coordinates of any point a may be calculated as

x′a = (xc / xm) xa
y′a = (yc / ym) ya

where x′a and y′a are corrected photo coordinates and xa and ya are measured coordinates.
For high-accuracy applications, shrinkage or expansion corrections may be applied through
the x and y scale factors of a two-dimensional affine coordinate transformation.
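The lower-accuracy ratio correction described above can be sketched as follows; the fiducial distances used here are invented for illustration:

```python
def shrinkage_corrected(xa, ya, xm, ym, xc, yc):
    """Low-accuracy film shrinkage/expansion correction.

    xm, ym: measured fiducial distances on the positive
    xc, yc: corresponding calibrated fiducial distances
    Each measured coordinate is scaled by the calibrated-to-measured
    fiducial-distance ratio in its direction.
    """
    return (xc / xm) * xa, (yc / ym) * ya

# Hypothetical values: the film shrank slightly in both directions
x_corr, y_corr = shrinkage_corrected(52.438, -71.309, 233.92, 233.86, 234.00, 234.00)
print(round(x_corr, 3), round(y_corr, 3))
```

For the high-accuracy route mentioned above, the same idea is absorbed into the scale terms of an affine transformation rather than applied as two independent ratios.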

Reduction of Coordinates to an Origin at the Principal Point


Photogrammetric equations that utilize photo coordinates assume an origin of photo coordinates at the principal point. Therefore it is theoretically correct to reduce photo coordinates from the measurement axis system to the axis system whose origin is at the principal point.

For precise analytical photogrammetric work, it is necessary to make the correction for the
coordinates of the principal point. The correction is applied after a two-dimensional
coordinate transformation (e.g. affine) is made to the coordinates measured by comparator or
from a scanned image.
The principal point coordinates xp and yp from the camera calibration report are subtracted
from the transformed x and y coordinates, respectively.
The correction for the principal point offset is applied in conjunction with lens distortion
corrections.

Correction for Lens Distortions


Lens distortion causes imaged positions to be displaced from their ideal locations. The
mathematical equations that are used to model lens distortions are typically comprised of two
components: symmetric radial distortion and decentering distortion.
The mathematical model used in the current USGS calibration procedure computes both
symmetric radial and decentering distortion parameters directly by least squares. Principal
point coordinates and focal length are also determined in the solution.

Where: ͞x and ͞y are coordinates of the image relative to the principal point

38
r is the radial distance from the image to the principal point
k0, k1, k2, k3, and k4 are coefficients of symmetric radial lens distortion from the
calibration report
p1, p2, p3, and p4 are coefficients of decentering distortion from the calibration report
δx and δy are the symmetric radial lens distortion corrections to ͞x and ͞y
Δx and Δy are the decentering distortion corrections to ͞x and ͞y
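As an illustration of how corrections built from these coefficient sets (k0–k4 and p1–p4) are evaluated, the sketch below uses the SMAC-style polynomial form commonly associated with USGS calibration reports; both the polynomial form and the sample values are assumptions for illustration, not data from an actual report:

```python
def distortion_corrections(x_bar, y_bar, k, p):
    """Symmetric radial (delta) and decentering (Delta) corrections.

    x_bar, y_bar: image coordinates relative to the principal point (mm)
    k: (k0, k1, k2, k3, k4) symmetric radial coefficients
    p: (p1, p2, p3, p4) decentering coefficients
    Polynomial form follows the SMAC model (an assumption here).
    """
    r2 = x_bar**2 + y_bar**2                      # squared radial distance
    radial = k[0] + k[1]*r2 + k[2]*r2**2 + k[3]*r2**3 + k[4]*r2**4
    dx, dy = x_bar * radial, y_bar * radial       # symmetric radial part
    scale = 1 + p[2]*r2 + p[3]*r2**2              # p3, p4 scaling term
    Dx = scale * (p[0]*(r2 + 2*x_bar**2) + 2*p[1]*x_bar*y_bar)
    Dy = scale * (2*p[0]*x_bar*y_bar + p[1]*(r2 + 2*y_bar**2))
    return dx, dy, Dx, Dy

# Sanity check: zero coefficients must give zero corrections
print(distortion_corrections(50.0, -30.0, (0, 0, 0, 0, 0), (0, 0, 0, 0)))
```

In practice the coefficients would be read directly from the camera calibration report, and the corrections subtracted from the measured image coordinates.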

Correction for Atmospheric Refraction


Light rays travelling through the atmosphere are bent according to Snell’s law.
Photogrammetric equations assume that light rays travel in straight paths, and to compensate
for the known refracted paths, corrections are applied to the image coordinates.

In Figure 4-6, the angular distortion due to refraction is Δα, and the linear distortion on the
photograph is Δr. Refraction causes all imaged points to be displaced outward from their
correct positions.

The magnitude of refraction distortion increases with increasing flying height and with
increasing α angle.
Refraction distortion occurs radially from the photographic nadir point and is zero at the nadir
point.
The relationship that expresses the angular distortion Δα as a function of α is

Δα = K tan α    (4-18)

where

K = [2410H / (H² − 6H + 250) − (2410h / (h² − 6h + 250))(h/H)] × 10⁻⁶    (4-19)

Where: α is the angle between the vertical and the ray of light
H is the flying height of the camera above mean sea level in kilometers
h is the elevation of the object point above mean sea level in kilometers
Δα is in radians
The procedure for computing atmospheric refraction corrections to image coordinates on a vertical photo begins by computing radial distance r from the principal point to the image:

r = √(x² + y²)    (4-20)
Also from Fig. 4-6,

tan α = r / f    (4-21)
The values of K and tan α from Eqs. (4-19) and (4-21), respectively, are then substituted into Eq. (4-18) to compute refraction angle Δα.
The radial distance r′ from the principal point to the corrected image location can then be computed by

r′ = f tan(α − Δα)    (4-22)
The change in radial distance Δr is then computed by

Δr = r − r′    (4-23)
The x and y components of the atmospheric refraction corrections (δx and δy) can then be computed by Eqs. (4-8) and (4-9), using the values of x and y in place of x̄ and ȳ, respectively. To compute corrected coordinates x′ and y′, the corrections δx and δy are subtracted from x and y, respectively.

Correction for Earth Curvature


Elevations of points are referenced to an approximately spherical datum (i.e., mean sea level)
whereas photogrammetric equations assume that the zero-elevation surface is a plane.
To avoid need for these corrections, use a three-dimensional orthogonal object space
coordinate system. One such coordinate system is the local vertical coordinate system. A
local vertical coordinate system is a three-dimensional cartesian XYZ reference system which
has its origin placed at a specific point within the project area.
By using a local vertical coordinate system for a project, earth curvature ceases to be a
distorting effect. Instead, the curvature of the earth will simply be a natural characteristic of
the terrain.

Geometry of Aerial Photographs


Vertical Photographs
Geometry of Vertical Photographs
Photographs taken from an aircraft with the optical axis of the camera vertical or as nearly
vertical as possible are called vertical photographs. If the optical axis is exactly vertical, the
resulting photograph is termed truly vertical.
In practice small unintentional tilts from the vertical are unavoidable; photographs containing these small unintentional tilts are called near-vertical or tilted photographs.

Figure 6-1 illustrates the geometry of a vertical photograph taken from an exposure station L.
The negative, which is a reversal in both tone and geometry of the object space, is situated a
distance equal to the focal length (o′L) above the rear nodal point of the camera lens.
The positive may be obtained by direct emulsion-to-emulsion “contact printing” with the
negative.
The reversal in geometry from object space to negative is seen by comparing the positions of object points A, B, C, and D with their corresponding negative positions a′, b′, c′, and d′.

Definitions
Perspective projection
It’s a linear projection in which 3D objects are projected onto a 2D plane. This has the effect of distant objects appearing smaller than nearer objects.
Perspective centre (exposure station)
The point of origin or termination of bundles of perspective light rays.
Principal point (o)

The point on the photograph located at the foot of the photograph perpendicular.
Focal length (o′L)
The distance from the rear nodal point of the lens to the plane of the photograph
Exposure station (L)
Position of the front nodal point (perspective center) during exposure
Principal/optical axis (o′P)
It’s the line joining the centers of curvature of the spherical surfaces of the lens
Principal line
The line on the photograph which passes through the principal point and the nadir point
Ground Nadir (Ng)
The point on the ground that is vertically beneath (directly below) the perspective center of
the camera lens.
Photographic Nadir point (n)
The point at which a vertical line (plumb line) from the perspective center to the ground nadir
intersects the photograph.
Isocentre
The point on the principal line at which the bisector of the angle of tilt intersects the photograph; it falls between the nadir point and the principal point
Angle of tilt
The angle at the perspective center between the optical axis and the plumb line
Plate parallel
Horizontal lines drawn in the photo plane are called plate parallels or plate horizontals
Isometric parallel
The photograph parallel that passes through the isocenter
Horizontal trace
Point where an inclined line intersects a horizontal plane.
Principal plane
The vertical plane through the perspective center containing the photograph perpendicular
and the nadir point
Negative plane
The plane on which the film lies during exposure.
True horizon

Intersection of a horizontal plane through perspective center with photograph
Vanishing point
Point where projected lines that are parallel in object space intersect when projected onto the
image plane.

Scale
The scale of a photograph is the ratio of a distance on the photo to the corresponding distance
on the ground. The scale of a vertical photograph varies with variations in terrain elevation.
Scales may be represented as unit equivalents, unit fractions, dimensionless representative
fractions, or dimensionless ratios.
a) Unit equivalents: 1 in = 1000 ft
b) Unit fraction: 1 in/1000 ft
c) Dimensionless representative fraction: 1/12,000
d) Dimensionless ratio: 1:12,000
A large number in a scale expression denotes a small scale, and vice versa; for example,
1:1000 is a larger scale than 1:5000.

Scale of a Vertical Photograph Over Flat Terrain


Figure 6-2 shows the side view of a vertical photograph taken over flat terrain. Since
measurements are normally taken from photo positives rather than negatives, the negative has
been excluded.

The scale of a vertical photograph over flat terrain is simply the ratio of:
• photo distance ab to corresponding ground distance AB
• focal length f to flying height H′

S = ab / AB = f / H′    (6-1)
The scale of a vertical photo is directly proportional to camera focal length (image distance)
and inversely proportional to flying height above ground (object distance).
Scale of a Vertical Photograph Over Variable Terrain
If the photographed terrain varies in elevation, then the object distance and photo scale will
likewise vary.
Any given vertical photo scale increases with increasing terrain elevation and decreases with
decreasing terrain elevation.

Suppose a vertical aerial photograph is taken over variable terrain from exposure station L.
Photographic scale at h, the elevation of points A and B, is equal to the ratio of photo distance
ab to ground distance AB.
By similar triangles Lab and LAB, an expression for photo scale SAB is

SAB = ab / AB = La / LA    (a)

Also, by similar triangles LOAA and Loa,

La / LA = f / (H − h)    (b)

Substituting Eq. (b) into Eq. (a) gives

SAB = f / (H − h)

In general the scale at any point whose elevation above datum is h may be expressed as

S = f / (H − h)    (6-2)
The shorter the object distance (the closer the terrain to the camera), the greater the photo
scale, and vice versa.
For vertical photographs taken over variable terrain, there are an infinite number of different
scales. This is one of the principal differences between a photograph and a map.

Average Photo Scale


Average scale is the scale at the average elevation of the terrain covered by a particular photograph and is expressed as

Savg = f / (H − havg)    (6-3)
An average scale is exact only at those points that lie at the average elevation; it is an approximate scale for all other areas of the photograph.
Example 6-2
Suppose that highest terrain h1, average terrain havg, and lowest terrain h2 are 610, 460, and
310 m above mean sea level, respectively. Calculate the maximum scale, minimum scale, and
average scale if the flying height above mean sea level is 3000m and the camera focal length
is 152.4 mm.
Solution By Eq. (6-2) (maximum scale occurs at maximum elevation),

Smax = f / (H − h1) = 152.4 mm / (3000 − 610) m ≈ 1:15,680

Similarly,

Savg = 152.4 mm / (3000 − 460) m ≈ 1:16,670
Smin = 152.4 mm / (3000 − 310) m ≈ 1:17,650
Other Methods of Determining Scale of Vertical Photographs
Comparing Measured Ground and Photo Distances
A ground distance may be measured in the field between two points whose images appear on
the photograph. After the corresponding photo distance is measured, the scale relationship is
simply the ratio of the photo distance to the ground distance.
Example 6-3
The horizontal distance AB between the centers of two street intersections was measured on
the ground as 402 m. Corresponding line ab appears on a vertical photograph and measures
95.8 mm. What is the photo scale at the average ground elevation of this line?
Solution By Eq. (6-1),

S = ab / AB = 95.8 mm / 402 m = 1/4196 ≈ 1:4200
Comparing Measured Map and Photo Distances


In this method it is necessary to measure, on the photograph and on the map, the distances between two well-defined points that can be identified on both photo and map. Photographic scale can then be calculated from the following equation:

photo scale = (photo distance / map distance) × map scale
Example 6-4
On a vertical photograph the length of an airport runway measures 160mm. On a map that is
plotted at a scale of 1:24,000, the runway is measured as 103 mm. What is the scale of the
photograph at runway elevation?
Solution The ground length of the runway is 103 mm × 24,000 = 2472 m, so

photo scale = 160 mm / 2472 m ≈ 1:15,450
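A one-line function captures the map-comparison method; applying it to the numbers of Example 6-4 gives the runway-elevation scale:

```python
def photo_scale_from_map(photo_dist, map_dist, map_scale_denom):
    """Scale denominator from: photo scale = (photo dist / map dist) x map scale.

    Equivalently, the denominator is map_dist * map_scale_denom / photo_dist.
    Both distances must be in the same units.
    """
    return map_dist * map_scale_denom / photo_dist

# Example 6-4: 160 mm on the photo, 103 mm on a 1:24,000 map
print(f"1:{photo_scale_from_map(160.0, 103.0, 24000):,.0f}")  # prints 1:15,450
```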
Ground Coordinates from a Vertical Photograph
The ground coordinates of points whose images appear in a vertical photograph can be
determined with respect to an arbitrary ground coordinate system. The arbitrary X and Y
ground axes are in the same vertical planes as the photographic x and y axes, respectively,
and the origin of the system is at the datum principal point

The measured photographic coordinates are xa, ya, xb, and yb. The coordinates of ground
points A and B are XA, YA, XB, and YB. From similar triangles La′o and LA′Ao, the following equation may be written:

xa / XA = f / (H − hA)

from which

XA = xa (H − hA) / f    (6-5)
Also, from similar triangles La′′o and LA′′Ao,

ya / YA = f / (H − hA)

from which

YA = ya (H − hA) / f    (6-6)
Similarly, the ground coordinates of point B are

XB = xb (H − hB) / f    (6-7)
YB = yb (H − hB) / f    (6-8)
From the ground coordinates of the two points A and B, the horizontal length of line AB can be calculated, using the Pythagorean theorem, as

AB = √[(XB − XA)² + (YB − YA)²]    (6-9)
Also, since the origin of the ground system lies at the datum principal point P, horizontal angle APB may be calculated from the azimuths of lines PA and PB as

APB = tan⁻¹(XB / YB) − tan⁻¹(XA / YA)
Relief Displacement on a Vertical Photograph


Relief displacement is the shift or displacement in the photographic position of an image
caused by the relief of the object, i.e., its elevation above or below a selected datum.
With respect to a datum, relief displacement is outward for points whose elevations are above
datum and inward for points whose elevations are below datum.

The concept of relief displacement is illustrated in Fig. 6-6, which represents a vertical
photograph taken from flying height H above datum. Camera focal length is f, and o is the
principal point. The image of terrain point A, which has an elevation hA above datum, is
located at a on the photograph. An imaginary point A′ is located vertically beneath A in the
datum plane, and its corresponding imaginary image position is at a′. On the figure, both A′A
and PL are vertical lines, and therefore A′AaLoP is a vertical plane. Plane A′a′LoP is also a
vertical plane which is coincident with A′AaLoP. Since these planes intersect the photo plane
along lines oa and oa′, respectively, line aa′ (relief displacement of point A due to its
elevation hA) is radial from the principal point.

An equation for evaluating relief displacement may be obtained by relating similar triangles. First consider planes Lao and LAAo in Fig. 6-6:

r / f = R / (H − hA)    (d)

where R is the horizontal ground distance from P to A. Also, from similar triangles La′o and LA′P,

r′ / f = R / H    (e)
Equating expressions (d) and (e) yields

r (H − hA) = r′ H
Rearranging the above equation, dropping subscripts, and substituting the symbol d for r − r′ gives

d = r h / H    (6-11)
where d = relief displacement
h = height above datum of object point whose image is displaced
r = radial distance on photograph from principal point to displaced image (the units of d and r must be the same)
H = flying height above same datum selected for measurement of h

Relief displacement:
• increases with increasing radial distance to the image, and increases with increased elevation of the object point above datum
• occurs radially from the principal point

The building in the center is one of the tallest imaged on the photo (as evidenced by the
length of its shadow); however, its relief displacement is essentially zero due to its proximity
to the principal point.
Example 6-7
A vertical photograph taken from an elevation of 535 m above mean sea level (MSL)
contains the image of a tall vertical radio tower. The elevation at the base of the tower is 259
m above MSL. The relief displacement d of the tower was measured as 54.1 mm, and the
radial distance to the top of the tower from the photo center was 121.7 mm. What is the
height of the tower?
Solution Select datum at the base of the tower. Then flying height above datum is

H = 535 − 259 = 276 m

Rearranging Eq. (6-11) to solve for the height of the tower,

h = dH / r = (54.1 mm)(276 m) / (121.7 mm) = 122.7 m
Flying Height of a Vertical Photograph
Height Estimates
For rough computations, flying height may be taken from altimeter readings. Flying heights
also may be obtained by using either Eq. (6-1) or Eq. (6-2) if a ground line of known length
appears on the photograph.

This procedure yields accurate flying heights for truly vertical photographs if the endpoints of
the ground line lie at equal elevations. In general, the greater the difference in elevation of the
endpoints, the greater the error in the computed flying height; therefore the ground line
should lie on fairly level terrain.
Accurate Heights
Accurate flying heights can be determined if the elevations of the endpoints of the line, as well as the length of the line, are known.
Suppose ground line AB has its endpoints imaged at a and b on a vertical photograph. Length
AB of the ground line may be expressed in terms of ground coordinates, by the Pythagorean theorem, as follows:

AB² = (XB − XA)² + (YB − YA)²
Substituting Eqs. (6-5) through (6-8) into the previous equation gives

AB² = [xb(H − hB)/f − xa(H − hA)/f]² + [yb(H − hB)/f − ya(H − hA)/f]²   (6-13)
The direct solution for H in the quadratic is

H = [−b + √(b² − 4ac)]/(2a)   (6-14)

where a, b, and c are the numerical coefficients of the quadratic equation in H.
Example 6-9
A vertical photograph was taken with a camera having a focal length of 152.3 mm. Ground
points A and B have elevations 437.4 m and 445.3 m above sea level, respectively, and the
horizontal length of line AB is 584.9 m. The images of A and B appear at a and b, and their

measured photo coordinates are xa = 18.21 mm, ya = –61.32 mm, xb = 109.65 mm, and yb = –
21.21 mm. Calculate the flying height of the photograph above sea level.
Solution By Eq. (6-13),

(584.9)² = [109.65(H − 445.3)/152.3 − 18.21(H − 437.4)/152.3]² + [−21.21(H − 445.3)/152.3 + 61.32(H − 437.4)/152.3]²

Reducing gives

[584.9(152.3)]² = (91.44H − 40,862.1)² + (40.11H − 17,376.6)²

Squaring terms and arranging in quadratic form yields

9970.1H² − 8,866,806H − 5.9636 × 10⁹ = 0

Solving for H by Eq. (6-14) gives

H ≈ 1336.8 m above sea level
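The computation in Example 6-9 can be cross-checked numerically. The sketch below collects Eq. (6-13) into a quadratic in H and takes the positive root of Eq. (6-14); variable names are illustrative:

```python
import math

# Cross-check of Example 6-9: flying height from a ground line of known
# length and endpoint elevations.

f = 152.3                    # mm, focal length
hA, hB = 437.4, 445.3        # m, endpoint elevations
AB = 584.9                   # m, horizontal ground length of line AB
xa, ya = 18.21, -61.32       # mm, photo coordinates of a
xb, yb = 109.65, -21.21      # mm, photo coordinates of b

# (AB*f)^2 = [xb(H-hB) - xa(H-hA)]^2 + [yb(H-hB) - ya(H-hA)]^2
# Collect into a*H^2 + b*H + c = 0:
a = (xb - xa)**2 + (yb - ya)**2
kx = xb * hB - xa * hA
ky = yb * hB - ya * hA
b = -2.0 * ((xb - xa) * kx + (yb - ya) * ky)
c = kx**2 + ky**2 - (AB * f)**2

H = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(H, 1))           # flying height above sea level, about 1337 m
```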
Tilted and Oblique Photographs


Definitions
Tilted photographs
Photos captured with the camera axis inadvertently tilted from the vertical.
Oblique photographs
Photos captured with the camera deliberately tilted from the vertical. They are classified as high oblique if the photograph contains the horizon, and low oblique otherwise.
Six independent parameters called the elements of exterior orientation (sometimes called
EOPs) express the spatial position and angular orientation of a photograph.
The spatial position is normally given by XL, YL, and ZL, the three-dimensional coordinates of
the exposure station in a ground coordinate system. Commonly, ZL is called H, the height
above datum.
Angular orientation is the amount and direction of tilt in the photo. Three angles are sufficient
to define angular orientation, and in this book two different systems are described:
(1) the tilt-swing-azimuth (t-s-α) system
(2) the omega-phi-kappa (ω-φ-κ) system.

Point Perspective
Perspective can be defined as how something is visually perceived under varying
circumstances.
Perspective projection
In the context of frame imaging systems, perspective projection governs how 3D objects (the
scene photographed) appear when projected onto a 2D surface (film or CCD array)
Vanishing points
Points where lines that are parallel in object space appear to intersect when projected onto the image plane.
More specifically, horizontal parallel lines intersect at the horizon, and vertical lines intersect
at either the zenith or nadir in an image.

Angular Orientation
Angular Orientation in Tilt, Swing, and Azimuth
In Fig. 10-4, a tilted aerial photograph is depicted showing the tilt-swing-azimuth angular
orientation parameters.

L is the exposure station and o is the principal point of the photo positive.
Line Ln is a vertical line, n being the photographic nadir point, which occurs where the
vertical line intersects the plane of the photograph. The extension of Ln intersects the ground
surface at Ng, the ground nadir point, and it intersects the datum surface at Nd, the datum
nadir point. Line Lo is the camera optical axis; its extension intersects the ground at Pg, the
ground principal point, and it intersects the datum plane at Pd, the datum principal point. One
of the orientation angles, tilt, is the angle t or nLo between the optical axis and the vertical
line Ln. The tilt angle gives the magnitude of tilt of a photo.
Vertical plane Lno is called the principal plane. Its line of intersection with the plane of the
photograph occurs along line no, which is called the principal line.
The position of the principal line on the photo with respect to the reference fiducial axis
system is given by s, the swing angle. Swing is defined as the clockwise angle measured in
the plane of the photograph from the positive y axis to the nadir end of the principal line. The
swing angle gives the direction of tilt on the photo. Its value can be anywhere between 0°
and 360° or, alternatively, between –180° and 180°.

The third angular orientation parameter, α or azimuth, gives the orientation of the principal
plane with respect to the ground reference axis system. Azimuth is the clockwise angle
measured from the ground Y axis (usually north) to the datum principal line

If the tilt angle is zero, the photo is vertical. For a vertical photo, swing and azimuth are undefined.

Scale of a Tilted Photograph


For vertical photos, variations in object distances were caused only by topographic relief. In a
tilted photograph, relief variations also cause changes in scale, but scale in various parts of
the photo is further affected by the magnitude and angular orientation of the tilt.
Figure 10-6a portrays the principal plane of a tilted photograph taken over a square grid on
approximately flat ground.

Due to tilt, object distance LA′ in Fig. 10-6a is less than object distance LB′, and hence a grid
line near A would appear larger (at a greater scale) than a grid line near B. This is illustrated
in Fig. 10-6b, where photo distance d1 appears longer than photo distance d2, yet both are the
same length on the ground.
Figure 10-7 illustrates a tilted photo taken from a flying height H above datum; the distance Lo equals the camera focal length f.

The image of object point A appears at a on the tilted photo, and its coordinates in the
auxiliary tilted photo coordinate system are x′a and y′a. The elevation of object point A above
datum is hA. Object plane AA′KK′ is a horizontal plane constructed at a distance hA above
datum. Image plane aa′kk′ is also constructed horizontally. The scale relationship between the
two parallel planes is the scale of the tilted photograph at point a because the image plane
contains image point a and the object plane contains object point A. The scale relationship is
the ratio of photo distance aa′ to ground distance AA′ and may be derived from similar
triangles La′a and LA′A, and Lka′ and LKA′ as follows:

Sa = aa′/AA′ = Lk/LK   (a)

but

Lk = f sec t − y′a sin t

also

LK = H − hA
Substituting Lk and LK into Eq. (a) and dropping subscripts yields

S = (f sec t − y′ sin t)/(H − h)   (10-3)

In Eq. (10-3), S is the scale on a tilted photograph for any point whose elevation is h above datum. Flying height above datum for the photo is H; f is the camera focal length; and y′ is the coordinate of the point in the auxiliary system.
Example 10-1
A tilted photo is taken with a 152.4-mm-focal-length camera from a flying height of 2266 m
above datum. Tilt and swing are 2.53° and 218.20°, respectively. Point A has an elevation of
437 m above datum, and its image coordinates with respect to the fiducial axis system are xa
= –72.4 mm and ya = 87.1 mm. What is the scale at point a?
Solution By the second equation of Eqs. (10-2),

By Eq. (10-3),

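Example 10-1 can be sketched numerically. The conventions below are assumptions, not confirmed by the source: the auxiliary axes are taken as the fiducial axes rotated through θ = s − 180° with the origin translated to the nadir point, and the scale formula of Eq. (10-3) is applied. A different sign convention for the rotation changes the result only slightly here, because the tilt is small:

```python
import math

# Sketch of Example 10-1 under assumed conventions:
#   y' = x*sin(theta) + y*cos(theta) + f*tan(t),  theta = s - 180 deg
#   S  = (f*sec(t) - y'*sin(t)) / (H - h)

f = 152.4                  # mm, focal length
H = 2266.0                 # m, flying height above datum
h = 437.0                  # m, elevation of point A
t = math.radians(2.53)     # tilt
s = math.radians(218.20)   # swing
x, y = -72.4, 87.1         # mm, fiducial coordinates of image a

theta = s - math.pi
y_prime = x * math.sin(theta) + y * math.cos(theta) + f * math.tan(t)
S = (f / math.cos(t) - y_prime * math.sin(t)) / 1000.0 / (H - h)  # mm -> m
print(round(1.0 / S))      # scale denominator, roughly 1:12,000
```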
Relief Displacement on a Tilted Photograph (Tilt Displacements)


Relief displacements on tilted photographs occur along radial lines from the nadir point.
Relief displacement is zero for images at the nadir point and increases with increased radial
distances from the nadir.

Relationship between Photographs and Maps

Projection
  Photo: An aerial photograph is geometrically incorrect; the distortion in geometry is minimum at the centre and increases towards the edges of the photo.
  Map: A map is a geometrically correct representation of the part of the earth projected.

Scale
  Photo: The scale of a photograph is not uniform.
  Map: The scale is uniform throughout the map extent.

Production
  Photo: Enlargement and reduction are easier in the case of photographs.
  Map: Enlargement/reduction of maps involves redrawing them afresh.

Detail
  Photo: Some non-visible phenomena may be difficult to interpret, but special films such as colour and infrared can bring out special features of the terrain, such as surface temperatures.
  Map: Maps may represent non-visible phenomena, e.g., population, climate.

Presentation of features
  Photo: Photographs show images of the surface itself.
  Map: Maps give an abstract representation of the surface, i.e., use symbolization.

Training
  Photo: Photo-interpretation requires special training.
  Map: A little training and familiarity with the particular legend used enables proper use of a map.

Rectification
Rectification of Tilted Photographs
Rectification is the process of making equivalent vertical photographs from tilted photo
negatives. The resulting equivalent vertical photos are called rectified photographs.
Rectified photos are free from tilt displacements but contain image displacements and scale
variations due to topographic relief. Relief displacements and scale variations can be
removed in a process called differential rectification or orthorectification, and the resulting
products are then called orthophotos (photos free of relief displacements and scale
variations).
The process of rectification may entail:
i). Plotting from the tilted photo onto another object sheet such that the plotted points are
free from tilt displacement. This is called graphical rectification. An example of this
is the paper strip method.
ii). Optical projection of the photo image onto the map plane and alignment of
corresponding features before plotting the missing features in the map by direct
tracing. The Aero-Sketchmaster is an example of this.
iii). Photographic rectification (optical-mechanical rectification), whereby the tilted
photograph is projected optically such that the projected image is tilt-free and the
image is then photographed to obtain the equivalent vertical photograph.
Rectification is generally performed by any of five methods:
a) Analytically
b) Optically
c) Optically-mechanically
d) Digitally
e) Graphically

Analytical Rectification
Analytical methods perform rectification point by point.
Basic input required for the numerical methods, in addition to ground coordinates of control
points, is x and y photo coordinates of all control points plus those of the points to be
rectified.

X = (a1x + b1y + c1)/(a3x + b3y + 1)
Y = (a2x + b2y + c2)/(a3x + b3y + 1)   (10-16)

In Eqs. (10-16), X and Y are ground coordinates, x and y are photo coordinates (in the fiducial axis system), and the a′s, b′s, and c′s are the eight parameters of the transformation.
The use of these equations to perform analytical rectification is a two-step process.
(i). A pair of Eqs. (10-16) is written for each ground control point, and the resulting system is solved for the eight transformation parameters (a minimum of four control points is required).
(ii). Eqs. (10-16), with the computed parameters, are then applied to each point whose rectified X and Y coordinates are desired.
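The two-step analytical procedure can be sketched as follows. This is a minimal illustration, not production photogrammetric code: the control-point values are made up, and the sketch assumes exactly four well-distributed control points (more points would call for a least-squares solution):

```python
# Analytical rectification via the eight-parameter projective transformation
# of Eqs. (10-16). Four control points give eight linear equations in the
# eight parameters a1, b1, c1, a2, b2, c2, a3, b3.

def solve(A, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fct * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_projective(photo, ground):
    """Step (i): solve for the eight parameters from four (x,y)/(X,Y) pairs."""
    A, v = [], []
    for (x, y), (X, Y) in zip(photo, ground):
        # X*(a3*x + b3*y + 1) = a1*x + b1*y + c1, and similarly for Y
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); v.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); v.append(Y)
    return solve(A, v)

def apply_projective(p, x, y):
    """Step (ii): rectify a point with the computed parameters."""
    a1, b1, c1, a2, b2, c2, a3, b3 = p
    den = a3 * x + b3 * y + 1.0
    return (a1 * x + b1 * y + c1) / den, (a2 * x + b2 * y + c2) / den

# Hypothetical control points: photo coordinates (mm) -> ground coordinates (m)
photo = [(-90.0, -90.0), (90.0, -85.0), (85.0, 92.0), (-88.0, 88.0)]
ground = [(1000.0, 2000.0), (2750.0, 2080.0), (2700.0, 3850.0), (1010.0, 3790.0)]
p = fit_projective(photo, ground)
X, Y = apply_projective(p, 0.0, 0.0)   # rectified position of the photo origin
```

Applying the fitted transformation back to the control points reproduces their ground coordinates, which is a useful self-check of the solution.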

Optical Rectification
Aero-Sketchmaster LUZ
The Aero-Sketchmaster LUZ is an instrument for the graphical rectification of verticals and
near-verticals for the completion and correction of existing maps.

The eye-piece of this instrument contains a semi-transparent mirror, in which the photo image is reflected and through which the map image passes. The eyes therefore see the doubly reflected image of the photograph superimposed on the doubly reflected image of the map. The photo-holder is ball-and-socket mounted.
If the ground covered is sufficiently flat, and the photo-plane
and map plane are both correctly oriented with respect to the eyepiece,
then the geometrical relationships of Chapter 2 are again set
up between the photograph and map.
It is therefore sometimes said that the instruments depend on anharmonic principles.
Operating the Aero-Sketchmaster
1. Calculate the ratio between the photo scale and the map scale:

   (lens-photo distance)/(lens-map distance) = (photo scale)/(map scale)

2. Mount the photograph with its principal point at the centre of the photo-holder, and by eye
make the photo-holder vertical.
3. Secure the compilation sheet or map to the table so that the principal point traverse is more
or less parallel with the operator's eye base, and the homologue of the principal point is in the
most convenient viewing position, probably about eight inches from the edge of the table.
4. Switch on the lamp to illuminate the photograph, and balance the strengths of the two
images by inserting one of the range of smoked glasses in front of the stronger image.
5. Choose two points on the photograph in positions such as a and b in Fig. 7.6 so that the
line ab is approximately horizontal and passes near to the principal point. Let A and B be the
respective map positions of these points. Move the instrument stand until the photo line ab is
superimposed on the map line AB. Now adjust the lens-map distance until ab is the same
length as AB. Move the instrument again until a coincides with A, and b with B. This gives a
correct mean scale for the line ab.
6. If P is the map position of p, and P is closer to A than p is to a, then the scale along pa is
greater than along PA, i.e., the scale near a must be greater than that near b. Tilt the photo-
holder about a vertical axis through p, bringing b closer to the eye-piece, until a, p and b
coincide with A, P and B respectively.
7. Now choose two more minor control points on the photograph, in positions such as c and
d, so that cd is approximately perpendicular to ab and passes near p. Repeat 6 above but
using points c and d instead of a and b, and tilting about a roughly horizontal axis.
8. Repeat 6 and 7 above until the best mean fitting is obtained. Provided that A, B, C and D are points having similar ground heights, then the instrument will now be set for the approximate elimination of tilt. We say that we have completed the general setting, but it is rare that all or even many of the remaining m.c.p.s (minor control points) will be accurately superimposed on their map positions, and further detailed settings will be necessary.
9. Plot all detail nearer to the principal point than such points as e, f, j and l.
10. Check the setting of m.c.p.s e, f, g and h in Fig. 7.6. If necessary, and using movements
similar to those of 6, 7 and 8 above, adjust the photo-holder to achieve coincidence of these
points with their homologues. Plot all detail within this quadrilateral.
11. Repeat 10 for fgkj, then for ghqr, grtk, and kjlm, etc. until the whole quadrant has been
plotted.
12. Repeat 6, 7, 8, 9, 10, and 11 for each of the other three quadrants in turn.

Optical-Mechanical Rectification
The optical-mechanical method relies on instruments called rectifiers. They produce
rectified and ratioed photos through the photographic process of projection printing
As illustrated in Fig. 10-15, the basic components of a rectifier consist of a lens, a light
source with reflector, a stage for mounting the tilted photo negative, and an easel which holds
the photographic emulsion upon which the rectified photo is exposed.

The Scheimpflug Condition


The Scheimpflug condition states that, in projecting images through a lens, if the negative and easel planes are not parallel, the negative plane, lens plane, and easel/map plane must all intersect along a common line to satisfy the lens formula and achieve sharp focus for all images.

In order to maintain sharp focus, the lens formula, Eq. (2-5), which relates image distances and object distances for a given lens, must be satisfied for all images:

1/o + 1/i = 1/f   (2-5)

As object distances increase, image distances must be decreased to satisfy the lens formula.
This is accomplished by tilting (canting) the lens as shown in Fig. 10-15 so that the lens plane
passes through S.
Situations in photogrammetry where the Scheimpflug condition must be enforced are in
rectification of tilted photographs and in orienting oblique photographs in stereoplotters. In
both of these cases the image plane is tilted with respect to the object plane.
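The effect the Scheimpflug condition compensates for can be seen by evaluating the lens formula numerically. Rearranged for the image distance, i = 1/(1/f − 1/o): as the object distance o grows, i shrinks toward f, which is why the rectifier lens must be canted so every projected point meets its required image distance. The focal length and object distances below are illustrative:

```python
# Image distance from the lens formula 1/o + 1/i = 1/f.

def image_distance(f, o):
    """Return image distance i for focal length f and object distance o."""
    return 1.0 / (1.0 / f - 1.0 / o)

f = 150.0                        # mm, focal length
for o in (300.0, 450.0, 900.0):  # mm, increasing object distances
    print(o, image_distance(f, o))   # i decreases: 300.0, 225.0, 180.0
```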

Digital Rectification
Digital rectification incorporates a photogrammetric scanner and computer processing. This procedure is a special case of the more general concept of georeferencing.
The feature that distinguishes digital rectification from other forms of georeferencing is that rectification requires a projective transformation to relate the image to the ground coordinate system, whereas georeferencing often uses simpler transformations such as the two-dimensional conformal or the two-dimensional affine transformation.
Three primary pieces of equipment needed for digital rectification are a digital image,
computer, and plotting device capable of producing digital image output.

Graphical Rectification
Perspective Grid Method

Disadvantages
The perspective grid method is good only for flat ground.

Projective Relationship Between Photographs and Objects


Anharmonic/Cross ratios
Anharmonic ratios are set up by perspective relations between the image and the object. The ratio (a/b):(c/d) is an anharmonic ratio with four variables.
The proof of relations in anharmonic ratios can be achieved by successive application of the
sine formula.

Figure 1 represents the projection of object points A, B, C, D and E through the lens at O onto
the tilted photograph at points a, b, c, d and e respectively.

Points O, a, A, b and B lie in the same plane therefore lines ab and AB extended must
intersect at point Q1, on the horizon trace. Similarly, the pairs of lines (ac and AC), (ad and
AD) and (ae and AE) must meet at points Q2 , Q3 and Q4 respectively.
Figure 2 represents the same situation as in Figure 1 but with the tilted plane of the
photograph rotated about the horizon trace until it forms an extension of the horizontal datum
plane. Lines kl and KL are drawn at arbitrary positions and in arbitrary directions, to cut
across the pencil of rays radiating from a and A respectively.

For the pencil of rays from a, the anharmonic ratio is given by

(kl/ln) ÷ (km/mn)

which can be shown to be a constant irrespective of the position and direction of the line kn.

Inversors

Stereoscopy
It’s a technique for creating or enhancing the illusion of depth in an image through binocular
vision.

Definitions
Stereoscopic Viewing (Stereopsis)
Perception of depth through binocular vision (simultaneous viewing with both eyes) is called
stereoscopic viewing.

Stereoscopy
It’s a technique for creating or enhancing the illusion of depth in an image through binocular
vision.

Stereopairs
Two photos of the same area captured from different positions so that they overlap each other.

Stereoscopic Coverage
The three-dimensional view which results when two overlapping photos (a stereopair) are viewed using a stereoscope.

Monoscopy
Depth perception with one eye.

Binocular Vision
Viewing with both eyes simultaneously.

Eye Base
The distance between the centers of the eyeballs of an individual.

Parallactic angle
When two eyes fixate on a certain point, the optical axes of the two eyes converge on that
point, intersecting at an angle called the parallactic angle.

Stereoscopic Viewing
Depth Perception
Depth perception may be classified as either stereoscopic or monoscopic.
Persons with normal vision (those capable of viewing with both eyes simultaneously) are said
to have binocular vision.
Perception of depth through binocular vision is called stereoscopic viewing.
Methods of judging distances with one eye are termed monoscopic.
Stereoscopic depth perception in Photogrammetry enables the formation of a 3D stereomodel
by viewing a pair of overlapping photographs.

Stereoscopic Depth Perception


When two eyes fixate on a certain point, the optical axes of the two eyes converge on that
point, intersecting at an angle called the parallactic angle.

The nearer the object, the greater the parallactic angle, and vice versa.

In Fig. 7-3, the optical axes of the two eyes L and R are separated by a distance be, called the
eye base.
When the eyes fixate on point A, the optical axes converge, forming parallactic angle ϕa.
Similarly, when sighting an object at B, the optical axes converge, forming parallactic angle
ϕb. The depth between objects A and B is DB – DA and is perceived from the difference in
these parallactic angles.
Angles of convergence are the angles formed by the pairs of optical axes converging from L and R. Stereoscopic perception of height is a function of the difference in parallactic angles, i.e.

h = f(ϕa − ϕb) ≈ DB²(ϕa − ϕb)/be

where DB is the distance to the farther point and be is the eye base.
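The relation between object distance and parallactic angle can be illustrated with a short sketch. For an eye base be and an object at distance D, simple geometry gives ϕ = 2 arctan(be/(2D)); the eye base and distances below are illustrative values:

```python
import math

# Parallactic angle for an object at distance D, eye base be:
#   phi = 2 * atan(be / (2 * D))
# The nearer object subtends the larger angle; depth is perceived
# from the difference in the two angles.

def parallactic_angle(be, D):
    return 2.0 * math.atan(be / (2.0 * D))

be = 0.065                                # m, a typical eye base
phi_near = parallactic_angle(be, 0.5)     # object at 0.5 m
phi_far = parallactic_angle(be, 5.0)      # object at 5 m
print(phi_near > phi_far)                 # True: nearer point, larger angle
```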

Viewing Photographs Stereoscopically


Suppose that a pair of aerial photographs is taken from exposure stations L1 and L2 so that the
building appears on both photos. Flying height above ground is H′, and the distance between
the two exposures is B, the air base.

Object points A and B at the top and bottom of the building are imaged at a1 and b1 on the left
photo and at a2 and b2 on the right photo.

If the two photos are laid on a table and viewed so that the left eye sees only the left photo
and the right eye sees only the right photo, as shown in Fig. 7-6, a three-dimensional
impression of the building is obtained.
The three-dimensional model formed is called a stereoscopic model or simply a stereomodel,
and the overlapping pair of photographs is called a stereopair.

Stereoscopes
It is quite difficult to view photographs stereoscopically without the aid of optical devices.
Stereoscopes are instruments used for stereoscopic viewing.
Lens or pocket stereoscope
It consists of two simple convex lenses mounted on a frame. The lenses serve to magnify the
images so details can be seen more clearly.

The spacing between the lenses can be varied to accommodate various eye bases. The legs
fold or can be removed so that the instrument is easily stored or carried.

In using a pocket stereoscope, the photos are placed so that corresponding images are slightly
less than the eye base apart, usually about 5 cm.

If part of the overlap is obscured by the top photo, the top photo can be gently rolled up out of the way to enable viewing the corresponding imagery of the obscured area.
Mirror stereoscope
It permits two photos to be completely separated when viewed stereoscopically. This;
• eliminates the problem of one photo obscuring part of the overlap of the other.
• enables the entire width of the stereomodel to be viewed simultaneously.

The stereoscope has two large wing mirrors and two smaller eyepiece mirrors, all of which
are mounted at 45° to the horizontal.
Light rays emanating from image points on the photos such as a1 and a2 are reflected from the mirror surfaces and are received at the eyes, forming parallactic angle ϕa. The brain automatically associates the depth to point A with that parallactic angle. The stereomodel is thereby created beneath the eyepiece mirrors.

Factors Affecting Stereoscopic Vision


Y Parallax
When corresponding images fail to lie along a line parallel to the flight line, y parallax,
denoted by py, is said to exist.
Excessive y parallax prevents stereoscopic viewing. One cause is improper orientation of the photos: even a pair of truly vertical overlapping photos taken from equal flying heights will exhibit y parallax if not oriented perfectly.

Variation in flying height


Exposure at different heights results in photos with different scales.

In Fig. 7-16 the left photo was exposed from a lower flying height than the right photo. To obtain a comfortable stereoscopic view, the y parallax can be eliminated by sliding the right photo upward transverse to the flight line when viewing point a and sliding it downward when viewing point b.

Excessive Tilts

Vertical Exaggeration in Stereoviewing


The vertical scale of a stereomodel will appear to be greater than the horizontal scale; i.e., an
object in the stereomodel will appear to be too tall. This apparent scale disparity is called
vertical exaggeration.
Vertical exaggeration is caused primarily by the lack of equivalence of the:
• photographic base-height ratio, B/H′
• stereoviewing base-height ratio, be/h
B/H′ is the ratio of the air base (distance between the two exposure stations) to flying height
above average ground, and be/h is the ratio of the eye base (distance between the two eyes) to
the distance from the eyes at which the stereomodel is perceived.
Figures 7-18 a and b depict, respectively, the taking of a pair of vertical overlapping
photographs and the stereoscopic viewing of those photos.

In Fig. 7-18a, the camera focal length is f, the air base is B, the flying height above ground is
H′, the height of ground object AC is Z, and the horizontal ground distance KC is D.
In Fig. 7-18a, assume that Z is equal to D. In Fig. 7-18b, i is the image distance from the eyes to the photos, be is the eye base, h is the distance from the eyes to the perceived stereomodel, z is the stereomodel height of object A′C′, and d is the horizontal stereomodel distance K′C′. Note that while the ratio Z/D is equal to 1, the ratio z/d is greater than 1 due to vertical exaggeration.
An equation for calculating vertical exaggeration can be developed with reference to these
figures. From similar triangles of Fig. 7-18a,

Subtracting (b) from (a) and reducing gives

Also from similar triangles of Fig. 7-18b,

Subtracting (e) from (d) and reducing yields

Equating (c) and (f) gives

In the above equation, the values of Z and z are normally considerably smaller than the values
of H′ and h, respectively; thus

Also from similar triangles of Figs. 7-18a and b,

Dividing (i) by (h) and reducing yields

Substituting (j) into (g) and reducing gives

z/d = [Bh/(H′be)](Z/D)   (k)

In Eq. (k), if the term Bh/(H′be) is equal to 1, there is no vertical exaggeration of the stereomodel. (Recall that Z is equal to D.) Thus an expression for the magnitude of vertical exaggeration V is given by

V ≈ Bh/(H′be)
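The vertical-exaggeration estimate V ≈ (B/H′)(h/be) is easy to evaluate numerically. The stereoviewing values below (eye base, perceived model distance) and the flight geometry are typical illustrative numbers, not taken from the source:

```python
# Vertical exaggeration: V ~ (B/H')(h/be), the ratio of the photographic
# base-height ratio to the stereoviewing base-height ratio.

def vertical_exaggeration(B, H_prime, h, be):
    return (B / H_prime) * (h / be)

B, H_prime = 1200.0, 2000.0   # m, air base and flying height above ground
be, h = 0.065, 0.45           # m, eye base and perceived model distance
V = vertical_exaggeration(B, H_prime, h, be)
print(round(V, 1))            # about 4.2: the model looks ~4x too tall
```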
Parallax Heighting
Definitions
Parallax
Parallax is the apparent displacement in the position of an object, with respect to a frame of
reference, caused by a shift in the position of observation.

Horizontal Parallax/X Parallax


The change in position of an image from one photograph to the next caused by the aircraft’s
motion is termed stereoscopic parallax, x parallax, or simply parallax.

Y Parallax (Vertical Parallax)


When corresponding images fail to lie along a line parallel to the flight line, y parallax,
denoted by py, is said to exist.
Excessive Y parallax prevent stereoscopic viewing

Stereoscopic Parallax
The change in position of an image from one photograph to the next caused by the aircraft’s
motion is termed stereoscopic parallax, x parallax, or simply parallax.
The parallax of any point is directly related to the elevation of the point i.e., parallax is
greater for high points than for low points.

In Fig. 8-1, for example, images of object points A and B appear on a pair of overlapping
vertical aerial photographs which were taken from exposure stations L1 and L2. Points A and
B are imaged at a and b on the left-hand photograph. Forward motion of the aircraft between
exposures, however, caused the images to move laterally across the camera focal plane
parallel to the flight line, so that on the right hand photo they appear at a′ and b′.
Because point A is higher (closer to the camera) than point B, the movement of image a
across the focal plane was greater than the movement of image b.

Basic Parallax Formula


Figure 8-2 shows the two photographs of Fig. 8-1 in superposition. Parallaxes of object points
A and B are pa and pb, respectively. Stereoscopic parallax for any point such as A whose
images appear on two photos of a stereopair, expressed in terms of flight-line photographic
coordinates, is

pa = xa − xa′   (8-1)
In Eq. (8-1), pa is the stereoscopic parallax of object point A, xa is the measured photo
coordinate of image a on the left photograph of the stereopair, and xa′ is the photo coordinate
of image a′ on the right photo.

In Fig. 8-3, the tower affords an excellent example for demonstrating the use of Eq.
(8-1) for finding parallaxes. The top of the tower has an x coordinate (xt = 48.2 mm) and
an x′ coordinate (xt′ = –53.2 mm). By Eq. (8-1), the parallax pt = 48.2 – (–53.2) = 101.4
mm. Also, the bottom of the tower has an x coordinate (xb = 42.7 mm) and an x′
coordinate (xb′ = –47.9 mm). Again by Eq. (8-1), pb = 42.7 – (–47.9) = 90.6 mm.

Parallax Heighting Formula for Level Ground


Figure 8-10 illustrates an overlapping pair of vertical photographs which have been exposed
at equal flying heights above datum.
Images of an object point A appear on the left and right photos at a and a′, respectively. The
planimetric position of point A on the ground is given in terms of ground coordinates XA and
YA. Its elevation above datum is hA.
The XY ground axis system has its origin at the datum principal point P of the left-hand
photograph; the X axis is in the same vertical plane as the photographic x and x′ flight axes;
and the Y axis passes through the datum principal point of the left photo and is perpendicular
to the X axis.

By equating similar triangles of Fig. 8-10, formulas for calculating hA, XA, and YA may be derived. From similar triangles L1oay and L1AoAy,

ya/f = YA/(H − hA)

from which

YA = (ya/f)(H − hA)   (a)

and equating similar triangles L1oax and L1AoAx, we have

xa/f = XA/(H − hA)

from which

XA = (xa/f)(H − hA)   (b)
Also from similar triangles L2o′ax′ and L2Ao′Ax,

xa′/f = (XA − B)/(H − hA)

from which

XA = B + (xa′/f)(H − hA)   (c)

Equating Eqs. (b) and (c) and reducing gives

H − hA = Bf/(xa − xa′)   (d)

Substituting pa for xa − xa′ into Eq. (d) yields

hA = H − Bf/pa   (8-5)

Now substituting Eq. (8-5) into each of Eqs. (b) and (a) and reducing gives

XA = B(xa/pa)   (8-6)

YA = B(ya/pa)   (8-7)
In Eqs. (8-5), (8-6), and (8-7), hA is the elevation of point A above datum, H is the flying
height above datum, B is the air base, f is the focal length of the camera, pa is the parallax of
point A, XA and YA are the ground coordinates of point A in the previously defined unique arbitrary coordinate system, and xa and ya are the photo coordinates of image a on the left photo.
Equations (8-5), (8-6), and (8-7) are commonly called the parallax equations.

Example 8-1
A pair of overlapping vertical photographs was taken from a flying height of 1233 m above
sea level with a 152.4-mm-focal-length camera. The air base was 390 m. With the photos
properly oriented, flight-line coordinates for points a and b were measured as xa = 53.4 mm,
ya = 50.8 mm, xa′ = –38.3 mm, ya′ = 50.9 mm, xb = 88.9 mm, yb = –46.7 mm, xb′ = –7.1 mm,
yb′ = –46.7 mm. Calculate the elevations of points A and B and the horizontal length of line
AB.

Solution By Eq. (8-1),

pa = 53.4 − (−38.3) = 91.7 mm
pb = 88.9 − (−7.1) = 96.0 mm
By Eq. (8-5),

hA = H − Bf/pa = 1233 − 390(152.4)/91.7 = 585 m above sea level

hB = H − Bf/pb = 1233 − 390(152.4)/96.0 = 614 m above sea level

By Eqs. (8-6) and (8-7), the ground coordinates are:

XA = 390(53.4)/91.7 = 227.1 m    YA = 390(50.8)/91.7 = 216.1 m
XB = 390(88.9)/96.0 = 361.2 m    YB = 390(−46.7)/96.0 = −189.7 m

The horizontal length of line AB is

AB = √[(361.2 − 227.1)² + (−189.7 − 216.1)²] = 427.3 m
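Example 8-1 can be reproduced directly from the parallax equations (variable names are illustrative):

```python
import math

# Example 8-1 with the parallax equations (8-5) to (8-7):
#   h = H - B*f/p,  X = B*x/p,  Y = B*y/p

H, B, f = 1233.0, 390.0, 152.4       # m, m, mm
xa, ya, xap = 53.4, 50.8, -38.3      # mm (xap stands for x_a')
xb, yb, xbp = 88.9, -46.7, -7.1      # mm

pa = xa - xap                        # 91.7 mm
pb = xb - xbp                        # 96.0 mm
hA = H - B * f / pa                  # ~585 m above sea level
hB = H - B * f / pb                  # ~614 m above sea level
XA, YA = B * xa / pa, B * ya / pa    # ground coordinates of A
XB, YB = B * xb / pb, B * yb / pb    # ground coordinates of B
AB = math.hypot(XB - XA, YB - YA)    # ~427 m
print(round(hA), round(hB), round(AB, 1))
```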
Parallax Heighting Formula for Undulating Terrain


Elevations by Parallax Differences
Parallax differences between one point and another are caused by different elevations of the
two points.

In Fig. 8-11, object point C is a control point with an elevation hC. The elevation of object point A is desired. By rearranging Eq. (8-5), parallaxes of both points can be expressed as

pc = Bf/(H − hC)   (e)

pa = Bf/(H − hA)   (f)

The difference in parallax pa − pc, obtained by subtracting Eq. (e) from Eq. (f) and rearranging, is

pa − pc = Bf(hA − hC)/[(H − hA)(H − hC)]   (g)

Let pa − pc equal Δp, the difference in parallax. By substituting H − hA from Eq. (f) and Δp into (g) and reducing, the following expression for elevation hA is obtained:

hA = hC + Δp(H − hC)/pa   (8-8)
Example 8-2
In Example 8-1, flight-line axis x and x′ coordinates for the images of a vertical control point
C were measured as xc = 14.3 mm and xc′= –78.3 mm. If the elevation of point C is 591 m
above sea level, calculate the elevations of points A and B of that example, using parallax
difference Eq. (8-8).
Solution By Eq. (8-1),

pc = 14.3 − (−78.3) = 92.6 mm

For point A,

Δp = pa − pc = 91.7 − 92.6 = −0.9 mm

By Eq. (8-8),

hA = 591 + (−0.9)(1233 − 591)/91.7 = 585 m above sea level

For point B,

Δp = pb − pc = 96.0 − 92.6 = 3.4 mm

By Eq. (8-8),

hB = 591 + (3.4)(1233 − 591)/96.0 = 614 m above sea level
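Example 8-2 can be sketched with Eq. (8-8), reusing the parallaxes of Example 8-1:

```python
# Example 8-2 with the parallax-difference equation (8-8):
#   h = hC + dp*(H - hC)/p,  where dp = p - pC.

H, hC = 1233.0, 591.0     # m, flying height and control-point elevation
pa, pb = 91.7, 96.0       # mm, parallaxes of A and B from Example 8-1
pc = 14.3 - (-78.3)       # = 92.6 mm, parallax of control point C

hA = hC + (pa - pc) * (H - hC) / pa    # ~585 m
hB = hC + (pb - pc) * (H - hC) / pb    # ~614 m
print(round(hA), round(hB))
```

The results agree with the elevations obtained from the full parallax equation in Example 8-1, as they should.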
Simplified Equation for Heights of Objects from Parallax Differences


A simplified equation for height determination can be obtained from Eq. (8-8) by choosing
the vertical datum to be the elevation of the point on the ground that is used as the basis for
the parallax difference. This makes hC zero, and Eq. (8-8) simplifies to

hA = ΔpH/pa   (8-9)
In Eq. (8-9), hA is the height of point A above ground, Δp = pa – pc is the difference in
parallax between the top of the feature and the ground (pc is the parallax of the ground), and
H is the flying height above ground, since datum is at ground.

If the heights of many features are needed in an area where the ground is approximately level,
the photo base b can be utilized as the parallax of the ground point. In this case, Eq. (8-9) can
be modified to

hA = ΔpH/(b + Δp)   (8-10)

In Eq. (8-10), b is the photo base for the stereopair, and Δp = pa − b.

Example 8-3
The parallax difference between the top and bottom of a tree is measured as 1.3 mm on a
stereopair of photos taken at 915 m above ground. Average photo base is 88.2 mm. How tall
is the tree?
Solution By Eq. (8-10),

hA = 915(1.3)/(88.2 + 1.3) = 13.3 m
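Example 8-3 as a one-line check of Eq. (8-10):

```python
# Example 8-3: object height from a parallax difference, h = H*dp/(b + dp).

H = 915.0      # m, flying height above ground
b = 88.2       # mm, average photo base
dp = 1.3       # mm, parallax difference between treetop and ground

h = H * dp / (b + dp)
print(round(h, 1))   # 13.3 m
```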
Measurement of Parallax
Photographic Flight-Line Axes for Parallax Measurement
For a vertical photograph of a stereopair, the flight line is the line connecting the principal
point and corresponding (conjugate) principal point.
Principal points are located by intersecting the x and y fiducial lines. All photographs except
those on the ends of a flight strip may have two sets of flight axes for parallax
measurements—one to be used when the photo is the left photo of the stereopair and one when
it is the right photo.

Principle of the Floating Mark


Assume that a mark of the same size and shape, etched on transparent material, is
superimposed on each of the two overlapping photographs. The two marks, also called half
marks, fuse into a single mark when viewed stereoscopically, which appears to float in space.
Therefore, this mark is called the floating mark.

If the half marks are moved closer together, the parallax of the half marks is increased and the
fused mark will therefore appear to rise. Conversely, if the half marks are moved apart,
parallax is decreased and the fused mark appears to fall. This apparent variation in the
elevation of the mark as the spacing of half marks is varied is the basis for the term floating
mark.

The principle of the floating mark can be used to transfer principal points to their
corresponding locations, thereby marking the flight-line axes.
In this procedure;
• The principal points are first located as usual at the intersection of fiducial lines.
• The left half mark is placed over one of the principal points, say, the left point o1.

• Using a mirror stereoscope for viewing, the right half mark is placed on the right
photo and moved about until a clear stereoscopic view of the floating mark is obtained
and the fused mark appears to rest exactly on the ground.
• The right half mark is then carefully held in place while a pin is inserted through the
center of the cross to make a pinprick on the photograph.
• Once corresponding principal points have been marked, the photo base b can be
determined.
The photo base is the distance on a photo between the principal point and the corresponding
principal point from the overlapping photo.

By Eq. (8-1), the parallax of the left-photo ground principal point P1 is po1 = xo1 − xo′1 = 0
– (-b′) = b′. (The x coordinate of o1 on the left photo is zero.) Also, the parallax of the
right-photo ground principal point P2 is po2 = xo2 − xo′2 = b – 0 = b.
From the foregoing, it is seen that the parallax of the left ground principal point is photo base
b′ measured on the right photo, and the parallax of the right ground principal point is photo
base b measured on the left photo.

Monoscopic Methods of Parallax Measurement


Parallaxes of points on a stereopair may be measured either monoscopically or
stereoscopically.
In either method the photographic flight line axes must first be carefully located by marking
principal points and corresponding principal points.
Method 1
The equation for stereoscopic parallax is solved directly after direct measurement of x and x′
on the left and right photos.

A disadvantage of this method is that two measurements are required for each point.
Method 2
• Fasten the photographs down on a table or base material
• The photographic flight lines o1o2 and o1′o2′are marked as usual.
• A long straight line AA′ is drawn on the base material, and the two photos are
carefully mounted so that the photographic flight lines are coincident with this line.
• The distance D between the two principal points is measured.
• The parallax of point B is pb = xb − xb′ (note that in Fig. 8-5 the xb′
coordinate is negative, so the magnitudes of the two measurements add)

From the figure, it can be seen that parallax is also

pb = D − db

where db is the distance between the corresponding images b and b′ measured on the mounted
pair. Once D has been measured, the parallax of each point requires only a single
measurement.
Stereoscopic Methods of Parallax Measurement


Through the principle of the floating mark, parallaxes of points may be measured
stereoscopically.

Parallax Bar
This method employs a stereoscope in conjunction with an instrument called a parallax bar,
also frequently called a stereometer.
A parallax bar consists of a metal rod to which are fastened two half marks. The right half
mark may be moved with respect to the left mark by turning a micrometer screw. Readings
from the micrometer are taken with the floating mark set exactly on points whose parallaxes
are desired. From the micrometer readings, parallaxes or differences in parallax are obtained.

In Figure 8-9, the spacing between principal points is a constant, denoted by D. The distance
from the fixed mark to the index mark of the parallax bar is also a constant, denoted by K.

Procedure
a) The two photos of a stereopair are oriented for comfortable stereoscopic viewing in
such a way that the flight line of each photo lies precisely along a common straight
line.
b) The photos are fastened securely, and the parallax bar is placed on the photos.
c) The left half mark, called the fixed mark, is unclamped and moved so that when the
floating mark is fused on a terrain point of average elevation, the parallax bar reading
is approximately in the middle of the run of the graduations.
d) The fixed mark is then clamped
e) The right half mark, or movable mark, may be moved left or right with respect to the
fixed mark (increasing or decreasing the parallax) as required
The parallax of point A is

pa = D − (K − ra) = (D − K) + ra

The term (D − K) is C, the parallax bar constant for the setup. Also ra is the micrometer
reading. By substituting C into the above equation, the expression becomes

pa = C + ra   (8-3)
Equation (8-3) assumes the parallax bar micrometer to be "forward-reading," i.e., readings
increase with increasing parallax. Should the readings decrease with increasing parallax, the
parallax bar is called "backward-reading," and the algebraic sign of r must be reversed.
The value of C is thus calculated using the equation:
C = p − r (8-4)
where the parallax p = x − x′. The mean of the two values of C is then used for computations.

The parallax of the left photo ground principal point o1 is po1 = xo1 − xo′1 = 0 − (−b′) = b′.
(The x coordinate of o1 on the left photo is zero.)
The parallax of the right photo ground principal point o2 is po2 = xo2 − xo′2 = b − 0 = b.

∴ The parallax of the left ground principal point is photo base b′ measured on the right photo,
and the parallax of the right ground principal point is photo base b measured on the left
photo.

∴ C1 = b′ − ro1 and C2 = b − ro2   (8-8)

EXAMPLE 8-1
A pair of overlapping vertical photographs were taken from a flying height of 4,045 ft above
sea level with a 152.4-mm focal length camera. The air base was 1,280 ft. With the photos
properly oriented, parallax bar readings of 12.57 and 13.04 mm were obtained with the
floating mark set on principal points o1 and o2, respectively. On the left photo b was measured
as 93.73 mm and on the right photo b′ was measured as 93.30 mm. Parallax bar readings of
10.96 and 15.27 mm were taken on points A and B.
Also, the x and y photocoordinates of points A and B measured with respect to the flight axes
on the left photo were xa = 53.41 mm, ya = 50.84 mm, xb = 88.92 mm, and yb = - 46.69 mm.
Calculate the elevations of points A and B and the horizontal length of line AB.
SOLUTION By Eq. (8-8),
C1 =b′ − ro1 =93.30 − 12.57 =80.73 mm
C2 =b − ro2 =93.73 − 13.04 =80.69 mm

C = (80.73 + 80.69)/2 = 80.71 mm

By Eq. (8-3), p = C + r:
pa = C + ra = 80.71 + 10.96 = 91.67 mm
pb = C + rb = 80.71 + 15.27 = 95.98 mm

By Eq. (8-5), h = H − Bf/p:

hA = H − Bf/pa = 4,045 − 1,280(152.4)/91.67 = 1,917 ft above sea level

hB = H − Bf/pb = 4,045 − 1,280(152.4)/95.98 = 2,012 ft above sea level

By Eqs. (8-6) and (8-7),

XA = B(xa/pa) = 1,280(53.41)/91.67 = 746 ft
XB = B(xb/pb) = 1,280(88.92)/95.98 = 1,186 ft
YA = B(ya/pa) = 1,280(50.84)/91.67 = 710 ft
YB = B(yb/pb) = 1,280(−46.69)/95.98 = −623 ft

The horizontal length AB is

AB = √[(XB − XA)² + (YB − YA)²]
AB = √[(1,186 − 746)² + (−623 − 710)²] = 1,404 ft
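The whole Example 8-1 computation chains Eqs. (8-8) (parallax bar constants), (8-3), (8-5), (8-6) and (8-7); it can be traced in a short Python sketch. Variable names are illustrative, and the last decimals differ slightly from the hand-rounded values in the text:

```python
from math import hypot

# Parallax bar constants, C1 = b' - r_o1 and C2 = b - r_o2, then their mean
C = ((93.30 - 12.57) + (93.73 - 13.04)) / 2      # mean C = 80.71 mm

H, B, f = 4045.0, 1280.0, 152.4                  # flying height (ft), air base (ft), focal length (mm)

pa = C + 10.96                                   # Eq. (8-3): p = C + r
pb = C + 15.27

hA = H - B * f / pa                              # Eq. (8-5): h = H - Bf/p
hB = H - B * f / pb

XA, YA = B * 53.41 / pa, B * 50.84 / pa          # Eqs. (8-6) and (8-7)
XB, YB = B * 88.92 / pb, B * (-46.69) / pb

AB = hypot(XB - XA, YB - YA)
# hA ~ 1917.0 ft, hB ~ 2012.6 ft, AB ~ 1403.3 ft (the text rounds these
# to 1,917 ft, 2,012 ft and 1,404 ft)
```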

Advantages of Measuring Parallax Stereoscopically


• increased speed
• increased accuracy

Computing Flying Height and Air Base


Example 8-4
An overlapping pair of vertical photographs taken with a 152.4-mm-focal-length camera has
an air base of 548 m. The elevation of control point A is 283 m above sea level, and the
parallax of point A is 92.4 mm. What is the flying height above sea level for this stereopair?

Solution By rearranging Eq. (8-5),

H = hA + Bf/pa = 283 + 548(152.4)/92.4 = 1187 m above sea level
If the flying height above datum is known and if one vertical control point is available in the
overlap area, the air base for the stereopair may be calculated by using Eq. (8-5).
Example 8-5
An overlapping pair of vertical photos was exposed with a 152.4-mm-focal-length camera
from a flying height of 1622 m above datum. Control point C has an elevation of 263 m
above datum, and the parallax of its images on the stereopair is 86.3 mm. Calculate the air
base.
Solution By rearranging Eq. (8-5),

B = pc(H − hC)/f = 86.3(1622 − 263)/152.4 = 769.6 m
If a line of known horizontal length appears in the overlap area, then the air base can be
readily calculated. The horizontal length of the line may be expressed in terms of rectangular
coordinates, according to the Pythagorean theorem, as

AB² = (XB − XA)² + (YB − YA)²

Substituting Eqs. (8-6) and (8-7) into the above for the rectangular coordinates gives

AB² = (B·xb/pb − B·xa/pa)² + (B·yb/pb − B·ya/pa)²

Solving the above equation for B yields

B = AB / √[(xb/pb − xa/pa)² + (yb/pb − ya/pa)²]   (8-11)
Example 8-6
Images of the endpoints of ground line AB, whose horizontal length is 650.47 m, appear on a
pair of overlapping vertical photographs. Photo coordinates measured with respect to the
flight axis on the left photo were xa = 33.3 mm, ya = 13.5 mm, xb = 41.8 mm, and yb = –95.8
mm. Photo coordinates measured on the right photo were xa′ = –52.3 mm and xb′ = –44.9
mm. Calculate the air base for this stereopair.
Solution By Eq. (8-1),

pa = xa − xa′ = 33.3 − (−52.3) = 85.6 mm
pb = xb − xb′ = 41.8 − (−44.9) = 86.7 mm

By Eq. (8-11),

B = 650.47 / √[(41.8/86.7 − 33.3/85.6)² + (−95.8/86.7 − 13.5/85.6)²] = 513.8 m
Sources of Errors in Parallax Heighting


Some of the sources of error in computed answers using parallax equations are as follows:
a) Locating and marking the flight lines on photos
b) Orienting stereopairs for parallax measurement
c) Shrinkage or expansion of photographs
d) Variation in flying heights for stereopairs
e) Tilted photographs
f) Errors in ground control
g) lens distortion
h) earth curvature
i) atmospheric refraction distortion

Distortions of Photographic Films and Papers


Photo coordinates contain small errors due to shrinkage or expansion of the photographic
materials that support the emulsion of the negative and positive.
Type of emulsion support: Paper media are generally much less stable than film.
Flatness of the camera platen: Photogrammetric equations derived for applications
involving frame cameras assume a flat image plane; any lack of flatness will result in film
distortions.
Dimensional change: Small changes in film size occur during processing and storage.
Dimensional change during storage may be held to a minimum by maintaining constant
temperature and humidity in the storage room.
Image Plane Distortion
The nominal amount of shrinkage or expansion present in a photograph can be determined by
comparing measured photographic distances between opposite fiducial marks with their
corresponding values determined in camera calibration.
Photo coordinates can be corrected based on the desired level of accuracy.

Lens Distortions
Lens distortion causes imaged positions to be displaced from their ideal locations. The
mathematical equations that are used to model lens distortions are typically comprised of two
components: symmetric radial distortion and decentering distortion.
The mathematical model used in the current USGS calibration procedure computes both
symmetric radial and decentering distortion parameters directly by least squares. Principal
point coordinates and focal length are also determined in the solution.
Atmospheric Refraction
Light rays travelling through the atmosphere are bent according to Snell’s law.
Photogrammetric equations assume that light rays travel in straight paths, and to compensate
for the known refracted paths, corrections are applied to the image coordinates.

In Figure 4-6, the angular distortion due to refraction is Δα, and the linear distortion on the
photograph is Δr. Refraction causes all imaged points to be displaced outward from their
correct positions.
The magnitude of refraction distortion increases with increasing flying height and with
increasing α angle.

Refraction distortion occurs radially from the photographic nadir point and is zero at the nadir
point.
The relationship that expresses the angular distortion Δα as a function of α is

Δα = K tan α

where: α is the angle between the vertical and the ray of light, and K is a refraction
constant that depends on
H, the flying height of the camera above mean sea level in kilometers, and
h, the elevation of the object point above mean sea level
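Because r = f tan α on a vertical photo, an angular distortion Δα = K tan α maps to a radial image displacement Δr = f Δα / cos²α (differentiating r = f tan α). A small Python sketch; the value of K here is an assumed input, since its full expression in terms of H and h is not reproduced above:

```python
from math import tan, cos, radians

def refraction_displacement(f_mm, alpha_deg, K):
    """Radial image displacement (mm) due to atmospheric refraction.

    K is the refraction constant (radians), assumed already computed
    from the flying height H and point elevation h.
    """
    a = radians(alpha_deg)
    d_alpha = K * tan(a)             # angular distortion, Eq. above
    return f_mm * d_alpha / cos(a) ** 2

# The displacement is zero at the nadir and grows with alpha
```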
Earth Curvature
Elevations of points are referenced to an approximately spherical datum (i.e., mean sea level)
whereas photogrammetric equations assume that the zero-elevation surface is a plane.
To avoid need for these corrections, use a three-dimensional orthogonal object space
coordinate system. One such coordinate system is the local vertical coordinate system. A
local vertical coordinate system is a three-dimensional cartesian XYZ reference system which
has its origin placed at a specific point within the project area.
By using a local vertical coordinate system for a project, earth curvature ceases to be a
distorting effect. Instead, the curvature of the earth will simply be a natural characteristic of
the terrain.

Variation in flying height


Exposure at different heights results in photos with different scales.
In Fig. 7-16 the left photo was exposed from a lower flying height than the right photo. To
obtain a comfortable stereoscopic view, the y parallax can be eliminated by sliding the right
photo upward transverse to the flight line when viewing point a and sliding it downward
when viewing point b.

Excessive Tilts

Photogrammetric Plotters
Definitions
Stereoplotter
A stereoplotter is a 3D digitizer, capable of producing accurate X, Y, and Z object space
coordinates when properly oriented and calibrated.

Degrees of Freedom
The number of movements (scaling, translations or rotations) that the components of plotters
can undergo during orientation.

Stereoscopic Plotting Instruments
Concept of stereoscopic plotting instrument design

FIGURE 12-1 Fundamental concept of stereoscopic plotting instrument design. (a) Aerial
photography; (b) Stereoscopic plotting instrument.

Orientation Phases
Interior orientation
Involves orientation of each photograph with respect to the geometry of the camera.
Diapositives are placed in two stereoplotter projectors. When light rays from corresponding
images on the two diapositives intersect, a stereomodel is created.
Procedures involved in interior orientation are
(i) Preparation of diapositives
They are made by direct contact printing so their principal distances will be exactly
equal to the focal length of the camera.
(ii) Compensation for image distortions
May be accomplished in one of the following three ways:
• elimination of the distortion with a “correction plate”
• varying the projector principal distance by means of a cam,
• use of a projector lens whose distortion characteristics negate the camera’s
distortion
(iii) Centering of diapositives in the projectors
It is done by aligning fiducial marks of the diapositive with four calibrated
collimation marks which intersect at the optical axis of the projector.
(iv) Setting off the proper principal distance in the projectors
The principal distance is adjusted by either graduated screws or a graduated ring to
raise or lower the diapositive image plane.

Relative orientation
Involves orienting two photographs with respect to each other to form a stereomodel.
Procedures involved in relative orientation are as follows;
The two projectors are first positioned relative to each other.
The operator can then “roam” about the overlap area and see the model comfortably in
three dimensions.

Absolute orientation
After relative orientation is completed, a true three-dimensional model of the terrain exists.
In absolute orientation, the stereomodel is scaled and leveled with respect to a reference
datum.

Model Scale

By comparing the geometry of Fig. 12-1a and b, model scale is seen to be the ratio of the
sizes of triangles AL1L2 and A′O1O2. Equating these similar triangles, model scale may be
expressed as

Sm = b/B = h/H′   (12-2)
In Eq. (12-2), Sm is model scale, b is the model air base, B is the photographic air base, h is
plotter projection distance, and H′ is the flying height above ground. Model scale is changed
by varying the model base.
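Eq. (12-2) is enough to set up a model: pick a scale, then derive the model base and projection distance. A small sketch with assumed numbers (B, H′ and the scale below are made up for illustration):

```python
def model_setup(B, H_ground, scale_denominator):
    """Eq. (12-2): Sm = b/B = h/H'.  Returns the model base b and the
    projection distance h for a desired model scale 1:scale_denominator;
    units follow those of B and H_ground."""
    return B / scale_denominator, H_ground / scale_denominator

# Assumed values: B = 1280 m, H' = 1850 m, desired model scale 1:5000
b, h = model_setup(1280.0, 1850.0, 5000)
# b = 0.256 m (256 mm), h = 0.37 m (370 mm)
```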
A minimum of two horizontal control points is required to scale a stereomodel. Scaling
procedures are;
1. These points are plotted at adopted model scale on the manuscript map as points A
and B of Fig. 12-4b.
2. The manuscript is then positioned under the model, and with the floating mark set on
one model point, such as A′, the manuscript is moved until map point A is directly
under the plotting pencil.
3. The floating mark is then set on model point B′.
4. While the manuscript is held firmly at A, it is rotated until map line AB is collinear
with model line A′B′
5. Model base is adjusted until new model line A′′B′′ is equal in length to map line AB.
Levelling the Stereomodel
This procedure requires a minimum of three vertical control points distributed in the model so
that they form a large triangle.

A model with a vertical control point near each corner that has not yet been leveled is shown
in Fig. 12-5. There are two components of tilt in the model, an X component (also called Ω)
and a Y component (also called Φ).

Corrective tilts can be introduced by turning the leveling screws to rotate the projector bar
When orientation is completed, measurements of the model may be made and recorded.

Classification of Stereoscopic Plotters


Stereoscopic plotters can be classified based on;
1. Precision
2. Functions/Range of work
3. Projection system
4. Principle of design

Classification Based on Precision


The method is unsatisfactory because of difficulties in assessing true accuracy capabilities of
various instruments. Plotting accuracy is not solely a function of the instrument but also
depends upon other variables such as quality of photography, operator ability, etc. Also, some
1st- and 2nd-order plotters overlap in the precision they can achieve.
This classification is divided into four general categories:
(a) 1st order
(b) 2nd order
(c) 3rd order – they’re cheap and designed for small-scale mapping due to their limited
precision.
(d) Approximate - instruments of the "approximate" category assume truly vertical photos
and employ a parallax bar for measurement. These low-order instruments enable

direct stereoscopic plotting, but the resulting map is a perspective projection, not an
orthographic projection.

Classification Based on Function/Range of Work


This classification is divided into 3 general categories:
1) Topographical plotters – designed for production of medium scale topo maps. They
can be used for less precise large-scale maps.
2) Large scale mapping plotters – this describes plotters designed for precision plotting
of large-scale maps with an enlargement from photos to map of over 5 times.
3) Small scale mapping and map revision plotters – 3rd order plotters used for production
of small scale topo maps (1:50,000).

Classification Based on Projection System


This classification is divided into three general categories:
(a) direct optical projection instruments – examples are the Big Bertha developed by Barr
and Stroud and the Stereoplanigraph by Zeiss
(b) stereoplotters with mechanical projection – examples are Santoni’s Autoriduttore, the
Stereocartografo (I, II, III) and the WILD Autograph A7.
(c) stereoplotters with optical-mechanical projection (photogoniometer-type) -
instruments with optical-mechanical and mechanical projections also create a true 3D
model from which measurements are taken. Their method of projection, however, is a
simulation of direct projection of light rays by mechanical or optical-mechanical
means. An operator views the diapositives stereoscopically directly through a
binocular train. Examples are the Thompson-Watts plotter and the Autocartograph by
Hugershoff.

Direct Optical Projection Stereoplotters


These instruments create a true three-dimensional stereomodel by projecting transparency
images through projector lenses.
The model is formed by intersections of light rays from corresponding images of the left and
right diapositives. An operator is able to view the model directly on a viewing screen
(platen).
A typical direct optical projection stereoplotter is illustrated in Figure 12-2

The numbered parts are the
(1) Main frame, which supports the projectors rigidly in place
(2) Reference table, which serves as the vertical datum to which model elevations are referenced
(3) Tracing table, to which the platen and tracing pencil are attached
(4) Platen, the viewing screen
(5) Guide rods, which drive the illumination lamps, causing projected rays to be
illuminated on the platen regardless of the area of the stereomodel being viewed
(6) Projectors
(7) Illumination lamps
(8) Diapositives
(9) Leveling screws, which may be used to tilt the projectors in absolute orientation
(10) Projector bar, to which the projectors are attached
(11) Tracing pencil

Stereoplotters With Mechanical or Optical-Mechanical Projection


Mechanical projection stereoscopic plotting instruments simulate direct optical projection of
light rays by means of two precisely manufactured metal space rods.
The basic principles of mechanical projection are illustrated in the simplified diagram of Fig.
14-22. Diapositives are placed in carriers and illuminated from above. The carriers are
analogous to projectors of direct optical projection instruments. Two space rods are free to
rotate about gimbal joints O′ and O′′, and they can also slide up and down through these
joints. The space rods represent corresponding projected light rays, and the gimbal joints are
mechanical projection centers, analogous to the objective lenses of projectors of direct optical
projection stereoplotters. The model exposure stations are therefore represented by O′ and
O′′, and the distance O′O′′ is the model air base. Joints O′ and O′′ are fixed in position
except that their spacing can be changed during orientation to obtain correct model scale.

The viewing system consists of two individual optical trains of lenses, mirrors, and prisms.
The two optical paths are illustrated by dashed lines in Fig. 14-22.

An operator looking through binocular eyepieces along the optical paths sees the diapositives
directly and perceives the stereomodel. Objective lenses V′ and V″ are situated in the optical
trains directly beneath the diapositives. The lenses are oriented so that viewing is orthogonal
to the diapositives; consequently, the diapositive image planes (emulsion surfaces) can lie on
the upper side of the diapositive glass with no refraction error being introduced because no
rays pass obliquely through the glass. A reference half mark is superimposed on the optical
axis of each of the lenses V′ and V″. Movement is imparted to the lenses from the space rods
by means of tie rods connected at another set of gimbal joints K′ and K″ . These joints are
fixed in vertical position, and the vertical distance from lower gimbal joints O′ and O″ to
their corresponding upper gimbal joints K′ and K″ is the principal distance p. During interior
orientation, this distance is set equal to the principal distance of the diapositives
The space rods intersect at the base carriage. By manually moving the base carriage,
movement is imparted to the space rods, which in turn impels the viewing system and makes
it possible to scan the diapositives. By manipulating the base carriage in the X, Y,
and Z directions, the optical axes of lenses V′ and V′′ can be placed on corresponding
images such as a′ and a′′. This will occur when the reference half marks fuse into a single
mark that appears to rest exactly on the model point. If orientation of the instrument has
been carefully performed, with the floating mark fused on point a, the space rods have
the same orientation that the incoming light rays from terrain point A had at the time of
photography, and the intersection of the space rods locates that model point. Each additional
model point is located in this same manner.
By geometric comparison, the mechanical projection system illustrated in Fig. 14-22 is
exactly the same as direct optical projection. Diapositives are placed in the carriers with their
overlapping areas toward the outside. In scanning the diapositives, if the base carriage is
moved right, the viewing lenses move left, and vice versa. Also, if the base carriage is pushed
backward, the viewing lenses move forward, and vice versa. When the base carriage screw is
turned to raise the space rod intersection, the parallactic angle increases and the viewing
lenses move apart, a manifestation of the increased x parallax which exists for higher model
points.
The carriers of most mechanical projection stereoplotters are capable of rotations and
translations, either directly or indirectly. These carrier motions are used to perform relative
and absolute orientation. Many mechanical projection instruments are oriented using exactly
the same procedures that were described for direct optical projection stereoplotters. Others
which do not possess all three rotations and translations use slight variations from these basic
orientation procedures.
Interior orientation of the WILD A-8 Autograph consists of preparing the diapositives either
by contact printing or projection printing, centering the diapositives in the carriers by means
of collimation marks, setting off the proper principal distance, and if necessary, inserting
distortion correction plates.
Each projector of the A-8 is capable of the three rotations, but its only translation is an X
translation needed for scaling models. Relative orientation of this instrument is by the two-
projector method. Graduated dials which record omega, phi and kappa rotations facilitate
both relative orientation and absolute orientation.
Another stereoplotter utilizing mechanical projection is the Galileo-Santoni Stereosimplex
IIC.

Classification Based on Principle of Design


This classification is divided into three general categories:
1) analog stereoplotters
2) analytical stereoplotters
3) Softcopy stereoplotters.

Analytical Plotters
Analytical plotters compute a mathematical model based on the principles of analytical
photogrammetry.
By combining computerized control with precision optical and mechanical components,
analytical plotters enable exact mathematical calculations to define the nature of the
stereomodel.
Advantages of Analytical Plotters
a) They’re easily interfaced with computer-aided drafting (CAD) systems, which
facilitates map editing and updates.
b) With their digital output capability, they’re ideal for compiling data for use in
geographic information systems.
c) Because they have no optical or mechanical limitations in the formation of their
mathematical models, analytical plotters have great versatility.
d) They can handle any type of photography
e) They can accommodate photography from any focal-length camera
f) They can simultaneously use two photos of different focal lengths to form a model.
g) In comparison with analog plotters, analytical plotters can provide results of superior
accuracy
Capabilities of an analytical plotter
a) Can precisely measure x and y photo coordinates on both photos of a stereopair
b) Can accurately move to defined x and y photo locations

System Components and Method of Operation


An analytical plotter incorporates a pair of precision comparators which hold the left and
right photos of a stereopair.

The plate carrier, which holds the photograph, moves independently in the X and Y directions
as the servomotors turn the corresponding lead screws. Digital encoders sense the position of
a point on the photo relative to a fixed half mark.

FIGURE 12-8 Schematic diagram of components and operation of an analytical plotter.

Analytical Plotter Orientation
Interior Orientation
Interior orientation consists of
1) Preparation of diapositives
Diapositives of a stereopair are placed on the left and right plate carriers and held in place by
glass covers.
2) Compensation for image distortions
Individual two-dimensional coordinate transformations (see App. C) are then calculated for
the left and right photos. This compensates for film shrinkage or expansion.
3) Centering of diapositives
Centering of the diapositives is accomplished by measuring the X and Y plate coordinates of
the fiducials
4) Setting off the proper principal distance.
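The two-dimensional transformation in step 2 is commonly a six-parameter affine fitted by least squares to the fiducial coordinates; a sketch of that idea using NumPy, with made-up coordinates:

```python
import numpy as np

def affine_2d(measured, calibrated):
    """Fit X = a*x + b*y + c, Y = d*x + e*y + f by least squares from
    corresponding points (n >= 3) and return a mapping function."""
    m = np.asarray(measured, float)
    cal = np.asarray(calibrated, float).reshape(-1)   # interleaved X, Y
    n = len(m)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = m          # X-equation rows
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = m          # Y-equation rows
    A[1::2, 5] = 1.0
    a, b, c, d, e, f = np.linalg.lstsq(A, cal, rcond=None)[0]
    return lambda x, y: (a * x + b * y + c, d * x + e * y + f)

# Hypothetical fiducials: measured plate coordinates vs. calibrated values (mm)
measured = [(0.12, 0.08), (225.9, 0.11), (0.10, 226.0), (226.1, 226.2)]
calibrated = [(-113.0, -113.0), (113.0, -113.0), (-113.0, 113.0), (113.0, 113.0)]
to_fiducial = affine_2d(measured, calibrated)
```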

Relative Orientation
Relative orientation is achieved by numerical techniques (least squares). The operator
measures the left and right photo coordinates of at least six pass points to provide
observations for the least squares relative orientation solution.
(1) raw plate coordinates are observed for all pass points
(2) they are transformed to the calibrated fiducial system
(3) Lens distortion and atmospheric refraction corrections are applied, resulting in refined
coordinates.
(4) Numerical relative orientation is then calculated, resulting in values for the exterior
orientation parameters (ω, Ф, κ, XL, YL, ZL) for both photos.
Absolute Orientation
Absolute orientation is commonly achieved by a three-dimensional conformal coordinate
transformation.

Applications of Analytical Plotters


Planimetric and topographic mapping
The operator traces out features and contours by keeping the floating mark in contact with
each feature. The ground coordinates from the operator controls are directly transmitted to the
CAD computer for editing, plotting and storage
Profiling

X, Y, and Z coordinates are measured and input into the computer. The machine then
automatically drives the measuring mark to the corresponding coordinate locations.
Aerotriangulation
They can be used for independent model triangulation and simultaneous bundle adjustment.
Comparators
Analytical plotters can also be used simply as monocomparators or stereocomparators, where
image coordinates are measured and recorded for use in aerotriangulation.

Softcopy Plotters
The fundamental operation of a softcopy plotter is the same as that of an analytical plotter
except that instead of employing servomotors and encoders for point measurement, softcopy
systems rely on digital imagery.
Advantages
1) Softcopy plotters can perform all the operations of an analytical plotter and digital
image processing as well.
2) Less expensive than analytical plotters
3) More versatile than analytical plotters
4) Offer many automatic features not found on analytical plotters. An example is
automated routine point measurement
5) They offer vector superimposition, in which topographic map features (lines, points,
etc.) are superimposed on the digital photos as they are being digitized.

Components of softcopy plotters


• software
• hardware

System Hardware
Computer
The fundamental hardware requirement for a softcopy stereoplotter is a high-performance
computer workstation with advanced graphics capability.
Archival storage
Some form of archival storage such as optical disks is required for offline storage of digital
images.
Operator controls

The system also requires operator controls for X, Y, and Z position within the stereomodel.
The X and Y controls are usually implemented as some type of computer “mouse,” while
the Z control is typically a small wheel which can be rotated by the operator’s thumb.
Stereoviewing capability
Polarizing filters and alternating shutters can be used.
In the polarizing filter approach, the computer display system is configured so that light from
the left image and light from the right image are orthogonally polarized. Meanwhile, the
operator wears a simple pair of spectacles consisting of orthogonally polarized filters.

Image Measurements
Manual image measurements are accomplished through operator control of a floating mark.
On a softcopy stereoplotter, a floating mark consists of left and right half marks which are
superimposed on the left and right images, respectively. An individual half mark consists of
a single pixel, or small pattern of pixels.
Procedure
1) The pixel(s) of the half mark is (are) set to brightness values and/or colors which give
a high contrast with the background image. When the operator moves the X, Y, or Z
control, the positions of the individual half marks move with respect to the
background images.
2) Once the floating mark visually coincides with a feature of interest, the operator can
press a button to record the feature’s position.
Two approaches are used for half mark movement:
a) Fixed mark with a moving image
Half marks remain at the same physical position on the screen while the images are panned
across the screen
b) Fixed image with a moving mark.
Images remain at a fixed position on the display while the individual half marks move in
response to the operator’s input.

Orientation Procedures
The key difference between orientations of softcopy and analytical plotters is that softcopy
systems allow for greater automation in the process.
Interior orientation
Systems that use pattern matching attempt to find the positions of the fiducial marks by
matching a standard image of the fiducial, sometimes called a template, with a corresponding
subarray from the image. Once all fiducials have been located, a two-dimensional

transformation can be computed to relate image coordinates (row and column) to the fiducial
axis system.
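The template-matching step described above can be sketched as a brute-force normalized cross-correlation search. This is a minimal illustration only: `locate_fiducial` is a hypothetical helper, and production softcopy systems would add pyramid search and subpixel refinement.

```python
import numpy as np

def locate_fiducial(image, template):
    """Find the (row, col) where `template` best matches `image`,
    scored by normalized cross-correlation (hypothetical helper)."""
    tr, tc = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - tr + 1):
        for c in range(image.shape[1] - tc + 1):
            w = image[r:r + tr, c:c + tc]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            if denom == 0:        # flat window, no correlation defined
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Synthetic check: embed a cross-shaped "fiducial" at a known offset.
img = np.zeros((20, 20))
tmpl = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], float)
img[7:10, 12:15] = tmpl
print(locate_fiducial(img, tmpl))  # → (7, 12)
```

Once each fiducial's row and column are recovered this way, the two-dimensional transformation to the fiducial axis system can be computed as the text describes.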
Relative orientation
Small subarrays from the left image in the standard pass point locations are matched with
corresponding subarrays from the right image. Once a sufficient number of pass points have
been matched a relative orientation can be computed.
Absolute orientation
Absolute orientation can be automated when a block aerotriangulation has previously been
performed on the digital images. As a result of the aerotriangulation, exterior orientation
parameters will have been determined for each photo.

Analog Stereoplotters

Figure 14-1
In the design of a complete plotting instrument, the following elements can be distinguished:
1. Projection device
2. Viewing System
3. Measuring system
Projection System
In figure 14-1, points O′ and O" are two exposure centers formed by the nodal points on the
emulsion side of the camera lens. A1 and A2 are points in the terrain, of which a1', a2' and a1",
a2" are the images in the first (left) and second (right) exposure. If we could bring two exactly
similar exposure cameras back in to the air with the photographs in the same positions as they

had during the exposure, lines through the points a1' and O', a1" and O" would intersect at A1.
The same applies to all corresponding points, on the two photographs.
The locus of all intersection points forms a perfect geometric model of the terrain, at the same scale and in the same position.
This model will have an orientation and a scale which are determined by the position and
length of the base b and of the lateral inclination Ω of the model perpendicular to the vertical
plane through the air base.
Most analog stereoplotters possess 8 degrees of freedom (7 for orientation and 1 during
scanning). Absolute and relative orientations are dependent upon the floating mark and
scanning movements.
Interior Orientation
The photograph must be centered in the projector and placed at the correct principal distance
before interior orientation is performed.
Relative Orientation
Relative orientation movements consist of 3 rotations of the projectors, 3 translations of baseline endpoints, and a rotation/shift of the map (i.e., 7 degrees of freedom).

The rotations are around 3 axes as shown in Fig. 14-3.


Absolute Orientation
To obtain correct absolute orientation, some instruments allow additional inclinations in the x
direction (Φ) and in y direction (Ω).
Measuring and Plotting System
Analog stereoplotters employ a floating mark for measuring coordinates and plotting.
Scanning with the floating mark is carried out by handwheels in the x and y directions. A 3rd

movement is necessary for the floating mark to achieve the right height above a point on the
model.
Viewing System
There are 3 methods of observation:
1. Direct observations of the projections – It’s only possible with instruments applying a
direct optical projection on the screen. Observations may be done using the anaglyph
system with spectacles.
2. Telescopic observations through the camera lens – This was necessitated due to poor
technologies which resulted in poorly illuminated diapositives. Use of telescopes with
inbuilt measuring marks enabled perception of sharp images.

3. Direct observation through a binocular microscope

Systems in Photogrammetric Plotters


Projection Systems
Diapositives of a stereopair are placed in projectors and illuminated from above. Light rays
are projected through the projector objective lenses and intercepted below on the reflecting
surface of the platen.
The projection systems of this type of stereoplotter require that the instruments be operated in
a dark room.
The lens formula must be satisfied in order to obtain a sharply focused stereomodel:

1/p + 1/h = 1/f′    (12-1)
In Eq. (12-1), p is the principal distance of the projectors (distance from diapositive image
plane to upper nodal point of the projector lens), h is the projection distance (distance from

lower nodal point of the objective lens to the plane of optimum focus), and f′ is the focal
length of the projector objective lens.
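As a numeric check of Eq. (12-1), 1/p + 1/h = 1/f′, the projection distance of sharp focus can be solved for given p and f′. The values below are illustrative only, not taken from any particular instrument.

```python
# Lens formula (Eq. 12-1): 1/p + 1/h = 1/f'.
# Given principal distance p and projector objective focal length f',
# solve for the projection distance h of optimum focus.
def projection_distance(p_mm, f_mm):
    return 1.0 / (1.0 / f_mm - 1.0 / p_mm)

# Illustrative values: 152-mm principal distance, 100-mm objective.
h = projection_distance(152.0, 100.0)
print(round(h, 1))  # → 292.3
```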

Viewing Systems
The function of the viewing system of a stereoplotter is to enable the operator to view a 3D
stereomodel. The different stereoviewing systems commonly used in direct optical
projection plotters are
(i). The anaglyphic system
Uses filters of complementary colors to separate the left and right projections. Assume that a
cyan filter is placed over the light source of the left projector while a red filter is placed over
the right. Then, if the operator views the projected images while wearing a pair of spectacles
having cyan glass over the left eye and red glass over the right eye, the stereomodel can be
seen in three dimensions.
(ii). The stereo-image alternator (SIA)
The SIA system uses synchronized shutters to achieve stereoviewing.
A shutter is placed in front of each projector lens. Also, a pair of eyepiece shutters is situated
in front of the platen. The shutters are synchronized so that the left projector and left eyepiece
shutters are open simultaneously while the right projector and right eyepiece shutters are
closed, and vice versa. An operator therefore sees only left projector images with the left eye
and right projector images with the right eye.
(iii). The polarized-platen viewing (PPV) system.
The PPV system operates similarly to the anaglyphic system except that polarizing filters are
used instead of colored filters. Filters of orthogonal polarity are placed in front of the left and
right projectors, and the operator wears a pair of spectacles with corresponding filters on the
left and right.

Measuring Systems
Measurements may be recorded as direct tracings of planimetric features and contours of
elevation, or as X, Y, and Z model coordinates.
The platen contains a reference mark in its center. The reference mark appears to float above
the stereomodel if the platen is above the terrain; hence it is called the floating mark. Vertical
movement of the platen is geared to a dial, and by varying gear combinations the dial can be
made to display elevations directly.
A manuscript map is placed on top of the reference table. The tracing table rests on the
manuscript and is moved manually in the X and Y directions.
To plot the position of any point, the platen is adjusted in X, Y, and Z until the floating mark
appears to rest exactly on the desired point in the model.

A pencil point which is vertically beneath the floating mark is then lowered to record the
planimetric position of the point on the map.

Project Planning
Definitions
Flight Map
A map showing flight lines, usually marked on a medium scale topographic map, showing the
starting and ending points of each line.

Crab
Crab is a disparity in the orientation of the camera with respect to the aircraft’s actual travel
direction.

Drift
Drift is the pilot’s failure to fly along planned flight lines.

Neat Model
The neat model is the approximate mapping area of each stereopair.

Factors Influencing Planning


Map Specifications
Scale
Map Scale
Selection of optimum map scale depends upon the purpose of the map. Compilation at a
larger scale than necessary is uneconomical, and compilation at too small a scale reduces the
usefulness of the map.
The horizontal accuracy (accuracy to which planimetric positions of points can be measured
from a map) depends directly upon the map’s scale.
Contour Interval
The recommended contour interval depends on not only the intended use of the map but also
the type of terrain.
Contour interval and map scale must be selected so that they are compatible. As map scale
decreases, contour interval must increase.
Specification of vertical accuracy in topographic mapping is commonly given in terms of contour interval. Quantifying vertical accuracy based on contour interval employs a term called the C factor. The C factor is the ratio of the flying height above ground of the photography (H′) to the contour interval.

Nature of Terrain
The nature of the terrain dictates the contour interval. The contour interval is directly
proportional to the steepness of the terrain.
As vertical mapping accuracy requirements increase (required contour interval decreases),
flying height must decrease and hence photographic scale increases.

Photography Specifications
Photo Scale
For topographic mapping, photo scale is usually dictated by the map’s required scale and/or
horizontal and vertical accuracy.
In topographic mapping, the enlargement ratio from photo scale to the scale of the plotted
map must be considered.
Aerial photographic coverage for mosaic preparation or for photo interpretation must be
planned at a scale which enables the smallest objects of importance to be resolved on the
photos.
Example 18-4
A map must be compiled at a scale of 1:6000. If an enlargement ratio of 5 will be used in
producing this map, what is the required photo scale?
Solution: Photo scale is one-fifth as large as map scale. Therefore,

S_photo = (1/6000) × (1/5) = 1:30,000
Example 18-5
A topographic map having a scale of 200 ft/in with 5-ft contour interval is to be compiled
from contact-printed diapositives using a stereoplotter having a nominal 6-in (152-mm)
principal distance. Determine the required flying height for the photography if the maximum
values to be employed for the C factor and enlargement ratio are 1500 and 5, respectively.
Solution
1. Considering contour interval and C factor:
H′ = 1500(5 ft) = 7500 ft
2. Considering map scale and enlargement ratio:

Photo scale = (1 in/200 ft) × (1/5) = 1 in/1000 ft = f/H′

Thus, H′ = 6 in × 1000 ft/in = 6000 ft

Use H′ = 6000 ft, the smaller of the two values, since it satisfies both the map-scale limit and the 7500-ft C-factor limit.
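The two limits in Example 18-5 can be checked with a short sketch; the governing flying height is the smaller of the two.

```python
# Limit 1: C factor times contour interval caps the flying height.
c_factor, contour_interval_ft = 1500, 5
h_cfactor = c_factor * contour_interval_ft            # 7500 ft

# Limit 2: map scale and enlargement ratio fix photo scale, hence H'.
map_scale_ft_per_in, enlargement, f_in = 200, 5, 6
# photo scale = 1 in / (200 ft * 5) = 1 in / 1000 ft = f / H'
h_mapscale = f_in * map_scale_ft_per_in * enlargement  # 6000 ft

h_flight = min(h_cfactor, h_mapscale)
print(h_flight)  # → 6000
```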

Weather Conditions
An ideal day for aerial photography is one that is free from clouds, although if the sky is less
than 10 percent cloud-covered, the day may be considered satisfactory.
Overcast weather can be favorable when large-scale photos are being taken for topographic
mapping over features which would cast troublesome shadows on clear, sunny days e.g. built-
up areas, forests, steep canyons etc.
Atmospheric haze can be effectively eliminated from the photographs by using a yellow filter
in front of the camera lens.
Windy, turbulent days can create excessive image motion and cause difficulties in keeping
the camera oriented for vertical photography and maintaining planned flight lines and flying
heights.
Season of the Year
If photography is being taken for topographic mapping, the photos should be taken when the
deciduous trees are bare, so that the ground is not obscured by leaves.
Sometimes aerial photography is taken for special forestry interpretation purposes, in which
case it may be desirable for the trees to be in full leaf.
Snow: Aerial photography is not taken when the ground is snow-covered. Heavy snow not
only obscures the ground but also causes difficulties in interpretation and in stereoviewing.
Sun’s altitude: Low sun angles produce long shadows, which obscure detail. Generally a sun
angle of about 30° is the minimum acceptable for aerial photography.
Type of Aircraft
Fewer photos are required to cover a given area when flying at high altitude. Most aerial photography is taken using single- or twin-engine aircraft.
Higher altitudes require expensive planes such as turbocharged or jet aircraft.
Type of Camera
For topographic mapping, photography is preferably taken with a wide or super-wide angle
(short focal length) camera so that a large base-height (B/H') ratio is obtained. The B/H' ratio
is the ratio of the air base of a pair of overlapping photographs to average flying height above
ground. The larger the B/H' ratio, the greater the intersection angles or parallactic angles
between intersecting light rays to common points.
Metric qualities
Photos having good metric qualities are needed for topographic mapping or other purposes
where precise quantitative photogrammetric measurements are required.
Photographs of good metric quality are obtained by using cameras and films having fine-
grained, high-resolution emulsions or digital sensors with a high pixel count.
Pictorial qualities

High pictorial qualities are required for qualitative analysis, such as for photographic
interpretation or for constructing orthophotos, photomaps, and aerial mosaics.
Photography of high pictorial quality requires cameras with good-quality lenses.
Type of Plotter
Important plotter considerations are:
i). Principal distance of the projectors - Some plotters are capable of accommodating a
wide range of principal distances; others are limited either optically or mechanically
to a very narrow range.
ii). Optimum enlargement ratio from photo scale or diapositive scale to map compilation scale - If the enlargement ratio of the plotter is known, photo scale can be fixed, and this in turn fixes flying height.
iii). C factor of the plotter - The C factor is the ratio of the flying height above ground of
the photography to the contour interval. The more precise the plotter, the greater its C
factor rating.
iv). Vertical operating range of the plotter - Projection distance is comparable to flying
height above ground and therefore flying height should be at least five times greater
than the maximum terrain variation of any model.

Costs
Material
Material costs are directly related to the quantity of each photogrammetric product to be
prepared.
Labor costs
The most realistic approach to estimating labor is to rely on past experiences with projects of
a similar nature.
Schedules for aerial photography and ground control surveys must account for possible
delays due to inclement weather.
Equipment
Very-high-altitude coverage is more expensive to obtain than low-altitude photography
because of the special equipment that it requires. Some of the problems encountered at high
flying heights are decreasing available oxygen, decreasing pressure, and extreme cold.
When flying heights exceed about 10,000 feet, an oxygen supply system is necessary for the
flight crew. At altitudes above 30,000 feet, pure oxygen under pressure is required. Also, the
cabin must be pressurized and heaters are required to protect the crew against the cold.

Flight Plan Data


Number of photographs
The number of photographs per flight line depends upon the total length of flight line and is
given by the length of the line divided by airbase.

The total number of photographs per flight line is determined by:

Number of photographs per flight line = Length of flight strip / [(1 − Pl)sl] + 1

The total number of required photographs is determined by:

Number of photographs = Total area to be photographed / Net ground coverage per photo = A/a

Net size of each photo (at photo scale) = (1 − Pl)l × (1 − Pw)w

Net ground coverage per photo: a = s(1 − Pl)l × s(1 − Pw)w

where s = photo scale factor (the scale denominator H/f)
w = width of photograph
l = length of photograph
Pl = end lap (as a fraction)
Pw = side lap (as a fraction)
Generally, two additional photographs are taken at each end of a flight line for safety.
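A minimal sketch of the photos-per-line formula above, with the safety exposures added at each end of the line; parameter names are illustrative.

```python
import math

def photos_per_line(line_length, ground_coverage, end_lap, extra_per_end=2):
    """Count photos on one flight line: line length divided by the
    air base, plus one, plus safety exposures at each end."""
    base = (1 - end_lap) * ground_coverage   # air base B
    return math.ceil(line_length / base) + 1 + 2 * extra_per_end

# 9000-ft ground coverage per photo, 60% end lap, 10-mi (52,800-ft) line:
print(photos_per_line(52800.0, 9000.0, 0.60))  # → 20
```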

Flight line spacing


It is defined as the distance between two adjacent flight lines or strips, at the photographic
scale or ground scale.
The total number of flight lines can be calculated by the following equation:

Total number of flight lines = Width of area (W) / [(1 − Pw)sw] + 1

Exposure interval
The time interval, a small fraction of a second, during which the diaphragm is open is called the exposure time. Too small a value of exposure time results in poor illumination and a darker image, but too high a value is also problematic, since a moving image records as a streak instead of a point.
Time lag between exposures = 3600L/S sec

Where L = ground distance advanced per photo, in km = (1 − Pl) × ground length covered per photo
S = speed in km/h
The movement allowed on the photograph, when converted to ground scale and divided by the aircraft speed, gives the exposure time, normally expressed as 1/t second.
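The exposure-interval formula can be sketched as follows; the ground coverage and aircraft speed values are illustrative only.

```python
# Time lag between exposures: t = 3600 L / S seconds, where L is the
# ground advance per photo in km and S is the aircraft speed in km/h.
def time_lag_s(ground_length_km, end_lap, speed_kmh):
    advance = (1 - end_lap) * ground_length_km   # L, km advanced per photo
    return 3600.0 * advance / speed_kmh

# Illustrative: 2.76-km ground coverage, 60% end lap, 200 km/h.
print(round(time_lag_s(2.76, 0.60, 200.0), 1))  # → 19.9
```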

Photographic End Lap and Side Lap
End lap is the overlapping of successive photos along a flight strip while side lap is the
overlap of adjacent flight strips.

FIGURE 18-1 End lap, the overlapping of successive photos along a flight strip.
In Fig. 18-1, G represents the dimension of the square of ground covered by a single vertical
photograph and B is the air base or distance between exposure stations
The amount of end lap of a stereopair is commonly given in percent:

PE = [(G − B)/G] × 100
If stereoscopic coverage of an area is required, the absolute minimum end lap is 50 percent.
To prevent gaps from occurring in the stereoscopic coverage due to crab, tilt, flying height
variations, and terrain variations, end laps greater than 50 percent are used.
Aerial photography for mapping purposes is normally taken with about 60 percent end lap.

FIGURE 18-2 Side lap, the overlapping of adjacent flight strips.


Side lap is required in aerial photography to prevent gaps from occurring between flight strips
as a result of drift, crab, tilt, flying height variation, and terrain variations. Drift (failure of the
pilot to fly along planned flight lines) is the most common cause for gaps in photo coverage

In Fig. 18-2, G represents the dimension of the square of ground coverage of a single vertical photograph, and W is the spacing between adjacent flight lines. An expression for PS (percent side lap) in terms of G and W is

PS = [(G − W)/G] × 100
Mapping photography is normally taken with a side lap of about 30 percent.


Photography for orthophoto or mosaic work is sometimes taken with greater than 30 percent
side lap to lessen distortions of images due to tilt and relief.
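The percent end lap and side lap expressions can be sketched directly, here with the nominal 60/30 figures.

```python
# Percent end lap and side lap from ground coverage G, air base B,
# and flight line spacing W (all in the same units).
def percent_end_lap(G, B):
    return (G - B) / G * 100.0

def percent_side_lap(G, W):
    return (G - W) / G * 100.0

print(round(percent_end_lap(9000.0, 3600.0), 1))   # → 60.0
print(round(percent_side_lap(9000.0, 6300.0), 1))  # → 30.0
```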
Ground Coverage
If end lap and side lap are known, the ground area covered by the stereoscopic neat model
can be determined. The neat model is the approximate mapping area of each stereopair.
If orthophotos are the intended final product, it is beneficial to plan flights with equal end lap
and side lap. The advantage of increasing the side lap is that more ground is covered near the
center of photos. This reduces the amount of relief displacement.

Example 18-7
Aerial photography is to be taken from a flying height of 6000 ft above average ground with
a camera having a 6-in (152.4-mm) focal length and a 9-in (23-cm) format. End lap will be
60 percent, and side lap will be 30 percent. What is the ground area covered by a single
photograph and by the stereoscopic neat model?
Solution
1. S = f/H′ = 6 in/6000 ft = 1 in/1000 ft, or 1:12,000
2. The dimension G of the square ground area covered by a single photo is
G = 1000 ft/in x 9 in = 9000 ft
3. The area in acres covered on the ground by a single photo is

A = (9000 ft)²/(43,560 ft²/acre) ≈ 1860 acres

4. At 60 percent end lap, B is 0.4G; and at 30 percent side lap, W is 0.7G. Therefore the dimensions of the rectangular stereoscopic neat model are

B = 0.4(9000 ft) = 3600 ft and W = 0.7(9000 ft) = 6300 ft

And the area of the neat model is

(3600 ft × 6300 ft)/(43,560 ft²/acre) ≈ 520 acres
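Example 18-7 can be recomputed in a few lines (areas in acres; the text's figures are rounded).

```python
ACRE_FT2 = 43560.0
H_ft, f_in, fmt_in = 6000.0, 6.0, 9.0
ground_ft_per_in = H_ft / f_in          # photo scale 1 in : 1000 ft
G = fmt_in * ground_ft_per_in           # 9000-ft ground side per photo
single_acres = G * G / ACRE_FT2         # whole-photo coverage
B, W = 0.4 * G, 0.7 * G                 # 60% end lap, 30% side lap
neat_acres = B * W / ACRE_FT2           # neat model coverage
print(round(single_acres), round(neat_acres))  # → 1860 521
```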

Flying Height
Once photo scale has been selected, flying height above average ground is automatically fixed.
Errors in computed positions and elevations of points in a stereopair increase with increasing
flying heights and decrease with increasing x parallax. Large B/H' ratios denote low flying
heights and large x parallaxes, conditions favorable to higher accuracy.
Flying heights used in taking photos for topographic mapping normally vary between about
500 and 10,000 m. If one portion of the project area lies at a substantially higher or lower
elevation than another part, two different flying heights may be necessary to maintain
uniform photo scale.
Example 18-6

Aerial photography having an average scale of 1:6000 is required to be taken with a 152.4-mm-focal-length camera over terrain whose average elevation is 425 m above mean sea level.
What is required flying height above mean sea level?
Solution

S = f/(H − h_avg)

Thus 1/6000 = [152.4 mm/(H − 425 m)](1 m/1000 mm)

H = 6000(0.1524 m) + 425 m = 1339 m
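A quick check of Example 18-6, solving S = f/(H − h_avg) for the flying height H above mean sea level.

```python
# S = f / (H - h_avg), so H = (scale denominator) * f + h_avg.
scale_denom, f_m, h_avg_m = 6000, 0.1524, 425.0
H = scale_denom * f_m + h_avg_m
print(round(H, 1))  # → 1339.4
```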

Flight Map
A flight map gives the project boundaries and flight lines the pilot must fly to obtain the
desired coverage. The flight map is prepared on some existing map which shows the project
area.
The flight map may also be prepared on small-scale photographs of the area, if they are
available. In executing the planned photographic mission, the pilot finds two or more features
on each flight line which can be identified both on the flight map and on the ground. The
aircraft is flown so that lines of flight pass over the ground points.
Alternatively, an airborne GPS receiver can be employed to guide the aircraft along
predefined flight lines.
Flight planning templates are useful for determining the best and most economical
photographic coverage for mapping, especially for small areas. These templates, which show
blocks of neat models, are prepared on transparent plastic sheets at scales that correspond to
the scales of the base maps upon which the flight plan is prepared.
The templates are then simply superimposed on the map over the project area and oriented in
the position which yields best coverage with the fewest neat models.

Such a template is shown in Fig. 18-10. The crosses represent exposure stations, and these
may be individually marked on the flight map.

Procedure for Photogrammetric Projects


Project planning is comprised of the following steps:
1. Convert project requirements to specifications in terms of area to be mapped, desired map
scale and contour interval. The determination of these specifications depends on the
required accuracy of the final map and on cost constraints. More accurate maps are more
costly and take longer to compile.
2. Create a flight map which shows where the photos are to be taken. A flight map consists
of flight lines, usually marked on a medium scale topographic map, showing the starting
and ending points of each line. It is used by the pilot for navigation and by the
photographer for taking the pictures. Usually, there are enough topographical features in
the flight area to assist the pilot in flying the designated flight lines. Otherwise, a large
arrow on the ground at the beginning and end of each flight strip is necessary to aid the
pilot and photographer. The number of flight lines, their location, the spacing between
them, and their orientation depends on the characteristics of the project to be mapped and
on the specifications of the flight mission.
3. Determine photogrammetric specifications in terms of flight height, the number of
photographs needed, the number of strips needed, flight lines, approximate location for
exposure stations, and equipment to be used. Specifications should also be developed for
ground control, aerial triangulation, and compilation methodology, as well as specifications which outline how to take the photos, including camera and film requirements, scale, flying heights, end lap, side lap, tilt, and crab.

Example 1
A project area is 10 mi (16 km) long in the east-west direction and 6.5 mi (10.5 km) wide in
the north-south direction (see Fig. 18-11). It is to be covered with vertical aerial photography
having a scale of 1:12,000. End lap and side lap are to be 60 and 30 percent, respectively. A
6-in- (152.4-mm-) focal length camera with a 9-in- (23-cm-) square format is to be used.
Prepare the flight map on a base map whose scale is 1:24,000, and compute the total number
of photographs necessary for the project.

FIGURE 18-11 Project area for Example 1.
Solution
1. Fly east-west to reduce the number of flight lines.
2. Dimension of square ground coverage per photograph [photo scale = 1:12,000 (1 in/1000 ft)] is

G = 1000 ft/in × 9 in = 9000 ft

3. Lateral advance per strip (at 30 percent side lap) is

W = 0.7G = 0.7(9000 ft) = 6300 ft

4. Number of flight lines. (Align the first and last lines with 0.3G (side-lap dimension) coverage outside the north and south project boundary lines, as shown in Fig. 18-11. This ensures lateral coverage outside of the project area.)
Distance of first and last flight lines inside their respective north and south project boundaries (see Fig. 18-11) is

0.5G − 0.3G = 0.2G = 1800 ft

Number of spaces between flight lines:

[6.5 mi × 5280 ft/mi − 2(1800 ft)]/6300 ft = 4.9 (round up to 5)

Number of flight lines = number of spaces + 1 = 6
5. Adjust the percent side lap and flight line spacing.
Adjusted spacing Wa between flight lines for integral number of flight lines:

Wa = 30,720 ft/5 spaces = 6144 ft

Adjusted percent side lap for integral number of flight lines (include portion extended outside north and south boundaries):

PSa = [(9000 ft − 6144 ft)/9000 ft] × 100 = 31.7 percent
6. Linear advance per photo (air base at 60 percent end lap):

B = 0.4G = 0.4(9000 ft) = 3600 ft

7. Number of photos per strip (take two extra photos beyond the project boundary at both ends of each strip to ensure complete stereoscopic coverage):

10 mi × 5280 ft/mi / 3600 ft + 1 + 2 + 2 = 19.7 (round up to 20 photos)

8. Total number of photos:

6 flight lines × 20 photos per line = 120 photos

9. Spacing of flight lines on the map (map scale 1:24,000, or 1 in/2000 ft):

6144 ft ÷ 2000 ft/in = 3.07 in
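The whole of Example 1 can be reproduced as an end-to-end sketch. The constants follow the stated specifications; the variable names are illustrative, and rounding conventions (round spaces and photo counts up) follow the worked steps.

```python
import math

scale, fmt_in = 12000, 9.0
G = fmt_in / 12.0 * scale                  # ground coverage side: 9000 ft
W = 0.7 * G                                # nominal line spacing: 6300 ft
width_ft = 6.5 * 5280                      # project width: 34,320 ft
inner = width_ft - 2 * 0.2 * G             # first/last lines 0.2G inside
spaces = math.ceil(inner / W)              # 5 spaces between lines
lines = spaces + 1                         # 6 flight lines
Wa = inner / spaces                        # adjusted spacing: 6144 ft
side_lap = (G - Wa) / G                    # adjusted side lap ≈ 31.7%
B = 0.4 * G                                # air base: 3600 ft
length_ft = 10 * 5280                      # strip length: 52,800 ft
photos = math.ceil(length_ft / B) + 1 + 4  # 20 per strip (2 extra per end)
total = lines * photos                     # 120 photos
map_in = Wa / 2000.0                       # spacing on the 1:24,000 map
print(lines, photos, total, round(map_in, 2))  # → 6 20 120 3.07
```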
Ground Control Requirements

Photointerpretation
Photo Interpretation is the act of examining aerial photographs for the purpose of identifying
objects and judging their significance.

Elements of Image Interpretation

Size
Size of objects in an image is a function of scale. It is important to assess the size of a target
relative to other objects in a scene to aid in the interpretation of that target.
Tone
Refers to the relative brightness or colour of objects in an image. Generally, tone is the
fundamental element for distinguishing between different targets or features. Variations in
tone also allows the elements of shape, texture, and pattern of objects to be distinguished.
Shape
Refers to the general form, structure, or outline of individual objects. Shape can be a very
distinctive clue for interpretation. Straight edge shapes typically represent urban or
agricultural (field) targets, while natural features, such as forest edges, are generally more
irregular in shape
Pattern
Refers to the spatial arrangement of visibly discernible objects. Typically an orderly
repetition of similar tones and textures will produce a distinctive and ultimately recognizable
pattern. Orchards with evenly spaced trees, and urban streets with regularly spaced houses are
good examples of pattern.
Texture
Refers to the arrangement and frequency of tonal variation in particular areas of an image.
Rough textures would consist of a mottled tone where the grey levels change abruptly in a
small area, whereas smooth textures would have very little tonal variation
Both texture and pattern are scale independent.
Shadow

May provide an idea of the profile and relative height of a target or targets which may make
identification easier.
Association
Takes into account the relationship between other recognizable objects or features in
proximity to the target of interest. The identification of features that one would expect to
associate with other features may provide information to facilitate identification.

Image Analysis
Monocular
Stereoscopic

Radial Triangulation
Definitions
Radial Triangulation
Radial-line triangulation is a method for extending or supplementing horizontal ground control.
A minimum of two horizontal control points are required. Radial-line triangulation is not
used extensively today.

Pass Points
They’re points of extended horizontal control used to facilitate progression of triangulation
across a series of photos and for controlling subsequent photogrammetric procedures such as
mosaicking or planimetric mapping.

Conjugate Principal Point


It is the position of the principal point of one aerial photo as represented on an adjacent, overlapping photo.

Purpose of Radial Triangulation


The procedure is used for:
• providing supplemental control for small-scale mapping
• controlling mosaic construction
• limited planimetric map revision

Principles of Radial Triangulation
Radial-line Assumption
The fundamental principle upon which radial-line triangulation is based is that angles with
vertexes at the principal point of a vertical photograph are true horizontal angles.
Both Arundel and Radial-line principles assume truly vertical photos and that distortions are
radial from the principal point.

In Fig. 9-1 the plane of the vertical photo is horizontal and parallel to the datum plane. Points
A' and B' in the datum plane are vertically beneath object points A and B. Planes LAA'P' and
LBB'P' are vertical, and therefore angle aob on the photo is equal to the true horizontal angle
A'P'B'.
On a vertical photograph, relief displacement, radial-lens distortion, and atmospheric
refraction all displace images along radial lines from the principal point and therefore do not
affect sizes of photographic angles with vertexes at the principal point.
Also, variations in flying heights of vertical photos affect photo scale but not angle sizes.
Radial-line triangulation consists basically of two distinct operations:
1. resection to determine the planimetric positions of photo exposure stations
2. intersection from two or more photos whose exposure stations have been established
to determine the positions of new points
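The intersection step can be sketched as solving for the crossing of two map-plane rays from the resected exposure stations. The station positions and ray directions below are purely illustrative.

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    """Intersect map-plane rays p1 + t*d1 and p2 + u*d2 by solving
    the 2x2 linear system t*d1 - u*d2 = p2 - p1."""
    A = np.array([[d1[0], -d2[0]],
                  [d1[1], -d2[1]]])
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]])
    t, u = np.linalg.solve(A, b)
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two exposure stations fixed by resection (illustrative coordinates):
P1, P2 = (0.0, 0.0), (100.0, 0.0)
# Radial ray directions toward a point that lies at (50, 80):
d1 = (50.0, 80.0)
d2 = (-50.0, 80.0)
x, y = intersect(P1, d1, P2, d2)
print(round(x, 6), round(y, 6))  # → 50.0 80.0
```

Graphically this is done by overlaying templates; numerically it is the same two-ray intersection.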

Graphical Methods of Radial Triangulation
Two-Point Resection
Preparation
1. In Fig. 9-2, five photos of a flight strip are laid out in their overlapping positions.

2. Principal points and conjugate principal points are marked on the photos. Points a and
b are images of two horizontal ground control points A and B.

3. A transparent overlay is placed over photo no. 1 and a template is prepared as shown
in Fig. 9-3a by ruling lines on the overlay from the principal point o1 through points a,
b, c, d, and conjugate principal point o2. Angles with vertexes at o1 are true horizontal
angles on the template.
4. A second, similar template is prepared for photo no 2, as shown in Fig. 9-3b. In
drawing the rays it is necessary to hold the overlays firmly and use a sharp, hard
drawing pencil so that true angles are obtained between rays.
5. A base map upon which the radial-line triangulation will be performed is prepared
next and ground control points A and B are plotted as shown on Fig. 9-4.

6. The scale of the base map is chosen arbitrarily, but it should not differ greatly from
photo scale.
7. Template no. 1 is oriented on the base map so that rays o1a and o1b simultaneously
pass through their respective plotted control points A and B. At the same time
template no. 2 is oriented on the map to make rays o2a and o2b pass through their
respective plotted control points and, in addition, rays o1o2 on template no. 1 and o2o1 on template no. 2 are made to coincide.
8. With these conditions established, the locations of o1 and o2 define the true
planimetric map positions of ground principal points (exposure stations) P1 and P2.
Their positions are marked on the map by pricking through the templates with a pin.
This procedure for locating exposure station positions is called two-point resection, since it
requires two control points.
Execution
With the exposure stations of a pair of overlapping photographs fixed on the map, any
number of other points whose images appear in the overlap area of the stereopair can be
established by intersection.
The intersection of rays o1c and o2c fixes the true planimetric position of point C. Likewise,
the intersection of rays o1d and o2d locates point D.
Map locations of any points in the overlap area can be established by this procedure.
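The intersection step described above can be sketched numerically: given the map positions of two exposure stations and the azimuth of the ray toward a common image point from each, the new point is fixed where the two rays meet. The station coordinates and azimuths in the example are illustrative values, not from Fig. 9-4.

```python
import math

def intersect(p1, az1, p2, az2):
    """Intersect two rays given start points (x, y) and azimuths in radians
    (measured clockwise from north, i.e., from the +y axis)."""
    d1 = (math.sin(az1), math.cos(az1))   # unit direction of ray from p1
    d2 = (math.sin(az2), math.cos(az2))   # unit direction of ray from p2
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using Cramer's rule on the
    # 2x2 system [d1, -d2] [t1, t2]^T = p2 - p1.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

For example, a ray from station (0, 0) at azimuth 45 degrees and a ray from station (100, 0) at azimuth -45 degrees intersect at the point (50, 50).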

Pass Points

Points C through J of the triangulated strip of Fig. 9-4 are points of extended horizontal
control. They may be used for controlling subsequent photogrammetric procedures such as
planimetric mapping.
• Images serving as pass points must be sharp and well defined on all photos in which
they appear.

• They must be located in desirable positions on three successive overlapping
photographs. The most ideal positions are opposite the principal points and conjugate
principal points as illustrated in Fig. 9-2. This placement creates the strongest
geometrical strength and yields highest accuracy.

Three-Point Resection
This method requires three or more horizontal control points on a vertical photo in order to
locate exposure stations.

In Fig. 9-5a, images a, b, and c of horizontal control points A, B, and C appear in vertical
photo no. 1.

1. A template is prepared for that photo by drawing rays from the principal point through
the three image points, as shown in Fig. 9-5b.

2. The template is placed on a base map upon which the three control points have been
plotted and it is oriented so that the three rays simultaneously pass through their
respective plotted control points, as shown in Fig. 9-5c. This locates exposure station
P1 which is marked on the map by pinpricking.
3. If images of three or more horizontal control points also appear somewhere on
adjacent photo no. 2, then its exposure station can also be located by three-point
resection.

Pass Points
New pass points can be located by intersection. If pass points are selected whose images
appear on photo no. 3 - for example, at positions e, f, and g of Fig. 9-5a - then, with their
positions known, they may be used in three-point resection to locate exposure station P3.
Advantages of 3-point Method
Conjugate (joined) principal points need not be used.
Disadvantages
Requires a greater amount of ground control
Suitability of 3-Point Resectioning
Useful in locating single photo exposure stations and in map revision

Mechanical Method of Radial Triangulation


Radial-line triangulation may also be performed using mechanical templates constructed from
kits called lazy daisies.
1. Each photo is placed individually on a softwood board and a headless pin is carefully
driven through the principal point firmly into the underlying wood using the hammer.
The shaft of the pin should be perpendicular to the photo surface.
2. A small stud is placed over the headless pin and a threaded bolt is placed over the
stud. The center bolt serves as the hub of rays of the template.
3. Headless pins are also carefully driven into the board through the two conjugate
principal points and all pass points. Studs are then placed over them.
4. A line connecting the pin at the principal point and the pin at any pass point or
conjugate principal point represents the ray through that point.
5. Slotted metal arms are selected for constructing rays. The round hole on one end of
the arm is placed over the central bolt and long slots within the arms are placed over
the studs at the pass points and conjugate principal points.
• The arm for each ray is chosen from a variety of lengths with two considerations
in mind:
• excessive length should be avoided, as this may cause difficulties in assembling
the network
• enough slot length must exist on either side of all studs to allow for relief
displacements and to allow for making a scale change from photo to map.

Sources of error in Radial Triangulation
1. Differential paper shrinkage
This causes small errors, but can be eliminated by using polyester-base materials for both
photos and base map.
2. Drawing Errors
They can be minimized by using a sharp, hard drawing pencil and a good straightedge.
3. Construction Errors
Errors in preparing slotted templates can be minimized by exercising caution in preparation.
4. Excessive Relief Variations
If the terrain is flat, errors caused by faulty location of principal points are insignificant. If the
terrain is rugged or rolling, principal points and conjugate principal points should be carefully
located.
5. Excessive Tilts
On a tilted photo, relief displacement is radial from the nadir point and tilt displacement is
radial from the isocenter. If a vertical photo is assumed when in fact the photo is tilted, angles
between rays will not be true horizontal angles.
These errors can be removed by rectification.
6. Quality of Ground Control
Accuracy in extending ground controls is dependent upon the density and distribution of
existing ground control.
The following equation can be used to evaluate the accuracy of radial triangulation:

e = k (t / c)^(1/2)     (9-1)

Where: e is the average error expected in pass point location, in mm
k is 0.16
t is the total number of photos in the assembly
c is the total number of ground control points

Eq. (9-1) may be rearranged as follows in order to calculate the number of well-distributed
ground control points:

c = t (k / e)^2     (9-2)
EXAMPLE 9-1

Suppose it is required that pass points be located within an average accuracy of 10 ft from a
block of 40 photos whose average scale is 500 ft per inch. How many well-distributed ground
control points must be established by field survey to achieve the desired accuracy?
SOLUTION (Assume map compilation is at photo scale.) At compilation scale, the
acceptable error e in millimeters is

e = 10 ft × (25.4 mm/in) / (500 ft/in) = 0.508 mm

By Eq. (9-2):

c = 40 (0.16 / 0.508)^2 ≈ 4 control points

Photogrammetric Products
Photomaps
Photomaps are aerial photos that are used directly as planimetric map substitutes. Photomaps
may be prepared from single aerial photos, or they may be made by piecing together two or
more individual overlapping photos
Title information, place names, and other data may be superimposed on the photos in the
same way that it is done on maps.
Preparation
• Georeferencing of digital imagery
• Plotting the photos on a large-format printer from a digital file
Advantages over maps
Photomaps show locations of a virtually unlimited number of objects, whereas features on
maps are limited by what was produced by the mapmaker.
Photomaps of large areas can be prepared in much less time and at considerably lower cost
than maps.
Photomaps are easily understood and interpreted by people without photogrammetry
backgrounds because objects are shown by their images.
Disadvantages
Photomaps are subject to image displacements and scale variations.

Differences with maps
No.  Photomaps                           Maps
1    Features shown by their images      Features depicted by symbols, points, and lines

Uses
• Used in land-use planning and in planning for engineering projects.
• They are used to study geologic features
• Used to inventory natural resources
• Recording the growth of cities and large institutions
• Monitoring progress of construction activities
• Recording property boundaries
• They are also used as planimetric map substitutes for many engineering projects

Mosaics
Mosaics are constructed from a block of overlapping photographs which are trimmed and
joined.
Aerial mosaics generally fall into three classes: controlled, semi-controlled, and uncontrolled.
Controlled Mosaics
A controlled mosaic is the most accurate of the three classes.
Preparation
(1) This type of mosaic is prepared from photographs that have been rectified and ratioed;
i.e., all prints are made into equivalent vertical photographs that have the same
nominal scale.
(2) In assembling the mosaic, image positions of common features on adjacent photos are
matched as closely as possible. To match a pair of photos, coordinates of the common
tie points are measured in both photos, and a two-dimensional coordinate
transformation is performed.
(3) To increase the overall accuracy of the assembly, a plot of control points is prepared
at the same scale as the ratioed photos. Then in piecing the photos together to form
the mosaic, the control point images are also matched to their corresponding plotted
control points to constrain the positions of the photos.
(4) Along the edges between adjacent photos, images of features are aligned to the extent
possible.
This process of matching image positions is illustrated in Fig. 9-5, which shows two adjacent
photographs in superposition. In this figure, the overlap area between photos 1 and 2 contains
four tie points, a, b, c, and d.
In the separated photographs shown at the bottom of the figure, each of these tie points has
row and column coordinates in the digital image coordinate system of each photo. For
instance, point a has one set of coordinates in photo 1 and another in photo 2. By selecting
the coordinates of the tie points from photo 2 as control, a two-dimensional coordinate
transformation can be computed that determines the parameters for transforming photo 1
coordinates to photo 2 coordinates.

FIGURE 9-5 Use of tie points to join adjacent digital photographs.
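The two-dimensional transformation between tie-point coordinates can be sketched as a least-squares conformal fit (four parameters: scale, rotation, and two translations), written compactly with complex numbers. The function names and the sample points used in testing are illustrative, not taken from the figure.

```python
def fit_conformal(src, dst):
    """Least-squares fit of w = alpha*z + beta, mapping src (x, y) points
    onto dst points, with each point treated as a complex number."""
    zs = [complex(x, y) for x, y in src]
    ws = [complex(x, y) for x, y in dst]
    n = len(zs)
    z_mean, w_mean = sum(zs) / n, sum(ws) / n
    # Centered normal-equation solution of the complex linear fit.
    num = sum((w - w_mean) * (z - z_mean).conjugate() for z, w in zip(zs, ws))
    den = sum(abs(z - z_mean) ** 2 for z in zs)
    alpha = num / den                 # encodes scale and rotation
    beta = w_mean - alpha * z_mean    # encodes the two translations
    return alpha, beta

def transform(pt, alpha, beta):
    """Apply the fitted transformation to a single (x, y) point."""
    w = alpha * complex(*pt) + beta
    return (w.real, w.imag)
```

With four tie points, as in the figure, the fit is overdetermined (four parameters, eight coordinate equations), so the least-squares solution also gives residuals that indicate how well the photos match.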

Semicontrolled mosaics
Semicontrolled mosaics are assembled by utilizing some combinations of the specifications
for controlled and uncontrolled mosaics.
A semicontrolled mosaic may be prepared, for example, by using ground control but
employing photos that have not been rectified or ratioed.
The other combination would be to use rectified and ratioed photos but no ground control.
Uncontrolled Digital Mosaics
No ground control is used, and the aerial photographs have not been rectified or ratioed.
In the manual process using hard-copy photographs, matching is done through trial and error
by shifting and rotating the photos slightly until optimal alignment is achieved.
Uncontrolled mosaics are more easily and quickly prepared than controlled mosaics, but they
are not as accurate.

Digital Elevation Models


Digital representation of elevations in a region is commonly referred to as a digital elevation
model (DEM).
Definitions
Digital Terrain Model (DTM) – Digital representation of elevations of the earth’s terrain.
Digital Surface Model (DSM) - Digital representation of elevations at or above the terrain
(tree crowns, rooftops, etc.)
Triangulated Irregular Network (TIN) – A form of vector-based digital geographic data
constructed by triangulating a set of vertices (points).
Creating DEMs
a) Spot elevations are collected at critical points.
b) Lines are constructed between the points forming a system of triangles covering the
surface (TIN).
c) Elevations are then interpolated (nearest neighbor, IDW, Kriging etc.) between the
TIN vertices to create DEMs.
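As a sketch of the interpolation step, inverse-distance weighting (IDW, one of the options listed above) can be written in a few lines. The spot heights used in the example are illustrative values, not survey data.

```python
def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted elevation at (x, y) from a list of
    (x, y, z) spot elevations."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return pz                 # query coincides with a spot height
        w = d2 ** (-power / 2.0)      # weight = 1 / distance**power
        num += w * pz
        den += w
    return num / den
```

A query midway between two spot heights of 100 m and 200 m returns 150 m, since the two weights are equal; points closer to a spot height are pulled toward its elevation.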
Disadvantages of DEMs
DEMs consume more computer storage than TINs.

Orthophoto
An orthophoto is a photograph showing images of objects in their true orthographic positions.
The major difference between an orthophoto and a map is that an orthophoto is composed of
images of features, whereas maps utilize lines and symbols plotted to scale to depict features.
Orthophotos are produced from aerial photos through a process called differential
rectification which eliminates image displacements due to photographic tilt and terrain
relief.
Softcopy systems are particularly well-suited for differential rectification.
a) The essential inputs for the process of differential rectification are a DEM and a
digital aerial photo having known exterior orientation parameters (ω, ϕ,κ, XL, YL,
and ZL).
b) Photo coordinates are transformed to digital image coordinates.
c) Resampling is performed to populate orthophoto pixels with digital numbers
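A minimal sketch of steps (a) through (c), assuming a truly vertical photo (ω = ϕ = κ = 0) so the collinearity equations reduce to simple ratios, and a toy photo coordinate frame with its origin at a corner pixel. The data layout is hypothetical, not a production orthophoto pipeline.

```python
def ground_to_photo(X, Y, Z, f, XL, YL, ZL):
    """Collinearity equations for a vertical photo: ground (X, Y, Z) to
    photo coordinates (x, y), all lengths in consistent units."""
    x = f * (X - XL) / (ZL - Z)
    y = f * (Y - YL) / (ZL - Z)
    return x, y

def make_ortho(photo, pixel, dem, cell, f, XL, YL, ZL):
    """Populate each orthophoto cell by projecting its ground position
    (elevation taken from the DEM) into the photo and copying the nearest
    pixel's digital number (nearest-neighbour resampling)."""
    rows, cols = len(dem), len(dem[0])
    ortho = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            X, Y, Z = j * cell, i * cell, dem[i][j]
            x, y = ground_to_photo(X, Y, Z, f, XL, YL, ZL)
            c, r = int(round(x / pixel)), int(round(y / pixel))
            if 0 <= r < len(photo) and 0 <= c < len(photo[0]):
                ortho[i][j] = photo[r][c]
    return ortho
```

Note how the DEM enters the projection: for the same ground (X, Y), a higher elevation Z shrinks the denominator (ZL - Z) and enlarges x and y, which is exactly the relief displacement that differential rectification removes.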

Preparation of Orthophotos
Image Rectification
• In order to correct scale distortions resulting from the perspective projection of an
image, the elevation of each pixel must be known.
• The pixel elevation is interpolated from a Digital Elevation Model (DEM). There are
several sources for DEM data; they vary mainly by cost and accuracy of data compilation.
• The most accurate source for elevation is field surveys, but the cost of developing a
DEM (even for a small project) from field surveys is prohibitive to most users. A more
common source for a DEM is photogrammetry.
• Note that the same photographs used for the aerial triangulation can also be used for
developing the DEM.
• Other methods for deriving DEMs are kinematic GPS and digitizing contours from
topographic maps. DEM data for small-scale applications is also available from USGS.
• The decision on which DEM to use depends on the scale of the orthophoto. Small-scale
orthophotos can use a less accurate DEM (e.g., USGS data), while large-scale orthophotos
require a more accurate DEM (e.g., photogrammetry). Note that DEM data has a much
longer "shelf life" than planimetric data; thus, a good DEM can be reused for several
cycles of orthophoto production.
• When all the data (image, orientation, and DEM) are available, each digital photograph,
or part of it, is rectified individually using special software. A single rectified photograph
usually covers only a small portion of the entire orthophoto project, so a mosaicing
process becomes necessary. Mosaicing is the process of piecing together multiple image
patches into a seamless and continuous orthophoto. Some of the technical difficulties of
this matching process are:
Mosaicing and Image Enhancement
• Spatial continuity or edge matching – Features that appear on more than a single image
patch must be continuous. For example, a road must form a continuous line and show no
jumps at the original photo edges where the images are connected.
• Radiometric consistency – Different photographs may have different contrast and
brightness resulting from a lack of uniform conditions during the photographic processing
or image scanning, or from changes in illumination conditions. For example, a lake could
appear white in one image, because of the reflection of the sun, and black in another
image, where there is no reflection. This must be corrected during the mosaicing process.
Quality Control
• Quality control involves inspecting the orthophoto for incorrect rectification,
image-matching problems, and missing images due to hidden-ground problems.


Output Design and Cartographic Enhancement
• Output design and cartographic enhancement consists of formatting the image and
enhancing it by adding:
(i) line information that either appears fuzzy or does not exist on the image (for
example, parcel boundaries)
(ii) area (polygon) information (for example, shading a park area)
(iii) a contour layer to show hypsography (relief features)
(iv) coordinate graticules and a north arrow
(v) annotation (text and symbols)
(vi) legend, product information, etc.

Uses
• They can be incorporated with a regular grid DEM to produce three-dimensional
pictorial views.
• Can be used as maps
• Used as base-maps for plotting field observation

Summary of Photogrammetric Products
• Photomaps
• DEMs
• Sectional/longitudinal profiles
• Machine plots
• Mosaics

Factors Influencing Planning


Map specifications
- scale
- contour interval
- detail plotting
Terrain
Photography specifications
- photo scale
- aircraft
- camera
- plotter
- weather
Costs
- materials
- personnel
- equipment
- logistics
Flight plan data
- flying height
- number of photos
- exposure interval
- spacing of flight lines
Ground control requirements
- vertical
- horizontal
