ERDAS Field Guide
The information contained in this document is the exclusive property of Leica Geosystems Geospatial Imaging, LLC.
This work is protected under United States copyright law and other international copyright treaties and conventions.
No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, except as expressly
permitted in writing by Leica Geosystems Geospatial Imaging, LLC. All requests should be sent to: Manager of
Technical Documentation, Leica Geosystems Geospatial Imaging, LLC, 5051 Peachtree Corners Circle, Suite 100,
Norcross, GA, 30092, USA.
Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a
project at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the
University of California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under
license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S.
Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S.
Government has a non-exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced
throughout the world, for or on behalf of the United States, inventions covered by U.S. Patent No. 5,710,835 and has
other rights under 35 U.S.C. § 200-212 and applicable implementing regulations; (b) If LizardTech's rights in the
MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any provisions
of this license which could reasonably be deemed to do so would then protect the University and/or the U.S.
Government; and (c) The University has no obligation to furnish any know-how, technical assistance, or technical data
to users of MrSID software and makes no warranty or representation as to the validity of U.S. Patent 5,710,835 nor
that the MrSID Software will not infringe any patent or other proprietary right. For further information about these
provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA 98104.
ERDAS, ERDAS IMAGINE, IMAGINE OrthoBASE, Stereo Analyst and IMAGINE VirtualGIS are registered trademarks;
IMAGINE OrthoBASE Pro is a trademark of Leica Geosystems Geospatial Imaging, LLC.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.
Table of Contents
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Conventions Used in this Book . . . . . . . . . . . . . . xxv
Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Absorption / Reflection Spectra . . . . . . . . . . . . . . . . . . . 5
Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Spectral Resolution . . . . . . . . . . . . . . . . . . . . . . . . . 14
Spatial Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Radiometric Resolution . . . . . . . . . . . . . . . . . . . . . . . 16
Temporal Resolution . . . . . . . . . . . . . . . . . . . . . . . . . 17
Data Correction . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Line Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Data Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Storage Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Storage Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Calculating Disk Space . . . . . . . . . . . . . . . . . . . . . . . 23
ERDAS IMAGINE Format (.img) . . . . . . . . . . . . . . . . . 24
Image File Organization . . . . . . . . . . . . . . . . . . . . 27
Consistent Naming Convention . . . . . . . . . . . . . . . . . . 27
Keeping Track of Image Files . . . . . . . . . . . . . . . . . . . 28
Geocoded Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Using Image Data in GIS . . . . . . . . . . . . . . . . . . . 29
Subsetting and Mosaicking . . . . . . . . . . . . . . . . . . . . 29
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Multispectral Classification . . . . . . . . . . . . . . . . . . . . 30
Editing Raster Data . . . . . . . . . . . . . . . . . . . . . . . 31
Editing Continuous (Athematic) Data . . . . . . . . . . . . . . 31
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . 32
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .145
Input Image Mode . . . . . . . . . . . . . . . . . . . . . . .146
Exclude Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Image Dodging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Color Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . 148
Intersection Mode . . . . . . . . . . . . . . . . . . . . . . . .149
Set Overlap Function . . . . . . . . . . . . . . . . . . . . . . . . 150
Automatically Generate Cutlines For Intersection . . . . . 150
Geometry-based Cutline Generation . . . . . . . . . . . . . . 151
Output Image Mode . . . . . . . . . . . . . . . . . . . . . .152
Output Image Options . . . . . . . . . . . . . . . . . . . . . . . 152
Run Mosaic To Disc . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .155
Display vs. File Enhancement . . . . . . . . . . . . . . . . . . 156
Spatial Modeling Enhancements . . . . . . . . . . . . . . . . . 156
Correcting Data . . . . . . . . . . . . . . . . . . . . . . . . .159
Radiometric Correction: Visible/Infrared Imagery . . . . . 160
Atmospheric Effects . . . . . . . . . . . . . . . . . . . . . . . . . 161
Geometric Correction . . . . . . . . . . . . . . . . . . . . . . . . 162
Radiometric Enhancement . . . . . . . . . . . . . . . . . .162
Contrast Stretching . . . . . . . . . . . . . . . . . . . . . . . . . 163
Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . 168
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . 171
Brightness Inversion . . . . . . . . . . . . . . . . . . . . . . . . 172
Spatial Enhancement . . . . . . . . . . . . . . . . . . . . .172
Convolution Filtering . . . . . . . . . . . . . . . . . . . . . . . . . 173
Crisp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
The Classification Process . . . . . . . . . . . . . . . . . 243
Pattern Recognition . . . . . . . . . . . . . . . . . . . . . . . . . 243
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .375
Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Georeferencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Latitude/Longitude . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
When to Rectify . . . . . . . . . . . . . . . . . . . . . . . . .376
When to Georeference Only . . . . . . . . . . . . . . . . . . . 377
Disadvantages of Rectification . . . . . . . . . . . . . . . . . 378
Rectification Steps . . . . . . . . . . . . . . . . . . . . . . . . . 378
Ground Control Points . . . . . . . . . . . . . . . . . . . . .379
GCPs in ERDAS IMAGINE . . . . . . . . . . . . . . . . . . . . . 379
Entering GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
GCP Prediction and Matching . . . . . . . . . . . . . . . . . . 380
Polynomial Transformation . . . . . . . . . . . . . . . . .382
Linear Transformations . . . . . . . . . . . . . . . . . . . . . . 383
Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . 385
Effects of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Minimum Number of GCPs . . . . . . . . . . . . . . . . . . . . 391
Rubber Sheeting . . . . . . . . . . . . . . . . . . . . . . . . .392
Triangle-Based Finite Element Analysis . . . . . . . . . . . . 392
Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Triangle-based rectification . . . . . . . . . . . . . . . . . . . . 393
Linear transformation . . . . . . . . . . . . . . . . . . . . . . . . 393
Nonlinear transformation . . . . . . . . . . . . . . . . . . . . . 393
Check Point Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 394
RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .394
Residuals and RMS Error Per GCP . . . . . . . . . . . . . . . . 394
Total RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Error Contribution by Point . . . . . . . . . . . . . . . . . . . . 396
Tolerance of RMS Error . . . . . . . . . . . . . . . . . . . . . . . 396
Evaluating RMS Error . . . . . . . . . . . . . . . . . . . . . . . . 396
Resampling Methods . . . . . . . . . . . . . . . . . . . . . .397
Rectifying to Lat/Lon . . . . . . . . . . . . . . . . . . . . . . . . 399
Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . 400
Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Bicubic Spline Interpolation . . . . . . . . . . . . . . . . . . . 406
Map-to-Map Coordinate Conversions . . . . . . . . . .408
Conversion Process . . . . . . . . . . . . . . . . . . . . . . . . . 408
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Cartography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Types of Maps . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Thematic Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . .461
Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
Figure 50: Nonlinear Radiometric Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Figure 51: Piecewise Linear Contrast Stretch . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Figure 52: Contrast Stretch Using Lookup Tables, and Effect on Histogram . . . . . . . . 168
Figure 53: Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Figure 54: Histogram Equalization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Figure 55: Equalized Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Figure 56: Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Figure 57: Spatial Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Figure 58: Applying a Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Figure 59: Output Values for Convolution Kernel . . . . . . . . . . . . . . . . . . . . . . . . . 175
Figure 60: Local Luminance Intercept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Figure 61: Schematic Diagram of the Discrete Wavelet Transform - DWT . . . . . . . . . 184
Figure 62: Inverse Discrete Wavelet Transform - DWT-1 . . . . . . . . . . . . . . . . . . . . 185
Figure 63: Wavelet Resolution Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Figure 64: Two Band Scatterplot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Figure 65: First Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Figure 66: Range of First Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Figure 67: Second Principal Component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Figure 68: Intensity, Hue, and Saturation Color Coordinate System . . . . . . . . . . . . . 197
Figure 69: Hyperspectral Data Axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Figure 70: Rescale Graphical User Interface (GUI) . . . . . . . . . . . . . . . . . . . . . . . . 205
Figure 71: Spectrum Average GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Figure 72: Spectral Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Figure 73: Two-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Figure 74: Three-Dimensional Spatial Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Figure 75: Surface Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Figure 76: One-Dimensional Fourier Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Figure 77: Example of Fourier Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Figure 78: The Padding Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Figure 79: Comparison of Direct and Fourier Domain Processing . . . . . . . . . . . . . . . 217
Figure 80: An Ideal Cross Section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Figure 81: High-Pass Filtering Using the Ideal Window . . . . . . . . . . . . . . . . . . . . . . 219
Figure 82: Filtering Using the Bartlett Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Figure 83: Filtering Using the Butterworth Window . . . . . . . . . . . . . . . . . . . . . . . . 220
Figure 84: Homomorphic Filtering Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Figure 85: Effects of Mean and Median Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Figure 86: Regions of Local Region Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Figure 87: One-dimensional, Continuous Edge, and Line Models . . . . . . . . . . . . . . . 231
Figure 88: A Noisy Edge Superimposed on an Ideal Edge . . . . . . . . . . . . . . . . . . . . 232
Figure 89: Edge and Line Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Figure 90: Adjust Brightness Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Figure 91: Range Lines vs. Lines of Constant Range . . . . . . . . . . . . . . . . . . . . . . . 239
Figure 92: Slant-to-Ground Range Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Figure 93: Example of a Feature Space Image . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Figure 94: Process for Defining a Feature Space Object . . . . . . . . . . . . . . . . . . . . . 253
Figure 95: ISODATA Arbitrary Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Figure 96: ISODATA First Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Figure 97: ISODATA Second Pass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Figure 98: RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Figure 99: Ellipse Evaluation of Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Figure 100: Classification Flow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Figure 101: Parallelepiped Classification Using ± Two Standard Deviations as Limits . . 272
Figure 102: Parallelepiped Corners Compared to the Signature Ellipse . . . . . . . . . . . 274
Figure 103: Feature Space Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Conventions Used in this Book
The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation.
• remote sensing
• radiometric correction
• geocoded data
NOTE: DEMs are not remotely sensed image data, but are currently
being produced from stereo points in radar imagery.
Bands
Image data may include several bands of information. Each band is
a set of data file values for a specific portion of the electromagnetic
spectrum of reflected light or emitted heat (red, green, blue, near-
infrared, infrared, thermal, etc.) or some other user-defined
information created by combining or enhancing the original bands,
or creating new bands from other sources.
ERDAS IMAGINE programs can handle an unlimited number of bands
of image data in a single file.
[Figure: three bands of data file values for one pixel]
• Nominal data file values are simply categorized and named. The
actual value used for each category has no inherent meaning—it
is simply a class value. An example of a nominal raster layer
would be a thematic layer showing tree species.
• Ordinal data are similar to nominal data, except that the file
values put the classes in a rank or order. For example, a layer
with classes numbered and named
1 - Good, 2 - Moderate, and 3 - Poor is an ordinal system.
• Interval data file values have an order, but the intervals between
the values are also meaningful. Interval data measure some
characteristic, such as elevation or degrees Fahrenheit, which
does not necessarily have an absolute zero. (The difference
between two values in interval data is meaningful.)
[Figure: file coordinates; columns (x) and rows (y), with an example pixel at (x, y) = (3, 1)]
Map Coordinates
Map coordinates may be expressed in one of a number of map
coordinate or projection systems. The type of map coordinates used
by a data file depends on the method used to create the file (remote
sensing, scanning an existing map, etc.). In ERDAS IMAGINE, a data
file can be converted from one map coordinate system to another.
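The relationship between file coordinates and map coordinates can be illustrated with a simple affine mapping from the upper left corner of the image. The following sketch (in Python) is a general illustration only, not the conversion routine used by ERDAS IMAGINE; the upper left coordinates and cell sizes are hypothetical values.

# Convert file coordinates (row, col) to map coordinates for a north-up
# image; the upper-left corner and cell sizes below are hypothetical.
def file_to_map(row, col, ul_x=440000.0, ul_y=3750000.0,
                cell_width=30.0, cell_height=30.0):
    """Return the map coordinate of the center of pixel (row, col)."""
    map_x = ul_x + (col + 0.5) * cell_width    # x increases with column
    map_y = ul_y - (row + 0.5) * cell_height   # y decreases with row
    return map_x, map_y

print(file_to_map(row=1, col=3))  # the pixel at file coordinate (3, 1)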
Remote Sensing
Remote sensing is the acquisition of data about an object or scene
by a sensor that is far from the object (Colwell, 1983). Aerial
photography, satellite imagery, and radar are all forms of remotely
sensed data.
Usually, remotely sensed data refer to data of the Earth collected
from sensors on satellites or aircraft. Most of the images used as
input to the ERDAS IMAGINE system are remotely sensed. However,
you are not limited to remotely sensed data.
[Figure: the electromagnetic spectrum in micrometers (µm, one millionth of a meter); visible (0.4 to 0.7 µm: blue 0.4 to 0.5, green 0.5 to 0.6, red 0.6 to 0.7), near-infrared (0.7 to 2.0 µm), middle-infrared (2.0 to 5.0 µm), and far-infrared (8.0 to 15.0 µm), with the ultraviolet, SWIR, LWIR, and radar regions also indicated]
Absorption / Reflection Spectra
When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image
data, it is necessary to understand how vegetation, soils, water, and
other land covers reflect and absorb radiation. The study of the
absorption and reflection of EMR waves is called spectroscopy.
Absorption Spectra
Absorption is based on the molecular bonds in the (surface) material.
Which wavelengths are absorbed depends upon the chemical
composition and crystalline structure of the material. For pure
compounds, these absorption bands are so specific that the SWIR
region is often called an infrared fingerprint.
Atmospheric Absorption
In remote sensing, the sun is the radiation source for passive
sensors. However, the sun does not emit the same amount of
radiation at all wavelengths. Figure 4 shows the solar irradiation
curve, which is far from linear.
[Figure: solar irradiation curve from 0.0 to 3.0 µm across the UV, visible, and infrared regions, showing atmospheric absorption (the amount of radiation absorbed by the atmosphere) and the emission source (radiation re-emitted after absorption)]
Reflectance Spectra
After rigorously defining the incident radiation (solar irradiation at
target), it is possible to study the interaction of the radiation with the
target material. When an electromagnetic wave (solar illumination in
this case) strikes a target surface, three interactions are possible
(Elachi, 1987):
• reflection
• transmission
• scattering
[Figure: laboratory reflectance spectra (reflectance in percent versus wavelength, 0.4 to 2.4 µm) of kaolinite, green vegetation, and silt loam, with Landsat TM bands 1-5 and 7 and atmospheric absorption bands indicated]
Hyperspectral Data
As remote sensing moves toward the use of more and narrower
bands (for example, AVIRIS with 224 bands each only 10 nm wide),
absorption by specific atmospheric gases must be considered. These
multiband sensors are called hyperspectral sensors. As more and
more of the incident radiation is absorbed by the atmosphere, the
digital number (DN) values of that band get lower, eventually
becoming useless—unless one is studying the atmosphere. Someone
wanting to measure the atmospheric content of a specific gas could
utilize the bands of specific absorption.
[Figure: reflectance spectra of kaolinite, montmorillonite, and illite within Landsat TM band 7 (2080 to 2350 nm)]
[Table: radar band designations, with wavelength (λ) in cm and frequency (υ) in GHz (10⁹ cycles per second)]
Spectral Resolution
Spectral resolution refers to the specific wavelength intervals in the
electromagnetic spectrum that a sensor can record (Simonett et al,
1983). For example, band 1 of the Landsat TM sensor records energy
between 0.45 and 0.52 µm in the visible part of the spectrum.
NOTE: The spectral resolution does not indicate how many levels the
signal is broken into.
Spatial Resolution
Spatial resolution is a measure of the smallest object that can be
resolved by the sensor, or the area on the ground represented by
each pixel (Simonett et al, 1983). The finer the resolution, the lower
the number. For instance, a spatial resolution of 79 meters is coarser
than a spatial resolution of 10 meters.
Scale
The terms large-scale imagery and small-scale imagery often refer
to spatial resolution. Scale is the ratio of distance on a map as
related to the true distance on the ground (Star and Estes, 1990).
Large-scale in remote sensing refers to imagery in which each pixel
represents a small area on the ground, such as SPOT data, with a
spatial resolution of 10 m or 20 m. Small scale refers to imagery in
which each pixel represents a large area on the ground, such as
Advanced Very High Resolution Radiometer (AVHRR) data, with a
spatial resolution of 1.1 km.
This terminology is derived from the fraction used to represent the
scale of the map, such as 1:50,000. Small-scale imagery is
represented by a small fraction (one over a very large number).
Large-scale imagery is represented by a larger fraction (one over a
smaller number). Generally, anything smaller than 1:250,000 is
considered small-scale imagery.
NOTE: Scale and spatial resolution are not always the same thing.
An image always has the same spatial resolution, but it can be
presented at different scales (Simonett et al, 1983).
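As a worked illustration of the scale ratio (an example added here, not a procedure from this guide), the ground distance represented by a map measurement follows directly from the scale denominator.

# Ground distance represented by a map measurement at a given scale.
def ground_distance_m(map_distance_cm, scale_denominator):
    """For example, 1 cm on a 1:50,000 map covers 500 m on the ground."""
    return map_distance_cm * scale_denominator / 100.0  # convert cm to m

print(ground_distance_m(1.0, 50000))   # 500.0 m (large-scale map)
print(ground_distance_m(1.0, 250000))  # 2500.0 m (small-scale map)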
Figure 8: IFOV
[Figure: a house within a 20 m × 20 m instantaneous field of view]
[Figure: radiometric resolution; an 8-bit range and a 7-bit range, each from 0 to maximum intensity]
Temporal Resolution
Temporal resolution refers to how often a sensor obtains imagery of
a particular area. For example, the Landsat satellite can view the
same area of the globe once every 16 days. SPOT, on the other hand,
can revisit the same area every three days.
[Figure: spectral resolution (0.52 to 0.60 µm) and temporal resolution (the same area viewed every 16 days: Day 1, Day 17, Day 31). Source: EOSAT]
Data Correction
There are several types of errors that can be manifested in remotely
sensed data. Among these are line dropout and striping. These
errors can be corrected to an extent in GIS by radiometric and
geometric correction functions.
Line Dropout
Line dropout occurs when a detector either completely fails to
function or becomes temporarily saturated during a scan (like the
effect of a camera flash on a human retina). The result is a line or
partial line of data with higher data file values, creating a horizontal
streak until the detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line
of estimated data file values. The estimated line is based on the lines
above and below it.
You can correct line dropout using the 5 × 5 Median Filter from the Radar Speckle Suppression function. The Convolution and Focal Analysis functions in the ERDAS IMAGINE Image Interpreter also correct for line dropout.
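The following Python sketch shows the general idea of estimating a dropped line from the lines above and below it. It is an illustration only, assuming NumPy, and is not the Median Filter, Convolution, or Focal Analysis implementation in ERDAS IMAGINE.

import numpy as np

def repair_line_dropout(band, bad_row):
    """Replace a dropped scan line with the mean of its neighboring lines."""
    repaired = band.astype(float)
    above = repaired[bad_row - 1] if bad_row > 0 else repaired[bad_row + 1]
    below = repaired[bad_row + 1] if bad_row < band.shape[0] - 1 else above
    repaired[bad_row] = (above + below) / 2.0
    return repaired

band = np.random.randint(0, 256, (5, 6))
band[2] = 255                      # simulate a saturated (dropped) line
print(repair_line_dropout(band, 2))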
Storage Formats
Image data can be arranged in several ways on a tape or other media. The most common storage formats are:
• BIL (band interleaved by line)
• BSQ (band sequential)
• BIP (band interleaved by pixel)
For a single band of data, all formats (BIL, BIP, and BSQ) are
identical, as long as the data are not blocked.
BIL
In BIL (band interleaved by line) format, each record in the file
contains a scan line (row) of data for one band (Slater, 1980). All
bands of data for a given line are stored consecutively within the file
as shown in Figure 11.
[Figure 11: BIL format layout]
Header
Line 1, Band 1
Line 1, Band 2
...
Line 1, Band x
Line 2, Band 1
Line 2, Band 2
...
Line 2, Band x
Line n, Band 1
Line n, Band 2
...
Line n, Band x
Trailer
NOTE: Although a header and trailer file are shown in this diagram,
not all BIL data contain header and trailer files.
BSQ
In BSQ (band sequential) format, each entire band is stored
consecutively in the same file (Slater, 1980). This format is
advantageous, in that:
[Figure: BSQ format layout]
Header File(s)
Image File, Band 1: Line 1, Band 1; Line 2, Band 1; Line 3, Band 1; ... ; Line n, Band 1; end-of-file
Image File, Band 2: Line 1, Band 2; Line 2, Band 2; Line 3, Band 2; ... ; Line n, Band 2; end-of-file
Image File, Band x: Line 1, Band x; Line 2, Band x; Line 3, Band x; ... ; Line n, Band x; end-of-file
Trailer File(s)
• Files are not split between tapes. If a band starts on the first
tape, it ends on the first tape.
ERDAS IMAGINE imports all of the header and image file information.
BIP
In BIP (band interleaved by pixel) format, the values for each band
are ordered within a given pixel. The pixels are arranged sequentially
on the tape (Slater, 1980). The sequence for BIP format is:
Pixel 1, Band 1
Pixel 1, Band 2
Pixel 1, Band 3
.
.
.
Pixel 2, Band 1
Pixel 2, Band 2
Pixel 2, Band 3
.
.
.
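The three interleaving schemes differ only in how the byte offset of a given line, band, and pixel is computed. The Python sketch below illustrates that arithmetic for unblocked data with no header or trailer, which is an assumption; real products may add headers, trailers, and blocking.

# Byte offset of pixel (row, col) in band b for unblocked data with no
# header; nrows/ncols/nbands describe the image, bpp is bytes per pixel.
def offset_bil(row, col, b, ncols, nbands, bpp):
    return ((row * nbands + b) * ncols + col) * bpp

def offset_bsq(row, col, b, nrows, ncols, bpp):
    return ((b * nrows + row) * ncols + col) * bpp

def offset_bip(row, col, b, ncols, nbands, bpp):
    return ((row * ncols + col) * nbands + b) * bpp

# Example: 8-bit data, 3 bands, 500 rows x 1000 columns, pixel (2, 10), band 1
print(offset_bil(2, 10, 1, ncols=1000, nbands=3, bpp=1))
print(offset_bsq(2, 10, 1, nrows=500, ncols=1000, bpp=1))
print(offset_bip(2, 10, 1, ncols=1000, nbands=3, bpp=1))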
Storage Media
Today, most raster data are available on a variety of storage media
to meet the needs of users, depending on the system hardware and
devices available. When ordering data, it is sometimes possible to
select the type of media preferred. The most common forms of
storage media are discussed in the following section:
• 9-track tape
• 4 mm tape
• 8 mm tape
• CD-ROM/optical disk
• videotape
Tape
The data on a tape can be divided into logical records and physical
records. A record is the basic storage unit on a tape.
Blocked Data
For reasons of efficiency, data can be blocked to fit more on a tape.
Blocked data are sequenced so that there are more logical records in
each physical record. The number of logical records in each physical
record is the blocking factor. For instance, a physical record may contain 28,000 bytes but only 4,000 columns per logical record, due to a blocking factor of 7 (28,000 ÷ 7 = 4,000).
Tape Contents
Tapes are available in a variety of sizes and storage capacities. To
obtain information about the data on a particular tape, read the tape
label or box, or read the header file. Often, there is limited
information on the outside of the tape. Therefore, it may be
necessary to read the header files on each tape for specific
information, such as:
• number of bands
• blocking factor
4 mm Tapes
The 4 mm tape is a relative newcomer in the world of GIS. This tape
is a mere 2” × .75” in size, but it can hold up to 2 Gb of data. This
petite cassette offers an obvious shipping and storage advantage
because of its size.
8 mm Tapes
The 8 mm tape offers the advantage of storing vast amounts of data.
Tapes are available in 5 and 10 Gb storage capacities (although
some tape drives cannot handle the 10 Gb size). The 8 mm tape is a
2.5” × 4” cassette, which makes it easy to ship and handle.
9-Track Tapes
A 9-track tape is an older format that was the standard for two
decades. It is a large circular tape approximately 10” in diameter. It
requires a 9-track tape drive as a peripheral device for retrieving
data. The size and storage capability make 9-track less convenient
than 8 mm or 1/4” tapes. However, 9-track tapes are still widely
used.
A single 9-track tape may be referred to as a volume. The complete
set of tapes that contains one image is referred to as a volume set.
The storage format of a 9-track tape in binary format is described by
the number of bits per inch, bpi, on the tape. The tapes most
commonly used have either 1600 or 6250 bpi. The number of bits
per inch on a tape is also referred to as the tape density. Depending
on the length of the tape, 9-track tapes can store between 120 and 150 Mb of data.
CD-ROM
Data such as ADRG and Digital Line Graphs (DLG) are most often
available on CD-ROM, although many types of data can be requested
in CD-ROM format. A CD-ROM is an optical read-only storage device
which can be read with a CD player. CD-ROMs offer the advantage
of storing large amounts of data in a small, compact device. Up to
644 Mb can be stored on a CD-ROM. Also, since this device is read-only, it protects the data from being accidentally overwritten, erased, or otherwise altered. This is the most stable
of the current media storage types and data stored on CD-ROM are
expected to last for decades without degradation.
Calculating Disk Space
To calculate the amount of disk space a raster file requires on an
ERDAS IMAGINE system, use the following formula:
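As a rough sketch only (the 1.4 overhead multiplier for pyramid layers and statistics is an assumption, not a value taken from this guide), the uncompressed size of a raster file can be estimated from its rows, columns, bands, and bytes per pixel.

# Rough estimate of raster file size; the 1.4 overhead factor for pyramid
# layers, statistics, and other ancillary data is an assumed value.
def estimate_disk_space_bytes(rows, cols, bands, bytes_per_pixel,
                              overhead=1.4):
    return rows * cols * bands * bytes_per_pixel * overhead

# Example: a 7-band, 8-bit image of 6000 rows x 6600 columns
size = estimate_disk_space_bytes(6000, 6600, 7, 1)
print(f"{size / 1e6:.0f} MB (approximate)")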
ERDAS IMAGINE Format (.img)
In ERDAS IMAGINE, file name extensions identify the file type. When data are imported into ERDAS IMAGINE, they are converted to the
ERDAS IMAGINE file format and stored in image files. ERDAS
IMAGINE image files (.img) can contain two types of raster layers:
• thematic
• continuous
[Figure: examples of raster layers; thematic layers such as soils, land cover, roads, and hydrology, and continuous layers such as Landsat TM, SPOT, DEM, slope, and temperature]
Tiled Data
Data in the .img format are tiled data. Tiled data are stored in tiles
that can be set to any size.
• statistics
• lookup tables
• map coordinates
• map projection
Statistics
In ERDAS IMAGINE, the file statistics are generated from the data
file values in the layer and incorporated into the image file. This
statistical information is used to create many program defaults, and
helps you make processing decisions.
Image File Organization
Data are easy to locate if the data files are well organized. Well-organized files also make data more accessible to anyone who uses
the system. Using consistent naming conventions and the ERDAS
the system. Using consistent naming conventions and the ERDAS
IMAGINE Image Catalog helps keep image files well organized and
accessible.
Consistent Naming Convention
Many processes create an output file, and every time a file is created, it is necessary to assign a file name. The name that is used can either
cause confusion about the process that has taken place, or it can
clarify and give direction. For example, if the name of the output file
is image.img, it is difficult to determine the contents of the file. On
the other hand, if a standard nomenclature is developed in which the
file name refers to a process or contents of the file, it is possible to
determine the progress of a project and contents of a file by
examining the directory.
Develop a naming convention that is based on the contents of the
file. This helps everyone involved know what the file contains. For
example, in a project to create a map composition for Lake Lanier, a
directory for the files may look similar to the one below:
lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img
Keeping Track of Image Files
Using a database to store information about images enables you to track image files (.img) without having to know the name or location
of the file. The database can be queried for specific parameters (e.g.,
size, type, map projection) and the database returns a list of image
files that match the search criteria. This file information helps to
quickly determine which image(s) to use, where it is located, and its
ancillary data. An image database is especially helpful when there
are many image files and even many on-going projects. For
example, you could use the database to search for all of the image
files of Georgia that have a UTM map projection.
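The Python sketch below illustrates this kind of query against a small SQLite table of image metadata. The schema and field names are hypothetical and do not represent the actual structure of the ERDAS IMAGINE Image Catalog.

import sqlite3

# Hypothetical image-catalog schema; table and field names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images (name TEXT, area TEXT, projection TEXT)")
con.executemany("INSERT INTO images VALUES (?, ?, ?)", [
    ("lanierTM.img",   "Georgia", "UTM"),
    ("lanierSPOT.img", "Georgia", "UTM"),
    ("coastDEM.img",   "Georgia", "State Plane"),
])

# Find all image files of Georgia that have a UTM map projection.
for (name,) in con.execute(
        "SELECT name FROM images WHERE area = ? AND projection = ?",
        ("Georgia", "UTM")):
    print(name)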
Using Image Data in GIS
ERDAS IMAGINE provides many tools designed to extract the necessary information from the images in a database. The following chapters in this book describe many of these processes.
This section briefly describes some basic image file techniques that
may be useful for any application.
Subsetting and Mosaicking
Within ERDAS IMAGINE, there are options available to make additional image files from those acquired from EOSAT, SPOT, etc.
These options involve combining files, mosaicking, and subsetting.
ERDAS IMAGINE programs allow image data with an unlimited
number of bands, but the most common satellite data types—
Landsat and SPOT—have seven or fewer bands. Image files can be
created with more than seven bands.
It may be useful to combine data from two different dates into one
file. This is called multitemporal imagery. For example, a user may
want to combine Landsat TM from one date with TM data from a later
date, then perform a classification based on the combined data. This
is particularly useful for change detection studies.
You can also incorporate elevation data into an existing image file as
another band, or create new bands through various enhancement
techniques.
Mosaic
On the other hand, the study area in which you are interested may
span several image files. In this case, it is necessary to combine the
images to create one large file. This is called mosaicking.
Multispectral Classification
Image data are often used to create thematic files through multispectral classification. This entails using spectral pattern
recognition to identify groups of pixels that represent a common
characteristic of the scene, such as soil type or vegetation.
The ERDAS IMAGINE raster editing functions allow the use of focal
and global spatial modeling functions for computing the values to
replace noisy pixels or areas in continuous or thematic data.
Focal operations are filters that calculate the replacement value
based on a window (3 × 3, 5 × 5, etc.), and replace the pixel of
interest with the replacement value. Therefore this function affects
one pixel at a time, and the number of surrounding pixels that
influence the value is determined by the size of the moving window.
Global operations calculate the replacement value for an entire area
rather than affecting one pixel at a time. These functions, specifically
the Majority option, are more applicable to thematic data.
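A minimal sketch of a focal (moving window) operation is shown below: a 3 × 3 median used to compute a replacement value for one noisy pixel. It illustrates the general technique with NumPy and is not the ERDAS IMAGINE raster editing implementation.

import numpy as np

def focal_median(data, row, col, size=3):
    """Replacement value for one pixel from a size x size moving window."""
    half = size // 2
    r0, r1 = max(row - half, 0), min(row + half + 1, data.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, data.shape[1])
    return float(np.median(data[r0:r1, c0:c1]))

dem = np.array([[10., 11., 12.],
                [11., 99., 12.],   # spurious spike at (1, 1)
                [12., 12., 13.]])
dem[1, 1] = focal_median(dem, 1, 1)
print(dem)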
Editing Continuous (Athematic) Data
Editing DEMs
DEMs occasionally contain spurious pixels or bad data. These spikes, holes, and other noise caused by automatic DEM extraction can be
corrected by editing the raster data values and replacing them with
meaningful values. This discussion of raster editing focuses on DEM
editing.
Interpolation Techniques
While the previously listed raster editing techniques are perfectly suitable for some applications, the following interpolation techniques provide the best methods for raster editing:
• distance weighting
• 2-D polynomial
• multisurface functions
2-D Polynomial
This interpolation technique provides faster interpolation calculations
than distance weighting and multisurface functions. The following
equation is used:
V = a₀ + a₁x + a₂y + a₃x² + a₄xy + a₅y² + . . .
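A short sketch of fitting this second-order polynomial to reference points by least squares is shown below, assuming NumPy; the sample coordinates and elevations are hypothetical.

import numpy as np

# Fit V = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 by least squares.
x = np.array([0., 1., 2., 0., 1., 2., 0., 1., 2.])
y = np.array([0., 0., 0., 1., 1., 1., 2., 2., 2.])
v = np.array([10., 11., 14., 12., 13., 16., 15., 17., 20.])  # elevations

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)

# Evaluate the fitted surface at a new (hypothetical) location
xn, yn = 1.5, 0.5
vn = np.array([1., xn, yn, xn**2, xn * yn, yn**2]) @ coeffs
print(coeffs, vn)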
Multisurface Functions
The multisurface technique provides the most accurate results for
editing DEMs that have been created through automatic extraction.
The following equation is used:
V = ∑ Wi Qi
Where:
V = output data value (elevation value for DEM)
Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as continuous
single value surfaces
Source: Wang, Z., 1990
Distance Weighting
The weighting function determines how the output data values are
interpolated from a set of reference data points. For each pixel, the
values of all reference points are weighted by a value corresponding
with the distance between each point and the pixel.
The weighting function used in ERDAS IMAGINE is:
W = (S / D – 1)²
Where:
S = normalization factor
D = distance between the output data point and the reference point
The value for any given pixel is calculated by taking the sum of
weighting factors for all reference points multiplied by the data
values of those points, and dividing by the sum of the weighting
factors:
V = [ ∑ (i=1 to n) Wi × Vi ] / [ ∑ (i=1 to n) Wi ]
Where:
V = output data value (elevation value for DEM)
i = ith reference point
Wi = weighting factor of point i
Vi = data value of point i
n = number of reference points
Source: Wang, Z., 1990
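A minimal sketch of this distance weighting scheme is shown below, assuming NumPy. The reference points are hypothetical, and points farther away than the normalization factor S are given zero weight, which is an assumption about how the formula is applied rather than a rule stated in this guide.

import numpy as np

def distance_weighted_value(px, py, ref_xy, ref_v, s=100.0):
    """V = sum(Wi * Vi) / sum(Wi), with Wi = (S / Di - 1)^2."""
    d = np.hypot(ref_xy[:, 0] - px, ref_xy[:, 1] - py)
    d = np.maximum(d, 1e-9)                       # avoid division by zero
    w = np.where(d < s, (s / d - 1.0) ** 2, 0.0)  # assumed cutoff at D = S
    return float(np.sum(w * ref_v) / np.sum(w))

ref_xy = np.array([[10., 10.], [60., 20.], [30., 70.]])  # hypothetical points
ref_v = np.array([100., 140., 120.])                     # elevation values
print(distance_weighted_value(25., 25., ref_xy, ref_v))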
Introduction
ERDAS IMAGINE is designed to integrate two data types, raster and
vector, into one system. While the previous chapter explored the
characteristics of raster data, this chapter is focused on vector data.
The vector data structure in ERDAS IMAGINE is based on the ArcInfo
data model (developed by ESRI, Inc.). This chapter describes vector
data, attribute information, and symbolization.
• points
• lines
• polygons
[Figure: vector elements; points, a line made of vertices and nodes, and polygons with label points]
Lines
A line (polyline) is a set of line segments and represents a linear
geographic feature, such as a river, road, or utility line. Lines can
also represent nongeographical boundaries, such as voting districts,
school zones, contour lines, etc.
Vertex
The points that define a line are vertices. A vertex is a point that
defines an element, such as the endpoint of a line segment or a
location in a polygon where the line segment defining the polygon
changes direction. The ending points of a line are called nodes. Each
line has two nodes: a from-node and a to-node. The from-node is the
first vertex in a line. The to-node is the last vertex in a line. Lines
join other lines only at nodes. A series of lines in which the from-
node of the first line joins the to-node of the last line is a polygon.
[Figure 17: a line and a polygon, each defined by vertices; the polygon also has a label point]
In Figure 17, the line and the polygon are each defined by three
vertices.
Tics
Vector layers are referenced to coordinates or a map projection
system using tic files that contain geographic control points for the
layer. Every vector layer must have a tic file. Tics are not
topologically linked to other features in the layer and do not have
descriptive data associated with them.
Vector Layers
Although it is possible to have points, lines, and polygons in a single
layer, a layer typically consists of one type of feature. It is possible
to have one vector layer for streams (lines) and another layer for
parcels (polygons). A vector layer is defined as a set of features
where each feature has a location (defined by coordinates and
topological pointers to other features) and, possibly, attributes
(defined as a set of named items or variables) (ESRI 1989). Vector
layers contain both the vector features (points, lines, polygons) and
the attribute information.
Usually, vector layers are also divided by the type of information
they represent. This enables the user to isolate data into themes,
similar to the themes used in raster layers. Political districts and soil
types would probably be in separate layers, even though both are
represented with polygons. If the project requires that the
coincidence of features in two or more layers be studied, the user
can overlay them or create a new layer.
Vector Files
As mentioned above, the ERDAS IMAGINE vector structure is based
on the ArcInfo data model used for ARC coverages. This
georelational data model is actually a set of files using the
computer’s operating system for file management and input/output.
An ERDAS IMAGINE vector layer is stored in subdirectories on the
disk. Vector data are represented by a set of logical tables of
information, stored as files within the subdirectory. These files may
serve the following purposes:
• define features
A workspace is a location that contains one or more vector layers.
Workspaces provide a convenient means for organizing layers into
related groups. They also provide a place for the storage of tabular
data not directly tied to a particular layer. Each workspace is
completely independent. It is possible to have an unlimited number
of workspaces and an unlimited number of vector layers in a
workspace. Table 2 summarizes the types of files that are used to
make up vector layers.
[Figure: a workspace named georgia containing the vector layers parcels and testdata]
Because vector layers are stored in directories rather than in
simple files, you MUST use the utilities provided in ERDAS
IMAGINE to copy and rename them. A utility is also provided to
update path names that are no longer correct due to the use of
regular system commands on vector layers.
Attribute Information
Along with points, lines, and polygons, a vector layer can have a wealth of descriptive, or attribute, information associated with it. Attribute information is displayed in CellArrays. This is the
same information that is stored in the INFO database of ArcInfo.
Some attributes are automatically generated when the layer is
created. Custom fields can be added to each attribute table.
Attribute fields can contain numerical or character data.
The attributes for a roads layer may look similar to the example in
Figure 19. You can select features in the layer based on the attribute
information. Likewise, when a row is selected in the attribute
CellArray, that feature is highlighted in the Viewer.
Displaying Vector Data
Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. You can display a single vector layer, overlay
several layers in one Viewer, or display a vector layer(s) over a
raster layer(s).
In layers that contain more than one feature (a combination of
points, lines, and polygons), you can select which features to
display. For example, if you are studying parcels, you may want to
display only the polygons in a layer that also contains street
centerlines (lines).
Color Schemes
Vector data are usually assigned class values in the same manner as
the pixels in a thematic raster file. These class values correspond to
different colors on the display screen. As with a pseudo color image,
you can assign a color scheme for displaying the vector classes.
Symbolization
Vector layers can be displayed with symbolization, meaning that the
attributes can be used to determine how points, lines, and polygons
are rendered. Points, lines, polygons, and nodes are symbolized
using styles and symbols similar to annotation. For example, if a
point layer represents cities and towns, the appropriate symbol could
be used at each point based on the population of that area.
Lines
Lines can be symbolized with varying line patterns, composition,
width, and color. The line styles available are the same as those
available for annotation.
Polygons
Polygons can be symbolized as lines or as filled polygons. Polygons
symbolized as lines can have varying line styles (see “Lines”). For
filled polygons, either a solid fill color or a repeated symbol can be
selected. When symbols are used, you select the symbol to use, the
symbol size, symbol color, background color, and the x- and y-
separation between symbols. Figure 20 illustrates a pattern fill.
Digitizing
In the broadest sense, digitizing refers to any process that converts
nondigital data into numbers. However, in ERDAS IMAGINE, the
digitizing of vectors refers to the creation of vector data from
hardcopy materials or raster images that are traced using a digitizer
keypad on a digitizing tablet or a mouse on a displayed image.
Any image not already in digital format must be digitized before it
can be read by the computer and incorporated into the database.
Most Landsat, SPOT, or other satellite data are already in digital
format upon receipt, so it is not necessary to digitize them. However,
you may also have maps, photographs, or other nondigital data that
contain information you want to incorporate into the study. Or, you
may want to extract certain features from a digital image to include
in a vector layer. Tablet digitizing and screen digitizing enable you to
digitize certain features of a map or photograph, such as roads,
bodies of water, voting districts, and so forth.
Tablet Digitizing
Tablet digitizing involves the use of a digitizing tablet to transfer
nondigital data such as maps or photographs to vector format. The
digitizing tablet contains an internal electronic grid that transmits
data to ERDAS IMAGINE on cue from a digitizer keypad operated by
you.
Digitizer Operation
The handheld digitizer keypad features a small window with a
crosshair and keypad buttons. Position the intersection of the
crosshair directly over the point to be digitized. Depending on the
type of equipment and the program being used, one of the input
buttons is pushed to tell the system which function to perform, such
as:
Digitizing Modes
There are two modes used in digitizing:
You can create a new vector layer from the Viewer. Select the
Tablet Input function from the Viewer to use a digitizing tablet
to enter new information into that layer.
Measurement
The digitizing tablet can also be used to measure both linear and
areal distances on a map or photograph. The digitizer puck is used
to outline the areas to measure. You can measure:
Select the Measure function from the Viewer or click on the Ruler
tool in the Viewer tool bar to enable tablet or screen
measurement.
Screen Digitizing
In screen digitizing, vector data are drawn with a mouse in the
Viewer using the displayed image as a reference. These data are
then written to a vector layer.
Screen digitizing is used for the same purposes as tablet digitizing,
such as:
Imported Vector Data
Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. These data formats include:
Raster to Vector Conversion
A raster layer can be converted to a vector layer and used as another layer in a vector database. The following diagram illustrates a thematic file in raster format that has been converted to vector
format.
Other Vector Data Types
While this chapter has focused mainly on the ArcInfo coverage format, there are other types of vector formats that you can use in
ERDAS IMAGINE. The two primary types are:
Shapefile Vector Format
The shapefile vector format was designed by ESRI. You can use the shapefile format (extension .shp) in ERDAS IMAGINE to:
• display shapefiles
• create shapefiles
• edit shapefiles
• attribute shapefiles
• symbolize shapefiles
• print shapefiles
SDE
Like the shapefile format, the Spatial Database Engine (SDE) is a vector format designed by ESRI. The data layers are stored in a relational database management system (RDBMS) such as Oracle or SQL Server. Some of the features of SDE include:
SDTS
SDTS stands for Spatial Data Transfer Standard. SDTS is used to transfer spatial data between computer systems. Such data include attribute information, georeferencing, data quality reports, a data dictionary, and supporting metadata.
ArcGIS Integration
ArcGIS Integration is the method you use to access the data in a geodatabase. The term geodatabase is the short form of geographic database. The geodatabase is hosted inside a relational database
management system that provides services for managing
geographic data. The services include validation rules, relationships,
and topological associations. ERDAS IMAGINE has always supported
ESRI data formats such as coverages and shapefiles, and now, using
ArcGIS Vector Integration, ERDAS IMAGINE can also access CAD and
VPF data on the internet.
There are two types of geodatabases: personal and enterprise. The
personal geodatabases are for use by an individual or small group,
and the enterprise geodatabases are for use by large groups.
Industrial strength host systems such as Oracle support the
organizational structure of enterprise geodatabases. The
organization of both personal and enterprise geodatabases starts
with a workspace that contains both spatial and non-spatial datasets
such as feature classes, raster datasets, and tables. An example of
a feature dataset would be U.S. Agriculture. Within the datasets are
feature classes. An example of a feature class would be U.S.
Hydrology. Within every feature class are particular features like
wells and lakes. Each feature class will be symbolized by only one
type of geometry such as points symbolizing wells or polygons
symbolizing lakes.
Introduction
This chapter is an introduction to the most common raster and vector
data types that can be used with the ERDAS IMAGINE software
package. The raster data types covered include:
• radar imagery
Importing and Exporting

Raster Data
There is an abundance of data available for use in GIS today. In
addition to satellite and airborne imagery, raster data sources
include digital x-rays, sonar, microscopic imagery, video digitized
data, and many other sources.
Because of the wide variety of data formats, ERDAS IMAGINE
provides two options for importing data:
Data Type    Import    Export    Direct Read    Direct Write
ADRG • •
ADRI •
ARCGEN • •
Arc Coverage • •
ArcInfo & Space Imaging BIL, BIP, BSQ • • •
Arc Interchange • •
ASCII •
ASRP • •
ASTER (EOS HDF Format) •
AVHRR (NOAA) •
AVHRR (Dundee Format) •
AVHRR (Sharp) •
BIL, BIP, BSQa (Generic Binary) • • •b
CADRG (Compressed ADRG) • • •
CIB (Controlled Image Base) • • •
DAEDALUS •
USGS DEM • •
DOQ • •
DOQ (JPEG) • •
DTED • • •
ER Mapper •
ERS (I-PAF CEOS) •
ERS (Conae-PAF CEOS) •
ERS (Tel Aviv-PAF CEOS) •
ERS (D-PAF CEOS) •
ERS (UK-PAF CEOS) •
FIT •
Generic Binary (BIL, BIP, BSQ)a • • •b
GeoTIFF • • • •
GIS (Erdas 7.x) • • •
GRASS • •
GRID • • •
GRID Stack • • • •
GRID Stack 7.x • • •
GRD (Surfer: ASCII/Binary) • •
IRS-1C/1D (EOSAT Fast Format C) •
IRS-1C/1D (EUROMAP Fast Format C) •
IRS-1C/1D (Super Structured Format) •
JFIF (JPEG) • • •
Landsat-7 Fast-L7A ACRES •
Landsat-7 Fast-L7A EROS •
Landsat-7 Fast-L7A Eurimage •
LAN (Erdas 7.x) • • •
MODIS (EOS HDF Format) •
MrSID • •
MSS Landsat •
NLAPS Data Format (NDF) •
NASDA CEOS •
PCX • • •
RADARSAT (Vancouver CEOS) •
RADARSAT (Acres CEOS) •
RADARSAT (West Freugh CEOS) •
Raster Product Format • • •
SDE • •
SDTS • •
SeaWiFS L1B and L2A (OrbView) •
Shapefile • • • •
SPOT •
SPOT CCRS •
SPOT (GeoSpot) •
SPOT SICORP MetroView •
SUN Raster • •
TIFF • • • •
TM Landsat Acres Fast Format •
TM Landsat Acres Standard Format •
TM Landsat EOSAT Fast Format •
TM Landsat EOSAT Standard Format •
TM Landsat ESA Fast Format •
TM Landsat ESA Standard Format •
TM Landsat-7 Eurimage CEOS (Multispectral) •
TM Landsat-7 Eurimage CEOS (Panchromatic) •
TM Landsat-7 HDF Format •
TM Landsat IRS Fast Format •
TM Landsat IRS Standard Format •
TM Landsat-7 Fast-L7A ACRES •
TM Landsat-7 Fast-L7A EROS •
TM Landsat-7 Fast-L7A Eurimage •
TM Landsat Radarsat Fast Format •
TM Landsat Radarsat Standard Format •
USRP • •
b Direct read of generic binary data requires an accompanying header file in the ESRI ArcInfo, Space Imaging, or ERDAS IMAGINE formats.
The import function converts raster data to the ERDAS IMAGINE file
format (.img), or other formats directly writable by ERDAS IMAGINE.
The import function imports the data file values that make up the
raster image, as well as the ephemeris or additional data inherent to
the data structure. For example, when the user imports Landsat
data, ERDAS IMAGINE also imports the georeferencing data for the
image.
NITFS
NITFS stands for the National Imagery Transmission Format
Standard. NITFS is designed to pack numerous image compositions
with complete annotation, text attachments, and imagery-
associated metadata.
According to Jordan and Beck,
NITFS is an unclassified format that is based on ISO/IEC 12087-
5, Basic Image Interchange Format (BIIF). The NITFS
implementation of BIIF is documented in U.S. Military Standard
2500B, establishing a standard data format for digital imagery
and imagery-related products.
NITFS was first introduced in 1990 and was intended for use by the government and intelligence agencies. NITFS is now the standard for
military organizations as well as commercial industries.
Jordan and Beck list the following attributes of NITF files:
• multiple images
• annotation on images
Annotation Data
Annotation data can also be imported directly. Table 4 lists the annotation formats.
There is a distinct difference between import and direct read. Import
means that the data is converted from its original format into
another format (e.g. IMG, TIFF, or GRID Stack), which can be read
directly by ERDAS IMAGINE. Direct read formats are those formats
which the Viewer and many of its associated tools can read
immediately without any conversion process.
Data Type    Import    Export    Direct Read    Direct Write
DXF To Annotation •
Vector Data
Vector layers can be created within ERDAS IMAGINE by digitizing
points, lines, and polygons using a digitizing tablet or the computer
screen. Several vector data types, which are available from a variety
of government agencies and private companies, can also be
imported. Table 5 lists some of the vector data formats that can be
imported to, and exported from, ERDAS IMAGINE:
Data Type    Import    Export    Direct Read    Direct Write
ARCGEN • •
Arc Interchange • •
Arc_Interchange to Coverage •
Arc_Interchange to Grid •
DFAD • •
DLG • •
DXF to Annotation •
DXF to Coverage •
ETAK •
IGES • •
MIF/MID (MapInfo) to Coverage •
SDE • •
SDTS • •
Shapefile • •
Terramodel •
TIGER • •
VPF • •
Satellite Data
There are several data acquisition options available including
photography, aerial sensors, and sophisticated satellite scanners.
However, a satellite system offers these advantages:
• Many satellites orbit the Earth, so the same area can be covered
on a regular basis for change detection.
Satellite Characteristics
The U.S. Landsat and the French SPOT satellites are two important
data acquisition satellites. These systems provide the majority of
remotely-sensed digital images in use today. The Landsat and SPOT
satellites have several characteristics in common:
• Both scanners can produce nadir views. Nadir is the area on the
ground directly beneath the scanner’s detectors.
NOTE: The current SPOT system has the ability to collect off-nadir
stereo imagery.
[Figure: comparison of sensor band wavelength coverage between approximately 1.9 and 13.0 µm]
NOTE: NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11.
IRS
IRS-1C
The IRS-1C sensor was launched in December of 1995.
The repeat coverage of IRS-1C is every 24 days. The sensor has a
744 km swath width.
The IRS-1C satellite has three sensors on board with which to
capture images of the Earth. Those sensors are as follows:
LISS-III
LISS-III has a spatial resolution of 23 m, with the exception of the
SW Infrared band, which is 70 m. Bands 2, 3, and 4 have a swath
width of 142 kilometers; band 5 has a swath width of 148 km.
Repeat coverage occurs every 24 days at the Equator.
[Table: LISS-III bands; band 1 (Blue): not available; band 5 (SW IR): 1.55 to 1.70 µm]
Panchromatic Sensor
IRS-1D
IRS-1D was launched in September of 1997. It collects imagery at a spatial resolution of 5.8 m. IRS-1D's sensors were copied from those of IRS-1C, which was launched in December 1995.
Imagery collected by IRS-1D is distributed in black and white format. The panchromatic imagery “reveals objects on the Earth’s surface (such) as transportation networks, large ships, parks and open space, and built-up urban areas” (Space Imaging, 1999b). This
information can be used to classify land cover in applications such as
urban planning and agriculture. The Space Imaging facility located in
Norman, Oklahoma has been obtaining IRS-1D data since 1997.
MSS
The MSS has a swath width of approximately 185 × 170 km, imaged
from a height of approximately 900 km for Landsats
1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely
used for general geologic studies as well as vegetation inventories.
The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m
IFOV. A typical scene contains approximately 2340 rows and 3240
columns. The radiometric resolution is 6-bit, but it is stored as 8-bit
(Lillesand and Kiefer, 1987).
Detectors record electromagnetic radiation (EMR) in four bands:
• Bands 1 and 2 are in the visible portion of the spectrum and are
useful in detecting cultural features, such as roads. These bands
also show detail in water.
Band       Wavelength (microns)   Comments
1, Green   0.50 to 0.60 µm        This band scans the region between the blue and red
                                  chlorophyll absorption bands. It corresponds to the green
                                  reflectance of healthy vegetation, and it is also useful for
                                  mapping water bodies.
4, NIR     0.80 to 1.10 µm        This band is useful for vegetation surveys and for
                                  penetrating haze (Jensen, 1996).
TM
The TM scanner is a multispectral scanning system much like the
MSS, except that the TM sensor records reflected/emitted
electromagnetic energy from the visible, reflective-infrared, middle-
infrared, and thermal-infrared regions of the spectrum. TM has
higher spatial, spectral, and radiometric resolution than MSS.
TM has a swath width of approximately 185 km from a height of
approximately 705 km. It is useful for vegetation type and health
determination, soil moisture, snow and cloud differentiation, rock
type discrimination, etc.
The spatial resolution of TM is 28.5 × 28.5 m for all bands except the
thermal (band 6), which has a spatial resolution of 120 × 120 m. The
larger pixel size of this band is necessary for adequate signal
strength. However, the thermal band is resampled to 28.5 × 28.5 m
to match the other bands. The radiometric resolution is 8-bit,
meaning that each pixel has a possible range of data values from 0
to 255.
Detectors record EMR in seven bands:
(Figure: comparison of MSS and TM — MSS has 4 bands, 57 × 79 m pixels, and radiometric resolution 0-127; TM has 7 bands, 30 × 30 m pixels, and radiometric resolution 0-255.)
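The radiometric resolution determines the range of possible data file values. As a rough illustration only (not part of ERDAS IMAGINE), a minimal Python sketch of the relationship between bits per pixel and the data value range:

def data_value_range(bits):
    # Range of data file values for a given radiometric resolution (bits per pixel).
    return 0, 2 ** bits - 1

print(data_value_range(8))   # TM: (0, 255)
print(data_value_range(6))   # MSS sensor values: (0, 63), although the data are stored as 8-bit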
NOTE: The order of the bands corresponds to the Red, Green, and
Blue (RGB) color guns of the monitor.
Landsat 7 Specifications
Information about the spectral range and ground resolution of the
bands of the Landsat 7 satellite is provided in the following table:
Band Number   Wavelength (microns)   Resolution (m)
1             0.45 to 0.52 µm        30
2             0.52 to 0.60 µm        30
3             0.63 to 0.69 µm        30
4             0.76 to 0.90 µm        30
5             1.55 to 1.75 µm        30
6             10.4 to 12.5 µm        60
7             2.08 to 2.35 µm        30
• DEM data and the metadata describing them (available only with
terrain corrected products)
NOAA Polar Orbiter Data NOAA has sponsored several polar orbiting satellites to collect data
of the Earth. These satellites were originally designed for
meteorological applications, but the data gathered have been used
in many fields—from agronomy to oceanography (Needham, 1986).
The first of these satellites to be launched was the TIROS-N in 1978.
Since the TIROS-N, five additional NOAA satellites have been
launched. Of these, the last three are still in orbit gathering data.
AVHRR
The NOAA AVHRR data are small-scale data and often cover an entire
country. The swath width is 2700 km and the satellites orbit at a
height of approximately 833 km (Kidwell, 1988; Needham, 1986).
The AVHRR system allows for direct transmission in real-time of data
called High Resolution Picture Transmission (HRPT). It also allows for
about ten minutes of data to be recorded over any portion of the
world on two recorders on board the satellite. These recorded data
are called Local Area Coverage (LAC). LAC and HRPT have identical
formats; the only difference is that HRPT are transmitted directly and
LAC are recorded.
There are three basic formats for AVHRR data which can be imported
into ERDAS IMAGINE:
Band     Wavelength (microns)   Comments
3, TIR   3.55 to 3.93 µm        This is a thermal band that can be used for snow and ice
                                discrimination. It is also useful for detecting fires.
Band   Wavelength
1      450 to 520 nm
2      520 to 600 nm
3      625 to 695 nm
4      760 to 900 nm
SPOT The first SPOT satellite, developed by the French Centre National
d’Etudes Spatiales (CNES), was launched in early 1986. The second
SPOT satellite was launched in 1990 and the third was launched in
1993. The sensors operate in two modes, multispectral and
panchromatic. SPOT is commonly referred to as a pushbroom
scanner meaning that all scanning parts are fixed, and scanning is
accomplished by the forward motion of the scanner. SPOT pushes
3000/6000 sensors along its orbit. This is different from Landsat
which scans with 16 detectors perpendicular to its orbit.
The SPOT satellite can observe the same area on the globe once
every 26 days. The SPOT scanner normally produces nadir views, but
it does have off-nadir viewing capability. Off-nadir refers to any point
that is not directly beneath the detectors, but off to an angle. Using
this off-nadir capability, one area on the Earth can be viewed as
often as every 3 days.
This off-nadir viewing can be programmed from the ground control
station, and is quite useful for collecting data in a region not directly
in the path of the scanner or in the event of a natural or man-made
disaster, where timeliness of data acquisition is crucial. It is also very
useful in collecting stereo data from which elevation data can be
extracted.
The width of the swath observed varies between 60 km for nadir
viewing and 80 km for off-nadir viewing at a height of 832 km
(Jensen, 1996).
Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 ×
10 m spatial resolution, contains 1 band—0.51 to 0.73 µm—and is
similar to a black and white photograph. It has a radiometric
resolution of 8 bits (Jensen, 1996).
XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit
radiometric resolution, and contains 3 bands (Jensen, 1996).
(Figure: comparison of SPOT Panchromatic and XS — Panchromatic has 1 band and 10 × 10 m pixels; XS has 3 bands and 20 × 20 m pixels; both have radiometric resolution 0-255.)
SPOT4 The SPOT4 satellite was launched in 1998. SPOT4 carries High
Resolution Visible Infrared (HR VIR) instruments that obtain
information in the visible and near-infrared spectral bands.
The SPOT4 satellite orbits the Earth at 822 km at the Equator. The
SPOT4 satellite has two sensors on board: a multispectral sensor,
and a panchromatic sensor. The multispectral scanner has a pixel
size of 20 × 20 m, and a swath width of 60 km. The panchromatic
scanner has a pixel size of 10 × 10 m, and a swath width of 60 km.
Source: SPOT Image, 1998; SPOT Image, 1999; Center for Health
Applications of Aerospace Related Technologies, 2000c.
Advantages of Using Radar Data    Radar data have several advantages over other types of
remotely sensed imagery:
Radar Sensors Radar images are generated by two different types of sensors:
(Figures: side-looking radar geometry showing beam width, sensor height at nadir, range and azimuth directions, and azimuth resolution between image lines; and a plot of return signal strength (DN) over time for a hill and a valley.)
Speckle Noise Once out of phase, the radar waves can interfere constructively or
destructively to produce light and dark pixels known as speckle
noise. Speckle noise in radar data must be reduced before the data
can be utilized. However, the radar image processing programs used
to reduce speckle noise also produce changes to the image. This
consideration, combined with the fact that different applications and
sensor outputs necessitate different speckle removal models, has
led ERDAS to offer several speckle reduction algorithms.
• enhance edges
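As an illustration of the general idea only (not one of the ERDAS IMAGINE speckle reduction algorithms, which include specialized filters such as Lee and Frost), a minimal Python sketch of a simple median speckle filter; NumPy and the function name are assumptions:

import numpy as np

def median_speckle_filter(image, size=3):
    # Replace each pixel with the median of its size x size neighborhood,
    # suppressing isolated bright and dark speckle pixels.
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out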
Applications for Radar Data    Radar data can be used independently in GIS applications or
combined with other satellite data, such as Landsat, SPOT, or
AVHRR. Possible GIS applications for radar data include:
Current Radar Sensors Table 19 gives a brief description of currently available radar
sensors. This is not a complete list of such sensors, but it does
represent the ones most useful for GIS applications.
ERS-1, 2   JERS-1   SIR-A, B   SIR-C   RADARSAT   Almaz-1
ERS-1
ERS-1, a radar satellite, was launched by ESA in July of 1991. One
of its primary instruments is the Along-Track Scanning Radiometer
(ATSR). The ATSR monitors changes in vegetation of the Earth’s
surface.
The instruments aboard ERS-1 include: SAR Image Mode, SAR Wave
Mode, Wind Scatterometer, Radar Altimeter, and Along Track
Scanning Radiometer-1 (European Space Agency, 1997).
ERS-1 receiving stations are located all over the world, in countries
such as Sweden, Norway, and Canada.
Some of the information that is obtained from the ERS-1 (as well as
ERS-2, to follow) includes:
According to ESA,
. . .ERS-1 provides both global and regional views of the Earth,
regardless of cloud coverage and sunlight conditions. An
operational near-real-time capability for data acquisition,
processing and dissemination, offering global data sets within
three hours of observation, has allowed the development of time-
critical applications particularly in weather, marine and ice
forecasting, which are of great importance for many industrial
activities (European Space Agency, 1995).
Source: European Space Agency, 1995
JERS-1
JERS stands for Japanese Earth Resources Satellite. The JERS-1
satellite was launched in February of 1992, with an SAR instrument
and a 4-band optical sensor aboard. The SAR sensor’s ground
resolution is 18 m, and the optical sensor’s ground resolution is
roughly 18 m across-track and 24 m along-track. The revisit time of
the satellite is every 44 days. The satellite travels at an altitude of
568 km, at an inclination of 97.67°.
Band   Wavelength
1      0.52 to 0.60 µm
2      0.63 to 0.69 µm
3      0.76 to 0.86 µm
4¹     0.76 to 0.86 µm
5      1.60 to 1.71 µm
6      2.01 to 2.12 µm
7      2.13 to 2.25 µm
8      2.27 to 2.40 µm
1 Viewing 15.3° forward
RADARSAT
RADARSAT satellites carry SARs, which are capable of transmitting
signals that can be received through clouds and during nighttime
hours. RADARSAT satellites have multiple imaging modes for
collecting data, which include Fine, Standard, Wide, ScanSAR
Narrow, ScanSAR Wide, Extended (H), and Extended (L). The
resolution and swath width vary with each one of these modes, but
in general, Fine offers the best resolution: 8 m.
SIR-A
SIR stands for Spaceborne Imaging Radar. SIR-A was launched and
began collecting data in 1981. The SIR-A mission built on the Seasat
SAR mission that preceded it by increasing the incidence angle with
which it captured images. The primary goal of the SIR-A mission was
to collect geological information. This information did not have as
pronounced a layover effect as previous imagery.
An important achievement of SIR-A data is that it is capable of
penetrating surfaces to obtain information. For example, NASA says
that the L-band capability of SIR-A enabled the discovery of dry river
beds in the Sahara Desert.
SIR-A uses L-band, has a swath width of 50 km, a range resolution
of 40 m, and an azimuth resolution of 40 m (Atlantis Scientific, Inc.,
1997).
SIR-B
SIR-B was launched and began collecting data in 1984. SIR-B
improved over SIR-A by using an articulating antenna. This antenna
allowed the incidence angle to range between 15 and 60 degrees.
This enabled the mapping of surface features using “multiple-
incidence angle backscatter signatures” (National Aeronautics and
Space Administration, 1996).
SIR-B uses L-band, has a swath width of 10-60 km, a range
resolution of 60-10 m, and an azimuth resolution of 25 m (Atlantis
Scientific, Inc., 1997).
Source: National Aeronautics and Space Administration, 1995a,
National Aeronautics and Space Administration, 1996; Atlantis
Scientific, Inc., 1997.
Bands Wavelength
L-Band 0.235 m
C-Band 0.058 m
X-Band 0.031 m
Future Radar Sensors Several radar satellites are planned for launch within the next
several years, but only a few programs will be successful. Following
are two scheduled programs which are known to be highly
achievable.
Radarsat-2
The Canadian Space Agency is working on the follow-on system to
Radarsat 1. Present plans are to include multipolar, C-band imagery.
Image Data from Aircraft    Image data can also be acquired from multispectral scanners or
radar sensors aboard aircraft, as well as satellites. This is useful if
there is not time to wait for the next satellite to pass over a particular
area, or if it is necessary to achieve a specific spatial or spectral
resolution that cannot be attained with satellite sensors.
For example, this type of data can be beneficial in the event of a
natural or man-made disaster, because there is more control over
when and where the data are gathered.
Two common types of airborne image data are:
• C-band
• L-band
• P-band
AVIRIS The AVIRIS was also developed by JPL under a contract with NASA.
AVIRIS data have been available since 1987.
Daedalus TMS Daedalus is a thematic mapper simulator (TMS), which simulates the
characteristics, such as spatial and radiometric, of the TM sensor on
Landsat spacecraft.
The Daedalus TMS orbits at 65,000 feet, and has a ground resolution
of 25 meters. The total scan angle is 43 degrees, and the swath
width is 15.6 km. Daedalus TMS is flown aboard the NASA ER-2
aircraft.
The Daedalus TMS spectral bands are as follows:
Daedalus Channel   TM Band   Wavelength
1 A 0.42 to 0.45 µm
2 1 0.45 to 0.52 µm
3 2 0.52 to 0.60 µm
4 B 0.60 to 0.62 µm
5 3 0.63 to 0.69 µm
6 C 0.69 to 0.75 µm
7 4 0.76 to 0.90 µm
8 D 0.91 to 1.05 µm
9 5 1.55 to 1.75 µm
10 7 2.08 to 2.35 µm
Image Data from Scanning    Hardcopy maps and photographs can be incorporated into the
ERDAS IMAGINE environment through the use of a scanning device
to transfer them into a digital (raster) format.
Desktop Scanners Desktop scanners are general purpose devices. They lack the image
detail and geometric accuracy of photogrammetric quality units, but
they are much less expensive. When using a desktop scanner, you
should make sure that the active area is at least 9 × 9 inches (i.e.,
A3-type scanners), enabling you to capture the entire photo frame.
Desktop scanners are appropriate for less rigorous uses, such as
digital photogrammetry in support of GIS or remote sensing
applications. Calibrating these units improves geometric accuracy,
but the results are still inferior to photogrammetric units. The image
correlation techniques that are necessary for automatic tie point
collection and elevation extraction are often sensitive to scan quality.
Therefore, errors can be introduced into the photogrammetric
solution that are attributable to scanning errors.
DOQs DOQ stands for digital orthophoto quadrangle. USGS defines a DOQ
as a computer-generated image of an aerial photo, which has been
orthorectified to give it map coordinates. DOQs can provide accurate
map measurements.
The format of the DOQ is a grayscale image that covers 3.75 minutes
of latitude by 3.75 minutes of longitude. DOQs use the North
American Datum of 1983, and the Universal Transverse Mercator
projection. Each pixel of a DOQ represents a square meter. 3.75-
minute quarter quadrangles have a 1:12,000 scale. 7.5-minute
quadrangles have a 1:24,000 scale. Some DOQs are available in
color-infrared, which is especially useful for vegetation monitoring.
DOQs can be used in land use and planning, management of natural
resources, environmental impact assessments, and watershed
analysis, among other applications. A DOQ can also be used as “a
cartographic base on which to overlay any number of associated
thematic layers for displaying, generating, and modifying planimetric
data or associated data files” (United States Geological Survey,
1999b).
According to the USGS:
DOQ production begins with an aerial photo and requires four
elements: (1) at least three ground positions that can be
identified within the photo; (2) camera calibration specifications,
such as focal length; (3) a digital elevation model (DEM) of the
area covered by the photo; and (4) a high-resolution digital
image of the photo, produced by scanning. The photo is
processed pixel by pixel to produce an image with features in true
geographic positions (United States Geological Survey, 1999b).
Source: United States Geological Survey, 1999b.
ADRG Data ADRG (ARC Digitized Raster Graphic) data come from the National
Imagery and Mapping Agency (NIMA), which was formerly known as
the Defense Mapping Agency (DMA). ADRG data are primarily used
for military purposes by defense contractors. The data are in 128 ×
128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide
large amounts of hardcopy graphic data without having to store and
maintain the actual hardcopy graphics.
ARC System The ARC system (Equal Arc-Second Raster Chart/Map) provides a
rectangular coordinate and projection system at any scale for the
Earth’s ellipsoid, based on the World Geodetic System 1984 (WGS
84). The ARC System divides the surface of the ellipsoid into 18
latitudinal bands called zones. Zones 1 - 9 cover the Northern
hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone
9 is the North Polar region. Zone 18 is the South Polar region.
Distribution Rectangles
For distribution, ADRG are divided into geographic data sets called
Distribution Rectangles (DRs). A DR may include data from one or
more source charts or maps. The boundary of a DR is a geographic
rectangle that typically coincides with chart and map neatlines.
The padding pixels are not imported by ERDAS IMAGINE, nor are
they counted when figuring the pixel height and width of each
image.
ADRG File Format Each CD-ROM contains up to eight different file types which make up
the ADRG format. ERDAS IMAGINE imports three types of ADRG data
files:
• .OVR (Overview)
• .IMG (Image)
• .Lxx (Legend)
.OVR (overview) The overview file contains a 16:1 reduced resolution image of the
whole DR. There is an overview file for each DR on a CD-ROM.
You can import from only one ZDR at a time. If a subset covers
multiple ZDRs, they must be imported separately and mosaicked
with the Mosaic option.
The white rectangle in Figure 30 represents the DR. The subset area
in this illustration would have to be imported as three files: one for
each zone in the DR.
Notice how the ZDRs overlap. Therefore, the .IMG files for Zones 2
and 4 would also be included in the subset area.
(Figure 30: a subset area crossing Zones 2, 3, and 4 of a DR, with overlap areas between adjacent zones.)
.IMG (scanned image data)    The .IMG files are the data files containing the actual scanned
hardcopy graphic(s). Each .IMG file contains one ZDR plus padding
pixels. The Import function converts the .IMG data files on the CD-
ROM to the ERDAS IMAGINE file format (.img). The image file can
then be displayed in a Viewer.
.Lxx (legend data) Legend files contain a variety of diagrams and accompanying
information. This is information that typically appears in the margin
or legend of the source graphic.
Each ARC System chart type has certain legend files associated with
the image(s) on the CD-ROM. The legend files associated with each
chart type are checked in Table 25.
ARC System Chart   IN  EL  SL  BN  VA  HA  AC  GE  GR  GL  LS
GNC • •
JNC / JNC-A • • • • •
ONC • • • • •
TPC • • • • • •
JOG-A • • • • • • • •
JOG-G / JOG-C • • • • • • •
JOG-R • • • • • •
ATC • • • • •
TLM • • • • • •
ADRG File Naming Convention    The ADRG file naming convention is based on a series of codes:
ssccddzz
• ss = the chart series code (see the table of ARC System charts)
• .IMG = This file contains the actual scanned image data for a
ZDR.
ADRI Data ADRI (ARC Digital Raster Imagery), like ADRG data, are also from
the NIMA and are currently available only to Department of Defense
contractors. The data are in 128 × 128 tiled, 8-bit format, stored on
8 mm tape in band sequential format.
ADRI consists of SPOT panchromatic satellite imagery transformed
into the ARC system and accompanied by ASCII encoded support
files.
Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR
consists of all or part of one or more images mosaicked to meet the
ARC bounding rectangle, which encloses a 1 degree by 1 degree
geographic area. (See Figure 31.) Source images are orthorectified
to mean sea level using NIMA Level I Digital Terrain Elevation Data
(DTED) or equivalent data (Air Force Intelligence Support Agency,
1991).
(Figure 31: a Distribution Rectangle made up of all or part of Images 1 through 9 mosaicked to the ARC bounding rectangle.)
In ADRI data, each DR contains only one ZDR. Each ZDR is stored as
a single raster image file, with no overlapping areas.
There are six different file types that make up the ADRI format: two
types of data files, three types of header files, and a color test patch
file. ERDAS IMAGINE imports two types of ADRI data files:
• .OVR (Overview)
• .IMG (Image)
The ADRI .IMG and .OVR file formats are different from the
ERDAS IMAGINE .img and .ovr file formats.
.OVR (overview) The overview file (.OVR) contains a 16:1 reduced resolution image
of the whole DR. There is an overview file for each DR on a tape. The
.OVR images show the mosaicking from the source images and the
dates when the source images were collected. (See Figure 32.) This
does not appear on the ZDR image.
.IMG (scanned image data)    The .IMG files contain the actual mosaicked images. Each .IMG file
contains one ZDR plus any padding pixels needed to fit the ARC
boundaries. Padding pixels are black and have a zero data value. The
ERDAS IMAGINE Import function converts the .IMG data files to the
ERDAS IMAGINE file format (.img). The image file can then be
displayed in a Viewer. Padding pixels are not imported, nor are they
counted in image height or width.
ADRI File Naming Convention    The ADRI file naming convention is based on a series of codes:
ssccddzz
- SP (SPOT panchromatic)
- SX (SPOT multispectral) (not currently available)
- TM (Landsat Thematic Mapper) (not currently available)
• .IMG = This file contains the actual scanned image data for a
ZDR.
You may change this name when the file is imported into ERDAS
IMAGINE. If you do not specify a file name, ERDAS IMAGINE
uses the ADRI file name for the image.
Raster Product Format    The Raster Product Format (RPF), from NIMA, is primarily used for
military purposes by defense contractors. RPF data are organized in
1536 × 1536 frames, with an internal tile size of 256 × 256 pixels.
RPF data are stored in an 8-bit format, with or without a pseudocolor
lookup table, on CD-ROM.
RPF Data are projected to the ARC system, based on the World
Geodetic System 1984 (WGS 84). The ARC System divides the
surface of the ellipsoid into 18 latitudinal bands called zones. Zones
1-9 cover the Northern hemisphere and zones A-J cover the
Southern hemisphere. Zone 9 is the North Polar region. Zone J is the
South Polar region.
Polar data is projected to the Azimuthal Equidistant projection. In
nonpolar zones, data is in the Equirectangular projection, which is
proportional to latitude and longitude. ERDAS IMAGINE includes the
option to use either Equirectangular or Geographic coordinates for
nonpolar RPF data. The aspect ratio of projected RPF data is nearly
1; frames appear to be square, and measurement is possible.
Unprojected RPFs seldom have an aspect of ratio of 1, but may be
easier to combine with other data in Geographic coordinates.
Two military products are currently based upon the general RPF
specification:
Topographic Data Satellite data can also be used to create elevation, or topographic
data through the use of stereoscopic pairs, as discussed above under
SPOT. Radar sensor data can also be a source of topographic
information, as discussed in “Terrain Analysis.” However, most
available elevation data are created with stereo photography and
topographic maps.
ERDAS IMAGINE software can load and use:
• USGS DEMs
• DTED
Arc/second Format
Most elevation data are in arc/second format. Arc/second refers to
data in the Latitude/Longitude (Lat/Lon) coordinate system. The
data are not rectangular, but follow the arc of the Earth’s latitudinal
and longitudinal lines.
Each degree of latitude and longitude is made up of 60 minutes. Each
minute is made up of 60 seconds. Arc/second data are often referred
to by the number of seconds in each pixel. For example, 3
arc/second data have pixels which are 3 × 3 seconds in size. The
actual area represented by each pixel is a function of its latitude.
Figure 33 illustrates a 1° × 1° area of the Earth.
A row of data file values from a DEM or DTED file is called a profile.
The profiles of DEM and DTED run south to north, that is, the first
pixel of the record is the southernmost pixel.
(Figure 33: a 1° × 1° area of the Earth in arc/second format, 1201 pixels across in both latitude and longitude.)
In Figure 33, there are 1201 pixels in the first row and 1201 pixels
in the last row, but the area represented by each pixel increases in
size from the top of the file to the bottom of the file. The extracted
section in the example above has been exaggerated to illustrate this
point.
Arc/second data used in conjunction with other image data, such as
TM or SPOT, must be rectified or projected onto a planar coordinate
system such as UTM.
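As an illustration of how the ground area of an arc/second pixel depends on latitude, a minimal Python sketch using a spherical-Earth approximation; the radius constant and function name are illustrative assumptions:

import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, spherical approximation

def arcsecond_pixel_size(latitude_deg, arcseconds=3):
    # Approximate north-south and east-west ground extent, in meters,
    # of an arc/second pixel at the given latitude.
    arc_rad = math.radians(arcseconds / 3600.0)
    ns = EARTH_RADIUS_M * arc_rad
    ew = EARTH_RADIUS_M * arc_rad * math.cos(math.radians(latitude_deg))  # shrinks toward the poles
    return ns, ew

print(arcsecond_pixel_size(0))    # roughly (92.7, 92.7) m at the Equator
print(arcsecond_pixel_size(60))   # roughly (92.7, 46.3) m at 60° latitude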
DEM DEMs are digital elevation model data. DEM was originally a term
reserved for elevation data provided by the USGS, but it is now used
to describe any digital elevation data.
DEMs can be:
USGS DEMs
There are two types of DEMs that are most commonly available from
USGS:
DTED DTED data are produced by the National Imagery and Mapping
Agency (NIMA) and are available only to US government agencies
and their contractors. DTED data are distributed on 9-track tapes
and on CD-ROM.
There are two types of DTED data available:
Using Topographic Data Topographic data have many uses in a GIS. For example,
topographic data can be used in conjunction with other data to:
Satellite Position Positions are determined through the traditional ranging technique.
The satellites orbit the Earth (at an altitude of 20,200 km) in such a
manner that several are always visible at any location on the Earth's
surface. A GPS receiver with line of sight to a GPS satellite can
determine how long the signal broadcast by the satellite has taken
to reach its location, and therefore can determine the distance to the
satellite. Thus, if the GPS receiver can see three or more satellites
and determine the distance to each, the GPS receiver can calculate
its own position based on the known positions of the satellites (i.e.,
the intersection of the spheres of distance from the satellite
locations). Theoretically, only three satellites should be required to
find the 3D position of the receiver, but various inaccuracies (largely
based on the quality of the clock within the GPS receiver that is used
to time the arrival of the signal) mean that at least four satellites are
generally required to determine a three-dimensional (3D) x, y, z
position.
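As an illustration of the ranging concept only (ignoring atmospheric delays, Earth rotation, and other corrections applied by real receivers), a minimal Python/NumPy sketch that recovers a position and receiver clock bias from four or more satellite positions and signal travel times; all names are hypothetical:

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_positions, travel_times, iterations=10):
    # sat_positions: (n, 3) satellite coordinates in meters; travel_times: (n,) seconds.
    # Returns (receiver position, clock bias in meters) via Gauss-Newton least squares.
    pseudoranges = np.asarray(travel_times) * C
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    for _ in range(iterations):
        diffs = x[:3] - sat_positions
        ranges = np.linalg.norm(diffs, axis=1)          # geometric distances to satellites
        residuals = pseudoranges - (ranges + x[3])      # measured minus predicted
        J = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += dx
    return x[:3], x[3]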
The explanation above is an over-simplification of the technique
used, but does show the concept behind the use of the GPS system
for determining position. The accuracy of that position is affected by
several factors, including the number of satellites that can be seen
by a receiver, but especially for commercial users by Selective
Availability. Each satellite actually sends two signals at different
frequencies. One is for civilian use and one for military use. The
signal used for commercial receivers has an error introduced to it
called Selective Availability. Selective Availability introduces a
positional inaccuracy of up to 100m to commercial GPS receivers.
This is mainly intended to deny highly accurate GPS
positioning to hostile users, but the errors can be ameliorated
through various techniques, such as keeping the GPS receiver
stationary, thereby allowing it to average out the errors, or through
more advanced techniques discussed in the following sections.
Applications of GPS Data GPS data finds many uses in remote sensing and GIS applications,
such as:
• DGPS data can be used to directly capture GIS data and survey
data for direct use in a GIS or CAD system. In this regard the GPS
receiver can be compared to using a digitizing tablet to collect
data, but instead of pointing and clicking at features on a paper
document, you are pointing and clicking on the real features to
capture the information.
Ordering Raster Data    Table 26 describes the different Landsat, SPOT, AVHRR, and DEM
products that can be ordered. Information in this chart does not
reflect all the products that are available, but only the most common
types that can be imported into ERDAS IMAGINE.
Table 26: Common Raster Data Products
Data Type   Ground Covered   Pixel Size   # of Bands   Format   Available Geocoded
Addresses to Contact For more information about these and related products, contact the
following agencies:
• SPOT data:
SPOT Image Corporation
1897 Preston White Dr.
Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com
• Landsat data:
Customer Services
U.S. Geological Survey
EROS Data Center
47914 252nd Street
Sioux Falls, SD 57198 USA
Telephone: 800/252-4547
Fax: 605/594-6589
Internet: edcwww.cr.usgs.gov/eros-home.html
• RADARSAT data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-5424
Fax: 613-238-5425
Internet: www.rsi.ca
• JFIF (JPEG)
• MrSID
• SDTS
• Sun Raster
ERDAS Ver. 7.X The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE
software. The two basic types of ERDAS Ver. 7.X data files are
indicated by the file name extensions:
.LAN and .GIS image files are stored in the same format. The image
data are arranged in a BIL format and can be 4-bit, 8-bit, or 16-bit.
The ERDAS Ver. 7.X file structure includes:
When you import a .GIS file, it becomes an image file with one
thematic raster layer. When you import a .LAN file, each band
becomes a continuous raster layer within the image file.
SDTS The Spatial Data Transfer Standard (SDTS) was developed by the
USGS to promote and facilitate the transfer of georeferenced data
and its associated metadata between dissimilar computer systems
without loss of fidelity. To achieve these goals, SDTS uses a flexible,
self-describing method of encoding data, which has enough structure
to permit interoperability.
For metadata, SDTS requires a number of statements regarding data
accuracy. In addition to the standard metadata, the producer may
supply detailed attribute data correlated to any image feature.
SDTS Profiles
The SDTS standard is organized into profiles. Profiles identify a
restricted subset of the standard needed to solve a certain problem
domain. Two subsets of interest to ERDAS IMAGINE users are:
Both methods read the contents of a frame buffer and write the
display data to a user-specified file. Depending on the display
hardware and options chosen, screendump can create any of the file
types listed in Table 27.
Any TIFF file that contains an unsupported value for one of these
elements may not be compatible with ERDAS IMAGINE.
Motorola (MSB/LSB)
Gray scale
Color palette
RGB (3-band)
Configuration: BIP, BSQ
Compression (d): None, Packbits, LZW (e)
a All bands must contain the same number of bits (i.e., 4, 4, 4 or 8, 8, 8). Multiband
  data with bit depths differing per band cannot be imported into ERDAS IMAGINE.
b Must be imported and exported as 4-bit data.
c Direct read/write only.
e LZW is governed by patents and is not supported by the basic version of ERDAS
  IMAGINE.
Geocoding
Geocoding is the process of linking coordinates in model space to the
Earth’s surface. Geocoding allows for the specification of projection,
datum, ellipsoid, etc. ERDAS IMAGINE interprets the GeoTIFF
geocoding to determine the latitude and longitude of the map
coordinates for GeoTIFF images. This interpretation also allows the
GeoTIFF image to be reprojected.
In GeoTIFF, the units of the map coordinates are obtained from the
geocoding, not from the georeferencing. In addition, GeoTIFF
defines a set of standard projected coordinate systems. The use of a
standard projected coordinate system in GeoTIFF constrains the
units that can be used with that standard system. Therefore, if the
units used with a projection in ERDAS IMAGINE are not equal to the
implied units of an equivalent GeoTIFF geocoding, ERDAS IMAGINE
transforms the georeferencing to conform to the implied units so that
the standard projected coordinate system code can be used. The
alternative (preserving the georeferencing as is and producing a
nonstandard projected coordinate system) is regarded as less
interoperable.
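As an illustration (rasterio is an open-source GeoTIFF reader, not part of ERDAS IMAGINE, and the file name is hypothetical), a minimal Python sketch of reading the geocoding and georeferencing of a GeoTIFF:

import rasterio  # open-source GeoTIFF reader, used here only for illustration

# "scene.tif" is a hypothetical GeoTIFF; its geocoding identifies the projected
# coordinate system and datum, while the georeferencing maps pixels to map coordinates.
with rasterio.open("scene.tif") as src:
    print(src.crs)        # coordinate reference system derived from the GeoTIFF geokeys
    print(src.transform)  # georeferencing: maps (row, col) to map coordinates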
Vector Data from Other Software Vendors    It is possible to directly import several common
vector formats into ERDAS IMAGINE. These files become vector layers when imported.
These data can then be used for the analyses and, in most cases,
exported back to their original format (if desired).
Although data can be converted from one type to another by
importing a file into ERDAS IMAGINE and then exporting the ERDAS
IMAGINE file into another format, the import and export routines
were designed to work together. For example, if you have
information in AutoCAD that you would like to use in the GIS, you
can import a Drawing Interchange File (DXF) into ERDAS IMAGINE,
do the analysis, and then export the data back to DXF format.
ARCGEN ARCGEN files are ASCII files created with the ArcInfo UNGENERATE
command. The import ARCGEN program is used to import features
to a new layer. Topology is not created or maintained, therefore the
coverage must be built or cleaned after it is imported into ERDAS
IMAGINE.
DXF files can be converted in either ASCII or binary format. The binary
format is an optional format for AutoCAD Releases 10 and 11. It is
structured just like the ASCII format, only the data are in binary
format.
DXF Entity   ERDAS IMAGINE Feature   Comments
The ERDAS IMAGINE import process also imports line and point
attribute data (if they exist) and creates an INFO directory with the
appropriate ACODE (arc attributes) and XCODE (point attributes)
files. If an imported DXF file is exported back to DXF format, this
information is also exported.
DLG DLGs are furnished by the U.S. Geological Survey and provide
planimetric base map information, such as transportation,
hydrography, contours, and public land survey boundaries. DLG files
are available for the following USGS map series:
• 1:100,000-scale quadrangles
IGES Entity   ERDAS IMAGINE Feature
The ERDAS IMAGINE import process also imports line and point
attribute data (if they exist) and creates an INFO directory with the
appropriate ACODE and XCODE files. If an imported IGES file is
exported back to IGES format, this information is also exported.
TIGER TIGER files are line network products of the U.S. Census Bureau. The
Census Bureau is using the TIGER system to create and maintain a
digital cartographic database that covers the United States, Puerto
Rico, Guam, the Virgin Islands, American Samoa, and the Trust
Territories of the Pacific.
TIGER/Line is the line network product of the TIGER system. The
cartographic base is taken from Geographic Base File/Dual
Independent Map Encoding (GBF/DIME), where available, and from
the USGS 1:100,000-scale national map series, SPOT imagery, and
a variety of other sources in all other areas, in order to have
continuous coverage for the entire United States. In addition to line
segments, TIGER files contain census geographic codes and, in
metropolitan areas, address ranges for the left and right sides of
each segment. TIGER files are available in ASCII format on both CD-
ROM and tape media. All released versions after April 1989 are
supported.
There is a great deal of attribute information provided with
TIGER/Line files. Line and point attribute information can be
converted into ERDAS IMAGINE format. The ERDAS IMAGINE import
process creates an INFO directory with the appropriate ACODE and
XCODE files. If an imported TIGER file is exported back to TIGER
format, this information is also exported.
TIGER attributes include the following:
Introduction This section defines some important terms that are relevant to image
display. Most of the terminology and definitions used in this chapter
are based on the X Window System (Massachusetts Institute of
Technology) terminology. This may differ from other systems, such
as Microsoft Windows NT.
A seat is a combination of an X-server and a host workstation.
Figure 35: Example of One Seat with One Display and Two Screens
Display Memory Size The size of memory varies for different displays. It is expressed in
terms of:
• the data file value(s) for one data unit in an image (file pixels), or
Colors Human perception of color comes from the relative amounts of red,
green, and blue light that are measured by the cones (sensors) in
the eye. Red, green, and blue light can be added together to produce
a wide variety of colors—a wider variety than can be formed from the
combinations of any three other colors. Red, green, and blue are
therefore the additive primary colors.
Color Guns
On a display, color guns direct electron beams that fall on red, green,
and blue phosphors. The phosphors glow at certain frequencies to
produce different colors. Color monitors are often called RGB
monitors, referring to the primary colors.
The red, green, and blue phosphors on the picture tube appear as
tiny colored dots on the display screen. The human eye integrates
these dots together, and combinations of red, green, and blue are
perceived. Each pixel is represented by an equal number of red,
green, and blue phosphors.
Brightness Values
Brightness values (or intensity values) are the quantities of each
primary color to be output to each displayed pixel. When an image
is displayed, brightness values are calculated for all three color guns,
for every pixel.
All of the colors that can be output to a display can be expressed with
three brightness values—one for each color gun.
Colormap and Colorcells A color on the screen is created by a combination of red, green, and
blue values, where each of these components is represented as an
8-bit value. Therefore, 24 bits are needed to represent a color. Since
many systems have only an 8-bit display, a colormap is used to
translate the 8-bit value into a color. A colormap is an ordered set of
colorcells, which is used to perform a function on a set of input
values. To display or print an image, the colormap translates data file
values in memory into brightness values for each color gun.
Colormaps are not limited to 8-bit displays.
Colorcell Index   Red   Green   Blue
1                 255   0       0
2                 0     170     90
3                 0     0       255
24                0     0       255
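As an illustration of how a colormap translates colorcell values into brightness values for the three color guns, a minimal Python/NumPy sketch using the colorcells above; the array names are illustrative:

import numpy as np

# Illustrative colormap: row index = colorcell value, columns = (red, green, blue).
colormap = np.zeros((256, 3), dtype=np.uint8)
colormap[1] = (255, 0, 0)
colormap[2] = (0, 170, 90)
colormap[3] = (0, 0, 255)
colormap[24] = (0, 0, 255)

# An 8-bit display holds one colorcell value per pixel; the colormap converts
# each value to the brightness values sent to the three color guns.
pixels = np.array([[1, 2], [3, 24]], dtype=np.uint8)
rgb = colormap[pixels]     # shape (2, 2, 3)
print(rgb[0, 0])           # [255 0 0] -> red pixel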
Read-only Colorcells
The color assigned to a read-only colorcell can be shared by other
application windows, but it cannot be changed once it is set. To
change the color of a pixel on the display, it would not be possible to
change the color for the corresponding colorcell. Instead, the pixel
value would have to be changed and the image redisplayed. For this
reason, it is not possible to use auto-update operations in ERDAS
IMAGINE with read-only colorcells.
Read/Write Colorcells
The color assigned to a read/write colorcell can be changed, but it
cannot be shared by other application windows. An application can
easily change the color of displayed pixels by changing the color for
the colorcell that corresponds to the pixel value. This allows
applications to use auto update operations. However, this colorcell
cannot be shared by other application windows, and all of the
colorcells in the colormap could quickly be utilized.
Display Types The possible range of different colors is determined by the display
type. ERDAS IMAGINE supports the following types of displays:
• 8-bit PseudoColor
• 24-bit DirectColor
• 24-bit TrueColor
A display may offer more than one visual type and pixel depth.
See the ERDAS IMAGINE Configuration Guide for more
information on specific display hardware.
32-bit Displays
A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit
DirectColor, or TrueColor display. Whether or not it is DirectColor or
TrueColor depends on the display hardware.
8-bit PseudoColor An 8-bit PseudoColor display has a colormap with 256 colorcells.
Each cell has a red, green, and blue brightness value, giving 256
combinations of red, green, and blue. The data file value for the pixel
is transformed into a colorcell value. The brightness values for the
colorcell that is specified by this colorcell value are used to define the
color to be displayed.
(Figure 36: transformation of data file values to a colorcell value on an 8-bit PseudoColor display — the red, green, and blue band values map to colorcell value 4, whose brightness values of 0, 0, 255 produce a blue pixel.)
Auto Update
An 8-bit PseudoColor display has read-only and read/write colorcells,
allowing ERDAS IMAGINE to perform near real-time color
modifications using Auto Update and Auto Apply options.
24-bit DirectColor A 24-bit DirectColor display enables you to view up to three bands of
data at one time, creating displayed pixels that represent the
relationships between the bands by their colors. Since this is a 24-
bit display, it offers up to 256 shades of red, 256 shades of green,
and 256 shades of blue, which is approximately 16 million different
colors (256³). The data file values for each band are transformed
into colorcell values. The colorcell that is specified by these values is
used to define the color to be displayed.
(Figure 37: transformation of data file values to separate red, green, and blue colorcell values on a 24-bit DirectColor display — the selected colorcells hold brightness values of 0, 90, and 200, producing a blue-green pixel (0, 90, 200 RGB).)
In Figure 37, data file values for a pixel of three continuous raster
layers (bands) are transformed to separate colorcell values for each
band. Since the colorcell value is 1 for the red band, 2 for the green
band, and 6 for the blue band, the RGB brightness values are 0, 90,
200. This displays the pixel as a blue-green color.
This type of display grants a very large number of colors to ERDAS
IMAGINE and it works well with all types of data.
Auto Update
A 24-bit DirectColor display has read-only and read/write colorcells,
allowing ERDAS IMAGINE to perform real-time color modifications
using the Auto Update and Auto Apply options.
In Figure 38, data file values for a pixel of three continuous raster
layers (bands) are transformed to separate screen values for each
band. Since the screen value is 0 for the red band, 90 for the green
band, and 200 for the blue band, the RGB brightness values are 0,
90, and 200. This displays the pixel as a blue-green color.
Auto Update
The 24-bit TrueColor display does not use the colormap in ERDAS
IMAGINE, and thus does not provide ERDAS IMAGINE with any real-
time color changing capability. Each time a color is changed, the
screen values must be calculated and the image must be redrawn.
Color Quality
The 24-bit TrueColor visual provides the best color quality possible
with standard equipment. There is no color degradation under any
circumstances with this display.
• 8-bit PseudoColor
• 15-bit HiColor
• 24-bit TrueColor
15-bit HiColor
A 15-bit HiColor display for the PC assigns colors the same way as
the X Windows 24-bit TrueColor display, except that it offers 32
shades of red, 32 shades of green, and 32 shades of blue, for a total
of 32,768 possible color combinations. Some video display adapters
allocate 6 bits to the green color gun, allowing 64,000 colors. These
adapters use a 16-bit color scheme.
24-bit TrueColor
A 24-bit TrueColor display for the PC assigns colors the same way as
the X Windows 24-bit TrueColor display.
Displaying Raster Layers    Image files (.img) are raster files in the ERDAS IMAGINE format.
There are two types of raster layers:
• continuous
• thematic
Continuous Raster Layers An image file (.img) can contain several continuous raster layers;
therefore, each pixel can have multiple data file values. When
displaying an image file with continuous raster layers, it is possible
to assign which layers (bands) are to be displayed with each of the
three color guns. The data file values in each layer are input to the
assigned color gun. The most useful color assignments are those that
allow for an easy interpretation of the displayed image. For example:
• Landsat TM—color-infrared: 4, 3, 2
This is infrared because band 4 = infrared.
• SPOT Multispectral—color-infrared: 3, 2, 1
This is infrared because band 3 = infrared.
Contrast Table
When an image is displayed, ERDAS IMAGINE automatically creates
a contrast table for continuous raster layers. The red, green, and
blue brightness values for each band are stored in this table.
Since the data file values in continuous raster layers are quantitative
and related, the brightness values in the colormap are also
quantitative and related. The screen pixels represent the
relationships between the values of the file pixels by their colors. For
example, a screen pixel that is bright red has a high brightness value
in the red color gun, and a high data file value in the layer assigned
to red, relative to other data file values in that layer.
The brightness values often differ from the data file values, but they
usually remain in the same order of lowest to highest. Some
meaningful relationships between the values are usually maintained.
Contrast Stretch
Different displays have different ranges of possible brightness
values. The range of most displays is 0 to 255 for each color gun.
Since the data file values in a continuous raster layer often represent
raw data (such as elevation or an amount of reflected light), the
range of data file values is often not the same as the range of
brightness values of the display. Therefore, a contrast stretch is
usually performed, which stretches the range of the values to fit the
range of the display.
For example, Figure 39 shows a layer that has data file values from
30 to 40. When these values are used as brightness values, the
contrast of the displayed image is poor. A contrast stretch simply
stretches the range between the lower and higher data file values,
so that the contrast of the displayed image is higher—that is, lower
data file values are displayed with the lowest brightness values, and
higher data file values are displayed with the highest brightness
values.
(Figure 39: contrast stretch example — data file values from 30 to 40 are stretched over the display range 0 to 255, for example 30→0 and 31→25.)
Statistics Files
To perform a contrast stretch, certain statistics are necessary, such
as the mean and the standard deviation of the data file values in
each layer.
(Figure: original and stretched histograms — the range from -2σ to +2σ about the mean is stretched over 0 to 255; stretched values that fall below 0 or above 255 are not displayed.)
The mean and standard deviation of the data file values for each
band are used to locate the majority of the data file values. The
number of standard deviations above and below the mean can be
entered, which determines the range of data used in the stretch.
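As an illustration of the standard deviation stretch described above (the clipping shown here is an assumption, not ERDAS IMAGINE's exact behavior), a minimal Python/NumPy sketch:

import numpy as np

def std_dev_stretch(data, n_std=2, out_min=0, out_max=255):
    # Linearly stretch the range (mean - n_std*sigma, mean + n_std*sigma) onto
    # the display range; values outside that range are clipped here.
    mean, sigma = data.mean(), data.std()
    low, high = mean - n_std * sigma, mean + n_std * sigma
    stretched = (data - low) / (high - low) * (out_max - out_min) + out_min
    return np.clip(stretched, out_min, out_max).astype(np.uint8)

# Example: elevation-like data file values become brightness values 0-255.
dem = np.random.normal(loc=1200, scale=150, size=(512, 512))
brightness = std_dev_stretch(dem)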
(Figure: contrast stretch by color gun — for each band, the histogram and the range of data file values to be displayed (0 to 255 in) pass through the colormap to brightness values (0 to 255 out) for the red, green, and blue color guns, producing the color display.)
Color Table
When a thematic raster layer is displayed, ERDAS IMAGINE
automatically creates a color table. The red, green, and blue
brightness values for each class are stored in this table.
RGB Colors
Individual color schemes can be created by combining red, green,
and blue in different combinations, and assigning colors to the
classes of a thematic layer.
Colors can be expressed numerically, as the brightness values for
each color gun. Brightness values of a display generally range from
0 to 255; however, ERDAS IMAGINE translates the values to a range of 0 to
1. The maximum brightness value for the display device is scaled to
1. The colors listed in Table 32 are based on the range that is used
to assign brightness values in ERDAS IMAGINE.
Color          Red    Green   Blue
Red            1      0       0
Red-Orange     1      .392    0
Yellow         1      1       0
Yellow-Green   .490   1       0
Green          0      1       0
Cyan           0      1       1
Blue           0      0       1
Black          0      0       0
White          1      1       1
NOTE: Black is the absence of all color (0,0,0) and white is created
from the highest values of all three colors (1, 1, 1). To lighten a
color, increase all three brightness values. To darken a color,
decrease all three brightness values.
Use the Raster Attribute Editor to create your own color scheme.
(Figure: applying a color scheme to a thematic raster layer.)
Original image by class:
1 2 3
4 3 5
2 1 4
Brightness Values
CLASS   COLOR    RED   GREEN   BLUE
1       Red      255   0       0
2       Orange   255   128     0
3       Yellow   255   255     0
4       Violet   128   0       255
5       Green    0     255     0
Display (R = Red, O = Orange, Y = Yellow, V = Violet, G = Green):
R O Y
V Y G
O R V
NOTE: The more Viewers that are opened simultaneously, the more
RAM is necessary.
The Viewer not only makes digital images visible quickly, but it can
also be used as a tool for image processing and raster GIS modeling.
The uses of the Viewer are listed briefly in this section, and described
in greater detail in other chapters of the ERDAS Field Guide.
Colormap
ERDAS IMAGINE does not use the entire colormap because there are
other applications that also need to use it, including the window
manager, terminal windows, Arc View, or a clock. Therefore, there
are some limitations to the number of colors that the Viewer can
display simultaneously, and flickering may occur as well.
Color Flickering
If an application requests a new color that does not exist in the
colormap, the server assigns that color to an empty colorcell.
However, if there are not any available colorcells and the application
requires a private colorcell, then a private colormap is created for the
application window. Since this is a private colormap, when the cursor
is moved out of the window, the server uses the main colormap and
the brightness values assigned to the colorcells. Therefore, the
colors in the private colormap are not applied and the screen flickers.
Once the cursor is moved into the application window, the correct
colors are applied for that window.
Resampling
When a raster layer(s) is displayed, the file pixels may be resampled
for display on the screen. Resampling is used to calculate pixel
values when one raster grid must be fitted to another. In this case,
the raster grid defined by the file must be fit to the grid of screen
pixels in the Viewer.
All Viewer operations are file-based. So, any time an image is
resampled in the Viewer, the Viewer uses the file as its source. If the
raster layer is magnified or reduced, the Viewer refits the file grid to
the new screen grid.
The resampling methods available are:
Preference Editor
The Preference Editor enables you to set parameters for the Viewer
that affect the way the Viewer operates.
See the ERDAS IMAGINE On-Line Help for the Preference Editor
for information on how to set preferences for the Viewer.
Pyramid Layers Sometimes a large image file may take a long time to display in the
Viewer or to be resampled by an application. The Pyramid Layer
option enables you to display large images faster and allows certain
applications to rapidly access the resampled data. Pyramid layers are
image layers which are copies of the original layer successively
reduced by the power of 2 and then resampled. If the raster layer is
thematic, then it is resampled using the Nearest Neighbor method.
If the raster layer is continuous, it is resampled by a method that is
similar to Cubic Convolution. The data file values for sixteen pixels in
a 4 × 4 window are used to calculate an output data file value with
a filter function.
Where:
n = number of pyramid layers
(Figure: pyramid layers within an image file — the original 4K × 4K image plus successively reduced pyramid layers such as 128 × 128 and 64 × 64; ERDAS IMAGINE selects the pyramid layer that displays the fastest in the Viewer.)
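As an illustration of the idea of successive reduction by powers of 2 (using simple 2 × 2 averaging in place of the Nearest Neighbor or Cubic Convolution-like resampling described above), a minimal Python/NumPy sketch:

import numpy as np

def build_pyramid(image, min_size=64):
    # Return a list of layers, each reduced by a power of 2 from the original,
    # down to roughly min_size x min_size. Uses 2 x 2 block averaging for brevity;
    # a thematic layer would use Nearest Neighbor instead.
    layers = []
    current = image.astype(float)
    while min(current.shape) // 2 >= min_size:
        rows, cols = (current.shape[0] // 2) * 2, (current.shape[1] // 2) * 2
        blocks = current[:rows, :cols].reshape(rows // 2, 2, cols // 2, 2)
        current = blocks.mean(axis=(1, 3))
        layers.append(current)
    return layers

original = np.random.randint(0, 256, size=(4096, 4096))
pyramid = build_pyramid(original)
print([layer.shape for layer in pyramid])  # (2048, 2048), (1024, 1024), ..., (64, 64)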
For more information about the .img format, see “Raster Data”
and the On-Line Help.
Color Patches
When the Viewer performs dithering, it uses patches of 2 × 2 pixels.
If the desired color has an exact match, then all of the values in the
patch match it. If the desired color is halfway between two of the
usable colors, the patch contains two pixels of each of the
surrounding usable colors. If it is 3/4 of the way between two usable
colors, the patch contains 3 pixels of the color it is closest to, and 1
pixel of the color that is second closest. Figure 45 shows what the
color patches would look like if the usable colors were black and
white and the desired color was gray.
If the desired color is not an even multiple of 1/4 of the way between
two allowable colors, it is rounded to the nearest 1/4. The Viewer
separately dithers the red, green, and blue components of a desired
color.
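As an illustration of the 2 × 2 patch idea for a single color component, assuming the only usable colors are black (0) and white (255) as in Figure 45, a minimal Python sketch; the function name is illustrative:

def dither_patch(desired, low=0, high=255):
    # Build a 2 x 2 patch approximating 'desired' using only 'low' and 'high'.
    # The fraction of the way from low to high is rounded to the nearest 1/4;
    # that fraction of the 4 patch pixels is set to the high color.
    fraction = (desired - low) / (high - low)
    n_high = round(fraction * 4)
    pixels = [high] * n_high + [low] * (4 - n_high)
    return [pixels[:2], pixels[2:]]

print(dither_patch(128))   # gray: two white and two black pixels
print(dither_patch(192))   # 3/4 of the way: three white pixels, one black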
Color Artifacts
Since the Viewer requires 2 × 2 pixel patches to represent a color,
and actual images typically have a different color for each pixel,
artifacts may appear in an image that has been dithered. Usually, the
difference in color resolution is insignificant, because adjacent pixels
are normally similar to each other. Similarity between adjacent
pixels usually smooths out artifacts that appear.
Viewing Layers The Viewer displays layers as one of the following types of view
layers:
• annotation
• vector
• pseudo color
• gray scale
• true color
Viewing Multiple Layers It is possible to view as many layers of all types (with the exception
of vector layers, which have a limit of 10) at one time in a single
Viewer.
To overlay multiple layers in one Viewer, they must all be referenced
to the same map coordinate system. The layers are positioned
geographically within the window, and resampled to the same scale
as previously displayed layers. Therefore, raster layers in one Viewer
can have different cell sizes.
When multiple layers are magnified or reduced, raster layers are
resampled from the file to fit to the new scale.
Display multiple layers from the Viewer. Be sure to turn off the
Clear Display check box when you open subsequent layers.
Overlapping Layers
When layers overlap, the order in which the layers are opened is very
important. The last layer that is opened always appears to be on top
of the previously opened layers.
In a raster layer, it is possible to make values of zero transparent in
the Viewer, meaning that they have no opacity. Thus, if a raster layer
with zeros is displayed over other layers, the areas with zero values
allow the underlying layers to show through.
Opacity is a measure of how opaque, or solid, a color is displayed in
a raster layer. Opacity is a component of the color scheme of
categorical data displayed in pseudo color.
• 50% opacity lets some color show, and lets some of the
underlying layers show through. The effect is like looking at the
underlying layers through a colored fog.
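As an illustration of opacity and zero-value transparency (a simple blend rule, not the Viewer's exact implementation), a minimal Python/NumPy sketch of compositing a raster layer over underlying layers:

import numpy as np

def composite(top_rgb, under_rgb, top_values, opacity=0.5):
    # top_rgb, under_rgb: float arrays of shape (rows, cols, 3), brightness values 0-255.
    # top_values: the top layer's data file values; zeros are drawn fully transparent.
    # opacity: 1.0 is a solid color, 0.5 lets the underlying layers show through.
    alpha = np.where(top_values == 0, 0.0, opacity)[..., None]
    return alpha * top_rgb + (1.0 - alpha) * under_rgb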
Non-Overlapping Layers
Multiple layers that are opened in the same Viewer do not have to
overlap. Layers that cover distinct geographic areas can be opened
in the same Viewer. The layers are automatically positioned in the
Viewer window according to their map coordinates, and are
positioned relative to one another geographically. The map
coordinate systems for the layers must be the same.
Linking Viewers Linking Viewers is appropriate when two Viewers cover the same
geographic area (at least partially), and are referenced to the same
map units. When two Viewers are linked:
• You can manipulate the zoom ratio of one Viewer from another.
Zoom and Roam Zooming enlarges an image on the display. When an image is
zoomed, it can be roamed (scrolled) so that the desired portion of
the image appears on the display screen. Any image that does not
fit entirely in the Viewer can be roamed and/or zoomed. Roaming
and zooming have no effect on how the image is stored in the file.
The zoom ratio describes the size of the image on the screen in terms
of the number of file pixels used to store the image. It is the ratio of
the number of screen pixels in the X or Y dimension to the number
that are used to display the corresponding file pixels.
A zoom ratio greater than 1 is a magnification, which makes the
image features appear larger in the Viewer. A zoom ratio less than 1
is a reduction, which makes the image features appear smaller in the
Viewer.
Zoom the data in the Viewer via the Viewer menu bar, the
Viewer tool bar, or the Quick View right-button menu.
Enhancing Continuous Raster Layers    Working with the brightness values in the colormap is
useful for image enhancement. Often, a trial and error approach is needed to
produce an image that has the right contrast and highlights the right
features. By using the tools in the Viewer, it is possible to quickly
view the effects of different enhancement techniques, undo
enhancements that are not helpful, and then save the best results to
disk.
Creating New Image Files It is easy to create a new image file (.img) from the layer(s)
displayed in the Viewer. The new image file contains three
continuous raster layers (RGB), regardless of how many layers are
currently displayed. The Image Information utility must be used to
create statistics for the new image file before the file is enhanced.
Annotation layers can be converted to raster format, and written to
an image file. Or, vector data can be gridded into an image,
overwriting the values of the pixels in the image plane, and
incorporated into the same band as the image.
Use the Viewer to .img function to create a new image file from
the currently displayed raster layers.
Introduction The Mosaic process offers you the capability to stitch images
together so one large, cohesive image of an area can be created.
The Mosaic Tool's features let you smooth the images before
mosaicking them together, color balance them, or adjust the
histograms of each image, in order to present a better overall
picture. The images must
contain map and projection information, but they do not need to be
in the same projection or have the same cell sizes. The input images
must have the same number of layers.
In addition to Mosaic Tool, Mosaic Wizard and Mosaic Direct are
features designed to make the Mosaic process easier for you. The
Mosaic Wizard will take you through the steps of creating a Mosaic
project. Mosaic Direct is designed to simplify the mosaic process by
gathering important information regarding the mosaic project from
you and then building the project without a lot of pre-processing by
you. The difference between the two is that Mosaic Wizard is a simplified
interface with minimal options for the beginning Mosaic user while
Mosaic Direct allows the regular or advanced Mosaic user to easily
and quickly set up Mosaic projects.
Mosaic Tool still offers you the most options and allows the most
input from you. There are a number of features included with the
Mosaic Tool to aid you in creating a better mosaicked image from
many separate images. In this chapter, the following features will be
discussed as part of the Mosaic Tool input image options followed by
an overview of Mosaic Wizard and Mosaic Direct. In Input Image
Mode for Mosaic Tool:
• Exclude Areas
• Image Dodging
• Color Balancing
• Histogram Matching
You can choose from the following when using Intersection Mode:
Image Dodging The Image Dodging feature of the Mosaic Tool applies a filter and
global statistics across each image you are mosaicking in order to
smooth out light imbalance over the image. The outcome of Image
Dodging is very similar to that of Color Balancing, but if you wish to
perform both functions on your images before mosaicking, you need
to do Image Dodging first. Unlike Color Balancing, Image Dodging
uses blocks instead of pixels to balance the image.
When you open the Image Dodging dialog, you see several sections:
Options for Current Image, Options for All Images, and Display
Setting all appear above the viewer area, which shows the image and
provides a place for previewing the dodged image. If you want
to skip dodging for a certain image, you can check the Don’t do
dodging on this image box and skip to the next image you want to
mosaic.
In the area titled Statistics Collection, you can change the Grid Size,
Skip Factor X, and Skip Factor Y. If you want a specific number to
apply to all of your images, you can click that button so you don’t
have to reenter the information with each new image.
Color Balancing When you click Use Color Balancing, you are given the option of
Automatic Color Balancing. If you choose this option, the method will
be chosen for you. If you want to manually choose the surface
method and display options, choose Manual Color Manipulation in the
Set Color Balancing dialog.
Mosaic Color Balancing gives you several options to balance any
color disparities in your images before mosaicking them together
into one large image. When you choose to use Color Balancing in the
Color Corrections dialog, you will be asked if you want to color
balance your images automatically or manually. For more control
over how the images are color balanced, you should choose the
manual color balancing option. Once you choose this option, you will
have access to the Mosaic Color Balancing tool where you can choose
different surface methods, display options, and surface settings for
color balancing your images.
Surface Methods
When choosing a surface method you should concentrate on how the
light abnormality in your image is dispersed. Depending on the
shape of the bright or shadowed area you want to correct, you
should choose one of the following:
Display Setting
The Display Setting area of the Mosaic Color Balancing tool lets you
choose between RGB images and Single Band images. You can also
alter which layer in an RGB image is the red, green, or blue.
Surface Settings
When you choose a Surface Method, the Surface Settings become
the parameters used in that method’s formula. The parameters
define the surface, and the surface will then be used to flatten the
brightness variation throughout the image. You can change the
following Surface Settings:
• Offset
• Scale
• Center X
• Center Y
• Axis Ratio
As you change the settings, you can see the Image Profile graph
change as well. If you want to preview the color balanced image
before accepting it, you can click Preview at the bottom of the Mosaic
Color Balancing tool. This is helpful because you can change any
disparities that still exist in the image.
Intersection Mode When you mosaic images, you will have overlapping areas. For those
overlapping areas, you can specify a cutline so that the pixels on one
side of a particular cutline take the value of one overlapping image,
while the pixels on the other side of the cutline take the value of
another overlapping image. The cutlines can be generated manually
or automatically.
When you choose the Set Mode for Intersection button on the Mosaic
Tool toolbar, you have several different options for handling the
overlapping of your images. The features for dealing with image
overlap include:
No Cutline Exists
When no cutline exists between overlapping images, you will need to
choose how to handle the overlap. You are given the following
choices:
• Overlay
• Average
• Minimum
• Maximum
• Feather
Cutline Exists
When a cutline does exist between images, you will need to decide
on smoothing and feathering options to cover the overlap area in the
vicinity of the cutline. The Smoothing Options area allows you to
choose both the Distance and the Smoothing Filter. The Feathering
Options given are No Feathering, Feathering, and Feathering by
Distance. If you choose Feathering by Distance, you will be able to
enter a specific distance.
Output Image Options This dialog lets you define your output map areas and change output
map projection if you wish. You will be given the choice of using
Union of All Inputs, User-defined AOI, Map Series File, USGS Maps
Database, or ASCII Sheet File as your defining feature for an output
map area. The default is Union of All Inputs.
Different choices yield different options to further modify the output
image. For instance, if you select User-defined AOI, then you are
given the choice of outputting multiple AOI objects to either multiple
files or a single file. If you choose Map Series File, you will be able to
enter the filename you want to use and choose whether to treat the
map extent as pixel centers or pixel edges.
If you choose ASCII Sheet File to define the Output Map Area, you
will need to supply a text file. If you need to create an ASCII file, you
should do so according to the following definitions:
ASCII Sheet File Definition:
The ASCII Sheet File may have one or more records in the following
format. Fields are white space delimited.
• Field 3: X coordinate
• Field 4: Y coordinate
Fields 2-4 may be repeated for any two of the coordinates or for all
four. If all four coordinates are present, the sheet will be treated as
a rotated orthoimage. Otherwise, it will be treated as a north-up
orthoimage.
Display vs. File Enhancement
With ERDAS IMAGINE, image enhancement may be performed:
• temporarily, upon the image that is displayed in the Viewer (by
manipulating the function and display memories), or
Spatial Modeling Enhancements
Two types of models for enhancement can be created in ERDAS IMAGINE:
Image Interpreter
ERDAS IMAGINE supplies many algorithms constructed as models,
which are ready to be applied with user-input parameters at the
touch of a button. These graphical models, created with Model
Maker, are listed as menu functions in the Image Interpreter. These
functions are mentioned throughout this chapter. Just remember,
these are modeling functions which can be edited and adapted as
needed with Model Maker or the SML.
Correcting Data Each generation of sensors shows improved data acquisition and
image quality over previous generations. However, some anomalies
still exist that are inherent to certain sensors and can be corrected
by applying mathematical formulas derived from the distortions
(Lillesand and Kiefer, 1987). In addition, the natural distortion that
results from the curvature and rotation of the Earth in relation to the
sensor platform produces distortions in the image data, which can
also be corrected.
Radiometric Correction
Generally, there are two types of data correction: radiometric and
geometric. Radiometric correction addresses variations in the pixel
intensities (DNs) that are not caused by the object or scene being
scanned. These variations include:
• topographic effects
• atmospheric effects
Geometric Correction
Geometric correction addresses errors in the relative positions of
pixels. These errors are induced by:
• terrain variations
Radiometric Correction:
Visible/Infrared Imagery
Striping
Striping or banding occurs if a detector goes out of adjustment—that
is, it provides readings consistently greater than or less than the
other detectors for the same band over the same ground cover.
Some Landsat 1, 2, and 3 data have striping every sixth line,
because of improper calibration of some of the 24 detectors that
were used by the MSS. The stripes are not constant data values, nor
is there a constant error factor or bias. The differing response of the
errant detector is a complex function of the data value sensed.
This problem has been largely eliminated in the newer sensors.
Various algorithms have been advanced in current literature to help
correct this problem in the older data. Among these algorithms are
simple along-line convolution, high-pass filtering, and forward and
reverse principal component transformations (Crippen, 1989a).
Data from airborne multispectral or hyperspectral imaging scanners
also shows a pronounced striping pattern due to varying offsets in
the multielement detectors. This effect can be further exacerbated
by unfavorable sun angle. These artifacts can be minimized by
correcting each scan line to a scene-derived average (Kruse, 1988).
Line Dropout
Another common remote sensing device error is line dropout. Line
dropout occurs when a detector either completely fails to function,
or becomes temporarily saturated during a scan (like the effect of a
camera flash on the retina). The result is a line or partial line of data
with higher data file values, creating a horizontal streak until the
detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line
of estimated data file values, which is based on the lines above and
below it.
• linear regressions
• atmospheric modeling
Linear Regressions
A number of methods using linear regressions have been tried.
These techniques use bispectral plots and assume that the position
of any pixel along that plot is strictly a result of illumination. The
slope then equals the relative reflectivities for the two spectral
bands. At an illumination of zero, the regression plots should pass
through the bispectral origin. Offsets from this represent the additive
extraneous components, due to atmosphere effects (Crippen, 1987).
Figure 47: histograms of the original data and the enhanced data (frequency plotted against data file values, 0 to 255, with the range from j to k marked).
In Figure 47, the range between j and k in the histogram of the
original data is about one third of the total range of the data. When
the same data are radiometrically enhanced, the range between j
and k can be widened. Therefore, the pixels between j and k gain
contrast—it is easier to distinguish different brightness values in
these pixels.
However, the pixels outside the range between j and k are more
grouped together than in the original histogram to compensate for
the stretch between j and k. Contrast among these pixels is lost.
Figure: output (LUT) values plotted against input data file values (0 to 255) for linear, nonlinear, and piecewise linear contrast stretches. Notice that the graph line with the steepest (highest) slope brings out the most contrast by stretching output values farther apart.
The contrast value for each range represents the percent of the
available output range that particular range occupies. The brightness
value for each range represents the middle of the total range of
brightness values occupied by that range. Since rules 1 and 2 above
are enforced, as the contrast and brightness values are changed,
they may affect the contrast and brightness of other ranges. For
example, if the contrast of the low range increases, it forces the
contrast of the middle to decrease.
The statistics in the image file contain the mean, standard deviation,
and other statistics on each band of data. The mean and standard
deviation are used to determine the range of data file values to be
translated into brightness values or new data file values. You can
specify the number of standard deviations from the mean that are to
be used in the contrast stretch. Usually the data file values that are
two standard deviations above and below the mean are used. If the
data have a normal distribution, then this range represents
approximately 95 percent of the data.
The mean and standard deviation are used instead of the minimum
and maximum data file values because the minimum and maximum
data file values are usually not representative of most of the data. A
notable exception occurs when the feature being sought is in
shadow. The shadow pixels are usually at the low extreme of the
data file values, outside the range of two standard deviations from
the mean.
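For illustration, the following is a minimal Python/NumPy sketch of a two-standard-deviation linear contrast stretch of the kind described above. The function name and defaults are illustrative; this is not the ERDAS IMAGINE implementation.

import numpy as np

def stddev_stretch(band, n_stddev=2.0):
    # Linear stretch between (mean - n*std) and (mean + n*std); values
    # outside that range are clipped. Sketch only, for illustration.
    band = band.astype(np.float64)
    mean, std = band.mean(), band.std()
    low, high = mean - n_stddev * std, mean + n_stddev * std
    stretched = (band - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

For data with a normal distribution, this stretches approximately 95 percent of the values across the full 0 to 255 output range.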
Figure: a piecewise contrast stretch applied to an input histogram, shown in four steps:
1. Linear stretch. Values are clipped at 255.
2. A breakpoint is added to the linear function, redistributing the contrast.
3. Another breakpoint is added. Contrast at the peak of the histogram continues to increase.
4. The breakpoint at the top of the function is moved so that values are not clipped.

Figure: in the stretched histogram, pixels at the tail are grouped together, so contrast there is lost; pixels at the peak are spread apart, so contrast is gained.
A = T / N
Where:
N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin
The pixels of each input value are assigned to bins, so that the
number of pixels in each bin is as close to A as possible. Consider
Figure 54:
Figure 54: example histogram of an image with data file values 0 to 9; pixel counts range from 5 to 40, and the equalized number of pixels per bin is A = 24.
Where:
A = equalized number of pixels per bin (see above)
Hi = the number of values with the value i (histogram)
int = integer function (truncating real numbers to
integer)
Bi = bin number for pixels with value i
Source: Modified from Gonzalez and Wintz, 1977
The 10 bins are rescaled to the range 0 to M. In this example, M =
9, because the input values ranged from 0 to 9, so that the equalized
histogram can be compared to the original. The output histogram of
this equalized image looks like Figure 55:
Figure 55: output histogram of the equalized example image, plotted against output data file values 0 to 9 (A = 24).
Effect on Contrast
By comparing the original histogram of the example data with the
one above, you can see that the enhanced image gains contrast in
the peaks of the original histogram. For example, the input range of
3 to 7 is stretched to the range 1 to 8. However, data values at the
tails of the original histogram are grouped together. Input values 0
through 2 all have the output value of 0. So, contrast among the tail
pixels, which usually make up the darkest and brightest regions of
the input image, is lost.
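The bin-assignment idea can be sketched in a few lines of Python/NumPy. This is an illustrative reading of the description above (the pixels of each input value are assigned to bins so that each bin holds roughly A = T / N pixels), not the exact ERDAS IMAGINE routine; the function name is hypothetical.

import numpy as np

def equalize(band, n_bins=10, max_value=None):
    # Assign the pixels of each input value to bins of roughly A pixels
    # each, then rescale the bin numbers to the range 0 to M. Sketch only.
    values, counts = np.unique(band, return_counts=True)
    T = band.size                      # total number of pixels
    A = T / n_bins                     # equalized number of pixels per bin
    cum = np.cumsum(counts)            # cumulative histogram
    bins = np.minimum((cum / A).astype(int), n_bins - 1)
    M = max_value if max_value is not None else values.max()
    lut = np.round(bins * M / (n_bins - 1)).astype(int)
    return lut[np.searchsorted(values, band)]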
Level Slice
A level slice is similar to a histogram equalization in that it divides
the data into equal amounts. A level slice on a true color display
creates a stair-stepped lookup table. The effect on the data is that
input file values are grouped together at regular intervals into a
discrete number of levels, each with one output brightness value.
To perform a true color level slice, you must specify a range for the
output brightness values and a number of output levels. The lookup
table is then stair-stepped so that there is an equal number of input
pixels in each of the output levels.
Histogram Matching Histogram matching is the process of determining a lookup table that
converts the histogram of one image to resemble the histogram of
another. Histogram matching is useful for matching data of the same
or adjacent scenes that were scanned on separate days, or are
slightly different because of sun angle or atmospheric effects. This is
especially useful for mosaicking or change detection.
To achieve good results with histogram matching, the two input
images should have similar characteristics:
• Relative dark and light features in the image should be the same.
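A common way to build such a lookup table is to match the cumulative histograms of the two images. The sketch below illustrates that idea in Python/NumPy for 8-bit data; it is not the ERDAS IMAGINE implementation, and the function name is hypothetical.

import numpy as np

def match_histogram(source, reference, n_levels=256):
    # For each source level, find the reference level whose cumulative
    # distribution value is closest; apply the resulting lookup table.
    src_hist, _ = np.histogram(source, bins=n_levels, range=(0, n_levels))
    ref_hist, _ = np.histogram(reference, bins=n_levels, range=(0, n_levels))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, n_levels - 1)
    return lut[source.astype(np.intp)]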
Brightness Inversion The brightness inversion functions produce images that have the
opposite contrast of the original image. Dark detail becomes light,
and light detail becomes dark. This can also be used to invert a
negative image that has been scanned to produce a positive image.
Brightness inversion has two options: inverse and reverse. Both
options convert the input data range (commonly 0 - 255) to 0 - 1.0.
A min-max remapping is used to simultaneously stretch the image
and handle any input bit format. The output image is in floating point
format, so a min-max stretch is used to convert the output image
into 8-bit format.
Inverse is useful for emphasizing detail that would otherwise be lost
in the darkness of the low DN pixels. This function applies the
following algorithm:
• zero spatial frequency—a flat image, in which every pixel has the
same value
• Resolution merging
Convolution Filtering Convolution filtering is the process of averaging small sets of pixels
across an image. Convolution filtering is used to change the spatial
frequency characteristics of an image (Jensen, 1996).
A convolution kernel is a matrix of numbers that is used to average
the value of each pixel with the values of surrounding pixels in a
particular way. The numbers in the matrix serve to weight this
average toward particular pixels. These numbers are often called
coefficients, because they are used as such in the mathematical
equations.
Convolution Example
To understand how one pixel is convolved, imagine that the
convolution kernel is overlaid on the data file values of the image (in
one band), so that the pixel to be convolved is in the center of the
window.
Kernel:
-1 -1 -1
-1 16 -1
-1 -1 -1

Input Data:
2 8 6 6 6
2 8 6 6 6
2 2 8 6 6
2 2 2 8 6
2 2 2 2 8

Input Data                 Output Data
2 2 8 6 6 6                0 11  5  6  ?
2 2 8 6 6 6                1 11  5  5  ?
2 2 2 8 6 6                1  0 11  6  ?
2 2 2 2 8 6                2  1  0 11  ?
2 2 2 2 2 8                ?  ?  ?  ?  ?
Convolution Formula
The following formula is used to derive an output data file value for
the pixel being convolved (in the center):
V = [ Σ (i = 1 to q) Σ (j = 1 to q) (fij × dij) ] / F
Where:
fij = the coefficient of a convolution kernel at position i,j
(in the kernel)
dij = the data value of the pixel that corresponds to fij
q = the dimension of the kernel, assuming a square
kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or
1 if the sum of coefficients is 0
V = the output pixel value
In cases where V is less than 0, V is clipped to 0.
Source: Modified from Jensen, 1996; Schowengerdt, 1983
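The formula can be applied directly, as in the following Python/NumPy sketch (illustrative only). Applied to the center pixel of the example input data above with the 3 × 3 high-frequency kernel shown, it returns 11, which matches the output values in the example.

import numpy as np

def convolve_pixel(data, kernel, row, col):
    # V = (sum of f_ij * d_ij over the kernel window) / F, where F is the
    # kernel sum, or 1 if the sum of coefficients is 0; negative V is clipped.
    q = kernel.shape[0]
    half = q // 2
    window = data[row - half:row + half + 1, col - half:col + half + 1]
    F = kernel.sum()
    if F == 0:
        F = 1
    v = (kernel * window).sum() / F
    return max(v, 0)

data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]], dtype=float)
kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]], dtype=float)
print(convolve_pixel(data, kernel, 2, 2))   # prints 11.0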
Zero-Sum Kernels
Zero-sum kernels are kernels in which the sum of all coefficients in
the kernel equals zero. When a zero-sum kernel is used, then the
sum of the coefficients is not used in the convolution equation, as
above. In this case, no division is performed (F = 1), since division
by zero is not defined.
This generally causes the output values to be:
• zero in areas where all input values are equal (no edges)
-1 -1 -1
1 -2 1
1 1 1
High-Frequency Kernels
A high-frequency kernel, or high-pass kernel, has the effect of
increasing spatial frequency.
-1 -1 -1
-1 16 -1
-1 -1 -1
When this kernel is used on a set of pixels in which a relatively low
value is surrounded by higher values, the low value gets lower.
Inversely, when the kernel is used on a set of pixels in which a
relatively high value is surrounded by lower values, the high value
gets higher:

BEFORE          AFTER
64  60  57      64  60  57
61 125  69      61 187  69
58  60  70      58  60  70
Low-Frequency Kernels
Below is an example of a low-frequency kernel, or low-pass kernel,
which decreases spatial frequency.
1 1 1
1 1 1
1 1 1
This kernel simply averages the values of the pixels, causing them
to be more homogeneous. The resulting image looks either more
smooth or more blurred.
Resolution Merge The resolution of a specific sensor can refer to radiometric, spatial,
spectral, or temporal resolution.
• multiplicative
Multiplicative
The second technique in the Image Interpreter uses a simple
multiplicative algorithm:
(DNTM1) (DNSPOT) = DNnew TM1
The algorithm is derived from the four component technique of
Crippen (Crippen, 1989a). In this paper, it is argued that of the four
possible arithmetic methods to incorporate an intensity image into a
chromatic image (addition, subtraction, division, and multiplication),
only multiplication is unlikely to distort the color.
However, in his study Crippen first removed the intensity component
via band ratios, spectral indices, or PC transform. The algorithm
shown above operates on the original image. The result is an
increased presence of the intensity component. For many
applications, this is desirable. People involved in urban or suburban
studies, city planning, and utilities routing often want roads and
cultural features (which tend toward high reflection) to be
pronounced in the image.
Brovey Transform
In the Brovey Transform method, three bands are used according to
the following formula:
[DNB1 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB1_new
[DNB2 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB2_new
[DNB3 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB3_new
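A minimal Python/NumPy sketch of the Brovey Transform formula above follows. It assumes the three multispectral bands have already been resampled and co-registered to the high resolution image; the names are placeholders.

import numpy as np

def brovey(b1, b2, b3, high_res):
    # Scale each band by its share of the three-band sum, then multiply
    # by the high resolution image. Sketch of the formula above.
    total = b1 + b2 + b3
    total = np.where(total == 0, 1, total)     # avoid division by zero
    return (b1 / total * high_res,
            b2 / total * high_res,
            b3 / total * high_res)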
No single filter with fixed parameters can address this wide variety
of conditions. In addition, multiband images may require different
parameters for each band. Without the use of adaptive filters, the
different bands would have to be separated into one-band files,
enhanced, and then recombined.
For this function, the image is separated into high and low frequency
component images. The low frequency image is considered to be
overall scene luminance. These two component parts are then
recombined in various relative amounts using multipliers derived
from LUTs. These LUTs are driven by the overall scene luminance:
DNout = K(DNHi) + DNLL
Where:
K = user-selected contrast multiplier
Hi = high luminance (derives from the LUT)
LL = local luminance (derives from the LUT)
Figure: the LUTs plotted against local luminance (0 to 255), showing the intercept (I).
• approximation coefficients Wϕ
Figure: one level of the discrete wavelet transform and its inverse. The input image is passed through low-pass (hϕ) and high-pass (hψ) filters, with row and column decimation, to produce the approximation sub-image Wϕ and the horizontal, vertical, and diagonal detail sub-images WψH, WψV, and WψD. The inverse transform pads (upsamples) the sub-images along rows and columns and recombines them with the reconstruction filters h̃ϕ and h̃ψ to form the output image.
Algorithm Theory The basic theory of the decomposition is that an image can be
separated into high-frequency and low-frequency components. For
example, a low-pass filter can be used to create a low-frequency
image. Subtracting this low-frequency image from the original image
would create the corresponding high-frequency image. These two
images contain all of the information in the original image. If they
were added together the result would be the original image.
The same could be done by high-pass filter filtering an image and the
corresponding low-frequency image could be derived. Again, adding
the two together would yield the original image. Any image can be
broken into various high- and low-frequency components using
various high- and low-pass filters. The wavelet family can be thought
of as a high-pass filter. Thus wavelet-based high- and low-frequency
images can be created from any input image. By definition, the low-
frequency image is of lower resolution and the high-frequency image
contains the detail of the image.
This process can be repeated recursively. The created low-frequency
image could be again processed with the kernels to create new
images with even lower resolution. Thus, starting with a 5-meter
image, a 10-meter low-pass image and the corresponding high-pass
image could be created. A second iteration would create a 20-meter
low- and, corresponding, high-pass images. A third recursion would
create a 40-meter low- and, corresponding, high-frequency images,
etc.
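The idea can be illustrated with the simple Haar wavelet, which splits an image into one low-frequency (approximation) image and three detail images, and recombines them exactly. This is only a sketch of the principle; the Wavelet Resolution Merge function uses its own wavelet filters, not the Haar kernels shown here.

import numpy as np

def haar_decompose(img):
    # One level of a 2-D Haar transform: approximation plus horizontal,
    # vertical, and diagonal detail images (image dimensions must be even).
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 4.0
    horiz  = (a + b - c - d) / 4.0
    vert   = (a - b + c - d) / 4.0
    diag   = (a - b - c + d) / 4.0
    return approx, horiz, vert, diag

def haar_reconstruct(approx, horiz, vert, diag):
    # Inverse transform; adding the components back recovers the image.
    rows, cols = approx.shape
    out = np.empty((rows * 2, cols * 2))
    out[0::2, 0::2] = approx + horiz + vert + diag
    out[0::2, 1::2] = approx + horiz - vert - diag
    out[1::2, 0::2] = approx - horiz + vert - diag
    out[1::2, 1::2] = approx - horiz - vert + diag
    return out

img = np.random.rand(8, 8)
assert np.allclose(img, haar_reconstruct(*haar_decompose(img)))

Repeating haar_decompose on the approximation image gives the recursive series of lower-resolution images described above.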
Figure: wavelet resolution merge work flow. The high spatial resolution image is decomposed with the DWT into approximation (a) and detail (h, v, d) sub-images; the high spectral resolution image is resampled and histogram matched and substituted for the approximation sub-image; the inverse DWT (DWT-1) then produces the fused image.
Precise Coregistration
A first prerequisite is that the two images be precisely co-registered.
For some sensors (e.g., Landsat 7 ETM+) this co-registration is
inherent in the dataset. If this is not the case, a greatly over-defined
2nd order polynomial transform should be used to coregister one
image to the other. By over-defining the transform (that is, by
having far more than the minimum number of tie points), it is
possible to reduce the random RMS error to the subpixel level. This
is easily accomplished by using the Point Prediction option in the GCP
Tool. In practice, well-distributed tie points are collected until the
predicted point consistently falls exactly where it should. At that time,
the transform must be correct. This may require 30-60 tie points for
a typical Landsat TM—SPOT Pan co-registration.
When doing the coregistration, it is generally preferable to register
the lower resolution image to the higher resolution image, i.e., the
high resolution image is used as the Reference Image. This will allow
the greatest accuracy of registration. However, if the lowest
resolution image has georeferencing that is to be retained, it may be
desirable to use it as the Reference Image. A larger number of tie
points and more attention to precise work would then be required to
attain the same registration accuracy. Evaluation of the X- and Y-
Residual and the RMS Error columns in the ERDAS IMAGINE GCP Tool
will indicate the accuracy of registration.
It is preferable to store the high and low resolution images as
separate image files rather than Layerstacking them into a single
image file. In ERDAS IMAGINE, stacked image layers are resampled
to a common pixel size. Since the Wavelet Resolution Merge
algorithm does the pixel resampling at an optimal stage in the
calculation, this avoids multiple resamplings.
After creating the coregistered images, they should be codisplayed
in an ERDAS IMAGINE Viewer. Then the Fade, Flicker, and Swipe
Tools can be used to visually evaluate the precision of the
coregistration.
Theoretical Limitations
As described in the discussion of the discrete wavelet transform, the
algorithm downsamples the high spatial resolution input image by a
factor of two with each iteration. This produces approximation (a)
images with pixel sizes reduced by a factor of two with each
iteration. The low (spatial) resolution image will substitute exactly
for the “a” image only if the input images have relative pixel sizes
differing by a multiple of 2. Any other pixel size ratio will require
resampling of the low (spatial) resolution image prior to substitution.
Certain ratios can result in a degradation of the substitution image
that may not be fully overcome by the subsequent wavelet
sharpening. This will result in a less than optimal enhancement. For
the most common scenarios, Landsat ETM+, IKONOS and QuickBird,
this is not a problem.
Although the mathematics of the algorithm are precise for any pixel
size ratio, a resolution increase of greater than two or three becomes
theoretically questionable. For example, all images are degraded due
to atmospheric refraction and scattering of the returning signal. This
is termed “point spread”. Thus, both images in a resolution merge
operation have already been “smeared” to some unknown extent. It is
not reasonable to assume that
each multispectral pixel can be precisely devolved into nine or more
subpixels.
Spectral Transform Three merge scenarios are possible. The simplest is when the input
low (spatial) resolution image is only one band; a single band of a
multispectral image, for example. In this case, the only option is to
select which band to use. If the low resolution image to be processed
is a multispectral image, two methods will be offered for creating the
grayscale representation of the multispectral image intensity; IHS
and PC.
The IHS method accepts only 3 input bands. It has been suggested
that this technique produces an output image that is the best for
visual interpretation. Thus, this technique would be appropriate
when producing a final output product for map production. Since a
visual product is likely to be only an R, G, B image, the 3-band
limitation on this method is not a distinct limitation. Clearly, if one
wished to sharpen more data layers, the bands could be done as
separate groups of 3 and then the whole dataset layerstacked back
together.
Spectral Enhancement
The enhancement techniques that follow require more than one band
of data. They can be used to:
• compress bands of data that are similar
• extract new bands of data that are more interpretable to the eye
Figure: two-band scatterplot of data file values (0 to 255), with the histograms of Band A and Band B shown along their respective axes.
Ellipse Diagram
In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid
(3 dimensions), or hyperellipsoid (more than 3 dimensions) is
formed if the distributions of each input band are normal or near
normal. (The term ellipse is used for general purposes here.)
To perform PCA, the axes of the spectral space are rotated, changing
the coordinates of each pixel in spectral space, as well as the data
file values. The new axes are parallel to the axes of the ellipse.
Figure: the first principal component (new axis) drawn through the ellipse of the two-band scatterplot (data file values 0 to 255).
The first principal component shows the direction and length of the
widest transect of the ellipse. Therefore, as an axis in spectral space,
it measures the highest variation within the data. In Figure 66 it is
easy to see that the first eigenvalue is always greater than the
ranges of the input bands, just as the hypotenuse of a right triangle
must always be longer than the legs.
Figure 66: the range of the first principal component (PC 1) compared with the ranges of Band A and Band B in the two-band scatterplot (data file values 0 to 255).

Figure: the second principal component (PC 2) lies at a 90° angle (orthogonal) to PC 1.
Although there are n output bands in a PCA, the first few bands
account for a high proportion of the variance in the data—in some
cases, almost 100%. Therefore, PCA is useful for compressing data
into fewer bands.
In other applications, useful information can be gathered from the
principal component bands with the least variance. These bands can
show subtle details in the image that were obscured by higher
contrast in the original image. These bands may also show regular
noise in the data (for example, the striping in old MSS data) (Faust,
1989).
E Cov E^T = V
Where:
Cov = the covariance matrix
E = the matrix of eigenvectors
T = the transposition function
V = a diagonal matrix of eigenvalues, in which all nondiagonal
elements are zeros
V is computed so that its nonzero elements are ordered from
greatest to least, so that
v1 > v2 > v3... > vn
Source: Faust, 1989
Pe = Σ (k = 1 to n) dk Eke
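The computation can be sketched in Python/NumPy as follows: compute the band covariance matrix, take its eigenvectors, and project every pixel onto the new axes. This is illustrative only; the function name is hypothetical and it is not the Image Interpreter implementation.

import numpy as np

def principal_components(image):
    # image: array of shape (rows, cols, bands). Returns the PC bands,
    # ordered so that PC 1 has the largest eigenvalue (variance).
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(np.float64)
    pixels -= pixels.mean(axis=0)                 # center each band
    cov = np.cov(pixels, rowvar=False)            # band covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]         # v1 > v2 > ... > vn
    eigenvectors = eigenvectors[:, order]
    pcs = pixels @ eigenvectors                   # Pe = sum of dk * Eke
    return pcs.reshape(rows, cols, bands), eigenvalues[order]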
RGB to IHS The color monitors used for image display on image processing
systems have three color guns. These correspond to red, green, and
blue (R,G,B), the additive primary colors. When displaying three
bands of a multiband data set, the viewed image is said to be in
R,G,B space.
However, it is possible to define an alternate color space that uses
intensity (I), hue (H), and saturation (S) as the three positioned
parameters (in lieu of R,G, and B). This system is advantageous in
that it presents colors more nearly as perceived by the human eye.
Figure: the intensity, hue, and saturation (IHS) color coordinate system; intensity runs along the central axis, saturation is measured outward from that axis, and hue is the angle around the axis through red, green, and blue.
To use the RGB to IHS transform, use the RGB to IHS function
from Image Interpreter.
R = (M − r) / (M − m)
G = (M − g) / (M − m)
B = (M − b) / (M − m)

Where:
R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, r, g, or b
m = least value, r, g, or b

I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

S = (M − m) / (M + m), if I ≤ 0.5
S = (M − m) / (2 − M − m), if I > 0.5
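The intensity and saturation equations above translate directly into code. The following Python/NumPy sketch assumes r, g, and b have already been scaled to the range 0 to 1.0 and omits the hue calculation; it is illustrative only.

import numpy as np

def intensity_saturation(r, g, b):
    # I and S from r, g, b in the range 0 to 1.0, per the equations above.
    M = np.maximum(np.maximum(r, g), b)       # largest value, r, g, or b
    m = np.minimum(np.minimum(r, g), b)       # least value, r, g, or b
    I = (M + m) / 2.0
    S = np.where(I <= 0.5,
                 (M - m) / np.maximum(M + m, 1e-12),
                 (M - m) / np.maximum(2.0 - M - m, 1e-12))
    return I, S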
(Band X − Band Y) / (Band X + Band Y)

Band X / Band Y
Applications
Index Examples
The following are examples of indices that have been
preprogrammed in the Image Interpreter in ERDAS IMAGINE:
• IR/R (infrared/red)
• SQRT (IR/R)
• Normalized Difference Vegetation Index (NDVI) = (IR − R) / (IR + R)
Sensor          IR Band    R Band
Landsat MSS     7          5
SPOT XS         3          2
Landsat TM      4          3
NOAA AVHRR      2          1
Image Algebra
Image algebra is a general term used to describe operations that
combine the pixels of two or more raster layers in mathematical
combinations. For example, the calculation:
(infrared band) - (red band)
DNir - DNred
yields a simple, yet very useful, measure of the presence of
vegetation. At the other extreme is the Tasseled Cap calculation
(described in the following pages), which uses a more complicated
mathematical combination of as many as six bands to define
vegetation.
Band ratios, such as:
TM5 / TM7 = clay minerals

NDVI = (IR − R) / (IR + R)
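These index calculations are simple per-pixel operations, as the following Python/NumPy sketch shows for NDVI and a generic band ratio (using, for example, the Landsat TM band assignments from the table above: IR = band 4, R = band 3). The function names are illustrative.

import numpy as np

def ndvi(ir, red):
    # NDVI = (IR - R) / (IR + R), computed per pixel.
    ir = ir.astype(np.float64)
    red = red.astype(np.float64)
    denom = np.where(ir + red == 0, 1, ir + red)   # avoid division by zero
    return (ir - red) / denom

def band_ratio(numerator, denominator):
    # Simple band ratio, e.g. TM5 / TM7 for clay minerals.
    denom = np.where(denominator == 0, 1, denominator).astype(np.float64)
    return numerator / denom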
Normalize Pixel albedo is affected by sensor look angle and local topographic
effects. For airborne sensors, this look angle effect can be large
across a scene. It is less pronounced for satellite sensors. Some
scanners look to both sides of the aircraft. For these data sets, the
average scene luminance between the two half-scenes can be large.
To help minimize these effects, an equal area normalization
algorithm can be applied (Zamudio and Atkinson, 1990). This
calculation shifts each (pixel) spectrum to the same overall average
brightness. This enhancement must be used with a consideration of
whether this assumption is valid for the scene. For an image that
contains two (or more) distinctly different regions (e.g., half ocean
and half forest), this may not be a valid assumption. Correctly
applied, this normalization algorithm helps remove albedo variations
and topographic effects.
Log Residuals The Log Residuals technique was originally described by Green and
Craig (Green and Craig, 1985), but has been variously modified by
researchers. The version implemented here is similar to the
approach of Lyon (Lyon, 1987). The algorithm can be conceptualized
as:
Output Spectrum = (input spectrum) - (average spectrum) -
(pixel brightness) + (image brightness)
All parameters in the above equation are in logarithmic space, hence
the name.
This algorithm corrects the image for atmospheric absorption,
systemic instrumental variation, and illuminance differences
between pixels.
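The conceptual equation can be written out directly, as in the following Python/NumPy sketch. It works on the logarithm of the data cube and is only an illustration of the idea, not the implemented algorithm.

import numpy as np

def log_residuals(cube):
    # cube: array of shape (rows, cols, bands) with positive values.
    # output = input - average spectrum - pixel brightness + image
    # brightness, all in logarithmic space, as in the equation above.
    logs = np.log(cube.astype(np.float64))
    average_spectrum = logs.mean(axis=(0, 1), keepdims=True)
    pixel_brightness = logs.mean(axis=2, keepdims=True)
    image_brightness = logs.mean()
    return logs - average_spectrum - pixel_brightness + image_brightness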
Rescale Many hyperspectral scanners record the data in a format larger than
8-bit. In addition, many of the calculations used to correct the data
are performed with a floating point format to preserve precision. At
some point, it is advantageous to compress the data back into an 8-
bit range for effective storage and/or display. However, when
rescaling data to be used for imaging spectrometry analysis, it is
necessary to consider all data values within the data cube, not just
within the layer of interest. This algorithm is designed to maintain
the 3-dimensional integrity of the data values. Any bit format can be
input. The output image is always 8-bit.
When rescaling a data cube, a decision must be made as to which
bands to include in the rescaling. Clearly, a bad band (i.e., a low S/N
layer) should be excluded. Some sensors image in different regions
of the electromagnetic (EM) spectrum (e.g., reflective and thermal
infrared or long- and short-wave reflective infrared). When rescaling
these data sets, it may be appropriate to rescale each EM region
separately. These can be input using the Select Layer option in the
Viewer.
Processing Sequence The above (and other) processing steps are utilized to convert the
raw image into a form that is easier to interpret. This interpretation
often involves comparing the imagery, either visually or
automatically, to laboratory spectra or other known end-member
spectra. At present there is no widely accepted standard processing
sequence to achieve this, although some have been advanced in the
scientific literature (Zamudio and Atkinson, 1990; Kruse, 1988;
Green and Craig, 1985; Lyon, 1987). Two common processing
sequences have been programmed as single automatic
enhancements, as follows:
Signal to Noise The signal-to-noise (S/N) ratio is commonly used to evaluate the
usefulness or validity of a particular band. In this implementation,
S/N is defined as Mean/Std.Dev. in a 3 ×3 moving window. After
running this function on a data set, each layer in the output image
should be visually inspected to evaluate suitability for inclusion into
the analysis. Layers deemed unacceptable can be excluded from the
processing by using the Select Layers option of the various Graphical
User Interfaces (GUIs). This can be used as a sensor evaluation tool.
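For one layer, the Mean/Std.Dev. computation in a 3 × 3 moving window can be sketched as follows (Python with SciPy); this is an illustration of the definition above, not the implemented function.

import numpy as np
from scipy.ndimage import uniform_filter

def snr_layer(band, size=3):
    # Local mean divided by local standard deviation in a size x size window.
    band = band.astype(np.float64)
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band * band, size)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))
    return mean / np.maximum(std, 1e-12)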
Mean per Pixel This algorithm outputs a single band, regardless of the number of
input bands. By visually inspecting this output image, it is possible
to see if particular pixels are outside the norm. While this does not
mean that these pixels are incorrect, they should be evaluated in this
context. For example, a CCD detector could have several sites
(pixels) that are dead or have an anomalous response, these would
be revealed in the mean-per-pixel image. This can be used as a
sensor evaluation tool.
Profile Tools To aid in visualizing this three-dimensional data cube, three basic
tools have been designed:
Spectral Library Two spectral libraries are presently included in the software package
(JPL and USGS). In addition, it is possible to extract spectra (pixels)
from a data set or prepare average spectra from an image and save
these in a user-derived spectral library. This library can then be used
for visual comparison with other image spectra, or it can be used as
input signatures in a classification.
System Requirements Because of the large number of bands, a hyperspectral data set can
be surprisingly large. For example, an AVIRIS scene is only 512 ×
614 pixels in dimension, which seems small. However, when
multiplied by 224 bands (channels) and 16 bits, it requires over 140
megabytes of data storage space. To process this scene requires
corresponding large swap and temp space. In practice, it has been
found that a 48 Mb memory board and 100 Mb of swap space is a
minimum requirement for efficient processing. Temporary file space
requirements depend upon the process being run.
Fourier Analysis Image enhancement techniques can be divided into two basic
categories: point and neighborhood. Point techniques enhance the
pixel based only on its value, with no concern for the values of
neighboring pixels. These techniques include contrast stretches
(nonadaptive), classification, and level slices. Neighborhood
techniques enhance a pixel based on the values of surrounding
pixels. As a result, these techniques require the processing of a
possibly large number of pixels for each output pixel. The most
common way of implementing these enhancements is via a moving
window convolution. However, as the size of the moving window
increases, the number of requisite calculations becomes enormous.
An enhancement that requires a convolution operation in the spatial
domain can be implemented as a simple multiplication in frequency
space—a much faster calculation.
NOTE: You may also want to refer to the works cited at the end of
this section for more information.
Figure: example sine components, including (1/3) sin 3x, plotted over the interval 0 to 2π.
Applications
Fourier transformations are typically used for the removal of noise
such as striping, spots, or vibration in imagery by identifying
periodicities (areas of high spatial frequency). Fourier editing can be
used to remove regular errors in data such as those caused by
sensor anomalies (e.g., striping). This analysis technique can also be
used across bands as another form of pattern/feature recognition.
Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e = 2.71828, the natural logarithm base
j = the imaginary component of a complex number
The number of pixels horizontally and vertically must each be a
power of two. If the dimensions of the input image are not a power
of two, they are padded up to the next highest power of two. There
is more information about this later in this section.
Source: Modified from Oppenheim and Schafer, 1975; Press et al,
1988.
Fourier Magnitude The raster image generated by the FFT calculation is not an optimum
image for viewing or editing. Each pixel of a fourier image is a
complex number (i.e., it has two components: real and imaginary).
For display as a single image, these components are combined in a
root-sum of squares operation. Also, since the dynamic range of
Fourier spectra vastly exceeds the range of a typical display device,
the Fourier Magnitude calculation involves a logarithmic function.
Finally, a Fourier image is symmetric about the origin (u, v = 0, 0).
If the origin is plotted at the upper left corner, the symmetry is more
difficult to see than if the origin is at the center of the image.
Therefore, in the Fourier magnitude image, the origin is shifted to
the center of the raster array.
In this transformation, each .fft layer is processed twice. First, the
maximum magnitude, |X|max, is computed. Then, the following
computation is performed for each FFT element magnitude x:
y(x) = 255.0 × ln[ (x / |X|max) × (e − 1) + 1 ]
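The magnitude calculation can be sketched with NumPy's FFT as follows; the .fft layer handling is omitted and the function name is illustrative.

import numpy as np

def fourier_magnitude(band):
    # Centered, log-scaled Fourier magnitude image in the range 0 to 255,
    # following y(x) = 255.0 ln[(x / |X|max)(e - 1) + 1].
    spectrum = np.fft.fftshift(np.fft.fft2(band))   # origin shifted to center
    magnitude = np.abs(spectrum)                    # root-sum of squares
    x_max = magnitude.max()
    return 255.0 * np.log((magnitude / x_max) * (np.e - 1.0) + 1.0)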
Δu = 1 / (MΔx)
Δv = 1 / (NΔy)

Where:
M = horizontal image size in pixels
N = vertical image size in pixels
Δx = pixel size
Δy = pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5 m) into a Fourier image:

Δu = Δv = 1 / (512 × 28.5) = 6.85 × 10^-5 m^-1

For a 1024 × 1024 image with the same pixel size:

Δu = Δv = 1 / (1024 × 28.5) = 3.42 × 10^-5 m^-1
• Resample the image so that its height and width are powers of
two.
Figure: an image dimension of 300 pixels padded up to 512, the next power of two.
IFFT The IFFT computes the inverse two-dimensional FFT of the spectrum
stored.
f(x, y) ← [ 1 / (N1 N2) ] Σ (u = 0 to M − 1) Σ (v = 0 to N − 1) [ F(u, v) e^(j2πux/M + j2πvy/N) ]

0 ≤ x ≤ M − 1, 0 ≤ y ≤ N − 1
Low-Pass Filtering
The simplest example of this relationship is the low-pass kernel. The
name, low-pass kernel, is derived from a filter that would pass low
frequencies and block (filter out) high frequencies. In practice, this
is easily achieved in the spatial domain by the M = N = 3 kernel:
1 1 1
1 1 1
1 1 1
Obviously, as the size of the image and, particularly, the size of the
low-pass kernel increases, the calculation becomes more time-
consuming. Depending on the size of the input image and the size of
the kernel, it can be faster to generate a low-pass image via Fourier
processing.
In the Fourier domain, the low-pass operation is implemented by
attenuating the frequencies that satisfy:

u² + v² > D0²

The following values relate the size of a spatial-domain low-pass
convolution window to the approximately equivalent Fourier cutoff
frequency D0 for several image sizes:

Image Size     Window Size     Equivalent D0
64 × 64        50              3
               30              3.5
               20              5
               10              9
               5               14
128 × 128      20              13
               10              22
256 × 256      20              25
               10              42
High-Pass Filtering
Just as images can be smoothed (blurred) by attenuating the high-
frequency components of an image using low-pass filters, images
can be sharpened and edge-enhanced by attenuating the low-
frequency components using high-pass filters. In the Fourier
domain, the high-pass operation is implemented by attenuating the
pixels’ frequencies that satisfy:
u² + v² < D0²
• Ideal
• Bartlett (triangular)
• Butterworth
• Gaussian
• Hanning (cosine)
Figure: cross section of the ideal low-pass window; the gain H(u,v) is 1 for D(u,v) up to the cutoff frequency D0, and 0 beyond it.
H(u, v) = 1 if D(u, v) ≤ D0
H(u, v) = 0 if D(u, v) > D0
All frequencies inside a circle of a radius D0 are retained completely
(passed), and all frequencies outside the radius are completely
attenuated. The point D0 is termed the cutoff frequency.
High-pass filtering using the ideal window looks like the following
illustration:
Figure: cross section of the ideal high-pass window; the gain H(u,v) is 0 for D(u,v) up to D0, and 1 beyond it.
H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0
All frequencies inside a circle of a radius D0 are completely
attenuated, and all frequencies outside the radius are retained
completely (passed).
A major disadvantage of the ideal filter is that it can cause ringing
artifacts, particularly if the radius (r) is small. The smoother
functions (e.g., Butterworth and Hanning) minimize this effect.
Bartlett
Filtering using the Bartlett window is a triangular function, as shown
in the following low- and high-pass cross sections:
Figure: low-pass and high-pass cross sections of the Bartlett (triangular) window (gain H(u,v) plotted against D(u,v), with cutoff at D0).

Figure: low-pass and high-pass cross sections of a smoother window such as the Butterworth, with a gain of 0.5 at the cutoff D0 and D(u,v) shown in multiples of D0.
The equation for the Butterworth low-pass window is:

H(u, v) = 1 / (1 + [D(u, v) / D0]^2n)

The equation for the Gaussian low-pass window is:

H(u, v) = e^−(x / D0)²
The equation for the Hanning low-pass window is:
H ( u, v ) = 0 otherwise
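As an illustration, the sketch below applies a Butterworth low-pass window of cutoff D0 and order n to an image in the Fourier domain and transforms the result back. It is not the ERDAS IMAGINE Fourier Analysis implementation.

import numpy as np

def butterworth_lowpass(band, d0, n=1):
    # Build H(u,v) = 1 / (1 + [D(u,v)/D0]^(2n)), multiply it with the
    # centered spectrum, and take the inverse FFT.
    rows, cols = band.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from origin
    H = 1.0 / (1.0 + (D / d0) ** (2 * n))
    spectrum = np.fft.fftshift(np.fft.fft2(band))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * H))
    return np.real(filtered)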
Fourier Noise Removal Occasionally, images are corrupted by noise that is periodic in
nature. An example of this is the scan lines that are present in some
TM images. When these images are transformed into Fourier space,
the periodic line pattern becomes a radial line. The Fourier Analysis
functions provide two main tools for reducing noise in images:
• editing
Editing
In practice, it has been found that radial lines centered at the Fourier
origin (u, v = 0, 0) are best removed using back-to-back wedges
centered at (0, 0). It is possible to remove these lines using very
narrow wedges with the Ideal window. However, the sudden
transitions resulting from zeroing-out sections of a Fourier image
causes a ringing of the image when it is transformed back into the
spatial domain. This effect can be lessened by using a less abrupt
window, such as Butterworth.
Other types of noise can produce artifacts, such as lines not centered
at u,v = 0,0 or circular spots in the Fourier image. These can be
removed using the tools provided in the FFT Editor. As these artifacts
are always symmetrical in the Fourier magnitude image, editing
tools operate on both components simultaneously. The FFT Editor
contains tools that enable you to attenuate a circular or rectangular
region anywhere on the image.
Homomorphic Filtering Homomorphic filtering is based upon the principle that an image may
be modeled as the product of illumination and reflectance
components:
I(x, y) = i(x, y) × r(x, y)
Where:
I(x, y) = image intensity (DN) at pixel x, y
i(x, y) = illumination of pixel x, y
r(x, y) = reflectance at pixel x, y
The illumination image is a function of lighting conditions and
shadows. The reflectance image is a function of the object being
imaged. A log function can be used to separate the two components
(i and r) of the image:
ln I(x, y) = ln i(x, y) + ln r(x, y)
This transforms the image from multiplicative to additive
superposition. With the two component images separated, any linear
operation can be performed. In this application, the image is now
transformed into Fourier space. Because the illumination component
usually dominates the low frequencies, while the reflectance
component dominates the higher frequencies, the image may be
effectively manipulated in the Fourier domain.
By using a filter on the Fourier image, which increases the high-
frequency components, the reflectance image (related to the target
material) may be enhanced, while the illumination image (related to
the scene illumination) is de-emphasized.
Figure: the homomorphic filter's frequency response; low frequencies (i) are decreased and high frequencies (r) are increased.
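The whole sequence (logarithm, FFT, frequency-dependent gain, inverse FFT, exponential) can be sketched as follows. The gain function used here is a simple illustrative choice, not the filter used by ERDAS IMAGINE, and the parameter names are hypothetical.

import numpy as np

def homomorphic_filter(band, d0=30.0, low_gain=0.5, high_gain=2.0):
    # ln I = ln i + ln r; filter in the Fourier domain so that low
    # frequencies (illumination) are de-emphasized and high frequencies
    # (reflectance) are emphasized, then exponentiate.
    rows, cols = band.shape
    log_img = np.log(band.astype(np.float64) + 1.0)
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = low_gain + (high_gain - low_gain) * (1.0 - np.exp(-(D / d0) ** 2))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * H))
    return np.exp(np.real(filtered)) - 1.0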
Radar Imagery Enhancement
The nature of the surface phenomena involved in radar imaging is
inherently different from that of visible/infrared (VIS/IR) images.
When VIS/IR radiation strikes a surface it is either absorbed,
reflected, or transmitted. The absorption is based on the molecular
bonds in the (surface) material. Thus, this imagery provides
information on the chemical composition of the target.
When radar microwaves strike a surface, they are reflected
according to the physical and electrical properties of the surface,
rather than the chemical composition. The strength of radar return
is affected by slope, roughness, and vegetation cover. The
conductivity of a target area is related to the porosity of the soil and
its water content. Consequently, radar and VIS/IR data are
complementary; they provide different information about the target
area. An image in which these two data types are intelligently
combined can present much more information than either image by
itself.
• Mean filter
• Median filter
• Lee-Sigma filter
• Frost filter
• Gamma-MAP filter
Mean Filter
The Mean filter is a simple calculation. The pixel of interest (center
of window) is replaced by the arithmetic average of all values within
the window. This filter does not remove the aberrant (speckle)
value; it averages it into the data.
In theory, a bright and a dark pixel within the same window would
cancel each other out. This consideration would argue in favor of a
large window size (e.g., 7 × 7). However, averaging results in a loss
of detail, which argues for a small window size.
In general, this is the least satisfactory method of speckle reduction.
It is useful for applications where loss of resolution is not a problem.
Median Filter
A better way to reduce speckle, but still simplistic, is the Median
filter. This filter operates by arranging all DN values in sequential
order within the window that you define. The pixel of interest is
replaced by the value in the center of this distribution. A Median filter
is useful for removing pulse or spike noise. Pulse functions of less
than one-half of the moving window width are suppressed or
eliminated. In addition, step functions or ramp functions are
retained.
The effect of Mean and Median filters on various signals is shown (for
one dimension) in Figure 85.
Figure 85: one-dimensional step, ramp, single pulse, and double pulse signals, before and after Mean and Median filtering.
The Median filter is useful for noise suppression in any image. It does
not affect step or ramp functions; it is an edge preserving filter
(Pratt, 1991). It is also applicable in removing pulse function noise,
which results from the inherent pulsing of microwaves. An example
of the application of the Median filter is the removal of dead-detector
striping, as found in Landsat 4 TM data (Crippen, 1989a).
Figure: the regions used by the Local Region filter around the pixel of interest (for example, the North, NE, and SW regions).
Variance = [ Σ (DNx,y − Mean)² ] / (n − 1)

Coefficient of Variation = Standard Deviation / Mean = sqrt(Variance) / Mean = sigma (σ)
Number of Looks     Coefficient of Variation
1                   .52
2                   .37
3                   .30
4                   .26
6                   .21
8                   .18
The Lee filters are based on the assumption that the mean and
variance of the pixel of interest are equal to the local mean and
variance of all pixels within the moving window you select.
The actual calculation used for the Lee filter is:
DNout = [Mean] + K[DNin - Mean]
Where:
Mean = average of pixels in a moving window
K = Var(x) / ( [Mean]² σ² + Var(x) )
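A moving-window sketch of this calculation is shown below (Python with SciPy). It uses the window variance directly as Var(x) and takes sigma as the image coefficient of variation value; it is an illustration of the equation above, not the ERDAS IMAGINE Lee filter.

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(band, size=7, sigma=0.26):
    # DNout = Mean + K * (DNin - Mean), with
    # K = Var(x) / (Mean^2 * sigma^2 + Var(x)) in each moving window.
    band = band.astype(np.float64)
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band * band, size)
    var = np.maximum(mean_sq - mean * mean, 0)
    k = var / (mean * mean * sigma ** 2 + var + 1e-12)
    return mean + k * (band - mean)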
Pass     Sigma Value     Sigma Multiplier     Window Size
2        0.26            1                    5 × 5
3        0.26            2                    7 × 7

Filter          Pass     Sigma Value     Sigma Multiplier     Window Size
Local Region    3        NA              NA                   5 × 5 or 7 × 7
Frost Filter
The Frost filter is a minimum mean square error algorithm that
adapts to the local statistics of the image. The local statistics serve
as weighting parameters for the impulse response of the filter
(moving window). This algorithm assumes that noise is
multiplicative with stationary statistics.
The formula used is:
DN = Σ (over the n × n moving window) K α e^(−α |t|)

Where:

α = (4 / (n σ̄²)) × (σ² / I²)

and

K = normalization constant
I = local mean
σ = local variance
σ̄ = image coefficient of variation value
|t| = |X − X0| + |Y − Y0|
n = moving window size
Source: Lopes et al, 1990
Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the
original pixel DN, which is assumed to lie between the local average
and the degraded (actual) pixel DN. MAP logic maximizes the a
posteriori probability density function with respect to the original
image.
Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume
a Gaussian distribution for the speckle noise. Recent work has shown
this to be an invalid assumption. Natural vegetated areas have been
shown to be more properly modeled as having a Gamma distributed
cross section. This algorithm incorporates this assumption. The exact
formula used is the cubic equation:
Î³ − Ī Î² + σ (Î − DN) = 0
Edge Detection Edge and line detection are important operations in digital image
processing. For example, geologists are often interested in mapping
lineaments, which may be fault lines or bedding structures. For this
purpose, edge and line detection are major enhancement
techniques.
In selecting an algorithm, it is first necessary to understand the
nature of what is being enhanced. Edge detection could imply
amplifying an edge, a line, or a spot (see Figure 87).
Figure 87: DN value plotted against x or y for the basic edge and line types: a step edge (an abrupt, 90° DN change), a ramp edge (a DN change spread along a slope), a line, and a roof edge (a DN change about a slope midpoint).
Edge detection algorithms can be broken down into 1st-order
derivative and 2nd-order derivative operations. Figure 89 shows
ideal one-dimensional edge and line intensity curves with the
associated 1st-order and 2nd-order derivatives.
Figure 89: one-dimensional intensity curves g(x) for an ideal edge and an ideal line (Original Feature), with the corresponding 1st derivative ∂g/∂x and 2nd derivative ∂²g/∂x² plotted against x.
The corresponding 1st-order derivative kernels are:

∂/∂x =   1  1  1        ∂/∂y =   1  0 -1
         0  0  0   and           1  0 -1
        -1 -1 -1                 1  0 -1

and the 2nd-order derivative kernels are:

∂²/∂x² = -1  2 -1       ∂²/∂y² = -1 -1 -1
         -1  2 -1   and           2  2  2
         -1  2 -1                -1 -1 -1

Larger gradient kernels can also be used, for example:

1  1  0 -1 -1
1  1  0 -1 -1
1  1  0 -1 -1
1  1  0 -1 -1
1  1  0 -1 -1

2  1  0 -1 -2          4  2  0 -2 -4
2  1  0 -1 -2          4  2  0 -2 -4
2  1  0 -1 -2    or    4  2  0 -2 -4
2  1  0 -1 -2          4  2  0 -2 -4
2  1  0 -1 -2          4  2  0 -2 -4
Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this
type of filter, the coefficients are designed to add up to zero.
Following are examples of two zero-sum filters:
Sobel (horizontal):      Sobel (vertical):
-1 -2 -1                  1  0 -1
 0  0  0                  2  0 -2
 1  2  1                  1  0 -1
• variance (2nd-order)
• skewness (3rd-order)
• kurtosis (4th-order)
Mean Euclidean Distance = [ Σ ( Σλ (xcλ − xijλ)² )^(1/2) ] / (n − 1)
Where:
xijl = DN value for spectral band λ and pixel (i,j) of a
multispectral image
xcl = DN value for spectral band λ of a window’s center
pixel
n = number of pixels in a window
Variance
Variance = [ Σ (xij − M)² ] / (n − 1)
Where:
xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window, where M = (Σ xij) / n
Skewness
Skew = [ Σ (xij − M)³ ] / [ (n − 1) V^(3/2) ]
Where:
xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
Kurtosis
Kurtosis = [ Σ (xij − M)⁴ ] / [ (n − 1) V² ]
Where:
xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
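The three texture measures can be computed for every pixel over an n × n moving window, as in the following straightforward (unoptimized) Python/NumPy sketch of the formulas above; the function name is illustrative.

import numpy as np

def texture_measures(band, size=3):
    # Per-pixel variance, skewness, and kurtosis in a size x size window.
    band = band.astype(np.float64)
    half = size // 2
    rows, cols = band.shape
    var = np.zeros_like(band)
    skew = np.zeros_like(band)
    kurt = np.zeros_like(band)
    n = size * size
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            w = band[r - half:r + half + 1, c - half:c + half + 1]
            M = w.mean()
            V = ((w - M) ** 2).sum() / (n - 1)
            var[r, c] = V
            if V > 0:
                skew[r, c] = ((w - M) ** 3).sum() / ((n - 1) * V ** 1.5)
                kurt[r, c] = ((w - M) ** 4).sum() / ((n - 1) * V ** 2)
    return var, skew, kurt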
Radiometric Correction: Radar Imagery
The raw radar image frequently contains radiometric errors due to:
• imperfections in the transmit and receive pattern of the radar
antenna
(a1 + a2 + a3 + a4 + ... + ax) / x = Overall average

Overall average / ax = calibration coefficient of line x
Slant-to-Ground Range Correction
Radar images also require slant-to-ground range correction, which is
similar in concept to orthocorrecting a VIS/IR image. By design, an
imaging radar is always side-looking. In practice, the depression
angle is usually 75° at most. In operation, the radar sensor
determines the range (distance) to each target, as shown in Figure
92.
Figure 92: slant-to-ground range geometry. The depression angle θ is measured at the sensor (C); Dists is the slant distance and Distg the ground distance between A and B, with a 90° angle at A.

cos θ = Dists / Distg
• Multiplicative
Codisplaying
The simplest and most frequently used method of combining radar
with VIS/IR imagery is codisplaying on an RGB color monitor. In this
technique, the radar image is displayed with one (typically the red)
gun, while the green and blue guns display VIS/IR bands or band
ratios. This technique follows from no logical model and does not
truly merge the two data sets.
Use the Viewer with the Clear Display option disabled for this
type of merge. Select the color guns to display the different
layers.
The Classification
Process
Pattern Recognition Pattern recognition is the science—and art—of finding meaningful
patterns in data, which can be extracted through classification. By
spatially and spectrally enhancing an image, pattern recognition can
be performed with the human eye; the human brain automatically
sorts certain textures and colors into categories.
In a computer system, spectral pattern recognition can be more
scientific. Statistics are derived from the spectral characteristics of
all pixels in an image. Then, the pixels are sorted based on
mathematical criteria. The classification process breaks down into
two parts: training and classifying (using a decision rule).
Supervised Training
Supervised training is closely controlled by the analyst. In this
process, you select pixels that represent patterns or land cover
features that you recognize, or that you can identify with help from
other sources, such as aerial photos, ground truth data, or maps.
Knowledge of the data, and of the classes desired, is required before
classification.
By identifying patterns, you can instruct the computer system to
identify pixels with similar characteristics. If the classification is
accurate, the resulting classes represent the categories within the
data that you originally identified.
Output File When classifying an image file, the output file is an image file with a
thematic raster layer. This file automatically contains the following
data:
• class values
• class names
• color table
• statistics
• histogram
The image file also contains any signature attributes that were
selected in the ERDAS IMAGINE Supervised Classification utility.
The class names, values, and colors can be set with the
Signature Editor or the Raster Attribute Editor.
Iterative Classification A process is iterative when it repeats an action. The objective of the
ERDAS IMAGINE system is to enable you to iteratively create and
refine signatures and classified image files to arrive at a desired final
classification. The ERDAS IMAGINE classification utilities are tools to
be used as needed, not a numbered list of steps that must always be
followed in order.
The total classification can be achieved with either the supervised or
unsupervised methods, or a combination of both. Some examples
are below:
Classifying Enhanced Data For many specialized applications, classifying data that have been merged, spectrally merged, or enhanced (with principal components, image algebra, or other transformations) can produce very specific and meaningful results. However, unless you understand the data and the enhancements used, it is recommended that only the original, remotely-sensed data be classified.
Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of
data to be used for one classification, it is usually wise to reduce the
dimensionality of the data as much as possible. Often, certain layers
of data are redundant or extraneous to the task at hand.
Unnecessary data take up valuable disk space, and cause the
computer system to perform more arduous calculations, which slows
down processing.
• What classes are most likely to be present in the data? That is,
which types of land cover, soil, or vegetation (or whatever) are
represented by the data?
Training Samples and Feature Space Objects Training samples (also called samples) are sets of pixels that represent what is recognized as a discernible pattern, or potential
class. The system calculates statistics from the sample pixels to
create a parametric signature for the class.
The following terms are sometimes used interchangeably in
reference to training samples. For clarity, they are used in this
documentation as follows:
Use the Vector and AOI tools to digitize training samples from a
map. Use the Signature Editor to create signatures from training
samples that are identified with digitized polygons.
User-defined Polygon
Using your pattern recognition skills (with or without supplemental
ground truth information), you can identify samples by examining a
displayed image of the data and drawing a polygon around the
training site(s) of interest. For example, if it is known that oak trees
reflect certain frequencies of green and infrared light according to
ground truth data, you may be able to base your sample selections
on the data (taking atmospheric conditions, sun angle, time, date,
and other variations into account). The area within the polygon(s)
would be used to create a signature.
NOTE: The thematic raster layer must have the same coordinate
system as the image file being classified.
Selecting Feature Space Objects The ERDAS IMAGINE Feature Space tools enable you to interactively define feature space objects (AOIs) in the feature space image(s). A
feature space image is simply a graph of the data file values of one
band of data against the values of another band (often called a
scatterplot). In ERDAS IMAGINE, a feature space image has the
same data structure as a raster image; therefore, feature space
images can be used with other ERDAS IMAGINE utilities, including
zoom, color level slicing, virtual roam, Spatial Modeler, and Map
Composer.
[Figure: A feature space image: the data file values of one band (band 1) plotted against the data file values of another band (band 2).]
Unsupervised Training Unsupervised training requires only minimal initial input from you. However, you have the task of interpreting the classes that are
created by the unsupervised training algorithm.
Unsupervised training is also called clustering, because it is based on
the natural groupings of pixels in image data when they are plotted
in feature space. According to the specified parameters, these
groups can later be merged, disregarded, otherwise manipulated, or
used as the basis of a signature.
[Figure: Initial cluster means distributed in Band A-Band B feature space along the vector from (µA - σA, µB - σB) to (µA + σA, µB + σB).]
Pixel Analysis
Pixels are analyzed beginning with the upper left corner of the image
and going left to right, block by block.
The spectral distance between the candidate pixel and each cluster
mean is calculated. The pixel is assigned to the cluster whose mean
is the closest. The ISODATA function creates an output image file
with a thematic raster layer and/or a signature file (.sig) as a result
of the clustering. At the end of each iteration, an image file exists
that shows the assignments of the pixels to the clusters.
[Figure: Pixels assigned to clusters 1 and 2 in Band A-Band B feature space during the first ISODATA iteration.]
For the second iteration, the means of all clusters are recalculated,
causing them to shift in feature space. The entire process is
repeated—each candidate pixel is compared to the new cluster
means and assigned to the closest cluster mean.
Percentage Unchanged
After each iteration, the normalized percentage of pixels whose
assignments are unchanged since the last iteration is displayed in
the dialog. When this number reaches T (the convergence
threshold), the program terminates.
It is possible for the percentage of unchanged pixels to never
converge or reach T (the convergence threshold). Therefore, it may
be beneficial to monitor the percentage, or specify a reasonable
maximum number of iterations, M, so that the program does not run
indefinitely.
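The following is a simplified sketch of the ISODATA iteration described above (assignment to the closest cluster mean, recalculation of the means, and a convergence threshold T), written with NumPy for illustration; it omits the merging, splitting, and deletion of clusters and is not the ERDAS IMAGINE implementation.

    import numpy as np

    def isodata_sketch(pixels, n_clusters, max_iter=20, conv_threshold=0.95):
        """pixels: (n_pixels, n_bands) array of data file values."""
        # Spread the initial cluster means along the diagonal of the data range.
        lo, hi = pixels.min(axis=0), pixels.max(axis=0)
        means = np.linspace(lo, hi, n_clusters)
        labels = np.zeros(len(pixels), dtype=int)
        for _ in range(max_iter):
            # Assign each candidate pixel to the cluster with the closest mean.
            dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            new_labels = dists.argmin(axis=1)
            unchanged = (new_labels == labels).mean()   # fraction unchanged
            labels = new_labels
            # Recalculate the cluster means from the new assignments.
            for k in range(n_clusters):
                if np.any(labels == k):
                    means[k] = pixels[labels == k].mean(axis=0)
            if unchanged >= conv_threshold:             # convergence threshold T
                break
        return labels, means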
RGB Clustering
[Figure: RGB clustering: the three-dimensional histogram of the RED, GREEN, and BLUE bands is partitioned into sections; for example, one cluster contains the pixels with values between 16 and 34 in RED, between 35 and 55 in GREEN, and between 0 and 16 in BLUE.]
Partitioning Parameters
It is necessary to specify the number of R, G, and B sections in each
dimension of the 3-dimensional scatterplot. The number of sections
should vary according to the histograms of each band. Broad
histograms should be divided into more sections, and narrow
histograms should be divided into fewer sections (see Figure 98).
Advantages: Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.
Disadvantages: Does not always create thematic classes that can be analyzed for informational purposes.
Tips
Some starting values that usually produce good results with the
simple RGB clustering are:
R = 7
G = 6
B = 6
which results in 7 × 6 × 6 = 252 classes.
To decrease the number of output colors/classes or to darken the
output, decrease these values.
For the Advanced RGB clustering function, start with higher values
for R, G, and B. Adjust by raising the threshold parameter and/or
decreasing the R, G, and B parameter values until the desired
number of output classes is obtained.
Signature Files A signature is a set of data that defines a training sample, feature
space object (AOI), or cluster. The signature is used in a
classification process. Each classification decision rule (algorithm)
requires some signature attributes as input—these are stored in the
signature file (.sig). Signatures in ERDAS IMAGINE can be
parametric or nonparametric.
The following attributes are standard for all signatures (parametric
and nonparametric):
• color—the color for the signature and the color for the class in the
output thematic raster layer. This color is also used with other
signature visualization functions, such as alarms, masking,
ellipses, etc.
• value—the output class value for the signature. The output class
value does not necessarily need to be the class number of the
signature. This value should be a positive integer.
Parametric Signature
A parametric signature is based on statistical parameters (e.g.,
mean and covariance matrix) of the pixels that are in the training
sample or cluster. A parametric signature includes the following
attributes in addition to the standard attributes for signatures:
• the minimum and maximum data file value in each band for each
sample or cluster (minimum vector and maximum vector)
• the mean data file value in each band for each sample or cluster
(mean vector)
Nonparametric Signature
A nonparametric signature is based on an AOI that you define in the
feature space image for the image file being classified. A
nonparametric classifier uses a set of nonparametric signatures to
assign pixels to a class based on their location, either inside or
outside the area in the feature space image.
[Figure: Signature ellipses: two signatures plotted as ellipses (± 2 standard deviations about the band means) in two band pairs, Band A versus Band B and Band C versus Band D, where µA1 = mean in Band A for signature 1, µA2 = mean in Band A for signature 2, µB2 = mean in Band B for signature 2, and so on.]
By analyzing the ellipse graphs for all band pairs, you can determine
which signatures and which bands provide accurate classification
results.
Contingency Matrix NOTE: This evaluation classifies all of the pixels in the selected AOIs
and compares the results to the pixels of a training sample.
Divergence
The formula for computing Divergence (Dij) is as follows:
Dij = ½ tr[ ( Ci - Cj )( Cj⁻¹ - Ci⁻¹ ) ] + ½ tr[ ( Ci⁻¹ + Cj⁻¹ )( µi - µj )( µi - µj )ᵀ ]
Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function
Source: Swain and Davis, 1978
Transformed Divergence
The formula for computing Transformed Divergence (TD) is as
follows:
Dij = ½ tr[ ( Ci - Cj )( Cj⁻¹ - Ci⁻¹ ) ] + ½ tr[ ( Ci⁻¹ + Cj⁻¹ )( µi - µj )( µi - µj )ᵀ ]

TDij = 2000 [ 1 - exp( -Dij / 8 ) ]
Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function
Source: Swain and Davis, 1978
Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as
follows:
α = (1/8) ( µi - µj )ᵀ [ ( Ci + Cj ) / 2 ]⁻¹ ( µi - µj ) + (1/2) ln[ | ( Ci + Cj ) / 2 | / √( |Ci| × |Cj| ) ]

JMij = √( 2 ( 1 - e^(-α) ) )
Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
ln = the natural logarithm function
|Ci| = the determinant of Ci (matrix algebra)
Source: Swain and Davis, 1978
According to Jensen, “The JM distance has a saturating behavior with
increasing class separation like transformed divergence. However, it
is not as computationally efficient as transformed divergence”
(Jensen, 1996).
Separability
Both transformed divergence and Jeffries-Matusita distance have
upper and lower bounds. If the calculated divergence is equal to the
appropriate upper bound, then the signatures can be said to be
totally separable in the bands being studied. A calculated divergence
of zero means that the signatures are inseparable.
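As an illustration, the transformed divergence between two parametric signatures could be computed from their mean vectors and covariance matrices as sketched below (NumPy assumed; the function name is illustrative). A result close to the upper bound of 2000 indicates total separability.

    import numpy as np

    def transformed_divergence(mu_i, cov_i, mu_j, cov_j):
        """Transformed divergence between two signatures (after Swain and
        Davis, 1978). mu: mean vectors, cov: covariance matrices."""
        cov_i = np.asarray(cov_i, dtype=float)
        cov_j = np.asarray(cov_j, dtype=float)
        ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
        d = (np.asarray(mu_i, dtype=float) - np.asarray(mu_j, dtype=float)).reshape(-1, 1)
        dij = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
            + 0.5 * np.trace((ci_inv + cj_inv) @ d @ d.T)
        return 2000.0 * (1.0 - np.exp(-dij / 8.0))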
Weight Factors
As with the Bayesian classifier (explained below with maximum
likelihood), weight factors may be specified for each signature.
These weight factors are based on a priori probabilities that any
given pixel is assigned to each class. For example, if you know that
twice as many pixels should be assigned to Class A as to Class B,
then Class A should receive a weight factor that is twice that of Class
B.
Wij = [ Σ(i=1 to c-1) Σ(j=i+1 to c) fi fj Uij ] / [ ½ ( ( Σ(i=1 to c) fi )² - Σ(i=1 to c) fi² ) ]
Where:
i and j = the two signatures (classes) being compared
Uij = the unweighted divergence between i and j
Wij = the weighted divergence between i and j
c = the number of signatures (classes)
fi = the weight factor for signature i
Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of
error, which is the probability that a pixel assigned to class i is
actually in class j. Within a range, this probability can be estimated
according to the expression below:
Signature Manipulation In many cases, training must be repeated several times before the
desired signatures are produced. Signatures can be gathered from
different sources—different training samples, feature space images,
and different clustering programs—all using different techniques.
After each signature file is evaluated, you may merge, delete, or
create new signatures. The desired signatures can finally be moved
to one signature file to be used in the classification.
The following operations upon signatures and signature files are
possible with ERDAS IMAGINE:
Classification Decision Rules Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is
compared to each signature, according to a decision rule, or
algorithm. Pixels that pass the criteria that are established by the
decision rule are then assigned to the class for that signature. ERDAS
IMAGINE enables you to classify the data both parametrically with
statistical representation, and nonparametrically as objects in
feature space. Figure 100 shows the flow of an image pixel through
the classification decision making process in ERDAS IMAGINE (Kloer,
1994).
• If the pixel falls into more than one class as a result of the
nonparametric test, the overlap rule is applied. With this rule, the
pixel is either classified by the parametric rule, processing order,
or left unclassified.
Nonparametric Rules ERDAS IMAGINE provides these decision rules for nonparametric
signatures:
• parallelepiped
• feature space
Unclassified Options
ERDAS IMAGINE provides these options if the pixel is not classified
by the nonparametric rule:
• parametric rule
• unclassified
Overlap Options
ERDAS IMAGINE provides these options if the pixel falls into more
than one feature space object:
• parametric rule
• by order
• unclassified
Parametric Rules ERDAS IMAGINE provides these commonly-used decision rules for
parametric signatures:
• minimum distance
• Mahalanobis distance
[Figure 100: Classification decision flow: the candidate pixel is first tested against the nonparametric rule; if it falls within exactly one nonparametric signature it receives that class assignment; if it falls within zero signatures the unclassified option applies, and if it falls within more than one the overlap option applies, so the pixel is then classified by the parametric rule, by order, or left unclassified.]
Parallelepiped In the parallelepiped decision rule, the data file values of the
candidate pixel are compared to upper and lower limits. These limits
can be either:
• the minimum and maximum data file values of each band in the
signature,
• any limits that you specify, based on your knowledge of the data
and signatures. This knowledge may come from the signature
evaluation techniques discussed above.
[Figure 101: Parallelepiped classification using ± 2 standard deviation limits: pixels in Band A-Band B feature space are labeled 1, 2, or 3 where their data file values fall within the limits of class 1, 2, or 3 (for example, µA2 ± 2s and µB2 ± 2s for class 2), and ? where they remain unclassified.]
Overlap Region
In cases where a pixel may fall into the overlap region of two or more
parallelepipeds, you must define how the pixel can be classified.
Advantages: Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.
Disadvantages: Since parallelepipeds have corners, pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 102.
[Figure 102: Parallelepiped corners: a candidate pixel that is spectrally distant from the signature mean (µA, µB) can still fall within the parallelepiped boundary and be classified.]
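For illustration, a parallelepiped test against per-band lower and upper limits might look like the following sketch (NumPy assumed; overlaps are resolved here by processing order, one of the options described above; the names are illustrative):

    import numpy as np

    def parallelepiped_classify(pixels, low, high):
        """pixels: (n, bands); low, high: (classes, bands) limits per signature.
        Returns class number (1-based), or 0 where the pixel is unclassified."""
        out = np.zeros(len(pixels), dtype=int)
        for c in range(low.shape[0]):
            inside = np.all((pixels >= low[c]) & (pixels <= high[c]), axis=1)
            out[(out == 0) & inside] = c + 1    # first matching signature wins
        return out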
Feature Space The feature space decision rule determines whether or not a
candidate pixel lies within the nonparametric signature in the feature
space image. When a pixel’s data file values are in the feature space
signature, then the pixel is assigned to that signature’s class. Figure
103 is a two-dimensional example of a feature space classification.
The polygons in this figure are AOIs used to define the feature space
signatures.
[Figure 103: Feature space classification: AOI polygons drawn in the Band A-Band B feature space image define the signatures for classes 1, 2, and 3; pixels whose data file values fall inside an AOI are assigned to that class, and pixels that fall outside all AOIs (?) remain unclassified.]
Overlap Region
In cases where a pixel may fall into the overlap region of two or more
AOIs, you must define how the pixel can be classified.
Advantages: Often useful for a first-pass, broad classification.
Disadvantages: The feature space decision rule allows overlap and unclassified pixels.
Minimum Distance The minimum distance decision rule (also called spectral distance)
calculates the spectral distance between the measurement vector for
the candidate pixel and the mean vector for each signature.
[Figure: Minimum distance classification: the spectral distance between the candidate pixel and each class mean (µ1, µ2, µ3) is measured in Band A-Band B feature space.]
SDxyc = √( Σ(i=1 to n) ( µci - Xxyi )² )
Where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
µci = mean of data file values in band i for the sample for
class c
SDxyc = spectral distance from pixel x,y to the mean of
class c
Source: Swain and Davis, 1978
When spectral distance is computed for all possible values of c (all
possible classes), the class of the candidate pixel is assigned to the
class for which SD is the lowest.
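A direct NumPy sketch of this rule (illustrative only, not the ERDAS IMAGINE implementation) computes SDxyc for every class and keeps the smallest:

    import numpy as np

    def minimum_distance_classify(pixels, class_means):
        """pixels: (n, bands); class_means: (classes, bands) signature means.
        Each pixel is assigned to the class whose mean vector is closest."""
        sd = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
        return sd.argmin(axis=1)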
Advantages: The fastest decision rule to compute, except for parallelepiped.
Disadvantages: Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Conversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes are to their means.
Mahalanobis Distance
Maximum
Likelihood/Bayesian
Fuzzy Methodology
Fuzzy Classification The Fuzzy Classification method takes into account that there are
pixels of mixed make-up, that is, a pixel cannot be definitively
assigned to one category. Jensen notes that, “Clearly, there needs
to be a way to make the classification algorithms more sensitive to
the imprecise (fuzzy) nature of the real world” (Jensen, 1996).
Fuzzy classification is designed to help you work with data that may
not fall into exactly one category or another. Fuzzy classification
works using a membership function, wherein a pixel’s value is
determined by whether it is closer to one class than another. A fuzzy
classification does not have definite boundaries, and each pixel can
belong to several different classes (Jensen, 1996).
Like traditional classification, fuzzy classification still uses training,
but the biggest difference is that “it is also possible to obtain
information on the various constituent classes found in a mixed
pixel. . .” (Jensen, 1996). Jensen goes on to explain that the process
of collecting training sites in a fuzzy classification is not as strict as
a traditional classification. In the fuzzy method, the training sites do
not have to have pixels that are exactly the same.
Once you have a fuzzy classification, the fuzzy convolution utility
allows you to perform a moving window convolution on a fuzzy
classification with multiple output class assignments. Using the
multilayer classification and distance file, the convolution creates a
new single class output file by computing a total weighted distance
for all classes in the window.
Fuzzy Convolution The Fuzzy Convolution operation creates a single classification layer
by calculating the total weighted inverse distance of all the classes
in a window of pixels. Then, it assigns the center pixel in the class
with the largest total inverse distance summed over the entire set of
fuzzy classification layers.
T[k] = Σ(i=0 to s) Σ(j=0 to s) Σ(l=0 to n) wij / Dijl[k]
Where:
i = row index of window
j = column index of window
s = size of window (3, 5, or 7)
l = layer index of fuzzy set
n = number of fuzzy layers used
W = weight table for window
k = class value
D[k] = distance file value for class k
T[k] = total weighted distance of window for class k
The center pixel is assigned the class with the maximum T[k].
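The following sketch shows one way the total weighted inverse distance T[k] could be accumulated over a window of the multilayer classification and distance files (NumPy assumed; the array layout and names are assumptions, not the ERDAS IMAGINE implementation):

    import numpy as np

    def fuzzy_convolution(class_layers, dist_layers, weights):
        """class_layers, dist_layers: (n_layers, rows, cols) multiple class
        assignments per pixel and their distance values; weights: (s, s)
        window weight table. Returns a single-class output image."""
        n_layers, rows, cols = class_layers.shape
        s = weights.shape[0]
        pad = s // 2
        n_classes = int(class_layers.max()) + 1
        out = np.zeros((rows, cols), dtype=int)
        for r in range(pad, rows - pad):
            for c in range(pad, cols - pad):
                total = np.zeros(n_classes)
                for l in range(n_layers):
                    for i in range(s):
                        for j in range(s):
                            k = int(class_layers[l, r - pad + i, c - pad + j])
                            d = max(dist_layers[l, r - pad + i, c - pad + j], 1e-6)
                            total[k] += weights[i, j] / d   # w_ij / D_ijl[k]
                out[r, c] = total.argmax()       # class with maximum T[k]
        return out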
Knowledge Engineer With the Knowledge Engineer, you can open knowledge bases, which
are presented as decision trees in editing windows.
[Figure: A knowledge base presented as a decision tree: a hypothesis such as Gentle Slope is supported by a rule whose conditions are, for example, Slope > 0 and Slope < 12.]
Variable Editor
The Knowledge Engineer also makes use of a Variable Editor when
classifying images. The Variable Editor provides for the definition of
the variable objects to be used in the rule conditions.
The two types of variables are raster and scalar. Raster variables
may be defined by imagery, feature layers (including vector layers),
graphic spatial models, or by running other programs. Scalar
variables may be defined with an explicit value, or defined as the
output from a model or external program.
After you select the input data for classification, the classification
output options, output files, output area, output cell size, and output
map projection, the Knowledge Classifier process can begin. An
inference engine then evaluates all hypotheses at each location
(calculating variable values, if required), and assigns the hypothesis
with the highest confidence. The output of the Knowledge Classifier
is a thematic image, and optionally, a confidence image.
Distance File
When a minimum distance, Mahalanobis distance, or maximum
likelihood classification is performed, a distance image file can be
produced in addition to the output thematic raster layer. A distance
image file is a one-band, 32-bit offset continuous raster layer in
which each data file value represents the result of a spectral distance
equation, depending upon the decision rule used.
The brighter pixels (with the higher distance file values) are
spectrally farther from the signature means for the classes to which
they are assigned. They are more likely to be misclassified.
The darker pixels are spectrally nearer, and more likely to be
classified correctly. If supervised training was used, the darkest
pixels are usually the training samples.
[Figure 109: Histogram of a distance image.]
Figure 109 shows how the histogram of the distance image usually
appears. This distribution is called a chi-square distribution, as
opposed to a normal distribution, which is a symmetrical bell curve.
Threshold
The pixels that are the most likely to be misclassified have the higher
distance file values at the tail of this histogram. At some point that
you define—either mathematically or visually—the tail of this
histogram is cut off. The cutoff point is the threshold.
In both cases, thresholding has the effect of cutting the tail off of the
histogram of the distance image file, representing the pixels with the
highest distance values.
NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility
to perform an accuracy assessment for any thematic layer. This layer
does not have to be classified by ERDAS IMAGINE (e.g., you can run
an accuracy assessment on a thematic layer that was classified in
ERDAS Version 7.5 and imported into ERDAS IMAGINE).
Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error
generated by a classification process compared with the error of a
completely random classification. For example, a value of .82 implies
that the classification process is avoiding 82 percent of the errors
that a completely random classification generates (Congalton,
1991).
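The Kappa coefficient itself is not written out above; as a reference-style sketch, it can be computed from an error (confusion) matrix using the standard formulation, as in the following NumPy example (the names are illustrative):

    import numpy as np

    def kappa_coefficient(confusion):
        """confusion: square error matrix, rows = reference, columns = classified."""
        confusion = np.asarray(confusion, dtype=float)
        n = confusion.sum()
        observed = np.trace(confusion) / n               # observed agreement
        chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
        return (observed - chance) / (1.0 - chance)      # proportionate error reduction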
Introduction
What is Photogrammetry? Photogrammetry is the “art, science and technology of obtaining
reliable information about physical objects and the environment
through the process of recording, measuring and interpreting
photographic images and patterns of electromagnetic radiant
imagery and other phenomena” (American Society of
Photogrammetry, 1980).
Photogrammetry was invented in 1851 by Laussedat, and has
continued to develop over the last 140 years. Over time, the
development of photogrammetry has passed through the phases of
plane table photogrammetry, analog photogrammetry, analytical
photogrammetry, and has now entered the phase of digital
photogrammetry (Konecny, 1994).
The traditional, and largest, application of photogrammetry is to
extract topographic information (e.g., topographic maps) from aerial
images. However, photogrammetric techniques have also been
applied to process satellite images and close range images in order
to acquire topographic or nontopographic information of
photographed objects.
Prior to the invention of the airplane, photographs taken on the
ground were used to extract the relationships between objects using
geometric principles. This was during the phase of plane table
photogrammetry.
In analog photogrammetry, starting with stereomeasurement in
1901, optical or mechanical instruments were used to reconstruct
three-dimensional geometry from two overlapping photographs. The
main product during this phase was topographic maps.
In analytical photogrammetry, the computer replaces some
expensive optical and mechanical components. The resulting devices
were analog/digital hybrids. Analytical aerotriangulation, analytical
plotters, and orthophoto projectors were the main developments
during this phase. Outputs of analytical photogrammetry can be
topographic maps, but can also be digital products, such as digital
maps and DEMs.
Types of Photographs and Images The types of photographs and images that can be processed within IMAGINE LPS Project Manager include aerial, terrestrial, close range, and oblique. Aerial or vertical (near vertical) photographs and
images are taken from a high vantage point above the Earth’s
surface. The camera axis of aerial or vertical photography is
commonly directed vertically (or near vertically) down. Aerial
photographs and images are commonly used for topographic and
planimetric mapping projects. Aerial photographs and images are
commonly captured from an aircraft or satellite.
Terrestrial or ground-based photographs and images are taken with
the camera stationed on or close to the Earth’s surface. Terrestrial
and close range photographs and images are commonly used for
applications involved with archeology, geomorphology, civil
engineering, architecture, industry, etc.
Oblique photographs and images are similar to aerial photographs
and images, except the camera axis is intentionally inclined at an
angle with the vertical. Oblique photographs and images are
commonly used for reconnaissance and corridor mapping
applications.
Digital photogrammetric systems use digitized photographs or digital
images as the primary source of input. Digital imagery can be
obtained from various sources. These include:
Why use Photogrammetry? As stated in the previous section, raw aerial photography and satellite imagery have large geometric distortion that is caused by various systematic and nonsystematic factors. The photogrammetric
modeling based on collinearity equations eliminates these errors
most efficiently, and creates the most reliable orthoimages from the
raw imagery. It is unique in terms of considering the image-forming
geometry, utilizing information between overlapping images, and
explicitly dealing with the third dimension: elevation.
In addition to orthoimages, photogrammetry can also provide other
geographic information such as a DEM, topographic features, and
line maps reliably and efficiently. In essence, photogrammetry
produces accurate and precise geographic information from a wide
range of photographs and images. Any measurement taken on a
photogrammetrically processed photograph or image reflects a
measurement taken on the ground. Rather than constantly go to the
field to measure distances, areas, angles, and point positions on the
Earth’s surface, photogrammetric tools allow for the accurate
collection of information from imagery. Photogrammetric
approaches for collecting geographic information save time and
money, and maintain the highest accuracies.
Image and Data Acquisition During photographic or image collection, overlapping images are exposed along a direction of flight. Most photogrammetric applications involve the use of overlapping images. In using more
than one image, the geometry associated with the camera/sensor,
image, and ground can be defined to greater accuracies and
precision.
During the collection of imagery, each point in the flight path at
which the camera exposes the film, or the sensor captures the
imagery, is called an exposure station (see Figure 111).
[Figure 111: Exposure stations along the flight path of an airplane; each flight line (Flight Line 1, 2, 3) contains a series of exposure stations.]
NOTE: The flying height above ground is used, versus the altitude
above sea level.
[Figure: A photographic block; adjacent strips (for example, Strip 2) overlap one another with 20-30% sidelap.]
Desktop Scanners Desktop scanners are general purpose devices. They lack the image
detail and geometric accuracy of photogrammetric quality units, but
they are much less expensive. When using a desktop scanner, you
should make sure that the active area is at least 9 × 9 inches (i.e.,
A3 type scanners), enabling you to capture the entire photo frame.
Scanning Resolutions One of the primary factors contributing to the overall accuracy of
block triangulation and orthorectification is the resolution of the
imagery being used. Image resolution is commonly determined by
the scanning resolution (if film photography is being used), or by the
pixel resolution of the sensor. In order to optimize the attainable
accuracy of a solution, the scanning resolution must be considered.
The appropriate scanning resolution is determined by balancing the
accuracy requirements versus the size of the mapping project and
the time required to process the project. Table 47 lists the scanning
resolutions associated with various scales of photography and image
file size.
[Figures: The origins of the pixel coordinate system and of the image coordinate system; the image space coordinate system (x, y, z) of an exposure station S and the ground space coordinate system (X, Y, Z) of a ground point A (XA, YA, ZA), related by the rotation angles ω, ϕ, κ; and the image plane with perspective center, principal point (xo, yo), focal length, and image point a' (xa', ya').]
• Principal point
• Focal length
• Lens distortion
Principal Point and Focal Length The principal point is mathematically defined as the intersection of the perpendicular line through the perspective center with the image plane. The length from the principal point to the perspective center is called the focal length (Wang, Z., 1990).
The image plane is commonly referred to as the focal plane. For
wide-angle aerial cameras, the focal length is approximately 152
mm, or 6 inches. For some digital cameras, the focal length is 28
mm. Prior to conducting photogrammetric projects, the focal length
of a metric camera is accurately determined or calibrated in a
laboratory environment.
This mathematical definition is the basis of triangulation, but difficult
to determine optically. The optical definition of principal point is the
image position where the optical axis intersects the image plane. In
the laboratory, this is calibrated in two forms: principal point of
autocollimation and principal point of symmetry, which can be seen
from the camera calibration report. Most applications prefer to use
the principal point of symmetry since it can best compensate for the
lens distortion.
Fiducial Marks As stated previously, one of the steps associated with interior
orientation involves determining the image position of the principal
point for each image in the project. Therefore, the image positions
of the fiducial marks are measured on the image, and subsequently
compared to the calibrated coordinates of each fiducial mark.
Since the image space coordinate system has not yet been defined
for each image, the measured image coordinates of the fiducial
marks are referenced to a pixel or file coordinate system. The pixel
coordinate system has an x coordinate (column) and a y coordinate
(row). The origin of the pixel coordinate system is the upper left
corner of the image having a row and column value of 0 and 0,
respectively. Figure 117 illustrates the difference between the pixel
coordinate system and the image space coordinate system.
[Figure 117: The pixel (file) coordinate system (Xa-file, Ya-file) and the image space coordinate system, related by the offsets xo-file and yo-file and the rotation angle Θ.]
x = a1 + a2 X + a3 Y
y = b1 + b2 X + b3 Y
The x and y image coordinates associated with the calibrated fiducial
marks and the X and Y pixel coordinates of the measured fiducial
marks are used to determine six affine transformation coefficients.
The resulting six coefficients can then be used to transform each set
of row (y) and column (x) pixel coordinates to image coordinates.
The quality of the two-dimensional affine transformation is
represented using a root mean square (RMS) error. The RMS error
represents the degree of correspondence between the calibrated
fiducial mark coordinates and their respective measured image
coordinate values. Large RMS errors indicate poor correspondence.
This can be attributed to film deformation, poor scanning quality,
out-of-date calibration information, or image mismeasurement.
The affine transformation also defines the translation between the
origin of the pixel coordinate system and the image coordinate
system (xo-file and yo-file). Additionally, the affine transformation
takes into consideration rotation of the image coordinate system by
considering angle Θ (see Figure 117). A scanned image of an aerial
photograph is normally rotated due to the scanning procedure.
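For illustration, the six affine coefficients and the RMS error could be estimated from the measured and calibrated fiducial coordinates by a least squares fit such as the following (NumPy assumed; the function and argument names are assumptions):

    import numpy as np

    def fit_fiducial_affine(pixel_XY, image_xy):
        """pixel_XY: (n, 2) measured fiducial pixel coordinates (X, Y);
        image_xy: (n, 2) calibrated fiducial image coordinates (x, y); n >= 3.
        Solves x = a1 + a2*X + a3*Y and y = b1 + b2*X + b3*Y."""
        X, Y = pixel_XY[:, 0], pixel_XY[:, 1]
        design = np.column_stack([np.ones_like(X), X, Y])
        a, _, _, _ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
        b, _, _, _ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
        residuals = design @ np.column_stack([a, b]) - image_xy
        rms = np.sqrt((residuals ** 2).sum(axis=1).mean())   # RMS error
        return a, b, rms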
Lens Distortion Lens distortion deteriorates the positional accuracy of image points
located on the image plane. Two types of lens distortion exist: radial
and tangential lens distortion. Lens distortion occurs when
light rays passing through the lens are bent, thereby changing
directions and intersecting the image plane at positions deviant from
the norm. Figure 118 illustrates the difference between radial and
tangential lens distortion.
[Figure 118: Radial (∆r) and tangential (∆t) lens distortion as a function of the radial distance (r) from the principal point.]
∆r = k0 r + k1 r³ + k2 r⁵
[Figure 119: Exterior orientation: the image space coordinate system (x, y, z) with perspective center O, principal point o, focal length f, and image point p (xp, yp); the rotation angles ω, ϕ, κ; and the ground point P with ground coordinates (Xp, Yp, Zp) relative to the exposure station position (Xo, Yo, Zo).]
The Collinearity Equation The following section defines the relationship between the
camera/sensor, the image, and the ground. Most photogrammetric
tools utilize the following formulations in one form or another.
With reference to Figure 119, an image vector a can be defined as
the vector from the exposure station O to the image point p. A
ground space or object space vector A can be defined as the vector
from the exposure station O to the ground point P. The image vector
and ground vector are collinear, meaning that a line extending from
the exposure station through the image point to the ground point is a straight line.
The image vector and ground vector are only collinear if one is a
scalar multiple of the other. Therefore, the following statement can
be made:
a = kA
Where k is a scalar multiple. The image and ground vectors must be
within the same coordinate system. Therefore, image vector a is
comprised of the following components:
a = [ xp - xo,  yp - yo,  -f ]ᵀ

Similarly, the ground vector A is comprised of the following components:

A = [ Xp - Xo,  Yp - Yo,  Zp - Zo ]ᵀ
In order for the image and ground vectors to be within the same
coordinate system, the ground vector must be multiplied by the
rotation matrix M. The following equation can be formulated:
[ xp - xo,  yp - yo,  -f ]ᵀ = kM [ Xp - Xo,  Yp - Yo,  Zp - Zo ]ᵀ
Dividing the first and second components by the third eliminates the scalar multiple k and yields the collinearity equations:

xp - xo = -f [ m11( Xp - Xo1 ) + m12( Yp - Yo1 ) + m13( Zp - Zo1 ) ] / [ m31( Xp - Xo1 ) + m32( Yp - Yo1 ) + m33( Zp - Zo1 ) ]

yp - yo = -f [ m21( Xp - Xo1 ) + m22( Yp - Yo1 ) + m23( Zp - Zo1 ) ] / [ m31( Xp - Xo1 ) + m32( Yp - Yo1 ) + m33( Zp - Zo1 ) ]
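As a small worked sketch of these equations, a ground point can be projected into image coordinates given the rotation matrix M, the exposure station position, and the focal length (NumPy assumed; the names are illustrative):

    import numpy as np

    def ground_to_image(ground_pt, exposure_station, M, f, xo=0.0, yo=0.0):
        """Apply the collinearity equations: project ground point (Xp, Yp, Zp)
        into image coordinates (xp, yp) for one exposure station."""
        dX, dY, dZ = np.asarray(ground_pt, float) - np.asarray(exposure_station, float)
        denom = M[2, 0] * dX + M[2, 1] * dY + M[2, 2] * dZ
        xp = xo - f * (M[0, 0] * dX + M[0, 1] * dY + M[0, 2] * dZ) / denom
        yp = yo - f * (M[1, 0] * dX + M[1, 1] * dY + M[1, 2] * dZ) / denom
        return xp, yp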
[Figure 120: Space intersection: a ground point P (Xp, Yp, Zp) is imaged as p1 and p2 on two overlapping images taken from the exposure stations O1 (Xo1, Yo1, Zo1) and O2 (Xo2, Yo2, Zo2).]
Bundle Block Adjustment For mapping projects having more than two images, the use of space
intersection and space resection techniques is limited. This can be
attributed to the lack of information required to perform these tasks.
For example, it is fairly uncommon for the exterior orientation
parameters to be highly accurate for each photograph or image in a
project, since these values are generated photogrammetrically.
Airborne GPS and INS techniques normally provide initial
approximations to exterior orientation, but the final values for these
parameters must be adjusted to attain higher accuracies.
Similarly, rarely are there enough accurate GCPs for a project of 30
or more images to perform space resection (i.e., a minimum of 90 is
required). In the case that there are enough GCPs, the time required
to identify and measure all of the points would be costly.
xa1 - xo = -f [ m11( XA - Xo1 ) + m12( YA - Yo1 ) + m13( ZA - Zo1 ) ] / [ m31( XA - Xo1 ) + m32( YA - Yo1 ) + m33( ZA - Zo1 ) ]

ya1 - yo = -f [ m21( XA - Xo1 ) + m22( YA - Yo1 ) + m23( ZA - Zo1 ) ] / [ m31( XA - Xo1 ) + m32( YA - Yo1 ) + m33( ZA - Zo1 ) ]

xa2 - xo = -f [ m'11( XA - Xo2 ) + m'12( YA - Yo2 ) + m'13( ZA - Zo2 ) ] / [ m'31( XA - Xo2 ) + m'32( YA - Yo2 ) + m'33( ZA - Zo2 ) ]

ya2 - yo = -f [ m'21( XA - Xo2 ) + m'22( YA - Yo2 ) + m'23( ZA - Zo2 ) ] / [ m'31( XA - Xo2 ) + m'32( YA - Yo2 ) + m'33( ZA - Zo2 ) ]

where xa1, ya1 and xa2, ya2 are the image coordinates of ground point A measured on image 1 and image 2, and Xo1, Yo1, Zo1 are the coordinates of the exposure station of image 1.
• X, Y, and Z coordinates of the tie points. Thus, for six tie points,
this includes eighteen unknowns (six tie points times three X, Y,
Z coordinates).
X = ( AᵀPA )⁻¹ AᵀPL
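A direct translation of this expression (assuming A is the design matrix, P the weight matrix, and L the observation vector, which the text above does not spell out) is:

    import numpy as np

    def weighted_least_squares(A, P, L):
        """Solve the normal equations X = (A^T P A)^-1 (A^T P L)."""
        return np.linalg.solve(A.T @ P @ A, A.T @ P @ L)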
The results from the block triangulation are then used as the primary
input for the following tasks:
• DEM extraction
• Orthorectification
Self-calibrating Bundle Adjustment Normally, there are more or less systematic errors related to the imaging and processing system, such as lens distortion, film
distortion, atmosphere refraction, scanner errors, etc. These errors
reduce the accuracy of triangulation results, especially in dealing
with large-scale imagery and high accuracy triangulation. There are
several ways to reduce the influences of the systematic errors, like
a posteriori-compensation, test-field calibration, and the most
common approach: self-calibration (Konecny and Lehmann, 1984;
Wang, Z., 1990).
The self-calibrating methods use additional parameters in the
triangulation process to eliminate the systematic errors. How well it
works depends on many factors such as the strength of the block
(overlap amount, crossing flight lines), the GCP and tie point
distribution and amount, the size of systematic errors versus random
errors, the significance of the additional parameters, the correlation
between additional parameters, and other unknowns.
There was intensive research and development for additional
parameter models in photogrammetry in the 70s and the 80s, and
many research results are available (e.g., Bauer and Müller, 1972;
Brown 1975; Ebner, 1976; Grün, 1978; Jacobsen, 1980; Jacobsen,
1982; Li, 1985; Wang, Y., 1988a, Stojic et al, 1998). Based on these
scientific reports, IMAGINE LPS Project Manager provides four
groups of additional parameters for you to choose for different
triangulation circumstances. In addition, IMAGINE LPS Project
Manager allows the interior orientation parameters to be analytically
calibrated within its self-calibrating bundle block adjustment
capability.
Automatic Gross Error Detection Normal random errors are subject to statistical normal distribution. In contrast, gross errors refer to errors that are large and are not
subject to normal distribution. The gross errors among the input data
for triangulation can lead to unreliable results. Research during the
80s in the photogrammetric community resulted in significant
achievements in automatic gross error detection in the triangulation
process (e.g., Kubik, 1982; Li, 1983; Li, 1985; Jacobsen, 1984; El-
Hakim and Ziemann, 1984; Wang, Y., 1988a).
Methods for gross error detection began with residual checking using
data-snooping and were later extended to robust estimation (Wang,
Z., 1990). The most common robust estimation method is the
iteration with selective weight functions. Based on the scientific
research results from the photogrammetric community, IMAGINE
LPS Project Manager offers two robust error detection methods
within the triangulation process.
• Intersection of roads
• Survey benchmarks
GCP Requirements The minimum GCP requirements for an accurate mapping project
vary with respect to the size of the project. With respect to
establishing a relationship between image space and ground space,
the theoretical minimum number of GCPs is two GCPs having X, Y,
and Z coordinates and one GCP having a Z coordinate associated
with it. This is a total of seven observations.
In establishing the mathematical relationship between image space
and object space, seven parameters defining the relationship must
be determined. The seven parameters include a scale factor
(describing the scale difference between image space and ground
space); X, Y, Z (defining the positional differences between image
space and object space); and three rotation angles (omega, phi, and
kappa) that define the rotational relationship between image space
and ground space.
In order to compute a unique solution, at least seven known
parameters must be available. In using the two X, Y, Z GCPs and one
vertical (Z) GCP, the relationship can be defined. However, to
increase the accuracy of a mapping project, using more GCPs is
highly recommended.
The following descriptions are provided for various projects:
Processing Multiple Strips of Imagery Figure 123 depicts the standard GCP configuration for a block of images, comprising four strips of images, each containing eight overlapping images.
[Figure: Tie points measured in a single image and distributed across the overlap areas of the block.]
Automatic Tie Point Collection Selecting and measuring tie points is very time-consuming and costly. Therefore, in recent years, one of the major focal points of research and development in photogrammetry has concentrated on
the automated triangulation where the automatic tie point collection
is the main issue.
Correlation Windows
Area based matching uses correlation windows. These windows
consist of a local neighborhood of pixels. One example of correlation
windows is square neighborhoods (for example, 3 × 3, 5 × 5, 7 × 7
pixels). In practice, the windows vary in shape and dimension based
on the matching technique. Area correlation uses the characteristics
of these windows to match ground feature locations in one image to
ground features on the other.
A reference window is the source window on the first image, which
remains at a constant location. Its dimensions are usually square in
size (for example, 3 × 3, 5 × 5, and so on). Search windows are
candidate windows on the second image that are evaluated relative
to the reference window. During correlation, many different search
windows are examined until a location is found that best matches the
reference window.
Correlation Calculations
Two correlation calculations are described below: cross correlation
and least squares correlation. Most area based matching
calculations, including these methods, normalize the correlation
windows. Therefore, it is not necessary to balance the contrast or
brightness prior to running correlation. Cross correlation is more
robust in that it requires a less accurate a priori position than least
squares. However, its precision is limited to one pixel. Least squares
correlation can achieve precision levels of one-tenth of a pixel, but
requires an a priori position that is accurate to about two pixels. In
practice, cross correlation is often followed by least squares for high
accuracy.
Cross Correlation
Cross correlation computes the correlation coefficient of the gray
values between the template window and the search window
according to the following equation:
ρ = Σij [ g1( c1, r1 ) - ḡ1 ] [ g2( c2, r2 ) - ḡ2 ] / √( Σij [ g1( c1, r1 ) - ḡ1 ]² × Σij [ g2( c2, r2 ) - ḡ2 ]² )

with

ḡ1 = ( 1/n ) Σij g1( c1, r1 )        ḡ2 = ( 1/n ) Σij g2( c2, r2 )
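A minimal sketch of this correlation coefficient for one reference (template) window and one equally sized search window, assuming both are NumPy arrays, is:

    import numpy as np

    def cross_correlation(template, search_window):
        """Correlation coefficient of gray values between a template window
        and a candidate search window of the same size."""
        g1 = template.astype(float) - template.mean()
        g2 = search_window.astype(float) - search_window.mean()
        return (g1 * g2).sum() / np.sqrt((g1 ** 2).sum() * (g2 ** 2).sum())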
In least squares correlation, the search window is related to the reference window by a radiometric transformation and an affine geometric transformation:

g2( c2, r2 ) = h0 + h1 g1( c1, r1 )
c2 = a0 + a1 c1 + a2 r1
r2 = b0 + b1 c1 + b2 r1

Linearizing these relationships gives an observation equation for each pixel:

v = ( a1 + a2 c1 + a3 r1 ) gc + ( b1 + b2 c1 + b3 r1 ) gr - h1 - h2 g1( c1, r1 ) + ∆g

with ∆g = g2( c2, r2 ) - g1( c1, r1 )

Where:
gc and gr are the gradients of g2( c2, r2 ).
Feature Based Matching Feature based matching determines the correspondence between
two image features. Most feature based techniques match extracted
point features (this is called feature point matching), as opposed to
other features, such as lines or complex objects. The feature points
are also commonly referred to as interest points. Poor contrast areas
can be avoided with feature based matching.
In order to implement feature based matching, the image features
must initially be extracted. There are several well-known operators
for feature point extraction. Examples include the Moravec Operator,
the Dreschler Operator, and the Förstner Operator (Förstner and
Gülch, 1987; Lü, 1988).
After the features are extracted, the attributes of the features are
compared between two images. The feature pair having the
attributes with the best fit is recognized as a match. IMAGINE LPS
Project Manager utilizes the Förstner interest operator to extract
feature points.
Relation Based Matching Relation based matching is also called structural matching
(Vosselman and Haala, 1992; Wang, Y., 1994; and Wang, Y., 1995).
This kind of matching technique uses the image features and the
relationship between the features. With relation based matching, the
corresponding image structures can be recognized automatically,
without any a priori information. However, the process is time-
consuming since it deals with varying types of information. Relation
based matching can also be applied for the automatic recognition of
control points.
[Figure: An image pyramid; for example, Level 3 at 128 × 128 pixels (resolution of 1:4) and Level 2 at 256 × 256 pixels (resolution of 1:2).]
NOTE: The following section addresses only the 10 meter SPOT Pan
scenario.
A pixel in the SPOT image records the light detected by one of the
6000 light sensitive elements in the camera. Each pixel is defined by
file coordinates (column and row numbers). The physical dimension
of a single, light-sensitive element is 13 ×13 microns. This is the
pixel size in image coordinates. The center of the scene is the center
pixel of the center scan line. It is the origin of the image coordinate
system. Figure 128 depicts image coordinates in a satellite scene:
[Figure 128: Image and file coordinates in a satellite scene: a scene of 6000 lines by 6000 pixels, with file coordinate axes A-XF and A-YF originating at A in the upper left corner and image coordinate axes originating at the scene center C.]
Where:
A = origin of file coordinates
A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes
SPOT Interior Orientation Figure 129 shows the interior orientation of a satellite scene. The
transformation between file coordinates and image coordinates is
constant.
[Figure 129: Interior orientation of a SPOT scene: the perspective centers O1 ... Ok ... On of the individual scan lines are aligned along the orbiting direction (N to S) at the focal length f above the image plane, each with its own principal point PPk, image point Pk (xk), and bundle of light rays lk.]
For each scan line, a separate bundle of light rays is defined, where:
Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the
orbit
PPk = principal point for scan line k
lk = light rays for scan line, bundled at perspective
center Ok
SPOT Exterior Orientation SPOT satellite geometry is stable and the sensor parameters, such
as focal length, are well-known. However, the triangulation of SPOT
scenes is somewhat unstable because of the narrow, almost parallel
bundles of light rays.
[Figure 130: Inclination of a stereoscene (view from North to South): exposure stations O1 and O2 on orbits 1 and 2, sensors inclined eastward (I-) and westward (I+) from vertical, and the scene coverage centered at C on the Earth's surface (ellipsoid).]
Where:
C = center of the scene
I- = eastward inclination
I+ = westward inclination
O1,O2 = exposure stations (perspective centers of imagery)
The orientation angle of a satellite scene is the angle between a
perpendicular to the center scan line and the North direction. The
spatial motion of the satellite is described by the velocity vector. The
real motion of the satellite above the ground is further distorted by
the Earth’s rotation.
The velocity vector of a satellite is the satellite’s velocity if measured
as a vector through a point on the spheroid. It provides a technique
to represent the satellite’s speed as if the imaged area were flat
instead of being a curved surface (see Figure 131).
[Figure 131: Velocity vector and orientation angle of a single scene: the velocity vector V lies along the orbital path, and the orientation angle O is measured at the scene center C.]
Where:
O = orientation angle
C = center of the scene
V = velocity vector
Satellite block triangulation provides a model for calculating the
spatial relationship between a satellite sensor and the ground
coordinate system for each line of data. This relationship is
expressed as the exterior orientation, which consists of
• the perspective center of the center scan line (i.e., X, Y, and Z),
• the three rotations of the center scan line (i.e., omega, phi, and
kappa), and
Collinearity Equations and Satellite Block Triangulation Modified collinearity equations are used to compute the exterior orientation parameters associated with the respective scan lines in the satellite scenes. Each scan line has a unique perspective center
and individual rotation angles. When the satellite moves from one
scan line to the next, these parameters change. Due to the smooth
motion of the satellite in orbit, the changes are small and can be
modeled by low order polynomial functions.
[Figure: GCPs located relative to the scan lines of a satellite scene.]

• Earth curvature

[Figure: Digital orthorectification: an image point P1 on the image plane (focal length f) is related through the perspective center to a ground point P, whose X and Z coordinates are taken from the DTM; the gray value at P1 is assigned to the corresponding cell of the orthoimage.]
Where:
P = ground point
P1 = image point
O = perspective center (origin)
X,Z = ground coordinates (in DTM file)
f = focal length
Introduction Radar images are quite different from other remotely sensed
imagery you might use with ERDAS IMAGINE software. For example,
radar images may have speckle noise. Radar images do, however,
contain a great deal of information. ERDAS IMAGINE has many radar
packages, including IMAGINE Radar Interpreter, IMAGINE
OrthoRadar, IMAGINE StereoSAR DEM, IMAGINE IFSAR DEM, and
the Generic SAR Node with which you can analyze your radar
imagery. You have already learned about the various methods of
speckle suppression—those are IMAGINE Radar Interpreter
functions.
This chapter tells you about the advanced radar processing packages
that ERDAS IMAGINE has to offer. The following sections go into
detail about the geometry and functionality of those modules of the
IMAGINE Radar Mapping Suite.
IMAGINE OrthoRadar Theory
Parameters Required for Orthorectification SAR image orthorectification requires certain information about the sensor and the SAR image. Different sensors (RADARSAT, ERS, etc.)
express these parameters in different ways and in different units. To
simplify the design of our SAR tools and easily support future
sensors, all SAR images and sensors are described using our Generic
SAR model. The sensor-specific parameters are converted to a
Generic SAR model on import.
The following table lists the parameters of the Generic SAR model
and their units. These parameters can be viewed in the SAR
Parameters tab on the main Generic SAR Model Properties (IMAGINE
OrthoRadar) dialog.
Table 48: SAR Parameters Required for Orthorectification
Overview
The orthorectification process consists of several steps:
Ephemeris Modeling
The platform ephemeris is described by three or more platform
locations and velocities. To predict the platform position and velocity
at some time (t):
Rs,x = a1 + a2 t + a3 t²
Rs,y = b1 + b2 t + b3 t²
Rs,z = c1 + c2 t + c3 t²
Vs,x = d1 + d2 t + d3 t²
Vs,y = e1 + e2 t + e3 t²
Vs,z = f1 + f2 t + f3 t²
Where Rs is the sensor position and Vs is the sensor velocity:
Form the matrix A:

        | 1.0   t1   t1² |
A =     | 1.0   t2   t2² |
        | 1.0   t3   t3² |
Where t1, t2, and t3 are the times associated with each platform
position. Select t such that t = 0.0 corresponds to the time of the
second position point. Form vector b:
b = [ R s,x(1) R s,x(2) R s,x(3) ] T
Where Rs,x(i) is the x-coordinate of the i-th platform position (i =1:
3). We wish to solve Ax = b where x is:
x = [ a1 a2 a3 ] T
To do so, use LU decomposition. The process is repeated for: Rs,y,
Rs,z, Vs,x, Vs,y, and Vs,z
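For illustration, each component can be fitted with a small linear solve; the sketch below uses NumPy's general solver rather than an explicit LU decomposition, and the names are assumptions:

    import numpy as np

    def fit_ephemeris_component(times, values):
        """Fit component(t) = a1 + a2*t + a3*t^2 through three platform states.
        Times are shifted so that t = 0 corresponds to the second point."""
        t = np.asarray(times, dtype=float) - times[1]
        A = np.column_stack([np.ones(3), t, t ** 2])
        b = np.asarray(values, dtype=float)
        return np.linalg.solve(A, b)              # [a1, a2, a3]

    # Repeated for Rs,x, Rs,y, Rs,z, Vs,x, Vs,y, and Vs,z.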
T(j) = T(0) + ( ( j - 1 ) / ( Na - 1 ) ) × tdur
Where T(0) is the image start time, Na is the number of range lines,
and tdur is the image duration time.
Doppler Centroid
The computation of the Doppler centroid fd to use with the SAR
imaging model depends on how the data was processed. If the data
was deskewed, this value is always 0. If the data is skewed, then this
value may be a nonzero constant or may vary with i.
Slant Range
The computation of the slant range to the pixel i depends on the
projection of the image. If the data is in a slant range projection,
then the computation of slant range is straightforward:
Rsl( i ) = rsl + ( i - 1 ) × ∆rsr
Where Rsl(i) is the slant range to pixel i, rsl is the near slant range,
and ∆rsr is the slant range pixel spacing.
If the projection is a ground range projection, then this computation
is potentially more complicated and depends on how the data was
originally projected into a ground range projection by the SAR
processor.
fD = ( 2 / ( λ Rsl ) ) ( Rs - Rt ) ⋅ ( Vs - Vt )

fD > 0 for forward squint

Rsl = | Rs - Rt |

The target position Rt also satisfies the Earth model used during SAR processing:

( Rt,x² + Rt,y² ) / ( Re + htarg )² + Rt,z² / ( Rm + htarg )² = 1
[Figure: SAR imaging geometry: the sensor position Rs, the target position Rt on the Earth model, the vector R between them, the angle ψ, and the Earth center.]
Ephemeris Adjustment
There are three possible adjustments that can be made: along track,
cross track, and radial. In IMAGINE OrthoRadar, the along track
adjustment is performed separately. The cross track and radial
adjustments are made simultaneously. These adjustments are made
using residuals associated with GCPs. Each GCP has a map
coordinate (such as lat, lon) and an elevation. Also, an SAR image
range line and range pixel must be given. The SAR image range line
and range pixel are converted to Rt using the method described
previously (substituting htarg = elevation of GCP above ellipsoid used
in SAR processing).
The along track adjustment is computed first, followed by the cross
track and radial adjustments. The two adjustment steps are then
repeated.
Output Formation
For each point in the output grid, there is an associated Rt. This
target should fall on the surface of the Earth model used for SAR
processing, thus a conversion is made between the Earth model used
for the output grid and the Earth model used during SAR processing.
The process of orthorectification starts with a location on the ground.
The line and pixel location of the pixel to this map location can be
determined from the map location and the sparse mapping grid. The
value at this pixel location is then assigned to the map location.
Figure 136 illustrates this process.
(Figure: IMAGINE StereoSAR DEM process flow — Import; Registration using
affine tie points to produce a coregistered Image 2; Automatic Image
Correlation producing a Parallax File; Range/Doppler Stereo Intersection
producing a sensor-based DEM with incidence angles and the elevation of each
point; Resample and Reproject to the output Digital Elevation Model.)
NOTE: IMAGINE StereoSAR DEM has built-in checks that ensure the sensor
associated with the Reference image is closer to the imaged area than the
sensor associated with the Match image.
Import
The imagery required for the IMAGINE StereoSAR DEM module can
be imported using the ERDAS IMAGINE radar-specific importers for
either RADARSAT or ESA (ERS-1, ERS-2). These importers
automatically extract data from the image header files and store it
in an Hfa file attached to the image. In addition, they abstract key
parameters necessary for sensor modeling and attach these to the
image as a Generic SAR Node Hfa file. Other radar imagery (e.g.,
SIR-C) can be imported using the Generic Binary Importer. The
Generic SAR Node can then be used to attach the Generic SAR Node
Hfa file.
Orbit Correction
Extensive testing of both the IMAGINE OrthoRadar and IMAGINE
StereoSAR DEM modules has indicated that the ephemeris data from
the RADARSAT and the ESA radar satellites is very accurate (see
appended accuracy reports). However, the accuracy does vary with
each image, and there is no a priori way to determine the accuracy
of a particular data set.
The modules of the IMAGINE Radar Mapping Suite: IMAGINE
OrthoRadar, IMAGINE StereoSAR DEM, and IMAGINE IFSAR DEM,
allow for correction of the sensor model using GCPs. Since the
supplied orbit ephemeris is very accurate, orbit correction should
only be attempted if you have very good GCPs. In practice, it has
been found that GCPs from 1:24,000-scale maps or a handheld GPS
are the minimum acceptable accuracy. In some instances, a single
accurate GCP has been found to result in a significant increase in
accuracy.
As with image warping, a uniform distribution of GCPs results in a
better overall result and a lower RMS error. Again, accurate GCPs are
an essential requirement. If your GCPs are questionable, you are
probably better off not using them. Similarly, the GCP must be
recognizable in the radar imagery to within plus or minus one to two
pixels. Road intersections, reservoir dams, airports, or similar man-
made features are usually best. Lacking one very accurate and
locatable GCP, it would be best to utilize several good GCPs
dispersed throughout the image as would be done for a rectification.
Refined Ephemeris
For more information, see the IMAGINE Radar Interpreter tour guide in the
ERDAS IMAGINE Tour Guides.
Rescale
This operation converts the input imagery bit format, commonly
unsigned 16-bit, to unsigned 8-bit using a two standard deviations
stretch. This is done to reduce the overall data file sizes. Testing has
not shown any advantage to retaining the original 16-bit format, and
use of this option is routinely recommended.
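A minimal sketch of such a two-standard-deviation stretch, assuming the band is
held in a NumPy array; the exact clipping behavior of the ERDAS IMAGINE rescale
step may differ:

```python
import numpy as np

def rescale_two_std(img16):
    """Rescale a 16-bit band to unsigned 8-bit using a two-standard-deviation
    stretch: mean +/- 2*std is mapped linearly onto 0..255 and clipped."""
    img = img16.astype(np.float64)
    mean, std = img.mean(), img.std()
    lo, hi = mean - 2.0 * std, mean + 2.0 * std
    scaled = (img - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```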
Register Register is the first of the Process Steps (other than Input) that must
be done. This operation serves two important functions; proper user input at
this processing level affects the speed of subsequent processing and may
affect the accuracy of the final output DEM.
Constrain This option is intended to allow you to define areas where it is not
necessary to search the entire search window area. A region of lakes
would be such an area. This reduces processing time and also
minimizes the likelihood of finding false positives. This option is not
implemented at present.
(Figure: the resolution pyramid — Level 3 uses 128 × 128 pixel images at a
resolution of 1:4, Level 2 uses 256 × 256 pixel images at 1:2, and Level 1
uses the full-resolution (1:1) 512 × 512 pixel images; matching ends on
Level 1.)
Template Size
The size of the template directly affects computation time: a larger image
chip takes more time. However, too small a template could contain
insufficient image detail to allow accurate matching. A balance must be
struck between these two competing criteria, and the best choice is
somewhat image-dependent. A suitable template for a suburban
area with roads, fields, and other features could be much smaller
than the required template for a vast region of uniform ground cover.
Because of viewing geometry-induced differences in the Reference
and Match images, the template from the Reference image is never
identical to any area of the Match image. The template must be large
enough to minimize this effect.
The IMAGINE StereoSAR DEM correlator parameters shown in Table
49 are for the library file Std_LP_HD.ssc. These parameters are
appropriate for a RADARSAT Standard Beam mode (Std) stereopair
with low parallax (LP) and high density of detail (HD). The low
parallax parameters are appropriate for images of low to moderate
topography. The high density of detail (HD) parameters are
appropriate for the suburban area discussed above.
Note that the size of the template (Size X and Size Y) increases as
you go up the resolution pyramid. This size is the effective size if it
were on the bottom of the pyramid (i.e., the full resolution image).
Since they are actually on reduced resolution levels of the pyramid,
they are functionally smaller. Thus, the 220 × 220 template on Level
6 is actually only 36 × 36 during the actual search. By stating the
template size relative to the full resolution image, it is easy to display
a box of approximate size on the input image to evaluate the amount
of detail available to the correlator, and thus optimize the template
sizes.
Search Area
Considerable computer time is expended in searching the Match
image for the exact Match point. Thus, this search area should be
minimized. (In addition, searching too large an area increases the
possibility of a false match.) For this reason, the software first
requires that the two images be registered. This gives the software
a rough idea of where the Match point might be. In stereo DEM
generation, you are looking for the offset of a point in the Match
image from its corresponding point in the Reference image
(parallax). The minimum and maximum displacement is quantified
in the Register step and is used to restrain the search area.
In Figure 140, the search area is defined by four parameters: -X, +X,
-Y, and +Y. Most of the displacement in radar imagery is a function
of the look angle and is in the range or x-axis direction. Thus, the
search area is always a rectangle emphasizing the x-axis. Because
the total search area (and, therefore, the total time) is X times Y, it
is important to keep these values to a minimum. Careful use at the
Register step easily achieves this.
Threshold
The degree of similarity between the Reference template and each
possible Match region within the search area must be quantified by
a mathematical metric. IMAGINE StereoSAR DEM uses the widely
accepted normalized correlation coefficient. The range of possible
values extends from -1 to +1, with +1 being an identical match. The
algorithm uses the maximum value within the search area as the
correlation point.
The threshold in Table 49 is the minimum numerical value of the
normalized correlation coefficient that is accepted as a correlation
point. If no value within the entire search area attains this minimum,
there is not a Match point for that level of the resolution pyramid. In
this case, the initial estimated position, passed from the previous
level of the resolution pyramid, is retained as the Match point.
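The following sketch shows the normalized correlation coefficient and the
thresholded search described above; it is a brute-force illustration assuming
NumPy arrays, not the IMAGINE StereoSAR DEM correlator:

```python
import numpy as np

def normalized_correlation(template, region):
    """Normalized correlation coefficient between a Reference template and an
    equally sized Match region; ranges from -1 to +1 (+1 = identical)."""
    t = template.astype(np.float64) - template.mean()
    r = region.astype(np.float64) - region.mean()
    denom = np.sqrt((t * t).sum() * (r * r).sum())
    return (t * r).sum() / denom if denom > 0 else 0.0

def best_match(template, search_area, threshold):
    """Slide the template over the search area, keep the peak coefficient, and
    return None if the peak never reaches the threshold."""
    th, tw = template.shape
    best, best_rc = -1.0, None
    for i in range(search_area.shape[0] - th + 1):
        for j in range(search_area.shape[1] - tw + 1):
            c = normalized_correlation(template, search_area[i:i + th, j:j + tw])
            if c > best:
                best, best_rc = c, (i, j)
    return best_rc if best >= threshold else None
```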
Correlator Library
To aid both the novice and the expert in rapidly selecting and refining
an IMAGINE StereoSAR DEM correlator parameter file for a specific
image pair, a library of tested parameter files has been assembled
and is included with the software. These files are labeled using the
following syntax: (RADARSAT Beam mode)_(Magnitude of
Parallax)_(Density of Detail).
Magnitude of Parallax
The magnitude of the parallax is divided into high parallax (_HP) and
low parallax (_LP) options. This determination is based upon the
elevation changes and slopes within the images and is somewhat
subjective. This parameter determines the size of the search area.
Quick Tests
It is often advantageous to quickly produce a low resolution DEM to
verify that the automatic image correlator is optimum before
correlating on every pixel to produce the final DEM.
For this purpose, a Quick Test (_QT) correlator parameter file has
been provided for each of the full resolution correlator parameter
files in the .ssc library. These correlators process the image only
through resolution pyramid Level 3. Processing time up to this level
has been found to be acceptably fast, and testing has shown that if
the image is successfully processed to this level, the correlator
parameter file is probably appropriate.
Evaluation of the parallax files produced by the Quick Test
correlators and subsequent modification of the correlator parameter
file is discussed in "IMAGINE StereoSAR DEM Application" in the
IMAGINE Radar Mapping Suite Tour Guide.
Degrade The second Degrade step compresses the final parallax image file
(Level 1). While not strictly necessary, it is logical and has proven
advantageous to reduce the pixel size at this time to approximately
the intended posting of the final output DEM. Doing so at this time
decreases the variance (LE90) of the final DEM through averaging.
Height This step combines the information from the above processing steps
to derive surface elevations. The sensor models of the two input
images are combined to derive the stereo intersection geometry. The
parallax values for each pixel are processed through this geometric
relationship to derive a DEM in sensor (pixel) coordinates.
Comprehensive testing of the IMAGINE StereoSAR DEM module has
indicated that, with reasonable data sets and careful work, the
output DEM falls between DTED Level I and DTED Level II. This
corresponds to between USGS 30-meter and USGS 90-meter DEMs.
Thus, an output pixel size of 40 to 50 meters is consistent with this
expected precision.
The final step is to resample and reproject this sensor DEM into the
desired final output DEM. The entire ERDAS IMAGINE reprojection
package is accessed within the IMAGINE StereoSAR DEM module.
Electromagnetic Wave Background
In order to understand the SAR interferometric process, you must have a
general understanding of electromagnetic waves and how they propagate. An
electromagnetic wave is a changing electric field that produces a changing
magnetic field that produces a changing electric field, and so on. As this
process repeats, energy is propagated through empty space at the speed of
light.
Figure 142 gives a description of the type of electromagnetic wave
that we are interested in. In this diagram, E indicates the electric
field and H represents the magnetic field. The directions of E and H
are mutually perpendicular everywhere. In a uniform plane wave, E and H lie
in a plane and have the same value everywhere in that plane.
(Figure: a uniform plane wave, showing the electric field Ey, the magnetic
field Hz, the direction of propagation along z, the wavelength λ, a point P,
and snapshots of the wave at t = 0, t = T/4, and t = T/2.)
Equation 1

E_y = \cos(\omega t + \beta x)

Where:

\omega = \frac{2\pi}{T}        \beta = \frac{2\pi}{\lambda}
Equation 1 is expressed in Cartesian coordinates and assumes that
the maximum magnitude of Ey is unity. It is more useful to express
this equation in exponential form and include a maximum term as in
Equation 2.
Equation 2

E_y = E_0 \cdot e^{j(\omega t \pm \beta x)}
The Interferometric Model
Most uses of SAR imagery involve a display of the magnitude of the image
reflectivity; the phase is discarded when the complex image is
magnitude-detected. The phase of an image pixel representing a single
scatterer is deterministic; however, the phase of an image pixel representing
multiple scatterers (in the same resolution cell) is made up of both a
deterministic and a nondeterministic, statistical part. For this reason, pixel
phase in a single SAR image is generally
not useful. However, with proper selection of an imaging geometry,
two SAR images can be collected that have nearly identical
nondeterministic phase components. These two SAR images can be
subtracted, leaving only a useful deterministic phase difference of
the two images.
Figure 145 provides the basic geometric model for an interferometric
SAR system.
Where:
A1 = antenna 1
A2 = antenna 2
Bi = baseline
R1 = vector from antenna 1 to point of interest
R2 = vector from antenna 2 to point of interest
Ψ = angle between R1 and baseline vectors (depression
angle)
Zac = antenna 1 height
Equation 3

P_1 = a_1 \cdot e^{j(\theta_1 + \Phi_1)}

and

Equation 4

P_2 = a_2 \cdot e^{j(\theta_2 + \Phi_2)}

Equation 5

\Phi_i = \frac{4\pi R_i}{\lambda}
Equation 6

I = P_1 \cdot P_2^*

Equation 7

I = a^2 \cdot e^{-j \frac{4\pi}{\lambda}(R_1 - R_2)} = a^2 \cdot e^{j\phi_{12}}

Equation 8

\phi_{12} = \frac{4\pi (R_2 - R_1)}{\lambda}

Equation 9

R_2 - R_1 \approx B_i \cos(\psi)

Equation 10

\phi_{12} \approx \frac{4\pi B_i \cos(\psi)}{\lambda}

Equation 11

\Delta\phi_{12} = -\frac{4\pi B_i}{\lambda} \sin(\psi)\,\Delta\psi

or

Equation 12

\Delta\psi = -\frac{\lambda}{4\pi B_i \sin(\psi)} \Delta\phi_{12}
(Figure: geometry relating a change Δψ in depression angle to a change Δh in
target height, showing the antenna height Zac, the slant range Zac / sin(ψ),
and the X and Y axes.)
From this geometry, a change ∆ψ in depression angle is related to a
change ∆h in height (at the same range from mid-baseline) by
Equation 13.
Equation 13

Z_{ac} - \Delta h = \frac{Z_{ac} \sin(\psi - \Delta\psi)}{\sin(\psi)}

Equation 14

\Delta h \approx Z_{ac} \cot(\psi)\,\Delta\psi

\Delta h = -\frac{\lambda Z_{ac} \cot(\psi)}{4\pi B_i \sin(\psi)} \Delta\phi_{12}
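Equation 14 can be evaluated directly; a minimal sketch, assuming angles in
radians and consistent length units (the function name is illustrative):

```python
import math

def height_change_from_phase(delta_phi, wavelength, z_ac, baseline, psi):
    """Equation 14: convert an unwrapped interferometric phase difference
    (radians) into a height change, given antenna height z_ac, baseline
    length, and depression angle psi (radians)."""
    cot_psi = math.cos(psi) / math.sin(psi)
    return -(wavelength * z_ac * cot_psi
             / (4.0 * math.pi * baseline * math.sin(psi))) * delta_phi
```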
Image Registration In the discussion of the interferometric model of the last section, we
assumed that the pixels had been identified in each image that
contained the phase information for the scatterer of interest.
Aligning the images from the two antennas is the purpose of the
image registration step. For interferometric systems that employ two
antennas attached by a fixed boom and collect data simultaneously,
this registration is simple and deterministic. Given the collection
geometry, the registration can be calculated without referring to the
data. For repeat pass systems, the registration is not quite so simple.
Since the collection geometry cannot be precisely known, we must
use the data to help us achieve image registration.
The registration process for repeat pass interferometric systems is
generally broken into two steps: pixel and sub-pixel registration.
Pixel registration involves using the magnitude (visible) part of each
image to remove the image misregistration down to around a pixel.
This means that, after pixel registration, the two images are
registered to within one or two pixels of each other in both the range
and azimuth directions.
Pixel registration is best accomplished using a standard window
correlator to compare the magnitudes of the two images over a
specified window. You usually specify a starting point in the two
images, a window size, and a search range for the correlator to
search over. The process identifies the pixel offset that produces the
highest match between the two images, and therefore the best
interferogram. One offset is enough to pixel register the two images.
Pixel registration, in general, produces a reasonable interferogram,
but not the best possible. This is because of the nature of the phase
function for each of the images. In order to form an image from the
original signal data collected for each image, it is required that the
phase functions in range and azimuth be Nyquist sampled.
Nyquist sampling simply means that the original continuous function
can be reconstructed from the sampled data. This means that, while
the magnitude resolution is limited to the pixel sizes (often less than
that), the phase function can be reconstructed to much higher
resolutions. Because it is the phase functions that ultimately provide
the height information, it is important to register them as closely as
possible. This fine registration of the phase functions is the goal of
the sub-pixel registration step.
Equation 15

i(r + \Delta r, a + \Delta a) = \zeta^{-1}\left[ I(u, v) \cdot e^{-j(u \Delta r + v \Delta a)} \right]
Where:
r = range independent variable
a = azimuth independent variable
i(r, a) = interferogram in spatial domain
I(u, v) = interferogram in frequency domain
∆r = sub-pixel range offset (e.g., 0.25)
∆a = sub-pixel azimuth offset (e.g., 0.75)
ζ-1 = inverse Fourier transform
Applying this relation directly requires two-dimensional (2D) Fourier
transforms and inverse Fourier transforms for each window tested.
This is impractical given the computing requirements of Fourier
transforms. Fortunately, we can achieve the upsampled phase functions we
need using 2D sinc interpolation, which involves convolving a 2D sinc
function of a given size over our search region. Equation 16 defines the
sinc function for one dimension.

Equation 16

\frac{\sin(n\pi)}{n\pi}
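A one-dimensional sketch of the idea, assuming NumPy and a complex-valued range
line; the kernel width and the convolution-based shift are illustrative
choices, not the IMAGINE IFSAR DEM implementation:

```python
import numpy as np

def sinc_kernel(offset, half_width=8):
    """Sampled sin(n*pi)/(n*pi) kernel for a sub-pixel offset (e.g., 0.25 of a
    pixel); numpy.sinc(x) is sin(pi*x)/(pi*x)."""
    n = np.arange(-half_width, half_width + 1)
    return np.sinc(n + offset)

def shift_line(line, offset):
    """Resample one complex, band-limited range line at integer positions
    shifted by the given sub-pixel offset."""
    return np.convolve(line, sinc_kernel(offset), mode="same")
```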
Equation 17

\hat{i}(r, a) = \frac{\sum_{i=0}^{N} \sum_{j=0}^{M} \mathrm{Re}[i(r + i, a + j)] + j\,\mathrm{Im}[i(r + i, a + j)]}{M + N}
The sharp ridges that look like contour lines in Figure 147 and Figure
148 show where the phase functions wrap. The goal of the phase
unwrapping step is to make this one continuous function. This is
discussed in greater detail in “Phase Unwrapping”. Notice how the
filtered image of Figure 148 is much cleaner than that of Figure 147.
This filtering makes the phase unwrapping much easier.
Phase Flattening The phase function of Figure 148 is fairly well behaved and is ready
to be unwrapped. There are relatively few wrap lines and they are
distinct. Notice in the areas where the elevation is changing more
rapidly (mountain regions) the frequency of the wrapping increases.
In general, the higher the wrapping frequency, the more difficult the
area is to unwrap. Once the wrapping frequency exceeds the spatial
sampling of the phase image, information is lost. An important
technique in reducing this wrapping frequency is phase flattening.
Phase flattening involves removing high frequency phase wrapping
caused by the collection geometry. This high frequency wrapping is
mainly in the range direction, and is because of the range separation
of the antennas during the collection. Recall that it is this range
separation that gives the phase difference and therefore the height
information. The phase function of Figure 148 has already had phase
flattening applied to it. Figure 149 shows this same phase function
without phase flattening applied.
Phase Unwrapping We stated in “The Interferometric Model” that we must unwrap the
interferometric phase before we can use it to calculate height values.
In “Phase Noise Reduction” and “Phase Flattening”, we develop
methods of making the phase unwrapping job easier. This section
further defines the phase unwrapping problem and describes how to
solve it.
As an electromagnetic wave travels through space, it cycles through
its maximum and minimum phase values many times as shown in
Figure 150.
(Figure 150: an electromagnetic wave cycling through its phase, with two
points P1 and P2 at phases φ_1 = 3π/2 and φ_2 = 11π/2.)

Equation 18

\phi_2 - \phi_1 = \frac{11\pi}{2} - \frac{3\pi}{2} = 4\pi
Recall from Equation 8 that finding the phase difference at two points
is the key to extracting height from interferometric phase.
Unfortunately, an interferometric system does not measure the total
pixel phase difference. Rather, it measures only the phase difference
that remains after subtracting all full 2π intervals present (modulo-2π).
This results in the following value for the phase difference of
Equation 18.
Equation 19

\phi_2 - \phi_1 = \frac{3\pi}{2} - \frac{3\pi}{2} = 0

(Figure: the wrapped phase function compared with the continuous function it
represents, which rises from 0 to 10π.)
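A simple one-dimensional illustration of wrapping and unwrapping using NumPy
(the 2D unwrapping performed on an interferogram is considerably harder):

```python
import numpy as np

# Wrapped phase is known only modulo 2*pi; unwrapping restores the multiples
# of 2*pi so the function becomes continuous, as in the figure above.
true_phase = np.linspace(0.0, 4.0 * np.pi, 50)       # continuous 0 .. 4*pi ramp
wrapped = np.angle(np.exp(1j * true_phase))           # wraps into (-pi, pi]
unwrapped = np.unwrap(wrapped)                        # recovers the ramp
```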
Registration In many cases, images of one area that are collected from different
sources must be used together. To be able to compare separate
images pixel by pixel, the pixel grids of each image must conform to
the other images in the data base. The tools for rectifying image data
are used to transform disparate images to the same coordinate
system.
Registration is the process of making an image conform to another
image. A map coordinate system is not necessarily involved. For
example, if image A is not rectified and it is being used with image
B, then image B must be registered to image A so that they conform
to each other. In this example, image A is not rectified to a particular
map projection, so there is no need to rectify image B to a map
projection.
When to Rectify Rectification is necessary in cases where the pixel grid of the image
must be changed to fit a map projection system or a reference
image. There are several reasons for rectifying image data:
• mosaicking images
This information is usually the same for each layer of an image file,
although it could be different. For example, the cell size of band 6 of
Landsat TM data is different than the cell size of the other bands.
Disadvantages of Rectification
During rectification, the data file values of rectified pixels must be
resampled to fit into a new grid of pixel rows and columns. Although
some of the algorithms for calculating these values are highly
reliable, some spectral integrity of the data can be lost during
rectification. If map coordinates or map units are not needed in the
application, then it may be wiser not to rectify the image. An
unrectified image is more spectrally correct than a rectified image.
Classification
Some analysts recommend classification before rectification, since
the classification is then based on the original data values. Another
benefit is that a thematic file has only one band to rectify instead of
the multiple bands of a continuous file. On the other hand, it may be
beneficial to rectify the data first, especially when using GPS data for
the GCPs. Since these data are very accurate, the classification may
be more accurate if the new coordinates help to locate better training
samples.
Thematic Files
Nearest neighbor is the only appropriate resampling method for
thematic files, which may be a drawback in some applications. The
available resampling methods are discussed in detail later in this
chapter.
1. Locate GCPs.
Ground Control Points
GCPs are specific pixels in an image for which the output map coordinates
(or other output coordinates) are known. GCPs consist of two X,Y pairs of
coordinates:
GCPs in ERDAS IMAGINE Any ERDAS IMAGINE image can have one GCP set associated with it.
The GCP set is stored in the image file along with the raster layers.
If a GCP set exists for the top file that is displayed in the Viewer, then
those GCPs can be displayed when the GCP Tool is opened.
In the CellArray of GCP data that displays in the GCP Tool, one
column shows the point ID of each GCP. The point ID is a name given
to GCPs in separate files that represent the same geographic
location. Such GCPs are called corresponding GCPs.
A default point ID string is provided (such as GCP #1), but you can
enter your own unique ID strings to set up corresponding GCPs as
needed. Even though only one set of GCPs is associated with an
image file, one GCP set can include GCPs for a number of
rectifications by changing the point IDs for different groups of
corresponding GCPs.
Entering GCPs Accurate GCPs are essential for an accurate rectification. From the
GCPs, the rectified coordinates for all other points in the image are
extrapolated. Select many GCPs throughout the scene. The more
dispersed the GCPs are, the more reliable the rectification is. GCPs
for large-scale imagery might include the intersection of two roads,
airport runways, utility corridors, towers, or buildings. For small-
scale imagery, larger features such as urban areas or geologic
features may be used. Landmarks that can vary (e.g., the edges of
lakes or other water bodies, vegetation, etc.) should not be used.
• Use the mouse to select a pixel from an image in the Viewer. With
both the source and destination Viewers open, enter source
coordinates and reference coordinates for image-to-image
registration.
Mouse Option
When entering GCPs with the mouse, you should try to match
coarser resolution imagery to finer resolution imagery (i.e., Landsat
TM to SPOT), and avoid stretching resolution spans greater than a
cubic convolution radius (a 4 × 4 area). In other words, you should
not try to match Landsat MSS to SPOT or Landsat TM to an aerial
photograph.
GCP Prediction and Matching
Automated GCP prediction enables you to pick a GCP in either
coordinate system and automatically locate that point in the other
coordinate system based on the current transformation parameters.
Automated GCP matching is a step beyond GCP prediction. For
image-to-image rectification, a GCP selected in one image is
precisely matched to its counterpart in the other image using the
spectral characteristics of the data and the geometric
transformation. GCP matching enables you to fine tune a rectification
for highly accurate results.
GCP Prediction
GCP prediction is a useful technique to help determine if enough
GCPs have been gathered. After selecting several GCPs, select a
point in either the source or the destination image, then use GCP
prediction to locate the corresponding GCP on the other image
(map). This point is determined based on the current transformation
derived from existing GCPs. Examine the automatically generated
point and see how accurate it is. If it is within an acceptable range
of accuracy, then there may be enough GCPs to perform an accurate
rectification (depending upon how evenly dispersed the GCPs are).
If the automatically generated point is not accurate, then more GCPs
should be gathered before rectifying the image.
GCP prediction can also be used when applying an existing
transformation to another image in a data set. This saves time in
selecting another set of GCPs by hand. Once the GCPs are
automatically selected, those that do not meet an acceptable level of
error can be edited.
GCP Matching
In GCP matching, you can select which layers from the source and
destination images to use. Since the matching process is based on
the reflectance values, select layers that have similar spectral
wavelengths, such as two visible bands or two infrared bands. You
can perform histogram matching to ensure that there is no offset
between the images. You can also select the radius from the
predicted GCP from which the matching operation searches for a
spectrally similar pixel. The search window can be any odd size
between 5 × 5 and 21 × 21.
You can specify the order of the transformation you want to use
in the Transform Editor.
Transformation Matrix
A transformation matrix is computed from the GCPs. The matrix
consists of coefficients that are used in polynomial equations to
convert the coordinates. The size of the matrix depends upon the
order of transformation. The goal in calculating the coefficients of the
transformation matrix is to derive the polynomial equations for which
there is the least possible amount of error when they are used to
transform the reference coordinates of the GCPs into the source
coordinates. It is not always possible to derive coefficients that
produce no error. For example, in Figure 154, GCPs are plotted on a
graph and compared to the curve that is expressed by a polynomial.
(Figure 154: GCPs plotted against the polynomial curve, with the source X
coordinate on the horizontal axis.)
• location in X and/or Y
• scale in X and/or Y
• skew in X and/or Y
• rotation
• scale
• offset
• rotate
• reflect
Scale
Scale is the same as the zoom option in the Viewer, except that you
can specify different scaling factors for X and Y.
Reflection
Reflection options enable you to perform the following operations:
a0  a1  a2
b0  b1  b2
xo = a0 + a1 x + a2 y
yo = b0 + b1 x + b2 y
Where:
x and y are source coordinates (input)
xo and yo are rectified coordinates (output)
the coefficients of the transformation matrix are as above
The number of coefficients in each polynomial of a transformation of order t
is:

\sum_{i=1}^{t+1} i = \frac{(t + 1) \times (t + 2)}{2}

Clearly, the size of the transformation matrix increases with the order of
the transformation.
x_o = \sum_{i=0}^{t} \sum_{j=0}^{i} a_k \times x^{i-j} \times y^j

y_o = \sum_{i=0}^{t} \sum_{j=0}^{i} b_k \times x^{i-j} \times y^j

Where:

t is the order of the polynomial
ak and bk are coefficients
the subscript k in ak and bk is determined by:

k = \frac{i \cdot i + i}{2} + j
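A sketch of how these polynomial equations can be evaluated, using the k index
defined above; the coefficient values and the function name are illustrative:

```python
def transform(x, y, a, b, t):
    """Apply a t-th order polynomial transformation. The lists a and b hold
    the (t+1)(t+2)/2 coefficients for xo and yo, indexed by k."""
    xo = yo = 0.0
    for i in range(t + 1):
        for j in range(i + 1):
            k = (i * i + i) // 2 + j
            term = (x ** (i - j)) * (y ** j)
            xo += a[k] * term
            yo += b[k] * term
    return xo, yo

# A 1st-order example: xo = a0 + a1*x + a2*y, yo = b0 + b1*x + b2*y
xo, yo = transform(100.0, 200.0, [10.0, 0.5, 0.0], [20.0, 0.0, 0.5], t=1)
```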
An example of 3rd-order transformation equations for X and Y, using
numbers, is:
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              9
3                              1
x r = ( 25 ) + ( – 8 ) x i
Where:
xr = the reference X coordinate
xi = the source X coordinate
This equation takes on the same format as the equation of a line (y
= mx + b). In mathematical terms, a 1st-order polynomial is linear.
Therefore, a 1st-order transformation is also known as a linear
transformation. This equation is graphed in Figure 157.
(Figure 157: the line x_r = (25) + (-8)x_i plotted with the source X
coordinate on the horizontal axis and the reference X coordinate on the
vertical axis.)
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
A line cannot connect these points, which illustrates that they cannot
be expressed by a 1st-order polynomial, like the one above. In this
case, a 2nd-order polynomial equation expresses these points:
x_r = (31) + (-16)x_i + (2)x_i^2
Polynomials of the 2nd-order or higher are nonlinear. The graph of
this curve is drawn in Figure 159.
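The 2nd-order fit can be checked with a short sketch (numpy.polyfit is used
here purely for illustration; it is not how ERDAS IMAGINE computes the
transformation matrix):

```python
import numpy as np

# Source X coordinates and their reference (output) X coordinates
xi = np.array([1.0, 2.0, 3.0])
xr = np.array([17.0, 7.0, 1.0])

# polyfit returns coefficients from highest to lowest power
c2, c1, c0 = np.polyfit(xi, xr, 2)
print(c0, c1, c2)   # -> 31.0, -16.0, 2.0, i.e., xr = 31 - 16*xi + 2*xi^2
```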
Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
4                              5
As illustrated in Figure 160, this fourth GCP does not fit on the curve
of the 2nd-order polynomial equation. To ensure that all of the GCPs
fit, the order of the transformation could be increased to 3rd-order.
The equation and graph in Figure 161 could then result.
Source X Coordinate (input)    Reference X Coordinate (output)
1                              xo(1) = 17
2                              xo(2) = 7
3                              xo(3) = 1
4                              xo(4) = 5
The minimum number of GCPs required for a transformation of order t is:

\frac{(t + 1)(t + 2)}{2}
Use more than the minimum number of GCPs whenever possible.
Although it is possible to get a perfect fit, it is rare, no matter how
many GCPs are used.
For 1st- through 10th-order transformations, the minimum number
of GCPs required to perform a transformation is listed in the following
table:
Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66
For the best rectification results, you should always use more
than the minimum number of GCPs, and they should be well-
distributed.
Linear transformation The easiest and fastest is the linear transformation with the first
order polynomials:
x_o = a_0 + a_1 x + a_2 y
y_o = b_0 + b_1 x + b_2 y
Nonlinear transformation Even though the linear transformation is easy and fast, it has one
disadvantage. The transitions between triangles are not always
smooth. This phenomenon is obvious when shaded relief or contour
lines are derived from the DEM which is generated by the linear
rubber sheeting. It is caused by incorporating the slope change of
the control data at the triangle edges and vertices. In order to
distribute the slope change smoothly across triangles, the nonlinear
transformation with polynomial order larger than one is used by
considering the gradient information.
The fifth-order or quintic polynomial transformation is chosen here as the
nonlinear rubber sheeting technique. It is a smooth function. The
transformation function and its first-order partial derivatives are
continuous. It is not difficult to construct (Akima, 1978). The formula is
as follows:
x_o = \sum_{i=0}^{5} \sum_{j=0}^{i} a_k \cdot x^{i-j} \cdot y^j

y_o = \sum_{i=0}^{5} \sum_{j=0}^{i} b_k \cdot x^{i-j} \cdot y^j
Check Point Analysis It should be emphasized that the independent check point analysis
is critical for determining the accuracy of rubber sheeting modeling.
For an exact modeling method like rubber sheeting, the ground control
points, which are used in the modeling process, retain little geometric
residual. To evaluate the geometric
transformation between source and destination coordinate systems,
the accuracy assessment using independent check points is
recommended.
RMS Error RMS error is the distance between the input (source) location of a
GCP and the retransformed location for the same GCP. In other
words, it is the difference between the desired output coordinate for
a GCP and the actual output coordinate for the same point, when the
point is transformed with the geometric transformation.
RMS error is calculated with a distance equation:
RMS error = \sqrt{(x_r - x_i)^2 + (y_r - y_i)^2}
Where:
xi and yi are the input source coordinates
xr and yr are the retransformed coordinates
RMS error is expressed as a distance in the source coordinate
system. If data file coordinates are the source coordinates, then the
RMS error is a distance in pixel widths. For example, an RMS error of
2 means that the reference pixel is 2 pixels away from the
retransformed pixel.
Residuals and RMS Error Per GCP
The GCP Tool contains columns for the X and Y residuals. Residuals
are the distances between the source and retransformed coordinates
in one direction. They are shown for each GCP. The X residual is the
distance between the source X coordinate and the retransformed X
coordinate. The Y residual is the distance between the source Y
coordinate and the retransformed Y coordinate.
If the GCPs are consistently off in either the X or the Y direction,
more points should be added in that direction. This is a common
problem in off-nadir data.
R_i = \sqrt{XR_i^2 + YR_i^2}
Where:
Ri = the RMS error for GCPi
XRi = the X residual for GCPi
YRi = the Y residual for GCPi
Figure 164 illustrates the relationship between the residuals and the
RMS error per point.
(Figure 164: the X and Y residuals and the RMS error between the source GCP
and the retransformed GCP.)
Total RMS Error From the residuals, the following calculations are made to determine
the total RMS error, the X RMS error, and the Y RMS error:
R_x = \sqrt{\frac{1}{n} \sum_{i=1}^{n} XR_i^2}

R_y = \sqrt{\frac{1}{n} \sum_{i=1}^{n} YR_i^2}

T = \sqrt{R_x^2 + R_y^2}    or    T = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (XR_i^2 + YR_i^2)}
Where:
Rx = X RMS error
Ry = Y RMS error
T = total RMS error
n = the number of GCPs
i = GCP number
XRi = the X residual for GCPi
YRi = the Y residual for GCPi
E_i = \frac{R_i}{T}
Where:
Ei = error contribution of GCPi
Ri = the RMS error for GCPi
T = total RMS error
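A compact sketch of these residual and RMS error formulas, assuming the source
and retransformed GCP coordinates are available as arrays (the function name is
illustrative):

```python
import numpy as np

def rms_report(source_xy, retransformed_xy):
    """Per-GCP residuals and RMS, X/Y RMS, total RMS, and each GCP's error
    contribution Ei = Ri / T, following the formulas above."""
    src = np.asarray(source_xy, float)
    ret = np.asarray(retransformed_xy, float)
    xres = ret[:, 0] - src[:, 0]                  # X residuals
    yres = ret[:, 1] - src[:, 1]                  # Y residuals
    ri = np.sqrt(xres**2 + yres**2)               # RMS error per GCP
    rx = np.sqrt(np.mean(xres**2))                # X RMS error
    ry = np.sqrt(np.mean(yres**2))                # Y RMS error
    total = np.sqrt(rx**2 + ry**2)                # total RMS error
    return ri, rx, ry, total, ri / total
```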
(Figure: RMS error tolerance — retransformed coordinates within this range
are considered correct.)
Acceptable RMS error is determined by the end use of the data base,
the type of data being used, and the accuracy of the GCPs and
ancillary data being used. For example, GCPs acquired from GPS
should have an accuracy of about 10 m, but GCPs from 1:24,000-
scale maps should have an accuracy of about 20 m.
It is important to remember that RMS error is reported in pixels.
Therefore, if you are rectifying Landsat TM data and want the
rectification to be accurate to within 30 meters, the RMS error should
not exceed 1.00. Acceptable accuracy depends on the image area
and the particular project.
Evaluating RMS Error To determine the order of polynomial transformation, you can assess
the relative distortion in going from image to map or map to map.
One should start with a 1st-order transformation unless it is known
that it does not work. It is possible to repeatedly compute
transformation matrices until an acceptable RMS error is reached.
• Throw out the GCP with the highest RMS error, assuming that
this GCP is the least accurate. Another transformation can then
be computed from the remaining GCPs. A closer fit should be
possible. However, if this is the only GCP in a particular region of
the image, it may cause greater error to remove it.
• Select only the points for which you have the most confidence.
If the output units are pixels, then the origin of the image is the
upper left corner. Otherwise, the origin is the lower left corner.
Enter the nominal cell size in the Nominal Cell Size dialog.
(Figure: nearest neighbor resampling — the input pixel nearest to the
retransformed coordinate (xr, yr) supplies the output value.)
Advantages                               Disadvantages
The easiest of the three methods to      Using on linear thematic data (e.g.,
compute and the fastest to use.          roads, streams) may result in breaks
                                         or gaps in a network of linear data.
Bilinear Interpolation In bilinear interpolation, the data file value of the rectified pixel is
based upon the distances between the retransformed coordinate
location (xr, yr) and the four closest pixels in the input (source)
image (see Figure 168). In this example, the neighbor pixels are
numbered 1, 2, 3, and 4. Given the data file values of these four
pixels on a grid, the task is to calculate a data file value for r (Vr).
(Figure: bilinear interpolation — the retransformed coordinate (xr, yr) lies
at offsets dx, dy among the four closest input pixels 1, 2, 3, and 4, which
are spaced a distance D apart; Vm is interpolated between the data file
values V1 and V3, with slope (V3 - V1) / D, against the data file
coordinates Y1, Ym, Y3.)
V_m = \frac{V_3 - V_1}{D} \times dy + V_1

V_n = \frac{V_4 - V_2}{D} \times dy + V_2
From Vn and Vm, the data file value for r, which is at the
retransformed coordinate location (xr,yr),can be calculated in the
same manner:
V_r = \frac{V_n - V_m}{D} \times dx + V_m
The following is attained by plugging in the equations for Vm and Vn
to this final equation for Vr :
V_r = \frac{\left( \frac{V_4 - V_2}{D} \times dy + V_2 \right) - \left( \frac{V_3 - V_1}{D} \times dy + V_1 \right)}{D} \times dx + \frac{V_3 - V_1}{D} \times dy + V_1

V_r = \frac{V_1 (D - dx)(D - dy) + V_2 (dx)(D - dy) + V_3 (D - dx)(dy) + V_4 (dx)(dy)}{D^2}
In most cases D = 1, since data file coordinates are used as the
source coordinates and data file coordinates increment by 1.
Some equations for bilinear interpolation express the output data file
value as:
Vr = ∑ wi Vi
Where:
wi is a weighting factor
V_r = \sum_{i=1}^{4} \frac{(D - \Delta x_i)(D - \Delta y_i)}{D^2} \times V_i
Where:
∆xi = the change in the X direction between (xr,yr) and the data file
coordinate of pixel i
∆yi = the change in the Y direction between (xr,yr) and the data file
coordinate of pixel i
Vi = the data file value for pixel i
D = the distance between pixels (in X or Y) in the source coordinate
system
For each of the four pixels, the data file value is weighted more if the
pixel is closer to (xr, yr).
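A minimal sketch of the weighting formula above, with pixels 1 and 2 on the
upper row and 3 and 4 on the lower row (an illustration, not the ERDAS IMAGINE
resampling code):

```python
def bilinear(v1, v2, v3, v4, dx, dy, D=1.0):
    """Bilinear interpolation of four neighbors (v1 upper-left, v2 upper-right,
    v3 lower-left, v4 lower-right) at offsets dx, dy from the upper-left
    neighbor, with pixel spacing D."""
    return (v1 * (D - dx) * (D - dy) +
            v2 * dx * (D - dy) +
            v3 * (D - dx) * dy +
            v4 * dx * dy) / (D * D)

# Example: a point a quarter pixel right and half a pixel down from pixel 1
value = bilinear(10.0, 12.0, 14.0, 16.0, dx=0.25, dy=0.5)
```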
Advantages                               Disadvantages
Results in output images that are        Since pixels are averaged, bilinear
smoother, without the stair-stepped      interpolation has the effect of a
effect that is possible with nearest     low-frequency convolution. Edges are
neighbor.                                smoothed, and some extremes of the
                                         data file values are lost.
(Figure: cubic convolution uses the 4 × 4 block of input pixels around the
pixel (i, j) closest to the retransformed coordinate (Xr, Yr).)

V_r = \sum_{n=1}^{4} [\, V(i-1, j+n-2) \times f(d(i-1, j+n-2) + 1)
    + V(i, j+n-2) \times f(d(i, j+n-2))
    + V(i+1, j+n-2) \times f(d(i+1, j+n-2) - 1)
    + V(i+2, j+n-2) \times f(d(i+2, j+n-2) - 2) \,]

f(x) = \begin{cases} (a + 2)|x|^3 - (a + 3)|x|^2 + 1 & \text{if } |x| < 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a & \text{if } 1 \le |x| < 2 \\ 0 & \text{otherwise} \end{cases}
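The weighting function f(x) can be sketched as follows; the value a = -0.5 is
one common choice, and the helper that weights four neighbors along one axis is
illustrative:

```python
def cubic_kernel(x, a=-0.5):
    """Cubic convolution weighting function f(x) for a distance x from the
    retransformed location."""
    x = abs(x)
    if x < 1.0:
        return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
    if x < 2.0:
        return a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a
    return 0.0

def cubic_convolve_1d(values, d):
    """Weight four neighbors along one axis, where d is the distance from the
    second neighbor (0 <= d < 1); applying this along rows and then along
    columns gives the 4 x 4 two-dimensional result."""
    weights = [cubic_kernel(d + 1.0), cubic_kernel(d),
               cubic_kernel(d - 1.0), cubic_kernel(d - 2.0)]
    return sum(w * v for w, v in zip(weights, values))
```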
Data Points
The known data points form an m × n raster array:

        x1       x2       ...      xm-1      xm
y1      V1,1     V2,1     ...      Vm-1,1    Vm,1
y2      V1,2     V2,2     ...      Vm-1,2    Vm,2
...
yn-1    V1,n-1   V2,n-1   ...      Vm-1,n-1  Vm,n-1
yn      V1,n     V2,n     ...      Vm-1,n    Vm,n

x_{i+1} = x_i + d
y_{j+1} = y_j + d

Where:

1 ≤ i ≤ m
1 ≤ j ≤ n
d is the cell size of the raster
Vi,j is the cell value in (xi, yj)
Equations

A bicubic polynomial function V(x,y) is constructed as follows:

V(x, y) = \sum_{p=0}^{3} \sum_{q=0}^{3} a_{p,q}^{(i,j)} (x - x_i)^p (y - y_j)^q

in each cell; i = 1, 2, ..., m; j = 1, 2, ..., n

V(x_i, y_j) = V_{i,j} ;  i = 1, 2, ..., m; j = 1, 2, ..., n

i.e., the spline must interpolate all data points.

V(x_r, y_r) = \sum_{p=0}^{3} \sum_{q=0}^{3} a_{p,q}^{(i_r, j_r)} (x_r - x_{i_r})^p (y_r - y_{j_r})^q

(Figure: the retransformed coordinate (xr, yr) falls in the cell (ir, jr)
with corner (x_{ir}, y_{jr}) and cell size d.)
Map-to-Map Coordinate Conversions
There are many instances when you may need to change a map that is already
registered to a planar projection to another projection. Some examples of
when this is required are as follows (Environmental Systems Research
Institute, 1992):
• When the projection used for the files in the data base does not
produce the desired properties of a map.
Conversion Process To convert the map coordinate system of any georeferenced image,
ERDAS IMAGINE provides a shortcut to the rectification process. In
this procedure, GCPs are generated automatically along the
intersections of a grid that you specify. The program calculates the
reference coordinates for the GCPs with the appropriate conversion
formula and a transformation that can be used in the regular
rectification process.
Terrain Data Terrain data are usually expressed as a series of points with X,Y, and
Z values. When terrain data are collected in the field, they are
surveyed at a series of points including the extreme high and low
points of the terrain along features of interest that define the
topography such as streams and ridge lines, and at various points in
between.
DEM and DTED are expressed as regularly spaced points. To create
DEM and DTED files, a regular grid is overlaid on the topographic
contours. Elevations are read at each grid intersection point, as
shown in Figure 171.
(Figure 171: a regular grid overlaid on topographic contour lines (20
through 50); elevations are read at each grid intersection point.)
Elevation data are derived from ground surveys and through manual
photogrammetric methods. Elevation points can also be generated
through digital orthographic methods.
See “Raster and Vector Data Sources” for more details on DEM
and DTED data. See “Photogrammetric Concepts” for more
information on the digital orthographic process.
• slopes between 45° and 90° are expressed as 100 - 200% slopes
a   b   c        10 m   20 m   25 m
d   e   f        22 m   30 m   25 m        Pixel X,Y has elevation e.
g   h   i        20 m   24 m   18 m
First, the average elevation changes per unit of distance in the x and
y direction (∆x and ∆y) are calculated as:
\Delta x_1 = c - a        \Delta y_1 = a - g
\Delta x_2 = f - d        \Delta y_2 = b - h
\Delta x_3 = i - g        \Delta y_3 = c - i

\Delta x = \frac{\Delta x_1 + \Delta x_2 + \Delta x_3}{3 \times x_s}        \Delta y = \frac{\Delta y_1 + \Delta y_2 + \Delta y_3}{3 \times y_s}
Where:
a...i = elevation values of pixels in a 3 × 3 window, as
shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters
The slope at pixel x,y is calculated as:
( ∆x ) 2 + ( ∆y ) 2- s = 0.0967
s = -------------------------------------
2
if s ≤1 percent slope = s × 100
100
if s >1 percent slope = 200 – ------
-
s
180
slope in degrees = tan–1 ( s ) × -------
π
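A sketch of the slope calculation for a single 3 × 3 window, assuming 30-meter
pixels; the worked example later in this section gives roughly 9.7% and 5.5
degrees, matching this code up to rounding:

```python
import math

def percent_and_degree_slope(window, xs=30.0, ys=30.0):
    """Slope for the center pixel of a 3 x 3 window of elevations
    [[a, b, c], [d, e, f], [g, h, i]] with pixel sizes xs, ys in meters."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dx = ((c - a) + (f - d) + (i - g)) / (3.0 * xs)   # average change per meter in x
    dy = ((a - g) + (b - h) + (c - i)) / (3.0 * ys)   # average change per meter in y
    s = math.hypot(dx, dy) / 2.0
    percent = s * 100.0 if s <= 1.0 else 200.0 - 100.0 / s
    degrees = math.degrees(math.atan(s))
    return percent, degrees

print(percent_and_degree_slope([[10, 20, 25], [22, 30, 25], [20, 24, 18]]))
```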
Example
Slope images are often used in road planning. For example, if the
Department of Transportation specifies a maximum of 15% slope on
any road, it would be possible to recode all slope values that are
greater than 15% as unsuitable for road building.
A hypothetical example is given in Figure 173, which shows how the
slope is calculated for a single pixel.
10 m   20 m   25 m
22 m   30 m   25 m
20 m   24 m   18 m

\Delta x = \frac{15 + 3 - 2}{30 \times 3} = 0.177        \Delta y = \frac{-10 - 4 + 7}{30 \times 3} = -0.078

For the example, the slope is:

s = \frac{\sqrt{(\Delta x)^2 + (\Delta y)^2}}{2} = 0.0967

slope in degrees = \tan^{-1}(s) \times \frac{180}{\pi} = \tan^{-1}(0.0967) \times 57.30 = 5.54

percent slope = 0.0967 \times 100 = 9.67\%
Aspect Images An aspect image is an image file that is gray scale coded according
to the prevailing direction of the slope at each pixel. Aspect is
expressed in degrees from north, clockwise, from 0 to 360. Due
north is 0 degrees. A value of 90 degrees is due east, 180 degrees
is due south, and 270 degrees is due west. A value of 361 degrees
is used to identify flat surfaces such as water bodies.
a   b   c
d   e   f
g   h   i

Where:

a...i = elevation values of pixels in a 3 × 3 window as shown above

\Delta x = (\Delta x_1 + \Delta x_2 + \Delta x_3) / 3
\Delta y = (\Delta y_1 + \Delta y_2 + \Delta y_3) / 3

If ∆x = 0 and ∆y = 0, then the aspect is flat (coded to 361 degrees).
Otherwise, θ is calculated as:

\theta = \tan^{-1}\left( \frac{\Delta x}{\Delta y} \right)
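A sketch of the Δx, Δy averages and θ for one window; note that converting θ
into the 0 to 360 degree from-north convention requires examining the signs of
Δx and Δy, which is only hinted at here:

```python
import math

def aspect_theta(window):
    """Average elevation changes and theta for the center pixel of a 3 x 3
    window [[a, b, c], [d, e, f], [g, h, i]]; returns None for flat surfaces
    (coded to 361 degrees in ERDAS IMAGINE)."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dx = ((c - a) + (f - d) + (i - g)) / 3.0
    dy = ((a - g) + (b - h) + (c - i)) / 3.0
    if dx == 0.0 and dy == 0.0:
        return None
    # atan2 keeps the correct quadrant; mapping onto the compass convention
    # depends on the signs of dx and dy
    return math.degrees(math.atan2(dx, dy))
```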
Example
Aspect files are used in many of the same applications as slope files.
In transportation planning, for example, north facing slopes are
often avoided. Especially in northern climates, these would be
exposed to the most severe weather and would hold snow and ice
the longest. It would be possible to recode all pixels with north facing
aspects as undesirable for road building.
A hypothetical example is given in Figure 175, which shows how the
aspect is calculated for a single pixel.
10 m   20 m   25 m
22 m   30 m   25 m
20 m   24 m   18 m
(Figure: shaded relief — a surface with contour lines at 30, 40, and 50;
cells facing the sun are distinguished from shaded cells.)
Shaded relief images are an effective graphic tool. They can also be
used in analysis, e.g., snow melt over an area spanned by an
elevation surface. A series of relief images can be generated to
simulate the movement of the sun over the landscape. Snow melt
rates can then be estimated for each pixel based on the amount of
time it spends in sun or shadow. Shaded relief images can also be
used to enhance subtle detail in gray scale images such as
aeromagnetic, radar, gravity maps, etc.
The reflectance values are then applied to the original pixel values to
get the final result. All negative values are set to 0 or to the minimum
light level specified by you. These indicate shadowed areas. Light
reflectance in sunny areas falls within a range of values depending
on whether the pixel is directly facing the sun or not. (In the example
above, pixels facing northwest would be the brightest. Pixels facing
north-northwest and west-northwest would not be quite as bright.)
In a relief file, which is a DEM that shows surface relief, the surface
reflectance values are multiplied by the color lookup values for the
image file.
• DEM file
Lambertian Reflectance Model
The Lambertian Reflectance model assumes that the surface reflects incident
solar energy uniformly in all directions, and that variations
in reflectance are due to the amount of incident radiation.
The following equation produces normalized brightness values
(Colby, 1991; Smith et al, 1980):
BVnormal λ = BV observed λ / cos i
Where:
BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
cos i = cosine of the incidence angle
Incidence Angle
The incidence angle is defined from:
cos i = cos (90 - θs) cos θn + sin (90 - θs) sin θn cos (φs - φn)
Where:
i = the angle between the solar rays and the normal to
the surface
θs = the elevation of the sun
φs = the azimuth of the sun
θn = the slope of each surface element
φn = the aspect of each surface element
If the surface has a slope of 0 degrees, then aspect is undefined and
i is simply 90 - θs.
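A minimal sketch of the Lambertian normalization, with all angles supplied in
degrees (the function name is illustrative; shadowed pixels with cos i near
zero would need separate handling):

```python
import math

def lambertian_normalize(bv_observed, sun_elev, sun_azimuth, slope, aspect):
    """Normalize a brightness value with the Lambertian model:
    BVnormal = BVobserved / cos(i), where cos(i) follows the incidence-angle
    equation above."""
    ts, ps = math.radians(sun_elev), math.radians(sun_azimuth)
    tn, pn = math.radians(slope), math.radians(aspect)
    cos_i = (math.cos(math.pi / 2 - ts) * math.cos(tn) +
             math.sin(math.pi / 2 - ts) * math.sin(tn) * math.cos(ps - pn))
    return bv_observed / cos_i
```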
Non-Lambertian Model Minnaert (Minnaert and Szeicz, 1961) proposed that the observed
surface does not reflect incident solar energy uniformly in all
directions. Instead, he formulated the Non-Lambertian model, which
takes into account variations in the terrain. This model, although
more computationally demanding than the Lambertian model, may
present more accurate results.
In a Non-Lambertian Reflectance model, the following equation is
used to normalize the brightness values in the image (Colby, 1991;
Smith et al, 1980):
BVnormal λ = (BVobserved λ cos e) / (cos^k i cos^k e)
Minnaert Constant
The Minnaert constant (k) may be found by regressing a set of
observed brightness values from the remotely sensed imagery with
known slope and aspect values, provided that all the observations in
this set are the same type of land cover. The k value is the slope of
the regression line (Hodgson and Shelley, 1994):
log (BVobserved λ cos e) = log BVnormal λ+ k log (cos i cos e)
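The regression for k can be sketched as follows, assuming samples of one
land-cover type are available as arrays (an illustration, not the ERDAS IMAGINE
implementation):

```python
import numpy as np

def minnaert_k(bv_observed, cos_i, cos_e):
    """Estimate the Minnaert constant k as the slope of the regression of
    log(BVobserved * cos e) against log(cos i * cos e)."""
    x = np.log(np.asarray(cos_i) * np.asarray(cos_e))
    y = np.log(np.asarray(bv_observed) * np.asarray(cos_e))
    k, intercept = np.polyfit(x, y, 1)       # slope of the regression line
    return k
```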
NOTE: The Non-Lambertian model does not detect surfaces that are
shadowed by intervening topographic features between each pixel
and the sun. For these areas, a line-of-sight algorithm can identify
such shadowed pixels.
Introduction The dawning of GIS can legitimately be traced back to the beginning
of the human race. The earliest known map dates back to 2500
B.C.E., but there were probably maps before that time. Since then,
humans have been continually improving the methods of conveying
spatial information. The late eighteenth century brought the use of
map overlays to show troop movements in the Revolutionary War.
This could be considered an early GIS. The first British census in
1825 led to the science of demography, another application for GIS.
During the 1800s, many different cartographers and scientists were
all discovering the power of overlays to convey multiple levels of
information about an area (Star and Estes, 1990).
Frederick Law Olmstead has long been considered the father of
Landscape Architecture for his pioneering work in the early 20th
century. Many of the methods Olmstead used in Landscape
Architecture also involved the use of hand-drawn overlays. This type
of analysis was beginning to be used for a much wider range of
applications, such as change detection, urban planning, and
resource management (Rado, 1992).
The first system to be called a GIS was the Canadian Geographic
Information System, developed in 1962 by Roger Tomlinson of the
Canada Land Inventory. Unlike earlier systems that were developed
for a specific application, this system was designed to store digitized
map data and land-based attributes in an easily accessible format for
all of Canada. This system is still in operation today (Parent and
Church, 1987).
In 1969, Ian McHarg’s influential work, Design with Nature, was
published. This work on land suitability/capability analysis (SCA), a
system designed to analyze many data layers to produce a plan map,
discussed the use of overlays of spatially referenced data layers for
resource planning and management (Star and Estes, 1990).
The era of modern GIS really started in the 1970s, as analysts began
to program computers to automate some of the manual processes.
Software companies like ESRI and ERDAS developed software
packages that could input, display, and manipulate geographic data
to create new layers of information. The steady advances in features
and power of the hardware over the last ten years—and the decrease
in hardware costs—have made GIS technology accessible to a wide
range of users. The growth rate of the GIS industry in the last several
years has exceeded even the most optimistic projections.
You can input data into a GIS and output information. The
information you wish to derive determines the type of data that must
be input. For example, if you are looking for a suitable refuge for bald
eagles, zip code data is probably not needed, while land cover data
may be useful.
For this reason, the first step in any GIS project is usually an
assessment of the scope and goals of the study. Once the project is
defined, you can begin the process of building the database.
Although software and data are commercially available, a custom
database must be created for the particular project and study area.
The database must be designed to meet the needs of the
organization and objectives. ERDAS IMAGINE provides tools required
to build and manipulate a GIS database.
• data input
• analysis
Data input involves collecting the necessary data layers into a GIS
database. In the analysis phase, these data layers are combined and
manipulated in order to create new layers and to extract meaningful
information from them. This chapter discusses these steps in detail.
Data Input Acquiring the appropriate data for a project involves creating a
database of layers that encompasses the study area. A database
created with ERDAS IMAGINE can consist of:
Landsat TM Roads
SPOT panchromatic Census data
Aerial photograph Ownership parcels
Soils data Political boundaries
Land cover Landmarks
• site selection
• petroleum exploration
• mission planning
• change detection
On the other hand, vector data may be better suited for these
applications:
• urban planning
• traffic engineering
• facilities management
• Ratio classes differ from interval classes only in that ratio classes
have a natural zero point, such as rainfall amounts.
The variable being analyzed, and the way that it contributes to the
final product, determines the class numbering system used in the
thematic layers. Layers that have one numbering system can easily
be recoded to a new system. This is discussed in detail under
"Recoding".
Use the Vector Utilities menu from the Vector icon in the ERDAS
IMAGINE icon panel to convert vector layers to raster format, or
use the vector layers directly in Spatial Modeler.
For thematic data, these statistics are called attributes and may be
accompanied by many other types of information, as described in
"Attributes".
Vector Layers The vector layers used in ERDAS IMAGINE are based on the ArcInfo
data model and consist of points, lines, and polygons. These layers
are topologically complete, meaning that the spatial relationships
between features are maintained. Vector layers can be used to
represent transportation routes, utility corridors, communication
lines, tax parcels, school zones, voting districts, landmarks,
population density, etc. Vector layers can be analyzed independently
or in combination with continuous and thematic raster layers.
In ERDAS IMAGINE, vector layers may also be shapefiles based on
the ArcView data model.
Vector data can be acquired from several private and governmental
agencies. Vector data can also be created in ERDAS IMAGINE by
digitizing on the screen, using a digitizing tablet, or converting other
data types to vector format.
Attributes Text and numerical data that are associated with the classes of a
thematic layer or the features in a vector layer are called attributes.
This information can take the form of character strings, integer
numbers, or floating point numbers. Attributes work much like the
data that are handled by database management software. You may
define fields, which are categories of information about each class. A
record is the set of all attribute data for one class. Each record is like
an index card, containing information about one class or feature in a
file of many index cards, which contain similar information for the
other classes or features.
Attribute information for raster layers is stored in the image file.
Vector attribute information is stored in either an INFO file, dbf file,
or SDE database. In both cases, there are fields that are
automatically generated by the software, but more fields can be
added as needed to fully describe the data. Both are viewed in
CellArrays, which allow you to display and manipulate the
information. However, raster and vector attributes are handled
slightly differently, so a separate section on each follows.
• Class Name
• Class Value
• Opacity percentage
Vector Attributes Vector attributes are stored in the Vector Attributes CellArrays. You
can simply view attributes or use them to:
• label features
Analysis
ERDAS IMAGINE Analysis Tools
In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible
through three main tools:
Image Interpreter
The Image Interpreter houses a set of common functions that were
all created using either Model Maker or SML. They have been given
a dialog interface to match the other processes in ERDAS IMAGINE.
In most cases, these processes can be run from a single dialog.
However, the actual models are also provided with the software to
enable customized processing.
Many of the functions described in the following sections can be
accomplished using any of these tools. Model Maker is also easy to
use and utilizes many of the same steps that would be performed
when drawing a flow chart of an analysis. SML is intended for more
advanced analyses, and has been designed using natural language
commands and simple syntax rules. Some applications may require
a combination of these tools.
Analysis Procedures Once the database (layers and attribute data) is assembled, the
layers can be analyzed and new information extracted. Some
information can be extracted simply by looking at the layers and
visually comparing them to other layers. However, new information
can be retrieved by combining and comparing layers using the
following procedures:
buffer zones
Filtering Clumps
In cases where very small clumps are not useful, they can be filtered
out according to their sizes. This is sometimes referred to as
eliminating the salt and pepper effects, or sieving. In Figure 181, all
of the small clumps in the original (clumped) layer are eliminated.
In Figure 182, class 2 in the mask layer was selected for the mask.
Only the corresponding (shaded) pixels in the target layer are
scanned—the other values remain unchanged.
Neighborhood analysis creates a new thematic layer. There are
several types of analysis that can be performed upon each window
of pixels, as described below:
(Figure: output of one iteration of the sum operation on a 3 × 3 scanning window.)
8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48
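As a rough illustration of how a focal (scan) function visits each window of pixels, the following Python sketch computes a 3 × 3 focal sum for one pixel. The function and grid are invented for illustration and do not represent ERDAS IMAGINE code:

def focal_sum(grid, row, col, size=3):
    # Sum the values in a size x size window centered on (row, col).
    # Pixels that fall outside the grid are simply skipped in this sketch.
    half = size // 2
    total = 0
    for r in range(row - half, row + half + 1):
        for c in range(col - half, col + half + 1):
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
                total += grid[r][c]
    return total

window = [
    [8, 6, 6],
    [2, 8, 6],
    [2, 2, 8],
]
print(focal_sum(window, 1, 1))  # 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48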
Recoding Class values can be recoded to new values. Recoding involves the
assignment of new values to one or more classes. Recoding is used
to:
• combine classes
Original Value    New Value    Class Name
0                 0            Background
1                 4            Riparian
3                 1            Chaparral
4                 4            Wetlands
5                 1            Emergent Vegetation
6                 1            Water
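The recode table above amounts to a simple lookup from old class values to new ones. A minimal Python sketch (illustrative only, not ERDAS IMAGINE syntax):

# Recode table from above: original class value -> new class value
recode_table = {0: 0, 1: 4, 3: 1, 4: 4, 5: 1, 6: 1}

original = [0, 1, 3, 4, 5, 6, 1, 5]            # hypothetical thematic values
recoded = [recode_table[v] for v in original]  # [0, 4, 1, 4, 1, 1, 4, 1]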
Overlay
(Figure: composite overlay in which the steep slopes class masks the land use classes. 1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands, 9 = steep slopes; land use masked.)
                         0    1    2    3    4    5
input layer 1      0     0    0    0    0    0    0
data values        1     0    1    2    3    4    5
(rows)             2     0    6    7    8    9   10
                   3     0   11   12   13   14   15
In this diagram, the classes of the two input layers represent the
rows and columns of the matrix. The output classes are assigned
according to the coincidence of any two input classes.
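A minimal Python sketch of this idea, assuming the output class for each pixel is simply read from a coincidence matrix indexed by the two input class values (the matrix mirrors the diagram above; nothing here is ERDAS IMAGINE code):

# Rows are classes of input layer 1 (0-3); columns are classes of input layer 2 (0-5).
matrix = [
    [0,  0,  0,  0,  0,  0],
    [0,  1,  2,  3,  4,  5],
    [0,  6,  7,  8,  9, 10],
    [0, 11, 12, 13, 14, 15],
]

def matrix_overlay(class1, class2):
    # Return the output class for one pixel given its two input class values.
    return matrix[class1][class2]

print(matrix_overlay(2, 3))  # 8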
Data Layers
In modeling, the concept of layers is especially important. Before
computers were used for modeling, the most widely used approach
was to overlay registered maps on paper or transparencies, with
each map corresponding to a separate theme. Today, digital files
replace these hardcopy layers and allow much more flexibility for
recoloring, recoding, and reproducing geographical information
(Steinitz et al, 1976).
In a model, the corresponding pixels at the same coordinates in all
input layers are addressed as if they were physically overlaid like
hardcopy maps.
Model Structure
A model created with Model Maker is essentially a flow chart that
defines:
The graphical models created in Model Maker all have the same basic
structure: input, function, output. The number of inputs, functions,
and outputs can vary, but the overall form remains constant. All
components must be connected to one another before the model can
be executed. The model on the left in Figure 187 is the most basic
form. The model on the right is more complex, but it retains the
same input/function/output flow.
(Figure 187: a basic graphical model and a more complex one; both follow the input, function, output structure.)
Graphical models are stored in ASCII files with the .gmd extension.
There are several sample graphical models delivered with ERDAS
IMAGINE that can be used as is or edited for more customized
processing.
Category Description
Analysis Includes convolution filtering, histogram matching, contrast
stretch, principal components, and more.
Arithmetic Perform basic arithmetic functions including addition,
subtraction, multiplication, division, factorial, and modulus.
Bitwise Use bitwise and, or, exclusive or, and not.
Boolean Perform logical functions including and, or, and not.
Color Manipulate colors to and from RGB (red, green, blue) and IHS
(intensity, hue, saturation).
Conditional Run logical tests using conditional statements and
either...if...or...otherwise.
Data Generation Create raster layers from map coordinates, column numbers, or
row numbers. Create a matrix or table from a list of scalars.
Descriptor Read attribute information and map a raster through an
attribute column.
Distance Perform distance functions, including proximity analysis.
Exponential Use exponential operators, including natural and common
logarithmic, power, and square root.
Focal (Scan) Perform neighborhood analysis functions, including boundary,
density, diversity, majority, mean, minority, rank, standard
deviation, sum, and others.
Focal Use Opts Constraints on which pixel values to include in calculations for
the Focal (Scan) function.
Focal Apply Opts Constraints on which pixel values to apply the results of
calculations for the Focal (Scan) function.
Global Analyze an entire layer and output one value, such as diversity,
maximum, mean, minimum, standard deviation, sum, and
more.
Matrix Multiply, divide, and transpose matrices, as well as convert a
matrix to a table and vice versa.
Other Includes over 20 miscellaneous functions for data type
conversion, various tests, and other utilities.
Relational Includes equality, inequality, greater than, less than, greater
than or equal, less than or equal, and others.
Size Measure cell X and Y size, layer width and height, number of
rows and columns, etc.
Stack Statistics Perform operations over a stack of layers including diversity,
majority, max, mean, median, min, minority, standard
deviation, and sum.
Statistical Includes density, diversity, majority, mean, rank, standard
deviation, and more.
String Manipulate character strings.
Surface Calculate aspect and degree/percent slope and produce shaded
relief.
Trigonometric Use common trigonometric functions, including sine/arcsine,
cosine/arccosine, tangent/arctangent, hyperbolic arcsine,
arccosine, cosine, sine, and tangent.
Zonal Perform zonal operations including summary, diversity,
majority, max, mean, min, range, and standard deviation.
See the ERDAS IMAGINE Tour Guides and the On-Line SML
manual for complete instructions on using Model Maker, and
more detailed information about the available functions and
operators.
Within Model Maker, an object is an input to or output from a function. There are five basic object types:
• raster
• vector
• matrix
• table
• scalar
Raster
A raster object is a single layer or multilayer array of pixel data.
Rasters are typically used to specify and manipulate data from image
files.
Vector
Vector data in a vector coverage, shapefile, or annotation layer can be read directly into Model Maker, converted from vector to raster, and then processed similarly to raster data. Model Maker cannot write to coverages, shapefiles, or annotation layers.
Matrix
A matrix object is a set of numbers arranged in a two-dimensional array, such as a convolution kernel or the matrix of coefficients used in the tasseled cap example later in this chapter.
Table
A table object is a series of numeric values, colors, or character
strings. A table has one column and a fixed number of rows. Tables
are typically used to store columns from the Raster Attribute Editor
or a list of values that pertains to the individual layers of a set of
layers. For example, a table with four rows could be used to store the
maximum value from each layer of a four layer image file. A table
may consist of up to 32,767 rows. Information in the table can be
attributes, calculated (e.g., histograms), or defined by you.
Scalar
A scalar object is a single numeric value, color, or character string.
Scalars are often used as weighting factors.
The graphics used in Model Maker to represent each of these objects
are shown in Figure 188.
(Figure 188: the graphics used in Model Maker for the raster, vector, matrix, table, and scalar objects.)
Data Types The five object types described above may be any of the following
data types:
Output Parameters Since it is possible to have several inputs in one model, you can
optionally define the working window and the pixel cell size of the
output data along with the output map projection.
Working Window
Raster layers of differing areas can be input into one model.
However, the image area, or working window, must be specified in
order to use it in the model calculations. Either of the following
options can be selected:
Map Projection
The output map projection defaults to the projection of the first input; alternatively, it may be set to match the projection of any chosen input, or selected from a projection library.
Using Attributes in Models With the criteria function in Model Maker, attribute data can be used to determine output values. The criteria function simplifies the
process of creating a conditional statement. The criteria function can
be used to build a table of conditions that must be satisfied to output
a particular row value for an attribute (or cell value) associated with
the selected raster.
The inputs to a criteria function are rasters or vectors. The columns
of the criteria table represent either attributes associated with a
raster layer or the layer itself, if the cell values are of direct interest.
Criteria which must be met for each output column are entered in a
cell in that column (e.g., >5). Multiple sets of criteria may be entered
in multiple rows. The output raster contains the first row number of
a set of criteria that were met for a raster cell.
A simple model could create one output layer that shows only the
parks in need of repairs. The following logic would therefore be coded
into the model:
“If Turf Condition is not Good or Excellent, and if Path Condition
is not Good or Excellent, then the output class value is 1.
Otherwise, the output class value is 2.”
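The quoted rule can be sketched as a plain conditional in Python; the attribute names and class values come from the example above, and everything else is hypothetical (this is not the Model Maker criteria syntax):

GOOD_OR_BETTER = {"Good", "Excellent"}

def park_class(turf_condition, path_condition):
    # Return 1 for parks needing repair, 2 otherwise, per the quoted rule.
    if turf_condition not in GOOD_OR_BETTER and path_condition not in GOOD_OR_BETTER:
        return 1
    return 2

print(park_class("Fair", "Poor"))   # 1 - park needs repairs
print(park_class("Good", "Poor"))   # 2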
More than one input layer can also be used. For example, a model
could be created, using the input layers parks.img and soils.img, that
shows the soil types for parks with either fair or poor turf condition.
Attributes can be used from every input file.
The following is a slightly more complex example:
If you have a land cover file and you want to create a file of pine
forests larger than 10 acres, the criteria function could be used to
output values only for areas that satisfy the conditions of being both
pine forest and larger than 10 acres. The output file would have two
classes: pine forests larger than 10 acres and background. If you
want the output file to show varying sizes of pine forest, you would
simply add more conditions to the criteria table.
Comparisons of attributes can also be combined with mathematical
and logical functions on the class values of the input file(s). With
these capabilities, highly complex models can be created.
See the ERDAS IMAGINE Tour Guides or the On-Line Help for
specific instructions on using the criteria function.
Script Modeling SML is a script language used internally by Model Maker to execute
the operations specified in the graphical models that are created.
SML can also be used to write models directly. It includes all of the functions available in Model Maker, plus:
The Text Editor is available from the Tools menu located on the
ERDAS IMAGINE menu bar and from the Model Librarian (Spatial
Modeler).
In Figure 189, both the graphical and script models are shown for a
tasseled cap transformation. Notice how even the annotation on the
graphical model is included in the automatically generated script
model. Generating script models from graphical models may aid in
learning SML.
(Figure 189: the graphical model and the automatically generated script model for a tasseled cap transformation.)
SML also includes flow control structures so that you can utilize
conditional branching and looping in the models and statement block
structures, which cause a set of statements to be executed as a
group.
Declaration Example
In the script model in Figure 189, the following lines form the
declaration portion of the model:
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR
"/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT
SINGLE "/usr/imagine/examples/lntassel.img";
Set Example
The following set statements are used:
SET CELLSIZE MIN;
SET WINDOW UNION;
Assignment Example
The following assignment statements are used:
n2_Custom_Matrix = MATRIX(3, 7:
0.331830, 0.331210, 0.551770, 0.425140, 0.480870,
0.000000, 0.252520,
-0.247170, -0.162630, -0.406390, 0.854680, 0.054930,
0.000000, -0.117490,
0.139290, 0.224900, 0.403590, 0.251780, -0.701330,
0.000000, -0.457320);
n4_lntassel = LINEARCOMB ( $n1_tm_lanier ,
$n2_Custom_Matrix ) ;
Variables Variables are objects in the Modeler that have been associated with
names using Declaration Statements. The declaration statement
defines the data type and object type of the variable. The declaration
may also associate a raster variable with certain layers of an image
file or a table variable with an attribute table. Assignment
Statements are used to set or change the value of a variable.
Vector Analysis Most of the operations discussed in the previous pages of this
chapter focus on raster data. However, in a complete GIS database,
both raster and vector layers are present. One of the most common
applications involving the combination of raster and vector data is
the updating of vector layers using current raster imagery as a
backdrop for vector editing. For example, if a vector database is
more than one or two years old, then there are probably errors due
to changes in the area (new roads, moved roads, new development,
etc.). When displaying existing vector layers over a raster layer, you
can dynamically update the vector layer by digitizing new or changed
features on the screen.
Vector layers can also be used to indicate an AOI for further
processing. Assume you want to run a site suitability model on only
areas designated for commercial development in the zoning
ordinances. By selecting these zones in a vector polygon layer, you
could restrict the model to only those areas in the raster input files.
Vector layers can also be used as inputs to models. Updated or new
attributes may also be written to vector layers in models.
Editing Vector Layers Editable features are polygons (as lines), lines, label points, and
nodes. There can be multiple features selected with a mixture of any
and all feature types. Editing operations and commands can be
performed on multiple or single selections. In addition to the basic
editing operations (e.g., cut, paste, copy, delete), you can also
perform the following operations on the line features in multiple or
single selections:
The Undo utility may be applied to any edits. The software stores all
edits in sequential order, so that continually pressing Undo reverses
the editing.
Constructing Topology Either the Build or Clean option can be used to construct topology. To create spatial relationships between features in a vector layer, it is necessary to create topology. After a vector layer is edited, the
topology must be constructed to maintain the topological
relationships between features. When topology is constructed, each
feature is assigned an internal number. These numbers are then
used to determine line connectivity and polygon contiguity. Once
calculated, these values are recorded and stored in that layer’s
associated attribute table.
Building and Cleaning Coverages The Build option processes points, lines, and polygons, but the Clean option processes only lines and polygons. Build recognizes only
existing intersections (nodes), whereas Clean creates intersections
(nodes) wherever lines cross one another. The differences in these
two options are summarized in Table 59 (Environmental Systems
Research Institute, 1990).
Table 59: Comparison of Building and Cleaning Coverages
                      Build    Clean
Processes: Points     Yes      No
Errors
Constructing topology also helps to identify errors in the layer. Some
of the common errors found are:
When the Build or Clean options are used to construct the topology
of a vector layer, two kinds of potential node errors may be
observed; pseudo nodes and dangling nodes. These are identified in
the Viewer with special symbols. The default symbols used by
IMAGINE are shown in Figure 190 below but may be changed in the
Vector Properties dialog.
(Figure 190: default Viewer symbols for common errors: a polygon with no label point, a pseudo node in an island polygon, and dangling nodes.)
Introduction Maps and mapping are the subject of the art and science known as
cartography—creating two-dimensional representations of our
three-dimensional Earth. These representations were once hand-
drawn with paper and pen. But now, map production is largely
automated—and the final output is not always paper. The capabilities
of a computer system are invaluable to map users, who often need
to know much more about an area than can be reproduced on paper,
no matter how large that piece of paper is or how small the
annotation is. Maps stored on a computer can be queried, analyzed,
and updated quickly.
As the veteran GIS and image processing authority, Roger F.
Tomlinson, said: “Mapped and related statistical data do form the
greatest storehouse of knowledge about the condition of the living
space of mankind.” With this thought in mind, it only makes sense
that maps be created as accurately as possible and be as accessible
as possible.
In the past, map making was carried out by mapping agencies, which took information from analysts (surveyors, photogrammetrists, or draftsmen) and created maps to illustrate it. Today, in many cases, the analyst is the cartographer and can design maps to best suit the data and the end user.
This chapter defines some basic cartographic terms and explains
how maps are created within the ERDAS IMAGINE environment.
Thematic Maps Thematic maps comprise a large portion of the maps that many
organizations create. For this reason, this map type is explored in
more detail.
Thematic maps may be subdivided into two groups:
• qualitative
• quantitative
You can create thematic data layers from continuous data (aerial
photography and satellite images) using the ERDAS IMAGINE
classification capabilities. See “Classification” for more
information.
Base Information
Thematic maps should include a base of information so that the
reader can easily relate the thematic data to the real world. This base
may be as simple as an outline of counties, states, or countries, to
something more complex, such as an aerial photograph or satellite
image. In the past, it was difficult and expensive to produce maps
that included both thematic and continuous data, but technological
advances have made this easy.
Color Selection
The colors used in thematic maps may or may not have anything to
do with the class or category of information shown. Cartographers
usually try to use a color scheme that highlights the primary purpose
of the map. The map reader’s perception of colors also plays an
important role. Most people are more sensitive to red, followed by
green, yellow, blue, and purple. Although color selection is left
entirely up to the map designer, some guidelines have been
established (Robinson and Sale, 1969).
• When mapping elevation data, start with blues for water, greens
in the lowlands, ranging up through yellows and browns to reds
in the higher elevations. This progression should not be used for
series other than elevation.
• In land cover mapping, use yellows and tans for dryness and
sparse vegetation and greens for lush vegetation.
• scale bars
• legends
• text
Map scale can be expressed as a:
• representative fraction
• verbal statement
• scale bar
Representative Fraction
Map scale is often noted as a simple ratio or fraction called a
representative fraction. A map in which one inch on the map equals
24,000 inches on the ground could be described as having a scale of
1:24,000 or 1/24,000. The units on both sides of the ratio must be
the same.
Verbal Statement
A verbal statement of scale describes the distance on the map to the
distance on the ground. A verbal statement describing a scale of
1:1,000,000 is approximately 1 inch to 16 miles. The units on the
map and on the ground do not have to be the same in a verbal
statement. One-inch and 6-inch maps of the British Ordnance
Survey are often referred to by this method (1 inch to 1 mile, 6
inches to 1 mile) (Robinson and Sale, 1969).
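The approximation follows directly from the representative fraction: at 1:1,000,000, one inch on the map covers 1,000,000 inches on the ground, and 1,000,000 ÷ 63,360 inches per mile ≈ 15.8 miles, which is rounded to 16 miles in the verbal statement.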
Scale Bars
A scale bar is a graphic annotation element that describes map scale.
It shows the distance on paper that represents a geographical
distance on the map. Maps often include more than one scale bar to
indicate various measurement systems, such as kilometers and
miles.
(Figure: a scale bar graduated in miles.)
Map Scale      1/40 inch represents    1 inch represents    1 centimeter represents    1 mile is represented by    1 kilometer is represented by
1:2,000 4.200 ft 56.000 yd 20.000 m 31.680 in 50.00 cm
1:5,000 10.425 ft 139.000 yd 50.000 m 12.670 in 20.00 cm
1:10,000 6.952 yd 0.158 mi 0.100 km 6.340 in 10.00 cm
1:15,840 11.000 yd 0.250 mi 0.156 km 4.000 in 6.25 cm
1:20,000 13.904 yd 0.316 mi 0.200 km 3.170 in 5.00 cm
1:24,000 16.676 yd 0.379 mi 0.240 km 2.640 in 4.17 cm
1:25,000 17.380 yd 0.395 mi 0.250 km 2.530 in 4.00 cm
1:31,680 22.000 yd 0.500 mi 0.317 km 2.000 in 3.16 cm
1:50,000 34.716 yd 0.789 mi 0.500 km 1.270 in 2.00 cm
1:62,500 43.384 yd 0.986 mi 0.625 km 1.014 in 1.60 cm
1:63,360 0.025 mi 1.000 mi 0.634 km 1.000 in 1.58 cm
1:75,000 0.030 mi 1.180 mi 0.750 km 0.845 in 1.33 cm
1:80,000 0.032 mi 1.260 mi 0.800 km 0.792 in 1.25 cm
1:100,000 0.040 mi 1.580 mi 1.000 km 0.634 in 1.00 cm
1:125,000 0.050 mi 1.970 mi 1.250 km 0.507 in 8.00 mm
1:250,000 0.099 mi 3.950 mi 2.500 km 0.253 in 4.00 mm
1:500,000 0.197 mi 7.890 mi 5.000 km 0.127 in 2.00 mm
1:1,000,000 0.395 mi 15.780 mi 10.000 km 0.063 in 1.00 mm
Table 61 shows the number of pixels per inch for selected scales and
pixel sizes.
Pixel Size (m)    Scale:  1”=100’ (1:1200)   1”=200’ (1:2400)   1”=500’ (1:6000)   1”=1000’ (1:12000)   1”=1500’ (1:18000)   1”=2000’ (1:24000)   1”=4167’ (1:50000)   1”=1 mile (1:63360)
1 30.49 60.96 152.40 304.80 457.20 609.60 1270.00 1609.35
2 15.24 30.48 76.20 152.40 228.60 304.80 635.00 804.67
2.5 12.13 24.38 60.96 121.92 182.88 243.84 508.00 643.74
5 6.10 12.19 30.48 60.96 91.44 121.92 254.00 321.87
10 3.05 6.10 15.24 30.48 45.72 60.96 127.00 160.93
15 2.03 4.06 10.16 20.32 30.48 40.64 84.67 107.29
20 1.52 3.05 7.62 15.24 22.86 30.48 63.50 80.47
25 1.22 2.44 6.10 12.19 18.29 24.38 50.80 64.37
30 1.02 2.03 5.08 10.16 15.240 20.32 42.33 53.64
35 .87 1.74 4.35 8.71 13.08 17.42 36.29 45.98
40 .76 1.52 3.81 7.62 11.43 15.24 31.75 40.23
45 .68 1.35 3.39 6.77 10.16 13.55 28.22 35.76
50 .61 1.22 3.05 6.10 9.14 12.19 25.40 32.19
75 .41 .81 2.03 4.06 6.10 8.13 16.93 21.46
100 .30 .61 1.52 3.05 4.57 6.10 12.70 16.09
150 .20 .41 1.02 2.03 3.05 4.06 8.47 10.73
200 .15 .30 .76 1.52 2.29 3.05 6.35 8.05
250 .12 .24 .61 1.22 1.83 2.44 5.08 6.44
300 .10 .20 .51 1.02 1.52 2.03 4.23 5.36
350 .09 .17 .44 .87 1.31 1.74 3.63 4.60
400 .08 .15 .38 .76 1.14 1.52 3.18 4.02
450 .07 .14 .34 .68 1.02 1.35 2.82 3.58
500 .06 .12 .30 .61 .91 1.22 2.54 3.22
600 .05 .10 .25 .51 .76 1.02 2.12 2.69
700 .04 .09 .22 .44 .65 .87 1.81 2.30
800 .04 .08 .19 .38 .57 .76 1.59 2.01
900 .03 .07 .17 .34 .51 .68 1.41 1.79
1000 .03 .06 .15 .30 .46 .61 1.27 1.61
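The values in Table 61 follow directly from the scale and the pixel size. For example, at 1:24,000 one inch on the map represents 24,000 inches, or 609.6 meters, on the ground; with 30-meter pixels this is 609.6 ÷ 30 ≈ 20.32 pixels per inch, as listed in the table.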
(Figure: sample legend entries: forest, swamp, developed.)
Neatlines, Tick Marks, and Grid Lines Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail and are based on the map projection of the image shown.
• A neatline is a rectangular border around the image area of a
map. It differs from the map border in that the border usually
encloses the entire map, not just the image area.
• Tick marks are small lines along the edge of the image area or
neatline that indicate regular intervals of distance.
(Figure: a neatline, grid lines, and tick marks on a map.)
Symbols Since maps are a greatly reduced version of the real-world, objects
cannot be depicted in their true shape or size. Therefore, a set of
symbols is devised to represent real-world objects. There are two
major classes of symbols:
• replicative
• abstract
• point
• line
• area
Symbol Types
These basic elements can be combined to create three different
types of replicative symbols:
Use the Symbol tool in the Annotation tool palette and the
symbol library to place symbols in maps.
Labels and Descriptive Text Place names and other labels convey important information to the reader about the features on the map. Any features that help orient the reader or are important to the content of the map should be
labeled. Descriptive text on a map can include the map title and
subtitle, copyright information, captions, credits, production notes,
or other explanatory material.
Credits
Map credits (or source information) can include the data source and
acquisition date, accuracy information, and other details that are
required or helpful to readers. For example, if you include data that
you do not own in a map, you must give credit to the owner.
Use the Text tool in the Annotation tool palette to add labels and
descriptive text to maps.
Typography and Lettering The choice of type fonts and styles and how names are lettered can
make the difference between a clear and attractive map and a
jumble of imagery and text. As with many other aspects of map
design, this is a very subjective area and many organizations already
have guidelines to use. This section is intended as an introduction to
the concepts involved and to convey traditional guidelines, where
available.
If your organization does not have a set of guidelines for the
appearance of maps and you plan to produce many in the future, it
would be beneficial to develop a style guide specifically for mapping.
This ensures that all of the maps produced follow the same
conventions, regardless of who actually makes the map.
Type Styles
Type style refers to the appearance of the text and may include font,
size, and style (bold, italic, underline, etc.). Although the type styles
used in maps are purely a matter of the designer’s taste, the
following techniques help to make maps more legible (Robinson and
Sale, 1969; Dent, 1985).
• Set more important labels, titles, and names in all capital letters, and less important text in lowercase with initial capitals.
This is a matter of personal preference, although names in which
the letters must be spread out across a large area are better in
all capital letters. (Studies have found that capital letters are
more difficult to read, therefore lowercase letters might improve
the legibility of the map.)
Lettering
Lettering refers to the way in which place names and other labels are
added to a map. Letter spacing, orientation, and position are the
three most important factors in lettering. Here again, there are no
set rules for how lettering is to appear. Much is determined by the
purpose of the map and the end user. Many organizations have
developed their own rules for lettering. Here is a list of guidelines
that have been used by cartographers in the past (Robinson and
Sale, 1969; Dent, 1985).
• Where the continuity of names and other map data, such as lines
and tones, conflicts with the lettering, the data, but not the
names, should be interrupted.
(Figure: lettering examples, including letter spacing for area names (GEORGIA, G e o r g i a) and placement of point labels (Atlanta, Savannah).)
Text Color
Many cartographers argue that all lettering on a map should be
black. However, the map may be well-served by incorporating color
into its design. In fact, studies have shown that coding labels by
color can improve a reader’s ability to find information (Dent, 1985).
This section is adapted from “Map Projections for Use with the
Geographic Information System” by Lee and Walsh (Lee and
Walsh, 1984).
A map projection can preserve, at most, only some of the following properties:
• conformality
• equivalence
• equidistance
• true direction
(Figure: projection aspects: polar azimuthal (planar), oblique azimuthal (planar), transverse cylindrical, and oblique cylindrical.)
Projection Types Although a great number of projections have been devised, the
majority of them are geometric or mathematical variants of the basic
direct geometric projection families described below. Choice of the
projection to be used depends upon the true property or combination
of properties desired for effective cartographic analysis.
Azimuthal Projections
Azimuthal projections, also called planar projections, are
accomplished by drawing lines from a given perspective point
through the globe onto a tangent plane. This is conceptually
equivalent to tracing a shadow of a figure cast by a light source. A
tangent plane intersects the global surface at only one point and is
perpendicular to a line passing through the center of the sphere.
Thus, these projections are symmetrical around a chosen center or
central meridian. Choice of the projection center determines the
aspect, or orientation, of the projection surface.
Azimuthal projections may be centered:
Conical Projections
Conical projections are accomplished by intersecting, or touching, a
cone with the global surface and mathematically projecting lines
onto this developable surface.
A tangent cone intersects the global surface to form a circle. Along
this line of intersection, the map is error-free and possesses
equidistance. Usually, this line is a parallel, termed the standard
parallel.
Cones may also be secant, and intersect the global surface, forming
two circles that possess equidistance. In this case, the cone slices
underneath the global surface, between the standard parallels. Note
that the use of the word secant, in this instance, is only conceptual
and not geometrically accurate. Conceptually, the conical aspect
may be polar, equatorial, or oblique. Only polar conical projections
are supported in ERDAS IMAGINE.
(Figure: a tangent cone has one standard parallel; a secant cone has two standard parallels.)
Cylindrical Projections
Cylindrical projections are accomplished by intersecting, or touching,
a cylinder with the global surface. The surface is mathematically
projected onto the cylinder, which is then cut and unrolled.
(Figure: a tangent cylinder has one standard parallel; a secant cylinder has two standard parallels.)
Other Projections
The projections discussed so far are projections that are created by
projecting from a sphere (the Earth) onto a plane, cone, or cylinder.
Many other projections cannot be created so easily.
Modified projections are modified versions of another projection. For
example, the Space Oblique Mercator projection is a modification of
the Mercator projection. These modifications are made to reduce
distortion, often by including additional standard lines or a different
pattern of distortion.
Pseudo projections have only some of the characteristics of another
class of projection. For example, the Sinusoidal is called a
pseudocylindrical projection because all lines of latitude are straight
and parallel, and all meridians are equally spaced. However, it
cannot truly be a cylindrical projection, because all meridians except
the central meridian are curved. This results in the Earth appearing
oval instead of rectangular (Environmental Systems Research
Institute, 1991).
Geographical and Planar Coordinates Map projections require a point of reference on the Earth’s surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:
• geographical
• planar
Geographical
Geographical, or spherical, coordinates are based on the network of
latitude and longitude (Lat/Lon) lines that make up the graticule of
the Earth. Within the graticule, lines of longitude are called
meridians, which run north/south, with the prime meridian at 0°
(Greenwich, England). Meridians are designated as 0° to 180°, east
or west of the prime meridian. The 180° meridian (opposite the
prime meridian) is the International Dateline.
Lines of latitude are called parallels, which run east/west. Parallels
are designated as 0° at the equator to 90° at the poles. The equator
is the largest parallel. Latitude and longitude are defined with
respect to an origin located at the intersection of the equator and the
prime meridian. Lat/Lon coordinates are reported in degrees,
minutes, and seconds. Map projections are various arrangements of
the Earth’s latitude and longitude lines onto a plane.
Planar
Planar, or Cartesian, coordinates are defined by a column and row
position on a planar grid (X,Y). The origin of a planar coordinate
system is typically located south and west of the origin of the
projection. Coordinates increase from 0,0 going east and north. The
origin of the projection, being a false origin, is defined by values of
false easting and false northing. Grid references always contain an
even number of digits, and the first half refers to the easting and the
second half the northing.
In practice, this eliminates negative coordinate values and allows
locations on a map projection to be defined by positive coordinate
pairs. Values of false easting are read first and may be in meters or
feet.
USGS Projections
• Alaska Conformal
• Azimuthal Equidistant
• Behrmann
• Bonne
• Eckert I
• Eckert II
• Eckert III
• Eckert IV
• Eckert V
• Eckert VI
• EOSAT SOM
• Equidistant Conic
• Equidistant Cylindrical
• Gall Stereographic
• Gauss Kruger
• Geographic (Lat/Lon)
• Gnomonic
• Hammer
• Interrupted Mollweide
• Loximuthal
• Mercator
• Miller Cylindrical
• Mollweide
• Orthographic
• Plate Carrée
• Polar Stereographic
• Polyconic
• Quartic Authalic
• Robinson
• RSO
• Sinusoidal
• State Plane
• Stereographic
• Stereographic (Extended)
• Transverse Mercator
• UTM
• Wagner IV
• Wagner VII
• Winkel I
External Projections
• Albers Equal Area (see “Albers Conical Equal Area” on page 527)
• Cassini-Soldner
• Modified Polyconic
• Modified Stereographic
• Swiss Cylindrical
• Winkel’s Tripel
Units
Use the units of measure that are appropriate for the map projection
type.
dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333
or
30:51:12 = 30.85333
You can also enter Lat/Lon coordinates in radians.
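The conversion shown above can be reproduced with a small Python function; this is an illustrative sketch only, not the ERDAS IMAGINE dd function:

def dd(degrees, minutes, seconds):
    # Convert degrees, minutes, seconds to decimal degrees.
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

print(round(dd(30, 51, 12), 5))   # 30.85333
print(round(-dd(30, 51, 12), 5))  # -30.85333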
    Map projection                Construction    Property                     Use
3   Albers Conical Equal Area     Cone            Equivalent                   Middle latitudes, E-W expanses
4   Lambert Conformal Conic       Cone            Conformal, True Direction    Middle latitudes, E-W expanses, flight (straight great circles)
(Table: parameters required for the definition of each map projection type, including spheroid selection and definition of the surface viewing window. Numbers are used for reference only and correspond to the numbers used in Table 63; parameters for map projection types 0-2 are not applicable and are described in the text. Additional parameters required for some projections are described in the text of “Map Projections”.)
Deciding Factors Depending on your applications and the uses for the maps created,
one or several map projections may be used. Many factors must be
weighed when selecting a projection, including:
• type of map
• map accuracy
• scale
Guidelines Since the sixteenth century, there have been three fundamental
rules regarding map projection use (Maling, 1992):
(Figure: a spheroid, showing the semi-major axis (equatorial radius) and the semi-minor axis (polar radius).)
f = (a – b) ⁄ a
Where:
a = the equatorial radius (semi-major axis)
b = the polar radius (semi-minor axis)
Most map projections use eccentricity (e²) rather than flattening.
The relationship is:
e² = 2f – f²
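As a check on these definitions, the following Python sketch computes flattening and eccentricity from a spheroid's axes; the WGS 84 axis values are taken from the spheroid table later in this section, and the rest is illustrative:

a = 6378137.0      # semi-major axis (equatorial radius), WGS 84, in meters
b = 6356752.3142   # semi-minor axis (polar radius), WGS 84, in meters

f = (a - b) / a        # flattening, approximately 1/298.257
e2 = 2 * f - f ** 2    # eccentricity squared, approximately 0.0066944

print(1.0 / f, e2)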
• Airy
• Australian National
• Bessel
• Clarke 1866
• Clarke 1880
• Everest
• GRS 1980
• Helmert
• Hough
• International 1909
• Krasovsky
• Mercury 1960
• Modified Airy
• Modified Everest
• Southeast Asia
• Walbeck
• WGS 66
• WGS 72
• WGS 84
The spheroids listed above are the most commonly used. There
are many other spheroids available, and they are listed in the
Projection Chooser. These additional spheroids are not
documented in this manual. You can use the IMAGINE
Developers’ Toolkit to add your own map projections and
spheroids to ERDAS IMAGINE.
Spheroid                                  Semi-Major Axis   Semi-Minor Axis       Use
GRS 1980 (Geodetic Reference System)      6378137.0         6356752.31414         Adopted in North America for 1983 Earth-centered coordinate system (satellite)
International 1909 (= Hayford)            6378388.0         6356911.94613         Remaining parts of the world not listed here
Sphere of Radius 6370997 m                6370997.0         6370997.0             A perfect sphere with the same surface area as the Clarke 1866 spheroid
WGS 66 (World Geodetic System 1966)       6378145.0         6356759.769356        As WGS 72 above, older version
WGS 84 (World Geodetic System 1984)       6378137.0         6356752.31424517929   As WGS 72, more recent calculation
Plan the Map After your analysis is complete, you can begin map composition. The
first step in creating a map is to plan its contents and layout. The
following questions may aid in the planning process:
• The colors used should be chosen carefully, since the maps are
printed in color.
• Select symbols that are widely recognized, and make sure they
are all explained in a legend.
See the tour guide about Map Composer in the ERDAS IMAGINE
Tour Guides for step-by-step instructions on creating a map.
Refer to the On-Line Help for details about how Map Composer
works.
Map Accuracy Maps are often used to influence legislation, promote a cause, or
enlighten a particular group before decisions are made. In these
cases, especially, map accuracy is of the utmost importance. There
are many factors that influence map accuracy: the projection used,
scale, base data, generalization, etc. The analyst/cartographer must
be aware of these factors before map production begins. The
accuracy of the map, in a large part, determines its usefulness. It is
usually up to individual organizations to perform accuracy
assessment and decide how those findings are reflected in the
products they produce. However, several agencies have established
guidelines for map makers.
US National Map Accuracy The United States Bureau of the Budget has developed the US
Standard National Map Accuracy Standard in an effort to standardize accuracy
reporting on maps. These guidelines are summarized below (Fisher,
1991):
• Maps that have been tested but fail to meet the requirements
should omit all mention of the standards on the legend.
USGS Land Use and Land The USGS has set standards of their own for land use and land cover
Cover Map Guidelines maps (Fisher, 1991):
USDA SCS Soils Maps The United States Department of Agriculture (USDA) has set
Guidelines standards for Soil Conservation Service (SCS) soils maps (Fisher,
1991):
• No single included soil type may occupy more than 10% of the
area of the map unit.
Digitized Hardcopy Maps Another method of expanding the database is by digitizing existing
hardcopy maps. Although this may seem like an easy way to gather
more information, care must be taken in pursuing this avenue if it is
necessary to maintain a particular level of accuracy. If the hardcopy
maps that are digitized are outdated, or were not produced using the
same accuracy standards that are currently in use, the digitized map
may negatively influence the overall accuracy of the database.
Introduction Hardcopy output refers to any output of image data to paper. These
topics are covered in this chapter:
• printing maps
Printing Maps ERDAS IMAGINE enables you to create and output a variety of types
of hardcopy maps, with several referencing features.
Scaled Maps A scaled map is a georeferenced map that has been projected to a map projection and is accurately laid out and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as 1 inch = 1000 feet. The scale is often expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on the ground.
Printing Large Maps Some scaled maps do not fit on the paper that is used by the printer.
These methods are used to print and store large maps:
• A book map is laid out like the pages of a book. Each page fits on
the paper used by the printer. There is a border, but no tick
marks on every page.
(Figure: paneled map and book map layouts, with neatlines and tick marks.)
Scale and Resolution The following scales and resolutions must be considered when creating a map composition and sending the composition to a hardcopy device:
• device resolution
Spatial Resolution
Spatial resolution is the area on the ground represented by each raw
image data pixel.
Display Scale
Display scale is the distance on the screen as related to one unit on
paper. For example, if the map composition is 24 inches by 36
inches, it would not be possible to view the entire composition on the
screen. Therefore, the scale could be set to 1:0.25 so that the entire
map composition would be in view.
Map Scale
The map scale is the distance on a map as related to the true
distance on the ground, or the area that one pixel represents
measured in map units. The map scale is defined when you create
an image area in the map composition. One map composition can
have multiple image areas set at different scales. These areas may
need to be shown at different scales for different applications.
Device Resolution
The number of dots that are printed per unit—for example, 300 dots
per inch (DPI).
Map Scaling Examples The ERDAS IMAGINE Map Composer enables you to define a map
size, as well as the size and scale for the image area within the map
composition. The examples in this section focus on the relationship
between these factors and the output file created by Map Composer
for the specific hardcopy device or file format. Figure 202 is the map
composition that is used in the examples. This composition was
originally created using the ERDAS IMAGINE Map Composer at a size
of 22” × 34”, and the hardcopy output must be in two different
formats.
If the specified size of the map (width and height) is greater than
the printable area for the printer, the output hardcopy map is
paneled. See the hardware manual of the hardcopy device for
information about the printable area of the device.
Output to TIFF
The limiting factor in this example is not page size, but disk space
(600 MB total). A three-band image file must be created in order to convert the map composition to a .tif file. Due to the three bands and
the high resolution, the image file could be very large. The .tif file is
output to a film recorder with a 1,000 DPI device resolution.
To determine the number of megabytes for the map composition, the
X and Y dimensions need to be calculated:
• X = 22 × 1,000 = 22,000
• Y = 34 × 1,000 = 34,000
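A back-of-the-envelope check (sketched in Python, and assuming 8-bit data, one byte per band per pixel) shows why disk space, rather than page size, is the limiting factor here:

x = 22 * 1000     # 22,000 pixels across at 1,000 DPI
y = 34 * 1000     # 34,000 pixels down
bands = 3

total_bytes = x * y * bands            # 2,244,000,000 bytes
megabytes = total_bytes / (1024 * 1024)
print(round(megabytes))                # roughly 2,140 MB, far more than the 600 MB available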
Hardcopy Devices
The following hardcopy devices use halftoning to output an image or
map composition:
Continuous Tone Printing Continuous tone printing enables you to output color imagery using
the four process colors (cyan, magenta, yellow, and black). By using
varying percentages of these colors, it is possible to create a wide
range of colors. The printer converts digital data from the host
computer into a continuous tone image. The quality of the output
picture is similar to a photograph. The output is smoother than
halftoning because the dots for continuous tone printing can vary in
density.
Example
There are different processes by which continuous tone printers
generate a map. One example is a process called thermal dye
transfer. The entire image or map composition is loaded into the
printer’s memory. While the paper moves through the printer, heat
is used to transfer the dye from a ribbon, which has the dyes for all
of the four process colors, to the paper. The density of the dot
depends on the amount of heat applied by the printer to transfer the
dye. The amount of heat applied is determined by the brightness
values of the input image. This allows the printer to control the
amount of dye that is transferred to the paper to create a continuous
tone image.
Hardcopy Devices
The following hardcopy device uses continuous toning to output an
image or map composition:
• Tektronix Phaser II SD
NOTE: The above printers do not necessarily use the thermal dye
transfer process to generate a map.
See the user’s manual for the hardcopy device for more
information about continuous tone printing.
Contrast and Color Tables ERDAS IMAGINE contrast and color tables are used for some printing
processes, just as they are used in displaying an image. For
continuous raster layers, they are loaded from the ERDAS IMAGINE
contrast table. For thematic layers, they are loaded from the color
table. The translation of data file values to brightness values is
performed entirely by the software program.
Colors
Since a printer uses ink instead of light to create a visual image, the
primary colors of pigment (cyan, magenta, and yellow) are used in
printing, instead of the primary colors of light (red, green, and blue).
Cyan, magenta, and yellow can be combined to make black through
a subtractive process, whereas the primary colors of light are
additive—red, green, and blue combine to make white (Gonzalez and
Wintz, 1977).
The data file values that are sent to the printer and the contrast and
color tables that accompany the data file are all in the RGB color
scheme. The RGB brightness values in the contrast and color tables
must be converted to cyan, magenta, and yellow (CMY) values.
The RGB primary colors are the opposites of the CMY colors—
meaning, for example, that the presence of cyan in a color means an
equal lack of red. To convert the values, each RGB brightness value
is subtracted from the maximum brightness value to produce the
brightness value for the opposite color. The following equation shows
this relationship:
C = MAX - R
M = MAX - G
Y = MAX - B
Where:
MAX = the maximum brightness value
R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value
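A minimal Python sketch of this conversion, assuming 8-bit brightness values so that the maximum brightness value is 255:

MAX = 255  # maximum brightness value for 8-bit data

def rgb_to_cmy(r, g, b):
    # Convert RGB brightness values to CMY by subtracting each from the maximum.
    return MAX - r, MAX - g, MAX - b

print(rgb_to_cmy(255, 0, 0))   # (0, 255, 255): pure red contains no cyan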
Black Ink
Although, theoretically, cyan, magenta, and yellow combine to
create black ink, the color that results is often a dark, muddy brown.
Many printers also use black ink for a truer black.
NOTE: Black ink may not be available on all printers. Consult the
user’s manual for your printer.
∑ i, for i = 1 to 10
∑ Qi = 3 + 5 + 7 + 2 = 17, for i = 1 to 4
Where:
Q1 = 3
Q2 = 5
Q3 = 7
Q4 = 2
Statistics
Histogram In ERDAS IMAGINE image data files, each data file value (defined by
its row, column, and band) is a variable. ERDAS IMAGINE supports
the following data types:
• 1, 2, and 4-bit
(Figure 203: histogram of a band of data; data file values (0 to 255) on the X axis, number of pixels on the Y axis.)
Figure 203 shows the histogram for a band of data in which Y pixels
have data value X. For example, in this graph, 300 pixels (y) have
the data file value of 100 (x).
Bin Functions Bins are used to group ranges of data values together for better
manageability. Histograms and other descriptor columns for 1, 2, 4,
and 8-bit data are easy to handle since they contain a maximum of
256 rows. However, to have a row in a descriptor table for every
possible data value in floating point, complex, and 32-bit integer
data would yield an enormous amount of information. Therefore, the
bin function is provided to serve as a data reduction tool.
0 X < 0.01
Then, for example, row 23 of the histogram table would contain the
number of pixels in the layer whose value fell between .023 and
.024.
For example, a direct bin with 900 bins and an offset of -601
would look like the following:
bin 0:   X ≤ -600.5
bin 1:   -600.5 < X ≤ -599.5
...
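For the direct bin example above, the bin number can be found by rounding the data value and subtracting the offset. The following Python sketch is illustrative only and glosses over the exact handling of values that fall on a bin boundary:

offset = -601
num_bins = 900

def direct_bin(value):
    # Return the bin number for a data value, clamped to the valid bin range.
    bin_number = int(round(value)) - offset
    return min(max(bin_number, 0), num_bins - 1)

print(direct_bin(-600.7))   # 0
print(direct_bin(-600.0))   # 1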
Mean The mean (µ) of a set of values is its statistical average, such that,
if Qi represents a set of k values:
µ = (Q1 + Q2 + Q3 + ... + Qk) / k
or, in summation notation,
µ = ( ∑ Qi ) / k, summing over i = 1 to k
The mean of data with a normal distribution is the value at the peak
of the curve—the point where the distribution balances.
(Figure: the normal distribution; data file values (0 to 255) on the X axis, number of pixels on the Y axis.)
f(x) = ( 1 / ( σ √(2π) ) ) × e^( –(x – µ)² / (2σ²) )
Where:
x = the quantity’s distribution that is being
approximated
π and e = famous mathematical constants
The parameter µ controls how much the bell is shifted horizontally
so that its average matches the average of the distribution of x, while
σ adjusts the width of the bell to try to encompass the spread of the
given distribution. In choosing to approximate a distribution by the
nearest of the Normal Distributions, we describe the many values in
the bin function of its distribution with just two parameters. It is a
significant simplification that can greatly ease the computational
burden of many operations, but like all simplifications, it reduces the
accuracy of the conclusions we can draw.
Variance The mean of a set of values locates only the average value—it does
not adequately describe the set of values by itself. It is helpful to
know how much the data varies from its mean. However, a simple
average of the differences between each value and the mean equals
zero in every case, by definition of the mean. Therefore, the squares
of these differences are averaged so that a meaningful number
results (Larsen and Marx, 1981).
In theory, the variance is calculated as follows:
VarQ = E⟨(Q – µQ)²⟩
Where:
E = expected value (weighted average)
2 = squared to make the distance a positive number
In practice, the use of this equation for variance does not usually
reflect the exact nature of the values that are used in the equation.
These values are usually only samples of a large data set, and
therefore, the mean and variance of the entire data set are
estimated, not known.
The equation used in practice follows. This is called the minimum
variance unbiased estimator of the variance, or the sample variance
(notated σ2).
σQ² ≈ ( ∑ (Qi – µQ)² ) / (k – 1), summing over i = 1 to k
Where:
i = a particular pixel
k = the number of pixels (the higher the number, the better
the approximation)
Standard Deviation Since the variance is expressed in units squared, a more useful value
is the square root of the variance, which is expressed in units and
can be related back to the original values (Larsen and Marx, 1981).
The square root of the variance is the standard deviation.
Based on the equation for sample variance (σ2), the sample standard
deviation (σQ) for a set of values Q is computed as follows:
σQ = sqrt( ( ∑ (Qi – µQ)² ) / (k – 1) ), summing over i = 1 to k
In any distribution:
• more than 1/2 of the values are between µ-2σ and µ+2σ
• more than 3/4 of the values are between µ-3σ and µ+3σ
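The mean, sample variance, and sample standard deviation can be computed directly from the definitions above. A minimal Python sketch, reusing the four example values Q1 through Q4 from the summation example:

from math import sqrt

Q = [3.0, 5.0, 7.0, 2.0]   # example data file values
k = len(Q)

mean = sum(Q) / k                                       # 4.25
variance = sum((q - mean) ** 2 for q in Q) / (k - 1)    # sample variance, about 4.92
std_dev = sqrt(variance)                                # sample standard deviation, about 2.22

print(mean, variance, std_dev)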
Cov QR = E 〈 ( Q – µ Q ) ( R – µ R )〉
Where:
Q and R = data file values in two bands
E = expected value
In practice, the sample covariance is computed with this equation:
CQR ≈ ( ∑ (Qi – µQ)(Ri – µR) ) / k, summing over i = 1 to k
Where:
i = a particular pixel
k = the number of pixels
Like variance, covariance is expressed in units squared.
CQQ = ( ∑ (Qi – µQ)(Qi – µQ) ) / (k – 1) = ( ∑ (Qi – µQ)² ) / (k – 1), summing over i = 1 to k
Measurement Vector The measurement vector of a pixel is the set of data file values for
one pixel in all n bands. Although image data files are stored band-
by-band, it is often necessary to extract the measurement vectors
for individual pixels.
(Figure: the measurement vector of one pixel in an image with n = 3 bands; the values V1, V2, and V3 are taken from Band 1, Band 2, and Band 3 at the same pixel location.)
Mean Vector When the measurement vectors of several pixels are analyzed, a
mean vector is often calculated. This is the vector of the means of
the data file values in each band. It has n elements.
(Figure: the mean vector, with one element per band: µ1, µ2, µ3.)
(Figure 207: plot of one pixel’s data file values in two bands; Band A values (0 to 255) on the X axis. The plotted pixel has a value of 180 in Band A and 85 in the other band.)
NOTE: If the image is 2-dimensional, the plot does not always have
to be 2-dimensional.
In Figure 207, the pixel that is plotted has a measurement vector of:
(180, 85)
Feature Space Images Several techniques for the processing of multiband data make use of
a two-dimensional histogram, or feature space image. This is simply
a graph of the data file values of one band of data against the values
of another band.
D = sqrt( ∑ (di – ei)² ), summing over i = 1 to n
Where:
D = spectral distance
n = number of bands (dimensions)
i = a particular band
di = data file value of pixel d in band i
ei = data file value of pixel e in band i
This is the equation for Euclidean distance—in two dimensions (when
n = 2), it can be simplified to the Pythagorean Theorem (c² = a² + b²), or in this case:
D² = (di – ei)² + (dj – ej)²
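A short Python sketch of Euclidean spectral distance between two measurement vectors; the vectors here are invented for illustration:

from math import sqrt

def spectral_distance(d, e):
    # Euclidean distance between two pixels' measurement vectors.
    return sqrt(sum((di - ei) ** 2 for di, ei in zip(d, e)))

pixel_d = [180, 85, 40]   # hypothetical measurement vectors (n = 3 bands)
pixel_e = [120, 60, 30]
print(spectral_distance(pixel_d, pixel_e))   # about 65.8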
NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not
the complexity, of the transformation is changed. Mathematically, Ω
cannot be 0.
xo = ∑ (over i = 0 to t) ∑ (over j = 0 to i) ak × x^(i–j) × y^j

yo = ∑ (over i = 0 to t) ∑ (over j = 0 to i) bk × x^(i–j) × y^j

Where:
t is the order of the polynomial
ak and bk are coefficients
the subscript k in ak and bk is determined by:
k = ( i × i + i ) / 2 + j
A numerical example of 3rd-order transformation equations for x and
y is:
xo = 5 + 4x – 6y + 10x² – 5xy + 1y² + 3x³ + 7x²y – 11xy² + 4y³
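The double sum can be evaluated with a short Python function. The coefficients below are those of the xo example above, arranged in the k order just defined; the function name and test point are invented for illustration:

def polynomial_transform(a, x, y, order):
    # Evaluate xo as the sum over i and j of a[k] * x^(i-j) * y^j,
    # with k = (i*i + i) // 2 + j as defined above.
    xo = 0.0
    for i in range(order + 1):
        for j in range(i + 1):
            k = (i * i + i) // 2 + j
            xo += a[k] * (x ** (i - j)) * (y ** j)
    return xo

# Coefficients of the 3rd-order xo example, in k order
a = [5, 4, -6, 10, -5, 1, 3, 7, -11, 4]
print(polynomial_transform(a, 1.0, 1.0, 3))   # 12.0, the sum of all the coefficients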
Transformation Matrix In the case of first order image rectification, the variables in the
polynomials (x and y) are the source coordinates of a GCP. The
coefficients are computed from the GCPs and stored as a
transformation matrix.
Matrix Notation Matrices and vectors are usually designated with a single capital
letter, such as M. For example:
2.2 4.6
M = 6.1 8.3
10.0 12.4
2.8
G = 6.5
10.1
G2 = 6.5
C = | a1  a2  a3 |
    | b1  b2  b3 |

| xo |   | a1  a2  a3 |   | 1  |
| yo | = | b1  b2  b3 | × | xi |
                          | yi |

or, more compactly, R = CS
Where:
S = a matrix of the source coordinates (3 by 1)
C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)
The sizes of the matrices are shown above to demonstrate a rule of
matrix multiplication. To multiply two matrices, the first matrix must
have the same number of columns as the second matrix has rows.
For example, if the first matrix is a by b, and the second matrix is m
by n, then b must equal m, and the product matrix has the size a by
n.
The formula for multiplying two matrices is:
(fg)ij = ∑ fik × gkj, summing over k = 1 to b
Where:
i = a row in the product matrix
j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)
fg is an a by n matrix.
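The multiplication rule can be sketched in a few lines of Python; the matrices here are invented for illustration:

def matmul(f, g):
    # Multiply an (a by b) matrix by a (b by n) matrix, giving an (a by n) matrix.
    a, b, n = len(f), len(g), len(g[0])
    return [[sum(f[i][k] * g[k][j] for k in range(b)) for j in range(n)]
            for i in range(a)]

F = [[1, 2, 3],
     [4, 5, 6]]          # 2 by 3
G = [[1], [0], [2]]      # 3 by 1
print(matmul(F, G))      # [[7], [16]]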
GT = | 2  6  10 |
     | 3  4  12 |
• External Projections
NOTE: You cannot rectify to a new map projection using the Image
Information option. You should change map projection information
using Image Information only if you know the information to be
incorrect. Use the rectification tools to actually georeference an
image to a new map projection system.
• Alaska Conformal
• Azimuthal Equidistant
• Behrmann
• Bonne
• Cassini
• Eckert I
• Eckert II
• Eckert III
• Eckert IV
• Eckert V
• Eckert VI
• EOSAT SOM
• Equidistant Conic
• Equidistant Cylindrical
• Gall Stereographic
• Gauss Kruger
• Geographic (Lat/Lon)
• Gnomonic
• Hammer
• Interrupted Mollweide
• Mercator
• Miller Cylindrical
• Mollweide
• Orthographic
• Plate Carrée
• Polar Stereographic
• Polyconic
• Quartic Authalic
• Robinson
• RSO
• Sinusoidal
• State Plane
• Stereographic
• Stereographic (Extended)
• Transverse Mercator
• UTM
• Wagner IV
• Wagner VII
Property             Conformal
Meridians            N/A
Parallels            N/A
Graticule spacing    N/A
Prompts
The following prompts display in the Projection Chooser once Alaska
Conformal is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Construction Cone
Property Equal-area
Prompts
The following prompts display in the Projection Chooser once Albers
Conical Equal Area is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
In Figure 209, the standard parallels are 20°N and 60°N. Note the
change in spacing of the parallels.
Construction Plane
Property Equidistant
Prompts
The following prompts display in the Projection Chooser if Azimuthal
Equidistant is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cylindrical
Property Equal-area
Graticule spacing See Meridians and Parallels. Poles are straight lines
the same length as the Equator. Symmetry is
present about any meridian or the Equator.
Prompts
The following prompts display in the Projection Chooser once
Behrmann is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocone
Property Equal-area
Meridians N/A
Prompts
The following prompts display in the Projection Chooser once Bonne
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cylinder
Property Compromise
Meridians N/A
Parallels N/A
Prompts
The following prompts display in the Projection Chooser once Cassini
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale Factor
Enter the scale factor.
Longitude of central meridian
Latitude of origin of projection
Enter the values for longitude of central meridian and latitude of
origin of projection.
False easting
False northing
Construction Pseudocylinder
Graticule spacing See Meridians and Parallels. Poles are lines one half
the length of the Equator. Symmetry exists about
the central meridian or the Equator.
Linear scale Scale is true along latitudes 47° 10’ N and S. Scale
is constant at any latitude (and latitude of opposite
sign) and any meridian.
Prompts
The following prompts display in the Projection Chooser once Eckert
I is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Property Equal-area
Graticule spacing See Meridians and Parallels. Pole lines are half the
length of the Equator. Symmetry exists at the
central meridian or the Equator.
Linear scale Scale is true along latitudes 55° 10’ N and S. Scale is constant along any latitude.
Prompts
The following prompts display in the Projection Chooser once Eckert
II is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Graticule spacing See Meridians and Parallels. Pole lines are half the
length of the Equator. Symmetry exists at the
central meridian or the Equator.
Linear scale Scale is correct only along latitudes 37° 55’ N and S.
Features close to poles are compressed in the
north-south direction.
Prompts
The following prompts display in the Projection Chooser once Eckert
III is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Property Equal-area
Prompts
The following prompts display in the Projection Chooser once Eckert
IV is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Prompts
The following prompts display in the Projection Chooser once Eckert
V is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Property Equal-area
Prompts
The following prompts display in the Projection Chooser once Eckert
VI is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Prompts
The following prompts display in the Projection Chooser once EOSAT
SOM is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cone
Property Equidistant
Prompts
The following prompts display in the Projection Chooser if Equidistant
Conic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Prompts
The following prompts display in the Projection Chooser if Equidistant
Cylindrical is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cylinder
Property Compromise
Prompts
The following prompts display in the Projection Chooser if
Equirectangular is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cylinder
Property Compromise
Prompts
The following prompts display in the Projection Chooser once Gall
Stereographic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Prompts
The following prompts display in the Projection Chooser once Gauss
Kruger is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale factor
Designate the desired scale factor. This parameter is used to modify
scale distortion. A value of one indicates true scale only along the
central meridian. It may be desirable to have true scale along two
lines equidistant from and parallel to the central meridian, or to
lessen scale distortion away from the central meridian. A factor of
less than, but close to, one is often used (see the sketch at the end of these prompts).
Longitude of central meridian
Enter a value for the longitude of the desired central meridian to
center the projection.
Latitude of origin of projection
Enter the value for the latitude of origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
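The sketch below (Python, using a spherical approximation) is given only to illustrate the last two ideas: why a scale factor slightly less than one produces two lines of true scale on either side of the central meridian, and how the false easting and northing simply offset the projected coordinates so that the mapped region stays in positive values. It is not the formula used by the software.

import math

R = 6_371_000.0  # mean Earth radius in meters (spherical approximation)

def point_scale(x_m, k0):
    """Low-order approximation of the Transverse Mercator point scale at a
    distance x_m (meters) from the central meridian:
        k ~= k0 * (1 + x^2 / (2 * k0^2 * R^2))"""
    return k0 * (1.0 + x_m**2 / (2.0 * k0**2 * R**2))

def true_scale_offset(k0):
    """Distance from the central meridian where scale returns to 1
    when k0 < 1 (solve point_scale(x, k0) == 1 for x)."""
    return R * math.sqrt(2.0 * k0 * (1.0 - k0))

def apply_false_origin(x_m, y_m, false_easting, false_northing):
    """Shift projected coordinates (meters) by the false easting/northing so
    that coordinates within the mapped region remain non-negative."""
    return x_m + false_easting, y_m + false_northing

if __name__ == "__main__":
    k0 = 0.9996                                  # less than, but close to, one
    print(point_scale(0.0, k0))                  # 0.9996 on the central meridian
    print(round(true_scale_offset(k0) / 1000))   # ~180 km on either side
    print(apply_false_origin(-250_000.0, -100_000.0, 500_000.0, 0.0))
    # (250000.0, -100000.0): the false easting keeps eastings positive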
Construction Plane
Property Compromise
Figure: parallels of latitude (0°, 30°, 60°, and the Equator) and meridians of longitude.
Construction Plane
Property Compromise
Prompts
The following prompts display in the Projection Chooser if Gnomonic
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Property Equal-area
Prompts
The following prompts display in the Projection Chooser once
Hammer is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylindrical
Property Equal-area
Linear scale Scale is true at each latitude between 40° 44’ N and S, and along the central meridian within the same latitude range. Scale varies at higher latitudes.
Prompts
The following prompts display in the Projection Chooser once
Interrupted Goode Homolosine is selected. Respond to the prompts
as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Prompts
The following prompts display in the Projection Chooser once
Interrupted Mollweide is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Plane
Property Equal-area
In the polar aspect, latitude rings decrease their intervals from the
center outwards. In the equatorial aspect, parallels are curves
flattened in the middle. Meridians are also curved, except for the
central meridian, and spacing decreases toward the edges.
Construction Cone
Property Conformal
Prompts
The following prompts display in the Projection Chooser if Lambert
Conformal Conic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
If you have only one standard parallel, enter that same value into all three latitude fields.
In Figure 228, the standard parallels are 20°N and 60°N. Note the
change in spacing of the parallels.
Construction Pseudocylindrical
Prompts
The following prompts display in the Projection Chooser if Loximuthal
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cylinder
Property Conformal
In Figure 230, all angles are shown correctly; therefore, small shapes are true (i.e., the map is conformal). Rhumb lines are straight, which makes the projection useful for navigation.
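A short sketch of the textbook spherical Mercator equations (given for illustration only, not the software's implementation) shows why rhumb lines plot straight: longitude maps linearly to x and latitude maps to the isometric latitude, so a constant-bearing course has a constant slope in projected coordinates.

import math

def mercator(lon_deg, lat_deg, R=6_371_000.0):
    """Spherical Mercator: x is proportional to longitude and
    y = R * ln(tan(pi/4 + lat/2)) (the isometric latitude scaled by R).
    Along a rhumb line of bearing A, d(lon) = tan(A) * d(isometric lat),
    so the course maps to a straight line of slope cot(A) in (x, y)."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return R * lon, R * math.log(math.tan(math.pi / 4.0 + lat / 2.0))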
Construction Cylinder
Property Compromise
Prompts
The following prompts display in the Projection Chooser if Miller
Cylindrical is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cone
Property Equidistant
Prompts
The following prompts display in the Projection Chooser if Modified
Transverse Mercator is selected. Respond to the prompts as
described.
False easting
False northing
Construction Pseudocylinder
Property Equal-area
Prompts
The following prompts display in the Projection Chooser once
Mollweide is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Property Conformal
Meridians N/A
Parallels N/A
Linear scale Scale is within 0.02 percent of actual scale for the
country of New Zealand.
Prompts
The following prompts display in the Projection Chooser once New
Zealand Map Grid is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
The Spheroid Name defaults to International 1909. The Datum Name
defaults to Geodetic Datum 1949. These fields are not editable.
Easting Shift
Northing Shift
The Easting and Northing shifts are reported in meters.
Prompts
The following prompts display in the Projection Chooser once
Oblated Equal Area is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Parameter M
Parameter N
Enter the oblated equal area oval shape parameters M and N.
Longitude of center of projection
Latitude of center of projection
Enter the longitude of the center of the projection and the latitude of
the center of the projection.
Rotation angle
Enter the oblated equal area oval rotation angle.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Construction Cylinder
Property Conformal
The USGS uses the Hotine version of Oblique Mercator. The Hotine
version is based on a study of conformal projections published by
British geodesist Martin Hotine in 1946-47. Prior to the
implementation of the Space Oblique Mercator, the Hotine version
was used for mapping Landsat satellite imagery.
Prompts
The following prompts display in the Projection Chooser if Oblique
Mercator (Hotine) is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Format A
For format A, the additional prompts are:
Azimuth east of north for central line
Longitude of point of origin
Format A defines the central line of the projection by the angle east
of north to the desired great circle path and by the latitude and
longitude of the point along the great circle path from which the
angle is measured. Appropriate values should be entered.
Construction Plane
Property Compromise
Prompts
The following prompts display in the Projection Chooser if Plate
Carrée is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Plane
Property Conformal
Construction Cone
Property Compromise
Prompts
The following prompts display in the Projection Chooser if Polyconic
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylindrical
Property Equal-area
Prompts
The following prompts display in the Projection Chooser once Quartic
Authalic is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Graticule spacing The central meridian and all parallels are linear.
This projection has been used both by Rand McNally and the National
Geographic Society.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once
Robinson is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Cylinder
Property Conformal
Parallels N/A
Prompts
The following prompts display in the Projection Chooser once RSO is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
RSO Type
Select the RSO Type. You can choose from Borneo or Malaysia.
Construction Pseudocylinder
Property Equal-area
Construction Cylinder
Property Conformal
Prompts
The following prompts display in the Projection Chooser once Space
Oblique Mercator (Formats A & B) is selected. Respond to the
prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Format A (Generic Satellite)
Inclination of orbit at ascending node
Period of satellite revolution in minutes
Longitude of ascending orbit at equator
Landsat path flag
If you select Format A of the Space Oblique Mercator projection, you
need to supply the information listed above.
Format B (Landsat)
Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For
Landsats 4 and 5, the path range is from 1 to 233.
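As a simple illustration of these ranges, the hypothetical helper below (not part of the software) checks a Format B entry against them.

def valid_landsat_path(vehicle_id, path):
    """Check a Format B path number against the stated ranges:
    Landsats 1, 2, and 3 use paths 1-251; Landsats 4 and 5 use paths 1-233."""
    if vehicle_id in (1, 2, 3):
        return 1 <= path <= 251
    if vehicle_id in (4, 5):
        return 1 <= path <= 233
    raise ValueError("Landsat vehicle ID must be between 1 and 5")

print(valid_landsat_path(3, 251))  # True
print(valid_landsat_path(5, 251))  # False: Landsat 4/5 paths stop at 233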
Prompts
The following prompts appear in the Projection Chooser if State Plane
is selected. Respond to the prompts as described.
State Plane Zone
Enter either the USGS zone code number as a positive value, or the
NOS zone code number as a negative value.
NAD27 or NAD83 or HARN
Either North American Datum 1927 (NAD27), North American Datum
1983 (NAD83), or High Accuracy Reference Network (HARN) may be
used to perform the State Plane calculations.
• NAD83 and HARN are based on the GRS 1980 spheroid. Some
zone numbers have been changed or deleted from NAD27.
Tables for both NAD27 and NAD83 zone numbers follow (Table 101
on page 612 and Table 102 on page 616). These tables include both
USGS and NOS code systems.
The following abbreviations are used in Table 101 on page 612 and
Table 102 on page 616:
Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic
Table 101: NAD27 State Plane Coordinate System for the United States
Table 102: NAD83 State Plane Coordinate System for the United States
Construction Plane
Property Conformal
In the equatorial aspect, all parallels except the Equator are circular
arcs. In the polar aspect, latitude rings are spaced farther apart, with
increasing distance from the pole.
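The widening of the latitude rings can be seen from the spherical polar-aspect formula. The sketch below assumes a sphere tangent at the North Pole with no scale reduction and is an illustration only, not the projection routine itself.

import math

def polar_stereographic_radius(lat_deg, R=6_371_000.0):
    """Radius of the latitude ring in a north polar stereographic projection
    of a sphere tangent at the pole: r = 2R * tan((90 - lat) / 2)."""
    return 2.0 * R * math.tan(math.radians(90.0 - lat_deg) / 2.0)

# Each successive 10-degree band away from the pole adds a slightly larger
# increment of radius, i.e. the rings are spaced farther and farther apart.
for lat in (80, 70, 60, 50):
    print(lat, round(polar_stereographic_radius(lat) / 1000), "km")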
Prompts
The following prompts display in the Projection Chooser once
Stereographic (Extended) is selected. Respond to the prompts as
described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale factor
Designate the desired scale factor. This parameter is used to modify
scale distortion. A value of one indicates true scale only along the
central meridian. It may be desirable to have true scale along two
lines equidistant from and parallel to the central meridian, or to
lessen scale distortion away from the central meridian. A factor of
less than, but close to, one is often used.
Longitude of origin of projection
Latitude of origin of projection
Enter the values for longitude of origin of projection and latitude of
origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Construction Cylinder
Property Conformal
Prompts
The following prompts display in the Projection Chooser if Transverse
Mercator is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Property Compromise
Meridians N/A
Parallels N/A
Graticule spacing N/A
Prompts
The following prompts display in the Projection Chooser once Two
Point Equidistant is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the
desired center of the projection. These values must be in meters. It
is often convenient to make them large enough so that no negative
coordinates occur within the region of the map projection. That is,
the origin of the rectangular coordinate system should fall outside of
the map projection to the south and west.
Longitude of 1st point
Latitude of 1st point
Prompts
The following prompts display in the Projection Chooser if UTM is
chosen.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
UTM Zone
North or South
All values in Table 106 are in full degrees east (E) or west (W) of the Greenwich prime meridian (0°). A short sketch after the table shows how the zone number and central meridian relate to longitude.
Zone   Central Meridian   Range          Zone   Central Meridian   Range
1 177W 180W-174W 31 3E 0-6E
2 171W 174W-168W 32 9E 6E-12E
3 165W 168W-162W 33 15E 12E-18E
4 159W 162W-156W 34 21E 18E-24E
5 153W 156W-150W 35 27E 24E-30E
6 147W 150W-144W 36 33E 30E-36E
7 141W 144W-138W 37 39E 36E-42E
8 135W 138W-132W 38 45E 42E-48E
9 129W 132W-126W 39 51E 48E-54E
10 123W 126W-120W 40 57E 54E-60E
11 117W 120W-114W 41 63E 60E-66E
12 111W 114W-108W 42 69E 66E-72E
13 105W 108W-102W 43 75E 72E-78E
14 99W 102W-96W 44 81E 78E-84E
15 93W 96W-90W 45 87E 84E-90E
16 87W 90W-84W 46 93E 90E-96E
17 81W 84W-78W 47 99E 96E-102E
18 75W 78W-72W 48 105E 102E-108E
19 69W 72W-66W 49 111E 108E-114E
20 63W 66W-60W 50 117E 114E-120E
21 57W 60W-54W 51 123E 120E-126E
22 51W 54W-48W 52 129E 126E-132E
23 45W 48W-42W 53 135E 132E-138E
24 39W 42W-36W 54 141E 138E-144E
25 33W 36W-30W 55 147E 144E-150E
26 27W 30W-24W 56 153E 150E-156E
27 21W 24W-18W 57 159E 156E-162E
28 15W 18W-12W 58 165E 162E-168E
29 9W 12W-6W 59 171E 168E-174E
30 3W 6W-0 60 177E 174E-180E
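The zone layout in Table 106 follows from dividing the globe into sixty 6° strips numbered eastward from 180° W. The following sketch (Python, for illustration only) reproduces that arithmetic.

def utm_zone(lon_deg):
    """UTM zone number for a longitude in decimal degrees (west negative):
    sixty 6-degree zones, numbered 1-60 eastward from 180 W."""
    lon = ((lon_deg + 180.0) % 360.0) - 180.0   # wrap into [-180, 180)
    return int((lon + 180.0) // 6) + 1

def utm_central_meridian(zone):
    """Central meridian of a UTM zone, in degrees (west negative)."""
    return zone * 6 - 183

print(utm_zone(-84.4), utm_central_meridian(utm_zone(-84.4)))  # 16 -87 (87W)
print(utm_zone(3.0), utm_central_meridian(utm_zone(3.0)))      # 31 3 (3E)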
Construction Miscellaneous
Property Compromise
All lines are curved except the central meridian and the Equator.
Parallels are spaced farther apart toward the poles. Meridian spacing
is equal at the Equator. Scale is true along the Equator, but increases
rapidly toward the poles, which are usually not represented.
Van der Grinten I avoids the excessive stretching of the Mercator and
the shape distortion of many of the equal-area projections. It has
been used to show distribution of mineral resources on the ocean
floor.
Prompts
The following prompts display in the Projection Chooser if Van der
Grinten I is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
The Van der Grinten I projection resembles the Mercator, but it is not
conformal.
Construction Pseudocylinder
Property Equal-area
Graticule spacing See Meridians and Parallels. Poles are lines one half
as long as the Equator. Symmetry exists around the
central meridian or the Equator.
Prompts
The following prompts display in the Projection Chooser if Wagner IV
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Property Equal-area
Graticule spacing See Meridians and Parallels. Poles are curved lines.
Symmetry exists about the central meridian or the
Equator.
Linear scale Scale decreases along the central meridian and the Equator with increasing distance from the center of the Wagner VII projection.
Prompts
The following prompts display in the Projection Chooser if Wagner VII
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Construction Pseudocylinder
Graticule spacing See Meridians and Parallels. Pole lines are 0.61 times the
length of the Equator. Symmetry exists about the
central meridian or the Equator.
Linear scale Scale is true along latitudes 50° 28’ N and S. Scale
is constant along any given latitude as well as the
latitude of the opposite sign.
Prompts
The following prompts display in the Projection Chooser once Winkel
I is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
NOTE: ERDAS IMAGINE does not support datum shifts for these
external projections.
• Albers Equal Area (see “Albers Conical Equal Area” on page 527)
• Cassini-Soldner
• Modified Polyconic
• Modified Stereographic
• Winkel’s Tripel
Construction Cone
Property Conformal
The two oblique conics are joined with the poles 104° apart. A great
circle arc 104° long begins at 20°S and 110°W, cuts through Central
America, and terminates at 45°N and approximately 19°59’36”W.
The scale of the map is then increased by approximately 3.5%. The
origin of the coordinates is made 17°15’N, 73°02’W.
Prompts
The following prompts display in the Projection Chooser if Bipolar
Oblique Conic Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Construction Cylinder
Property Compromise
The spherical form of the projection bears the same relation to the
Equidistant Cylindrical, or Plate Carrée, projection that the spherical
Transverse Mercator bears to the regular Mercator. Instead of having
the straight meridians and parallels of the Equidistant Cylindrical, the
Cassini has complex curves for each, except for the Equator, the
central meridian, and each meridian 90° away from the central
meridian, all of which are straight.
There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If it is given a reduced scale factor, the lines of true scale are two straight lines on the map, parallel to and equidistant from the central meridian, and there is no distortion along those lines instead of along the central meridian.
Prompts
The following prompts display in the Projection Chooser if Cassini-
Soldner is selected.
Projection Name
Spheroid Type
Datum Name
Prompts
The following prompts display in the Projection Chooser if Laborde
Oblique Mercator is selected.
Projection Name
Spheroid Type
Datum Name
For more information, see “New Zealand Map Grid” on page 587.
Construction Cone
Property Compromise
Prompts
The following prompts display in the Projection Chooser if Modified
Polyconic is selected.
Projection Name
Spheroid Type
Datum Name
Construction Plane
Property Conformal
Prompts
The following prompts display in the Projection Chooser if Modified
Stereographic is selected.
Projection Name
Spheroid Type
Datum Name
Construction Pseudocylinder
Property Equal-area
The Mollweide is normally used for world maps and occasionally for
a very large region, such as the Pacific Ocean. This is because only
two points on the Mollweide are completely free of distortion unless
the projection is interrupted. These are the points at latitudes
40°44’12”N and S on the central meridian(s).
The world is shown in an ellipse with the Equator, its major axis,
twice as long as the central meridian, its minor axis. The meridians
90° east and west of the central meridian form a complete circle. All
other meridians are elliptical arcs which, with their opposite numbers
on the other side of the central meridian, form complete ellipses that
meet at the poles.
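The 2:1 ellipse described above follows from the standard spherical Mollweide equations. The sketch below (an illustration only, not the software's algorithm) solves the auxiliary angle by Newton-Raphson iteration and confirms that the Equator is twice the length of the central meridian.

import math

def mollweide(lon_deg, lat_deg, lon0_deg=0.0, R=6_371_000.0):
    """Spherical Mollweide forward projection:
        x = (2*sqrt(2)/pi) * R * (lon - lon0) * cos(theta)
        y = sqrt(2) * R * sin(theta)
    where the auxiliary angle theta satisfies
        2*theta + sin(2*theta) = pi * sin(lat)."""
    lon = math.radians(lon_deg - lon0_deg)
    lat = math.radians(lat_deg)
    if abs(lat) >= math.pi / 2.0 - 1e-12:          # poles: theta = +/- pi/2
        theta = math.copysign(math.pi / 2.0, lat)
    else:
        theta = lat
        for _ in range(50):                        # Newton-Raphson iteration
            f = 2.0 * theta + math.sin(2.0 * theta) - math.pi * math.sin(lat)
            if abs(f) < 1e-12:
                break
            theta -= f / (2.0 + 2.0 * math.cos(2.0 * theta))
    x = (2.0 * math.sqrt(2.0) / math.pi) * R * lon * math.cos(theta)
    y = math.sqrt(2.0) * R * math.sin(theta)
    return x, y

# The Equator (major axis) is twice as long as the central meridian (minor
# axis): the half-axes are 2*sqrt(2)*R and sqrt(2)*R respectively.
print(mollweide(180.0, 0.0)[0] / mollweide(0.0, 90.0)[1])  # 2.0 (to floating-point precision)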
Prompts
The following prompts display in the Projection Chooser if Rectified
Skew Orthomorphic is selected.
Projection Name
Spheroid Type
Datum Name
Construction Pseudocylinder
Property Compromise
Prompts
The following prompts display in the Projection Chooser if Robinson
Pseudocylindrical is selected.
Projection Name
Spheroid Type
Datum Name
Prompts
The following prompts display in the Projection Chooser if Southern
Orientated Gauss Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Prompts
The following prompts display in the Projection Chooser if Winkel’s
Tripel is selected.
Projection Name
Spheroid Type
Datum Name
Works Cited
Ackermann, 1983
Ackermann, F., 1983. High precision digital image correlation. Paper presented at 39th
Photogrammetric Week, Institute of Photogrammetry, University of Stuttgart, 231-243.
Adams et al, 1989
Adams, J.B., M. O. Smith, and A. R. Gillespie. 1989. Simple Models for Complex Natural Surfaces: A Strategy for the Hyperspectral Era of Remote Sensing. Paper presented at Institute of Electrical and Electronics Engineers, Inc. (IEEE) International Geosciences and Remote Sensing (IGARSS) 12th Canadian Symposium on Remote Sensing, Vancouver, British Columbia, Canada, July 1989, I:16-21.
Agouris and Schenk, 1996
Agouris, P., and T. Schenk. 1996. Automated Aerotriangulation Using Multiple Image Multipoint
Akima, 1978
Akima, H. 1978. A Method of Bivariate Interpolation and Smooth Surface Fitting for Irregularly
Sensing XLVI:10:1249.
Atkinson, 1985
Atkinson, P. 1985. Preliminary Results of the Effect of Resampling on Thematic Mapper Imagery. 1985 ACSM-ASPRS Fall Convention Technical Papers. Falls Church, Virginia: American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.
Atlantis Scientific, Inc., 1997
Atlantis Scientific, Inc. 1997. Sources of SAR Data. Retrieved October 2, 1999, from http://www.atlsci.com/library/sar_sources.html
Bauer and Müller, 1972
Bauer, H., and J. Müller. 1972. Height accuracy of blocks and bundle block adjustment with additional parameters. International Society for Photogrammetry and Remote Sensing (ISPRS) 12th Congress, Ottawa.
Benediktsson, J.A., P. H. Swain, O. K. Ersoy, and D. Hong 1990. Neural Network Approaches Versus
Benediktsson et al, 1990
Berk, A., L. S. Bernstein, and D. C. Robertson. 1989. MODTRAN: A Moderate Resolution Model for
Berk et al, 1989
Bernstein, R. 1983. Image Geometry and Rectification. Chapter 21 in Manual of Remote Sensing. Ed.
Bernstein, 1983
Electrical and Electronics Engineers, Inc. (IEEE) Transactions on Geoscience and Remote
Sensing 20 (3).
Cannon, T. M. 1983. Background Pattern Removal by Power Spectral Filtering. Applied Optics 22 (6):
Cannon, 1983
777-779.
Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor
Center for Health Applications of Aerospace Related Technologies, 1998
———. 2000a. Sensor Specifications: Ikonos. Retrieved December 28, 2001, from
Center for Health Applications of Aerospace Related Technologies, 2000a
http://geo.arc.nasa.gov/sge/health/sensor/sensors/ikonos.html
———. 2000b. Sensor Specifications: Landsat. Retrieved December 31, 2001, from
Center for Health Applications of Aerospace Related Technologies, 2000b
http://geo.arc.nasa.gov/sge/health/sensor/sensors/landsat.html
———. 2000c. Sensor Specifications: SPOT. Retrieved December 31, 2001, from
Center for Health Applications of Aerospace Related Technologies, 2000c
http://geo.arc.nasa.gov/sge/health/sensor/sensors/spot.html
Centre National D’Etudes Spatiales (CNES). 1998. CNES: Centre National D’Etudes Spatiales.
Centre National D’Etudes Spatiales, 1998
within the Atmosphere. Chapter 5 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.
Chavez et al, 1977
Chavez, P. S., Jr., G. L. Berlin, and W. B. Mitchell. 1977. Computer Enhancement Techniques of Landsat MSS Digital Images for Land Use/Land Cover Assessments. Remote Sensing Earth Resource. 6:259.
Chavez and Berlin, 1986
Chavez, P. S., Jr., and G. L. Berlin. 1986. Restoration Techniques for SIR-B Digital Radar Images. Paper presented at the Fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada, September/October 1986.
Chavez et al, 1991
Chavez, P. S., Jr., S. C. Sides, and J. A. Anderson. 1991. Comparison of Three Different Methods to
Clark, R. N., and T. L. Roush. 1984. Reflectance Spectroscopy: Quantitative Analysis Techniques for
Clark and Roush, 1984
Clark, R. N., A. J. Gallagher, and G. A. Swayze. 1990. “Material Absorption Band Depth Mapping of
Clark et al, 1990
Imaging Spectrometer Data Using a Complete Band Shape Least-Squares Fit with Library
Reference Spectra”. Paper presented at the Second Airborne Visible/Infrared Imaging
Spectrometer (AVIRIS) Conference, Pasadena, California, June 1990. Jet Propulsion Laboratory
Publication 90-54:176-186.
Colwell, R. N., ed. 1983. Manual of Remote Sensing. 2d ed. Falls Church, Virginia: American Society
Colwell, 1983
of Photogrammetry.
Congalton, R. 1991. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data.
Congalton, 1991
Conrac Corporation. 1980. Raster Graphics Handbook. New York: Van Nostrand Reinhold.
Conrac Corporation, 1980
Crippen, R. E. 1987. The Regression Intersection Method of Adjusting Image Data for Band Ratioing.
Crippen, 1987
———. 1989a. A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from
Crippen, 1989a
Landsat TM P-Tape Imagery. Photogrammetric Engineering & Remote Sensing 55 (3): 327-331.
———. 1989b. Development of Remote Sensing Techniques for the Investigation of Neotectonic
Crippen, 1989b
Activity, Eastern Transverse Ranges and Vicinity, Southern California. Ph.D. diss., University of
California, Santa Barbara.
Crist, E. P., R. Laurin, and R. C. Cicone. 1986. Vegetation and Soils Information Contained in
Crist et al, 1986
Transformed Thematic Mapper Data. Paper presented at International Geosciences and Remote
Sensing Symposium (IGARSS)’ 86 Symposium, ESA Publications Division, ESA SP-254.
Crist, E. P., and R. J. Kauth. 1986. The Tasseled Cap De-Mystified. Photogrammetric Engineering &
Crist and Kauth, 1986
Croft, F. C., N. L. Faust, and D. W. Holcomb. 1993. Merging Radar and VIS/IR Imagery. Paper
Croft (Holcomb), 1993
presented at the Ninth Thematic Conference on Geologic Remote Sensing, Pasadena, California,
February 1993.
Cullen, C. G. 1972. Matrices and Linear Transformations. 2d ed. Reading, Massachusetts: Addison-
Cullen, 1972
Publishing Company.
Earth Remote Sensing Data Analysis Center (ERSDAC). 2000. JERS-1 OPS. Retrieved December 28,
Earth Remote Sensing Data Analysis Center, 2000
Eberlein, R. B., and J. S. Weszka. 1975. Mixtures of Derivative Operators as Edge Detectors.
Eberlein and Weszka, 1975
Ebner, H. 1976. Self-calibrating block adjustment. Bildmessung und Luftbildwesen 44: 128-139.
Ebner, 1976
& Sons.
El-Hakim, S.F. and H. Ziemann. 1984. A Step-by-Step Strategy for Gross-Error Detection.
El-Hakim and Ziemann, 1984
Environmental Systems Research Institute, Inc. 1990. Understanding GIS: The ArcInfo Method.
Environmental Systems Research Institute, 1990
———. 1992. ARC Command References 6.0. Redlands. California: ESRI, Incorporated.
Environmental Systems Research Institute, 1992
———. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI,
Environmental Systems Research Institute, 1992
Incorporated.
———. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands,
Environmental Systems Research Institute, 1992
———. 1997. ArcInfo. Version 7.2.1. ArcInfo HELP. Redlands, California: ESRI, Incorporated.
Environmental Systems Research Institute, 1997
http://www.eurimage.com/Products/JERS_1.html
European Space Agency (ESA). 1995. ERS-2: A Continuation of the ERS-1 Success, by G. Duchossois
European Space Agency, 1995
———. 1997. SAR Mission Planning for ERS-1 and ERS-2, by S. D’Elia and S. Jutz. Retrieved October
European Space Agency, 1997
Fahnestock, J. D., and R. A. Schowengerdt. 1983. Spatially Variant Contrast Enhancement Using
Fahnestock and Schowengerdt, 1983
Science and Technology. Ed. A. Kent and J. G. Williams. New York: Marcel Dekker, Inc.
Faust, N. L., W. Sharp, D. W. Holcomb, P. Geladi, and K. Esbenson. 1991. Application of Multivariate
Faust et al, 1991
Image Analysis (MIA) to Analysis of TM and Hyperspectral Image Data for Mineral Exploration.
Paper presented at the Eighth Thematic Conference on Geologic Remote Sensing, Denver,
Colorado, April/May 1991.
Fisher, P. F. 1991. Spatial Data Sources and Data Problems. In Geographical Information Systems:
Fisher, 1991
Principles and Applications. Ed. D. J. Maguire, M. F. Goodchild, and D. W. Rhind. New York:
Longman Scientific & Technical.
Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes & Noble, Inc.
Flaschka, 1969
corners and centers of circular features. Paper presented at the Intercommission Conference on
Fast Processing of Photogrammetric Data, Interlaken, Switzerland, June 1987, 281-305.
Fraser, S. J., et al. 1986. “Targeting Epithermal Alteration and Gossans in Weathered and Vegetated
Fraser, 1986
Terrains Using Aircraft Scanners: Successful Australian Case Histories.” Paper presented at the
fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada.
Free On-Line Dictionary Of Computing. 1999a. American Standard Code for Information Interchange.
Free On-Line Dictionary of Computing, 1999a
———. 1999b. central processing unit. Retrieved October 25, 1999, from
Free On-Line Dictionary of Computing, 1999b
http://foldoc.doc.ic.ac.uk/foldoc
http://foldoc.doc.ic.ac.uk/foldoc
Frost, V. S., J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman. 1982. A Model for Radar Images and
Frost et al, 1982
Its Application to Adaptive Digital Filtering of Multiplicative Noise. Institute of Electrical and
Electronics Engineers, Inc. (IEEE) Transactions on Pattern Analysis and Machine Intelligence
PAMI-4 (2): 157-166.
Gonzalez, R. C., and P. Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-
Gonzalez and Wintz, 1977
Gonzalez, R. and Woods, R., Digital Image Processing. Prentice Hall, NJ, 2001.
Gonzalez and Woods, 2001
Green, A. A., and M. D. Craig. 1985. Analysis of Aircraft Spectrometer Data with Logarithmic
Green and Craig, 1985
Residuals. Paper presented at the AIS Data Analysis Workshop, Pasadena, California, April
1985. Jet Propulsion Laboratory (JOL) Publication 85 (41): 111-119.
Grün, A., 1978. Experiences with self calibrating bundle adjustment. Paper presented at the American
Grün, 1978
Haralick, R. M. 1979. Statistical and Structural Approaches to Texture. Paper presented at meeting
Haralick, 1979
of the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Seattle, Washington, May
1979, 67 (5): 786-804.
Heipke, C. 1996. Automation of interior, relative and absolute orientation. International Archives of
Heipke, 1996
Helava, U.V. 1988. Object space least square correlation. International Archives of Photogrammetry
Helava, 1988
Hord, R. M. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.
Hord, 1982
Iron, J. R., and G. W. Petersen. 1981. Texture Transforms of Remote Sensing Data. Remote Sensing
Iron and Petersen, 1981
of Environment 11:359-370.
Jacobsen, K. 1980. Vorschläge zur Konzeption und zur Bearbeitung von Bündelblockausgleichungen.
Jacobsen, 1980
Luftbildwesen, p. 213-217.
———. 1984. Experiences in blunder detection for Aerial Triangulation. Paper presented at
Jacobsen, 1984
International Society for Photogrammetry and Remote Sensing (ISPRS) 15th Congress, Rio de
Janeiro, Brazil, June 1984.
Jensen, J. R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood
Jensen, 1986
Jensen, J. R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. 2d ed.
Jensen, 1996
Jensen, J. R., et al. 1983. Urban/Suburban Land Use Analysis. Chapter 30 in Manual of Remote
Jensen et al, 1983
Johnston, R. J. 1980. Multivariate Statistical Analysis in Geography: A Primer on the General Linear
Johnston, 1980
Jordan, L. E., III, and L. Beck. 1999. NITFS—The National Imagery Transmission Format Standard.
Jordan and Beck, 1999
Kidwell, K. B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-7, NOAA-8, NOAA-9,
Kidwell, 1988
NOAA-10, and NOAA-11) Users Guide. Washington, DC: National Oceanic and Atmospheric
Administration.
King, Roger and Wang, Jianwen, “A Wavelet Based Algorithm for Pan Sharpening Landsat 7 Imagery”,
King et al, 2001
2001.
Selby, and S. A. Clough. 1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts: Air
Force Geophysics Laboratory. AFGL-TR-88-0177.
Konecny, G. 1994. New Trends in Technology, and their Application: Photogrammetry and Remote
Konecny, 1994
Sensing—From Analog to Digital. Paper presented at the Thirteenth United Nations Regional
Cartographic Conference for Asia and the Pacific, Beijing, China, May 1994.
Kruse, F. A. 1988. Use of Airborne Imaging Spectrometer Data to Map Minerals Associated with
Kruse, 1988
Hydrothermally Altered Rocks in the Northern Grapevine Mountains, Nevada and California.
Remote Sensing of the Environment 24 (1): 31-51.
Krzystek, P. 1998. On the use of matching techniques for automatic aerial triangulation. Paper
Krzystek, 1998
presented at meeting of the International Society for Photogrammetry and Remote Sensing
(ISPRS) Commission III Conference, Columbus, Ohio, July 1998.
Kubik, K. 1982. An error theory for the Danish method. Paper presented at International Society for
Kubik, 1982
Photogrammetry and Remote Sensing (ISPRS) Commission III Symposium, Helsinki, Finland,
June 1982.
Larsen, R. J., and M. L. Marx. 1981. An Introduction to Mathematical Statistics and Its Applications.
Larsen and Marx, 1981
Lavreau, J. 1991. De-Hazing Landsat Thematic Mapper Images. Photogrammetric Engineering &
Lavreau, 1991
Leberl, F. W. 1990. Radargrammetric Image Processing. Norwood, Massachusetts: Artech House, Inc.
Leberl, 1990
Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information System.
Lee and Walsh, 1984
Lee, J. S. 1981. “Speckle Analysis and Smoothing of Synthetic Aperture Radar Images.” Computer
Lee, 1981
Leick, A. 1990. GPS Satellite Surveying. New York, New York: John Wiley & Sons.
Leick, 1990
Lemeshewsky, George P, “Multispectral multisensor image fusion using wavelet transforms”, in Visual
Lemeshewsky, 1999
Image Processing VIII, S. K. Park and R. Juday, Ed., Proc SPIE 3716, pp214-222, 1999.
Li, D. 1983. Ein Verfahren zur Aufdeckung grober Fehler mit Hilfe der a posteriori-Varianzschätzung.
Li, 1983
———. 1985. Theorie und Untersuchung der Trennbarkeit von groben Paßpunktfehlern und
Li, 1985
Lillesand, T. M., and R. W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John
Lillesand and Kiefer, 1987
Lopes, A., E. Nezry, R. Touzi, and H. Laur. 1990. Maximum A Posteriori Speckle Filtering and First
Lopes et al, 1990
Order Textural Models in SAR Images. Paper presented at the International Geoscience and
Remote Sensing Symposium (IGARSS), College Park, Maryland, May 1990, 3:2409-2412.
Lyon, R. J. P. 1987. Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid Rocks.
Lyon, 1987
Proceedings of the Third AIS Data Analysis Workshop. JPL Pub. 87-30:107-119.
Magellan Corporation. 1999. GLONASS and the GPS+GLONASS Advantage. Retrieved October 25,
Magellan Corporation, 1999
Maling, D. H. 1992. Coordinate Systems and Map Projections. 2d ed. New York: Pergamon Press.
Maling, 1992
Mallat S.G., "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE
Mallat, 1989
Transactions on Pattern Analysis and Machine Intelligence, Volume 11. No 7., 1989.
Mendenhall, W., and R. L. Scheaffer. 1973. Mathematical Statistics with Applications. North Scituate,
Mendenhall and Scheaffer, 1973
Merenyi, E., J. V. Taranik, T. Monor, and W. Farrand. March 1996. Quantitative Comparison of Neural
Merenyi et al, 1996
Network and Conventional Classifiers for Hyperspectral Imagery. Paper presented at the Sixth
AVIRIS Conference. JPL Pub.
Minnaert, J. L., and G. Szeicz. 1961. The Reciprocity Principle in Lunar Photometry. Astrophysics
Minnaert and Szeicz, 1961
Journal 93:403-410.
Nagao, M., and T. Matsuyama. 1978. Edge Preserving Smoothing. Computer Graphics and Image
Nagao and Matsuyama, 1978
Processing 9:394-407.
National Aeronautics and Space Administration (NASA). 1995a. Mission Overview. Retrieved October
National Aeronautics and Space Administration, 1995a
———. 1995b. Thematic Mapper Simulators (TMS). Retrieved October 2, 1999, from
National Aeronautics and Space Administration, 1995b
http://geo.arc.nasa.gov/esdstaff/jskiles/top-down/OTTER/OTTER_docs/DAEDALUS.html
http://southport.jpl.nasa.gov/reports/iwgsar/3_SAR_Development.html
http://southport.jpl.nasa.gov/desc/SIRCdesc.html
http://geo.arc.nasa.gov/sge/landsat/landsat.html
———. 1999. An Overview of SeaWiFS and the SeaStar Spacecraft. Retrieved September 30, 1999,
National Aeronautics and Space Administration, 1999
from http://seawifs.gsfc.nasa.gov/SEAWIFS/SEASTAR/SPACECRAFT.html
http://landsat.gsfc.nasa.gov/project/L7_Specifications.html
National Imagery and Mapping Agency (NIMA). 1998. The National Imagery and Mapping Agency Fact
National Imagery and Mapping Agency, 1998
National Remote Sensing Agency, Department of Space, Government of India. 1998. Table 3.
National Remote Sensing Agency, 1998
Needham, B. H. 1986. Availability of Remotely Sensed Data and Information from the U.S. National
Needham, 1986
Oceanic and Atmospheric Administration’s Satellite Data Services Division. Chapter 9 in Satellite
Remote Sensing for Resources Development, edited by Karl-Heinz Szekielda. Gaithersburg,
Maryland: Graham & Trotman, Inc.
Oppenheim, A. V., and R. W. Schafer. 1975. Digital Signal Processing. Englewood Cliffs, New Jersey:
Oppenheim and Schafer, 1975
Prentice-Hall, Inc.
from http://www.orbimage.com/satellite/orbview3/orbview3.html
———. 2000. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved December 31, 2000, from
ORBIMAGE, 2000
http://www.orbimage.com/corp/orbimage_system/ov3/
Parent, P., and R. Church. 1987. Evolution of Geographic Information Systems as Decision Making
Parent and Church, 1987
Pearson, F. 1990. Map Projections: Theory and Applications. Boca Raton, Florida: CRC Press, Inc.
Pearson, 1990
Peli, T., and J. S. Lim. 1982. Adaptive Filtering for Image Enhancement. Optical Engineering 21 (1):
Peli and Lim, 1982
108-112.
Pratt, W. K. 1991. Digital Image Processing. 2d ed. New York: John Wiley & Sons, Inc.
Pratt, 1991
Prewitt, J. M. S. 1970. Object Enhancement and Extraction. In Picture Processing and Psychopictorics.
Prewitt, 1970
http://radarsat.space.gc.ca/
Rado, B. Q. 1992. An Historical Analysis of GIS. Mapping Tomorrow’s Resources. Logan, Utah: Utah
Rado, 1992
State University.
from http://www.remotesensing.org/geotiff/spec/geotiffhome.html
Robinson, A. H., and R. D. Sale. 1969. Elements of Cartography. 3d ed. New York: John Wiley & Sons,
Robinson and Sale, 1969
Inc.
Rockinger, O., and Fechner, T., “Pixel-Level Image Fusion”, in Signal Processing, Sensor Fusion and
Rockinger and Fechner, 1998
Sabins, F. F., Jr. 1987. Remote Sensing Principles and Interpretation. 2d ed. New York: W. H.
Sabins, 1987
Schenk, T., 1997. Towards automatic aerial triangulation. International Society for Photogrammetry
Schenk, 1997
and Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing 52 (3): 110-121.
———. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York:
Schowengerdt, 1983
Academic Press.
Schwartz, A. A., and J. M. Soha. 1977. Variable Threshold Zonal Filtering. Applied Optics 16 (7).
Schwartz and Soha, 1977
Shensa, M., “The discrete wavelet transform”, IEEE Trans Sig Proc, v. 40, n. 10, pp. 2464-2482,
Shensa, 1992
1992.
Shikin, E. V., and A. I. Plis. 1995. Handbook on Splines for the User. Boca Raton: CRC Press, LLC.
Shikin and Plis, 1995
Simonett, D. S., et al. 1983. The Development and Principles of Remote Sensing. Chapter 1 in Manual
Simonett et al, 1983
Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts: Addison-
Slater, 1980
Smith, J. A., T. L. Lin, and K. J. Ranson. 1980. The Lambertian Assumption and Landsat Data.
Smith et al, 1980
Snyder, J. P. 1987. Map Projections--A Working Manual. Geological Survey Professional Paper 1395.
Snyder, 1987
Snyder, J. P., and P. M. Voxland. 1989. An Album of Map Projections. U.S. Geological Survey
Snyder and Voxland, 1989
Professional Paper 1453. Washington, DC: United States Government Printing Office.
Space Imaging. 1998. IRS-ID Satellite Imagery Available for Sale Worldwide. Retrieved October 1,
Space Imaging, 1998
http://www.spaceimage.com/aboutus/satellites/IKONOS/ikonos.html
———. 1999b. IRS (Indian Remote Sensing Satellite). Retrieved September 17, 1999, from
Space Imaging, 1999b
http://www.spaceimage.com/aboutus/satellites/IRS/IRS.html
http://www.spaceimage.com/aboutus/satellites/RADARSAT/radarsat.htm
SPOT Image. 1998. SPOT 4—In Service! Retrieved September 30, 1999 from
SPOT Image, 1998
http://www.spot.com/spot/home/news/press/Commish.htm
———. 1999. SPOT System Technical Data. Retrieved September 30, 1999, from
SPOT Image, 1999
http://www.spot.com/spot/home/system/introsat/seltec/seltec.htm
Srinivasan, R., M. Cannon, and J. White. 1988. Landsat Destriping Using Power Spectral Filtering.
Srinivasan et al, 1988
Star, J., and J. Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New
Star and Estes, 1990
Jersey: Prentice-Hall.
Steinitz, C., P. Parker, and L. E. Jordan, III. 1976. Hand Drawn Overlays: Their History and
Steinitz et al, 1976
Stojic’ , M., J. Chandler, P. Ashmore, and J. Luce. 1998. The assessment of sediment transport rates
Stojic et al, 1998
Strang, Gilbert and Nguyen, Truong, Wavelets and Filter Banks, Wellesley-Cambridge Press, 1997.
Strang et al, 1997
Suits, G. H. 1983. The Nature of Electromagnetic Radiation. Chapter 2 in Manual of Remote Sensing.
Suits, 1983
Swain, P. H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information
Swain, 1973
Note 111572). West Lafayette, Indiana: The Laboratory for Applications of Remote Sensing,
Purdue University.
Swain, P. H., and S. M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw
Swain and Davis, 1978
Tang, L., J. Braun, and R. Debitsch. 1997. Automatic Aerotriangulation - Concept, Realization and
Tang et al, 1997
Tou, J. T., and R. C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts:
Tou and Gonzalez, 1974
Tsingas, V. 1995. Operational use and empirical results of automatic aerial triangulation. Paper
Tsingas, 1995
presented at the 45th Photogrammetric Week, Wichmann Verlag, Karlsruhe, September 1995,
207-214.
Tucker, C. J. 1979. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.
Tucker, 1979
United States Geological Survey (USGS). 1999a. About the EROS Data Center. Retrieved October 25,
USGS, 1999a
http://mapping.usgs.gov/digitalbackyard/doqs.html
http://mcmcweb.er.usgs.gov/sdts/whatsdts.html
———. n.d. National Landsat Archive Production System (NLAPS). Retrieved September 30, 1999,
United States Geological Survey, n.d.
from http://edc.usgs.gov/glis/hyper/guide/nlaps.html
Vosselman, G., and N. Haala. 1992. Erkennung topographischer Paßpunkte durch relationale
Vosselman and Haala, 1992
Wang, Y. 1988a. A combined adjustment program system for close range photogrammetry. Journal
Wang, Y., 1988a
———. 1998b. Principles and applications of structural image matching. International Society for
Wang, Y., 1998b
Photogrammetry and Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing
53:154-165.
———. 1995. A New Method for Automatic Relative Orientation of Digital Images. Zeitschrift fuer
Wang, Y., 1995
Wang, Z. 1990. Principles of Photogrammetry (with Remote Sensing). Beijing, China: Press of Wuhan
Wang, Z., 1990
Technical University of Surveying and Mapping, and Publishing House of Surveying and
Mapping.
Watson, D. 1992. Contouring: A Guide to the Analysis and Display of Spatial Data. Tarrytown, New
Watson, 1992
Welch, R. 1990. 3-D Terrain Modeling for GIS Applications. GIS World 3 (5): 26-30.
Welch, 1990
Welch, R., and W. Ehlers. 1987. Merging Multiresolution SPOT HRV and Landsat TM Data.
Welch and Ehlers, 1987
Yang, X. 1997. Georeferencing CAMS Data: Polynomial Rectification and Beyond. Ph. D. dissertation,
Yang, 1997
Yang, X., and D. Williams. 1997. The Effect of DEM Data Uncertainty on the Quality of Orthoimage
Yang and Williams, 1997
Yocky, D. A., “Image merging and data fusion by means of the two-dimensional wavelet transform”,
Yocky, 1995
Geologic Materials in the Dolly Varden Mountains. Paper presented at the Second Airborne
Visible Infrared Imaging Sepctrometer (AVIRIS) Conference, Pasadena, California, June 1990,
Jet Propulsion Laboratories (JPL) Publication 90-54:162-66.
Zhang, Y., “A New Merging Method and its Spectral and Spatial Effects”, Int. J. Rem. Sens., vol. 20,
Zhang, 1999
Related Reading
Battrick, B., and L. Proud, eds. 1992. ERS-1 User Handbook. Noordwijk, The Netherlands: European
Space Agency Publications Division, c/o ESTEC.
Billingsley, F. C., et al. 1983. “Data Processing and Reprocessing.” Chapter 17 in Manual of Remote
Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of
Photogrammetry.
Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and
Implementation. New York: John Wiley & Sons, Inc.
Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor
Specifications: IRS-P3. Retrieved December 28, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/irsp3.html
Dangermond, J. 1989. A Review of Digital Data Commonly Available and Some of the Practical
Problems of Entering Them into a GIS. Fundamentals of Geographic Information Systems: A
Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and
Remote Sensing and American Congress on Surveying and Mapping.
Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product Specifications
for ARC Digitized Raster Graphics (ADRG). St. Louis, Missouri: Defense Mapping Agency
Aerospace Center.
Duda, R. O., and P. E. Hart. 1973. Pattern Classification and Scene Analysis. New York: John Wiley
& Sons, Inc.
Elachi, C. 1992. “Radar Images of the Earth from Space.” Exploring Space.
Elachi, C. 1988. Spaceborne Radar Remote Sensing: Applications and Techniques. New York:
Institute of Electrical and Electronics Engineers, Inc. (IEEE) Press.
Elassal, A. A., and V. M. Caruso. 1983. USGS Digital Cartographic Data Standards: Digital Elevation
Models. Circular 895-B. Reston, Virginia: U.S. Geological Survey.
Federal Geographic Data Committee (FGDC). 1997. Content Standards for Digital Orthoimagery.
Federal Geographic Data Committee, Washington, DC.
Geological Remote Sensing Group. 1992. Geological Remote Sensing Group Newsletter 5.
Wallingford, United Kingdom: Institute of Hydrology.
Gonzalez, R. C., and R. E. R. Woods. 1992. Digital Image Processing. Reading, Massachusetts:
Addison-Wesley Publishing Company.
Guptill, S. C., ed. 1988. A Process for Evaluating Geographic Information Systems. U.S. Geological
Survey Open-File Report 88-105.
Jacobsen, K. 1994. Combined Block Adjustment with Precise Differential GPS Data. International
Archives of Photogrammetry and Remote Sensing 30 (B3): 422.
Jordan, L. E., III, B. Q. Rado, and S. L. Sperry. 1992. Meeting the Needs of the GIS and Image
Processing Industry in the 1990s. Photogrammetric Engineering & Remote Sensing 58 (8):
1249-1251.
Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.
Kennedy, M. 1996. The Global Positioning System and GIS: An Introduction. Chelsea, Michigan: Ann
Arbor Press, Inc.
Knuth, D. E. 1987. Digital Halftones by Dot Diffusion. Association for Computing Machinery
Transactions on Graphics 6:245-273.
Lue, Y., and K. Novak. 1991. Recursive Grid - Dynamic Window Matching for Automatic DEM
Generation. 1991 ACSM-ASPRS Fall Convention Technical Papers.
Menon, S., P. Gao, and C. Zhan. 1991. GRID: A Data Model and Functional Map Algebra for Raster
Geo-processing. Paper presented at Geographic Information Systems/Land Information
Systems (GIS/LIS) ‘91, Atlanta, Georgia, October 1991, 2:551-561.
Moffit, F. H., and E. M. Mikhail. 1980. Photogrammetry. 3d ed. New York: Harper & Row Publishers.
Nichols, D., J. Frew et al. 1983. Digital Hardware. Chapter 20 in Manual of Remote Sensing. Ed. R.
N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Sader, S. A., and J. C. Winne. 1992. RGB-NDVI Colour Composites For Visualizing Forest Change
Dynamics. International Journal of Remote Sensing 13 (16): 3055-3067.
Short, N. M., Jr. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing.
Washington, DC: National Aeronautics and Space Administration.
Space Imaging. 1999. LANDSAT TM. Retrieved September 17, 1999, from
http://www.spaceimage.com/aboutus/satellites/Landsat/landsat.html
United States Geological Survey (USGS). 1999. Landsat Thematic Mapper Data. Retrieved September
30, 1999, from http://edc.usgs.gov/glis/hyper/guide/landsat_tm
Yang, X., R. Robinson, H. Lin, and A. Zusmanis. 1993. Digital Ortho Corrections Using Pre-
transformation Distortion Adjustment. 1993 ASPRS Technical Papers 3:425-434.