SVG 203 Intro Remote Sensing-General 2
c. To create the awareness that the practice of remote sensing has been greatly simplified
by useful and affordable commercial software, which has advanced considerably in
recent years; and that satellite and airborne platforms provide local and regional
perspective views of the Earth's surface. These views come in a variety of resolutions and
are highly accurate depictions of surface objects. Satellite images and image processing
allow us to better understand and evaluate a variety of Earth processes occurring on the
surface and in the hydrosphere, biosphere, and atmosphere.
b. Included in this work is a background of the principles of remote sensing, with a focus on the
physics of electromagnetic waves and the interaction of electromagnetic waves with
objects. Aerial photography and history of remote sensing are briefly discussed.
d. The fundamentals of image processing are presented along with a summary of map projection
and information extraction. Helpful examples and tips are presented to clarify concepts and to
enable the efficient use of image processing. Examples focus on the use of images from the
Landsat series of satellite sensors, as this series has the longest and most continuous record of
Earth surface multispectral data.
e. Case Study Examples of remote sensing applications are presented. These include land use,
forestry, geology, hydrology, geography, meteorology, oceanography, and archeology.
Source Materials
This note is prepared with materials from the following sources:
a. US Army Corps of Engineers Document EM 1110-2-2907, 1 October 2003.
“Electromagnetic Radiation is Simply Mysterious”
Remote sensing is broadly defined as the science and the practice of collecting
information about objects, areas or phenomena from a distance without being in
physical contact with them. In the present context, the definition of remote sensing
is restricted to mean the process of acquiring information about any object without
physically contacting it in any way regardless of whether the observer is
immediately adjacent to the object or millions of miles away.
a. Remote sensing describes the collection of data about an object, area, or phenomenon
from a distance with a device that is not in contact with the object. More commonly, the
term remote sensing refers to imagery and image information derived by both airborne and
satellite platforms that house sensor equipment. The data collected by the sensors are in the
form of electromagnetic energy (EM). Electromagnetic energy is the energy emitted,
absorbed, or reflected by objects. Electromagnetic energy is synonymous with several other terms,
including electromagnetic radiation, radiant energy, and radiation.
b. Sensors carried by platforms are engineered to detect variations of emitted and reflected
electromagnetic radiation. A simple and familiar example of a platform carrying a sensor is
a camera mounted on the underside of an airplane. The airplane may be a high or low altitude
platform while the camera functions as a sensor collecting data from the ground. The data in
this example are reflected electromagnetic energy commonly known as visible light. Likewise,
spaceborne platforms known as satellites, such as Landsat Thematic Mapper (Landsat TM) or
SPOT (Satellite Pour l'Observation de la Terre), carry a variety of sensors. Similar to the
camera, these sensors collect emitted and reflected electromagnetic energy, and are capable of
recording radiation from the visible and other portions of the spectrum. The type of platform
and sensor employed will control the image area and the detail viewed in the image, and
additionally they record characteristics of objects not seen by the human eye.
c. For this manual, remote sensing is defined as the acquisition, processing, and analysis of surface
and near surface data collected by airborne and satellite systems.
Remote sensing employs electromagnetic energy and to a great extent relies on the
interaction of electromagnetic energy with matter (objects). It refers to the sensing
of EM radiation that is reflected, scattered or emitted from the object.
[Diagram: an EM energy source illuminates the target object; the reflected energy reaches the sensor, which records the image I(x, y) = i(x, y) · r(x, y), where i(x, y) is the incident illumination and r(x, y) is the object's reflectance.]
Electromagnetic energy has the characteristic that every object interacts with it to
produce radiation with a unique pattern of brightness values, which can be recorded.
In other words, no two objects interact with EM energy in exactly the same way unless
they are the same object.
This property of EM radiation is exploited to infer the nature of objects by
collecting and analyzing the energy they reflect or emit; intuitively, the recorded data
can be viewed as a large collection of tiny dots or squares.
Spatial patterns and brightness variations evident in the data are then interpreted
in terms of the geometric and descriptive characteristics of the material forming the
surface of the object.
1. Energy source
2. Propagation of energy through the atmosphere
3. Energy interaction with the earth's surface features
4. Airborne/spaceborne sensors receiving the reflected and emitted energy
5. Transmission of data to the earth station and generation of data products
6. Multiple data users
1. The energy source: A uniform energy source provides energy over all wavelengths.
A passive RS system relies on the sun, the strongest source of EM energy, and
measures energy that is either reflected or emitted from the earth's surface features.
Active RS systems, however, use their own source of EM energy.
The source of electromagnetic radiation may be natural, like the sun's
radiated light reflected by the earth or the Earth's emitted heat,
or it may be man-made, like microwave radar. Any object above
absolute zero (0 K) generates EM energy. The level of such generated
energy depends on the body's temperature.
2. Propagation of energy through the atmosphere: The EM energy from the source
passes through the atmosphere on its way to the earth's surface. After reflection
from the earth's surface, it passes through the atmosphere again on its way to the sensor.
The atmosphere modifies the intensity and spectral distribution of the energy to
some extent, and this modification varies with wavelength.
Atmospheric interaction: electromagnetic energy passing through the
atmosphere is distorted and scattered.
5. Transmission of data to the earth station and data product generation: The data
from the sensing system are transmitted to the ground-based earth station along
with the telemetry data. The real-time (instantaneous) data handling system
consists of high-density data tapes for recording and visual devices (such as
television) for quick-look displays. The data products are mainly classified into two
categories:
(i) Pictorial or photographic products (analogue)
(ii) Digital products
Data storage and processing: the collected data, often in digital form, are
organized and stored for subsequent use for various purposes. The type
of processing performed depends on the intended application.
6. Multiple data users: The multiple data users are those who have in-depth knowledge
both of their respective disciplines and of remote sensing data and analysis
techniques. The same set of data becomes various forms of information for
different users, according to their understanding of their field and their
interpretation skills.
It is a form of energy that moves with the velocity of light (3 × 10⁸ m/s) in a
harmonic pattern consisting of sinusoidal waves, equally and repetitively spaced in
time. It has two fields: (i) an electric field and (ii) a magnetic field, orthogonal
to each other. Fig. 4.2 shows the electromagnetic wave pattern, in which
the electric component is vertical and the magnetic component is horizontal.
Figure 2-3. Propagation of the electric and magnetic fields. Waves vibrate
perpendicular to the direction of motion; the electric and magnetic fields are at right angles
to each other. These fields travel at the speed of light.
c. Temperature. The origin of all energy (electromagnetic energy or radiant energy) begins with
the vibration of subatomic particles called photons (Figure 2-2). All objects at a temperature
above absolute zero vibrate and therefore emit some form of electromagnetic energy.
Temperature is a measurement of this vibrational energy emitted from an object.
Humans are sensitive to the thermal aspects of temperature; the higher the temperature,
the greater the sensation of heat. A "hot" object emits relatively large amounts of energy.
Conversely, a “cold” object emits relatively little energy.
d. Absolute Temperature Scale. The lowest possible temperature has been shown to be
−273.2°C and is the basis for the absolute temperature scale. The absolute temperature
scale, known as Kelvin, is adjusted by assigning −273.2°C to 0 K ("zero Kelvin"; no
degree sign). The Kelvin scale has the same temperature intervals as the Celsius scale,
so conversion between the two scales is simply a matter of adding or subtracting 273
(Table 2-1). Because all objects with temperatures above zero Kelvin
emit electromagnetic radiation, it is possible to collect, measure, and distinguish energy
emitted from adjacent objects.
Table 2-1
Different scales used to measure object temperature. Conversion formulas are listed below.

Object          Fahrenheit (°F)   Celsius (°C)   Kelvin (K)
Absolute zero   −459.7            −273.2         0.0
Frozen water    32.0              0.0            273.2
Boiling water   212.0             100.0          373.2
Sun             9980.6            5527.0         5800.0
Earth           80.6              27.0           300.0
Human body      98.6              37.0           310.0

Conversion Formulas:
Celsius to Fahrenheit: °F = (1.8 × °C) + 32
Fahrenheit to Celsius: °C = (°F − 32)/1.8
Celsius to Kelvin: K = °C + 273
Fahrenheit to Kelvin: K = [(°F − 32)/1.8] + 273
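The conversions in Table 2-1 are easy to script. The following minimal Python sketch (the function names are ours, not part of any remote sensing package) implements the formulas above and reproduces a few rows of the table.

```python
def celsius_to_fahrenheit(c):
    """Celsius to Fahrenheit: F = 1.8*C + 32."""
    return 1.8 * c + 32

def fahrenheit_to_celsius(f):
    """Fahrenheit to Celsius: C = (F - 32)/1.8."""
    return (f - 32) / 1.8

def celsius_to_kelvin(c):
    """Celsius to kelvin (Table 2-1 places absolute zero at -273.2 C)."""
    return c + 273.2

def fahrenheit_to_kelvin(f):
    """Fahrenheit to kelvin via Celsius."""
    return celsius_to_kelvin(fahrenheit_to_celsius(f))

# Reproduce a few rows of Table 2-1
print(celsius_to_kelvin(100.0))       # boiling water: 373.2 K
print(fahrenheit_to_kelvin(98.6))     # human body: ~310 K
print(celsius_to_fahrenheit(5527.0))  # Sun: 9980.6 F
```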
The Sun is the strongest source of radiant energy and can be approximated by a
blackbody source at a temperature of 5750–6000 K. Although the Sun produces EM radiation
across a range of wavelengths, the amount of energy it produces is not evenly
distributed along this range. Approximately 43% is radiated within the visible
wavelengths (0.4 to 0.7 µm), and 48% of the energy is transmitted at wavelengths
greater than 0.7 µm, mainly within the infrared range.
If the energy received at the edge of the earth's atmosphere were distributed evenly
over the earth, it would give an average incident flux density of 1367 W/m². This is
known as the solar constant. Thirty-five percent of the incident radiant flux is reflected
back by the earth; this includes the energy reflected by clouds and the atmosphere.
Seventeen percent is absorbed by the atmosphere, while 48% is absorbed by
the earth's surface materials (Mather, 1987).
The totality or array of all electromagnetic radiation that moves with the
velocity of light, characterized by wavelength or frequency, is called the
electromagnetic spectrum, usually covering wavelengths between 10⁻¹⁰ and 10¹⁰ m.
The optical wavelengths lie between 0.3 µm and 15.0 µm and are the
wavelengths most used in remote sensing. Energy at the optical
wavelengths can be reflected and refracted with solid materials like mirrors
and lenses.
Characteristics as Wave
The wave properties are very useful in explaining many of the results of its
interaction with matter. Waves can be described in terms of their:
4. Period (T) of a waveform is the time, in seconds needed for one full
wave to pass a fixed point.
c = fλ
E = hf
where: E = energy
h = a constant known as Planck's constant
f = frequency.
8. The amplitude is the maximum value of the electric (or magnetic) field
and is a measure of the amount of energy that is transported by the wave.
Wave theory concept explains how EM energy propagates in the form of a wave.
However, this energy can only be detected when it interacts with matter. This
interaction suggests that the energy consists of many discrete units called photons
whose energy (Q) is given by:
Q = h·f = h·c/λ
where h = Planck's constant = 6.6252 × 10⁻³⁴ J·s
The above equation shows that the shorter the wavelength of the radiation, the
higher its energy content.
The total energy radiated per unit area by a body increases rapidly with its
temperature T (the Stefan–Boltzmann law), and the wavelength at which the emitted
energy peaks is given by Wien's displacement law:
λmax = 2898/T µm (T in kelvin)
Since λ = c/f:
when f → 0, λ → ∞
when f → ∞, λ → 0
hence the electromagnetic spectrum extends over wavelengths from 0 to ∞.
[Figure 9-2 diagram: a wavelength scale running from 0.2 µm to 1 m (with frequencies near 10 GHz to 1 GHz marked at the microwave end), spanning the UV, visible, near-infrared, middle-IR and thermal infrared regions and the microwave radar bands Ka, Ku, X, C, S, L and P.]
Figure 9-2 The wavelength and frequency of commonly used RADAR bands. RADAR antennas
transmit and receive very long wavelength energy measured in centimeters, unlike the
relatively short wavelength visible, near-infrared, middle-infrared and thermal infrared
regions measured in micrometers.
Earth-resource image analysts seem to grasp the concept of wavelength more readily than frequency, so
the convention is to describe a radar in terms of its wavelength. Conversely, engineers generally prefer to
work in units of frequency because as radiation passes through materials of different densities, frequency
remains constant while velocity and wavelength change. Since wavelength and frequency are
inversely related through the speed of light, c, it really does not matter which unit of measurement is used as
long as one remembers the following relationship:
λ = c/f
where λ = wavelength, which is the distance between two adjacent peaks. The
wavelengths sensed by many remote sensing systems are extremely
small and are measured in micrometers (µm, 10⁻⁶ m)
or nanometers (nm, 10⁻⁹ m);
f = frequency, which is defined as the number of peaks that pass a given
point in one second and is measured in hertz (Hz).
Figure 2-5. Long wavelengths maintain a low frequency and lower energy state
relative to the short wavelengths.
(2) Frequency. The rate at which a wave passes a fixed point is known as the wave frequency and
is denoted as ν ("nu"). The unit of measurement for frequency is the hertz (Hz), the
number of wave cycles per second (Figures 2-5 and 2-6).
(3) Speed of electromagnetic radiation (or speed of light). Wavelength and frequency are
inversely related to one another, in other words as one increases the other de-creases. Their
relationship is expressed as:
c = λν (2-1)
where c = the speed of light (3 × 10⁸ m/s), λ = wavelength (m), and ν = frequency (Hz).
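As a quick check of Equation 2-1, the short sketch below (helper names are ours, chosen only for illustration) converts between wavelength and frequency for a visible wavelength and a radar frequency.

```python
C = 3.0e8  # speed of light, m/s

def wavelength_to_frequency(wavelength_m):
    """Frequency in Hz from wavelength in metres, using c = lambda * nu."""
    return C / wavelength_m

def frequency_to_wavelength(frequency_hz):
    """Wavelength in metres from frequency in Hz."""
    return C / frequency_hz

print(wavelength_to_frequency(0.55e-6))  # green light (~0.55 um): ~5.5e14 Hz
print(frequency_to_wavelength(10e9))     # 10 GHz (X-band radar): 0.03 m = 3 cm
```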
Figure 2-7. Electromagnetic spectrum displayed in meter and Hz units. Short wavelengths
are shown on the left, long wavelength on the right. The visible spectrum shown in red.
(1) Though the spectrum is divided up for convenience, it is truly a continuum of increasing
wavelengths with no inherent differences among the radiations of varying wavelengths.
For instance, the scale in Figure 2-8 shows the color blue to be approximately in the range
of 435 to 520 nm (on other scales it is divided out at 446 to 520 nm). As the wavelengths
proceed in the direction of green they become increasingly less blue and more green; the
boundary is somewhat arbitrarily fixed at 520 nm to indicate this gradual change from blue
to green.
(2) Be aware of differences in the manner in which spectrum scales are drawn. Some authors
place the long wavelengths to the right (such as those shown in this manual), while others
place the longer wavelengths to the left. The scale can also be drawn on a vertical axis
(Figure 2-9). Units can be depicted in meters, nanometers, micrometers, or a combination
of these units. For clarity some authors add color in the visible spectrum to correspond to
the appropriate wavelength.
(1) Ultraviolet. The ultraviolet (UV) portion of the spectrum contains radiation just
beyond the violet portion of the visible wavelengths. Radiation in this range has
short wavelengths (0.300 to 0.446 µm) and high frequency. UV wavelengths are
used in geologic and atmospheric science applications. Materials, such as rocks and
minerals, fluoresce or emit visible light in the presence of UV radiation. The
fluorescence associated with natural hydrocarbon seeps is useful in monitoring oil
fields at sea. In the upper atmosphere, ultraviolet light is greatly absorbed by ozone
(O3) and becomes an important tool in tracking changes in the ozone layer.
(2) Visible Light. The radiation detected by human eyes is in the spectrum range aptly
named the visible spectrum. Visible radiation or light is the only portion of the
spectrum that can be perceived as colors. These wavelengths span a very short
portion of the spectrum, ranging from approximately 0.4 to 0.7 µm. Because of this
short range, the visible portion of the spectrum is plotted on a linear scale (Figure
2-8). This linear scale allows the individual colors in the visible spectrum to be
discretely depicted. The shortest visible wavelength is violet and the longest is red.
(a) The visible colors and their corresponding wavelengths are listed below (Table
2-2) in micrometers and shown in nanometers in Figure 2.8.
Color Wavelength
Violet 0.4–0.446 µm
Blue 0.446–0.500 µm
Green 0.500–0.578 µm
Yellow 0.578–0.592 µm
Orange 0.592–0.620 µm
Red 0.620–0.7 µm
(b) Visible light detected by sensors depends greatly on the surface reflection characteristics
of objects. Urban feature identification, soil/vegetation discrimination, ocean productivity,
cloud cover, precipitation, snow, and ice cover are only a few examples of current
applications that use the visible range of the electromagnetic spectrum.
(3) Infrared. The portion of the spectrum adjacent to the visible range is the infra-red
(IR) region. The infrared region, plotted logarithmically, ranges from
approximately 0.7 to 100 µm, which is more than 100 times as wide as the visible
portion. The infrared region is divided into two categories, the reflected IR and the
emitted or thermal IR; this division is based on their radiation properties.
(a) Reflected Infrared. The reflected IR spans the 0.7- to 3.0-µm wavelengths.
Reflected IR shares radiation properties exhibited by the visible portion and is thus
used for similar purposes. Reflected IR is valuable for delineating healthy versus
unhealthy or fallow vegetation, and for distinguishing among vegetation, soil, and
rocks.
(b) Thermal Infrared. The thermal IR region represents the radiation that is emitted
from the Earth’s surface in the form of thermal energy. Thermal IR spans the 3.0-
to 100-µm range. These wavelengths are useful for monitoring temperature
variations in land, water, and ice.
(4) Microwave. Beyond the infrared is the microwave region, ranging on the spectrum
from 1 mm to 1 m (bands are listed in Table 2-3). Microwave radiation is the longest
wavelength used for remote sensing. This region includes a broad range of
wavelengths; on the short wavelength end of the range, microwaves exhibit
properties similar to the thermal IR radiation, whereas the longer wavelengths
maintain properties similar to those used for radio broadcasts.
(a) Microwave remote sensing is used in the studies of meteorology, hydrology, oceans,
geology, agriculture, forestry, and ice, and for topographic mapping. Because microwave
emission is influenced by moisture content, it is useful for mapping soil moisture, sea ice,
currents, and surface winds. Other applications include snow wetness analysis, profile
measurements of atmospheric ozone and water vapor, and detection of oil slicks.
(1) Quantifying Energy. The energy released from a radiating body in the form of a vibrating
photon traveling at the speed of light can be quantified by relating the energy’s wavelength
with its frequency. The following equation shows the relationship between wavelength,
frequency, and amount of energy in units of Joules:
Q = hν (2-2)
Q = h c/λ
where
Q = energy (J), ν = frequency (Hz), and λ = wavelength (m)
The equation for energy indicates that, for long wavelengths, the amount of energy will be low,
and for short wavelengths, the amount of energy will be high. For instance, blue light is on the
short wavelength end of the visible spectrum (0.446 to 0.500 µm) while red is on the longer end
of this range (0.620 to 0.700 µm). Blue light is a higher energy radiation than red light. The
following example illustrates this point:
Example: Using Q = hc/λ, which has more energy, blue or red light?
Solution: Solve for Q_blue (energy of blue light) and Q_red (energy of red light) and
compare.
h = 6.6 × 10⁻³⁴ J s
c = 3.00 × 10⁸ m/s
* Don't forget to convert the wavelength from µm to meters (not shown here)
Blue (taking λ ≈ 0.425 µm): Q_blue = hc/λ ≈ 4.66 × 10⁻¹⁹ J
Red (taking λ ≈ 0.66 µm): Q_red = hc/λ ≈ 3.00 × 10⁻¹⁹ J
Answer: Because 4.66 × 10⁻¹⁹ J is greater than 3.00 × 10⁻¹⁹ J, blue has more energy.
This explains why the blue portion of a fire is hotter than the red portions.
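The same comparison can be scripted. This minimal sketch assumes representative wavelengths of 0.425 µm for blue and 0.66 µm for red (any values inside the ranges of Table 2-2 would do).

```python
H = 6.6e-34  # Planck's constant, J s
C = 3.0e8    # speed of light, m/s

def photon_energy(wavelength_um):
    """Photon energy Q = h*c/lambda in joules, wavelength given in micrometres."""
    wavelength_m = wavelength_um * 1e-6  # convert um to m
    return H * C / wavelength_m

q_blue = photon_energy(0.425)  # ~4.66e-19 J
q_red = photon_energy(0.66)    # ~3.00e-19 J
print(q_blue > q_red)          # True: the shorter wavelength carries more energy
```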
Example: Using λmax = 2898 µm·K / T, what is the wavelength of maximum emission from a
human?
λmax = 2898 µm·K / 310 K
λmax = 9.3 µm
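A sketch applying Wien's displacement law to a few of the bodies in Table 2-1 (the function name is ours):

```python
WIEN_CONSTANT = 2898.0  # um K

def peak_wavelength_um(temperature_k):
    """Wavelength of maximum emission (micrometres) from Wien's displacement law."""
    return WIEN_CONSTANT / temperature_k

print(peak_wavelength_um(310.0))   # human body: ~9.3 um (thermal IR)
print(peak_wavelength_um(5800.0))  # Sun: ~0.5 um (visible)
print(peak_wavelength_um(300.0))   # Earth: ~9.7 um (the radiant energy peak)
```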
(1) As radiation passes through the atmosphere, it is greatly affected by the atmospheric
particles it encounters (Figure 2-12). This effect is known as atmospheric scattering and
atmospheric absorption and leads to changes in intensity, direction, and wavelength.
The change the radiation experiences is a function of the atmospheric conditions, path
length, composition of the particle, and the wavelength measurement relative to the
diameter of the particle.
Figure 2-12. Various radiation obstacles and scatter paths. Modified from
two sources, http://orbit-net.nesdis.noaa.gov/arad/fpdt/tutorial/12-atmra.gif
and http://rst.gsfc.nasa.gov/Intro/Part2_4.html.
(2) Rayleigh scattering, Mie scattering, and nonselective scattering are three types of scatter
that occur as radiation passes through the atmosphere (Figure 2-12). These types of scatter
lead to the redirection and diffusion of the wavelength in addition to making the path of
the radiation longer.
Rayleigh scatter ∝ 1/λ⁴
where λ is the wavelength (m). This means that short wavelengths undergo a large amount
of scatter, while long wavelengths experience little scatter. Shorter-wavelength radiation
reaching the sensor will therefore appear more diffuse.
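Because Rayleigh scatter varies as 1/λ⁴, the relative amount of scatter for two wavelengths is simply the fourth power of their ratio. A minimal sketch (the function name is ours):

```python
def rayleigh_relative_scatter(wavelength_um, reference_um=0.7):
    """Rayleigh scatter of a wavelength relative to a reference wavelength (1/lambda^4 law)."""
    return (reference_um / wavelength_um) ** 4

# blue (~0.45 um) versus red (~0.7 um)
print(rayleigh_relative_scatter(0.45))  # ~5.9: blue light is scattered about six times more than red
```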
c. Why is the sky blue? Rayleigh scattering accounts for the Earth's blue sky. We see
predominantly blue because the wavelengths in the blue region (0.446–0.500 µm)
are scattered more than the other wavelengths in the visible range. At dusk, when the sun is
low on the horizon, the path length through the atmosphere is longer and the sky appears more
red and orange: the longer path length leads to an increase in Rayleigh scatter and results
in the depletion of the blue wavelengths, so only the longer red and orange
wavelengths reach our eyes, hence beautiful orange and red sunsets. In
contrast, the moon has no atmosphere and consequently no Rayleigh scatter.
This explains why the moon's sky appears black (shadows on the moon are darker
than shadows on the Earth for the same reason; see Figure 2-13).
Figure 2-13. Moon rising in the Earth’s horizon (left). The Earth’s atmosphere appears blue due to Rayleigh Scatter.
Photo taken from the moon’s surface shows the Earth rising (right). The Moon has no atmosphere, thus no atmospheric
scatter. Its sky appears black. Images taken from: http://antwrp.gsfc.nasa.gov/apod/ap001028.html, and
http://antwrp.gsfc.nasa.gov/apod/ap001231.html.
d. Mie Scattering. Mie scattering occurs when an atmospheric particle diameter is equal to
the radiation’s wavelength (φ = λ). This leads to a greater amount of scatter in the long
wavelength region of the spectrum. Mie scattering tends to occur in the presence of water
vapor and dust and will dominate in overcast or humid conditions. This type of scattering
explains the reddish hues of the sky following a forest fire or volcanic eruption.
Figure 2-15. Atmospheric windows, with wavelength on the x-axis and percent atmospheric transmission
on the y-axis. High transmission corresponds to an "atmospheric window," which allows
radiation to penetrate the Earth’s atmosphere. The chemical formula is given for the molecule
responsible for sunlight absorption at particular wavelengths across the spectrum. Modified from
http://earthobservatory.nasa.gov:81/Library/RemoteSensing/remote_04.html.
(1) Ozone. Ozone (O3) absorbs harmful ultraviolet radiation from the sun. Without this
protective layer in the atmosphere, our skin would burn when exposed to sunlight.
(2) Carbon Dioxide. Carbon dioxide (CO2) is called a greenhouse gas because it greatly
absorbs thermal infrared radiation. Carbon dioxide thus serves to trap heat in the
atmosphere from radiation emitted from both the Sun and the Earth.
(3) Water vapor. Water vapor (H2O) in the atmosphere absorbs incoming long-wave infrared
and shortwave microwave radiation (22 µm to 1 m). Water vapor in the lower atmosphere
varies annually from location to location. For example, the air mass above a desert would
have very little water vapor to absorb energy, while the tropics would have high
concentrations of water vapor (i.e., high humidity).
(4) Summary. Because these molecules absorb radiation in very specific regions of the
spectrum, the engineering and design of spectral sensors are developed to collect
wavelength data not influenced by atmospheric absorption. The areas of the spectrum that
are not severely influenced by atmospheric absorption are the most useful regions, and are
called atmospheric windows.
j. Geometric Effects. Random and non-random error occurs during the acquisition of
radiation data. Error can be attributed to such causes as sun angle, angle of sensor,
elevation of sensor, skew distortion from the Earth’s rotation, and path length.
Malfunctions in the sensor as it collects data and the motion of the platform are additional
sources of error. As the sensor collects data, it can develop sweep irregularities that result
in hundreds of meters of error. The pitch, roll, and yaw of platforms can create hundreds
to thousands of meters of error, depending on the altitude and resolution of the sensor.
Geometric corrections are typically applied by re-sampling an image, a process that shifts
and recalculates the data. The most commonly used re-sampling techniques include the use
of ground control points (see Chapter 5), applying a mathematical model, or re-sampling
by nearest neighbor or cubic convolution.
l. Atmospheric Correction Techniques. Data can be corrected by re-sampling with the use of
image processing software such as ERDAS Imagine or ENVI, or by the use of specialty
software. In many of the image processing software packages, atmospheric correction
models are included as a component of an import process. Also, data may have some
corrections applied by the vendor. When acquiring data, it is important to be aware of any
corrections that may have been applied to the data (see Chapter 4). Correction models can
be mathematically or empirically derived.
Scattering
It is the unpredictable diffusion of radiation by molecules of the gases, dust and smoke
in the atmosphere. Scattering reduces the image contrast and changes the
spectral signatures of ground objects. Scattering is broadly classified as (i)
selective and (ii) non-selective, depending upon the size of the particle with which the
electromagnetic radiation interacts. Selective scatter is further classified as (a)
Rayleigh scatter and (b) Mie scatter.
Rayleigh scatter: In the upper layers of the atmosphere, the diameter of the gas
molecules or particles is much less than the wavelength of the radiation. The resulting
haze appears on remotely sensed imagery as a bluish-grey cast, reducing the contrast.
The shorter the wavelength, the greater the scattering.
Mie scatter: In the lower layers of the atmosphere, where the diameter of water
vapour or dust particles approximately equals the wavelength of the radiation, Mie
scatter occurs.
Absorption
Atmospheric windows
The amount of scattering or absorption depends upon (i) the wavelength and (ii) the
composition of the atmosphere. In order to minimize the effect of the atmosphere, it
is essential to choose spectral regions with high transmittance.
Typical atmospheric windows in the regions of EM radiation are shown in Fig. 4.4.
(1) Absorption. Absorption occurs when radiation penetrates a surface and is incorporated into
the molecular structure of the object. All objects absorb incoming incident radiation to
some degree. Absorbed radiation can later be emitted back to the atmosphere. Emitted
radiation is useful in thermal studies, but will not be discussed in detail in this work (see
Lillesand and Kiefer [1994] Remote Sensing and Image Interpretation for information on
emitted energy).
(2) Transmission. Transmission occurs when radiation passes through material and exits the
other side of the object. Transmission plays a minor role in the energy’s interaction with
the target. This is attributable to the tendency for radiation to be absorbed before it is
entirely transmitted. Transmission is a function of the properties of the object.
(3) Reflection. Reflection occurs when radiation is neither absorbed nor transmitted. The
reflection of the energy depends on the properties of the object and surface roughness
relative to the wavelength of the incident radiation. Differences in surface properties allow
the distinction of one object from another.
E_I = E_A + E_T + E_R (2-6)
where incident energy (E_I) is the amount of incoming radiant energy and reflected energy (E_R) is the amount
of energy bouncing off the object; E_A and E_T are the absorbed and transmitted portions (as in equation 2-5).
(6) Summary. Spectral radiance is the amount of energy received at the sensor per time, per
area, in the direction of the sensor (measured in steradian), and it is measured per
wavelength. The sensor therefore measures the fraction of reflectance for a given area/time
for every wavelength, as well as the emitted radiance. Reflected and emitted radiance is calculated
by the integration of energy over the reflected hemisphere resulting from diffuse reflection
(see http://rsd.gsfc.nasa.gov/goes/text/reflectance.pdf for details on this complex
calculation). Reflected radiance is orders of magnitude greater than emitted radiance. The
following paragraphs, therefore, focus on reflected radiance.
(1) Background.
(a) Remote sensing consists of making spectral measurements over space: how much
of what “color” of light is coming from what place on the ground. One thing that a
remote sensing applications scientist hopes for, but which is not always true, is that
surface features of interest will have different colors so that they will be distinct in
remote sensing data.
(b) A surface feature’s color can be characterized by the percentage of incoming
electromagnetic energy (illumination) it reflects at each wavelength across the
electromagnetic spectrum. This is its spectral reflectance curve or "spectral
signature”; it is an unchanging property of the material. For example, an object such
as a leaf may reflect 3% of incoming blue light, 10% of green light and 3% of red
light. The amount of light it reflects depends on the amount and wavelength of
incoming illumination, but the percentages are constant. Unfortunately, remote
sensing instruments do not record reflectance directly, rather radiance, which is the
amount (not the percent) of electromagnetic energy received in selected wavelength
bands. A change in illumination, more or less intense sun for instance, will change
the radiance. Spectral signatures are often represented as plots or graphs, with
wavelength on the horizontal axis, and the reflectance on the vertical axis (Figure
2-20 provides a spectral signature for snow).
(2) Important Reflectance Curves and Critical Spectral Regions. While there are too many
surface types to memorize all their spectral signatures, it is helpful to be familiar with the
basic spectral characteristics of green vegetation, soil, and water. This in turn helps
determine which regions of the spectrum are most important for distinguishing these
surface types.
(3) Spectral Reflectance of Green Vegetation. Reflectance of green vegetation (Figure 2-21)
is low in the visible portion of the spectrum owing to chlorophyll absorption, high in the
near IR due to the cell structure of the plant, and lower again in the shortwave IR due to
water in the cells. Within the visible portion of the spectrum, there is a local reflectance
peak in the green (0.55 µm) between the blue (0.45 µm) and red (0.68 µm) chlorophyll
absorption valleys (Samson, 2000; Lillesand and Kiefer, 1994).
Figure 2-21. Spectral reflectance of healthy vegetation. Graph developed for Prospect (2002
and 2003) using Aster Spectral Library (http://speclib.jpl.nasa.gov/) data
(4) Spectral Reflectance of Soil. Soil reflectance (Figure 2-22) typically increases with
wavelength in the visible portion of the spectrum and then stays relatively constant in the
near-IR and shortwave IR, with some local dips due to water absorption at 1.4 and 1.9 µm
and due to clay absorption at 1.4 and 2.2 µm (Lillesand and Kiefer, 1994).
Figure 2-22. Spectral reflectance of one variety of soil. Graph developed for Prospect (2002
and 2003) using Aster Spectral Library (http://speclib.jpl.nasa.gov/) data
(8) Real Life and Spectral Signatures. Knowledge of spectral reflectance curves is useful if
you are searching a remote sensing image for a particular material, or if you want to identify
what material a particular pixel represents. Before comparing image data with spectral
library reflectance curves, however, you must be aware of several things.
(a) Image data, which often measure radiance above the atmosphere, may have to be
corrected for atmospheric effects and converted to reflectance.
(b) Spectral reflectance curves, which typically have hundreds or thousands of spectral
bands, may have to be resampled to match the spectral bands of the remote sensing
image (typically a few to a couple of hundred).
(c) There is spectral variance within a surface type that a single spectral library
reflectance curve does not show. For instance, Figure 2-25 below shows spectra
for a number of different soil types. Before depending on small spectral distinctions
to separate surface types, a note of caution is required: make sure that differences
within a type do not drown out the differences between types.
(d) While spectral libraries have known targets that are “pure types,” a pixel in a remote
sensing image very often includes a mixture of pure types: along edges of types
(e.g., water and land along a shoreline), or interspersed within a type (e.g., shadows
in a tree canopy, or soil background behind an agricultural crop).
Figure 2-25. Reflectance spectra of five soil types: A—soils having > 2% organic matter content
(OMC) and fine texture; B— soils having < 2% OMC and low iron content; C—soils having < 2%
OMC and medium iron content; D—soils having > 2% OMC, and coarse texture; and E— soil having
fine texture and high iron-oxide content (> 4%).
More on INTERACTION WITH EARTH’S SURFACE
INTERACTION MECHANISM
Interaction with matter can change the following properties of incident radiation:
(a) Intensity (b) Direction (c) Wave length (d) Polarisation, and (e) Phase.
The energy balance equation for radiation at a given wavelength (λ) can be
expressed as follows:
E_Iλ = E_Rλ + E_Aλ + E_Tλ (4.6)
E_Rλ = E_Iλ − [E_Aλ + E_Tλ] (4.7)
Dividing through by the incident energy E_Iλ gives
E_Rλ/E_Iλ = 1 − [E_Aλ/E_Iλ + E_Tλ/E_Iλ], or ρλ = 1 − [αλ + τλ] (4.8)
where ρλ = E_Rλ/E_Iλ is the reflectance, αλ = E_Aλ/E_Iλ the absorbance, and τλ = E_Tλ/E_Iλ the transmittance.
Since almost all earth surface features are very opaque in nature, the transmittance
(τλ) can be neglected. Also, according to Kirchhoff's law of physics, the absorbance
(αλ) is taken as the emissivity (ελ). Hence Eq. 4.8 becomes
ρλ = 1 − ελ (4.9)
Eq. 4.9 is the fundamental equation on which the conceptual design of remote
sensing technology is built.
If ελ = 0, then ρλ (i.e. the reflectance) is equal to one; this means that the total energy
incident on the object is reflected and recorded by the sensing system. The classical
example of this type of object is snow (i.e. a white object).
If ελ = 1, then ρλ = 0, indicating that whatever energy is incident on the object is
completely absorbed by that object. A black body such as lamp smoke is an example
of this type of object. Hence it is seen that reflectance varies from zero for the black
body to one for the white body.
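A minimal sketch of Eq. 4.8/4.9 (the names are ours): given the absorbed and transmitted fractions of the incident energy, the reflected fraction follows directly.

```python
def reflectance(absorptance, transmittance=0.0):
    """Spectral reflectance from the energy balance rho = 1 - (alpha + tau).
    For opaque earth-surface features transmittance is taken as zero,
    so rho = 1 - emissivity (Eq. 4.9)."""
    return 1.0 - (absorptance + transmittance)

print(reflectance(0.0))  # 1.0 -> ideal white object (e.g. fresh snow)
print(reflectance(1.0))  # 0.0 -> black body (e.g. lamp smoke)
```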
Generally, E-M energy reaching the earth surface interacts with the earth
surface materials in 3 ways.
The reflected energy travels upwards through the atmosphere. The part of
it that comes within the field of view of the sensor is detected by the sensor
and converted into a numerical value according to pre-designed scheme.
M = R + T + A
where M = total energy entering the earth's surface per m²
R = portion reflected
T = portion transmitted
A = portion absorbed
The proportions of these energy components vary both for different objects
and also with the wavelength of the E-M energy. Dividing the equation by
M we obtain
r + t + a = 1
where r = reflectance
t = transmittance
a = absorbance
Generally, in daylight remote sensing, the effect of the reflected energy
predominates and, therefore, it is the reflectance characteristics of objects
that are of utmost importance.
EM SPECTRUM AND APPLICATIONS IN REMOTE SENSING
Fig. 4.3 shows the EM spectrum, which is divided into discrete regions on the basis
of wavelength. Remote sensing mostly deals with energy in the visible (blue, green,
red) and infrared (near-infrared, mid-infrared, thermal-infrared) regions. Table 4.2 gives
the wavelength regions along with the principal applications in remote sensing.
Energy reflected from the earth during the daytime may be recorded as a function of
wavelength. The maximum amount of energy is reflected at 0.5 µm, called the
reflected energy peak. The earth also radiates energy both during the day and at night,
with the maximum energy radiated at 9.7 µm, called the radiant energy peak.
The pulse of electromagnetic radiation sent out by the transmitter through the antenna is of a specific
wavelength and duration (i.e. it has a pulse length measured in microseconds, µs). The wavelengths
of energy most commonly used in imaging radars are summarized in Table 9-3. The wavelengths are much
longer than visible, near-infrared, mid-infrared or thermal infrared energy used in other remote sensing
systems (Figure 9-2). Therefore, microwave energy is usually measured in centimeters rather than
micrometers (Carver, 1988). The unusual names associated with the radar wavelengths (e.g. K, Ka, Ku, X,
C, S, L, and P) are an artifact of the secret work on radar remote sensing in World War II when it was
customary to use an alphabetic descriptor instead of the actual wavelength or frequency. These
descriptors are still used today in much of the radar scientific literature.
The shortest radar wavelengths are designated K-band. K-band wavelengths should theoretically provide
the best radar resolution. Unfortunately, K-band wavelength energy is partially absorbed by water vapor
and cloud penetration can be limited. This is the reason that most ground-based weather radars used to
track cloud cover and precipitation are K-band. X-band is often the shortest wavelength range used for
orbital and suborbital imaging radars (Mikhail et al., 2001). Some RADAR systems function using more
than one frequency and are referred to as multiple-frequency radars (e.g. SIR-C and SRTM)
Chapter 4
Sensor Platforms, Sensor Types and Data Acquisition
3-12 Satellite Platforms and Sensors.
a. There are currently over two-dozen satellite platforms orbiting the earth collecting data.
Satellites orbit in either a circular geo-synchronous or polar sun-synchronous path. Each satellite
carries one or more electromagnetic sensor(s), for example, Landsat 7 satellite carries one sensor,
the ETM+, while the satellite ENVISAT carries ten sensors and two microwave antennas. Some
sensors are named after the satellite that carries them, for instance IKONOS the satellite houses
IKONOS the sensor. See Appendices D and E for a list of satellite platforms, systems, and sensors.
b. Sensors are designed to capture particular spectral data. Nearly 100 sensors have been
designed and employed for long-term and short-term use. Appendix D summarizes details on
sensor functionality. New sensors are periodically added to the family of existing sensors while
older or poorly designed sensors become decommissioned or defunct. Some sensors are flown on
only one platform; a few, such as MODIS and MSS, are on-board more than one satellite. The
spectral data collected may span the visible (optical) blue, green, and red bands, the NIR, MIR/SWIR,
and thermal IR, or the microwave region. Sensors can detect single wavelengths or frequencies and/or
ranges of the EM spectrum.
Figure 3-2. Satellite in Geostationary Orbit. Courtesy of the Natural Resources Canada.
b. The remaining remote sensing satellites have near polar orbits and are launched into a
sun-synchronous orbit (Figure 3-3). They are typically inclined 8 degrees from the poles due to the
gravitational pull from the Earth's bulge at the equator; this allows them to maintain the sun-synchronous orbit.
Depending on the swath width of the satellite (if it is non-pointable), the same area on the Earth
will be imaged at regular intervals (16 days for Landsat, 24 days for Radarsat).
Figure 3-3. Satellite Near Polar Orbit, Courtesy of the Natural Resources Canada.
3-14 Planning Satellite Acquisitions. Corps satellite acquisition must be arranged through the
Topographic Engineering Center (TEC) Imagery Office (TIO). It is very easy to transfer the cost
of the imagery to TEC via the Corps Financial Management System (CFMS). They will place the
order, receive and duplicate the imagery for entry into the National Imagery and Mapping Agency
(NIMA) archive called the Commercial Satellite Imagery Library (CSIL), and send the original to
the Corps requester. They buy the imagery under a governmental user license contract that licenses
free distribution to other government agencies and their contractors, but not outside of these. It is
important for Corps personnel to adhere to the conditions of the license. Additional information
concerning image acquisition is discussed in Chapter 4 (Section 4-1).
a. Turn Around Time. This is another item to consider. That is the time after acquisition
of the image that lapses before it is shipped to TEC-TIO and the original purchaser. Different
commercial providers handle this in different ways, but the usual is to charge an extra fee for a 1-
week turn around, and another fee for a 1 to 2 day turn around. For example, SPOT Code Red
programmed acquisition costs an extra $1000 and guarantees shipment as soon as acquired. The
ERS priority acquisition costs an extra $800 and guarantees shipment within 7 days, emergency
acquisition cost $1200 and guarantees shipment within 2 days, and near real time costs an extra
$1500 and guarantees shipment as soon as acquired. Also arrangement may be made for ftp image
transfers in emergency situations. Costs increase in a similar way with RADARSAT, IKONOS,
and QuickBird satellite imaging systems.
Sensors
b. Swath Planners.
• Landsat acquired daily over the CONUS, use DESCW swath planner on PC running
at least Windows 2000 for orbit locations. http://earth.esa.int/services/descw/
• Other commercial imaging systems, contact the TEC Imagery Office regarding
acquisitions.
3-15 Ground Penetrating Radar Sensors. Ground penetrating radar (GPR) uses
electromagnetic wave propagation and back scattering to image, locate, and quantitatively identify
changes in electrical and magnetic properties in the ground. Practical platforms for the GPR
include on-the-ground point measurements, profiling sleds, and near-ground helicopter surveys. It
has the highest resolution in subsurface imaging of any geophysical method, approaching
centimeters. Depth of investigation varies from meters to several kilometers, depending upon
material properties. Detection of a subsurface feature depends upon contrast in the dielectric
electrical and magnetic properties. Interpretation of ground penetrating radar data can lead to
information about depth, orientation, size, and shape of buried objects, and soil water content.
b. CRREL has researched the use of radar for surveys of permafrost, glaciers, and river,
lake and sea ice covers since 1974. Helicopter surveys have been used to measure ice thickness in
New Hampshire and Alaska since 1986. For reports on the use of GPR within cold region
environments, a literature search from the CRREL website (http://www.crrel.usace.army.mil/) will
provide additional information. Current applications of GPR can be found at
http://www.crrel.usace.army.mil/sid/gpr/gpr.html.
c. A radar pulse is modulated at frequencies from 100 to 1000 MHz, with the lower
frequency penetrating deeper than the high frequency, but the high frequency having better
resolution than the low frequency. Basic pulse repetition rates are up to 128 Hz on a radar line
profiling system on a sled or airborne platform. Radar energy is reflected from both surface and
subsurface objects, allowing depth and thickness measurements to be made from two-way travel
time differences. An airborne speed of 25 m/s at a low altitude of no more than 3 m allows
collection of line profile data at 75 Hz in up to 4 m of depth with a 5-cm resolution on 1-ft (30.5
cm)-grid centers. Playback rates of 1.2 km/min. are possible for post processing of the data.
d. There are several commercial companies that do GPR surveys, such as Blackhawk
Geometrics and Geosphere Inc., found on the web at http://www.blackhawkgeo.com, and
http://www.geosphereinc.com.
3-1 Introduction. Remotely sensed data are collected by a myriad of satellite and airborne
systems. A general understanding of the sensors and the platforms they operate on will help in
determining the most appropriate data set to choose for any project. This chapter reviews the nine
business practice areas in USACE Civil Works and examines the leading questions to be addressed
before the initiation of a remote sensing project. Airborne and satellite sensor systems are
presented along with operational details such as flight path/orbits, swath widths, acquisition, and
post processing options. Ground-based remote sensing GPR (Ground Penetrating Radar) is also
introduced. This chapter concludes with a summary of remote sensing and GIS matches for each
of the nine civil works business practice areas.
Remote sensing of the surface of the earth has a long history, dating from the use
of cameras carried by balloons and pigeons in the nineteenth and early twentieth
centuries. Later, aircraft-mounted systems were developed for military purposes
during the early part of the 20th century. Airborne remote sensing was the principal
method used in the initial years of the development of remote sensing, in the
1960s and 1970s, when aircraft were mostly used as RS platforms for obtaining
photographs. An aircraft carrying RS equipment should have maximum stability,
be free from vibrations, and fly with uniform speed. In India, three types of aircraft
are currently used for RS operations: Dakota, AVRO and Beechcraft Super King Air
200. The RS equipment available in India includes multi-spectral scanners, an ocean
colour radiometer, and aerial cameras for photography in B/W, colour and near infrared.
However, aircraft operations are very expensive, and for periodic monitoring of
constantly changing phenomena such as crop growth and vegetation cover,
aircraft-based platforms cannot provide cost- and time-effective solutions.
Space-borne remote sensing platforms, such as satellites, offer several advantages
over airborne platforms. They provide a synoptic view (i.e. observation of a large area in
a single image) and systematic, repetitive coverage. Also, platforms in space are
much less affected by atmospheric drag, so their orbits can be well defined.
The entire earth, or any designated portion of it, can be covered at specified intervals
synoptically, which is immensely useful for the management of natural resources.
Satellite: It is a platform that carries the sensor and other payloads required in RS
operation. It is put into earth's orbit with the help of launch vehicles. Space-borne
platforms are broadly divided into two classes:
(i) Low altitude near-polar orbiting satellites (ii) High altitude geostationary
satellites
The former are mostly remote sensing satellites, which revolve around the earth in a
sun-synchronous orbit (altitude 700–1500 km) defined by a fixed inclination angle from
the earth's N–S axis. The orbital plane rotates to maintain precise pace with the Sun's
westward progress as the earth rotates around the Sun. Since the position with reference
to the Sun is fixed, the satellite crosses the equator at precisely the same local solar
time.
Geo-stationary satellites
France, Sweden and Belgium joined together and pooled their resources to
develop an earth observation programme known as Système Pour l'Observation de la
Terre, abbreviated as SPOT. The first satellite of the series, SPOT-1, was launched in
Feb. 1986. High-resolution data obtained from sensors such as the Landsat Thematic
Mapper (TM) and the SPOT High Resolution Visible (HRV) have been extensively used for
urban planning, urban growth assessment and transportation planning, besides the
conventional applications related to natural resources.
1. Satellite for Earth Observation (SEO-1), now called Bhaskara-1, was the first
Indian remote sensing satellite, launched by a Soviet launch vehicle from the
USSR in June 1979 into a near-circular orbit.
2. SEO-II (Bhaskara-II) was launched in Nov. 1981 from a Soviet cosmodrome.
3. India's first semi-operational remote sensing satellite (IRS) was launched by
the Soviet Union in Sept. 1987.
4. The IRS series of satellites launched under the IRS mission includes IRS-1A, IRS-1B,
IRS-1C, IRS-1D and IRS-P4.
Remote Sensing Sensors
Remote sensing sensors are designed to record radiations in one or more parts of
the EM spectrum. Sensors are electronic instruments that receive EM radiation and
generate an electric signal that correspond to the energy variation of different
earth’s surface features. The signal can be recorded and displayed as numerical
data or an image. The strength of the signal depends upon (i) Energy flux, (ii)
Altitude, (iii) Spectral band width, (iv) instantaneous field of view (IFOV), and (v)
Dwell time.
A scanning system employs detectors with a narrow field of view which sweeps
across the terrain to produce an image. When photons of EM energy radiated or
reflected from earth surface features encounter the detector, an electrical signal is
produced that varies in proportion to the number of photons.
Sensors may also be split into "active" sensor (i.e., when a signal is emitted by the sensor and its
reflection by the object is detected by the sensor) and "passive" sensor (i.e., when the reflection of
sunlight is detected by the sensor).
Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas.
Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of
passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers.
Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a
sensor then detects and measures the radiation that is reflected or backscattered from the target.
RADAR and LiDAR are examples of active remote sensing where the time delay between emission and
return is measured, establishing the location, speed and direction of an object.
There are many other types of sensors used in remote sensing but these
are of specialized design and so are not of immediate help in this course.
In general, all sensors operate by collecting radiation from a target of
interest and suitably separating the energy levels and recording the
pattern either photographically or numerically. It is immaterial whether
the recording is done photographically or numerically or both because it
is always possible to convert from one form of recording to another using
data converters.
This consists of a system of lenses attached to the front end of a cone which
forms the body of the camera. At the back of the camera body is a metal
and glass plate on which a light-sensitive material such as film is placed.
A shutter controls the admission of light into the camera. When the camera
lens is opened to admit light, visible EM energy enters the camera and is
brought to focus on the back of the camera. The film material (halide crystals)
can distinguish between different brightness levels of the incoming radiation
and thus can record the pattern of energy levels of the incoming energy. By
developing the exposed film, it is possible to produce an image of the source
of the radiation that entered the camera.
A device that uses a set of charge-coupled devices (CCDs) to detect and
record variations in energy levels. Another design uses an electro-mechanical
system for the same purpose.
Practical SENSORS
1. Linear Imaging and Self Scanning Sensor (LISS I): This payload was on board
the IRS 1A and 1B satellites. It had four bands operating in the visible and near-IR
region.
2. Linear Imaging and Self Scanning Sensor (LISS II): This payload was on board
the IRS 1A and 1B satellites. It has four bands operating in the visible and near-IR region.
3. Linear Imaging and Self Scanning Sensor (LISS III): This payload is on board
the IRS 1C and 1D satellites. It has three bands operating in the visible and near-IR
region and one band in the shortwave infrared region.
4. Panchromatic Sensor (PAN): This payload is on board the IRS 1C and 1D
satellites. It has a single panchromatic band operating in the visible region.
5. Wide Field Sensor (WiFS): This payload is on board the IRS 1C and 1D satellites.
It has two bands operating in the visible and near-IR region.
6. Modular Opto-Electronic Scanner (MOS): This payload is on board the IRS P3
satellite.
7. Ocean Colour Monitor (OCM): This payload is on board the IRS P4 satellite. It has
eight spectral bands operating in the visible and near-IR region.
8. Multi-frequency Scanning Microwave Radiometer (MSMR): This payload is on board
the IRS P4 satellite. It is a passive microwave sensor.
Recall that any object interacts with EM energy in three ways, namely absorption,
transmission and reflectance. In this lecture, we assume that the transmitted
energy is so small that it can be ignored.
This means that the incident energy is distributed in proportion to the
absorptance and reflectance properties of the object.
For daylight sensing, the absorption component plays a lesser role, so the function
f(x, y) is the reflected component of the EM energy and is affected by two factors, namely: (1)
the amount of source illumination incident on the scene being viewed and (2) the
reflective capacity of the objects in the scene.
I(x, y) = i(x, y) · r(x, y) (2)
where i(x, y) is the illumination component, with
0 < i(x, y) < ∞ (3)
and r(x, y) is the reflectance component, with
0 ≤ r(x, y) < 1 (4)
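The illumination-reflectance model can be demonstrated with two small arrays. The sketch below uses NumPy with made-up values, purely to show the element-wise product in equation (2).

```python
import numpy as np

# illumination incident on a 2 x 3 patch of the scene (arbitrary units, 0 < i < inf)
illumination = np.array([[900.0, 900.0, 850.0],
                         [880.0, 870.0, 860.0]])

# reflectance of the surface at each location (dimensionless, 0 <= r < 1)
reflectance = np.array([[0.05, 0.30, 0.60],
                        [0.10, 0.45, 0.80]])

# recorded image: element-wise product I(x, y) = i(x, y) * r(x, y)
image = illumination * reflectance
print(image)
```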
It is essential to note that, even though a digital image is basically a 4D dataset, since
each pixel has (x, y, f) coordinate values and a digital number (DN) representing the grey
value, computational photogrammetric methods do not adopt a simultaneous treatment
of the multi-dimensional dataset in the solution of a mapping problem. This is because
of the complexity of 4D solution processes and also the difficulty of combining attribute
data modelling with geometric data modelling. To simplify things, we employ the x, y
pixel coordinates in the formulations for 2D mapping work and x, y, f in formulations
involved in 3D geometric mapping. The digital numbers are then used in a separate
process, for example when it is necessary to resample the image after a geometric
transformation or to classify image pixels into object space features, the results of
which are then superposed on the map already prepared from the geometric
process.
5.0 Types of digital Images
Images are classified according to:
the platform used (terrestrial, aircraft, satellite)
the portion of the EM spectrum used (gray scale (optical), RGB, thermal or infrared,
microwave)
the type of sensors used (camera, scanners, radiometers, radar, LiDAR)
Examples are aerial images, satellite images, and terrestrial images. Others are
photographic images (Plates 2, 3), stereo or overlapping images (Plate 4), RADAR
images, LIDAR, Panchromatic images, Colour images, thermal images and high
resolution images, vertical images, oblique images, etc.
Spatial resolution
The size of a pixel that is recorded in a raster image – typically pixels may correspond to square
areas ranging in side length from 1 to 1,000 metres (3.3 to 3,280.8 ft).
Spectral resolution
The wavelength width of the different frequency bands recorded – usually, this is related to the
number of frequency bands recorded by the platform. The current Landsat collection comprises
seven bands, including several in the infrared spectrum, with spectral resolutions ranging from
0.07 to 2.1 μm. The Hyperion sensor on Earth Observing-1 resolves 220 bands from 0.4 to 2.5
μm, with a spectral resolution of 0.10 to 0.11 μm per band.
Radiometric resolution
The number of different intensities of radiation the sensor is able to distinguish. Typically, this
ranges from 8 to 14 bits, corresponding to 256 levels of the gray scale and up to 16,384
intensities or "shades" of colour, in each band. It also depends on the instrument noise.
Temporal resolution
The frequency of flyovers by the satellite or plane; this is only relevant in time-series studies or
those requiring an averaged or mosaic image, as in deforestation monitoring. Repeat coverage was first
exploited by the intelligence community, where it revealed changes in infrastructure, the
deployment of units or the modification/introduction of equipment. Cloud cover over a given
area or object may also make it necessary to repeat the collection over that location.
Spatial Resolution
This is the term used to express the metrical accuracy of an image. It is defined
as the diameter of the circular area on the object that is seen as a pixel or dot by
the imaging sensor. If the area is square, then the resolution is the length or
breadth of the area seen as a pixel. Spatial resolution is often expressed as the
number of pixels/dots per unit of space. If the space is the image space, the
units are in inches, centimeters or millimeters. If the space is the object space,
the units are in kilometers or miles. Thus, a spatial resolution of 25 dpi implies
that 25 pixels/dots occupy one inch in the image space, i.e. one pixel is
approximately 1 mm in diameter. A spatial resolution of 1 meter is equivalent to
1,000 cells per kilometer in the object space. It should be noted that the spatial
resolution stated in terms of the object space dimensions can be converted into
the image space equivalent and vice versa if the scale of the image is given.
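The conversions described here can be checked with a short script; this is a minimal sketch, assuming Python helper names (pixel_size_mm_from_dpi and so on) that are purely illustrative. The 25 dpi and 1-metre figures come from the paragraph above; the 1:10,000 scale is an assumed example.

# Spatial-resolution conversions sketched from the text (names are illustrative)

def pixel_size_mm_from_dpi(dpi: float) -> float:
    """Image-space pixel size in millimetres for a given dots-per-inch value."""
    return 25.4 / dpi

def cells_per_km_from_gsd(gsd_m: float) -> float:
    """Object-space cells per kilometre for a ground sampling distance in metres."""
    return 1000.0 / gsd_m

def image_pixel_mm_from_gsd(gsd_m: float, scale_denominator: float) -> float:
    """Convert an object-space GSD to its image-space size using the photo scale 1:scale_denominator."""
    return gsd_m * 1000.0 / scale_denominator

print(pixel_size_mm_from_dpi(25))          # ~1.0 mm per pixel, as stated in the text
print(cells_per_km_from_gsd(1.0))          # 1,000 cells per kilometre
print(image_pixel_mm_from_gsd(1.0, 10000)) # 0.1 mm on an assumed 1:10,000 image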
Radiometric Resolution
This term is often used to express the number of bits used to represent the DN
of each pixel in the image. For example, a one bit image is a black & white image
in which the maximum DN for any pixel is 1 while the minimum is 0. A 2-bit
image is one in which DNs of the pixels can range from 0 to 3, while an 8-bit
image is one in which the DNs can range from 0 to 255. An image that has more
than 1-bit radiometric resolution is generally termed a grey scale image.
Spectral Resolution
This is a term used to express the number of electromagnetic wave spectrums
used to acquire the image. Often, each em spectrum is called a channel or band;
thus, the number of channels or bands used to procure the image is called the
spectral resolution of the image. An image acquired using many sensors tuned
to different wavebands of the em spectrum is called a multi-channel image. A
colour image is a 3-band image acquired with sensors sensitive to red, green and
blue wavebands. Many satellite images are multi-channel images because the
sensors are always designed to capture different features of the earth and so are
tuned to wavebands of maximum reflectance for the different features of interest.
It should be noted that images taken in different spectral bands are separate
images even though they cover common areas. Such images can be processed
individually or together. The joint processing is often more rigorous and more
robust than the individual approach. Also, depending on the radiometric
resolution chosen, the memory size for a multi-channel image is a direct multiple
of that for a one-channel image. For an 8-bit resolution, a 3-channel image will
require 3 × 8 = 24 bits for the DNs of one pixel.
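Since the memory requirement scales directly with the number of bands, a small helper (a sketch; the function name is an assumption) makes the 3 × 8 bits-per-pixel arithmetic explicit:

def image_size_bytes(rows: int, cols: int, bands: int, bits_per_dn: int) -> int:
    """Raw storage needed for a multi-channel image: one DN per pixel per band."""
    return rows * cols * bands * bits_per_dn // 8

# A 3-channel, 8-bit image needs 3 * 8 = 24 bits per pixel position, as noted above.
print(image_size_bytes(1, 1, 3, 8))        # 3 bytes for a single pixel position
print(image_size_bytes(512, 512, 3, 8))    # 786,432 bytes for a 512 x 512 colour image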
Temporal Resolution
This is a term commonly used for an imaging system that makes repeat visits to
a scene at specified intervals of time. The repeat cycle expressed in terms of days
is called the temporal resolution of the imaging sensor. In general, temporal
resolution can be understood to mean the number of images taken at different
times for the same area of the object. Temporal resolution becomes important
when we are interested in studying the variation of earth’s surface phenomenon
with time. The more images available over time the better the analysis and the
more reliable the conclusions that may be drawn from such analysis.
Restating spatial resolution in object space terms: it is defined as the length (or
breadth) of the square area on the object that is seen as a pixel or dot by the imaging sensor.
If the space is the image space, the units are in inches, centimeters or
millimeters.
In object space terms, the spatial resolution is also often called the ground
sampling distance (GSD).
It should be noted that the spatial resolution stated in terms of the object
space dimensions can be converted into the image space equivalent and
vice versa if the scale of the image is given.
Thus, a 640 x 480 image would measure 6.67 inches by 5 inches when
presented (e.g., displayed or printed) at 96 pixels per inch. On the other
hand, it would measure 1.6 inches by 1.2 inches at 400 pixels per inch.
The unplottable error can be used to judge the acceptability of the GSD
of an imaging system.
The concept of unplottable error can also be used to check for the
zoomable scale of an image such as a mosaic or an orthophoto.
Radiometric scale
The decimal and binary number systems are fundamental to a full understanding
of image representation in digital format. In this chapter we will examine binary
numbers and their relationship to the decimal number system.
The decimal system is a method of counting which involves the use of ten digits,
that is, a system with a base of ten. Each of the ten decimal digits, 0 through 9,
represents a certain quantity. The ten digits do not limit us to expressing
only ten different quantities because we use the various digits in appropriate
positions within a number to indicate the magnitude of the quantity (units, tens,
hundreds, thousands, etc,). We can express quantities up through nine before we
run out of digits, and the position of each digit within the number tells us the
magnitude it represents. If, for instance, we wish to express the quantity twenty-
three, we use (by their respective positions in the number) the digit 2 to represent
the quantity twenty and the digit 3 to represent the quantity three. Therefore, the
position of each of the digits in the decimal number indicates the magnitude of the
quantity represented and can be assigned a “weight”.
The value of a decimal number is the sum of the digits times their respective
weights. The weights are the units, tens, hundreds, thousands, etc. The weight of
each successive decimal digit position to the left in a decimal number is an
increasing power of ten. The weights are: units (10⁰), tens (10¹), hundreds (10²),
thousands (10³), etc. The following example illustrates the idea:
Example: 23 = 2 × 10¹ + 3 × 10⁰ = 20 + 3.
The binary number system is simply another way to count. It is less complicated
than the decimal system because it is composed of only two digits. It may seem
more difficult at first because it is unfamiliar to us.
Just as the decimal system with its ten digits is a base-ten system, the binary system
with its two digits is a base-two system. The two binary digits (bits) are 1 and 0. The
position of the 1 or 0 in a binary number indicates its “weight” or value within the
number, just as the position of a decimal digit determines the magnitude of that
digit. The weight of each successively higher position (to the left) in a binary
number is an increasing power of two.
Counting in Binary
To learn to count in binary, let us first look at how we count in decimal. We start at
0 and count up to 9 before we run out of digits. We then start another digit position
(to the left) and continue counting 10 through 99. At this point we have exhausted
all two-digit combinations, so a third is needed in order to count from 100 through
999.
A comparable situation occurs when counting in binary, except that we have only
two digits. We begin counting 0, 1; at this point we have used both digits, so we
include another digit position and continue 10, 11. We have now exhausted all
combinations of two digits, so a third is required. With three digits we can continue
to count – 100, 101, 110, and 111. Now we need a fourth digit to continue, and so
on.
Similar to the decimal number representation, a number in the binary system is represented using
a series of ‘columns’ and in a computer each column is used to represent a switch. The switches,
which are small magnetized areas on a computer disk or in memory, are usually grouped in packets
of eight, known as a byte. Several bytes can be linked together to make a computer word. Words
are grouped together in a data record and records are grouped together in a computer file. Sets of
computer files can be grouped together hierarchically in directories or subdirectories.
Q. Using a 32-bit word, what range of integer numbers can be represented if the left-most bit is
reserved for sign?
The following formula tells us how high we can count in decimal, beginning with zero, with n bits:
Highest decimal number = 2ⁿ − 1, where n is the number of bits arranged from left to right. For instance,
with two bits we can count from 0 through 3:
2² − 1 = 4 − 1 = 3
Similarly, with four bits: 2⁴ − 1 = 16 − 1 = 15
With a byte of eight bits we can represent 256 numbers from 0 to (2⁸ − 1), i.e. from 0 to 255. If we combine
2 bytes in a 16-bit word it is possible to code numbers from 0 to 65 535. However, it is also useful to be
able to code positive and negative numbers, so only the first fifteen bits (counting from the right) are used
for coding the number and the sixteenth bit (2¹⁵) is used to determine the sign. This means that with
sixteen bits we can code numbers from –32 767 to +32 767. In image processing, only integer numbers
are used. A number lacking a fractional component is called an integer.
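A short Python sketch (the function names are assumptions) reproduces the unsigned and sign-magnitude ranges discussed above, and also answers the 32-bit question posed earlier:

def unsigned_range(n_bits: int) -> tuple[int, int]:
    """Smallest and largest unsigned integers representable with n bits."""
    return 0, 2 ** n_bits - 1

def sign_magnitude_range(n_bits: int) -> tuple[int, int]:
    """Range when the left-most bit is reserved for the sign (sign-magnitude coding)."""
    magnitude = 2 ** (n_bits - 1) - 1
    return -magnitude, magnitude

print(unsigned_range(8))          # (0, 255)
print(unsigned_range(16))         # (0, 65535)
print(sign_magnitude_range(16))   # (-32767, 32767), matching the text
print(sign_magnitude_range(32))   # answer to the 32-bit word question posed above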
The method of evaluating a binary number is illustrated by the following example:
Example:
The binary sequence 0010 means 2³ × 0 + 2² × 0 + 2¹ × 1 + 2⁰ × 0 = 2₁₀
1) Sum-of-Weights Method
One way to find the binary number equivalent to a given decimal number is
to determine the set of binary weight values whose sum is equal to the decimal
number. For instance, the decimal number 9 can be expressed as the sum of binary
weights as follows:
9 = 8 + 1 = 2³ + 2⁰
By placing a 1 in the appropriate weight positions, 2³ and 2⁰, and a 0 in the other
positions, we have the binary number for decimal 9:
2³ 2² 2¹ 2⁰
1  0  0  1   (binary nine)
Examples: Convert the decimal numbers 12, 25, 58 and 82 to binary.
Solutions:
12₁₀ = 8 + 4 = 2³ + 2² = 1100₂
25₁₀ = 16 + 8 + 1 = 2⁴ + 2³ + 2⁰ = 11001₂
58₁₀ = 32 + 16 + 8 + 2 = 2⁵ + 2⁴ + 2³ + 2¹ = 111010₂
82₁₀ = 64 + 16 + 2 = 2⁶ + 2⁴ + 2¹ = 1010010₂
Q. Convert the decimal numbers 102, 225, 558, and 822 to binary.
Example (repeated division-by-2 method): converting decimal 12 to binary:
12 Div 2 = 6 R 0 (LSB)
6 Div 2 = 3 R 0
3 Div 2 = 1 R 1
1 Div 2 = 0 R 1 (MSB)
Reading the remainders from the last to the first gives 12₁₀ = 1100₂.
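The repeated-division procedure translates directly into code. The following sketch (the function name decimal_to_binary is an assumption) collects the remainders and reads them from last to first:

def decimal_to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))      # remainders are produced LSB first
    return "".join(reversed(bits))       # read the remainders from last to first

print(decimal_to_binary(12))   # '1100', matching the worked division above
print(decimal_to_binary(25))   # '11001'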
A digital image is often encoded in the form of a binary file for the purpose of storage
and transmission. Among the numerous encoding formats are BMP (Windows Bitmap),
JPEG (Joint Photographic Experts Group File Interchange Format), and TIFF (Tagged
Image File Format). Although these formats differ in technical details, they share
structural similarities. In general, a multichannel image may be stored in pixel-
interleave (BIP), or line-interleave (BIL) or non-interleave formats (BSQ).
For example, a 2-band image can be stored in line-interleave (BIL) order; a worked
example is given further below (see the Band 1 / Band 2 grids and the BIL stream).
Table 1 shows the matrix structure and the addressing system used for a digital
image.
Table 1: Matrix structure and addressing of a digital image
S(c, r)  c = 0   1   2   3   4   5
r = 0        1   2   3   4   1   0
    1        1   5   8   9   2   1
    2        5   6   7   8   5   0
    3        4   3  10   7   6   4
    4        3   2   9  10   3   0
    5        7  10   9   8   7   0
    6        1   4   6   5   1   0
The position of a pixel in the image can be established by the column and line
intersecting at the pixel.
Symbolically, the pixel’s position is indicated as (c, r) where c stands for the column
and r for the line or row of the pixel.
By convention, the indexes c, r start from 0; and while c counts the columns from
left to right, r counts the lines from top to bottom.
Thus, the pixel at the upper left corner of the image has position (0, 0) i.e. c = 0, r =
0.
Typically the pixel at the upper left corner of an image is considered to be at the
origin (0,0) of a pixel coordinate system.
Thus the pixel at the lower right corner of a 640 x 480 image would have
coordinates (639, 479), whereas the pixel at the upper right corner would have
coordinates (639, 0).
From the structure of an image described above, the total number of pixels in an
image is a function of the physical size of the image and the number of pixels per
unit length (e.g. inch) in the horizontal as well as the vertical direction.
This number of pixels per unit length is referred to as the spatial resolution of the
image (this is discussed later). Thus a 3 x 2 inch image at a resolution of 300 pixels
per inch would be expressed as 900x600 image and have a total of 540,000 pixels.
More commonly, image size is given as the total number of pixels in the horizontal
direction times the total number of pixels in the vertical direction (e.g., 512 x 512,
640 x 480, or 1024 x 768).
Although this convention makes it relatively straightforward to gauge the total
number of pixels in an image, it does not alone uniquely specify the size of the
image or its resolution as defined in the paragraph above.
For example: an image specified in pixel form as 640 x 480 would measure 6.67
inches by 5 inches when presented (e.g., displayed or printed) at 96 pixels per inch.
On the other hand, it would measure 1.6 inches by 1.2 inches at 400 pixels per inch.
Hence, the spatial resolution of the image must be given expressly or must be
estimated if a physical dimension on the image is given.
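The 640 x 480 example above can be reproduced with a small helper (a sketch; the function name is an assumption):

def physical_size_inches(width_px: int, height_px: int, ppi: float) -> tuple[float, float]:
    """Physical display/print size of an image for a given pixels-per-inch resolution."""
    return width_px / ppi, height_px / ppi

print(physical_size_inches(640, 480, 96))   # (6.67, 5.0) inches, as in the example above
print(physical_size_inches(640, 480, 400))  # (1.6, 1.2) inches
print(900 * 600)                            # 540,000 pixels in the 3 x 2 inch, 300 ppi example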
Figure 2: Metric image coordinate system, with a point P(x, y).
The brightness value for every pixel in the image is represented by a non-negative
integer called the digital number (DN) generated by the sensor using a chosen
radiometric scale. The DN for each pixel will depend on the reflectance property of
the object to which it belongs within the particular eme spectrum channel of the
sensor.
For satellite data such as Landsat and SPOT, the DNs represent the intensity of
reflected light in the visible, infrared, or other wavelengths. For imaging radar (SAR)
data or LiDAR laser pulse, the DNs represent the strength of a radar/laser pulse
returned to the antenna.
A digital image is often encoded in the form of a binary file for the purpose of
storage and transmission.
Although these formats differ in technical details, they share structural similarities.
In general, a multichannel image may be stored in
pixel-interleave (BIP),
line-interleave (BIL),
non-interleave formats (BSQ).
Band 1               Band 2
5 3 4 5 4 5 5        5 5 4 6 7 7 7
2 2 3 4 4 4 6        2 4 6 5 5 6 5
2 2 3 3 6 6 8        5 3 5 7 6 6 8
2 2 6 6 9 8 7        3 4 5 6 8 8 7
3 6 8 8 8 7 4        3 5 8 8 8 7 1
3 6 8 7 2 3 2        4 5 8 7 1 0 0
START
5 3 4 5 4 5 5 5 5 4 6 7 7 7 2 2 3 4 4 4 6 2 4
6 5 5 6 5 2 2 3 3 6 6 8 5 3 5 7 6 6 8 2 2 6 6
9 8 7 3 4 5 6 8 8 7
END
(Only the first four image lines are shown in the stream above; the BIL pattern continues, line by line and band by band, for the remaining rows.)
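As a hedged sketch of the three storage orders mentioned above, the NumPy transposes below produce BSQ, BIL and BIP streams from a small 2-band array; the array contents are arbitrary placeholders, and only the ordering matters:

import numpy as np

# Illustrative 2-band, 3 x 4 image stored as (band, row, column)
image = np.arange(24).reshape(2, 3, 4)

bsq = image.reshape(-1)                     # band sequential: all of band 1, then all of band 2
bil = image.transpose(1, 0, 2).reshape(-1)  # band interleaved by line: row 1 of band 1, row 1 of band 2, ...
bip = image.transpose(1, 2, 0).reshape(-1)  # band interleaved by pixel: both bands for pixel (0,0), then (0,1), ...

print(bsq)
print(bil)
print(bip)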
Earlier paragraphs of this chapter explored the nature of emitted and reflected energy and the
interactions that influence the resultant radiation as it traverses from source to target to sensor.
This paragraph will examine the steps necessary to transfer radiation data from the satellite to the
ground and the subsequent conversion of the data to a useable form for display on a computer.
a. Conversion of the Radiation to Data. Data collected at a sensor are converted from a
continuous analog signal to a digital number. This is a necessary conversion, as electromagnetic
waves arrive at the sensor as a continuous stream of radiation. The incoming radiation is
sampled at regular time intervals and assigned a value (Figure 2-26). The value given to
the data is based on the use of a 6-, 7-, 8-, 9-, or 10-bit binary computer coding scale;
powers of 2 play an important role in this system. Using this coding allows a computer to
store and display the data. The computer translates the sequence of binary numbers, given
as ones and zeros, into a set of instructions with only two possible outcomes (1 or 0,
meaning “on” or “off”). The binary scale that is chosen (i.e., 8-bit data) will depend on the
level of brightness that the radiation exhibits. The brightness level is determined by
measuring the voltage of the incoming energy. Below in Table 2-5 is a list of select bit
integer binary scales and their corresponding number of brightness levels. The ranges are
derived by exponentially raising the base of 2 by the number of bits.
Figure 2-26. Diagram illustrates the digital sampling of continuous analog voltage data. The DN values above
the curve represent the digital output values for that line segment.
Table 2-5 Digital number value ranges for various bit data
6-bit: 2⁶ = 64 brightness levels (DN 0–63)
7-bit: 2⁷ = 128 brightness levels (DN 0–127)
8-bit: 2⁸ = 256 brightness levels (DN 0–255)
9-bit: 2⁹ = 512 brightness levels (DN 0–511)
10-bit: 2¹⁰ = 1,024 brightness levels (DN 0–1,023)
b. Diversion on Data Type. Digital number values for raw remote sensing data are usually
integers. Occasionally, data can be expressed as a decimal. The most popular code for
representing real numbers (a number that contains a fraction, i.e., 0.5, which is one-half) is
called the IEEE (Institute of Electrical and Electronics Engineers, pronounced I-triple-E)
Floating-Point Standard. ASCII text (American Standard Code for Information
Interchange; pronounced ask-ee) is another alternative computing value system. This
system is used for text data. You may need to be aware of the type of data used in an image,
particularly when determining the digital number in a pixel.
c. Transferring the Data from the Satellite to the Ground. The transfer of data stored in the
sensor from the satellite to the user is similar to the transmission of more familiar signals,
such as radio and television broadcasts and cellular phone conversations. Everything we
see and hear, whether it is a TV program with audio or a satellite image, originates as a
form of electromagnetic radiation. To transfer satellite data from the sensor to a location
on the ground, the radiation is coded (described in Paragraph 2-7a) and attached to a signal.
The signal is generally a high frequency electromagnetic wave that travels at the speed of
light. The data are instantaneously transferred and detected with the use of an appropriate
antenna and receiver.
(2) Satellites can only transmit data when in range of a receiving station. When outside of a
receiving range, satellites will store data until they fly within range of the next receiving
station. Some satellite receiving stations are mobile and can be placed on airplanes for swift
deployment. A mobile receiving station is extremely valuable for the immediate acquisition
of data relating to an emergency situation (flooding, forest fire, military strikes).
e. Data are Prepared for the User. Once transmitted, the carrier signal is filtered from the data,
which are decoded and recorded onto a high-density digital tape (HDDT) or a CD-ROM,
and in some cases transferred via file transfer protocol (FTP). The data can then undergo
geometric and radiometric preprocessing, generally by the vendor. The data are
subsequently recorded onto tape or CD compatible for a computer.
f. Hardware and Software Requirements. The hardware and software needed for satellite
image analysis will depend on the type of data to be processed. A number of free image
processing software programs are available and can be downloaded from the internet.
Some vendors provide a free trial or free tutorials. Highly sophisticated and powerful
software packages are also available for purchase. These packages require robust hardware
systems to sustain extended use. Software and hardware must be capable of managing
the requirements of a variety of data formats and file sizes. A single satellite image
file can be 300 MB prior to enhancement processing. Once processed and enhanced, the
resulting data files will be large and will require storage for continued analysis. Because
of the size of these files, software and hardware can be pushed to their limits and may
crash, losing valuable information, so regularly save and back up your data files. Be sure
to properly match your software requirements with appropriate hardware capabilities.
(1) Satellite data can be displayed as an image on a computer monitor by an array of pixels, or
picture elements, containing digital numbers. The composition of the image is simply a
grid of continuous pixels, known as a raster image (Figure 2-27). The digital number (DN)
of a pixel is the result of the spatial, spectral, and radiometric averaging of reflected/emitted
radiation from a given area of ground cover (see below for information on spatial, spectral,
and radiometric resolution). The DN of a pixel is therefore the average radiance of the
surface area the pixel represents.
Figure 2-27. Figure illustrates the collection of raster data. Black grid (left) shows what area on the ground is
covered by each pixel in the image (right). A sensor measures the average spectrum from each pixel, recording
the photons coming in from that area. ASTER data of Lake Kissimmee, Florida, acquired 2001-08-18. Image
developed for Prospect (2002 and 2003).
(2) The value given to the DN is based on the brightness value of the radiation (see explanation
above and Figure 2-28). For most radiation, an 8-bit scale is used that corresponds to a
value range of 0–255 (Table 2-4). This means that 256 levels of brightness (DN values are
sometimes referred to as brightness values-𝐵𝑣 ) can be displayed, each representing the
intensity of the reflected/emitted radiation. On the image this translates to varying shades
of grays. A pixel with a brightness value of zero (𝐵𝑣 = 0) will appear black; a pixel with a
𝐵𝑣 of 255 will appear white (Figure 2-29). All brightness values in the range of 𝐵𝑣 = 1 to
254 will appear as increasingly brighter shades of gray. In Figure 2-30, the dark regions
represent water-dominated pixels, which have low reflectance/𝐵𝑣 , while the bright areas
are developed land (agricultural and forested), which has high reflectance.
Figure 2-29. Raster array and accompanying digital number (DN) values for a single band image.
Dark pixels have low DN values while bright pixels have high values. Modified from Natural
Resources Canada image
http://www.ccrs.nrcan.gc.ca/ccrs/learn/tutorials/fundam/chapter1/chapter1_7_e.html.
(1) Information pertaining to the minimum and maximum brightness (𝐿𝑚𝑖𝑛 and 𝐿𝑚𝑎𝑥
respectively) is usually found in the metadata (see Chapter 5). The equation for determining
radiance from the digital number is:
L = 𝐿𝑚𝑖𝑛 + (𝐿𝑚𝑎𝑥 − 𝐿𝑚𝑖𝑛) × DN / DNmax
where L is the spectral radiance, 𝐿𝑚𝑖𝑛 and 𝐿𝑚𝑎𝑥 are the minimum and maximum radiances the sensor can detect,
DN is the digital number of the pixel, and DNmax is the maximum DN of the radiometric scale (255 for 8-bit data).
(2) This conversion can also be used to enhance the visual appearance of an image by
reassigning the DN values so they span the full gray scale range (see Paragraph 5-20).
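A minimal sketch of this DN-to-radiance conversion follows; the function name and the Lmin/Lmax values are illustrative assumptions, since the real values come from the image metadata:

import numpy as np

def dn_to_radiance(dn, l_min: float, l_max: float, dn_max: int = 255):
    """Linear DN-to-radiance conversion using the metadata Lmin/Lmax values (8-bit data assumed)."""
    dn = np.asarray(dn, dtype=float)
    return l_min + (l_max - l_min) * dn / dn_max

# Hypothetical Lmin/Lmax values for one band; real values come from the image metadata.
print(dn_to_radiance([0, 128, 255], l_min=-1.5, l_max=152.1))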
i. Spectral Bands.
(1) Sensors collect wavelength data in bands. A number or a letter is typically assigned to a
band. For instance, radiation that spans 0.45 to 0.52 µm is designated as band 1 for
Landsat 7 data; in the microwave region radiation spanning 15 to 30 cm is termed the L-
band. Not all bands are created equally. Landsat band 1 (B1) does not represent the same
wavelengths as SPOT’s B1.
(2) Band numbers are not the same as sensor numbers. For instance Landsat 4 does not refer
to band 4. It instead refers to the fourth satellite sensor placed into orbit by the Landsat
program. This can be confusing, as each satellite program has a fleet of satellites (in or out
of commission at different times), and each satellite program will define bands differently.
Two different satellites from the same program may even be collecting radiation at a
slightly different wavelength range for the same band (Table 2-6). It is, therefore,
important to know which satellite program and which sensor collected the data.
Table 2-6 Landsat Satellites and Sensors
The following table lists Landsat satellites 1-7 and provides band information and pixel size. The same band number on different sensors does not
necessarily imply the same wavelength range. For example, notice that band 4 in Landsat 1-2 and 3 differs from band 4 in Landsat 4-5
and Landsat 7. Source: http://landsat.gsfc.nasa.gov/guides/LANDSAT-7_dataset.html#8.
Sensor   Band   Wavelength (µm)   Pixel size (m)
TM       1      0.45 to 0.52      30
         2      0.52 to 0.60      30
         3      0.63 to 0.69      30
         4      0.76 to 0.90      30
         5      1.55 to 1.75      30
         6      10.4 to 12.5      120
         7      2.08 to 2.35      30
j. Color in the Image. Computers are capable of imaging three primary colors: red, green,
and blue (RGB). This is different from the color system used by printers, which uses
magenta, cyan, yellow, and black. The color systems are unique because of differences in
the nature of the application of the color. In the case of color on a computer monitor, the
monitor is black and the color is projected (called additive color) onto the screen. Print
processes require the application of color to paper. This is known as a subtractive process
owing to the removal of color by other pigments. For example, when white light that
contains all the visible wavelengths hits a poster with an image of a yellow flower, the
yellow pigment will remove the blue and green and will reflect yellow. Hence, the process
is termed subtractive. The different color systems (additive vs. subtractive) account for
the dissimilarities in color between a computer image and the corresponding printed image.
(1) Similar to the gray scale, color can also be displayed as an 8-bit image with 256
levels of brightness. Dark pixels have low values and will appear black with some
color, while bright pixels will contain high values and will contain 100% of the
designated color. In Figure 2-31, the 7 bands of a Landsat image are separated to
show the varying DNs for each band.
Figure 2-31. Individual DNs can be identified in each spectral band of an image. In this ex-ample the
seven bands of a subset from a Landsat image are displayed. Image developed for Prospect (2002 and
2003).
(2) When displaying an image on a computer monitor, the software allows a user to
assign a band to a particular color (this is termed as “loading the band”). Because
there are merely three possible colors (red, green, and blue) only three bands of
spectra can be displayed at a time. The possible band choices coupled with the
three-color combinations creates a seemingly endless number of possible color
display choices.
(3) The optimal band choice for display will depend on the spectral information needed
(see Paragraph 2-6b(7)). The color you designate for each band is somewhat
arbitrary, though preferences and standards do exist. For example, a typical
color/band designation of red/green/blue in bands 3/2/1 of Landsat displays the
imagery as true-color. These three bands are all in the visible part of the spectrum,
and the imagery appears as we see it with our eyes (Figure 2-32a). In Figure 2-32b,
band 4 (B4) is displayed in the red (called “red-gun” or “red-plane”) layer of the
bands 4/3/2, and vegetation in the agricultural fields appear red due to the infrared
location on the spectrum. In Figure 2-32c, band 4 (B4) is displayed as green. Green
is a logical choice for band 4 as it represents the wavelengths reflected by
vegetation.
a. The true color image appears with these bands in the visible part of the spectrum.
b. Using the near infra-red (NIR) band (4) in the red gun, healthy vegetation appears
red in the imagery.
c. Moving the NIR band into the green gun and adding band 5 to the red gun
changes the vegetation to green.
Figure 2-32. Three band combinations of Landsat imagery of 3/2/1, 4/3/2, and 5/4/3 in the RGB.
Images developed for Prospect (2002 and 2003).
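As an illustrative sketch (not the software's actual "band loading" interface), the snippet below stacks three band arrays into an RGB display array; the band arrays here are random placeholders, and a 4/3/2 assignment gives the false-colour rendering described above:

import numpy as np

def make_composite(red_band, green_band, blue_band):
    """Stack three single-band arrays into an RGB display array, stretched to 0-255."""
    stack = np.dstack([red_band, green_band, blue_band]).astype(float)
    stretched = 255 * (stack - stack.min()) / (stack.max() - stack.min())
    return stretched.astype(np.uint8)

# Hypothetical band arrays; for a Landsat false-colour composite, load band 4 into the
# red plane, band 3 into the green plane and band 2 into the blue plane, as in the text.
b2 = np.random.randint(0, 256, (100, 100))
b3 = np.random.randint(0, 256, (100, 100))
b4 = np.random.randint(0, 256, (100, 100))
false_colour = make_composite(b4, b3, b2)   # 4/3/2 in RGB: vegetation appears red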
k. Interpreting the Image. When interpreting the brightness of a gray scale image (Figure 2-
33), the brightness simply represents the amount of reflectance. For bright pixels the
reflectance is high, while dark pixels represent areas of low reflectance. For example, in a
gray scale display of Landsat 7 band 4, the brightest pixels represent areas where there is a
high reflectance in the wavelength range of 0.76 to 0.90 µm. This can be interpreted to
indicate the presence of healthy vegetation (lawns and golf courses).
(1) A color composite can be somewhat difficult to interpret owing to the mixing of
color. Similar to gray scale, the bright regions have high reflectance, and dark areas
have low reflectance. The interpretation becomes more difficult when we combine
different bands of data to produce what is known as false-color composites (Figure
2-33).
(2) White and black are the end members of the band color mixing. White pixels in a
color composite represent areas where reflectance is high in all three of the bands displayed.
White is produced when 100% of each color (red, green, and blue) is
mixed in equal proportions. Black pixels are areas where there is an absence of
color due to the low DN or reflectance. The remaining color variations represent
the mixing of three band DNs. A magenta pixel is one that contains equal portions
of blue and red, while lacking green. Yellow pixels are those that are high in
reflectance for the bands in the green and red planes. (Go to Appendix C for a paper
model of the color cube/space.)
l. Data Resolution. A major consideration when choosing a sensor type is the definition of
resolution capabilities. “Resolution” in remote sensing refers to the ability of a sensor to
distinguish or resolve objects that are physically near or spectrally similar to other adjacent
objects. The term high or fine resolution suggests that there is a large degree of distinction
in the resolution. High resolution will allow a user to distinguish small, adjacent targets.
Low or coarse resolution indicates a broader averaging of radiation over a larger area (on
the ground or spectrally). Objects and their boundaries will be difficult to pinpoint in
images with coarse resolution. The four types of resolution in remote sensing include
spatial, spectral, radiometric, and temporal.
(a) An increase in spatial resolution corresponds to an increase in the ability to resolve one
feature physically from another. It is controlled by the geometry and power of the sensor
system and is a function of sensor altitude, detector size, focal size, and system
configuration.
(b) Spatial resolution is best described by the size of an image pixel. A pixel is a two-
dimensional square-shaped picture element displayed on a computer. The dimensions on
the ground (measured in meters or kilometers) projected in the instantaneous field of view
(IFOV) will determine the ratio of the pixel size to ground coverage. As an example, for a
SPOT image with 20- × 20-m pixels, one pixel in the digital image is equivalent to 20 m
square on the ground. To gauge the resolution needed to discern an object, the spatial
resolution should be half the size of the feature of interest. For example, if a project requires
the discernment of individual trees, the spatial resolution should be a minimum of 15 m. If
you need to know the percent of timber stands versus clearcuts, a resolution of 30 m will
be sufficient.
Table 2-7 Minimum image resolution required for various sized objects.
(2) Spectral Resolution. Spectral resolution is the size and number of wavelengths, intervals,
or divisions of the spectrum that a system is able to detect. Fine spectral resolution
generally means that it is possible to resolve a large number of similarly sized wavelengths,
as well as to detect radiation from a variety of regions of the spectrum. A coarse resolution
refers to large groupings of wavelengths and tends to be limited in the frequency range.
(a) Temporal resolution refers to the frequency of data collection. Data collected on
different dates allow for a comparison of surface features through time. If a project requires an
assessment of change, or change detection, it is important to know: 1) how many data sets already
exist for the site; 2) how far back in time the data set ranges; and 3) how frequently the satellite
returns to acquire the same location.
(b) Most satellite platforms will pass over the same spot at regular intervals that range
from days to weeks, depending on their orbit and spatial resolution (see Chapter 3). A few
examples of projects that require change detection are the growth of crops, deforestation, sediment
accumulation in estuaries, and urban development.
(5) Determine the Appropriate Resolution for the Project. Increasing resolution tends to lead
to more accurate and useful information; however, this is not true for every project. The
downside to increased resolution is the need for increased storage space and more powerful
hardware and software. High-resolution satellite imagery may not be the best choice when
all that is needed is good quality aerial photographs. It is, therefore, important to determine
the minimum resolution requirements needed to accomplish a given task from the outset.
This may save both time and funds.
2-8 Aerial Photography. A traditional form of mapping and surface analysis by remote sensing
is the use of aerial photographs. Low altitude aerial photographs have been in use since the Civil
War, when cameras mounted on balloons surveyed battlefields. Today, they provide a vast amount
of surface detail from a low to high altitude, vertical perspective. Because these photographs have
been collected for a longer period of time than satellite images, they allow for greater temporal
monitoring of spatial changes. Roads, buildings, farmlands, and lakes are easily identifiable and,
with experience, surface terrain, rock bodies, and structural faults can be identified and mapped.
In the field, photographs can aid in precisely locating target sites on a map.
a. Aerial photographs record objects in the visible and near infrared and come in a variety
of types and scales. Photos are available in black and white, natural color, false color infrared, and
low to high resolution.
c. In addition to the actual print or digital image, aerial photographs typically include
information pertaining to the photo acquisition. This information ideally includes the date, flight,
exposure, origin/focus, scale, altitude, fiducial marks, and commissioner (Figure 2-34). If the scale
is not documented on the photo, it can be determined by taking the ratio of the distance between two
objects measured on the photo to the distance between the same two objects calculated from
measurements taken from a map.
d. The measurement is best taken from one end of the photo to the other, passing through
the center (because error in the image increases away from the focus point). For precision, it is
best to average a number of ratios from across the image.
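A worked sketch of the scale calculation described above, with hypothetical measurements (the 0.05 m photo distance and 500 m ground distance are assumptions):

def photo_scale(photo_distance_m: float, ground_distance_m: float) -> float:
    """Scale denominator of a photo from the same distance measured on the photo and on the ground (or a map)."""
    return ground_distance_m / photo_distance_m

# Hypothetical measurements: two road intersections 0.05 m apart on the photo and 500 m apart on the ground.
print(f"1:{photo_scale(0.05, 500):.0f}")   # 1:10000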
f. Aerial photos are shot in a sequence with 60% overlap; this creates a stereo view when
two photos are viewed simultaneously. Stereoscopic viewing geometrically corrects photos by
eliminating errors attributable to camera tilt and terrain relief. Images are most easily seen in stereo
by viewing them through a stereoscope. With practice it is possible to see in stereo without the
stereoscope. This view will produce a three-dimensional image, allowing you to see topographic
relief and resistant vs. recessive rock types.
g. To maintain accuracy it is important to correlate objects seen in the image with the
actual object in the field. This verification is known as ground truth. Without ground truth you
may not be able to differentiate two similarly toned objects. For instance, two very different but
recessive geologic units could be mistakenly grouped together. Ground truth will also establish
the level of accuracy that can be attributed to the maps created based solely on photo
interpretations.
3-6 Airborne Digital Sensors. Airborne systems with high resolution digital sensors are
becoming available through commercial companies. These systems are
equipped with onboard GPS for geographic coordinates of acquisitions, and real time image
processing. Additionally, by the time the plane lands on the ground, the data can be copied to
CD-ROM and be available for delivery to the customer with a basic level of processing. The data
at this level would still require image calibration and additional processing. See Appendix F for a list of airborne system
sensors.
3-7 Airborne Geometries. There are several ways in which airborne image geometry can be
controlled. Transects should always be flown parallel to the principal plane of the sun, such that
the BRDF (bi-directional reflectance distribution function) is symmetrical on either side of the
nadir direction. The pilot should attempt to keep the plane level and fly straight line transects. But
since there are always some attitude disturbances, GPS and IMU (inertial measuring unit) data can
be used in post-processing the image data to take out this motion. The only way of guaranteeing
nadir look imagery is to have the sensor mounted on a gyro-stabilized platform. Without this, some
angular distortion of the imagery will result even if it is post-processed with the plane’s attitude
data and an elevation model (i.e., sides of buildings and trees will be seen and the areas hidden by
these targets will not be imaged). Shadow on one side of the buildings or trees cannot be eliminated
and the dynamic range of the imagery may not be great enough to pull anything out of the shadow
region. The only way to minimize this effect is to acquire the data at or near solar noon.
a. Planning airborne acquisitions requires both business and technical skills. For example,
to contract with an airborne image acquisition company, a sole source claim must be made that
this is the only company that offers these special services. If not registered as a prospective
independent contractor for a Federal governmental agency, the company may need to file a Central
Contractor Registration (CCR) Application, phone (888-227-2423) and request a DUNS number
from Dun & Bradstreet, phone (800-333-0505). After this, it is necessary for the contractee to
advertise for services in the Federal Business Opportunities Daily (FBO Daily)
http://www.fbodaily.com. Another way of securing an airborne contractor is by riding an existing
Corps contract; the St. Louis District has several in place. A third way is by paying another
governmental agency, which has a contract in place. If the contractee is going to act as the lead for
a group acquisition among several other agencies, it may be necessary to execute some
Cooperative Research and Development Agreements (CRDAs) between the contractee and the
other agencies. As a word of caution, carefully spell out in the legal document what happens if the
contractor, for any reason, defaults on any of the image data collection areas. A data license should
be spelled out in the contract between the parties.
b. Technically, maps must be provided to the contractor of the image acquisition area.
They must be in the projection and datum required, for example Geographic and WGS84 (World
Geodetic System is an earth fixed global reference frame developed in 1984). The collection flight
lines should be drawn on the maps, with starting and ending coordinates for each straight-line
segment. If an area is to be imaged then the overlap between flight lines must be specified, usually
20%. If the collection technique is that of overlapping frames then both the sidelap and endlap
must be specified, between 20 and 30%. It is a good idea to generate these maps as vector
coverages because they are easily changed when in that format and can be inserted into formal
reports with any caption desired later.
The maximum angle allowable from nadir should be specified. Other technical considerations that
will affect the quality of the resulting imagery include: What sun angle is allowable? What lens
focal length is allowable? What altitude will the collection be flown? Will the imagery be flown
at several resolutions or just one? Who will do the orthorectification and mosaicing of the imagery?
Will DEMs, DTMs, or DSMs be used in the orthorectification process? How will unseen and
shadow areas be treated in the final product? When planning airborne acquisitions, these questions
should be part of the decision process.
a. The Camera. The concept of imaging the Earth’s surface has its roots in the
development of the camera, a black box housing light sensitive film. A small aperture allows light
reflected from objects to travel into the black box. The light then “exposes” film, positioned in the
interior, by activating a chemical emulsion on the film surface. After exposure, the film negative
(bright and dark are reversed) can be used to produce a positive print or a visual image of a scene.
b. Aerial Photography. The idea of mounting a camera on platforms above the ground for
a “birds-eye” view came about in the mid-1800s. At that time there were few objects that flew or
hovered above ground. During the US Civil War, cameras were mounted on balloons to survey
battlefield sites. Later, pigeons carrying cameras were employed
(http://www2.oneonta.edu/~baumanpr/ncge/rstf.htm), a platform with obvious disadvantages. The
use of balloons and other platforms created geometric problems that were eventually solved by the
development of a gyro-stabilized camera mounted on a rocket. This gyro-stabilizer was created by
the German scientist Maul and was launched in 1912.
c. First Satellites. The world’s first artificial satellite, Sputnik 1, was launched on 4
October 1957 by the Soviet Union. It was not until NASA’s meteorological satellite TIROS-1
was launched that the first satellite images were produced
(http://www.earth.nasa.gov/history/tiros/tiros1.html). Working on the same principles as the
camera, satellite sensors collect reflected radiation in a range of spectra and store the data for
eventual image processing (see above, this chapter).
d. NASA’s First Weather Satellites. NASA’s first satellite missions involved study of the
Earth’s weather patterns. TIROS (Television Infrared Operational Satellite) missions launched
10 experimental satellites in the early 1960s in an effort to prepare for a permanent weather
bureau satellite system known as TOS (TIROS Operating System). TIROS-N (next generation)
satellites currently monitor global weather and variations in the Earth’s atmosphere. The goal of
TIROS-N is to acquire high resolution, diurnal data that includes vertical profile measurements
of temperature and moisture.
e. Landsat Program. The 1970s brought the introduction of the Landsat series with the
launching of ERTS-1 (also known as Landsat 1) by NASA. The Landsat program was the first
attempt to image whole earth resources, including terrestrial (land based) and marine resources.
Images from the Landsat series allowed for detailed mapping of landmasses on a regional and
continental scale.
(1) The Landsat imagery continues to provide a wide variety of information that is highly
useful for identifying and monitoring resources, such as fresh water, timberland, and minerals.
Landsat imagery is also used to assess hazards such as floods, droughts, forest fire, and pollution.
Geographers have used Landsat images to map previously unknown mountain ranges in
Antarctica and to map changes in coastlines in remote areas.
(2) A notable event in the history of the Landsat program was the addition of TM
(Thematic Mapper) first carried by Landsat 4 (for a summary of Landsat satellites see
http://geo.arc.nasa.gov/sge/landsat/lpsum.html). The Thematic Mapper provides a resolution as
low as 30 m, a great improvement over the 70-m resolution of earlier sensors. The TM device
collects reflected radiation in the visible, infrared (IR), and thermal IR regions of the spectrum.
(3) In the late 1970s, the regulation of Landsat was transferred from NASA to NOAA,
and was briefly commercialized in the 1980s. The Landsat program is now operated by the USGS
EROS Data Center (US Geological Survey Earth Resources Observation Systems; see
http://landsat7.usgs.gov/index.html).
(1) SPOT 1, 2, and 3 offer both panchromatic data (P or PAN) and three bands of
multispectral (XS) data. The panchromatic data span the visible spectrum without the blue
(0.51-0.73 µm) and maintain a 10-m resolution. The multispectral data provide 20-m
resolution, broken into three bands: Band 1 (Green) spans 0.50–0.59 µm, Band 2 (Red)
spans 0.61–0.68 µm, and Band 3 (Near Infrared) spans 0.79–0.89 µm. SPOT 4 also supplies
a 20-m resolution shortwave infrared (mid-IR) band (B4) covering 1.58 to 1.75 µm.
SPOT 5, launched in spring 2002, provides color imagery, elevation models, and an
impressive 2.5-m resolution. It houses scanners that collect panchromatic data at 5 m
resolution and four band multispectral data at 10-m resolution (see Appendix D-“SPOT”
file).
(2) SPOT 3 was decommissioned in 1996. SPOT 1, 2, 4, and 5 are operational at the time of
this writing. For information on the SPOT satellites go to
http://www.spotimage.fr/home/system/introsat/seltec/welcome.htm.
g. Future of Remote Sensing. The improved availability of satellite images coupled with the ease
of image processing has led to numerous and creative applications. Remote sensing has
brought about dramatic changes in the methodology associated with studying earth processes
on both regional and global scales. Advancements in sensor resolution, particularly spatial,
spectral, and temporal resolution, broaden the possible applications of satellite data.
(1) Government agencies around the world are pushing to meet the demand for reliable
and continuous satellite coverage. Continuous operation improves the temporal data needed
to assess local and global change. Researchers are currently able to perform a 30-year temporal
analysis using satellite images on critical areas around the globe. This time frame can be extended
back with the incorporation of digital aerial photographs.
(2) Remote sensing has established itself as a powerful tool in the assessment and management
of U.S. lands. The Army Corps of Engineers has already incorporated this technology into
its nine business practice areas, demonstrating the tremendous value of remote sensing in
civil works projects.
(1) Spatial Frequency. Spatial frequency describes the pattern of digital values observed
across an image. Images with little contrast (very bright or very dark) have zero spatial frequency.
Images with a gradational change from bright to dark pixel values have low spatial frequency;
while those with large contrast (black and white) are said to have high spatial frequency. Images
can be altered from a high to low spatial frequency with the use of convolution methods.
(2) Convolution.
(a) Convolution is a mathematical operation used to change the spatial frequency of digital
data in the image. It is used to suppress noise in the data or to exaggerate features of interest. The
operation is performed with the use of a spatial kernel. A kernel is an array of digital number values
that form a matrix with odd numbered rows and columns (Table 5-2). The kernel values, or
coefficients, are used to average each pixel relative to its neighbor across the image. The output
data set will represent the averaging effect of the kernel coefficients. As a spatial filter, convolution
can smooth or blur images, thereby reducing image noise. In feature detection, such as an edge
enhancement, convolution works to exaggerate the spatial frequency in the image. Kernels can be
reapplied to an image to further smooth or exaggerate spatial frequency.
(b) Low pass filters apply a small gain to the input data (Table 5-2a). The resulting output
data will decrease the spatial frequency by de-emphasizing relatively bright pixels. Two types of
low pass filters are the simple mean and center-weighted mean methods (Table 5-2a and b). The
resultant image will appear blurred. Alternatively, high pass frequency filters (Table 5-2c)
increase image spatial frequency. These types of filters exaggerate edges without reducing image
details (an advantage over the Laplacian filter discussed below).
(a) The Laplacian filter detects discrete changes in spectral frequency and is used for
highlighting edge features in images. This type of filter works well for delineating linear features,
such as geologic strata or urban structures. The Laplacian is calculated by an edge enhancement
kernel (Table 5-2d and e); the middle number in the matrix is much higher or lower than the
adjacent coefficients. This type of kernel is sensitive to noise and the resulting output data will
exaggerate the pixel noise. A smoothing convolution filter can be applied to the image in advance
to reduce the edge filter's sensitivity to data noise.
The Convolution Method
Convolution is carried out by overlaying a kernel onto the pixel image and centering
its middle value over the pixel of interest. The kernel is first placed above the pixel
located at the top left corner of the image and moved from top to bottom, left to right.
Each kernel position will create an output pixel value, which is calculated by
multiplying each input pixel value with the kernel coefficient above it. The product
of the input data and kernel is then averaged over the array (sum of the product
divided by the number of pixels evaluated); the output value is assigned this average.
The kernel then moves to the next pixel, always using the original input data set for
calculating averages. Go to http://www.cla.sc.edu/geog/rslab/Rscc/rscc-frames.html
for an in-depth description and examples of the convolution method.
The pixels at the edges create a problem owing to the absence of neighboring pixels.
This problem can be solved by inventing input data values. A simpler solution for
this problem is to clip the bottom row and right column of pixels at the margin.
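A minimal sketch of the sliding-kernel averaging described above follows (the function name is an assumption, and edge pixels are simply copied rather than invented). Applied with the simple mean kernel of Table 5-2a to a 7 × 7 field of 1s containing a single bright pixel, it reproduces the smoothing pattern of the raw/output example:

import numpy as np

def convolve_mean(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a kernel over the image and assign each output pixel the average of the
    coefficient-weighted neighbourhood, as described above. Edge rows/columns are
    copied from the input rather than invented."""
    rows, cols = image.shape
    k = kernel.shape[0] // 2
    output = image.astype(float).copy()
    for r in range(k, rows - k):
        for c in range(k, cols - k):
            window = image[r - k:r + k + 1, c - k:c + k + 1]
            output[r, c] = np.sum(window * kernel) / kernel.size
    return output

# Simple 3 x 3 mean (low pass) kernel, as in Table 5-2a
mean_kernel = np.ones((3, 3))
raw = np.ones((7, 7))
raw[3, 3] = 10                      # single bright pixel, as in the raw data example
print(convolve_mean(raw, mean_kernel))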
(b) The Laplacian filter measures the changes in spectral frequency or pixel intensity. In
areas of the image where the pixel intensity is constant, the filter assigns a digital number value of
0. Where there are changes in intensity, the filter assigns a positive or negative value to designate
an increase or decrease in the intensity change. The resulting image will appear black and white,
with white pixels defining the areas of changes in intensity.
a. Low Pass: simple mean kernel.
1 1 1
1 1 1
1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 2 2 2 1 1
1 1 1 10 1 1 1 1 1 2 2 2 1 1
1 1 1 1 1 1 1 1 1 2 2 2 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
Raw data Output data
b. Low Pass: center-weighted mean kernel.
1 1 1
1 2 1
1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 2 2 2 1 1
1 1 1 10 1 1 1 1 1 2 3 2 1 1
1 1 1 1 1 1 1 1 1 2 2 2 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
Raw data Output data
c. High Pass kernel.
-1 1 -1
-1 8 -1
-1 1 -1
10 10 10 10 10 10 10 0 0 0 1 1 0 0
10 10 10 10 10 10 10 0 0 0 1 1 0 0
10 10 10 10 10 10 10 0 0 5 -5 5 0 0
10 10 10 15 10 10 10 0 0 5 40 5 0 0
10 10 10 10 10 10 10 0 0 5 -5 5 0 0
10 10 10 10 10 10 10 0 0 0 1 1 0 0
10 10 10 10 10 10 10 0 0 0 1 1 0 0
Raw data Output data
d. Direction Filter: north-south component kernel
-1 2 -1
-2 4 -2
-1 2 -1
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
1 1 1 2 1 1 1 0 0 -4 8 -4 0 0
Raw data Output data
e. Direction Filter: east-west component kernel
-1 -2 -1
2 4 2
-1 -2 -1
1 1 1 2 1 1 1 0 0 0 0 0 0 0
1 1 1 2 1 1 1 0 0 0 0 0 0 0
1 1 1 2 1 1 1 0 0 0 0 0 0 0
1 1 1 2 1 1 1 0 0 0 0 0 0 0
1 1 1 2 1 1 1 0 0 0 0 0 0 0
1 1 1 2 1 1 1 0 0 0 0 0 0 0
1 1 1 2 1 1 1 0 0 0 0 0 0 0
Raw data Output data
(a) Minimum Mean Distance. Minimum distance to the mean is a simple computation that
classifies pixels based on their distance from the mean of each training class.
It is determined by plotting the brightness of the unassigned pixel and calculating its Euclidean
distance (using the Pythagorean theorem) to each training class mean. Each pixel is assigned to the
training class for which it has the minimum distance. The user designates a threshold for an acceptable
distance; pixels with distance values above the designated threshold will be classified as unknown.
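A minimal sketch of the minimum-distance-to-mean rule follows; the class means, threshold and function name are illustrative assumptions:

import numpy as np

def minimum_distance_classify(pixels, class_means, threshold=None):
    """Assign each pixel (a row of band values) to the training class whose mean is nearest
    in Euclidean distance; distances above the optional threshold are labelled -1 (unknown)."""
    pixels = np.asarray(pixels, dtype=float)          # shape (n_pixels, n_bands)
    means = np.asarray(class_means, dtype=float)      # shape (n_classes, n_bands)
    distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    if threshold is not None:
        labels[distances.min(axis=1) > threshold] = -1
    return labels

# Hypothetical two-band training means (e.g. water, vegetation, bare soil)
means = [[10, 5], [40, 80], [90, 60]]
print(minimum_distance_classify([[12, 7], [85, 55], [200, 200]], means, threshold=50))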
(4) Assessing Error. Accuracy can be quantitatively determined by an error matrix (Table
5-3). The matrix establishes the level of errors due to omission (exclusion error) and commission
(inclusion error), and can tabulate an overall total accuracy. The error matrix lists the number of
pixels found within a given class. The rows in Table 5-3 list the pixels classified by the image
software. The columns list the number of pixels in the reference data (or reported from field data).
Omission error reflects the probability of a pixel being accurately classified; it is a comparison
to a reference. Commission error determines the probability that a pixel represents the class to which
it has been assigned. The total accuracy is measured by calculating the proportion of correctly
classified pixels relative to the total number of pixels tested (Total = total correct/total tested).
Table 5-3 Omission and Commission Accuracy Assessment Matrix. Taken from Jensen (1996).
Reference Data
Producer’s Accuracy (measure of omission error)          User’s Accuracy (measure of commission error)
Residential = 70/73 = 96% (4% omission error)            Residential = 70/88 = 80% (20% commission error)
Commercial  = 55/60 = 92% (8% omission error)            Commercial  = 55/58 = 95% (5% commission error)
Wetland     = 99/103 = 96% (4% omission error)           Wetland     = 99/99 = 100% (0% commission error)
Forest      = 37/50 = 74% (26% omission error)           Forest      = 37/41 = 90% (10% commission error)
Water       = 121/121 = 100% (0% omission error)         Water       = 121/121 = 100% (0% commission error)
Example error matrix taken from Jensen (1986). Data are the result of an accuracy assessment of
Landsat TM data.
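The producer's, user's and overall accuracies above all come from the same error-matrix arithmetic, sketched below for a hypothetical two-class matrix (the function name and example counts are assumptions, chosen so that the first class mirrors the Residential figures):

import numpy as np

def accuracy_from_error_matrix(matrix):
    """Producer's, user's and overall accuracy from a square error (confusion) matrix
    whose rows are classified pixels and whose columns are reference pixels."""
    matrix = np.asarray(matrix, dtype=float)
    diagonal = np.diag(matrix)
    producers = diagonal / matrix.sum(axis=0)   # per class: correct / reference total (omission)
    users = diagonal / matrix.sum(axis=1)       # per class: correct / classified total (commission)
    overall = diagonal.sum() / matrix.sum()
    return producers, users, overall

# Hypothetical 2-class matrix; the Jensen example above follows the same arithmetic.
example = [[70, 18],
           [3, 55]]
print(accuracy_from_error_matrix(example))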
Image classification uses the brightness values in one or more spectral bands, and
classifies each pixel based on its spectral information
(a) Steps Required for Unsupervised Classification. The user designates 1) the number of
classes, 2) the maximum number of iterations, 3) the maximum number of times a pixel can be
moved from one cluster to another with each iteration, 4) the minimum distance from the mean,
and 5) the maximum standard deviation allowable. The program will iterate and recalculate the
cluster data until it reaches the iteration threshold designated by the user. Each cluster is chosen
by the algorithm and will be evenly distributed across the spectral range maintained by the pixels
in the scene. The resulting classification image (Figure 5-20) will approximate that which would
be produced with the use of a minimum mean distance classifier (see above, “classification
algorithm”). When the iteration threshold has been reached the program may require you to
rename and save the data clusters as a new file. The display will automatically assign a color to
each class; it is possible to alter the color assignments to match an existing color scheme (i.e., blue
= water, green = vegetation, red = urban) after the file has been saved. In the unsupervised
classification process, one class of pixels may be mixed and assigned the color black. These pixels
represent values that did not meet the requirements set by the user. This may be attributable to
spectral “mixing” represented by the pixel.
(6) Evaluating Pixel Classes. The advantages of both the supervised and unsupervised
classification lie in the ease with which programs can perform statistical analysis. Once pixel
classes have been assigned, it is possible to list the exact number of pixels in each representative
class (Figure 5-17, classified column). As the size of each pixel is known from the metadata, the
metric area of each class can be quickly calculated. For example, you can very quickly determine
the percentage of fallow field area versus productive field area in an agricultural scene.
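A small sketch of this area calculation follows (the function name and pixel counts are assumptions; 30 m is the Landsat TM pixel size used earlier in this chapter):

def class_area_hectares(pixel_count: int, pixel_size_m: float) -> float:
    """Ground area represented by a class, given its pixel count and square pixel size in metres."""
    return pixel_count * pixel_size_m ** 2 / 10_000.0

# Hypothetical counts from a classified 30-m scene
fallow, productive = 12_500, 37_500
total = fallow + productive
print(class_area_hectares(fallow, 30))               # hectares of fallow fields
print(100 * fallow / total, "% of classified area")  # percentage, as described above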