
SVG 203: Introductory Remote Sensing

Purpose of these Lecture Notes.


a. This manual reviews the theory and practice of remote sensing and image processing. As a
spatial learning tool, remote sensing provides a cost-effective means of measuring,
monitoring, and mapping objects at or near the surface of the Earth.
b. A goal of the Note is to enable effective use of remotely sensed data for applications that
support sustainable development.

c. A further goal is to create awareness that the practice of remote sensing has been greatly
simplified by useful and affordable commercial software, which has advanced considerably in
recent years, and that satellite and airborne platforms provide local and regional perspective
views of the Earth's surface. These views come in a variety of resolutions and are highly
accurate depictions of surface objects. Satellite images and image processing allow us to better
understand and evaluate a variety of Earth processes occurring on the surface and in the
hydrosphere, biosphere, and atmosphere.

Contents of this Note


a. This note provides both theoretical and practical information to aid acquiring, processing, and
interpreting remotely sensed data. It also provides reference materials and sources for further
study and information.

b. Included in this work is a background of the principles of remote sensing, with a focus on the
physics of electromagnetic waves and the interaction of electromagnetic waves with
objects. Aerial photography and history of remote sensing are briefly discussed.

c. A compendium of sensor types is presented together with practical information on obtaining
image data.

d. The fundamentals of image processing are presented along with a summary of map projection
and information extraction. Helpful examples and tips are presented to clarify concepts and to
enable the efficient use of image processing. Examples focus on the use of images from the
Landsat series of satellite sensors, as this series has the longest and most continuous record of
Earth surface multispectral data.

e. Case Study Examples of remote sensing applications are presented. These include land use,
forestry, geology, hydrology, geography, meteorology, oceanography, and archeology.

Source Materials
This note is prepared with materials from the following sources:
a. US Army Corps of Engineers Document EM 1110-2-2907, 1 October 2003.
“Electromagnetic Radiation is Simply Mysterious”

Chapter 1: Principles of Remote Sensing Systems


INTRODUCTION

Remote sensing is broadly defined as the science and the practice of collecting
information about objects, areas or phenomena from a distance without being in
physical contact with them. In the present context, the definition of remote sensing
is restricted to mean the process of acquiring information about any object without
physically contacting it in any way regardless of whether the observer is
immediately adjacent to the object or millions of miles away.

The major medium of contact or means of communication is the electromagnetic energy (EM energy)
either reflected or emitted by such objects.

The intensity and direction of such EM energy emanating from a tiny portion of the object are
collected by a suitable sensing device and recorded in digital or analogue form.

By studying the patterns and distribution of such recorded radiation, it is possible to infer the
characteristics of such objects, both in terms of physical dimensions and nature.

a. Remote sensing describes the collection of data about an object, area, or phenomenon
from a distance with a device that is not in contact with the object. More commonly, the
term remote sensing refers to imagery and image information derived by both airborne and
satellite platforms that house sensor equipment. The data collected by the sensors are in the
form of electromagnetic energy (EM). Electromagnetic energy is the energy emitted,
absorbed, or reflected by objects. Electromagnetic energy is synonymous with many terms,
including electromagnetic radiation, radiant energy, energy, and radiation.

b. Sensors carried by platforms are engineered to detect variations of emitted and reflected
electromagnetic radiation. A simple and familiar example of a platform carrying a sensor is
a camera mounted on the underside of an airplane. The airplane may be a high or low altitude
platform while the camera functions as a sensor collecting data from the ground. The data in
this example are reflected electromagnetic energy commonly known as visible light. Likewise,
space borne platforms known as satellites, such as Landsat Thematic Mapper (Landsat TM) or
SPOT (Satellite Pour l'Observation de la Terre), carry a variety of sensors. Similar to the
camera, these sensors collect emitted and reflected electromagnetic energy, and are capable of
recording radiation from the visible and other portions of the spectrum. The type of platform
and sensor employed will control the image area and the detail viewed in the image, and
additionally they record characteristics of objects not seen by the human eye.

c. For this manual, remote sensing is defined as the acquisition, processing, and analysis of surface
and near surface data collected by airborne and satellite systems.

BASIC PRINCIPLES OF REMOTE SENSING

Remote sensing employs electromagnetic energy and to a great extent relies on the
interaction of electromagnetic energy with matter (the object). It refers to the sensing
of EM radiation that is reflected, scattered, or emitted from the object.

The Physical Basis of Remote Sensing

Electromagnetic energy has the characteristic that every object interacts with it to produce a
unique pattern of recorded radiation. In other words, no two objects interact with EM energy in
the same way unless they are the same object. This property of EM radiation is exploited by
collecting reflected radiant energy from different objects and, knowing that they produce unique
patterns, studying these patterns to infer the nature of the objects involved. This is the
physical basis of remote sensing and is fundamental to the understanding of the technology.

Practically, sensors mounted on a suitable platform collect or measure the amounts of energy
reflected from or emitted by the Earth's surface objects, these measurements being made at a
large number of points distributed over the area. The sensors scan the area in a systematic
manner to build up the energy levels of the area, which are recorded in digital form as an image
of the area. A computer can then be used to display, enhance and manipulate the image.

Figure 1: Digital image formation concept. EM energy from a source illuminates the target object
with i(x, y); the object reflects a fraction r(x, y); and the sensor records the product
I(x, y) = i(x, y) * r(x, y) as the image.
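As a concrete illustration of Figure 1, the short sketch below multiplies an assumed illumination field i(x, y) by an assumed reflectance field r(x, y) to form I(x, y), then scales the result to 8-bit digital numbers the way a sensor might quantize the recorded energy. The array sizes, values, and scaling are illustrative assumptions, not part of these notes.

```python
# A minimal sketch of the image-formation concept in Figure 1:
# I(x, y) = i(x, y) * r(x, y)  (recorded energy = illumination x reflectance).
import numpy as np

rows, cols = 4, 4
illumination = np.full((rows, cols), 1000.0)  # assumed incident energy i(x, y), arbitrary units
reflectance = np.random.default_rng(0).uniform(0.05, 0.6, (rows, cols))  # assumed r(x, y), 0 to 1

image = illumination * reflectance            # I(x, y), the energy levels built up by the sensor

# Scale to 8-bit digital numbers (0-255), one common way an image is stored digitally
digital_numbers = np.round(255 * image / image.max()).astype(np.uint8)
print(digital_numbers)
```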

Spatial patterns evident in remotely sensed images are then interpreted in terms of geographical
variation in the nature of the material forming the surface of the Earth; such materials may be
vegetation, exposed soil, rock, or water.

Electromagnetic energy has the characteristic that every object interacts with it to produce
radiation having a unique pattern of brightness values which can be recorded. In other words,
no two objects interact with EM energy in exactly the same way unless they are the same object.

This property of EM radiation is exploited to infer the nature of objects by collecting and
analyzing the reflected/emitted energy from them, which, intuitively, can be viewed as made up
of a large collection of tiny dots or squares (pixels).

The assembly of measurements made at a large number of points distributed over the target can
be displayed, enhanced and manipulated by the computer.

Spatial patterns and brightness variations evident in the data are then interpreted in terms of
the geometric and descriptive characteristics of the material forming the surface of the object.

This is the physical basis of remote sensing and is fundamental to the understanding of the
technology. In this lecture, our emphasis will be on methods of image metrology, that is,
extracting geometric information from images. The task of image interpretation, or extracting
thematic information, is deferred to another monograph. Note that for image metrology only the
spatial location of points (pixels) is used, while the digital number of the pixel is used in
image interpretation work.

Basic Components of a Remote Sensing System.


a. The overall process of remote sensing can be broken down into five components. These
components are: 1) an energy source; 2) the interaction of this energy with particles in the
atmosphere; 3) subsequent interaction with the ground target; 4) energy recorded by a sensor
as data; and 5) data processing and display digitally for visual and numerical interpretation.

b. Primary components of remote sensing are as follows:

• Electromagnetic energy is emitted from a source.


• This energy interacts with particles in the atmosphere.
• Energy interacts with surface objects.
• Energy is detected and recorded by the sensor.
• Data are displayed digitally for visual and numerical interpretation on a computer.

(See Figure 1: Digital image formation concept.)

CONCEPTUAL ARRANGEMENT OF A REMOTE SENSING SYSTEM

A remote sensing system using electromagnetic radiation consists of five main components, as
shown below: (1) a source (radiation generator), (2) interactions with the earth's surface,
(3) interaction with the atmosphere, (4) a sensor, and (5) data storage and processing.

An idealized remote sensing system consists of the following stages (Fig. 4.1):

1. Energy source
2. Propagation of energy through the atmosphere
3. Energy interaction with the earth's surface features
4. Airborne/spaceborne sensors receiving the reflected and emitted energy
5. Transmission of data to an earth station and generation of data products
6. Multiple data users
1. The energy source: The uniform energy source provides energy over all wavelengths. Passive
RS systems rely on the sun as the strongest source of EM energy and measure energy that is
either reflected and/or emitted from the earth's surface features, whereas active RS systems
use their own source of EM energy. The source of electromagnetic radiation may be natural, like
the sun's radiated light reflected by the earth or the Earth's emitted heat, or it may be
man-made, like microwave radar. Any object above absolute zero (0 K) generates EM energy; the
level of such generated energy depends on the body temperature.

2. Propagation of energy through the atmosphere: The EM energy from the source passes through
the atmosphere on its way to the earth's surface and, after reflection from the earth's surface,
passes through the atmosphere again on its way to the sensor. The atmosphere modifies the
intensity and spectral distribution of the energy to some extent, and this modification varies
particularly with the wavelength. Atmospheric interaction: electromagnetic energy passing
through the atmosphere is distorted and scattered.

3. Interaction of energy with the earth's surface features: The interaction of EM energy with
the earth's surface features generates reflected and/or emitted signals (spectral response
patterns or signatures). The spectral response patterns play a central role in the detection,
identification and analysis of earth surface materials. The amount and characteristics of
radiation emitted or reflected from the Earth's surface depend upon the characteristics of the
objects on the Earth's surface. The substance here is that there must be an object interacting
with the radiant energy.

4. Airborne/spaceborne sensors: Sensors are electromagnetic instruments designed to receive and
record retransmitted energy. They are mounted on satellites, aeroplanes or even balloons. The
sensors are highly sensitive to wavelengths, yielding data on the brightness of the object as a
function of wavelength. The EM radiation that has interacted with the surface of the earth and
the atmosphere is recorded by a sensor, for example a radiometer or camera, most commonly in
digital format. A suitable radiometric scale is used to rate the energy level received from a
unit of the earth's surface area.

Fig. 4.1 Idealised remote sensing system

5. Transmission of data to the earth station and data product generation: The data from the
sensing system are transmitted to the ground-based earth station along with the telemetry data.
The real-time (instantaneous) data handling system consists of high-density data tapes for
recording and visual devices (such as television) for quick-look displays. The data products are
mainly classified into two categories: (i) pictorial or photographic products (analogue), and
(ii) digital products.

Data storage and processing: the collected data, often in digital form, are organized and stored
for subsequent use for various purposes. The type of processing performed depends on the
intended application.

6. Multiple data users: The multiple data users are those who have in-depth knowledge, both of
their respective disciplines and of remote sensing data and analysis techniques. The same set of
data becomes various forms of information for different users according to their understanding
of their field and their interpretation skills.

It is helpful to recall this conceptual arrangement in any discussion of remote sensing
technique.

PASSIVE AND ACTIVE REMOTE SENSING SYSTEMS


Remote sensing systems may also be split into "active" sensing (i.e., when a signal is emitted
by the sensor and its reflection from the object is detected by the sensor) and "passive"
sensing (i.e., when the reflection of sunlight is detected by the sensor).

Passive sensors gather radiation that is emitted or reflected by the object or surrounding
areas. Reflected sunlight is the most common source of radiation
surrounding areas. Reflected sunlight is the most common source of radiation
measured by passive sensors. Examples of passive remote sensors include film
photography, infrared, charge-coupled devices, and radiometers. Active collection,
on the other hand, emits energy in order to scan objects and areas whereupon a
sensor then detects and measures the radiation that is reflected or backscattered
from the target. RADAR and LiDAR are examples of active remote sensing where
the time delay between emission and return is measured, establishing the location,
speed and direction of an object.
Chapter 2: Electromagnetic Energy and Sources

ELECTROMAGNETIC ENERGY

It is a form of energy that moves with the velocity of light (3 × 10⁸ m/s) in a harmonic pattern
consisting of sinusoidal waves, equally and repetitively spaced in time. It has two fields: (i)
an electric field and (ii) a magnetic field, the two being orthogonal to each other. Fig. 4.2
shows the electromagnetic wave pattern, in which the electric component is vertical and the
magnetic component is horizontal.

Figure 2-3. Propagation of the electric and magnetic fields. Waves vibrate perpendicular to the
direction of motion; the electric and magnetic fields are at right angles to each other. These
fields travel at the speed of light.

2-4 Electromagnetic Energy Is Emitted from a Source.


a. Summary of Electromagnetic Energy. Electromagnetic energy or radiation is derived from
the subatomic vibrations of matter and is measured in a quantity known as wavelength. The
units of wavelength are traditionally given as micrometers (µm) or nanometers (nm).
Electromagnetic energy travels through space at the speed of light and can be absorbed
and reflected by objects. To understand electromagnetic energy, it is necessary to discuss the
origin of radiation, which is related to the temperature of the matter from which it is
emitted.

c. Temperature. The origin of all energy (electromagnetic energy or radiant energy) begins with
the vibration of subatomic particles called photons (Figure 2-2). All objects at a temperature
above absolute zero vibrate and therefore emit some form of electromagnetic energy.
Temperature is a measurement of this vibrational energy emitted from an object.
Humans are sensitive to the thermal aspects of temperature; the higher the temperature, the
greater the sensation of heat. A "hot" object emits relatively large amounts of energy.
Conversely, a “cold” object emits relatively little energy.

d. Absolute Temperature Scale. The lowest possible temperature has been shown to be
-273.2°C and is the basis for the absolute temperature scale. The absolute temperature
scale, known as the Kelvin scale, is defined by assigning -273.2°C to 0 K ("zero Kelvin"; no
degree sign). The Kelvin scale has the same temperature intervals as the Celsius scale,
so conversion between the two scales is simply a matter of adding or subtracting 273
(Table 2-1). Because all objects with temperatures above zero Kelvin emit electromagnetic
radiation, it is possible to collect, measure, and distinguish energy emitted from adjacent
objects.
Table 2-1
Different scales used to measure object temperature. Conversion formulas are listed below.

Object          Fahrenheit (°F)   Celsius (°C)   Kelvin (K)
Absolute Zero   -459.7            -273.2         0.0
Frozen Water    32.0              0.0            273.2
Boiling Water   212.0             100.0          373.2
Sun             9980.6            5527.0         5800.0
Earth           80.6              27.0           300.0
Human body      98.6              37.0           310.0

Conversion Formulas:
Celsius to Fahrenheit: °F = (1.8 × °C) + 32
Fahrenheit to Celsius: °C = (°F - 32)/1.8
Celsius to Kelvin: K = °C + 273
Fahrenheit to Kelvin: K = [(°F - 32)/1.8] + 273
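The conversion formulas above translate directly into a few helper functions. The sketch below is illustrative only (function names are assumptions, and the notes' rounded offset of 273 is used for the Celsius-to-Kelvin conversion):

```python
# Temperature conversions following the formulas under Table 2-1
# (the notes round 273.2 K to 273 in the conversion formulas).
def celsius_to_fahrenheit(c):
    return 1.8 * c + 32.0

def fahrenheit_to_celsius(f):
    return (f - 32.0) / 1.8

def celsius_to_kelvin(c):
    return c + 273.0

def fahrenheit_to_kelvin(f):
    return (f - 32.0) / 1.8 + 273.0

for label, c in [("Frozen water", 0.0), ("Boiling water", 100.0), ("Human body", 37.0)]:
    print(f"{label}: {celsius_to_fahrenheit(c):.1f} F, {c:.1f} C, {celsius_to_kelvin(c):.1f} K")
```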

Characteristics of E-M as Waves


Electromagnetic energy is radiant energy which moves with the velocity of light
(300,000 km/s). This energy or radiation is given out or generated by any body whose
temperature is above absolute zero, i.e. -273°C or 0 K. For example, the earth is at an average
temperature of 27°C (300 K) and so will generate EM energy at an intensity commensurate with
its temperature.

The Sun is the strongest source of radiant energy and can be approximated by a black-body
source at a temperature of 5750 to 6000 K. Although the Sun produces EM radiation across a
range of wavelengths, the amount of energy it produces is not evenly distributed along this
range. Approximately 43% is radiated within the visible wavelengths (0.4 to 0.7 µm), and 48%
of the energy is transmitted at wavelengths greater than 0.7 µm, mainly within the infrared
range.

If the energy received at the edge of the earth's atmosphere were distributed evenly over the
earth, it would give an average incident flux density of 1367 W/m², known as the solar constant.
Thirty-five percent of the incident radiant flux is reflected back by the earth; this includes
the energy reflected by clouds and the atmosphere. Seventeen percent is absorbed by the
atmosphere, while 48% is absorbed by the earth's surface materials (Mather, 1987).

The totality or array of all electromagnetic radiation that moves with the velocity of light,
characterized by wavelength or frequency, is called the electromagnetic spectrum. It usually
spans wavelengths of between 10⁻¹⁰ m and 10¹⁰ m. The optical wavelengths lie between 0.3 µm and
15.0 µm and are the wavelengths mostly used in remote sensing. Energy at the optical wavelengths
can be reflected and refracted with solid materials like mirrors and lenses.

E-M radiation can be viewed either as a wave (the wave theory, due to Huygens) or as a stream
of particles traveling in straight lines (the corpuscular theory, due to Newton).

Characteristics as a Stream of Particles (straight lines)

The straight-line view of electromagnetic radiation is useful in the geometrical analysis of its
interaction with matter. The most useful characteristics in this mode are:

1. Speed or velocity (v) in a vacuum or in a medium, expressed in kilometers per second. The
speed is usually taken to be approximately the speed of light in a vacuum.
2. Intensity of energy it carries (E).
3. Direction of travel in relation to a reference direction (θ).
4. It does not travel around bends or corners.
5. It can be reflected and refracted by materials.

Characteristics as a Wave

The wave properties are very useful in explaining many of the results of its interaction with
matter. Waves can be described in terms of their:

1. Frequency (f), which is the number of waveforms per second, measured in Hertz (Hz).
2. Wavelength (λ), which is the distance between successive peaks (or troughs) of a waveform.
It is usually measured in meters or fractions of a meter.
3. Ability to move around bends or corners.

The properties frequency (f) and wavelength (λ) convey the same information and are often used
interchangeably. Mathematically, the number of wavelengths (λ) passing a point per second is the
frequency (f) of the waveform.

4. Period (T) of a waveform, which is the time, in seconds, needed for one full wave to pass a
fixed point. The relationship between period and frequency is given by T = 1/f.

5. Velocity of an E-M wave, which is the speed of light, i.e. 3 × 10⁸ m s⁻¹, usually denoted by
the symbol c. The formula that connects the speed of light, the wavelength and the frequency of
a waveform is:

c = fλ

Thus, if a waveform has a wavelength of 0.6 µm, its frequency is

f = c/λ = (3 × 10⁸ m/s)/(6 × 10⁻⁷ m) = 5 × 10¹⁴ Hz, i.e. 500,000 GHz.

The period of this waveform is the reciprocal of its frequency. (A short numerical check of
these relations is sketched below, after the photon-energy equation.)

6. Energy: The amount of energy carried by an E-M wave is given by:

E = hf

where E is the energy, h is a constant known as Planck's constant (≈ 6.63 × 10⁻³⁴ J s), and f is
the frequency. Therefore, energy increases linearly with frequency.

7. Electromagnetic energy consists of photons having particle-like properties such as energy
and momentum. The EM energy is characterized in terms of its velocity c (≈ 3 × 10⁸ m/s),
wavelength λ and frequency f. These parameters are related by the equation

λ = c/f

where λ is the wavelength, the distance between two adjacent peaks (the wavelengths sensed by
many remote sensing systems are extremely small and are measured in micrometers (µm, 10⁻⁶ m) or
nanometers (nm, 10⁻⁹ m)), and f is the frequency, defined as the number of peaks that pass any
given point in one second, measured in Hertz (Hz).

8. The amplitude is the maximum value of the electric (or magnetic) field
and is a measure of the amount of energy that is transported by the wave.

The wave theory concept explains how EM energy propagates in the form of a wave. However, this
energy can only be detected when it interacts with matter. This interaction suggests that the
energy consists of many discrete units called photons, whose energy Q is given by:

Q = h·f = h·c/λ

where h = Planck's constant = 6.6252 × 10⁻³⁴ J s.

The above equation shows that the shorter the wavelength of the radiation, the greater its
energy content.
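The wave relations above (f = c/λ, T = 1/f and Q = h·c/λ) can be checked numerically. The sketch below reproduces the 0.6 µm example; the constant values and print formatting are illustrative choices:

```python
# Numerical check of f = c/lambda, T = 1/f and Q = h*c/lambda for a 0.6 micrometer waveform.
C = 3.0e8         # speed of light, m/s
H = 6.63e-34      # Planck's constant, J s

wavelength_m = 0.6e-6                   # 0.6 micrometers expressed in meters

frequency_hz = C / wavelength_m         # about 5 x 10^14 Hz (500,000 GHz)
period_s = 1.0 / frequency_hz           # time for one full wave to pass a fixed point
photon_energy_j = H * C / wavelength_m  # energy of one photon at this wavelength

print(f"f = {frequency_hz:.2e} Hz")
print(f"T = {period_s:.2e} s")
print(f"Q = {photon_energy_j:.2e} J")
```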

The radiated energy per unit area from a body at a temperature T is given by

M = σT⁴ W/m², where σ = 5.6693 × 10⁻⁸ W m⁻² K⁻⁴ (the Stefan-Boltzmann constant).

Wavelength of Maximum Radiation

A radiating body at a temperature T (in Kelvin) will radiate its maximum energy at a wavelength
given by

λmax = 2898/T µm

where T is the temperature in degrees Kelvin (Wien's displacement law).
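The two radiation laws just quoted, M = σT⁴ and λmax = 2898/T, can be applied to the temperatures listed in Table 2-1. The short sketch below does this for the Sun, the Earth and the human body; the chosen temperatures follow the notes, everything else is illustrative:

```python
# Total radiated power per unit area (Stefan-Boltzmann law) and wavelength of maximum
# radiation (Wien's displacement law) for a few of the bodies listed in Table 2-1.
SIGMA = 5.6693e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance_w_m2(temp_k):
    return SIGMA * temp_k ** 4          # M = sigma * T^4

def peak_wavelength_um(temp_k):
    return 2898.0 / temp_k              # lambda_max = 2898 / T, in micrometers

for body, temp_k in [("Sun", 5800.0), ("Earth", 300.0), ("Human body", 310.0)]:
    print(f"{body}: M = {radiant_exitance_w_m2(temp_k):.3e} W/m^2, "
          f"lambda_max = {peak_wavelength_um(temp_k):.2f} um")
```

The output illustrates why the Sun's emission peaks near 0.5 µm in the visible range while the Earth's emission peaks near 10 µm in the thermal infrared, a point the spectrum discussion below returns to.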

ELECTROMAGNETIC SPECTRUM (or E-M Wavebands)

As described earlier, the totality or array of all electromagnetic radiation that moves with the
velocity of light, characterized by wavelength or frequency, is called the electromagnetic
spectrum.

Although visible light is the most obvious manifestation of EM radiation, other forms also
exist. EM radiation can be produced at a range of wavelengths and can be categorized according
to its position into discrete regions, generally referred to as the electromagnetic spectrum.
Thus the electromagnetic spectrum is the continuum of energy that ranges from meters to
nanometers in wavelength (Fig. 4.3), travels at the speed of light and propagates through a
vacuum like outer space (Sabine, 1986). All matter radiates a range of electromagnetic energy,
with the peak intensity shifting toward progressively shorter wavelengths as the temperature of
the matter increases. In general, the wavelengths and frequencies vary from short-wavelength,
high-frequency cosmic waves to long-wavelength, low-frequency radio waves (Fig. 4.3 and
Table 4.1).

Since λ = c/f, as f approaches 0, λ approaches infinity, and as f approaches infinity, λ
approaches 0. Hence the E-M spectrum spans 0 ≤ λ ≤ ∞.

In remote sensing, the electromagnetic spectrum implies signals within the wavebands or
wavelengths of roughly 10⁻¹⁰ to 10¹⁰ m. Practically, the region of the spectrum between 0.3 µm
and 15 µm is referred to as the optical wavelength or optical range, and it is the band mostly
used in passive remote sensing.

Of particular interest to remote sensing are visible and near-infrared radiation in the waveband
0.4 to 3 µm, infrared radiation in the waveband beyond 3 µm, and microwave radiation in the
waveband 5 to 500 mm (i.e. 5,000 to 500,000 µm). By studying the interaction of the radiation in
these wavebands with earth surface objects and materials, we are able to infer their
characteristics from the patterns produced by the radiation emanating from those objects.

Figure 9-2. The wavelength and frequency of commonly used RADAR bands (Ka, Ku, X, C, S, L and
P), shown in relation to the UV, visible, near-infrared, middle-infrared and thermal infrared
regions. RADAR antennas transmit and receive very long wavelength energy measured in
centimeters, unlike the relatively short wavelength visible, near-infrared, middle-infrared and
thermal infrared regions measured in micrometers.

Earth-resource image analysts seem to grasp the concept of wavelength more readily than
frequency, so the convention is to describe a radar in terms of its wavelength. Conversely,
engineers generally prefer to work in units of frequency because, as radiation passes through
materials of different densities, frequency remains constant while velocity and wavelength
change. Since wavelength (λ) and frequency (f) are inversely related through the speed of light
(c), it really does not matter which unit of measurement is used as long as one remembers the
relationship:

λ = c/f

where λ is the wavelength, the distance between two adjacent peaks (the wavelengths sensed by
many remote sensing systems are extremely small and are measured in micrometers (µm, 10⁻⁶ m) or
nanometers (nm, 10⁻⁹ m)), and f is the frequency, defined as the number of peaks that pass any
given point in one second, measured in Hertz (Hz).

Table 4.1 ELECTROMAGNETIC SPECTRAL REGIONS (SABINE, 1987)

1. Gamma ray (< 0.03 nm): Incoming radiation is completely absorbed by the upper atmosphere and
is not available for remote sensing.

2. X-ray (0.03 to 3.0 nm): Completely absorbed by the atmosphere. Not employed in remote
sensing.

3. Ultraviolet (0.3 to 0.4 µm): Incoming wavelengths less than 0.3 µm are completely absorbed by
ozone in the upper atmosphere.

4. Photographic UV band (0.3 to 0.7 µm): Transmitted through the atmosphere. Detectable with
film and photodetectors, but atmospheric scattering is severe.

5. Visible (0.4 to 0.7 µm): Imaged with film and photodetectors. Includes the reflected energy
peak of the earth at 0.5 µm.

6. Infrared (0.7 to 100 µm): Interaction with matter varies with wavelength. Atmospheric
transmission windows are separated by absorption bands.

7. Reflected IR band (0.7 to 3.0 µm): Reflected solar radiation that contains no information
about the thermal properties of materials. The band from 0.7 to 0.9 µm is detectable with film
and is called the photographic IR band.

8. Thermal IR (3 to 5 µm and 8 to 14 µm): Principal atmospheric windows in the thermal region.
Images at these wavelengths are acquired by optical-mechanical scanners and special vidicon
systems, but not by film.

Microwave (0.1 to 30 cm): Longer wavelengths can penetrate clouds, fog and rain. Images may be
acquired in the active or passive mode.

9. Radar (0.1 to 30 cm): Active form of microwave remote sensing. Radar images are acquired at
various wavelength bands.

10. Radio (> 30 cm): Longest wavelength portion of the electromagnetic spectrum. Some classified
radars with very long wavelengths operate in this region.
The Earth's atmosphere absorbs energy in the gamma ray, X-ray and most of the ultraviolet
region; therefore, these regions are not used for remote sensing. Remote sensing deals with
energy in the visible, infrared, thermal and microwave regions. These regions are further
subdivided into bands such as blue, green, red (in the visible region), near infrared,
mid-infrared, thermal and microwave. It is important to realize that a significant amount of
remote sensing performed within the infrared wavelengths is not related to heat: it is
photographic infrared, at a slightly longer wavelength (invisible to the human eye) than red.
Thermal infrared remote sensing is carried out at longer wavelengths.

e. Nature of Electromagnetic Waves. Electromagnetic energy travels along the path of a
sinusoidal wave (Figure 2-3). This wave of energy moves at the speed of light (3.00 × 10⁸ m/s).
All emitted and reflected energy travels at this rate, including light. Electromagnetic energy
has two components, the electric and magnetic fields. This energy is defined by its wavelength
(λ) and frequency (ν); see below for units. These fields are in phase, perpendicular to one
another, and oscillate normal to their direction of propagation (Figure 2-3). Familiar forms of
radiant energy include X-rays, ultraviolet rays, visible light, microwaves, and radio waves.
All of these waves move and behave similarly; they differ only in wavelength and frequency.

f. Measurement of Electromagnetic Wave Radiation.

(1) Wavelength. Electromagnetic waves are measured from wave crest to wave crest or,
equivalently, from trough to trough. This distance is known as the wavelength (λ, "lambda") and
is expressed in units of micrometers (µm) or nanometers (nm) (Figures 2-4 and 2-5).

Figure 2-4. Wave morphology: wavelength (λ) is measured from crest to crest or trough to trough.

Figure 2-5. Long wavelengths maintain a low frequency and lower energy state relative to short
wavelengths.

(2) Frequency. The rate at which a wave passes a fixed point is known as the wave frequency and
is denoted ν ("nu"). The units of measurement for frequency are Hertz (Hz), the number of wave
cycles per second (Figures 2-5 and 2-6).

(3) Speed of electromagnetic radiation (or speed of light). Wavelength and frequency are
inversely related to one another; in other words, as one increases the other decreases. Their
relationship is expressed as:

c = λν (2-1)

where

c = 3.00 × 10⁸ m/s, the speed of light
λ = the wavelength (m)
ν = frequency (cycles/second, Hz).

This mathematical expression also shows that wavelength (λ) and frequency (ν) are linked through
the speed of light (c). Because the speed of light (c) is constant, radiation with a small
wavelength will have a high frequency; conversely, radiation with a large wavelength will have a
low frequency.

Figure 2-7. Electromagnetic spectrum displayed in meter and Hz units. Short wavelengths are
shown on the left, long wavelengths on the right. The visible spectrum is shown in red.

g. Electromagnetic Spectrum. Electromagnetic radiation wavelengths are plotted on a logarithmic
scale known as the electromagnetic spectrum. The plot typically increases in increments of
powers of 10 (Figure 2-7). For convenience, regions of the electromagnetic spectrum are
categorized, for the most part, based on the methods used to sense their wavelengths. For
example, the visible light range is a category spanning 0.4 to 0.7 µm. The minimum and maximum
of this category are based on the ability of the human eye to sense radiation energy within the
0.4- to 0.7-µm wavelength range.

(1) Though the spectrum is divided up for convenience, it is truly a continuum of increasing
wavelengths with no inherent differences among the radiations of varying wavelengths.
For instance, the scale in Figure 2-8 shows the color blue to be approximately in the range
of 435 to 520 nm (on other scales it is divided out at 446 to 520 nm). As the wavelengths
proceed in the direction of green they become increasingly less blue and more green; the
boundary is somewhat arbitrarily fixed at 520 nm to indicate this gradual change from blue
to green.

Figure 2-8. Visible spectrum illustrated here in color.

(2) Be aware of differences in the manner in which spectrum scales are drawn. Some authors
place the long wavelengths to the right (such as those shown in this manual), while others
place the longer wavelengths to the left. The scale can also be drawn on a vertical axis
(Figure 2-9). Units can be depicted in meters, nanometers, micrometers, or a combination
of these units. For clarity some authors add color in the visible spectrum to correspond to
the appropriate wavelength.


Figure 2-9. Electromagnetic spectrum on a vertical scale.

h. Regions of the Electromagnetic Spectrum. Different regions of the electromagnetic spectrum
can provide discrete information about an object. The categories of the electromagnetic spectrum
represent groups of measured electromagnetic radiation with similar wavelength and frequency.
Remote sensors are engineered to detect specific spectrum wavelength and frequency ranges. Most
sensors operate in the visible, infrared, and microwave regions of the spectrum. The following
paragraphs discuss the electromagnetic spectrum regions and their general characteristics and
potential use (also see Appendix B). The spectrum regions are discussed in order of increasing
wavelength and decreasing frequency.

(1) Ultraviolet. The ultraviolet (UV) portion of the spectrum contains radiation just
beyond the violet portion of the visible wavelengths. Radiation in this range has
short wavelengths (0.300 to 0.446 µm) and high frequency. UV wavelengths are
used in geologic and atmospheric science applications. Materials, such as rocks and
minerals, fluoresce or emit visible light in the presence of UV radiation. The
fluorescence associated with natural hydrocarbon seeps is useful in monitoring oil
fields at sea. In the upper atmosphere, ultraviolet light is greatly absorbed by ozone
(O3) and becomes an important tool in tracking changes in the ozone layer.

(2) Visible Light. The radiation detected by human eyes is in the spectrum range aptly
named the visible spectrum. Visible radiation or light is the only portion of the
spectrum that can be perceived as colors. These wavelengths span a very short
portion of the spectrum, ranging from approximately 0.4 to 0.7 µm. Because of this
short range, the visible portion of the spectrum is plotted on a linear scale (Figure
2-8). This linear scale allows the individual colors in the visible spectrum to be
discretely depicted. The shortest visible wavelength is violet and the longest is red.
(a) The visible colors and their corresponding wavelengths are listed below (Table
2-2) in micrometers and shown in nanometers in Figure 2.8.

Table 2-2 Wavelengths of the primary colors of the visible spectrum

Color Wavelength
Violet 0.4–0.446 µm
Blue 0.446–0.500 µm
Green 0.500–0.578 µm
Yellow 0.578–0.592 µm
Orange 0.592–0.620 µm
Red 0.620–0.7 µm
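As a small illustration of how the band boundaries in Table 2-2 can be used, the sketch below classifies a wavelength (in micrometers) by the color band it falls in; the function name and test values are assumptions made for this example:

```python
# Classify a visible wavelength (in micrometers) using the band edges from Table 2-2.
VISIBLE_BANDS = [
    ("Violet", 0.400, 0.446),
    ("Blue",   0.446, 0.500),
    ("Green",  0.500, 0.578),
    ("Yellow", 0.578, 0.592),
    ("Orange", 0.592, 0.620),
    ("Red",    0.620, 0.700),
]

def color_of(wavelength_um):
    # Boundaries are somewhat arbitrary, as the text notes; the lower edge is inclusive.
    for name, lower, upper in VISIBLE_BANDS:
        if lower <= wavelength_um < upper:
            return name
    return "outside the visible range"

print(color_of(0.55))   # Green
print(color_of(0.65))   # Red
print(color_of(0.90))   # outside the visible range (near-infrared)
```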

(b) Visible light detected by sensors depends greatly on the surface reflection characteristics
of objects. Urban feature identification, soil/vegetation discrimination, ocean productivity,
cloud cover, precipitation, snow, and ice cover are only a few examples of current
applications that use the visible range of the electromagnetic spectrum.

(3) Infrared. The portion of the spectrum adjacent to the visible range is the infrared
(IR) region. The infrared region, plotted logarithmically, ranges from approximately 0.7 to
100 µm, which is more than 100 times as wide as the visible portion. The infrared region is
divided into two categories, the reflected IR and the emitted or thermal IR; this division is
based on their radiation properties.

(a) Reflected Infrared. The reflected IR spans the 0.7- to 3.0-µm wavelengths.
Reflected IR shares radiation properties exhibited by the visible portion and is thus
used for similar purposes. Reflected IR is valuable for delineating healthy versus
unhealthy or fallow vegetation, and for distinguishing among vegetation, soil, and
rocks.

(b) Thermal Infrared. The thermal IR region represents the radiation that is emitted
from the Earth’s surface in the form of thermal energy. Thermal IR spans the 3.0-
to 100-µm range. These wavelengths are useful for monitoring temperature
variations in land, water, and ice.

(4) Microwave. Beyond the infrared is the microwave region, ranging on the spectrum from 1 mm
to 1 m (bands are listed in Table 2-3). Microwave radiation is the longest wavelength used for
remote sensing. This region includes a broad range of wavelengths; on the short wavelength end
of the range, microwaves exhibit properties similar to thermal IR radiation, whereas the longer
wavelengths maintain properties similar to those used for radio broadcasts.

Table 2-3 Wavelengths of various bands in the microwave range

Band   Frequency (MHz)   Wavelength (cm)
Ka     40,000–26,000     0.8–1.1
K      26,500–18,500     1.1–1.7
X      12,500–8,000      2.4–3.8
C      8,000–4,000       3.8–7.5
L      2,000–1,000       15.0–30.0
P      1,000–300         30.0–100.0

(a) Microwave remote sensing is used in the studies of meteorology, hydrology, oceans,
geology, agriculture, forestry, and ice, and for topographic mapping. Because microwave
emission is influenced by moisture content, it is useful for mapping soil moisture, sea ice,
currents, and surface winds. Other applications include snow wetness analysis, profile
measurements of atmospheric ozone and water vapor, and detection of oil slicks.

(b) For more information on spectrum regions, see Appendix B.

i. Energy as it Relates to Wavelength, Frequency, and Temperature. As stated above, energy can
be quantified by its wavelength and frequency. It is also useful to measure the intensity
exhibited by electromagnetic energy. Intensity can be described by Q and is measured in units of
Joules.

(1) Quantifying Energy. The energy released from a radiating body in the form of a vibrating
photon traveling at the speed of light can be quantified by relating the energy’s wavelength
with its frequency. The following equation shows the relationship between wavelength,
frequency, and amount of energy in units of Joules:

Q = hν (2-2)

Because c = λν, Q also equals

Q = hc/λ

where

Q = energy of a photon in Joules (J)
h = Planck's constant (6.6 × 10⁻³⁴ J s)
c = 3.00 × 10⁸ m/s, the speed of light
λ = wavelength (m)
ν = frequency (cycles/second, Hz)

The equation for energy indicates that, for long wavelengths, the amount of energy will be low,
and for short wavelengths, the amount of energy will be high. For instance, blue light is on the
short wavelength end of the visible spectrum (0.446 to 0.500 µm) while red is on the longer end
of this range (0.620 to 0.700 µm). Blue light is therefore a higher energy radiation than red
light. The following example illustrates this point:

Example: Using Q = hc/λ, which has more energy, blue or red light?

Solution: Solve for Q_blue (energy of blue light) and Q_red (energy of red light) and compare.

Calculation: λ_blue = 0.425 µm and λ_red = 0.660 µm (from Table 2-2)

h = 6.6 × 10⁻³⁴ J s
c = 3.00 × 10⁸ m/s
* Don't forget to convert the wavelengths from µm to meters (not shown here)

Blue

Q_blue = (6.6 × 10⁻³⁴ J s)(3.00 × 10⁸ m/s)/0.425 µm

Q_blue = 4.66 × 10⁻¹⁹ J

Red

Q_red = (6.6 × 10⁻³⁴ J s)(3.00 × 10⁸ m/s)/0.660 µm

Q_red = 3.00 × 10⁻¹⁹ J

Answer: Because 4.66 × 10⁻¹⁹ J is greater than 3.00 × 10⁻¹⁹ J, blue light has more energy.

This explains why the blue portion of a fire is hotter than the red portions.
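The worked example can be verified with a few lines of code, using the same constants as the notes (h = 6.6 × 10⁻³⁴ J s, c = 3.00 × 10⁸ m/s); the function name and formatting are illustrative:

```python
# Photon energy Q = h*c/lambda for the blue (0.425 um) and red (0.660 um) wavelengths
# used in the worked example above; wavelengths are converted from micrometers to meters.
H = 6.6e-34     # Planck's constant, J s (value used in the notes)
C = 3.00e8      # speed of light, m/s

def photon_energy_j(wavelength_um):
    return H * C / (wavelength_um * 1.0e-6)

q_blue = photon_energy_j(0.425)
q_red = photon_energy_j(0.660)
print(f"Q_blue = {q_blue:.2e} J")    # about 4.66e-19 J
print(f"Q_red  = {q_red:.2e} J")     # about 3.00e-19 J
print("Blue carries more energy per photon:", q_blue > q_red)
```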
Example: Using λmax = A/T, what is the wavelength of maximum emission for a human?

Solution: Solve for λmax given T from Table 2-1.

Calculation: T = 37°C (98.6°F), i.e. 310 K (from Table 2-1); A = 2898 µm K

λmax = 2898 µm K / 310 K

λmax = 9.3 µm

Answer: Humans emit radiation at a maximum wavelength of about 9.3 µm; this is well beyond what
the eye is capable of seeing. Humans can see in the visible part of the electromagnetic spectrum
at wavelengths of 0.4 to 0.7 µm.

Interaction of Electromagnetic Energy with Particles in the Atmosphere.
a. Atmospheric Effects. Remote sensing requires that electromagnetic radiation travel some
distance through the Earth’s atmosphere from the source to the sensor. Radiation from the
Sun or an active sensor will initially travel through the atmosphere, strike the ground target,
and pass through the atmosphere a second time before it reaches a sensor (Figure 2-1; path
B). The total distance the radiation travels in the atmosphere is called the path length. For
electromagnetic radiation emitted from the Earth, the path length will be half the path
length of the radiation from the sun or an active source.

(1) As radiation passes through the atmosphere, it is greatly affected by the atmospheric
particles it encounters (Figure 2-12). This effect is known as atmospheric scattering and
atmospheric absorption and leads to changes in the intensity, direction, and wavelength of the
radiation. The change the radiation experiences is a function of the atmospheric conditions,
path length, composition of the particles, and the wavelength measured relative to the diameter
of the particles.
Figure 2-12. Various radiation obstacles and scatter paths. Modified from
two sources, http://orbit-net.nesdis.noaa.gov/arad/fpdt/tutorial/12-atmra.gif
and http://rst.gsfc.nasa.gov/Intro/Part2_4.html.

(2) Rayleigh scattering, Mie scattering, and nonselective scattering are three types of scatter
that occur as radiation passes through the atmosphere (Figure 2-12). These types of scatter
lead to the redirection and diffusion of the wavelength in addition to making the path of
the radiation longer.

b. Rayleigh Scattering. Rayleigh scattering dominates when the diameter of atmospheric particles
is much smaller than the incoming radiation wavelength (φ < λ). This leads to a greater amount
of short-wavelength scatter owing to the small particle size of atmospheric gases. Scattering is
inversely proportional to the fourth power of the wavelength:

Rayleigh scatter ∝ 1/λ⁴

where λ is the wavelength (m). This means that short wavelengths will undergo a large amount of
scatter, while long wavelengths will experience little scatter. Shorter-wavelength radiation
reaching the sensor will appear more diffuse.
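Because Rayleigh scatter varies as 1/λ⁴, the relative amount of scatter at two wavelengths follows directly. The sketch below compares a representative blue wavelength (0.45 µm, an assumed value) with red (0.70 µm):

```python
# Relative Rayleigh scatter, proportional to 1/lambda^4 (constants cancel in the ratio).
def rayleigh_relative(wavelength_um):
    return 1.0 / wavelength_um ** 4

blue_um, red_um = 0.45, 0.70
ratio = rayleigh_relative(blue_um) / rayleigh_relative(red_um)
print(f"Blue light is scattered roughly {ratio:.1f} times more than red light")  # about 5.9x
```

This ratio underlies the explanation of the blue sky in the next paragraph.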

c. Why is the sky blue? Rayleigh scattering accounts for the Earth's blue sky. We see
predominately blue because the wavelengths in the blue region (0.446–0.500 µm) are more
scattered than other spectra in the visible range. At dusk, when the sun is low on the horizon,
creating a longer path length, the sky appears more red and orange. The longer path length leads
to an increase in Rayleigh scatter and results in the depletion of the blue wavelengths: only
the longer red and orange wavelengths reach our eyes, hence beautiful orange and red sunsets. In
contrast, our moon has no atmosphere; consequently, there is no Rayleigh scatter. This explains
why the moon's sky appears black (shadows on the moon are darker than shadows on the Earth for
the same reason; see Figure 2-13).

Figure 2-13. Moon rising in the Earth’s horizon (left). The Earth’s atmosphere appears blue due to Rayleigh Scatter.
Photo taken from the moon’s surface shows the Earth rising (right). The Moon has no atmosphere, thus no atmospheric
scatter. Its sky appears black. Images taken from: http://antwrp.gsfc.nasa.gov/apod/ap001028.html, and
http://antwrp.gsfc.nasa.gov/apod/ap001231.html.

d. Mie Scattering. Mie scattering occurs when an atmospheric particle diameter is equal to
the radiation’s wavelength (φ = λ). This leads to a greater amount of scatter in the long
wavelength region of the spectrum. Mie scattering tends to occur in the presence of water
vapor and dust and will dominate in overcast or humid conditions. This type of scattering
explains the reddish hues of the sky following a forest fire or volcanic eruption.

e. Nonselective Scattering. Nonselective scattering dominates when the diameter of atmospheric
particles (5–100 µm) is much larger than the incoming radiation wavelength (φ >> λ). This leads
to the scatter of visible, near-infrared, and mid-infrared radiation. All these wavelengths are
equally scattered and combine to create a white appearance in the sky; this is why clouds appear
white (Figure 2-14).
Figure 2-14. Non-selective scattering by larger atmospheric particles (like water droplets) affects all
wavelengths, causing white clouds.

Figure 2-15. Atmospheric windows, with wavelength on the x-axis and percent transmission on the
y-axis. High transmission corresponds to an "atmospheric window," which allows radiation to
penetrate the Earth's atmosphere. The chemical formula is given for the molecule responsible for
sunlight absorption at particular wavelengths across the spectrum. Modified from
http://earthobservatory.nasa.gov:81/Library/RemoteSensing/remote_04.html.

f. Atmospheric Absorption and Atmospheric Windows. Absorption of electromagnetic radiation is
another mechanism at work in the atmosphere. This phenomenon occurs as molecules absorb radiant
energy at various wavelengths (Figure 2-12). Ozone (O3), carbon dioxide (CO2), and water vapor
(H2O) are the three main atmospheric compounds that absorb radiation. Each gas absorbs radiation
at particular wavelengths. To a lesser extent, oxygen (O2) and nitrogen dioxide (NO2) also
absorb radiation (Figure 2-15). Below is a summary of these three major atmospheric constituents
and their significance in remote sensing.

g. The role of atmospheric compounds in the atmosphere.

(1) Ozone. Ozone (O3) absorbs harmful ultraviolet radiation from the sun. Without this
protective layer in the atmosphere, our skin would burn when exposed to sunlight.
(2) Carbon Dioxide. Carbon dioxide (CO2) is called a greenhouse gas because it greatly
absorbs thermal infrared radiation. Carbon dioxide thus serves to trap heat in the
atmosphere from radiation emitted from both the Sun and the Earth.

(3) Water vapor. Water vapor (H2O) in the atmosphere absorbs incoming long-wave infrared and
shortwave microwave radiation (22 µm to 1 m). Water vapor in the lower atmosphere varies
annually from location to location. For example, the air mass above a desert would have very
little water vapor to absorb energy, while the tropics would have high concentrations of water
vapor (i.e., high humidity).

(4) Summary. Because these molecules absorb radiation in very specific regions of the
spectrum, the engineering and design of spectral sensors are developed to collect
wavelength data not influenced by atmospheric absorption. The areas of the spectrum that
are not severely influenced by atmospheric absorption are the most useful regions, and are
called atmospheric windows.

h. Summary of Atmospheric Scattering and Absorption. Together, atmospheric scatter and
absorption place limitations on the spectral range useful for remote sensing. Table 2-4
summarizes the causes and effects of atmospheric scattering and absorption.

i. Spectrum Bands. By comparing the characteristics of the radiation in atmospheric windows
(Figure 2-15; areas where transmission on the y-axis is high), groups or bands of wavelengths
have been shown to effectively delineate objects at or near the Earth's surface. The visible
portion of the spectrum coincides with an atmospheric window and the maximum emitted energy from
the Sun. Thermal infrared energy emitted by the Earth corresponds to an atmospheric window
around 10 µm, while the large window at wavelengths larger than 1 mm is associated with the
microwave region (Figure 2-16).

Table 2-4 Properties of Radiation Scatter and Absorption in the Atmosphere

Process                   Particle diameter (φ) relative to wavelength (λ)   Result
Rayleigh scattering       φ < λ                                              Short wavelengths are scattered
Mie scattering            φ = λ                                              Long wavelengths are scattered
Nonselective scattering   φ >> λ                                             All wavelengths are equally scattered
Absorption                No size relationship                               CO2, H2O, and O3 absorb and remove specific wavelengths
Figure 2-16. Atmospheric windows related to the emitted energy supplied by the sun and the Earth. Notice
that the sun’s maximum output (shown in yellow) coincides with an atmospheric window in the visible range
of the spectrum. This phenomenon is important in optical remote sensing. Modified from
http://www.ccrs.nrcan.gc.ca/ccrs/learn/tutorials/fundam/chapter1/chapter1_4_e.html.

j. Geometric Effects. Random and non-random error occurs during the acquisition of
radiation data. Error can be attributed to such causes as sun angle, angle of sensor,
elevation of sensor, skew distortion from the Earth’s rotation, and path length.
Malfunctions in the sensor as it collects data and the motion of the platform are additional
sources of error. As the sensor collects data, it can develop sweep irregularities that result
in hundreds of meters of error. The pitch, roll, and yaw of platforms can create hundreds
to thousands of meters of error, depending on the altitude and resolution of the sensor.
Geometric corrections are typically applied by re-sampling an image, a process that shifts
and recalculates the data. The most commonly used re-sampling techniques include the use
of ground control points (see Chapter 5), applying a mathematical model, or re-sampling
by nearest neighbor or cubic convolution.

k. Atmospheric and Geometric Corrections. Data correction is required for calculating
reflectance values from radiance values (see Equation 2-5 below) recorded at a sensor and
for reducing positional distortion caused by known sensor error. It is extremely important
to make corrections when comparing one scene with another and when performing a
temporal analysis. Corrected data can then be evaluated in relation to a spectral data library
(see Paragraph 2-6b) to compare an object to its standard. Corrections are not necessary if
objects are to be distinguished by relative comparisons within an individual scene.

l. Atmospheric Correction Techniques. Data can be corrected by re-sampling with the use of
image processing software such as ERDAS Imagine or ENVI, or by the use of specialty
software. In many of the image processing software packages, atmospheric correction
models are included as a component of an import process. Also, data may have some
corrections applied by the vendor. When acquiring data, it is important to be aware of any
corrections that may have been applied to the data (see Chapter 4). Correction models can
be mathematically or empirically derived.

m. Empirical Modeling Corrections. Measured or empirical data collected on the ground at
the time the sensor passes overhead allows for a comparison between ground spectral
reflectance measurements and sensor radiation reflectance measurements. Typical data
collection includes spectral measurements of selected objects within a scene as well as a
sampling of the atmospheric properties that prevailed during sensor acquisition. The
empirical data are then compared with image data to interpolate an appropriate correction.
Empirical corrections have many limitations, including cost, spectral equipment
availability, site accessibility, and advanced preparation. It is critical to time the field
spectral data collection to coincide with the same day and time the satellite collects
radiation data. This requires knowledge of the satellite’s path and revisit schedule. For
archived data it is impossible to collect the field spectral measurements needed for
developing an empirical model that will correct atmospheric error. In such a case, a
mathematical model using an estimate of the field parameters must complete the correction.
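One common way to realise the comparison described above is an empirical line fit between at-sensor radiance and field-measured reflectance for a handful of ground targets. The sketch below is a hedged illustration of that idea only; all numbers are made up, and this is not the specific procedure prescribed by the manual:

```python
# Empirical-line style correction sketch: fit reflectance = gain * radiance + offset
# from a few coincident field/sensor measurements, then apply it to other pixels.
import numpy as np

# Hypothetical field spectrometer reflectance (0-1) and at-sensor radiance for four targets
ground_reflectance = np.array([0.05, 0.18, 0.33, 0.52])
sensor_radiance = np.array([21.0, 48.0, 80.0, 119.0])    # arbitrary radiance units

gain, offset = np.polyfit(sensor_radiance, ground_reflectance, 1)  # least-squares line

band_radiance = np.array([30.0, 65.0, 100.0])             # pixels to be corrected
estimated_reflectance = gain * band_radiance + offset
print(np.round(estimated_reflectance, 3))
```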

n. Mathematical Modeling Corrections. Alternatively, corrections that are mathematically
derived rely on estimated atmospheric parameters from the scene. These parameters
include visibility, humidity, and the percent and type of aerosols present in the atmosphere.
Data values or ratios are used to determine the atmospheric parameters. Subsequently a
mathematical model is extracted and applied to the data for re-sampling. This type of
modeling can be completed with the aid of software programs such as 6S, MODTRAN,
and ATREM (see http://atol.ucsd.edu/~pflatau/rtelib/ for a list and description of correction
modeling software).

More on INTERACTION WITH THE ATMOSPHERE

In remote sensing, EM radiation must pass through the atmosphere in order to reach the earth's
surface, and again to reach the sensor after reflection or emission from the earth's surface
features. The water vapour, oxygen, ozone, CO2, aerosols, etc., present in the atmosphere
influence EM radiation through the mechanisms of (i) scattering and (ii) absorption.

Scattering
Scattering is the unpredictable diffusion of radiation by molecules of the gases, dust and smoke
in the atmosphere. Scattering reduces the image contrast and changes the spectral signatures of
ground objects. Scattering is basically classified as (i) selective and (ii) non-selective,
depending upon the size of the particles with which the electromagnetic radiation interacts.
Selective scatter is further classified as (a) Rayleigh scatter and (b) Mie scatter.

Rayleigh scatter: In the upper layers of the atmosphere, the diameter of the gas molecules or
particles is much less than the wavelength of the radiation. Haze results on the remotely sensed
imagery, causing a bluish-grey cast on the image and thus reducing the contrast. The shorter the
wavelength, the greater the scattering.

Mie scatter: In the lower layers of the atmosphere, where the diameter of water vapour or dust
particles approximately equals the wavelength of the radiation, Mie scatter occurs.

Non-selective scatter: This occurs in the lower layers of the atmosphere, where the diameter of
the particles is several times (approximately ten times) greater than the radiation wavelength.
For visible wavelengths, the main sources of non-selective scattering are pollen grains, cloud
droplets, ice and snow crystals and raindrops. It scatters all wavelengths of visible light with
equal efficiency, which is why cloud appears white in the image.

Absorption

In contrast to scattering, atmospheric absorption results in an effective loss of energy as a
consequence of the attenuating nature of atmospheric constituents, such as molecules of ozone,
CO2 and water vapour. Oxygen absorbs in the ultraviolet region and also has an absorption band
centred on 6.3 µm. Similarly, CO2 prevents radiation at a number of wavelengths from reaching
the surface. Water vapour is an extremely important absorber of EM radiation within the infrared
part of the spectrum.

Atmospheric windows

The amount of scattering or absorption depends upon (i) wavelength, and (ii) the
composition of the atmosphere. In order to minimize the effect of the atmosphere, it
is essential to choose spectral regions with high transmittance.

The wavelength ranges in which EM radiation is partially or wholly transmitted through
the atmosphere are known as atmospheric windows and are used to acquire remote
sensing data.

Typical atmospheric windows on the regions of EM radiation are shown in Fig. 4.4

Fig. 4.4 Atmospheric Windows

The sensors on remote sensing satellites must be designed in such a way as to
obtain data within these well-defined atmospheric windows.

When E-M energy passes through the atmosphere, it is either reflected
(scattered), transmitted or absorbed by the particles (gases or dust
particles) in the atmosphere. As we have seen before, these effects are
wavelength dependent. The absorbed E-M energy heats up the atmosphere
and does not reach the target. The transmitted E-M energy reaches the
target and interacts with it.
The reflected E-M energy is scattered or redirected in three ways depending
upon the relationship between the wavelength of the radiation and the size
of the atmospheric particles causing the reflection.

1. Rayleigh Scattering: this occurs when the radiation wavelength λ is much larger
than the particle size (e.g. gas molecules); this causes the sky to
appear blue because scattering is highest at the blue wavelengths.
2. Mie Scattering: this occurs when λ is almost equal to the particle
size (e.g. water droplets).
3. Non-selective Scattering: this occurs when λ is much smaller than the
particle size (e.g. dust particles).

Electromagnetic Energy Interacts with Surface and Near-Surface Objects.
a. Energy Interactions with the Earth's Surface. Electromagnetic energy that reaches a target
will be absorbed, transmitted, and reflected. The proportion of each depends on the
composition and texture of the target’s surface. Figure 2-17 illustrates these three
interactions. Much of remote sensing is concerned with reflected energy.
Figure 2-17. Radiation striking a target is reflected, absorbed, or transmitted through the
medium. Radiation is also emitted from ground targets.

(1) Absorption. Absorption occurs when radiation penetrates a surface and is incorporated into
the molecular structure of the object. All objects absorb incoming incident radiation to
some degree. Absorbed radiation can later be emitted back to the atmosphere. Emitted
radiation is useful in thermal studies, but will not be discussed in detail in this work (see
Lillesand and Kiefer [1994] Remote Sensing and Image Interpretation for information on
emitted energy).

(2) Transmission. Transmission occurs when radiation passes through material and exits the
other side of the object. Transmission plays a minor role in the energy’s interaction with
the target. This is attributable to the tendency for radiation to be absorbed before it is
entirely transmitted. Transmission is a function of the properties of the object.

(3) Reflection. Reflection occurs when radiation is neither absorbed nor transmitted. The
reflection of the energy depends on the properties of the object and surface roughness
relative to the wavelength of the incident radiation. Differences in surface properties allow
the distinction of one object from another.

(a) Absorption, transmission, and reflection are related to one another by

𝐸𝐼 = 𝐸𝐴 + 𝐸𝑇 + 𝐸𝑅 (2-6)

where

𝐸𝐼 = incident energy striking an object


𝐸𝐴 = absorbed radiation
𝐸𝑇 = transmitted energy
𝐸𝑅 = reflected energy.
(b) The amount of each interaction will be a function of the incoming wavelength, the
composition of the material, and the smoothness of the surface.

(4) Reflectance of Radiation. Reflectance is simply a measurement of the percentage of
incoming or incident energy that a surface reflects:

Reflectance = Reflected energy/Incident energy (2-7)

where incident energy is the amount of incoming radiant energy and reflected energy is the amount
of energy bouncing off the object. Or, from Equation 2-6:

𝐸𝐼 = 𝐸𝐴 + 𝐸𝑇 + 𝐸𝑅

Reflectance = 𝐸𝑅 / 𝐸𝐼 (2-8)

Reflectance is a fixed characteristic of an object. Surface features can be distinguished by
comparing the reflectance of different objects at each wavelength. Reflectance comparisons rely
on the unchanging proportion of reflected energy relative to the sum of incoming energy. This
permits the distinction of objects regardless of the amount of incident energy. Unique objects
reflect differently, while similar objects only reflect differently if there has been a physical or
chemical change. Note: reflectance is not the same as reflection.
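A short numerical sketch (with made-up energy values) of why reflectance, as defined in Equation 2-8, is useful for comparing surfaces: the ratio does not change when the illumination changes.

    def reflectance(reflected_energy, incident_energy):
        # Equation 2-8: the fraction of incident energy that the surface reflects
        return reflected_energy / incident_energy

    # A hypothetical leaf reflecting 10% of green light under two illumination levels
    print(reflectance(10.0, 100.0))   # 0.1 under weak illumination
    print(reflectance(50.0, 500.0))   # 0.1 under illumination five times stronger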

(6) Summary. Spectral radiance is the amount of energy received at the sensor per time, per
area, in the direction of the sensor (measured in steradian), and it is measured per
wavelength. The sensor therefore measures the fraction of reflectance for a given area/time
for every wavelength, as well as the emitted radiance. Reflected and emitted radiance is calculated
by the integration of energy over the reflected hemisphere resulting from diffuse reflection
(see http://rsd.gsfc.nasa.gov/goes/text/reflectance.pdf for details on this complex
calculation). Reflected radiance is orders of magnitude greater than emitted radiance. The
following paragraphs, therefore, focus on reflected radiance.

b. Spectral Reflectance Curves.

(1) Background.

(a) Remote sensing consists of making spectral measurements over space: how much
of what “color” of light is coming from what place on the ground. One thing that a
remote sensing applications scientist hopes for, but which is not always true, is that
surface features of interest will have different colors so that they will be distinct in
remote sensing data.
(b) A surface feature’s color can be characterized by the percentage of incoming
electromagnetic energy (illumination) it reflects at each wavelength across the
electromagnetic spectrum. This is its spectral reflectance curve or “spectral
signature”; it is an unchanging property of the material. For example, an object such
as a leaf may reflect 3% of incoming blue light, 10% of green light and 3% of red
light. The amount of light it reflects depends on the amount and wavelength of
incoming illumination, but the percentages are constant. Unfortunately, remote
sensing instruments do not record reflectance directly but rather radiance, which is the
amount (not the percent) of electromagnetic energy received in selected wavelength
bands. A change in illumination, more or less intense sun for instance, will change
the radiance. Spectral signatures are often represented as plots or graphs, with
wavelength on the horizontal axis, and the reflectance on the vertical axis (Figure
2-20 provides a spectral signature for snow).
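Since instruments record radiance rather than reflectance, image data are often converted to top-of-atmosphere (TOA) reflectance before they are compared with spectral signatures. A minimal sketch of the standard conversion is shown below; the band solar irradiance (ESUN), Earth-Sun distance and solar zenith angle used are purely illustrative.

    import math

    def toa_reflectance(radiance, esun, d_au, sun_zenith_deg):
        # Convert at-sensor spectral radiance (W m-2 sr-1 um-1) to a
        # dimensionless top-of-atmosphere reflectance between 0 and 1.
        return (math.pi * radiance * d_au ** 2) / (
            esun * math.cos(math.radians(sun_zenith_deg)))

    # Illustrative values only
    print(round(toa_reflectance(radiance=80.0, esun=1550.0,
                                d_au=1.0, sun_zenith_deg=35.0), 3))   # ~0.198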

(2) Important Reflectance Curves and Critical Spectral Regions. While there are too many
surface types to memorize all their spectral signatures, it is helpful to be familiar with the
basic spectral characteristics of green vegetation, soil, and water. This in turn helps
determine which regions of the spectrum are most important for distinguishing these
surface types.

(3) Spectral Reflectance of Green Vegetation. Reflectance of green vegetation (Figure 2-21)
is low in the visible portion of the spectrum owing to chlorophyll absorption, high in the
near IR due to the cell structure of the plant, and lower again in the shortwave IR due to
water in the cells. Within the visible portion of the spectrum, there is a local reflectance
peak in the green (0.55 µm) between the blue (0.45 µm) and red (0.68 µm) chlorophyll
absorption valleys (Samson, 2000; Lillesand and Kiefer, 1994).


Figure 2-20. Spectral reflectance of snow. Graph developed for Prospect (2002 and 2003)
using Aster Spectral Library (http://speclib.jpl.nasa.gov/) data

Figure 2-21. Spectral reflectance of healthy vegetation. Graph developed for Prospect (2002
and 2003) using Aster Spectral Library (http://speclib.jpl.nasa.gov/) data

(4) Spectral Reflectance of Soil. Soil reflectance (Figure 2-22) typically increases with
wavelength in the visible portion of the spectrum and then stays relatively constant in the
near-IR and shortwave IR, with some local dips due to water absorption at 1.4 and 1.9 µm
and due to clay absorption at 1.4 and 2.2 µm (Lillesand and Kiefer, 1994).
Figure 2-22. Spectral reflectance of one variety of soil. Graph developed for Prospect (2002
and 2003) using Aster Spectral Library (http://speclib.jpl.nasa.gov/) data

(5)

(8) Real Life and Spectral Signatures. Knowledge of spectral reflectance curves is useful if
you are searching a remote sensing image for a particular material, or if you want to identify
what material a particular pixel represents. Before comparing image data with spectral
library reflectance curves, however, you must be aware of several things.

(a) Image data, which often measure radiance above the atmosphere, may have to be
corrected for atmospheric effects and converted to reflectance.

(b) Spectral reflectance curves, which typically have hundreds or thousands of spectral
bands, may have to be resampled to match the spectral bands of the remote sensing
image (typically a few to a couple of hundred); a sketch of such resampling follows
Figure 2-25 below.

(c) There is spectral variance within a surface type that a single spectral library
reflectance curve does not show. For instance, the Figure 2-25 below shows spectra
for a number of different soil types. Before depending on small spectral distinctions
to separate surface types, a note of caution is required: make sure that differences
within a type do not drown out the differences between types.

(d) While spectral libraries have known targets that are “pure types,” a pixel in a remote
sensing image very often includes a mixture of pure types: along edges of types
(e.g., water and land along a shoreline), or interspersed within a type (e.g., shadows
in a tree canopy, or soil background behind an agricultural crop).

Figure 2-25. Reflectance spectra of five soil types: A—soils having > 2% organic matter content
(OMC) and fine texture; B— soils having < 2% OMC and low iron content; C—soils having < 2%
OMC and medium iron content; D—soils having > 2% OMC, and coarse texture; and E— soil having
fine texture and high iron-oxide content (> 4%).
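As referenced in (b) above, the sketch below averages a finely sampled library reflectance curve within each sensor band so the two can be compared; the stand-in spectrum and the band limits (taken loosely from the visible/near-IR bands discussed earlier) are illustrative only.

    import numpy as np

    def resample_to_bands(wavelengths, reflectance, band_limits):
        # Average the fine library spectrum within each sensor band (lo, hi), in micrometres
        band_values = []
        for lo, hi in band_limits:
            mask = (wavelengths >= lo) & (wavelengths <= hi)
            band_values.append(reflectance[mask].mean())
        return np.array(band_values)

    wl = np.linspace(0.4, 2.5, 2101)                   # library wavelengths, um
    spectrum = np.clip(0.1 + 0.3 * np.sin(wl), 0, 1)   # stand-in reflectance curve
    bands = [(0.45, 0.52), (0.52, 0.60), (0.63, 0.69), (0.76, 0.90)]
    print(resample_to_bands(wl, spectrum, bands))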
More on INTERACTION WITH EARTH’S SURFACE

EM energy that strikes or encounters matter (an object) is called incident radiation.
The EM radiation striking the surface may be (i) reflected/scattered, (ii) absorbed,
and/or (iii) transmitted. These processes are not mutually exclusive – EM radiation
may be partially reflected and partially absorbed. Which processes actually occur
depends on the following factors: (1) wavelength of radiation, (2) angle of incidence,
(3) surface roughness, and (4) condition and composition of the surface material.

INTERACTION MECHANISM

Interaction with matter can change the following properties of incident radiation:
(a) intensity, (b) direction, (c) wavelength, (d) polarisation, and (e) phase.

The science of remote sensing detects and records these changes.

The energy balance equation for radiation at a given wavelength (λ) can be
expressed as follows:

E_I(λ) = E_R(λ) + E_A(λ) + E_T(λ)     (4.6)

where E_I(λ) = incident energy; E_R(λ) = reflected energy;

E_A(λ) = absorbed energy; E_T(λ) = transmitted energy

The proportion of each fraction (E_R(λ), E_A(λ), E_T(λ)) will vary for different materials
depending upon their composition and condition. Within a given feature type,
these proportions will vary at different wavelengths, thus helping in the discrimination
of different objects. Reflection, scattering and emission are called surface phenomena
because they are determined by the properties of the surface, viz. colour and roughness.
Transmission and absorption are called volume phenomena because they are
determined by the internal characteristics of the matter, viz. density and condition.

Modification of the basic equation: In remote sensing, the amount of reflected energy
E_R(λ) is more important than the absorbed and transmitted energies. Hence it is
more convenient to rearrange the terms of Eq. 4.6 as follows:

E_R(λ) = E_I(λ) − [E_A(λ) + E_T(λ)]     (4.7)

Eq. 4.7 is known as the balance equation.

Dividing all the terms by E_I(λ), we get

E_R(λ)/E_I(λ) = 1 − [E_A(λ)/E_I(λ) + E_T(λ)/E_I(λ)]

or ρ(λ) = 1 − [α(λ) + τ(λ)]     (4.8)

where ρ(λ) = E_R(λ)/E_I(λ) = reflectance; α(λ) = E_A(λ)/E_I(λ) = absorbance; τ(λ) = E_T(λ)/E_I(λ) = transmittance

Since almost all earth surface features are very opaque in nature, the transmittance
τ(λ) can be neglected. Also, according to Kirchhoff's law of physics, the absorbance
α(λ) is taken as the emissivity ε(λ). Hence Eq. 4.8 becomes

ρ(λ) = 1 − ε(λ)     (4.9)

Eq. 4.9 is the fundamental equation on which the conceptual design of remote
sensing technology is built.

If ε(λ) = 0, then ρ(λ) (i.e. the reflectance) is equal to one; this means that the total energy
incident on the object is reflected and recorded by the sensing system. The classical
example of this type of object is snow (i.e. a white object).
If ε(λ) = 1, then ρ(λ) = 0, indicating that whatever energy is incident on the object is
completely absorbed by that object. A black body such as lamp smoke is an example
of this type of object. Hence it is seen that reflectance varies from zero for the black
body to one for the white body.
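A small sketch of Eq. 4.9 in use for an opaque surface, with the two limiting cases above and one intermediate, illustrative emissivity value:

    def reflectance_from_emissivity(emissivity):
        # Eq. 4.9 for an opaque surface (transmittance neglected): rho = 1 - epsilon
        if not 0.0 <= emissivity <= 1.0:
            raise ValueError("emissivity must lie between 0 and 1")
        return 1.0 - emissivity

    print(reflectance_from_emissivity(0.0))    # 1.0 -> white body (e.g. fresh snow)
    print(reflectance_from_emissivity(1.0))    # 0.0 -> black body (e.g. lamp smoke)
    print(reflectance_from_emissivity(0.97))   # 0.03 -> illustrative near-black surface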

Generally, E-M energy reaching the earth surface interacts with the earth
surface materials in 3 ways.

(1) the energy may be reflected by the material
(2) the energy may be transmitted by the material
(3) the energy may be absorbed by the material

The reflected energy travels upwards through the atmosphere. The part of
it that comes within the field of view of the sensor is detected by the sensor
and converted into a numerical value according to a pre-designed scheme.

The transmitted portion is allowed to go through the material and is not
recorded, while the absorbed portion is converted to heat energy which
raises the temperature of the material. If the condition of the environment
permits, the absorbed energy is later emitted as radiation which, when it
comes within the field of view of the sensor, is collected and converted to
numbers according to a pre-designed scheme.

Mathematically, the interaction of E-M energy with surface material may
be represented as:

M = R + T + A     (1)

Where M = total incident energy entering the earth's surface per m2

R = portion reflected

T = portion transmitted

A = portion absorbed

The proportions of these energy components vary both for different objects
and also depending on the wavelength of the E-M energy. Dividing (1) by
M we obtain

1 = R/M + T/M + A/M

i.e. r + t + a = 1

r = reflectance of the material

t = transmittance

a = absorbance

These properties are wavelength dependent, and curves can be plotted to
show the variation of these properties with wavelength for each earth
surface feature.

Generally, in daylight remote sensing, the effect of the reflected energy
predominates, and therefore it is the reflectance characteristics of objects
that are of utmost importance.
EM SPECTRUM AND APPLICATIONS IN REMOTE SENSING

Fig. 4.3 shows the EM spectrum, which is divided into discrete regions on the basis
of wavelength. Remote sensing mostly deals with energy in the visible (blue, green,
red) and infrared (near-infrared, mid-infrared, thermal-infrared) regions. Table 4.2 gives
the wavelength regions along with the principal applications in remote sensing.
Energy reflected from the earth during daytime may be recorded as a function of
wavelength. The maximum amount of energy is reflected at 0.5 μm, called the
reflected energy peak. The earth also radiates energy both during day and night time,
with maximum energy radiated at 9.7 μm, called the radiant energy peak.

TABLE 4.2 WAVELENGTH REGIONS (μm) AND THEIR APPLICATIONS IN REMOTE SENSING

Region | Wavelength (μm) | Principal Applications

(a) Visible Region
1. Blue | 0.45 – 0.52 | Coastal morphology and sedimentation studies, soil and vegetation differentiation, coniferous and deciduous vegetation discrimination
2. Green | 0.52 – 0.60 | Vigour assessment, rock and soil discrimination, turbidity and bathymetry studies
3. Red | 0.63 – 0.69 | Plant species differentiation

(b) Infrared Region
4. Near Infrared | 0.76 – 0.90 | Vegetation vigour, biomass, delineation of water features, landforms/geomorphic studies
5. Mid-Infrared | 1.55 – 1.75 | Vegetation moisture content, soil moisture content, snow and cloud differentiation
6. Mid-Infrared | 2.08 – 2.35 | Differentiation of geological materials and soils
7. Thermal IR | 3.0 – 5.0 | Hot targets, i.e. fires and volcanoes
8. Thermal IR | 10.4 – 12.5 | Thermal sensing, vegetation discrimination, volcanic studies
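The band limits in Table 4.2 can serve as a simple lookup, for example to report the region into which a given wavelength falls. The dictionary below merely restates the table; the function is a hypothetical helper, not part of any standard package.

    TABLE_4_2 = {
        "Blue": (0.45, 0.52), "Green": (0.52, 0.60), "Red": (0.63, 0.69),
        "Near Infrared": (0.76, 0.90), "Mid-Infrared (1)": (1.55, 1.75),
        "Mid-Infrared (2)": (2.08, 2.35), "Thermal IR (1)": (3.0, 5.0),
        "Thermal IR (2)": (10.4, 12.5),
    }

    def region_of(wavelength_um):
        # Return the Table 4.2 region containing a wavelength given in micrometres
        for name, (lo, hi) in TABLE_4_2.items():
            if lo <= wavelength_um <= hi:
                return name
        return "outside the tabulated regions"

    print(region_of(0.55))   # Green
    print(region_of(11.0))   # Thermal IR (2)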

RADAR APPLICATIONS (Wavelength, Frequency and Pulse Length)

The pulse of electromagnetic radiation sent out by the transmitter through the antenna is of a specific
wavelength and duration (i.e. it has a pulse length measured in microseconds, μsec). The wavelengths
of energy most commonly used in imaging radars are summarized in Table 9-3. The wavelengths are much
longer than visible, near-infrared, mid-infrared or thermal infrared energy used in other remote sensing
systems (Figure 9-2). Therefore, microwave energy is usually measured in centimeters rather than
micrometers (Carver, 1988). The unusual names associated with the radar wavelengths (e.g. K, Ka, Ku, X,
C, S, L, and P) are an artifact of the secret work on radar remote sensing in World War II when it was
customary to use an alphabetic descriptor instead of the actual wavelength or frequency. These
descriptors are still used today in much of the radar scientific literature.

The shortest radar wavelengths are designated K-band. K-band wavelengths should theoretically provide
the best radar resolution. Unfortunately, K-band wavelength energy is partially absorbed by water vapor
and cloud penetration can be limited. This is the reason that most ground-based weather radars used to
track cloud cover and precipitation are K-band. X-band is often the shortest wavelength range used for
orbital and suborbital imaging radars (Mikhail et al., 2001). Some RADAR systems function using more
than one frequency and are referred to as multiple-frequency radars (e.g. SIR-C and SRTM)
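Because microwave wavelengths are quoted in centimetres, it is often convenient to convert between wavelength and frequency using c = λf. A small sketch with representative (not system-specific) wavelengths:

    C = 3.0e8  # speed of light, m/s

    def frequency_ghz(wavelength_cm):
        # Frequency in GHz for a radar wavelength given in centimetres
        return C / (wavelength_cm / 100.0) / 1e9

    print(round(frequency_ghz(3.0), 1))    # ~10.0 GHz, typical of X-band
    print(round(frequency_ghz(23.5), 2))   # ~1.28 GHz, typical of L-band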
Chapter 4
Sensor Platforms, Sensor Types and Data Acquisition
3-12 Satellite Platforms and Sensors.

a. There are currently over two-dozen satellite platforms orbiting the earth collecting data.
Satellites orbit in either a circular geo-synchronous or polar sun-synchronous path. Each satellite
carries one or more electromagnetic sensor(s), for example, Landsat 7 satellite carries one sensor,
the ETM+, while the satellite ENVISAT carries ten sensors and two microwave antennas. Some
sensors are named after the satellite that carries them, for instance IKONOS the satellite houses
IKONOS the sensor. See Appendices D and E for a list of satellite platforms, systems, and sensors.

b. Sensors are designed to capture particular spectral data. Nearly 100 sensors have been
designed and employed for long-term and short-term use. Appendix D summarizes details on
sensor functionality. New sensors are periodically added to the family of existing sensors while
older or poorly designed sensors become decommissioned or defunct. Some sensors are flown on
only one platform; a few, such as MODIS and MSS, are on-board more than one satellite. The
spectral data collected may span the visible (optical), blue, green, microwave, MIR/SWIR, NIR,
Red, or thermal IR. Sensors can detect single wavelengths or frequencies and/or ranges of the EM
spectrum.

3-13 Satellite Orbits.


a. Remote sensing satellites are placed into different orbits for special purposes. The
weather satellites are geo-stationary, so that they can image the same spot on the Earth
continuously. They have equatorial orbits where the orbital period is the same as that
of the Earth and the path is around the Earth’s equator. This is similar to the
communication satellites that continuously service the same area on the Earth (Figure
3-2).

Figure 3-2. Satellite in Geostationary Orbit. Courtesy of the Natural Resources Canada.

b. The remaining remote sensing satellites have near polar orbits and are launched into a
sun-synchronous orbit (Figure 3-3). They are typically inclined about 8 degrees from the poles; the
gravitational pull from the Earth’s bulge at the equator slowly rotates the orbital plane, keeping the orbit sun-synchronous.
Depending on the swath width of the satellite (if it is non-pointable), the same area on the Earth
will be imaged at regular intervals (16 days for Landsat, 24 days for Radarsat).

Figure 3-3. Satellite Near Polar Orbit, Courtesy of the Natural Resources Canada.
3-14 Planning Satellite Acquisitions. Corps satellite acquisition must be arranged through the
Topographic Engineering Center (TEC) Imagery Office (TIO). It is very easy to transfer the cost
of the imagery to TEC via the Corps Financial Management System (CFMS). They will place the
order, receive and duplicate the imagery for entry into the National Imagery and Mapping Agency
(NIMA) archive called the Commercial Satellite Imagery Library (CSIL), and send the original to
the Corps requester. They buy the imagery under a governmental user license contract that licenses
free distribution to other government agencies and their contractors, but not outside of these. It is
important for Corps personnel to adhere to the conditions of the license. Additional information
concerning image acquisition is discussed in Chapter 4 (Section 4-1).

a. Turn Around Time. This is another item to consider. That is the time after acquisition
of the image that lapses before it is shipped to TEC-TIO and the original purchaser. Different
commercial providers handle this in different ways, but the usual is to charge an extra fee for a 1-
week turn around, and another fee for a 1 to 2 day turn around. For example, SPOT Code Red
programmed acquisition costs an extra $1000 and guarantees shipment as soon as acquired. The
ERS priority acquisition costs an extra $800 and guarantees shipment within 7 days, emergency
acquisition cost $1200 and guarantees shipment within 2 days, and near real time costs an extra
$1500 and guarantees shipment as soon as acquired. Also arrangement may be made for ftp image
transfers in emergency situations. Costs increase in a similar way with RADARSAT, IKONOS,
and QuickBird satellite imaging systems.

Sensors
b. Swath Planners.

• Landsat acquired daily over the CONUS, use DESCW swath planner on PC running
at least Windows 2000 for orbit locations. http://earth.esa.int/services/descw/

• ERS, JERS, ENVISAT—not routinely taken, use DESCW swath planner on PC


running at least Windows 2000 for orbit locations. http://earth.esa.int/services/descw/

• RADARSAT—not routinely acquired, contact the TEC Imagery Office regarding


acquisitions of Radarsat data.

• Other commercial imaging systems, contact the TEC Imagery Office regarding
acquisitions.

3-15 Ground Penetrating Radar Sensors. Ground penetrating radar (GPR) uses
electromagnetic wave propagation and back scattering to image, locate, and quantitatively identify
changes in electrical and magnetic properties in the ground. Practical platforms for the GPR
include on-the-ground point measurements, profiling sleds, and near-ground helicopter surveys. It
has the highest resolution in subsurface imaging of any geophysical method, approaching
centimeters. Depth of investigation varies from meters to several kilometers, depending upon
material properties. Detection of a subsurface feature depends upon contrast in the dielectric
electrical and magnetic properties. Interpretation of ground penetrating radar data can lead to
information about depth, orientation, size, and shape of buried objects, and soil water content.

a. GPR is a fully operational Cold Regions Research and Engineering Laboratory


(CRREL) resource. It has been used in a variety of projects: e.g., in Antarctica profiling for
crevasses, in Alaska probing for subpermafrost water table and contaminant pathways, at Fort
Richardson probing for buried chemical and fuel drums, and for the ice bathymetry of rivers and
lakes from a helicopter.

b. CRREL has researched the use of radar for surveys of permafrost, glaciers, and river,
lake and sea ice covers since 1974. Helicopter surveys have been used to measure ice thickness in
New Hampshire and Alaska since 1986. For reports on the use of GPR within cold region
environments, a literature search from the CRREL website (http://www.crrel.usace.army.mil/) will
provide additional information. Current applications of GPR can be found at
http://www.crrel.usace.army.mil/sid/gpr/gpr.html.

c. A radar pulse is modulated at frequencies from 100 to 1000 MHz, with the lower
frequency penetrating deeper than the high frequency, but the high frequency having better
resolution than the low frequency. Basic pulse repetition rates are up to 128 Hz on a radar line
profiling system on a sled or airborne platform. Radar energy is reflected from both surface and
subsurface objects, allowing depth and thickness measurements to be made from two-way travel
time differences. An airborne speed of 25 m/s at a low altitude of no more than 3 m allows
collection of line profile data at 75 Hz in up to 4 m of depth with a 5-cm resolution on 1-ft (30.5
cm)-grid centers. Playback rates of 1.2 km/min. are possible for post processing of the data.
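Depth estimates from GPR follow from the two-way travel time and the wave velocity in the ground, which depends on the relative dielectric permittivity of the material. A minimal sketch, using an assumed permittivity value for dry sand:

    import math

    C = 3.0e8  # speed of light in a vacuum, m/s

    def gpr_depth(two_way_time_ns, rel_permittivity):
        # Depth (m) of a reflector from the two-way travel time in nanoseconds
        velocity = C / math.sqrt(rel_permittivity)   # wave speed in the ground
        return velocity * (two_way_time_ns * 1e-9) / 2.0

    # e.g. a 50 ns two-way time in dry sand (relative permittivity assumed ~4)
    print(round(gpr_depth(50.0, 4.0), 2))   # ~3.75 m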

d. There are several commercial companies that do GPR surveys, such as Blackhawk
Geometrics and Geosphere Inc., found on the web at http://www.blackhawkgeo.com, and
http://www.geosphereinc.com.

3-1 Introduction. Remotely sensed data are collected by a myriad of satellite and airborne
systems. A general understanding of the sensors and the platforms they operate on will help in
determining the most appropriate data set to choose for any project. This chapter reviews the nine
business practice areas in USACE Civil Works and examines the leading questions to be addressed
before the initiation of a remote sensing project. Airborne and satellite sensor systems are
presented along with operational details such as flight path/orbits, swath widths, acquisition, and
post processing options. Ground-based remote sensing GPR (Ground Penetrating Radar) is also
introduced. This chapter concludes with a summary of remote sensing and GIS matches for each
of the nine civil works business practice areas.

Platforms: Remote Sensing Platforms

1. Terrestrial platforms (ladders, cranes, hand-held, housetops, tripods, etc.)
2. Drones
3. Aircraft (aeroplanes, balloons)
4. Spacecraft (space shuttle or STS)
5. Earth orbiting satellites (Landsat, SPOT, TIROS, NOAA)

REMOTE SENSING PLATFORMS

Two types of platforms have been in use in remote sensing

(i) Air borne platforms (ii) Space based platforms

1. Airborne platforms

Remote sensing of the surface of the earth has a long history, dating from the use
of cameras carried by balloons and pigeons in the eighteenth and nineteenth
centuries. Later, aircraft-mounted systems were developed for military purposes
during the early part of the 20th century. Airborne remote sensing was the principal
remote sensing method used in the initial years of the development of remote sensing
in the 1960s and 1970s. Aircraft were mostly used as RS platforms for obtaining
photographs. An aircraft carrying RS equipment should have maximum stability,
be free from vibrations, and fly with uniform speed. In India, three types of aircraft
are currently used for RS operations: Dakota, AVRO and Beechcraft Super King Air
200. The RS equipment available in India includes multi-spectral scanners, ocean colour
radiometers, and aerial cameras for photography in B/W, colour and near infrared.
However, aircraft operations are very expensive, and for periodic monitoring of
constantly changing phenomena such as crop growth and vegetation cover,
aircraft-based platforms cannot provide cost- and time-effective solutions.

2. Space based platforms

Space-borne remote sensing platforms, such as satellites, offer several advantages
over airborne platforms. They provide a synoptic view (i.e. observation of a large area in
a single image) and systematic and repetitive coverage. Also, platforms in space are
much less affected by atmospheric drag, so their orbits can be well defined.
The entire earth, or any designated portion of it, can be covered at specified intervals
synoptically, which is immensely useful for the management of natural resources.

Satellite: It is a platform that carries the sensor and other payloads required in RS
operations. It is put into earth's orbit with the help of launch vehicles. The space-
borne platforms are broadly divided into two classes:

(i) Low altitude near-polar orbiting satellites (ii) High altitude Geo-stationary
satellites

Polar orbiting satellites

These are mostly remote sensing satellites which revolve around the earth in a sun-
synchronous orbit (altitude 700-1500 km) defined by its fixed inclination angle from
the earth's N-S axis. The orbital plane rotates to maintain precise pace with the Sun's
westward progress as the earth revolves around the Sun. Since the position in reference
to the Sun is fixed, the satellite crosses the equator precisely at the same local solar
time.

Geo-stationary satellites

These are mostly communication/meteorological satellites which are stationary in
reference to the earth. In other words, their angular velocity is equal to that with
which the earth rotates about its axis. Such satellites always cover the same fixed area
of the earth's surface, and their altitude is about 36,000 km.
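The quoted altitude of about 36,000 km follows from requiring the orbital period to equal one sidereal day. A short sketch of that calculation from Kepler's third law, using standard values for the Earth's gravitational parameter and equatorial radius:

    import math

    MU = 3.986e14         # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.378e6     # Earth's equatorial radius, m
    SIDEREAL_DAY = 86164  # seconds

    # Semi-major axis for a period of one sidereal day: a = (MU * T^2 / (4 * pi^2))^(1/3)
    a = (MU * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
    altitude_km = (a - R_EARTH) / 1000.0
    print(round(altitude_km))   # ~35,786 km, i.e. about 36,000 km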
Landsat Satellite Programme

The National Aeronautics and Space Administration (NASA) of the USA planned the
launching of a series of Earth Resources Technology Satellites (ERTS), and
consequently ERTS-1 was launched in July 1972 and was in operation till July 1978.
Subsequently, NASA renamed the ERTS programme "Landsat". A series of Landsat
satellites have been launched so far, and Landsat images have found a large number of
applications such as agriculture, botany, cartography, civil engineering, environmental
monitoring, forestry, geography, geology, land resources analysis, land use planning,
oceanography and water quality analysis.

SPOT Satellite Programme

France, Sweden and Belgium joined together and pooled their resources to
develop an earth observation programme known as Système Pour l'Observation de la
Terre, abbreviated as SPOT. The first satellite of the series, SPOT-1, was launched in
February 1986. The high-resolution data obtained from the SPOT sensors, namely the
High Resolution Visible (HRV) instruments, have been extensively used for
urban planning, urban growth assessment and transportation planning, besides the
conventional applications related to natural resources.

Indian Remote Sensing Satellites (IRS)

1. Satellite for Earth Observation (SEO-1), now called Bhaskara-1, was the first
Indian remote sensing satellite, launched by a Soviet launch vehicle from the
USSR in June 1979 into a near-circular orbit.
2. SEO-II (Bhaskara-II) was launched in Nov. 1981 from a Soviet cosmodrome.
3. India’s first semi-operational remote sensing satellite (IRS) was launched by
the Soviet Union in Sept. 1987.
4. The IRS series of satellites launched by the IRS mission are: IRS IA, IRS IB, IRS
IC, IRS ID and IRS P4.
Remote Sensing Sensors
Remote sensing sensors are designed to record radiation in one or more parts of
the EM spectrum. Sensors are electronic instruments that receive EM radiation and
generate an electrical signal that corresponds to the energy variations of different
earth surface features. The signal can be recorded and displayed as numerical
data or an image. The strength of the signal depends upon (i) energy flux, (ii)
altitude, (iii) spectral band width, (iv) instantaneous field of view (IFOV), and (v)
dwell time.

A scanning system employs detectors with a narrow field of view which sweeps
across the terrain to produce an image. When photons of EM energy radiated or
reflected from earth surface feature encounter the detector, an electrical signal is
produced that varies in proportion to the number of photons.

As we have said earlier, a sensor is any device that is sensitive to levels or
changes in physical quantities (such as light intensity, temperature, or the
energy level of E-M radiation) and converts these changes or levels
into a form suitable for recording, to produce a pattern which can be
understood or used to infer the characteristics of the source of such
radiation.

Typical examples of sensors that are important to remote sensing are:

(1) Photographic camera (which is sensitive to the visible EM spectrum)
(2) Scanner (a line-scanning device which is sensitive to EM energy
of a specified wavelength for which it is designed)
(3) Multispectral scanner (a line-scanning device that is sensitive to
EM energy of more than one wavelength)
(4) Microwave (radar) sensing devices.

Sensors may also be split into "active" sensor (i.e., when a signal is emitted by the sensor and its
reflection by the object is detected by the sensor) and "passive" sensor (i.e., when the reflection of
sunlight is detected by the sensor).

Passive sensors gather radiation that is emitted or reflected by the object or surrounding areas.
Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of
passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers.
Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a
sensor then detects and measures the radiation that is reflected or backscattered from the target.
RADAR and LiDAR are examples of active remote sensing where the time delay between emission and
return is measured, establishing the location, speed and direction of an object.
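For an active sensor, the range to a target follows directly from the measured round-trip time delay, since the pulse travels out and back at the speed of light. A minimal sketch with an illustrative delay value:

    C = 3.0e8  # speed of light, m/s

    def range_from_delay(round_trip_delay_s):
        # One-way distance to the target from the round-trip delay of an active sensor
        return C * round_trip_delay_s / 2.0

    # e.g. a lidar or radar return arriving 6.67 microseconds after the pulse was emitted
    print(round(range_from_delay(6.67e-6)))   # ~1000 m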

There are many other types of sensors used in remote sensing but these
are of specialized design and so are not of immediate help in this course.
In general, all sensors operate by collecting radiation from a target of
interest and suitably separating the energy levels and recording the
pattern either photographically or numerically. It is immaterial whether
the recording is done photographically or numerically or both because it
is always possible to convert from one form of recording to another using
data converters.

(1) The Photographic Camera

This consists of a system of lenses attached to the front end of a cone which
forms the body of the camera. At the back of the camera body is a metal
and glass plate on which a light-sensitive material such as a film is placed.
Shutters admit or shut off light radiation entering the camera. When the
camera lens is opened to admit light, visible EM energy enters the camera
and is brought to focus at the back of the camera. The film material (halide
crystals) can distinguish between different brightness levels of the incoming
radiation and thus can produce patterns of the energy levels of the incoming
energy. By developing the exposed film, it is possible to produce an image
of the source of the radiation that entered the camera.

Thus the photographic image is a type of data produced by a remote
sensing sensor.

(2) The Scanner

A device that uses a set of charge-coupled devices (CCDs) to detect and
record variations in energy levels. Another design uses an electro-mechanical
system for the same purpose.

(3) Multispectral Scanner (MSS)

A set of scanners which operate in different wavebands simultaneously;
this makes for better accuracy of the data.

Radar (Microwave Sensor): These are usually active sensing
systems which send energy in the microwave band to the targets
and monitor the returned signals. They have the advantage of
being usable at all times and in all seasons. They can even penetrate
clouds, which often prevent effective use of the scanning and
photography-based systems for remote sensing.
LiDAR (light detection and ranging Sensor): These are usually
active sensing systems which send energy in the visible/infrared
band to the targets and monitor the returned signals.
Other types of sensors

3-9 Bathymetric and Hydrographic Sensors.


a. The Scanning Hydrographic Operational Airborne Lidar Survey (SHOALS
http://shoals.sam.usace.army.mil/default.htm) system is used in airborne lidar
bathymetric mapping. The Joint Airborne Lidar Bathymetry Technical Center of
Expertise (JALBTCX) is a partnership between the South Atlantic Division, US Army
Corps of Engineers (USACE), the Naval Meteorology and Oceanography Command
and Naval Oceanographic Office and USACE's Engineer Research and Development
Center. JALBTCX owns and operates the SHOALS system. SHOALS flies on small
fixed wing aircraft, Twin Otter, or on a Bell 212 helicopter. The SHOALS system can
collect data on a 4-m grid with vertical accuracy of 15 cm. In clear water bathymetry
can be collected at 2–3 times Secchi depth or 60 m. It does not work in murky or
sediment-laden waters.
b. The Corps uses vessels equipped with acoustic transducers for hydrographic
surveys. The USACE uses multibeam sonar technology in channel and harbor surveys.
Multibeam sonar systems are used for planning the depth of dredging needed in these
shallow waters, where the accuracy requirement is critical and the need for correct and
thorough calibration is necessary. USACE districts have acquired two types of
multibeam transducers from different manufacturers, the Reson Seabat and the Odom
Echoscan multibeam. The navigation and acquisition software commonly in use by
USACE districts is HYPACK and HYSWEEP, by Coastal Oceanographics Inc. For
further information see the web site at
https://velvet.tec.army.mil/access/milgov/fact_sheet/multibea.html (due to security
restrictions this site can only be accessed by USACE employees).

3-10 Laser Induced Fluorescence.


a. Laser fluorosensors detect a primary characteristic of oil, namely its
characteristic fluorescence spectral signature and intensity. Very few
substances in the natural environment fluoresce; those that do fluoresce with
sufficiently different spectral signatures and intensities that they can be readily
identified. The Laser Environmental Airborne Fluorosensor (LEAF) is the only sensor
that can positively detect oil in complex environments including, beaches and
shorelines, kelp beds, and in ice and snow. In situations where oil contaminates these
environments, a laser fluorosensor proves to be invaluable as a result of its ability to
positively detect oil http://www.etcentre.org/home/water_e.html.
b. Other uses of laser fluorosensors are to detect uranium oxide present in facilities,
abandoned mines, and spill areas that require remediation. See Special Technologies
Laboratory of Bechtel, NV, http://www.nv.doe.gov/business/capabilities/lifi/.
3-11 Airborne Gamma.
a. An AC-500S Aero Commander aircraft is used by the National Operational
Hydrologic Remote Sensing Center (NOHRSC) to conduct aerial snow survey
operations in the snow-affected regions of the United States and Canada. During the
snow season (January–April), snow water equivalent measurements are gathered over
a number of the 1600+ pre-surveyed flight lines using a gamma radiation detection
system mounted in the cabin of the aircraft. During survey flights, this system is flown
at 500 ft (152 m) above the ground at ground speeds ranging between 100 and 120
knots (~51 to 62 m/s). Gamma radiation emitted from trace elements of potassium,
uranium, and thorium radio-isotopes in the upper 20 cm of soil is attenuated by soil
moisture and water mass in the snow cover. Through careful analysis, differences
between airborne radiation measurements made over bare ground are compared to
those of snow-covered ground. The radiation differences are corrected for air mass
attenuation and extraneous gamma contamination from cosmic sources. Air mass is
corrected using output from precision temperature, radar altimeter, and pressure
sensors mounted on and within the aircraft. Output from the snow survey system results
in a mean areal snow water equivalent value within ±1 cm. Information collected
during snow survey missions, along with other environmental data, is used by the
National Weather Service (NWS), and other agencies, to forecast river levels and
potential flooding events attributable to snowmelt water runoff
(http://www.aoc.noaa.gov/_).
b. Other companies use airborne gamma to detect the presence of above normal
gamma ray count, indicative of uranium, potassium, and thorium elements in the
Earth’s crust (for example, Edcon, Inc., http://www.edcon.com, and the Remote
Sensing Laboratory at Bechtel, Nevada). The USGS conducted an extensive survey
over the state of Alaska as part of the National Uranium Resource Evaluation (NURE)
program that ran from 1974 to 1983, http://edc.usgs.gov/.

Practical SENSORS

Sensors on board of Indian Remote Sensing Satellites (IRS)

1. Linear Imaging and Self Scanning Sensor (LISS I): This payload was on board the
IRS 1A and 1B satellites. It had four bands operating in the visible and near-IR
region.
2. Linear Imaging and Self Scanning Sensor (LISS II): This payload was on board the IRS
1A and 1B satellites. It has four bands operating in the visible and near-IR region.
3. Linear Imaging and Self Scanning Sensor (LISS III): This payload is on board the
IRS 1C and 1D satellites. It has three bands operating in the visible and near-IR
region and one band in the shortwave infrared region.
4. Panchromatic Sensor (PAN): This payload is on board the IRS 1C and 1D
satellites. It has a single panchromatic band operating in the visible region.
5. Wide Field Sensor (WiFS): This payload is on board the IRS 1C and 1D satellites.
It has two bands operating in the visible and near-IR region.
6. Modular Opto-Electronic Scanner (MOS): This payload is on board IRS P3
satellite.
7. Ocean Colour Monitor (OCM): This payload is on board IRS P4 satellite. It has
eight spectral bands operating in visible and near IR region.
8. Multi-frequency Scanning Microwave Radiometer (MSMR): This payload is on board the
IRS P4 satellite. It is a passive microwave sensor.

Remote sensing Data and acquisition


Digital images

X.2 What is a digital Image?


The term "image", as described above, refers to a continuous or discrete record of
intensity/brightness values of electromagnetic energy emitted or reflected from an
object or geographic area, captured with a sensor calibrated to record point-wise
intensity values arranged in matrix (rectangular) form. Examples of such images are
aerial photographs (a continuous model when in hardcopy) and Landsat
MSS images (a discrete model when in digital format). In mathematical terms, an
image may be represented by a two-dimensional function of the form f(x, y). The
value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity called the
digital number (DN), which is a measure of the intensity/brightness at that location
(see Table 1).

When an image is generated from a physical process such as camera photography
or scanning, its intensity values are aggregates of the energy radiated by small units
or cells of the target, and the values range between 0 (no radiation) and some
maximum value (signifying highest radiation).

As a consequence, f(x, y) must be non-negative. Its maximum value depends on the
design of the sensor (radiometric resolution or radiometric scale). It must also be
finite; that is, its values cannot be infinite.

Recall that any object interacts with EM energy in three ways, namely: absorption,
transmission and reflection. In this lecture, we assume that the transmitted
energy is so small that it can be ignored.

This means that the incident energy is distributed in the proportion of the
absorptance and reflectance properties of the object.
For daylight sensing, the absorption component plays a lesser role, and so the function
f(x, y) is the reflected component of the EM energy; it is affected by two factors, namely: (1)
the amount of source illumination incident on the scene being viewed, and (2) the
reflective capacity of the objects in the scene.

Denoting the two factors as functions on a plane, i(x, y) and r(x, y), respectively, then
the image function, which we can represent by the more intuitive notation I(x, y),
can be expressed as the product:

I(x, y) = i(x, y) * r(x, y)     (2)

Where

0 < i(x, y) < ∞     (3)

And

0 ≤ r(x, y) < 1     (4)

Equation (4) indicates that reflectance is bounded by 0 (total absorption) and 1
(total reflectance). The nature of i(x, y) is determined by the illumination
source, and r(x, y) is determined by the characteristics of the imaged object.

Thus, an image is a two-dimensional function I(x, y), where x and y are spatial
(plane) coordinates, and the amplitude of I at any pair of coordinates (x, y) is
called the intensity or gray level of the image at that point. When x, y and the
intensity values I are all finite, discrete quantities, we call the image a digital image.
Figure 1: Digital image formation concept (source illumination i(x, y) strikes the target
object with reflectance r(x, y); the sensor records the image I(x, y) = i(x, y) * r(x, y)).
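A minimal numerical illustration of Figure 1: a toy illumination surface and a toy reflectance surface are multiplied element-wise to form I(x, y), which is then quantised to 8-bit digital numbers. All values are made up.

    import numpy as np

    rows, cols = 4, 4
    illumination = np.full((rows, cols), 800.0)               # i(x, y): uniform source, arbitrary units
    reflectance = np.random.uniform(0.0, 1.0, (rows, cols))   # r(x, y): 0 (absorbing) to 1 (reflecting)

    image = illumination * reflectance                        # I(x, y) = i(x, y) * r(x, y)

    # Quantise to 8-bit digital numbers (0-255) relative to the brightest possible return
    dn = np.round(255 * image / illumination.max()).astype(np.uint8)
    print(dn)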

In this lecture, we will focus on digital (discrete) images acquired by imaging
sensors such as cameras and scanners carried on aircraft and satellite platforms
and recorded in digital formats (jpeg, tiff, bmp, etc.).

Digital Number (DN) generation

Decimal to Binary Conversion

Binary to Decimal Conversion
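As an illustration of the two conversions named above, the sketch below writes an arbitrary 8-bit digital number in binary and recovers the decimal value again using Python's built-in helpers:

    dn = 156                    # an arbitrary 8-bit digital number

    binary = format(dn, "08b")  # decimal -> 8-digit binary string
    print(binary)               # 10011100

    recovered = int(binary, 2)  # binary string -> decimal
    print(recovered)            # 156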

It is essential to note that a digital image is basically a 4D dataset, since
each pixel has (x, y, f) coordinate values and a digital number (DN)
representing the gray value. Nevertheless, computational photogrammetric methods do not
adopt a simultaneous treatment of the multi-dimensional dataset in the solution
of a mapping problem. This is because of the complexity of 4D solution processes
and also the difficulty of combining attribute data modelling with geometric data
modelling. To simplify things, we employ the x, y pixel coordinates in the
formulations for 2D mapping work and x, y, f in formulations involved in 3D
geometric mapping. The digital numbers are then used in a separate process, for
example when it is necessary to resample the image after a geometric
transformation or to classify image pixels into object space features, the results of
which are then superposed on the map already prepared from the geometric
process.
5.0 Types of digital Images
Images are classified according to:
the platform used (terrestrial, aircraft, satellite)
the portion of eme spectrum used (gray scale (optical), RGB, thermal or infra-
red, microwave)
the type of sensors used (cameras, scanners, radiometers, radar, LiDAR)

Examples are aerial images, satellite images, and terrestrial images. Others are
photographic images (Plates 2, 3), stereo or overlapping images (Plate 4), RADAR
images, LIDAR, Panchromatic images, Colour images, thermal images and high
resolution images, vertical images, oblique images, etc.

Plate 2: Oblique Photographic imagery


Plate 3: Vertical Aerial photograph
Plate 4: A Block of 4 overlapping images

Qualities of remote sensing sensors


The quality of remote sensing data is judged by its Spatial Resolution, Radiometric
Resolution, Spectral resolution and temporal resolutions.

Spatial resolution

The size of a pixel that is recorded in a raster image – typically pixels may correspond to square
areas ranging in side length from 1 to 1,000 metres (3.3 to 3,280.8 ft).

Spectral resolution

The wavelength width of the different frequency bands recorded – usually, this is related to the
number of frequency bands recorded by the platform. Current Landsat collection is that of
seven bands, including several in the infra-red spectrum, ranging from a spectral resolution of
0.07 to 2.1 μm. The Hyperion sensor on Earth Observing-1 resolves 220 bands from 0.4 to 2.5
μm, with a spectral resolution of 0.10 to 0.11 μm per band.
Radiometric resolution

The number of different intensities of radiation the sensor is able to distinguish. Typically, this
ranges from 8 to 14 bits, corresponding to 256 levels of the gray scale and up to 16,384
intensities or "shades" of colour, in each band. It also depends on the instrument noise.

Temporal resolution

The frequency of flyovers by the satellite or plane, and is only relevant in time-series studies or
those requiring an averaged or mosaic image, as in deforestation monitoring. This was first used by
the intelligence community where repeated coverage revealed changes in infrastructure, the
deployment of units or the modification/introduction of equipment. Cloud cover over a given
area or object makes it necessary to repeat the collection of said location.

6.0 THE DIFFERENT RESOLUTIONS OF AN IMAGE

The term resolution is often used to express certain qualities of a digital
image. Depending on the quality in reference, an additional word is used
to specify the quality. The following are the most common types of
resolutions used in remote sensing: spatial resolution, radiometric
resolution, spectral resolution, and temporal resolution.

 Spatial Resolution
This is the term used to express the metrical accuracy of an image. It is defined
as the diameter of the circular area on the object that is seen as a pixel or dot by
the imaging sensor. If the area is square, then the resolution is the length or
breadth of the area seen as a pixel. Spatial resolution is often expressed as the
number of pixels/dots per unit of space. If the space is the image space, the
units are in inches, centimeters or millimeters. If the space is the object space,
the units are in meters/kilometers or feet/miles. Thus, a spatial resolution of 25 dpi implies
that 25 pixels/dots occupy one inch in the image space, i.e. one pixel is
approximately 1 mm in diameter. A spatial resolution of 1 meter is equivalent to
1000 cells per kilometer in the object space. It should be noted that the spatial
resolution stated in terms of the object space dimensions can be converted into
the image space equivalent and vice versa if the scale of the image is given.

 Radiometric Resolution
This term is often used to express the number of bits used to represent the DN
of each pixel in the image. For example, a one bit image is a black & white image
in which the maximum DN for any pixel is 1 while the minimum is 0. A 2-bit
image is one in which DNs of the pixels can range from 0 to 3, while an 8-bit
image is one in which the DNs can range from 0 to 255. An image that has more
than 1-bit radiometric resolution is generally termed a grey scale image.

 Spectral Resolution
This is a term used to express the number of electromagnetic wave spectrums
used to acquire the image. Often, each em spectrum is called a channel or band;
thus, the number of channels or bands used to procure the image is called the
spectral resolution of the image. An image acquired using many sensors tuned
to different wavebands of the em spectrum is called a multi-channel image. A
colour image is a 3-band image acquired with sensors sensitive to red, green and
blue wavebands. Many satellite images are multi-channel images because the
sensors are always designed to capture different features of the earth and so are
tuned to wavebands of maximum reflectance for the different features of interest.
It should be noted that images taken in different spectral bands are separate
images even though they cover common areas. Such images can be processed
individually or together. The joint processing is often more rigorous and more
robust than the individual approach. Also, depending on the radiometric
resolution chosen, the memory size for a multi-channel image is a direct multiple
of that for a one-channel image. For an 8-bit resolution, a 3-channel image will
require 3*8 bits for the DN of one pixel.
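The memory implication above can be checked with a short calculation: the uncompressed size of an image is the pixel count times the number of bands times the bits per band, divided by eight for bytes. Dimensions in the example are illustrative.

    def image_size_bytes(rows, cols, bands, bits_per_band):
        # Uncompressed storage requirement of a multi-band digital image
        return rows * cols * bands * bits_per_band // 8

    # e.g. a 1024 x 768, 3-band, 8-bit image
    size = image_size_bytes(1024, 768, 3, 8)
    print(size, "bytes =", round(size / 1024 ** 2, 2), "MB")   # 2359296 bytes = 2.25 MB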

• Temporal Resolution
This is a term commonly used for an imaging system that makes repeat visits to a scene at specified intervals of time. The repeat cycle, expressed in terms of days, is called the temporal resolution of the imaging sensor. In general, temporal resolution can be understood to mean the number of images taken at different times for the same area of the object. Temporal resolution becomes important when we are interested in studying the variation of the earth's surface phenomena with time. The more images available over time, the better the analysis and the more reliable the conclusions that may be drawn from such analysis.
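As a rough sketch of the storage arithmetic mentioned under Spectral Resolution above (the scene dimensions below are illustrative assumptions, and uncompressed storage is assumed), the memory requirement of a multi-channel image is simply rows × columns × bands × bits per DN:

```python
# Rough storage estimate for an uncompressed multi-channel image:
# memory = rows x columns x bands x bits per DN.
# The scene dimensions below are illustrative assumptions.
rows, cols = 7000, 7000   # pixels per band
bands = 3                 # e.g. a 3-channel image
bits_per_dn = 8           # 8-bit radiometric resolution

total_bits = rows * cols * bands * bits_per_dn
total_mb = total_bits / 8 / 1024 / 1024   # bits -> bytes -> MB

print(f"{total_bits} bits, i.e. about {total_mb:.0f} MB uncompressed")
```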

9.0 ANALYSIS OF THE SPATIAL RESOLUTION OF AN IMAGE

The term spatial resolution is often used to express the metrical accuracy of a digital image.

It is defined as the length (or breadth) of the square area on the object that is seen as a pixel or dot by the imaging sensor.

Spatial resolution is often expressed as the number of pixels/dots per unit of space.

If the space is the image space, the units are in inches, centimeters or millimeters.

If the space is the object space, the units are in meters/kilometers or feet/miles.

Thus, a spatial resolution of 25 dpi implies that 25 pixels/dots occupy one inch in the image space, i.e. one pixel is approximately 1 mm in length.

A spatial resolution of 1 meter is equivalent to 1000 cells per kilometer in the object space.

In object space terms, the spatial resolution is also often called the ground sampling distance (GSD).

It should be noted that the spatial resolution stated in terms of the object space dimensions can be converted into the image space equivalent and vice versa if the scale of the image is given.

The total number of pixels in an image is a function of the size of the image and the spatial resolution of the image.

Thus a 3 x 2 inch image at a resolution of 300 pixels per inch would be written as 900 x 600 and will have a total of 540,000 pixels.
Frequently image size is given as the total number of pixels in the
horizontal direction times the total number of pixels in the vertical
direction (e.g., 512 x 512, 640 x 480, or 1024 x 768).

Although this convention makes it relatively straightforward to gauge the total number of pixels in an image, it does not specify the size of the image unless accompanied by the spatial resolution.

Thus, a 640 x 480 image would measure 6.67 inches by 5 inches when
presented (e.g., displayed or printed) at 96 pixels per inch. On the other
hand, it would measure 1.6 inches by 1.2 inches at 400 pixels per inch.
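The relationship between pixel count, spatial resolution and physical image size can be sketched as follows (the 640 x 480 image and the 96 and 400 pixels-per-inch figures are the ones quoted above):

```python
# Physical size of a pixel array when presented at a given spatial
# resolution (pixels per inch), as in the 640 x 480 example above.
def display_size_inches(width_px, height_px, pixels_per_inch):
    return width_px / pixels_per_inch, height_px / pixels_per_inch

for ppi in (96, 400):
    w, h = display_size_inches(640, 480, ppi)
    print(f"640 x 480 at {ppi} ppi -> {w:.2f} in x {h:.2f} in")
# 96 ppi gives 6.67 in x 5.00 in; 400 ppi gives 1.60 in x 1.20 in
```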

An important consideration of image spatial resolution is the accuracy requirement of a project. The pixel size must always be smaller than or equal to the unplottable error of the resulting product.

The unplottable error is determined by the capability (resolution) of the plotting device (or medium) and the scale of the resulting map.

The unplottable error can be used to judge the acceptability of the GSD
of an imaging system.

For instance, on a map at a scale of 1:10,000 to be produced on a computer screen with 0.5 mm resolution, the unplottable error becomes (0.5 mm * 10,000) = 5 m. Thus, an image with a GSD below 5 m will be good enough.

If the scale changes to 1:5,000, then the unplottable error is (0.5 mm * 5,000) = 2.5 m. Suppose the screen resolution is 0.1 mm and the scale is 1:10,000; then the maximum acceptable GSD is 1 m. If the scale changes to 1:5,000, then the maximum GSD is 0.5 m.
The reality of modern plotting systems is that they have high resolutions.

For instance, some computer screens have 0.01-0.05 mm resolutions, and some inkjet plotters have resolutions of 600 dpi, which is equivalent to 25.4/600 mm = 0.04 mm. This implies that very high spatial resolution standards can be set for images to be used for mapping.

The concept of unplottable error can also be used to check for the
zoomable scale of an image such as a mosaic or an orthophoto.

For example, if an image is to be zoomable to a given scale on a display device of given resolution, each pixel of the image must not be larger than the indicated plotting error; otherwise, the image sharpness will degrade as each pixel takes more than one pixel of the display, and this results in blocked pixels.

For example, if an orthophoto is to be zoomable up to a scale of 1:2,000 on a screen of 0.1 mm resolution, then the unplottable error is 0.1 mm * 2,000 = 200 mm or 0.2 m. Thus, the image pixel must be smaller than 0.2 m to avoid losing visual clarity. Note that the scale indicated is the largest scale to which the image may be zoomed.
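The unplottable-error checks above reduce to a one-line calculation; the sketch below simply multiplies the device resolution by the map scale denominator, using the device resolutions and scales from the worked examples:

```python
# Unplottable error (in ground units) = device resolution x map scale denominator.
# An image GSD at or below this value is acceptable for the stated scale.
def unplottable_error_m(device_resolution_mm, scale_denominator):
    return device_resolution_mm * scale_denominator / 1000.0   # mm -> m

print(unplottable_error_m(0.5, 10_000))   # 5.0 m  (1:10,000 map, 0.5 mm screen)
print(unplottable_error_m(0.5, 5_000))    # 2.5 m
print(unplottable_error_m(0.1, 10_000))   # 1.0 m
print(unplottable_error_m(0.1, 2_000))    # 0.2 m  (orthophoto zoomable to 1:2,000)
```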

An important matter in using digital images for mapping and positioning is the need to ensure that the pixel size or image resolution is less than or equal to the allowable image measurement errors (unplottable errors in the image unit) for the intended project.

Also, it may be necessary to transform the image from pixel-based coordinates (integer numbers) to metric (continuous) units. Such a transformation may include scale, rotation and translation as the case may be.

Notwithstanding the image referencing system adopted, a properly coordinated image may be referred to as an image vector space, as we shall see in later sections. This possibility permits us to use all the axioms of vector spaces in the derivation of the mathematical models and computational algorithms to be discussed.

PIXEL BRIGHTNESS VALUES (OR RADIOMETRIC SCORES)

• The brightness value for every pixel in the image is represented by a non-negative integer called the digital number (DN) generated by the sensor using a chosen radiometric scale. The DN for each pixel will depend on the reflectance property of the object to which it belongs within the particular EM spectrum channel of the sensor.
• For satellite data such as Landsat and SPOT, the DNs represent the intensity of reflected light in the visible, infrared, or other wavelengths. For imaging radar (SAR) data or LiDAR laser pulses, the DNs represent the strength of a radar/laser pulse returned to the antenna.

A digital image is often encoded in the form of a binary file for the purpose of storage and transmission.
Radiometric scale

Establishing the Radiometric Scale and Resolution for an Imaging Sensor by Binary Arithmetic

3. DECIMAL AND BINARY NUMBER SYSTEMS

The decimal and binary number systems are fundamental to a full understanding
of image representation in digital format. In this chapter we will examine binary
numbers and their relationship to the decimal number system.

3.1 DECIMAL NUMBERS

The decimal system is a method of counting which involves the use of ten digits, that is, a system with a base of ten. Each of the ten decimal digits, 0 through 9, represents a certain quantity. These ten digits do not limit us to expressing only ten different quantities, because we use the various digits in appropriate positions within a number to indicate the magnitude of the quantity (units, tens, hundreds, thousands, etc.). We can express quantities up through nine before we run out of digits, and the position of each digit within the number tells us the magnitude it represents. If, for instance, we wish to express the quantity twenty-three, we use (by their respective positions in the number) the digit 2 to represent the quantity twenty and the digit 3 to represent the quantity three. Therefore, the position of each of the digits in the decimal number indicates the magnitude of the quantity represented and can be assigned a "weight".

The value of a decimal number is the sum of the digits times their respective weights. The weights are the units, tens, hundreds, thousands, etc. The weight of each successive decimal digit position to the left in a decimal number is an increasing power of ten. The weights are: units (10⁰), tens (10¹), hundreds (10²), thousands (10³), etc. The following example will illustrate the idea:

Example:

The value of the decimal number 354 = 3*10² + 5*10¹ + 4*10⁰ = 300 + 50 + 4

3.2 BINARY NUMBERS

The binary number system is simply another way to count. It is less complicated than the decimal system because it is composed of only two digits. It may seem more difficult at first because it is unfamiliar to us.
Just as the decimal system with its ten digits is a base-ten system, the binary system with its two digits is a base-two system. The two binary digits (bits) are 1 and 0. The position of the 1 or 0 in a binary number indicates its "weight" or value within the number, just as the position of a decimal digit determines the magnitude of that digit. The weight of each successively higher position (to the left) in a binary number is an increasing power of two.

Counting in Binary

To learn to count in binary, let us first look at how we count in decimal. We start at 0 and count up to 9 before we run out of digits. We then start another digit position (to the left) and continue counting 10 through 99. At this point we have exhausted all two-digit combinations, so a third digit is needed in order to count from 100 through 999.

A comparable situation occurs when counting in binary, except that we have only two digits. We begin counting 0, 1; at this point we have used both digits, so we include another digit position and continue 10, 11. We have now exhausted all combinations of two digits, so a third is required. With three digits we can continue to count: 100, 101, 110, and 111. Now we need a fourth digit to continue, and so on.

Similar to the decimal number representation, a number in the binary system is represented using a series of 'columns', and in a computer each column is used to represent a switch. The switches, which are small magnetized areas on a computer disk or memory, are usually grouped in packets of eight, known as a byte. Several bytes can be linked together to make a computer word. Words are grouped together in a data record and records are grouped together in a computer file. Sets of computer files can be grouped together hierarchically in directories or subdirectories.
Q. Using a 32-bit word, what range of integer numbers can be represented if the left-most bit is
reserved for sign?

3.3 Conversion of Binary Numbers to Decimal

A binary number is a weighted number, as mentioned previously. The value of a given binary number in terms of its decimal equivalent can be determined by adding the product of each bit and its weight. The right-most bit is the least significant bit (LSB) in a binary number and has a weight of 2⁰ = 1. The weights increase by a power of two for each bit from right to left. It is important to remember that any positive number raised to a zero power equals 1. The decimal value of a binary number is the sum of the products of each binary digit in the number and its corresponding weight.

Example: binary number 100₂ = 1*2² + 0*2¹ + 0*2⁰ = 4

The following formula tells us how high we can count in decimal, beginning with zero, with n bits:

Highest decimal number = 2ⁿ − 1, where n is the number of bits arranged from left to right. For instance, with two bits we can count from 0 through 3:

2² − 1 = 4 − 1 = 3

With four bits, we can count from 0 to 15:

2⁴ − 1 = 16 − 1 = 15

With a byte of eight bits we can represent 256 numbers, from 0 to (2⁸ − 1), i.e. from 0 to 255. If we combine 2 bytes in a 16-bit word it is possible to code numbers from 0 to 65 535. However, it is also useful to be able to code positive and negative numbers, so only the first fifteen bits (counting from the right) are used for coding the number and the sixteenth bit (the 2¹⁵ position) is used to determine the sign. This means that with sixteen bits we can code numbers from −32 767 to +32 767. In image processing, only integer numbers are used. A number lacking a fractional component is called an integer.
The method of evaluating a binary number is illustrated by the following example:

Example:
The binary sequence 0010 means 2³*0 + 2²*0 + 2¹*1 + 2⁰*0 = 2₁₀

The binary sequence 1100 means 2³*1 + 2²*1 + 2¹*0 + 2⁰*0 = 12₁₀

Q. What is the decimal equivalent of the sequence of switches 11111111₂?
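A minimal sketch of the weighted-sum evaluation described in this section; Python's built-in int(..., 2) is included only as a cross-check:

```python
# Binary-to-decimal conversion by summing bit x weight, right to left.
def binary_to_decimal(bits: str) -> int:
    value = 0
    for position, bit in enumerate(reversed(bits)):
        value += int(bit) * (2 ** position)   # the weight of each bit is 2^position
    return value

print(binary_to_decimal("0010"))      # 2
print(binary_to_decimal("1100"))      # 12
print(binary_to_decimal("11111111"))  # 255 (answer to the question above)
print(int("11111111", 2))             # 255, built-in cross-check
```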

3.4 DECIMAL-TO- BINARY CONVERSION

In Section 3.3 we discussed how to determine the equivalent decimal value of a binary number. Now, we will learn two ways of converting from a decimal to a binary number.

1) Sum-of-Weights Method
One way to find the binary number equivalent to a given decimal number is to determine the set of binary weight values whose sum is equal to the decimal number. For instance, the decimal number 9 can be expressed as the sum of binary weights as follows:

9 = 8 + 1 = 2³ + 2⁰

By placing a 1 in the appropriate weight positions, 2³ and 2⁰, and a 0 in the other positions, we have the binary number for decimal 9:

2³ 2² 2¹ 2⁰
1  0  0  1   (binary nine)
Examples

Convert the decimal numbers 12, 25, 58, and 82 to binary.

Solutions:

12₁₀ = 8 + 4 = 2³ + 2² = 1100₂

25₁₀ = 16 + 8 + 1 = 2⁴ + 2³ + 2⁰ = 11001₂

58₁₀ = 32 + 16 + 8 + 2 = 2⁵ + 2⁴ + 2³ + 2¹ = 111010₂

82₁₀ = 64 + 16 + 2 = 2⁶ + 2⁴ + 2¹ = 1010010₂

Q. Convert the decimal numbers 102, 225, 558, and 822 to binary.

2) Repeated Division-by-2 Method

A more systematic method of converting from decimal to binary is the repeated division-by-2 process. For example, to convert the decimal number 12 to binary, we begin by dividing 12 by 2, and then we divide each resulting quotient by 2 until we have a 0 quotient. The remainders generated by each division form the binary number. The first remainder to be produced is the least significant bit (LSB) in the binary number.

Example

12 Div 2 = 6 R 0 (LSB)

6 Div 2 = 3 R 0
3 Div 2 = 1 R 1

1 Div 2 = 0 R 1

Hence 12₁₀ = 1100₂

Q. Convert the decimal numbers 19 and 45 to binary
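The repeated division-by-2 procedure can be sketched directly; remainders are collected LSB first and reversed at the end, with the built-in bin() as a cross-check:

```python
# Decimal-to-binary conversion by repeated division by 2.
def decimal_to_binary(n: int) -> str:
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)        # quotient and remainder
        remainders.append(str(r))  # the first remainder produced is the LSB
    return "".join(reversed(remainders))

print(decimal_to_binary(12))   # 1100
print(decimal_to_binary(19))   # 10011
print(decimal_to_binary(45))   # 101101
print(bin(45)[2:])             # 101101, built-in cross-check
```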

3.5 IMAGE ENCODING AND STORAGE

A digital image is often encoded in the form of a binary file for the purpose of storage
and transmission. Among the numerous encoding formats are BMP (Windows Bitmap),
JPEG (Joint Photographic Experts Group File Interchange Format), and TIFF (Tagged
Image File Format). Although these formats differ in technical details, they share
structural similarities. In general, a multichannel image may be stored in pixel-
interleave (BIP), or line-interleave (BIL) or non-interleave formats (BSQ).

A worked example showing how a 2-band image is stored in line-interleave (BIL) format is given in Section 8.0 below.

6.0 THE MATRIX STRUCTURE OF A DIGITAL IMAGE

As described above, a digital image is composed of discrete pixels or picture elements (dots), each of which represents a spatial area on the target object being viewed.

These pixels are arranged in a row-and-column fashion to form a rectangular picture area (an array of dots, matrix of pixels, or grid of cells), sometimes referred to as a raster. The matrix structure allows an implicit recording of the direction of the EM ray from the individual elements of the object.

Table 1 shows the matrix structure and the addressing system used for a digital
image.

Table 1: An Image matrix

                  (columns or pixel position) c
   S(c, r)     0    1    2    3    4    5
        0      1    2    3    4    1    0
        1      1    5    8    9    2    1
        2      5    6    7    8    5    0
 r      3      4    3   10    7    6    4
        4      3    2    9   10    3    0
        5      7   10    9    8    7    0
        6      1    4    6    5    1    0
   (r: scan lines)

Pixel addressing style

The position of a pixel in the image can be established by the column and line
intersecting at the pixel.

Symbolically, the pixel’s position is indicated as (c, r) where c stands for the column
and r for the line or row of the pixel.
By convention, the indexes c, r start from 0; and while c counts the columns from
left to right, r counts the lines from top to bottom.

Thus, the pixel at the upper left corner of the image has position (0, 0) i.e. c = 0, r =
0.

Typically the pixel at the upper left corner of an image is considered to be at the
origin (0,0) of a pixel coordinate system.

Thus the pixel at the lower right corner of a 640 x 480 image would have coordinates (639, 479), whereas the pixel at the upper right corner would have coordinates (639, 0).

From the structure of an image described above, the total number of pixels in an
image is a function of the physical size of the image and the number of pixels per
unit length (e.g. inch) in the horizontal as well as the vertical direction.

This number of pixels per unit length is referred to as the spatial resolution of the image (this is discussed later). Thus a 3 x 2 inch image at a resolution of 300 pixels per inch would be expressed as a 900 x 600 image and have a total of 540,000 pixels.

More commonly, image size is given as the total number of pixels in the horizontal
direction times the total number of pixels in the vertical direction (e.g., 512 x 512,
640 x 480, or 1024 x 768).
Although this convention makes it relatively straightforward to gauge the total
number of pixels in an image, it does not alone uniquely specify the size of the
image or its resolution as defined in the paragraph above.

For example: an image specified in pixel form as 640 x 480 would measure 6.67
inches by 5 inches when presented (e.g., displayed or printed) at 96 pixels per inch.
On the other hand, it would measure 1.6 inches by 1.2 inches at 400 pixels per inch.

Hence, the spatial resolution of the image must be given expressly or must be
estimated if a physical dimension on the image is given.

1) Pixel-based coordinate system

The position of a pixel in the image can be established by the column and line intersecting at the pixel. The pixel's position is indicated as (c, r), where c stands for the column and r for the scanline of the pixel. By convention, the indexes c, r start from 0; and while c counts the columns from left to right, r counts the lines from top to bottom. The pixel at the upper left corner of the image has position (0, 0), i.e. c = 0, r = 0. Typically the pixel at the upper left corner of an image is considered to be at the origin (0, 0) of the pixel coordinate system. Thus the pixel at the lower right corner of a 640 x 480 image would have coordinates (639, 479), whereas the pixel at the upper right corner would have coordinates (639, 0).

Figure 1: image pixel coordinate system (pixel position c increases to the right from 0; scan line r increases downward from 0)


2) Continuous (metric) coordinate system
For engineering applications, the image positions (x, y) are expressed in metric units such as millimeters (centimeters) or inches. The origin is located at the center of the image, i.e. at the principal point as determined by the intersection of opposite fiducial marks. Locating the origin at the center is mandatory for using the projective formulation in the data reduction process.

We shall adopt the convention of representing an image point as a position vector:
a) p(c, r) in the pixel-based system, where p is the position vector (or the digital number DN), c is the column and r is the row or scan line number;
b) p(x, y) in the metric coordinate system, where x is the distance of the point along the x-axis and y is the distance along the y-axis, with the understanding that the origin of the metric system is at the geometric center of the image.

Figure 2: metric image coordinate system (a point P(x, y) referred to x and y axes with the origin at the image center)
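A minimal sketch of the pixel-to-metric transformation discussed above, assuming square pixels of known size and the metric origin at the geometric center of the image (the pixel size and image dimensions are illustrative):

```python
# Convert a pixel address (c, r) to metric image coordinates (x, y)
# with the origin at the geometric center of the image.
def pixel_to_metric(c, r, n_cols, n_rows, pixel_size_mm):
    x = (c - (n_cols - 1) / 2.0) * pixel_size_mm   # x increases to the right
    y = ((n_rows - 1) / 2.0 - r) * pixel_size_mm   # y increases upward
    return x, y

# Corner pixels of a 640 x 480 image with a 0.01 mm pixel size (illustrative values)
print(pixel_to_metric(0, 0, 640, 480, 0.01))       # upper-left corner
print(pixel_to_metric(639, 479, 640, 480, 0.01))   # lower-right corner
```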

PIXEL BRIGHTNESS LEVEL REPRESENTATION

The brightness value for every pixel in the image is represented by a non-negative integer called the digital number (DN) generated by the sensor using a chosen radiometric scale. The DN for each pixel will depend on the reflectance property of the object to which it belongs within the particular EM spectrum channel of the sensor.

For satellite data such as Landsat and SPOT, the DNs represent the intensity of
reflected light in the visible, infrared, or other wavelengths. For imaging radar (SAR)
data or LiDAR laser pulse, the DNs represent the strength of a radar/laser pulse
returned to the antenna.

A digital image is often encoded in the form of a binary file for the purpose of storage and transmission.

Table 2: A digital Image

30 200 200 90 90 90 90 90 50 50 170

30 200 200 120 90 90 90 90 50 50 170

30 200 200 120 90 90 90 90 50 50 170

30 200 200 120 120 90 90 90 50 50 170

30 200 200 120 120 90 90 90 50 50 170

30 200 200 120 120 120 90 90 50 50 170

30 200 200 60 120 120 120 90 50 50 170

30 200 200 60 60 120 120 120 50 50 170

30 200 200 60 60 60 120 120 120 50 170

30 200 200 60 60 60 60 120 120 120 170


240 240 240 240 240 240 240 240 240 240 170

8.0 IMAGE ENCODING AND STORAGE

A digital image is often encoded in the form of a binary file for the purpose of
storage and transmission.

Among the numerous encoding formats are:

• BMP (Windows Bitmap),
• JPEG (Joint Photographic Experts Group File Interchange Format),
• TIFF (Tagged Image File Format).

Although these formats differ in technical details, they share structural similarities. In general, a multichannel image may be stored in

• pixel-interleave (BIP),
• line-interleave (BIL), or
• non-interleave (BSQ) formats.

For example, given the following 2-band, 7 x 4 pixel image (Band 1 on the left, Band 2 on the right),

5 3 4 5 4 5 5 5 5 4 6 7 7 7

2 2 3 4 4 4 6 2 4 6 5 5 6 5

2 2 3 3 6 6 8 5 3 5 7 6 6 8

2 2 6 6 9 8 7 3 4 5 6 8 8 7

3 6 8 8 8 7 4 3 5 8 8 8 7 1

3 6 8 7 2 3 2 4 5 8 7 1 0 0
Band1 Band 2

it can be stored in line interleave (BIL) as follows:

Header line: A digital image, 2 bands, 7 x 4 pixels (area), line interleaved.

START

5 3 4 5 4 5 5 5 5 4 6 7 7 7 2 2 3 4 4 4 6 2 4

6 5 5 6 5 2 2 3 3 6 6 8 5 3 5 7 6 6 8 2 2 6 6

9 8 7 3 4 5 6 8 8 7

END
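As a sketch of the three interleaving schemes named above, the ordering produced by BSQ, BIL and BIP storage can be generated with NumPy transpose/flatten operations (a small hypothetical 2-band, 2-line by 3-pixel array is used rather than the full example):

```python
import numpy as np

# A small hypothetical image: 2 bands, 2 lines (rows), 3 pixels per line.
band1 = np.array([[1, 2, 3],
                  [4, 5, 6]])
band2 = np.array([[10, 20, 30],
                  [40, 50, 60]])
image = np.stack([band1, band2])           # shape (bands, rows, cols)

bsq = image.flatten()                      # band-sequential: all of band 1, then band 2
bil = image.transpose(1, 0, 2).flatten()   # band-interleaved by line: line 1 band 1, line 1 band 2, ...
bip = image.transpose(1, 2, 0).flatten()   # band-interleaved by pixel: both band values of pixel 1, then pixel 2, ...

print("BSQ:", bsq)   # [ 1  2  3  4  5  6 10 20 30 40 50 60]
print("BIL:", bil)   # [ 1  2  3 10 20 30  4  5  6 40 50 60]
print("BIP:", bip)   # [ 1 10  2 20  3 30  4 40  5 50  6 60]
```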

Earlier paragraphs of this chapter explored the nature of emitted and reflected energy and the
interactions that influence the resultant radiation as it traverses from source to target to sensor.
This paragraph will examine the steps necessary to transfer radiation data from the satellite to the
ground and the subsequent conversion of the data to a useable form for display on a computer.

a. Conversion of the Radiation to Data. Data collected at a sensor are converted from a
continuous analog signal to a digital number. This is a necessary conversion, as electromagnetic
waves arrive at the sensor as a continuous stream of radiation. The incoming radiation is
sampled at regular time intervals and assigned a value (Figure 2-26). The value given to
the data is based on the use of a 6-, 7-, 8-, 9-, or 10-bit binary computer coding scale;
powers of 2 play an important role in this system. Using this coding allows a computer to
store and display the data. The computer translates the sequence of binary numbers, given
as ones and zeros, into a set of instructions with only two possible outcomes (1 or 0,
meaning "on" or "off"). The binary scale that is chosen (i.e., 8-bit data) will depend on the
level of brightness that the radiation exhibits. The brightness level is determined by
measuring the voltage of the incoming energy. Below in Table 2-5 is a list of select bit
integer binary scales and their corresponding number of brightness levels. The ranges are
derived by exponentially raising the base of 2 by the number of bits.

Figure 2-26. Diagram illustrates the digital sampling of continuous analog voltage data. The DN values above the curve represent the digital output values for that line segment.

Table 2-5 Digital number value ranges for various bit data

Number of bits    Exponent of 2    Number of levels    Digital Number (DN) Value Range
6                 2⁶               64                  0-63
8                 2⁸               256                 0-255
10                2¹⁰              1024                0-1023
16                2¹⁶              65536               0-65535
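A short sketch of the quantization step described in Paragraph a, assuming the incoming voltage has already been normalized to the 0-1 range (the sample value is illustrative):

```python
# Quantize a normalized analog sample (0-1) to a digital number for a given bit depth.
def to_digital_number(normalized_voltage, bits):
    levels = 2 ** bits                                # number of brightness levels
    return round(normalized_voltage * (levels - 1)), levels

for bits in (6, 8, 10, 16):
    dn, levels = to_digital_number(0.5, bits)
    print(f"{bits:2d}-bit: {levels} levels, DN range 0-{levels - 1}, sample 0.5 -> DN {dn}")
```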

b. Diversion on Data Type. Digital number values for raw remote sensing data are usually
integers. Occasionally, data can be expressed as a decimal. The most popular code for
representing real numbers (a number that contains a fraction, i.e., 0.5, which is one-half) is
called the IEEE (Institute of Electrical and Electronics Engineers, pronounced I-triple-E)
Floating-Point Standard. ASCII text (American Standard Code for Information
Interchange; pronounced ask-ee) is another alternative computing value system. This
system is used for text data. You may need to be aware of the type of data used in an image,
particularly when determining the digital number in a pixel.

c. Transferring the Data from the Satellite to the Ground. The transfer of data stored in the
sensor from the satellite to the user is similar to the transmission of more familiar signals,
such as radio and television broadcasts and cellular phone conversations. Everything we
see and hear, whether it is a TV program with audio or a satellite image, originates as a
form of electromagnetic radiation. To transfer satellite data from the sensor to a location
on the ground, the radiation is coded (described in Paragraph 2-7a) and attached to a signal.
The signal is generally a high frequency electromagnetic wave that travels at the speed of
light. The data are instantaneously transferred and detected with the use of an appropriate
antenna and receiver.

d. Satellite Receiving Stations.


(1) Satellite receiving stations are positioned throughout the world. Each satellite program has
its own fleet of receiving stations with a limited range from which it can pick up the satellite
signal. For an example of locations and coverage of SPOT receiving stations go to
http://www.spotimage.fr/home/system/introexp/station/welcome.htm.

(2) Satellites can only transmit data when in range of a receiving station. When outside of a
receiving range, satellites will store data until they fly within range of the next receiving
station. Some satellite receiving stations are mobile and can be placed on airplanes for swift
deployment. A mobile receiving station is extremely valuable for the immediate acquisition
of data relating to an emergency situation (flooding, forest fire, military strikes).

e. Data is Prepared for User. Once transmitted, the carrier signal is filtered from the data,
which are decoded and recorded onto a high-density digital tape (HDDT) or a CD-ROM,
and in some cases transferred via file transfer protocol (FTP). The data can then undergo
geometric and radiometric preprocessing, generally by the vendor. The data are
subsequently recorded onto tape or CD compatible with a computer.

f. Hardware and Software Requirements. The hardware and software needed for satellite
image analysis will depend on the type of data to be processed. A number of free image
processing software programs are available and can be downloaded from the internet.
Some vendors provide a free trial or free tutorials. Highly sophisticated and powerful
software packages are also available for purchase. These packages require robust hardware
systems to sustain extended use. Software and hardware must be capable of managing
the requirements of a variety of data formats and file sizes. A single satellite image
file can be 300 MB prior to enhancement processing. Once processed and enhanced, the
resulting data files will be large and will require storage for continued analysis. Because
of the size of these files, software and hardware can be pushed to their limits. Regularly save
and back up your data files, as software and hardware pushed to their limits can crash, losing
valuable information. Be sure to properly match your software requirements with
appropriate hardware capabilities.

g. Turning Digital Data into Images.

(1) Satellite data can be displayed as an image on a computer monitor by an array of pixels, or
picture elements, containing digital numbers. The composition of the image is simply a
grid of continuous pixels, known as a raster image (Figure 2-27). The digital number (DN)
of a pixel is the result of the spatial, spectral, and radiometric averaging of reflected/emitted
radiation from a given area of ground cover (see below for information on spatial, spectral,
and radiometric resolution). The DN of a pixel is therefore the average radiance of the
surface area the pixel represents.
Figure 2-27. Figure illustrates the collection of raster data. Black grid (left) shows what area on the ground is
covered by each pixel in the image (right). A sensor measures the average spectrum from each pixel, recording
the photons coming in from that area. ASTER data of Lake Kissimmee, Florida, acquired 2001-08-18. Image
developed for Prospect (2002 and 2003).

(2) The value given to the DN is based on the brightness value of the radiation (see explanation
above and Figure 2-28). For most radiation, an 8-bit scale is used that corresponds to a
value range of 0–255 (Table 2-4). This means that 256 levels of brightness (DN values are
sometimes referred to as brightness values-𝐵𝑣 ) can be displayed, each representing the
intensity of the reflected/emitted radiation. On the image this translates to varying shades
of grays. A pixel with a brightness value of zero (𝐵𝑣 = 0) will appear black; a pixel with a
𝐵𝑣 of 255 will appear white (Figure 2-29). All brightness values in the range of 𝐵𝑣 = 1 to
254 will appear as increasingly brighter shades of gray. In Figure 2-30, the dark regions
represent water-dominated pixels, which have low reflectance/𝐵𝑣 , while the bright areas
are developed land (agricultural and forested), which has high reflectance.

Figure 2-29. Raster array and accompanying digital number (DN) values for a single band image.
Dark pixels have low DN values while bright pixels have high values. Modified from Natural
Resources Canada image
http://www.ccrs.nrcan.gc.ca/ccrs/learn/tutorials/fundam/chapter1/chapter1_7_e.html.

h. Converting Digital Numbers to Radiance. Conversion of a digital number to its
corresponding radiance is necessary when comparing images from different satellite
sensors or from different times. Each satellite sensor has its own calibration parameter,
which is based on the use of a linear equation that relates the minimum and maximum
radiation brightness. Each spectrum band (see Paragraph 2-7i) also has its own radiation
minimum and maximum.

(1) Information pertaining to the minimum and maximum brightness (𝐿𝑚𝑖𝑛 and 𝐿𝑚𝑎𝑥
respectively) is usually found in the metadata (see Chapter 5). The equation for
determining radiance from the digital number is:

𝐿 = ((𝐿𝑚𝑎𝑥 − 𝐿𝑚𝑖𝑛)/255) × 𝐷𝑁 + 𝐿𝑚𝑖𝑛     (2-10)

where

𝐿 = radiance expressed in W m⁻² sr⁻¹
𝐿𝑚𝑖𝑛 = spectral radiance corresponding to the minimum digital number
𝐿𝑚𝑎𝑥 = spectral radiance corresponding to the maximum digital number
𝐷𝑁 = digital number given a value based on the bit scale used (255 is the maximum DN of an 8-bit scale).

(2) This conversion can also be used to enhance the visual appearance of an image by
reassigning the DN values so they span the full gray scale range (see Paragraph 5-20).
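A minimal sketch of Equation 2-10, assuming 8-bit data (maximum DN of 255) and illustrative Lmin/Lmax calibration values of the kind found in image metadata:

```python
# Convert a digital number to at-sensor radiance using Equation 2-10.
def dn_to_radiance(dn, l_min, l_max, max_dn=255):
    return (l_max - l_min) / max_dn * dn + l_min   # W m^-2 sr^-1

# Illustrative calibration values; real Lmin/Lmax come from the image metadata.
L_MIN, L_MAX = -1.5, 193.0
for dn in (0, 100, 255):
    print(dn, "->", round(dn_to_radiance(dn, L_MIN, L_MAX), 2))
```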

i. Spectral Bands.

(1) Sensors collect wavelength data in bands. A number or a letter is typically assigned to a
band. For instance, radiation that spans 0.45 to 0.52 µm is designated as band 1 for
Landsat 7 data; in the microwave region, radiation spanning 15 to 30 cm is termed the L-
band. Not all bands are created equally. Landsat band 1 (B1) does not represent the same
wavelengths as SPOT's B1.

(2) Band numbers are not the same as sensor numbers. For instance, Landsat 4 does not refer
to band 4. It instead refers to the fourth satellite sensor placed into orbit by the Landsat
program. This can be confusing, as each satellite program has a fleet of satellites (in or out
of commission at different times), and each satellite program will define bands differently.
Two different satellites from the same program may even be collecting radiation at a
slightly different wavelength range for the same band (Table 2-6). It is, therefore,
important to know which satellite program and which sensor collected the data.
Table 2-6 Landsat Satellites and Sensors
The following table lists Landsat satellites 1-7 and provides band information and pixel size. The same band number on different sensors does not
necessarily imply the same wavelength range. For example, notice that band 4 in Landsat 1-2 and 3 differs from band 4 in Landsat 4-5
and Landsat 7. Source: http://landsat.gsfc.nasa.gov/guides/LANDSAT-7_dataset.html#8.

Satellite        Sensor   Band   Band wavelengths (µm)   Pixel Size (m)

Landsats 1-2     RBV      1      0.45 to 0.57            80
                          2      0.58 to 0.68            80
                          3      0.70 to 0.83            80
                 MSS      4      0.5 to 0.6              79
                          5      0.6 to 0.7              79
                          6      0.7 to 0.8              79
                          7      0.8 to 1.1              79

Landsat 3        RBV      1      0.45 to 0.52            40
                 MSS      4      0.5 to 0.6              79
                          5      0.6 to 0.7              79
                          6      0.7 to 0.8              79
                          7      0.8 to 1.1              79
                          8      10.4 to 12.6            240

Landsats 4-5     MSS      4      0.5 to 0.6              82
                          5      0.6 to 0.7              82
                          6      0.7 to 0.8              82
                          7      0.8 to 1.1              82
                 TM       1      0.45 to 0.52            30
                          2      0.52 to 0.60            30
                          3      0.63 to 0.69            30
                          4      0.76 to 0.90            30
                          5      1.55 to 1.75            30
                          6      10.4 to 12.5            120
                          7      2.08 to 2.35            30

Landsat 7        ETM+     1      0.45 to 0.52            30
                          2      0.52 to 0.60            30
                          3      0.63 to 0.69            30
                          4      0.76 to 0.90            30
                          5      1.55 to 1.75            30
                          6      10.4 to 12.5            60
                          7      2.08 to 2.35            30
                 PAN      8      0.50 to 0.90            15

j. Color in the Image. Computers are capable of imaging three primary colors: red, green,
and blue (RGB). This is different from the color system used by printers, which uses
magenta, cyan, yellow, and black. The color systems are unique because of differences in
the nature of the application of the color. In the case of color on a computer monitor, the
monitor is black and the color is projected onto the screen (called additive color). Print
processes require the application of color to paper. This is known as a subtractive process
owing to the removal of color by other pigments. For example, when white light that
contains all the visible wavelengths hits a poster with an image of a yellow flower, the
yellow pigment will remove the blue and green and will reflect yellow. Hence, the process
is termed subtractive. The different color systems (additive vs. subtractive) account for
the dissimilarities in color between a computer image and the corresponding printed image.

(1) Similar to the gray scale, color can also be displayed as an 8-bit image with 256
levels of brightness. Dark pixels have low values and will appear black with some
color, while bright pixels will contain high values and will contain 100% of the
designated color. In Figure 2-31, the 7 bands of a Landsat image are separated to
show the varying DNs for each band.

Figure 2-31. Individual DNs can be identified in each spectral band of an image. In this example the
seven bands of a subset from a Landsat image are displayed. Image developed for Prospect (2002 and
2003).

(2) When displaying an image on a computer monitor, the software allows a user to
assign a band to a particular color (this is termed "loading the band"). Because
there are only three possible colors (red, green, and blue), only three bands of
spectra can be displayed at a time. The possible band choices coupled with the
three-color combinations create a seemingly endless number of possible color
display choices.

(3) The optimal band choice for display will depend on the spectral information needed
(see Paragraph 2-6b(7)). The color you designate for each band is somewhat
arbitrary, though preferences and standards do exist. For example, a typical
color/band designation of red/green/blue in bands 3/2/1 of Landsat displays the
imagery as true-color. These three bands are all in the visible part of the spectrum,
and the imagery appears as we see it with our eyes (Figure 2-32a). In Figure 2-32b,
band 4 (B4) is displayed in the red (called "red-gun" or "red-plane") layer of the
bands 4/3/2, and vegetation in the agricultural fields appears red due to the infrared
location on the spectrum. In Figure 2-32c, band 4 (B4) is displayed as green. Green
is a logical choice for band 4 as it represents the wavelengths reflected by
vegetation.

a. The true color image appears with these bands in the visible part of the spectrum.

b. Using the near infra-red (NIR) band (4) in the red gun, healthy vegetation appears
red in the imagery.
c. Moving the NIR band into the green gun and adding band 5 to the red gun
changes the vegetation to green.
Figure 2-32. Three band combinations of Landsat imagery of 3/2/1, 4/3/2, and 5/4/3 in the RGB.
Images developed for Prospect (2002 and 2003).
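A sketch of "loading" three bands into the red, green and blue guns with NumPy and Matplotlib; the band stack here is random demonstration data, and the 3/2/1 and 4/3/2 combinations follow the discussion above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative stack of 7 Landsat-like bands (rows x cols x bands) filled with random DNs.
rng = np.random.default_rng(0)
stack = rng.integers(0, 256, size=(100, 100, 7), dtype=np.uint8)

def composite(band_stack, r_band, g_band, b_band):
    """Load three bands (numbered from 1) into the red, green and blue planes."""
    return np.dstack([band_stack[:, :, r_band - 1],
                      band_stack[:, :, g_band - 1],
                      band_stack[:, :, b_band - 1]])

true_color = composite(stack, 3, 2, 1)    # 3/2/1: approximate true color
false_color = composite(stack, 4, 3, 2)   # 4/3/2: NIR in the red gun, vegetation shows red

plt.imshow(true_color)
plt.title("3/2/1 composite (random demonstration data)")
plt.show()
```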
k. Interpreting the Image. When interpreting the brightness of a gray scale image (Figure 2-
33), the brightness simply represents the amount of reflectance. For bright pixels the
reflectance is high, while dark pixels represent areas of low reflectance. For example, in a
gray scale display of Landsat 7 band 4, the brightest pixels represent areas where there is a
high reflectance in the wavelength range of 0.76 to 0.90 µm. This can be interpreted to
indicate the presence of healthy vegetation (lawns and golf courses).

(1) A color composite can be somewhat difficult to interpret owing to the mixing of
color. Similar to gray scale, the bright regions have high reflectance, and dark areas
have low reflectance. The interpretation becomes more difficult when we combine
different bands of data to produce what is known as false-color composites (Figure
2-33).

(2) White and black are the end members of the band color mixing. White pixels in a
color composite represent areas where reflectance is high in all three of the bands
displayed. White is produced when 100% of each color (red, green, and blue) is
mixed in equal proportions. Black pixels are areas where there is an absence of
color due to the low DN or reflectance. The remaining color variations represent
the mixing of the three band DNs. A magenta pixel is one that contains equal portions
of blue and red, while lacking green. Yellow pixels are those that are high in
reflectance for the bands in the green and red planes. (Go to Appendix C for a paper
model of the color cube/space.)
l. Data Resolution. A major consideration when choosing a sensor type is the definition of
resolution capabilities. “Resolution” in remote sensing refers to the ability of a sensor to
distinguish or resolve objects that are physically near or spectrally similar to other adjacent
objects. The term high or fine resolution suggests that there is a large degree of distinction
in the resolution. High resolution will allow a user to distinguish small, adjacent targets.
Low or coarse resolution indicates a broader averaging of radiation over a larger area (on
the ground or spectrally). Objects and their boundaries will be difficult to pinpoint in
images with coarse resolution. The four types of resolution in remote sensing include
spatial, spectral, radiometric, and temporal.

(1) Spatial Resolution.

(a) An increase in spatial resolution corresponds to an increase in the ability to resolve one
feature physically from another. It is controlled by the geometry and power of the sensor
system and is a function of sensor altitude, detector size, focal size, and system
configuration.

(b) Spatial resolution is best described by the size of an image pixel. A pixel is a two-
dimensional square-shaped picture element displayed on a computer. The dimensions on
the ground (measured in meters or kilometers) projected in the instantaneous field of view
(IFOV) will determine the ratio of the pixel size to ground coverage. As an example, for a
SPOT image with 20- × 20-m pixels, one pixel in the digital image is equivalent to 20 m
square on the ground. To gauge the resolution needed to discern an object, the spatial
resolution should be half the size of the feature of interest. For example, if a project requires
the discernment of individual trees, the spatial resolution should be a minimum of 15 m. If
you need to know the percentage of timber stands versus clearcuts, a resolution of 30 m will
be sufficient (see the sketch after Table 2-7).

Table 2-7 Minimum image resolution required for various sized objects.

Resolution (m)    Feature/Object size (m)
0.5               1.0
1.0               2.0
1.5               3.0
2.0               4.0
2.5               5.0
5.0               10.0
10.0              20.0
15.0              30.0
20.0              40.0
25.0              50.0
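The half-size rule behind Table 2-7 can be written as a one-line check (a sketch only; feature size here means the smallest object that must be discernible):

```python
# Rule of thumb from Table 2-7: required resolution = half the feature size.
def required_resolution_m(feature_size_m):
    return feature_size_m / 2.0

for size_m in (1.0, 30.0, 50.0):
    print(f"feature of {size_m} m -> resolution of {required_resolution_m(size_m)} m or finer")
```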

(2) Spectral Resolution. Spectral resolution is the size and number of wavelengths, intervals,
or divisions of the spectrum that a system is able to detect. Fine spectral resolution
generally means that it is possible to resolve a large number of similarly sized wavelengths,
as well as to detect radiation from a variety of regions of the spectrum. A coarse resolution
refers to large groupings of wavelengths and tends to be limited in the frequency range.

(3) Radiometric Resolution. Radiometric resolution is a detector's ability to distinguish
differences in the strength of emitted or reflected electromagnetic radiation. A high
radiometric resolution allows for the distinction between subtle differences in signal
strength.

(4) Temporal Resolution.

(a) Temporal resolution refers to the frequency of data collection. Data collected on
different dates allows for a comparison of surface features through time. If a project requires an
assessment of change, or change detection, it is important to know: 1) how many data sets already
exist for the site; 2) how far back in time the data set ranges; and 3) how frequently the satellite
returns to acquire the same location.

(b) Most satellite platforms will pass over the same spot at regular intervals that range
from days to weeks, depending on their orbit and spatial resolution (see Chapter 3). A few
examples of projects that require change detection are the growth of crops, deforestation, sediment
accumulation in estuaries, and urban development.

(5) Determine the Appropriate Resolution for the Project. Increasing resolution tends to lead
to more accurate and useful information; however, this is not true for every project. The
downside to increased resolution is the need for increased storage space and more powerful
hardware and software. High-resolution satellite imagery may not be the best choice when
all that is needed is good quality aerial photographs. It is, therefore, important to determine
the minimum resolution requirements needed to accomplish a given task from the outset.
This may save both time and funds.

2-8 Aerial Photography. A traditional form of mapping and surface analysis by remote sensing
is the use of aerial photographs. Low altitude aerial photographs have been in use since the Civil
War, when cameras mounted on balloons surveyed battlefields. Today, they provide a vast amount
of surface detail from a low to high altitude, vertical perspective. Because these photographs have
been collected for a longer period of time than satellite images, they allow for greater temporal
monitoring of spatial changes. Roads, buildings, farmlands, and lakes are easily identifiable and,
with experience, surface terrain, rock bodies, and structural faults can be identified and mapped.
In the field, photographs can aid in precisely locating target sites on a map.

a. Aerial photographs record objects in the visible and near infrared and come in a variety
of types and scales. Photos are available in black and white, natural color, false color infrared, and
low to high resolution.

b. Resolution in aerial photographs is defined as the resolvable difference between
adjacent line segments. Large-scale aerial photographs maintain a fine resolution that allows users
to isolate small objects such as individual trees. Photographs obtained at high altitudes produce a
small scale, which gives a broader view of surface features.

c. In addition to the actual print or digital image, aerial photographs typically include
information pertaining to the photo acquisition. This information ideally includes the date, flight,
exposure, origin/focus, scale, altitude, fiducial marks, and commissioner (Figure 2-34). If the scale
is not documented on the photo, it can be determined by taking the ratio of the distance between two
objects measured on the photo vs. the distance between the same two objects calculated from
measurements taken from a map.

Photo scale = photo distance/ground distance = 𝑑/𝐷 (2-11)

d. The measurement is best taken from one end of the photo to the other, passing through
the center (because error in the image increases away from the focus point). For precision, it is
best to average a number of ratios from across the image.
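A short sketch of Equation 2-11, averaging several photo/ground distance ratios as suggested in paragraph d (the measured distances are illustrative):

```python
# Photo scale from photo distance / ground distance (Equation 2-11),
# averaged over several measurements for precision.
def photo_scale(photo_dist_m, ground_dist_m):
    return photo_dist_m / ground_dist_m

# Illustrative measurements: (distance on the photo in m, distance on the ground in m)
pairs = [(0.052, 1040.0), (0.047, 950.0), (0.060, 1195.0)]
mean_scale = sum(photo_scale(d, D) for d, D in pairs) / len(pairs)
print(f"mean scale is about 1:{1 / mean_scale:.0f}")
```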

e. Photos are interpreted by recognizing various elements in a photo through the distinction of
tone, texture, size, shape, pattern, shadow, site, and association. For instance, airport landing strips
can look like roads, but their large widths, multiple intersections at small angles, and the
positioning of airport hangars and other buildings allow the interpreter to correctly identify these
"roads" as a special use area.

f. Aerial photos are shot in a sequence with 60% overlap; this creates a stereo view when
two photos are viewed simultaneously. Stereoscopic viewing geometrically corrects photos by
eliminating errors attributable to camera tilt and terrain relief. Images are most easily seen in stereo
by viewing them through a stereoscope. With practice it is possible to see in stereo without the
stereoscope. This view will produce a three-dimensional image, allowing you to see topographic
relief and resistant vs. recessive rock types.

g. To maintain accuracy it is important to correlate objects seen in the image with the
actual object in the field. This verification is known as ground truth. Without ground truth you
may not be able to differentiate two similarly toned objects. For instance, two very different but
recessive geologic units could be mistakenly grouped together. Ground truth will also establish
the level of accuracy that can be attributed to the maps created based solely on photo
interpretations.

h. For information on aerial photograph acquisition, see Chapter 4. Chapter 5 presents a
discussion on the digital display and use of aerial photos in image processing.

3-6 Airborne Digital Sensors. Airborne systems with high resolution digital sensors are becoming
available through commercial companies. These systems are equipped with onboard GPS for
geographic coordinates of acquisitions and real-time image processing. Additionally, by the time
the plane lands on the ground, the data can be copied to CD-ROM and be available for delivery to
the customer with a basic level of processing. The data at this level would require image calibration
and additional processing. See Appendix F for a list of airborne system sensors.

3-7 Airborne Geometries. There are several ways in which airborne image geometry can be
controlled. Transects should always be flown parallel to the principal plane of the sun, such that
the BRDF (bi-directional reflectance distribution function) is symmetrical on either side of the
nadir direction. The pilot should attempt to keep the plane level and fly straight-line transects. But
since there are always some attitude disturbances, GPS and IMU (inertial measuring unit) data can
be used in post-processing the image data to take out this motion. The only way of guaranteeing
nadir-look imagery is to have the sensor mounted on a gyro-stabilized platform. Without this, some
angular distortion of the imagery will result even if it is post-processed with the plane's attitude
data and an elevation model (i.e., sides of buildings and trees will be seen and the areas hidden by
these targets will not be imaged). Shadow on one side of the buildings or trees cannot be eliminated,
and the dynamic range of the imagery may not be great enough to pull anything out of the shadow
region. The only way to minimize this effect is to acquire the data at or near solar noon.

3-8 Planning Airborne Acquisitions.

a. Planning airborne acquisitions requires both business and technical skills. For example,
to contract with an airborne image acquisition company, a sole source claim must be made that
this is the only company that has these special services. If not registered as a prospective
independent contractor for a Federal governmental agency, the company may need to file a Central
Contractor Registration (CCR) Application, phone (888-227-2423), and request a DUNS number
from Dun & Bradstreet, phone (800-333-0505). After this, it is necessary for the contractee to
advertise for services in the Federal Business Opportunities Daily (FBO Daily)
http://www.fbodaily.com. Another way of securing an airborne contractor is by riding an existing
Corps contract; the St. Louis District has several in place. A third way is by paying another
governmental agency, which has a contract in place. If the contractee is going to act as the lead for
a group acquisition among several other agencies, it may be necessary to execute some
Cooperative Research and Development Agreements (CRDAs) between the contractee and the
other agencies. As a word of caution, carefully spell out in the legal document what happens if the
contractor, for any reason, defaults on any of the image data collection areas. A data license should
be spelled out in the contract between the parties.

b. Technically, maps must be provided to the contractor of the image acquisition area.
They must be in the projection and datum required, for example Geographic and WGS84 (World
Geodetic System is an earth fixed global reference frame developed in 1984). The collection flight
lines should be drawn on the maps, with starting and ending coordinates for each straight-line
segment. If an area is to be imaged then the overlap between flight lines must be specified, usually
20%. If the collection technique is that of overlapping frames then both the sidelap and endlap
must be specified, between 20 and 30%. It is a good idea to generate these maps as vector
coverages because they are easily changed when in that format and can be inserted into formal
reports with any caption desired later.

The maximum angle allowable from nadir should be specified. Other technical considerations that
will affect the quality of the resulting imagery include: What sun angle is allowable? What lens
focal length is allowable? What altitude will the collection be flown? Will the imagery be flown
at several resolutions or just one? Who will do the orthorectification and mosaicing of the imagery?
Will DEMs, DTMs, or DSMs be used in the orthorectification process? How will unseen and
shadow areas be treated in the final product? When planning airborne acquisitions, these questions
should be part of the decision process.

More History of Remote Sensing

2-9 Brief History of Remote Sensing. Remote sensing technologies have been built upon the
work of researchers from a variety of disciplines. One must look back more than 100 years to
understand the foundations of this technology. For a timeline history of the development of remote
sensing see http://rst.gsfc.nasa.gov/Intro/Part2_8.html. The chronology shows that remote sensing
has matured rapidly since the 1970s. This advancement has been driven by both the military and
commercial sectors in an effort to effectively model and monitor Earth processes. For brevity, this
overview focuses on camera use in remote sensing followed by the development of two NASA
programs and France's SPOT system. To learn more about the development of remote sensing and
details of other satellite programs see http://rst.gsfc.nasa.gov/Front/tofc.html.

a. The Camera. The concept of imaging the Earth’s surface has its roots in the
development of the camera, a black box housing light sensitive film. A small aperture allows light
reflected from objects to travel into the black box. The light then “exposes” film, positioned in the
interior, by activating a chemical emulsion on the film surface. After exposure, the film negative
(bright and dark are reversed) can be used to produce a positive print or a visual image of a scene.
b. Aerial Photography. The idea of mounting a camera on platforms above the ground for
a "bird's-eye" view came about in the mid-1800s, when there were few objects that flew or
hovered above the ground. During the US Civil War, cameras were mounted on balloons to survey
battlefield sites. Later, pigeons carrying cameras were employed
(http://www2.oneonta.edu/~baumanpr/ncge/rstf.htm), a platform with obvious disadvantages. The
use of balloons and other platforms created geometric problems that were eventually solved by the
development of a gyro-stabilized camera mounted on a rocket. This gyro-stabilizer was created by
the German scientist Maul and was launched in 1912.

c. First Satellites. The world's first artificial satellite, Sputnik 1, was launched on 4
October 1957 by the Soviet Union. It was not until NASA's meteorological satellite TIROS-1
was launched that the first satellite images were produced
(http://www.earth.nasa.gov/history/tiros/tiros1.html). Working on the same principles as the
camera, satellite sensors collect reflected radiation in a range of spectra and store the data for
eventual image processing (see above, this chapter).

d. NASA's First Weather Satellites. NASA's first satellite missions involved study of the
Earth's weather patterns. TIROS (Television Infrared Operational Satellite) missions launched
10 experimental satellites in the early 1960s in an effort to prepare for a permanent weather
bureau satellite system known as TOS (TIROS Operating System). TIROS-N (next generation)
satellites currently monitor global weather and variations in the Earth's atmosphere. The goal of
TIROS-N is to acquire high resolution, diurnal data that include vertical profile measurements
of temperature and moisture.

e. Landsat Program. The 1970s brought the introduction of the Landsat series with the
launching of ERTS-1 (also known as Landsat 1) by NASA. The Landsat program was the first
attempt to image whole-earth resources, including terrestrial (land based) and marine resources.
Images from the Landsat series allowed for detailed mapping of landmasses on a regional and
continental scale.

(1) The Landsat imagery continues to provide a wide variety of information that is highly
useful for identifying and monitoring resources, such as fresh water, timberland, and minerals.
Landsat imagery is also used to assess hazards such as floods, droughts, forest fires, and pollution.
Geographers have used Landsat images to map previously unknown mountain ranges in
Antarctica and to map changes in coastlines in remote areas.

(2) A notable event in the history of the Landsat program was the addition of the TM
(Thematic Mapper), first carried by Landsat 4 (for a summary of Landsat satellites see
http://geo.arc.nasa.gov/sge/landsat/lpsum.html). The Thematic Mapper provides a resolution as
fine as 30 m, a great improvement over the roughly 80-m resolution of earlier sensors. The TM
device collects reflected radiation in the visible, infrared (IR), and thermal IR regions of the spectrum.
(3) In the late 1970’s, the regulation of Landsat was transferred from NASA to NOAA,
and was briefly commercialized in the 1980s. The Landsat program is now operated by the USGS
EROS Data Center (US Geological Survey Earth Resources Observation Systems; see
http://landsat7.usgs.gov/index.html).

(4) As government sponsored programs have become increasingly commercialized and
other countries develop their own remote sensors, NASA's focus has shifted from sensor
development to data sharing. NASA's Data Acquisition Centers serve as a clearinghouse for
satellite data; these data can now be shared via the internet.

f. France's SPOT Satellite System. As a technology, remote sensing continues to advance
globally with the introduction of satellite systems in other countries such as France, Japan, and
India. France's SPOT (Satellite Pour l'Observation de la Terre) has provided reliable high-
resolution (10- to 20-m resolution) image data since 1986.

(1) SPOT 1, 2, and 3 offer both panchromatic data (P or PAN) and three bands of
multispectral (XS) data. The panchromatic data span the visible spectrum without the blue
(0.51-0.73 µm) and maintain a 10-m resolution. The multispectral data provide 20-m
resolution, broken into three bands: Band 1 (Green) spans 0.50–0.59 µm, Band 2 (Red)
spans 0.61–0.68 µm, and Band 3 (Near Infrared) spans 0.79–0.89 µm. SPOT 4 also
supplies a 20-m resolution shortwave infrared (mid-IR) band (B4) covering 1.58 to 1.75 µm.
SPOT 5, launched in spring 2002, provides color imagery, elevation models, and an
impressive 2.5-m resolution. It houses scanners that collect panchromatic data at 5-m
resolution and four-band multispectral data at 10-m resolution (see Appendix D-"SPOT"
file).

(2) SPOT 3 was decommissioned in 1996. SPOT 1, 2, 4, and 5 are operational at the time of
this writing. For information on the SPOT satellites go to
http://www.spotimage.fr/home/system/introsat/seltec/welcome.htm.

g. Future of Remote Sensing. The improved availability of satellite images coupled with the ease
of image processing has led to numerous and creative applications. Remote sensing has
dramatically changed the methodology associated with studying earth processes
on both regional and global scales. Advancements in sensor resolution, particularly spatial,
spectral, and temporal resolution, broaden the possible applications of satellite data.

(1) Government agencies around the world are pushing to meet the demand for reliable
and continuous satellite coverage. Continuous operation improves the temporal record needed
to assess local and global change. Researchers are currently able to perform a 30-year temporal
analysis of critical areas around the globe using satellite images; this time frame can be extended
further back by incorporating digitized aerial photographs.
(2) Remote sensing has established itself as a powerful tool in the assessment and management
of U.S. lands. The Army Corps of Engineers has already incorporated this technology into
its nine business practice areas, demonstrating the tremendous value of remote sensing in
civil works projects.

Data Acquisition and Archives


c. Image Enhancement: Spatial Filters. It is occasionally advantageous to reduce the detail in an
image or to exaggerate particular features. This can be done with a convolution method that creates an
altered, or "filtered," output image data file. Numerous spatial filters have been developed and are
automated within software programs; a user can also develop his or her own spatial filter to
control the output data set. Presented below is a short introduction to the convolution method
and a few commonly used spatial filters.

(1) Spatial Frequency. Spatial frequency describes the pattern of digital values observed
across an image. Images with uniform tone (entirely bright or entirely dark) have zero spatial frequency;
images with a gradational change from bright to dark pixel values have low spatial frequency;
and images with large, abrupt contrasts (black next to white) have high spatial frequency. Images
can be altered from a high to a low spatial frequency with the use of convolution methods.

(2) Convolution.

(a) Convolution is a mathematical operation used to change the spatial frequency of the digital
data in an image. It is used to suppress noise in the data or to exaggerate features of interest. The
operation is performed with a spatial kernel: an array of coefficients arranged in a matrix with an
odd number of rows and columns (Table 5-2). The kernel coefficients are used to average each
pixel with its neighbors as the kernel moves across the image, and the output data set represents
the averaging effect of the kernel coefficients. As a spatial filter, convolution can smooth or blur
images, thereby reducing image noise. In feature detection, such as edge enhancement, convolution
works to exaggerate the spatial frequency in the image. Kernels can be reapplied to an image to
further smooth or exaggerate spatial frequency.

(b) Low-pass filters apply a small gain to the input data (Table 5-2a). The resulting output
decreases the spatial frequency by de-emphasizing relatively bright pixels. Two types of
low-pass filters are the simple mean and center-weighted mean kernels (Table 5-2a and b); the
resultant image will appear blurred. Alternatively, high-pass filters (Table 5-2c)
increase image spatial frequency. These filters exaggerate edges without reducing image
detail (an advantage over the Laplacian filter discussed below).

(3) Laplacian or Edge Detection Filter.

(a) The Laplacian filter detects discrete changes in pixel intensity (spatial frequency) and is used for
highlighting edge features in images. This type of filter works well for delineating linear features,
such as geologic strata or urban structures. The Laplacian is calculated by an edge-enhancement
kernel (Table 5-2d and e) in which the middle coefficient is much higher or lower than the
adjacent coefficients. This type of kernel is sensitive to noise, and the resulting output data will
exaggerate the pixel noise. A smoothing convolution filter can be applied to the image in advance
to reduce the edge filter's sensitivity to data noise.
The Convolution Method

Convolution is carried out by overlaying a kernel on the pixel image and centering
its middle value over the pixel of interest. The kernel is first placed over the pixel
at the top left corner of the image and is then moved across the image from left to right
and top to bottom. Each kernel position creates one output pixel value, calculated by
multiplying each input pixel value by the kernel coefficient above it. For an averaging
(low-pass) kernel, the sum of these products is divided by the number of pixels evaluated
and the output pixel is assigned this average; for edge-detection kernels, whose
coefficients sum to zero, the sum of the products is used directly. The kernel then moves
to the next pixel, always using the original input data set for its calculations. Go to
http://www.cla.sc.edu/geog/rslab/Rscc/rscc-frames.html
for an in-depth description and examples of the convolution method.

Pixels at the image margins create a problem because some of their neighbors lie outside
the image. One solution is to pad the image with invented input data values. A simpler
solution is to clip the affected rows and columns of pixels at the margins from the output.
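
A minimal numpy sketch of this sliding-window procedure (the function name and the 7-by-7 sample scene are illustrative only, not part of any particular software package):

    import numpy as np

    def convolve_average(image, kernel):
        """Slide the kernel across the image: multiply each input pixel by the
        coefficient over it, sum the products, and divide by the number of
        pixels in the window, as described in the box above."""
        pr, pc = kernel.shape[0] // 2, kernel.shape[1] // 2
        rows, cols = image.shape
        out = image.astype(float).copy()           # margin pixels simply keep their input values
        for r in range(pr, rows - pr):
            for c in range(pc, cols - pc):
                window = image[r - pr:r + pr + 1, c - pc:c + pc + 1]
                out[r, c] = np.sum(window * kernel) / kernel.size
        return out

    raw = np.ones((7, 7))
    raw[3, 3] = 10                                 # sample scene from Table 5-2a
    print(convolve_average(raw, np.ones((3, 3))))  # the bright pixel is smoothed into a 3 x 3 block of 2s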

(b) The Laplacian filter measures changes in pixel intensity (spatial frequency). In
areas of the image where the pixel intensity is constant, the filter assigns a digital number value of
0. Where the intensity changes, the filter assigns a positive or negative value to designate
an increase or decrease in intensity. The resulting image will appear black and white,
with white pixels defining the areas where the intensity changes.
Table 5-2. Convolution kernels applied to sample data. In each example the left grid is the raw
input data and the right grid is the output produced by passing the kernel over it.

a. Low Pass: simple mean kernel.

1 1 1
1 1 1
1 1 1

Raw data                  Output data
 1  1  1  1  1  1  1       1  1  1  1  1  1  1
 1  1  1  1  1  1  1       1  1  1  1  1  1  1
 1  1  1  1  1  1  1       1  1  2  2  2  1  1
 1  1  1 10  1  1  1       1  1  2  2  2  1  1
 1  1  1  1  1  1  1       1  1  2  2  2  1  1
 1  1  1  1  1  1  1       1  1  1  1  1  1  1
 1  1  1  1  1  1  1       1  1  1  1  1  1  1

b. Low Pass: center-weighted mean kernel.

1 1 1
1 2 1
1 1 1

Raw data                  Output data
 1  1  1  1  1  1  1       1  1  1  1  1  1  1
 1  1  1  1  1  1  1       1  1  1  1  1  1  1
 1  1  1  1  1  1  1       1  1  2  2  2  1  1
 1  1  1 10  1  1  1       1  1  2  3  2  1  1
 1  1  1  1  1  1  1       1  1  2  2  2  1  1
 1  1  1  1  1  1  1       1  1  1  1  1  1  1
 1  1  1  1  1  1  1       1  1  1  1  1  1  1

c. High Pass kernel.

-1 -1 -1
-1  8 -1
-1 -1 -1

Raw data                  Output data
10 10 10 10 10 10 10       0  0  0  0  0  0  0
10 10 10 10 10 10 10       0  0  0  0  0  0  0
10 10 10 10 10 10 10       0  0 -5 -5 -5  0  0
10 10 10 15 10 10 10       0  0 -5 40 -5  0  0
10 10 10 10 10 10 10       0  0 -5 -5 -5  0  0
10 10 10 10 10 10 10       0  0  0  0  0  0  0
10 10 10 10 10 10 10       0  0  0  0  0  0  0

d. Direction Filter: north-south component kernel.

-1  2 -1
-2  4 -2
-1  2 -1

Raw data                  Output data
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0
 1  1  1  2  1  1  1       0  0 -4  8 -4  0  0

e. Direction Filter: east-west component kernel.

-1 -2 -1
 2  4  2
-1 -2 -1

Raw data                  Output data
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
 1  1  1  2  1  1  1       0  0  0  0  0  0  0
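
For comparison, the same operations are available as library calls. The sketch below applies the Table 5-2a mean kernel and the Table 5-2c high-pass kernel using scipy.ndimage, which is an assumed tool; any image-processing package with a convolution routine will behave similarly.

    import numpy as np
    from scipy import ndimage

    # Sample scene from Table 5-2a: a uniform background with one bright pixel.
    raw = np.ones((7, 7))
    raw[3, 3] = 10

    # Low-pass (simple mean) kernel: nine coefficients of 1 divided by the window size.
    mean_kernel = np.ones((3, 3)) / 9.0
    smoothed = ndimage.convolve(raw, mean_kernel, mode='nearest')
    print(np.round(smoothed))            # the bright pixel is spread into a 3 x 3 block of 2s

    # High-pass (edge) kernel from Table 5-2c: coefficients sum to zero, so no averaging.
    high_pass = np.array([[-1, -1, -1],
                          [-1,  8, -1],
                          [-1, -1, -1]], dtype=float)
    scene = np.full((7, 7), 10.0)
    scene[3, 3] = 15                     # sample scene from Table 5-2c
    print(ndimage.convolve(scene, high_pass, mode='nearest'))   # 40 at the bright pixel, -5 around it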

Image classification methods


(3) Classification Algorithms. Image pixels are extracted into the designated classes by a
computed discriminant analysis. The three types of discriminant analysis algorithms are: minimum
mean distance, maximum likelihood, and parallelepiped. All use brightness plots to establish the
relationship between individual pixels and the training class (or training site).

(a) Minimum Mean Distance. Minimum distance to the mean is a simple computation that
classifies pixels based on their distance from the mean of each training class.
The distance is determined by plotting the brightness values and calculating the Euclidean distance (using the
Pythagorean theorem) between the unassigned pixel and each class mean. Pixels are assigned to the
training class for which this distance is smallest. The user designates a threshold for an acceptable
distance; pixels whose smallest distance exceeds the threshold are classified as unknown.
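
A minimal sketch of this rule, assuming two spectral bands and hypothetical training-class means; the class names, brightness values, and threshold below are illustrative only:

    import numpy as np

    # Hypothetical class means in two bands (e.g., red and near-IR brightness),
    # which in practice come from the training-site statistics.
    class_means = {"water":  np.array([20.0, 15.0]),
                   "forest": np.array([45.0, 110.0]),
                   "urban":  np.array([90.0, 80.0])}

    def min_distance_classify(pixel, means, threshold):
        """Assign the pixel to the class whose mean is nearest in feature space;
        label it 'unknown' if even the smallest distance exceeds the threshold."""
        distances = {name: np.linalg.norm(pixel - mean) for name, mean in means.items()}
        best = min(distances, key=distances.get)
        return best if distances[best] <= threshold else "unknown"

    print(min_distance_classify(np.array([48.0, 105.0]), class_means, threshold=30.0))  # forest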

(b) Parallelepiped. In a parallelepiped computation, unassigned pixels are grouped into a
class when their brightness values fall within a range about the training mean. An acceptable digital
number range is established by setting the maximum and minimum of the class range to plus and minus
a standard deviation from the training mean. The pixel brightness value simply needs to fall
within the class range; the assignment is not based on Euclidean distance. It is therefore possible for a pixel to be
close to a class mean yet not fall within its acceptable range, and likewise for a pixel to be
far from a class mean yet fall within the range and be grouped with that class. This
type of classification can create training-site overlap, causing some pixels to be misclassified.
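
A comparable sketch of the parallelepiped rule, again with hypothetical two-band training statistics; a pixel is accepted by the first class whose box (mean plus or minus k standard deviations in every band) contains it, which is also where the overlap problem noted above can arise:

    import numpy as np

    # Hypothetical training statistics per class: (mean, standard deviation) in each band.
    stats = {"water":  (np.array([20.0, 15.0]),  np.array([4.0, 3.0])),
             "forest": (np.array([45.0, 110.0]), np.array([6.0, 8.0]))}

    def parallelepiped_classify(pixel, stats, k=1.0):
        """Assign the pixel to the first class whose box (mean +/- k standard
        deviations in every band) contains it; otherwise leave it unclassified."""
        for name, (mean, std) in stats.items():
            low, high = mean - k * std, mean + k * std
            if np.all((pixel >= low) & (pixel <= high)):
                return name
        return "unclassified"

    print(parallelepiped_classify(np.array([43.0, 104.0]), stats))  # forest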

(c) Maximum Likelihood. Maximum likelihood is computationally complex. It
establishes the variance and covariance about the mean of each training class and then
statistically calculates the probability of an unassigned pixel belonging to each class. The pixel is
then assigned to the class for which it has the highest probability. Figure 5-19 visually illustrates
the differences between these supervised classification methods.
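
A sketch of the maximum-likelihood rule, scoring the pixel against a Gaussian density built from each class's mean vector and covariance matrix; scipy.stats is an assumed tool and the statistics shown are hypothetical:

    import numpy as np
    from scipy.stats import multivariate_normal

    # Hypothetical training statistics: mean vector and covariance matrix per class.
    classes = {
        "water":  (np.array([20.0, 15.0]),  np.array([[16.0,  4.0], [ 4.0,  9.0]])),
        "forest": (np.array([45.0, 110.0]), np.array([[36.0, 10.0], [10.0, 64.0]])),
    }

    def max_likelihood_classify(pixel, classes):
        """Assign the pixel to the class with the highest Gaussian probability
        density, i.e., the class it most likely belongs to."""
        scores = {name: multivariate_normal(mean, cov).pdf(pixel)
                  for name, (mean, cov) in classes.items()}
        return max(scores, key=scores.get)

    print(max_likelihood_classify(np.array([42.0, 100.0]), classes))  # forest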

(4) Assessing Error. Accuracy can be quantitatively assessed with an error matrix (Table
5-3). The matrix establishes the level of error due to omission (exclusion error) and commission
(inclusion error), and can be used to tabulate an overall accuracy. The error matrix lists the number of
pixels found within each class: the rows in Table 5-3 list the pixels as classified by the image
software, and the columns list the number of pixels in the reference data (or reported from field data).
Omission error reflects the probability that a reference pixel has been correctly classified; it is a comparison
against the reference data. Commission error reflects the probability that a classified pixel actually represents the class to which
it has been assigned. The total accuracy is measured by calculating the proportion of correctly
classified pixels relative to the total number of pixels tested (total = total correct/total tested). A short
computation of these measures is sketched after Table 5-3.

Table 5-3. Omission and Commission Accuracy Assessment Matrix. Taken from Jensen (1996).

                                 Reference Data
Classification   Residential  Commercial  Wetland  Forest  Water   Row total
Residential           70           5          0      13       0        88
Commercial             3          55          0       0       0        58
Wetland                0           0         99       0       0        99
Forest                 0           0          4      37       0        41
Water                  0           0          0       0     121       121
Column total          73          60        103      50     121       407

Overall accuracy = 382/407 = 93.86%

Producer's Accuracy (measure of omission error)
Residential = 70/73   = 96% (4% omission error)
Commercial  = 55/60   = 92% (8% omission error)
Wetland     = 99/103  = 96% (4% omission error)
Forest      = 37/50   = 74% (26% omission error)
Water       = 121/121 = 100% (0% omission error)

User's Accuracy (measure of commission error)
Residential = 70/88   = 80% (20% commission error)
Commercial  = 55/58   = 95% (5% commission error)
Wetland     = 99/99   = 100% (0% commission error)
Forest      = 37/41   = 90% (10% commission error)
Water       = 121/121 = 100% (0% commission error)

Data are the result of an accuracy assessment of Landsat TM data.
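
The accuracy measures in Table 5-3 reduce to a few array operations. The sketch below reproduces the table's numbers (numpy assumed); omission error is 1 minus the producer's accuracy and commission error is 1 minus the user's accuracy.

    import numpy as np

    # Error matrix from Table 5-3: rows = classified image, columns = reference data.
    names  = ["Residential", "Commercial", "Wetland", "Forest", "Water"]
    matrix = np.array([[70,  5,   0, 13,   0],
                       [ 3, 55,   0,  0,   0],
                       [ 0,  0,  99,  0,   0],
                       [ 0,  0,   4, 37,   0],
                       [ 0,  0,   0,  0, 121]])

    correct   = np.diag(matrix)
    overall   = correct.sum() / matrix.sum()     # 382/407 = 93.86%
    producers = correct / matrix.sum(axis=0)     # correct / column (reference) totals
    users     = correct / matrix.sum(axis=1)     # correct / row (classified) totals

    print(f"Overall accuracy: {overall:.2%}")
    for name, p, u in zip(names, producers, users):
        print(f"{name:12s} producer's {p:.0%}  user's {u:.0%}")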

Classification method summary

Image classification uses the brightness values in one or more spectral bands and
classifies each pixel based on its spectral information.

The goal in classification is to assign the remaining pixels in the image to a
designated class such as water, forest, agriculture, urban, etc.

The resulting classified image is composed of a collection of pixels, color-coded to
represent a particular theme. The overall process then leads to the creation of a
thematic map to be used to visually and statistically assess the scene.

(5) Unsupervised Classification. Unsupervised classification does not require prior
knowledge of the scene. This type of classification relies on a computed algorithm that clusters pixels based on
their inherent spectral similarities.

(a) Steps Required for Unsupervised Classification. The user designates 1) the number of
classes, 2) the maximum number of iterations, 3) the maximum number of times a pixel can be
moved from one cluster to another within each iteration, 4) the minimum distance from the mean,
and 5) the maximum standard deviation allowable. The program will iterate and recalculate the
cluster statistics until it reaches the iteration threshold designated by the user. The clusters are chosen
by the algorithm and are distributed across the spectral range spanned by the pixels
in the scene. The resulting classification image (Figure 5-20) will approximate the one that would
be produced with a minimum mean distance classifier (see "Classification
Algorithms" above). When the iteration threshold has been reached, the program may require you to
rename and save the data clusters as a new file. The display will automatically assign a color to
each class; the color assignments can be altered to match an existing color scheme (e.g., blue
= water, green = vegetation, red = urban) after the file has been saved. In the unsupervised
classification process, one class of pixels may be mixed and assigned the color black. These pixels
represent values that did not meet the requirements set by the user, which may be attributable to
spectral "mixing" within those pixels.
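
A brief clustering sketch in the same spirit, using k-means from scikit-learn in place of the ISODATA-style procedure described above; scikit-learn is an assumed tool, the two-band image is synthetic, and the parameters mirror items 1 and 2 of the list:

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic two-band scene reshaped so that each row is one pixel's spectral values.
    rng = np.random.default_rng(0)
    image = rng.integers(0, 255, size=(100, 100, 2)).astype(float)
    pixels = image.reshape(-1, 2)

    # The user supplies the number of clusters and the iteration limit (steps 1 and 2).
    kmeans = KMeans(n_clusters=5, max_iter=20, n_init=5, random_state=0).fit(pixels)

    # Each pixel receives the label of its nearest cluster mean; reshape back to the scene.
    classified = kmeans.labels_.reshape(100, 100)
    print(np.bincount(kmeans.labels_))   # pixel count per spectral cluster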

(b) Advantages of Using Unsupervised Classification. Unsupervised classification is
useful for evaluating areas where you have little or no knowledge of the site. It can be used as an
initial tool to assess the scene prior to a supervised classification. Unlike supervised classification,
which requires the user to hand-select the training sites, unsupervised classification is unbiased
in its geographical assessment of pixels.

(c) Disadvantages of Using Unsupervised Classification. The lack of information about
a scene can make the necessary algorithm decisions difficult. For instance, without knowledge of
a scene, a user may have to experiment with the number of spectral clusters to assign. Each
iteration is time consuming and the final image may be difficult to interpret (particularly if there
are a large number of unidentified pixels, such as those in Figure 5-19). Unsupervised
classification is not sensitive to covariation and variations in the spectral signatures of objects. The
algorithm may mistakenly separate pixels with slightly different spectral values and assign them
to unique clusters when they, in fact, represent a spectral continuum within a group of similar objects.

(6) Evaluating Pixel Classes. An advantage of both supervised and unsupervised
classification lies in the ease with which programs can perform statistical analyses on the result. Once pixel
classes have been assigned, it is possible to list the exact number of pixels in each representative
class (Figure 5-17, classified column). Because the size of each pixel is known from the metadata, the
ground area of each class can be quickly calculated. For example, you can very quickly determine
the percentage of fallow field area versus productive field area in an agricultural scene.
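
A short sketch of this bookkeeping, assuming a classified array of integer class codes and a 30-m pixel size taken from the metadata (both hypothetical here):

    import numpy as np

    # Hypothetical classified scene: one integer class code per pixel.
    classified = np.random.default_rng(1).integers(0, 4, size=(500, 500))
    pixel_size_m = 30.0                                   # pixel dimension from the metadata

    codes, counts = np.unique(classified, return_counts=True)
    area_ha = counts * (pixel_size_m ** 2) / 10_000       # square metres to hectares
    for code, n, a in zip(codes, counts, area_ha):
        pct = 100.0 * n / classified.size
        print(f"class {code}: {n} pixels, {a:.1f} ha, {pct:.1f}% of the scene")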
