UNIT V
INTERPRETATION
LiDAR Data:
LiDAR sensors emit laser pulses and measure the time it takes for the
pulses to return, providing highly accurate elevation data. LiDAR is
essential for creating Digital Elevation Models (DEMs), assessing
terrain characteristics, and mapping landforms and vegetation
structure.
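As a simple illustration of the time-of-flight principle, the sketch below converts hypothetical two-way pulse travel times into ranges; the array name and values are invented for illustration, not taken from any real sensor.

```python
# Minimal sketch: converting LiDAR pulse return times to ranges.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

return_times_s = np.array([1.0e-6, 1.5e-6, 2.0e-6])  # hypothetical two-way times

# Range = (speed of light x two-way travel time) / 2
ranges_m = C * return_times_s / 2.0
print(ranges_m)  # sensor-to-target distances in metres (~150 m for 1 microsecond)
```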
MODIS Data:
The Moderate Resolution Imaging Spectroradiometer (MODIS)
aboard NASA's Terra and Aqua satellites provides global coverage
with moderate spatial resolution and daily revisits. MODIS data is
used for monitoring vegetation dynamics, fire activity, sea surface
temperature, and atmospheric conditions.
(i) Tone
Ground objects of different colours reflect the incident radiation differently, depending upon the incident wavelength and the physical and chemical constituents of the objects.
The imagery recorded in remote sensing therefore appears in different shades or tones. For example, ploughed and cultivated lands record differently from fallow fields. Tone is expressed qualitatively as light, medium and dark.
In SLAR imagery, for example, the shadows cast by non-return of the microwaves appear darker than those parts where greater reflection takes place; the latter appear in a lighter tone.
Similarly, in thermal imagery, objects at higher temperature are recorded in a lighter tone than objects at lower temperature, which appear in a medium to darker tone. Likewise, top soil appears in a dark tone compared to soil containing quartz sand.
Coniferous trees appear in a lighter tone compared to broad-leaved tree clumps.
Tone, therefore, refers to the colour or reflective brightness. Tone, along with texture and shadow (as described below), helps in interpretation and hence is a very important key.
Differences in the moisture content of the soil or rock result in differences in tone. In a black and white photograph, dark tones indicate greater moisture content, while grey or white tones indicate dry soil.
Aerial photos with good contrast bring out tonal differences and hence help in better interpretation. Tonal contrast can be enhanced by the use of high-contrast film, high-contrast paper, or specialized image processing techniques such as 'dodging' or 'digital enhancement'.
Infrared film can sometimes give better contrast, but it can also reduce resolution and cause loss of detail in shadows.
(ii) Texture
Texture is an expression of the roughness or smoothness exhibited by the imagery. It is the rate of change of tonal values. Mathematically it is given as dD/dx, where D is the density and x the distance measured from an arbitrary starting point; it can be measured numerically with a microdensitometer. The change in density D from point A of the imagery to point B, as measured by the microdensitometer, divided by the distance gives the texture value numerically. Texture is dependent upon:
(a) photographic tone,
(b) shape,
(c) size,
(d) pattern and scale of the imagery.
Any slight variation of these can change the texture. Texture can be expressed qualitatively as coarse, medium and fine. Texture is a combination of several image characteristics such as tone, shadow, size, shape and pattern, and is produced by a mixture of features too small to be seen individually, because texture by definition is the frequency of tonal changes.
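To make the dD/dx definition concrete, here is a minimal numerical sketch assuming a hypothetical microdensitometer trace sampled at equal intervals; the values and the sampling interval are invented.

```python
# Numerical sketch of texture as dD/dx, the rate of change of tonal values.
import numpy as np

densities = np.array([0.20, 0.22, 0.31, 0.55, 0.54, 0.90])  # density D along a trace
dx = 0.1  # sampling interval along the trace (arbitrary units)

texture = np.gradient(densities, dx)  # finite-difference estimate of dD/dx
print(texture)  # larger magnitudes indicate more abrupt tonal changes
```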
As an example, the leaves of a tree are too small to be seen individually on an aerial photo; collectively, along with their shadows, they produce what is called texture, which in turn helps to differentiate between shrubs and trees. Texture can sometimes be a very important factor in determining slope stability.
In the case of humid ground, blocked water or poor drainage produces a characteristic texture.
Even springs and the seepage of water from the base of clay give a kind of 'turbulent' texture.
(iii) Association
The relation of a particular feature to its surroundings is an
important key to interpretation.
Sometimes a single feature by itself may not be distinctive
enough to permit its identification.
For example, sinkholes appear as dark spots on imagery where the surface or immediate subsurface consists of limestone; thus the appearance of sinkholes is always associated with surface limestone formations.
An example is that of kettle holes, which appear as depressions on photos in terminal-moraine and glacial terrain.
Another example is that of dark-toned features associated with the flood plain of a river, which can be interpreted as infilled oxbow lakes.
(iv) Shape
Some ground features have typical shapes due to their structure or topography. For example, airfields and football stadiums can easily be interpreted because of their definite ground shapes and geometry, whereas volcanic cones, sand dunes, river terraces, cliffs and gullies can be identified because of their characteristic shapes controlled by geology and topography.
(v) Size
The size of an object, whether relative or absolute, also helps in its identification.
Sometimes measurements of height (as with a parallax bar) also give clues to the nature of the object.
For example, measuring the height of different clumps of trees gives an idea of the different species; similarly, measurements of the dip and strike of rock formations help in identifying sedimentary formations.
Likewise, measurements of the width of roads help in discriminating roads of different categories, i.e., national, state, local, etc. Size, of course, is dependent upon the scale of the imagery.
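The parallax-bar measurement mentioned above rests on the standard parallax height equation, h = H * dp / (Pb + dp). Below is a minimal sketch with invented numbers; H, Pb and dp would in practice come from the flight record and the parallax-bar readings.

```python
# Sketch of the standard parallax height equation used with a parallax bar.
H = 3000.0    # flying height above the ground, in metres (assumed)
Pb = 90.0     # absolute stereoscopic parallax at the object's base, in mm (assumed)
dp = 1.5      # differential parallax measured with the parallax bar, in mm (assumed)

h = H * dp / (Pb + dp)  # object height in metres
print(round(h, 1))      # ~49.2 m, e.g. a tall clump of trees
```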
(vi) Shadows
Shadows cast by objects are sometimes important clues to their identification and interpretation.
For example, the shadow of a suspension bridge can easily be discriminated from that of a cantilever bridge.
Similarly, conical shadows are indicative of coniferous trees.
Tall buildings, chimneys, towers, etc. can easily be identified by their characteristic shadows. On the other hand, shadows can sometimes render interpretation difficult, e.g. dark slope shadows covering important detail.
(vii) Site factor or Topographic Location
The relative elevation or specific location of objects can help in identifying certain features.
For example, the sudden appearance or disappearance of vegetation is a good clue to the underlying soil type.
(viii) Pattern
Pattern is the orderly spatial arrangement of geological, topographic or vegetation features. This spatial arrangement may be two-dimensional (plan view) or three-dimensional (space). Geological patterns may be linear or curved. Linear patterns are formed of a very large number of continuous or discontinuous short ticks which, when viewed by eye, appear to be continuous lines.
Examples of linear geological patterns are faults, fractures, joints, dykes, bedding planes, anticlines, etc.
Examples of curved features are plunging anticlines and folds.
Lineaments or lineations may be short, medium or long, running for several hundred kilometres. These are very important expressions of the lithologic character of the underlying rocks, the attitude of the rock bodies, the spacing of bedding planes and other structural weaknesses, and the control extended by them over the surface features. Vegetation patterns may be of the 'Block' type or the 'Alignment' type.
The 'Alignment' type may be further subdivided into linear, parallel and curved types.
Alignments are due to narrow rock bands or faults. Since faults retain moisture, vegetation is aligned along the fault lines.
An example of a topographic pattern is the typical drainage pattern (controlled or uncontrolled type). The uncontrolled types are those which are purely governed by topography, i.e., the slopes, whereas the controlled types are those which are governed by the underlying geological formations.
DIGITAL INTERPRETATION
Digital interpretation in remote sensing refers to the process of
analyzing and extracting meaningful information from digital imagery
acquired by satellite, aerial, or other remote sensing platforms. Unlike
traditional visual interpretation, which relies on human analysts to
interpret features in photographs or maps, digital interpretation
involves the use of computer-based techniques to automate or assist in
the analysis of remote sensing data. The steps carried out are:
1. Image Processing
2. Feature Extraction
3. Change Detection
4. Quantitative Analysis
5. Integration with GIS and Modeling
6. Validation and Accuracy Assessment
Figure: Steps of digital data interpretation
Image Processing:
Digital interpretation begins with image processing, which
involves a series of computational techniques to enhance,
correct, and manipulate remote sensing imagery.
Preprocessing steps may include radiometric and geometric
correction, atmospheric correction, noise reduction, and image
fusion to improve the quality and usability of the data.
Feature Extraction:
Digital interpretation techniques are used to automatically or
semi-automatically extract features of interest from remote
sensing imagery.
Feature extraction algorithms identify and delineate objects or
land cover classes based on their spectral, spatial, and textural
characteristics.
Common feature extraction methods include classification,
segmentation, object-based image analysis (OBIA), and
machine learning algorithms such as supervised and
unsupervised classification.
Change Detection:
Digital interpretation facilitates the detection and analysis of
temporal changes in remote sensing data.
Change detection algorithms compare multiple images acquired
at different times to identify areas of change, such as urban
expansion, deforestation, land cover conversion, or natural
disasters.
Techniques such as image differencing, image ratioing, and time-series analysis are used to quantify and characterize the magnitude and extent of changes over time.
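A minimal sketch of the image-differencing technique named above; the two arrays stand in for co-registered, radiometrically comparable images, and the threshold is an assumption.

```python
# Change detection by image differencing: subtract two dates, threshold the result.
import numpy as np

date1 = np.random.randint(0, 256, (100, 100)).astype(np.int16)  # synthetic band, time 1
date2 = np.random.randint(0, 256, (100, 100)).astype(np.int16)  # synthetic band, time 2

diff = date2 - date1                  # signed change in digital numbers
changed = np.abs(diff) > 40           # simple fixed threshold (assumed value)
print(changed.mean())                 # fraction of pixels flagged as change
```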
Quantitative Analysis:
Digital interpretation enables quantitative analysis of remote
sensing data, allowing for the measurement and extraction of
numerical information from imagery.
Quantitative analysis may include calculating vegetation
indices, estimating land surface temperature, determining object
heights or volumes, and deriving biophysical parameters such as
biomass or soil moisture.
These quantitative measurements provide valuable insights into
environmental processes, ecosystem dynamics, and land surface
characteristics.
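One of the vegetation indices mentioned above, NDVI = (NIR - Red) / (NIR + Red), is simple enough to sketch directly; the band arrays here are synthetic placeholders for real red and near-infrared reflectance bands.

```python
# Minimal sketch of one common quantitative product, NDVI.
import numpy as np

red = np.random.rand(100, 100).astype(np.float32)  # red reflectance (synthetic)
nir = np.random.rand(100, 100).astype(np.float32)  # near-infrared reflectance (synthetic)

ndvi = (nir - red) / (nir + red + 1e-10)  # small epsilon avoids divide-by-zero
print(ndvi.min(), ndvi.max())             # values fall in roughly [-1, 1]
```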
Integration with GIS and Modeling:
Digital interpretation outputs are often integrated with
geographic information systems (GIS) and spatial analysis tools
to perform further analysis, visualization, and modeling.
GIS allows for the spatial representation, manipulation, and
overlay of remote sensing data with other geospatial datasets,
enabling comprehensive spatial analysis and decision-making.
Digital interpretation results can be used as input for
environmental modeling, land use planning, resource
management, and disaster risk assessment.
PREPROCESSING
Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric or geometric corrections.
Radiometric corrections include correcting the data for sensor
irregularities and unwanted sensor or atmospheric noise, and
converting the data so they accurately represent the reflected or
emitted radiation measured by the sensor.
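A sketch of one typical radiometric conversion, from digital numbers (DN) to at-sensor radiance via a linear gain/offset and then to top-of-atmosphere reflectance; the gain, offset, ESUN and sun elevation below are placeholder assumptions, since real values come from the sensor's calibration metadata.

```python
# DN -> at-sensor radiance -> top-of-atmosphere reflectance (placeholder constants).
import numpy as np

dn = np.random.randint(1, 256, (100, 100)).astype(np.float32)  # synthetic DN image

gain, offset = 0.05, 1.0          # sensor calibration coefficients (assumed)
radiance = gain * dn + offset     # at-sensor spectral radiance

esun = 1550.0                     # mean solar exoatmospheric irradiance (assumed)
d = 1.0                           # Earth-Sun distance in astronomical units
sun_elev_deg = 45.0               # sun elevation from scene metadata (assumed)

theta = np.deg2rad(90.0 - sun_elev_deg)                          # solar zenith angle
reflectance = np.pi * radiance * d**2 / (esun * np.cos(theta))   # TOA reflectance
```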
Geometric corrections include correcting for geometric
distortions due to sensor-Earth geometry variations, and
conversion of the data to real world coordinates (e.g. latitude
and longitude) on the Earth's surface.
The objective of the second group of image processing functions
grouped under the term of image enhancement, is solely to
improve the appearance of the imagery to assist in visual
interpretation and analysis.
Examples of enhancement functions include contrast stretching to increase the tonal distinction between various features in a scene, and spatial filtering to enhance (or suppress) specific spatial patterns in an image.
Image transformations are operations similar in concept to those
for image enhancement. However, unlike image enhancement
operations which are normally applied only to a single channel
of data at a time, image transformations usually involve
combined processing of data from multiple spectral bands.
Arithmetic operations (i.e. subtraction, addition, multiplication,
division) are performed to combine and transform the original
bands into "new" images which better display or highlight
certain features in the scene.
We will look at some of these operations, including various methods of spectral or band ratioing, and a procedure called principal components analysis, which is used to represent the information more efficiently.
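A brief sketch of both transformations on two synthetic "bands"; the array shapes and values are assumptions, and the PCA here is the plain covariance-eigenvector construction rather than any particular package's routine.

```python
# Band ratio and principal components analysis on two synthetic bands.
import numpy as np

band3 = np.random.rand(100, 100).astype(np.float32) + 0.1
band4 = np.random.rand(100, 100).astype(np.float32) + 0.1

ratio = band4 / band3  # simple band ratio image

# PCA: treat each pixel as a 2-band vector, project onto eigenvectors of
# the covariance matrix; the first component carries the most variance.
pixels = np.stack([band3.ravel(), band4.ravel()], axis=1)
pixels -= pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
pc1 = (pixels @ eigvecs[:, -1]).reshape(100, 100)  # first principal component image
```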
Image classification
Image classification is a procedure to automatically categorize all pixels in an image of a terrain into land cover classes. Normally, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for categorization.
This concept is dealt under the broad subject, namely, Pattern
Recognition.
Spectral pattern recognition refers to the family of classification
procedures that utilises this pixel-by-pixel spectral information
as the basis for automated land cover classification.
Spatial pattern recognition involves the categorization of image
pixels on the basis of the spatial relationship with pixels
surrounding them. Image classification techniques are grouped
into two types, namely
1. Supervised
2. Unsupervised
The classification process may also include features, such as land surface elevation and soil type, that are not derived from the image.
A pattern is thus a set of measurements on the chosen features for the individual to be classified. The classification process may therefore be considered a form of pattern recognition, that is, the identification of the pattern associated with each pixel position in an image in terms of the characteristics of the objects on the earth's surface.
Supervised Classification
A supervised classification algorithm requires a training sample
for each class, that is, a collection of data points known to have
come from the class of interest. The classification is thus based
on how "close" a point to be classified is to each training
sample.
We shall not attempt to define the word "close" other than to say
that both geometric and statistical distance measures are used in
practical pattern recognition algorithms.
The training samples are representative of the known classes of
interest to the analyst.
Classification methods that rely on the use of training patterns are called supervised classification methods.
The three basic steps involved in a typical supervised
classification procedure are as follows:
(i) Training stage:
The analyst identifies representative training areas and develops
numerical descriptions of the spectral signatures of each land
cover type of interest in the scene.
(ii) The classification stage:
Each pixel in the image data set is categorized into the land cover class it most closely resembles. If the pixel is insufficiently similar to any training data set, it is usually labeled 'Unknown'.
(iii) The output stage:
The results may be used in a number of different ways. Three typical forms of output products are thematic maps, tables, and digital data files, which become input data for GIS.
The output of image classification becomes input for GIS for spatial analysis of the terrain. The figure depicts the flow of operations performed during image classification of remotely sensed data of an area, which ultimately leads to the creation of a database as input for GIS.
Figure: Basic steps of supervised classification
There are a number of powerful supervised classifiers based on statistics which are commonly used for various applications.
A few of them are the minimum distance to means method, the average distance method, the parallelepiped method, the maximum likelihood method, the modified maximum likelihood method, the Bayesian method, decision tree classification, and discriminant functions.
The principles and working algorithms of all these supervised
classifiers are available in almost all standard books on remote
sensing and so details are not provided here.
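Still, for concreteness, here is a minimal sketch of the simplest classifier in the list above, minimum distance to means, with invented training samples; it is a sketch of the idea, not any particular package's implementation.

```python
# Minimum distance to means: assign each pixel vector to the class whose
# training mean is closest in spectral space.
import numpy as np

# Hypothetical training samples: rows are pixels, columns are bands.
water = np.array([[10, 40], [12, 38], [11, 42]], dtype=float)
veg = np.array([[30, 120], [35, 110], [28, 125]], dtype=float)
class_means = np.stack([water.mean(axis=0), veg.mean(axis=0)])  # (classes, bands)

pixels = np.array([[11, 39], [33, 118], [20, 80]], dtype=float)  # pixels to classify

# Euclidean distance from every pixel to every class mean.
dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
labels = dists.argmin(axis=1)   # 0 = water, 1 = vegetation
print(labels)                   # the last pixel is a borderline case
```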
Since all the supervised classification methods use training data
samples, it is more appropriate to consider some of the
fundamental characteristics of training data.
Training Dataset
A training dataset is a set of measurements (points from an
image) whose category membership is known by the analyst.
This set must be selected based on additional information derived from maps, field surveys, aerial photographs, and the analyst's knowledge of the usual spectral signatures of different cover classes.
Selecting a good set of training points is one of the most critical
aspects of the classification procedure.
The guidelines are as follows:
(i) Select a sufficient number of points for each class. If each measurement vector has N features, then select at least N+1 points per class; the practical minimum is 10*N per class. If the class shows a lot of variability (the scatter plot showing considerable spreading or scatter among training points), select a larger number of points, subject to practical limits of time, effort and expense. The more training points, the better: extra points can be used to evaluate the accuracy of the classifier, and in general the more points, the more accurate the classification will be.
(ii) Select training data sets which are representative of the classes of interest, showing both typical average feature values and a typical degree of variability. For each class, select several training areas on the image instead of just one. Each training area should contain a moderately large number of pixels. Pick training areas from seemingly homogeneous-appearing regions, and pick training areas that are widely and spatially dispersed across the full image. For each class, select training areas which are uniformly distributed across the image and with high density.
(iii) Check that the selected areas have unimodal distributions (histograms). A bimodal histogram suggests that pixels from two different classes may be included in the training sample.
(iv) Select training sets (physically) using a computer-based classification system.
Poorest method: using coordinates of training points or training regions directly.
Better method: using a joystick, trackball or light pen directly on the image.
For example, in EASI/PACE, the program should show the histograms, means and standard deviations for each region selected, and for each class in total.
The program should allow iteration: do a classification using one set of training points, then come back and modify the training sets and class definitions without starting all over again. There should be options to combine classes from a previous classification.
(v) The program should allow one to designate half of the points as training points and use the other half to test the accuracy of the trained classifier. Before it is used, the training set should be evaluated by examining scatterplots and/or histograms for each class. These should show unimodal distributions, ideally approximating normal distributions. If they are not unimodal, one may want to select new training sets. After the discriminant functions and the classification rule are derived, the accuracy must be tested.
Two acceptable techniques which are commonly used are:
(a) Designate a randomly selected half of the training points as test points before developing the classifier, and use the other half for training. Then classify the half of the data not used for training.
Develop a contingency table (confusion matrix) to indicate the probability of error in each class. This procedure is actually a measure of the consistency of the classifier.
(b) Randomly select a set of pixel regions of unknown class from the image. Classify them using the discriminant functions and rules developed from the training set. Then verify the correctness of the classification (again with a confusion matrix) by checking the identity of these regions against external information sources such as maps and aerial photos.
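A hedged sketch of procedure (a), holding out half the labelled points and building a confusion matrix; it uses scikit-learn with synthetic features and labels as placeholders, and a Gaussian naive Bayes classifier purely as a stand-in for whichever classifier is actually used.

```python
# Hold-out accuracy assessment: train on half the labelled points,
# test on the other half, and tabulate errors in a confusion matrix.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])  # 4 "bands"
y = np.array([0] * 50 + [1] * 50)   # two synthetic land cover classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test)))  # rows: true classes
```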
(vi) Separability of classes: so far we have looked at an ideal situation where there is no overlap between different classes. In reality the classes are likely to overlap. It can be seen that the less the overlap between classes, the lower the chance of misclassifying a given pixel. Classes that have little overlap are said to be highly separable.
Unsupervised Classification
Unsupervised classification algorithms do not compare points to be classified with training data.
Rather, unsupervised algorithms examine a large number of
unknown data vectors and divide them into classes based on
properties inherent to the data themselves.
The classes that result stem from differences observed in the
data. In particular, use is made of the notion that data vectors
within a class should be in some sense mutually close together
in the measurement space, whereas data vectors in different
classes should be comparatively well separated.
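The notion of grouping mutually close data vectors can be sketched with k-means clustering; k-means is only one of several clustering algorithms used in practice, and the band values and cluster count below are assumptions.

```python
# Unsupervised grouping of pixel vectors by mutual closeness in
# measurement space, using k-means as one representative algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(20, 3, (200, 3)),    # e.g. water-like pixel vectors
                    rng.normal(90, 5, (200, 3))])   # e.g. vegetation-like vectors

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
spectral_class = kmeans.labels_   # cluster index per pixel (a spectral class)
print(np.bincount(spectral_class))
```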
If the components of the data vectors represent the responses in
different spectral bands, the resulting classes might be referred
to as spectral classes, as opposed to information classes, which
represent the ground cover types of interest to the analyst.
The two types of classes described above, information classes
and spectral classes, may not exactly correspond to each other.
For instance, two information classes, corn and soya beans, may
look alike spectrally. We would say that the two classes are not
separable spectrally.
At certain times of the growing season corn and soya beans are not spectrally distinct, while at other times they are. On the other hand, a single information class may be composed of two spectral classes.
Differences in planting dates or seed variety might result in the information class 'corn' containing two spectral classes, reflecting the differences between tasseled and untasseled corn.
Image enhancement
The low sensitivity of the detectors, weak signals from the objects present on the earth's surface, similar reflectance of different objects, and the environmental conditions at the time of recording are the major causes of low contrast in an image.
Another problem that complicates the photographic display of a digital image is that the human eye is poor at discriminating the slight radiometric or spectral differences that may characterize the features. The main aim of digital enhancement is to amplify these slight differences for better clarity of the image scene. This means digital enhancement increases the separability (contrast) between the classes or features of interest.
Digital image enhancement may be defined as the set of mathematical operations applied to digital remote sensing input data to improve the visual appearance of an image for better interpretability or subsequent digital analysis. Since image quality is a subjective measure varying from person to person, there is no simple rule which will produce a single best result.
Contrast Enhancement
The sensors mounted on board aircraft and satellites have to be capable of detecting upwelling radiance levels ranging from low (e.g., over oceans) to high (e.g., over snow or ice).
For any particular area being imaged it is unlikely that the full dynamic range of the sensor will be used, and the corresponding image is dull and lacking in contrast, or overly bright. In terms of the RGB model, the pixel values are clustered in a narrow range of grey levels.
If this narrow range of grey levels could be altered so as to fit the full range of grey levels, the contrast between the dark and light areas of the image would be improved while maintaining the relative distribution of the grey levels. This is, in effect, a manipulation of look-up table values.
The enhancement operations are normally applied to image data
after the appropriate restoration procedures have been
performed. The most commonly applied digital enhancement
techniques will be considered now.
The sensitivity of remote sensing detectors is designed to record a wide range of terrain brightness, from black asphalt and basaltic rocks to white sea ice, under a wide range of lighting conditions. In general, few scenes span the full brightness range; to produce an image with an optimum contrast ratio, it is necessary to utilize the entire dynamic range. Digital contrast enhancement is thus of prime importance.
The objective of contrast stretching is to expand the narrow
dynamic range of gray values (digital numbers) typically present
in an input image.
A variety of contrast stretching algorithms are available; they are broadly categorized as linear and non-linear contrast stretching.
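A minimal sketch of the linear (min-max) variant: the narrow input range of grey levels is mapped onto the full 0-255 output range. The input array is an illustrative low-contrast image.

```python
# Linear (min-max) contrast stretch of a dull, low-contrast image.
import numpy as np

img = np.random.randint(60, 110, (100, 100)).astype(np.float32)  # narrow grey range

lo, hi = img.min(), img.max()
stretched = (img - lo) / (hi - lo) * 255.0   # expand to the full dynamic range
stretched = stretched.astype(np.uint8)
print(img.min(), img.max(), "->", stretched.min(), stretched.max())
```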
Spatial Filtering Techniques
Some of the most commonly used filtering techniques are listed below; a short sketch of these filters follows the list.
a. Low Pass Filters
b. Median Filter
c. High Pass Filters
d. Filtering for Edge Enhancement
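Brief sketches of the listed filters using scipy.ndimage on a synthetic image; the 3x3 window size is an assumption, and the high-pass filter is formed here as original minus low-pass, one common construction among several.

```python
# Low-pass (mean), median, high-pass and edge-enhancement filtering.
import numpy as np
from scipy import ndimage

img = np.random.rand(100, 100).astype(np.float32)  # synthetic image

low_pass = ndimage.uniform_filter(img, size=3)   # smooths, suppresses fine detail
median = ndimage.median_filter(img, size=3)      # removes speckle-like noise
high_pass = img - low_pass                       # emphasizes fine detail

# Edge enhancement: gradient magnitude from Sobel derivatives.
edges = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
```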