
UNIT V - DATA PRODUCTS AND INTERPRETATION

PHOTOGRAPHIC AND DIGITAL PRODUCTS


Introduction
 Remote sensing involves the collection and interpretation of data about the Earth's surface without direct contact.
 Both photographic and digital technologies play vital roles in capturing and analyzing remote sensing data.
 Traditional methods relied on photographic film, while modern approaches use digital sensors. Remote sensing data products are made available to users in two forms:
1. photographic products, such as paper prints, film negatives and diapositives, in black and white or as false colour composites (FCC), at a variety of scales
2. digital products, such as computer compatible tapes (CCTs), supplied after the necessary corrections.
 Broadly, satellite data products can be classified into different
types based on satellite and sensor, level of preprocessing and
the media.
 Data products acquired for the specific period can be generated
if the data pertaining to the period of interest is available in
archives.
 Depending upon the corrections applied and the level of processing, data products can be classified as: raw data, partially corrected products, standard products, geocoded products, and precision products.
 Raw data is radiometrically and geometrically uncorrected data supplied with ancillary information (e.g. stereo products for photogrammetric studies). Standard products are radiometrically and geometrically corrected for systematic errors. Geocoded products are standard products that are additionally geocoded: the systematic corrections are based on the standard Survey of India toposheets, with the pixels rotated to align with true north and resampled to a standard square pixel.

Figure Types of product media (NRSA, 1999)


 In precision products, the radiometric and geometric corrections are further refined using ground control points to achieve greater locational accuracy.
 Data products can be broadly classified into two types
depending upon the output media, as photographic and digital.
 The Figure shows the types of products based on media.
Photographic products can either be in black and white, or
colour. Further they could be either film or paper products, and
in films it is possible to have either positive film or negative
film.
 The sizes of photographic products vary depending on the enlargement needed, specified as 1X, 2X, 4X and so on. The film recorder output is generally 240 mm, and this is the basic master from which further products are generated. Colour photographic products generally mean false colour composites (Plate 3).
 FCCs are generated by combining the data from three different spectral bands into one image, assigning the blue, green and red colours to the data in the three spectral bands respectively during the exposure of a colour negative (a minimal sketch of digital FCC generation follows this list).
 The choice of band combination is determined by the application at hand.
 The photographic products supplied by the National Remote Sensing Agency (NRSA) Data Centre (NDC), Government of India, are standard B/W and FCC films and paper prints. Standard products are available in colour and in black and white as 240 mm films, either negatives or positives. The figure shows the various photographic products of different sizes and printing media. Paper prints, both B/W and FCC, are supplied at various scales.
 They are 1X (contact prints), 2X (two times enlarged), 4X (four times enlarged) and 5X (five times enlarged). Depending upon the enlargement, the scale of the product varies (IRS handbook, 1998).
 The photographic products contain certain details annotated in the margins. These are useful for identifying the scene, sensor, date of pass, processing level, band combination, and so on.
 Visual interpretation of remote sensing data is based largely on False Colour Composites (FCCs). Even when digital techniques are applied, the results are interpreted visually.
 Scientists, analysts and other users may interpret the same scene for different purposes. In fact, it is one of the rare sources of information that can generate multiple themes, such as water resources, soil, land use, and urban sprawl.
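The band-to-colour assignment described above can be reproduced digitally. Below is a minimal sketch, assuming three co-registered NumPy arrays for the green, red and near-infrared bands are already loaded (e.g. with rasterio); the normalisation routine and array names are illustrative.

```python
import numpy as np

def normalize(band):
    """Scale a band to [0, 1] so the three colour guns are comparable."""
    b = band.astype(float)
    return (b - b.min()) / (b.max() - b.min() + 1e-10)

def make_fcc(nir, red, green):
    """Standard FCC: NIR -> red, red band -> green, green band -> blue."""
    return np.dstack([normalize(nir), normalize(red), normalize(green)])
```

In this standard combination, healthy vegetation, which reflects strongly in the near infrared, appears red in the composite.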
Figure Photographic and digital products
TYPES, LEVELS AND OPEN SOURCE SATELLITE DATA
PRODUCTS
Types of Satellite Data:
1. Optical Imagery
2. Radar Imagery
3. Hyperspectral Imagery
4. LiDAR Data
5. Thermal Imagery
Optical Imagery:
Captured by satellites equipped with optical sensors, these images are
similar to what the human eye perceives. They provide information in
various spectral bands, including visible, near-infrared, and thermal
infrared, allowing for the analysis of land cover, vegetation health,
urban development, and more.
Figure optical imagery
Radar Imagery:
Synthetic Aperture Radar (SAR) satellites emit microwave signals
and measure the return signal to create images. SAR data is useful for
all-weather and day-night imaging, terrain mapping, monitoring
changes in Earth's surface, and detecting objects such as ships and oil
spills.

Figure radar image


Hyperspectral Imagery:
These images capture information in hundreds of narrow spectral
bands, providing detailed spectral signatures for materials on the
Earth's surface. Hyperspectral data is valuable for tasks like mineral
identification, environmental monitoring, and precision agriculture.
Figure hyperspectral imagery
Thermal Imagery:
Sensors aboard satellites measure thermal infrared radiation emitted
by the Earth's surface. Thermal imagery is used for detecting heat
anomalies, monitoring volcanic activity, assessing urban heat islands,
and studying climate change impacts.

Figure thermal imagery

LiDAR Data:
LiDAR sensors emit laser pulses and measure the time it takes for the
pulses to return, providing highly accurate elevation data. LiDAR is
essential for creating Digital Elevation Models (DEMs), assessing
terrain characteristics, and mapping landforms and vegetation
structure.

Figure lidar imagery


Levels of Satellite Data:
1. Level-0
2. Level-1
3. Level-2
4. Level-3
5. Level-4

Figure levels of satellite data


Level-0:
Raw data as received from the satellite without any processing.
Level-1:
Data processed to correct for sensor artifacts, geometric distortions,
and radiometric calibrations, making it usable for further analysis.
Level-2:
Further processed data with atmospheric correction applied to remove
atmospheric effects, enhancing the accuracy of quantitative analysis.
Level-3:
Data that is georeferenced and often aggregated over time or space to
create global or regional datasets suitable for thematic mapping and
trend analysis.
Level-4:
Derived products generated by combining satellite data with other
datasets or models to produce value-added products such as
vegetation indices, land cover maps, and climate variables.
Open Source Satellite Data Products:
1. Sentinel Data
2. Landsat Data
3. MODIS Data
4. ESA Earth Observation Data
5. NASA Earth Observing System Data
Sentinel Data:
The European Space Agency's Sentinel satellites offer free and open
access to a wealth of optical, radar, and thermal data through the
Copernicus Open Access Hub. Sentinel data is widely used for
environmental monitoring, disaster management, and scientific
research.
Figure Sentinel Data
Landsat Data:
The Landsat program, jointly managed by NASA and the USGS,
provides the longest continuous record of Earth observations from
space. Landsat data, including Landsat 8 and Landsat 9, is freely
available and widely used for monitoring land cover change,
assessing ecosystem health, and managing natural resources.
Figure Landsat Data

MODIS Data:
The Moderate Resolution Imaging Spectroradiometer (MODIS)
aboard NASA's Terra and Aqua satellites provides global coverage
with moderate spatial resolution and daily revisits. MODIS data is
used for monitoring vegetation dynamics, fire activity, sea surface
temperature, and atmospheric conditions.

Figure MODIS Data


ESA Earth Observation Data:
In addition to Sentinel data, the European Space Agency (ESA) offers
access to other Earth observation missions such as the Envisat, ERS,
and CryoSat satellites, providing a diverse range of data products for
scientific and operational applications.
Figure ESA Earth Observation Data
NASA Earth Observing System Data:
NASA's Earth Observing System (EOS) satellites, including Terra,
Aqua, and Aura, provide a wealth of data on Earth's atmosphere,
oceans, land surfaces, and biosphere. These data are freely available
through NASA's Earthdata Search and Distributed Active Archive
Centers (DAACs).

Figure NASA Earth Observing System Data


SELECTION AND PROCUREMENT OF DATA
1. Define Objectives and Requirements
2. Research Available Data Sources
3. Assess Data Quality and Suitability
4. Select Appropriate Data Products
5. Acquire Data
6. Ensure Data Compatibility and Preprocessing
7. Document and Validate Data
Define Objectives and Requirements:
 Clearly define the objectives of your remote sensing project or
analysis. Determine what information you need to achieve your
goals.
 Identify the spatial and temporal resolutions required for your
study area and time frame.
 Consider the spectral bands or data types necessary to address
your research questions or applications.
Research Available Data Sources:
 Explore the various sources of remote sensing data, including
satellite missions, aerial surveys, government agencies, research
institutions, and commercial providers.
 Familiarize yourself with the characteristics, capabilities, and
limitations of different sensors and platforms.
 Investigate open-source data repositories and archives that offer
free or low-cost access to satellite imagery and other remote
sensing datasets.
Assess Data Quality and Suitability:
 Evaluate the quality, accuracy, and reliability of available
datasets. Consider factors such as radiometric and geometric
calibration, sensor resolution, and spectral characteristics.
 Assess whether the spatial, spectral, and temporal resolutions
meet your requirements for the intended application.
 Verify the data's currency and relevance to your study area and
research objectives.
Select Appropriate Data Products:
 Choose the remote sensing data products that best match your
project's needs and specifications.
 Select datasets that provide the required spatial coverage,
spectral information, and temporal frequency for your analysis.
 Consider complementary datasets or multi-sensor/multi-
temporal approaches to enhance the robustness and accuracy of
your results.
Acquire Data:
 Once you've identified the desired datasets, proceed to acquire
the data through appropriate channels.
 Utilize online portals, data archives, and distribution platforms
provided by satellite agencies, government organizations, and
data providers.
 Depending on the availability and licensing terms, download or request access to the required datasets; a sketch of a programmatic search follows this list.
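As one concrete way of acquiring open data, the sketch below queries a public STAC catalogue with the pystac-client package; the Earth Search endpoint, collection name, bounding box and dates are illustrative assumptions, not the only option.

```python
from pystac_client import Client  # pip install pystac-client

# Open a public STAC catalogue (illustrative endpoint)
catalog = Client.open("https://earth-search.aws.element84.com/v1")

search = catalog.search(
    collections=["sentinel-2-l2a"],        # Level-2A, atmospherically corrected
    bbox=[77.5, 12.9, 77.7, 13.1],         # example area of interest (lon/lat)
    datetime="2023-01-01/2023-03-31",      # example period of interest
    query={"eo:cloud_cover": {"lt": 20}},  # skip very cloudy scenes
)

for item in search.items():
    print(item.id, item.datetime, list(item.assets))  # downloadable assets
```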
Ensure Data Compatibility and Preprocessing:
 Verify that the acquired data are compatible with your analysis
software and workflow.
 Perform necessary preprocessing steps, such as geometric
correction, radiometric calibration, and atmospheric correction,
to enhance the usability and accuracy of the data.
 Address any potential issues or artifacts that may affect the
interpretation or analysis of the remote sensing data.

Document and Validate Data:


 Document the metadata and provenance of the acquired datasets,
including sensor specifications, acquisition parameters, and
processing history.
 Validate the accuracy and reliability of the data through ground
truth measurements, validation studies, or comparison with
reference datasets.
 Document any uncertainties or limitations associated with the
remote sensing data to ensure transparency and rigor in your
analysis.
VISUAL INTERPRETATION- BASIC ELEMENTS AND
INTERPRETATION KEYS
Basic elements of image interpretation
A systematic study of aerial photographs and satellite imagery usually involves consideration of several characteristics of the features shown on an image. The following characteristics (elements) are called the fundamental picture elements. These elements aid the visual interpretation of aerial photos and/or satellite imagery.

Figure elements of image interpretation

(i) Tone
 Ground objects of different colours reflect the incident radiation differently, depending upon the incident wavelength and the physical and chemical constituents of the objects.
 The imagery as recorded in remote sensing is in different shades
or tones. For example, ploughed and cultivated lands record
differently from fallow fields. Tone is expressed qualitatively as
light, medium and dark.
 In SLAR imagery, for example, the shadows cast by the non-return of the microwaves appear darker than those parts where greater reflection takes place, which appear in a lighter tone.
 Similarly, in thermal imagery, objects at higher temperature are recorded in lighter tones than objects at lower temperature, which appear in medium to dark tones. Likewise, top soil appears in a darker tone than soil containing quartz sand.
 Coniferous trees appear in a lighter tone than clumps of broad-leaved trees.
 Tone, therefore, refers to the colour or reflective brightness. Tone, along with texture and shadow (described below), helps in interpretation and hence is a very important key.
 Differences in the moisture content of soil or rock result in differences in tone. In a black and white photograph, a dark tone indicates wetter bodies, i.e. greater moisture content, while grey or white tones indicate dry soil.
 The aerial photos with good contrast bring out tonal differences
and hence help in better interpretation. Tonal contrast can be
enhanced by use of high contrast film, high contrast paper or by
specialized image processing techniques such as 'Dodging' or
'Digital Enhancement'.
 Sometimes infrared film can give better contrast, but it can also reduce resolution and cause loss of detail in shadows.
(ii) Texture
 Texture is an expression of roughness or smoothness as
exhibited by the imagery. It is the rate of change of tonal values.
 Mathematically it is given as dD/dx, where D is the density and x the distance measured from an arbitrary starting point; it can be measured numerically with a microdensitometer.
 The change in density D from point A to point B of the imagery, as measured by the microdensitometer, divided by the distance between them, gives the texture value numerically. Texture is dependent upon:
(a) photographic tone,
(b) shape,
(c) size,
(d) pattern, and the scale of the imagery.
 Any slight variation of these can change the texture. Texture can be expressed qualitatively as coarse, medium or fine. Texture is a combination of several image characteristics, such as tone, shadow, size, shape and pattern, and is produced by a mixture of features too small to be seen individually, because texture by definition is the frequency of tonal changes.
 As an example, the leaves of a tree are too small to be seen individually on an aerial photo; collectively, along with shadow, they give what is called texture, which in turn helps to differentiate between shrubs and trees. Texture can sometimes be a very important factor in determining slope stability.
 In the case of humid ground, blocked water or poor drainage, a characteristic texture results.
 Even springs and the seepage of water from the base of a clay layer give a kind of 'turbulent' texture.
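The dD/dx definition above can be approximated digitally. The following is a minimal sketch, assuming a grey-level image held in a NumPy array; the window size is an illustrative choice.

```python
import numpy as np

def texture_map(gray, win=5):
    """Mean magnitude of the tonal gradient (dD/dx, dD/dy) per window."""
    dy, dx = np.gradient(gray.astype(float))   # tonal change per pixel
    mag = np.hypot(dx, dy)                     # local rate of change of tone
    pad = win // 2
    padded = np.pad(mag, pad, mode="edge")
    rows, cols = gray.shape
    out = np.zeros((rows, cols))
    for i in range(win):                       # box-average the gradient
        for j in range(win):
            out += padded[i:i + rows, j:j + cols]
    return out / (win * win)                   # high values = rougher texture
```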
(iii) Association
 The relation of a particular feature to its surroundings is an
important key to interpretation.
 Sometimes a single feature by itself may not be distinctive
enough to permit its identification.
 For example, sinkholes appear as dark spots on imagery where the surface or immediate subsurface consists of limestone; thus the appearance of sinkholes is always associated with surface limestone formations.
 Another example is that of kettle holes, which appear as depressions on photos of terminal moraine and glacial terrain.
 A further example is that of dark-toned features associated with the flood plain of a river, which can be interpreted as infilled oxbow lakes.
(iv) Shape
Some ground features have typical shapes due to their structure or topography. For example, airfields and football stadiums can easily be interpreted because of their definite ground shapes and geometry, whereas volcanic covers, sand, river terraces, cliffs and gullies can be identified because of their characteristic shapes controlled by geology and topography.
(v) Size
 The size of an object on an image, whether relative or absolute, also helps in its identification.
 Sometimes measurements of height (e.g. using a parallax bar) also give clues to the nature of the object.
 For example, measuring the heights of different clumps of trees gives an idea of the different species; similarly, measurements of the dip and strike of rock formations help in identifying sedimentary formations.
 Similarly, measurements of the width of roads help in discriminating roads of different categories, i.e. national, state, local, etc. Size, of course, is dependent upon the scale of the imagery.
(vi) Shadows
 Shadows cast by objects are sometimes important clues to their identification and interpretation.
 For example, the shadow of a suspension bridge can easily be discriminated from that of a cantilever bridge.
 Similarly, conical shadows are indicative of coniferous trees. Tall buildings, chimneys, towers, etc., can easily be identified by their characteristic shadows. On the other hand, shadows can sometimes render interpretation difficult, e.g. dark slope shadows covering important detail.
(vii) Site factor or Topographic Location
 Relative elevation or specific location of objects can be helpful
to identify certain features.
 For example, sudden appearance or disappearance of vegetation
is a good clue to the underlying soil type.
(viii) Pattern
 Pattern is the orderly spatial arrangement of geological, topographic or vegetation features. This spatial arrangement may be two-dimensional (plan view) or three-dimensional (space).
 Geological patterns may be linear or curved. Linear patterns are formed of a very large number of continuous or discontinuous short ticks which, when viewed by eye, appear as continuous lines.
 Examples of linear geological patterns are faults, fractures, joints, dykes, bedding planes, anticlines, etc.
 Examples of curved features are plunging anticlines and folds. Lineaments or lineations may be short, medium or long, running for several hundred kilometres. These are very important expressions of the lithologic character of the underlying rocks, the attitude of the rock bodies, the spacing of bedding planes and other structural weaknesses, and the control extended by them over the surface features. Vegetation patterns may be of the 'block' type or 'alignment' type.
 The 'alignment' type may be further subdivided into linear, parallel and curved types.
 Alignments are due to narrow rock bands or faults. Since faults retain moisture, vegetation is aligned along the fault lines. An example of a topographic pattern is the typical drainage pattern (controlled and uncontrolled types). The uncontrolled types are those governed purely by topography, i.e. the slopes, whereas the controlled types are those governed by the underlying geological formations.

Key Elements of Visual Image Interpretation


 Keys that provide useful reference or refresher material and valuable training aids for novice interpreters are called image interpretation keys.
 These image interpretation keys are very useful for the interpretation of complex imagery or photographs.
 These keys provide a method of organising the information in a consistent manner and provide guidance for the correct identification of features or conditions on the images.
 Ideally, a key consists of two basic parts:
(i) a collection of annotated or captioned images (stereopairs) illustrative of the features or conditions to be identified, and
(ii) a graphic or word description that sets forth, in some systematic fashion, the image recognition characteristics of those features or conditions. There are two types of keys: the selective key and the elimination key.
Selective Key
 A selective key, also called a reference key, contains numerous example images with supporting text.
 The interpreter selects the example image that most nearly resembles the feature or condition found on the image under study.
Elimination Key
 An elimination key is arranged so that the interpretation proceeds step by step from the general to the specific, leading to the elimination of all features or conditions except the one being identified.
 Elimination keys are also called dichotomous keys: the interpreter makes a series of choices between two alternatives and progressively eliminates all but one possible answer.

DIGITAL INTERPRETATION
Digital interpretation in remote sensing refers to the process of analyzing and extracting meaningful information from digital imagery acquired by satellite, aerial, or other remote sensing platforms. Unlike traditional visual interpretation, which relies on human analysts to interpret features in photographs or maps, digital interpretation involves the use of computer-based techniques to automate or assist in the analysis of remote sensing data. The steps carried out are:
1. Image Processing
2. Feature Extraction
3. Change Detection
4. Quantitative Analysis
5. Integration with GIS and Modeling
6. Validation and Accuracy Assessment
Figure data interpretation
Image Processing:
 Digital interpretation begins with image processing, which
involves a series of computational techniques to enhance,
correct, and manipulate remote sensing imagery.
 Preprocessing steps may include radiometric and geometric
correction, atmospheric correction, noise reduction, and image
fusion to improve the quality and usability of the data.
Feature Extraction:
 Digital interpretation techniques are used to automatically or
semi-automatically extract features of interest from remote
sensing imagery.
 Feature extraction algorithms identify and delineate objects or
land cover classes based on their spectral, spatial, and textural
characteristics.
 Common feature extraction methods include classification,
segmentation, object-based image analysis (OBIA), and
machine learning algorithms such as supervised and
unsupervised classification.
Change Detection:
 Digital interpretation facilitates the detection and analysis of
temporal changes in remote sensing data.
 Change detection algorithms compare multiple images acquired
at different times to identify areas of change, such as urban
expansion, deforestation, land cover conversion, or natural
disasters.
 Techniques such as image differencing, image ratioing, and time-series analysis are used to quantify and characterize the magnitude and extent of changes over time.
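As a minimal sketch of the image differencing mentioned above, the function below flags pixels whose difference between two co-registered, radiometrically comparable dates deviates strongly from the mean difference; the 2-sigma threshold is a common but illustrative choice.

```python
import numpy as np

def change_mask(t1, t2, k=2.0):
    """Return True where the t2 - t1 difference deviates by > k sigma."""
    diff = t2.astype(float) - t1.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > k * sigma  # likely-change pixels
```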
Quantitative Analysis:
 Digital interpretation enables quantitative analysis of remote
sensing data, allowing for the measurement and extraction of
numerical information from imagery.
 Quantitative analysis may include calculating vegetation
indices, estimating land surface temperature, determining object
heights or volumes, and deriving biophysical parameters such as
biomass or soil moisture.
 These quantitative measurements provide valuable insights into
environmental processes, ecosystem dynamics, and land surface
characteristics.
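One of the vegetation indices mentioned above, the Normalized Difference Vegetation Index (NDVI), is simple to compute. The sketch below assumes calibrated red and near-infrared reflectance arrays.

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """NDVI = (NIR - Red) / (NIR + Red); values fall in [-1, 1]."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0
```

Dense green vegetation typically yields high positive NDVI values, while bare soil and water give low or negative values.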
Integration with GIS and Modeling:
 Digital interpretation outputs are often integrated with
geographic information systems (GIS) and spatial analysis tools
to perform further analysis, visualization, and modeling.
 GIS allows for the spatial representation, manipulation, and
overlay of remote sensing data with other geospatial datasets,
enabling comprehensive spatial analysis and decision-making.
 Digital interpretation results can be used as input for
environmental modeling, land use planning, resource
management, and disaster risk assessment.

Validation and Accuracy Assessment:


 Digital interpretation results are validated and assessed for
accuracy through ground truth measurements, reference data, or
validation studies.
 Accuracy assessment techniques compare digital interpretation
outputs with independently collected data to evaluate the
reliability and precision of the analysis.
 Validation ensures the quality and credibility of the digital
interpretation results for informed decision-making and
scientific research.
CONCEPTS OF IMAGE RECTIFICATION
Introduction
 As seen in the earlier chapters, remote sensing data can be
analysed using visual image interpretation techniques if the data
are in the hardcopy or pictorial form. It is used extensively to
locate specific features and conditions, which are then geocoded
for inclusion in GIS.
 Visual image interpretation techniques have certain
disadvantages and may require extensive training and are labour
intensive. In this technique, the spectral characteristics are not
always fully evaluated because of the limited ability of the eye
to discern tonal values and analyse the spectral changes.
 If the data are in digital mode, the remote sensing data can be
analysed using digital image processing techniques and such a
database can be used in raster GIS. In applications where
spectral patterns are more informative, it is preferable to analyse
digital data rather than pictorial data.
 In today's world of advanced technology where most remote
sensing data are recorded in digital format, virtually all image
interpretation and analysis involves some element of digital
processing.
 Digital image processing may involve numerous procedures
including formatting and correcting of the data, digital
enhancement to facilitate better visual interpretation, or even
automated classification of targets and features entirely by
computer.
 In order to process remote sensing imagery digitally, the data
must be recorded and available in a digital form suitable for
storage on a computer tape or disk. Obviously, the other
requirement for digital image processing is a computer system,
sometimes referred to as an image analysis system, with the
appropriate hardware and software to process the data.
 Several commercially available software systems have been
developed specifically for remote sensing image processing and
analysis.
 For discussion purposes, most of the common image processing functions available in image analysis systems can be categorized into the following four groups:
1. Preprocessing
2. Image Enhancement
3. Image Transformation
4. Image Classification and Analysis

PREPROCESSING
 Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric or geometric corrections.
 Radiometric corrections include correcting the data for sensor
irregularities and unwanted sensor or atmospheric noise, and
converting the data so they accurately represent the reflected or
emitted radiation measured by the sensor.
 Geometric corrections include correcting for geometric
distortions due to sensor-Earth geometry variations, and
conversion of the data to real world coordinates (e.g. latitude
and longitude) on the Earth's surface.
 The objective of the second group of image processing functions, grouped under the term image enhancement, is solely to improve the appearance of the imagery to assist in visual interpretation and analysis.
 Examples of enhancement functions include contrast stretching, to increase the tonal distinction between various features in a scene, and spatial filtering, to enhance (or suppress) specific spatial patterns in an image.
 Image transformations are operations similar in concept to those
for image enhancement. However, unlike image enhancement
operations which are normally applied only to a single channel
of data at a time, image transformations usually involve
combined processing of data from multiple spectral bands.
Arithmetic operations (i.e. subtraction, addition, multiplication,
division) are performed to combine and transform the original
bands into "new" images which better display or highlight
certain features in the scene.
 We will look at some of these operations, including various methods of spectral or band ratioing, and a procedure called principal components analysis, which is used to represent the information more efficiently.
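The two transformations just named can be sketched as follows, assuming the image is a NumPy stack of shape (bands, rows, cols); this is an illustrative outline, not a production implementation.

```python
import numpy as np

def band_ratio(b1, b2, eps=1e-10):
    """Ratio image, e.g. NIR / Red, which highlights vegetation."""
    return b1.astype(float) / (b2.astype(float) + eps)

def principal_components(stack):
    """Rotate a (bands, rows, cols) stack into its principal components."""
    bands, rows, cols = stack.shape
    X = stack.reshape(bands, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)      # centre each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    order = np.argsort(eigvals)[::-1]       # PC1 = largest variance first
    pcs = eigvecs[:, order].T @ X           # project onto the components
    return pcs.reshape(bands, rows, cols)
```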

Figure image classification

Image classification
 Image classification is a procedure for automatically categorizing all pixels in an image of a terrain into land cover classes.
 Normally, multispectral data are used to perform the classification; the spectral pattern present within the data for each pixel is used as the numerical basis for categorization.
 This concept is dealt with under the broad subject of pattern recognition.
 Spectral pattern recognition refers to the family of classification procedures that utilise this pixel-by-pixel spectral information as the basis for automated land cover classification.
 Spatial pattern recognition involves the categorization of image
pixels on the basis of the spatial relationship with pixels
surrounding them. Image classification techniques are grouped
into two types, namely
1. Supervised
2. Unsupervised
 The classification process may also include features, such as land surface elevation and soil type, that are not derived from the image.
 A pattern is thus a set of measurements on the chosen features for the individual to be classified. The classification process may therefore be considered a form of pattern recognition, that is, the identification of the pattern associated with each pixel position in an image in terms of the characteristics of the objects on the earth's surface.

Supervised Classification
 A supervised classification algorithm requires a training sample
for each class, that is, a collection of data points known to have
come from the class of interest. The classification is thus based
on how "close" a point to be classified is to each training
sample.
 We shall not attempt to define the word "close" other than to say that both geometric and statistical distance measures are used in practical pattern recognition algorithms.
 The training samples are representative of the known classes of interest to the analyst.
 Classification methods that rely on the use of training patterns are called supervised classification methods.
 The three basic steps involved in a typical supervised
classification procedure are as follows :
(i) Training stage:
 The analyst identifies representative training areas and develops
numerical descriptions of the spectral signatures of each land
cover type of interest in the scene.
(ii) The classification stage:
 Each pixel in the image data set is categorized into the land cover class it most closely resembles. If the pixel is insufficiently similar to any training data set, it is labeled 'Unknown'.
(iii) The output stage:
 The results may be used in a number of different ways. Three typical forms of output product are thematic maps, tables, and digital data files that become input data for GIS.
 The output of image classification thus becomes input to GIS for spatial analysis of the terrain. The figure depicts the flow of operations performed during image classification of remotely sensed data of an area, which ultimately creates a database as an input for GIS.
Figure Basic steps in supervised classification
 There are a number of powerful supervised classifiers based on statistics which are commonly used for various applications.
 A few of them are the minimum distance to means method, average distance method, parallelepiped method, maximum likelihood method, modified maximum likelihood method, Bayesian method, decision tree classification, and discriminant functions (a minimal sketch of the minimum distance to means method follows this list).
 The principles and working algorithms of all these supervised classifiers are available in almost all standard books on remote sensing, so details are not provided here.
 Since all the supervised classification methods use training data
samples, it is more appropriate to consider some of the
fundamental characteristics of training data.
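As promised above, here is a minimal sketch of the minimum distance to means classifier; the array shapes are illustrative, and the class means are assumed to have been computed from training samples.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """
    pixels:      (n_pixels, n_bands) measurement vectors
    class_means: (n_classes, n_bands) mean vector per training class
    returns:     (n_pixels,) index of the nearest class mean
    """
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```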
Training Dataset
 A training dataset is a set of measurements (points from an
image) whose category membership is known by the analyst.
 This set must be selected based on additional information
derived from maps, field surveys, aerial photographs, and
analyst's knowledge of usual spectral signatures of different
cover classes.
 Selecting a good set of training points is one of the most critical
aspects of the classification procedure.
 Guidelines for selecting training data are as follows:
(i) Select a sufficient number of points for each class. If each measurement vector has N features, then select at least N+1 points per class; the practical minimum is 10N per class. If the class shows a lot of variability (a scatter plot showing considerable spread among training points), select a larger number of points, subject to practical limits of time, effort and expense. Generally, the more training points, the more accurate the classification will be, and extra points can be held back to evaluate the accuracy of the classifier.
(ii) Select training data sets that are representative of the classes of interest, showing both typical average feature values and a typical degree of variability. For each class, select several training areas on the image instead of just one. Each training area should contain a moderately large number of pixels. Pick training areas from homogeneous-appearing regions. Pick training areas that are widely and spatially dispersed across the full image, so that for each class the training areas are uniformly distributed over the image.
(iii) Check that the selected areas have unimodal distributions (histograms). A bimodal histogram suggests that pixels from two different classes may be included in the training sample.
(iv) Select training sets (physically) using a computer-based classification system:
Poorest method: using coordinates of training points or training regions directly.
Better method: using a joystick, trackball or light pen directly on the image.
For example, in EASI/PACE the program shows the histograms, means and standard deviations for each region selected, and for each class in total.
The program should allow iteration: do a classification using one set of training points, then come back and modify the training sets and class definitions without starting all over again. There should be options to combine classes from a previous classification.
(v) The program should allow one to designate half of the points as training points and the other half to test the accuracy of the trained classifier. Before it is used, the training set should be evaluated by examining scatter plots and/or histograms for each class. These should show unimodal distributions, ideally approximating normal distributions. If they are not unimodal, one may want to select new training sets. After the discriminant functions and the classification rule are derived, accuracy must be tested.
 Two acceptable techniques which are commonly used are:
(a) Designate a randomly selected half of the training points as test points before developing the classifier, and use the other half for training. Then classify the half of the data not used for training.
Develop a contingency table (confusion matrix) to indicate the probability of error in each class. This procedure is actually a measure of the consistency of the classifier.
(b) Randomly select a set of pixel regions from the image of an
unknown class. Classify them using the discriminant function and
rules developed from the training set. Then verify the correctness of
the classification (again with a confusion matrix) by checking the
identity of these regions using external information sources like maps
and aerial photos.
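A minimal sketch of the contingency table (confusion matrix) described in (a) and (b), assuming integer class labels 0..n_classes-1 for the held-out test pixels:

```python
import numpy as np

def confusion_matrix(true_labels, predicted_labels, n_classes):
    """cm[i, j] = number of test pixels of true class i labelled class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm

# Overall accuracy is the diagonal total over the grand total:
# accuracy = np.trace(cm) / cm.sum()
```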
(vi) Separability of classes: so far, we have looked at an ideal situation where there is no overlap between different classes. In reality, the classes are likely to overlap. The less the overlap between classes, the lower the chance of misclassifying a given pixel. Classes that have little overlap are said to be highly separable.

Unsupervised Classification
 Unsupervised classification algorithms do not compare points to be classified with training data.
 Rather, unsupervised algorithms examine a large number of
unknown data vectors and divide them into classes based on
properties inherent to the data themselves.
 The classes that result stem from differences observed in the
data. In particular, use is made of the notion that data vectors
within a class should be in some sense mutually close together
in the measurement space, whereas data vectors in different
classes should be comparatively well separated.
 If the components of the data vectors represent the responses in
different spectral bands, the resulting classes might be referred
to as spectral classes, as opposed to information classes, which
represent the ground cover types of interest to the analyst.
 The two types of classes described above, information classes
and spectral classes, may not exactly correspond to each other.
For instance, two information classes, corn and soya beans, may
look alike spectrally. We would say that the two classes are not
separable spectrally.
 At certain times of the growing season, corn and soya beans are not spectrally distinct, while at other times they are. On the other hand, a single information class may be composed of two spectral classes.
 Differences in planting dates or seed variety might result in the information class "corn" containing the differing reflectances of tasseled and untasseled corn.
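The source does not name a specific unsupervised algorithm, so the sketch below uses k-means, a common choice, to group pixel vectors into spectral classes; the names and iteration count are illustrative.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """pixels: (n_pixels, n_bands) -> (labels, cluster mean vectors)."""
    rng = np.random.default_rng(seed)
    means = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest current cluster mean
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Recompute means; keep the old mean if a cluster goes empty
        for c in range(k):
            if np.any(labels == c):
                means[c] = pixels[labels == c].mean(axis=0)
    return labels, means
```

The resulting spectral classes still have to be matched to information classes by the analyst, as discussed above.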
Image enhancement
 Low sensitivity of the detectors, weak signal of the objects
present on the earth surface, similar reflectance of different
objects and environmental conditions at the time of recording
are the major causes of low contrast of the image.
 Another problem that complicates photographic display of
digital image is that the human eye is poor at discriminating the
slight radiometric or spectral differences that may characterize
the features. The main aim of digital enhancement is to amplify
these slight differences for better clarity of the image scene.
 This means digital enhancement increases the separability
(contrast) between the interested classes or features.
 Digital image enhancement may be defined as the mathematical operations applied to digital remote sensing input data to improve the visual appearance of an image for better interpretability or subsequent digital analysis. Since image quality is a subjective measure varying from person to person, there is no simple rule that will produce a single best result.

Figure image enhancement


 Normally, two or more operations on the input image suffice to fulfil the needs of the analyst, although the enhanced product may contain only a fraction of the total information stored in the original image.
 As in many other areas of knowledge, the distinction between one type of analysis and another is a matter of the personal taste and needs of the interpreter. In the remote sensing literature, many digital enhancement algorithms are available. They include contrast stretching, ratioing, linear combinations, principal component analysis, and spatial filtering.
 Broadly, the enhancement techniques are categorized as point
operations and local operations.
 Point operations modify the values of each pixel in an image
data set independently, whereas local operations modify the
values of each pixel in the context of the pixel values
surrounding it.
 Point operations include contrast enhancement and band combinations, whereas spatial filtering is an example of a local operation. In this section, contrast enhancement, the linear contrast stretch, histogram equalization, logarithmic contrast enhancement, and exponential contrast enhancement are considered.

Contrast Enhancement
 The sensors mounted on board aircraft and satellites have to be capable of detecting upwelling radiance levels ranging from very low (e.g. over oceans) to very high (e.g. over snow or ice).

Figure Contrast Enhancement

 For any particular area being imaged, it is unlikely that the full dynamic range of the sensor will be used, and the corresponding image is dull and lacking in contrast, or else overly bright. In terms of the RGB model, the pixel values are clustered in a narrow range of grey levels.
 If this narrow range of gray levels could be altered so as to fit
the full range of grey levels, then the contrast between the dark
and light areas of the image would be improved while
maintaining the relative distribution of the gray levels. It is
indeed the manipulation of look-up table values.
 The enhancement operations are normally applied to image data
after the appropriate restoration procedures have been
performed. The most commonly applied digital enhancement
techniques will be considered now.
 The sensitivity of remote sensing detectors is designed to record a wide range of terrain brightness, from black asphalt and basaltic rocks to white sea ice, under a wide range of lighting conditions. In general, few scenes span this full brightness range; to produce an image with an optimum contrast ratio, it is necessary to utilize the entire dynamic range of the display. Digital contrast enhancement is thus of prime importance.
 The objective of contrast stretching is to expand the narrow
dynamic range of gray values (digital numbers) typically present
in an input image.
 A variety of contrast stretching algorithms is available and is
broadly categorized as linear contrast stretching and non-linear
contrast stretching.
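A minimal sketch of the linear contrast stretch, mapping the occupied grey-level range onto the full 0-255 display range; the percentile clipping is a common, illustrative refinement.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Map [lo, hi] grey levels to [0, 255], clipping the extremes."""
    lo, hi = np.percentile(band, (low_pct, high_pct))
    scaled = (band.astype(float) - lo) / (hi - lo)
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)
```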
Spatial Filtering Techniques
 Some of the most commonly used filtering techniques are given
below.
a. Low Pass Filters
b. Median Filter
c. High Pass Filters
d. Filtering for Edge Enhancement

 A characteristic of remotely sensed images is a parameter called spatial frequency, defined as the number of changes in brightness value per unit distance in any particular part of an image.
 If there are few changes in brightness value over a given area, it is termed a low-frequency area. If the brightness value changes dramatically over very short distances, it is called a high-frequency area.
 Algorithms that perform such enhancement are called filters because they suppress certain frequencies and pass (emphasise) others. Filters that pass high frequencies, emphasizing fine detail and edges, are called high-frequency (high pass) filters, and filters that pass low frequencies are called low-frequency (low pass) filters.
 Filtering is performed using convolution windows, called masks, templates, filters or kernels. In the process of filtering, the window is moved over the input image starting from the extreme top left-hand corner of the scene.
 A discrete mathematical function transforms the original input digital numbers into new digital values.
 The window first moves along a line; as soon as the line is complete, it restarts on the next line, until the entire image is covered.
 The mask window may be rectangular (1 x 3 or 1 x 5 pixels) or square (3 x 3, 5 x 5 or 7 x 7 pixels). Each pixel of the window is given a weight. For low pass filters, all the weights in the window are positive; for high pass filters, the surrounding values may be negative or zero, but the central pixel is positive with a higher weight.
 In the case of a high pass filter, the algebraic sum of all the weights in the window is zero.
 Many types of mask windows of different sizes can be designed
by changing the size and varying weightage within the window.
 The simplest mathematical function performed in a filtering operation is neighbourhood averaging.
 Another commonly used discrete function calculates the sum of the products of the mask elements and the corresponding input image digital numbers in the moving window, and assigns the result to the central pixel.
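The moving-window operation described above can be sketched as plain convolution; the 3 x 3 kernels below are standard examples (all-positive averaging weights for low pass, zero-sum weights with a positive centre for high pass).

```python
import numpy as np

LOW_PASS = np.full((3, 3), 1 / 9.0)                # neighbourhood averaging
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)  # weights sum to zero

def convolve(image, kernel):
    """Move the window over the image line by line, edge-padded."""
    kh, kw = kernel.shape
    padded = np.pad(image.astype(float),
                    ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Sum of products of mask weights and the underlying pixels
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```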
