In Order To Update and Compile Maps With High Accuracy

In order to update and compile maps with high accuracy, the satellite digital data have to be manipulated using image processing techniques, namely, (i) preprocessing, (ii) image registration, (iii) image enhancement, (iv) image filtering, (v) image transformation, and (vi) image classification.

1. Preprocessing Remotely sensed raw data, received from imaging sensors mounted on
satellite platforms, generally contain flaws and deficiencies. The correction of deficiencies and
removal of flaws present in the data through suitable methods is termed pre-processing.
This involves the initial processing of raw image data to correct
geometric distortions, to calibrate the data radiometrically, and to eliminate the noise
present in the data. All pre-processing methods are considered under three heads, namely,
(i) geometric correction methods, (ii) radiometric correction methods, and (iii) atmospheric
correction methods.
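Radiometric correction, for example, is commonly based on a linear sensor calibration model. The sketch below, in Python with NumPy, converts raw digital numbers (DN) to at-sensor radiance; the gain and offset values here are hypothetical stand-ins for what a real sensor's metadata would supply.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert raw digital numbers (DN) to at-sensor radiance using the
    linear calibration model L = gain * DN + offset. Real gain/offset
    values come from the sensor's calibration metadata."""
    return gain * dn.astype(float) + offset

# Hypothetical 8-bit raw image and calibration coefficients.
raw = np.array([[12, 250],
                [130, 64]], dtype=np.uint8)
radiance = dn_to_radiance(raw, gain=0.5, offset=1.0)
```

The same per-pixel linear form underlies many radiometric adjustments, such as converting between sensor generations or normalising scenes acquired on different dates.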

2. Image Registration Image registration is the translation and rotation alignment process by
which two images/maps of like geometries and of the same set of objects are positioned
coincident with respect to one another, so that corresponding elements of the same ground
area appear in the same place on the registered images. This is the most precise geometric
correction, since after rectification each pixel can be referenced not only by its row and
column in the digital image matrix, but also rigorously in degrees, feet, or meters in a
standard map projection. Whenever accurate area, direction, and distance
measurements are required, geometric rectification is required. This is often called
image-to-map rectification.
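As a minimal sketch of image-to-map rectification, the following Python code fits a six-parameter affine transform from a handful of ground control points (GCPs) by least squares; the pixel positions and map coordinates shown are hypothetical.

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Fit a six-parameter affine transform mapping image (col, row)
    coordinates to map (easting, northing) coordinates from ground
    control points, using least squares."""
    img_xy = np.asarray(img_xy, dtype=float)
    map_xy = np.asarray(map_xy, dtype=float)
    A = np.hstack([img_xy, np.ones((len(img_xy), 1))])  # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)
    return coeffs  # 3x2 matrix: map = [x, y, 1] @ coeffs

# Hypothetical GCPs: pixel positions and their known map coordinates
# (a 30 m/pixel grid in a metric projection).
pixels = [(0, 0), (100, 0), (0, 100), (100, 100)]
ground = [(500000, 4200000), (503000, 4200000),
          (500000, 4197000), (503000, 4197000)]
T = fit_affine(pixels, ground)
easting, northing = np.array([50, 50, 1]) @ T  # georeference pixel (50, 50)
```

With more than three well-distributed GCPs the least-squares fit also gives residuals, which interpreters use to judge whether the rectification meets the required accuracy.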

3. Image Enhancement Techniques Low sensitivity of the detectors, weak signals from objects
present on the earth's surface, similar reflectance of different objects, and environmental
conditions at the time of recording are the major causes of low contrast in an image.
Another problem that complicates the photographic display of a digital image is that the human
eye is poor at discriminating the slight radiometric or spectral differences that may
characterise the features. The main aim of digital enhancement is to amplify these slight
differences for better clarity of the image scene; that is, digital enhancement increases
the separability (contrast) between the classes or features of interest. Digital image
enhancement may be defined as the set of mathematical operations applied to
digital remote sensing input data to improve the visual appearance of an image for better
interpretability or subsequent digital analysis.

Contrast Enhancement The sensors mounted on board aircraft and satellites have to be
capable of detecting upwelling radiance levels ranging from low (oceans) to very high (snow
or ice). For any particular area being imaged, it is unlikely that the full dynamic
range of the sensor will be used, and the corresponding image is dull
and lacking in contrast, or overbright. In terms of the RGB model, the pixel values are
clustered in a narrow range of grey levels. If this narrow range of grey levels could be altered
so as to fit the full range of grey levels, the contrast between the dark and light areas of
the image would be improved while maintaining the relative distribution of the grey levels.
This is, in effect, a manipulation of look-up table values. The enhancement operations are
normally applied to image data after the appropriate restoration procedures have been
performed. The most commonly applied digital enhancement techniques are considered
now.
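The simplest of these is a minimum–maximum linear contrast stretch, which rescales the narrow observed grey-level range onto the full display range. A minimal sketch, with hypothetical image values:

```python
import numpy as np

def linear_stretch(img, out_min=0, out_max=255):
    """Linearly rescale an image's narrow grey-level range onto the full
    display range, preserving the relative distribution of values."""
    lo, hi = img.min(), img.max()
    scaled = (img.astype(float) - lo) / (hi - lo)
    return (scaled * (out_max - out_min) + out_min).astype(np.uint8)

# Hypothetical dull image: values clustered between 60 and 90.
dull = np.array([[60, 70],
                 [80, 90]], dtype=np.uint8)
bright = linear_stretch(dull)  # darkest pixel -> 0, brightest -> 255
```

In practice the stretch is implemented as a look-up table applied at display time, so the stored pixel values are untouched.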

4. Spatial Filtering Techniques A characteristic of remotely sensed images is a parameter called
spatial frequency, defined as the number of changes in brightness value per unit distance
for any particular part of an image. If there are few changes in brightness value over a given
area, it is termed a low frequency area. If the brightness values change dramatically over
very short distances, it is called a high frequency area. Algorithms which perform such image
enhancement are called filters because they suppress certain frequencies and pass
(emphasise) others. Filters that pass high frequencies, emphasising fine detail and
edges, are called high frequency filters, and filters that pass low frequencies are called low
frequency filters.
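Both kinds of filter are commonly realised as 3x3 convolution kernels moved across the image. The sketch below uses a mean (smoothing) kernel as the low frequency filter and an edge-emphasising kernel as the high frequency filter; the kernels are standard textbook choices and the test image is hypothetical.

```python
import numpy as np

def filter3x3(img, kernel):
    """Apply a 3x3 convolution kernel to the interior of an image
    (border pixels are skipped for brevity)."""
    img = img.astype(float)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

low_pass = np.full((3, 3), 1 / 9.0)          # smoothing (mean) kernel
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])          # edge-emphasising kernel

# Hypothetical scene: a bright feature on a uniform background.
img = np.array([[10, 10, 10, 10],
                [10, 90, 90, 10],
                [10, 10, 10, 10]])
smooth = filter3x3(img, low_pass)   # brightness changes are damped
edges = filter3x3(img, high_pass)   # brightness changes are amplified
```

Note how the high-pass output is large only where brightness changes abruptly, which is exactly the "high frequency" behaviour described above.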

5. Image Transformations The term 'transform' is used a little loosely in this chapter, for the
arithmetic operators of addition, subtraction, multiplication, and division are included
although they are not strictly transformations. All the transformations in image processing of
remotely sensed data allow the generation of a new image based on arithmetic
operations, mathematical statistics, and Fourier transformations. The new or
composite image is derived by means of two or more band combinations, arithmetic on
various band data individually, and/or the application of mathematics to multiple band data.
The resulting image may well have properties that make it more suited to a particular purpose
than the original.

NDVI Transformation Remote sensing data are used extensively for large area vegetation
monitoring. Typically the spectral bands used for this purpose are the visible and near-IR bands.
Various mathematical combinations of these bands have been used for the computation of
the NDVI, which is an indicator of the presence and condition of green vegetation. These
mathematical quantities are referred to as vegetation indices. There are three such indices:
the Simple Vegetation Index, the Ratio Vegetation Index, and the Normalised Difference
Vegetation Index.
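The NDVI itself is the band combination (NIR − Red)/(NIR + Red), a per-pixel arithmetic transformation of the kind described above. A minimal sketch, with hypothetical band values:

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense green vegetation; bare soil and water
    fall near zero or below."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

# Hypothetical red and near-IR bands for four pixels.
red = np.array([[50, 120],
                [40, 200]])
nir = np.array([[200, 130],
                [180, 60]])
vi = ndvi(red, nir)  # vegetated pixels score high, bare/water pixels low
```

The Simple Vegetation Index (NIR − Red) and the Ratio Vegetation Index (NIR / Red) are computed analogously; NDVI's normalised form keeps the result bounded between −1 and +1.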

6. Image Classification Image classification is a procedure to automatically categorise all pixels
in an image of a terrain into land cover classes. Normally, multispectral data are used to
perform the classification; the spectral pattern present within the data for each pixel is
used as the numerical basis for categorisation. This concept is dealt with under the broad
subject of pattern recognition. Spectral pattern recognition refers to the family of classification
procedures that utilise this pixel-by-pixel spectral information as the basis for automated
land cover classification. Spatial pattern recognition involves the categorisation of image
pixels on the basis of their spatial relationship with the pixels surrounding them. Image
classification techniques are grouped into two types, namely supervised and unsupervised.
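As an illustration of the unsupervised approach, the sketch below groups pixel spectral vectors into spectral classes with a minimal k-means loop; the two-band spectra are hypothetical, and production work would use a dedicated clustering routine such as ISODATA.

```python
import numpy as np

def kmeans_classify(pixels, k=2, iters=10, seed=0):
    """Unsupervised classification: cluster pixel spectral vectors into
    k spectral classes with a minimal k-means loop."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre as the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Hypothetical two-band spectra: three vegetation-like pixels
# (low red, high NIR) and three soil-like pixels (high red, low NIR).
spectra = np.array([[30, 200], [35, 210], [32, 195],
                    [120, 40], [130, 35], [125, 45]])
classes = kmeans_classify(spectra, k=2)
```

In a supervised classification, by contrast, the analyst supplies labelled training pixels and the algorithm assigns each remaining pixel to the most similar training class.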
Summary of Visual Image Interpretation
This text describes visual image interpretation as a process of analyzing images to extract information
and communicate it to others. Here are the key points:

• Purpose: Identify objects, their locations, and sizes from images.


• Beyond basic observation: It goes beyond simply recognizing what's in the image, involving
measurements and analysis.
• Techniques: This chapter focuses on using standard tools like aerial photographs and satellite
imagery.
• Extracting information: The goal is to extract thematic information (e.g., land cover types) for
further use in Geographic Information Systems (GIS).
• Data analysis: The interpreter examines the image content alongside supplementary data like
maps and field reports.
• Factors affecting success: Success depends on the interpreter's experience, knowledge of the
area, image quality, and even their artistic sense.
• Reliable interpretation: Training, experience, and a keen eye can lead to more accurate and
reliable information extraction.
