Unit 2
Terminology:
• Image Interpretation: A general term used when interpreting satellite images.
• Aerial Photo Interpretation: Specifically refers to the interpretation of aerial
photographs.
Types of Interpretation:
1. Visual Interpretation:
o Relies on human expertise to analyze imagery without computational assistance.
o Typically involves examining hardcopy photographs or printed satellite images.
2. Digital Interpretation:
o Performed with the assistance of computer software.
o Suitable for analyzing digital data from aerial or satellite sources, often leveraging
algorithms for feature detection and classification.
1. Tone
Tone refers to the color or relative brightness in an image:
• Color Images: Indicated by hue or color variation.
• Black & White Images: Denoted by shades of gray.
Tonal differences arise due to an object's reflection, transmission, or absorption of light.
Factors influencing tone include:
• Reflectivity and light angle.
• Geographic location and latitude.
• Camera type, film sensitivity, and processing.
2. Size
Size relates to the scale and proportional dimensions of an object in the image.
• Relative Size: Comparison to other known objects.
• Absolute Size: Measured based on the image scale.
The height of objects can be inferred from their shadows. Familiar objects also serve as size references:
• Automobiles, railways, and rivers have distinctive relative sizes that assist in recognition.
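Absolute size follows directly from the image scale: a length measured on the photo multiplied by the scale denominator gives the ground length. A minimal sketch (the 1:20,000 scale below is a hypothetical value for illustration):

```python
def ground_distance(image_mm: float, scale_denominator: int) -> float:
    """Ground distance in metres for a length measured on the image in millimetres."""
    return image_mm * scale_denominator / 1000.0  # mm -> m

# A 5 mm feature on a 1:20,000 photo spans 100 m on the ground.
print(ground_distance(5.0, 20_000))
```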
3. Shape
Shape refers to the outline or general form of an object.
• Geometric Shapes: Often human-made (e.g., buildings, roads).
• Irregular Shapes: Typically natural objects (e.g., rivers, forests).
Certain objects, like railways and highways, are distinguishable by their unique shapes.
4. Texture
Texture represents the roughness or smoothness of a surface in an image.
• Dependent on tone, shape, size, and pattern.
• Qualitative Descriptions: Coarse, fine, rippled, mottled, etc.
Examples:
• Grass and water appear smooth.
• Forest canopies or rugged terrain appear rough.
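A simple quantitative proxy for texture is the local standard deviation of brightness: uniform tone (smooth surfaces such as calm water) gives low values, while strong local variation (rough surfaces such as forest canopies) gives high values. A minimal sketch with invented data:

```python
import numpy as np

def local_std(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Local standard deviation in a size x size window: low = smooth, high = rough."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].std()
    return out

smooth = np.full((5, 5), 100.0)                 # uniform tone, e.g. calm water
rough = np.indices((5, 5)).sum(0) % 2 * 80.0    # checkerboard: strong local variation
print(local_std(smooth).mean(), local_std(rough).mean())
```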
5. Association
Association is the occurrence of features in relation to surrounding elements.
• Example: High schools can be identified by associated features like adjacent football
fields.
Context plays a critical role in feature identification.
6. Shadow
Shadows help in identifying objects by providing:
1. Shape/Outline: Offering a profile view of objects.
2. Contrast: Highlighting taller objects with distinct shadows.
Challenges:
• Objects in shadows are harder to discern due to low reflectivity.
Example: The Qutub Minar casts a longer shadow than smaller objects like trees or buildings.
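The shadow-to-height relationship can be made concrete: given the solar elevation angle at image acquisition, an object's height is h = L · tan(elevation), where L is the shadow length on the ground. A sketch (the Qutub Minar stands roughly 72.5 m tall):

```python
import math

def height_from_shadow(shadow_len_m: float, sun_elevation_deg: float) -> float:
    """Estimate object height from shadow length and solar elevation: h = L * tan(e)."""
    return shadow_len_m * math.tan(math.radians(sun_elevation_deg))

# At 45 deg solar elevation, shadow length equals object height,
# so a ~72.5 m shadow implies a ~72.5 m tower.
print(round(height_from_shadow(72.5, 45.0), 1))
```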
7. Site
Site refers to the topographic position or geographic context of a feature.
• Example: Sewage treatment facilities are typically found in low-lying areas near rivers.
Landforms and associated geology can also be identified using site characteristics.
8. Pattern
Pattern is the spatial arrangement of objects in an image.
• Example: Specific tree species may form identifiable arrangements based on ecological
factors.
• Terrain features like drainage systems or land-use patterns also form recognizable
patterns.
DEFINITION
➢ Digital Image Processing is the manipulation of digital data with the help of computer
hardware and software to produce digital maps in which specific information has been
extracted and highlighted.
It comprises two broad stages:
➢ Pre-processing, and
➢ Image Processing
1. Pre-processing
Remotely sensed raw data generally contains flaws and deficiencies introduced by the imaging sensor
mounted on the satellite. The correction of these deficiencies and the removal of flaws through systematic
methods is termed pre-processing. It involves correcting geometric distortions, calibrating the data
radiometrically, and eliminating noise present in the data. All pre-processing methods are applied before the
main image-processing operations begin.
2. Image Processing
➢ The second stage of Digital Image Processing entails five different operations, namely:
A. Image Registration
Image registration is a critical process in remote sensing and image analysis that involves
aligning two images or maps with similar geometric properties. The goal is to ensure that
corresponding elements of the same ground area in both images appear in the same spatial
location.
Process:
1. Identification of Control Points: Select easily recognizable features present in both images (e.g.,
road intersections, landmarks).
2. Geometric Transformation: Using the control points, fit a transformation (e.g., affine or
polynomial) that maps the coordinates of one image onto the other.
3. Resampling: Adjust the pixel values of the transformed image to fit the new geometry.
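The steps above can be sketched numerically: fit an affine transform to control points by least squares, then resample the input image onto the reference grid with nearest-neighbour assignment. The control points and the 3-pixel shift below are invented for illustration:

```python
import numpy as np

# Step 1: four control points; the input image is the reference shifted by 3 px.
ref_pts = np.array([[10, 10], [10, 40], [40, 10], [40, 40]], float)  # reference (row, col)
inp_pts = ref_pts + 3.0                                              # matching input coords

# Step 2: least-squares fit of an affine transform ref -> input.
A = np.hstack([ref_pts, np.ones((4, 1))])             # rows of [r, c, 1]
coeffs, *_ = np.linalg.lstsq(A, inp_pts, rcond=None)  # 3x2 affine coefficients

# Step 3: nearest-neighbour resampling onto the reference grid.
img = np.arange(50 * 50, dtype=float).reshape(50, 50)
rows, cols = np.mgrid[0:50, 0:50]
grid = np.stack([rows.ravel(), cols.ravel(), np.ones(2500)], axis=1)
src = (grid @ coeffs).round().astype(int)             # source pixel for each output pixel
valid = (src[:, 0] >= 0) & (src[:, 0] < 50) & (src[:, 1] >= 0) & (src[:, 1] < 50)
out = np.zeros(2500)
out[valid] = img[src[valid, 0], src[valid, 1]]
registered = out.reshape(50, 50)
print(registered[0, 0])  # output pixel (0,0) now holds input pixel (3,3)
```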
B. Image Enhancement
Image enhancement techniques improve the quality of images for better human interpretation by increasing
contrast, emphasizing edges, and enhancing specific features. Methods include contrast stretching, histogram
equalization, spatial filtering, and color enhancement, each tailored to highlight relevant details or suppress
noise for effective analysis.
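Contrast stretching, the simplest of these methods, maps the actual brightness range of a band onto the full display range. A minimal sketch with invented low-contrast values:

```python
import numpy as np

def linear_stretch(img: np.ndarray, out_max: int = 255) -> np.ndarray:
    """Min-max contrast stretch: map [min, max] of the band onto [0, out_max].
    Assumes the band is not perfectly uniform (max > min)."""
    lo, hi = img.min(), img.max()
    return np.round((img - lo) * out_max / (hi - lo)).astype(np.uint8)

band = np.array([[60, 70], [80, 90]], dtype=float)  # low-contrast band
print(linear_stretch(band))
```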
C. Image Filtering
Image filtering applies spatial operators to an image, e.g., low-pass filters to smooth noise and
high-pass filters to sharpen edges, so that particular spatial detail is emphasized or suppressed.
D. Image Transformation
Image transformation derives new images from the original bands through arithmetic operations
(e.g., band ratios) or statistical techniques such as principal component analysis.
E. Image Classification
Image Classification is the process of categorizing all pixels in a digital image into land use or
land cover classes based on their spectral or spatial patterns. It is a part of the broader field of
pattern recognition.
Types of Classification:
Supervised Classification
In supervised classification, two commonly used algorithms are Parallelepiped Classifier and
Maximum Likelihood Classifier:
a) Parallelepiped Classifier
• This method uses predefined class limits (based on training data) to classify pixels.
• A parallelepiped is created around the mean of each class in the feature space.
• Pixel assignment: a pixel is allocated to the class whose parallelepiped (band-wise
min–max limits) contains its values; pixels falling outside every box remain unclassified.
Example:
Pixel 1 is classified as forest, and Pixel 2 as urban, based on the parallelepiped dimensions.
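This decision rule can be sketched as follows; the two-band class limits and pixel values below are hypothetical:

```python
import numpy as np

# Hypothetical parallelepipeds: per-band (min, max) limits derived from training data.
limits = {
    "forest": (np.array([20, 60]), np.array([40, 90])),
    "urban":  (np.array([70, 30]), np.array([100, 55])),
}

def classify(pixel: np.ndarray) -> str:
    """Assign the pixel to the first class whose box contains it, else 'unclassified'."""
    for name, (lo, hi) in limits.items():
        if np.all(pixel >= lo) and np.all(pixel <= hi):
            return name
    return "unclassified"

print(classify(np.array([30, 75])))   # falls inside the forest box
print(classify(np.array([85, 40])))   # falls inside the urban box
```

Note that overlapping boxes make the result order-dependent, and pixels outside all boxes stay unclassified; both are well-known weaknesses of this classifier.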
b) Maximum Likelihood Classifier (MLC)
• Working principle:
o Assigns a pixel to the class for which it has the highest probability of membership.
o It estimates means and variances of classes using training data and calculates
probabilities based on these.
• MLC considers both the mean brightness values and their variability for classification.
• Strengths:
o Accurate and robust if high-quality training data and valid assumptions about class
distributions are available.
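The principle can be sketched under the simplifying assumption of independent bands (so each class is a per-band Gaussian with its own mean and variance); the training values below are invented:

```python
import numpy as np

# Invented two-band training pixels for two classes.
training = {
    "water": np.array([[10.0, 12], [12, 14], [11, 13], [9, 11]]),
    "soil":  np.array([[60.0, 55], [62, 57], [58, 53], [61, 56]]),
}

# Estimate per-band mean and variance for each class (small floor avoids zero variance).
stats = {c: (x.mean(0), x.var(0) + 1e-6) for c, x in training.items()}

def log_likelihood(pixel, mean, var):
    """Gaussian log-likelihood summed over bands (bands assumed independent)."""
    return float(np.sum(-0.5 * np.log(2 * np.pi * var) - (pixel - mean) ** 2 / (2 * var)))

def classify(pixel: np.ndarray) -> str:
    """Assign the pixel to the class with the highest log-likelihood."""
    return max(stats, key=lambda c: log_likelihood(pixel, *stats[c]))

print(classify(np.array([11.0, 12.5])))
```

A full MLC would use the per-class covariance matrix rather than independent band variances; the structure of the decision rule is the same.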
Unsupervised Classification
Unsupervised classification is an automated process that organizes pixels into clusters without
analyst intervention, relying entirely on the statistical properties of image data distribution
(clustering).
Steps of Unsupervised Classification
1. Specify the number of spectral clusters to be generated.
2. Group pixels into clusters based on the statistical structure of the image data.
3. Assign land-cover labels to the resulting clusters by comparing them with reference information.
Clustering Algorithms
a) K-Means Clustering
o Initial cluster means are chosen, and each pixel is assigned to the cluster with the
nearest mean.
o Mean values are recalculated until they change by less than a defined threshold.
o Pixels are classified based on the minimum distance to the mean or another similar
principle.
• Key features:
o Works best when all image bands have similar data ranges.
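The iterative loop described above can be sketched for single-band pixel values (the brightness values below are invented):

```python
import numpy as np

def kmeans_1d(values: np.ndarray, k: int = 2, tol: float = 1e-4) -> np.ndarray:
    """Minimal K-means on single-band values: assign pixels to the nearest
    cluster mean, recompute means, and stop once the means stabilise."""
    means = np.linspace(values.min(), values.max(), k)  # initial cluster means
    while True:
        labels = np.argmin(np.abs(values[:, None] - means[None, :]), axis=1)
        new_means = np.array([values[labels == i].mean() for i in range(k)])
        if np.all(np.abs(new_means - means) < tol):     # means have stabilised
            return labels
        means = new_means

pixels = np.array([10.0, 12, 11, 80, 82, 79])  # two obvious brightness clusters
print(kmeans_1d(pixels))
```

Real implementations add safeguards (empty-cluster handling, multiple restarts) and operate on all bands jointly; this sketch only shows the assign/recompute cycle.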