unit2

rs and gis r20

Uploaded by vamsi

Introduction to Image Analysis

Definition and Scope:


Image analysis refers to the process of extracting meaningful and actionable information from
images, typically using digital image processing techniques. The tasks involved can range from
straightforward operations, such as reading barcoded tags, to highly sophisticated applications
like facial recognition.
Remote sensing data can be analyzed using two primary approaches:
1. Visual Image Interpretation (Hardcopy or Pictorial Form):
o Involves manually examining imagery to locate specific features or conditions.
o Results are often geo-coded for integration into Geographic Information Systems
(GIS).
Advantages:
o Intuitive and useful for pattern recognition or contextual analysis.
Disadvantages:
o Labor-Intensive: Requires significant manual effort and extensive training.
o Limited Spectral Evaluation: The human eye has a restricted ability to discern
tonal variations and subtle spectral differences, limiting its effectiveness for
certain analyses.
2. Digital Image Processing (Digital Form):
o Involves computational analysis of remote sensing data, often stored in raster
formats.
o Particularly effective when spectral patterns provide critical information.
Advantages:
o Allows detailed spectral analysis that surpasses human visual capabilities.
o The output can be directly integrated into raster GIS databases for advanced
spatial analysis.
Visual Interpretation
Definition:
Visual interpretation is the process of extracting qualitative and quantitative information
about objects or features from aerial photographs or satellite images. This technique relies on
human visual perception to analyze the imagery.

Terminology:
• Image Interpretation: A general term used when interpreting satellite images.
• Aerial Photo Interpretation: Specifically refers to the interpretation of aerial
photographs.

Types of Interpretation:
1. Visual Interpretation:
o Relies on human expertise to analyze imagery without computational assistance.
o Typically involves examining hardcopy photographs or printed satellite images.
2. Digital Interpretation:
o Performed with the assistance of computer software.
o Suitable for analyzing digital data from aerial or satellite sources, often leveraging
algorithms for feature detection and classification.

ELEMENTS OF VISUAL INTERPRETATION


Visual Interpretation Process:
Visual interpretation involves systematically analyzing imagery using key elements such as:
• Shape: The form or outline of objects.
• Size: The dimensions or scale relative to known objects.
• Tone/Color: Variations in brightness or hues.
• Texture: The surface quality, such as smoothness or roughness.
• Pattern: Spatial arrangements or repetitive configurations.
• Shadow: Provides clues about the height and shape of objects.
• Location/Association: Contextual positioning and relationships between objects.

1. Tone
Tone refers to the color or relative brightness in an image:
• Color Images: Indicated by hue or color variation.
• Black & White Images: Denoted by shades of gray.
Tonal differences arise due to an object's reflection, transmission, or absorption of light.
Factors influencing tone include:
• Reflectivity and light angle.
• Geographic location and latitude.
• Camera type, film sensitivity, and processing.

2. Size
Size relates to the scale and proportional dimensions of an object in the image.
• Relative Size: Comparison to other known objects.
• Absolute Size: Measured based on the image scale.
The height of objects can be inferred from their shadows. Familiar features such as automobiles, railways, and rivers have distinctive relative sizes that assist in recognition.
3. Shape
Shape refers to the outline or general form of an object.
• Geometric Shapes: Often human-made (e.g., buildings, roads).
• Irregular Shapes: Typically natural objects (e.g., rivers, forests).
Certain objects, like railways and highways, are distinguishable by their unique shapes.

4. Texture
Texture represents the roughness or smoothness of a surface in an image.
• Dependent on tone, shape, size, and pattern.
• Qualitative Descriptions: Coarse, fine, rippled, mottled, etc.
Examples:
• Grass and water appear smooth.
• Forest canopies or rugged terrain appear rough.

5. Association
Association is the occurrence of features in relation to surrounding elements.
• Example: High schools can be identified by associated features like adjacent football
fields.
Context plays a critical role in feature identification.

6. Shadow
Shadows help in identifying objects by providing:
1. Shape/Outline: Offering a profile view of objects.
2. Contrast: Highlighting taller objects with distinct shadows.
Challenges:
• Objects in shadows are harder to discern due to low reflectivity.
Example: The Qutub Minar casts a longer shadow than smaller objects like trees or buildings.
7. Site
Site refers to the topographic position or geographic context of a feature.
• Example: Sewage treatment facilities are typically found in low-lying areas near rivers.
Landforms and associated geology can also be identified using site characteristics.

8. Pattern
Pattern is the spatial arrangement of objects in an image.
• Example: Specific tree species may form identifiable arrangements based on ecological
factors.
• Terrain features like drainage systems or land-use patterns also form recognizable
patterns.

DIGITAL IMAGE CLASSIFICATION

DEFINITION

➢ Digital Image Processing is the manipulation of digital data with the help of computer hardware and software to produce digital maps in which specific information has been extracted and highlighted. It comprises two stages:

➢ Pre-processing, and

➢ Image Processing

1. Pre-processing

Remotely sensed raw data generally contains flaws and deficiencies introduced by the imaging sensor mounted on the satellite. Correcting these deficiencies and removing the flaws through systematic methods is termed pre-processing. It involves correcting geometric distortions, calibrating the data radiometrically, and eliminating noise present in the data. The main pre-processing methods are:

a) Radiometric correction method


1. Correction for missing lines
2. Correction for periodic lines striping
3. Random noise correction

b) Atmospheric correction method


c) Geometric correction methods.
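As one example of the radiometric corrections listed above, a dropped scan line can be repaired by interpolating from its neighbours. The sketch below is a minimal illustration, assuming the band is held as a NumPy array; the function name and sample values are hypothetical.

```python
import numpy as np

def fix_missing_line(band, row):
    """Replace a dropped scan line with the mean of its neighbours.

    band : 2-D NumPy array of digital numbers (DNs).
    row  : index of the missing line.
    The missing line carries no information of its own, so it is
    interpolated from the lines directly above and below it.
    """
    band = band.astype(float).copy()
    band[row] = (band[row - 1] + band[row + 1]) / 2.0
    return band

# Hypothetical 4x3 band in which scan line 2 was dropped (all zeros)
band = np.array([[10, 12, 14],
                 [20, 22, 24],
                 [ 0,  0,  0],
                 [30, 32, 34]])
fixed = fix_missing_line(band, 2)
```

Periodic line striping is handled similarly, but the offset is estimated per detector rather than per line.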

2. Image Processing

➢ The second stage in Digital Image Processing entails five different operations, namely:

A. Image Registration

Image registration is a critical process in remote sensing and image analysis that involves
aligning two images or maps with similar geometric properties. The goal is to ensure that
corresponding elements of the same ground area in both images appear in the same spatial
location.

Process:

1. Identification of Control Points: Select easily recognizable features in both images (e.g., road intersections, landmarks).

2. Transformation: Apply mathematical models to translate, rotate, or scale one image to align with the other.

3. Resampling: Adjust the pixel values of the transformed image to fit the new geometry.
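The transformation step above can be sketched as a least-squares affine fit to the matched control points. This is a minimal illustration, not a full registration pipeline; the function name and control-point coordinates are hypothetical.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src control points to dst.

    src_pts, dst_pts : (N, 2) arrays of matching control-point
    coordinates, N >= 3. Returns a 2x3 matrix A such that
    dst ~= A @ [x, y, 1], i.e. a combined translate/rotate/scale.
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])             # N x 3 design matrix
    # Solve X @ A.T = dst in the least-squares sense
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

# Hypothetical control points: the second image is shifted by (5, -2)
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, -2), (15, -2), (5, 8), (15, 8)]
A = estimate_affine(src, dst)
```

With more control points than unknowns, the least-squares fit also averages out small errors in picking the points; resampling then interpolates pixel values at the transformed grid positions.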
B. Image Enhancement

Image enhancement techniques improve the quality of images for better human interpretation by increasing
contrast, emphasizing edges, and enhancing specific features. Methods include contrast stretching, histogram
equalization, spatial filtering, and color enhancement, each tailored to highlight relevant details or suppress
noise for effective analysis.
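Of the enhancement methods listed, contrast stretching is the simplest to sketch: the occupied DN range is linearly rescaled to the full display range. The function name and sample band are illustrative.

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Linear contrast stretch.

    Rescales the band's occupied DN range to [out_min, out_max] so
    that the full display range is used, making subtle tonal
    differences easier to see.
    """
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    if hi == lo:                        # flat image: nothing to stretch
        return np.full_like(band, out_min)
    return (band - lo) / (hi - lo) * (out_max - out_min) + out_min

# Hypothetical low-contrast band with DNs packed into 100..140
band = np.array([[100, 110], [120, 140]])
stretched = linear_stretch(band)
```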

C. Image Filtering

D. Image Transformation

E. Image Classification

Image Classification is the process of categorizing all pixels in a digital image into land use or
land cover classes based on their spectral or spatial patterns. It is a part of the broader field of
pattern recognition.

Types of Classification:

1. Supervised Classification: User-driven method where predefined training data guides the identification of classes in the image.

2. Unsupervised Classification: Automated method where the system identifies natural groupings or clusters in the data without user-defined inputs.

Supervised Classification

In supervised classification, two commonly used algorithms are Parallelepiped Classifier and
Maximum Likelihood Classifier:

a) Parallelepiped Classifier

• This method uses predefined class limits (based on training data) to classify pixels.

• A parallelepiped is created around the mean of each class in the feature space.
• Pixel assignment:

o If a pixel falls inside a parallelepiped, it is assigned to that class.

o If it overlaps multiple classes, it is assigned to an overlap class.

o If it doesn’t fall in any parallelepiped, it is assigned to the null class.

Example:
Pixel 1 is classified as forest, and Pixel 2 as urban, based on the parallelepiped dimensions.
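The assignment rule above can be sketched as a per-band range test. This is a minimal illustration assuming NumPy arrays; the class names and box limits are hypothetical training statistics, not real data.

```python
import numpy as np

def parallelepiped_classify(pixel, boxes):
    """Classify one pixel by the parallelepiped rule.

    pixel : 1-D array of band values.
    boxes : dict class_name -> (low, high) arrays of per-band limits,
            typically mean +/- k standard deviations of training data.
    Returns the class name, 'overlap' if the pixel falls inside more
    than one box, or 'null' if it falls inside none.
    """
    pixel = np.asarray(pixel, float)
    hits = [name for name, (lo, hi) in boxes.items()
            if np.all(pixel >= lo) and np.all(pixel <= hi)]
    if len(hits) == 1:
        return hits[0]
    return 'overlap' if hits else 'null'

# Hypothetical two-band class limits derived from training data
boxes = {'forest': (np.array([10, 30]), np.array([40, 60])),
         'urban':  (np.array([50, 70]), np.array([90, 110]))}
```

A usage example: `parallelepiped_classify([20, 45], boxes)` falls inside only the forest box, while a pixel outside every box lands in the null class.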

b) Maximum Likelihood Classifier (MLC)

• A widely used and powerful algorithm in remote sensing.

• Working principle:

o Assigns a pixel to the class for which it has the highest probability of membership.

o It estimates means and variances of classes using training data and calculates
probabilities based on these.

• MLC considers both the mean brightness values and their variability for classification.

• Strengths:

o Accurate and robust if high-quality training data and valid assumptions about class
distributions are available.
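The MLC decision rule can be sketched with per-band Gaussian class models. For simplicity this sketch assumes independent bands (diagonal covariance), whereas a full MLC uses the complete covariance matrix; the class names and training statistics are hypothetical.

```python
import numpy as np

def ml_classify(pixel, stats):
    """Maximum-likelihood classification with per-band Gaussians.

    stats : dict class_name -> (mean, var) arrays per band, estimated
    from training data. The pixel is assigned to the class with the
    highest log-likelihood, which weighs both the class mean and its
    variability - the property that distinguishes MLC from simpler
    minimum-distance rules.
    """
    pixel = np.asarray(pixel, float)
    best, best_ll = None, -np.inf
    for name, (mean, var) in stats.items():
        # log of the Gaussian density, summed over bands
        ll = -0.5 * np.sum(np.log(2 * np.pi * var)
                           + (pixel - mean) ** 2 / var)
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical training statistics for two classes in two bands
stats = {'water':      (np.array([15.0, 10.0]), np.array([4.0, 4.0])),
         'vegetation': (np.array([40.0, 80.0]), np.array([25.0, 25.0]))}
```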

Unsupervised Classification

Unsupervised classification is an automated process that organizes pixels into clusters without
analyst intervention, relying entirely on the statistical properties of image data distribution
(clustering).
Steps of Unsupervised Classification

1. Pixels are grouped based on their spectral similarities.

2. The output is an image of statistical clusters.

3. Interpretation is required to assign meaningful thematic labels to the clusters based on prior knowledge.

Clustering Algorithms

a) K-Means Clustering

• Pixels are grouped into clusters based on initial mean values.

• The process is iterative:

o Mean values are recalculated until they stabilize beyond a defined threshold.

o Pixels are classified based on the minimum distance to the mean or another
principle.

• The number of clusters (k) is predefined by the user.

• Objective: Minimize variability within clusters while maximizing differences between clusters.

b) ISODATA Clustering (Iterative Self-Organizing Data Analysis Technique)

• An enhancement of K-Means Clustering.

• Key features:

o Iterative classification until specified results are achieved.


o Automatically merges similar clusters and splits heterogeneous clusters.

• Especially effective when the number of clusters is unknown.

• Works best when all image bands have similar data ranges.
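The merge/split behaviour that distinguishes ISODATA from plain k-means can be sketched as a single adjustment step. This is a simplified illustration, not a full ISODATA implementation: the function name, thresholds, and cluster statistics are all hypothetical, and a merged cluster keeps the first member's spread for brevity.

```python
import numpy as np

def isodata_adjust(means, stds, counts, merge_dist=10.0, split_std=20.0):
    """One ISODATA-style cluster adjustment step (simplified sketch).

    Clusters whose means lie closer than merge_dist are merged with
    count-weighted averaging; a cluster whose spread exceeds split_std
    is split into two along its widest band. Returns the new means.
    """
    means = [np.asarray(m, float) for m in means]
    out_m, out_s = [], []
    merged = set()
    for i in range(len(means)):
        if i in merged:
            continue
        m, s, c = means[i], np.asarray(stds[i], float), counts[i]
        for j in range(i + 1, len(means)):
            if j not in merged and np.linalg.norm(m - means[j]) < merge_dist:
                w = c + counts[j]          # count-weighted merged mean
                m = (c * m + counts[j] * means[j]) / w
                c = w
                merged.add(j)
                break
        out_m.append(m)
        out_s.append(s)
    final_m = []
    for m, s in zip(out_m, out_s):
        if s.max() > split_std:            # too heterogeneous: split
            band = s.argmax()
            lo, hi = m.copy(), m.copy()
            lo[band] -= s[band]
            hi[band] += s[band]
            final_m.extend([lo, hi])
        else:
            final_m.append(m)
    return final_m

# Hypothetical clusters: the first two are close enough to merge,
# the third is spread out enough to split
clusters = isodata_adjust([[0, 0], [4, 0], [100, 0]],
                          [[1, 1], [1, 1], [30, 1]],
                          [10, 30, 20])
```

In full ISODATA this adjustment alternates with reassignment of pixels to the updated means until the cluster set stops changing, which is why the number of clusters need not be known in advance.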
