
Lecture – 1

Importance of GIS
1. What is GIS?
Geographic – 80% of data collected is associated with some location in space.
Information - attributes, or the characteristics (data), can be used to symbolize
and provide further insight into a given location.
System – a seamless operation linking the information to the geography – which
requires hardware, networks, software, data, and operational procedures.
• A container of maps in digital form
• A computerized tool for solving geographic problems
• A spatial decision support system
• A mechanized inventory of geographically distributed features and
facilities
• A tool for revealing what is otherwise invisible in geographic information
• A tool for performing operations on geographic data that would be too tedious, expensive, or inaccurate if performed by hand (Longley et al.)
2. Relation of traditional disciplines with GIS

Functions of GIS considered here: data acquisition, pre-processing, data structure (database), modeling, spatial analysis, mapping, and display. Each traditional discipline contributes to a subset of these functions (✓):

Geography ✓ ✓ ✓
Cartography ✓ ✓ ✓ ✓
Remote Sensing ✓ ✓ ✓ ✓
Photogrammetry ✓ ✓ ✓ ✓
Surveying ✓ ✓
Geodesy ✓
Statistics ✓ ✓ ✓
Operations Research ✓ ✓
Computer Science ✓ ✓ ✓ ✓ ✓ ✓
Mathematics ✓ ✓ ✓
Civil Engineering ✓ ✓
Urban Planning ✓ ✓ ✓


3. Application of GIS
Urban planning
• Analyze the urban growth and its direction of expansion.
• Find suitable sites for further urban development.
• Land use analysis.
Transportation planning
• Traffic density studies.
• Creating extensive database on traffic information, speed, road geometry,
traffic flow and other spatial data.
• Accident analysis and hot spot analysis.
• Emergency route planning.
Environmental science
• Monitor the environment and analyze changes.
• Environmental Impact assessment (EIA) can be carried out efficiently by
integrating various GIS layers.
• Locate arsenic contamination in shallow and deep tube wells.
Land administration
• Create digital cadastral databases, which are necessary for digital taxation and utility management.
Utility management
• Determine optimal oil and gas pipeline routes that will minimize economic loss and negative socio-environmental impacts.
• Drainage planning.
Hydrology
• Snow cover mapping.
• Runoff prediction.
• Hydrological modeling.
• Stream extraction.
• Catchment and watershed delineation.
Forestry
• Monitor degree of deforestation.
• Measure density of vegetation.

• Manage, analyze, and visualize wildlife data such as habitat loss, pollution, and invasive species introduction.
Agriculture
• Evaluate the irrigation performance.
• Determine the best crop to plant and the best place for planting. This could
increase food production.
• Identification of major crops and their conditions, and determination of
their areal extent and yield.
Fisheries
• GIS can be used in fisheries assessment and management.
• GIS can help in identifying illegal fishing spots.
• GIS helps to get accurate information regarding various commercial activities in the ocean industry area.
Geology
• Identify soil types and delineate soil boundaries.
• Soil suitability for various land use activities like dam site location.
• Sedimentation and erosion mapping.
• Analyze rock formation and characteristics.
Tourism
• GIS is an ideal platform for generating a better understanding of a destination and serving the needs of tourists: with a click they can get information, measure distances, find hotels and restaurants, and even navigate to them.
Social science
• School student walking distance analysis.
• Site selection and suitability analysis: finding the right sites to locate new businesses, schools, hospitals, waste treatment plants, and even dustbins.
• Crime mapping and defense purpose.
Disaster management
• Hazard identification
• Hazard zone mapping/ Vulnerability mapping
• Risk analysis and management
• Identification of suitable location for rehabilitation/shelters
• Disaster damage estimation


Water & sanitation


• Urban/Rural water supply system
• Suitability analysis of tube wells, sanitation facilities and solid waste
dumping sites
• Ground water quality analysis and monitoring
• Water/sanitation demand modeling
• Service area mapping
• Participatory GIS mapping
4. Integrated subsystem of GIS
Data Processing Subsystem
• Data Acquisition: from maps, images, or field surveys.
• Data Input: data must be input from source material to the digital database.
• Data Storage: how often is it used, how should it be updated, is it confidential?

Data Analysis Subsystem
• Retrieval and Analysis: may be simple responses to queries, or complex statistical analysis of large sets of data.
• Information Output: how should the results be displayed? As maps or tables? Or will the information be fed into some other digital system?

Information Use Subsystem
• Users may be researchers, planners, or managers.
• Provides interaction between the GIS group and users to plan analytical procedures and data structures.

5. Components of GIS

Hardware, Software, People, Data, Methods, Network etc.

6. Five M's in GIS

The five M's are mapping, measurement, monitoring, modeling, and management.

7. Data type in GIS

Spatial Data
• Spatial data relate to the geometry of spatial features and their relative locations.
• Spatial data are used to provide the visual representation of a geographic space.
Attribute Data
• Attribute (aspatial) data give descriptive information about spatial features.
• Attribute data are stored in a table linked to the feature's location.
Meta Data
• Meta data is data about data.
• It is a summary document providing documentation about spatial and attribute data, describing the content, quality, condition, timing, accuracy, and other characteristics of a data set.
8. Data structure or model in GIS

Vector data
• Uses points and their x-, y- coordinates to construct spatial features of
points, lines, and areas.
• Coordinates sometimes include Z (3D) values and sometimes M (linear-referencing) values
• Used to represent discrete objects over the space
• Vector objects exist independent of any other nearby features
Raster data
• Raster is a grid consisting of individual cells or pixels.
• Each cell holds a value (elevation, radiance, reflectance, rainfall, or land
use type).
• Values can be integer or decimal
• Data can be discrete or continuous
• The resolution of the data is the size on the ground covered by each cell (see the sketch below).
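As a minimal illustration of the raster model (plain NumPy; the origin, cell size, and values here are invented for the example), a raster is just a 2-D array of cell values plus an origin and a resolution:

```python
import numpy as np

# A raster: a 2-D grid of cell values plus georeferencing information,
# i.e., an origin (upper-left corner) and a cell size (the resolution).
elevation = np.array([[12.5, 13.0, 13.4],
                      [12.1, 12.8, 13.1],
                      [11.7, 12.2, 12.6]])    # one z-value per cell

x_origin, y_origin = 500000.0, 2600000.0      # upper-left corner (map units)
cell_size = 30.0                              # 30 m on the ground per cell

def cell_center(row, col):
    """Map coordinates of the center of cell (row, col)."""
    x = x_origin + (col + 0.5) * cell_size
    y = y_origin - (row + 0.5) * cell_size    # y decreases as row increases
    return x, y

print(cell_center(1, 2))                      # (500075.0, 2599955.0)
```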

9. Data format in GIS


Vector
• AutoCAD Drawing Files (DWG)
• Digital Exchange Format (DXF)
• Digital Line Graph (DLG)
• Shapefile


• Scalable Vector Graphics (SVG)
• Geodatabase


Raster
• Standard Raster Format
• Tagged Image File Format (TIFF) / GeoTIFF
• Graphics Interchange Format (GIF)
• Joint Photographic Experts Group (JPEG)
• Digital Elevation Model (DEM)
• Band Interleaved by Line (BIL)
• Band Interleaved by Pixel (BIP)

10. Vector vs. Raster: Advantages and Disadvantages

• Data structure: vector data structures are complex; raster structures are simple.
• Modeling: simulation modeling of processes and analytical calculations are more difficult with vector data; mathematical modeling is easy with raster data because all spatial entities have a simple, regular shape.
• Continuous surfaces: vector data are not good at representing continuously changing surfaces such as elevation or soil moisture; raster data display such surfaces easily.
• Shape accuracy: vector data represent the shapes of features accurately; raster data cannot without high spatial resolution.
• Data volume: vector data are good for managing attributes and can have smaller datasets; raster datasets can be very large.
• Coordinate transformation: easy for vector data; difficult and time-consuming for raster data.


11. Difference between discrete data and continuous data in GIS

• Definition: discrete data consist of distinct, separate values; continuous data can take any value within a given range.
• Nature of values: discrete data are countable, often whole numbers; continuous data have infinite possibilities, including fractional or decimal values.
• Examples: discrete — number of trees in a forest, population count, land use types; continuous — elevation, temperature, rainfall amount.
• Representation in GIS: discrete data are often represented using the vector data model (points, lines, polygons); continuous data are represented using the raster data model (grids of cells), or the vector model for continuous features.
• Analysis approach: discrete data suit counting, categorization, and distinct classification; continuous data suit spatial analysis involving interpolation, modeling, and continuous surface representation.
• Spatial relationships: discrete features are separate, with no intermediate values between them; continuous features can have a continuous distribution with gradual variations.

Lecture – 2

1. Major Technological Tools for Disaster Management

The table relates each tool to the disaster management phases (mitigation and prevention, risk reduction, preparedness, response, and recovery); more √ marks indicate heavier use:

GIS/RS/GPS √√√ √√√ √√ √√ √
Analytical tools √√√ √√√ √ √
Blogging √√ √√√
Internet √√ √√ √√ √√ √√√
Mobile (voice) √ √√√ √√√
Satellite communication √√√ √√√ √√√ √√√
Web 2.0, social networking √ √ √√√
TV, radio √√ √√√ √√√

2. Risk triangle
The three sides of the risk triangle are hazard, exposure, and vulnerability, and risk is represented by the triangle's area.
• If any one of these sides increases, the area of the triangle increases, and hence the amount of risk also increases.
• If any one of the sides shrinks, the risk reduces.
• If we can eliminate one side entirely, there is no risk.

Lecture – 3

Ellipse
An ellipse is an oval with a major axis (the longer axis) and a minor axis (the shorter axis). It is a two-dimensional shape.

Ellipsoid or spheroid
• If you rotate an ellipse about one of its axes, the rotated figure is a spheroid: a three-dimensional shape created from a two-dimensional ellipse.
• It is now known that an oblate ellipsoid (or spheroid) is a better
approximation of the shape of the earth.

Geoid
• The geoid is defined as an equipotential surface of the earth's gravity field, which is roughly the same as mean sea level.
• Since the mass of the earth is not uniform at all points and the direction of gravity changes, the shape of the geoid is irregular. Because of these irregularities, the geoid is a very difficult surface on which to describe or relate locations.
Classically, an ellipsoid was chosen and fitted to a particular area of interest, for
example a country, either at a single point or as the best fit between several points
on the earth’s surface.
Examples of globally fitted ellipsoids:
▪ WGS 1984 spheroid
▪ Geodetic Reference System (GRS) 1980 spheroid
Example of a locally fitted ellipsoid:
▪ Clarke 1866 spheroid

Datum
It is also necessary to define the spatial relationship (position and orientation)
between the chosen ellipsoid and the geoid. This is achieved through the
definition of a “Geodetic datum”. While a spheroid approximates the shape of the
earth, a datum defines the position of the spheroid relative to the center of the
earth. A datum provides a frame of reference for measuring locations on the
surface of the earth. It defines the origin and orientation of latitude and longitude
lines.
▪ North American Datum (NAD) 1927 using the Clarke 1866 spheroid

9|Zaman
Lecture – 3

▪ NAD 1983 using the Geodetic Reference System (GRS) 1980 spheroid
▪ World Geodetic System (WGS) 1984 using the WGS 1984 spheroid

Datum transformation
Changing from one datum to another is referred to as a datum transformation, which requires complex mathematical calculations (see the sketch below).
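As a sketch of how this looks in practice (assuming the Python pyproj library is available; EPSG:4267 is NAD27 and EPSG:4326 is WGS84):

```python
from pyproj import Transformer

# Datum transformation: NAD27 (EPSG:4267) -> WGS84 (EPSG:4326).
# pyproj performs the complex math (grid shifts / datum parameters) internally.
nad27_to_wgs84 = Transformer.from_crs("EPSG:4267", "EPSG:4326", always_xy=True)

lon, lat = nad27_to_wgs84.transform(-100.0, 40.0)   # NAD27 longitude/latitude
print(lon, lat)                                      # same point in WGS84
```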

Coordinate system
A coordinate system is a reference system used to represent the locations of
geographic features, imagery, and observations, such as Global Positioning
System (GPS) locations, within a common geographic framework.
▪ Geographic
▪ Planimetric/Projected

Geographic coordinate system


A geographic coordinate system uses a three-dimensional spherical surface to
define locations on the earth. It includes
▪ An angular unit of measure,
▪ A prime meridian, and
▪ A datum (based on a spheroid).
Latitude and Longitude
Spherical coordinates are measured from the earth's center. Represented by
latitude and longitude in decimal degree. The origin of the graticule (0,0) is
defined by where the equator and prime meridian intersect.
Latitude values are measured relative to the equator and range from -90° at the
South Pole to +90° at the North Pole.
Longitude values are measured relative to the prime meridian. They range from -180° when traveling west to +180° when traveling east.
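A common way to compute distance on the geographic (latitude/longitude) graticule is the haversine formula; the sketch below is standard spherical trigonometry rather than lecture material, and assumes a mean earth radius of 6,371 km:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/long points (decimal degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Example: Dhaka (23.81 N, 90.41 E) to the graticule origin (0, 0).
print(round(haversine_km(23.81, 90.41, 0.0, 0.0)))   # about 10,050 km
```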
Projected coordinate system
The earth's geographic coordinates are projected onto a two-dimensional planar
surface. Unit of measurement typically is meter or feet.
There are mainly three types of projection:
i. Cylindrical ii. Conical iii. Azimuthal


All three projection types can also be transverse or secant.


Projection types and examples:
Cylindrical: Mercator, Cassini, Equirectangular, Mollweide, Sinusoidal, Robinson
Conical: Lambert conformal conic, Albers conic, Bonne, Bottomley, Werner, American polyconic
Azimuthal: Gnomonic, Lambert azimuthal equal-area, Stereographic
Others: AuthaGraph, Octant projection, Cahill's Butterfly Map, GS50, Peirce quincuncial, Van der Grinten

Map Distortion
▪ Map distortions are unavoidable when making a flat map of a globe.
▪ Distortion may take different forms in different parts of the map.
▪ Distortion is usually less near the points or lines where the map surface touches or intersects the globe.
▪ Only a few points on a map have zero distortion.

A map can show one or more, but never all, of the following at the same time:
▪ True Directions
▪ True Distances
▪ True Areas
▪ True Shapes

Projection types by what they preserve:
Conformal: preserves local shape. Examples: Mercator, Transverse Mercator, Lambert conformal conic.
Equal-area: preserves the area of displayed features; recommended for political/public purposes. Example: Lambert cylindrical equal-area.
Equidistant: preserves the distances between certain points. Examples: Sinusoidal, Azimuthal equidistant.
True-direction: preserves true direction; mostly used for navigation. Example: azimuthal projections.

UTM
▪ The globe is divided into 60 zones, each spanning 6° of longitude.
▪ Each zone has its own central meridian. The origin for each zone is its
central meridian and the equator.
▪ A transverse secant cylindrical projection is used for each of these zones.

UTM (Scale Factor)


▪ The two small circles (the lines of secancy) are 180 kilometers east and west of the central meridian at the equator.
▪ Small circles have a scale factor of 1.
▪ The center line of a UTM grid zone has a scale factor of 0.9996. This means
that a distance of 100 meters on an ellipsoid would be 99.96 meters on a
map.

UTM (False Easting & False Northing)


▪ False easting is a linear value applied to the origin of the x-coordinates.
▪ False northing is a linear value applied to the origin of the y-coordinates.
▪ These values are usually applied to ensure that all x- and y-values are
positive.
▪ You can also use these to reduce the range of the x- or y-coordinate values. For example, if you know all y-values are greater than 5,000,000 meters, you could apply a false northing of -5,000,000 (see the sketch below).
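A sketch with the Python pyproj library (assumed available) makes the false easting visible; EPSG:32646 is WGS84 / UTM zone 46N (central meridian 93°E), so a point on the central meridian gets an easting of exactly 500,000 m:

```python
from pyproj import Transformer

# WGS84 geographic (EPSG:4326) -> UTM zone 46N (EPSG:32646).
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32646", always_xy=True)

easting, northing = to_utm.transform(93.0, 23.0)   # lon, lat on the CM
print(round(easting))                              # 500000 (the false easting)
print(round(northing))                             # meters north of the equator
```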

Bangladesh Transverse Mercator (BTM)


▪ Bangladesh falls into 2 UTM zones: Zone 45N (central meridian 87°E) and Zone 46N (central meridian 93°E).
▪ Parameter values differ between the two zones.
▪ Instead of 87°E or 93°E, BTM uses 90°E as its central meridian.


Georeferencing
Aligning geographic data to a known coordinate system so it can be viewed,
queried, and analyzed with other geographic data. Georeferencing may involve
shifting, rotating, scaling, skewing, and in some cases warping, rubber sheeting,
or ortho-rectifying the data.
▪ To a vector
▪ To a raster
▪ By entering specific x,y coordinates
The higher the transformation order, the more complex the distortion that can be
corrected. However, transformations higher than third order are rarely needed. In
general, if your raster dataset needs to be stretched, scaled, and rotated, use a first-
order transformation. If, however, the raster dataset must be bent or curved, use
a second- or third-order transformation.
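A first-order transformation is an affine transformation, x' = a·x + b·y + c and y' = d·x + e·y + f, with six unknowns solved from control points by least squares. A minimal NumPy sketch (the control-point coordinates are invented for illustration):

```python
import numpy as np

# Control points: pixel (column, row) -> map (x, y) coordinates.
src = np.array([[10, 10], [200, 15], [105, 180], [20, 170]], dtype=float)
dst = np.array([[500100, 2599900], [505800, 2599950],
                [503000, 2595000], [500400, 2595200]], dtype=float)

# Design matrix [x, y, 1]; least squares gives the six affine parameters.
A = np.column_stack([src[:, 0], src[:, 1], np.ones(len(src))])
coef_x = np.linalg.lstsq(A, dst[:, 0], rcond=None)[0]   # a, b, c
coef_y = np.linalg.lstsq(A, dst[:, 1], rcond=None)[0]   # d, e, f

def to_map(px, py):
    """Transform a pixel coordinate to a map coordinate."""
    return (coef_x @ [px, py, 1.0], coef_y @ [px, py, 1.0])

print(to_map(10, 10))   # should land close to the first control point
```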

Density tools
With the Density tools, you can calculate the density of input features within a
neighborhood around each output raster cell.
By calculating density, you are in a sense spreading the values (of the input) out
over a surface. The magnitude at each sample location (line or point) is distributed
throughout the study area, and a density value is calculated for each cell in the
output raster.
The density calculations are dependent on accurate distance and area calculations.
The following table lists the available Density tools and provides a brief
description of each.
Kernel Density: calculates a magnitude-per-unit-area from point or polyline features, using a kernel function to fit a smoothly tapered surface to each point or polyline.
Line Density: calculates a magnitude-per-unit-area from polyline features that fall within a radius around each cell.
Point Density: calculates a magnitude-per-unit-area from point features that fall within a neighborhood around each cell.

13 | Z a m a n
Lecture – 3

Point density
▪ Point density calculates a magnitude per unit area from point features that
fall within a neighborhood around each cell.
▪ Larger values of the radius parameter produce a more generalized density
raster. Smaller values produce a raster that shows more detail.
▪ Only the points that fall within the neighborhood are considered when calculating the density. If no points fall within the neighborhood at a particular cell, that cell is assigned NoData (see the sketch below).
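A minimal sketch of the idea (plain NumPy, using a brute-force loop rather than the optimized tool, with invented sample points):

```python
import numpy as np

def point_density(points, rows, cols, cell, radius):
    """Points per unit area within `radius` of each output cell center."""
    out = np.full((rows, cols), np.nan)          # NaN plays the role of NoData
    area = np.pi * radius ** 2
    for r in range(rows):
        for c in range(cols):
            cx, cy = (c + 0.5) * cell, (r + 0.5) * cell
            d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
            n = np.count_nonzero(d <= radius)
            if n:                                # empty neighborhood -> NoData
                out[r, c] = n / area
    return out

pts = np.array([[25.0, 30.0], [40.0, 35.0], [90.0, 80.0]])
print(point_density(pts, rows=4, cols=4, cell=30.0, radius=45.0))
```

A larger radius smooths the surface; a smaller one shows more detail, exactly as described above.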

Line density
▪ Line density calculates a magnitude per unit area from line features that
fall within a neighborhood around each cell.
▪ Only the portion of a line within the neighborhood is considered when
calculating the density. If no lines fall within the neighborhood at a
particular cell, that cell is assigned NoData.
▪ Larger values of the radius parameter produce a more generalized density
raster. Smaller values produce a raster that shows more detail.

Kernel density
▪ KD calculates the density of features in a neighborhood around those features, e.g., the density of houses or of crime reports across a city.
▪ KD instead spreads the known quantity of the population for each point out
from the point location. A kernel function is then used to fit a smoothly
tapered surface to each point.
▪ The difference between the output of those two tools and that of Kernel
Density is that in point and line density, a neighborhood is specified that
calculates the density of the population around each output cell. Kernel
density spreads the known quantity of the population for each point out
from the point location. The resulting surfaces surrounding each point in
kernel density are based on a quadratic formula with the highest value at
the center of the surface (the point location) and tapering to zero at the
search radius distance.
▪ Estimation is predicting an unknown value at a location from reference points; in predicting the unknown value, we interpolate it from known point values. Therefore, KDE is also called kernel density interpolation. In estimating a point value, KDE uses a probability density function (PDF).

14 | Z a m a n
Lecture – 3

▪ The normal distribution curve looks like a bell; in KDE this is called the kernel shape. In the heatmap plugin, several kernel shapes are available: Quartic, Triangular, Uniform, and Epanechnikov.
▪ The kernel shape controls the rate at which the influence of a point
decreases as the distance from the point increases.

▪ Different kernels decay at different rates, so a triweight kernel gives features greater weight at distances closer to the point than the Epanechnikov kernel does. Consequently, triweight results in "sharper" hotspots, and Epanechnikov results in "smoother" hotspots.
▪ Decay ratio: can be used with triangular kernels to further control how heat from a feature decreases with distance from the feature (a quartic-kernel sketch follows).
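A sketch of one common quartic ("quadratic") kernel normalized for two dimensions (this particular normalization constant is an assumption; implementations differ in scaling):

```python
import numpy as np

def quartic_kernel(distance, radius):
    """Quartic kernel: highest at the point, tapering to zero at the radius."""
    w = np.clip(1.0 - (distance / radius) ** 2, 0.0, None)
    return (3.0 / (np.pi * radius ** 2)) * w ** 2  # assumed 2-D normalization

d = np.linspace(0.0, 120.0, 7)                   # distances from a point (m)
print(quartic_kernel(d, radius=100.0).round(6))  # weights taper smoothly to 0
```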

Heatmap
A heatmap is a visualization method for displaying event density or occurrence. Heatmaps are also used for clustering points: an area with more points receives a higher value than an area with fewer points in the same extent. A heatmap therefore shows the concentration of an event's occurrence.

Centroid
Centroids are point features that represent the geometric center (centroid) of multipoint, line, and area features.
The geometric centroid of a convex object always lies in the object. A non-convex
object might have a centroid that is outside the figure itself. The centroid of a ring
or a bowl, for example, lies in the object's central void.

Lecture – 4

Introduction to Geometric Network


Geometric networks offer a way to model common networks and infrastructures
found in the real world. Water distribution, electrical lines, gas pipelines,
telephone services, and water flow in a stream are all examples of resource flows
that can be modeled and analyzed using a geometric network.
A geometric network is a set of connected edges and junctions, along with
connectivity rules, that are used to represent and model the behavior of a common
network infrastructure in the real world.
Geometric networks are composed of two types of features: edges and junctions.
Edges and junctions in a geometric network are special types of features in the
geodatabase called network features. Think of them as point and line features
with extra behavior that is specific to a geometric network.

Edges
An edge is a feature that has a length through which some commodity flows.
Edges are created from line feature classes in a feature dataset.
Examples of edges: Water Mains, Electrical Transmission Lines, Gas Pipelines,
and Telephone Lines
There are two types of edges in a geometric network:
▪ Simple edges: allow resources to enter one end of the edge and exit the other end. The resource cannot be siphoned off or exit along a simple edge; it can only leave the edge at its endpoint. An example of a simple edge would be a water lateral in a water network.
▪ Complex edges: allow resources to flow from one end to the other, just like simple edges, but they also allow resources to be siphoned off along the edge without having to physically split the edge feature. An example of a complex edge would be a water main in a water network.

Water Main
The main water distribution line is a single complex pipe with multiple lateral lines connected to junctions along its length. The water main is not split at the junction where each lateral connects to the main, but it does allow water to be siphoned off along each of the laterals.


Water Lateral or Service Lateral


The water service lateral is the pipe that provides water from the water main in
the street to a home or business.

Junctions
A junction is a feature that allows two or more edges to connect and facilitates
the transfer of flow and resources between edges. Junctions are created from point
feature classes in a feature dataset.
Examples of junctions: Fuses, switches, service taps, and valves
There are two types of junctions in a geometric network:
▪ User-defined junctions: Junctions that are created based on a user's source
data (point feature classes) when the geometric network is first established.
Examples of junctions are service points, fuses, stream gauges, or taps.
Junctions correspond to a single junction element in the logical network.
▪ Orphan junctions: When the geometric network is created, a simple
junction feature class is created along with it called the orphan junction
feature class. The orphan junction feature class is used by the geometric
network to maintain network integrity. During the creation of the
geometric network, an orphan junction is inserted at the endpoint of any
edge at which a geometrically coincident junction does not already exist in
your source data.
Special types of junctions in a geometric network:
▪ Mid-span connectivity: connecting a junction at mid-span to an edge, thereby allowing resources to be siphoned from the edge while leaving the edge as a single feature. This is supported by complex edges only.
❖ Activities related to creating geometric networks
1. Creating new geometric network
2. Building geometric networks from existing data
3. Geometric network connectivity rule

1. Creating new geometric networks


The basic methodology for creating a geometric network is to determine which
feature classes will participate in the network and what role each will play.
Optionally, a series of network weights can be specified, as can other more advanced parameters. Ideally, your data should be clean before you build a network. Clean data means that all features that should be connected in the network are geometrically coincident—that is, no overshoots or undershoots. However, if this is not the case, the data may be snapped during the network-building process.
▪ Ways to clean data:
1. Set snapping tolerance /Cluster processing
2. Create topological relationships
Topology Rules
Topology rules define the permissible spatial relationships between features. The
rules you define for a topology control the relationships between features within
a feature class, between features in different feature classes, or between subtypes
of features.
Use your features' spatial relationships and behavior to define topology rules:
▪ Parcels cannot overlap. Adjacent parcels have shared boundaries.
▪ Stream lines cannot overlap and must connect to one another at their
endpoints.
▪ Adjacent counties have shared edges. Counties must completely cover and
nest within states.
▪ Adjacent Census Blocks have shared edges. Census Blocks must not
overlap, and Census Blocks must completely cover and nest within Block
Groups.
▪ Road centerlines must connect at their endpoints.
▪ Road centerlines and Census Blocks share coincident geometry (edges and nodes). A sketch of checking one such rule follows.
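As a sketch, one of these rules can be tested with the Python shapely library (assumed available); `overlaps` is true only when two polygons' interiors intersect, so parcels that merely share a boundary pass the check:

```python
from shapely.geometry import Polygon

# Topology rule: "Parcels cannot overlap. Adjacent parcels share boundaries."
parcel_a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
parcel_b = Polygon([(10, 0), (20, 0), (20, 10), (10, 10)])  # shares an edge
parcel_c = Polygon([(5, 5), (15, 5), (15, 15), (5, 15)])    # intrudes into A

print(parcel_a.overlaps(parcel_b))   # False: adjacency is allowed
print(parcel_a.overlaps(parcel_c))   # True: this pair is a topology error
```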

Cluster Processing
Creating topological relationships involves analyzing the coordinate locations of
feature vertices among features in the same feature class as well as between the
feature classes that participate in the topology.
Those that fall within a specified distance of one another are assumed to represent
the same location and are assigned a common coordinate value (in other words,
they are collocated).
A cluster tolerance is used to integrate vertices. All vertices that are within the cluster tolerance may move slightly in the validation process. The default cluster tolerance is based on the precision defined for the dataset: 0.001 meters in real-world units, which is 10 times the x,y resolution (the amount of numerical precision used to store coordinates). See the sketch below.
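A minimal sketch of vertex clustering (not ArcGIS's actual algorithm; it simply snaps each vertex to a grid whose spacing is the cluster tolerance):

```python
def snap(vertex, tolerance=0.001):
    """Snap an (x, y) vertex to a grid of the given cluster tolerance."""
    return (round(vertex[0] / tolerance) * tolerance,
            round(vertex[1] / tolerance) * tolerance)

v1 = (500100.00012, 2599900.00049)
v2 = (500100.00031, 2599899.99986)   # within the 0.001 m tolerance of v1
print(snap(v1) == snap(v2))          # True -> treated as collocated
```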
2. Building geometric networks from existing data
Geometric networks are objects within the geodatabase. To work with a
geometric network in ArcMap, you must load a minimum of one feature class
that participates in the network.
▪ Open feature dataset within geodatabase
▪ Open feature classes with feature dataset
▪ Open geometric network
▪ Symbolize geometric network

3. Connectivity rules
Network connectivity rules constrain the type of network features that may be
connected to one another and the number of features of any particular type that
can be connected to features of another type. By establishing these rules, along
with others, such as attribute domains, you can maintain the integrity of the
network data in the database.
For example, in a water network, a hydrant can connect to a hydrant lateral but
not to a service lateral. Similarly, in the same water network, a 10-inch
transmission main can only connect to an 8-inch transmission main through a
reducer.

The Logical Network


When a geometric network is created, the geodatabase also creates a
corresponding logical network, which is used to represent and model connectivity
relationships between features. The logical network is the connectivity graph used
for tracing and flow operations. All connectivity between edges and junctions is
maintained in the logical network.
❖ Activities related to editing geometric networks
1. Identify geometric network build error
2. Verify network connectivity
3. Managing geometric network

1. Identifying geometric network build errors


When building a geometric network, the feature classes that are selected to
participate in the network may contain features whose geometries are invalid

within the context of a geometric network. These geometries include the following:
▪ Features that have empty geometry
▪ Edge features that contain multiple parts
▪ Edge features that form a closed loop or have the same from and to junction
▪ Edge features that have zero length
▪ Junctions coincident with an edge-feature vertex having a different z-value
▪ Standalone junctions; which are junctions not connected to any edges
▪ Edge features prevented from collapsing on themselves because their
length is near the snapping tolerance.

2. Verify network connectivity & network geometry


The Verify Network Connectivity command will create a selection set of network
features with inconsistent connectivity and display a dialog box listing the
number of selected features.
▪ A network feature with no corresponding network elements
▪ A network feature with one or more missing network elements
▪ A network feature with duplicate network elements
▪ A network feature associated with inconsistent or invalid network elements
▪ A network feature associated with or connected to a nonexistent network
feature
▪ A network junction that is not coincident with edges to which it is
connected
▪ A network element associated with a zero-length edge
▪ A network edge with invalid edge element order

3. Commands for editing


The following commands are available for editing:
▪ Network Build Errors
▪ Verify Network Geometry
▪ Verify Network Connectivity
▪ Rebuild Connectivity
▪ Repair Connectivity
▪ Connect
▪ Disconnect


What can you do with geometric networks?


Analysis and applications:
▪ Calculate the shortest path between two points. Various kinds of utility companies use this as a method of inspecting the logical consistency of a network and verifying connectivity between two points.
▪ Find all connected or disconnected network elements. Electric companies can see which part of the network is disconnected and use that information to figure out how to reconnect it.
▪ Find loops or circuits in the network. An electrical short circuit can be discovered.
▪ Determine the flow direction of edges when sources or sinks are set. Managers or engineers can see the direction of flow along edges, and ArcGIS can use the flow directions to perform flow-specific network analyses.
▪ Trace network elements upstream or downstream from a point. Water utilities can determine which valves to shut off when a pipe bursts.
▪ Calculate the shortest path upstream from one point to another. Environmental monitoring stations can home in on a source of pollution in streams.
▪ Find all network elements upstream from many points and determine which elements are common to them all. Electric utility companies can use the phone calls of customers experiencing an outage to locate suspect transformers or downed lines.
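A minimal sketch of an upstream trace on a logical network, modeled as a directed graph with a breadth-first search (the network itself is invented for illustration):

```python
from collections import deque

# Logical network as directed edges: (a, b) means flow goes from a to b.
flow_edges = [("pump", "j1"), ("j1", "j2"), ("j2", "house1"),
              ("j2", "house2"), ("j1", "j3")]

upstream = {}                        # node -> nodes immediately upstream
for a, b in flow_edges:
    upstream.setdefault(b, []).append(a)

def trace_upstream(start):
    """All elements upstream of `start` (e.g., valves to shut off)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for prev in upstream.get(node, []):
            if prev not in seen:
                seen.add(prev)
                queue.append(prev)
    return seen

print(trace_upstream("house1"))      # {'j2', 'j1', 'pump'}
```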

❖ Activities related to analyzing geometric networks


1. Determine the Flow Direction
2. Calculate the shortest path
3. Tracing


1. Determining Flow Direction


The flow direction in a network can be determined using two methods:
a) Digitized direction of network edges: Flow direction using digitized direction
is determined by the following:
▪ The connectivity of the network
▪ Direction of the edge features
Flow direction using the digitized direction of edges can only be set using the Set Flow Direction geoprocessing tool.
b) Through the definition of junctions as sources or sinks of flow: Flow direction
using sources and sinks is determined by the following:
▪ The connectivity of the network
▪ The locations of sources and sinks in the network
▪ The enabled or disabled state of features.

The decision to use sources and sinks to drive flow through a geometric network
must be made at the time of network creation and is a setting applied to junction
feature classes.
When a network is created with junction feature classes using sources or sinks,
individual junction features can then be defined as either sources or sinks.
Flow moves away from sources or toward sinks. Because flow direction can be
established with either sources or sinks, you should only use sources or sinks in
a network (otherwise, your network may have edges with indeterminate flow).
On the Utility Network Analyst toolbar, click Flow > Display Arrows For …..
✓ Sources are junction features that push flow away from themselves through
the edges of the network. For example, in a water distribution network, pump
stations can be modeled as sources since they drive the water through the pipes
away from the pump stations.
✓ Sinks are junction features that pull flow toward themselves from the edges
in the network. For example, in a sewer network, a wastewater treatment plant
may be modeled as a sink since gravity drives all water toward it.


Three Categories of flow direction:


▪ Determinate flow direction: If the flow direction of an edge can be uniquely
determined from the connectivity of the network, the locations of sources and
sinks, and the enabled or disabled states of features, the feature is said to have
determinate flow.

▪ Indeterminate Flow Direction: Indeterminate flow in a network occurs when


the flow direction cannot be uniquely determined from the topology of the
network, the locations of sources and sinks, or the enabled or disabled states
of the features.
▪ Uninitialized Flow Direction: Uninitialized flow direction in a network occurs
in edges that are isolated from the sources and sinks in the network. This can
happen if the edge is not topologically connected through the network to the
sources and sinks or if the edge is only connected to sources and sinks through
disabled features.

Earth Surface
DEM
A DEM is a 'bare earth' elevation model, unmodified from its original data source (such as lidar, IfSAR, or an autocorrelated photogrammetric surface), which is supposedly free of vegetation, buildings, and other 'non-ground' objects.
Digital Elevation Models (DEMs) are a type of raster GIS layer.
In a DEM, each cell of the raster layer has a value corresponding to its elevation (z-values at regularly spaced intervals). The intervals between the grid points are always referenced to some geographic coordinate system.
The details of the peaks and valleys in the terrain will be better modeled with
small grid spacing than when the grid intervals are very large.


DSM
A DSM is an elevation model that includes the tops of buildings, trees, power
lines, and any other objects. Commonly this is seen as a canopy model and only
'sees' ground where there is nothing else overtop of it.
DTM
A DTM is effectively a DEM that has been augmented by elements such as
breaklines and observations other than the original data to correct for artifacts
produced by using only the original data. This is often done by using
photogrammetrically derived linework introduced into a DEM surface.
It includes not only heights and elevations but other geographical elements and
natural features such as rivers, ridge lines, etc.

TIN
Triangular irregular networks (TIN) are a form of vector-based digital geographic
data and are constructed by triangulating a set of vertices (points).
The vertices are connected with a series of edges to form a network of triangles. There are different methods of interpolation to form these triangles, such as Delaunay triangulation or distance ordering. ArcGIS supports the Delaunay triangulation method.
Contour
An imaginary line that connects points of equal value. A contour map typically shows multiple contours, such as elevation or temperature contours. The contour interval of a contour map is the difference in elevation between successive contour lines.
Slope
Slope is calculated by finding the ratio of the "vertical change" to the "horizontal
change" between (any) two distinct points on a line. Sometimes the ratio is
expressed as a quotient ("rise over run"), giving the same number for every two
distinct points on the same line. The concept of measuring slope from a
topographic map is a familiar one for most professionals in the landscape
planning/surveying professions.
Slope is a measurement of how steep the ground surface is: the steeper the surface, the greater the slope. Slope is measured by calculating the tangent of the surface, i.e., the vertical change in elevation divided by the horizontal distance. If we view the surface in cross-section, we can visualize a right-angled triangle.
Slope is normally expressed in planning as a percent slope which is the tangent
(slope) multiplied by 100.
Percent Slope = Height / Base * 100
Another form of expressing slope is in degrees. To calculate degrees, one takes
the Arc Tangent of the slope.
Degrees Slope = ArcTangent(Height / Base)
Aspect
It can be thought of as the slope direction. The value of each cell in an aspect
dataset indicates the direction the cell's slope faces.
It is measured clockwise in degrees from 0 (due north) to 360 (again due north), coming full circle. Flat areas having no downslope direction are given a value of -1 (see the sketch below).
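A sketch of slope and aspect from a DEM grid using the formulas above (plain NumPy with finite differences; real tools such as ArcGIS use a specific 3x3 neighborhood method, and the aspect sign convention below assumes rows run from north to south):

```python
import numpy as np

dem = np.array([[100., 101., 103.],
                [ 99., 100., 102.],
                [ 97.,  98., 100.]])
cell = 30.0                              # cell size in meters

dz_dy, dz_dx = np.gradient(dem, cell)    # vertical change per horizontal unit
rise_over_run = np.hypot(dz_dx, dz_dy)   # the tangent of the slope

percent_slope = rise_over_run * 100                  # Height / Base * 100
degree_slope = np.degrees(np.arctan(rise_over_run))  # ArcTangent(Height / Base)

# Aspect: downslope direction, clockwise from north (0-360 degrees).
aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0

print(percent_slope.round(1))
print(degree_slope.round(1))
```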
Roughness
Terrain roughness (surface roughness, ruggedness, terrain rugosity, micro
topography, micro relief) is defined as the variability or irregularity in elevation
(highs and lows) within a sampled terrain unit.
The vector ruggedness measure (VRM) captures variability in slope and aspect in a single ratio.


The terrain ruggedness index (TRI) is a measurement developed by Riley et al. (1999) to express the amount of elevation difference between adjacent cells of a digital elevation grid. It calculates the total change in elevation between a central cell and its eight surrounding neighbors, or a nominated number of adjacent cells.
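A sketch of the index as described above, totaling the absolute elevation change between each interior cell and its eight neighbors (Riley et al.'s original formulation takes the square root of the summed squared differences; the brute-force loop here is for clarity):

```python
import numpy as np

def tri(dem):
    """Total elevation change between each interior cell and its 8 neighbors."""
    out = np.full(dem.shape, np.nan)          # edge cells left undefined
    for r in range(1, dem.shape[0] - 1):
        for c in range(1, dem.shape[1] - 1):
            block = dem[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = np.abs(block - dem[r, c]).sum()
    return out

dem = np.array([[100., 101., 103., 104.],
                [ 99., 100., 102., 103.],
                [ 97.,  98., 100., 101.]])
print(tri(dem))
```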
Hillshade
Hill shading is a technique used to visualize terrain as shaded relief, illuminating
it with a hypothetical light source. The illumination value for each raster cell is
determined by its orientation to the light source, which is based on slope and
aspect. Hill Shading estimates surface reflectance from the sun at any altitude and
any azimuth. The reflectance is calculated in a range from 0 to 100.
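A sketch of one widely used hillshade formula, illumination = cos(zenith)·cos(slope) + sin(zenith)·sin(slope)·cos(azimuth − aspect), here scaled to 0–255 (the default sun position of 315° azimuth and 45° altitude is a common convention, not from the lecture):

```python
import numpy as np

def hillshade(dem, cell, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief from sun azimuth/altitude (one common formulation)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass -> math angle
    alt = np.radians(altitude_deg)                # note sin(alt) = cos(zenith)
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0) * 255.0

dem = 100.0 + np.arange(16, dtype=float).reshape(4, 4)   # toy sloping terrain
print(hillshade(dem, cell=30.0).round(0))
```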
Interpolation
Interpolation is the process of using points with known values or sample points
to estimate values at other unknown points. It can be used to predict unknown
values for any geographic point data, such as elevation, rainfall, chemical
concentrations, noise levels, and so on.
The available interpolation methods are listed below-
1. Inverse Distance Weighted (IDW)
2. Natural Neighbor Inverse Weighted (NNIDW)
3. Spline
4. Kriging
5. Trend

1. Inverse Distance Weighted (IDW)


Inverse Distance Weighted interpolation is a deterministic spatial interpolation
approach to estimate an unknown value at a location using some known values
with corresponding weighted values.
It assumes that each input point has a local influence that diminishes with
distance. It weights the points closer to the processing cell greater than those
further away. A specified number of points, or all points within a specified radius
can be used to determine the output value of each location. A radius is generated
around each grid node from which data points are selected to be used in the
calculation.

26 | Z a m a n
Lecture – 4

It is a moving weighted-average interpolator. The basic IDW interpolation formula is

x* = Σ (w_i × x_i) / Σ w_i     (1)

where x* is the unknown value at the location to be determined, w_i is the weight, and x_i is a known point value. The weight is the inverse distance from the estimation location to each known point used in the calculation:

w_i = 1 / d_i^P     (2)

In equation 2, P stands for power. There is no particular rule for defining the P value, but the equation shows that a higher P value gives distant points a lower weight. In the author's experience, the optimum P value is in the range 1 to 2, though of course this does not apply to every case.
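The two equations translate directly into code; a minimal sketch (plain NumPy, with invented sample values):

```python
import numpy as np

def idw(known_xy, known_z, query_xy, power=2.0):
    """IDW estimate (equations 1 and 2): weighted average of known values,
    with weights w_i = 1 / d_i**P."""
    d = np.hypot(known_xy[:, 0] - query_xy[0], known_xy[:, 1] - query_xy[1])
    if np.any(d == 0):                  # query coincides with a sample point
        return known_z[d == 0][0]
    w = 1.0 / d ** power
    return np.sum(w * known_z) / np.sum(w)

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
z = np.array([10.0, 20.0, 14.0])
print(round(idw(pts, z, (30.0, 30.0)), 2))   # pulled toward the nearest sample
```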

Advantages of IDW
▪ Can estimate extreme changes in terrain such as: Cliffs, Fault Lines.
▪ Dense evenly space points are well interpolated (flat areas with cliffs).
▪ Can increase or decrease amount of sample points to influence cell values.

Disadvantages of IDW
▪ Cannot estimate above maximum or below minimum values.
▪ Not very good for peaks or mountainous areas.

NNIDW
Like IDW, this method is a weighted-average interpolation method. Instead of finding an interpolated point's value using all of the input points weighted by their distance, natural neighbor interpolation creates a Delaunay triangulation of the input points, selects the closest nodes, and then weights their values by proportionate area.

Advantages of NNIDW
Handles large numbers of sample points efficiently.

Spline
Spline estimates values using a mathematical function that minimizes overall
surface curvature, resulting in a smooth surface that passes exactly through the
input points.

27 | Z a m a n
Lecture – 4

Advantages of Spline
▪ Useful for estimating above maximum and below minimum points.
▪ Creates a smooth surface effect.

Disadvantages of Spline
▪ Cliffs and fault lines are not well represented due to the smoothing effect.
When the sample points are close together and have extreme differences in value,
Spline interpolation doesn’t work as well. This is because Spline uses slope
calculations (change over distance) to figure out the shape of the flexible rubber
sheet.

Kriging
Kriging is a geostatistical interpolation technique that considers both the distance
and the degree of variation between known data points when estimating values in
unknown areas.
Kriging assumes that the distance or direction between sample points reflects a
spatial correlation that can be used to explain variation in the surface.

Advantages of Kriging
▪ Directional influences can be accounted for: Soil Erosion, Siltation Flow,
Lava Flow and Winds.
▪ Exceeds the minimum and maximum point values.

Disadvantages of Kriging
▪ Does not pass through any of the point values, and interpolated values may be higher or lower than the real values.

Trend
Trend is a statistical method that finds the surface that fits the sample points using
a least-square regression fit. It fits one polynomial equation to the entire surface.
The surface is constructed so that for every input point, the total of the differences
between the actual values and the estimated values (i.e., the variance) will be as
small as possible.

Disadvantages of Trend
It is an inexact interpolator and the interpolated surface rarely passes through the
sample points.

Lecture – 5

Introduction to Remote Sensing


Remote Sensing is the science (and to some extent, art) of acquiring information about the Earth's surface without actually being in contact with it.
“This is done by sensing and recording reflected or emitted energy and
processing, analyzing and applying that information.”
Source: Canada Centre for Remote Sensing, CCRS Tutorial

“Remote Sensing: the science and art of obtaining useful information about an
object, area, or phenomenon through the analysis of data acquired by a device
that is not in contact with the object under investigation.”
Source: T.M. Lillesand and R.W. Kiefer, Remote Sensing and Image Interpretation, Wiley book

Earth observation (EO) by Remote Sensing is the interpretation and


understanding of measurements.
Source: P.M. Mather, Computer processing of Remotely-sensed images, Wiley book

“Aircraft and satellites are the common platform from which Remote Sensing
observations are made.”
Source: F.F. Sabins, Remote Sensing principle and interpretation, Freeman book

Elements of Remote Sensing


Energy Source (A)
• The first requirement for remote
sensing is to have an energy
source which illuminates or
provides electromagnetic
energy to the target of interest.
Radiation and the Atmosphere (B)
• As the energy travels from its
source to the target, it will
interact with the atmosphere it
passes through.


• This interaction may take place a second time as the energy travels from
the target to the sensor.
Interaction with the Target (C)
• The energy (electromagnetic radiation) interacts with the target depending
on the properties of both the target and the radiation.
Sensor (D)
• After the energy has been scattered by, or emitted from the target, a sensor
is required to collect and record the electromagnetic radiation.
Record, Transmission and Processing (E)
• The energy recorded by the sensor has to be transmitted, often in electronic
form, to a receiving and processing station where the data are processed
into an image.
Interpretation and Analysis (F)
• The processed image is interpreted, visually and/or digitally (image
analysis), to extract information about the target which was illuminated.
Application (G)
• The extracted information assists to solve a particular problem.
A. Energy Sources
The sun is the most obvious source of electromagnetic energy measured in remote
sensing.
Sometimes man-made sources of electromagnetic energy can be used.
B. Radiation and Interaction with the Atmosphere
❖ Radiation Principals
The sun radiates energy equally in all directions and the Earth intercepts and
receives part of this energy. The power flux reaching the top of the Earth's
atmosphere is about 1,400 W/m².
Electromagnetic energy from the sun comes to Earth in the form of radiation. This
energy is pure energy, not requiring any matter (or medium) for its existence or
movement. This energy can therefore travel through space (which is a vacuum).
This energy (radiation) is the means by which information is transmitted from an
object.
EMR travels at a velocity c (the speed of light) equal to 3 × 10⁸ m/s in a sinusoidal, harmonic fashion. EMR consists of an electrical (E) field and a magnetic (M) field. Wavelength can be defined as the distance between two successive peaks (or troughs) in waves of energy. Frequency is measured by counting the number of peaks that pass a given point every second.
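Wavelength (λ) and frequency (ν) are linked through the speed of light, c = λ × ν. As a worked example (standard physics, not from the lecture): green light with λ = 0.55 μm has ν = (3 × 10⁸ m/s) / (0.55 × 10⁻⁶ m) ≈ 5.5 × 10¹⁴ Hz.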
❖ Electromagnetic Spectrum
Represents the continuum of electromagnetic energy
• From extremely short wavelengths (cosmic and gamma rays)
• To extremely long wavelengths (radio and television waves)
In each of these regions, adjacent wavelengths "behave similarly" or are generated by similar mechanisms. However, the divisions between UV and visible, or between microwave and thermal infrared, are not hard; the regions blur into each other. Three regions are of particular importance for RS:
i. Visible ii. Infrared iii. Microwave
i. The Visible Spectrum (Visible light)
Visible light is so called because it is detected by the eye, whereas other forms of EMR are invisible to the unaided eye. The spectral range of visible light is 0.4–0.7 μm. Wavebands within it are perceived as particular colors (approximately: blue 0.4–0.5 μm, green 0.5–0.6 μm, red 0.6–0.7 μm).
ii. The Infrared Spectrum


Infrared = beyond the red
Covers the wavelength range from approximately 0.7 μm to 100 μm


iii. The Microwave Spectrum


Ranges from sub-millimeter wavelengths to about 1–3 meters, further subdivided into bands: K, X, C, S, L, P. Some microwave sensors (passive sensors) can detect the small amounts of radiation at these wavelengths that are emitted by the Earth, but the important RS microwave sensors are all active systems; i.e., they generate, transmit, and record microwave radiation.
Interaction of Electromagnetic Radiation with the Atmosphere
As the energy travels from a source (the sun) to the target (the Earth’s surface) it
interacts on its travel with the Earth’s atmosphere. It interacts with particles and
gas molecules in the atmosphere
The two mechanisms of interaction with the atmosphere are scattering and absorption. Scattering occurs when particles or large gas molecules present in the atmosphere cause the EMR to be redirected from its original path.
How much scattering takes place
depends on
• The wavelength of the radiation
• The abundance of the particles or gases
• The distance the radiation travels through the atmosphere

Types of scattering
Rayleigh Scattering
▪ Occurs when particles are very small (≤ 0.1 μm)
▪ Particles: small specks of dust or nitrogen and oxygen molecules
▪ Shorter wavelengths of energy (e.g. blue) are much more scattered than
longer wavelengths (e.g. red)
▪ Blue sky syndrome
Mie Scattering
▪ Occurs when particles are about the same size as the wavelength of the radiation (roughly 0.4–0.7 μm)
▪ Particles: dust, pollen, smoke, water vapor, salt particles from oceanic
evaporation
▪ Mie scattering tends to affect longer wavelengths than those affected by
Rayleigh scattering
▪ Sunrise or Sunset

32 | Z a m a n
Lecture – 5

Non-Selective Scattering
▪ Occurs when the particles are much larger than the wavelength of the radiation (above 10 μm).
▪ Particles: water droplets, ice fragments, large dust particles. All wavelengths are scattered about equally (summing to white light), which causes fog and clouds to appear white to our eyes.
▪ It is a primary cause of haze.

C. Interaction with Target


❖ Interaction of electromagnetic radiation with Earth surface material
Electromagnetic energy that is not absorbed or scattered in the atmosphere can
reach and interact with the Earth's surface.
Three forms of interaction
• Absorption
• Transmission
• Reflection
The total incident energy will interact with the targets in one or more of these
three ways. Most interest in RS is in measuring radiation reflected from targets.
Reflectance is the ratio of the energy reflected by an object (its radiant emittance) to the energy incident on it (the irradiance).
• Specular reflectance: the angle of incidence and the angle of reflection are equal, and no scattering occurs at the surface.
• Diffuse reflectance: the incident energy is scattered roughly uniformly in all directions.


Target reflection
A basic assumption made in RS is that specific targets (soils, rocks, vegetation, water) have an individual and characteristic manner of interacting with incident radiation: their spectral response, or spectral signature. A spectral reflectance curve describes the spectral response of a target over a certain region, e.g. 0.4–2.5 μm.
Vegetation:
• Lower reflection in B, R
• Higher reflection in G
• High reflection in NIR
Water: low reflection (i.e., high absorption) in R and NIR, with perhaps slightly higher reflection in B and G. Generally, water looks dark when viewed from space, appearing somewhat more blue-green than red, though blue-sky reflection may dominate.
Which spectral bands can be used most effectively in RS?

D. Sensor or Satellite
A sensor is a device that measures and records electromagnetic energy. Sensors
can be divided into two groups.
Passive sensors depend on an external source of energy, usually the sun. The
most common passive sensor is the photographic camera.
Active sensors have their own source of energy, an example would be a radar
gun. These sensors send out a signal and measure the amount reflected back.
Active sensors are more controlled because they do not depend upon other
sources.


Passive Sensor vs. Active Sensor
• Energy source: passive sensors use reflected energy and require illumination of the Earth (daytime); active sensors have their own energy source for illuminating the target.
• Time of use: passive, only by day; active, day or night.
• Example: passive, camera; active, radar gun.
• Weather: passive, must be sunny; active, can work in cloudy weather.

Swath and Nadir


As a satellite revolves around the earth, the sensor sees a certain portion of the
earth’s surface. The area imaged is referred to as the Swath.
The surface directly below the satellite is called the Nadir point.

Orbit
The path followed by a satellite is referred to as its orbit.
Types of ORBIT
I. Geostationary Orbit
▪ Placed at a high altitude approximately 35,800 kilometers (22,300 miles)
▪ Orbital period of the satellite matches rotational speed of the earth
▪ View the same portion of the earth’s surface at all times
▪ Very high temporal resolution.
▪ Weather and communication satellites commonly have these types of
orbits.
▪ A single geostationary satellite is on a line of sight with about 40 percent
of the earth's surface.
▪ Three such satellites, each separated by 120 degrees of longitude, can
provide coverage of the entire planet, with the exception of small circular
regions centered at the north and south geographic poles.
▪ A signal requires appreciable time to make a round trip from the satellite to the surface and back.

II. Polar Orbit


▪ Satellites are designed to follow a north-south orbit which, in conjunction with the earth's rotation (west to east), allows them to cover most of the earth's surface over a period of time. These are near-polar orbits.
▪ The earth rotates under the satellite, so each time the satellite makes a pass, it sees a new area.


▪ Global coverage
▪ A polar-orbiting satellite observes the same area on earth once per day or less.
▪ Placed at low altitude (700 to 800 kilometers).
▪ Low temporal resolution
▪ Application: Air quality, water quality, land cover, vegetation studies

III. Sun-synchronous orbit


▪ Many of these satellite orbits are sun-synchronous, such that they cover each area of the world at a constant local time of day.
▪ They are placed at an altitude of 700 km to 800 km.
▪ These satellites need a constant amount of sunlight and work best in bright sunlight.

Platforms
1. Space-borne: 100–36,000 km (e.g., space shuttles/stations, rockets)
   a. Space shuttle: 250–300 km
   b. Space station: 300–400 km
   c. Low-level satellite: 700–1,500 km
   d. High-level satellite: about 36,000 km
2. Air-borne: flying height up to 50 km (e.g., aircraft, helicopters, balloons)
3. Ground-based: 10–15 km (e.g., ground vehicles, towers)

E. Record, Transmission and Processing


Space scanning principles
While the platform moves over the ground, the image is built up sequentially, line by line. This results in a continuous strip image of the imaged scene.
Optical-mechanical scanning is performed with scanning mirrors. Satellites often have oscillating mirrors because they have a narrow field of view (FOV), e.g. 10–15°; the angular FOV is the portion of the mirror sweep.

IFOV
The instantaneous field of view (IFOV) is determined by the physical size of the sensitive element of the detector and the effective focal length of the scanner optics (given in rad or mrad). Conversion between rad and deg: 2π rad ≅ 360°.
Example: for a detector size of 0.1 mm (quite big) and a focal length of 100 mm,
IFOV = 0.1 / 100 = 1 mrad (≅ 0.057°).
Landsat 1–3 had a nominal altitude of 913 km, but the actual altitude varied between 880 km and 940 km. The nominal IFOV of Landsat MSS is generally specified as 79 m; the actual resolution (IFOV) is between 76 and 81 m.
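The numbers above tie together through the flat-earth, nadir-looking relation D = H × β (ground resolution cell = altitude × IFOV in radians); the sketch below back-derives the implied IFOV from the 79 m / 913 km figures and reproduces the 76–81 m range:

```python
def ground_cell(altitude_m, ifov_rad):
    """Ground resolution cell D = H * beta (nadir-looking approximation)."""
    return altitude_m * ifov_rad

beta = 79.0 / 913_000.0                 # ~0.0865 mrad, implied by the example
for h_km in (880, 913, 940):
    print(h_km, "km ->", round(ground_cell(h_km * 1000.0, beta), 1), "m")
```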
F. Interpretation and Analysis
Image Interpretation
Analysis of remote sensing imagery involves the identification of various targets in an image. Those targets can be environmental or artificial features, and they consist of points, lines, or areas. Targets may be identified or defined in terms of the way they reflect or emit radiation.
Process of Image Interpretation
Detection, Identification, Delineation, Enumeration, Mensuration
Elements to Interpret Image
Tone
Tone refers to the relative brightness or color of objects in an image. Tonal variation is due to the reflection, emittance, transmission, or absorption characteristics of an object. In general, smooth surfaces tend to have high reflectance, while rougher surfaces reflect less.
Shape
Shape refers to the general form, structure, or outline of an individual object. Shape can be a very distinctive clue for interpretation. Straight-edged shapes typically represent urban or agricultural (field) targets, while natural features such as forest edges are generally more irregular in shape.
Size
The size of an object is a function of scale. For example, an interpreter distinguishing zones of land use can use building size as a clue: large buildings such as factories or warehouses suggest commercial property, whereas small buildings indicate residential use.
Texture
Texture refers to the arrangement and frequency of tonal variation in particular areas of an image. Smooth textures are most often the result of uniform, even surfaces such as fields, asphalt, or grasslands. Rough textures represent irregular structures such as a forest canopy.
Pattern
Pattern refers to the orderly repetition of similar tones and textures, which produces a distinctive and ultimately recognizable arrangement. Orchards with evenly spaced trees and urban streets with regularly spaced houses are good examples of pattern.
Shadow
Shadows may indicate the relative height of a target. Shadows can also reduce or eliminate interpretation in their area of influence, since targets within shadows are much less (or not at all) discernible from their surroundings.
Association
Association considers the relationship between other recognizable objects or features in proximity to the target of interest. Commercial properties may be associated with proximity to major transportation routes, while residential areas may be associated with schools, playgrounds, and sports fields.
Advantages of visual interpretation
• Simple method
• Inexpensive equipment
• Uses brightness and spatial content of the image
• Subjective and Qualitative
• Concrete
Advantages of digital processing
• Cost-effective for large geographic areas
• Cost-effective for repetitive interpretations


• Consistent results
• Complex interpretation algorithms possible
• Speed may be an advantage
• Explore alternatives
• Compatible with other digital data
Disadvantages of digital processing
• Expensive for small areas
• Expensive for one-time interpretations
• Start-up costs may be high
• Requires elaborate, single-purpose equipment
• Accuracy may be difficult to evaluate
• Data may be expensive, or not available
• Preprocessing may be required
• May require large support staff
G. Application
Applications of RS for water and sanitation:
• Suitable location analysis
• Land use classification
• Water resource monitoring and management
• Monitoring of sanitation systems
• Temporal analysis
• Surveillance of vector-borne/water-borne infectious diseases
• Epidemiology monitoring
Application in Disaster Management
• Hazard identification
• Hazard zoning
• Vulnerability mapping
• Risk analysis and management
• Identification of suitable location for rehabilitation/shelters
• Disaster damage estimation
Advantages of Remote Sensing
• A large or wide area can be covered by a single image/photo; different satellites with different sensor systems cover different extents of area.


• Data for any area can be obtained repeatedly at regular intervals of time, enabling monitoring of changes.
• Inaccessible or difficult terrain, such as mountains and thick forests, can be imaged.
• Since data are obtained in digital form and in different channels, computer processing and analysis become possible.
• Economical in cost and time.

Lecture – 6

Four ways to express image resolution


1. Spatial Resolution
2. Temporal Resolution
3. Spectral Resolution
4. Radiometric Resolution
1. Spatial Resolution
A digital image consists of an array of pixels. Each pixel contains information about a small area on the land surface, which is treated as a single object. Spatial resolution is expressed as the size of the pixel on the ground, in meters.
Spatial resolution describes the ability of a sensor to identify the smallest detail of a pattern on an image, i.e. how much detail is visible. It is often expressed as the minimum distance on the ground at which two distinguishable patterns or objects can still be separated from each other.
Example: a spatial resolution of 25 m means that two objects can be clearly distinguished only if they are at least 25 m apart on the ground.
A measure of size of pixel is given by the Instantaneous Field of View (IFOV).
The IFOV is the angular cone of visibility of the sensor, or the area on the Earth’s
surface that is seen at one particular moment of time. IFOV is dependent on the
altitude of the sensor above the ground level and the viewing angle of the sensor.
The size of the area viewed on the ground can be obtained by multiplying the
IFOV (in radians) by the distance from the ground to the sensor. This area on the
ground is called the ground resolution or ground resolution cell. It is also referred
as the spatial resolution of the remote sensing system. Because no satellite has a
perfectly stable orbit its height above the Earth will vary. This means the actual
IFOV (area on the ground) might differ from the nominal IFOV.
IFOV: instantaneous FOV is determined by the physical size of the sensitive
element of the detector and the effective focal length of the scanner optics (given
in rad or mrad).
Conversion between rad and deg: 2π rad ≅ 360°
Example: detector size 0.1 mm (quite big) and focal length 100 mm
IFOV = 0.1/100 = 1 mrad (≅ 0.057°)


2. Temporal Resolution
Temporal resolution is a measure of the repeat cycle or frequency with which a sensor revisits the same part of the Earth's surface. It varies from satellite to satellite. The
frequency will vary from several times per day, for a typical weather satellite, to
8 – 20 times a year for a moderate ground resolution satellite, such as Landsat
TM. The frequency characteristics will be determined by the design of the
satellite sensor and its orbit pattern.
3. Spectral Resolution
Most of the digital images collected by satellite-borne sensors are multi-band or
multi-spectral. Individual images are separately recorded in discrete spectral
bands. Spectral resolution refers to the width of these spectral bands. This determines the degree to which individual targets (vegetation species, rock types, etc.) can be discriminated on the multi-spectral image.
Example: Different rock types might only be separable if the recording device is
capable of collecting data in a narrow wave band. A wide-band instrument would
simply average out the differences.
Important aspects of spectral resolution are:
▪ The number of spectral bands
▪ Their width
▪ Their position in the spectrum
Generally, surface features can be better distinguished with multiple narrow bands than with a single wide band.
Panchromatic band: Panchromatic is essentially a black-and-white band. It is one single band, and typically it has a wide bandwidth. These data are often available at the highest spatial resolution.
Multispectral band: Multispectral data are captured over a series of narrower bands. There is a limited number of bands, generally under 10; higher-resolution sensors often have only four bands (R, G, B, NIR).
Hyperspectral band: These systems cover a similar wavelength range to
multispectral systems, but in much narrower bands. This dramatically increases
the number of bands (and thus precision) available for image classification
(typically tens and even hundreds of very narrow bands).
Moreover, hyperspectral signature libraries have been created in lab conditions
and contain hundreds of signatures for different types of landcovers, including


many minerals and other earth materials. Thus, it should be possible to match
signatures to surface materials with great precision.
However, environmental conditions and natural variations in materials (which
make them different from standard library materials) make this difficult.
In addition, classification procedures have not been developed for hyperspectral
data to the degree they have been for multispectral imagery. As a consequence,
multispectral imagery still represents the major tool of remote sensing today.
Example: AVIRIS and MODIS
4. Radiometric Resolution
A digital image is made up of square or rectangular areas called pixels. Each pixel
has an associated pixel value known as digital number (DN) or brightness value
or gray level which depends on the amount of reflected energy from the ground.
An object reflecting more energy records a higher digital number for itself on the
digital image and vice versa.
▪ Radiometric resolution describes how finely differences in brightness in an image can be perceived.
▪ It is expressed through the grey value, the digital number (DN) assigned to each pixel.
Image data are generally displayed in a range of grey tones, with black representing a digital number of 0 and white representing the maximum value.
▪ The maximum number of brightness levels available depends on the number of bits used to represent the recorded energy.
▪ Thus, if a sensor uses 8 bits to record the data, there are 2⁸ = 256 digital values available, ranging from 0 to 255.
▪ If only 4 bits were used, only 2⁴ = 16 values ranging from 0 to 15 would be available, and the radiometric resolution would be much lower.

Introduction to GPS
The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information. GPS is made up of a network of 24 satellites placed into orbit by the U.S. Department of Defense. GPS satellites are powered by solar energy. They have backup batteries onboard to
satellites are powered by solar energy. They have backup batteries onboard to
keep them running in the event of a solar eclipse, when there's no solar power.
Small rocket boosters on each satellite keep them flying in the correct path.


GPS was originally intended for military applications, but in the 1980's, the
government made the system available for civilian use. GPS works in any weather
conditions, anywhere in the world, 24 hours a day, 365 days a year.
These satellites travel at speeds of roughly 7,000 miles an hour. Each satellite weighs about 2,000 pounds and is built to last about ten years. The 24 satellites that make up the GPS space segment orbit the earth about 12,000 miles above us. The orbital period of each satellite is about 12 hours.
Segments of GPS
GPS consists of three segments:
1. The space segment,
2. The control segment,
3. The user segments
1. Space segment
The space segment consists of the 24-satellite constellation. Each GPS satellite
transmits a signal, which has a number of components: two sine waves (also
known as carrier frequencies), two digital codes, and a navigation message. The
carriers and the codes are used mainly to determine the distance from the user’s
receiver to the GPS satellites. The navigation message contains, along with other information, the coordinates (the location) of the satellites as a function of time.
The transmitted signals are controlled by highly accurate atomic clocks onboard
the satellites.
2. Control segment
The control segment consists of a worldwide network of tracking stations, with a
master control station (MCS) located in the United States at Colorado Springs,
Colorado. The primary task of the operational control segment is tracking the
GPS satellites in order to determine and predict satellite locations, system
integrity, behavior of the satellite atomic clocks, atmospheric data, the satellite
almanac, and other considerations.
3. User segment:
The user segment includes all military and civilian users. With a GPS receiver
connected to a GPS antenna, a user can receive the GPS signals, which can be
used to determine his or her position anywhere in the world. GPS is currently
available to all users worldwide at no direct charge.


Concept of GPS
The idea behind GPS is rather simple: if the distances from a point on the Earth (a GPS receiver) to three GPS satellites are known, along with the satellite locations, then the location of the point (or receiver) can be determined by simply applying the well-known concept of resection.
To measure distance via the speed of light we need a very accurate clock: a clock error of 1/100 s corresponds to a distance error of 3,000 km. GPS satellites therefore carry very accurate atomic clocks.
Our receivers do not have atomic clocks, so how can we measure time with the necessary accuracy?
Satellite Navigation systems are deliberately constructed in such a way that from
any point on Earth, at least 4 satellites are “visible”. Thus, despite an inaccuracy
on the part of the receiver clock and resulting time errors, a position can be
calculated to within an accuracy of approx. 5 – 10m.

Trilateration method
One key aspect of GPS is the use of an atomic clock, which ensures accurate
timekeeping. It is crucial for your mobile phone to have a precise time
measurement in order to achieve accurate GPS calculations. Albert Einstein's
theory of relativity also plays a significant role in GPS technology, providing a
real-life application for this scientific concept.


Imagine your friend wants to know your location, and you have a mobile phone
equipped with a GPS receiver. GPS utilizes a mathematical technique called
trilateration to determine someone's position. Let's first understand trilateration
in a two-dimensional context. With the help of two satellites and their distance
measurements, your position can be narrowed down to two possible points of
intersection.
To further refine your location, a third satellite is required. The intersection of the
circles formed by the three satellites determines your exact position in a two-
dimensional space. In a three-dimensional setting, three satellites are needed to
narrow down your location to two possible points. By introducing the Earth's
surface as a fourth reference point, the correct position can be determined based
on the intersection of a sphere and a circle.
Now let's explore how the distance between you and the satellite is measured.
Each satellite is equipped with an atomic clock that sends intermittent radio
signals to Earth. Your receiver, such as a smartphone, receives these signals and
calculates the time difference between the sent and received times. Multiplying
this time difference by the speed of light enables the determination of the distance
between you and the satellites.
However, the accuracy of your mobile device's clock is crucial for precise GPS
calculations. Crystal clocks used in smartphones and laptops are not as accurate
as atomic clocks, leading to potential errors. To overcome this, the time offset
between your device and the satellites is considered as an additional unknown.
By incorporating the measurements from four satellites, including the time offset,
your location can be accurately determined without the need for an atomic clock
in your mobile device.
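A minimal numerical sketch of this idea follows, assuming four hypothetical satellite positions (Earth-centred coordinates in metres) and a simulated receiver clock bias; it solves the four pseudorange equations for (x, y, z, clock bias) by Gauss–Newton iteration and illustrates the principle rather than implementing a real GPS solver:

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions in an Earth-centred frame (metres).
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])

# Simulate pseudoranges for a receiver whose clock is 0.1 ms fast:
# measured range = true range + c * (clock error).
true_pos = np.array([-2200e3, 4500e3, 3900e3])
bias_m = c * 1e-4
rho = np.linalg.norm(sats - true_pos, axis=1) + bias_m

# Gauss-Newton for the four unknowns (x, y, z, clock bias in metres).
x = np.zeros(4)  # start at the Earth's centre with zero bias
for _ in range(10):
    d = np.linalg.norm(sats - x[:3], axis=1)
    residual = rho - (d + x[3])
    # Jacobian rows: unit vector from satellite towards receiver,
    # plus a 1 for the clock-bias column.
    J = np.hstack([-(sats - x[:3]) / d[:, None], np.ones((4, 1))])
    x += np.linalg.solve(J, residual)

print("position error (m):", np.linalg.norm(x[:3] - true_pos))
print("recovered clock bias (m):", x[3], "vs true", bias_m)
```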
Map vs Photograph
Map | Photograph
Orthogonal projection | Central perspective projection
Uniform scale | Variable scale
Terrain relief without distortion | Relief displacement in the image
All objects are visible | Not all objects are visible
An abstract representation | A real representation
Representation geometrically correct | Representation geometrically not correct


Ortho Images
Ortho-images combine the advantages of both air photos and line maps:
• the pictorial qualities of air photos
• the planimetric correctness of the line map
Map data can be overlain onto the image. Orthophoto maps (ortho image maps) can be prepared comparatively faster than normal topographic maps. They are useful for mapping inaccessible areas, and frequent map updates are possible with ortho-photos (ortho-images).

RADAR
Radar is an object-detection system that uses radio waves or microwaves to
determine the range of an object. A radar system is typically enhanced to provide
direction, altitude, speed and object size. It is an active sensor. It has day, night
and all-weather observation capability. It is a valuable and affordable collision
avoidance tool.
Basic RADAR System
• Transmitter: generates successive short bursts or pulses of microwave energy at regular intervals
• Antenna: receives portions of the transmitted energy reflected or
backscattered from objects (echo)
Imaging Principals of Synthetic Aperture RADAR (SAR)
Imaging RADAR platforms are designed as side-looking systems. By measuring the time delay between the transmission of a pulse and the reception of the backscattered signal, the distance between objects and the RADAR, and thus their location, can be determined. As the platform moves forward along the flight track, constantly transmitting, recording, and processing RADAR signals, a two-dimensional image of the surface is generated.
Application of RADAR
• To identify the epicenter of an earthquake
• To identify volcanic movement


• To identify glacier flow


• For velocity measurement of moving objects
• For ship detection
• Time series analysis
• Urban Analysis
LiDAR – Light Detection and Ranging
LiDAR is fundamentally an airborne distance-measurement technology. LiDAR uses a laser to illuminate a target; it is an active remote sensing technique, and it is a sampling tool.
Depending on the laser wavelength used, LiDAR can develop 3D point clouds of a wide range of objects, such as rocks, non-metallic objects, rain, chemical compounds, aerosols, clouds, and even single molecules.
Data Collection Process
• Optical sensor transmits laser beams toward a target while moving through
specific survey routes.
• The reflection from the target is detected and analyzed by receivers.
• These receivers record the precise time from when the laser pulse left the system to when it returned, which gives the range (distance) between the sensor and the target (see the sketch after this list).
• Combined with the positional information (GPS and INS), these distance
measurements are transformed to measurements of actual three-
dimensional points of the reflective target in object space.
• The point data is post-processed into highly accurate georeferenced x,y,z
coordinates.
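A minimal sketch of the time-to-range step described in the list above (the round-trip time is a hypothetical value):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_time_s: float) -> float:
    """Range = (speed of light * round-trip time) / 2, because the pulse
    travels from the sensor to the target and back again."""
    return C * round_trip_time_s / 2.0

# A pulse returning after about 6.7 microseconds came from roughly 1 km away.
print(f"{lidar_range_m(6.7e-6):.1f} m")  # ~1004.3 m
```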
Laser Return
The first returned laser pulse is the most significant return and will be associated
with the highest feature in the landscape like a treetop or the top of a building.
Multiple returns are capable of detecting the elevations of several objects. The
intermediate returns, in general, are used for vegetation structure. The last return
is used for bare-earth terrain models. There are exceptions, however: if a pulse hits a thick branch on its way down and never actually reaches the ground, the last return comes not from the ground but from the branch that reflected the entire laser pulse.
Application
• Measuring agricultural productivity,
• Distinguishing faint archeological remains


• Measuring tree canopy heights


• Determining forest biomass values
• Advancing the science of geomorphology
• Measuring volcano uplift and glacier decline
• Measuring snow pack
• Providing data for topographic maps
• Developing land use plans and site analysis

Lecture – 7

Digital Image Interpretation


Change Detection
Remote sensing data are primary sources for analyzing environmental processes at local and global scales, and in recent decades they have been widely used for change detection. Remote sensing data (such as Landsat, Sentinel, and SPOT imagery) are very useful for the visualization, classification, and analysis of an area.
These data can be categorized by their resolution, electromagnetic spectrum, energy source, imaging media, and number of bands. The higher the resolution of the satellite data (spatial, spectral, radiometric, and temporal), the higher the degree of accuracy that can be achieved during classification.
Generally, Landsat data are used for classification. Landsat imagery has several bands defined by wavelength (blue, green, red, infrared, thermal, panchromatic). The panchromatic band is used to increase (sharpen) the resolution of the data. Landsat 7 has a total of 8 bands, while Landsat 8 has 11 bands.
For computing the Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), and Normalized Difference Water Index (NDWI), only four bands are used (Green, Red, NIR, SWIR).
NDVI
Normalized Difference Vegetation Index (NDVI) quantifies vegetation by
measuring the difference between near-infrared (which vegetation strongly
reflects) and red light (which vegetation absorbs).
NDVI = (NIR − Red) / (NIR + Red)
The result of this formula is a value between −1 and +1. Low reflectance (low values) in the red channel combined with high reflectance in the NIR channel yields a high NDVI value, and vice versa.
For Landsat 7 data: NDVI = (Band 4 − Band 3) / (Band 4 + Band 3)
For Landsat 8 data: NDVI = (Band 5 − Band 4) / (Band 5 + Band 4)
The value ranges below give a rough interpretation; a computational sketch follows them.
NDVI ≈ −1 to 0 represents water bodies
NDVI ≈ −0.1 to 0.1 represents barren rock, sand, or snow
NDVI ≈ 0.2 to 0.5 represents shrubs and grasslands or senescing crops
NDVI ≈ 0.6 to 1.0 represents dense vegetation or tropical rainforest
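A minimal computational sketch of NDVI for Landsat 8 using numpy and rasterio (the band file names here are hypothetical placeholders):

```python
import numpy as np
import rasterio  # assumed available for reading GeoTIFF bands

# Landsat 8: band 4 = Red, band 5 = NIR (hypothetical file names).
with rasterio.open("scene_B4.TIF") as red_src, \
     rasterio.open("scene_B5.TIF") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)

# Apply the interpretation thresholds listed above.
water = ndvi < 0.0
dense_vegetation = ndvi >= 0.6
print("NDVI range:", float(ndvi.min()), "to", float(ndvi.max()))
```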
Application of NDVI
In agriculture, for example, farmers use NDVI for precision farming and to measure biomass, whereas in forestry, foresters use NDVI to quantify forest supply and leaf area index.
Furthermore, NASA states that NDVI is a good indicator of drought. When water
limits vegetation growth, it has a lower relative NDVI and density of vegetation.
Other commonly used vegetation indices
• Enhanced Vegetation Index (EVI),
• Perpendicular Vegetation Index (PVI),
• Ratio Vegetation Index (RVI)
NDWI
NDWI = (NIR − SWIR) / (NIR + SWIR)
For Landsat 7 data: NDWI = (Band 4 − Band 5) / (Band 4 + Band 5)
For Landsat 8 data: NDWI = (Band 5 − Band 6) / (Band 5 + Band 6)
However, the results of the above formula can be poor in quality, since pure water reflects neither NIR nor SWIR strongly. The formula was therefore modified by Xu (2005) to use the Green and SWIR bands:
MNDWI = (Green − SWIR) / (Green + SWIR)
For Landsat 7 data: MNDWI = (Band 2 − Band 5) / (Band 2 + Band 5)
For Landsat 8 data: MNDWI = (Band 3 − Band 6) / (Band 3 + Band 6)
NDBI
NDBI = (SWIR − NIR) / (SWIR + NIR)
For Landsat 7 data: NDBI = (Band 5 − Band 4) / (Band 5 + Band 4)
For Landsat 8 data: NDBI = (Band 6 − Band 5) / (Band 6 + Band 5)
BU = NDBI − NDVI
The most common indexes for analyzing built-up areas are listed below (a computational sketch follows the list):
• Normalized Difference Built-up Index (NDBI),


• Built-up Index (BU),


• Urban Index (UI),
• Index-based Built-up Index (IBI),
• Enhanced Built-up and Bareness Index (EBBI)
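The water and built-up indices follow the same normalized-difference pattern as NDVI; below is a minimal sketch with random placeholder arrays standing in for Landsat 8 reflectance bands:

```python
import numpy as np

# Placeholder arrays; in practice these would be Landsat 8 band 3 (Green),
# band 4 (Red), band 5 (NIR), and band 6 (SWIR1) reflectances.
rng = np.random.default_rng(0)
green, red, nir, swir = (rng.random((100, 100), dtype=np.float32)
                         for _ in range(4))

mndwi = (green - swir) / (green + swir)  # modified water index (Xu)
ndbi = (swir - nir) / (swir + nir)       # built-up index
ndvi = (nir - red) / (nir + red)
bu = ndbi - ndvi                         # BU = NDBI - NDVI, as above
```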

Digital Image Interpretation


Image classification
Categorizing all pixels in a digital image into one or several object classes. From
the categorized data, thematic maps of the land cover may be produced.
Types of Image Classification
1. Pixel based Classification
a. Supervised Image Classification
b. Unsupervised Image Classification
2. Object based Classification
1. Supervised Image Classification
Supervised classification refers to the process of measuring the characteristic features of the objects one wants to classify using training sets of known objects or object classes, and then using these to determine the class membership of all other pixels in an image.
Step 1: Select training samples
Identify homogeneous samples of interest (training areas) based on the analyst's knowledge of and familiarity with the geographical area. This knowledge may be derived from fieldwork, analysis of high-resolution aerial photos, maps, or other sources such as reports.
Step 2: Develop scatter plot
A scatter plot (scattergram) is one of the easiest ways to perceive the distribution of values measured in two bands. One band is plotted against the other for each pixel, and the vector (band 1, band 2) determines the position of that pixel in the two-dimensional Euclidean space.
In the scatterplot of the class means in bands 1 and 2, the data points for the non-vegetated land-cover classes generally lie on a straight line passing through the origin. This line is called the "soil line".
The position of a point in the feature space is directly related to the values of the two features. Points belonging to the same class tend to cluster, while points belonging to different classes tend to be separated; this is the underlying assumption of any classification scheme. Adding a third feature leads to a three-dimensional scatter diagram. The problem with N-dimensional feature spaces is that they cannot be visualized properly.
Step 3: Signature development
Given a scatterplot one can recognize:
• two distinct clusters
• the compactness of each cluster
• the distance in feature space (example: d12, d13)
• a linear decision boundary (the boundary between two clusters/classes)
Develop a signature based on statistical measures such as the mean, variance, covariance, or min–max values.
Step 4: Select classifier
Examine each pixel of the image and decide which of the signatures it resembles most, using a classifier such as K-means, maximum likelihood, or the parallelepiped (box) classifier.
Three types of classifier:
(1) Parallelepiped or box classifier
(2) K-means classification
(3) Maximum likelihood classification

1. Parallelepiped or box classifier


• Decision rule: all points within the min–max region (box) of the scatter plot belong to the class
• All other points remain unclassified
• In practice this is not always easy


2. K-means or centroid method


• Compute the mean of each training class.
• Compute the distance from each unknown pixel to the centre of each class.
• Nearest-centre decision: labelling criterion = shortest distance ⇒ class j
3. Maximum likelihood method
• Mean of each training class: Xi
• Covariance matrix for each class: Ci
• Determine the probability that a pixel belongs to each trained class
• Decision: the "most likely" class for each pixel gives the maximum likelihood result
• Example: the probability that P belongs to X1 is higher than the probability that P belongs to X2
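A minimal sketch of the nearest-mean (centroid) decision rule described above; the two-band class means and pixel values are hypothetical:

```python
import numpy as np

def nearest_mean_classify(pixels: np.ndarray, class_means: np.ndarray) -> np.ndarray:
    """Assign each pixel (a row of band values) to the class whose mean is
    closest in feature space (shortest Euclidean distance)."""
    # distances has shape (n_pixels, n_classes)
    distances = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return distances.argmin(axis=1)

# Hypothetical two-band means for classes 0=water, 1=vegetation, 2=built-up.
means = np.array([[20.0, 10.0], [40.0, 120.0], [90.0, 70.0]])
pixels = np.array([[22.0, 12.0], [85.0, 75.0], [45.0, 110.0]])
print(nearest_mean_classify(pixels, means))  # -> [0 2 1]
```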
Step 5: Apply classifier to image
2. Unsupervised Image Classification
Unsupervised classification is a clustering process which aims at the
determination of the number of distinct, naturally occurring groups and the
allocation of pixels to these groups (or classes). In this respect it can be considered
as a segmentation technique which aims at subdividing an image into meaningful
regions. The number of classes to be created is defined first, without any prior knowledge.
Points (pixels) in the same cluster are more similar (in some sense or another) to
each other than to those in other clusters.
“Similarity” within a cluster can be described:
• connectivity-based: by the distance needed to connect pixels
• centroid-based: by representing a cluster by its mean
• density-based: by defining clusters as areas of higher density


Supervised vs. Unsupervised
Supervised | Unsupervised
Classes based on information categories | Classes based on spectral properties
Predefined classes | Unknown classes; unexpected categories can be revealed
Serious classification errors possible | No classification error

2. Object based Image Classification


More recently segmentation-based classification concepts have been added. They
are often called OBIA (object-based image analysis) approaches.
Object-based image analysis is based on image objects, i.e. groups of pixels that
are similar to one another. Image objects are found by image segmentation using
spectral properties, size, shape, texture, and maybe other context from a pixel
neighborhood.
Unlike pixel-based classification, object-based classification considers not only spectral properties but also contextual properties such as shape, size, color, tone, shadow, texture, and location.
Pixel-based classification processes are mostly automatic, whereas object-based classification is a semi-automatic, iterative process.

Lecture – 8

Photogrammetry
Basics of Photogrammetry
Illusion
An instance of a wrong or misinterpreted perception of a sensory experience.
Perspective
The art of representing three-dimensional objects on a two-dimensional surface so as to give the right impression of their height, width, depth, and position in relation to each other. A technique of depicting volumes and spatial relationships on a flat surface.
Artificial Stereo viewing
• Stereoscopes (Divided optical path)
• Anaglyph (red-green)
• Polarization
• Temporal (Shutters)
The term photogrammetry derives from three words:
'photo' – light; 'gram' – drawing; 'metry' – measurement
The art and science of obtaining reliable measurement by means of images. It is
the science and art of determining the size and shape of objects from measurement
of images.
The art, science and technique to obtain reliable geometric and thematic
information on the earth and its physical environment by acquisition,
measurement and interpretation of images, which are obtained by remote sensing
sensor systems
Application of Photogrammetry
• Image acquisition (Metric cameras, Film, Photogrammetric Scanner)
• Aerial triangulation (Georeferencing)
• DTM Generation (Terrain and surface)
• Stereo compilation (Object extraction, GIS, map)
• Orthophoto and orthomosaic generation (for GIS)
• Spatial data for Geoinformatics (for GIS)
• Features:
o High precision and reliability
o Efficient, suited to nationwide applications, and established worldwide


Photogrammetry vs Remote Sensing
 | Photogrammetry | Remote Sensing
Sensor | Aerial camera | SPOT, LANDSAT (scanner)
Data source | Aerial images, analogue or digital | Digitally recorded strips
Technique | Stereo measurement | Image processing
Automation | Mostly controlled by operator | Automatic procedures; operators are needed for accuracy

Basics of Camera Geometry and Optics


Image acquisition according to application
• Aerial
• Close-range
Camera type
• Metric film camera
• Digital camera (film-less)
• Close-range cameras
Viewing direction
• Nadir view
• Normal view
• Side looking view
• Horizontal view
• Zenith view
Camera Opening Angle
• Normal angle (NA): 2α ≈ 60°
• Wide angle (WA): 2α ≈ 90°
• Super wide angle (SWA): 2α ≈ 120°
• Small angle: 2α < 60°
Image format
• Image format: 23 × 23 cm² (9 × 9 inches)
Focal Length
• f = 85 mm (Zeiss) / 88 mm (Leica): super wide angle (SWA, ≈120°)
• f = 153 mm: wide angle (WA, ≈90°)
• f = 210 mm: intermediate angle
• f = 306 mm: normal angle (NA)
• f = 600 mm: super-narrow angle (super NA)
Lens formula
1/f = 1/a + 1/b, where a = object distance, b = image distance, f = focal length
A single lens cannot guarantee a sharp image over the whole image plane; the blur from imperfect focus increases towards the border of the image.
Imaging Error

Image Distortion
Distortion due to camera orientation
• Scale
• Tilt
Distortion due to camera geometry
• Lens distortion
• Film distortion
Distortion due to other effects
• Relief distortion
• Earth curvature
• Refraction


Scale of a vertical aerial photograph
The scale of a truly vertical photograph is given by scale = f / (H − h), where f is the focal length, H the flying height above the datum, and h the terrain elevation. The scale of a vertical photograph is therefore variable over undulating terrain.
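A minimal numeric sketch of this relation (the lens and height values are typical but hypothetical):

```python
def scale_number(f_m: float, flying_height_m: float, terrain_elev_m: float) -> float:
    """Return S in the representative fraction 1:S, from scale = f / (H - h)."""
    return (flying_height_m - terrain_elev_m) / f_m

f = 0.153   # a 153 mm wide-angle lens, in metres
H = 3000.0  # flying height above the datum, metres
for h in (0.0, 200.0, 500.0):  # undulating terrain
    print(f"terrain at {h:4.0f} m -> photo scale 1:{scale_number(f, H, h):,.0f}")
```

The scale number shrinks (the image scale becomes larger) over higher terrain, which is exactly the variability noted above.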

Displacement due to tilt


• Aircraft cannot fly perfectly level.
• An area that is square on the ground will appear distorted in the image if the photograph is tilted.
• Tilt can be measured directly with an Inertial Navigation System (INS).
Film Distortion
Film distortion may arise from temperature or humidity changes and may be the same in all directions (scale change) or differential (stretching). Image distortion may also come from errors in the scanner: non-constant scanning speed, non-orthogonal axes, or measurement error.
Lens Distortion
The characteristics of lens distortion come from the camera calibration certificate. Normally only radial distortion is significant in the metric cameras used for aerial photography; this requires a correction applied radially from the principal point:
pa = r, aa′ = Δr
tan β = (r − Δr) / f
Δr = r − f·tan β
Relief Displacement
Relief displacement in a vertical aerial photograph is radially away from the principal point of the photograph; hence the correction has to be applied radially towards the principal point. For an object of height h at radial distance r on the image, the displacement is approximately d = r·h / H, where H is the flying height above the base of the object.


Earth curvature
When is earth curvature significant? With pa = r and aa′ = Δr, the image displacement due to earth curvature is
Δr = H·r³ / (2·R·f²)
where R is the radius of the earth; the direction of r is always away from p.
Refraction
With pa = r and aa′ = Δr, the displacement due to atmospheric refraction is
Δr = f·α / cos²θ
Image Correction
1. Interior Orientation
Interior (inner) orientation is the process of re-establishing the geometry of the camera as it was when the photograph was taken.
Basic requirements:
• the ray joining an object point and its image point is a straight line
• the principal point (p) is known
• the principal distance (f) is known
Principal point: the intersection of the optical axis with the image plane
Principal distance: the distance of the projection centre from the image plane
2. Exterior Orientation
Exterior orientation defines the orientation (position and attitude) of the image, i.e. the relation of the image coordinate system to an exterior (world or object) coordinate system. The image–object relation is described by 6 parameters:
1. 3 translations: X0, Y0, Z0
2. 3 rotations:
a. primary, right-handed around X
b. secondary, right-handed around Y
c. tertiary, right-handed around Z
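A minimal sketch of composing the three rotations into a single rotation matrix; the composition order R = Rz·Ry·Rx used here is one common convention, and the angles are hypothetical:

```python
import numpy as np

def exterior_rotation(omega: float, phi: float, kappa: float) -> np.ndarray:
    """Right-handed rotations: omega (primary, about X), phi (secondary,
    about Y), kappa (tertiary, about Z), composed as R = Rz @ Ry @ Rx.
    Angles are in radians."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Small tilts plus a 90-degree kappa, as might occur on a turning aircraft.
R = exterior_rotation(np.radians(1.0), np.radians(-0.5), np.radians(90.0))
print(np.round(R, 4))
# Together with the translation (X0, Y0, Z0), R gives all 6 parameters.
```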

Ground Control Points


GCPs are always necessary if no position and orientation data are provided on the aircraft. It is always advisable to provide redundancy in ground control: the minimum number of GCPs required for a stereo pair is 3, but at least 4 should always be used. Normally aerial triangulation is used to densify GCPs and provide tie points. GCPs can be points of detail (natural points) or targets (artificial points). Features or targets must be large enough to be clearly visible on the image but small enough to be pointed at accurately.
Planning items
• Time of year: flight days/months (usually March–April or September–October, when vegetation is sparse)
• Time of day: roughly 10–15 h, either cloudless with sunshine or below the cloud cover (no cast shadows)
• Weather conditions
• Time schedule
• External conditions (costs etc.)
• Image scale
• Overlap (a sketch of the basic overlap geometry follows this list)
• Flying height
• Base
• Model area
• Number of strips
• Number of images and number of image pairs
• Strip distance
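A minimal sketch of the overlap geometry behind the base and strip-distance items, assuming the 23 cm image format and typical (not source-specified) overlaps of 60 % forward and 25 % to the side:

```python
def ground_coverage_m(image_side_m: float, scale_number: float) -> float:
    """Side length on the ground covered by one image."""
    return image_side_m * scale_number

def spacing_m(coverage_m: float, overlap_pct: float) -> float:
    """Distance between exposures (base) or between strips for a given overlap."""
    return coverage_m * (1.0 - overlap_pct / 100.0)

cov = ground_coverage_m(0.23, 10_000)  # 23 cm image at 1:10,000 -> 2,300 m
base = spacing_m(cov, 60)              # 60% forward overlap -> base B = 920 m
strip_distance = spacing_m(cov, 25)    # 25% side overlap -> 1,725 m between strips
print(cov, base, strip_distance)
```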
