
Geographical Information Systems (GIS) and Remote Sensing I

Course Code: WEN 2333

Contents
GEOGRAPHIC INFORMATION SYSTEM
    Introduction to GIS
    Components of Geographic Information System
    GIS Functions
    Sources of GIS Data
    Geographic Information and Spatial Data Types
    Geographic fields
    Data types and values
    Spatial Data Models
    Topology And Spatial Relationships
    Topology Errors
    Spatial Referencing and Positioning Frames
    Map Projections
    Geographic (datum) transformations
    Applications of GIS
REMOTE SENSING
    Concepts and Foundations of Remote Sensing
    Introduction
    Electromagnetic radiation
    Interactions with the Atmosphere
    Interactions with the Earth's Surface
    Sensing of EM radiation
    Processing of Raw Remote Sensing Data
    Elementary image enhancement
    Course Curriculum
GEOGRAPHIC INFORMATION SYSTEM
Introduction to GIS
A Geographic Information System (GIS) refers to a computer-based tool for mapping and analyzing geographic phenomena that exist, and events that occur, on Earth. GIS technology integrates common database operations such as query and statistical analysis with the unique visualization and geographic analysis benefits offered by maps. These abilities distinguish GIS from other information systems and make it valuable to a wide range of public and private enterprises for explaining events, predicting outcomes, and planning strategies. Currently, GIS is a multi-billion-dollar industry employing hundreds of thousands of people and is taught in schools, colleges, and universities throughout the world. Professionals and domain specialists in every discipline are increasingly aware of the advantages of using GIS technology.

Thus, GIS provides facilities for data capture, data management, data manipulation and analysis, and then presents results in both graphic and report form, with a particular emphasis on preserving and utilizing the inherent characteristics of spatial data. The ability to incorporate spatial data, manage it, analyze it, and answer spatial questions is the distinctive characteristic of geographic information systems, making them well suited to addressing unique spatial problems.

An information system is established to achieve the objectives of collecting, storing, analyzing, and presenting information in a systematic manner.

The term geographic implies a spatial component to the system, which is characterized by two additional crucial properties:

The reference to geographic space, which means the data are registered to a
geographical coordinate system

The representation at geographic scale, which means the data are normally
recorded at small scales and may be generalized and symbolized.

A GIS represents the real world by processing data and presenting it in map form: it allows geographic features at real-world locations to be digitally represented so they can be displayed as maps and manipulated to address a problem. A Geographic Information System (GIS) is a computer-based system including software, hardware, people, and geographic information.

A GIS can create, edit, query, analyze, and display map information on the computer.

GIS has been given the following definitions:

"A system that contains spatially referenced data that can be analysed and converted to information for a specific set of purposes, or application. The key feature of a GIS is the analysis of data to produce new information." Phil Parent (1988)

"A system of computer hardware, software, and procedures designed to support the capture, management, manipulation, analysis, modeling and display of spatially referenced data for solving complex planning and management problems." Federal Interagency Coordinating Committee (1988)

A database system in which most of the data are spatially indexed, and upon which a set of procedures operates in order to answer queries about spatial entities in the database (Smith et al., 1987)

A decision support system involving the integration of spatially referenced data in a problem-solving environment (Cowen, 1988)

Francis Hanigan (1988) states that:

GIS is “any information management system which can:

Collect, store and retrieve information based on its spatial location

Identify locations within a targeted environment which meet specific criteria

Explore relationships among data sets within that environment

Analyse the related data spatially as an aid to making decisions about that
environment

Facilitate selecting and passing data to application-specific analytical models capable of assessing the impact of alternatives on the chosen environment."

Figure: GIS in summary.

Components of Geographic Information System

A working GIS integrates five key components: Hardware, Software, Data, People, and Methods.
Hardware
Hardware is the computer on which a GIS operates. GIS software runs on a wide
range of hardware types, from centralized computer servers to desktop
computers used in stand-alone or networked configurations. The users interact
with it by typing, clicking, pointing or speaking.

A fast computer, large data storage capacity, and a high-quality display are some of the specification requirements. A fast computer is required because spatial analyses are often applied over large areas and/or at high spatial resolutions, and calculations often have to be repeated tens of millions of times. In a GIS, large volumes of data must be entered to define the shape and location of geographic features such as roads and rivers.

Hardware includes desktops, laptops, and personal digital assistants (PDAs).


Software
GIS software provides the functions and tools needed to capture, store, analyze, and display geographic information. Key software components are:

Tools for the input and manipulation of geographic information

A database management system (DBMS)

Tools that support geographic query, analysis, and visualization

A graphical user interface (GUI) for easy access to tools

Examples of major or widely used software packages:

ArcGIS

ERDAS (Earth Resources Data Analysis System), used for remote sensing image processing

ILWIS (often used for hydrological modelling)

QGIS

Data

Geospatial data record the location and characteristics of natural and constructed features.

Geographic data and related tabular data can be collected in-house or purchased from a commercial data provider. A GIS integrates spatial data with other data resources and can even use a database management system (DBMS), used by most organizations to organize and maintain their data, to manage spatial data.
People
People are the users of the GIS. GIS technology is of limited value without
the people who manage the system and develop plans for applying it to real-world
problems. GIS users range from technical specialists who design and maintain the
system to those who use it to help them perform their everyday work.

Methods
A successful GIS operates according to a well-designed plan and business rules,
which are the models and operating practices unique to each organization.

GIS Functions
Data Pre-processing and Manipulation

(i) Data validation and editing, e.g. checking and correction.

(ii) Structure conversion, e.g. conversion from vector to raster.

(iii) Geometric conversion, e.g. map registration, scale changes, projection changes, map transformations, rotation.

(iv) Generalisation and classification, e.g. reclassifying data, aggregation or disaggregation, co-ordinate thinning.

(v) Integration, e.g. overlaying, combining map layers or edge matching.

(vi) Map enhancement, e.g. image enhancement, adding a title, scale, key, map symbolism, draping overlays.

(vii) Interpolation, e.g. kriging, spline functions, Thiessen polygons, plus centroid determination and extrapolation.

(viii) Buffer generation, e.g. calculating and defining corridors (see the sketch after this list).

(ix) Data searching and retrieval, e.g. on points, lines or areas, on user-defined themes or by using Boolean logic. Also browsing, querying and windowing.
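To make item (viii) concrete, here is a minimal sketch of buffer generation using the Shapely library (one common choice; the road geometry and the 10-unit distance are invented for illustration):

```python
from shapely.geometry import LineString

# An invented road centreline; coordinates are in arbitrary map units.
road = LineString([(0, 0), (100, 0), (150, 50)])

# Buffer generation: the corridor of all ground within 10 units of the road.
corridor = road.buffer(10)

print(corridor.area)                                       # corridor area
print(corridor.contains(LineString([(50, 5), (60, 5)])))   # True: inside it
```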

Data Analysis

(i) Spatial analysis, e.g. connectivity, proximity, contiguity, intervisibility, digital terrain modelling.

(ii) Statistical analysis, e.g. histograms, correlation, measures of dispersion, frequency analysis.

(iii) Measurement, e.g. line length, area and volume calculations, distances and directions.

Data Display

(i) Graphical display, e.g. maps and graphs with symbols, labels or annotations.

(ii) Textual display, e.g. reports, tables.

Database Management

(i) Support and monitoring of multi-user access to the database.

(ii) Coping with system failures.

(iii) Communication linkages with other systems.

(iv) Editing and updating of databases.

(v) Organising the database for efficient storage and retrieval.

(vi) Maintenance of database security and integrity.

(vii) Provision of a "data independent" view of the database.

Sources of GIS Data

Common sources of data include hard-copy maps, tables of attributes, aerial photos, satellite imagery, and field surveys.

Geographic Information and Spatial Data Types

A data model is, basically, a conceptual representation of the data structures in a database, where data structures comprise data objects, the relationships between them, and the rules that regulate operations on them. In other words, a data model represents a set of rules or guidelines used to convert real-world features into digitally and logically represented spatial objects. In GIS, data models comprise the rules that are essential to define what is in an operational GIS and its supporting system. The data model is the core of any GIS, providing a set of constructs for describing and representing selected aspects of the real world in a computer (Longley et al., 2005).

Spatial data is data that identifies the geographic location of features and boundaries on Earth, such as natural or constructed features, oceans, and more. Spatial data is usually stored as coordinates and topology; it is data that can be mapped and describes the absolute and relative location of geographic features.

Attribute data describes the characteristics of spatial features. These characteristics can be quantitative and/or qualitative in nature, and attribute data is often referred to as tabular data. The coordinate location of a forestry stand would be spatial data, while the characteristics of the stand, e.g. cover group, dominant species, crown closure and height, are all attribute data.

There are three common representations of spatial data used in GIS, namely vector, raster and triangulated (TIN). Each representation has merits and is suited to particular kinds of information and analysis.

GEOGRAPHIC PHENOMENA

A geographic phenomenon is a manifestation of an entity or process of interest that:

can be named or described

can be georeferenced

can be assigned a time (interval) at which it is/was present

The relevant phenomena for a given application depend entirely on one's objectives. In water management, for example, the objects of study might be river basins, groundwater levels, measurements of actual evapotranspiration, meteorological data, irrigation levels, water budgets and measurements of total water use. All of these can be named or described, georeferenced and provided with a time interval at which each exists. Not all relevant phenomena come as triplets (description, georeference, time interval), though many do. If the georeference is missing, we seem to have something of interest that is not positioned in space: an example is a legal document in a cadastral system. It is obviously somewhere, but its position in space is not considered relevant. If the time interval is missing, we might have a phenomenon of interest that is considered to be always there, i.e. the time interval is (likely to be considered) infinite. If the description is missing, then we have something that exists in space and time, yet cannot be described. Obviously this last issue very much limits the usefulness of the information.

Geographic fields
A field is a geographic phenomenon that has a value ‘everywhere’ in the study
area. We can therefore think of a field as a mathematical function f that associates
a specific value with any position in the study area. Hence if (x,y) is a position in
the study area, then f(x,y) stands for the value of the field f at locality (x,y). Fields
can be discrete or continuous.

In a continuous field, the underlying function is assumed to be 'mathematically smooth', meaning that the field values along any path through the study area do not change abruptly, but only gradually. Good examples of continuous fields are air temperature, barometric pressure, soil salinity and elevation.

Discrete fields divide the study space in mutually exclusive, bounded parts, with
all locations in one part having the same field value. Typical examples are land
classifications, for instance, using either geological classes, soil type, land use type,
crop type or natural vegetation type.
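As a minimal sketch of the two field types (assuming NumPy; the temperature surface and land-use grid are invented for illustration), a continuous field behaves like a function f(x, y) that can be evaluated at any position, while a discrete field assigns one class value to every location in a bounded part:

```python
import numpy as np

# Continuous field: a smooth function f(x, y) with a value everywhere.
# Here, a synthetic air-temperature surface peaking at (50, 50).
def temperature(x, y):
    return 20.0 + 5.0 * np.exp(-((x - 50.0)**2 + (y - 50.0)**2) / 800.0)

print(temperature(50, 50))   # value of the field f at locality (50, 50)

# Discrete field: mutually exclusive parts, one value per part.
# Land-use classes coded 0 = water, 1 = forest, 2 = urban on a coarse grid.
landuse = np.array([[0, 0, 1],
                    [1, 1, 2],
                    [1, 2, 2]])
print(landuse[1, 2])         # class shared by every location in that part
```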

Data types and values


There are different kinds of data values which we can use to represent
‘phenomena’. It is important to note that some of these data types limit the types
of analyses that we can do on the data itself:

Nominal data values are values that provide a name or identifier so that we can discriminate between different values, but that is about all we can do. Specifically, we cannot do true computations with these values. An example is the names of geological units. This kind of data value is called categorical data when the values assigned are sorted according to some set of non-overlapping categories. For example, we might identify the soil type of a given area as belonging to a certain (pre-defined) category.
Ordinal data values are data values that can be put in some natural sequence but
that do not allow any other type of computation. Household income, for instance,
could be classified as being either ‘low’, ‘average’ or ‘high’. Clearly this is their
natural sequence, but this is all we can say—we cannot say that a high income is
twice as high as an average income.

Interval data values are quantitative, in that they allow simple forms of computation like addition and subtraction. However, interval data has no arithmetic zero value, and does not support multiplication or division. For instance, a temperature of 20 °C is not twice as warm as 10 °C, and thus centigrade temperatures are interval data values, not ratio data values.

Ratio data values allow most, if not all, forms of arithmetic computation. Ratio data have a natural zero value, and multiplication and division of values are possible operators (distances measured in metres are an example). Continuous fields can be expected to have ratio data values, and hence we can interpolate them.

Nominal and categorical data values are known as 'qualitative' data, because we are limited in the computations we can do on this type of data. Interval and ratio data are known as 'quantitative' data, as they refer to quantities.
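A small sketch (plain Python; all values invented) of which operations each measurement scale supports:

```python
# Nominal: names only; we can test equality, nothing more.
soil_a, soil_b = "clay", "loam"
print(soil_a == soil_b)             # False: the only meaningful computation

# Ordinal: a natural order, but no arithmetic.
income_bands = ["low", "average", "high"]
print(income_bands.index("high") > income_bands.index("low"))   # True

# Interval: differences are meaningful, ratios are not (no true zero).
t1, t2 = 10.0, 20.0                 # degrees Celsius
print(t2 - t1)                      # 10.0-degree difference: valid
# t2 / t1 == 2.0 would NOT mean "twice as warm": invalid for interval data

# Ratio: a true zero exists, so multiplication and division are valid.
d1, d2 = 150.0, 300.0               # metres
print(d2 / d1)                      # 2.0: genuinely twice as far
```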

Spatial Data Models


Vector Data

Vector data provide a way to represent real world features within the GIS
environment. A vector feature has its shape represented using geometry. The
geometry is made up of one or more interconnected vertices. A vertex describes a
position in space using an x, y and optionally z axis. In the vector data model,
features on the earth are represented as:

points

lines / routes

polygons / regions

TINs (triangulated irregular networks)

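A brief sketch using the Shapely library (one common vector geometry implementation; the coordinates are invented) of how these geometry types are built from vertices:

```python
from shapely.geometry import Point, LineString, Polygon

well = Point(36.8, -1.3)                             # a point: one vertex
river = LineString([(0, 0), (2, 1), (4, 0.5)])       # a line: ordered vertices
parcel = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])   # a polygon: a closed ring

print(river.length)                  # length in coordinate units
print(parcel.area)                   # enclosed area
print(parcel.contains(Point(1, 1)))  # a simple spatial query -> True
```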


Vector data are good at:

accurately representing true shape and size

representing non-continuous data (e.g., rivers, political boundaries, road lines, mountain peaks)

creating aesthetically pleasing maps

conserving disk space

Advantages

Data can be represented at its original resolution and form without generalization.

Graphic output is usually more aesthetically pleasing (traditional cartographic representation).

Since most data, e.g. hard-copy maps, is in vector form, no data conversion is required.

Accurate geographic location of data is maintained.

Allows for efficient encoding of topology and, as a result, more efficient operations that require topological information, e.g. proximity, network analysis.

Disadvantages

The location of each vertex needs to be stored explicitly.

For effective analysis, vector data must be converted into a topological structure. This is often processing-intensive and usually requires extensive data cleaning. In addition, topology is static, and any updating or editing of the vector data requires rebuilding of the topology.

Algorithms for manipulation and analysis functions are complex and may be processing-intensive. This often inherently limits the functionality for large data sets, e.g. a large number of features.

Continuous data, such as elevation data, is not effectively represented in vector form. Usually substantial data generalization or interpolation is required for these data layers.

Spatial analysis and filtering within polygons is impossible.

Raster data

Raster data consists of a matrix of cells (or pixels) organized into rows and columns (a grid), where each cell contains a value representing information, such as temperature. Rasters include digital aerial photographs, imagery from satellites, digital pictures, and even scanned maps. In raster datasets, each cell (also known as a pixel) has a value. The cell values represent the phenomenon portrayed by the raster dataset, such as a category, magnitude, height, or spectral value. The category could be a land-use class such as grassland, forest, or road. A magnitude might represent gravity, noise pollution, or percent rainfall. Height (distance) could represent surface elevation above mean sea level, which can be used to derive slope, aspect, and watershed properties. Spectral values are used in satellite imagery and aerial photography to represent light reflectance and color.

Data stored in a raster format represents real-world phenomena:

Thematic data (also known as discrete) represents features such as land-use or soils data.

Continuous data represents phenomena such as temperature, elevation, or spectral data such as satellite images and aerial photographs.

Pictures include scanned maps or drawings and building photographs.

The area (or surface) represented by each cell consists of the same width and
height and is an equal portion of the entire surface represented by the raster. For
example, a raster representing elevation (that is, digital elevation model) may
cover an area of 100 square kilometers. If there were 100 cells in this raster, each
cell would represent 1 square kilometer of equal width and height (that is, 1 km x
1 km).
The dimension of the cells can be as large or as small as needed to represent the
surface conveyed by the raster dataset and the features within the surface, such
as a square kilometer, square foot, or even square centimeter. The cell size
determines how coarse or fine the patterns or features in the raster will appear.
The smaller the cell size, the smoother or more detailed the raster will be.
However, the greater the number of cells, the longer it will take to process, and it
will increase the demand for storage space. If a cell size is too large, information
may be lost or subtle patterns may be obscured.
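A toy sketch of the raster model (assuming NumPy; the elevation values and origin are invented), echoing the 100-cell example above: each cell holds one value, and its location follows from its row/column position plus an origin and cell size:

```python
import numpy as np

cell_size_m = 1000.0   # 1 km x 1 km cells, as in the example above
# A synthetic 10 x 10 digital elevation model covering 100 square km.
elev = np.random.default_rng(0).uniform(500.0, 900.0, size=(10, 10))

# Only the origin is stored; each cell's location is implied by its indices.
origin_x, origin_y = 300000.0, 9800000.0     # hypothetical top-left corner
row, col = 3, 7
x = origin_x + (col + 0.5) * cell_size_m     # cell-centre easting
y = origin_y - (row + 0.5) * cell_size_m     # cell-centre northing (rows go down)
print(elev[row, col], x, y)
```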

Advantages:

The geographic location of each cell is implied by its position in the cell matrix. Accordingly, other than an origin point, e.g. the bottom-left corner, no geographic coordinates are stored.

Due to the nature of the data storage technique, data analysis is usually easy to program and quick to perform.

The inherent nature of raster maps, e.g. one-attribute maps, is ideally suited for mathematical modeling and quantitative analysis.

Discrete data, e.g. forestry stands, is accommodated equally as well as continuous data, e.g. elevation data, facilitating the integration of the two data types.

Grid-cell systems are very compatible with raster-based output devices, e.g. electrostatic plotters, graphic terminals.

Disadvantages:

The cell size determines the resolution at which the data is represented.

It is especially difficult to adequately represent linear features, depending on the cell resolution. Accordingly, network linkages are difficult to establish.

Processing of associated attribute data may be cumbersome if large amounts of data exist. Raster maps inherently reflect only one attribute or characteristic for an area.

Since most input data is in vector form, data must undergo vector-to-raster conversion. Besides increased processing requirements, this may introduce data integrity concerns due to generalization and the choice of an inappropriate cell size.

Most output maps from grid-cell systems do not conform to high-quality cartographic needs.
Topology And Spatial Relationships
Topology expresses explicitly the spatial relationships between connecting or adjacent vector features (points, polylines and polygons) in a GIS, such as two lines meeting perfectly at a point or a directed line having an explicit left and right side.

Topological, or topology-based, data are useful for detecting and correcting digitizing errors in a geographic data set and are necessary for some GIS analyses.

Topologic data structures help ensure that information is not unnecessarily repeated. The database stores one line only in order to represent a boundary (as opposed to two lines, one for each polygon). The database tells us that the line is the "left side" of one polygon and the "right side" of the adjacent polygon.

Topology is necessary for carrying out some types of spatial analysis, such as network analysis.

Topology deals with the spatial and structural properties of geometric objects, independent of their extension, type, or geometric form. Among the topological properties of objects are the number of dimensions an object has and the relationships that exist between objects. Topology simplifies analysis functions, for example joining adjacent areas with similar properties. It is important to distinguish between vector data formats and raster data formats. For example, imagine an area represented by a vector data model: it is composed of a border, which separates the interior from the exterior of the surface. The same area represented by a raster data model consists of several grid cells; there is no border existing as a separating line. Thus, the algorithms implemented for vector data models are not valid for raster data models. In the following, we only show topological operations in vector data models.

VECTOR: An interesting method for the classification of topological relations was proposed by Egenhofer (1993) (Worboys et al., 2004). It is called the 9-intersection schema. This intersection scheme is an elegant approach to the classification of topological configurations. The basic idea is that each element is composed of a boundary (b), an interior (i), and an exterior (e). The concepts of interior, boundary and complement (exterior) are defined in general topology.

Boundary

The boundary consists of points or lines that separate the interior from the
exterior. The edge of a line consists of the endpoints. The boundary of a polygon is
the line that defines the perimeter.

Interior

The interior of an object consists of the points, lines or areas that are in the object but do not belong to the boundary.

Complement

The complement, also called exterior, consists of the points, lines and areas which
are not in the object.

The basic method used to compare two geometrical objects is to analyze the intersections between all the possible pairs that can be built with the interior, exterior and boundary of these two objects. Based on the resulting "intersection" matrix, the relationships between the two geometrical objects can be classified.

Three basic topological relationships are usually stored: connectivity, adjacency, and enclosure. Connectivity describes how lines are connected to each other to form a network. Adjacency describes whether two areas are next to each other, and enclosure describes whether two areas are nested.


Components of Topology

Topology has three basic components:

1. Connectivity (Arc – Node Topology):

Points along an arc that define its shape are called Vertices.

Endpoints of the arc are called Nodes.

Arcs join only at the Nodes.

Connectivity is defined through arc-node topology. This is the basis for many network tracing and pathfinding operations. Connectivity allows you to identify a route to the airport, connect streams to rivers, or follow a path from the water treatment plant to a house, as the sketch below illustrates.
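A minimal sketch of arc-node connectivity driving a pathfinding operation (assuming the NetworkX library; the street network is invented):

```python
import networkx as nx

# Nodes are arc endpoints; arcs join only at nodes (an invented street network).
g = nx.Graph()
g.add_edge("home", "junction_a", length=1.2)
g.add_edge("junction_a", "junction_b", length=0.8)
g.add_edge("junction_b", "airport", length=2.5)
g.add_edge("home", "airport", length=5.0)      # a longer direct road

# With arc-node topology, route finding is a weighted graph search.
route = nx.shortest_path(g, "home", "airport", weight="length")
print(route)   # ['home', 'junction_a', 'junction_b', 'airport'] (total 4.5)
```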

2. Area Definition / Containment (Polygon – Arc Topology):

An enclosed polygon has a measurable area.

Lists of the arcs that define the boundaries of closed areas are maintained.

Polygons are represented as a series of (x, y) coordinates that connect to define an area.

Many of the geographic features that may be represented cover a distinguishable area on the surface of the earth, such as lakes, parcels of land, and census tracts. An area is represented in the vector model by one or more boundaries defining a polygon.

3. Contiguity:
Every arc has a direction

A GIS maintains a list of Polygons on the left and right side of each arc.

The computer then uses this information to determine which features are next to
one another.

Two geographic features that share a boundary are called adjacent. Contiguity is
the topological concept that allows the vector data model to determine adjacency.
Polygon topology defines contiguity. Polygons are contiguous to each other if they
share a common arc. This is the basis for many neighbor and overlay operations.

The most important topological relations between objects that are used in GIS
applications are listed in the following sequence. Note that there are three
different geometries (point, line, polygon) on which the topological relations are
applied.

Disjoint

There is no intersection area between object A and object B.


Meet

Object A and object B meet at the boundary. The boundaries meet, but not the interior.
Two geometry objects meet if the boundaries touch.

Overlap

Object A and object B overlap. Overlap with disjoint: the interior of one object intersects the boundary and the interior of the other object, but the two boundaries do not intersect. This is the case if a line starts outside a polygon (area) and ends in the interior of the polygon. Overlap with intersect: the boundaries and the interiors of both objects intersect. For one geometry object to intersect another, its dimension can be no greater than that of the bigger object. That means:

Points
- Cannot intersect with points, lines or areas.

Lines
- Cannot intersect with points.
- Can intersect with other lines » intersection = point.
- Can intersect with polygons » intersection = lines (or points).
Contains

Object A contains object B. The interior and the boundary of one object are completely inside the other object. A geometry object cannot contain a geometry object of higher dimension. For example:

Points cannot contain lines or polygons.

Lines cannot contain polygons.

Inside

Object B lies inside object A. It is the opposite of "contains": if A is inside B, then B contains A.

Covers

Object A covers object B. The interior of one object is completely inside the other object and the boundaries intersect. A geometry object cannot cover a geometry object of higher dimension. That means:

Points cannot cover lines or polygons.

Lines cannot cover polygons.

Covered by

Object B is covered by object A. It is the opposite of "covers": if A is covered by B, then B covers A.

Equal

Object B and object A match. The interior and the boundary of one object coincide with the interior and the boundary of the other object. This happens, for instance, when a line falls exactly on the boundary of a polygon. The coordinates of all components have to be equal, and the compared geometry objects must be of the same type. That means:

Point = point

Line = line

Polygon = polygon
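These relations map onto the standard spatial predicates of DE-9IM-based libraries. A short sketch with Shapely (one common implementation; the geometries are invented):

```python
from shapely.geometry import Point, Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])    # wholly inside a
c = Polygon([(4, 0), (6, 0), (6, 2), (4, 2)])    # shares an edge with a

print(a.contains(b))            # True -> "contains"; b.within(a) is "inside"
print(a.touches(c))             # True -> "meet": boundaries touch, interiors don't
print(a.disjoint(Point(9, 9)))  # True -> "disjoint": no intersection at all
print(a.relate(b))              # the 9-intersection (DE-9IM) matrix string
```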

Topology Errors
There are different types of topological errors and they can be grouped according
to whether the vector feature types are polygons or polylines. Topological errors
with polygon features can include unclosed polygons, gaps between polygon
borders or overlapping polygon borders. A common topological error with
polyline features is that they do not meet perfectly at a point (node). These errors
are introduced during the process of digitization.

Digitizing in GIS is the process of converting geographic data, either from hardcopy or a scanned image, into vector data by tracing the features. During the digitizing process, features from the traced map or image are captured as coordinates in point, line, or polygon format.

Types of Digitizing in GIS

There are several types of digitizing methods. Manual digitizing involves tracing geographic features from an external digitizing tablet using a puck (a type of mouse specialized for tracing and capturing geographic features from the tablet). Heads-up digitizing (also referred to as on-screen digitizing) is the method of tracing geographic features from another dataset (usually an aerial image, satellite image, or scanned map) directly on the computer screen. Automated digitizing involves using image processing software that contains pattern recognition technology to generate vectors.

Types of Digitizing Errors in GIS

Since the most common methods of digitizing involve interpretation of geographic features by the human hand, there are several types of errors that can occur during the course of capturing the data. The type of error that occurs when a feature is not captured properly is called a positional error, as opposed to attribute errors, where information about the captured feature is inaccurate or false. These positional error types are outlined below.

During the digitizing process, vectors are connected to other lines by a node, which marks the point of intersection. Vertices are the defining points along the shape of an unbroken line. All lines have a starting point known as a starting node and an ending node. If the line is not a straight line, then any bends and curves on that line are defined by vertices (vertex for a singular bend). Any intersection of two lines is denoted by a node at the point of the intersection.

Dangles or Dangling Nodes

Dangles or dangling nodes are lines that are not connected but should be.  With
dangling nodes, gaps occur in the linework where the two lines should be
connected.  Dangling nodes also occur when a digitized polygon doesn’t connect
back to itself, leaving a gap where the two end nodes should have connected,
creating what is called an open polygon.

Figure: an open polygon caused by the endpoints not snapping together.

Switchbacks, Knots, and Loops

These types of errors are introduced when the digitizer has an unsteady hand and moves the cursor or puck in such a way that the line being digitized ends up with extra vertices and/or nodes. In the case of switchbacks, extra vertices are introduced and the line ends up with a bend in it. With knots and loops, the line folds back onto itself, creating small polygon-like geometry known as weird polygons.

Overshoots and Undershoots

Similar to dangles, overshoots and undershoots happen when the digitized line does not connect properly with the neighboring line it should intersect. During digitization a snap tolerance is set by the digitizer. The snap tolerance, or snap distance, is the measurement of the diameter extending from the point of the cursor. Any nodes of neighboring lines that fall within the circle of the snap tolerance will cause the endpoints of the line being digitized to snap automatically to the nearest node. Undershoots and overshoots occur when the snap distance is either not set or is set too low for the scale being digitized. Conversely, if the snap distance is set too high, the line endpoint may snap to the wrong node. In a few cases, undershoots and overshoots are not actually errors; one instance is the presence of cul-de-sacs (i.e. dead ends) within a road GIS database.

Figure: the circle represents the area of the snap tolerance; the line being digitized will automatically snap to the nearest nodes within the snap tolerance area.
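A minimal sketch of the snapping rule (assuming Shapely; the tolerance and node coordinates are invented): an endpoint within the snap distance of an existing node is moved onto that node, otherwise it stays where digitized and may become a dangle:

```python
from shapely.geometry import Point

def snap_endpoint(endpoint, nodes, tolerance):
    """Move endpoint onto the nearest node within tolerance, else keep it."""
    nearest = min(nodes, key=endpoint.distance)
    return nearest if endpoint.distance(nearest) <= tolerance else endpoint

nodes = [Point(0, 0), Point(10, 10)]          # existing nodes in the layer
tol = 0.5                                     # hypothetical snap distance

print(snap_endpoint(Point(0.2, 0.1), nodes, tol))  # POINT (0 0): snapped
print(snap_endpoint(Point(5, 5), nodes, tol))      # unchanged: a dangle
```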

Slivers

Slivers and gaps occur in a digitized polygon layer where adjoining polygons fail to meet exactly. Setting the proper snap tolerance is critical for ensuring that the edges of adjoining polygons snap together to eliminate gaps between them. Where two adjacent polygons overlap in error, the area of overlap is called a sliver.
Figure: gap and sliver errors in digitized polygons.
Spatial Referencing and Positioning Frames
Coordinate system

A coordinate system is a reference system used to represent the locations of geographic features, imagery, and observations, such as Global Positioning System (GPS) locations, within a common geographic framework.

Coordinate systems enable geographic datasets to use common locations for integration.

Each coordinate system is defined by the following:

Its measurement framework, which is either geographic (in which spherical coordinates are measured from the earth's center) or planimetric (in which the earth's coordinates are projected onto a two-dimensional planar surface)

Units of measurement (typically feet or meters for projected coordinate systems, or decimal degrees for latitude-longitude)

The definition of the map projection, for projected coordinate systems

Other measurement system properties such as a spheroid of reference, a datum, one or more standard parallels, a central meridian, and possible shifts in the x- and y-directions

Several hundred geographic coordinate systems and a few thousand projected coordinate systems are available for use. In addition, you can define a custom coordinate system.

Types of coordinate systems

The following are two common types of coordinate systems used in a geographic
information system (GIS):

A global or spherical coordinate system such as latitude-longitude. These are often referred to as geographic coordinate systems.

A projected coordinate system such as universal transverse Mercator (UTM), Albers Equal Area, or Robinson, all of which (along with numerous other map projection models) provide various mechanisms to project maps of the earth's spherical surface onto a two-dimensional Cartesian coordinate plane. Projected coordinate systems are referred to as map projections.

Coordinate systems (both geographic and projected) provide a framework for defining real-world locations.

Spatial reference

A spatial reference is a series of parameters that define the coordinate system and
other spatial properties for each dataset in the geodatabase. It is typical that all
datasets for the same area (and in the same geodatabase) use a common spatial
reference definition.

A spatial reference includes the following:

The coordinate system

The coordinate precision with which coordinates are stored (often referred to as
the coordinate resolution)

Processing tolerances (such as the cluster tolerance)

The spatial extent covered by the dataset (often referred to as the spatial domain)

Geographic coordinate systems

The most common locational reference system is the spherical coordinate system measured in latitude and longitude. This system can be used to identify point locations anywhere on the earth's surface. A point is referenced by its longitude and latitude values. Longitude and latitude are angles measured from the earth's center to a point on the earth's surface. Because of its ability to reference locations, the spherical coordinate system is usually referred to as the geographic coordinate system, also known as the global reference system.

Longitude is measured east and west, while latitude is measured north and south.
Longitude lines, also called meridians, stretch between the north and south poles.
Latitude lines, also called parallels, encircle the globe with parallel rings.

Latitude and longitude are traditionally measured in degrees, minutes, and seconds (DMS). Longitude values range from 0° at the prime meridian (the meridian that passes through Greenwich, England) to 180° when traveling east, and from 0° to –180° when traveling west. A geographic coordinate system (GCS) uses a three-dimensional spherical surface to define locations on the earth.

A GCS includes an angular unit of measure, a prime meridian, and a datum (based
on a spheroid). The spheroid defines the size and shape of the earth model, while
the datum connects the spheroid to the earth's surface.

Figure: the world as a globe with longitude and latitude values.

In the spherical system, horizontal lines, or east–west lines, are lines of equal
latitude, or parallels. Vertical lines, or north–south lines, are lines of equal
longitude, or meridians. These lines encompass the globe and form a gridded
network called a graticule.

The line of latitude midway between the poles is called the equator. It defines the
line of zero latitude. The line of zero longitude is called the prime meridian. For
most GCSs, the prime meridian is the longitude that passes through Greenwich,
England. The origin of the graticule (0,0) is defined by where the equator and
prime meridian intersect.

Latitude and longitude values are traditionally measured either in decimal degrees or in degrees, minutes, and seconds (DMS). Latitude values are measured relative to the equator and range from –90° at the south pole to +90° at the north pole. Longitude values are measured relative to the prime meridian. They range from –180° when traveling west to 180° when traveling east. If the prime meridian is at Greenwich, then Australia, which is south of the equator and east of Greenwich, has positive longitude values and negative latitude values.
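Because coordinates appear in both notations, converting DMS to decimal degrees is a routine step. A small sketch (plain Python; the sample coordinate is invented):

```python
def dms_to_decimal(degrees, minutes, seconds, negative=False):
    """Convert degrees, minutes, seconds to decimal degrees.

    Set negative=True for southern latitudes or western longitudes.
    """
    dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -dd if negative else dd

# 37 degrees 30' 36" S -> -37.51 (south of the equator, so negative)
print(dms_to_decimal(37, 30, 36, negative=True))
```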

Projected coordinate systems

Because it is difficult to make measurements in spherical coordinates, geographic data is projected into planar coordinate systems (often called Cartesian coordinate systems). On a flat surface, locations are identified by x,y coordinates on a grid, with the origin at the center of the grid. Each position has two values that reference it to that central location: one specifies its horizontal position and the other its vertical position. These two values are called the x coordinate and the y coordinate.

A projected coordinate system (PCS) is defined on a flat, two-dimensional surface. Unlike a GCS, a PCS has constant lengths, angles, and areas across the two dimensions. A PCS is always based on a GCS that is based on a sphere or spheroid. In addition to the GCS, a PCS includes a map projection, a set of projection parameters that customize the map projection for a particular location, and a linear unit of measure.
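A short sketch using the pyproj library (one common tool; EPSG:4326 is WGS84 latitude-longitude, and EPSG:32737, UTM zone 37S, is chosen here purely as an example PCS) of converting geographic coordinates to planar x,y coordinates:

```python
from pyproj import Transformer

# WGS84 geographic (EPSG:4326) -> UTM zone 37S in metres (EPSG:32737)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32737", always_xy=True)

lon, lat = 36.8219, -1.2921        # an illustrative point near Nairobi
x, y = to_utm.transform(lon, lat)
print(round(x), round(y))          # planar easting and northing in metres
```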

Map Projections
A map projection is a method for converting the earth’s 3D surface to a map’s two-
dimensional surface. A map projection can represent the earth’s entire surface or
only a portion of it, depending on your needs.

The term map projection was coined by early cartographers, who employed the
concept of projecting light from a source through the earth’s surface and onto a
two-dimensional surface. Although maps are created using mathematical
formulas rather than projected light, the concept is valid, and cartographers use
the term projection to describe the mathematical process. Today, all projections use
formulas: mathematical expressions that convert data from a geographic location
(latitude and longitude) on the earth to a representative location on a flat surface.

Projection surfaces

The selection of a suitable map projection is important if you are going to calculate areas, distances, or directions from coordinates. To help us understand map projections better, we can group them into classifications. One way to group them is by their distortion characteristics, such as shape, area, distance, and direction. Another way is to classify them by the developable surface used to derive the projection equations. There are three developable surfaces: cylinders, cones, and planes, each giving a distinctive shape to the parallels. With cylinders, parallels are straight lines; with cones, they form concentric arcs; with planes, concentric circles. Most common map projections may be conceptually or geometrically projected onto one of these surfaces touching or intersecting the globe.

Representing the earth's surface in two dimensions causes distortion in the shape, area, distance, or direction of the data. The easiest way to transfer the information onto a flat surface is to convert the geographic coordinates into an x and y coordinate system, where x is longitude and y is latitude. This is an example of "projecting" onto a plane. Coordinates can also be "projected" onto two other flat surfaces, a cylinder or a cone, and then unfolded into a map. The grid formed by the latitude and longitude on a map is called the graticule. There are thousands of different map projections, all depending on how they intersect the earth's surface and how they are oriented. For example, the line of latitude or longitude where a projection intersects or "cuts" the earth's surface is called the point of contact, or standard line, and it is where distortion is minimized. Orientations of the three surfaces can also vary between equatorial (standard lines of latitude), transverse (standard lines of longitude), and oblique (a standard line other than latitude or longitude). Different projections cause different types of distortion. Some projections are designed to minimize the distortion of one or two of the data's characteristics. A projection could maintain the area of a feature but alter its shape. In the following illustration, data near the poles is stretched.

Types of projections

Map projections can be broadly classified according to the spatial attribute they preserve and according to their projection surface.

Map projections according to the attribute they preserve

Equal area

Equal area projections preserve area. Many thematic maps use an equal area projection. Maps of the United States commonly use the Albers Equal Area Conic projection.

Conformal

Conformal projections preserve shape and are useful for navigational charts and
weather maps. Shape is preserved for small areas, but the shape of a large area
such as a continent is significantly distorted. The Lambert Conformal Conic and
Mercator projections are common conformal projections.

Equidistant

Equidistant projections preserve distances, but no projection can preserve distances from all points to all other points. Instead, distance can be held true from one point (or from a few points) to all other points, or along all meridians or parallels. If you will use your map to find features within a certain distance of other features, you should use an equidistant map projection.

Azimuthal

Azimuthal projections preserve direction from one point to all other points. This
quality can be combined with equal area, conformal, and equidistant projections,
as in the Lambert Equal Area Azimuthal and the Azimuthal Equidistant
projections.

Other projections minimize overall distortion but do not preserve any of the four
spatial properties of area, shape, distance, and direction. For example, the
Robinson projection is neither equal area nor conformal but is aesthetically
pleasing and useful for general mapping.

Map projections according to projection surface

An azimuthal or planar projection is usually tangent to a specific point on the earth's surface, but may also be secant. This point, or focus, may be a pole, a point on the equator, or another oblique point. Normally, though, the azimuthal projection is used for polar charts due to distortion at other latitudes.

A cylindrical projection usually places the earth inside a cylinder with the equator tangent or secant to the inside of the cylinder. If the cylinder is placed so that its axis is perpendicular to the axis of the earth, the resulting projection is called a transverse projection.

In a conic projection, a cone is placed over the earth, normally tangent to one or more lines of latitude. A tangent line is called a standard parallel and, in general, distortion increases the further away you get from this line. A conic projection works best over mid-latitudes for this reason.

Projection distortions

The conversion of geographic locations from a geographic coordinate system to a Cartesian coordinate system causes distortion. The projection process distorts one or more of the spatial properties listed below.

Shape

Area

Distance

Direction

Because spatial properties are often used to make decisions, anyone who uses
maps should know which projections distort which properties and to what extent.
For example, choosing a Peters projection gives you accurate area calculations but
inaccurate shapes; a Mercator projection maintains true direction but sacrifices
accuracy for area and distance; and a Robinson projection is a compromise of all
the properties. The projection you choose significantly affects the properties of a
small-scale map but has less effect on the properties of a large-scale map.
Criteria for selecting a projection

What is the size of the study area / the scale?

Is it at high, mid or low latitude?

Will you need to calculate areas?

What distortion can you live with?

Is there other data you need to include, and in what projection is it?

Projection parameters

A map projection by itself is not enough to define a PCS. You can state that a
dataset is in Transverse Mercator, but that's not enough information. Where is the
center of the projection? Was a scale factor used? Without knowing the exact
values for the projection parameters, the dataset cannot be reprojected.

You can also get some idea of the amount of distortion the projection has added to
the data. If you're interested in Australia but you know that a dataset's projection
is centered at 0,0, the intersection of the equator and the Greenwich prime
meridian, you might want to think about changing the center of the projection.

Each map projection has a set of parameters that you must define. The
parameters specify the origin and customize a projection for your area of interest.
Angular parameters use the GCS units, while linear parameters use the PCS units.

Linear parameters

False easting is a linear value applied to the origin of the x-coordinates. False
northing is a linear value applied to the origin of the y-coordinates. False easting
and northing values are usually applied to ensure that all x- and y- values are
positive. You can also use the false easting and northing parameters to reduce the
range of the x- or y- coordinate values. For example, if you know all y- values are
greater than 5,000,000 meters, you could apply a false northing of –5,000,000.
Height defines the point of perspective above the surface of the sphere or
spheroid for the Vertical Near-Side Perspective projection.

Angular parameters

Azimuth defines the centerline of a projection. The rotation angle measures east
from north. It is used with the azimuth cases of the Hotine Oblique Mercator
projection. 

Central meridian defines the origin of the x-coordinates.

Longitude of origin defines the origin of the x-coordinates. The central meridian
and longitude of origin parameters are synonymous.

Central parallel defines the origin of the y-coordinates.

Latitude of origin defines the origin of the y-coordinates. This parameter may not
be located at the center of the projection. In particular, conic projections use this
parameter to set the origin of the y-coordinates below the area of interest. In that
instance, you do not need to set a false northing parameter to ensure that all y-
coordinates are positive.
Longitude of center is used with the Hotine Oblique Mercator center (both two-
point and azimuth) cases to define the origin of the x-coordinates. It is usually
synonymous with the longitude of origin and central meridian parameters.

Latitude of center is used with the Hotine Oblique Mercator center (both two-
point and azimuth) cases to define the origin of the y-coordinates. It is almost
always the center of the projection.

Standard parallel 1 and standard parallel 2 are used with conic projections to
define the latitude lines where the scale is 1.0. When defining a Lambert
Conformal Conic projection with one standard parallel, the first standard parallel
defines the origin of the y-coordinates. 

For other conic cases, the y-coordinate origin is defined by the latitude of origin parameter.

Longitude of first point

Latitude of first point

Longitude of second point

Latitude of second point

The previous four parameters are used with the Two-Point Equidistant and Hotine
Oblique Mercator projections. They specify two geographic points that define the
center axis of a projection.

Pseudo standard parallel 1 is used in the Krovak projection to define the oblique
cone’s standard parallel.

X,y plane rotation defines the orientation of the Krovak projection along with the
x-scale and y-scale parameters.

Unitless parameters

Scale factor is a unitless value applied to the center point or centerline of a map projection. The scale factor is usually slightly less than one. The UTM coordinate system, which uses the Transverse Mercator projection, has a scale factor of 0.9996. Rather than 1.0, the scale along the central meridian of the projection is 0.9996. This creates two almost parallel lines approximately 180 kilometers, or about 1°, away where the scale is 1.0. The scale factor reduces the overall distortion of the projection in the area of interest.
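This can be checked numerically with pyproj (a sketch; pyproj's Proj.get_factors is used here, and EPSG:32637, UTM zone 37N with central meridian 39°E, is an arbitrary example zone):

```python
from pyproj import Proj

utm37n = Proj("EPSG:32637")   # UTM zone 37N, central meridian at 39 degrees E

# Point scale along the central meridian equals the 0.9996 scale factor...
print(utm37n.get_factors(39.0, 0.0).meridional_scale)    # ~0.9996

# ...and returns to ~1.0 roughly 180 km east or west of it at the equator.
print(utm37n.get_factors(40.62, 0.0).meridional_scale)   # ~1.0
```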

X and y scales are used in the Krovak projection to orient the axes.

Option is used in the Cube and Fuller projections. In the Cube projection, option
defines the location of the polar facets. An option of 0 in the Fuller projection
displays all 20 facets. Specifying an option value between 1 and 20 displays a
single facet.

Vertical coordinate systems

A vertical coordinate system defines the origin for height or depth values. Like a
horizontal coordinate system, most of the information in a vertical coordinate
system is not needed unless you want to display or combine a dataset with other
data that uses a different vertical coordinate system. Perhaps the most important
part of a vertical coordinate system is its unit of measure. The unit of measure is
always linear (for example, international feet or meters). Another important part
is whether the z-values represent heights (elevations) or depths. For each type, the
z-axis direction is positive "up" or "down," respectively.

In the following illustration, there are two vertical coordinate systems: mean sea
level and mean low water. Mean sea level is used as the zero level for height
values. Mean low water is a depth-based vertical coordinate system.

One z-value is shown for the height-based mean sea level system. Any point that
falls below the mean sea level line but is referenced to it will have a negative z-
value. The mean low water system has two z-values associated with it. Because
the mean low water system is depth based, the z-values are positive. Any point
that falls above the mean low water line but is referenced to it will have a
negative z-value.

You cannot define a vertical coordinate system on a dataset without a corresponding geographic or projected coordinate system.

Common GIS Projections

Mercator - A conformal, cylindrical projection tangent to the equator. Originally
created to display accurate compass bearings for sea travel. An additional feature
of this projection is that all local shapes are accurate and clearly defined.

Transverse Mercator - Similar to the Mercator except that the cylinder is tangent
along a meridian instead of the equator. The result is a conformal projection that
minimizes distortion along a north-south line, but does not maintain true
directions.

Universal Transverse Mercator (UTM) – Based on a Transverse Mercator
projection centered in the middle of zones that are 6 degrees of longitude wide.
These zones have been created throughout the world.

Lambert Conformal Conic – A conic, conformal projection typically intersecting
parallels of latitude, called standard parallels, in the northern hemisphere. This
projection is one of the best for middle latitudes because distortion is lowest in the
band between the standard parallels. It is similar to the Albers Conic Equal Area
projection except that the Lambert Conformal Conic projection portrays shape
more accurately than area.

State Plane – A standard set of projections for the United States
based on either the Lambert Conformal Conic or Transverse Mercator projection,
depending on the orientation of each state. Large states commonly require
several state plane zones.

Lambert Equal Area - A conic projection similar to the Lambert
Conformal Conic, but one that preserves areas rather than shape.
Albers Equal Area Conic - This conic projection uses two standard parallels to
reduce some of the distortion of a projection with one standard parallel. Shape
and linear scale distortion are minimized between standard parallels.

Datums

A horizontal datum is a reference frame used to locate features on the earth’s
surface. It is defined by an ellipsoid and that ellipsoid’s position relative to the
earth. There are two types of datums: earth-centered and local. An earth-centered
datum has its origin placed at the earth’s currently known center of mass and is
more accurate overall. A local datum is aligned so that it closely corresponds to
the earth’s surface for a particular area and can be more accurate for that
particular area. Within both of the basic types of datums, you can have several
global and local datums. Because datums establish reference points to measure
surface locations, they also enable us to calculate planar coordinate values when
applying a projection to a particular area.

Ellipsoid

The earth is often treated as a sphere to make mathematical calculations easier;
however, its shape is actually an ellipsoid. Rotating an ellipse about an axis forms
an ellipsoid. An ellipsoid resembles a flattened sphere, with radius lengths along its
major and minor axes of length a and b, respectively. The ellipsoid is symmetric
when divided at the equator (i.e., the southern hemisphere and the northern
hemisphere are identical in shape). This is not strictly correct, because the earth is
slightly pear-shaped; however, the difference in shape between the hemispheres is
very slight.
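
As a small illustration, the flattening f = (a − b)/a of the WGS84 ellipsoid can be
computed directly from its published semi-axes:

    # Published semi-major (a) and semi-minor (b) axes of WGS84, in meters.
    a = 6378137.0
    b = 6356752.3142

    f = (a - b) / a                              # flattening
    print(f"f = {f:.9f} (about 1/{1/f:.3f})")    # about 1/298.257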

Locations on the earth are referenced to a datum; different datums yield
different coordinate values for the same location.

Geographic (datum) transformations


If two datasets are not referenced to the same geographic coordinate system, you
may need to perform a geographic (datum) transformation. This is a well-defined
mathematical method to convert coordinates between two geographic coordinate
systems. As with the coordinate systems, there are several hundred predefined
geographic transformations that you can access. It is very important to correctly
use a geographic transformation if it is required. When neglected, coordinates can
be in the wrong location by up to a few hundred meters. Sometimes no
transformation exists, or you have to use a third GCS like the World Geodetic
System 1984 (WGS84) and combine two transformations.
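
As a minimal sketch (the course does not prescribe a particular tool), the
open-source pyproj library can apply such a predefined geographic transformation;
the EPSG codes below, 4267 for NAD27 and 4326 for WGS84, are one common
example pair:

    from pyproj import Transformer  # pip install pyproj

    # NAD27 (EPSG:4267) -> WGS84 (EPSG:4326); always_xy gives (lon, lat) order.
    transformer = Transformer.from_crs("EPSG:4267", "EPSG:4326", always_xy=True)

    lon, lat = -100.0, 40.0                  # a point referenced to NAD27
    lon84, lat84 = transformer.transform(lon, lat)
    print(lon84, lat84)                      # the same location in WGS84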
Applications of GIS
Agriculture

GIS application in agriculture has been playing an increasingly important role in
crop production throughout the world by helping farmers increase production,
reduce costs, and manage their land resources more efficiently. GIS applications
such as agricultural mapping play a vital role in the monitoring and management
of soil and irrigation for any given farm. GIS and agricultural mapping act as
essential tools for the management of the agricultural sector by acquiring
accurate information and implementing it in a mapping environment. GIS
applications in agriculture also help in the management and control of
agricultural resources, and the technology helps improve present systems for
acquiring and generating agricultural and resource data.

Archaeology

The combination of GIS and archaeology has been considered a perfect match,
since archaeology often involves the study of the spatial dimension of human
behavior over time, and all archaeology carries a spatial component. GIS is adept
at processing these large volumes of data, especially those which are
geographically referenced.

Business

With the modernization and advancement of technology, tools such as GIS
can help in accomplishing an array of tasks such as marketing, site selection,
surveying and data management. Spatial data management and analysis can
support better strategic marketing and management of any business vertical.
Business geographics can assist in monitoring the prelaunch and launch of a
product by giving insights into market penetration, product reception and sales
analysis.

Environment
Geology

This application shows how a GIS in combination with geological data sets can be
used to solve specific geological problems. The training on digital image
processing focuses on the usage of field and laboratory spectral data to gain a
better understanding of Remote Sensing products.

Hydrology

A water balance equation can be used to describe the flow of water into and out of a
system. A system can be one of several hydrological domains, such as a column of
soil or a drainage basin. Water balance can also refer to the ways in which an
organism maintains water in dry or hot conditions. It is often discussed in
reference to plants or arthropods, which have a variety of water retention
mechanisms, including a lipid waxy coating that has limited permeability.

A water balance can be used to help manage water supply and predict where
there may be water shortages. It is also used in irrigation, runoff assessment,
flood control and pollution control. Further, it is used in the design of subsurface
drainage systems, which may be horizontal or vertical.
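
A minimal sketch of this bookkeeping for a drainage basin, with purely
illustrative monthly values in millimetres:

    # Monthly water balance of a drainage basin, all values in mm (illustrative).
    precipitation      = 120.0
    evapotranspiration = 70.0
    runoff             = 35.0

    # precipitation = evapotranspiration + runoff + change in storage
    storage_change = precipitation - evapotranspiration - runoff
    print(f"Change in basin storage: {storage_change:+.1f} mm")
    # A persistently negative value would warn of a developing water shortage.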

Waste management

These are the activities and actions required to manage waste from its inception to its
final disposal. This includes, amongst other things, the collection, transport, treatment
and disposal of waste, together with monitoring and regulation. It also
encompasses the legal and regulatory framework that relates to waste
management, including guidance on recycling.

Management of Water Utilities for WSP’s

This includes the development of GIS datasets for utilities used for water supply, like
the main pipe network, sub-mains, service connections, appurtenances, master
meters, zonal meters and customer meters. Most of these assets are hidden
underground, and it is important to keep good records of their information for
purposes of efficient monitoring, repairs, extension of service and non-revenue
water management.
Urban Planning

The usefulness of GIS as a tool for building planning support systems is best
assessed with reference to the nature of the scientific input required at the
various stages of decision making.

Land degradation

Irrigated agricultural areas often face problems of waterlogging and salinity. These
are referred to as twin problems, as waterlogging leads to soil salinization in the
long run. Identification of the extent of the affected area is a prerequisite for
reclamation. Areas with surface pondage and moist soil can be delineated easily
using remote sensing data. Water has a black tone in a standard false colour
composite (FCC) in the visible and near-IR bands, and moist soil has a dark
signature in these imageries. Shallow water table conditions often are not detected
using optical remotely sensed data unless their expression is visible on the surface
of the earth. Areas where yield is affected can be monitored. Saline areas possess
salt efflorescence on the surface; due to this, saline areas have a bright appearance
in optical remote sensing. Sodic areas have a different signature than saline areas.
In sodic areas, infiltration is very low, so water stagnates and the area can be
identified through surface pondage and moist soil. Both saline and sodic areas
have poor growth of vegetation. Waterlogged areas can also be delineated using
GIS techniques using a water depth map. The map is processed to correct any
discrepancies in depths (e.g. negative values). The maps can be utilized to classify
areas as waterlogged/critical, potentially waterlogged and safe.

Groundwater studies

Groundwater potential and quality can be mapped in a GIS environment. Various
layers, namely slope, geology, hydromorphogeology, distances to drainage
channels, tanks and lineaments, depth to water table, and depth of the weathered
zone, can be overlaid and integrated in a GIS environment to obtain a
groundwater potential map.
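
A minimal sketch of such a weighted overlay using numpy; the layer values and
weights below are hypothetical, and in practice each layer would be a
reclassified raster covering the study area:

    import numpy as np

    # Layers reclassified to a common 1 (poor) to 5 (excellent) scale; the tiny
    # 2 x 2 grids and the weights are hypothetical.
    slope    = np.array([[5, 4], [2, 1]])
    geology  = np.array([[4, 4], [3, 2]])
    drainage = np.array([[3, 5], [4, 2]])

    potential = 0.3 * slope + 0.5 * geology + 0.2 * drainage
    print(potential)    # higher values indicate higher groundwater potential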
REMOTE SENSING
Concepts and Foundations of Remote Sensing

Introduction
Remote sensing (RS), also called Earth Observation (EO), refers to obtaining
information about objects or areas at the earth’s surface without being in direct
contact with the object or area.

Remote sensing techniques allow taking images of the earth’s surface in various
wavelength regions of the electromagnetic spectrum (EMS). One of the major
characteristics of a remotely sensed image is the wavelength region it represents
in the EMS. Some images represent reflected solar radiation in the visible
and the near infrared regions of the electromagnetic spectrum; others are
measurements of the energy emitted by the earth’s surface itself, i.e. in the thermal
infrared wavelength region. The energy measured in the microwave region is a
measure of the relative return from the earth’s surface, where the energy is
transmitted from the sensor itself. This is known as active remote sensing, since the
energy source is provided by the remote sensing platform. Systems
where the remote sensing measurements depend upon an external energy source,
such as the sun, are referred to as passive remote sensing systems.
Figure: Active remote sensing versus passive remote sensing

Detection and discrimination of objects or surface features means detecting and
recording the radiant energy reflected or emitted by objects or surface material.
Different objects return different amounts of the energy incident upon them in
different bands of the electromagnetic spectrum. This depends on the properties of
the material (structural, chemical, and physical), surface roughness, angle of
incidence, intensity, and wavelength of the radiant energy.

Remote sensing is basically a multi-disciplinary science which includes a
combination of various disciplines such as optics, spectroscopy, photography,
computers, electronics and telecommunication, satellite launching etc. All these
technologies are integrated to act as one complete system in itself, known as a
Remote Sensing System. There are a number of stages in a remote sensing process,
and each of them is important for successful operation.

Stages in Remote Sensing

1. Emission of electromagnetic radiation, or EMR (sun/self-emission).

2. Transmission of energy from the source to the surface of the earth, as well as
absorption and scattering.

3. Interaction of EMR with the earth’s surface: reflection and emission.

4. Transmission of energy from the surface to the remote sensor.

5. Sensor data output


6. Data transmission, processing and analysis.

Electromagnetic radiation
With the exception of objects at absolute zero, all objects emit electromagnetic
radiation. Objects also reflect radiation that has been emitted by other objects.

An electromagnetic radiation is a form of energy that propagates as wave motion
in a harmonic sinusoidal fashion at the velocity of light (i.e. 299,792 km/s).

Electromagnetic energy is generated by several mechanisms, including changes in
the energy levels of electrons, acceleration of electrical charges, decay of
radioactive substances, and the thermal motion of atoms and molecules. Nuclear
reactions within the Sun produce a full spectrum of electromagnetic radiation,
which is transmitted through space without experiencing major changes. As this
radiation approaches the Earth, it passes through the atmosphere before reaching
the Earth’s surface. Some is reflected upward from the Earth’s surface; it is this
radiation that forms the basis for photographs and similar images. Other solar
radiation is absorbed at the surface of the Earth and is then reradiated as thermal
energy. This thermal energy can also be used to form remotely sensed images,
although they differ greatly from the aerial photographs formed from reflected
energy. Finally, man-made radiation, such as that generated by imaging radars, is
also used for remote sensing (Campbell and Wynne, 2011).

Electromagnetic radiation (Figure below) consists of an electrical field (E) that
varies in magnitude in a direction perpendicular to the direction of propagation.
In addition, a magnetic field (H) oriented at right angles to the electrical field is
propagated in phase with the electrical field.
An electromagnetic radiation is characterized by its:

a. Wavelength: the distance, expressed in unit of length, from one wave crest to the
next.

b. Frequency: the number of crests passing a fixed point in a given period of time.
Frequency is often measured in hertz, 1 Hertz corresponding to one cycle per
second.

c. Amplitude: the height of each peak. Amplitude (formally known as spectral
irradiance) is often expressed as energy levels in watts per square meter per
micrometer (i.e. as energy level per wavelength interval).

d. Phase: phase specifies if the peaks of one wave align with those of another. Two
waves oscillating together are said to be “in phase”. If the peaks of one wave
match with the troughs of another wave, the 2 waves are said to be “out of phase”.

Frequency (ν) and wavelength (λ) are related by c = λν, where c is the speed of light.
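
As a short worked example, the frequency of green light of wavelength 0.5 μm
follows directly from this relation:

    c = 299_792_458.0        # speed of light, m/s

    wavelength = 0.5e-6      # 0.5 micrometers (green light), in meters
    frequency = c / wavelength
    print(f"{frequency:.2e} Hz")   # ~6.0e14 Hz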

The electromagnetic spectrum has been divided into different regions (Figure below).

These subdivisions have been arbitrarily defined for convenience and by tradition
within different disciplines. The regions actually used in remote sensing range from
the ultraviolet up to the microwave.

a. Ultraviolet spectrum
Ultraviolet region (0.01-0.40 μm) is a short-wavelength region lying between the X-
ray region and the limit of human vision. This region is sometimes subdivided
into the near ultraviolet (or UV-A; 0.32-0.40 μm), far ultraviolet (or UV-B; 0.28-0.32 μm)
and extreme ultraviolet (UV-C; below 0.28 μm). Near ultraviolet radiation is known
for its ability to induce fluorescence, emission of visible radiation, in some
materials. Fluorescence occurs when an object illuminated with radiation of one
wavelength emits radiation at different wavelength. It has significance for a
specialized form of remote sensing, notably for basic photosynthesis research and
monitoring vegetation. Light energy absorbed by the photosynthetic pigments (
chlorophylls and carotenoids) in the leaf mesophyll cells is mainly used for
photosynthesis. A small part of the absorbed light is, however, lost in the
deexcitation process of excited chlorophyll a molecules as red chlorophyll
fluorescence and infra-red radiation (heat emission). This phenomenon is called
the “Kautsky effect”. Ultraviolet radiation is however easily scattered by the
earth’s atmosphere and is therefore not generally used for remote sensing of
earth material.

b. Visible spectrum

The visible spectrum, whose limits are defined by the sensitivity of the human
visual system, represents only a small portion of the electromagnetic spectrum. It
has however an obvious importance in remote sensing. It has been divided in
three segments or additive primaries: the blue (0.4-0.5 μm), the green (0.5-0.6 μm)
and the red (0.6-0.7 μm). Equal proportions of the three additive primaries
combine to form white light. The color of an object is defined by the color of the
light that it reflects.

c. Infrared spectrum

Infrared spectrum is a relatively large region (from 0.72 to 15 μm, i.e. roughly 40
times as wide as the visible light) and therefore it encompasses radiation with
varied properties. Two important categories can be particularly noticed. The first
category includes the near infrared (0.72-1.30 μm) and the mid-infrared (1.30-3.00 μm)
radiations. This category of radiation behaves in a manner analogous to radiation
in the visible spectrum. The second category is composed of the far infrared (or
thermal infrared) region (7.0 μm – 1 mm) bordering the microwave region. This
category is fundamentally different from that in the visible and the near infrared
regions. Whereas near infrared radiation is essentially solar radiation reflected
from the earth’s surface, far infrared radiation is emitted by the Earth.
d. Microwave radiation

Microwave radiation represents the longest wavelengths used in remote sensing.
The shortest wavelengths in this range have properties similar to those of the
thermal infrared region. The main advantage of this portion of the electromagnetic
spectrum is its ability to penetrate through clouds (the microwave region is one
of the atmospheric windows).

Interactions with the Atmosphere


All radiation used for remote sensing must pass through the Earth’s atmosphere.
If the sensor is carried by a low-flying aircraft, effects of the atmosphere on image
quality may be negligible. In contrast, energy that reaches sensors carried by
Earth satellites must pass through the entire depth of the Earth’s atmosphere.
Under these conditions, atmospheric effects may have substantial impact on the
quality of images and data that the sensors generate. Therefore, the practice of
remote sensing requires knowledge of interactions of electromagnetic energy
with the atmosphere.

Before reaching the surface, the sun radiation crosses the atmosphere and is
subject to modification by several physical processes including scattering,
absorption and refraction.

a. Atmospheric scattering

Scattering is the redirection of EMR by particles suspended in the atmosphere or
by large molecules of atmospheric gases. The effect of scattering is to redirect
radiation so that a portion of the incoming solar beam is directed back toward
space, as well as toward the earth’s surface. Scattering not only reduces the image
contrast but also changes the spectral signature of ground objects as seen by the
sensor. The amount of scattering depends upon the size of the particles, their
abundance, the wavelength of radiation, the depth of the atmosphere through
which the energy is travelling and the concentration of the particles. The
concentration of particulate matter varies both over time and with season. Thus
the effects of scattering will be spatially uneven and will vary from time to time.

Theoretically scattering can be divided into three categories depending upon the
wavelength of radiation being scattered and the size of the particles causing the
scattering.

Rayleigh scattering
Rayleigh scattering mainly consists of scattering caused by atmospheric molecules
and other tiny particles. This occurs when the particles causing the scattering are
much smaller in diameter (less than one tenth) than the wavelengths of radiation
interacting with them.

Smaller particles present in the atmosphere scatter the shorter wavelengths more
compared to the longer wavelengths.

The scattering effect, or the intensity of the scattered light, is inversely
proportional to the fourth power of the wavelength for Rayleigh scattering. Hence,
the shorter wavelengths are scattered more than longer wavelengths.

Rayleigh scattering is also known as selective scattering or molecular scattering.

Molecules of Oxygen and Nitrogen (which are dominant in the atmosphere) cause
this type of scattering of the visible part of the electromagnetic radiation. Within
the visible range, smaller wavelength blue light is scattered more compared to the
green or red. A "blue" sky is thus a manifestation of Rayleigh scatter. The blue
light is scattered around 4 times and UV light is scattered about 16 times as much
as red light. This consequently results in a blue sky. However, at sunrise and
sunset, the sun's rays have to travel a longer path, causing complete scattering
(and absorption) of the shorter wavelength radiation. As a result, only the longer
wavelength portions (orange and red), which are less scattered, will be visible.

The haze in imagery and the bluish-grey cast in a colour image when taken from
high altitude are mainly due to Rayleigh scatter.
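
A quick check of the inverse-fourth-power law and of the figures quoted above:
ultraviolet light at half the wavelength of red light is scattered 16 times as
strongly:

    red = 0.70   # wavelength of red light, micrometers
    uv  = 0.35   # ultraviolet, half the red wavelength

    # Rayleigh scattering intensity ~ 1 / wavelength**4
    ratio = (red / uv) ** 4
    print(ratio)   # 16.0: UV is scattered about 16 times as strongly as red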

Mie Scattering

Another type of scattering is Mie scattering, which occurs when the wavelength
of the energy is almost equal to the diameter of the atmospheric particles. In this
type of scattering, longer wavelengths also get scattered, compared to Rayleigh
scatter.

In Mie scattering, the intensity of the scattered light varies approximately as the
inverse of the wavelength.

Mie scattering is usually caused by the aerosol particles such as dust, smoke and
pollen. Gas molecules in the atmosphere are too small to cause Mie scattering of
the radiation commonly used for remote sensing.

Non-selective scattering

A third type of scattering is non-selective scatter, which occurs when the
diameters of the atmospheric particles are much larger (approximately 10 times)
than the wavelengths being sensed. Particles such as pollen, cloud droplets, ice
crystals and raindrops can cause non-selective scattering of visible light.

For visible light (of wavelength 0.4-0.7 μm), non-selective scattering is generally
caused by water droplets having diameters commonly in the range of 5 to
100 μm. This scattering is non-selective with respect to wavelength since all visible
and IR wavelengths are scattered equally, giving a white or even grey colour to
the clouds.
b. Absorption and transmission

Absorption is the process in which incident energy is retained by particles in the
atmosphere at a given wavelength. Unlike scattering, atmospheric absorption
causes an effective loss of energy to atmospheric constituents.

The absorbing medium will not only absorb a portion of the total energy, but will
also reflect, refract or scatter the energy. The absorbed energy may later be
re-emitted back to the atmosphere.

The most efficient absorbers of solar radiation are water vapour, carbon dioxide,
and ozone. Gaseous components of the atmosphere are selective absorbers of the
electromagnetic radiation, i.e., these gases absorb electromagnetic energy in
specific wavelength bands. Arrangement of the gaseous molecules and their
energy levels determine the wavelengths that are absorbed.

Since the atmosphere contains many different gases and particles, it absorbs and
transmits many different wavelengths of electromagnetic radiation. Even though
all the wavelengths from the Sun reach the top of the atmosphere, due to the
atmospheric absorption, only limited wavelengths can pass through the
atmosphere. The ranges of wavelength that are partially or wholly transmitted
through the atmosphere are known as "atmospheric windows." Remote sensing
data acquisition is limited to these atmospheric windows.

Only those wavelengths outside the main absorption ranges of atmospheric gases
can be used for remote sensing. The useful ranges are referred to as atmospheric
transmission windows and include:

1. The window from 0.4 to 2 µm. The radiation in this range (visible, NIR,
SWIR) is mainly reflected radiation. Because this type of radiation follows
the laws of optics, remote sensors operating in this range are often referred
to as optical sensors.

2. Three windows in the TIR range, namely two narrow windows around 3
and 5 µm, and a third, relatively broad window extending from
approximately 8 µm to 14 µm.

The atmospheric windows and the absorption characteristics are shown in the
figure below.
Figure: Atmospheric transmission (%) as a function of wavelength (0-22 μm),
showing the absorption bands of H2O, CO2 and O3.
From this figure it can be seen that many of the wavelengths are not useful for
remote sensing of the Earth’s surface, simply because the corresponding radiation
cannot penetrate the atmosphere.

c. Refraction

Refraction is the bending of light at the contact between two media. This
phenomenon occurs in the atmosphere as light passes through atmospheric
layers of varied clarity, humidity and temperature. These variations influence the
density of atmospheric layers, which, in turn, causes the bending of light rays as
they pass from one layer to another. The most common phenomena are the
mirage-like apparitions sometimes visible in the distance on hot summer days.

Interactions with the Earth’s Surface


Radiation from the sun (i.e. with passive remote sensing systems), when incident
upon the earth’s surface, is either reflected by the surface, transmitted into the
surface or absorbed and emitted by the surface (See figure below). The
electromagnetic radiation, on interaction, experiences a number of changes in
magnitude, direction, wavelength, polarization and phase. These changes are
detected by the remote sensor and enable the interpreter to obtain useful
information about the object of interest. The remotely sensed data contain both
spatial information (size, shape and orientation) and spectral information (tone,
color and spectral signature).
From the point of view of interaction mechanisms with the object, the visible and
infrared wavelengths from 0.3 μm to 16 μm can be divided into three regions.

The spectral band from 0.3 μm to 3 μm is known as the reflective region. In this
band, the radiation sensed by the sensor is that due to the sun, reflected by the
earth’s surface. The band corresponding to the atmospheric window between 8
μm and 14 μm is known as the thermal infrared band. The energy available in this
band for remote sensing is due to thermal emission from the earth’s surface. Both
reflection and self-emission are important in the intermediate band from 3 μm to
5.5 μm.

In the microwave region of the spectrum, radar active sensors provide their own
source of electromagnetic radiation. The electromagnetic radiation produced by
the radars is transmitted to the earth’s surface and the electromagnetic radiation
reflected (back scattered) from the surface is recorded and analyzed. The
microwave region can also be monitored with passive sensors, called microwave
radiometers, which record the radiation emitted by the terrain in the microwave
region.

Passive (optical) remote sensing

Passive optical remote sensing makes use of the radiations reflected or emitted by
remote objects. Passive sensors measure ambient levels of existing sources of
energy, the major energy source being the sun. The solar radiation is transmitted
through the atmosphere before interacting with the surface. Part of the solar
radiation that reaches the surface is reflected and transmitted back through the
atmosphere towards the sensor.

The electromagnetic energy reaching the earth’s surface can be either reflected,
absorbed and/or transmitted. Only the reflected portion is relevant, since it is
usually this which is returned to the sensor system.

Reflection

Of all the interactions in the reflective region, surface reflections are the most
useful and revealing in remote sensing applications. Reflection occurs when an
electromagnetic radiation is redirected as it strikes a non-transparent surface. The
nature of the reflection depends on the size of surface irregularities (roughness or
smoothness) in relation to the wavelength of the radiation considered. Two types of
reflection can be distinguished. If the surface is smooth relative to the wavelength,
specular reflection occurs. Specular reflection redirects all, or almost all, of the
incident radiation in a single direction (the angle of incidence being equal to the
angle of reflection). For visible radiation, specular reflection can occur with
surfaces such as a mirror, smooth metal, or a calm water body.

If a surface is rough relative to the wavelength, it acts as a diffuse, or isotropic,
reflector. Energy is scattered more or less equally in all directions. For visible
radiation, many natural surfaces might behave as diffuse reflectors, including, for
example, uniform grassy surfaces. A perfectly diffuse reflector (known as a
Lambertian surface) would have equal brightnesses when observed from any
angle.

Bidirectional reflectance distribution function


Because of its simplicity and directness, the concept of a Lambertian surface is
frequently used as an approximation of the optical behavior of objects observed
in remote sensing. However, the Lambertian model does not hold precisely for
many, if not most, natural surfaces. Actual surfaces exhibit complex patterns of
reflection determined by details of surface geometry (e.g., the sizes, shapes, and
orientations of plant leaves). Some surfaces may approximate Lambertian
behavior at some incidence angles but exhibit clearly non-Lambertian properties
at other angles. Reflection characteristics of a surface are described by the
bidirectional reflectance distribution function (BRDF).

The BRDF is a mathematical description of the optical behavior of a surface with
respect to angles of illumination and observation, given that it has been
illuminated with a parallel beam of light at a specified azimuth and elevation.
(The function is "bidirectional" in the sense that it accounts both for the angle of
illumination and the angle of observation.) Describing the BRDFs of actual, rather
than idealized, surfaces permits assessment of the degree to which they
approach the ideals of specular and diffuse surfaces.

Reflectance

For many applications in remote sensing, the brightness of a surface is best
represented not as irradiance but rather as reflectance, which is expressed as the
relative brightness of a surface as measured for a specific wavelength interval:

reflectance = (energy reflected from the surface) / (energy incident upon the surface)

As a ratio, reflectance is a dimensionless number (between 0 and 1) but is
commonly expressed as a percentage.

Spectral signature

The quantity reflected to the sensor system varies according to the nature of the
encountered surface and the position in the electromagnetic spectrum where the
measurement is being taken. Using remote sensing instruments, we can observe
the brightnesses of objects over a range of wavelengths, so that there are
numerous points of comparison between brightnesses of separate objects. The
magnitude of energy that an object reflects or emits across a range of wavelengths
is called its spectral response pattern (also called spectral signature).

Considering that everything in nature has its own unique distribution of reflected,
emitted, and absorbed radiation, the detailed knowledge of the spectral signature
should ideally allow identification of features of interest, such as separate kinds of
crops, forests, or minerals and allow obtaining information about shape, size, and
other physical and chemical properties.

The Figure below illustrates for example the spectral response patterns of water,
brownish gray soil, and grass between about 0.3 and 2.7 micrometers. The graph
shows that grass, for instance, reflects relatively little energy in the visible band (
although the spike in the middle of the visible band explains why grass looks
green). Like most vegetation, the chlorophyll in grass absorbs visible energy (
particularly in the blue and red wavelengths) for use during photosynthesis.
About half of the incoming near-infrared radiation is reflected, however, which is
characteristic of healthy, hydrated vegetation. Brownish gray soil reflects more
energy at longer wavelengths than grass. Water absorbs most incoming radiation
across the entire range of wavelengths.

Vegetation has a characteristic spectral signature showing a clear opposition
between the visible and the near infra-red (Figure below). This characteristic
comes from the chlorophyllous activity and the presence of water in the leaves.
The reflectance and transmittance of a fresh leaf and a dry leaf are for example
shown in the figure below. The reflectance of a fresh green leaf is characterized
by a strong absorption of the chlorophyll in the visible region, centred on 650 nm,
a plateau of high reflectance in the near-infrared, around 850 nm, and water
absorptions in the middle infrared region at 1450 nm and at 1950 nm. The
transition from the strong chlorophyll absorption in the visible to the high
reflectance in the near-infrared is known as red-edge. These spectral
characteristics of the leaf are maintained at canopy level. The red-edge feature is
the most important characteristic of vegetation, and it is the basis of many
vegetation indexes. The point of maximum slope in the leaf reflectance, the
inflexion point of the red-edge feature, occurs at wavelengths between 690 and
740 nm. The red-edge position and the red-edge slope have been shown to be
correlated to chlorophyll concentration (Curran et al., 1990; Gitelson et al., 1997).
When the leaf dries, it loses chlorophyll and the absorptions due to other leaf
constituents, such as lignin, can be observed. A dry leaf absorbs much less
radiation than a fresh leaf (González Sanpedro, 2008).

Fresh and dry leaf spectral reflectance
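
The vegetation indexes mentioned above exploit exactly this red/near-infrared
opposition. The best known is the Normalized Difference Vegetation Index,
NDVI = (NIR − red) / (NIR + red). A minimal sketch with numpy; the band
arrays below are illustrative values, not real data:

    import numpy as np

    # Illustrative red and near-infrared reflectances for four pixels.
    red = np.array([[0.08, 0.10], [0.25, 0.30]])
    nir = np.array([[0.45, 0.50], [0.28, 0.31]])

    ndvi = (nir - red) / (nir + red)   # NDVI = (NIR - red) / (NIR + red)
    print(ndvi.round(2))               # vegetated pixels ~0.7, bare/dry near 0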

Sensing of EM radiation
The review of properties of EM radiation shows that different forms of radiation
can provide us with different information about terrain-surface features.
Different applications of Earth Observation are likely to benefit from sensing in
different ranges of the EM spectrum. A geo-informatics engineer who wants to
discriminate objects for topographic mapping will prefer to use an optical sensor
operating in the visible range. An environmentalist who needs to monitor heat
losses of a nuclear power plant will use a sensor that detects thermal emission. A
geologist interested in surface roughness, because it indicates rock type,
will rely on microwave sensing. Different demands combined with different
technical solutions have resulted in a multitude of sensors. In this section we will
classify various remote sensors and discuss their common features. Peculiarities
will then be treated later in appropriate sections.

Remote Sensing Data Acquisition Process

Obstacles to sensing

A remote sensor is a device that detects EM radiation, quantifies it and, usually,
records it in an analogue or digital form. A remote sensor may also transmit
recorded data (to a receiving station on the ground). Many sensors used in Earth
recorded data (to a receiving station on the ground). Many sensors used in Earth
Observation detect reflected solar radiation. Others detect the radiation emitted
by the Earth itself. There are, however, some obstacles to be overcome. The Sun
does not always shine brightly and there are regions on the globe almost
permanently under cloud cover. There are also regions that have seasons with
very low Sun elevation, so that objects cast long shadows over long periods.
Furthermore, at night there are only emissions— and perhaps moonlight. Sensors
detecting reflected solar radiation are useless at night and face problems when
dealing with unfavorable seasonal and weather conditions. Sensors detecting
emitted terrestrial radiation do not directly depend on the Sun as a source of
illumination; they can be operated at any time. The Earth’s emissions, we have
learned, occur only at longer wavelengths because of the relatively low surface
temperature; and because long EM waves do not hold much energy, they are
more difficult to sense.
Active versus Passive
Luckily we do not have to rely only on solar and terrestrial radiation. We can
build instruments that emit EM radiation and then detect the radiation returning
from the target object or surface. Such instruments are called active sensors, as
opposed to passive sensors, which measure reflected solar or terrestrial radiation.
An example of an active sensor is a laser rangefinder, a device that can be bought
for a few Euros in a shop. Another very common active sensor is a camera with a
flash unit (which will operate below certain levels of light). The same camera
without the flash unit is a passive sensor. The main advantages of active sensors
are that they can be operated day and night and have a controlled illuminating
signal. They are often designed to work in an EM spectrum range that is less
affected by the atmosphere and weather conditions.
Most remote sensors measure either the intensity or the phase of EM radiation.
Some— like a simple laser rangefinder—only measure the elapsed time between
sending a radiation signal and receiving it back. Radar sensors may measure both
intensity and phase. Phase-measuring sensors are used for precise ranging
(distance measurement), e.g. by GPS “phase receivers” or continuous-wave laser
scanners.

Spectral Resolution

The radiance is observed for a spectral band, not for a single wavelength. A
spectral band or wavelength band is an interval of the EM spectrum in which the
average radiance is measured. Sensors such as a panchromatic camera, a radar
sensor and a laser scanner only measure in one specific band. A multispectral
scanner or a digital camera measures in several spectral bands at the same time.
Multispectral sensors have several channels, one for each spectral band.

Sensing in several spectral bands simultaneously allows us to relate properties
that show up well in specific spectral bands. For example, reflection
characteristics in the spectral band 2 to 2.4 µm (as recorded by Landsat-5 TM
channel 7) tell us something about the mineral composition of soil. The combined
reflection characteristics in the red and NIR bands (from Landsat-5 TM channels 3
and 4) can tell us something about biomass and plant health.

Landsat MSS (MultiSpectral Scanner), the first civil space-borne Earth Observation
sensor, had sensing elements (detectors) for three rather broad spectral bands in
the visible range of the spectrum, each with a width of 100 nm, and one broader
band in the NIR range. A hyperspectral scanner uses detectors for many more, but
narrower, bands, which may be as narrow as 20 nm, or even less. We say a
hyperspectral sensor has a higher ‘spectral resolution’ than a multispectral one.
Spectral resolution is the number of intervals, or channels, of the EM spectrum in
which radiance is measured.

Spatial Resolution

When arranging the DNs in a two-dimensional array, we can readily visualize
them as grey values. We refer to the obtained “image” as a digital image and to a
sensor producing digital images as an imaging sensor. The array of DNs
represents an image in terms of discrete picture elements, called pixels. The value
of a pixel (its DN) corresponds to the radiance of the light reflected from the small
ground area viewed by the relevant detector. The smaller the detector, the smaller
will be the area on the ground that corresponds to one pixel. The size of the
“ground resolution cell” is often referred to as the “pixel size on the ground”. This
is referred to as the spatial resolution of the image.
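
A minimal illustration of DNs, pixels and ground coverage; the 30 m ground
resolution cell is an assumed, Landsat-like value:

    import numpy as np

    # A tiny digital image: a 3 x 3 array of 8-bit digital numbers (DNs),
    # where 0 would display as black and 255 as white.
    dns = np.array([[ 12,  40,  80],
                    [ 45, 130, 200],
                    [ 90, 180, 255]], dtype=np.uint8)

    pixel_size = 30.0   # assumed ground resolution cell of 30 m
    rows, cols = dns.shape
    print(f"Ground coverage: {rows * pixel_size:.0f} m x {cols * pixel_size:.0f} m")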

Remote Sensing Sensors

Classification of sensors

Remote sensors can be classified and labeled in different ways. Depending on
whether our prime interest in remote sensing is in the geometric properties, the
spectral differences, or the intensity distribution of an object or scene, we can
distinguish three salient types of sensors: altimeters, spectrometers, and
radiometers.
Altimeters

Laser and radar altimeters are non-imaging sensors that provide information
about the elevation of water and land surfaces.

Radiometers

Radiometers measure radiance and typically sense in one broad spectral band or
in only a few bands, but with high radiometric resolution. Thermal sensors, such
as the channels 3 to 5 of NOAA’s AVHRR or the channels 10 to 14 of Terra’s ASTER,
are called (imaging) radiometers.

Panchromatic radiometers can have a very high spatial resolution, whereas
microwave radiometers have a low spatial resolution because of the low levels of
energy inherent in this spectral range. Scatterometers are non-imaging
radiometers.

Radiometers are used for a wide range of applications: for example, detecting
forest/bush/coal fires; determining soil moisture and plant response; monitoring
ecosystem dynamics; and analysing energy balance across land and sea surfaces.

Spectrometers
Spectrometers measure radiance in many (usually about 100 to 200) narrow,
contiguous spectral bands and therefore have a high spectral resolution. Their
spatial resolution is moderate to low. The prime use of imaging spectrometers is
to identify surface materials, ranging from the mineral composition of soils to
concentrations of suspended matter in surface water and chlorophyll content.

The following list gives a short description of each group:

1. Gamma ray spectrometers are mainly used in mineral exploration.


2. Aerial film cameras have been the remote sensing workhorse for decades.
Today, they are used primarily for large-scale topographic mapping,
cadastral mapping, and orthophoto production for urban planning.
3. Digital aerial cameras are not conquering the market as quickly as digital
cameras did on the consumer market; they are treated together with
optical scanners.
4. Digital video cameras are not only used to record movies. They are also
used in aerial Earth Observation to provide low cost (and low resolution)
images for mainly qualitative purposes, for instance to provide visual
information about an area.
5. Multispectral scanners are mostly operated from satellites and other space
vehicles. The essential difference between multispectral scanners and
satellite line cameras is the imaging/optical system employed: multispectral
scanners use a moving mirror to “scan” a line (i.e. a narrow strip on the
ground) and a single detector instead of recording intensity values of an
entire line at one instant by an array of detectors as for line cameras.
6. Thermal scanners are placed here in the optical domain purely for the sake
of convenience. They exist as special instruments and as a component of
multispectral radiometers. Thermal scanners provide us with data that
can be directly related to object temperature.
7. Passive microwave radiometers detect emitted radiation of the Earth’s
surface in the 10 to 1000 mm wavelength range. These radiometers are
mainly used in mineral exploration, for monitoring soil-moisture changes,
and for snow and ice detection.
8. Laser scanners measure the distance from the laser instrument to many
points of the target in “no time” (e.g. 150,000 points in one second). Laser
ranging is often referred to as LIDAR (Light Detection and Ranging). The
prime application of airborne laser scanning (ALS) is for creating high
resolution digital surface models and digital terrain models.
9. Imaging radar (Radio Detection and Ranging) operates in the spectral range
10–1000 mm. Radar instruments are active sensors and because of the
range of wavelengths used they can provide data day and night, under all
weather conditions. Radar waves can penetrate clouds; only heavy rainfall
affects imaging to some degree. One of its applications is, therefore, the
mapping of areas that are subject to permanent cloud cover.
10. Radar altimeters are used to measure elevation profiles of the Earth’s
surface that is parallel to the receiving satellite’s orbit. Radar altimeters
operate in the 10– 60 mm range and allow us to calculate elevation with
an accuracy of 20–50 mm. Radar altimeters are useful for measuring
relatively smooth surfaces.
11. Sonar, which stands for Sound Navigation Ranging, is used, for example, for
mapping river beds and sea floors, and for detecting obstacles underwater.
Sonar works by emitting a small burst of sound from a ship. The sound is
reflected off the bottom of the body of water. The time taken for the
reflected pulse to be received corresponds to the depth of the water. More
advanced systems also record the intensity of the return signal, thus giving
information about the material on the sea floor.

Remote Sensing Satellite Systems

Sensors used in Earth Observation can be operated at altitudes ranging from just
a few centimeters above the ground using field equipment to those far beyond the
atmosphere. Very often the sensor is mounted on a moving vehicle which we call
the platform such as an aircraft or a satellite. Occasionally, static platforms are
used.

Satellites are launched by rocket into space, where they then circle the Earth for 5
to 12 years on a predefined orbit. The choice of orbit depends on the objectives of
the sensor mission; orbit characteristics and different orbit types are explained
below. A satellite must travel at high speed to orbit at a certain distance from the
Earth; the closer to the Earth, the faster the speed required. A space station such
as ISS has a mean orbital altitude of 400 km and travels at roughly 27,000 km/h.
The Moon at a distance of 384,400 km can conveniently circle the Earth at only
3700 km/h. At altitudes of 200 km, satellites already encounter traces of the
atmosphere, which causes rapid orbital and mechanical decay. The higher the
altitude, the longer is the expected lifetime of the satellite.
The majority of civilian Earth-observing satellites orbit at altitudes ranging from
500 to 1000 km. Here we generally find the “big boys”, such as Landsat-7 (2200 kg)
and Envisat (8200 kg), but the mini-satellites of the Disaster Management
Constellation (DMC) also orbit in this range. DMC satellites have a weight of
around 100 kg and were launched by several countries into space early in the
current millennium at relatively low cost. These satellites represent a network for
disaster monitoring that provides images in three or four spectral bands with a
ground pixel size of 32 m or smaller.

Satellites have the advantage over aerial survey of continuity. Meteosat-9, for
example, delivers a new image of the same area every 15 minutes and it has done
so every day for many years. The high temporal resolution at low cost goes
together with a low spatial resolution (pixel size on the ground of 1 by 1 km). Both
the temporal and the spatial resolution of satellite remote sensors are fixed. While
the temporal and the spatial resolution of satellite remote sensors are fixed. While
aerial surveys have been restricted in some countries, access to satellite RS data is
commonly easier, although not every type of satellite RS image is universally
available.

Orbital Parameters

The monitoring capabilities of a satellite-borne sensor are to a large extent
determined by the parameters of the satellite’s orbit. An orbit is a circular or
elliptical path described by the satellite in its movement around the Earth.
Different types of orbits are required to achieve continuous monitoring
(meteorology), global mapping (land cover mapping) or selective imaging (urban
areas). For Earth Observation, the following orbit characteristics are relevant:

1. Orbital altitude is the distance (in km) from the satellite to the surface of the
Earth. It influences to a large extent the area that can be viewed (i.e. the
spatial coverage) and the details that can be observed (i.e. the spatial
resolution). In general, the higher the altitude, the larger the spatial
coverage but the lower the spatial resolution.
2. Orbital inclination angle is the angle (in degrees) between the orbital plane

and the equatorial plane. The inclination angle of the orbit determines,
together with the field of view (FOV) of the sensor, the latitudes up to
which the Earth can be observed. If the inclination is 60°, then the satellite
orbits the Earth between the latitudes 60° N and 60° S. If the satellite is in a
low-Earth orbit with an inclination of 60°, then it cannot observe parts of
the Earth at latitudes above 60° North and below 60° South, which means it
cannot be used for observations of the Earth’s polar regions.
3. Orbital period is the time (in minutes) required to complete one full orbit.

For instance, if a polar satellite orbits at 806 km mean altitude, then it has
an orbital period of 101 minutes (see the worked example after this list).
The Moon has an orbital period of 27.3
days. The speed of the platform has implications for the type of images that
can be acquired. A camera on a low-Earth orbit satellite would need a very
short exposure time to avoid motion blur resulting from the high speed.
Short exposure times, however, require high intensities of incident
radiation, which is a problem in space because of atmospheric absorption.
It should be obvious that the contradictory demands of high spatial
resolution, no motion blur, high temporal resolution, long satellite lifetime
(thus lower cost) represent a serious challenge for satellite- sensor
designers.
4. Repeat cycle is the time (in days) between two successive identical orbits.

The revisit time (i.e. the time between two subsequent images of the same
area) is determined by the repeat cycle together with the pointing
capability of the sensor. Pointing capability refers to the possibility of the
sensor–platform combination to look to the side, or forward, or backward,
and not only vertically downwards. Many modern satellites have such a
capability. We can make use of the pointing capability to reduce the time
between successive observations of the same area, to image an area that is
not covered by clouds at that moment, and to produce stereo images.
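
As a rough check of the 806 km / 101 minute figures above, the orbital period of a
circular orbit follows from Kepler's third law, T = 2π√(r³/GM). A minimal sketch,
assuming a spherical Earth of mean radius 6371 km:

    import math

    GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
    R  = 6_371_000.0        # assumed mean Earth radius, m

    r = R + 806_000.0       # orbital radius for the 806 km altitude quoted above
    T = 2 * math.pi * math.sqrt(r**3 / GM)   # Kepler's third law, circular orbit
    print(f"Orbital period: {T / 60:.0f} minutes")   # ~101 minutes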

The following orbit types are most common for remote sensing missions:

1. Polar orbit refers to orbits with an inclination angle between 80° and 100°.
An orbit having an inclination larger than 90° means that the satellite’s
motion is in a westward direction. Such a polar orbit enables observation
of the whole globe, also near the poles. Satellites typically orbit at altitudes
of 600–1000 km.
2. Sun-synchronous orbit refers to a polar or near-polar orbit chosen in such a

way that the satellite always passes overhead at the same local solar time.
Most Sun-synchronous orbits cross the Equator mid-morning, at around
10:30 h local solar time. At that moment the Sun angle is low and the
shadows it creates reveal terrain relief. In addition to daylight images, a Sun-
synchronous orbit also allows the satellite to record night images (thermal
or radar) during the ascending phase of the orbit on the night side
of the Earth.
3. A geostationary orbit refers to orbits that position the satellite above the

Equator (inclination angle: 0°) at an altitude of approximately 36,000 km.
At this distance, the orbital period of the satellite is equal to the rotational
period of the Earth, exactly one sidereal day. The result is that the satellite
has a fixed position relative to the Earth. Geostationary orbits are used for
meteorological and telecommunication satellites.

Thermal remote sensing

Thermal remote sensing is based on the measurement of electromagnetic radiation
in the infrared region of the spectrum. The wavelengths most commonly used are
those in the intervals 3–5 µm and 8–14 µm, in which the atmosphere is fairly
transparent and the signal is only slightly attenuated by atmospheric absorption.
Since the source of the radiation is the heat of the imaged surface itself, the
handling and processing of thermal infrared data is considerably different from
remote sensing based on reflected sunlight:

1. The surface temperature is the main factor that determines the amount of
emitted radiation measured in the thermal wavelengths. The temperature
of an object varies greatly depending on time of day, season, location,
exposure to solar irradiation, etc., and is difficult to predict. In reflectance
remote sensing, on the other hand, the incoming radiation from the Sun is
considered constant and can be readily calculated, although atmospheric
correction has to be taken into account.
2. In reflectance remote sensing, the characteristic property we are interested

in is the reflectance of the surface at different wavelengths. In thermal
remote sensing, however, the one property we are interested in is, rather,
how well radiation is emitted from the surface at different wavelengths.
3. Since thermal remote sensing does not depend on reflected sunlight, it can

also be done at night (for some applications this is even better than during
the day).

Applications of Thermal Remote Sensing

Thermal hotspot detection: One application of thermal remote sensing is the
detection and monitoring of small areas with thermal anomalies. The anomalies
can be related to fires, such as forest fires or underground coal fires, or to
volcanic activity, such as lava flows and geothermal fields.

Glacier monitoring: With thermal remote sensing, studies of glaciers can go
further than the plain observation of their extent. Understanding the dynamics of
a glacier’s state requires environmental variables. Ground surface temperature is
obviously among the most important variables that affect glacier dynamics.

Urban heat islands: The temperature of many urban areas is significantly higher
than that of surrounding natural and rural areas. This phenomenon is referred
to as an urban heat island. The temperature difference is usually larger at
night than during the day and occurs mainly due to the change of matter covering
the land as a result of urban development: land cover in built-up areas retains
heat much better than land cover in natural and rural areas. This affects the
environment in many ways: it modifies rainfall patterns, wind patterns, air
quality, the seasonality of vegetation growth, and so on.

Processing of Raw Remote Sensing Data


Radiometric correction

Various techniques can be grouped under the heading radiometric correction, which
aims to correct for various factors that cause degradation of raw RS data.
Radiometrically correcting data should make them more suitable for information
extraction. Techniques for modifying recorded DNs serve any of the four main
purposes outlined below:

1. Image Enhancement: Enhancing images so that they are better suited for
visual interpretation. Image enhancement techniques are introduced in a
separate section because they can be taken a step further, namely to “low-
level image processing” for computer visualization.
2. Image Restoration: Correcting data for imperfections of the sensor. The
detectors of a camera all have a slightly different response. We can
determine the differences by radiometric calibration and, accordingly,
apply radiometric correction later to the recordings of the camera.
Scanners often use several detectors per channel instead of only one.
Again, the detectors will each have (slightly) different radiometric responses,
with the consequence that the resulting image may be striped. A destriping
correction will normalize the detectors relatively, if calibration data is
absent. A detector may also fail. We may then obtain an image in which,
for example, every 10th line is black. A line drop correction will
cosmetically fix the data. Another detector problem is random noise, which
degrades radiometric information content and makes an RS image appear
as if salt and pepper was sprinkled over the scene. There can be other
degradations caused by the sensor-platform system that are not so easily
corrected, such as compensating for image motion blur, which relies on a
mathematically complex technique. We got used to referring to these types
of radiometric corrections as image restoration.
3. Scene Normalization: This is correcting data for scene peculiarities and
atmospheric disturbances. One scene peculiarity is how the scene is
illuminated. Consider an area at appreciable latitude, such as the
Netherlands. The illumination of the area will be quite different in winter
than in summer (overall brightness, shadows, etc.), because of differences
in Sun elevation.

4. Atmospheric Correction: An atmospheric degradation effect, which is
already disturbing when extracting information from one RS image, is
atmospheric scattering. Sky radiance at the detector causes haze in the
image and reduces contrast.

Elementary image enhancement


There are two approaches to elementary image processing to enhance an image:
histogram operations and filtering. Histogram operations aim at global contrast
enhancement, in order to increase the visual distinction between features, while
filter operations aim at enhancing local contrast (edge enhancement) and
suppressing unwanted image detail. Histogram operations look at DN values
without considering where they occur in the image and assign new values from a
look-up table based on image statistics. Filtering is a “local operation” in which the
new value of a pixel is computed based on the values of the pixels in the local
neighborhood. The figure below shows the effect of contrast enhancement and
edge enhancement for the same input image.
An (a) original, (b) contrast enhanced, and (c) edge enhanced image

Histogram Operations

The radiometric properties of a digital image are revealed by its histogram, which describes the distribution of the pixel values of the image. By changing the histogram, we change the visual quality of the image. Pixel values (DNs) for 8-bit data range from 0 to 255, so a histogram shows the number of pixels having each value in this range, i.e. the frequency distribution of the DNs.
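Such a histogram is straightforward to compute. The short sketch below is a minimal illustration, with a synthetic `band` array standing in for real data:

import numpy as np

# Synthetic 8-bit band standing in for real data.
band = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

# hist[v] = number of pixels with DN value v, for v = 0..255.
hist = np.bincount(band.ravel(), minlength=256)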

Two techniques of contrast enhancement will now be described: linear contrast stretch and histogram equalization (occasionally also called histogram stretch). Both are "grey scale transformations", which convert input DNs (of raw data) to new brightness values (for a more appealing image) by a user-defined "transfer function".

Linear contrast stretch is a simple grey scale transformation in which the lowest
input DN of interest becomes zero and the highest DN of interest becomes 255.
The monitor will display zero as black and 255 as white.
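A minimal sketch of a linear contrast stretch follows. The `linear_stretch` helper and the synthetic band are assumptions for illustration only; the DN range 58-158 matches the figure further below.

import numpy as np

def linear_stretch(band, dn_min, dn_max):
    """Map the DN range of interest [dn_min, dn_max] linearly onto
    0..255; values outside the range are clipped to black or white."""
    scaled = (band.astype(np.float64) - dn_min) / (dn_max - dn_min) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Raw data occupying only DNs 58..158 are stretched to the full range.
band = np.random.randint(58, 159, size=(512, 512), dtype=np.uint8)
stretched = linear_stretch(band, 58, 158)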

Histogram equalization is a non-linear transformation that aims at achieving a more uniform distribution in the histogram.
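A common way to implement histogram equalization is to use the normalized cumulative histogram as the transfer function. The sketch below is one such implementation, not the only one; the names and the synthetic data are illustrative assumptions, and a band containing a single DN value would need a guard against division by zero.

import numpy as np

def histogram_equalize(band):
    """Equalize an 8-bit band: the normalized cumulative histogram
    serves as the grey-scale transfer function (look-up table)."""
    hist = np.bincount(band.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[band]

band = np.random.randint(58, 159, size=(512, 512), dtype=np.uint8)
equalized = histogram_equalize(band)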

It is important to note that contrast enhancement (by linear stretch or histogram equalization) merely amplifies small differences between DN values so that we can visually differentiate between features more easily. Contrast enhancement does not, however, increase the information content of the data, and it does not consider a pixel neighborhood.
Histograms of raw data with DNs between 58 and 158 and the resulting images, for three cases: no stretch (the image uses only part of the 0-255 grey scale), linear stretch (DNs 58-158 mapped linearly onto 0-255), and histogram stretch (histogram equalization).
Filter operations

A further step in producing optimal images for interpretation is to apply filtering. Filtering is usually carried out for a single band. Filter algorithms can be used to enhance images by, for example, reducing noise ("smoothing" an image) or sharpening a blurred image. Filter operations are also used to extract features, such as edges and lines, from images and to automatically recognize patterns and detect objects. There are two broad categories of filters: linear and non-linear filters.

Linear filters calculate the new value of a pixel as a linear combination of the given values of the pixel and those of neighbouring pixels. A simple example of the use of a linear smoothing filter is when the average of the pixel values in a 3×3 pixel neighbourhood is computed and that average is used as the new value of the central pixel in the neighbourhood. We can conveniently define such a linear filter through a kernel. The kernel specifies the size of the neighbourhood that is considered (3×3, 5×5, 3×1, etc.) and the coefficients for the linear combination.
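The sketch below shows how such a kernel is applied in practice, using a 3×3 averaging kernel. It is a minimal illustration: the use of `scipy.ndimage.convolve` and the synthetic band are choices made here, not prescribed by the text.

import numpy as np
from scipy.ndimage import convolve

# 3x3 averaging kernel: equal coefficients that sum to 1.
kernel = np.ones((3, 3)) / 9.0

band = np.random.randint(0, 256, size=(512, 512)).astype(np.float64)

# Each output pixel becomes the mean of its 3x3 neighbourhood.
smoothed = convolve(band, kernel, mode='nearest')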

Filters are mainly used for:

1. Noise Reduction
This is achieved by a kernel in which all the values are 1, with the summed result divided by the number of kernel elements: the values of the pixels in the neighborhood are summed and averaged. The effect of applying this averaging filter is that an image will become blurred or smoothed. This filter could be applied to radar images to reduce the effect of speckle.

Kernel (x 1/9)   Input        Output
1 1 1            16 12 20
1 1 1            13  9 15     12
1 1 1             2  7 12

(the output value 12 is the average of the nine input values: 106/9 ≈ 11.8)
2. Edge Detection
Filtering can also be used to detect the edges of objects in images. Such edges correspond to local differences in DN values. This is done using a gradient filter, which calculates the difference between neighbouring pixels in some direction. Edge detection filtering produces small values in homogeneous areas of an image, while edges are represented by large positive or negative values. Edge detection filters can be easily recognized by examining the kernel elements: their sum must be zero.

x-gradient filter      y-gradient filter      all-directional filter

 0  0  0                0 -1  0                -1 -1 -1
-1  0  1                0  0  0                -1  8 -1
 0  0  0                0  1  0                -1 -1 -1

3. Edge Enhancement
Filtering can also be used to emphasize local differences in DN values by increasing contrast, for example for linear features such as roads, canals and geological faults. This is done using an edge-enhancing filter, which calculates the difference between the central pixel and its neighbours. This is implemented using negative values for the non-central kernel elements; a sketch applying these kernels follows the figure below.

Kernel used for edge enhancement:

-1 -1 -1
-1 16 -1
-1 -1 -1

a) Original image b) Edge enhanced image c) Smoothed image
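For completeness, here is a short sketch applying the x-gradient and edge-enhancement kernels from the list above, again using `scipy.ndimage.convolve`. Dividing the edge-enhancement result by the kernel sum is an assumption made here to preserve overall brightness; it is not stated in the text.

import numpy as np
from scipy.ndimage import convolve

x_gradient = np.array([[ 0, 0, 0],
                       [-1, 0, 1],
                       [ 0, 0, 0]], dtype=np.float64)      # elements sum to 0

edge_enhance = np.array([[-1, -1, -1],
                         [-1, 16, -1],
                         [-1, -1, -1]], dtype=np.float64)  # elements sum to 8

band = np.random.randint(0, 256, size=(512, 512)).astype(np.float64)

# Large positive/negative responses mark vertical edges; homogeneous
# areas give values near zero.
edges = convolve(band, x_gradient, mode='nearest')

# Normalizing by the kernel sum keeps overall brightness unchanged.
sharpened = np.clip(convolve(band, edge_enhance / edge_enhance.sum(),
                             mode='nearest'), 0, 255)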

Image Interpretation
In general, methods for extracting information from remote sensing images can
be subdivided into two groups:

1. Information extraction based on visual image interpretation. Typical examples of this approach are visual interpretation methods for land use or soil mapping. Acquisition of data from aerial photographs for topographic mapping is also based on visual interpretation.
2. Information extraction based on semi-automatic processing by computer. Examples of this approach include automatic generation of DTMs, digital image classification and calculation of surface parameters.

The most intuitive way of extracting information from remote sensing images is
by visual image interpretation, which is based on our ability to relate colours and
patterns in an image to real world features.

Interpretation fundamentals

Human vision: Human vision goes a step beyond the perception of colour: it deals with the ability of a person to draw conclusions from visual observations. When analysing an image, you typically find yourself somewhere between the following two processes: direct and spontaneous recognition; and logical inference, using clues to draw conclusions by a process of reasoning. Spontaneous recognition refers to the ability of an interpreter to identify objects or features at first glance. For example, agronomists would immediately recognize pivot irrigation systems from their circular shape. They are able to do so because of earlier (professional) experience. Similarly, most people can directly relate what they see on an aerial photo to the terrain features of the place where they live (because of "scene knowledge").

Logical inference means that the interpreter applies reasoning. In the reasoning,
the interpreter uses acquired professional knowledge and experience. Logical
inference is, for example, concluding that a rectangular shape is a swimming pool
because of its location in a backyard garden near to a house.

Interpretation elements

We need a set of terms to express the characteristics of an image that we can use
when interpreting the image. These characteristics are called interpretation
elements and are used, for example, to define interpretation keys, which provide
guidelines on how to recognize certain objects.
RS image of an area in Spain; the circular areas are pivot irrigation systems.

1. Tone is defined as the relative brightness in a black-and-white image. Tonal variations are an important interpretation element. The tonal expression of objects in an image is directly related to the amount of light (or other forms of EM radiation) reflected (or emitted) from the surface. Different types of rock, soil or vegetation are most likely to have different tones. Variations in moisture conditions are also reflected as tonal differences in an image: increasing moisture content gives darker grey tones.
2. Texture relates to the frequency of tonal change. Texture may be described by terms such as coarse or fine, smooth or rough, even or uneven, mottled, speckled, granular, linear, woolly, etc. Texture can often be related to terrain surface roughness. Texture is strongly related to the spatial resolution of the sensor used. A pattern on a large-scale image may show as texture on a small-scale image of the same scene.
3. Pattern refers to the spatial arrangement of objects and implies the
characteristic repetition of certain forms or relationships. Pattern can be
described by terms such as concentric, radial and checkerboard. Some land
uses have specific and characteristic patterns when observed from the air
or space. Different types of irrigation may spring to mind, or different
types of housing on an urban fringe. Other typical examples include
hydrological systems (a river and its tributaries) and patterns related to
erosion.
4. Size of objects can be considered in a relative or absolute sense. The width of a road can be estimated, for example, by comparing it to the size of the cars using it, which is generally known. Subsequently, the width determines the road type, e.g. primary road, secondary road, and so on.
5. Height differences are important for distinguishing between different vegetation types, building types, etc. Elevation differences provide us with clues in geomorphological mapping. We need a stereogram and stereoscopic viewing to observe height and elevation. Stereoscopic viewing facilitates interpretation of both natural and man-made features.
6. Location/association refers to the situation of an object in the terrain or in relation to its surroundings. A forest in the mountains is different from a forest close to the sea or one near a meandering river. A large building at the end of a number of converging railroads is likely to be a railway station; we would not expect a hospital at such a location.
7. Shape refers to the general form, structure, or outline of individual objects.
Shape can be a very distinctive clue for interpretation. Straight edge shapes
typically represent urban or agricultural (field) targets, while natural
features, such as forest edges, are generally more irregular in shape, except
where man has created a road or clear cuts. Farm or crop land irrigated by
rotating sprinkler systems would appear as circular shapes.
8. Shadow is also helpful in interpretation as it may provide an idea of the
profile and relative height of a target or targets which may make
identification easier. However, shadows can also reduce or eliminate
interpretation in their area of influence, since targets within shadows are
much less (or not at all) discernible from their surroundings. Shadow is
also useful for enhancing or identifying topography and landforms,
particularly in radar imagery.

With these eight interpretation elements, you may have noticed a relation with
the spatial extent of the feature to which they relate. Tone or hue can be defined
for a single pixel; texture is defined for a group of adjacent pixels, not for a single
pixel. The other interpretation elements relate to individual objects or a
combination of objects. The simultaneous and often intuitive use of all these
elements is the strength of visual image interpretation.

Course Curriculum
Prerequisites: General Skills in Mathematics, Physics, Geography and
Computer Applications.

Introduction:

This course unit is intended to equip the trainee with relevant knowledge, skills and attitudes in the concepts and principles of Remote Sensing, the data acquisition process and the interpretation of remote sensing images, the types of spatial data used in GIS, the spatial referencing and positioning frames used in GIS, and the applications of Remote Sensing and GIS in water and related sectors.

Learning Objectives:

At the end of this course unit, the trainee should be able to:

a) Explain the Concepts and Principles of Remote Sensing and GIS


b) Outline the spatial data types used in GIS
c) Describe and apply the elements of Image interpretation to interpret
remote sensing images
d) Describe Spatial Referencing and Positioning Frames used in GIS
e) List various applications of Remote Sensing and GIS in water and related
sectors
Learning Activities: Lectures, exercises, group discussions, practicals in visual image interpretation, and lab work in computer-aided image interpretation.

Topics and Learning Activities


Introduction: Definition of Remote Sensing, Definition of GIS, Relationship between Remote Sensing and GIS, Relevance of Remote Sensing and GIS in the Water Sector.
Concepts and Foundations of Remote Sensing: Introduction, Sources of Energy, Electromagnetic Spectrum, Radiation Interaction with Atmosphere and Earth Surface Features, Types of Remote Sensing.
Remote Sensing Data Acquisition Process: Introduction, Data Sources, Remote Sensing Sensors, Remote Sensing Satellite Systems, Processing of Raw Remote Sensing Data.
Image Interpretation: Introduction, Visual Image Interpretation, Equipment for Image Interpretation.
Concepts and Foundations of GIS: Components of GIS, Principles of GIS, Functions of GIS, Ideal GIS and Real World, Sources of Data in GIS.
Geographic Information and Spatial Data Types: Models and representations of the real world, Geographic phenomena, Computer representations of geographic information (Vector and Raster), Topology and spatial relationships.
Spatial Referencing and Positioning Frames: Coordinate systems, Map projections, Coordinate transformations.
Applications of Remote Sensing and GIS: Introduction to applications of Remote Sensing and GIS in Agriculture, Forestry, Geology, Hydrology, Planning, Land Cover Mapping, Marine Resources, Disaster Management, Drought Mitigation and Food Security, etc.


References

1. Wim H. Bakker et al., Principles of Remote Sensing: An Introductory Textbook.
2. Kiefer et al., Remote Sensing and Image Interpretation.
3. Arthur and Ladson, Introduction to Remote Sensing.
4. Otto and Rolf, Principles of Geographic Information Systems: An Introductory Textbook.

Learning Resources

GIS and Remote Sensing laboratory, Remote Sensing and GIS data, GIS and Remote Sensing textbooks, journals, broadband Internet.

Assessment
CATs 30%, Main Exam 70%
