Remote Sensing Abstracts
Remote sensing is the science of obtaining information about objects or areas from a distance, typically
from aircraft or spacecraft. It is an increasingly important source of data for mapping:
- aerial photography
- laser scanning
- satellite altimetry
Photogrammetry - the science of measuring distances and dimensions of objects on the Earth from
stereo images, typically taken from aircraft by special cameras.
Geodesy deals with the determination of the shape of the Earth and the definition of coordinate
systems. It provides the necessary geometric reference for working with satellite images.
The relationship with cartography is of a two-way nature. Cartographic tools are used to present the
results of remote sensing analysis.
GIS (Geographic Information Systems) contain a number of image processing tools.
Remote sensing uses many methods of image processing. Image filtering, contrast manipulation, image
classification, etc. are also used in a wide range of other fields, such as medicine.
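For illustration, a minimal sketch of one such operation, a linear contrast stretch, assuming a single-band image held in a NumPy array (the function name and percentile cut-offs are arbitrary choices for the example):

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Map the given percentile range of raw values linearly to 0-255."""
    lo, hi = np.percentile(band, (low_pct, high_pct))
    stretched = (band.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)

# Synthetic low-contrast band: values cluster around 80
band = np.random.normal(loc=80, scale=10, size=(512, 512))
out = linear_stretch(band)
print(out.min(), out.max())  # 0 255 -> the full grey range is now used
```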
Some fields of remote sensing such as radar remote sensing or altimetry use general methods of signal
processing. There is also a large overlap between remote sensing and a field called airborne geophysics.
Methods such as airborne gravimetry provide interesting area information about the geological
structure, but are not considered part of remote sensing.
1820s – combination of two known principles: camera obscura and photosensitive substances /
exposure – 8 hours
1907 – stereoautograph, which enables spatial perception of a landscape based on observation of two
consecutive photos in the image series
1960-1972 – the American Corona satellites were launched into orbit (resolution of 12 m, later 1.8 m)
1978 – Seasat, the first civilian satellite to carry a synthetic aperture radar (SAR) sensor -> satellite oceanography
1986 – second civilian remote sensing system, the French SPOT system (Satellite Pour l'Observation de la
Terre) (resolution of 20 m)
Before powerful PCs were available, satellite image processing was dependent on mainframes
2000 – Space Shuttle Endeavour mission (SRTM), during which the Earth's surface was captured by a synthetic
aperture radar. Based on this data, a global digital surface model was created by the JPL. It was the first
homogeneous and, moreover, publicly available model, which led to the development of a number of
applications in many fields such as geomorphology and hydrology.
Aerial photography
Aerial photographs are images taken from an aircraft in a vertical direction. They are instant recordings of
the landscape from a vertical perspective.
A vertical view of the landscape is unusual for human perception. Its advantage is that information is
easily transferred from the image to a map. Aerial photographs have much in common with maps, but
unlike maps they are not in a symbolized form and are not generalized. For their correct reading, the skill
of photointerpretation is essential. A considerable amount of practice and previous knowledge of the
terrain, or at least of the observed landscape type, are very useful. The main advantages of aerial
photography are high resolution and low cost.
Ortho-rectified images can be used as a raster layer in GIS. They are often used in interactive 3D
landscape viewers, such as Google Earth.
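As a minimal sketch of this GIS use, assuming the rasterio library and a hypothetical file ortho.tif holding an ortho-rectified GeoTIFF:

```python
import rasterio
from rasterio.plot import show

# "ortho.tif" is a hypothetical path to an ortho-rectified GeoTIFF
with rasterio.open("ortho.tif") as src:
    print(src.crs)   # coordinate reference system of the raster layer
    print(src.res)   # pixel size on the ground (x, y)
    show(src)        # render the image, as a GIS would display the layer
```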
Near-infrared-sensitive materials emerged -> a film layer sensitive to green, red and near-infrared (NIR) radiation
Relief (perspective) distortion
To perform mapping based on photographs / to convert elements in the image into a symbolized form ->
we need to recognize objects correctly:
- shape
- spatial pattern
Based on their characteristic spatial pattern we can distinguish hop gardens, vineyards, orchards and
other agricultural areas, or a cultivated forest from a natural one
- size
Relative size of an object with respect to another object, rather than absolute size, is usually used. If there
is high contrast with respect to the surroundings, it is even possible to distinguish features that are
smaller than the image resolution (pixel size). A typical example is a white dividing line on a dark tarmac
road.
- tone
Variation of tone in an image is caused by the differences in the reflectance of different parts of the
Earth's surface. When working with an image, relative brightness (tone) to the surrounding areas is
usually used. Generally, the brightest surfaces in BW photographs or panchromatic satellite images are
surfaces without vegetation, i.e. the bare soil of cultivated fields, sand and gravel in river beds or in sand
pits.
Water bodies -> usually the darkest in the image (pure deep water is very dark, while turbulent water or
water with algae is lighter)
The built-up areas -> light grey with occasional patches of reddish or bluish tones
The human eye is able to recognize many more colours than shades of grey. Even so, an image viewed in
natural RGB colours usually has a rather ‘flat’ appearance, because it contains only a few distinct
colours, such as tones of green and brown.
- shadow
To distinguish: trees from shrubs, types of bridges and various buildings, or animal species (counting
mammals in African national parks)
- texture
- association
The interconnection of multiple objects in an image, which helps to determine the type of one of them
Sewage treatment plants -> close to rivers on their outflow from the city
Train station -> next to the rails in a city / near a village, with an access road
Digital photogrammetry
Satellite sensors
Remote sensing data
Formation of digital images
Remote sensing data is generated mostly by orbital scanning (the transverse dimension of the image is
provided by linear CCD detectors or other techniques; the movement of the satellite ensures the
progression from one image line to the next)
CCD detectors (Charge Coupled Device) – the main source of image data in many fields; they work on the
basis of the photoelectric effect -> an incident photon brings one of the detector's electrons to an
excited state. The released electrons are led to the amplifier by electrodes. The output voltage is
proportional to the reflectance of the scanned area on the Earth’s surface.
CCD detectors are sensitive to VNIR (visible and near-infrared) radiation / used in most modern optical
orbital sensors (e.g. the MSI instrument on the European Sentinel-2 satellites)
Nyquist theorem - The continuous analogue signal generated by the detectors is further converted to
digital (discrete) signal. To maintain the information content of the analogue signal, the sampling
frequency must be at least twice the maximum frequency of the analogue signal.
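A small sketch illustrating the theorem (the 10 Hz signal and the sampling rates are arbitrary example values): sampling above twice the signal frequency preserves it, while sampling below that rate aliases it to a wrong, lower frequency.

```python
import numpy as np

F_SIGNAL = 10.0  # frequency of the analogue signal [Hz]

def dominant_freq(fs, duration=2.0):
    """Sample a 10 Hz sine at rate fs and return the strongest frequency found."""
    t = np.arange(0.0, duration, 1.0 / fs)
    samples = np.sin(2 * np.pi * F_SIGNAL * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_freq(fs=50.0))  # 50 > 2*10: prints 10.0, signal preserved
print(dominant_freq(fs=12.0))  # 12 < 2*10: prints 2.0, aliased frequency
```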
The image is reconstructed based on the received signal and the information on the number of pixels in
an image line.
Digital image – a matrix of pixels. Each pixel contains a discrete value of relative signal strength, referred
to as a digital number (DN)
If the image is a single band, its intensity is displayed on display devices as a grayscale
RGB synthesis – production of a colour image -> each channel has 256 grey levels -> theoretically
possible to display over 16 million colours (256^3 = 16,777,216)
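A minimal sketch of RGB synthesis with NumPy, using three synthetic 8-bit bands in place of real image channels:

```python
import numpy as np

# Three synthetic 8-bit bands standing in for the red, green and blue channels
red   = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
green = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
blue  = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# Stack the bands along a third axis -> colour image of shape (rows, cols, 3)
rgb = np.dstack((red, green, blue))

print(rgb.shape)   # (100, 100, 3)
print(256 ** 3)    # 16777216 theoretically displayable colours
```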
Resolution
Spatial resolution – the minimum pixel size needed to interpret an object; it defines the ability of the
remote sensing system to identify individual objects on the Earth’s surface.
If there is a high contrast -> objects below the resolution threshold can also be captured (white lines on a
road)
Very high resolution <5 m: Ikonos (1 m), SPOT-6 (1.5 m), Pléiades (0.5 m)
Spatial resolution also determines the maximum scale at which an image can be displayed without
recognizing individual pixels. On devices using light emission for display, such as LCD monitors, the
maximum scale on the display is determined by the monitor resolution, which is typically 100 dpi. Thus,
one pixel is displayed on the monitor as an element of 0.254 mm size. The most detailed scale at which
an image with a spatial resolution of 10 m can be displayed on the monitor is thus approximately
1:40,000. However, a far higher dpi is required for printing on paper.
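The scale arithmetic above can be checked with a short sketch (the function name is ours; 0.0254 m is one inch):

```python
def max_display_scale(pixel_size_m, dpi=100):
    """Scale denominator N (for 1:N) at which one image pixel fills one display pixel."""
    display_pixel_m = 0.0254 / dpi        # physical size of one display pixel [m]
    return pixel_size_m / display_pixel_m

print(round(max_display_scale(10)))  # 39370 -> approximately 1:40,000
```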
Lecture notes
3. Which sensor stands mostly for active sensing and which stands for
passive sensing?
Active: SAR sensors (active microwave systems); radars, lidars and altimeters
Passive: mostly use orbital scanning. Older orbital scanners such as TM on Landsat
4 and 5 or AVHRR on NOAA used the cross-track scanning technique
4. The Sentinel-2 MSI image segment colour assignment.
MSI uses the principle of longitudinal (along-track, push-broom) scanning.
RGB (4, 3, 2)
A true-colour composite uses the visible-light bands red (B04), green (B03) and blue (B02)
in the corresponding red, green and blue colour channels.
5. What does the symbol λ mean?
Wavelength
6. What is passive sensing?
Passive remote sensing systems record radiation reflected from the Earth's surface
or long-wave radiation emitted by the Earth's surface.
Active systems have their own source of electromagnetic radiation, which is
transmitted by the device to the Earth's surface and subsequently the reflected part
is received. These include radars, lidars and altimeters
7. What is the formula for the Normalized difference vegetation index?
NDVI = (NIR - Red) / (NIR + Red); for Sentinel-2, NDVI = (B08 - B04) / (B08 + B04).
NDVI reflects the photosynthetic activity of vegetation and positively correlates with
various biophysical parameters, e.g. biomass amount and vegetation health state.
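A minimal NumPy sketch of the index, assuming the two bands are already co-registered reflectance arrays (the sample values are made up for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); results lie in [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)  # epsilon guards against 0/0

# Made-up Sentinel-2-style reflectances: B08 = NIR, B04 = red
b08 = np.array([[0.45, 0.50], [0.30, 0.05]])
b04 = np.array([[0.08, 0.10], [0.12, 0.04]])
print(ndvi(b08, b04))  # dense vegetation ~0.7; bare soil / water near or below 0
```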