Remote Sensing Abstracts

Remote sensing is the science of gathering information about objects or areas from a distance using various methods such as aerial photography, laser scanning, and satellite altimetry. It encompasses photogrammetry, geodesy, and GIS, and has applications across multiple fields including medicine and environmental monitoring. The history of remote sensing includes significant technological advancements from early photography to modern satellite systems, enabling high-resolution imaging and data analysis.


Remote sensing abstract – notes to pass the test on the first attempt

Remote sensing is the science of obtaining information about objects or areas from a distance, typically
from aircraft or spacecraft. It is an increasingly important source of data for mapping:

- aerial photography
- laser scanning
- satellite altimetry

Photogrammetry - the science of measuring distances and dimensions of objects on the Earth from
stereo images, typically taken from aircraft by special cameras.
Geodesy deals with the determination of the shape of the Earth and the definition of coordinate
systems. It provides the necessary geometric reference for working with satellite images.
The relationship with cartography is two-way: cartographic outputs serve as reference data, while
cartographic tools are used to present the results of remote sensing analysis.
GIS (Geographic Information Systems) contain a number of image processing tools.

Remote sensing uses many methods of image processing. Image filtering, contrast manipulation, object
classification, etc., are also used in a wide range of other fields, such as medicine.
Some fields of remote sensing such as radar remote sensing or altimetry use general methods of signal
processing. There is also a large overlap between remote sensing and a field called airborne geophysics.
Methods such as airborne gravimetry provide interesting area information about the geological
structure, but are not considered part of remote sensing.
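For illustration, a minimal Python sketch of one of the image-processing methods named above, a linear contrast stretch; the band values are made-up digital numbers, not data from any real sensor:

    # Linear contrast stretch of an 8-bit band to the full 0-255 range.
    import numpy as np

    band = np.array([[60, 80], [100, 120]], dtype=np.uint8)  # illustrative DNs
    lo, hi = int(band.min()), int(band.max())                # assumes hi > lo
    stretched = ((band - lo) / (hi - lo) * 255).astype(np.uint8)
    print(stretched)  # [[0 85] [170 255]]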

Remote sensing systems:


1. Airborne: UAVs (Unmanned Aerial Vehicles) – a technology employing RGB, multispectral and
thermal sensors, reaching resolutions of centimetres. Usually, several hectares can be covered
during one flight. The use of UAVs is regulated by relatively strict legislation, especially with
respect to built-up areas, protected areas and the vicinity of airports.
2. Orbital – polar orbit satellite systems, shuttles and orbital stations

History of remote sensing


Camera obscura (pinhole camera) – simple device consisting of a box with a small circular opening on
one side. Light rays reflected from surrounding objects enter through the opening into the box -> an
inverted image forms on its opposite side.

The beginnings of photography

1614 – silver salts darkened when exposed to light

1820s – combination of two known principles: camera obscura and photosensitive substances /
exposure – 8 hours

1838 – use of sensitive layer of silver iodide / exposure – several minutes

1855 – discovery of additive color mixing

Red + green = yellow
Green + blue = cyan
Blue + red = magenta
(characteristic of incident colour rays and not of colour pigments)
Stereophotography – two images of the same object displayed from slightly different positions in
space

1783 – use of hot-air balloons to observe battlefields

1858 – first photograph from a balloon

1907 – stereoautograph, which enables spatial perception of a landscape based on observation of two
consecutive photos in the image series

1908 – a photo from an aircraft

1936 – production of ortho-photos

1946 – image of the Earth's curvature from an altitude of 105 km

1957 – first artificial satellite, Sputnik, was launched

1960–1972 – American satellite Corona launched into orbit (resolution of 12 m, later 1.8 m)

1971–1986 – American satellite Hexagon (resolution of up to 0.6 m)

1972 – first civilian remote sensing satellite, Landsat, by NASA

1978 – Seasat, the first civilian satellite to carry a synthetic aperture radar (SAR) sensor -> satellite oceanography

1986 – second civilian remote sensing system, the French SPOT system (Satellite Pour l'Observation de la
Terre) (resolution of 20 m)

Before powerful PCs were available, satellite image processing was dependent on mainframes.

2000 – Space Shuttle Endeavour mission (SRTM), during which the Earth's surface was captured by a
synthetic aperture radar. Based on this data, a global digital surface model was created by the JPL. It
was the first homogeneous and, moreover, publicly available model, which led to the development of a
number of applications in many fields such as geomorphology, hydrology, etc.

2008 – free access to Landsat data

Aerial photography
Aerial photographs are images taken from an aircraft in a vertical direction. It is an instant recording of
the landscape from the vertical perspective.

A vertical view of the landscape is unusual for human perception. Its advantage is that information can
be transferred directly from the image to a map. Aerial photographs have much in common with maps
but, unlike maps, they are not in a symbolized form and are not generalized. For their correct reading,
the skill of photointerpretation is essential. A considerable amount of practice, previous knowledge of
the terrain, or at least knowledge of the observed landscape type are very useful. The main advantages
of aerial photography are high resolution and low cost.

Ortho-rectified images can be used as a raster layer in GIS. They are often used in interactive 3D
landscape viewers, such as Google Earth.

Mainly black and white photographs based on silver iodide


Orange and yellow colour filters -> maintain sharpness, block the blue light that is most affected by
atmospheric scattering

Near-infrared materials have emerged -> layer sensitive to green, red and near-infrared (NIR) radiation

Green vegetation -> red on positives

Bare areas -> bluish

Water -> dark

+ distinguishing between vegetation and greenery-imitating military camouflage -> camouflage-detection films

+ high sensitivity to vegetation is used in forestry and ecology

+ make it possible to identify damage to vegetation by drought or pests

Spectrozonal – colour infrared photographs following the Soviet model in Czechoslovakia

Basic elements of aerial photographs

Digital aerial cameras

Scale of aerial photographs

Relief (perspective) distortion

Photo interpretation (visual interpretation)

To perform mapping based on photographs / converting elements in the image into a symbolized form ->
need to recognize objects correctly

Elements used in photo interpretation:

- shape

Ground plans of buildings -> rectangular

Stadiums, airports, factory buildings -> specific shape

Sewage treatment plants -> circular tanks

Railways -> more regular radii of curves

Tree species -> shape of crowns

- spatial pattern

Is the repetition of a certain element (bush/tree)

Characterizes a typical spatial arrangement of surface classes (rangeland/forest)

We can distinguish, based on their characteristic spatial pattern: hop gardens, vineyards, orchards and
other agricultural areas / a cultivated forest from a natural one

- size

Relative size of an object with respect to another object rather than absolute size is usually used. If there
is high contrast with respect to the surroundings, it is even possible to distinguish features that are
smaller than the image resolution (pixel size). A typical example is a white dividing line on a dark tarmac
road.

- grey shade/colour tone

Variation of tone in an image is caused by the differences in the reflectance of different parts of the
Earth's surface. When working with an image, relative brightness (tone) to the surrounding areas is
usually used. Generally, the brightest surfaces in BW photographs or panchromatic satellite images are
surfaces without vegetation, i.e. the bare soil of cultivated fields, sand and gravel in river beds or in sand
pits.

Darker surfaces -> vegetation (the darkest are coniferous forests)

Water bodies -> usually the darkest in the image (Pure deep water is very dark while turbulent water or
water with algae is lighter)

The built-up areas -> light grey with occasional patches of reddish or bluish tones

The human eye is able to recognize many more colours than shades of grey. When viewing an image in
natural RGB colours, it usually has a rather 'flat' appearance, as it contains relatively few distinct
colours, mainly tones of green and brown.

- shadow

To distinguish: trees from shrubs, types of bridges and various buildings, animal species (counting
mammals in African national parks)
mammals in African national parks)

Length of shadow -> determine the height of the object (see the sketch after this list)

- texture

Is the variation of tone or colour without recognizable individual objects

Different crops -> different textures in the images

- association

Is the interconnection of multiple objects in an image which helps to determine the type of one of them

Sewage treatment plants -> close to rivers on their outflow from the city

Train station -> next to the rails in a city / near a village with an access road
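
A minimal sketch of the shadow-length rule referenced in the list above, assuming the sun elevation angle at acquisition time is known; all values are illustrative:

    # Estimate object height from shadow length and sun elevation.
    import math

    shadow_length_m = 12.0     # measured in the image
    sun_elevation_deg = 35.0   # assumed known from acquisition metadata
    height_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
    print(f"estimated height: {height_m:.1f} m")  # ~8.4 m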

Stereoscopic properties of images

Scanning of aerial photographs

Digital photogrammetry

Aerial photography archives

Satellite sensors
Remote sensing data
Formation of digital images

Remote sensing data is generated mostly by orbital scanning (the transverse dimension of the image is
provided by linear CCD detectors or other techniques; the movement of the satellite ensures scanning
along the flight direction)

CCD detectors (Charge Coupled Device) – the main source of image data in many fields; they work based
on the photoelectric effect -> an incident photon brings one of the electrons of the detector to an
excited state. The released electrons are led to the amplifier by electrodes. The output voltage is
proportional to the reflectance of the scanned area on the Earth's surface.

CCD detectors are sensitive to VNIR (visible and near-infrared) radiation / used in most modern optical
orbital sensors (e.g. the MSI instrument on the European Sentinel-2 satellites)

Nyquist theorem - The continuous analogue signal generated by the detectors is further converted to
digital (discrete) signal. To maintain the information content of the analogue signal, the sampling
frequency must be at least twice the maximum frequency of the analogue signal.
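
A minimal Python sketch of the Nyquist criterion, using a hypothetical 10 Hz signal rather than a real detector output:

    # A signal with maximum frequency f must be sampled at more than 2f
    # to preserve its information content.
    import numpy as np

    f_signal = 10.0            # maximum frequency of the analogue signal (Hz)
    f_nyquist = 2 * f_signal   # minimum sampling frequency required

    for f_s in (8.0, 25.0):    # below vs. above the Nyquist rate
        t = np.arange(0.0, 1.0, 1.0 / f_s)
        samples = np.sin(2 * np.pi * f_signal * t)
        status = "aliased" if f_s < f_nyquist else "recoverable"
        print(f"sampling at {f_s:5.1f} Hz -> {len(samples)} samples, {status}")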

The image is reconstructed based on the received signal and the information on the number of pixels in
an image line.

Digital image – a matrix of pixels. Each pixel contains a discrete value of relative signal strength, referred
to as a digital number (DN)

The intensity of a single band is displayed on display devices as a greyscale

RGB – three bands combined and displayed in three primary colours

RGB synthesis – production of a colour image -> each channel has 256 grey levels -> theoretically
possible to display 256^3, i.e. over 16 million colours
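
A minimal sketch of RGB synthesis as described above: three single-band DN matrices stacked into one colour image. The band arrays are random illustrative data, not a real sensor product:

    import numpy as np

    rng = np.random.default_rng(0)
    shape = (4, 4)  # a tiny image; each pixel holds an 8-bit DN per band
    red, green, blue = (rng.integers(0, 256, shape, dtype=np.uint8) for _ in range(3))

    rgb = np.dstack([red, green, blue])  # shape (4, 4, 3)
    print(rgb.shape)
    print(256 ** 3)                      # 16,777,216 theoretically displayable colours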

Resolution

Spatial resolution – the minimum pixel size needed to interpret an object; it defines the ability of the
remote sensing system to identify individual objects on the Earth's surface

Spatial resolution of:

2 m -> to identify small buildings

5 m -> for secondary roads

10 m -> for main roads

If there is a high contrast -> objects below the resolution threshold can also be captured (white lines on a
road)

Spatial resolution      Pixel size       Example satellites

Low resolution          > 1 km           NOAA (1.1 km)

Medium resolution       100 m - 1 km     Sentinel-3 (300 m), MODIS (250 m)

High resolution         5 - 100 m        Sentinel-2 (10 m), Landsat (30 m)

Very high resolution    < 5 m            Ikonos (1 m), SPOT-6 (1.5 m), Pléiades (0.5 m)

Spatial resolution also determines the maximum scale at which an image can be displayed without
recognizing individual pixels. On devices using light emission for display, such as LCD monitors, the
maximum scale on the display is determined by the monitor resolution, which is typically 100 dpi. Thus,
one pixel is displayed on the monitor as an element of 0.254 mm size. The most detailed scale at which
an image with a spatial resolution of 10 m can be displayed on the monitor is thus approximately
1:40,000. However, a much higher dpi is required for printing on paper.
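
The 1:40,000 figure can be reproduced with a short calculation; the 100 dpi monitor value is taken from the paragraph above:

    # Most detailed display scale for a given ground pixel size.
    ground_pixel_m = 10.0          # spatial resolution of the image
    dpi = 100                      # typical monitor resolution
    screen_pixel_m = 0.0254 / dpi  # one displayed pixel = 0.254 mm

    scale_denominator = ground_pixel_m / screen_pixel_m
    print(f"1:{scale_denominator:,.0f}")  # 1:39,370, i.e. roughly 1:40,000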

Lecture notes

1. Assign the correct types of radiance to the symbols in the figure.


(figure and answer not included in the notes)

2. Specify the correct spatial resolution to the satellites.


Spatial resolution      Pixel size       Example satellites
Low resolution          > 1 km           NOAA (1.1 km)
Medium resolution       100 m - 1 km     Sentinel-3 (300 m), MODIS (250 m)
High resolution         5 - 100 m        Sentinel-2 (10 m), Landsat (30 m)
Very high resolution    < 5 m            Ikonos (1 m), SPOT-6 (1.5 m), Pléiades (0.5 m)

3. Which sensor stands mostly for active sensing and which stands for
passive sensing?
Active: SAR sensors (active microwave systems); radars, lidars and altimeters
Passive: mostly use orbital scanning. Older orbital scanners such as TM on Landsat
4 and 5 or AVHRR on NOAA used cross-track scanning technique
4. The Sentinel-2 MSI image segment colour assignment.
MSI uses the principle of longitudinal scanning.

RGB (4, 3, 2)
True colour composite uses the visible light bands red (B04), green (B03) and blue (B02)
in the corresponding red, green and blue colour channels.
5. What does the symbol λ mean?
Wavelength
6. What is passive sensing?
Passive remote sensing systems record radiation reflected from the Earth's surface
or long-wave radiation emitted by the Earth's surface.
Active systems have their own source of electromagnetic radiation, which is
transmitted by the device to the Earth's surface and subsequently the reflected part
is received. These include radars, lidars and altimeters
7. What is the formula for the Normalized difference vegetation index?
Normalized Difference Vegetation Index (NDVI) reflects the photosynthetic activity of vegetation;
it positively correlates with various biophysical parameters, e.g. biomass amount, vegetation
health state and chlorophyll content. Theoretical range of values: -1 to 1.

NDVI = (NIR - RED) / (NIR + RED)
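
A minimal sketch of the formula applied to two band arrays; the reflectance values are illustrative, not real measurements:

    import numpy as np

    red = np.array([[0.10, 0.08], [0.30, 0.05]])  # red band reflectance
    nir = np.array([[0.40, 0.45], [0.32, 0.50]])  # near-infrared reflectance

    ndvi = (nir - red) / (nir + red)  # element-wise, values in [-1, 1]
    print(ndvi.round(2))              # dense vegetation approaches 1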

8. What is the Landsat 7 orbit?


The first civilian satellite remote sensing system was Landsat. Landsat 7 (April 1999
- ...) operates in a sun-synchronous, near-polar orbit at an altitude of about 705 km,
with a 16-day repeat cycle.
A significant advantage of Landsat is that the entire archive containing millions of
scenes is available at no cost.
The Landsat program continues to this day. A big advantage of Landsat is the
backward continuity of the data due to the well-maintained data acquisition
parameters. Seven Landsat satellites have been successfully launched so far. Only
Landsat-6 failed at launch. The data of the first three Landsat satellites are rarely
used due to their lower spatial and radiometric resolution.
9. What does the false color composite stand for?
A colour image can be composed by synthesizing several different bands of a
multispectral image. Colour synthesis (RGB) consists of three bands; each is
expressed in shades of one of the basic colours – red, green and blue (RGB); it is
called additive synthesis. Additive mixing is based on mixing colours that are
related to the physiology of the human eye. There is an addition of colours and thus
an increase in light, the basic colour is black. It is used on monitors and TV screens.
You can compose images in true and false colour compositions.
True colours are obtained when the red channel enters the band measuring the
red part of the EM spectrum, green band goes into the green channel and blue band
goes into the blue channel. The resulting composition depicts (almost) real colours
as perceived by the eye – hence the designation of true (natural) colour. In the case
of Sentinel-2 data, this is the so-called 4-3-2 synthesis (that is, band 4 enters the
red channel, band 3 the green channel and band 2 the blue channel).
If we combine bands in a different order or use different bands for synthesis, so-
called false (unnatural) colours would arise. This can be used, for example, to
highlight certain phenomena that could be misinterpreted in natural colour
compositions. It is necessary to be familiar with the spectral behaviour of objects to
assemble false colour syntheses. Colour composite can be arbitrary, but particular
combinations of bands have been empirically derived for the observation of certain
phenomena. Typically, for example, synthesis with an infrared band that is
particularly useful in the interpretation of water bodies and vegetation.
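
A minimal sketch contrasting the true and false colour compositions described above, with hypothetical Sentinel-2-style band arrays (b02 = blue, b03 = green, b04 = red, b08 = NIR); the data is random, for illustration only:

    import numpy as np

    rng = np.random.default_rng(1)
    b02, b03, b04, b08 = (rng.random((2, 2)) for _ in range(4))

    true_colour = np.dstack([b04, b03, b02])   # 4-3-2: red, green, blue bands
    false_colour = np.dstack([b08, b04, b03])  # 8-4-3: NIR shown as red,
                                               # vegetation appears bright red
    print(true_colour.shape, false_colour.shape)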

10. Which of the types of waves below has the longest wavelength?


11. Which sentence is correct about remote sensing?
"Remote sensing is the science of obtaining information about objects or areas from
a distance, typically from aircraft or spacecraft". - NOAA
