S - 10441
Author:
Vladimír KOVAŘÍK
BRNO 2011
Imagery Intelligence (IMINT)
Table of Contents
Introduction
1. Definition, capabilities and limitations of IMINT
2. History of IMINT
3. Imagery
   3.1 Image data types
   3.2 Imagery providers
       3.2.1 Satellite sensors
             3.2.1.1 Commercial satellites
             3.2.1.2 Military satellites
             3.2.1.3 Future satellite systems
       3.2.2 Airborne sensors
       3.2.3 Ground sensors
   3.3 Image resolution and interpretability
       3.3.1 Effect and limitations of spatial resolution
       3.3.2 Image Interpretability Rating Scales
             3.3.2.1 National Image Interpretability Rating Scale
             3.3.2.2 NATO Image Interpretability Rating Scale
4. Collateral information
   4.1 Interpretation keys
   4.2 Models
   4.3 Theme encyclopaedias
   4.4 Report templates
5. Image analysis process
   5.1 Image pre-processing and enhancement
       5.1.1 Image pre-processing
       5.1.2 Image enhancement
   5.2 Elements of image interpretation
   5.3 Image interpretation techniques
       5.3.1 Feature extraction
       5.3.2 Meaning extraction
       5.3.3 Change detection
6. Reporting
   6.1 Target reporting guides
   6.2 Other IMINT products
7. Other domains of imagery intelligence
Abbreviations
References
Introduction
The general purpose of imagery collection is to examine and interpret images in order to identify objects and judge their significance. In military terms, the main purpose of imagery collection and exploitation is to discover new targets, detect changes on the Earth's surface, perform surveillance, plan missions, and assess combat results. More specifically, apart from supporting the creation of geographic products, imagery helps, for example, to assess an opponent's military and industrial capabilities, locate military forces and monitor their status, watch the deployment and disposition of enemy units, and monitor main supply routes or activity at weapon storage sites.
Image analysis is not used solely for military purposes. It can serve in business espionage, where imagery may alert a company to potential problems of a rival or competitor and prompt a change in the company's activities. It can also support the efforts of international bodies in peacekeeping, treaty verification, or arms proliferation control.
The result of the image analysis process is not necessarily another image. It can be a standard or specialized report or, at its simplest, just a piece of information or a confirmation that an object or feature exists in the terrain. The form of the result depends on the request of the customer or end user and on other factors, such as the quality of the image and the time available for accomplishing the task.
The purpose of this study text is to introduce the term Imagery Intelligence, referred to as IMINT; to explain its history, fundamentals, capabilities and limitations; to present the basic procedures used in the process of image analysis; to show the role of collateral information; and to describe the ways of reporting results to the customer. Since imagery is a fundamental element of IMINT, considerable attention is paid to imagery sources, characteristics and processing methods. Although virtually any image can be exploited in IMINT, most of this text focuses primarily on aerial and satellite imagery.
The basic information to be extracted from an image consists of the answers to the questions Where, What, and When; digital image processing is essential for providing accurate answers.
This study text builds on a short introduction to IMINT fundamentals, 'Introduction to Imagery Intelligence (IMINT)', that was prepared within the European Social Funds program at the University of Defence in Brno [1]. This text does not address the organization or planning of IMINT at different command levels. It focuses primarily on the technical procedures used in IMINT.
1. Definition, capabilities and limitations of IMINT

Capabilities of IMINT
IMINT is an extremely valuable part of intelligence. IMINT provides concrete,
detailed, and precise information on the location and physical characteristics of both
the threat and the environment. It is the primary source of information concerning
key terrain features, installations, and infrastructure used to build detailed
intelligence studies, reports, and target materials. Order of battle (OOB) analysis,
enemy courses of action assessments, development of target intelligence, and battle
damage assessment (BDA) are intelligence functions that rely heavily upon IMINT.
Limitations of IMINT
The major limitations of IMINT are the time required to task, collect, process,
analyse, and disseminate the imagery product; the detailed planning and
coordination required to ensure the collected imagery is received in time to impact
the decision making process; and the requirement for considerable assets in
personnel, equipment, and communications connectivity to conduct IMINT
operations. Also, imagery operations can be hampered by weather; enemy air
defence capability; and enemy camouflage, cover, concealment and deception
activities.
2. History of IMINT
It has been a long way from the first attempts to see the Earth's surface from above to today's myriad platforms passing over our heads and collecting imagery.
The French brothers Jacques-Étienne and Joseph-Michel Montgolfier launched the first successful balloon ascent on 5 June 1783, and the first clearly recorded manned flight was performed in November of that year. Military exploitation of the promising device soon followed. The first decisive use of a balloon for aerial observation was performed by the French Aerostatic Corps at the Battle of Fleurus in 1794, when it was used for reconnaissance during the bombardment by the Austrian army. Not long after the invention of photography (by the French inventor Joseph Nicéphore Niépce in 1822), the French photographer, journalist and balloonist Gaspard-Félix Tournachon, known as 'Nadar', became in 1858 the first person to take an aerial photograph. It was a view of Petit-Bicêtre taken from a tethered hot-air balloon, 80 metres above the ground. However, Nadar's earliest photographs have not survived, and the oldest surviving balloon photograph is therefore a view of Boston taken in October 1860 by Samuel A. King and James W. Black (see Fig. 1).
Figure 1. Self-portrait of Nadar (a). Nadar’s balloon ‘Le Géant’ (b). The oldest surviving
balloon photograph - Boston, 1860 (c). (http://en.wikipedia.org)
By that time, the French Aerostatic Corps had long been disbanded, but America's use of the new technology began in the Civil War, when Professor Thaddeus S. C. Lowe offered his services to the United States1. His first assignment was with the Topographical Engineers
1 Making a tethered ascent to 150 metres, Lowe sent a telegram to the White House with the following message:
‘Balloon Enterprise, Washington, D. C. 16 June 1861, To President United States:
This point of observation commands an area nearly fifty miles in diameter. The city with its girdle of encampments
presents a superb scene. I have pleasure in sending you this first dispatch ever telegraphed from an aerial station and
in acknowledging indebtedness to your encouragement for the opportunity of demonstrating the availability of the
science of aeronautics in the service of the country. T. S. C. Lowe’. [4]
where his balloon was used for aerial observations and map making. He later created a reconnaissance balloon unit, and over the next two years he made thousands of reconnaissance flights. Military ballooning did not, however, last until the end of the war [5].
Attention also turned to the use of kites. They were first used approximately 2,800 years ago in China, but the first kite photographs were taken by Arthur Batut in Labruguière, France, in 18882 (see Fig. 2). In America, the first kite photograph was taken in New Jersey in 1895 by William A. Eddy. After perfecting his ability to take clear photographs from varying altitudes, he offered his system to the Navy, and the system was then used in the Cuban campaign during the Spanish-American War in 1898 [6].
Figure 2. Arthur Batut’s kite with the camera (a). The first kite photograph -
Labruguière, 1888 (b). (http://en.wikipedia.org)
The next platform that seemed very promising was the rocket. In 1891 Ludwig Rahrmann patented a means of attaching a camera to a large-calibre artillery projectile or rocket, and this inspired the German engineer Alfred Maul to develop and patent his Maul Camera Rocket in 19033. The camera was launched into the air by a black-powder rocket. A few seconds after launch, when the rocket had reached an altitude of about 800 metres, its top sprang open and the camera descended on a parachute; a timer triggered the taking of the photograph. The Maul Camera Rocket was demonstrated to the Austrian Army in 1912 and tested as a means of reconnaissance in the Turkish-Bulgarian war of 1912-1913. It was not used afterwards, because aircraft proved much more effective.
2 Some sources say that the English meteorologist E. D. Archibald was among the first to take successful photographs from kites, in 1887.
3 A majority of sources say that the Swedish inventor Alfred Nobel was the first to successfully take an aerial photograph from a rocket-mounted camera, in 1897. The latest findings show that this was probably not the case: Nobel patented his rocket camera in 1896, and was therefore the first to do so, but the famous 'Nobel's rocket camera photos' were taken in April 1897, four months after his death, most probably from the top of a hill [7].
In 1903 the German apothecary Julius Neubronner designed a tiny breast-mounted camera for pigeons (Fig. 3 and Fig. 4). Later the German Ministry of War became interested in his system of taking aerial photographs and investigated its adaptability to topographic reconnaissance. By that time Neubronner had already designed and described other pigeon camera models, including stereoscopic and panoramic cameras. After almost ten years of negotiations, the state agreed to acquire his invention in 1914. These plans were spoiled by the outbreak of the First World War, however, and Neubronner had to hand over all his pigeons and equipment to the military. Although the battlefield tests were satisfactory, the military did not employ the technique more widely. After the war, the War Ministry responded to Neubronner's inquiry that the use of pigeons in aerial photography had no military value and that further experiments were not justified. Nevertheless, in 1932 it was reported that the German army was training pigeons for aerial photography and that the cameras were capable of 200 exposures per flight. The French also claimed to have developed film cameras for pigeons, as well as a method for having the birds released behind enemy lines by trained dogs.
Figure 4. Aerial photograph taken on pigeon photo flight showing also the pigeon wingtips.
(http://en.wikipedia.org)
The Wright brothers, Orville and Wilbur, invented and built the world's first successful airplane and made the first controlled, powered and sustained heavier-than-air human flight on 17 December 1903. Although not the first to build and fly experimental aircraft, the Wright brothers were the first to invent aircraft controls that made fixed-wing powered flight possible. The first photograph taken from an airplane was a motion picture shot over Centocelle, Italy, in 1909, in a plane piloted by Wilbur Wright. Most of this early photography provided an oblique rather than a vertical view of the ground. Popular illustrative pictures of a number of large cities and other scenic attractions were also produced by this means. The first use of airplanes in combat missions was by the Italian Air Force during the Italo-Turkish War of 1911-1912; on 23 October 1911, an Italian pilot flew over the Turkish lines in Libya to conduct the first aerial reconnaissance mission in history.
Until the First World War, however, aerial photography was not acquired and utilized on a large-scale, systematic basis. During the war, cameras were specifically designed for aerial reconnaissance, and associated processing facilities were developed to produce thousands of photographs per day (Fig. 5). Equally as important as the technological advances was the development of photo interpretation techniques for obtaining intelligence information from the photographs. By observing the deployment of men and materiel over a period of time, it was possible to anticipate military manoeuvres. By the end of the First World War there had been substantial improvements in aircraft, cameras and processing equipment, and a relatively large number of people had gained experience in various aspects of aerial photograph acquisition and utilization.
Figure 5. Military aerial observer during the First World War. (http://pw20c.mcmaster.ca)
During the First World War aerial photography soon replaced sketching and drawing by aerial observers, and cameras especially designed for use in airplanes were being produced. The battle maps used by both sides were produced from aerial photographs, and by the end of the war both sides were recording the entire front at least twice a day (Fig. 6). After the war, England estimated that its flyers had taken half a million photographs during the four years of the war, and Germany calculated that, laid side by side, its aerial photographs would cover an area six times the size of Germany. The quality of cameras had improved so much by the end of the war that photographs taken at 4,500 metres could be enlarged to show footprints in the mud.
During the Second World War, reconnaissance was classified under two main headings: mapping and damage assessment. Enemy activity was recorded and new installations were located so that accurate maps could be made for use by the ground forces. From damage assessment photographs, it was possible to calculate the exact moment at which a previously hit target should be re-attacked and to assess the effectiveness of the enemy's rebuilding programme.
Immediately after the Second World War, long-range aerial reconnaissance was taken up by adapted jet bombers capable of flying higher or faster than the enemy. The onset of the Cold War led to the development of highly specialized and secret strategic reconnaissance aircraft, or spy planes, such as the Lockheed U-2 and its successor, the Lockheed SR-71 'Blackbird' (Fig. 7).
Cameras then returned to rockets, and from the beginning of the 1960s cameras and other sensors started acquiring imagery from satellites. Using platforms orbiting the Earth brought significant advantages compared to the means of image acquisition used before: a synoptic view and repetitive coverage of an area of interest, observation of remote or hard-to-access areas, independence of political boundaries and of the current situation on the ground, independence of weather, etc. More details about current commercial and military satellites and satellite constellations are given in Chapter 3.2.1.
One of the most important platforms to be developed was the Unmanned Aerial Vehicle (UAV). Although its history began soon after the First World War, when the first radio-controlled aerial targets and aerial torpedoes were tested, the first use of these vehicles as reconnaissance platforms dates back to the late 1950s, when they were used by the US over North Vietnam and North Korea. There are many UAV types differing in size, shape and characteristics, and (apart from other military employment) they can all carry sensors acquiring imagery or other types of data. The advantages of UAVs are that they can be used in high-risk missions, they offer performance beyond manned aircraft capacities, such as high endurance or high G-force tolerance, and they offer high mission flexibility. More details about various aerial platforms are given in Chapter 3.2.2.
Figure 7. The Lockheed U-2 in one of the latest variants - TR-1A (a). The Lockheed SR-71
‘Blackbird’ (b). (http://en.wikipedia.org)
In addition, other seemingly old-fashioned but in fact very modern platforms belong in this overview: aerostats, blimps and hybrid airships, which represent high-altitude, very long endurance, low-operating-cost platforms (Fig. 8). It can be said that, after a century and a half, the cameras have returned to the balloons.
Figure 8. The modern aerostats enable permanent surveillance over key areas.
(http://www.defenseindustrydaily.com)
3. Imagery
Image data4 can be classified into various categories. The categorization used probably most often is based on the resolution of the imagery. With respect to resolution, image data can be described as low resolution, medium resolution, high resolution, or very high resolution data. In this case, the term 'resolution' refers to the spatial resolution of the image data, i.e. the ground dimensions of the smallest element of the image, a pixel. This quality is sometimes also expressed as the Instantaneous Field of View (IFOV) of the sensor, the Ground Sample/Sampling Distance (GSD) or the Ground Resolved Distance (GRD). Obviously, the most important image data type for IMINT is very high resolution data, i.e. data with a GSD of less than 3 m.
The IFOV is the angular cone of visibility of the sensor and determines the area on the Earth's surface which is 'seen' from a given altitude at one particular moment in time.
The GSD is the distance on the ground represented by each pixel, expressed in ground units.
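As a numeric illustration of these terms, the nadir GSD can be approximated as the IFOV (a small angle, in radians) multiplied by the sensing altitude, and the resulting value mapped onto a resolution category. In the sketch below, only the very-high-resolution bound (GSD < 3 m) comes from the text; the remaining thresholds and the sample IFOV value are illustrative assumptions, not an official standard.

```python
# Resolution category thresholds in metres of GSD. Only the very high
# resolution bound (GSD < 3 m) is taken from the text; the other
# thresholds are illustrative assumptions.
CATEGORIES = [
    (3.0, "very high resolution"),
    (10.0, "high resolution"),
    (50.0, "medium resolution"),
    (float("inf"), "low resolution"),
]

def gsd_from_ifov(ifov_rad: float, altitude_m: float) -> float:
    """Approximate nadir GSD: the ground footprint of one pixel equals
    the IFOV (a small angle, in radians) times the sensing altitude."""
    return ifov_rad * altitude_m

def resolution_category(gsd_m: float) -> str:
    """Map a GSD value onto the coarsest category it still fits."""
    for threshold, name in CATEGORIES:
        if gsd_m < threshold:
            return name
    return "low resolution"

# A hypothetical sensor with a 1.2 microradian IFOV flown at 690 km
# yields a GSD of about 0.83 m, i.e. very high resolution imagery.
gsd = gsd_from_ifov(1.2e-6, 690_000)
print(f"GSD = {gsd:.2f} m -> {resolution_category(gsd)}")
```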
4 This chapter is focused mainly on the imagery acquired by the satellite sensors and airborne metric cameras and
scanners.
With respect to the portion of the electromagnetic spectrum (EMS) in which the data were
acquired, the image data can be described as visible, near infrared, thermal, or microwave
(Fig. 9).
Visible data
The term used for imagery acquired in the visible portion of the EMS, i.e. with wavelengths between 0.4 µm (violet) and 0.7 µm (red).
Thermal data
The term used for imagery acquired in the far infrared portion of the EMS, i.e. with
wavelengths between approximately 3.0 µm and 100 μm.
With respect to the number of spectral bands, the image data can be panchromatic,
multispectral, or hyperspectral.
Panchromatic data
The term used for imagery acquired in a broad range of wavelengths within the
visible portion of the EMS.
Multispectral data
The term used for imagery acquired simultaneously in a variety of different
wavelength ranges (e.g. four bands on the Ikonos satellite or eight bands on the
WorldView-2 satellite).
Hyperspectral data
The term used for imagery acquired simultaneously in hundreds of very narrow
spectral bands throughout the visible, near-infrared, and mid-infrared portions of
the EMS (e.g. 220 bands on the EO-1 satellite).
With respect to the position of the line of sight (i.e. the orientation of the camera or a
sensor relative to the ground), the image data can be vertical, oblique or panoramic.
Vertical data
The term used for imagery acquired with the camera or sensor directed exactly vertically. This type of imagery is the most commonly used for remote sensing and mapping purposes.
Oblique data
The term used for imagery acquired with the camera or sensor intentionally directed at some angle between the horizontal and vertical orientations (usually up to 60 degrees). Aerial photographs of this type are sometimes further divided into high oblique and low oblique: high oblique photographs usually include the horizon, while low oblique photographs do not.
Panoramic data
The term used for imagery or photographs covering a large field of view, with the camera look angle exceeding 60 degrees.
Image Data Types (overview)
By spectral region: visible, near IR, mid IR, thermal, microwave.
By number of bands: panchromatic, multispectral, hyperspectral.
By line of sight: vertical, oblique, panoramic.
By spatial resolution: low, medium, high, very high resolution.
The term spatial resolution has been explained earlier in this chapter. However, there are other characteristics that use the term resolution: radiometric resolution, spectral resolution, and temporal resolution.
Radiometric resolution
The term radiometric resolution describes the ability of a sensor to discriminate tiny
differences in energy reflected from (or emitted by) the object. The finer the
radiometric resolution of a sensor, the more sensitive it is to small differences in reflected or emitted energy.
Spectral resolution
Spectral resolution refers to the ability of a sensor to define fine wavelength
intervals, i.e. it defines the bandwidth. The finer the spectral resolution, the
narrower the wavelength range for a particular channel or band. Also, spectral
resolution might express the number of bands that the image consists of.
Temporal resolution
Temporal resolution refers to the ability to acquire imagery of the same object or the
same portion of the Earth’s surface at different periods of time.
Resolution
There are four different characteristics describing the image quality that use the term
resolution:
Spatial resolution describes the ground dimensions of the smallest element of the
image, i.e. a pixel.
Radiometric resolution describes the ability of a sensor to discriminate tiny
differences in energy reflected from (or emitted by) the object.
Spectral resolution refers to the ability of a sensor to define fine wavelength
intervals, i.e. it defines the bandwidth. Also, spectral resolution might express the
number of bands the image consists of.
Temporal resolution refers to the ability to acquire imagery of the same object or
the same portion of the Earth’s surface at different periods of time.
Each particular type of data is useful for different specific tasks. For example, panchromatic data is mostly characterized by the finest spatial resolution, which makes it particularly suitable for interpretation and mapping. Multispectral data can be used for feature extraction by classification or for various types of analysis based on spectral information. Hyperspectral data provides the potential for more accurate and detailed information extraction than is possible with other types of remotely sensed data; in other words, it makes it possible to identify particular materials thanks to their specific spectral signatures. Multitemporal data can be used for change detection or for monitoring dynamic processes on the ground. More detailed information concerning the image types, their characteristics and methods of processing can be found, for example, in [8], [9] or [10].
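As an illustration of how a material can be identified from its spectral signature, the sketch below matches a pixel spectrum against a small signature library using the spectral angle between two spectra, a common similarity measure in hyperspectral analysis. The text does not prescribe a specific matching method, and the three-band reflectance signatures here are invented for illustration.

```python
import math

def spectral_angle(pixel: list[float], reference: list[float]) -> float:
    """Angle (radians) between two spectra treated as vectors; a small
    angle means a similar spectral shape regardless of overall brightness."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp against floating-point drift before taking the arc cosine.
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))

# Invented three-band reflectance signatures, for illustration only.
LIBRARY = {
    "vegetation": [0.05, 0.08, 0.50],   # strong near-IR reflectance
    "bare soil":  [0.20, 0.25, 0.30],
}

def identify(pixel: list[float]) -> str:
    """Return the library material with the smallest spectral angle."""
    return min(LIBRARY, key=lambda m: spectral_angle(pixel, LIBRARY[m]))

# A pixel twice as bright as the vegetation signature still matches it,
# because the spectral angle ignores overall illumination.
print(identify([0.10, 0.16, 1.00]))  # -> vegetation
```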
Apart from the imagery in its various forms described above, there are other types of image data used for deriving intelligence information, for example Full Motion Video (FMV). Because it uses extremely versatile UAVs as the platform, video-based data can provide the most recent view of the area of interest (Fig. 10). The latest trend is fusing FMV with other intelligence data such as aerial photographs, satellite imagery, vector layers, etc. Current software tools enable an image analyst to generate a georeferenced image by mosaicking a large number of individual video frames, to place annotations, and to generate static images and reports [11].
Figure 10. Three examples of the individual video frames of the FMV data.
(http://www.onwar.eu)
Across-track scanner
The era of the across-track scanner began in the 1970s. The principle of this device is based on a rotating mirror that scans the Earth's surface in a series of lines composed of individual pixels. The lines are oriented perpendicular to the direction of motion of the sensor platform. As the platform moves forward over the Earth, successive scans build up a two-dimensional image of the Earth's surface (see Fig. 11). The across-track scanner is also referred to as a 'whiskbroom scanner'. Over the years it turned out that, due to the presence of moving parts, across-track scanners were prone to wear and failure. However, this type of scanner is still used today, for example on Landsat-7.
Along-track scanner
Along-track scanners also use the forward motion of the platform to record successive scan lines and build up a two-dimensional image, perpendicular to the flight direction. However, instead of a scanning mirror, they use a linear array of CCD (Charge-Coupled Device) detectors which is 'pushed' along in the flight track direction. These systems are also referred to as 'pushbroom scanners' (see Fig. 11). Compared to across-track scanners, these detectors are generally smaller and lighter, require less power, and are more reliable. They also provide imagery of better spatial, spectral, and radiometric resolution.
5 Operation of satellites is not affected by weather; however, cloud cover can prevent some sensors from acquiring images of the Earth's surface.
6 There was also a system of Return Beam Vidicon (RBV) cameras measuring the reflected solar radiation. The viewed ground scene was stored on the photosensitive surface of the camera tube and, after shuttering, the image was scanned by an electron beam to produce a video signal output. The RBV system was employed only on the Landsat-1, Landsat-2 and Landsat-3 satellites, launched in 1972, 1975 and 1978 respectively.
Figure 11. The principle of the across-track (left) and the along-track scanners (right).
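The pushbroom principle can be sketched in a few lines of code: a fixed linear array records one whole scan line per time step, and the platform's forward motion supplies the second image dimension. The scene function and dimensions below are invented purely for illustration.

```python
def ground_scene(x: int, y: int) -> int:
    # Hypothetical radiance of the ground at across-track position x,
    # along-track position y (a stand-in for the real Earth's surface).
    return (x * 7 + y * 13) % 256

def pushbroom_acquire(n_detectors: int, n_lines: int) -> list[list[int]]:
    """Build a 2-D image line by line: at each platform step the whole
    linear detector array fires at once, recording one scan line
    perpendicular to the flight track."""
    image = []
    for line in range(n_lines):             # forward motion of the platform
        scan_line = [ground_scene(d, line)  # all CCD detectors fire together
                     for d in range(n_detectors)]
        image.append(scan_line)
    return image

img = pushbroom_acquire(n_detectors=8, n_lines=4)
print(len(img), len(img[0]))  # 4 scan lines, 8 pixels per line
```

An across-track (whiskbroom) scanner would instead visit the detector positions one at a time with a rotating mirror; the resulting image is the same, but the acquisition involves moving parts.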
Radar
Radar (RAdio Detection And Ranging) systems are active sensors which provide their own source of electromagnetic energy. Active radar sensors emit microwave radiation in a series of pulses from an antenna, looking obliquely at the surface, perpendicular to the direction of motion. When the energy reaches the target, some of it is reflected back towards the sensor, and this backscattered microwave radiation is detected. The time required for the energy to travel to the target and return to the sensor determines the distance, or range, to the target. By recording the range and magnitude of the energy reflected from all targets as the system passes by, a two-dimensional image of the surface can be produced. Because microwave energy is able to penetrate clouds, and images can be acquired day or night, radar is called an all-weather sensor7.
Radar imagery differs from traditional imagery acquired using optical and electro-optical systems. Its appearance is affected both by the properties of the radar signal (wavelength, polarization, viewing geometry) and by the properties of the target surface (roughness, chemical properties, moisture content). Radar imagery resembles black-and-white photography with a 'salt and pepper' texture, i.e. speckle. Bright pixels represent areas where a significant amount of energy was backscattered to the radar, e.g. due to a high moisture content or a small incidence angle. Dark pixels represent areas where the energy was reflected away from the sensor, for example due to a smooth surface (see Fig. 12).
There are also special techniques utilizing radar, such as Side-Looking Airborne Radar (SLAR), Synthetic Aperture Radar (SAR), radargrammetry and interferometry (for details see [8] or [10]).
Figure 12. An example of the orthorectified radar image. Due to the fact that the energy
was reflected away from the sensor, the sea surface and the airport runways are clearly
visible on the image. (http://www.intermap.com)
ALOS
ALOS (Advanced Land Observing Satellite) is a satellite of the Japan Aerospace Exploration Agency (JAXA). The satellite, with an unusually high weight of nearly four tonnes, was launched in January 2006 to operate at an altitude of 690 km. It acquired both optical and radar imagery. The sensor acquiring panchromatic imagery was a 'three-line imager' with three independent systems looking forward, at nadir8 and backward, to achieve along-track stereo images. It had a GSD of 2.5 m and provided a swath width of 35 km (triplet stereo) or 70 km (nadir observations). The mission ended in May 2011. Two follow-on satellites in this programme are under preparation. ALOS-2 will be a SAR mission providing radar imagery with a GSD of 1 m (Spotlight mode); the launch is planned for 2013. ALOS-3 will be an optical mission providing imagery with a GSD of 0.8 m (panchromatic, at nadir) and 5 m (multispectral); the launch is planned for 2015.
CartoSat-2
CartoSat-2 is a direct follow-on to the CartoSat-1/IRS P5 satellite operated by the
Indian Space Research Organisation (ISRO). It was launched in January 2007 and
placed into orbit at an altitude of 630 km. The payload consists of a single
Panchromatic Camera providing panchromatic imagery with a GSD of 0.8 m at nadir.
Two nearly identical follow-on satellites have already been launched: CartoSat-2A
in April 2008 and CartoSat-2B in July 2010. Both satellites provide panchromatic
imagery with a GSD of 0.8 m at nadir; the latter has a pointing capability of
±26 degrees both along-track and across-track, which allows it to acquire
stereoscopic imagery.
CBERS-2B/ZY-1
It is the third satellite of the CBERS (China-Brazil Earth Resources Satellite) program
(formerly ZY-1) and was launched in September 2007 to operate at an altitude of
770 km. In addition to four-band multispectral imagery with a GSD of 19.5 m, it
acquired panchromatic imagery with a GSD of 2.7 m. It also had the capability to
point the sensors up to 32 degrees from nadir. The CBERS-2B mission ended in May
2010. Two follow-on satellites, CBERS-3 and CBERS-4, are planned for launch in
2012 and 2013, respectively.
COSMO-SkyMed
COSMO-SkyMed (Constellation of Small Satellites for Mediterranean basin
Observations) is a four-satellite constellation of the Italian Space Agency and Italian
Ministry of Defence (MOD). Each of the satellites is equipped with a SAR-2000
instrument working in the X-band and they operate at an altitude of 620 km. The
first satellite of the constellation, COSMO-SkyMed-1, was launched in June 2007,
8 Nadir is the point on the ground directly in line with the sensor and the centre of the Earth. The direction
opposite the nadir is the zenith.
EROS A
The EROS A is a minisatellite operated by ImageSat International in the Netherlands
Antilles and uses the design of the Israeli military satellite Ofeq 3. The EROS A was
launched in December 2000 from the Svobodnyi launch site in the Russian
Federation and operates at an altitude of 490 km. The payload consists of the
Panchromatic Imaging Camera, which provides imagery with a GSD of 1.9 m. The
pointing capability of the camera allows acquiring stereo images, triplets, or two
stereopairs in a single pass.
EROS B
The EROS B is a minisatellite very similar to EROS A and is likewise operated by
ImageSat International in the Netherlands Antilles. The EROS B was launched in
April 2006 from the Svobodnyi launch site in the Russian Federation and operates at
an altitude of 500 km. The payload consists of an improved Panchromatic Imaging
Camera that provides imagery with a GSD of 0.7 m. The pointing capability of the
camera allows generating mosaics or acquiring stereo images during one pass.
FormoSat-2/ROCSat-2
The FormoSat-2 satellite (renamed from ROCSat-2), operated by the Taiwan
National Space Program Office, was launched in May 2004 and placed into orbit at
an altitude of 890 km. It carries a pushbroom-type imager that acquires both
multispectral and panchromatic imagery, the latter with a GSD of 2 m. It is also
capable of acquiring stereo images.
GeoEye-1
The GeoEye-1 (formerly known as OrbView-5) is operated by the company GeoEye
and was launched in September 2008. It operates at an altitude of 670 km. The
payload consists of the GeoEye Imaging System, which acquires both multispectral
and panchromatic imagery with GSDs at nadir of 1.64 m and 0.41 m, respectively.
The imager provides stereo imagery acquired in any direction (along-track and
across-track).
IKONOS
Five months after a launch failure of the first satellite, the company Space Imaging
(GeoEye since 2006) successfully launched the IKONOS-2 (today referred to as
IKONOS) satellite in September 1999. It was the first commercial satellite ever to
provide very high resolution imagery. It operates at an altitude of 680 km and
acquires both multispectral and panchromatic imagery with a GSD of 4 m and 1 m,
respectively. At nadir, the resolution of the panchromatic imagery is 0.82 m.
IRS P5/CartoSat-1
The IRS P5 satellite (also known as CartoSat-1), operated by the Indian Space
Research Organisation (ISRO), is one of the many ISRO remote sensing satellites and
was launched in May 2005. It was placed into orbit at an altitude of 620 km. The
payload instrumentation consists of two panchromatic cameras, PAN-F (forward
pointing) and PAN-A (aft pointing), providing fore-aft stereo imagery with a GSD of
2.5 m.
KOMPSAT-2/Arirang-2
KOMPSAT-2 (Korea Multi-Purpose Satellite-2, also referred to as Arirang-2) is the
second satellite of this Korea Aerospace Research Institute program. It was launched
from the Russian Plesetsk Cosmodrome in July 2006 and placed into orbit at an
altitude of 680 km. The Multi-Spectral Camera acquires both multispectral and
panchromatic imagery with a GSD of 4 m and 1 m, respectively.
MACSat/RazakSat
The MACSat (Medium-sized Aperture Camera Satellite) is a minisatellite mission of
an international cooperation project between Malaysia and Korea. It was launched
in July 2009 and operates at an altitude of approximately 680 km in an orbit with an
unusual inclination of 9 degrees. Its pushbroom imager provides both multispectral
and panchromatic imagery with a GSD of 5 m and 2.5 m, respectively.
QuickBird
After the failure of the first two satellites of the company DigitalGlobe (formerly
EarthWatch) - EarlyBird (1997) and QuickBird-1 (2000) - the QuickBird-2 was
launched successfully in October 2001. The satellite, today referred to as QuickBird,
operates at an altitude of 445 km. The pushbroom-type high resolution camera on
board acquires both multispectral and panchromatic imagery with a GSD of 2.4 m
and 0.6 m, respectively. Due to its pointing capability of ±30 degrees both
along-track and across-track, it can provide single scenes, mosaics, and stereo
imagery.
RADARSAT 2
It is the second satellite of the Canadian Space Agency's RADARSAT program and
was launched in December 2007. It operates at an altitude of 790 km and acquires
C-band radar data in eleven different modes. The resolution of the data varies from
100 m (ScanSAR Wide mode) to 3 m (Ultra-fine mode). As an evolution of the
program, the RADARSAT Constellation, consisting of three new satellites that will
provide daily access to 95 % of the Earth's surface, is planned for launch in 2015.
Resurs-DK1
The satellite, currently operated by the Russian Federal Space Agency (Roskosmos),
with a design derived from the former Soviet 'Jantar' reconnaissance satellites, was
launched in June 2006 and placed into orbit at an altitude of 360 km (at perigee9).
It provides both multispectral imagery with a GSD between 2.5 and 3.5 m and
panchromatic imagery with a GSD of 1 m.
RISAT-2
The RISAT (Radar Imaging Satellite) is the first ISRO satellite mission using an active
radar sensor system. It is officially presented as a satellite for disaster management
applications; however, it is supposedly used by the Indian MOD as an all-weather
surveillance satellite. RISAT-2 uses a C-band SAR (5.35 GHz). It was launched in
April 2009 and placed into orbit at an altitude of 450 km (at perigee). The sensor
uses various modes, for example the High Resolution Spotlight mode, which
provides imagery with a GSD better than 2 m.
SPOT 5
The SPOT 5 is the last satellite of the constellation of SPOT (Système Probatoire
d'Observation de la Terre) satellites and is operated by the French space agency
CNES (Centre National d'Etudes Spatiales) and the company SPOT Image. The
satellite was launched in May 2002 and operates at an altitude of 825 km. Like all
the SPOT satellites, it acquires both multispectral and panchromatic imagery. The
two HRG (High Resolution Geometric) instruments acquire panchromatic imagery
with a GSD of 5 m; however, the 'Supermode' technology allows improving it to
2.5 m. The sensors can be steered up to 27° across-track, enabling stereoscopic
imaging and increased revisit capabilities.
TanDEM-X
TanDEM-X is a high-resolution interferometric SAR mission of the German
Aerospace Centre. The satellite was launched in June 2010 and is almost identical
to the TerraSAR-X1 satellite. The two satellites fly in close formation (at a distance
varying from 300 m to 500 m) and therefore provide a flexible single-pass SAR
interferometric configuration. The data acquired by these satellites make it possible
to generate a global, consistent, and high-precision digital elevation model.
TerraSAR-X1/TSX-1
TerraSAR-X1 (also referred to as TSX-1) is the first satellite of a German SAR
mission. It was launched in June 2007 and operates at an altitude of 500 km. The
SAR instrument works in the X-band, i.e. it uses the frequency of 9.65 GHz, and
acquires data in a variety of modes. The Spotlight HS and Spotlight SL modes provide
the highest spatial resolution, which is 1 m.
TopSat
The first ever British very high resolution satellite, operated by the British National
Space Centre and funded by the MOD, was launched in October 2005 from the
Plesetsk launch site in Russia. It operates at an altitude of 680 km (at perigee) and,
in addition to three-band multispectral imagery with a GSD of 5 m, it acquires
panchromatic imagery with a GSD of 2.5 m. The sensor can use a viewing angle of
up to 30 degrees.
9 Perigee is the point in outer space where an object travelling around the Earth is closest to the Earth. The point
at the greatest distance from the Earth is called apogee.
WorldView-1
Presented as a next-generation satellite, the WorldView-1 is operated by
DigitalGlobe. It was launched in September 2007 and placed into orbit at an
altitude of 490 km. The pushbroom-type high resolution camera on board acquires
panchromatic imagery with a GSD of 0.5 m (at nadir). It also provides single-pass
stereo coverage.
WorldView-2
WorldView-2 is the follow-on to the WorldView-1 satellite and is operated by
DigitalGlobe. It was launched in October 2009 and placed into orbit at an altitude
of 770 km. The panchromatic CCD array of the high resolution camera on board uses
more than 35,000 detectors in a row and provides panchromatic imagery with a GSD
of 0.46 m (at nadir). It also acquires 8-band multispectral imagery with a GSD of
1.8 m (at nadir). In addition, it provides a single pass stereo coverage.
The graph in Fig. 13 shows the orbit altitudes and GSD values for selected commercial
satellites acquiring panchromatic imagery.
Figure 13. Orbit altitudes and GSD of selected satellites acquiring panchromatic imagery.
With the increasing number of satellites orbiting the Earth, the time between images
captured at the same location is decreasing. The revisit time is an important issue if
changes to the features of interest are to be detected. The large choice of sensors can also
increase the independence of the user from the data suppliers and leads to a reduction of
data prices.
Helios
It is the second-generation military surveillance satellite of a program conducted
by France in conjunction with Belgium, Greece, Italy, and Spain. It is operated by the
French MOD's agency DGA (Délégation Générale pour l'Armement) and the French
space agency CNES (Centre National d'Etudes Spatiales). To date there have been
four satellite launches: Helios 1A in July 1995, Helios 1B in December 1999,
Helios 2A in December 2004, and finally Helios 2B in December 2009. Helios 2B
operates at an altitude of 680 km and provides imagery with a GSD of 0.5 m [12].
IGS
The IGS (Information Gathering Satellite) satellites are Japan's first military
reconnaissance satellites, launched between March 2003 and November 2009. The
satellites, designated Optical (IGS-1A, IGS-3A, IGS-4A, IGS-5A) or Radar (IGS-1B,
IGS-3B, IGS-4B), carry either an optical or a synthetic aperture radar sensor,
respectively. The satellites probably operate in pairs in two orbital planes at an
altitude of 490 km. The optical sensors acquire panchromatic and multispectral
imagery with a GSD of 1 m and 4.5 m, respectively. The radar sensor works in the
C-band and acquires data with a GSD between 1 m and 3 m. With a full constellation
of these satellites, Japan is now able to gather optical and radar imagery of any place
in the world every day [13].
KH
The Key Hole (KH) satellites were the first US reconnaissance satellites. The first
satellites of this program were launched under the names Discoverer or Corona.
They all carried a special panoramic camera called the Key Hole, and from the early
1960s this name was also used for the satellites. In the beginning the project was
managed by the CIA and the US Air Force; later it was operated by the National
Reconnaissance Office (NRO). The program started with the KH-1 prototype
satellite launched in April 1959 to test the film capsule recovery techniques. There
were 22 launches of the KH-1 between April 1959 and September 1960; however,
only one mission was successful. The typical orbit altitude was 200 km at perigee
and 900 km at apogee, and the satellite mass varied from 620 kg to 860 kg. The
camera provided a spatial resolution of 12 m (such resolution was not available for
commercial imagery until the launch of the SPOT 3 satellite in 1993, more than
three decades later).
10 Since not all these satellites are operated exclusively by the national MODs, it would be more appropriate
to use the term 'government satellites'. However, this term is not used very often.
Onyx/Lacrosse
The Onyx (formerly Lacrosse) is a radar imaging reconnaissance satellite operated
by the NRO. The first satellite of this type was deployed from the space shuttle
Atlantis in December 1988 and operated at a typical orbit altitude of 660 km. The
resolution of the imagery was probably between 1 and 3 m.
The last satellite of this type, Onyx 5/Lacrosse 5, was launched in April 2005 [18]. It
operates at an altitude of 720 km and is supposedly capable of manoeuvring in
orbit. It provides radar imagery with a GSD of about 1 m.
SAR-Lupe
It is the first German satellite-based radar reconnaissance system: a constellation of
five identical small satellites in three orbital planes providing worldwide coverage
from an altitude of approximately 500 km. The satellites were launched from the
Plesetsk Cosmodrome in Russia in December 2006, July 2007, November 2007,
March 2008, and July 2008. The synthetic aperture radar works in the X-band and
provides data with a GSD better than 1 m [19].
There are also other military satellites and satellite constellations, such as the Chinese
FSW, ZY, or Yaogan and the Russian Kosmos/Araks, Kosmos/Orlets, Kosmos/Persona, etc.
a) b) c)
Figure 14. Examples of airborne mounted sensors: the ADS80 Airborne Digital Sensor
from Leica (a), the ALTM Orion Airborne Laser Terrain Mapper from Optech (b), and the
Raven Eye II Unmanned Multi-mission Stabilized Payload from Northrop Grumman (c).
(http://www.leica-geosystems.com, http://www.optech.ca,
http://www.es.northropgrumman.com)
Aircraft
A wide variety of aircraft is used as platforms for acquiring imagery: military
aircraft, such as reconnaissance, surveillance, or tactical reconnaissance aircraft;
various types of helicopters; and special photogrammetric aircraft using highly
specialised metric cameras and airborne scanners.
UAV
UAVs are also known as Remotely Piloted Aircraft (RPA), Unmanned Aerial
Systems (UAS), or 'drones'. They can be fixed- or rotary-wing aircraft as well as
lighter-than-air and near-space systems. In many cases, certain types of UAVs can
overcome some weaknesses of aircraft. For instance, they can operate at much
higher altitudes, utilize solar power as a source of energy, or hover above the point
or area of interest for many hours or even days.
There is a wide variety of UAV types that can be classified according to their weight
or operating altitude, such as micro, mini, small, tactical, high altitude long
endurance (HALE), medium altitude long endurance (MALE), and so on [20].
Balloons
Although the general term 'balloon' is used, there are several distinct classes of this
platform type: blimps or non-rigid airships, aerostats, moored balloons, etc. Blimps
are free-flying, powered airships without an internal supporting framework,
whereas aerostats are anchored to the ground. Because aerostats are not highly
pressurized, bullets will not burst them, and they can actually remain buoyant for
hours after suffering multiple punctures.
One of the latest classes of this platform is the hybrid airship. It gains lift from three
different sources: the aerostatic lift given by the on-board helium, the aerodynamic
lift given by its hull shape, and the thrust of its diesel engines and vector vanes. One
example of this platform is the Long-Endurance Multi-intelligence Vehicle (LEMV)
currently being developed by Northrop Grumman for the United States Army11. The
LEMV will potentially be capable of lifting a 2,300 kg payload up to 6,100 m for up
to three weeks (see Fig. 15).
11 Lockheed Martin was contracted to construct a high-altitude airship (HAA) for the US Army. The HAA was
intended to operate above the jet stream at a height of above 18,000 m for up to one month. It was designed to be
150 m long and 46 m in diameter. Due to budget problems the HAA program was cancelled.
a) b) c)
Figure 16. Examples of Unattended Ground Sensors: BAA Observation and Reconnaissance
Equipment (a), Terrain Commander 2 Network Enabled Surveillance System (b), and Falcon
Watch Remote Intrusion Detection And Surveillance System (c).
(http://www.rheinmetall-defence.de, http://www.textron.com, http://www.harris.com)
The left column in Fig. 17 shows the arrangement of two adjacent objects on the
Earth's surface, separated by a distance d, where the squares in the background represent
the GSD footprint of the image. The middle column shows the corresponding (gray) pixels,
affected by the reflectance of the real objects, as they appear in the image. The right column
shows the relationship between the GSD of the image and the distance separating the real
objects.
If d < GSD, then the affected pixels in the image will always be adjacent, regardless of the
position of the GSD footprint with respect to the position of the objects on the Earth's
surface. This means that these objects will always be depicted as one object.
If GSD ≤ d < 2·GSD, then the chance of distinguishing the objects as two individual objects
will depend on the position of the GSD footprint. Only when the GSD footprint fits into the
gap between the objects will they be distinguished separately. In all other cases they will
be depicted as one object.
If d ≥ 2·GSD, then the objects will always be depicted as separate, regardless of the
position of the GSD footprint.
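The three cases above can be checked with a short simulation. The sketch below is my own illustration, assuming ideal square pixels with no point spread; it slides the GSD grid past a gap of width d and tests whether a whole unaffected pixel never, sometimes, or always falls inside the gap:

```python
def separated(d, gsd, offset):
    """True if, for this grid offset, some whole pixel lies entirely
    inside the gap of width d between the two objects (gap at [0, d])."""
    k = 0
    while k * gsd - offset < d:
        left = k * gsd - offset
        if left >= 0 and left + gsd <= d:
            return True
        k += 1
    return False

def outcome(d, gsd, steps=1000):
    """Classify the gap by sweeping the grid offset over one pixel."""
    hits = sum(separated(d, gsd, i * gsd / steps) for i in range(steps))
    if hits == 0:
        return "always one object"
    if hits == steps:
        return "always two objects"
    return "depends on position"

print(outcome(0.8, 1.0))  # d < GSD
print(outcome(1.5, 1.0))  # GSD < d < 2*GSD
print(outcome(2.5, 1.0))  # d > 2*GSD
```

The sweep reproduces the three regimes: a gap narrower than one pixel can never contain an unaffected pixel, a gap wider than two pixels always does, and anything in between depends on how the grid happens to fall.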
It should be emphasized that having an image with a spatial resolution of 10 m does not
mean that it is possible to recognize objects 10 m in size. Spatial resolution affects the
interpretation process and all its individual tasks. The closer the spatial resolution is to the
size of an object, the smaller the chance of recognizing or even detecting that object in an
image. In this context, it is necessary to distinguish between the individual interpretation
tasks: detection, recognition, identification, and technical analysis.
Detection
It is the ability to discover the existence of an object based on its configuration and
on other contextual information without recognizing it.
Recognition
It is the ability to classify a feature or an object on an image within a group (e.g.
tank, single-lane bridge).
Identification
It is the ability to identify a feature or an object on an image as a precise type (T-72
tank, MiG-21).
Technical analysis
It is the ability to describe precisely a feature, an object or a component of the object.
For example, imagery with a spatial resolution of 4.5 m will allow an aircraft to be
detected. A resolution of 1.5 m will allow the class of that aircraft to be recognized, such as
a fighter, a fighter-bomber, or a bomber. But to identify the type of the aircraft precisely,
imagery with a spatial resolution of 0.15 m will be needed. When a technical analysis is
required, that is, a precise description of fine details of that aircraft, a spatial resolution of
0.04 m will be needed. Examples of the image resolution required for other target types are
shown in Tab. 1. (In practice it is not as simple as the example above; however, for our
purposes we can afford such a simplification.)
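The aircraft example can be restated as a small lookup. The function below is purely illustrative (its name and structure are mine, not the source's), using only the resolutions quoted above:

```python
# Approximate GSD (in metres) needed for each interpretation task on an
# aircraft target, taken from the worked example in the text.
AIRCRAFT_GSD_M = {
    "detection": 4.5,
    "recognition": 1.5,
    "identification": 0.15,
    "technical analysis": 0.04,
}

def tasks_supported(image_gsd_m: float) -> list:
    """List the interpretation tasks an image of this GSD supports."""
    return [t for t, need in AIRCRAFT_GSD_M.items() if image_gsd_m <= need]

print(tasks_supported(1.0))  # -> ['detection', 'recognition']
```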
Spatial resolution is not the only factor that affects the information content of an image
and hence its interpretability. It is also necessary to consider acquisition conditions such as
illumination, shadows, sensor look angle, or the influence of the atmosphere.
In other words, rather than defining quality in terms of physical parameters such as GSD
or signal-to-noise ratio (SNR), NIIRS defines quality in terms of the ability to extract
information12.
In order to relate imagery quality, expressed in terms of NIIRS, to fundamental image
attributes, an Image Quality Equation (IQE) can be specified. In other words, the IQE relates
the impact of sensor system and acquisition parameters to the measurement of final image
quality. The equation includes a number of variables, such as scale (which might be
expressed as the GSD), scene contrast, sharpness (determined from the system modulation
transfer function - MTF), illumination, atmospheric conditions, or optics and sensor
characteristics. An IQE can be used in designing a new imaging system or for optimizing
collection from existing systems. In general, an IQE expresses a predicted image quality
level as a function of these parameters.
The following two rating scales will be discussed in more detail - the US National Image
Interpretability Rating Scale and the NATO Image Interpretability Rating Scale.
12 As FMV data are being used more often and more extensively in the IMINT process, standard measures for
assessing image interpretability are needed for motion imagery as well. The dynamic nature of motion imagery,
especially factors such as target motion and camera motion, and also data compression, does not allow applying
the NIIRS that were developed for still imagery. The Video National Imagery Interpretability Rating
Standard (V-NIIRS) is one of the current projects in development.
Cultural criteria are, for example, buildings, roads, railroads, and bridges, and can be used
for rating when military objects are not present.
4. Collateral information
The imagery itself usually cannot provide an exact, definitive, and unambiguous result;
therefore, other information might be used. Collateral, or ancillary, information is a vital
information source and a tool the image analyst needs in order to perform his or her task. It
is usually non-image information used to assist in the interpretation of an image. There are
numerous types of collateral information: interpretation keys, models, theme
encyclopaedias, report templates, previous images, previous reports, ground pictures,
maps, books, statistical tables, field observations, and so on. Collateral information is often
used by the image analyst intuitively, in the form of his or her knowledge, which is based on
everyday experience and formal training [10].
In order to categorize objects and features and thereby facilitate the use of collateral
information, especially the process of reporting, several basic themes of military interest,
called target categories in NATO, were defined. These may be organized according to the
producer's operating procedures or tailored to the customer's requirements (see an
example in Fig. 18). Within NATO, the target category list is specified in STANAG 3596
'Air Reconnaissance Requesting and Target Reporting Guide' (see Fig. 19).
Aeronautical Installations
Ports and Harbour Installations
Military Installations
Electronic Installations
Storage Facilities
Industrial Installations
Networks (roads, railroads, waterways, power lines, pipelines, etc.)
General Terrain Features
Missile Systems
Special Equipment of Ground/Air/Naval Forces
Figure 19. The Target Category List defined in STANAG 3596 [].
Direct key
This type of interpretation key allows direct identification of objects (see Fig. 20
and Fig. 21). The interpretation key may take the form of a catalogue showing
particular types of objects or equipment from the selected theme of interest or a
target category list. It usually contains technical parameters, typical features
enabling recognition, and one or more pictures or diagrams. Direct keys usually
contain a large number of examples of objects at known facilities.
Selection key
Using this type of interpretation key, the image analyst identifies objects by
comparing various characteristics and parameters - in images and/or word
descriptions - until the correct or most probable object is finally selected (Fig. 22).
Elimination key
This type of interpretation key determines the types and shapes of objects and
their parts by eliminating non-corresponding choices from lists. These keys are
composed of word descriptions ranging through various levels, from broad to
specific characteristic discrimination. The image analyst progresses down through
this hierarchy, making choices at branching description paths. Finally, by the
process of eliminating all differing features, the object is identified (Fig. 23).
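The branching procedure of the elimination key can be modelled as a small decision tree. The sketch below is purely illustrative; the features and aircraft types are invented, not taken from the cited keys:

```python
# Each inner node is a question keyed by a feature; leaves are the
# identifications. The analyst answers questions, eliminating whole
# branches until a single type remains.
KEY = ("wing shape", {
    "delta": "type A (delta-wing fighter)",
    "swept": ("engine intakes", {
        "nose": "type B (nose-intake fighter)",
        "side": "type C (side-intake fighter)",
    }),
})

def identify(observations: dict, node=KEY) -> str:
    """Walk the elimination key using the observed features."""
    if isinstance(node, str):  # leaf: identification reached
        return node
    feature, branches = node
    return identify(observations, branches[observations[feature]])

print(identify({"wing shape": "swept", "engine intakes": "side"}))
```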
In NATO, STANAG 3483 'Air Reconnaissance Intelligence Reporting Nomenclature -
ATP-26(C)' defines the standard terminology for describing equipment and installations
and presents the information sorted into the following eight sections: Roads/Rail/Waterways,
Navy, Army, Aircraft/Airfield, Installations, Terrain Features and Coverage, Civilian Industry,
and Electronics [27]. Each feature is accompanied by either an example of its appearance in an
image or a scheme.
There are also commercial interpretation keys provided primarily to customers using
imagery acquired by company-operated satellites. One example is the TerraSAR-X IMINT
Manual provided by the Infoterra company [28]. It shows how hundreds of defence and
security related objects appear in high resolution SAR data. Each SAR image example is
displayed side by side with a high resolution optical image depicting the same object at the
same scale, and an explanation of the effects is provided (Fig. 24).
a)
b)
Figure 20. An example of the direct interpretation key: the S-75/SA-2 Guideline
SAM site. Typical site configuration (a) and the real site depicted on an image (b).
(http://www.ausairpower.net, http://maps.google.com)
a)
b)
Figure 21. An example of the direct interpretation key: the S-200/SA-5 Gammon
SAM site. Typical site configuration (a) and the real site depicted on the image
(b). (http://www.ausairpower.net, http://maps.google.com)
Figure 22. An example of the selection interpretation key: mobile bridges [25].
Figure 23. An example of the elimination interpretation key: aircraft. Apart from
the shape of the wing, other identification elements are used - type of wing, position
of wing, wing tip shape, type of engine, engine intake arrangements, shape of
fuselage, tail assembly shape, etc. [25].
a)
b)
Figure 24. Two examples from the TerraSAR-X IMINT Manual: power line pylon (a)
and fence (b). (http://tim.infoterra.de)
4.2 Models
Models are conceptual objects describing natural or man-made features or objects as
they appear on an image. They can be of the two following types: general models and exact
models.
General models
General models provide a generic description of natural or man-made features or
objects. For example, the ammunition storage model shows that this type of
installation will often contain a storage area, administrative and technical buildings,
a fire fighting facility, passive protection, etc.
Exact models
Exact models (sometimes referred to as precise models) pair the real appearance of
particular objects (i.e. the ground truth) with their appearance on an image (see
Fig. 25).
a)
b)
Figure 25. The examples of the exact model showing the ground truth (left) and the
object appearance on an image (right): oil storage tanks (a) and cooling towers (b).
(http://maps.google.com)
Functional knowledge
Functional knowledge helps answer the question: How does it work? It helps in
learning the basic process, from generating the steam and rotating a turbine up to
transforming the voltage.
Organizational knowledge
Organizational knowledge answers the question: What is the generic layout of the
installation? It shows all the basic components of a power plant, such as fuel
storage, generator hall, cooling towers, transformer yard, etc. (see Fig. 26).
Contextual knowledge
Contextual knowledge helps answer the question: What is the best location for this
type of installation? It helps in learning that a power plant needs access to a source
of fresh water, that it will be placed close to either a populated place or an industrial
area or both, etc.
Creating a theme encyclopaedia takes a long time and requires investing a considerable
amount of know-how; therefore, creators are not always willing to share such documents.
Like the interpretation keys, theme encyclopaedias are built on the experience of image
analysts. Some organizations, such as the Groupement pour le Développement de la
Télédétection Aérospatiale (GDTA)13, publish separate examples only for educational
purposes in their official courses [25]; other organizations, such as the European Union
Satellite Centre (EUSC)14, employ these documents solely in their internal training
materials [29].
13 The GDTA, established in 1973 in Toulouse, France, was an organization whose goal was to develop the
methods of remote sensing and to promote the exploitation of remote sensing methods and products by
providing education to specialists at various levels. The GDTA was a group of several French organizations,
such as the Centre National d'Etudes Spatiales (CNES), the Institut Géographique National (IGN), etc. The
GDTA was closed in 2005.
14 The EUSC is an agency of the Council of the European Union. It was founded in 1992 and is located in Torrejón
de Ardoz, Spain. Its main mission is to support the decision making of the EU by providing the products resulting
from the analysis of satellite imagery and collateral data.
(Figure: annotated satellite image with labels "5-MWe reactor site", "50-MWe reactor", "Radiochemical laboratory complex", "Fuel fabrication area".)
Figure 27. Example of the NATO target reporting guide: airfields [26].
The IMINT cycle mirrors the intelligence cycle: Planning and Direction, Collection, Processing and Exploitation, Production, Dissemination, and Utilization15. These steps define both a sequential and an interdependent process for developing IMINT (Fig. 28) [3]. To stay focused on imagery processing, only the following two steps will be examined: processing and production.
Processing and exploitation in the intelligence cycle involves the conversion of collected
data into information suitable for the production of intelligence. Imagery processing for
IMINT purposes refers to the conversion or transformation of exposed film or electronic
photographic, electro-optical, infrared, and radar imagery into a form usable for
interpretation and analysis.
Production in the intelligence cycle is the activity that converts information into intelligence through the integration, analysis, evaluation, and interpretation of all-source data and the preparation of intelligence products in support of known or anticipated user requirements. IMINT production refers to writing imagery reports, annotating imagery products, creating imagery-derived products, integrating and fusing IMINT into all-source intelligence products, and so on (IMINT products are discussed in more detail in Chapter 6.2).
Since digital imagery has its own characteristics and differs significantly from traditional photographic prints and transparencies, it requires special treatment in the context of visual interpretation. This means that prior to commencing the interpretation process, the imagery must be either pre-processed or enhanced, or both. The interpretation as such requires employing a combination of several elements of image interpretation that describe the characteristics of objects and features as they appear on imagery. The image analysis process then comprises several successive actions performed by an image analyst, such as feature extraction, meaning extraction, or change detection.
15 Other sources present different cycles. For example, the GDTA presents the basic image cycle as follows: Information requirements, Tasking, Collection, Processing and distribution, Exploitation, Imagery derived reporting, Report dissemination [25].
Radiometric corrections
The aim of radiometric corrections is to change the brightness values (digital numbers) of individual pixels. The main reason for this correction is the presence of noise, which may be caused by atmospheric conditions or by the sensor. Radiometric errors can also occur. These errors are usually caused by a sensor malfunction and may appear as missing individual pixels or lines of pixels, which is called striping or banding. Such errors are usually corrected by filtering or by replacing the missing values with the values of adjacent pixels.
Sometimes the atmospheric correction is treated separately from the radiometric corrections because the effects of the atmosphere upon the imagery are not considered errors - they are part of the signal received by a sensor. However, it is often important to remove these effects, especially for image matching and change detection analysis. The atmospheric correction can exploit various methods, ranging from detailed modelling of the atmospheric conditions during data acquisition to calculations based on the image data itself.
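The repair of dropped lines described above can be sketched minimally as neighbour averaging; the function name and the toy 4x4 band below are illustrative, not taken from any particular software:

```python
import numpy as np

def repair_line_dropout(img, bad_rows):
    """Replace dropped scan lines with the mean of the adjacent rows.

    img      : 2-D array of digital numbers (a single band)
    bad_rows : indices of rows lost to a sensor malfunction
    """
    fixed = img.astype(float).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < fixed.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed.astype(img.dtype)

# A 4x4 band in which row 2 was dropped (recorded as zeros):
band = np.array([[10, 12, 11, 13],
                 [14, 15, 16, 14],
                 [ 0,  0,  0,  0],
                 [18, 19, 20, 18]], dtype=np.uint8)
repaired = repair_line_dropout(band, bad_rows=[2])
print(repaired[2])  # → [16 17 18 16]
```

More elaborate destriping works in the frequency domain, but neighbour replacement is the simplest of the correction strategies mentioned in the text.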
Geometric corrections
The reason for applying geometric corrections comes from the fact that the raw data suffer from serious geometric distortions impeding exploitation of those data as a geographic product. These distortions may be caused by various factors, such as the imperfections of the sensor; the curvature and rotation of the Earth; the platform altitude, attitude, and velocity; the terrain relief; etc. Geometric corrections are aimed at transforming the image coordinate system and resampling the pixels so that the resulting image is in accordance with map coordinates or is converted to a specific map projection. Georeferencing, rectification, geocoding and resampling are the procedures which allow compensating for geometric distortions.
16 This refers particularly to the imagery acquired by satellite sensors and airborne metric cameras and scanners.
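Georeferencing can be illustrated with the six-parameter affine geotransform convention used by common GIS software; the scene origin and pixel size below are assumed sample values:

```python
def pixel_to_map(row, col, geotransform):
    """Convert pixel indices (row, col) to map coordinates (x, y)
    using a six-parameter affine geotransform:
    (x_origin, x_pixel_size, row_rotation,
     y_origin, col_rotation, y_pixel_size)."""
    x0, dx, rx, y0, ry, dy = geotransform
    x = x0 + col * dx + row * rx
    y = y0 + col * ry + row * dy
    return x, y

# A north-up scene with 10 m pixels and its upper-left corner
# at easting 500000 m, northing 4649000 m:
gt = (500000.0, 10.0, 0.0, 4649000.0, 0.0, -10.0)
print(pixel_to_map(100, 50, gt))  # → (500500.0, 4648000.0)
```

Full rectification against terrain relief (orthorectification) needs an elevation model; this sketch covers only the affine part of the transformation.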
Spatial enhancement
The aim of spatial enhancement is to increase the interpretability of an image, to facilitate potential automated feature extraction, and to eliminate, or at least reduce, the effects of sensor imperfections. While radiometric enhancements operate on each pixel individually, spatial enhancement modifies pixel values based on the values of surrounding pixels. Spatial enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Among the techniques in this group are convolution filtering, resolution merge, etc.
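Convolution filtering can be sketched as a plain 3x3 kernel pass over one band; edge pixels are left unchanged for brevity, and the sample values are illustrative:

```python
import numpy as np

def convolve3x3(img, kernel):
    """Apply a 3x3 convolution kernel to a single-band image.
    Border pixels are copied through unchanged."""
    out = img.astype(float).copy()
    k = np.asarray(kernel, dtype=float)
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = img[r - 1:r + 2, c - 1:c + 2].astype(float)
            out[r, c] = np.sum(window * k)
    return out

# A low-pass (mean) kernel suppresses isolated noise spikes:
mean_kernel = np.full((3, 3), 1.0 / 9.0)
img = np.array([[10.0, 10.0, 10.0],
                [10.0, 100.0, 10.0],
                [10.0, 10.0, 10.0]])
smoothed = convolve3x3(img, mean_kernel)
print(smoothed[1, 1])  # the 100-valued spike is pulled toward its neighbours
```

A high-pass kernel (e.g. centre 8, neighbours -1) does the opposite, emphasizing edges and high spatial frequencies.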
Spectral enhancement
Spectral enhancement works with multispectral or hyperspectral imagery. The manipulations comprise colour synthesis, colour enhancement of particular bands, or transformations of image data into a form more convenient for interpretation. Among the techniques in this group are principal component analysis, decorrelation stretch, RGB/IHS transforms, or working with (vegetation) indices.
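Working with a vegetation index can be shown with the NDVI, computed per pixel from the red and near-infrared bands; the digital numbers below are assumed sample values:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in the near infrared, so NDVI
    approaches +1 over dense vegetation and drops low elsewhere."""
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    # Guard against division by zero on dark (all-zero) pixels:
    return np.where(denom == 0, 0.0, (nir - red) / np.maximum(denom, 1e-9))

# One vegetated pixel (high NIR) and one bare-soil pixel (high red):
red = np.array([50.0, 200.0])
nir = np.array([150.0, 100.0])
print(ndvi(red, nir))  # vegetated ≈ 0.5, bare ≈ -0.33
```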
Tone
Tone is the only directly evaluated element. For black-and-white images, image tone denotes the lightness or darkness of a region within an image. Tone may be characterized as ‘light’, ‘medium gray’, ‘dark gray’, ‘dark’, and so on. For colour imagery, image tone refers simply to ‘colour’, such as ‘dark green’ or ‘light blue’, and so on. Since image tone can be influenced by factors other than the absolute brightness of the Earth’s surface, caution should be exercised during interpretation. It is known that a human interpreter can provide reliable estimates of relative differences in tone, but cannot accurately describe absolute image brightness.
Texture
Image texture refers to the apparent roughness or smoothness of a portion of an image.
Usually texture is caused by the pattern of highlighted and shadowed areas created
when an irregular surface is illuminated from an oblique angle.
Shadow
Shadow is an especially important clue in the interpretation of objects. A building or
vehicle, illuminated at an angle, casts a shadow that may reveal characteristics of its
size or shape that would not be obvious from the overhead view alone.
Pattern
Pattern refers to the arrangement of individual objects into distinctive recurring forms
that facilitate their recognition on imagery. Pattern on an image usually follows from a
functional relationship between the individual features that compose the pattern.
Association
Association specifies the occurrence of certain objects or features, usually without the
strict spatial arrangement implied by pattern. Association of specific items has great
significance when the identification of a specific class of equipment implies that other,
more important, items are likely to be found nearby. In other words, certain objects are closely linked to others, so that the presence of one implies a high probability of the presence of the others. Association is one of the most helpful clues for the identification of man-made features.
Shape
Shape describes the external form or configuration of an object and is the result of the geometric arrangement of tone and colour elements. The shape of some objects is so distinctive that they can be identified by this element alone. For example, natural objects usually have irregular shapes with irregular boundaries, whereas man-made objects usually have geometrical shapes and distinct boundaries.
Size
Size is important in two ways. First, the relative size of an object or feature in relation
to other objects on the image provides the interpreter with an intuitive notion of its
scale, even though no measurements or calculations may have been made. Second,
absolute measurements of the size of an object can confirm its identification based
upon other factors, especially if its dimensions are so distinctive that they form definite
criteria for specific items or classes of items. Furthermore, absolute measurements
permit derivation of quantitative information, including lengths, volumes, or even rates
of movement.
Site
Site refers to the topographic position or location of objects with respect to each other or
to terrain features. For example, power plants need water supply for cooling, sewage
treatment facilities are positioned at low topographic sites near rivers to collect waste
flowing from higher locations, etc.
17 An optical illusion first described by the Italian psychologist Gaetano Kanizsa in 1955.
Classification
Classification in this context means the assignment of objects, features, or areas to classes based on their appearance on the imagery. It is a different process from the traditional classification in remote sensing18.
Enumeration
Enumeration is the task of listing or counting discrete items visible on an image. For
example, housing units can be classified as ‘detached house’, ‘multistory residential’,
etc. and then reported as numbers present within a defined area.
Measurement
Measurement, or mensuration, may range from a simple visual estimate of the size, shape and colour of an object to detailed measurement and calculation of distances, heights, volumes and areas, and also scene brightness or density.
Measurements should always be performed using a tool because the human eye can be misled. Since the image processing phase also involves georeferencing or even orthorectification, current software applications provide reliable tools for obtaining exact coordinates and dimensions of objects and features depicted on an image.
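On a georeferenced, north-up image with a known ground sample distance (GSD), a mensuration tool essentially scales pixel offsets into metres; the coordinates and GSD below are assumed sample values:

```python
import math

def ground_distance(p1, p2, gsd):
    """Ground distance in metres between two pixel positions (row, col)
    on a north-up image with a square ground sample distance gsd (m/px)."""
    drow = p2[0] - p1[0]
    dcol = p2[1] - p1[1]
    return math.hypot(drow, dcol) * gsd

# An object spanning 60 pixels along one axis on 0.5 m imagery:
length = ground_distance((100, 40), (160, 40), gsd=0.5)
print(length)  # → 30.0
```

This assumes orthorectified imagery; on raw imagery, relief displacement and sensor geometry would distort such a measurement.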
Delineation
An image analyst must often delineate, or outline, regions as they are observed on
images. He or she must be able to separate distinct areal units that are characterized by
specific tones and textures, and to identify edges or boundaries between separate
areas.
Manual interpretation, however, is not the only method of extracting information from imagery. There are various techniques, commonly used in remote sensing, that can facilitate feature extraction, such as automated classification, spectral analysis, multitemporal analysis, application of spectral (or vegetation) indices, etc.
18 The aim of classification in remote sensing is to process the multispectral (or hyperspectral) imagery in such a
way that the spectral classes (i.e. the groups of pixels that are uniform with respect to their brightness values in
the different spectral channels) are matched to the information classes (i.e. the categories of interest to the user).
It means that all pixels in the image are assigned to particular classes or themes, for example different kinds of
forest, different kinds of land use, etc.
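The automated classification mentioned above can be sketched minimally as a nearest-centroid assignment of multispectral pixels to spectral classes; the class names and band means below are illustrative, not from the source:

```python
import numpy as np

def classify_nearest_centroid(pixels, centroids):
    """Assign each multispectral pixel to the spectral class whose
    mean vector (centroid) is nearest in Euclidean distance.

    pixels    : (n, bands) array of digital numbers
    centroids : dict mapping class name -> per-band mean values
    """
    names = list(centroids)
    means = np.array([centroids[n] for n in names], dtype=float)
    # Distance from every pixel to every class centroid:
    d = np.linalg.norm(pixels[:, None, :].astype(float) - means[None, :, :], axis=2)
    return [names[i] for i in d.argmin(axis=1)]

# Three hypothetical spectral classes over three bands:
centroids = {"water":  [20, 10, 5],
             "forest": [30, 60, 25],
             "urban":  [90, 85, 80]}
pixels = np.array([[22, 12, 6],
                   [88, 90, 75]])
print(classify_nearest_centroid(pixels, centroids))  # → ['water', 'urban']
```

Operational classifiers (maximum likelihood, ISODATA, etc.) are more elaborate, but the spectral-class-to-information-class matching described in the footnote follows the same idea.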
19 The JRC is a Directorate-General of the European Commission. Its seven institutes are located on five separate sites in Belgium, Germany, Italy, the Netherlands and Spain. Among its other activities, the Institute for the Protection and Security of the Citizen (IPSC) in Ispra, Italy deals with image map production and maintains the Global Atlas on Crisis Areas on its web pages.
Figure 34. Examples of products showing the impact of floods in Pakistan produced by the UN Food and Agriculture Organization (a) and the German Aerospace Centre (b). (http://reliefweb.int)
Figure 35. Examples of the situation assessment in Libya (a) and the damage assessment in the Gaza Strip (b) produced by the UN Operational Satellite Applications Programme. (http://reliefweb.int)
Figure 36. Example of a report on destroyed houses in Zimbabwe produced by the UN Operational Satellite Applications Programme. (http://www.internal-displacement.org/)
Figure 37. Satellite imagery shows that a competitor is not doing as well as it claims.
(http://www.imagingnotes.com)
Abbreviations
References
[19] OHB System. SAR-Lupe [online]. [cit. 2010/07/02]. Available from WWW:
<http://www.ohb-system.de/sar-lupe-english.html>.
[20] NATO Joint Air Power Competence Centre. Strategic Concept of Employment for
Unmanned Aircraft Systems in NATO. Kalkar : JAPCC, 2010. 42 p.
[21] Unattended Ground Sensors. Defence Update [online]. [cit. 2010/07/02]. Available from
WWW: <http://defense-update.com/features/du-1-06/feature-ugs.htm>.
[22] STANAG 3769 AR (Edition 2). Minimum Resolved Object Sizes and Scales for Imagery
Interpretation. Brussels : NATO Standardization Agency, 1998. 18 p.
[23] Federation of American Scientists. History of the National Imagery Interpretability
Rating Scales [online]. [cit. 2010/07/02]. Available from WWW:
<http://www.fas.org/irp/imint>.
[24] STANAG 7194 JINT (Edition 1). NATO Imagery Interpretability Rating Scale (NIIRS).
Brussels : NATO Standardization Agency, 2009. 17 p.
[25] GDTA, Ramonville St. Agne. Aerospace Imagery Analysis for Military Intelligence. Image
Analysis Method. 1999. 59 p.
[26] STANAG 3596 JINT (Edition 6). Air Reconnaissance Requesting and Target Reporting
Guide. Brussels : NATO Standardization Agency, 2007. 100 p.
[27] STANAG 3483 JINT (Edition 5). Air Reconnaissance Intelligence Reporting Nomenclature
- ATP-26(C). Brussels : NATO Standardization Agency, 2007, 4 p.
[28] Infoterra. TerraSAR-X IMINT Manual: The Comprehensive Reference Database for Image
Analysts [online]. [cit. 2011/09/30]. Available from WWW:
<http://www.infoterra.de/terrasar-x_imint-manual>.
[29] EUSC, Torrejón de Ardoz. (various internal training materials). 2011.
[30] ALBRIGHT, D., HINDERSTEIN, C. The Age of Transparency. Imaging Notes. 2000,
March/April, Vol. 15, No. 2, p. 26-29. ISSN 0896-7091
[31] KOVAŘÍK, V. Remote Sensing. Selected lectures. [S-1642]. Brno : Military Academy, 2002,
169 p. (in Czech)
[32] ERDAS, Inc. ERDAS. Field Guide. Norcross, GA, USA : ERDAS, Inc., 2008, 776 p.
[33] CCRS. Fundamentals of Remote Sensing. Ottawa : Canada Centre for Remote Sensing,
Inc., 2007, 258 p. Also available from WWW:
<http://ccrs.nrcan.gc.ca/resource/tutor/fundam/index_e.php>.
[34] CONGALTON, R. G., MEAD, R. A. A Quantitative Method to Test for Consistency and
Correctness in Photointerpretation. Photogrammetric Engineering & Remote Sensing.
1983, Vol. 49, No. 1, p. 69-74
[35] Macmillan English Dictionary Online [online]. [cit. 2011/08/04]. Available from WWW:
<http://www.macmillandictionary.com>.
[36] WordWeb Online [online]. [cit. 2011/08/04]. Available from WWW:
<http://www.wordwebonline.com>.
[37] LEGER, D. Sizing Up the Competition. Imaging Notes. 2000, May/June, Vol. 15, No. 3, p.
22-23. ISSN 0896-7091