
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/270686775

Imagery intelligence (IMINT)
Book · December 2011
1 citation · 16,933 reads

Author: Vladimir Kovarik, University of Defence (34 publications, 238 citations)

All content following this page was uploaded by Vladimir Kovarik on 23 September 2017.


UNIVERSITY OF DEFENCE
Faculty of Military Technology

S - 10441

Imagery Intelligence (IMINT)

Author:
Vladimír KOVAŘÍK

BRNO 2011

Table of Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1. Definition, capabilities and limitations of IMINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2. History of IMINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3. Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1 Image data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Imagery providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.1 Satellite sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.1.1 Commercial satellites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2.1.2 Military satellites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.1.3 Future satellite systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.2 Airborne sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.3 Ground sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 Image resolution and interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.1 Effect and limitations of spatial resolution . . . . . . . . . . . . . . . . . . . . . 30
3.3.2 Image Interpretability Rating Scales . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.2.1 National Image Interpretability Rating Scale . . . . . . . . . . . . . . 33
3.3.2.2 NATO Image Interpretability Rating Scale . . . . . . . . . . . . . . . . 34
4. Collateral information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1 Interpretation keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3 Theme encyclopaedias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.4 Report templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5. Image analysis process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.1 Image pre-processing and enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.1.1 Image pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.1.2 Image enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.2 Elements of image interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.3 Image interpretation techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3.1 Feature extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.3.2 Meaning extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.3.3 Change detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6. Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.1 Target reporting guides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.2 Other IMINT products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
7. Other domains of imagery intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69


Introduction
The general reason for imagery collection is to examine or interpret images in order to identify objects and judge their significance. In military terms, the main purpose of imagery collection and exploitation is to discover new targets, detect changes on the Earth’s surface, perform surveillance, plan missions, or assess combat results. More specifically, apart from supporting the creation of geographic products, imagery helps, for example, to assess the military and industrial capabilities of an opponent, locate his military forces and monitor their status, watch the deployment and dispositions of enemy units, or monitor main supply routes and activity at weapon storage sites.
Image analysis is not used solely for military purposes. It can serve in business espionage, where imagery can alert a company to potential problems of a rival or competitor and thus prompt a change in the company’s activities. It can also support the efforts of international bodies in peacekeeping, treaty verification, or arms proliferation control.
The result of the image analysis process is not necessarily another image. It can be a standard or specialized report or, in the simplest case, just information confirming the existence of an object or feature in the terrain. This depends on the request of the customer or end user and on other factors, such as the quality of the image, the time available for accomplishing the task, and so on.
The purpose of this study text is to introduce the term Imagery Intelligence, referred to as IMINT; to explain its history, fundamentals, capabilities and limitations; to present the basic procedures used in the image analysis process; to show the role of collateral information; and to describe the ways of reporting results to the customer. Since imagery is a fundamental element of IMINT, considerable attention is paid to imagery sources, characteristics and processing methods. Although virtually any image can be exploited in IMINT, most of this text focuses primarily on aerial and satellite imagery.
The basic information to be extracted from an image consists of the answers to the questions Where, What, and When. Digital image processing is essential for providing accurate answers.
This study text builds on a short introduction to IMINT fundamentals, ‘Introduction to Imagery Intelligence (IMINT)’, prepared within the European Social Funds programme at the University of Defence in Brno [1]. It does not address the organization or planning of IMINT at different command levels; it focuses primarily on the technical procedures used in IMINT.


1. Definition, capabilities and limitations of IMINT

Imagery Intelligence, referred to as IMINT, is one of the intelligence gathering disciplines. It forms one of the three integral elements of Geospatial Intelligence (GEOINT); the other two elements are imagery and geospatial information. As in many other disciplines, there is no uniform definition of imagery intelligence. As an example, we can use the definition inscribed in US law [2]:
‘The term "imagery intelligence" means the technical, geographic, and
intelligence information derived through the interpretation or analysis of
imagery and collateral materials.’
The definition currently used in NATO is more detailed with respect to the technical aspects:
‘IMINT means the intelligence derived from imagery collected by photographic,
radar, electro-optical, infrared, thermal and multispectral sensors which can be
ground-based, sea borne or carried by air or space platforms.’
The general role of IMINT can be described as provision of information regarding the
enemy (or an opponent) and the operational environment (i.e. the battlefield or the
battlespace) that helps commanders, decision makers and users reduce uncertainty, identify
opportunities for success, assess risk, outline the enemy’s or the opponent’s intent and
achieve decisive results.
It is also possible to consider the capabilities and limitations of IMINT. These are described
for example in the U.S. Marine Corps Warfighting Publication (MCWP) 2-15.4 ‘Imagery
Intelligence’ [3].

 Capabilities of IMINT
IMINT is an extremely valuable part of intelligence. IMINT provides concrete,
detailed, and precise information on the location and physical characteristics of both
the threat and the environment. It is the primary source of information concerning
key terrain features, installations, and infrastructure used to build detailed
intelligence studies, reports, and target materials. Order of battle (OOB) analysis,
enemy courses of action assessments, development of target intelligence, and battle
damage assessment (BDA) are intelligence functions that rely heavily upon IMINT.

 Limitations of IMINT
The major limitations of IMINT are the time required to task, collect, process,
analyse, and disseminate the imagery product; the detailed planning and
coordination required to ensure the collected imagery is received in time to impact
the decision making process; and the requirement for considerable assets in
personnel, equipment, and communications connectivity to conduct IMINT
operations. Also, imagery operations can be hampered by weather; enemy air
defence capability; and enemy camouflage, cover, concealment and deception
activities.


2. History of IMINT

It was a long way from the first attempts to see the Earth’s surface from above to today’s platforms passing in myriads over our heads and collecting imagery.
The French brothers Jacques-Étienne and Joseph-Michel Montgolfier launched the first successful balloon ascension on 5 June 1783, and the first clearly recorded manned flight was performed in November of that year. The military exploitation of this promising device soon followed. The first decisive use of a balloon for aerial observation was made by the French Aerostatic Corps at the Battle of Fleurus in 1794, when it was used for reconnaissance during the bombardment by the Austrian army. Not long after the invention of photography (by the French inventor Joseph Nicéphore Niépce in 1822), the French photographer, journalist and balloonist Gaspard-Félix Tournachon, known as ‘Nadar’, became the first person to take an aerial photograph, in 1858. It was a view of Petit-Bicêtre taken from a tethered hot-air balloon 80 metres above the ground. However, Nadar’s earliest photographs did not survive, and the oldest surviving balloon photograph is therefore a view of Boston taken in October 1860 by Samuel A. King and James W. Black (see Fig. 1).

a) b) c)
Figure 1. Self-portrait of Nadar (a). Nadar’s balloon ‘Le Géant’ (b). The oldest surviving
balloon photograph - Boston, 1860 (c). (http://en.wikipedia.org)

By that time the French Aerostatic Corps had long been disbanded, but America’s use of this new technology began in the Civil War, when Professor Thaddeus S. C. Lowe offered his services to the United States1. His first assignment was with the Topographical Engineers

1 Making a tethered ascent to 150 metres Lowe sent the telegram to the White House with the following message:
‘Balloon Enterprise, Washington, D. C. 16 June 1861, To President United States:
This point of observation commands an area nearly fifty miles in diameter. The city with its girdle of encampments
presents a superb scene. I have pleasure in sending you this first dispatch ever telegraphed from an aerial station and
in acknowledging indebtedness to your encouragement for the opportunity of demonstrating the availability of the
science of aeronautics in the service of the country. T. S. C. Lowe’. [4]


where his balloon was used for aerial observations and map making. Later he created a reconnaissance balloon unit and over the next two years made thousands of reconnaissance flights. Military ballooning did not, however, last the war [5].
Attention also turned to the use of kites. Kites were first used approximately 2,800 years ago in China, but the first kite photographs were taken by Arthur Batut in Labruguière, France, in 18882 (see Fig. 2). In America, the first kite photograph was taken over New Jersey in 1895 by William A. Eddy. After perfecting his ability to take clear photographs from varying altitudes, he offered his system to the Navy, and it was then used in the Cuban campaign during the Spanish-American War in 1898 [6].

a) b)
Figure 2. Arthur Batut’s kite with the camera (a). The first kite photograph -
Labruguière, 1888 (b). (http://en.wikipedia.org)

The next platform that seemed very promising was the rocket. In 1891 Ludwig Rahrmann patented a means of attaching a camera to a large-calibre artillery projectile or rocket, and this inspired the German engineer Alfred Maul to develop and patent his Maul Camera Rocket in 19033. The camera was launched into the air by a black powder rocket. A few seconds after launch, when the rocket had reached an altitude of about 800 metres, its top sprang open and the camera descended on a parachute; a timer triggered the taking of the photograph. The Maul Camera Rocket was demonstrated to the Austrian Army in 1912 and tested as a means of reconnaissance in the Turkish-Bulgarian war of 1912-1913. It was not used afterwards because aircraft were far more effective.

2 Some sources say that the English meteorologist E. D. Archibald was among the first to take successful photographs from kites, in 1887.
3 The majority of sources say that the Swedish inventor Alfred Nobel was the first to take a successful aerial photograph from a rocket-mounted camera, in 1897. The latest findings show that this was probably not the case: Nobel patented his rocket camera in 1896 and was therefore the first to patent one, but the famous ‘Nobel’s rocket camera photos’ were taken in April 1897, four months after his death, most probably from the top of a hill [7].


In 1903 the German apothecary Julius Neubronner designed a tiny breast-mounted camera for pigeons (Fig. 3 and Fig. 4). The German Ministry of War later took an interest in his system of taking aerial photographs and investigated its adaptability for topographic reconnaissance. By that time Neubronner had already designed and described other pigeon camera models, including stereoscopic and panoramic cameras. After almost ten years of negotiations, the state finally acquired his invention in 1914. These plans were spoiled by the outbreak of the First World War, however, and Neubronner had to provide all his pigeons and equipment to the military. Although the battlefield tests were satisfactory, the military did not employ the technique more widely. After the war, the War Ministry responded to Neubronner’s inquiry that the use of pigeons in aerial photography had no military value and that further experiments were not justified. Nevertheless, in 1932 it was reported that the German army was training pigeons to take aerial photographs and that the cameras were capable of 200 exposures per flight. The French also claimed to have developed film cameras for pigeons, as well as a method for having the birds released behind enemy lines by trained dogs.

Figure 3. Pigeons equipped with Neubronner’s cameras, approx. 1907. (http://en.wikipedia.org)

Figure 4. Aerial photograph taken on a pigeon photo flight, also showing the pigeon’s wingtips. (http://en.wikipedia.org)


The Wright brothers, Orville and Wilbur, invented and built the world’s first successful
airplane and made the first controlled, powered and sustained heavier-than-air human flight
on 17 December, 1903. Although not the first to build and fly experimental aircraft, the
Wright brothers were the first to invent aircraft controls that made fixed-wing powered flight
possible. The first photograph taken from an airplane was a motion picture, shot over Centocelle, Italy, in 1909 from a plane piloted by Wilbur Wright. Most of this early photography provided an oblique rather than a vertical view of the ground. Popular illustrative pictures of a number of large cities and other scenic attractions were also produced by this means.
The first use of airplanes in combat missions was by the Italian Air Force during the Italo-
Turkish War of 1911-1912. On 23 October, 1911 an Italian pilot flew over the Turkish lines in
Libya to conduct the first aerial reconnaissance mission in history.
Until the First World War, however, aerial photography was not acquired and utilized on a large-scale, systematic basis. Then cameras were specifically designed for aerial reconnaissance, and associated processing facilities were developed to produce thousands of photographs per day (Fig. 5). Equally as important as the technological advances was the development of photo interpretation techniques to obtain intelligence information from the photographs. By observing the deployment of men and materiel over a period of time, it was possible to anticipate military manoeuvres. By the end of the First World War there had been substantial improvement in aircraft, cameras and processing equipment, and a relatively large number of people had gained experience in different aspects of the acquisition and utilization of aerial photographs.

Figure 5. Military aerial observer during the First World War. (http://pw20c.mcmaster.ca)

During the First World War aerial photography soon replaced sketching and drawing by aerial observers. Cameras especially designed for use in airplanes were being produced. The battle maps used by both sides were produced from aerial photographs, and by the end of the war both sides were recording the entire front at least twice a day (Fig. 6). After the war, England estimated that its flyers had taken half a million photographs during the four years of the war, and Germany calculated that if all its aerial photographs were laid side by side, they would cover an area six times the size of Germany. The quality of cameras had improved so much by the end of the war that photographs taken at 4,500 metres could be enlarged to show footprints in the mud.

Figure 6. Aerial photograph of the trenches near Ypres in 1916. (http://library.mcmaster.ca/maps/)

During the Second World War, reconnaissance was classified under two main headings: mapping and damage assessment. Enemy activity was recorded and new installations were located so that accurate maps, to be used by the ground forces, could be made. From damage assessment photographs, the exact moment when a previously hit target should be re-attacked could be calculated, and the effectiveness of the enemy's rebuilding programme could be assessed.
Immediately after the Second World War, long-range aerial reconnaissance was taken up by adapted jet bombers capable of flying higher or faster than the enemy. The onset of the Cold War led to the development of highly specialized and secret strategic reconnaissance aircraft, or spy planes, such as the Lockheed U-2 and its successor, the Lockheed SR-71 ‘Blackbird’ (Fig. 7).
Cameras then returned to rockets. From the beginning of the 1960s, cameras and other sensors started acquiring imagery from satellites. Using platforms orbiting the Earth brought significant advantages compared to the means of image acquisition used before, such as a synoptic view and repetitive coverage of an area of interest, observation of remote or difficult-to-access areas, independence from political boundaries or the current situation on the ground, independence from weather, etc. More details about current commercial and military satellites and satellite constellations are given in Chapter 3.2.1.


One of the most important platforms developed was the Unmanned Aerial Vehicle (UAV). Although its history began soon after the First World War, when the first radio-controlled aerial targets and aerial torpedoes were tested, the first use of these vehicles as reconnaissance platforms dates back to the late 1950s, when they were used by the US over North Vietnam and North Korea. There are many UAV types differing in size, shape and characteristics, and (apart from other military employment) they can all carry sensors acquiring imagery or other types of data. The advantages of UAVs are that they can be used in high-risk missions, they offer performance beyond the capacities of manned aircraft, such as high endurance or high G-force tolerance, and they offer high mission flexibility. More details about various aerial platforms are given in Chapter 3.2.2.

a) b)
Figure 7. The Lockheed U-2 in one of the latest variants - TR-1A (a). The Lockheed SR-71
‘Blackbird’ (b). (http://en.wikipedia.org)

In addition, other seemingly old-fashioned but in fact very modern platforms can be added to this overview: aerostats, blimps and hybrid airships, which represent high-altitude, very-long-endurance, low-operating-cost platforms (Fig. 8). It can be said that after a century and a half, cameras have returned to balloons.

Figure 8. The modern aerostats enable permanent surveillance over key areas.
(http://www.defenseindustrydaily.com)


3. Imagery

3.1 Image data types

Image data4 can be classified into various categories. The category used probably most often is the resolution of the imagery. With respect to resolution, image data can be described as low resolution, medium resolution, high resolution, or very high resolution data. In this case, the term ‘resolution’ refers to the spatial resolution of the image data, i.e. the ground dimensions of the smallest element of the image, a pixel. This quality is sometimes also expressed as the Instantaneous Field of View (IFOV) of the sensor, the Ground Sample/Sampling Distance (GSD), or the Ground Resolved Distance (GRD). Obviously, the most important image data type for IMINT is very high resolution data, i.e. data with a GSD of less than 3 m.

 Low resolution data
Low resolution is defined as GSD of more than 100 m.

 Medium resolution data
Medium resolution is defined as GSD of 100 m and smaller.

 High resolution data
High resolution is defined as GSD of 10 m and smaller.

 Very high resolution data
Very high resolution is defined as GSD of 3 m and smaller.

This is not the only classification of data with respect to spatial resolution. Sometimes image data are simply labelled as coarse (or low) resolution or fine (or high) resolution.

Instantaneous Field of View
The IFOV is the angular cone of visibility of the sensor and determines the area on the Earth’s surface which is ‘seen’ from a given altitude at one particular moment in time.

Ground Sample Distance
GSD is the distance on the ground represented by each pixel, expressed in ground units.
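The two measures above are directly related. As a minimal illustrative sketch (the sensor values are hypothetical, and GSD ≈ 2·H·tan(IFOV/2) is the standard small-angle approximation for a nadir-looking sensor), the following converts an IFOV and flying height into a GSD and classifies the result using the thresholds given in this chapter:

```python
import math

def gsd_from_ifov(ifov_rad: float, altitude_m: float) -> float:
    """Ground footprint of one detector element (GSD) for a
    nadir-looking sensor at the given altitude."""
    return 2.0 * altitude_m * math.tan(ifov_rad / 2.0)

def resolution_class(gsd_m: float) -> str:
    """Classify imagery by the GSD thresholds used in this chapter."""
    if gsd_m <= 3.0:
        return "very high resolution"
    if gsd_m <= 10.0:
        return "high resolution"
    if gsd_m <= 100.0:
        return "medium resolution"
    return "low resolution"

# A hypothetical sensor with a 2.5 microradian IFOV at 400 km altitude:
gsd = gsd_from_ifov(2.5e-6, 400_000.0)
print(f"GSD = {gsd:.2f} m -> {resolution_class(gsd)}")
```

At these values the GSD works out to about 1 m, i.e. well inside the very high resolution class that matters most for IMINT.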

4 This chapter is focused mainly on the imagery acquired by the satellite sensors and airborne metric cameras and
scanners.


With respect to the portion of the electromagnetic spectrum (EMS) in which the data were
acquired, the image data can be described as visible, near infrared, thermal, or microwave
(Fig. 9).

 Visible data
The term used for imagery acquired in a visible portion of the EMS, i.e. with
wavelengths between 0.4 µm (violet colour) and 0.7 μm (red colour).

 Near infrared data (NIR)
The term used for imagery acquired in the shorter-wavelength infrared range of the EMS, i.e. with wavelengths between 0.7 µm and 3.0 μm. It is often divided into the very near infrared (VNIR), with wavelengths between 0.7 µm and 1.3 μm, and the short wavelength infrared (SWIR), with wavelengths between 1.3 µm and 3.0 μm. The SWIR data are often referred to as mid infrared.

 Thermal data
The term used for imagery acquired in the far infrared portion of the EMS, i.e. with
wavelengths between approximately 3.0 µm and 100 μm.

 Microwave (radar) data
The term used for imagery acquired in the microwave region of the EMS, from about 1 mm to 1 m in terms of wavelength. To further discriminate particular portions of this large region, designations such as K-band (from 1.1 cm to 1.67 cm), X-band (from 2.4 cm to 3.75 cm) and C-band (from 3.75 cm to 7.5 cm) are used.

Figure 9. The regions of the electromagnetic spectrum.
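The band boundaries above can be captured in a small lookup function. This is only an illustrative sketch using the wavelength limits quoted in this section (exact boundaries vary slightly between sources):

```python
def ems_region(wavelength_um: float) -> str:
    """Map a wavelength in micrometres to the EMS regions
    defined in this chapter."""
    if 0.4 <= wavelength_um < 0.7:
        return "visible"
    if 0.7 <= wavelength_um < 1.3:
        return "very near infrared (VNIR)"
    if 1.3 <= wavelength_um < 3.0:
        return "short wavelength infrared (SWIR)"
    if 3.0 <= wavelength_um <= 100.0:
        return "thermal"
    if 1_000.0 <= wavelength_um <= 1_000_000.0:  # 1 mm to 1 m
        return "microwave"
    return "outside the regions discussed here"

print(ems_region(0.55))    # green light: visible
print(ems_region(10.0))    # thermal
print(ems_region(31_000))  # 3.1 cm, i.e. X-band: microwave
```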


With respect to the number of spectral bands, the image data can be panchromatic,
multispectral, or hyperspectral.

 Panchromatic data
The term used for imagery acquired in a broad range of wavelengths within the
visible portion of EMS.

 Multispectral data
The term used for imagery acquired simultaneously in a variety of different
wavelength ranges (e.g. four bands on the Ikonos satellite or eight bands on the
WorldView-2 satellite).
 Hyperspectral data
The term used for imagery acquired simultaneously in hundreds of very narrow
spectral bands throughout the visible, near-infrared, and mid-infrared portions of
the EMS (e.g. 220 bands on the EO-1 satellite).
With respect to the position of the line of sight (i.e. the orientation of the camera or a
sensor relative to the ground), the image data can be vertical, oblique or panoramic.

 Vertical data
The term used for imagery acquired with the camera or sensor directed exactly vertically (at nadir). This type of imagery is the most commonly used for remote sensing and mapping purposes.

 Oblique data
The term used for imagery acquired with the camera or sensor intentionally directed at some angle between the horizontal and vertical orientations (usually up to 60 degrees). The types of aerial photographs are sometimes further divided into high oblique and low oblique. High oblique photographs usually include the horizon while low oblique photographs do not.

 Panoramic data
The term used for imagery or photographs covering a large field of view due to the camera look angle exceeding 60 degrees.

Image Data Types
 Visible, Near IR, Mid IR, Thermal, Microwave
 Panchromatic, Multispectral, Hyperspectral
 Vertical, Oblique, Panoramic
 Low resolution, Medium resolution, High resolution, Very high resolution


The term spatial resolution was explained earlier in this chapter. However, there are other categories that use the term resolution: radiometric resolution, spectral resolution, and temporal resolution.

 Radiometric resolution
The term radiometric resolution describes the ability of a sensor to discriminate tiny
differences in energy reflected from (or emitted by) the object. The finer the
radiometric resolution of a sensor, the more sensitive it is to detect small differences
in reflected or emitted energy.

 Spectral resolution
Spectral resolution refers to the ability of a sensor to define fine wavelength
intervals, i.e. it defines the bandwidth. The finer the spectral resolution, the
narrower the wavelengths range for a particular channel or band. Also, spectral
resolution might express the number of bands that the image consists of.

 Temporal resolution
Temporal resolution refers to the ability to acquire imagery of the same object or the
same portion of the Earth’s surface at different periods of time.

Resolution
There are four different characteristics describing the image quality that use the term
resolution:
 Spatial resolution describes the ground dimensions of the smallest element of the
image, i.e. a pixel.
 Radiometric resolution describes the ability of a sensor to discriminate tiny
differences in energy reflected from (or emitted by) the object.
 Spectral resolution refers to the ability of a sensor to define fine wavelength
intervals, i.e. it defines the bandwidth. Also, spectral resolution might express the
number of bands the image consists of.
 Temporal resolution refers to the ability to acquire imagery of the same object or
the same portion of the Earth’s surface at different periods of time.
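Radiometric resolution is commonly quantified by the bit depth of the sensor: an n-bit sensor distinguishes 2^n brightness levels. A minimal illustration (the specific bit depths shown are just example values, not figures from this text):

```python
def grey_levels(bits: int) -> int:
    """Number of distinguishable brightness levels for a given bit depth."""
    return 2 ** bits

# An 8-bit image resolves 256 grey levels; an 11-bit image resolves 2048,
# i.e. a much finer radiometric resolution for the same scene.
for bits in (8, 11):
    print(f"{bits}-bit -> {grey_levels(bits)} levels")
```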

Each type of data offers different possibilities for use; each particular type is suited to different specific tasks. For example, panchromatic data is mostly characterized by the finest spatial resolution, which predestines it particularly for interpretation and mapping. Multispectral data can be used for feature extraction using classification, or for various types of analysis based on spectral information. Hyperspectral data provides the potential for more accurate and detailed information extraction than is possible with other types of remotely sensed data; in other words, hyperspectral data enables particular materials to be identified thanks to their specific spectral signatures. Multitemporal data can be used for change detection or for monitoring dynamic processes on the ground. More detailed information concerning the image types, their characteristics and methods of processing can be found, for example, in [8], [9] or [10].
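As a minimal sketch of the change-detection idea mentioned above: given two co-registered single-band images of the same area, simple differencing flags pixels whose brightness changed beyond a threshold. Real workflows also require precise co-registration and radiometric normalization; the arrays and the threshold here are synthetic examples:

```python
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray,
                threshold: float) -> np.ndarray:
    """Flag pixels whose brightness changed by more than `threshold`
    between two co-registered single-band images."""
    diff = np.abs(after.astype(np.int32) - before.astype(np.int32))
    return diff > threshold

# Two tiny synthetic 'images': a bright object appears in the second one.
before = np.zeros((4, 4), dtype=np.uint8)
after = before.copy()
after[1:3, 1:3] = 200
mask = change_mask(before, after, threshold=50)
print(int(mask.sum()), "changed pixels")  # 4 changed pixels
```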
Apart from the imagery in the various forms described above, other types of image data are used for deriving intelligence information, for example Full Motion Video (FMV). Thanks to the use of extremely versatile UAVs as platforms, video-based data can provide the most recent view of an area of interest (Fig. 10). The latest trend is fusing FMV with other intelligence data such as aerial photographs, satellite imagery, vector layers, etc. Current software tools enable an image analyst to generate a georeferenced image by mosaicking a large number of individual video frames, to place annotations, and to generate static images and reports [11].

Figure 10. Three examples of individual video frames of FMV data.
(http://www.onwar.eu)


3.2 Imagery providers


At present there are basically three families of imagery providers, distinguished by platform
type, i.e. by the technical means carrying the imaging sensor: satellite sensors,
airborne sensors, and ground sensors.

3.2.1 Satellite sensors


The reason why satellite systems and the data they provide are described in such detail here is
that these systems represent a highly valuable source of imagery, and satellite imagery is
more available than ever. Compared to airborne systems, especially aircraft, satellites can
acquire data from large portions of the Earth’s surface, they revisit the target regularly at
the same time of day, they are less vulnerable, they are not limited by any boundaries or
conflicts on the ground, and their operation is not affected by weather5.
Satellite sensors are carried by commercial and military/intelligence satellites. They have
evolved significantly during the last decade and have become a vital imagery provider.
The simplest and oldest sensors used to acquire images of the Earth’s surface are
cameras employing photographic film as a medium. However, the most frequently used imaging
sensors on satellite platforms today are scanners (or scanning radiometers) using two
different methods of scanning - across-track scanning and along-track scanning. Another very
important type of sensor employed to acquire imagery is radar6.

 Across-track scanner
The era of the across-track scanner began in the 1970s. The principle of this device is
based on a rotating mirror that scans the Earth’s surface in a series of lines
composed of individual pixels. The lines are oriented perpendicular to the direction
of motion of the sensor platform. As the platform moves forward over the Earth,
successive scans build up a two-dimensional image of the Earth’s surface (see
Fig. 11). The across-track scanner is also referred to as a ‘whiskbroom scanner’. Over
the years it turned out that, due to the presence of moving parts, across-track
scanners were prone to wear and failure. However, this type of scanner is still used
today, for example on Landsat-7.

 Along-track scanner
Along-track scanners also use the forward motion of the platform to record
successive scan lines and build up a two-dimensional image, perpendicular to the
flight direction. However, instead of a scanning mirror, they use a linear array of
CCD (Charge-Coupled Device) detectors which is ‘pushed’ along in the flight-track
direction. These systems are also referred to as ‘pushbroom scanners’ (see Fig. 11).
Compared to across-track scanners, these detectors are generally smaller, lighter,
require less power, and are more reliable. They also provide imagery of better
spatial, spectral, and radiometric resolution.

5 Operation of satellites is not affected by weather; however, cloud cover can prevent some sensors from acquiring
images of the Earth’s surface.
6 There was also a system of Return Beam Vidicon (RBV) cameras measuring reflected solar radiation. The viewed
ground scene was stored on the photosensitive surface of the camera tube and, after shuttering, the image was
scanned by an electron beam to produce a video signal output. The RBV system was employed only on the
Landsat-1, Landsat-2 and Landsat-3 satellites, launched in 1972, 1975 and 1978 respectively.

Figure 11. The principle of the across-track (left) and the along-track scanners (right).

 Radar
Radar (RAdio Detection And Ranging) systems are active sensors which provide
their own source of electromagnetic energy. Active radar sensors emit microwave
radiation in a series of pulses from an antenna, looking obliquely at the surface
perpendicular to the direction of motion. When the energy reaches the target, some
of the energy is reflected back towards the sensor. This backscattered microwave
radiation is detected. The time required for the energy to travel to the target and
return to the sensor determines the distance, or range, to the target. By
recording the range and magnitude of the energy reflected from all targets as the
system passes by, a two-dimensional image of the surface can be produced. Because
microwave energy is able to penetrate clouds and images can be acquired
day or night, radar is called an all-weather sensor7.
Radar imagery differs from traditional imagery acquired using optical and
electro-optical systems. Its appearance is affected both by the properties of the radar
signal (wavelength, polarization, viewing geometry) and by the properties of the target
surface (roughness, chemical properties, moisture content). Radar imagery resembles
black-and-white photography with a ‘salt and pepper’ texture, i.e. speckle. The
bright pixels represent areas where a significant amount of energy was
backscattered to the radar, e.g. due to high moisture content or a small incidence angle.
The dark pixels represent areas where the energy was reflected away from the
sensor, for example due to a smooth surface (see Fig. 12).

7 It should be emphasized that not all clouds can be penetrated by radar.


There are also special techniques utilizing radar such as Side-Looking Airborne
Radar (SLAR), Synthetic Aperture Radar (SAR), radargrammetry or interferometry
(for details see [8] or [10]).
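
The range measurement described above follows directly from the pulse round-trip time: the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the 4 ms delay below is an illustrative value, not taken from any system described in this chapter):

```python
# Slant range from radar echo delay. The pulse travels to the target and
# back, so the one-way distance is c * t / 2. Illustrative sketch only.

C = 299_792_458.0  # speed of light in vacuum [m/s]

def slant_range(round_trip_time_s: float) -> float:
    """Distance from the radar antenna to the target, in metres."""
    return C * round_trip_time_s / 2.0

# An echo returning after 4 ms corresponds to a target roughly 600 km away:
print(round(slant_range(4.0e-3) / 1000.0))  # -> 600 (km)
```

The same relation underlies all ranging systems discussed later, from SLAR to SAR; only the way the along-track resolution is achieved differs.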

Figure 12. An example of an orthorectified radar image. Because the energy was
reflected away from the sensor, the sea surface and the airport runways are clearly
visible in the image. (http://www.intermap.com)

3.2.1.1 Commercial satellites


In the 1980s there were only two satellite systems providing commercially
accessible imagery. The first was the US LANDSAT, with the LANDSAT 4 and LANDSAT 5 satellites
providing multispectral imagery with a GSD of 30 m. The second was the French SPOT
(Système Probatoire d’Observation de la Terre), with the SPOT 1 satellite providing
multispectral imagery with a GSD of 20 m and panchromatic imagery with a GSD of 10 m. The
mid-1990s saw the arrival of the Indian IRS 1C and IRS 1D satellites, which offered panchromatic
imagery with a GSD of 5.8 m.
But the commercial satellite revolution started in 1999. After the failure of the first
satellite in April 1999, the IKONOS satellite was successfully launched from the Vandenberg
AF Base on 24 September 1999, carrying an electro-optical scanner that provided
multispectral imagery with a GSD of 4 m and panchromatic imagery with a GSD of 1 m. Since
then nearly twenty other satellites or satellite constellations providing imagery with a GSD
better than 4 m, half of them with a GSD better than 1 m, have been launched.


 ALOS
ALOS (Advanced Land Observation Satellite) is a satellite of the Japan Aerospace
Exploration Agency (JAXA). The satellite, whose weight of nearly four tonnes is
uncommon today, was launched in January 2006 to operate at an altitude of 690 km. It
acquired both optical and radar imagery. The sensor acquiring panchromatic imagery was
a ‘three-line imager’ with three independent systems looking forward, at nadir8 and
backward to achieve along-track stereo images. It had a GSD of 2.5 m and
provided a swath width of 35 km (triplet stereo) or 70 km (nadir observations). The
mission ended in May 2011. Two follow-on satellites of this program are under
preparation. ALOS-2 will be a SAR mission providing radar imagery with a
GSD of 1 m (Spotlight mode); the launch is planned for 2013. ALOS-3 will be an
optical mission providing imagery with a GSD of 0.8 m (panchromatic, at nadir) and
5 m (multispectral). The launch is planned for 2015.

 CartoSat-2
CartoSat-2 is a direct follow-on satellite of CartoSat-1/IRS P5, operated by the Indian
Space Research Organisation (ISRO). It was launched in January 2007 and was
placed in orbit at an altitude of 630 km. The payload consists of a single
Panchromatic Camera providing panchromatic imagery with a GSD of 0.8 m at nadir.
Two nearly identical follow-on satellites have already been launched: CartoSat-2A
in April 2008 and CartoSat-2B in July 2010. Both satellites
provide panchromatic imagery with a GSD of 0.8 m at nadir; the latter has a pointing
capability of ±26 degrees both along-track and across-track, which allows it to acquire
stereoscopic imagery.

 CBERS-2B/ZY-1
It is the third satellite of the CBERS (China-Brazil Earth Resources Satellite) program
(formerly ZY-1) that was launched in September 2007 to operate at an altitude of
770 km. In addition to four-band multispectral imagery with a GSD of 19.5 m it
acquired panchromatic imagery with a GSD of 2.7 m. Also it had capability to point
the sensors up to 32 degrees from nadir. The CBERS-2B mission was ended in May
2010. The two follow-on satellites, CBERS-3 and CBERS-4, are planned for launching
in 2012 and 2013, respectively.

 COSMO-SkyMed
COSMO-SkyMed (Constellation of Small Satellites for Mediterranean basin
Observations) is a four-satellite constellation of the Italian Space Agency and Italian
Ministry of Defence (MOD). Each of the satellites is equipped with a SAR-2000
instrument working in the X-band and they operate at an altitude of 620 km. The
first satellite of the constellation, COSMO-SkyMed-1, was launched in June 2007,
COSMO-SkyMed-2 in December 2007, COSMO-SkyMed-3 in October 2008, and
COSMO-SkyMed-4 in November 2010. The Spotlight (or Frame) mode provides data
with a GSD better than 1 m in a frame of 10×10 km; the HIMAGE (Stripmap) mode
provides data with a GSD ranging from 3 m to 5 m in a swath width of 40 km.

8 Nadir is the point on the ground directly in line with the sensor and the centre of the Earth. The direction
opposite of the nadir is the zenith.

 EROS A
The EROS A is a minisatellite operated by ImageSat International in the
Netherlands Antilles, based on the design of the Israeli military satellite Ofeq 3. The
EROS A was launched in December 2000 from the Svobodnyi launch site in the
Russian Federation and operates at an altitude of 490 km. The payload consists of
a Panchromatic Imaging Camera that provides imagery with a GSD of 1.9 m.
The pointing capability of the camera allows stereo images, triplets, or two
stereopairs to be acquired in a single pass.

 EROS B
The EROS B is a minisatellite very similar to EROS A and is operated by the ImageSat
International in the Netherlands Antilles. The EROS B was launched in April 2006
from the Svobodnyi launch site in the Russian Federation and it operates at an
altitude of 500 km. The payload consists of the improved Panchromatic Imaging
Camera that provides imagery with a GSD of 0.7 m. The pointing capability of the
camera allows mosaics to be generated or stereo images to be acquired during one pass.

 FormoSat-2/ROCSat-2
The FormoSat-2 satellite (renamed from ROCSat-2), operated by the Taiwan National
Space Program Office, was launched in May 2004 and placed in orbit at an altitude
of 890 km. It carries a pushbroom-type imager that acquires both multispectral
and panchromatic imagery, the latter with a GSD of 2 m. It is also capable of
acquiring stereo images.

 GeoEye-1
The GeoEye-1 (formerly known as OrbView-5) is operated by the company GeoEye
and was launched in September 2008. It operates at an altitude of 670 km. The
payload consists of the GeoEye Imaging System, which acquires both multispectral
and panchromatic imagery, with a GSD at nadir of 1.64 m and 0.41 m,
respectively. The imager provides stereo imagery acquired in any direction
(along-track and across-track).

 IKONOS
Five months after the failure during launch of the first satellite, the company Space
Imaging (since 2006 GeoEye) successfully launched the IKONOS-2 (today referred to
as IKONOS) satellite in September 1999. It was the first commercial satellite ever to
provide very high resolution imagery. It operates at an altitude of 680 km and
acquires both multispectral and panchromatic imagery with a GSD of 4 m and
1 m, respectively. At nadir the resolution of the panchromatic imagery is 0.82 m.


 IRS P5/CartoSat-1
The IRS P5 satellite (also known as CartoSat-1), operated by the Indian Space
Research Organisation (ISRO), is one of many ISRO remote sensing satellites and
was launched in May 2005. It was placed in orbit at an altitude of 620 km. The
payload instrumentation consists of two panchromatic cameras, PAN-F (forward
pointing) and PAN-A (aft pointing), providing fore-aft stereo imagery with a GSD of
2.5 m.

 KOMPSAT-2/Arirang-2
KOMPSAT-2 (Korea Multi-Purpose Satellite-2, also referred to as Arirang-2) is the
second satellite of this Korea Aerospace Research Institute program. It was launched
from the Russian Plesetsk Cosmodrome in July 2006 and placed in orbit at
an altitude of 680 km. The Multi-Spectral Camera acquires both multispectral
and panchromatic imagery with a GSD of 4 m and 1 m, respectively.

 MACSat/RazakSat
The MACSat (Medium-sized Aperture Camera Satellite) is a minisatellite mission of
an international cooperation project between Malaysia and Korea. It was launched
in July 2009 and operates at an altitude of approximately 680 km in an orbit with an
unusual inclination of 9 degrees. Its pushbroom imager provides both multispectral
and panchromatic imagery with a GSD of 5 m and 2.5 m, respectively.

 QuickBird
After the failure of the first two satellites of the company DigitalGlobe (formerly
EarthWatch) - EarlyBird (1997) and QuickBird-1 (2000) - the QuickBird-2 was
launched successfully in October 2001. The satellite, today referred to as QuickBird,
operates at an altitude of 445 km. The pushbroom-type high resolution camera on
board acquires both the multispectral and panchromatic imagery with a GSD of
2.4 m and 0.6 m, respectively. Due to its pointing capability of ±30 degrees both
along-track and across-track it can provide single scenes, mosaics, and stereo
imagery.

 RADARSAT 2
It is the second satellite of the Canadian Space Agency’s program RADARSAT that
was launched in December 2007. It operates at an altitude of 790 km and acquires
C-band radar data in eleven different modes. The resolution of the data varies from 100 m
(ScanSAR Wide mode) to 3 m (Ultra-fine mode). As an evolution of the program, the
RADARSAT Constellation, consisting of three new satellites that will provide
daily access to 95 % of the Earth’s surface, is planned for launch in 2015.

 Resurs-DK1
The satellite, currently operated by the Russian Federal Space Agency (Roskosmos),
with a design derived from the former Soviet ‘Jantar’ reconnaissance satellites, was
launched in June 2006 and placed in orbit at an altitude of 360 km (at perigee9).
It provides both multispectral imagery with a GSD between 2.5 and 3.5 m and
panchromatic imagery with a GSD of 1 m.

 RISAT-2
The RISAT (Radar Imaging Satellite) is the first ISRO satellite mission using an active
radar sensor system. It is officially presented as a satellite for disaster management
applications; however, it is supposedly used by the Indian MOD as an all-weather
surveillance satellite. RISAT-2 uses a C-band SAR (5.35 GHz). It was launched in
April 2009 and placed in orbit at an altitude of 450 km (at perigee). The sensor
offers various modes, for example the High Resolution Spotlight mode, which provides
imagery with a GSD better than 2 m.

 SPOT 5
The SPOT 5 is the last satellite of the constellation of SPOT (Système Probatoire
d’Observation de la Terre) satellites and is operated by the French organisations CNES
(Centre National d’Etudes Spatiales) and SPOT Image. The satellite was launched in
May 2002 and operates at an altitude of 825 km. Like all SPOT satellites, it acquires
both multispectral and panchromatic imagery. The two HRG (High Resolution
Geometric) instruments acquire panchromatic imagery with a GSD of 5 m; however,
the ‘Supermode’ technology allows this to be improved to 2.5 m. The sensors can be
steered up to 27° across-track, enabling stereoscopic imaging and increased revisit
capabilities.

 TanDEM-X
TanDEM-X is a high-resolution interferometric SAR mission of the German
Aerospace Centre. The satellite was launched in June 2010 and it is almost identical
to the TerraSAR-X1 satellite. Both satellites fly in a close formation (at a distance
varying from 300 m to 500 m) and therefore provide a flexible single-pass SAR
interferometric configuration. The data acquired by these satellites enable the
generation of a global, consistent, and high-precision digital elevation model.

 TerraSAR-X1/TSX-1
TerraSAR-X1 (also referred to as TSX-1) is the first satellite of a German SAR
mission. It was launched in June 2007 and operates at an altitude of 500 km. The
SAR instrument works in the X-band, i.e. it uses the frequency of 9.65 GHz, and
acquires data in a variety of modes. The Spotlight HS and Spotlight SL modes provide
the highest spatial resolution, which is 1 m.

 TopSat
The first ever British very high resolution satellite, operated by the British National
Space Centre and funded by the MOD, was launched in October 2005 from the Plesetsk
launch site in Russia. It operates at an altitude of 680 km (at perigee) and, in addition
to three-band multispectral imagery with a GSD of 5 m, it acquires panchromatic
imagery with a GSD of 2.5 m. The sensor can use a viewing angle of up to 30 degrees.

9 Perigee is the point in outer space where an object travelling around the Earth is closest to the Earth. The point
at the greatest distance from the Earth is called apogee.

 WorldView-1
Presented as a next-generation satellite, WorldView-1 is operated by
DigitalGlobe. It was launched in September 2007 and placed in orbit at an
altitude of 490 km. The pushbroom-type high resolution camera on board acquires
panchromatic imagery with a GSD of 0.5 m (at nadir). It also provides single-pass
stereo coverage.

 WorldView-2
WorldView-2 is the follow-on to the WorldView-1 satellite and is operated by
DigitalGlobe. It was launched in October 2009 and placed in orbit at an altitude
of 770 km. The panchromatic CCD array of the high resolution camera on board uses
more than 35,000 detectors in a row and provides panchromatic imagery with a GSD
of 0.46 m (at nadir). It also acquires 8-band multispectral imagery with a GSD of
1.8 m (at nadir). In addition, it provides single-pass stereo coverage.
The graph in Fig. 13 shows the orbit altitudes and GSD values for selected commercial
satellites acquiring panchromatic imagery.

Figure 13. Orbit altitudes and GSD of selected satellites acquiring panchromatic imagery.

With the increasing number of satellites orbiting the Earth, the time between images being
captured at the same location is decreasing. This revisit time is an important issue if changes
to the features of interest are to be detected. Also, the large choice of sensors can make the
user less dependent on individual data suppliers and leads to a reduction of data prices.
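
Spatial resolution is closely tied to orbit altitude. As a rough illustration, under a simple pinhole-camera model the ground sample distance at nadir is GSD ≈ H·p/f, where H is the altitude, p the detector pixel pitch and f the focal length. The parameter values below are hypothetical and do not describe any satellite named above:

```python
# Pinhole-camera approximation of ground sample distance at nadir:
#   GSD ~= altitude * pixel_pitch / focal_length
# All parameter values here are illustrative, not real sensor data.

def gsd(altitude_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Approximate ground sample distance at nadir, in metres."""
    return altitude_m * pixel_pitch_m / focal_length_m

# E.g. a hypothetical 10-micron detector behind a 10 m focal length at 500 km:
print(round(gsd(500_000.0, 10e-6, 10.0), 6))  # -> 0.5 (m)
```

The model makes the trade-off visible: at a fixed optical design, lowering the orbit improves the GSD proportionally, which is one reason imaging satellites such as QuickBird fly comparatively low.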


3.2.1.2 Military satellites


Military satellites acquire data to support the national interests and national security of a
given country10. Such data are rarely shared. In some cases it is difficult even to learn of
the mere existence of a particular satellite or satellite system. It is not the purpose of this
text to give a complete history and record of military satellites; therefore only several
examples will be presented here.

 Helios
It is the second generation military surveillance satellite of the program conducted
by France in conjunction with Belgium, Greece, Italy, and Spain. It is operated by the
French MOD’s agency DGA (Délégation Générale pour l’Armament) and the French
space agency CNES (Centre National d’Etudes Spatiales). To date there have been four
launches of these satellites: Helios 1A in July 1995, Helios 1B in December 1999,
Helios 2A in December 2004, and finally Helios 2B in December 2009. The Helios 2B
operates at an altitude of 680 km and provides imagery with a GSD of 0.5 m [12].

 IGS
The IGS (Information Gathering Satellites) are Japan’s first military
reconnaissance satellites, launched between March 2003 and November 2009. The
satellites, called Optical (IGS-1A, IGS-3A, IGS-4A, IGS-5A) or Radar (IGS-1B, IGS-3B,
IGS-4B), carry either optical or synthetic aperture radar sensor, respectively. The
satellites probably operate in pairs in two orbital planes at an altitude of 490 km.
The optical sensors acquire panchromatic and multispectral imagery with a GSD of
1 m and 4.5 m, respectively. The radar sensor works in the C-Band and acquires data
with a GSD between 1 m and 3 m. With a full constellation of these satellites Japan is
now able to gather optical and radar imagery of any place in the world every day
[13].

 KH
The Key Hole (KH) satellites were the first US reconnaissance satellites. The first satellites of
this program were launched under the name Discoverer or Corona. They all carried a
special panoramic camera called the Key Hole. From the early 1960s this name was also
used for the satellites themselves. In the beginning the project was managed by the CIA and
the US Air Force; later it was operated by the National Reconnaissance Office (NRO).
The program started with the KH-1 prototype satellite launched in April 1959 to test
the film capsule recovery techniques. There were 22 launches of the KH-1
between April 1959 and September 1960; however, only one mission was successful.
The typical orbit altitude was 200 km at perigee and 900 km at apogee, and the
satellite mass varied from 620 kg to 860 kg. The camera provided a spatial
resolution of 12 m (such resolution was not available commercially until the
launch of the SPOT 3 satellite in 1993, more than three decades later).

10 Since not all these satellites are operated exclusively by national MODs, it would be more appropriate to use
the term ‘government satellites’. However, this term is not used very often.


The program continued with numerous launches of continuously evolving
cameras and satellite busses, e.g. 10 launches of KH-2, 12 launches of KH-3,
31 launches of KH-4, 70 launches of KH-4A, 24 launches of KH-4B, etc. [14].

KH Satellites Chronology: KH-1 (Corona), KH-2 (Corona), KH-3 (Corona),
KH-4 (Corona), KH-4A (Corona), KH-4B (Corona), KH-5 (Argon), KH-6 (Lanyard),
KH-7 (Gambit), KH-8 (Gambit), KH-9 (Hexagon), KH-10 (Dorian/MOL),
KH-11 (Kennan/Crystal), KH-12 (Improved Crystal).

The KH-11, with the code name Kennan or Crystal, was the first electro-optical
digital transmission imaging reconnaissance satellite, with 9 launches
between December 1976 and November 1988 (with only one failure). The typical
satellite mass was approximately 13,500 kg and the sensor provided
imagery with a GSD of 0.15 m.
The first KH-12 satellite, with code
names Improved Crystal, Dragon, Ikon, and Byeman (also possibly KH-11B), was put
into orbit by the space shuttle Atlantis in February 1990. The satellite broke up
after three weeks but it was followed by six successful launches in November 1992,
December 1995, December 1996, October 2001, October 2005, and January 2011.
The satellite mass varies from 16 to 26 tons (with 7 tons of fuel) and the typical orbit
altitude varies from 260 km at perigee to 910 km at apogee (however, the satellite
is capable of changing both its orbit altitude and inclination). The sensor acquires
imagery in the visible and IR portions of the EMS and provides imagery with a
GSD of 0.1 m [14].
 Ofeq
The Ofeq is a series of optical reconnaissance satellites operated by the Israeli
MOD. The satellites use an unusual retrograde orbit with an altitude between
340 km (perigee) and 590 km (apogee). The Ofeq 5, launched in May 2002, provides
imagery with a spatial resolution of 0.8 m. The Ofeq 7, launched in June 2007, was
equipped with a newer sensor; therefore the spatial resolution of its imagery is
reportedly better than 0.5 m [15]. The most recently launched satellite of the
series is the Ofeq 9, launched in June 2010, with the same characteristics as its
predecessor [16].
Unlike the other satellites of the series, the Ofeq 8 satellite (referred to as TecSAR or
Polaris), launched in January 2008 from the Indian Sriharikota launch site, is an
all-weather satellite using an X-band SAR [17]. It acquires radar imagery in
four different modes. The best of them, the SpotLight mode, provides imagery with a
GSD of 1 m.


 Onyx/Lacrosse
The Onyx (formerly Lacrosse) is a radar imaging reconnaissance satellite operated
by the NRO. The first satellite of this type was deployed from the space shuttle
Atlantis in December 1988 and operated at a typical orbit altitude of 660 km. The
resolution of the imagery was probably between 1 and 3 m.
The last satellite of this type, Onyx 5/Lacrosse 5, was launched in April 2005 [18]. It
operates at an altitude of 720 km and is supposedly capable of manoeuvring in the
orbit. It provides radar imagery with a GSD of about 1 m.
 SAR-Lupe
It is the first German satellite-based radar reconnaissance system. It is a
constellation of five identical small satellites in three orbital planes to provide
worldwide coverage at an altitude of approximately 500 km. They were all launched
from the Plesetsk Cosmodrome in Russia in December 2006, July 2007, November
2007, March 2008, and July 2008. The synthetic aperture radar works in the X-Band
and provides data with a GSD better than 1 m [19].
There are also other military satellites and satellite constellations, such as Chinese FSW,
ZY, or Yaogan; Russian Kosmos/Araks, Kosmos/Orlets, Kosmos/Persona, etc.

3.2.1.3 Future satellite systems


Within the next three years at least a dozen new satellites of existing or newly
introduced commercial systems will be launched. The spatial resolution of the imagery
acquired by their sensors will certainly match or even slightly improve on the resolution
provided by current systems, i.e. 2 m for multispectral imagery and 0.5 m for
panchromatic imagery.
As far as military systems are concerned, it is difficult to predict future
development. However, the current trend is building and launching satellite constellations
providing both optical and radar data with very high resolution, enabling frequent repetitive
coverage of any place in the world. Also, more satellites belonging to the family of
‘invisible’ surveillance satellites capable of manoeuvring in orbit will probably be deployed.


3.2.2 Airborne sensors


Airborne sensors might be very similar to those carried by satellite platforms, i.e.
photographic cameras, scanners or radar. However, because these sensors operate at
altitudes much lower than those of satellites, they can provide imagery of higher
spatial resolution. The sensors can be mounted on fixed-wing aircraft,
rotary-wing aircraft, Unmanned Aerial Vehicles (UAVs) or ‘balloons’. There are also special
sensors employed solely on airborne platforms, for example laser scanners, cameras for
Full Motion Video (FMV) capture, FLIR balls, or full-matrix CCD sensors (Fig. 14).

a) b) c)

Figure 14. Examples of airborne mounted sensors: the ADS80 Airborne Digital Sensor
from Leica (a), the ALTM Orion Airborne Laser Terrain Mapper from Optech (b), and the
Raven Eye II Unmanned Multi-mission Stabilized Payload from Northrop Grumman (c).
(http://www.leica-geosystems.com, http://www.optech.ca,
http://www.es.northropgrumman.com)

 Aircraft
There is a wide variety of aircraft used as platforms for acquiring imagery. These
range from military aircraft (reconnaissance, surveillance, or tactical
reconnaissance aircraft), through various types of helicopters, to special
photogrammetric aircraft using highly specialised metric cameras and airborne
scanners.
 UAV
UAVs are also known as Remotely Piloted Aircraft (RPA), Unmanned Aerial
Systems (UAS) or ‘drones’. They can be fixed- or rotary-wing aircraft as well as
lighter-than-air and near-space systems. In many cases certain types of UAVs can
overcome some weaknesses of manned aircraft. For instance, they can operate at much
higher altitudes, they can utilize solar power as a source of energy, or they can hover
above the point or area of interest for many hours or even days.


There is a wide variety of UAV types that can be classified according to their weight
or operating altitude, such as micro, mini, small, tactical, high altitude long
endurance (HALE), medium altitude long endurance (MALE), and so on [20].

 Balloons
Although the general term ‘balloon’ is used there are several distinct classes of this
platform type. These are blimps or non-rigid airships, aerostats, moored balloons,
etc. Blimps are free-flying airships without an internal supporting framework and
they are powered whereas aerostats are anchored to the ground. Because the
aerostats are not highly pressurized, bullets will not burst them and they can
actually remain buoyant for hours after suffering multiple punctures.
One of the latest classes of this platform is the hybrid airship. It gains lift from three
different sources: the aerostatic lift given by the on-board helium, the aerodynamic
lift given by its hull shape, and lastly the diesel engines and vector vanes. One
example of this platform is the Long-Endurance Multi-intelligence Vehicle
(LEMV) currently being developed by Northrop Grumman for the United States
Army11. The LEMV will potentially be capable of lifting a 2,300 kg payload up to
6,100 m for up to three weeks (see Fig. 15).

Figure 15. US Army LEMV Hybrid Airship. (http://www.as.northropgrumman.com)

11 Lockheed Martin was contracted to construct a high-altitude airship (HAA) for the US Army. The HAA was
intended to operate above the jet stream at a height above 18,000 m for up to one month. It was designed to be
150 m long and 46 m in diameter. Due to budget problems the HAA program was cancelled.


3.2.3 Ground sensors


Ground sensors comprise imaging sensors mounted on wheeled or tracked armoured
vehicles, portable systems, and autonomous or remotely controlled Unattended Ground
Sensors (UGS).
The UGS can be deployed in the area of operation, where they detect, classify and report
target information. UGS systems employ small, low-cost and robust sensors expected to last in
the field for a very long time, even months. UGS systems utilize a combination of
detectors: seismic detectors, magnetic detectors, acoustic sensors and imagers such as
passive infrared sensors. These systems can also use a thermal camera, triggered by a
passive infrared sensor, taking snapshots of nearby motion events [21]. There are many
examples of UGS systems, such as Terrain Commander, Covert Unattended Ground Imager,
BAA, Falcon Watch, Seraphim MUGI, and so on (see Fig. 16).

a) b) c)

Figure 16. Examples of Unattended Ground Sensors: BAA Observation and Reconnaissance
Equipment (a), Terrain Commander 2 Network Enabled Surveillance System (b), and Falcon
Watch Remote Intrusion Detection And Surveillance System (c).
(http://www.rheinmetall-defence.de, http://www.textron.com, http://www.harris.com)


3.3 Image resolution and interpretability

3.3.1 Effect and limitations of spatial resolution


As mentioned earlier, the spatial resolution (or IFOV, GSD or GRD) of imagery refers
to the size of the smallest element of the image, i.e. a pixel. From the image analyst's
point of view, however, the term 'image resolution' has a different meaning: it refers to the
ability to distinguish between objects depicted in the image (see Fig. 17). Theoretically, to be
distinguished individually, adjacent objects must be separated from each other by at least
one GSD [8]. A general rule, however, is to have an image with a GSD approximately 2.5 times
smaller than the distance separating the two objects.

Figure 17. The ability to distinguish between adjacent objects on an image.

The left column in Fig. 17 shows the arrangement of two adjacent objects on the
Earth's surface, separated by a distance d; the squares in the background represent the
GSD footprint of the image. The middle column shows the corresponding (gray) pixels,
affected by the reflectance of the real objects, as they appear in the image. The right column
shows the relationship between the GSD of the image and the distance separating the real
objects.


If

GSD > d/2

then the affected pixels in the image will always be depicted as adjacent, regardless of the
position of the GSD footprint with respect to the position of the objects on the Earth's surface.
It means that these objects will always be depicted as one object.

If

GSD = d/2

then the chance of distinguishing the objects as two individual objects will depend on the
position of the GSD footprint. Only when the GSD footprint fits the gap between the objects
will these objects be distinguished separately. In all other cases they will be depicted as one
object.

If

GSD < d/2

then the objects will always be depicted as separate, regardless of the position of the GSD
footprint.
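The separability rule above, together with the 2.5-times rule of thumb, can be sketched as a small helper. This is a minimal illustration, not from the source; the function names are invented for the example.

```python
# Sketch: classify whether two objects separated by distance d can
# appear distinct on an image with a given GSD, following the three
# cases described above. Units are metres throughout.

def separability(gsd: float, d: float) -> str:
    if gsd > d / 2:
        return "merged"      # always depicted as one object
    if gsd == d / 2:
        return "depends"     # depends on the GSD footprint position
    return "separate"        # always depicted as two objects

# The rule of thumb quoted in the text: GSD should be roughly
# 2.5 times smaller than the separating distance.
def recommended_gsd(d: float) -> float:
    return d / 2.5

print(separability(5.0, 4.0))    # "merged"
print(separability(1.0, 4.0))    # "separate"
print(recommended_gsd(10.0))     # 4.0
```

Such a check is only a first-order screen; as the text notes, acquisition conditions also affect whether objects are actually distinguishable.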
It should be emphasized that having an image with a spatial resolution of 10 m does not
mean that it is possible to recognize objects 10 m in size. Spatial resolution affects the
interpretation process and all its individual tasks: the closer the spatial resolution is to the
size of the object, the lower the chance of recognizing, or even detecting, the object in an
image. In this context, it is necessary to distinguish between the individual interpretation
tasks: detection, recognition, identification, and technical analysis.

• Detection
It is the ability to discover the existence of an object, based on its configuration and
on other contextual information, without recognizing it.

• Recognition
It is the ability to assign a feature or an object on an image to a general class
(e.g. tank, single-lane bridge).

• Identification
It is the ability to identify a feature or an object on an image as a precise type
(e.g. T-72 tank, MiG-21).

• Technical analysis
It is the ability to precisely describe a feature, an object or a component of an object.


For example, imagery with a spatial resolution of 4.5 m will allow an analyst to detect an
aircraft. A resolution of 1.5 m will allow recognition of the class of that aircraft, such as a
fighter, a fighter bomber, or a bomber. But to identify the type of the aircraft precisely,
imagery with a spatial resolution of 0.15 m will be needed. When a technical analysis is
required, that is, a precise description of fine details of that aircraft, a spatial resolution of
0.04 m will be needed. Examples of the image resolution required for other target types are
shown in Tab. 1. (In practice it is not as simple as this example suggests; for our purposes,
however, we can afford such a simplification.)

Table 1. The minimum resolved object sizes in metres [22].

Spatial resolution is not the only factor affecting the information content of an image and
hence its interpretability. It is also necessary to consider acquisition conditions such as
illumination, shadows, sensor look angle, and the influence of the atmosphere.
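The aircraft example above can be turned into a small lookup. This is an illustrative sketch only; the thresholds are the aircraft values quoted in the text, and in practice each target type needs its own table.

```python
# Sketch: given a GSD, which interpretation task can be expected for an
# aircraft target? Thresholds are the aircraft values from the text.

AIRCRAFT_THRESHOLDS = [        # (maximum GSD in metres, task)
    (0.04, "technical analysis"),
    (0.15, "identification"),
    (1.5,  "recognition"),
    (4.5,  "detection"),
]

def achievable_task(gsd: float, thresholds=AIRCRAFT_THRESHOLDS) -> str:
    """Return the most demanding task achievable at the given GSD."""
    for max_gsd, task in thresholds:
        if gsd <= max_gsd:
            return task
    return "none"

print(achievable_task(0.1))   # "identification"
print(achievable_task(2.0))   # "detection"
print(achievable_task(10.0))  # "none"
```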

3.3.2 Image Interpretability Rating Scales


It is obvious that for intelligence purposes a measure of image quality different from the
physical image parameters is needed. This measure exists and is called
'interpretability'. A special scale providing an objective standard for image quality was
developed: the Image Interpretability Rating Scale. Since simple image quality measures,
such as resolution or image scale, cannot adequately predict image interpretability, this
scale assesses the information potential of an image for intelligence purposes. The most
widely known scale developed and currently used is the US National Image
Interpretability Rating Scale (NIIRS) [23]. Through a process referred to as 'rating' an image,
the NIIRS is used by image analysts to assign a number which indicates the level of
information that can be extracted from an image of a given interpretability level. In other


words, rather than defining quality in terms of physical parameters such as GSD or signal-to-
noise ratio (SNR), NIIRS defines quality in terms of the ability to extract information12.
In order to relate imagery quality, expressed in terms of NIIRS, to fundamental image
attributes, the Image Quality Equation (IQE) can be specified. In other words, the IQE relates
the impact of sensor system and acquisition parameters to the measurement of final image
quality. The equation includes a number of variables, such as scale (which might be
expressed as the GSD), scene contrast, sharpness (determined from the system
modulation transfer function, MTF), illumination, atmospheric conditions, and optics and
sensor characteristics. An IQE can be used in designing a new imaging system or for
optimizing collection from existing systems. The IQE can be generally expressed as

NIIRS = f (GSD, MTF, SNR, etc.)


or

NIIRS = f (altitude, focal length, atmosphere, etc.)
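One widely cited concrete instance of an IQE is the General Image Quality Equation (GIQE). The sketch below implements the GIQE version-4 form as commonly published; treat the coefficients as indicative rather than authoritative, and note that GSD is entered in inches and as a geometric mean of the two image directions.

```python
import math

# Sketch of GIQE-4: predicted NIIRS from GSD (inches, geometric mean),
# RER (relative edge response), H (edge overshoot), noise gain G and
# SNR. Coefficient pair (a, b) switches at RER = 0.9 in the published
# form. Assumption: this reproduces the commonly published GIQE-4.

def giqe4_niirs(gsd_in: float, rer: float, h: float,
                g: float, snr: float) -> float:
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251
            - a * math.log10(gsd_in)
            + b * math.log10(rer)
            - 0.656 * h
            - 0.344 * g / snr)

# Example: a sharp system with roughly 1 m GSD (about 39.4 inches)
print(round(giqe4_niirs(gsd_in=39.4, rer=0.9, h=1.0, g=10.0, snr=50.0), 2))
```

As the text notes, such an equation is useful both for designing new systems and for predicting the interpretability of planned collections.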

The following two rating scales will be discussed in more detail - the US National Image
Interpretability Rating Scale and the NATO Image Interpretability Rating Scale.

3.3.2.1 National Image Interpretability Rating Scale


The NIIRS was developed in the early 1970s as a 10-level criteria-based scale, ranging
from level 0 (no value for intelligence purposes) to level 9. Criteria were
defined at each level for five general categories of military equipment (air, electronic, ground,
missile, and naval). The scale was officially introduced to the intelligence community in 1974
and has continued to evolve and grow in scope since then.
Later there were attempts to apply the visible NIIRS also to infrared and radar imagery. It
soon became apparent that, due to their special properties, these types of imagery require
different criteria. This resulted in the development of separate 10-level scales for both radar
and thermal infrared imagery. Like the visible NIIRS, none of these scales is tied to the
spatial resolution of an image.
The initial NIIRS dealt with military objects only. But when such objects were not visible
on the image, it was difficult for analysts to provide a NIIRS rating. Therefore non-
military cultural criteria, normally present in any image, were added to the scales. The

12 As FMV data are used more often and more extensively in the IMINT process, standard measures for assessing
image interpretability are needed for motion imagery as well. Due to the dynamic nature of motion imagery,
factors such as target motion, camera motion, and data compression do not allow applying the NIIRS scales
developed for still imagery. The Video National Imagery Interpretability Rating Standard (V-NIIRS) is one of the
current projects in development.


cultural criteria include, for example, buildings, roads, railroads, and bridges, and can be
used for rating when military objects are not present.


4. Collateral information

The imagery itself usually cannot provide an exact, definitive, and unambiguous result;
therefore other information might be used. Collateral, or ancillary, information is a vital
information source and a tool the image analyst needs to perform his or her task. It
is usually non-image information used to assist in the interpretation of an image. There
are numerous types of collateral information: interpretation keys, models,
theme encyclopaedias, report templates, previous images, previous reports, ground pictures,
maps, books, statistical tables, field observations and so on. Collateral information is often
used by the image analyst intuitively, in the form of knowledge based on everyday
experience and formal training [10].
In order to categorize objects and features and thereby facilitate the use of collateral
information, especially the process of reporting, several basic themes of military interest,
called target categories in NATO, were defined. These may be organized according to the
producer's operating procedures or tailored to the customer's requirements
(see the example in Fig. 18). Within NATO, the target category list is specified in STANAG 3596
'Air Reconnaissance Requesting and Target Reporting Guide' (see Fig. 19).

THEMES OF MILITARY INTEREST

Aeronautical Installations
Ports and Harbour Installations
Military Installations
Electronic Installations
Storage Facilities
Industrial Installations
Networks (roads, railroads, waterways, power lines, pipelines, etc.)
General Terrain Features
Missile Systems
Special Equipment of Ground/Air/Naval Forces

Figure 18. The example of themes of military interest [25].


TARGET CATEGORY LIST


Airfields
Anti-Aircraft Artillery and Missile Systems
Electronic Installations
Headquarters and Barracks
Storage and Repair Installations
Ground Activity
Obstacle Crossing
Shipping
Route Reconnaissance
Terrain Reconnaissance
Coastal Reconnaissance
Bridges and Tunnels
Water Control Installations
Port Installations
Rail Installations
Industrial Installations
Power Installations
Urban Areas/Habitation
Specific Structure

Figure 19. The Target Category List defined in STANAG 3596 [26].

4.1 Interpretation keys


An image interpretation key is simply reference material designed to permit rapid and
accurate identification of objects or features represented on images. It helps the image
analyst organize the information observed on an image and guides him or her to a correct
identification of unknown objects. Interpretation keys serve either or both of two
purposes: (1) they are a means of training inexperienced personnel in the interpretation of
complex or unfamiliar topics, and (2) they are a reference aid for experienced interpreters to
organize information and examples pertaining to specific topics.
Generally, an image interpretation key can be represented by a diagram, chart, table, list,
catalogue or set of examples. There are basically three types of interpretation keys: direct
keys, selection keys and elimination keys.


• Direct key
This type of interpretation key allows direct identification of objects (see Fig. 20
and Fig. 21). It may take the form of a catalogue showing particular types of objects
or equipment from a selected theme of interest or a target category list. It usually
contains technical parameters, typical features enabling recognition, and one or
more pictures or diagrams. Direct keys usually contain a large number of examples
of objects observed at known facilities.

• Selection key
Using this type of interpretation key, the image analyst identifies objects by
comparing various characteristics and parameters - on images and/or in word
descriptions - until the correct or most probable object is finally selected (Fig. 22).

• Elimination key
This type of interpretation key selects types and shapes of objects and their parts
by eliminating non-corresponding choices in lists. These keys are composed of
word descriptions ranging through various levels, from broad to specific
characteristic discrimination. The image analyst progresses down through this
hierarchy, making choices at branching description paths. Finally, by the process of
eliminating all differing features, the object is identified (Fig. 23).
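An elimination key is essentially a decision tree: the analyst answers a question at each branch, eliminating the non-matching alternatives until one type remains. A minimal sketch, with purely illustrative questions and aircraft labels:

```python
# Hypothetical elimination key as a nested dictionary. The "question"
# entries document what the analyst decides at each branch; the other
# keys are the possible answers. All names here are invented examples.

KEY = {
    "question": "wing shape?",
    "delta": {
        "question": "engine intake position?",
        "side": "MiG-21-like delta fighter",
        "chin": "Mirage-like delta fighter",
    },
    "swept": {
        "question": "engine count?",
        "one": "single-engine swept-wing fighter",
        "two": "twin-engine swept-wing fighter",
    },
}

def classify(node, answers):
    """Walk the key, consuming one answer per branching question."""
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):   # reached a leaf: object identified
            return node
    return None

print(classify(KEY, ["delta", "side"]))   # "MiG-21-like delta fighter"
```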
In NATO, STANAG 3483 'Air Reconnaissance Intelligence Reporting Nomenclature -
ATP-26(C)' defines the standard terminology for the description of equipment and
installations and presents the information sorted into the following eight sections:
Roads/Rail/Waterways, Navy, Army, Aircraft/Airfield, Installations, Terrain Features and
Coverage, Civilian Industry, and Electronics [27]. Each feature is accompanied by either an
example of its appearance in an image or a scheme.
There are also commercial interpretation keys provided primarily to customers using
imagery acquired by company-operated satellites. The TerraSAR-X IMINT Manual
provided by the Infoterra company is one example [28]. It shows how hundreds of defence-
and security-related objects appear in high-resolution SAR data. Each SAR image
example is displayed side by side with a high-resolution optical image depicting the same
object at the same scale, and an explanation of the effects is provided (Fig. 24).


a)

b)

Figure 20. An example of the direct interpretation key: the S-75/SA-2 Guideline
SAM site. Typical site configuration (a) and the real site depicted on an image (b).
(http://www.ausairpower.net, http://maps.google.com)


a)

b)

Figure 21. An example of the direct interpretation key: the S-200/SA-5 Gammon
SAM site. Typical site configuration (a) and the real site depicted on the image
(b). (http://www.ausairpower.net, http://maps.google.com)


Figure 22. An example of the selection interpretation key: mobile bridges [25].

Figure 23. An example of the elimination interpretation key: aircraft. Apart from
the shape of the wing, other identification elements are used - type of wing, position of
wing, wing tip shape, type of engine, engine intake arrangements, shape of
fuselage, tail assembly shape, etc. [25].


a)

b)

Figure 24. Two examples from the TerraSAR-X IMINT Manual: power line pylon (a)
and fence (b). (http://tim.infoterra.de)


4.2 Models
Models are conceptual objects describing natural or man-made features or objects as
they appear on an image. They can be of two types: general models and exact
models.

• General models
General models provide a generic description of natural or man-made features or
objects. For example, the ammunition storage model shows that this type of
installation will often contain a storage area, administrative and technical buildings,
a fire fighting facility, passive protection, etc.

• Exact models
Exact models (sometimes referred to as precise models) pair the real appearance of
particular objects (i.e. the ground truth) with their appearance on an image (see
Fig. 25).

a)

b)

Figure 25. The examples of the exact model showing the ground truth (left) and the
object appearance on an image (right): oil storage tanks (a) and cooling towers (b).
(http://maps.google.com)


4.3 Theme encyclopaedias


A theme encyclopaedia offers functional, organizational, and contextual knowledge of a
specific theme of interest. It can contain scientific and technical information concerning a
particular theme, for example the production of electricity.

• Functional knowledge
Functional knowledge helps answer the question: How does it work? It helps the
analyst learn the basic process, from generating the steam, through the rotation of
a turbine, up to transforming the voltage.

• Organizational knowledge
Organizational knowledge answers the question: What is the generic layout of the
installation? It shows all basic components of a power plant, such as fuel storage,
generator hall, cooling towers, transformer yard, etc. (see Fig. 26).

• Contextual knowledge
Contextual knowledge helps answer the question: What is the best location for this
type of installation? It helps the analyst learn that a power plant needs access to a
source of fresh water, that it will be placed close to a populated place or an
industrial area or both, etc.
Creating a theme encyclopaedia takes a long time and requires investing a considerable
amount of know-how; therefore creators are not always willing to share such documents.
Like interpretation keys, theme encyclopaedias are built on the experience of image
analysts. Some organizations, such as the Groupement pour le Développement de la
Télédétection Aérospatiale (GDTA)13, publish separate examples only for educational
purposes in their official courses [25]; other organizations, such as the European Union
Satellite Centre (EUSC)14, employ these documents solely in their internal training
materials [29].

13 The GDTA, established in 1973 in Toulouse, France, was an organization whose goal was to develop the
methods of remote sensing and to promote the exploitation of remote sensing methods and products by
educating specialists at various levels. The GDTA was a group of several French organizations, such as the Centre
National d'Etudes Spatiales (CNES), the Institut Géographique National (IGN), etc. The GDTA was closed in 2005.
14 The EUSC is an agency of the Council of the European Union. It was founded in 1992 and is located in Torrejón
de Ardoz, Spain. Its main mission is to support the decision making of the EU by providing the products resulting
from the analysis of satellite imagery and collateral data.


Figure 26. The Nuclear Centre in Yongbyon, North Korea, with annotated areas: the
5-MWe reactor site, the 50-MWe reactor, the radiochemical laboratory complex, and
the fuel fabrication area. (http://maps.google.com, annotation according to [30])

4.4 Report templates


Report templates are prearranged structures of reports. Using these templates can
simplify the collection of basic information necessary for the recognition and identification
of objects or features. The report templates lead the image analyst to specify items such as
type, status, activity, defences, infrastructure, support facilities, etc. An example of the
airfield report template according to the NATO standards can be seen in Fig. 27.
Templates for all the NATO target categories can be found in STANAG 3596 [26]. More
details concerning reporting, especially the target reporting guides, can be found in
Chapter 6.1.
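The idea behind a report template can be sketched as a checklist that reports unaddressed items explicitly instead of dropping them. The field names below loosely follow the airfield guide in Fig. 27; the code is an illustration, not a NATO format.

```python
# Hypothetical report template as an ordered list of required fields.
# Any field the analyst did not fill in is reported as UNKNOWN, which
# mirrors the Unknown options in the STANAG-style template above.

AIRFIELD_TEMPLATE = [
    "location", "type", "serviceability", "occupation", "equipment",
]

def build_report(values: dict) -> str:
    lines = []
    for field in AIRFIELD_TEMPLATE:
        lines.append(f"{field.upper()}: {values.get(field, 'UNKNOWN')}")
    return "\n".join(lines)

print(build_report({"location": "WGS 84 31UTM45763214",
                    "type": "Military",
                    "occupation": "Occupied"}))
```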


CATEGORY 01: AIRFIELDS

1. Location and Type:


a. CONFIRMED LOCATION: (Example: WGS 84 31UTM45763214)
b. TYPE: Military/Civilian/Mixed/Unknown
c. FUNCTION, NATURE & SUBORDINATION: (Example: Airbase, Bomber, Naval)
2. Status:
a. DEPLOYMENT: Permanent/Temporary/Unknown
b. SERVICEABILITY: Serviceable/Partly Serviceable/Unserviceable/Unknown
c. OCCUPATION: Occupied/Partly Occupied/Unoccupied/Unknown
d. CAPABILITY: Operational/Partly Operational/Non Operational/Decoy/
Unknown
e. HARDENING: Hardened/Partly Hardened/Non Hardened/Unknown
f. CONSTRUCTION: Under Construction/Modified/None Observed
g. CAMOUFLAGE: Camouflaged/Partly Camouflaged/None Observed
3. Equipment and Activity: Report location, number, function, type, orientation, state
of readiness for each item: (Example: Main Parking
Apron, 02, Bomber, Backfire B, engines running)
4. Defence: Report location, number, function, type for each item:
a. Local Air defence: (Example: 31UTM45763214, 06, AAA, S60)
b. Surface Defence: (Example: 030 100m RP, Defensive Positions)
c. Passive Defence: (Example: SW Dispersal, Fence Secured)
5. Facilities/Description: Report location, number, function, type for each item:
a. Primary Facilities: Include orientation, dimensions and materials for operating
surfaces. (Example: Single runway 09/27, 2000m, 35m,
concrete, with parallel taxiway, 1500m, 25m, concrete)
b. Support Facilities: (Example: 300m SW RP, 01, ATC Tower)
6. Damage assessment:
a. Physical Damage: (Example: Destroyed, confirmed single weapon impact point
15m from DMPI 101. Smoke visible from impact point and
air vents.)
b. Functional Damage: (Example: Building probably functionally destroyed. 4 fire
trucks and 5 ambulances at North end of building.)
c. Unplanned Damage: (Example: Probable weapon impact point in road 200m E
of bunker. Road traffic able to bypass the crater.)
7. Analyst comment:

Figure 27. Example of the NATO target reporting guide: airfields [26].


5. Image analysis process

The IMINT cycle mirrors the intelligence cycle: Planning and Direction, Collection,
Processing and Exploitation, Production, Dissemination, and Utilization15. These steps define
both a sequential and an interdependent process for developing IMINT (Fig. 28) [3]. To stay
focused on imagery processing, only the following two steps will be examined: processing
and production.

Figure 28. The intelligence cycle (according to [3]).

Processing and exploitation in the intelligence cycle involves the conversion of collected
data into information suitable for the production of intelligence. Imagery processing for
IMINT purposes refers to the conversion or transformation of exposed film or electronic
photographic, electro-optical, infrared, and radar imagery into a form usable for
interpretation and analysis.
Production in the intelligence cycle is the activity that converts information into
intelligence through the integration, analysis, evaluation, and interpretation of all-source data
and the preparation of intelligence products in support of known or anticipated user
requirements. IMINT production refers to writing imagery reports; annotating imagery
products; creating imagery-derived products; integrating and fusing IMINT into all-source
intelligence products, and so on (IMINT products are discussed more in Chapter 6.2).
Since digital imagery has its own characteristics and differs significantly from the
traditional photographic prints and transparencies, it requires special treatment in the
context of visual interpretation. It means that prior to commencing the interpretation process

15 Other sources present different cycles. For example, GDTA presents the basic image cycle as follows:
Information requirements, Tasking, Collection, Processing and distribution, Exploitation, Imagery derived
reporting, Report dissemination [25].


the imagery must be either pre-processed or enhanced, or both. The interpretation as such
requires employing a combination of several elements of image interpretation describing
characteristics of objects and features as they appear on imagery. Then the image analysis
process comprises several successive actions performed by an image analyst. These can be
feature extraction, meaning extraction, or change detection.

5.1 Image pre-processing and enhancement


Many effects occurring during the acquisition of imagery result in inaccuracies, errors
and decreased image quality. The sources of these problems are numerous: technical
imperfections of the sensor, current atmospheric conditions, flight geometry, the Earth's
curvature, etc. To exploit the imagery effectively, these inaccuracies and errors must be
corrected16. This is usually done in two steps - image pre-processing and image
enhancement. More information about these techniques can be found in [8], [10], [31],
[32], or [33].

5.1.1 Image pre-processing


Image pre-processing comprises two basic parts: radiometric corrections and geometric
corrections.

• Radiometric corrections
The aim of radiometric corrections is to change the brightness values (digital
numbers) of individual pixels. The main reason for this correction is the presence
of noise that may be caused by atmospheric conditions or by the sensor.
Radiometric errors can also occur. These errors are usually caused by a sensor
malfunction and appear as missing individual pixels or lines of pixels, which is
called striping or banding. They are usually corrected by filtering or by
replacing the missing values with the values of adjacent pixels.
Sometimes atmospheric correction is treated separately from the radiometric
corrections because the effects of the atmosphere upon the imagery are not
considered errors - they are part of the signal received by the sensor. However, it is
often important to remove these effects, especially for image matching and change
detection analysis. Atmospheric correction can exploit various methods, ranging
from detailed modelling of the atmospheric conditions during data acquisition to
calculations based on the image data itself.

• Geometric corrections
The need for geometric corrections comes from the fact that raw
data suffer from serious geometric distortions impeding the exploitation of the data
as a geographic product. These distortions may be caused by various factors, such as
16 This refers particularly to the imagery acquired by satellite sensors and airborne metric cameras and scanners.


the imperfections of the sensor; the curvature and rotation of the Earth; the platform
altitude, attitude, and velocity; the terrain relief; etc. Geometric corrections are
aimed at transforming the image coordinate system and resampling the pixels so that
the resulting image is in accordance with map coordinates or is converted to a
specific map projection. Georeferencing, rectification, geocoding and resampling are
the procedures which compensate for geometric distortions.
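The simple radiometric repair mentioned above, replacing a dropped scan line with values from the adjacent lines, can be sketched as follows. This is a minimal illustration; operational destriping works on the full sensor geometry.

```python
# Sketch: repair a dropped scan line (sensor malfunction) by replacing
# it with the mean of the lines immediately above and below it.

def repair_dropped_line(image, bad_row):
    """image: list of rows of pixel values; bad_row: index of the
    missing/zeroed line. Returns a repaired copy of the image."""
    repaired = [row[:] for row in image]
    above, below = image[bad_row - 1], image[bad_row + 1]
    repaired[bad_row] = [(a + b) // 2 for a, b in zip(above, below)]
    return repaired

img = [
    [10, 12, 14],
    [0,  0,  0],    # dropped scan line
    [20, 22, 24],
]
print(repair_dropped_line(img, 1)[1])   # [15, 17, 19]
```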

5.1.2 Image enhancement


Image enhancement is the process of improving the visual appearance of digital images in
order to facilitate interpretation and, in a certain sense, to increase the amount of
information that can be obtained from the image. Very often, image enhancement is
conducted without regard for the integrity of the original data: the original
brightness values are altered in the process of improving their visual qualities and lose
their relationship to the original brightnesses on the ground. Therefore, enhanced
images should not be used as input for additional analytical techniques; rather, any further
analysis should use the original values as input. There are basically three groups of image
enhancement: radiometric enhancement, spatial enhancement, and spectral enhancement.
• Radiometric enhancement
Radiometric enhancement deals with the individual pixel values in the image. The
pixel values are manipulated so that new values are assigned to each pixel
independently of the neighbouring pixels. Techniques in this group include contrast
stretching, histogram equalization, histogram matching, and brightness inversion.

• Spatial enhancement
The aim of spatial enhancement is to increase the interpretability of an image, to
facilitate potential automated feature extraction, and to eliminate, or at least reduce,
the effects of sensor imperfections. While radiometric enhancements operate on each
pixel individually, spatial enhancement modifies pixel values based on the values of
surrounding pixels. Spatial enhancement deals largely with spatial frequency, which is
the difference between the highest and lowest values of a contiguous set of pixels.
Techniques in this group include convolution filtering, resolution merge, etc.
• Spectral enhancement
Spectral enhancement works with multispectral or hyperspectral imagery. The
manipulations comprise colour synthesis, colour enhancement of particular bands, or
transformations of image data to a form more convenient for interpretation.
Techniques in this group include principal component analysis, decorrelation stretch,
RGB/IHS transforms, and working with (vegetation) indices.
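As an example of the radiometric group, a min-max linear contrast stretch can be sketched as follows. Note that, as stated above, the stretched values no longer relate to the original ground brightnesses, so further analysis should use the original data.

```python
# Sketch: min-max linear contrast stretch. Each pixel is rescaled,
# independently of its neighbours, so that the darkest input value maps
# to out_min and the brightest to out_max.

def contrast_stretch(pixels, out_min=0, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                     # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

print(contrast_stretch([50, 60, 70, 80]))   # [0, 85, 170, 255]
```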


5.2 Elements of image interpretation


In the past, only manual interpretation of analogue images was carried out.
It was called photointerpretation and was described as 'the act of examining photographic
images for the purpose of identifying objects and judging their significance' [34]. Though
feature extraction and other techniques deal primarily with digital imagery today, the
procedures are based on the same principles as traditional photointerpretation. This can be
extrapolated to digital imagery and called simply interpretation.
The procedures applied in photointerpretation had almost no demands on special
equipment. It was a relatively subjective process and, due to the limited capabilities of
image analysts, it was restricted to a single image at a time. With the advent of digital image
exploitation, special processing methods were developed, as well as automated procedures
allowing analysts to perform unbiased analysis and to process imagery acquired in several
spectral bands and covering large portions of the Earth's surface.
Generally, the aim of interpretation is to discover and recognize the objects on
images and to extract them for further processing. Searching for objects is performed
by comparing differences between various objects or between the objects and their
background. The image analyst also uses traditional interpretation elements such as tone,
texture, shadows, pattern, association, shape, size, location, and interrelationships [10].

• Tone
Tone is the only directly evaluated element. For black-and-white images, image tone
denotes the lightness or darkness of a region within an image. Tone may be
characterized as 'light', 'medium gray', 'dark gray', 'dark', and so on. For colour
imagery, image tone refers simply to 'colour', such as 'dark green' or 'light blue'.
Since image tone can be influenced by factors other than the absolute brightness of
the Earth's surface, caution should be employed during interpretation. It is known
that a human interpreter can provide reliable estimates of relative differences in tone,
but cannot accurately describe absolute image brightness.

• Texture
Image texture refers to the apparent roughness or smoothness of a portion of an image.
Usually texture is caused by the pattern of highlighted and shadowed areas created
when an irregular surface is illuminated from an oblique angle.

• Shadow
Shadow is an especially important clue in the interpretation of objects. A building or
vehicle, illuminated at an angle, casts a shadow that may reveal characteristics of its
size or shape that would not be obvious from the overhead view alone.

• Pattern
Pattern refers to the arrangement of individual objects into distinctive recurring forms
that facilitate their recognition on imagery. Pattern on an image usually follows from a
functional relationship between the individual features that compose the pattern.

Imagery Intelligence (IMINT)

• Association
Association specifies the occurrence of certain objects or features, usually without the strict spatial arrangement implied by pattern. Association of specific items has great significance when the identification of a specific class of equipment implies that other, more important, items are likely to be found nearby. In other words, certain objects are so closely linked to others that the presence of one implies a high probability of the presence of the others. Association is one of the most helpful clues for the identification of man-made features.

• Shape
Shape describes the external form or configuration of an object and is a result of the geometric arrangement of tone and colour elements. The shape of some objects is so distinctive that they can be identified by this element alone. For example, natural objects usually have irregular shapes with irregular boundaries, whereas man-made objects usually have geometrical shapes and distinct boundaries.

• Size
Size is important in two ways. First, the relative size of an object or feature in relation
to other objects on the image provides the interpreter with an intuitive notion of its
scale, even though no measurements or calculations may have been made. Second,
absolute measurements of the size of an object can confirm its identification based
upon other factors, especially if its dimensions are so distinctive that they form definite
criteria for specific items or classes of items. Furthermore, absolute measurements
permit derivation of quantitative information, including lengths, volumes, or even rates
of movement.
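A minimal sketch of the second point (the ground sample distance, object sizes and time interval below are hypothetical values for illustration):

```python
def ground_length(pixels, gsd_m):
    """Length on the ground, in metres, of a feature measured in pixels
    on an image with the given ground sample distance (metres/pixel)."""
    return pixels * gsd_m

def speed(displacement_px, gsd_m, dt_s):
    """Rate of movement derived from the displacement of an object
    between two images taken dt_s seconds apart (metres per second)."""
    return ground_length(displacement_px, gsd_m) / dt_s

print(ground_length(30, 0.5))  # a 30-pixel object at 0.5 m GSD is 15.0 m
print(speed(40, 0.5, 10.0))    # a 40-pixel shift in 10 s -> 2.0 m/s
```

A measured length of this kind can then be checked against known dimensions of specific items or classes of items, as described above.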

• Site
Site refers to the topographic position or the location of objects with respect to each other or to terrain features. For example, power plants need a water supply for cooling, sewage treatment facilities are positioned at low topographic sites near rivers to collect waste flowing from higher locations, etc.

5.3 Image interpretation techniques


The image analyst must routinely conduct several kinds of tasks, many of which may be completed together in an integrated process. These can be itemized as the following individual tasks: detection, localization, recognition, identification, listing, comparison, interpretation, understanding, and prediction. Each of these tasks requires a different level of expertise and is heavily dependent on the image quality, the time available, and other limiting factors. Nevertheless, the general competences that an image analyst should possess can be summarized as follows: very good knowledge of the thematic field, understanding of the relation between objects and their representation in an image, analytical expertise, the ability to perform image interpretation, geographic expertise, and mastery of image processing tools [25].


Considering some of the tasks in combination with particular techniques, several different levels of image interpretation can be specified [29]:
• Observation and detection
This is the basic level of image interpretation (see the description of the influence of spatial resolution on image interpretability in Chapter 3.3.1).
• Measurement and identification
The result at this level is the assignment of a label to objects (see Chapter 5.3.1).
• Analysis and interpretation
The result at this level is the assignment of meaning (see Chapter 5.3.2).
As interpretation is influenced by a number of factors, it is obvious that the description of objects and features may not always be sufficiently precise. Therefore it is also necessary to consider different levels of uncertainty in the results presented to a customer (see Chapter 6).
It is known that there are a number of cognitive or optical illusions that can mislead especially the novice image analyst when performing image interpretation. One of the traditional examples is the Kanizsa triangle17. It is an optical illusion in which a white triangle is formed by illusory or subjective contours: it looks like a triangle, but in fact there is no triangle (see Fig. 29). To avoid interpretation mistakes caused by various cognitive illusions, all information should be cross-checked whenever possible and, if possible, several image analysts should participate in the interpretation.

Figure 29. The Kanizsa triangle. (http://en.wikipedia.org)

17 An optical illusion first described by the Italian psychologist Gaetano Kanizsa in 1955.


5.3.1 Feature extraction


Feature extraction in the image analysis process is not exactly the same process as feature extraction in remote sensing, although the individual techniques can be very similar. The following techniques or functions can be specified: classification, enumeration, measurement, and delineation.

• Classification
Classification in this context means the assignment of objects, features, or areas to
classes based on their appearance on the imagery. It is a different process than
traditional classification in remote sensing18.

• Enumeration
Enumeration is the task of listing or counting discrete items visible on an image. For
example, housing units can be classified as ‘detached house’, ‘multistory residential’,
etc. and then reported as numbers present within a defined area.
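A hedged sketch of how such counting can be automated: once items of a single class have been mapped to a binary mask, enumeration reduces to counting connected groups of pixels (4-connectivity is assumed here; the mask is hypothetical):

```python
def count_objects(mask):
    """Count connected groups of 1-pixels (4-connectivity) in a binary
    mask, e.g. buildings already classified as 'detached house'."""
    rows, cols = len(mask), len(mask[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                count += 1                  # a new, unvisited object
                stack = [(r, c)]
                while stack:                # flood-fill its pixels
                    i, j = stack.pop()
                    if ((i, j) in seen
                            or not (0 <= i < rows and 0 <= j < cols)
                            or not mask[i][j]):
                        continue
                    seen.add((i, j))
                    stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
print(count_objects(mask))  # 3
```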

• Measurement
Measurement, or mensuration, may range from a simple visual estimate of the size, shape and colour of an object to detailed measurement and calculation of distances, heights, volumes and areas, and also of scene brightness or density.
Measurements should always be performed using some tool, because the human eye can be misled. Since the image processing phase also involves georeferencing or even orthorectification, current software applications provide reliable tools for obtaining exact coordinates and dimensions of objects and features depicted in an image.
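As an illustration of why georeferenced imagery permits exact coordinates, the sketch below applies the common six-parameter affine geotransform (GDAL convention); the coefficients are hypothetical:

```python
def pixel_to_ground(gt, col, row):
    """Map a pixel position (col, row) to ground coordinates using a
    six-parameter affine geotransform (GDAL convention: origin,
    pixel size, and rotation terms)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical north-up image: origin (500000, 4650000), 0.5 m pixels.
gt = (500000.0, 0.5, 0.0, 4650000.0, 0.0, -0.5)
print(pixel_to_ground(gt, 100, 200))  # (500050.0, 4649900.0)
```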

• Delineation
An image analyst must often delineate, or outline, regions as they are observed on
images. He or she must be able to separate distinct areal units that are characterized by
specific tones and textures, and to identify edges or boundaries between separate
areas.
Manual interpretation, however, is not the only method of extracting information from imagery. There are various techniques, commonly used in remote sensing, that can facilitate feature extraction, such as automated classification, spectral analysis, multitemporal analysis, the application of spectral (or vegetation) indices, etc.
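As one concrete illustration of a vegetation index, the widely used Normalized Difference Vegetation Index (NDVI) can be sketched as follows (the reflectance values are hypothetical):

```python
def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), in the range [-1, 1].
    Healthy vegetation reflects strongly in the near-infrared (NIR)
    and absorbs red light, so its NDVI approaches +1."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

print(round(ndvi(0.50, 0.08), 2))  # dense vegetation: 0.72
print(round(ndvi(0.30, 0.25), 2))  # bare soil: 0.09
```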

18 The aim of classification in remote sensing is to process the multispectral (or hyperspectral) imagery in such a
way that the spectral classes (i.e. the groups of pixels that are uniform with respect to their brightness values in
the different spectral channels) are matched to the information classes (i.e. the categories of interest to the user).
It means that all pixels in the image are assigned to particular classes or themes, for example different kinds of
forest, different kinds of land use, etc.
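The matching of pixels to classes sketched in this footnote can be illustrated, for example, by a minimum-distance classifier that assigns each multispectral pixel to the class with the nearest mean spectral signature (the class means below are hypothetical):

```python
import math

def classify(pixel, class_means):
    """Assign a multispectral pixel to the class whose mean spectral
    signature is nearest (Euclidean distance in band space)."""
    return min(class_means,
               key=lambda c: math.dist(pixel, class_means[c]))

# Hypothetical mean signatures in (red, NIR) reflectance.
means = {"water": (0.05, 0.02), "forest": (0.06, 0.45), "soil": (0.25, 0.30)}
print(classify((0.07, 0.40), means))  # forest
```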


7. Other domains of imagery intelligence


Imagery intelligence is also used in domains other than the military one. Since the advent of commercially available high resolution imagery, virtually anyone can purchase imagery, whether a big player, such as the Council of the European Union, or a small player, such as a private company.
The EU, for example, employs image analysis at the Joint Research Centre (JRC)19 and at the European Union Satellite Centre (EUSC) for general security surveillance of areas of the EU’s interest, for the purposes of arms and proliferation control, treaty verification, support to peacekeeping tasks, support to exercises, crisis monitoring, contingency planning, etc.
There are also various non-governmental, international and humanitarian organizations that maintain their own map catalogues, mostly on their web pages (see examples in Fig. 34, Fig. 35 and Fig. 36). Producing these maps, and especially the image maps, often requires satellite imagery interpretation. These organizations usually do not have access to military satellite imagery; they therefore use imagery acquired by commercial satellites and also rely on other (collateral) information from open sources. When using open source information it is extremely important to verify and cross-check the information. Because this is not always possible, it is important to be aware that the results may not always be reliable. Experience shows that these products often contain mistakes, either geometric or thematic, or both.
On the other hand, imagery intelligence can take the form of business espionage (also industrial or corporate espionage) conducted by a company or corporation to watch over its competitors. Fig. 37 shows an example of how periodic analysis of an area of interest can provide information that is contrary to a competitor’s public claims [37]. Using satellite imagery, companies can perform surveillance of retail car parks to determine trends in customer numbers and even to predict, for example, a competitor’s quarterly revenue.
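A minimal sketch of such trend estimation (the counts are hypothetical): a least-squares slope fitted to periodic car-park counts indicates whether customer numbers are growing or declining.

```python
def trend(counts):
    """Least-squares slope of counts over equally spaced observations:
    positive suggests growing customer numbers, negative a decline."""
    n = len(counts)
    mx, my = (n - 1) / 2, sum(counts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), counts))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Hypothetical monthly counts of cars at a competitor's retail car park.
print(trend([420, 410, 395, 380, 360, 340]))  # negative: business declining
```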
Today there are also companies offering market intelligence in a particular branch, for example the naval market. They provide their customers with competitor or opponent assessments, company profiles, and regular intelligence updates based on the exploitation of satellite imagery.

19 The JRC is a Directorate-General of the European Commission. Its seven institutes are located on five separate sites in Belgium, Germany, Italy, the Netherlands and Spain. Among its other activities, the Institute for the Protection and Security of the Citizen (IPSC) in Ispra, Italy deals with image map production and maintains the Global Atlas on Crisis Areas on its web pages.


a)

b)

Figure 34. Examples of products showing the impact of floods in Pakistan produced by the UN Food and Agriculture Organization (a) and the German Aerospace Centre (b). (http://reliefweb.int)


a) b)
Figure 35. The examples of the situation assessment in Libya (a) and the damage
assessment in the Gaza Strip (b) produced by the UN Operational Satellite Applications
Programme. (http://reliefweb.int)

Figure 36. Example of a report on destroyed houses in Zimbabwe produced by the UN Operational Satellite Applications Programme. (http://www.internal-displacement.org/)


Figure 37. Satellite imagery shows that a competitor is not doing as well as it claims.
(http://www.imagingnotes.com)


Abbreviations

ALOS Advanced Land Observation Satellite


BDA Battle Damage Assessment
CBERS China-Brazil Earth Resources Satellite
CCD Charge-Coupled Device
CNES Centre National d’Etudes Spatiales
DGA Délégation Générale pour l’Armement
EMS Electromagnetic Spectrum
EUSC European Union Satellite Centre
FMV Full Motion Video
GDTA Groupement pour le Développement de la Télédétection Aérospatiale
GEOINT Geospatial Intelligence
GRD Ground Resolved Distance
GSD Ground Sample/Sampling Distance
HAA High-Altitude Airship
HALE High Altitude, Long Endurance
HRG High Resolution Geometric
IFOV Instantaneous Field of View
IGN Institut Géographique National
IGS Information Gathering Satellite
IMINT Imagery Intelligence
IPIR Initial Photographic Interpretation Report
IPSC Institute for the Protection and Security of the Citizen
IR Infrared
IQE Image Quality Equation
ISRO Indian Space Research Organisation
JAXA Japan Aerospace Exploration Agency
JRC Joint Research Centre
KH Key Hole
KOMPSAT Korea Multi-Purpose Satellite
LEMV Long-Endurance Multi-intelligence Vehicle
MACSat Medium-sized Aperture Camera Satellite
MALE Medium Altitude, Long Endurance
MOD Ministry of Defence
NIIRS National/NATO Image Interpretability Rating Scale
MTF Modulation Transfer Function
NIR Near infrared
NRO National Reconnaissance Office
NSA NATO Standardization Agency
OOB Order of Battle
RADAR RAdio Detection And Ranging
RECCEXREP Reconnaissance Exploitation Report
RISAT Radar Imaging Satellite
RBV Return Beam Vidicon
RPA Remotely Piloted Aircraft
SAM Surface-to-Air-Missile
SAR Synthetic Aperture Radar
SNR Signal-to-Noise Ratio
SPOT Système Probatoire d’Observation de la Terre


STANAG Standardization Agreement


SUPIR Supplemental Photographic Interpretation Report
SWIR Short wavelength infrared
UAS Unmanned Aircraft Systems
UAV Unmanned Aerial Vehicle
UGS Unattended Ground Sensor
V-NIIRS Video National Imagery Interpretability Rating Scale
VNIR Very near infrared


References

[1] KOVAŘÍK, V. Introduction to Imagery Intelligence (IMINT). [ESF Textbook]. Brno :


University of Defence, 2010, 33 p.
[2] Title 10 U.S. Code §467: Definitions [online]. [cit. 2010/07/02]. Available from WWW:
<http://codes.lp.findlaw.com/uscode/10/A/I/22/IV/467>.
[3] MCWP 2-15.4. Imagery Intelligence. Washington : US Marine Corps, 2002. 210 p.
[4] Daily Observations from the Civil War. A Diary of American Events - June 18, 1861
[online]. [cit. 2011/07/20]. Available from WWW: <http://dotcw.com/events-diary-june-18-1861/>.
[5] GORDON, M. Above the Earth to Know the Earth. Pathfinder. 2005, May/June, Vol. 3,
No. 3, p. 23.
[6] GORDON, M. Above the Earth to Know the Earth. Pathfinder. 2005, July/August, Vol. 3,
No. 4, p. 24.
[7] SKOOG, A. I. The Alfred Nobel rocket camera. An early aerial photography attempt. Acta
Astronautica. 2010, Vol. 66, No. 3-4, p. 624-635. ISSN 0094-5765
[8] LILLESAND, T. M., KIEFER, R. W., CHIPMAN, J. W. Remote Sensing and Image
Interpretation. 5th edition. New York : Wiley, 2004. 763 p. ISBN 978-0-47-145152-5
[9] ČAPEK, J. Fundamentals of Remote Sensing. [ESF Textbook]. Brno : University of Defence,
2010, 30 p.
[10] CAMPBELL, J. B. Introduction to Remote Sensing. New York : Guildford Press, 2007. xxx,
626 p. ISBN 978-1-59-385319-8
[11] WOOD, L. The Power of Full Motion Video. Geoinformatics. 2010, Vol. 13, No. 2, p. 6-8.
ISSN 1387-0858
[12] Mission accomplished for Arianespace - HELIOS 2B is now in orbit: 7th Ariane 5 launch
in 2009, 35th success in a row. Arianespace [online]. [cit. 2010/07/02]. Available from
WWW: <http://www.arianespace.com/news-press-release/2009/12-18-09-mission-success.asp>.
[13] Japan launches spy satellite under veil of secrecy. Spaceflight Now [online]. [cit.
2010/07/02]. Available from WWW: <http://www.spaceflightnow.com/h2a/f16/>.
[14] PACNER, K. Space Spies. Prague : Albatros, 2005, 283 p. ISBN 978-8-00-001686-3 (in
Czech)
[15] Reconnaissance satellite launched into orbit by Israel. Spaceflight Now [online]. [cit.
2010/07/02]. Available from WWW:
<http://www.spaceflightnow.com/news/n0706/11okef7/>.
[16] Israel launches Ofeq-9 satellite. DefenceNews [online]. [cit. 2010/07/07]. Available from
WWW: <http://www.defensenews.com/story.php?i=4681651&c=MID&s=AIR>.
[17] Covert satellite for Israel launched by Indian rocket. Spaceflight Now [online]. [cit.
2010/07/07]. Available from WWW:
<http://www.spaceflightnow.com/news/n0801/21pslv>.
[18] Mission Status Center. Spaceflight Now [online]. [cit. 2010/07/02]. Available from
WWW: <http://www.spaceflightnow.com/titan/b30/status.html>.


[19] OHB System. SAR-Lupe [online]. [cit. 2010/07/02]. Available from WWW:
<http://www.ohb-system.de/sar-lupe-english.html>.
[20] NATO Joint Air Power Competence Centre. Strategic Concept of Employment for
Unmanned Aircraft Systems in NATO. Kalkar : JAPCC, 2010. 42 p.
[21] Unattended Ground Sensors. Defence Update [online]. [cit. 2010/07/02]. Available from
WWW: <http://defense-update.com/features/du-1-06/feature-ugs.htm>.
[22] STANAG 3769 AR (Edition 2). Minimum Resolved Object Sizes and Scales for Imagery
Interpretation. Brussels : NATO Standardization Agency, 1998. 18 p.
[23] Federation of American Scientists. History of the National Imagery Interpretability
Rating Scales [online]. [cit. 2010/07/02]. Available from WWW:
<http://www.fas.org/irp/imint>.
[24] STANAG 7194 JINT (Edition 1). NATO Imagery Interpretability Rating Scale (NIIRS).
Brussels : NATO Standardization Agency, 2009. 17 p.
[25] GDTA, Ramonville St. Agne. Aerospace Imagery Analysis for Military Intelligence. Image
Analysis Method. 1999. 59 p.
[26] STANAG 3596 JINT (Edition 6). Air Reconnaissance Requesting and Target Reporting
Guide. Brussels : NATO Standardization Agency, 2007. 100 p.
[27] STANAG 3483 JINT (Edition 5). Air Reconnaissance Intelligence Reporting Nomenclature
- ATP-26(C). Brussels : NATO Standardization Agency, 2007, 4 p.
[28] Infoterra. TerraSAR-X IMINT Manual: The Comprehensive Reference Database for Image
Analysts [online]. [cit. 2011/09/30]. Available from WWW:
<http://www.infoterra.de/terrasar-x_imint-manual>.
[29] EUSC, Torrejón de Ardoz. (various internal training materials). 2011.
[30] ALBRIGHT, D., HINDERSTEIN, C. The Age of Transparency. Imaging Notes. 2000,
March/April, Vol. 15, No. 2, p. 26-29. ISSN 0896-7091
[31] KOVAŘÍK, V. Remote Sensing. Selected lectures. [S-1642]. Brno : Military Academy, 2002,
169 p. (in Czech)
[32] ERDAS, Inc. ERDAS. Field Guide. Norcross, GA, USA : ERDAS, Inc., 2008, 776 p.
[33] CCRS. Fundamentals of Remote Sensing. Ottawa : Canada Centre for Remote Sensing,
Inc., 2007, 258 p. Also available from WWW:
<http://ccrs.nrcan.gc.ca/resource/tutor/fundam/index_e.php>.
[34] CONGALTON, R. G., MEAD, R. A. A Quantitative Method to Test for Consistency and
Correctness in Photointerpretation. Photogrammetric Engineering & Remote Sensing.
1983, Vol. 49, No. 1, p. 69-74
[35] Macmillan English Dictionary Online [online]. [cit. 2011/08/04]. Available from WWW:
<http://www.macmillandictionary.com>.
[36] WordWeb Online [online]. [cit. 2011/08/04]. Available from WWW:
<http://www.wordwebonline.com>.
[37] LEGER, D. Sizing Up the Competition. Imaging Notes. 2000, May/June, Vol. 15, No. 3, p.
22-23. ISSN 0896-7091


