Guangjun Zhang
Star Identification: Methods, Techniques and Algorithms
Beihang University, Beijing, China
Attitude measurement is vital for spacecraft. It guarantees accurate orbit
entrance and orbit transfer, high-quality performance of the spacecraft, reliable
space-to-ground communication, high-resolution Earth observation, and the successful
completion of many other space missions. The star sensor is the core
component in the autonomous, high-quality attitude measurement of in-orbit
spacecraft based on the observation of stars. By taking advantage of the
astronomical information of stars, the star sensor method offers good
autonomy, high precision, and high reliability, and is widely applicable in space
flight (celestial navigation).
Generally speaking, a star sensor works in two modes: Initial Attitude
Establishment and Tracking. The star sensor enters Initial Attitude
Establishment Mode when it starts working or when the attitude is lost in space due
to unforeseen problems. In this mode, full-sky star identification is needed because
no attitude information is available. Once the initial attitude is established, the star
sensor enters Tracking Mode. Full-sky autonomous star identification is key to
star sensor development; it has encountered many difficulties and is
therefore a focus of research.
Star identification is interdisciplinary, related to astronomy, image processing,
pattern recognition, signal and data processing, computer science, and many
other fields of study. This book summarizes the research conducted by the author's
team over more than ten years in this field. There are seven chapters, covering
the basics of star identification, star cataloging and star image preprocessing,
the principles and processes of algorithms, and hardware implementation and
performance testing. Chapter 1 is a general introduction, covering the basics of celestial
navigation, with a discussion of the star sensor method and star identification, and
reviews star identification algorithms and development trends in this
field. Chapter 2 deals with the preliminary work in star identification, covering star
cataloging, selection of guide stars, processing of double stars, star image simulation,
star spot centroiding, and calibration of centroiding error. Chapter 3 is a brief
introduction to star identification using triangle algorithms, with special emphasis
on two modified examples, namely angular distance matching and the P vector.
Chapter 4 focuses on star identification using star patterns, including identification
utilizing radial and cyclic star patterns, the log-polar transformation
method, and identification without calibration parameters. Chapter 5 discusses the basic principles of
star identification using neural networks. Two methods are presented: star
identification based on neural networks using features of the star vector
matrix, and using mixed features. Chapter 6 introduces rapid star tracking
using star matching between adjacent frames, covering the star tracking modes of the
star sensor method and different star tracking algorithms, with simulation results
presented and analyzed. Chapter 7, taking a RISC CPU as an example, deals with
hardware implementation, as well as hardware-in-the-loop simulation testing and
field experimentation of star identification.
For many years, the author's research team has received support from Major
Research Grants for Civil Space Programs, the National Natural Science
Foundation of China, the Chinese National Programs for High Technology Research
and Development (863 Program), and aerospace engineering projects. The author
wishes to thank the Department of Science, Technology and Quality Control of the
former State Commission of Science, Technology and Industry for National
Defense, the National Natural Science Foundation of China, the Department of
High and New Technology Development and Industrialization of the Ministry of
Science and Technology, and the Shanghai Academy of Spaceflight Technology for
their support.
This book is based on many years of research on star identification by the author
and his team. The author wants to express his gratitude to the following people in
his team—Xinguo Wei, Jie Jiang, Qiaoyun Fan, Xuetao Hao, Jian Yang, Juan Shen,
Xiao Li, and many others, who have contributed to much of the work introduced in
this book. The author is also indebted to the National Defense Industry Press for
including this monograph in its book series on spacecraft and guided missiles.
Citations in the book are given due credit. References are listed so that interested
readers know where to look for further information.
Star identification involves a wide range of topics and is related to many research
fields. The author does not venture to cover all in this single book and knows
clearly the limitations that may exist. Any mistakes, therefore, remain the sole
responsibility of the author.
Contents

1 Introduction
  1.1 Fundamental Knowledge of Astronomy
    1.1.1 Characteristics of Stars
    1.1.2 The Celestial Sphere and Its Reference Frame
    1.1.3 Star Catalog
  1.2 Introduction to Celestial Navigation
    1.2.1 Basic Principles of Celestial Navigation
    1.2.2 Characteristics of Celestial Navigation and Its Framework
  1.3 Introduction to Star Sensor Technique
    1.3.1 Principles of Star Sensor Technique and Its Structure
    1.3.2 The Current Status of Star Sensor Technique
    1.3.3 Development Trends in Star Sensor Technique
  1.4 Introduction to Star Identification
    1.4.1 Principles of Star Identification
    1.4.2 The General Process of Star Identification
    1.4.3 Evaluation of Star Identification
  1.5 Star Identification Algorithms and Development Trends
    1.5.1 Subgraph Isomorphism Algorithms
    1.5.2 Star Pattern Recognition Class Algorithms
    1.5.3 Other Algorithms
    1.5.4 Development Trends of Star Identification Algorithms
  1.6 Introduction to the Book Chapters
  References
2 Processing of Star Catalog and Star Image
  2.1 Star Catalog Partition
    2.1.1 Guide Star Catalog
    2.1.2 Current Methods in Star Catalog Partition
    2.1.3 Star Catalog Partition with Inscribed Cube Methodology
Star identification is the essential guarantee of the working performance of star
sensors and a key step in celestial navigation. This book summarizes the
research findings by the author’s team in the area of star identification for more than
ten years, with a systematic introduction of the principles of star identification, as
well as the general methods, key techniques, and practicable algorithms. Topics
covered include fundamental knowledge of star sensor and celestial navigation,
processing of the star catalog and star image, star identification methods undertaken
by using modified triangle algorithms, star identification utilizing star patterns, star
identification by using neural networks, rapid star tracking by using star spot
matching between adjacent frames, and hardware implementation and performance
testing of star identification.
This book can be used as a textbook for senior undergraduate and
postgraduate students majoring in information processing, computer science,
artificial intelligence, aeronautics and astronautics, and automation and
instrumentation. Moreover, it can serve as a reference for those engaged in
pattern recognition and other related research areas.
Chapter 1
Introduction
Navigation systems are vital and indispensable for spacecraft. The main task of a
navigation system is to guide a spacecraft to its destination following predetermined
routes with the required precision and within the given time. For this purpose, the
system should provide accurate navigation parameters, including azimuth (i.e.,
horizontal attitude and course), velocity, position, etc. Since these parameters can
be obtained using various physical principles and techniques, there exist different
types of navigation systems [1, 2], e.g., radio navigation systems, inertial navigation
systems, GPS navigation systems, terrain matching navigation systems, scene
matching navigation systems, celestial navigation systems, and integrated naviga-
tion systems, which are an integration of multiple navigation systems.
Based on the known coordinate positions and motion rules of celestial bodies,
celestial navigation uses the astronomical coordinates of an observed object to
determine the geographical position and other navigation parameters of a
spacecraft. Celestial navigation is of limited use for aircraft within the Earth's
atmosphere, as observations are subject to weather conditions. However, for craft
entering thin air or navigating at over 8,000 m above the ground, it is highly
reliable to use the information provided by a celestial navigation system. Unlike other navigation
technologies, celestial navigation is autonomous and requires no ground equipment.
Free from interference by artificial or natural electromagnetic fields, it
radiates no energy externally. In addition, the system is well concealed, highly
precise in determining the attitude, orientation, and position of spacecraft,
and its positioning error does not accumulate with navigation time. In general, celestial
navigation is very promising in terms of its applications.
In this chapter, fundamental astronomical knowledge and the principles of celestial
navigation are introduced first. The second part summarizes technologies for star
sensors and star identification. Star identification algorithms as well as their
development trends are also explained in this chapter. Then each chapter of the
book is briefly introduced.
Celestial navigation observes celestial bodies. Among all the objects observed, stars
are the most important type. Hence, it is necessary to acquire a basic understanding
of the characteristics of stars, which are summarized as follows [3, 4]:
(1) Distance of stars. Stars are quite remote from the Earth. Apart from the Sun, the
nearest star to Earth is Proxima Centauri, which is 4.22 light years away. Therefore, in
celestial navigation, stars can be regarded as celestial bodies at infinite
distances.
(2) Velocity of stars. Stars, also known as fixed stars, are usually considered to be
stationary. Actually, stars are constantly moving at high speeds in space. The
velocity of a star can be decomposed into radial velocity and tangential
velocity. The former refers to the component measured along the observer’s
line of sight (positive when the observed object is moving away from the
observer and negative when it is moving towards the observer), while the latter
is the component measured along the line perpendicular to the observer’s line
of sight. Tangential velocity usually shows up as displacement of stars in the
celestial sphere. Our concern is usually this displacement of stars, also known
as proper motion. The velocity of stars’ proper motion is generally less than
0.1″ per year. So far, only around 400 stars have been observed to be moving
more than 1″ a year.
(3) Brightness of stars. As an inherent characteristic, stars emit visible light on
their own. The brightness of a star refers to its apparent brightness observed
from the Earth, which is subject to both its luminosity (related to its tem-
perature and size) and the distance between the star and the Earth. In
astronomy, the degree of brightness of a star is evaluated with a unit of
measurement called star magnitude (also known as visual magnitude, Mv).
The lower the magnitude is, the brighter the star is. A decrease of one in
magnitude represents an increase in brightness of 2.512 times. A star of 1 Mv
is approximately 100 times brighter than one of 6 Mv. Two stars, Aldebaran
(Alpha (α) Tauri) and Altair (Alpha (α) Aquilae), were originally assigned as
the standard stars for 1.0 Mv in astronomy. Later, Vega (Alpha (α) Lyrae) was
adopted as the standard star for 0.0 Mv and all other stars’ Mv were referenced
to this. Table 1.1 illustrates the visual magnitude of some common celestial
bodies. Stars of 6 Mv or brighter can be seen with the naked eye. Through
an astronomical telescope, stars of 10 Mv or brighter are observable. The Hubble
Space Telescope enables the observation of stars as faint as 30 Mv.
(4) Size of stars. Stars vary significantly in size. However, when observed from
the Earth, their angular diameters are far smaller than 1″, making it reasonable to
treat a star as an ideal point light source.
To sum up, in celestial navigation, stars can be generally considered as nearly
stationary point light sources with certain spectral characteristics at infinite
distances.
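The magnitude scale described in item (3) can be checked with a short calculation. The sketch below (function name is illustrative) encodes the rule that five magnitudes correspond to a brightness factor of exactly 100, so one magnitude corresponds to a factor of 100^(1/5) ≈ 2.512:

```python
def brightness_ratio(m1, m2):
    """Apparent-brightness ratio of a star of magnitude m1 to one of
    magnitude m2 (lower magnitude = brighter). One magnitude step is a
    factor of 100**(1/5), approximately 2.512."""
    return 100.0 ** ((m2 - m1) / 5.0)

print(brightness_ratio(1.0, 6.0))           # 5 magnitudes apart -> 100.0
print(round(brightness_ratio(0.0, 1.0), 3)) # 1 magnitude apart  -> 2.512
```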
(4) Hour circle. Hour circles are the great circles on the celestial sphere that
pass through the celestial poles.
(5) Ecliptic and ecliptic pole. The mean plane of the Earth’s orbit around the Sun
is the ecliptic plane. Its intersection with the celestial sphere is a large circle,
i.e., the ecliptic. The ecliptic poles refer to the points on the celestial
sphere where the sphere meets the imaginary line that passes through the
celestial center and is perpendicular to the ecliptic plane. The obliquity of the
ecliptic, i.e., the angle between the ecliptic plane and the equator plane, is 23°
27′.
(6) Vernal equinox. The equator and the ecliptic intersect at two opposite points.
The vernal equinox γ is the point at which the ecliptic crosses the equator moving
northward.
(7) Celestial coordinate system. The second equatorial coordinate system is
defined as a coordinate system with the celestial equator as its fundamental
circle (or abscissa circle), the hour circle passing through the vernal equinox γ as
its primary circle, and the vernal equinox as its principal point. In astronomy,
the second equatorial coordinate system is also called the right ascension
coordinate system, or simply the celestial coordinate system. The
position of a celestial body is determined by its right ascension and declination
in this system. As Fig. 1.2 shows, QγQ′ refers to the plane of the celestial
equator, while α and δ stand for the right ascension and declination of the
celestial body, respectively. It is stipulated that right ascension is measured
counterclockwise (opposite to the direction of diurnal motion), ranging from
0° to 360°. Declination, on the other hand, is measured from the celestial
equator towards the north and the south, ranging from 0° to +90° and 0° to −90°,
respectively. The position of a star is generally described by giving these two
coordinates in the celestial coordinate system.
A star catalog is an astronomical catalog that lists stars and their data according to
different needs [3]. Usually, a star catalog records the position (marked by right
ascension and declination), proper motion, brightness (measured by star magni-
tude), color, distance, and many other details of a star. It serves as the foundation
and criterion for star identification and attitude determination. Frequently used star
catalogs include the U.S. Smithsonian Astrophysical Observatory Catalogue
(SAO), Hipparcos Catalogue (HIP or HP), Henry Draper Catalogue (HD), Bright
Star Catalogue (BS or BSC), etc. The SAO J2000 (epoch = J2000) compiled by the
U.S. Smithsonian Astrophysical Observatory, recording around 250,000 stars
brighter than 17 Mv, is adopted as the standard catalog internationally [6].
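A mission-specific guide star catalog is typically carved out of a base catalog such as SAO by, among other criteria, a limiting magnitude. The sketch below illustrates only that filtering step; the record layout and names are invented for illustration, not taken from any real catalog format:

```python
# Hypothetical catalog records: (identifier, RA in degrees, Dec in degrees, Mv)
catalog = [
    ("A", 10.0,  5.0, 3.2),
    ("B", 45.0, -2.0, 6.8),
    ("C", 80.0, 30.0, 5.5),
]

def select_guide_stars(records, mag_limit):
    """Keep only stars at least as bright as mag_limit (lower Mv = brighter)."""
    return [r for r in records if r[3] <= mag_limit]

print([r[0] for r in select_guide_stars(catalog, 6.0)])  # ['A', 'C']
```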
The most useful data for astronomical navigation are the position and brightness
of stars. The position of a star, i.e., the projection of a star onto the celestial sphere,
is further decomposed into mean position, true position, and apparent position. The
position of a star recorded in the standard star catalog refers to its mean position at
the standard epoch (J2000). Using the mean position at the standard epoch and the
precession and proper motion from the standard epoch to the current mid-year, one
can calculate the mean position of a star in this particular mid-year. The mean
position on a particular day can then be gained by adding the mean position in
mid-year to the precession and proper motion of the specified day. Furthermore, the
nutation and mean position on a particular day can be summed up, giving the true
position of a star. The apparent position of a star refers to the star’s coordinates in
the celestial coordinate system when observed. It can be obtained when the solar
coordinate system is converted into the Earth coordinate system, i.e., when the
aberration of light is taken into account. For simplicity, astronomers employ the
coordinate sets of right ascension and declination of the standard catalog (namely
the mean position at the standard epoch). In this way, the reference coordinate
system can be treated as the mean coordinate system of the standard epoch
(a coordinate system with mean equinox and mean equator as its coordinate axes).
Therefore, the attitude calculated is in some sense based on the mean coordinate
system of the standard epoch.
For the convenience of subsequent star identification and attitude calculation, the
right ascension and declination of stars are usually regarded and recorded as:
[x, y, z]ᵀ = [cos α cos δ, sin α cos δ, sin δ]ᵀ    (1.1)
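The conversion of Eq. (1.1), from right ascension α and declination δ to a unit direction vector, can be transcribed directly (function name is illustrative):

```python
import math

def radec_to_unit_vector(ra_deg, dec_deg):
    """Unit direction vector (x, y, z) in the celestial coordinate system
    for right ascension and declination given in degrees, per Eq. (1.1)."""
    a, d = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(a) * math.cos(d),
            math.sin(a) * math.cos(d),
            math.sin(d))

# The vernal equinox direction (alpha = 0, delta = 0) maps to the x axis.
print(radec_to_unit_vector(0.0, 0.0))  # (1.0, 0.0, 0.0)
```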
In the past, by observing stars, people utilized their relatively stationary position
and the predictable motion of Earth to navigate. Taking the horizon as a local
horizontal reference, an observer in the Northern Hemisphere can estimate the local
latitude by looking up at the Big Dipper. In fact, this is the navigating approach
used by ancient mariners. However, in addition to a local horizontal reference,
other information, including the exact time of observation (year, month, date, and
moment) and an ephemeris giving the positions of stars, is required to estimate
both the latitude and longitude of a location. Navigators of cross-ocean
flights originally used sextants with bubble levels to manually measure the
line-of-sight angle (also known as visual angle), which is the angle between the
local perpendicular and the line from a star to the observer’s eye. Taking advantage
of the line-of-sight angles of two or more stars, the ephemeris of stars and the exact
time of observation, navigators could calculate the local latitude and longitude.
Thanks to progress in optoelectronic and computer technology, and especially
the emergence of CCD (Charge-Coupled Device) and CMOS (Complementary
Metal–Oxide–Semiconductor) imaging devices, celestial navigation technology has
entered a new stage of development. This technology has been widely employed in
satellites, space shuttles, intermediate-range ballistic missiles, and other spacecraft.
In this section, the fundamental principles and composition of celestial navigation
systems are briefly introduced.
The main task of celestial navigation is to determine the attitude and position of a
spacecraft. This section introduces the basic principles of celestial navigation [7].
Fig. 1.4 The cone of positioning determined through single star observation
Fig. 1.5 Two position lines determined through double star observation
approach, selecting a real point from the two intersections of the three cones, the
position of the spacecraft relative to any nearby celestial body can be expressed.
As stated above, a star catalog and ephemeris information of at least two nearby
celestial bodies (planets) are required for celestial positioning. Such information is
needed by all kinds of positioning technologies (including technology making use
of two nearby celestial bodies, and those utilizing line-of-sight technology or
landmark tracking). Readers interested in the topic may refer to relevant literature
for detailed algorithms related to positioning.
Precise attitude information lays the foundation for the autonomous navigation of
spacecraft, and serves as the most critical component in a spacecraft’s attitude
control system. To determine the attitude of a spacecraft, a reliable reference frame
(e.g., an inertial space, the Sun, the Earth, or a star) is usually selected first.
According to the observed changes with respect to the reference frame, changes in
the attitude of a spacecraft can then be deduced. The attitude-measuring component
is usually called an attitude sensor. A gyroscope is a kind of attitude sensor with an
inertial space as its reference frame. It displays outstanding dynamic performance
and relatively high accuracy in instantaneous attitude measurement. However, due
to its large drift over long voyages, other attitude sensors are needed for correction.
Other widely used attitude sensors include Earth sensors (horizon sensors), Sun
sensors, GPS, and magnetometers. The precision of attitude measurement of these
sensors is relatively low because of their less accurate reference and measurement
vectors. Since the reference vector of these sensors is related to the orbital position
of a spacecraft, a kinetic equation should be used to estimate the orbit of the
spacecraft. Moreover, these sensors are usually only used to estimate the attitude in
one direction. Hence, multiple sensors have to be utilized in order to obtain a
three-axis attitude.
Star sensor technology offers a brand new approach to measuring the attitude of
spacecraft. It adopts the star coordinate system as its reference frame. Since the
spatial position of stars can be considered stationary in the reference frame, and the
measurement of starlight vectors is highly precise, star sensors can measure
the attitude of a craft quite precisely (up to arc-second level).
The accuracy of attitude measurement of several common attitude sensors is
shown in Table 1.2 [9]. Because the attitude measurement of a star sensor is
independent of the spacecraft's orbit, and stars can be observed everywhere in the
sky, star sensors are applicable in various situations, including deep space
exploration. In addition,
being highly reliable and light in weight, star sensors consume relatively little
power and work in multiple modes. They operate autonomously, independently
outputting a three-axis attitude without relying on other attitude sensors. With all
these merits, star sensor technology has become an extraordinarily high-performing
spatial attitude-sensitive technology and has been increasingly recognized and
widely applied in spacecraft.
Table 1.2 Comparison of the attitude measurement accuracy of common attitude sensors

Sensor        Reference frame   Attitude measurement accuracy
Earth sensor  Horizon           6′
Sun sensor    Sun               1′
Magnetometer  Geomagnetism      30′
Star sensor   Star              1″
This section first introduces the principles and structure of star sensors. Both the
current situation and future development of star sensor technology is then
discussed.
w = Av    (1.6)
Here, A stands for the attitude transformation matrix from a celestial coordinate
system to a star sensor coordinate system. As an orthogonal matrix, A satisfies
AᵀA = I    (1.7)
When two or more stars in the measured star image have been correctly
identified (that is, when their corresponding guide stars have been found), the
attitude transformation matrix A can be calculated. The process of computing the
attitude matrix from the measurement vectors of two stars is introduced in detail
in Sect. 1.2.1.
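One classical construction of A from exactly two vector pairs is the TRIAD algorithm. The sketch below is a generic TRIAD implementation satisfying w = Av of Eq. (1.6), offered as an illustration rather than the exact procedure referenced above:

```python
import numpy as np

def triad(w1, w2, v1, v2):
    """Attitude matrix A with w = A v (Eq. 1.6), built from two measured
    star vectors w1, w2 (sensor frame) and the matching guide star vectors
    v1, v2 (celestial frame). Vectors must be non-parallel."""
    def frame(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        t1 = a / np.linalg.norm(a)            # anchor axis
        t2 = np.cross(a, b)
        t2 /= np.linalg.norm(t2)              # normal to the (a, b) plane
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(w1, w2) @ frame(v1, v2).T    # orthogonal: A.T @ A = I
```

With error-free measurements this reproduces the true rotation exactly; with noisy measurements, the first (anchor) vector is matched exactly and the second only approximately, so the more accurate observation should be passed first.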
Star sensors integrate technologies from optics, mechanics, electronics, image
processing, embedded computing, and so on. As shown in Fig. 1.8, a star sensor
consists of a baffle, a lens, an image sensor and its circuit board, a signal processing
circuit, a housing structure, an optical cube, and some other components.
The baffle is designed to eliminate the external stray light that shines into the
image sensor in order to reduce the background noise of the captured star image.
Stray light, including sunlight, earthshine, and so on, can considerably interfere
with the star positioning and star identification programs of a star sensor.
The lens of a star sensor images stars at an infinite distance onto the focal plane
of the image sensor. The baffle, together with the lens, constitutes the optical system
of a star sensor.
As the key component of a star sensor, an image sensor transforms optical
signals into electrical ones. Frequently used image sensors can be grouped into two
categories, CCD and CMOS. The subsequent signal processing circuit is respon-
sible for the image sensor’s imaging drive, timing control, star positioning and star
identification, and so on. The circuit finally outputs a three-axis attitude.
There are generally two types of external interfaces for a star sensor: a power
interface and a communication interface. The former provides power for the proper
functioning of a star sensor, and the latter provides the data communication between
a star sensor and the system.
The optical alignment cube fixed to the housing structure facilitates the con-
version of the measuring benchmark when the star sensor is being mounted and
aligned on a spacecraft.
The internal structure of the ASTRO APS star sensor by Jena-Optronik, a
German Company, is shown in Fig. 1.9 [10].
Since the mid-twentieth century, star sensor technology has experienced four stages
of development, i.e., early-stage star sensors, first-generation (CCD) star sensors,
second-generation CCD star sensors, and CMOS star sensors [11–13]. Currently,
early 1970s. Equipped with 100 × 100-pixel CCD from Fairchild Semiconductor
Company and an 8080 microprocessor by INTEL, STELLAR is capable of tracking
as many as 10 stars simultaneously with a precision of around 7 arc seconds over a
3° optical FOV. CCDs are small in size, light in weight, low in power consumption,
and high in reliability. Thanks to these merits, they have been used extensively in
star sensors. With defocusing and centroiding technologies, stars can be precisely
located at a sub-pixel level, significantly improving the accuracy of measured star
vectors. Subsequently, U.S. companies such as Ball, TRW, and HDOS developed
CCD star trackers with a larger FOV and pixel arrays. These early-stage CCD star
sensors (or star trackers) are called first-generation star sensors. This generation is
characterized by its accuracy ranging from 100 arc seconds (large FOV) to 3 arc
seconds (small FOV), but lacks autonomous star identification and attitude calcu-
lation functions.
With the appearance of high-speed microprocessors and large capacity memory,
scientists started to develop second-generation star sensors with autonomous star
identification and attitude calculation functions. The features of second-generation
star sensors are summarized as follows:
① Utilizing a built-in star catalog, second-generation star sensors can autono-
mously identify stars and solve lost-in-space problems without external pro-
cessors or external input of initial attitude information;
② The FOV becomes larger and more stars can be observed in it. These features
make it possible to realize full-sky autonomous star identification;
③ Second-generation star sensors can directly export attitude information with
respect to the inertial reference frame.
With a larger FOV, second-generation star sensors can meet the demand on the
number of stars for star identification by detecting only the brighter ones. In
addition, the sensor can independently establish initial attitude without relying on
external devices. Therefore, lost-in-space problems can be solved and the sensor
can navigate autonomously.
Since the 1970s, researchers have been active in studying and developing star
sensors. Star sensor technology has been increasingly and widely applied in earth
observation, lunar observation, planetary observation, interstellar communication,
spacecraft docking, and many other fields. Meanwhile, star sensors have also
commercialized rapidly. Companies producing star sensors can be found not only in
the U.S., but also in Germany, France, Belgium, the Netherlands, and many other
countries. Among them, the U.S. has the largest number of such institutions, e.g.,
Ball Corporation, EMS Technologies Inc., Corning OCA Corporation, Jet
Propulsion Laboratory (JPL), Lawrence Livermore National Laboratory (LLNL),
Honeywell Technology Solutions Lab, etc. Some universities, such as Texas A&M
University and the Technical University of Denmark, have also conducted in-depth
research in star sensor technology. Figure 1.10 and Table 1.3 respectively
demonstrate some typical CCD star sensors and the performance indicators of
several typical CCD star sensors.
Mature as it now is, CCD technology has inherent limitations: it is
incompatible with deep submicron ultra-large-scale integration technology,
so only the photosensitive pixel array can be realized on one chip, while other
functional units cannot be integrated on the same chip, complicating the imaging
system and making it a multi-chip system. A typical star sensor based on a CCD
imaging system weighs 1–7 kg and consumes 7–17 W of power. In addition, a
CCD array requires dedicated clock-drive pulses, various operating voltages, and
near-perfect charge transfer. The production process of CCDs is complex and the
cost is rather high. All these limitations make it hard to reduce the size, weight,
and power consumption of a CCD-based imaging system.
Since the 1990s, people have set even higher demands for the weight, power
consumption, and radiation resistance of star sensors. As a potential alternative to
CCD technology, Active Pixel Sensor (APS) technology was developed for star
sensors by the Jet Propulsion Laboratory (JPL) in the US. APS-based CMOS image
sensors are superior to CCD image sensors in the following aspects:
(1) Easily integrated and equipped with simple interfaces. The photosensitive
array, driving and control circuit, analog signal processor, A/D converter,
all-digital interface, and other components are easily integrated onto one
chip. This single-chip digital imaging system simplifies the electronic design
of star sensors, decreases the number of peripheral circuits, and reduces the
size and weight of the imaging circuit system. The technology is thus favor-
able for the miniaturization of star sensors.
(2) Highly radiation resistant. Results of ground tests and space applications
suggest that the radiation resistance of CMOS image sensors significantly
exceeds that of CCDs.
(3) Low in power consumption. Through photoelectric conversion, CMOS image
sensors can directly generate current signals. The only requirement is a single
5 V or 3.3 V power supply, and the consumption is 1/10 of that of a CCD
image sensor.
(4) Flexible in data reading. The photodetector and output amplifier of a CMOS image sensor are embedded in each pixel, so pixels can be addressed and read out individually, much like DRAM.
Thanks to the above advantages, CMOS image sensors have been rapidly and
widely adopted in star sensors. CMOS-based star sensors, often called third-
generation star sensors, have been the focus of study in the field of star sensor
technology over the past decade. Many institutions have poured huge human and
material resources into relevant research. It is noteworthy that with the development
of CMOS technology, the resolution and sensitivity of CMOS imaging devices have improved significantly in recent years. Figure 1.11 presents some typical CMOS
star sensors, and Table 1.4 demonstrates the performance indicators of several
typical CMOS star sensors.
Miniaturization, intelligence, and low cost are future trends in spacecraft design. Correspondingly, the function, size, power consumption, and other
Fig. 1.11 Typical CMOS star sensors a AA STR star sensor developed by Galileo Company,
Italy b ASTRO APS star sensor developed by Jena-Optronik GmbH, Germany c YK010 star
sensor developed by Beijing University of Aeronautics and Astronautics, China
Star identification is a vital prerequisite for the precise determination of the spatial
attitude and position of a spacecraft. It identifies stars by matching the stars in the
current FOV of a star sensor with reference stars in the guide star database.
Generally, the angular distance of a star pair and the brightness of stars are con-
sidered as the basic characteristics of star images. The angular distance of a star
pair, in particular, plays a crucial role in star identification. In this section, the
fundamental principles and basic process of star identification are introduced first.
Then, the performance of star identification is evaluated.
During the 1960s and 1970s, star sensors were widely applied in lunar orbiters,
Apollo, Mariner, and other spacecraft. Astronauts took pictures of stars with film
cameras and sent them back to the Earth for further processing. Hence, a large
number of star images had to be manually analyzed and measured. In this context,
Junkins came up with the idea of developing a universal star identification algo-
rithm, which became the earliest star identification technology. With the rapid
development of aerospace technology, the autonomous navigation function of
spacecraft requires that star identification approaches should meet higher require-
ments in autonomy, speed, and precision.
For star sensors, star identification amounts to searching for guide stars in a star
catalog (celestial coordinate system) corresponding to the measured stars in the star
image [9], as shown in Fig. 1.12.
Generally speaking, a star sensor has at least two operating modes, i.e., an initial
attitude establishment mode and a tracking mode. During the initial moment of
operation or when faced with lost-in-space problems caused by a malfunction, a star
sensor will enter into Initial Attitude Establishment Mode. With no prior attitude
information, full-sky star identification is needed at this stage. Full-sky star iden-
tification usually takes a relatively long time and requires a high identification rate.
Once the initial attitude is established, the star sensor will enter into Tracking Mode.
Using the attitude information observed in the previous frames of images, a star
sensor can predict and identify the position of stars in the current frame. Star
identification in the Tracking Mode is faster and easier to operate.
Currently, there are various star identification algorithms. However, due to the
differences in specific indicators and application background of star sensors, no
unified and recognized evaluation standard has yet been established to assess the
performance of these algorithms. Reviewing the present literature on star identifi-
cation algorithms for star sensors, the evaluation and comparison are usually done
in simulation conditions according to the following aspects [16]:
(1) Robustness
Robustness is mainly used to assess the impact of different kinds of interference on
a star identification algorithm. Under the impact of a certain kind of interference,
robustness is usually measured by the statistical results of the identification rate of
an algorithm in repeated recognition tests with different boresight directions. The
most frequently used types of interference are noise and interfering stars.
Corresponding to the two kinds of information used in star images, i.e., the
position and brightness of star points, noise is grouped into two categories in
simulation: star position noise and magnitude (brightness) noise. The positional
deviation of star points is mainly caused by calibration errors in the star sensor (e.g., focal length measurement errors, lens distortion, optical axis offset errors, etc.) and star centroiding algorithm errors. Sub-pixel centroiding accuracy can be obtained through high-precision calibration and fine locating of star points. Nonetheless, in
order to inspect an algorithm’s resistance to interference from position noise, a
relatively large position noise is usually adopted to comprehensively investigate
how it performs. Magnitude noise reflects the accuracy level of an image sensor’s
sensitivity toward stars’ magnitudes. Though star magnitude is considered unreli-
able, common star identification algorithms still take advantage of it to a greater or
lesser degree for faster and more accurate identification. In addition, the intro-
duction of magnitude noise may increase or reduce the number of measured stars
(stars with magnitudes that approach the observation limit of the star sensor) in the
FOV. Hence, it is necessary to evaluate the impact of magnitude noise on star
identification algorithms.
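These two noise models can be sketched in simulation code as follows (an illustrative sketch; the function name, the default noise levels, and the (x, y, Mv) star layout are assumptions, not taken from the book):

```python
import random

def add_noise(stars, pos_sigma=0.5, mag_sigma=0.3, mag_limit=6.0):
    """Perturb simulated star centroids (pixels) and magnitudes (Mv).

    Gaussian position noise models calibration/centroiding errors;
    Gaussian magnitude noise models sensitivity errors. A star whose
    noisy magnitude exceeds the detection limit drops out of the image,
    so magnitude noise also changes the number of measured stars."""
    noisy = []
    for x, y, mv in stars:
        nx = x + random.gauss(0.0, pos_sigma)    # position noise in pixels
        ny = y + random.gauss(0.0, pos_sigma)
        nmv = mv + random.gauss(0.0, mag_sigma)  # magnitude (brightness) noise
        if nmv <= mag_limit:                     # dimmer than the limit: star is lost
            noisy.append((nx, ny, nmv))
    return noisy

random.seed(1)
stars = [(100.0, 200.0, 4.2), (512.3, 40.7, 5.9), (30.0, 88.0, 5.95)]
print(add_noise(stars))
```

Repeating such perturbed images over many random boresight directions gives the identification-rate statistics used to judge robustness.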
There are two types of interfering stars. The first type is so-called unexpected stars, including planets, nebulae and dust, space debris, etc. It is difficult to distinguish the images of these objects from those of real stars in a measured star image. Moreover, since image sensors have a limited capability of
distinguishing star magnitude, dimmer stars are sometimes captured. However,
there are no matching guide stars for these dimmer stars. Another type of interfering
star is missing stars, i.e., stars that should have been captured but fail to show up in
the observed FOV for some reason. Through simulation experiments, conditions
with these two kinds of interfering stars are evaluated respectively and their
influences on identification rate are analyzed.
Since star sensors came into being, scholars and researchers have put much effort
into developing methods for star identification. At present, numerous star identi-
fication algorithms are available. In line with the methods of extracting features,
these algorithms can be roughly grouped into two categories [17]:
(1) Subgraph isomorphism algorithms. This type of algorithm regards the angular
distances between stars as edges, stars as vertices, and the measured star image
as a subgraph of the full-sky star image. A feature database is constructed in a
certain manner, using angular distances directly or indirectly and regarding
lines (angular distance), triangles, quadrangles, etc., as basic matching ele-
ments. Taking advantage of the combination of these basic matching elements,
a corresponding match for the measured star image can be determined once the
only area (subgraph) fitting the matching conditions is located in the full-sky
star image. Conventional star identification algorithms, including polygon
algorithms, triangle algorithms, group match algorithms, and others, all belong
to the category of subgraph isomorphism algorithms. This type of algorithm is
relatively mature and has been widely adopted.
(2) Star pattern recognition algorithms. This type of algorithm endows each star
with a unique feature—a star pattern, which is usually represented by the
geometric distribution features of other stars within a certain neighborhood. In
this way, identifying stars becomes in essence searching for a guide star in the
star catalog whose star pattern most resembles that of the measured star.
Hence, this kind of algorithm is more like solving pattern identification
problems. The most representative examples are grid algorithms.
1.5 Star Identification Algorithms and Development Trends
This section introduces typical existing star identification algorithms and dis-
cusses their development trends.
Here, s1 and s2 stand for the direction vectors of the two stars in the star sensor
coordinate system. This angular distance is then compared to all angular distances
of star pairs stored in the guide database. Once two guide stars whose angular
distance d(i, j) satisfies

|d(i, j) − dm12| ≤ ε   (1.8)

are found, (i, j) will be considered as a match for the two measured stars. Here, ε represents the error tolerance of the angular distance. If more than one (i, j) satisfies the
above conditions, a third star should be selected and similar comparison should be
conducted to find a match for the angular distance between the third star and the
previous ones. More stars are to be selected until only one match remains.
Polygon angular matching algorithms are relatively simple and realizable.
However, when there are a large number of guide stars, the algorithms become
complicated, requiring longer matching times and relatively large storage capacity.
The increasing number of measured stars used for the matching (i.e., the increasing
number of polygon sides) makes it more complex to determine the direction of
angular distance. Hence, these algorithms usually require prior information of the
pointing direction of the star sensor boresight.
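The pairwise matching described above can be sketched as follows: the angular distance of a measured pair is the arccosine of the dot product of the unit vectors s1 and s2, and candidate guide pairs are those within the tolerance ε (a toy sketch; the function names, database layout, and tolerance value are assumptions):

```python
import math

def angular_distance(s1, s2):
    """Angle (rad) between two unit star direction vectors: arccos(s1 . s2)."""
    dot = sum(a * b for a, b in zip(s1, s2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding

def match_pair(d_m, catalog_pairs, eps=1e-4):
    """All guide star pairs (i, j) whose stored distance d(i, j) satisfies
    |d(i, j) - d_m| <= eps."""
    return [(i, j) for (i, j), d in catalog_pairs.items() if abs(d - d_m) <= eps]

# toy guide database: angular distances (rad) of guide star pairs
catalog_pairs = {(1, 2): 0.0312, (1, 3): 0.0877, (2, 3): 0.0641}
s1 = (0.0, 0.0, 1.0)
s2 = (0.0, math.sin(0.0312), math.cos(0.0312))
print(match_pair(angular_distance(s1, s2), catalog_pairs))  # → [(1, 2)]
```

When several pairs fall inside the tolerance, further stars are added exactly as the text describes, until a unique match remains.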
(2) Triangle Algorithms
Triangle algorithms are the most frequently used and mature algorithms at present
[9, 19, 20]. Similar to the fundamental principle of polygon angular matching
algorithms, triangle algorithms utilize angular distances among three stars as their
matching feature, as shown in Fig. 1.15. The matching triangle can be denoted as
(dm12, dm23, dm13) or (dm12, θ, dm13). Normally, (dm12, dm23, dm13) is stored in the navigation pattern database in accordance with the value of angular distances in
ascending (or descending) order. The identification of triangle algorithms is in
essence the search for the matching triangle which most resembles the observed
triangle in the navigation pattern database.
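The order-independent triangle feature and its database search can be sketched as follows (a linear-scan sketch over a toy database; all names, values, and the tolerance are assumptions):

```python
def triangle_feature(d12, d23, d13):
    """Sort the three angular distances so the feature does not depend on
    which star is labeled 1, 2, or 3."""
    return tuple(sorted((d12, d23, d13)))

def find_triangle(feature, database, eps=5e-4):
    """Return the stored navigation triangle closest to `feature`
    within the tolerance, or None if no triangle matches."""
    best, best_err = None, eps
    for tri_id, stored in database.items():
        err = max(abs(a - b) for a, b in zip(feature, stored))
        if err <= best_err:
            best, best_err = tri_id, err
    return best

database = {("A", "B", "C"): (0.0310, 0.0640, 0.0880),
            ("D", "E", "F"): (0.0200, 0.0450, 0.1000)}
obs = triangle_feature(0.0880, 0.0311, 0.0641)
print(find_triangle(obs, database))  # → ('A', 'B', 'C')
```

A real implementation would index the database (e.g., by the shortest side) instead of scanning it linearly.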
Since triangle algorithms store navigation triangles, the required storage capacity
of guide databases is usually large. In addition, the selection of proper navigation
triangles is vital for triangle algorithms. Quine, Heide, and other researchers [21]
improved the current triangle selection rule in triangle algorithms, reducing the
required storage capacity of guide databases remarkably and enhancing the iden-
tification rate. However, these approaches rely heavily on relatively precise star
magnitude information. On the basis of triangle algorithms, Mortari proposed the
Pyramid Algorithm [22, 23], which assesses the validity of the identification of a
triangle algorithm by selecting a star outside the triangle. In this way, the possibility
of redundant matches is reduced. Scholl introduced star magnitude information into
triangle algorithms [24] and formed a six-feature vector. Though his method can
also reduce the possibility of a redundant match, misidentification may occur due to
inaccuracies in magnitude information.
(3) Group Match Algorithms
First put forward by Kosik [25] and further studied by Van Bezooijen and other
researchers [26], group match algorithms work in the following way: A star is
selected as the primary star (Pole Star, star marked as 1 in Fig. 1.16) from the
measured star image (which contains at least four to five stars). Stars other than the
primary star are called companion stars (Satellite Stars). Each companion star forms
a star pair with the primary star, represented by dm1n. Similar to polygon angular matching, a matching star pair which meets the requirements is searched for among the guide stars. Denote the set of guide star pairs corresponding to dm1n as R1n. The guide star that matches the primary star should lie in the intersection ∩n=2..5 R1n of these sets and should be the guide star with the highest frequency of occurrence in R1n (n = 2, …, 5).
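The intersection-and-frequency rule above can be sketched as follows (a simplified sketch; the set layout and catalog ids are assumptions):

```python
from collections import Counter

def group_match(candidate_sets):
    """candidate_sets[k] holds the guide stars that could match the primary
    star according to the k-th (primary, companion) pair. The matching
    guide star should appear in every set; here it is taken as the most
    frequent star, accepted only if it occurs in all sets."""
    counts = Counter()
    for s in candidate_sets:
        counts.update(s)
    star, freq = counts.most_common(1)[0]
    return star if freq == len(candidate_sets) else None

# candidate sets for companions n = 2..5 (toy catalog ids)
R = [{17, 42, 99}, {42, 99}, {5, 42, 99}, {7, 42}]
print(group_match(R))  # → 42
```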
Group match algorithms organize feature patterns in the form of angular dis-
tance, and thus put a huge demand on guide database storage capacity. In addition,
the identification rate is easily influenced by interfering stars. Through experiments,
DeAntonio and other researchers carried out in-depth analysis of some deficiencies
of Van Bezooijen’s algorithm [27].
Fig. 1.17 Generation process of star pattern in grid algorithm a Determine the primary star r and
its pattern radius pr b Shift the FOV and determine location star l c Rotate the FOV d Construct
pattern
x = (n, μm, σm, μd, σd, σθ)   (1.9)

Here, n stands for the number of companion stars in the neighborhood. μm and σm represent the mean value and the variance of the magnitudes (brightness) of the companion stars, respectively. Similarly, μd and σd represent the mean value and the variance of the angular distances between the companion stars and the primary star, respectively. σθ is the variance of the angle between neighboring companion stars.
matrix for the guide star pair is searched for in the results of screened angular
distances of star pairs. The templates are all organized in a discrete manner. The
comparison of templates is shown in Fig. 1.20.
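The pattern-generation steps shown in Fig. 1.17 (shift so the primary star is at the origin, rotate so the location star lies on an axis, then mark occupied grid cells) can be sketched as follows (an illustrative sketch; the grid size, pattern radius, and all names are assumptions):

```python
import math

def grid_pattern(primary, location, neighbors, pattern_radius, g=8):
    """Bit pattern for the primary star: translate so the primary sits at the
    origin, rotate so the location star lies on the +x axis, then record which
    of the g x g grid cells inside the pattern radius contain companion stars."""
    px, py = primary
    lx, ly = location[0] - px, location[1] - py
    ang = math.atan2(ly, lx)                 # rotation fixed by the location star
    cells = set()
    for x, y in neighbors:
        dx, dy = x - px, y - py
        rx = dx * math.cos(ang) + dy * math.sin(ang)   # rotate by -ang
        ry = -dx * math.sin(ang) + dy * math.cos(ang)
        if max(abs(rx), abs(ry)) > pattern_radius:
            continue                          # outside the pattern neighborhood
        col = min(g - 1, int((rx + pattern_radius) / (2 * pattern_radius) * g))
        row = min(g - 1, int((ry + pattern_radius) / (2 * pattern_radius) * g))
        cells.add(row * g + col)
    return cells

pat = grid_pattern((0, 0), (10, 0), [(10, 0), (-6, 3), (4, -8)], pattern_radius=12, g=8)
print(sorted(pat))  # → [13, 39, 42]
```

Matching then reduces to comparing occupied-cell sets, e.g., counting how many cells two patterns share.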
algorithms, this approach selects a location star as the primary star. With the line
connecting two stars as a baseline, the distribution features of companion stars are
extracted. On the basis of grid algorithms, Li and other researchers took the features
generated as feature vectors of a BP network for star identification [33]. Algorithms
based on neural networks usually require more time since training is needed. As the
number of guide stars grows, the number of star patterns and the scale of the
network increase correspondingly. Therefore, it is hard for these algorithms to work
in real-time and most of them are still only studied in simulation experiments.
(2) Star Identification Algorithms Based on Genetic Algorithms
Derived from biological evolution and population genetics, genetic algorithms
(GA) have multiple merits. They are not restricted by the properties of functions and
can realize global search and global convergence. Since GA came into being, they
have been applied in many fields. The process of star identification by star sensors
can be treated as a process of combinatorial optimization, during which GA can be
used to search for the optimal combination. Paladugu [34] and Li [35] introduced
GA into star identification and have obtained satisfactory results.
(3) Star Identification Algorithms Based on Hausdorff Distance
Hausdorff distance is used to evaluate the similarities between two images. No
point-to-point correspondence of images is required to be established for calculating
Hausdorff distance. Hence, it is suitable for application to images affected by noise
or serious distortion. Star identification based on Hausdorff distance adopts the right
ascension, declination and magnitude of stars in the basic star catalog as feature
vectors. By calculating the Hausdorff distance between stars in the GSC and the star
image, star identification can be accomplished [36]. This approach requires no prior
knowledge and enjoys a relatively high identification rate. However, its disad-
vantage is that the identification rate of Hausdorff distance may be affected by its
directional property, which makes the distance extremely sensitive to the rotation
angle of the focal plane.
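As the text notes, the Hausdorff distance needs no point-to-point correspondence; a minimal sketch on 2-D point sets (the data are illustrative):

```python
import math

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of the distance from a to its nearest b in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.1), (1.0, 0.0), (5.0, 0.0)]
print(hausdorff(A, B))  # → 4.0 (dominated by the unmatched point (5, 0))
```

The example also shows the sensitivity mentioned above: a single unmatched point dominates the distance.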
(4) Star Identification Algorithms Based on Singular Value Decomposition (SVD)
Generally, a matrix B can be obtained through the orthogonal transformation of
matrix A, which is formed by a set of three-dimensional vectors. During the
transformation, the three singular values of A and B remain the same. Taking
advantage of this property, these three singular values can be considered as features
and used in star identification [37]. For a particular frame of a measured star image,
the four brightest stars are selected in the FOV and arranged in descending order
according to their magnitudes. The matrix constituted by the corresponding vectors
is decomposed and the three singular values are obtained. These singular values are
taken as the feature vectors of the star image. By simply examining whether or not
the three singular values are equal, the matching of stars can be accomplished. The
advantage of this approach is that only three singular values are extracted as
features in the end regardless of the number of vectors. During the process of
Singular Value Decomposition (SVD), the optimal estimated attitude of a spacecraft
can be obtained through simple calculation of singular vectors. The identification of
stars is simplified by omitting the process of star matching, which means that there
is no need to match each measured star with its corresponding guide star in a star
image.
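The rotation invariance of the singular values, which this approach relies on, can be checked numerically (a sketch using NumPy; the data are random and all names are assumptions):

```python
import numpy as np

def svd_feature(star_vectors):
    """Singular values of the matrix stacking the star direction vectors.

    An attitude change is an orthogonal transformation of the vectors, and
    orthogonal transformations leave singular values unchanged, so the three
    values form a rotation-independent feature of the star image."""
    A = np.vstack(star_vectors)               # e.g. 4 x 3: four brightest stars
    return np.linalg.svd(A, compute_uv=False)

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit star vectors
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # an arbitrary orthogonal matrix
print(np.allclose(svd_feature(v), svd_feature(v @ Q.T)))  # → True
```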
Current trends in studies of star identification algorithms are mainly focused on the
following aspects:
(1) Efficient star feature extraction methods. Conventional star identification
algorithms, regarding angular distance and its derivative forms as features, are
relatively simple. However, these methods have their inherent limitations,
such as their requirement for large storage capacity, unsatisfactory real-time performance, and generally low identification rates. Though
neural networks, genetic algorithms, and other artificial intelligence approa-
ches have been introduced into star identification, they can only influence the
robustness and identification speed of algorithms to a certain degree. The key
of star identification still lies in efficient methods for extracting star features.
(2) Fast matching algorithms. The development of modern spacecraft requires that
attitude measurement should meet higher requirements in terms of speed.
Faster star identification means that spacecraft can establish accurate and
effective attitude as soon as possible. Hence, the rapidity of star identification
algorithms is also a vital indicator in the design of star sensors. Another key
technique for increasing the speed of star identification algorithms is the
proper organization of GSC and the optimization of the matching process.
(3) Reliability. First, as a prerequisite for accurate attitude output, the identifica-
tion rate of star identification algorithms should be high enough. Second, star
identification algorithms should have a degree of robustness within the
allowable range of measurement errors. Therefore, an excellent star identifi-
cation algorithm should be fault tolerant so that star identification can be
conducted properly even in poor conditions.
(4) Autonomy. The autonomy of star identification algorithms can be interpreted
as their intelligence, which is a vital feature of the new generation of star
sensors. This autonomy is displayed in the following aspects. First, star
identification and three-axis attitude output can be accomplished indepen-
dently without prior information or other auxiliary equipment. Second, star
sensors can autonomously choose appropriate identification parameters so that
optimal identification can be realized. Third, exceptional cases can be handled
properly without losing attitude.
References
1. Gan G, Qiu Z (2000) Navigation and positioning. National Defence Industry Press, Beijing
2. Zhang S, Sun J (eds) (1992) Strap-down navigation system. National Defence Industry Press,
Beijing
3. Inglis SJ (1979) Planets, stars, and galaxies. Science Press, Beijing
4. Roth GD (1985) Astronomy: a handbook. Science Press, Beijing
5. Shen C, Sun G (1987) Celestial navigation. National Defence Industry Press, Beijing
6. SAO Star Catalog, http://tdc-www.harvard.edu/software/catalogs/sao.html
7. Zhang G (2005) Machine vision. Science Press, Beijing
Processing of star catalog and star image is the groundwork for star identification.
The star sensor establishes the three-axis attitude of a spacecraft by observing and identifying stars. Thus, star information is indispensable. The star information used by the star sensor mainly includes the positions (right ascension and
declination coordinates) and brightness of stars. The star sensor's onboard memory stores the basic information of stars within a certain range of brightness, and this simplified catalog is generally called the Guide Star Catalog (GSC). To accelerate the
retrieval of guide stars, partition of the star catalog usually has to be done, which
plays an important role in enhancing star identification and star tracking. At the
early design stage of the star image processing and star identification algorithms,
simulation approaches have to be taken to verify their correctness and to conduct
performance evaluation. Therefore, star image simulation lays the foundation for
simulation research of the star sensor. The measuring accuracy of the star vector directly reflects the star sensor's performance in attitude establishment and is closely linked to its star centroiding accuracy. Thus, it is necessary to conduct research on highly accurate centroiding algorithms for the star sensor technique.
This chapter first introduces the composition of GSC and partition methods of
the star catalog. It also discusses guide star selection and double star processing.
Then, it introduces star image simulation and star centroiding. After this, it dis-
cusses the calibration of centroiding error.
Star catalog partition plays an important role in star identification. It can accelerate
the retrieval of guide stars in the star catalog, speed up full-sky star identification
and star identification with established initial attitude. This section introduces GSC
and catalog partition methods and presents star catalog partition with an inscribed
cube method.
The number of stars in the star catalog has much to do with star magnitude. With
the increase in star magnitude, the number of stars in the star catalog increases
drastically. Through statistical analysis, an empirical equation regarding the rela-
tionship between the total number of stars distributed in the full sky and the change
of star magnitude is obtained as follows [1]:
N = 6.57 × e^(1.08·Mv)   (2.1)
N stands for the total number of stars distributed in the full sky. Mv stands for
star magnitude. Table 2.1 shows different star magnitudes and their corresponding
total numbers of stars in Star Catalog SAO J2000.
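A widely quoted form of such an empirical fit is Liebe's approximation N ≈ 6.57·e^(1.08·Mv); treating those constants as an assumption rather than the book's exact values, the relation can be evaluated directly:

```python
import math

def star_count(mv):
    """Approximate number of full-sky stars brighter than magnitude mv,
    using the widely quoted empirical fit N = 6.57 * exp(1.08 * mv)."""
    return 6.57 * math.exp(1.08 * mv)

for mv in (4.0, 5.0, 6.0):
    print(f"Mv = {mv}: about {star_count(mv):.0f} stars")
```

Each one-magnitude step multiplies the count by about e^1.08 ≈ 2.9, which is the drastic growth the text describes.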
To meet the demands of star identification by the star sensor, stars brighter than (i.e., whose Mv is less than) a certain magnitude are selected from the standard
catalog and then used to build a smaller catalog (GSC) that is appropriate for star
identification. Stars selected in the GSC are called guide stars. GSC contains the
basic information of a guide star: right ascension, declination, and magnitude. The
selection of magnitude is related to the star sensor’s parameters. On the one hand,
magnitude should be comparable to the limiting magnitude that can be detected by
star sensor, that is, stars that can be observed by star sensor should be included in
the GSC. Thus, magnitude should be equal to or slightly greater than the limiting
magnitude that can be detected by star sensor, and the number of guide stars within
the field of view must meet the needs of star identification. On the other hand,
magnitude should be as small as possible on the premise that normal identification
can be achieved, which not only reduces the capacity of GSC, but also speeds up
identification. Meanwhile, the probability of a redundant match drops with the
decrease in the total number of guide stars. For example, if the limiting magnitude
that can be detected by the star sensor is 5.5 Mv, a total of 5103 stars brighter than 6 Mv can be selected to make up a GSC.
How to retrieve guide stars rapidly must be taken into account in the process of
building a GSC. The rapid retrieval of guide stars is of great importance in star
identification, especially in the tracking mode or when prior attitude information is available. If the arrangement of guide stars in a GSC is irregular, the entire GSC has
to be traversed. Obviously, this kind of searching is rather inefficient. Therefore, the
celestial area is usually divided into several sub-blocks.
Current methods in star catalog partition are listed below.
(1) Declination Zone Method
Through this method, the celestial sphere is divided into spherical zones
(sub-blocks) by planes that are parallel to the equatorial plane. Each spherical zone
has the same span of declination [2]. And guide stars in GSC can be directly
retrieved by using a declination value. The problem with this method lies in the
extremely uneven distribution of each sub-block. The number of guide stars in the
sub-blocks near the equator is far greater than those near the celestial poles. This
method does not make use of the information of right ascension, and the sub-blocks
for retrieval contain a large number of redundant guide stars. Thus, its retrieval
efficiency is rather low.
(2) Cone Method
Ju [3] divides the celestial sphere by using the cone method, as shown in Fig. 2.1.
This method views the center of the celestial sphere as the vertex and uses 11,000
cones to divide the celestial sphere into regions that are exactly equivalent in size.
When the angle (ψ) between the axes of neighboring cones is equal to 2.5° and the cone-apex angle (θ) is equal to 8.85° (the FOV is 10° × 10°), the stars included in
the FOV by any boresight pointing are sure to be located within a certain cone.
Through this method, possible matching stars that may correspond to measured
stars in the FOV can be listed rapidly if the approximate boresight pointing of the
star sensor is known beforehand. Since cones are overlapping, one measured star
may be included in different sub-blocks. Thus, this partition method sets a high
demand for storage space.
Zhang et al. [5, 6] use a completely different method. They divide the celestial area
in the rectangular coordinate system and propose a star catalog partition with an
inscribed cube method. This method realizes an even and nonoverlapping partition
of the celestial area and the partition procedures are as follows.
① With inscribed cube, the celestial sphere is divided evenly into six regions, as
shown in Fig. 2.3a. A cone is formed when the center of the celestial sphere is
connected to the four vertices of each cube side, respectively. The cone is
intersected with the celestial sphere and divides the sphere into six parts: S1–S6.
The direction vectors of the central axis (v) of S1 and its four boundary points are as follows:

v = (0, 0, 1),  w1 = (1, 1, 1)/√3,  w2 = (−1, 1, 1)/√3,  w3 = (−1, −1, 1)/√3,  w4 = (1, −1, 1)/√3   (2.2)
The direction vectors of the central axes (v) of S2–S6 and their four boundary points can be obtained by analogy.
② Each part of S1–S6 can be further divided into N × N sub-blocks, as shown in Fig. 2.3b, c. In this way, the entire celestial sphere is divided into 6 × N × N sub-blocks, all equivalent in size. To cover the FOV as completely as possible, take N = 9, that is, divide the celestial area into 6 × 9 × 9 = 486 sub-blocks so that the size of each sub-block is 10° × 10°. Then there is no need to traverse the entire GSC to retrieve guide stars, and the average search scope is only 9/486 = 1/54 that of before. After partition, the statistical
results of the number of guide stars distributed in each sub-block are as follows:
the maximum number is 39;
the minimum number is 2;
the average number is 10.61.
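Retrieving a guide star then amounts to mapping its direction vector to a sub-block index; a sketch for the 6 × N × N partition (the face ordering and indexing convention are assumptions):

```python
def sub_block(v, n=9):
    """Map a unit direction vector to one of the 6 * n * n sub-blocks.

    The face of the inscribed cube is chosen by the dominant component;
    the projection onto that face is quantized into an n x n grid."""
    x, y, z = v
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        face = 0 if z > 0 else 1          # top / bottom faces
        u, w = x / az, y / az
    elif ax >= ay:
        face = 2 if x > 0 else 3          # +x / -x faces
        u, w = y / ax, z / ax
    else:
        face = 4 if y > 0 else 5          # +y / -y faces
        u, w = x / ay, z / ay
    i = min(n - 1, int((u + 1) / 2 * n))  # quantize the face coordinates
    j = min(n - 1, int((w + 1) / 2 * n))
    return face * n * n + i * n + j

print(sub_block((0.0, 0.0, 1.0)))  # → 40 (center cell of the first face)
```

Stars sharing an index land in the same sub-block, so a lookup only needs that sub-block and its neighbors rather than the whole GSC.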
Guide star selecting is aimed at cutting down the number of guide stars as much as
possible on the premise that correct star identification is guaranteed, which not only
reduces the storage capacity required in star identification algorithms, but also
speeds up star identification. It is thus an important step toward enhancing star identification and reducing the capacity of the navigation database. Meanwhile,
double star processing has implications for star identification. This section intro-
duces guide star selecting and discusses the processing methods of a double star.
Assume that guide stars are distributed evenly and randomly in the celestial area; then the number of stars in the FOV approximately follows a Poisson distribution [7]:

p(X = k) = λ^k · e^(−λ) / k!   (2.3)
Table 2.2 Probability of the number of stars in the FOV based on Poisson distribution (FOV of 12° × 12°, magnitude ≥ 6)

                     X ≤ 2   X ≤ 5   X ≤ 10   X ≥ 20   X ≥ 30   X ≥ 40   X ≥ 50
Probability P (%)    0       0.04    3.38     32.97    0.50     0        0
The statistical result (as shown in Fig. 2.5) shows the numbers of stars brighter
than 6 Mv in an FOV of 12° × 12° by 100,000 random boresight pointings. The
maximum number of stars in the FOV is 63, the minimum number is 2, and the
average number is 17.8. The results computed by Poisson distribution are basically
the same as the simulation results, as shown in Fig. 2.5. In fact, the distribution of
stars in the celestial sphere is not even or random: the distribution near the celestial
pole is sparse while that near the equator is relatively dense.
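Equation (2.3) and the probabilities in Table 2.2 can be checked numerically, taking λ = 17.8 (the average star count quoted above) as the Poisson mean (a sketch; the function names are assumptions):

```python
import math

def poisson_pmf(k, lam):
    """p(X = k) = lam**k * exp(-lam) / k!   (Eq. 2.3)"""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def poisson_cdf(k, lam):
    """P(X <= k) by direct summation of the pmf."""
    return sum(poisson_pmf(i, lam) for i in range(k + 1))

lam = 17.8                               # average stars in a 12 deg x 12 deg FOV
p_le_10 = poisson_cdf(10, lam)           # compare with P(X <= 10) in Table 2.2
p_ge_20 = 1.0 - poisson_cdf(19, lam)     # compare with P(X >= 20) in Table 2.2
print(f"P(X <= 10) = {100 * p_le_10:.2f}%, P(X >= 20) = {100 * p_ge_20:.2f}%")
```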
As for star identification, the number of measured stars in the FOV cannot be too
small and must meet the minimum requirements of identification (≥3). If the
number is too small, then the information that can be used would be relatively little
and, thus, identification would be difficult. Meanwhile, the accuracy ε of attitude establishment is related to the number n of stars in the FOV involved in attitude establishment:

ε = ε0/√n   (2.4)
be guaranteed, that is, redundant guide stars should be eliminated in dense celestial
areas in order to guarantee that the distribution of guide stars in the entire celestial
area is as even as possible.
With a single magnitude threshold, some boresight pointings yield too many stars in the FOV, while in other celestial areas the FOV contains no stars at all. Obviously, this method cannot guarantee the even distribution of selected guide stars in the celestial area. Based on the partition of the GSC,
guide star selection can be realized by traversing the distribution of guide stars in
the full sky.
As described in the last section, the celestial area is divided into
6 × 9 × 9 = 486 sub-blocks and then the direction vector of each sub-block’s
corresponding central axis can be obtained. This yields 486 boresight pointings (direction vectors) evenly distributed in the full sky, and the angle between each pair of neighboring boresight pointings is 10°. Similarly, the full sky
can be divided into 6 × 100 × 100 = 60,000 boresight pointings that are evenly
distributed, and the angle between each pair of neighboring boresight pointings is
0.9°. The 60,000 boresight pointings are scanned, and then guide stars located
within the FOV (such as a circular FOV) by each boresight pointing are established.
If the number of guide stars in the FOV is lower than or equal to a certain
threshold number C, then no processing is needed. Otherwise, it is necessary to
arrange guide stars in the order of brightness (magnitude), keeping the brightest
C stars and eliminating darker ones. The selection of C is related to the require-
ments put forth by star identification for the minimum number of guide stars in the
FOV and accuracy in attitude establishment.
This method mainly takes into account the fact that brighter stars are more easily
sensed by the star sensor. Thus, it is reasonable to select brighter stars as guide
stars. In relatively dense celestial areas, redundant darker stars can be eliminated
from the GSC, while in relatively sparse celestial areas, as many guide stars as
possible should be retained.
Take C = 6; Fig. 2.6 shows the probability statistics of the number of guide
stars in the FOV after selection. The minimum number of guide stars in the FOV is
2, the maximum number is 28, and the average number is 11.9. After selection, the
total number of guide stars drops from 5103 to 3360. Compared with Fig. 2.5,
Fig. 2.6 shows that the distribution of guide stars is more reasonably even than that
before selection.
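As a concrete illustration, the scan-and-thin procedure above can be sketched as follows. This is a minimal sketch, assuming the catalog is stored as (unit vector, magnitude) pairs and NumPy is available; the function name and argument layout are illustrative, not the book's implementation:

```python
import numpy as np

def select_guide_stars(catalog, pointings, fov_radius_deg, C=6):
    """Thin the guide star catalog: scan evenly distributed boresight
    pointings and, in each circular FOV holding more than C stars, keep
    only the C brightest (smallest magnitude) ones."""
    vecs = np.array([v for v, _ in catalog])   # (N, 3) unit direction vectors
    mags = np.array([m for _, m in catalog])   # (N,) visual magnitudes
    cos_r = np.cos(np.radians(fov_radius_deg))
    keep = np.zeros(len(catalog), dtype=bool)
    for p in pointings:
        in_fov = np.flatnonzero(vecs @ p > cos_r)  # stars inside this FOV
        if len(in_fov) <= C:
            keep[in_fov] = True                    # sparse area: retain all
        else:
            # dense area: retain only the C brightest stars
            keep[in_fov[np.argsort(mags[in_fov])[:C]]] = True
    return np.flatnonzero(keep)                    # indices of retained stars
```

A star survives if it is among the C brightest in at least one scanned FOV, so dense celestial areas are thinned while sparse areas keep every star.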
When two stars are sufficiently close to each other, the star spots formed by the
two on the image plane of the star sensor cannot be
separated from each other. The size of a star spot is related to the point spread
function (PSF) of the optical system and its visual magnitude.
Generally, in order to improve the accuracy of the star position, a defocusing
technique is often used to make the size of the image point range from 3 × 3 to
5 × 5 pixels. Assume the radius of PSF is one pixel, Fig. 2.7a, b show the gray
distribution of the double star’s spot images. Star spot images can be approximately
represented by the Gaussian function:
$$f(x, y) = A \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right) \tag{2.5}$$
$A$ stands for the brightness of stars, which is related to magnitude. Assume the
binary threshold in the process of star spot extraction is $T$. Then the minimum
distance $d$ at which two stars still constitute a double star, as shown in
Fig. 2.7c, satisfies:
$$T = 2A \exp\left(-\frac{(d/2)^2}{2\sigma^2}\right) \tag{2.6}$$
A simple way of processing a double star is to eliminate both stars directly, which
is feasible when the number of stars in the FOV is quite large. When the number of
stars in the FOV is relatively small, however, eliminating the double star directly
means throwing away information that is necessary for star identification.
Considering that, the
double star can be treated as a new “synthetic star” whose magnitude and orien-
tation are synthesized by those of the double star. In fact, star spot images of the
double star acquired by star sensor can be viewed as synthesized by the star spot
images of the two stars.
Assume the magnitudes of the two stars of the double star are $m_1$ and $m_2$,
respectively, and their direction vectors are $v_1$ and $v_2$, as shown in
Fig. 2.7d. A star's brightness is represented by the density of its light flux,
and the brightness ratio of the two stars is:
$$\frac{F_1}{F_2} = 10^{(m_2 - m_1)/2.5} \tag{2.7}$$
48 2 Processing of Star Catalog and Star Image
The brightness of the synthetic star can be viewed as the synthesis of that of the
two stars that constitute the double star, that is,
$$F = F_1 + F_2 \tag{2.8}$$
Thus,
$$\frac{F}{F_2} = \frac{F_1 + F_2}{F_2} = 10^{(m_2 - m)/2.5} \tag{2.9}$$
Assume the angular distance between the synthetic star and each of the double
stars is u1 and u2 , respectively, and the angular distance between the two stars is u.
$$F_1 u_1 = F_2 u_2 \tag{2.10}$$

$$u = u_1 + u_2 = u_1\left(1 + \frac{F_1}{F_2}\right) = u_1\left(1 + 10^{(m_2 - m_1)/2.5}\right) \tag{2.11}$$
Thus, u1 and u2 can be computed, and the direction vector ðvÞ of the synthetic
star can be obtained:
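Under the small-separation assumption, the synthesis above can be sketched as follows. The Pogson flux ratio $10^{\Delta m/2.5}$ is used, and the synthetic direction is taken as the flux-weighted mean of the two unit vectors, which satisfies $F_1 u_1 = F_2 u_2$ for small angles; the function name is an assumption of this sketch:

```python
import numpy as np

def synthesize_double_star(m1, v1, m2, v2):
    """Merge an unresolved double star into one synthetic star.
    Fluxes follow F1/F2 = 10**((m2 - m1)/2.5); the synthetic magnitude
    comes from the summed flux (Eq. 2.8) and the direction from the
    flux-weighted mean of the two unit vectors."""
    f1 = 10.0 ** (-m1 / 2.5)            # relative flux of star 1
    f2 = 10.0 ** (-m2 / 2.5)            # relative flux of star 2
    m = -2.5 * np.log10(f1 + f2)        # synthetic magnitude
    v = f1 * np.asarray(v1, float) + f2 * np.asarray(v2, float)
    return m, v / np.linalg.norm(v)     # magnitude and unit direction
```

For two equal stars the synthetic star is 2.5·log₁₀2 ≈ 0.75 magnitude brighter than either component and lies midway between them.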
There are mainly three approaches in star identification algorithm research: digital
simulation, hardware-in-the-loop simulation, and field test of star observation. These
three approaches correspond to three different stages in designing a star sensor and a
star identification algorithm. At the early design stage, preliminary performance
evaluation of the algorithm was done using digital simulation to determine appro-
priate identification parameters. Digital simulation is computer-based and is
involved in the whole process of star image simulation, star image processing, and
star identification. After the design of the star sensor finishes, star identification
algorithms can be verified using the method of hardware-in-the-loop simulation.
2.3 Star Image Simulation 49
Fig. 2.8 Illustration of the star sensor coordinate system, the image coordinate system, and front
projection imaging
In this matrix,
Since M is an orthogonal matrix, the rotation matrix from the celestial coordinate
system to the star sensor coordinate system can be expressed as M 1 ¼ M T . First,
search for imageable stars within a circular FOV: the right ascension and declination
coordinates $(\alpha, \delta)$ of stars that can be imaged on the image sensor should satisfy the
following conditions:
Here, $R$ stands for the radius of the circular FOV ($R$ is half of the diagonal angular
distance of the FOV; for example, in an FOV of 12° × 12°, $R = 6\sqrt{2}$°).
$(\alpha_0, \delta_0)$ stands for the boresight pointing of the star sensor. The
direction vector of stars that satisfy Eq. (2.13) in the star sensor coordinate system
can be expressed as:
$$\begin{pmatrix} x_i' \\ y_i' \\ z_i' \end{pmatrix} = M^T \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} \tag{2.15}$$

Here, $\begin{pmatrix} x_i & y_i & z_i \end{pmatrix}^T = \begin{pmatrix} \cos\alpha_i \cos\delta_i & \sin\alpha_i \cos\delta_i & \sin\delta_i \end{pmatrix}^T$ is the direction vector of the star in the celestial
coordinate system, and $\begin{pmatrix} x_i' & y_i' & z_i' \end{pmatrix}^T$ is its direction vector in the star sensor coordinate
system.
(2) Perspective Projection Transformation
The imaging process of stars on the image sensor can be represented by the per-
spective projection transformation, as shown in Fig. 2.8. After perspective pro-
jection, the coordinates of stars’ imaging points are as follows:
To sum up, the imaging process of star sensor can be illustrated by Fig. 2.9. The
imaging model of star sensor can be expressed as follows:
$$s \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{pmatrix} \begin{pmatrix} \cos\beta \cos\alpha \\ \cos\beta \sin\alpha \\ \sin\beta \end{pmatrix} \tag{2.17}$$
In the above equation, s stands for nonzero scale factor, f for focal length of the
optical system, ðu0 ; v0 Þ for optical center (the coordinates of the principal point),
(r1–r9) for transformation matrix from the celestial coordinate system to the star
sensor coordinate system, and ða; bÞ for the right ascension and declination coor-
dinates of the starlight vector in the celestial coordinate system. Equation (2.17)
shows that the position coordinates ða; bÞ of stars in the celestial coordinate system
(world coordinate system) are in a one-to-one correspondence with the positions
ðX; YÞ of image points on the image plane of star sensor.
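The linear model of Eq. (2.17) can be sketched in code form; the function name and the use of a pixel-unit focal length are assumptions made for illustration:

```python
import numpy as np

def project_star(alpha, beta, R, f, u0, v0):
    """Project a star at right ascension alpha and declination beta
    (radians) onto the image plane via the linear model of Eq. (2.17).
    R is the 3x3 rotation matrix (r1..r9) from the celestial frame to
    the star sensor frame; f is the focal length in pixel units and
    (u0, v0) the principal point."""
    w = np.array([np.cos(beta) * np.cos(alpha),
                  np.cos(beta) * np.sin(alpha),
                  np.sin(beta)])         # starlight direction, celestial frame
    d = R @ w                            # direction in the sensor frame
    # Perspective division: the nonzero scale factor s equals d[2]
    return f * d[0] / d[2] + u0, f * d[1] / d[2] + v0
```

With R the identity and a star on the boresight, the star projects exactly onto the principal point, as the model requires.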
(3) Nonlinear Model
In fact, an actual optical lens cannot achieve perfect perspective imaging, but
exhibits varying degrees of distortion. As a result, the image of a spatial point is
not located at the position $(X, Y)$ described by the linear model, but at the actual
image plane coordinates $(X', Y')$, shifted due to the distortion of the optical lens.
$$\begin{cases} X' = X + d_x \\ Y' = Y + d_y \end{cases} \tag{2.18}$$
Here, $d_x$ and $d_y$ stand for distortion values, which are related to the position of
the star spot's coordinates in the image. Generally, an optical lens exhibits both
radial and tangential distortions. For third-order radial distortion and second-order
tangential distortion, the distortions in the $x$ and $y$ directions can be expressed as
follows [8]:
$$\begin{cases} d_x = x\left(q_1 r^2 + q_2 r^4 + q_3 r^6\right) + \left[p_1\left(r^2 + 2x^2\right) + 2p_2 xy\right]\left(1 + p_3 r^2\right) \\ d_y = y\left(q_1 r^2 + q_2 r^4 + q_3 r^6\right) + \left[p_2\left(r^2 + 2y^2\right) + 2p_1 xy\right]\left(1 + p_3 r^2\right) \end{cases} \tag{2.19}$$
To sum up, in the imaging model of the star sensor, the linear model parameters $f$,
$(u_0, v_0)$ and the nonlinear distortion coefficients $(q_1, q_2, q_3, p_1, p_2, p_3)$ constitute the
intrinsic parameters of the star sensor, while $(r_1$–$r_9)$ makes up the extrinsic parameters.
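The distortion of Eq. (2.19) can be applied directly; below is a hedged sketch in which q = (q1, q2, q3) and p = (p1, p2, p3) are passed as tuples, with (X, Y) measured relative to the principal point:

```python
def distort(X, Y, q, p):
    """Apply the radial (q1, q2, q3) and tangential (p1, p2, p3)
    distortion of Eq. (2.19) to ideal image coordinates (X, Y)."""
    r2 = X * X + Y * Y                     # squared radial distance
    radial = q[0] * r2 + q[1] * r2 ** 2 + q[2] * r2 ** 3
    dx = X * radial + (p[0] * (r2 + 2 * X * X) + 2 * p[1] * X * Y) * (1 + p[2] * r2)
    dy = Y * radial + (p[1] * (r2 + 2 * Y * Y) + 2 * p[0] * X * Y) * (1 + p[2] * r2)
    return X + dx, Y + dy                  # distorted coordinates (X', Y')
```

With all coefficients zero the mapping reduces to the identity, recovering the linear model.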
In order to conduct a subsequent performance evaluation of the identification
rate of the star identification algorithms, a certain level of positional noise
(Gaussian noise with zero mean and variances $\sigma_x$, $\sigma_y$) is added to the coordinates
of the star spot image on the focal plane of the star sensor in a star image
simulation so as to simulate centroiding error.
For star sensor, the star can be viewed as a point light source. The positioning
accuracy of a single pixel cannot meet the demand for attitude establishment. Thus,
defocusing is often used to make the star spot image spread to multiple pixels, and
then centroiding methods are used to obtain sub-pixel positioning accuracy [1]. The
pixel size of the star spot is not only related to a star’s brightness (magnitude), but
also related to the PSF of the optical system. The gray distribution of a star spot
image follows the PSF of the optical system and can be approximately represented
by a two-dimensional Gaussian distribution function:
$$l_i(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x - X_i)^2 + (y - Y_i)^2}{2\sigma^2}\right) \tag{2.21}$$
Assume that there are $N$ stars in the star image. The photoelectron density formed
in the imaging process of the target star spots can be expressed as follows [9]:

$$s(x, y) = k \sum_{i=1}^{N} \int s_i\, A\, \tau(\lambda)\, Q\, l_i(x, y)\, P(\lambda)\, t_s\, d\lambda \tag{2.22}$$
Here, $k$ stands for the bandwidth impact factor, $A$ for the optical entrance pupil
area, $\tau(\lambda)$ for the optical transmittance, $Q$ for the quantum efficiency, $P(\lambda)$ for the
spectral response of the imaging device, and $t_s$ for the integration time. The relative
flux of the $i$-th star is

$$s_i = 5 \times 10^{10} / 2.512^{M_i}$$

where $M_i$ stands for the magnitude of the $i$-th star.
The photoelectron density that is formed in the process of background imaging
can be expressed as follows:
$$b = \int b_0\, A\, \tau(\lambda)\, P(\lambda)\, A_p\, t_s\, d\lambda \tag{2.23}$$

Here, $b_0 = 5 \times 10^{10} / 2.512^{M_b}$, where $M_b$ stands for the magnitude of the back-
ground, generally of a brightness of about 10.0 Mv, and $A_p$ stands for the angular
area of a single pixel.
Thus, the total number of photoelectrons acquired by the $(m, n)$-th pixel
$(0 \le m < M,\ 0 \le n < N)$ on the photosensitive surface is as follows:
Integrate Eqs. (2.22) and (2.23), and then the above equation can be simplified
as:
(2.25)
Thus, the final output of the image signal can be expressed as follows:
The parameters used in the process of star image simulation are shown in
Table 2.3. In the process of image synthesis, parameters B, C, rx , ry , and rN can
take the appropriate values as required based on the design specifications of the
optical system and image device as shown in Fig. 2.10a, b. Figure 2.11 is a star
image simulated when the attitude angle of star sensor is (249.2104, −12.0386,
13.3845).
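The spot-rendering step of the simulation can be sketched as follows: each star adds a Gaussian spot following the PSF of Eq. (2.5) to a float image, clipped at the 8-bit saturation level. The amplitude A would in practice come from the magnitude model above; here it is left as a free parameter of the sketch:

```python
import numpy as np

def render_star(img, x0, y0, A, sigma=1.0):
    """Add one defocused star spot to a float image in place.
    Gray values follow the Gaussian PSF of Eq. (2.5); the result is
    clipped to the 8-bit saturation level 255."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    spot = A * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    np.clip(img + spot, 0.0, 255.0, out=img)   # saturate at 255
    return img
```

Calling this once per imageable star, then adding background level and Gaussian gray noise, yields a simulated star image of the kind shown in Fig. 2.11.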
effective. This section begins with an introduction to the preprocessing of the star
image, discusses star spot centroiding methods and finally concludes with simu-
lations and results analysis.
A salient feature of low-level processing of a star image is the large amount of data.
Take the eight-bit gray image with 1024 × 1024 resolution for example. The
amount of data in each frame is 1 MB. Thus, in order to achieve real-time pro-
cessing, the low-level processing algorithms of the star image must not be too
complex. Meanwhile, considering the requirement that the low-level processing of a
star image be achieved by adopting specific hardware circuit (e.g., FPGA or ASIC),
the algorithms must also be characterized by parallel processing as much as pos-
sible. Preprocessing of the star image mainly includes noise removal processing and
rough judgment of the stars.
The output of the image sensor is acquired via an image capture circuit. The original
digital image signal obtained in this way is mixed with considerable noise. Thus, noise
removal processing of the original star image is generally done first. Common
noise removal processing can use a 3 × 3 or 5 × 5 low-pass filter template, for
example, a neighborhood average template or a Gaussian template. To reduce the
computational load of the algorithms, a 3 × 3 low-pass filter template is often
used.
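A minimal sketch of the 3 × 3 neighborhood-average template follows; leaving border pixels unfiltered, as done here, is one of several reasonable boundary policies and is an assumption of this sketch:

```python
import numpy as np

def lowpass_3x3(img):
    """Neighborhood-average noise removal with a 3x3 template.
    Each interior pixel becomes the mean of its 3x3 neighborhood;
    border pixels are left unchanged for simplicity."""
    out = img.astype(float).copy()
    h, w = img.shape
    out[1:-1, 1:-1] = sum(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(float)
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out
```

Because each output pixel depends only on a fixed local window, the template maps naturally onto the parallel hardware (FPGA/ASIC) realization the text calls for.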
2.4 Star Spot Centroiding 57
Before extracting the information of star spot positions, the target star spots in
the star image must be roughly judged. Rough judgment is actually a process of
image segmentation which can be divided into two stages:
① separating the target star spot from the background;
② separating a single target star spot from others.
At the first stage, global threshold or local threshold can be used to segment the
image. Generally, considering the characteristics of the star image and the com-
plexity of algorithms, just one fixed global background threshold can be used to
separate the star spot from the background. The selection of the global threshold
can adopt multi-window sampling, i.e., selecting several windows randomly in the
image, computing the mean of their gray distribution, and then taking this value as
the mean of the background’s gray distribution. Generally, the background mean
plus five times the standard deviation of noise can be treated as the global back-
ground threshold [1].
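The multi-window estimate of the global background threshold might look as follows; the window count, window size, and the factor k = 5 follow the text, while the sampling details (uniform random window placement, a seedable generator) are assumptions of the sketch:

```python
import numpy as np

def global_threshold(img, n_windows=10, win=8, k=5.0, rng=None):
    """Estimate the global background threshold by multi-window sampling:
    take the mean gray level of randomly placed windows as the background
    mean, and add k times the measured noise standard deviation."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    samples = []
    for _ in range(n_windows):
        y = rng.integers(0, h - win)           # random window origin
        x = rng.integers(0, w - win)
        samples.append(img[y:y + win, x:x + win].ravel())
    samples = np.concatenate(samples).astype(float)
    return samples.mean() + k * samples.std()  # background mean + k*sigma
```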
How to separate one target star spot from another is a problem in the prepro-
cessing of a star image. Ju [3] uses multi-threshold clustering to conduct clustering
identification of the pixels whose gray values are greater than the global background
threshold. The specific procedures are as follows:
① Set ten thresholds, group the pixels whose gray value is greater than the global
background threshold together into the corresponding interval based on their
gray values, and then put them in order.
② Each interval is scanned in descending order, the pixel with the maximum gray
value in the current interval is found, and its neighborhood spots are sought out.
They are regarded as belonging to the same star.
This method is relatively complex and involves a sorting operation. Considering
the specific requirement for speed in star image processing, binary image pro-
cessing can be used for reference, and connected domain algorithms [10] can then
be used to achieve a clustering judgment of the star spots. The specific
procedures are as follows:
① The image is scanned from left to right, top to bottom.
② If the gray value of a pixel is greater than the background threshold (T), then:
* If only one of the above or left pixels has a marker, copy this marker.
* If the above and left pixels have the same marker, copy this marker.
* If the above and left pixels have different markers, copy the marker of the
above pixel and put the two markers into the equivalence table as equivalent
markers.
* Otherwise, allocate a new marker to this pixel and put it into the equivalence
table.
③ Repeat step ② until all the pixels whose gray value is greater than T are
scanned.
④ Combine the pixels with the same marker in the equivalence table and reallocate
a marker with a lower index number to them.
After the segmentation based on connected domains algorithm, each star spot is
represented by a set of neighboring pixels with the same marker, for example, spots
1, 2, 3 in Fig. 2.12. To eliminate the influence of potential noise interference, star
spots whose number of pixels is lower than a certain threshold should be aban-
doned, for example, spot 4 in Fig. 2.12. As can be seen from the above procedures,
threshold segmentation and connected domain segmentation algorithms can be
done at the same time. Thus, the image can be scanned just once, which is very
suitable for realization by a specific hardware circuit and can meet real-time
demands [11].
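Steps ①–④ can be sketched as a standard two-pass connected-domain labeling with a union-find equivalence table; this is an illustrative software implementation, not the book's hardware realization:

```python
def label_star_spots(img, T):
    """Two-pass connected-domain labeling of pixels brighter than T,
    using the above and left neighbors and an equivalence table."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                            # equivalence table (union-find)

    def find(a):                           # follow links to the root marker
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if img[y][x] <= T:
                continue
            above = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if above and left:
                labels[y][x] = above
                if above != left:          # record equivalent markers
                    parent[find(left)] = find(above)
            elif above or left:
                labels[y][x] = above or left
            else:                          # allocate a new marker
                labels[y][x] = next_label
                parent[next_label] = next_label
                next_label += 1
    # second pass: merge equivalent markers
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

After labeling, spots whose pixel count falls below a small threshold can be discarded as noise, as with spot 4 in Fig. 2.12.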
Generally, many current centroiding methods can achieve sub-pixel (or even
higher) accuracy. To obtain higher accuracy in star spot positions from the star
image, defocusing is often used to make the imaging points of stars on the pho-
tosensitive surface of image sensor spread to multiple pixels. Both theoretical
derivations and experiments prove that ideal centroiding accuracy can be achieved
when the diameter of dispersion circle ranges from three to five pixels [12, 13].
There are two categories of centroiding methods for spot-like images: gray-based
methods and edge-based methods [14]. The former often uses the
information of spot’s gray distribution, for example, centroid method, surface fitting
method, etc. The latter often uses the information of a spot’s edge shape, for
example, edge circle (ellipse) fitting, Hough transformation, etc. The former applies
to relatively small spots with an even distribution of gray, while the latter applies to
larger spots and is less sensitive to gray distribution. Generally, the diameter of
star spots in an actual measured star image ranges from three to five pixels, and
their gray values approximately follow a Gaussian distribution. Thus, for target star
spots, it is more appropriate to adopt gray-based methods for centroiding.
Simulation experiments also show that the accuracy of gray-based methods is
higher than that of edge-based methods. Here, the
former is mainly introduced, including the centroid method, the modified centroid
method and the Gaussian surface fitting method. Then their positioning accuracy is
analyzed.
(1) Centroid Method
Assume the image that contains target star spots is represented by $f(x, y)$, where
$x = 1, \ldots, m$ and $y = 1, \ldots, n$. The thresholded image is

$$F(x, y) = \begin{cases} f(x, y) & f(x, y) \ge T \\ 0 & f(x, y) < T \end{cases} \tag{2.28}$$

and the centroid is computed as

$$x_0 = \frac{\sum_{x=1}^{m} \sum_{y=1}^{n} F(x, y)\, x}{\sum_{x=1}^{m} \sum_{y=1}^{n} F(x, y)}, \qquad y_0 = \frac{\sum_{x=1}^{m} \sum_{y=1}^{n} F(x, y)\, y}{\sum_{x=1}^{m} \sum_{y=1}^{n} F(x, y)} \tag{2.29}$$
The centroid method is the most commonly used. It is easy to implement and has
relatively high positioning accuracy, but it requires that the gray distribution of the
spot image be relatively even. It has some modified forms, including the centroid
method with threshold and the square weighting centroid method.
(2) Square Weighting Centroid Method
The computational equation of the square weighting centroid method can be
expressed as follows:
$$x_0 = \frac{\sum_{x=1}^{m} \sum_{y=1}^{n} F^2(x, y)\, x}{\sum_{x=1}^{m} \sum_{y=1}^{n} F^2(x, y)}, \qquad y_0 = \frac{\sum_{x=1}^{m} \sum_{y=1}^{n} F^2(x, y)\, y}{\sum_{x=1}^{m} \sum_{y=1}^{n} F^2(x, y)} \tag{2.30}$$
The square weighting centroid method substitutes the square of the gray value
for the gray value as the weight. It highlights the influence on the central position
of pixels that are closer to the center and have relatively large gray values.
(3) Centroid Method with Threshold
Fðx; yÞ in Eq. (2.28) is redefined as follows by centroid method with threshold
[15, 16]:
$$F'(x, y) = \begin{cases} f(x, y) - T' & f(x, y) \ge T' \\ 0 & f(x, y) < T' \end{cases} \tag{2.31}$$
This method finds the centroid of the pixels whose gray values are greater than $T'$,
which is equivalent to subtracting the background threshold from the original image
first. It can be proved that the centroid method with threshold has higher accuracy
than the traditional centroid method. Only when $T' = T$ and the gray distribution
$f(x, y)$ is unrelated to the coordinate values of $x$ and $y$ is the centroid method with
threshold equivalent to the traditional centroid method.
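The three gray-based variants of Eqs. (2.28)–(2.31) can be sketched in one routine; the mode names and the single-function packaging are illustrative choices of this sketch:

```python
import numpy as np

def centroid(img, T, mode="threshold", T2=None):
    """Gray-based centroiding over a spot image.
    mode "plain":     Eqs. (2.28)-(2.29), traditional centroid
    mode "square":    Eq. (2.30), square-weighted centroid
    mode "threshold": Eq. (2.31), subtract T2 before weighting"""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    g = img.astype(float)
    if mode == "plain":
        F = np.where(g >= T, g, 0.0)
    elif mode == "square":
        F = np.where(g >= T, g, 0.0) ** 2
    else:  # "threshold"
        T2 = T if T2 is None else T2
        F = np.where(g >= T2, g - T2, 0.0)
    return (F * xs).sum() / F.sum(), (F * ys).sum() / F.sum()
```

On a noiseless symmetric spot all three variants return the true center; their accuracies separate only once noise and background are added, as the simulations below show.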
(4) Surface Fitting Method
Since images of stars on the photosensitive surface of the image sensor can be
approximately viewed as of a Gaussian distribution, a Gaussian surface can be used
to fit the gray distribution. The two-dimensional Gaussian surface function can be
expressed as follows:

$$f(x, y) = A \exp\left\{-\frac{1}{2(1 - \rho^2)}\left[\left(\frac{x - x_0}{\sigma_x}\right)^2 - 2\rho \frac{x - x_0}{\sigma_x} \cdot \frac{y - y_0}{\sigma_y} + \left(\frac{y - y_0}{\sigma_y}\right)^2\right]\right\} \tag{2.33}$$
Here, $A$ is a scale coefficient standing for the gray amplitude, which is related to
the brightness (magnitude) of stars. $(x_0, y_0)$ stands for the center of the Gaussian
function, $\sigma_x$ and $\sigma_y$ for the standard deviations in the $x$ and $y$ directions, respectively,
and $\rho$ for the correlation coefficient. Generally, take $\rho = 0$ and $\sigma_x = \sigma_y$. The center
(central position coordinates of stars) of the Gaussian function can be obtained by
the least square method. To facilitate computation, one-dimensional Gaussian
curves in the directions of x and y can be used for fitting, respectively.
$$f(x) = A\, e^{-\frac{(x - x_0)^2}{2\sigma^2}} \tag{2.34}$$

$$\ln f(x) = a_0 + a_1 x + a_2 x^2 \tag{2.35}$$
Here, $a_0 = \ln A - x_0^2/(2\sigma^2)$, $a_1 = x_0/\sigma^2$, and $a_2 = -1/(2\sigma^2)$, so the fitted center is
$x_0 = -a_1/(2a_2)$.
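The one-dimensional log-quadratic fit of Eqs. (2.34)–(2.35) can be sketched as follows, assuming a background-subtracted, strictly positive intensity profile:

```python
import numpy as np

def gaussian_peak_1d(profile):
    """Fit a 1-D Gaussian center via Eqs. (2.34)-(2.35): a quadratic
    least-squares fit to the log of the profile, with the center
    recovered as x0 = -a1 / (2*a2)."""
    x = np.arange(len(profile), dtype=float)
    a0, a1, a2 = np.polynomial.polynomial.polyfit(x, np.log(profile), 2)
    return -a1 / (2.0 * a2)
```

Fitting the x and y directions separately in this way avoids a full two-dimensional nonlinear fit, as the text suggests.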
To verify the accuracy of various centroiding methods, star spot images can be
generated based on digital image simulations introduced in Sect. 2.3. The image
size is 20 × 20. There is only one star in the image and the central position of the
star spot is (10, 10). The radius of PSF can take one pixel, and the background’s
gray value of the image is 20. To investigate the influences of gray noise and spot
image size on positioning accuracy, the standard deviation of gray noise varies from
zero to ten and the magnitude from one to six. Simulation experiments use stars of
5.5 Mv as references. The maximum gray value of its peak point just reaches
saturation, i.e., 255. Figure 2.13a shows a star image. Its standard deviation of noise
is eight, and its radius of PSF is 1.5. Figure 2.13b shows the amplified image of the
original star spot.
(1) Influence of Gray Noise on Positioning Accuracy
Assume the actual central coordinates of the star spot are ðxc ; yc Þ and the measured
central coordinates are ðxi ; yi Þ. The deviation ðep Þ of centroiding and the standard
deviation ðrp Þ are defined as follows:
$$e_p = \frac{1}{n} \sum_{i=1}^{n} \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}, \qquad \sigma_p = \sqrt{\frac{1}{n - 1} \sum_{i=1}^{n} \left(\sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} - e_p\right)^2} \tag{2.37}$$
In the simulation, the radius of the PSF takes one pixel. The binary threshold $T$ is
equal to the background's gray value plus five times the standard deviation of the
Gaussian noise. The selected threshold $T'$ of the centroid method with threshold is
equal to $T$ plus 20 ($T' = T + 20$). Each method undergoes 1000 measurements.
The following conclusions can be drawn from the above:
① When noise level is low, each method is of high accuracy and the accuracy is
nearly the same.
② As the noise level increases, the accuracy of the various methods decreases. The
accuracy decrease of the traditional centroid method is the most significant, while
that of the centroid method with threshold is relatively small.
③ The accuracy comparison of various methods is as follows:
centroid method with threshold > square weighting centroid method >
Gaussian surface fitting method > traditional centroid method.
first decreases and then increases. The main reason for this is that the gray distri-
bution of star spots reaches saturation and can no longer be fitted through the
Gaussian surface. From the above simulations and results analysis, it can be con-
cluded that the centroid method with threshold is a centroiding method that is
appropriate for extracting the central positions of star spots. It is of high accuracy
and is robust to the influence of noise. In addition, the centroid method with
threshold is as simple as the traditional centroid method and is easily implemented.
(4) Selection of Threshold
Threshold T 0 is an important parameter for the centroid method with threshold, the
selection of which affects positioning accuracy. Figure 2.17 shows the variation
curve of positioning accuracy with different values of T 0 . As shown in Fig. 2.17, the
positioning accuracy is nearly the same when T 0 is between T + 10 and T + 60.
Within this range, the deviation reaches a minimum value when T 0 is approximately
T + 30.
When the standard deviation of gray noise is 8, T 0 ¼ T þ 20, Magnitude = 5,
and there is no filter processing, the deviation of the centroid method with
threshold is 0.023 pixel and the standard deviation is 0.012 pixel. Thus, it can be
concluded that this centroiding method can achieve the positioning accuracy of
0.04–0.05 pixel.
The angular size of a star is far less than one arc second. Ideally, focused imaging
would confine the star spot image on the star sensor to within one pixel. To improve
the centroiding accuracy of the star sensor, a defocusing technique is often used to
spread the star spot image into a dispersion circle, to which the centroid algorithms
are then applied.
There are many factors affecting the accuracy of the centroid algorithms, including
noise, sampling and quantization errors, etc. These factors, based on their influence
form and function, can be put into two categories: those that cannot be compensated
and those that can be compensated. Generally speaking, it is very difficult to
compensate all kinds of noises (e.g., readout noise, dark current noise, fixed pattern
noise, nonuniformity noise, etc.) later in the imaging process. Instead, specific
measures are often taken when the image capture circuit is designed to improve the
signal-to-noise ratio (SNR) as much as possible. Factors such as sampling and
quantization errors have a regular influence on centroiding. In essence, this is an
error introduced when the energy distribution center is replaced by a pixel geometric
center. This kind of error is often called pixel frequency error; that is, the deviation
of the star spot center changes regularly within one pixel [18–20].
(1) Centroiding Deviation Induced by Fill Factor
Generally, the pixel fill factor is assumed to be 100% in the centroiding process. In
fact, since the pixel fill factor of the image device is less than 100%, spatial
nonuniformity of the pixel response is introduced in the pixel quantization process.
Even if there is no noise, pixel quantization will still result in distortion of the PSF
and a deviation between the calculated star spot position and its "real" position.
A star spot covering 3 × 3 pixels (as shown in the circular region of the dotted
line) which moves in the direction of the line scanning is illustrated by Fig. 2.18.
The dark rectangles in Fig. 2.18 are the regions occupied by transistors of reset, row
selection gating, and column amplification. And the surrounding white regions are
the effective photosensitive parts of pixels. In the process of scanning, as the pixel
photosensitive regions covered by the spot change (shaded parts in the circular
region of dotted line in Fig. 2.18), a periodic change appears in the computed spot
centroid deviation.
(2) Centroiding Deviation Induced by Charge Saturation
Sampling and quantization errors induced by charge saturation are another
important error source. When the electron charge saturates in the central pixel of
the star spot, the traditional Gaussian PSF model is truncated, and the truncation
effect induces a computational deviation in the star spot
2.5 Calibration of Centroiding Error 67
centroid. The truncation of the PSF in the $X$ direction is illustrated in Fig. 2.19. For
pixels whose gray value exceeds 255, the PSF is truncated. The truncated
Gaussian PSF model is as follows:
$$I(x, y) = \begin{cases} \dfrac{I_0}{2\pi\sigma^2} \exp\left(-\dfrac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2}\right) & x^2 + y^2 \ge r^2 \\[2mm] I_1 & x^2 + y^2 < r^2 \end{cases} \tag{2.38}$$
Fig. 2.19 Illustration of Gaussian PSF truncation model induced by charge saturation
Here, I stands for the radiation energy distribution of star spot, x0 and y0 for the
center of the star spot, x and y for pixel coordinates, and r for the truncation radius.
I0 stands for the total energy of starlight, determined by magnitude. I1 stands for the
saturation value of the electron charge in a pixel. For a dim star (large magnitude
value) whose starlight is weak, $r = 0$ and this model reduces to the original Gaussian PSF.
The common equations of the centroid algorithms are as follows:
$$x_c = \frac{\iint_A x\, I(x, y)\, dx\, dy}{\iint_A I(x, y)\, dx\, dy}, \qquad y_c = \frac{\iint_A y\, I(x, y)\, dx\, dy}{\iint_A I(x, y)\, dx\, dy} \tag{2.39}$$
Here, xc and yc stand for the radiation center of the star spot, A for the neigh-
borhood of the star spot, x and y for pixel coordinates of image sensor plate, and
I (x, y) for radiation distribution function. After discretization of digital images, the
computational equations of centroid are as follows:
$$\tilde{x}_c = \frac{\sum_{k=1}^{n} x_k I_k}{\sum_{k=1}^{n} I_k}, \qquad \tilde{y}_c = \frac{\sum_{k=1}^{n} y_k I_k}{\sum_{k=1}^{n} I_k} \tag{2.40}$$
Here, ~xc and ~yc stand for the center of star spot after discretization, n for the
number of pixels with a gray value greater than threshold T 0 , k for the index number
of pixels, xk and yk for the coordinates of the k-th pixel, and Ik for the gray output of
the k-th pixel.
Figure 2.20 shows the pixel frequency error curves induced by charge saturation
obtained through simulations with different magnitudes when there is no noise. The
directions of x and y are mutually independent, and here the error in the direction of
x is simulated. As shown in Fig. 2.20, the deviation within pixels approximately
follows the sine function. But with the changes of magnitude, from 3.5 to 0.0 Mv,
both the amplitude and phase change.
Next, with the star of 0 Mv as reference, the pixel frequency error model of the
centroid algorithms is introduced, and brightness and positional noise are added
to simulate and compute the pixel frequency error. The pixel frequency error
model is built as follows:
$$\begin{cases} E_x = A_x\left[\sin\left(2\pi x_p + 2\pi B_x\right) - \sin\left(2\pi B_x\right)\right] \\ E_y = A_y\left[\sin\left(2\pi y_p + 2\pi B_y\right) - \sin\left(2\pi B_y\right)\right] \end{cases} \tag{2.41}$$
Here, Ax and Ay stand for the deviation amplitude coefficients in the directions of
x and y, Bx and By for deviation phase coefficients, and xp and yp for position
coordinates within one pixel. Since the pixel of the image sensor is square shaped,
assume Ax = Ay and Bx = By. And for the same magnitude, the deviation amplitude
coefficient and phase coefficient are constants. Here, use the direction of x as ref-
erence, and the parameter estimation in the direction of y can be processed in the
same way.
Using least square estimation, the estimation equations of amplitude and phase
deviations are as follows:
$$\begin{cases} \Delta E_x = \left[\sin\left(2\pi x_p + 2\pi B_x\right) - \sin\left(2\pi B_x\right),\;\; 2\pi\cos\left(2\pi x_p + 2\pi B_x\right) - 2\pi\cos\left(2\pi B_x\right)\right] \begin{pmatrix} \Delta A_x \\ \Delta B_x \end{pmatrix} \\[2mm] \Delta E_y = \left[\sin\left(2\pi y_p + 2\pi B_y\right) - \sin\left(2\pi B_y\right),\;\; 2\pi\cos\left(2\pi y_p + 2\pi B_y\right) - 2\pi\cos\left(2\pi B_y\right)\right] \begin{pmatrix} \Delta A_y \\ \Delta B_y \end{pmatrix} \end{cases} \tag{2.42}$$
Here, DEx and DEy stand for the measured values of pixel deviation in the
directions of x and y, DAx and DBx for pixel frequency error amplitude and phase
estimation in the direction of x, and DAy and DBy for pixel frequency error
amplitude and phase estimation in the direction of y.
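Because Eq. (2.41) is linear in a = A·cos(2πB) and b = A·sin(2πB), the amplitude and phase can also be estimated in closed form by ordinary least squares rather than the iterative update of Eq. (2.42); this reparameterization is a choice of the sketch, not the book's procedure:

```python
import numpy as np

def fit_pixel_frequency_error(xp, Ex):
    """Estimate amplitude A and phase B of the sine error model
    E = A*[sin(2*pi*xp + 2*pi*B) - sin(2*pi*B)]  (Eq. 2.41).
    Expanding gives E = a*sin(2*pi*xp) + b*(cos(2*pi*xp) - 1) with
    a = A*cos(2*pi*B), b = A*sin(2*pi*B): a linear least-squares problem."""
    s = np.sin(2 * np.pi * xp)
    c = np.cos(2 * np.pi * xp) - 1.0
    (a, b), *_ = np.linalg.lstsq(np.column_stack([s, c]), Ex, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a) / (2 * np.pi)
```

Applied to the 21 sampled sub-pixel positions described below, the fitted sine can then be subtracted from the measured centroids to calibrate the pixel frequency error.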
brightness error is also Gaussian white noise and its mean square deviation is 2% of
the saturated gray value. The threshold takes five times the mean square deviation
of brightness, i.e., 10% of the saturated gray value. Figure 2.21 shows the cali-
bration results of pixel frequency error. ‘−+’ stands for the simulation value with
noise, ‘−O’ for sine estimated value, and ‘−Δ’ for residual error.
The amplitude of the sine deviation after calibration is 0.060 pixel and the phase is
1.4 × 10⁻³ rad. The root mean square of the simulation value's error is 0.055 pixel,
and the root mean square of the residual error after calibration is 0.036 pixel. The
computational accuracy of the star spot centroid is improved by 34%, which shows
the calibration is remarkably effective. It is worth noting that the above simulations are
based on the star of 0 Mv. The pixel frequency error of each magnitude is different.
Thus, to fully calibrate the pixel frequency errors of all magnitudes to be dealt with
by the star sensor, the amplitude and phase of several major magnitudes’ pixel
frequency error can be measured, and then interpolation can be used to calibrate the
centroiding results of different magnitudes.
In the laboratory, the starlight simulator and a turntable of high accuracy can be
used to calibrate the pixel frequency error of centroiding. As shown in Fig. 2.22,
the turntable is adjusted so that the center of the imaged star spot lies on a pixel edge, which reduces possible interference in the y direction. Starting from its initial position at pixel x, the turntable rotates from pixel x to pixel x + 1 in steps of approximately 0.05 pixel, so that 21 points are sampled in the process. At each point, multiple samplings (100 times) are conducted to reduce random error. The sampling process is then repeated in the y direction. To obtain a more precise
estimated parameter value of pixel frequency error, five pixels can be selected in the
up, down, left, right, and middle parts of the star sensor’s image sensor plate to
repeat the above data collection. The parameters can be integrated to solve the
amplitude and phase deviation coefficients. For example, for captured image of
resolution 1024 × 1024, the sampling of pixel frequency error can be done at five
pixel points, i.e., (127, 512), (512, 512), (896, 512), (512, 127), (512, 896),
respectively.
References
1. Liebe CC (2002) Accuracy performance of star trackers - a tutorial. IEEE Trans Aerosp
Electron Syst 38(2):587–599
2. Jeffery WB (1994) On-orbit star processing using multi-star star trackers. SPIE 2221:6–14
3. Ju G, Kim H, Pollock T et al (1999) DIGISTAR: a low-cost micro star tracker. AIAA Space
Technology Conference and Exposition, Albuquerque, AIAA: 99-4603
4. Chen Y (2001) A research on three-axis attitude measurement of satellites based on star
sensor. Doctoral Thesis of Changchun Institute of Optics, Fine Mechanics and Physics,
Chinese Academy of Sciences, Changchun
5. Wei X (2004) A research on star identification methods and relevant technologies for star sensor (pp 15–43). Doctoral Thesis of Beijing University of Aeronautics and Astronautics, Beijing
6. Zhang G, Wei X, Jiang J (2006) Star map identification based on a modified triangle
algorithm. Acta Aeronautica et Astronautica Sinica 27(6):1150–1154
7. Wang X (2003) Study on wide-field-of-view and high-accuracy star sensor technologies.
Doctoral Thesis of Changchun Institute of Optics, Fine Mechanics and Physics, Chinese
Academy of Sciences, Changchun
8. Weng JY (1992) Camera calibration with distortion models and accuracy evaluation. IEEE
Trans Pattern Anal Mach Intell 14(10):965–980
9. Yuan J (1999) Navigation star sensor technologies. Doctoral Thesis of Sichuan University,
Chengdu
10. Zhang Y (2001) Image segmentation. Science Press, Beijing
11. Hao X, Jiang J, Zhang G (2005) CMOS star sensor image acquisition and real-time star
centroiding algorithm. J Beijing Univ Aeronaut Astronaut 31(4):381–384
12. Grossman SB, Emmons RB (1984) Performance analysis and optimization for point tracking
algorithm applications. Opt Eng 23(2):167–176
13. Zhou R, Fang J, Zhu S (2000) Spot size optimization and performance analysis in image
measurement. Chin J Sci Instrum 21(2):177–179
14. Shortis MR, Clarke TA, Short TA (1994) Comparison of some techniques for the subpixel
location of discrete target images. SPIE 2350:239–250
15. Sirkis J (1990) System response to automated grid methods. Opt Eng 29(12):1485–1491
16. West GAW, Clarke TA (1990) A survey and examination of subpixel measurement
techniques. SPIE 1395:456–463
17. Wei X, Zhang G, Jiang J (2003) Subdivided locating method of star image for star sensor.
J Beijing Univ Aeronaut Astronaut 29(9):812–815
18. Giancarlo R, Domenico A (2003) Enhancement of the centroiding algorithm for star tracker
measure refinement. Acta Astronaut 53:135–147
19. Ying JJ, He YQ, Zhou ZL (2009) Analysis on laser spot locating accuracy affected by CMOS
sensor fill factor in laser warning system. The ninth international conference on electronic
measurement and instruments (pp 202–206) (2)
20. Hao X (2006) Key technologies of miniature CMOS star sensor (pp 61–64). Doctoral Thesis of Beijing University of Aeronautics and Astronautics, Beijing
Chapter 3
Star Identification Utilizing Modified Triangle Algorithms
Generally, the existing star identification algorithms can be roughly divided into
two classes according to their methods of feature extraction: subgraph isomorphism
algorithms and star pattern recognition algorithms. The former category regards star
pair angular distances as sides and stars as vertexes, so that a measured star image
can be regarded as the subgraph of a full-sky star image. By using angular distance
in a direct or indirect manner, these algorithms use line (angular distance), triangle,
and quadrangle as the basic matching elements to build a guide database in a certain
way. With the combined use of those elements, once the only area (subgraph) that
meets the matching requirements is found in the full-sky star image, it is regarded as
the corresponding match of the measured star image. The triangle algorithm is the
most typical subgraph isomorphism algorithm and has been so far one of the most
common and widely used star identification algorithms. For example, the space-
borne star sensors of the Danish Oersted satellite [1, 2], America’s DIGISTAR I
miniature star tracker [3], and others all use this algorithm. Besides, the triangle
algorithm has many derived forms, like the Scholl [4] method based on six features and
Mortari et al.’s [5] pyramid algorithm.
Traditional triangle algorithms, though simple in structure and easy in realiza-
tion, have many weaknesses. For example, they generally require very large guide
databases and are rather time-consuming in data retrieval. The modification of the triangle algorithms mainly focuses on the introduction of new features, especially magnitude, and on the selection of triangles to eliminate as many redundant guide triangles as possible. Due to the algorithms' own limitations, these modifications cannot remarkably improve the performance. Zhang et al. [6–9] modify the
current triangle algorithms for star identification so as to solve their existing
problems and enhance the efficiency of matching and identification.
In this chapter, two modified triangle algorithms for star identification are
introduced, and their basic principles and the specific realization processes are
elaborated on. Finally, the performance of the two algorithms will be evaluated
through experiments and compared with that of the traditional triangle algorithm.
The triangle algorithms for star identification, with many varied forms, use angular
distances between each two of the three stars to fulfill matching and identification.
In this section, their basic principles are introduced, and their features and existing
problems analyzed.
A single star cannot be used for identification, while two stars can be identified through their angular distance. The right ascension and declination coordinates of guide stars i and j are denoted as $(\alpha_i, \delta_i)$ and $(\alpha_j, \delta_j)$, respectively. The angular distance in the celestial coordinate system is defined as follows (c.f. Fig. 3.1a):

$$
d(i,j) = \cos^{-1}\!\left(\frac{\mathbf{s}_i \cdot \mathbf{s}_j}{|\mathbf{s}_i||\mathbf{s}_j|}\right)
\tag{3.1}
$$

Here, $\mathbf{s}_i = \begin{pmatrix}\cos\alpha_i\cos\delta_i \\ \sin\alpha_i\cos\delta_i \\ \sin\delta_i\end{pmatrix}$ and $\mathbf{s}_j = \begin{pmatrix}\cos\alpha_j\cos\delta_j \\ \sin\alpha_j\cos\delta_j \\ \sin\delta_j\end{pmatrix}$ are the direction vectors of the guide stars i and j, respectively.
Similarly, denoting the star spot coordinates on the image plane of measured stars 1 and 2 as $(X_1, Y_1)$ and $(X_2, Y_2)$, respectively, the angular distance in the star sensor coordinate system can be defined as follows (c.f. Fig. 3.1b):
Fig. 3.1 Angular distance matching. a Angular distance in the celestial coordinate system.
b Angular distance in the star sensor coordinate system
$$
d_m^{12} = \cos^{-1}\!\left(\frac{\mathbf{s}_1 \cdot \mathbf{s}_2}{|\mathbf{s}_1||\mathbf{s}_2|}\right)
\tag{3.2}
$$

Here, $\mathbf{s}_1 = \frac{1}{\sqrt{X_1^2+Y_1^2+f^2}}\begin{pmatrix}X_1\\ Y_1\\ f\end{pmatrix}$ and $\mathbf{s}_2 = \frac{1}{\sqrt{X_2^2+Y_2^2+f^2}}\begin{pmatrix}X_2\\ Y_2\\ f\end{pmatrix}$ are the direction vectors of the measured stars 1 and 2 in the star sensor coordinate system, respectively.
If the measured stars match the guide stars, then

$$
\left|d(i,j) - d_m^{12}\right| \le \varepsilon
\tag{3.3}
$$
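The two angular-distance definitions and the matching criterion can be sketched as follows (angles in radians; the clamp guards arccos against floating-point round-off; function names are illustrative):

```python
import math

def dir_celestial(alpha, delta):
    """Direction vector s_i of Eq. (3.1) from right ascension alpha
    and declination delta (both in radians)."""
    return (math.cos(alpha) * math.cos(delta),
            math.sin(alpha) * math.cos(delta),
            math.sin(delta))

def dir_sensor(X, Y, f):
    """Direction vector of Eq. (3.2) from star spot coordinates (X, Y)
    on the image plane and the focal length f."""
    n = math.sqrt(X * X + Y * Y + f * f)
    return (X / n, Y / n, f / n)

def angular_distance(s1, s2):
    """d = arccos(s1 . s2 / (|s1||s2|)), as in Eqs. (3.1)/(3.2)."""
    dot = sum(a * b for a, b in zip(s1, s2))
    norm = math.sqrt(sum(a * a for a in s1)) * math.sqrt(sum(a * a for a in s2))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def pair_matches(d_guide, d_measured, eps):
    """Matching criterion of Eq. (3.3)."""
    return abs(d_guide - d_measured) <= eps
```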
The major differences regarding those triangle algorithms lie in the selection and
storage mode of guide triangles. The earliest research in this field was done by
Liebe whose method stored all guide triangles that could be formed by guide stars
Table 3.1 Storage mode of guide triangles

d(i,j) (°)   d(j,k) (°)   θ(i,k) (°)
5.596        8.191        164.204
5.596        8.191        30.680
5.596        10.102       7.452
5.596        12.306       58.521
5.596        12.754       13.597
8.191        8.329        165.117
8.191        10.102       156.662
8.191        12.306       137.275
8.191        12.754       150.607
8.329        10.102       38.222
8.329        12.306       27.841
8.329        12.754       44.276
for the retrieval of matches. If one guide triangle and one measured triangle meet
the requirement in formula (3.4), then the guide triangle is the match of the mea-
sured triangle. If there is only one guide triangle that is matched, then the star
identification is successful. The guide triangles are stored in ascending order of their first side d(i,j) and then of the second side d(j,k), as shown in Table 3.1, which illustrates the storage mode of the guide triangles (the "side-angle-side" mode is used here). Liebe stores the angular distance values of each side (angle) of those guide triangles by sections, with error tolerance added, for quick retrieval. This
triangle algorithm selects 1000 stars from 8000 guide stars to form guide triangles,
but still requires around 1 MB memory to store 185,000 guide triangles. The
identification rate using this method is 94.6%, which drops significantly (to
70–80%) when there are interfering stars. It takes 10 s for full-sky star identification
on average.
The major problem with the triangle algorithms is that there are too many guide
triangles. Theoretically speaking, N guide stars can form N(N − 1)(N − 2)/6 guide
triangles. Though the limitation by the FOV helps eliminate many of them, the
number of the triangles left is still extremely large. This is particularly true when
limiting magnitude is relatively high and the total number of guide stars compar-
atively large. Too many guide triangles will impede the use of the triangle algo-
rithms. Therefore, triangle algorithms have to deal with the problem brought about
by too many guide triangles. To solve this problem, guide stars and guide triangles
need to be selected.
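The gap between the two counts is easy to quantify; a quick check of the formulas (using the 3360 guide stars adopted later in this chapter):

```python
def triangle_count(n):
    """Number of triangles N(N-1)(N-2)/6 that N guide stars can form."""
    return n * (n - 1) * (n - 2) // 6

def pair_count(n):
    """Number of star pairs N(N-1)/2 that N guide stars can form."""
    return n * (n - 1) // 2

# With 3360 guide stars, before any FOV pruning there are about
# 6.3e9 possible triangles versus only about 5.6e6 star pairs.
```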
A modified triangle algorithm, proposed by Quine and Durrant-Whyte [10],
holds that guide triangles contain a tremendous amount of redundant information,
and thus one triangle for one star is enough. The principle of selecting triangles is as
follows: use guide star S1 as the first star of the triangle, then choose the brightest and the second brightest stars, S2 and S3, within the annular neighborhood between a small radius r and a larger radius R, as the other two stars to form a guide triangle (as shown in Fig. 3.3).
The same method is applied to the measured stars to form measured triangles. With this method, N guide stars need only N guide triangles. The required
memory capacity is also reduced. Though this method makes some progress in
terms of time for identification and memory requirement, it still has problems:
① Measured triangles near the edge of the FOV may be selected in an erroneous
way and then lead to a mistaken identification;
② It requires relatively accurate information on brightness, which is generally hard to obtain because there may be errors in the measurement of magnitude.
Kruijff et al. [11] propose a Douma/DUDE (Delta-Utec Douma Extension)
algorithm based on Liebe’s and Quine’s triangle algorithms. Just like Quine’s
algorithm, the Douma/DUDE algorithm uses the information of brightness/
magnitude, and selects triangles that are most likely to be detected as the guide
triangles. The algorithm assigns each star a probability value which is related to the
measured magnitude. The stars with the biggest probability value are most likely to
be selected. Similarly, the triangles that are most likely to appear are selected
preferentially. When making up measured triangles, Douma/DUDE also considers how much the location of the measured stars (close to the edge of the FOV or not) will affect identification. The Douma/DUDE triangle algorithm is more
reasonable in the selection of guide triangles than the former two. However, it is
also limited in practical use because relatively accurate information on brightness is
still required.
As stated in the previous section, storing directly all guide triangles will lead to
problems, such as tremendous memory requirement, many redundant matches, and
too much time taken on retrieval of matches. However, the selection of triangles
proposed in Quine’s algorithm and Douma/DUDE algorithm also results in a higher
probability of error in identification. Meanwhile, the requirements for the infor-
mation on brightness are relatively strict. Regarding the above-mentioned problems,
Zhang et al. [6, 7] propose a modified triangle star identification algorithm by using
angular distance matching. The algorithm stores the data of angular distance of star
pairs to realize the matching of triangles. The number of star pairs is much lower
than that of triangles, so the memory capacity required decreases remarkably. At the
same time, storing star pairs in a reasonable way helps speed up data retrieval and
star identification. In this section, the specific realization process of the algorithm is
introduced in detail, and an in-depth comparison of it with the traditional triangle
algorithms is made.
Regardless of the limitation of the FOV, N guide stars can theoretically make up
N(N − 1)/2 star pairs. This number is far lower than that of the triangles that can be
formed. The total number of guide stars brighter than 6.0 Mv is 5103, among which
3360 are selected (c.f. Sect. 2.2.1) as guide stars. The generation of star pairs
follows the steps below:

① Scan the already selected GSC.
② If the angular distance between a star pair is smaller than d, then record the angular distance and the index numbers of the two stars, i.e., the star pair.

Here, d refers to the diagonal length of the FOV. For example, when the FOV is 12° × 12°, d = 12√2°.
Star pairs should be stored in ascending order of angular distance. To make it easier to search for matches of a star pair's angular distance, all angular distances are divided into many intervals, each of width k = 0.02°. Thus, once the angular distance of two measured stars is worked out, the corresponding interval can be looked up directly, and the star pairs in this interval are the potential matching guide star pairs.
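The interval-indexed storage can be sketched as follows (guide stars given as unit direction vectors; the dictionary keyed by interval index stands in for the sorted, sectioned database, and names are illustrative):

```python
import math

K = 0.02  # interval width of angular distance, in degrees

def build_pair_database(guide_dirs, d_max=12.0):
    """Scan the guide stars and store every pair with angular distance
    below d_max degrees, keyed by the index of its 0.02-degree interval."""
    db = {}
    n = len(guide_dirs)
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(a * b for a, b in zip(guide_dirs[i], guide_dirs[j]))
            d = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
            if d < d_max:
                db.setdefault(int(d / K), []).append((i, j, d))
    return db

def candidate_pairs(db, d_meas, tol=K):
    """All stored pairs whose interval could match d_meas within tol,
    including the neighboring intervals when tol exceeds K."""
    lo, hi = int((d_meas - tol) / K), int((d_meas + tol) / K)
    return [p for idx in range(lo, hi + 1) for p in db.get(idx, [])]
```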
Figure 3.4 illustrates the storage of star pairs in interval No. 177, where all of the
star pairs with an angular distance between 3.54° and 3.56° are stored. If the angular distance d of two measured stars is found to be 3.548°, all 62 star pairs in interval
No. 177 can be selected as the probable matches of that measured star pair. If a
higher error tolerance e is used, then the star pairs in the neighboring intervals of
No. 177, like No. 176 and No. 178, can also be the possible matches.
The curve in Fig. 3.5 shows how the number of star pairs in star pair database
changes with angular distance. It can be seen that there is a roughly linear correlation: the bigger the angular distance, the larger the number of star pairs in the corresponding interval. Figure 3.6 indicates the statistical probability of
the distribution of the number of star pairs in different intervals of angular distance
in the FOV of star sensors. It is obvious from Fig. 3.6 that angular distances of star
pairs are mainly from 0° to 12°. When the angular distance is bigger than 12°,
though there are a large number of star pairs in each interval of the database, these
star pairs are less likely to appear in the FOV due to the limitation of its scope.
Therefore, it is reasonable to think that the star pairs in this range are largely redundant and seldom show up in the real measured star image. Based on this,
star pairs with angular distances of 0°–12° can be selected when a database of star
pairs is established, and the angular distance values of the corresponding sides
should also be in that range while selecting stars to form measured triangles. The
method can largely reduce the size of the database of star pairs on the premise that
the rate of identification is not affected.
When d = 12√2°, 122,964 star pairs are stored, and about 1 MB of memory is required at 8 bytes per star pair. When d = 12°, 60,692 star pairs are stored,
and 0.5 MB memory is required. It is thus clear that a smaller value of d requires
smaller memory. Experiments show that the identification rate will not be affected
when d = 12°.
Brighter stars are generally selected for star identification, as information provided
by those stars is more reliable. During the selection of measured triangles, the brightest measured stars are considered first; for example, the six brightest stars can form up to C(6,3) = 20 candidate triangles.
Among the 20 possible measured triangles, any triangle with a side whose angular distance is larger than 12° is ignored. For the remaining triangles, the principles of selection are as follows:
① Give preference to the measured triangles formed by the brightest stars.
Let M1, M2, and M3 (where Mi is measured by the gray value of the star spot; the smaller Mi, the brighter the star) stand for the brightness of the three stars making up the triangle. Set M = M1·M2·M3; the triangle with the smallest value of M is selected preferentially.
② Give preference to measured triangles with relatively shorter angular distances.
It is obvious from Fig. 3.5 that a larger angular distance means a larger number
of star pairs in the corresponding range and a higher probability of redundant
matches. Therefore, among triangles with roughly equal M value, the ones with
the shorter angular distance should be given priority.
Here, the principle of "the shortest longest side" is adopted, that is, the triangle whose longest side has the shortest angular distance is selected preferentially.
Sort the measured triangles following the above-mentioned selection principles. Those ranked higher are used first for star identification. Once a correct identification is obtained, the remaining triangles can be skipped. Otherwise, the rest of the triangles are tried in sequence.
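The two selection principles can be sketched as a single sort key (a minimal illustration; the triangle representation and the strict lexicographic tie-break are assumptions):

```python
def rank_measured_triangles(triangles, d_max=12.0):
    """Order candidate measured triangles by the selection principles:
    drop any triangle with a side longer than d_max degrees, prefer the
    smallest brightness product M = M1*M2*M3 (smaller gray value means
    brighter), and break ties on M by the shortest longest side.
    Each triangle is (sides_deg, gray_values): two 3-tuples."""
    kept = [t for t in triangles if max(t[0]) <= d_max]
    return sorted(kept, key=lambda t: (t[1][0] * t[1][1] * t[1][2], max(t[0])))
```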
Denoting $d_m^{12}$, $d_m^{23}$, and $d_m^{13}$ as the three sides (angular distances) of the selected measured triangle, the matched star pairs are retrieved from the star pair database according to the three angular distances. Assume $C(d_m^{12})$, $C(d_m^{23})$, and $C(d_m^{13})$ are the sets of star pairs matched under formula (3.3), and the numbers of star pairs in these sets are $n(d_m^{12})$, $n(d_m^{23})$, and $n(d_m^{13})$, respectively. The matching process is, in essence, a search for three star pairs $p_1 \in C(d_m^{12})$, $p_2 \in C(d_m^{23})$, $p_3 \in C(d_m^{13})$ which are linked end-to-end, that is, each two of the three star pairs share exactly one guide star. If a set $(p_1, p_2, p_3)$ meets the above requirements, then the star pairs form a matched triangle of the measured triangle.
In general, the match retrieval of triangles in the three sets $C(d_m^{12})$, $C(d_m^{23})$, and $C(d_m^{13})$, conducted by traversal combination, needs $n(d_m^{12}) \cdot n(d_m^{23}) \cdot n(d_m^{13})$ comparison operations in the worst case. If the matched star pair set of each side (angular distance) contains approximately 100 star pairs, then one matching pass needs about $10^6$ comparison operations, which is rather time-consuming. To avoid this, a simple, fast retrieval method based on status marks is used here. It searches for a $(p_1, p_2, p_3)$ that meets the requirements by setting and checking status marks, as shown in Fig. 3.8. The specific steps are as follows:
① Set a status mark for every guide star in the GSC. Initialize them before
matching and identification. Set the status of every guide star as 0.
Fig. 3.8 The identification process of triangle. a Measured triangle. b Angular distance matching
(star pairs). c State marks (I, II, III)
② Scan $C(d_m^{12})$. Set the status of the guide stars contained in all the star pairs in the set to I, and for each star i record the index number j of the other star in the same star pair (i, j).
③ Scan $C(d_m^{23})$. If the status of a guide star contained in a star pair in the set is already I, then set its status to II, and record the index number of the other guide star k which forms a star pair with it.
④ Scan $C(d_m^{13})$. If $(j, k) \in C(d_m^{13})$, i.e., the star pair formed by guide stars j and k is contained in the set $C(d_m^{13})$, then a matched triangle is found. At this point, set the status of the guide star i, which forms star pairs with both j and k, from II to III. Thus, (i, j, k) is the guide triangle that matches the measured triangle.
If the method of setting status marks is adopted, the retrieval needs only $n(d_m^{12}) + n(d_m^{23}) + n(d_m^{13})$ comparisons, far fewer than with traversal combination.
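The status-mark search of steps ①–④ can be sketched as follows, with set-based bookkeeping standing in for the per-star marks (each candidate set holds guide-star index pairs; names are illustrative):

```python
def find_matched_triangles(C12, C23, C13):
    """Find guide stars (i, j, k) such that (i, j) matches side 1-2,
    (j, k) side 2-3 and (i, k) side 1-3, using linear scans of the
    three candidate sets instead of the full traversal combination."""
    partners12 = {}                      # status I: star -> its partners in C12
    for a, b in C12:
        partners12.setdefault(a, set()).add(b)
        partners12.setdefault(b, set()).add(a)
    s13 = {frozenset(p) for p in C13}    # side 1-3 pairs, order-free lookup
    matches = []
    for a, b in C23:                     # status II: j shared between C12 and C23
        for j, k in ((a, b), (b, a)):
            for i in partners12.get(j, ()):
                if frozenset((i, k)) in s13:   # status III: triangle closed
                    matches.append((i, j, k))
    return matches
```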
The foregoing identification process may yield more than one guide triangle that matches the measured triangle, so other methods should be used for further selection. Verification is introduced for this purpose.
The main purposes of the verification are as follows:
① It can evaluate whether the identification is correct, and meanwhile rule out
wrong matches, if any.
② It can identify matched stars for as many measured stars in the measured star image as possible, which also benefits tracking identification and improves the accuracy of attitude calculation.
③ It can provide rough information on attitude.
The basic idea of verification is as follows. If the identification is correct, i.e., the
guide triangle is the correct match of the measured triangle, then the attitude cal-
culated with the result of the match must also be accurate. So the simulated star
image (called reference star image) generated according to this attitude must be
consistent with the original measured star image (i.e., the positions of star spots are
identical). Figure 3.9 is an illustration of the correspondence of the guide stars
distributed in the celestial sphere and the measured stars in the star sensor’s FOV.
When the identification is correct, then not only will the guide triangles correspond
to the matched measured triangles, but there is a one-to-one correspondence between
the other measured stars in the FOV and the guide stars in the celestial area.
Here, the calculation of attitude adopts the method of obtaining attitude with two
vectors, that is, calculating the attitude of star sensor with the vector information
regarding two matched stars. Its specific process is introduced in Sect. 1.2.1. This
method is easy and fast, yet cannot produce a very precise attitude estimate.
Fig. 3.9 Correspondence between guide stars and measured stars. a Guide stars in the celestial
sphere. b Measured stars in the star sensor’s FOV
Verification does not require precise attitude information, but needs fast attitude
calculation. Therefore, the method of obtaining attitude with two vectors can meet
the requirements here for verification.
The simulation process of the reference star image is described in Sect. 2.3.
Based on the calculated attitude and the imaging parameters of the star sensor, the
star image that the star sensor can capture with that attitude can be easily deduced.
In order to do less calculations, the simulation of the reference star image here just
involves getting the coordinate values of imaging star spots without involving the
simulation of energy distribution and the synthesis of digital images and without
considering the factors that can be ignored, such as distortion. The information in
the partition table of GSC (c.f. Sect. 2.1) can be used for the fast retrieval of other
guide stars in the neighborhood of the guide triangle that matches with the mea-
sured triangle.
Assume Star 1 and Star 2 are two measured stars whose identification results are guide stars i and j (index numbers in the GSC), respectively. The generation process of the reference star image is illustrated in Fig. 3.10.
① Find in the GSC the index numbers of the celestial area sub-blocks, sub_i and sub_j, where the guide stars i and j are located, respectively.
② In the partition table of the GSC, find the record entries of the sub-blocks sub_i and sub_j, and obtain the index numbers of the neighboring sub-blocks. Record the sets of 3 × 3 sub-blocks centering around sub_i and sub_j as C(sub_i) and C(sub_j), respectively.
③ If the guide stars i and j are the correct identifications of the measured Star 1 and Star 2, then the reference star image (Fig. 3.9) must be located within both C(sub_i) and C(sub_j), i.e., within the intersection C(sub_i) ∩ C(sub_j). Therefore, guide
stars in the intersection area are used to generate the reference star image. With
this method, the retrieval scope of guide stars is further narrowed down, which
speeds up the generation of the star image.
If the stars (or the majority of them) in the reference star image can find corresponding measured stars within a neighborhood of small radius in the measured star image, then the reference star image and the measured star image are considered in correspondence. The identification is successful and the algorithm is
finished. Otherwise, use the same method to verify other matched guide triangles. If
the generated reference star image and the measured star image are not in corre-
spondence, then the identification of measured triangles is not successful. Repeat
the same identification process with the rest of the measured triangles of higher
ranking. If there is no measured triangle which can be correctly identified, then the
identification algorithm fails.
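The correspondence check between the reference star image and the measured star image can be sketched as follows. The matching radius and acceptance fraction here are illustrative assumptions, not values from the text:

```python
def images_correspond(reference_spots, measured_spots, radius=2.0, min_fraction=0.8):
    """Accept the identification when most reference-star spots have a
    measured spot within `radius` pixels (a verification sketch; spots
    are (x, y) image-plane coordinates)."""
    if not reference_spots:
        return False
    hits = sum(
        1 for rx, ry in reference_spots
        if any((rx - mx) ** 2 + (ry - my) ** 2 <= radius ** 2
               for mx, my in measured_spots)
    )
    return hits / len(reference_spots) >= min_fraction
```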
The flow chart of the whole algorithm is shown in Fig. 3.11.
The identification rate is also related to the interval parameter k used in the storage of angular distances. Two star pair databases, with k = 0.02° and k = 0.05°, are used for identification. The identification results are shown in Fig. 3.13.
It can be seen from Fig. 3.13 that the identification rates for different k values are about the same at low noise levels. When the position noise σ is larger than 0.5 pixel, the identification rate drops drastically for k = 0.02°; when σ = 2 pixels, the identification rate is already below 84%. Under the same conditions, the identification rate decreases relatively slowly for k = 0.05°, and remains above 97% when σ = 2 pixels. The reason for the sharp decline in identification rate at k = 0.02° is that the error tolerance ε of angular distances is so small that, when noise is added, the angular distance of a measured star pair deviates from the correct matching interval and causes mistaken matches. A bigger k can ensure that the angular distance between measured stars always stays within the correct matching interval after the noise is added. Meanwhile, it is noticeable that a bigger k results in
more star pairs to be searched for in angular distance matching, and more time in
identification. At the same time, a bigger k means larger memory usage in algorithm
operation. Therefore, the value of k is related to the level of noise, meaning a small
k should be taken when noise level is low, and the value of k can be increased when
the noise level is relatively high.
At a low noise level (σ < 0.5 pixel), the modified triangle algorithm achieves a nearly 100% identification rate, a remarkable improvement over the 94.6% identification rate of Liebe's triangle algorithm. When σ = 2 pixels, the
identification rate of Liebe’s triangle algorithm drops rapidly to about 70%, while
that of the modified triangle algorithm after increasing the value of k still stands at
97%, or even higher.
3. Influence of Magnitude Noise on Identification
Magnitude noise reflects the precision of a photoelectric detector when measuring stars' brightness and is influenced by factors such as the characteristics of the stellar spectrum and the detector itself.
4. Influence of Interfering Stars on Identification
To examine the influence of interfering stars, a certain number of "artificial stars" are added to the measured star image, with the magnitude noise being 0.2 Mv. The identification result is shown in Fig. 3.15. It is obvious from Fig. 3.15 that, when the equivalent brightness of an "artificial star" is relatively weak (>5 Mv), the triangle algorithm can ensure a comparatively high
identification rate (>95%). When the equivalent magnitude is smaller than 5 Mv,
the identification rate drops sharply, and when the equivalent magnitude is 3 Mv
and there are two “artificial stars,” the identification rate drops to around 50%.
Other triangle algorithms, if interfered by bright artificial stars, will demonstrate a
noticeable decrease in their identification rates. A bigger Ns can increase identifi-
cation rate, but will lead to an increase in identification time at the same time.
Similarly, a certain number (one or two) of measured stars are deleted from the measured star image to examine the influence of "missing stars" on the identification rate. The magnitudes of the deleted measured stars range from 3 to 6 Mv.
The result is illustrated in Fig. 3.16. It can be seen that “missing stars” exert a
comparatively small influence on identification.
5. Memory and Identification Time
The identification time of the algorithm is related to the interval parameter k during
the storage of angular distances. When k = 0.02°, the average identification time is
8.4 ms, and when k = 0.05°, the average identification time is around 10.3 ms. The
identification time is the average of 1000 identifications run on a Pentium 800 MHz PC, and the code has not been optimized. By contrast, the
average full-sky identification time needed by Liebe’s triangle algorithm is 10 s.
Regardless of differences in hardware, the modified triangle algorithm is superior in
terms of operation time.
The memory requirement of the modified triangle algorithm decreases greatly due
to the adoption of angular distance matching. For the selected 3360 guide stars, the
memory required for storing angular distances is 0.5 MB, while the traditional
triangle algorithm requires, in general, at least 1 MB.
In application, the triangle algorithms need to compare the length of three sides
(angular distances), thus the comparison times will multiply when the number of
triangles is huge. Besides, a huge number of triangles make it difficult for storage
and data retrieval. Generally, the following ways can be used to decrease the
number of triangles:
① limiting the number of guide stars, and decreasing GSC capacity;
② storing only a part of triangles with certain characteristics;
③ optimizing the storage structure of triangles.
The above-mentioned methods all still require multiple comparisons with measured triangles, so their efficiency is low. To solve this problem, in this chapter, a
modified star identification algorithm by using P vector is introduced. The algo-
rithm selects the feature triangles of each star as the research subjects, calculates
their feature values, and searches for their matched guide triangles through the
feature values. It reduces the number of comparisons effectively and speeds up
identification.
To reduce the number of triangles and of triangle comparisons, the modified star identification algorithm using the P vector [8, 9] adopts Quine's principle of one triangle per star, and integrates the three sides of the feature triangle of each guide star into one parameter P. The values of P are compared to judge whether a triangle is matched. Since the value of P makes full use of the three sides and reflects the features of the triangle, there is a one-to-one correspondence between P values and triangles, and fast star identification can be realized. Moreover, the
verification procedure is added after the initial matching ends, to ensure that
measured stars near the edge of the FOV, even though they cannot form feature
triangles, can still be identified correctly.
The generation of P vector is divided into two steps: forming feature triangles and
figuring out the optimal projection axis. The two steps are introduced below.
1. Forming Feature Triangles
Every guide star forms exactly one triangle, named the feature triangle, with the guide star as the primary star. A feature triangle is made up of the primary star and the two neighboring stars closest to it. The selection principle for neighboring stars is as follows: in the annulus between a small radius r and a larger one R, select the two stars closest and second closest to the primary star as the neighboring stars. The distance between a neighboring star and the primary star must be shorter than the FOV radius R, to ensure that those stars can be seen in the star sensor's FOV at the same time; and it must be longer than r, to avoid the star spots of the primary star and a neighboring star merging during imaging. Therefore, the neighboring stars forming a feature triangle must satisfy r < d < R.
Feature triangles are spherical triangles on the celestial sphere; each side length is the angular distance between two stars. Of the two sides connected to the primary star, if the rotation from the shorter side to the longer side is counter-clockwise, the rotation angle θ is defined as positive; otherwise, it is negative.
The three sides of a feature triangle formed according to the rules mentioned
above make up a three-dimensional vector. When θ is positive, the three coordinates
of the vector are all positive. Otherwise, they are negative. In Fig. 3.17, the distance
between the primary star S1 and the neighboring star S2 is 3.654°. The distance
between S1 and the neighboring star S3 is 5.864°. And the distance between S2 and S3
is 4.012°. The distance between S1 and S2 is smaller than that between S1 and S3, and
the rotation is counter-clockwise, so θ is positive, and the three-dimensional vector
corresponding to the feature triangle is (3.654, 5.864, 4.012).
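The signed feature vector of Fig. 3.17 can be sketched in plane geometry as follows (an illustrative simplification: the book's side lengths are angular distances on the celestial sphere, while 2-D star-image coordinates stand in here, with the sign taken from the cross product that tells whether the short-to-long rotation is counter-clockwise):

```python
import math

def feature_vector(primary, n1, n2):
    """Build the signed three-dimensional feature-triangle vector
    (short side, long side, third side); all components are positive
    when the rotation from the short side to the long side around the
    primary star is counter-clockwise, negative otherwise."""
    d1 = math.dist(primary, n1)
    d2 = math.dist(primary, n2)
    if d1 <= d2:
        s_short, s_long, d_short, d_long = n1, n2, d1, d2
    else:
        s_short, s_long, d_short, d_long = n2, n1, d2, d1
    d_third = math.dist(s_short, s_long)
    # z-component of the cross product (primary->short) x (primary->long):
    # positive means counter-clockwise rotation from short to long
    cross = ((s_short[0] - primary[0]) * (s_long[1] - primary[1])
             - (s_short[1] - primary[1]) * (s_long[0] - primary[0]))
    sign = 1.0 if cross > 0 else -1.0
    return (sign * d_short, sign * d_long, sign * d_third)
```

For the example in Fig. 3.17 such a vector would come out as (3.654, 5.864, 4.012) when the rotation is counter-clockwise, and with all three signs flipped otherwise.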
3 Star Identification Utilizing Modified Triangle Algorithms

The projection P_i of the i-th feature-triangle vector X_i = (x_i, y_i, z_i)^T onto the projection axis X = (x_1, x_2, x_3)^T is

\[
P_i = x_1 x_i + x_2 y_i + x_3 z_i = X^{T} X_i
\tag{3.6}
\]

Thus, the mean and variance of all projection points are as follows:

\[
\bar{P} = \frac{1}{N}\sum_{i=1}^{N} P_i = \frac{1}{N}\sum_{i=1}^{N} X^{T} X_i
\tag{3.7}
\]
\[
\begin{aligned}
D(P) &= \frac{1}{N}\sum_{i=1}^{N}\bigl(P_i-\bar{P}\bigr)^2\\
&= \frac{1}{N}\sum_{i=1}^{N}\bigl(P_i^2-2P_i\bar{P}+\bar{P}^2\bigr)\\
&= \frac{1}{N}\sum_{i=1}^{N}P_i^2-\frac{1}{N}\sum_{i=1}^{N}2P_i\bar{P}+\frac{1}{N}\sum_{i=1}^{N}\bar{P}^2\\
&= \frac{1}{N}\sum_{i=1}^{N}\bigl(X^{T}X_i\bigr)^2-2\bar{P}\,\frac{1}{N}\sum_{i=1}^{N}P_i+\bar{P}^2\\
&= \frac{1}{N}\sum_{i=1}^{N}\bigl(X^{T}X_i\bigr)^2-\bar{P}^2\\
&= \frac{1}{N}\sum_{i=1}^{N}X^{T}X_iX_i^{T}X-\bar{P}^2\\
&= X^{T}\Bigl(\frac{1}{N}\sum_{i=1}^{N}X_iX_i^{T}\Bigr)X-\bar{P}^2
\end{aligned}
\tag{3.8}
\]
\[
\begin{cases}
\dfrac{\partial L(X,\lambda)}{\partial X}=0\\[2mm]
\dfrac{\partial L(X,\lambda)}{\partial \lambda}=0
\end{cases}
\tag{3.11}
\]

i.e.

\[
\begin{cases}
2ZX-2\lambda X=0\\
X^{T}X-1=0
\end{cases}
\tag{3.12}
\]

\[
ZX=\lambda X
\tag{3.13}
\]
It can be seen from Eq. (3.14) that the maximum value of the objective function max(X^T Z X) is the maximum eigenvalue of the symmetric matrix Z. The optimal projection principal axis is thus the eigenvector corresponding to the maximum eigenvalue, i.e., it gives the optimal direction for projecting the data points from three-dimensional space onto one-dimensional space.
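The optimal projection principal axis, i.e., the dominant eigenvector of Z = (1/N)Σ X_i X_iᵀ, can be sketched with plain power iteration (a minimal illustration under the assumption that the largest eigenvalue is well separated, which holds when the feature vectors are roughly aligned; Z is symmetric positive semi-definite, so the iteration converges to the eigenvector of the largest eigenvalue):

```python
def optimal_projection_axis(vectors, iters=500):
    """Dominant eigenvector of Z = (1/N) * sum_i X_i X_i^T, found by
    power iteration; its direction maximizes the variance D(P) of the
    projections (Eq. 3.8), i.e. the objective max(X^T Z X)."""
    n = len(vectors)
    # build the symmetric 3x3 matrix Z
    Z = [[sum(v[r] * v[c] for v in vectors) / n for c in range(3)]
         for r in range(3)]
    x = [1.0, 1.0, 1.0]
    for _ in range(iters):
        y = [sum(Z[r][c] * x[c] for c in range(3)) for r in range(3)]
        norm = sum(t * t for t in y) ** 0.5
        x = [t / norm for t in y]        # renormalize each step
    return x

def p_value(axis, vec):
    """Projection of a feature-triangle vector onto the axis (Eq. 3.6)."""
    return sum(a * b for a, b in zip(axis, vec))
```

In practice a library eigendecomposition routine would be used instead; power iteration is shown only to keep the sketch self-contained.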
When the optimal projection principal axis is figured out, its positional relationship with the data point set is shown in Fig. 3.18. Checking the P values of these projection points confirms that no two are identical; that is, each projection point corresponds to exactly one three-dimensional vector, and a one-to-many correspondence does not exist. Even when some projection points are very close to one another, all of them are simply considered as candidates during the identification process.
The guide database comprises the star catalog and pattern information used when a star sensor identifies a star image and calculates attitude. It consists of three parts: a GSC, a feature triangles database and a P value vector table. The GSC stores the positions and magnitudes of guide stars. The feature triangles database stores the basic information of feature triangles (vertices and side lengths). The P value vector table stores the P value and index number of every triangle. The construction of the GSC is described in Chap. 2; here, the construction of the feature triangles database and the structure of the P value vector table are introduced.
1. Feature Triangles Database
Feature triangles are formed by taking every star in the GSC as a primary star, and the feature triangles database stores the information of these triangles. The index numbers of the guide stars at the vertices of each triangle and the corresponding lengths of its three sides are recorded as one entry, and all entries are stored by the index number of their primary stars. The storage structure of the feature triangles database is illustrated in Fig. 3.19. These data are used to figure out the optimal projection principal axis, to calculate the P values of all the projection points, and to search for guide stars that match the measured stars during identification.
2. Structure of P Value Vector Table
P values obtained from feature triangles are also stored in the guide database. Each
P value has its corresponding feature triangle, so that each P value and the index
number of the primary star of its feature triangle are stored within an entry. Thus the
P value vector table is established. For efficient retrieval, all these entries are stored
in ascending order of P values. The storage structure of the P value vector table is illustrated in Fig. 3.20. Here, "P" stands for the value obtained from a feature triangle according to Eq. (3.6), and "index" is the index number of the primary star of the corresponding triangle.
Fig. 3.19 Storage structure of the feature triangles database. i: index number of the primary star S1 in the GSC; j: index number of the neighboring star S2 closer to the primary star; k: index number of the neighboring star S3 farther from the primary star; short edge: angular distance between S1 and S2; long edge: angular distance between S1 and S3; third edge: angular distance between S2 and S3
Once a P value is obtained, its corresponding feature triangle can be searched for
quickly with the help of the P value vector table. Moreover, the direction vector of
the optimal projection principal axis is also stored in the database.
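The P value vector table described above can be sketched as a sorted list of (P, index) pairs (a minimal sketch; names such as `build_p_table` and the dictionary layout of `feature_triangles` are illustrative assumptions, not the book's data structures):

```python
def build_p_table(feature_triangles, axis):
    """Build the P value vector table. `feature_triangles` maps each
    guide star's index number to the signed feature vector of its
    feature triangle; every entry is (P, index) with P = X^T X_i
    (Eq. 3.6), and entries are kept in ascending order of P for fast
    retrieval by binary search."""
    table = sorted(
        (sum(a * x for a, x in zip(axis, vec)), idx)
        for idx, vec in feature_triangles.items()
    )
    p_values = [p for p, _ in table]   # separate key list for binary search
    return table, p_values
```

Sorting once at database-construction time is what makes the later lookup of a measured P value fast.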
3. Mapping Relation between the Two Databases
Mapping between the P value vector table and the feature triangle database is
realized through the index number of primary stars. The mapping relation is
illustrated in Fig. 3.21. After a P′ value is obtained, the equal value in the P column of the P value vector table is found. According to the index number I′ to the right of this value, the primary star whose entry number i is I′ can be found in the feature triangles database; that entry is the feature triangle corresponding to the P′ value.
Fig. 3.21 The mapping relation between P value vector table and feature triangles database
The identification process of this algorithm has two parts: initial matching and verification of identification. Initial matching begins with calculating the P value of a measured triangle, followed by quick retrieval of the corresponding feature triangle on the basis of that P value. A successfully identified triangle yields the orientation information of three stars; two of them can be chosen to work out the approximate attitude of the star sensor at that time, and an ideal simulated star image can be generated with the imaging model of the star sensor. The similarity between the measured star image and the simulated one is then evaluated to verify the result of identification.
1. Initial Matching
After a measured star image is captured, the first step is to check whether there are three or more measured stars: with only one or two stars, identification is impossible; otherwise, the algorithm in this section can be used. The process of initial matching is as follows:
1. Select the primary star. A star near the center of the FOV is preferred, because for stars close to the edge of the FOV, the neighboring stars needed to form the feature triangle may not all be visible in the FOV.
2. Determine the neighboring stars. Calculate the angular distances between the primary star and its surrounding stars, and sort the surrounding stars by distance. Select the two stars closest to the primary star within the range r < d < R as neighboring stars to form the feature triangle.
3. Figure out the P value. The three-dimensional vector X = (x, y, z) that describes the feature triangle is worked out from its angular distances. The optimal projection axis vector stored in the guide database is retrieved, and the projection on the optimal projection principal axis, i.e., the P value, is obtained according to Eq. (3.6). The direction in which the short side rotates to the long side is then worked out to determine whether the P value is positive or negative.
4. Match triangles. If there were no error in the feature triangle of the measured star, the P value vector table would contain exactly one P value corresponding to the result of step 3, and the matching feature triangle of the guide star could be determined directly. However, due to various errors, the projection points tend to deviate from their expected locations. As a result, all triangles whose P values fall within the error range [P − ε, P + ε] are considered as candidates to be examined. The three sides of the measured triangle are then compared with those of each candidate guide feature triangle. If the errors of all three sides are very small and only one triangle meets this requirement, that feature triangle is regarded as the match of the measured triangle, with vertexes in a one-to-one matching relationship. However, if several matching triangles are found, verification must be carried out for further selection.
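The window query of step 4 can be sketched with binary search over the sorted table (an illustrative sketch; the tolerance values `eps_p` and `eps_side` are placeholders, not the book's thresholds):

```python
import bisect

def match_triangle(p, sides, table, p_values, triangles,
                   eps_p=0.01, eps_side=0.005):
    """Return the guide-star indices whose feature triangles are
    candidate matches: P within [p - eps_p, p + eps_p] and each of the
    three sides within eps_side of the measured triangle's sides.
    `table` is the sorted (P, index) list, `p_values` its key column,
    and `triangles` maps index numbers to stored feature vectors."""
    lo = bisect.bisect_left(p_values, p - eps_p)
    hi = bisect.bisect_right(p_values, p + eps_p)
    return [
        idx for _, idx in table[lo:hi]
        if all(abs(a - b) < eps_side for a, b in zip(sides, triangles[idx]))
    ]
```

Exactly one returned index means a unique match; several indices mean verification is still needed.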
2. Verification of Identification
This verification process is similar to the modified triangle algorithm method by
using angular distance matching in that it generates a reference star image based on
the result of initial matching and the imaging parameter of the star sensor. As
shown in Fig. 3.22, “☆” stands for a star in the reference star image, and “★” for a
star in the measured star image. The two star images may not be exactly the same when compared, yet if most stars in the reference star image have corresponding measured stars within a small radius in the measured star image, the two star images are regarded as the same, that is, identification is successful. Nevertheless, if the two star images are different, which means the feature triangle was identified wrongly, another primary star should be selected from the remaining measured stars to form a new feature triangle for matching. If no feature triangle can be matched successfully, the identification algorithm fails.
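The verification step can be sketched as counting reference stars that have a nearby measured star (a minimal sketch; the pixel radius and acceptance fraction are illustrative assumptions, not the book's values):

```python
import math

def verify(reference, measured, radius=3.0, min_fraction=0.8):
    """Accept the identification when most stars of the simulated
    reference image have a measured star within `radius` pixels;
    both star lists hold 2-D image-plane coordinates."""
    if not reference:
        return False
    hits = sum(
        1 for ref in reference
        if any(math.dist(ref, m) < radius for m in measured)
    )
    return hits / len(reference) >= min_fraction
```

A failed check sends the algorithm back to select a new primary star, as described above.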
The flow chart of the identification algorithm is shown in Fig. 3.23.
The guide star database used in the simulations is based on the SAO Star Catalog. Stars brighter than magnitude 6 Mv (5103 in total) are chosen from the star catalog to constitute the guide star database. The size of the FOV is 10.8° × 10.8°, the focal length of the lens is 80.047 mm, the pixel size is 0.015 mm, and the resolution is 1024 × 1024 pixels. The simulations are run on an Intel Pentium 4 2.0 GHz computer.
1. Identification Example
Figure 3.24 shows the identification results for four randomly generated star images using the P vector identification algorithm. "+" stands for a measured star in the FOV, and "◦" for a correctly identified star. The algorithm correctly identifies all the measured stars in the simulated star images.
2. Influence of the Number of Measured Stars in the FOV on Identification Rate
The number of measured stars in the FOV is an essential factor influencing the identification rate. If the number of measured stars in the FOV is too small, it is hard to find additional stars to verify the result of initial matching during the verification process; as a result, even if there are several candidate triangles from initial matching, a final match cannot be confirmed. With enough measured stars in the FOV this does not occur, and a high identification rate can be guaranteed. It can be observed in Fig. 3.25 that with five measured stars in the FOV the identification rate stands at merely 76%, and it drops further as the number decreases. When there are more than six measured stars, the identification rate increases remarkably, and when the number exceeds ten, the rate is close to 100%.
3. Influence of Positional Noise on the Identification Rate
Similar to the foregoing simulations, noise is added to the coordinates of star spots in the reference star image so that identification results can be compared with other algorithms under the same conditions. To precisely evaluate
References
1. Liebe CC (1995) Star trackers for attitude determination. IEEE Aerosp Electron Syst Mag 10(6):10–16
2. Liebe CC (1992) Pattern recognition of star constellations for spacecraft applications.
IEEE AES Mag 28(6):34–41
3. Ju G, Kim H, Pollock T et al (1999) DIGSTAR: a low-cost micro star tracker. AIAA-99-4603
4. Scholl M (2019) Star field identification algorithm—performance verification using
simulation star fields. SPIE 1993:275–290
5. Mortari D, Junkins J, Samaan M (2001) Lost-in-space pyramid algorithm for robust star
pattern recognition. In: 24th annual AAS guidance and control conference, AAS 01-004
6. Wei X (2004) A research on star identification methods and relevant technologies in star sensor. Doctoral thesis, Beijing University of Aeronautics and Astronautics, Beijing, pp 1–14
7. Zhang G, Wei X, Jiang J (2006) Star map identification based on a modified triangle
algorithm. Acta Aeronautica Et Astronautica Sinica 27(6):1150–1154
8. Yang J (2007) A research on star identification algorithm and RISC technology application. Doctoral thesis, Beijing University of Aeronautics and Astronautics, Beijing, pp 1–17
9. Yang J, Zhang G, Jiang J (2007) Fast star identification algorithm using P vector. Acta
Aeronautica Et Astronautica Sinica 28(4):897–900
10. Quine BM, Durrant-Whyte HF (1996) Rapid star pattern identification. SPIE 2739:351–360
11. Kruijff M et al (2003) Star sensor algorithm application and spin-off. In: 54th international
astronautical congress of the International Astronautical Federation (IAF), the International
Academy of Astronautics and the International Institute of Space Law, vol 1, pp 349–359
Chapter 4
Star Identification Utilizing Star Patterns
© National Defense Industry Press and Springer-Verlag GmbH Germany 2017 107
G. Zhang, Star Identification, DOI 10.1007/978-3-662-53783-1_4
As the representative star identification algorithm using star patterns, the grid algorithm performs well in fault tolerance, storage capacity and running time. This section presents a brief introduction to the principles and performance features of the grid algorithm, and analyzes the disadvantages of its feature extraction approach.
The grid algorithm was first proposed by Padgett [1]; it is a star identification method using "star patterns". Its pattern generation process (shown in Fig. 1.17) can be roughly divided into the following steps:
① Determine the primary star s (the star to be identified) and the pattern radius pr. The pattern of the primary star consists of the neighboring stars in the neighborhood determined by pr.
② Shift the star image so that the primary star is at the center of the FOV.
③ Within radius pr and beyond radius br, find the star l closest to s; l is called the location star.
④ With the line connecting the primary star and the location star as the initial coordinate axis and the primary star as the origin, rotate the star image and divide it into a grid of g × g cells. In this way, the primary star's feature pattern is expressed by the grid cells (i, j): if there are neighboring stars in a grid cell, the corresponding value is 1; otherwise, it is 0.
Denoting the pattern by a one-dimensional vector, suppose the star's feature vector is

\[
v=(a_1,a_2,\ldots,a_k,\ldots,a_{g^2}),\qquad k=1,2,\ldots,g^2
\tag{4.1}
\]

where

\[
a_k=\begin{cases}1, & \mathrm{cell}(i,j)=1\\ 0, & \mathrm{cell}(i,j)=0\end{cases}
\]

Here, \(k=j\cdot g+i\).
Denoting measured star j's pattern as \(pat_j\), and the pattern set of all guide stars in the GSC as \(\{pat_i\}\), star identification in essence seeks

\[
\max_i\ \mathrm{match}(pat_j,pat_i),\qquad
\mathrm{match}(pat_j,pat_i)=\sum_{k=1}^{g^2}\bigl(pat_j(k)\ \&\ pat_i(k)\bigr)
\tag{4.2}
\]

where \(\&\) stands for the logical AND operation.
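The pattern vector and match count described above can be sketched as follows (a minimal illustration; the occupied cell coordinates are assumed to be already computed from the shifted and rotated image):

```python
def grid_pattern(occupied_cells, g=8):
    """Pack occupied grid cells (i, j) into the one-dimensional 0/1
    pattern vector of Eq. (4.1), using k = j * g + i."""
    v = [0] * (g * g)
    for i, j in occupied_cells:
        v[j * g + i] = 1
    return v

def match(pat_a, pat_b):
    """Match value: the number of cells occupied in both patterns
    (logical AND summed over all g*g positions)."""
    return sum(a & b for a, b in zip(pat_a, pat_b))
```

The guide star maximizing `match` against the measured pattern is taken as the identification result.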
4.1 Introduction to the Grid Algorithm
And the rise of positional noise further decreases the probability. There are two major reasons: first, when the primary star is near the edge of the FOV, the location star is more likely to fall outside the FOV; second, because of errors in star magnitude measurement, the accuracy of the luminosity information cannot be guaranteed. A mistaken determination of the location star generates an incorrect feature pattern, and then a correct identification is almost impossible. To compensate for this possible misdetermination of the location star and the consequent misidentification, the grid algorithm increases the number of stars to be identified. It adopts an FOV of 8° × 8° and selects stars brighter than 7.5 Mv as guide stars, giving around 13,000 guide stars in total; the average number of stars in a round FOV of radius 4° is close to 30. A large portion of stars can therefore still be identified correctly even when other measured stars are misidentified because of mistakenly selected location stars. The identification rate of the grid algorithm drops significantly when the average number of stars in the FOV is low.
② The feature pattern cannot reflect the degree of internal similarity. A grid of g = 8 is shown in Fig. 4.1; suppose its feature pattern vector is pat. According to the construction process of the grid feature pattern vector, the elements at positions (14, 19, 39, 45, 51) of pat are 1, and the elements at all other positions are 0. Affected by errors in star spot position measurement, the star on the edge of a grid cell might move from place A to place B; suppose the feature pattern vector extracted in that case is pat′. Then the elements at positions (19, 22, 39, 45, 51) of pat′ are 1, and the elements at all other positions are 0. Evidently, with this method there can be large differences between the feature vectors extracted from similar distribution features. In other words, the similarity of star distributions is not reflected in feature space.
4.2 Star Identification Utilizing Radial and Cyclic Star Patterns

To solve the problems of the grid algorithm's feature extraction, Zhang et al. [3–5] have proposed star identification algorithms using radial and cyclic star patterns. This section presents the detailed implementation of this algorithm and compares its performance with that of the grid algorithm through simulations.
To avoid the grid algorithm's problems, neighboring stars' distribution features are resolved into radial and cyclic directions (Fig. 4.2). First, the rotation-invariant radial feature is reliable enough to be applied directly to matching and identification without the determination of a location star. Second, because both radial and cyclic features are one-dimensional, similar distribution features remain similar after extraction into feature space. The star identification method using radial and cyclic star patterns proposed here is developed from this idea.
There are differences between the radial and cyclic features. The radial feature enjoys rotation invariance and is a reliable characteristic, while the cyclic feature, like the grid algorithm's pattern, needs a location star to generate its pattern. Given this distinction, a multi-step match is adopted: the radial feature of the star pattern is used for an initial match, and a follow-up match is then carried out using the cyclic feature. In the initial match, candidate guide stars are limited to a small range so that mistaken matches are decreased as much as possible; the reliable radial feature meets this demand. The cyclic feature is then used in the follow-up match to further eliminate redundant matches.
The construction process of the radial feature is as follows (see Fig. 4.3):
① With s as the primary star, determine the radial pattern radius R_r. Stars in the neighborhood of radius R_r are called neighboring stars of s, and they constitute the radial pattern vector of s.
② Along the radial direction, the neighborhood of radius R_r centered on s is divided into rings G_1, G_2, ..., G_Nq with equal intervals (Nq is the grade of subdivision).
③ Calculate the angular distance between each neighboring star t_i (i = 1, 2, ..., Ns) and s, denoted d(s, t_i); this neighboring star then falls in ring int(d(s, t_i) · Nq / R_r), where int means rounding down. The radial feature pattern vector of s is thus denoted by the set of ring indices occupied by its neighboring stars.
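The steps above can be sketched as follows (assuming stars are given as unit direction vectors; the set-of-occupied-rings representation matches the example pattern (12, 26, 31, 54, 102, 133) used later in this section):

```python
import math

def radial_pattern(primary, stars, Rr=10.0, Nq=200):
    """Radial feature of the primary star: the sorted set of ring
    indices occupied by neighbouring stars, the neighbourhood of
    radius Rr (degrees) being divided into Nq equal rings."""
    rings = set()
    for s in stars:
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(primary, s))))
        d = math.degrees(math.acos(dot))       # angular distance to primary
        if 0.0 < d < Rr:
            rings.add(int(d * Nq / Rr))        # ring index, rounded down
    return sorted(rings)
```

Because the pattern depends only on distances to the primary star, it is invariant to any rotation of the image about the boresight.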
Take three neighboring stars as an example to illustrate the construction of the cyclic feature (Fig. 4.4). The steps are as follows:
① With s as the primary star, determine the cyclic pattern radius R_c and the neighboring stars t_1, t_2, t_3.
② With s as the primary star and origin, calculate the angles between successive neighboring stars (∠t_1st_2, ∠t_2st_3, ∠t_3st_1 in Fig. 4.4).
③ Find the smallest angle (∠t_1st_2) and choose one side of this angle (st_1) as the starting side to evenly partition the neighborhood into eight sectors.
④ Form an eight-bit vector v according to the distribution of the neighboring stars over the sectors, counted counter-clockwise: if there are neighboring stars in a sector, the corresponding bit is one; otherwise it is zero. In Fig. 4.4, v = (11000100).
⑤ Shift v circularly and take the maximum value (in decimal) as the cyclic pattern of s. In Fig. 4.4, v remains unchanged after the shift, and the cyclic feature is pat_c(s) = (11000100)₂ = 196.
Under special circumstances, pat_c(s) = 0 when the number of stars in the neighborhood is 0, and pat_c(s) = 128 when the number of stars in the neighborhood is 1. Finding the smallest angle between neighboring stars during cyclic feature extraction is similar to determining the location star in the grid algorithm, so the cyclic feature is not fully reliable. However, this has little effect on the identification process because the cyclic feature is used only in the follow-up match.
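The cyclic feature can be sketched from the neighbours' position angles around the primary star (an illustrative sketch; for the configuration of Fig. 4.4 the bits 11000100 give the decimal value 196):

```python
def cyclic_pattern(angles, sectors=8):
    """Cyclic feature from the position angles (degrees, counter-
    clockwise) of the neighbouring stars around the primary star:
    anchor the sector boundaries on one side of the smallest angle
    between adjacent neighbours, build the 8-bit occupancy vector,
    then keep the maximum value over all cyclic shifts."""
    if not angles:
        return 0
    if len(angles) == 1:
        return 1 << (sectors - 1)          # single star: 10000000 = 128
    a = sorted(x % 360.0 for x in angles)
    gaps = [((a[(i + 1) % len(a)] - a[i]) % 360.0, i) for i in range(len(a))]
    start = a[min(gaps)[1]]                # first side of the smallest angle
    bits = 0
    for ang in a:
        k = int(((ang - start) % 360.0) / (360.0 / sectors))
        bits |= 1 << (sectors - 1 - k)     # sector k, counter-clockwise
    best = bits                            # maximize over cyclic shifts
    for _ in range(sectors - 1):
        bits = ((bits << 1) & ((1 << sectors) - 1)) | (bits >> (sectors - 1))
        best = max(best, bits)
    return best
```

Taking the maximum over cyclic shifts makes the final value independent of which sector the counting happens to start in.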
Star identification is conducted after the radial and cyclic features of all guide stars have been extracted with the above method. With the same method, features of the measured star image are extracted to find the guide star whose pattern is closest to that of the measured star, using a matching criterion similar to Eq. (4.2). A big problem with matching in this way, however, is that it is too time-consuming: for the screened 3360 guide stars, one match of the radial feature needs 672,000 (3360 × 200) comparison operations if the radial grade of subdivision is Nq = 200. Apparently, traversal search takes too much time. To avoid this problem, a lookup table (LT) is designed to store the guide stars' radial features, and matching speeds up significantly with this storage structure.
The LT has Nq entries, denoted LT_i (i = 1, 2, ..., Nq), corresponding to the Nq rings of the radial feature extraction. Take each guide star in the GSC as the primary star and construct its radial feature pattern vector with the method introduced above. If the star has a neighboring star within ring G_i, a new record, namely the primary star's index number, is put in LT_i. After all neighboring stars of this primary star have been processed, the corresponding records are in place, and the LT is complete once all guide stars in the GSC have been processed. Figure 4.5 shows a part of the LT (from the third to the eighth entry) with radial pattern radius R_r = 10° and subdivision grade Nq = 200. The LT's structure is very simple: each entry only includes its record number and the guide stars' index numbers in ascending order. During the construction of the LT, a guide star's index number that appears repeatedly in the same entry is eliminated.
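The LT construction can be sketched as follows (a minimal sketch; `gsc_patterns`, mapping guide-star index numbers to their radial patterns as lists of occupied ring indices, is an assumed input format):

```python
def build_lookup_table(gsc_patterns, Nq=200):
    """Build the lookup table: entry LT_i lists, in ascending order,
    the index numbers of all guide stars that have at least one
    neighbouring star in ring G_i; repeated index numbers within one
    entry are eliminated."""
    lt = [[] for _ in range(Nq)]
    for star_idx in sorted(gsc_patterns):       # ascending index order
        for ring in set(gsc_patterns[star_idx]):  # drop repeats per entry
            lt[ring].append(star_idx)
    return lt
```

Each entry thus answers directly the question "which guide stars have a neighbour in this ring?", which is what the initial match needs.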
Because the radial feature is used for initial match and the cyclic feature is used
for follow-up matches, the set of candidate stars is limited within a comparatively
small range after the initial match. Therefore, the speed of a matching search is not a
major problem anymore, and the cyclic feature can be stored directly for matching.
The structure of the navigation database is shown in Fig. 4.6.
In the multi-step match, an initial match is performed first with the radial feature so that the search range is limited to a relatively small scope. Next, screening by other features is conducted layer by layer until the correct match is finally obtained.
The identification rate of star identification using "star patterns" is related to the number of neighboring stars n_neighbor in the neighborhood determined by the pattern radius. Generally speaking, the bigger n_neighbor, the higher the identification rate: a small n_neighbor provides little information, produces too many redundant matches, and cannot guarantee correct identification. Therefore, brighter measured stars with bigger n_neighbor values are the priority choice. Defining Q = M / n_neighbor (M is the star magnitude) and ordering the measured stars by the value of Q, the a stars of smallest Q are selected successively for matching.
(1) Initial Match
The initial match is built on the structure of the LT. Take a measured star s as an example. Suppose s's radial feature is (12, 26, 31, 54, 102, 133); that is, s has neighboring stars in rings 12, 26, 31, 54, 102 and 133. Search the records in entries (12, 26, 31, 54, 102, 133) of the LT for the index numbers of guide stars (Fig. 4.7). The guide star with index number 454 appears five times; the guide stars with index numbers 2294 and 211 each appear twice; all other stars appear only once. This indicates that the guide star with index number 454 and the measured star s have five neighboring stars falling in the same corresponding rings, so this guide star is the most likely match star of s. The initial match with this method proceeds as follows:
Allocate N counters (CT_1, CT_2, ..., CT_N), one for each guide star (N is the total number of chosen guide stars). Take the measured star to be identified in the measured star image as the primary star and construct its radial feature pattern vector by the method described above. If there are neighboring stars in ring G_j, scan all records in LT_j and add 1 to the counter of each guide star index recorded there. Finally, compare (CT_1, CT_2, ..., CT_N) and choose the guide star with the highest counter value; this guide star is likely to be the match star (called the candidate star) of the measured star. The initial match is conducted for the a stars selected from the measured star image, and their match stars are found. The candidate star that measured star i corresponds to may not be unique, so the set of candidate stars is recorded as can_i. In essence, the initial match narrows the scope of match searching from the whole guide star catalog down to {can_i} (i = 1, 2, ..., a).
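The counter-based vote described above can be sketched as follows (a minimal sketch of the counting scheme; ties return all top-voted guide stars as candidates):

```python
from collections import Counter

def initial_match(measured_rings, lt):
    """Vote with one counter per guide star: each guide star recorded
    in the LT entry of an occupied ring gains one vote; the star(s)
    with the highest count are returned as candidate match star(s)."""
    votes = Counter()
    for ring in measured_rings:
        votes.update(lt[ring])
    if not votes:
        return []
    top = max(votes.values())
    return sorted(idx for idx, v in votes.items() if v == top)
```

Using a `Counter` keyed by guide-star index avoids allocating all N counters explicitly while realizing the same vote.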
(2) Follow-up Match
In theory, if there are two or more measured stars whose candidate stars are unique after the initial match, the next step goes directly to verification and identification. However, when the number of stars in the FOV is relatively small, there are a large number of redundant matches in {can_i}. The cyclic feature vector is then used for further screening: if a measured star's candidate star is not unique, the measured star's cyclic feature vector is constructed with the above-mentioned method and compared with the candidate star's cyclic feature vector stored in the cyclic pattern database. If the two vectors are the same, the candidate star is kept; otherwise, it is removed.
(3) The FOV Constraint
Under some circumstances, the candidate star obtained after screening with both radial and cyclic distribution features is still not unique, and further screening based on other constraints must be carried out. Experiments show that the correct matches of the measured stars in an image cluster in a certain area of the FOV, while the incorrect matches (wrong and redundant matches) are randomly distributed across the sphere. The FOV constraint method is based on this principle: if the number of stars in some candidate star's neighborhood of radius r is under a certain threshold value T, this candidate star is eliminated directly from the candidate set.
Parameters of the star sensor imaging used in the simulations are the same as those in Sect. 3.2.5. The simulations mainly cover the selection of the subdivision grade and radial pattern radius, the effect of star spot position noise, star magnitude noise and interfering stars on identification (compared with the grid algorithm), and the effect of the number of stars in the FOV on identification.
(1) Selection of Identification Parameters
The initial match is the most important step of the identification algorithm, and it uses two key parameters: the radial pattern radius R_r and the radial quantizing grade Nq.
Figure 4.9 demonstrates how the identification rate varies with different radial pattern radii and radial quantizing grades. Here, the standard deviation of star positional noise is 0.5 pixels and identification is conducted in 1000 random orientations on the celestial sphere. The bigger the R_r value, the higher the identification rate, but the increase in the identification rate is small when R_r > 10°. The selection of Nq is related to the noise level of the star positions, and the highest identification rate is achieved only when an appropriate Nq is chosen: when Nq is bigger, the quantizing grade is finer and the algorithm is more easily disturbed by noise and more likely to produce a wrong match, while a smaller Nq easily results in redundant matches. To enhance the algorithm's robustness to positional noise, a smaller Nq should be taken.
Figure 4.10 shows how the LT capacity varies with changes in radial pattern radius and quantizing grade. The ordinate stands for the total number of storage records in the LT. It is shown that the required storage capacity increases quickly with the increase of Rr, while Nq has little influence on the storage capacity.
Fig. 4.8 Flow chart of the star identification algorithm by using radial and cyclic star patterns
To ensure that the algorithm's identification rate is high enough while the required storage capacity is as small as possible, the initial match parameters Rr and Nq are set to 10° and 200, respectively. In addition, the cyclic pattern radius Rp = 6 is used during the construction of the cyclic feature.
(2) Effect of Star Spot Position Noise on Identification
To investigate the influence of star location error on the algorithm's identification rate, Gaussian noise with mean 0 and standard deviation σ = 0–2 pixels is added to the true star positions in star images generated through simulation. Figure 4.11 shows the
statistics of identification results of 1000 star images randomly chosen from the celestial sphere. These results are compared with those of the grid algorithm under the same experimental conditions, with pattern radius Rp = 6 and the number of grid cells g × g = 60 × 60. It is shown in Fig. 4.11 that this algorithm always outperforms the grid algorithm in identification rate as star spot position noise changes. When the standard deviation of position noise is 2 pixels, the identification rate of this algorithm is about 97%, while that of the grid algorithm drops to around 94%.
(3) Effect of Star Magnitude Noise on Identification
To investigate the effect of brightness error on identification, Gaussian noise with mean 0 and standard deviation 0–1 Mv is added to the star magnitudes in the star image
simulation. Figure 4.12 shows the identification statistics of the two algorithms after different star magnitude noises are added. Each statistic is obtained from 1000 identifications at random orientations across the celestial sphere. The two algorithms' identification rates are barely affected by increasing star magnitude noise. Because magnitude information is not used in feature extraction, both algorithms demonstrate robustness to star magnitude noise.
(4) Effect of Interfering Star on Identification
Using the same method as in Sect. 3.2.5, a simulation experiment on the effect of interfering stars is carried out. A certain number (1–2) of “artificial stars”, whose equivalent magnitude varies from 3 to 6 Mv and whose standard deviation of star
magnitude noise is 0.2 Mv, are randomly added to the measured star image. The
identification statistics are shown in Fig. 4.13. It can be seen that neither the grid algorithm nor this algorithm is very sensitive to artificial star magnitude: identification is barely affected by increasing artificial star brightness. The identification rate drops by about 4% with two artificial stars compared with one. Furthermore, this algorithm performs slightly better than the grid algorithm in resisting the influence of artificial stars.
In the same way, a certain number (1–2) of measured stars in the image are
randomly deleted to investigate the influence of “missing stars” on identification
rate. The magnitude of deleted measured stars varies from 3 to 6 Mv. Figure 4.14
shows the statistical result. It is shown that missing star magnitude, like the artificial
star, has little effect on identification. Compared with that in normal conditions (no
missing star, no noise disturbance), the identification rate drops by 2% with one
missing star and by 6% with two missing stars. Moreover, with the interference of
the missing star, the identification rate of this algorithm is slightly higher than that
of the grid algorithm. The main reason why the missing star results in lower
identification rate is that the number of neighboring stars in the neighborhood is too
small while the number of redundant matches is too large.
(5) Effect of the Number of Measured Stars in the FOV on Identification Rate
Generally speaking, the more stars in the FOV, the more likely that identification is
successful. Figure 4.15 shows how the identification rate changes as the number of measured stars in the FOV increases, when the standard deviation of star location noise is 0.5 pixel and 1000 star images are randomly selected in the
celestial sphere. It is shown in Fig. 4.15 that an identification rate of 100% is
ensured when the number of stars in the FOV is over 10, while the rate drops with
less than ten stars in the FOV. It is almost impossible to obtain a correct identifi-
cation when the number of stars in the FOV is lower than 5. In fact, through
statistics, the probability that there are over ten stars on average in the FOV is
96.62%.
The average number of measured stars in the FOV is an important parameter in
star identification carried out by using “star patterns”. This kind of algorithm is
outstanding in performance when there are enough stars in the FOV. When stars in
the FOV are sparse, the identification is difficult because not enough information
can be provided to exclude redundant match(es).
(6) Identification Time and Storage Capacity
When 1000 times of identification are randomly conducted on Pentium 800 MHz
PC with this algorithm, the average identification time is 11.2 ms. In the same
situation, the average identification time of the grid algorithm is 10.5 ms.
In this algorithm, the index number of each guide star in the LT occupies 2 bytes; it follows that the LT needs a storage space of about 192 KB altogether. Adding the roughly 3.36 KB that the cyclic patterns need, the star identification algorithm using radial and cyclic star patterns requires a storage space of about 196 KB altogether.
In comparison, there are 73,723 records in the grid algorithm, which requires a
storage space of about 144 KB.
The grid algorithm is slightly better than the present algorithm in identification time and storage space, mainly because star identification using radial and cyclic star patterns is more complicated in feature extraction, storage, and identification.
The differences between star identification and general image recognition are
mainly twofold:
① The only feature information that can be used in star identification is star spot’s
position coordinates and brightness information which is not quite accurate. In
some sense, star identification can be regarded as the identification of
two-dimensional discrete points.
② Star identification is not completely the identification of two-dimensional dis-
crete points. It also relies on star sensor’s imaging model and parameters of the
imaging system. Despite such differences between star identification and general image recognition, some methods in image recognition can be referred to and applied in star identification. Rotation-invariant feature extraction is the priority task in star identification using “star patterns”; much research on it has been conducted in image recognition, and some established methods have been put forward.
Log-Polar transformation (LPT) is a common method used for rotation-invariant
feature extraction in image recognition. Zhang et al. [3, 6, 7] introduced this method
into star identification. Through transformation, a feature pattern expressed in coded
strings is generated for each star. Finally, approximate string matches are employed
to identify feature patterns. This section introduces the principles and implemen-
tation of the star identification algorithm by using LPT in detail and evaluates its
performance through simulations.
Schwartz [8] proposed that Log-Polar mapping exists between the human retina and the visual cortex and plays an important role in identifying targets in a scale-, shift- and rotation-invariant way. LPT is a kind of transformation from
Fig. 4.16 Log-polar transformation. a Binary image, b Image after log-polar transformation
Cartesian coordinates to polar coordinates. Through mapping, the scale, shift and
rotation of the target turn into one dimensional changes, greatly simplifying the
problem. LPT is widely used in many areas such as moving target identification and
character identification [9–11].
Denote the binary image in the Cartesian coordinate system and its LPT result image in the polar coordinate system as f(x, y) and f′(r, θ), respectively. LPT (Fig. 4.16) can be defined as

r = ln √(x² + y²)

θ = tan⁻¹(y/x)        if x > 0, y > 0
θ = π + tan⁻¹(y/x)    if x < 0                              (4.4)
θ = 2π + tan⁻¹(y/x)   if x > 0, y < 0
Through LPT, rotation of the original image turns into a circular shift in the θ coordinate, and scaling of the original image turns into a shift in the r coordinate of the polar system. Therefore, LPT is usually used to extract rotation- and scale-invariant features in image matching. Both images, before and after rotation, are transformed by LPT as shown in Fig. 4.17.
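For discrete star spots, Eq. (4.4) only needs to be evaluated at each spot's coordinates. A minimal sketch follows; atan2 is used in place of the quadrant-wise arctan of Eq. (4.4) and maps θ into [0, 2π), which is equivalent.

```python
import math

def lpt_point(x, y):
    """Log-polar transform of one star spot: r = ln(sqrt(x^2 + y^2)),
    theta in [0, 2*pi). The origin (the star itself) is excluded."""
    r = math.log(math.hypot(x, y))
    theta = math.atan2(y, x) % (2 * math.pi)
    return r, theta
```

Rotating a point about the origin leaves r unchanged and shifts θ, which is exactly the invariance the text exploits.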
A measured star image can be viewed as the rotation of a star image in a certain part
of the celestial sphere, and star identification, in some sense, is equivalent to image
matching and identification by using rotation-invariant features. If the rotated
measured star image coincides with the image of a certain part of the celestial
sphere, then they are considered matched. So the LPT can be employed to extract the rotation-invariant features in the star image.

Fig. 4.17 Images’ LPT results before and after rotation. a The original image and its LPT results. b The image after rotation and its LPT results

Different from the LPT in general
image matching, the LPT in star identification only needs to transform discrete
points (star spots) instead of a whole image in general cases. Additionally, in star
identification, centroid position coordinates of the star spot to be identified are
chosen as the coordinate origin for LPT, while in general image matching, centroids
of the target to be identified are chosen as the coordinate origin.
Figure 4.18 is the illustration of star image’s LPT. Figure 4.18a is a star image
composed of stars in the neighborhood of the guide star s in the GSC and the
image’s LPT result (s is the origin of coordinates). Figure 4.18b is the LPT result of
a measured star image, with the measured star t as the origin of coordinates. If the
measured star corresponds to the guide star, the measured star image should
coincide, after rotating by a certain angle around star t, with the star image com-
posed of guide stars in Fig. 4.18a. The shift in θ coordinate in the image after LPT
is circular.
The result obtained after LPT centered on a guide star (or measured star) serves as the feature pattern of that guide star (or measured star). Taking a guide star as an example, the transformation process is as follows:
Fig. 4.18 Star image’s LPT. a The star image of a certain celestial area in GSC and its LPT
results. b A measured star image and its LPT results
① Take the direction vector of guide star s as the direction of star sensor’s
boresight, and project guide stars in the neighborhood of s with radius R (called
neighboring stars of s, such as stars of number 1–6 in Fig. 4.18a), to the
imaging plane (c.f. “Star Image Simulation” in Sect. 2.3). The star image
obtained in this way is the original image. Apparently, guide star s is projected
to the origin of the original image.
② Conduct LPT of the neighboring stars of s according to Eq. (4.4). Using a
similar method, the measured star in the measured star image is transformed by
LPT. Denoting the scale of the original image as M × N, and that of the image after LPT as m × n, the resolutions in the θ and r directions are 360°/m and R/n, respectively. Because the number of stars is much lower than m × n, the binary image after LPT can be expressed as an m × n sparse matrix A:

A(i, j) = 1 if there is at least one star at (i, j), and A(i, j) = 0 otherwise, for i = 1, …, m; j = 1, …, n.
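The construction of A can be sketched as follows. Linear quantization of r with resolution R/n is assumed here, matching the stated resolutions; the exact cell-indexing convention is an assumption.

```python
import math

def lpt_matrix(points, R, m, n):
    """Build the m x n binary matrix A: A[i][j] = 1 iff at least one
    neighboring star falls into cell (i, j), with theta resolution
    360/m degrees and r resolution R/n."""
    A = [[0] * n for _ in range(m)]
    for x, y in points:
        r = math.hypot(x, y)
        if 0.0 < r <= R:
            theta = math.atan2(y, x) % (2 * math.pi)
            i = min(int(theta / (2 * math.pi) * m), m - 1)
            j = min(int(r / R * n), n - 1)
            A[i][j] = 1
    return A
```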
Here, cs(lpt(s), v) means circularly shifting lpt(s) (to the left or right) by v bits, and same is defined as the number of matched nonzero bits in the two vectors. The bigger the same value, the more matched nonzero bits there are and the more similar the two vectors. A bigger value of sim(lpt(s), lpt(t)) indicates higher similarity between lpt(t) and lpt(s). For example, for two feature vectors with m = 20:

lpt(s) = (0 23 0 0 54 0 10 0 0 0 21 0 0 0 0 0 0 19 0 0)
lpt(t) = (10 0 46 0 21 0 0 12 0 0 0 19 0 0 0 20 0 0 54 0)
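The similarity measure can be sketched directly. For the two example vectors, the best circular shift aligns four of the five nonzero bits of lpt(s) with lpt(t) (the value 23 has no counterpart in lpt(t)), so sim evaluates to 4. The function names follow the text; the search over all shifts is an assumed implementation detail.

```python
def same(u, w):
    """Number of positions where both vectors hold the same nonzero value."""
    return sum(1 for a, b in zip(u, w) if a != 0 and a == b)

def sim(u, w):
    """Maximum 'same' value over all circular shifts cs(u, v) of u."""
    return max(same(u[v:] + u[:v], w) for v in range(len(u)))

lpt_s = [0, 23, 0, 0, 54, 0, 10, 0, 0, 0, 21, 0, 0, 0, 0, 0, 0, 19, 0, 0]
lpt_t = [10, 0, 46, 0, 21, 0, 0, 12, 0, 0, 0, 19, 0, 0, 0, 20, 0, 0, 54, 0]
```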
By denoting stars' feature patterns as strings, storage space is saved and the search is faster. The principles of coding are as follows:
① Circularly shift, to the left, all zero bits before the first nonzero bit in lpt(s) to the tail of the vector, so that a nonzero bit becomes the first bit of the 1 × m vector.
② Recode to obtain str(s). Odd-numbered bits are the nonzero bits of lpt(s), and even-numbered bits are the numbers of zeros between two adjacent nonzero bits in lpt(s). Each character in the coded string is expressed by one byte.
Below, lpt(s) is the pattern vector (m = 100) of guide star number 5 in the GSC after LPT, and str(s) is its coded string.

lpt(s) = 0 0 0 0 0 0 0 0 0 0 0 0 0 35 0 0 0 0 0 0 53 0 0 0 0 0 0 0 0 0 44 0 0 0 0 0 52 51 0 0 0 0 0 0 0 54 48 0 0 0 0 0 0 0 0 0 0 31 0 49 0 0 0 0 0 0 0 0 0 0 0 0 0 0 53 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 29 0 0 0 0 0 0 0 0

str(s) = 35 6 53 9 44 5 52 0 51 7 54 0 48 10 31 1 49 14 53 16 29 21
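The two coding steps can be sketched as follows; running it on the lpt(s) vector above reproduces str(s).

```python
def encode_pattern(lpt):
    """Rotate leading zeros to the tail, then emit (nonzero value,
    zero-gap to next nonzero value) pairs; the last gap counts the
    trailing zeros, including the rotated ones."""
    first = next(i for i, v in enumerate(lpt) if v != 0)
    rotated = lpt[first:] + lpt[:first]
    nz = [i for i, v in enumerate(rotated) if v != 0]
    out = []
    for k, i in enumerate(nz):
        nxt = nz[k + 1] if k + 1 < len(nz) else len(rotated)
        out.extend([rotated[i], nxt - i - 1])
    return out
```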
Given that rotation is transformed into circular shift through LPT, to facilitate pattern matching the coded string of each guide star is extended to twice its original length through circular shift, while the coded string of the measured star remains unchanged. The string above is extended to

str′(s) = 35 6 53 9 44 5 52 0 51 7 54 0 48 10 31 1 49 14 53 16 29 21 35 6 53 9 44 5 52 0 51 7 54 0 48 10 31 1 49 14 53 16 29 21
certain character fails, the KMP algorithm does not simply backtrack in the text but makes full use of the preceding comparison information. In the KMP algorithm, a KMP flow chart, used to scan the text, is constructed for each pattern str1…m. Every node in a KMP flow chart includes only two arrows: one is the success link, followed when the anticipated character is read from the text, while the other is the failure link. The key to the KMP algorithm is constructing the failure links. Figure 4.19 is the KMP flow chart of the pattern string str1…m = “ABABCB”.
The KMP algorithm reduces the complexity of string matching significantly: only m + k comparisons are needed in the worst case.
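The failure links of the KMP flow chart correspond to the standard prefix-function table. A sketch for the pattern “ABABCB” of Fig. 4.19 follows; the tabular form is equivalent to the flow-chart form described in the text.

```python
def failure_links(pattern):
    """Standard KMP prefix-function: fail[i] is the length of the longest
    proper prefix of pattern[:i+1] that is also its suffix; it determines
    where the failure link of node i points."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail
```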
The string match algorithm used in star identification is different from general
string match algorithms, so the KMP algorithm cannot be directly used for iden-
tification. The differences are mainly in the following aspects:
① The character sets used by the two algorithms are different, and so are the meanings of the characters. In general string matching every bit has an equivalent meaning, while in star identification the odd-numbered and even-numbered bits of a star pattern string mean different things: odd-numbered bits stand for neighboring stars' coordinate values on the r axis after LPT, and even-numbered bits stand for intervals on the θ axis. For star identification, string matching is actually the matching of the odd-numbered bits. In Fig. 4.20, assuming that the strings of the measured star and the guide star match starting from (a1, b2), the matching of a_{2i+1} and b_{2i+1} must simultaneously satisfy

a_{2i+1} = b_{2i+1}

and
quarter, at most a half, of the capacity of its corresponding guide star's pattern when it is on the edge of the FOV. Together with star spot position error and the effect of interfering stars, the measured star's pattern string cannot completely correspond to its matched guide star's pattern string. Secondly, due to star spot position error, the principle of exact string matching cannot be applied to define a character match. Therefore, to enhance the robustness of string matching, Eq. (4.7) is redefined as

|a_{2i+1} - b_{2i-1}| ≤ 1

and
It is obvious that Eq. (4.8) is much looser than Eq. (4.7) in defining “match”.
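A sketch of the tolerant comparison of odd-numbered (radial) characters under the rule of Eq. (4.8) follows. The book additionally constrains the even-numbered gap characters, which is omitted here, and the function name is illustrative.

```python
def approx_match_counts(measured, guide, tol=1):
    """Compare the odd-numbered characters (radial values) of two pattern
    strings under |a - b| <= tol; return (n_match, n_mismatch)."""
    n_match = n_mismatch = 0
    for a, b in zip(measured[::2], guide[::2]):  # odd-numbered bits (1-based)
        if abs(a - b) <= tol:
            n_match += 1
        else:
            n_mismatch += 1
    return n_match, n_mismatch
```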
Approximate string match is a kind of fault-tolerant match which is widely used
in many areas such as intelligent information retrieval, DNA fragments analysis,
and so on. Quite a few algorithms for approximate string matching are now available, for example the agrep algorithm [13] proposed by Wu, which memorizes already-matched strings with a kind of flexible bit coding and is very fast (just a few seconds to search texts of several megabytes on a Sun workstation).
Du [14] has conducted a detailed study on approximate string recognition.
Based on the KMP algorithm, an approximate string matching algorithm suitable for recognizing star pattern strings is introduced here. It follows the character matching principle defined in Eq. (4.8). General approximate string matching algorithms must deal with operations such as character deletion, insertion and substitution, but in star identification a comparatively simpler method is adopted. In the guide star pattern string, let the numbers of characters (at odd-numbered positions) matched and mismatched with the measured star pattern be n_match and n_mismatch, respectively. These two values are updated constantly during matching, and the process of identification and search is controlled by keeping track of them. The string matching follows the principles below:
far higher than the identification rate of a measured star on the edge of the FOV. Therefore, a measured star with a bigger n_neighbor is the priority choice. Meanwhile, a brighter star is easier to capture and more reliable. Thus, defining Q = M/n_neighbor for each measured star (M for star magnitude), the measured stars are ordered by the value of Q, and the star with the smallest Q is selected first for matching.
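The priority ordering can be sketched as follows, with measured stars given as (id, magnitude, number of neighbors) tuples; this data layout is a hypothetical choice for illustration.

```python
def order_for_matching(stars):
    """Sort measured stars by Q = M / n_neighbor, ascending: bright stars
    with many neighbors are tried first."""
    return sorted(stars, key=lambda s: s[1] / s[2])
```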
Using a method similar to the star identification algorithms above, verification is introduced into the identification. If two measured stars obtain correct identification (not the final correct identification, but the “correct identification” in the string matching described above), these two stars and their matched guide stars can be used to verify the validity of the identification. If verified, the identification succeeds; otherwise, other measured stars of lower priority are selected successively for identification.
Figure 4.22 is the flow chart of the star identification algorithm by using LPT.
Fig. 4.22 Flow chart of the star identification algorithm by using LPT
In simulation, star sensor’s imaging parameters are the same as the parameters used
in the simulation in Sect. 3.2.5. The simulations mainly include the selection of
radius R, and effects of star spot position noise, star magnitude noise and interfering
stars on identification.
(1) Selection of Identification Parameters
Determining the range of the neighborhood used to construct the feature pattern is a problem that any algorithm using star patterns must solve. If the selected pattern radius R is too small, the information is incomplete and a unique feature cannot be constructed for each star. If R is too big, there will be relatively great differences between the pattern of a measured star and the pattern of its corresponding guide star. Especially for a star close to the edge of the FOV, the pattern obtained through LPT may be just a small part of its corresponding guide star's pattern.
Different values of R are selected and identification experiments conducted respectively. In the experiments, R varies from 3° to 10°, the LPT parameters m and n are 100 and 60, respectively, and no noise is added. Figure 4.23 presents the statistical identification result of 1000 star images randomly selected in the celestial sphere. According to the statistics, when R is comparatively small the identification rate is very low; the rate goes up with the increase of R, but after R > 6° it declines. Therefore, 6° is considered a reasonable value for the pattern radius R.
(2) Effect of Star Spot Position Noise on Identification
To investigate the algorithm's robustness to star location error, Gaussian noise with mean 0 and standard deviation σ = 0–2 pixels is added to the true star positions in star images generated through simulation. Figure 4.24 shows the statistics of identification results of 1000 star images randomly selected from the celestial sphere. According to the statistics, this algorithm demonstrates robustness to position noise, and the identification rate is still 98% or higher when σ = 2 pixels.
(3) Effect of Star Magnitude Noise on Identification
Gaussian noise with mean 0 and standard deviation 0–1 Mv is added to the star magnitudes in the star image simulation. Figure 4.25 shows the identification statistics after different star magnitude noises are added, each obtained from 1000 identifications at random orientations across the celestial sphere. The statistics show that star magnitude noise has a small impact on the identification rate, which still reaches around 99% when the standard deviation of the noise is 1 Mv. Thus the effect of brightness error on the identification rate is negligible.
searching of strings, and the run time of this algorithm increases significantly when there are many measured stars in the FOV.
In LPT, the average length of each pattern string is about 28, so the total storage requirement for 3360 guide stars is about 94 KB if one character occupies 1 byte. Thus the star identification algorithm using LPT requires very little storage space.
between star spots in the image. That means, accurate intrinsic parameters are not
required in identification. This section introduces the implementation of this
algorithm and evaluates its performance through simulation experiments.
Many star identification algorithms need exact parameters of the optical system in advance, such as the focal length, the position of the principal point and even the optical system's distortion coefficients. These parameters can be estimated by ground calibration in the lab. However, during launch the star sensor is inevitably shocked by external forces, which results in tiny changes in the relative position of the image plane and the lens's optical axis. When the star sensor is used in orbit, it is affected by variations of the space environment, and the optical system's parameters change accordingly. The original parameters obtained from calibration are then no longer accurate and often cause identification errors. Figure 4.28 shows a simulation in which star sensors generate star images with the same attitude but different focal lengths. It can be seen that star spot positions change significantly with different parameters: “+” marks the star spot positions when the focal length is 76.012 mm, and “*” marks the star spot positions when the focal length is 80.047 mm.
With the other parameters remaining the same, when there is a principal point error ep or a focal length error ef, the errors of the optical axis's angular distance and of the star spot's direction vector at each position in the image plane are calculated as shown in Fig. 4.29. It can be seen that the principal point error causes a relatively constant error in angular distance, the distribution of which over the whole star image is similar. When there are errors in the focal length, the farther the star spot is from the principal point in the image plane, the bigger the error in its direction vector. Due to these errors, wrong feature patterns may be generated and thus the star identification fails.

Fig. 4.29 Errors of optical axis’s angular distance and direction vector of star spot due to changes of principal point and focal length
In star identification with the grid algorithm, the grid cell in which each measured star lies must be judged. When feature patterns are generated, a measured star may fall into the wrong grid cell (as shown in Fig. 4.30) if the star sensor's focal length calibration is inexact or has changed. If so, the star patterns obtained from the measured star image cannot match the star patterns stored in the guide database correctly, making identification difficult.
It can be seen from the analysis above that identification with traditional algorithms is affected by changes of the optical parameters (the position of the principal point and the focal length). In view of this problem, the concept of scaled distance is introduced. This distance is related only to the relative positions of star spots in the image plane; because the optical parameters are not used in the algorithm, the scaled distance remains unchanged no matter how they change, and the stability of star identification is thus ensured.
There are several situations when the star sensor captures star images. If the
position of the principal point changes, the star spot’s position coordinate relative to
the principal point’s position will shift in the image plane. If the optical system’s
focal length changes, the measured image will zoom in or zoom out proportion-
ately. If the star sensor with different attitudes photographs the same celestial area,
the star spot’s relative position will rotate in the image plane. Therefore, it must be
ensured that patterns remain unchanged in the situations above when the identifi-
cation algorithm without calibration parameters is used to extract star patterns.
Star identification manifests as the matching of two-dimensional discrete points,
so each star spot rather than each pixel in the whole image needs to be transformed.
The process of star pattern extraction is as follows:
① Take each guide star as the primary star, search for adjacent guide stars in a certain neighborhood and calculate the distances between these neighboring stars and the primary star. This distance is not the angular distance defined in most identification algorithms, but the straight-line distance R = √(x² + y²) between two points in the image plane.
② Find out the star closest to each primary star and take it as the location star.
Rotate and position the star image. To avoid the influence of binary stars and
improve accuracy of rotation and positioning, the guide star at a certain distance
R0 is often selected as the location star.
③ Transform the star image. Calculate the straight-line distance Rb between the location star and the primary star in the image plane and take this distance as the standard. The ratio Rri = Ri/Rb of the distance Ri between any other neighboring star and the primary star to the standard distance Rb is defined as the distance between that neighboring star and the primary star, called the scaled distance. Starting from the location star, the counter-clockwise direction around the primary star is taken as positive. The angle between each neighboring star and the location star is calculated successively and taken as that neighboring star's angle coordinate. Through the transformation above, the original star image is expressed in the θ–R coordinate system.
④ Construct the feature vector of the primary star. The θ–R coordinate system is equally divided into M sectors in the scaled-distance direction and N sectors in the angle direction. The transformed star image is thus divided into cells, and an M × N pattern vector pat(S) is constructed: if there are stars in a certain cell, the corresponding value of pat(S) is 1; otherwise it is 0. The feature vector quantifies the distribution of the primary star's neighboring stars into a vector of 0s and 1s.
Here, when there are stars in the cell corresponding to bi , the value of bi is 1,
otherwise, it’s 0.
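Steps ①–④ can be sketched as follows. The admissible scaled-distance range max_ratio and the cell-indexing convention are assumptions made for illustration; the invariance to zooming and rotation, however, follows directly from the construction.

```python
import math

def pattern_vector(primary, location, neighbors, M=30, N=80, max_ratio=3.0):
    """Scaled-distance feature: R_ri = R_i / R_b (zoom invariant) and the
    angle measured counter-clockwise from the location star (rotation
    invariant), quantized into an M x N binary pattern vector."""
    px, py = primary
    lx, ly = location
    Rb = math.hypot(lx - px, ly - py)        # standard distance
    theta0 = math.atan2(ly - py, lx - px)    # location star sets zero angle
    pat = [0] * (M * N)
    for nx, ny in neighbors:
        Ri = math.hypot(nx - px, ny - py)
        Rri = Ri / Rb                        # scaled distance
        if Rri <= 0 or Rri > max_ratio:
            continue                         # outside the admissible range
        ang = (math.atan2(ny - py, nx - px) - theta0) % (2 * math.pi)
        i = min(int(Rri / max_ratio * M), M - 1)
        j = min(int(ang / (2 * math.pi) * N), N - 1)
        pat[i * N + j] = 1
    return pat
```

Zooming all coordinates or rotating the whole image leaves the vector unchanged, which is the invariance the algorithm relies on.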
Figure 4.31 shows the process of constructing feature vectors. The obvious distinction between the above feature construction method and the grid algorithm lies in the fact that the grid algorithm directly uses star spot coordinates to construct features, and these features change after the image zooms in or out. By comparison, the scaled distance used to construct features in the calibration-free star identification algorithm is independent of the imaging system parameters. During feature construction, the scaled distance between the closest neighboring star and the primary star is 1. Because the scaled distance is a relative value, it does not change when the image zooms in or out, so it is an invariant. The angle is determined by the location star, so it is unrelated to the attitude at which images are captured and is rotation invariant. In addition, the positional relationships between star spots, rather than unreliable information like brightness, are used when feature vectors are constructed. Therefore, vectors constructed with the above method enjoy great stability.
To identify a star, the one and only distinctive feature of the star needs to be
extracted. Generally, features of stars in the guide star pattern database can be
pre-computed and stored in the star sensor’s memory. Then these data are directly
read out during identification. When the guide star pattern database is constructed, all stars in the GSC need to be traversed, and all of their features are calculated and stored in order.
Every star image is divided into M × N grid cells. To save storage space, a
storage sequence is set for each grid cell, so there are M × N sequences in all. Each
grid cell stands for a position in the guide star’s neighborhood. If some guide star
has neighboring stars in this position, the index number of this guide star is recorded into the grid cell's sequence and the counter of this sequence is incremented by 1. Figure 4.32 shows the sequence's storage format. Here, Number stands for the number of stars recorded in this sequence, and Star Index for the index numbers of the guide stars recorded in this sequence, indicating that these guide stars fall, as neighboring stars, into the grid cell represented by this sequence. Data in Fig. 4.32 is a
part of the pattern database when M = 30 and N = 80. The pattern database’s
structure is simple, each entry of which only includes the number of stars recorded
in the sequence and the guide stars’ index numbers in ascending order. During the
construction of the pattern database, the guide star’s index number, which appears
repeatedly in the same entry, should be eliminated.
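Construction of the pattern database can be sketched as an inverted index from grid cells to guide-star index numbers; the entry layout below is an assumed simplification of the storage format of Fig. 4.32.

```python
def build_pattern_database(guide_patterns, M=30, N=80):
    """guide_patterns maps each guide star index to the list of cell
    indices occupied by its neighboring stars. Each database entry holds
    the guide star indices in ascending order, duplicates removed."""
    db = [[] for _ in range(M * N)]
    for star_idx, cells in guide_patterns.items():
        for cell in set(cells):   # eliminate repeated index numbers per entry
            db[cell].append(star_idx)
    for entry in db:
        entry.sort()              # index numbers in ascending order
    return db
```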
quickly divided into groups on the basis of FOV constraint. Finally the correct
match is obtained.
(1) Initial Match
Take each measured star in the measured star image as the primary star, and calculate the radial distance Ri between this primary star and its neighboring stars from r = √(x² + y²). Also record the closest neighboring star's index number and the distance Rb between it and the primary star; there must be a certain distance between this closest neighboring star and the primary star. Calculate the scaled distances Rri = Ri/Rb between the other neighboring stars and the primary star. The
scaled distance must be within a certain range, and neighboring stars beyond this
range are not considered. The primary star’s feature vector is constructed according
to the neighboring stars’ distribution. Most neighboring stars of the measured stars
near the center of the image can appear in the image and the measured stars’
patterns are comparatively complete. But for stars close to the edge of the image,
their patterns may be missing. Figure 4.33 shows the pattern constructed for a
certain measured star.
In the initial match, the screening range covers all guide stars in the GSC since
there is no prior information. Assume that there are Ns guide stars in all; assign a
counter CT1, CT2, …, CTNs to each guide star, with all initial values set to 0. If a
certain primary star has neighboring stars in the i-th cell, read out the data in the
i-th entry of the guide pattern database and add 1 to the counter of each guide star
recorded in that entry. The rest of the neighboring stars are dealt with
similarly. Find out the maximum value of these counters after all the neighbor stars
of this primary star are scanned. The guide star corresponding to this maximum
value is the match for the primary star. Due to errors of star spot position in
imaging, or pattern missing of stars close to the edge, the pattern of the measured
star cannot be completely identical with that of the guide star, so the guide star
corresponding to the maximum value is not unique, and there are often several
guide stars matching the measured star. The initial match aims to narrow down the
searching range and the exact match should be conducted as follows.
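The counter-based voting described above can be sketched in a few lines of Python. This is an illustrative sketch (function and variable names are mine, not the book's): `primary_cells` holds the grid cells occupied by the measured primary star's neighbors, and `pattern_db` is the cell-indexed database described earlier:

```python
def initial_match(primary_cells, pattern_db, num_guide_stars):
    """Vote for guide stars whose stored patterns share grid cells with the
    measured primary star's pattern. Returns all guide stars tied at the
    maximum vote count (the match is often not unique at this stage)."""
    counters = [0] * num_guide_stars
    for c in primary_cells:
        for star_idx in pattern_db[c]:
            counters[star_idx] += 1       # one vote per shared cell
    best = max(counters)
    return [i for i, v in enumerate(counters) if v == best and best > 0]
```

Returning every star that ties at the maximum reflects the point made above: position noise and edge effects mean several guide stars may share the top vote count, so initial matching only narrows the search.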
(2) Fast Grouping
Generally, after the result of an initial match is obtained, the angular distance
between each two stars waiting for selection is calculated to judge the relationship
between them and get the final unique result. With the initial screening, the grid
algorithm adopts an FOV constraint to judge which stars are in the same celestial
area. If the angular distances between a star waiting for selection and most other
stars are beyond the FOV, this star is considered redundant and can be eliminated
from the waiting list. Assume that the stars screened initially are randomly dis-
tributed in the celestial area, then the part of celestial area where most screened
guide stars are located is the area to be measured by the star sensor. This algorithm
needs to compute the angular distances between every pair of stars waiting for
selection. So if there are many stars on the waiting list, a great amount of
calculation and much more identification time may be required. If there are n stars
in the star image, and the i-th of them has m_i stars waiting for selection, then the
number of angular distance calculations is Σ_{1≤i<j≤n} m_i·m_j.
To speed up grouping, a new method for fast grouping is adopted. Assign a
counter to each sub-block, with all initial values set to 0. Determine the sub-block
in which each star waiting for selection is located and add 1 to the corresponding
counter. Then divide these stars into groups. The value of each counter stands
for the number of stars waiting for selection in this sub-block. The stars waiting for
selection can be considered as random dots distributed with equal probability. All
these stars are distributed randomly in the celestial sphere and the part with the
highest concentration of random dots is the area to be measured in the FOV. The
sub-block that has the biggest counter value stands for the area where the random
dots concentrate. In general, the number of stars waiting for selection is the same as
that of the measured stars in this sub-block and eight adjacent sub-blocks. To ensure
correct identification, the screened stars waiting for selection are examined by FOV
constraint. In this way, the number of angular distance calculations is no more than
C_n^2. Assume that there are ten measured stars in the image and each star has five
stars waiting for selection, that is, n = 10 and m_1 = m_2 = … = m_n = 5. The
calculation amount required by the fast grouping method is then 4% of that of the
original method, since

C_n^2 / Σ_{1≤i<j≤n} m_i·m_j = C_10^2 / (C_10^2 × 5 × 5) = 4%.
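The sub-block counting step can be sketched as follows. This is an illustrative sketch under stated assumptions: `block_of` maps each candidate star to its celestial sub-block, and `adjacency` maps a sub-block to itself plus its eight neighbors (both would come from the sub-block partition of the celestial sphere, which is not implemented here):

```python
def fast_group(candidates, num_blocks, block_of, adjacency):
    """Keep only the candidate guide stars located in or around the densest
    sub-block, which marks the celestial area observed by the star sensor."""
    counters = [0] * num_blocks
    for s in candidates:
        counters[block_of[s]] += 1        # count candidates per sub-block
    densest = counters.index(max(counters))
    keep = set(adjacency[densest])        # densest block + its neighbors
    return [s for s in candidates if block_of[s] in keep]
```

Only the surviving candidates then need pairwise angular-distance checks, which is where the reduction to at most C_n^2 calculations comes from.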
Figure 4.34 is the flow chart of star identification without calibration parameters.
144 4 Star Identification Utilizing Star Patterns
Imaging parameters of the star sensor in simulations are as follows: the size of the
FOV is 10.8° × 10.8°, the focal length of optical system is 80.047 mm, the pixel
size is 0.015 mm × 0.015 mm, and the pixel resolution is 1024 × 1024. Select
stars brighter than 6 Mv from the SAO J2000 basic star catalog to construct a GSC
and generate a corresponding pattern vector database. Simulations are conducted on
an Intel Pentium4 2.0 GHz computer. The simulations mainly include a selection of
radial scaled distance radius, and the effect of calibration parameters error, star spot
position noise, star magnitude noise and the number of stars in the FOV on
identification.
4.4 Star Identification Without Calibration Parameters 145
optimum division grade of the cyclic angle. Division of the cyclic angle actually
means dividing a circle into several equal sectors, since the range of the cyclic
angle is 360°. Suppose N = 50, 60, 70, 80, 90, and 100, and investigate the
identification under each grade.
Figure 4.36 shows the identification rates with different division grades. It shows
that a comparatively high identification rate can be achieved when the division
grade of the cyclic angle is 80.
It can be seen, from the analysis above, that the algorithm can achieve the
highest identification rate when the radial scaled distance radius is 10, the division
grades of radial scaled distance and cyclic angle are 30 and 80, respectively. When
the radius of radial scaled distance changes, the corresponding division grade of
scaled distance should be adjusted according to this scale while the division grade
of cyclic angle remains the same. Only in this way can the algorithm of this radius
realize the highest identification rate.
It is shown in Fig. 4.37 that the identification rate is low when the radius of
radial scaled distance is relatively small, and it goes up with the increase of radial
scaled distance radius. However, the identification rate changes gently when the
radius is larger than 12, which indicates that an unlimited increase of radial scaled
distance radius will not lead to the increase in identification rate. Instead, it will
require a larger capacity of pattern database and thus bring more pressure on the
storage capacity. After every factor is taken into consideration, it is concluded that
the algorithm can achieve the highest identification rate when the radial scaled
distance radius is 12, M = 14/12 × 30 = 35, and N = 80.
(2) Effect of Focal Length Calibration Error on Identification Rate
Assume that the error of the lens focal length varies from −1 to 1 mm; 1000 star
images are randomly generated under each of these error grades. Identify these
images with the grid algorithm and this algorithm respectively, and then record the
identification results. Figure 4.38 shows that the identification rate of the grid
algorithm changes significantly. The algorithm almost fails when the error is
distances of other measured stars. Therefore, the identification rate of this algorithm
is lower than that of the grid algorithm. According to practical experience, error in
star spot centroiding is generally smaller than 0.5 pixels, so the identification rate of
the algorithm without calibration parameters is at least 92% in practical use, even if
there exist errors in star spot positions.
(5) Effect of Star Magnitude Noise on Identification Rate
To investigate the performance of the identification algorithm under the influence of
star magnitude noise, noise with mean 0 and variance ranging from 0 to 1 magnitude
is added to the generated star images. The two algorithms are used respectively for
identification after different levels of star magnitude noise are added. Figure 4.41
shows the identification statistics. Each statistic is obtained from 1000
identifications of star images generated randomly across the whole celestial sphere.
The two algorithms' identification rates are barely affected by the increase in star
magnitude noise: because brightness information is not used in feature extraction,
both algorithms demonstrate strong resistance to star magnitude noise.
(6) Effect of the Number of Stars in the FOV on Identification Rate
Similar to the grid algorithm, star identification without calibration parameters takes
distribution of stars surrounding a certain star in the image as this star’s pattern.
Therefore, the number of measured stars in the image should meet a certain
demand. In the design of the star sensor, there should be enough measured stars in
the FOV at each time of image capturing so that the identification rate of the star
pattern-based algorithm can be ensured. Figure 4.42 indicates that neither the grid
algorithm nor this algorithm performs well when the number of stars in the FOV is
smaller than 6. An identification rate lower than 60% cannot fully meet the demands
of practical use. However, the identification rates of these two
algorithms can reach 95% or higher, when there are over 8 measured stars in the
FOV.
Table 4.3 Comparison of identification time and storage capacity between the star
identification algorithm without calibration parameters and the grid algorithm

Performance                          Star identification without      Grid algorithm
                                     calibration parameters
Average identification time (ms)     7.3                              10.2
Storage capacity (KB)                372                              362
References
4. Zhang G, Wei X, Jiang J (2008) Full-sky autonomous star identification based on radial and
cyclic features of star pattern. Image Vis Comput 26(7):891–897
5. Wei X, Zhang G, Jiang J (2004) A star map identification algorithm using radial and cyclic
features. Opto-Electr Eng 31(8):4–7
6. Wei X, Zhang G, Jiang J (2009) A star identification algorithm based on log-polar transform.
AIAA J Aerosp Comput Inf Commun 6(8):483–490
7. Wei X, Zhang G, Jiang J (2006) A star identification algorithm based on log-polar
transformation. Opt Tech 32(5):678–681
8. Schwartz EL (1977) Spatial mapping in the primate sensory projection: analytic
structure and relevance to perception. Biol Cybern 25:181–194
9. Tao Y, Ioerger TR, Tang YY (2001) Extraction of rotation invariant signature based on fractal
geometry. Int Conf Image Process 1:1090–1093
10. Kageyu S, Ohnishi N, Sugie N (1991) Augmented multi-layer perceptron for
rotation-and-scale invariant hand-written numeral recognition. IEEE Int Joint Conf Neural
Netw 1:54–59
11. Chen Z, Ding M, Zhou C (1999) Target searching method based on log-polar
coordinate mapping. Infrared Laser Eng 28(5):39–42
12. Knuth DE, Morris JH, Pratt VR (1977) Fast pattern matching in strings. SIAM J
Comput 6(2):323–350
13. Wu S, Manber U (1992) Agrep—a fast approximate pattern-matching tool. In: Usenix winter
1992 technical conference, San Francisco, pp 153–162
14. Du MW, Chang SC (1994) An approach to designing very fast approximate string matching
algorithm. IEEE Trans Knowl Data Eng 6(4):620–632
15. Yang J (2007) A research on star identification algorithm and RISC technology
application. Doctoral thesis, Beijing University of Aeronautics and Astronautics,
Beijing, pp 34–47
16. Yang J, Zhang G, Jiang J (2008) A star identification algorithm for un-calibrated
star sensor cameras. Opt Tech 34(1):26–32
Chapter 5
Star Identification Utilizing Neural
Networks
© National Defense Industry Press and Springer-Verlag GmbH Germany 2017 153
G. Zhang, Star Identification, DOI 10.1007/978-3-662-53783-1_5
Artificial neural networks (ANNs), also known as neural networks (NNs), are
algorithmic mathematical models [4] which process information in distributed and
parallel ways by simulating the characteristics of cerebral neural networks.
Depending on the complexity of the system, such a network adjusts the
interconnections among its massive number of internal nodes and thereby processes
information.
Artificial neural networks have the ability of self-learning and self-adaption. The
networks can analyze and grasp the potential laws between a batch of previously
provided corresponding input–output data. Based on these laws, output data can be
calculated when new input data is given. This process of learning and analyzing is
called “training.”
ANNs are networks in which massive artificial neurons interactively connect
with each other and every neuron is only a very simple information processing unit.
The structure of a strong interconnection network determines that neural networks
are equipped with a strong fault-tolerant ability, making it easy for the networks to
“learn.” By simply adjusting the connection form and strength, neural networks are
able to remember new information and update the database. It is evident that human
brains gain new knowledge and information by the stimulation of plenty of
examples. ANNs work in the same way as human brains. Through the training of
extracted samples from concrete problems, ANNs capture the inherent attributes of
the problem and then apply the trained network to calculate other examples of the
same problem. This is the learning process of ANNs.
In 1943, psychologist McCulloch and mathematical logician Pitts built a mathematical
model of neural networks, called the MP Model. On the basis of this MP Model, they
put forward a formal mathematical description and network structure of neurons and
proved that even a single neuron can realize a logic function, thus opening a new era
of research on ANNs. In 1949, psychologist Hebb proposed that the connection strength
between synapses is variable. In the 1960s, ANNs were further developed with the
introduction of improved neural network models, including perceptrons and
self-adaptive linear elements. After analyzing the functions and limitations of
neural networks represented by perceptrons, Minsky et al. published Perceptrons in
1969 and indicated that perceptrons failed to solve problems involving higher-order
predicates. Their argument greatly influenced subsequent research on ANNs.
5.1 Introduction to Neural Networks 155
ANNs are nonlinear and self-adaptive systems which consist of massive processing
units. The idea of the artificial neural network was proposed on the basis of research
findings in modern neuroscience. By simulating the methods of processing and
memorizing information performed by human brains, ANNs try to realize infor-
mation processing. ANNs have four basic characteristics:
① Nonlinearity. Nonlinear relationships are universal in nature. The intelligence
of human brains is one of the nonlinear phenomena. Artificial neurons are kept
either in an activated state or an inhibited state, and this phenomenon is shown
as a nonlinear relationship in mathematics. Networks composed of threshold
neurons have better performance and can improve their fault-tolerant ability and
expand their storage capacity.
② Free of limitation. Usually, a neural network is connected by multiple neurons.
The overall behavior of a system is not solely dependent on the characteristics
of one neuron, but might be more decisively determined by the interaction and
interconnection between units. Different units form massive connections to
No matter what kind of neural network model is discussed, the minimum infor-
mation processing unit is the neuron. So far, people have built hundreds of artificial
neuron models. However, the most commonly used is still the earliest MP Model.
The neuron is a multi-input, single-output information processing unit, and a neural
network consists of several neurons joined through weighted connections. Though a
single neuron is only able to do very simple information processing, a network
connected by more than one neuron has much stronger computing capability.
Neural network computing manifests as the interaction between neurons. By
changing the connection mode and strength between neurons, the computational
efficiency of neural networks can be changed. The connection strength between two
neurons is denoted by a real number, which is called the connection weight. The
connection form and connection weight between neurons are often determined by
the learning process of neural networks. Based on different types, connection modes
and learning styles of neurons, various neural network models are designed. The
structure of neural networks is shown in Fig. 5.1.
ANN models mainly focus on the topological structures in network connections,
the characteristics of neurons and the learning rules. Currently, there are nearly 40
neural network models, including the BP Network, the self-organizing map, the
Hopfield network, the Boltzmann machine network, the Adaptive resonance theory
network, and so on. According to the topological structures in network connections,
neural network models can be divided as follows:
① Forward networks. In a forward network, each neuron inputs the information
from the previous level and outputs the information to the next level without
any feedback and can be represented by a directed acyclic graph. This kind of
network can achieve the transformation of signals from the input space to the
output space. Its information processing ability comes from the multiple
compositions of simple nonlinear functions. Thanks to the simple network
structure, it is easy to create a forward network. The BP Network is a typical
forward network.
② Feedback networks. In a feedback network, feedbacks exist between neurons
and the working process can be described by an undirected complete graph.
The information processing of this kind of neural network is actually the
transformation of states and can be treated by using a dynamic system theory.
The stability of the system is closely related to the associative memory func-
tion. Both Hopfield Model and Boltzmann Machine belong to this kind of
network.
Learning is an important topic in neural network research. The adaptability of
neural networks is obtained through learning. Based on environmental changes, the
weights are adjusted accordingly in order to improve the performance of the system.
The Hebb Learning Rules proposed by Hebb lay the foundation for the learning
algorithms of neural networks. The Hebb Learning Rules hold that the learning process
ultimately happens at the synapses between neurons. The connection strength of
synapses changes with the activities of neurons around synapses. It is on this basis
that people have proposed various learning rules and algorithms in order to meet the
demands of different network models. Efficient learning algorithms enable the
neural networks to formulate the intrinsic representation of the objective world and
to establish featured information processing methods by adjusting connection
weights. The storage and processing of information are reflected in the connection
of networks.
According to different learning environments, the learning methods of neural
networks can be divided into supervised learning and unsupervised learning. In the
process of supervised learning, data of training samples is placed at the input end
and at the same time, by comparing the expected output with the network output,
error signals can be obtained. Based on this, the connection strength of weights is
adjusted. After several trainings, a determined weight is obtained by convergence.
When samples change, weights can be modified through learning in order to adapt
to a new environment. Neural networks using supervised learning include back
propagation networks, perceptrons, and so on. In the process of unsupervised
learning, the network is directly placed in a new environment without giving a
standard sample. The learning stage and working stage are integrated. At this time,
the changes in learning rules are subject to the evolution equation of connection
weights. The simplest example of unsupervised learning is the Hebb Learning
Rules. The competitive learning rule, which adjusts weights according to estab-
lished clustering, is a more complicated example of unsupervised learning.
Self-organizing maps, adaptive resonance theory networks and many others are all
typical models related to competitive learning.
5.2 Star Identification Utilizing Neural Networks … 159
The star identification algorithm carried out by using neural networks based on
features of star vector matrix [5] makes use of the direction vectors of one primary
star and three other stars in the neighborhood in order to establish the primary star’s
feature vector, which is considered as the weight vector of a self-organizing
competitive network. Star identification is finished “automatically” by using the
competition mechanism of self-organizing competitive networks.
In actual neural networks, such as the human retina, there is “lateral inhibition,”
which means if one neuron is excited, it will inhibit other neurons around through
its branches. Lateral inhibition brings out competition between neurons. Though at
the initial stage, every neuron is in different levels of excitation, lateral inhibition
makes neurons compete with each other. Finally, the inhibition produced by the
neuron with the strongest excitatory effect defeats the inhibitions produced by other
neurons. So this neuron “wins” and neurons around the “winner” are all “losers.”
Self-organizing competitive neural networks are formed on the basis of the
above-mentioned biological structure and phenomenon. These kinds of networks
can conduct self-organizing training and judgments to input patterns and ultimately
divide these patterns into different types. In structure, self-organizing competitive
neural networks are often single layer networks consisting of an input layer and a
competitive layer. There is not a hidden layer and neurons between the input layer
and the competitive layer connect bidirectionally. At the same time, neurons in the
competitive layer also have transverse connections. The basic idea here is that, in
the competitive layer, neurons compete to respond to the input pattern and only one
neuron becomes the final winner. Besides competing, a neuron can also become the
winner by producing inhibition; that is, every neuron can inhibit
other neurons from responding and thereby make itself the winner. Moreover, there is
another lateral inhibition method by which every neuron only inhibits its neigh-
boring neurons but not neurons far away. In learning algorithms, the network
simulates the dynamics principles of biological nervous systems which conduct
information processing by excitation, coordination, inhibition, and competition
between neurons to supervise its learning and work. Therefore, the self-organizing
and self-adaptive learning ability of self-organizing competitive neural networks
further broadens the application of neural networks in pattern recognition and
classification.
\[
n_{dist,i} = -\left\| IW_i - P \right\|
\]

Here, IW_i stands for the i-th weight vector in the weight matrix IW.
The network input data n1 in the competitive layer is the sum of the negative
distance and threshold b1 and is an S1 × 1-dimension vector. If the entire threshold
vector is 0, when input vector P and weight vector IW are equal, n1 reaches its
maximum value 0.
For the biggest element in n1, the output of the transfer function in the
competitive layer is 1, while its output is 0 for other elements. If all the
thresholds are 0, the neuron whose weight vector is closest to the input vector
yields the negative value nearest to zero, that is, the maximum element of n1 and
the one with the minimum absolute value. Thus, this neuron wins the
competition and the output result is 1.
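A minimal numerical sketch of this winner-takes-all pass is given below (using NumPy; the function name and array layout are assumptions for illustration, not the book's implementation):

```python
import numpy as np

def compete(IW, P, b=None):
    """One forward pass of a competitive layer.

    IW: S1 x R weight matrix (one weight vector per competitive neuron).
    P:  R-dimensional input vector.
    n1_i = -||IW_i - P|| + b_i; the neuron with the largest n1 outputs 1."""
    n1 = -np.linalg.norm(IW - P, axis=1)   # negative distances
    if b is not None:
        n1 = n1 + b                        # add thresholds, if any
    out = np.zeros(IW.shape[0])
    out[np.argmax(n1)] = 1.0               # winner takes all
    return out
```

With zero thresholds, the winning neuron is simply the one whose weight vector lies closest to the input pattern, as described above.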
Star identification utilizing self-organizing competitive neural networks has the
following advantages:
① Simple in structure. This kind of network has only two layers of structure
without a hidden layer. So it is easy to understand and compute.
② Easy to train. Each star is an independent pattern. In the process of star iden-
tification, learning clustering is not needed. Under this circumstance, there is no
need for iterative computing weights.
③ Clear in results. The nodes of the output layer are either 0 or 1 and can clearly
indicate which star is represented by the identification result. However, some
other networks output a real number between 0 and 1. They can indicate the
class of the star only after adjustments.
When formulating the patterns, a guide star is chosen as the primary star and three
stars around it are chosen as neighboring stars. Together, these four stars
constitute the guide star pattern. For different guide stars, the three neighboring
stars have different positions, so the pattern of a particular guide star is unique.
The direction vectors of these four stars in the celestial coordinate system can form
a vector matrix V. According to the characteristics of star sensor imaging models,
multiplying the transpose of V by V itself yields a symmetric characteristic matrix
V^T V. Similarly, the four measured stars corresponding to these four guide stars can
form a symmetric matrix W^T W. V^T V and W^T W are completely identical. This means
that, whether in the celestial coordinate system or the image coordinate system, the
symmetric matrix formed by the same set of stars remains unchanged. So this symmetric
matrix can be used as a characteristic for star identification.
Denoting the right ascension and declination coordinates of a guide star as (α, δ),
its direction vector in the celestial coordinate system is
[cos α cos δ, sin α cos δ, sin δ]^T. The transformation of star vectors from the
celestial coordinate system into the star sensor coordinate system is W = AV. Here
W stands for the direction vector matrix of the measured stars in the star sensor
coordinate system, A for the attitude matrix, and V for the direction vector matrix
of the guide stars in the celestial coordinate system. When four stars are observed,
W = [b_1 b_2 b_3 b_4] and V = [r_1 r_2 r_3 r_4], where b_i is the direction vector of
the measured star and r_i is the direction vector of the guide star, i = 1, 2, 3, 4.
W is a 3 × 4 matrix and W^T W is a 4 × 4 matrix.
Because W = AV, it follows that

\[
[\, b_1 \;\; b_2 \;\; b_3 \;\; b_4 \,] = A \, [\, r_1 \;\; r_2 \;\; r_3 \;\; r_4 \,] \tag{5.1}
\]

\[
W^T W = [\, b_1 \;\; b_2 \;\; b_3 \;\; b_4 \,]^T [\, b_1 \;\; b_2 \;\; b_3 \;\; b_4 \,]
= \begin{bmatrix}
b_1^T b_1 & b_1^T b_2 & b_1^T b_3 & b_1^T b_4 \\
b_2^T b_1 & b_2^T b_2 & b_2^T b_3 & b_2^T b_4 \\
b_3^T b_1 & b_3^T b_2 & b_3^T b_3 & b_3^T b_4 \\
b_4^T b_1 & b_4^T b_2 & b_4^T b_3 & b_4^T b_4
\end{bmatrix}
= \begin{bmatrix}
1 & b_1^T b_2 & b_1^T b_3 & b_1^T b_4 \\
b_2^T b_1 & 1 & b_2^T b_3 & b_2^T b_4 \\
b_3^T b_1 & b_3^T b_2 & 1 & b_3^T b_4 \\
b_4^T b_1 & b_4^T b_2 & b_4^T b_3 & 1
\end{bmatrix} \tag{5.2}
\]

\[
W^T W = V^T A^T A V = V^T V \tag{5.3}
\]

\[
V^T V = [\, r_1 \;\; r_2 \;\; r_3 \;\; r_4 \,]^T [\, r_1 \;\; r_2 \;\; r_3 \;\; r_4 \,]
= \begin{bmatrix}
r_1^T r_1 & r_1^T r_2 & r_1^T r_3 & r_1^T r_4 \\
r_2^T r_1 & r_2^T r_2 & r_2^T r_3 & r_2^T r_4 \\
r_3^T r_1 & r_3^T r_2 & r_3^T r_3 & r_3^T r_4 \\
r_4^T r_1 & r_4^T r_2 & r_4^T r_3 & r_4^T r_4
\end{bmatrix}
= \begin{bmatrix}
1 & r_1^T r_2 & r_1^T r_3 & r_1^T r_4 \\
r_2^T r_1 & 1 & r_2^T r_3 & r_2^T r_4 \\
r_3^T r_1 & r_3^T r_2 & 1 & r_3^T r_4 \\
r_4^T r_1 & r_4^T r_2 & r_4^T r_3 & 1
\end{bmatrix} \tag{5.4}
\]
Because b_i and r_i are unit vectors, the diagonal elements of the matrices in both
Eqs. (5.2) and (5.4) are 1. Therefore, based on Eqs. (5.2), (5.3), and (5.4), the
following equation can be obtained:
\[
\begin{bmatrix}
1 & b_1^T b_2 & b_1^T b_3 & b_1^T b_4 \\
b_2^T b_1 & 1 & b_2^T b_3 & b_2^T b_4 \\
b_3^T b_1 & b_3^T b_2 & 1 & b_3^T b_4 \\
b_4^T b_1 & b_4^T b_2 & b_4^T b_3 & 1
\end{bmatrix}
=
\begin{bmatrix}
1 & r_1^T r_2 & r_1^T r_3 & r_1^T r_4 \\
r_2^T r_1 & 1 & r_2^T r_3 & r_2^T r_4 \\
r_3^T r_1 & r_3^T r_2 & 1 & r_3^T r_4 \\
r_4^T r_1 & r_4^T r_2 & r_4^T r_3 & 1
\end{bmatrix} \tag{5.5}
\]
In Eq. (5.5), corresponding elements on the two sides of the equation are equal.
Because both are symmetric matrices, b_1^T b_2 = r_1^T r_2, b_1^T b_3 = r_1^T r_3,
b_1^T b_4 = r_1^T r_4, b_2^T b_3 = r_2^T r_3, b_2^T b_4 = r_2^T r_4, and
b_3^T b_4 = r_3^T r_4. In other words, for any set of stars, the inner products
between their direction vectors in the celestial coordinate system remain unchanged
when the vectors are converted into the star sensor coordinate system. So these
products can be extracted and used as characteristics for star identification. As
four stars are selected to form feature vectors, six elements in the matrices of
Eqs. (5.2) and (5.4) are independent. These six elements can be used to form the
star's feature vector as follows:
The feature vector of the guide star is

\[
pat_b = [\, b_1^T b_2 \;\; b_1^T b_3 \;\; b_1^T b_4 \;\; b_2^T b_3 \;\; b_2^T b_4 \;\; b_3^T b_4 \,]
\]
This feature vector is formed by four stars and contains the relative positions of
these four stars, so it can be viewed as the pattern of these four stars. Moreover, if
one of the stars is selected to be the primary star, this feature vector can also reflect
the distribution of the neighbor stars. So the feature vector can also be considered as
the primary star’s pattern.
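The invariance underlying this feature is easy to check numerically. In the sketch below (an illustration, with an arbitrary z-axis rotation standing in for the attitude matrix A and random unit vectors standing in for the four direction vectors), W^T W equals V^T V exactly as in Eq. (5.3):

```python
import numpy as np

# An arbitrary rotation about the z-axis stands in for the attitude matrix A.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Four random unit direction vectors r_i in the celestial coordinate system.
V = np.random.default_rng(0).normal(size=(3, 4))
V /= np.linalg.norm(V, axis=0)

W = A @ V                      # measured direction vectors b_i = A r_i

# Because A^T A = I, the two symmetric matrices coincide (Eq. 5.3).
assert np.allclose(W.T @ W, V.T @ V)
```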
To form guide star patterns, every star in the GSC is selected as the primary star
in turn and three neighboring stars in the neighborhood of the primary star are
found. These four stars are considered as a group, and the feature vector of this
set of stars is formed according to
pat_b = [b_1^T b_2, b_1^T b_3, b_1^T b_4, b_2^T b_3, b_2^T b_4, b_3^T b_4]. The
reason for choosing three neighboring stars around the primary star to construct the
feature vector is that no two primary stars have three neighboring stars in exactly
the same pattern distribution. So three neighboring stars, together with the primary
star, are enough to form the feature vector.
When forming measured star patterns, if the primary star is far away from the image
center, it is possible that some neighboring stars fall outside the image, making
the neighborhood of the primary star incomplete. Meanwhile, the more neighboring
stars are selected, the larger the probability that neighboring stars are mistakenly
selected.
As to different primary stars, their three neighbor stars have different positions,
and their corresponding feature vectors are also different. According to the rule of
the pattern class identification algorithm, the geometric distribution patterns of
neighboring stars in a certain neighborhood can form a unique pattern of the pri-
mary star. So, this feature vector can be used as the pattern of the primary star.
The procedures of forming the feature vector of the primary star are as follows:
① Determine the neighboring stars to be selected. For any primary star S_1,
compute the angular distances R_i between the primary star and all the neighboring
stars around it, where i = 1, 2, 3, …, n and n stands for the total number of
neighboring stars around the primary star. Take the stars whose angular distances
are in the range R_t < R_i < R_FOV as the neighboring stars to be selected. Here,
R_FOV is the FOV of the star sensor, and R_t is typically in the range 0.5°–1°.
② Determine the neighbor stars to form the feature vector. According to the
angular distances between the primary star S1 and the neighboring stars to be
selected, order all the neighboring stars to be selected from the smallest to the
largest. Select three stars S2 , S3 , and S4 closest to the primary star S1 to be the
neighboring stars to form the feature vector.
③ Form the feature vector of the primary star S_1. Compute the direction vectors
b_1, b_2, b_3, b_4 of the primary star S_1 and the three neighboring stars
S_2, S_3, S_4 as follows:

\[
b_i = [\, \cos\alpha_i \cos\delta_i \;\; \sin\alpha_i \cos\delta_i \;\; \sin\delta_i \,]^T
\]

Here, b_i stands for the direction vector of the i-th star among the primary star
S_1 and the three neighboring stars S_2, S_3, S_4, i = 1, 2, 3, 4; α_i stands for
the right ascension of the i-th star and δ_i for its declination.
Compute and form the feature vector
pat_b = [b_1^T b_2, b_1^T b_3, b_1^T b_4, b_2^T b_3, b_2^T b_4, b_3^T b_4] of the
primary star S_1. This feature vector is six-dimensional. Then, compute and store
the feature vectors of all the guide stars in the GSC.
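The three steps above can be sketched in Python. This is a hedged illustration, not the book's implementation: the function names are mine, stars are given as (α, δ) in radians, and neighbor selection (steps ① and ②) is assumed to have already produced the primary star and its three nearest neighbors:

```python
import numpy as np

def direction_vector(alpha, delta):
    """Unit direction vector of a star with right ascension alpha and
    declination delta (radians) in the celestial coordinate system."""
    return np.array([np.cos(alpha) * np.cos(delta),
                     np.sin(alpha) * np.cos(delta),
                     np.sin(delta)])

def feature_vector(stars):
    """stars: (alpha, delta) of the primary star S1 followed by its three
    selected neighbors S2, S3, S4. Returns the six-dimensional pattern
    pat_b of pairwise inner products."""
    b = [direction_vector(a, d) for a, d in stars]
    return np.array([b[0] @ b[1], b[0] @ b[2], b[0] @ b[3],
                     b[1] @ b[2], b[1] @ b[3], b[2] @ b[3]])
```

Each inner product is the cosine of an angular distance, so the six-element vector encodes the relative geometry of the four stars independently of the sensor's attitude.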
In addition to storing the feature vectors of the guide stars, the guide stars used
to form these patterns also need to be stored, so that once a primary star is
identified, the indexes of the three neighboring stars used to form its pattern can
be looked up easily. In the operation of self-organizing competitive neural
networks, only one node outputs 1, so only the index of the primary star can be
obtained directly; the indexes of the three neighboring stars around the primary
star cannot be determined. By looking up the data in the neighboring star index
database, the neighboring stars around the primary star can be found, and in the
process of identification, the guide stars corresponding to the neighboring stars in
the measured star image can be quickly found. In other words, the three neighboring
stars can also be identified.
The storage format of a partial record of the neighbor star index database is
shown in Fig. 5.3.
164 5 Star Identification Utilizing Neural Networks
Self-organizing competitive neural networks used for star identification have two
layers. The first layer is the input layer, in which the number of nodes is the same as
the number of dimensions of the feature vector of the guide star. That is, there are
six nodes. The second layer is the output layer. The number of nodes equals the
number of classes, which is the total number of guide stars. Every node in the
output layer is connected with every node in the input layer. The weights between
the nodes of the two layers are called weight vectors.
When the self-organizing competitive neural networks are constructed, the
values of the weight vectors between all the nodes are determined according to the
classification. This is the so-called network training. The typical learning rule
adopted in competitive learning strategies is Winner-Takes-All. The principle of
competitive learning is shown in Fig. 5.4. Denote the input pattern as a
two-dimensional vector. After normalization, its endpoint can be viewed as a point
distributed on the unit circle and is represented by “○.” Suppose there are three
neurons in the competitive layer and that their corresponding weight vectors are
marked “★” on the same unit circle. After being fully trained, the three “★” spots
on the unit circle gradually shift to the cluster centers of the input feature
vectors. Thus, the weight vectors of all the neurons of the competitive layer
become the clustering centers of the input feature vectors. When a pattern is input
into the network, the winning neuron in the competitive layer outputs 1 and the
input is assigned to the class that the winner represents.
In star identification, every guide star is viewed as a class of the output layer and
every class has only one feature pattern. So, after normalizing the feature patterns of
all guide stars, weight vectors of corresponding nodes in the output layer are
assigned directly. Thus, the training of self-organizing competitive neural networks
is completed. As is shown in Fig. 5.5, every class has only one feature vector,
i.e., one “○” spot, and the position of “○” is the clustering center of that class
of feature vectors. “★” represents the position of the weight vector of a node.
Moving each “★” onto its “○” so that they overlap, the weight vectors of all the
classes point to their own clustering centers, and the values of the weight
vectors equal the input feature patterns.
In the training process of networks, the feature vectors of guide stars are used to
train the self-organizing competitive neural networks. The pattern information of
guide stars is integrated into the weight matrix of neural networks. So in
well-trained networks, patterns of all the guide stars are included. In the process of
identification, there is no need to read the data of guide star patterns and store the
information individually.
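Because every class has exactly one pattern, "training" reduces to normalizing each guide star pattern and writing it into the weight matrix, and identification becomes a winner-takes-all dot-product search. A minimal sketch of this idea (an illustration, not the book's code):

```python
import numpy as np

def train(patterns):
    """Assign each guide star's normalized feature pattern directly as the
    weight vector of its output node (one node per guide star)."""
    W = np.asarray(patterns, dtype=float)
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def identify(W, p):
    """Winner-takes-all: the node whose weight vector is closest to the
    normalized input pattern wins; its index is the guide star index."""
    p = np.asarray(p, dtype=float)
    return int(np.argmax(W @ (p / np.linalg.norm(p))))
```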
In measured star images, the primary star to be identified and three neighboring
stars used to form the pattern are determined. Then the feature vector of the primary
star to be identified is formed. Input the feature vector into the well-trained
self-organizing competitive neural network. The network judges which node is the
closest to the input pattern and the corresponding node in the output layer outputs 1.
Look up the index number of the node. This number is the corresponding guide star
index of the primary star to be identified.
① Determine the primary star to be identified T1 . When a measured star image is
obtained, compute the distances between all the stars and the central point of
the image. Then sort the stars according to the distances in ascending order and
select the star with the minimal distance as the primary star to be identified T1 .
② Determine the neighboring stars used to form the feature vector. Compute the
angular distances between the primary star to be identified T1 and the neigh-
boring stars around it. Select three neighboring stars T2 , T3 , T4 with the minimal
angular distances as the neighboring stars used to form the feature vector.
③ Form the feature vector of the primary star to be identified T1 . Compute the
direction vectors r1 , r2 , r3 , r4 of the primary star to be identified T1 and its
neighbor stars T2 , T3 , T4 according to the algorithm as follows:
ri = [xi   yi   f]T / √(xi² + yi² + f²),   i = 1, 2, 3, 4
ri stands for the direction vectors of the primary star to be identified T1 and its
neighbor stars T2 , T3 , T4 , i = 1, 2, 3, 4.
Here, xi is the horizontal coordinate of the measured star in the image plane, yi
is the vertical coordinate of the measured star in the image plane, f is the focal
length of the star sensors.
Form the feature vector of the primary star to be identified T1 :

patr = [r1T r2   r1T r3   r1T r4   r2T r3   r2T r4   r3T r4]
④ Identify the primary star to be identified. Input patr into the well-trained
self-organizing competitive neural network and look up the node with the
network output of 1. The guide star determined by the index number of the
node is the corresponding guide star of the primary star to be identified T1 .
⑤ Identify the neighbor stars T2 , T3 , T4 . In the neighbor star database, look up the
star numbers of the three neighbor stars of the corresponding guide star of the
primary star to be identified T1 . These three neighbor stars are in correspon-
dence with neighbor stars T2 , T3 , T4 .
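Mirroring the catalog-side computation, the measured-star pattern can be formed from image-plane coordinates as in the following sketch (the units of x, y and f must agree, e.g., millimeters):

```python
import numpy as np

def image_direction(x, y, f):
    """Unit direction vector of a star spot at image-plane coordinates
    (x, y), with focal length f."""
    v = np.array([x, y, f], dtype=float)
    return v / np.linalg.norm(v)

def measured_pattern(spots, f):
    """`spots` lists the (x, y) of T1 followed by its three nearest
    neighboring stars T2, T3, T4."""
    r = [image_direction(x, y, f) for x, y in spots]
    return np.array([r[i] @ r[j] for i in range(4) for j in range(i + 1, 4)])
```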
The flow chart of the neural network star identification algorithm by using star
vector matrix feature is shown in Fig. 5.6.
5.2 Star Identification Utilizing Neural Networks … 167
Fig. 5.6 Flow chart of the neural network star identification algorithm by using the star vector
matrix feature
In simulations, the imaging parameters of star sensors are as follows: the FOV size
is 10.8° × 10.8°, the focal length of the optical system is 80.047 mm, the pixel size
is 0.015 mm × 0.015 mm, and the pixel resolution is 1024 × 1024. Select stars
brighter than 6 Mv from the SAO J2000 Fundamental Star Catalog to form a GSC
and generate a corresponding pattern vector database. The simulations are realized
on Intel Pentium 4 2.0 GHz computers with MATLAB.
(1) Example of Identification
Figure 5.7 shows the identification results of four random star images; + stands
for measured stars in the FOV and ○ stands for stars that are correctly identified. It can
be seen that, similar to other methods of star identification based on “star patterns,”
the probability of correctly identifying stars close to the center of the FOV is higher
than that of stars near the image edge of the FOV. Stars close to the image edge
have a larger probability of having their neighboring stars missing. Missing stars
lead to incomplete patterns and consequently may make the stars fail to be iden-
tified correctly.
(4) Impact of the Number of Measured Stars in the FOV on Identification Rates
For algorithms based on star pattern classes, the number of measured stars in the
FOV is an important factor that can influence identification performances. The more
measured stars in the FOV, the easier to form unique patterns of stars and to do
identification. At this time, an algorithm based on pattern class is often outstanding
in performance. From Fig. 5.10, it can be seen that when the star number in the
FOV reaches 10, this algorithm and the grid algorithm both have identification rates
of 95% or above. When the star number exceeds 12, both algorithms obtain
identification rates of nearly 100%. When the star number is under 9, this algorithm
has greater advantages. This is because this algorithm only selects the primary star
and the three neighboring stars around to generate the star pattern. However, the
grid algorithm divides the image and the resolution decreases consequently. So
when there are not enough stars, the grid algorithm cannot describe the pattern of
the primary star accurately and the identification rate drops sharply.
(5) Identification Time and Storage Capacity
The algorithm was simulated in a MATLAB environment, and the average identification
time for one star image is 0.4 s. The networks store the patterns of the guide stars in
weight vectors and make them a part of the network. So there is no need to store the
patterns of the guide stars separately. In addition, the GSC and the neighbor star
index database are needed when running the algorithm. The two altogether require
about 280 KB.
Similar to Sect. 5.1, the star identification algorithm by using neural networks based on
mixed features [6] uses mixed features consisting of close neighboring triangles and
radial-distributed vectors to form feature patterns of stars. The algorithm uses
competitive networks to complete star identification.
The output of the first layer is the initial value of the second layer, and that is
a2(0) = a1   (5.7)
The output of the second layer meets the following recursive procedure

a2(t + 1) = poslin(W2 a2(t))

Here, poslin(·) sets negative elements to zero, and W2 is a matrix with a diagonal
of 1 and all other elements equal to a very small negative value (−ε). After
iteration, the network ultimately moves toward a stable state.
That means the node with the biggest initial value outputs 1 while other nodes
output 0. The corresponding prototype vector of the node is the optimal match of
the input vector p. So the guide star represented by this node is the matching star of
the measured star.
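The recursion described above can be sketched as a lateral-inhibition iteration; the stopping rule below is an assumption of this sketch, and for n nodes ε should be smaller than 1/(n − 1) so that the eventual winner is not suppressed:

```python
import numpy as np

def compete(a1, eps=0.05, max_iter=1000):
    """Iterate a2(t+1) = poslin(W2 @ a2(t)) until at most one node remains
    positive; W2 has 1 on the diagonal and -eps elsewhere."""
    n = len(a1)
    W2 = (1.0 + eps) * np.eye(n) - eps * np.ones((n, n))
    a = np.asarray(a1, dtype=float)
    for _ in range(max_iter):
        a = np.maximum(0.0, W2 @ a)   # poslin: negative values clipped to 0
        if np.count_nonzero(a) <= 1:
            break
    out = np.zeros(n)
    out[int(np.argmax(a))] = 1.0      # winning node outputs 1, all others 0
    return out
```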
Star identification by using competitive networks has the following advantages:
① A fast and accurate search capability in order to find the optimal match for the
input pattern vector. When the input pattern contains noises or is incomplete,
the competitive networks can still quickly find the prototype most similar to the
input pattern in the pattern space.
5.3 Star Identification Utilizing Neural Networks … 173
Generally speaking, patterns fit for star identification by using neural networks are
required to meet the following conditions:
① Simple in structure and convenient in computing. Neural networks have to be
easy for parallel implementation and generally do not use complicated feature
extraction methods.
② Stable and reliable. Patterns must be free of influences from other factors to the
greatest extent. For example, when forming the pattern vectors by using the
distribution of companion stars in the neighborhood, patterns with rotation
invariance are preferred and information on brightness should not be used as
much as possible.
③ The pattern of one star should be distinguished from that of other stars. Because
every star belongs to a catalog of its own, different stars that have the same or
similar patterns should be avoided. This requires using as much information as
possible, since one single pattern often fails to distinguish stars from each other.
Through massive experimentation, mixed features consisting of radial patterns
and close neighboring triangles are selected to form feature patterns of stars. The
structure of feature pattern vector p is shown in Fig. 5.12. It is a 1 × 13 vector.
Denoting the radius of the pattern as r, the definitions of the radial pattern and the
close neighboring triangle are as follows:
① Divide the circular neighborhood with radius r into 10 annuluses with the same
intervals. Each annulus stands for angular distance r/10. R1, R2, …, R10 stand
for the number of companion stars falling into the 1st, 2nd, …, 10th annulus,
Here, w = (w1, w2, …, w13) stands for the weighted coefficient vector. Different
patterns have different weighted coefficients. Even the corresponding weighted
coefficients of different elements with the same patterns are not exactly the same. It
can be seen that radial patterns may be incomplete (i.e., measured stars are close to
the edge of the FOV), and companion stars close to the primary star have lower
probability of being outside the FOV than those companion stars far away.
Therefore, closer companion stars are more stable and reliable in radial patterns.
This can be reflected in the weighted coefficient vectors as follows:
w4 ≥ w5 ≥ ⋯ ≥ w13   (5.10)
Under the circumstance of star spot position noise with a standard deviation of
1 pixel and magnitude noise of 0.5 Mv, 1000 randomly generated star images from
the whole celestial sphere are identified. The identification rate is 99.7%. This is
better than the results for the grid algorithm and the identification algorithm based
on radial and cyclic features under the same experimental conditions.
References
1. Lindsey C, Lindblad T (1997) A method for star identification using neural networks. SPIE
3077:471–478
2. Bardwell G (1995) On-board artificial neural network multi-star identification system for 3-axis
attitude determination. Acta Astronaut 35:753–761
3. Paladugu L, Schoen M, Williams BG (2003) Intelligent techniques for star-pattern recognition,
Proceedings of ASME, IMECE2003-42274
4. Zongli J (2001) Introduction to artificial neural network. Higher Education Press, Beijing
5. Yang J (2007) Star identification algorithm and application research on RISC technology,
Beihang University doctoral dissertation, Beijing, pp 49–60
6. Wei X (2004) Star identification in star sensors and research on correlative technology,
Beihang University doctoral dissertation, Beijing, pp 76–81
Chapter 6
Rapid Star Tracking by Using Star Spot
Matching Between Adjacent Frames
As described in Sect. 1.4, the star sensor method usually has two working modes,
namely the initial attitude establishment mode and the tracking mode. In the initial
attitude establishment mode, star sensor identifies and establishes initial attitude by
using the full-sky star image. Once the initial attitude of a spacecraft is established
successfully, star sensor enters into the tracking mode. In normal conditions, star
sensor works in the tracking mode most of the time, which means that the tracking
mode is the principal operational mode of star sensor.
In accordance with the star sensor’s requirements toward star tracking, Zhang
et al. [1–4] put forward a rapid star tracking algorithm by using star spot matching
between adjacent frames. The algorithm effectively improves the efficiency of star
tracking by taking full advantage of the information of the partition star catalog and
using strategies such as threshold mapping, sorting before matching, etc. This
chapter introduces the detailed operational process of this algorithm and evaluates
its performance through simulation experiments.
This section briefly introduces the fundamental principles and process of the
tracking mode of star sensor. The characteristics of the star tracking algorithm and
star sensor’s basic requirements toward star tracking algorithm are analyzed. In the
last part, some widely adopted star tracking algorithms are presented.
Figure 6.1 demonstrates the operational process of star sensor. As is shown, star
sensor has two operational modes.
© National Defense Industry Press and Springer-Verlag GmbH Germany 2017 177
G. Zhang, Star Identification, DOI 10.1007/978-3-662-53783-1_6
⑤ Once the initial attitude is available, star sensor enters into tracking mode;
⑥ Star images are captured and star spot centroiding is conducted;
⑦ After tracking, matching and identifying, the results are used to calculate the
current attitude of star sensor and the attitude is outputted. Step ⑥ is then
repeated for the following rounds of tracking.
Initial attitude establishment mode and tracking mode are two independent yet
correlated working modes of star sensor. Initial attitude establishment mode offers
precise initial identification information and initial attitude for the tracking mode.
When tracking identification fails or in lost-in-space conditions, star sensor will
enter into initial attitude establishment mode and start to identify the full-sky image
once again. Lacking initial attitude in initial attitude establishment mode, star sensor
carries out star identification regarding the whole celestial sphere as unidentified
regions. Hence, a longer time is needed to search and match the stars. The iden-
tification usually takes several seconds. In the tracking mode, the results of full-sky
star identification and attitude calculation, as well as identification information of
the previous frames of star tracking are used for the tracking identification of the
measured stars in the current FOV. Therefore, the processing time is relatively
short. Only at the initial moment of operation or when faced with a lost-in-space
problem, star sensor will enter into initial attitude establishment mode. After the
initial attitude is established, star sensor will be in real-time tracking state as long as
the tracking mode remains stable. Hence, tracking mode is the major operational
mode of star sensor.
Star sensor’s requirements for star tracking cover the following aspects:
① Rapidity. The tracking time usually determines the update frequency of the
attitude of star sensor. Therefore, the tracking time should be as short as
possible.
② Accuracy. The identification results obtained in star tracking mode are directly
used for attitude output. Identification errors in the tracking mode may result in
fluctuations in attitude output. In serious cases, star sensor may have to start
identifying full-sky star images repeatedly.
③ Identify as many measured stars in the FOV as possible. Stars used in attitude
calculation are distributed unevenly in the FOV. Hence, if the stars showing up
in the FOV are not identified in a timely and accurate manner, the attitude output
may experience abnormal fluctuations.
In long-term tracking, it is inevitable that some tracked stars may move out of
the tracking FOV and some stars may enter into the FOV. It takes the star sensor
produced by Ball Company 0.2 s to accomplish new star identification and attitude
estimation. It is thus clear that the identification of new stars is quite
time-consuming, which is a bottleneck in rapid star tracking.
Rapid star tracking algorithm by using star spot matching between adjacent frames
directly uses the corresponding relations of stars between adjacent frames and prior
identification information to accomplish the rapid tracking and identification of
measured stars in the FOV. In order to accelerate the speed of star tracking,
strategies such as zone catalog-based quick retrieval of guide stars, threshold
mapping, sorting before tracking and others are used. In this section, the detailed
process of the algorithm is presented.
The general idea of rapid star tracking to be introduced in this chapter is as follows.
A reference star image is generated through star prediction. A radius of neigh-
borhood is set to determine if the measured stars in the measured star image is
within the neighborhood of the corresponding star in the reference star image. In
this way, whether or not the tracking has been accomplished successfully can be
evaluated. Through star mapping, the number of tracked stars in the FOV is
increased for the convenience of continuous tracking. The process of rapid star
tracking is demonstrated in Fig. 6.3.
For better explanation, the meaning and function of each step in the tracking
process is briefly explained according to this illustration of star tracking.
1. Initial Attitude
Without prior attitude, star sensor first enters into initial attitude establishment mode
when it starts working. By matching and identifying the measured full-sky star
images captured by an imaging device, the precise initial attitude of star sensor is
calculated and established. Then, star sensor enters into tracking mode.
2. Searching for Guide Star and Acquiring Star Information
In accordance with the attitude of star sensor, the current direction of boresight of
the star sensor can be calculated. With the boresight pointing to the very direction,
information on the guide stars within a certain range of a celestial area can be
obtained. Information of stars (including the index number, star magnitude, coor-
dinates of the right ascension and declination and other information of stars) are
found from the GSC. These stars are the ones that can be captured by image sensor
under the current attitude. At this stage in the process of tracking, the partition star
catalog is used. See Sect. 2.1 for details.
3. Star Mapping
Star images captured by an image sensor are based on image coordinate system
(coordinates of stars are stored as pixels), while coordinates of guide stars stored in
the star catalog are based on the celestial coordinate system. Hence, the coordinates
of guide stars in the celestial area, which is expressed with respect to the celestial
coordinate system, should be transformed and expressed in the image coordinate
system (i.e., transforming the right ascension and declination information of guide
star in star catalog into positional coordinate information on the image sensor).
Threshold mapping is utilized at this stage in the process of tracking.
4. Star Prediction
When star sensor is operating, its angular velocity keeps changing. During tracking
and matching, the movement of stars in the previous frames of known star images
can be used to predict the position of stars in the next frame of star image.
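A simple realization of this prediction is constant-velocity extrapolation from the two most recent frames; this is one plausible sketch, and higher-order prediction is equally possible:

```python
def predict_position(prev2, prev1):
    """Predict a star's (x, y) in the next frame by linearly extrapolating
    its motion between the two most recent frames."""
    (x2, y2), (x1, y1) = prev2, prev1
    # Constant-velocity assumption: the displacement between the last two
    # frames is repeated once more.
    return (2 * x1 - x2, 2 * y1 - y2)
```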
6.2 Rapid Star Tracking Algorithm by Using Star Spot Matching … 183
Fig. 6.4 Corresponding match of measured star and guide star in star tracking. dx difference value
between the x-coordinates of the two stars, dy difference value between the y-coordinates of the
two stars, ★ guide star in the reference star image, ☆ measured star in the measured star image,
r radius of neighborhood, d distance between the two stars
The value of the neighborhood radius r is related to the angular velocity of star
sensor. With too large a value, the number of stars that can be successfully matched
and identified may decrease. For star No. 2 in Fig. 6.4, three measured stars, 3′, 4′
and 5′, will be spotted if the value of its neighborhood radius r is too large. The
correct match for star No. 2, namely 4′, cannot be successfully matched and
identified in this case. If the value of r is too small, however, 4′ will be out of the
neighborhood, resulting in an unsuccessful match again.
8. Attitude Calculation
In accordance with the intrinsic parameters of star sensor, the attitude of star sensor
with respect to the celestial coordinate system can be calculated by using the
coordinates of a tracked measured star and the coordinates of its corresponding
guide star in the star image.
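One standard way to compute this attitude, though not necessarily the method used here, is the SVD solution of Wahba's problem over the matched star pairs:

```python
import numpy as np

def attitude_from_matches(image_vecs, catalog_vecs):
    """Least-squares attitude matrix A such that image_vec ~ A @ catalog_vec,
    via the SVD solution of Wahba's problem."""
    B = sum(np.outer(b, r) for b, r in zip(image_vecs, catalog_vecs))
    U, _, Vt = np.linalg.svd(B)
    # Force det(A) = +1 so the result is a proper rotation matrix.
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```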
On the basis of the calculated attitude of star sensor, the current pointing
direction of boresight of star sensor can be computed. Information on guide stars
within a certain range of the celestial area in this direction of boresight can be
obtained. Making use of the imaging model of star sensor, the reference star image
of the next frame can be obtained and used in the identification of the measured star
image in the next frame. The tracking of measured stars in the measured star image
is thus realized following these cyclic procedures.
It is clear from the illustration of star tracking that strategies such as partition star
catalog, threshold mapping, sorting before matching and identification and others are
adopted, accelerating the speed of star tracking. Among these strategies, partition
star catalog divides the whole celestial area into several sub-celestial areas. In this
way, only the sub-celestial areas adjacent to the pointing direction of the boresight of
star sensor, instead of the whole celestial area, are searched in guide star mapping,
reducing the number of guide stars to be retrieved and accelerating the speed of
guide star indexing. In threshold mapping, with the demanded accuracy of attitude
calculation, a threshold of the number of tracked stars is set so that guide star
mapping is conducted only when the number of tracked stars is smaller than the
threshold. Hence, the frequency of mapping, as well as the mean number of tracked
stars, is reduced. Sorting before matching means that stars are arranged and ranked in
accordance with their coordinate values in the star image before matching and
identification, reducing meaningless matching of stars too far apart. In addition, with
the introduction of star spot prediction, the position of a measured star in the next
frame of star image is predicted on the basis of the tracking results of previous
frames. Consequently, the neighborhood radius used is relatively small and the
number of stars successfully tracked can be increased under the same circumstances.
For detailed discussion of the guide star indexing, threshold mapping and sorting
before matching in star tracking algorithm, the identification results of star images
in the kth and previous frames are defined as prior information and the task is to
track and identify the current (k + 1)th frame of the measured star image. Two star
images in Fig. 6.4 are regarded as the reference star image and measured star image
of the (k + 1)th frame respectively.
The partition of celestial areas is similar to the one introduced in Sect. 2.1. Guide
star indexing by using partition of star catalog is conducted in the following way:
① Based on the results of previous tracking and identification, the direction vector
of the boresight of star sensor is calculated;
② The subblock whose medial axis vector is closest to the direction vector of
the boresight is located in the partition star catalog;
③ This subblock and its adjacent subblocks (as shown in Fig. 6.5) constitute a
sub-celestial area and the index number of guide stars in the area are stored;
④ In accordance with the stored index number of guide stars, corresponding guide
stars are found in the star catalog and the positional information of guide stars
are obtained. With the perspective projection transformation model, guide stars
projected on the array plane of the imaging device of star sensor can be
screened. In this way, the quick search of guide stars is accomplished.
With the partition star catalog, only nine subblocks, instead of the whole
celestial area, are searched for the guide star. The use of partition star catalog
narrows down the search area and accelerates the speed of searching and tracking.
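The lookup can be sketched as follows; the adjacency list `neighbors` and the per-block star lists `block_stars` are assumed data structures standing in for the partition star catalog:

```python
import numpy as np

def nearest_subblock(boresight, axis_vectors):
    """Index of the subblock whose medial-axis unit vector is closest to the
    boresight direction (maximum dot product)."""
    b = boresight / np.linalg.norm(boresight)
    return int(np.argmax(axis_vectors @ b))

def candidate_star_indices(boresight, axis_vectors, neighbors, block_stars):
    """Guide star indices from the nearest subblock plus its adjacent ones."""
    k = nearest_subblock(boresight, axis_vectors)
    ids = set(block_stars[k])
    for j in neighbors[k]:          # adjacent subblocks, as in Fig. 6.5
        ids.update(block_stars[j])
    return sorted(ids)
```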
In the process of tracking, the generation of the reference star image, i.e., star
mapping, is the most time-consuming step. It is quite a waste of time if every
tracking cycle has to undergo coordinate transformation. The aim of tracking is to
calculate the attitude of star sensor
by tracing the stars. To guarantee proper tracking, more than three stars should be
tracked. Generally speaking, with the successful tracking of six to ten stars, the
attitude of star sensor calculated can be accurate enough. Hence, threshold mapping
is adopted in order to reduce the frequency of star mapping.
In threshold mapping, a star number threshold, defined as δ, is set. Star mapping
is not conducted unless the number of measured stars successfully tracked is
smaller than δ. Figure 6.6 presents the process of threshold mapping.
The detailed process is as follows:
① Firstly, a threshold δ is set. δ can be relatively large so that tracking algorithm
can perform better in terms of reliability and attitude accuracy.
② If no fewer than δ stars have been matched and identified in the kth frame of
the measured star image and its reference star image, star mapping is not
carried out. In that case, information of the measured stars
in the measured star image (including right ascension, declination, magnitude,
star index number and other information) is deemed to be the information of its
matching guide stars in the reference star image. The matched and identified
measured star image is directly considered as the (k + 1)th frame of the ref-
erence star image (only the information of successfully identified stars is kept).
In this way, matching can be carried out simply between adjacent star images
on the basis of prior information, with no need to generate a reference star
image again.
③ If fewer than δ stars are matched and identified in the kth frame of the measured
star image, star mapping is conducted and the (k + 1)th frame of the reference
star image is generated.
④ Similarly, the (k + 1)th frame of the reference star image and its measured star
image are matched and identified. Star tracking is thus accomplished following
these cyclic procedures.
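The control logic of threshold mapping condenses to a few lines; here `remap` stands in for the costly star-mapping step (a hypothetical callable, not a function from the book):

```python
def next_reference(tracked, delta, remap):
    """Reuse the successfully tracked stars of the current frame as the next
    reference image when at least `delta` stars were matched; otherwise
    regenerate the reference image by star mapping."""
    if len(tracked) >= delta:
        return tracked   # matched stars directly become the next reference
    return remap()       # full star mapping from the catalog
```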
When matching reference star images and measured star images, it is not necessary
to compare those stars which are far apart. Thus, the strategy of sorting before
matching and identification is utilized, i.e., star spots in the two star images are
ranked in ascending order in accordance with their x-coordinates. (Fig. 6.7
demonstrates the ranking of stars in the two star images in Fig. 6.4.) Then, star
matching and identification are carried out.
(a) Sorting of the (k + 1)th frame of reference star image
(b) Sorting of the (k + 1)th frame of measured star image
For better illustration, the sorted (k + 1)th frame of the reference star image and
its measured star image are marked as sequence A and sequence B, respectively.
The matching and identification of the two star images shown in Fig. 6.4 can be
regarded as the matching process of the two sequences. Taking the matching of star
No. 2 in A with the stars in B as an example (as shown in Fig. 6.8), the detailed
process is as follows:
① In order to reduce computational work and accelerate calculation speed, the
difference between dx (x-coordinate difference) and dy (y-coordinate differ-
ence), instead of the distance between two stars d, is compared. The equations
are as follows:
dx ¼ jx x0 j; dy ¼ jy y0 j ð6:1Þ
② The comparison of star No. 2 in sequence A with the stars in sequence B starts
with 3′. When the previous star, No. 7 in A, was compared with the stars in B,
3′ was the first star in B whose dx value became smaller than r; in other
words, the dx values between star No. 7 and the stars 7′ and 11′ (which
precede 3′) are all larger than r. Since the stars are arranged in ascending
order, the dx values between star No. 2 (which comes after No. 7 in A) and
stars 7′ and 11′ must be larger than r as well. Hence, there is no need to
compare star No. 2 with them.
③ In the comparison, as long as the dx value between star No. 2 and the current
star in sequence B is smaller than r, star No. 2 continues to be compared with
the next star in sequence B. In this case, No. 2 is compared successively with
stars 3′, 9′, 4′ and 10′.
④ When star No. 2 is compared with 10′, it is found that their dx value is larger
than r. The comparison of star No. 2 with B-sequence stars should be termi-
nated at this time as there is no need to continue the comparison. Since the stars
are arranged in ascending order, the dx values of star No. 2 and stars No. 5′, 2′,
8′ and 6′ (which are after star No. 10′), are definitely larger than r. There is no
need to make further comparison.
⑤ During the comparison of star No. 2 and B-sequence stars, only the dx and
dy values of No. 2 and 4′ are smaller than r simultaneously, signifying that there
is only one star in the neighborhood of star No. 2. Thus, 4′ is matched and
identified. The guide star information of 4′ can be obtained from the identifi-
cation results of the previous frame of star No. 2.
⑥ Similarly, stars No. 9, 3, 6, 4 and 10 are compared with B-sequence stars,
accomplishing the matching and identification of this frame of the measured
star image.
With the approach of sorting before matching and identification, unnecessary
comparisons are effectively eliminated. As demonstrated in Fig. 6.8, star No. 2 in
the (k + 1)th frame of the reference star image is simply to be compared with stars
No. 3′, 9′, 4′ and 10′ in the measured star image. It is not necessary to compare star
No. 2 with other stars. With sorting before matching and identification, only the
stars with approximate coordinate values are to be compared, reducing the com-
parison frequency and accelerating the speed of matching and identification.
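The scan described above can be sketched in Python as follows (the function name, the tuple layout, and the uniqueness rule for a match are illustrative assumptions, not the book's implementation):

```python
def match_sorted(ref, meas, r):
    """Match reference stars to measured stars, both sorted by ascending x.

    ref, meas: lists of (x, y) centroid tuples sorted by x.
    r: neighborhood radius. A reference star is matched only when exactly
    one measured star satisfies dx < r and dy < r (Eq. 6.1).
    Returns a dict {ref_index: meas_index}.
    """
    matches = {}
    start = 0  # first measured star that can still satisfy dx < r
    for i, (x, y) in enumerate(ref):
        # Stars whose dx already exceeds r cannot match this or any later
        # reference star, because both sequences are sorted ascending.
        while start < len(meas) and x - meas[start][0] >= r:
            start += 1
        candidates = []
        for j in range(start, len(meas)):
            xm, ym = meas[j]
            if xm - x >= r:          # dx >= r again: terminate the scan
                break
            if abs(y - ym) < r:      # dx < r here; check dy as well
                candidates.append(j)
        if len(candidates) == 1:     # exactly one neighbor -> matched
            matches[i] = candidates[0]
    return matches
```

Because the inner scan both starts late and breaks early, each reference star is compared only with the few measured stars of approximately equal x-coordinate, which is the effect described above.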
When star sensor is operating, the attitude angular velocity of its carrier keeps
changing. The range of attitude angular velocity is defined as 1°–5°/s.
The neighborhood radius r required varies significantly at different angular
velocities.
If the neighborhood radius r used in tracking and identification is a constant, this
value may be suitable at one velocity but inappropriate at another, either larger or
smaller. As a result, the tracking may be low in efficiency, or cannot be accom-
plished in some cases.
6.2 Rapid Star Tracking Algorithm by Using Star Spot Matching … 189
The value of the neighborhood radius has a direct impact on the effects of
tracking and matching, as is shown in Fig. 6.4:
① When the value of the neighborhood radius r is small (small circle), there is
only one star, 4′, in the neighborhood of star No. 2. Hence, the two stars are
successfully matched.
② When the value of the neighborhood radius r is large (big circle), there are two
stars, 3′ and 4′, in the neighborhood of star No. 2. Hence, 4′ cannot be tracked
and identified.
Due to the big differences in the neighborhood radius demanded by different
angular velocities, it is inappropriate to set a single constant value for the neigh-
borhood radius.
Star spot position prediction can be used to solve this problem. There are two
approaches to predicting the position of star spots; the second one is adopted in
this book.
① The position of stars in the FOV is predicted through the accurate estimation of
attitude. As a widely used method, it estimates the current attitude on the basis
of precise attitude and angular velocity information. Then, the
positional coordinates of stars in the FOV are calculated in accordance with the
current estimated value of the attitude. The strength of this method is that the
estimated position acquired through this approach is usually accurate. However,
the calculation is relatively complex and requires the use of a very complicated
filtering algorithm.
② The imaging position of star spots in the FOV is estimated by using the image
and angular velocity. In accordance with the positional changes of stars tracked
in the previous moment, the position of the star is predicted in the current FOV.
This strategy is easy to implement and fast to calculate [8]. It is noteworthy
that when the star sensor switches from full-sky star identification mode to
tracking mode, the star spot position at the initial moment cannot be predicted
since there is no previous record to draw on. A larger neighborhood radius has to be
used for tracking. After one successful tracking, tracking information of the
previous frames can be utilized to predict the position of the tracked star in the
next frame. The neighborhood radius used at this time can be reduced in its
value.
With the adoption of star spot prediction, the rate of matching and identification
is significantly improved, which also increases the number of stars to be tracked and
matched to some extent.
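A minimal sketch of this second approach, assuming a constant inter-frame displacement (constant angular velocity over adjacent frames); the function names and the two radius values are illustrative, not from the book:

```python
def predict_position(prev, curr):
    """Predict the next-frame star spot position by extrapolating the
    displacement observed between the two previous frames."""
    (x1, y1), (x2, y2) = prev, curr
    return (2 * x2 - x1, 2 * y2 - y1)

def search_radius(have_history, r_wide=20.0, r_narrow=5.0):
    """Right after switching from full-sky identification to tracking
    there is no history, so a wide neighborhood radius must be used; once
    one tracking step succeeds, prediction allows a narrow radius."""
    return r_narrow if have_history else r_wide
```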
190 6 Rapid Star Tracking by Using Star Spot Matching …
The upper curve demonstrates the tracking time with a threshold of 30, while the
lower one demonstrates the tracking time with a threshold of 6.
It is clear from the illustration that as the threshold increases, a longer time is
required for each tracking step. Therefore, given that the attitude measurement
remains accurate, the value of the threshold should be kept as small as possible.
2. Threshold Value’s Influence on the Success Rate of Star Tracking
To speed up star tracking, the threshold value should be kept as small as possible in
theory. However, the value should not be too small. The reasons lie in the following
aspects:
① At least three stars are required to be successfully tracked for the calculation of
attitude. Hence, the threshold value should not be smaller than 3.
② Occasionally, stars successfully tracked in the current frame of measured star
image may move out of the FOV in the next frame, resulting in tracking failure.
It often occurs when the angular velocity is high and the displacement of a
measured star in adjacent frames of images is great.
When the threshold is set to be 6 and eight stars have been successfully tracked,
no star spot mapping is required in this kind of star tracking. However, if six of
these tracked stars move out of the FOV in the next frame of the measured star
image, only two stars can be successfully tracked at most. Under this circumstance,
star tracking may fail and attitude calculation cannot be carried out.
As Fig. 6.11 shows, when the threshold is 4, the success rate of tracking, which
is around 55%, is relatively low. With the threshold bigger than 8, the success rate
of star tracking is relatively high, remaining above 95%.
Since the threshold value should be kept as small as possible, the threshold can
be set as 8 in star tracking. In this way, the success rate can be ensured and the
processing time can be effectively shortened in star tracking.
As shown in Fig. 6.12, if the neighborhood radius is too large or too small, the
success rate of tracking will decrease. The highest success rate of star tracking can
be acquired when the neighborhood radius is about 10.
In order to study the influence of star position noise on star tracking, position noise
is introduced into the generated measured star image in simulation experiments.
Following Gaussian distribution, the position noise has a mean value of 0 and
standard deviation of 0–2 pixels.
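The noise injection used in these simulation experiments can be sketched as follows (the function and its interface are illustrative assumptions):

```python
import random

def add_position_noise(stars, sigma, rng=random):
    """Perturb measured star spot centroids with zero-mean Gaussian noise
    of standard deviation sigma (in pixels) on each coordinate, as in the
    simulations, where sigma is swept from 0 to 2 pixels."""
    return [(x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
            for (x, y) in stars]
```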
1. Influence of Position Noise on the Success Rate of Star Tracking
In simulations, star tracking that can accomplish the predetermined tracking process
is considered to be successful. In other words, successful star tracking refers to
those that can track and identify measured star images generated during each step
with the attitude calculated on the basis of the results of tracking and identification
within the range of error tolerance. Once star tracking fails to track one frame and
cannot calculate the attitude, it is deemed as a failure.
Gaussian noise, with a mean value of 0 and a standard deviation of 0 (i.e., no
noise), 0.5, 1.0, 1.5 and 2 pixels, respectively, is introduced into the star
positions in the measured
star image. A hundred tracking processes are randomly selected to run the test.
Figure 6.13 demonstrates the results of tracking simulation. It is found that
though the success rate of tracking decreases as the standard deviation of position
noise increases, it remains above 95%.
In simulation, star tracking fails mostly because the number of stars that are
successfully tracked is too small. This number cannot meet the minimum demand
for attitude calculation. As a result, the attitude of star sensor cannot be correctly
calculated and the next frame of reference star image cannot be generated. Thus, the
tracking process is terminated and star tracking fails.
2. Influence of Position Noise on the Export Accuracy of Attitude in Star Tracking
As can be seen from the attitude calculation equation, the computation process is
related to the coordinates of star position. The accuracy of the star position in the
measured star image has a direct bearing on the measurement precision of the
starlight vector in any star sensor coordinate system. The attitude export accuracy
during star tracking is further affected by this precision.
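As an illustration of this dependence, a standard pinhole camera model (an assumption here; the book's sign convention and calibration parameters may differ) maps a centroid directly to a unit starlight vector, so centroid errors propagate straight into the vector:

```python
import math

def starlight_vector(x, y, x0, y0, f):
    """Unit starlight direction in the star sensor frame from a centroid
    (x, y), principal point (x0, y0) and focal length f, all expressed in
    the same units (e.g. pixels)."""
    vx, vy, vz = x - x0, y - y0, f
    n = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (vx / n, vy / n, vz / n)
```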
Gaussian noise, with a mean value of 0 and a standard deviation of 0–2 pixels, is introduced
into the star positions in the measured star image. Several tracking processes are
randomly selected to run the test.
A random attitude of star sensor is selected. Its initial attitude has a yaw angle of
300°, a pitch angle of 40° and a roll angle of 0°. The final attitude has a yaw angle
of 310°, a pitch angle of 50° and a roll angle of 10°. Figure 6.14 illustrates the
influence of position noise on the accuracy of the star sensor attitude. The curve
with relatively small fluctuations represents the difference value between the cal-
culated attitude value and the true value when the standard deviation of position
noise is 0.5. The curve with relatively large fluctuations reflects the difference value
between calculated attitude value and the true value when the standard deviation of
position noise is 2.
It is clear from Fig. 6.14 that the increase in position noise reduces attitude
accuracy. Therefore, star centroiding of star sensor should be maintained as accu-
rately as possible in actual use in order to improve the precision of star sensor’s
attitude measurement.
6.3 Simulations and Results Analysis 195
In simulation experiments, the measured star images in each tracking step are
simulated on the basis of the given attitude of star sensor. With the set initial and
final attitudes of star sensor, the given attitude of star sensor changes in the given
manner.
In actual use, the attitude of star sensor does not necessarily change in only one
manner. The impact of different manners on attitude calculation is studied by
imposing various changes on the given attitude. In this way, the simulation
experiments of star tracking can be more practical and the tracking algorithm can
perform better.
A random attitude of star sensor is selected. Its initial attitude has a yaw angle of
190°, a pitch angle of −70° and a roll angle of 0°. The final attitude has a yaw angle
of 200°, a pitch angle of −60° and a roll angle of 10°. With this given attitude,
situations in which the attitude is subject to linear variation and conic variation are
analyzed respectively.
1. Linear Variation of the Given Attitude
The given attitude changes in accordance with equation
Y = AX + C
Here, Y stands for the attitude in each tracking step, A for the coefficient of the linear
equation, X for the number of the tracking step, and C for the initial attitude of
tracking. Thus, the given attitude changes linearly from one tracking step to the next.
Figure 6.15 demonstrates the comparison between the given value and the cal-
culated value of yaw angle in each tracking step when the given attitude changes
linearly. On the right of the Figure is an enlarged illustration of part of the tracking
Fig. 6.15 Tracking curve of the linear variation of the given attitude
curve. The dotted line represents the value of the given attitude, while the solid line
stands for the calculated value of the attitude. It is clear from the Figure that the
curve of the calculated value is basically in line with the curve of the given value.
2. Conic Variation of the Given Attitude
The given attitude changes in accordance with equation
Y = AX² + C
Here, Y stands for the given attitude in each tracking step, A for the coefficient of
the conic equation, X for the steps of tracking, and C for the initial attitude of
tracking. With this equation, the given attitude in each tracking step changes in
accordance with the conic curve.
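Both variation laws can be sketched in one function; normalizing X to the interval [0, 1] and the per-angle interpolation are illustrative assumptions, with A implicitly chosen so the final step reaches the final attitude:

```python
def attitude_profile(initial, final, steps, law="linear"):
    """Given attitude for each tracking step 0..steps, interpolating each
    Euler angle from `initial` to `final`. law="linear" follows Y = AX + C;
    law="conic" follows Y = AX**2 + C."""
    out = []
    for k in range(steps + 1):
        t = k / steps if law == "linear" else (k / steps) ** 2
        out.append(tuple(c + (f - c) * t for c, f in zip(initial, final)))
    return out
```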
Figure 6.16 demonstrates the comparison between the given value and the cal-
culated value of yaw angle in each tracking step when the given attitude changes in
Fig. 6.16 Tracking curve of the conic variation of the given attitude
accordance with the conic curve. On the right of the Figure is an enlarged illus-
tration of part of the tracking curve. The dotted line represents the value of the
given attitude, while the solid line stands for the calculated value of the attitude. It is
clear from the Figure that the curve of the calculated value is basically in line with
the curve of the given value.
In order to assess the rapidity of the tracking algorithm, simulation tests on the
processing time of star tracking are carried out. The threshold is set to 8, i.e.,
star mapping is conducted whenever the number of stars being tracked in the FOV is
smaller than 8.
The initial attitude of the simulation is set to have a yaw angle of 190.707°, a
pitch angle of −88.168° and a roll angle of 1.210°. The final attitude has a yaw
angle of 200.707°, a pitch angle of −78.168° and a roll angle of 11.210°.
Figure 6.17 presents the statistical result of the time spent on 140 frames of star
tracking. The dotted line stands for the tracking time spent before improvement, and
the solid line for the tracking time spent after improvement (by using zone
catalog-based quick retrieval of guide stars, threshold mapping, sorting before
tracking and other strategies). The statistical results demonstrate that an average of
12 ms is taken in each tracking step before the improvement and 6 ms after the
improvement.
References
1. Jiang J, Zhang GJ, Wei XG et al (2009) Rapid star tracking algorithm for star sensor. IEEE
Aerosp Electron Syst Mag 23–33
2. Jiang J, Li X, Zhang G, Wei X (2006) Fast star tracking technology in star sensor. J Beijing
Univ Aeronaut Astronaut 32(8):877–880
3. Jiang J, Li X, Zhang G, Wei X (2006) A fast star tracking method in star sensor. J Astronaut
27(5):952–955
4. Li X (2006) Fast star tracking technology in star sensor. Master’s thesis of Beijing University
of Aeronautics and Astronautics, Beijing
5. Laher R (2000) Attitude control system and star tracker performance of the wide-field infrared
explorer spacecraft. American Astronomical Society, AAS 00-145:723–751
6. Wang G et al (2004) Kalman filtering algorithm improvement and computer simulation in
autonomous navigation for satellite. Comput Simul 27(1):33–35
7. Yadid-Pecht O et al (1997) CMOS active pixel sensor star tracker with regional electronic shutter.
IEEE J Solid-State Circuits 32(3):285–288
8. Samaan MA, Mortari D, Junkins JL (2001) Recursive mode star identification algorithms.
Space flight mechanics meeting, Santa Barbara, CA, AAS 01-194, pp 11–14
Chapter 7
Hardware Implementation
and Performance Test of Star
Identification
As an aerospace product, star sensor must meet the demand for miniaturization.
Currently, star sensor uses embedded design schemes and its integration level is
increasingly high so as to minimize its weight, power consumption, and size. The
star identification algorithm generally runs on embedded processors. Meanwhile, to
store GSC and navigation feature databases, it is also necessary to extend the
peripheral memory. RISC (Reduced Instruction Set Computer) processor based on
ARM (Advanced RISC Machines) has such characteristics and advantages as low
power consumption, low cost, and good performance. It is used widely by
numerous star sensor products as a core device of the data processing unit of star
sensor.
After simulation experiments using simulated star images, the star identification
algorithm needs to be further tested to investigate its performance when it is closer
to the on-orbit operational status of star sensor. Generally, there are two ways of
testing: one that uses a star field simulator to conduct hardware-in-the-loop simu-
lation and verification and one that conducts the field test of star observation. The
former can be done in a laboratory and is not restricted by weather conditions or
geographical positions. Through flexible configurations of simulation star images,
diversified function tests and verifications can be done. The latter can obtain the
actual star images that are closest to the operational status of star sensor, but it is
subject to the influences of atmospheric environment, climatic conditions, and
geographical positions.
This chapter introduces the hardware implementation process of the star iden-
tification algorithm by taking the RISC processor as an example, and describes its
two testing methods, i.e., hardware-in-the-loop simulation and verification and field
test of star observation.
© National Defense Industry Press and Springer-Verlag GmbH Germany 2017 199
G. Zhang, Star Identification, DOI 10.1007/978-3-662-53783-1_7
200 7 Hardware Implementation and Performance Test …
The circuit system of star sensor is generally divided into two parts: the front
end and the back end. The front end mainly fulfills the driving of the image
sensor and the low-level processing of the star image, outputting the centroid
coordinates of the star spots; it is often implemented with an FPGA or CPLD. The
back end mainly fulfills star identification, star tracking, attitude
establishment, etc., and outputs the final attitude information; it is often
implemented with a RISC or DSP processor. This section mainly introduces the
structure of the RISC processing circuit at the back end of star sensor and the
implementation of the star identification algorithm [1, 2].
The RISC processor has a better pipeline and requires fewer gate circuits for its
implementation. Compared with other microprocessors, it is lower in power
consumption and cost. Thus, many star sensors choose a RISC processor as their
processor. For example, SETIS star sensor from German company Jena-Optronik
uses advanced ASIC chip technology with 16-bit RISC controller (PMS610) at its
core, reducing the cost by 50%. American JPL (Jet Propulsion Laboratory) takes the
lead in designing a micro star sensor that uses CMOS image sensor and 32-bit RISC
processor. Its power consumption is just 500 mW when the voltage is 5 V.
With RISC processor at its core, RISC data processing circuit adds communi-
cation interface module, memory module, JTAG interface module, RS232 serial
communication module, and power module at the periphery to implement such
functions as debugging, computation, and communication. Figure 7.1 shows the
framework of the RISC data processing circuit.
Some memory space must be allocated in advance for the normal running of star
identification algorithm. The demand of star identification algorithm for memory
space is in two aspects:
• The running code segment, global variable, and stack of the program itself
require some memory space.
• Star identification algorithm must depend on guide database. Generally, GSC
and navigation feature database that form guide database require relatively large
memory space.
Due to the small internal memory capacity of the ARM RISC processor, it is
indispensable to extend external memory in the circuit design. The memory space is
expanded by attaching SRAM (Static RAM) and Flash memory banks to the EBI bus of
the RISC processor. Take the modified triangle algorithm based on angular distance
matching as an example. In designing the software, it is preliminarily estimated
that the running program itself requires about 1.6 MB of space, while the guide
database requires about 1.1 MB. Thus, the capacity of the SRAM and the Flash
memory used should each reach 2 MB.
Since the star image data processed by RISC data processing circuit comes from
the FPGA circuit of the front end, the interface that communicates with FPGA must
be included, which is implemented by the PIO (Parallel Input/Output) interface of
RISC processor. In addition, the RS-232 serial port is also configured for attitude
output.
At the development stage, the program used for implementing star identification
and attitude establishment must be downloaded to the RISC processor for debug-
ging. Thus, a JTAG interface is added to the RISC processing circuit to debug
software by connecting with the simulator.
The primary electronic components of RISC data processing circuit include RISC
processor and peripheral memory.
(1) RISC processor-AT91R40008
AT91R40008 is one of the AT91 ARM microprocessor series products of ATMEL
Company [3]. Its kernel is ARM7TDMI processor, the structure of which is shown
in Fig. 7.2. This processor is of 32-bit RISC structure with high performance and
density and 16-bit instruction set. Besides, its power consumption is very low.
Inside the processor, there are SRAM of 256 KB and numerous registers that can
rapidly deal with unusual interruptions, thus making it suitable for occasions with
real-time requirements. AT91R40008 can be directly connected to external mem-
ory. It has a fully programmable 16-bit external bus interface and can chip select
eight peripherals, which speeds up access to memory and reduces the price of the
system at the same time. AT91R40008 supports 8 bit/16 bit/32 bit read and write
(with SRAM of 256 KB inside) and interrupt control with eight priority grades. It
has 32 programmable I/O interfaces and three 16 bit timers/counters supporting
three external clocks input. AT91R40008 has two USART (Universal
Synchronous/Asynchronous Receiver/Transmitter) units which can provide the
function of full-duplex serial communication. AT91R40008 needs two levels for its
7.1 Implementation of Star Identification on RISC CPU 203
enable pins of Flash, and allocates base address 0x01000000 to Flash. The
read-write enable pins of RISC processor and memory can be connected accord-
ingly. It should be noted that SRAM needs to be connected to the high 8-bit gating
signal NUB and the low 8-bit gating signal NLB as well.
(2) SRAM Memory Circuit
SRAM memory circuit combines two 16-bit SRAM chips of 1 MB, extending the
space to 2 MB. The SRAM memory circuit is shown in Fig. 7.4.
As shown in Fig. 7.4, there are two SRAMs: Bank0 and Bank1, each with a
16-bit data line. The two SRAMs select CS_BANK0 and CS_BANK1 as their chip
select signals, respectively, which can be obtained through 74LVC139 decoder.
The two chip select signals are related to address A20. When A20 is zero,
CS_BANK0 is zero and CS_BANK1 is one. Here, SRAM0 is selected. When A20
is one, CS_BANK0 is one, and CS_BANK1 is zero. Here, SRAM1 is selected.
Thus, a SRAM module of 2 MB is formed, extending the memory space.
The two SRAMs share the read enable signal RD, the write enable signal WE
(NRD and NWE in Fig. 7.4, respectively) and the high 8-bit gating signal NUB and
the low 8-bit gating signal NLB.
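The bank-select behavior described above can be modeled in a few lines (a behavioral sketch of the decoder's truth table, not the actual circuit):

```python
def sram_chip_selects(addr):
    """Model the 74LVC139-based bank decode: address bit A20 selects which
    1 MB x 16 SRAM bank is enabled. Chip selects are active low, so 0
    means 'selected'. Returns (CS_BANK0, CS_BANK1)."""
    a20 = (addr >> 20) & 1
    return (0, 1) if a20 == 0 else (1, 0)
```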
(3) Interface Circuit Design of RISC Processor
When the RISC processor is at work, front end FPGA is required to provide the
coordinates of star spots in the captured star images. RISC processor receives data
from FPGA through PIO interface and then conducts star identification and attitude
establishment. Data transmission shares 19 PIO interfaces, three of which are used
for communication control, i.e., Status, Req, and Ack. The other 16 interfaces are
used for data transmission.
To transmit the centroid data of star spots to RISC processor at the back end of
star sensor, data transmission protocol between FPGA and RISC is defined.
Figure 7.5 shows the time sequence of data transmission protocol.
Status: Operation status of FPGA, input signal;
Req: Read requests of RISC processor, initial value set to high, output signal;
Ack: Answering signal of FPGA, active high, initial value set to low, input signal;
Data: 16-bit data read by RISC processor, input signal.
Before communication, Req is high, Ack is low, and the interface state machine
inside FPGA is idle. That Status is set to low means that a new frame of centroids
data has been prepared to conduct data transmission. Once the communication
starts, if the RISC processor reads the falling edge of Status, it can then set Req to
low level and send out data read requests. After reading the low-level signal of Req,
the state machine that implements interface protocol inside FPGA puts corre-
sponding data to data lines and sets Ack to high at the same time. Here, the interface
state machine inside FPGA is in the readable state. Noticing that Ack rises, the
RISC processor reads the data and then sets Req back to high, informing FPGA
that the data has been read. After FPGA reads that Req is set to high, the state
machine returns to the idle state and sets Ack to low. Thus, data transmission is
finished. The centroid data of all the star spots in a frame of image are transmitted
cyclically in this way.
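The handshake can be modeled as two cooperating routines, with the FPGA state machine written as a Python coroutine (a toy behavioral model of the protocol above; Status gating and real signal timing are omitted):

```python
def fpga_state_machine(words):
    """FPGA side of the link as a coroutine: for each Req level sent in,
    it yields the pair (Ack, Data)."""
    for word in words:
        req = yield (0, None)            # idle: Ack low, wait for Req low
        while req != 0:
            req = yield (0, None)
        req = yield (1, word)            # Req low seen: drive data, raise Ack
        while req != 1:
            req = yield (1, word)        # hold until RISC raises Req again
    yield (0, None)                      # frame finished, back to idle

def risc_read_frame(fpga, n_words):
    """RISC side: lower Req to request, latch data when Ack is high,
    raise Req to acknowledge, and repeat for every word in the frame."""
    received = []
    ack, data = fpga.send(None)          # prime the coroutine (Status low)
    for _ in range(n_words):
        ack, data = fpga.send(0)         # Req low: request the next word
        while ack != 1:
            ack, data = fpga.send(0)
        received.append(data)            # Ack high: latch the data word
        ack, data = fpga.send(1)         # Req high: word has been read
    return received
```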
It has been tested that the transmission time of the centroid data of each star spot
is approximately 0.5 ms. Take the centroid data of 20 star spots in a frame of star
image for example. The transmission time of the centroid data of each frame of star
image is approximately 10 ms, which can meet the real-time demand.
Figure 7.6 shows the physical graph of RISC data processing circuit.
The programs that run on a RISC processor mainly include star identification and
tracking programs. There are also some auxiliary programs such as start-up pro-
gram, serial communication program, time testing program, peripheral data com-
munication program, etc. The major codes are written in C language and part of the
start-up codes are written in ARM assembly language.
(1) Start-Up Program of RISC Processor System
Start-up program is the prerequisite for the operation of RISC processor system,
which is part of the whole program block that runs first. Its major function is to
Hardware-in-the-loop simulation test mainly uses the star field simulator to test the
process of star identification and star tracking by star sensor [2, 4, 5], verifies the
validity of the identification algorithm, and tests its performance at the same time.
7.2 Hardware-in-the-Loop Simulation Test of Star Identification 209
and star sensor is connected to the data processing computer by test cables. Based
on the motion path of a spacecraft, the star field simulator can calculate the bore-
sight pointing of star sensor and generate a real-time star image. Star sensor is
installed and aimed at the lens of the star field simulator. Both of their optical axes
should be as parallel as possible, and the joint between them should be shaded to
reduce the influence of stray light. Star sensor, through star images generated by
star field simulator, simulates the observation of the real night sky and can test the
functions of star sensor such as star identification, tracking, and attitude
establishment.
As shown in Fig. 7.7, star identification and star tracking tests are conducted by
aiming the boresight of star sensor at the star field simulator to verify the accuracy
of star identification and star tracking. The FOV of star sensor used in the test is
20° × 20° in size and the star identification algorithm is the modified triangle
algorithm based on angular distance matching. The star image simulation program
selects stars of 5.0 Mv, and the attitude angle of star sensor is set to 12° (right
ascension), 58° (declination), and 90° (roll). Figure 7.8 shows the photographed
star image. Since the FOV of the star field simulator is smaller than that of star
sensor, only the rectangular part in the middle of the star image is valid. Figure 7.9
shows the results obtained by identifying the photographed star image with star
identification software. As shown in Fig. 7.9, the software identifies the pho-
tographed image correctly and obtains the correct results of attitude establishment.
Figure 7.10 shows the attitude results obtained by the RISC processor of star
sensor through full-sky star identification. Compared with the results of star image
processing software and the attitude value set by star image simulation, it can be
seen that star sensor correctly identifies the star image simulated by star field
simulator.
To further verify the accuracy of star identification in the tracking status, con-
tinuous tracking and identification are done, and the star field simulator is set as
follows:
The initial attitude angle is 12° (right ascension), 58° (declination) and 30° (roll),
with the right ascension increasing at an angular velocity of 0.2°/s, the
declination at 0.2°/s and the roll angle remaining unchanged. Based on this
attitude, dynamic star images are
generated to simulate the on-orbit motion of star sensor. Star sensor photographs the
Fig. 7.10 Attitude output results obtained by star sensor through full-sky star identification
dynamic simulated star images, conducts star tracking and outputs the attitude
results. The interval of each data output of star sensor is 100 ms and the attitude
output results are shown in Fig. 7.11.
As shown in Fig. 7.11, star sensor can conduct stable star tracking of dynami-
cally simulated star images. It can be seen from the attitude establishment results of
right ascension and declination angles in Fig. 7.11 that the gradient of right
ascension and declination angles correctly reflects the angular velocity (0.2°/s).
The test of time taken in star identification is realized by using the timing and
counting modules of an ARM processor. The star field simulator generates 100 star
images randomly and the time taken in identifying each star image by star sensor in
the FOV is listed in Table 7.2. The time taken by star sensor in full-sky star
identification is from 0.18740 s to 1.21544 s, and the average is 0.47832 s.
Fig. 7.11 Attitude output results obtained by star sensor through star tracking of dynamic star
images
The way of testing the attitude data update rate is basically the same as that of
testing the time taken in full-sky star identification: both use the timing and
counting modules of the ARM processor to measure the time taken in star tracking.
When star sensor runs in Tracking Mode, ten measurements are taken for each fixed
number of tracked stars, and the time taken in each tracking step is recorded
(Table 7.3).
Table 7.3 Time taken by star sensor in tracking stars with different numbers
Number of tracked stars 4 5 6 7 8
Tracking time (s) 0.05516 0.05728 0.06945 0.05972 0.06048
0.05418 0.05718 0.05546 0.05928 0.07918
0.05990 0.05704 0.05470 0.05904 0.06912
0.06958 0.05680 0.05540 0.05894 0.06736
0.05876 0.05622 0.05528 0.05890 0.06736
0.05474 0.05512 0.05680 0.06154 0.06746
0.05450 0.05812 0.05630 0.05796 0.07525
0.05778 0.05714 0.05608 0.05984 0.07466
0.05778 0.05710 0.05646 0.05762 0.07464
0.05620 0.05660 0.05790 0.05770 0.07502
Average value 0.05786 0.05686 0.05738 0.05905 0.07105
Number of tracked stars 9 10 11 12 13
Tracking time (s) 0.06214 0.06348 0.07102 0.06994 0.08958
0.06282 0.06318 0.07130 0.07002 0.08670
0.06126 0.06760 0.07130 0.06982 0.08958
0.06946 0.08656 0.07128 0.06994 0.08670
0.06922 0.08366 0.07126 0.06952 0.08674
0.06926 0.07610 0.07114 0.06972 0.09266
0.08286 0.07678 0.06912 0.06950 0.09076
0.08130 0.07706 0.07860 0.08668 0.08954
0.08134 0.07720 0.07884 0.09656 0.09042
0.08136 0.07698 0.07850 0.09816 0.09058
Average value 0.07210 0.07486 0.07324 0.07699 0.08933
In Tracking Mode, the maximum number of stars that can be tracked by star
sensor is 13, and each tracking can be completed in 100 ms, i.e., the attitude data
update rate of star sensor can reach 10 Hz.
To test the performance of star sensor more correctly and understand the operational
status of star sensor in practical application, the real night sky needs to be observed
for further testing. Similar to the hardware-in-the-loop simulation test, the field
experiment focuses on investigating whether the functions of star sensor (star
identification and star tracking) are effective in the real night sky. The experiment
also conducts preliminary evaluation of the star sensor’s performance in attitude
establishment.
216 7 Hardware Implementation and Performance Test …
When the star sensor observes stars in the field, the star images photographed by the star sensor are usually compared with those simulated by the Skymap software before the full-sky star identification algorithm is verified. Through manual identification, the correct matches of the measured stars in the star image are determined, providing a reference for the accuracy of star identification. Figure 7.12 shows the main interface of Skymap.
Skymap is powerful night-sky simulation software that provides a wide range of astronomical reference information for both professional astronomers and amateurs. Its main functions are as follows:
① Display the night sky that can be observed at any place on the earth between
4000 BC and 8000 AD. The observation scope can be as large as the whole sky,
or as small as a tiny region.
② Zoom in or zoom out of the celestial areas that are of interest and rotate the
night sky through the use of keyboard or mouse.
③ Display more than 15 million stars and over 200 thousand extended celestial
bodies: star cluster, nebula, galaxy, etc.
④ Display the positions of the sun, the moon, and the major planets with a margin
of error less than 1″.
⑤ Display the names of the 88 constellations, their stick-figure connections, and all the known asteroids and comets (from a database of over 11,000 asteroids and comets).
⑥ Display the grids and graduation lines of various different coordinate systems
such as the horizon coordinate system, the equator coordinate system, the
ecliptic coordinate system, the galactic equator coordinate system, etc.
⑦ Add annotations to the star image. A circular camera FOV with a rectangular CCD can be freely configured for observation.
Figure 7.13 shows a photograph of the field star observation experiment. Figure 7.14 shows the star image photographed when the boresight of the star sensor is
pointed at Cassiopeia and its surroundings. The FOV of star sensor is 20° × 20°
and its exposure time is 200 ms. Figure 7.15 shows the centroiding results of
photographed star images. The specific procedures of centroiding are available in
Sect. 2.4.
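The centroiding step referenced above (Sect. 2.4) is, in essence, a gray-weighted average over the pixels above a threshold. The sketch below illustrates the idea for a single star spot; it is a simplified illustration rather than the book's exact implementation, and the function name, the use of numpy, and the synthetic image are assumptions.

```python
import numpy as np

def centroid_spot(image, gray_threshold=50):
    """Gray-weighted centroid of the pixels above the threshold:
    each pixel contributes its coordinates weighted by its gray
    value, giving a sub-pixel position estimate."""
    ys, xs = np.nonzero(image > gray_threshold)
    w = image[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Tiny synthetic spot: a bright core at (2, 2) with a dimmer neighbor
img = np.zeros((5, 5))
img[2, 2] = 200.0
img[2, 3] = 100.0
x, y = centroid_spot(img)   # centroid pulled toward the brighter pixel
```

Lowering `gray_threshold` admits fainter pixels into the weighted sum, which is the mechanism behind the threshold discussion later in this section.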
The longitude and latitude of the place for field star observation, the measure-
ment time, the size of the measured FOV, and other parameters are set in Skymap
software. The celestial area near Cassiopeia is selected for comparison. Figure 7.16
shows the star image of the celestial area near Cassiopeia generated by Skymap.
Through one-by-one comparison, the star spots (marked by cross lines) extracted from the real star image can all be matched to their corresponding stars. The comparison results are shown in Table 7.4. It can be seen that every measured star in the measured star image finds its corresponding guide star through manual identification.
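This one-by-one comparison can also be automated as a nearest-neighbor association between extracted star spots and the simulated star positions. The following sketch works in pixel coordinates; the function name, the tolerance value, and the sample points are illustrative assumptions, not data from the experiment.

```python
import math

def match_spots(spots, simulated, tol_px=5.0):
    """For each extracted spot, find the nearest simulated star
    position; pairs farther apart than the tolerance are left
    unmatched (None), flagging spots with no catalog counterpart."""
    matches = {}
    for i, (sx, sy) in enumerate(spots):
        d, j = min((math.hypot(sx - cx, sy - cy), k)
                   for k, (cx, cy) in enumerate(simulated))
        matches[i] = j if d <= tol_px else None
    return matches

spots = [(100.2, 50.1), (300.0, 200.5)]        # extracted centroids
simulated = [(99.8, 50.0), (301.0, 201.0), (400.0, 400.0)]
pairs = match_spots(spots, simulated)          # {0: 0, 1: 1}
```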
Fig. 7.13 Photograph of the star sensor during field star observation
Fig. 7.16 Star image of the corresponding celestial area generated by Skymap
The photographed real star images are identified by using the modified triangle algorithm based on angular distance matching (introduced in Sect. 3.2). The results are shown in Fig. 7.17. It can be seen that the measured stars in the measured star image
are identified correctly. The index number of the measured star’s corresponding
matching guide star can be found in the GSC. Through this index number, the SAO
index number of the star can be searched out from the SAO J2000 star catalog. Both
of the above results are in line with those of the manual identification of Skymap. It is
noticeable that there are some subtle differences between the magnitude information
in the SAO star catalog and that in the Skymap star database.
From the identification results, the star detection sensitivity of the star sensor can be estimated. It can be seen from the identification results in Fig. 7.17 that stars of magnitude 5 (Mv) can be measured by the star sensor. In fact, if the gray threshold (set to 50 in
Fig. 7.16) is lowered, a larger number of fainter stars can be extracted.
In the actual star observation tests, a large number of photographed star images went through the full-sky star identification test. The tests show that a 100% identification rate can be obtained when the number of measured stars in the FOV is greater than four.
In the field star observation experiments, specific tests have been done for cases where interfering stars appear. Figure 7.18 shows the identification result of a star image photographed at Nanshan Observatory, Xinjiang (latitude and longitude: N43°28′18″, E87°10′45″) at 8 a.m. on December 31, 2008. The FOV of the star sensor
is 20° × 20°, and its exposure time is 100 ms.
Fig. 7.17 Results of star identification by using the modified triangle algorithm based on angular distance matching
As can be seen from the results, the bright star marked 4 is not identified correctly (its Index and Mag are marked −1 and 99.0, respectively). The star image for this place and time was simulated with the Skymap software; Fig. 7.19 shows the star image of the corresponding sky zone generated by Skymap. It can be seen from Fig. 7.19 that the star marked 4 is Saturn. The existence of Saturn in the FOV does not affect the correct identification of the other stars or the correct output of the attitude establishment result.
In Tracking Mode, the accuracy of star tracking is verified by continuously outputting the attitude. Figure 7.20 shows the attitude output curves for continuous star tracking over 3000 frames. As can be seen from the curves, the star sensor conducts stable tracking and attitude output in the process of field star observation. The gradient of the right ascension curve reflects the earth's rotation velocity.
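The claim that the right-ascension gradient reflects the earth's rotation can be checked with a one-line calculation: the sky drifts 360° per sidereal day. The figures below are a back-of-envelope sketch, not values read from Fig. 7.20.

```python
sidereal_day_s = 86164.1                  # one sidereal day in seconds
rate_deg_per_s = 360.0 / sidereal_day_s   # ≈ 0.004178 °/s ≈ 15.04 °/h

# At the 10 Hz attitude update rate, consecutive attitude outputs
# should therefore differ in right ascension by roughly 0.42 mdeg
step_deg = rate_deg_per_s / 10.0
```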
Fig. 7.18 Identification results when there are interfering stars in the measured star image
Fig. 7.19 Star image of the corresponding celestial area generated by Skymap
Fig. 7.20 Result curve of attitude output of continuous star tracking of 3000 frames
References
1. Yang J, Jiang J, Zhang G, Li S (2005) RISC technique in star sensor. Opto-Electron Eng 32(8):19–22
2. Yang J (2007) Star identification algorithm and RISC technique. Doctoral thesis, Beijing University of Aeronautics and Astronautics, Beijing, pp 62–96
3. AT91R40008 electrical characteristics. http://www.atmel.com
4. Yang J, Zhang GJ, Jiang J, Fan QY (2007) Semi-physical simulation test for micro CMOS star sensor. In: International Symposium on Photoelectronic Detection and Imaging, Beijing, China. Proc SPIE, vol 6622
5. Wei X, Zhang G, Fan Q, Jiang J (2008) Ground function test method of star sensor using simulated sky image. Infrared Laser Eng 37(6):1087–1091