Unmanned Aircraft Systems: International Symposium on Unmanned Aerial Vehicles, UAV'08

Chapter 1: Introduction to Unmanned Aircraft Systems

[mh]Evolution of UAV Technology

Although the word originally denoted a male bee, the use of "drone" for unmanned aircraft is commonly traced to the "Queen Bee," a British radio-controlled target aircraft of the 1930s. The historical record shows that the first vehicle fitting the definition of an unmanned aerial vehicle (UAV) was the unmanned hot-air balloon flown in France in 1783. Since then, drones have continued to be used in intelligence, aerial surveillance, search and rescue, reconnaissance, and offensive missions as part of the military Internet of Things (IoT). Drones are now widely used in areas such as traffic surveillance, cargo transport, first aid, agriculture, entertainment, hobby flying, and security, as they provide customizable solutions that combine practicality and speed.

However, the low resolution of open-source satellite images, the high cost of high-resolution imagery, and the dependence of satellite imaging on weather conditions create significant problems in acquiring and evaluating images. Drones fill an important gap here: images taken by drones flying several meters above the ground, combined with cloud-based data analysis, allow producers to monitor product development and quality continuously, easily, and quickly.

Such equipment also allows real-time monitoring of site conditions in the construction industry, rapid analysis of excavation areas in mining, precise delimitation of the area to be excavated, and preliminary planning. In the energy and infrastructure sector, roads, cables, and pipelines can be surveyed and planned accordingly. Aid organizations use drones to locate camps, plan transportation routes, and monitor operations. Drones also enable the rapid delivery of goods and services and the deployment of communication infrastructure in areas with a high density of buildings and people or without road access.

Transport drones fulfill important tasks in delivering medical supplies and foodstuffs over long distances in emergencies and rapid rescue efforts. In the logistics industry, drones are used to detect damage and cracks in ship structures and hulls, and they allow emergency teams such as the fire brigade to intervene in dangerous areas quickly and safely. Drones also measure wear on highway routes, perform security checks on bridges and tunnels, and inspect the interiors of partially damaged buildings after disasters. They can likewise meet the communication needs of work groups or rescue teams in the field, helping to establish private communication networks quickly. Battery life remains the biggest obstacle to drone use in long-range tasks, although in short-range work longer missions are possible by swapping batteries. Another important limitation of today's unmanned aerial vehicles is the need for human supervision in almost all of the tasks described, which slows both the execution of these functions and the response in case of danger.

Many factors must be weighed depending on a drone's size and flight characteristics, among them the increase in energy cost as size shrinks and the difficulty of hovering. Candidate configurations include traditional fixed-wing and rotary-wing designs as well as bio-inspired designs based on flapping wings, each with its own advantages and disadvantages. Fixed-wing aircraft can fly quickly and efficiently but cannot hover; rotary-wing designs can hover and are highly maneuverable but have lower flight efficiency. No single fixed- or rotary-wing design is ideal in both aerodynamic and propulsion performance .

The growth of commercial and personal drone use has necessitated regulations to prevent accidents and keep drone operations from posing a hazard . Although many countries have created UAV regulations, the increasing use of drones means the rules change constantly and new ones are introduced, and they vary between countries and regions. In the UK, for example, the Civil Aviation Authority (CAA) limits the flight altitude of unmanned aerial vehicles to 500 feet and requires drones weighing more than half a kilogram to be registered with the CAA. Its "Dronecode" also prohibits flying near airports or other aircraft, requires staying below 400 feet and at least 150 feet away from buildings and people, and requires that the drone be kept under observation throughout the flight .

Multirotor drones, designed with a kite-like frame and one motor at each diagonal corner, consist of four basic components: propellers, motors, body, and flight controller. They are commonly labeled 250- or 450-class according to the diagonal distance (in millimeters) between opposite motors, and drones overall are classified by physical structure as single rotor (helicopter), multi-rotor (multicopter), fixed wing, and fixed-wing hybrid VTOL.

Single-rotor drones are small helicopters, available in fueled or electric versions. Operating with a single main rotor and fuel power offers advantages such as greater stability and longer flight distances, but it also brings safety risks.

Multicopters are the smallest, lightest, and most widely used drones on the market. Their flight distance, speed, altitude, and payload are limited. These drones, which usually carry a light payload such as a camera, are used for terrestrial observation and survey flights of up to about 50 minutes.

Multicopters are divided into models with four motors (quadcopter), six motors (hexacopter), and eight motors (octocopter). The main design factors are the drone's carrying capacity and the required range, which determine the size and number of motors as well as the motor, chassis, and battery choices. Depending on range and control structure, models span from hobby drones with a range of 30–40 meters to professional models with ranges above 10 km.

[mh] Applications of UAVs in Civilian Sectors

Research on the use of UAVs for civilian applications has attracted great interest in recent decades, as UAVs prove to be highly useful tools for a plethora of use cases. UAVs were initially employed for military applications, responsible for a variety of missions. Their decreasing cost, high aerial mobility, and advances in battery technology have made UAVs highly attractive for civilian applications as well . Proposed civilian applications include agriculture, photography, shipping and delivery, disaster management, rescue operations, archeological surveys, geographic mapping, human health, livestock surveillance, safety inspection, wildlife observation, weather forecasting, emergency response, telecommunications, and border surveillance .

UAVs vary widely, from micro-UAVs weighing around 100 g to large UAVs weighing over 100 kg. They also differ in their control configuration. Some examples of UAVs with different kinematic models are presented in the figure. Size and control configuration should both be considered when selecting a UAV for a specific application and when designing the guidance methodology to apply.

Figure: Examples of UAVs in the Gazebo simulator. (a) A multirotor quadcopter, (b) a vertical take-off and landing (VTOL) aircraft, (c) a fixed-wing aircraft.

UAVs are expected to decrease financial cost, improve performance in terms of range and completion time, and minimize human fatigue and safety risk in the operations they are involved in. Systems of multiple UAVs collaborating toward a user-defined goal reduce the response time of time-critical operations (e.g., search and rescue missions). Autonomy capabilities reduce the operator's workload and enable operations with multiple UAVs. In addition, UAVs with autonomy features should lower the risks related to the performance of the human operator . Fully autonomous UAVs make decisions on their missions and planning with no human intervention .

Four levels of autonomy for UAVs are identified in , ranging from fully autonomous to remote control:

 Fully autonomous: The UAV is capable of achieving its given goal and completing its mission with no human intervention, while accounting for operational and environmental conditions.
 Semi-autonomous: The UAV is capable of autonomous operation between human interactions. The mission is planned and executed by the human operator and/or the UAV.
 Teleoperation: The UAV receives actuator commands or continuously updated goals from the human operator, who accesses sensory data from the vehicle.
 Remote control: The UAV is continuously controlled by a human operator and only conducts Line-of-Sight missions.

The significant advantages and numerous applications of UAVs are expected to cause a substantial increase in the number of UAV operations over urban and rural areas, making methods to manage and control the growing UAV traffic ever more urgent. A UAV Traffic Management (UTM) system is responsible for supporting, monitoring, and regulating the safe and smooth incorporation of UAVs into civilian airspace. UTM systems are seen as part of, or an addition to, Air Traffic Management (ATM), which has served manned aviation for decades. Research on UTM has surged in the last decade, with several research programs focusing on defining its requirements, describing its operations, and designing and testing its implementations.

In this chapter, a short literature review of different civilian applications of UAVs is presented,
focusing on guidance algorithms designed to increase the vehicles’ autonomy capabilities in decision-
making and planning to support fully autonomous operations. Recent work with guidance methods
designed for specific civilian applications is presented. Additionally, an overview of proposed traffic
management systems and concepts for UAVs is presented, describing their safe incorporation into
civilian airspace.

[h]Civilian applications

In this section, guidance and decision-making paradigms based on the intended use case are described. Although different UAV applications and use cases can be formulated as well-known guidance problems (e.g., the traveling salesman problem (TSP), the vehicle routing problem (VRP), the coverage path planning problem), each application type introduces specific constraints and optimization parameters for the guidance system. Six main civilian applications are presented: cinematography, payload delivery and shipping, agriculture, surveillance, search and rescue, and disaster and environmental monitoring.

UAV operations in are separated into six categories: area coverage, search, routing for a set of locations, data gathering and recharging in wireless sensor networks, allocating communication links and computing power to mobile devices, and operational aspects of a self-organizing network of drones. Most of the applications considered in this work fall into the first three categories. The figure shows the relation between those three operation categories and the UAV applications studied in this work.

Figure: Civilian applications of UAVs and their relations to three common operations .

In area coverage operations, the UAVs must scan a specific area. In the coverage path planning problem, the UAV must design a path that covers all points of the area with its sensors. When full area coverage is not possible, the designed path must maximize the collected information while respecting the imposed constraints. If the area is decomposed into a grid of cells, the problem can be transformed into a traveling salesman or vehicle routing problem. Coverage operations are applied in agriculture, surveillance, and disaster and environmental monitoring.
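The grid-based coverage formulation above can be sketched in a few lines. The rectangular area, cell size, and boustrophedon (back-and-forth) sweep below are illustrative assumptions, not details from any cited method:

```python
# Sketch: decompose a rectangular area into grid cells and generate a
# boustrophedon (back-and-forth) sweep path visiting every cell center.
# The area bounds and cell size are illustrative assumptions.

def coverage_path(width, height, cell_size):
    """Return cell-center waypoints covering a width x height area."""
    cols = int(width // cell_size)
    rows = int(height // cell_size)
    path = []
    for r in range(rows):
        # Alternate sweep direction on each row for a continuous path.
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            path.append(((c + 0.5) * cell_size, (r + 0.5) * cell_size))
    return path

waypoints = coverage_path(100.0, 60.0, 20.0)
# 5 columns x 3 rows -> 15 waypoints, each cell visited exactly once
```

Because the grid cells become an ordered list of waypoints, this is also the point where the coverage problem can be handed to a TSP/VRP solver, as noted above.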

In search operations, the UAVs are tasked to explore an area and locate specific targets of interest whose locations are unknown. The operation or search area is usually partitioned into a grid of cells, and the cells are associated with probabilities to create a belief map of target existence. The applied approach is usually optimized to minimize the time to detect the targets. Search operations include applications in search and rescue, surveillance, and disaster and environmental monitoring.

In routing operations, a set of waypoints of interest is given by the end-user or generated by another system, and the UAV must design paths visiting them all while minimizing time or energy criteria. Routing operations are generally formulated as TSP or VRP problems, with multiple variations identified in the literature. If the kinematic model of the UAVs is considered, the problems may be converted into Dubins-TSP or Dubins-VRP; similarly, if multiple UAVs cooperate to visit all locations, multi-vehicle TSP or VRP problems are defined. Additional constraints are included depending on the intended application: for example, waypoints might be coupled with specific visitation time windows, or some waypoints might need to be visited in a specified sequence. Routing operations are commonly encountered in cinematography and payload delivery.
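As a minimal illustration of routing, the sketch below orders waypoints with a greedy nearest-neighbor TSP heuristic; the depot position and waypoint coordinates are invented for the example:

```python
# Sketch: greedy nearest-neighbor heuristic for the routing (TSP-style)
# problem - visit all waypoints starting from a depot, always moving to
# the closest unvisited one. Coordinates are illustrative assumptions.
import math

def nearest_neighbor_route(start, waypoints):
    """Order waypoints greedily by distance from the current position."""
    route, current = [], start
    remaining = list(waypoints)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

route = nearest_neighbor_route((0, 0), [(5, 5), (1, 0), (2, 2)])
# greedy order: (1, 0) -> (2, 2) -> (5, 5)
```

A nearest-neighbor tour is not optimal in general, which is why the literature surveyed here turns to exact MILP formulations or metaheuristics for larger instances.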

[h]Cinematography

Autonomous UAV cinematography enables capturing aerial video footage of previously hard-to-reach areas, innovative visual effects and shot types, coverage of large areas and multiple targets, capturing a scene from multiple view angles, and cost reduction compared to manual shooting . UAV cinematography concepts are described by the desired camera motion shot type and the desired framing shot type given by the director or human end user. The camera motion shot types define the UAV's trajectory and are categorized into static, dynamic, target tracking, and dynamic target, depending on whether the UAV is moving and whether its motion directly depends on the target's trajectory . The framing shot type describes the percentage of the camera image covered by the target.

An autonomous system for cinematography including multiple UAVs consists of:

 A high-level planner, responsible for generating and allocating specific tasks to the system’s
UAVs while considering time and resources constraints.
 A path planner, responsible for generating a list of waypoints for the involved UAVs, including
specification for the camera’s attitude while considering the vehicles’ safety.
 A trajectory follower, responsible for guiding each UAV to execute the generated path while
controlling its camera to provide the desired shot angle.
 A scheduler, responsible for synchronizing the action of the above three modules.

The autonomous cinematography UAV system uses as input a high-level mission description provided
by the director/human end-user. The mission description contains a set of artistic instructions including
shot types, starting time and duration, positions and targets, etc.

Trajectories for cinematography UAVs must meet esthetic quality criteria in addition to constraints
imposed from the UAV’s dynamics. The attitude of the UAV and camera must be planned to obtain the
desired result. The trajectory planning of the UAV and the attitude control of the camera can be
approached as one optimization problem or be decoupled and solved separately. A proposed method of
trajectory planning for flying cameras is presented in . The problem is formulated in a non-linear
model predictive contouring control manner, and it is solved online in a receding horizon fashion. The
formulated optimization problem includes dynamic planning and collision avoidance to smoothly
guide the UAV to follow virtual rails, the desired 3D path.

By decoupling the two problems, the headings of the UAV and the camera are examined independently. A covariant gradient descent is proposed in to compute the UAV's trajectory while minimizing a cost function that includes smoothness, shot quality, occlusion, and safety metrics. A desired trajectory can also be computed by formulating the problem as a constrained nonlinear optimization problem solved in a receding horizon manner . This makes it possible to minimize the required camera changes for smooth camera movement, the vehicle's acceleration for a smooth and efficient trajectory, and the distance to the target to guide the vehicle towards the desired location. The UAV's kinematic constraints and collision avoidance constraints must be added to ensure the generated trajectory is feasible and safe.
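The decoupled trajectory optimization can be illustrated with a toy gradient descent over a discretized one-dimensional camera path, trading a smoothness term (second differences) against a distance-to-target term. The cost weights, learning rate, and path values are illustrative assumptions, not parameters from the cited methods:

```python
# Toy sketch of trajectory optimization by gradient descent on a
# discretized 1D path: minimize w_smooth * sum(second differences^2)
# + w_goal * (endpoint - target)^2, with the start point held fixed.
# All weights and values are illustrative assumptions.

def optimize_path(path, target, w_smooth=1.0, w_goal=2.0,
                  lr=0.05, iters=2000):
    path = list(path)
    for _ in range(iters):
        grad = [0.0] * len(path)
        # smoothness: penalize d2 = x[i-1] - 2*x[i] + x[i+1]
        for i in range(1, len(path) - 1):
            d2 = path[i - 1] - 2 * path[i] + path[i + 1]
            grad[i - 1] += 2 * w_smooth * d2
            grad[i] += -4 * w_smooth * d2
            grad[i + 1] += 2 * w_smooth * d2
        # goal term: pull the final waypoint toward the target
        grad[-1] += 2 * w_goal * (path[-1] - target)
        for i in range(1, len(path)):  # keep the start point fixed
            path[i] -= lr * grad[i]
    return path

path = optimize_path([0.0, 3.0, -2.0, 5.0, 1.0], target=10.0)
# converges toward the straight line [0, 2.5, 5, 7.5, 10]
```

The real methods add shot-quality, occlusion, and collision terms to the same kind of cost and re-solve it online over a receding horizon, but the descent structure is analogous.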
[h]Payload delivery and shipping

Employing UAVs for package and food delivery missions is expected to minimize delivery times and reduce delivery costs . In addition, it has the potential to decrease energy consumption and CO2 emissions . A UAV-based system for food delivery is presented in . The buildings of the area are described in a 3D map, and the A* algorithm is used to compute the shortest path from the origin to the desired delivery point.
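As a rough sketch of that shortest-path step, the following implements A* on a 2D occupancy grid standing in for the 3D building map; the grid, obstacle cells, and endpoints are invented for the example:

```python
# Sketch: A* shortest path on an occupancy grid, a 2D stand-in for the
# 3D building map described above. Grid and endpoints are illustrative
# assumptions, not data from the text.
import heapq

def astar(grid, start, goal):
    """4-connected A* with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route to the delivery point

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = building / no-fly cell
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
# detours around the blocked row: a 7-node route
```

In the cited system the same search runs over a 3D map, so each node also carries an altitude coordinate; the algorithm itself is unchanged.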

While the path planning problem for a single delivery is relatively simple to solve, this is not the case for deliveries using multiple UAVs. The problem then becomes a vehicle routing problem, in which the optimal assignment of UAVs to deliveries must be computed while minimizing criteria like delivery time and energy consumption. A genetic algorithm for assigning delivery tasks to UAVs is presented in . Authors in approach the problem using a Mixed-Integer Linear Programming (MILP) model fitted to optimize several objectives in order to minimize delivery time and energy consumption. The UAVs collaborate to collect and deliver packages. After the routing problem has been formulated, a matheuristic method is applied to generate solutions in restricted computational time. A mixed-integer programming model is presented in , which integrates constraint sets generated by the business logic of food delivery.

[h]Agriculture

State-of-the-art UAV farming technologies include UAV-based planting methods, which decrease planting costs by up to 85% . Potential UAV agriculture applications include planting, crop and spot (i.e., targeted on weeds) spraying, crop monitoring, irrigation monitoring (i.e., identifying areas with low soil moisture, dehydrated crops, or water-logged areas), soil and environmental condition monitoring, cattle monitoring, and mustering (i.e., locating and gathering livestock in a large area) . Aerial vehicles are not impacted by the difficult terrain conditions frequently encountered in agricultural applications, and by adjusting their flight path and altitude they can offer either a high-level overview of the field or detailed information about a target of interest . Research into the integration of UAV and multi-UAV systems with autonomy capabilities has boomed, due to the many potential applications identified and the benefits of UAVs.

A UAV system for remote sensing and multi-spectral data collection from a field is proposed in . The system must plan a flight for area coverage. The waypoints of the trajectory are computed in relation to the area covered by one image collected by the UAV. The UAV moves forward and laterally and hovers over the generated waypoints to cover the area of interest in its entirety. The IDeAL system, presented in , uses UAVs to support agricultural IoT. The Strip Division along Resultant Wind Flow approach is proposed as a path planning technique for area coverage over the field, minimizing information loss, coverage time, path deviation due to wind, and energy consumption. The method is initialized by computing the convex hull of the field's boundary, and then a path to cover the area of the field is generated. The path computation considers optimization parameters like travel distance, coverage overlap, energy consumption, the number of sharp turns, and deviation from the planned path. The area of interest is scanned by a sweeping motion of the UAV. The field is separated into strips so that the forward UAV motion along a strip is parallel to the wind direction, minimizing the path deviation due to wind.
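The convex-hull initialization step mentioned above can be sketched with Andrew's monotone chain algorithm (one common choice; the text does not specify which hull algorithm is used). The field boundary points are illustrative:

```python
# Sketch: Andrew's monotone chain convex hull, as one way to compute the
# field boundary before strip generation. The points are illustrative
# assumptions, not field data from the text.

def convex_hull(points):
    """Return hull vertices, interior and collinear points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    cross = lambda o, a, b: ((a[0] - o[0]) * (b[1] - o[1])
                             - (a[1] - o[1]) * (b[0] - o[0]))
    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

field = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # (2, 1) is interior
hull = convex_hull(field)
# hull keeps only the four corner points
```

Once the hull is known, the strips can be generated as parallel sweep lines clipped against it, oriented along the measured wind direction.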

A UAV system capable of autonomously finding livestock in freely moving herds is presented in . The UAV must search a given field and locate animals whose positions are unknown. The problem is formulated as a dynamic TSP, in which the waypoints to be visited are not given preflight and the route is updated online. The problem is solved with a dual-stream deep network architecture that computes navigation commands on the grid-based flight area, using current sensory data together with historic map data of the areas already explored.

A route optimization method for UAV spraying in precision agriculture is proposed in . The route planning algorithm receives stressed areas requiring spraying and generates a UAV flight plan to cover those regions, which may be of irregular shapes and sizes. The method uses the convex hull of the stressed areas and creates Voronoi diagrams to compute the optimal spray waypoints, depending on the radius of the spray. After the set of waypoints to be visited has been identified, the problem can be formulated as a TSP to compute the shortest path visiting all the waypoints. A variation called clustered TSP is used, defined for optimizing a route over waypoints clustered into groups based on their location. This formulation fits the spraying problem well, as the computed waypoints are clustered according to their corresponding stress region. Agricultural spraying adds specific constraints to the obstacle avoidance problem: sprayer UAVs carry a heavier payload, as they must lift the spraying liquid, and the spraying process must cover the desired area, so coverage optimization should be considered when selecting an obstacle avoidance approach. An overview of obstacle detection and avoidance methods for this application is provided in . Six families of real-time collision avoidance algorithms are considered for agricultural spraying UAVs: bug algorithms, Artificial Potential Field (APF), collision cone, fuzzy logic, Vector Field Histogram (VFH), and Neural Networks (NN). Bug, APF, and collision cone algorithms are simple to implement and do not create a heavy computational load before or during flight. Fuzzy logic and NN systems require training or learning with large computational cost, and their performance and ability to generalize depend on the training data. VFH algorithms have high computational needs and do not consider the vehicle's dynamics.
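A minimal APF step, assuming a quadratic attractive potential and an inverse-distance repulsive potential with invented gains, might look like this:

```python
# Sketch: one Artificial Potential Field (APF) step for a spraying UAV,
# combining an attractive pull toward the goal with repulsion from
# nearby obstacles. Gains, influence radius, and positions are
# illustrative assumptions.
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=5.0):
    """Return the resulting force vector at position `pos`."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        d = math.dist(pos, (ox, oy))
        if 0 < d < d0:  # repulsion only inside the influence radius d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * (pos[0] - ox) / d
            fy += mag * (pos[1] - oy) / d
    return fx, fy

force = apf_step((0.0, 0.0), (10.0, 0.0), [(2.0, 1.0)])
# the pull toward the goal is reduced and a negative y component
# appears, steering the UAV away from the obstacle
```

This simplicity is what makes APF attractive for onboard use, though it is known to suffer from local minima, which the survey's fuzzy-logic and learning-based families try to avoid.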

Using multi-UAV systems to cooperatively execute agricultural tasks increases the accuracy and efficiency of the system. A distributed swarm control algorithm for agricultural operations is introduced in . Each UAV of the swarm is driven by three control inputs: (1) the UAV control, guiding the vehicle to its desired position; (2) the formation control, which maintains the desired inter-vehicle distances to preserve communication connectivity while preventing inter-vehicle collisions; and (3) the obstacle avoidance control, responsible for avoiding collisions with static obstacles. The formation and obstacle avoidance inputs are computed using artificial potential functions, generating both attractive and repulsive actions for formation control and solely repulsive actions for obstacle avoidance. A multi-UAV system for farmland inspections is presented in . It uses an on-the-fly autonomous path planning algorithm that considers information on the strategic, tactical, and operational levels. On the strategic level, the algorithm considers the end-user's mission description. On the tactical level, the UAV can decide to modify its path during mission execution based on new information collected by its sensors or received from a cooperating vehicle. The local path is computed at the operational level to generate safe, feasible, and efficient control commands.

[h]Surveillance

Surveillance applications require repeated coverage of the area of interest, as the monitored phenomenon is dynamic. The full area should be monitored, and the selected method should minimize the maximum time between visits to the same region . A single- and a multi-UAV method for surveillance, with suggested modifications to integrate dynamic and endurance constraints, are presented in . The area is decomposed into a grid, and each cell is assigned an age value corresponding to the time elapsed since its most recent scan. The next cell to be visited is selected using a control policy based on the ages of all the cells.
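The age-based control policy can be sketched as follows; the grid size and visit loop are illustrative assumptions:

```python
# Sketch: age-based surveillance policy - every cell tracks the time
# since its last visit, and the UAV always heads for the oldest cell.
# Grid size and the number of ticks are illustrative assumptions.

def step(ages, visited):
    """Advance time by one tick and reset the visited cell's age."""
    for cell in ages:
        ages[cell] += 1
    ages[visited] = 0

def next_cell(ages):
    """Select the cell with the maximum age (ties broken arbitrarily)."""
    return max(ages, key=ages.get)

ages = {(r, c): 0 for r in range(2) for c in range(2)}
for _ in range(4):
    target = next_cell(ages)
    step(ages, target)
# after four ticks every cell has been visited once; ages span 0..3
```

A real implementation would weight the age by travel distance, so the UAV does not cross the whole area for a marginally older cell.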

Surveillance procedures in urban environments impose specific constraints, as the increased density of tall buildings creates multiple occlusion cases for the UAV's sensors. An occlusion-aware approach for UAV surveillance in cities is proposed in . The surveillance task is formulated as a 3D Art Gallery Problem and solved with an approximation approach to define a set of waypoints that must be visited for full coverage. The path planning problem of connecting all computed waypoints is defined as a Dubins-TSP, and the spiral and alternating algorithms are used to compute an optimal solution. Another approach to computing the set of waypoints for full coverage is to discretize the target area and use a genetic algorithm to select the required waypoints . The UAVs' paths are computed using the Ant Colony System (ACS) method, fitted with piecewise cubic Bezier curves to generate smooth and feasible paths.

A cooperative surveillance strategy, with connectivity constraints, for a heterogenous team of UAVs is
presented in . The decentralized algorithm implements area partitioning for irregular, urban areas by
creating sub-areas each assigned to one UAV. The coverage paths are computed to minimize the
maximum time between two sequential visits of an area and the maximum time to disseminate
collected data within the system.

A distributed multi-agent deep reinforcement learning-based algorithm for surveillance of a set of known targets is introduced in . Energy consumption is optimized in addition to surveillance performance.

[h]Search and rescue

Search and rescue (SAR) missions are highly time-critical, as the survivability of victims decreases with time. For this reason, multi-UAV collaborative search operations are proposed.

A centralized planning algorithm for multi-UAV collaboration in search and rescue missions, called the layered search and rescue (LSAR) algorithm, is described in . LSAR assumes that the distribution of survivors is denser near the center of a disaster and that survivors closer to the center have higher rescue priority. The disaster area is divided into regions of different sizes, with regions closer to the disaster center smaller than regions farther away. UAVs are assigned to regions, prioritizing those closest to the center while covering the maximum number of regions.

For most SAR implementations the search area is described in a grid representation, and each cell of the grid corresponds to one single-UAV task. This allows the search problem to be reformulated as a multi-UAV task allocation (MUTA) problem. A bio-inspired algorithm for multi-UAV search and rescue missions, based on the foraging behavior of fish searching for food, is proposed in . The UAVs are divided into groups representing schools of fish, each with one leader UAV. The group's leader selects the next search region for its group, and follower UAVs search grid cells in the region indicated by their leader. Followers have a forgetfulness feature that lets them abandon their leader and join another group, or create a new one, if their performance in discovering survivors is low.

Another example of a bio-inspired algorithm for SAR is shown in : a multi-UAV system based on locust behavior when searching for food sources. In the search phase of the mission, during which there is no a priori information on the location of survivors, UAVs act as locusts in their solitary phase and spread over the disaster area, selecting regions not assigned to another UAV. Search-phase UAVs are distinguished into scout UAVs, which are strongly repelled by each other and only select regions unassigned to other scouts, and eagle UAVs, which explore unassigned grid cells near the average locations of other UAVs. In the rescue phase, designed for more detailed exploration, UAVs act as locusts in their gregarious phase and are attracted to regions depending on the number of survivors detected in each. A similar idea, assigning social and antisocial behaviors to UAVs for SAR missions, is explored in . Antisocial searcher UAVs are guided far away from each other, spreading the swarm over the search area, while social searcher UAVs perform exhaustive local searches around the locations of discovered survivors.

Search paths in are planned for multiple UAVs in a centralized manner using a genetic algorithm to
optimize the coverage and the connectivity of the system to the base station, minimizing the sum of
time to detect a victim and the time to inform the base station. Authors in used a hexagonal
decomposition to generate a grid map and a graph in the search area. A centralized and pre-flight
mixed-integer linear programming model is proposed to solve the multi-UAV coverage path planning
and achieve full coverage of the graph in minimum time.

A grid-based representation of the area can also be used to create a belief map, containing the
probability of finding a survivor in each cell of the grid. A variety of approaches have been found
suitable for solving the MUTA problem for SAR operations with a belief map, including greedy
heuristics, potential fields, and Partially Observable Markov Decision Processes . An
adaptive memetic algorithm is proposed in for solving the single-UAV search problem with a belief
map. The algorithm adaptively selects among six local search procedures, used to narrowly modify
solutions in an attempt to improve their fitness and diversity, based on each procedure's performance
in previous generations. A coordinated Monte Carlo tree search algorithm is presented in . Its
implementation is decentralized and factors belief data into the decision-making process.
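A greedy belief-map heuristic of the kind mentioned above can be sketched as follows; the belief-per-distance scoring rule and all names are illustrative assumptions, not any of the cited methods:

```python
def greedy_next_cell(uav_pos, belief, visited):
    """Pick the next grid cell to visit by trading off the belief (probability
    of finding a survivor) against the Manhattan distance to reach the cell.
    `belief` maps (row, col) cells to probabilities; `visited` is a set."""
    best, best_score = None, float("-inf")
    for cell, p in belief.items():
        if cell in visited:
            continue
        dist = abs(cell[0] - uav_pos[0]) + abs(cell[1] - uav_pos[1])
        score = p / (1 + dist)          # illustrative scoring rule
        if score > best_score:
            best, best_score = cell, score
    return best
```

The UAV would call this after scanning each cell, adding the scanned cell to `visited` and updating `belief` from its sensor readings.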

[h]Disaster and environmental monitoring

Disaster and environmental monitoring applications provide a variety of solutions depending on the
phenomenon they are designed to investigate. For highly dynamic situations, time is critical and
obtaining a good estimation of the location and magnitude of the phenomenon in a short time is
preferred over acquiring a complete image of the area in a longer time.

In time-sensitive disasters like oil spills and wildfires, UAVs must explore the area to identify the
location and borders of the disaster in minimum time; a complete area scan is not required. A
decentralized methodology for mapping off-shore oil spills using a team of UAVs, called PSOil, is
introduced in . The search area is discretized into a grid of cells and a belief map is constructed,
representing the likelihood of discovering oil in a cell. The PSOil algorithm uses the swarm dynamics
of the Particle Swarm Optimization (PSO) algorithm. Three mapping phases are proposed: a scouting
phase for randomly exploring the area to discover oil; an aggressive oil spill mapping phase, in which
the agents select their next target cell using local and global data; and a boundary tracking phase to
define the exact oil spill boundaries using the Moore neighborhood tracing algorithm.
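The PSO swarm dynamics that PSOil builds on can be sketched with the standard velocity/position update; this is textbook PSO, and PSOil's belief-map coupling and parameter choices are not reproduced here:

```python
import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO update per axis: inertia (w) keeps the current
    velocity, the cognitive term (c1) pulls toward the particle's personal
    best, and the social term (c2) pulls toward the swarm's global best."""
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(pos, vel, pbest, gbest):
        nv = (w * v
              + c1 * random.random() * (pb - x)
              + c2 * random.random() * (gb - x))
        new_vel.append(nv)
        new_pos.append(x + nv)
    return new_pos, new_vel
```

In a mapping context, `pbest`/`gbest` would be the highest-belief cells found by the agent and the swarm, respectively.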

A bio-inspired and decentralized algorithm based on the food-locating behavior of Oxyrrhis marina
has been proposed for identifying forest fire locations . The method includes two phases: an
exploration phase, during which the UAV executes a Lévy flight, and a mapping phase, during which the
UAV uses a Brownian search based on the temperature changes it senses. The proposed system is
enhanced by a dynamic formation control that guides the firefighting UAVs into a non-overlapping
formation. A leader-follower coalition formation approach for wildfire monitoring using a
heterogeneous swarm of UAVs is proposed in . Coalition leaders decompose their assigned observation
regions into single-UAV tasks, and the tasks are assigned to UAVs as coalition followers using a
distributed, bid-response negotiation process. Firefighting UAVs utilizing a modified PSO algorithm
and the temperature readings of their sensors in a decentralized swarm are shown in . PSO was adapted
to handle dynamic environments.
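The Lévy-flight exploration phase can be illustrated with a heavy-tailed random step; the power-law sampling below and its parameters are illustrative assumptions, not the cited implementation:

```python
import math, random

def levy_step(pos, alpha=1.5, scale=1.0):
    """One Lévy-flight move: a random heading with a heavy-tailed step
    length, sampled by inverse transform from a power-law tail. Occasional
    very long steps make the walk cover the area faster than Brownian motion."""
    u = random.random() or 1e-12            # u in (0, 1]
    length = scale * u ** (-1.0 / alpha)    # heavy-tailed step length >= scale
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (pos[0] + length * math.cos(theta),
            pos[1] + length * math.sin(theta))
```

Once a temperature gradient is sensed, the UAV would switch from these long exploratory jumps to short, local Brownian steps.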

Full area coverage is used for static or slow-changing phenomena. Commonly, coverage paths are
designed by decomposing the monitored area into cells, with techniques like the vertical cell,
trapezoidal, or boustrophedon decomposition, and sequentially sweeping all created cells . A major
concern for mapping missions is the mission duration, as the areas of interest may be extensive and
full coverage paths may be longer than the UAV's endurance. One proposed solution to this problem is
to separate the area into regions, each corresponding to a single-UAV task. The authors in created
regions sized to the energy autonomy of one vehicle by discretizing the area to be scanned into a grid
of cells and clustering the obstacle-free cells with the k-means algorithm. The coverage path is
computed using a depth-first search on the cells of the assigned region. Their solution assumes
multiple UAVs or recharging breaks between tasks.
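The grid-clustering step can be sketched with plain k-means over obstacle-free cells; the cited work additionally sizes each region to one vehicle's energy autonomy, which is omitted in this simplified version:

```python
def kmeans_cells(cells, k, iters=20):
    """Cluster obstacle-free grid cells into k single-UAV regions using
    plain k-means with a simple evenly-spaced initialization."""
    centers = [cells[i * len(cells) // k] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in cells:
            # assign each cell to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda i: (c[0] - centers[i][0]) ** 2
                                + (c[1] - centers[i][1]) ** 2)
            groups[j].append(c)
        # recompute centers as the mean of each group (keep old if empty)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return groups
```

Each returned group would then be handed to one UAV (or one battery charge), with a depth-first search producing the sweep order inside the region.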

If the power autonomy of a UAV is not sufficient for full area coverage, sub-optimal trajectories that
cover the maximum area while obeying the energy constraint must be designed. A Voronoi-based
path generation (VPG) algorithm is used in to plan coverage paths under energy constraints for
environmental monitoring applications. The VPG algorithm is an iterative process that generates the
path's waypoints, which satisfy energy consumption limitations and are optimized to provide maximum
and well-spread coverage of the area. The path's waypoints are initialized randomly, a
Voronoi diagram is created based on their positions, and the centroids of the Voronoi polygons are
computed. Then, the path is modeled as a chained mass-spring-damper system, with the waypoints
representing masses and springs connecting waypoints to the centroids, in order to compute the
updated waypoint positions at each iteration of the algorithm.
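A single relaxation step of the waypoint update can be approximated as follows; this discretized, Lloyd-style sketch replaces the exact Voronoi construction and the full mass-spring-damper dynamics with a simple spring-like pull toward approximate cell centroids:

```python
def lloyd_spring_step(waypoints, samples, k_spring=0.5):
    """One relaxation step: each waypoint's Voronoi cell is approximated by
    assigning area sample points to their nearest waypoint; the waypoint is
    then pulled toward its cell's centroid by a spring-like term."""
    cells = [[] for _ in waypoints]
    for s in samples:
        j = min(range(len(waypoints)),
                key=lambda i: (s[0] - waypoints[i][0]) ** 2
                            + (s[1] - waypoints[i][1]) ** 2)
        cells[j].append(s)
    out = []
    for i, (w, cell) in enumerate(zip(waypoints, cells)):
        if not cell:                     # empty cell: keep the waypoint
            out.append(w)
            continue
        cx = sum(p[0] for p in cell) / len(cell)
        cy = sum(p[1] for p in cell) / len(cell)
        out.append((w[0] + k_spring * (cx - w[0]),
                    w[1] + k_spring * (cy - w[1])))
    return out
```

Repeating this step spreads the waypoints over the sampled area; energy-constrained truncation of the resulting path is not modeled here.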

[h]Unmanned traffic management architectures

The many identified UAV applications in civilian use cases create the need to define management
systems that enable the safe conduct of autonomous operations in common airspace. Safety, security,
and economic factors must be considered when designing a concept for large-scale UAV operations .

The design of traffic management systems for UAVs takes inspiration from the years of experience
and knowledge accumulated in the ATM systems used for manned aviation. However, it is important to
identify the different requirements and characteristics of manned and unmanned missions. UAV missions
will be shorter and more numerous than manned flights. In addition, UAVs will have to navigate more
congested environments and integrate a higher level of autonomy. The co-existence of manned and
unmanned flights must be carefully considered, as it is crucial to ensure that manned aviation is not
impacted by the introduction of a high number of UAVs into the airspace.

In 2013, NASA initiated the Unmanned Aerial System (UAS) Traffic Management research initiative
to support safe and efficient low-altitude airspace operations for unmanned vehicles . The FAA has
published two Concepts of Operations (ConOps) for UTM, a first version in 2018 and a second in
2020 , according to which UTM should comprise a set of federated services to support UAS operations
and ensure that they are authorized, safe, secure, and equitable in terms of airspace access. These
ConOps focus on UTM operations below 400 feet above ground level. The proposed services include flight
planning, communications, separation, weather, registration, authorization, and mapping. Performance
and airspace authorizations shall be conducted to assess the operators' and equipment's capabilities
and to inform ATM stakeholders of UTM operations. UAVs and operators shall be identified. The safety
of the operations is ensured through multiple layers of separation: strategic traffic management
during pre-flight planning, separation provision using conflict alerts and deconfliction services at
a tactical level, contingency management to respond to flight anomalies, real-time collision avoidance
using ground-based or onboard equipment, and near-real-time notifications and advisories based on
airspace constraints.

In 2018, the EU’s SESAR Joint Undertaking (SJU) published a blueprint , describing its vision for U-
space. U-space encompasses a wide range of services to ensure the smooth operation of drones for all
types of missions in all operating environments, focusing on very low level airspace. U-space services
will be enhanced as the autonomy capabilities of UAVs evolve. Three foundation services are
proposed for U-space: electronic registration (e-registration), electronic identification (e-identification),
and geofencing (i.e., defined zones in which UAV operations are not allowed). ConOps for UAV
operations in U-space have been developed by the CORUS project .

ConOps envisioned in both the USA and the EU highlight the necessity of integrating unmanned air
traffic into ATM. It is crucial that the developed concepts for UAV operations do not impact manned
aviation operations. Furthermore, both concepts emphasize the safety aspects of the airspace,
describing separation methods like strategic and tactical deconfliction and collision avoidance .
Figure depicts the logic commonly followed to safely plan and conduct UAV flights in a UTM system.
In the pre-flight stage, the system receives the desired flight information and generates a flight
plan. The flight is deconflicted with other known flights registered and generated in the system. If
conflicts are detected during flight, they are resolved tactically. The imposed airspace structure and
rules are applied throughout all stages.

Figure :Flight planning and deconfliction logic.

As the number of UAV operations increases, so will the traffic density and complexity of the
airspace. To safely support high numbers of flights, the airspace shall be structured with a set of
local airspace rules. Urban topography, such as buildings, shall be considered when designing a UTM
network . Layer concepts have been proposed to apply different flight rules depending on the
flight's altitude . The layers, zones, and tubes concepts were proposed in the Metropolis project to
separate and organize UAV traffic . The layers concept vertically separates traffic by heading. In
the zones concept, circular and radial zones are designed in the horizontal plane, inspired by ring
roads around cities. The tubes concept structures traffic in both the horizontal and vertical planes,
generating a 3D directional graph.
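The layers concept can be illustrated by a simple heading-to-altitude mapping; the band count and altitude values below are invented for illustration and are not part of the Metropolis design:

```python
def layer_for_heading(heading_deg, n_layers=8, base_alt=30.0, layer_height=15.0):
    """Map a heading (degrees) to an altitude band, as in a 'layers' scheme:
    each altitude band only admits a contiguous range of headings, so
    opposite-direction traffic never shares a flight level."""
    idx = int(heading_deg % 360) * n_layers // 360
    return base_alt + idx * layer_height
```

With eight bands, each band covers a 45-degree sector of headings; head-on encounters are structurally excluded because reciprocal headings map to different altitudes.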

Strategic conflict management is linked to flight planning, as it acts pre-flight to detect and resolve
possible conflicts for the requested flights. Flight plans are usually described as 4D trajectories.
Flight planning consists of two steps: a path planning phase designs the UAV's initial trajectory,
and a strategic deconfliction phase modifies the trajectory in space and/or time to ensure the safety
of the flights. In , two types of flight operations are considered: area operations are repetitive,
while linear operations are point-to-point missions executed once. Flights are assumed to retain a
static altitude. The airspace is discretized into a 3D grid. Routes for linear flights are generated
using the A* algorithm, and timestamps are added based on the UAV's velocity, while area operations
occupy whole regions. A First-Come, First-Served (FCFS) approach is used for flight deconfliction:
for each planned flight added, cells of the grid appear occupied at specific timestamps to the flights
planned after it. The FCFS approach is augmented by a mixed-integer linear programming optimization
model to minimize flight delays. The authors in approach the strategic deconfliction problem
differently. Flights are planned as if no other traffic existed in the airspace and are deconflicted
by adjusting their departure times, or rejected if no appropriate departure time can be found. A
genetic evolutionary algorithm computes the flight schedule to minimize conflicts and delays.
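The FCFS deconfliction logic on a space-time grid can be sketched as follows; the data layout (a flight as a list of (cell, time) pairs) and the simple delay search are illustrative assumptions, not the cited model:

```python
def fcfs_deconflict(requests, max_delay=10):
    """First-come, first-served strategic deconfliction: each request is a
    list of (cell, t) pairs. Earlier flights reserve their (cell, t) slots;
    later flights are delayed until no reserved slot is reused. Returns the
    assigned delay per flight, or None if the flight is rejected."""
    reserved, schedules = set(), []
    for path in requests:
        for delay in range(max_delay + 1):
            shifted = [(cell, t + delay) for cell, t in path]
            if all(slot not in reserved for slot in shifted):
                reserved.update(shifted)     # commit this flight's slots
                schedules.append(delay)
                break
        else:
            schedules.append(None)           # no feasible departure time
    return schedules
```

An optimization layer, as in the cited work, would replace the greedy delay search with a model minimizing the total delay across all flights.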

Tactical conflict management is responsible for detecting and resolving conflicts during the flight.
Airspace services are used to communicate the positions and velocities of nearby UAVs, acting as input
to the tactical conflict management system. An iterative geometric approach is used in for tactical
deconfliction, separating the multi-conflict problem into simpler sub-problems on a 4D grid.
Potential conflicts are detected using the well-known velocity obstacle geometric method in , and
conflicts are resolved by adjusting the headings of the UAVs. A MILP technique is proposed to
compute the new headings for UAVs with equal speeds, and a stochastic parallel gradient descent
based method is used for UAVs with unequal speeds. Intent information, describing the designed
trajectory of each UAV, can be incorporated into the velocity obstacle representation to enable
earlier conflict detection .
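Pairwise conflict detection of this kind can be illustrated with a closest-point-of-approach (CPA) test, a geometric stand-in for the velocity obstacle method; the separation and horizon thresholds are invented for illustration:

```python
import math

def predicted_conflict(p1, v1, p2, v2, sep=50.0, horizon=60.0):
    """Predict a loss of separation between two UAVs assuming straight-line
    motion. The relative motion is linear, so the time of closest approach
    has a closed form; a conflict is flagged if the minimum distance within
    the look-ahead horizon falls below the separation minimum."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0 else max(0.0, min(horizon, -(rx * vx + ry * vy) / vv))
    return math.hypot(rx + vx * t, ry + vy * t) < sep
```

A tactical layer would run this over all nearby pairs and, on detection, adjust headings as described above.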

Not all missions in the airspace are expected to have the same priority. Some missions, like medical
aid or security applications, might be classified as emergencies, making their arrival delay more
critical than others'. Even for normal (i.e., non-emergency) missions, time constraints may vary. For
example, the arrival delay of a food delivery is more impactful than that of a generic package
delivery. In addition to time constraints, some missions are coupled with area constraints. For
example, a surveillance mission must not deviate from a specific path, or its scope will not be met.
Priority should be taken into account when planning flights . Priority information can be integrated
into conflict resolution by forcing lower priority flights to resolve potential conflicts . Allocating
the deconfliction responsibility to lower priority flights increases the efficiency of higher priority
flights compared to systems where the responsibility is shared.
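The priority rule can be sketched as a simple yielding assignment; the numeric priority convention (smaller value = higher priority) and the tie-break are illustrative choices, not a standard:

```python
def assign_yielding(conflicts, priorities):
    """For each conflicting pair of flight ids, assign the deconfliction
    manoeuvre to the lower priority flight (higher numeric value here).
    Ties are broken arbitrarily by flight id."""
    yielding = {}
    for a, b in conflicts:
        if priorities[a] == priorities[b]:
            yielding[(a, b)] = max(a, b)
        else:
            yielding[(a, b)] = a if priorities[a] > priorities[b] else b
    return yielding
```

Only the flights returned here would replan; the higher priority flights keep their trajectories, which is what makes them more efficient than in shared-responsibility schemes.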

Six types of civilian UAV applications are presented in this work, together with proposed guidance
and decision-making methods to enhance UAV autonomy in each of them. While an application type does
not correspond to a single type of problem, operations in the same application area share restrictions
and limitations imposed by the main objective of that application. The civilian applications are
followed by an overview of the unmanned traffic management systems required to enable those
operations.

Cinematography applications impose specific restrictions on the UAV's trajectory planning, as the
camera model adds constraints to account for the viewpoint and potential occlusions of the target.
UAVs are equipped with high-end optical cameras and must generate smooth trajectories for visually
appealing results. Payload delivery missions shall be cost-efficient, capable of carrying payloads of
varying weights, and designed with battery constraints heavily in mind. Shipping systems are designed
to serve multiple orders, so the multi-UAV system must coordinate efficiently. UAV applications in
agriculture require the full coverage of fields for a variety of tasks. For some specific tasks (e.g.,
spraying) the UAVs must carry heavy payloads, which adds constraints to the planning. Surveillance
missions are often conducted in environments with dynamic targets, requiring repetitive area
monitoring. Potential operational environments range from congested urban areas to rural ones.
Grid-based area partitioning is a common approach for surveillance applications, since it allows easy
monitoring of the age (i.e., the time elapsed since the last visit) of the grid's cells. Search and
rescue operations are the most time-critical missions, as acting quickly increases the victims'
chances of survival. Disaster and environmental mapping missions may have to cover very large areas,
and the desired full coverage is not always possible; the UAVs must follow feasible trajectories that
maximize the amount of useful coverage data. Grid maps are often used in search and rescue and in
disaster and environmental monitoring missions to create belief maps and increase the probability of
gathering useful information.

The need for systems to monitor and manage UAV traffic has become clear, and numerous programs have
been initiated for that purpose. The US and the EU have recognized the importance of creating a
framework for integrating UAVs into the airspace in a regulated manner. UTM services must be selected
to ensure safe, secure, efficient, and equal access to the airspace. Structuring the airspace makes it
possible to manage the density and complexity of traffic. Safe flights are designed with multiple
levels of deconfliction to minimize the risk of inter-vehicle collision.

[mh] Military Applications and Advancements

While the qualities required of a leader to be a good commander and a good decision maker have
remained constant throughout human history in the face of the complexity of battle, the leader of
tomorrow will have to adapt to the use of new technologies. These will allow him to be better
informed, and consequently more reactive in order to keep the initiative in the manoeuvre, but also to
carry his action further and delegate certain tasks to the machines at his disposal. Such adaptations
are not trivial, because they reconsider existing military doctrines and can call into question the
very principle of hierarchy that makes armies strong. It is therefore necessary for the military to
learn to use these new technologies through training, but also to keep control of new systems that
integrate a certain form of autonomy. Above all, the military leader must preserve the very essence of
his identity: giving meaning to military action and commanding to achieve his goals.
[h]Commandment

Primarily, a military leader must command, which implies legitimate decision-making authority and
responsibility towards the soldiers entrusted to him for the mission he must carry out.

Command is the very expression of the leader's personality. It depends on the tactical situation,
which includes the risk and the obligations of the mission to be carried out.

To be a good military leader implies several additional qualities: to be demanding, to be competent,
to have high moral strength in the face of the difficulties of war, to have confidence in one's own
abilities and in the means put at one's disposal, to take responsibility for one's decisions and to
give responsibility to one's subordinates, and finally to be able to decide in complete freedom.

He is the one who decides and commands. He is the one to whom all eyes turn in difficulty , but the
exercise of his command requires a demanding discernment between reflection and action.

[h]Decision

The military world is very demanding and dangerous. Having to take into account the danger to his
soldiers, the danger to himself, and the responsibility of the mission he has been given, the military
leader should:

 discern in complexity (deploy true situational intelligence);

 decide in uncertainty (have the strength of character to accept calculated risks);

 act in adversity (unite energies, encourage collective action, and make conscious decisions).

This forms the basis of the educational project of the Saint-Cyr Coëtquidan military academy, and
perfectly synthesises the objectives of a training system adapted to the officers of the 21st century.
However, this initial training must take into account the technological evolutions allowing military
decision-makers of today and tomorrow to reduce the fog of war.

[h]Military leader is accountable for the decision

What is decision-making for a military officer? It consists of choosing between different
possibilities and opting for one conclusion among the possible solutions, having analysed all the
effects that this decision implies.

In order to decide, the leader must master the areas listed below: a perfect knowledge of the mission
entrusted to him, of the means at his disposal, and of his troops. Nothing is worse than indecision
when the lives of soldiers are in danger. His decision must call upon moral and intellectual courage.

“The unknown is the governing factor in war” said Marshal Foch. However, the role of the leader is
above all to be able to adapt and modify his analysis and the behaviour of his troop in order to respond
to unforeseen situations. This ability to adapt is essential to maintain the freedom of action that allows
for initiative on the battlefield, and to be able to innovate according to the constraints.

The leader must show discernment in action, appreciating facts according to their nature and their
fair value. This implies being cautious both in his choices and in their scope.
Finally, the leader must be lucid and control his stress, pressure, and emotions, in order to preserve
his “esprit d’initiative” (spirit of initiative).

[h]Information, the key to victory

To meet all these requirements, information is one of the major foundations for the exercise of the
command of the chief. It is the keystone of all military action, to keep the initiative and maintain
supremacy on the ground .

In fact, information allows the chief to plan the military action, taking into account the means at his
disposal, ensuring the transport logistics, and confronting the possible friendly and enemy modes of
action in order to determine the manoeuvre that he will conduct.

The management of the information received is reflected “en conduite” (during execution) by the
regular rhythm of reports and situation updates to higher or subordinate levels, in order to
anticipate threats and maintain the capacity to react as quickly and efficiently as possible in the
face of adversity or any obstacle hindering the manoeuvre.

For the decision-making process to run smoothly, the information must be updated regularly because
the situation can change very quickly and the leader will have to adapt his analysis accordingly.

Thus, there is no single decision of the military commander in operation, but a continuum of decisions,
some of which are almost routine or implicit, while others require extensive analysis. Some decisions
are ultimately critical, as they can result in a favourable or tragic outcome to a given situation.

[h]What is fundamentally changing

This chapter addresses the change in the art of decision-making for a military officer, implied by the
use of some technologies that will gradually invade the battlefield.

Indeed, some technologies will allow the leader to be better informed, but also to be more reactive in
order to keep the initiative. Their management requires a mastery of new data management processes
resulting from the digitisation of the battlefield, in particular the possible influx of operational data
from the field and their synthesis for the military leader.

[h]A more accurate and faster remote information acquisition

The one who sees further and before the others is the one who dominates the military manoeuvre. This
is what enables him to gain a tactical advantage because the one who acts first with determination is
most often the one who wins. Moreover, the ability to see further and more accurately thanks to remote
sensors or cameras brings an undeniable advantage to the military leader, enabling him to react faster
than his enemy.

Today, spaces are getting tighter, and information can be transmitted in a few milliseconds to any point
on the planet, provided that the sensor capturing the information is available. This is done through
cyberspace which must be secured for military forces so that they can be sure of the veracity of the
data they use. This immediacy of information is a new parameter in the art of command. It forces the
leader to make a quick analysis and to be reactive in his response.
It also raises the question of his capacity to process the information if there is too much data. In
this case, the data will have to be processed automatically as soon as it is received by the systems,
to extract only the relevant information. And if these systems are unable to do this, the leader will
have to be assisted in analysis and decision-making by a third party, which may also be a machine.
This raises the question of the control of the decision aids provided, on which he must rely.

[h]Act remotely to remove the danger and increase the area of action

One of the major military revolutions that began at the start of the 21st century in the Iraq and
Afghanistan wars is the robotisation of the battlefield. It is unavoidable and will gradually be
introduced into the battlefield because the use of unmanned robots (UAV, USV, UUS and UGV) offers
many advantages to the armies that will use them on the ground.

Firstly, it avoids exposing our own combatants, which is all the more important in our modern armies,
where they are a scarce and expensive resource to train.

Secondly, it extends the area of perception and action of a military unit. In a sense, they are the “5
deported senses” of the fighter, i.e. his eyes (camera), his ears (reception), his mouth (transmission),
his touch (actuator arm) and even his sense of smell and taste (detection of CBRN products).

As tools placed at the combatant's disposal, robots will allow him to control the battlefield by
deporting effectors or sensors, enabling control of the various dimensions and spaces of the
battlefield: on land, in the air, at sea, and even electromagnetically. They will thus progressively
move the combatant behind the contact zone, away from the dangerous area, in order to reduce the
risks, or allow him to dive in with the maximum means at his disposal, thus significantly reducing the
vulnerability of the combatants .

Finally, the ability to act remotely while preserving the lives of his men will allow the leader to
act even before the enemy can deploy his forces for his manoeuvre.

Robotic systems will thus become new tactical pawns that the military leader will now use to prepare
his action, to facilitate his progress, allowing him new effects on the enemy, the terrain, the occupation
of space and on the rhythm of the action. Especially since these machines will eventually be more
efficient, more precise and faster for specific tasks than a human being can be. This is currently evident
in industrial manufacturing and assembly plants.

[h]The disruption of autonomy

This military revolution of deporting action with robotic systems is accompanied by another, no less
disruptive, that of the autonomy of these systems. Autonomy will allow omnipresence of action in the
area, 24 hours a day, subject to energy sufficiency. It will allow machines to adapt to the terrain
and its unforeseen events in order to carry out the mission entrusted to them by military leaders.
Autonomous systems will react to complex situations by adapting their positioning strategy, and even
the effects they produce on the battlefield. For example, this may be an automatic reorganisation of
the swarm formation adopted by a group of robots to follow an advancing enemy, followed by the
decision to block an axis of progression with smoke or obstacles to hinder the enemy's progress.

However, autonomy is not fundamentally new for a leader. A section or platoon leader has combat
groups under his command, whose group leaders receive a mission and have full autonomy to carry it
out. What is new is that while robots are tactical pawns at the combatant's disposal, and while they
can have a certain form of autonomy in the execution of their action, they do not have and will never
have the awareness of their action and the capacity for discernment that characterize the human being.
This opens up a number of ethical questions regarding the opening of fire that will not be addressed
in this chapter .

[h]The contribution of new technologies to military decision-making

These upheavals are based on technologies that create new opportunities in military decision-making
processes.

[h]All deployed systems are interconnected

The digitisation of the battlefield stems from the constant trend towards integrating electronic
components into all future military equipment, which, coupled with a means of transmission, allows
their interconnection and the dissemination of the information collected. It affects all systems
deployed in the field (from weapons systems to military vehicles), down to the dismounted combatant
who, just like any civilian with a smartphone, will be connected to the great digital web of the
battlefield. Just like every individual in civil society, every actor on the battlefield is thus
traceable and able to communicate.

[h]Enriched information

As explained above, technology will enable faster detection of threats on the battlefield. Moore's
law has sometimes been used to describe the increase in the capabilities of digital cameras, at a
rate of “twice as far”, “twice as cheap”, or “twice as small” every 3 years. In effect, each
innovation allows seeing further with a smaller footprint. Digital zoom allows high magnification,
but at the cost of algorithmic processing of the image, which reduces definition quality. It is often
paired with optical zoom, which consists of adapting the focal length to the target being observed.
Cameras can now merge data from multiple sensors of different types: in particular, thermal imaging,
which covers a large fraction of the spectrum and allows viewing and measuring the thermal energy
emitted by equipment or a human, to which can be added light intensification processes that amplify
the existing residual light to recreate an image usable by the human eye in low-light conditions.

All of this fused data can enrich the field of vision of the combatant by superimposing additional data
that completes his knowledge of the tactical situation. This is the principle of augmented reality.

[h]The immediacy of information processing

If data acquisition and transmission is possible, the information should nevertheless be processed.
However processing it requires easily accessible hardware and software resources offering the
necessary computing capacity to react as quickly as possible, particularly in order to be extremely
reactive in situations where the analysis time is too short for a human to do it by himself. Embedded
computer software can provide such capacity at the core of deployed systems, but this capability can
also be moved to a secure cloud, which can be both a tactical cloud, i.e. a cloud deployed on the
battlefield in support of the manoeuvre, or to a further away, highly sovereign and secure cloud.
[h]To the detriment of human decision-making

This immediacy of information processing allows a hyper-reactivity of systems, foreshadowing the
concept of “hyperwar” formulated by General John Allen and Amir Hussain in 2019, which puts forward
the idea that the advent of hyperwar is the next fundamentally transformative change in warfare.

“What makes this new form of warfare unique is the unparalleled speed enabled by automating
decision-making and the concurrency of action that become possible by leveraging artificial
intelligence and machine cognition… In military terms, hyperwar may be redefined as a type of
conflict where human decision-making is almost entirely absent from the observe-orient-decide-act
(OODA) loop. Consequently, the time associated with an OODA cycle will be reduced to
near-instantaneous responses. The implications of these developments are many and game changing”.

[h]A support for information processing

For information processing, the volume of produced data increases exponentially, and the accuracy and
granularity of the data produced by sensors grow. This trend will become more and more pronounced
over time .

Military experts usually process observation data retrieved from the battlefield by satellites,
reconnaissance aircraft, drones, or sensors left on the ground. However, as human resources are
scarce and the volume of data is constantly increasing, it will be necessary to delegate the
processing of this data to AI algorithms in support of the human being, at the risk of otherwise not
being able to process it all.

On the ground, the deployed combatant will be increasingly loaded cognitively by the complexity of
the systems to operate and the amount of information to process. It will be vital to automate the
processing of certain information in order to unburden him, so that only what is really necessary is
presented, and in an extremely ergonomic way. This requires defining which data can be subjected to
artificial processing, and up to what hierarchical level its processing can be automated.

[h]The contribution of artificial intelligence

Automated management of routine, repetitive, and time-consuming procedures could emerge. In a
headquarters, for example, reports management and the automatic production of summaries adapted to
the level of command would immediately make the chain of command more fluid. AI could take the form
of a dashboard that stimulates the reflection of the commander and his advisers by dynamically
delivering relevant information and updated statements .

During operational preparation, depending on the tactical situation, the leader must confront the modes of action he envisages with the reference enemy situation and the possible enemy modes of action. Very often he does not have the time to test his plan against several enemy modes of action, and he anticipates only those non-compliant cases that he considers probable. Artificial intelligence could be more exhaustive, confronting a plan with far more possible enemy modes of action, and thus present a more complete analysis of the options to the military leader, who could then decide accordingly.
[h]Reduction of the OODA decision cycle

The technologies listed above have a direct effect on the OODA decision cycle, which will be
profoundly impacted by the new technologies.

This concept was formalised around 1960 by the American military pilot John Boyd to describe the decision cycle in air combat. It has since been used to schematise any decision cycle. The author uses it here in the light of the potential offered by the technologies detailed above.

Figure: OODA cycle time reduction: improved reactivity.
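The claimed compression of the cycle can be sketched with a toy model; every stage duration below is an invented, illustrative figure, not a measured one.

```python
# Hypothetical illustration of OODA cycle-time reduction: all stage
# durations are invented for the sketch, not measured values.
from dataclasses import dataclass

@dataclass
class OodaCycle:
    observe: float  # seconds spent detecting
    orient: float   # seconds spent analysing
    decide: float   # seconds spent deciding
    act: float      # seconds spent executing

    def total(self) -> float:
        return self.observe + self.orient + self.decide + self.act

# A human-driven cycle versus one where observation and orientation
# are delegated to automated sensing and AI-assisted analysis.
human = OodaCycle(observe=10.0, orient=20.0, decide=5.0, act=2.0)
assisted = OodaCycle(observe=0.1, orient=0.5, decide=5.0, act=0.5)

speedup = human.total() / assisted.total()
print(f"human cycle: {human.total():.1f}s, assisted: {assisted.total():.1f}s, "
      f"speedup: x{speedup:.1f}")
```

Note that in this sketch the decide stage, which stays with the human, becomes the dominant share of the shortened cycle, which is consistent with the chapter's argument that the leader remains the bottleneck and the decision-maker.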

[h]Observe: a better detection

“Seeing without being seen” remains a common adage and is essential in military operations. Technology is helping, with the extended ranges made possible by long-range cameras and their mounting on remote robotic systems. It can now also help to overcome several natural detection constraints such as night, fog, or walls.

Moreover, digitised systems can operate 24 hours a day with great consistency, avoiding the risk of missing information where humans are subject to fatigue and inattention.

For surveillance or patrol missions, where human resources are often lacking, the leader can delegate
to systems the analysis of images of the area for the detection of movements and the potential presence
of enemies. It should be noted that this detection should filter out false alarms as much as possible,
such as the movement of leaves in the trees when the wind picks up.
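As a hedged sketch of such filtering (the frame data, threshold, and function name are illustrative, not taken from any specific system), one simple approach is to report a motion detection only when it persists across several consecutive frames, which suppresses one-off flickers such as wind-blown foliage:

```python
# Minimal sketch of false-alarm suppression by temporal persistence:
# a motion detection is reported only if it persists for several
# consecutive frames. Thresholds and frame data are illustrative.
def persistent_detections(frames, min_frames=3):
    """frames: list of per-frame booleans (motion detected or not).
    Returns the indices at which a detection becomes confirmed."""
    confirmed, streak = [], 0
    for i, detected in enumerate(frames):
        streak = streak + 1 if detected else 0
        if streak == min_frames:  # report once per sustained event
            confirmed.append(i)
    return confirmed

# Wind gusts produce isolated one- or two-frame flickers; a vehicle
# moving through the scene triggers a sustained run of detections.
flicker = [True, False, True, True, False, False]
vehicle = [False, True, True, True, True, True]
print(persistent_detections(flicker))  # no sustained motion: []
print(persistent_detections(vehicle))  # confirmed at frame index 3
```

Real systems combine such temporal filtering with spatial and classification filters, but the principle is the same: trade a small detection delay for far fewer false alarms presented to the operator.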

[h]Orient: a better analysis

Remote seeing will make it possible to identify a potential target from afar, to discriminate it (is it a combatant?) and to characterise its behaviour (is it hostile or not?). If these criteria are met, it becomes a confirmed potential target that can easily be geolocated, and this information is then transmitted to the decision-making levels. The gain here is that of anticipating the analysis for better decision-making.

The leader will also be able to rely on the automatic processing of data acquired within the digital environment of the battlefield. Faced with the potential ‘infobesity’ of the battlefield, artificial intelligence will enable massive data processing, subject to computing capacity being embedded directly in the remote robotic platforms or to the information being processed remotely via long-distance communications. It will allow constant analysis of captured images or sounds, a task that even the best human experts can only supervise because they are subject to fatigue and inattention. This is particularly the case with satellite images or images captured by surveillance drones, which can monitor an area 24 hours a day. Finally, it will also enable the detection of weak signals that would be invisible to humans, by correlation between several distinct events or by cross-checking.

There remain two essential components of situation analysis that a machine can never integrate: firstly, instinct and intuition, which a machine cannot have and which are the fruit of a lifetime of human experience; secondly, the transcendence of military action, which only a metaphysical dimension in the literal sense can provide.

[h]Decide: a better reaction

The military commander is the decision-maker for military action. It is therefore up to him to take the
decision according to the information at his disposal. He can of course rely on a deputy or on
operational advisers who help him analyse the situation, if time permits.

For example, France is intervening in Mali and the Sahara as part of the Barkhane military operation to combat Salafist jihadist armed groups infiltrating the entire Sahel region. Launched on 1 August 2014, this operation replaces operations Serval and Épervier. The following scenario is fictitious: an armed Reaper drone of the French army flies over a region of the Malian desert at night, and its cameras (incorporating AI for automatic motion detection in the captured images) detect a suspicious movement. The drone's sensor operator is alerted and zooms in on the area, detecting via the infrared camera a jihadist 4x4 occupied by armed personnel. The vehicle is moving towards a village 20 kilometres away. Mounting an operation with Special Forces is not possible because they are not in the area, and there is a great risk that the occupants of the 4x4 will disperse once they reach the village. The legal adviser on duty quickly confirms that the drone may fire on the target because no collateral damage is possible in this desert area. The head of the operation gives the order for the drone to fire.

This example clearly shows the drastic reduction in the OODA decision cycle offered by the new technologies: the chief is informed as early as possible by automatic detection of a suspicious enemy vehicle movement. He confirms with his image operator the positive identification (PID) of the target as an enemy. He then reports it to his hierarchy and receives the order to open fire. He can thus, in compliance with international humanitarian law (IHL), open fire from a distance. The enemy has not even spotted him.

There remain situations where time is critical and the leader will not have time to make a decision because of the rapidity of the attack. The automation of response processes then becomes a possible option; that is, the leader can delegate to a machine the ability to give an appropriate response to a situation by itself. This is already the case with missile or ballistic threats, which require armies to use automatic systems to counter them, systems faster and more precise than human beings (e.g., coupling weapons and radar). Tomorrow, faced with systems that develop unpredictable trajectory strategies (enemy missiles with AI), with saturating threats that risk overwhelming our defences, and with swarms of offensive robots, our systems will have to adapt in real time to counter the threat. Only a certain autonomy of the defensive systems will make it possible to face them, an autonomy that must remain under the control of the leader who has these systems at his disposal.

[h]Act: a quicker and more accurate reaction

A quicker reaction: where a human thinks in a few seconds at best, a machine will analyse the parameters in a few milliseconds or less and propose a response in near real time.

A more accurate action: A human shooter who moves, breathes and shakes is less accurate than a
machine that does not move, breathe or shake because it is not subject to emotion. Precision in action
will therefore increasingly be the prerogative of the machine.

The outcome of an engagement or a counter-measure may hinge on these factors of 10, 100, or even 1000 in reaction speed.

[h]Technology as a decision aid for the leader

Military decision-making is centred on the military leader, because he is at the heart of the command
situation. He takes responsibility for military action, a mission given to him by the legitimately elected
political power.

The leader must therefore control the decisions taken within the framework of military action because
he is the guarantor and he assumes the consequences.

What lessons can one learn from the opportunities offered by new technologies for military decision-
making and the possible resulting changes in the art of command?

[h]To reduce the “fog of war”

The leader must rely on technology to reduce uncertainty and the fog of war. It will allow him to be more aware of his tactical situation through the search for intelligence. Furthermore, it will enable him to delegate to machines the management of repetitive tasks that do not require constant human judgement.

Depending on the circumstances and if he has time to reflect, the digitisation of battlefield information
will also allow the leader to replay certain possible scenarios before taking a decision. Finally, it will
give him the possibility to select the information he has received that he deems important, to view it
several times (especially if the information is imprecise) before making a decision.

[h]For decision support

A digital aid will be welcome to help the leader synthesise the growing number of digital actors on the ground with whom he is in contact, or whom he must command or coordinate.

One consequence of the digitisation of the battlefield is that it may lead to information overload for a leader who is already very busy and focused on his tasks of commanding and managing. It is already accepted in the military community that a leader can manage at most seven different information sources at the same time, and even fewer when under fire.

Delegating is one way to avoid cognitive overload. Thus, one possible solution is to create a “digital assistant”: an autonomous machine that supports the leader in the information-filtering and information-processing steps and thereby helps him in the decision-making process.

Nevertheless, the leader will have to resist taking the easy way out: he must step back, allow himself time to reflect, and reason critically when faced with machines that would think for him. Doing so will help him guard against a possible inhibition of human reasoning. Artificial intelligence does not mean artificial ignorance if it is used as an intellectual stimulant, although it can have this flaw.

[h]For an optimisation of his resources

The chief will be able to entrust machines with the execution of certain time-consuming and tedious
tasks, such as patrols or the surveillance of sectors, and thus conserve his human resources for missions
where they will have a higher added value.

The same applies to missions that require reactivity and precision, especially if there is a need to be
extremely quick to adapt to the situation. For example, it will be useful in the case of saturating threats,
where targeted destruction or multi-faceted and omnipresent threats such as swarms of drones must be
dealt with.

[h]But technology as a decision aid subject to control and confidence

Delegation of tasks to increasingly autonomous machines raises the question of the place of humans
who interface with these systems and should stay in control.

[h]The leader must always control execution of an autonomous system

First of all, no army in the world will use equipment or tools that it does not control. Every military leader must be in control of the military action and, to this end, must be able to control the units and means at his disposal. He places his confidence in them to carry out the mission; this is the basis of the principle of subsidiarity.

For this reason, it is not in his interest to have a robotic system that governs itself with its own rules and objectives. Such a system could be disobedient or break out of the framework that has been set for it. Thus, machines with a certain degree of autonomy must be subordinate to the chain of command and subject to orders, counter-orders, and reporting.
[h]Operators must have confidence when delegating tasks to an autonomous system

The military will never use equipment or tools that they do not trust. This is the reason why a leader
must have confidence in the way a machine behaves or could behave. For that, military engineers
should develop autonomous systems capable of explaining their decisions.

Automatic systems are predictable; one can easily anticipate how they will perform the task entrusted to them. This becomes more complex with autonomous systems, especially self-learning systems, where one may well know the objective of the task to be performed by the machine but has no idea how it will go about it. This raises a serious question of trust. As an example, when I ask an autonomous mowing robot to mow my lawn, I know my lawn will be mowed, but I do not know exactly how the robot will proceed.

A good illustration is the soldier's expectations of the artificial intelligence embedded in autonomous systems.

AI should be trustworthy: adaptive and self-learning systems must be able to explain their reasoning and decisions to human operators in a transparent and understandable manner.

AI should be explainable and predictable: one must understand the steps of reasoning carried out by a machine that delivers a solution to a problem or an answer to a complex question. For this, a human-machine interface (HMI) that explains the decision-making mechanism is needed.

One must therefore focus on human-machine interfaces that are more transparent and personalised for the operator and the leader.

[h]Tunnel effect

Easy access to information and possible information overload both favour a tunnel effect. This effect, caused by a sudden rise in adrenaline, produces a failure in the analysis of the signals and data received by a brain that is no longer able to step back and assess the situation. For the military, this tunnel effect is clearly the enemy of the soldier who concentrates on a screen or on one precise task, forgetting to watch for the enemy threat around him and thus exposing himself seriously. It is also the enemy of the leader who, by focusing on a piece of information he finds crucial, becomes unable to step back and fulfil his role, which is to take into account the whole of the military action and not one particular aspect highlighted by that information. Too much information must not prevent the commander from stepping back and reflecting.

The question of the gender of the soldier operator may be an avenue of exploration here, as women
may have the capacity to manage several tasks simultaneously better than men.

[h]Inhibit the action

Easy access to information encourages another possible flaw in decision-making: not deciding anything until all the information is available. This flaw may well become a major concern in the future. With his responsibility at stake, the soldier may hesitate until the last moment to take a decision because he lacks information that he hopes to recover by technological means. This is the death of daring and of manoeuvre by surprise, which often secure victory for the leaders who dare to practise them.
[h]AI will influence the decision of the leader

Stress is an inherent component of taking responsibility. It is common for a military leader to have the
feeling of being overwhelmed in a complex (military) situation. In such contexts, the leader will most
often be inclined to trust an artificial intelligence because it will appear to him, provided he has
confidence in it, as a serious decision-making aid not influenced by any stress, having superior
processing capabilities, and able to test multiple combinations for a particular effect.

[h]Too much predictability in operational decision-making patterns

The modelling of human intelligence by duly validated but very rigid algorithmic processes can lead to the inhibition of human intelligence. In particular, there is a risk that military thinking will be locked into decision-triggering software: the formatting of military thought into controlled and controllable decision-making processes, shaped by the need to respect the rules of engagement and international rules, particularly those governing the decision to open fire. The processes will certainly be validated, but once activated they may become completely rigid technological gems, admirably designed but incorporating doctrinal biases that cannot be challenged in the face of unpredictable enemy behaviour. By the time these systems and their uses are adapted, it will be too late and the battle will be lost.

Another major risk is the predictability of the behaviour of these systems to the enemy. Once these systems are known, their vulnerabilities will also be known. It will therefore be easy for the enemy to circumvent them by manoeuvres combining cunning and opportunity, with the outcome reflecting only the inability of these highly technical systems to adapt to an unpredictable or simply lawless conflict.

The leader must therefore anticipate these pitfalls and use the means at his disposal with intelligence. In this respect, the French army has developed the concept of the “major effect” to be achieved. The major effect conceptualises the way in which the leader intends to seize the initiative in the execution of his mission, and it makes it possible to adapt the means and methods of execution to the final effect sought.

[h]A principle of subsidiarity undermined

As a corollary of the extraordinary potential of the digitisation of the battlefield, namely allowing all levels of the hierarchy to access information in real time and simultaneously, a new risk appears at every level of the military hierarchy: that of a leader directly accessing ‘target information’, thus breaking the principle of subsidiarity, which requires him to delegate to his subordinates the responsibility for, and the use of, the means made available to them. Given his experience and his position, the temptation to interfere in subordinates' decisions and to decide in their place will be great. To avoid this risk, it will be necessary to define precisely the right level of information to be communicated to each strategic level, in order to respect the freedom of action of every echelon and to avoid a general, systematic dissemination of information without intermediate processing and filtering.

“The philosophy of war does not change. It will not change as long as it is men who make war” said
General Charles de Gaulle.
In spite of everything, new technologies bring new equipment to the forces in operation. They are
transforming the art of waging war through the opportunities they offer and by the new uses they bring
to the battlefield.

With these new means at his disposal, the leader must continue to ensure the delicate balance between reflection and action. Without real and concrete commitment, there can be no good understanding of the situation; and without hindsight, there can be no good decisions.

This balance can only be achieved through advanced training: human training, to know how to command his men and respect the opponent; intellectual training, because he must understand the technologies he will use on the battlefield; and military training, because war is an art that leaves no room for improvisation and requires skills and qualities acquired through effort, courage, and demanding practice.

It is this leader of tomorrow that the French Military Academy of Saint-Cyr Coëtquidan is training in
Brittany, in the western part of France.

Chapter 2: UAV Design and Development

[mh] Airframe Design and Materials

The development of new technologies and structural materials has been, and still is, directly related to the progress of civilization. Quantitative and qualitative progress in the development of our civilization has been made possible by the use of new materials. The aviation industry is a particularly clear demonstration of this thesis. Aerospace is a universally recognized indicator of progress in advanced fields of knowledge. Historically, a great number of the achievements of civilization in science, engineering, and materials science have been closely tied to aviation.
Figure: Development and evolution of structural materials over time (the time scale is nonlinear; graphs drawn on the basis of various statistical data). 1: in 559 AD, several prisoners of Emperor Wenxuan of Northern Qi, including Yuan Huangtou of Ye, were reportedly forced to fly kites off a tower as an experiment. 2: the first manned flight was made in a tethered Montgolfier hot air balloon in 1783. 3: the first manned glider flight was made by a boy in an uncontrolled glider launched by George Cayley in 1853. 4: the first controlled, sustained flight in a powered airplane was made by the Wright brothers in 1903. 5: the first documented supersonic flight was made by Chuck Yeager in the Bell X-1 in 1947. 6: the first piloted orbital flight was made by Yuri Gagarin aboard Vostok in 1961; the first manned hypersonic flight was made the same year by Robert White in the X-15 research aircraft at speeds in excess of Mach 6. 7: the first powered, controlled takeoff and landing on another celestial body was made by the NASA rotorcraft Ingenuity on Mars in 2021.

Up to now, aluminum and its alloys have been the main structural materials used in aviation. The combination of strength, lightness, and low cost has made them indispensable in aircraft design.

However, progress does not stand still. The ever-increasing demands placed on aerospace by military and civilian customers have driven a number of processes, one of which has been the search for new structural materials for the aerospace industry. In the 1960s, looking for ways to lighten structures, designers began to use composite materials (CM) widely, along with new metal alloys.

Figure: CM usage rate in the aviation industry.

Polymer composites are widely used in the aerospace industry. Their composition and manufacturing technology are relatively simple. A woven base, a fabric made of carbon fibers (or other fiber types), is impregnated with synthetic polymer resins. The raw product is then pressed into the desired shape under heat. After the resin has cured, the edges of the product are trimmed, and the product is then typically finished.
Composites were first used in aviation in the form of phenol-impregnated modified wood. This
material was then replaced by more advanced metal alloys. However, CM returned to aircraft structure
when it became necessary to provide radio transparency in the radar antenna zone. Fiberglass
composites perfectly met the radio transparency requirements and provided the necessary strength and
stiffness as well as aerodynamic perfection for radar fairings .

In 1938, the Douglas Aircraft Company used fiberglass for the fairings of the Douglas A-20 Havoc bomber. In 1964, the first all-fiberglass airframe, the H-301 Libelle (“Dragonfly”), received German and U.S. type certification.

After the 1950s, aerospace engineers began to actively develop and introduce new CM reinforced with
boron, carbon, and synthetic fibers into aircraft structures .

Boeing 727 (1963) was the first medium-range narrow-body airliner developed by the Boeing Corporation to use composite materials in its design: the rudder skins were made of carbon-epoxy CM, achieving a 26% weight reduction of the rudder.

In 1967, the Eagle, a four-seat civilian piston-powered aircraft built by Windecker, made its maiden flight with a mostly composite airframe.

Lockheed L-1011-1 Tristar made its first flight in 1970. Its design used carbon-epoxy aileron skins and
later a carbon-epoxy keel. Further models of this aircraft increased the amount of CM. For example, in
the Lockheed L-1011-500 Tristar (1978), Kevlar wing-to-fuselage fillets, fixed nose and trailing edge
wing panels, trailing edge elevator and rudder panels, wing high-lift fairings, and fuselage fillets with
the central engine were used. The weight reduction for CM units was 25% .

The design of DC-10 (1970) is generally similar to that of the Lockheed L-1011-1 Tristar. CM were
used to make carbon epoxy skins and the rudder structural frame (30% weight reduction compared to
metal structure) .

Airbus A310 (1982) used a carbon fiber-reinforced plastic vertical stabilizer, control surfaces, high-lift
devices, and carbon fiber-based CM brakes . The total composite content in the A-310 design was 5%
of the aircraft weight .

Figure: Composite materials application in Airbus A310. 1: ailerons; 2: wing-to-fuselage fillets; 3: elevators; 4: rudder; 5: fin; 6: high-lift devices and their fairings; 7: engine case parts; 8: radome.

ATR-42 (1984) and its extended version, ATR-72 (1989), have wing torsion boxes made entirely of
carbon fiber (for the first time in passenger aviation). In general, the proportion of CM in the aircraft
was 22.6%. Beech Model 2000 Starship 1 (1986) has wings and fuselage made of CM .

Antonov Company (USSR-Ukraine) has a long history of CM application. The share of CM use in
Antonov aircraft structures is increasing from 1 to 2% of the airframe weight in An-26 (1969) to 20%
in An-70 (1994). Almost all wing high-lift devices were made of CM .

The appearance in 2005 of the Hawker 4000 (Beechcraft, USA), with its all-composite fuselage, and of the European A380 brought the share of CM in the structure to 25–30%.

The development of B787 (2009) and A350 (2013), whose wings and fuselage are mainly made of
CM, brought this ratio to 50–55% . Currently, Airbus and Boeing are the undisputed leaders in the use
of CM structures in transport category aircraft.

There are also projects to upgrade existing aircraft, the main idea being to replace traditional metal structural materials with composites. One example is the Yakovlev Yak-40 upgrade project: the CTP-40DT (2016) is a new variant of the Yak-40 whose wing is made entirely of CM.

CM are very practical for the UAV industry, since such flying vehicles are especially sensitive to weight. In the past, when structural materials were mainly metals, UAVs had poor flight performance and limited applicability. Today, a new generation of propulsion systems and CM-based structural materials provides UAVs with high flight performance and a wide range of applications.

Based on all the above, it can be concluded that the use of composite materials in the design of modern aircraft is well justified. It is CM that make it possible to achieve high weight efficiency for transport-category aircraft.

[h]Aircraft wing specifics

The wing is one of the main units of a modern airplane; its main function is to generate lift. However, the wing (like the fuselage) is the largest unit of an airplane airframe and is therefore a source of high aerodynamic drag. Thus, there is a contradiction: on the one hand, to increase the transport efficiency of the aircraft, the wing area should be increased; on the other hand, an increase in wing area leads to an increase in its weight, its aerodynamic drag, the loads acting on it, and so on.

The weight of a passenger aircraft wing is approximately 8–12% of the aircraft structural weight and 30–40% of the airframe weight, and up to 30–40% of the wing structural weight is the skin. One of the ways to reduce the weight of the wing that depends on the designer is the widespread use of composite materials for its elements.

The aerodynamic drag of the wing can account for up to 60% of the total drag of an aircraft, comprising profile drag, friction drag, and induced drag. The induced drag alone is about 30% of the total drag of the aircraft, or about 50% of the wing drag. The specificity of induced drag is that it is directly related to the lift generated by the wing: higher lift results in higher induced drag.
There are several ways to reduce induced drag for a given lift. So far, the most effective is to increase the wing aspect ratio. For example, Boeing used folding wingtips on the new B777X, which allowed the wing aspect ratio to be increased to about 10. Another example is the new UAC MC-21, which has an aspect ratio of 11. In both cases, such a high aspect ratio was achieved by using CM.
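The relationship behind this is the classical induced-drag formula C_Di = C_L^2 / (pi * e * AR). The sketch below uses assumed illustrative values for the lift coefficient and the Oswald efficiency factor e (they are not from the source) to show how raising the aspect ratio from 9 to 11 lowers the induced-drag coefficient by roughly 18%:

```python
# Induced-drag coefficient C_Di = C_L**2 / (pi * e * AR).
# C_L and the Oswald efficiency factor e below are illustrative values.
import math

def induced_drag_coefficient(c_lift, aspect_ratio, oswald_e=0.85):
    return c_lift**2 / (math.pi * oswald_e * aspect_ratio)

cdi_9 = induced_drag_coefficient(0.5, 9.0)    # conventional aspect ratio
cdi_11 = induced_drag_coefficient(0.5, 11.0)  # MC-21-class aspect ratio
print(f"AR 9:  C_Di = {cdi_9:.5f}")
print(f"AR 11: C_Di = {cdi_11:.5f}  ({(1 - cdi_11 / cdi_9):.0%} lower)")
```

Because the reduction is proportional to the ratio of aspect ratios, the percentage gain holds for any lift coefficient and Oswald factor.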

Figure: Deflectable wingtips of the Boeing 777X.

One of the consequences of a high aspect ratio is an increase in the bending moment in the wing (for the same lift force). The increase in internal wing loads leads to an increase in the stresses acting in the wing elements. Therefore, the longer the wing span, the more difficult it is to ensure its strength and stiffness. In other words, the mass added to provide strength can easily offset any positive effect of using a wing with a high aspect ratio. In addition to bending stiffness, torsional stiffness must also be considered; it, too, decreases with increasing wing span.
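This trade-off can be quantified with a simple idealisation: for an elliptical spanwise lift distribution, each half-wing carries half the total lift acting at 4s/(3*pi) from the root (s being the semispan), so at constant wing area and total lift the root bending moment grows as the square root of the aspect ratio. The aircraft numbers below are illustrative, not taken from any particular type:

```python
# Root bending moment for an elliptical spanwise lift distribution:
# each half-wing carries L/2, acting at 4*s/(3*pi) from the root,
# where s is the semispan. Aircraft numbers below are illustrative.
import math

def root_bending_moment(total_lift, wing_area, aspect_ratio):
    span = math.sqrt(aspect_ratio * wing_area)  # AR = b**2 / S
    semispan = span / 2
    arm = 4 * semispan / (3 * math.pi)          # lift centroid of a half-wing
    return (total_lift / 2) * arm

lift, area = 600e3, 120.0  # N, m**2 (same lift and area in both cases)
bm9 = root_bending_moment(lift, area, 9.0)
bm11 = root_bending_moment(lift, area, 11.0)
print(f"AR 9:  {bm9/1e6:.2f} MN*m;  AR 11: {bm11/1e6:.2f} MN*m "
      f"(+{bm11/bm9 - 1:.0%})")
```

In this idealisation, going from AR 9 to AR 11 raises the root bending moment by sqrt(11/9), about 11%, which is the structural penalty that the high elasticity of composites helps to absorb.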

Figure: Equivalent lift force arrangement and bending moment evaluation.

Here, a positive property of composite materials, namely high elasticity, comes fully into play. Elasticity is the ability of a material to change its shape under load and return to its original shape when the load is removed. For example, the wing of the B787 can flex up to seven meters, and the wing of the A350XWB up to five meters. Where aluminum would lose strength and fracture, composites change shape only temporarily, bending under load without breaking.
Figure: Wing flex tests. (a) Wing bending scheme in flight; (b) scheme of the wing bending test; (c) Boeing B787; (d) Airbus A350XWB.

Another contributor to wing drag is the joints of the wing's structural elements that protrude into the airflow; their share of the drag reaches 3%. The use of CM reduces the number of such joints thanks to the wide use of bonded and one-piece structures.

Thus, the use of composites in the wing structure has a significant positive effect on the main performance indicators of the wing: it reduces the weight of the wing, reduces several components of its drag at once, and increases the strength of the wing.

[h]Composite materials and their performance for aircraft structure

The properties of composite materials depend on the composition of the components, their quantitative ratio, and the strength of the bond between them. By varying the volume content of the components, it is possible to obtain materials with the required strength, heat resistance, and modulus of elasticity, or compositions with special properties such as high tensile strength, high torsional stiffness, or magnetism, depending on the purpose.
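A first-order illustration of how properties follow the component ratio is the rule of mixtures for the longitudinal modulus of a unidirectional composite, E_c = Vf*E_f + (1 - Vf)*E_m; the fiber and matrix moduli below are typical handbook values used only for illustration:

```python
# Rule-of-mixtures estimate for a unidirectional composite loaded
# along the fibers: E_c = Vf*E_f + (1 - Vf)*E_m. The fiber and matrix
# moduli below are typical handbook values, used for illustration.
def longitudinal_modulus(vf, e_fiber, e_matrix):
    """vf: fiber volume fraction; moduli in GPa."""
    return vf * e_fiber + (1 - vf) * e_matrix

E_CARBON, E_EPOXY = 230.0, 3.5  # GPa, typical carbon fiber and epoxy
for vf in (0.4, 0.5, 0.6):
    e_c = longitudinal_modulus(vf, E_CARBON, E_EPOXY)
    print(f"Vf = {vf:.1f}: E_c = {e_c:.1f} GPa")
```

This linear estimate applies only along the fiber direction; transverse and shear properties are matrix-dominated and much lower, which is why lay-up orientation is itself a design variable.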

Figure shows a brief classification of CM by the matrix material type.


Figure: Brief classification of CM by the matrix material type.

Some terms are briefly described in Section 3.5.

[h]Polymer composite materials

A large group of composites are polymer composite materials (PCM)—composites in which polymer
material serves as a matrix. Their use has a significant economic impact.

Parts can be made from PCM using both processes typical of molded polymer products (injection
molding, pressing, etc.) and special processes unique to this class of materials (winding, etc.).

[h]Fiberglass

Fiberglass is a PCM reinforced with glass fibers formed from molten inorganic glass. Thermosetting
resins such as polyester, phenolic, epoxy, and others and thermoplastic polymers such as polyamides,
polyethylene, polystyrene, and the like are often used as the matrix.

Fiberglass materials have high strength, low thermal conductivity, and high electrical insulation
properties and are transparent to radio waves.

Fiberglass is a low-cost polymer composite. Its use is justified in serial and mass production, aerospace
industry, shipbuilding, radio electronics, construction, and automotive and railway engineering .

[h]Carbon fiber-reinforced plastic

Carbon fiber-reinforced plastics are composite materials consisting of a polymer matrix and reinforcing
elements in the form of carbon fibers. Carbon fibers are obtained from synthetic and natural fibers
based on copolymers of acrylonitrile, cellulose, and others.

For the production of carbon fiber composites, the same matrices are used for fiberglass—
thermosetting and thermoplastic polymers.

The main advantages of carbon fiber composites over glass fiber composites are their low density and
higher modulus of elasticity. Carbon fiber-reinforced plastic is a very light and strong material. Carbon
fibers, and therefore carbon plastics, have virtually no linear expansion.

Carbon plastics are used in aerospace industry, mechanical engineering, medicine, and sports
equipment. Carbon plastics are used to produce high-temperature components for rockets and high-
speed aircraft, brake pads and disks for aircraft and reusable spacecraft, and electrothermal equipment .

[h]Boron fiber-reinforced plastic


Boron fiber-reinforced plastics are compositions consisting of a polymer matrix and boron fibers.
Modified epoxy and polyamide binders are used to make boron plastics. The fibers can be either
monofilaments, tapes braided with auxiliary glass filaments, or tapes in which boron filaments are
interwoven with other filaments.

Due to the high hardness of the fibers, the material has high mechanical properties, and boron also
serves to absorb thermal neutrons.

Boron fiber has high compressive strength, shear strength, hardness, and thermal and electrical
conductivity. However, the high brittleness of the material makes it difficult to process and limits the
shape of boron plastic products.

Composites based on boron fibers are mainly used in the aerospace industry to produce parts that are
subjected to long-term stress . The cost of boron fibers is very high due to the peculiarities of their
production technology.

[h]Organic fiber-reinforced plastic

Organic fiber-reinforced plastics are composites of polymeric binders and fillers, which are organic
synthetic, less often natural, and artificial fibers in the form of tapes, yarns, fabrics, paper, and others.

In thermosetting organic plastics, the matrix usually consists of epoxy, polyester, and phenolic resins, as well as polyimides; such materials contain 40–70% filler. In organic plastics based on thermoplastic polymers—polyethylene, PVC, polyurethane, and others—the filler content ranges from 2 to 70%.

The degree of orientation of filler macromolecules plays an important role in improving the mechanical
properties of organic plastics. Macromolecules of rigid chain polymer (Kevlar) are mainly oriented
along the fiber axis and therefore have high tensile strength along the fibers. Body armor is made of
Kevlar-reinforced materials. Organic plastics have low density, are lighter than glass and carbon fiber-
reinforced composites, have relatively high tensile strength and impact strength but low compressive
and flexural strength.

Organic plastics are widely used in automotive, aerospace, radio electronics, shipbuilding, chemical
engineering, production of sports equipment, and others .

[h]Metal matrix composites

Aluminum, magnesium, nickel, copper, and other metals are used to make metal matrix composites (MMC). The fillers are high-strength fibers and refractory particles of varying dispersity that do not dissolve in the base metal.

The properties of dispersion-hardened metal composites are isotropic, that is, the same in all directions.
An addition of 5–10% of reinforcing fillers (such as refractory oxides, nitrides, borides, and carbides)
leads to increased resistance of the matrix to loading and increased heat resistance of the composite in
comparison with the original matrix.

Reinforcing metals with fibers, filamentary crystals, or wires significantly increases both the strength
and heat resistance of the metal. For example, aluminum alloys reinforced with boron fibers can be
operated at temperatures up to +450–500°C instead of +250–300°C .
Oxide, boride, carbide, nitride metal fillers, and carbon fibers are used. Ceramic and oxide fibers, due
to their brittleness, do not allow plastic deformation of the material, which causes significant
difficulties in the manufacture of products, while the use of more plastic metal fillers allows
deformation.

Such composites are obtained by impregnating fiber bundles with metal melts, electrodeposition,
mixing metal with powder and subsequent sintering, etc. Fiber-reinforced metals are used in aerospace
and other industries.

[h]Ceramic matrix composites

Ceramic composite materials (CCM) are materials in which the matrix is ceramic, and the
reinforcement is metallic or nonmetallic fillers.

Reinforcing ceramic materials with fibers and with dispersed metal and ceramic particles results in high-strength composites.

The range of fibers suitable for reinforcement is limited by the properties of the base material. Metal
fibers are often used. The tensile strength does not increase much, but the thermal resistance increases
—the material breaks less when heated, but there are cases where the strength of the material
decreases. This depends on the ratio of the thermal expansion coefficients of the matrix and the
reinforcing fiber. Ceramic composites with carbon fibers are promising for high-temperature
applications.

The applications for composite materials are numerous . In addition to the aerospace, space, and other
specialized industries, they are in demand in the construction of power turbines, the automotive
industry, mining, metallurgy, construction, etc. The range of applications of these materials is
constantly expanding.

[h]Properties of composite materials

As mentioned above, the main advantages of composite materials are their high specific and fatigue
strength, high wear resistance, and stiffness. As the matrix of the composite is responsible for the
uniformity of the material, its resistance to external influences, and the distribution and transfer of
stresses, the reinforcing material acts as a reinforcing structure and is a stronger component than the
matrix. By selecting the properties of the matrix and the reinforcing material, it is possible to achieve
the required combination of manufacturing and operational properties. Some performances of different
CM are shown in Table .

No.  Matrix material  Reinforcing material   Density, kg/m3  Tension stress, MPa  Elasticity modulus, GPa
1    Epoxy            Glass fibers           1900–2200       1200–2500            50–68
2    Epoxy            Organic fibers         1300–1400       1700–2500            75–90
3    Epoxy            Carbon fibers          1400–1500       800–1500             120–220
4    Epoxy            Boron fibers           2000–2100       1000–1700            220
5    Aluminum         Carbon fibers          2300            800–1000             200–220
6    Aluminum         Boron fibers           2600            1000–1500            220–250
7    Magnesium        Carbon fibers          1800            600–800              180–220
8    Magnesium        Boron fibers           2000            700–1000             200–220
9    Nickel           Tungsten wire          12,500          800                  265
10   Nickel           Molybdenum wire        9300            700                  235
11   Carbon           Carbon fiber           1500–1800       350–1000             120–220
12   Ceramic          Silicon carbide fiber  3200            480                  —

Table :Basic CM properties for different matrix and fiber materials.

As shown in Table, there is a wide range of available properties for CM based on different components. This makes it possible to use CM under different operational conditions (highly stressed elements, thermally stressed elements, radio-transparent elements, etc.) in aircraft structures.

[h]Brief glossary

Acrylonitrile is an organic compound with the chemical formula CH2CHCN and the structure H2C=CH−C≡N.

Aramid is any of the number of synthetic polymers (substances composed of long-chain, multiunit
molecules) in which repeating units containing large phenyl rings are linked together by amide groups.

Biopolymers are natural polymers produced by the cells of living organisms.

Cellulose is an organic compound with the chemical formula (C6H10O5)n and is the most abundant biopolymer on Earth.

Electrodeposition is a process of producing a metal coating on a solid substrate through the reduction
of cations of that metal by means of a direct electric current.

Epoxy resin is a type of polymer that is made up of a combination of epoxide groups and other
molecules.

Ethylene is a hydrocarbon with the chemical formula C2H4 (H2C=CH2).

Filamentous crystals are single crystals in the form of needles and fibers with diameters ranging from a
few nanometers to hundreds of microns and a large length-to-diameter ratio (typically 100–1000).

Kevlar (para-aramid) is a strong, heat-resistant synthetic fiber, related to other aramids such as Nomex
and Technora.

Monofilament is a single filament of the synthetic fiber.

Monomer styrene is an organic compound with the chemical formula C6H5CH=CH2.

Nomex is a flame-resistant meta-aramid material.


Para-aramid is an aromatic polyamide that is characterized by long rigid crystalline polymer chains.

Phenol (also known as carbolic acid or phenolic acid) is an aromatic organic compound with the
molecular formula C6H5OH.

Phenolic resins are synthetic polymers made by reacting phenol (or substituted phenols) with
formaldehyde.

Polyamide is a polymer that comprises repeated units linked together by amide bonds.

Polyester resins are synthetic resins formed by the reaction of dibasic organic acids and polyhydric
alcohols.

Polyethylene is a thermoplastic that also happens to be the most common plastic. It is produced
through the use of special catalysts that affect polyolefins at moderate temperature and pressure.

Polyimide is a polymer containing imide groups belonging to the class of high-performance plastics.

Polyolefins (polyethylene and polypropylene) are produced by polymerizing ethylene and propylene, respectively, mainly obtained from oil and natural gas.

Polystyrene is a polymer made from the monomer styrene, a liquid hydrocarbon that is commercially
manufactured from petroleum.

Polyurethane is an ultralight, load-bearing, elastic, impact-resistant, sound-insulating compound.

Polyvinyl chloride (PVC or vinyl) is an economical and versatile thermoplastic polymer.

Propylene is a gaseous hydrocarbon with the chemical formula C3H6, obtained from petroleum.

Technora is an aromatic copolyamide that has a highly oriented molecular structure, consisting of both
para and meta linkages.

Thermoplastic is a type of polymer that liquifies upon heating and can be re-melted and reshaped after cooling; a thermosetting resin, by contrast, cures irreversibly and cannot be reprocessed once set.

[h]Methods for assessing the strength of composite materials

The fracture mechanics of CM units is very complex. There are many approaches to assess their
strength based on different fracture criteria . They are usually based on experimental results from their
tensile, compression, and shear tests. The fracture criterion defines the critical combination of
operational stresses (deformations) that will lead to failure.

First, it is necessary to make a conditional partition of the unit into simplified components. For
example, for a wing, at least two structural components can be conditionally identified: a panel with
stiffeners (skin with stringers) and a spar (Figure).
Figure :Wing subparts. 1: upper and lower panels; 2: stiffeners of panels; 3: spars; 4: stiffeners of
spars.

For aircraft of the transport category, the intensity of wing loading requires selecting a torsion-box load-carrying structure for the wings. The main feature of this type of load-carrying structure is that most of the bending moment is carried by the wing panels, while the lift force acts as a shear force in the spar web. Thus, both of these structural components can be considered as panels with a complex loading configuration. Both panels have stiffeners: a set of stringers for the skin and a set of struts for the spar web. The caps of a spar can be considered as stiffeners of the skin panels. These structural components can also be simplified to a composite package.

In the general case, the strength analysis of a composite package is reduced to determining the stress-strain state of its layers and calculating their safety factors according to one criterion or another. The minimum of these factors determines the safety margin of the CM package as a whole. Figure shows a typical CM package structure. In the following, the most common fracture criteria used in the practice of composite strength analysis are briefly summarized.

Figure :A typical CM package structure.

Maximum stress criterion: according to this criterion, monolayer failure occurs when any of the following conditions is violated:

−XC < σ1 < XT,  −YC < σ2 < YT,  |τ12| < S12.

The safety factor is η = A⁻¹, where A is the largest of the stress-to-strength ratios. Here, σ1, σ2, and τ12 are the stresses acting in the monolayer; XT, XC, YT, YC, and S12 are the failure stresses in tension and compression along and across the fibers and in shear.

This criterion has the following disadvantage. In general, the matrix is in a three-dimensional stress state; yet even for relatively thin plates, where σ3 = 0 and transverse shear strains can be neglected, the criterion ignores the mutual influence of the plane-stress components on the strength of the matrix, which may lead to overestimating the matrix strength under combined loads.
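Since the inequalities of the criterion did not survive extraction here, the following sketch evaluates it in its common textbook form; the function name is illustrative, and the stress and strength symbols follow the text:

```python
def max_stress_safety_factor(s1, s2, t12, XT, XC, YT, YC, S12):
    """Safety factor eta = A**-1 by the maximum stress criterion.

    A is the largest stress-to-strength ratio; failure when A >= 1.
    s1, s2, t12 are the monolayer stresses; XT, XC, YT, YC, S12 are
    the failure stresses in tension, compression, and shear.
    """
    A = max(
        s1 / XT if s1 >= 0 else -s1 / XC,   # along the fibers
        s2 / YT if s2 >= 0 else -s2 / YC,   # transverse to the fibers
        abs(t12) / S12,                     # in-plane shear
    )
    return 1.0 / A
```

For example, a ply at σ1 = 600 MPa with XT = 1200 MPa and no other stress components gives η = 2. Note that each component is checked in isolation, which is exactly the lack of interaction discussed above.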

Tsai-Hill criterion: a quadratic criterion based on the fourth (energy) theory of strength. Monolayer failure occurs when the following condition is satisfied:

A = (σ1/X)² − σ1·σ2/X² + (σ2/Y)² + (τ12/S12)² ≥ 1.

Since the criterion is quadratic in the stresses, the safety factor here is calculated as η = 1/√A.

The main disadvantage of this criterion is that it is impossible to determine the cause of monolayer
failure: matrix or fiber failure has occurred. In addition, the criterion does not distinguish between
stress combinations σ1 and σ2, as biaxial tension or biaxial compression are equivalent in this case.
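A numerical sketch of the Tsai-Hill check in its textbook plane-stress form (the function name is illustrative; X and Y are chosen by the caller according to the sign of the corresponding stress):

```python
import math

def tsai_hill_safety_factor(s1, s2, t12, X, Y, S12):
    """Safety factor by the Tsai-Hill quadratic criterion.

    A = (s1/X)**2 - s1*s2/X**2 + (s2/Y)**2 + (t12/S12)**2
    Failure when A >= 1; being quadratic in the stresses,
    the criterion gives eta = A**-0.5.
    """
    A = (s1 / X) ** 2 - s1 * s2 / X ** 2 + (s2 / Y) ** 2 + (t12 / S12) ** 2
    return 1.0 / math.sqrt(A)
```

Replacing (s1, s2) by (−s1, −s2) leaves A unchanged, which is exactly the biaxial tension/compression equivalence noted above.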

Hofmann criterion: this criterion is based on the sum of the linear and quadratic stress invariants. Monolayer failure occurs when the following condition is satisfied:

σ1²/(XT·XC) − σ1·σ2/(XT·XC) + σ2²/(YT·YC) + (1/XT − 1/XC)·σ1 + (1/YT − 1/YC)·σ2 + τ12²/S12² ≥ 1.

The safety factor is determined by solving the quadratic equation obtained by substituting η·σ for the stresses in this condition.

The criterion distinguishes between combinations of stresses σ1 and σ2 due to the additional linear terms compared to the Tsai-Hill criterion. As with the Tsai-Hill criterion, it is rather problematic to determine the cause of failure.

Tsai-Wu criterion: it is a variation of the Hofmann criterion. Monolayer failure occurs when the following condition is satisfied:

F1·σ1 + F2·σ2 + F11·σ1² + F22·σ2² + F66·τ12² + 2F12·σ1·σ2 ≥ 1,

where F1 = 1/XT − 1/XC, F2 = 1/YT − 1/YC, F11 = 1/(XT·XC), F22 = 1/(YT·YC), and F66 = 1/S12². This criterion differs from the Hofmann criterion by the interaction factor F12 applied to the product of σ1 and σ2. In general, the factor F12 can be in the range −1.0 ≤ F12 ≤ 1.0.
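The Tsai-Wu condition is easy to evaluate once the strength tensor components are assembled from the basic failure stresses. A hedged sketch follows, using the textbook coefficient definitions; the function name is illustrative:

```python
def tsai_wu_failure_index(s1, s2, t12, XT, XC, YT, YC, S12, F12=0.0):
    """Tsai-Wu failure index (failure when the index >= 1).

    F12 is the interaction factor, which may lie in [-1, 1];
    it is often taken as 0 or estimated from biaxial tests.
    """
    F1 = 1.0 / XT - 1.0 / XC
    F2 = 1.0 / YT - 1.0 / YC
    F11 = 1.0 / (XT * XC)
    F22 = 1.0 / (YT * YC)
    F66 = 1.0 / S12 ** 2
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2.0 * F12 * s1 * s2)
```

A quick sanity check: a pure fiber-direction stress equal to XT returns an index of exactly 1, as it should.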

A characteristic feature of the criteria discussed above is their uncertainty with respect to what
happened in the monolayer—whether the matrix or the fiber was destroyed. At the same time, this is
very important when analyzing the strength of a composite package. Therefore, criteria that analyze the
strength margins of both the matrix and the fiber separately are becoming more common.

Hashin-Rotem criterion: this criterion assesses fiber and matrix strength separately.

The following relationships are used to determine the fiber strength:

The strength of the matrix is determined by the following relations:

The conditions for interlayer delamination are specified by the relation:

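The separate fiber and matrix relations did not survive extraction here; the sketch below uses a common textbook form of the in-plane Hashin-Rotem checks (the function is illustrative and omits the delamination condition):

```python
def hashin_rotem_indices(s1, s2, t12, XT, XC, YT, YC, S12):
    """Hashin-Rotem style check with fiber and matrix assessed
    separately (textbook form). A constituent fails when its index
    reaches 1, which tells WHICH constituent failed -- the feature
    the single-index criteria above lack.
    """
    # Fiber: governed by the fiber-direction stress alone
    fiber = s1 / XT if s1 >= 0 else -s1 / XC
    # Matrix: transverse stress and in-plane shear interact quadratically
    Y = YT if s2 >= 0 else YC
    matrix = (s2 / Y) ** 2 + (t12 / S12) ** 2
    return fiber, matrix
```

Returning the two indices separately makes it straightforward to report matrix failure and fiber failure as distinct events in a layer-by-layer package analysis.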
The criteria considered above belong to the group of criteria based on the ultimate stresses of the
monolayer. In addition, there are a number of criteria based on the ultimate strain of the monolayer.

The maximum strain criterion is one of the most common criteria. According to this criterion, monolayer failure occurs when any of the following conditions is violated:

−ε1C < ε1 < ε1T,  −ε2C < ε2 < ε2T,  |ε12| < γ12.

The safety factor is calculated as η = A⁻¹, where A is the largest of the strain-to-failure-strain ratios. Here, ε1, ε2, and ε12 are the strains of the monolayer; ε1T, ε1C, ε2T, ε2C, and γ12 are the failure strains in tension, compression, and shear. Formally, this criterion is the same as the maximum stress criterion. The difference between them is that the elastic characteristics of the matrix can have a nonlinear behavior, and the strain criteria make it possible to take this factor into account to a certain extent.

There are other criteria with different approaches to taking into account stresses and strains. Here, the
more commonly used one has been explained.

Figure illustrates the extent to which certain strength criteria are used in practice.

Figure :Application of various failure criteria to analyze the strength of composite units .

[h]Strength and stability analysis of CM panels with stiffeners

Depending on the stiffness ratio of skin and stiffeners, the reinforced CM plates can be considered as
composite plate sets connected along nodal lines or as structurally orthotropic sets with given stiffness
characteristics. Depending on the model adopted, different numerical approaches are also used for the
strength analysis.

Analysis with discrete models makes it possible to detect, in addition to general instability, some local forms of loss of stability. However, to fully study the behavior of the panel after a local loss of stability, a discrete model should be used in combination with geometric and physical nonlinearities. Therefore, when solving the problem in a nonlinear formulation, the nonlinear stress-strain behavior should be checked at each step against the strength criteria in order to detect possible failure of the plate before a general loss of stability occurs. This type of failure is called crippling strength in the reference literature.

In addition to the abovementioned complex way of studying the behavior of a reinforced panel based
on a discrete model in a geometric nonlinear form, it is possible to use a model of a structurally
orthotropic plate as a substitute for a real reinforced panel. In this case, the problem is reduced to the
calculation of the “smeared-out” stiffness parameters of such an orthotropic plate.

[h]Numerical-analytical method for calculation of the overall stability of CM panels, taking into account the local loss of skin stability

The solution of the general stability problem of a reinforced CM panel is reduced to the stability problem of an anisotropic (orthotropic) plate, governed by the equation (compressive forces taken as positive):

Dx·∂⁴w/∂x⁴ + 2Dxy·∂⁴w/∂x²∂y² + Dy·∂⁴w/∂y⁴ + Nx·∂²w/∂x² + 2Nxy·∂²w/∂x∂y + Ny·∂²w/∂y² = 0.

Here, w is the lateral deflection; Dx, Dy, and Dxy are the stiffnesses along their respective axes; Nx, Ny, and Nxy are the forces in their mid-plane.

The critical compressive forces can be approximated by closed-form equations for the following typical cases:

1. A rectangular plate hinged along the contour, loaded at the edges x = 0 and x = a by compressive forces.

2. A rectangular elongated plate fixed on all edges.

3. A rectangular elongated plate with one unloaded edge free and the other three hinged (a/b > 4).

4. A rectangular plate (1 < a/b < ∞) whose loaded edges are hinged and whose unloaded edges are fixed.
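The closed-form expressions themselves were lost in extraction. As an illustration, the first case (a long hinged plate) is often quoted in the form sketched below, using the text's stiffness notation, where Dxy plays the role of the effective twisting stiffness (D12 + 2·D66 in laminate notation); the function name is illustrative:

```python
import math

def ncr_hinged_long_plate(Dx, Dy, Dxy, b):
    """Critical compressive line load for a long orthotropic plate
    hinged along the contour and compressed along x.

    Classical estimate: Nx_cr = (2*pi**2 / b**2) * (sqrt(Dx*Dy) + Dxy),
    where b is the plate width.
    """
    return 2.0 * math.pi ** 2 / b ** 2 * (math.sqrt(Dx * Dy) + Dxy)
```

For an isotropic plate (Dx = Dy = Dxy = D), this reduces to the familiar Nx_cr = 4π²D/b², i.e., the buckling coefficient k = 4.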

In the practice of studying the stability of reinforced metal panels and shells, the concept of a “framed”
shell or panel has developed. This means that the main material is concentrated in the reinforcements
and the skin is comparatively thin. The same assumption applies to CM panels. The comparatively
early local loss of stability of the skin and the reduction of its performance in the subcritical area are
accounted for by the reduction factor method.

The assumption that the shell or panel is “framed” leads to the assumption that there is no local loss of
stability of the reinforcement elements. As a result, the considered approach does not take into account
the possibility of failure of the reinforcement after local loss of stability.

For a reinforced plate considered as an orthotropic plate, the classical laminate relations are valid:

N = A·ε0 + B·κ,  M = B·ε0 + D·κ,

where N and M are the forces and moments in the panel; ε0 and κ are the strains and curvatures at the reference plane; and A, B, and D are the membrane, coupling, and bending stiffness matrices.

The following ratios apply to orthotropic skins


If the skin is regularly supported by longitudinal and lateral stiffeners installed with pitches bc and bf,
respectively, the “smeared-out” stiffness parameters of the supported panel are calculated using the
following equations:

Here, Es, Ef and Gs, Gf are the elastic and shear moduli of the longitudinal and lateral reinforcement; Fs, Ff, Is, If, IPs, and IPf are the areas and the centroidal and polar moments of inertia of the longitudinal and lateral reinforcement sections, respectively; zs and zf are the distances from the centers of gravity of the longitudinal and lateral reinforcement sections to the midplane of the skin; Gsf·hsf is the shear stiffness of the reinforcement set without skin, which depends on how the longitudinal and lateral reinforcements are attached to each other.

The given equations for the secant and tangent stiffness parameters are further used in the calculation
of the precritical stress-strain behavior and the overall stability of the reinforced panel. In the particular
case of no skin buckling, these equations are transformed into the usual equations for the stiffness
parameters of the reinforced panel.
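The "smearing" idea can be sketched for the simplest case of longitudinal stringers only; the function below is a minimal sketch under stated assumptions (symbols follow the text; Poisson coupling and the lateral set are omitted for brevity), not the text's full equations:

```python
def smeared_stiffnesses(E_skin, t, E_s, F_s, I_s, z_s, b_c):
    """'Smeared-out' longitudinal stiffness parameters of a skin of
    thickness t regularly supported by stringers of area F_s,
    centroidal inertia I_s, and midplane offset z_s, at pitch b_c.

    The stringer properties are distributed over the pitch, giving
    equivalent membrane (B11) and bending (D11) stiffnesses per unit
    width of the orthotropic-plate substitute model.
    """
    B11 = E_skin * t + E_s * F_s / b_c
    # Parallel-axis (Steiner) term transfers the stringer inertia to
    # the skin midplane; skin Poisson factor 1/(1 - nu**2) neglected.
    D11 = E_skin * t ** 3 / 12.0 + E_s * (I_s + F_s * z_s ** 2) / b_c
    return B11, D11
```

Reduction factors for post-buckled skin would enter these expressions by replacing E_skin with a reduced (secant or tangent) modulus, which is how the skin's degraded subcritical performance is accounted for.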

[h]Methodology for calculating the structural capacity of panels using FEM

The analysis of the structural capacity of reinforced panels based on FEM requires special approaches
compared to, for example, the analysis of smooth plates. The solution of such problems requires a
certain experience in their simulation and in analyzing the obtained data. Some peculiarities and tips
for solving the problems of assessing the structural capacity of reinforced panels are presented below.

The analysis of panels operating in tension does not cause significant difficulties in assessing their
strength. The main difficulties lie in the analysis of the panel operating in compression or under the
action of a combination of loads and are related to the issues of its local and overall stability. A
peculiarity of reinforced thin-walled structures, which include stiffened panels, is that they can have
local forms of stability loss related to the loss of stability of the skin and elements of the reinforcement
set. This does not exhaust the structural capacity of the panel, but only reduces its stiffness
characteristics. Panel failure occurs either when the load reaches the value of its total loss of stability or
when the strength of its separate elements is exhausted.

For metal panels, a natural limitation on the level of development of subcritical strains is the
requirements of aviation regulations for the absence of residual strains in the structure under
operational loads.

For panels made of CM, these requirements are transformed into the requirement that composite continuity is not violated under these loads, that is, that no matrix failure or delamination occurs. For panels with a comparatively weak set of stiffeners, in addition to the loss of stability of the skin, there may be a local loss of stability of the stiffener elements: their webs or caps, while the stiffener itself remains in the plane of the panel. This phenomenon is known as crippling and must be taken into account when assessing the structural capacity of the panel. Crippling can be related not only to the local loss of stability of the stiffener, but also to its deformation interaction with the skin that has lost stability.

Figure :Deformation interaction of the skin with the set of stiffeners .

In composites, crippling can quickly lead to stiffener failure. This depends mainly on the slenderness of the stiffener walls, that is, the ratio of web or cap width to thickness: the higher the ratio, the sooner crippling failure occurs. As an example, Figure shows a graph of the change in the safety factor of a CM plate with a ratio of b/δ = 27 during its subcritical deformation. It can be seen that, after the loss of plate stability, failure occurs within a 5% increase in load, and the safety factor accordingly decreases by more than a factor of two.
Figure :CM plate safety factor changes after the loss of stability .

The finite element model of a panel is created from its geometric data. Typically, a combined bending-membrane (shell) element is used. In terms of length, it is desirable to model three-span panels: this reduces the influence of the boundary conditions at the panel edges on the results and forms natural boundary conditions for the span being assessed, that is, the middle span. The lateral dimensions of the model are usually chosen based on the wide-panel concept, in which the panel, operating as part of the structure, has no natural boundary conditions at its longitudinal edges; the concept assumes that the boundary conditions at the longitudinal edges have minimal influence on the solution. Typically, five to six stiffeners in the model are sufficient to satisfy these conditions.

When fixed (clamped) support is modeled, rotation of the plate edges should be locked. Lateral in-plane movement of the edges should not be restrained: such restraint causes biaxial compression of the skin in the edge zone when the plate is loaded, resulting in an earlier loss of stability. Modeling hinged support requires no additional conditions on the edge displacements, except for limiting their displacements out of the panel plane.

The panel loading is provided by setting distributed loads on its edges. The values of these loads are set
in proportion to the longitudinal stiffnesses of the panel elements. This results in its central loading.

The size of the computational mesh of the model is mainly determined by the parameters of the panel
and the objectives of the analysis. If it is known that there will be no local forms of stability loss, the
nodal mesh can be less detailed.
A typical example of such a model of a panel with stiffeners is shown in Figure :

Figure :Typical reinforced panel model for analysis .

The application of the nonlinear form of FEM to the stability analysis of thin-walled units opens much wider possibilities for a complete analysis of their structural capacity. This is mainly due to the possibility of following the subcritical behavior of a unit and monitoring its strength by one failure criterion or another at each stage of loading. However, there are serious difficulties in the practical use of this approach; above all, such analyses are very time-consuming.

The results of the analysis of a reinforced panel show the deflection of the stiffeners out of the plane of the panel and the growth of stresses in the stiffener as a function of the applied load percentage. Intensive growth of the stiffener deflection indicates that the point of total loss of stability of the panel is near, and analysis of its stress-strain behavior allows the failure load to be specified. Figure shows typical dependencies of stiffener deflection growth and stresses in its rib as a function of load.

Figure :Graphs of the stringer deformation growth (a) and the stringer rib stress (b) as a function of the
load .
The prediction of the critical load can be based on the generalized tangential stiffness of the panel. Conventionally, this stiffness can be defined as the ratio of the load increment (in percent) to the increment of the panel edge convergence, T = ΔP/ΔU. Since it is the tangential stiffness that determines the stability of the panel, a sharp decrease in it indicates that the critical point of total loss of panel stability is near. As an example, Figure shows a typical graph of the convergence of the edges of one of the panels as a function of the percentage of the applied load.

Figure :Graph of the dependence of the value of the panel edges convergence on the percentage of the
applied load .

Thus, in the area of 80% load, T = (87.5 − 75)/(3.97 − 3.148) = 15.2, and at the finish of the process,
T = (88.67 − 88.65)/(4.133 − 4.088) = 0.44, that is, the tangential compressive stiffness of the panel
decreased almost 35 times.
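The arithmetic above can be reproduced directly; the two point pairs below are taken from the curve discussed in the text:

```python
def tangential_stiffness(p1, p2, u1, u2):
    """Generalized tangential stiffness T = dP/dU between two points
    of the load (%) vs. panel-edge-convergence curve."""
    return (p2 - p1) / (u2 - u1)

# Point pairs read from the curve discussed in the text
T_80pct = tangential_stiffness(75.0, 87.5, 3.148, 3.970)    # about 15.2
T_final = tangential_stiffness(88.65, 88.67, 4.088, 4.133)  # about 0.44

print(T_80pct / T_final)  # stiffness drops by a factor of about 35
```

Monitoring this ratio during an incremental nonlinear solution gives a simple numerical stopping signal for approaching the overall buckling point.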

[h]Assessment of panel structural strength

In the airframe structure, panels typically operate under combined loading conditions, the major
components of which are axial and shear loads. The values of these loads depend on both the location
of the panel in the structure and the design loading case. Therefore, it is convenient to assess the
structural strength of panels using diagrams of their strength and general stability, plotted in the
coordinates of the acting linear compressive and shear loads. The combination of these diagrams
provides a visual illustration of the structural strength of the panel and the causes of its failure .

Presenting such diagrams for a number of typical panels of the considered structure and plotting on
them the points of specific combinations of loads acting on the panel in various design cases, it is
possible to evaluate its structural strength and available safety factor. At the preliminary stage of
design, such diagrams can be drawn using numerical and analytical calculation methods. However, at
the stage of detailed design and verification analysis, it is desirable to use nonlinear FEM for this
purpose. This is especially true for CM panels. This is due to the fact that analytical calculation
methods often do not take into account the effects of panel crippling. As an example, Figure shows the
diagram of the structural strength of a wing panel made of CM as plotted by the results of nonlinear
FEM analysis.
Figure :Diagram of the structural strength of a CM wing panel . 1: zone of the overall instability; 2:
zone no strength; 3: design load combinations; 4: limit of strength; 5: limit of stability; 6: limit of
structural strength of the panel.

The diagram clearly shows that the structural strength of the panel is determined by its overall stability
when the ratio of shear and compression loads ξ = Nxy/Nx < 0.32 and by the strength of the composite
when ξ > 0.32. Having a point on the diagram that defines the combination of loads acting on the
panel, it is easy to estimate the available reserves of its strength by drawing a ray through this point
from the origin to its intersection with the boundary of the structural strength of the panel.
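The ray construction can be automated once the boundary of the diagram is approximated by a polyline; the following is a minimal sketch, with both the function and the boundary representation being illustrative rather than taken from the text:

```python
def safety_factor_from_diagram(load, boundary):
    """Available safety factor read off a structural-strength diagram.

    load     : (Nx, Nxy), the acting linear loads (point A)
    boundary : polyline [(Nx, Nxy), ...] approximating the limit curve

    A ray from the origin through the load point is extended to its
    intersection with the boundary (point B); eta = |OB| / |OA|,
    which equals the ray scale factor t at the intersection.
    """
    px, py = load
    for (ax, ay), (bx, by) in zip(boundary, boundary[1:]):
        dx, dy = bx - ax, by - ay
        det = -px * dy + py * dx
        if abs(det) < 1e-12:          # ray parallel to this segment
            continue
        t = (-ax * dy + ay * dx) / det   # scale along the ray
        s = (px * ay - py * ax) / det    # position within the segment
        if t > 0 and 0.0 <= s <= 1.0:
            return t                     # eta
    raise ValueError("ray does not cross the given boundary polyline")
```

For a load point strictly inside the boundary, the returned value is greater than 1 and equals the available safety factor η.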

The modern level of materials science makes it possible to replace traditional metallic structural materials with new composite materials. This trend provides weight reduction of airframes. However, CM units require more complex approaches to the design, analysis, and verification processes.

There are several criteria to assess the strength of CM basic elements. The aircraft wing structure
consists of complex units. For them, it is necessary to apply certain levels of approximation because
the full process of strength assessment can take a long time.

The aircraft wing consists of two main structural components: skin and spar. Both can be considered as reinforced panels. Such an approach allows different analysis models to be unified around a relatively well-known one.

On the basis of these conditions, it is possible to assess the strength of the wing by criteria for a
reinforced panel. Some peculiarities and recommendations for the selection of parameters of the model
of such a panel have been given. Also, several cases have been explained and analyzed as examples.

[mh]The Guidelines of Material Design and Process Control on Hybrid Fiber Metal Laminate for
Aircraft Structures

The current trends in commercial aircraft operations are showing an increasing demand for lower
operational and maintenance costs. The maintenance costs, directly incurred by the airlines’ operation,
are an important measure of the economic benefits associated with reducing direct operating cost .
Practically, most aircraft structures are being designed for longer design lifetimes with extended inspection intervals. For this purpose, the fatigue and damage tolerance (F&DT) properties have received considerable attention for the further use of lightweight materials on next-generation aircraft.
Therefore, there is a strong need for the application of more durable and damage tolerant materials to
minimize the total maintenance costs of commercial aircraft. Reducing the structural weight can lead to
better fuel efficiency, reduced CO2 emissions and lower maintenance costs. Nowadays, two competing
materials, such as modern aluminum alloys and composites, have the potential to improve the cost
effectiveness, but they still have limitations that restrict their widespread use, for example corrosion-
fatigue resistance for aluminum alloys and blunt notch strength and impact resistance for carbon fiber
reinforced plastics (CFRP) .

A chronological history of materials for aircraft structures is illustrated in Figure. New multi-layered hybrid materials, FMLs, consist of thin metal sheets bonded into a laminate with intermediate thin fiber reinforced composite layers, and combine the benefits of both material classes . The use of FMLs has led to subsequent benefits for primary aircraft structures, for example the upper fuselage skin panels shown in Figure . This figure also presents typical load cases for dimensioning criteria in the design of fuselage structures. To date, the representative commercially available FML is glass
reinforced aluminum laminate (GLARE), which combines thin aluminum sheets with unidirectional
glass fiber reinforced epoxy layers . It has been produced for the upper fuselage skin panels of Airbus
A380 (Toulouse, France) at GKN Aerospace’s Fokker Technologies (Papendrecht, The Netherlands) in
collaboration with AkzoNobel (Amsterdam, The Netherlands) and Alcoa . The FMLs are also being
considered for thin-walled structures such as single-aisle fuselage shells. In addition, their superior F&DT properties, which are addressed as essential design principles in JAR/FAR 25.571 (Damage-tolerance and fatigue evaluation of structure), make them ideal candidates for military aircraft, whose applications are subject not only to high fatigue stresses but also to high-velocity impact damage (e.g. battle damage) . Other types of commercially available FMLs are aramid aluminum laminate (ARALL), based on aramid fibers, and carbon reinforced aluminum laminate (CARALL), based on carbon fibers .
Figure :Chronological history of materials for aircraft structures .

Figure :GLARE application on Airbus A380 fuselage section-13/18: Total GLARE area is 469 m2, 27
panels (reproduced from Beumler ) and typical load cases on GLARE sections

The first generation FML, ARALL, was introduced in 1978 at the Faculty of Aerospace Engineering at TU Delft (Delft University of Technology, The Netherlands) . ARALL consists of alternating thin aluminum alloy layers (0.2–0.4 mm) and uniaxial or biaxial aramid fiber layers. GLARE, the second generation of FML, exhibits better fatigue resistance and higher blunt notch strength than either 2024-T3 or ARALL. This new hybrid material also offers a real weight reduction when applied to fuselage skin panels . Finally, a much stiffer FML made with carbon fiber instead of aramid or glass fibers, CARALL, was also investigated at TU Delft . The high modulus of carbon fiber (typically 230 to 294 GPa) produces more efficient crack bridging at the preliminary stage of fatigue crack propagation within the composite layers . However, the residual strength of notched CARALL is significantly lower than that of monolithic aluminum alloys due to the limited failure strain of carbon fiber (typically 2.0%) . Furthermore, CARALL is more susceptible to galvanic corrosion because its aluminum layers are electrically connected to carbon fiber reinforced composites .

[h]Mechanical behaviors of GLAREs for aircraft structures

GLAREs boast a large number of favorable characteristics, such as low density, static strength, better
F&DT properties, high impact and flame resistances, as shown in Figure . More descriptions on
advantages of GLAREs are provided as follows:

 Lightweight: High static strength of GLARE2 in 0° fiber direction contributes to weight saving
over the aluminum alloys by roughly 6% in the design based on bending stiffness, and by 17%
in the design based on yield strength, respectively . For example, the use of GLAREs on A380
fuselage shells achieves a weight saving of up to 794 kg (−10%) compared with aluminum
alloys . The typical density of standard GLAREs is in the range from 2.38 to 2.52 g/cm3.
 High strength: It is apparent that the GLAREs reinforced with unidirectional glass fiber have
anisotropic properties. This glass fiber contributes to increase in static strength and elastic
modulus in the longitudinal direction, along which the glass fiber is oriented. On the other hand,
the aluminum sheets control the overall mechanical properties of GLAREs in the transverse
direction. As a result, the unidirectional GLAREs (e.g. GLARE1 and GLARE2) exhibit higher
ultimate tensile strength than the aluminum alloys in the longitudinal direction, which
contributes to weight reduction in tension-dominated structural components. In
contrast, the transverse strength is somewhat lower than those of aluminum alloys. To overcome
this limitation, the cross-ply GLAREs (e.g. GLARE3 and GLARE5) and angle-ply GLARE
(e.g. GLARE6) have been introduced to provide the balanced mechanical properties in both
directions .
 High fatigue resistance: The superior fatigue resistance is a result of fiber bridging mechanism
whereby the intact fiber layers provide an alternative load path around the cracked metal layers,
eventually reducing the local stress in front of the crack tip . Vogelesang et al. reported that GLARE3–
3/2 exhibits almost constant, slow crack growth when subjected to constant-amplitude fatigue
loading, as shown in Figure. Such a low fatigue crack growth rate can lead to minimal
scheduled inspection downtime for the aircraft.
 Blunt notch strength: The notched residual strength is also an important design parameter since
geometrical notches (e.g. cutouts serving as doors and windows) are inevitable in the design
of fuselage shells. Although GLARE presents a relatively high notch sensitivity compared
with ductile aluminum alloys, the use of high ultimate strength S2 unidirectional glass fiber
(typically 4890 MPa) makes it superior to ARALL in notch strength . Hagenbeek et al.
proposed a numerical simulation model for predicting blunt notch strength that considers the
metal volume fraction based on the Norris failure criterion, and reported that this approach
is effective for predicting the multi-axial blunt notch strengths (i.e. biaxial and shear
components) of GLARE.
 High impact resistance: Impact resistance is actually a significant advantage of GLARE,
especially when compared to either aluminum alloys or CFRP ; Figure compares the respective
impact energy absorbing capacities based on the through-the-thickness cracking (i.e. puncture
energy). Obviously, GLARE3–3/2 shows higher impact resistance to cracking than aluminum
alloy. This result may be attributed to localized fiber fracture and extensive shear
failure in the outer aluminum sheets . In addition, a high strain rate strengthening phenomenon
in the glass fibers, combined with their relatively high failure strain, contributes to
the greater impact resistance of GLARE relative to other FMLs, such as ARALL and
CARALL .
 Burn-through capabilities: To meet the airworthiness standard of a max. 90 seconds evacuation
time (JAR/FAR 25.803: Emergency evacuation), a structural integrity of fuselage is of major
importance in order to prolong a safe environment of the passengers in the event of a post-crash
fire scenario. GLARE shows high thermal insulation performance, and consequently
contributes to enhancing the structural integrity of fuselage shells, as shown in Figure.
Owing to the high melting temperature of S2 glass fiber (typically 1466°C), only the outer
aluminum sheet starts to melt and separates from the other layers. As a result, the unexposed
side of a GLARE panel remains relatively intact, with the unexposed side temperature staying
just below 400°C.
 Long-term hygrothermal behaviors: In general, the significant changes in moisture absorption
are not observed in GLAREs, which is consistent with the shielding effect of the outer
aluminum sheets . However, under thermal cycling exposure, the property degradation
rate of GLAREs is 1–7% higher than that of glass fiber-reinforced composites. This reduction
is attributed to the large difference in coefficient of thermal expansion (CTE) between the
constituent materials .
Figure :GLARE vs. aluminum alloy comparison ratio.

Figure :Fatigue crack growth .


Figure :Comparison of low-velocity impact performance .

Figure :GLARE fire resistance comparing to aluminum alloy .

[h]GLARE grades

Another beneficial feature of GLARE is that the number and orientation of composite layers can be
selected to best suit different applications, and such material features make it attractive for structural
applications . For the certification of GLAREs for aircraft structures, several lay-up patterns are
already defined as a standard grade: the schematic view of GLARE 3/2 is shown in Figure. This
approach is useful to define the specific lay-up pattern used in the structural design . Nowadays, the
standard GLAREs are produced in six different grades, as listed in Table . All grades are
classified according to the type of lay-up pattern, where the composite layers consist of unidirectional
S2 glass fiber (AGY Holding Corp., USA) and FM®94 modified epoxy (Cytec-Solvay Group, USA).
The nominal fiber volume fraction and ply thickness of the prepreg are 59% and 0.125 mm, respectively .
Figure :Schematic view of GLARE 3/2.

| Grade | Metal layers (alloy) | Metal thickness (mm) | Prepreg orientation | Prepreg thickness (mm) | Typical density (g/cm3) | Characteristics |
|---|---|---|---|---|---|---|
| GLARE1 | 7475-T761 | 0.3–0.4 | 0/0 | 0.25 | 2.52 | Unidirectionally loaded parts with aluminum sheet rolling direction in the loading direction (stiffeners) |
| GLARE2 | 2024-T3 | 0.2–0.5 | 0/0 or 90/90 | 0.25 | 2.52 | Unidirectionally loaded parts with aluminum sheet rolling direction in the loading direction (stiffeners) |
| GLARE3 | 2024-T3 | 0.2–0.5 | 0/90 | 0.25 | 2.52 | Bi-axially loaded parts with 1:1 principal stresses (fuselage skins, bulkheads) |
| GLARE4 | 2024-T3 | 0.2–0.5 | 0/90/0 or 90/0/90 | 0.375 | 2.52 | Bi-axially loaded parts with 2:1 principal stresses, with aluminum sheet in main or perpendicular loading direction (fuselage skins) |
| GLARE5 | 2024-T3 | 0.2–0.5 | 0/90/90/0 | 0.5 | 2.38 | Impact critical areas (floors and cargo liners) |
| GLARE6 | 2024-T3 | 0.2–0.5 | −45/+45 | 0.5 | 2.52 | Shear, off-axis properties |

Table :Classification of GLARE for aircraft structures .

(a) The number of orientations is equal to the number of unidirectional prepreg ply in each composite
layer. The thickness in mm corresponds to the total thickness of composite layers in between two
aluminum layers.
(b) The rolling direction (axial) is defined as 0°, and the transverse rolling direction is defined as 90°.

A special coding convention is used to describe the different GLARE grades and specify their lay-up patterns. Symbolically, a general configuration is represented as follows :

GLARE NG - Nal/Ngl - tal

where NG is the number indicating the GLARE grade, Nal is the number of aluminum layers, Ngl is the number of composite layers (Ngl = Nal − 1) and tal is the thickness of each separate aluminum sheet (typically 0.25–0.5 mm). Each composite layer in turn consists of a certain number of unidirectional prepreg plies in the 0°/90°/±45° directions. For example, each composite layer in GLARE4 consists of two unidirectional prepreg plies oriented at 0 and 90° with respect to the rolling direction of the aluminum sheets. GLARE4B-3/2 comprises three cross-plies within each composite layer, for example two plies in the 90° direction and one ply in the 0° direction. The fraction of unidirectional fibers in the rolling direction is then twice that in the perpendicular direction.
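As a sketch, the designation convention above can be parsed mechanically. The exact string format (hyphens, optional 'A'/'B' suffix) is an assumption inferred from designations such as "GLARE4B-3/2" and "GLARE2A 3/2–0.4" used elsewhere in this chapter:

```python
import re

def parse_glare_designation(code: str) -> dict:
    """Split a designation like 'GLARE4B-3/2-0.4' into grade NG, number of
    aluminum layers Nal, number of composite layers Ngl (= Nal - 1) and
    aluminum sheet thickness tal in mm. Hypothetical helper; the string
    format is assumed, not taken from a standard."""
    m = re.fullmatch(r"GLARE(\d[AB]?)-(\d+)/(\d+)-(\d+(?:\.\d+)?)", code)
    if m is None:
        raise ValueError(f"not a recognised GLARE code: {code!r}")
    grade = m.group(1)
    n_al, n_gl, t_al = int(m.group(2)), int(m.group(3)), float(m.group(4))
    if n_gl != n_al - 1:
        raise ValueError("convention requires Ngl = Nal - 1")
    return {"grade": grade, "n_al": n_al, "n_gl": n_gl, "t_al_mm": t_al}
```

For example, `parse_glare_designation("GLARE4B-3/2-0.4")` yields grade "4B" with three aluminum layers, two composite layers and 0.4 mm sheets.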

[h]Design philosophy for GLARE structures

The introduction of new materials for aircraft structures has taken place in evolutionary steps, which suggests a realistic application of the innovative design philosophy, eventually leading to an optimized design concept. The innovative design concept of GLAREs on the A380 fuselage shells is shown in Figure . Structural efficiencies, such as damage tolerance and residual notch strength, are much better served by incorporating local variations in skin panel thickness with adhesively bonded joints. In the early stage
of technology development, GLARE structures were produced only as flat panels. An innovation in structural design, termed the splice joint, was developed to overcome the joining problem. The first splice concept consisted of butted aluminum sheets with the composite layer bridging the splices (e.g. butt joint). However, this concept is not recommended for structural applications because of premature failure at the butts. To overcome this limitation, several splice concepts in which two aluminum sheets are positioned with a slight overlap, forming a single metal sheet layer, have been introduced, as shown in Table .

Figure :Construction and production possibility with the optimized GLARE panel .
Table :GLARE design features–“giant tool box”

For example, alternating layers (i.e. aluminum sheets and unidirectional S2 glass/epoxy prepreg plies) are laid up over the curing tool, which forms the single or double curvature shape . The splices are staggered with respect to each other, while the adhesive layers are continuous. This interlaminar doubler solution provides local thickness variations in the skin panel . Furthermore, this design concept allows tailor-made skin panels of any size, not limited by the standard width of aluminum sheet rolls (typically 1.5 m); the practical limitation on panel size is now set only by autoclave size. The thickness variations in the skin panel are generally utilized for compliance with fail-safe design requirements and for cost-effective part production when integrating the fuselage structure of skin panels, longitudinal stringers and circumferential frames. However, the difficulty of producing spliced GLARE panels in two bonding cycles demands a feasible production solution that allows a splice joint, including doublers, to be completed in a single co-curing process. For this purpose, the Self-Forming Technique (SFT) provides a smart solution to produce the required doublers without an additional cure cycle for bonding the doublers over the base GLARE panel. Such an inter-laminar panel highlights the advantages of the SFT process as follows: (1) no dimensional tolerance issue for the overlap in double curvature panels, and (2) evacuation of entrapped air or volatiles in the composite layers through the splice (adhesive squeeze-out). It therefore allows increased fuselage panel widths while reducing additional joints, structural weight and production cost .

[h]Metal volume fraction (MVF)

For the qualified standard GLARE grades, the in-plane static properties can be estimated by a simple prediction based on MVF, which reduces the additional experimental testing needed for material qualification. The term MVF reflects the relative contribution of the aluminum properties to the properties of GLARE . As a result, the MVF approach is useful for predicting the static strength properties of GLARE, as found in the literature . The MVF value can be calculated as follows:

MVF = (pmetal · tal) / tlaminate

where tal is the thickness of each separate aluminum sheet, tlaminate is the total thickness of the GLARE panel and pmetal is the number of aluminum sheets . The typical MVF values of the standard GLARE grades lie in a range between 0.55 and 0.70. The material property of a GLARE having any MVF can be calculated using a linear relation that follows the "rule of mixtures" of anisotropic mechanics, Eq. (3):

EGLARE = MVF · EM + (1 − MVF) · EG (3)

where EGLARE is the elastic modulus of GLARE and EM and EG are the elastic moduli of the aluminum sheets and composite layers, respectively. The load transfer ratio for the composite layers (PG/PGLARE) in GLARE according to MVF can then be defined as follows:

PG / PGLARE = ((1 − MVF) · EG) / EGLARE

The load transfer ratio for composite layers in GLARE according to MVF can be predicted as shown in
Figure. It is worth noting that the load transfer ratio of composite layer in GLARE exponentially
decreases with the fraction of aluminum sheets. As the fraction of aluminum sheets in GLARE
decreases, more shear load can be dissipated through the aluminum sheet-composite interface .
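The MVF, rule-of-mixtures and load-transfer relations described above can be combined into a small calculator. For GLARE3 3/2–0.4 (three 0.4 mm aluminum sheets, two 0.25 mm composite layers) this gives MVF ≈ 0.706, close to the 0.703 quoted for that grade; the moduli used below (72 GPa for 2024-T3, 54 GPa for the UD glass/epoxy layer) are indicative values, not taken from the text:

```python
def metal_volume_fraction(p_metal: int, t_al: float, t_laminate: float) -> float:
    """MVF = (p_metal * t_al) / t_laminate."""
    return p_metal * t_al / t_laminate

def glare_modulus(mvf: float, e_metal: float, e_glass: float) -> float:
    """Rule of mixtures, Eq. (3): E_GLARE = MVF*E_M + (1 - MVF)*E_G."""
    return mvf * e_metal + (1.0 - mvf) * e_glass

def load_transfer_ratio(mvf: float, e_metal: float, e_glass: float) -> float:
    """Share of the total load carried by the composite layers:
    P_G / P_GLARE = (1 - MVF)*E_G / E_GLARE."""
    return (1.0 - mvf) * e_glass / glare_modulus(mvf, e_metal, e_glass)

# GLARE3 3/2-0.4: three 0.4 mm aluminum sheets + two 0.25 mm composite layers.
t_laminate = 3 * 0.4 + 2 * 0.25                    # 1.7 mm total
mvf = metal_volume_fraction(3, 0.4, t_laminate)    # ~0.706
ratio = load_transfer_ratio(mvf, 72.0, 54.0)       # indicative moduli in GPa
```

Evaluating the ratio for decreasing MVF reproduces the trend in the figure: the lower the aluminum fraction, the larger the share of load carried by the glass/epoxy layers.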

Figure :Plot of load transfer ratio for glass/epoxy layers in GLARE according to MVF for various modulus ratios of EG/EM. The corresponding GLARE grades (with MVF in parentheses) are GLARE2A 3/2–0.4 (0.703), GLARE3 3/2–0.4 (0.703), GLARE4A 3/2–0.4 (0.612), GLARE4B 3/2–0.4 (0.612) and GLARE5 3/2–0.4 (0.542).

[h]Part production and quality controls

Basically, the production process for making GLARE structures is similar to the traditional production
of metallic bonded structures and composite laminates. Before the parts are released, the part’s quality
should be assured through reliable quality control (QC) methods. For this purpose, stringent QC procedures shall be developed and applied to the part production of GLAREs. The QC system includes all procedures that ensure raw material quality, in-process control monitoring and verification of fitness for part acceptance. At each production stage, the key process parameters should also be standardized with specified production tolerances, as follows :

 QC of raw materials. GLARE manufacturer starts with the preparation with rolls of thin
aluminum bare sheet (typically 0.3–0.4 mm). A custom-built machine decoils the thin
aluminum sheet from the rolls, flattens the sheet and cuts it to lengths of up to 11 m for large
skin panels. Next, the cut sheets are milled in accordance with the engineering drawings. At this
time, all the aluminum sheets and unidirectional prepreg plies should be controlled by raw
materials inspection specifications, and some specific properties should be controlled: (1)
rolling direction, straightness, waviness and surface roughness for aluminum sheets; (2) fiber
direction, prepreg bridging, or wrinkles and shelf-life requirements (e.g. storage life and
mechanical life) for prepreg plies. This QC activity is basically the same as the traditional
production of sheet metal forming, or composites. All prepreg shall be cut over a clean, non-
contaminated surface with clean, sharp knives, or digital cutting machine to minimize distortion
and splitting. The pre-cut materials (i.e. kit) should be stored in flat or stress-free condition to
prevent folding or further damage. Unless otherwise specified by the engineering drawings, all
the prepreg size should have a suitable trim at required locations to keep irregular edges out of
the final part dimension.

  QC of surface treatment. Surface of aluminum bare sheet should be pre-treated to obtain a proper
adhesion strength and durability with the prepreg resin. For this purpose, the milled sheets are
transferred via a handling system to the chemical treatment line. The standard surface treatment
process consists of solvent degreasing, CAE (Chromic-Sulfuric Acid Etching), CAA (Chromic Acid
Anodizing), and followed by organic bond primer. All key process parameters should be checked for
each batch of aluminum sheets according to the corresponding Airbus’s own specifications, for
example deoxidizing/anodic bath temperature, solution chemistry, rinse water purity and so forth. The
specific surface treatment procedures for aluminum alloys are explained in detail in Section
3.2. Finally, the primed, cut sheets are re-rolled and covered in a protective black plastic (or paper)
bag for storage until needed in fabrication.
  Control of lay-up process. Alternating layers of aluminum bare sheets and prepreg plies are
positioned in the right stacking order in accordance with GLARE grade. All the lay-up works should
be conducted in a sufficiently clean environment, and the working environment such as temperature
and humidity should be also kept below well-defined levels. All cut prepreg plies should be
sequentially prepared and collated on the curing tool in the location and orientation as per the
engineering drawings, or shop process instruction. An optical LPS (Laser Projection System, Virtek
Vision International Inc., Waterloo, ON, Canada) may be capable of attaining the required dimension
tolerance.

  Control of autoclave process. The laid-up parts are vacuum bagged, and then placed into the
autoclave to be united by heat and pressure. Autoclave facility shall have instrumentation which
autographically records time, temperature, pressure and vacuum where applicable. All gauges shall be
controlled and periodically calibrated and certified in accordance with the procedures approved by the
QC department. During an autoclave cure cycle, a high compaction pressure (typically 11 bar) is
normally applied to the GLARE lamination stack at an elevated curing temperature (typically 125°C)
for 3.5 hours. Representative manufacturing-induced defects, such as voids and porosity, should be
accurately controlled to prevent internal defects. In addition to the QC activities in the part production
of GLARE, it is also necessary to perform a "final check" prior to part release. Non-destructive
inspection (e.g. ultrasonic C-scan) and some mechanical tests are generally accomplished in the
final step of QC.
  Post processing. The manufacturing and assembly of GLARE structures typically require
machining operations, such as milling and drilling. For example, GKN Aerospace's Fokker
business has produced a large GLARE panel of 4.5 × 11.5 m by using a 5-axis milling
machine on a movable bed. However, machining of this multi-layered structure has
presented more challenges in the aerospace industry than aluminum alloys or composites due to the
coupled interaction between composite- and metal-phase cutting. The machining operations should be
accomplished to meet the acceptance limit for the discrepancies as per the engineering drawing, or
process specification.
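The autoclave-record check described in the list above can be sketched as follows. The nominal values (11 bar, 125 °C, 3.5 h) come from the text, while the tolerance bands and the record format are assumptions for illustration:

```python
# Assumed tolerance bands around the nominal cure parameters quoted in the text;
# these are NOT Airbus or supplier specification values.
CURE_SPEC = {
    "pressure_bar":  (10.5, 11.5),   # assumed ±0.5 bar band around 11 bar
    "temperature_c": (120.0, 130.0), # assumed ±5 °C band around 125 °C
    "hold_time_h":   (3.5, 4.0),     # at least the 3.5 h hold from the text
}

def check_cure_record(record: dict) -> list:
    """Return the list of out-of-tolerance (or missing) parameters for one
    autographically recorded autoclave run."""
    failures = []
    for key, (low, high) in CURE_SPEC.items():
        value = record.get(key)
        if value is None or not (low <= value <= high):
            failures.append(key)
    return failures
```

A run such as `{"pressure_bar": 11.0, "temperature_c": 125.0, "hold_time_h": 3.6}` passes with an empty failure list; a pressure drop to 9 bar would flag `pressure_bar` for QC disposition.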

[h]Surface treatment of aluminum alloys for producing GLARE structures

A strong bonding interface is one of the key factors for improving the durability of GLAREs. It is
apparent that the surface treatment technique can improve the surface energy and wettability of
metallic substrates, and is an effective method for enhancing the bonding strength between a metal
substrate and a fiber reinforced polymer composite . In addition, the surface treatment can remove the
undesirable surface oxides or contaminants on the metallic substrate, and ameliorate the surface
composition and microstructure of the metallic substrate . This allows the fiber bridging mechanism
and mechanical properties of GLAREs to be improved, and moreover, the crack propagation rate at the
aluminum-composite interface can be effectively reduced . Previous research works reported that the
surface treatment should be carefully taken into consideration when improving interlaminar shear
strength at the aluminum-composite interface , environmental durability and low-velocity impact
resistance of GLARE. Therefore, the proper production steps should be clearly defined before any
production process is implemented. Note that this section is described based on our previous surface
treatment studies of aluminum alloys for aircraft structures .

All anodizing processes are complex multi-stage operations incorporating degreasing and deoxidizing stages, as described in the preceding sections, plus appropriate rinses. Anodic oxidation in CAA or PAA (Phosphoric Acid Anodizing) solution is the preferred stabilizing treatment for the structural adhesive bonding of aluminum alloys in critical applications such as aircraft structures . However, these processes typically rely on hazardous materials such as strong acids and hexavalent chromium. The use of chromate is prohibited, or progressively being banned, in most industries due to its carcinogenicity. For this reason, non-chromate anodizing processes, such as boric-sulfuric acid anodizing (BSAA) and phosphoric-sulfuric acid anodizing (PSA), have been developed since the mid-1990s , but neither of them has been fully validated for aircraft applications. Typical anodizing processes and their process parameters are listed in Table.

| Treatments1 | CAA | PAA | BSAA | PSA |
|---|---|---|---|---|
| Electrolyte (wt%) | 2.5–3.0 (CrO3) | 10 (H3PO4) | 5.0–10.0 (H3BO3) / 30.0–50.0 (H2SO4) | 10.0 (H3PO4) / 10.0 (H2SO4) |
| Voltage (V) | 40.0 ± 1.0 | 10.0 | 15.0 ± 1.0 | 18 ± 2.0 |
| Time (min) | 35–45 | 20 | 18–22 | 15 |
| Temperature (°C) | 40.0 ± 2.0 | 25.0 | 26.7 ± 2.2 | 27.0 ± 2.0 |
| Contamination controls | Control Cl-2 and sulfate impurity; incorporation of BaCO3 powder3 to remove impurity | Prone to biological contamination5; filtering required to remove fungus | Control Cl- and F4; use of sodium benzoate or benzoic acid to prevent fungus growth | Control Cl- and F; installation of preventive devices for fungus growth (e.g. filters and UV lamps) |
| Racks | Al, Ti, Al with Ti-tips | Equivalent to CAA | Equivalent to CAA | Equivalent to CAA |
| QC issues | Appearances; solution chemistry; water purity; air cleanliness; voltages; bath temperature | Appearances; solution chemistry; water purity; air cleanliness; voltages; coating weight | Appearances; solution chemistry; water purity; air cleanliness; voltages | Appearances; solution chemistry; water purity; air cleanliness; voltages |
Table :Anodizing processes for structural adhesion bonding of aluminum alloys

1 The proprietary materials and exact production steps differ slightly between organizations.
2 Cl: chloride ions.
3 BaCO3: barium carbonate.
4 F: fluorine.
5 Bio-contaminant organisms, for example fungal and bacterial (Pseudomonas species).
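The process windows from the table above can be encoded for a simple in-process bath check. The windows themselves are taken from the table; the function shape, and the reading of single values (e.g. the PAA entries) as exact setpoints, are assumptions:

```python
# Anodizing process windows (volts, minutes, °C) transcribed from the table;
# ± tolerances expanded into (low, high) intervals.
ANODIZING_WINDOWS = {
    "CAA":  {"voltage_v": (39.0, 41.0), "time_min": (35, 45), "temp_c": (38.0, 42.0)},
    "PAA":  {"voltage_v": (10.0, 10.0), "time_min": (20, 20), "temp_c": (25.0, 25.0)},
    "BSAA": {"voltage_v": (14.0, 16.0), "time_min": (18, 22), "temp_c": (24.5, 28.9)},
    "PSA":  {"voltage_v": (16.0, 20.0), "time_min": (15, 15), "temp_c": (25.0, 29.0)},
}

def out_of_window(process: str, voltage_v: float, time_min: float, temp_c: float) -> list:
    """Return the parameters of a measured bath condition that fall outside
    the tabulated window for the given anodizing process."""
    spec = ANODIZING_WINDOWS[process]
    measured = {"voltage_v": voltage_v, "time_min": time_min, "temp_c": temp_c}
    return [k for k, (lo, hi) in spec.items() if not (lo <= measured[k] <= hi)]
```

For example, a CAA bath at 40 V, 40 min and 40 °C is inside its window, while running a BSAA bath at 20 V would be flagged on voltage.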

The classical porous oxide structure produced by anodizing is believed to promote adhesion through capillary forces that draw the primer into the oxide pores, which in turn increases the mechanical interlocking between the anodic oxide and the primer . The porous oxide structure can be controlled according to the anodizing process, as listed in Table, which clearly shows the effects of the anodizing processes on the oxide structure in terms of oxide thickness, pore diameter and cell wall thickness. The CAA process was found to give a relatively thick and softer oxide structure than those formed by the other processes . It was established as an effective pretreatment for adhesive bonding with superior durability performance in service , and the European aerospace industry is still using this method . However, notwithstanding the remarkable durability data in corrosive environments, the use of chromate treatment processes is being restricted by recent environmental policy.

Table :Comparison of oxide morphology on 2024-T3 bare aluminum alloys

The PAA process is basically used for the structural adhesive bonding of aluminum and its alloys. The standard process (Boeing's BAC 5555 or ASTM D 3933) has proven to produce the most durable and reactive surface for structural adhesive bonding . PAA substrates are normally submitted to a Forest Products Laboratory (FPL) etch prior to anodizing, although the non-chromate acid etch (P2) is sometimes used instead. The PAA-treated anodic oxide is highly porous, with open cells of approximately 32 nm on top of a much thinner barrier layer . The PAA oxide thickness is typically reported in the range from 200 to 400 nm, with a much thinner barrier layer of about 10 nm . Physical comparisons between PAA and CAA oxide structures clearly show that the PAA oxide has a much more open porous structure, which is more easily penetrated by the subsequent organic bond primer, drawing the organic polymer into the oxide structure to form a very strong interlocking interphase. The PAA oxide structure provides either equivalent or better durability results than the CAA oxide structure in most experimental trials .

The BSAA process is usually carried out using a mixture of 5–10 wt.% boric acid (H3BO3) and 30–50 wt.% sulfuric acid (H2SO4) at 26.7 ± 2.2°C. This process was patented by Boeing as a direct replacement for the CAA process . It is well known that the CAA process produces a chromium mist that is hazardous to health if inhaled; BSAA is an alternative that eliminates this concern and the need for mist control. The process standard, BAC 5632, involves deoxidizing with a tri-acid solution consisting of sodium dichromate, sulfuric acid and hydrofluoric acid (HF), followed by boric-sulfuric acid anodizing. The parts are then dried in warm air at 75°C prior to bond primer application. The anodic film produced by the standard BSAA process has a relatively small pore diameter (10 nm) compared with the conventional CAA film (25 nm), as listed in Table. The anodic oxide structure from BSAA provides paint adhesion that is equal, or superior, to that formed by CAA . For this purpose, the BSAA process parameters have been modified by several research groups . As a result, the required surface topography and equivalent mechanical stability in strength and durability are achieved only when the following process variations are instituted: an electrolytic phosphoric acid deoxidizer (EPAD), an adjusted anodizing bath temperature in the BSAA bath, and an additional post treatment using a PAD .

More recently, a variety of alternative chromate-free electrochemical treatments have been introduced for corrosion protection and adhesive bonding of aluminum and its alloys. The new eco-efficient alternatives developed by Airbus include tartaric-sulfuric acid anodizing (TSA) and PSA. In particular, a significant step towards chromate-free processing has been achieved with the PSA process for adhesively bonded joints. This process is usually carried out using a mixture of 10 wt.% phosphoric acid (H3PO4) and 10 wt.% sulfuric acid (H2SO4) at temperatures ranging from 26 to 28°C , and is now ready for qualification by Airbus. The standard process requires nitric acid deoxidizing prior to the PSA treatment. The PSA-treated surface produces an oxide structure about 1500 nm thick with somewhat narrow porous structures in the range from 20 to 25 nm in pore diameter . The PSA process has a reduced process time (typically 23 min) and anodizing temperature (27°C) compared with the standard CAA process . This improves eco-efficiency by decreasing time and energy consumption, and offers a capacity increase.

[h]Lesson learned from serial production of GLARE structures

In-process QC activities are essential if the fits, forms, functions and requirements designed into a part are to be consistently achieved. In general, the QC systems applied to the part production of GLARE structures should be established based on the company's own specifications, part requirements and engineering drawings. For this purpose, all available QC factors, such as prescribed contractual requirements, available equipment, level of personnel training and documentation systems, should be carefully considered. Table describes the specific lessons learned in part production and the corresponding practical solutions.

Table :Lessons learned from the production of GLARE structures.


The new hybrid material class of FMLs has been successfully applied to commercial aircraft structures, offering weight savings of 10% compared with conventional aluminum alloys, together with benefits that include high tensile strength, better F&DT characteristics and a high level of fire safety. A large body of literature on practical applications demonstrates that the material properties of FMLs, and their additional interlinked advantages, make them an ideal option for the thin-walled fuselage shells of the next single-aisle aircraft. This chapter dealt with the details of technological developments, with ongoing research efforts to understand the material property behaviors of FMLs, especially static strength, F&DT properties and long-term durability. In addition, two prediction methods, MVF and CLT (classical lamination theory), have been introduced to predict the corresponding static properties of FMLs with respect to different lay-up patterns. However, to compete with the typical materials used in aerospace engineering, additional efforts should be directed towards producing consistently sound FML structures at affordable cost and ensuring the stringent quality controls required for structural integrity. Recently, FML manufacturers have continued to make substantial progress in production technology, enabling FMLs at high-volume production rates and increasing affordability for the aerospace industry. In addition to the properties of each constituent material, a strong interfacial bond between the metal sheets and composite layers is one of the key factors for improving the joint strength and long-term durability of FML structures. Therefore, a proper surface treatment of the metallic substrate is a prerequisite for achieving long-term service capability through more efficient processing in production.

Chapter 3: UAV Navigation and Control

[mh] Autonomous Navigation Algorithms

Autonomous navigation can be defined as the ability of the mobile robot to determine its position
within the reference frame environment using suitable sensors, plan its path mission through the terrain
from the start toward the goal position using high planner techniques and perform the path using
actuators, all with a high degree of autonomy. In other words, the robot during navigation must be able
to answer the following questions:

- Where have I been? This is solved using cognitive maps.
- Where am I? This is determined by the localization algorithm.
- Where am I going? This is done by path planning.
- How can I go there? This is performed by the motion control system.
Autonomous navigation comprises many tasks between the sensing and actuating processes and can be divided into two approaches: behavior-based navigation and model-based navigation. Behavior-based navigation depends on the interaction of asynchronous sets of robot behaviors with the environment, such as map building, exploration, obstacle avoidance, right/left wall following, and goal seeking, which are fused together to generate suitable actions for the actuators. Model-based navigation includes four subtasks that are performed synchronously, step by step: perception, localization, path planning, and motion control. This is the most common navigation architecture found in the literature.

The model-based navigation will be explained in detail, since it is implemented in the proposed navigation system. In the perception task of this navigation model, sensors are used to acquire rich information and good observations of the environment in which the robot navigates. A range of sensors can be chosen for autonomous navigation depending on the task of the mobile robot; they can generally be classified into two groups: absolute position measurements and relative position measurements. The European robotics platform (EUROP) marks the perception process in robotic systems as the main problem in need of further research.

The localization process can be perceived as the answer to the question: where am I? In order to enable the robot to find its location within the environment, two types of information are needed. First, a priori information is given to the robot from maps or cause-effect relationships in an initialization phase. Second, the robot acquires information about the environment through sensor observations. Sensor fusion techniques are typically used to combine the initial information and the sensors' position measurements to estimate the robot's location in real time.
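As an illustration of such fusion, a minimal scalar Kalman-style update can combine a predicted position with one sensor reading, weighting each by its variance. This is a generic sketch, not the chapter's implementation; the odometry and GPS numbers are hypothetical.

```python
def fuse(pred, var_pred, meas, var_meas):
    """Fuse a predicted position with a sensor measurement.
    The weighting is the scalar Kalman update: the smaller a
    source's variance, the more it pulls the estimate."""
    k = var_pred / (var_pred + var_meas)  # Kalman gain
    x = pred + k * (meas - pred)
    var = (1.0 - k) * var_pred
    return x, var

# Example: odometry predicts x = 10.0 m (variance 4.0);
# a GPS fix reads 12.0 m (variance 1.0).
x, var = fuse(10.0, 4.0, 12.0, 1.0)  # -> (11.6, 0.8)
```

The fused estimate lands closer to the lower-variance source, and the fused variance is smaller than either input, which is why such updates can run repeatedly as new readings arrive.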

In the path planning process, a collision-free path between the start and target positions is determined continuously. There are two types of path planning: (a) global, or deliberative, planning and (b) local, or sensor-based, planning. In the former, the terrain surrounding the mobile robot is fully known, and the collision-free path is determined offline; in the latter, the terrain is partially or totally unknown, and the robot uses sensor data to plan the path online. Path planning consists of three main steps, namely, modeling the environment, determining the collision-free trajectory from the initial to the target position, and searching continuously for the goal.
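As a minimal illustration of global planning on a fully known map, a breadth-first search over an occupancy grid returns a shortest collision-free cell path. This is a generic sketch, not the chapter's planner; the grid and the start/goal cells are hypothetical.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))  # detours around the obstacle row
```

BFS guarantees the shortest path in cell count; practical planners swap in A* or D* for weighted or partially known terrain, which matches the online/offline distinction above.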

In motion control, the mobile robot should apply a steering strategy that prevents slippage and reduces position errors. The control algorithm has to guarantee zero steady-state orientation and position error. The kinematic and dynamic analysis of the mobile robot is used to find the controller parameters for continuously avoiding obstacles and moving toward the target position along the predefined path. Autonomous navigation systems can be classified according to the environmental features as follows:

1. Structured environments: usually found in indoor applications, where the environment is totally known.
2. Semi-structured environments: the environment is partially known.
3. Unstructured environments: outdoor applications are almost always unstructured, with a totally unknown environment.

There are many factors that make outdoor autonomous navigation far more complex than indoor navigation:
1. The borders of the surrounding terrain in indoor applications are clear and distinct; by contrast, they are mostly "faded" in outdoor settings.
2. The robot's algorithm should be able to discover and deal with many things that are not in the initial plan.
3. The locomotion of the robot differs depending on the terrain roughness.
4. Outdoor autonomous navigation must be sufficiently robust and reliable, because the robot typically works among people and encounters unexpected moving obstacles.
5. Weather changes can affect the mobile robot's sensor measurements, electronic devices, and actuators. For example, inhomogeneous lighting affects the camera when capturing the scene, sunlight affects laser measurements, and ultrasonic measurements can become unreliable.

Although outdoor navigation in urban areas involves a wide range of difficulties, it is attractive and poses a challenge to researchers motivated to solve its problems with newly developed techniques. To date, there is as yet no robot or vehicle able to drive fully autonomously on the roads between urban buildings with a high level of robust interaction with its environment. Such a vehicle is expected to operate in dangerous situations, since it constantly passes through crowded areas, must obey road markings and traffic signals, and is often in close contact with people.

Most navigation systems for roads use two or more sensors to obtain the parameters of the surrounding environment under various situations and conditions. Sensor fusion is the common technique used for mobile robot navigation in road environments. In this model, the methods for extracting road features can follow either behavior-based or model-based navigation, as described in the following paragraphs.

The global positioning system (GPS) is often the main sensor. It has been combined with odometry and LRF for trajectory tracking, obstacle avoidance, and localization on curbed roads or for navigating a mobile robot between urban buildings; with LRF and dead reckoning for navigating vehicles on roads using beacons and landmarks; with camera vision for localizing a mobile robot on color-marked roads; with a 3D laser scanner for building compact maps for miscellaneous mobile robot navigation applications; with an IMU, camera vision, and a sonar altimeter for navigating an unmanned helicopter among urban buildings; and with LRF and inertial sensors for leading the ATRV-JR mobile robot over paved and rugged terrain. GPS has been combined with INS and an odometry sensor to increase the positioning accuracy of vehicles; with odometry and a geographical information system (GIS) for road-matching-based vehicle navigation; with LIDAR (light detection and ranging) and INS for leading a wheelchair among urban buildings using 3D map-based navigation; with a video camera and INS for navigating vehicles by road lane detection; with INS for improving vehicle position estimation; and with INS and odometry for localizing a mobile robot among urban buildings. A video camera with odometry has been used for navigating a land vehicle, road following by lane signs, and obstacle avoidance. An omni-directional infrared vision system has been used for localizing a patrol mobile robot in an electrical station environment. 3D map building and urban scenes have been used in mobile robot navigation by fusing stereo vision and LRF.

LRF has been combined with cameras and odometry for online modeling of road boundaries, for enhancing position accuracy during mobile robot navigation, or for correcting the navigation trajectory of a mobile robot. It has also been combined with a compass and odometry for map-building-based mobile robot navigation; with a color CCD camera for modeling roads and driving a mobile robot on paved and unpaved roads, for crossing roads through landmark detection, for obstacle detection and avoidance, or for road recognition; and with sonar and camera vision for navigating a mobile robot across roads. A 3D LRF has been combined with two color cameras, INS, and odometry for cross-country navigation of the ADAM mobile robot through natural terrain, and with an IMU, CCD camera, and odometry for guiding a tractor vehicle in a citrus grove. A camera, LRF, sonar, and compass have been combined for building a 3D map of an urban environment.

A priori maps can be used for navigation by precisely mapping environment landmarks. Dead reckoning is used for accurate position estimation by feature detection in mobile robots. A hybrid methodology combining teleoperation and autonomous navigation has been applied in a curbed road environment. A combination of differential global positioning system (DGPS) and odometry with an extended Kalman filter is used to localize the mobile robot in a road environment. LRF is used for obstacle avoidance by finding a suitable path past the road curbs and for estimating the position of the curbs during trajectory tracking. The main setback of this work is that the robot cannot operate autonomously in road environments.

Jose et al. presented a robust outdoor navigation system using LRF and dead reckoning for navigating vehicles using beacons and landmarks on roads. The system considers several cylindrical or V-shaped objects (beacons) that can be detected by the LRF in a road environment. An information filter (derived from the Kalman filter) is used to gather the sensor data, build the map, and localize the robot. The results show that the accuracy of the online mapping is very close to that of DGPS, which is about 1 cm. Donghoon et al. proposed a navigation method based on fusing two localization systems. The first uses a camera vision system with a particle filter, while the second uses two GPS receivers with a Kalman filter. Printed marks on the road, such as arrows, lines, and crossings, are effectively detected by the camera system. By processing the camera data using hyperbola-pair lane detection, that is, the Delaunay triangulation method for point extraction and mapping based on a sky view of the points, the robot is able to determine its current position within the environment. To eliminate camera drawbacks such as lens distortion and long-run inaccuracy, the GPS data are used to complement and validate the camera data. The errors between the predefined map and the real-time measurements are found to be 0.78 m in the x direction and 0.43 m in the y direction.

Cristina et al. suggested compact 3D map building for miscellaneous real-time mobile robot navigation applications using GPS and a 3D laser scanner. The 3D laser scanner senses the environment surface with sufficient accuracy to obtain a map, which is processed to get a qualitative description of the traversable environment cells. The 3D local map is built using a 2D Voronoi diagram model, while the third dimension is determined from the laser scanner data for terrain traversability. This map is combined with the GPS information to build a global map. The system is able to detect the slopes of navigated surfaces using the Brezets algorithm and the roughness of the terrain using the normal vector deviation over the whole region. Although the proposed algorithm is very complicated, the results show that it can process an area as big as 40 × 20 m in an outdoor environment in 0.89 s.

A new platform for an autonomous wheeled mobile robot navigation system has been designed and developed in our labs. It comprises the following parts, as shown in Figure:

- Two DC-brush motors, 120 W, model DKM.
- One spherical castor wheel.
- A 4 m range LRF, model HOKUYO URG-04LX-UG01.
- A high-resolution WiFi camera, model JVC-GC-XA1B.
- Two optical rotary encoders (odometry), model B106.
- Motor drivers, type SmartDrive40.
- Five cards for the interface-free-based controller system (IFC): interface power (IFC-IP00), interface computer (IFC-IC00), interface brushless (IFC-BL02), and interface brush motors (IFC-BH02).

The platform also has additional parts, such as a battery (model NP7-12, lead-acid) and an aluminum-profile and sheet-based chassis, as shown in Figure :

Figure :The developed platform used for experiments.

[h]Sensor modeling and feature extraction

The data coming from each sensor on the platform were prepared in such a way as to enable the extraction of road features autonomously while navigating on the road, as described in the following sections.

[h]Camera

The JVC-GC-XA1B camera is used to observe the environment in front of the robot at a good resolution (760 × 320) and a speed of 30 frames/s. Image sequences are extracted from the live video using the image processing toolboxes in MATLAB. An algorithm has been developed that takes the image sequences from the video and applies multiple operations to obtain a local map from the image and perform further calculations for road following and roundabout detection. In general, the image sequence processing algorithm consists of three main parts:

1. Preprocessing of the image for depth processing.
2. Processing of the image and development of the environment local map.
3. Postprocessing algorithms to perform the following subtasks:
o Roundabout detection based on the LS approach.
o Road following on roads without curbs.

The first and second steps are explained here, while the third is explained in the next section.

(i) Preprocessing of the image for depth processing.

A preprocessing algorithm is used to capture the road environment from the live video and prepare it in an appropriate form for subsequent processing. The main operations used in this step can be briefly described as follows:

- Constructing the video input object: the video input object represents the connection between MATLAB and a particular image acquisition device.

vid = videoinput('winvideo', 3);

- Previewing the WiFi video: this creates a video preview window that displays live video data for the video input object. The window also displays the timestamp and video resolution of each frame and the current status of the video input object.

preview(vid);

- Setting the brightness of the live video: the brightness of the image sequences is adjusted, since the camera aperture is not automatic. The following command is used:

set(vid.source, 'Brightness', 35);

- Starting frame acquisition: this reserves the image device for exclusive use by this program and locks the configuration against other applications, so certain properties become read-only while running.

start(vid);

- Acquiring the image frames into the MATLAB workspace: this returns the data, which contains the number of frames specified in the FramesPerTrigger property of the video input object.

data = getdata(vid);

- Cropping the image: this cuts out the region of interest in each image. In the proposed algorithm, the bottom half of the image is cropped, since it contains the area of the road nearest to the robot; this also saves computation time compared with processing the whole image.

GH = imcrop(data, [1 u1 io(2) io(1)]);

- Converting from RGB to grayscale: the purpose is to make the image suitable for the edge detection and noise removal operations.

IS = rgb2gray(GH);

- Removing the image acquisition object from memory: this frees memory at the end of an image acquisition session so a new acquisition can start.

delete(vid).
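The crop and grayscale steps above can also be sketched outside MATLAB; the following Python/NumPy fragment mimics cropping the bottom half of a 760 × 320 frame and the standard ITU-R BT.601 luminance weighting used by rgb2gray. This is an illustrative sketch, not the chapter's code; the solid-red test frame is hypothetical.

```python
import numpy as np

def crop_bottom_half(rgb):
    """Keep the bottom half of the frame (the road area nearest the robot)."""
    h = rgb.shape[0]
    return rgb[h // 2:, :, :]

def rgb_to_gray(rgb):
    """Luminance conversion with the standard ITU-R BT.601 weights
    (the same weighting MATLAB's rgb2gray documents)."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights

frame = np.zeros((320, 760, 3))
frame[:, :, 0] = 1.0           # a pure-red frame as a stand-in for video data
roi = crop_bottom_half(frame)  # shape (160, 760, 3)
gray = rgb_to_gray(roi)        # shape (160, 760); every pixel 0.2989
```

Cropping before any filtering is the cheap optimization the text describes: every later per-pixel operation touches half as many pixels.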

(ii) Processing of the image and development of the environment local map.

This includes operations that extract the edges of the road and roundabout from the images, with the capability to remove noise and perform filtering.

The following operations are applied for edge detection and noise filtering:

1. 2D Gaussian filter: it is used to smooth the image before edge detection. The point-spread function (PSF) for a 2D continuous-space Gaussian is given by:

PSF(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))

where σ is the standard deviation of the Gaussian distribution.

The following Gaussian filter was applied to the image in MATLAB:

PSF = fspecial('gaussian', 3, 3).

- Multidimensional image filtering (imfilter): this filters one array with another according to the specified options. In the proposed image processing, the symmetric option is used to filter the image with the Gaussian kernel as follows:

I = imfilter(IS, PSF, 'symmetric', 'conv').

IS is the gray-value image of the scene, and 'conv' selects linear convolution, in which each output pixel is the weighted sum of neighboring input pixels.
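A plain Python/NumPy sketch of these two steps, building a normalized Gaussian kernel (as fspecial does) and convolving with mirror padding (as imfilter's 'symmetric' option does), might look as follows. The kernel size and σ are illustrative, and this is not the chapter's MATLAB code.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """A normalized 2D Gaussian kernel, like fspecial('gaussian', size, sigma)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def conv2_symmetric(img, kernel):
    """2D convolution with mirror ('symmetric') boundary padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='symmetric')
    flipped = kernel[::-1, ::-1]      # true convolution flips the kernel
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * flipped)
    return out

k = gaussian_kernel(3, 3)
smoothed = conv2_symmetric(np.eye(5), k)  # a diagonal line, slightly blurred
```

Because the kernel sums to one, a constant image passes through unchanged; the mirror padding avoids the dark frame that zero padding would produce at the image borders.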

- Canny, Prewitt, and Sobel filters for edge detection: these derivative-based filters were used to find the edges of the curbed road in the image.

Canny filter: it finds edges by looking for local maxima of the gradient of the image. It passes through the following steps: smoothing the image by blurring to suppress noise; finding the gradients, where large-magnitude gradients are marked; non-maximum suppression, which allows only local maxima to be marked as edges; double thresholding of the potential edges; and edge tracking by hysteresis, where the final edges are determined by removing all edges that are not connected to strong (large-magnitude) edges. This filter is applied with a threshold derived from the filtered image as follows:

BW = edge(I, 'canny', (graythresh(I) * .1)).

Prewitt gradient filter: it finds edges using the Prewitt approximation of the derivative. Each component of the filter takes the derivative in one direction and smooths in the orthogonal direction using a uniform filter. It is applied as follows:

BW = edge(I, 'prewitt', (graythresh(I) * .1)).

Sobel gradient filter: it finds edges using the Sobel approximation of the derivative. Each component takes the derivative in one direction and smooths in the orthogonal direction using a triangular filter. It is applied as follows:

BW = edge(I, 'sobel', (graythresh(I) * .1)).
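The Sobel operator described above can be sketched in NumPy to show the derivative-plus-triangular-smoothing structure of its kernels. This is an illustrative sketch of the gradient-magnitude stage only (no thresholding), not the chapter's MATLAB call; the step-edge test image is hypothetical.

```python
import numpy as np

# Sobel kernels: derivative in one direction, triangular smoothing in the other
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude via valid-mode sliding windows over the image."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i+3, j:j+3]
            gx[i, j] = np.sum(patch * SOBEL_X)
            gy[i, j] = np.sum(patch * SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge: left half 0, right half 1
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)  # responds only at the two columns spanning the step
```

Thresholding this magnitude (MATLAB's edge does so, plus thinning) yields the binary curb-edge map that the morphological steps below then clean up.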

- Morphological operations

Morphological processing is applied to improve the shape of the lines representing the road curb, which are extracted from the edges in the images by the above-mentioned filters. Morphology comprises a varied set of image processing operations that process images based on shapes. A morphological operation applies a structuring element to the input image and produces an output image of the same size, computing each output pixel from the corresponding input pixel and its neighbors. This is done by selecting a shape and size of neighborhood that is sensitive to the shapes in the input image.

Two operations are used to perform the morphological method:

Morphological structuring element (strel): it is used to define the areas to which the morphological operations will be applied. Straight lines at 0° and 90°, which represent the road curbs, are the shapes used for the images:

se90 = strel('line', 3, 90).

se0 = strel('line', 3, 0).

Dilation of the image (imdilate): dilation and erosion are the two most common operations in morphology. With dilation, pixels are added to the object boundaries. The number of added pixels depends on the size and structure of the element used to process the input image. In dilation, the status of each pixel in the output image is determined by applying a rule to the corresponding input pixel and all its neighbors. In the developed algorithm, imdilate is used with the structuring elements from the strel operation as follows:

BW1 = imdilate(BWC, ).

BWC is the binary image coming from the edge filters.
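Binary dilation with a small line-shaped structuring element can be sketched as follows. This is a NumPy illustration of the dilation behavior described above, using a 3-pixel vertical line like strel('line', 3, 90); it is not the chapter's code, and the single-pixel test image is hypothetical.

```python
import numpy as np

def dilate(binary, se):
    """Binary dilation: an output pixel is 1 if the structuring element,
    centered there, overlaps any 1-pixel of the input."""
    h, w = binary.shape
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(binary, ((ph, ph), (pw, pw)))
    out = np.zeros_like(binary)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.any(padded[i:i+sh, j:j+sw] & se)
    return out

# 3-pixel vertical line, like strel('line', 3, 90)
se90 = np.ones((3, 1), dtype=int)
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1
thick = dilate(img, se90)  # the single pixel grows into a vertical 3-pixel line
```

Dilating the edge map with vertical and horizontal line elements closes small gaps in the detected curb lines, which is exactly why those two orientations are chosen.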

- 2D order-statistic filtering (ordfilt2)

This is also called a min/max/median filter. It uses a more general approach to filtering, which allows the rank order of the filter to be specified, as in:

f2 = ordfilt2(x, 5, [1 1 1; 1 1 1; 1 1 1]).

Here, the second parameter specifies the chosen rank order, and the third parameter specifies the mask, with the neighborhood defined by its nonzero elements. With rank 5 of the 9 mask pixels, this call acts as a 3 × 3 median filter.
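The rank-order idea can be sketched in NumPy as follows. This is an illustrative re-implementation of the order-statistic behavior (here with mirror padding rather than MATLAB's default zero padding); with rank 5 over a full 3 × 3 mask it acts as a median filter and removes single-pixel speckle.

```python
import numpy as np

def ordfilt2(img, rank, mask):
    """Order-statistic filter: replace each pixel by the rank-th smallest
    value (1-based) among the neighbors selected by the mask's nonzeros."""
    h, w = img.shape
    mh, mw = mask.shape
    ph, pw = mh // 2, mw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='symmetric')
    sel = mask.astype(bool)
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            neigh = padded[i:i+mh, j:j+mw][sel]
            out[i, j] = np.sort(neigh)[rank - 1]
    return out

img = np.zeros((5, 5))
img[2, 2] = 100.0                        # one speckle of noise
med = ordfilt2(img, 5, np.ones((3, 3)))  # rank 5 of 9 = the median
```

Rank 1 gives a minimum (erosion-like) filter and rank 9 a maximum (dilation-like) filter, which is why the text calls it a min/max/median filter.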

- Removing small objects from a binary image (bwareaopen)

This function removes from the binary image all connected components (objects) that have fewer than a given threshold number of pixels. Pixel connectivity can be defined in any direction; in MATLAB, it defaults to a 4-connected or 8-connected neighborhood for images. In the proposed algorithm, it is set as follows:

BW2 = bwareaopen(BW1, 1200).
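The small-object removal can be sketched with breadth-first connected-component labeling. This is a NumPy illustration of the behavior with 4-connectivity, not the chapter's code; the 3-pixel threshold and test image are illustrative.

```python
import numpy as np
from collections import deque

def bwareaopen(binary, min_pixels):
    """Remove 4-connected components smaller than min_pixels (BFS labeling)."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    out = binary.copy()
    for sr in range(h):
        for sc in range(w):
            if binary[sr, sc] and not seen[sr, sc]:
                comp, queue = [], deque([(sr, sc)])
                seen[sr, sc] = True
                while queue:                      # flood one component
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                        if 0 <= nr < h and 0 <= nc < w \
                                and binary[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            queue.append((nr, nc))
                if len(comp) < min_pixels:        # erase undersized components
                    for r, c in comp:
                        out[r, c] = 0
    return out

img = np.zeros((5, 5), dtype=int)
img[0, 0] = 1          # 1-pixel speckle: removed
img[2:5, 2] = 1        # 3-pixel line: kept
clean = bwareaopen(img, 3)
```

In the road images, a threshold of 1200 pixels keeps the long curb lines while discarding texture noise and small clutter.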

- Filling image regions and holes (imfill)

Because the objects in the images are not always clear, especially at the borders, this function is used to fill the image regions that represent an object. It is very useful in the proposed algorithm for detecting the roundabout intersection. The region-filling operation can be described as the iteration

Xk = (Xk−1 ⊕ B) ∩ Aᶜ, k = 1, 2, 3, …

where X0 = Xf is any point inside the boundary of interest, B is a symmetric structuring element, ⊕ is dilation, ∩ is intersection, and Aᶜ is the complement of the set A. This is performed in MATLAB by the following command:

P = imfill(BW2, 'holes').
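The hole-filling behavior can equivalently be sketched as a flood fill from the image border: background pixels that cannot be reached from the border are holes and become foreground. This is a NumPy illustration of the 'holes' behavior, not the chapter's code; the square-outline test image is hypothetical.

```python
import numpy as np
from collections import deque

def imfill_holes(binary):
    """Fill holes: background pixels not reachable from the image border
    (4-connectivity) become foreground."""
    h, w = binary.shape
    reach = np.zeros((h, w), dtype=bool)
    queue = deque()
    for r in range(h):            # seed from every border background pixel
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and not binary[r, c]:
                reach[r, c] = True
                queue.append((r, c))
    while queue:                  # flood the outside background
        r, c = queue.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < h and 0 <= nc < w \
                    and not binary[nr, nc] and not reach[nr, nc]:
                reach[nr, nc] = True
                queue.append((nr, nc))
    return binary | ~reach        # holes are the unreachable background

ring = np.zeros((5, 5), dtype=bool)
ring[1, 1:4] = ring[3, 1:4] = True
ring[1:4, 1] = ring[1:4, 3] = True   # a closed square outline
filled = imfill_holes(ring)          # the interior pixel is filled
```

Filling the closed ellipse outline of a roundabout turns it into a solid blob, which is far easier to detect in the subsequent processing than a thin contour.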

The processed image sequences look like those shown in Figures 2 and 3, and are used later for Laser Simulator-based roundabout detection. Figures 8–11 in Section 4 also show the capability of the algorithm to develop the local map in indoor and outdoor road environments.
Figure :Image sequence processing where no roundabout can be detected, (a) original image, (b) curbed image in gray, (c) after the edge detection algorithm, (d) final processed image, and (e) developed image for the local map.
Figure :Image sequence processing where a roundabout is detected, (a) original image, (b) curbed image in gray values, (c) with the edge detection algorithm, (d) final image, and (e) developed image for the local map.

[h]Laser range finder

LRF measurements are used to localize the robot in the environment and to build a 2D local map. As previously mentioned, this device scans an area of 240° at 100 ms/scan. The surrounding environment appears in the LRF measurements as shown in Figure :

Figure :Laser measurements and the location of objects in its environment at the left, middle, and right side.

Two coordinate systems are used to characterize the LRF measurements, as depicted in Figure: the LRF measuring coordinate system and the LRF environment coordinate system. Since the LRF measurements are scanned starting from the origin point, the calculation is done based on the right triangle depicted in Figure :

Figure :Principle of laser measurement calculation.

If the laser measurement components in the yL direction are compared at points a, b, and c: La, Lb, and Lc are the lengths of the laser measurements at points a, b, and c, respectively, as shown in Figure. β is the angle between the measurement point and the platform coordinate system, which can be calculated from the scan-area angle Lθ = (0–240°). yLa and yLc represent the road, and yLb represents the curb. yLa and yLc have the same length in the y direction; however, the length of yLb is different and can be written as follows:

ZRb = hc, where hc is identified as the road curb height, which is known as a threshold in the program. ρ is defined as the angle between the LRF rays and the floor. For obstacle detection, two successive scan measurements (i and ii) are compared with each other in the yL direction in the region between e and d, as illustrated in Figure, to find the obstacles ahead of the robot as shown in Eqs. (5)–(7):

The width of the obstacle can then be calculated.

From the previous calculations, one can define the parameters that will be used later for road discovery by the LRF, as shown in Figure:

- Road fluctuation (the height of objects with reference to the laser device).
- Road width (the side distance with reference to the laser device), where rn is the dimension of the LRF signal for the n-th LRF measurement.
- Curb threshold: rf0 at β = 0° is used as the reference, and the other fluctuation measurements (rfn) on the left and right side of the same scan are compared with this baseline. If the deviation between the reference point and a measurement exceeds the predefined threshold th, that is, if |rfn − rf0| > th, the point is considered a road curb; otherwise, it is considered road. This operation is repeated for all measurements.
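The curb-threshold test can be sketched as follows. This is an illustrative Python fragment, not the chapter's MATLAB implementation; the scan values and the 80 mm threshold are hypothetical.

```python
def detect_curbs(fluctuations, ref_index, th):
    """Label each LRF fluctuation measurement as 'curb' or 'road' by
    comparing its deviation from the reference reading (beta = 0 deg)
    against the threshold th."""
    ref = fluctuations[ref_index]
    return ['curb' if abs(f - ref) > th else 'road' for f in fluctuations]

# Hypothetical scan: heights in mm relative to the reference reading;
# a jump of over 100 mm on the right side marks the curb (th = 80 mm).
scan = [0.0, 5.0, -3.0, 2.0, 120.0, 125.0]
labels = detect_curbs(scan, 0, 80.0)
```

The threshold th plays the role of the curb height hc: small fluctuations are road roughness, while a sustained deviation beyond th marks the curb line on either side of the scan.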

The LRF driver for reading data from the USB connection in real time was developed in MATLAB. It includes functions to identify, configure, and get data from the sensor, as further detailed in Appendix B. The algorithm for detecting road curbs based on the previously mentioned Eqs. (3)–(10) was developed and implemented in road environments, as shown in Figure. The LRF is the main sensor for calculating the robot's path within the environment; its measurements are used to generate the path planning equations, as shown in Sections 4.1 and 4.2.

Figure :Laser measurements for the experimental setup, (a) one scan of laser measurement (mm), (b)
road with curbs in 3D (mm), and (c) path generation (mm).

[h]Odometry

Two rotary encoders (model B106) were used to estimate the position of the robot; they are connected through the dual brush card pins (IFC-BH02) to the two differential wheels. A full rotation of an encoder is 500 pulses, and the linear position of a wheel is calculated from the encoder rotation, as shown in Figure, using the following expression:

C = 2πr (Pcur / Pfr)

Figure :Calculation of the linear position from the rotary encoder.

C is the accumulative path of the robot, r is the radius of the wheel, Pcur is the number of pulses at the current position, and Pfr is the number of pulses for one full rotation of the encoder.

Two encoders were used in the proposed system, and the accumulative path is calculated as the average of both encoders: C = (Cleft + Cright) / 2.
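The encoder-to-distance computation can be sketched as follows. The 500 pulses per rotation matches the text; the 0.1 m wheel radius is a hypothetical value for illustration.

```python
import math

PULSES_PER_REV = 500   # full encoder rotation, as in the B106 setup

def wheel_path(pulse_count, wheel_radius):
    """Accumulated linear distance of one wheel: C = 2*pi*r * (Pcur / Pfr)."""
    return 2.0 * math.pi * wheel_radius * (pulse_count / PULSES_PER_REV)

def robot_path(pulses_left, pulses_right, wheel_radius):
    """Accumulated robot path as the average of the two wheel paths."""
    return 0.5 * (wheel_path(pulses_left, wheel_radius)
                  + wheel_path(pulses_right, wheel_radius))

# Both wheels turn exactly one revolution (500 pulses each):
# the robot advances one wheel circumference.
c = robot_path(500, 500, 0.1)
```

Averaging the two wheels estimates the path of the robot's center; the difference between the two wheel paths would, in the same spirit, estimate the change in heading.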

[h]Navigation system results and discussion

The proposed WMR is able to navigate autonomously, following the road and effectively negotiating the curvature of a standard roundabout. The robot finds its path from the predefined start pose until it reaches the goal position using the LS approach on the local map identified by the robot's sensors: the camera, the LRF, and the odometry measurements. Normally, the start and goal positions are determined using a DGPS system, and the robot uses this information to detect the direction of the roundabout exit. Since the proposed robot navigates in a relatively small area (about 10–30 m), GPS is not useful here; instead, one can simply inform the robot of the roundabout exit direction and goal position before it starts to move. The goal position can be specified as the distance the robot should travel after passing through the roundabout; for example, the goal is located at the eastern exit of the roundabout (i.e., 270°) at a distance of 20 m. The lateral position the robot follows on the road (middle, left, or right) can be adjusted in the navigation program; in all the experiments, the robot was required to navigate in the middle of the road. The velocity of the robot during navigation was set in the range of 7–10 cm/s.
During navigation on the road, our Laser Simulator approach is used to detect the roundabout when it is present in the local map identified by the camera. Sensor fusion of the LRF and encoders is used to estimate the robot's position within the environment.

In general, two conditions are used to detect a roundabout: the absence of curbs on the left and right, and the discovery of an ellipse in the image, as shown in Figures 8 and 9.

Figure :Image sequence processing where no roundabout can be detected using LS, (a) image from the preprocessing and processing steps, and (b) applying the LS (continuous line in the middle).

Figure :Image sequence processing where a roundabout is detected using LS, (a) image from the preprocessing and processing steps, and (b) applying the LS (discontinuous line in the middle).
Figure :Correction of the robot path during road following when the start location of the robot is near the left curb, (a) laser measurements, (b) path determination: blue with (*) is the laser measurements, black with (o) is the road curbs and roundabout, and red with (−) is the optimum path, (c) images of the robot at the beginning of movement, and (d) images of the robot while correcting its path.

[h]Road following

In the road-following area, the camera, encoders, and laser are combined to find the collision-free path. The camera is used for roundabout detection, which is figured out using the Laser Simulator. The LRF is utilized for detecting the road curbs and localizing the robot in the environment. The encoder measurements are used to estimate the current position of the robot within the environment. The generation of the robot path is described in the following paragraphs.
[h]Roundabout navigation results
The results show the capability of the roundabout path-planning algorithm to find the right trajectory
with only a small deviation from the optimum path, as shown in Figure:

Figure: Camera sequence images when navigating outdoors with 360° rotation, (a) image at the start
of movement, (b) camera's local map when the robot starts to move, (c) camera's local map when the robot
detects the roundabout, and (d) robot path during navigation in a roundabout (in mm) with 360° rotation:
blue with (*) is the path, black with (O) is the road environment, and red with (−) is the optimum
path.

Figure shows how the camera, encoders, and laser range finder are used for path planning and
execution from the start to the goal position. It also shows the local map built from the camera's
sequence of frames, in which the developed view presents the curbs of the road environment. The camera's
local map is determined for each image in the video sequence, as shown in Figure(b) and (c),
and the LS is applied to find the roundabout. Figure(c) shows the last image, where the roundabout is
detected.

By applying the algorithm combining the cameras, laser range finder, and odometry described in Section 3,
one can determine the robot's path as in Figure(d). Note that the parts of the roundabout at the entrance
and exit areas do not appear in the roundabout environment because the laser has not yet reached those
areas; they are instead computed by a combined camera and laser range finder algorithm. The road-following
curbs are shown in Figure(d), where the entrance to the roundabout is on the left side and the exit on the
right. The robot's path looks smooth, especially in the indoor environments, compared with the optimum
path (continuous red line); however, there is a deviation in the outdoor measurements near the entrance
and exit of the roundabout due to the distance between the curbs and the roundabout center.

[mh] GPS and GNSS Integration

[h]The limitations of GPS

Some of the downsides of GPS are listed in . Among these are several limitations relevant to
this chapter. The weak signal intensity makes GPS less applicable where stable navigation is
mandatory or where navigation takes place indoors or in covered areas. The coarse accuracy of the
signal makes navigation ineffective in crowded cities, where landmarks are so close together that GPS
cannot differentiate among them. Furthermore, the GPS signal may disperse or change direction due to
interruptions caused by skyscrapers, trees, geomagnetic storms, etc. The impact of unreliable GPS is
huge, especially given the constantly growing use of navigation applications such as Google Maps and
Waze, which rely heavily on the GPS signal. The consequence may be more car accidents when required
information is missing exactly when it is critical for continuing the drive. The GPS signal alone
cannot cover all navigation instances. Local and timely knowledge is required for updated and accurate
information, so the system can react instantly when obstacles appear in the road ahead, for example,
a deep pit, a flooded road, or a road closure. To reduce the dependency on GPS, several methods and
technologies have been proposed, such as detailed map information, data from sensors, vision-based
measurements, stop lines, and GPS-fused SLAM technologies.

[h]GPS and GIS integration

A geographic information system (GIS) is a system designed to capture, store, manipulate, analyze,
manage, and present all types of geographical data. Many electronic navigation systems deliver their
road-guiding instructions using only verbal commands referring to the associated electronic map
displayed to the user. This approach assumes the user is familiar with street maps and road networks,
which is sometimes not the case. In addition, there are places where street maps are not commonly used;
instead, landmarks allow intuitive navigation via recognizable and memorable views along the route.
The introduction of buildings as landmarks, together with corresponding spoken instructions, is a step
towards more natural navigation. The integration of GPS and GIS provides this capability. The main
problem lies in identifying suitable landmarks and evaluating their usefulness for navigation
instructions. Existing databases can help tackle this problem and can be an integrated part of most
navigation applications. For example, Brondeel et al. used GPS, GIS, and accelerometer data to collect
trip data and proposed a prediction model for transportation modes with a high rate of correct
classification. ResZexue et al. developed a logistics distribution manager (LDM) software and a smart
machine (SM) system based on fusing GPS, GIS, Big Data, Internet+, and other technologies to build a
robust information management system for the logistics industry. The resulting logistics facility
achieved shorter distribution times, improved operational competitiveness, a more efficient logistics
distribution workflow, and cost savings. These examples demonstrate the level of improvement we can
expect by integrating GPS and GIS as well as the IoT, mobile phones, and other current technologies.

[h]GPS and mobile phone integration

GPS positions provided via phone are generated using multiple different methods, resulting in highly
variable performance. Performance depends on the smartphone's attributes, the cell network, the
availability of GPS satellites, and the line of sight to those satellites. The time from turning on the
smartphone to obtaining GPS coordinates is relatively long; a variety of techniques are used to
accelerate it. Some phones have incomplete GPS hardware and require a cell network to function. The
quality of the GPS antenna determines how long the device takes to get a lock. For example, the S3 Mini
device has relatively good GPS hardware, including GLONASS and A-GPS support.

[h]Urban vehicles navigation

Urban canyons, sky blockage, and multipath errors affect the quality and accuracy of
GNSS/GPS. Public transportation in modern cities may comprise hundreds of routes and thousands of bus
stops, exchange points, and buses. These two factors make urban bus systems hard to follow and
complex to navigate. Mobile applications provide passengers with transport-planning tools that find the
optimal route, the next bus number, the arrival time, and the ride duration. More advanced applications
also provide micro-navigation decisions, such as the current position and bus number, the number of
stops left until arrival, and exchanges to a better route. Micro-navigation decisions are highly
contextual and depend not just on time and location but also on the user's current transport mode, such
as waiting for a bus or riding on one. For this emerging technology, accuracy and robustness are
critical requirements for safe guidance and stable control. GNSS accuracy can be significantly improved
using techniques such as differential GNSS (DGNSS), augmented GNSS, and precise positioning services
(PPS), although these add complexity and cost. Multi-constellation GNSS also enhances accuracy by
increasing the number of visible satellites. Nevertheless, in dense urban areas with high buildings,
the geometry of the visible satellites often leads to high uncertainty in the vehicle's GNSS position
estimate, so performance in these areas remains challenging.

[h]Bus navigation using embedded Wi-Fi and a smartphone application

Urban Bus Navigator (UBN) is a system infrastructure that connects passengers' mobile smartphones
with Wi-Fi-enabled buses, gaining real-time information about passengers' journeys and transport
situations. A key feature of UBN is semantic bus ride detection, which identifies the concrete bus and
route the passenger is riding on, providing continuous, just-in-time dynamic rerouting and end-to-end
guidance for bus passengers. Technical tests indicate the feasibility of semantic bus ride detection,
while user tests yielded recommendations for effective user support with micro-navigation. The
system elements include semantic bus ride detection using a Wi-Fi-based recognition system and
dynamic trip tracking. Semantic bus ride detection combined with the phone's GPS is used to
monitor the passenger's trip progress. Deviations are immediately recognized and trigger trip
replanning, resulting in a new set of navigation instructions for the passenger. The architecture
comprises Wi-Fi for proximity detection of buses by the passenger's mobile phone, a smartphone
application for trip planning using macro-navigation, context-aware trip hints using micro-navigation,
context sensing, bus ride recognition, and trip tracking.
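A minimal sketch of how Wi-Fi-based bus ride detection might work, assuming each bus broadcasts a known BSSID and a ride is declared after a few consecutive sightings; the fleet mapping and persistence threshold below are invented for illustration, not UBN's actual recognition logic:

```python
def detect_bus_ride(scans, fleet, min_hits=3):
    """Return the bus ID whose Wi-Fi BSSID appears in at least `min_hits`
    consecutive scans (a crude persistence test), or None if no bus is
    recognized. `scans` is a list of sets of observed BSSIDs, ordered in
    time; `fleet` maps BSSID -> bus ID."""
    streak = {}
    for seen in scans:
        for bssid, bus in fleet.items():
            if bssid in seen:
                streak[bssid] = streak.get(bssid, 0) + 1
                if streak[bssid] >= min_hits:
                    return bus          # same bus seen long enough: riding it
            else:
                streak[bssid] = 0       # sighting interrupted, reset streak
    return None

fleet = {"aa:bb:cc:dd:ee:ff": "bus42"}
print(detect_bus_ride([{"aa:bb:cc:dd:ee:ff"}] * 3, fleet))  # bus42
```

Once a ride is detected this way, the phone's GPS can take over trip-progress tracking as described above.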

[h]GNSS/IMU sensor fusion scheme

This urban navigation approach is based on detecting and mitigating GNSS errors caused by dense high
buildings interfering with the signals passing through. It uses a map-aided adaptive fusion scheme. The
method estimates the currently active map segment using dead reckoning and robust map-matching
algorithms, modeling the vehicle state history, road geometry, and map topology in a hidden Markov
model (HMM). The Viterbi algorithm decodes the HMM and selects the most likely map
segment. The projection of the vehicle state onto the map segment is used as a supplementary position
update to the integration filter. The solution framework has been developed and tested on a land-based
vehicular platform. The results show reliable mitigation of biased GNSS positions and accurate map
segment selection in complex intersections, forks, and joins. In contrast to common adaptive
Kalman filter methods, this solution does not depend on redundant pseudo-ranges and residuals, which
makes it suitable for use with arbitrary noise characteristics and varied integration schemes.
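The Viterbi decoding step described above can be sketched as follows; the segment topology, emission model, and initial probabilities here are toy values for illustration, not the cited paper's actual HMM parameters:

```python
import math

def viterbi(segments, transitions, emissions, observations, init):
    """Select the most likely map-segment sequence for a series of position
    fixes. `transitions[s]` is the set of segments reachable from s (road
    topology); `emissions(s, z)` returns P(fix z | segment s)."""
    # log-probability of the best path ending in each segment
    prob = {s: math.log(init.get(s, 1e-12)) +
               math.log(emissions(s, observations[0])) for s in segments}
    back = [{}]
    for z in observations[1:]:
        new_prob, ptr = {}, {}
        for s in segments:
            # best predecessor that can transition into s
            cands = [(prob[p], p) for p in segments if s in transitions[p]]
            best, arg = max(cands)
            new_prob[s] = best + math.log(emissions(s, z))
            ptr[s] = arg
        prob, back = new_prob, back + [ptr]
    # backtrack from the most likely final segment
    s = max(prob, key=prob.get)
    path = [s]
    for ptr in reversed(back[1:]):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))

# Toy example: two segments where B is only reachable from A
segs = ["A", "B"]
topo = {"A": {"A", "B"}, "B": {"B"}}
emis = lambda s, z: 0.9 if s.lower() == z else 0.1
print(viterbi(segs, topo, emis, ["a", "a", "b"], {"A": 0.5, "B": 0.5}))
# ['A', 'A', 'B']
```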

[h]Navigation based on compass-based navigation control law

Urban environments offer a challenging scenario for autonomous driving. The proposed solution
allows a vehicle to autonomously navigate urban roadways with minimal a priori map knowledge or GPS.
Localization is achieved by a Kalman filter extended with odometry, compass, and sparse landmark
measurement updates. Navigation is accomplished by a compass-based navigation control law. Experiments
validate simulated results and demonstrate that, for given conditions, an expected range can be found
for a given success rate.

The architecture contains steering and speed controllers, an object tracker, a path generator, a pose
estimator, and a navigation algorithm, using sensors that allow real-time control. High-level
localization is provided by the pose estimator, which utilizes only odometry measurements, compass
measurements, and sparse map-based measurements. The sparse map-based measurements, generated
by computer vision methods, compare raw camera images to landmark images contained in a
sparse map. The roadway scene includes lane line markings, road signs, traffic lights, and other sensor
measurements. The scene information and the inertial pose estimate are fed into a navigation algorithm
to determine the best route to the target. This navigation scheme is provided by a
compass-based navigation control law.
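A compass-based navigation control law of the kind described can be sketched as a proportional heading controller; the gain and saturation limit below are assumed values, not those of the cited system:

```python
import math

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def compass_steering(heading, bearing_to_goal, k=1.0, max_steer=0.5):
    """Proportional steering command (rad) that drives the compass heading
    toward the bearing to the goal, saturated at max_steer."""
    error = wrap(bearing_to_goal - heading)
    return max(-max_steer, min(max_steer, k * error))

# Goal 45 degrees to the left: command saturates at the steering limit
print(compass_steering(0.0, math.pi / 4))  # 0.5
```

The angle wrapping is essential: without it, a goal just across the ±180° boundary would produce a huge error and a turn in the wrong direction.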

[h]Space navigation systems

Common navigation technologies assume navigation on a two-dimensional (2D), flat land surface.
Navigation in three dimensions (3D) is much more complicated and requires new technologies to
complement the existing 2D navigation technologies.
[h]Autonomous navigation of micro aerial vehicles

In this section we present a low-computation state estimator enabling autonomous flight
of micro aerial vehicles. All estimation and control tasks are solved on board and in real time on a
simple computational unit. The state estimator fuses observations from an inertial measurement unit,
an optical-flow smart camera, and a time-of-flight range sensor. The smart camera provides optical-flow
measurements and odometry estimation, avoiding the need for image processing and remaining usable
during flight times of several minutes. Based on the estimated vehicle state, a nonlinear controller
operating in the special Euclidean group SE(3) can drive a quadrotor platform in 3D space while
guaranteeing the asymptotic stability of 3D position and heading. The approach is validated through
simulations and experimental results.
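One simple way to fuse an IMU with a time-of-flight range sensor, in the spirit of the estimator described above, is a complementary filter on the vertical axis. This is a generic sketch, not the paper's estimator: the blend factor is an assumed value, and the velocity state is left uncorrected here for brevity.

```python
def fuse_altitude(z_est, v_est, accel_z, range_z, dt, alpha=0.98):
    """One complementary-filter step for the vertical state: integrate the
    IMU vertical acceleration, then pull the altitude estimate toward the
    time-of-flight range measurement. `alpha` weights the inertial
    prediction (0.98 is an assumed tuning value)."""
    v_est += accel_z * dt                      # integrate acceleration
    z_pred = z_est + v_est * dt                # predicted altitude from IMU
    z_est = alpha * z_pred + (1 - alpha) * range_z  # blend in range reading
    return z_est, v_est

# Hovering at 1 m: the estimate converges to the range sensor's reading
z, v = 0.0, 0.0
for _ in range(500):
    z, v = fuse_altitude(z, v, accel_z=0.0, range_z=1.0, dt=0.02)
print(round(z, 3))  # 1.0
```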

[h]Vision-based navigation for micro helicopters

Weiss developed a vision-based navigation system for micro helicopters operating in large and
unknown environments. It is based on vision-based methods and a sensor fusion approach for state
estimation and self-calibration of sensors with differing availability during flight. This
is enabled by an onboard camera, a real-time motion sensor, and vision algorithms. An onboard
multi-sensor fusion framework estimates the vehicle's pose and the inter-sensor calibration at the
same time for continuous operation. It runs in time linear in the number of key frames captured in a
previously visited area. To maintain constant computational complexity, improve performance, and
increase scalability and reliability, the computationally expensive vision part is replaced by the
final calculated camera pose.

[h]Space navigation using formation flying tiny satellites

Traditional space positioning and navigation are based on large satellites flying in semi-fixed orbits
and are therefore costly and inflexible. With the recent development of low-mass, low-power navigation
sensors and the popularity of smaller satellites, a new approach has emerged in which many tiny
spacecraft fly in clusters under controlled configurations, using their cumulative power to perform the
necessary assignments. To keep the configurations stable yet changeable, positioning, attitude, and
intersatellite navigation are used. Carrier-phase differential GPS (CDGPS) is used to determine the
relative position and attitude between the formation-flying satellites; range coefficients, GPS
differential corrections, and other data are exchanged among the spacecraft, enhancing the precision of
the ranging and navigation functions. The CDGPS communicates with the NAVSTAR GPS constellation to
provide precise measures of the relative attitude and the positions between vehicles in the formation.

[h]Pedestrian navigation systems

Pedestrian navigation services enable people to retrieve precise instructions to reach a specific location.
As the spatial behavior of people on foot differs in many ways from a driver's, common concepts for car
navigation services are not suitable for pedestrian navigation. Cars use paved roads with clear
borderlines and road signs, so keeping the car on track is the main role, neglecting obstacles and
hazards unless the system is integrated with a social network. Pedestrians, unlike cars, may not follow
a defined road. This makes personal navigation more complicated and forces us to add special features
required for safe navigation. Pedestrian navigation requires very accurate, high-resolution, real-time
responses. GPS alone does not support last-moment route changes, such as road detours, significant
obstacles, and safety requirements. However, integrating the IoT and GPS via an application yields a
solution providing accurate and safe navigation. To enable it, a two-stage personal navigation system is
used. In the first stage, the trail is photographed by a navigated drone, and the resulting video is
saved in a cloud database. In the second stage, a mobile application is loaded onto the pedestrian's
mobile phone. When the pedestrian is about to walk, they activate the mobile application, which
synchronizes itself with the cloud navigation database, and then instructions from the mobile phone
guide the pedestrian along the trail walk. A more advanced system contains both stages within the mobile
application: the mobile video camera is activated, captures the trail images in front of the pedestrian,
processes them, and guides the pedestrian accordingly. In case of an upcoming obstacle, the application
proposes the safest and most effective detour and guides the pedestrian accordingly.

Personal navigation systems are very accurate and safe, operate indoors and outdoors, and are available
as long as the mobile phone is connected and its internal storage is big enough. They provide spatial
information for climbing, wandering, or tramping users. They are used for locating casualties, as well
as for the self-orientation of rescue teams in areas with low visibility. In military and security
operations, localization and information technologies are used by soldiers to self-locate, collect, and
collate. A similar implementation with the same functionality is a walking stick with embedded micro
devices and software as described above, together with a wearable Bluetooth headset with an embedded
camera at its front.

[h]Landmark-based pedestrian navigation systems

Navigation in cities is commonly done by the target address: zip code, street, and house number.
However, there are cases where people do not use the street and house number as an address but rather
use landmarks to identify the route to the target as well as the target location. By combining GIS and
GPS, the desired landmark coordinates are loaded into the cloud database, and the corresponding
navigation application is modified to identify the landmarks on the ground.

A landmark-based navigation system is composed of a video camera to obtain and analyze pedestrian
paths, selected reliable landmarks along the main routes, a routing table containing all relevant origins
and destinations within the site, viewing positions and orientations chosen to ensure maximal coverage
of interesting spots, thousands of partial routes for the entire recording period, and the stops detected
over a whole day for different definitions of a stop. Based on the defined sections, landmarks, and
decision points, a routing table is created to define navigational instructions from each origin in the
station to each possible destination. Table columns correspond to the origin landmarks and decision
points; rows correspond to destination landmarks. The identified landmarks and the defined route
instructions are used to develop an audio guiding system using speech recognition and text-to-speech
software. The audio guiding system employs verbalisms that are as distinct and clearly recognizable as
the visual landmarks, so that users can intuitively match the description with what they see.
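The origin-by-destination routing table described above can be modeled as a nested lookup; the landmark names and instructions below are invented examples, not data from an actual deployment:

```python
# Routing table: outer keys = origin landmarks / decision points,
# inner keys = destination landmarks, values = spoken instructions.
ROUTES = {
    "main_entrance": {
        "platform_2": "walk past the clock tower, then turn left",
        "ticket_office": "go straight to the glass kiosk",
    },
    "clock_tower": {
        "platform_2": "take the stairs behind you",
    },
}

def instruction(origin, destination):
    """Look up the spoken instruction for one route leg; unknown pairs
    fall back to a reroute prompt."""
    return ROUTES.get(origin, {}).get(
        destination, "landmark not recognized, replanning")

print(instruction("main_entrance", "platform_2"))
# walk past the clock tower, then turn left
```

A text-to-speech engine would then verbalize the returned instruction at each decision point.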

[h]Shoe navigation based on micro electrical mechanical system

A micro electro-mechanical system (MEMS) is a family of thumbnail-sized technologies enabling a wide
variety of advanced and innovative applications. When such a device is mounted on a shoe, it collects
the number of steps, the average step width, and the walking directions. This data is constantly
collected and processed, and the person wearing the shoe is guided via signals. Magnetic field
disturbances may cause navigation errors, which are offset by a special filter. Experiments show that
this approach is applicable and efficient.
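The core of such shoe-mounted dead reckoning is accumulating step vectors. A minimal sketch, assuming the step length and a heading already filtered for magnetic disturbances are extracted from the MEMS data:

```python
import math

def dead_reckon(steps, start=(0.0, 0.0)):
    """Accumulate shoe-mounted MEMS step events into a 2-D position.
    Each event is (step_length_m, heading_rad); headings are assumed to
    be pre-filtered for magnetic disturbances."""
    x, y = start
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# Three 0.7 m steps east, then two 0.7 m steps north
x, y = dead_reckon([(0.7, 0.0)] * 3 + [(0.7, math.pi / 2)] * 2)
print(round(x, 2), round(y, 2))  # 2.1 1.4
```

Because each step's error compounds, such systems drift over time, which is why the magnetic-disturbance filter mentioned above matters.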
[h]Indoor navigation technologies

Indoor navigation systems became popular due to the lack of GPS signals indoors and the increase in
navigation needs, especially in confined areas such as parking garages and huge building complexes.
Several indoor navigation systems have already been implemented. Each is based on a different technology
that complies with the specific requirements and constraints of the location it is expected to navigate
in, and each solution has its own technical and usability limitations. Objects are tracked using wireless
concepts, optical tracking, ultrasound techniques, sensors, infrared (IR), ultra-wide band (UWB),
wireless local area networks (WLANs), Wi-Fi, Bluetooth, radio frequency identification (RFID), assisted
GPS (A-GPS), and more. Most solutions suffer from limited capabilities, low accuracy, unreliability,
design complexity, low security, and high configuration costs.

[h]NFC-based indoor navigation system

NFC technology allows communication over short-range, mobile, and wireless conditions. NFC
communication happens when two NFC-capable devices are close to each other; users use their NFC
mobiles to interact with an NFC tag or another NFC mobile. An NFC-based indoor navigation system
enables users to navigate through a complex of buildings by touching NFC tags spread around the site,
orienting users to the destination. NFC brings considerable advantages to indoor navigation systems in
terms of security, privacy, cost, performance, robustness, complexity, and commercial availability.
The application receives the destination name and, as the user touches the mobile device to the NFC
tags, navigates to the desired destination.

[h]Indoor garage navigation based on car-to-infrastructure communication

Indoor micro-navigation systems for enclosed parking garages are based on car-to-infrastructure
communication, providing layout information about the car park and the coordinates of the destination
parking lot. The system uses unique signal rates. When a car is detected, the system calculates its
position and transmits the data to the vehicle to substitute for its internal positioning system. With
this information, the vehicle is guided. Integration with the outdoor navigation system is available to
allow a smooth outdoor/indoor transition.

[h]Autonomous vision-based micro air vehicle (MAV) for indoor and outdoor navigation

In this section we introduce a quadrotor that performs autonomous navigation in complex indoor and
outdoor environments. An operator selects target positions in the onboard map, and the system
autonomously plans flights to these locations. An onboard stereo camera and an inertial measurement
unit (IMU) are the only sensors, so the system is independent of external navigation aids like GPS.
All navigation tasks are implemented onboard. The system is based on FPGA-based dense stereo matching
using semi-global matching, locally drift-free visual odometry with key frames, and sensor data fusion,
and it utilizes the depth images produced by stereo matching. To save processing time and make large
movements, and thus low frame rates, possible, the system works only on features. A wireless connection
is used for sending images and a 3D map to the operator and for receiving target locations. The results
of a complex, autonomous indoor/outdoor flight support this approach. The position is controlled by the
estimated motion of the sensor. To enable this, a state-machine controller, a position tracking system,
and a reference generator are implemented. The reference generator creates smooth position, velocity,
and acceleration references, and a tracking controller follows a list of waypoints. The flown path is
composed of straight line segments between consecutive waypoints.
[h]Obstacle avoidance navigation systems

A comprehensive automated navigation system must incorporate effective tools for detecting road
obstacles and instantly proposing the optimal alternative route bypassing the detected obstacle. It
combines optimal route finding, real-time route inspection, and route adjustments to ensure safe
navigation. The following are three examples utilizing advanced technologies such as computer vision,
fuzzy logic, and context awareness. More examples can be found in .

[h]Image processing obstacle avoidance navigation

Unmanned aerial vehicles (UAVs) use vision as the principal source of information, obtained through a
monocular onboard camera. The system compares the obtained image to the obstacles to be avoided,
enabling a micro aerial vehicle (MAV) to detect and avoid obstacles in an unknown controlled
environment. Only feature points with the same type of contrast are compared, achieving a lower
computational cost without reducing descriptor performance. After detecting the obstacle, the vehicle
must recover its path. The algorithm starts when the vehicle comes closer to the obstacle than the
allowed distance. The limit area value is experimentally obtained by defining the dimensions of
obstacles in pixels at a specific distance. The output of the control law moves the vehicle away from
the center of the obstacle, avoiding it; if the error is less than zero, the vehicle moves to the right
side. For detouring permanent obstacles, a preliminary process scans the route and corrects it so that
the corrected route already considers all known obstacles and skips them.
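The control law sketched in this paragraph, where the sign of the pixel error selects the sidestep direction once the obstacle's pixel area crosses the experimentally set limit, might look like the following; the image width, area limit, and command values are placeholders, not the published parameters:

```python
def avoidance_command(obstacle_cx, obstacle_area_px,
                      img_width=640, area_limit=5000):
    """Lateral command steering the MAV away from the obstacle centre once
    its pixel area exceeds the limit obtained experimentally.
    Error = obstacle centre minus image centre; a negative error (obstacle
    left of centre) makes the vehicle sidestep right (+1), else left (-1)."""
    if obstacle_area_px < area_limit:
        return 0.0                       # obstacle still far enough away
    error = obstacle_cx - img_width / 2
    return 1.0 if error < 0 else -1.0    # +1 = sidestep right, -1 = left

print(avoidance_command(100, 6000))  # 1.0  (obstacle on the left: go right)
```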

[h]Fuzzy logic technique for mobile robot obstacle avoidance navigation

Mobile robots perform tasks such as rescue and patrolling and can navigate intelligently by using
sensor-based control techniques. Several techniques have been applied to robot navigation and obstacle
avoidance. The fuzzy logic technique is inspired by human perception-based reasoning and has been
applied to behavior-based robot navigation and obstacle avoidance in unknown environments. It trains
the robot to navigate using the obstacle distances received from a group of sensors. A reinforcement
learning method and a genetic algorithm optimize the fuzzy controller to improve its performance while
the robot moves. Comparing the performance of different membership functions, such as triangular,
trapezoidal, and Gaussian, for mobile robot navigation shows that the Gaussian membership function is
the most efficient for navigation.
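A toy version of such a fuzzy controller with Gaussian membership functions might look as follows; the membership centres, widths, and rule outputs are illustrative values only, not tuned controller parameters:

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership function mu(x) = exp(-(x - c)^2 / (2 sigma^2))."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def steer_from_distance(d):
    """Tiny fuzzy rule base for obstacle avoidance:
    NEAR -> hard turn, MEDIUM -> gentle turn, FAR -> go straight.
    Defuzzified by a weighted average of the rule outputs (degrees)."""
    mu_near = gaussian_mf(d, 0.3, 0.2)   # centres/widths in metres, assumed
    mu_med = gaussian_mf(d, 1.0, 0.3)
    mu_far = gaussian_mf(d, 2.5, 0.8)
    turn = mu_near * 45.0 + mu_med * 15.0 + mu_far * 0.0
    return turn / (mu_near + mu_med + mu_far)

print(round(steer_from_distance(0.3), 1))  # strong turn when very close
print(round(steer_from_distance(2.5), 1))  # near-zero turn when clear
```

The smooth tails of the Gaussian functions give gradual transitions between rules, which is one reason they compare favorably with triangular and trapezoidal functions in the study cited above.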

A similar concept uses a neural network learning method to construct collision-free path planning for
robots. Real-time collision-free path planning is more difficult when the robot is moving in a dynamic
and unstructured environment.

[h]Context-aware mobile wearable system with obstacle avoidance

The system is composed of three embedded components: a map manager, a motion tracker, and a hindrance
dodger. The map manager generates semantic maps from a given building model. The hindrance dodger
detects visible objects lying on the road and suggests a safe bypass route to the target location. A
developed prototype performed very well, proving that this navigation system is effective and efficient.
Chapter 4: UAV Communication Systems

[mh] Communication Protocols for UAVs

AVs are unmanned systems that have the ability to autonomously observe, orient, decide, and act (the
OODA loop) in air, ground, sea-surface, and underwater environments; they include unmanned aerial
vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned surface vessels (USVs), and unmanned
underwater vehicles (UUVs). In practical applications, communication and networks are particularly
important for AVs in two main respects: first, maintaining communication between AVs and mission
control stations to ensure that AVs complete their tasks under the supervision of operators; second,
building information transmission channels with other AVs and manned systems so that multiple systems
can collaborate on operations. In different application scenarios, networks of different structures
built from various communication means are used to interact with single or multiple AVs during mission
execution, such as fixed structured networks, 4G/5G with a star network structure, WiFi, self-organizing
network structures without fixed configuration, and mesh networks with a multi-layer grid network
structure.

In this chapter, the communication network technologies of AVs are described in terms of wireless
communication, network architecture, and data transmission. During the introduction, the specific
technology applications, challenges, and considerations will be explained with the examples of AVs
such as UAVs, unmanned vehicles, and robots, which are widely used at present.

[h]Communication means and technologies for AVs

A very large number of mobile communication technologies are currently applied to AVs, gradually
realizing remote control of AVs, multi-system collaboration, and clustering of large-scale AVs.
According to the basic networking approach, communication can be divided into centralized and
distributed modes. In centralized communication, clustering communication technology based on
3G/4G-LTE/5G mobile networks and high-power WiFi networking technology for small-scale, community-level
applications have gradually been proposed and widely used, benefiting from high-bandwidth, low-latency,
fast-access, and high-capacity communication capabilities. Unfortunately, this centralized architecture
requires central access devices such as base stations and routers, and the communication range is
limited by the single central device.

Compared with centralized networking methods, distributed communication modes are more suitable for
cluster systems, among which self-organizing networks, as typical distributed networks, have become a
hot research topic. For example, in a UAV cluster self-organizing network, communication between
multiple UAVs does not rely entirely on basic communication facilities such as ground control stations
or satellites; instead, the UAVs themselves serve as network nodes, and each node can forward control
commands to the others, exchange data such as situational awareness and intelligence collection, and
automatically establish a network. Dynamic networking, wireless relaying, and other technologies
achieve interconnection and interoperability between autonomous systems, with the advantages of
self-organization, self-healing capability, and efficient, fast networking, ensuring that the UAV group
acts as a whole to perform combat missions. Gupta et al. pointed out that the wireless self-organizing
network is the most suitable communication network architecture for UAVs; however, more factors should
be considered in applications, such as dynamic topology, routing direction, heterogeneous network
switching, and the energy of each UAV. Liang Yi Xin et al. from Southeast University reviewed the
airborne network architecture and network protocol stack, compared planar and hierarchical network
structures, and pointed out that research is needed in network architecture design, mobility models,
routing mechanisms, and transmission control mechanisms. Chen Si et al. proposed a highly dynamic
mobile self-organizing network architecture with switchable modes. However, only a few researchers have
built a more practical network. Chen Wu et al. implemented a small UAV self-organizing network
demonstration and validation system based on 802.11b/g, with optimized routing and transmission
protocols, an H.264-based video transmission system, and a secure communication protocol based on
offline digital certificates; the system consisted of only one command terminal and three mobile
terminals and was verified in real flight. The various network communication technologies applicable to
UAV clusters are summarized in Table.

Networking method          Transmission rate (>1 Mbps)   Transmission distance (>1 km)   Communication cost
Satellite                  Yes                           Yes                             High
WiFi                       Yes                           No                              Low
WiMAX                      Yes                           Yes                             Middle
LTE (4G)                   Yes                           Yes                             Middle
Zigbee                     No                            No                              Low
Bluetooth                  No                            No                              Low
UWB                        Yes                           No                              Low
Self-organizing network    Yes                           Yes                             Middle

Table: Comparison of transmission performance of common communication technologies.

[h]Key technologies for wireless communication in AVs

AVs are usually used in 4D (dangerous, dull, dirty, deep) working environments, where geographical
conditions, weather, and human activities have a strong impact on wireless communication; therefore,
anti-jamming and security are the most important aspects of AV communication technology.

[h]Communication anti-jamming

Anti-jamming communication is the general term for various technical and tactical measures to ensure
the normal conduct of communications in various interference conditions or complex electromagnetic
environments. There are two major types of anti-jamming communication technologies in common
use, one is based on the extended-spectrum anti-jamming communication technology, and the other is
based on the nonextended spectrum anti-jamming communication technology.

Spread spectrum (SS) is a means of anti-jamming communication that spreads the information over a
wider bandwidth for transmission. Common variants include direct-sequence spread spectrum (DSSS),
frequency-hopping spread spectrum (FHSS), time-hopping spread spectrum (THSS), chirp (linear
frequency modulation) spread spectrum, and hybrid spread spectrum. With the development of artificial
intelligence, anti-jamming communication technology based on spectrum sensing, cognitive radio, and
related techniques is developing rapidly.

Nonextended-spectrum anti-jamming communication is a general term for technical methods that
achieve anti-jamming without spreading the signal spectrum. Commonly used methods include adaptive
filtering, interference cancelation, adaptive frequency selection, automatic power adjustment,
adaptive antenna nulling, smart antennas, signal redundancy, diversity reception, signal
interleaving, and signal bursting. Compared with spread-spectrum systems, nonextended-spectrum
methods cover a wider range of techniques and draw on more fields of knowledge. Spread-spectrum
anti-jamming mainly addresses interference in the frequency, time, and speed domains, while
nonextended-spectrum anti-jamming additionally exploits the power, spatial, transform, and network
domains.

Although anti-jamming communication methods continue to multiply, in essence the goal of all of
them is the same: to improve the effective signal-to-noise-and-interference ratio (SNIR) at the
receiver end of the communication system, thereby ensuring that the receiver can correctly recover
the useful signal.
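
As a quick illustration of this goal, the sketch below (Python; all power values are invented)
computes SNIR in dB and shows how cancelling interference raises it:

```python
import math

def snir_db(signal_w: float, interference_w: float, noise_w: float) -> float:
    """Signal-to-noise-and-interference ratio (SNIR) in dB."""
    return 10 * math.log10(signal_w / (interference_w + noise_w))

# Hypothetical receiver: 1 uW signal, 4 nW interference, 1 nW noise.
before = snir_db(1e-6, 4e-9, 1e-9)   # ~23 dB
# Suppose adaptive nulling cancels interference down to the noise floor.
after = snir_db(1e-6, 1e-9, 1e-9)    # ~27 dB
assert after > before
```

Every technique listed above, whether it boosts signal power or suppresses interference, moves this
one number in the same direction.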

[h]Information encryption technology

The data link uses a uniform, bit-oriented information standard with a uniform type format, so data
encryption is generally used to ensure data security. According to how plaintext is encrypted and
how secret keys are generated and managed, encryption systems can be divided into three categories:
block ciphers (also known as symmetric encryption), in which the plaintext is first divided into
groups (each containing multiple characters) and then encrypted group by group; public-key
encryption (also known as asymmetric encryption); and one-way encryption.

A symmetric encryption algorithm uses the same key for encryption and decryption and is reversible
(decryptable). The AES algorithm is the Advanced Encryption Standard in cryptography, a symmetric
block cipher with a minimum supported key length of 128 bits; it has been widely analyzed and used
worldwide. The advantage of AES is speed. The disadvantage is that transferring and storing the key
is a problem: both parties to encryption and decryption use the same key, so the key can easily be
leaked.
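
The shared-key property can be seen in a toy cipher. The sketch below is a repeating-key XOR, not
AES (Python's standard library has no AES); it only illustrates that one shared key both encrypts
and decrypts:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a repeating key (NOT secure, NOT AES).
    # Encryption and decryption are the same operation with the same key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"                      # both parties must hold this
ciphertext = xor_cipher(b"telemetry frame 042", key)
plaintext = xor_cipher(ciphertext, key)     # applying the same key reverses it
assert plaintext == b"telemetry frame 042"
```

A real system would use AES through a vetted library; the point here is only that leaking the single
shared key breaks both directions at once.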

An asymmetric encryption algorithm uses different keys (public and private) for encryption and
decryption; asymmetric encryption, also called public-key encryption, is likewise reversible
(decryptable). The RSA encryption algorithm is based on a very simple number-theoretic fact: it is
easy to multiply two large prime numbers but extremely difficult to factorize their product, so the
product can be made public as the encryption key. Although the security of RSA has never been
theoretically proven, it has survived various attacks and has not been completely broken. The
advantage of RSA is that the encryption and decryption keys differ and the public key can be
published, so only the private key must be kept secret; this greatly simplifies key transmission and
reduces the chance of compromise. The disadvantage is that encryption is slow.
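
The number-theoretic fact behind RSA can be demonstrated with the classic textbook parameters (tiny
primes, insecure, purely illustrative):

```python
# Textbook RSA with tiny primes (real keys use moduli of 2048+ bits).
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e, here 2753

m = 65                     # message encoded as an integer < n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # decrypt with the private key (d, n)
```

`pow(e, -1, phi)` requires Python 3.8+. With such a small modulus an attacker factors n instantly;
security rests entirely on the factoring being infeasible at real key sizes.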

A typical one-way encryption algorithm is MD5 (message-digest algorithm 5); as a one-way algorithm
it is irreversible (data hashed by MD5 cannot be decrypted). The MD5 digest is generally much
smaller than the input data, its length is fixed (128 bits), and the digest is effectively unique
to its input. Typical applications include irreversible password storage and information-integrity
checking: generating a message digest for a piece of information prevents undetected tampering.
With a third-party certification authority, MD5 can also be used to prevent “repudiation” by the
author of a document, which is the digital-signature application.
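
A minimal integrity check with MD5 using Python's standard hashlib (the message text is invented):

```python
import hashlib

message = b"mission waypoint list v1"
digest = hashlib.md5(message).hexdigest()    # always 128 bits = 32 hex chars

# Integrity check: any tampering with the message changes the digest.
tampered_digest = hashlib.md5(b"mission waypoint list v2").hexdigest()
assert len(digest) == 32
assert digest != tampered_digest
```

Note that MD5 is considered broken for collision resistance today; modern designs prefer SHA-256,
but the usage pattern is identical.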

[h]Information authentication technology

Authentication of a message is another important aspect of message security. The purpose of
authentication is twofold: first, to verify that the sender of the message is genuine and not an
impostor; second, to verify the integrity of the message, that is, that the information has not been
tampered with, replayed, or delayed during transmission or storage.

[h]Digital signature technology

A digital signature algorithm consists of two main parts: a signature algorithm and a verification
algorithm. A signer signs a message using a (secret) signature algorithm, and the resulting
signature can be checked by a public verification algorithm: given a signature, the verification
algorithm answers “true” or “false” depending on whether the signature is genuine. There are many
digital signature algorithms, such as the RSA digital signature algorithm, the finite-automaton
digital signature algorithm, etc.
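
A hash-then-sign sketch using textbook RSA parameters (tiny and insecure, illustration only; the
message strings are invented):

```python
import hashlib

# Textbook RSA parameters (far too small for real use).
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)                  # signer uses the private exponent d

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h     # anyone can check with the public key

sig = sign(b"abort mission")
assert verify(b"abort mission", sig)                 # genuine: "true"
assert not verify(b"abort mission", (sig + 1) % n)   # altered signature: "false"
```

Only the private-key holder can produce a signature that the public verification accepts, which is
exactly the sender-authenticity guarantee described above.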

[h]Identification technology

The security of communication and data systems often depends on the ability to correctly identify
the individual communication user or terminal. There are two main common ways of identification:
one uses passwords, the other uses badges. Passwords are the most widely used form of
identification; they are generally strings 5–8 characters long consisting of numbers, letters,
special characters, control characters, etc.
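
Passwords should never be stored in plain text; a salted, slow hash is the usual safeguard. A
sketch with Python's standard `pbkdf2_hmac` (the password string is invented):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    # Salted, iterated hash: a stolen password file resists brute force.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt, stored = hash_password("gcs-operator-7!")
assert check_password("gcs-operator-7!", salt, stored)
assert not check_password("wrong-password", salt, stored)
```

The random per-user salt means two users with the same password still get different stored digests.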

[h]Relay communications

In the field of autonomous system communication, relay communication is widely used as an effective
means to extend the communication range. In this section, we take the most widely used UAV relay
communication as an example to explain the relay communication technology.

For UAV applications in various environments, such as reconnaissance and surveillance, communication
assistance, emergency communication, and search and rescue, wireless data transmission between UAVs
and the GCS often suffers from undulating terrain, high buildings, and other factors that block the
direct link. In this case, thanks to its rapid deployment, mobility, and wide communication range, a
relay UAV is adopted to build up indirect connections over multiple hops, as shown in Figure:

Figure :Relay UAV implementation in UAVs swarm.

First of all, throughput and communication range are the two main issues for relay UAVs serving
data transmission. Some researchers optimize the relay UAV position to maximize end-to-end
throughput, while the communication range for more users, including spectrum resources, energy
efficiency, and quality of service (QoS), is considered in relay applications. Moreover, an
iterative suboptimal algorithm has been presented to jointly optimize robust transmit power with
relay UAV speed and acceleration for energy-efficient (EE) mobile relaying networks, and a
mood-driven online learning approach has been illustrated for relay UAV assignment and channel
allocation to maximize the total transmission rate of the network. Multi-rotor UAVs are usually
used as relays, with limited endurance, energy, and mobility, so a collaboration scheme in which
multiple UAVs take turns is necessary to maintain long-duration relaying.

Besides multi-rotor UAVs, fixed-wing relay UAVs, with the advantage of high mobility, can provide
better service over a wide range. In this case, the trajectory of the relay is more important, and
several trajectory-planning methods have been studied. A genetic algorithm has been proposed to
optimize the amount of data transmitted to users as well as the access order and motion trajectory
over user groups, and a path-optimization algorithm has been described for a fixed-wing UAV
relay-assisted communication system based on maximizing the weighted sum of the ergodic capacity of
each state.

In addition, relay task allocation is a typical optimization problem involving trajectory planning,
QoS, and other aspects. A heterogeneous UAV task-assignment model with distributed online task
allocation based on the extended CBBA algorithm has been investigated, while deployment strategies
with a distributed game-theory-based scheduling method have been discussed to maximize stationary
coverage and guarantee continuity of service. Some advanced optimal solutions have also been
studied, such as an automatic generation control strategy in Ref. , a modular relay positioning and
trajectory planning algorithm in Ref. , and so on.
With the growing civil and military applications of UAV swarms, swarm networking and relay
cooperative communication have become popular topics. Trajectory planning for dynamically deploying
relay UAVs one by one for continuous communication is the main issue: a joint optimization of
multi-hop UAV trajectories and transmit power to maximize end-to-end throughput has been proposed
with obstacle-avoidance capability. Energy efficiency is another critical issue; an aerial
backbone-network scheme assisted by relays connecting the GCS and core networks has been presented.
Moreover, multiple relay UAVs can cooperate to assist the swarm network: cooperative UAV relaying
has been investigated to improve the capability of the communication network, and a UAV
relay-selection joint game model together with a distributed fast UAV relay cooperative formation
algorithm has been shown to optimize the EE of UAV swarms.

How to design a relay planning framework for AVs' applications is a hot topic today. According to
the task schedules and task regions of the UAVs in the swarm, we propose a framework for relay UAV
task planning as shown in Figure. In this framework, two main processes are designed: initial
deployment and optimization of the task planning of relay UAVs. It is assumed that both relay UAVs
(RUs) and mission UAVs (MUs) in the swarm are the same type of fixed-wing UAV.
Figure :Relay planning framework for AVs’ application.

Based on the mission planning of the swarm, the first process analyzes the distribution of task
points or regions of each MU and determines when and where a relay is required to assist the data
link between mission UAVs and the GCS. In this step, a mixed-integer nonlinear programming (MINLP)
method is adopted for the global deployment of relay UAVs, solving for the rough number of relays to
deploy and their approximate access locations for service. In the next process, the CBBA method is
applied to optimize the relaying resource allocation so as to arrange as few RUs as possible. In the
worst case, all MUs demand relays to connect to the GCS at the same time.
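
As a back-of-the-envelope stand-in for the deployment step, the sketch below estimates how many
relay UAVs a single over-range link needs when relays are spaced evenly along a straight line (the
distances and ranges are invented):

```python
import math

def relays_needed(distance_km: float, link_range_km: float) -> int:
    # Hops needed to cover the distance; intermediate hops are relay UAVs.
    hops = math.ceil(distance_km / link_range_km)
    return max(hops - 1, 0)

assert relays_needed(45, 10) == 4   # 5 hops -> 4 intermediate relays
assert relays_needed(8, 10) == 0    # direct MU-GCS link suffices
```

The real framework must additionally respect terrain blockage, RU endurance, and concurrent demands
from multiple MUs, which is what turns this counting exercise into a MINLP.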

[h]Network architecture for autonomous vehicles

Network communication is an important guarantee of information interaction while AVs cooperate to
accomplish their tasks. In this section, we start with typical network structures, analyze the
characteristics of AVs' networking requirements, and put forward some suggestions for AVs network
architecture design.

[h]Star network topology

A star topology, also known as a central radiating topology, uses a central node to connect other nodes
in a “one-to-many” fashion, as shown in Figure below. Unlike bus topologies that simply broadcast
transmitted frames to all connected endpoints, star topologies use components with additional built-in
levels of intelligence. The central node maintains dynamic media access control and data traffic
forwarding for each node in a star topology deployment.
Figure :Star network structure.

The structural characteristics of the star topology are as follows:

 Simple control: any site connects only to the central node, so the medium access control
method is simple and the access protocol is very simple, which also makes the network easy to
monitor and manage.
 Easy fault diagnosis and isolation: The central node can isolate the connected lines one by one
for fault detection and location, and the fault of a single connection point affects only one
device and does not affect the whole network.
 Convenient services: The central node can easily provide services and network reconfiguration
to individual sites.

Take a group of UAVs forming a star network as an example. In a star network, each UAV establishes a
connection with the central node; there is no direct communication between UAV nodes, which instead
rely on the central node for relay and forwarding services. As shown in Figure, a multi-star network
consists of multiple star networks, with one node in each group connected to a ground station.

Figure :Multi-star network structure.

[h]Ring network topology

Data in a ring can be transmitted in only one direction, and the delay of information at each
device is fixed, which makes the ring especially suitable for real-time control LAN systems. As
shown in Figure, the ring structure is like a string of pearls: each computer on the ring is one
bead on the necklace.
Figure :Ring network structure.

The network characteristics of the ring topology are as follows:

 This network structure generally applies only to the IEEE 802.5 token-ring network, in which
“tokens” are passed sequentially along a fixed direction around the ring. There is only one
path between any two nodes, which simplifies path selection.
 Control on the loop is distributed: each node takes part in access control.
 Since information passes serially through every node on the loop, transmission takes a long
time when the number of nodes is large.
 Loops are closed and not easily expandable.
 A single node failure will bring the whole network down.
 It is difficult to locate faults at branch nodes.

[h]Tree network topology

A tree topology is a hierarchical structure in which nodes are linked and arranged like a tree, as
shown in Figure. A tree topology can generally be divided into three layers: core, distribution,
and access. At the top is the core layer, the “root” of the tree, which provides high-speed
transport from the current network to others. In the middle is the distribution layer, which
aggregates traffic toward the core and also enforces access control and QoS policies. The access
layer is at the bottom, where endpoint devices or users connect.
Figure :Tree structure for network.

The network characteristics of the tree topology are as follows:

 Simple network structure, easy to manage


 Simple control, easy network building, and easy expansion. The tree structure can be extended
with many branches and subbranches, and these new nodes and new branches can be easily
added to the network.
 Short network latency and low bit error rate
 Fault isolation is easier. If a node or line in a branch fails, it is easy to isolate the failed branch
from the entire system.
 Poor network sharing capability
 Underutilization of communication lines
 The root node is too heavily loaded and the individual nodes are too dependent on the root node.
If the root fails, the whole network does not work properly.

[h]Mesh network topology

A mesh topology is a nonhierarchical structure in which each network node is directly connected
to all other nodes, as shown in Figure below. A mesh topology provides great resilience: if one
connection fails, no disruption or loss of connectivity results, because traffic is simply rerouted
along a different path.
Figure :Mesh network structure.

The structural characteristics of the mesh network topology are as follows:

 The network is highly reliable, and generally, there are two or more communication paths
between any two node switches in the communication subnet, so that when one path fails,
information can still be sent to the node switch through the other path.
 Networks can be formed in a variety of shapes, using a variety of communication channels and
a variety of transmission rates
 Easy to share resources among nodes in the network
 Improved information traffic distribution on the line
 Optimal path selection and low transmission latency
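
The resilience property can be checked with a tiny connectivity test: in a full mesh, removing any
single node leaves the rest connected (the node names are invented):

```python
from itertools import combinations

def connected(nodes, edges):
    """Depth-first search: True if every node is reachable from any one."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        for a, b in edges:
            if u in (a, b):
                v = b if u == a else a
                if v in nodes and v not in seen:
                    stack.append(v)
    return seen == set(nodes)

nodes = {"A", "B", "C", "D"}
mesh = {tuple(sorted(p)) for p in combinations(nodes, 2)}  # full mesh: 6 links

for failed in nodes:                    # fail each node in turn
    rest = nodes - {failed}
    live = {edge for edge in mesh if failed not in edge}
    assert connected(rest, live)        # traffic can always be rerouted
```

A star would fail this test as soon as the hub node is removed, which is exactly the trade-off
between the two topologies.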

A hierarchical grid network removes the central node and enables all nodes to connect, as shown in
Figure. A hierarchical mesh network has multiple grids, and one node in each group is able to reach
the other groups. All nodes in the hierarchical grid network can self-organize: when one node fails,
the remaining nodes rebuild the network. For some practical applications, the hierarchical grid
network is the more suitable choice for a multi-AV system.
Figure :Hierarchical network structure.

[h]Analysis of the demand characteristics of AVs’ networking

Large-scale AV applications are characterized by large numbers, wide range, high speed, flexible
mobility, frequently changing spatiotemporal relationships, changeable tasks, and cross-regional
scheduling, which pose great challenges in clarifying the information transfer between systems, the
network structure, and dynamic optimization. Based on the authors' previous studies, we analyze and
summarize the main problems and challenges in current research on AVs' networking as follows.

[h]Insufficient correlation between network architecture design and AVs cluster tasks

Current network planning and communication studies of large-scale AVs are disconnected from cluster
task requirements and lack an understanding of the intrinsic connection between task behavior and
information transfer at the network layer. The cluster network of large-scale AVs is a typical
complex network; its design should start from the typical task characteristics of the cluster,
establish a model of individual behavior and of information transfer between individuals, and
thereby derive the network generation method. Most current research abstracts the AV task process
as a behavior model with certain probability-distribution characteristics and reduces the various
types of concurrently interacting information between nodes to a single transmission-capacity
value, which detaches network architecture theory from cognition of the cluster task. At the link
and communication layer, a large amount of research focuses on communication waveform design,
channel design, and access technology, while the types of information services and their
transmission characteristics during the cluster task process are not considered, from problem
formulation through to the testing process. Therefore, there is an urgent need to explore the
associated transfer mapping and characterization methods from the cluster task domain to the
information domain and to let the task process and its information requirements guide the design
and construction of the cluster network architecture.

[h]The contradiction between loose organizational structure and close communication relationship

The contradiction between the loose cluster organization in the AVs network and the close
communication among the multiple nodes involved in collaborative tasks poses a big challenge to the
construction of the cluster network structure. On the one hand, a cluster of autonomous vehicles is
“task/sub-task-centric,” and each AV can be scheduled and assigned online, resulting in flexible
entry and exit of nodes and rapid merging and splitting of subnetworks, which leave the nodes
loosely coupled in organizational structure. On the other hand, during the task process,
information interaction among nodes working on the same subtask is frequent, information transfer
between subnets in a collaborative task is close, and each link of the task is tightly coupled with
information quality; meanwhile, the movement and state of each participating node affect data-link
stability and transmission quality, tightly coupling the task participants with the information
transfer. The loose task organization and behavior of the cluster thus greatly challenge the close
information interaction required between nodes during task execution. In this regard, it is
necessary to analyze the characteristics of AVs' group behavior in depth, explore the coupling
mechanism between task behavior and information service, and propose a network generation method in
which the link, network, transmission, and application layers are mutually coupled.

[h]Conflict between task-oriented planning and communication network optimization

In such a network architecture, each AV node must balance mission and communication; there is a
contradiction between mission replanning and network optimization in complex dynamic scenarios, and
network robustness and transmission stability face challenges. In complex mission scenarios, such
as low-altitude close-range reconnaissance and surveillance or earthquake-relief emergency
communication, the mission may be replanned as the situation develops (mission reassignment, route
replanning, etc.), and the organizational relationships of the cluster change accordingly; at the
same time, a few AVs may be lost to loss of control or destruction, changing the network topology
and requiring link reconfiguration, topology adjustment, and routing optimization. How to ensure
continuous and stable network information delivery during such dynamic changes is an optimization
problem. However, each AV in the cluster is both a task performer and a network participant, and
simultaneously planning and constraining AV behavior from the two different dimensions of network
optimization and task execution is an optimization challenge that cannot be solved in isolation. In
this regard, the dynamic evolution of the network needs to be studied in terms of the role
assignment and role changes of each node in the task, as well as the time-varying task correlations
among nodes.

[h]Design thinking of AVs networking for task-oriented process

Based on a comprehensive analysis of the main problems and challenges, this paper proposes an idea
of small-world network generation for AVs clusters based on task cognition, which mainly solves two
problems. First, it starts by analyzing the task flow and recognizing the information and
communication requirements, and constructs a dynamic diagram of the group's internal topological
associations as the task timeline develops. Second, it constructs a multi-layer network structure
combining “self-organized multi-hop transmission and cooperative relay communication” for the
data-transfer characteristics of the group subtasks, and establishes a network organization method
and mechanism that adapts to subtask increase/decrease and node loss, so as to better support the
cooperative tasks of AVs clusters.

The main idea of the task-oriented AVs network planning and generation system design is as follows:
in actual application, AVs exhibit local convergence and global dispersion (or overall global
convergence) centered on a target or specific task, and the information transfer and interaction
between individuals within the cluster must meet the timeliness requirements of the task.

As shown in Figure, the first layer is the application layer, which mainly solves the problem of
cognition from the cluster task domain to the information domain. It can adopt an information-flow
description, establish a characterization method and model from the cluster task domain to the
network information domain, and obtain the information cross-linking relationships within the
cluster. The difficulties are that there are many types of cluster tasks, single-vehicle
independent tasks and multi-vehicle collaborative tasks are intertwined and can execute in
parallel, and the information-service requirements change as the task advances in time; moreover,
the granularity of task decomposition is closely coupled with the information-flow description,
making it difficult to accurately decompose cluster tasks and establish a dynamic information-flow
model between nodes.

Figure :AVs network generation method based on missions.

The second layer is the network layer, which mainly solves the problem of network generation and
evolution adapted to the cluster task characteristics. Based on the information association relationship
of each node in the cluster established in the first layer, the logical topology relationship of the network
is constructed and the initial topology diagram of the network is formed.

The third layer is the link layer, considering the physical characteristics of the link, and combining the
idea of “multi-hop and relay,” proposing a network-structured design and generation method to guide
the construction of network links, and establishing a dynamic network link reconfiguration method to
cope with sudden changes in network topology in complex environments.
[h]Data transmission for autonomous vehicles

Data transmission between multiple autonomous vehicles is a typically distributed information
transmission mode. There are two kinds of information transmission in current distributed systems:
one establishes a direct point-to-point communication mode; the other is an indirect communication
mode in which the information producer and consumer are decoupled.

[h]Inter-process communication

Inter-process communication refers to the relatively low-level communication methods used between
processes in distributed systems, including message-passing primitives, direct access to the APIs
provided by network protocols (socket programming), and support for multicast communication.

[h]Remote procedure call, RPC

RPC is the most common communication paradigm in distributed systems and consists of a set of
techniques based on bi-directional exchange between communicating entities in a distributed system,
including remote operations, procedures, or methods. The most common ways of RPC are request-
response mode, remote procedure invocation, and remote method invocation.
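
The request-response pattern can be sketched without any real transport; the JSON wire format and
procedure names below are invented, not those of a specific RPC library:

```python
import json

# Minimal request-response RPC sketch: the "server" dispatches a named
# procedure; the "client" stub marshals arguments and unmarshals the result.
PROCEDURES = {"add": lambda a, b: a + b,
              "battery_ok": lambda pct: pct > 20}

def handle_request(raw: str) -> str:        # server side
    req = json.loads(raw)
    result = PROCEDURES[req["method"]](*req["params"])
    return json.dumps({"result": result})

def call(method: str, *params):             # client-side stub
    raw = json.dumps({"method": method, "params": params})
    return json.loads(handle_request(raw))["result"]   # transport omitted

assert call("add", 2, 3) == 5
assert call("battery_ok", 15) is False
```

A real RPC system inserts a network between `call` and `handle_request` while keeping exactly this
marshaling shape, which is why the remote call looks like a local one to the caller.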

[h]Remote method invocation, RMI

RMI is very similar to RPC but applies to distributed-object environments: an object that initiates
a call can invoke a method on a remote object. As with RPC, the underlying details are hidden from
users. In Java, for example, an interface that extends java.rmi.Remote defines the remote methods;
a class implementing it becomes a remote object that lives on the server side, accessible to
clients and providing certain services.

[h]Indirect communication

A characteristic of indirect communication is the use of a third entity, which allows deep
decoupling between the sender and the recipient. Kafka, for example, is a typical
indirect-communication technology: it is usually considered a message-queue implementation but can
also be used as a publish-subscribe system.

Indirect communication is typically considered for handling two main scenarios:

 Spatial decoupling: senders do not need to know who they are sending to
 Time decoupling: sender and receiver do not need to exist at the same time

The key technologies of indirect communication mainly include publish-subscribe systems, message
queues, distributed shared memory (DSM), tuple spaces, and group communication, among which
publish-subscribe systems and message queues are the most widely used in ROS and other
applications.
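
A minimal in-process publish-subscribe bus makes both kinds of decoupling concrete (the topic name
and payload below are invented):

```python
from collections import defaultdict, deque

class Bus:
    """Toy publish-subscribe bus: publishers and subscribers share only a
    topic name (spatial decoupling), and queued messages can be delivered
    after the publisher has gone away (temporal decoupling)."""
    def __init__(self):
        self.queues = defaultdict(deque)
        self.subscribers = defaultdict(list)

    def publish(self, topic, message):
        self.queues[topic].append(message)        # no receiver needed yet

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def deliver(self, topic):
        while self.queues[topic]:
            msg = self.queues[topic].popleft()
            for cb in self.subscribers[topic]:
                cb(msg)

bus = Bus()
bus.publish("gps", {"lat": 31.2, "lon": 121.5})   # publisher finishes first
received = []
bus.subscribe("gps", received.append)             # subscriber joins later
bus.deliver("gps")
assert received == [{"lat": 31.2, "lon": 121.5}]
```

The publisher returned before the subscriber even registered, yet the message was still delivered:
the queue in the middle is the third entity providing the decoupling.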

[h]DDS (Data Distribution Service)

Data Distribution Service (DDS) is a standard data-centric publish-subscribe programming model and
specification for distributed systems, designed to meet the performance and hard real-time
requirements of data-centric distributed applications. DDS can control service behavior through
quality-of-service (QoS) policies and effectively supports complex data communication models.

[h]DDS specifications

The DDS standard consists of two separate parts. The first is data-centric publish-subscribe
(DCPS), which handles data-centric publishing and subscribing; applications use this layer to
communicate with each other. The second is the data local reconstruction layer (DLRL), an optional
object-oriented layer that sits on top of DCPS, abstracts the lower services, and establishes
mapping relationships.

DDS uses standard software application programming interfaces (APIs) to provide an infrastructure for
communication between various applications and can be quickly added to any software application.

[h]DCPS communication mechanism

Data is transferred within a domain, and a node can host publishers, subscribers, or both. The
publisher owns and manages data writers, and the subscriber owns and manages data readers. A data
reader and a data writer must be associated through the same topic and compatible QoS policies so
that data published by the data writer can be received by the subscribed data reader, as shown in
Figure:

Figure :Publishing and subscription programming model.

Domain: A domain represents a logically isolated communication network. Applications that use DCPS
to exchange data must belong to the same domain; entities belonging to different domains never
exchange data. Domain participant: A domain participant is the entry point through which an
application interacts in a particular domain and a factory for the objects that write or read data.

Topic: A topic is a method of publish/subscribe interaction and consists of a topic name and topic type.
The topic name is a string that uniquely identifies the topic within the domain. A topic type is a
definition of the data that a topic contains. Each topic data type can specify its key to distinguish
different instances of the same topic. In the DCPS communication model, a connection can be
established only when the topics of the data writer and the data reader match each other.

Data writer: The application passes values to DDS through a data writer. Each data writer is bound
to a specific topic, and the application publishes samples on that topic using an interface of the
type specified by the data writer. The data writer encodes the data and passes it to the publisher
for transmission.

Publisher: The exact mechanism used to capture published data and send it to the relevant subscribers
in the domain is determined by the implementation of the service.

Subscriber: Receives a message from the publisher and delivers it to any associated data reader
connected to it.

Data reader: Gets data from the subscriber, decodes the topic into the appropriate type, and
delivers the samples to the application. Each data reader is bound to a specific topic, and the
application uses a type-specific interface to the data reader to receive samples easily.

[h]QoS policy

The fine control of real-time QoS is one of the most important features of DDS. DDS defines multiple
QoS policies, including reliability, bandwidth control, send cycle, and resource restriction. Each
publisher/subscriber can establish an independent QoS protocol, so DDS designs can support
extremely complex and flexible data flow requirements. It should be noted that these policies can be
applied to all entities in DCPS, but not all policies will work for every entity type. The match between
publisher and subscriber is done using the request-offered (RxO) mode. In this pattern, the publisher
“provides” a set of QoS policies, the subscriber “requests” a set of required QoS policies, and the
middleware is responsible for determining whether the provided policies match the requested policies,
thereby establishing communication or indicating incompatibility errors.
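
The RxO check can be sketched for a single policy (a conceptual illustration in Python, not a DDS API; real middleware compares many policies, such as reliability, durability, and deadline, in the same offered-versus-requested fashion):

```python
# Illustrative request-offered (RxO) compatibility check for the
# reliability QoS policy. Policy values are ordered by strength:
# an offer is compatible when it is at least as strong as the request.

BEST_EFFORT, RELIABLE = 0, 1  # increasing strength

def rxo_compatible(offered, requested):
    """The publisher 'offers' a policy, the subscriber 'requests' one;
    the middleware establishes communication only if offered >= requested,
    otherwise it reports an incompatibility error."""
    return offered >= requested

# A RELIABLE offer satisfies a BEST_EFFORT request...
assert rxo_compatible(offered=RELIABLE, requested=BEST_EFFORT)
# ...but a BEST_EFFORT offer cannot satisfy a RELIABLE request.
assert not rxo_compatible(offered=BEST_EFFORT, requested=RELIABLE)
```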

[h]Discovery process

In DDS, publishers and subscribers do not need to specify each other's number and location; the
application sends samples of a topic one at a time, and the middleware distributes the samples to all
applications interested in that topic. In addition, new publications and subscriptions for topics may
appear at any time, and the middleware automatically interconnects them. DDS realizes this
mechanism by disseminating list information between applications, a procedure called the
“discovery” process.

[h]Features of DDS

The advantages of the system structure of DDS are summarized as follows:

- The concept of a global data space is introduced to improve communication efficiency.
- The data-centric design reduces network delay.
- QoS is used to control service behavior, which increases communication flexibility.
- The UDP/IP protocol is adopted to increase network throughput.
- Dynamic configuration improves data transmission capability.
[h]Message queues

Message-oriented middleware, also known as a message queue server, is a technology often used in today's
distributed application architectures as a way for programs to communicate asynchronously: the sender of
a message does not have to wait for the message to finish processing, but instead hands the message to the
middleware and returns, while the designated consumers subscribe to the messages and process them. In
the message queue model, the message producer puts messages into a queue and the message consumer
takes messages from the queue. In the publish-subscribe model, the message producer publishes messages
to the queue of a specified topic and the message consumer subscribes to that topic's queue; when a new
message arrives on the subscribed topic, the consumer can fetch it by pull, or the message middleware can
deliver it by push.
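
The two models can be contrasted with a tiny in-memory broker (a conceptual sketch; the class and method names are made up, not a real MQ client API):

```python
from collections import defaultdict, deque

# Illustrative in-memory broker contrasting the queue model (each
# message consumed exactly once) with publish/subscribe (every
# subscriber gets its own copy). Not a real MQ client API.

class Broker:
    def __init__(self):
        self.queues = defaultdict(deque)

    # --- message queue model: each message is consumed exactly once ---
    def send(self, queue, msg):
        self.queues[queue].append(msg)

    def receive(self, queue):
        """Pull-style consumption: take the next message, if any."""
        q = self.queues[queue]
        return q.popleft() if q else None

    # --- publish/subscribe model: every subscriber gets its own copy ---
    def publish(self, topic, msg, subscribers):
        for sub in subscribers:
            # one queue per (topic, subscriber) pair
            self.queues[topic + "/" + sub].append(msg)

broker = Broker()
broker.send("orders", "order-1")
broker.receive("orders")                      # -> "order-1", consumed once

broker.publish("order-events", "created", ["billing", "shipping"])
broker.receive("order-events/billing")        # -> "created"
broker.receive("order-events/shipping")       # -> "created", its own copy
```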

As shown in Figure, the first modern message queue software, The Information Bus (TIB), was
developed in 1982, and three years later IBM's message queue product family, IBM MQ, was released;
the MQ family later evolved into WebSphere MQ, which came to dominate the commercial message
queue platform market. The year 2001 saw the birth of the Java Message Service (JMS), which hides
the implementation interfaces of individual MQ vendors behind a common Java API, thus spanning
different MQ products and solving interoperability problems. Later, the Advanced Message Queuing
Protocol (AMQP) was created; it uses a standard set of underlying protocols and adds many other
features to support interoperability. Currently, open-source message queuing middleware is
proliferating, the more popular products being ActiveMQ, RabbitMQ, Kafka, and Alibaba's RocketMQ.

Figure :Development of message queues.

[h]ActiveMQ

ActiveMQ, produced by Apache, is among the most popular and powerful open-source message buses.
It is a JMS provider that fully implements the JMS 1.1 and J2EE 1.4 specifications and is designed to
provide efficient, scalable, stable, and secure enterprise-class messaging for applications, as shown in
Figure :

Figure :The process of active message queues.

The ActiveMQ client uses the ConnectionFactory object to create a connection through which
messages are sent to and received from the messaging service. Connection is the active connection
between the client and the messaging service. When the connection is created, communication
resources are allocated and the client is authenticated. This is a fairly important object, and most clients
use a connection for all messaging. A connection is used to create a session, which is a single-threaded
context for generating and using messages. It is used to create producers who send and consumers who
receive messages and to define the delivery order for the messages sent. Sessions support reliable
delivery through a large number of acknowledgment options or through transactions.

The client sends messages to a specified physical target through a MessageProducer, which can set a
default delivery mode, priority, time-to-live value, and other factors that control all messages. Likewise,
the client receives messages from a specified physical target through a MessageConsumer. The
consumer can use a message selector, which the messaging service evaluates so that only messages
matching the selection criteria are delivered. Consumers can receive messages either synchronously or
asynchronously.
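
The ConnectionFactory, Connection, Session, producer, consumer sequence described above can be sketched as a schematic in-memory model (JMS is a Java API; the Python classes below only mirror the object hierarchy and are not the real interfaces, and the broker URL is an illustrative assumption):

```python
from collections import defaultdict, deque

# Schematic model of the JMS object hierarchy used by ActiveMQ
# clients: ConnectionFactory -> Connection -> Session -> Producer/Consumer.
# Purely illustrative; not the real JMS interfaces.

class ConnectionFactory:
    def __init__(self, broker_url):
        self.broker_url = broker_url
        self.destinations = defaultdict(deque)  # stands in for broker state

    def create_connection(self):
        # In JMS, communication resources are allocated and the
        # client is authenticated when the connection is created.
        return Connection(self)

class Connection:
    def __init__(self, factory):
        self.factory = factory

    def create_session(self):
        # A session is a single-threaded context for producing
        # and consuming messages.
        return Session(self.factory)

class Session:
    def __init__(self, factory):
        self.factory = factory

    def create_producer(self, destination):
        return MessageProducer(self.factory, destination)

    def create_consumer(self, destination):
        return MessageConsumer(self.factory, destination)

class MessageProducer:
    def __init__(self, factory, destination):
        self.factory, self.destination = factory, destination

    def send(self, message):
        self.factory.destinations[self.destination].append(message)

class MessageConsumer:
    def __init__(self, factory, destination):
        self.factory, self.destination = factory, destination

    def receive(self):
        """Synchronous receive: next message on the destination, if any."""
        q = self.factory.destinations[self.destination]
        return q.popleft() if q else None

factory = ConnectionFactory("tcp://localhost:61616")  # illustrative URL
session = factory.create_connection().create_session()
session.create_producer("queue.test").send("hello")
session.create_consumer("queue.test").receive()       # -> "hello"
```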

[h]RabbitMQ

RabbitMQ is messaging middleware implementing AMQP (Advanced Message Queuing Protocol),
written in Erlang. It originated in financial systems and is used in distributed systems for storing
and forwarding messages, thanks to its outstanding ease of use, scalability, reliability, and high
availability.

The basic components of RabbitMQ and its workflow are shown in Figure:
Figure :The process of RabbitMQ.

Broker: The entity server of RabbitMQ. It provides a transport service that maintains a transport line
from the producer to the consumer, ensuring that message data is transmitted in the specified manner.

Exchange: The message switch. It routes messages to queues according to configured rules.

Queue: Message queue. The carrier of messages; each message is delivered to one or more queues.

Binding: The effect is to bind exchange and queue according to some routing rules.

Routing Key: A routing key by which the exchange delivers messages. The key specified when
defining the binding is called the binding key.

Vhost: Virtual host. A broker can have multiple virtual hosts, which are used as a separation of
privileges for different users. A virtual host holds a set of exchange, queue, and binding.

Producer: Message producer. Mainly delivers messages to the corresponding exchange. It is usually a
standalone program.

Consumer: Message consumer. Receiver of messages, usually a stand-alone program.

Connection: TCP long connection between producer, consumer, and broker.

Channel: Message channel, also known as a channel. Multiple channels can be created in each
connection of the client, each channel represents a session task. In the RabbitMQ Java Client API,
there are a large number of programming interfaces defined on the channel.
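
How exchange, queue, binding, and routing key fit together can be illustrated with a tiny in-memory model of a direct exchange (a conceptual sketch in Python, not the AMQP protocol or any RabbitMQ client API; all names are illustrative):

```python
from collections import defaultdict, deque

# Illustrative model of direct-exchange routing in RabbitMQ terms:
# a binding ties an exchange to a queue under a binding key, and a
# message is delivered to every queue whose binding key equals the
# message's routing key. Conceptual sketch only.

class Exchange:
    def __init__(self):
        self.bindings = defaultdict(list)  # binding key -> bound queues

    def bind(self, binding_key, queue):
        self.bindings[binding_key].append(queue)

    def publish(self, routing_key, message):
        # direct exchange: deliver where binding key == routing key
        for queue in self.bindings.get(routing_key, []):
            queue.append(message)

logs = Exchange()
error_q, all_q = deque(), deque()
logs.bind("error", error_q)   # error_q only wants errors
logs.bind("error", all_q)     # all_q wants errors...
logs.bind("info", all_q)      # ...and info messages too

logs.publish("error", "disk failure")  # reaches error_q and all_q
logs.publish("info", "started")        # reaches all_q only
```

A topic exchange would generalize the equality test to pattern matching on dot-separated routing keys; the binding structure is the same.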

[h]RocketMQ

RocketMQ is distributed messaging middleware open-sourced by Alibaba in 2012, donated to the
Apache Software Foundation, and made an Apache top-level project on September 25, 2017. As
homegrown middleware that has withstood the baptism of Alibaba's “Double 11” shopping festival
with stable and outstanding performance, it has been adopted by more and more domestic enterprises
in recent years for its high performance, low latency, and high reliability.
The basic components of RocketMQ and its workflow are shown in Figure :

Figure :The process of RocketMQ.

Producer: Message producers are responsible for producing messages; the MQ load-balancing module
selects the appropriate broker cluster queue to which each message is delivered, with support for fast
failure and low latency. All message producers in RocketMQ exist in the form of producer groups. A
producer group is a collection of producers of the same type, which send messages of the same topic
type; a producer group can send messages for multiple topics at the same time.

Consumer: Message consumer, responsible for consuming messages. A message consumer gets the
message from the broker server and performs the related business processing on the message; message
consumers in RocketMQ are in the form of consumer groups. A consumer group is a collection of
consumers of the same type, and such consumers consume messages of the same topic type.

NameServer: It is a registration center for broker and topic routes, and mainly contains two parts,
which are as follows:

- Broker management: It accepts registration information, which is saved as the basic data of
routing information, and provides a heartbeat-detection mechanism to check whether brokers are
alive.
- Routing information management: Every NameServer holds the entire routing information of the
broker cluster, along with the queue information that clients query; producers and consumers
obtain the routing information of the whole broker cluster through the NameServer in order to
deliver and consume messages.

Broker: The broker acts as a message relay, storing and forwarding messages, and is responsible for
receiving and storing messages from producers in the RocketMQ system and preparing them for pull
requests from consumers.
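
The NameServer's two roles, broker liveness tracking and route lookup, can be sketched as a small in-memory model (all names, timeout values, and timestamps below are illustrative assumptions, not RocketMQ APIs):

```python
from collections import defaultdict

# Illustrative model of the NameServer roles described above:
# broker management (registration + heartbeat liveness) and
# routing-information lookup for producers and consumers.

HEARTBEAT_TIMEOUT = 30.0  # seconds without a heartbeat => broker presumed dead

class NameServer:
    def __init__(self):
        self.brokers = {}               # broker name -> last heartbeat time
        self.routes = defaultdict(set)  # topic -> broker names

    def register_broker(self, name, topics, now):
        self.brokers[name] = now
        for t in topics:
            self.routes[t].add(name)

    def heartbeat(self, name, now):
        if name in self.brokers:
            self.brokers[name] = now

    def lookup(self, topic, now):
        """Return the live brokers serving a topic (what a producer or
        consumer queries before delivering or consuming messages)."""
        return sorted(b for b in self.routes.get(topic, set())
                      if now - self.brokers[b] < HEARTBEAT_TIMEOUT)

ns = NameServer()
ns.register_broker("broker-a", ["orders"], now=0.0)
ns.register_broker("broker-b", ["orders", "payments"], now=0.0)
ns.heartbeat("broker-a", now=40.0)   # broker-b misses its heartbeats

ns.lookup("orders", now=41.0)        # -> ["broker-a"] (broker-b expired)
```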

[h]Kafka

Kafka was first developed as a distributed publish/subscribe-based messaging system by LinkedIn
Corporation and later became a top-level Apache project.
The basic components of Kafka and its workflow are shown in Figure :

Figure :The process of Kafka.

Producer: The producer of messages, which after production must deliver each message to a specified
destination (a partition of a topic). The producer can choose which partition to publish to according to
a specified partition-selection algorithm or at random.

Consumer: Kafka also has the concept of a consumer group, a logical grouping of consumers. Because
each Kafka consumer is a process, the consumers in a consumer group may well be different processes
distributed across different machines.

Broker: The main server used to store messages; it supports horizontal scaling (the more brokers, the
higher the cluster throughput). Message storage is divided by topic+partition: the offset of each
message within a particular topic/partition is stored together with the message's timestamp, and when
a message reaches its expiration time (configurable on the server), it is automatically deleted to free
up space, whether it has been consumed or not.

ZooKeeper: The broker side does not maintain the consumption state of the data and delegates it to
ZooKeeper, which improves performance.

The main features of ActiveMQ, RabbitMQ, Kafak, and RocketMQ are compared in the following
Table.

Application
  ActiveMQ: Mature.
  RabbitMQ: Mature.
  RocketMQ: Used in a large number of applications within the Ali Group, generating massive
  messages every day; it has successfully supported many Tmall Double Eleven massive-message
  tests and is a powerful tool for data peak shaving and valley filling.
  Kafka: More mature in the logging space.

Community activity
  ActiveMQ: Medium-high.
  RabbitMQ: High.
  RocketMQ: High.
  Kafka: High.

Stand-alone throughput
  ActiveMQ: Million level.
  RabbitMQ: Million level.
  RocketMQ: Ten-million level (less than Kafka).
  Kafka: Ten-million level (highest).

Timeliness
  ActiveMQ: Millisecond.
  RabbitMQ: Microsecond.
  RocketMQ: Millisecond.
  Kafka: Millisecond.

Availability
  ActiveMQ: High; master–slave-based architecture for high availability.
  RabbitMQ: High; master–slave-based architecture for high availability.
  RocketMQ: Very high; distributed architecture.
  Kafka: Very high; distributed, with multiple copies of the data, so a few machines going down
  causes no data loss and no unavailability.

Supported protocols
  ActiveMQ: OpenWire, STOMP, REST, XMPP, AMQP.
  RabbitMQ: AMQP.
  RocketMQ: Own defined set (the community provides JMS; immature).
  Kafka: Does not follow the standard MQ interface protocol; relatively complex to use.

Message loss
  ActiveMQ: There is a low probability of data loss.
  RabbitMQ: There is a low probability of data loss.
  RocketMQ: Zero loss achievable with optimized parameter configuration.
  Kafka: Zero loss achievable with optimized parameter configuration.

Feature support
  ActiveMQ: Extremely full-featured in the MQ domain.
  RabbitMQ: Erlang-based strong concurrency, excellent performance, very low latency.
  RocketMQ: Fairly complete MQ functionality; distributed and scalable.
  Kafka: Supports simple MQ functions; used on a large scale for real-time computing and log
  collection in the big data field.

Impact of topic number on throughput
  RocketMQ: Topics can reach the hundreds/thousands level with only a small drop in throughput,
  a major advantage that allows a large number of topics on the same machine.
  Kafka: When topics grow from tens to hundreds, throughput drops dramatically; on the same
  machine Kafka tries to keep the number of topics low, and supporting a large number of topics
  requires adding more machine resources.

Table :Comparison of message queue features of commonly used distributed systems.

[mh] Key factors to be considered when designing data transmission

Communication middleware is a data transmission platform that isolates application-layer components
from the traditional communication architecture, network details, and operating systems. Through
layering, it effectively reduces the dependencies between layers and improves the software's
scalability, reusability, portability, and other qualities, making it an important direction for the future
development of radar communication.

The traditional communication middleware based on client/server model mainly focuses on business
decision-making and display, and the data exchange between nodes is low in efficiency and small in
data volume, which cannot meet the transmission requirements of distributed high real-time and large
data volume. Therefore, communication products implemented based on this model have been
gradually replaced. Aiming at the problems of traditional communication middleware, developing a
new generation of real-time communication middleware has become a key task in the field of radar
communication. In 2003, the OMG (the body behind CORBA) developed a new generation of
communication middleware. In the face of market demand, the new generation of communication
middleware must meet the following three requirements:

[h]Support communication between distributed nodes

In a multi-radar distributed system, the requirement for real-time communication middleware is mainly
reflected in the data transmission modes between communication nodes, including one-to-one,
one-to-many, many-to-many, etc., and is chiefly characterized by real-time, high-bit-rate, concurrent
communication.

[h]Support dynamic joining and exiting of communication nodes

The new generation of communication middleware adopts the standard publish/subscribe model and
defines a unified, standard data transmission interface. Because communicating parties publish and
subscribe to data anonymously, it changes the development mode of traditional communication
middleware and realizes the dynamic entry and exit of communication nodes.

[h]Loose coupling of communication nodes

Real-time communication middleware uses the data-centered publish/subscribe mechanism to realize
loose coupling between communication nodes. The middleware sits between the application layer and
the operating-system layer: it provides standard interface services to the layer above and shields the
complex communication details and diversified operating systems below. For each environment, an
operating-system adapter selects the appropriate operating system, decoupling application components
from operating systems.
Chapter 5: UAV Operations and Missions

[mh] Mission Planning and Execution

On 22 March 1993, I was closely watching the big screen in the German Space Control Centre in
Oberpfaffenhofen near Munich, Germany. The Space Shuttle with the Spacelab D2 mission was just
about to lift off, and on board were seven astronauts and equipment for a host of experiments of which
four were from Denmark. I was the responsible investigator for these four experiments. I was nervous.
In the airplane from Copenhagen to Munich the day before, I had told myself that this would either be
my stepping stone to a further career in space physiology and medicine or simply my big Waterloo.
The Danish government had paid millions of kroner in preparation for the studies, and failure was
not an option. The problem, though, was that anything random, which was totally out of my and my
colleagues’ control, could happen at any time and jeopardise our experiments.

The four experiments had been under preparation for some 5 years and supported by Danish space
research grants. Eight people had been full time involved in my laboratory: four engineers and four
medical doctors. Equipment had been developed, tested and adapted for spaceflight, and many nervous
moments had been overcome. Those were exciting times, and now—finally—everything was about to be
launched into space to orbit the Earth at a speed of 28,000 km per hour in a free fall condition inducing
chronic weightlessness for 10 days. I felt tense as the launch countdown approached zero.

A few seconds before launch, two of the three Space Shuttle engines ignited. The third one did not.
Immediately we knew in the control room that something was wrong. One of the engines had not fired,
and within a few seconds, all engines stopped. As water vapour evaporated away from the shuttle, we
waited anxiously. What would happen now?

The ignition had failed, and within the next hour or so, the astronauts were let out, and the launch
postponed until the end of April. In that interim period, we had to repeat some ground tests on the
astronauts in collaboration with several other international teams from Europe and the United States.
Finally, after some additional postponements of the launch, the Space Shuttle Columbia ignited on 26
April and went to space, where it completed a 10-day mission with great success. Almost 90
experiments were expertly conducted. Concerning the four Danish human physiology experiments, the
data were successfully collected, which 2–3 years later led to three scientific papers from our Danish
space medicine and physiology laboratory, DAMEC Research Inc. (Danish Aerospace Medical Centre
of Research Inc.), at the Copenhagen University Hospital.

The successful completion of our experiments on the Spacelab D2 mission in 1993 with the
publications in 1995 and 1996 led to the later expert preparation, implementation and conduction of
seven additional studies in space on the Russian Mir station, Space Shuttle Columbia and International
Space Station. All studies had cardiovascular adaptation aspects, and parabolic flights were used—not
only for preparation of the spaceflight experiments but also to obtain basic science knowledge of
human physiological responses to very acute weightlessness of 20 s.

During changes in posture, blood pressure in humans is continuously and acutely regulated by pressure
reflexes originating from receptors (baroreceptors) in the aorta close to the heart and in the two carotid
(neck) arteries at the base of the skull. In addition, there are pressure sensors in the heart, from which
reflexes also originate to participate in blood pressure control. This blood pressure regulation is for the
central nervous system to make sure that the perfusion pressures to the brain and other organs are
optimal despite the displacement of blood caused by the posture. As an example, the upright posture
displaces blood downwards towards the lower body and the legs away from the head and heart. To
counteract that so that the blood pressure at heart and head level does not fall too much, which would
impede blood supply to the brain, the blood pressure reflexes sense the decrease in pressure and within
a few heart beats initiate an increase in heart rate and constriction of the small arteries in the lower
body. The opposite occurs, when the posture changes from upright to recumbent or supine.

We have for many years investigated, which blood pressure sensors are the most important for
adjusting blood pressure to posture changes. In a whole host of investigations using a combination of
various models as well as short-term weightlessness during parabolic flights, we have aimed at
isolating the effects of some receptors versus others and found that in order for blood pressure and
heart rate to adapt to either the supine or upright posture, the low pressure reflexes originating from the
heart and major central veins are pivotal. Without the inputs from these low pressure heart and venous
reflexes, the new steady state cannot be achieved. These findings have changed our previous
understanding of how the human cardiovascular system adapts to changes in postures and the effects of
gravity.

Since blood pressure is also determined by the amount of fluid in the body, we have used the human
head-out water immersion model to investigate how the volume of fluid and amount of salt is
controlled. By immersing humans in the seated posture to the level of the neck for hours, the fluid- and
salt-excreting mechanisms through the kidneys are stimulated, because the headward shift of blood and
fluid from the lower body to the heart through various mechanisms informs the central nervous system
that the upper body vessels are being overloaded with blood and fluid so that this excess volume must
be excreted. The mechanisms for this are still only partly understood, but the main opinion is that there
is a connection between the heart and kidneys through what is termed the cardio-renal link so that
when the heart chambers are stretched, it initiates a reflex response to the kidneys to excrete more salt
and fluid in the urine.

Our research using the seated head-out water immersion model has shown that not only the cardio-
renal link and associated hormones control the urinary excretion rates of sodium during shifting of
blood and fluid to the upper body but also dilution of the blood with fluid from the tissues plays an
important role. During shift of blood from the lower to the upper body caused by the surrounding water
pressure, fluid is pushed into the circulation from the lower body tissues. This dilution can affect the
kidneys directly as well as release of some kidney-regulating hormones.

Weightlessness in space is a unique condition that cannot be replicated on the ground and where bodily
functions can be studied without the intervening effects of gravity. In space, blood and fluids are
chronically displaced towards the upper body segments (heart and head), and the daily fluctuations
induced by posture changes do not occur. This gives us a unique opportunity to utilise weightlessness
in space for exploring the cardiovascular and fluid volume-regulating systems in the human body.

[h]Experiments: the spacelab D2 mission in 1993

The experiments on the Spacelab D2 mission in 1993 aimed at understanding whether the blood and
fluid shift into the heart during prolonged weightlessness would augment the urinary output of water
and salt, just as we usually observe in ground-based simulation models. In one experiment we aimed at
measuring how much the fluid pressure leading into the heart (central venous pressure, CVP) increases,
which we at that time thought to be the stimulus for control of the urinary fluid and salt excretion. In
another experiment we aimed at monitoring the urine production. It was the hypothesis that an increase
in CVP induced by weightlessness would augment the excretion rate of fluid and salt.

[h]CVP experiment

The CVP experiment was originally planned to be done on two of the Spacelab D2 astronauts, but for
some technical reasons, it was only accomplished in one. The experimental plan was, shortly before
launch, to insert a long catheter with a pressure transducer at its tip through a peripheral arm vein into
the vein leading directly into the right atrial chamber of the heart, and to connect it to a preamplifier
and a recording system. The test astronaut would thus carry the catheter and wear the CVP
monitoring system until 3 hours into the mission following launch. Thereafter the catheter would be
withdrawn. Before the launch of the Space Shuttle, control measurements in different body postures
were performed.

The equipment used for measurements of central venous pressure during the Spacelab D2 mission in
one astronaut is depicted in Figure. The central venous catheter (1) with a pressure transducer placed at
one end and a connector box at the other was plugged into a preamplifier (2) that was connected to a
recording unit (3). A calibration piston (4) could be connected to the reference opening of the catheter
and thus induce predetermined pressure changes on the backside of the transducer membrane. In this
way, it was tested to what degree the calibration characteristics of the pressure transducer might have
changed over time after being inserted into the astronaut. Following the spaceflight, the catheter was
brought back to the investigators and tested for change in drift of the tip transducer.

Figure :Central venous pressure equipment, which flew on the Spacelab D2 mission in 1993.
[h]Urinary excretion experiment

Before the spaceflight, four test astronauts would, over a 4.5-h period, empty their urine bladders in
either supine or seated posture, on an hourly basis after being infused through a peripheral vein with
isotonic saline in an amount of 2% of their body weight. About a week into the flight, the same would
be done following a similar infusion while they thus would be free floating in the space vehicle. Blood
was sampled on ground and during flight for determination of water-, salt- and blood pressure-
regulating hormones. The volume of urine was measured after each void and samples taken for
determination of salt (sodium and potassium) concentrations. In space, the volume of urine was
determined by a urine monitoring system delivered by NASA, which was connected to the toilet. If the
voids were felt by the subjects to be small, the bladders were emptied into bags and returned to Earth
through the trash system on the Shuttle. The saline infusion was conducted over some 20 min by
manually inflating a cuff-pressure system, and the amount varied on an average between 1.7 and 1.8
litres.

[h]Experiments during subsequent missions (1995–2012)

After the Spacelab D2 mission in 1993, our research team in Denmark conducted additional
experiments on various space platforms with one on the Russian space station, Mir, in 1995, where we
monitored urine excretion rates over longer flight periods of up to 6 months by having three test
astronauts collect urine into bags and the volumes measured by a scale system. The idea was to test
whether urine production in weightlessness after intake of an oral water load would be enhanced—just
like we tested the same hypothesis regarding urinary salt excretion on the previous Spacelab D2
mission.

During later space missions, we have conducted several additional studies on the Space Shuttle and the
International Space Station focusing on the cardiovascular adaptation to short (<30 days) and long
(>30 days) duration flights. In particular, we have for this purpose conducted cardiac output
measurements by a non-invasive rebreathing technique, which has been developed for spaceflight.
Cardiac output is the amount of blood injected by the heart into the body per minute, and this variable
is important for understanding the effects of weightlessness on not only cardiac function but also the
vascular condition in general. The hypothesis was tested that the weightlessness-induced increase in
central blood volume would increase cardiac output and at the same time through the cardiovascular
reflexes dilate the arterial resistance vessels to counteract an increase in blood pressure.

Normally, accurate cardiac output estimations require insertion of catheters into the veins and arteries,
which makes the measurement difficult on a routine basis in normal healthy humans. With the non-
invasive foreign gas rebreathing technique developed for spaceflight, such estimations can be done
anytime and anywhere with no harm to the test subject. As indicated in Figure, the tested person
breathes back and forth into a rebreathing bag with a gas mixture, in which a tracer gas (e.g. N2O) is
taken up by the blood flowing through the lungs. The disappearance rate of the tracer gas from the
lungs to blood is detected by an infrared photoacoustic gas analyser connected to the mouthpiece of the
rebreathing person. By knowing the solubility of the gas in the blood, the amount of blood flowing
through the lungs per unit of time can be calculated. This amount is equal to cardiac output. The
measurements take less than 30 s and are pivotal for understanding how cardiac output adapts to
various conditions such as weightlessness in space.
Figure :The principle of foreign gas rebreathing for measurement of cardiac output .
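
As a rough illustration of the underlying calculation: the soluble tracer's concentration in the lung-bag system declines approximately exponentially, C(t) = C0 * exp(-k*t), because blood flowing through the lungs carries it away, and pulmonary blood flow (equal to cardiac output) then follows as Q = V * k / lambda, where V is the gas volume of the system and lambda the blood-gas solubility of the tracer. The sketch below is a simplified version of this idea; all numbers (the solubility coefficient, system volume, and synthetic record) are illustrative assumptions, and a real system must also correct for lung-volume changes, recirculation, and other gas exchange.

```python
import math

# Simplified sketch of the rebreathing calculation (illustrative only).
# The tracer leaves the lung-bag system exponentially, C(t) = C0*exp(-k*t);
# the decay rate k, the system gas volume V, and the blood-gas solubility
# of the tracer together give pulmonary blood flow = cardiac output.

LAMBDA_N2O = 0.47   # approx. blood-gas solubility of N2O at body temperature
V_SYSTEM = 3.0      # litres: rebreathing bag + lung volume (assumed known)

def decay_rate(times, concentrations):
    """Least-squares slope of ln(C) versus t gives the rate constant k."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    den = sum((t - mt) ** 2 for t in times)
    return -num / den

def cardiac_output(times, concentrations):
    """Q (L/min) = V * k / lambda, with k per minute."""
    k = decay_rate(times, concentrations)
    return V_SYSTEM * k / LAMBDA_N2O

# Synthetic 30-second rebreathing record generated with Q = 5 L/min:
q_true = 5.0
k_true = q_true * LAMBDA_N2O / V_SYSTEM          # per minute
times = [i / 60.0 for i in range(0, 31, 5)]      # sample times in minutes
conc = [0.05 * math.exp(-k_true * t) for t in times]
round(cardiac_output(times, conc), 2)            # recovers 5.0 L/min
```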

This methodology is currently being used on the International Space Station for various research
projects and has been developed from a mass spectrometry detection technique for measuring gas
concentrations to using infrared photoacoustic gas detection. This has made it possible to have a much
less voluminous and more user-friendly equipment on board the space station. Also, this technique has
been tested against golden standard invasive clinical techniques, where excellent correlations have
been found.

[h]Microgravity experiment implementation and execution

The cardiovascular experiments that we have conducted in space through the past three decades entail
the following steps: (1) proposal, (2) selection, (3) funding, (4) feasibility assessment, (5)
implementation, (6) execution, (7) data analysis and (8) publication. The unique features about
spaceflight experiments are steps 4–6, because of the limitations of conducting experimental
procedures in space. Furthermore, all of the associated equipment have to be developed and approved
for spaceflight with all the safety aspects taken appropriately into consideration.

[h]Proposal

The first step for performing experiments in space is to develop a proposal and respond to a space
agency solicitation from, e.g. the European Space Agency (ESA) or National Aeronautics and Space
Administration (NASA). ESA’s topics are usually broader than NASA’s, because the latter are mostly
focused on operational aspects. In both cases the proposal formats are rather similar and the subsequent
selection process very much like the way it is done at the national levels with peer review panels and
scientific merit scorings. In order for a proposal to be successful, the following criteria must generally
be fulfilled:

1. Qualified research team with experimental experience from a university or a recognised
company. Most proposals are from universities, and experience in space research is an
advantage but not a requirement.
2. Clearly written proposal, which fulfils the requirements stated in the call.
3. Adherence to all of the rules and instructions—otherwise the proposals will be rejected upon
receipt, and this includes adhering to the stated deadlines for submission.
4. Relevancy for utilising spaceflight, meaning that what is proposed to be measured should be
responsive to the flight environment (e.g. weightlessness).

One thing to keep in mind when writing a proposal is to formulate it so that non-experts in the field can
also get something meaningful out of it, because at the time of selection and thereafter, decision-
makers who may not be scientists might make a judgement as to the appropriateness of spending
resources in space for this particular experiment.

[h]Selection

When a proposal is submitted in response to a research announcement, the first step is that it will be
evaluated by scientists appointed by the space agency or a group of space agencies. The scientists—or
peer reviewers—are usually experts within the field of the topic of the solicitation, who are not
involved in collaborations with the proposers. The peer review panels are usually led by an agency
representative, and the panel will score each proposal on a scale between zero and 100. Scores between
90 and 100 are categorised as “Excellent” or “Outstanding”, 80 and 89 as “Very Good”, 70 and 79 as
“Good”, 50 and 69 as “Fair” and below 50 as “Poor”. The score threshold for selection may vary
between space agencies, but usually no proposals are selected with a score lower than 70.
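
As a minimal sketch, the scoring bands and the typical 70-point selection cut-off described above could be encoded as follows (the band labels come from the text; the exact threshold varies between agencies):

```python
def score_category(score: int) -> str:
    """Map a peer-review score (0-100) to the bands used by the review panels."""
    if score >= 90:
        return "Excellent/Outstanding"
    if score >= 80:
        return "Very Good"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Fair"
    return "Poor"


def selectable(score: int, threshold: int = 70) -> bool:
    """Typical selection rule: proposals below the threshold are not selected."""
    return score >= threshold


print(score_category(85), selectable(85))  # Very Good True
```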

The next step for the space agency representatives is to—based on the peer review scores—perform
final selections. In this case, not only the scientific merit scores play a role but also the relevancy of the
proposal for the agency. Usually a subset of the scientifically highest-scoring proposals is selected,
but sometimes even proposals with the highest scores may not be finally selected because of lower
relevancy to the operational purposes of the agency. In this regard, there are different policies between
the space agencies. For NASA, deep space explorations are the main drivers, while ESA usually
focuses almost entirely on the scientific merit and the proposal’s ability to produce new fundamental
knowledge to the scientific community.

[h]Funding

In order for a selected proposal to be executed in space, funding has to be obtained. This can either be
done by grants from local and national authorities or from the space agency itself. The problem in
particular for European researchers is that for ESA to consider selection of a proposal, it is
advantageous to have obtained national funding or a declaration of intention of funding already before
submission. In many cases, national authorities will only indicate intention of funding, should the
proposal be selected, but they usually will not guarantee it. This can sometimes create a chicken-and-egg
problem: the space agency will—before it finally selects a proposal—require guaranteed
funding from a national authority, whereas the national authority requires that the agency select the
proposal. The proposers usually obtain intention of funding in letters from the national funding
authorities, and usually the space agency will accept that. In our case, when the research team was
supported for selected experiments, we referred to an existing grant that could overlap with selection of
a new proposal.

Funding of a grant for experiments in space usually only covers the expenses incurred by the
experimental research team. The space infrastructure, such as access to a space vehicle and its
astronauts (e.g. the Space Shuttle), is delivered by the space agencies.

[h]Feasibility

When selection and funding are obtained, the space agency will conduct a feasibility study to evaluate
whether the experiment can be implemented in space and whether there are technical or other
obstacles. If obstacles are detected, the experiment is usually modified in close collaboration with the
research team; if they cannot be overcome, the proposal will be de-selected. An obstacle to
implementation may not just be technical, such as lack of availability of a technique or equipment,
but may also be a lack of astronaut crew time for execution of the experiment. If the experiment
changes considerably, the space agency may require an additional scientific peer review to evaluate
whether the scientific quality is still high enough, but this is not the usual process.

[h]Implementation

When the space agency feasibility assessment has been successfully completed, meaning that the
experiment can be conducted in space, the implementation process begins. All of our previous research
team’s selected experiments (10 in total) have been somehow modified during the feasibility
assessment and implementation processes. The renal experiment on the Spacelab D2 mission in 1993
was changed by decreasing the number of inflight sessions from two to one, because of limited crew
time. The purpose was as previously described to evaluate the effects of applying an intravenous saline
fluid load to the test astronauts on renal excretion rates of fluid and sodium. We had originally planned
a session with infusion and a session without, but only the infusion session could be implemented in
space. Otherwise the experiment was kept intact except that it was actually improved by changing our
proposed saline loading from an oral saltwater load to intravenous infusion. This was done because an
experimental complement was created for the space mission, whereby several experiments were to be
executed in an integrated fashion, and a US experiment had suggested using saline loading by
infusion. Thus, this intravenous infusion of isotonic saline was planned to be done for the first time
ever in space.

Although the urinary experiment was not difficult to implement from a technical point of view, the
situation was totally different for the CVP experiment. After selection of this experiment, many
managers within ESA and NASA thought it would be impossible to be allowed to conduct such an
invasive study. We succeeded anyway because of one important thing: the backing of the appointed
payload commander. Without this support for the experiment, it would never have been executed. The
astronaut support stemmed from trust in our research group's ability to conduct the study, which we
earned by always being well prepared for pre-flight briefings of the astronauts as well as for the
pre-flight control studies. We always made sure that as much as possible of the collected data was
properly analysed between the different ground tests and that the data were presented to the astronauts
during the subsequent tests, so that they together with the investigators could follow the progress of
the study throughout the pre-flight period.

[h]Execution

During the Spacelab D2 mission, I was standing in the mission control centre in Oberpfaffenhofen near
Munich in Germany, holding my breath and watching the big screen. All investigators followed the
executions of their experiments from the mission control centre, and just before initiation of our renal
experiment, which as described earlier was integrated into one complement, a valve got stuck in one
piece of the equipment. Had the valve problem not been solved, none of the experiments in the
complement could have been conducted. It was a tense moment when the payload commander, after
directions from mission control, finally got the valve to work and initiated the flow of measurements.
What a relief!

During execution of our urinary excretion experiment, a mistake did happen: urine bags, which were
to be collected after flight directly from the trash can in the Space Shuttle, were not correctly
labelled. This meant that we could not readily identify from which crew member the urine in each bag
derived. The problem of identification was solved by measuring the concentrations of five different
substances in each urine bag and comparing them to samples that had been collected inflight from each
bag before trash storage. Each of these samples had been correctly labelled. Since each crew member
had a unique pattern of concentrations of the selected substances, matching and identification of the
bags with the samples were possible.
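
The identification step amounts to a nearest-neighbour match between concentration profiles. A minimal sketch in Python, with purely hypothetical substance concentrations (the text does not name the actual substances or give their values):

```python
import math

# Hypothetical concentration profiles (five substances, arbitrary units)
# for the correctly labelled inflight samples, one per crew member.
labelled_samples = {
    "crew_A": [12.0, 3.1, 44.0, 0.8, 9.5],
    "crew_B": [15.5, 2.2, 38.0, 1.4, 7.1],
    "crew_C": [10.2, 4.0, 51.0, 0.5, 11.0],
}

# Unlabelled urine bags recovered from the trash after flight.
unlabelled_bags = {
    "bag_1": [15.3, 2.3, 37.6, 1.5, 7.0],
    "bag_2": [10.0, 4.1, 50.7, 0.6, 11.2],
    "bag_3": [12.1, 3.0, 44.5, 0.7, 9.4],
}


def distance(a, b):
    """Euclidean distance between two concentration profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def match_bags(bags, samples):
    """Assign each bag to the crew member with the closest profile."""
    return {
        bag: min(samples, key=lambda crew: distance(profile, samples[crew]))
        for bag, profile in bags.items()
    }


print(match_bags(unlabelled_bags, labelled_samples))
# bag_1 -> crew_B, bag_2 -> crew_C, bag_3 -> crew_A
```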

Spaceflight experimentation often requires creative solutions to unexpected problems.

The CVP experiment went well in one astronaut. Originally, we had planned for two astronauts to be
instrumented, and the catheters had been successfully inserted into both. However, the prelaunch
period was extended by 2 days over a weekend because of a failure in one of the shuttle's navigation
systems that had to be changed, and during some leisure activities in this period, one catheter broke
and was withdrawn before launch.

The biggest obstacle, however, occurred during the execution of the experiment on the ground before
the flight, in one of the pre-flight test sessions of the so-called baseline data collection, to which
all inflight data were to be compared. The obstacle demonstrated that it is not only a challenge to
implement and execute an experiment during the flight phase; ground tests can be limiting factors too.
What happened is that the gas analyser used for rebreathing experiments to determine cardiac output
and respiratory variables for some reason did not work. Even though these measurements were not
directly involved in our urinary experiment, the failure put our experiment in jeopardy, because it was
totally integrated into an experimental complement to be executed in concert. The breakdown happened
at the Aerospace Medical Institute at the German space centre, DLR, and since the astronauts' test time
was extremely limited by many other obligations, it was made clear to the experimental team that the
astronauts would withdraw from the experiments if the equipment was not working the next morning.
We knew then that we were in trouble!
Figure: This rack, called Anthrorack, was developed for the Spacelab D2 mission. It consisted of
several pieces of equipment, one of which was a mass spectrometer gas analyser used for respiratory
analysis and cardiac output determinations by rebreathing. During the ground-based data collections, a
capacitor burned out and had to be replaced at a very critical time before the mission.

We were otherwise all ready to conduct the baseline data collections the next day, and if the gas
analyser could not be fixed, it would mean that all of our experimental efforts would go down the
drain. What could we do?

Ingenuity, imagination and thinking out of the box are usually essential in solving unexpected
problems associated with spaceflight. In fact, this is what characterises this discipline. To my
disappointment most of the officials and investigators gave up immediately. It was late afternoon, and
the experiments were to be commenced early next morning at 07.00 am. The payload commander had
left with a statement that he and his astronaut team would show up on time the next morning, and if the
equipment did not work, the experimental complement would be deleted from the mission.
One of our ESA representatives and I soon conferred, and we promised to demonstrate to the payload
commander that the problem could be fixed in time. The only question was how? At the time we did not
know that the problem was a burned-out capacitor, so we planned to have a technician immediately
flown down from the company Innovision A/S in Denmark, which had developed and built the
equipment.

What we did was risky, unusual and not according to the normal rules and regulations, but we were
running out of time. We had to rent a private airplane within a few hours, because there were no
commercial flights at that time. We had to establish an ESA guarantee for payment to the airline
company, and we—above all—had to get in contact with the technician in Denmark. It soon turned out
that he was available and the money for renting the airplane could be secured (after tough negotiations
with ESA) and everything seemed possible.

Late at night, the technician was transported by taxi to a local airport some 200 km from where he
lived, boarded the plane, and arrived in Cologne around 4 am. We picked him up at the airport in
Cologne and brought him to the aerospace facility, and by a miracle, he quickly identified the problem
as a burned-out capacitor and replaced it with one from a similar piece of equipment.

At exactly 07.00 am, before the baseline data collection was to begin, we were ready. The astronauts
entered with the expectation that the tests would be cancelled. We could inform them otherwise, and
with a rare expression of appreciation, the payload commander and his fellow astronauts
professionally moved ahead to get ready for the tests.

Everything went smoothly!

This as well as the inflight incident with the stuck valve were pivotal obstacles for the outcome of
many of the physiological experiments during the Spacelab D2 mission. Had they not been overcome, I
would probably not have been able to continue my space physiology career for the next 30 years.

[h]Data analysis

It is pivotal to make sure that the data collected are correct. One basic rule is for the principal
investigator always to be present, or to have proper representation, at each of the pre- and post-flight
experimental sessions, and to monitor how the data collections are done during flight—preferably
from a mission control centre. Without this, the investigators cannot be sure that the circumstances
surrounding the collections are fully understood and that handling of blood samples is done correctly
and according to specifications. Furthermore, the investigators have to be readily available during
execution of their experiment, should inquiries from space agency representatives require acute
responses and interventions. Otherwise, it is unlikely that the data can be trusted.

The investigators must also be proactive and tenacious in obtaining the data collected in space, which
are usually stored on inflight computers. One way to make sure that the data are correctly handled is to
push the mission controllers to download as much as possible from space to ground as soon as the data
are collected, because no one knows what could happen afterwards to the storage. When downloading
is not possible the data are usually not lost, but post-flight analysis can then be delayed by
bureaucratic impediments to obtaining them.

During the STS-107 mission, which ended in the very sad and unfortunate accident of February 1,
2003, when the Space Shuttle Columbia and its crew were lost during re-entry into the atmosphere, I
was in charge of an experiment. We conducted inflight cardiac output rebreathing experiments as well as blood pressure
monitoring. The data were downloaded to the mission control centre during the mission, which made it
possible for us to publish the data so that the experiments—despite the very sad circumstances—were
not done in vain.

During the Euromir 95 mission, which was a long-duration mission on the Russian space station Mir,
where the ESA astronaut, Thomas Reiter, stayed for 179 days in space, I was in charge of a urinary
collection experiment following an oral water load. I obtained the inflight data directly from Thomas
Reiter himself shortly after his flight, which is unusual, but we did so to bypass the bureaucracy.
Otherwise it could have taken weeks to obtain it.

[h]Publication

The most important part of all investigations including those in conjunction with human spaceflights is
to publish the results in as widely distributed scientific journals as possible. The whole purpose of
obtaining the data is to gain new knowledge, and by publishing in science journals with external peer
review, there is a certain guarantee for data quality and interpretation. It usually takes 2 years after the
end of a spaceflight mission to have the data published, but many times, it takes longer. The
investigators, however, owe it to everybody involved as well as society in general to produce a
scientific publication as fast as possible.

From the Spacelab D2 mission in 1993, our research team succeeded in publishing three papers within
3 years of the mission . During later missions on the Russian space station Mir, the Space Shuttle
Columbia (STS-107) and the International Space Station, we conducted five additional experiments
focusing on how the human cardiovascular system adapts to short- and long-duration flights . This is
important for understanding the long-duration health effects of future deep space missions that may last
up to 3 years on a mission to Mars.

[h]Parabolic flights

In preparation for the CVP experiment for the Spacelab D2 mission, in 1991 we participated in a series
of ESA-funded parabolic flights at an air base in Bretigny-sur-Orge, near Paris in France. The purpose
of participating in these flights was not only to test the technical feasibility of the equipment in
weightlessness but also to obtain short-term data during this condition and compare them to the
longer-term effects of spaceflight. At that time, the parabolic flights were conducted with a Caravelle, which flew in
a Keplerian trajectory, thereby creating a free fall condition (0 g) symmetrically around the top of the
trajectory for 20 s. Some 20 s before and after the 0 g period, the plane underwent a period of increased
g’s from 1 up to 2. Thus, it is a very short period of weightlessness that is created in this way, but it is
the only way to induce real weightlessness in humans without going into actual space.
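
Idealising the manoeuvre as pure free fall, simple ballistic kinematics tie the length of the 0 g window to the vertical speed at entry into the parabola. A sketch under these idealised assumptions (no aerodynamic corrections):

```python
G = 9.81  # m/s^2, gravitational acceleration


def zero_g_duration(v_up: float) -> float:
    """Free-fall time over a ballistic parabola entered with upward
    vertical speed v_up (m/s): the vertical speed goes from +v_up to
    -v_up under gravity, so t = 2 * v_up / G."""
    return 2.0 * v_up / G


# Upward vertical speed implied by the ~20 s weightless window in the text:
v_needed = 20.0 * G / 2.0
print(f"entry vertical speed ~{v_needed:.0f} m/s")        # ~98 m/s
print(f"0 g duration {zero_g_duration(v_needed):.0f} s")  # 20 s
```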
Figure: Dr. Regitze Videbaek measuring the size of the heart chambers in a subject during an ESA
parabolic flight campaign. The airplane ascends into a parabolic (Keplerian) trajectory to create
weightlessness for 20 s. The subject is also instrumented with invasive monitoring equipment for
estimating central venous pressure (CVP), which was also used for the Spacelab D2 mission in 1993 on
board the Space Shuttle Columbia.

The CVP equipment was also tested in one of the investigators during longer weightless periods (some
60 s) in a fighter airplane (Draken) in Denmark. This test was supported by the Royal Danish Air
Force. All of these tests were conducted in seven subjects (Caravelle) plus one additional subject
(Draken) and demonstrated that the equipment worked during short-term variations in g-loads between
0 and 4. In addition, we obtained data on the effects of short-term changes in g-loads on CVP,
including the effects of weightlessness, for comparison with spaceflight.

For further interpretation of the data, after completion of the D2 mission we performed another
series of CVP experiments during 20 s of weightlessness in parabolic flights. In that context, we
added measurements of oesophageal pressures, obtained through an air-filled tube passed through the
nose and swallowed by the test subject, in order to estimate intrathoracic pressures. Intrathoracic pressures are the pressures
surrounding the heart. Those pressures were not measured during the Spacelab D2 mission, so the
parabolic flight data helped us interpret the CVP data from space.

The process of getting access to parabolic flights is not very different from getting access to
spaceflight. Investigators must usually respond to solicitations put forward by a space agency and go
through the scientific selection and feasibility assessment processes. The space agency will supply the
investigators with the infrastructure such as the flights, but investigators must find their own funding,
which usually also applies for adjustments of the equipment to fit into the airplane. In some cases,
investigators will have more direct access to the parabolic flight venue, if their experiments concern
technical feasibility assessments for a spaceflight. Obtaining experimental baseline data from these
flights for comparisons with space data can also be allowed at the discretion of the relevant space
agency.

[h]Spacelab D2 mission

From the Spacelab D2 mission, our CVP and urinary experiments showed us a new mechanism as to
how blood and fluid are shifted from the lower to the upper portions of the body in weightlessness and
that the excretion rate of a saline load is not faster than on Earth. Both results were surprising and
revealed new insight. Likewise, it was a surprise that the agitating (sympathetic) part of the
autonomic nervous system was stimulated during weightlessness and that it was not—as expected—
suppressed. In ground-based simulation studies using 6° head-down bed rest or acute seated head-out
water immersion, the opposite is usually seen. Thus, there is a difference in effects of weightlessness in
space and the simulation models on the ground.

Despite the upward blood shift to the heart and head, CVP was measured to decrease in space
compared to the horizontal supine position on the ground. We had expected it to increase. The data we
obtained were only from one astronaut, but a US-led team during two other missions also measured
CVP directly with invasive catheters and found decreases. We thereafter performed a parabolic flight
study, measured CVP with the same technology as during the D2 mission, and found similar acute
decreases during the 20 s of weightlessness. However, we also observed that the heart was expanded
despite the decreased CVP, because simultaneously the oesophageal pressure also decreased and even
more so. From ultrasound images taken of the heart during the parabolic weightless period, we
observed an expansion of the cardiac chambers, so the ostensible discrepancy between the decrease in
CVP and the expanded heart could be reconciled by the expansion of the thorax that further stretches
the heart and gives an erroneous impression of the change in its feeding pressure .
Figure: Central venous pressure (in mm Hg) as a function of time in one astronaut before launch of the
space shuttle, in the suit room with the space suit on and in the shuttle on the launch pad in the supine
leg-up position. (A) Closing of the helmet visor. (B) Ignition. (C) Release of the solid rocket boosters
from the ascending shuttle. (D) Entering weightlessness. The g-load (G) is indicated at the bottom.

Figure: The parabolic flight experiment which helped us interpret the Spacelab D2 mission data.
Central venous pressure (CVP) was measured directly with long catheters, with transducers at their
tips placed near the heart chambers, in supine subjects during the parabolic manoeuvre. Simultaneously,
the intrathoracic pressures (IPP) were measured through long air-filled tubes with balloons at the
end in the oesophagus. By subtracting IPP from CVP, the transmural heart distension pressure (tCVP)
can be estimated. As can be seen, the tCVP increased in weightlessness (0 G) by 4.3 mm Hg (Delta)
even though CVP fell by 1.3 mm Hg. Thus, parabolic flight data could help interpret those obtained
during spaceflight.

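
Since the caption defines tCVP as CVP minus IPP, the intrathoracic pressure change implied by the quoted deltas follows directly from that definition:

```python
# Changes on entering weightlessness, in mm Hg (values from the figure):
delta_cvp = -1.3    # central venous pressure fell by 1.3
delta_tcvp = +4.3   # transmural pressure tCVP = CVP - IPP rose by 4.3

# Rearranging delta_tcvp = delta_cvp - delta_ipp gives:
delta_ipp = delta_cvp - delta_tcvp
print(f"implied intrathoracic pressure change: {delta_ipp:.1f} mm Hg")  # -5.6
```

The IPP thus had to fall by more than the CVP did, which is exactly the thorax-expansion effect described in the text.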
[h]Subsequent space missions

From our later inflight experiments following the Spacelab D2 mission, the main conclusions can be
briefly summarised as follows:

• Cardiac output and stroke volume increase by some 35–40% during months of flight in space,
which is caused by the weightlessness-induced upward fluid shift.
• The arterial resistance vessels dilate, decreasing the circulatory resistance by some 40% and
blood pressure by 10 mm Hg.
• The sympathetic nervous activity is not decreased in space and is at the level of being upright on
the ground, which is supported by the attenuated urinary excretion rates of fluid and sodium.
• The dilatation of the arteries and the high sympathetic nervous activity are in contradiction to
each other, and the mechanism is not yet known.
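
The roughly 40% fall in vascular resistance can be sanity-checked with the basic haemodynamic relation resistance = mean arterial pressure / cardiac output. The numbers below are illustrative assumptions consistent with the summarised changes, not measured values:

```python
# Illustrative pre-flight values (assumed, not from the study):
map_pre = 100.0          # mean arterial pressure, mm Hg
co_pre = 5.0             # cardiac output, L/min

# Inflight changes as summarised above:
map_post = map_pre - 10.0    # blood pressure down by ~10 mm Hg
co_post = co_pre * 1.40      # cardiac output up by ~40%

r_pre = map_pre / co_pre     # total vascular resistance, mm Hg*min/L
r_post = map_post / co_post
drop_pct = (1.0 - r_post / r_pre) * 100.0
print(f"resistance falls by ~{drop_pct:.0f}%")  # ~36%, close to the ~40% reported
```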

Figure: During missions to the International Space Station in the period 2006 to 2012, we
conducted measurements of cardiac output by a non-invasive rebreathing method in eight astronauts
and found a clear-cut increase of some 35% between the 3rd and 6th month in space. At the same time,
blood pressure is decreased, which indicates that the total vascular resistance is decreased by almost
40%. In contradiction to this, noradrenaline levels are not suppressed but maintained unchanged from
ground-based upright levels. Thus, the mechanism of the chronic peripheral vasodilatation in space is
still unknown.

Thus, in our experience, experiments in space have revealed new insights and mechanisms in
human physiology, which could be of importance in interpreting the health consequences of long-
duration flights in the future. Comparing the effects of long-duration (3–6 months) spaceflight on
the International Space Station with those of short-term shuttle flights yields at least two important
and surprising observations: (1) the shift of blood and fluid from the lower body segments into the
heart, which increases cardiac output, is even bigger, and (2) blood pressure is decreased more,
through a more pronounced peripheral vasodilation.

[mh] Unmanned Aerial Systems (UASs) for Environmental Monitoring

Mankind has always been fascinated by the dream of flight; in fact, in many ancient cultures, myths
and legends depicted deities with the extraordinary ability to fly like birds. It should be sufficient to recall
the Egyptian winged goddess Isis or the Greek myth of Icarus; even Christian iconography preserves
and recovers the figures of winged beings as intermediaries between man and God, reinterpreting them
as angels. From those ancient times, passing through the Renaissance intuitions of Leonardo Da Vinci,
to the first balloon flights of the Montgolfier brothers in 1783, we came to the early twentieth century
which witnessed the first sustained and controlled flight of a powered, heavier-than-air machine with a
pilot aboard. Just over a hundred years have passed since that fateful day—December 17, 1903—when
the Wright Flyer took off near Kill Devil Hills, about four miles south of Kitty Hawk, North
Carolina, USA. Nowadays the beauty of flying characterizes our daily lives, having become an
indispensable means of moving people and things within a few hours to all parts of the world. We can
state that the ability to fly has strongly changed our vision of the world.

Long before this first powered flight of the Wright brothers, one of the first recorded usages of unmanned
aircraft systems (UAS) was by the Austrians on August 22, 1849. They launched 200 pilotless balloons,
carrying 33 pounds of explosives and armed with half-hour time fuses, against the city of Venice. On
May 6, 1896, Samuel P. Langley's Aërodrome No. 5, a steam-powered pilotless model, was flown
successfully along the Potomac River near Washington. During World War I and World War II, radio-
controlled aircraft were used extensively for aerial surveillance and for training antiaircraft gunners,
and they also served as aerial torpedoes. During the Cold War, the drone was seen as a viable surveillance
platform able to capture intelligence in denied areas. Reconnaissance UASs were first deployed on a
large scale in the Vietnam War. By the dawn of the twenty-first century, unmanned aircraft systems
were used more and more frequently for a variety of missions, especially since the war on terror,
becoming a lethal hunter-killer. Due to these historical aspects, the public perception of most UAV
applications is still mainly associated with military use, but nowadays the drone concept is being
refashioned as a new promise for citizen-led applications with several functions, ranging from
monitoring climate change to carrying out search operations after natural disasters, photography,
filming, and ecological research.
The interpretation of photos from airborne and satellite-based imagery has become one of the most
popular tools for mapping vast surfaces, playing a pivotal role in the habitat mapping, measuring, and
counting performed in ecological research, as well as in environmental monitoring concerning
land-use change. However, both satellite and airborne imagery techniques have some disadvantages.
For example, the limitations of piloted aircraft must be considered: their reliance on weather
conditions, flight altitude, and speed can restrict the use of such methods. In
addition, satellite high-resolution data might not be accessible for many developing-country
researchers due to financial constraints. Furthermore, some areas such as humid biotopes and tropical
coasts are often obscured by a persistent cloud cover, mostly making cloud-free satellite images
unavailable for a specific time period and location; moreover, the temporal resolution is limited by the
availability of aircraft platforms and orbit characteristics of satellites. In addition, the highest spatial
resolution data available from satellites and manned aircraft are typically in the range of 30–50
cm/pixel. Indeed, for the purpose of monitoring highly dynamic and heterogeneous environments, or
for real-time monitoring of land-use change in sensitive habitats, satellite sensors are often limited due
to unfavorable revisit times (e.g. 18 days for Landsat) and spatial resolution (e.g. Landsat and MODIS
~30 m/pixel). To address these limitations, new satellite sensors (Quickbird, Pleiades-1A, IKONOS,
GeoEye-1, WorldView-3) have become operational over the past decade, offering data at finer than 10-
m spatial resolution. Such data can be used for ecological studies, but hurdles such as high cost per
scene, temporal coverage, and cloud contamination remain.

Emerging from a military background, there is now a proliferation of small civilian unmanned aerial
systems or vehicles (UAS/Vs), commonly known as drones. Modern technological advances such as
long-range transmitters, increasingly miniaturized components for navigation and positioning, and
enhanced imaging sensors have led to an upsurge in the availability of unmanned aerial platforms for
both recreational and professional uses. These emerging technologies may provide unprecedented
scientific applications in the most diverse fields of science. In particular, UAVs offer ecologists a new
route to responsive, timely, and cost-effective monitoring of environmental phenomena, allowing the
study of individual organisms and their spatiotemporal dynamics at close range.

Two main categories of unmanned aerial vehicles (UAVs) exist: rotor-based copter systems and fixed-
wing platforms. Rotor-wing units have hovering and VTOL (Vertical Take-Off and Landing)
capabilities, while fixed-wing units tend to have longer flight durations and range. However, a more
detailed classification can be made according to size, operating range, operational flight altitude, and
duration. For additional information regarding the classification of UAVs, please refer to Refs.

Table: Summary of UAV classes with examples.

Very large (3–8 tons): HALE (High Altitude, Long Endurance)
- Specifics: fly at the highest altitudes (> 20 km) with huge operating ranges that extend thousands of km; long flight time (over 2 days); very heavy payload capacity (more than 900 kg in under-wing pods).
- Operational requirements: prohibitively expensive for most users (high maintenance, sensor, and crew training costs); long runway for takeoff and landing; ground-station support and continuous air-traffic control; challenging deployment/recovery and transport.
- Application areas: assessments of climate variable impacts at global scales; remote sensing collection; earth/atmospheric science investigations.
- Examples: Global Hawk, Qinetiq Zephyr, NASA PathFinder.

Large (1–3 tons): MALE (Medium Altitude, Long Endurance)
- Specifics: medium altitude (3–9 km); over 12 h flight time with broad operating range (> 500 km); heavy payload capacity (~100 kg internally, external loads of 45 up to 900 kg).
- Operational requirements: similar to HALE but with reduced overall costs.
- Application areas: near-real-time wildfire mapping and surveillance; investigation of storm electrical activity and storm morphology; remote sensing and atmospheric sampling; arctic surveys; atmospheric composition and chemistry.
- Examples: NASA Altus II, NASA Altair, NASA Ikhana, MQ-9 Reaper (Predator B), Heron 2, NASA SIERRA.

Medium (25–150 kg): LALE (Low Altitude, Long Endurance), LASE (Low Altitude, Short Endurance)
- Specifics: fly at moderate altitude (1–3 km) with operating ranges that extend from 5 to 150 km; flight time over 10 hours; moderate payload capacity (10–50 kg).
- Operational requirements: reduced costs and takeoff/landing requirements compared to MALE (hand-launched and catapult-launched platforms); simplified ground-control stations.
- Application areas: remote sensing; mapping; surveillance and security; land cover characterization; agriculture and ecosystem assessment; disaster response and assessment.
- Examples: ScanEagle, Heron 1, RQ-11 Raven, RQ-2 Pioneer, RQ-14 Dragon Eye, NASA J-FLiC, Arcturus T-20.

Small, mini, and nano (less than 25 kg for small UAVs, up to 5 kg for mini, and less than 5 kg for nano): MAV (Micro) or NAV (Nano) Air Vehicles
- Specifics: fly at low altitude (< 300 m) with short flight duration (5–30 min) and range (< 10 km); small payload capacity (< 5 kg).
- Operational requirements: low costs and minimal takeoff/landing requirements (hand-launched); often accompanied by ground-control stations consisting of laptop computers; flown by flight-planning software or by direct RC (Visual Line Of Sight, or Beyond Visual Line Of Sight when allowed); usually fixed-wing (small UAVs) or copter-type (mini and nano UAVs).
- Application areas: aerial photography and video; remote sensing; vegetation dynamics; disaster response and assessment; precision agriculture; forestry monitoring; geophysical surveying; photogrammetry; archeological research; environmental monitoring.
- Examples: AR-Parrot, BAT-3, SenseFly eBee, DJI Inspire 3, DJI Phantom 4, Draganflyer X6, Walkera Voyager 4.

In this brief review and in our case studies, we only discuss and illustrate the use of small and mini
UAVs because these portable and cost-effective platforms have shown a great potential to deliver high-
quality spatial data to a range of science end users.

[h]Some recent ecological applications of lightweight UASs

Although lightweight UASs represent only a small fraction of the full list of unmanned systems
capable of performing the so-called “three Ds” (i.e. dull, dirty, or dangerous missions), they have been
used in a broad range of ecological studies.

[h]Forest monitoring and vegetation dynamics

Tropical forests play a critical role in the global carbon cycle and harbor around two-thirds of all
known species . Tropical deforestation is a major contributor to biodiversity loss, so an urgent
challenge for conservationists is to be able to accurately assess and monitor changes in forests,
including near real-time mapping of land cover, monitoring of illegal forest activities, and surveying
species distributions and population dynamics .

Using a simple RC fixed-wing UAV (Hobbyking Bixler 2), Koh and Wich provided helpful data for the monitoring of tropical forests of Gunung Leuser National Park in Sumatra, Indonesia. In fact, the
acquired images allowed the detection of different land uses, including oil palm plantations, maize
fields, logged areas, and forest trails.

UAVs have also been used for the successful monitoring of streams and riparian restoration projects in
inaccessible areas on Chalk Creek near Coalville (Utah), as well as to perform nondestructive,
nonobtrusive sampling of the dwarf bear claw poppy (Arctomecon humilis), a short-lived perennial herb of crust communities that is very sensitive to off-road vehicle (ORV) traffic. A fixed-wing (eBee,
senseFly ) and a quadcopter (Phantom 2 Vision+, DJI) were used to acquire high-spatial resolution
photos of an impounded freshwater marsh, demonstrating that UAVs can provide a time-sensitive,
flexible, and affordable option to capture dynamic seasonal changes in wetlands, in order to collect
effective data for determining percent cover of floating and emergent vegetation.

Dryland ecosystems provide ecosystem services (e.g. food, water, and biofuel) that directly support 2.4 billion people. Covering 40% of the terrestrial area, they characteristically have distinct vegetation structures that are strongly linked to their function. For these reasons, Cunliffe et al.
acquired aerial photographs using a 3D Robotics Y6 hexacopter equipped with a global navigation
satellite system (GNSS) receiver and consumer-grade digital camera (Canon S100). Later, they
processed these images using structure-from-motion (SfM) photogrammetry in order to produce three-
dimensional models, describing the vegetation structure of these semi-arid ecosystems. This approach
yielded ultrafine (<1 cm²) spatial resolution canopy height models at the landscape level (10 ha). The study demonstrated how ecosystem biotic structures can be efficiently characterized at cm scales by processing aerial photographs captured from an inexpensive lightweight UAS, providing an appreciable advance in the tools available to ecologists. Getzin et al. demonstrated how fine spatial resolution
photography (7-cm pixel size) of canopy gaps, acquired with the fixed-wing UAV ‘Carolo P200,’ can
be used to assess floristic biodiversity of the forest understory. Also in riparian contexts, UAS
technology provides a useful tool to quantify riparian terrain, to characterize riparian vegetation, and to
identify standing dead wood and canopy mortality, as demonstrated by Dunford et al. .

[h]Wildlife research

Often population ecology requires time-series and accurate spatial information regarding habitats and
species distribution. UASs can provide an effective means of obtaining such information. Jones et al. used a 1.5-m wingspan UAV equipped with an autonomous control system to capture high-quality,
progressive-scan video of a number of landscapes and wildlife species (e.g. Eudocimus albus, Alligator
mississippiensis, Trichechus manatus). Israel addressed the problem of roe deer fawns (Capreolus capreolus) being mortally injured by mowing machinery, and demonstrated a technically sophisticated 'detection and carry away' solution to avoid these accidents. He presented a UAV-based remote sensing system (an octocopter Falcon-8 from Ascending Technologies) that uses thermal imaging for the detection of fawns in the meadows.

Considering that in butterflies, imagoes and their larvae often demand specific and diverging
microhabitat structures and resources, Habel et al. took high-resolution aerial images using a DJI
Phantom 2 equipped with a H4-3D Zenmuse gimbal and a lightweight digital action camera (GoPro
HERO 4). These aerial pictures, coupled with information on the larvae's habitat preference from
field observations, were used to develop a habitat suitability model to identify preferential microhabitat
of two butterfly larvae inhabiting calcareous grassland.

Moreover, UAVs may offer advantages for studying marine mammals. Koski et al. used the Insight A-20 equipped with an Alticam 400 (a camera model developed for the ScanEagle UAV) to successfully detect simulated whale-like targets, demonstrating the value of such methodology for performing marine ecological surveys. In a similar manner, Hodgson et al. captured 6243 aerial images in Shark Bay (Western Australia) with a ScanEagle UAS equipped with a digital SLR camera, of which 627 contained dugongs, underlining that UAS surveys may not be limited by sea state conditions in the same manner as sightings from manned surveys. Whitehead et al. described efforts to
map the annual sockeye salmon run along the Adam’s River in southern British Columbia, providing
an overview of salmon locations through high-resolution images acquired with a lightweight fixed-
wing UAV.

[h]Case studies along temperate Mediterranean coasts

Although over the past decade there has been increasing interest in tools for ecological applications such as ultrahigh-resolution imagery acquired by small UAVs, few have been used for environmental monitoring and classification of marine coastal habitats. In this section we outline two case studies regarding the application of a small UAV to mapping coastal habitats. These applications represent a cross section of the types of applications for which small UAVs are well-suited, especially when one considers the ecological aspects related to marine species biology and habitat monitoring.
Although a number of advanced sensors have been developed and many applications for small UASs have been proposed, here we carried out our studies using a commercially available, low-cost camera. As such, it can be considered a simple, inexpensive, and replicable tool that can be easily adopted in future research, including by nonexperts in the field of UAS technologies.

For each survey we used a modified rotary-wing platform (Quanum Nova CX-20, Figure), which included an integrated autopilot system (APM v2.5) based on the 'ArduPilot Mega' (APM, http://www.ardupilot.co.uk/), developed by an online community (diydrones.com). The
APM includes a computer processor, geographic positioning system module (Ublox Neo-6 Gps), data
logger with an inertial measurement unit (IMU), pressure and temperature sensor, airspeed sensor,
triple-axis gyro, and accelerometer. This quadcopter is relatively inexpensive (<$500) and lightweight (~1.5 kg). The camera used to acquire the imagery was a consumer-grade RGB, Full-HD action camera (GoPro Hero 3 Black Edition; sensor: Complementary Metal-Oxide-Semiconductor; sensor size: 1/2.3″ (6.17 × 4.55 mm); pixel size: 1.55 μm; focal length: 2.77 mm). In addition, a brushless 3-axis camera gimbal (Quanum Q-3D) was installed to ensure good stabilization of acquired images,
avoiding motion blur. Both drone and gimbal were powered by a ZIPPY 4000 mAh (14.8 V) 4S 25C
Lipo battery which allowed a maximum flying time of about 13 min or less, depending on wind. In
addition, by combining the APM with the open-source mission planner software (APM Planner), the
drone can perform autonomous fly paths and survey grids.
Figure: The Quanum Nova CX-20 quadcopter ready to fly just before a coastal mapping mission.
Figure: The integrated navigation and autopilot systems (APM 2.5).
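The autonomous survey grids mentioned above follow a boustrophedon ("lawnmower") pattern of parallel strips. A minimal sketch of how such a waypoint list can be generated is shown below; this is our own illustration in local metric coordinates, not the Mission Planner implementation, and the function name and parameters are hypothetical.

```python
def survey_grid(x0, y0, width, height, strip_spacing):
    """Boustrophedon ('lawnmower') waypoints covering a rectangle.

    Coordinates are local metres; strip_spacing is the distance between
    adjacent flight lines (chosen from the photo footprint and the
    desired side overlap).
    """
    waypoints = []
    left_to_right = True
    y = y0
    while y <= y0 + height:
        if left_to_right:
            waypoints.append((x0, y))
            waypoints.append((x0 + width, y))
        else:
            waypoints.append((x0 + width, y))
            waypoints.append((x0, y))
        left_to_right = not left_to_right  # reverse direction each strip
        y += strip_spacing
    return waypoints

# e.g. a 100 x 30 m area flown with 10 m between strips -> 4 strips
wps = survey_grid(0, 0, 100, 30, 10)
```

A ground-control application would then convert such local waypoints to lat/long and upload them to the autopilot.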

[h]Case study 1: mapping of the upper limit of a Posidonia oceanica meadow for the detection of
impacted areas

The marine phanerogam P. oceanica (L.) Delile is the most widespread seagrass in the Mediterranean
Sea. It plays a pivotal role in the ecosystems of shallow coastal waters in several ways by (i) providing habitat for juvenile stages of commercially important species; (ii) significantly reducing coastal erosion, promoting the deposition of particles with its dense leaf canopy and thick root-rhizome layer ('matte'); and (iii) offering a nursery area for many fish and invertebrate species. Although known to be a reef-building organism capable of long-term sediment retention, P. oceanica is nevertheless experiencing a steep decline throughout the Mediterranean Sea. Along the Mediterranean coasts, the decline of seagrasses at large spatial scales has been attributed to anthropogenic disturbances such as illegal trawling, fish farming, construction of marinas, and sewage discharge and pollution. In contrast, at smaller spatial scales, particularly in coastal areas subjected to intense recreational activity, seagrasses are impacted by mechanical damage caused by boat anchoring or moorings. Major
damage to seagrasses seems to be caused by dragging anchors and scraping anchor chains along the
bottom, as boats swing back and forth, generally resulting in dislodgement of plant rhizomes or leaves .
In most published works, the mapping of P. oceanica meadows has been based on satellite and airborne imagery, multibeam bathymetry, and side-scan sonar mosaics. Remote sensing data from satellites and piloted aircraft can be used to map large areas, but they either lack adequate spatial resolution or are too expensive for mapping fine-scale features; small UAVs, by contrast, are particularly well-suited to mapping the upper limits of meadows at smaller spatial scales (i.e. 1–5 km).

The case study for this application was carried out along a sandy cove (Arenella bay) with a well-
established P. oceanica meadow, approximately 2 km north of Giglio Porto (Giglio Island, Tuscany,
IT), in late November 2016. Our goals were to show the high level of detail that can be reached with UAV-based imagery with respect to other freely available remote sensing techniques, and to detect impacted areas of the meadow. In this study site there are two principal sources of disturbance: a direct
adverse effect on meadow due to boat anchoring during summer seasons, and the presence of a granite
quarry that in the past (no longer operational) may have caused an increase in sedimentation rates,
resulting in a reduction of cover and shoot density.

We set the GoPro Hero 3 camera to take photos every 2 s (time lapse mode) in Medium Field of View
(M FOV: 7 Megapixel format, 3000 × 2250 pixels), and we set the camera pointing 90° downward
with auto white balance. Flight speeds were maintained between 5 and 7 m/s to allow for 75% in-track
overlap. The drone was programmed to fly at 30 m above mean sea level in order to get a Ground
Sampling Distance (GSD) of ~2.5 cm per pixel, according to the formula:

GSD = (SW × FH) / (FL × IMW)

where GSD is the ground sample distance (i.e. photo resolution on the ground), SW is the sensor
width, FH is the flight height, FL is the focal length of the camera, and IMW is the image width. By
multiplying the GSD by image size (width and height) the resulting photo footprint was 66 × 50 m.
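These figures can be checked with a short calculation (a sketch of our own using the camera parameters reported earlier; the function name is illustrative). With the 6.17 mm sensor, 2.77 mm focal length, and 3000 px image width at 30 m altitude, the formula gives a GSD of ~2.2 cm/pixel (close to the ~2.5 cm reported), a footprint of ~66 × 50 m, and, at ~6 m/s with one photo every 2 s, an in-track overlap of ~76%, consistent with the 75% target.

```python
def gsd_m(sensor_width_mm, flight_height_m, focal_length_mm, image_width_px):
    """GSD = (SW x FH) / (FL x IMW), with all lengths converted to metres."""
    return ((sensor_width_mm / 1000) * flight_height_m
            / ((focal_length_mm / 1000) * image_width_px))

gsd = gsd_m(6.17, 30, 2.77, 3000)      # GoPro Hero 3, M FOV, 30 m altitude
footprint = (gsd * 3000, gsd * 2250)   # ground footprint, ~66 x 50 m
spacing = 6 * 2                        # ~6 m/s flight speed, one photo every 2 s
overlap = 1 - spacing / footprint[1]   # in-track overlap, ~0.76
```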

The bay (1.96 ha) was flown in 16 strips with a total flight duration of 6.34 min. In total, the survey
yielded 184 images, which were processed in Adobe Photoshop Lightroom 5.0 (Adobe Systems
Incorporated, San Jose, California, USA) using the lens correction algorithm for the GoPro HERO 3
Black Edition camera, in order to remove lens distortion (fish-eye effect). Since high spatial accuracy was not required for this application, five ground-control points (GCPs) were placed at accessible
locations along the coast (with easily recognizable natural features such as rocks), and they were
surveyed with a handheld GPS + GLONASS receiver (Garmin Etrex 30), leading to horizontal errors
of ±5 m. Subsequently, the images were used to produce a high-resolution orthoimage mosaic in Agisoft Photoscan 1.0 (www.agisoft.com). This structure-from-motion (SfM) package allows a high degree of automation, making it possible for nonspecialists to produce accurate orthophoto mosaics in less time than it would take using conventional photogrammetric software.

Figure shows how the high spatial resolution of RGB imagery acquired from the UAV allowed us to detect the impacted areas of the meadow. In particular, we identified 1437 m² of dead 'matte' by analyzing satellite imagery (Google Earth), 1686 m² with Bing Aerial orthophotos, and 1711 m² with the UAV-based orthomosaic. Due to the higher spatial resolution of UAS imagery, we were able to detect even the smallest areas where dead 'matte' was exposed due to meadow degradation (Figure).
Figure: The bay of Arenella (Giglio Island, scale 1:500) with impacted Posidonia oceanica meadow (dead P. oceanica 'matte' is enclosed by orange polygons) mapped using three different free/low-cost remote sensing techniques: (a) Google Earth satellite image; (b) Bing aerial orthoimage; and (c) UAV-based orthomosaic. The area highlighted by the red box is shown at greater scale (1:100) in order to visualize the increasing level of detail. In (c), red dots represent the positions of GCPs.

The imagery acquired provides a new perspective on P. oceanica mapping and clearly shows how
comparative measurements and low-cost monitoring can be made in shallow coastal areas. In fact, in
this kind of environment, anthropogenic drivers such as boat mooring and creation of coastal dumping
areas are significantly affecting ecosystem structure and function. In addition, considering that drone surveying is relatively inexpensive, regular time-series monitoring can be adopted to assess the evolution of coastal meadows.

[h]Case study 2: integration of underwater visual census (UVC) data with UAV-based aerial maps for
the characterization of juvenile fish nursery habitats

Most demersal fishes have complex life cycles, in which the adult life-stage takes place in open deeper
waters, while juvenile life-stages occur in benthic inshore habitats . The presence of suitable habitats
becomes an essential requirement during the settlement of juvenile stages. These habitats are key to the successful completion of early life phases, providing shelter from predators and an abundance of trophic resources. As a result of this site-attachment, juveniles exhibit systematic patterns of distribution, influenced by the availability of microhabitats. Habitat identification has generally been achieved by diver-based underwater visual census (UVC) techniques. The latter have been considerably improved in recent years with underwater video technologies. However, these studies require a
deep knowledge of the environment, in addition to considerable effort in terms of time and experienced staff. Small UASs potentially offer low-cost support to conventional UVC techniques, providing a time-saving tool aimed at improving data from underwater surveys. Indeed, our aim is to couple UVC data (e.g. number of juvenile fish) with remote sensing data (high-resolution UAV-based imagery), extrapolating habitat features from image analysis and allowing a considerable saving of both time and effort, especially for underwater operators.

The case study for this application involved the same UAS used in the previous example, an
underwater observer, and was focused on a common coastal fish species: the white seabream (Diplodus
sargus, L.). D. sargus is abundant in the Mediterranean and dominates fish assemblages in shallow
rocky infralittoral habitats. It inhabits rocky bottoms and P. oceanica beds, from the surface to a depth
of 50 m. In common with other sparid fishes, it is an economically important species of interest for
fisheries and aquaculture.

Between early May and late June 2016, juvenile white seabream (D. sargus, L.) were censused from
Cannelle Beach to Cape Marino, along a rocky shoreline (~1.5 km long) south of Giglio Porto (Giglio
Island, IT). Counts of fish were obtained from two visual census surveys per month: the diver swam
slowly along the shoreline (from 0 to 6 m depth) and recorded the numbers of individuals encountered
while snorkeling. When juvenile fish or shoals of settlers (size range 10–55 mm) were observed, the
abundance and size of each species were recorded on a plastic slate. In addition, the diver towed a rigid
marker buoy with a handheld GPS unit with WAAS correction (GpsMap 62stc) in order to accurately
record the position of each shoal of fish.

Two mapping missions were successfully carried out in late July 2016, along the same shoreline, in
order to produce a high-resolution aerial map of the coast (Figure).

Figure: The high-resolution (2.5 cm/pix) mosaic representing the rocky coast (~1.5 km) south of Cannelle Beach (Giglio Island, Italy), derived from two mapping missions (204 images) of the Quanum Nova CX-20.

The quadcopter flew at 40 m, yielding a ground resolution of ~2.5 cm/pix. The two surveys covered
1446 m of shoreline and took approximately 16 min, resulting in 204 images. Since many stretches of the coast were inaccessible, so that GCPs could not be physically measured on the ground, we used a direct georeferencing approach. The camera positions were determined using the UAV's onboard GPS receiver, so that the GPS position at the moment of each shot could be written to the EXIF header of the corresponding image, after estimating the time offset with the Mission Planner (v.1.3.3 or higher) image-geotagging tool (for best results, preflight synchronization of the camera's internal clock with GPS time is recommended). In addition, these measured values (from the onboard GPS) may be
useful to estimate the camera’s approximate external orientation parameters to speed up
photogrammetric workflow (bundle adjustment) in Agisoft Photoscan. However, since these positions are typically captured at relatively low accuracy by a UAV's consumer-grade GPS, we also registered the final orthomosaic by importing it as a raster image (TIFF format) into ArcMap 10.1. We aligned the raster with an existing 1:5000 scale aerial orthophoto using 8 control points in order to perform a 2nd-order polynomial transformation. Afterward, the control points were used to check the reliability of the image transformation. The total error was computed as the root mean square of all the residuals (RMSE). This value describes how consistent the transformation is across the different control points. The RMSE achieved was 0.15 pixels, well under the conventional requirement of less than 1 pixel. The successful geo-registration
allowed direct visualization on the map of the UVC data (i.e. lat/long coordinates of fish shoals) after downloading GPS eXchange (.gpx) information from the GPS unit. These GPS data were imported as a point shapefile in ArcMap using the DNRGPS 6.1.0.6 application.
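The RMSE check described above reduces to a simple computation over the control-point residuals. A minimal sketch follows; the residual values here are made up for illustration and are not the actual ArcMap output.

```python
import math

def rmse(residuals):
    """Root mean square of 2-D control-point residuals (dx, dy),
    e.g. in pixels, after a georeferencing transformation."""
    squared = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(squared) / len(squared))

# hypothetical pixel residuals at four check points
err = rmse([(0.1, 0.1), (-0.1, 0.05), (0.05, -0.1), (0.0, 0.1)])
```

A value comfortably below 1 pixel, as in our 0.15-pixel result, indicates a consistent transformation across control points.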

As all juvenile fish positions, with their relative abundances (number of fish per shoal), are now available in a GIS environment, it is straightforward to model them with interpolation methods such as those available in ArcMap. The point data, measured at irregularly spaced locations, were converted into continuous surfaces using an inverse distance weighting (IDW) method and then rasterized into a grid format. We chose the local IDW interpolator because its underlying assumption (i.e. that each point has a local influence that diminishes with distance) is relevant for juvenile fish, where closer points are expected to be similar as a result of shared habitat characteristics. Figures 5 and 6 show the spatial distribution of D. sargus juvenile
density collected through underwater visual census after IDW interpolation. GIS data integration
allowed us to identify two important aspects: (1) four areas (a–d) with high densities of juveniles were
clearly visible, suggesting that such zones serve as nursery grounds for juvenile white seabreams
(Figure) and (2) as the juveniles grew larger in size (> 40 mm) a dispersal out of the nursery areas was
evident and the preference for a given habitat type decreased leading to an increase in the number of
shoals but with lower densities within shoals (Figure).
Figure: Spatial distribution of small-sized (10–40 mm) juvenile D. sargus: IDW-interpolated fish density after UVC data collection in May 2016. The four areas (a–d) with the highest densities of juveniles are highlighted by red circles.
Figure: Spatial distribution of large-sized (41–55 mm) juvenile D. sargus: IDW-interpolated density after UVC data collection in late June 2016.
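The IDW scheme described above can be sketched in a few lines. This is a minimal illustration with hypothetical shoal counts and positions, not the ArcMap implementation, which additionally handles search radii and rasterization to a grid.

```python
import math

def idw(samples, query, power=2):
    """Inverse distance weighting: each sample's influence
    diminishes with distance from the query location."""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0:
            return value  # query coincides with a sample point
        w = 1 / d ** power
        num += w * value
        den += w
    return num / den

# hypothetical shoal counts at local (x, y) positions along the shoreline
shoals = [((0, 0), 12), ((10, 0), 3), ((5, 8), 7)]
density = idw(shoals, (2, 2))  # dominated by the nearby 12-fish shoal
```

Evaluating this function over a regular grid of query points yields the continuous density surface that is then rasterized.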

These four nurseries were investigated through image analysis (Figure a–d): we performed a Maximum Likelihood Classification followed by a postprocessing workflow and manual polygon editing for edge refinement, in order to highlight the most important habitat features, such as substrata type and extent (Table). Due to the high site-attachment of juvenile fish, the presence of specific habitats plays a key role in the development of early life-stages; hence, the fine characterization of these environments becomes an important aspect of ecological studies focused on juvenile fish. However, underwater data collection by SCUBA operators requires a large effort to acquire such detailed information; therefore, UAS-based remote sensing techniques become useful, reliable, and feasible tools for mapping coastal fish habitats and for supporting ecological investigations.
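The Maximum Likelihood Classification step can be illustrated with a simplified per-pixel sketch. This assumes independent Gaussian band statistics per class; the RGB training values are made up, and a real implementation such as the ArcMap tool typically uses full covariance matrices rather than per-band variances.

```python
import math

def train(classes):
    """Per-class band means and variances from labelled (r, g, b) pixels."""
    stats = {}
    for name, pixels in classes.items():
        n = len(pixels)
        means = [sum(p[i] for p in pixels) / n for i in range(3)]
        var = [max(sum((p[i] - means[i]) ** 2 for p in pixels) / n, 1e-6)
               for i in range(3)]
        stats[name] = (means, var)
    return stats

def classify(stats, pixel):
    """Assign the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -math.inf
    for name, (means, var) in stats.items():
        ll = -sum(math.log(2 * math.pi * v) / 2
                  + (pixel[i] - means[i]) ** 2 / (2 * v)
                  for i, v in enumerate(var))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

training = {  # made-up RGB samples for two substrata
    "sand": [(200, 190, 150), (210, 195, 160), (205, 192, 155)],
    "posidonia": [(30, 60, 40), (35, 65, 45), (32, 62, 42)],
}
stats = train(training)
```

Applying `classify` to every pixel of the orthomosaic, followed by polygon cleanup, yields thematic maps like those in the figure below the table.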

Table: Main habitat features characterizing the four nursery areas (a–d). Measures are derived from image analysis of the high-resolution mosaics in ArcMap 10.1.

Sandy cove (a): depth 0–3 m; total extent 674.4 m²
- Sand (granite coarse sand): 243.9 m² (36.2% cover)
- Rock (large (mean ± SD diameter: 3.4 ± 16) and medium-sized (mean ± SD diameter: 0.9 ± 0.3) boulders with photophilic algae biocenosis): 382.5 m² (56.7% cover)
- Posidonia oceanica (small patches on sand): 48 m² (7.1% cover)

Rocky cove (b): depth 0–3.5 m; total extent 317.8 m²
- Sand (granite coarse sand): 11.6 m² (3.7% cover)
- Rock (small-sized (mean ± SD diameter: 0.6 ± 0.2) blocks and pebbles with photophilic algae biocenosis): 306.2 m² (96.3% cover)

Small port (c): depth 0–2.8 m; total extent 918.3 m²
- Sand (fine sand and mud): 218.7 m² (23.8% cover)
- Rock (cranny rock with semisciaphilic algae and isolated boulders on soft sediment): 402.2 m² (43.8% cover)
- Debris (dead P. oceanica leaves on mud): 297.4 m² (32.4% cover)

Rocky/sandy cove (d): depth 0–5.5 m; total extent 2521.2 m²
- Sand (sandy patches): 129.1 m² (5.1% cover)
- Gravel and pebbles (small- and medium-sized pebbles on sand): 25.2 m² (1% cover)
- Rock (cranny rock with photophilic algae biocenosis and isolated boulders): 1698.9 m² (67.4% cover)
- Posidonia oceanica (Posidonia meadow and 'matte'): 621 m² (24.6% cover)
- Debris (dead P. oceanica leaves on sand): 47 m² (1.9% cover)
Figure: Thematic maps of the four nursery areas (a–d) derived from Maximum Likelihood classification and manual editing. Different colors represent the main habitat types.

In this brief review we have provided an overview of ecological studies carried out with small drones. Through these case studies we demonstrated how UAV-acquired imagery has a substantial potential to
revolutionize the study of coastal ecosystem dynamics. The future of UAS applications looks very promising due to the relatively low cost with respect to the benefits obtained. The field of ecology is severely hindered by the difficulty of acquiring appropriate data, particularly data at fine spatial and temporal resolutions, at reasonable cost. As demonstrated in this study, unmanned
aerial vehicles offer ecologists new opportunities for scale-appropriate measurements of ecological
phenomena providing land cover information with a very high, user-specified resolution, allowing for
fine mapping and characterization of coastal habitats. Although the camera equipment used herein only
captures three color (RGB) channels with relatively low resolution (max 16 megapixel), it was possible
to distinguish impacted areas in sensitive habitat types, as well as preferred sites for juvenile fish
species. Moreover, high-spatial-resolution data derived from UAVs, combined with traditional underwater visual census techniques, enable the direct visualization of field data in geographic space, bringing spatial ecology toward new perspectives. High-resolution aerial mosaics allow rapid detection
of key habitats, and thus can be used to identify areas of high relevance for species protection and areas
where management action should be implemented to improve or maintain habitat quality . UASs are
potentially useful to investigate population trends and habitat use patterns, and to assess the effect of
human activities (e.g. tourism, pollution) on abundance, particularly in coastal and shallow habitats,
where visibility enables animal detection from the surface, as demonstrated for elasmobranch species
in coral reef habitats. Finally, although the flexibility of UASs may revolutionize the way we address and solve ecological problems, we must consider government approvals, navigational stipulations, and social implications that impose restrictions on the use of UASs before undertaking research projects involving them.

Preface

Unmanned Aircraft Systems (UAS) have rapidly emerged as one of the most transformative
technologies of the modern era. Their applications span a wide spectrum, from military reconnaissance
and surveillance to civilian tasks such as environmental monitoring, disaster response, and package
delivery. The International Symposium on Unmanned Aerial Vehicles (UAV'08) stands as a testament
to the growing importance and interdisciplinary nature of UAS research and development.

UAV'08 brings together experts, researchers, engineers, and policymakers from around the globe to
discuss the latest advancements, challenges, and opportunities in the field of unmanned aerial vehicles.
This symposium serves as a platform for fostering collaboration, sharing knowledge, and shaping the
future trajectory of UAS technology.

The year 2008 marks a pivotal moment in the evolution of unmanned aircraft systems. It represents a
period of significant technological breakthroughs, regulatory milestones, and increased public
awareness. UAV'08 captures this pivotal juncture, providing a snapshot of the state-of-the-art in UAS
research and innovation.

This symposium covers a diverse range of topics, reflecting the multifaceted nature of unmanned
aircraft systems. From airframe design and navigation algorithms to payload integration and ethical
considerations, the breadth of discussions at UAV'08 underscores the complexity and depth of the
field.

Furthermore, UAV'08 serves as a forum for exploring the societal implications of UAS technology. As
unmanned aircraft become more prevalent in our skies, questions surrounding privacy, safety, and
ethical use come to the forefront. This symposium encourages thoughtful dialogue and debate on these
pressing issues, seeking to ensure that UAS technology is deployed responsibly and ethically.

As we embark on this journey through the world of unmanned aircraft systems, we invite readers to
join us in exploring the cutting-edge research, innovative applications, and profound societal impacts
of UAS technology. UAV'08 represents a collaborative effort to advance knowledge, drive innovation,
and harness the potential of unmanned aerial vehicles for the betterment of society.

About the Book


The emergence of Unmanned Aircraft Systems (UAS) has swiftly become a defining feature of modern
technology. The International Symposium on Unmanned Aerial Vehicles (UAV'08) serves as a vital
platform for experts, researchers, engineers, and policymakers to convene and discuss the latest
developments and hurdles in this field. In 2008, a pivotal moment for UAS technology, this
symposium encapsulates the era's significant technological strides, regulatory landmarks, and increased
public awareness. UAV'08 delves into a diverse array of topics, from airframe design to ethical
considerations, reflecting the multifaceted nature of UAS research. Furthermore, it provides a forum to
contemplate the societal implications of UAS deployment, fostering dialogue on privacy, safety, and
ethical use. As readers embark on this exploration of unmanned aircraft systems, they are invited to
join in the quest to advance knowledge, spur innovation, and ensure the responsible and ethical
deployment of UAS technology for the greater good of society.
