Obstacle Avoidance for Drones Using a 3DVFH Algorithm
Master Thesis, 2018
Author: Tanja Baumann
Student supervisor: Lorenz Meier
Supervising lecturer: Marc Pollefeys
Contents

Preface
Abstract
Symbols
1 Introduction
2 Related Work
2.1 Global Obstacle Avoidance
2.2 Local Obstacle Avoidance
2.3 Impact on this Thesis
3 Approach
3.1 The 3DVFH Algorithm
3.2 Building a Memory for 3DVFH
3.2.1 Re-projection of Histogram into 3D Points
3.2.2 Building a Histogram from 3D Points
3.2.3 Combining Memory Histogram and New Histogram
3.3 Adaptions for Safety and Corner Cases
3.3.1 Move only in Directions Inside the FOV
3.3.2 Safety Measure: Back-off
3.3.3 Safety Margin in Histogram Dependent on Obstacle Distance
3.3.4 Adaption of Cost-Parameters Dependent on Progress
3.3.5 Sphere Avoidance - Minimum Distance to Obstacles
3.4 Ground Detection
3.4.1 Building an Internal Height Map
3.4.2 Height Control without Additional Obstacles
3.4.3 Height Control in Combination with Histogram
3.5 Look Ahead: 3DVFH* with Memory
3.5.1 Algorithm Structure
3.5.2 Reuse Old Path
3.5.3 Cost and Heuristic Functions
4 Results
4.1 Experimental Setup
4.1.1 Simulation Setup
4.1.2 Real-World Platform
4.1.3 Local Planner Node
4.2 3DVFH with Memory
4.3 Ground Detection
4.3.1 Simulation Results
4.3.2 Real-World Ground Map
4.4 Sphere Avoidance
4.5 3DVFH* with Memory
4.5.1 Simulation Results
4.5.2 Outdoor Flight Results
5 Conclusion
5.1 Outlook
Bibliography
A Appendix
A.1 Additional Results Graphics
A.1.1 Memory 3DVFH Simulation
Preface
Unmanned Aerial Vehicles (UAVs) have many advantages over legged or wheeled
systems. They are able to operate in a less cluttered environment up in the sky
and they can achieve much higher velocities. Therefore, UAVs are valuable for a
variety of tasks from consumer applications to search and rescue missions. The
use of UAVs in commercial applications such as surveying or delivery has also
increased significantly in the last few years. Special interest lies in autonomous
UAV applications. Autonomous flight poses additional challenges as the trajectory
has to be planned and crashes with obstacles must be avoided. Obstacle avoidance
has been a very active field of research for many years. The algorithms have evolved
and become more powerful. However, the problem is not yet solved conclusively
and there is still a vast potential for improvement. This thesis is focused on the
development of an obstacle avoidance algorithm for real-time flight on UAVs.
Abstract
Unmanned Aerial Vehicles (UAVs) possess a vast potential for various autonomous
applications such as surveying or delivery. When considering autonomous flight
operations one of the most essential requirements certainly is a reliable obstacle
avoidance mechanism. Obstacle avoidance is a very active field of research without
any conclusive solutions so far. This thesis introduces the 3DVFH* obstacle avoid-
ance algorithm suitable for real-time application on UAVs. The algorithm combines
the ideas behind the previously presented 3DVFH+ and the VFH* algorithm with
a novel memory strategy. The 3DVFH* algorithm computes obstacle avoidance
maneuvers in a purely reactive manner without the need to build a global map of
the environment. The memory strategy keeps track of previously seen obstacles by
propagating the previous polar histogram to the current location. Various features,
such as ground detection and safety mechanisms, have been implemented to increase
the robustness and render application on UAVs possible. The 3DVFH* algorithm
has been shown to effectively avoid obstacles in complex simulation scenarios ex-
hibiting a look-ahead capability. The ground detection was shown to be able to keep
a specified minimum distance to the ground. Real-world flight tests were performed
by running the 3DVFH* algorithm on-board the Intel® Aero Ready to Fly drone
equipped with only one forward facing camera. The drone was able to successfully
avoid an obstacle in all of the 26 test flights.
Acknowledgements
I would like to thank Lorenz Meier for the supervision of this thesis. Without his
guidance, resources and connections this project would not have been possible. I
would also like to thank all the people from the Advanced Technology Labs AG who
were always available to brainstorm and lend a hand wherever needed. I would especially like to thank Christoph Tobler for providing his piloting skills and for joining me to defy wind and cold during the various hours of outdoor tests. Special thanks goes to Martina Rivizzigno, who has not only supported me throughout this thesis but has also become a great friend. I am also deeply grateful to my parents, who have always encouraged me to work towards my goals and supported me through everything.
Symbols

ε      elevation angle
ζ      azimuth angle

Indices

x      x axis
y      y axis
z      z axis
old    data from last computation cycle
Chapter 1
Introduction
Originally, there have been two different approaches to obstacle avoidance. The first approach could be called global obstacle avoidance. It either has access to a map of the environment or builds it at runtime from sensor data. From this map a complete path to the goal is calculated using a path-finding algorithm. Those global obstacle avoidance algorithms are usually too computationally expensive to run on board a regular-size UAV; they require large systems or the use of a ground station. On the other hand there are the local or reactive approaches. They do not build a map but act on the sensor data gathered at the current time-step. This usually makes them unable to find the optimal path; however, those algorithms are fast enough to run on board a UAV. A frequently used method for local obstacle avoidance is the Vector Field Histogram (VFH) [1, 2]. The work in this thesis is based on the VFH method and extends it towards a global approach. A variant of the VFH algorithm called VFH* is used [3]. The VFH* algorithm builds a look-ahead tree and therefore combines the local method with an A* path planner. So far, the VFH* method has only been used in 2D environments. We extend it to 3D space to make it usable for UAV applications. In addition, a memory is added to the VFH method to keep track of previously seen obstacles. Those changes allow the use of some advantages inherent to global approaches without sacrificing the low computational effort of the algorithm. The goal of this thesis is to show successful obstacle avoidance in outdoor flights of a small UAV.
Chapter 2
Related Work
Obstacle avoidance algorithms have received a lot of attention over the last few
years, but the foundations of obstacle avoidance research go back to the 1990s. At the beginning, algorithms were only designed for robots moving in a 2D environment; as robotic hardware has evolved, many of them have recently been adapted to consider 3D environments. There exist two main categories
of obstacle avoidance algorithms. On one hand there are the local or reactive
approaches. Those algorithms do not build a map of the environment or save
obstacle positions. They calculate the best reaction from the current sensor data.
The advantage of local algorithms is that they have low computational cost, which
is crucial to run on-board small robotic devices. However, as they do not plan
a complete path they are unable to find the optimal path to the goal. Those
algorithms are also prone to get stuck in local minima. On the other hand there are
the global approaches. Those algorithms have access to a map of the environment
which is already precomputed or they build the map themselves while encountering
obstacles. Often path planning algorithms, such as dynamic programming or A* algorithms, are used to determine the best path from the given map. Those global
algorithms are less prone to get stuck in local minima as they consider all the
available information about the terrain. In the case where the whole environment
is mapped they might even yield an optimal solution. However, global methods
require a lot of computation power. Goerzen et al. [4] give an overview of the
most common obstacle avoidance algorithms, which is focused mostly on global
approaches. They further distinguish between algorithms which take kinematic and
dynamic constraints of the robotic platform into account and those which do not.
In the next paragraphs research in global and local obstacle avoidance algorithms
is summarized.
motion primitives. Fraundorfer et al. [8] use a UAV for autonomous mapping and
exploration of an unknown environment. The UAV is equipped with a stereo camera
from which an occupancy map is built on a ground station using SLAM with loop
closure and re-localization. A frontier algorithm determines the new goal for the
UAV and the VFH+ algorithm is used to navigate around obstacles. Mohammadi
et al. [9] use a similar method in 2D to find a path for a surgical needle in a 2D
environment. The map is given by the X-Ray image and the space is tessellated
into hexagonal cells as hexagons provide smoother paths than squares. This grid is
searched for the optimal path using an A* algorithm. Stachniss and Burgard [10]
use an A* algorithm to find a path for a ground robot in 2D space. On this path
a subgoal is defined and another A* algorithm is used to search the five dimen-
sional configuration space, providing a look-ahead not only in position, but also in
velocity and orientation. This algorithm has been compared to the Dynamic Win-
dow Approach (DWA) introduced by Brock and Khatib [11]. The DWA algorithm
determines the feasible velocity space such that only velocities are considered which are reachable in the next time-step and allow the robot to stop in front of obstacles.
The look-ahead tree in [10] enables the robot to slow down before an opportunity
to turn comes up, whereas the DWA overshoots and misses the turn because it does
not reduce the velocity beforehand.
There have also been studies using Extended Kalman Filters (EKF) to track distinct
obstacles to be able to cope with dynamic scenes. The final path is then calculated
by applying dynamic programming [12] or by drawing a collision cone around the
obstacles and limiting the velocity space to vectors on the outside [13]. Sasiadek
and Duleba [14] use a two-layer approach, where first a coarse path is planned and
then individual obstacles are traced at a safe distance.
Another method to build search trees uses probabilistic properties in the hope of
reducing the computational cost of the algorithm. Hrabar [15] draws a probabilistic
road-map into the occupancy grid by randomly adding points and connecting them
if a path between them exists. This tree is then searched using a D* Lite algorithm
which does not recalculate the path at every time-step but only updates it when
new information becomes available. Kuffner and LaValle [16] use two random trees
in a given map, one grown from the start position and one from the goal. Those
trees are expanded alternately until they can be connected. The simulation re-
turns a feasible path in position and orientation space.
The VFH method [1, 2] has seen many adaptions and variants. Song and Huang [20] use an optical flow algorithm to generate the polar histogram and adapt safety margins around obstacles to their
distance. Yamauchi [21] also uses distance dependent safety margins around ob-
stacles. The Mobile and Static Vector Field Method (MSV) [22] is an adaption
to VFH+ which is able to cope with moving obstacles by estimating their velocity
from consecutive histograms. Another variant is the VFH* introduced by Ulrich
and Borenstein [3]. This algorithm combines the advantage of the A* planning
algorithm with the local properties of the VFH+ algorithm. A look-ahead tree is
built where at every node the VFH+ algorithm is executed to find feasible direc-
tions. The tree is then searched for the best path using the A* algorithm. There
have been attempts to enhance the VFH* algorithm to take moving obstacles into
account. Jie et al. [23] developed the IVFH* algorithm which uses spring-like forces
on the nodes of the search tree to adapt them to moving obstacles. Babinec et al.
[24, 25] determine the velocity of obstacles from several subsequent states and use
this knowledge while building the search tree. So far all VFH algorithms have been
used in a 2D environment for ground-bound robots; however, Vanneste et al. [26]
have developed the 3DVFH+ algorithm. This algorithm cannot be counted as a
completely local algorithm as it builds a map of the environment in the form of an
Octomap [6]. From this Octomap a 3D occupancy map is built by considering only
voxels in a bounding box around the vehicle. From the 3D occupancy map a 2D
polar histogram is built in which openings are detected and evaluated to find the
best direction for moving in a 3D environment.
Some recent studies have also been using machine learning approaches to deter-
mine the correct avoidance maneuver from sensory input. There has been work
to combine the VFH method with machine learning and fuzzy logic [27, 28]. An
early version of a fuzzy logic method influenced by potential fields was presented by
Zavlangas et al. [29]. On the other hand, Ross et al. [30] learn the correct control
input directly from sensor data and expert demonstrations. A continued study [31]
uses receding horizon control to evaluate a preselected set of feasible trajectories on
a cost map, instead of the learnt purely reactive control.
Another popular method used for local obstacle avoidance is the potential field
method, which was first introduced by Khatib [32]. There have been various adap-
tions to this method over the years especially to find good potential functions and
avoid local minima. Waydo [33] used stream functions and considered the velocity
of obstacles for the generation of the potential field in a 2D application. Mac et al.
[34] used the potential field method to navigate a UAV through an indoor environ-
ment.
There are some further propositions for local obstacle avoidance such as the strategy
by Oleynikova et al. [35]. Obstacle positions are determined by converting the
disparity map into a column-accumulated U-map. In this U-map bounding boxes
are fitted around distinctive areas. Those boxes are then converted to ellipses and
projected into the 2D space at the current height of the UAV and inflated by a safety
radius. Flight tests showed that the UAV was able to avoid sparse obstacles when
choosing the waypoints to lie outside the ellipses. Another family of computationally
cheap algorithms for local obstacle avoidance in 2D are the Bug algorithms. A
comparison of different Bug algorithms can be found in [36]. Buniyamin et al.
[37] developed the PointBug algorithm which was able to outperform other Bug
algorithms, especially in environments with few sharp edges and turns. An early
method for local obstacle avoidance is the Curvature-Velocity Method (CVM) by
Simmons [38]. CVM achieves real-time performance by approximating how far
a robot could travel on a given curvature before hitting an obstacle. Additional
velocity constraints can be imposed based on the physical limitations of the robot.
Chapter 3

Approach
A cost function is evaluated for all free cells in the resulting histogram. The cost function contains a goal-orientation term and a smoothing term. The goal-orientation term compares the evaluated direction to the goal direction, while the smoothing term compares the evaluated direction to the direction chosen in the last time-step. The cost function is discussed in more detail in chapter 3.3.4. The histogram cell with the lowest cost is chosen as the direction of movement. When there is no obstacle present, the histogram consists only of unoccupied cells and the UAV is allowed to go faster. In this case the waypoint is chosen to lead directly towards the goal instead of determining it from the histogram. This reduces the computation time, as there is no need to evaluate the cost function for every free cell, and it helps to avoid discretization effects.

Figure 3.1: The elevation angle ε and azimuth angle ζ with respect to the UAV position are calculated for every 3D point. The points are then mapped to the corresponding 2D histogram bin.
The old histogram needs to be converted into a histogram at the current drone location. This can be achieved by re-projecting the occupied cells in the old histogram into 3D points and then building a memory histogram from those 3D points. To get the re-projected points, the old histogram is scanned for occupied cells. Each of those occupied cells is characterized by an elevation angle ε and an azimuth angle ζ. In addition to those angles, a distance is stored in the distance layer of the old histogram and an age in the age layer. If the age of the occupied cell does not exceed a defined limit, the cell is re-projected into four 3D points corresponding to the corners of the cell. This process is shown in figure 3.3. The elevation and azimuth angles of the four corner points can be calculated by adding/subtracting half the angular resolution α of the histogram to/from the elevation and azimuth angles of the occupied cell, as shown in equation 3.1. Those four points can then be re-projected into 3D space using equation 3.2.
ε1 = ε + α/2    ζ1 = ζ + α/2
ε2 = ε − α/2    ζ2 = ζ + α/2    (3.1)
ε3 = ε + α/2    ζ3 = ζ − α/2
ε4 = ε − α/2    ζ4 = ζ − α/2

pi,x = posold,x + d · cos(εi · π/180) · sin(ζi · π/180)
pi,y = posold,y + d · cos(εi · π/180) · cos(ζi · π/180)    (3.2)
pi,z = posold,z + d · sin(εi · π/180)
To build a histogram from 3D points, the azimuth and elevation angles of every point with respect to the current drone position pos are calculated according to equations 3.3 and 3.4. The bin index for an angle β (standing for either ε or ζ) is then determined using equations 3.5 and 3.6, where α is the angular resolution of the histogram.

ζ = 180/π · atan2(px − posx , py − posy)    (3.3)

ε = 180/π · atan( (pz − posz) / sqrt((px − posx)² + (py − posy)²) )    (3.4)

βtemp = β + α − (β mod α)    (3.5)

βindex = βtemp / α − 1    (3.6)
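The re-projection and binning steps can be summarized in a short sketch. The following C++ fragment is a minimal illustration of equations 3.1-3.6 and not the thesis implementation; the struct layout, the function names and the shift of the angles into a positive range before discretization are assumptions.

```cpp
// Illustrative sketch of the cell re-projection (eq. 3.1, 3.2) and of mapping
// a 3D point to histogram bin indices (eq. 3.3-3.6). Angles are in degrees,
// alpha is the angular resolution of the histogram.
#include <array>
#include <cmath>
#include <utility>

struct Vec3 { double x, y, z; };

constexpr double kDegToRad = 3.14159265358979323846 / 180.0;

// Re-project one occupied cell (elevation eps, azimuth zeta, stored distance d)
// seen from the old position into its four 3D corner points (eq. 3.1, 3.2).
std::array<Vec3, 4> reprojectCell(double eps, double zeta, double d,
                                  const Vec3& pos_old, double alpha) {
  const std::array<std::pair<double, double>, 4> corners = {{
      {eps + alpha / 2.0, zeta + alpha / 2.0},
      {eps - alpha / 2.0, zeta + alpha / 2.0},
      {eps + alpha / 2.0, zeta - alpha / 2.0},
      {eps - alpha / 2.0, zeta - alpha / 2.0}}};
  std::array<Vec3, 4> points{};
  for (int i = 0; i < 4; ++i) {
    const double e = corners[i].first * kDegToRad;
    const double z = corners[i].second * kDegToRad;
    points[i] = {pos_old.x + d * std::cos(e) * std::sin(z),
                 pos_old.y + d * std::cos(e) * std::cos(z),
                 pos_old.z + d * std::sin(e)};
  }
  return points;
}

// Map a 3D point to bin indices as seen from the current position pos.
// The shift of the angles into a positive range before applying eq. 3.5/3.6
// is an assumption about the binning convention.
void pointToBin(const Vec3& p, const Vec3& pos, double alpha,
                int& eps_index, int& zeta_index) {
  const double dx = p.x - pos.x, dy = p.y - pos.y, dz = p.z - pos.z;
  const double zeta = std::atan2(dx, dy) / kDegToRad;                           // eq. 3.3
  const double eps = std::atan2(dz, std::sqrt(dx * dx + dy * dy)) / kDegToRad;  // eq. 3.4
  auto toIndex = [alpha](double beta) {
    const double beta_temp = beta + alpha - std::fmod(beta, alpha);             // eq. 3.5
    return static_cast<int>(beta_temp / alpha) - 1;                             // eq. 3.6
  };
  eps_index = toIndex(eps + 90.0);     // elevation shifted to [0, 180)
  zeta_index = toIndex(zeta + 180.0);  // azimuth shifted to [0, 360)
}
```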
This procedure is used twice in the process of getting the combined histogram
for navigation: To get the memory histogram from the re-projected points and to
calculate the new histogram from the point-cloud provided by the stereo camera.
In both cases the primary polar histogram is calculated in the same way. However,
the thresholds for conversion to a binary histogram are different as the total 3D
point count largely differs. Also the cell size of the memory histogram is larger as
it is built at half the resolution. For the new histogram, a bin in the binary layer is
set to occupied if at least one 3D point falls into it. The corresponding value in the
distance layer is set to the mean of the distances of all points which fall into that
bin. The value in the age layer is always set to zero, as this histogram is generated
from new data. The memory histogram on the other hand is generated at half the
resolution of all the other histograms and then up-sampled. A cell in the binary
layer is considered as occupied, if at least six points fall into it. This threshold needs
to be chosen with care. If every cell of the old histogram is split into four 3D points,
then six is the only admissible value. If the threshold is chosen lower than six, the
memory histogram has more occupied cells than the old histogram. This causes
the combined histogram to converge to a completely occupied histogram. However,
if the threshold is chosen too large, the memory histogram will have fewer occupied
cells than the old histogram. The memory effect would be minimized and especially
the borders of obstacles would be neglected. This effect is visualized in figure 3.4.
So the optimal choice for the threshold is the largest number which does not lead
to a convergence to a full histogram. For the distance layer, the same approach as
for the new histogram is used. The distance value is therefore defined as the mean
distance of all the points in the bin. The age value is also determined as the mean
age of all points in the bin. After the memory histogram is created at half the
resolution, it is up-sampled to match the full resolution of the new histogram. The
resolution change makes the method less susceptible to discretization errors.
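As a rough illustration of how the three histogram layers can be filled from a set of 3D points, the following sketch accumulates points into bins and applies the point-count threshold discussed above (one point for the new histogram, six points for the half-resolution memory histogram). The class layout and names are assumptions, not the thesis code.

```cpp
// Minimal sketch of the binary, distance and age layers built from 3D points.
// For the new histogram the age of every point is zero; for the memory
// histogram the re-projected points carry the age stored in the old histogram.
#include <vector>

struct HistogramLayers {
  int rows, cols;               // elevation bins x azimuth bins
  std::vector<int> count;       // number of points per bin
  std::vector<int> binary;      // 1 = occupied
  std::vector<double> dist;     // mean distance of the points in the bin
  std::vector<double> age;      // mean age of the points in the bin

  HistogramLayers(int r, int c)
      : rows(r), cols(c), count(r * c, 0), binary(r * c, 0),
        dist(r * c, 0.0), age(r * c, 0.0) {}

  void addPoint(int eps_index, int zeta_index, double distance, double point_age) {
    const int idx = eps_index * cols + zeta_index;
    ++count[idx];
    dist[idx] += distance;  // accumulated here, divided by count in finalize()
    age[idx] += point_age;
  }

  // min_points = 1 for the new histogram, 6 for the memory histogram
  // (one occupied old cell contributes four re-projected corner points).
  void finalize(int min_points) {
    for (int idx = 0; idx < rows * cols; ++idx) {
      if (count[idx] >= min_points) {
        binary[idx] = 1;
        dist[idx] /= count[idx];
        age[idx] /= count[idx];
      } else {
        binary[idx] = 0;
        dist[idx] = 0.0;
        age[idx] = 0.0;
      }
    }
  }
};
```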
Figure 3.2: Strategy to include previously seen data into the current histogram.
The original 3DVFH algorithm uses the new histogram only, while the memory
approach enhances the algorithm by combining the new histogram with propagated
data from the last time-step.
Figure 3.3: Each occupied cell in the old histogram corresponds to an obstacle at
the corresponding elevation and azimuth angle. Using the stored distance d the oc-
cupied cell can be re-projected into its four corner points in 3D space to approximate
the location of the original obstacle.
Figure 3.4: The threshold for the minimum number of points which make up an
occupied cell needs to be chosen carefully. It could lead to no memory effect if chosen
too large or to convergence to a completely full histogram if chosen too small. In
this figure, if the threshold is chosen at one point, the memory histogram would have
three occupied cells. If the threshold is chosen at two, the memory histogram would
have one occupied cell. And for all thresholds larger than two the memory histogram
would be empty.
Figure 3.5: An example on how to combine the new histogram and the memory
histogram. The blue area marks the FOV.
Figure 3.6: The yellow area marks the FOV of the UAV. Once a direction is
chosen, new parts of the obstacle become visible (red part of the obstacle is what
is known at position 2) and cause the other direction to become favorable. The forward component in the movement direction leads to a crash if the controller alternates between two choices.
Figure 3.7: In the leftmost figure, the drone is flying along a long obstacle. For
large obstacles, the drone will come quite close due to the forward component of the
flying direction. If the obstacle is too long on the left side, a change of direction
becomes more attractive and the drone hovers and yaws to the right as shown in the
middle. If the obstacle lies closer than the minimum perception distance, a part of
the obstacle will not be seen and the drone flies straight into it.
been perceived and stored in the memory. Therefore, the drone will choose the better direction from this information, turn and move. In real-world tests the drone might turn more than once in each direction due to the noise in the point-cloud. This feature does not solve the problem of switching between possibilities, but it eliminates the risk of crashing emanating from it.
Figure 3.8: The occupied histogram cells are marked in red and cells inside the
safety margin are marked in blue. All colored cells are not eligible as candidate
directions.
Figure 3.9: The occupied histogram cells are marked in red and cells inside the
safety margin are marked in blue. The active candidate is marked in green and
projected to the same distance as the goal. The distances between the projected
candidate and the goal determine the values used to calculate the goal cost of the
active candidate.
cgoal = ∆yaw(g, p) + kup · ∆pitch_up(g, p) + kdown · ∆pitch_down(g, p)    (3.7)

csmooth = ∆yaw(pold, p) + ∆pitch(pold, p)    (3.8)

ctot = kgoal · cgoal + ksmooth · csmooth    (3.9)
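A compact sketch of the candidate cost of equations 3.7-3.9 is given below. Note that the thesis measures ∆yaw and ∆pitch as distances between the candidate projected to the goal distance and the goal itself (figure 3.9); the sketch uses plain angular differences as a simplified stand-in, and the wrap-around handling is an assumption.

```cpp
// Simplified candidate cost following the structure of eq. 3.7-3.9.
#include <cmath>

struct Direction { double yaw_deg, pitch_deg; };  // azimuth / elevation of a direction

double angleDiff(double a, double b) {            // smallest difference in degrees
  const double d = std::fabs(a - b);
  return d > 180.0 ? 360.0 - d : d;
}

double candidateCost(const Direction& cand, const Direction& goal,
                     const Direction& last, double k_goal, double k_smooth,
                     double k_up, double k_down) {
  // Goal cost (eq. 3.7): deviations above the goal are weighted with k_up,
  // deviations below with k_down.
  const double d_yaw = angleDiff(cand.yaw_deg, goal.yaw_deg);
  const double d_pitch = cand.pitch_deg - goal.pitch_deg;
  const double c_goal =
      d_yaw + (d_pitch > 0.0 ? k_up * d_pitch : k_down * (-d_pitch));

  // Smoothing cost (eq. 3.8): deviation from the direction of the last time-step.
  const double c_smooth = angleDiff(cand.yaw_deg, last.yaw_deg) +
                          angleDiff(cand.pitch_deg, last.pitch_deg);

  // Total cost (eq. 3.9).
  return k_goal * c_goal + k_smooth * c_smooth;
}
```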
A critical value skrit for the slope is chosen, as well as the boundary values on the cost parameter kup. If the current filtered slope value is below skrit, the cost factor kup is increased with a rate of ∆inc per time-step until it reaches the maximum value kup max. If the current filtered slope lies above the critical value, the cost factor is reduced by ∆dec per time-step until it reaches the minimum value kup min. The cost parameter kup is therefore calculated according to equation 3.11. In table 3.1 the parameter values used for this study are summarized.

s[n] = ∆d/∆t = (d[n] − d[n−1]) / (t[n] − t[n−1])    (3.10)

kup[n] = kup[n−1] + ∆inc ,  if s[n] < skrit and kup[n−1] < kup max
kup[n] = kup[n−1] − ∆dec ,  if s[n] > skrit and kup[n−1] > kup min    (3.11)
kup[n] = kup[n−1] ,         otherwise
Parameter value
kgoal 2.0
ksmooth 1.5
kdown 4.0
kup max 4.0
kup min 0.75
∆inc 0.3
∆dec 0.2
skrit -0.0007
nM A 50
Figure 3.10: Sphere avoidance geometry, showing the UAV position, the sphere center csphere, the sphere radius rsphere and the radius rsphere_cloud.
The sphere avoidance constraint interferes with the avoidance algorithm only at
the very end of the pipeline. The next waypoint wporig is calculated either from
the histogram (if there are points in the point-cloud) or leads directly towards
the goal (if there is no perceived obstacle). After the desired waypoint has been
calculated, the sphere avoidance checks it for validity and adjusts it if necessary.
The existing sphere is enlarged by a factor khyst to implement a hysteresis in the
influence of the sphere. All waypoints wporig lying inside the radius khyst · rsphere
will be modified by the sphere avoidance feature. If a waypoint lies inside the
hysteresis sphere, it will first be modified in the z-component. The temporary
waypoint wptemp has the same x- and y-components as the original waypoint. The
z-component is calculated according to equation 3.12. This shift in height causes
the waypoint to lie closer to the equator. The point will later be projected onto the sphere surface, and points closer to the equator favor avoiding obstacles by flying around them instead of over or under them.
wptemp,z = wporig,z + 0.25 · |wporig,z − csphere,z| ,  if wporig,z < csphere,z
wptemp,z = wporig,z − 0.25 · |wporig,z − csphere,z| ,  if wporig,z > csphere,z    (3.12)
From this shifted waypoint wptemp , the sphere waypoint is calculated by projecting
it onto the sphere surface. Here, two cases are distinguished: Either, the temporary
waypoint lies inside the actual avoidance sphere, or it lies in the hysteresis shell.
The projection radius rproj will be determined according to equation 3.13.
rproj = rsphere ,              if |wptemp − csphere| < rsphere
rproj = |wptemp − csphere| ,   otherwise    (3.13)
The sphere waypoint wpsphere is the projection of the temporary waypoint onto a
sphere around the center-point with a radius of rproj . The final waypoint can then
be calculated according to equation 3.14.
wpfinal = wpsphere ,                          if |wptemp − csphere| < rsphere
wpfinal = (1 − f) · wpsphere + f · wporig ,   if rsphere ≤ |wptemp − csphere| < khyst · rsphere    (3.14)
wpfinal = wporig ,                            otherwise
The final waypoint wpf inal will be used as a control input. A further advantage
of the sphere avoidance constraint is that the adaption of the waypoint is also
performed when there are no obstacles ahead and the original waypoint would
suggest to go straight to the goal. This causes the drone to remember previously
seen obstacles, even if the noise in the point-cloud temporarily causes the histogram
to be empty. Table 3.2 shows some suggestions for the tuning of the sphere avoidance
parameters.
Parameter        value
rsphere          2.5 m
rsphere cloud    3.5 m
nsphere          20
khyst            1.3
agesphere        100 iterations

Table 3.2: Suggestions for the choice of the sphere avoidance parameters
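The waypoint adjustment of equations 3.12-3.14 can be condensed into one function, sketched below with the parameter suggestions of table 3.2. The constant blending factor f and the vector helpers are placeholder assumptions, not the thesis implementation.

```cpp
// Sketch of the sphere avoidance waypoint adjustment (eq. 3.12-3.14).
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double norm(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

Vec3 sphereAvoidance(const Vec3& wp_orig, const Vec3& c_sphere,
                     double r_sphere = 2.5, double k_hyst = 1.3, double f = 0.5) {
  // Only waypoints inside the enlarged (hysteresis) sphere are modified.
  if (norm(sub(wp_orig, c_sphere)) >= k_hyst * r_sphere) return wp_orig;

  // Eq. 3.12: shift the z-component towards the sphere equator.
  Vec3 wp_temp = wp_orig;
  if (wp_orig.z < c_sphere.z)
    wp_temp.z = wp_orig.z + 0.25 * std::fabs(wp_orig.z - c_sphere.z);
  else if (wp_orig.z > c_sphere.z)
    wp_temp.z = wp_orig.z - 0.25 * std::fabs(wp_orig.z - c_sphere.z);

  const double dist = norm(sub(wp_temp, c_sphere));
  if (dist < 1e-6) return wp_orig;  // degenerate case: waypoint at the sphere center

  // Eq. 3.13: projection radius.
  const double r_proj = (dist < r_sphere) ? r_sphere : dist;

  // Project the shifted waypoint onto the sphere of radius r_proj.
  const Vec3 wp_sphere = add(c_sphere, scale(sub(wp_temp, c_sphere), r_proj / dist));

  // Eq. 3.14: inside the sphere use the projection, inside the hysteresis
  // shell blend between the projection and the original waypoint.
  if (dist < r_sphere) return wp_sphere;
  if (dist < k_hyst * r_sphere)
    return add(scale(wp_sphere, 1.0 - f), scale(wp_orig, f));
  return wp_orig;
}
```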
Figure 3.11: Dimensions of the ground box used to crop the point-cloud.
Lz = 1.5 · hmin
Lx = 2 · Lz / tan(γ)    (3.15)
Ly = 2 · Lz / tan(γ)
The cropped point-cloud is fed into a RANSAC algorithm, which estimates a hor-
izontal plane from the data. The RANSAC algorithm requires an angle difference
prior ∆φ, which defines the maximal deviation of the plane model from the horizon-
tal, and the maximum inlier distance dinlier max to the plane. The resulting plane
is assumed to be valid if the cropped point-cloud is larger than a minimum cloud
size ncloud min and a specified minimum fraction pcloud min of the cropped cloud are
inliers to the fitted plane model. A suggestion for the choice of the ground detection
parameters can be found in table 3.3.
The RANSAC algorithm returns the plane coefficients [a, b, c, d] of the fitted plane
model (3.16) if the calculated plane is valid. From those plane coefficients, the clos-
est point pplane to the drone on the plane and the perpendicular distance dplane of
the plane to the drone can be calculated using the equations 3.17, 3.18.
a·x + b·y + c·z + d = 0    (3.16)

dplane = − (a·px + b·py + c·pz + d) / (a² + b² + c²)    (3.17)

pplane = [px + dplane · a, py + dplane · b, pz + dplane · c]^T    (3.18)
Parameter value
ncloud min 160
pcloud min 0.7
∆φ 20°
dinlier max 0.1
Table 3.3: Suggestions for the choice of the ground detection parameters
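The plane-validity check and the distance computation of equations 3.16-3.18 are simple enough to sketch directly, using the parameter suggestions of table 3.3. The snippet assumes the common convention of normalized plane coefficients from the RANSAC fit; the struct and function names are illustrative.

```cpp
// Sketch of the plane-validity check and of eq. 3.16-3.18.
#include <cstddef>

struct Plane { double a, b, c, d; };  // coefficients of a·x + b·y + c·z + d = 0
struct Vec3 { double x, y, z; };

// Plane is valid if the cropped cloud is large enough and a sufficient
// fraction of it are inliers to the fitted plane model (table 3.3 values).
bool planeValid(std::size_t cropped_cloud_size, std::size_t inlier_count,
                std::size_t n_cloud_min = 160, double p_cloud_min = 0.7) {
  return cropped_cloud_size > n_cloud_min &&
         static_cast<double>(inlier_count) >=
             p_cloud_min * static_cast<double>(cropped_cloud_size);
}

// Perpendicular distance of the drone position p to the plane (eq. 3.17).
double planeDistance(const Plane& pl, const Vec3& p) {
  return -(pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d) /
         (pl.a * pl.a + pl.b * pl.b + pl.c * pl.c);
}

// Closest point on the plane to the drone position (eq. 3.18).
Vec3 closestPlanePoint(const Plane& pl, const Vec3& p) {
  const double d_plane = planeDistance(pl, p);
  return {p.x + d_plane * pl.a, p.y + d_plane * pl.b, p.z + d_plane * pl.c};
}
```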
A rectangular patch is estimated from the inliers by searching for the maximum
and minimum x- and y-positions in the inlier cloud. The boundary coordinates
[xmin , xmax , ymin , ymax ] are saved together with the height of the patch. The height
of the patch can be calculated from the height of the drone and the distance dplane
to the patch. From all those saved patches, a height memory of the environment is
built. When a new patch is found, first the height memory is searched for a patch
with similar height. If it overlaps with the new patch, they are assumed to be the
same surface and are saved as one entry with bounding values encompassing both
patches. If no overlapping patch with similar height is found, the new data is saved
as an individual patch. This knowledge about the height of the terrain can either be
used as an individual controller in the case where there are no additional obstacles
ahead, or it can be combined with the histogram approach to control height and
obstacle avoidance simultaneously. In the following paragraphs both use cases are
discussed.
When the drone is approaching a ground patch stored in the internal height map, it
has to start rising early enough to reach the required height once it is over the patch.
The maximum rising angle emax of the drone is set to half the vertical FOV, which
for the Realsense is 24°. If rising at a steeper angle, a collision-free path cannot be
guaranteed. Therefore, the drone needs to start adjusting its height as soon as it
enters a margin m in front of a patch. This margin is dependent on the minimum
distance to ground hmin , the difference of height between the UAV and the ground
patch as well as on the rise angle. It can be calculated using the equation 3.19.
m = (pz − hpatch − hmin) / tan(emax)    (3.19)
Once the drone has entered an area of a patch enlarged by the margin m in flight
direction, the margin is set to be fixed and the drone is considered to be above the
concerned ground patch. A continuously adapting margin would lead to oscillations,
as the margin decreases as the drone height increases. When the drone is over an
obstacle, the minimum flight height is calculated by adding the specified minimum
height over ground hmin to the obstacle height hpatch . The drone altitude is forced
to the required value by assigning one of the following three states:
1. drone height is lower than the minimum flight height
2. drone height is high enough, but very close to the minimum flight height
3. drone height is high enough and not close to the minimum flight height
In case 1 the new waypoint is set in the x-y direction of the goal, but such that the
drone will be rising at an angle emax . In case 2 the drone is not allowed to sink
any further. Therefore, the suggested waypoint towards the goal is only modified
if it would lead the drone downwards. In case 3 the original waypoint directly towards the goal is kept. To avoid oscillations, hysteresis between all three cases is implemented.
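The three-state height control can be sketched as a small state machine. The hysteresis band, the helper names and the way the rising waypoint is formed below are illustrative assumptions rather than the thesis implementation.

```cpp
// Sketch of the three height-control states described above.
#include <cmath>

constexpr double kDegToRad = 3.14159265358979323846 / 180.0;

enum class HeightState { TooLow, CloseToMinimum, HighEnough };

HeightState classifyHeight(double drone_z, double h_patch, double h_min,
                           double close_band = 0.5 /* m, assumed hysteresis band */) {
  const double min_flight_height = h_patch + h_min;
  if (drone_z < min_flight_height) return HeightState::TooLow;     // case 1
  if (drone_z < min_flight_height + close_band)
    return HeightState::CloseToMinimum;                            // case 2
  return HeightState::HighEnough;                                  // case 3
}

// Adjust only the z-component of the waypoint towards the goal.
double adjustWaypointZ(HeightState state, double drone_z, double goal_wp_z,
                       double horizontal_step, double e_max_deg = 24.0) {
  switch (state) {
    case HeightState::TooLow:
      // Case 1: keep the x-y direction towards the goal but rise at angle e_max.
      return drone_z + horizontal_step * std::tan(e_max_deg * kDegToRad);
    case HeightState::CloseToMinimum:
      // Case 2: the drone is not allowed to sink any further.
      return goal_wp_z < drone_z ? drone_z : goal_wp_z;
    case HeightState::HighEnough:
    default:
      // Case 3: keep the original waypoint towards the goal.
      return goal_wp_z;
  }
}
```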
For the combination with the histogram, the margin is calculated as in equation 3.19, but considering the discretization of the histogram in the desired
rise angle. The same three cases are distinguished. In the first case, where the
drone is flying too low, all histogram cells which lie at an elevation angle lower than
emax are blocked such that they are no longer valid candidates. This means the
drone has no other possibility than to rise at an angle emax in a direction where
the corresponding histogram cell is free. In the second case, all candidate cells at
an elevation angle lower than zero are discarded such that the drone cannot choose
to drop in altitude. For the third case, the candidates are determined from the
histogram without interference of the height controller.
To expand a node, the complete point cloud is first cropped around the current node
position. The points of the cropped point-cloud are checked for their distance to the drone, the same as for the back-off feature discussed in chapter 3.3.2. As the current
node is just a proposal of a possible movement, it does not require a reaction if it
is too close to an obstacle. But if the node is too close to an obstacle, its cost is set
to infinity. Instead of expanding it, the tree will be searched for the next best node.
To expand a node, the FOV at its position is calculated. The histogram is built
as a combination of the histogram of the cropped point-cloud and the histogram of
the re-projected points. To combine those two histograms the FOV of the drone at
the node position is needed. The histogram is down-sampled to half the resolution
to reduce the number of candidate directions and make them more distinct. The
candidate directions are extracted and evaluated according to the same cost func-
tion as used for the 3DVFH algorithm. To reduce the branching factor of the search
tree not all candidates are added as nodes. They are added best-first in the order
Figure 3.12: Structure of the VFH* part of the algorithm. Building the search tree
is achieved by repeating the histogram approach for every proposed future position
of the drone.
of their cost. A candidate is only added if no close node from the same origin was
added before. The maximum branching factor can be specified by the parameter
bmax . Every added direction produces a new node at a distance of Lnodes from the
current origin. The tree cost function and heuristic function are evaluated for the
new nodes to assign the total cost to every node. Then the whole tree is searched
for the node with the least total cost. The node with the lowest cost is expanded
next if the limit N of expanded nodes is not yet reached. If N nodes are expanded,
the node with the least cost corresponds to the end of the tree path. The best path through the tree will be back-traced from the end node to find the best direction for movement.

Figure 3.13: Logic of the whole algorithm including all the discussed features.
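The expansion and back-tracing logic described above can be summarized in the following skeleton. The histogram construction, candidate extraction and cost terms are hidden behind stub functions and all names are assumptions; the sketch only illustrates the best-first expansion with a limited branching factor and the final back-trace, not the thesis implementation.

```cpp
// Skeleton of the best-first look-ahead tree search.
#include <algorithm>
#include <limits>
#include <vector>

struct Node {
  int parent = -1;          // index of the parent node, -1 for the root
  double total_cost = 0.0;  // accumulated cost plus heuristic
  bool expanded = false;
  // node position, chosen direction, depth, ... omitted
};

// Stubs standing in for the histogram-based parts (assumptions).
bool nodeTooCloseToObstacle(const Node&) { return false; }
std::vector<double> candidateCosts(const Node&) { return {1.0, 2.0, 3.0}; }
Node makeChild(const Node& parent, int parent_index, double cost) {
  Node child;
  child.parent = parent_index;
  child.total_cost = parent.total_cost + cost;
  return child;
}

std::vector<int> searchTree(Node root, int max_expansions, int b_max) {
  std::vector<Node> tree{root};
  for (int e = 0; e < max_expansions; ++e) {
    // Pick the unexpanded node with the lowest total cost.
    int best = -1;
    for (int i = 0; i < static_cast<int>(tree.size()); ++i)
      if (!tree[i].expanded &&
          (best < 0 || tree[i].total_cost < tree[best].total_cost))
        best = i;
    if (best < 0) break;
    tree[best].expanded = true;
    if (nodeTooCloseToObstacle(tree[best])) {
      // Do not expand this node; its cost is set to infinity and the next
      // best node is picked in the following iteration.
      tree[best].total_cost = std::numeric_limits<double>::infinity();
      continue;
    }
    // Add at most b_max candidates as children, best-first in the order of cost.
    std::vector<double> costs = candidateCosts(tree[best]);
    std::sort(costs.begin(), costs.end());
    const int n_children = std::min<int>(b_max, static_cast<int>(costs.size()));
    for (int c = 0; c < n_children; ++c)
      tree.push_back(makeChild(tree[best], best, costs[c]));
  }
  // The cheapest remaining node marks the end of the tree path; back-trace
  // from it to the root.
  int end = -1;
  for (int i = 0; i < static_cast<int>(tree.size()); ++i)
    if (!tree[i].expanded &&
        (end < 0 || tree[i].total_cost < tree[end].total_cost))
      end = i;
  if (end < 0) end = 0;  // no frontier left: fall back to the root
  std::vector<int> path;
  for (int i = end; i != -1; i = tree[i].parent) path.push_back(i);
  std::reverse(path.begin(), path.end());
  return path;
}
```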
The architecture of the whole avoidance algorithm with all the features is shown
in figure 3.13. The logic of the algorithm from the acquisition of the point-cloud
to the output of the next waypoint is displayed. Note that if there is no obstacle
ahead in the current time-step the previously calculated path is reused. Whether
the previously calculated path is still valid is determined from the age of the path
as well as the current drone location compared to the suggested tree path. If the
drone seems to be on a previously calculated tree path, the path will be followed to
the end instead of going directly towards the goal. In cases where the point-cloud
contains a lot of noise and might even be empty for some time-steps, the drone will
stay on the calculated path and not swerve towards the goal. The choice of the
parameters is mostly a compromise between computation time and accuracy of the
calculation. The parameters need to be tuned on the specific hardware to ensure
valid computation times.
Figure 3.14: Extraction of the next waypoint from a previously constructed tree path.
Figure 3.15: Visualization of the different cost terms (target cost, yaw cost, path smooth cost, tree smooth cost). The angle or distance marked in red determines the value of the corresponding cost.
branch to the origin of the node. The last cost term is not an angular term but a
comparison of trees from different time-steps. It compares the current node with
the node at the same depth in the last chosen path. It needs to be scaled in a
way that the magnitude of this term fits to the angular costs. The different cost
terms are visualized in figure 3.15. The different cost terms are weighted with factors
k to determine the importance of each criterion. The cost function is written in
equation 3.24. The sum of the individual weighted cost terms is subject to scaling.
The scaling factor is dependent on the depth of the node. Nodes at higher depth
are less penalized by cost. The discount factor λ needs to be less than one. This
depth dependent scaling is necessary to avoid trajectories which stop shortly before
an obstacle instead of searching for a way.
cn = λ^(depthn) · (ktarget · ctarget + kyaw · cyaw + kpath · cpath + ktree · ctree)    (3.24)
The A* algorithm also demands a heuristic function which estimates the cost of
the cheapest path from a node n to the goal. For a function to be admissible as a
heuristic, it must never overestimate the cost to reach the goal. For simplicity we
use the target cost of the current node as a heuristic.
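Equation 3.24 and the heuristic can be written compactly as follows. The individual cost terms and the weights are assumed to be supplied by the caller; only the depth-dependent discount and the heuristic choice are taken from the text above.

```cpp
// Sketch of the tree node cost (eq. 3.24) and the simple heuristic.
#include <cmath>

struct CostWeights { double k_target, k_yaw, k_path, k_tree, lambda; };

double nodeCost(int depth, double c_target, double c_yaw, double c_path,
                double c_tree, const CostWeights& w) {
  const double discount = std::pow(w.lambda, depth);  // lambda < 1
  return discount * (w.k_target * c_target + w.k_yaw * c_yaw +
                     w.k_path * c_path + w.k_tree * c_tree);
}

// Admissible heuristic: must never overestimate the remaining cost to the
// goal. As in the thesis, simply the target cost of the current node is used.
double heuristic(double c_target) { return c_target; }
```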
Chapter 4
Results
Figure 4.1: The Intel® Aero Ready to Fly drone used for outdoor testing
other nodes and stores the small messages locally until it can update them in the
planner. As the point-cloud requires a lot of memory, it is copied directly to the
planner as soon as possible instead of storing it locally. To avoid race conditions,
it is important that the planner information is only updated when the algorithm is
not running. After the incoming messages are handled, the communication thread
determines if it is necessary to provide an alternative setpoint. In our setup the
variable t_max is set to 0.25 s. If the planner takes longer to provide the next set-
point, the communication thread provides an interim setpoint using data from the
last planner cycle if possible. If the 3DVFH* is running, the previously planned
path is used to determine the interim waypoint. If the 3DVFH algorithm is used,
the last calculated setpoint is repeated. If there is no information available, e.g. before the planner has ever run, the current position is fed as a setpoint. If the planner cannot provide a new setpoint for a time of t_land (in this implementation
15 s) then the drone will go into landing mode.
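The fallback behavior of the communication thread can be sketched as a small decision function. Types, helper names and the exact interface are illustrative assumptions; only the timing thresholds (t_max = 0.25 s, t_land = 15 s) and the order of fallbacks are taken from the description above.

```cpp
// Sketch of the setpoint fallback logic of the communication thread.
#include <optional>

struct Setpoint { double x, y, z; };

struct FallbackConfig {
  double t_max = 0.25;   // seconds: planner must deliver within this time
  double t_land = 15.0;  // seconds: switch to landing mode after this time
};

enum class Action { UseFresh, Wait, UseInterim, Land };

Action chooseAction(double time_since_last_setpoint, bool fresh_setpoint_available,
                    const FallbackConfig& cfg = {}) {
  if (fresh_setpoint_available) return Action::UseFresh;
  if (time_since_last_setpoint > cfg.t_land) return Action::Land;
  if (time_since_last_setpoint > cfg.t_max) return Action::UseInterim;
  return Action::Wait;  // planner is still within its time budget
}

// Interim setpoint: with 3DVFH* follow the previously planned path, with the
// plain 3DVFH repeat the last setpoint; with no data, hold the current position.
Setpoint interimSetpoint(const std::optional<Setpoint>& next_on_old_path,
                         const std::optional<Setpoint>& last_setpoint,
                         const Setpoint& current_position) {
  if (next_on_old_path) return *next_on_old_path;
  if (last_setpoint) return *last_setpoint;
  return current_position;
}
```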
Figure 4.2: Work-flow of the communication thread: It receives messages from the
other nodes and updates the planner thread. If the planner does not produce setpoints
at the required rate, this thread will provide setpoints from previously calculated
cycles or from estimation data.
Figure 4.4 shows the mean travel times to each goal. For all goal locations the
algorithm with memory was faster to reach the goal. The main reason for this
finding is that the original algorithm often gets stuck in locations where passing
left or right seems equally good from the current visual information. As soon as the
drone turns in one direction, the parts of the obstacle on the other side disappear
from the FOV, which in turn makes the other direction more attractive. As the
drone is only allowed to move in directions inside the FOV, it will stay in one place
while turning. Due to the lack of memory, the algorithm has no knowledge about
what it has seen before and it will hover on the spot and turn back and forth.
Therefore, this indecisive behavior increases the time to goal. The algorithm with
memory however will remember the previously seen parts of the obstacle and keep
going into the decided direction. Only if the obstacle is very large in that direction,
the drone might turn back to find another way.
Figure 4.3: Simulated world with rectangular obstacles. The drone will always
start in the same location and move to five randomly chosen goals.
Figure 4.4: Mean travel times for each goal location. The times for the original
algorithm are shown in red, whereas the times for the memory algorithm are shown
in blue.
Figure 4.5: Percentage of failure for each of the goal locations. The memory
algorithm never failed to reach the goal, whereas the original algorithm failed in
24% of all runs.
The statistics in figure 4.5 show the percentage of failed runs for each of the goals.
The algorithm with the memory never failed to reach the goal. However, the original
algorithm failed in 24% of all the runs. The failure cases were mainly due to position
drift when the drone was stuck in an indecisive behavior. Either the drone lost
height and landed on the ground, or it drifted upwards and would have passed over
the obstacle. Both of those cases were treated as failure.
Figure 4.6 shows how often the back-off safety measure was needed to avoid colli-
sions. Those numbers can be understood as the amount of failures that would have
happened if the measure was deactivated. It is obvious that the original algorithm
largely depends on this safety feature. The main reason for this dependency lies
in the same issue as discussed before with the travel times and failures. When the
drone cannot decide which direction to go and yaws a lot, it is prone to the failure
case shown in figure 3.7. The more the drone changes direction without making
any real progress, the closer it comes to the obstacle. The back-off safety measure
prevents crashes for this exact failure case.
Figure 4.7 shows the trajectories of the drone towards goal 1. The memory algorithm
manages to travel on a smooth path towards the goal. The original algorithm on
the other hand gets stuck at a position where an obstacle is directly between the
drone location and the goal. When the drone yaws to change direction, the obstacle points closest to the drone come into view and the drone has to back off to maintain a safe distance. Due to the continuous yawing motion, the position estimate of the drone also drifted more than usual. This is the reason why the trajectories of the
Figure 4.6: Number of situations, where the back-off safety measure was needed
to prevent collisions with the obstacles.
Figure 4.7: Trajectories of the drone flying towards goal 1. The left figure shows
the paths taken by the 3DVFH with memory, whereas the right figure shows the
trajectories of the 3DVFH algorithm without memory.
original algorithm pass the last gap farther to the right. This difference is caused purely by position drift and cannot be observed when looking at the real relative
position of the drone to the obstacle. The trajectories for all the other goals can be
found in the appendix A.1.1.
Figure 4.8: Obstacle avoidance without ground detection. Seen in the x,z-plane
Figure 4.9: Obstacle avoidance without ground detection. Seen in the x,y-plane
Figure 4.10: Obstacle avoidance using ground detection. Seen in the x,z-plane
Figure 4.11: Obstacle avoidance with ground detection. Seen in the x,y-plane
Figure 4.12: Ground as estimated by the drone. Visualization of the patch memory
in the x,z-plane
Figure 4.13: Ground as estimated by the drone. Visualization of the patch memory
in the x,y-plane
Figure 4.14: Ground as estimated by the drone while being carried up a concrete
stair.
Figure 4.15: Ground truth of the terrain which was used to build the internal
height map.
Figure 4.16: Point-cloud of an exemplary object using the stereo camera and
different noise standard deviation values. The coordinate system marks the position
of the drone, where the stereo camera points into the red x-axis.
Figure 4.17: Gazebo visualization of the testing environment. The oak trees have
relatively sparse branches which makes the point-cloud highly irregular.
Figure 4.18: Success rate of the different algorithms in the simulated tree world.
For each standard deviation value 15 runs were performed with sphere avoidance
and 15 without. The goals were chosen randomly, but for both strategies the same
goals were used.
Figure 4.19: Trajectories of the 3DVFH* and the 3DVFH algorithm in avoiding
an L-shaped obstacle. The dashed line indicates the direct path to the goal which is
marked with a red cross.
the branches to the left side. As a consequence the cheapest path takes the drone
to the left side of the obstacle. In figure 4.21 the decision process of the 3DVFH*
is visualized. Taking a closer look at the shape of the trajectories, it can already be
seen in figure 4.19 that the 3DVFH algorithm struggles with the concave obstacle.
The back-off safety measure was invoked once in 50% of the runs. Once the broad
part of the obstacle comes in sight, the drone tries to change direction. Only after it has turned to the left and the wall of the narrow part becomes visible does it continue to pass the obstacle to the right. The 3DVFH algorithm therefore loses time not only by the inefficient path choice (passing the obstacle to the right) but also by
indecision. To quantify the difference in smoothness of the trajectories, the jerk
has been calculated. The magnitude of jerk is depicted in figure 4.20. It becomes
apparent that the 3DVFH* produces smoother trajectories. The magnitude of jerk
between the two strategies differs by a factor of more than two.
Figure 4.20: Magnitude of jerk over time for all the tests runs. The different colors
correspond to different runs. Ten runs have been performed for both algorithms
41 4.5. 3DVFH* with memory
Figure 4.21: Visualization of the look-ahead tree in the 3DVFH* algorithm. The
coordinate system marks the body frame of the drone and the camera is aligned with
the red x-axis. The goal is marked with a yellow sphere. The white surface is the
part of the obstacle which can be seen when the point-cloud is cropped around the
current position of the drone. At every node the whole point-cloud is cropped with
respect to the node position, which is why the nodes on the right stop being expanded.
They would have to avoid the broad part of the obstacle further to the right, which
makes them more costly than the paths to the left. The red line corresponds to the
cheapest path through the search tree.
Parameter                      value
bounding box size (x, y)       10 m
bounding box size (z)          2 m
sphere avoidance radius        2.5 m
max re-projection age          50 iterations
number of expanded nodes       10

Table 4.1: Parameters used for the outdoor flight tests
Figure 4.22: Cardboard obstacle on an empty field used for testing the 3DVFH* al-
gorithm. The colored duct tape provides features for the stereo camera and therefore
improves the visibility of the object.
Figure 4.23: Trajectories of the 26 test flights. All flights started at slightly different initial positions and led to three different goal positions marked in red. The grey box visualizes the cardboard obstacle.
It was found that the runtime of the algorithm largely depends on the core tem-
perature because the clock-rate is throttled as the core temperature rises. Figure
4.24 shows the running time of the obstacle avoidance algorithm on the Intel Aero
compute board using the parameters in table 4.1. The drone was stationary in an
outdoor location where the air temperature was measured to be 5 °C. It can be seen that the algorithm runs significantly faster in the beginning, when the core temperature is still low. As the core temperature converges to 58 °C, the running time converges to a mean of 367.82 ms. For this test the algorithm ran continuously in the obstacle avoidance mode for a total time of 45 min. In a real flight the mean compute time would be significantly lower, as there are many iterations
where there is no obstacle present and the drone can fly directly towards the goal
or along a previously computed path.
Figure 4.24: Running time of the algorithm during obstacle avoidance in a bench-
mark test performed outdoor where the algorithm ran for approximately 45 min. The
air temperature was about 5 ◦C. The red graph shows the core temperature of the
CPU and the dashed blue line marks the convergence value of the algorithm running
time in the given scenario.
Chapter 5
Conclusion
This thesis introduces the 3DVFH* obstacle avoidance algorithm suitable for real-
time application on UAVs. The algorithm combines the ideas behind the 3DVFH+
[26] and the VFH* [3] algorithm with a novel memory strategy. The introduced
memory strategy is shown to be effective for outdoor flight as well as complex sim-
ulation scenarios. Therefore, in contrast to the 3DVFH+ algorithm, the 3DVFH*
algorithm is not dependent on a global map of the environment which reduces the
computational cost of the algorithm. The VFH* algorithm was extended to a 3D
environment and the advantages of the look-ahead functionality were reproduced
in the 3DVFH* algorithm. Due to this look-ahead capability, the 3DVFH* algo-
rithm outperformed the 3DVFH+ in more complex scenarios. A computationally
simple method for ground detection using only data from a forward facing camera
was implemented. The ground detection method was shown to be able to maintain
a minimum distance to the ground. Various features have been implemented to
increase the robustness of the suggested approach and render application on UAVs
possible. From the conducted outdoor flight tests it can be concluded that the
3DVFH* algorithm is able to avoid obstacles robustly and produces smooth flight
behavior. This is the first study to provide a thorough evaluation of UAV flight
using a 3D histogramic method.
5.1 Outlook
The performance of the 3DVFH* is severely limited by the available sensor infor-
mation. For this study the algorithm was implemented on a simple hardware with
one forward facing camera. The safety as well as the flight speed could be increased
by using additional sensors to enlarge the field of view.
The simulation experiments suggested a vast potential of the 3DVFH* algorithm for
complex environments where the performance largely benefits from the look-ahead
functionality. It is therefore planned to test the algorithm in more challenging
outdoor environments.
Bibliography
[1] J. Borenstein and Y. Koren, “The Vector Field Histogram - Fast Obstacle
Avoidance for Mobile Robots,” IEEE Transactions on Robotics and Automa-
tion, vol. 7, no. 3, pp. 278–288, 1991.
[2] I. Ulrich and J. Borenstein, “VFH+: Reliable Obstacle Avoidance for Fast
Mobile Robots,” IEEE International Conference on Robotics and Automation
(ICRA), pp. 1572 – 1577, 1998.
[3] I. Ulrich and J. Borenstein, “VFH*: Local obstacle avoidance with look-
ahead verification,” IEEE International Conference on Robotics and Automa-
tion (ICRA), vol. 2, pp. 1572–1577, 2000.
[11] O. Brock and O. Khatib, “High-speed navigation using the global dynamic win-
dow approach,” IEEE International Conference on Robotics and Automation
(ICRA), vol. 1, pp. 341–346, 1999.
[12] A. Ess, B. Leibe, K. Schindler, and L. Van Gool, “Moving obstacle detection
in highly dynamic scenes,” IEEE International Conference on Robotics and
Automation (ICRA), pp. 56–63, 2009.
[14] J. Z. Sasiadek and I. Duleba, “3D local trajectory planner for UAV,” Journal
of Intelligent and Robotic Systems, vol. 29, no. 2, pp. 191–210, 2000.
[15] S. Hrabar, “3D path planning and stereo-based obstacle avoidance for rotor-
craft UAVs,” IEEE International Conference on Intelligent Robots and Systems
(IROS), pp. 807–814, 2008.
[17] R. Deits and R. Tedrake, “Efficient mixed-integer planning for UAVs in clut-
tered environments,” IEEE International Conference on Robotics and Automa-
tion (ICRA), pp. 42–49, 2016.
[20] K. Song and J. Huang, “Fast optical flow estimation and its application to
real-time obstacle avoidance,” IEEE International Conference on Robotics and
Automation (ICRA), vol. 3, pp. 2891–2896, 2001.
[21] B. Yamauchi, “The Wayfarer modular navigation payload for intelligent robot
infrastructure,” Unmanned Ground Vehicle Technology VII, Proceedings of
International Society for Optics and Photonics, vol. 5804, no. 781, pp. 85–96,
2005.
[22] B. You, J. Qiu, and D. Li, “A novel obstacle avoidance method for low-cost
household mobile robot,” IEEE International Conference on Automation and
Logistics, ICAL, pp. 111–116, 2008.
[33] S. Waydo, “Vehicle motion planning using stream functions,” IEEE Interna-
tional Conference on Robotics and Automation (ICRA), vol. 2, pp. 2484–2491,
2003.
Appendix A

A.1 Additional Results Graphics
Figure A.1: Trajectories of the drone flying towards goal 2. The left figure shows
the paths taken by the 3DVFH with memory, whereas the right figure shows the
trajectories of the 3DVFH algorithm without memory.
Goal 3: [18, 6, 3]
Figure A.2: Trajectories of the drone flying towards goal 3. The left figure shows
the paths taken by the 3DVFH with memory, whereas the right figure shows the
trajectories of the 3DVFH algorithm without memory.
Figure A.3: Trajectories of the drone flying towards goal 4. The left figure shows
the paths taken by the 3DVFH with memory, whereas the right figure shows the
trajectories of the 3DVFH algorithm without memory.
Figure A.4: Trajectories of the drone flying towards goal 5. The left figure shows
the paths taken by the 3DVFH with memory, whereas the right figure shows the
trajectories of the 3DVFH algorithm without memory.