
DeMoN: Depth and Motion Network for Learning Monocular Stereo

Benjamin Ummenhofer*,1   Huizhong Zhou*,1
{ummenhof, zhouh}@cs.uni-freiburg.de

Jonas Uhrig1,2   Nikolaus Mayer1   Eddy Ilg1   Alexey Dosovitskiy1   Thomas Brox1
1 University of Freiburg   2 Daimler AG R&D
{uhrigj, mayern, ilg, dosovits, brox}@cs.uni-freiburg.de

* Equal contribution

Abstract

In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.

Figure 1. Illustration of DeMoN. The input to the network is two successive images from a monocular camera. The network estimates the depth in the first image and the camera motion.

1. Introduction

Structure from motion (SfM) is a long standing task in computer vision. Most existing systems, which represent the state of the art, are carefully engineered pipelines consisting of several consecutive processing steps. A fundamental building block of these pipelines is the computation of the structure and motion for two images. Present implementations of this step have some inherent limitations. For instance, it is common to start with the estimation of the camera motion before inferring the structure of the scene by dense correspondence search. Thus, an incorrect estimate of the camera motion leads to wrong depth predictions. Moreover, the camera motion is estimated from sparse correspondences computed via keypoint detection and descriptor matching. This low-level process is prone to outliers and does not work in non-textured regions. Finally, all existing SfM approaches fail in case of small camera translation. This is because it is hard to integrate priors that could provide reasonable solutions in those degenerate cases.

In this paper, we succeed for the first time in training a convolutional network to jointly estimate the depth and the camera motion from an unconstrained pair of images. This approach is very different from the typical SfM pipeline in that it solves the problems of motion and dense depth estimation jointly. We cannot yet provide a full learning-based system for large-scale SfM, but the two-frame case is a crucial first step towards this goal. In the longer term, the learning approach has large potential, since it integrates naturally all the shape from X approaches: multi-view, silhouettes, texture, shading, defocus, haze. Moreover, strong priors on objects and structure can be learned efficiently from data and regularize the problem in degenerate cases; see Fig. 6 for example. This potential is indicated by our results for the two-frame scenario, where the learning approach clearly outperforms traditional methods.

Convolutional networks recently have shown to excel at depth prediction from a single image [7, 8, 24]. By learning priors about objects and their shapes these networks reach remarkably good performance in restricted evaluation scenarios such as indoor or driving scenes.

However, single-image methods have more problems generalizing to previously unseen types of images. This is because they do not exploit stereopsis. Fig. 9 shows an example, where depth from a single image fails, because the network did not see similar structures before. Our network, which learned to exploit the motion parallax, does not have this restriction and generalizes well to very new scenarios.

To exploit the motion parallax, the network must put the two input images in correspondence. We found that a simple encoder-decoder network fails to make use of stereo: when trained to compute depth from two images it ends up using only one of them. Depth from a single image is a shortcut to satisfy the training objective without putting the two images into correspondence and deriving camera motion and depth from these correspondences.

In this paper, we present a way to avoid this shortcut and elaborate on it to obtain accurate depth maps and camera motion estimates. The key to the problem is an architecture that alternates optical flow estimation with the estimation of camera motion and depth; see Fig. 3. In order to solve for optical flow, the network must use both images. To this end, we adapted the FlowNet architecture [5] to our case. Our network architecture has an iterative part that is comparable to a recurrent network, since weights are shared. Instead of the typical unrolling, which is common practice when training recurrent networks, we append predictions of previous training iterations to the current minibatch. This training technique saves much memory and allows us to include more iterations for training. Another technical contribution of this paper is a special gradient loss to deal with the scale ambiguity in structure from motion. The network was trained on a mixture of real images from a Kinect camera, including the SUN3D dataset [43], and a variety of rendered scenes that we created for this work.

2. Related Work

Estimation of depth and motion from pairs of images goes back to Longuet-Higgins [25]. The underlying 3D geometry is a consolidated field, which is well covered in textbooks [17, 10]. State-of-the-art systems [14, 42] allow for reconstructions of large scenes including whole cities. They consist of a long pipeline of methods, starting with descriptor matching for finding a sparse set of correspondences between images [26], followed by estimating the essential matrix to determine the camera motion. Outliers among the correspondences are typically filtered out via RANSAC [11]. Although these systems use bundle adjustment [39] to jointly optimize camera poses and structure of many images, they depend on the quality of the estimated geometry between image pairs for initialization. Only after estimation of the camera motion and a sparse 3D point cloud, dense depth maps are computed by exploiting the epipolar geometry [4]. LSD-SLAM [9] deviates from this approach by jointly optimizing semi-dense correspondences and depth maps. It considers multiple frames from a short temporal window but does not include bundle adjustment. DTAM [30] can track camera poses reliably for critical motions by matching against dense depth maps. However, an external depth map initialization is required, which in turn relies on classic structure and motion methods.

Camera motion estimation from dense correspondences has been proposed by Valgaerts et al. [41]. In this paper, we deviate completely from these previous approaches by training a single deep network that includes computation of dense correspondences, estimation of depth, and the camera motion between two frames.

Eigen et al. [7] trained a ConvNet to predict depth from a single image. Depth prediction from a single image is an inherently ill-posed problem which can only be solved using priors and semantic understanding of the scene – tasks ConvNets are known to be very good at. Liu et al. [24] combined a ConvNet with a superpixel-based conditional random field, yielding improved results. Our two-frame network also learns to exploit the same cues and priors as the single-frame networks, but in addition it makes use of a pair of images and the motion parallax between those. This enables generalization to arbitrary new scenes.

ConvNets have been trained to replace the descriptor matching module in aforementioned SfM systems [6, 44]. The same idea was used by Žbontar and LeCun [45] to estimate dense disparity maps between stereo images. Computation of dense correspondences with a ConvNet that is trained end-to-end on the task was presented by Dosovitskiy et al. [5]. Mayer et al. [28] applied the same concept to dense disparity estimation in stereo pairs. We, too, make use of the FlowNet idea [5], but in contrast to [28, 45], the motion between the two views is not fixed, but must be estimated to derive depth estimates. This makes the learning problem much more difficult.

Flynn et al. [12] implicitly estimated the 3D structure of a scene from a monocular video using a convolutional network. They assume known camera poses – a large simplification which allows them to use the plane-sweeping approach to interpolate between given views of the scene. Moreover, they never explicitly predict the depth, only RGB images from intermediate viewpoints.

Agrawal et al. [2] and Jayaraman & Grauman [19] applied ConvNets to estimating camera motion. The main focus of these works is not on the camera motion itself, but on learning a feature representation useful for recognition. The accuracy of the estimated camera motion is not competitive with classic methods. Kendall et al. [21] trained a ConvNet for camera relocalization – predicting the location of the camera within a known scene from a single image. This is mainly an instance recognition task and requires retraining for each new scene. All these works do not provide depth estimates.
Figure 2. Overview of the architecture. DeMoN takes an image pair as input and predicts the depth map of the first image and the relative
pose of the second camera. The network consists of a chain of encoder-decoder networks that iterate over optical flow, depth, and egomotion
estimation; see Fig. 3 for details. The refinement network increases the resolution of the final depth map.

Figure 3. Schematic representation of the encoder-decoder pair used in the bootstrapping and iterative network. Inputs with gray font are
only available for the iterative network. The first encoder-decoder predicts optical flow and its confidence from an image pair and previous
estimates. The second encoder-decoder predicts the depth map and surface normals. A fully connected network appended to the encoder
estimates the camera motion r, t and a depth scale factor s. The scale factor s relates the scale of the depth values to the camera motion.

3. Network Architecture

The overall network architecture is shown in Fig. 2. DeMoN is a chain of encoder-decoder networks solving different tasks. The architecture consists of three main components: the bootstrap net, the iterative net and the refinement net. The first two components are pairs of encoder-decoder networks, where the first one computes optical flow while the second one computes depth and camera motion; see Fig. 3. The iterative net is applied recursively to successively refine the estimates of the previous iteration. The last component is a single encoder-decoder network that generates the final upsampled and refined depth map.

Bootstrap net. The bootstrap component gets the image pair as input and outputs the initial depth and motion estimates. Internally, first an encoder-decoder network computes optical flow and a confidence map for the flow (the left part of Fig. 3). The encoder consists of pairs of convolutional layers with 1D filters in y and x-direction. Using pairs of 1D filters as suggested in [37] allows us to use spatially large filters while keeping the number of parameters and runtime manageable. We gradually reduce the spatial resolution with a stride of 2 while increasing the number of channels. The decoder part generates the optical flow estimate from the encoder's representation via a series of up-convolutional layers with stride 2 followed by two convolutional layers. It outputs two components of the optical flow field and an estimate of their confidence. Details on the loss and the training procedure are described in Section 5.
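The exact layer configuration is given only in the supplementary material; the following PyTorch sketch illustrates just the 1D-filter-pair idea and the coarse encoder-decoder layout described above. Layer counts, channel widths, kernel sizes and activations are our own assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

def conv_pair_1d(in_ch, out_ch, k=9, stride=2):
    """One encoder stage in the spirit of the text: a pair of 1D convolutions
    (k x 1 in y, then 1 x k in x) that emulates a large 2D filter with fewer
    parameters. Channel widths and kernel size are assumptions."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), stride=(stride, 1), padding=(k // 2, 0)),
        nn.LeakyReLU(0.1, inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), stride=(1, stride), padding=(0, k // 2)),
        nn.LeakyReLU(0.1, inplace=True),
    )

class TinyFlowEncoderDecoder(nn.Module):
    """Minimal stand-in for the first encoder-decoder: encode the concatenated
    image pair, decode to a 2-channel flow plus 2-channel confidence map."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_pair_1d(6, 32)    # image pair: 2 x RGB = 6 channels
        self.enc2 = conv_pair_1d(32, 64)
        self.enc3 = conv_pair_1d(64, 128)
        self.dec1 = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)
        self.pred = nn.Conv2d(64, 4, kernel_size=3, padding=1)  # flow (2) + confidence (2)

    def forward(self, image_pair):
        x = self.enc3(self.enc2(self.enc1(image_pair)))
        x = torch.relu(self.dec1(x))
        return self.pred(x)

net = TinyFlowEncoderDecoder()
out = net(torch.randn(1, 6, 192, 256))   # DeMoN works on 256 x 192 inputs
print(out.shape)                         # -> torch.Size([1, 4, 48, 64]), i.e. 64 x 48 predictions
```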
The second encoder-decoder, shown in the right part of Fig. 3, takes as input the optical flow, its confidence, the image pair, and the second image warped with the estimated flow field. Based on these inputs it estimates depth, surface normals, and camera motion. The architecture is the same as above, apart from the extra 3 fully connected layers that compute the camera motion and a scaling factor for the depth prediction. The latter reflects the inherent connection between depth and motion predictions due to scale ambiguity; see Section 4.

By feeding the optical flow estimate into the second encoder-decoder we let it make use of motion parallax. Tab. 1 shows that an encoder-decoder network trained to estimate depth and camera motion directly from an image pair (naïve image pair) fails to make use of stereo cues and performs on par with a single-image network. DeMoN, on the other hand, performs significantly better.

Method            L1-inv   sc-inv   L1-rel
Single image      0.080    0.159    0.696
Naïve image pair  0.079    0.165    0.722
DeMoN             0.012    0.131    0.097

Table 1. Naïve two-frame depth estimation does not perform better than depth from a single image on any of the error measures (smaller is better). The architecture of DeMoN forces the network to use both images, yielding a large performance improvement.

Iterative net. The iterative net is trained to improve existing depth, normal, and motion estimates.

The architecture of this encoder-decoder pair is identical to the bootstrap net, but it takes additional inputs. We convert the depth map and camera motion estimated by the bootstrap net or a previous iteration of the iterative net into an optical flow field, and feed it into the first encoder-decoder together with other inputs. Likewise, we convert the optical flow to a depth map using the previous camera motion prediction and pass it along with the optical flow to the second encoder-decoder. In both cases the networks are presented with a prediction proposal generated from the predictions of the previous encoder-decoder.
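As an illustration of the first of these conversions, the NumPy sketch below turns an inverse depth map and a relative camera motion into a flow field by back-projecting, rigidly transforming and re-projecting each pixel. The function name and the pinhole intrinsic matrix K are assumptions made for the sketch, not part of DeMoN's published interface.

```python
import numpy as np

def flow_from_depth_and_motion(inv_depth, R, t, K):
    """Convert an inverse depth map of image 1 plus relative motion (R, t)
    into the optical flow from image 1 to image 2 under a pinhole model.
    inv_depth: (H, W) array of positive inverse depths; K: 3x3 intrinsics."""
    h, w = inv_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(np.float64)

    # Back-project: X = depth * K^-1 * pix, with depth = 1 / inverse depth.
    rays = np.linalg.inv(K) @ pix
    X = rays / inv_depth.reshape(1, -1)

    # Transform into the second camera and project again.
    X2 = R @ X + t.reshape(3, 1)
    p2 = K @ X2
    p2 = p2[:2] / p2[2:3]

    flow_u = (p2[0] - pix[0]).reshape(h, w)
    flow_v = (p2[1] - pix[1]).reshape(h, w)
    return np.stack([flow_u, flow_v], axis=0)
```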
Fig. 4 shows how the optical flow and depth improve with each iteration of the network. The iterations enable sharp discontinuities, improve the scale of the depth values, and can even correct wrong estimates of the initial bootstrapping network. The improvements largely saturate after 3 or 4 iterations. Quantitative analysis is shown in the supplementary material.

Figure 4. Top: Iterative depth refinement. The bootstrap net fails to accurately estimate the scale of the depth. The iterations refine the depth prediction and strongly improve the scale of the depth values. The L1-inverse error drops from 0.0137 to 0.0072 after the first iteration. Bottom: Iterative refinement of optical flow. Images show the x component of the optical flow for better visibility. The flow prediction of the bootstrap net misses the object completely. Motion edges are retrieved already in the first iteration and the endpoint error is reduced from 0.0176 to 0.0120.

During training we simulate 4 iterations by appending predictions from previous training iterations to the minibatch. Unlike unrolling, there is no backpropagation of the gradient through iterations. Instead, the gradient of each iteration is described by the losses on the well defined network outputs: optical flow, depth, normals, and camera motion. Compared to backpropagation through time this saves a lot of memory and allows us to have a larger network and more iterations. A similar approach was taken by Li et al. [23], who train each iteration in a separate step and therefore need to store predictions as input for the next stage. We also train the first iteration on its own, but then train all iterations jointly, which avoids intermediate storage.
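A minimal sketch of this training scheme is given below, under assumed tensor-valued network interfaces and a hypothetical loss_fn. The paper appends stored outputs of previous training iterations to the minibatch; the sketch instead recomputes the proposals under no_grad within a single step, which preserves the key property that no gradient flows from one network iteration back into the previous one.

```python
import torch

def train_step(bootstrap_net, iterative_net, optimizer, images, loss_fn):
    """One optimization step of the iterative net on detached prediction
    proposals (a simplified stand-in for appending stored predictions
    from earlier training iterations to the minibatch)."""
    with torch.no_grad():                          # proposals carry no gradient
        proposals = [bootstrap_net(images)]
        for _ in range(3):                         # simulate 4 iterations in total
            proposals.append(iterative_net(images, proposals[-1]))

    optimizer.zero_grad()
    # Each proposal is supervised independently; the gradient stops at the proposals,
    # so there is no backpropagation through time across iterations.
    loss = sum(loss_fn(iterative_net(images, p)) for p in proposals)
    loss.backward()
    optimizer.step()
    return float(loss)
```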
Refinement net. While the previous network components operate at a reduced resolution of 64 × 48 to save parameters and to reduce training and test time, the final refinement net upscales the predictions to the full input image resolution (256 × 192). It gets as input the full resolution first image and the nearest-neighbor-upsampled depth and normal field. Fig. 5 shows the low-resolution input and the refined high-resolution output.

Figure 5. The refinement net generates a high-resolution depth map (256 × 192) from the low-resolution estimate (64 × 48) and the input image. The depth sampling preserves depth edges and can even repair wrong depth measurements.

A forward pass through the network with 3 iterations takes 110 ms on an Nvidia GTX Titan X. Implementation details and exact network definitions of all network components are provided in the supplementary material.
4. Depth and Motion Parameterization

The network computes the depth map in the first view and the camera motion to the second view. We represent the relative pose of the second camera with r, t ∈ R³. The rotation r = θv is an angle-axis representation with angle θ and axis v. The translation t is given in Cartesian coordinates.

It is well known that the reconstruction of a scene from images with unknown camera motion can be determined only up to scale. We resolve the scale ambiguity by normalizing translations and depth values such that ||t|| = 1. This way the network learns to predict a unit norm translation vector.

Rather than the depth z, the network estimates the inverse depth ξ = 1/z. The inverse depth allows representation of points at infinity and accounts for the growing localization uncertainty of points with increasing distance. To match the unit translation, our network predicts a scalar scaling factor s, which we use to obtain the final depth values sξ.
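The following NumPy sketch makes the parameterization concrete: Rodrigues' formula recovers the rotation matrix from the angle-axis vector r, and ground-truth translation and depth are rescaled jointly so that ||t|| = 1. Helper names are illustrative and not taken from the DeMoN code.

```python
import numpy as np

def rotation_matrix_from_angle_axis(r):
    """Rodrigues' formula for r = theta * v with unit axis v and angle theta."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    v = r / theta
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def normalize_ground_truth(t_gt, depth_gt):
    """Resolve the scale ambiguity as described above: scale the translation to
    unit norm and scale the depth map by the same factor, then work with the
    inverse depth xi = 1/z."""
    scale = np.linalg.norm(t_gt)
    t_unit = t_gt / scale
    xi_gt = scale / depth_gt          # inverse of the rescaled depth z / scale
    return t_unit, xi_gt

# The network predicts a unit translation t, an inverse depth map xi and a
# scalar s; the final scale-normalized inverse depth is s * xi.
```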
5. Training Procedure

5.1. Loss functions

The network estimates outputs of very different nature: high-dimensional (per-pixel) depth maps and low-dimensional camera motion vectors. The loss has to balance both of these objectives and stimulate synergy of the two tasks without over-fitting to a specific scenario.

Point-wise losses. We apply point-wise losses to our outputs: inverse depth ξ, surface normals n, optical flow w, and optical flow confidence c. For depth we use an L1 loss directly on the inverse depth values:

L_depth = \sum_{i,j} | s\xi(i,j) - \hat{\xi}(i,j) |,    (1)

with ground truth ξ̂. Note that we apply the predicted scale s to the predicted values ξ.

For the loss function of the normals and the optical flow we use the (non-squared) L2 norm to penalize deviations from the respective ground truths n̂ and ŵ:

L_normal = \sum_{i,j} \| n(i,j) - \hat{n}(i,j) \|_2
L_flow   = \sum_{i,j} \| w(i,j) - \hat{w}(i,j) \|_2.    (2)

For optical flow this amounts to the usual endpoint error. We train the network to assess the quality of its own flow prediction by predicting a confidence map for each optical flow component. The ground truth of the confidence for the x component is

\hat{c}_x(i,j) = e^{-| w_x(i,j) - \hat{w}_x(i,j) |},    (3)

and the corresponding loss function reads as

L_{flow confidence} = \sum_{i,j} | c_x(i,j) - \hat{c}_x(i,j) |.    (4)
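A direct NumPy transcription of Eqs. (1)-(4) for a single sample could look as follows; the array shapes and variable names are assumptions made for this sketch.

```python
import numpy as np

def pointwise_losses(s, xi, xi_gt, n, n_gt, w, w_gt, c):
    """Eqs. (1)-(4). Assumed shapes: xi, xi_gt are (H, W); n, n_gt are (3, H, W);
    w, w_gt and the predicted confidence c are (2, H, W)."""
    L_depth = np.abs(s * xi - xi_gt).sum()                 # Eq. (1)
    L_normal = np.linalg.norm(n - n_gt, axis=0).sum()      # Eq. (2), normals
    L_flow = np.linalg.norm(w - w_gt, axis=0).sum()        # Eq. (2), flow / endpoint error

    # Confidence ground truth and loss, Eqs. (3) and (4), applied per flow component.
    c_gt = np.exp(-np.abs(w - w_gt))
    L_conf = np.abs(c - c_gt).sum()
    return L_depth, L_normal, L_flow, L_conf
```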
Motion losses. We use a minimal parameterization of the camera motion with 3 parameters each for rotation r and translation t. The losses for the motion vectors are

L_rotation    = \| r - \hat{r} \|_2
L_translation = \| t - \hat{t} \|_2.    (5)

The translation ground truth is always normalized such that ||t̂||_2 = 1, while the magnitude of r̂ encodes the angle of the rotation.

Scale invariant gradient loss. We define a discrete scale invariant gradient g as

g_h[f](i,j) = \left( \frac{f(i+h,j) - f(i,j)}{|f(i+h,j)| + |f(i,j)|},\; \frac{f(i,j+h) - f(i,j)}{|f(i,j+h)| + |f(i,j)|} \right)^\top.    (6)

Based on this gradient we define a scale invariant loss that penalizes relative depth errors between neighbouring pixels:

L_{grad \xi} = \sum_{h \in \{1,2,4,8,16\}} \sum_{i,j} \left\| g_h[\xi](i,j) - g_h[\hat{\xi}](i,j) \right\|_2.    (7)

To cover gradients at different scales we use 5 different spacings h. This loss stimulates the network to compare depth values within a local neighbourhood for each pixel. It emphasizes depth discontinuities, stimulates sharp edges in the depth map and increases smoothness within homogeneous regions, as seen in Fig. 10. Note that due to the relation g_h[ξ](i,j) = −g_h[z](i,j) for ξ, z > 0, the loss is the same for the actual non-inverse depth values z.
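Eqs. (6) and (7) translate almost literally into NumPy, as in the sketch below; the small eps guard against division by zero is an implementation detail not stated in the paper, and all names are illustrative.

```python
import numpy as np

def scale_invariant_gradient(f, h, eps=1e-9):
    """Discrete scale-invariant gradient of Eq. (6) for a (H, W) map f, evaluated
    where both h-shifted neighbours exist; returns an array of shape (2, H-h, W-h)."""
    gx = (f[:, h:] - f[:, :-h]) / (np.abs(f[:, h:]) + np.abs(f[:, :-h]) + eps)
    gy = (f[h:, :] - f[:-h, :]) / (np.abs(f[h:, :]) + np.abs(f[:-h, :]) + eps)
    return np.stack([gx[:-h, :], gy[:, :-h]])       # crop both to a common region

def scale_invariant_gradient_loss(xi, xi_gt, spacings=(1, 2, 4, 8, 16)):
    """Eq. (7): penalize relative depth differences between neighbouring pixels
    at several spacings h."""
    loss = 0.0
    for h in spacings:
        g = scale_invariant_gradient(xi, h)
        g_gt = scale_invariant_gradient(xi_gt, h)
        loss += np.linalg.norm(g - g_gt, axis=0).sum()
    return loss

# Scale invariance: multiplying the depth map by a constant leaves the gradient unchanged.
xi = np.random.rand(48, 64) + 0.5
print(np.allclose(scale_invariant_gradient(xi, 4), scale_invariant_gradient(2.0 * xi, 4)))
```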
We apply the same scale invariant gradient loss to each component of the optical flow. This enhances the smoothness of estimated flow fields and the sharpness of motion discontinuities.

Weighting. We individually weigh the losses to balance their importance. The weight factors were determined empirically and are listed in the supplementary material.

5.2. Training Schedule

The network training is based on the Caffe framework [20]. We train our model from scratch with Adam [22] using a momentum of 0.9 and a weight decay of 0.0004. The whole training procedure consists of three phases.

First, we sequentially train the four encoder-decoder components in both the bootstrap and iterative nets for 250k iterations each with a batch size of 32. While training an encoder-decoder we keep the weights of all previous components fixed. For encoder-decoders predicting optical flow, the scale invariant loss is applied after 10k iterations.

Second, we train only the encoder-decoder pair of the iterative net and append outputs from the previous three training iterations to the minibatch. The bootstrap net uses batches of size 8; the outputs of the previous three network iterations are added to the batch, which yields a total batch size of 32 for the iterative network. We run 1.6 million training iterations.

Finally, the refinement net is trained for 600k iterations with all other weights fixed. The details of the training process, including the learning rate schedules, are provided in the supplementary material.

6. Experiments

6.1. Datasets

SUN3D [43] provides a diverse set of indoor images together with depth and camera pose. The depth and camera pose on this dataset are not perfect. Thus, we sampled image pairs from the dataset and automatically discarded pairs with a high photoconsistency error. We split the dataset so that the same scenes do not appear in both the training and the test set.

RGB-D SLAM [36] provides high quality camera poses obtained with an external motion tracking system. Depth maps are disturbed by measurement noise, and we use the same preprocessing as for SUN3D. We created a training and a test set.

MVS includes several outdoor datasets. We used the Citywall and Achteckturm datasets from [15] and the Breisach dataset [40] for training, and the datasets provided with COLMAP [33, 34] for testing. The depth maps of the reconstructed scenes are often sparse and can comprise reconstruction errors.

Scenes11 is a synthetic dataset with generated images of virtual scenes with random geometry, which provide perfect depth and motion ground truth, but lack realism.

Thus, we introduce the Blendswap dataset, which is based on 150 scenes from blendswap.com. The dataset provides a large variety of scenes, ranging from cartoon-like to photorealistic scenes. The dataset contains mainly indoor scenes. We used this dataset only for training.

NYUv2 [29] provides depth maps for diverse indoor scenes but lacks camera pose information. We did not train on NYU and used the same test split as in Eigen et al. [7]. In contrast to Eigen et al., we also require a second input image that should not be identical to the previous one. Thus, we automatically chose the next image that is sufficiently different from the first image according to a threshold on the difference image.

In all cases where the surface normals are not available, we generated them from the depth maps. We trained DeMoN specifically for the camera intrinsics used in SUN3D and adapted all other datasets by cropping and scaling to match these parameters.

6.2. Error metrics

While single-image methods aim to predict depth at the actual physical scale, two-image methods typically yield the scale relative to the norm of the camera translation vector. Comparing the results of these two families of methods requires a scale-invariant error metric. We adopt the scale-invariant error of [8], which is defined as

sc-inv(z, \hat{z}) = \sqrt{ \frac{1}{n} \sum_i d_i^2 - \frac{1}{n^2} \left( \sum_i d_i \right)^2 },    (8)

with d_i = log z_i − log ẑ_i. For comparison with classic structure from motion methods we use the following measures:

L1-rel(z, \hat{z}) = \frac{1}{n} \sum_i \frac{| z_i - \hat{z}_i |}{\hat{z}_i}    (9)
L1-inv(z, \hat{z}) = \frac{1}{n} \sum_i | \xi_i - \hat{\xi}_i | = \frac{1}{n} \sum_i \left| \frac{1}{z_i} - \frac{1}{\hat{z}_i} \right|    (10)

L1-rel computes the depth error relative to the ground truth depth and therefore reduces errors where the ground truth depth is large and increases the importance of close objects in the ground truth. L1-inv behaves similarly and resembles our loss function for predicted inverse depth values (1).

For evaluating the camera motion estimation, we report the angle (in degrees) between the prediction and the ground truth for both the translation and the rotation. The length of the translation vector is 1 by definition.

The accuracy of optical flow is measured by the average endpoint error (EPE), that is, the Euclidean norm of the difference between the predicted and the true flow vector, averaged over all image pixels. The flow is scaled such that the displacement by the image size corresponds to 1.
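For reference, the error metrics of Eqs. (8)-(10) and the endpoint error can be written in NumPy as below. The angle_deg helper is one plausible reading of the angular motion error (the paper does not spell out the exact formula), and all names are illustrative.

```python
import numpy as np

def sc_inv(z, z_gt):
    """Scale-invariant error of Eq. (8)."""
    d = np.log(z) - np.log(z_gt)
    n = d.size
    return np.sqrt((d ** 2).sum() / n - (d.sum() ** 2) / n ** 2)

def l1_rel(z, z_gt):
    """Eq. (9): depth error relative to the ground truth depth."""
    return (np.abs(z - z_gt) / z_gt).mean()

def l1_inv(z, z_gt):
    """Eq. (10): L1 distance between inverse depths."""
    return np.abs(1.0 / z - 1.0 / z_gt).mean()

def angle_deg(a, b):
    """Angle in degrees between two vectors (e.g. predicted and true translation)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def epe(flow, flow_gt):
    """Average endpoint error for flow fields of shape (2, H, W), assuming the
    flow has already been normalized by the image size as described above."""
    return np.linalg.norm(flow - flow_gt, axis=0).mean()
```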
6.3. Comparison to classic structure from motion

We compare to several strong baselines implemented by us from state-of-the-art components ("Base-*"). For these baselines, we estimated correspondences between images, either by matching SIFT keypoints ("Base-SIFT") or with the FlowFields optical flow method from Bailer et al. [3] ("Base-FF"). Next, we computed the essential matrix with the normalized 8-point algorithm [16] and RANSAC. To further improve accuracy we minimized the reprojection error using the ceres library [1]. Finally, we generated the depth maps by plane sweep stereo and used the approach of Hirschmueller et al. [18] for optimization. We also report the accuracy of the depth estimate when providing the ground truth camera motion ("Base-Oracle"). "Base-Matlab" and "Base-Mat-F" are implemented in Matlab: "Base-Matlab" uses Matlab implementations of the KLT algorithm [38, 27, 35] for correspondence search, while "Base-Mat-F" uses the predicted flow from DeMoN. The essential matrix is computed with RANSAC and the 5-point algorithm [31] for both.
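The Base-* baselines are built from standard components. The OpenCV sketch below reproduces the overall correspondence -> essential matrix -> relative pose chain for intuition only; it is not the authors' implementation (it uses OpenCV's 5-point RANSAC estimator rather than the normalized 8-point algorithm, and omits the ceres refinement as well as the plane-sweep/SGM depth stage).

```python
import cv2
import numpy as np

def two_view_motion(img1, img2, K):
    """Rough two-view motion estimate from SIFT matches.
    img1, img2: 8-bit grayscale images; K: 3x3 intrinsic matrix."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then cheirality check to recover R, t (||t|| = 1).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```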
Table 2, left: comparison of two-frame depth and motion estimation (lower is better for all measures).

Dataset    Method       L1-inv  sc-inv  L1-rel  rot     trans
MVS        Base-Oracle  0.019   0.197   0.105   0       0
MVS        Base-SIFT    0.056   0.309   0.361   21.180  60.516
MVS        Base-FF      0.055   0.308   0.322   4.834   17.252
MVS        Base-Matlab  -       -       -       10.843  32.736
MVS        Base-Mat-F   -       -       -       5.442   18.549
MVS        DeMoN        0.047   0.202   0.305   5.156   14.447
Scenes11   Base-Oracle  0.023   0.618   0.349   0       0
Scenes11   Base-SIFT    0.051   0.900   1.027   6.179   56.650
Scenes11   Base-FF      0.038   0.793   0.776   1.309   19.425
Scenes11   Base-Matlab  -       -       -       0.917   14.639
Scenes11   Base-Mat-F   -       -       -       2.324   39.055
Scenes11   DeMoN        0.019   0.315   0.248   0.809   8.918
RGB-D      Base-Oracle  0.026   0.398   0.336   0       0
RGB-D      Base-SIFT    0.050   0.577   0.703   12.010  56.021
RGB-D      Base-FF      0.045   0.548   0.613   4.709   46.058
RGB-D      Base-Matlab  -       -       -       12.831  49.612
RGB-D      Base-Mat-F   -       -       -       2.917   22.523
RGB-D      DeMoN        0.028   0.130   0.212   2.641   20.585
Sun3D      Base-Oracle  0.020   0.241   0.220   0       0
Sun3D      Base-SIFT    0.029   0.290   0.286   7.702   41.825
Sun3D      Base-FF      0.029   0.284   0.297   3.681   33.301
Sun3D      Base-Matlab  -       -       -       5.920   32.298
Sun3D      Base-Mat-F   -       -       -       2.230   26.338
Sun3D      DeMoN        0.019   0.114   0.172   1.801   18.811
NYUv2      not reported (motion ground truth is not available; see caption)

Table 2, right: comparison to single-frame depth estimation (sc-inv).

Dataset    Liu indoor  Liu outdoor  Eigen VGG  DeMoN
MVS        0.260       0.341        0.225      0.203
Scenes11   0.816       0.814        0.763      0.303
RGB-D      0.338       0.428        0.272      0.134
Sun3D      0.214       0.401        0.175      0.126
NYUv2      0.210       0.421        0.148      0.180

Table 2. Left: Comparison of two-frame depth and motion estimation methods. Lower is better for all measures. For a fair comparison with the baseline methods, we evaluate depth only at pixels visible in both images. For Base-Matlab depth is only available as a sparse point cloud and is therefore not compared to here. We do not report the errors on NYUv2 since motion ground truth (and therefore depth scale) is not available. Right: Comparison to single-frame depth estimation. Since the scale estimates are not comparable, we report only the scale invariant error metric.

Tab. 2 shows that DeMoN outperforms all baseline methods both on motion and depth accuracy by a factor of 1.5 to 2 on most datasets. The only exception is the MVS dataset where the motion accuracy of DeMoN is on par with the strong baseline based on FlowFields optical flow. This demonstrates that traditional methods work well on the texture-rich scenes present in MVS, but do not perform well for example on indoor scenes, with large homogeneous regions or small baselines where priors may be very useful.

Besides that, all Base-* methods use images at the full resolution of 640 × 480, while our method uses downsampled images of 256 × 192 as input. Higher resolution gives the Base-* methods an advantage in depth accuracy, but on the other hand these methods are more prone to outliers. For detailed error distributions see the supplemental material. Remarkably, on all datasets except for MVS the depth estimates of DeMoN are better than the ones a traditional approach can produce given the ground truth motion. This is supported by qualitative results in Fig. 8. We also note that DeMoN has smaller motion errors than "Base-Mat-F", showing its advantage over classical methods in motion estimation.

In contrast to classical approaches, we can also handle cases without and with very little camera motion; see Fig. 6. We used our network to compute camera trajectories by simple concatenation of the motion of consecutive frames, as shown in Fig. 7. The trajectory shows mainly translational drift. We also did not apply any drift correction, which is a crucial component in SLAM systems, but the results convince us that DeMoN can be integrated into such systems.

Figure 6. Qualitative performance gain by increasing the baseline between the two input images for DeMoN. The depth map is produced with the top left reference image and the second image below. The first output is obtained with two identical images as input, which is a degenerate case for traditional structure from motion.

Figure 7. Result on a sequence of the RGB-D SLAM dataset [36]. The accumulated pairwise pose estimates by our network (red) are locally consistent with the ground truth trajectory (black). The depth prediction of the first frame is shown. The network also separates foreground and background in its depth output.
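Concatenating pairwise motions into a trajectory amounts to composing rigid transforms, for example as in the NumPy sketch below (illustrative names only). Since every predicted translation has unit norm, the chained trajectory is only correct up to an unknown scale per segment unless the predicted scale factors are taken into account.

```python
import numpy as np

def accumulate_trajectory(pairwise_motions):
    """Chain pairwise camera motions (R, t) into camera-to-world poses.
    Each (R, t) is assumed to map points from frame i into frame i+1."""
    poses = [np.eye(4)]
    for R, t in pairwise_motions:
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t.ravel()
        # New camera-to-world pose: previous pose composed with the inverse
        # of the relative transform.
        poses.append(poses[-1] @ np.linalg.inv(T))
    return poses
```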
6.4. Comparison to depth from single image

To demonstrate the value of the motion parallax, we additionally compare to the single-image depth estimation methods by Eigen & Fergus [7] and Liu et al. [24]. We compare to the improved version of the Eigen & Fergus method, which is based on the VGG network architecture, and to two models by Liu et al.: one trained on indoor scenes from the NYUv2 dataset ("indoor") and another trained on outdoor images from the Make3D dataset [32] ("outdoor").

Figure 8. Top: Qualitative depth prediction comparison on various datasets. The predictions of DeMoN are very sharp and detailed. The Base-Oracle prediction on NYUv2 is missing because the motion ground truth is not available. Results on more methods and examples are shown in the supplementary material.

The comparison in Fig. 8 shows that the depth maps produced by DeMoN are more detailed and more regular than the ones produced by other methods. This becomes even more obvious when the results are visualized as a point cloud; see the videos in the supplemental material.

On all but one dataset, DeMoN outperforms the single-frame methods also by numbers, typically by a large margin. Notably, a large improvement can be observed even on the indoor datasets, Sun3D and RGB-D, showing that the additional stereopsis complements the other cues that can be learned from the large amounts of training data available for this scenario. Only on the NYUv2 dataset, DeMoN is slightly behind the method of Eigen & Fergus. This is because the comparison is not totally fair: the network of Eigen & Fergus as well as Liu indoor was trained on the training set of NYUv2, whereas the other networks have not seen this kind of data before.

6.4.1 Generalization to new data

Scene-specific priors learned during training may be useless or even harmful when being confronted with a scene that is very different from the training data. In contrast, the geometric relations between a pair of images are independent of the content of the scene and should generalize to unknown scenes. To analyze the generalization properties of DeMoN, we compiled a small dataset of images showing uncommon or complicated scenes, for example abstract sculptures, close-ups of people and objects, and images rotated by 90 degrees.

Figure 9. Visualization of DeMoN's generalization capabilities to previously unseen configurations. Single-frame methods have severe problems in such cases, as most clearly visible in the point cloud visualization of the depth estimate for the last example.

Method     L1-inv  sc-inv  L1-rel
Liu [24]   0.055   0.247   0.194
Eigen [7]  0.062   0.238   0.185
DeMoN      0.041   0.183   0.130

Table 3. Quantitative generalization performance on previously unseen scenes, objects, and camera rotations, using a self-recorded and reconstructed dataset. Errors after optimal log-scaling. The best model of Eigen et al. [7] for this task is based on VGG; for Liu et al. [24], the model trained on Make3D [13] performed best. DeMoN achieved best performance after two iterations.

Fig. 9 and Tab. 3 show that DeMoN, as to be expected, generalizes better to these unexpected scenes than single-image methods. It shows that the network has learned to make use of the motion parallax.

6.5. Ablation studies

Our architecture contains some design decisions that we justify by the following ablation studies. All results have been obtained on the Sun3D dataset with the bootstrap net.

Choice of the loss function. Tab. 4 shows the influence of the loss function on the accuracy of the estimated depth and motion. Interestingly, while the scale invariant loss greatly improves the prediction qualitatively (see Fig. 10), it has negative effects on depth scale estimation. This leads to weak performance on non-scale-invariant metrics and the motion accuracy. Estimation of the surface normals slightly improves all results. Finally, the full architecture with the scale invariant loss, normal estimation, and a loss on the flow leads to the best results.

Figure 10. Depth prediction comparison with different outputs and losses. (a) Just L1 loss on the absolute depth values. (b) Additional output of normals and L1 loss on the normals. (c) Like (b) but with the proposed gradient loss. (d) Ground truth.

                      Depth                    Motion
grad  norm  flow  L1-inv  sc-inv  L1-rel   rot    tran
no    no    no    0.040   0.211   0.354    3.127  30.861
yes   no    no    0.057   0.159   0.437    4.585  39.819
no    yes   no    0.037   0.190   0.336    2.570  29.607
no    yes   yes   0.029   0.184   0.266    2.359  23.578
yes   yes   yes   0.032   0.150   0.276    2.479  24.372

Table 4. The influence of the loss function on the performance. The gradient loss improves the scale invariant error, but degrades the scale-sensitive measures. Surface normal prediction improves the depth accuracy. A combination of all components leads to the best tradeoff.

                   Depth                    Motion          Flow
Confidence   L1-inv  sc-inv  L1-rel   rot    tran      EPE
no           0.030   0.028   0.26     2.830  25.262    0.027
yes          0.032   0.027   0.28     2.479  24.372    0.027

Table 5. The influence of confidence prediction on the overall performance of the different outputs.

Flow confidence. Egomotion estimation only requires sparse but high-quality correspondences. Tab. 5 shows that, given the same flow, egomotion estimation improves when given the flow confidence as an extra input. Our interpretation is that the flow confidence helps finding the most accurate correspondences.

7. Conclusions and Future Work

DeMoN is the first deep network that has learned to estimate depth and camera motion from two unconstrained images. Unlike networks that estimate depth from a single image, DeMoN can exploit the motion parallax, which is a powerful cue that generalizes to new types of scenes, and allows to estimate the egomotion. This network outperforms traditional structure from motion techniques on two frames, since in contrast to those, it is trained end-to-end and learns to integrate other shape from X cues. When it comes to handling cameras with different intrinsic parameters it has not yet reached the flexibility of classic approaches. The next challenge is to lift this restriction and extend this work to more than two images. As in classical techniques, this is expected to significantly improve the robustness and accuracy.

Acknowledgements. We acknowledge funding by the ERC Starting Grant VideoLearn, the DFG grant BR-3815/5-1, and the EU project Trimbot2020.

References

[1] S. Agarwal, K. Mierle, and Others. Ceres Solver.
[2] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In IEEE International Conference on Computer Vision (ICCV), Dec. 2015.
[3] C. Bailer, B. Taetz, and D. Stricker. Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation. In IEEE International Conference on Computer Vision (ICCV), Dec. 2015.
[4] R. T. Collins. A space-sweep approach to true multi-image matching. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 358–363, June 1996.
[5] A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox. FlowNet: Learning optical flow with convolutional networks. In IEEE International Conference on Computer Vision (ICCV), Dec. 2015.
[6] A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1734–1747, Oct. 2016.
[7] D. Eigen and R. Fergus. Predicting Depth, Surface Normals and Semantic Labels With a Common Multi-Scale Convolutional Architecture. In IEEE International Conference on Computer Vision (ICCV), Dec. 2015.
[8] D. Eigen, C. Puhrsch, and R. Fergus. Depth Map Prediction from a Single Image using a Multi-Scale Deep Network. In Advances in Neural Information Processing Systems 27, pages 2366–2374. Curran Associates, Inc., 2014.
[9] J. Engel, T. Schöps, and D. Cremers. LSD-SLAM: Large-scale direct monocular SLAM. In European Conference on Computer Vision (ECCV), September 2014.
[10] O. Faugeras. Three-dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, MA, USA, 1993.
[11] M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6):381–395, June 1981.
[12] J. Flynn, I. Neulander, J. Philbin, and N. Snavely. DeepStereo: Learning to predict new views from the world's imagery. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[13] D. A. Forsyth. Make3D: Learning 3D scene structure from a single still image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5):824–840, May 2009.
[14] J.-M. Frahm, P. Fite-Georgel, D. Gallup, T. Johnson, R. Raguram, C. Wu, Y.-H. Jen, E. Dunn, B. Clipp, S. Lazebnik, and M. Pollefeys. Building Rome on a Cloudless Day. In European Conference on Computer Vision (ECCV), number 6314 in Lecture Notes in Computer Science, pages 368–381. Springer Berlin Heidelberg, 2010.
[15] S. Fuhrmann, F. Langguth, and M. Goesele. MVE: a multiview reconstruction environment. In Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage (GCH), volume 6, page 8, 2014.
[16] R. I. Hartley. In defense of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):580–593, June 1997.
[17] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2004.
[18] H. Hirschmüller. Accurate and efficient stereo processing by semi-global matching and mutual information. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 807–814, June 2005.
[19] D. Jayaraman and K. Grauman. Learning image representations tied to egomotion. In IEEE International Conference on Computer Vision (ICCV), 2015.
[20] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional Architecture for Fast Feature Embedding. arXiv preprint arXiv:1408.5093, 2014.
[21] A. Kendall and R. Cipolla. Modelling Uncertainty in Deep Learning for Camera Relocalization. In International Conference on Robotics and Automation (ICRA), 2016.
[22] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], Dec. 2014.
[23] K. Li, B. Hariharan, and J. Malik. Iterative Instance Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3659–3667, June 2016.
[24] F. Liu, C. Shen, G. Lin, and I. Reid. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
[25] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293(5828):133–135, Sept. 1981.
[26] D. G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110, Nov. 2004.
[27] B. D. Lucas and T. Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), pages 674–679, San Francisco, CA, USA, 1981. Morgan Kaufmann Publishers Inc.
[28] N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[29] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor Segmentation and Support Inference from RGBD Images. In European Conference on Computer Vision (ECCV), 2012.
[30] R. A. Newcombe, S. Lovegrove, and A. Davison. DTAM: Dense tracking and mapping in real-time. In IEEE International Conference on Computer Vision (ICCV), pages 2320–2327, 2011.

[31] D. Nister. An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6):756–770, June 2004.
[32] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In Advances in Neural Information Processing Systems 18. MIT Press, 2005.
[33] J. L. Schönberger and J.-M. Frahm. Structure-from-motion revisited. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[34] J. L. Schönberger, E. Zheng, M. Pollefeys, and J.-M. Frahm. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision (ECCV), 2016.
[35] J. Shi and C. Tomasi. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 593–600, 1994.
[36] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the International Conference on Intelligent Robot Systems (IROS), Oct. 2012.
[37] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception Architecture for Computer Vision. arXiv:1512.00567 [cs], Dec. 2015.
[38] C. Tomasi and T. Kanade. Detection and tracking of point features. Technical report, International Journal of Computer Vision, 1991.
[39] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon. Bundle Adjustment: A Modern Synthesis. In Vision Algorithms: Theory and Practice, volume 1883, pages 153–177. Springer Berlin / Heidelberg, Apr. 2000.
[40] B. Ummenhofer and T. Brox. Global, dense multiscale reconstruction for a billion points. In IEEE International Conference on Computer Vision (ICCV), Dec. 2015.
[41] L. Valgaerts, A. Bruhn, M. Mainberger, and J. Weickert. Dense versus Sparse Approaches for Estimating the Fundamental Matrix. International Journal of Computer Vision, 96(2):212–234, Jan. 2012.
[42] C. Wu. Towards Linear-Time Incremental Structure from Motion. In International Conference on 3D Vision (3DV), pages 127–134, June 2013.
[43] J. Xiao, A. Owens, and A. Torralba. SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels. In IEEE International Conference on Computer Vision (ICCV), pages 1625–1632, Dec. 2013.
[44] S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[45] J. Žbontar and Y. LeCun. Computing the Stereo Matching Cost With a Convolutional Neural Network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

