Understanding Deep Convolutional Networks
Stéphane Mallat
École Normale Supérieure, CNRS, PSL
45 rue d’Ulm, 75005 Paris, France
Abstract
Deep convolutional networks provide state-of-the-art classification and regression results over many
high-dimensional problems. We review their architecture, which scatters data with a cascade of linear
filter weights and non-linearities. A mathematical framework is introduced to analyze their properties.
Computations of invariants involve multiscale contractions, the linearization of hierarchical symmetries,
and sparse separations. Applications are discussed.
§1 Introduction
Multilayer neural networks are computational learning architectures which propagate the input data
across a sequence of linear operators and simple non-linearities. The properties of shallow networks, with
one hidden layer, are well understood as decompositions in families of ridge functions [10]. However, these
approaches do not extend to networks with more layers. Deep convolutional neural networks, introduced by
Le Cun [20], are implemented with linear convolutions followed by non-linearities, over typically more than
5 layers. These complex programmable machines, defined by potentially billions of filter weights, bring us
to a different mathematical world.
Many researchers have pointed out that deep convolutional networks are computing progressively more
powerful invariants as depth increases [4, 21], but the relations with network weights and non-linearities are
complex. This paper aims at clarifying important principles which govern the properties of such networks,
although their architectures and weights may differ across applications. We show that computations of invariants
involve multiscale contractions, the linearization of hierarchical symmetries, and sparse separations. This
conceptual basis is only a first step towards a full mathematical understanding of convolutional network
properties.
In high dimension, x has a considerable number of parameters, which raises a curse of dimensionality. Sampling
uniformly a volume of dimension d requires a number of samples which grows exponentially with d. In most
applications, the number q of training samples grows only linearly with d. Approximating f(x) from so few samples
is possible only if f has strong regularity properties which ultimately allow the dimension of the estimation
to be reduced. Any learning algorithm, including deep convolutional networks, thus relies on an
underlying assumption of regularity. Specifying the nature of this regularity is one of the core mathematical
problems.
One can try to circumvent the curse of dimensionality by reducing the variability or the dimension of x,
without sacrificing the ability to approximate f(x). This is done by defining a new variable Φ(x), where Φ is
a contractive operator which reduces the range of variations of x, while still separating different values of f:
Φ(x) ≠ Φ(x′) if f(x) ≠ f(x′). This separation-contraction trade-off needs to be adjusted to the properties
of f.
Linearization is a strategy used in machine learning to reduce the dimension with a linear projector.
A low-dimensional linear projection of x can separate the values of f if this function remains constant in
the direction of a high-dimensional linear space. This is rarely the case, but one can try to find Φ(x)
which linearizes high-dimensional domains where f (x) remains constant. The dimension is then reduced
by applying a low-dimensional linear projector on Φ(x). Finding such a Φ is the dream of kernel learning
algorithms, explained in Section 2.
Deep neural networks are more conservative. They progressively contract the space and linearize transfor-
mations along which f remains nearly constant, to preserve separation. Such directions are defined by linear
operators which belong to groups of local symmetries, introduced in Section 3. To understand the difficulty
of linearizing the action of high-dimensional groups of operators, we begin with the groups of translations and
diffeomorphisms, which deform signals. They capture essential mathematical properties that are extended
to general deep network symmetries, in Section 7.
To linearize diffeomorphisms and preserve separability, Section 4 shows that we must separate the variations
of x at different scales, with a wavelet transform. This is implemented with multiscale filter convolutions,
which are building blocks of deep convolutional filtering. General deep network architectures are introduced
in Section 5. They iterate on linear operators which filter and linearly combine different channels in each
network layer, followed by contractive non-linearities.
To understand how non-linear contractions interact with linear operators, Section 6 begins with simpler
networks which do not recombine channels in each layer. It defines a non-linear scattering transform,
introduced in [24], where wavelets have a separation and linearization role. The resulting contraction,
linearization and separability properties are reviewed. We shall see that sparsity is important for separation.
Section 7 extends these ideas to a more general class of deep convolutional networks. Channel combinations
provide the flexibility needed to extend translations to larger groups of local symmetries adapted
to f . The network is structured by factorizing groups of symmetries, in which case all linear operators are
generalized convolutions. Computations are ultimately performed with filter weights, which are learned.
Their relation with groups of symmetries is explained. A major issue is to preserve a separation margin
across classification frontiers. Deep convolutional networks have the ability to do so, by separating network
fibers which are progressively more invariant and specialized. This can give rise to invariant grandmother
type neurons observed in deep networks [1]. The paper studies architectures as opposed to computational
learning of network weights, which is an outstanding optimization issue [21].
§2 Linearization, Projection and Separability
Supervised learning computes an approximation f̃(x) of a function f(x) from q training samples {x_i, f(x_i)}_{i≤q},
for x = (x(1), ..., x(d)) ∈ Ω. The domain Ω is a high-dimensional open subset of R^d, not a low-dimensional
manifold. In a regression problem, f(x) takes its values in R, whereas in classification its values are class
indices.
Separation Ideally, we would like to reduce the dimension of x by computing a low-dimensional vector Φ(x)
such that one can write f(x) = f_0(Φ(x)). It is equivalent to impose that if f(x) ≠ f(x′) then Φ(x) ≠ Φ(x′).
We then say that Φ separates f. For regression problems, to guarantee that f_0 is regular, we further impose
that the separation is Lipschitz:

∃ε > 0 , ∀(x, x′) ∈ Ω² , ‖Φ(x) − Φ(x′)‖ ≥ ε |f(x) − f(x′)| . (1)

It implies that f_0 is Lipschitz continuous: |f_0(z) − f_0(z′)| ≤ ε^{−1} |z − z′| for (z, z′) ∈ Φ(Ω)². In a classification
problem, f(x) ≠ f(x′) means that x and x′ are not in the same class. The Lipschitz separation condition
(1) becomes a margin condition specifying a minimum distance across classes:

∃ε > 0 , ∀(x, x′) ∈ Ω² , ‖Φ(x) − Φ(x′)‖ ≥ ε if f(x) ≠ f(x′) . (2)

We can try to find a linear projection of x in some space V of lower dimension k which separates f. It
requires that f(x) = f(x + z) for all z ∈ V^⊥, where V^⊥ is the orthogonal complement of V in R^d, of
dimension d − k. In most cases, the final dimension k cannot be much smaller than d.
Linearization An alternative strategy is to linearize the variations of f with a first change of variable
Φ(x) = {φ_k(x)}_{k≤d′} of dimension d′, potentially much larger than the dimension d of x. We can then
optimize a low-dimensional linear projection along directions where f is constant. We say that Φ separates
f linearly if f(x) is well approximated by a one-dimensional projection:

f̃(x) = ⟨Φ(x), w⟩ = Σ_{k=1}^{d′} w_k φ_k(x) . (3)

The regression vector w is optimized by minimizing a loss on the training data, which needs to be regularized
if d′ > q, for example by an ℓ^p norm of w with a regularization constant λ:

Σ_{i=1}^{q} loss(f(x_i) − f̃(x_i)) + λ Σ_{k=1}^{d′} |w_k|^p . (4)

Sparse regressions are obtained with p ≤ 1, whereas p = 2 defines kernel regressions [16].

Classification problems are addressed similarly, by approximating the frontiers between classes. For
example, a classification with Q classes can be reduced to Q − 1 "one versus all" binary classifications.
Each binary classification is specified by an f(x) equal to 1 or −1 in each class. We approximate f(x) by
f̃(x) = sign(⟨Φ(x), w⟩), where w minimizes the training error (4).
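As a concrete illustration of (3) and (4), the following numpy sketch fits the regression vector w on a synthetic training set, with a hypothetical feature map phi (coordinates and their pairwise products) and a squared loss with an ℓ² penalty, for which the minimization has a closed-form solution; a sparse ℓ¹ penalty would instead require an iterative solver.

```python
import numpy as np

# Hypothetical feature map Phi: maps x in R^d to a higher-dimensional
# feature vector (here, the coordinates and their pairwise products).
def phi(x):
    return np.concatenate([x, np.outer(x, x).ravel()])

rng = np.random.default_rng(0)
d, q = 8, 200                          # input dimension, number of training samples
X = rng.standard_normal((q, d))        # training inputs x_i
f = np.cos(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]   # synthetic function values f(x_i)

F = np.stack([phi(x) for x in X])      # q x d' feature matrix, d' = d + d^2
lam = 1e-2                             # regularization constant lambda (p = 2)

# Ridge regression: w = (F^T F + lam I)^{-1} F^T f minimizes (4) with a squared loss.
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ f)

f_tilde = F @ w                        # linear approximation <Phi(x_i), w> of f(x_i)
print("training error:", np.mean((f_tilde - f) ** 2))
```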
§3 Groups of Local Symmetries

We now study strategies to compute a change of variable Φ which linearizes f. Deep convolutional networks
operate layer by layer and linearize f progressively as depth increases. Classification and regression problems
are addressed similarly by considering the level sets of f, defined by Ω_t = {x : f(x) = t} if f is continuous.
For classification, each level set is a particular class. Linear separability means that one can find w such
that f(x) ≈ ⟨Φ(x), w⟩. If x ∈ Ω_t then ⟨Φ(x), w⟩ ≈ t, so all Ω_t are mapped by Φ onto different hyperplanes
orthogonal to some w. The change of variable linearizes the level sets of f.
Symmetries To linearize level sets, we need to find directions along which f (x) does not vary locally, and
then linearize these directions in order to map them in a linear space. It is tempting to try to do this with
some local data analysis along x. This is not possible because the training set includes few close neighbors
in high dimension. We thus consider simultaneously all points x ∈ Ω and look for common directions along
which f(x) does not vary. This is where groups of symmetries come in. Translations and diffeomorphisms
will illustrate the difficulty of linearizing high-dimensional symmetries, and provide a first mathematical ground
to analyze convolutional network architectures.
We look for invertible operators which preserve the value of f . The action of an operator g on x is
written g.x. A global symmetry is an invertible and often non-linear operator g from Ω to Ω, such that
f (g.x) = f (x) for all x ∈ Ω. If g1 and g2 are global symmetries then g1 .g2 is also a global symmetry, so
products define groups of symmetries. Global symmetries are usually hard to find. We shall first concentrate
on local symmetries. We suppose that there is a metric |g|_G which measures the distance between g ∈ G
and the identity. A function f is locally invariant to the action of G if

∀x ∈ Ω , ∃C_x > 0 , ∀g ∈ G with |g|_G < C_x : f(g.x) = f(x) . (5)

We then say that G is a group of local symmetries of f. The constant C_x is the local range of symmetries
which preserve f. Since Ω is a continuous subset of R^d, we consider groups of operators which transport
vectors in Ω with a continuous parameter. They are called Lie groups if the group has a differential structure.
Translations and diffeomorphisms Let us interpolate the d samples of x and define x(u) for all u ∈ Rn ,
with n = 1, 2, 3 respectively for time-series, images and volumetric data. The translation group G = Rn is an
example of Lie group. The action of g ∈ G = Rn over x ∈ Ω is g.x(u) = x(u − g). The distance |g|G between
g and the identity is the Euclidean norm of g ∈ Rn . The function f is locally invariant to translations if
sufficiently small translations of x do not change f (x). Deep convolutional networks compute convolutions,
because they assume that translations are local symmetries of f . The dimension of a group G is the number
of generators which define all group elements by products. For G = Rn it is equal to n.
Translations are not powerful symmetries because they are defined by only n variables, and n = 2 for
images. Many image classification problems are also locally invariant to small deformations, which provide
much stronger constraints. It means that f is locally invariant to diffeomorphisms G = Diff(R^n), which
transform x(u) with a differentiable warping of u ∈ R^n. We do not know in advance the local range of
diffeomorphism symmetries. For example, to classify images x of hand-written digits, certain deformations
of x will preserve a digit class but modify the class of another digit. We shall linearize small diffeomorphisms
g. In a space where local symmetries are linearized, we can find global symmetries by optimizing linear
projectors which preserve the values of f(x), and thus reduce dimensionality.
Local symmetries are linearized by finding a change of variable Φ(x) which locally linearizes the action
of g ∈ G. We say that Φ is Lipschitz continuous if

∃C > 0 , ∀(x, g) ∈ Ω × G , ‖Φ(g.x) − Φ(x)‖ ≤ C |g|_G ‖x‖ . (6)

The norm ‖x‖ is just a normalization factor, often set to 1. The Radon–Nikodym property proves that the
map which transforms g into Φ(g.x) is almost everywhere differentiable in the sense of Gâteaux. If |g|_G is
small then Φ(x) − Φ(g.x) is closely approximated by a bounded linear operator of g, which is the Gâteaux
derivative. Locally, it thus nearly remains in a linear space.
Figure 1: Wavelet transform of an image x(u), computed with a cascade of convolutions with filters over
J = 4 scales and K = 4 orientations. The low-pass and K = 4 band-pass filters are shown on the first
arrows.
Lipschitz continuity over diffeomorphisms is defined relative to a metric, which we now define. A small
diffeomorphism acting on x(u) can be written as a translation of u by a displacement g(u):

g.x(u) = x(u − g(u)) . (7)

This diffeomorphism translates points by at most ‖g‖_∞ = sup_{u∈R^n} |g(u)|. Let |∇g(u)| be the matrix norm of
the Jacobian matrix of g at u. Small diffeomorphisms correspond to ‖∇g‖_∞ = sup_u |∇g(u)| < 1. Applying a
diffeomorphism g transforms two points (u_1, u_2) into (u_1 − g(u_1), u_2 − g(u_2)). Their distance is thus multiplied
by a scale factor, which is bounded above and below by 1 ± ‖∇g‖_∞. The distance of this diffeomorphism to
the identity is defined by:

|g|_{Diff} = 2^{−J} ‖g‖_∞ + ‖∇g‖_∞ . (8)

The factor 2^J is a local translation invariance scale. It gives the range of translations over which small
diffeomorphisms are linearized. For J = ∞ the metric is globally invariant to translations.
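A small numpy/scipy sketch of the diffeomorphism action (7) and of the two terms of the metric (8), on a synthetic image and a synthetic displacement field; scipy.ndimage.map_coordinates performs the interpolation x(u − g(u)), and the Jacobian term is only a crude discrete upper bound.

```python
import numpy as np
from scipy.ndimage import map_coordinates

N = 128
u1, u2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
x = np.sin(2 * np.pi * u1 / 16.0) * np.cos(2 * np.pi * u2 / 16.0)   # synthetic image x(u)

# Smooth displacement field g(u): a small sinusoidal warp.
tau = 2.0
g1 = tau * np.sin(2 * np.pi * u2 / N)
g2 = tau * np.sin(2 * np.pi * u1 / N)

# Action of the diffeomorphism (7): g.x(u) = x(u - g(u)), by linear interpolation.
gx = map_coordinates(x, [u1 - g1, u2 - g2], order=1, mode="wrap")

# The two terms of the metric (8): maximum displacement and maximum Jacobian size.
g_inf = np.max(np.sqrt(g1 ** 2 + g2 ** 2))
dg = np.stack([np.gradient(g1), np.gradient(g2)])      # 2 x 2 x N x N Jacobian entries
grad_inf = np.max(np.abs(dg).sum(axis=(0, 1)))         # crude upper bound on |grad g(u)|
J = 4
print("|g|_Diff <= approximately", 2.0 ** (-J) * g_inf + grad_inf)
```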
§4 Scale Separation with Wavelets

Deep convolutional networks can linearize the action of very complex non-linear transformations in high
dimensions, such as inserting glasses in images of faces [28]. A transformation of x ∈ Ω is a transport of x in
Ω. To understand how to linearize any such transport, we begin with translations and diffeomorphisms.
Deep network architectures are covariant to translations, because all linear operators are implemented with
convolutions. To compute invariants to translations and linearize diffeomorphisms, we need to separate scales
and apply a non-linearity. This is implemented with a cascade of filters computing a wavelet transform, and
a pointwise contractive non-linearity. Section 7 extends these tools to general group actions.
Averaging A linear operator can compute local invariants to the action of the translation group G by
averaging x along the orbit {g.x}_{g∈G}, which consists of translations of x. This is done with a convolution by an
averaging kernel φ_J(u) = 2^{−nJ} φ(2^{−J}u) of size 2^J, with ∫ φ(u) du = 1:

Φ_J x(u) = x ⋆ φ_J(u) . (9)

One can verify [24] that this averaging is Lipschitz continuous to diffeomorphisms for all x ∈ L²(R^n), over
a translation range 2^J. However, it eliminates the variations of x above the frequency 2^{−J}. If J = ∞ then
Φ_∞ x = ∫ x(u) du, which eliminates nearly all information.
Wavelet transform A diffeomorphism acts as a local translation and scaling of the variable u. If we set
aside translations for now, to linearize small diffeomorphisms we must linearize this scaling action. This is
done by separating the variations of x at different scales with wavelets. We define K wavelets ψ_k(u) for
u ∈ R^n. They are regular functions with fast decay and zero average: ∫ ψ_k(u) du = 0. These K wavelets
are dilated by 2^j: ψ_{j,k}(u) = 2^{−jn} ψ_k(2^{−j}u). A wavelet transform computes the local average of x at a scale
2^J, and variations at scales 2^j ≤ 2^J, with wavelet convolutions:

W x = { x ⋆ φ_J(u) , x ⋆ ψ_{j,k}(u) }_{j≤J , k≤K} . (10)

The parameter u is sampled on a grid such that intermediate sample values can be recovered by linear
interpolations. The wavelets ψ_k are chosen so that W is a contractive and invertible operator, and in order
to obtain a sparse representation. This means that x ⋆ ψ_{j,k}(u) is mostly zero apart from a few high-amplitude
coefficients corresponding to variations of x(u) which "match" ψ_k at the scale 2^j. This sparsity plays an
important role in non-linear contractions.

For audio signals, n = 1, sparse representations are usually obtained with at least K = 12 intermediate
frequencies within each octave 2^j, which are similar to half-tone musical notes. This is done by choosing
a wavelet ψ(u) having a frequency bandwidth of less than 1/12 octave, and ψ_k(u) = 2^{k/K} ψ(2^{−k/K}u) for
1 ≤ k ≤ K. For images, n = 2, we must discriminate image variations along different spatial orientations.
This is obtained by separating angles πk/K with an oriented wavelet which is rotated: ψ_k(u) = ψ(r_k^{−1}u).
Intermediate rotated wavelets are approximated by linear interpolations of these K wavelets. Figure 1 shows
the wavelet transform of an image, with J = 4 scales and K = 4 angles, where x ⋆ ψ_{j,k}(u) is subsampled at
intervals 2^j. It has few large-amplitude coefficients, shown in white.
Filter bank Wavelet transforms can be computed with a fast multiscale cascade of filters, which is at the
core of deep network architectures. At each scale 2^j, we define a low-pass filter w_{j,0} which increases the
averaging scale from 2^{j−1} to 2^j, and band-pass filters w_{j,k} which compute each wavelet:

φ_j(u) = φ_{j−1} ⋆ w_{j,0}(u)  and  ψ_{j,k}(u) = φ_{j−1} ⋆ w_{j,k}(u) . (11)

Let us write x_j(u, 0) = x ⋆ φ_j(u) and x_j(u, k) = x ⋆ ψ_{j,k}(u) for k ≠ 0. It results from (11) that for 0 < j ≤ J
and all 1 ≤ k ≤ K:

x_j(u, k) = x_{j−1}(·, 0) ⋆ w_{j,k}(u) . (12)

These convolutions may be subsampled by 2 along u, in which case x_j(u, k) is sampled at intervals 2^j along
u.
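A minimal numpy sketch of the cascade (12): at each scale, the low-pass output of the previous scale is filtered by a low-pass and by band-pass filters, then subsampled by 2. The 5×5 binomial and derivative filters are illustrative placeholders, not the wavelet filters w_{j,k} used in practice.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative filters: a 5x5 binomial low-pass and two zero-mean derivative filters.
b = np.array([1., 4., 6., 4., 1.]) / 16.0
low_pass = np.outer(b, b)                              # w_{j,0}: averaging filter
band_pass = [np.gradient(low_pass, axis=0),            # w_{j,1}, w_{j,2}: K = 2 band-pass
             np.gradient(low_pass, axis=1)]            # placeholder filters

def filter_bank_cascade(x, J):
    """Iterate x_j(., k) = x_{j-1}(., 0) * w_{j,k}, subsampled by 2 along u, as in (12)."""
    layers = []
    x_low = x                                          # x_{j-1}(., 0): low-pass channel
    for j in range(1, J + 1):
        channels = {0: fftconvolve(x_low, low_pass, mode="same")[::2, ::2]}
        for k, w in enumerate(band_pass, start=1):
            channels[k] = fftconvolve(x_low, w, mode="same")[::2, ::2]
        layers.append(channels)
        x_low = channels[0]                            # the next scale filters the low-pass output
    return layers

x = np.random.default_rng(0).standard_normal((256, 256))
layers = filter_bank_cascade(x, J=4)
print([layers[j][0].shape for j in range(4)])          # sampling interval doubles at each scale
```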
Phase removal Wavelet coefficients x_j(u, k) = x ⋆ ψ_{j,k}(u) oscillate at a scale 2^j. Translations of x smaller
than 2^j modify the complex phase of x_j(u, k) if the wavelet is complex, or its sign if it is real. Because of
these oscillations, averaging x_j with φ_J outputs a zero signal. It is necessary to apply a non-linearity which
removes the oscillations. A modulus ρ(α) = |α| computes such a positive envelope. Averaging ρ(x ⋆ ψ_{j,k}(u)) by
φ_J outputs non-zero coefficients which are locally invariant at a scale 2^J:

Φ_J x(u, j, k) = ρ(x ⋆ ψ_{j,k}) ⋆ φ_J(u)  for  j ≤ J , 1 ≤ k ≤ K . (13)

Replacing the modulus by a rectifier ρ(α) = max(0, α) gives nearly the same result, up to a factor 2. One can
prove [24] that this representation is Lipschitz continuous to actions of diffeomorphisms over x ∈ L²(R^n), and
thus satisfies (6) for the metric (8). Indeed, the wavelet coefficients of x deformed by g can be written as the
wavelet coefficients of x with deformed wavelets. Small deformations produce small modifications of wavelets
in L²(R^n), because they are localized and regular. The resulting modifications of the wavelet coefficients are of
the order of the diffeomorphism metric |g|_{Diff}.
Figure 2: A convolution network iteratively computes each layer xj by transforming the previous layer xj−1 ,
with a linear operator Wj and a pointwise non-linearity ρ.
Contractions The modulus and the rectifier are pointwise contractive: |ρ(α) − ρ(α′)| ≤ |α − α′|. (14)
However, if α = 0 or α′ = 0 then this inequality is an equality. Replacing α and α′ by x ⋆ ψ_{j,k}(u) and
x′ ⋆ ψ_{j,k}(u) shows that distances are much less reduced if x ⋆ ψ_{j,k}(u) is sparse. Such contractions do not
reduce as much the distance between sparse signals and other signals. This is illustrated by reconstruction
examples in Section 6.
Scale separation limitations The local multiscale invariants in (13) dominated pattern classification
applications for music, speech and images until 2010. They are called mel-spectrum coefficients for audio [25] and SIFT-type
feature vectors [23] for images. Their limitations come from the loss of information produced by the averaging
by φ_J in (13). To reduce this loss, they are computed at short time scales 2^J ≤ 50 ms in audio signals, or
over small image patches of 2^{2J} = 16² pixels. As a consequence, they do not capture large-scale structures,
which are important for classification and regression problems. To build a rich set of local invariants at a
large scale 2^J, it is not sufficient to separate scales with wavelets; we must also capture scale interactions.
A similar issue appears in physics to characterize the interactions of complex systems. Multiscale separations
are used to reduce the parametrization of classical many-body systems, for example with multipole
methods [11]. However, this does not apply to complex interactions, as in quantum systems. Interactions
across scales, between small and larger structures, must be taken into account. Capturing these interactions
with low-dimensional models is a major challenge. We shall see that deep neural networks and scattering
transforms provide high-order coefficients which partly characterize multiscale interactions.
§5 Deep Convolutional Neural Network Architectures

Deep convolutional networks are computational architectures introduced by Le Cun [20], providing remarkable
regression and classification results in high dimension [21, 19, 17]. We describe these architectures,
illustrated by Figure 2. They iterate over linear operators W_j, including convolutions, and predefined pointwise
non-linearities.
A convolutional network takes as input a signal x(u), which is here an image. An internal network layer
x_j(u, k_j) at depth j is indexed by the same translation variable u, usually subsampled, and a channel index
k_j. A layer x_j is computed from x_{j−1} by applying a linear operator W_j followed by a pointwise non-linearity
ρ:

x_j = ρ W_j x_{j−1} .
The non-linearity ρ transforms each coefficient α of the array W_j x_{j−1}, and satisfies the contraction condition
(14). A usual choice is the rectifier ρ(α) = max(α, 0) for α ∈ R, but it can also be a sigmoid, or a modulus
ρ(α) = |α| where α may be complex.

Since most classification and regression functions f(x) are invariant or covariant to translations, the
architecture imposes that W_j is covariant to translations: the output is translated if the input is translated.
Since W_j is linear, it can thus be written as a sum of convolutions:

[W_j x_{j−1}](u, k_j) = Σ_k Σ_v x_{j−1}(v, k) w_{j,k_j}(u − v, k) = Σ_k [x_{j−1}(·, k) ⋆ w_{j,k_j}(·, k)](u) . (15)

The variable u is usually subsampled. For a fixed j, all filters w_{j,k_j}(u, k) have the same support width along
u, typically smaller than 10.
The operators ρ W_j propagate the input signal x_0 = x up to the last layer x_J. This cascade of spatial
convolutions defines translation-covariant operators of progressively wider supports as the depth j increases.
Each x_j(u, k_j) is a non-linear function of x(v), for v in a square centered at u, whose width Δ_j does not
depend upon k_j. The width Δ_j is the spatial scale of layer j. It is equal to 2^j Δ if all filters w_{j,k_j} have a
width Δ and the convolutions (15) are subsampled by 2.
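A minimal PyTorch sketch of the cascade x_j = ρ W_j x_{j−1}, with W_j given by the sum of convolutions (15): nn.Conv2d sums filtered channels, the rectifier implements ρ, and the stride-2 subsampling doubles the spatial scale Δ_j at each layer. Channel widths and filter sizes are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

class ConvNetCascade(nn.Module):
    """x_j = rho(W_j x_{j-1}): multi-channel convolutions followed by a rectifier."""
    def __init__(self, channels=(3, 16, 32, 64, 128)):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv2d(channels[j - 1], channels[j], kernel_size=5, stride=2, padding=2)
            for j in range(1, len(channels))
        ])
        self.rho = nn.ReLU()              # contractive pointwise non-linearity

    def forward(self, x):
        outputs = []
        for W in self.layers:             # each W_j is covariant to translations (a convolution)
            x = self.rho(W(x))            # x_j = rho(W_j x_{j-1}), subsampled by 2 along u
            outputs.append(x)
        return outputs                    # all layers x_1, ..., x_J

x = torch.randn(1, 3, 128, 128)           # input image x_0 = x
layers = ConvNetCascade()(x)
print([tuple(xj.shape) for xj in layers])  # spatial size halves, channel index k_j grows
```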
Neural networks include many side tricks. They sometimes normalize the amplitude of x_j(u, k), by
dividing it by the norm of all coefficients x_j(v, k) for v in a neighborhood of u. This eliminates multiplicative
amplitude variabilities. Instead of subsampling (15) on a regular grid, a max pooling may select the largest
coefficients over each sampling cell. Coefficients may also be modified by subtracting a constant adapted to
each coefficient. When applying a rectifier ρ, this constant acts as a soft threshold, which increases sparsity.
It is usually observed that the coefficients x_j(u, k_j) inside the network have a sparse activation.
The deep network output x_J = Φ_J(x) is provided to a classifier, usually composed of fully connected
neural network layers [21]. Supervised deep-learning algorithms optimize the filter values w_{j,k_j}(u, k) in order
to minimize the average classification or regression error on the training samples {x_i, f(x_i)}_{i≤q}. There can be
more than 10^8 variables in a network [21]. The filter update is done with a back-propagation algorithm, which
may be computed with a stochastic gradient descent, with regularization procedures such as dropout. This
high-dimensional optimization is non-convex, but despite the presence of many local minima, the regularized
stochastic gradient descent converges to a local minimum providing good accuracy on test examples [12].
The rectifier non-linearity ρ is usually preferred because the resulting optimization has better convergence.
It however requires a large number of training examples. Several hundred examples per class are usually
needed to reach good performance.
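A schematic PyTorch training loop matching the description above: stochastic gradient descent on the filter weights by back-propagation, with dropout before a final fully connected classifier. The data, network size and hyper-parameters are placeholders, not those of the cited experiments.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder training set {x_i, f(x_i)}: random images and class labels.
X = torch.randn(512, 3, 32, 32)
y = torch.randint(0, 10, (512,))
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

net = nn.Sequential(                            # convolutional layers rho W_j ...
    nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
    nn.Flatten(), nn.Dropout(0.5),              # ... then a fully connected classifier
    nn.Linear(64 * 8 * 8, 10),
)
opt = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(net(xb), yb)             # classification error on the mini-batch
        loss.backward()                         # back-propagation through the cascade
        opt.step()                              # stochastic gradient update of filter weights
```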
Instabilities have been observed in some network architectures [31], where the addition of small perturbations
to x can produce large variations of x_J. This happens when the norms of the matrices W_j are larger than 1, so that
perturbations are amplified when these operators are cascaded. However, deep networks also have a strong form of stability, illustrated by
transfer learning [21]. A deep network layer x_J optimized on a particular training database can approximate
different classification functions, if the final classification layers are trained on a new database. This means
that it has learned stable structures, which can be transferred across similar learning problems.
§6 Scattering on the Translation Group

A deep network alternates linear operators W_j and contractive non-linearities ρ. To analyze the properties
of this cascade, we begin with a simpler architecture, where W_j does not combine multiple convolutions
across channels in each layer. We show that such network coefficients are obtained through convolutions
with a reduced number of equivalent wavelet filters. It defines a scattering transform [24] whose contraction
and linearization properties are reviewed. Variance reduction and loss of information are studied with
reconstructions of stationary processes.
No channel combination Suppose that x_j(u, k_j) is computed by convolving a single channel x_{j−1}(u, k_{j−1})
along u:

x_j(u, k_j) = ρ( x_{j−1}(·, k_{j−1}) ⋆ w_{j,h}(u) )  with  k_j = (k_{j−1}, h) . (16)

It corresponds to a deep network filtering (15) whose filters do not combine several channels. Iterating on
j defines a convolution tree, as opposed to a full network. It results from (16) that

x_J(u, k_J) = ρ( ρ( ... ρ( x ⋆ w_{1,h_1} ) ⋆ w_{2,h_2} ... ) ⋆ w_{J,h_J} )(u)  with  k_J = (h_1, ..., h_J) . (17)

If ρ is a rectifier ρ(α) = max(α, 0) or a modulus ρ(α) = |α| then ρ(α) = α if α ≥ 0. We can thus remove
this non-linearity at the output of an averaging filter w_{j,h}. Indeed, this averaging filter is applied to positive
coefficients and thus computes positive coefficients, which are not affected by ρ. On the contrary, if w_{j,h} is
a band-pass filter then the convolution with x_{j−1}(·, k_{j−1}) has alternating signs or a complex phase which
varies. The non-linearity ρ removes the sign or the phase, which has a strong contraction effect.
Equivalent wavelet filter Let m be the number of band-pass filters {w_{j_n,h_{j_n}}}_{1≤n≤m} in the convolution
cascade (17). All other filters are thus low-pass filters. If we remove ρ after each low-pass filter, we get m
equivalent band-pass filters:

ψ_{j_n,k_n}(u) = w_{j_{n−1}+1, h_{j_{n−1}+1}} ⋆ ... ⋆ w_{j_n, h_{j_n}}(u) . (18)

The cascade of J convolutions (17) is reduced to m convolutions with these equivalent filters:

x_J(u, k_J) = ρ(ρ(ρ(... ρ(x ⋆ ψ_{j_1,k_1}) ⋆ ψ_{j_2,k_2} ...) ⋆ ψ_{j_{m−1},k_{m−1}}) ⋆ ψ_{J,k_J})(u) , (19)

with 0 < j_1 < j_2 < ... < j_{m−1} < J. If the final filter w_{J,h_J} at depth J is a low-pass filter then ψ_{J,k_J} = φ_J
is an equivalent low-pass filter. In this case, the last non-linearity ρ can also be removed, which gives

x_J(u, k_J) = ρ(ρ(... ρ(x ⋆ ψ_{j_1,k_1}) ⋆ ψ_{j_2,k_2} ...) ⋆ ψ_{j_{m−1},k_{m−1}}) ⋆ φ_J(u) . (20)

The operator Φ_J x = x_J is a wavelet scattering transform, introduced in [24]. Changing the network filters
w_{j,h} modifies the equivalent band-pass filters ψ_{j,k}. As in the fast wavelet transform algorithm (12), if w_{j,h}
is a rotation of a dilated filter w_j then ψ_{j,h} is a dilation and rotation of a single mother wavelet ψ.
Scattering order The order m = 1 coefficients x_J(u, k_J) = ρ(x ⋆ ψ_{j_1,k_1}) ⋆ φ_J(u) are the wavelet coefficients
computed in (13). The loss of information due to averaging is now compensated by higher-order coefficients.
For m = 2, ρ(ρ(x ⋆ ψ_{j_1,k_1}) ⋆ ψ_{j_2,k_2}) ⋆ φ_J are complementary invariants. They measure interactions between
variations of x at a scale 2^{j_1}, within a distance 2^{j_2}, and along orientation or frequency bands defined by k_1
and k_2. These are scale interaction coefficients, missing from first-order coefficients. Because ρ is strongly
contracting, order m coefficients have an amplitude which decreases quickly as m increases [24, 32]. For images
and audio signals, the energy of scattering coefficients becomes negligible for m ≥ 3. Let us emphasize that
the convolution network depth is J, whereas m is the number of effective non-linearities applied to an output coefficient.
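A toy numpy sketch of the scattering cascade (19)-(20) up to order m = 2 on a one-dimensional signal, with illustrative Gaussian band-pass filters defined in the Fourier domain playing the role of the wavelets ψ_j, a Gaussian low-pass playing the role of φ_J, and the modulus implementing ρ. It is only meant to show the structure of the computation, not to reproduce a reference implementation.

```python
import numpy as np

def gabor_bank(N, J):
    """Illustrative band-pass filters psi_j and low-pass phi_J, defined in Fourier."""
    omega = np.fft.fftfreq(N)
    psis = [np.exp(-(omega - 2.0 ** (-j)) ** 2 / (2 * (2.0 ** (-j) / 4) ** 2))
            for j in range(1, J + 1)]                    # one frequency band per octave
    phi = np.exp(-omega ** 2 / (2 * (2.0 ** (-J) / 4) ** 2))
    return psis, phi

def scattering(x, J=5):
    """Order 0, 1 and 2 coefficients of the cascade (19)-(20) for a 1D signal."""
    psis, phi = gabor_bank(len(x), J)
    conv = lambda u, h: np.fft.ifft(np.fft.fft(u) * h)   # circular convolution
    S = [np.real(conv(x, phi))]                          # order 0: x * phi_J
    for j1 in range(J):
        u1 = np.abs(conv(x, psis[j1]))                   # rho(x * psi_{j1}), rho = modulus
        S.append(np.real(conv(u1, phi)))                 # order 1: rho(x * psi_{j1}) * phi_J
        for j2 in range(j1 + 1, J):                      # scale interactions, j2 > j1
            u2 = np.abs(conv(u1, psis[j2]))              # rho(rho(x * psi_{j1}) * psi_{j2})
            S.append(np.real(conv(u2, phi)))             # order 2 coefficient
    return np.stack(S)

x = np.random.default_rng(0).standard_normal(1024)
print(scattering(x).shape)    # (1 + J + J*(J-1)/2, N) coefficients before subsampling
```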
Diffeomorphism continuity Section 4 explains that a wavelet transform defines representations which are
Lipschitz continuous to actions of diffeomorphisms. Scattering coefficients up to the order m are computed
by applying m wavelet transforms. One can prove [24] that they thus define a representation which is Lipschitz
continuous to the action of diffeomorphisms: there exists C > 0 such that

∀(g, x) ∈ Diff(R^n) × L²(R^n) , ‖Φ_J(g.x) − Φ_J x‖ ≤ C m (2^{−J} ‖g‖_∞ + ‖∇g‖_∞) ‖x‖ ,

plus a Hessian term which is neglected. This result is proved in [24] for ρ(α) = |α|, but it remains valid for
any contractive pointwise operator such as the rectifier ρ(α) = max(α, 0). It relies on commutation properties
of wavelet transforms and diffeomorphisms. It shows that the action of small diffeomorphisms is linearized
over scattering coefficients.
Classification Scattering vectors are restricted to coefficients of order m ≤ 2, because their amplitude
is negligible beyond. A translation scattering Φ_J x is well adapted to classification problems where the
main sources of intra-class variability are translations, small deformations, or ergodic stationary
processes. For example, intra-class variabilities of hand-written digit images are essentially due to translations
and deformations. On the MNIST digit database [6], applying a linear classifier to scattering coefficients
Φ_J x gives state-of-the-art classification errors. Music or speech classification over short time intervals of
100 ms can be modeled by ergodic stationary processes. Good music and speech classification results are
then obtained with a scattering transform [2]. Image texture classification is also a problem where intra-class
variability can be modeled by ergodic stationary processes. Scattering transforms give state-of-the-art
results over a wide range of image texture databases [6, 29], compared to other descriptors including power
spectrum moments. Software can be retrieved at www.di.ens.fr/data/software.
Stationary processes To analyze the information loss, we now study the reconstruction of x from its
scattering coefficients, in a stochastic framework where x is a stationary process. This will raise variance
and separation issues, where sparsity plays a role. It also demonstrates the importance of second-order
scale interaction terms to capture non-Gaussian geometric properties of ergodic stationary processes. Let
us consider scattering coefficients of order m,

Φ_J x(u, k) = ρ(... ρ(ρ(x ⋆ ψ_{j_1,k_1}) ⋆ ψ_{j_2,k_2}) ... ⋆ ψ_{j_m,k_m}) ⋆ φ_J(u) , (21)

with ∫ φ_J(u) du = 1. If x is a stationary process then ρ(... ρ(x ⋆ ψ_{j_1,k_1}) ... ⋆ ψ_{j_m,k_m}) remains stationary,
because convolutions and pointwise operators preserve stationarity. The spatial averaging by φ_J provides an
unbiased estimator of the expected value of Φ_J x(u, k), which is a scattering moment:

E(Φ_J x(u, k)) = E( ρ(... ρ(ρ(x ⋆ ψ_{j_1,k_1}) ⋆ ψ_{j_2,k_2}) ... ⋆ ψ_{j_m,k_m}) ) . (22)

If x is a slow mixing process, which is a weak ergodicity assumption, then the estimation variance σ_J² =
‖Φ_J x − E(Φ_J x)‖² converges to zero [8] when J goes to ∞. Indeed, Φ_J is computed by iterating on contractive
operators, which average an ergodic stationary process x over progressively larger scales. One can prove that
scattering moments characterize complex multiscale properties of fractal and multifractal processes, such
as Brownian motions, Lévy processes or Mandelbrot cascades [7].
Figure 3: First row: original images. Second row: realizations of Gaussian processes with the same second-order covariance moments. Third row: reconstructions from first- and second-order scattering coefficients.

Inverse scattering and sparsity Scattering transforms are generally not invertible, but given Φ_J(x) one
can compute vectors x̃ such that ‖Φ_J(x) − Φ_J(x̃)‖ ≤ σ_J. We initialize x̃_0 as a Gaussian white noise
realization, and iteratively update x̃_n by reducing ‖Φ_J(x) − Φ_J(x̃_n)‖ with a gradient descent algorithm,
until it reaches σ_J [8]. Since Φ_J is non-linear, this minimization is not convex and there is no guaranteed
convergence, but numerical reconstructions converge up to a sufficient precision. The recovered x̃ is a stationary
process having nearly the same scattering moments as x, whose properties are similar to a maximum entropy process
for fixed scattering moments [8].
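A minimal PyTorch sketch of the reconstruction procedure described above: starting from Gaussian white noise, x̃ is updated by gradient descent (here Adam) to reduce ‖Φ_J(x) − Φ_J(x̃)‖, where Φ_J is a toy differentiable first- and second-order scattering built from fixed random zero-mean filters. The filters and image are illustrative stand-ins for the wavelets and textures of the actual experiments.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, K = 64, 4
x = torch.randn(1, 1, N, N)                        # stands for the target image

# Toy differentiable "scattering": fixed random band-pass filters, modulus, global average.
w1 = torch.randn(K, 1, 7, 7); w1 -= w1.mean(dim=(2, 3), keepdim=True)   # zero-mean filters
w2 = torch.randn(K, K, 7, 7); w2 -= w2.mean(dim=(2, 3), keepdim=True)

def Phi(z):
    u1 = F.conv2d(z, w1, padding=3).abs()          # rho(z * psi_{1,k})
    u2 = F.conv2d(u1, w2, padding=3).abs()         # rho(rho(z * psi_1) * psi_2)
    return torch.cat([u1.mean(dim=(2, 3)), u2.mean(dim=(2, 3))], dim=1)  # averaging by phi_J

target = Phi(x).detach()
x_tilde = torch.randn_like(x, requires_grad=True)  # Gaussian white noise initialization
opt = torch.optim.Adam([x_tilde], lr=0.05)

for it in range(500):
    opt.zero_grad()
    loss = (Phi(x_tilde) - target).pow(2).sum()    # || Phi_J(x) - Phi_J(x_tilde) ||^2
    loss.backward()
    opt.step()
print(loss.item())
```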
Figure 3 shows several examples of images x with N² pixels. The first three images are realizations of
ergodic stationary textures. The second row gives realizations of stationary Gaussian processes having the
same N² second-order covariance moments as the top textures. The third column shows the vorticity field
of a two-dimensional turbulent fluid. The Gaussian realization is thus a Kolmogorov-type model, which
does not restore the filament geometry. The third row gives reconstructions from scattering coefficients,
limited to order m ≤ 2. The scattering vector is computed at the maximum scale 2^J = N, with wavelets
having K = 8 different orientations. It is thus completely invariant to translations. The dimension of Φ_J x
is about (K log₂ N)²/2, which is much smaller than N². Scattering moments restore texture geometries better than the Gaussian
models obtained with N² covariance moments. This geometry is mostly captured by second-order scattering
coefficients, which provide scale interaction terms. Indeed, first-order scattering moments can only reconstruct
images which are similar to realizations of Gaussian processes. First- and second-order scattering moments
also provide good models of ergodic audio textures [8].
The fourth image has very sparse wavelet coefficients. In this case the image is nearly perfectly restored
from its scattering coefficients, up to a random translation. The reconstruction is centered for comparison.
Section 4 explains that if wavelet coefficients are sparse then a rectifier or an absolute value ρ
does not contract distances with other signals as much. Indeed, |ρ(α) − ρ(α′)| = |α − α′| if α = 0 or α′ = 0.
Inverting a scattering transform is a non-linear inverse problem, which requires recovering a lost phase
information. Sparsity plays an important role in such phase recovery problems [32]. Translating the last
motorcycle image randomly defines a non-ergodic stationary process, whose wavelet coefficients are not as
sparse. As a result, the reconstruction from a random initialization is very different, and does not preserve
patterns which are important for most classification tasks. This is not surprising since there are many fewer
scattering coefficients than image pixels. If we reduce 2^J so that the number of scattering coefficients reaches
the number of pixels, then the reconstruction is of good quality, but there is little variance reduction.
Concentrating on the translation group is not so effective at reducing variance when the process is not
translation ergodic. Applying wavelet filters can destroy important structures which are not sparse over
wavelets. The next section addresses both issues. Impressive texture synthesis results have been obtained
with deep convolutional networks trained on image databases [14], but with many more output coefficients.
Numerical reconstructions [13] also show that one can recover complex patterns, such as birds, airplanes,
cars, dogs and ships, if the network is trained to recognize the corresponding image classes. The network keeps
some form of memory of important classification patterns.
§7 Multiscale Hierarchical Convolutional Networks

Scattering transforms on the translation group are restricted deep convolutional network architectures, which
suffer from variance issues and loss of information. We shall explain why channel combinations provide the
flexibility needed to avoid some of these limitations. We analyze a general class of convolutional network
architectures by extending the tools previously introduced. Contractions and invariants to translations
are replaced by contractions along groups of local symmetries adapted to f, which are defined by parallel
transports in each network layer. The network is structured by factorizing groups of symmetries as depth
increases. It implies that all linear operators can be written as generalized convolutions across multiple
channels. To preserve the classification margin, wavelets must also be replaced by adapted filter weights,
which separate discriminative patterns in multiple network fibers.
Separation margin Network layers x_j = ρ W_j x_{j−1} are computed with operators ρ W_j which contract and
separate components of x_j. We shall see that W_j also needs to prepare x_j for the next transformation W_{j+1},
so consecutive operators W_j and W_{j+1} are strongly dependent. Each W_j is a contractive linear operator,
‖W_j z‖ ≤ ‖z‖, to reduce the space volume and avoid instabilities when cascading such operators [31]. A
layer x_{j−1} must separate f, so that we can write f(x) = f_{j−1}(x_{j−1}) for some function f_{j−1}(z). To simplify
explanations, we concentrate on classification, where separation is an ε > 0 margin condition:

‖x_{j−1} − x′_{j−1}‖ ≥ ε  if  f(x) ≠ f(x′) . (23)

The next layer x_j = ρ W_j x_{j−1} lives in a contracted space, but it must also satisfy

‖x_j − x′_j‖ ≥ ε  if  f(x) ≠ f(x′) . (24)

The operator W_j computes a linear projection which preserves this margin condition, but the resulting
dimension reduction is limited. We can further contract the space non-linearly with ρ. To preserve the
margin, it must reduce distances along non-linear displacements which transform any x_{j−1} into an x′_{j−1}
which is in the same class.
Parallel transport Displacements which preserve classes are defined by local symmetries (5), which are
transformations ḡ such that f_{j−1}(x_{j−1}) = f_{j−1}(ḡ.x_{j−1}). To define a local invariant to a group of transformations
G, we must process the orbit {ḡ.x_{j−1}}_{ḡ∈G}. However, W_j is applied to x_{j−1}, not to the non-linear
transformations ḡ.x_{j−1} of x_{j−1}. The key idea is that a deep network can proceed in two steps. Let us write
x_j(u, k_j) = x_j(v) with v ∈ P_j. First, ρ W_j computes an approximate mapping of such an orbit {ḡ.x_{j−1}}_{ḡ∈G}
into a parallel transport in P_j, which moves coefficients of x_j. Then W_{j+1} applied to x_j filters the orbits
of this parallel transport. A parallel transport is defined by operators g ∈ G_j acting on v ∈ P_j, and we write

g.x_j(v) = x_j(g.v) .

The operator W_j is defined so that G_j is a group of local symmetries: f_j(g.x_j) = f_j(x_j) for small |g|_{G_j}. This
is obtained if a transport of x_j = ρ W_j x_{j−1} by g ∈ G_j corresponds to the action of a local symmetry ḡ of f_{j−1}
on x_{j−1}:

g.[ρ W_j x_{j−1}] = ρ W_j [ḡ.x_{j−1}] . (25)

By definition f_j(x_j) = f_{j−1}(x_{j−1}) = f(x). Since f_{j−1}(ḡ.x_{j−1}) = f_{j−1}(x_{j−1}), it results from (25) that
f_j(g.x_j) = f_j(x_j).

Figure 4: A multiscale hierarchical network computes convolutions along the fibers of a parallel transport. It is defined by a group G_j of symmetries acting on the index set P_j of a layer x_j. Filter weights are transported along fibers.
The index space Pj is called a Gj -principal fiber bundle in differential geometry [26], illustrated by Figure
4. The orbits of Gj in Pj are fibers, indexed by the equivalence classes Bj = Pj /Gj . They are globally
invariant to the action of G_j, and play an important role in separating f. Each fiber indexes a continuous
Lie group, but it is sampled along G_j at intervals such that values of x_j can be interpolated in between. As
in the translation case, these sampling intervals depend upon the local invariance of x_j, which increases with
j.
Hierarchical symmetries In a hierarchical convolutional network, we further impose that local symmetry
groups grow with depth and can be factorized:

∀j ≥ 0 , G_j = G_{j−1} ⋊ H_j . (26)

The hierarchy begins at j = 0 with the translation group G_0 = R^n, which acts on x(u) through the spatial
variable u ∈ R^n. The condition (26) is not necessarily satisfied by general deep networks, besides j = 0
for translations. It is used by joint scattering transforms [29, 3] and has been proposed for unsupervised
convolutional network learning [9]. Proposition 1 proves that this hierarchical embedding implies that each
W_j is a convolution on G_{j−1}.
Proposition 1. The group embedding (26) implies that x_j can be indexed by (g, h, b) ∈ G_{j−1} × H_j × B_j, and
that there exist filters w_{j,h.b} ∈ C^{P_{j−1}} such that

x_j(g, h, b) = ρ( Σ_{v′∈P_{j−1}} x_{j−1}(v′) w_{j,h.b}(g^{−1}.v′) ) = ρ( x_{j−1} ⋆_{j−1} w_{j,h.b} (g) ) , (27)

where ⋆_{j−1} denotes a convolution along the group G_{j−1}.

Proof. Each coefficient of x_j is computed from x_{j−1} with a row w_{j,v} of W_j followed by ρ, so x_j(v) = ρ(⟨x_{j−1}, w_{j,v}⟩)
for v ∈ P_j. If ḡ ∈ G_j then ḡ.x_j(v) = x_j(ḡ.v) = ρ(⟨x_{j−1}, w_{j,ḡ.v}⟩). One can write w_{j,v} = w_{j,ḡ.b} with ḡ ∈ G_j and
b ∈ B_j = P_j/G_j. If G_j = G_{j−1} ⋊ H_j then ḡ ∈ G_j can be decomposed into ḡ = (g, h) ∈ G_{j−1} ⋊ H_j,
where g.x_j = ρ(⟨g.x_{j−1}, w_{j,b}⟩). But g.x_{j−1}(v′) = x_{j−1}(g.v′), so with a change of variable we get w_{j,g.b}(v′) =
w_{j,b}(g^{−1}.v′). Hence w_{j,ḡ.b}(v′) = w_{j,(g,h).b}(v′) = w_{j,h.b}(g^{−1}.v′), where w_{j,h.b} = h.w_{j,b}. Inserting this filter
expression in the inner product defining x_j proves (27).
This proposition proves that W_j is a convolution along the fibers of G_{j−1} in P_{j−1}. Each w_{j,h.b} is a
transformation of an elementary filter w_{j,b} by a group of local symmetries h ∈ H_j, so that f_j(x_j(g, h, b))
remains constant when x_j is locally transported along h. We give below several examples of groups H_j and
filters w_{j,h.b}. However, learning algorithms compute filters directly, with no prior knowledge of the group
H_j. The filters w_{j,h.b} can be optimized so that variations of x_j(g, h, b) along h capture a large variance of
x_{j−1} within each class. Indeed, this variance is then reduced by the next ρ W_{j+1}. The generators of H_j can
be interpreted as principal symmetry generators, by analogy with the principal directions of a PCA.
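To make the generalized convolution (27) concrete, here is a toy numpy discretization of a convolution along G_1 = R² ⋊ SO(2) with K rotation channels: the elementary filter is transported along the group, i.e. spatially rotated and cyclically shifted along the rotation index, before being summed over the input channels. It is only an illustration of the indexing, not an implementation used in the cited works.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

K = 8                                               # rotation channels: angles 2*pi*k/K

def group_conv(x1, w):
    """Convolution along G_1 = R^2 x SO(2):  y(u, k') = sum_k x1(., k) * w_{k'}(., k),
    where w_{k'} is the filter w spatially rotated by 2*pi*k'/K and cyclically
    shifted by k' along the rotation index (a transported filter, as in (27))."""
    Kc = x1.shape[0]
    y = np.zeros_like(x1)
    for kp in range(Kc):
        w_rot = np.stack([rotate(w[k], 360.0 * kp / Kc, reshape=False, order=1)
                          for k in range(Kc)])
        w_kp = np.roll(w_rot, kp, axis=0)           # transport along the rotation index
        y[kp] = sum(fftconvolve(x1[k], w_kp[k], mode="same") for k in range(Kc))
    return y

rng = np.random.default_rng(0)
x1 = rng.standard_normal((K, 64, 64))               # a layer indexed by (rotation k, position u)
w = rng.standard_normal((K, 9, 9))                  # elementary filter w_{j,b}(v), v = (u, k)
x2 = group_conv(x1, w)                              # before applying rho, e.g. a rectifier
print(x2.shape)
```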
Generalized scattering The scattering convolution along translations (16) is replaced in (27) by a convolution
along G_{j−1}, which combines different layer channels. Results for translations can essentially be
extended to this general case. If w_{j,h.b} is an averaging filter then it computes positive coefficients, so the
non-linearity ρ can be removed. If each filter w_{j,h.b} has a support in a single fiber indexed by b, as in Figure 4,
then B_{j−1} ⊂ B_j. This defines a generalized scattering transform, which is a structured multiscale hierarchical
convolutional network such that G_{j−1} ⋊ H_j = G_j and B_{j−1} ⊂ B_j. If j = 0 then G_0 = P_0 = R^n, so B_0 is
reduced to a single fiber.
As in the translation case, we need to linearize small deformations in Diff(G_{j−1}), which include many
more local symmetries than the low-dimensional group G_{j−1}. A small diffeomorphism g ∈ Diff(G_{j−1}) is a
non-parallel transport along the fibers of G_{j−1} in P_{j−1}, which is a perturbation of a parallel transport. It
modifies distances between pairs of points in P_{j−1} by scaling factors. To linearize such diffeomorphisms, we
must use localized filters whose supports have different scales. Scale parameters are typically different along
the different generators of G_{j−1} = R^n ⋊ H_1 ⋊ ... ⋊ H_{j−1}. Filters can be constructed with wavelets dilated
at different scales, along the generators of each group H_k for 1 ≤ k ≤ j. Linear dimension reduction mostly
results from this filtering. Variations at fine scales may be eliminated, so that x_j(g, h, b) can be coarsely
sampled along g.
Rigid movements For small j, the local symmetry groups H_j may be associated with linear or non-linear
physical phenomena such as rotations, scalings, colored illuminations or pitch frequency shifts. Let SO(n)
be the group of rotations. The group of rigid movements SE(n) = R^n ⋊ SO(n) is non-commutative and often
includes local symmetries. For images, n = 2, this group becomes a transport in P_1 with H_1 = SO(n),
which rotates a wavelet filter: w_{1,h}(u) = w_1(r_h^{−1}u). Such filters are often observed in the first layer of deep
convolutional networks [13]. They map the action of ḡ = (v, r_k) ∈ SE(n) on x to a parallel transport of
(u, h) ∈ P_1 defined for g ∈ G_1 = R² × SO(n) by g.(u, h) = (v + r_k u, h + k). Small diffeomorphisms in Diff(G_j)
correspond to deformations along translations and rotations, which are sources of local symmetries. A roto-translation
scattering [29, 27] linearizes them with wavelet filters along translations and rotations, with
G_j = SE(n) for all j > 1. This roto-translation scattering can efficiently regress physical functionals which
are often invariant to rigid movements, and Lipschitz continuous to deformations. For example, quantum
molecular energies f(x) are well estimated by sparse regressions over such scattering representations [18].
Audio pitch A pitch frequency shift is a more complex example of a non-linear symmetry for audio signals.
Two different musical notes of a same instrument differ by a pitch shift. Their harmonic frequencies are multiplied
by a factor 2^h, but it is not a dilation because the note duration is not changed. With narrow band-pass
filters w_{1,h}(u) = w_1(2^{−h}u), a pitch shift is approximately mapped to a translation along h ∈ H_1 = R of
ρ(x ⋆ w_{1,h}(u)), with no modification along the time u. The action of g = (v, k) ∈ G_1 = R × R = R² over
(u, h) ∈ P_1 is thus a two-dimensional translation g.(u, h) = (u + v, h + k). A pitch shift also comes with
deformations along time and log-frequencies, which define a much larger class of symmetries in Diff(G_1).
Two-dimensional wavelets along (u, h) can linearize these small time and log-frequency deformations. This
defines a joint time-frequency scattering, applied to speech and music classification [3]. Such transformations
were first proposed as neurophysiological models of audition [25].
Manifolds of patterns The group H_j is associated with complex transformations when j increases. It
needs to capture large transformations between different patterns in a same class, for example chairs of
different styles. Let us consider training samples {x^i}_i of a same class. The iterated network contractions
transform them into vectors {x^i_{j−1}}_i which are much closer. Their distances define weighted graphs which
sample underlying continuous manifolds in the space. Such manifolds clearly appear in [5], for high-level
patterns such as chairs or cars, together with poses and colors. As opposed to manifold learning, deep
network filters result from a global optimization which can be computed in high dimension. The principal
symmetry generators of H_j are associated with common transformations over all manifolds of examples x^i_{j−1},
which preserve the class while capturing a large intra-class variance. They are approximately mapped to a
parallel transport in x_j by the filters w_{j,h.b}. The diffeomorphisms in Diff(G_j) are non-parallel transports
corresponding to high-dimensional displacements on the manifolds of x_{j−1}. Linearizing Diff(G_j) amounts
to partially flattening all these manifolds simultaneously, which may explain why manifolds become progressively
more regular as the network depth increases [5], but this involves open mathematical questions.
Sparse support vectors We have so far concentrated on the reduction of the data variability
through contractions. We now explain why the classification margin can be preserved thanks to the existence
of multiple fibers B_j in P_j, by adapting filters instead of using standard wavelets. The fibers indexed by
b ∈ B_j are separation instruments, which increase dimensionality to avoid reducing the classification margin.
They prevent the collapse of vectors in different classes whose distance ‖x_{j−1} − x′_{j−1}‖ is close to the
minimum margin ε. These vectors are close to classification frontiers. They are called multiscale support
vectors, by analogy with support vector machines. To avoid further contracting their distance, they can be
separated along different fibers indexed by b. The separation is achieved by filters w_{j,h.b}, which transform
x_{j−1} and x′_{j−1} into x_j(g, h, b) and x′_j(g, h, b) having sparse supports on different fibers b. The next contraction
ρ W_{j+1} reduces distances along fibers indexed by (g, h) ∈ G_j, but not across b ∈ B_j, which preserves distances.
The contraction increases with j, so the number of support vectors close to frontiers also increases, which
implies that more fibers are needed to separate them.
When j increases, the size of x_j is a balance between the dimension reduction along fibers, by subsampling
g ∈ G_j, and an increasing number of fibers B_j which encode progressively more support vectors. Coefficients
in these fibers become more specialized and invariant, like the grandmother neurons observed in deep layers
of convolutional networks [1]. They have a strong response to particular patterns and are invariant to a
large class of transformations. In this model, the filters w_{j,h.b} are adapted to produce sparse
representations of multiscale support vectors. They provide a sparse distributed code, defining an invariant
pattern memorization. This memorization is numerically observed in deep network reconstructions [13], which
can restore complex patterns within each class. Let us emphasize that groups and fibers are mathematical
ghosts behind the filters, and are never computed. The learning optimization is performed directly on the filters,
which carry the trade-off between contraction, to reduce the data variability, and separation, to preserve the
classification margin.
§8 Conclusion
This paper provides a mathematical framework to analyze the contraction and separation properties of deep convolutional
networks. In this model, network filters guide non-linear contractions, which reduce the data
variability in directions of local symmetries. The classification margin can be controlled by sparse separations
along network fibers. Network fibers combine invariances along groups of symmetries and distributed
pattern representations, which could be sufficiently stable to explain transfer learning of deep networks
[21]. However, this is only a framework. We need complexity measures, approximation theorems in spaces
of high-dimensional functions, and guaranteed convergence of filter optimization, to fully understand the
mathematics of these convolutional networks.
Besides learning, there are striking similarities between these multiscale mathematical tools and the treatment
of symmetries in particle and statistical physics [15]. One can expect a rich cross-fertilization between
high-dimensional learning and physics, through the development of a common mathematical language.
Acknowledgements I would like to thank Carmine Emanuele Cella, Ivan Dokmanić, Sira Ferradans,
Edouard Oyallon and Irène Waldspurger for their helpful comments and suggestions.
Funding This work was supported by the ERC grant InvariantClass 320959.
§9 References
[1] Agrawal P, Girshick R, Malik J. 2014 Analyzing the Performance of Multilayer Neural Networks for Object Recognition, Proc. of ECCV.
[2] Andén J, Mallat S. 2014 Deep Scattering Spectrum, IEEE Trans. on Signal Processing, 62.
[3] Andén J, Lostanlen V, Mallat S. 2015 Joint time-frequency scattering for audio classification, Proc. of Machine Learn. for Signal Proc., Boston.
[4] Anselmi F, Leibo J, Rosasco L, Mutch J, Tacchetti A, Poggio T. 2013 Unsupervised Learning of Invariant Representations in Hierarchical Architectures, arXiv:1311.4158.
[5] Aubry M, Russell B. 2015 Understanding deep features with computer-generated imagery, arXiv:1506.01151.
[6] Bruna J, Mallat S. 2013 Invariant Scattering Convolution Networks, IEEE Trans. on PAMI, 35.
[7] Bruna J, Mallat S, Bacry E, Muzy JF. 2015 Intermittent process analysis with scattering moments, Annals of Stats., 43.
[8] Bruna J, Mallat S. 2015 Stochastic scattering models, submitted to IEEE Trans. Info. Theory.
[9] Bruna J, Szlam A, Le Cun Y. 2014 Learning Stable Group Invariant Representations with Convolutional Networks, ICLR 2014.
[10] Candès E, Donoho D. 1999 Ridgelets: a key to higher-dimensional intermittency?, Phil. Trans. Roy. Soc. A, 357.
[11] Carrier J, Greengard L, Rokhlin V. 1988 A Fast Adaptive Multipole Algorithm for Particle Simulations, SIAM J. Sci. Stat. Comput., 9.
[12] Choromanska A, Henaff M, Mathieu M, Ben Arous G, Le Cun Y. 2014 The loss surfaces of multilayer networks, arXiv:1412.0233.
[13] Denton E, Chintala S, Szlam A, Fergus R. 2015 Deep generative image models using a Laplacian pyramid of adversarial networks, NIPS 2015.
[14] Gatys LA, Ecker AS, Bethge M. 2015 Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks, arXiv:1505.07376.
[15] Glinsky M. 2011 A new perspective on renormalization: the scattering transformation, arXiv:1106.4369.
[16] Hastie T, Tibshirani R, Friedman J. 2009 The Elements of Statistical Learning, Springer Series in Statistics.
[19] Krizhevsky A, Sutskever I, Hinton G. 2012 ImageNet classification with deep convolutional neural networks, In Proc. of NIPS, p. 1090-1098.
[20] Le Cun Y, Boser B, Denker J, Henderson D, Howard R, Hubbard W, Jackel L. 1990 Handwritten digit recognition with a back-propagation network, In Proc. of NIPS, 3.
[24] Mallat S. 2012 Group Invariant Scattering, Comm. in Pure and Applied Mathematics, 65.
[25] Mesgarani N, Slaney M, Shamma S. 2006 Discrimination of speech from nonspeech based on multiscale spectro-temporal modulations, IEEE Trans. Audio, Speech, Lang. Process., 14.
[26] Petitot J. 2008 Neurogéométrie de la vision, Éditions de l'École Polytechnique.
[27] Oyallon E, Mallat S. 2015 Deep roto-translation scattering for object classification, Proc. of CVPR.
[28] Radford A, Metz L, Chintala S. 2016 Unsupervised representation learning with deep convolutional generative adversarial networks, ICLR 2016.
[29] Sifre L, Mallat S. 2013 Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination, In Proc. of CVPR.
[30] Sutskever I, Vinyals O, Le QV. 2015 Sequence to sequence learning with neural networks, In Proc. of NIPS, 27.
[31] Szegedy C, Erhan D, Zaremba W, Sutskever I, Goodfellow I, Bruna J, Fergus R. 2014 Intriguing properties of neural networks, In Proc. of ICLR.
[32] Waldspurger I. 2015 Wavelet transform modulus: phase retrieval and scattering, Ph.D. thesis, École Normale Supérieure.