Image Analysis Lecture 7
Dr Priscilla Cañizares
© Dr Priscilla Cañizares
What have we got in store for today?

Speech signal
https://speechprocessingbook.aalto.fi/Representations/Waveform.html
STFT

F(t, \omega) = \int_{-\infty}^{\infty} f(\tau)\, W(t - \tau)\, e^{-i\omega\tau}\, d\tau \qquad (90)

1. The signal is multiplied by a window W (e.g. a Gaussian) centred at time t to extract a segment s(t).
2. The Fourier transform of the segment s(t) is computed to give the local spectrum.

In discrete form, x[k] denotes the signal and g[k] an L-point window function; the STFT of x[k] can be interpreted as the Fourier transform of the product of x[k] with the shifted window.

Equivalently,

F(t, \omega) = \hat{f}_g(t, \omega) = \int_{-\infty}^{\infty} f(\tau)\, g^{*}(\tau - t)\, e^{-i\omega\tau}\, d\tau = \langle f, g_{t,\omega} \rangle,

where g acts as the window for the FT and t controls the translation of the window in time. The function F(t = t_0, \omega) reveals the spectral content of the signal around t_0.
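As a minimal sketch (not from the lecture), the STFT of a toy chirp signal can be computed with SciPy; the signal, sampling rate, and window length are all assumptions for illustration.

```python
# A minimal sketch: STFT/spectrogram of a toy chirp signal with SciPy.
import numpy as np
from scipy import signal

fs = 1000                                   # sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)               # 2 seconds of samples
x = np.sin(2 * np.pi * (50 + 100 * t) * t)  # chirp: frequency rises over time

# STFT: Hann window of L = 256 samples, 50% overlap between segments
f, tau, Z = signal.stft(x, fs=fs, window="hann", nperseg=256, noverlap=128)

# |Z[m, n]| is the spectral content F(t = tau[n], omega = 2*pi*f[m])
print(Z.shape)          # (frequency bins, time frames)
print(np.abs(Z).max())
```

Plotting |Z| against tau and f gives the spectrogram used later when comparing time-frequency resolutions.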
Compressible signals/images
• Most natural signals, such as images and audio, are highly compressible.
• The inherent structure observed in natural data implies that the data admits a sparse representation in an appropriate coordinate system: Fourier decomposition, wavelets.
• Sparse signal representation: when our signal is written in an appropriate basis (e.g. FT, WT), only a few modes are active, which reduces the number of values that must be stored for an accurate representation.
• Fourier modes and wavelets are generic or universal bases, in contrast to data-tailored bases such as the SVD.
Mother wavelet

Wavelet dilation increases the CWT's sensitivity to long-time-scale events, and wavelet contraction increases sensitivity to short-time-scale events. The basic idea in wavelet analysis is to start from a function \psi(t), known as the mother wavelet, and generate a family of scaled and translated versions of it by choosing a (scaling) and b (shift):

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right)

Example: the Haar wavelet.
How does it work?

Start from the mother wavelet \psi(t) and generate a family of scaled and translated versions of the function by choosing a and b:

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right) \qquad (99)

Example: the Haar wavelet,

\psi(t) =
\begin{cases}
1 & 0 \le t < 1/2 \\
-1 & 1/2 \le t < 1 \\
0 & \text{otherwise}
\end{cases} \qquad (100)

Wavelets provide an orthogonal and hierarchical basis for a signal.

[Figure 2.24: Three Haar wavelets, \psi_{1,0}, \psi_{1/2,0}, and \psi_{1/2,1/2}, for the first two levels of the multi-resolution in Fig. 2.23(d).]
We begin with the continuous wavelet transform (CWT), which is given by:

W_{\psi}(f)(a, b) = \langle f, \psi_{a,b} \rangle = \int_{-\infty}^{\infty} f(t)\, \bar{\psi}_{a,b}(t)\, dt, \qquad (2.54)

where \bar{\psi}_{a,b} denotes the complex conjugate of \psi_{a,b}. This is only valid for functions \psi(t) that satisfy the admissibility condition.

• The wavelet is translated, dilated, and contracted depending on the scale of activity under study.
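The CWT of Eq. (2.54) can be evaluated numerically by discretizing the integral. Below is a minimal sketch (not from the lecture) using the Haar mother wavelet of Eq. (100); the test signal, scales, and shifts are toy assumptions.

```python
# A minimal sketch: direct numerical CWT, Eq. (2.54), with the Haar mother wavelet.
import numpy as np

def haar(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 otherwise."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
           np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def psi_ab(t, a, b):
    """Scaled and translated wavelet psi_{a,b}(t) = a^{-1/2} * psi((t - b) / a)."""
    return haar((t - b) / a) / np.sqrt(a)

def cwt(f, t, scales, shifts):
    """W_psi(f)(a, b) = integral of f(t) * conj(psi_{a,b}(t)) dt (trapezoid rule)."""
    W = np.zeros((len(scales), len(shifts)))
    for i, a in enumerate(scales):
        for j, b in enumerate(shifts):
            W[i, j] = np.trapz(f * np.conj(psi_ab(t, a, b)), t)
    return W

t = np.linspace(0, 1, 2000)
f = np.sin(2 * np.pi * 5 * t) + (t > 0.5) * np.sin(2 * np.pi * 40 * t)  # toy signal
W = cwt(f, t, scales=np.geomspace(0.01, 0.5, 30), shifts=np.linspace(0, 1, 100))
print(W.shape)  # (scales, shifts): large |W| marks activity at that scale and time
```

Small scales a pick up the short-time-scale oscillation after t = 0.5, while large scales respond to the slow component, illustrating the translation/dilation bullet above.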
Signals at multiple scales

[Figure 2.23: Illustration of resolution limitations and uncertainty in time-frequency analysis — (a) time series, (b) Fourier transform, (c) spectrogram, (d) multi-resolution.]

If the wavelet functions are orthogonal, then the basis may be used for projection, as in the Fourier transform.

The simplest and earliest example of a wavelet is the Haar wavelet, developed in 1910 [227]:

\psi(t) =
\begin{cases}
1 & 0 \le t < 1/2 \\
-1 & 1/2 \le t < 1 \\
0 & \text{otherwise.}
\end{cases} \qquad (2.53)

The three Haar wavelets, \psi_{1,0}, \psi_{1/2,0}, and \psi_{1/2,1/2}, are shown in Fig. 2.24, representing the first two layers of the multi-resolution in Fig. 2.23(d).
Generic transform: \quad (T F)(y) = \int_{x} K(x, y)\, f(x)\, dx

Fourier: \quad K(x, y) \rightarrow e^{-i\omega t}

Wavelet: \quad K(x, y) \rightarrow \psi^{*}_{a,b}(t)
Continuous wavelet transform

Admissibility condition: the mother wavelet must satisfy it for the transform to be invertible.

Here \psi_{j,k} is a discrete family of wavelets:

\psi_{j,k}(t) = \frac{1}{\sqrt{a^{j}}}\, \psi\!\left(\frac{t - k b}{a^{j}}\right) \qquad (105)

For orthogonal wavelets, like the Haar wavelets, it is possible to expand a function f(t) uniquely in this basis:

f(t) = \sum_{j,k=-\infty}^{\infty} \langle f(t), \psi_{j,k}(t) \rangle\, \psi_{j,k}(t) \qquad (106)
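As a minimal sketch (not from the lecture), the orthogonal expansion of Eq. (106) can be carried out with PyWavelets; the toy signal, wavelet choice, and decomposition level are assumptions.

```python
# A minimal sketch: expand a signal in an orthogonal Haar wavelet basis with
# PyWavelets, then reconstruct it exactly, cf. Eq. (106).
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
f = np.sin(2 * np.pi * 7 * t) + 0.5 * (t > 0.6)   # toy piecewise-smooth signal

coeffs = pywt.wavedec(f, "haar", level=5)          # hierarchical wavelet coefficients
f_rec = pywt.waverec(coeffs, "haar")               # inverse transform

print(np.allclose(f, f_rec))                       # True: orthogonal basis, exact expansion
```

Because the Haar basis is orthogonal, the coefficients are exactly the inner products \langle f, \psi_{j,k} \rangle and the reconstruction is exact.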
Signals are generally not sparse in the original (pixel) space, but can be sparse after being decomposed on a suitable set of functions.

Smooth and piecewise-smooth signals/images are compressible in the wavelet domain.

Wavelet types

• For a 20 × 20 pixel black-and-white image, there are 2^{400} distinct possible images!

[Figure 3.3: Illustration of the vastness of image (pixel) space, with natural images occupying a vanishingly small fraction of the space.]

[Figure 3.4: Schematic of measurements y = \Theta s in the compressed sensing framework.]
A signal x is sparse if most of its entries are equal to zero, that is, if its support

\Lambda = \{\, 1 \le i \le N \;|\; x[i] \ne 0 \,\} \qquad (111)

has small cardinality.

We can model a signal x as a linear combination of A elementary functions (also called waveforms or atoms):

x = \Phi \alpha = \sum_{i=1}^{A} \alpha[i]\, \varphi_{i} \qquad (112)

where \alpha[i] is the (representation) coefficient of x in the dictionary \Phi = [\varphi_{1}, \ldots, \varphi_{A}], the N \times A matrix whose columns are the atoms \varphi_{i}, in general normalized to unit \ell_{2} norm:

\forall i \in \{1, \ldots, A\}, \qquad \|\varphi_{i}\|_{2}^{2} = \sum_{n=1}^{N} |\varphi_{i}[n]|^{2} = 1 \qquad (113)

Signals/images x that are sparse in \Phi are those that can be written exactly as a superposition of a small fraction of the atoms in the family (\varphi_{i})_{i}.
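A minimal sketch (not from the lecture) of the synthesis model in Eqs. (112)-(113); the random dictionary, sizes, and sparsity level are assumptions chosen only for illustration.

```python
# A minimal sketch: a K-sparse synthesis model x = Phi @ alpha, Eq. (112),
# with a random dictionary whose atoms are normalized as in Eq. (113).
import numpy as np

rng = np.random.default_rng(0)
N, A, K = 64, 128, 5                          # signal length, number of atoms, sparsity

Phi = rng.standard_normal((N, A))
Phi /= np.linalg.norm(Phi, axis=0)            # unit l2-norm atoms, Eq. (113)

alpha = np.zeros(A)
support = rng.choice(A, size=K, replace=False)
alpha[support] = rng.standard_normal(K)       # only K active coefficients

x = Phi @ alpha                               # Eq. (112): superposition of K atoms
print(np.count_nonzero(alpha), "active atoms out of", A)
```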
Compressible signals/images: Sparsity

In general, signals and images are not exactly sparse.

A signal is compressible, or "weakly sparse", if the sorted magnitudes |\alpha[i]| of the representation coefficients \alpha = \Phi^{T} x decay quickly, according to:

|\alpha[i]| \le C\, i^{-1/s}, \qquad i = 1, \ldots, N \qquad (114)

You can keep a small fraction of the coefficients without much loss.

[Example: FFT of an image, keep 0.05% of the coefficients, inverse FFT.]
• Images and audio signals are compressible in Fourier or wavelet bases: in their Fourier or wavelet representations (or wave atoms / local discrete cosine transforms for locally oscillating textures), most coefficients are small and may be neglected with negligible loss of quality.
• Instead of transmitting or storing the full data, we only need to transmit/store the few remaining active coefficients.
• The signal/image can then be recovered by reconstructing it with the inverse Fourier or wavelet transform.
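A minimal sketch (not from the lecture) of this keep-the-largest-coefficients idea on a toy 1-D signal; the signal and the 1% threshold are assumptions, not the 0.05% image example shown on the slide.

```python
# A minimal sketch: compress a toy signal by keeping only the largest Fourier
# coefficients, then reconstruct with the inverse FFT.
import numpy as np

t = np.linspace(0, 1, 4096, endpoint=False)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 170 * t)  # toy signal

X = np.fft.fft(x)
keep = 0.01                                            # keep 1% of the coefficients
thresh = np.sort(np.abs(X))[::-1][int(keep * len(X))]  # magnitude threshold
X_compressed = np.where(np.abs(X) >= thresh, X, 0)     # zero out small coefficients

x_rec = np.real(np.fft.ifft(X_compressed))             # inverse transform
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"kept {np.count_nonzero(X_compressed)} coefficients, relative error {rel_err:.3e}")
```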
How to reconstruct an image from a few measurements

Finding the sparsest vector consistent with a set of measurements is relatively simple to state mathematically, but until recently it was a computationally intractable problem. An important collaboration between Emmanuel Candès and Terence Tao, sparked by discussing the odd properties of signal reconstruction at their kids' daycare, helped change this. The rapid adoption of compressed sensing across the applied sciences rests on a solid mathematical framework that gives conditions for when it is possible to reconstruct the signal with high probability using convex algorithms.

The measurement matrix C \in \mathbb{R}^{p \times n} represents a set of p linear measurements on the state x. The choice of measurement matrix C is of critical importance in compressed sensing. Typically, measurements consist of random projections of the state, in which case the entries of C are Gaussian or Bernoulli distributed random variables. It is also possible to measure individual entries of x (i.e., single pixels if x is an image), in which case C consists of random rows of the identity matrix.

• CS exploits the sparsity of a signal in a generic basis to achieve full signal reconstruction from a few measurements.
• It is possible to collect fewer randomly chosen (compressed) measurements and then solve for the nonzero elements of s in the transformed coordinate system.

With knowledge of the sparse vector s, it is possible to reconstruct the signal x = \Psi s. Thus, the goal of compressed sensing is to find the sparsest vector s consistent with the measurements y:

y = C x = C \Psi s = \Theta s. \qquad (3.3)

[Figure 3.4: y = C \Psi s, with x K-sparse in the sparsifying basis \Psi, C the measurement matrix, and p \ll n.]

The system of equations in (3.3) is underdetermined, since there are infinitely many consistent solutions s. The sparsest solution \hat{s} satisfies the following optimization problem:

\hat{s} = \underset{s}{\operatorname{argmin}} \; \|s\|_{0} \quad \text{subject to} \quad y = C \Psi s, \qquad (3.4)

where \| \cdot \|_{0} denotes the \ell_{0} pseudo-norm, given by the number of nonzero entries; this is also referred to as the cardinality of s.
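A minimal sketch (not from the lecture) of why (3.3) is problematic: the system is underdetermined, and the easy minimum-\ell_2-norm (pseudoinverse) solution is dense rather than sparse. The random \Theta and the sizes are toy assumptions.

```python
# A minimal sketch: the underdetermined system y = Theta s has infinitely many
# solutions; the minimum l2-norm one (pseudoinverse) is dense, not the sparse s.
import numpy as np

rng = np.random.default_rng(3)
n, p, K = 100, 40, 4
s_true = np.zeros(n)
s_true[rng.choice(n, K, replace=False)] = 1.0
Theta = rng.standard_normal((p, n))
y = Theta @ s_true

s_l2 = np.linalg.pinv(Theta) @ y                                 # min l2-norm solution
print("||s_true||_0 =", np.count_nonzero(s_true))                # 4
print("||s_l2||_0   =", np.count_nonzero(np.abs(s_l2) > 1e-8))   # ~100: dense
print("consistent?  ", np.allclose(Theta @ s_l2, y))             # True: also solves y = Theta s
```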
Before Compressed Sensing

• Classically, we cannot recover a signal if it is sampled at a rate lower than the Nyquist-Shannon limit: undersampling produces aliasing.
• In CS theory, sparsity can be exploited to recover an image/signal from far fewer measurements than the Nyquist-Shannon limit requires, by solving

\min \|x\|_{1} \quad \text{subject to} \quad A x = b
Search for the sparsest vector: \ell_{1}-minimisation

If the sparsity K is unknown, the search is even broader. Because this search is combinatorial, solving (3.4) is intractable for even moderately large n and K, and the prospect of solving larger problems does not improve with Moore's law of exponentially increasing computational power.

Fortunately, under certain conditions on the measurement matrix C, it is possible to relax the optimization in (3.4) to a convex \ell_{1}-minimization [112, 150]:

\hat{s} = \underset{s}{\operatorname{argmin}} \; \|s\|_{1} \quad \text{subject to} \quad y = C \Psi s, \qquad (3.5)

where \| \cdot \|_{1} is the \ell_{1} norm, given by

\|s\|_{1} = \sum_{k=1}^{n} |s_{k}|. \qquad (3.6)

The minimum \ell_{1}-norm solution is sparse, while the minimum \ell_{2}-norm solution is not. The \ell_{1} norm is also known as the taxicab or Manhattan norm, because it represents the distance a taxi would take between two points on a rectangular grid.

Conditions for \ell_{1}-minimisation to converge with high probability to the sparsest solution:

(1) The measurement matrix C must be incoherent with respect to the sparsifying basis \Psi, i.e. the rows of C are not correlated with the columns of \Psi.

(2) The number of measurements p must be of the order of

p \approx \mathcal{O}\!\left(K \log(N/K)\right). \qquad (144)

(In the compressed sensing literature, the measurement matrix is often denoted \Phi; here C is used, to be consistent with the output equation in control theory.)
Example: single-pixel camera

In mathematical terms, the observed data y \in \mathbb{C}^{m} is connected to the signal x \in \mathbb{C}^{N} of interest via a measurement matrix A \in \mathbb{C}^{m \times N}:

A x = y. \qquad (1.1)

Traditional wisdom suggests that the number m of measurements must be at least as large as the signal length N. This principle is the basis for most devices used in current technology, such as analog-to-digital conversion, medical imaging, and mobile communication. Indeed, if m < N, then classical linear algebra tells us that the linear system (1.1) is underdetermined and that there are infinitely many solutions (provided, of course, that there exists at least one). In other words, without additional information it is impossible to recover x from y in the case m < N. This fact also relates to the Shannon sampling theorem, which states that the sampling rate of a continuous-time signal must be twice its highest frequency to ensure reconstruction.

Regularisation in which both u and \nabla u are square integrable generally leads to smooth solutions. To promote *sparse* solutions instead, one considers a forward operator K : \ell_{1} \rightarrow \ell_{2} and the functional

J(u) = \tfrac{1}{2}\|K u - f\|_{\ell_{2}}^{2} + \alpha \|u\|_{\ell_{1}}.

Single-pixel camera setup: given an image z \in \mathbb{R}^{N} and its (inverse) wavelet transform

z = W x,

where W \in \mathbb{R}^{N \times N} is unitary and x \in \mathbb{R}^{N} is a sparse vector, the camera measures the image against binary mirror patterns \{a_{1}, \ldots, a_{m}\} with entries in \{-1, +1\}: each small mirror is switched on or off, so that it contributes or not to the light intensity measured at the single sensor. In this way, one realizes in hardware the inner product \langle z, a_{n} \rangle. Choosing the a_{i} independently at random gives the intensities

y_{n} = \langle z, a_{n} \rangle, \qquad \text{i.e.} \qquad y = A z = A W x.

[Fig. 1.5: Schematic representation of a single-pixel camera (image courtesy of Rice University).]

Reference: S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Applied and Numerical Harmonic Analysis, DOI 10.1007/978-0-8176-4948-7.
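A minimal sketch (not from the lecture) simulating the single-pixel measurement model y_n = \langle z, a_n \rangle; the piecewise-constant toy image, the sizes, and the use of PyWavelets to check wavelet-domain sparsity are all assumptions for illustration.

```python
# A minimal sketch: simulating single-pixel camera measurements y_n = <z, a_n>
# with random +/-1 mirror patterns. The toy image z is piecewise constant,
# hence sparse in a Haar wavelet basis (z = W x).
import numpy as np
import pywt

rng = np.random.default_rng(2)
side = 16
z = np.zeros((side, side))
z[3:9, 4:12] = 1.0                            # simple piecewise-constant image

# Check sparsity in the Haar wavelet basis: few non-negligible coefficients x
coeffs = pywt.wavedec2(z, "haar", level=3)
x = np.concatenate([coeffs[0].ravel()] + [d.ravel() for lvl in coeffs[1:] for d in lvl])
print("nonzero wavelet coefficients:", np.count_nonzero(np.abs(x) > 1e-10), "of", x.size)

# Single-pixel measurements: random +/-1 mirror patterns a_n, intensities y_n = <z, a_n>
m, N = 60, side * side
A = rng.choice([-1.0, 1.0], size=(m, N))
y = A @ z.ravel()
print(y.shape, "measurements collected from", N, "pixels")
```

Recovering z from these m << N intensities is exactly the \ell_{1}-minimisation problem of the previous slides, with \Theta = A W.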
Before Compressed Sensing: Nyquist Sampling Theorem

[Figure: k-space (full sampling) and image space (high-resolution, full-FOV) for a fully sampled acquisition.]
Nonlinear L1 reconstruction with noise

[Figure: six-fold undersampled CS reconstruction vs. the original, fully sampled image.]

Source: https://candes.su.domains/teaching/stats330/index.shtml