1 Introduction
DSIE|16 p.127
Proceedings of the Doctoral Symposium in Informatics and Telecommunications Engineering
compression steps and enables reconstruction of the signal from only a small
number of observations.
This property of compressive sensing (CS) offers clear advantages over the
Nyquist–Shannon sampling theorem. CS-based image reconstruction algorithms
improve the efficiency of the overall pipeline when recovering sparse signals.
Various recovery algorithms are available, as discussed in Section 3.
2 Historical Background
x = Ψ θ. (1)
y = Φx. (2)
m ≥ c· r· log(n/r), (4)
Fig. 1. (a) Compressive sensing measurement process with a random Gaussian
measurement matrix Φ and discrete cosine transform (DCT) matrix Ψ. The vector
of coefficients s is sparse with K = 4. (b) Measurement process with Θ = ΦΨ.
There are four columns that correspond to the nonzero si coefficients; the
measurement vector y is a linear combination of these columns [1].
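The measurement process of (1)–(2) and Fig. 1 can be sketched numerically. In the following minimal illustration (the sizes n, m, and K and the seed are arbitrary choices, not values from the paper; s in the caption corresponds to θ in (1)), Ψ is an orthonormal DCT basis, Φ a random Gaussian measurement matrix, and θ a K-sparse coefficient vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 256, 64, 4            # signal length, measurements, sparsity

# Orthonormal DCT-II basis: column f of Psi is the f-th DCT basis vector
i = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[:, None] + 1) * i[None, :] / (2 * n))
Psi[:, 0] /= np.sqrt(2.0)       # rescale the DC column so Psi is orthonormal

theta = np.zeros(n)             # K-sparse coefficient vector
theta[rng.choice(n, K, replace=False)] = rng.standard_normal(K)

x = Psi @ theta                                  # x = Psi * theta   (Eq. 1)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian Phi
y = Phi @ x                                      # y = Phi * x       (Eq. 2)
Theta = Phi @ Psi                                # combined sensing matrix

print(y.shape)                  # (64,): m measurements, far fewer than n
```

The point of the sketch is only that y has m entries with m much smaller than n, yet (as Section 3 discusses) θ can still be recovered from y because it is sparse.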
3 Reconstruction Algorithms
CS comprises a collection of methods of representing a signal on the basis of
a limited number of measurements and then recovering the signal from these
measurements [35]. How to effectively recover the original signal from the
compressed data plays a significant role in the CS framework. Currently,
several reconstruction algorithms exist, defined either in the context of
convex optimization or of greedy approaches; among them we can mention
[1,6,10,12,35,37,42].
To present an overview of reconstruction algorithms for sparse signal recovery
in compressive sensing, these algorithms may be broadly divided into six types,
as shown in Fig. 2.
Fig. 2. Classification of reconstruction algorithms for sparse signal recovery
(diagram not reproduced). The six classes shown are convex relaxation,
non-convex methods, greedy pursuits, combinatorial algorithms, iterative
thresholding, and message passing. Algorithms named in the figure include
Basis Pursuit, BPDN, M-BPDN, LASSO, nuclear norm minimization (NNm), Bregman
iterative, FOCUSS, IRLS, SBLA, Monte-Carlo, IHT, IST, belief propagation, MP,
OMP, Regularized OMP, Stagewise OMP, OMMP, CoSaMP, Subspace Pursuit, Gradient
Pursuit, Tree MP, EMP, SMP, HHS, and FSA-CP.
Convex relaxation methods recast sparse recovery as a convex optimization
problem, which can be computationally complex. Basis Pursuit [14], Basis
Pursuit De-Noising (BPDN) [14], Least Absolute Shrinkage and Selection
Operator (LASSO) [41], and Least Angle Regression (LARS) [21] are some examples
of such algorithms. Basis Pursuit is a principle for decomposing a signal into
an "optimal" superposition of dictionary elements, where optimal means having
the smallest l1 norm of coefficients among all such decompositions.
Basis Pursuit has interesting relations to ideas in areas as diverse as ill-posed
problems, abstract harmonic analysis, total variation denoising, and multiscale
edge denoising. Basis Pursuit in highly overcomplete dictionaries leads to large-
scale optimization problems. Such problems can be attacked successfully only
because of recent advances in linear and quadratic programming by interior-
point methods.
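Basis Pursuit's smallest-l1 decomposition is itself a linear program, so it can be handed to a generic LP solver: minimize 1ᵀu subject to −u ≤ θ ≤ u and Θθ = y, where u bounds the absolute values of θ. A minimal sketch (problem sizes, seed, and the use of SciPy's linprog are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, K = 60, 30, 4
Theta = rng.standard_normal((m, n)) / np.sqrt(m)
theta_true = np.zeros(n)
theta_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
y = Theta @ theta_true

# Variables z = [theta; u]; minimize sum(u) with theta - u <= 0, -theta - u <= 0
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Theta, np.zeros((m, n))])     # enforce Theta @ theta = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
theta_hat = res.x[:n]
print(np.max(np.abs(theta_hat - theta_true)))   # recovery error
```

With these sizes (m = 30 measurements, K = 4 nonzeros out of n = 60) the LP recovers the sparse vector exactly up to solver tolerance, illustrating why interior-point and related LP advances made Basis Pursuit practical.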
The paper [41] shows that Lasso (l1) penalties are useful for fitting a wide
variety of models. Newly developed computational algorithms allow these models
to be applied to large data sets, exploiting sparsity for both statistical and
computational gains. Interesting work on the lasso is being carried out in many
fields, including statistics, engineering, mathematics, and computer science.
Recent works present matrix versions of signal recovery, known as nuclear norm
minimization [38]. Instead of reconstructing x from Θx, nuclear norm
minimization tries to recover a low-rank matrix M. Since rank determines the
order, dimension, and complexity of the system, low-rank matrices correspond
to low-order statistical models.
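For the lasso itself, a standard solver is iterative soft-thresholding (ISTA), i.e., proximal gradient descent on min_θ ½‖y − Θθ‖² + λ‖θ‖1. A minimal sketch (the step size rule, λ, problem sizes, and seed below are illustrative choices, not from [41]):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Theta, y, lam, n_iter=1000):
    # Minimize 0.5 * ||y - Theta @ theta||^2 + lam * ||theta||_1
    L = np.linalg.norm(Theta, 2) ** 2      # Lipschitz constant of the gradient
    theta = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        grad = Theta.T @ (Theta @ theta - y)
        theta = soft_threshold(theta - grad / L, lam / L)
    return theta

rng = np.random.default_rng(2)
n, m = 50, 25
Theta = rng.standard_normal((m, n)) / np.sqrt(m)
theta_true = np.zeros(n)
theta_true[[5, 17, 40]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
y = Theta @ theta_true
theta_hat = ista(Theta, y, lam=0.01)
```

Each iteration is one gradient step on the quadratic term followed by entrywise soft-thresholding, which is what creates the sparse (and slightly shrunken) solution.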
Fast and accurate reconstruction algorithms have been a central focus of CS
research and are key to the practical application of CS. At present, the most
important greedy algorithms include matching pursuit and gradient pursuit
[18,19].
The idea is to select columns of Θ in a greedy fashion. At each iteration, the
column of Θ that correlates most with the residual is selected; the
least-squares error over the selected columns is then minimized. The most used
greedy algorithms are Matching Pursuit [32] and its derivative Orthogonal
Matching Pursuit (OMP) [42], because of their low implementation cost and high
speed of recovery. However, when the signal is not very sparse, recovery
becomes costly. For such situations, improved versions of OMP have been
devised, such as Regularized OMP [36], Stagewise OMP [18], Compressive Sampling
Matching Pursuit (CoSaMP) [35], Subspace Pursuit [15], Gradient Pursuit [22],
and Orthogonal Multiple Matching Pursuit [30].
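The greedy select-then-refit loop described above can be written in a few lines. This is a textbook OMP sketch (not the referenced implementations; the dictionary sizes, seed, and coefficient values are illustrative):

```python
import numpy as np

def omp(Theta, y, K):
    # Orthogonal Matching Pursuit: pick the column most correlated with the
    # residual, then re-fit all selected columns by least squares.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(K):
        j = int(np.argmax(np.abs(Theta.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef          # orthogonal residual
    theta = np.zeros(Theta.shape[1])
    theta[support] = coef
    return theta

rng = np.random.default_rng(3)
n, m, K = 100, 60, 4
Theta = rng.standard_normal((m, n))
Theta /= np.linalg.norm(Theta, axis=0)      # unit-norm dictionary columns
theta_true = np.zeros(n)
theta_true[rng.choice(n, K, replace=False)] = np.array([1.0, -1.2, 0.8, 1.5])
y = Theta @ theta_true
theta_hat = omp(Theta, y, K)
```

The least-squares re-fit after every selection is what distinguishes OMP from plain Matching Pursuit: the residual stays orthogonal to all selected columns, so no column is picked twice.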
According to [15] and [18], Basis Pursuit can reliably recover signals with
n = 256 and sparsity level up to 35 from only m = 128 measurements, whereas
OMP and ROMP are only reliable up to a sparsity level of 19 for the same n and
m. From the minimum-measurements perspective, Basis Pursuit therefore appears
more promising than the OMP derivatives.
4 Conclusion
During the review process we surveyed the literature and identified six classes
of reconstruction algorithms. In this article we have provided a comprehensive
survey of the numerous reconstruction algorithms, discussed the origin,
purpose, scope, and implementation of CS in image reconstruction, and compared
the complexity of the algorithms.
5 Acknowledgments
The author gratefully acknowledges the "Conselho Nacional de Desenvolvimento
Científico e Tecnológico" (CNPq) for the scholarship provided during this
research.
References
1. Baraniuk, R.: Compressive Sensing [Lecture Notes]. IEEE Signal Processing Mag-
azine 24(4), 118–121 (jul 2007), http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4286571&tag=1
2. Baron, D., Sarvotham, S., Baraniuk, R.G.: Bayesian compressive sensing via belief
propagation. IEEE Transactions on Signal Processing 58(1), 269–280 (2010),
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5169989
3. Bayar, B., Bouaynaya, N., Shterenberg, R.: Kernel reconstruction: An exact greedy
algorithm for compressive sensing. In: 2014 IEEE Global Conference on Signal
and Information Processing (GlobalSIP). pp. 1390–1393. IEEE (dec 2014),
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7032355
4. Berinde, R., Indyk, P.: Sequential sparse matching pursuit. In: 2009 47th An-
nual Allerton Conference on Communication, Control, and Computing, Allerton
2009. pp. 36–43. IEEE, Monticello, IL (2009), http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5394834&tag=1
5. Berinde, R., Indyk, P., Ružić, M.: Practical near-optimal sparse recovery in
the L1 norm. In: 46th Annual Allerton Conference on Communication, Control,
and Computing. pp. 198–205. IEEE (2008), http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4797556
6. Blumensath, T., Davies, M.E.: Iterative Hard Thresholding for Compressed Sens-
ing. Applied and Computational Harmonic Analysis 27(3), 265–274 (may 2008),
http://arxiv.org/abs/0805.0510
7. Blumensath, T., Davies, M.E.: Iterative hard thresholding for compressed sensing.
Applied and Computational Harmonic Analysis 27(3), 265–274 (nov 2009),
http://www.sciencedirect.com/science/article/pii/S1063520309000384
8. Bobin, J., Starck, J.L., Ottensamer, R.: Compressed Sensing in Astronomy (feb
2008), http://dx.doi.org/10.1109/JSTSP.2008.2005337
9. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal re-
construction from highly incomplete frequency information. IEEE Transactions
on Information Theory 52(2), 489–509 (feb 2006), http://ieeexplore.ieee.org/
lpdocs/epic03/wrapper.htm?arnumber=1580791
10. Candès, E., Romberg, J.: L1-magic: Recovery of Sparse Signals via Convex Pro-
gramming (2005), http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.
1.212.9120
11. Candes, E., Tao, T.: Near Optimal Signal Recovery From Random Projections:
Universal Encoding Strategies? (oct 2004), http://arxiv.org/abs/math/0410542
12. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization.
Foundations of Computational Mathematics 9(6), 717–772 (2009), http://link.
springer.com/article/10.1007/s10208-009-9045-5
13. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sens-
ing. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal
Processing - Proceedings. pp. 3869–3872. Acoustics, Speech and Signal Processing,
2008. ICASSP 2008. IEEE International Conference on, Las Vegas, NV (2008),
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4518498
14. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic Decomposition by
Basis Pursuit. SIAM Rev. 43(1), 129–159 (2001), http://dx.doi.org/10.1137/S003614450037906X
15. Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing signal recon-
struction. IEEE Transactions on Information Theory 55(5), 2230–2249 (2009),
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4839056
16. Donoho, D.L.: De-noising by soft-thresholding. IEEE Transactions on Infor-
mation Theory 41(3), 613–627 (1995), http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=382009
17. Donoho, D.L., Maleki, A., Montanari, A.: Message Passing Algorithms for Com-
pressed Sensing (2009), http://arxiv.org/abs/0907.3574
18. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.L.: Sparse solution of underdeter-
mined systems of linear equations by stagewise orthogonal matching pursuit. IEEE
Transactions on Information Theory 58(2), 1094–1121 (2012), http://citeseer.
ist.psu.edu/viewdoc/summary?doi=10.1.1.115.5221
19. Du, L., Wang, R., Wan, W., Yu, X.Q., Yu, S.: Analysis on greedy reconstruction
algorithms based on compressed sensing. In: 2012 International Conference on
Audio, Language and Image Processing. pp. 783–789. IEEE (jul 2012), http://
ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6376720
20. Duarte, M., Davenport, M., Takhar, D., Laska, J., Ting Sun, Kelly, K., Bara-
niuk, R.: Single-Pixel Imaging via Compressive Sampling. IEEE Signal Process-
ing Magazine 25(2), 83–91 (mar 2008), http://ieeexplore.ieee.org/lpdocs/
epic03/wrapper.htm?arnumber=4472247
21. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression.
Annals of Statistics 32(2), 407–499 (2004), http://arxiv.org/pdf/math/0406456v2.pdf
22. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse
reconstruction: Application to compressed sensing and other inverse problems.
IEEE Journal on Selected Topics in Signal Processing 1(4), 586–597 (2007),
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4407762
23. Foucart, S.: Sparse Recovery Algorithms: Sufficient Conditions in Terms of Re-
stricted Isometry Constants. In: Approximation Theory XIII: San Antonio 2010,
pp. 65–77. Springer New York (2012), http://link.springer.com/10.1007/978-1-4614-0772-0_5
24. Gilbert, A.C., Muthukrishnan, S., Strauss, M.: Improved time bounds for near-
optimal sparse Fourier representations. In: Papadakis, M., Laine, A.F., Unser,
M.A. (eds.) Proceedings of SPIE. vol. 5914, pp. 59141A–59141A–15. Proc. SPIE
39. Rudelson, M., Vershynin, R.: Sparse reconstruction by convex relaxation: Fourier
and Gaussian measurements (feb 2006), http://arxiv.org/pdf/math/0602559.
pdf
40. Tauboeck, G., Hlawatsch, F., Eiwen, D., Rauhut, H.: Compressive estimation of
doubly selective channels in multicarrier systems: Leakage effects and sparsity-
enhancing processing http://arxiv.org/abs/0903.2774
41. Tibshirani, R.: Regression Shrinkage and Selection Via the Lasso. Journal of
the Royal Statistical Society, Series B 58, 267–288 (1996), http://statweb.stanford.edu/~tibs/ftp/lasso-retro.pdf
42. Tropp, J.A., Gilbert, A.C.: Signal Recovery From Random Measurements Via Or-
thogonal Matching Pursuit. IEEE Transactions on Information Theory 53(12),
4655–4666 (dec 2007), http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.
htm?arnumber=4385788
43. Wang, L., Lu, K., Liu, P., Ranjan, R., Chen, L.: IK-SVD: Dictionary Learning for
Spatial Big Data via Incremental Atom Update (2014)
44. Wipf, D.P., Rao, B.D.: Sparse Bayesian learning for basis selection. IEEE
Transactions on Signal Processing 52(8), 2153–2164 (2004), http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1315936&abstractAccess=no&userType=inst
45. Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman Iterative Algorithms for
ℓ1-Minimization with Applications to Compressed Sensing. SIAM Journal on
Imaging Sciences 1(1), 143–168 (jan 2008), http://epubs.siam.org/doi/abs/10.
1137/070703983