doi: 10.3934/jimo.2019035

Sparse signal reconstruction via the approximations of $ \ell_{0} $ quasinorm

Jun Wang and Xing Tao Wang

Department of Mathematics, Harbin Institute of Technology, Harbin 150001, Heilongjiang, China

* Corresponding author: Xing Tao Wang

Received: June 2018. Revised: November 2018. Published: May 2019.

In this paper, we propose two classes of approximations to the cardinality function via the Moreau envelope of the $ \ell_{1} $ norm. We show that these two approximations are good choices of merit function for sparsity and are essentially the truncated $ \ell_{1} $ norm and the truncated $ \ell_{2} $ norm. Moreover, we apply the approximations to sparse signal recovery problems and derive new weights for reweighted $ \ell_{1} $ minimization and reweighted least squares to find sparse solutions of underdetermined linear systems of equations. Finally, we present numerical experiments that illustrate our results.
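
As a rough illustration of the reweighted least-squares component mentioned above, the sketch below recovers a sparse solution of an underdetermined system $ \mathbf{b = Ay} $ by iteratively reweighted least squares. It is not the authors' algorithm: the weight rule, the smoothing schedule for `eps`, and the parameter `p = 0.618` (borrowed from the figure captions) are standard placeholder choices rather than the weights derived in the paper.

```python
# Minimal IRLS sketch for sparse recovery from b = A y (A underdetermined).
# Placeholder weights (Chartrand-Yin style), not the paper's Moreau-envelope weights.
import numpy as np

def irls_sparse_recovery(A, b, p=0.618, max_iter=100, tol=1e-8):
    """Approximate the sparsest solution of A y = b by reweighted least squares."""
    m, n = A.shape
    y = np.linalg.lstsq(A, b, rcond=None)[0]        # minimum l2-norm starting point
    eps = 1.0
    for _ in range(max_iter):
        # Inverse weights: large for big entries, small (heavily penalized) for small ones.
        q = (y ** 2 + eps ** 2) ** (1.0 - p / 2.0)
        AQ = A * q                                   # A @ diag(q), applied columnwise
        y_new = q * (A.T @ np.linalg.solve(AQ @ A.T, b))
        if np.linalg.norm(y_new - y) < tol * max(np.linalg.norm(y), 1.0):
            y = y_new
            break
        y = y_new
        eps = max(0.5 * eps, 1e-8)                   # gradually tighten the smoothing
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, s = 50, 250, 10
    A = rng.standard_normal((m, n))
    y_true = np.zeros(n)
    y_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y_hat = irls_sparse_recovery(A, A @ y_true)
    print("relative error:", np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true))
```

Each iteration solves a weighted minimum-norm problem in closed form, which is why reweighted least squares needs only dense linear algebra rather than a convex-programming solver.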

Citation: Jun Wang, Xing Tao Wang. Sparse signal reconstruction via the approximations of $ \ell_{0} $ quasinorm. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019035
References:

[1] Q. Berthet and P. Rigollet, Optimal detection of sparse principal components in high dimension, Ann. Statist., 41 (2013), 1780-1815. doi: 10.1214/13-AOS1127.
[2] M. Z. A. Bhotto, M. O. Ahmad and M. N. S. Swamy, An improved fast iterative shrinkage thresholding algorithm for image deblurring, SIAM J. Imaging Sci., 8 (2015), 1640-1657. doi: 10.1137/140970537.
[3] A. M. Bruckstein, D. L. Donoho and M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Rev., 51 (2009), 34-81. doi: 10.1137/060657704.
[4] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772. doi: 10.1007/s10208-009-9045-5.
[5] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory, 52 (2006), 489-509. doi: 10.1109/TIT.2005.862083.
[6] E. J. Candès, J. Romberg and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math., 59 (2006), 1207-1223. doi: 10.1002/cpa.20124.
[7] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inf. Theory, 51 (2005), 4203-4215. doi: 10.1109/TIT.2005.858979.
[8] E. J. Candès and T. Tao, Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inf. Theory, 52 (2006), 5406-5425. doi: 10.1109/TIT.2006.885507.
[9] E. J. Candès, M. Wakin and S. Boyd, Enhancing sparsity by reweighted $\ell_{1}$ minimization, J. Fourier Anal. Appl., 14 (2008), 877-905. doi: 10.1007/s00041-008-9045-x.
[10] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett., 14 (2007), 707-710. doi: 10.1109/LSP.2007.898300.
[11] R. Chartrand and W. Yin, Iteratively reweighted algorithms for compressive sensing, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2008, 3869-3872. doi: 10.1109/ICASSP.2008.4518498.
[12] S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., 43 (2001), 129-159. doi: 10.1137/S003614450037906X.
[13] C. A. Metzler, A. Maleki and R. G. Baraniuk, From denoising to compressed sensing, IEEE Trans. Inf. Theory, 62 (2016), 5117-5144. doi: 10.1109/TIT.2016.2556683.
[14] A. Cohen, W. Dahmen and R. DeVore, Compressed sensing and best k-term approximation, J. Amer. Math. Soc., 22 (2009), 211-231. doi: 10.1090/S0894-0347-08-00610-3.
[15] I. Daubechies, M. Defrise and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., 57 (2004), 1413-1457. doi: 10.1002/cpa.20042.
[16] D. L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory, 52 (2006), 1289-1306. doi: 10.1109/TIT.2006.871582.
[17] E. Esser, Y. F. Lou and J. Xin, A method for finding structured sparse solutions to nonnegative least squares problems with applications, SIAM J. Imaging Sci., 6 (2013), 2010-2046. doi: 10.1137/13090540X.
[18] J. Fan and R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Amer. Statist. Assoc., 96 (2001), 1348-1360. doi: 10.1198/016214501753382273.
[19] M. Fornasier, S. Peter, H. Rauhut and S. Worm, Conjugate gradient acceleration of iteratively re-weighted least squares methods, Comput. Optim. Appl., 65 (2016), 205-259. doi: 10.1007/s10589-016-9839-8.
[20] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, Basel, 2013. doi: 10.1007/978-0-8176-4948-7.
[21] S. Foucart and M. Lai, Sparsest solutions of underdetermined linear systems via $\ell_{q}$-minimization for $0<q\leq 1$, Appl. Comput. Harmon. Anal., 26 (2009), 395-407. doi: 10.1016/j.acha.2008.09.001.
[22] I. F. Gorodnitsky and B. D. Rao, Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm, IEEE Trans. Signal Process., 45 (1997), 600-616. doi: 10.1109/78.558475.
[23] J. Huang, Y. Jiao, B. Jin, J. Liu, X. Lu and C. Yang, A unified primal dual active set algorithm for nonconvex sparse recovery, preprint, arXiv:1310.1147.
[24] J. Huang, Y. Jiao, Y. Liu and X. Lu, A constructive approach to $L_{0}$-penalized regression, J. Mach. Learn. Res., 19 (2018), Paper No. 10, 37 pp.
[25] Y. Jiao, B. Jin and X. Lu, A primal dual active set with continuation algorithm for the $\ell^{0}$-regularized optimization problem, Appl. Comput. Harmon. Anal., 39 (2015), 400-426. doi: 10.1016/j.acha.2014.10.001.
[26] R. Kueng, H. Rauhut and U. Terstiege, Low rank matrix recovery from rank one measurements, Appl. Comput. Harmon. Anal., 42 (2017), 88-116. doi: 10.1016/j.acha.2015.07.007.
[27] M. Lai and J. Wang, An unconstrained $\ell_{q}$ minimization with $0<q\leq 1$ for sparse solution of underdetermined linear systems, SIAM J. Optim., 21 (2011), 82-101. doi: 10.1137/090775397.
[28] Z. S. Lu, Iterative reweighted minimization methods for $\ell_{p}$ regularized unconstrained nonlinear programming, Math. Program., 147 (2014), 277-307. doi: 10.1007/s10107-013-0722-4.
[29] Z. S. Lu and X. Li, Sparse recovery via partial regularization: Models, theory, and algorithms, Math. Oper. Res., 43 (2018), 1290-1316. doi: 10.1287/moor.2017.0905.
[30] J. Lv and Y. Fan, A unified approach to model selection and sparse recovery using regularized least squares, Ann. Statist., 37 (2009), 3498-3528. doi: 10.1214/09-AOS683.
[31] H. Mansour and R. Saab, Recovery analysis for weighted $\ell_{1}$-minimization using the null space property, Appl. Comput. Harmon. Anal., 43 (2017), 23-38. doi: 10.1016/j.acha.2015.10.005.
[32] M. F. Duarte, M. A. Davenport, D. Takhar, et al., Single-pixel imaging via compressive sampling, IEEE Signal Process. Mag., 25 (2008), 83-91. doi: 10.1109/MSP.2007.914730.
[33] H. Mohimani, M. Babaie-Zadeh and C. Jutten, A fast approach for overcomplete sparse decomposition based on smoothed $\ell^{0}$ norm, IEEE Trans. Signal Process., 57 (2009), 289-301. doi: 10.1109/TSP.2008.2007606.
[34] B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput., 24 (1995), 227-234. doi: 10.1137/S0097539792240406.
[35] D. Needell and J. A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal., 26 (2009), 301-321. doi: 10.1016/j.acha.2008.07.002.
[36] H. L. Royden and P. M. Fitzpatrick, Real Analysis, 4$^{th}$ edition, Macmillan Publishing Company, New York, 2010.
[37] I. Selesnick, Sparse regularization via convex analysis, IEEE Trans. Signal Process., 65 (2017), 4481-4494. doi: 10.1109/TSP.2017.2711501.
[38] I. Selesnick and M. Farshchian, Sparse signal approximation via nonseparable regularization, IEEE Trans. Signal Process., 65 (2017), 2561-2575. doi: 10.1109/TSP.2017.2669904.
[39] G. W. Stewart, On scaled projections and pseudoinverses, Linear Algebra Appl., 112 (1989), 189-193. doi: 10.1016/0024-3795(89)90594-6.
[40] B. Tian, X. Yang and K. Meng, An interior-point $\ell_{\frac{1}{2}}$-penalty method for inequality constrained nonlinear optimization, J. Ind. Manag. Optim., 12 (2016), 949-973. doi: 10.3934/jimo.2016.12.949.
[41] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B, 58 (1996), 267-288. doi: 10.1111/j.2517-6161.1996.tb02080.x.
[42] J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inf. Theory, 53 (2007), 4655-4666. doi: 10.1109/TIT.2007.909108.
[43] J. Wang and X. T. Wang, Sparse approximate reconstruction decomposed by two optimization problems, Circ. Syst. Signal Process., 37 (2018), 2164-2178. doi: 10.1007/s00034-017-0667-6.
[44] D. Wipf and S. Nagarajan, Iterative reweighted $\ell_1$ and $\ell_2$ methods for finding sparse solutions, IEEE J. Sel. Topics Signal Process., 4 (2010), 317-329. doi: 10.1109/JSTSP.2010.2042413.
[45] J. Wright, A. Y. Yang, A. Ganesh, et al., Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell., 31 (2008), 210-227. doi: 10.1109/AFGR.2008.4813404.
[46] P. Yin, Y. Lou, Q. He and J. Xin, Minimization of $\ell_{1-2}$ for compressed sensing, SIAM J. Sci. Comput., 37 (2015), A536-A563. doi: 10.1137/140952363.
[47] C.-H. Zhang, Nearly unbiased variable selection under minimax concave penalty, Ann. Statist., 38 (2010), 894-942. doi: 10.1214/09-AOS729.
[48] C.-H. Zhang and T. Zhang, A general theory of concave regularization for high-dimensional sparse estimation problems, Statist. Sci., 27 (2012), 576-593. doi: 10.1214/12-STS399.
[49] T. Zhang, Analysis of multi-stage convex relaxation for sparse regularization, J. Mach. Learn. Res., 11 (2010), 1081-1107.
[50] Y.-B. Zhao, RSP-based analysis for sparsest and least $\ell_{1}$-norm solutions to underdetermined linear systems, IEEE Trans. Signal Process., 61 (2013), 5777-5788. doi: 10.1109/TSP.2013.2281030.
[51] Y.-B. Zhao and M. Kocvara, A new computational method for the sparsest solutions to systems of linear equations, SIAM J. Optim., 25 (2015), 1110-1134. doi: 10.1137/140968240.
[52] Y. Zhao and D. Li, Reweighted $\ell_{1}$-minimization for sparse solutions to underdetermined linear systems, SIAM J. Optim., 22 (2012), 1065-1088. doi: 10.1137/110847445.


Figure 1.  Comparison of success rates of finding the $ s $-sparse solution $ \mathbf{y} $ of $ \mathbf{b = Ay} $, where $ \mathbf{A} \in \mathbb{R}^{50\times 250} $ and $ \mathbf{y} \in \mathbb{R}^{250} $. For each $ s $-sparsity, $ 500 $ attempts were made
Figure 2.  Comparison of $ \Vert \mathbf{y}^{k}\Vert_{1} $ and weighted $ \Vert \mathbf{y}^{k}\Vert_{1} $ at each iteration $ k $ with $ \mathbf{A} \in \mathbb{R}^{50\times 250} $, $ \mathbf{y} \in \mathbb{R}^{250} $ and $ p = 0.618, s = 10 $
Figure 3.  Comparison of success rates of finding the $ s $-sparse solution $ \mathbf{y} $ of $ \mathbf{b = Ay} $, where $ \mathbf{A} \in \mathbb{R}^{50\times 250} $ and $ \mathbf{y} \in \mathbb{R}^{250} $. For each $ s $-sparsity, $ 500 $ attempts were made
Figure 4.  Comparison of $ \Vert \mathbf{y}^{k}\Vert_{2}^{2} $ and weighted $ \Vert \mathbf{y}^{k}\Vert_{2}^{2} $ at each iteration $ k $ with $ \mathbf{A} \in \mathbb{R}^{50\times 250} $, $ \mathbf{y} \in \mathbb{R}^{250} $ and $ p = 0.618, s = 10 $
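
The success-rate experiments of Figures 1 and 3 can be reproduced in outline as follows. This is a minimal sketch under stated assumptions: plain (unweighted) $ \ell_{1} $ minimization via a linear program stands in for the paper's weighted schemes, the trial count is reduced from 500 to 50 per sparsity level for speed, and the recovery tolerance of $10^{-4}$ is an illustrative choice.

```python
# Rough sketch of the success-rate experiment behind Figures 1 and 3, using plain
# basis pursuit (min ||y||_1 s.t. A y = b) as a stand-in recovery method.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||y||_1 subject to A y = b as an LP via the split y = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def success_rate(m=50, n=250, s=10, trials=50, tol=1e-4, seed=0):
    """Fraction of random trials in which the planted s-sparse y is recovered."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        y = np.zeros(n)
        y[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        y_hat = basis_pursuit(A, A @ y)
        hits += np.linalg.norm(y_hat - y) <= tol * np.linalg.norm(y)
    return hits / trials

if __name__ == "__main__":
    for s in (5, 10, 15, 20):
        print(f"s = {s:2d}: success rate = {success_rate(s=s):.2f}")
```

Replacing basis_pursuit by a weighted solver, with the weights refreshed between successive solves, gives the reweighted curves that the figures contrast with this unweighted baseline.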