# American Institute of Mathematical Sciences

August  2015, 9(3): 815-833. doi: 10.3934/ipi.2015.9.815

## Nonmonotone spectral gradient method for sparse recovery

 1 College of Computer, Dongguan University of Technology, Dongguan, 523000, China; 2 School of Mathematical Sciences, South China Normal University, Guangzhou, 510631, China

Received May 2014; Revised September 2014; Published July 2015

In this paper, we present an algorithmic framework for the general problem of minimizing the sum $f(x)+\psi(x)$, where $f$ is smooth and $\psi$ is convex but possibly nonsmooth. At each step, the search direction is obtained by solving an optimization problem that combines a quadratic term, with diagonal Hessian determined by the Barzilai-Borwein steplength, and the term $\psi(x)$. A nonmonotone strategy is combined with the Barzilai-Borwein steplength to accelerate convergence. The method, equipped with the nonmonotone line search technique, is shown to be globally convergent. In particular, if $f$ is convex, we show that the method has a sublinear global rate of convergence; moreover, if $f$ is strongly convex, we prove that the method converges R-linearly. Numerical experiments on compressive sensing problems show that our approach is competitive with several known methods on standard $l_2$-$l_1$ problems.
Citation: Wanyou Cheng, Zixin Chen, Donghui Li. Nonmonotone spectral gradient method for sparse recovery. Inverse Problems & Imaging, 2015, 9 (3) : 815-833. doi: 10.3934/ipi.2015.9.815
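The scheme outlined in the abstract can be illustrated for the standard $l_2$-$l_1$ instance $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \mu\|x\|_1$: the subproblem with a diagonal (BB-scaled) quadratic model plus $\mu\|x\|_1$ reduces to componentwise soft-thresholding, and a Grippo-style nonmonotone line search accepts the trial point against the maximum of recent objective values. The sketch below is illustrative only: parameter names (`M`, `sigma`) and the safeguards on the BB scalar are assumptions, not the paper's exact choices.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1 (componentwise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def nonmonotone_bb_l1(A, b, mu, max_iter=200, M=5, sigma=1e-4, tol=1e-8):
    """Sketch of a nonmonotone spectral gradient method for
    min 0.5*||Ax-b||^2 + mu*||x||_1 (illustrative parameters)."""
    n = A.shape[1]
    x = np.zeros(n)
    obj = lambda v: 0.5 * np.linalg.norm(A @ v - b) ** 2 + mu * np.abs(v).sum()
    g = A.T @ (A @ x - b)
    alpha = 1.0                        # spectral (BB) scalar for the diagonal Hessian
    f_hist = [obj(x)]                  # recent objective values for the nonmonotone test
    for _ in range(max_iter):
        # Search direction from the diagonal quadratic model plus mu*||.||_1:
        # argmin_d g'd + (alpha/2)||d||^2 + mu*||x+d||_1  =>  soft-thresholding.
        d = soft_threshold(x - g / alpha, mu / alpha) - x
        if np.linalg.norm(d) < tol:
            break
        f_ref = max(f_hist[-M:])       # nonmonotone reference (max of last M values)
        lam = 1.0
        while True:
            x_trial = x + lam * d
            if obj(x_trial) <= f_ref - sigma * lam * alpha * np.dot(d, d) or lam < 1e-10:
                break
            lam *= 0.5                 # backtracking
        s = x_trial - x
        x = x_trial
        g_new = A.T @ (A @ x - b)
        y = g_new - g
        g = g_new
        # Barzilai-Borwein steplength s'y / s's, safeguarded to stay positive and bounded.
        sy = np.dot(s, y)
        alpha = np.clip(sy / np.dot(s, s), 1e-10, 1e10) if sy > 0 else 1.0
        f_hist.append(obj(x))
    return x
```

Because the prox-gradient direction is a descent direction for the composite objective, the backtracking loop terminates and the iterates satisfy the nonmonotone sufficient-decrease condition; the BB scalar adapts the diagonal model to local curvature between iterations.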


