Inverse Problems & Imaging, August 2017, 11(4): 643-661. doi: 10.3934/ipi.2017030

Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery

Yun Cai 1 and Song Li 2,*

1. Department of Mathematics, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210023, China
2. School of Mathematical Science, Zhejiang University, Hangzhou, Zhejiang 310027, China

* Corresponding author: songli@zju.edu.cn

Received: November 2015. Revised: April 2017. Published: June 2017.

Fund Project: This work was supported by the National Natural Science Foundation of China under Grants No. 11626133 and No. 11531013.

In this paper, we study the theoretical properties of the iteratively reweighted least squares algorithm for matrix recovery (IRLS-M for short) from noisy linear measurements. The IRLS-M was proposed by Fornasier et al. (2011) [17] for solving nuclear norm minimization and by Mohan et al. (2012) [31] for solving Schatten-$p$ (quasi-)norm minimization ($0 < p \leq 1$) in the noiseless case, building on the iteratively reweighted least squares algorithm for sparse signal recovery (IRLS for short) (Daubechies et al., 2010) [15]; numerical experiments demonstrating its efficiency were given in [17] and [31]. Here we focus on the convergence and stability analysis of the IRLS-M for low-rank matrix recovery in the presence of noise. The convergence of the IRLS-M is proved rigorously for all $0 < p \leq 1$. Furthermore, when the measurement map $\mathcal{A}$ satisfies the matrix restricted isometry property (M-RIP for short), we show that the IRLS-M is stable for $0 < p \leq 1$. In particular, when $p=1$, we prove that the M-RIP constant $\delta_{2r} < \sqrt{2}-1$ is sufficient for the IRLS-M to recover an unknown (approximately) low-rank matrix with an error proportional to the noise level. The simplicity of the IRLS-M, together with the theoretical guarantees provided in this paper, makes a compelling case for its adoption as a standard tool for low-rank matrix recovery.
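Although the paper itself is analytical, the IRLS-M iteration is easy to state: given the current iterate $X_k$ and a smoothing parameter $\gamma_k > 0$, one forms the weight $W_k = (X_k^T X_k + \gamma_k I)^{p/2-1}$ and takes $X_{k+1}$ as the minimizer of $\mathrm{Tr}(W_k X^T X)$ subject to $\mathcal{A}(X)=b$, a weighted least squares problem with a closed-form solution. The NumPy sketch below illustrates this in the noiseless, equality-constrained setting; the function name `irls_m`, the fixed geometric shrinking of $\gamma$, and the toy problem sizes are illustrative assumptions (the schemes in [17] and [31] use a rank-adaptive update of the smoothing parameter), not the authors' code.

```python
import numpy as np

def irls_m(A, b, shape, p=1.0, n_iter=50, gamma=1.0, gamma_min=1e-8):
    """Sketch of IRLS-M: recover X of size `shape` from b = A vec(X)."""
    d1, d2 = shape
    # start from the minimum Frobenius-norm solution (weight W_0 = I)
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    X = x.reshape(shape, order="F")
    for _ in range(n_iter):
        # W = (X^T X + gamma I)^{p/2 - 1}, computed via an eigendecomposition;
        # only W^{-1} = (X^T X + gamma I)^{1 - p/2} is needed below
        evals, V = np.linalg.eigh(X.T @ X + gamma * np.eye(d2))
        W_inv = (V * evals ** (1.0 - p / 2.0)) @ V.T
        # Tr(W X^T X) = vec(X)^T (W kron I) vec(X) for column-major vec, so the
        # constrained minimizer is x = Q^{-1} A^T (A Q^{-1} A^T)^{-1} b
        # with Q^{-1} = W^{-1} kron I.
        Q_inv = np.kron(W_inv, np.eye(d1))
        AQ = A @ Q_inv
        x = Q_inv @ A.T @ np.linalg.solve(AQ @ A.T, b)
        X = x.reshape(shape, order="F")
        gamma = max(gamma / 10.0, gamma_min)  # shrink the smoothing parameter
    return X

# toy demo: rank-1 4x4 target, 14 random Gaussian measurements of 16 entries
rng = np.random.default_rng(0)
X0 = np.outer(rng.standard_normal(4), rng.standard_normal(4))
A = rng.standard_normal((14, 16))
b = A @ X0.flatten(order="F")
X_hat = irls_m(A, b, (4, 4), p=1.0)
print(np.linalg.norm(A @ X_hat.flatten(order="F") - b))  # constraint residual
```

Each iterate satisfies the measurement constraint exactly by construction, so the printed residual is at the level of floating-point round-off; recovery quality of the true matrix depends on the number of measurements relative to the rank.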

Citation: Yun Cai, Song Li. Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery. Inverse Problems & Imaging, 2017, 11 (4) : 643-661. doi: 10.3934/ipi.2017030
References:

[1] D. Ba, B. Babadi, P. L. Purdon and E. N. Brown, Convergence and stability of iteratively re-weighted least squares algorithms, IEEE Trans. Signal Process., 62 (2014), 183-195. doi: 10.1109/TSP.2013.2287685.

[2] R. Basri and D. Jacobs, Lambertian reflectance and linear subspaces, IEEE Trans. Pattern Anal. Mach. Intell., 25 (2003), 218-233. doi: 10.1109/ICCV.2001.937651.

[3] D. P. Bertsekas, A. Nedic and A. E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, 2003.

[4] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. doi: 10.1017/CBO9780511804441.

[5] T. Cai and A. Zhang, Sparse representation of a polytope and recovery of sparse signals and low-rank matrices, IEEE Trans. Inform. Theory, 60 (2014), 122-132. doi: 10.1109/TIT.2013.2288639.

[6] Y. Cai and S. Li, Convergence analysis of projected gradient descent for Schatten-$p$ nonconvex matrix recovery, Sci. China Math., 58 (2015), 845-858. doi: 10.1007/s11425-014-4949-1.

[7] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, 51 (2005), 4203-4215. doi: 10.1109/TIT.2005.858979.

[8] E. J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris, Ser. I, 346 (2008), 589-592. doi: 10.1016/j.crma.2008.03.014.

[9] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772. doi: 10.1007/s10208-009-9045-5.

[10] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56 (2010), 2053-2080. doi: 10.1109/TIT.2010.2044061.

[11] E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925-936.

[12] E. J. Candès and Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE Trans. Inform. Theory, 57 (2011), 2342-2359. doi: 10.1109/TIT.2011.2111771.

[13] E. J. Candès, Y. Eldar, T. Strohmer and V. Voroninski, Phase retrieval via matrix completion, SIAM J. Imaging Sci., 6 (2013), 199-225. doi: 10.1137/110848074.

[14] R. Chartrand and W. Yin, Iteratively reweighted algorithms for compressive sensing, IEEE International Conference on Acoustics, Speech and Signal Processing, (2008), 3869-3872.

[15] I. Daubechies, R. DeVore, M. Fornasier and C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery, Comm. Pure Appl. Math., 63 (2010), 1-38. doi: 10.1002/cpa.20303.

[16] M. Fazel, Matrix Rank Minimization with Applications, Ph.D. thesis, Stanford University, 2002.

[17] M. Fornasier, H. Rauhut and R. Ward, Low-rank matrix recovery via iteratively reweighted least squares minimization, SIAM J. Optim., 21 (2011), 1614-1640. doi: 10.1137/100811404.

[18] S. Foucart and M. J. Lai, Sparsest solutions of underdetermined linear systems via $l_{q}$-minimization for $0 < q \leq 1$, Appl. Comput. Harmon. Anal., 26 (2009), 395-407. doi: 10.1016/j.acha.2008.09.001.

[19] M. Grant and S. Boyd, Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control (tribute to M. Vidyasagar) (eds. V. Blondel, S. Boyd and H. Kimura), Springer, 2008, 95-110.

[20] M. Grant and S. Boyd, CVX: Matlab software for disciplined convex programming, Available from: http://web.stanford.edu/boyd/software.html.

[21] D. Gross, Y. K. Liu, S. T. Flammia, S. Becker and J. Eisert, Quantum state tomography via compressed sensing, Phys. Rev. Lett., 105 (2010), 150401. doi: 10.1103/PhysRevLett.105.150401.

[22] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57 (2011), 1548-1566. doi: 10.1109/TIT.2011.2104999.

[23] P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, (2013), 665-674. doi: 10.1145/2488608.2488693.

[24] H. Ji, C. Liu, Z. W. Shen and Y. H. Xu, Robust video denoising using low rank matrix completion, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, (2010), 1548-1566.

[25] F. Krahmer and R. Ward, New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property, SIAM J. Math. Anal., 43 (2011), 1269-1281. doi: 10.1137/100810447.

[26] M. J. Lai, Y. Xu and W. Yin, Improved iteratively reweighted least squares for unconstrained smoothed $l_{q}$ minimization, SIAM J. Numer. Anal., 51 (2013), 927-957. doi: 10.1137/110840364.

[27] C. L. Lawson, Contributions to the Theory of Linear Least Maximum Approximation, Ph.D. thesis, University of California, Los Angeles, 1961.

[28] J. Lin and S. Li, Convergence of projected Landweber iteration for matrix rank minimization, Appl. Comput. Harmon. Anal., 36 (2014), 316-325. doi: 10.1016/j.acha.2013.06.005.

[29] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl., 31 (2009), 1235-1256. doi: 10.1137/090755436.

[30] K. Mohan and M. Fazel, Iterative reweighted least squares for matrix rank minimization, in Proc. 48th Allerton Conference on Communication, Control, and Computing, Allerton, IL, 2010. doi: 10.1109/ALLERTON.2010.5706969.

[31] K. Mohan and M. Fazel, Iterative reweighted algorithms for matrix rank minimization, J. Mach. Learn. Res., 13 (2012), 3441-3473.

[32] T. Morita and T. Kanade, A sequential factorization method for recovering shape and motion from image streams, IEEE Trans. Pattern Anal. Mach. Intell., 19 (1997), 858-867. doi: 10.1109/34.608289.

[33] Netflix Prize, Available from: http://www.netflixprize.com/.

[34] S. Oymak, K. Mohan, M. Fazel and B. Hassibi, A simplified approach to recovery conditions for low rank matrices, in IEEE International Symposium on Information Theory Proceedings (ISIT), (2011), 2318-2322. doi: 10.1109/ISIT.2011.6033976.

[35] B. Recht, M. Fazel and P. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471-501. doi: 10.1137/070697835.

[36] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), 3413-3430.

[37] R. Saab and O. Yilmaz, Sparse recovery by non-convex optimization: Instance optimality, Appl. Comput. Harmon. Anal., 29 (2010), 30-48. doi: 10.1016/j.acha.2009.08.002.

[38] E. van den Berg and M. P. Friedlander, Sparse optimization with least-squares constraints, SIAM J. Optim., 21 (2011), 1201-1229. doi: 10.1137/100785028.

[39] W. Wang, W. Xu and A. Tang, On the performance of sparse recovery via $l_{p}$-minimization, IEEE Trans. Inform. Theory, 57 (2011), 7255-7278. doi: 10.1109/TIT.2011.2159959.

[40] K. Wei, J.-F. Cai, T. F. Chan and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM J. Matrix Anal. Appl., 37 (2016), 1198-1222. doi: 10.1137/15M1050525.

[41] M.-C. Yue and A. Man-Cho So, A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery, Appl. Comput. Harmon. Anal., 40 (2016), 396-416. doi: 10.1016/j.acha.2015.06.006.

[42] M. Zhang, Z. Huang and Y. Zhang, Restricted $p$-isometry properties of nonconvex matrix recovery, IEEE Trans. Inform. Theory, 59 (2013), 4316-4323. doi: 10.1109/TIT.2013.2250577.

[43] Z. Zhou, J. Wright, X. Li, E. J. Candès and Y. Ma, Stable principal component pursuit, Proceedings of International Symposium on Information Theory, (2010). doi: 10.1109/ISIT.2010.5513535.



