August 2019, 13(4): 703-720. doi: 10.3934/ipi.2019032

Recovering low-rank matrices from binary measurements

Simon Foucart and Richard G. Lynch

Department of Mathematics, Mailstop 3368, Texas A&M University, College Station, TX 77843-3368, USA

* Corresponding author: Simon Foucart

Received: August 2017. Revised: January 2019. Published: May 2019.

Funding: The first author is partially supported by NSF grants DMS-1622134 and DMS-1664803.

This article studies the approximate recovery of low-rank matrices acquired through binary measurements. Two types of recovery algorithms are considered, one based on hard singular value thresholding and the other on semidefinite programming. When no thresholds are introduced before binary quantization, it is first shown that the direction of the low-rank matrices can be well approximated. Then, when nonadaptive thresholds are incorporated, it is shown that both the direction and the magnitude can be well approximated. Finally, by allowing the thresholds to be chosen adaptively, we exhibit a recovery procedure for which low-rank matrices are fully approximated with error decaying exponentially with the number of binary measurements. In all cases, the procedures are robust to prequantization error. The underlying arguments are essentially deterministic: they rely only on an unusual restricted isometry property of the measurement process, which is established once and for all for Gaussian measurement processes.
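To illustrate the first, thresholdless setting, the direction of a low-rank matrix can be estimated by hard singular value thresholding of the back-projection of the sign measurements. The following Python sketch is not the authors' code: the dimensions, the Gaussian measurement matrices, and the rank-$r$ truncation of $\frac{1}{m}\sum_i y_i \mathbf{A}_i$ as the estimator are illustrative assumptions based on the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 4000

# Ground-truth rank-r matrix, normalized to unit Frobenius norm
# (only the direction X / ||X||_F is recoverable from signs alone).
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
X = U @ V.T
X /= np.linalg.norm(X)

# Binary measurements y_i = sign(<A_i, X>) with Gaussian matrices A_i.
A = rng.standard_normal((m, n, n))
y = np.sign(np.einsum('kij,ij->k', A, X))

# Back-project the signs, then hard-threshold the singular values:
# keep only the top-r singular triples and renormalize.
M = np.einsum('k,kij->ij', y, A) / m
Uh, s, Vh = np.linalg.svd(M)
X_ht = Uh[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]
X_ht /= np.linalg.norm(X_ht)

# Alignment of the estimated direction with the true one (close to 1
# when m is large relative to r * n).
cosine = float(np.sum(X * X_ht))
print(round(cosine, 3))
```

For Gaussian measurements, the expectation of the back-projection is proportional to the direction of the target, which is why the simple rank-$r$ truncation above already aligns well with it; the magnitude, however, is lost to the sign quantization, as the abstract notes.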

Citation: Simon Foucart, Richard G. Lynch. Recovering low-rank matrices from binary measurements. Inverse Problems & Imaging, 2019, 13 (4) : 703-720. doi: 10.3934/ipi.2019032
References:
[1]

R. Baraniuk and P. Boufounos, 1-bit compressive sensing, in 2008 42nd Annual Conference on Information Sciences and Systems, IEEE, (2008), 16–21.

[2]

R. Baraniuk, S. Foucart, D. Needell, Y. Plan and M. Wootters, Exponential decay of reconstruction error from binary measurements of sparse signals, IEEE Transactions on Information Theory, 63 (2017), 3368-3385. doi: 10.1109/TIT.2017.2688381.

[3]

R. Baraniuk, S. Foucart, D. Needell, Y. Plan and M. Wootters, One-bit compressive sensing of dictionary-sparse signals, Information and Inference: A Journal of the IMA, 7 (2018), 83-104. doi: 10.1093/imaiai/iax009.

[4]

E. J. Candès and Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE Transactions on Information Theory, 57 (2011), 2342-2359. doi: 10.1109/TIT.2011.2111771.

[5]

CVX Research, Inc., CVX: MATLAB software for disciplined convex programming, version 2.1, http://cvxr.com/cvx, 2014.

[6]

M. A. Davenport and J. Romberg, An overview of low-rank matrix recovery from incomplete observations, IEEE Journal of Selected Topics in Signal Processing, 10 (2016), 608-622. doi: 10.1109/JSTSP.2016.2539100.

[7]

S. Foucart, Flavors of Compressive Sensing, in Approximation Theory XV: San Antonio 2016 (eds. G. E. Fasshauer and L. L. Schumaker), Springer Proceedings in Mathematics & Statistics, 201 (2017), 61–104.

[8]

S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, New York, 2013. doi: 10.1007/978-0-8176-4948-7.

[9]

Y. Plan and R. Vershynin, One-bit compressed sensing by linear programming, Communications on Pure and Applied Mathematics, 66 (2013), 1275-1297. doi: 10.1002/cpa.21442.

[10]

Y. Plan and R. Vershynin, Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach, IEEE Transactions on Information Theory, 59 (2013), 482-494. doi: 10.1109/TIT.2012.2207945.

[11]

Y. Plan and R. Vershynin, Dimension reduction by random hyperplane tessellations, Discrete & Computational Geometry, 51 (2014), 438-461. doi: 10.1007/s00454-013-9561-6.

[12]

Y. Plan and R. Vershynin, The generalized Lasso with non-linear observations, IEEE Transactions on Information Theory, 62 (2016), 1528-1537. doi: 10.1109/TIT.2016.2517008.

[13]

G. Schechtman, Two observations regarding embedding subsets of Euclidean spaces in normed spaces, Advances in Mathematics, 200 (2006), 125-135. doi: 10.1016/j.aim.2004.11.003.

Figure 1.  Comparison of the procedures (25) and (39) when the dimension $ n $ varies
Figure 2.  Recovery error $ \varepsilon: = \left\| {\mathbf X} - \dfrac{ {\mathbf X}^{\rm ht}}{\| {\mathbf X}^{\rm ht}\|_F} \right\|_F $ vs magnitude $ \mu: = \| {\mathbf e}\|_1 $ of prequantization error
Figure 3.  Recovery error $ \varepsilon: = \| {\mathbf X} - \widehat{ {\mathbf X}}\|_F $ vs oversampling factor $ \lambda $ for the procedure (62)
Figure 4.  Recovery error $ \varepsilon: = \| {\mathbf X} - \widehat{ {\mathbf X}}\|_F $ vs oversampling factor $ \lambda $ for the procedure (73)-(74)-(75)