March 2018, 8(1): 97-117. doi: 10.3934/naco.2018006

Fused LASSO penalized least absolute deviation estimator for high dimensional linear regression

1. Community Health Center, Beijing Jiaotong University, Beijing 100044, China
2. Department of Mathematics and Statistics, Loyola University Maryland, Baltimore, MD 21210, USA
3. Department of Applied Mathematics, Beijing Jiaotong University, Beijing 100044, China

* Corresponding author: Lingchen Kong

This paper was prepared on the occasion of the 10th International Conference on Optimization: Techniques and Applications (ICOTA 2016), Ulaanbaatar, Mongolia, July 23-26, 2016. The handling Associate Editors of Numerical Algebra, Control and Optimization (NACO) were Prof. Dr. Zhiyou Wu, School of Mathematical Sciences, Chongqing Normal University, Chongqing, China; Prof. Dr. Changjun Yu, Department of Mathematics and Statistics, Curtin University, Perth, Australia, and Shanghai University, China; and Prof. Gerhard-Wilhelm Weber, Middle East Technical University, Ankara, Turkey.

Received: January 2017. Revised: December 2017. Published: March 2018.

Fund Project: This work was supported in part by the National Natural Science Foundation of China (11671029) and the Fundamental Research Funds for the Central Universities (2016JBM081).

The least absolute shrinkage and selection operator (LASSO) has played an important role in variable selection and dimensionality reduction for high dimensional linear regression under zero-mean or Gaussian assumptions on the noise. However, these assumptions may not hold in practice, and in that case least absolute deviation is a popular and useful alternative. In this paper, we focus on least absolute deviation with a Fused LASSO penalty, called Robust Fused LASSO, under the assumption that the unknown coefficient vector is sparse both in its entries and in its successive differences. The Robust Fused LASSO estimator requires no knowledge of the standard deviation of the noise and no moment assumptions on the noise. We show that the Robust Fused LASSO estimator achieves near oracle performance, i.e., with large probability, the $\ell_2$ norm of the estimation error is of order $O(\sqrt{k(\log p)/n})$. This result holds for a wide range of noise distributions, even the Cauchy distribution. In addition, we apply the linearized alternating direction method of multipliers (LADMM) to compute the Robust Fused LASSO estimator and establish its global convergence. Numerical results are reported to demonstrate the efficiency of the proposed method.
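For concreteness, the Robust Fused LASSO (RFLASSO) estimator described above can be written as a convex program; the display below is a reconstruction from the abstract's description (the paper's own formulation, problem (3), is not reproduced on this page):

$$\widehat{\beta} \in \mathop{\arg\min}_{\beta\in\mathbb{R}^p} \; \|y-X\beta\|_1 + \lambda_1\|\beta\|_1 + \lambda_2\sum_{i=2}^{p}|\beta_i-\beta_{i-1}|,$$

where the $\ell_1$ loss $\|y-X\beta\|_1$ replaces the least squares loss of the ordinary Fused LASSO, and $\lambda_1, \lambda_2>0$ penalize the coefficients and their successive differences, respectively. Writing $A$ for the $(p-1)\times p$ successive-difference matrix, the last term equals $\lambda_2\|A\beta\|_1$, which is the form used in the LADMM scheme of Table 1.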

Citation: Yanqing Liu, Jiyuan Tao, Huan Zhang, Xianchao Xiu, Lingchen Kong. Fused LASSO penalized least absolute deviation estimator for high dimensional linear regression. Numerical Algebra, Control & Optimization, 2018, 8 (1) : 97-117. doi: 10.3934/naco.2018006
Figure 1.  Results of RFLASSO when $i = 2, \sigma = 0.001$
Figure 2.  Results of FLASSO when $i = 2, \sigma = 0.001$
Figure 3.  Results for RFLASSO when $i = 2, \sigma = 0.001$
Figure 4.  Results for FLASSO when $i = 2, \sigma = 0.001$
Table 1.  Iterative scheme of LADMM algorithm for RFLASSO
LADMM Algorithm for RFLASSO
Input: $X$, $y$, $\lambda_1>0, \lambda_2>0, \mu>0, \nu>0$ and $\eta>\rho(\mu X^TX+\nu A^TA)$, where $\rho(\cdot)$ denotes the spectral radius.
Initialize $(\beta^0, \gamma^0, \tau^0, \alpha^0, \theta^0)=(1, 1, 1, 1, 1)$;
For $k=0, 1, \cdots, n-1$
       Do
           Compute $\beta^{k+1}$ by (23),
           Compute $\gamma^{k+1}$ by (24),
           Compute $\tau^{k+1}$ by (25),
           Update $\alpha^{k+1}=\alpha^k-\nu(A\beta^{k+1}-\gamma^{k+1})$,
           Update $\theta^{k+1}=\theta^k-\mu(y-X\beta^{k+1}-\tau^{k+1})$.
End
Output: $(\beta^n, \gamma^n, \tau^n, \alpha^n, \theta^n)$ as an approximate solution of (3).
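The closed-form subproblem solutions (23)-(25) are not reproduced on this page. As a minimal sketch, the Python/NumPy implementation below reconstructs them from the standard linearized ADMM derivation for the splitting $\gamma=A\beta$, $\tau=y-X\beta$: the $\beta$-step linearizes the augmented quadratic terms and soft-thresholds at level $\lambda_1/\eta$, while the $\gamma$- and $\tau$-steps are exact soft-thresholding steps. These update formulas should be checked against (23)-(25) in the paper.

import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladmm_rflasso(X, y, lam1, lam2, mu=0.5, nu=0.5, n_iter=200):
    # Minimize ||y - X b||_1 + lam1*||b||_1 + lam2*||A b||_1 by LADMM,
    # with the splitting gamma = A b and tau = y - X b as in Table 1.
    n, p = X.shape
    A = np.diff(np.eye(p), axis=0)              # (p-1) x p difference matrix
    # Step-size parameter: eta > rho(mu*X'X + nu*A'A), as required in Table 1.
    eta = 1.01 * np.max(np.linalg.eigvalsh(mu * X.T @ X + nu * A.T @ A))
    beta, gamma, tau = np.ones(p), np.ones(p - 1), np.ones(n)
    alpha, theta = np.ones(p - 1), np.ones(n)   # Lagrange multipliers
    for _ in range(n_iter):
        # beta-step, cf. (23): gradient of the linearized augmented terms,
        # then soft-thresholding at level lam1/eta.
        g = (X.T @ (theta - mu * (y - X @ beta - tau))
             - A.T @ (alpha - nu * (A @ beta - gamma)))
        beta = soft_threshold(beta - g / eta, lam1 / eta)
        # gamma-step, cf. (24): exact prox of lam2*||.||_1.
        gamma = soft_threshold(A @ beta - alpha / nu, lam2 / nu)
        # tau-step, cf. (25): exact prox of ||.||_1.
        tau = soft_threshold(y - X @ beta - theta / mu, 1.0 / mu)
        # Multiplier updates, exactly as in Table 1.
        alpha = alpha - nu * (A @ beta - gamma)
        theta = theta - mu * (y - X @ beta - tau)
    return beta

Apart from the one-time eigenvalue computation that fixes $\eta$, each iteration costs only matrix-vector products, which is what makes the linearized variant attractive when $p$ is large.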
Table 2.  $\rho(0.5X^TX+0.5A^TA)$ for design matrix with unit column norms
i               1       2       3       4
$\sigma=0.001$  5.246   5.289   5.324   5.344
$\sigma=0.005$  5.265   5.296   5.333   5.350
$\sigma=0.01$   5.273   5.301   5.342   5.356
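The tabulated values determine admissible step sizes: any $\eta$ exceeding them satisfies the condition $\eta>\rho(\mu X^TX+\nu A^TA)$ of Table 1 with $\mu=\nu=0.5$. A quick way to reproduce such a value (here with a hypothetical random design matrix standing in for the paper's data, at the $i=1$ problem size of Table 3) is:

import numpy as np

rng = np.random.default_rng(0)
n, p = 360, 1280                                # the i = 1 problem size
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)                  # unit column norms, as above
A = np.diff(np.eye(p), axis=0)                  # successive-difference matrix
M = 0.5 * X.T @ X + 0.5 * A.T @ A               # symmetric positive semidefinite
print(np.max(np.linalg.eigvalsh(M)))            # spectral radius; eta must exceed it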
Table 3.  The results for RFLASSO with size $(n, p, g) = (360i, 1280i, 160i)$
$i$     $\sigma$          Iter   CPU     Obj       Error
$i=1$   $\sigma=0.001$    156    0.424   31.052    20.3164
        $\sigma=0.005$    165    0.430   32.276    23.3330
        $\sigma=0.01$     175    0.463   33.364    23.5509
$i=2$   $\sigma=0.001$    150    1.783   48.669    24.9877
        $\sigma=0.005$    168    1.953   59.415    25.5110
        $\sigma=0.01$     189    2.113   67.510    31.8536
$i=3$   $\sigma=0.001$    169    4.268   87.465    33.2984
        $\sigma=0.005$    154    4.307   96.761    37.3335
        $\sigma=0.01$     167    4.109   91.263    34.8521
$i=4$   $\sigma=0.001$    166    7.525   123.087   40.0416
        $\sigma=0.005$    167    7.382   132.316   41.0560
        $\sigma=0.01$     157    6.833   137.114   44.1079
Table 4.  Selected results for RFLASSO and FLASSO
                                                        RFLASSO   FLASSO
$\sharp\{|\widehat{\beta}_i|<0.1, \ i\in G^c\}$          2240      2240
$\max_{i\in G^c}|\widehat{\beta}_i|$                     0.0147    0.0153
$\sharp\{|\widehat{\beta}_i-\beta^*_i|<0.1, \ i\in G\}$  320       320
$\max_{i\in G}|\widehat{\beta}_i-\beta^*_i|$             0.1447    0.1365
Table 5.  Selected results for RFLASSO and FLASSO
                                                        RFLASSO   FLASSO
$\sharp\{|\widehat{\beta}_i|<0.1, \ i\in G^c\}$          2240      2372
$\max_{i\in G^c}|\widehat{\beta}_i|$                     0.0442    0.0997
$\sharp\{|\widehat{\beta}_i-\beta^*_i|<0.1, \ i\in G\}$  320       188
$\max_{i\in G}|\widehat{\beta}_i-\beta^*_i|$             0.1329    1.7218
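In Tables 4 and 5, $G$ denotes the support of the true coefficient vector $\beta^*$, $G^c$ its complement, and $\sharp\{\cdot\}$ the cardinality of the indicated index set. A hypothetical helper that computes the four reported quantities from an estimate $\widehat{\beta}$ might look as follows (the function name and interface are assumptions for illustration; 0.1 is the threshold used in the tables):

import numpy as np

def support_metrics(beta_hat, beta_star, G, thresh=0.1):
    # G: indices of the truly nonzero coefficients; Gc: its complement.
    p = beta_hat.size
    Gc = np.setdiff1d(np.arange(p), G)
    err = np.abs(beta_hat - beta_star)
    return {
        "count |beta_hat_i| < thresh on G^c": int(np.sum(np.abs(beta_hat[Gc]) < thresh)),
        "max |beta_hat_i| on G^c": float(np.max(np.abs(beta_hat[Gc]))),
        "count |beta_hat_i - beta*_i| < thresh on G": int(np.sum(err[G] < thresh)),
        "max |beta_hat_i - beta*_i| on G": float(np.max(err[G])),
    }

Read this way, Table 5 shows RFLASSO keeping all 320 support coefficients within 0.1 of their true values, while FLASSO does so for only 188.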