December 2017, 11(6): 975-996. doi: 10.3934/ipi.2017045

A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method

Department of Mathematics, University of Alabama, Box 870350, Tuscaloosa, AL 35487, USA

* Corresponding author: Wei Zhu

Received: June 2016. Revised: January 2017. Published: September 2017.

In this paper, we propose a new augmented Lagrangian method for the mean curvature based image denoising model [33]. Unlike the previous works [21,35], the new method involves only two Lagrange multipliers, which significantly reduces the effort of choosing appropriate penalization parameters to ensure convergence of the iterative process of finding the associated saddle points. With this new algorithm, we demonstrate the features of the model numerically, including the preservation of image contrasts and object corners, as well as its capability of generating smooth patches of image graphs. The data selection property of the model and the role of the spatial mesh size in its performance are also discussed.

Citation: Wei Zhu. A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method. Inverse Problems & Imaging, 2017, 11 (6) : 975-996. doi: 10.3934/ipi.2017045
References:
[1] L. Ambrosio and S. Masnou, A direct variational approach to a problem arising in image reconstruction, Interfaces Free Bound., 5 (2003), 63-81. doi: 10.4171/IFB/72.
[2] L. Ambrosio and S. Masnou, On a variational problem arising in image reconstruction, Free Boundary Problems (Trento, 2002), Internat. Ser. Numer. Math., Birkhäuser, Basel, 147 (2004), 17-26.
[3] E. Bae, J. Shi and X. C. Tai, Graph cuts for curvature based image denoising, IEEE Trans. Image Process., 20 (2011), 1199-1210. doi: 10.1109/TIP.2010.2090533.
[4] G. Bellettini, V. Caselles and M. Novaga, The total variation flow in $\mathbb{R}^n$, J. Differ. Equations, 184 (2002), 475-525. doi: 10.1006/jdeq.2001.4150.
[5] C. Brito-Loeza and K. Chen, Multigrid algorithm for high order denoising, SIAM J. Imaging Sciences, 3 (2010), 363-389. doi: 10.1137/080737903.
[6] A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vis., 20 (2004), 89-97. doi: 10.1023/B:JMIV.0000011320.81911.38.
[7] T. Chan, G. H. Golub and P. Mulet, A nonlinear primal-dual method for total variation-based image restoration, SIAM J. Sci. Comput., 20 (1999), 1964-1977. doi: 10.1137/S1064827596299767.
[8] T. Chan and S. Esedoglu, Aspects of total variation regularized $L^1$ function approximation, SIAM J. Appl. Math., 65 (2005), 1817-1837. doi: 10.1137/040604297.
[9] T. Chan, S. Esedoḡlu, F. Park and M. H. Yip, Recent developments in total variation image restoration, in Handbook of Mathematical Models in Computer Vision (eds. N. Paragios, Y. Chen and O. Faugeras), Springer Verlag, 2005.
[10] T. Chan, S. H. Kang and J. H. Shen, Euler's elastica and curvature based inpaintings, SIAM J. Appl. Math., 63 (2002), 564-592. doi: 10.1137/S0036139901390088.
[11] M. P. do Carmo, Differential Geometry of Curves and Surfaces, Prentice-Hall, Inc., 1976.
[12] Y. Duan, Y. Wang, X.-C. Tai and J. Hahn, A fast augmented Lagrangian method for Euler's elastica model, SSVM 2011, LNCS, 6667 (2012), 144-156.
[13] J. Eckstein and W. Yao, Understanding the convergence of the alternating direction method of multipliers: Theoretical and computational perspectives, Pac. J. Optim., 11 (2015), 619-644.
[14] N. El-Zehiry and L. Grady, Fast global optimization of curvature, Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2010), 3257-3264.
[15] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323-343. doi: 10.1137/080725891.
[16] M. Hintermüller, C. N. Rautenberg and J. Hahn, Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction, Inverse Problems, 30 (2014), 055014, 34pp. doi: 10.1088/0266-5611/30/5/055014.
[17] R. March and M. Dozio, A variational method for the recovery of smooth boundaries, Image and Vision Computing, 15 (1997), 705-712. doi: 10.1016/S0262-8856(97)00002-4.
[18] S. Masnou, Disocclusion: A variational approach using level lines, IEEE Trans. Image Process., 11 (2002), 68-76. doi: 10.1109/83.982815.
[19] S. Masnou and J. M. Morel, Level lines based disocclusion, Proc. IEEE Int. Conf. on Image Processing, Chicago, IL, (1998), 259-263.
[20] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, Vol. 22, American Mathematical Society, Providence, RI, 2001. doi: 10.1090/ulect/022.
[21] M. Myllykoski, R. Glowinski, T. Kärkkäinen and T. Rossi, A new augmented Lagrangian approach for $L^1$-mean curvature image denoising, SIAM J. Imaging Sci., 8 (2015), 95-125. doi: 10.1137/140962164.
[22] M. Nitzberg, D. Mumford and T. Shiota, Filtering, Segmentation, and Depth, Lecture Notes in Computer Science, Vol. 662, Springer Verlag, Berlin, 1993. doi: 10.1007/3-540-56484-5.
[23] S. Osher, M. Burger, D. Goldfarb, J. J. Xu and W. T. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), 460-489. doi: 10.1137/040605412.
[24] R. T. Rockafellar, Augmented Lagrangians and applications of the proximal point algorithm in convex programming, Mathematics of Operations Research, 1 (1976), 97-116. doi: 10.1287/moor.1.2.97.
[25] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.
[26] T. Schoenemann, F. Kahl and D. Cremers, Curvature regularity for region-based image segmentation and inpainting: A linear programming relaxation, IEEE International Conference on Computer Vision (ICCV), 2009. doi: 10.1109/ICCV.2009.5459209.
[27] T. Schoenemann, F. Kahl, S. Masnou and D. Cremers, A linear framework for region-based image segmentation and inpainting involving curvature penalization, Int. J. Comput. Vision, 99 (2012), 53-68. doi: 10.1007/s11263-012-0518-7.
[28] D. Strong and T. Chan, Edge-preserving and scale-dependent properties of total variation regularization, Inverse Problems, 19 (2003), 165-187. doi: 10.1088/0266-5611/19/6/059.
[29] X. C. Tai, J. Hahn and G. J. Chung, A fast algorithm for Euler's elastica model using augmented Lagrangian method, SIAM J. Imaging Sciences, 4 (2011), 313-344. doi: 10.1137/100803730.
[30] C. Wu and X. C. Tai, Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models, SIAM J. Imaging Sciences, 3 (2010), 300-339. doi: 10.1137/090767558.
[31] F. Yang, K. Chen and B. Yu, Homotopy method for a mean curvature-based denoising model, Appl. Numer. Math., 62 (2012), 185-200. doi: 10.1016/j.apnum.2011.12.001.
[32] W. Zhu and T. Chan, A variational model for capturing illusory contours using curvature, J. Math. Imaging Vision, 27 (2007), 29-40. doi: 10.1007/s10851-006-9695-8.
[33] W. Zhu and T. Chan, Image denoising using mean curvature of image surface, SIAM J. Imaging Sciences, 5 (2012), 1-32. doi: 10.1137/110822268.
[34] W. Zhu, T. Chan and S. Esedoḡlu, Segmentation with depth: A level set approach, SIAM J. Sci. Comput., 28 (2006), 1957-1973. doi: 10.1137/050622213.
[35] W. Zhu, X. C. Tai and T. Chan, Augmented Lagrangian method for a mean curvature based image denoising model, Inverse Probl. Imag., 7 (2013), 1409-1432. doi: 10.3934/ipi.2013.7.1409.
[36] W. Zhu, X. C. Tai and T. Chan, Image segmentation using Euler's elastica as the regularization, J. Sci. Comput., 57 (2013), 414-438. doi: 10.1007/s10915-013-9710-3.

Figure 1.  The first row lists the noise-free image $f_{128}$ and the obtained clean image $u_{128}$. The second row presents the difference images $u_{N}-f_{N}$ with $N=32, 64, 128$ from left to right, respectively
Figure 2.  From left to right: the original "Matlab Logo" $f$, the obtained clean image $u$, and the difference $u-f$. The parameters for these experiments are $\lambda=1, r_{1}=r_{2}=10$
Figure 3.  The plots of the relative error of $u^{k}$ (Eq. 37), the residuals (Eq. 35), the relative errors of the Lagrange multipliers (Eq. 36), and the energy $E(u^{k})$. The left column is for the experiment in Fig. 1 with $N=64$, while the right column is for the "Matlab Logo" example. Note that for each experiment, one of the relative errors or residuals is around $1e-16$, roughly the machine precision of double-precision floating-point arithmetic in Matlab, which indicates that a minimizer of the MC denoising model is closely approached
Figure 4.  The original image $f$ and three outputs $u_{i}$, $i=1, 2, 3$, for the regularization parameters $\lambda=10, 1000$, and $2400$, respectively. In these experiments, $h=1$ and $r_{1}=r_{2}=\lambda$
Figure 5.  Three outputs $u_{i}$, $i=1, 2, 3$, for the spatial mesh sizes $h=1, 0.25$, and $0.2$, respectively, with the same input image $f$ as in Fig. 4. In these experiments, $\lambda=100$ and $r_{1}=r_{2}=\lambda/h$
Figure 6.  The first row lists the original noisy "Cameraman" (SNR=8.93), an intermediate result after $5e+3$ iterations, and the final clean image after $1e+4$ iterations. The second row displays the relative error of $u^{k}$ (Eq. 37), the relative residuals (Eq. 35), and the energy $E(u^{k})$ versus iterations. The parameters for this experiment are $\lambda=1e+4, r_{10}=500, r_{20}=1.5e+4$
Figure 7.  Comparison of the results using the $L^{1}$- and $L^{2}$-norms of the mean curvature as regularizers, from left to right, respectively. The parameters for the $L^{2}$-norm based MC denoising model are $h=1, \lambda=r_{1}=r_{2}=300$
Table 1.  Augmented Lagrangian method for the MC denoising model
1. Initialization: $u^{0}$, $q^{0}$, $\mathbf{n}^{0}$, and $\lambda_{1}^{0}$, ${\boldsymbol{\lambda}}_{2}^{0}$. For $k \geq 1$, do the following steps (Steps 2-4):
2. Compute an approximate minimizer $(u^{k}, q^{k}, \mathbf{n}^{k})$ of the augmented Lagrangian functional with the fixed Lagrange multipliers $\lambda_{1}^{k-1}$, ${\boldsymbol{\lambda}}_{2}^{k-1}$:
$ (u^{k}, q^{k}, \mathbf{n}^{k}) \approx \mbox{argmin } \mathcal{L}(u, q, \mathbf{n}; \lambda_{1}^{k-1}, {\boldsymbol{\lambda}}_{2}^{k-1}). $
3. Update the Lagrange multipliers:
$ \lambda_{1}^{k} = \lambda_{1}^{k-1}+r_{1}(q^{k}-\nabla\cdot \mathbf{n}^{k}), $
$ {\boldsymbol{\lambda}}_{2}^{k} = {\boldsymbol{\lambda}}_{2}^{k-1}+r_{2}\left(\frac{\nabla u^{k}}{\sqrt{1+|\nabla u^{k}|^{2}}}-\mathbf{n}^{k}\right). $
4. Measure the relative residuals and stop the iteration if they are smaller than a threshold $\epsilon_{r}$.
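As a concrete illustration of the outer loop in Table 1, the following is a minimal NumPy sketch on a discrete image grid. It is written under stated assumptions: the finite-difference operators, the boundary handling, the relative-residual check, and the inner solver `alternating_sweep` (sketched after Table 2) are simplified stand-ins, not the discretization, stopping rule (Eq. 35), or subproblem solvers used in the paper.

```python
import numpy as np

def grad(u, h=1.0):
    """Forward-difference gradient of a 2D array (zero flux at the far boundary)."""
    ux = (np.roll(u, -1, axis=1) - u) / h
    uy = (np.roll(u, -1, axis=0) - u) / h
    ux[:, -1] = 0.0
    uy[-1, :] = 0.0
    return np.stack((ux, uy))

def div(n, h=1.0):
    """Backward-difference divergence of a 2D vector field n = (nx, ny)."""
    nx, ny = n
    return (nx - np.roll(nx, 1, axis=1)) / h + (ny - np.roll(ny, 1, axis=0)) / h

def alm_denoise(f, lam, r1, r2, n_iter=100, eps_r=1e-4, h=1.0):
    """Outer augmented Lagrangian loop following Table 1 (illustrative sketch)."""
    u = f.copy()
    gu = grad(u, h)
    n = gu / np.sqrt(1.0 + np.sum(gu ** 2, axis=0))   # unit-like normal field
    q = div(n, h)
    lam1 = np.zeros_like(f)   # multiplier for the constraint q = div(n)
    lam2 = np.zeros_like(n)   # multiplier for grad(u)/sqrt(1+|grad u|^2) = n
    for k in range(n_iter):
        # Step 2: approximately minimize L(u, q, n; lam1, lam2) by one
        # alternating sweep (hypothetical helper, sketched after Table 2).
        u, q, n = alternating_sweep(u, q, n, f, lam, r1, r2, lam1, lam2, h)
        # Step 3: multiplier updates, matching the formulas in Table 1.
        res1 = q - div(n, h)
        gu = grad(u, h)
        res2 = gu / np.sqrt(1.0 + np.sum(gu ** 2, axis=0)) - n
        lam1 += r1 * res1
        lam2 += r2 * res2
        # Step 4: an illustrative relative-residual stopping test.
        rel1 = np.abs(res1).mean() / (np.abs(q).mean() + 1e-12)
        rel2 = np.abs(res2).mean() / (np.abs(n).mean() + 1e-12)
        if max(rel1, rel2) < eps_r:
            break
    return u
```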
Table 2.  Alternating minimization method for solving the subproblems
1. Initialization: $\widetilde{u}^{0}=u^{k-1}$, $\widetilde{q}^{0}=q^{k-1}$, and $\widetilde{\mathbf{n}}^{0}=\mathbf{n}^{k-1}$.
2. For fixed Lagrange multipliers $\lambda_{1}=\lambda_{1}^{k-1}$ and ${\boldsymbol{\lambda}}_{2}={\boldsymbol{\lambda}}_{2}^{k-1}$, solve the following subproblems sequentially:
$ \widetilde{u}^{1} = \mbox{argmin } \mathcal{L}(u, \widetilde{q}^{0}, \widetilde{\mathbf{n}}^{0};\lambda_{1}, {\boldsymbol{\lambda}}_{2}), $
$ \widetilde{q}^{1} = \mbox{argmin } \mathcal{L}(\widetilde{u}^{1}, q, \widetilde{\mathbf{n}}^{0};\lambda_{1}, {\boldsymbol{\lambda}}_{2}), $
$ \widetilde{\mathbf{n}}^{1} = \mbox{argmin } \mathcal{L}(\widetilde{u}^{1}, \widetilde{q}^{1}, \mathbf{n};\lambda_{1}, {\boldsymbol{\lambda}}_{2}). $
3. $(u^{k}, q^{k}, \mathbf{n}^{k})=(\widetilde{u}^{1}, \widetilde{q}^{1}, \widetilde{\mathbf{n}}^{1})$.
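The `alternating_sweep` helper referenced in the sketch after Table 1 corresponds to one sweep of Table 2: update $u$, then $q$, then $\mathbf{n}$, each with the other two variables frozen at their latest values. Only the control flow is shown below; `solve_u_subproblem`, `solve_q_subproblem`, and `solve_n_subproblem` are hypothetical names standing in for the single-variable update formulas derived in the paper, which are not reproduced here.

```python
def alternating_sweep(u, q, n, f, lam, r1, r2, lam1, lam2, h=1.0):
    """One sweep of the alternating minimization in Table 2 (control flow only)."""
    u = solve_u_subproblem(u, q, n, f, lam, r2, lam2, h)           # argmin over u
    q = solve_q_subproblem(u, n, r1, lam1, h)                      # argmin over q
    n = solve_n_subproblem(u, q, r1, r2, lam1, lam2, h)            # argmin over n
    return u, q, n
```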
Table 3.  The $L^{1}$-norm of the difference $u_{N}-f_{N}$ and parameters for experiments in Fig. 1
$N$     $||u_{N}-f_{N}||_{L^{1}}$     $r_{1}$     $r_{2}$
$32$    $1.56e-2$                     $2e-3$      $1e-1$
$64$    $1.01e-2$                     $2e-3$      $1e-1$
$128$   $6.53e-3$                     $1e-4$      $1e-1$
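For reference, one possible way to evaluate the discrete $L^{1}$ error reported in Table 3 is sketched below. It assumes the norm is the sum of absolute pixel differences scaled by $h^{2}$; the exact normalization used in the paper is not restated here.

```python
import numpy as np

def l1_error(u, f, h=1.0):
    """Discrete L^1 norm of u - f on a uniform N x N grid with mesh size h.
    Assumes sum of absolute values scaled by the cell area h^2 (an assumption)."""
    return h * h * np.abs(u - f).sum()
```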