IPI
Gaussian Markov random field priors for inverse problems
Johnathan M. Bardsley
In this paper, our focus is on the connections between the methods of (quadratic) regularization for inverse problems and Gaussian Markov random field (GMRF) priors for problems in spatial statistics. We begin with the most standard GMRFs defined on a uniform computational grid, which correspond to the oft-used discrete negative-Laplacian regularization matrix. Next, we present a class of GMRFs that allow for the formation of edges in reconstructed images, and then draw concrete connections between these GMRFs and numerical discretizations of more general diffusion operators. The benefit of the GMRF interpretation of quadratic regularization is that a GMRF is built up from concrete statistical assumptions about the value of the unknown at each pixel given the values of its neighbors. Thus the regularization term corresponds to a concrete spatial statistical model for the unknown, encapsulated in the prior. Throughout our discussion, we establish strong ties between specific GMRFs, numerical discretizations of diffusion operators, and the corresponding regularization matrices. We then show how such GMRF priors can be used for edge-preserving reconstruction of images, in both image deblurring and medical imaging test cases. Moreover, we demonstrate the effectiveness of GMRF priors for data arising from both Gaussian and Poisson noise models.
keywords: Inverse problems, image reconstruction, Gaussian Markov random fields, Bayesian inference, regularization, numerical partial differential equations
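
As a concrete illustration of the first connection described in the abstract, here is a minimal sketch in Python (the function names, grid construction, and boundary treatment are our own illustrative choices, not the paper's): the precision matrix of the standard first-order GMRF on a uniform $n \times n$ grid, built from the assumption that each pixel value, given its neighbors, is Gaussian with mean equal to the average of those neighbors, coincides with a finite-difference discretization of the negative Laplacian.

```python
import numpy as np
import scipy.sparse as sp

def gmrf_precision(n):
    """Precision (inverse covariance) matrix of a first-order GMRF on an
    n-by-n uniform grid. The Kronecker-sum construction below yields the
    familiar 5-point discrete negative Laplacian (Dirichlet boundaries)."""
    e = np.ones(n)
    # 1-D second-difference matrix tridiag(-1, 2, -1)
    D = sp.diags([-e[:-1], 2 * e, -e[:-1]], offsets=[-1, 0, 1])
    I = sp.identity(n)
    return sp.kron(I, D) + sp.kron(D, I)

def neg_log_prior(u, L, alpha):
    """Quadratic regularization term: up to an additive constant, the
    negative-log of the GMRF prior density is 0.5 * alpha * u^T L u."""
    return 0.5 * alpha * u @ (L @ u)
```

Each row of the resulting matrix encodes precisely the neighbor-averaging assumption, which is the sense in which the quadratic regularization term corresponds to a concrete spatial statistical model.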
IPI
A theoretical framework for the regularization of Poisson likelihood estimation problems
Johnathan M. Bardsley
Let $z=Au+\gamma$ be an ill-posed, linear operator equation. Such a model arises, for example, in both astronomical and medical imaging, in which case $\gamma$ corresponds to the background, $u$ to the unknown true image, $A$ to the forward operator, and $z$ to the data. Regularized solutions of this equation can be obtained by solving

$R_\alpha(A,z) = \arg\min_{u \geq 0} \left\{ T_0(Au;z) + \alpha J(u) \right\},$

where $T_0(Au;z)$ is the negative-log of the Poisson likelihood functional, and $\alpha>0$ and $J$ are the regularization parameter and functional, respectively. Our goal in this paper is to determine general conditions that guarantee that $R_\alpha$ defines a regularization scheme for $z=Au+\gamma$. Determining the appropriate definition of a regularization scheme in this context is important: not only does it unify previous theoretical arguments in this direction, it also provides a framework for future theoretical analyses. To illustrate the latter, we end the paper by applying the general framework to a case in which such an analysis has not previously been done.

keywords: regularization, Poisson likelihood, variational problems, mathematical imaging
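
To make the objective concrete: for discrete data, the negative-log Poisson likelihood is, up to additive constants, $T_0(Au;z) = \sum_i \left[ (Au)_i + \gamma_i - z_i \log((Au)_i + \gamma_i) \right]$. Below is a minimal numerical sketch of computing $R_\alpha(A,z)$, assuming a simple Tikhonov choice $J(u) = \tfrac{1}{2}\|u\|^2$ and SciPy's bound-constrained L-BFGS-B solver; these particular choices are ours for illustration and are not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import minimize

def regularized_poisson_solve(A, z, gamma, alpha):
    """R_alpha(A, z) = argmin_{u >= 0} T_0(Au; z) + alpha * J(u), with
    J(u) = 0.5 * ||u||^2. Assumes gamma > 0 elementwise and A nonnegative,
    so the log argument stays positive on the feasible set u >= 0."""
    def objective(u):
        Au_g = A @ u + gamma
        return np.sum(Au_g - z * np.log(Au_g)) + 0.5 * alpha * np.dot(u, u)

    def grad(u):
        Au_g = A @ u + gamma
        return A.T @ (1.0 - z / Au_g) + alpha * u

    u0 = np.full(A.shape[1], max(z.mean(), 1.0))  # positive initial guess
    bounds = [(0.0, None)] * A.shape[1]           # enforce u >= 0
    res = minimize(objective, u0, jac=grad, bounds=bounds, method="L-BFGS-B")
    return res.x
```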
IPI
An efficient computational method for total variation-penalized Poisson likelihood estimation
Johnathan M. Bardsley
Approximating non-Gaussian noise processes with Gaussian models is standard in data analysis. This is due in large part to the fact that Gaussian models yield parameter estimation problems of least squares form, which have been extensively studied from both the theoretical and computational points of view. In image processing applications, for example, data are often collected by a CCD camera, in which case the noise is a Gaussian/Poisson mixture, with the Poisson noise dominating for a sufficiently strong signal. Even so, the standard approach in such cases is to use a Gaussian approximation that leads to a negative-log likelihood function of weighted least squares type.
    In the Bayesian point of view taken in this paper, a negative-log prior (or regularization) function is added to the negative-log likelihood function, and the resulting function is minimized. We focus on the case where the negative-log prior is the well-known total variation function, and we give it a statistical interpretation. Regardless of whether the least squares or Poisson negative-log likelihood is used, the total variation term yields a minimization problem that is computationally challenging. The primary result of this work is an efficient computational method for the solution of such problems, together with its convergence analysis. With the computational method in hand, we then perform experiments indicating that the Poisson negative-log likelihood yields a more computationally efficient method than the least squares function. We also present results indicating that this may be the case even when the data noise is i.i.d. Gaussian, suggesting that, regardless of the noise statistics, the Poisson negative-log likelihood can yield a more computationally tractable problem when total variation regularization is used.
keywords: nonnegatively constrained optimization, total variation, image reconstruction, Bayesian statistical methods
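
To illustrate the minimization problem at issue (though not the efficient computational method developed in the paper), here is a sketch of the combined objective with a standard smoothed total variation term; the 1-D forward-difference form and the smoothing parameter $\beta$, which makes the functional differentiable, are common choices that may differ from the paper's exact formulation.

```python
import numpy as np

def tv_smoothed(u, beta=1e-4):
    """Smoothed 1-D total variation: sum_i sqrt((u_{i+1} - u_i)^2 + beta).
    The small beta > 0 removes the nondifferentiability at zero gradients."""
    du = np.diff(u)
    return np.sum(np.sqrt(du**2 + beta))

def tv_poisson_objective(u, A, z, gamma, alpha, beta=1e-4):
    """Total variation-penalized negative-log Poisson likelihood."""
    Au_g = A @ u + gamma
    return np.sum(Au_g - z * np.log(Au_g)) + alpha * tv_smoothed(u, beta)
```

The same TV term can be paired with the weighted least squares negative-log likelihood mentioned above; only the first term of the objective changes.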
