ISSN 1930-8337 (print)
ISSN 1930-8345 (online)
Inverse Problems and Imaging publishes research articles of the highest quality that employ innovative mathematical and modeling techniques to study inverse and imaging problems arising in engineering and other sciences. Every published paper has a strong mathematical orientation employing methods from such areas as control theory, discrete mathematics, differential geometry, harmonic analysis, functional analysis, integral geometry, mathematical physics, numerical analysis, optimization, partial differential equations, and stochastic and statistical methods. The field of applications includes medical and other imaging, nondestructive testing, geophysical prospection and remote sensing as well as image analysis and image processing.
This journal is committed to recording important new results in its field and will maintain the highest standards of innovation and quality. To be published in this journal, a paper must be correct, novel, nontrivial and of interest to a substantial number of researchers and readers.
 AIMS is a member of COPE. All AIMS journals adhere to the publication ethics and malpractice policies outlined by COPE.
 Publishes 6 issues a year in February, April, June, August, October and December.
 Publishes online only.
 Indexed in Science Citation Index, ISI Alerting Services, CompuMath Citation Index, Current Contents/Physical, Chemical & Earth Sciences (CC/PC&ES), INSPEC, Mathematical Reviews, MathSciNet, PASCAL/CNRS, Scopus, Web of Science and Zentralblatt MATH.
 Archived in Portico and CLOCKSS.
 IPI is a publication of the American Institute of Mathematical Sciences. All rights reserved.

TOP 10 Most Read Articles in IPI, October 2017
1 
Coordinate descent optimization for $l^1$ minimization with application to compressed sensing; a greedy algorithm
Volume 3, Number 3, Pages: 487–503, 2009
Yingying Li
and Stanley Osher
We propose a fast algorithm for solving the Basis Pursuit problem,
$\min_u \{\|u\|_1 : Au = f\}$, which has application to compressed sensing.
We design an efficient method for solving the related unconstrained problem
$\min_u E(u) = \|u\|_1 + \lambda \|Au - f\|_2^2$ based on a greedy coordinate descent
method. We claim that in combination with a Bregman iterative method, our
algorithm will achieve a solution with speed and accuracy competitive with some
of the leading methods for the basis pursuit problem.
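The greedy coordinate descent idea can be illustrated in a few lines: with unit-norm columns of $A$, each one-dimensional subproblem of $E(u) = \|u\|_1 + \lambda \|Au - f\|_2^2$ is solved exactly by soft-thresholding, and the coordinate with the largest proposed change is updated first. A minimal sketch under those assumptions (not the authors' implementation; the stopping tolerance and Gram-matrix precomputation are our choices):

```python
import numpy as np

def greedy_cd(A, f, lam, n_iter=200):
    """Greedy coordinate descent for E(u) = ||u||_1 + lam * ||A u - f||_2^2.
    Assumes the columns of A have unit 2-norm, so each coordinate-wise
    subproblem is solved exactly by soft-thresholding at 1/(2*lam)."""
    u = np.zeros(A.shape[1])
    beta = A.T @ f        # correlations a_j^T f
    G = A.T @ A           # Gram matrix (precomputed; fine for moderate n)
    for _ in range(n_iter):
        # exact one-dimensional minimizer for every coordinate:
        # c_j = a_j^T (f - A u) + u_j, then shrink by 1/(2*lam)
        c = beta - G @ u + u
        u_new = np.sign(c) * np.maximum(np.abs(c) - 1.0 / (2.0 * lam), 0.0)
        j = int(np.argmax(np.abs(u_new - u)))   # greedy: biggest change first
        if abs(u_new[j] - u[j]) < 1e-12:
            break                               # no coordinate improves E
        u[j] = u_new[j]
    return u
```

With $A = I$ the iteration reduces to componentwise soft-thresholding of $f$, which makes a quick sanity check.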

2 
Video stabilization of atmospheric turbulence distortion
Volume 7, Number 3, Pages: 839–861, 2013
Yifei Lou,
Sung Ha Kang,
Stefano Soatto
and Andrea L. Bertozzi
We present a method to enhance the quality of a video sequence
captured through a turbulent atmospheric medium, and give an
estimate of the radiance of the distant scene, represented as a
"latent image," which is assumed to be static throughout the
video. Due to atmospheric turbulence, temporal averaging produces
a blurred version of the scene's radiance. We propose a method
combining Sobolev gradient and Laplacian to stabilize the video
sequence, and a latent image is further found utilizing the "lucky
region" method. The video sequence is stabilized while keeping
sharp details, and the latent image shows more consistent straight
edges. We analyze the well-posedness of the stabilizing PDE and the
linear stability of the numerical scheme.

3 
Template matching via $l_1$ minimization and its application to hyperspectral data
Volume 5, Number 1, Pages: 19–35, 2011
Zhaohui Guo
and Stanley Osher
Detecting and identifying targets or objects that are present in
hyperspectral ground images are of great interest. Applications
include land and environmental monitoring, mining, military, civil
search-and-rescue operations, and so on. We propose and analyze an
extremely simple and efficient idea for template matching based on
$l_1$ minimization. The designed algorithm can be applied in
hyperspectral classification and target detection. Synthetic image
data and real hyperspectral image (HSI) data are used to assess the
performance, with comparisons to other approaches, e.g. spectral
angle map (SAM), adaptive coherence estimator (ACE),
generalized-likelihood ratio test (GLRT) and matched filter. We
demonstrate that this algorithm achieves excellent results with both
high speed and accuracy by using Bregman iteration.

4 
Some proximal methods for Poisson intensity CBCT and PET
Volume 6, Number 4, Pages: 565–598, 2012
Sandrine Anthoine,
Jean-François Aujol,
Yannick Boursier
and Clothilde Mélot
Cone-Beam Computerized Tomography (CBCT) and Positron Emission Tomography (PET) are two complementary medical imaging modalities providing respectively anatomic and metabolic information on a patient.
In the context of public health, one must address the problem of dose reduction of the potentially harmful quantities related to each exam protocol: X-rays for CBCT and radiotracer for PET.
Two demonstrators based on a technological breakthrough (acquisition devices working in photon-counting mode) have been developed.
It turns out that in this low-dose context, i.e. for low-intensity signals acquired by photon-counting devices, the noise should no longer be approximated by a Gaussian distribution but follows a Poisson distribution.
We investigate in this paper the two related tomographic reconstruction problems.
We formulate separately the CBCT and the PET problems in two general frameworks that encompass the physics of the acquisition devices and the specific discretization of the object to reconstruct.
We propose various fast numerical schemes based on proximal methods to compute the solution of each problem.
In particular, we show that primal-dual approaches are well suited in the PET case when considering nondifferentiable regularizations such as Total Variation.
Experiments on numerical simulations and real data favor the proposed algorithms when compared with well-established methods.
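As an illustration of the proximal machinery involved, the proximity operator of the separable Poisson negative log-likelihood $f(x) = \sum_i (x_i - b_i \log x_i)$ has a closed form given by the positive root of a per-component quadratic. A sketch in our own notation (the paper's actual operators also account for the system matrix and the discretization of the object):

```python
import numpy as np

def prox_poisson(v, b, tau):
    """Componentwise proximity operator of f(x) = x - b*log(x):
        argmin_{x > 0}  x - b*log(x) + (1/(2*tau)) * (x - v)**2.
    Setting the derivative 1 - b/x + (x - v)/tau to zero gives
        x**2 + (tau - v)*x - tau*b = 0,
    and the positive root is selected so the log stays defined."""
    v = np.asarray(v, dtype=float)
    b = np.asarray(b, dtype=float)
    return 0.5 * ((v - tau) + np.sqrt((v - tau) ** 2 + 4.0 * tau * b))
```

Because the root is always positive when $b > 0$, the iterates of a proximal scheme built on this operator remain in the domain of the log-likelihood.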

5 
4D-CT reconstruction with unified spatial-temporal patch-based regularization
Volume 9, Number 2, Pages: 447–467, 2015
Daniil Kazantsev,
William M. Thompson,
William R. B. Lionheart,
Geert Van Eyndhoven,
Anders P. Kaestner,
Katherine J. Dobson,
Philip J. Withers
and Peter D. Lee
In this paper, we consider a limited data reconstruction problem for temporally evolving computed tomography (CT), where some regions are static during the whole scan and some are dynamic (intensely or slowly changing). When motion occurs during a tomographic experiment one would like to minimize the number of projections used and reconstruct the image iteratively. To ensure stability of the iterative method, spatial and temporal constraints are highly desirable. Here, we present a novel spatial-temporal regularization approach where all time frames are reconstructed collectively as a unified function of space and time. Our method has two main differences from the state-of-the-art spatial-temporal regularization methods. Firstly, all available temporal information is used to improve the spatial resolution of each time frame. Secondly, our method does not treat spatial and temporal penalty terms separately but rather unifies them in one regularization term. Additionally, we optimize the temporal smoothing part of the method by considering the nonlocal patches which are most likely to belong to one intensity class. This modification significantly improves the signal-to-noise ratio of the reconstructed images and reduces computational time. The proposed approach is used in combination with golden ratio sampling of the projection data, which allows one to find a better trade-off between temporal and spatial resolution.

6 
Fast dual minimization of the vectorial total variation norm and applications to color image processing
Volume 2, Number 4, Pages: 455–484, 2008
Xavier Bresson
and Tony F. Chan
We propose a regularization algorithm for color/vectorial images which is fast, easy to code and mathematically well-posed. More precisely, the regularization model is based on the dual formulation of the vectorial Total Variation (VTV) norm, and it may be regarded as the vectorial extension of the dual approach defined by Chambolle in [13] for grayscale/scalar images. The proposed model offers several advantages. First, it minimizes the exact VTV norm, whereas standard approaches use a regularized norm. Second, the numerical minimization scheme is straightforward to implement, and the number of iterations needed to reach the solution is low, which gives a fast regularization algorithm. Finally, and maybe most importantly, the proposed VTV minimization scheme can be easily extended to many standard applications. We apply this $L^1$ vectorial regularization algorithm to the following problems: color inverse scale space, color denoising with the chromaticity-brightness color representation, color image inpainting, color wavelet shrinkage, color image decomposition, color image deblurring, and color denoising on manifolds. Generally speaking, this VTV minimization scheme can be used in problems that require vector-field (color, other feature vector) regularization while preserving discontinuities.
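For the grayscale/scalar case that the vectorial model extends, Chambolle's dual fixed-point iteration is short enough to sketch. The following is a minimal illustration of the dual approach on the scalar ROF model $\min_u \mathrm{TV}(u) + \frac{1}{2\lambda}\|u - f\|^2$, not the paper's vectorial algorithm; the step size and iteration count are our choices:

```python
import numpy as np

def grad(u):
    # forward differences with Neumann (zero-flux) boundary conditions
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # discrete divergence, the negative adjoint of grad
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def chambolle_tv(f, lam, n_iter=100, tau=0.248):
    """Chambolle's fixed-point iteration on the dual variable p for the
    grayscale ROF model min_u TV(u) + (1/(2*lam)) * ||u - f||^2.
    The denoised image is recovered as u = f - lam * div(p)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm   # reprojection keeps |p| <= 1
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)
```

Two properties make handy sanity checks: a constant image is a fixed point (its gradient vanishes, so the dual variable never moves), and the mean of the image is preserved because the discrete divergence sums to zero.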

7 
An anisotropic perfectly matched layer method for Helmholtz scattering problems with discontinuous wave number
Volume 7, Number 3, Pages: 663–678, 2013
Zhiming Chen,
Chao Liang
and Xueshuang Xiang
The anisotropic perfectly matched layer (PML) defines a continuous vector field outside a rectangular domain and performs the complex coordinate stretching along the direction of the vector field. In this paper we propose a new way of constructing the vector field which allows us to prove the exponential decay of the stretched Green function without any constraint on the thickness of the layer. We report numerical experiments to illustrate the competitive behavior of the proposed PML method.

8 
Adaptive meshing approach to identification of cracks with electrical impedance tomography
Volume 8, Number 1, Pages: 127–148, 2014
Kimmo Karhunen,
Aku Seppänen
and Jari P. Kaipio
Electrical impedance tomography (EIT) is a noninvasive imaging
modality in which the internal conductivity distribution
is reconstructed
based on boundary voltage measurements.
In this work, we consider the
application of EIT to nondestructive testing (NDT) of materials and,
especially, crack detection.
The main goal is to estimate the location, depth
and orientation of a crack in three dimensions.
We formulate the crack detection task as a shape estimation problem for
boundaries imposed with Neumann zero boundary conditions.
We propose an adaptive meshing algorithm that iteratively
seeks the maximum a posteriori estimate for the shape of the crack.
The approach is tested both numerically and experimentally.
In all test cases, the EIT measurements
are collected using a set of electrodes attached on only
a single planar surface of the target; this is often
the only realizable configuration in NDT of
large building structures,
such as concrete walls.
The results show that with the proposed computational method,
it is possible to recover the position and size of the crack,
even in cases where the background conductivity is inhomogeneous.

9 
Variational denoising of diffusion weighted MRI
Volume 3, Number 4, Pages: 625–648, 2009
Tim McGraw,
Baba Vemuri,
Evren Özarslan,
Yunmei Chen
and Thomas Mareci
In this paper, we present a novel variational formulation for
restoring high angular resolution diffusion imaging (HARDI) data. The
restoration formulation involves smoothing signal measurements over
the spherical domain and across the 3D image lattice. The
regularization across the lattice is achieved using a total
variation (TV) norm based scheme, while the finite element method
(FEM) is employed to smooth the data on the sphere at each lattice
point using first and second order smoothness constraints. Examples
are presented to show the performance of the
HARDI data restoration scheme and its effect on fiber direction
computation on synthetic data, as well as on real data sets
collected from excised rat brain and spinal cord.

10 
An efficient computational method for total variation-penalized Poisson likelihood estimation
Volume 2, Number 2, Pages: 167–185, 2008
Johnathan M. Bardsley
Approximating non-Gaussian noise processes with Gaussian models is standard in data analysis. This is due in large part to the fact that Gaussian models yield parameter estimation problems of least squares form, which have been extensively studied from both the theoretical and computational points of view. In image processing applications, for example, data is often collected by a CCD camera, in which case the noise is a Gaussian/Poisson mixture with the Poisson noise dominating for a sufficiently strong signal. Even so, the standard approach in such cases is to use a Gaussian approximation that leads to a negative-log likelihood function of weighted least squares type.
In the Bayesian point of view taken in this paper, a negative-log prior (or regularization) function is added to the negative-log likelihood function, and the resulting function is minimized. We focus on the case where the negative-log prior is the well-known total variation function and give a statistical interpretation. Regardless of whether the least squares or Poisson negative-log likelihood is used, the total variation term yields a minimization problem that is computationally challenging. The primary result of this work is the efficient computational method that is presented for the solution of such problems, together with its convergence analysis. With the computational method in hand, we then perform experiments that indicate that the Poisson negative-log likelihood yields a more computationally efficient method than does the use of the least squares function. We also present results that indicate that this may even be the case when the data noise is i.i.d. Gaussian, suggesting that regardless of noise statistics, using the Poisson negative-log likelihood can yield a more computationally tractable problem when total variation regularization is used.
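As a concrete reference point for the two likelihoods being compared, a Poisson negative-log likelihood for a linear model $z \sim \mathrm{Poisson}(Au + \gamma)$, with a known background term $\gamma$, and its gradient can be written as follows (a generic sketch in our notation; the paper's precise formulation may differ):

```python
import numpy as np

def poisson_negloglik(u, A, z, gamma=1.0):
    """Poisson negative-log likelihood, up to an additive constant:
        T(u) = sum_i (Au + gamma)_i - z_i * log((Au + gamma)_i),
    with gradient  A^T (1 - z / (Au + gamma)).
    gamma > 0 is a background term keeping the log argument positive."""
    Au = A @ u + gamma
    value = float(np.sum(Au - z * np.log(Au)))
    grad = A.T @ (1.0 - z / Au)
    return value, grad
```

The gradient vanishes exactly when $Au + \gamma = z$, i.e. at a perfect fit of the expected counts to the data, which is a convenient check when wiring this into a descent method.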
