Similar Documents
 20 similar documents found (search time: 46 ms)
1.
Recently, graph cuts algorithms have been used to solve variational image restoration problems, especially for noise removal and segmentation. Compared to time-marching PDE methods, graph cuts based methods are more efficient and able to obtain the global minimizer. However, for high-resolution, large-scale images, the cost in both memory and computational time increases dramatically. In this paper, we combine the domain decomposition method and the graph cuts algorithm to solve total variation minimizations with L1 and L2 fidelity terms. Numerous numerical experiments on large-scale data demonstrate that the proposed algorithm yields good results in terms of computational time and memory usage.
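The model in question minimizes E(u) = TV(u) + lam * Phi(u - f), with Phi either the L1 or the squared L2 fidelity. Below is a minimal NumPy sketch, with block sizes of my own choosing, of the two discrete energies and of the overlapping-window splitting that a domain decomposition method would hand to per-subdomain graph-cut solvers (the graph-cut solver itself is not reproduced here):

```python
import numpy as np

def tv_energy(u, f, lam, fidelity="L2"):
    """Discrete anisotropic TV plus an L1 or squared-L2 fidelity term."""
    tv = np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()
    if fidelity == "L1":
        return tv + lam * np.abs(u - f).sum()
    return tv + 0.5 * lam * ((u - f) ** 2).sum()

def overlapping_windows(shape, block=128, overlap=16):
    """Overlapping index windows of a simple domain decomposition."""
    for i in range(0, shape[0], block - overlap):
        for j in range(0, shape[1], block - overlap):
            yield (slice(i, min(i + block, shape[0])),
                   slice(j, min(j + block, shape[1])))

# Each window would be sent to a graph-cut TV solver; values on the
# overlaps are then reconciled (e.g. averaged) by the outer iteration.
f = np.random.default_rng(0).random((300, 400))
print(tv_energy(f, f, lam=1.0), sum(1 for _ in overlapping_windows(f.shape)))
```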

2.
Variational methods have become an important class of methods in signal and image restoration, a typical inverse problem. One important minimization model consists of the squared ℓ2 data fidelity (corresponding to Gaussian noise) and a regularization term constructed from a potential function composed with first-order difference operators. It is well known that total variation (TV) regularization, although it has achieved great successes, suffers from a contrast reduction effect. Using a typical signal, we show that, in fact, all convex regularizers and most nonconvex regularizers have this effect. With this motivation, we present a general truncated regularization framework. The potential function is a truncation of existing nonsmooth potential functions and is thus flat on (τ, +∞) for some positive τ. Analysis in 1D theoretically demonstrates the good contrast-preserving ability of the framework. We also give optimization algorithms with convergence verification in 2D, where global minimizers of each subproblem (either convex or nonconvex) are calculated. Numerical experiments show the advantages of the framework.
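To make the framework's central object concrete, here is a minimal sketch (the base potential, threshold τ and test signal are my own illustrative choices): truncating the potential at level φ(τ) makes the regularizer flat on (τ, +∞), so jumps larger than τ incur no extra penalty, which is what preserves contrast.

```python
import numpy as np

def truncated_potential(t, tau, phi=np.abs):
    """phi truncated at level phi(tau): flat on (tau, +inf)."""
    return np.minimum(phi(t), phi(tau))

def truncated_reg_energy(u, tau):
    """Sum of the truncated potential over first-order differences."""
    return truncated_potential(np.diff(u), tau).sum()

# A step signal: the plain TV energy grows with the jump height
# (penalizing contrast), while the truncated energy saturates at tau.
for h in (0.5, 1.0, 5.0):
    u = np.r_[np.zeros(50), h * np.ones(50)]
    print(h, np.abs(np.diff(u)).sum(), truncated_reg_energy(u, tau=1.0))
```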

3.
The truncated variation, TV^c, is a fairly new concept introduced in Łochowski (2008) [5]. Roughly speaking, given a càdlàg function f, its truncated variation is "the total variation which does not pay attention to small changes of f, below some threshold c > 0". The very basic consequence of this approach is that, contrary to the total variation, TV^c is always finite. This is appealing for stochastic analysis, where large classes of processes, like semimartingales or diffusions, could so far not be studied via their total variation. Recently, in Łochowski (2011) [6], another characterization of TV^c was found: TV^c is the smallest possible total variation of a function which approximates f uniformly with accuracy c/2. Due to these properties, we envisage that TV^c may be a useful concept both in the theory and the applications of stochastic processes.
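A crude numerical illustration of the definition (this greedy pass over samples is only a discretized lower-bound estimate of the supremum over partitions, not an exact algorithm from the cited papers): the raw variation of a random-walk path grows without bound as the grid refines, while the TV^c estimate stays moderate for any fixed c > 0.

```python
import numpy as np

def truncated_variation_estimate(f, c):
    """Crude estimate of TV^c from samples of f: accumulate only the
    moves that exceed the threshold c, measured from a running anchor.
    (A discretization sketch, not the exact supremum over partitions.)"""
    tv, anchor = 0.0, f[0]
    for x in f[1:]:
        if abs(x - anchor) > c:
            tv += abs(x - anchor) - c
            anchor = x
    return tv

# A Brownian-like path has unbounded variation as the mesh refines,
# but TV^c stays finite for every fixed c > 0.
rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(scale=0.01, size=100_000))
print(np.abs(np.diff(path)).sum(), truncated_variation_estimate(path, c=0.5))
```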

4.
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts with the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in recent years. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances, which can also be applied to non-convex regularization functionals. We provide three examples that show the applicability of our theory. The first example is the regularized solution of linear operator equations on L^p-spaces, where we show that the results of Tikhonov regularization carry over unchanged to the residual method. As a second example, we consider the problem of density estimation from a finite number of sampling points, using the Wasserstein distance as the fidelity term and an entropy measure as the regularization term. It is shown that the densities obtained in this way depend continuously on the location of the sampled points and that the underlying density can be recovered as the number of sampling points tends to infinity. Finally, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates, both for convex and non-convex regularization, under rather weak conditions.
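For a quadratic regularizer on R^n, the residual method min ||x||^2 s.t. ||Ax - y|| <= delta can be computed through the equivalent Tikhonov problem whose parameter is tuned until the residual constraint is active, i.e. Morozov's discrepancy principle. A self-contained sketch (the toy data and bisection ranges are mine, not from the paper):

```python
import numpy as np

def residual_method_l2(A, y, delta, lo=1e-8, hi=1e8, iters=60):
    """min ||x||^2 subject to ||Ax - y|| <= delta, via the equivalent
    Tikhonov problem (A^T A + a I) x = A^T y with a tuned by bisection."""
    def solve(a):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                 # bisect on a log scale
        res = np.linalg.norm(A @ solve(mid) - y)
        # residual grows with a: keep 'lo' on the feasible side
        lo, hi = (mid, hi) if res <= delta else (lo, mid)
    return solve(lo)

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 60))
y = A @ rng.normal(size=60) + 0.1 * rng.normal(size=40)
x = residual_method_l2(A, y, delta=0.1 * np.sqrt(40))
print(np.linalg.norm(A @ x - y))               # roughly equal to delta
```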

5.
A family of classification algorithms generated from Tikhonov regularization schemes is considered. They involve multi-kernel spaces and general convex loss functions. Our main purpose is to provide satisfactory estimates for the excess misclassification error of these multi-kernel regularized classifiers when the loss function achieves the value zero. The error analysis consists of two parts: regularization error and sample error. Allowing multiple kernels in the algorithm improves the regularization and approximation errors, which is one advantage of the multi-kernel setting. For a general loss function, we show how to bound the regularization error by approximation in some weighted L^q spaces. For the sample error, we use a projection operator. The projection, in connection with the decay of the regularization error, enables us to improve convergence rates in the literature even for one-kernel schemes and special loss functions: the least-square loss and the hinge loss for support vector machine soft margin classifiers. Existence of the optimization problem for the regularization scheme associated with multi-kernels is verified when the kernel functions are continuous with respect to the index set. Concrete examples, including Gaussian kernels with flexible variances and probability distributions with some noise conditions, are used to illustrate the general theory.
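A minimal instance of such a scheme with least-square loss (the data, kernel widths and λ are illustrative assumptions): the multi-kernel space here is induced by a sum of Gaussian kernels with different variances, and the regularized solution has the usual closed form.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def multi_kernel_classifier(X, y, sigmas, lam):
    """Tikhonov regularization with least-square loss over a sum of
    Gaussian kernels with flexible variances (a simple multi-kernel)."""
    K = sum(gaussian_kernel(X, X, s) for s in sigmas)
    c = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    def predict(Z):
        Kz = sum(gaussian_kernel(Z, X, s) for s in sigmas)
        return np.sign(Kz @ c)            # classify by the sign
    return predict

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2)); y = np.sign(X[:, 0] * X[:, 1])
predict = multi_kernel_classifier(X, y, sigmas=(0.3, 1.0, 3.0), lam=1e-3)
print((predict(X) == y).mean())           # training accuracy
```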

6.
In this paper, we propose, analyze and test primal and dual versions of the alternating direction algorithm for reconstructing a sparse signal from observation data containing large noise. The algorithm minimizes a convex non-smooth function consisting of the sum of an ℓ1-norm regularization term and an ℓ1-norm data fidelity term. We minimize the corresponding augmented Lagrangian function alternately in either the primal or the dual form. Both of the resulting subproblems admit explicit solutions, either by a one-dimensional shrinkage or by an efficient Euclidean projection. The algorithm is easily implementable and requires only two matrix-vector multiplications per iteration. The global convergence of the proposed algorithm is established under some technical conditions. Extensions to the non-negative signal recovery problem and the weighted regularization minimization problem are also discussed and tested. Numerical results illustrate that the proposed algorithm performs better than the state-of-the-art algorithm YALL1.
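A sketch of a primal alternating direction scheme under the common assumption AA^T = I, in which case (I + A^T A)^{-1} = I - A^T A / 2 and every subproblem is explicit. The splitting into w = x and z = Ax - b, and the parameters β and ν, are my own illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1l1(A, b, nu=1.0, beta=1.0, iters=500):
    """min ||x||_1 + (1/nu)||Ax-b||_1 via ADMM with splits w=x, z=Ax-b.
    Assumes AA^T = I so (I + A^T A)^{-1} = I - 0.5 A^T A is explicit."""
    m, n = A.shape
    x = w = u1 = np.zeros(n); z = u2 = np.zeros(m)
    for _ in range(iters):
        rhs = (w - u1) + A.T @ (b + z - u2)
        x = rhs - 0.5 * (A.T @ (A @ rhs))   # x-update: (I+A^T A)^{-1} rhs
        w = shrink(x + u1, 1.0 / beta)       # w-update: shrinkage
        z = shrink(A @ x - b + u2, 1.0 / (nu * beta))
        u1 = u1 + x - w                      # dual (multiplier) updates
        u2 = u2 + A @ x - b - z
    return w

rng = np.random.default_rng(3)
A = np.linalg.qr(rng.normal(size=(200, 80)))[0].T   # orthonormal rows
x0 = np.zeros(200); x0[rng.choice(200, 8, replace=False)] = rng.normal(size=8)
b = A @ x0; b[::10] += 1.0                           # sparse signal, big noise
print(np.linalg.norm(admm_l1l1(A, b) - x0))
```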

7.
Our work considers the minimization of the sum of a non-smooth convex function and a finite family of composite convex functions, each of which is the composition of a convex function with a bounded linear operator. This type of problem arises in many interesting challenges encountered in image restoration and image reconstruction. We develop a splitting primal-dual proximity algorithm to solve this problem. Furthermore, we propose a preconditioned variant whose iterative parameters are obtained without needing to know any particular operator norm in advance. Theoretical convergence theorems are presented. We then apply the proposed methods to a total variation regularization model in which an L2 data error function is added to an L1 data error function. The main advantage of this model is its ability to combine different loss functions. Numerical results for computed tomography (CT) image reconstruction demonstrate that the proposed algorithm can reconstruct an image from few, sparse projection views while maintaining image quality.
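For a single composite term, a standard splitting primal-dual proximity iteration for min_x g(x) + f(Kx) looks as follows (a generic Chambolle-Pock-type sketch, not the paper's multi-term or preconditioned scheme). Note that the step sizes must satisfy sigma * tau * ||K||^2 < 1, which is exactly the operator-norm knowledge the preconditioned variant dispenses with.

```python
import numpy as np

def primal_dual(K, prox_g, prox_fstar, x0, sigma, tau, iters=300):
    """Primal-dual proximity iteration for min_x g(x) + f(Kx).
    Requires sigma * tau * ||K||^2 < 1 for convergence."""
    x, xbar, y = x0.copy(), x0.copy(), np.zeros(K.shape[0])
    for _ in range(iters):
        y = prox_fstar(y + sigma * (K @ xbar), sigma)   # dual step
        x_new = prox_g(x - tau * (K.T @ y), tau)        # primal step
        xbar = 2 * x_new - x                            # extrapolation
        x = x_new
    return x

# Toy instance: min_x 0.5||x - b||^2 + ||K x||_1 (TV-like on a line).
n = 100
K = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # forward differences
b = np.r_[np.zeros(50), np.ones(50)] + 0.05 * np.random.default_rng(4).normal(size=n)
prox_g = lambda v, t: (v + t * b) / (1 + t)   # prox of 0.5||.-b||^2
prox_fstar = lambda v, s: np.clip(v, -1, 1)   # prox of (||.||_1)^*
L = 2.0                                       # ||K|| <= 2 for differences
x = primal_dual(K, prox_g, prox_fstar, np.zeros(n), 1.0 / L, 0.9 / L, 500)
print(np.abs(K @ x).sum())
```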

8.
Many existing algorithms that take the seminorm in BV(Ω) as the regularizer have achieved great success in image processing. This paper, however, considers image deblurring based on total bounded variation regularization. Building on this model, we introduce an extended split Bregman iteration that obtains the optimal solution quickly, and we provide a rigorous convergence analysis of the iterative algorithm. Compared with the ROF method, numerical simulations illustrate the superior reconstruction performance of the proposed algorithm.
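For orientation, here is a standard split Bregman iteration for the plain ROF-type problem min_u (mu/2)||u - f||^2 + ||grad u||_1, rather than the paper's extended total-bounded-variation deblurring model; periodic boundaries and all parameters are my own choices. The split d = grad u turns the d-update into a shrinkage and the u-update into a linear solve, diagonalized here by the FFT.

```python
import numpy as np

def split_bregman_tv(f, mu=8.0, lam=4.0, iters=50):
    """Split Bregman for min_u (mu/2)||u-f||^2 + ||grad u||_1
    (anisotropic TV, periodic boundaries); exact FFT u-solve."""
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    grad = lambda u, ax: np.roll(u, -1, axis=ax) - u
    ny, nx = f.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    my, mx = np.exp(2j * np.pi * ky) - 1, np.exp(2j * np.pi * kx) - 1
    denom = mu + lam * (np.abs(my) ** 2 + np.abs(mx) ** 2)
    u = f.copy()
    dy = dx = by = bx = np.zeros_like(f)
    for _ in range(iters):
        rhs = (mu * np.fft.fft2(f)
               + lam * np.conj(my) * np.fft.fft2(dy - by)
               + lam * np.conj(mx) * np.fft.fft2(dx - bx))
        u = np.real(np.fft.ifft2(rhs / denom))      # exact u-solve
        gy, gx = grad(u, 0), grad(u, 1)
        dy, dx = shrink(gy + by, 1 / lam), shrink(gx + bx, 1 / lam)
        by, bx = by + gy - dy, bx + gx - dx         # Bregman updates
    return u

f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
f += 0.1 * np.random.default_rng(5).normal(size=f.shape)
print(float(((split_bregman_tv(f) - f) ** 2).mean()))
```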

9.
The cascade algorithm plays an important role in computer graphics and wavelet analysis. In this paper, we first investigate the convergence of cascade algorithms associated with a polynomially decaying mask and a general dilation matrix in the L_p(R^s) (1 ≤ p ≤ ∞) spaces, and then give an error estimate for cascade algorithms associated with truncated masks. It is proved that, under some appropriate conditions, if the cascade algorithm associated with a polynomially decaying mask converges in the L_p-norm, then the cascade algorithms associated with the truncated masks also converge in the L_p-norm. Moreover, the error between the two resulting limit functions is estimated in terms of the masks.
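A minimal 1D cascade iteration with dilation 2 (the hat-function mask and the crude "truncation" below are illustrative; the paper's setting of polynomially decaying masks and general dilation matrices is far more general): each step upsamples by 2 and convolves with the mask.

```python
import numpy as np

def cascade_step(c, a):
    """One cascade/subdivision step: c_{n+1}[j] = sum_k a[j-2k] c_n[k],
    i.e. upsample by 2 and convolve with the mask a."""
    up = np.zeros(2 * len(c)); up[::2] = c
    return np.convolve(up, a)

def cascade(a, steps=8):
    c = np.array([1.0])              # start from a delta sequence
    for _ in range(steps):
        c = cascade_step(c, a)
    return c

hat = np.array([0.5, 1.0, 0.5])      # refinement mask of the hat function
full = cascade(hat)
# Truncating the mask (here: zeroing its small entries) perturbs the
# limit function; the paper bounds this error in L_p via the masks.
truncated = cascade(np.where(np.abs(hat) > 0.6, hat, 0.0))
print(np.abs(full - truncated).max())
```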

10.
The aim of this paper is to introduce and investigate the concept of pseudo-atoms of a real-valued function m defined on an effect algebra L. A few examples of pseudo-atoms and atoms are given in the context of null-additive, null-null-additive and pseudo-null-additive functions, and some fundamental results for pseudo-atoms are established under the assumption of null-null-additivity. The notions of total variation |m|, positive variation m^+ and negative variation m^− of a real-valued function m on L are studied in detail, and it is proved for a modular measure m of bounded total variation defined on a D-lattice L that m is pseudo-atomic (or atomic) if and only if its total variation |m| is pseudo-atomic (or atomic). Finally, a Jordan-type decomposition theorem for an extended real-valued function m of bounded total variation defined on an effect algebra L is proved, and some properties of the decomposed parts of m, such as continuity from below, pseudo-atomicity (or atomicity) and being a measure, are discussed. A characterization of functions m of bounded total variation is established here and used in proving the above-mentioned Jordan-type decomposition theorem.
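For orientation, the variations in question are built from finite orthogonal decompositions of an element; the display below states the standard definitions as I read them from the abstract (a sketch for the reader, not a quotation from the paper):

```latex
% Variations of m over finite orthogonal decompositions
% a = a_1 \oplus \dots \oplus a_n in L (stated for orientation):
|m|(a)   = \sup\Big\{\textstyle\sum_{i=1}^{n} |m(a_i)|\Big\},\qquad
m^{+}(a) = \sup\Big\{\textstyle\sum_{i=1}^{n} \max\big(m(a_i),0\big)\Big\},
% with m^{-} defined analogously; the Jordan-type decomposition then
% reads m = m^{+} - m^{-} and |m| = m^{+} + m^{-}.
```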

11.
Linear least squares problems with box constraints are commonly solved to find model parameters within bounds based on physical considerations. Common algorithms include Bounded Variable Least Squares (BVLS) and the Matlab function lsqlin. Here, the goal is to find solutions to ill-posed inverse problems that lie within box constraints. To do this, we formulate the box constraints as quadratic constraints and solve the corresponding unconstrained regularized least squares problem. Using box constraints as quadratic constraints is an efficient approach because the optimization problem has a closed-form solution. The effectiveness of the proposed algorithm is investigated by solving three benchmark problems and one from a hydrological application. Results are compared with solutions found by lsqlin, with the quadratically constrained formulation solved using the L-curve, maximum a posteriori estimation (MAP), and the χ² regularization method. The χ² regularization method with quadratic constraints is the most effective method for solving least squares problems with box constraints.
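One way to realize the idea in code (my own formulation, not necessarily the paper's): the box l <= x <= u is relaxed to the enclosing ellipsoid sum_i ((x_i - c_i)/r_i)^2 <= n, whose Lagrangian problem has a closed-form solution for each multiplier; the multiplier is then found by bisection.

```python
import numpy as np

def box_as_quadratic_ls(A, b, l, u, iters=60):
    """min ||Ax-b||^2 with the box l<=x<=u relaxed to the ellipsoid
    sum(((x-c)/r)^2) <= n. Each multiplier lam gives a closed-form
    solution; lam is tuned by bisection until the constraint holds."""
    c, r = (l + u) / 2, (u - l) / 2
    D = np.diag(1.0 / r ** 2)
    n = len(c)
    def solve(lam):
        return np.linalg.solve(A.T @ A + lam * D, A.T @ b + lam * D @ c)
    lo, hi = 1e-10, 1e10
    for _ in range(iters):
        lam = np.sqrt(lo * hi)
        x = solve(lam)
        # constraint value decreases as lam grows; keep 'hi' feasible
        lo, hi = (lo, lam) if ((x - c) ** 2 / r ** 2).sum() <= n else (lam, hi)
    return solve(hi)

rng = np.random.default_rng(6)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
x = box_as_quadratic_ls(A, b, l=-np.ones(10), u=np.ones(10))
print(x.min(), x.max())   # near the unit box (ellipsoid is a relaxation)
```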

12.
This paper introduces a proximity operator framework for studying the L1/TV image denoising model, which minimizes the sum of a data fidelity term measured in the ℓ1-norm and the total-variation regularization term. Both terms in the model are non-differentiable, which causes algorithmic difficulties for its numerical treatment. To overcome these difficulties, we formulate the total variation as a composition of a convex function (the ℓ1-norm or the ℓ2-norm) and the first-order difference operator, and then express the solution of the model in terms of the proximity operator of the composition. By developing a "chain rule" for the proximity operator of the composition, we identify the solution as a fixed point of a nonlinear mapping expressed in terms of the proximity operator of the ℓ1-norm or the ℓ2-norm, each of which is explicitly given. This formulation naturally leads to fixed-point algorithms for the numerical treatment of the model. We propose an alternative model by replacing the non-differentiable convex function in the formulation of the total variation with its differentiable Moreau envelope, and develop corresponding fixed-point algorithms for solving the new model. When partial information about the underlying image is available, we modify the model by adding an indicator function to the minimization functional and derive the corresponding fixed-point algorithms. Numerical experiments are conducted to test the approximation accuracy and computational efficiency of the proposed algorithms, and we provide a comparison of our approach to two state-of-the-art algorithms from the literature. Numerical results confirm that our algorithms perform favorably, in terms of PSNR values and CPU time, in comparison to the two algorithms.
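The smoothing step mentioned in the abstract can be made concrete: the Moreau envelope of the absolute value with parameter μ is exactly the Huber function, which is differentiable everywhere. A quick self-contained numerical check (the parameter μ = 0.5 is an arbitrary choice):

```python
import numpy as np

def moreau_envelope_abs(t, mu):
    """env_mu|.|(t) = min_s |s| + (s - t)^2 / (2 mu), computed via the
    prox of |.| (soft-thresholding); this equals the Huber function."""
    s = np.sign(t) * np.maximum(np.abs(t) - mu, 0.0)   # prox point
    return np.abs(s) + (s - t) ** 2 / (2 * mu)

def huber(t, mu):
    return np.where(np.abs(t) <= mu, t ** 2 / (2 * mu), np.abs(t) - mu / 2)

t = np.linspace(-3, 3, 1001)
print(np.max(np.abs(moreau_envelope_abs(t, 0.5) - huber(t, 0.5))))  # ~0
```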

13.
This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and Cartis et al. (2010) [3] show that at most O(ε^{-3}) iterations may have to be performed to find an iterate which is within ε of satisfying second-order optimality conditions. We first show that this bound can be derived for a version of the algorithm which only uses one-dimensional global optimization of the cubic model, and that it is sharp. We next consider the standard trust-region method and show that a bound of the same type may also be derived for this method, and that it is also sharp in some cases. We conclude by showing that a comparison of the bounds on the worst-case behaviour of the cubic regularization and trust-region algorithms favours the first of these methods.
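Concretely, the approximate second-order conditions used in this line of work read as follows (stated for orientation; the precise form in [3,15] may differ in norms and constants):

```latex
% x is within epsilon of second-order optimality for min f when
\|\nabla f(x)\| \le \epsilon
\qquad\text{and}\qquad
\lambda_{\min}\big(\nabla^{2} f(x)\big) \ge -\epsilon ,
% and the cited bound says that O(epsilon^{-3}) iterations suffice,
% in the worst case, to produce such an iterate.
```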

14.
In the classical bin packing problem one is given a list of items and asked to pack them into the fewest possible unit-sized bins. Given two lists, L1 and L2, where L2 is derived from L1 by deleting some elements of L1 and/or reducing the size of some elements of L1, one might hope that an approximation algorithm would use no more bins to pack L2 than it uses to pack L1. Johnson and Graham have given examples showing that First-Fit and First-Fit Decreasing can actually use more bins to pack L2 than L1. Graham has also studied this type of behavior among multiprocessor scheduling algorithms. In the present paper we extend this study of anomalous behavior to a broad class of approximation algorithms for bin packing. To do this we introduce a technique which allows one to characterize the monotonic/anomalous behavior of any algorithm in a large, natural class. We then derive upper and lower bounds on the anomalous behavior of those algorithms which are anomalous, and provide conditions under which a normally nonmonotonic algorithm becomes monotonic.
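First-Fit itself is a few lines, and nonmonotonicity can be hunted by brute force: the sketch below packs random lists and checks whether deleting a single item can increase the bin count. Anomalous pairs exist (Johnson, Graham) but are rare among random lists, so the search is bounded and may report nothing; all ranges are arbitrary.

```python
import random

def first_fit(items, cap=1.0):
    """Pack each item into the first open bin it fits; count bins."""
    bins = []
    for x in items:
        for i, load in enumerate(bins):
            if load + x <= cap + 1e-9:
                bins[i] = load + x
                break
        else:
            bins.append(x)
    return len(bins)

# Look for L1 and a one-item-shorter L2 with first_fit(L2) > first_fit(L1).
random.seed(0)
found = None
for _ in range(50_000):
    L1 = [random.randint(1, 100) / 100 for _ in range(10)]
    n1 = first_fit(L1)
    for j in range(len(L1)):
        if first_fit(L1[:j] + L1[j + 1:]) > n1:   # fewer items, more bins
            found = (L1, j)
            break
    if found:
        break
print(found)
```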

15.
16.
In this paper, we propose a new model for MR image reconstruction based on second-order total variation (TV²) regularization and wavelets, which can be viewed as requiring the image to be sparse both in spatial finite differences and in the wavelet transform. Furthermore, by applying the variable splitting technique twice, the augmented Lagrangian method and the Barzilai-Borwein step size selection scheme, an ADMM algorithm is designed to solve the proposed model. It reduces the reconstruction problem to several unconstrained minimization subproblems, which can be solved by shrinkage operators and alternating minimization algorithms. The proposed algorithm need not solve a fourth-order PDE, only several second-order PDEs, which improves computational efficiency. Numerical results demonstrate the effectiveness of the presented algorithm and illustrate that the proposed model outperforms some existing reconstruction models in the quality of the reconstructed images.
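The two sparsity measures in the model are easy to state in code. Below, TV² is taken as the ℓ1 norm of second-order finite differences, and a one-level Haar transform stands in for the wavelet transform (both are my own minimal stand-ins). The ramp example shows the characteristic property that piecewise-linear images cost nothing under TV², unlike under first-order TV.

```python
import numpy as np

def second_order_tv(u):
    """Anisotropic TV^2: l1 norm of second-order finite differences."""
    uxx = np.diff(u, n=2, axis=1)
    uyy = np.diff(u, n=2, axis=0)
    return np.abs(uxx).sum() + np.abs(uyy).sum()

def haar_level(u):
    """One level of a 1D Haar transform along each axis (a stand-in
    for the wavelet transform W in the model)."""
    for ax in (0, 1):
        even = u.take(range(0, u.shape[ax], 2), ax)
        odd = u.take(range(1, u.shape[ax], 2), ax)
        u = np.concatenate([(even + odd) / np.sqrt(2),
                            (even - odd) / np.sqrt(2)], axis=ax)
    return u

u = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # a ramp
# The ramp has (numerically) zero TV^2, while first-order TV would
# penalize it everywhere.
print(second_order_tv(u), np.abs(haar_level(u)).sum())
```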

17.
The variational image decomposition model decomposes an image into a structural and an oscillatory component by regularization techniques and functional minimization. It is an important task in various image processing applications, such as image restoration, image segmentation, and object recognition. In this paper, we propose a non-convex and non-smooth variational decomposition model for image restoration that uses non-convex, non-smooth total variation (TV) to measure the structural component and the negative Sobolev space H^{-1} to model the oscillatory component. The new model combines the advantages of non-convex regularization and weaker-norm texture modeling, and it removes noise well while preserving the valuable edges and contours of the image. The iteratively reweighted ℓ1 (IRL1) algorithm is employed to solve the proposed non-convex minimization problem, with each subproblem solved by the alternating direction method of multipliers (ADMM). Numerical results validate the effectiveness of the proposed model on both synthetic and real images in terms of peak signal-to-noise ratio (PSNR) and mean structural similarity index (MSSIM).
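A minimal IRL1 sketch on a 1D denoising surrogate (the log potential, λ, ε and the test signal are illustrative choices, and the inner problem here is simple enough to solve by soft-thresholding rather than ADMM): each outer iteration linearizes the nonconvex penalty, leaving a weighted ℓ1 problem.

```python
import numpy as np

def irl1_denoise(y, lam=0.5, eps=0.1, outer=20):
    """IRL1 for min_x 0.5||x-y||^2 + lam * sum log(1 + |x_i|/eps).
    Each outer step linearizes the penalty at the current iterate,
    leaving a weighted l1 problem with a closed-form solution."""
    x = y.copy()
    for _ in range(outer):
        w = 1.0 / (eps + np.abs(x))                # reweighting step
        x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)
    return x

rng = np.random.default_rng(7)
x0 = np.zeros(100); x0[::20] = 3.0
y = x0 + 0.2 * rng.normal(size=100)
print(np.abs(irl1_denoise(y)).round(2).nonzero()[0])   # support recovered
```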

18.
We derive a sharp nonasymptotic bound on the parameter estimation error of L_{1/2} regularization. The bound shows that the solutions of L_{1/2} regularization can achieve a loss within a logarithmic factor of the ideal mean squared error, which underlies the feasibility and effectiveness of L_{1/2} regularization. Interestingly, when applied to compressive sensing, the L_{1/2} regularization scheme has exhibited a very promising capability of complete recovery from much less sampling information. Compared with the L_p (0 < p < 1) penalties, the L_{1/2} penalty always yields the sparsest solution among the L_p penalties with 1/2 ≤ p < 1, while for 0 < p < 1/2 the L_p penalty exhibits properties similar to the L_{1/2} penalty. This suggests that L_{1/2} regularization can be taken as the representative of all the L_p (0 < p < 1) regularization schemes.
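To see what the L_{1/2} penalty does pointwise, here is its scalar proximal problem solved by brute force (a closed-form half-thresholding rule exists in the literature, but the grid search below avoids transcribing it from memory; grid and span are arbitrary):

```python
import numpy as np

def prox_half_numeric(y, lam, grid=200_001, span=10.0):
    """argmin_x 0.5 (x - y)^2 + lam * |x|**0.5, by dense grid search."""
    x = np.linspace(-span, span, grid)
    return x[np.argmin(0.5 * (x - y) ** 2 + lam * np.abs(x) ** 0.5)]

# The operator acts as a hard/soft hybrid: exactly zero below a
# threshold, then it jumps to a value close to y; this gives sparser
# and less biased solutions than soft thresholding.
for y in (0.5, 1.0, 1.2, 2.0, 4.0):
    print(y, round(prox_half_numeric(y, lam=1.0), 3))
```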

19.
We derive worst-case bounds, with respect to the L_p norm, on the error achieved by algorithms aimed at approximating a concave function of a single variable through the evaluation of the function and its subgradient at a fixed number of points to be determined. We prove that, for p larger than 1, adaptive algorithms outperform passive ones. Next, for the uniform norm, we propose an improvement of the Sandwich algorithm, based on a dynamic programming formulation of the problem.
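A compact rendition of the Sandwich idea for a concave f (this is the simple maximum-gap refinement variant, not the paper's dynamic-programming improvement): chords form a lower envelope, tangents an upper envelope, and each step refines the subinterval where the two envelopes disagree most.

```python
import numpy as np

def sandwich(f, df, a, b, n_points=10):
    """Adaptive sandwich approximation of a concave f on [a, b]."""
    xs = [a, b]
    def gap(x0, x1):
        # tangents from the endpoints intersect at xc; the gap between
        # the tangent and the chord there bounds the local error
        f0, f1, g0, g1 = f(x0), f(x1), df(x0), df(x1)
        if abs(g0 - g1) < 1e-12:
            return 0.0, (x0 + x1) / 2
        xc = (f1 - f0 + g0 * x0 - g1 * x1) / (g0 - g1)
        xc = min(max(xc, x0), x1)
        upper = f0 + g0 * (xc - x0)
        lower = f0 + (f1 - f0) * (xc - x0) / (x1 - x0)
        return upper - lower, xc
    while len(xs) < n_points:
        xs.sort()
        gaps = [gap(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]
        g, xc = max(gaps)            # refine the worst subinterval
        xs.append(xc)
    return sorted(xs)

pts = sandwich(np.log, lambda x: 1 / x, 0.5, 8.0)
print(np.round(pts, 3))              # points cluster where log bends most
```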

20.
We study two widely used algorithms for the Potts model on rectangular subsets of the hypercubic lattice Z^d (heat bath dynamics and the Swendsen–Wang algorithm) and prove that, under certain circumstances, the mixing in these algorithms is torpid, or slow. In particular, we show that for heat bath dynamics throughout the region of phase coexistence, and for the Swendsen–Wang algorithm at the transition point, the mixing time in a box of side length L with periodic boundary conditions has upper and lower bounds which are exponential in L^{d-1}. This work provides the first upper bound of this form for the Swendsen–Wang algorithm, and gives lower bounds for both algorithms which significantly improve the previous lower bounds, which were exponential in L/(log L)^2.
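For reference, one Swendsen–Wang update for the q-state Potts model (free boundaries, BFS cluster labeling; a minimal implementation of my own, with β and q chosen arbitrarily near the interesting regime): bonds between equal neighbors open with probability 1 − exp(−β), and each resulting cluster is recolored uniformly at random.

```python
import numpy as np
from collections import deque

def swendsen_wang_step(spins, beta, q, rng):
    """One SW update: open bonds between equal neighbors with prob
    1 - exp(-beta), then recolor each bond-cluster uniformly at random."""
    n = spins.shape[0]
    p = 1.0 - np.exp(-beta)
    # bonds to the right/down neighbor, only where spins agree
    right = (spins == np.roll(spins, -1, 1)) & (rng.random((n, n)) < p)
    down = (spins == np.roll(spins, -1, 0)) & (rng.random((n, n)) < p)
    right[:, -1] = down[-1, :] = False        # free boundaries
    seen = np.zeros((n, n), bool)
    for i in range(n):
        for j in range(n):
            if seen[i, j]:
                continue
            color, queue = rng.integers(q), deque([(i, j)])
            seen[i, j] = True
            while queue:                       # BFS over one cluster
                a, b = queue.popleft()
                spins[a, b] = color
                for (x, y), bond in (((a, b - 1), right[a, b - 1] if b else False),
                                     ((a, b + 1), right[a, b] if b + 1 < n else False),
                                     ((a - 1, b), down[a - 1, b] if a else False),
                                     ((a + 1, b), down[a, b] if a + 1 < n else False)):
                    if bond and not seen[x, y]:
                        seen[x, y] = True
                        queue.append((x, y))
    return spins

rng = np.random.default_rng(8)
s = rng.integers(3, size=(32, 32))
for _ in range(100):
    s = swendsen_wang_step(s, beta=1.0, q=3, rng=rng)
print(np.bincount(s.ravel(), minlength=3))     # sites per color
```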
