Similar Literature
1.
This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separable regularization. Despite being frequently deployed in many applications, the RLS problem is often hard to solve using standard iterative methods. In a recent work [M. Elad, Why simple shrinkage is still relevant for redundant representations? IEEE Trans. Inform. Theory 52 (12) (2006) 5559–5569], a new iterative method called parallel coordinate descent (PCD) was devised. We provide herein a convergence analysis of the PCD algorithm, and also introduce a form of the regularization function, which permits analytical solution to the coordinate optimization. Several other recent works [I. Daubechies, M. Defrise, C. De-Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math. LVII (2004) 1413–1457; M.A. Figueiredo, R.D. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process. 12 (8) (2003) 906–916; M.A. Figueiredo, R.D. Nowak, A bound optimization approach to wavelet-based image deconvolution, in: IEEE International Conference on Image Processing, 2005], which considered the deblurring problem in a Bayesian methodology, also obtained element-wise optimization algorithms. We show that the last three methods are essentially equivalent, and the unified method is termed separable surrogate functionals (SSF). We also provide a convergence analysis for SSF. To further accelerate PCD and SSF, we merge them into a recently developed sequential subspace optimization technique (SESOP), with almost no additional complexity. A thorough numerical comparison of the denoising application is presented, using the basis pursuit denoising (BPDN) objective function, which leads all of the above algorithms to an iterated shrinkage format. Both with synthetic data and with real images, the advantage of the combined PCD-SESOP method is demonstrated.
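A minimal sketch (not taken from the paper) of the iterated-shrinkage idea shared by these methods, applied to the BPDN objective \(\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1\): an SSF/ISTA-style update alternates a gradient step on the quadratic term with elementwise soft thresholding. The step size \(1/L\) and the random test data below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise shrinkage: the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterated_shrinkage(A, b, lam, n_iter=200):
    """SSF/ISTA-style iterated shrinkage for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative usage on random synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = iterated_shrinkage(A, b, lam=0.1)
```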

2.
A fast algorithm for the total variation model of image denoising
The total variation model of Rudin, Osher, and Fatemi for image denoising is considered to be one of the best denoising models. In the past, its solutions were based on nonlinear partial differential equations and the resulting algorithms were very complicated. In this paper, we propose a fast algorithm for the solution of the total variation model. Our algorithm is very simple and does not involve partial differential equations. We also provide a rigorous proof for the convergence of our algorithm.

3.
The 1990s witnessed an explosion of wavelet-based methods in the field of image processing. This article will focus primarily on wavelet-based image compression. We shall describe the connection between wavelets and vision, and how wavelet techniques provide image compression algorithms that are clearly superior to the present JPEG standard. In particular, the wavelet-based algorithms known as SPIHT and ASWDR, and the new standard JPEG2000, will be described and compared. Our comparison will show that, in many respects, ASWDR is the best algorithm. Applications to denoising will also be briefly discussed, with pointers to further references on wavelet-based image processing.
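As a hedged illustration of the transform-coding idea underlying such coders (this is not SPIHT or ASWDR themselves), the sketch below keeps only the largest-magnitude wavelet coefficients of an image using PyWavelets; the wavelet, decomposition level, and retention ratio are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet="bior4.4", level=4, keep=0.05):
    """Crude transform coder: keep only the largest `keep` fraction of coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)        # flatten the coefficient tree
    thresh = np.quantile(np.abs(arr), 1.0 - keep)     # magnitude cutoff
    arr_thr = pywt.threshold(arr, thresh, mode="hard")
    coeffs_thr = pywt.array_to_coeffs(arr_thr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_thr, wavelet)

# Illustrative usage on a random "image"
img = np.random.default_rng(0).random((256, 256))
recon = wavelet_compress(img)
```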

4.
A number of high-order variational models for image denoising have been proposed within the last few years. The main motivation behind these models is to fix problems such as the staircase effect and the loss of image contrast that afflict the classical Rudin–Osher–Fatemi model [Leonid I. Rudin, Stanley Osher and Emad Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992), pp. 259–268] and other models based on the gradient of the image. In this work, we propose a new variational model for image denoising based on the Gaussian curvature of the image surface of a given image. We analytically study the proposed model to show why it preserves image contrast, recovers sharp edges, does not transform piecewise smooth functions into piecewise constant functions, and is also able to preserve corners. In addition, we provide two fast solvers for its numerical realization. Numerical experiments are shown to illustrate the good performance of the algorithms and test results. © 2015 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 32: 1066–1089, 2016
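For reference, the Gaussian curvature of the image surface \((x, y, u(x, y))\), the quantity such curvature-based regularizers penalize, has the standard differential-geometry form below (a textbook formula, not necessarily the paper's exact notation):

\[
K(u) = \frac{u_{xx}\,u_{yy} - u_{xy}^{2}}{\bigl(1 + u_{x}^{2} + u_{y}^{2}\bigr)^{2}} .
\]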

5.
Recently, Solak et al. [Solak E, Çokal C, Yildiz OT, Biyikoğlu T. Cryptanalysis of Fridrich's chaotic image encryption. Int J Bifur Chaos 2010;20:1405-1413] cryptanalyzed the chaotic image encryption algorithm of Fridrich [Fridrich J. Symmetric ciphers based on two-dimensional chaotic maps. Int J Bifur Chaos 1998;8(6):1259-1284], which was considered a benchmark for measuring the security of many image encryption algorithms. This attack can also be applied to other encryption algorithms that have a structure similar to Fridrich's algorithm, such as that of [Chen G, Mao Y, Chui C. A symmetric image encryption scheme based on 3D chaotic cat maps. Chaos Soliton Fract 2004;21:749-761]. In this paper, we suggest a novel image encryption algorithm based on a three-dimensional (3D) chaotic map that can defeat the aforementioned attack, among other existing attacks. The design of the proposed algorithm is simple and efficient, and is based on three phases which provide the properties necessary for a secure image encryption algorithm, including confusion and diffusion. In phase I, the image pixels are shuffled according to a search rule based on the 3D chaotic map. In phases II and III, 3D chaotic maps are used to scramble the shuffled pixels through mixing and masking rules, respectively. Simulation results show that the suggested algorithm satisfies the required performance tests, such as a high level of security, a large key space and an acceptable encryption speed. These characteristics make it a suitable candidate for use in cryptographic applications.
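A hedged toy sketch of the generic confusion-diffusion structure that such schemes share (this is not the proposed 3D-map algorithm and is not secure): a chaotic sequence drives a pixel permutation (confusion) followed by keystream masking (diffusion). The logistic map, the key values, and the XOR masking are illustrative assumptions.

```python
import numpy as np

def logistic_sequence(x0, r, n, burn=100):
    """Generate n chaotic values from the logistic map x <- r*x*(1-x)."""
    x = x0
    out = np.empty(n)
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            out[i - burn] = x
    return out

def toy_encrypt(img, x0=0.37, r=3.99):
    """Toy confusion-diffusion cipher: chaotic permutation, then chaotic XOR mask."""
    flat = img.flatten()
    n = flat.size
    chaos = logistic_sequence(x0, r, 2 * n)
    perm = np.argsort(chaos[:n])                      # confusion: chaotic permutation
    shuffled = flat[perm]
    keystream = (chaos[n:] * 256).astype(np.uint8)    # diffusion: chaotic keystream
    cipher = np.bitwise_xor(shuffled, keystream)
    return cipher.reshape(img.shape), perm

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
cipher, perm = toy_encrypt(img)
```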

6.
Building on the earlier work of Tai et al., this paper studies a modified TV-Stokes image denoising model and proposes several new fast algorithms for solving this two-step model. Using the dual formulation and a multigrid method, we obtain a fast algorithm for the first step, and we give another new algorithm for computing the smoothed tangential vector field that preserves its incompressibility. For the second step, we propose an entirely new class of efficient algorithms: a function g with a smooth normal vector field is first obtained by solving a Poisson equation, and the restored image is then obtained using the method of Jia and Zhao. The new algorithm is very fast, with a CPU time for image restoration of less than 0.1 s. Numerical results show that the new fast algorithms are effective and stable, and the quality of the restored images also exceeds that of common denoising methods.

7.
Based on the very recent work by Censor and Segal (2009) [1], and inspired by Xu (2006) [9], Zhao and Yang (2005) [10], and Bauschke and Combettes (2001) [2], we introduce and analyze an algorithm for solving the split common fixed-point problem for the wide class of quasi-nonexpansive operators in Hilbert spaces. Our results improve upon and extend previous work on feasibility problems and related algorithms.
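For context, a frequently cited iteration for the split common fixed-point problem, find \(x \in \operatorname{Fix}(U)\) with \(Ax \in \operatorname{Fix}(T)\), takes the form below; the stepsize bound shown is the standard assumption used in this literature rather than a statement quoted from the paper:

\[
x_{k+1} = U\bigl(x_k + \gamma\,A^{*}(T - I)A x_k\bigr), \qquad 0 < \gamma < \frac{2}{\|A\|^{2}} .
\]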

8.
Based on the adaptive filtering property of singular spectrum analysis, we propose an algorithm for reducing noise in chaotic signals. The algorithm first computes the empirical orthogonal functions (EOFs) and principal components (PCs) of the signal at each order, then reconstructs the signal from the EOFs and PCs, and selects the optimal reconstruction order according to the singular spectrum of the reconstructed signal to obtain the denoised signal. Because noise reduces the accuracy of computing the largest Liapunov exponent of a dynamical system, the proposed denoising algorithm is applied to that computation. Computations of the largest Liapunov exponent for two typical chaotic systems, the Henon map and the Logistic map, show that the algorithm can effectively improve the accuracy of the computed largest Liapunov exponent.
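A minimal numpy sketch (not the paper's exact procedure) of SSA-style reconstruction: embed the series into a trajectory matrix, take its SVD, keep the leading components, and average the anti-diagonals to recover the denoised series. The window length and number of retained components are illustrative assumptions.

```python
import numpy as np

def ssa_denoise(x, window=20, n_components=2):
    """Basic singular spectrum analysis reconstruction of a 1-D series."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_r = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Diagonal (Hankel) averaging maps the low-rank matrix back to a series
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            rec[i + j] += X_r[i, j]
            counts[i + j] += 1
    return rec / counts

# Illustrative usage: a noisy logistic-map orbit
rng = np.random.default_rng(0)
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
clean_est = ssa_denoise(x + 0.02 * rng.standard_normal(1000))
```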

9.
A randomized algorithm for finding a hyperplane separating two finite point sets in the Euclidean space R^d and a randomized algorithm for solving linearly constrained general convex quadratic problems are proposed. The expected running time of the separating algorithm is O(d·d!(m + n)), where m and n are the cardinalities of the sets to be separated. The expected running time of the algorithm for solving quadratic problems is O(d·d!·s), where s is the number of inequality constraints. These algorithms are based on the ideas of Seidel's linear programming algorithm [6]. They are closely related to the algorithms of [8], [2], and [9] and belong to an abstract class of algorithms investigated in [1]. The algorithm for solving quadratic problems has some features of the one proposed in [7].

10.
Efficient data visualization techniques are critical for many scientific applications. Centroidal Voronoi tessellation (CVT) based algorithms offer a convenient vehicle for performing image analysis, segmentation and compression, while allowing the retained image quality to be optimized with respect to a given metric. In experimental sciences with data counts following Poisson distributions, several CVT-based data tessellation algorithms have recently been developed. Although they surpass their predecessors in robustness and quality of reconstructed data, time consumption remains an issue due to heavy use of the slowly converging Lloyd iteration. This paper discusses one possible approach to accelerating data visualization algorithms. It relies on a multidimensional generalization of the optimization-based multilevel algorithm for the numerical computation of CVTs introduced in [1], where a rigorous proof of its uniform convergence was presented in the 1-dimensional setting. The multidimensional implementation employs barycentric-coordinate-based interpolation and maximal-independent-set coarsening procedures. It is shown that, when coupled with a bin accretion algorithm accounting for the discrete nature of the data, the algorithm outperforms Lloyd-based schemes and preserves uniform convergence with respect to the problem size. Although the numerical demonstrations provided are limited to spectroscopy data analysis, the method has a context-independent setup and can potentially deliver significant speedup to other scientific and engineering applications.
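For orientation, here is a bare-bones Lloyd iteration for a discrete CVT of a point cloud, i.e. the slowly converging baseline that the multilevel method is designed to accelerate; the data, generator count, and uniform (unweighted) density are illustrative assumptions.

```python
import numpy as np

def lloyd_cvt(points, n_generators=16, n_iter=50, seed=0):
    """Plain Lloyd iteration for a centroidal Voronoi tessellation of a point cloud."""
    rng = np.random.default_rng(seed)
    gens = points[rng.choice(len(points), n_generators, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        # Assign each data point to its nearest generator (discrete Voronoi regions)
        d = np.linalg.norm(points[:, None, :] - gens[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Move each generator to the centroid of its region
        for j in range(n_generators):
            members = points[labels == j]
            if len(members) > 0:
                gens[j] = members.mean(axis=0)
    return gens, labels

pts = np.random.default_rng(1).random((2000, 2))
generators, labels = lloyd_cvt(pts)
```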

11.
申远  李倩倩  吴坚 《计算数学》2018,40(1):85-95
This paper considers a saddle-point problem arising in signal and image processing. Based on the idea of the proximal point algorithm, we modify the primal-dual algorithm by constructing a symmetric positive definite and variable proximal matrix, and obtain a new primal-dual algorithm. The new algorithm can be viewed as a proximal point algorithm, so its convergence is easy to analyze and does not require strong assumptions. Preliminary experimental results show that, when applied to image deblurring problems, the new algorithm produces higher-quality results than several mainstream efficient algorithms while remaining competitive in computation time.
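For context, the classical Chambolle–Pock primal-dual iteration for the saddle-point problem \(\min_x \max_y \langle Kx, y\rangle + G(x) - F^{*}(y)\), which variable-metric modifications of this kind start from, reads as follows; this is the generic scheme, not the new algorithm of the paper:

\[
\begin{aligned}
y_{k+1} &= \operatorname{prox}_{\sigma F^{*}}\bigl(y_k + \sigma K \bar{x}_k\bigr),\\
x_{k+1} &= \operatorname{prox}_{\tau G}\bigl(x_k - \tau K^{*} y_{k+1}\bigr),\\
\bar{x}_{k+1} &= x_{k+1} + \theta\,(x_{k+1} - x_k),
\end{aligned}
\qquad \theta \in [0,1],\quad \sigma\tau\|K\|^{2} < 1 .
\]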

12.
Total variation regularization, introduced by Rudin, Osher, and Fatemi (ROF), is widely used in image denoising problems for its capability to preserve repetitive textures and details of images. Many efforts have been devoted to obtaining efficient gradient descent schemes for the dual minimization of the ROF model, such as Chambolle's algorithm or the gradient projection (GP) algorithm. In this paper, we propose a general gradient descent algorithm with a shrinking factor. Both Chambolle's algorithm and the GP algorithm can be regarded as special cases of the proposed method with particular parameters. A global convergence analysis of the new algorithms with various step lengths and shrinking factors is presented. Numerical results demonstrate their competitiveness in computational efficiency and reconstruction quality with some existing classic algorithms on a set of grayscale images.
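A compact numpy sketch of Chambolle's dual projection iteration for the ROF model \(\min_u \|\nabla u\|_1 + \tfrac{1}{2\lambda}\|u - f\|_2^2\), i.e. the baseline that the shrinking-factor scheme generalizes; the step size \(\tau = 0.125\), the iteration count, and the random input are the usual illustrative choices, not parameters from the paper.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary handling."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of `grad`."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_rof(f, lam=0.1, tau=0.125, n_iter=100):
    """Chambolle's fixed-point iteration on the dual of the ROF model."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)      # denoised image from the dual variable

noisy = np.random.default_rng(0).random((128, 128))
denoised = chambolle_rof(noisy)
```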

13.
《Applied Mathematical Modelling》2014,38(5-6):1911-1918
Recently, Kadadevaramath et al. (2012) [1] presented a mathematical model for optimizing a three-echelon supply chain network. Their model is an integer linear programming (ILP) model. To solve it, they developed five algorithms; four of them are based on a particle swarm optimization (PSO) method and the other is a genetic algorithm (GA). In this paper, we develop a more general mathematical model that contains the model developed by Kadadevaramath et al. (2012) [1] as a special case. Furthermore, we show that all instances presented in Kadadevaramath et al. (2012) [1] can easily be solved to optimality by any integer linear programming solver.

14.
D. L. Shell published in 1959 [4] a fast algorithm for internal sorting. R. M. Frank and R. B. Lazarus pointed out in 1960 [3] a disadvantage in the original design of the algorithm, and changes were proposed based on heuristic arguments. J. Boothroyd addressed Frank and Lazarus's objections in an elegant algorithm published in 1963 [1]. If we change Boothroyd's algorithm slightly, we get an algorithm which, in the general case, by the arguments of Frank and Lazarus, is even worse than the original one. On the other hand, this algorithm has the advantage that it is easy to analyse theoretically. To the author's knowledge, such an analysis has not yet been given for any of the other Shellsort algorithms.
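For reference, a plain Shellsort with Shell's original halving gap sequence, which is the variant that the Frank–Lazarus and Boothroyd modifications depart from; this is a generic textbook sketch, not Boothroyd's exact algorithm.

```python
def shellsort(a, gaps=None):
    """Shellsort using Shell's original gap sequence n//2, n//4, ..., 1."""
    a = list(a)
    n = len(a)
    if gaps is None:
        gaps = []
        g = n // 2
        while g > 0:
            gaps.append(g)
            g //= 2
    for gap in gaps:
        # Insertion sort applied to each gap-separated subsequence
        for i in range(gap, n):
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
    return a

print(shellsort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))
```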

15.
The significance of combinatorial optimization for many important applications is well understood. The simulated annealing algorithm (which we will denote from here on as SA) has generated great interest and attention in the scientific community. Its authors derived it from analogies to the physical domain [15,10], and a myriad of publications followed (see the references to the book on simulated annealing in [11]). The prevailing opinion expressed in these publications was that the SA algorithm represents a new, hitherto unknown class of algorithms and provides a breakthrough in the solution of NP-hard optimization problems. As might be expected, roots of SA do exist, and one of the purposes of this paper is to trace these roots. We prove that SA, like many other randomized algorithms, belongs to the class of S-type GH-stochastic automata. We provide other representatives of this class together with algorithms from some other classes, and discuss the issue of convergence. Large computational experiments were performed on a network of Apollo computers.
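A minimal sketch of the basic SA loop discussed here, applied to a toy combinatorial objective; the geometric cooling schedule, the objective, and the neighborhood move are illustrative assumptions, not the paper's experimental setup.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.995, n_iter=5000, seed=0):
    """Generic Metropolis-style simulated annealing loop with geometric cooling."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(n_iter):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest

# Toy usage: recover a hidden bit string by minimizing the Hamming distance to it
target = [1, 0, 1, 1, 0, 0, 1, 0]

def cost(x):
    return sum(a != b for a, b in zip(x, target))

def flip_one_bit(x, rng):
    y = list(x)
    y[rng.randrange(len(y))] ^= 1
    return y

best, fbest = simulated_annealing(cost, flip_one_bit, [0] * len(target))
```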

16.
This primer provides a self-contained exposition of the case where spatial birth-and-death processes are used for perfect simulation of locally stable point processes. In particular, a simple dominating coupling from the past (CFTP) algorithm and the CFTP algorithms introduced in [13], [14], and [5] are studied. Some empirical results for the algorithms are discussed.

17.
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce and Kelly (2006) [23].

18.
Traditional integer-order partial differential equation (PDE) based image denoising approaches tend to blur edges and complex texture details, so their denoising performance on texture images is often poor. To address this problem, we implement a fractional partial differential equation (FPDE) based denoising model for texture images by applying a novel mathematical method, fractional calculus, to image processing from the viewpoint of system evolution. Previous studies show that fractional calculus has some unique properties: it can nonlinearly enhance complex texture detail in digital image processing, which is clearly different from integer-order differential calculus. The goal of the modeling is to overcome the problems of existing denoising approaches by exploiting the aforementioned properties of fractional differential calculus. Using the classic definition and properties of fractional differential calculus, we extend the integer-order steepest descent approach to the fractional setting to obtain a fractional steepest descent approach. Then, based on these fractional formulas, an FPDE-based multiscale denoising model for texture images is proposed, and the optimal parameter values for the FPDE-based denoising model are analyzed. The experimental results show that the ability of the proposed fractional denoising model to preserve high-frequency edges and complex texture information is clearly superior to that of traditional integer-order algorithms on texture-rich images.
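For readers new to the tool, the Grünwald–Letnikov definition of the fractional derivative of order \(\alpha\) with lower terminal \(a\), which FPDE models of this kind typically discretize, is the standard one below (a textbook definition, not this paper's specific numerical scheme):

\[
D^{\alpha}_{a} f(x) = \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}} \sum_{k=0}^{\lfloor (x-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k} f(x - kh),
\qquad \binom{\alpha}{k} = \frac{\Gamma(\alpha + 1)}{\Gamma(k + 1)\,\Gamma(\alpha - k + 1)} .
\]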

19.
As a continuation of [1], this paper considers the implementation of ODE approaches. A modified Hamming's algorithm for integrating the (ECP)-equation is suggested to obtain a local solution. In addition to the main algorithm, three supporting algorithms are also described: two are for evaluating the right-hand side of the (ECP)-equation, which may be especially suitable for certain kinds of (ECP)-equation when applied to large-scale problems; the third one, with a convergence theorem, is for computing an initial feasible point. Our numerical results, obtained by executing these algorithms on the example of the (ECP)-equation given in [1] for five test problems, indicate that their performance is markedly superior to Tanabe's ODE version, which has recently been claimed to be much better than some well-known SQP techniques.

20.
Based on the nonsubsampled Contourlet transform, and combining total variation diffusion with the normal inverse Gaussian model, a new image denoising algorithm is proposed. First, the image is decomposed by the nonsubsampled Contourlet transform to obtain high-frequency and low-frequency subband coefficients. Then, total variation diffusion is applied to the low-frequency subband; for the directional bandpass subbands, the coefficients are first classified by a classification criterion into significant and insignificant coefficients, the significant coefficients are modeled with the normal inverse Gaussian distribution, and the insignificant coefficients are modeled with a Gaussian distribution. Experimental results show that the proposed method outperforms many algorithms in visual quality, peak signal-to-noise ratio, and mean structural similarity.
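For reference, the normal inverse Gaussian density used to model the significant coefficients has the standard parameterization below, where \(K_1\) is the modified Bessel function of the second kind; the paper may use a different but equivalent parameterization:

\[
f(x;\alpha,\beta,\mu,\delta) =
\frac{\alpha\,\delta\,K_{1}\!\left(\alpha\sqrt{\delta^{2} + (x-\mu)^{2}}\right)}
     {\pi\sqrt{\delta^{2} + (x-\mu)^{2}}}\,
\exp\!\left(\delta\sqrt{\alpha^{2} - \beta^{2}} + \beta(x-\mu)\right) .
\]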
