Similar Documents
A total of 20 similar documents were retrieved.
1.
A shuffling encryption method is proposed based on the fractional-order logistic map. A fractional-order sequence is obtained through discrete fractional calculus and used as the key. Using the bitwise XOR operator, a new image encryption algorithm is proposed. Simulations analyze the algorithm's key space, key sensitivity, and statistical properties. The results show that the algorithm achieves good encryption and decryption performance, offers high security, and meets the security requirements of image encryption.
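For illustration only, the Python sketch below shows XOR-based image encryption driven by a logistic-map keystream. It uses the classical logistic map as a stand-in for the fractional-order sequence described in the abstract, and the key values x0 and r are arbitrary choices, not taken from the paper.

```python
import numpy as np

def logistic_keystream(shape, x0=0.3141, r=3.99, burn_in=1000):
    """Generate a pseudo-random byte keystream from the classical logistic map.

    The paper uses a fractional-order logistic sequence as the key; here the
    ordinary map x <- r*x*(1-x) is a simplified stand-in.
    """
    n = int(np.prod(shape))
    x = x0
    for _ in range(burn_in):            # discard transient iterations
        x = r * x * (1.0 - x)
    stream = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256  # quantize the state to one byte
    return stream.reshape(shape)

def xor_encrypt(image, x0=0.3141, r=3.99):
    """Encrypt (or decrypt) an 8-bit image by XOR-ing it with the keystream."""
    image = np.asarray(image, dtype=np.uint8)
    return image ^ logistic_keystream(image.shape, x0, r)

# XOR is its own inverse, so applying the function twice recovers the image.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
assert np.array_equal(xor_encrypt(xor_encrypt(img)), img)
```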

2.
Sparse regularization methods play an increasingly important role in parameter reconstruction. Compared with traditional regularization methods, sparse regularization can reconstruct sparse variables more accurately. Because sparse regularization is non-differentiable, existing classical algorithms must be adapted. This paper constructs a homotopy-perturbation sparse regularization method to overcome the non-differentiability of standard sparse regularization, and applies it to reconstructing implied volatility based on the Black-Scholes option pricing model and to reconstructing policy parameters based on the Todaro model. Numerical experiments show that the proposed method is convergent and stable.

3.
This paper studies the image deblurring and denoising problem. Combining regularization techniques with Krylov subspace methods, a hybrid regularized LSQR algorithm is proposed. Experimental results show that the algorithm effectively mitigates the ill-posedness of the problem and produces restored images of high fidelity.
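As a loosely related illustration (not the paper's hybrid algorithm), damped LSQR from SciPy solves a Tikhonov-regularized least-squares problem inside a Krylov (Golub-Kahan) iteration. The toy blur operator and damping value below are assumptions for demonstration only.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lsqr

# Toy 1-D "blur" operator: a sparse moving-average (convolution-like) matrix.
n = 200
A = diags([0.25, 0.5, 0.25], offsets=[-1, 0, 1], shape=(n, n), format="csr")

rng = np.random.default_rng(0)
x_true = np.zeros(n)
x_true[60:140] = 1.0                                  # a simple box signal
b = A @ x_true + 1e-2 * rng.standard_normal(n)        # blurred + noisy data

# Damped LSQR solves min ||A x - b||^2 + damp^2 ||x||^2, i.e. Tikhonov-regularized
# least squares within a Krylov iteration. The damping parameter plays the role
# of the regularization parameter and is a hypothetical choice here.
x_reg = lsqr(A, b, damp=5e-2, iter_lim=100)[0]
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```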

4.
张宏武, 吕拥. 《应用数学》, 2022, (1): 110-119
This paper studies a class of backward-in-time problems for space-fractional diffusion equations. Based on a conditional stability result, a generalized Tikhonov regularization method is developed to overcome the ill-posedness, and logarithmic and double-logarithmic convergence estimates are obtained under an a posteriori choice rule for the regularization parameter. Numerical simulations verify the convergence and stability of the method.
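For context, a generic generalized Tikhonov formulation reads as follows; the specific penalty operator and functional used for the space-fractional problem in the paper may differ.

```latex
\min_{u}\ \|K u - g^{\delta}\|^{2} + \alpha\,\|L u\|^{2},
\qquad
\|K u_{\alpha}^{\delta} - g^{\delta}\| \approx \tau\,\delta \ \ \text{(a posteriori choice of } \alpha\text{)},
```

where K is the forward operator mapping the unknown data u to the noisy measurement g^δ with noise level δ, and L is a penalty operator (L = I recovers classical Tikhonov regularization).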

5.
This paper studies the basic Newton-Raphson algorithm used in electrical resistivity tomography (ERT) and its improvement. The inversion algorithm is optimized using least squares, Tikhonov regularization, and related techniques, yielding resistivity distribution images of carbon-fiber composite layers that agree with the structure of the experimental samples and extending the mathematical inversion model of the Newton-Raphson algorithm.
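The sketch below illustrates a generic Tikhonov-damped Gauss-Newton (Newton-Raphson-type) update for a nonlinear inverse problem. The forward model, Jacobian, and damping parameter are placeholders; the actual ERT forward solve (a finite-element potential computation) is not reproduced here.

```python
import numpy as np

def regularized_gauss_newton(forward, jacobian, d_obs, m0, alpha=1e-2, n_iter=10):
    """Generic Tikhonov-regularized Gauss-Newton iteration.

    forward(m)  : forward model, returns predicted data for model parameters m
    jacobian(m) : Jacobian (sensitivity) matrix of forward at m
    Both are placeholders; in ERT they would come from a forward solve.
    """
    m = m0.copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)                  # data residual
        J = jacobian(m)
        # Normal equations of the damped least-squares update:
        # (J^T J + alpha I) dm = J^T r - alpha (m - m0)
        A = J.T @ J + alpha * np.eye(m.size)
        rhs = J.T @ r - alpha * (m - m0)
        m = m + np.linalg.solve(A, rhs)
    return m

# Tiny synthetic test with a linear "forward model" G m (purely illustrative).
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
m_true = np.array([2.0, -1.0])
data = G @ m_true
m_est = regularized_gauss_newton(lambda m: G @ m, lambda m: G,
                                 data, m0=np.zeros(2), alpha=1e-3)
print(m_est)   # close to m_true for this well-posed toy example
```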

6.
In the C-V (Chan-Vese) model, the regularized approximations of the Heaviside and Dirac functions affect the segmentation of the target image. Based on the properties of the Heaviside and Dirac functions, new regularized Heaviside and Dirac functions are proposed. The paper first analyzes the role the regularized Heaviside and Dirac functions play in image segmentation within the C-V model, then proposes new regularized versions and thereby improves the C-V model. Experimental results show that segmentation using the proposed regularized Heaviside and Dirac functions performs well.
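For reference, the regularized Heaviside and Dirac functions most commonly used with the original Chan-Vese model are shown below; the paper proposes alternatives to these.

```latex
H_{\varepsilon}(z) = \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon}\right),
\qquad
\delta_{\varepsilon}(z) = H_{\varepsilon}'(z) = \frac{1}{\pi}\,\frac{\varepsilon}{\varepsilon^{2} + z^{2}} .
```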

7.
Because classical regularization methods, such as the classical Tikhonov method, tend to over-smooth the solution, a new fractional Tikhonov regularization method is considered; it contains the classical Tikhonov method as a special case. Taking a time-fractional backward diffusion problem as an example, the choice of the regularization parameter for the new method and the corresponding error estimates are discussed. Numerical experiments further demonstrate the feasibility and effectiveness of the proposed method.
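One common way to contrast classical and fractional Tikhonov regularization is through their SVD filter factors; the form below is a standard variant from the literature and is not necessarily the exact method analyzed in the paper.

```latex
f_{k}^{\mathrm{Tik}} = \frac{\sigma_{k}^{2}}{\sigma_{k}^{2} + \mu},
\qquad
f_{k}^{\mathrm{frac}} = \frac{\sigma_{k}^{\alpha+1}}{\sigma_{k}^{\alpha+1} + \mu},
```

which reduces to the classical filter for α = 1 and, for α < 1, damps small singular values less strongly, mitigating the over-smoothing effect.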

8.
Sparse optimization models are a very active frontier topic in optimization and have been successfully applied in compressed sensing, image processing, machine learning, and statistical modeling. Taking spectral analysis, digital signal processing, recommender systems, and other applications as examples, this paper explains the modeling process and core ideas of sparse optimization. Sparse optimization models are combinatorial optimization models and are very hard to solve (NP-hard). Regularization methods are a commonly used class of solution methods for sparse optimization models. We introduce the principles of regularization methods and several common regularized models, and present the stability theory of regularized models together with a variety of state-of-the-art algorithms. Numerical experiments show that these algorithms are fast, efficient, and robust. Sparse regularization models will show even greater computational advantages and application value in the era of big data.
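As a minimal illustration of the regularization approach to sparse optimization, the following sketch solves the ℓ1-regularized least-squares (LASSO) relaxation with the iterative shrinkage-thresholding algorithm (ISTA); the problem sizes and the regularization weight are arbitrary choices.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Recover a sparse vector from a few noisy random measurements.
rng = np.random.default_rng(1)
n, p, k = 80, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = ista(A, b, lam=0.05)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```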

9.
Total variation (TV) regularized data-fitting problems arise in many image processing tasks, such as image denoising, deblurring, inpainting, magnetic resonance imaging, and compressive image sensing. In recent years, fast and efficient algorithms for such problems have developed rapidly. Taking least-squares and least-absolute-deviation fitting as examples, the main algorithms for solving such problems are briefly reviewed, and the application of a TV-regularized nonconvex data-fitting model to deblurring images corrupted by impulse noise is discussed.
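For reference, the two most common TV-regularized data-fitting models mentioned (least squares and least absolute deviations) can be written as follows; the nonconvex model discussed in the abstract replaces one of these terms with a nonconvex variant.

```latex
\min_{u}\ \frac{\lambda}{2}\,\|K u - f\|_{2}^{2} + \mathrm{TV}(u)
\qquad\text{and}\qquad
\min_{u}\ \lambda\,\|K u - f\|_{1} + \mathrm{TV}(u),
\qquad
\mathrm{TV}(u) = \int_{\Omega} |\nabla u|\,dx,
```

where K is the blur (or identity/sampling) operator; the ℓ2 model suits Gaussian noise and the ℓ1 model suits impulse noise.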

10.
A regularization-based method for sparse signal representation is proposed. It differs from classical sparse representation algorithms in two main respects: first, sparsity is measured directly by the ℓ0 "norm" rather than the widely used ℓ1 norm; second, the introduction of the regularization term ensures that the representation obtained by the model is the sparsest among all admissible representations. In this paper the regularization term uses the frame potential to characterize the "optimality" of the sparse representation, the ℓ0 norm is approximated by a twice-differentiable concave function, an approximation algorithm for solving the proposed regularized model is derived, and a convergence analysis is given. Numerical experiments also show the superiority of the proposed model and algorithm over classical algorithms.

11.
The aim of this paper is to develop an efficient algorithm for solving a class of unconstrained nondifferentiable convex optimization problems in finite-dimensional spaces. To this end we first formulate its Fenchel dual problem and regularize it in two steps into a differentiable strongly convex one with Lipschitz continuous gradient. The doubly regularized dual problem is then solved via a fast gradient method with the aim of accelerating the resulting convergence scheme. The theoretical results are finally applied to an l1 regularization problem arising in image processing.
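As a generic illustration of a fast (accelerated) gradient method on a smooth, strongly convex objective, and not the paper's doubly regularized dual scheme, consider the sketch below; the quadratic test problem is an arbitrary stand-in.

```python
import numpy as np

def fast_gradient(grad, x0, L, mu, n_iter=300):
    """Nesterov's fast gradient method for a smooth, strongly convex function.

    grad : gradient oracle of the objective
    L    : Lipschitz constant of the gradient
    mu   : strong convexity modulus
    The paper applies such a scheme to a doubly regularized Fenchel dual;
    here it is shown on a simple quadratic as a stand-in.
    """
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))  # momentum weight
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iter):
        x_new = y - grad(y) / L          # gradient step from the extrapolated point
        y = x_new + beta * (x_new - x)   # momentum (extrapolation) step
        x = x_new
    return x

# Strongly convex quadratic test problem: f(x) = 0.5 x'Qx - b'x.
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 10))
Q = M.T @ M + 0.5 * np.eye(10)
b = rng.standard_normal(10)
eigs = np.linalg.eigvalsh(Q)                        # ascending eigenvalues
x_star = np.linalg.solve(Q, b)                      # exact minimizer
x_fgm = fast_gradient(lambda x: Q @ x - b, np.zeros(10), L=eigs[-1], mu=eigs[0])
print("error:", np.linalg.norm(x_fgm - x_star))
```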

12.
Images captured by image acquisition systems using photon-counting devices, such as astronomical imaging, positron emission tomography, and confocal microscopy, are often contaminated by Poisson noise. Total variation (TV) regularization, a classic regularization technique in image restoration, is well known for recovering sharp edges of an image. Since the regularization parameter is important for a good recovery, Chen and Cheng (2012) proposed an effective TV-based Poissonian image deblurring model with a spatially adapted regularization parameter. However, TV regularization produces staircase artifacts. In this paper, in order to remedy this shortcoming of their model, we introduce an extra high-order total variation (HTV) regularization term. Furthermore, to balance the trade-off between edges and smooth regions in the images, we also incorporate a weighting parameter to discriminate between the TV and the HTV penalties. The proposed model is solved by an iterative algorithm under the framework of the well-known alternating direction method of multipliers. Our numerical results demonstrate the effectiveness and efficiency of the proposed method in terms of signal-to-noise ratio (SNR) and relative error (RelErr).
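Schematically, such a combined model can be written as below; the exact placement of the spatially adapted parameter α(x) and of the weight ω follows the paper and may differ from this illustrative form.

```latex
\min_{u \ge 0}\ \int_{\Omega}\big(Hu - f\,\log(Hu)\big)\,dx
\;+\; \alpha(x)\,\big(\omega\,\mathrm{TV}(u) + (1-\omega)\,\mathrm{HTV}(u)\big),
```

with H the blur operator, f the Poisson-corrupted observation, the integral the Poisson (Kullback-Leibler) data-fidelity term, TV the first-order total variation, HTV a second-order total variation built from second derivatives, and ω ∈ [0, 1] the weighting parameter.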

13.
Variational models for image segmentation are usually solved by the level set method, which is not only slow to compute but also strongly dependent on initialization. Recently, fuzzy region competition models and globally convex segmentation models have been introduced. They are insensitive to initialization but contain TV regularizers, which make them difficult to compute. Goldstein, Bresson and Osher applied the split Bregman iteration to globally convex segmentation models, which avoids regularizing the TV norm and speeds up the computation. However, the split Bregman method needs to solve a partial differential equation (PDE) in each iteration. In this paper, we apply to image segmentation a simple algorithm, originally proposed by Jia et al. (2009), that does not require solving PDEs. The algorithm also avoids regularizing the TV norm and has a simpler form, which makes it easier to implement. Numerical experiments show that our algorithm works faster and more efficiently than other fast schemes, such as duality-based methods and the split Bregman scheme.

14.
Image restoration is often solved by minimizing an energy function consisting of a data-fidelity term and a regularization term. A regularized convex term can usually preserve the image edges well in the restored image. In this paper, we consider a class of convex and edge-preserving regularization functions, i.e., multiplicative half-quadratic regularizations, and we use the Newton method to solve the correspondingly reduced systems of nonlinear equations. At each Newton iterate, the preconditioned conjugate gradient method, incorporated with a constraint preconditioner, is employed to solve the structured Newton equation, which has a symmetric positive definite coefficient matrix. The eigenvalue bounds of the preconditioned matrix are derived and can be used to estimate the convergence speed of the preconditioned conjugate gradient method. Experimental results demonstrate that this new approach is efficient and that the quality of the restored images is reasonably good.

15.
Variational methods have become an important class of methods in signal and image restoration, a typical inverse problem. One important minimization model consists of a squared ℓ2 data-fidelity term (corresponding to Gaussian noise) and a regularization term constructed from a potential function composed with first-order difference operators. It is well known that total variation (TV) regularization, although it has achieved great success, suffers from a contrast-reduction effect. Using a typical signal, we show that, in fact, all convex regularizers and most nonconvex regularizers have this effect. With this motivation, we present a general truncated regularization framework. The potential function is a truncation of existing nonsmooth potential functions and is thus flat on (τ, +∞) for some positive τ. Analysis in 1D theoretically demonstrates the good contrast-preserving ability of the framework. We also give optimization algorithms with convergence verification in 2D, where global minimizers of each subproblem (either convex or nonconvex) are calculated. Experiments numerically show the advantages of the framework.
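Schematically, a truncated potential of the kind described can be written as below (an illustrative form, not necessarily the paper's notation):

```latex
\phi_{\tau}(t) \;=\; \min\{\phi(t),\ \phi(\tau)\} \;=\;
\begin{cases}
\phi(t), & 0 \le t \le \tau,\\[2pt]
\phi(\tau), & t > \tau,
\end{cases}
```

so the penalty is flat on (τ, +∞) and jumps with contrast above τ incur no additional cost, which is the source of the contrast-preserving behavior.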

16.
Recently, optimization algorithms for solving a minimization problem whose objective function is a sum of two convex functions have been widely investigated in the field of image processing. In particular, the scenario in which a non-differentiable convex function such as the total variation (TV) norm is included in the objective function has received considerable interest, since many variational models encountered in image processing have this nature. In this paper, we propose a fast fixed-point algorithm based on the adapted metric method and apply it to TV-based image deblurring. The novel method is derived from the idea of establishing a general fixed-point algorithm framework based on an adequate quadratic approximation of one convex function in the objective, in a way reminiscent of quasi-Newton methods. Utilizing the non-expansiveness of the proximity operator, we further investigate the global convergence of the proposed algorithm. Numerical experiments on image deblurring demonstrate that the proposed algorithm is very competitive with current state-of-the-art algorithms in terms of computational efficiency.

17.
This paper shows that the optimal subgradient algorithm (OSGA)—which uses first-order information to solve convex optimization problems with optimal complexity—can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA on large-scale problems in applications arising from signal and image processing, machine learning and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.

18.
Iterative regularization multigrid methods have been successfully applied to signal/image deblurring problems. When zero Dirichlet boundary conditions are imposed, the deblurring matrix has a Toeplitz structure and is potentially full. A crucial task of a multilevel strategy is to preserve the Toeplitz structure at the coarse levels, which can be exploited to obtain fast computations. The smoother has to be an iterative regularization method, and the grid transfer operator should preserve the regularization property of the smoother. This paper improves the iterative multigrid method proposed in [11] by introducing a wavelet soft-thresholding denoising post-smoother. Such a post-smoother avoids the noise amplification that causes the semi-convergence of iterative regularization methods and reduces ringing effects. The resulting iterative multigrid regularization method stabilizes the iterations, so that an imprecise (over)estimate of the stopping iteration does not have a deleterious effect on the computed solution. Numerical examples of signal and image deblurring problems confirm the effectiveness of the proposed method.
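A minimal sketch of a wavelet soft-thresholding denoising step, of the kind used here as a post-smoother, is shown below using PyWavelets; the wavelet, decomposition level, and threshold are illustrative choices, not the paper's.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_soft_threshold(img, wavelet="db4", level=3, thresh=0.05):
    """Wavelet soft-thresholding denoiser, usable as a post-smoothing step.

    The multigrid method in the abstract inserts such a denoiser after the
    smoother to damp amplified noise; the parameters here are illustrative.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new_coeffs = [coeffs[0]]                      # keep the coarse approximation
    for detail in coeffs[1:]:                     # soft-threshold the detail bands
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)

# Denoise a noisy piecewise-constant test image.
rng = np.random.default_rng(3)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = wavelet_soft_threshold(noisy)
print("noise reduced:", np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```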

19.
Image restoration is an inverse problem that has been widely studied in recent years. The total-variation-based model of Rudin, Osher and Fatemi (1992) is one of the most effective and best known, due to its ability to preserve sharp features in the restoration. This paper addresses an important and still outstanding issue for this model: the selection of an optimal regularization parameter, in the case of image deblurring. We propose to compute the optimal regularization parameter along with the restored image in the same variational setting, by considering a Karush-Kuhn-Tucker (KKT) system. By establishing a monotonicity result analytically, we can compute this parameter with an iterative algorithm for the KKT system. Such an approach corresponds to solving an equation derived from the discrepancy principle, rather than using the discrepancy principle only as a stopping criterion. Numerical experiments show that the algorithm is efficient and effective for image deblurring problems while remaining competitive.
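As a simplified illustration of choosing a regularization parameter from the discrepancy principle (here for plain Tikhonov regularization rather than the TV/KKT setting of the paper), the sketch below bisects on α until the residual matches an assumed noise level δ; the test matrix and noise level are arbitrary.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||Ax-b||^2 + alpha*||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy_parameter(A, b, delta, lo=1e-10, hi=1e2, n_bisect=60):
    """Choose alpha so that ||A x_alpha - b|| matches the noise level delta.

    The residual norm is monotone increasing in alpha, so a (log-scale)
    bisection converges to the discrepancy-principle parameter.
    """
    for _ in range(n_bisect):
        mid = np.sqrt(lo * hi)                 # bisect on a logarithmic scale
        r = np.linalg.norm(A @ tikhonov_solve(A, b, mid) - b)
        lo, hi = (lo, mid) if r > delta else (mid, hi)
    return np.sqrt(lo * hi)

# Tiny ill-conditioned example.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 50)) @ np.diag(np.logspace(0, -6, 50))
x_true = rng.standard_normal(50)
noise = 1e-3 * rng.standard_normal(50)
b = A @ x_true + noise
alpha = discrepancy_parameter(A, b, delta=np.linalg.norm(noise))
print("chosen alpha:", alpha)
```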

20.
An optimization model and a global optimization algorithm for the layout of a group of cuboids in a satellite module   Total citations: 7 (self-citations: 2, citations by others: 5)
This paper studies the optimization of the layout of a group of cuboids inside a satellite module. A three-dimensional layout optimization model is established, and tools from graph theory and group theory are used to overcome the difficulties caused by the intermittent (discontinuous) nature of the layout optimization problem. On this basis, a globally convergent optimization algorithm is constructed. The methods used in the paper can be applied to similar problems.
