Similar Documents
20 similar documents retrieved.
1.
In this work, we propose a new method, termed R-CORK, for the numerical solution of large-scale rational eigenvalue problems; it is based on a linearization and on a compact decomposition of the rational Krylov subspaces corresponding to this linearization. R-CORK extends the compact rational Krylov method (CORK), recently introduced in the literature to solve a family of nonlinear eigenvalue problems that can be expressed and linearized in certain particular ways, a family that includes arbitrary polynomial eigenvalue problems but not arbitrary rational eigenvalue problems. The R-CORK method exploits the structure of the linearized problem by representing the Krylov vectors in compact form in order to reduce storage costs, resulting in a method with two levels of orthogonalization: the first level works with vectors of the same size as the original problem, and the second level works with vectors of much smaller size. Since vectors of the size of the linearization are never stored or orthogonalized, R-CORK is more efficient in terms of memory and orthogonalization than the classical rational Krylov method applied directly to the linearization. Because the R-CORK method is based on a classical rational Krylov method, implicit restarting can also be implemented, and we show how to do so in a memory-efficient way. Finally, numerical examples are included to show that the R-CORK method performs satisfactorily in practice.

2.
Krylov iterative methods usually solve an optimization problem, per iteration, to obtain a vector whose components are the step lengths associated with the previous search directions. This vector can be viewed as the solution of a multiparameter optimization problem. In that sense, Krylov methods can be combined with the spectral choice of step length that has recently been developed to accelerate descent methods in optimization. In this work, we discuss different spectral variants of Krylov methods and present encouraging preliminary numerical experiments, with and without preconditioning.
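As background for the "spectral choice of step length" mentioned above, the sketch below shows the classical Barzilai-Borwein spectral step on a quadratic model. It is a generic illustration under our own assumptions (the function `grad_f`, the matrix `A`, and all parameters are illustrative), not the authors' combination with Krylov methods.

```python
import numpy as np

def bb_gradient_descent(grad_f, x0, n_iter=50):
    """Gradient descent with the (first) Barzilai-Borwein spectral step length."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad_f(x_prev)
    x = x_prev - 1e-4 * g_prev                 # small initial step
    for _ in range(n_iter):
        g = grad_f(x)
        s, y = x - x_prev, g - g_prev          # step and gradient differences
        denom = s @ y
        # BB1 step: alpha = (s^T s)/(s^T y), guarded against a vanishing denominator.
        alpha = (s @ s) / denom if abs(denom) > 1e-15 else 1e-4
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Illustrative strictly convex quadratic f(x) = 0.5 x^T A x - b^T x, so grad f(x) = A x - b.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)                  # symmetric positive definite
b = rng.standard_normal(20)
x = bb_gradient_descent(lambda z: A @ z - b, np.zeros(20))
print(np.linalg.norm(A @ x - b))
```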

3.
We propose a new algorithm for the total variation-based image denoising problem. The split Bregman method is used to convert an unconstrained minimization denoising problem to a linear system in the outer iteration. An algebraic multi-grid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. Numerical experiments demonstrate that this algorithm is efficient even for images with large signal-to-noise ratio.
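The outer/inner structure described above can be illustrated with a minimal split Bregman sketch for anisotropic TV denoising. For simplicity the outer linear system is solved exactly by FFT under periodic boundary conditions instead of the algebraic multi-grid inner iteration with Krylov acceleration used in the paper; `mu`, `lam`, and the iteration counts are illustrative assumptions.

```python
import numpy as np

def shrink(z, t):
    """Soft-thresholding: the closed-form d-update in split Bregman."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tv_denoise_split_bregman(f, mu=10.0, lam=5.0, n_outer=30):
    """Anisotropic TV denoising  min_u |grad u|_1 + (mu/2)||u - f||^2  by split Bregman."""
    ny, nx = f.shape
    # Forward differences with periodic boundaries, and their adjoints.
    Dx  = lambda u: np.roll(u, -1, axis=1) - u
    Dy  = lambda u: np.roll(u, -1, axis=0) - u
    DxT = lambda p: np.roll(p, 1, axis=1) - p
    DyT = lambda p: np.roll(p, 1, axis=0) - p
    # Fourier symbol of mu*I + lam*(DxT Dx + DyT Dy): the outer linear solve is exact here.
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    denom = mu + lam * (wx[None, :] + wy[:, None])

    u = f.copy()
    dx, dy = np.zeros_like(f), np.zeros_like(f)
    bx, by = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_outer):
        rhs = mu * f + lam * (DxT(dx - bx) + DyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))   # outer linear system
        dx = shrink(Dx(u) + bx, 1.0 / lam)                    # d-updates (shrinkage)
        dy = shrink(Dy(u) + by, 1.0 / lam)
        bx = bx + Dx(u) - dx                                  # Bregman variable updates
        by = by + Dy(u) - dy
    return u

# Illustrative usage on a noisy synthetic image.
rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored = tv_denoise_split_bregman(noisy)
```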

4.
In this paper, images with horizontal motion blur, vertical motion blur and angled motion blur are considered. We construct several difference schemes for the highly nonlinear term $\nabla \cdot \left( \nabla u / \sqrt{|\nabla u|^{2}+\beta} \right)$ of the total variation-based image motion deblurring problem. The large nonlinear system is linearized by a fixed point iteration method. An algebraic multigrid method with Krylov subspace acceleration is used to solve the corresponding linear equations as in [7]. The algorithms can restore the images very well. We give some numerical experiments to demonstrate that our difference schemes are efficient and robust.
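For reference, the continuous problem behind these difference schemes is, in the standard formulation we assume here (blur operator $K$, data $f$, regularization weight $\alpha$), the regularized TV deblurring equation together with its lagged-diffusivity fixed point linearization:

```latex
% Euler--Lagrange equation of   min_u  \alpha \int \sqrt{|\nabla u|^2 + \beta}\,dx + (1/2)\|Ku - f\|^2
\[
  -\alpha\,\nabla\cdot\left(\frac{\nabla u}{\sqrt{|\nabla u|^{2}+\beta}}\right) + K^{*}(Ku - f) = 0,
\]
% fixed point (lagged-diffusivity) linearization: the diffusion coefficient is frozen at u^k
\[
  -\alpha\,\nabla\cdot\left(\frac{\nabla u^{k+1}}{\sqrt{|\nabla u^{k}|^{2}+\beta}}\right) + K^{*}(Ku^{k+1} - f) = 0 .
\]
```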

5.
It is shown how the rational Krylov algorithm can be applied to a matrix eigenvalue problem that is nonlinear in the eigenvalue parameter. Bibliography: 6 titles.

6.
周茜  雷渊  乔文龙 《计算数学》2016,38(2):171-186
This paper considers a class of linear matrix inequalities and the associated least-squares problem, which is equivalent to the corresponding minimal nonnegative deviation problem for matrix inequalities. Earlier work proposed an iterative method for this class of minimal nonnegative deviation problems, but that method requires the exact solution of a constrained least-squares subproblem at every iteration, so for large-scale problems the overall iteration is computationally very expensive. To improve efficiency, this paper builds on the existing algorithm and proposes a modified iterative method. At each iteration, the modified method uses a limited number of steps of a matrix-form LSQR method to solve a constrained least-squares subproblem over a low-dimensional matrix Krylov subspace, which reduces the computational cost of the whole iteration. The convergence of the modified algorithm is proved by means of the projection theorem and related matrix analysis techniques, and numerical examples confirm the theoretical results and the effectiveness of the algorithm.
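To illustrate the kind of limited-step LSQR subproblem solve described above (the constraint handling and the matrix Krylov subspace construction of the paper are not reproduced), the sketch below runs a capped number of LSQR iterations on an unconstrained matrix least-squares problem min_X ||A X - C||_F by vectorizing it as a linear operator; the matrices, sizes, and step limit are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Matrix least-squares subproblem  min_X ||A X - C||_F, solved approximately by a
# capped number of LSQR steps on the vectorized unknown.
rng = np.random.default_rng(2)
m, n, p = 120, 40, 8
A = rng.standard_normal((m, n))
C = rng.standard_normal((m, p))

op = LinearOperator(
    (m * p, n * p),
    matvec=lambda x: (A @ x.reshape(n, p)).ravel(),     # X  ->  A X
    rmatvec=lambda y: (A.T @ y.reshape(m, p)).ravel(),  # adjoint:  Y  ->  A^T Y
)

# iter_lim caps the number of LSQR steps, mimicking a limited-step inner solve.
X = lsqr(op, C.ravel(), iter_lim=25)[0].reshape(n, p)
print("residual:", np.linalg.norm(A @ X - C))
```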

7.
Algorithms to solve large sparse eigenvalue problems are considered. A new class of algorithms which is based on rational functions of the matrix is described. The Lanczos method, the Arnoldi method, the spectral transformation Lanczos method, and Rayleigh quotient iteration all are special cases, but there are also new algorithms which correspond to rational functions with several poles. In the simplest case a basis of a rational Krylov subspace is found in which the matrix eigenvalue problem is formulated as a linear matrix pencil with a pair of Hessenberg matrices.
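A minimal sketch of the rational Krylov idea follows: shift-and-invert steps with several poles build a subspace from which approximate eigenvalues are extracted by Rayleigh-Ritz projection. The Hessenberg-pencil formulation of the algorithm itself is not reproduced, and the test matrix and poles are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, eig

def rational_krylov_ritz(A, v0, poles):
    """Orthonormal basis of a rational Krylov space built with one shift-and-invert
    step per pole, followed by Rayleigh-Ritz extraction of approximate eigenvalues."""
    n = A.shape[0]
    V = np.zeros((n, len(poles) + 1))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j, s in enumerate(poles):
        w = lu_solve(lu_factor(A - s * np.eye(n)), V[:, j])   # (A - s I)^{-1} v_j
        for i in range(j + 1):                                # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    H = V.T @ A @ V                      # projected matrix
    return eig(H, right=False), V        # Ritz values tend to favor eigenvalues near the poles

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200))
ritz, V = rational_krylov_ritz(A, rng.standard_normal(200), poles=[0.5, 0.5, 2.0, 2.0])
print(np.sort_complex(ritz))
```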

8.
Based on the special positive semidefinite splittings of the saddle point matrix, we propose a new alternating positive semidefinite splitting (APSS) iteration method for the saddle point problem arising from the finite element discretization of the hybrid formulation of the time-harmonic eddy current problem. We prove that the new APSS iteration method is unconditionally convergent for both cases of the simple topology and the general topology. The new APSS matrix can be used as a preconditioner to accelerate the convergence rate of Krylov subspace methods. Numerical results show that the new APSS preconditioner is superior to the existing preconditioners.

9.
By further generalizing the modified skew-Hermitian triangular splitting iteration methods studied in [L. Wang, Z.-Z. Bai, Skew-Hermitian triangular splitting iteration methods for non-Hermitian positive definite linear systems of strong skew-Hermitian parts, BIT Numer. Math. 44 (2004) 363-386], in this paper, we present a new iteration scheme, called the product-type skew-Hermitian triangular splitting iteration method, for solving the strongly non-Hermitian systems of linear equations with positive definite coefficient matrices. We discuss the convergence property and the optimal parameters of this method. Moreover, when it is applied to precondition the Krylov subspace methods, the preconditioning property of the product-type skew-Hermitian triangular splitting iteration is analyzed in detail. Numerical results show that the product-type skew-Hermitian triangular splitting iteration method can produce high-quality preconditioners for the Krylov subspace methods for solving large sparse positive definite systems of linear equations of strong skew-Hermitian parts.

10.
Iterative methods of Krylov-subspace type can be very effective solvers for matrix systems resulting from partial differential equations if appropriate preconditioning is employed. We describe and test block preconditioners based on a Schur complement approximation which uses a multigrid method for finite element approximations of the linearized incompressible Navier-Stokes equations in streamfunction and vorticity formulation. By using a Picard iteration, we use this technology to solve fully nonlinear Navier-Stokes problems. The solvers which result scale very well with problem parameters. © 2011 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2011
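As a generic illustration of a Schur-complement-based block preconditioner (not the specific streamfunction-vorticity multigrid preconditioner of the paper), the sketch below applies a block upper-triangular preconditioner with an approximate Schur complement inside GMRES on a dense saddle point system; all matrices and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

# Illustrative saddle point system  K [u; p] = rhs  with  K = [[A, B^T], [B, 0]].
rng = np.random.default_rng(7)
n, m = 200, 60
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                          # SPD (1,1) block
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# Block upper-triangular preconditioner P = [[A, B^T], [0, -S]], with the Schur
# complement S = B A^{-1} B^T replaced by the cheap approximation B diag(A)^{-1} B^T.
S_approx = B @ ((1.0 / np.diag(A))[:, None] * B.T)
luA, luS = lu_factor(A), lu_factor(S_approx)

def apply_Pinv(r):
    r1, r2 = r[:n], r[n:]
    p = -lu_solve(luS, r2)               # second block row:  -S p = r2
    u = lu_solve(luA, r1 - B.T @ p)      # first block row:   A u + B^T p = r1
    return np.concatenate([u, p])

M = LinearOperator(K.shape, matvec=apply_Pinv)
x, info = gmres(K, rhs, M=M, restart=60)
print(info, np.linalg.norm(K @ x - rhs))
```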

11.
We construct a novel multi-step iterative method for solving systems of nonlinear equations by introducing a parameter θ that generalizes the multi-step Newton method while keeping its order of convergence and computational cost. With an appropriate choice of θ, the new method can achieve both faster convergence and a larger radius of convergence. The new iterative method requires only one Jacobian inversion per iteration and can therefore be implemented efficiently using Krylov subspace methods. The new method can be used to solve nonlinear systems of partial differential equations, such as complex generalized Zakharov systems, by discretizing in both the spatial and temporal independent variables, for instance with the Chebyshev pseudo-spectral method, to obtain systems of nonlinear equations. Quite extensive tests show that the new method can have significantly faster convergence and a significantly larger radius of convergence than the multi-step Newton method.
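The θ-parameterized generalization is specific to the paper, but the base scheme it generalizes, a multi-step Newton method that reuses one Jacobian factorization for several frozen-Jacobian substeps, can be sketched as follows; the test system and parameters are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_newton(F, J, x0, m=3, n_outer=20, tol=1e-12):
    """Multi-step Newton: one Jacobian factorization per outer iteration,
    reused for m frozen-Jacobian substeps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        lu = lu_factor(J(x))          # single Jacobian factorization ("inversion")
        for _ in range(m):            # m substeps reuse the same factorization
            r = F(x)
            if np.linalg.norm(r) < tol:
                return x
            x = x - lu_solve(lu, r)
    return x

# Illustrative 2x2 nonlinear system:  x^2 + y^2 = 4,  e^x + y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [np.exp(v[0]), 1.0]])
print(multistep_newton(F, J, np.array([1.0, -1.0])))
```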

12.
Recently, Bai et al. (2013) proposed an effective and efficient matrix splitting iterative method, called the preconditioned modified Hermitian/skew-Hermitian splitting (PMHSS) iteration method, for two-by-two block linear systems of equations. The eigenvalue distribution of the iteration matrix suggests that the splitting matrix could be advantageously used as a preconditioner. In this study, the CGNR method is utilized for solving the PMHSS preconditioned linear systems, and the performance of the method is considered by estimating the condition number of the normal equations. Furthermore, the proposed method is compared with other PMHSS preconditioned Krylov subspace methods by solving linear systems arising in complex partial differential equations and a distributed control problem. The numerical results demonstrate the difference in the performance of the methods under consideration.

13.
This paper discusses the accelerated solution of nonlinear parabolic equations. Two new nonlinear iterative methods for solving the implicit scheme, named the implicit-explicit quasi-Newton (IEQN) method and the derivative-free implicit-explicit quasi-Newton (DFIEQN) method, are introduced, in which the linear equations resulting from the linearization preserve the parabolic characteristics of the original partial differential equations. It is proved that the iterative sequence of the iteration method converges to the solution of the implicit scheme quadratically. Moreover, compared with the Jacobian-free Newton-Krylov (JFNK) method, the DFIEQN method has some advantages: for example, its implementation is easy, and it gives a linear algebraic system with an explicit coefficient matrix, so that the linear (inner) iteration is not restricted to Krylov methods. Computational results obtained with the IEQN, DFIEQN, JFNK and Picard iteration methods are presented in confirmation of the theory and for comparison of the performance of these methods.
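For context, the JFNK method used for comparison above replaces the Jacobian by a finite-difference Jacobian-vector product inside a Krylov (here GMRES) inner solve; the sketch below shows this matrix-free mechanism on an illustrative discretized nonlinear problem (the IEQN/DFIEQN schemes themselves are not reproduced).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, x, eps=1e-7):
    """One Newton step with a matrix-free Jacobian:  J(x) v ~ (F(x + eps v) - F(x)) / eps,
    and GMRES as the inner Krylov solver for  J s = -F(x)."""
    Fx = F(x)
    Jv = lambda v: (F(x + eps * np.ravel(v)) - Fx) / eps
    J = LinearOperator((x.size, x.size), matvec=Jv)
    s, info = gmres(J, -Fx)
    return x + s

# Illustrative discretized nonlinear problem:  u'' = e^u  on a uniform grid, u = 0 at both ends.
n = 50
h = 1.0 / (n + 1)
def F(u):
    upad = np.concatenate(([0.0], u, [0.0]))
    return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

u = np.zeros(n)
for _ in range(10):
    u = jfnk_step(F, u)
print(np.linalg.norm(F(u)))
```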

14.
A superlinear convergence bound for rational Arnoldi approximations to functions of matrices is derived. This bound generalizes the well-known superlinear convergence bound for the conjugate gradient method to more general functions with finite singularities and to rational Krylov spaces. A constrained equilibrium problem from potential theory is used to characterize a max-min quotient of a nodal rational function underlying the rational Arnoldi approximation, where an additional external field is required for taking into account the poles of the rational Krylov space. The resulting convergence bound is illustrated by several numerical examples, in particular, the convergence of the extended Krylov method for the matrix square root.
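As a companion to the extended Krylov example mentioned at the end of the abstract, the sketch below approximates sqrt(A) b by Rayleigh-Ritz projection onto a small extended Krylov space built from positive and negative powers of A; this is a generic projection sketch under our own assumptions (SPD A, modest subspace dimension), not the method analyzed in the paper.

```python
import numpy as np
from scipy.linalg import sqrtm, qr, cho_factor, cho_solve

def extended_krylov_sqrt(A, b, k=8):
    """Approximate sqrt(A) @ b by projection onto the extended Krylov space
    span{b, A^{-1} b, A b, A^{-2} b, A^{2} b, ...} of dimension 2k + 1 (A assumed SPD)."""
    c = cho_factor(A)
    cols, v_inv, v_pow = [b], b, b
    for _ in range(k):
        v_inv = cho_solve(c, v_inv)              # next negative power of A applied to b
        v_pow = A @ v_pow                        # next positive power of A applied to b
        cols += [v_inv, v_pow]
    Q, _ = qr(np.column_stack(cols), mode='economic')   # orthonormal basis
    H = Q.T @ A @ Q                                     # small projected matrix
    return np.real(Q @ (sqrtm(H) @ (Q.T @ b)))          # f(A) b ~ Q f(H) Q^T b

rng = np.random.default_rng(4)
G = rng.standard_normal((300, 300))
A = G @ G.T + 300 * np.eye(300)                  # well-conditioned SPD test matrix
b = rng.standard_normal(300)
approx = extended_krylov_sqrt(A, b, k=8)
exact = np.real(sqrtm(A)) @ b                    # dense reference, affordable at this size
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```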

15.
In this paper, we propose an efficient combination model of the second-order ROF model and a simple fourth-order partial differential equation (PDE) for image denoising. The split Bregman method is used to convert the nonlinear combination model into a linear system in the outer iteration, and an algebraic multigrid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. At the same time, we prove that the model is strictly convex and has a unique global minimizer. We have also conducted a variety of numerical experiments to analyze the parameter selection criteria and discuss the performance of the fourth-order PDE in the combination model. The results show that our model can reduce blocky effects and our algorithm is efficient and robust for solving the proposed model.

16.
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
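A minimal sketch of the algorithm structure described above: Gauss-Newton outer steps, an inner conjugate gradient iteration on the normal equations that stops once the residual is reduced to a prescribed percentage, and the discrepancy principle as the outer stopping rule. The concrete operator, noise level, and tolerances are illustrative assumptions, not the paper's test problem.

```python
import numpy as np

def cg_relative(Amv, b, rel_tol=0.1, maxiter=200):
    """Plain CG for an SPD operator, stopped once the residual norm has been
    reduced to the prescribed fraction of its initial value."""
    x, r = np.zeros_like(b), b.copy()
    p, rs, r0 = r.copy(), r @ r, np.linalg.norm(b)
    for _ in range(maxiter):
        if np.sqrt(rs) <= rel_tol * r0:
            break
        Ap = Amv(p)
        alpha = rs / (p @ Ap)
        x, r = x + alpha * p, r - alpha * Ap
        rs_new = r @ r
        p, rs = r + (rs_new / rs) * p, rs_new
    return x

def truncated_newton(F, J, y_delta, delta, x0, tau=1.1, rel_tol=0.1, max_outer=30):
    """Gauss-Newton outer iteration with an inner CG on the normal equations
    J^T J s = J^T (y_delta - F(x)); the outer loop stops by the discrepancy principle."""
    x = x0.copy()
    for _ in range(max_outer):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * delta:     # discrepancy principle
            break
        Jx = J(x)
        x = x + cg_relative(lambda v: Jx.T @ (Jx @ v), Jx.T @ r, rel_tol)
    return x

# Illustrative mildly nonlinear, ill-posed problem F(x) = K(x + x^3/3) with a smoothing kernel K.
n = 100
t = np.linspace(0.0, 1.0, n)
K = np.exp(-100.0 * (t[:, None] - t[None, :])**2) / n
F = lambda x: K @ (x + x**3 / 3.0)
J = lambda x: K @ np.diag(1.0 + x**2)
rng = np.random.default_rng(5)
x_true = np.sin(2.0 * np.pi * t)
noise = 1e-3 * rng.standard_normal(n)
y_delta, delta = F(x_true) + noise, np.linalg.norm(noise)
x_rec = truncated_newton(F, J, y_delta, delta, np.zeros(n))
print(np.linalg.norm(F(x_rec) - y_delta), 1.1 * delta)
```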

17.
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303–321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix–vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only long-recurrence method that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

18.
We present a new algorithm for computing DWT-based preconditioners at a reduced cost, and we illustrate the savings that can be achieved with examples taken from the solution of a nonlinear problem by a Newton–Krylov method.

19.
Novel memory-efficient Arnoldi algorithms for solving matrix polynomial eigenvalue problems are presented. More specifically, we consider the case of matrix polynomials expressed in the Chebyshev basis, which is often numerically more appropriate than the standard monomial basis for a larger degree d. The standard way of solving polynomial eigenvalue problems proceeds by linearization, which increases the problem size by a factor d. Consequently, the memory requirements of Krylov subspace methods applied to the linearization grow by this factor. In this paper, we develop two variants of the Arnoldi method that build the Krylov subspace basis implicitly, in a way that only vectors of length equal to the size of the original problem need to be stored. The proposed variants are generalizations of the so-called quadratic Arnoldi method and two-level orthogonal Arnoldi procedure methods, which have been developed for the monomial case. We also show how the typical ingredients of a full implementation of the Arnoldi method, including shift-and-invert and restarting, can be incorporated. Numerical experiments are presented for matrix polynomials up to degree 30 arising from the interpolation of nonlinear eigenvalue problems, which stem from boundary element discretizations of PDE eigenvalue problems. Copyright © 2013 John Wiley & Sons, Ltd.
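The practical appeal of the Chebyshev basis can be seen in how a matrix polynomial given by Chebyshev coefficients is applied to a vector through the three-term recurrence T_{j+1}(x) = 2x T_j(x) - T_{j-1}(x), without forming the linearization. The sketch below (scalar coefficients, an illustrative matrix scaled so its spectrum lies roughly in [-1, 1]) checks the recurrence against a direct evaluation; it is a generic illustration, not the Arnoldi variants proposed in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_poly_times_vector(A, v, coeffs):
    """Compute p(A) @ v for p(x) = sum_j coeffs[j] T_j(x) in the Chebyshev basis,
    via the vector recurrence  T_{j+1}(A) v = 2 A T_j(A) v - T_{j-1}(A) v."""
    t_prev = v.copy()                      # T_0(A) v = v
    y = coeffs[0] * t_prev
    if len(coeffs) == 1:
        return y
    t_curr = A @ v                         # T_1(A) v = A v
    y = y + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * (A @ t_curr) - t_prev
        y = y + c * t_curr
    return y

# Check against a direct (monomial-basis) evaluation on a matrix whose spectrum lies in (-1, 1).
rng = np.random.default_rng(6)
G = rng.standard_normal((80, 80))
A = (G + G.T) / 2.0
A /= 1.1 * np.linalg.norm(A, 2)            # scale the (real) spectrum into (-1, 1)
v = rng.standard_normal(80)
coeffs = rng.standard_normal(12)
mono = C.cheb2poly(coeffs)                 # same polynomial, monomial coefficients
P = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(mono))
print(np.linalg.norm(chebyshev_poly_times_vector(A, v, coeffs) - P @ v))
```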

20.
白中治  仇寿霞 《计算数学》2002,24(1):113-128
1. Introduction. Consider the large sparse system of linear algebraic equations (1). To exploit the sparse structure of the coefficient matrix and reduce storage and computational cost as much as possible, Krylov subspace iterative methods [1,16,23] and their preconditioned variants [6,8,13,18,19] are usually effective and practical methods for solving (1). When the coefficient matrix is symmetric positive definite, the conjugate gradient method (CG(
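For the symmetric positive definite case mentioned above, a minimal preconditioned conjugate gradient solve with SciPy might look as follows; the test matrix (a 2-D five-point Laplacian), the Jacobi preconditioner, and the tolerances are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Illustrative SPD sparse system: the 2-D five-point Laplacian on a 64 x 64 grid.
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()           # symmetric positive definite
b = np.ones(A.shape[0])

# Jacobi (diagonal) preconditioner applied as a LinearOperator.
d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: d_inv * np.ravel(x))

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```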
