Similar Documents
20 similar documents found.
1.
In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax=λBx. We first formulate a version of inexact inverse subspace iteration in which the approximation from one step is used as an initial approximation for the next step. We then analyze the convergence property, which relates the accuracy of the inner iteration to the convergence rate of the outer iteration. In particular, the linear convergence of inverse subspace iteration is preserved. Numerical examples are given to demonstrate the theoretical results.
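
To make the scheme above concrete, here is a minimal Python/SciPy sketch of inexact inverse subspace iteration for Ax = λBx. The SPD tridiagonal test pencil, the CG inner solver, and the fixed inner tolerance `inner_rtol` are illustrative assumptions, not the authors' setup; the only features carried over from the abstract are the inexact inner solves and the reuse of the previous block as the starting guess.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def inexact_inv_subspace(A, B, p, inner_rtol=1e-2, maxit=50):
    """Block inverse iteration for A x = lambda B x with inexact inner solves."""
    n = A.shape[0]
    X = np.linalg.qr(np.random.default_rng(0).standard_normal((n, p)))[0]
    for _ in range(maxit):
        BX = B @ X
        # Inexact inner solves A y_j ~= (B X)_j; the previous block column is
        # reused as the starting guess ("one step feeds the next").
        Y = np.column_stack([cg(A, BX[:, j], x0=X[:, j], rtol=inner_rtol)[0]
                             for j in range(p)])
        Q, _ = np.linalg.qr(Y)
        # Rayleigh-Ritz extraction for the pencil (A, B) on span(Q).
        theta, S = eigh(Q.T @ (A @ Q), Q.T @ (B @ Q))
        X = Q @ S
    return theta, X

n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
B = diags(np.linspace(1.0, 2.0, n), format="csr")
theta, X = inexact_inv_subspace(A, B, p=3)
print("three smallest Ritz values:", theta)
```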

2.
The discretization of eigenvalue problems for partial differential operators is a major source of matrix eigenvalue problems of very large dimension, of which only some of the smallest eigenvalues together with the eigenvectors are to be determined. Preconditioned inverse iteration (a “matrix-free” method) derives from the well-known inverse iteration procedure in such a way that the associated system of linear equations is solved approximately by using a (multigrid) preconditioner. A new convergence analysis for preconditioned inverse iteration is presented. The preconditioner is assumed to satisfy a bound on the spectral radius of the error propagation matrix, resulting in a simple geometric setup. In this first part the case of poorest convergence, depending on the choice of the preconditioner, is analyzed. In the second part the dependence on all initial vectors having a fixed Rayleigh quotient is considered. The given theory provides sharp convergence estimates for the eigenvalue approximations, showing that multigrid eigenvalue/eigenvector computations can be carried out with efficiency comparable to that of multigrid methods for boundary value problems.
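
A hedged sketch of the fixed-step preconditioned inverse iteration described above: x is updated by x ← x − M⁻¹(Ax − ρ(x)x). An incomplete LU factorization stands in for the multigrid preconditioner assumed in the paper; any M whose error-propagation matrix I − M⁻¹A has spectral radius below one (the bound the analysis assumes) would play the same role.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu

def pinvit(A, apply_Minv, x0, maxit=200, tol=1e-8):
    """Fixed-step preconditioned inverse iteration for the smallest eigenpair."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        rho = x @ (A @ x)              # Rayleigh quotient (||x|| = 1)
        r = A @ x - rho * x            # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        x = x - apply_Minv(r)          # preconditioned correction step
        x /= np.linalg.norm(x)
    return x @ (A @ x), x

n = 400
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
# Stand-in for the (multigrid) preconditioner: an incomplete LU of A.
ilu = spilu(A, drop_tol=1e-3)
rho, x = pinvit(A, ilu.solve, np.random.default_rng(1).standard_normal(n))
print("smallest eigenvalue approx:", rho)
```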

3.
In this paper we study inexact inverse iteration for solving the generalised eigenvalue problem Ax = λMx. We show that inexact inverse iteration is a modified Newton method and hence obtain convergence rates for various versions of inexact inverse iteration for the calculation of an algebraically simple eigenvalue. In particular, if the inexact solves are carried out with a tolerance chosen proportional to the eigenvalue residual, then quadratic convergence is achieved. We also show how modifying the right hand side in inverse iteration still provides a convergent method, but the rate of convergence will be quadratic only under certain conditions on the right hand side. We discuss the implications of this for the preconditioned iterative solution of the linear systems. Finally we introduce a new ILU preconditioner which is a simple modification to the usual preconditioner, but which has advantages both for the standard form of inverse iteration and for the version with a modified right hand side. Numerical examples are given to illustrate the theoretical results. AMS subject classification (2000): 65F15, 65F10
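
The following Python sketch illustrates the tolerance choice highlighted above: each shifted solve is carried out by GMRES only to an accuracy proportional to the current eigenvalue residual. The Rayleigh-quotient shift, the constant c, and the Laplacian test pencil are assumptions made for this illustration; the ILU preconditioner and the modified right-hand-side variants discussed in the paper are not reproduced here.

```python
import numpy as np
from scipy.sparse import diags, eye
from scipy.sparse.linalg import gmres

def inexact_inverse_iteration(A, M, x0, c=0.1, maxit=25, tol=1e-10):
    """Inexact inverse iteration for A x = lambda M x with a Rayleigh-quotient
    shift; the shifted solves are only as accurate as the current residual."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        rho = (x @ (A @ x)) / (x @ (M @ x))        # Rayleigh quotient shift
        r = A @ x - rho * (M @ x)                  # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        # Inner tolerance proportional to ||r||: the "inexact" ingredient.
        y, _ = gmres(A - rho * M, M @ x, x0=x,
                     rtol=min(0.5, c * np.linalg.norm(r)))
        x = y / np.linalg.norm(y)
    return (x @ (A @ x)) / (x @ (M @ x)), x

n = 300
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
M = eye(n, format="csr")
rho, x = inexact_inverse_iteration(A, M, np.random.default_rng(2).standard_normal(n))
print("converged eigenvalue:", rho)
```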

4.
We present a method to compute the lowest eigenpairs of a generalized eigenvalue problem resulting from the discretization of a stationary Schrödinger equation by a fourth order finite difference scheme of Numerov type. We propose to use an inverse iteration method combined with a Rayleigh-Ritz procedure to correct several eigenvectors at the same time. The linear systems in the inverse iteration scheme are regularized by projections on lower dimensional spaces and approximately solved by a multigrid algorithm. We apply the method to the electronic structure calculation in quantum chemistry.

5.
In this paper we discuss an abstract iteration scheme for the calculation of the smallest eigenvalue of an elliptic operator eigenvalue problem. A short and geometric proof based on the preconditioned inverse iteration (PINVIT) for matrices (Knyazev and Neymeyr, SIAM J Matrix Anal 31:621–628, 2009) is extended to the case of operators. We show that convergence is retained up to any tolerance if one uses only approximate applications of operators, which leads to the perturbed preconditioned inverse iteration (PPINVIT). We then analyze the Besov regularity of the eigenfunctions of the Poisson eigenvalue problem on a polygonal domain, showing the advantage of an adaptive solver over uniform refinement when using a stable wavelet basis. A numerical example for PPINVIT, applied to the model problem on the L-shaped domain, is shown to reproduce the predicted behaviour.

6.
In contrast to the rich variety of algorithms for computing eigenvalues, algorithms for computing the eigenvector corresponding to an eigenvalue that is already known rather accurately are far less common; existing methods include the basic inverse iteration [1][2][4][5] and the alternating method [3]. To date, algorithms for computing eigenvectors have all been based on inverse iteration, and convergence is judged by the size of the residual; the algorithm in this paper is no exception. The aim of this paper is to compute the eigenvector of an irreducible real symmetric tridiagonal matrix T = [b_{j-1}, a_j, b_j] corresponding to some eigenvalue λ_i for which an approximation λ̃ has been obtained. We first consider the following example. Example 1: take T to be the Wilkinson matrix W_201^- of order 201, let λ be the computed largest eigenvalue, and take e_1, e_100, e_201 and e = (1,1,…,1)^T in turn as the initial iteration vectors. Figure 1 shows the convergence speed of inverse iteration.
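
A rough reconstruction of Example 1 in Python, assuming W_201^- denotes the usual Wilkinson matrix of order 201 with unit off-diagonals and diagonal 100, 99, …, 1, 0, −1, …, −100; the number of iteration steps and the printed residuals are illustrative, and Figure 1 itself is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, lu_factor, lu_solve

# Assumed definition of W_201^-: unit off-diagonals, diagonal 100, 99, ..., 0, ..., -100.
m = 100
d = np.arange(m, -m - 1, -1, dtype=float)                  # 100, 99, ..., -100
e = np.ones(2 * m)                                         # unit off-diagonals
lam_hat = eigh_tridiagonal(d, e, eigvals_only=True)[-1]    # computed largest eigenvalue

T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
n = T.shape[0]
lu = lu_factor(T - lam_hat * np.eye(n))                    # factor the shifted matrix once

def inverse_iteration(x, steps=5):
    """Plain inverse iteration from a given start vector; returns residual history."""
    hist = []
    for _ in range(steps):
        x = lu_solve(lu, x)
        x /= np.linalg.norm(x)
        hist.append(np.linalg.norm(T @ x - lam_hat * x))
    return hist

I = np.eye(n)
for name, x0 in [("e_1", I[:, 0]), ("e_100", I[:, 99]),
                 ("e_201", I[:, 200]), ("ones", np.ones(n))]:
    print(name, ["%.1e" % r for r in inverse_iteration(x0)])
```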

7.
Summary. We consider an inverse eigenvalue problem which includes the classical additive and multiplicative inverse eigenvalue problems as special cases. For the numerical solution of this problem we propose a Newton iteration process and compare it with a known method. Finally, we apply it to a numerical example.
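
As an illustration of such a Newton iteration, here is a sketch for the classical additive case A(c) = A_0 + diag(c): the mismatch between the computed and prescribed eigenvalues is driven to zero using the standard derivative ∂λ_i/∂c_j = (q_i)_j² for simple eigenvalues of a symmetric matrix. The symmetric setting, the diagonal parametrization, and the test instance are assumptions of this sketch, not the paper's more general formulation.

```python
import numpy as np

def additive_iep_newton(A0, lam_target, c0, maxit=30, tol=1e-12):
    """Newton iteration for the classical additive inverse eigenvalue problem:
    find c so that A0 + diag(c) has the prescribed (simple) spectrum lam_target."""
    c = c0.copy()
    lam_target = np.sort(lam_target)
    for _ in range(maxit):
        w, Q = np.linalg.eigh(A0 + np.diag(c))
        F = w - lam_target                       # eigenvalue mismatch
        if np.linalg.norm(F) < tol:
            break
        # d lambda_i / d c_j = Q[j, i]**2 for A_j = e_j e_j^T and simple eigenvalues,
        # so the Jacobian of F is (Q**2)^T.
        c -= np.linalg.solve((Q**2).T, F)
    return c

rng = np.random.default_rng(3)
n = 8
R = rng.standard_normal((n, n))
A0 = 0.1 * (R + R.T)                             # small symmetric off-diagonal part
np.fill_diagonal(A0, 0.0)
c_true = np.linspace(1.0, 2.0 * n, n)            # build a solvable instance
lam_target = np.linalg.eigvalsh(A0 + np.diag(c_true))
c = additive_iep_newton(A0, lam_target, c0=c_true + 0.1 * rng.standard_normal(n))
print("spectrum matched:", np.allclose(np.linalg.eigvalsh(A0 + np.diag(c)), lam_target))
```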

8.
An algorithm is devised that improves an eigenvector approximation corresponding to the largest (or smallest) eigenvalue of a large and sparse symmetric matrix. It solves the linear systems that arise in inverse iteration by means of the conjugate gradient (c-g) algorithm. Stopping criteria are developed which ensure an accurate result and, in many cases, give convergence after a small number of c-g steps.
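
A minimal sketch of the idea: inverse iteration in which each linear system is solved by (truncated) conjugate gradients. It assumes the smallest-eigenvalue case for an SPD matrix with zero shift, and replaces the paper's adaptive stopping criteria with a fixed relative tolerance and an iteration cap.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def refine_smallest_eigenpair(A, x, outer_steps=10, cg_rtol=1e-2, cg_maxiter=50):
    """Inverse iteration for the smallest eigenpair of an SPD matrix, with each
    system A y = x solved only approximately by (truncated) CG."""
    x = x / np.linalg.norm(x)
    for _ in range(outer_steps):
        y, _ = cg(A, x, x0=x, rtol=cg_rtol, maxiter=cg_maxiter)
        x = y / np.linalg.norm(y)
        rho = x @ (A @ x)
        print("Rayleigh quotient %.10f, residual %.2e"
              % (rho, np.linalg.norm(A @ x - rho * x)))
    return rho, x

n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x0 = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))        # rough eigenvector guess
x0 += 0.05 * np.random.default_rng(4).standard_normal(n)  # ... deliberately perturbed
rho, x = refine_smallest_eigenpair(A, x0)
```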

9.
Computing the eigenvalue of smallest modulus and its corresponding eigenvector of an irreducible nonsingular M-matrix A is considered. It is shown that if the entries of A are known with high relative accuracy, its eigenvalue of smallest modulus and each component of the corresponding eigenvector will be determined to much higher accuracy than the standard perturbation theory suggests. An algorithm is presented to compute them with a small componentwise backward error, which is consistent with the perturbation results.

10.
Klaus Neymeyr, PAMM 2011, 11(1):749–750
Gradient iterations for the minimization of the Rayleigh quotient are robust and (with proper preconditioning) fast iterations for computing approximations of the smallest eigenvalue of a self-adjoint elliptic partial differential operator. Up to now, sharp convergence estimates were known only for the basic fixed-step-size preconditioned gradient iteration (also called preconditioned inverse iteration). Recently, sharp convergence estimates have been proved for optimal-step-size (preconditioned) gradient iterations. These new estimates are compared with previous results. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
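
The optimal-step-size variant mentioned above can be realized by a two-dimensional Rayleigh-Ritz step over span{x, M⁻¹r}, which picks the best admissible step automatically; the sketch below assumes an SPD matrix and uses an incomplete LU solve as a stand-in preconditioner (compare the fixed-step PINVIT sketch under entry 2).

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import diags
from scipy.sparse.linalg import spilu

def preconditioned_gradient_opt_step(A, apply_Minv, x0, maxit=100, tol=1e-8):
    """Preconditioned gradient iteration for the Rayleigh quotient with the
    optimal step size, realized as a 2-D Rayleigh-Ritz step on span{x, M^{-1}r}."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        rho = x @ (A @ x)
        r = A @ x - rho * x
        if np.linalg.norm(r) < tol:
            break
        w = apply_Minv(r)                          # preconditioned gradient direction
        V, _ = np.linalg.qr(np.column_stack([x, w]))
        theta, S = eigh(V.T @ (A @ V))             # 2x2 projected eigenproblem
        x = V @ S[:, 0]                            # Ritz vector of the smaller Ritz value
    return x @ (A @ x), x

n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
ilu = spilu(A, drop_tol=1e-2)                      # stand-in preconditioner
rho, x = preconditioned_gradient_opt_step(A, ilu.solve,
                                          np.random.default_rng(5).standard_normal(n))
print("smallest eigenvalue approx:", rho)
```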

11.
In order to compute the smallest eigenvalue together with an eigenfunction of a self-adjoint elliptic partial differential operator one can use the preconditioned inverse iteration scheme, also called the preconditioned gradient iteration. For this iterative eigensolver estimates on the poorest convergence have been published by several authors. In this paper estimates on the fastest possible convergence are derived. To this end the convergence problem is reformulated as a two-level constrained optimization problem for the Rayleigh quotient. The new convergence estimates reveal a wide range between the fastest possible and the slowest convergence.

12.
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to nonlinear eigenvalue problems with very large sparse ill-conditioned matrices monotonically depending on the spectral parameter. To compute the smallest eigenvalue of large-scale matrix nonlinear eigenvalue problems, we suggest preconditioned iterative methods: preconditioned simple iteration method, preconditioned steepest descent method, and preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence and derive grid-independent error estimates for these methods. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a model problem.

13.
We introduce an adaptive finite element method for computing electromagnetic guided waves in a closed, inhomogeneous, pillared three-dimensional waveguide at a given frequency, based on the inverse iteration method. The problem is formulated as a generalized eigenvalue problem. By modifying the exact inverse iteration algorithm for the eigenvalue problem, we design a new adaptive inverse iteration finite element algorithm. Adaptive finite element methods based on a posteriori error estimates are known to be successful in resolving singularities of eigenfunctions, which deteriorate the finite element convergence. We construct an a posteriori error estimator for the electromagnetic guided wave problem. Numerical results are reported to illustrate the quasi-optimal performance of our adaptive inverse iteration finite element method.

14.
We study inexact subspace iteration for solving generalized non-Hermitian eigenvalue problems with spectral transformation, with focus on a few strategies that help accelerate preconditioned iterative solution of the linear systems of equations arising in this context. We provide new insights into a special type of preconditioner with “tuning” that has been studied for this algorithm applied to standard eigenvalue problems. Specifically, we propose an alternative way to use the tuned preconditioner to achieve similar performance for generalized problems, and we show that these performance improvements can also be obtained by solving an inexpensive least squares problem. In addition, we show that the cost of iterative solution of the linear systems can be further reduced by using deflation of converged Schur vectors, special starting vectors constructed from previously solved linear systems, and iterative linear solvers with subspace recycling. The effectiveness of these techniques is demonstrated by numerical experiments.
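
One way to read the "tuning" idea is that the preconditioner P is replaced by T = P + (A − P)XXᵀ, so that T agrees with A on the current approximate invariant subspace spanned by X; T⁻¹ can then be applied using only solves with P via the Sherman-Morrison-Woodbury identity. The sketch below is an assumption-laden illustration of that mechanism (standard eigenvalue problem, ILU in place of the paper's preconditioner, an artificial right-hand side of the form A times the first basis vector), not the authors' construction.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, LinearOperator, gmres

def tuned_prec_inverse(A, P_solve, X):
    """LinearOperator applying T^{-1} for the tuned preconditioner
    T = P + (A - P) X X^T  (so that T X = A X for orthonormal X),
    using Sherman-Morrison-Woodbury; only solves with P are required."""
    n, p = X.shape
    AX = np.asarray(A @ X)
    U = np.column_stack([P_solve(AX[:, j]) for j in range(p)]) - X   # P^{-1} A X - X
    S = np.eye(p) + X.T @ U                                          # p x p capacitance matrix
    def apply(b):
        z = P_solve(b)
        return z - U @ np.linalg.solve(S, X.T @ z)
    return LinearOperator((n, n), matvec=apply)

n = 500
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
ilu = spilu(A, drop_tol=1e-2)
X, _ = np.linalg.qr(np.random.default_rng(6).standard_normal((n, 2)))  # current subspace basis
b = np.asarray(A @ X[:, 0])      # right-hand side of the kind met in subspace iteration

def count_iters(M):
    its = []
    _x, _info = gmres(A, b, M=M, rtol=1e-10,
                      callback=lambda res: its.append(res), callback_type="pr_norm")
    return len(its)

print("GMRES iterations, plain ILU :", count_iters(LinearOperator((n, n), matvec=ilu.solve)))
print("GMRES iterations, tuned ILU :", count_iters(tuned_prec_inverse(A, ilu.solve, X)))
```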

15.
1. Introduction. Consider the following inverse eigenvalue problem: Problem G. Let A(x) ∈ R^{n×n} be a real analytic matrix-valued function of x ∈ R^n. Find a point x* ∈ R^n such that the matrix A(x*) has a given spectral set L = {λ_1, …, λ_n}. Here λ_1, …, λ_n are given complex numbers, closed under complex conjugation. This kind of problem arises often in various areas of application (see Friedland et al. (1987) and the references contained therein). The two special cases of Problem G which are frequently encountered are the following problems proposed by Downing and Householder (19…

16.
This paper deals with the convergence analysis of various preconditioned iterations to compute the smallest eigenvalue of a discretized self-adjoint and elliptic partial differential operator. For these eigenproblems several preconditioned iterative solvers are known, but unfortunately, the convergence theory for some of these solvers is not very well understood. The aim is to show that preconditioned eigensolvers (like the preconditioned steepest descent iteration (PSD) and the locally optimal preconditioned conjugate gradient method (LOPCG)) can be interpreted as truncated approximate Krylov subspace iterations. In the limit of preconditioning with the exact inverse of the system matrix (such preconditioning can be approximated by multiple steps of a preconditioned linear solver) the iterations behave like Invert-Lanczos processes for which convergence estimates are derived.
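
For readers who want to experiment with a locally optimal preconditioned solver of the kind analyzed here, SciPy ships one as scipy.sparse.linalg.lobpcg; the snippet below is a plain usage sketch with an ILU solve standing in for the preconditioner, not a reproduction of the paper's Invert-Lanczos analysis.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg, spilu, LinearOperator

n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
ilu = spilu(A, drop_tol=1e-3)
M = LinearOperator((n, n), matvec=ilu.solve)           # preconditioner ~ A^{-1}
X = np.random.default_rng(7).standard_normal((n, 4))   # block of four starting vectors
w, V = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print("four smallest eigenvalues:", np.sort(w))
```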

17.
Methods for the polynomial eigenvalue problem sometimes need to be followed by an iterative refinement process to improve the accuracy of the computed solutions. This can be accomplished by means of a Newton iteration tailored to matrix polynomials. The computational cost of this step is usually higher than the cost of computing the initial approximations, due to the need to solve multiple linear systems of equations with a bordered coefficient matrix. An effective parallelization is thus important, and we propose different approaches for the message-passing scenario. Some schemes use a subcommunicator strategy in order to improve the scalability whenever direct linear solvers are used. We show performance results for the various alternatives implemented in the context of SLEPc, the Scalable Library for Eigenvalue Problem Computations. Copyright © 2016 John Wiley & Sons, Ltd.
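
A serial, single-eigenpair sketch of Newton refinement with a bordered coefficient matrix of the kind the paper parallelizes: each step solves [[P(λ), P′(λ)x], [vᴴ, 0]] [Δx; Δλ] = −[P(λ)x; vᴴx − 1]. The quadratic test problem, the companion linearization used for the starting guess, and the normalization vector v are assumptions of this sketch; SLEPc itself is not used here.

```python
import numpy as np

def newton_refine_pep(coeffs, x, lam, v=None, maxit=20, tol=1e-12):
    """Newton refinement of one eigenpair of P(lam) = sum_k lam**k * coeffs[k];
    each step solves a single bordered linear system."""
    n = coeffs[0].shape[0]
    if v is None:
        v = x / np.vdot(x, x)                       # normalization v^H x = 1
    for _ in range(maxit):
        P = sum(lam**k * Ak for k, Ak in enumerate(coeffs))
        dP = sum(k * lam**(k - 1) * Ak for k, Ak in enumerate(coeffs) if k > 0)
        F = np.concatenate([P @ x, [np.vdot(v, x) - 1.0]])
        if np.linalg.norm(F) < tol:
            break
        J = np.block([[P, (dP @ x)[:, None]],
                      [np.conj(v)[None, :], np.zeros((1, 1))]])
        delta = np.linalg.solve(J, -F)              # bordered system solve
        x = x + delta[:n]
        lam = lam + delta[n]
    return lam, x

# Toy quadratic problem (lam^2 M + lam C + K) x = 0; a rough pair from the
# companion linearization is perturbed and then refined.
rng = np.random.default_rng(8)
n = 6
M, C, K = (rng.standard_normal((n, n)) for _ in range(3))
lin = np.block([[np.zeros((n, n)), np.eye(n)],
                [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
w, Z = np.linalg.eig(lin)
lam0 = w[0] * (1 + 1e-3)
x0 = Z[:n, 0] / np.linalg.norm(Z[:n, 0]) + 1e-3
lam, x = newton_refine_pep([K, C, M], x0, lam0)
print("refined residual:", np.linalg.norm((lam**2 * M + lam * C + K) @ x))
```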

18.
We propose a numerical method to verify the existence and local uniqueness of solutions to nonlinear elliptic equations. We numerically construct a set containing solutions which satisfies the hypothesis of Banach's fixed point theorem in a certain Sobolev space. By using the finite element approximation and constructive error estimates, we calculate the eigenvalue bound with the smallest absolute value in order to evaluate the norm of the inverse of the linearized operator. Utilizing this bound, we derive a verification condition of the Newton-Kantorovich type. Numerical examples are presented.

19.
We introduce a type of full multigrid method for the nonlinear eigenvalue problem. The main idea is to transform the solution of the nonlinear eigenvalue problem into a series of solutions of the corresponding linear boundary value problems on the sequence of finite element spaces, together with nonlinear eigenvalue problems on the coarsest finite element space. The linearized boundary value problems are solved by some multigrid iterations. Besides the multigrid iteration, any other efficient iteration method for solving boundary value problems can serve as the linear problem solver. We prove that the computational work of this new scheme is truly optimal, the same as that of solving the corresponding linear boundary value problem. This type of iteration scheme therefore improves the overall efficiency of solving nonlinear eigenvalue problems. Some numerical experiments are presented to validate the efficiency of the new method.

20.
It is well known that if we have an approximate eigenvalue λ̄ of a normal matrix A of order n, a good approximation to the corresponding eigenvector u can be computed by one step of inverse iteration, provided the position, say kmax, of the largest component of u is known. In this paper we give a detailed theoretical analysis of the relations between the eigenvector u and the vectors x_k, k = 1, …, n, obtained by simple inverse iteration, i.e., the solutions of the systems (A − λ̄I)x = e_k with e_k the kth column of the identity matrix I. We prove that, under some weak conditions, the index kmax has certain optimal properties related to the smallest residual and the smallest approximation error to u in the spectral and Frobenius norms. We also prove that the normalized absolute vector v = |u|/||u||_∞ of u can be approximated by the normalized vector of (||x_1||_2, …, ||x_n||_2)^T. We also give some upper bounds on |u(k)| for such "optimal" indices, including Fernando's heuristic for kmax, without any assumptions. A stable double orthogonal factorization method and a simpler but possibly less stable approach are proposed for locating the largest component of u.
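
A small numerical illustration of the quantities analyzed above (with 0-based indices): for an accurate approximate eigenvalue λ̄, the solutions x_k of (A − λ̄I)x = e_k have residual exactly 1/||x_k||_2 after normalization, so the index maximizing ||x_k||_2 is a natural choice of kmax, and the vector of norms tracks |u|. The random tridiagonal test matrix and the perturbation 1e-10 are assumptions of this sketch; the factorization-based location strategies proposed in the paper are not implemented.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(9)
n = 60
a = rng.standard_normal(n)                        # diagonal of T
b = rng.standard_normal(n - 1)                    # off-diagonal of T
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
w, U = np.linalg.eigh(T)
i = n // 2
lam_hat = w[i] + 1e-10                            # accurate approximate eigenvalue
u = U[:, i]                                       # reference eigenvector

lu = lu_factor(T - lam_hat * np.eye(n))
X = np.column_stack([lu_solve(lu, e) for e in np.eye(n)])   # x_k = (T - lam I)^{-1} e_k
norms = np.linalg.norm(X, axis=0)                 # residual of normalized x_k is 1/norms[k]

print("index of largest |u| component:", np.argmax(np.abs(u)),
      "  argmax_k ||x_k||_2:", np.argmax(norms))
print("|| |u|/max|u| - norms/max(norms) ||_2 =",
      np.linalg.norm(np.abs(u) / np.max(np.abs(u)) - norms / np.max(norms)))
```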
