Convergence Theory for Preconditioned Eigenvalue Solvers in a Nutshell
Authors:Merico E Argentati  Andrew V Knyazev  Klaus Neymeyr  Evgueni E Ovtchinnikov  Ming Zhou
Institution:1.Department of Mathematical and Statistical Sciences, University of Colorado Denver, Denver, USA; 2.Mitsubishi Electric Research Laboratories, Cambridge, USA; 3.Universität Rostock, Institut für Mathematik, Rostock, Germany; 4.Numerical Analysis Group, Building R18, STFC Rutherford Appleton Laboratory, Didcot, UK
Abstract:Preconditioned iterative methods for the numerical solution of large matrix eigenvalue problems are gaining importance in application areas ranging from materials science to data mining. Some of them, e.g., those using multilevel preconditioning for elliptic differential operators or graph Laplacian eigenvalue problems, exhibit almost optimal complexity in practice; i.e., their computational cost to calculate a fixed number of eigenvalues and eigenvectors grows linearly with the matrix problem size. Theoretical justification of their optimality requires convergence rate bounds that do not deteriorate as the problem size increases. Such bounds were pioneered by E. D’yakonov over three decades ago, but to date only a handful have been derived, mostly for symmetric eigenvalue problems, and only a few of the known bounds are sharp. One of them is proved in doi: 10.1016/S0024-3795(01)00461-X for the simplest preconditioned eigensolver with a fixed step size. The original proof has been greatly simplified and shortened in doi: 10.1137/080727567 by using a gradient flow integration approach. In the present work, we give an even more succinct proof, using novel ideas based on Karush–Kuhn–Tucker theory and nonlinear programming.
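To make the abstract concrete, the "simplest preconditioned eigensolver with a fixed step size" it refers to is the preconditioned gradient iteration (often called PINVIT): x ← x − T (A x − ρ(x) x), where ρ is the Rayleigh quotient and T is a preconditioner approximating A⁻¹. The sketch below is an illustrative implementation under these standard assumptions (symmetric positive definite A, step size absorbed into T); it is not code from the cited papers.

```python
import numpy as np

def pinvit(A, T, x, iters=200):
    """Fixed-step preconditioned gradient eigensolver (PINVIT sketch):
        x_{k+1} = x_k - T (A x_k - rho(x_k) x_k),
    where rho(x) = x^T A x / x^T x is the Rayleigh quotient.
    Under standard assumptions on the preconditioner T ~ A^{-1},
    the iteration converges to the smallest eigenpair of symmetric A."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ (A @ x)          # Rayleigh quotient (x is unit-norm)
        r = A @ x - rho * x        # eigenvalue residual
        x = x - T @ r              # fixed-step preconditioned update
        x = x / np.linalg.norm(x)  # renormalize
    return rho, x

# Toy example: diagonal A with the exact inverse as preconditioner,
# in which case the update reduces to (scaled) inverse iteration.
A = np.diag([1.0, 2.0, 5.0, 10.0])
T = np.diag([1.0, 0.5, 0.2, 0.1])
lam, v = pinvit(A, T, np.ones(4))  # lam -> 1.0, the smallest eigenvalue
```

The sharp convergence bounds discussed in the abstract quantify how fast ρ(x) decreases toward the smallest eigenvalue per step, with a rate depending only on the preconditioner quality and the spectral gap, not on the problem size.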
Keywords:
This article is indexed by SpringerLink and other databases.