Similar Documents (20 results)
1.
This paper deals with the convergence analysis of various preconditioned iterations to compute the smallest eigenvalue of a discretized self-adjoint and elliptic partial differential operator. For these eigenproblems several preconditioned iterative solvers are known, but unfortunately, the convergence theory for some of these solvers is not very well understood. The aim is to show that preconditioned eigensolvers (like the preconditioned steepest descent iteration (PSD) and the locally optimal preconditioned conjugate gradient method (LOPCG)) can be interpreted as truncated approximate Krylov subspace iterations. In the limit of preconditioning with the exact inverse of the system matrix (such preconditioning can be approximated by multiple steps of a preconditioned linear solver) the iterations behave like Invert-Lanczos processes, for which convergence estimates are derived.
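As a concrete illustration of the PSD iteration discussed above, here is a minimal sketch for a standard symmetric eigenproblem; the function name `psd_smallest_eig` and the preconditioner callable `Binv` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def psd_smallest_eig(A, Binv, x0, tol=1e-8, maxit=500):
    """Preconditioned steepest descent (PSD) for the smallest
    eigenvalue of a symmetric positive definite matrix A.
    Binv applies an approximate inverse of A (the preconditioner)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        lam = x @ A @ x                       # Rayleigh quotient (||x|| = 1)
        r = A @ x - lam * x                   # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        p = Binv(r)                           # preconditioned residual
        # Rayleigh-Ritz on span{x, p}: keep the lowest Ritz pair
        V, _ = np.linalg.qr(np.column_stack([x, p]))
        w, U = np.linalg.eigh(V.T @ A @ V)
        x = V @ U[:, 0]
    return x @ A @ x, x
```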

2.
The paper presents convergence estimates for a class of iterative methods for solving partial generalized symmetric eigenvalue problems whereby a sequence of subspaces containing approximations to eigenvectors is generated by combining the Rayleigh-Ritz and the preconditioned steepest descent/ascent methods. The paper uses a novel approach of studying the convergence of groups of eigenvalues, rather than individual ones, to obtain new convergence estimates for this class of methods that are cluster robust, i.e. do not involve distances between computed eigenvalues.

3.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm can significantly accelerate the convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden-Fletcher-Goldfarb-Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example. Copyright © 2012 John Wiley & Sons, Ltd.
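A rough sketch of the windowed N-GMRES acceleration idea, written in the Anderson-acceleration style: one preconditioner step followed by a least-squares recombination of past iterates that minimizes the linearized gradient norm. The callable `precond_step` (e.g., one steepest descent step) and the omission of the globalizing line search are simplifications of this sketch, not details from the paper:

```python
import numpy as np

def ngmres(g, precond_step, x0, window=5, tol=1e-8, maxit=200):
    """N-GMRES-style acceleration sketch: a preconditioner step,
    then a least-squares recombination of windowed iterates that
    minimizes the linearized gradient norm."""
    xs, gs = [x0], [g(x0)]
    x = x0
    for _ in range(maxit):
        xbar = precond_step(x)                # preconditioner step
        gbar = g(xbar)
        if np.linalg.norm(gbar) < tol:
            return xbar
        G = np.column_stack([gbar - gi for gi in gs])
        X = np.column_stack([xbar - xi for xi in xs])
        beta, *_ = np.linalg.lstsq(G, -gbar, rcond=None)
        x = xbar + X @ beta                   # accelerated iterate
        xs.append(x); gs.append(g(x))
        if len(xs) > window:                  # sliding window
            xs.pop(0); gs.pop(0)
    return x
```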

4.
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to nonlinear eigenvalue problems with very large sparse ill-conditioned matrices monotonically depending on the spectral parameter. To compute the smallest eigenvalue of large-scale matrix nonlinear eigenvalue problems, we suggest preconditioned iterative methods: preconditioned simple iteration method, preconditioned steepest descent method, and preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence and derive grid-independent error estimates for these methods. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a model problem.

5.
Summary. In this paper, we consider some nonlinear inexact Uzawa methods for iteratively solving linear saddle-point problems. By means of a new technique, we first give an essential improvement on the convergence results of Bramble-Pasciak-Vassilev for a known nonlinear inexact Uzawa algorithm. Then we propose two new algorithms, which can be viewed as a combination of the known nonlinear inexact Uzawa method with the classical steepest descent method and conjugate gradient method, respectively. The two new algorithms converge under very practical conditions and do not require any a priori estimates on the minimal and maximal eigenvalues of the preconditioned systems involved, including the preconditioned Schur complement. Numerical results of the algorithms applied to the Stokes problem and a purely linear system of algebraic equations are presented to show the efficiency of the algorithms. Received December 8, 1999 / Revised version received September 8, 2001 / Published online March 8, 2002. * The work of this author was partially supported by a grant from The Institute of Mathematical Sciences, CUHK. ** The work of this author was partially supported by Hong Kong RGC Grants CUHK 4292/00P and CUHK 4244/01P.
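For orientation, a minimal sketch of the basic linear inexact Uzawa iteration; the paper's nonlinear variants replace the Schur step by steepest descent or conjugate gradient steps with adaptive steplengths, which this sketch does not reproduce. The callables `solveA` and `solveS` are assumed approximate solvers:

```python
import numpy as np

def inexact_uzawa(A, B, f, g, solveA, solveS, x0, y0, tol=1e-8, maxit=500):
    """Basic inexact Uzawa sketch for [A B^T; B 0][x; y] = [f; g].
    solveA ~ A^{-1} (inner solve), solveS ~ inverse Schur-complement
    preconditioner; both are assumed approximate solvers."""
    x, y = x0, y0
    for _ in range(maxit):
        x = x + solveA(f - A @ x - B.T @ y)   # approximate inner solve
        ry = B @ x - g                        # constraint residual
        if np.linalg.norm(ry) < tol:
            break
        y = y + solveS(ry)                    # multiplier update
    return x, y
```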

6.
Relaxed Steepest Descent and Cauchy-Barzilai-Borwein Method
The negative gradient direction for finding local minimizers is associated with the classical steepest descent method, which behaves poorly except for very well conditioned problems. We stress that the poor behavior of steepest descent methods is due to the optimal Cauchy choice of steplength and not to the choice of the search direction. We discuss over- and under-relaxation of the optimal steplength. In fact, we study and extend recent nonmonotone choices of steplength that significantly enhance the behavior of the method. For a new particular case (the Cauchy-Barzilai-Borwein method), we present a convergence analysis and encouraging numerical results to illustrate the advantages of using nonmonotone overrelaxations of the gradient method.
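The Barzilai-Borwein steplength is the prototypical nonmonotone choice referred to above; below is a bare-bones sketch without safeguards or globalization. The paper's Cauchy-Barzilai-Borwein method is a related variant and is not reproduced here:

```python
import numpy as np

def barzilai_borwein(grad, x0, alpha0=1e-3, tol=1e-8, maxit=1000):
    """Gradient method with the nonmonotone Barzilai-Borwein (BB1)
    steplength; no safeguards or line search in this sketch."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        alpha = (s @ s) / (s @ yv)            # BB1 steplength s^T s / s^T y
        x, g = x_new, g_new
    return x
```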

7.
Steepest Descent, CG, and Iterative Regularization of Ill-Posed Problems
The state-of-the-art iterative method for solving large linear systems is the conjugate gradient (CG) algorithm. Theoretical convergence analysis suggests that CG converges more rapidly than steepest descent. This paper argues that steepest descent may be an attractive alternative to CG when solving linear systems arising from the discretization of ill-posed problems. Specifically, it is shown that, for ill-posed problems, steepest descent has a more stable convergence behavior than CG, which may be explained by the fact that the filter factors for steepest descent behave much less erratically than those for CG. Moreover, it is shown that, with proper preconditioning, the convergence rate of steepest descent is competitive with that of CG.
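A minimal sketch of steepest descent used as an iterative regularizer for a discrete ill-posed system `A x = b`: early stopping, rather than a penalty term, plays the role of the regularization parameter:

```python
import numpy as np

def sd_regularize(A, b, iters):
    """Steepest descent on 0.5*||A x - b||^2 used as an iterative
    regularizer: stop after `iters` steps instead of iterating to
    convergence; the iteration count acts as regularization parameter."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        gvec = A.T @ (A @ x - b)              # gradient
        Ag = A @ gvec
        alpha = (gvec @ gvec) / (Ag @ Ag)     # exact line-search step
        x = x - alpha * gvec
    return x
```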

8.
The topic of this paper is the convergence analysis of subspace gradient iterations for the simultaneous computation of a few of the smallest eigenvalues plus eigenvectors of a symmetric and positive definite matrix pair (A,M). The methods are based on subspace iterations for A^{-1}M and use the Rayleigh-Ritz procedure for convergence acceleration. New sharp convergence estimates are proved by generalizing estimates which have been presented for vectorial steepest descent iterations (see SIAM J. Matrix Anal. Appl., 32(2):443-456, 2011). Copyright © 2013 John Wiley & Sons, Ltd.

9.
Shape optimization based on the shape calculus is numerically mostly performed using steepest descent methods. This paper provides a novel framework for analyzing shape Newton optimization methods by exploiting a Riemannian perspective. A Riemannian shape Hessian is defined possessing often sought properties like symmetry and quadratic convergence for Newton optimization methods.

10.
Matrix completion refers to recovering a matrix from a subset of its observed entries by exploiting its low-rank structure; it has wide applications in recommender systems, signal processing, medical imaging, and machine learning. The alternating steepest descent method with exact line search is very effective for large-scale problems because each iteration is computationally cheap. Building on this method, this paper adopts a separated exact line search, which achieves a larger decrease per iteration at the same computational cost and is therefore expected to further improve efficiency. The convergence of the new algorithm is analyzed, and numerical results show that the proposed algorithm is more effective.
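A sketch of the alternating steepest descent iteration with exact line search that the paper builds on; the rank-`r` factorization `X @ Y`, the 0/1 observation array `mask`, and the fixed random initialization are assumptions of this sketch, and the paper's separated line-search variant is not reproduced:

```python
import numpy as np

def asd_completion(Z, mask, r, iters=500):
    """Alternating steepest descent for matrix completion: factor the
    matrix as X @ Y (rank r) and alternate exact-line-search gradient
    steps on f(X, Y) = 0.5*||mask * (Z - X @ Y)||_F^2."""
    m, n = Z.shape
    rng = np.random.default_rng(0)
    X, Y = rng.standard_normal((m, r)), rng.standard_normal((r, n))
    for _ in range(iters):
        R = mask * (Z - X @ Y)                # residual on observed entries
        Gx = R @ Y.T                          # descent direction in X
        tx = np.sum(Gx * Gx) / np.sum((mask * (Gx @ Y)) ** 2)
        X = X + tx * Gx                       # exact line-search step in X
        R = mask * (Z - X @ Y)
        Gy = X.T @ R                          # descent direction in Y
        ty = np.sum(Gy * Gy) / np.sum((mask * (X @ Gy)) ** 2)
        Y = Y + ty * Gy                       # exact line-search step in Y
    return X @ Y
```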

11.
We obtain exact (unimprovable) estimates for the rate of convergence of the s-step method of steepest descent for finding the least (greatest) eigenvalue of a linear bounded self-adjoint operator in a Hilbert space.

12.
In this paper, we discuss the strong convergence of the hybrid steepest descent method relative to the case when the involved operators belong to a wide class of possibly nonself-mappings. Our convergence results cover previous ones, and the techniques of analysis used are simple and can be adapted to many other fixed point methods.
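For reference, a bare-bones sketch of the classical hybrid steepest descent iteration x_{n+1} = T(x_n) - lam_n * mu * gradF(T(x_n)) with diminishing steps; the generality of the paper's operator class (possibly nonself-mappings) is not captured by this sketch:

```python
import numpy as np

def hybrid_steepest_descent(T, gradF, x0, mu=0.5, maxit=2000):
    """Classical hybrid steepest descent sketch: minimize a function
    with strongly monotone gradient gradF over the fixed-point set of
    a nonexpansive mapping T, using diminishing steps lam_n = 1/n."""
    x = x0
    for n in range(1, maxit + 1):
        y = T(x)                              # step toward the fixed-point set
        x = y - (mu / n) * gradF(y)           # damped steepest descent step
    return x
```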

13.
The block-Lanczos method serves to compute a moderate number of eigenvalues and the corresponding invariant subspace of a symmetric matrix. In this paper, the convergence behavior of nonrestarted and restarted versions of the block-Lanczos method is analyzed. For the nonrestarted version, we improve an estimate by Saad by means of a change of the auxiliary vector so that the new estimate is much more accurate in the case of clustered or multiple eigenvalues. For the restarted version, an estimate by Knyazev is generalized by extending our previous results on block steepest descent iterations and single-vector restarted Krylov subspace iterations. The new estimates can also be reformulated and applied to invert-block-Lanczos methods for solving generalized matrix eigenvalue problems.

14.
In this paper, we introduce a novel projected steepest descent iterative method with frozen derivative. The classical projected steepest descent iterative method involves the computation of derivative of the nonlinear operator at each iterate. The method of this paper requires the computation of derivative of the nonlinear operator only at an initial point. We exhibit the convergence analysis of our method by assuming the conditional stability of the inverse problem on a convex and compact set. Further, by assuming the conditional stability on a nested family of convex and compact subsets, we develop a multi-level method. In order to enhance the accuracy of approximation between neighboring levels, we couple it with the growth of stability constants. This along with a suitable discrepancy criterion ensures that the algorithm proceeds from level to level and terminates within finite steps. Finally, we discuss an inverse problem on which our methods are applicable.
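A minimal sketch of the frozen-derivative idea: the adjoint of the derivative is evaluated once at the initial point and reused at every step. The names `dF0_T` and `project` are illustrative, and the paper's multi-level structure and discrepancy-based stopping rule are omitted:

```python
import numpy as np

def frozen_projected_sd(F, dF0_T, project, y, x0, alpha=1e-2, maxit=500):
    """Projected steepest descent with frozen derivative: dF0_T is
    the adjoint of the derivative evaluated at the initial point x0;
    `project` maps back onto the convex constraint set."""
    x = x0
    for _ in range(maxit):
        r = F(x) - y                          # data misfit
        x = project(x - alpha * dF0_T(r))     # frozen-derivative step
    return x
```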

15.
The aim of this paper is to provide a convergence analysis for a preconditioned subspace iteration, which is designated to determine a modest number of the smallest eigenvalues and the corresponding invariant subspace of eigenvectors of a large, symmetric positive definite matrix. The algorithm is built upon a subspace implementation of preconditioned inverse iteration, i.e., the well-known inverse iteration procedure, where the associated system of linear equations is solved approximately by using a preconditioner. This step is followed by a Rayleigh-Ritz projection so that preconditioned inverse iteration is always applied to the Ritz vectors of the actual subspace of approximate eigenvectors. The given theory provides sharp convergence estimates for the Ritz values and is mainly built on arguments exploiting the geometry underlying preconditioned inverse iteration.
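A compact sketch of preconditioned inverse iteration on a block of vectors followed by a Rayleigh-Ritz projection; the preconditioner callable `Binv` is assumed to act on a block of residuals, and the names are illustrative:

```python
import numpy as np

def pinvit_subspace(A, Binv, X0, iters=100):
    """Preconditioned inverse iteration on a block of vectors with a
    Rayleigh-Ritz projection; Binv applies an approximate inverse of
    the SPD matrix A to a block of residuals."""
    X, _ = np.linalg.qr(X0)
    for _ in range(iters):
        theta = np.diag(X.T @ A @ X)          # current Ritz values
        R = A @ X - X * theta                 # block residual
        X, _ = np.linalg.qr(X - Binv(R))      # preconditioned update
        w, U = np.linalg.eigh(X.T @ A @ X)    # Rayleigh-Ritz on span(X)
        X = X @ U
    return w, X
```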

16.
In this article, we study a new second-order energy stable Backward Differentiation Formula (BDF) finite difference scheme for the epitaxial thin film equation with slope selection (SS). One major challenge for higher-order-in-time temporal discretizations is how to ensure an unconditional energy stability without compromising numerical efficiency or accuracy. We propose a framework for designing a second-order numerical scheme with unconditional energy stability using the BDF method with constant-coefficient stabilizing terms. Based on the unconditional energy stability property that we establish, we derive a stability estimate for the numerical solution and provide an optimal convergence analysis. To deal with the highly nonlinear four-Laplacian term at each time step, we apply efficient preconditioned steepest descent and preconditioned nonlinear conjugate gradient algorithms to solve the corresponding nonlinear system. Various numerical simulations are presented to demonstrate the stability and efficiency of the proposed schemes and solvers. Comparisons with other second-order schemes are presented.

17.
The stability and convergence rate of Olver’s collocation method for the numerical solution of Riemann–Hilbert problems (RHPs) are known to depend very sensitively on the particular choice of contours used as data of the RHP. By manually performing contour deformations that proved to be successful in the asymptotic analysis of RHPs, such as the method of nonlinear steepest descent, the numerical method can basically be preconditioned, making it asymptotically stable. In this paper, however, we will show that most of these preconditioning deformations, including lensing, can be addressed in an automatic, completely algorithmic fashion that would turn the numerical method into a black-box solver. To this end, the preconditioning of RHPs is recast as a discrete, graph-based optimization problem: the deformed contours are obtained as a system of shortest paths within a planar graph weighted by the relative strength of the jump matrices. The algorithm is illustrated for the RHP representing the Painlevé II transcendents.

18.
Eigenvalue and condition number estimates for preconditioned iteration matrices provide the information required to estimate the rate of convergence of iterative methods, such as preconditioned conjugate gradient methods. In recent years various estimates have been derived for (perturbed) modified (block) incomplete factorizations. We survey and extend some of these and derive new estimates. In particular, we derive upper and lower estimates of individual eigenvalues and of the condition number. This includes a discussion showing that the condition number of preconditioned second-order elliptic difference matrices is O(h^{-1}). Some of the methods are applied to compute certain parameters involved in the computation of the preconditioner.

19.
In maximizing a non-linear function G(θ), it is well known that the steepest descent method has a slow convergence rate. Here we propose a systematic procedure to obtain a 1-1 transformation on the variables θ, so that in the space of the transformed variables, the steepest descent method produces the solution faster. The final solution in the original space is obtained by taking the inverse transformation. We apply the procedure to maximizing the likelihood functions of some generalized distributions which are widely used in modeling count data. It is shown that for these distributions, the steepest descent method via transformations produces the solutions very fast. It is also observed that the proposed procedure can be used to expedite the convergence rate of first-derivative-based algorithms, such as the Polak-Ribière and Fletcher-Reeves conjugate gradient methods, as well.
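A toy sketch of the idea under an assumed log transformation for positive parameters; the paper's systematic construction of the transformation is not reproduced here, and `gradG` is an assumed callable returning the gradient in the original variables:

```python
import numpy as np

def transformed_ascent(gradG, theta0, steps=500, lr=1e-2):
    """Steepest ascent after a 1-1 reparameterization, sketched with
    a hypothetical log transform phi = log(theta) for positive
    parameters; the chain rule maps gradients between the spaces."""
    phi = np.log(theta0)
    for _ in range(steps):
        theta = np.exp(phi)
        # chain rule: dG/dphi = dG/dtheta * dtheta/dphi = gradG(theta) * theta
        phi = phi + lr * gradG(theta) * theta
    return np.exp(phi)                        # map back to original space
```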

20.
This paper studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive definite matrices. First, for a semi-supervised metric learning model that can be recast as an unconstrained optimization problem on the group of symmetric positive definite matrices, an intrinsic steepest descent algorithm with an adaptive variable step size is proposed. Then, using the Taylor expansion with integral remainder of a smooth function on a Lie group at an arbitrary point, the proposed algorithm is shown to converge linearly on the group of symmetric positive definite matrices. Finally, numerical experiments on classification problems demonstrate the effectiveness of the algorithm.
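A sketch of intrinsic steepest descent on symmetric positive definite matrices with the affine-invariant metric and a geodesic (exponential-map) step; the fixed steplength `tau` replaces the paper's adaptive choice, and `egrad` is an assumed callable returning the (symmetric) Euclidean gradient:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def spd_steepest_descent(egrad, X0, tau=1e-2, iters=200):
    """Intrinsic steepest descent on symmetric positive definite
    matrices with the affine-invariant metric; a fixed-step sketch."""
    X = X0
    for _ in range(iters):
        G = X @ egrad(X) @ X                  # Riemannian gradient
        Xh = np.real(sqrtm(X))                # X^{1/2}
        Xhi = np.linalg.inv(Xh)
        # geodesic step Exp_X(-tau * G) stays on the SPD manifold
        X = Xh @ expm(-tau * Xhi @ G @ Xhi) @ Xh
    return X
```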
