Similar Documents
20 similar documents found (search time: 31 ms)
1.
The conjugate gradient method is one of the most popular iterative methods for computing approximate solutions of linear systems of equations with a symmetric positive definite matrix A. It is generally desirable to terminate the iterations as soon as a sufficiently accurate approximate solution has been computed. This paper discusses known and new methods for computing bounds or estimates of the A-norm of the error in the approximate solutions generated by the conjugate gradient method.
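As a concrete illustration of the simplest estimate of this kind, the sketch below runs plain CG and turns the identity ||x - x_k||_A^2 = sum_{j>=k} alpha_j ||r_j||^2 into a lower bound by summing a short window of future terms. This is a minimal sketch, not the paper's method; the function name `cg_with_error_bound` and the `delay` parameter are illustrative choices, and A is assumed symmetric positive definite.

```python
import numpy as np

def cg_with_error_bound(A, b, delay=4, tol=1e-10, maxiter=500):
    """Plain CG that also returns lower bounds on the A-norm of the error.

    Uses the identity ||x - x_k||_A^2 = sum_{j>=k} alpha_j * ||r_j||^2:
    the partial sum over the next `delay` steps is a lower bound on the
    squared A-norm error at step k (a Golub/Strakos-style estimate)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    terms = []                                # alpha_j * ||r_j||^2
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        terms.append(alpha * rr)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    bounds = [np.sqrt(sum(terms[k:k + delay]))
              for k in range(len(terms) - delay)]
    return x, bounds

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                 # SPD test matrix
x, bounds = cg_with_error_bound(A, rng.standard_normal(50))
```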

2.
Analysis and design of linear periodic control systems are closely related to periodic matrix equations. The objective of this paper is to provide four new iterative methods based on the conjugate gradient normal equation error (CGNE), conjugate gradient normal equation residual (CGNR), and least-squares QR factorization (LSQR) algorithms to find the reflexive periodic solutions (X1,Y1,X2,Y2,…,Xσ,Yσ) of a family of general periodic matrix equations indexed by i = 1,2,…,σ. The iterative methods are guaranteed to converge in a finite number of steps in the absence of round-off errors. Finally, some numerical results are presented to illustrate the efficiency and feasibility of the new methods.
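The CGNR building block is easiest to see on a single linear system. Below is a minimal sketch of CGNR (also known as CGLS) for a rectangular system Ax = b, not the paper's periodic matrix-equation variant: it applies CG to the normal equations A^T Ax = A^T b without ever forming A^T A.

```python
import numpy as np

def cgnr(A, b, tol=1e-10, maxiter=1000):
    """CG applied to the normal equations A^T A x = A^T b (CGNR/CGLS),
    without forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                 # residual of the original system
    z = A.T @ r                   # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) <= tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))             # rectangular test system
x = cgnr(A, rng.standard_normal(30))          # least squares solution
```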

3.
A class of iterative methods is presented for the solution of systems of linear equations Ax = b, where A is a general m × n matrix. The methods are based on a development as a continued fraction of the inner product (r, r), where r = b − Ax is the residual. The methods as defined are quite general and include some well-known methods such as the minimal residual conjugate gradient method with one step.

4.
Linear systems of the form Ax = b, where the matrix A is symmetric and positive definite, often arise from the discretization of elliptic partial differential equations. A very successful method for solving these linear systems is the preconditioned conjugate gradient method. In this paper, we study parallel preconditioners for the conjugate gradient method based on the block two-stage iterative methods. Sufficient conditions for the validity of these preconditioners are given. Computational results of these preconditioned conjugate gradient methods on two parallel computing systems are presented.
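For reference, a generic preconditioned CG loop looks as follows; the block two-stage preconditioners studied in the paper would be supplied through the `M_solve` argument. This sketch substitutes a simple Jacobi (diagonal) preconditioner purely for illustration.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxiter=500):
    """Preconditioned CG; M_solve(r) applies the preconditioner M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# 1-D Poisson matrix with a Jacobi (diagonal) preconditioner standing in
# for the block two-stage preconditioners analyzed in the paper
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
d = np.diag(A)
x, iters = pcg(A, np.ones(n), lambda r: r / d)
```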

5.
For solving inverse gravimetry problems, efficient stable parallel algorithms based on iterative gradient methods are proposed. For solving systems of linear algebraic equations with block-tridiagonal matrices arising in geoelectrics problems, a parallel matrix sweep algorithm, a square root method, and a conjugate gradient method with preconditioner are proposed. The algorithms are implemented numerically on the parallel computing system of the Institute of Mathematics and Mechanics (PCS-IMM), NVIDIA graphics processors, and an Intel multi-core CPU, using several new computing technologies. The parallel algorithms are incorporated into a system of remote computations entitled “Specialized Web-Portal for Solving Geophysical Problems on Multiprocessor Computers.” Some problems with “quasi-model” and real data are solved.

6.
Iterative methods applied to the normal equations A^T Ax = A^T b are sometimes used for solving large sparse linear least squares problems. However, when the matrix is rank-deficient many methods, although convergent, fail to produce the unique solution of minimal Euclidean norm. Examples of such methods are the Jacobi and SOR methods as well as the preconditioned conjugate gradient algorithm. We analyze here an iterative scheme that overcomes this difficulty for the case of stationary iterative methods. The scheme combines two stationary iterative methods: the first produces any least squares solution, whereas the second produces the minimum norm solution to a consistent system. This work was supported by the Swedish Research Council for Engineering Sciences, TFR.
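A small experiment makes the difficulty visible. The sketch below (illustrative only, not the paper's two-stage scheme) uses Landweber iteration, a stationary method, on a rank-deficient problem: started from zero, the iterates stay in range(A^T) and approach the minimum-norm least squares solution, but started from a vector with a null-space component they approach a different least squares solution of larger norm.

```python
import numpy as np

rng = np.random.default_rng(2)
# a rank-deficient 8 x 5 matrix (rank 3)
A = rng.standard_normal((8, 5)) @ np.diag([1., 1., 1., 0., 0.]) \
    @ rng.standard_normal((5, 5))
b = rng.standard_normal(8)
omega = 0.9 / np.linalg.norm(A, 2) ** 2       # step size for convergence

def landweber(x, sweeps=20000):
    # stationary iteration x <- x + omega * A^T (b - A x)
    for _ in range(sweeps):
        x = x + omega * (A.T @ (b - A @ x))
    return x

x_min = landweber(np.zeros(5))                # stays in range(A^T): min-norm
x_other = landweber(rng.standard_normal(5))   # keeps its null-space component
print(np.linalg.norm(x_min), np.linalg.norm(x_other))   # x_min is smaller
```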

7.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
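A minimal sketch of the block conjugate gradient iteration for several simultaneous right-hand sides is given below, assuming A is symmetric positive definite and omitting the deflation of converged or linearly dependent columns that a robust implementation needs.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, maxiter=500):
    """Block CG for A X = B with SPD A and several right-hand sides.
    Basic variant: no deflation of converged or dependent columns."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    RtR = R.T @ R
    for _ in range(maxiter):
        AP = A @ P
        alpha = np.linalg.solve(P.T @ AP, RtR)   # s x s step matrix
        X += P @ alpha
        R -= AP @ alpha
        RtR_new = R.T @ R
        if np.sqrt(np.trace(RtR_new)) <= tol:    # Frobenius norm of R
            break
        beta = np.linalg.solve(RtR, RtR_new)
        P = R + P @ beta
        RtR = RtR_new
    return X

rng = np.random.default_rng(3)
M = rng.standard_normal((60, 60))
A = M @ M.T + 60 * np.eye(60)
X = block_cg(A, rng.standard_normal((60, 4)))    # four systems at once
```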

8.
A minimal residual method, called MINRES-N2, that is based on the use of unconventional Krylov subspaces was previously proposed by the authors for solving a system of linear equations Ax = b with a normal coefficient matrix whose spectrum belongs to an algebraic second-degree curve Γ. However, the computational scheme of this method does not cover matrices of the form A = αU + βI, where U is an arbitrary unitary matrix; for such matrices, Γ is a circle. Systems of this type are repeatedly solved when the eigenvectors of a unitary matrix are calculated by inverse iteration. In this paper, a modification of MINRES-N2 suitable for linear polynomials in unitary matrices is proposed. Numerical results are presented demonstrating the significant superiority of the modified method over GMRES as applied to systems of this class.

9.
For a class of dual algebraic Riccati equation systems arising from Markov jump linear quadratic control problems, this paper adopts a modified conjugate gradient (MCG) algorithm and an orthogonal projection algorithm as the inner iterations of an inexact Newton method, establishing an inexact Newton-MCG algorithm and an inexact Newton-OGP algorithm for computing symmetric reflexive solutions. The two iterative algorithms only require that the Riccati equation systems have symmetric reflexive solutions, with no additional restrictions on the coefficient matrices. Numerical examples show that both iterative algorithms are effective.

10.
郑凤芹  张凯院  武见 《数学杂志》2011,31(6):1117-1124
This paper studies the problem of finding symmetric least squares solutions of systems of linear matrix equations in two variables. Based on the basic idea of the conjugate gradient method for solving linear algebraic equations, and by transforming and approximating the relevant matrices and coefficients, an iterative algorithm is established, which broadens the range of applicability of the conjugate gradient method. Numerical examples show that the iterative algorithm is effective.

11.
In this paper, we study the numerical computation of the errors in linear systems when using iterative methods. This is done by using methods to obtain bounds or approximations of quadratic forms u^T A^{-1} u, where A is a symmetric positive definite matrix and u is a given vector. Numerical examples are given for the Gauss-Seidel algorithm. Moreover, we show that using a formula for the A-norm of the error from Dahlquist, Golub and Nash [1978], very good bounds of the error can be computed almost for free during the iterations of the conjugate gradient method, leading to a reliable stopping criterion. The work of the first author was partially supported by NSF Grant CCR-950539.
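The Lanczos-based Gauss quadrature approach to such quadratic forms can be sketched compactly: k Lanczos steps started from u/||u|| yield a tridiagonal matrix T_k, and the Gauss rule gives the estimate ||u||^2 (e_1^T T_k^{-1} e_1), a lower bound on u^T A^{-1} u for symmetric positive definite A. The sketch below is illustrative, with no reorthogonalization or breakdown handling; the function name `gauss_estimate` is an assumption.

```python
import numpy as np

def gauss_estimate(A, u, k=12):
    """Estimate u^T A^{-1} u by k Lanczos steps started from u/||u||.
    The Gauss rule ||u||^2 * (e_1^T T_k^{-1} e_1) is a lower bound for
    SPD A. No reorthogonalization or breakdown handling."""
    beta0 = np.linalg.norm(u)
    q, q_prev, beta = u / beta0, np.zeros_like(u), 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta0 ** 2 * (e1 @ np.linalg.solve(T, e1))

rng = np.random.default_rng(4)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)
u = rng.standard_normal(100)
print(gauss_estimate(A, u), u @ np.linalg.solve(A, u))   # estimate vs exact
```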

12.
For large systems of linear equations, iterative methods provide attractive solution techniques. We describe the applicability and convergence of iterative methods of Krylov subspace type for an important class of symmetric and indefinite matrix problems, namely augmented (or KKT) systems. Specifically, we consider preconditioned minimum residual methods and discuss indefinite versus positive definite preconditioning. For a natural choice of starting vector we prove that when the definite and indefinite preconditioners are related in the obvious way, MINRES (which is applicable in the case of positive definite preconditioning) and full GMRES (which is applicable in the case of indefinite preconditioning) give residual vectors with identical Euclidean norm at each iteration. Moreover, we show that the convergence of both methods is related to a system of normal equations for which the LSQR algorithm can be employed. As a side result, we give a rare example of a non-trivial normal(1) matrix where the corresponding inner product is explicitly known: a conjugate gradient method therefore exists and can be employed in this case. This work was supported by British Council/German Academic Exchange Service Research Collaboration Project 465 and NATO Collaborative Research Grant CRG 960782.
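The following small experiment (an illustration with no preconditioning, so both methods minimize the same residual) applies SciPy's MINRES and GMRES to a symmetric indefinite augmented (KKT) system; the matrix sizes and random data are arbitrary choices.

```python
import numpy as np
from scipy.sparse.linalg import minres, gmres

rng = np.random.default_rng(5)
n, m = 60, 15
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)                       # SPD (1,1) block
B = rng.standard_normal((m, n))
K = np.block([[H, B.T], [B, np.zeros((m, m))]])   # symmetric indefinite KKT
b = rng.standard_normal(n + m)

x1, info1 = minres(K, b)                          # needs symmetry only
x2, info2 = gmres(K, b, restart=n + m)            # full (unrestarted) GMRES
print(np.linalg.norm(b - K @ x1), np.linalg.norm(b - K @ x2))
```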

13.
On the basis of a Rayleigh Quotient Iteration method in [10] and a Maximal Quotient Iteration method in [5, 8], two algorithms for solving special eigenvalue problems are developed. The characteristic properties of these methods lie in the application of iterative linear methods to solving systems of linear equations. The convergence properties are investigated. We apply the algorithms to the computation of the spectral radius of a nonnegative irreducible matrix.

14.
This article is concerned with iterative techniques for linear systems of equations arising from a least squares formulation of boundary value problems. In its classical form, the solution of the least squares method is obtained by solving the traditional normal equation. However, for nonsmooth boundary conditions or in the case of refinement at a selected set of interior points, the matrix associated with the normal equation tends to be ill-conditioned. In this case, the least squares method may be formulated as a Powell multiplier method and the equations solved iteratively. Therein we use and compare two different iterative algorithms. The first algorithm is the preconditioned conjugate gradient method applied to the normal equation, while the second is a new algorithm based on the Powell method and formulated on the stabilized dual problem. The two algorithms are first compared on a one-dimensional problem with poorly conditioned matrices. Results show that, for such problems, the new algorithm gives more accurate results. The new algorithm is then applied to a two-dimensional steady state diffusion problem and a boundary layer problem. A comparison between the least squares method of Bramble and Schatz and the new algorithm demonstrates the ability of the new method to give highly accurate results on the boundary, or at a set of given interior collocation points without the deterioration of the condition number of the matrix. Conditions for convergence of the proposed algorithm are discussed. © 1997 John Wiley & Sons, Inc.

15.
We present a probabilistic analysis of two Krylov subspace methods for solving linear systems. We prove a central limit theorem for norms of the residual vectors that are produced by the conjugate gradient and MINRES algorithms when applied to a wide class of sample covariance matrices satisfying some standard moment conditions. The proof involves establishing a four-moment theorem for the so-called spectral measure, implying, in particular, universality for the matrix produced by the Lanczos iteration. The central limit theorem then implies an almost-deterministic iteration count for the iterative methods in question. © 2022 Wiley Periodicals LLC.
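The almost-deterministic iteration count is easy to observe empirically. The sketch below (an illustration of the phenomenon, not the paper's proof machinery) runs SciPy's CG on independent draws of a sample covariance matrix and records how little the iteration counts vary.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(6)
n, m = 200, 400
counts = []
for _ in range(20):
    X = rng.standard_normal((n, m))
    W = X @ X.T / m                     # sample covariance matrix (SPD a.s.)
    b = rng.standard_normal(n)
    niter = []
    cg(W, b, callback=lambda xk: niter.append(1))   # count CG iterations
    counts.append(len(niter))
print(min(counts), max(counts))         # the spread is typically tiny
```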

16.
For solving systems of linear algebraic equations with block-tridiagonal matrices arising in geoelectrics problems, a parallel matrix sweep algorithm, a conjugate gradient method with preconditioner, and a square root method are proposed and implemented numerically on a multi-core Intel CPU with NVIDIA graphics processors. The efficiency and optimization of the parallel algorithms are investigated by solving the problem with quasi-model data.

17.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter β_k is computed by an approximation of the Hessian/vector product through finite differences. For search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], and new conjugacy condition and related new conjugate gradient by Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539] or truncated Newton TN by Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. on Scientific and Statistical Computing 6 (1985) 599–616] using a set of 750 unconstrained optimization test problems show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
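The core idea, replacing the Hessian/vector products in a Daniel-type β_k with forward differences of the gradient, can be sketched as follows. This is a simplified illustration, not the authors' algorithm: it uses a fixed difference interval, a plain Armijo backtracking line search instead of the paper's acceleration scheme, and a steepest-descent restart as a safeguard; the name `fd_cg` is an assumption.

```python
import numpy as np

def fd_cg(f, grad, x0, eps=1e-7, tol=1e-8, maxiter=2000):
    # Nonlinear CG in which beta_k uses a forward-difference approximation
    # of the Hessian/vector product (a Daniel-type formula):
    #   beta_k = g_{k+1}^T Hd / (d_k^T Hd),  Hd ~ (grad(x+eps*d) - grad(x))/eps
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(maxiter):
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        gTd = g @ d
        if gTd >= 0:                       # safeguard: restart with -g
            d, gTd = -g, -gnorm ** 2
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * gTd:   # Armijo backtracking
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        Hd = (grad(x + eps * d) - g_new) / eps      # FD Hessian/vector product
        beta = (g_new @ Hd) / (d @ Hd)
        d = -g_new + beta * d
        g = g_new
    return x

# quadratic test: minimize 0.5*x^T A x - b^T x
A = np.diag(np.arange(1.0, 11.0))
b = np.ones(10)
x_star = fd_cg(lambda x: 0.5 * x @ (A @ x) - b @ x,
               lambda x: A @ x - b, np.zeros(10))
```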

18.
The paper addresses the orthogonal and variational properties of a family of iterative algorithms in Krylov subspaces for solving systems of linear algebraic equations (SLAE) with sparse nonsymmetric matrices. A biconjugate residual method, a squared biconjugate residual method, and a stabilized conjugate residual method are proposed and studied. Some results of numerical experiments are given for a series of model problems as well.

19.
《Optimization》2012,61(4):549-570
The best spectral conjugate gradient algorithm of Birgin and Martínez (Birgin, E. and Martínez, J.M., 2001, A spectral conjugate gradient method for unconstrained optimization. Applied Mathematics and Optimization, 43, 117–128), which is mainly a scaled variant of Perry's method (Perry, J.M., 1977, A class of conjugate gradient algorithms with a two-step variable metric memory, Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University), is modified in such a way as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded into the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as a spectral gradient or in an anticipative way by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Computational results and performance profiles for a set consisting of 700 unconstrained optimization problems show that this new scaled nonlinear conjugate gradient algorithm substantially outperforms known conjugate gradient methods, including the spectral conjugate gradient SCG of Birgin and Martínez, the scaled Fletcher and Reeves and Polak and Ribière algorithms, and CONMIN (Shanno, D.F. and Phua, K.H., 1976, Algorithm 500, Minimization of unconstrained multivariate functions. ACM Transactions on Mathematical Software, 2, 87–94).

20.
The conjugate gradient method is a powerful solution scheme for solving unconstrained optimization problems, especially for large-scale problems. However, the convergence rate of the method without restart is only linear. In this paper, we will consider an idea contained in [16] and present a new restart technique for this method. Given an arbitrary descent direction d_t and the gradient g_t, our key idea is to make use of the BFGS updating formula to provide a symmetric positive definite matrix P_t such that d_t = −P_t g_t, and then define the conjugate gradient iteration in the transformed space. Two conjugate gradient algorithms are designed based on the new restart technique. Their global convergence is proved under mild assumptions on the objective function. Numerical experiments are also reported, which show that the two algorithms are comparable to the Beale–Powell restart algorithm.
