Similar Literature
1.
Linear systems of the form Ax = b, where the matrix A is symmetric and positive definite, often arise from the discretization of elliptic partial differential equations. A very successful method for solving these linear systems is the preconditioned conjugate gradient method. In this paper, we study parallel preconditioners for the conjugate gradient method based on block two-stage iterative methods. Sufficient conditions for the validity of these preconditioners are given. Computational results of these preconditioned conjugate gradient methods on two parallel computing systems are presented.
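To make the setting concrete, here is a minimal sketch of the preconditioned conjugate gradient iteration for an SPD system. It is not the paper's parallel block two-stage preconditioner; the Jacobi (diagonal) preconditioner and the tridiagonal test matrix are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A; M_inv applies the preconditioner's inverse to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD matrix: 1D Laplacian plus a varying diagonal,
# preconditioned with its own diagonal (Jacobi).
n = 50
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
     + np.diag(np.linspace(0.0, 10.0, n)))
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

The closure passed as `M_inv` can be replaced by any SPD preconditioner application, e.g. an incomplete Cholesky solve.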

2.
Recently, new higher order finite volume methods (FVM) were introduced in [Z. Cai, J. Douglas, M. Park, Development and analysis of higher order finite volume methods over rectangles for elliptic equations, Adv. Comput. Math. 19 (2003) 3-33], where the linear system derived by the hybridization with Lagrange multiplier satisfying the flux consistency condition is reduced to a linear system for a pressure variable by an appropriate quadrature rule. We study the convergence of an iterative solver for this linear system. The conjugate gradient (CG) method is a natural choice to solve the system, but it seems slow, possibly due to the non-diagonal dominance of the system. In this paper, we propose block iterative methods with a reordering scheme to solve the linear system derived by the higher order FVM and prove their convergence. With a proper ordering, each block subproblem can be solved by fast methods such as the multigrid (MG) method. The numerical experiments show that these block iterative methods are much faster than CG.

3.
4.
We consider solving complex symmetric linear systems with multiple right-hand sides. We assume that the coefficient matrix has indefinite real part and positive definite imaginary part. We propose a new block conjugate gradient type method based on the Schur complement of a certain 2-by-2 real block form. The algorithm of the proposed method consists of building blocks that involve only real arithmetic with real symmetric matrices of the original size. We also present the convergence property of the proposed method and an efficient algorithmic implementation. In numerical experiments, we compare our method to a complex-valued direct solver, and a preconditioned and nonpreconditioned block Krylov method that uses complex arithmetic.

5.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method, which has good numerical performance but no global convergence result under traditional line searches such as Armijo, Wolfe and Goldstein line searches. In this paper a convergent version of the Liu–Storey conjugate gradient method (LS in short) is proposed for minimizing functions that have Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of objective functions, we can find an adequate step size at each iteration so as to guarantee the global convergence and improve the efficiency of the LS method in practical computation.
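As a rough illustration of the idea (not the paper's exact algorithm), the sketch below combines the Liu–Storey direction update with a step size driven by a running estimate of the gradient's Lipschitz constant. The safeguard restart, the constant `delta`, and the initial estimate `L = 1.0` are assumptions made for the example.

```python
import numpy as np

def ls_cg(grad, x0, delta=0.5, tol=1e-6, max_iter=500):
    """Liu-Storey nonlinear CG with a step size based on a running
    Lipschitz estimate L (a stand-in for the paper's exact estimate)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    L = 1.0  # initial Lipschitz estimate (illustrative)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:      # safeguard: restart with steepest descent
            d = -g
        # step size alpha = -delta * g.d / (L * ||d||^2) > 0
        alpha = -delta * (g @ d) / (L * (d @ d))
        x_new = x + alpha * d
        g_new = grad(x_new)
        # refresh the Lipschitz estimate from the observed gradient change
        y = g_new - g
        L = max(np.linalg.norm(y) / np.linalg.norm(x_new - x), 1e-8)
        # Liu-Storey direction update
        beta = (g_new @ y) / (-(d @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: convex quadratic with gradient A x - b
A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 1.0])
x = ls_cg(lambda v: A @ v - b, np.zeros(3))
```

On this quadratic the chosen step satisfies a descent condition for every iterate, so the iteration decreases the objective monotonically even before the Lipschitz estimate settles.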

6.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
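A minimal sketch of the (unpreconditioned) block conjugate gradient iteration for multiple right-hand sides, in the spirit of the block algorithms discussed here; the test matrix, block size, and tolerances are illustrative assumptions.

```python
import numpy as np

def block_cg(A, B, tol=1e-8, max_iter=200):
    """Block conjugate gradient for an SPD matrix A and several
    right-hand sides stacked as the columns of B."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    RtR = R.T @ R
    for _ in range(max_iter):
        AP = A @ P
        # small s-by-s solves replace the scalar alpha/beta of plain CG
        alpha = np.linalg.solve(P.T @ AP, RtR)
        X = X + P @ alpha
        R = R - AP @ alpha
        if np.linalg.norm(R) < tol:
            break
        RtR_new = R.T @ R
        beta = np.linalg.solve(RtR, RtR_new)
        RtR = RtR_new
        P = R + P @ beta
    return X

# Illustrative SPD system with three right-hand sides
rng = np.random.default_rng(0)
n = 30
C = rng.standard_normal((n, n))
A = C.T @ C + n * np.eye(n)
B = rng.standard_normal((n, 3))
X = block_cg(A, B)
```

Sharing one Krylov block across all right-hand sides is what lets the block method converge in fewer iterations than running plain CG column by column, at the cost of the small dense solves per step.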

7.
An incomplete factorization method for preconditioning symmetric positive definite matrices is introduced to solve normal equations. The normal equations are formed to solve linear least squares problems. The procedure is based on a block incomplete Cholesky factorization and a multilevel recursive strategy with an approximate Schur complement matrix formed implicitly. A diagonal perturbation strategy is implemented to enhance factorization robustness. The factors obtained are used as a preconditioner for the conjugate gradient method. Numerical experiments are used to show the robustness and efficiency of this preconditioning technique, and to compare it with two other preconditioners.

8.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method, which has good numerical performance but no global convergence under traditional line searches such as Armijo line search, Wolfe line search, and Goldstein line search. In this paper we propose a new nonmonotone line search for the Liu-Storey conjugate gradient method (LS in short). The new nonmonotone line search can guarantee the global convergence of the LS method and has a good numerical performance. By estimating the Lipschitz constant of the derivative of objective functions in the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.

9.
This paper presents a conjugate gradient algorithm and a block iterative algorithm for the finite element solution of the penalized variational formulation of the Navier-Stokes problem. Because the single-variable minimization subproblem in the conjugate gradient algorithm is simplified, computing time is greatly reduced. Numerical examples are also given.

10.
A method for solving a partial algebraic eigenvalue problem is constructed. It exploits the tensor structure of eigenvectors in the two-dimensional case. For a symmetric matrix represented in tensor format, the method finds low-rank approximations to the eigenvectors corresponding to the smallest eigenvalues. For sparse matrices, the execution time and memory required by the proposed method are proportional to the square root of the overall number of unknowns, whereas this dependence is usually linear. To maintain the tensor structure of the vectors at each iteration step, low-rank approximations are performed, which introduces errors into the original method. Nevertheless, the new method is proved to converge. Convergence rate estimates are obtained for various tensor modifications of the abstract one-step method. It is shown how the convergence of a multistep method can be derived from the convergence of the corresponding one-step method. Several modifications of the method with low-rank approximation techniques were implemented on the basis of the block conjugate gradient method. Their performance is compared on numerical examples.

11.
Numerical analysts, physicists, and signal processing engineers have proposed algorithms that might be called conjugate gradient for problems associated with the computation of eigenvalues. There are many variations, mostly one eigenvalue at a time though sometimes block algorithms are proposed. Is there a correct “conjugate gradient” algorithm for the eigenvalue problem? How are the algorithms related amongst themselves and with other related algorithms such as Lanczos, the Newton method, and the Rayleigh quotient?

12.
A new family of conjugate gradient methods
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, the Polak–Ribière–Polyak and Liu–Storey conjugate gradient methods are special cases of the new class of conjugate gradient methods. By estimating the local Lipschitz constant of the derivative of objective functions, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
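A nonmonotone Armijo backtracking search of the kind referred to here (in the Grippo-Lampariello-Lucidi style, not this paper's exact rule) can be sketched as follows; the constants `c` and `rho`, the five-value history window, and the toy objective are assumptions for the example.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, history, c=1e-4, rho=0.5, max_back=50):
    """Nonmonotone Armijo backtracking: accept alpha once
    f(x + alpha d) <= max of the recent f-values plus c * alpha * g.d,
    where d is a descent direction at x with gradient g."""
    f_ref = max(history)   # reference: worst of the recent iterates
    gd = g @ d
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + c * alpha * gd:
            break
        alpha *= rho
    return alpha

# Toy usage: gradient descent on f(x) = ||x||^2 with a 5-value memory
f = lambda v: v @ v
x = np.array([3.0, -4.0])
history = deque([f(x)], maxlen=5)
for _ in range(30):
    g = 2 * x
    a = nonmonotone_armijo(f, x, -g, g, history)
    x = x + a * (-g)
    history.append(f(x))
```

Comparing against the maximum of recent values rather than the last value is what allows occasional increases in f, which helps conjugate gradient directions take full steps more often than a monotone Armijo rule would.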

13.
Sun Qingying, Chinese Quarterly Journal of Mathematics, 2003, 18(2): 154-162
Conjugate gradient optimization algorithms depend on their search directions, which vary with different choices for the parameters in those directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. First, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step size rule and a bounded level set. Second, by a comparison technique, some general convergence properties of the new method with another kind of step size rule are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.

14.
For the parameter in the line-search direction of the super-memory gradient algorithm for unconstrained optimization, an assumption is given that determines a new range for this parameter and guarantees that the search direction is a sufficient descent direction of the objective function; on this basis a new class of memory gradient algorithms is proposed. Global convergence of the algorithm is discussed under an Armijo step-size search and without assuming that the sequence of iterates is bounded, and modified forms of the memory gradient method combined with conjugate gradient methods of the FR, PR, and HS type are given. Numerical experiments show that the new algorithm is more stable and more effective than the FR, PR, and HS conjugate gradient methods and the super-memory gradient method under the Armijo line search.

15.
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization
Recently, we proposed a nonlinear conjugate gradient method which produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the scalar parameter, relative to the one in the new method, belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we are able to construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes–Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.

16.
To address the limitations of the conjugate gradient method in constructing conjugate directions when solving unconstrained convex quadratic programs, the method is improved. A new way of constructing conjugate directions is given and proved by mathematical induction. The basic computational procedure for applying the improved conjugate gradient method is also given, and the convergence of the method is proved. Worked examples show that, for unconstrained convex quadratic programs, the improved method has certain advantages over the standard conjugate gradient method.

17.
A fast method to compute high-order approximate inverses based on truncated elimination is constructed for diagonally dominant multidiagonal matrices. Together with the block preconditioned conjugate gradient method, it can be used for the numerical solution of elliptic partial differential equations and related problems. Numerical experiments show that the method is robust and efficient.

18.
A modified conjugate gradient projection descent algorithm is constructed for nonlinear programming problems constrained to a closed convex set, and the global convergence of the algorithm is analyzed without assuming that the sequence of iterates is bounded. Combining the new algorithm with conjugate gradient parameters, three classes of modified conjugate gradient projection algorithms incorporating such parameters are given. Numerical examples show that the algorithm is effective.

19.
We study the convergence of a multisplitting method associated with a block diagonal conformable multisplitting for solving a linear system whose coefficient matrix is a symmetric positive definite matrix that is not an H-matrix. Next, we study the validity of m-step multisplitting polynomial preconditioners to be used in the preconditioned conjugate gradient method.

20.
Image restoration is a fundamental problem in image processing. Besides the many different filters applied to obtain a restored image, a degraded image can often be recovered efficiently by minimizing a cost function consisting of a data-fidelity term and a regularization term. In particular, half-quadratic regularization can effectively preserve image edges in the recovered images, and a fixed-point iteration method is usually employed to solve the minimization problem. In this paper, the Newton method is applied to solve the half-quadratic regularization image restoration problem, and at each step of the Newton method a structured linear system with a symmetric positive definite coefficient matrix arises. We design two different decomposition-based block preconditioning matrices by considering the special structure of the coefficient matrix and apply the preconditioned conjugate gradient method to solve this linear system. Theoretical analysis shows the eigenvector properties and the spectral bounds for the preconditioned matrices. The method used to analyze the spectral distribution of the preconditioned matrix and the correspondingly obtained spectral bounds differ from those in the literature. The experimental results also demonstrate that the decomposition-based block preconditioned conjugate gradient method is efficient for solving the half-quadratic regularization image restoration problem in terms of numerical performance and image recovery quality.
