Similar Articles
20 similar articles found (search time: 15 ms)
1.
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the number of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
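The scheme described above can be illustrated with a minimal sketch: preconditioned CG in which the approximate inverse is applied in factored form M = G Gᵀ, so each preconditioning step is just two triangular matrix-vector products plus elementary vector operations. The factor G below is a trivial diagonal (Jacobi) stand-in, not the K-condition-number-optimal factor the paper constructs; the function name and test problem are illustrative.

```python
import numpy as np

def pcg(A, b, G, tol=1e-10, maxit=200):
    """Preconditioned CG where the approximate inverse is kept in the
    factored form M = G @ G.T, so applying the preconditioner amounts
    to two (sparse triangular) matrix-vector products."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = G @ (G.T @ r)          # apply factored approximate inverse
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = G @ (G.T @ r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1-D Poisson matrix as a small SPD test case
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
# trivial diagonal "triangular factor" (Jacobi scaling) as a stand-in
G = np.diag(1.0 / np.sqrt(np.diag(A)))
x, iters = pcg(A, b, G)
```

In a genuinely sparse implementation G would be stored in a sparse triangular format with a fixed sparsity pattern, so the whole iteration uses only sparse mat-vecs and vector updates, as the abstract notes.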

2.
Summary. The one-dimensional discrete Poisson equation on a uniform grid with n points produces a linear system of equations with a symmetric, positive-definite coefficient matrix. Hence, the conjugate gradient method can be used, and standard analysis gives an upper bound of O(n) on the number of iterations required for convergence. This paper introduces a systematically defined set of solutions dependent on a parameter α, and for several values of α, presents exact analytic expressions for the number of steps k(α, ε, n) needed to achieve accuracy ε. The asymptotic behavior of these expressions is derived in the limits ε → 0 and n → ∞. In particular, two choices of α corresponding to nonsmooth solutions give k = O(1), i.e., iteration counts independent of n; this is in contrast to the standard bounds. The standard asymptotic convergence behavior, k = O(n), is seen for a relatively smooth solution. Numerical examples illustrate and supplement the analysis. Received August 30, 1995 / Revised version received January 23, 1996

3.
《Optimization》2012,61(2):163-179
In this article, we consider the global convergence of the Polak–Ribière–Polyak conjugate gradient method (abbreviated PRP method) for minimizing functions that have Lipschitz continuous partial derivatives. A novel form of non-monotone line search is proposed to guarantee the global convergence of the PRP method. It is also shown that the PRP method has a linear convergence rate under some mild conditions when the non-monotone line search reduces to a related monotone line search. The new non-monotone line search needs to estimate the Lipschitz constant of the gradients of objective functions, for which two practical estimations are proposed to help find a suitable initial step size for the PRP method. Numerical results show that the new line search approach is efficient in practical computation.
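A hedged sketch of the PRP iteration discussed above, using an ordinary backtracking Armijo line search as a stand-in for the paper's non-monotone search (the function names, the PRP+ safeguard, and the quadratic test problem are illustrative, not taken from the paper):

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-8, maxit=500):
    """Polak-Ribiere-Polyak nonlinear CG with backtracking Armijo
    line search; beta is clipped at zero (the common PRP+ safeguard)."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0:                 # safeguard: restart with steepest descent
            d = -g
            slope = -(g @ g)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PRP+ beta
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# convex quadratic test: f(x) = 0.5 x^T A x - b^T x, minimizer solves Ax = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = prp_cg(f, grad, np.zeros(2))
```

The PRP beta uses the gradient difference g_new − g, which is what makes the method restart-friendly on nonquadratic functions.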

4.
In this paper we present a new family of conjugate gradient algorithms. This family originates in the algorithms provided by Wolfe and Lemaréchal for non-differentiable problems. It is shown that the Wolfe-Lemaréchal algorithm is identical to the Fletcher-Reeves algorithm when the objective function is smooth and when line searches are exact. The convergence properties of the new algorithms are investigated. One of them is globally convergent under minimum requirements on the directional minimization.

5.
Global convergence result for conjugate gradient methods
Conjugate gradient optimization algorithms depend on the search directions,

6.
Summary. Stabilized methods (also called Chebyshev methods) are explicit Runge-Kutta methods with extended stability domains along the negative real axis. These methods are intended for large mildly stiff problems, originating mainly from parabolic PDEs. The aim of this paper is to show that with the use of orthogonal polynomials, we can construct nearly optimal stability polynomials of second order with a three-term recurrence relation. These polynomials can be used to construct a new numerical method, which is implemented in a code called ROCK2. This new numerical method can be seen as a combination of van der Houwen-Sommeijer-type methods and Lebedev-type methods. Received January 14, 2000 / Revised version received November 3, 2000 / Published online May 4, 2001
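The three-term recurrence underlying Chebyshev (stabilized) methods can be illustrated with the classical first-order stability polynomial R_s(z) = T_s(1 + z/s²), which stays bounded by 1 on the real interval [−2s², 0]; ROCK2's near-optimal second-order polynomials refine this idea, and are not what is computed in this sketch:

```python
import numpy as np

def cheb_stability_poly(s, z):
    """First-order Chebyshev stability polynomial R_s(z) = T_s(1 + z/s^2),
    evaluated with the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    x = 1.0 + np.asarray(z, dtype=float) / s**2
    t_prev, t_cur = np.ones_like(x), x      # T_0 and T_1
    for _ in range(s - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

# the stability domain along the negative real axis grows like 2*s^2
s = 10
z = np.linspace(-2 * s**2, 0, 1001)
R = cheb_stability_poly(s, z)
```

The quadratic growth of the stability interval with the stage number s is exactly what makes these methods attractive for mildly stiff parabolic problems.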

7.
In this paper, the HS conjugate gradient method for minimizing a continuously differentiable function f on ℝ^n is modified to have the global convergence property. First, it is shown that, using a reverse modulus of continuity function and a forcing function, the new method for solving unconstrained optimization can work for a continuously differentiable function with Curry-Altman's step size rule and a bounded level set. Second, by using a comparing technique, some general convergence properties of the new method with the Armijo step size rule are established. Numerical results show that the new algorithms are efficient.

8.
It has been conjectured that the conjugate gradient method for minimizing functions of several variables has a superlinear rate of convergence, but Crowder and Wolfe show by example that the conjecture is false. Now the stronger result is given that, if the objective function is a convex quadratic and if the initial search direction is an arbitrary downhill direction, then either termination occurs or the rate of convergence is only linear, the second possibility being more usual. Relations between the starting point and the initial search direction that are necessary and sufficient for termination in the quadratic case are studied.

9.
The conjugate gradient method is one of the most popular iterative methods for computing approximate solutions of linear systems of equations with a symmetric positive definite matrix A. It is generally desirable to terminate the iterations as soon as a sufficiently accurate approximate solution has been computed. This paper discusses known and new methods for computing bounds or estimates of the A-norm of the error in the approximate solutions generated by the conjugate gradient method.
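One classical estimate of the kind discussed above follows from the identity ||x − x_k||²_A = Σ_{i≥k} α_i ||r_i||², which yields a lower bound on the squared A-norm of the error from a short window of d additional CG steps. A sketch under that identity (the delay parameter d and the test matrix are illustrative choices, not from the paper):

```python
import numpy as np

def cg_with_error_estimate(A, b, tol=1e-12, maxit=400):
    """Plain CG that also records alpha_i * ||r_i||^2, the per-step
    quantities behind the lower bound
        ||x - x_k||_A^2  >=  sum_{i=k}^{k+d-1} alpha_i ||r_i||^2."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    gains, iterates = [], [x.copy()]
    for _ in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        gains.append(alpha * rr)        # alpha_i * ||r_i||^2
        x += alpha * p
        iterates.append(x.copy())
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, iterates, gains

n = 60
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Poisson, SPD
b = np.ones(n)
x, its, gains = cg_with_error_estimate(A, b)
# lower-bound estimate of ||x - x_k||_A^2 at step k with delay d
k, d = 5, 4
est = sum(gains[k:k + d])
err = x - its[k]                      # converged x as proxy for the exact solution
true = err @ (A @ err)
```

The estimate costs only the d extra iterations one would run anyway, which is why such bounds are practical stopping criteria.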

10.
The conjugate gradient method is one of the most widely used methods in optimization and is extensively applied to solving large-scale optimization problems; different choices of the parameter β_k yield different conjugate gradient methods. This paper presents a class of conjugate gradient algorithms with three parameters. Under the given conditions, it is proved that the selected β_k produces a descent direction at every step, and under the strong Wolfe line search the algorithm is globally convergent.

11.
This paper is concerned with proving theoretical results related to the convergence of the conjugate gradient (CG) method for solving positive definite symmetric linear systems. Considering the inverse of the projection of the inverse of the matrix, new relations for ratios of the A‐norm of the error and the norm of the residual are provided, starting from some earlier results of Sadok (Numer. Algorithms 2005; 40 :201–216). The proofs of our results rely on the well‐known correspondence between the CG method and the Lanczos algorithm. Copyright © 2008 John Wiley & Sons, Ltd.

12.
In this study, we present a numerical scheme for solving a class of fractional partial differential equations. First, we introduce ψ-Laguerre polynomials, analogous to the ψ-shifted Chebyshev polynomials, and employ these newly introduced polynomials for the solution of space-time fractional differential equations. In our approach, we project these polynomials to develop operational matrices of fractional integration. The use of these orthogonal polynomials converts the problem under consideration into a system of algebraic equations. The solution of this system provides us with the desired results. The convergence of the proposed method is analyzed. Finally, some illustrative examples are included to demonstrate the validity and applicability of the proposed method.

13.
On different compact sets from ℝ^n, new multidimensional analogs of algebraic polynomials least deviating from zero (Chebyshev polynomials) are constructed. A brief review of the analogs constructed earlier is given. Estimates of values of the best approximation obtained by using extremal signatures, lattices, and finite groups are presented.

14.
On different compact sets from ℝ^n, new multidimensional analogs of algebraic polynomials least deviating from zero (Chebyshev polynomials) are constructed. A brief review of the analogs constructed earlier is given. Estimates of values of the best approximation obtained by using extremal signatures, lattices, and finite groups are presented.

15.
New families of generating functions and identities concerning the Chebyshev polynomials are derived. It is shown that the proposed method allows the derivation of sum rules involving products of Chebyshev polynomials and addition theorems. The possibility of extending the results to include generating functions involving products of Chebyshev and other polynomials is finally analyzed.

16.
In this paper we evaluate Chebyshev polynomials of the second kind on a class of symmetric integer matrices, namely on adjacency matrices of simply laced Dynkin and extended Dynkin diagrams. As an application of these results we explicitly calculate minimal projective resolutions of simple modules of symmetric algebras with radical cube zero that are of finite and tame representation type.

17.
18.
In this paper we explore two sets of polynomials, the orthogonal polynomials and the residual polynomials, associated with a preconditioned conjugate gradient iteration for the solution of the linear system Ax = b. In the context of preconditioning by the matrix C, we show that the roots of the orthogonal polynomials, also known as generalized Ritz values, are the eigenvalues of an orthogonal section of the matrix C A while the roots of the residual polynomials, also known as pseudo-Ritz values (or roots of kernel polynomials), are the reciprocals of the eigenvalues of an orthogonal section of the matrix (C A)^{-1}. When C A is selfadjoint positive definite, this distinction is minimal, but for the indefinite or nonselfadjoint case this distinction becomes important. We use these two sets of roots to form possibly nonconvex regions in the complex plane that describe the spectrum of C A.

19.
20.
Instead of the standard estimate in terms of the spectral condition number, we develop a new CG iteration number estimate depending on the quantity B = ((1/n) tr M) / (det M)^{1/n}, where M is an n × n preconditioned matrix. A new family of iterative methods for solving symmetric positive definite systems based on B-reducing strategies is described. Numerical results are presented for the new algorithms and compared with several well-known preconditioned CG methods.
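The quantity B above is the ratio of the arithmetic to the geometric mean of the eigenvalues of M, so B ≥ 1, with equality exactly when M is a multiple of the identity. A small sketch of computing it stably (the helper name is ours, not from the paper):

```python
import numpy as np

def b_quality(M):
    """B = (tr(M)/n) / det(M)^(1/n): arithmetic over geometric mean of the
    eigenvalues of the SPD matrix M; B >= 1, and B = 1 iff M = c*I."""
    n = M.shape[0]
    # slogdet avoids overflow/underflow when forming det(M)^(1/n)
    sign, logdet = np.linalg.slogdet(M)
    assert sign > 0, "M must be positive definite"
    return (np.trace(M) / n) / np.exp(logdet / n)

B_identity = b_quality(np.eye(4))                      # perfectly preconditioned
B_spread = b_quality(np.diag([1.0, 2.0, 4.0, 8.0]))    # spread-out spectrum
```

A B-reducing strategy, in this sense, drives the spectrum of the preconditioned matrix toward a single cluster, which the condition number alone does not capture.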


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号