20 similar documents found; search time: 15 ms
1.
Two fundamental convergence theorems for nonlinear conjugate gradient methods and their applications
1. Introduction. We consider the global convergence of conjugate gradient methods for the unconstrained nonlinear optimization problem min f(x), where f: R^n → R^1 is continuously differentiable and its gradient is denoted by g. We consider only the case where the methods are implemented without regular restarts. The iterative formula is given by x_(k+1) = x_k + λ_k d_k, (1.1) and the search direction d_k is defined by d_1 = -g_1 and d_k = -g_k + β_k d_(k-1) for k ≥ 2, where β_k is a scalar, λ_k is a steplength, and g_k denotes g(x_k). The best-known formulas fo…
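The generic scheme quoted above can be sketched as follows. The Armijo backtracking line search, the restart safeguard, and the Fletcher-Reeves/Polak-Ribière-Polyak choices for β_k are standard options assumed here for illustration; the papers in this list study particular β formulas and line searches in detail.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-8, max_iter=500):
    """Sketch of the iteration x_{k+1} = x_k + lambda_k * d_k with
    d_1 = -g_1 and d_k = -g_k + beta_k * d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:  # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking line search for the steplength lambda_k
        lam, fx, gtd = 1.0, f(x), g @ d
        while f(x + lam * d) > fx + 1e-4 * lam * gtd and lam > 1e-16:
            lam *= 0.5
        x_new = x + lam * d
        g_new = grad(x_new)
        if beta_rule == "PRP":
            beta = g_new @ (g_new - g) / (g @ g)  # Polak-Ribiere-Polyak
        else:
            beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic both β rules drive the gradient to zero quickly; the literature below concerns what happens for general nonconvex f.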
2.
A modified PRP conjugate gradient method (Total citations: 4; self-citations: 0; citations by others: 4)
This paper gives a modified PRP method which possesses global convergence for nonconvex functions and an R-linear convergence rate for uniformly convex functions. Furthermore, the presented method has the sufficient descent property and automatically stays within a trust region without carrying out any line search technique. Numerical results indicate that the new method is promising for the given test problems.
This work is supported by Guangxi University SF grant X061041 and China NSF grant 10761001.
3.
Two modified Dai-Yuan nonlinear conjugate gradient methods (Total citations: 1; self-citations: 0; citations by others: 1)
Li Zhang, Numerical Algorithms, 2009, 50(1):1–16
In this paper, we propose two modified versions of the Dai-Yuan (DY) nonlinear conjugate gradient method. One is based on
the MBFGS method (Li and Fukushima, J Comput Appl Math 129:15–35, 2001) and inherits all nice properties of the DY method. Moreover, this method converges globally for nonconvex functions even
if the standard Armijo line search is used. The other is based on the ideas of Wei et al. (Appl Math Comput 183:1341–1350, 2006) and Zhang et al. (Numer Math 104:561–572, 2006) and inherits the good numerical performance of the Hestenes-Stiefel method. Numerical results are also reported.
This work was supported by the National Science Foundation of China (10701018).
4.
The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method which has good numerical performance but lacks global convergence under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper we propose a new nonmonotone line search for the Liu-Storey (LS) conjugate gradient method. The new nonmonotone line search guarantees the global convergence of the LS method and performs well numerically. By estimating the Lipschitz constant of the derivative of the objective function within the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
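A nonmonotone line search of the kind described can be sketched as follows. This follows the classical pattern of accepting decrease relative to the worst of a window of recent function values; the paper's own rule, which additionally estimates a Lipschitz constant to pick the trial step, differs in detail.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, sigma=1e-4, rho=0.5, max_back=50):
    """Backtracking that only requires sufficient decrease relative to the
    worst of the stored recent function values (f_hist), not f(x) itself."""
    f_ref = max(f_hist)      # nonmonotone reference value
    lam, gtd = 1.0, g @ d    # gtd < 0 for a descent direction
    for _ in range(max_back):
        if f(x + lam * d) <= f_ref + sigma * lam * gtd:
            return lam
        lam *= rho
    return lam
```

Because the reference value is a maximum over several past iterates, occasional increases in f are tolerated, which often lets the method take larger steps than a monotone Armijo rule.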
5.
6.
Arnd Meyer, Numerical Linear Algebra with Applications, 1994, 1(2):129–139
Yserentant's and Bramble/Pasciak/Xu's hierarchical preconditioners require only the hierarchical node connectivity of a finite element grid, so they can also be applied to nonsymmetric FE systems while remaining symmetric preconditioners. We investigate whether this fact can be exploited in CG-like iterative methods. The key is an appropriate choice of the inner product on the N-vector space. It is shown how the inner product influences the formulas of the methods, the rate of convergence, and some other properties.
7.
Global convergence result for conjugate gradient methods (Total citations: 2; self-citations: 0; citations by others: 2)
Conjugate gradient optimization algorithms depend on the search directions,
8.
Xing Cai, Bjørn Fredrik Nielsen, Aslak Tveito, Numerical Linear Algebra with Applications, 2007, 14(5):459–467
We discuss the efficiency of the conjugate gradient (CG) method for solving a sequence of linear systems A u_(n+1) = u_n, where A is assumed to be sparse, symmetric, and positive definite. We show that under certain conditions the Krylov subspace generated when solving the first linear system A u_1 = u_0 contains the solutions {u_n} for subsequent time steps. The solutions of these equations can therefore be computed by a straightforward projection of the right-hand side onto the already computed Krylov subspace. Our theoretical considerations are illustrated by numerical experiments that compare this method with the order-optimal scheme obtained by applying the multigrid method as a preconditioner for the CG method at each time step. Copyright © 2007 John Wiley & Sons, Ltd.
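The projection idea can be illustrated with a small sketch: build an orthonormal Krylov basis from the first right-hand side, then obtain later solutions by a Galerkin projection onto that subspace. The function names and the diagonal test matrix are illustrative assumptions; the sketch assumes m does not exceed the grade of the starting vector.

```python
import numpy as np

def krylov_basis(A, v, m):
    """Orthonormal basis of the Krylov subspace K_m(A, v) via Gram-Schmidt."""
    Q = np.zeros((len(v), m))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(1, m):
        w = A @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)  # orthogonalize against earlier vectors
        Q[:, j] = w / np.linalg.norm(w)
    return Q

def projected_solve(A, b, Q):
    """Galerkin projection of A x = b onto range(Q): x = Q (Q^T A Q)^{-1} Q^T b."""
    y = np.linalg.solve(Q.T @ A @ Q, Q.T @ b)
    return Q @ y
```

When the exact solution lies in range(Q), the Galerkin solution for an SPD matrix is exact, which is the mechanism behind reusing the Krylov subspace for later time steps.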
9.
In this paper, we make a slight modification to the Wei–Yao–Liu nonlinear conjugate gradient method proposed by Wei et al. [Z. Wei, S. Yao, L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput. 183 (2006) 1341–1350] so that the modified method possesses better convergence properties. In fact, we prove that the modified method satisfies the sufficient descent condition with a larger parameter under the strong Wolfe line search and converges globally for nonconvex minimization. We also extend these results to the Hestenes–Stiefel method and prove that the modified HS method is globally convergent for nonconvex functions under the standard Wolfe conditions. Numerical results are reported using some test problems from the CUTE library.
10.
John A. Ford, Yasushi Narushima, Hiroshi Yabe, Computational Optimization and Applications, 2008, 40(2):191–216
Conjugate gradient methods are appealing for large scale nonlinear optimization problems, because they avoid the storage of
matrices. Recently, seeking fast convergence of these methods, Dai and Liao (Appl. Math. Optim. 43:87–101, 2001) proposed a conjugate gradient method based on the secant condition of quasi-Newton methods, and later Yabe and Takano (Comput.
Optim. Appl. 28:203–225, 2004) proposed another conjugate gradient method based on the modified secant condition. In this paper, we make use of a multi-step
secant condition given by Ford and Moghrabi (Optim. Methods Softw. 2:357–370, 1993; J. Comput. Appl. Math. 50:305–323, 1994) and propose two new conjugate gradient methods based on this condition. The methods are shown to be globally convergent
under certain assumptions. Numerical results are reported.
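For reference, a conjugacy parameter of the Dai-Liao type mentioned above can be sketched as follows; the parameter t ≥ 0 and the helper name are illustrative, and with t = 0 the formula reduces to Hestenes-Stiefel.

```python
import numpy as np

def beta_dai_liao(g_new, g_old, d, s, t=0.1):
    """Dai-Liao-type parameter: beta = g_{k+1}^T (y_k - t*s_k) / (d_k^T y_k),
    where y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k.  The term t*s_k
    injects secant-condition information into the conjugacy condition."""
    y = g_new - g_old
    return g_new @ (y - t * s) / (d @ y)
```

The multi-step methods of the paper replace the one-step pair (s_k, y_k) with combinations built from several previous iterates, but the algebraic shape of the parameter is the same.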
11.
Gonglin Yuan, Xiwen Lu, Zengxin Wei, Journal of Computational and Applied Mathematics, 2009, 233(2):519–530
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) the sufficient descent property is satisfied without any line search; (ii) the search direction lies in a trust region automatically; (iii) the Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) the method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps. The global convergence and linear convergence rate of the given method are established. Numerical results show that this method is promising.
12.
13.
In this paper, we make a modification to the Liu-Storey (LS) conjugate gradient method and propose a descent LS method. The method generates sufficient descent directions for the objective function, a property that is independent of the line search used. We prove that the modified LS method is globally convergent with the strong Wolfe line search. Numerical results show that the proposed descent LS method is efficient for the unconstrained problems in the CUTEr library.
14.
R. Bouyouli, G. Meurant, L. Smoch, H. Sadok, Numerical Linear Algebra with Applications, 2009, 16(3):223–236
This paper is concerned with proving theoretical results related to the convergence of the conjugate gradient (CG) method for solving symmetric positive definite linear systems. Considering the inverse of the projection of the inverse of the matrix, new relations for ratios of the A-norm of the error and the norm of the residual are provided, starting from some earlier results of Sadok (Numer. Algorithms 2005; 40:201–216). The proofs of our results rely on the well-known correspondence between the CG method and the Lanczos algorithm. Copyright © 2008 John Wiley & Sons, Ltd.
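The CG-Lanczos correspondence invoked above rests on the fact that the CG residuals are mutually orthogonal and, after normalization, coincide with the Lanczos vectors. A minimal sketch that exposes the residual history (the function name is an illustrative assumption):

```python
import numpy as np

def cg_with_residuals(A, b, tol=1e-12):
    """Plain CG for SPD A x = b, returning the solution and the residual
    history.  The residuals r_k are mutually orthogonal; normalized, they
    are the Lanczos vectors underlying the CG/Lanczos correspondence."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    residuals = [r.copy()]
    for _ in range(2 * len(b)):
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        residuals.append(r.copy())
    return x, residuals
```

In exact arithmetic CG terminates in at most n steps; numerically, the pairwise orthogonality of the stored residuals can be checked directly.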
15.
Saman Babaie-Kafaki, Journal of Computational and Applied Mathematics, 2010, 234(5):1374–1386
Following the approach proposed by Dai and Liao, we introduce two nonlinear conjugate gradient methods for unconstrained optimization problems. One of our proposed methods is based on a modified version of the secant equation proposed by Zhang, Deng and Chen, and Zhang and Xu, and the other is based on the modified BFGS update proposed by Yuan. An interesting feature of our methods is that they take account of both gradient and function values. Under proper conditions, we show that one of the proposed methods is globally convergent for general functions and that the other is globally convergent for uniformly convex functions. To enhance the performance of the line search procedure, we also propose a new approach for computing the initial steplength used to initiate the procedure. We provide a comparison of implementations of our methods with the efficient conjugate gradient methods proposed by Dai and Liao, and Hestenes and Stiefel. Numerical test results show the efficiency of our proposed methods.
16.
Michiya Kobayashi, Hiroshi Yabe, Journal of Computational and Applied Mathematics, 2010, 234(2):375–397
In this paper, we deal with conjugate gradient methods for solving nonlinear least squares problems. Several Newton-like methods have been studied for such problems, including the Gauss-Newton method, the Levenberg-Marquardt method, and structured quasi-Newton methods. On the other hand, conjugate gradient methods are appealing for general large-scale nonlinear optimization problems. By combining the structured secant condition with the idea of Dai and Liao (2001) [20], the present paper proposes conjugate gradient methods that exploit the structure of the Hessian of the objective function of nonlinear least squares problems. The proposed methods are shown to be globally convergent under some assumptions. Finally, some numerical results are given.
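The structure being exploited is that of f(x) = 0.5·||r(x)||², whose gradient is Jᵀr and whose Hessian splits into JᵀJ plus a second-order term involving the residuals. A minimal sketch (the helper names are illustrative, and the second-order term is omitted, as in Gauss-Newton):

```python
import numpy as np

def least_squares_grad(J, r):
    """Gradient of f(x) = 0.5 * ||r(x)||^2 given the Jacobian J and residual r."""
    return J.T @ r

def gauss_newton_step(J, r):
    """Step computed from the structured first-order Hessian part J^T J.
    Structured quasi-Newton and structured CG methods additionally model
    the omitted second-order term sum_i r_i * Hess(r_i)."""
    return np.linalg.solve(J.T @ J, -J.T @ r)
```

For a linear residual r(x) = J x − b, a single step of this kind lands on the least squares solution, which is why JᵀJ is a good Hessian model when residuals are small.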
17.
Aiguo Lu, Hongwei Liu, Xiuyun Zheng, Weijie Cong, Applied Mathematics and Computation, 2011, 217(12):5547–5552
18.
The conjugate gradient method is a widely used class of methods for solving large-scale unconstrained optimization problems. This paper proposes a new nonlinear conjugate gradient (CG) method; theoretical analysis shows that the new algorithm has the sufficient descent property under several line search conditions. A global convergence theorem for the new CG algorithm is then proved. Finally, extensive numerical experiments are reported; the results indicate that, compared with several traditional CG methods, the new algorithm is computationally more efficient.
19.
It is well known that global convergence has not been established for the Polak-Ribière-Polyak (PRP) conjugate gradient method under the standard Wolfe conditions. In the convergence analysis of the PRP method with a Wolfe line search, the (sufficient) descent condition and the restriction β_k ≥ 0 are indispensable (see [4,7]). This paper shows that these restrictions can be relaxed. Under some suitable conditions, using a modified Wolfe line search, global convergence results are established for the PRP method. Some special choices for β_k which ensure the search direction's descent property are also discussed. Preliminary numerical results on a set of large-scale problems are reported, showing that the PRP method's computational efficiency is encouraging.
20.
Takeshi Iwashita, Kota Ikehara, Takeshi Fukaya, Takeshi Mifune, Numerical Linear Algebra with Applications, 2023, 30(6):e2512
In this article, we focus on solving a sequence of linear systems that have identical (or similar) coefficient matrices. For this type of problem, we investigate subspace correction (SC) and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the auxiliary matrix contains eigenspaces corresponding to small eigenvalues of the coefficient matrix. We develop a new algebraic auxiliary matrix construction method based on error vector sampling in which eigenvectors with small eigenvalues are efficiently identified in the solution process. We use the generated auxiliary matrix for convergence acceleration in the following solution step. Numerical tests confirm that both SC and deflation methods with the auxiliary matrix can accelerate the solution process of the iterative solver. Furthermore, we examine the applicability of our technique to the estimation of the condition number of the coefficient matrix. We also present the algorithm of the preconditioned conjugate gradient method with condition number estimation.