Similar Documents
20 similar documents retrieved (search time: 640 ms).
1.
Sun Qingying. Advances in Mathematics (《数学进展》), 2004, 33(5): 598-606
Using the Rosen projection matrix, a three-term memory gradient Rosen projection descent algorithm is constructed for optimization problems with linear or nonlinear inequality constraints, and the convergence of the algorithm is proved. Three-term memory gradient Rosen projection algorithms combined with the FR, PR and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient method to constrained programming problems. Numerical examples show that the algorithm is effective.
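As a rough illustration of the kind of projection used above (a generic sketch, not the paper's algorithm), the following Python snippet projects the negative gradient onto the null space of the active linear constraints using Rosen's projection matrix; the names `A_act` and `g` are hypothetical placeholders.

```python
import numpy as np

def rosen_projected_direction(A_act, g):
    """Project -g onto the null space of the active constraints A_act x = b.

    Rosen's projection matrix is P = I - A^T (A A^T)^{-1} A, and the projected
    steepest-descent direction is d = -P g.  A memory-gradient variant would
    additionally mix in one or two previous search directions.
    """
    n = g.size
    if A_act.size == 0:          # no active constraints: plain steepest descent
        return -g
    AAt = A_act @ A_act.T
    P = np.eye(n) - A_act.T @ np.linalg.solve(AAt, A_act)
    return -P @ g

# toy usage: one active constraint x1 + x2 = 1 in R^2
A_act = np.array([[1.0, 1.0]])
g = np.array([2.0, 0.5])
print(rosen_projected_direction(A_act, g))
```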

2.
We present a class of nested iteration schemes for solving large sparse systems of linear equations with a coefficient matrix with a dominant symmetric positive definite part. These new schemes are actually inner/outer iterations, which employ the classical conjugate gradient method as inner iteration to approximate each outer iterate, while each outer iteration is induced by a convergent and symmetric positive definite splitting of the coefficient matrix. Convergence properties of the new schemes are studied in depth, possible choices of the inner iteration steps are discussed in detail, and numerical examples from the finite-difference discretization of a second-order partial differential equation are used to further examine the effectiveness and robustness of the new schemes over GMRES and its preconditioned variant. Also, we show that the new schemes are, at least, comparable to the variable-step generalized conjugate gradient method and its preconditioned variant.
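A minimal sketch of the inner/outer idea described above, under an assumed splitting: take H = (A + Aᵀ)/2 as the dominant symmetric positive definite part, use the outer iteration H x⁽ᵏ⁺¹⁾ = b − S x⁽ᵏ⁾ with S = A − H, and approximate each outer solve with a few CG steps. This is only one plausible instance of such a scheme, not necessarily the splitting used in the paper.

```python
import numpy as np
from scipy.sparse.linalg import cg

def nested_spd_split_solve(A, b, outer_iters=50, inner_iters=10, tol=1e-10):
    """Inner/outer iteration: outer step induced by the SPD splitting A = H + S,
    each outer solve approximated by a small number of CG steps on H."""
    H = 0.5 * (A + A.T)          # dominant symmetric positive definite part
    S = A - H                    # skew-symmetric remainder
    x = np.zeros_like(b)
    for _ in range(outer_iters):
        rhs = b - S @ x
        # inner iteration: a fixed small number of CG steps on the SPD matrix H
        x, _ = cg(H, rhs, x0=x, maxiter=inner_iters)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```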

3.
Sun Qingying. Mathematica Numerica Sinica (《计算数学》), 2004, 26(4): 401-412
Using a generalized projection matrix, this paper gives a new admissible range for the parameters of the super-memory gradient algorithm for unconstrained programming, so that a super-memory gradient generalized projection descent direction of the objective function is obtained. Combining this with a technique for handling arbitrary initial points, a super-memory gradient generalized projection algorithm with arbitrary initial point is established for optimization problems with nonlinear inequality constraints, and its convergence is proved under fairly weak conditions. Super-memory gradient generalized projection algorithms combined with the FR, PR and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient method to constrained programming problems. Numerical examples show that the algorithm is effective.

4.
5.
A new conjugate gradient method is proposed in this paper by applying Powell's symmetrical technique to conjugate gradient methods. Using Wolfe line searches, the global convergence of the method is analyzed by means of the spectral analysis of the conjugate gradient iteration matrix and Zoutendijk's condition. Based on this, some concrete descent algorithms are developed. Numerical experiments are presented to verify their performance, and the numerical results show that these algorithms are competitive compared with the PRP+ algorithm. Finally, a brief discussion of the newly proposed method is given.

6.
Using a generalized projection matrix, a condition is imposed on the parameters of the three-term memory gradient algorithm for unconstrained programming and their admissible range is determined, so that a three-term memory gradient generalized projection descent direction of the objective function is obtained. A three-term memory gradient generalized projection algorithm is established for optimization problems with nonlinear equality and inequality constraints, and its convergence is proved. Three-term memory gradient generalized projection algorithms combined with the FR, PR and HS conjugate gradient parameters are also given, thereby extending the classical conjugate gradient algorithm to constrained programming problems. Numerical examples show that the algorithm is effective.

7.
For classical orthogonal projection methods for large matrix eigenproblems, it may be much more difficult for a Ritz vector to converge than for its corresponding Ritz value when the matrix in question is non-Hermitian. To this end, a class of new refined orthogonal projection methods has been proposed. It is proved that, in some sense, each refined method is a composite of two classical orthogonal projections, in which each refined approximate eigenvector is obtained by a new orthogonal projection of some Hermitian positive semidefinite matrix onto the same subspace. A priori error bounds on the refined approximate eigenvector are established in terms of the sine of the acute angle between the normalized eigenvector and the subspace involved. It is shown that the sufficient conditions for convergence of the refined vector and of the Ritz value are the same, so the refined methods may be much more efficient than the classical ones.
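A minimal sketch of the refined projection idea in its usual formulation: given an orthonormal basis V of the subspace and a Ritz value theta, the refined approximate eigenvector minimizes the residual norm over the subspace and is computed from the smallest right singular vector of A V − theta V. The toy matrix and subspace below are illustrative only.

```python
import numpy as np

def refined_ritz_vector(A, V, theta):
    """Refined approximate eigenvector for the Ritz value theta:
    minimize ||(A - theta*I) V z||_2 over unit vectors z, i.e. take the
    right singular vector of (A V - theta V) for its smallest singular value."""
    M = A @ V - theta * V
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    z = Vt[-1]                                 # right singular vector of smallest singular value
    return V @ z

# toy usage with a random non-Hermitian matrix and a 5-dimensional subspace
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
V, _ = np.linalg.qr(rng.standard_normal((50, 5)))
H = V.conj().T @ A @ V                         # classical orthogonal projection
theta = np.linalg.eigvals(H)[0]                # one Ritz value
u = refined_ritz_vector(A, V, theta)
print(np.linalg.norm(A @ u - theta * u))       # residual of the refined pair
```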

8.
The object of the paper is to study the absolute matrix summability problem of Fourier series, conjugate series and some associated series under a new set of conditions on matrix methods, generalising many known results in the literature.

9.
The classical constructions of wavelets and scaling functions from conjugate mirror filters are extended to settings that lack multiresolution analyses. Using analogues of the classical filter conditions, generalized mirror filters are defined in the context of a generalized notion of multiresolution analysis. Scaling functions are constructed from these filters using an infinite matrix product. From these scaling functions, non-MRA wavelets are built, including one whose Fourier transform is infinitely differentiable on an arbitrarily large interval.

10.
Convergence analysis of three-term conjugate gradient methods   Cited by: 5 (self-citations: 0, citations by others: 5)
Dai Yuhong, Yuan Yaxiang. Mathematica Numerica Sinica (《计算数学》), 1999, 21(3): 355-362
1. Introduction. Consider line search methods of the form x_{k+1} = x_k + λ_k d_k for solving unconstrained smooth optimization problems, where x_1 is given in advance, d_k is the search direction and λ_k is the step-length factor. In the classical conjugate gradient method, for k ≥ 2 the search direction d_k is composed of the negative gradient −g_k and the previous search direction d_{k−1}, with d_1 = −g_1 and β_k a parameter. Many formulas for computing the parameter β_k have been proposed; two well-known ones are the FR formula and the PRP formula. Here and below ‖·‖ denotes the Euclidean norm. Beale proposed a three-term restart conjugate gradient method whose search direction also involves a restart direction d_t; Powell introduced an appropriate restart criterion for this method and obtained very good numerical results. In this paper, we study search directions…
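The displayed formulas in this passage were lost in extraction; the standard forms usually denoted by these names (reconstructed here from general knowledge, not recovered from the source text) are:

```latex
\begin{aligned}
x_{k+1} &= x_k + \lambda_k d_k, \qquad
d_k = \begin{cases} -g_k, & k = 1,\\[2pt]
                    -g_k + \beta_k d_{k-1}, & k \ge 2, \end{cases}\\[4pt]
\beta_k^{FR} &= \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad
\beta_k^{PRP} = \frac{g_k^{\mathsf T}(g_k - g_{k-1})}{\|g_{k-1}\|^2},\\[4pt]
d_k &= -g_k + \beta_k d_{k-1} + \gamma_k d_t
\quad \text{(Beale's three-term restart direction).}
\end{aligned}
```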

11.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the framework of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results for a set of 750 unconstrained optimization test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the (classical) conjugate gradient method of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française Informat. Recherche Opérationnelle, 3e Année 16 (1969) 35–43], but with respect to CPU time it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].

12.
The conjugate gradient method is one of the most popular iterative methods for computing approximate solutions of linear systems of equations with a symmetric positive definite matrix A. It is generally desirable to terminate the iterations as soon as a sufficiently accurate approximate solution has been computed. This paper discusses known and new methods for computing bounds or estimates of the A-norm of the error in the approximate solutions generated by the conjugate gradient method.
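One of the best-known estimates of this kind uses the CG identity ‖x − x_j‖_A² − ‖x − x_{j+d}‖_A² = Σ_{i=j}^{j+d−1} α_i ‖r_i‖², so the partial sum over a delay of d steps is a lower bound for the A-norm error at step j. The sketch below evaluates this bound inside a plain CG iteration; the delay value and stopping rule are illustrative assumptions, not the paper's specific procedure.

```python
import numpy as np

def cg_with_error_bound(A, b, delay=4, maxiter=200, tol=1e-10):
    """Plain conjugate gradients; for each step j it also records the delayed
    lower bound  sum_{i=j}^{j+delay-1} alpha_i ||r_i||^2  <=  ||x - x_j||_A^2."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    alphas, rnorms2, bounds = [], [], []
    for _ in range(maxiter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        alphas.append(alpha)
        rnorms2.append(r @ r)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        if len(alphas) >= delay:
            j = len(alphas) - delay          # the bound applies to iterate x_j
            bounds.append(sum(a * rn for a, rn in zip(alphas[j:], rnorms2[j:])))
        if np.sqrt(r @ r) <= tol * np.linalg.norm(b):
            break
    return x, bounds
```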

13.
Hybrid iterative methods that combine a conjugate direction method with a simpler iteration scheme, such as Chebyshev or Richardson iteration, were first proposed in the 1950s. The ease with which Chebyshev and Richardson iteration can be implemented efficiently on a large variety of computer architectures has in recent years led to renewed interest in iterative methods that use Chebyshev or Richardson iteration. This paper presents a new hybrid iterative method for the solution of linear systems of equations with a symmetric indefinite matrix. Our method combines the conjugate residual method with Richardson iteration. Special attention is paid to the determination of two real intervals, one on each side of the origin, that contain most of the eigenvalues of the matrix. These intervals are used to compute suitable iteration parameters for Richardson iteration. We also discuss when to switch between the methods. The hybrid scheme typically uses the Richardson method for most iterations, and this reduces the number of arithmetic vector operations significantly compared with the number of arithmetic vector operations required when only the conjugate residual method is used. Computed examples illustrate the competitiveness of the hybrid scheme.

14.
A new iterative scheme is described for the solution of large linear systems of equations with a matrix of the form A = ρU + ζI, where ρ and ζ are constants, U is a unitary matrix and I is the identity matrix. We show that for such matrices a Krylov subspace basis can be generated by recursion formulas with few terms. This leads to a minimal residual algorithm that requires little storage and makes it possible to determine each iterate with fairly little arithmetic work. This algorithm provides a model for iterative methods for non-Hermitian linear systems of equations, in a similar way to the conjugate gradient and conjugate residual algorithms. Our iterative scheme illustrates that results by Faber and Manteuffel [3,4] on the existence of conjugate gradient algorithms with short recurrence relations, and related results by Joubert and Young [13], can be extended.

15.
A new family of conjugate gradient methods   Cited by: 1 (self-citations: 0, citations by others: 1)
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, the Polak–Ribière–Polyak and Liu–Storey conjugate gradient methods are special cases of the new class of conjugate gradient methods. By estimating the local Lipschitz constant of the derivative of the objective function, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
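A common form of nonmonotone line search is the Grippo–Lampariello–Lucidi rule, which accepts a step length α when f(x_k + α d_k) ≤ max_{0≤j≤m(k)} f(x_{k−j}) + δ α ∇f(x_k)ᵀ d_k. The sketch below shows this generic rule only as an illustration of the technique named above, not necessarily the paper's exact line search.

```python
from collections import deque
import numpy as np

def nonmonotone_armijo(f, grad, x, d, memory, delta=1e-4, backtrack=0.5, alpha0=1.0):
    """Backtracking nonmonotone Armijo search: compare against the maximum of the
    last few function values (kept in `memory`) instead of f(x) alone."""
    f_ref = max(memory)                 # nonmonotone reference value
    slope = delta * (grad(x) @ d)       # directional-derivative term (d is a descent direction)
    alpha = alpha0
    while f(x + alpha * d) > f_ref + alpha * slope:
        alpha *= backtrack
        if alpha < 1e-16:
            break
    return alpha

# toy usage on f(x) = ||x||^2 with a steepest-descent direction
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
memory = deque([f(x)], maxlen=5)        # keeps the last few function values
d = -grad(x)
alpha = nonmonotone_armijo(f, grad, x, d, memory)
print(alpha, f(x + alpha * d))
```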

16.
Composite orthogonal projection methods for large matrix eigenproblems   Cited by: 1 (self-citations: 0, citations by others: 1)
For classical orthogonal projection methods for large matrix eigenproblems, it may be much more difficult for a Ritz vector to converge than for its corresponding Ritz value when the matrix in question is non-Hermitian. To this end, a class of new refined orthogonal projection methods has been proposed. It is proved that, in some sense, each refined method is a composite of two classical orthogonal projections, in which each refined approximate eigenvector is obtained by a new orthogonal projection of some Hermitian positive semidefinite matrix onto the same subspace. A priori error bounds on the refined approximate eigenvector are established in terms of the sine of the acute angle between the normalized eigenvector and the subspace involved. It is shown that the sufficient conditions for convergence of the refined vector and of the Ritz value are the same, so the refined methods may be much more efficient than the classical ones. Project supported by the China State Major Key Projects for Basic Researches, the National Natural Science Foundation of China (Grant No. 19571014), the Doctoral Program (97014113), the Foundation for Excellent Young Scholars of the Ministry of Education, the Foundation for Returned Scholars of China, and the Liaoning Province Natural Science Foundation.

17.
Minimizing two different upper bounds of the matrix which generates search directions of the nonlinear conjugate gradient method proposed by Dai and Liao, two modified conjugate gradient methods are proposed. Under proper conditions, it is briefly shown that the methods are globally convergent when the line search fulfills the strong Wolfe conditions. Numerical comparisons between the implementations of the proposed methods and the conjugate gradient methods proposed by Hager and Zhang, and Dai and Kou, are made on a set of unconstrained optimization test problems of the CUTEr collection. The results show the efficiency of the proposed methods in the sense of the performance profile introduced by Dolan and Moré.
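For context, the Dai–Liao search direction referred to above has the standard form given below (a reconstruction from general knowledge of the Dai–Liao method, not taken from this paper), with s_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k and a parameter t > 0:

```latex
d_{k+1} = -g_{k+1} + \beta_k^{DL} d_k, \qquad
\beta_k^{DL} = \frac{g_{k+1}^{\mathsf T} y_k}{d_k^{\mathsf T} y_k}
             - t\,\frac{g_{k+1}^{\mathsf T} s_k}{d_k^{\mathsf T} y_k},
\qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k, \quad t > 0.
```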

18.
This article is concerned with solving the high-order Stein tensor equation arising in control theory. The conjugate gradient squared (CGS) method and the biconjugate gradient stabilized (BiCGSTAB) method are attractive methods for solving linear systems. Compared with the large-scale matrix equation, the equivalent tensor equation requires less storage and lower computational cost. Therefore, we present tensor formats of the CGS and BiCGSTAB methods for solving high-order Stein tensor equations. Moreover, a nearest Kronecker product preconditioner is given and the preconditioned tensor-format methods are studied. Finally, the feasibility and effectiveness of the new methods are verified by some numerical examples.
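As a small illustration of the matrix-free Kronecker-structured idea, the sketch below solves a second-order (i.e. matrix) Stein equation X − A X B = C with a standard BiCGSTAB solver, without ever forming I − Bᵀ ⊗ A; the matrix case, the operator definition and the sizes are stand-in assumptions for the higher-order tensor formats of the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

# Stein matrix equation X - A X B = C; vec(X - A X B) = (I - B^T kron A) vec(X),
# but the Kronecker matrix is never formed explicitly.
rng = np.random.default_rng(1)
n = 30
A = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # small spectral radius so the
B = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # Stein operator is well conditioned
C = rng.standard_normal((n, n))

def stein_matvec(v):
    X = v.reshape(n, n, order="F")
    return (X - A @ X @ B).ravel(order="F")

op = LinearOperator((n * n, n * n), matvec=stein_matvec)
x_vec, info = bicgstab(op, C.ravel(order="F"))
X = x_vec.reshape(n, n, order="F")
print(info, np.linalg.norm(X - A @ X @ B - C))   # info == 0 means convergence
```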

19.
A modified conjugate gradient projection descent algorithm is constructed for nonlinear programming problems constrained to a closed convex set, and its global convergence is analyzed without assuming boundedness of the sequence of iterates. Combining the new algorithm with conjugate gradient parameters, three classes of modified conjugate gradient projection algorithms incorporating conjugate gradient parameters are given. Numerical examples show that the algorithm is effective.
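The core operation behind such projection methods is a step of the form x_{k+1} = P_C(x_k + α_k d_k), where P_C is the Euclidean projection onto the closed convex feasible set. The sketch below shows a generic gradient projection step with a box constraint as the example set; it illustrates the idea only and is not the paper's specific algorithm.

```python
import numpy as np

def project_onto_box(x, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper},
    a simple example of a closed convex set."""
    return np.clip(x, lower, upper)

def gradient_projection_step(x, grad, alpha, lower, upper):
    """One projected steepest-descent step x+ = P_C(x - alpha * grad(x));
    a conjugate gradient projection method would replace -grad(x) by a
    direction mixing the gradient with previous search directions."""
    return project_onto_box(x - alpha * grad(x), lower, upper)

# toy usage: minimize ||x - c||^2 over the box [0, 1]^2
c = np.array([1.5, -0.25])
grad = lambda x: 2.0 * (x - c)
x = np.array([0.2, 0.8])
for _ in range(50):
    x = gradient_projection_step(x, grad, alpha=0.25, lower=0.0, upper=1.0)
print(x)   # converges to the projection of c onto the box, i.e. [1.0, 0.0]
```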

20.
Minimizing the distance, in the Frobenius norm, between the search direction matrix of the Dai–Liao method and the scaled memoryless BFGS update, and using Powell's nonnegative restriction of the conjugate gradient parameters, a one-parameter class of nonlinear conjugate gradient methods is proposed. A brief global convergence analysis is then made with and without a convexity assumption on the objective function. Preliminary numerical results are reported; they demonstrate that a proper choice of the parameter of the proposed class of conjugate gradient methods may lead to promising numerical performance.
