Similar Documents
20 similar records found; search time: 109 ms
1.
We present a partitioning group correction (PGC) algorithm, based on the trust-region and conjugate gradient methods, for large-scale sparse unconstrained optimization. In large sparse optimization, computing the whole Hessian matrix and solving the Newton-like equations at each iteration can be considerably expensive when a trust-region method is adopted. The method depends on a symmetric consistent partition of the columns of the Hessian matrix and an inexact solution of the Newton-like equations by the conjugate gradient method. We allow the current direction to exceed the trust-region bound when it is a good descent direction. In addition, we study a method for handling sparse matrices that contain a dense structural part. Good convergence properties are retained, and we compare the computational behavior of our method with that of other algorithms. Our numerical tests show that the algorithm is promising and quite effective, and that its performance is comparable to or better than that of other available algorithms.

2.
The truncated Newton algorithm was devised by Dembo and Steihaug (Ref. 1) for solving large sparse unconstrained optimization problems. When far from a minimum, an accurate solution of the Newton equations may not be justified. Dembo's method solves these equations by the conjugate direction method but truncates the iteration once a required degree of accuracy has been obtained. We present favorable numerical results obtained with the algorithm and compare them with existing codes for large-scale optimization.
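The truncation idea in this entry (run the conjugate-direction inner iteration on the Newton equations H d = -g only until a loose residual tolerance is reached) can be sketched in a few lines of plain Python. This is an illustrative sketch, not code from Ref. 1; the function name `truncated_cg` and the tolerance parameter `eta` are assumptions.

```python
def truncated_cg(H, g, eta=0.1, max_iter=50):
    """Approximately solve H d = -g by conjugate gradients,
    truncating once ||r|| <= eta * ||g||."""
    n = len(g)
    d = [0.0] * n
    r = [-gi for gi in g]           # residual r = -g - H d, with d = 0
    p = r[:]                        # initial conjugate direction
    g_norm = sum(gi * gi for gi in g) ** 0.5
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rr ** 0.5 <= eta * g_norm:   # truncation test: loose accuracy is enough
            break
        Hp = [sum(H[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(p[i] * Hp[i] for i in range(n))
        d = [d[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Hp[i] for i in range(n)]
        rr_new = sum(ri * ri for ri in r)
        p = [r[i] + (rr_new / rr) * p[i] for i in range(n)]
        rr = rr_new
    return d
```

With a tight tolerance the sketch reduces to ordinary CG; a loose `eta` gives the cheap, inexact inner solves that truncated Newton methods rely on far from a minimum.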

3.
黄翔 (Huang Xiang). 《运筹学学报》 (OR Transactions), 2005, 9(4): 74-80
In recent years, the inverse problem of determining the coefficients of an elliptic equation has found wide application in practical problems such as geomagnetics, geophysics, metallurgy, and biology. This paper discusses numerical methods for the two-dimensional coefficient inverse problem for elliptic equations. By the least-squares principle, the inverse problem can be reformulated as a variational problem and further discretized into an optimization problem whose objective function depends on the coefficients to be determined. We focus on the performance of the nonlinear conjugate gradient method on this optimization problem, with the quasi-Newton method as a comparison. To improve efficiency, we choose suitable preconditioning matrices to accelerate convergence, and we also examine the influence of different line-search strategies on the optimization algorithms. Numerical results show that for this class of large-scale optimization problems the nonlinear conjugate gradient method is more effective than the quasi-Newton method.

4.
A Newton method to solve total least squares problems for Toeplitz systems of equations is considered. When coupled with a bisection scheme, which is based on an efficient algorithm for factoring Toeplitz matrices, global convergence can be guaranteed. Circulant and approximate factorization preconditioners are proposed to speed convergence when a conjugate gradient method is used to solve linear systems arising during the Newton iterations. The work of the second author was partially supported by a National Science Foundation Postdoctoral Research Fellowship.

5.
An Inexact Newton Method Derived from Efficiency Analysis
We consider solving an unconstrained optimization problem by Newton-PCG-like methods, in which the preconditioned conjugate gradient method is applied to solve the Newton equations. The main question investigated is how efficient Newton-PCG-like methods can be from a theoretical point of view. An algorithmic model with several parameters is established, and a lower bound on the efficiency measure of the model is derived as a function of those parameters. By maximizing this lower-bound function, the parameters are specified and an implementable algorithm is obtained. The efficiency of the implementable algorithm is compared with Newton's method by theoretical analysis and numerical experiments; the results show that this algorithm is competitive. Mathematics Subject Classification: 90C30, 65K05. This work was supported by the National Science Foundation of China, Grant No. 10371131, and Hong Kong Competitive Earmarked Research Grant CityU 1066/00P from the Hong Kong University Grants Council.

6.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the well-known Dai-Liao conjugacy condition and, at the same time, agrees with the Newton direction under a suitable condition; this property does not depend on any line search. We further prove that, modulo the value of the parameter t, the Newton-direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line-search conditions are used, and the global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
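The convex-combination construction described in this entry can be illustrated concretely: the hybrid parameter is beta = (1 - theta) * beta_LS + theta * beta_FR, where beta_LS and beta_FR are the standard Liu-Storey and Fletcher-Reeves parameters. This is only a sketch; the weight `theta` is left free here, and the paper's specific rule for choosing it (and its strong Wolfe line search) is not reproduced.

```python
def hybrid_beta(g_new, g_old, d_old, theta):
    """Convex combination (1-theta)*beta_LS + theta*beta_FR, 0 <= theta <= 1."""
    y = [gn - go for gn, go in zip(g_new, g_old)]           # gradient difference
    # Liu-Storey: beta_LS = g_new' y / (-d_old' g_old)
    beta_ls = sum(gn * yi for gn, yi in zip(g_new, y)) / (
        -sum(di * go for di, go in zip(d_old, g_old)))
    # Fletcher-Reeves: beta_FR = ||g_new||^2 / ||g_old||^2
    beta_fr = sum(gn * gn for gn in g_new) / sum(go * go for go in g_old)
    return (1.0 - theta) * beta_ls + theta * beta_fr

def hybrid_direction(g_new, g_old, d_old, theta):
    """New search direction d = -g_new + beta * d_old."""
    beta = hybrid_beta(g_new, g_old, d_old, theta)
    return [-gn + beta * di for gn, di in zip(g_new, d_old)]
```

The two endpoint choices theta = 0 and theta = 1 recover the pure Liu-Storey and Fletcher-Reeves updates, respectively.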

7.
For unconstrained optimization, an inexact Newton algorithm was proposed recently in which the preconditioned conjugate gradient method is applied to solve the Newton equations. In this paper, we improve this algorithm by making efficient use of automatic differentiation and establish a new inexact Newton algorithm. Based on the efficiency coefficient defined by Brent, a theoretical efficiency ratio of the new algorithm to the old one is introduced. It is shown that this ratio is greater than 1, which implies that the new algorithm is always more efficient than the old one; moreover, the improvement is significant at least in some cases. This theoretical conclusion is supported by numerical experiments.

8.
This paper considers the numerical simulation of an optimal control problem for an evolution dam problem using the conjugate gradient method. We consider the free-boundary-value problem describing time-dependent fluid flow in a homogeneous rectangular earth dam. The dam is taken to be sufficiently long that the flow can be considered two-dimensional. On the left and right walls of the dam there is a reservoir of fluid at a level that depends on time. The problem can be transformed into a variational inequality on a fixed domain. The numerical techniques used are based on a linear finite element method to approximate the state equations and a conjugate gradient algorithm, based on Armijo's rule from unconstrained optimization theory, to solve the discrete optimal control problem. The convergence of the discrete optimal solutions to the continuous optimal solutions, and the convergence of the conjugate gradient algorithm, are proved. A numerical example is given to determine the location of the minimum surface.
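The Armijo rule mentioned in this entry (shrink a trial step until a sufficient-decrease condition holds) can be sketched generically. The parameter values `c` and `shrink` below are conventional defaults, not taken from the paper.

```python
def armijo_step(f, grad, x, d, c=1e-4, shrink=0.5, t=1.0, max_backtracks=50):
    """Backtrack until f(x + t*d) <= f(x) + c * t * <grad f(x), d>."""
    fx = f(x)
    slope = sum(gi * di for gi, di in zip(grad(x), d))  # directional derivative
    for _ in range(max_backtracks):
        x_new = [xi + t * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + c * t * slope:   # sufficient decrease achieved
            return t, x_new
        t *= shrink                          # otherwise halve the step
    return t, x_new
```

For a descent direction d (slope < 0), the loop terminates with a step giving a strict decrease in f whenever f is smooth near x.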

9.
In this paper we report a sparse truncated Newton algorithm for large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.

10.
Mesh shape-quality optimization using the inverse mean-ratio metric
Meshes containing elements with bad quality can result in poorly conditioned systems of equations that must be solved when using a discretization method, such as the finite-element method, for solving a partial differential equation. Moreover, such meshes can lead to poor accuracy in the approximate solution computed. In this paper, we present a nonlinear fractional program that relocates the vertex coordinates of a given mesh to optimize the average element shape quality as measured by the inverse mean-ratio metric. To solve the resulting large-scale optimization problems, we apply an efficient implementation of an inexact Newton algorithm that uses the conjugate gradient method with a block Jacobi preconditioner to compute the direction. We show that the block Jacobi preconditioner is positive definite by proving a general theorem concerning the convexity of fractional functions, applying this result to components of the inverse mean-ratio metric, and showing that each block in the preconditioner is invertible. Numerical results obtained with this special-purpose code on several test meshes are presented and used to quantify the impact on solution time and memory requirements of using a modeling language and general-purpose algorithm to solve these problems.

11.
In this paper, a new gradient-related algorithm for solving large-scale unconstrained optimization problems is proposed. The new algorithm is a line-search method whose basic idea is to choose a combination of the current gradient and some previous search directions as the new search direction and to find a step size by various inexact line searches. Using more information at the current iterate may improve the performance of the algorithm, which motivates the search for new gradient algorithms that may be more effective than standard conjugate gradient methods. The concept of a uniformly gradient-related direction is useful for analyzing the global convergence of the new algorithm. Global convergence and a linear convergence rate are established under diverse weak conditions. Numerical experiments show that the new algorithm converges more stably and is superior to other similar methods in many situations.

12.
Many constrained sets in problems such as signal processing and optimal control can be represented as a fixed point set of a certain nonexpansive mapping, and a number of iterative algorithms have been presented for solving a convex optimization problem over a fixed point set. This paper presents a novel gradient method with a three-term conjugate gradient direction that is used to accelerate conjugate gradient methods for solving unconstrained optimization problems. It is guaranteed that the algorithm strongly converges to the solution to the problem under the standard assumptions. Numerical comparisons with the existing gradient methods demonstrate the effectiveness and fast convergence of this algorithm.

13.
刘金魁, 孙悦, 赵永祥 (Liu Jinkui, Sun Yue, Zhao Yongxiang). 《计算数学》 (Mathematica Numerica Sinica), 2021, 43(3): 388-400
Based on the structure of the HS conjugate gradient method, this paper establishes, under weak assumptions, an iterative projection algorithm for solving convexly constrained pseudo-monotone systems of equations. The algorithm requires no gradient or Jacobian information about the system, so it is suitable for large-scale problems. It produces a sufficient descent direction at every iteration, independent of any line-search condition. In particular, without assuming that the system satisfies a Lipschitz condition, we establish the algorithm's global…

14.
Optimization, 2012, 61(10): 1631-1648
In this paper, we develop a three-term conjugate gradient method involving a spectral quotient, which always satisfies the well-known Dai-Liao conjugacy condition and the quasi-Newton secant equation, independently of any line search. The new method can be regarded as a variant, with respect to the spectral quotient, of the memoryless Broyden-Fletcher-Goldfarb-Shanno quasi-Newton method. By combining this method with the projection technique proposed by Solodov and Svaiter in 1998, we establish a derivative-free three-term projection algorithm for large-scale nonlinear monotone systems of equations. We prove global convergence of the algorithm and obtain an R-linear convergence rate under mild conditions. Numerical results show that our projection algorithm is effective and robust, and that it compares favorably with the TTDFP algorithm proposed by Liu and Li [A three-term derivative-free projection method for nonlinear monotone system of equations. Calcolo. 2016;53:427-450].

15.
Theoretical Efficiency of an Inexact Newton Method
We propose a local algorithm for smooth unconstrained optimization problems in n variables. The algorithm is an optimal combination of an exact Newton step with Cholesky factorization and several inexact Newton steps with preconditioned conjugate gradient subiterations; the preconditioner is taken to be the inverse of the Cholesky factorization from the previous exact Newton step. While Newton's method converges precisely with Q-order 2, this algorithm also converges precisely with Q-order 2. Theoretically, its average number of arithmetic operations per step is much less than the corresponding number for Newton's method on medium- and large-scale problems; for instance, when n = 200 the ratio of these two numbers is less than 0.53. Furthermore, the ratio tends to zero approximately at the rate log 2 / log n as n approaches infinity.

16.
Algorithms to solve constrained optimization problems are derived. These schemes combine an unconstrained minimization scheme like the conjugate gradient method, an augmented Lagrangian, and multiplier updates to obtain global quadratic convergence. Since an augmented Lagrangian can be ill conditioned, a preconditioning strategy is developed to eliminate the instabilities associated with the penalty term. A criterion for deciding when to increase the penalty is presented. This work was supported by the National Science Foundation, Grant Nos. MCS-81-01892, DMS-84-01758, and DMS-85-20926, and by the Air Force Office of Scientific Research, Grant No. AFOSR-ISSA-860091.

17.
An algorithm for solving nonlinear monotone equations is proposed, which combines a modified Liu-Storey conjugate gradient method with the hyperplane projection method. Under mild conditions, the global convergence of the proposed method is established with a suitable line search method. The method can be applied to solve large-scale problems owing to its low storage requirement. Numerical results indicate that our method is efficient.
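The hyperplane-projection step used by methods of this kind (in the style of Solodov and Svaiter) can be sketched as follows: for a monotone map F, every solution lies in the half-space {x : <F(z), x - z> <= 0} determined by a trial point z, so the next iterate is the projection of the current point onto that hyperplane. The function name `projection_step` is an illustrative assumption, not the paper's notation.

```python
def projection_step(F, x, z):
    """Project x onto the hyperplane {w : <F(z), w - z> = 0}."""
    Fz = F(z)
    num = sum(fi * (xi - zi) for fi, xi, zi in zip(Fz, x, z))
    den = sum(fi * fi for fi in Fz)     # ||F(z)||^2, assumed nonzero
    return [xi - (num / den) * fi for xi, fi in zip(x, Fz)]
```

In a full algorithm, z would be produced by a line search along a conjugate-gradient-type direction; the projection then guarantees the iterates never move away from the solution set.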

18.
The limiting factors of second-order methods for large-scale semidefinite optimization are the storage and factorization of the Newton matrix. For a particular algorithm based on the modified barrier method, we propose to use iterative solvers instead of the routinely used direct factorization techniques. The preconditioned conjugate gradient method proves to be a viable alternative for problems with a large number of variables and modest size of the constrained matrix. We further propose to avoid explicit calculation of the Newton matrix either by an implicit scheme in the matrix–vector product or using a finite-difference formula. This leads to huge savings in memory requirements and, for certain problems, to further speed-up of the algorithm. Dedicated to the memory of Jos Sturm.
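The matrix-free alternative mentioned in this entry, forming Newton-matrix-vector products by a finite-difference formula instead of assembling the matrix, can be sketched for the general case of a gradient map: H v is approximated by (grad(x + h v) - grad(x)) / h. The helper name and the step size `h` are illustrative choices, not taken from the paper.

```python
def hessian_vector_product(grad, x, v, h=1e-6):
    """Approximate (Hessian of f at x) @ v by a forward difference of grad."""
    x_shift = [xi + h * vi for xi, vi in zip(x, v)]
    g0, g1 = grad(x), grad(x_shift)
    return [(a - b) / h for a, b in zip(g1, g0)]
```

A conjugate gradient solver only ever needs such products, so the Newton matrix itself is never stored: this is the source of the memory savings described above.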

19.
Interior-point methods for nonlinear complementarity problems
We present a potential reduction interior-point algorithm for monotone nonlinear complementarity problems. At each iteration, one has to compute an approximate solution of a nonlinear system such that a certain accuracy requirement is satisfied. For problems satisfying a scaled Lipschitz condition, this requirement is satisfied by the approximate solution obtained by applying one Newton step to that nonlinear system. We discuss the global and local convergence rates of the algorithm, convergence toward a maximal complementarity solution, a criterion for switching from the interior-point algorithm to a pure Newton method, and the complexity of the resulting hybrid algorithm. This research was supported in part by NSF Grant DDM-89-22636. The authors would like to thank Rongqin Sheng and three anonymous referees for their comments leading to a better presentation of the results.

20.
The gradient projection method is an effective class of constrained optimization algorithms and occupies an important place in the field of optimization. However, the projection it uses is an orthogonal projection, which carries no second-derivative information about the objective and constraint functions, so its convergence rate is not entirely satisfactory. This paper introduces a notion of conjugate projection and uses it to construct a conjugate-projection variable-metric algorithm for general linear or nonlinear constraints, whose global convergence is proved under certain conditions. Because the conjugate projection appropriately incorporates second-derivative information about the objective and constraint functions, a faster convergence rate can be expected. Numerical experiments show that the algorithm is effective.

