Similar Documents
20 similar documents found.
1.
In this paper, the non-quasi-Newton family with inexact line search, applied to unconstrained optimization problems, is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm, with either a Wolfe-type or an Armijo-type line search, converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
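As context for the Wolfe-type condition mentioned above, here is a minimal sketch of the standard (weak) Wolfe condition test; the constants `c1` and `c2` and the example function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the standard weak Wolfe conditions for step alpha along d.

    Sufficient decrease: f(x + a d) <= f(x) + c1 a g(x)^T d
    Curvature:           g(x + a d)^T d >= c2 g(x)^T d
    """
    gxd = grad(x) @ d
    x_new = x + alpha * d
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * gxd
    curvature = grad(x_new) @ d >= c2 * gxd
    return sufficient_decrease and curvature

# Example: quadratic f(x) = ||x||^2 with the steepest-descent direction.
f = lambda x: x @ x
grad = lambda x: 2 * x
x0 = np.array([1.0, -2.0])
print(satisfies_wolfe(f, grad, x0, -grad(x0), alpha=0.25))  # True
```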

2.
3.
We develop criteria for the existence and uniqueness of the global minima of a continuous bounded function on a noncompact set. Special attention is given to the problem of parameter estimation via minimization of the sum of squares in nonlinear regression and maximum likelihood. Definitions of local convexity and unimodality are given using the level set. A fundamental theorem of nonconvex optimization is formulated: If a function approaches the minimal limiting value at the boundary of the optimization domain from below and its Hessian matrix is positive definite at the point where the gradient vanishes, then the function has a unique minimum. It is shown that the local convexity level of the sum of squares is equal to the minimal squared radius of the regression curvature. A new multimodal function is introduced, the decomposition function, which can be represented as the composition of a convex function and a nonlinear function from the argument space to a space of larger dimension. Several general global criteria based on majorization and minorization functions are formulated.
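As a hedged numerical illustration of the theorem's two pointwise conditions (vanishing gradient, positive definite Hessian), here is a small check on the Rosenbrock function; the test function and tolerance are invented for illustration.

```python
import numpy as np

def is_strict_local_min(grad, hess, x, tol=1e-8):
    """Check the two conditions from the theorem above: the gradient
    vanishes at x and the Hessian at x is positive definite."""
    stationary = np.linalg.norm(grad(x)) <= tol
    positive_definite = np.all(np.linalg.eigvalsh(hess(x)) > 0)
    return stationary and positive_definite

# Rosenbrock function f = (1-x)^2 + 100(y - x^2)^2, unique minimum at (1, 1).
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*x[1] + 1200*x[0]**2, -400*x[0]],
                           [-400*x[0], 200.0]])
print(is_strict_local_min(grad, hess, np.array([1.0, 1.0])))  # True
```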

4.
Quasi-Newton Methods for Unconstrained Optimization   (Total citations: 3; self-citations: 0; citations by others: 3)
A revised algorithm is given for unconstrained optimization using quasi-Newton methods. The method is based on recurring the factorization of an approximation to the Hessian matrix. Knowledge of this factorization allows greater flexibility when choosing the direction of search while minimizing the adverse effects of rounding error. The control of rounding error is particularly important when analytical derivatives are unavailable, and a modification of the algorithm to accept finite-difference approximations to the derivatives is given.
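Since the abstract mentions accepting finite-difference approximations to the derivatives, here is a minimal forward-difference gradient sketch; the step size `h` is a common but illustrative choice, not the paper's.

```python
import numpy as np

def fd_gradient(f, x, h=1e-7):
    """Forward-difference approximation to the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

f = lambda x: x[0]**2 + 3*x[1]**2
print(fd_gradient(f, np.array([1.0, 2.0])))  # approx [2, 12]
```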

5.
This paper describes a class of optimization methods that interlace iterations of the limited memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited memory matrix, and plays the dual role of preconditioning the inner conjugate gradient iteration in the HFN method and of providing an initial matrix for L-BFGS iterations. The lengths of the L-BFGS and HFN cycles are adjusted dynamically during the course of the optimization. Numerical experiments indicate that the new algorithms are both effective and not sensitive to the choice of parameters.
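To make the limited memory matrix mentioned above concrete, here is a minimal sketch of the standard L-BFGS two-loop recursion that maps a gradient to a search direction; variable names are illustrative, and the interlacing with HFN cycles is not reproduced.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian approximation
    to gradient g, using stored pairs s_i = x_{i+1}-x_i, y_i = g_{i+1}-g_i."""
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # scale by gamma = s^T y / y^T y from the most recent pair
        s, y = s_list[-1], y_list[-1]
        q = (s @ y) / (y @ y) * q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q = q + (a - b) * s
    return -q  # descent direction
```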

6.
We consider multi-step quasi-Newton methods for unconstrained optimization. These methods were introduced by Ford and Moghrabi (Appl. Math., vol. 50, pp. 305–323, 1994; Optimization Methods and Software, vol. 2, pp. 357–370, 1993), who showed how interpolating curves could be used to derive a generalization of the Secant Equation (the relation normally employed in the construction of quasi-Newton methods). One of the most successful of these multi-step methods makes use of the current approximation to the Hessian to determine the parameterization of the interpolating curve in the variable-space and, hence, the generalized updating formula. In this paper, we investigate new parameterization techniques based on the approximate Hessian, in an attempt to determine a better Hessian approximation at each iteration and, thus, improve the numerical performance of such algorithms.
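For reference, the Secant Equation that these multi-step methods generalize is the standard one-step quasi-Newton relation

\[ B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k. \]

In the multi-step setting, an interpolating curve \( x(\tau) \) through recent iterates and a corresponding gradient interpolant \( \tilde g(\tau) \) replace the one-step differences, giving a condition of the schematic form \( B_{k+1}\, x'(\tau_k) = \tilde g'(\tau_k) \) that reduces to the relation above when only two points are used. This is a rendering of the Ford–Moghrabi idea for orientation, not the paper's exact formula.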

7.
Globally Convergent Algorithms for Unconstrained Optimization   (Total citations: 2; self-citations: 0; citations by others: 2)
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining Newton's direction and the steepest descent direction within each iteration. Global convergence is guaranteed for an arbitrary initial point. The search direction in each iteration is chosen to be as close to Newton's direction as possible, and may be Newton's direction itself. Asymptotically the Newton step is taken in each iteration, and thus the local convergence is quadratic. Numerical experiments are also reported. A possible combination of a quasi-Newton direction with the steepest descent direction is also considered in our numerical experiments. The differences between the proposed strategy and a few other strategies are also discussed.
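A minimal sketch of one way to blend a Newton direction with steepest descent so the result is always an acceptable descent direction; the halving rule and angle threshold `eta` are illustrative assumptions, not the paper's exact strategy.

```python
import numpy as np

def blended_direction(g, d_newton, eta=0.2):
    """Return the Newton direction if its angle with -g is acute enough,
    otherwise move it halfway toward steepest descent until it is."""
    d_sd = -g
    d = d_newton
    for _ in range(50):
        nd = np.linalg.norm(d)
        if nd < 1e-12:
            break
        if (-g @ d) / (np.linalg.norm(g) * nd) >= eta:
            return d  # close enough to Newton's direction; keep it
        d = 0.5 * (d + d_sd)
    return d_sd  # fall back to steepest descent
```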

8.
Frame Based Methods for Unconstrained Optimization   (Total citations: 9; self-citations: 0; citations by others: 9)
This paper describes a wide class of direct search methods for unconstrained optimization, which make use of fragments of grids called frames. Convergence is shown under mild conditions which allow successive frames to be rotated, translated, and scaled relative to one another.
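Frames generalize grids of search points; as a hedged illustration of the simplest grid-based special case that the frame theory covers, here is a compass (coordinate) search step over a mesh of size `h`. The rotation and scaling flexibility of frames is not shown.

```python
import numpy as np

def compass_search(f, x, h=1.0, h_min=1e-6):
    """Direct search over the +/- coordinate directions; halve the mesh
    whenever no trial point improves on the current iterate."""
    x = np.asarray(x, dtype=float)
    while h > h_min:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * h
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            h *= 0.5
    return x

f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
print(compass_search(f, [0.0, 0.0]))  # approx [1, -2]
```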

9.
An extension of the global convergence framework for unconstrained derivative-free optimization methods is presented. The extension makes it possible for the framework to include optimization methods with varying cardinality of the ordered direction set. Grid-based search methods are shown to be a special case of the more general extended global convergence framework. Furthermore, the required properties of the sequence of ordered direction sets listed in the definition of grid-based methods are relaxed and simplified by removing the requirement of structural equivalence.

10.
Building on the concept of block conjugacy, a block-conjugation method for sets of vectors that is well suited to parallel computation is proposed, which in turn yields a parallel block conjugate direction method for solving unconstrained optimization problems. Extensive numerical results show that the block conjugate direction method requires little computational work and is applicable to a wide range of functions, making it a fairly effective method for unconstrained optimization.
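The abstract does not define block conjugacy in detail; for background, a minimal sketch of classical (serial) conjugate directions on a convex quadratic, the notion the block version parallelizes. The test problem and the use of eigenvectors as conjugate directions are illustrative assumptions.

```python
import numpy as np

def conjugate_directions(A, b, directions):
    """Minimize 0.5 x^T A x - b^T x by exact line searches along
    mutually A-conjugate directions; n such directions suffice."""
    x = np.zeros_like(b)
    for d in directions:
        r = b - A @ x                    # negative gradient at x
        alpha = (d @ r) / (d @ (A @ d))  # exact line search step
        x = x + alpha * d
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
_, V = np.linalg.eigh(A)   # eigenvectors of A are A-conjugate
print(conjugate_directions(A, b, V.T))  # matches np.linalg.solve(A, b)
```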

11.
Based on fractional (rational) approximation, this paper proposes a new class of secant methods for solving univariate unconstrained optimization problems, and proves that the order of convergence of the method is √2+1. The performance of the new method is analyzed further, and numerical experiments on univariate unconstrained optimization problems are reported for the new method, the classical Newton method, and other modified secant-type methods. Both the theoretical and the numerical results show that the new secant method is effective.
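As a baseline for comparison, a minimal sketch of the classical secant iteration applied to the stationarity condition f'(x) = 0 of a univariate objective; the test function is an illustrative assumption, and the paper's rational-approximation variant differs.

```python
def secant_minimize(df, x0, x1, tol=1e-10, max_iter=100):
    """Classical secant iteration on f'(x) = 0 for univariate minimization."""
    for _ in range(max_iter):
        d0, d1 = df(x0), df(x1)
        if abs(d1 - d0) < 1e-16:
            break
        x2 = x1 - d1 * (x1 - x0) / (d1 - d0)  # secant step
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# f(x) = x^4 - 3x^2 + x has a local minimizer near x = 1.13 (f'' > 0 there).
df = lambda x: 4*x**3 - 6*x + 1
print(secant_minimize(df, 1.0, 1.5))
```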

12.
By using the curvature information obtained at the previous step in place of the curvature of the quadratic model on the segment from x_k to x_{k+1}, a BFGS-like algorithm with convergence properties similar to those of BFGS is derived, and the relationship between the new algorithm and self-scaling quasi-Newton methods is revealed. Standard test functions are selected from the CUTE test library, and numerical experiments are carried out in comparison with the standard BFGS algorithm and other modified BFGS algorithms. The experimental results show that the new algorithm behaves somewhat like a self-scaling quasi-Newton algorithm.
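For reference, a minimal sketch of the standard BFGS update of the inverse-Hessian approximation H that such BFGS-like variants modify; the paper's curvature-replacement rule is not reproduced here.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse-Hessian approximation H,
    from the step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k:
    H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,  rho = 1/(y^T s)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```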

13.
A Modified BFGS Algorithm for Unconstrained Optimization   (Total citations: 7; self-citations: 0; citations by others: 7)
In this paper we present a modified BFGS algorithm for unconstrained optimization. The BFGS algorithm updates an approximate Hessian which satisfies the most recent quasi-Newton equation. The quasi-Newton condition can be interpreted as the interpolation condition that the gradient value of the local quadratic model matches that of the objective function at the previous iterate. Our modified algorithm requires that the function value is matched, instead of the gradient value, at the previous iterate. The modified algorithm preserves the global and local superlinear convergence properties of the BFGS algorithm. Numerical results are presented, which suggest that a slight improvement has been achieved.
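The function-value interpolation condition described above can be written out explicitly; the derivation below is a reconstruction from the stated idea (match f, not g, at the previous iterate), not necessarily the paper's exact update. With the quadratic model at \( x_{k+1} \),

\[ m(x) = f_{k+1} + g_{k+1}^{\top}(x - x_{k+1}) + \tfrac{1}{2}(x - x_{k+1})^{\top} B_{k+1} (x - x_{k+1}), \]

requiring \( m(x_k) = f_k \) instead of \( \nabla m(x_k) = g_k \) gives, with \( s_k = x_{k+1} - x_k \), the scalar condition

\[ s_k^{\top} B_{k+1} s_k = 2\left(f_k - f_{k+1} + g_{k+1}^{\top} s_k\right), \]

which can be enforced by a rescaled secant equation \( B_{k+1} s_k = t_k y_k \) with \( t_k = 2(f_k - f_{k+1} + g_{k+1}^{\top} s_k)/(s_k^{\top} y_k) \).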

14.
In this paper, we propose a class of new NCP functions and discuss their properties. Using these functions, we transform the complementarity problem into an unconstrained optimization problem and study the corresponding optimization problem. Numerical results are given.
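As an illustration of the general idea (the paper's new NCP functions are not specified in the abstract), here is the classical Fischer-Burmeister NCP function and the unconstrained merit function it induces; the example problem is an assumption.

```python
import numpy as np

def fischer_burmeister(a, b):
    """Classical NCP function: phi(a,b) = 0  iff  a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def merit(x, F):
    """Unconstrained merit function for the NCP 'x >= 0, F(x) >= 0, x^T F(x) = 0':
    its global minimizers with value 0 are exactly the NCP solutions."""
    phi = fischer_burmeister(x, F(x))
    return 0.5 * (phi @ phi)

# Example NCP with F(x) = x - 1 (componentwise solution x = 1).
F = lambda x: x - 1.0
print(merit(np.array([1.0]), F))  # 0.0 at the solution
```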

15.
Consider the unconstrained optimization problem
\[ \min f(x), \quad x \in \mathbb{R}^n, \tag{1} \]
where \( f: \mathbb{R}^n \to \mathbb{R}^1 \) is a continuously differentiable function. We denote the gradient of \( f \) by \( g(x) = \nabla f(x) \) and write \( g_k = g(x_k) \), where \( x_k \) is an iterate of the descent method. The descent method has the form
\[ x_{k+1} = x_k + s_k, \tag{2} \]
where \( s_k = \alpha_k d_k \), \( \alpha_k \) is a stepsize and \( d_k \) is a search direction such that
\[ \cos^2 \langle -g_k, d_k \rangle \ge \eta, \tag{3} \]
where \( \langle -g_k, d_k \rangle \) denotes the angle between \( -g_k \) and \( d_k \) and \( 0 < \eta \le 1 \). If \( \alpha_k \) is determined by Armij…
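The text breaks off at an Armijo-type rule; for reference, a minimal sketch of standard Armijo backtracking along a descent direction, with common but illustrative constants `c1` and `beta`.

```python
import numpy as np

def armijo_backtracking(f, g, x, d, c1=1e-4, beta=0.5, alpha0=1.0):
    """Shrink alpha until f(x + alpha d) <= f(x) + c1 alpha g(x)^T d."""
    alpha, fx, gxd = alpha0, f(x), g(x) @ d
    while f(x + alpha * d) > fx + c1 * alpha * gxd:
        alpha *= beta
    return alpha

f = lambda x: x @ x
g = lambda x: 2 * x
x = np.array([3.0, -1.0])
print(armijo_backtracking(f, g, x, -g(x)))  # 0.5 for this quadratic
```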

16.
Descent algorithms use sufficient descent directions combined with stepsize rules, such as the Armijo rule, to produce sequences of iterates whose cluster points satisfy some necessary optimality conditions. In this note, we present a proof that the whole sequence of iterates converges for quasiconvex objective functions.

17.
We propose a new inexact line search rule and analyze the global convergence and convergence rate of related descent methods. The new line search rule is similar to the Armijo line-search rule and contains it as a special case. We can choose a larger stepsize in each line-search procedure and maintain the global convergence of related line-search methods. This idea allows us to design new line-search methods in a broader sense. In some special cases, the new descent method reduces to the Barzilai and Borwein method. Numerical results show that the new line-search methods are efficient for solving unconstrained optimization problems. This work was supported by NSF of China Grant 10171054, the Postdoctoral Fund of China, and K. C. Wong Postdoctoral Fund of CAS Grant 6765700. The authors thank the anonymous referees for constructive comments and suggestions that greatly improved the paper.
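For reference, the Barzilai-Borwein stepsize that the new method reduces to in special cases; this is the standard first BB formula, shown as a hedged sketch.

```python
import numpy as np

def bb_stepsize(x_prev, x, g_prev, g):
    """First Barzilai-Borwein stepsize: alpha = s^T s / s^T y,
    with s = x - x_prev and y = g - g_prev."""
    s = x - x_prev
    y = g - g_prev
    return (s @ s) / (s @ y)
```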

18.
This paper proposes a new conjugate gradient algorithm for solving unconstrained optimization problems, analyzes the convergence of the algorithm, and carries out numerical experiments on it. The numerical results show that the algorithm is effective.
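The abstract does not give the paper's conjugate gradient formula; as a hedged point of reference, a minimal nonlinear CG loop with the classical Fletcher-Reeves coefficient, Armijo backtracking, and a restart safeguard. The constants and test problem are illustrative assumptions.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Fletcher-Reeves nonlinear CG with Armijo backtracking (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:          # safeguard: restart if d is not a descent direction
            d = -g
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5        # Armijo backtracking
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

f = lambda x: x[0]**2 + 5*x[1]**2
grad = lambda x: np.array([2*x[0], 10*x[1]])
print(nonlinear_cg(f, grad, [4.0, -2.0]))  # approx [0, 0]
```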

19.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.
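As background for the continuous-time viewpoint, a minimal sketch of the simplest such method, the gradient flow dx/dt = -∇f(x) discretized by forward Euler; f decreases along trajectories, so it acts as a Lyapunov function. The step size, step count, and test function are illustrative assumptions, not the paper's method.

```python
import numpy as np

def gradient_flow_euler(grad, x0, dt=1e-2, steps=2000):
    """Forward-Euler discretization of the gradient flow x' = -grad f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * grad(x)
    return x

grad = lambda x: np.array([2*x[0], 10*x[1]])  # f = x^2 + 5 y^2
print(gradient_flow_euler(grad, [4.0, -2.0]))  # approx [0, 0]
```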

20.
A New Line Search Algorithm for Unconstrained Optimization   (Total citations: 2; self-citations: 2; citations by others: 2)
Based on an analysis of various effective line search algorithms, a new line search algorithm for solving smooth unconstrained optimization problems is presented. For unconstrained optimization problems whose objective function is twice continuously differentiable and bounded below, the algorithm has the same theoretical properties as the Wolfe-Powell line search algorithm. At each iteration the algorithm requires at most two gradient evaluations, which can save a certain amount of computation when evaluating the gradient of the objective function is expensive. Numerical experiments show that the proposed algorithm is feasible and effective.
