Similar Documents (20 results)
1.
A Class of Collinear Scaling Algorithms for Unconstrained Optimization. An appealing approach to the solution of nonlinear optimization problems based on conic models of the objective function was introduced by Davidon (1980). It leads to a broad class of algorithms which can be considered to generalize the existing quasi-Newton methods. One particular member of this class has been discussed in depth by Sorensen (1980), who proved some interesting theoretical properties. In this paper, we generalize Sorensen's technique to Spedicato's three-parameter family of variable-metric updates. Furthermore, we point out that the collinear scaling three-parameter family is essentially equivalent to the Spedicato three-parameter family. In addition, numerical experiments have been carried out to compare some collinear scaling algorithms with a straightforward implementation of the BFGS quasi-Newton method.
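As background for the comparison baseline, the following minimal sketch shows the textbook BFGS inverse-Hessian update; this is the standard benchmark named above, not Davidon's conic model or the collinear scaling updates themselves, and the curvature safeguard threshold is our own choice:

```python
import numpy as np

def bfgs_update(H, s, y):
    """One BFGS update of an inverse-Hessian approximation H.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k.
    Skips the update when the curvature condition s^T y > 0 fails."""
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return H  # curvature condition violated; keep previous approximation
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```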

2.
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm will be described which uses an intermediate (and variable) amount of storage and which demonstrates convergence that is also intermediate, that is, generally better than that observed for conjugate gradient algorithms but not as good as in a quasi-Newton approach. The new algorithm uses a strategy of generating a form of conjugate gradient search direction for most iterations, but it periodically uses a quasi-Newton step to improve the convergence. Some theoretical background for the new algorithm has been presented in an earlier paper; here we examine properties of the new algorithm and its implementation. We also present the results of some computational experience. This research was supported by the National Research Council of Canada, Grant No. A-8962.
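The best-known later realization of this variable-storage idea is the limited-memory BFGS two-loop recursion, which interpolates between CG-like O(n) storage (m = 1) and full quasi-Newton behavior (large m). The sketch below shows that standard recursion for orientation only; it is not the algorithm of this paper:

```python
import numpy as np

def two_loop_direction(grad, pairs):
    """L-BFGS two-loop recursion: computes -H_k @ grad from the m most
    recent (s, y) pairs using O(m*n) storage instead of the O(n^2) of a
    full quasi-Newton matrix.  `pairs` is ordered oldest to newest,
    e.g. a collections.deque(maxlen=m) appended to after each step."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):              # newest pair first
        rho = 1.0 / (s @ y)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if pairs:                                 # initial scaling H_0 = gamma * I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(alphas):     # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                 # quasi-Newton-like search direction
```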

3.
Feasible Direction Interior-Point Technique for Nonlinear Optimization
We propose a feasible direction approach for the minimization by interior-point algorithms of a smooth function under smooth equality and inequality constraints. It consists of the iterative solution in the primal and dual variables of the Karush–Kuhn–Tucker first-order optimality conditions. At each iteration, a descent direction is defined by solving a linear system. In a second stage, the linear system is perturbed so as to deflect the descent direction and obtain a feasible descent direction. A line search is then performed to get a new interior point and ensure global convergence. Based on this approach, first-order, Newton, and quasi-Newton algorithms can be obtained. To introduce the method, we consider first the inequality constrained problem and present a globally convergent basic algorithm. Particular first-order and quasi-Newton versions of this algorithm are also stated. Then, equality constraints are included. This method, which is simple to code, does not require the solution of quadratic programs, and it is neither a penalty method nor a barrier method. Several practical applications and numerical results show that our method is robust and efficient.
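As an illustration of the two-stage direction computation described above, the sketch below solves a KKT-based linear system for a descent direction and then re-solves it with a perturbed right-hand side to deflect the direction into the feasible region, for the inequality-constrained case. The block layout and deflection rule are our simplified reading of such feasible direction interior-point schemes, not the authors' exact formulation:

```python
import numpy as np

def feasible_descent_direction(grad_f, G, g_vals, lam, B, rho):
    """Direction computation for min f(x) s.t. g(x) <= 0, at a strictly
    feasible x (g_vals < 0) with positive multiplier estimates lam.
    G is the n-by-m matrix whose columns are the constraint gradients,
    B a quasi-Newton approximation of the Hessian of the Lagrangian."""
    n, m = G.shape
    K = np.block([[B, G],
                  [np.diag(lam) @ G.T, np.diag(g_vals)]])
    # Stage 1: descent direction from the KKT-like linear system
    d0 = np.linalg.solve(K, np.concatenate([-grad_f, np.zeros(m)]))[:n]
    # Stage 2: perturbed right-hand side gives a deflection term
    d1 = np.linalg.solve(K, np.concatenate([np.zeros(n), -lam]))[:n]
    return d0 + rho * d1   # feasible descent direction for small enough rho
```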

4.
Two existing function-space quasi-Newton algorithms, the Davidon algorithm and the projected gradient algorithm, are modified so that they may handle directly control-variable inequality constraints. A third quasi-Newton-type algorithm, developed by Broyden, is extended to optimal control problems. The Broyden algorithm is further modified so that it may handle directly control-variable inequality constraints. From a computational viewpoint, dyadic operator implementation of quasi-Newton methods is shown to be superior to the integral kernel representation. The quasi-Newton methods, along with the steepest descent method and two conjugate gradient algorithms, are simulated on three relatively simple (yet representative) bounded control problems, two of which possess singular subarcs. Overall, the Broyden algorithm was found to be superior. The most notable result of the simulations was the clear superiority of the Broyden and Davidon algorithms in producing a sharp singular control subarc. This research was supported by the National Science Foundation under Grant Nos. GK-30115 and ENG 74-21618 and by the National Aeronautics and Space Administration under Contract No. NAS 9-12872.

5.
A Family of Superlinearly Convergent Projected Quasi-Newton Algorithms
Combining gradient projection with quasi-Newton methods, this paper presents a family of algorithms, containing two groups of parameters, for solving general linearly constrained nonlinear programming problems. Under certain conditions, the global convergence of the family and the superlinear convergence rate of a subfamily are proved, and some special cases, such as the projected DFP method and the projected BFGS method, are given.
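For orientation, a generic projected quasi-Newton step looks as follows; this toy sketch uses simple box constraints and a fixed step length, whereas the paper's family carries two groups of parameters and handles general linear constraints:

```python
import numpy as np

def projected_qn_step(x, grad, H, lo, hi, alpha=1.0):
    """One generic projected quasi-Newton step on the box lo <= x <= hi:
    move along the quasi-Newton direction, then project back onto the
    feasible set.  Illustrative only."""
    d = -H @ grad                           # quasi-Newton direction
    return np.clip(x + alpha * d, lo, hi)   # projection onto the box
```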

6.
Both conjugate gradient and quasi-Newton methods are quite successful at minimizing smooth nonlinear functions of several variables, and each has its advantages. In particular, conjugate gradient methods require much less storage to implement than a quasi-Newton code and therefore find application when storage limitations occur. They are, however, slower, so there have recently been attempts to combine CG and QN algorithms so as to obtain an algorithm with good convergence properties and low storage requirements. One such method is the code CONMIN due to Shanno and Phua; it has proven quite successful, but it has one limitation: it has no middle ground, in that it either operates as a quasi-Newton code using O(n^2) storage locations or as a conjugate gradient code using 7n locations, but it cannot take advantage of the not unusual situation where more than 7n locations are available yet a quasi-Newton code requires an excessive amount of storage. In this paper we present a way of looking at conjugate gradient algorithms which was in fact given by Shanno and Phua but which we carry further, emphasize, and clarify. This applies in particular to Beale's three-term recurrence relation. Using this point of view, we develop a new combined CG-QN algorithm which can use whatever storage is available; CONMIN occurs as a special case. We present numerical results to demonstrate that the new algorithm is never worse than CONMIN and that it is almost always better if even a small amount of extra storage is provided.
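Beale's three-term recurrence mentioned above generates the search direction from the previous direction plus a stored restart direction. The following sketch shows the recurrence in its standard form; the paper's variable-storage generalization is not reproduced here:

```python
import numpy as np

def beale_direction(g, d_prev, y_prev, d_t, y_t):
    """Beale's three-term conjugate gradient recurrence.

    g       current gradient
    d_prev  previous search direction, y_prev = g_k - g_{k-1}
    d_t     direction stored at the most recent restart iteration t,
            with y_t the corresponding gradient difference."""
    beta = (g @ y_prev) / (d_prev @ y_prev)
    gamma = (g @ y_t) / (d_t @ y_t)
    return -g + beta * d_prev + gamma * d_t
```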

7.
A Comparative Study of Four Unconstrained Optimization Algorithms
From the standpoint of numerical experiments, the conjugate gradient method, the BFGS quasi-Newton method, the DFP quasi-Newton method, and the truncated Newton method are compared by solving three test problems (one of which is a constructed example whose size can be varied). Analysis of the test results shows that the truncated Newton method has an advantage in solving large-scale optimization problems, which provides a useful reference for research on large-scale optimization algorithms.
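A comparison of this flavor can be reproduced with SciPy's built-in solvers. The snippet below is illustrative only: SciPy offers no DFP implementation, TNC stands in for the truncated Newton code, and the extended Rosenbrock function stands in for the paper's constructed variable-size example:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Compare several methods on a test problem with adjustable dimension n.
n = 100
x0 = np.full(n, -1.2)
for method in ("CG", "BFGS", "TNC", "Newton-CG"):
    res = minimize(rosen, x0, jac=rosen_der, method=method)
    print(f"{method:10s} f* = {res.fun:.3e}   nfev = {res.nfev}")
```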

8.
Compared with other quasi-Newton update formulas, the SR1 update is simpler and requires less computation per iteration. However, the convergence of the standard SR1 update is usually proved under the strong condition of uniform linear independence. Building on earlier work, this paper proposes a new modified SR1 formula and proves its local convergence both with and without the uniform linear independence condition. Finally, numerical experiments verify the effectiveness of the proposed update and the reasonableness of the assumptions made. The experimental data show that, under certain conditions, a quasi-Newton algorithm based on the proposed update converges better than one based on the traditional SR1 update.
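For reference, the classical SR1 update that the paper modifies is shown below, with the usual skipping safeguard against a near-zero denominator; the modified formula itself is not given in the abstract:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Classical SR1 update of a Hessian approximation B.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k.  The rank-one correction
    is skipped when its denominator is too small, the standard safeguard
    against breakdown."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(s) * np.linalg.norm(v):
        return B                          # skip: near-singular denominator
    return B + np.outer(v, v) / denom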

9.
Rutishauser, Gragg and Harrod, and finally H.Y. Zha used the same class of chasing algorithms for transforming arrowhead matrices to tridiagonal form. Using a graph-theoretic approach, we propose a new chasing algorithm. Although this algorithm has the same sequential computational complexity and backward error properties as the old algorithms, it is better suited to a pipelined approach. The parallel algorithm for this new chasing method is described, with performance results on the Paragon and nCUBE. Comparison results between the old and the new algorithms are also presented.

10.
A Diagonal Sparse Quasi-Newton Method for Unconstrained Optimization Problems
For unconstrained optimization problems, a diagonal sparse quasi-Newton method is proposed. The algorithm uses an inexact Armijo line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, so the storage and work needed to compute the search direction are significantly reduced, which offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, the global convergence and linear convergence rate of the algorithm are proved, and its superlinear convergence characteristics are analyzed. Numerical experiments show that the algorithm is more effective than the conjugate gradient method and is suitable for large-scale unconstrained optimization problems.
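A minimal sketch of such an iteration is given below, assuming a simple safeguarded diagonal secant estimate for the correction matrix; the paper's actual diagonal update rule is not reproduced in the abstract:

```python
import numpy as np

def diag_qn_armijo(f, grad, x, iters=100, sigma=1e-4, beta=0.5):
    """Sketch: quasi-Newton iteration whose correction matrix is kept
    diagonal, with Armijo backtracking.  The diagonal update below is a
    safeguarded diagonal secant (D_i ~ y_i / s_i); illustrative only."""
    D = np.ones_like(x)                    # diagonal Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-8:
            break
        p = -g / D                         # O(n) work and storage
        t, fx = 1.0, f(x)
        while f(x + t * p) > fx + sigma * t * (g @ p):
            t *= beta                      # Armijo backtracking
        s = t * p
        x = x + s
        g_new = grad(x)
        y, g = g_new - g, g_new
        with np.errstate(divide="ignore", invalid="ignore"):
            est = y / s                    # diagonal secant estimate
        ok = np.isfinite(est) & (est > 0)  # keep D positive definite
        D = np.clip(np.where(ok, est, D), 1e-4, 1e4)
    return x
```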

11.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
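The two ingredients named above are shown below in their textbook forms: an Oren-Luenberger-type self-scaling BFGS update and a Grippo-Lampariello-Lucidi-style nonmonotone Armijo test. The paper's two specific nonmonotone forms and its weaker convergence condition are not reproduced here:

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """Self-scaling BFGS: scale B by tau before the rank-two correction,
    which keeps the eigenvalues of the approximation under control.
    Assumes the curvature condition y @ s > 0 holds."""
    Bs = B @ s
    tau = (y @ s) / (s @ Bs)               # Oren-Luenberger scaling factor
    return tau * (B - np.outer(Bs, Bs) / (s @ Bs)) + np.outer(y, y) / (y @ s)

def nonmonotone_armijo(f, x, d, g, recent_f, sigma=1e-4, beta=0.5):
    """Nonmonotone Armijo test: compare against the max of the last few
    function values (recent_f) rather than against f(x) alone."""
    f_ref = max(recent_f)                  # nonmonotone reference value
    t = 1.0
    while f(x + t * d) > f_ref + sigma * t * (g @ d):
        t *= beta
    return t
```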

12.
In this paper, some Newton and quasi-Newton algorithms for the solution of inequality constrained minimization problems are considered. All the algorithms described produce sequences {x_k} converging q-superlinearly to the solution. Furthermore, under mild assumptions, a q-quadratic convergence rate in x is also attained. Other features of these algorithms are that only the solution of linear systems of equations is required at each iteration and that the strict complementarity assumption is never invoked. First, the superlinear or quadratic convergence rate of a Newton-like algorithm is proved. Then, a simpler version of this algorithm is studied, and it is shown that it is superlinearly convergent. Finally, quasi-Newton versions of the previous algorithms are considered and, provided the sequence defined by the algorithms converges, a characterization of superlinear convergence extending the result of Boggs, Tolle, and Wang is given. This research was supported by the National Research Program "Metodi di Ottimizzazione per le Decisioni," MURST, Roma, Italy.

13.
In the present work, we explore a general framework for the design of new minimization algorithms with a desirable characteristic, namely, supervisor-searcher cooperation. We propose a class of algorithms within this framework and examine a gradient algorithm in the class. Global convergence is established for the deterministic case in the absence of noise, and the convergence rate is studied. Both theoretical analysis and numerical tests show that the algorithm is efficient for the deterministic case. Furthermore, the fact that no line search procedure is incorporated in the algorithm seems to strengthen its robustness, so that it effectively tackles test problems with stronger stochastic noise. The numerical results for both deterministic and stochastic test problems illustrate the appealing attributes of the algorithm.

14.
In this paper, an improved spectral conjugate gradient algorithm is developed for solving nonconvex unconstrained optimization problems. Unlike existing methods, the spectral and conjugate parameters are chosen such that the obtained search direction always satisfies the sufficient descent condition while remaining close to the quasi-Newton direction. With these choices, the additional assumption on the boundedness of the spectral parameter in the method proposed by Andrei is removed. Under some mild conditions, global convergence is established. Numerical experiments demonstrate the efficiency of the algorithm for solving large-scale benchmark test problems, particularly in comparison with existing state-of-the-art algorithms from the literature.
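A generic spectral conjugate gradient direction, with a crude sufficient descent safeguard, is sketched below. The improved algorithm's specific parameter choices are not given in the abstract, so Barzilai-Borwein and Polak-Ribiere choices are used as stand-ins:

```python
import numpy as np

def spectral_cg_direction(g, g_prev, d_prev, s_prev):
    """Generic spectral CG direction d = -theta*g + beta*d_prev.

    theta is a Barzilai-Borwein-type spectral parameter and beta a
    Polak-Ribiere-type conjugate parameter; both are illustrative
    stand-ins, not the paper's choices."""
    y = g - g_prev
    theta = (s_prev @ s_prev) / (s_prev @ y)   # BB spectral step
    beta = (g @ y) / (g_prev @ g_prev)         # PRP conjugate parameter
    d = -theta * g + beta * d_prev
    if g @ d > -1e-4 * (g @ g):                # enforce sufficient descent
        d = -theta * g                         # fall back to spectral steepest descent
    return d
```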

15.
A Class of Trust Region Algorithms with Nonmonotone Line Search
By combining the nonmonotone Wolfe line search technique with the traditional trust region method, we propose a new class of trust region algorithms for unconstrained optimization. The new algorithm needs to solve the trust region subproblem only once per iteration, and at every iteration the Hessian approximation satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, the global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain...
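For readers unfamiliar with the trust region framework being combined with the nonmonotone search, the sketch below shows one generic (monotone) trust region iteration using the Cauchy point for the subproblem; it is background only, not the paper's algorithm:

```python
import numpy as np

def trust_region_step(f, x, g, B, delta, eta=0.1):
    """One generic trust region iteration (assumes g != 0): solve the
    subproblem via the Cauchy point for brevity, then accept or reject
    by the ratio of actual to predicted reduction."""
    gBg = g @ B @ g
    tau = min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg)) if gBg > 0 else 1.0
    s = -tau * delta / np.linalg.norm(g) * g   # Cauchy point step
    pred = -(g @ s + 0.5 * s @ B @ s)          # predicted reduction
    ared = f(x) - f(x + s)                     # actual reduction
    rho = ared / pred
    if rho >= eta:
        x = x + s                              # accept the step
    delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x, delta
```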

16.
We propose a multi-time-scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization, both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it to the space of positive definite and symmetric matrices and (b) invert the projected Hessian. The latter operation is computationally expensive. In order to save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1): 27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with the gradient SF (G-SF) and JN-SF algorithms on two different problems: first on a simple stochastic function minimization problem, and then on a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both the G-SF and JN-SF algorithms in both problem settings. Next, we extend the QN-SF algorithm to the case of constrained optimization. In this case too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally, we present the proof of convergence for the QN-SF algorithm in both the unconstrained and constrained settings.
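The SF gradient estimate at the core of these schemes convolves the objective with a Gaussian kernel and averages finite-difference-like samples. A minimal one-sided Gaussian SF estimator is sketched below; the multi-time-scale coupling with the BFGS update is the paper's contribution and is not reproduced:

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=100, rng=None):
    """One-sided Gaussian smoothed functional gradient estimate:
    grad f(x) ~ E[ eta * (f(x + beta*eta) - f(x)) / beta ],  eta ~ N(0, I).
    Smaller beta reduces bias; more samples reduce variance."""
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)
        g += eta * (f(x + beta * eta) - fx) / beta
    return g / n_samples
```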

17.
Optimization, 2012, 61(6): 733-763
We present a non-monotone trust region algorithm for unconstrained optimization. Using the filter technique of Fletcher and Leyffer, we introduce a new filter acceptance criterion and use it to define reference iterations dynamically. In contrast with earlier filter criteria, the new criterion ensures that the size of the filter is finite. We also show a correlation between problem dimension and filter size. We prove the global convergence of the proposed algorithm to first- and second-order critical points under suitable assumptions. Significantly, the global convergence analysis does not require the common assumption that the sequence of objective function values at reference iterations be monotone, as assumed by standard non-monotone trust region algorithms. Numerical experiments on the CUTEr problems indicate that the new algorithm is competitive with some representative non-monotone trust region algorithms.

18.
A new class of quasi-Newton methods is introduced that can locate a unique stationary point of an n-dimensional quadratic function in at most n steps. When applied to positive-definite or negative-definite quadratic functions, the new class is identical to Huang's symmetric family of quasi-Newton methods (Ref. 1). Unlike the latter, however, the new family can handle indefinite quadratic forms and is therefore capable of solving saddlepoint problems that arise, for instance, in constrained optimization. The novel feature of the new class is a planar iteration that is activated whenever the algorithm encounters a near-singular search direction, along which the objective function approaches zero curvature. In such iterations, the next point is selected as the stationary point of the objective function over a plane containing the problematic search direction, and the inverse Hessian approximation is updated with respect to that plane via a new four-parameter family of rank-three updates. It is shown that the new class possesses properties which are similar to or which generalize the properties of Huang's family. Furthermore, the new method is equivalent to Fletcher's (Ref. 2) modified version of Luenberger's (Ref. 3) hyperbolic pairs method, with respect to the metric defined by the initial inverse Hessian approximation. Several issues related to implementing the proposed method in nonquadratic cases are discussed. An earlier version of this paper was presented at the 10th Mathematical Programming Symposium, Montreal, Canada, 1979.

19.
In this work we present a new scheduling model for parallel machines, which extends the multiprocessor scheduling problem with release times for minimizing total tardiness and also extends the problem of vehicle routing with time windows. The new model is motivated by a resource allocation problem that appears in the service sector. We present two classes of heuristic algorithms for the solution of the problem: the first is a class of greedy algorithms, and the second is based on the solutions of linear assignment problems. Furthermore, we give a rescheduling algorithm that improves a given feasible solution of the problem. This research has been supported by the Hungarian National Foundation for Scientific Research, Grant T046405.
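As an illustration of the first (greedy) class, the sketch below assigns jobs in earliest-due-date order to the machine that can start them soonest and sums the resulting tardiness; the priority rule is our own stand-in, since the authors' exact greedy rules are not given in the abstract:

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float   # release time r_j
    proc: float      # processing time p_j
    due: float       # due date d_j

def greedy_total_tardiness(jobs, n_machines):
    """Greedy heuristic for parallel machines with release times:
    take jobs by earliest due date, place each on the machine that can
    start it soonest, and accumulate total tardiness."""
    free = [0.0] * n_machines                      # next free time per machine
    total = 0.0
    for job in sorted(jobs, key=lambda j: j.due):  # EDD priority
        m = min(range(n_machines), key=lambda i: max(free[i], job.release))
        start = max(free[m], job.release)
        free[m] = start + job.proc
        total += max(0.0, free[m] - job.due)       # tardiness of this job
    return total
```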

20.
In this paper, a new quasi-Newton equation is applied to the structured secant methods for nonlinear least squares problems. We show that the new equation is better than the original quasi-Newton equation, as it provides a more accurate approximation to the second-order information. Furthermore, combining the new quasi-Newton equation with a product structure, a new algorithm is established. It is shown that the resulting algorithm is quadratically convergent for the zero-residual case and superlinearly convergent for the nonzero-residual case. To compare the new algorithm with related methods, our preliminary numerical experiments are also reported.
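For context, structured secant methods for nonlinear least squares keep the exact first-order part J^T J of the Hessian and approximate only the second-order part by a secant matrix. The step computation is sketched below; the paper's new quasi-Newton equation concerns how that secant matrix is updated, which the abstract does not specify:

```python
import numpy as np

def structured_secant_step(J, r, A):
    """Step for nonlinear least squares f(x) = 0.5*||r(x)||^2 with a
    structured Hessian: Hess f = J^T J + S, where J^T J is exact and only
    the second-order part S is approximated by the secant matrix A.
    A is typically updated so that A @ s matches a structured residual
    term such as (J_new - J_old).T @ r_new."""
    B = J.T @ J + A                  # structured Hessian approximation
    return np.linalg.solve(B, -J.T @ r)
```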
