Found 20 similar documents (search time: 15 ms)
1.
Norihiko Adachi 《Journal of Optimization Theory and Applications》1971,7(6):391-410
In this paper, a generalized variable-metric algorithm is presented. It is shown that this algorithm attains the minimum of a positive-definite quadratic function in a finite number of steps and that the variable-metric matrix tends to the inverse of the Hessian matrix. Most known variable-metric algorithms can be derived from this generalized algorithm, and some new algorithms are also obtained. The author expresses his gratitude to Professor H. Tokumaru for guidance and encouragement.
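Both properties the abstract cites (finite termination on a positive-definite quadratic, and the metric tending to the inverse Hessian) can be checked numerically with one classical member of the variable-metric family. The sketch below uses the DFP rank-two update with exact line searches on a quadratic; it illustrates the classical result only, not Adachi's generalized scheme itself.

```python
import numpy as np

def dfp_quadratic(A, b, x0, n_steps):
    """Minimize f(x) = 0.5 x'Ax - b'x by the DFP variable-metric
    method with exact line searches (illustrative sketch)."""
    x = np.array(x0, dtype=float)
    H = np.eye(len(x))               # initial metric: identity
    for _ in range(n_steps):
        g = A @ x - b                # gradient of the quadratic
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g                   # search direction
        # exact step length on a quadratic: alpha = -g'd / d'Ad
        alpha = -(g @ d) / (d @ A @ d)
        s = alpha * d                # step taken
        y = A @ s                    # gradient change along the step
        # DFP rank-two update of the metric
        H = H + np.outer(s, s) / (s @ y) \
              - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x = x + s
    return x, H
```

On an n-dimensional quadratic, n steps reach the exact minimizer and H equals the inverse Hessian, matching the behavior described above.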
2.
On search directions for minimization algorithms  Cited by: 1 (self-citations: 0, other citations: 1)
M. J. D. Powell 《Mathematical Programming》1973,4(1):193-201
Some examples are given of differentiable functions of three variables, having the property that if they are treated by the minimization algorithm that searches along the coordinate directions in sequence, then the search path tends to a closed loop. On this loop the gradient of the objective function is bounded away from zero. We discuss the relevance of these examples to the problem of proving general convergence theorems for minimization algorithms that use search directions.
3.
Powell has shown that the cyclic coordinate method with exact searches may not converge to a stationary point. In this note we consider a more general class of algorithms for unconstrained minimization, and establish their convergence under the assumption that the objective function has a unique minimum along any line.
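The cyclic coordinate method under discussion is easy to state concretely. For a positive-definite quadratic, where the unique-minimum-along-any-line assumption holds, the exact step along coordinate direction e_i has a closed form and the method reduces to Gauss-Seidel iteration. A minimal sketch:

```python
import numpy as np

def cyclic_coordinate_descent(A, b, x0, sweeps):
    """Cyclic coordinate descent with exact line searches for
    f(x) = 0.5 x'Ax - b'x, A symmetric positive definite.
    Along e_i the exact minimizer of t -> f(x + t e_i) is
    t* = -(Ax - b)_i / A_ii, so each sweep is one Gauss-Seidel pass."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):
            g_i = A[i] @ x - b[i]    # i-th gradient component
            x[i] -= g_i / A[i, i]    # exact 1-D minimization
    return x
```

On a quadratic this converges; Powell's counterexample shows that for general smooth functions, without the unique-line-minimum assumption, the same scheme can cycle.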
4.
The paper compares the numerical performance of the LDL decomposition of the BFGS variable-metric algorithm, the Dennis-Mei dogleg algorithm on the BFGS update, and Davidon's projections with the BFGS update against the straight BFGS update on a number of standard test problems. Numerical results indicate that the standard BFGS algorithm is superior to all of the more complex strategies. This research was supported by the National Science Foundation under Research Grant No. MCS77-07327.
5.
D. G. McDowell 《Journal of Optimization Theory and Applications》1983,41(3):439-450
Variable-metric algorithms have played an important role in unconstrained optimization theory. This paper presents a sufficiency condition on the sequence of metrics in a variable-metric algorithm that will make it a conjugate-gradient algorithm. The Huang class of algorithms (Ref. 1) and the class of self-scaling variable-metric algorithms by Oren (Ref. 2) all satisfy the condition. This paper also includes a discussion of the behavior of algorithms that meet the condition on nonquadratic functions.
6.
S. P. Uryas'ev 《Journal of Optimization Theory and Applications》1991,71(2):359-388
This paper deals with new variable-metric algorithms for nonsmooth optimization problems, the so-called adaptive algorithms. The essence of these algorithms is that there are two simultaneously working gradient algorithms: the first is in the main space and the second is in the space of the matrices that modify the main variables. The convergence of these algorithms is proved for different cases. The results of numerical experiments are also given.
7.
A convergent minimization algorithm made up of repetitive line searches in R^n is considered. It is shown that the uniform nonsingularity of the matrices consisting of n successive normalized search directions guarantees a speed of convergence which is at least n-step Q-linear. Consequences are given for multistep methods, including Powell's 1964 procedure for function minimization without calculating derivatives as well as Zangwill's modifications of this procedure. The authors wish to thank the Namur Department of Mathematics, especially its optimization group, for many discussions and encouragement. They also thank the reviewers for many helpful suggestions.
8.
Jorge Amaya 《Operations Research Letters》1985,4(1):31-34
The purpose of this paper is to unify the conditions under which curvilinear algorithms for unconstrained optimization converge. Particularly, two gradient path approximation algorithms and a trust region curvilinear algorithm are examined in this context.
9.
Geovani N. Grapiglia Ekkehard W. Sachs 《Computational Optimization and Applications》2017,68(3):555-577
A general class of non-monotone line search algorithms has been proposed by Sachs and Sachs (Control Cybern 40:1059-1075, 2011) for smooth unconstrained optimization, generalizing various non-monotone step size rules such as the modified Armijo rule of Zhang and Hager (SIAM J Optim 14:1043-1056, 2004). In this paper, the worst-case complexity of this class of non-monotone algorithms is studied. The analysis is carried out in the context of non-convex, convex and strongly convex objectives with Lipschitz continuous gradients. Despite the non-monotonicity in the decrease of function values, the complexity bounds obtained agree in order with the bounds already established for monotone algorithms.
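The Zhang-Hager rule named in the abstract replaces the usual Armijo reference value f(x_k) with a running weighted average C_k of past function values. The sketch below wires that rule into plain gradient descent; the descent wrapper is an illustrative choice, not the paper's algorithm class.

```python
import numpy as np

def nonmonotone_gd(f, grad, x0, eta=0.85, delta=1e-4, iters=500):
    """Gradient descent with the Zhang-Hager non-monotone Armijo rule:
    accept a step when f(x + a*d) <= C_k + delta * a * g'd, where C_k
    averages past values (eta = 0 recovers the monotone Armijo rule)."""
    x = np.array(x0, dtype=float)
    C, Q = f(x), 1.0                  # reference value and its weight
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -g
        alpha = 1.0
        # backtrack against the averaged reference value C, not f(x)
        while f(x + alpha * d) > C + delta * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
        # Zhang-Hager update of the running average
        Q_new = eta * Q + 1.0
        C = (eta * Q * C + f(x)) / Q_new
        Q = Q_new
    return x
```

Since C_k >= f(x_k), the backtracking loop always terminates; the rule permits occasional increases in f while the complexity bounds, per the abstract, remain of the same order as in the monotone case.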
10.
Xiaoguang Yang 《Acta Mathematicae Applicatae Sinica (English Series)》1997,13(4):337-341
1. Introduction. In this paper we analyze the convergence of multiplicative iterative algorithms for the minimization of a differentiable function defined on the positive orthant of R^n. The algorithm is suggested by Eggermont [1], and is related to the EM [2] (Expectation-Maximization) algorithm for positron emission tomography [3] and image reconstruction [4]. We consider the problem min f(x) s.t. x ≥ 0. The multiplicative iterative algorithms have the form […] for j = 1, 2, …, n, with λ_k determined through a line search. While Iusem [5] established an elegant conv…
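The EM algorithm for emission tomography that this abstract relates to is itself a multiplicative iteration: each update multiplies the current iterate componentwise, so positivity is preserved automatically. The sketch below is the classical MLEM iteration, shown for orientation; it is not the paper's exact scheme with its line-search parameter.

```python
import numpy as np

def mlem(A, y, x0, iters=1000):
    """Multiplicative (MLEM-type) iteration on the positive orthant:
    x_j <- x_j * (sum_i A_ij * y_i / (Ax)_i) / (sum_i A_ij).
    Each factor is positive, so iterates stay in x > 0."""
    x = np.array(x0, dtype=float)
    col_sums = A.sum(axis=0)          # normalization per component
    for _ in range(iters):
        ratio = y / (A @ x)           # elementwise data/model ratio
        x = x * (A.T @ ratio) / col_sums
    return x
```

With consistent data y = A x* and A of full column rank, the iteration converges to x* while every iterate remains strictly positive, the defining feature of this algorithm family.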
11.
G.-Tóth B. Hendrix E. M. T. Casado L. G. 《Central European Journal of Operations Research》2022,30(3):1071-1092
Central European Journal of Operations Research - Over the last decades, algorithms have been developed for checking copositivity of a matrix. Methods are based on several principles, such as...
12.
13.
Chaotic harmony search algorithms  Cited by: 2 (self-citations: 0, other citations: 2)
Bilal Alatas 《Applied mathematics and computation》2010,216(9):2687-2699
Harmony Search (HS) is one of the newest and easiest-to-code music-inspired heuristics for optimization problems. Mirroring the use of chaos in adjusting note parameters such as pitch, dynamics, rhythm, duration, tempo, instrument selection, attack time, etc. in real music and in sound synthesis and timbre construction, this paper proposes new HS algorithms that use chaotic maps for parameter adaptation, in order to improve the convergence characteristics and to prevent HS from getting stuck on local solutions. This is done by using chaotic number generators each time a random number is needed by the classical HS algorithm. Seven new chaotic HS algorithms are proposed, and different chaotic maps are analyzed on benchmark functions. It is found that coupling results from different areas, such as HS and complex dynamics, can improve the quality of results in some optimization problems. It is also shown that some of the proposed methods somewhat increase solution quality; that is, in some cases they improve the global searching capability by escaping local solutions.
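The core idea is mechanical: every call to the uniform random number generator in classical HS is replaced by the next iterate of a chaotic map. The sketch below uses the logistic map and a minimal HS loop; parameter names (hms, hmcr, par) follow common HS usage, and this is a simplified illustration, not any of Alatas's seven exact variants.

```python
import numpy as np

def logistic_map(x0=0.7, r=4.0):
    """Chaotic number stream from x_{k+1} = r x_k (1 - x_k);
    with r = 4 the iterates wander over (0, 1)."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def harmony_search(f, dim, lo, hi, rng, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Minimal harmony search minimizing f over [lo, hi]^dim, drawing
    every 'random' number from the generator rng; pass logistic_map()
    for the chaotic variant or any uniform stream for the classical one."""
    u = lambda: next(rng)
    memory = [lo + (hi - lo) * np.array([u() for _ in range(dim)])
              for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if u() < hmcr:                           # recall from memory
                new[j] = memory[int(u() * hms) % hms][j]
                if u() < par:                        # pitch adjustment
                    new[j] += (u() - 0.5) * 0.1 * (hi - lo)
            else:                                    # fresh random note
                new[j] = lo + (hi - lo) * u()
        c = f(new)
        worst = int(np.argmax(costs))
        if c < costs[worst]:                         # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = int(np.argmin(costs))
    return memory[best], costs[best]
```

Swapping the stream is the entire modification, which is why the chaotic variants cost essentially nothing extra per iteration.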
14.
Combining search directions using gradient flows  Cited by: 2 (self-citations: 0, other citations: 2)
The efficient combination of directions is a significant problem in line search methods that either use negative curvature, or wish to include additional information such as the gradient or different approximations to the Newton direction. In this paper we describe a new procedure to combine several of these directions within an interior-point primal-dual algorithm. Basically, we combine in an efficient manner a modified Newton direction with the gradient of a merit function and a direction of negative curvature, if it exists. We also show that the procedure is well-defined, and that it has reasonable theoretical properties regarding the rate of convergence of the method. We also present numerical results from an implementation of the proposed algorithm on a set of small test problems from the CUTE collection.
Received: November 2000 / Accepted: October 2002 Published online: February 14, 2003
Key Words. negative curvature – primal-dual methods – interior-point methods – nonconvex optimization – line searches
Mathematics Subject Classification (1991): 49M37, 65K05, 90C30
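The ingredients being combined can be made concrete with a small sketch: a modified Newton direction (eigenvalues floored in absolute value) plus a negative-curvature direction oriented downhill. This shows only the basic building blocks, not the paper's interior-point primal-dual procedure or its merit-function weighting.

```python
import numpy as np

def combined_direction(g, H):
    """Combine a modified Newton direction with a negative-curvature
    direction (if one exists). Illustrative sketch only."""
    w, V = np.linalg.eigh(H)              # eigenvalues ascending
    # modified Newton: use |eigenvalues| (floored) so the direction
    # is always a descent direction for gradient g
    d = -(V * (1.0 / np.maximum(np.abs(w), 1e-8))) @ (V.T @ g)
    if w[0] < 0:                          # negative curvature present
        d_nc = V[:, 0]                    # most negative eigenvector
        if g @ d_nc > 0:                  # orient it downhill
            d_nc = -d_nc
        d = d + d_nc                      # simple (unweighted) combination
    return d
```

Both components satisfy g'd <= 0 by construction, so the combination is a descent direction whenever the gradient is nonzero.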
15.
16.
Pavlik John A. Sewell Edward C. Jacobson Sheldon H. 《Computational Optimization and Applications》2021,80(2):377-409
Computational Optimization and Applications - This paper presents two new bidirectional heuristic search algorithms for solving the shortest path problem on graphs: consistent-heuristic...
17.
18.
Certain search algorithms produce a sequence of decreasing regions converging to a point x*. After renormalizing to a standard region at each iteration, the renormalized location of x*, say x_n, may obey a dynamic process. In this case, simple ergodic theory might be used to compute asymptotic rates. The family of second-order line search algorithms which contains the Golden Section (GS) method has this property. The paper exhibits several alternatives to GS which have better almost sure ergodic rates of convergence for symmetric functions despite the fact that GS is asymptotically minimax. The discussion in the last section includes weakening of the symmetry conditions and announces a backtracking bifurcation algorithm with optimum asymptotic rate.
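The Golden Section method named in the abstract is the canonical example of such a region-shrinking search: each iteration contracts the bracket by the fixed factor 1/phi ≈ 0.618 while reusing one interior function evaluation. A minimal sketch for a unimodal function:

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on
    [a, b]; each iteration shrinks the bracket by 1/phi, reusing one
    of the two interior evaluations."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                  # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                        # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

The deterministic contraction factor 1/phi is exactly the rate that the paper's ergodic analysis compares against when it exhibits alternatives with better almost sure rates.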
19.
Octavian G. Mustafa Donal O’Regan 《Nonlinear Analysis: Theory, Methods & Applications》2011,74(17):6383-6386
By a convenient reparametrization of the integral curves of a nonlinear ordinary differential equation (ODE), we are able to improve the conclusions of a recent generalization of Nagumo’s uniqueness theorem. In this way, we establish a flexible uniqueness criterion for ODEs without Lipschitz-like nonlinearities.
20.
A key algorithmic element of a real-time trajectory optimization hardware/software implementation is presented: the search step solver. This is one piece of an algorithm whose overall goal is to make nonlinear trajectory optimization fast enough to provide real-time commands during guidance of a vehicle such as an aeromaneuvering orbiter or the National Aerospace Plane. Many methods of nonlinear programming require the solution of a quadratic program (QP) at each iteration to determine the search step. In the trajectory optimization case, the QP has a special dynamic programming structure, an LQR-like structure. The algorithm exploits this special structure with a divide-and-conquer type of parallel implementation. A hypercube message-passing parallel machine, the INTEL iPSC/2, has been used. The algorithm solves a (p·N)-stage problem on N processors in O(p + log2 N) operations. The algorithm yields a factor of 8 speed-up over the fastest known serial algorithm when solving a 1024-stage test problem on 32 processors. This research was supported in part by the National Aeronautics and Space Administration under Grant No. NAG-1-1009.
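The "LQR-like structure" means the search-step QP can be solved serially in O(N) by a backward Riccati recursion followed by a forward rollout; that serial baseline is what the parallel divide-and-conquer solver reorganizes. A sketch of the serial recursion, under the standard LQR assumptions (this is the generic textbook recursion, not the paper's parallel implementation):

```python
import numpy as np

def lqr_backward_forward(A, B, Q, R, x0, N):
    """Solve the N-stage LQR problem min sum_k (x_k'Q x_k + u_k'R u_k)
    s.t. x_{k+1} = A x_k + B u_k, by the O(N) serial dynamic-programming
    recursion: backward Riccati sweep, then forward rollout."""
    P = Q.copy()                          # terminal cost-to-go matrix
    gains = []
    for _ in range(N):                    # backward sweep
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)     # Riccati update
        gains.append(K)                   # K for stages N-1, N-2, ..., 0
    xs, us = [x0], []
    for K in reversed(gains):             # forward rollout, u_k = -K_k x_k
        us.append(-K @ xs[-1])
        xs.append(A @ xs[-1] + B @ us[-1])
    return xs, us
```

Each backward stage depends only on the next stage's P, which is exactly the chain the parallel algorithm breaks into per-processor segments to reach O(p + log2 N).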