Similar Documents

20 similar documents found (search time: 15 ms)
1.
In this paper, a new filled function with improved properties is proposed for identifying a global minimum point of a general class of nonlinear programming problems over a closed, bounded domain. An algorithm for unconstrained global optimization is developed from the new filled function. Theoretical and numerical properties of the proposed filled function are investigated. The implementation of the algorithm on seven test problems is reported, with satisfactory numerical results.

2.
The filled function method is regarded as an efficient way to find the global minimum of multidimensional functions. A number of filled functions have been proposed recently, most of which have one or two adjustable parameters; however, there is no efficient criterion for choosing the parameters appropriately. In this paper, we propose a parameter-free filled function. Because it contains neither exponential nor logarithmic terms, it is superior to the traditional ones. Theoretical properties of the filled function are investigated, and an algorithm that does not compute gradients while minimizing the filled function is presented. Numerical experiments demonstrate the efficiency of the proposed filled function.

3.
The filled function method is regarded as an efficient approach to global optimization problems. In this paper, a new filled function method is proposed. Its main idea is as follows: once a minimizer of the objective function is found, a new continuously differentiable filled function with only one parameter is constructed; a minimizer of this filled function is then located in a lower basin of the objective function, which in turn yields a better minimizer of the objective function. The process is repeated until the global optimal solution is found. Numerical experiments show the efficiency of the proposed filled function method.
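The outer loop described above (local minimization, filled-function escape, local minimization again) can be sketched in one dimension. This is a minimal illustration only: the function `filled` below is a deliberately crude stand-in, not the continuously differentiable one-parameter function of the paper, and the coarse grid stands in for a real inner minimizer. All names (`local_min`, `filled_function_search`) and the parameter `a` are hypothetical.

```python
def f(x):
    """Two-basin test objective; global minimum near x ~ -1.47."""
    return x**4 - 4*x**2 + x

def local_min(f, x, step=0.1, tol=1e-8):
    """Derivative-free 1-D descent: move while improving, else halve the step."""
    fx = f(x)
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, moved = cand, fc, True
                break
        if not moved:
            step *= 0.5
    return x, fx

def filled(f, x, x1, f1, a=20.0):
    # Crude surrogate for a filled function: small in basins lower than f1,
    # large near the current minimizer x1 and wherever f exceeds f1.
    return 1.0 / (1.0 + (x - x1)**2) + a * max(f(x) - f1, 0.0)

def filled_function_search(f, x0, lo=-3.0, hi=3.0, grid=601):
    x1, f1 = local_min(f, x0)
    while True:
        # Minimize the surrogate (coarse grid stands in for a real inner
        # solver); its minimizer should land in a lower basin of f.
        xs = [lo + i * (hi - lo) / (grid - 1) for i in range(grid)]
        x2 = min(xs, key=lambda x: filled(f, x, x1, f1))
        x3, f3 = local_min(f, x2)
        if f3 < f1 - 1e-12:
            x1, f1 = x3, f3      # better minimizer found; repeat
        else:
            return x1, f1        # no lower basin located; stop
```

Started in the shallow right-hand basin (e.g. `x0 = 2.0`), the surrogate's minimizer falls in the deeper left-hand basin, and the second local search reaches the global minimum.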

4.
Improving Hit-and-Run is a random search algorithm for global optimization that, at each iteration, generates a candidate point for improvement uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if it improves on the current iterate. We show that for positive definite quadratic programs, the expected number of function evaluations needed to approximate the optimal solution arbitrarily well is at most O(n^{5/2}), where n is the dimension of the problem. Improving Hit-and-Run applied to global optimization problems can therefore be expected to converge polynomially fast as it approaches the global optimum. Paper presented at the II. IIASA Workshop on Global Optimization, December 9–14, 1990, Sopron (Hungary).
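The candidate-generation step is easy to sketch for a box-shaped feasible region (an illustrative simplification; the paper treats general feasible regions, and the function name and parameters here are hypothetical):

```python
import random

def improving_hit_and_run(f, lo, hi, x0, iters=2000, seed=0):
    """Improving Hit-and-Run sketch on the box [lo, hi]^n: at each iteration,
    pick a random direction, sample a uniform point on the feasible chord
    through the current point, and accept it only if it improves f."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        d = [rng.gauss(0.0, 1.0) for _ in x]          # isotropic random direction
        t_lo, t_hi = float('-inf'), float('inf')      # chord: {t : x + t d in box}
        for xi, di in zip(x, d):
            if abs(di) > 1e-12:
                a, b = (lo - xi) / di, (hi - xi) / di
                t_lo, t_hi = max(t_lo, min(a, b)), min(t_hi, max(a, b))
        t = rng.uniform(t_lo, t_hi)                   # uniform point on the chord
        cand = [xi + t * di for xi, di in zip(x, d)]
        fc = f(cand)
        if fc < fx:                                   # accept improving points only
            x, fx = cand, fc
    return x, fx
```

On a positive definite quadratic this simple loop steadily closes in on the optimum, which is the setting of the O(n^{5/2}) bound.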

5.
Pure adaptive search in global optimization
Pure adaptive search iteratively constructs a sequence of interior points uniformly distributed within the corresponding sequence of nested improving regions of the feasible space. That is, at any iteration, the next point in the sequence is uniformly distributed over the region of the feasible space containing all points that are strictly superior in value to the previous points in the sequence. The complexity of this algorithm is measured by the expected number of iterations required to achieve a given accuracy of solution. We show that for global mathematical programs satisfying the Lipschitz condition, its complexity increases at most linearly in the dimension of the problem. This work was supported in part by NATO grant 0119/89.

6.
We present an extension of continuous domain Simulated Annealing. Our algorithm employs a globally reaching candidate generator, adaptive stochastic acceptance probabilities, and converges in probability to the optimal value. An application to simulation-optimization problems with asymptotically diminishing errors is presented. Numerical results on a noisy protein-folding problem are included.

7.
Simulated annealing for constrained global optimization
Hide-and-Seek is a powerful yet simple and easily implemented continuous simulated annealing algorithm for finding the maximum of a continuous function over an arbitrary closed, bounded and full-dimensional body. The function may be nondifferentiable and the feasible region may be nonconvex or even disconnected. The algorithm begins at any feasible interior point. In each iteration it generates a candidate successor by sampling a uniformly distributed point along a direction chosen at random from the current iteration point. In contrast to the discrete case, a single step of this algorithm may generate any point in the feasible region as a candidate point. The candidate point is then accepted as the next iteration point according to the Metropolis criterion, parametrized by an adaptive cooling schedule. Again in contrast to discrete simulated annealing, the sequence of iteration points converges in probability to a global optimum regardless of how rapidly the temperatures converge to zero. Empirical comparisons with other algorithms suggest competitive performance by Hide-and-Seek. This material is based on work supported by a NATO Collaborative Research Grant, no. 0119/89.
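The iteration described above can be sketched for a box-shaped feasible region, written for minimization. This is an illustration only: the chord-sampling candidate generator matches the abstract, but the paper's adaptive cooling schedule (derived from record values) is replaced here by a simple 1/k stand-in, and all names and parameters are hypothetical.

```python
import math
import random

def hide_and_seek(f, lo, hi, x0, iters=4000, seed=2):
    """Hide-and-Seek sketch on the box [lo, hi]^n: uniform candidate on a
    random chord through the current point, Metropolis acceptance, with a
    1/k schedule standing in for the paper's adaptive cooling schedule."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    for k in range(1, iters + 1):
        d = [rng.gauss(0.0, 1.0) for _ in x]          # random direction
        t_lo, t_hi = float('-inf'), float('inf')      # feasible chord
        for xi, di in zip(x, d):
            if abs(di) > 1e-12:
                a, b = (lo - xi) / di, (hi - xi) / di
                t_lo, t_hi = max(t_lo, min(a, b)), min(t_hi, max(a, b))
        t = rng.uniform(t_lo, t_hi)
        y = [xi + t * di for xi, di in zip(x, d)]
        fy = f(y)
        temp = 1.0 / k                                # stand-in cooling schedule
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy                             # Metropolis acceptance
        if fx < fbest:
            best, fbest = list(x), fx
    return best, fbest
```

Because any point in the feasible region can be generated in a single step, the chain can escape a basin at any time, in contrast with small-neighborhood discrete annealing.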

8.
We have recently developed a global optimization methodology for solving combinatorial problems with either deterministic or stochastic performance functions. This method, the Nested Partitions (NP) method, has been shown to generate a Markov chain and to converge with probability one to a global optimum. In this paper, we study the rate of convergence of the method through the use of Markov chain Monte Carlo (MCMC) methods, and use this to derive stopping rules that can be applied during simulation-based optimization. A numerical example illustrates the feasibility of our approach.

9.
Efficient line search algorithm for unconstrained optimization
A new line search algorithm for smooth unconstrained optimization is presented that requires only one gradient evaluation with an inaccurate line search and at most two gradient evaluations with an accurate line search. It terminates in finitely many operations and shares the same theoretical properties as standard line search rules such as the Armijo-Goldstein-Wolfe-Powell rules. The algorithm is especially appropriate when gradient evaluations are very expensive relative to function evaluations. The authors would like to thank Margaret Wright and Jorge Moré for valuable comments on earlier versions of this paper.
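For context, the standard Armijo backtracking rule — one of the classical rules the abstract compares against, not the paper's new algorithm — can be sketched as follows (function name and parameter defaults are illustrative):

```python
def armijo_backtracking(f, grad, x, d, c1=1e-4, beta=0.5, max_halvings=50):
    """Classical Armijo backtracking: shrink the step t until
    f(x + t*d) <= f(x) + c1 * t * <grad f(x), d>.
    Note it needs one gradient evaluation but possibly many f evaluations,
    which is the trade-off the abstract's algorithm targets."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative
    t = 1.0
    for _ in range(max_halvings):
        if f([xi + t * di for xi, di in zip(x, d)]) <= fx + c1 * t * slope:
            break
        t *= beta
    return t
```

For example, on f(x) = x^2 at x = 1 with descent direction d = -2, the full step overshoots and one halving suffices.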

10.
This paper presents a quadratically converging algorithm for unconstrained minimization. All the accumulation points that it constructs satisfy second-order necessary conditions of optimality. Thus, it avoids second-order saddle and inflection points, an essential feature for a method used to minimize the modified Lagrangians in multiplier methods. The work of the first author was supported by NSF RANN AEN 73-07732-A02 and JSEP Contract No. F44620-71-C-0087; the work of the second author was supported by NSF Grant No. GK-37672 and ARO Contract No. DAHCO4-730C-0025.

11.
A family of accelerated conjugate direction methods, corresponding to the Broyden family of quasi-Newton methods, is described. It is shown that all members of the family generate the same sequence of points approximating the optimum and the same sequence of search directions, provided only that each direction vector is normalized before the stepsize to be taken in that direction is determined. With minimal restrictions on how the stepsize is determined (sufficient only for convergence), the accelerated methods applied to the optimization of a function of n variables are shown to have an (n+1)-step quadratic rate of convergence. Furthermore, the information needed to generate an accelerating step can be stored in a single n-vector, rather than the usual n×n symmetric matrix, without changing the theoretical order of convergence. The relationships between this family of methods and existing conjugate direction methods are discussed, and numerical experience with two members of the family is presented. This research was sponsored by the United States Army under Contract No. DAAG29-75-C-0024. The author gratefully acknowledges the valuable assistance of Julia H. Gray, of the Mathematics Research Center, University of Wisconsin, Madison, who painstakingly programmed these methods and obtained the computational results.

12.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and have been studied extensively in recent years. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it avoids the propensity for small steps: if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence can be established provided that the line search satisfies the Wolfe conditions. Numerical results with the family are also presented.
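The safeguard idea in feature (i) can be illustrated with the classical PRP+ truncation, in which beta is reset toward zero so that the direction falls back to steepest descent. This sketch is illustrative only: it is not the paper's three-parameter family, it uses a simple Armijo backtracking where the paper assumes Wolfe line searches, and the function name and restart rule are hypothetical.

```python
def hybrid_cg(f, grad, x0, iters=100):
    """Conjugate gradient sketch with the PRP+ rule beta = max(beta_PRP, 0)
    and periodic restarts. Truncating beta to 0 resets the direction to the
    negative gradient - the same safeguard idea as feature (i)."""
    x = list(x0)
    n = len(x)
    g = grad(x)
    d = [-gi for gi in g]
    for k in range(iters):
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0.0:                     # not a descent direction: restart
            d = [-gi for gi in g]
            slope = -sum(gi * gi for gi in g)
        # Armijo backtracking along d (a Wolfe search would be used in practice)
        fx, t = f(x), 1.0
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = max(sum(gn * yi for gn, yi in zip(g_new, y)) /
                   max(sum(gi * gi for gi in g), 1e-16), 0.0)   # PRP+
        if (k + 1) % n == 0:                 # periodic steepest-descent restart
            beta = 0.0
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic the iterates converge to the minimizer; the descent guard and the beta truncation are what keep the directions well behaved.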


13.
A restricted trust region algorithm for unconstrained optimization
This paper proposes an efficient implementation of a trust-region-like algorithm. The trust region is restricted to an appropriately chosen two-dimensional subspace. Convergence properties are discussed and numerical results are reported. The numerical experiments were performed on the Data General MV-8000 computer at the Center for Operations Research and Econometrics, Université Catholique de Louvain, and financed by Services de la Programmation de la Politique Scientifique under Contract No. 80-85/12. The authors are grateful for the support.

14.
This paper proposes the hybrid NM-PSO algorithm, based on the Nelder–Mead (NM) simplex search method and particle swarm optimization (PSO), for unconstrained optimization. NM-PSO is easy to implement in practice since it does not require gradient computation. The modifications to both the Nelder–Mead simplex search method and particle swarm optimization are intended to produce faster and more accurate convergence. The main purpose of the paper is to demonstrate how standard particle swarm optimizers can be improved by incorporating a hybridization strategy. On a suite of 20 test problems taken from the literature, computational results from a comprehensive experimental study, preceded by an investigation of parameter selection, show that the hybrid NM-PSO approach outperforms three other relevant search techniques (the original NM simplex search method, the original PSO, and guaranteed convergence particle swarm optimization (GCPSO)) in terms of solution quality and convergence rate. In a later part of the comparative experiment, NM-PSO is compared with several recent cooperative PSO (CPSO) procedures from the literature; the comparison still largely favors NM-PSO in accuracy, robustness, and number of function evaluations. Overall, across both kinds of computational experience, the new algorithm proves highly effective and efficient at locating optimal solutions for unconstrained optimization.
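For context, the baseline that such hybrids modify is the plain global-best PSO update. The sketch below is that standard baseline, not NM-PSO itself; the parameter values (inertia w, acceleration coefficients c1, c2) are common illustrative choices.

```python
import random

def pso(f, lo, hi, n, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain global-best PSO on the box [lo, hi]^n: each particle is pulled
    toward its personal best and the swarm's global best."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(swarm)]
    V = [[0.0] * n for _ in range(swarm)]
    P = [row[:] for row in X]                       # personal bests
    pf = [f(x) for x in X]
    gi = min(range(swarm), key=lambda i: pf[i])
    g, gf = P[gi][:], pf[gi]                        # global best
    for _ in range(iters):
        for i in range(swarm):
            for j in range(n):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (g[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], lo), hi)
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    g, gf = X[i][:], fx
    return g, gf
```

The hybridization idea of the abstract is to interleave a derivative-free simplex refinement with this population update, improving accuracy near the optimum.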

15.
This paper presents a successive element correction algorithm and a secant modification of this algorithm. The new algorithms are designed to use the gradient evaluations as efficiently as possible in forming the approximate Hessian. The estimates of the q-convergence and r-convergence rates show that the new algorithms may have good local convergence properties. Some restricted numerical results and comparisons with some previously established algorithms suggest that the new algorithms may be efficient in practice. The author would like to thank T. F. Coleman for his many important and helpful suggestions and corrections on the preliminary draft of this paper. The author is also grateful to R. A. Tapia, the editors, and the referees for helpful suggestions and corrections.

16.
An algorithm called DE-PSO is proposed that incorporates concepts from differential evolution (DE) and particle swarm optimization (PSO), updating particles not only by DE operators but also by the mechanisms of PSO. The proposed algorithm is tested on several benchmark functions. Numerical comparisons with different hybrid meta-heuristics demonstrate its effectiveness and efficiency.
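The DE ingredient of such hybrids is the mutation-plus-crossover update; the sketch below is the standard DE/rand/1/bin scheme on its own, not the DE-PSO hybrid, with common illustrative settings for F (differential weight) and CR (crossover rate).

```python
import random

def differential_evolution(f, lo, hi, n, pop=30, iters=150, F=0.6, CR=0.9, seed=4):
    """Plain DE/rand/1/bin on the box [lo, hi]^n: mutate with a scaled
    difference of two random members, binomially cross with the target,
    and keep the trial only if it is at least as good (greedy selection)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.sample([k for k in range(pop) if k != i], 3)
            jr = rng.randrange(n)            # ensure at least one mutated gene
            trial = [X[a][j] + F * (X[b][j] - X[c][j])
                     if (rng.random() < CR or j == jr) else X[i][j]
                     for j in range(n)]
            trial = [min(max(t, lo), hi) for t in trial]
            ft = f(trial)
            if ft <= fit[i]:                 # greedy selection
                X[i], fit[i] = trial, ft
    return min(zip(fit, X))
```

A DE-PSO-style hybrid would additionally move the population with PSO velocity updates between (or alongside) these DE generations.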

17.
The paper is concerned with filled functions for the global optimization of a continuous function of several variables. More general forms of filled functions are presented for smooth and nonsmooth optimization. These functions have either two adjustable parameters or one. Conditions on the functions and on the parameter values are given under which the constructed functions are the desired filled functions.

18.
Optimization algorithm with probabilistic estimation
In this paper, we present a stochastic optimization algorithm based on the idea of the gradient method that incorporates a new adaptive-precision technique. Because of this technique, unlike recent methods, the proposed algorithm adaptively selects the precision without any need for prior knowledge of the speed of convergence of the generated sequence. The algorithm thus avoids increasing the estimation precision unnecessarily, yet retains its favorable convergence properties; in effect, it maintains a balance between the requirements of computational accuracy and those of computational expediency. Furthermore, we present two types of convergence results delineating under what assumptions what kinds of convergence can be obtained for the proposed algorithm. The work reported here was supported in part by NSF Grant No. ECS-85-06249 and USAF Grant No. AFOSR-89-0518. The authors wish to thank the anonymous reviewers whose careful reading and criticism have helped them improve the paper considerably.

19.
A method is presented for attempting global minimization of a function of continuous variables subject to constraints. The method, called Adaptive Simulated Annealing (ASA), is distinguished by the fact that the fixed temperature schedules and step generation routines that characterize other implementations are replaced here by heuristic-based methods that effectively eliminate the dependence of the algorithm's overall performance on user-specified control parameters. A parallel-processing version of ASA that gives increased efficiency is presented and applied to two standard problems for illustration and comparison. This research was supported by the University Research Initiative of the U.S. Army Research Office.
