Similar Articles
20 similar articles retrieved.
1.
A generalized primal-relaxed dual algorithm for global optimization is proposed and its convergence is proved. The (GOP) algorithm of Floudas and Visweswaran (Refs. 1–2) is shown to be a special case of this general algorithm. Within the proposed framework, the algorithm of Floudas and Visweswaran (Refs. 1–2) is further extended to the nonsmooth case. A penalty implementation of the extended (GOP) algorithm is studied to improve its efficiency.

2.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces minimizers entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on stepwise uncertainty reduction, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the Efficient Global Optimization (EGO) algorithm. An empirical comparison is carried out between our criterion and expected improvement, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
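A minimal sketch of the sequential loop such criteria plug into, using the classical expected-improvement criterion rather than the minimizers-entropy criterion of IAGO (which would additionally require conditional simulations of the Gaussian process); the test function, kernel choice and budget are illustrative assumptions, not taken from the paper.

```python
# Illustrative EGO-style loop with the expected-improvement criterion; the
# test function, kernel and budget below are assumptions, not from the paper.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, f_min):
    # EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def f(x):                                   # stand-in for the expensive black box
    return np.sin(3 * x) + 0.5 * x**2

X = np.array([[-1.5], [0.0], [1.5]])        # small initial design
y = f(X).ravel()
candidates = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)

for _ in range(15):                         # evaluation budget
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])              # evaluate where EI is largest
    y = np.append(y, f(x_next))

print("best point:", X[np.argmin(y)], "best value:", y.min())
```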

3.
In this paper we perform a computational analysis of a population-based approach for global optimization, Population Basin Hopping (PBH), which the authors have previously shown to be very efficient on highly challenging global optimization problems. The experimental analysis aims at understanding more deeply how the approach works and why it is successful on challenging problems.

4.
For constrained concave global minimization problems, two very different solution techniques have been investigated. The first such method is a stochastic multistart approach which typically finds, with high probability, all local minima for the problem. The second method is deterministic and guarantees a global minimum solution to within any user-specified tolerance. It is the purpose of this paper to make a careful comparison of these two methods on a range of test problems using separable concave objectives over compact polyhedral sets, and to investigate in this way the advantages and disadvantages of each method. A direct computational comparison, on the same set of over 140 problems, is presented.
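The stochastic side of such a comparison can be pictured with a short multistart sketch: run a local solver from uniformly distributed starting points and keep the distinct local minima found. The objective, tolerances and use of scipy's local solver below are illustrative assumptions; the deterministic, guaranteed-tolerance counterpart is not sketched.

```python
# Illustrative multistart loop: uniform random starts, a local solver, and a
# crude duplicate filter; the objective and tolerances are assumptions.
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds)[:, 0], np.asarray(bounds)[:, 1]
    minima = []                                   # list of (point, value) pairs
    for _ in range(n_starts):
        res = minimize(f, rng.uniform(lo, hi), bounds=bounds)
        if not any(np.linalg.norm(res.x - m) < 1e-4 for m, _ in minima):
            minima.append((res.x, res.fun))       # a local minimum not seen before
    return sorted(minima, key=lambda t: t[1])

# A separable objective with several local minima on [-2, 2] x [-2, 2]
f = lambda x: np.sum(x**4 - 3 * x**2 + x)
print(multistart(f, [(-2.0, 2.0), (-2.0, 2.0)])[0])
```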

5.
A branch-and-reduce approach to global optimization
This paper presents valid inequalities and range contraction techniques that can be used to reduce the size of the search space of global optimization problems. To demonstrate the algorithmic usefulness of these techniques, we incorporate them within the branch-and-bound framework. This results in a branch-and-reduce global optimization algorithm. A detailed discussion of the algorithm components and theoretical properties is provided. Specialized algorithms for polynomial and multiplicative programs are developed. Extensive computational results are presented for engineering design problems, standard global optimization test problems, univariate polynomial programs, linear multiplicative programs, mixed-integer nonlinear programs and concave quadratic programs. For the problems solved, the computer implementation of the proposed algorithm provides very accurate solutions in modest computational time.
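One of the simplest "reduce" ingredients in such a scheme is feasibility-based range contraction over a single linear constraint a·x ≤ b: each variable's bounds are tightened using the extreme contributions of the other variables over the current box. The sketch below illustrates only this elementary step, not the paper's full set of valid inequalities and contraction techniques.

```python
# Feasibility-based range contraction for one linear constraint a.x <= b;
# a minimal illustration of a "reduce" step, not the paper's algorithm.
def contract_bounds(a, b, lower, upper):
    """Tighten [lower, upper] using a.x <= b; return new bounds or None if the box is infeasible."""
    n = len(a)
    lower, upper = list(lower), list(upper)
    for j in range(n):
        if a[j] == 0:
            continue
        # smallest possible contribution of all other terms over the current box
        rest = sum(min(a[i] * lower[i], a[i] * upper[i]) for i in range(n) if i != j)
        if a[j] > 0:
            upper[j] = min(upper[j], (b - rest) / a[j])
        else:
            lower[j] = max(lower[j], (b - rest) / a[j])
        if lower[j] > upper[j]:
            return None          # no feasible point in this box: it can be pruned
    return lower, upper

# Example: x1 + 2*x2 <= 4 on [0, 10] x [0, 10] contracts to [0, 4] x [0, 2].
print(contract_bounds([1.0, 2.0], 4.0, [0.0, 0.0], [10.0, 10.0]))
```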

6.
The Particle Swarm Optimization (PSO) method is a well-established technique for global optimization. During the past years several variations of the original PSO have been proposed in the relevant literature. Because of the increasing need for global optimization methods in almost all fields of science, there is a great demand for efficient and fast implementations of the relevant algorithms. In this work we propose three modifications of the original PSO method, applicable independently in almost every PSO variant, that increase its speed and efficiency. These modifications are: (a) a new stopping rule, (b) a similarity check and (c) a conditional application of some local search method. The proposed modifications were tested using three popular PSO variants and a variety of test functions. We found that applying these modifications resulted in significant gains in speed and efficiency.
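For orientation, a minimal PSO loop is sketched below with one simple variance-based stopping rule; the specific stopping rule, similarity check and local-search trigger proposed in the paper are not reproduced, and all parameter values are illustrative assumptions.

```python
# Minimal particle swarm sketch; the stopping rule below is an illustrative
# assumption, not the rule proposed in the paper.
import numpy as np

def pso(f, bounds, n_particles=30, w=0.7, c1=1.5, c2=1.5, max_iter=500, tol=1e-8):
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds)[:, 0], np.asarray(bounds)[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(max_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
        if np.var(pbest_val) < tol:      # illustrative stopping rule: the swarm has collapsed
            break
    return gbest, pbest_val.min()

print(pso(lambda z: np.sum(z**2), [(-5, 5), (-5, 5)]))
```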

7.
Stochastic global optimization methods part I: Clustering methods
In this stochastic approach to global optimization, clustering techniques are applied to identify local minima of a real-valued objective function that are potentially global. Three different methods of this type are described; their accuracy and efficiency are analyzed in detail.
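The clustering idea can be pictured as follows: sample uniformly, keep the fraction of points with the lowest objective values, group them with a crude distance-threshold clustering, and start one local search per cluster. The three methods analyzed in the paper use more refined clustering rules; the radius, fraction and test function below are illustrative assumptions.

```python
# Rough clustering-multistart sketch; the distance-threshold grouping is a
# simplification of the clustering rules analyzed in the paper.
import numpy as np
from scipy.optimize import minimize

def cluster_multistart(f, bounds, n_sample=200, gamma=0.1, radius=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds)[:, 0], np.asarray(bounds)[:, 1]
    pts = rng.uniform(lo, hi, size=(n_sample, len(lo)))
    vals = np.apply_along_axis(f, 1, pts)
    reduced = pts[np.argsort(vals)[: int(gamma * n_sample)]]     # keep the best fraction
    seeds = []                                                   # one representative per cluster
    for p in reduced:
        if all(np.linalg.norm(p - s) > radius for s in seeds):
            seeds.append(p)
    results = [minimize(f, s, bounds=bounds) for s in seeds]     # one local search per cluster
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun

f = lambda x: np.sum(x**4 - 3 * x**2 + x)
print(cluster_multistart(f, [(-2.0, 2.0), (-2.0, 2.0)]))
```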

8.
In this paper, we report results of implementations of algorithms designed (i) to solve the global optimization problem (GOP) and (ii) to run on a parallel network of transputers. There have always been two alternative approaches to the solution of the GOP, probabilistic and deterministic. Interval methods can be implemented on our network of transputers using Concurrent ADA, and a secondary objective of the tests reported was to investigate the relative computer times required by parallel interval algorithms compared to probabilistic methods.
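A serial sketch of an interval branch-and-bound of the Moore–Skelboe type is given below for a toy univariate function; the parallel transputer and Concurrent ADA aspects of the paper are not modelled, and the hand-coded interval extension is an illustrative assumption.

```python
# Serial toy interval branch-and-bound; the interval extension is hand-coded
# for f(x) = x**2 - 2*x, and the parallel aspects of the paper are not modelled.
import heapq

def f(x):
    return x * x - 2 * x

def f_interval(lo, hi):
    # natural interval extension of x**2 - 2*x on [lo, hi]
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2 * hi, sq_hi - 2 * lo        # [x^2] + [-2x], endpoint-wise

def interval_minimize(a, b, tol=1e-6):
    best_upper = min(f(a), f(b))                 # any point value bounds the minimum from above
    queue = [(f_interval(a, b)[0], a, b)]        # boxes keyed by their interval lower bound
    while queue:
        lb, lo, hi = heapq.heappop(queue)
        if lb > best_upper or hi - lo < tol:     # fathomed or small enough: discard
            continue
        mid = (lo + hi) / 2
        best_upper = min(best_upper, f(mid))
        for child in ((lo, mid), (mid, hi)):
            child_lb = f_interval(*child)[0]
            if child_lb <= best_upper:
                heapq.heappush(queue, (child_lb, *child))
    return best_upper

print(interval_minimize(-3.0, 3.0))              # the true minimum of x^2 - 2x is -1 at x = 1
```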

9.
Many engineering optimization problems involve discrete as well as continuous variables, and the presence of nonlinear discrete variables considerably adds to the solution complexity. Very few of the existing methods can find a globally optimal solution when the objective functions are non-convex and non-differentiable. In this paper, we present a mixed-variable evolutionary programming (MVEP) technique for solving these nonlinear optimization problems which contain integer, discrete, zero-one and continuous variables. The MVEP provides an improvement in global search reliability in a mixed-variable space and converges steadily to a good solution. An approach to handle various kinds of variables and constraints is discussed. Some examples of mixed-variable optimization problems in the literature are tested, which demonstrate that the proposed approach is superior to current methods for finding the best solution, in terms of both solution quality and algorithm robustness.
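As a rough illustration of what a mixed-variable representation involves, the sketch below mutates an individual whose genes are continuous, integer, zero-one and catalogue-discrete; the operators and parameters are illustrative assumptions and are not the MVEP operators themselves.

```python
# Illustrative mutation of a mixed-variable individual (continuous, integer,
# zero-one and catalogue-discrete genes); these operators are assumptions,
# not the MVEP operators from the paper.
import random

def mutate(individual, spec, sigma=0.3, p_flip=0.1):
    child = []
    for value, kind in zip(individual, spec):
        if kind == "continuous":
            child.append(value + random.gauss(0.0, sigma))      # Gaussian perturbation
        elif kind == "integer":
            child.append(value + random.choice([-1, 0, 1]))     # unit step on the integer grid
        elif kind == "binary":
            child.append(1 - value if random.random() < p_flip else value)
        else:                                                   # a list of allowed catalogue values
            child.append(random.choice(kind))
    return child

spec = ["continuous", "integer", "binary", [0.5, 1.0, 2.5, 4.0]]
print(mutate([3.2, 7, 1, 1.0], spec))
```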

10.
A new deterministic method for solving a global optimization problem is proposed. The proposed method consists of three phases. The first phase is a typical local search to compute a local minimum. The second phase employs a discrete sup-local search to locate a so-called sup-local minimum, the point with the lowest objective value among the neighboring local minima. The third phase is an attractor-based global search to locate a new point of next descent with a lower objective value. Simulation results on well-known global optimization problems demonstrate the efficiency of the proposed method.

11.
In this paper, the Bayesian methods of global optimization are considered. They provide the minimal expected deviation from the global minimum. It is shown that, using the Bayesian methods, the asymptotic density of calculations of the objective function is much greater around the point of global minimum. The relation of this density to the parameters of the method and to the function is defined. Algorithms are described which apply the Bayesian methods to problems with linear and nonlinear constraints. The Bayesian approach to global multiobjective optimization is defined. Interactive procedures and reduction of multidimensional data in the case of global optimization are discussed.

12.
An algorithm is presented which locates the global minimum or maximum of a function satisfying a Lipschitz condition. The algorithm uses lower bound functions defined on a partitioned domain to generate a sequence of lower bounds for the global minimum. Convergence is proved, and some numerical results are presented.
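A sketch in the spirit of such Lipschitz-based schemes (Piyavskii/Shubert style) is given below for one dimension: each subinterval [a, b] carries the lower bound (f(a)+f(b))/2 − L(b−a)/2, and the subinterval with the smallest bound is bisected next. The paper's choice of lower bound functions and splitting points may differ; the test function and Lipschitz constant are illustrative assumptions.

```python
# One-dimensional Piyavskii/Shubert-style sketch; the test function and the
# Lipschitz constant are assumptions, and the paper's lower bound functions
# and splitting rule may differ.
import heapq, math

def lipschitz_minimize(f, a, b, L, max_evals=200):
    fa, fb = f(a), f(b)
    best = min(fa, fb)
    # priority queue of (lower bound, left, f(left), right, f(right))
    heap = [((fa + fb) / 2 - L * (b - a) / 2, a, fa, b, fb)]
    evals = 2
    while heap and evals < max_evals:
        lb, left, fl, right, fr = heapq.heappop(heap)
        if lb >= best - 1e-9:                    # no remaining subinterval can do better
            break
        mid = (left + right) / 2
        fm = f(mid)
        evals += 1
        best = min(best, fm)
        for (l, vl, r, vr) in ((left, fl, mid, fm), (mid, fm, right, fr)):
            child_lb = (vl + vr) / 2 - L * (r - l) / 2
            if child_lb < best:
                heapq.heappush(heap, (child_lb, l, vl, r, vr))
    return best

# |d/dx (sin(3x) + 0.2x)| <= 3.2, so L = 3.2 is a valid Lipschitz constant on [-3, 3]
print(lipschitz_minimize(lambda x: math.sin(3 * x) + 0.2 * x, -3.0, 3.0, L=3.2))
```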

13.
In Ref. 1, a general class of branch-and-bound methods was proposed by Horst for solving global optimization problems. One of the main contributions of Ref. 1 was the opportunity of handling partition elements whose feasibility is not known. Deletion-by-infeasibility rules were presented for problems where the feasible set is convex, is defined by finitely many convex and reverse convex constraints, or is defined by Lipschitzian inequalities. In this note, we propose a new deletion-by-infeasibility rule for problems whose feasible set is defined by functions representable as differences of convex functions. This research was supported in part by the Hungarian National Research Foundation, Grant OTKA No. 2568.

14.
Two improvements for the algorithm of Breiman and Cutler are presented. Better envelopes can be built up using positive quadratic forms. Better utilization of first and second derivative information is attained by combining both global aspects of curvature and local aspects near the global optimum. The basis of the results is the geometric viewpoint developed by the first author, which can be applied to a number of covering-type methods. Improvements in convergence rates are demonstrated empirically on standard test functions. Partially supported by a University of Canterbury Erskine grant.

15.
In this paper we consider a global optimization method for space trajectory design problems. The method, which actually aims at finding not only the global minimizer but a whole set of low-lying local minimizers (corresponding to a set of different design options), is based on a domain decomposition technique where each subdomain is evaluated through a procedure based on the evolution of a population of agents. The method is applied to two space trajectory design problems and compared with existing deterministic and stochastic global optimization methods.

16.
A counterexample to an algorithm of Dien (1988) for solving a minimization problem with a quasiconcave objective function under linear constraints and a reverse-convex constraint shows that this algorithm need not lead to a solution of the given problem.

17.
By far the most efficient methods for global optimization are based on starting a local optimization routine from an appropriate subset of uniformly distributed starting points. As the number of local optima is frequently unknown in advance, deciding when to stop the sequence of sampling and searching is a crucial problem. By viewing a set of observed minima as a sample from a generalized multinomial distribution whose cells correspond to the local optima of the objective function, we obtain the posterior distribution of the number of local optima and of the relative size of their regions of attraction. This information is used to construct sequential Bayesian stopping rules which find the optimal trade-off between reliability and computational effort.
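A commonly cited estimate from this Bayesian multistart literature (usually attributed to Boender and Rinnooy Kan) says that after s local searches have produced w distinct local minima, the posterior expected total number of local minima is w(s−1)/(s−w−2). Whether this is exactly the estimator derived in the paper is an assumption; the sketch below only illustrates the kind of sequential stopping rule described.

```python
# Classical Bayesian multistart estimate (w distinct minima after s searches);
# whether it matches the paper's posterior exactly is an assumption.
def expected_num_minima(w, s):
    if s <= w + 2:
        return float("inf")                      # too few searches to estimate anything
    return w * (s - 1) / (s - w - 2)

def should_stop(w, s, slack=0.5):
    # stop once the expected total barely exceeds the number of minima already found
    return expected_num_minima(w, s) < w + slack

print(expected_num_minima(4, 20), should_stop(4, 20))   # 4 distinct minima after 20 searches
```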

18.
On the convergence of global methods in multiextremal optimization
A general class of derivative-free optimization procedures is presented, including the corresponding convergence theory. This theory turns out to be very constructive, in the sense that the convergence conditions not only can be verified easily for many existing algorithms, but also allow one to construct new procedures. It is shown that popular methods such as branch-and-bound concepts, Pintér's general class of procedures, the algorithms of Pijavskii, Shubert, and Mladineo, and the approach of Zheng and Galperin can not only be subsumed under this class of methods, but also partly be improved by regarding them within the framework presented.

19.
This paper discusses algorithms of Moore, Skelboe, Ichida, Fujii and Hansen for solving the global unconstrained optimization problem. These algorithms have been tried on computers, but a thorough theoretical discussion of their convergence properties has been missing. The discussion was started in part I of this paper (Mathematical Programming 33 (1985) 300–317), where the convergence to the global minimum was studied. The present paper is concerned with the different behaviours of these algorithms when they are used for the determination of global minimum points. The solution sets of the algorithms can be a subset of the set of global minimum points, G, a superset of G, or exactly G. The algorithms are applicable to a very general class of functions: functions which are continuous and have suitable inclusion functions. The number of global minimum points can be infinite. This work was supported by the Deutsche Forschungsgemeinschaft.

20.
A method is presented for attempting global minimization of a function of continuous variables subject to constraints. The method, called Adaptive Simulated Annealing (ASA), is distinguished by the fact that the fixed temperature schedules and step generation routines that characterize other implementations are here replaced by heuristic-based methods that effectively eliminate the dependence of the algorithm's overall performance on user-specified control parameters. A parallel-processing version of ASA that gives increased efficiency is presented and applied to two standard problems for illustration and comparison. This research was supported by the University Research Initiative of the U.S. Army Research Office.
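A minimal simulated-annealing sketch with one simple adaptive ingredient (rescaling the step size toward a target acceptance ratio) is given below; ASA's actual temperature schedules and step-generation heuristics are more elaborate and are not reproduced, and all parameters and the test function are illustrative assumptions.

```python
# Minimal simulated annealing with one adaptive ingredient (step rescaling
# toward a target acceptance ratio); the cooling factor, step rule and test
# function are assumptions and do not reproduce the ASA heuristics.
import math, random

def anneal(f, x0, step=1.0, temp=1.0, cooling=0.995, n_iter=5000, target_accept=0.4):
    random.seed(0)
    x, fx = list(x0), f(x0)
    best_x, best_f = list(x), fx
    accepted = 0
    for i in range(1, n_iter + 1):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):   # Metropolis test
            x, fx = cand, fc
            accepted += 1
            if fx < best_f:
                best_x, best_f = list(x), fx
        temp *= cooling
        if i % 100 == 0:                         # adapt the step toward the target acceptance ratio
            step *= 1.2 if accepted / 100 > target_accept else 0.8
            accepted = 0
    return best_x, best_f

print(anneal(lambda v: (v[0] - 1) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2, [-1.0, 1.0]))
```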
