Similar Documents
20 similar documents found.
1.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces the minimizer entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on stepwise uncertainty reduction, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the Efficient Global Optimization (EGO) algorithm. An empirical comparison is carried out between our criterion and expected improvement, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
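The minimizer-entropy idea can be illustrated with a short conditional-simulation experiment. The sketch below is not the authors' IAGO implementation; the kernel, grid, and sample counts are assumptions made purely for illustration.

    # Illustrative sketch (not the authors' IAGO code): estimating the
    # distribution of the global minimizer of a Gaussian process by
    # conditional simulation, as the abstract describes.
    import numpy as np

    def rbf(a, b, length=0.3):
        # Squared-exponential covariance between two 1-D point sets.
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

    rng = np.random.default_rng(0)
    x_obs = np.array([0.1, 0.4, 0.9])        # points already evaluated
    y_obs = np.array([0.5, -0.2, 0.3])       # observed function values
    x_grid = np.linspace(0.0, 1.0, 200)      # candidate locations

    # GP posterior (noise-free interpolation, i.e. simple Kriging, zero mean).
    K = rbf(x_obs, x_obs) + 1e-10 * np.eye(len(x_obs))
    k_star = rbf(x_grid, x_obs)
    K_inv = np.linalg.inv(K)
    mu = k_star @ K_inv @ y_obs
    cov = rbf(x_grid, x_grid) - k_star @ K_inv @ k_star.T

    # Conditional simulations: sample paths from the posterior and record
    # where each sample attains its minimum.
    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x_grid)))
    samples = mu[None, :] + rng.standard_normal((500, len(x_grid))) @ L.T
    argmins = np.argmin(samples, axis=1)

    # Empirical distribution of the minimizer and its Shannon entropy;
    # IAGO would pick the next evaluation so as to reduce this entropy.
    p = np.bincount(argmins, minlength=len(x_grid)) / len(argmins)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    print(f"minimizer entropy estimate: {entropy:.3f}")

Each posterior sample path votes for the grid point where it attains its minimum; the entropy of the resulting histogram is the quantity the criterion seeks to reduce with the next evaluation.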

2.
One of the most commonly encountered approaches for the solution of unconstrained global optimization problems is the application of multi-start algorithms. These algorithms usually combine already computed minimizers and previously selected initial points to generate new starting points, at which local search methods are applied to detect new minimizers. Multi-start algorithms are usually terminated once a stochastic criterion is satisfied. In this paper, the operators of the Differential Evolution algorithm are employed to generate the starting points of a global optimization method with dynamic search trajectories. Results for various well-known and widely used test functions are reported, supporting the claim that the proposed approach drastically improves the performance of the algorithm in terms of the total number of function evaluations required to reach a global minimizer.
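As an illustration of the generation step described above, here is a minimal sketch of the standard DE/rand/1/bin operators applied to a pool of minimizers and earlier starting points. The parameter values and population layout are assumptions for the example, not the paper's settings.

    # Hedged sketch: standard differential-evolution mutation and crossover
    # used to propose a new starting point for a local search.
    import numpy as np

    def de_trial_point(population, i, F=0.8, CR=0.9, rng=None):
        """DE/rand/1/bin: combine three distinct members r1, r2, r3 of the
        population (all different from i) into a trial starting point."""
        rng = rng or np.random.default_rng()
        n, d = population.shape
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = population[r1] + F * (population[r2] - population[r3])
        # Binomial crossover with the current member; at least one mutant
        # coordinate is always inherited.
        mask = rng.random(d) < CR
        mask[rng.integers(d)] = True
        return np.where(mask, mutant, population[i])

    # Usage: seed the population with computed minimizers and earlier starts,
    # then hand each trial point to a local search method.
    pop = np.random.default_rng(1).random((6, 2))   # 6 points in 2-D
    print(de_trial_point(pop, i=0))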

3.
To address the slow convergence, low accuracy, and poor modeling efficiency caused by the large number of hyperparameters of the traditional Kriging model in global optimization with many (high-dimensional) input variables, an efficient global optimization method based on a partial least squares (PLS) transformation and the Kriging model is proposed. First, a PLS-based Gaussian kernel function is constructed. Second, a differential evolution algorithm is used to search for new sample points that maximize the expected improvement criterion. Then, four efficient global optimization algorithms are constructed and compared by combining the different kernel functions with the expected improvement criterion. Finally, numerical examples show that the PLS-based Kriging global optimization method outperforms standard global optimization algorithms in both convergence accuracy and convergence speed on high-dimensional problems.
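For context, one published way to build such a PLS-based Gaussian kernel is the KPLS covariance of Bouhlel et al.; whether the abstract's construction matches it exactly is an assumption. With PLS weight vectors w^{(l)} for h ≪ d retained components,

\[
k(\mathbf{x}, \mathbf{x}') = \sigma^2 \prod_{l=1}^{h} \prod_{i=1}^{d} \exp\!\left( -\theta_l \left( w_i^{(l)} x_i - w_i^{(l)} x_i' \right)^2 \right),
\]

so only h hyperparameters θ_1, …, θ_h must be estimated instead of d, which is what accelerates Kriging in high dimension.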

4.
A new method is proposed for solving box-constrained global optimization problems. The basic idea of the method is as follows: by constructing a so-called cut-peak function and a choice function for each current minimizer, the original problem of finding a global solution is converted into an auxiliary minimization problem of finding local minimizers of the choice function whose objective function values are smaller than the previous ones. Starting from a local minimizer of the auxiliary problem, this procedure is repeated until no new minimizer with a smaller objective function value can be found. The construction of the auxiliary problems and the choice of parameters are relatively simple, so the algorithm is easy to implement, and the results of the numerical tests are satisfactory compared with other methods.

5.
Many estimation problems amount to minimizing a piecewise C^m objective function, with m ≥ 2, composed of a quadratic data-fidelity term and a general regularization term. It is widely accepted that the minimizers obtained using non-convex and possibly non-smooth regularization terms are frequently good estimates. However, few facts are known on the ways to control properties of these minimizers. This work is dedicated to the stability of the minimizers of such objective functions with respect to variations of the data. It consists of two parts: first we consider all local minimizers, whereas in a second part we derive results on global minimizers. In this part we focus on data points such that every local minimizer is isolated and results from a C^{m-1} local minimizer function, defined on some neighborhood. We demonstrate that all data points for which this fails form a set whose closure is negligible.

6.
We address estimation problems where the sought-after solution is defined as the minimizer of an objective function composed of a quadratic data-fidelity term and a regularization term. We especially focus on non-convex and possibly non-smooth regularization terms because of their ability to yield good estimates. This work is dedicated to the stability of the minimizers of such piecewise C^m, with m ≥ 2, non-convex objective functions. It is composed of two parts. In the previous part of this work we considered general local minimizers. In this part we derive results on global minimizers. We show that the data domain contains an open, dense subset such that for every data point therein, the objective function has a finite number of local minimizers and a unique global minimizer. It gives rise to a global minimizer function which is C^{m-1} everywhere on an open and dense subset of the data domain.

7.
A novel method, called the discrete global descent method, is developed in this paper to solve discrete global optimization problems and nonlinear integer programming problems. This method moves from one discrete minimizer of the objective function f to a better one at each iteration with the help of an auxiliary function, called the discrete global descent function. The discrete global descent function guarantees that its discrete minimizers coincide with the better discrete minimizers of f under some standard assumptions. This property also ensures that a better discrete minimizer of f can be found by some classical local search methods. Numerical experiments on several test problems with up to 100 integer variables and up to 1.38 × 10^104 feasible points have demonstrated the applicability and efficiency of the proposed method.

8.
A Locally-Biased form of the DIRECT Algorithm
In this paper we propose a form of the DIRECT algorithm that is strongly biased toward local search. This form should do well for small problems with a single global minimizer and only a few local minimizers. We motivate our formulation with some results on how the original formulation of the DIRECT algorithm clusters its search near a global minimizer. We report on the performance of our algorithm on a suite of test problems and observe that the algorithm performs particularly well when termination is based on a budget of function evaluations.

9.
《Optimization》2012,61(1):29-51
The problem of approximating a given function on a given set by a polynomial of fixed degree in the Chebyshev metric (the Chebyshev polynomial approximation problem) is a typical problem of Nonsmooth Analysis (more precisely, it is a convex nonsmooth problem). It has many important applications, both in mathematics and in practice. The theory of Chebyshev approximations enjoys very nice properties (the most famous being the Chebyshev alternation rule). In the present article, the problem of approximating a given function on a given finite set of points by several polynomials is studied. As the criterion function, the maximin deviation is taken. The resulting functional is nonsmooth and nonconvex, and therefore the problem becomes multiextremal and may have local minimizers which are not global ones. A necessary and sufficient condition for a point to be a local minimizer is proved. It is shown that a generalized alternation rule is still valid. Sufficient conditions for a point to be a strict local minimizer are established as well. These conditions are also formulated in terms of alternants. An exchange algorithm for finding a local minimizer is constructed, and a k-exchange algorithm, which allows a "better" local minimizer to be found, is stated. Numerical examples illustrating the theory are presented.
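In symbols, the classical single-polynomial problem on a finite point set {t_1, …, t_N} reads

\[
\min_{p \in \mathcal{P}_n} \max_{1 \le i \le N} \lvert f(t_i) - p(t_i) \rvert,
\]

and one natural reading of the several-polynomial, maximin-deviation problem studied here (an interpretation of the abstract, not a formula quoted from the article) is

\[
\min_{p_1, \dots, p_k \in \mathcal{P}_n} \max_{1 \le i \le N} \min_{1 \le j \le k} \lvert f(t_i) - p_j(t_i) \rvert,
\]

whose inner minimum over polynomials destroys convexity and explains the multiextremality mentioned above.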

10.
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function which is used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) Method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints that the point be of some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the constraint is to drive the method towards unexplored regions of the domain and to prevent the premature convergence of the method to some point which may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model that is used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms are better than other algorithms for constrained global optimization on the nonlinearly constrained test problem.
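The distance-constrained selection step can be sketched with a simple candidate-point scheme. This is a hedged illustration: CORS solves a constrained subproblem rather than filtering random candidates, and the toy surrogate and cycling values below are assumptions.

    # Hedged sketch of a CORS-style selection step: among random candidates
    # that keep a minimum distance from all evaluated points, pick the one
    # that minimizes the surrogate.
    import numpy as np

    def cors_next_point(surrogate, X_eval, bounds, beta, n_cand=2000, rng=None):
        """beta in [0, 1]: fraction of the largest candidate-to-data distance
        that a new point must keep from evaluated points (large beta =
        global search, small beta = local search); CORS cycles beta from
        high to low."""
        rng = rng or np.random.default_rng()
        lo, hi = bounds
        cand = rng.uniform(lo, hi, size=(n_cand, X_eval.shape[1]))
        # Distance from each candidate to its nearest evaluated point.
        d = np.min(np.linalg.norm(cand[:, None, :] - X_eval[None, :, :], axis=2), axis=1)
        feasible = cand[d >= beta * d.max()]
        values = np.array([surrogate(x) for x in feasible])
        return feasible[np.argmin(values)]

    # Usage with a toy quadratic surrogate on [0, 1]^2.
    X = np.array([[0.2, 0.2], [0.8, 0.5]])
    x_next = cors_next_point(lambda x: np.sum((x - 0.5)**2), X, (0.0, 1.0),
                             beta=0.5, rng=np.random.default_rng(2))
    print(x_next)

Cycling beta through, say, (0.9, 0.75, 0.25, 0.05, 0.0) reproduces the global-to-local search pattern the abstract describes.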

11.
《Optimization》2012,61(6):661-684
A prominent advantage of using surrogate models in structural design optimization is that computational effort can be greatly reduced without significantly compromising model accuracy. The essential goal is to perform the design optimization with fewer evaluations of the underlying model, typically a finite element analysis, while ensuring the accuracy of the optimization results. An adaptive surrogate-based design optimization framework is proposed, in which Latin hypercube sampling and Kriging are used to build surrogate models. The accuracy of the models is improved adaptively using an infill criterion called expected improvement (EI), which measures the improvement that a new interpolation point is anticipated to bring to the current surrogate model. At each iteration, the point that maximizes EI is sought and used as the infill point. For constrained optimization problems, the surrogate of each constraint is also utilized to form a constrained EI as the corresponding infill criterion. Computational trials on mathematical test functions and on a three-dimensional aircraft wing model are carried out to test the feasibility of this method. Compared with traditional surrogate-based design optimization and direct optimization methods, this method can find the optimum design with fewer evaluations of the original system model while maintaining good accuracy.
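For reference, the standard EI infill criterion (as in Jones et al.'s EGO framework, which this approach follows) is

\[
\mathrm{EI}(x) = \left( y_{\min} - \hat{y}(x) \right) \Phi(u) + s(x)\,\phi(u),
\qquad u = \frac{y_{\min} - \hat{y}(x)}{s(x)},
\]

where ŷ(x) and s(x) are the Kriging prediction and standard error, y_min is the best value observed so far, and Φ, φ are the standard normal distribution and density (EI(x) = 0 where s(x) = 0). A common way to form a constrained EI, and an assumption about this article's exact construction, is to multiply EI by the predicted probability of feasibility from each constraint surrogate:

\[
\mathrm{EI}_c(x) = \mathrm{EI}(x) \prod_{j} \Pr\!\left[ \hat{g}_j(x) \le 0 \right].
\]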

12.
A new method for continuous global minimization problems, abbreviated SCM, is introduced. This method uses a simple transformation to convert the objective function into an auxiliary function with gradually fewer local minimizers. All local minimizers of the auxiliary function, except a prespecified one, lie in the region where the value of the objective function is lower than its current minimal value. Based on this method, an algorithm is designed which uses a local optimization method to minimize the auxiliary function and thus find a local minimizer at which the value of the objective function is lower than its current minimal value. The algorithm converges asymptotically with probability one to a global minimizer of the objective function. Numerical experiments on a set of standard test problems with dimensions up to 50 show that the algorithm is very efficient compared with other global optimization methods.

13.
This paper considers the nonlinearly constrained continuous global minimization problem. Based on the idea of the penalty function method, an auxiliary function, which has approximately the same global minimizers as the original problem, is constructed. An algorithm is developed that minimizes the auxiliary function to find an approximate constrained global minimizer of the constrained global minimization problem. The algorithm can escape from previously converged local minimizers and converges to an approximate global minimizer of the problem asymptotically with probability one. Numerical experiments show that it is better than some other well-known recent methods for constrained global minimization problems.

14.
In this paper, we consider the box-constrained nonlinear integer programming problem. We present an auxiliary function which has the same discrete global minimizers as the problem. By taking increasing values of a parameter, the minimization of this function using a discrete local search method can successfully escape from previously converged discrete local minimizers. We propose an algorithm to find a global minimizer of the box-constrained nonlinear integer programming problem. The algorithm minimizes the auxiliary function from random initial points. We prove that the algorithm converges asymptotically with probability one. Numerical experiments on a set of test problems show that the algorithm is efficient and robust.

15.
16.
A class of two-parameter filled function algorithms for unconstrained global optimization requires the assumption that the problem has only finitely many local minimizers, and the choice of the parameters in the filled function depends on the radii of the basins of the local minimizers. This paper suitably modifies the filled function so that the new filled function algorithm no longer needs any assumption on the number of local minimizers, and the choice of the parameters in the filled function is independent of the basin radii of the local minimizers. Numerical experiments show that the algorithm is effective.
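For context, the classic two-parameter filled function of this family (attributed to Ge; the modified function proposed in the paper is not reproduced here) is

\[
P(x) = \frac{1}{r + f(x)} \exp\!\left( -\frac{\lVert x - x_1^* \rVert^2}{\rho^2} \right),
\]

where x_1^* is the current local minimizer and r, ρ are the two parameters. The classical choice of ρ is tied to the radius of the basin of x_1^*, which is precisely the dependence the modified algorithm removes.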

17.
Kriging is a popular method for estimating the global optimum of a simulated system. Kriging approximates the input/output function of the simulation model. Kriging also estimates the variances of the predictions of outputs for input combinations not yet simulated. These predictions and their variances are used by 'efficient global optimization' (EGO) to balance local and global search. This article focuses on two related questions: (1) How to select the next combination to be simulated when searching for the global optimum? (2) How to derive confidence intervals for outputs of input combinations not yet simulated? Classic Kriging simply plugs the estimated Kriging parameters into the formula for the predictor variance, so theoretically this variance is biased. This article concludes that practitioners may ignore this bias, because classic Kriging gives acceptable confidence intervals and estimates of the optimal input combination. This conclusion is based on bootstrapping and conditional simulation.
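The plug-in variance at issue is the standard ordinary-Kriging formula. With correlation matrix R among the n simulated inputs, correlation vector r(x) between a new point x and those inputs, and estimates μ̂ and σ̂²,

\[
\hat{y}(x) = \hat{\mu} + r(x)^{\top} R^{-1} \left( y - \hat{\mu}\,\mathbf{1} \right),
\qquad
s^2(x) = \hat{\sigma}^2 \left( 1 - r(x)^{\top} R^{-1} r(x) + \frac{\left( 1 - \mathbf{1}^{\top} R^{-1} r(x) \right)^2}{\mathbf{1}^{\top} R^{-1} \mathbf{1}} \right).
\]

Substituting estimated correlation parameters into R and r(x) ignores their own estimation uncertainty, which is the source of the theoretical bias the article examines via bootstrapping and conditional simulation.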

18.
In this paper we consider a global optimization method for space trajectory design problems. The method, which actually aims at finding not only the global minimizer but a whole set of low-lying local minimizers (corresponding to a set of different design options), is based on a domain decomposition technique where each subdomain is evaluated through a procedure based on the evolution of a population of agents. The method is applied to two space trajectory design problems and compared with existing deterministic and stochastic global optimization methods.

19.
Most parallel efficient global optimization (EGO) algorithms focus only on the parallel architectures for producing multiple updating points, but pay little attention to the balance between global search (i.e., sampling in different areas of the search space) and local search (i.e., sampling more intensely in one promising area of the search space) among the updating points. In this study, a novel approach is proposed to exploit this balance and further accelerate the search of parallel EGO algorithms. In each cycle of the proposed algorithm, all local maxima of the expected improvement (EI) function are identified by a multi-modal optimization algorithm. The local EI maxima with values greater than a threshold are then selected, and candidates are sampled around these selected maxima. The results of numerical experiments show that, although the proposed parallel EGO algorithm needs more evaluations to find the optimum than the standard EGO algorithm, it reduces the number of optimization cycles. Moreover, the proposed parallel EGO algorithm achieves better results, in terms of both the number of cycles and the number of evaluations, than a state-of-the-art parallel EGO algorithm on six test problems.
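The selection rule can be sketched on a 1-D grid. The sketch below replaces the paper's multi-modal optimizer with simple neighbour comparisons, and the threshold, grid, and toy EI surface are illustrative assumptions.

    # Hedged sketch of the selection rule in the abstract: find all local
    # maxima of the EI function, keep those above a threshold, and sample
    # one candidate near each retained maximum.
    import numpy as np

    def select_update_points(ei, x_grid, threshold, rng=None):
        rng = rng or np.random.default_rng()
        # Interior grid points larger than both neighbours are local EI maxima.
        interior = (ei[1:-1] > ei[:-2]) & (ei[1:-1] > ei[2:])
        idx = np.where(interior)[0] + 1
        idx = idx[ei[idx] > threshold]          # keep only promising maxima
        step = x_grid[1] - x_grid[0]
        # One candidate perturbed around each maximum -> evaluated in parallel.
        return x_grid[idx] + rng.normal(0.0, step, size=idx.size)

    x = np.linspace(0.0, 1.0, 400)
    ei = np.exp(-60 * (x - 0.3)**2) + 0.4 * np.exp(-80 * (x - 0.75)**2)  # toy EI
    print(select_update_points(ei, x, threshold=0.1, rng=np.random.default_rng(3)))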

20.
We present new conditions for a Karush-Kuhn-Tucker point to be a global minimizer of a mathematical programming problem which may have many local minimizers that are not global. The new conditions make use of underestimators of the Lagrangian at the Karush-Kuhn-Tucker point. We establish that a Karush-Kuhn-Tucker point is a global minimizer if the Lagrangian admits an underestimator which is convex or, more generally, has the property that every stationary point is a global minimizer. In particular, we obtain sufficient conditions by using the fact that the biconjugate function of the Lagrangian is a convex underestimator at a point whenever it coincides with the Lagrangian at that point. We also present sufficient conditions for weak and strong duality results in terms of underestimators.
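A schematic rendering of the main condition (paraphrased from the abstract, not quoted from the paper): let x̄ be a Karush-Kuhn-Tucker point with multipliers λ̄ ≥ 0 of min f(x) subject to g(x) ≤ 0, and write L(x, λ̄) = f(x) + λ̄ᵀg(x). If some function u satisfies

\[
u(x) \le L(x, \bar{\lambda}) \ \text{for all } x,
\qquad
u(\bar{x}) = L(\bar{x}, \bar{\lambda}),
\]

and u is convex with ∇u(x̄) = 0 (or, more generally, x̄ is a stationary point of u and every stationary point of u is a global minimizer), then x̄ globally minimizes L(·, λ̄); combined with feasibility and complementary slackness, this makes x̄ a global minimizer of the original problem. The biconjugate L** is one such convex underestimator whenever L**(x̄) = L(x̄, λ̄).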
