Similar Literature
20 similar documents retrieved.
1.
We introduce GOSAC, a global optimization algorithm for problems with computationally expensive black-box constraints and computationally cheap objective functions. The variables may be continuous, integer, or mixed-integer. GOSAC uses a two-phase optimization approach. The first phase aims at finding a feasible point by solving a multi-objective optimization problem in which the constraints are minimized simultaneously. The second phase aims at improving the feasible solution. In both phases, we use cubic radial basis function surrogate models to approximate the computationally expensive constraints. We iteratively select sample points by minimizing the computationally cheap objective function subject to the constraint function approximations. We assess GOSAC’s efficiency on computationally cheap test problems with integer, mixed-integer, and continuous variables and on two environmental applications. We compare GOSAC to NOMAD and a genetic algorithm (GA). The results of the numerical experiments show that, for a given budget of allowed expensive constraint evaluations, GOSAC finds better feasible solutions more efficiently than NOMAD and the GA for most benchmark problems and both applications. GOSAC also finds feasible solutions with a higher probability than NOMAD and the GA.
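The core iteration of this approach, minimizing the cheap objective subject to surrogate approximations of the expensive constraints, can be illustrated with a short sketch. This is a minimal illustration using SciPy, not the authors' implementation; the RBF surrogate setup, the toy objective, and the sampled data are assumptions for the example.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Toy setting (assumed for illustration): cheap objective, one expensive constraint g(x) <= 0.
cheap_objective = lambda x: np.sum((x - 0.3) ** 2)               # cheap to evaluate
expensive_constraint = lambda x: np.sin(5 * x[0]) + x[1] - 0.5   # stands in for a costly simulation

# Points where the expensive constraint has already been evaluated.
X = np.random.default_rng(0).uniform(-1, 1, size=(20, 2))
G = np.array([expensive_constraint(x) for x in X])

# Cubic RBF surrogate of the expensive constraint.
g_surrogate = RBFInterpolator(X, G, kernel="cubic", degree=1)

# Next sample point: minimize the cheap objective subject to the surrogate constraint.
res = minimize(
    cheap_objective,
    x0=np.zeros(2),
    constraints=[{"type": "ineq", "fun": lambda x: -g_surrogate(x[None, :])[0]}],  # surrogate g(x) <= 0
    bounds=[(-1, 1), (-1, 1)],
)
x_next = res.x  # evaluate the true expensive constraint here and refit the surrogate
```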

2.
Optimization, 2012, 61(6): 661-684
A prominent advantage of using surrogate models in structural design optimization is that computational effort can be greatly reduced without significantly compromising model accuracy. The essential goal is to perform the design optimization with fewer evaluations of the underlying model, typically a finite element analysis, while ensuring accuracy of the optimization results. An adaptive surrogate-based design optimization framework is proposed, in which Latin hypercube sampling and Kriging are used to build surrogate models. Accuracy of the models is improved adaptively using an infill criterion called expected improvement (EI), which quantifies the improvement an additional interpolation point is expected to bring to the current surrogate model. At each iteration, the point that maximizes the EI is found and used as the infill point. For constrained optimization problems, the surrogates of the constraints are also used to form a constrained EI as the corresponding infill criterion. Computational trials on mathematical test functions and on a three-dimensional aircraft wing model are carried out to test the feasibility of this method. Compared with traditional surrogate-based design optimization and direct optimization methods, this method can find the optimum design with fewer evaluations of the original system model while maintaining good accuracy.
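For reference, the standard expected improvement criterion for a Kriging surrogate with predictive mean mu(x) and standard deviation sigma(x) can be computed as below. This is the textbook EI formula under a Gaussian predictive distribution, shown as a hedged sketch rather than the exact criterion used in the cited article.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Textbook EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)  # zero improvement where the model is certain
```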

3.
This paper introduces a novel methodology for the global optimization of general constrained grey-box problems. A grey-box problem may contain a combination of black-box constraints and constraints with a known functional form. The novel features of this work include (i) the selection of initial samples through a subset selection optimization problem from a large number of faster low-fidelity model samples (when a low-fidelity model is available), (ii) the exploration of a diverse set of interpolating and non-interpolating functional forms for representing the objective function and each of the constraints, (iii) the global optimization of the parameter estimation of surrogate functions and the global optimization of the constrained grey-box formulation, and (iv) the updating of variable bounds based on a clustering technique. The performance of the algorithm is presented for a set of case studies representing an expensive non-linear algebraic partial differential equation simulation of a pressure swing adsorption system for CO2. We address three significant sources of variability and their effects on the consistency and reliability of the algorithm: (i) the initial sampling variability, (ii) the type of surrogate function, and (iii) global versus local optimization of the surrogate function parameter estimation and overall surrogate constrained grey-box problem. It is shown that globally optimizing the parameters in the parameter estimation model and globally optimizing the constrained grey-box formulation have a significant impact on performance. The effect of sampling variability is mitigated by a two-stage sampling approach which exploits information from reduced-order models. Finally, the proposed global optimization approach is compared to existing constrained derivative-free optimization algorithms.

4.
This paper presents the surrogate-model-based algorithm SO-I for solving purely integer optimization problems that have computationally expensive black-box objective functions and that may have computationally expensive constraints. The algorithm was developed for solving global optimization problems, meaning that the relaxed optimization problems have many local optima. However, the method is also shown to perform well on many local optimization problems and on problems with linear objective functions. The performance of SO-I, a genetic algorithm, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search), SO-MI (Müller et al. in Comput Oper Res 40(5):1383–1400, 2013), variable neighborhood search, and a version of SO-I that only uses a local search has been compared on 17 test problems from the literature, and on eight realizations of two application problems. One application problem relates to hydropower generation, and the other one to throughput maximization. The numerical results show that SO-I finds good solutions most efficiently. Moreover, as opposed to SO-MI, SO-I is able to find feasible points by employing a first optimization phase that aims at minimizing a constraint violation function. A feasible user-supplied point is not necessary.

5.
In this work, we present an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives. The algorithm mixes an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or slanting filter rule. Each iteration is decomposed into two independent phases: a feasibility phase, which reduces an infeasibility measure without evaluations of the objective function, and an optimality phase, which reduces the objective function value. As the derivatives of the objective function are not available, the optimality step is computed by derivative-free trust-region internal iterations. Any technique to construct the trust-region models can be used, provided that the gradient of the model is a reasonable approximation of the gradient of the objective function at the current point. Under this and classical assumptions, we prove that the full steps are efficient in the sense that, near a feasible nonstationary point, the decrease in the objective function is relatively large, ensuring the global convergence results of the algorithm. Numerical experiments show the effectiveness of the proposed method.
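A filter keeps pairs (h, f) of infeasibility measure and objective value and rejects trial points dominated by a stored pair. The sketch below shows one common acceptance test of this kind; the margin parameter and the specific rule are illustrative assumptions, not necessarily the flat or slanting rules analysed in the paper.

```python
def filter_acceptable(h_trial, f_trial, filter_entries, gamma=1e-4):
    """Accept a trial point if, against every stored pair (h_j, f_j), it improves
    either feasibility or the objective by a margin proportional to h_j."""
    return all(
        h_trial <= (1.0 - gamma) * h_j or f_trial <= f_j - gamma * h_j
        for (h_j, f_j) in filter_entries
    )

# Example: a point that reduces infeasibility enough is accepted even if f worsens slightly.
entries = [(0.5, 10.0), (0.1, 12.0)]
print(filter_acceptable(0.05, 11.0, entries))  # True
```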

6.
This paper studies customers' equilibrium joining strategies in an M/M/1 constant-retrial-rate queueing system with an unreliable server, where the server fails at different rates when it is working and when it is idle. There is no waiting space in front of the server: an arriving customer who finds the server idle may occupy it and begin service. Otherwise, if the server is busy, the customer may choose to leave a message so that, when the server becomes idle, it searches the retrial orbit and serves, in order, the customers who left messages. When the server fails, the customer currently in service is lost and the system rejects newly arriving customers. According to the amount of information the system provides to customers, we analyze the steady-state performance measures for both the observable case (queue length visible) and the unobservable case, derive customers' equilibrium joining strategies based on a reward-cost function, and construct the service provider's revenue and the social welfare per unit time. A comparison shows that disclosing queue-length information does not necessarily increase the provider's revenue or social welfare.

7.
This paper presents a new sequential method for constrained nonlinear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical derivative-based optimization methods are not applicable because derivative information is often unavailable and too expensive to approximate through finite differencing. The algorithm first creates an experimental design. The underlying functions are evaluated at the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e., the points are located in such a way that they yield a poor approximation of the actual model, then we evaluate a geometry-improving point instead of an objective-improving one. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on getting good solutions with a limited number of function evaluations.
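The local linear approximations described here can be built with ordinary weighted least squares; the sketch below fits a weighted linear model to already-evaluated design points around the current iterate. The Gaussian weighting and the toy data are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def weighted_linear_model(X, y, center, radius):
    """Fit y ~ c + g.T @ (x - center) with weights that decay with distance from the center."""
    d = np.linalg.norm(X - center, axis=1)
    w = np.exp(-(d / radius) ** 2)                 # assumed Gaussian weights
    A = np.hstack([np.ones((len(X), 1)), X - center])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * y, rcond=None)
    return coef[0], coef[1:]                       # intercept c and gradient estimate g

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(15, 2))
y = X[:, 0] ** 2 + 3 * X[:, 1]                     # stand-in for an expensive simulation
c, g = weighted_linear_model(X, y, center=np.zeros(2), radius=0.7)
```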

8.
This paper examines the influence of two major aspects on the solution quality of surrogate model algorithms for computationally expensive black-box global optimization problems, namely the surrogate model choice and the method of iteratively selecting sample points. A random sampling strategy (algorithm SO-M-c) and a strategy where the minimum point of the response surface is used as the new sample point (algorithm SO-M-s) are compared in numerical experiments. Various surrogate models and their combinations have been used within the SO-M-c and SO-M-s sampling frameworks. The Dempster–Shafer Theory approach used in the algorithm by Müller and Piché (J Glob Optim 51:79–104, 2011) has been used for combining the surrogate models. The algorithms are numerically compared on 13 deterministic literature test problems with 2–30 dimensions, an application problem that deals with groundwater bioremediation, and an application that arises in energy generation using tethered kites. NOMAD and the particle swarm pattern search algorithm (PSWARM), which are derivative-free optimization methods, have been included in the comparison. The algorithms have also been compared to a kriging method that uses the expected improvement as sampling strategy (FEI), which is similar to the Efficient Global Optimization (EGO) algorithm. Data and performance profiles show that surrogate model combinations containing the cubic radial basis function (RBF) model work best regardless of the sampling strategy, whereas using only a polynomial regression model should be avoided. Kriging and combinations that include kriging generally perform worse than when RBF models are used. NOMAD, PSWARM, and FEI perform worse than SO-M-s and SO-M-c on most problems. Within the scope of this study, a Matlab toolbox has been developed that allows the user to choose, among other options, between various sampling strategies, surrogate models, and their combinations. The open-source toolbox is available from the authors upon request.
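The random-candidate sampling strategy compared here can be sketched as follows: generate candidate points, score each by a weighted combination of its predicted surrogate value and its distance to already-evaluated points, and evaluate the best-scoring candidate. The weights, candidate count, and perturbation scheme below are illustrative assumptions, not the exact SO-M-c settings.

```python
import numpy as np

def pick_candidate(surrogate, X_evaluated, best_x, bounds, n_cand=500, w_value=0.7, rng=None):
    """Score random candidates by surrogate value (low is good) and by distance
    to evaluated points (large is good), then return the best-scoring one."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    # Perturb the current best point and clip to the box (assumed perturbation scheme).
    cand = np.clip(best_x + 0.1 * (hi - lo) * rng.standard_normal((n_cand, len(best_x))), lo, hi)
    s = surrogate(cand)                                            # predicted objective values
    d = np.min(np.linalg.norm(cand[:, None, :] - X_evaluated[None, :, :], axis=2), axis=1)
    # Normalize both criteria to [0, 1]; a smaller combined score is better.
    sv = (s - s.min()) / (np.ptp(s) + 1e-12)
    dv = 1.0 - (d - d.min()) / (np.ptp(d) + 1e-12)
    return cand[np.argmin(w_value * sv + (1 - w_value) * dv)]

# Example usage with a toy quadratic surrogate (assumed):
X_eval = np.random.default_rng(0).uniform(-1, 1, size=(10, 2))
x_next = pick_candidate(lambda Z: np.sum(Z ** 2, axis=1), X_eval, X_eval[0],
                        np.array([[-1.0, 1.0], [-1.0, 1.0]]))
```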

9.
This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search of a feasible point when none is available, and reduces to existing Bayesian sampling criteria—the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions—as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, which is well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.

10.
A new algorithm is proposed to deal with the worst-case optimization of black-box functions evaluated through costly computer simulations. The input variables of these computer experiments are assumed to be of two types. Control variables must be tuned while environmental variables have an undesirable effect, to which the design of the control variables should be robust. The algorithm to be proposed searches for a minimax solution, i.e., values of the control variables that minimize the maximum of the objective function with respect to the environmental variables. The problem is particularly difficult when the control and environmental variables live in continuous spaces. Combining a relaxation procedure with Kriging-based optimization makes it possible to deal with the continuity of the variables and the fact that no analytical expression of the objective function is available in most real-case problems. Numerical experiments are conducted to assess the accuracy and efficiency of the algorithm, both on analytical test functions with known results and on an engineering application.
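The relaxation procedure for continuous minimax problems alternates between minimizing over the control variables against a finite set of environmental scenarios and then adding the worst-case scenario found for the current design. The sketch below uses plain SciPy local solvers in place of the Kriging-based optimization described in the article; the toy objective and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x, u: (x[0] - u[0]) ** 2 + 0.5 * np.sin(3 * u[0]) * x[0]   # toy objective (assumed)
x_bounds, u_bounds = [(-2, 2)], [(-1, 1)]

U = [np.array([0.0])]                           # initial finite set of environmental scenarios
x = np.array([0.0])
for _ in range(10):
    # Relaxed problem: minimize the max over the current scenario set.
    x = minimize(lambda x: max(f(x, u) for u in U), x, bounds=x_bounds).x
    # Worst case for this design: maximize f over the environmental variables.
    worst = minimize(lambda u: -f(x, u), np.zeros(1), bounds=u_bounds)
    if -worst.fun <= max(f(x, u) for u in U) + 1e-6:
        break                                   # no new scenario increases the max: stop
    U.append(worst.x)                           # otherwise enlarge the scenario set
```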

11.
This paper presents a meta-algorithm for approximating the Pareto optimal set of costly black-box multiobjective optimization problems given a limited number of objective function evaluations. The key idea is to switch among different algorithms during the optimization search based on the predicted performance of each algorithm at the time. Algorithm performance is modeled using a machine learning technique based on the available information. The predicted best algorithm is then selected to run for a limited number of evaluations. The proposed approach is tested on several benchmark problems and the results are compared against those obtained using any one of the candidate algorithms alone.

12.
In this paper, we present a sequential quadratically constrained quadratic programming (SQCQP) norm-relaxed algorithm of strongly sub-feasible directions for the solution of inequality constrained optimization problems. By introducing a new unified line search and making use of the idea of the strongly sub-feasible direction method, the proposed algorithm effectively combines the phase of finding a feasible point (in finitely many iterations) with the phase of a feasible descent norm-relaxed SQCQP algorithm. Moreover, the former phase preserves the “sub-feasibility” of the current iterate and controls the increase of the objective function. At each iteration, only a consistent convex quadratically constrained quadratic programming problem needs to be solved to obtain a search direction. Without any additional correction directions, global convergence, superlinear convergence, and a certain quadratic convergence (between one-step and two-step quadratic convergence) are proved under reasonable assumptions. Finally, some preliminary numerical results show that the proposed algorithm is encouraging.

13.
The simplicial homology global optimisation (SHGO) algorithm is a general-purpose global optimisation algorithm based on applications of simplicial integral homology and combinatorial topology. SHGO approximates the homology groups of a complex built on a hypersurface homeomorphic to a complex on the objective function. This provides both approximations of locally convex subdomains in the search space through Sperner’s lemma and a useful visual tool for characterising and efficiently solving higher-dimensional black- and grey-box optimisation problems. This complex is built up using sampling points within the feasible search space as vertices. The algorithm specialises in efficiently finding all the local minima of an objective function with expensive function evaluations, which makes it especially suitable for applications such as energy landscape exploration. SHGO was initially developed as an improvement on the topographical global optimisation (TGO) method. It is proven that the SHGO algorithm will always outperform TGO on function evaluations if the objective function is Lipschitz smooth. In this paper, SHGO is applied to non-convex problems with linear constraints and box constraints (bounds on the variables). Numerical experiments on linearly constrained test problems show that SHGO gives competitive results compared to TGO and the recently developed Lc-DISIMPL algorithm, as well as the PSwarm, LGO and DIRECT-L1 algorithms. Furthermore, SHGO is compared with the TGO, basinhopping (BH) and differential evolution (DE) global optimisation algorithms over a large selection of bound-constrained black-box problems from the SciPy benchmarking test suite. A Python implementation of the SHGO and TGO algorithms, published under an MIT license, is available at https://bitbucket.org/upiamcompthermo/shgo/.
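SHGO is also available in SciPy as scipy.optimize.shgo, which makes a quick trial straightforward. The example below applies it to a small bound- and linearly-constrained toy problem; the objective and constraint are assumptions for illustration, and available options differ between SciPy versions.

```python
import numpy as np
from scipy.optimize import shgo

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + np.sin(3 * x[0])

bounds = [(-2.0, 2.0), (-2.0, 2.0)]
# Linear inequality written in the >= 0 convention used by `minimize`-style constraint dicts.
constraints = ({"type": "ineq", "fun": lambda x: 1.5 - x[0] - x[1]},)  # x0 + x1 <= 1.5

result = shgo(objective, bounds, constraints=constraints, sampling_method="sobol")
print(result.x, result.fun)   # global minimizer candidate and its objective value
print(len(result.xl))         # all local minima found, one of SHGO's selling points
```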

14.
A novel hybrid evolutionary algorithm is developed based on particle swarm optimization (PSO) and genetic algorithms (GAs). The PSO phase involves the enhancement of the worst solutions by using the global-local best inertia weight and acceleration coefficients to increase efficiency. In the genetic algorithm phase, a new rank-based multi-parent crossover is used, with modified crossover and mutation operators that favor both local and global exploration simultaneously. In addition, Euclidean distance-based niching is implemented in the replacement phase of the GA to maintain population diversity. To avoid local optimum solutions, a stagnation check is performed and the solution is randomized when needed. The constraints are handled using an effective feasible-population-based approach. The parameters are self-adaptive, requiring no problem-specific tuning. Numerical simulations are performed first to evaluate the algorithm on a set of 24 benchmark constrained nonlinear optimization problems. The results demonstrate reasonable correlation and high-quality optimum solutions with significantly fewer function evaluations than other state-of-the-art heuristic-based optimization algorithms. The algorithm is also applied to various nonlinear engineering optimization problems and is shown to be excellent at searching for global optimal solutions.

15.
The augmented Lagrangian function is one of the most important tools used in solving constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming, the saddle point exists whenever the Karush-Kuhn-Tucker condition holds. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. Sufficient and necessary stability conditions for determining whether the modified augmented Lagrangian objective penalty function is exact for a global solution are proved. Based on the modified augmented Lagrangian objective penalty function, an algorithm is developed to find a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, the sufficient and necessary calmness condition on the exactness of the modified augmented Lagrangian objective penalty function is proved for a local solution. An algorithm is also presented for finding a local solution, with its convergence proved under some conditions.
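For orientation, the classical (Rockafellar/PHR) augmented Lagrangian for inequality constraints is shown below as background; the article studies modified "objective penalty" variants of this construction, not this exact form.

```latex
% Classical augmented Lagrangian for  min f(x)  s.t.  g_i(x) <= 0,  i = 1, ..., m.
L_\rho(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2\rho}\sum_{i=1}^{m}
  \Bigl[\max\{0,\ \lambda_i + \rho\, g_i(x)\}^2 - \lambda_i^2\Bigr],
\qquad \lambda_i \ge 0,\ \rho > 0 .
```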

16.
Motivated by the problem of fitting a surrogate model to a set of feasible points in the context of constrained derivative-free optimization, we consider the problem of selecting a small set of points with good space-filling and orthogonality properties from a larger set of feasible points. We propose four mixed-integer linear programming models for this task and we show that the corresponding optimization problems are NP-hard. Numerical experiments show that our models consistently yield well-distributed points that, on average, help reduce the variance of model-fitting errors.

17.
Among the penalty-based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, thereby providing a better function landscape for the search, and (iii) they can compute the optimal Lagrange multiplier for each constraint as a by-product. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters (called multipliers) adaptively so that the corresponding penalized function dynamically changes its optimum from the unconstrained minimum point to the constrained minimum point with iterations. However, the flip side of these algorithms is that they require a serial application of a number of unconstrained optimization tasks, a process that is usually time-consuming and tends to be computationally expensive. In this paper, we devise a genetic-algorithm-based parameter update strategy for a particular AL method. The proposed strategy updates critical parameters in an adaptive manner based on population statistics. Occasionally, a classical optimization method is used to improve the GA-obtained solution, thereby providing the resulting hybrid procedure with its theoretical convergence property. The GAAL method is applied to a number of constrained test problems taken from the evolutionary algorithms (EAs) literature. The number of function evaluations required by GAAL in most problems is found to be smaller than that needed by a number of existing evolutionary based constraint handling methods. The GAAL method is found to be accurate, computationally fast, and reliable over multiple runs. Besides solving the problems, the proposed GAAL method is also able to find the optimal Lagrange multiplier associated with each constraint for the test problems as an added benefit, a matter that is important for a sensitivity analysis of the obtained optimized solution but has not yet received adequate attention in past evolutionary constrained optimization studies.
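The adaptive multiplier update that AL methods rely on has a simple classical form; the loop below shows it with a generic inner solver. This is the textbook update, not the paper's GA-based, population-statistics scheme, and the inner unconstrained solves via SciPy are a placeholder assumption.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, gs, x0, rho=10.0, outer_iters=20):
    """Classical AL loop for min f(x) s.t. g_i(x) <= 0 with the standard multiplier update.
    The inner unconstrained solves stand in for the GA/classical hybrid used in the paper."""
    lam = np.zeros(len(gs))
    x = np.asarray(x0, float)
    for _ in range(outer_iters):
        def L(x):
            g = np.array([gi(x) for gi in gs])
            return f(x) + np.sum(np.maximum(0.0, lam + rho * g) ** 2 - lam ** 2) / (2 * rho)
        x = minimize(L, x).x                                   # inner (unconstrained) solve
        g = np.array([gi(x) for gi in gs])
        lam = np.maximum(0.0, lam + rho * g)                   # multiplier update
    return x, lam

# Example: minimize (x-2)^2 subject to x >= 1 (i.e. 1 - x <= 0); optimum at x = 2, lambda = 0.
x_opt, lam_opt = augmented_lagrangian(lambda x: (x[0] - 2) ** 2, [lambda x: 1 - x[0]], x0=[0.0])
```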

18.
For smooth or non-smooth unconstrained global optimization problems, a one-parameter filled function is derived to identify their global optimizers or approximate global optimizers. The theoretical properties of the proposed function are investigated. Based on the filled function, an algorithm is designed for solving unconstrained global optimization problems. The algorithm consists of two phases: local minimization and filling. The former minimizes the objective function to obtain a local optimizer; the latter aims to find a better initial point for the first phase. Numerical experimentation is also provided. The preliminary computational results confirm that the proposed filled function approach is promising.
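The two-phase structure (local minimization followed by filling) can be sketched as a simple loop. The particular auxiliary function below, which decays away from the current minimizer and penalizes points that are no better than it, is a generic one-parameter illustration in the spirit of filled functions, not the specific function proposed in the article.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_search(f, x0, bounds, a=5.0, cycles=5, rng=None):
    """Alternate local minimization of f with minimization of a one-parameter
    auxiliary (filled-type) function built at the current best minimizer."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    x_best = minimize(f, x0, bounds=bounds).x
    for _ in range(cycles):
        f_best = f(x_best)
        # Illustrative auxiliary function: falls off away from x_best, penalizes f(x) >= f_best.
        filled = lambda x: 1.0 / (1.0 + np.sum((x - x_best) ** 2)) \
                           + a * max(0.0, f(x) - f_best) ** 2
        start = np.clip(x_best + 0.2 * (hi - lo) * rng.standard_normal(len(x_best)), lo, hi)
        x_new = minimize(filled, start, bounds=bounds).x          # filling phase
        x_cand = minimize(f, x_new, bounds=bounds).x              # local minimization phase
        if f(x_cand) < f_best - 1e-10:
            x_best = x_cand                                       # found a better basin
    return x_best

# Example: x = filled_function_search(lambda x: np.cos(3 * x[0]) + 0.1 * x[0] ** 2,
#                                     np.array([2.0]), [(-5.0, 5.0)])
```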

19.
This paper develops the OPUS (Optimization by Particle swarm Using Surrogates) framework for expensive black-box optimization. In each iteration, OPUS considers multiple trial positions for each particle in the swarm and uses a surrogate model to identify the most promising trial position. Moreover, the current overall best position is refined by finding the global minimum of the surrogate in the neighborhood of that position. OPUS is implemented using an RBF surrogate and the resulting OPUS-RBF algorithm is applied to a 36-D groundwater bioremediation problem, a 14-D watershed calibration problem, and ten mostly 30-D test problems. OPUS-RBF is compared with a standard PSO, CMA-ES, two other surrogate-assisted PSO algorithms, and an RBF-assisted evolution strategy. The numerical results suggest that OPUS-RBF is promising for expensive black-box optimization.
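The core OPUS idea of generating several trial positions per particle and letting a surrogate pick the most promising one can be sketched in a few lines. The velocity update, the number of trials, and the surrogate call below are standard PSO ingredients used as assumptions here, not the exact OPUS-RBF settings.

```python
import numpy as np

def screened_positions(positions, velocities, pbest, gbest, surrogate, n_trials=5,
                       w=0.7, c1=1.5, c2=1.5, rng=None):
    """For each particle, sample several candidate velocity updates and keep the
    new position the surrogate predicts to be best (surrogate-screened PSO step)."""
    rng = rng or np.random.default_rng()
    new_positions = np.empty_like(positions)
    for i, (x, v) in enumerate(zip(positions, velocities)):
        r1 = rng.random((n_trials, len(x)))
        r2 = rng.random((n_trials, len(x)))
        trial_v = w * v + c1 * r1 * (pbest[i] - x) + c2 * r2 * (gbest - x)
        trial_x = x + trial_v                      # n_trials candidate positions
        preds = surrogate(trial_x)                 # cheap surrogate predictions
        new_positions[i] = trial_x[np.argmin(preds)]
    return new_positions
```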

20.
We present a new strategy for the constrained global optimization of expensive black-box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black-box function which is used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints requiring the point to be at some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the distance constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to a point that may not even be a local minimizer of the black-box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model that is used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms are better than other algorithms for constrained global optimization on the nonlinearly constrained test problem.
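The CORS sampling rule (minimize the response surface subject to staying at least a cycling distance from all evaluated points) can be sketched with SciPy as below; the cubic RBF surrogate, the solver choice, and the distance fractions are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import NonlinearConstraint, differential_evolution

def cors_next_point(X, y, bounds, beta):
    """Minimize an RBF response surface subject to being at least beta * (domain diameter)
    away from every previously evaluated point (the CORS-style distance requirement)."""
    surrogate = RBFInterpolator(X, y, kernel="cubic", degree=1)
    lo, hi = np.array(bounds).T
    diam = np.linalg.norm(hi - lo)
    min_dist = lambda x: np.min(np.linalg.norm(X - x, axis=1))
    con = NonlinearConstraint(min_dist, beta * diam, np.inf)
    res = differential_evolution(lambda x: surrogate(x[None, :])[0], bounds,
                                 constraints=(con,), seed=0, polish=False)
    return res.x

# The distance fraction beta cycles from large (global search) to small (local search)
# over successive iterations, e.g. something like [0.25, 0.1, 0.05, 0.0] (illustrative values).
```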
