Similar Documents
20 similar documents retrieved.
1.
In many engineering optimization problems, the objective and the constraints, which come from complex analytical models, are often black-box functions that are computationally expensive to evaluate. In this case, the optimization process must fit surrogate models to sampled data in order to reduce the number of objective and constraint evaluations. Moreover, constrained optimization based on surrogate models can struggle even to find a feasible point, which is a prerequisite for the further search for a globally optimal feasible solution. For this purpose, a new Kriging-based Constrained Global Optimization (KCGO) algorithm is proposed. Unlike previous Kriging-based methods, this algorithm can handle black-box constrained optimization problems even if all initial sampling points are infeasible. The KCGO algorithm has two pivotal phases. The main task of the first phase is to find a feasible point when there is none in the initial sample; the aim of the second phase is to obtain a better feasible point using as few expensive function evaluations as possible. Several numerical problems and three design problems are tested to illustrate the feasibility, stability and effectiveness of the proposed method.

2.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each of these evaluations usefully contributes to the localization of good candidates for the role of global minimizer, a stochastic model of the function can be built to conduct a sequential choice of evaluation points. Based on Gaussian processes and Kriging, the authors have recently introduced the informational approach to global optimization (IAGO), which provides a one-step optimal choice of evaluation points in terms of reduction of uncertainty on the location of the minimizers. To do so, the probability density of the minimizers is approximated using conditional simulations of the Gaussian process model behind Kriging. In this paper, an empirical comparison between the underlying sampling criterion, called conditional minimizer entropy (CME), and the standard expected improvement (EI) sampling criterion is presented. Classical test functions are used, as well as sample paths of the Gaussian model and an industrial application. The results demonstrate the interest of the CME sampling criterion in terms of evaluation savings.

3.
We present a new strategy for the constrained global optimization of expensive black-box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black-box function, used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints that the point be at some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to some point that may not even be a local minimizer of the black-box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model that is used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems.
The results also show that the CORS-RBF algorithms outperform other constrained global optimization algorithms on the nonlinearly constrained test problem.
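The CORS point-selection rule can be sketched in a few lines. The following is a hedged, minimal illustration rather than the authors' implementation: it fits a Gaussian-kernel RBF surrogate and, instead of a true constrained solver, screens random candidates against the minimum-distance requirement. The names `rbf_surrogate` and `cors_next_point`, the kernel choice, and the candidate-screening step are assumptions for illustration.

```python
import numpy as np

def rbf_surrogate(X, y, gamma=1.0):
    # Gaussian-kernel RBF interpolant fitted to evaluated points (X, y).
    K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)
    def s(Z):
        Kz = np.exp(-gamma * np.sum((Z[:, None, :] - X[None, :, :])**2, axis=-1))
        return Kz @ w
    return s

def cors_next_point(X, y, bounds, beta, n_cand=2000, rng=None):
    """Pick the candidate minimizing the surrogate, subject to being at
    least a fraction beta of the box diagonal away from evaluated points."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    diam = np.linalg.norm(hi - lo)
    Z = rng.uniform(lo, hi, size=(n_cand, len(lo)))
    dmin = np.min(np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=-1), axis=1)
    feasible = Z[dmin >= beta * diam]
    if len(feasible) == 0:          # distance requirement too strict: relax it
        feasible = Z
    s = rbf_surrogate(X, y)
    return feasible[np.argmin(s(feasible))]
```

Cycling `beta` from large to small values reproduces the global-to-local search pattern described above.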

4.
In this paper some transformation techniques, based on power transformations, are discussed. The techniques can be applied to solve optimization problems including signomial functions to global optimality. Signomial terms can always be convexified and underestimated using power transformations on the individual variables in the terms; however, often not all variables need to be transformed. A method for minimizing the number of original variables involved in the transformations is therefore presented. To illustrate how this method can be integrated into the transformation framework, some mixed-integer optimization problems including signomial functions are finally solved to global optimality using the given techniques.
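As a hedged illustration (not the paper's own derivation): for the special case of a positive signomial term, a single-variable exponential substitution already yields a convex form; the paper's power transformations play the analogous role while also providing underestimation.

```latex
% A signomial term is c \prod_i x_i^{p_i} with real exponents p_i and x_i > 0.
% For c > 0, the single-variable substitutions x_i = e^{y_i} give
%   c \prod_i x_i^{p_i} \;=\; c\, e^{\sum_i p_i y_i},
% the exponential of an affine function of y, hence convex in y.
% The power transformations x_i = X_i^{Q_i} discussed in the paper act in the
% same per-variable fashion, with the exponents Q_i chosen so that the
% transformed term is convex and underestimates the original one.
```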

5.
This paper studies customers' equilibrium joining strategies in an M/M/1 constant-retrial-rate queue with an unreliable server, where the server fails at different rates when busy and when idle. There is no waiting space in front of the server: an arriving customer who finds the server idle occupies it and begins service; otherwise, if the server is busy, the customer may leave a message so that, when idle, the server can search the retrial orbit and serve the customers who left messages in order. When the server fails, the customer in service is lost and new arrivals are rejected. According to the level of information the system provides to customers, we derive the steady-state performance measures for both the observable case (queue length visible) and the unobservable case, obtain the customers' equilibrium joining strategies based on a reward-cost function, and construct the service provider's revenue and the social welfare per unit time. A comparison shows that disclosing queue-length information does not necessarily increase either the provider's revenue or social welfare.

6.
Estimating the parameters of econometric functions (maximum likelihood functions or nonlinear least squares functions) is often a challenging global optimization problem. Determining the global optimum of these functions is necessary to understand economic behavior and to develop effective economic policies. These functions often have flat surfaces or surfaces characterized by many local optima, and classical deterministic optimization methods often fail on them. For that reason, stochastic optimization methods are becoming widely used in econometrics. Selected stochastic methods are applied to two difficult econometric functions to determine whether they might be useful in estimating the parameters of these functions.

7.
When solving real-world optimization problems, evolutionary algorithms often require a large number of fitness evaluations to converge to the global optima, and attempts have been made to find techniques that reduce the number of fitness function evaluations. We propose a novel framework in the context of multi-objective optimization in which fitness evaluations are distributed by creating a limited number of adaptive spheres spanning the search space. These spheres move towards the global Pareto front as components of a swarm-optimization system; we call this process localization. The contribution of the paper is a general framework for distributed evolutionary multi-objective optimization, in which the individuals in each sphere can be controlled by any existing evolutionary multi-objective optimization algorithm in the literature.

8.
An experimental methodology for response surface optimization methods
Response surface methods, and global optimization techniques in general, are typically evaluated using a small number of standard synthetic test problems, in the hope that these are a good surrogate for real-world problems. We introduce a new, more rigorous methodology for evaluating global optimization techniques that is based on generating thousands of test functions and then evaluating algorithm performance on each one. The test functions are generated by sampling from a Gaussian process, which allows us to create a set of test functions that are interesting and diverse. They will have different numbers of modes, different maxima, etc., and yet they will be similar to each other in overall structure and level of difficulty. This approach allows for a much richer empirical evaluation of methods, capable of revealing insights that would not be gained using a small set of test functions. To facilitate the development of large empirical studies for evaluating response surface methods, we introduce a dimension-independent measure of average test problem difficulty, and we introduce acquisition criteria that are invariant to vertical shifting and scaling of the objective function. We also use our experimental methodology to conduct a large empirical study of response surface methods. We investigate the influence of three properties (parameter estimation, exploration level, and gradient information) on the performance of response surface methods.
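One standard way to generate such test functions is to draw sample paths from a Gaussian process prior on a grid. A minimal sketch (the function name, grid, and squared-exponential covariance are illustrative assumptions, not the paper's exact generator):

```python
import numpy as np

def sample_gp_test_function(n_grid=50, length_scale=0.2, seed=0):
    """Draw one random 1-D test function on [0, 1] from a zero-mean Gaussian
    process with a squared-exponential covariance (one draw per seed)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / length_scale**2)
    # Small ridge keeps the covariance numerically positive semi-definite.
    f = rng.multivariate_normal(np.zeros(n_grid), K + 1e-10 * np.eye(n_grid))
    return x, f
```

Varying the seed yields thousands of distinct functions that nonetheless share the same length scale and overall level of difficulty, which is exactly the property the methodology exploits.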

9.
This paper presents a new sequential method for constrained nonlinear optimization problems whose principal characteristics are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical derivative-based optimization methods are not applicable because derivative information is often unavailable and too expensive to approximate through finite differencing. The algorithm first creates an experimental design and evaluates the underlying functions at the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e., the points are located in such a way that they yield a poor approximation of the actual model, then a geometry-improving point is evaluated instead of an objective-improving one. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on obtaining good solutions with a limited number of function evaluations.
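The core step, fitting a weighted local linear model and minimizing it over a trust region, can be sketched as follows. This is a hedged toy version: the distance-decay weights, the function names, and the plain gradient step stand in for the paper's weighted regression scheme and filter criterion, which are not reproduced here.

```python
import numpy as np

def weighted_linear_model(X, y, center, radius):
    """Fit y ~ a + b.(x - center) by weighted least squares, down-weighting
    points far from the trust-region center (a stand-in weighting scheme)."""
    D = X - center
    w = np.exp(-np.linalg.norm(D, axis=1) / radius)   # simple decay weights
    A = np.hstack([np.ones((len(X), 1)), D])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * y, rcond=None)
    return coef[0], coef[1:]                          # intercept, gradient

def trust_region_step(center, grad, radius):
    # A linear model is minimized over a ball by stepping the full radius
    # against the gradient (when the gradient is nonzero).
    g = np.linalg.norm(grad)
    return center if g == 0 else center - radius * grad / g
```

In the actual method the accepted step also depends on feasibility and on the filter criterion, and the trust-region radius shrinks when no improving point is found.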

10.
While there is abundant literature in Response Surface Methodology (RSM) on seeking optimal operating settings for Dual Response Systems (DRS) using various optimisation approaches, the inherent sampling variability of the fitted responses has typically been neglected. That is, the single global optimum settings for the fitted response represent the expected value of the fitted functions, since the true response systems are, in general, noisy and unknown in many engineering and scientific experiments. This paper presents an approach for DRS based on Monte Carlo simulation of the system under study. For each simulated set of responses, a new global optimisation algorithm for DRS is utilised to compute the globally optimal factor settings. Repetition of this process constructs an optimal region in the control factor space that provides more useful information to a process engineer than a single expected-value optimal solution. It is shown how the optimal region can be used as an indicator of how trustworthy this single solution is, and as a set of alternative solutions from which an engineer can select other process settings in case limitations not considered by the DRS model prevent the adoption of the single expected optimum. Application to Taguchi's Robust Parameter Design problems illustrates the proposed method.

11.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces minimizers entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on stepwise uncertainty reduction, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the Efficient Global Optimization (EGO) algorithm. An empirical comparison is carried out between our criterion and expected improvement, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
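For comparison, the expected improvement baseline mentioned above has a well-known closed form. The sketch below pairs it with a toy simple-Kriging posterior (zero mean, squared-exponential kernel, noise-free data); it does not implement the minimizer-entropy criterion, whose approximation requires conditional simulations, and the kernel and function names are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, pi

def gp_posterior(X, y, Z, length_scale=0.3, noise=1e-10):
    """Posterior mean/std of a zero-mean GP with squared-exponential kernel,
    conditioned on noise-free evaluations (simple Kriging, a sketch)."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :])**2, axis=-1)
        return np.exp(-0.5 * d2 / length_scale**2)
    K = k(X, X) + noise * np.eye(len(X))
    Kz = k(Z, X)
    mu = Kz @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Kz * np.linalg.solve(K, Kz.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    """EI(x) = E[max(y_best - Y(x), 0)] for minimization."""
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))   # normal CDF
    phi = np.exp(-0.5 * z**2) / sqrt(2.0 * pi)             # normal PDF
    return (y_best - mu) * Phi + sigma * phi
```

EI is cheap to evaluate pointwise, which is why it is the standard reference; the CME criterion instead targets the information gained about the minimizer's location.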

12.
Functional optimization problems can be solved analytically only if special assumptions are verified; otherwise, approximations are needed. The approximate method that we propose is based on two steps. First, the decision functions are constrained to take on the structure of linear combinations of basis functions containing free parameters to be optimized (hence, this step can be considered as an extension to the Ritz method, for which fixed basis functions are used). Then, the functional optimization problem can be approximated by nonlinear programming problems. Linear combinations of basis functions are called approximating networks when they benefit from suitable density properties. We term such networks nonlinear (linear) approximating networks if their basis functions contain (do not contain) free parameters. For certain classes of d-variable functions to be approximated, nonlinear approximating networks may require a number of parameters increasing moderately with d, whereas linear approximating networks may be ruled out by the curse of dimensionality. Since the cost functions of the resulting nonlinear programming problems include complex averaging operations, we minimize such functions by stochastic approximation algorithms. As important special cases, we consider stochastic optimal control and estimation problems. Numerical examples show the effectiveness of the method in solving optimization problems stated in high-dimensional settings, involving for instance several tens of state variables.

13.
Global optimization is a field of mathematical programming dealing with finding global (absolute) minima of multi-dimensional multiextremal functions. Problems of this kind, where the objective function is non-differentiable, satisfies the Lipschitz condition with an unknown Lipschitz constant, and is given as a black box, are very often encountered in engineering optimization applications. Due to the presence of multiple local minima and the absence of differentiability, traditional optimization techniques that use gradients and work with problems having only one minimum cannot be applied in this case. These real-life applied problems are attacked here by employing one of the most abstract mathematical objects: space-filling curves. A practical derivative-free deterministic method is proposed that reduces the dimensionality of the problem by using space-filling curves and works simultaneously with all possible estimates of Lipschitz and Hölder constants. A smart adaptive balancing of local and global information collected during the search is performed at each iteration. Conditions ensuring convergence of the new method to the global minima are established. Results of numerical experiments on 1000 randomly generated test functions show a clear superiority of the new method with respect to the popular method DIRECT and other competitors.
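The dimensionality-reduction idea can be illustrated concretely. The sketch below uses a Morton (Z-order) curve built by bit de-interleaving; this is simpler than the Peano-type curves such methods typically employ, and the plain 1-D grid search replaces the paper's adaptive Lipschitz/Hölder machinery. All names here are illustrative.

```python
def zorder_point(t, bits=16):
    """Map t in [0, 1) to a point in the unit square by de-interleaving the
    bits of its binary expansion (a Morton/Z-order space-filling curve)."""
    n = int(t * (1 << (2 * bits)))          # 2*bits-bit integer curve code
    x = y = 0
    for i in range(bits):
        x |= ((n >> (2 * i)) & 1) << i      # even code bits -> x
        y |= ((n >> (2 * i + 1)) & 1) << i  # odd code bits  -> y
    return x / (1 << bits), y / (1 << bits)

def minimize_on_curve(f, n=10001, bits=16):
    # Reduce a 2-D optimization over the unit square to a 1-D search
    # along the curve parameter t.
    best_t = min((i / n for i in range(n)),
                 key=lambda t: f(*zorder_point(t, bits)))
    return zorder_point(best_t, bits)
```

Because the curve fills the square, a fine enough 1-D search along it approaches the 2-D global minimum; the method in the paper replaces this uniform scan with an adaptive scheme driven by estimated Hölder constants.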

14.
Robust optimization with simulated annealing
Complex systems can be optimized to improve their performance with respect to desired functionalities. An optimized solution, however, can become suboptimal or even infeasible when errors in implementation or input data are encountered. We report on a robust simulated annealing algorithm that does not require any knowledge of the problem's structure, which is necessary in many engineering applications where solutions are often not explicitly known and have to be obtained by numerical simulations. This nonconvex global optimization method improves both performance and robustness, yielding a global optimum that is robust against data and implementation uncertainties. We demonstrate it on a polynomial optimization problem and on a high-dimensional, complex nanophotonic engineering problem, and show significant improvements in efficiency as well as in actual optimality.
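A minimal sketch of the idea, under stated assumptions: robustness to implementation errors is approximated by taking the worst cost over a few random perturbations of each candidate, and that worst-case cost is what the annealer minimizes. The cooling schedule, the probe count, the step size, and the function names are all assumptions for illustration, not the authors' algorithm.

```python
import math
import random

def robust_cost(f, x, delta=0.1, n_probe=8, rng=random):
    # Robustness proxy: worst observed cost over random implementation
    # errors of size up to delta around x.
    return max(f(x + rng.uniform(-delta, delta)) for _ in range(n_probe))

def robust_annealing(f, x0, steps=2000, t0=1.0, delta=0.1, seed=0):
    """Minimize the worst-case (perturbed) cost of a 1-D function by
    simulated annealing; no gradient or structure of f is used."""
    rng = random.Random(seed)
    x, cost = x0, robust_cost(f, x0, delta, rng=rng)
    best_x, best_cost = x, cost
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9          # linear cooling schedule
        cand = x + rng.gauss(0.0, 0.5)
        cand_cost = robust_cost(f, cand, delta, rng=rng)
        # Metropolis acceptance: always take improvements, sometimes accept
        # worse moves while the temperature is high.
        if cand_cost < cost or rng.random() < math.exp(-(cand_cost - cost) / temp):
            x, cost = cand, cand_cost
            if cost < best_cost:
                best_x, best_cost = x, cost
    return best_x, best_cost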

15.
Gregor Kotucha, Klaus Hackl, PAMM 2006, 6(1):229–230
The formulation of structural optimization problems on the basis of the finite-element method often leads to numerical instabilities, resulting in non-optimal designs that turn out to be difficult to realize from an engineering point of view. In topology optimization, designs characterized by oscillating density distributions, such as the well-known "checkerboard patterns", can be observed, whereas the solution of shape optimization problems often results in unfavourable designs with non-smooth boundary shapes caused by high-frequency oscillations of the boundary shape functions. Furthermore, a strong dependence of the obtained designs on the finite-element mesh can be observed in both cases. In this context we have already shown that the topology design problem can be regularized by penalizing spatial oscillations of the density function by means of a penalty approach based on the density gradient. In the present paper we apply this idea of regularization by penalizing oscillations of the design variable to overcome the numerical difficulties related to the shape design problem, where an analogous approach restricting the boundary surface can be introduced.

16.
The deterministic annealing optimization method is related to homotopy methods of optimization but is oriented towards global optimization: specifically, it tries to tune a penalty parameter, thought of as "temperature", in such a way as to reach a global optimum. Optimization by deterministic annealing is based on thermodynamics, in the same sense that simulated annealing is based on statistical mechanics. It is claimed to be very fast and effective, and is popular in significant engineering applications. The language used to describe it is usually that of statistical physics, and the optimization community has paid it relatively little attention; this paper in part attempts to overcome this barrier by describing deterministic annealing in more familiar terms. The main contribution of this paper is to show explicitly that constraints can be handled in the context of deterministic annealing by using constraint selection functions, a generalization of penalty and barrier functions. Constraint selection allows the embedding of discrete problems into (non-convex) continuous problems. We also show how an idealized version of deterministic annealing can be understood in terms of bifurcation theory, which clarifies the limitations of its convergence properties.

17.
Minimal surfaces are widely used in engineering, so introducing them into computer-aided geometric design (CAGD) is of considerable significance. This paper surveys recent work on minimal-surface modeling in CAGD. By modeling approach, existing work falls into two classes: exact methods and approximation methods. Exact methods comprise two parts: the control-mesh representation and construction of certain special minimal surfaces, and the discovery and properties of polynomial minimal surfaces in isothermal parameters. Approximation methods comprise three parts: approximation based on numerical computation, approximation based on linear partial differential equations, and approximation based on optimization of energy functionals. Finally, these methods are analyzed and compared, and open problems in minimal-surface modeling are discussed.

18.
B-spline curves and surfaces are widely used in computer-aided design (CAD), data visualization, virtual reality, surface modeling and many other fields. In particular, data fitting with B-splines is a challenging problem in reverse engineering. B-splines are also the most preferred approximating curves because they are very flexible, have powerful mathematical properties and can represent a large variety of shapes efficiently [1]. The selection of knots in B-spline approximation has an important and considerable effect on the behavior of the final approximation. Recently, considerable attention has been paid in the literature to algorithms inspired by natural processes or events for solving optimization problems, such as genetic algorithms, simulated annealing, ant colony optimization and particle swarm optimization. Invasive weed optimization (IWO) is a novel optimization method inspired by the ecology of invasive weeds in agriculture. In this paper, optimal knots are selected for B-spline curve fitting through the invasive weed optimization method. Test functions selected from the literature are used to measure performance. The results are compared with those of other approaches used in B-spline curve fitting, such as Lasso, particle swarm optimization, the improved clustering algorithm, genetic algorithms and artificial immune systems. The experimental results illustrate that the results from IWO are generally better than those from the other methods.

19.
A local linear embedding module for evolutionary computation optimization
A Local Linear Embedding (LLE) module enhances the performance of two Evolutionary Computation (EC) algorithms employed as search tools in global optimization problems. The LLE exploits the stochastic sampling of the data space inherent in Evolutionary Computation to reconstruct an approximate mapping from the data space back into the parameter space. This allows the target data vector to be mapped directly into the parameter space to obtain a rough estimate of the global optimum, which is then added to the EC generation. This process is iterated and considerably improves EC convergence. Thirteen standard test functions and two real-world optimization problems serve to benchmark the performance of the method. In most of our tests, optimization aided by the LLE mapping outperforms standard implementations of a genetic algorithm and of particle swarm optimization. The number and range of functions we tested suggest that the proposed algorithm can be considered a valid alternative to traditional EC tools in more general applications. The performance improvement in the early stage of convergence also suggests that this hybrid implementation could succeed as an initial global search to select candidates for subsequent local optimization.

20.
This paper presents a meta-algorithm for approximating the Pareto optimal set of costly black-box multiobjective optimization problems given a limited number of objective function evaluations. The key idea is to switch among different algorithms during the optimization search based on the predicted performance of each algorithm at the time. Algorithm performance is modeled using a machine learning technique based on the available information. The predicted best algorithm is then selected to run for a limited number of evaluations. The proposed approach is tested on several benchmark problems and the results are compared against those obtained using any one of the candidate algorithms alone.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号