Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, “real-world” problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of weighting and aggregating the costs upfront). Most of the work in multiobjective optimization is focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove to be even more worthwhile than SBO methods for expediting the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
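To make the expected-improvement idea concrete, the sketch below implements the single-objective form of the criterion on a toy Kriging (Gaussian-process) surrogate; the multiobjective probability-of-improvement and expected-improvement criteria used by EMO are more involved. The kernel, its fixed hyperparameters, and the 1-D test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def kriging_fit(X, y, length=0.3, noise=1e-6):
    """Fit a zero-mean Kriging/GP model with a squared-exponential kernel."""
    d = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d / (2 * length ** 2)) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    return X, L, alpha, length

def kriging_predict(model, Xq):
    """Posterior mean and standard deviation at the query points."""
    X, L, alpha, length = model
    d = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    k = np.exp(-d / (2 * length ** 2))
    mu = k @ alpha
    v = np.linalg.solve(L, k.T)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: expected improvement over the best value so far."""
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy run on an assumed 1-D test function (not one of the paper's benchmarks).
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0]
X = rng.uniform(0, 2, size=(6, 1))
y = f(X)
model = kriging_fit(X, y)
Xq = np.linspace(0, 2, 200)[:, None]
mu, sigma = kriging_predict(model, Xq)
ei = expected_improvement(mu, sigma, y.min())
print("next sample point:", Xq[np.argmax(ei)])    # point maximizing EI
```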

2.
Model-based search methods are a class of optimization techniques that search the solution space by sampling from an underlying probability distribution “model,” which is updated iteratively after evaluating the performance of the samples at each iteration. This paper aims to improve the sampling efficiency of model-based methods by considering a generalization where a population of distribution models is maintained and subsequently propagated from generation to generation. A key issue in the proposed approach is how to efficiently allocate the sampling budget among the population of models to maximize the algorithm performance. We formulate this problem as a generalized max k-armed bandit problem, and derive an efficient dynamic sample allocation scheme based on Markov decision theory to adaptively allocate computational resources. The proposed allocation scheme is then further used to update the current population to produce an improving population of models. Our preliminary numerical results indicate that the proposed procedure may considerably reduce the number of function evaluations needed to obtain high quality solutions, and thus further enhance the value of model-based methods for optimization problems that require expensive function evaluations for performance evaluation.
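The paper derives its allocation scheme from Markov decision theory; as a rough, hedged illustration of the underlying budget-allocation question only, the sketch below uses a much simpler UCB-style index over a population of Gaussian sampling models. The models, the index, and the toy objective are assumptions for illustration and are not the paper's scheme.

```python
import numpy as np

def allocate_and_sample(models, objective, rounds=200, c=0.5, seed=0):
    """Heuristic sample allocation among a population of sampling models:
    at each round, sample from the model whose index (best value found so far
    plus an exploration bonus) is largest."""
    rng = np.random.default_rng(seed)
    K = len(models)
    best = np.full(K, -np.inf)                  # best objective value seen per model
    counts = np.zeros(K)
    for t in range(1, rounds + 1):
        bonus = c * np.sqrt(np.log(t) / np.maximum(counts, 1))
        untried = np.where(counts == 0)[0]
        k = untried[0] if untried.size else int(np.argmax(best + bonus))
        mean, std = models[k]
        x = rng.normal(mean, std)               # draw a candidate from model k
        best[k] = max(best[k], objective(x))
        counts[k] += 1
    return best, counts

# Assumed toy setup: maximize a 1-D function; three candidate sampling models.
objective = lambda x: -(x - 2.0) ** 2
models = [(0.0, 1.0), (2.0, 0.5), (5.0, 1.0)]   # (mean, std) of each model
best, counts = allocate_and_sample(models, objective)
print("best value per model:", np.round(best, 3))
print("samples allocated:   ", counts)
```

In this toy run, most of the budget should drift toward the model centered near the optimum, which is the qualitative behavior an allocation scheme is meant to produce.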

3.
Convexity and decomposition of mean-risk stochastic programs
Traditional stochastic programming is risk neutral in the sense that it is concerned with the optimization of an expectation criterion. A common approach to addressing risk in decision making problems is to consider a weighted mean-risk objective, where some dispersion statistic is used as a measure of risk. We investigate the computational suitability of various mean-risk objective functions in addressing risk in stochastic programming models. We prove that the classical mean-variance criterion leads to computational intractability even in the simplest stochastic programs. On the other hand, a number of alternative mean-risk functions are shown to be computationally tractable using slight variants of existing stochastic programming decomposition algorithms. We propose decomposition-based parametric cutting plane algorithms to generate mean-risk efficient frontiers for two particular classes of mean-risk objectives.
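As a small numerical illustration of what a weighted mean-risk objective looks like on scenario data, the sketch below evaluates the classical mean-variance criterion and a mean-CVaR criterion (CVaR being one of the tractable alternatives of the kind discussed here). The scenario costs, probabilities, and trade-off weights are assumed toy values.

```python
import numpy as np

def mean_variance(costs, probs, lam):
    """Classical mean-variance criterion: E[Z] + lam * Var[Z]."""
    mean = probs @ costs
    var = probs @ (costs - mean) ** 2
    return mean + lam * var

def mean_cvar(costs, probs, lam, alpha=0.95):
    """Mean-CVaR criterion: E[Z] + lam * CVaR_alpha[Z] (Rockafellar-Uryasev form)."""
    mean = probs @ costs
    # CVaR_alpha(Z) = min_t { t + E[(Z - t)_+] / (1 - alpha) }; for a discrete
    # distribution the minimum is attained at one of the scenario values.
    cvar = min(t + probs @ np.maximum(costs - t, 0.0) / (1 - alpha) for t in costs)
    return mean + lam * cvar

# Assumed toy scenario costs and probabilities (illustration only).
costs = np.array([10.0, 12.0, 15.0, 30.0])
probs = np.array([0.4, 0.3, 0.2, 0.1])
print(mean_variance(costs, probs, lam=0.1))
print(mean_cvar(costs, probs, lam=0.5, alpha=0.9))
```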

4.
Uncertainties are a daily issue in aerospace engineering and its applications. Robust optimization methods commonly use randomly generated inputs and take advantage of multi-point criteria to search for robust solutions that account for the uncertainty definition. From the computational point of view, the application to coupled problems, such as computational fluid dynamics (CFD) or fluid-structure interaction (FSI), can be extremely expensive. This work presents a coupling between stochastic analysis techniques and evolutionary optimization algorithms to define a stochastic robust optimization procedure. First, a stochastic procedure is proposed for application to optimization problems. The proposed method has been applied to both CFD and FSI problems for the reduction of drag and flutter, respectively.
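A minimal sketch of a multi-point robust criterion of the kind described here, assuming a simple mean-plus-spread aggregation over randomly generated inputs; the expensive CFD/FSI evaluation is replaced by a cheap analytic stand-in, and the uncertainty model and weighting factor are illustrative assumptions.

```python
import numpy as np

def robust_fitness(design, simulate, rng, n_samples=50, k=1.0):
    """Multi-point robust criterion: penalize both the mean and the spread of
    the performance under sampled uncertain operating conditions."""
    # Assumed uncertainty model: Gaussian perturbations of the nominal inputs.
    perturbations = rng.normal(0.0, 0.05, size=(n_samples, len(design)))
    values = np.array([simulate(design + p) for p in perturbations])
    return values.mean() + k * values.std()

# Cheap analytic stand-in for the expensive CFD/FSI solver (illustration only).
simulate = lambda x: (x[0] - 1.0) ** 2 + 0.5 * np.sin(5 * x[1])

rng = np.random.default_rng(1)
candidates = rng.uniform(-1, 2, size=(20, 2))          # e.g. a GA population
scores = [robust_fitness(c, simulate, rng) for c in candidates]
print("most robust candidate:", candidates[int(np.argmin(scores))])
```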

5.
In many engineering optimization problems, the objective and the constraints, which come from complex analytical models, are often black-box functions requiring extensive computational effort. In this case, it is necessary for the optimization process to fit surrogate models to sampled data so as to reduce the number of objective and constraint evaluations as much as possible. In addition, it is sometimes difficult for constrained optimization problems based on surrogate models to find a feasible point, which is the prerequisite for further searching for a globally optimal feasible solution. For this purpose, a new Kriging-based Constrained Global Optimization (KCGO) algorithm is proposed. Unlike previous Kriging-based methods, this algorithm can handle black-box constrained optimization problems even if all initial sampling points are infeasible. There are two pivotal phases in the KCGO algorithm. The main task of the first phase is to find a feasible point when there are no feasible data in the initial sample, and the aim of the second phase is to obtain a better feasible point with few additional expensive function evaluations. Several numerical problems and three design problems are tested to illustrate the feasibility, stability and effectiveness of the proposed method.

6.

In this study, we consider two classes of multicriteria two-stage stochastic programs in finite probability spaces with multivariate risk constraints. The first-stage problem features multivariate stochastic benchmarking constraints based on a vector-valued random variable representing multiple and possibly conflicting stochastic performance measures associated with the second-stage decisions. In particular, the aim is to ensure that the decision-based random outcome vector of interest is preferable to a specified benchmark with respect to the multivariate polyhedral conditional value-at-risk or a multivariate stochastic order relation. In this case, the classical decomposition methods cannot be used directly due to the complicating multivariate stochastic benchmarking constraints. We propose an exact unified decomposition framework for solving these two classes of optimization problems and show its finite convergence. We apply the proposed approach to a stochastic network design problem in the context of pre-disaster humanitarian logistics and conduct a computational study concerning the threat of hurricanes in the Southeastern part of the United States. The numerical results provide practical insights about our modeling approach and show that the proposed algorithm is computationally scalable.


7.
In this paper, we present an evolutionary algorithm hybridized with a gradient-based optimization technique in the spirit of Lamarckian learning for efficient design optimization. In order to expedite gradient search, we employ local surrogate models that approximate the outputs of a computationally expensive Euler solver. Our focus is on the case when an adjoint Euler solver is available for efficiently computing the sensitivities of the outputs with respect to the design variables. We propose the idea of using Hermite interpolation to construct gradient-enhanced radial basis function networks that incorporate sensitivity data provided by the adjoint Euler solver. Further, we conduct local search using a trust-region framework that interleaves gradient-enhanced surrogate models with the computationally expensive adjoint Euler solver. This ensures that the present hybrid evolutionary algorithm inherits the convergence properties of the classical trust-region approach. We present numerical results for airfoil aerodynamic design optimization problems to show that the proposed algorithm converges to good designs on a limited computational budget.
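The sketch below shows only the trust-region management logic that such a hybrid relies on (step acceptance via the ratio of actual to predicted reduction, and radius updates). It uses a plain linear model along the gradient direction rather than the paper's gradient-enhanced RBF surrogates, and the cheap test function and "adjoint" gradient are stand-ins.

```python
import numpy as np

def trust_region_descent(f, grad, x0, radius=0.5, max_iter=50,
                         eta=0.1, shrink=0.5, grow=2.0):
    """Basic trust-region management: take a model step, compare the actual
    reduction of f with the reduction predicted by the local model, then
    accept or reject the step and update the trust-region radius."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        step = -radius * g / np.linalg.norm(g)   # steepest-descent step to the boundary
        predicted = -g @ step                    # reduction predicted by the linear model
        f_new = f(x + step)
        rho = (fx - f_new) / predicted           # agreement between model and true function
        if rho > eta:                            # good enough agreement: accept the step
            x, fx = x + step, f_new
            if rho > 0.75:
                radius *= grow
        else:                                    # poor agreement: reject and shrink
            radius *= shrink
    return x, fx

# Cheap stand-ins for the expensive solver and its adjoint gradient (illustration only).
f = lambda x: (x[0] - 2) ** 2 + 2 * (x[1] + 1) ** 2
grad = lambda x: np.array([2 * (x[0] - 2), 4 * (x[1] + 1)])
print(trust_region_descent(f, grad, x0=[0.0, 0.0]))
```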

8.
Two common questions when one uses a stochastic global optimization algorithm, e.g., simulated annealing, are when to stop a single run of the algorithm, and whether to restart with a new run or terminate the entire algorithm. In this paper, we develop a stopping and restarting strategy that considers tradeoffs between the computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms, and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate here the practicality of DMSS by using it to govern the application of a simple local search heuristic on three test problems from the global optimization literature.
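As a hedged illustration of the stop-and-restart trade-off, the sketch below runs a simple local search with a per-run stopping rule and restarts until recent restarts stop improving the incumbent. The stalling and termination thresholds here are heuristic assumptions, not the HASPLID/DMSS rules derived in the paper.

```python
import numpy as np

def local_search(f, x, rng, step=0.2, patience=30):
    """A single run: random perturbations, stopped after `patience`
    consecutive non-improving moves (the per-run stopping rule)."""
    fx, stall, evals = f(x), 0, 0
    while stall < patience:
        cand = x + rng.normal(0, step, size=x.shape)
        fc = f(cand)
        evals += 1
        if fc < fx - 1e-12:
            x, fx, stall = cand, fc, 0
        else:
            stall += 1
    return x, fx, evals

def multistart(f, dim, bounds, budget=5000, seed=0):
    """Restart strategy: launch new runs while budget remains and recent
    restarts keep improving the best value found so far."""
    rng = np.random.default_rng(seed)
    best_x, best_f, used, failed_restarts = None, np.inf, 0, 0
    while used < budget and failed_restarts < 3:      # heuristic termination rule
        x0 = rng.uniform(bounds[0], bounds[1], size=dim)
        x, fx, evals = local_search(f, x0, rng)
        used += evals
        if fx < best_f - 1e-9:
            best_x, best_f, failed_restarts = x, fx, 0
        else:
            failed_restarts += 1
    return best_x, best_f

# Assumed multimodal test function (not one of the paper's test problems).
rastrigin = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
print(multistart(rastrigin, dim=2, bounds=(-5.12, 5.12)))
```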

9.

In this work, we study a stochastic single-machine scheduling problem in which learning effects on processing times, sequence-dependent setup times, and machine configuration selection are considered simultaneously. More precisely, the machine works under a set of configurations and requires stochastic sequence-dependent setup times to switch from one configuration to another. Also, the stochastic processing time of a job is a function of its position and the machine configuration. The objective is to find the sequence of jobs and choose a configuration for processing each job so as to minimize the makespan. We first show that the proposed problem can be formulated through two-stage and multi-stage stochastic programming models, which are challenging from the computational point of view. Then, by looking at the problem as a multi-stage dynamic random decision process, a new deterministic approximation-based formulation is developed. The method first derives a mixed-integer non-linear model based on the concept of accessibility to all possible and available alternatives at each stage of the decision-making process. Then, to solve the problem efficiently, a new accessibility measure is defined to convert the model into the search for a shortest path through the stages. Extensive computational experiments are carried out on various sets of instances. We discuss and compare the results found by solving the plain stochastic models with those obtained by the deterministic approximation approach. Our approximation shows excellent performance in terms of both solution accuracy and computational time.


10.
A class of stochastic optimization problems is analyzed that cannot be solved by deterministic and standard stochastic approximation methods. We consider risk-control problems, optimization of stochastic networks and discrete event systems, screening irreversible changes, and pollution control. The results of Ermoliev et al. are extended to the case of stochastic systems and general constraints. It is shown that the concept of stochastic mollifier gradient leads to easily implementable computational procedures for systems with Lipschitz and discontinuous objective functions. New optimality conditions are formulated for designing stochastic search procedures for constrained optimization of discontinuous systems.

11.

This paper proposes two algorithms for solving stochastic control problems with deep learning, with a focus on the utility maximisation problem. The first algorithm solves Markovian problems via the Hamilton-Jacobi-Bellman (HJB) equation. We solve this highly nonlinear partial differential equation (PDE) with a second-order backward stochastic differential equation (2BSDE) formulation. The convex structure of the problem allows us to describe a dual problem that can either verify the original primal approach or bypass some of the complexity. The second algorithm utilises the full power of the duality method to solve non-Markovian problems, which are often beyond the scope of stochastic control solvers in the existing literature. We solve an adjoint BSDE that satisfies the dual optimality conditions. We apply these algorithms to problems with power, log and non-HARA utilities in the Black-Scholes, Heston stochastic volatility, and path-dependent volatility models. Numerical experiments show highly accurate results with low computational cost, supporting our proposed algorithms.


12.
In this paper, the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well known stochastic optimization algorithms, namely, simulated annealing, genetic algorithm and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as a criterion function for the algorithm; in cases where it is not analytically tractable, a Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, with a very low computational cost, it achieves improved performance when compared with popular MCMC algorithms, such as the MCMC model composition, as well as with “vanilla” versions of simulated annealing, genetic algorithm and tabu search.
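A simplified sketch of the simulated-annealing component of such a model search over variable-inclusion indicators. BIC is used here as a stand-in criterion for the posterior model probability (which the paper computes with suitable priors and, where needed, a Laplace approximation), and the data are simulated.

```python
import numpy as np

def bic_score(X, y, gamma):
    """Stand-in criterion: BIC of the linear model using the selected columns."""
    n = len(y)
    cols = np.flatnonzero(gamma)
    Xg = np.column_stack([np.ones(n), X[:, cols]]) if cols.size else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    rss = np.sum((y - Xg @ beta) ** 2)
    return n * np.log(rss / n) + Xg.shape[1] * np.log(n)

def sa_model_search(X, y, n_iter=2000, t0=2.0, seed=0):
    """Simulated-annealing search over 0/1 variable-inclusion vectors."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = rng.integers(0, 2, size=p)          # current model
    score = bic_score(X, y, gamma)
    best_gamma, best_score = gamma.copy(), score
    for k in range(n_iter):
        temp = t0 / (1 + k)                     # simple cooling schedule
        cand = gamma.copy()
        cand[rng.integers(p)] ^= 1              # flip one inclusion indicator
        cand_score = bic_score(X, y, cand)
        if cand_score < score or rng.random() < np.exp((score - cand_score) / temp):
            gamma, score = cand, cand_score     # accept (always if better)
        if score < best_score:
            best_gamma, best_score = gamma.copy(), score
    return best_gamma, best_score

# Simulated data: only the first three of ten predictors matter (illustration only).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200)
print(sa_model_search(X, y))
```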

13.
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function which is used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) Method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints that the point be of some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of this constraint is to drive the method towards unexplored regions of the domain and to prevent the premature convergence of the method to some point which may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model that is used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms are better than other algorithms for constrained global optimization on the nonlinearly constrained test problem.
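The sketch below illustrates one CORS-style iteration under stated assumptions: it fits an RBF response surface (via scipy's RBFInterpolator), then picks the point that minimizes the surface among random candidates while staying at least a cycling fraction of the domain diameter away from previously evaluated points. The candidate-sampling shortcut and the toy function are simplifications, not the paper's exact implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def cors_next_point(X, y, bounds, beta, n_cand=5000, seed=0):
    """One CORS-style step: minimize the RBF surrogate over random candidates,
    subject to a minimum distance beta * (domain diameter) from evaluated points."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
    dmin = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1), axis=1)
    ok = dmin >= beta * np.linalg.norm(hi - lo)       # distance requirement
    if not np.any(ok):                                # fall back if too restrictive
        ok = dmin >= dmin.max() * 0.999
    pred = surrogate(cand[ok])
    return cand[ok][np.argmin(pred)]

# Assumed cheap stand-in for the expensive black-box function (illustration only).
f = lambda x: (x[..., 0] - 1) ** 2 + (x[..., 1] + 0.5) ** 2
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(8, 2))
y = f(X)
# Cycle the distance factor from global search (large beta) to local search (small beta).
for beta in [0.20, 0.10, 0.05, 0.0]:
    x_next = cors_next_point(X, y, bounds=([-2, -2], [2, 2]), beta=beta)
    X, y = np.vstack([X, x_next]), np.append(y, f(x_next))
print("best point so far:", X[np.argmin(y)])
```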

14.
We propose a multi-time scale quasi-Newton based smoothed functional (QN-SF) algorithm for stochastic optimization both with and without inequality constraints. The algorithm combines the smoothed functional (SF) scheme for estimating the gradient with the quasi-Newton method to solve the optimization problem. Newton algorithms typically update the Hessian at each instant and subsequently (a) project it onto the space of symmetric positive definite matrices, and (b) invert the projected Hessian. The latter operation is computationally expensive. In order to save computational effort, we propose in this paper a quasi-Newton SF (QN-SF) algorithm based on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update rule. In Bhatnagar (ACM Trans. Model. Comput. Simul. 18(1):27–62, 2007), a Jacobi variant of Newton SF (JN-SF) was proposed and implemented to save computational effort. We compare our QN-SF algorithm with gradient SF (G-SF) and JN-SF algorithms on two different problems – first on a simple stochastic function minimization problem and the other on a problem of optimal routing in a queueing network. We observe from the experiments that the QN-SF algorithm performs significantly better than both G-SF and JN-SF algorithms on both the problem settings. Next we extend the QN-SF algorithm to the case of constrained optimization. In this case too, the QN-SF algorithm performs much better than the JN-SF algorithm. Finally we present the proof of convergence for the QN-SF algorithm in both unconstrained and constrained settings.
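To make the two ingredients concrete, the sketch below shows a basic Gaussian smoothed-functional gradient estimate and the BFGS inverse-Hessian update, combined in a small loop on an assumed noisy quadratic. The step sizes, sample counts and safeguards are illustrative choices, not the multi-time-scale scheme analyzed in the paper.

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=100, rng=None):
    """Smoothed-functional gradient estimate: average Gaussian-perturbed
    one-sided finite differences."""
    rng = rng or np.random.default_rng()
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        eta = rng.normal(size=x.shape)
        g += (f(x + beta * eta) - fx) / beta * eta
    return g / n_samples

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian estimate H (no explicit inversion),
    skipped when the curvature condition s'y > 0 fails."""
    sy = s @ y
    if sy <= 1e-12:
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Assumed noisy quadratic objective (stand-in for a simulation-based cost).
rng = np.random.default_rng(0)
f = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()

x = np.zeros(4)
H = np.eye(4)
g = sf_gradient(f, x, rng=rng)
for _ in range(50):
    step = -0.1 * H @ g                 # quasi-Newton direction, small fixed step size
    g_new = sf_gradient(f, x + step, rng=rng)
    H = bfgs_update(H, step, g_new - g)
    x, g = x + step, g_new
print("estimate:", x)                   # should approach the optimum near [1, 1, 1, 1]
```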

15.
This paper presents the surrogate model based algorithm SO-I for solving purely integer optimization problems that have computationally expensive black-box objective functions and that may have computationally expensive constraints. The algorithm was developed for solving global optimization problems, meaning that the relaxed optimization problems have many local optima. However, the method is also shown to perform well on many local optimization problems, and problems with linear objective functions. The performance of SO-I, a genetic algorithm, Nonsmooth Optimization by Mesh Adaptive Direct Search (NOMAD), SO-MI (Müller et al. in Comput Oper Res 40(5):1383–1400, 2013), variable neighborhood search, and a version of SO-I that only uses a local search has been compared on 17 test problems from the literature, and on eight realizations of two application problems. One application problem relates to hydropower generation, and the other one to throughput maximization. The numerical results show that SO-I finds good solutions most efficiently. Moreover, as opposed to SO-MI, SO-I is able to find feasible points by employing a first optimization phase that aims at minimizing a constraint violation function. A feasible user-supplied point is not necessary.
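A hedged sketch of the two-phase idea only: first drive a constraint-violation function to zero to find a feasible integer point, then improve the objective among feasible points. The surrogate machinery of SO-I is omitted in favour of plain random neighborhood moves, and the toy integer problem is an assumption.

```python
import numpy as np

def violation(x, constraints):
    """Total constraint violation: sum of positive parts of g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def neighbor(x, lo, hi, rng):
    """Random integer neighbor: perturb one coordinate by +/- 1 within bounds."""
    y = x.copy()
    j = rng.integers(len(x))
    y[j] = np.clip(y[j] + rng.choice([-1, 1]), lo[j], hi[j])
    return y

def two_phase_integer_search(obj, constraints, lo, hi, n_iter=3000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(lo, hi + 1)                     # random (possibly infeasible) start
    # Phase 1: drive the constraint violation to zero.
    for _ in range(n_iter):
        if violation(x, constraints) == 0:
            break
        y = neighbor(x, lo, hi, rng)
        if violation(y, constraints) <= violation(x, constraints):
            x = y
    # Phase 2: improve the objective while staying feasible.
    best = x.copy()
    for _ in range(n_iter):
        y = neighbor(best, lo, hi, rng)
        if violation(y, constraints) == 0 and obj(y) < obj(best):
            best = y
    return best, obj(best)

# Assumed toy problem (illustration only): minimize sum of squares subject to a budget.
lo, hi = np.zeros(4, dtype=int), np.full(4, 10)
obj = lambda x: float(np.sum((x - 3) ** 2))
constraints = [lambda x: 8 - np.sum(x)]              # require sum(x) >= 8
print(two_phase_integer_search(obj, constraints, lo, hi))
```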

16.
We consider the problem of coordinating the operations of two supply chain partners: a foreign shipping company and a domestic port. The two partners have conflicting business objectives, and the issue is to determine the optimal cycle time, by which the shipping company removes the empty containers from the domestic port, so that the joint profit of the two partners is maximized. The domestic port prefers a shorter cycle time to mitigate its empty-container accumulation and land-use problems, while the shipping company prefers a longer cycle time to economize on its expensive vessel capacity. We propose an iterative procedure to search for this optimal cycle time. In each iteration, a candidate cycle time is evaluated by solving a deterministic vessel scheduling problem and a stochastic container-yard capacity optimization problem. We prove properties of the vessel scheduling problem, derive the optimality condition under which the vessel scheduling problem can be decomposed, and show that the profit function of the domestic port is convex, so that the optimal container-yard capacity can be determined efficiently. Empirical observations on the algorithm's computational performance, collected from over 300 randomly generated test cases under various problem settings, are reported.

17.
This paper considers deterministic global optimization of scenario-based, two-stage stochastic mixed-integer nonlinear programs (MINLPs) in which the participating functions are nonconvex and separable in integer and continuous variables. A novel decomposition method based on generalized Benders decomposition, named nonconvex generalized Benders decomposition (NGBD), is developed to obtain ε-optimal solutions of the stochastic MINLPs of interest in finite time. The dramatic computational advantage of NGBD over state-of-the-art global optimizers is demonstrated through the computational study of several engineering problems, where a problem with almost 150,000 variables is solved by NGBD within 80 minutes of solver time.

18.
Numerous optimization methods have been proposed for the solution of unconstrained optimization problems, such as mathematical programming methods, stochastic global optimization approaches, and metaheuristics. In this paper, a metaheuristic algorithm called Modified Shuffled Complex Evolution (MSCE) is proposed, in which an adaptation of the downhill simplex search strategy is combined with the differential evolution method. The efficiency of the new method is analyzed in terms of mean performance and computational time, in comparison with a genetic algorithm using floating-point representation (GAF) and the classical shuffled complex evolution (SCE-UA) algorithm, on six benchmark optimization functions. Simulation results and the comparisons with SCE-UA and GAF indicate that MSCE improves the search performance on five of the six tested benchmark functions.
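The sketch below is a simplified stand-in for the MSCE combination: one DE/rand/1/bin generation followed by a short downhill-simplex (Nelder-Mead) polish of the best member. Population size, control parameters and the Rosenbrock test function are illustrative assumptions, and the shuffled-complex partitioning of SCE-UA is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def de_simplex_step(f, pop, F=0.7, CR=0.9, rng=None):
    """One generation: DE/rand/1/bin mutation-crossover on the population,
    then a short downhill-simplex polish of the current best member."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    fitness = np.apply_along_axis(f, 1, pop)
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                       # differential mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                  # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if f(trial) < fitness[i]:                      # greedy selection
            new_pop[i] = trial
    best = new_pop[np.argmin(np.apply_along_axis(f, 1, new_pop))]
    res = minimize(f, best, method="Nelder-Mead",
                   options={"maxiter": 50, "xatol": 1e-6, "fatol": 1e-6})
    new_pop[np.argmax(np.apply_along_axis(f, 1, new_pop))] = res.x   # replace worst
    return new_pop

# Assumed benchmark function (Rosenbrock), not necessarily one of the paper's six.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rng = np.random.default_rng(3)
pop = rng.uniform(-2, 2, size=(20, 2))
for _ in range(30):
    pop = de_simplex_step(rosen, pop, rng=rng)
print("best solution:", pop[np.argmin(np.apply_along_axis(rosen, 1, pop))])
```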

19.
We introduce a novel global optimization method called Continuous GRASP (C-GRASP) which extends Feo and Resende’s greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is simple to implement, is widely applicable, and does not make use of derivative information, thus making it a well-suited approach for solving global optimization problems. We illustrate the effectiveness of the procedure on a set of standard test problems as well as two hard global optimization problems.
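A heavily simplified, hedged sketch in the spirit of C-GRASP: a greedy randomized construction over a grid using a restricted candidate list (RCL), followed by a grid-based local search. The grid step, RCL fraction and test function are assumptions, and the full algorithm's adaptive grid refinement is omitted.

```python
import numpy as np

def cgrasp_like(f, lo, hi, h=0.1, alpha=0.3, n_starts=20, seed=0):
    """Simplified C-GRASP-style loop: greedy randomized construction on a grid
    of step h, then a grid-based local search, repeated from random starts."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = len(lo)
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        # Construction: improve coordinates in random order, choosing randomly
        # among the alpha-fraction best grid values (the restricted candidate list).
        for j in rng.permutation(d):
            grid = np.arange(lo[j], hi[j] + h, h)
            vals = np.array([f(np.where(np.arange(d) == j, g, x)) for g in grid])
            rcl = grid[np.argsort(vals)[: max(1, int(alpha * len(grid)))]]
            x[j] = rng.choice(rcl)
        # Local search: move to strictly better grid neighbors until none exists.
        improved = True
        while improved:
            improved = False
            for j in range(d):
                for step in (-h, h):
                    y = x.copy()
                    y[j] = np.clip(y[j] + step, lo[j], hi[j])
                    if f(y) < f(x):
                        x, improved = y, True
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Assumed simple test function (illustration only).
sphere_sin = lambda x: np.sum(x ** 2) + np.sin(5 * x[0])
print(cgrasp_like(sphere_sin, lo=[-2, -2], hi=[2, 2]))
```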

20.
The paper considers the solution of linear programming problems with p-order conic constraints that are related to a certain class of stochastic optimization models with risk objectives or constraints. The proposed approach is based on the construction of polyhedral approximations for p-order cones, and then on invoking a Benders decomposition scheme that allows for efficient solution of the approximating problems. The conducted case study of portfolio optimization with p-order conic constraints demonstrates that the developed computational techniques compare favorably against a number of benchmark methods, including second-order conic programming methods.
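As a rough illustration of approximating a p-order cone polyhedrally, the sketch below uses subgradient (cutting-plane) linearizations inside a sequence of LPs solved with scipy's linprog. This is a simple outer approximation for intuition only, not the paper's specific polyhedral construction or its Benders scheme, and the toy instance is assumed.

```python
import numpy as np
from scipy.optimize import linprog

def solve_pcone_lp(c, t, p=3.0, bounds=(-2.0, 2.0), tol=1e-4, max_cuts=500):
    """Minimize c @ x subject to ||x||_p <= t via a cutting-plane polyhedral
    approximation: solve LP relaxations and add a subgradient cut whenever the
    LP solution violates the p-order cone constraint."""
    n = len(c)
    A, b = [], []
    x = np.zeros(n)
    for _ in range(max_cuts):
        res = linprog(c, A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[bounds] * n, method="highs")
        x = res.x
        norm = np.sum(np.abs(x) ** p) ** (1.0 / p)
        if norm <= t + tol:                      # LP point satisfies the cone: done
            break
        # Subgradient of ||.||_p at x; by convexity the linearization gives a
        # valid cut  g @ y <= t + g @ x - ||x||_p  for every feasible y.
        g = np.sign(x) * np.abs(x) ** (p - 1) / norm ** (p - 1)
        A.append(g)
        b.append(t + g @ x - norm)
    return x, c @ x

# Assumed toy instance (illustration only); the paper applies polyhedral
# approximation inside a Benders scheme for portfolio problems.
c = np.array([-1.0, -2.0, 0.5])
x, val = solve_pcone_lp(c, t=1.0, p=3.0)
print("x:", np.round(x, 4), "objective:", round(val, 4))
print("||x||_3 =", np.sum(np.abs(x) ** 3) ** (1 / 3))
```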
