Similar Documents
20 similar documents found (search time: 390 ms).
1.
Global optimization problems are known to be challenging, and it is difficult to design an algorithm that performs uniformly well across all of them. Stochastic optimization algorithms, inspired by natural phenomena such as metal annealing and the social behavior of animals, are well suited to these problems. In this paper, subset simulation, originally a reliability analysis method, is modified to solve unconstrained global optimization problems by introducing artificial probabilistic assumptions on the design variables. The basic idea is to treat a global optimization problem in the context of reliability analysis. By randomizing the design variables, the objective function maps the multi-dimensional design variable space into a one-dimensional random variable. Although the objective function itself may have many local optima, its cumulative distribution function has only one maximum at its tail, since it is a monotonic, non-decreasing, right-continuous function. It turns out that searching for the optimal solution(s) of a global optimization problem is equivalent to exploring the tail distribution in a reliability problem. The proposed algorithm is illustrated on two groups of benchmark test problems: the first is used for a parametric study and the second focuses on statistical performance.
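The idea of entry 1 can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' implementation: proper subset simulation generates the conditional samples by MCMC, which is replaced here by simple random-walk resampling around the survivors, and the benchmark `rastrigin`, the sample size `n`, the tail fraction `p0`, and the move-size schedule are all assumptions.

```python
import numpy as np

def rastrigin(x):
    # A standard multimodal benchmark (assumed here; any objective works).
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def subset_opt(f, dim, n=500, p0=0.1, levels=20, lo=-5.12, hi=5.12, seed=0):
    rng = np.random.default_rng(seed)
    # Randomize the design variables: a uniform "prior" on the search box
    # turns the objective value into a one-dimensional random variable.
    x = rng.uniform(lo, hi, size=(n, dim))
    sigma = (hi - lo) / 4
    for _ in range(levels):
        y = f(x)
        survivors = x[np.argsort(y)[: int(p0 * n)]]   # the current "tail" level
        # Conditional resampling by random-walk moves around the survivors
        # (a crude stand-in for the MCMC step of true subset simulation).
        idx = rng.integers(0, len(survivors), size=n)
        x = np.clip(survivors[idx] + rng.normal(0, sigma, size=(n, dim)), lo, hi)
        sigma *= 0.8                                   # shrink moves as the tail narrows
    y = f(x)
    return x[np.argmin(y)], y.min()

x_best, f_best = subset_opt(rastrigin, dim=2)
print(x_best, f_best)
```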

2.
Combinatorial optimization problems have applications in a variety of fields in science and engineering. In the presence of data uncertainty, they lead to stochastic combinatorial optimization problems, which result in very large-scale combinatorial optimization problems. In this paper, we report on the solution of some of the largest stochastic combinatorial optimization problems, consisting of over a million binary variables. While the methodology is quite general, the specific application with which we conduct our experiments arises in stochastic server location problems. The main observation is that stochastic combinatorial optimization problems are composed of loosely coupled subsystems. By taking advantage of this loosely coupled structure, we show that decomposition-coordination methods provide highly effective algorithms and surpass the scalability of even the most efficiently implemented backtracking search algorithms.

3.
In this paper, the optimization of time-varying objective functions, known only through estimates, is considered. Recent research has defined algorithms for static optimization problems; based on one of these algorithms, we derive an optimization scheme for the time-varying case. In stochastic optimization problems, convergence of an algorithm to the optimum prevents the algorithm from adapting efficiently to changes in a time-varying objective function, so convergence cannot be required in a time-varying scenario. Rather, we require convergence to the optimum with high probability together with satisfactory dynamical behavior. Analytical and simulation results illustrate the performance of the proposed algorithm compared with other optimization techniques.

4.
We investigate the theoretical and numerical properties of two global optimization techniques for the solution of mixed complementarity problems. More precisely, using a standard semismooth Newton-type method as a basic solver for complementarity problems, we describe how the performance of this method can be improved by incorporating two well-known global optimization algorithms, namely a tunneling and a filled function method. These methods are tested and compared with each other on a couple of very difficult test examples.

5.
The Wiener process is a widely used statistical model for stochastic global optimization. One of the first optimization algorithms based on a statistical model, the so-called P-algorithm, relied on the Wiener process. Despite many advantages, this process does not give a realistic model for many optimization problems, particularly from the point of view of local behavior. In the present paper, a version of the P-algorithm is constructed based on a stochastic process with smooth sampling functions. It is shown that, in such a case, the algorithm has a better convergence rate than in the case of the Wiener process. A similar convergence rate is proved for a combination of the Wiener model-based P-algorithm with quadratic fit-based local search.
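For readers unfamiliar with the P-algorithm, a minimal one-dimensional sketch under the Wiener model follows. Conditioned on the observations, the Wiener process is a Brownian bridge on each interval, so the next evaluation point can be chosen to maximize the probability of improving on the incumbent by a margin `eps`. The noise scale `sigma2`, the margin `eps`, and the grid resolution are assumptions; the paper's smooth-process version would replace this Brownian-bridge posterior with one derived from a smoother model.

```python
import numpy as np
from scipy.stats import norm

def p_algorithm_step(xs, ys, sigma2=1.0, eps=0.1, grid=200):
    # Between adjacent observations, the conditioned Wiener process is a
    # Brownian bridge: at t in (x_l, x_r) its value is Gaussian with mean
    # given by linear interpolation and variance
    # sigma2 * (t - x_l) * (x_r - t) / (x_r - x_l).
    # The next evaluation point maximizes P(Y(t) < y_best - eps).
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    target = ys.min() - eps
    best_p, best_t = -1.0, None
    for xl, xr, yl, yr in zip(xs[:-1], xs[1:], ys[:-1], ys[1:]):
        t = np.linspace(xl, xr, grid)[1:-1]            # avoid zero-variance endpoints
        mean = yl + (yr - yl) * (t - xl) / (xr - xl)
        var = sigma2 * (t - xl) * (xr - t) / (xr - xl)
        p = norm.cdf((target - mean) / np.sqrt(var))
        i = np.argmax(p)
        if p[i] > best_p:
            best_p, best_t = p[i], t[i]
    return best_t

xs = np.array([0.0, 0.3, 0.7, 1.0])   # points evaluated so far
ys = np.array([1.0, 0.2, 0.5, 0.9])   # observed objective values
print(p_algorithm_step(xs, ys))       # next point to evaluate
```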

6.
We introduce a master–worker framework for parallel global optimization of computationally expensive functions using response surface models. In particular, we parallelize two radial basis function (RBF) methods for global optimization, namely, the RBF method by Gutmann [Gutmann, H.M., 2001a. A radial basis function method for global optimization. Journal of Global Optimization 19(3), 201–227] (Gutmann-RBF) and the RBF method by Regis and Shoemaker [Regis, R.G., Shoemaker, C.A., 2005. Constrained global optimization of expensive black box functions using radial basis functions. Journal of Global Optimization 31, 153–171] (CORS-RBF). We modify these algorithms so that they can generate multiple points for simultaneous evaluation in parallel. We compare the performance of the two parallel RBF methods with a parallel multistart derivative-based algorithm, a parallel multistart derivative-free trust-region algorithm, and a parallel evolutionary algorithm on eleven test problems and on a 6-dimensional groundwater bioremediation application. The results indicate that the two parallel RBF algorithms are generally better than the other three alternatives on most of the test problems. Moreover, the two parallel RBF algorithms have comparable performance on the test problems considered. Finally, we report good speedups for both parallel RBF algorithms when using a small number of processors.
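The master–worker pattern at the core of these parallel RBF methods reduces to evaluating a batch of surrogate-proposed points simultaneously. A hedged sketch using a process pool follows; the pool size, the stand-in objective, and the batch size are all assumptions.

```python
import numpy as np
from multiprocessing import Pool

def expensive_f(x):
    # Stand-in for a costly simulation; any picklable function works.
    return float(np.sum(x**2))

def evaluate_batch(points, workers=4):
    # Master-worker evaluation: the surrogate proposes several points per
    # iteration and the workers evaluate them simultaneously.
    with Pool(workers) as pool:
        return pool.map(expensive_f, points)

if __name__ == "__main__":
    batch = [np.random.rand(6) for _ in range(8)]   # e.g., 6-dimensional inputs
    print(evaluate_batch(batch))
```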

7.
Optimization, 2012, 61(3), 403–419.
In this article, the application of the electromagnetism-like method (EM) to constrained optimization problems is investigated. A number of penalty functions are tested with EM, and their merits and demerits are discussed; we also provide the motivation for such an investigation. Finally, we compare EM with two recent global optimization algorithms from the literature and show that EM is a suitable alternative to these methods and that it has a role to play in solving constrained global optimization problems.
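The abstract does not list the specific penalty functions tested, but the general transformation is standard: fold constraint violations into the objective so that an unconstrained method such as EM can be applied. A minimal sketch, assuming inequality constraints g(x) <= 0, equality constraints h(x) = 0, and a quadratic penalty with weight `mu`:

```python
import numpy as np

def quadratic_penalty(f, g, h, mu):
    # One standard penalty transformation (the specific penalties tested
    # with EM are not given in the abstract): violations of g(x) <= 0 and
    # h(x) = 0 are folded into the objective with weight mu.
    def penalized(x):
        gv, hv = np.asarray(g(x)), np.asarray(h(x))
        return f(x) + mu * (np.sum(np.maximum(0.0, gv) ** 2) + np.sum(hv**2))
    return penalized
```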

8.
We present a new strategy for the constrained global optimization of expensive black box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black box function, used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to be the one that minimizes the current response surface model subject to the given constraints and to additional constraints that the point be of some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of the distance constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to a point that may not even be a local minimizer of the black box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set, regardless of the response surface model used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems, and that they outperform the other algorithms on the nonlinearly constrained test problem.
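One CORS-style iteration can be sketched by replacing the constrained surrogate subproblem with candidate sampling: draw random candidates, discard those closer than the current distance threshold to any evaluated point, and take the candidate that minimizes the surrogate. The candidate count, the `betas` cycle, the array-valued bounds `lo`/`hi`, and the use of scipy's RBFInterpolator are all assumptions; the paper solves the subproblem exactly rather than over a candidate set.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

def cors_step(X, y, lo, hi, beta, n_cand=2000, rng=None):
    # One CORS-style iteration: minimize the RBF surrogate over random
    # candidates that keep a distance beta * (domain diameter) from all
    # previously evaluated points (X must hold enough points to fit the RBF).
    rng = rng or np.random.default_rng()
    surrogate = RBFInterpolator(X, y)
    cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
    dmin = cdist(cand, X).min(axis=1)
    ok = dmin >= beta * np.linalg.norm(hi - lo)
    if not ok.any():
        ok = dmin == dmin.max()        # fall back to the most remote candidate
    cand = cand[ok]
    return cand[np.argmin(surrogate(cand))]

# The distance requirement cycles from global search down to local search:
betas = [0.2, 0.1, 0.05, 0.02, 0.0]
```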

9.
Two common questions when one uses a stochastic global optimization algorithm, e.g., simulated annealing, are when to stop a single run of the algorithm, and whether to restart with a new run or terminate the entire algorithm. In this paper, we develop a stopping and restarting strategy that considers tradeoffs between the computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms, and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate here the practicality of DMSS by using it to govern the application of a simple local search heuristic on three test problems from the global optimization literature.
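The following sketch is not HASPLID or DMSS itself; it illustrates the kind of stop/restart rule the abstract studies, using a common heuristic (terminate once the incumbent best value has been re-found several times) as a proxy for the effort-versus-probability tradeoff. The local solver, tolerance, and repeat count `m_repeat` are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_with_stopping(f, dim, lo, hi, max_runs=100, m_repeat=5, seed=0):
    # Restart local search from random points; terminate the whole algorithm
    # once the incumbent best value has been re-found m_repeat times.
    rng = np.random.default_rng(seed)
    best, hits = None, 0
    for _ in range(max_runs):
        x0 = rng.uniform(lo, hi, size=dim)
        res = minimize(f, x0, method="Nelder-Mead")
        if best is None or res.fun < best - 1e-8:
            best, hits = res.fun, 1          # new incumbent: reset the counter
        elif abs(res.fun - best) <= 1e-8:
            hits += 1                        # re-found the incumbent
        if hits >= m_repeat:
            break
    return best
```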

10.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. The results also apply to global optimization algorithms that combine local and global stochastic search strategies, as well as to those that combine deterministic and stochastic search strategies, which makes them applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice, and it provides convergence conditions for the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1, which in turn is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are proved.

11.
We describe global optimization problems from three different fields, representing many-body potentials in physical chemistry, optimal control of a chemical reactor, and fitting a statistical model to empirical data. Historical background for each of the problems, as well as the practical significance of the first two, is given. The problems are solved using eight recently developed stochastic global optimization algorithms representing controlled random search (4 algorithms), simulated annealing (2 algorithms), and clustering (2 algorithms). The results are discussed, and the importance of global optimization in each respective field is highlighted.

12.
This paper presents an augmented Lagrangian methodology with a stochastic population-based algorithm for solving nonlinear constrained global optimization problems. The method approximately solves a sequence of bound-constrained global optimization subproblems using a fish swarm intelligent algorithm. A stochastic convergence analysis of the fish swarm iterative process is included. Numerical results on a benchmark set of problems are shown, including a comparison with other stochastic-type algorithms.
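A hedged sketch of the augmented Lagrangian outer loop follows, with scipy's differential evolution standing in for the fish swarm algorithm as the solver of the bound-constrained subproblems. The multiplier update, penalty weight `mu`, and iteration count are textbook choices, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import differential_evolution

def aug_lagrangian(f, g, bounds, iters=10, mu=10.0, seed=0):
    # Outer loop of an augmented Lagrangian method for g(x) <= 0, with
    # differential evolution standing in for the fish swarm solver on the
    # bound-constrained subproblems.
    lam = np.zeros_like(g(np.array([b[0] for b in bounds])), dtype=float)
    x = None
    for _ in range(iters):
        def L(x):
            gv = g(x)
            # Rockafellar's augmented term for inequality constraints.
            return f(x) + np.sum(np.maximum(0.0, lam + mu * gv) ** 2 - lam**2) / (2 * mu)
        x = differential_evolution(L, bounds, seed=seed, tol=1e-8).x
        lam = np.maximum(0.0, lam + mu * g(x))   # multiplier update
    return x

f = lambda x: x[0]**2 + x[1]**2
g = lambda x: np.array([1.0 - x[0] - x[1]])      # feasible iff x0 + x1 >= 1
print(aug_lagrangian(f, g, bounds=[(-2, 2), (-2, 2)]))   # approx. (0.5, 0.5)
```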

13.
Efficient Global Optimization of Expensive Black-Box Functions
In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome.
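The figure of merit at the heart of EGO is expected improvement, which makes the exploit/explore balance described above explicit. A minimal sketch (the surrogate producing the Gaussian prediction, typically a kriging model, is assumed to exist elsewhere):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, s, f_min):
    # Expected improvement of a Gaussian prediction N(mu, s^2) over the
    # best value f_min observed so far. It is large where the surrogate
    # predicts low values (exploitation) or is very uncertain (exploration).
    s = np.maximum(s, 1e-12)               # guard against zero predicted error
    z = (f_min - mu) / s
    return (f_min - mu) * norm.cdf(z) + s * norm.pdf(z)
```

The next evaluation point is the maximizer of EI over the domain, and a natural stopping rule is to quit when the maximum EI drops below a small threshold.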

14.
The theoretical foundation of integral global optimization has become widely known and well accepted [4], [24], [25]. However, more effort is needed to demonstrate the effectiveness of integral global optimization algorithms. In this work we detail the implementation of the integral global minimization algorithms. We describe how the integral global optimization method handles nonconvex unconstrained or box-constrained, constrained, or discrete minimization problems. We illustrate the flexibility and efficiency of the integral global optimization method by presenting the performance of the algorithms on a collection of well-known test problems from the global optimization literature. We provide the software that solves these test problems and other minimization problems. The computational performance demonstrates that the integral global algorithms are not only extremely flexible and reliable but also very efficient. Research supported partially by an NSERC grant and a Mount St Vincent University research grant.

15.
Estimating the parameters of econometric functions (maximum likelihood functions or nonlinear least squares functions) is often a challenging global optimization problem. Determining the global optimum of these functions is necessary to understand economic behavior and to develop effective economic policies. These functions often have flat surfaces or surfaces characterized by many local optima. Classical deterministic optimization methods often do not yield successful results; for that reason, stochastic optimization methods are becoming widely used in econometrics. Selected stochastic methods are applied to two difficult econometric functions to determine whether they might be useful in estimating the parameters of these functions.
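As a concrete example of the kind of problem described, the sketch below fits a three-parameter exponential model by nonlinear least squares using differential evolution, one representative stochastic method. The model, synthetic data, and bounds are invented for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical example: fit y = a * exp(b * t) + c by nonlinear least
# squares, the kind of flat, multimodal surface the abstract mentions.
rng = np.random.default_rng(1)
t = np.linspace(0, 3, 40)
y_obs = 2.0 * np.exp(-1.3 * t) + 0.5 + rng.normal(0, 0.02, t.size)

def sse(theta):
    a, b, c = theta
    return np.sum((y_obs - (a * np.exp(b * t) + c)) ** 2)

res = differential_evolution(sse, bounds=[(-5, 5), (-5, 5), (-5, 5)], seed=1)
print(res.x)   # should recover roughly (2.0, -1.3, 0.5)
```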

16.
The degree of difficulty in solving a global optimization problem is in general dependent on the dimensionality of the problem and certain characteristics of the objective function. This paper discusses five of these characteristics and presents a strategy for function optimization called the shuffled complex evolution (SCE) method, which promises to be robust, effective, and efficient for a broad class of problems. The SCE method is based on a synthesis of four concepts that have proved successful for global optimization: (a) combination of probabilistic and deterministic approaches; (b) clustering; (c) systematic evolution of a complex of points spanning the space, in the direction of global improvement; and (d) competitive evolution. Two algorithms based on the SCE method are presented. These algorithms are tested by running 100 randomly initiated trials on eight test problems of differing difficulty. The performance of the two algorithms is compared to that of the controlled random search CRS2 method presented by Price (1983, 1987) and to a multistart algorithm based on the simplex method presented by Nelder and Mead (1965). This work was partially funded by the National Science Foundation, Grant ECE-86-10584, and by the National Weather Service, Grant NA85AA-H-HY088.
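A condensed sketch of the SCE skeleton follows: rank the population, deal it into complexes, evolve each complex by reflecting its worst point through the centroid of the others, and shuffle. The real SCE-UA evolves randomly chosen sub-simplexes with competitive selection, so this is a structural illustration only; all sizes and counts are assumptions.

```python
import numpy as np

def sce_sketch(f, dim, lo, hi, n_complex=4, m=10, n_shuffles=30, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(n_complex * m, dim))
    for _ in range(n_shuffles):
        vals = np.apply_along_axis(f, 1, pop)
        order = np.argsort(vals)
        pop, vals = pop[order], vals[order]
        for k in range(n_complex):
            # Deal the ranked points into complexes (the systematic "shuffle").
            idx = np.arange(k, n_complex * m, n_complex)
            cx, cv = pop[idx].copy(), vals[idx].copy()
            for _ in range(m):
                worst = np.argmax(cv)
                centroid = (cx.sum(axis=0) - cx[worst]) / (len(cx) - 1)
                trial = np.clip(2 * centroid - cx[worst], lo, hi)   # reflection step
                tv = f(trial)
                if tv >= cv[worst]:            # reflection failed: random replacement
                    trial = rng.uniform(lo, hi, size=dim)
                    tv = f(trial)
                cx[worst], cv[worst] = trial, tv
            pop[idx], vals[idx] = cx, cv
    return pop[np.argmin(np.apply_along_axis(f, 1, pop))]
```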

17.
This paper presents a new method for solving global optimization problems. We use a local technique based on the notion of discrete gradients for finding a cone of descent directions, and then we use a global cutting angle algorithm for finding the global minimum within the intersection of the cone and the feasible region. We present results of numerical experiments with well-known test problems and with the so-called cluster function. These results confirm that the proposed algorithm allows one to find a global minimizer, or at least a deep local minimizer, of a function with a huge number of shallow local minima.

18.
In this paper we propose a new class of test functions for unconstrained global optimization problems. The class depends on some parameters through which the difficulty of the test problems can be controlled. As a basis for future comparison, we propose a selected set of these functions, with increasing difficulty, and some computational experiments with two simple global optimization algorithms.
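The paper's function class is not reproduced here, but the following sketch shows the general recipe for a difficulty-controlled test function: a convex bowl plus oscillations whose frequency `k` and amplitude `amp` tune the number and depth of local minima, with the global minimizer known by construction. The construction is illustrative only.

```python
import numpy as np

def make_test_function(dim, k=5, amp=0.3, seed=0):
    # Illustrative parametrized multimodal test function (not the class
    # proposed in the paper): a convex bowl plus sinusoidal ripples. The
    # ripple term is nonnegative and zero at x_star, so the global minimum
    # is exactly f(x_star) = 0.
    rng = np.random.default_rng(seed)
    x_star = rng.uniform(-1, 1, size=dim)
    def f(x):
        d = np.asarray(x) - x_star
        return np.sum(d**2) + amp * np.sum(1 - np.cos(2 * np.pi * k * d))
    return f, x_star

f, x_star = make_test_function(dim=3, k=8, amp=0.5)   # harder: more, deeper ripples
print(f(x_star))   # 0.0 by construction
```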

19.
Solving a stochastic optimization problem often involves performing repeated noisy function evaluations at points encountered during the algorithm. Recently, a continuous optimization framework for executing a single observation per search point was shown to exhibit a martingale property so that associated estimation errors are guaranteed to converge to zero. We generalize this martingale single observation approach to problems with mixed discrete–continuous variables. We establish mild regularity conditions for this class of algorithms to converge to a global optimum.

20.
Continuous GRASP (C-GRASP) is a stochastic local search metaheuristic for finding cost-efficient solutions to continuous global optimization problems subject to box constraints (Hirsch et al., 2007). Like a greedy randomized adaptive search procedure (GRASP), C-GRASP is a multi-start procedure in which a starting solution for local improvement is constructed in a greedy randomized fashion. In this paper, we describe several improvements that speed up the original C-GRASP and make it more robust. We compare the new C-GRASP with the original version, as well as with other algorithms from the recent literature, on a set of benchmark multimodal test functions whose global minima are known. Hart's sequential stopping rule (1998) is implemented, and C-GRASP is shown to converge on all test problems.
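A compact sketch of the C-GRASP pattern follows: a greedy randomized construction phase (coordinate-wise grid search, choosing uniformly from a restricted candidate list of the best moves) followed by local improvement, repeated from multiple starts. Nelder–Mead stands in for the paper's local phase, and `h`, `alpha`, and the start count are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def cgrasp_sketch(f, lo, hi, n_starts=20, h=0.1, alpha=0.3, seed=0):
    rng = np.random.default_rng(seed)
    dim, best_x, best_f = len(lo), None, np.inf
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for j in rng.permutation(dim):            # construction phase
            grid = np.arange(lo[j], hi[j] + h, h)
            trials = np.tile(x, (len(grid), 1))
            trials[:, j] = grid
            vals = np.array([f(t) for t in trials])
            # Restricted candidate list: best alpha-fraction of grid moves.
            rcl = np.argsort(vals)[: max(1, int(alpha * len(grid)))]
            x = trials[rng.choice(rcl)]
        res = minimize(f, x, method="Nelder-Mead")  # local improvement phase
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# Example: cgrasp_sketch(lambda x: np.sum(x**2),
#                        lo=np.array([-2.0, -2.0]), hi=np.array([2.0, 2.0]))
```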
