Similar Documents
20 similar records found.
1.
The traditional numerical analysis considers optimization algorithms which guarantee some accuracy for all functions to be optimized. This includes the exact algorithms. Limiting the maximal error requires a computational effort that in many cases increases exponentially with the size of the problem (Horst and Pardalos, 1995, Handbook of Global Optimization, Kluwer). That limits practical applications of the worst-case analysis. An alternative is the average-case analysis, where the average error is made as small as possible (Calvin and Glynn, 1997, J. Appl. Prob., 32: 157). The average is taken over a set of functions to be optimized. The average-case analysis is called the Bayesian Approach (BA) (Diaconis, 1988, Statistical Decision Theory and Related Topics, Springer; Mockus and Mockus, 1987, Theory of Optimal Decisions, Nauk, Lithuania). Application of BA to the optimization of heuristics is called the Bayesian Heuristic Approach (BHA) (Mockus, 2000, A Set of Examples of Global and Discrete Optimization, Kluwer). In this paper a short presentation of the basic ideas of BHA (described in detail in Mockus (1989), Bayesian Approach to Global Optimization, Kluwer, and Mockus (2000), A Set of Examples of Global and Discrete Optimization, Kluwer) is given using the knapsack problem as an example. The application potential is illustrated by a school scheduling example. In addition, a new heuristic algorithm for solving a bimatrix game problem is investigated. The results are applied in solving real-life optimization problems and also as examples for distance graduate-level studies of the theory of games and markets in the Internet environment.
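As a hedged illustration of the BHA idea of tuning a randomized heuristic by its average result, the sketch below (not the authors' code; the item data, the (value/weight)^x scoring rule, and the use of plain random search in place of a Bayesian global optimizer are all assumptions) applies a greedy randomized rule to a toy 0/1 knapsack instance and tunes its single parameter on the average profit over repeated runs.

```python
# Illustrative sketch (not the paper's implementation) of the Bayesian Heuristic Approach idea:
# optimize the parameter of a randomized greedy knapsack heuristic with respect to its AVERAGE result.
import random

values   = [60, 100, 120, 30, 90]      # made-up example data
weights  = [10, 20, 30, 5, 25]
capacity = 50

def randomized_greedy(x, rng):
    """One heuristic run: pick items with probability proportional to (value/weight)**x."""
    remaining = list(range(len(values)))
    cap, total = capacity, 0
    while remaining:
        feasible = [i for i in remaining if weights[i] <= cap]
        if not feasible:
            break
        scores = [(values[i] / weights[i]) ** x for i in feasible]
        i = rng.choices(feasible, weights=scores, k=1)[0]
        total += values[i]
        cap -= weights[i]
        remaining.remove(i)
    return total

def average_profit(x, runs=200, seed=0):
    """BHA tunes the heuristic parameter with respect to the average result over many runs."""
    rng = random.Random(seed)
    return sum(randomized_greedy(x, rng) for _ in range(runs)) / runs

# Stand-in for the Bayesian global optimizer: plain random search over the parameter x.
rng = random.Random(1)
best_x = max((rng.uniform(0.0, 5.0) for _ in range(50)), key=average_profit)
print(best_x, average_profit(best_x))
```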

2.
Human Learning Optimization (HLO) is a simple but efficient meta-heuristic algorithm in which three learning operators, i.e. the random learning operator, the individual learning operator, and the social learning operator, are developed to efficiently search for the optimal solution by imitating the learning mechanisms of human beings. However, HLO assumes that all individuals possess the same learning ability, which is not true in a real human population: IQ scores, one of the most important indices of human learning ability, follow a Gaussian distribution and increase with the development of society and technology. Inspired by this fact, this paper proposes a Diverse Human Learning Optimization algorithm (DHLO), into which a Gaussian distribution and a dynamic adjusting strategy are introduced. By adopting a set of Gaussian-distributed parameter values instead of a constant to diversify the learning abilities of DHLO, the robustness of the algorithm is strengthened. In addition, by cooperating with the dynamic updating operation, DHLO can adjust to better parameter values and consequently enhance the global search ability of the algorithm. Finally, DHLO is applied to the CEC05 benchmark functions as well as knapsack problems, and its performance is compared with the standard HLO as well as eight other meta-heuristics, i.e. Binary Differential Evolution, the Simplified Binary Artificial Fish Swarm Algorithm, Adaptive Binary Harmony Search, Binary Gravitational Search Algorithms, Binary Bat Algorithms, the Binary Artificial Bee Colony, Bi-Velocity Discrete Particle Swarm Optimization, and Modified Binary Particle Swarm Optimization. The experimental results show that the presented DHLO outperforms the other algorithms in terms of search accuracy and scalability.
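The sketch below is a minimal, hedged illustration of the HLO learning operators and of the DHLO idea of drawing each individual's control parameter from a Gaussian distribution; the toy one-max objective, the parameter names pr and pi, and their values are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch (not the authors' code) of the HLO update on a binary one-max objective, with
# the DHLO twist of giving each individual its own Gaussian-distributed learning threshold.
import random

def fitness(bits):                      # toy objective: number of ones
    return sum(bits)

def hlo(pop_size=20, n_bits=30, iters=200, pr=0.05, pi=0.45, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    ikd = [p[:] for p in pop]                        # individual knowledge (personal best)
    skd = max(pop, key=fitness)[:]                   # social knowledge (global best)
    # DHLO-style diversity: per-individual learning threshold drawn from a Gaussian
    pi_k = [min(0.95, max(pr, rng.gauss(pi, 0.1))) for _ in range(pop_size)]
    for _ in range(iters):
        for k in range(pop_size):
            new = []
            for j in range(n_bits):
                r = rng.random()
                if r < pr:                           # random learning operator
                    new.append(rng.randint(0, 1))
                elif r < pi_k[k]:                    # individual learning operator
                    new.append(ikd[k][j])
                else:                                # social learning operator
                    new.append(skd[j])
            if fitness(new) >= fitness(ikd[k]):
                ikd[k] = new[:]
            if fitness(new) > fitness(skd):
                skd = new[:]
            pop[k] = new
    return skd, fitness(skd)

print(hlo()[1])   # should reach (or approach) the optimum value 30
```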

3.
This paper presents a new optimization algorithm called GHS + LEM, which is based on the Global-best Harmony Search algorithm (GHS) and on techniques from learnable evolution models (LEM) to improve the convergence and accuracy of the algorithm. The performance of the algorithm is evaluated on fifteen optimization functions commonly used by the optimization community. In addition, the results obtained are compared against the original Harmony Search algorithm, the Improved Harmony Search algorithm, and the Global-best Harmony Search algorithm. The assessment shows that the proposed algorithm (GHS + LEM) improves the accuracy of the results relative to the other options, producing better results in most situations, particularly in problems with high dimensionality, where it offers faster convergence with fewer iterations.
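For readers unfamiliar with the underlying method, the following hedged sketch shows a plain Global-best Harmony Search loop (not the GHS + LEM algorithm itself, and not the paper's code) minimizing the sphere function; the HMS, HMCR and PAR values are common defaults assumed for illustration.

```python
# Hedged sketch of the Global-best Harmony Search step that GHS + LEM builds on.
import random

def sphere(x):
    return sum(v * v for v in x)

def ghs(dim=10, hms=10, hmcr=0.9, par=0.3, iters=5000, lo=-10.0, hi=10.0, seed=0):
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]   # harmony memory
    for _ in range(iters):
        best = min(hm, key=sphere)
        new = []
        for j in range(dim):
            if rng.random() < hmcr:                   # memory consideration
                v = hm[rng.randrange(hms)][j]
                if rng.random() < par:                # GHS pitch adjustment: copy from the best harmony
                    v = best[rng.randrange(dim)]
                new.append(v)
            else:                                     # random selection
                new.append(rng.uniform(lo, hi))
        worst = max(range(hms), key=lambda i: sphere(hm[i]))
        if sphere(new) < sphere(hm[worst]):           # replace the worst harmony if improved
            hm[worst] = new
    return min(hm, key=sphere)

print(sphere(ghs()))
```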

4.
A Modified Integral–Level-Set Method for Finding Constrained Global Minima
For constrained global optimization problems, building on Chew and Zheng's Integral Global Optimization and on Wu Donghua et al.'s work on the convergence of an implementable algorithm for a modified integral–level-set method for finding global minima, this paper presents a modified integral–level-set method for constrained global minimization. It retains the two features of the modified integral–level-set method: 1) at each step a new function is constructed that has the same global minimum as the original objective function; 2) it avoids the difficulty of Zheng Quan's algorithm, in which the level set is in general hard to determine. An implementable version of the algorithm is given, and its convergence is proved.
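A hedged sketch of the mean-value/level-set iteration that this family of methods builds on is given below: the level value is repeatedly replaced by the average of the objective over its current level set, with the level set approximated by Monte Carlo sampling rather than computed exactly and a simple feasibility filter standing in for the constraint handling described in the paper. The objective, the box, and the constraint are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (assumptions, not the paper's algorithm) of the mean-value iteration behind
# integral global optimization: c_{k+1} = mean of f over the level set {x : f(x) <= c_k}.
import random

def f(x):                                   # example objective with several local minima
    return (x[0] ** 2 - 1) ** 2 + 0.1 * x[1] ** 2

def constraint_ok(x):                       # example constraint x0 + x1 >= -1
    return x[0] + x[1] >= -1.0

def level_set_iteration(box=((-2, 2), (-2, 2)), n_samples=20000, iters=20, seed=0):
    rng = random.Random(seed)
    def sample():
        return [rng.uniform(a, b) for a, b in box]
    c = max(f(sample()) for _ in range(1000))       # start with a high level value
    for _ in range(iters):
        pts = [x for x in (sample() for _ in range(n_samples))
               if constraint_ok(x) and f(x) <= c]   # Monte Carlo approximation of the level set
        if not pts:
            break
        c = sum(f(x) for x in pts) / len(pts)       # mean value of f over the level set
    return c

print(level_set_iteration())                         # decreases toward the constrained minimum value
```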

5.
Journal of Global Optimization - We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and...

6.
Journal of Global Optimization - In this paper, we propose a novel algorithm that is based on quadratic-piecewise-linear approximations of DC functions to solve nonnegative sparsity-constrained...

7.
8.
Global optimisation of unknown noisy functions is a daunting task that appears in domains ranging from games to control problems to meta-parameter optimisation for machine learning. We show how to incorporate heuristics into Stochastic Simultaneous Optimistic Optimization (STOSOO), a global optimisation algorithm that places very weak requirements on the function. In our case, the heuristics come in the form of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The new algorithm, termed Guided STOSOO (STOSOO-G), combines the ability of CMA-ES for fast local convergence (due to the algorithm following the “natural” gradient) with the global optimisation abilities of STOSOO. We compare all three algorithms on the “harder” parts of the Comparing Continuous Optimisers Black-Box Optimization Benchmarking suite, which provides a default set of functions for testing. We show that our approach keeps the best of both worlds, i.e. the almost optimal exploration/exploitation of STOSOO with the local optimisation strength of CMA-ES.

9.
The subproblem of the Cutting Angle Method of global optimization, for problems of minimizing increasing functions that are positively homogeneous of degree one, is proved to be NP-complete. For the proof of this fact we formulate another problem, which we call the “Dominating Subset with Minimal Weight” problem; this problem is also NP-complete. An O(n²)-time algorithm is presented for the approximate solution of the Dominating Subset with Minimal Weight problem. This problem can be expressed as a kind of assignment problem in which it is allowed to assign multiple tasks to a single processor. Experimental analysis of the algorithm is performed using a program implemented in ANSI C. The results of the analysis show the efficiency of the proposed algorithm. Mathematics Subject Classification (2000): 65K05, 90C27, 68Q25

10.
We introduce a master–worker framework for parallel global optimization of computationally expensive functions using response surface models. In particular, we parallelize two radial basis function (RBF) methods for global optimization, namely, the RBF method by Gutmann [Gutmann, H.M., 2001a. A radial basis function method for global optimization. Journal of Global Optimization 19(3), 201–227] (Gutmann-RBF) and the RBF method by Regis and Shoemaker [Regis, R.G., Shoemaker, C.A., 2005. Constrained global optimization of expensive black box functions using radial basis functions. Journal of Global Optimization 31, 153–171] (CORS-RBF). We modify these algorithms so that they can generate multiple points for simultaneous evaluation in parallel. We compare the performance of the two parallel RBF methods with a parallel multistart derivative-based algorithm, a parallel multistart derivative-free trust-region algorithm, and a parallel evolutionary algorithm on eleven test problems and on a 6-dimensional groundwater bioremediation application. The results indicate that the two parallel RBF algorithms are generally better than the other three alternatives on most of the test problems. Moreover, the two parallel RBF algorithms have comparable performance on the test problems considered. Finally, we report good speedups for both parallel RBF algorithms when using a small number of processors.
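The following hedged sketch illustrates the response-surface idea these methods share: fit a cubic RBF interpolant to the points evaluated so far and propose the candidate that minimizes the surrogate while staying away from previous evaluations. It is a serial toy version under assumed settings (toy objective, candidate count, distance threshold), not Regis and Shoemaker's parallel implementation.

```python
# Minimal serial sketch of an RBF surrogate loop (assumptions, not the CORS-RBF/Gutmann-RBF code).
import numpy as np

def expensive_f(x):                                   # stand-in for the expensive black box
    return np.sum((x - 0.3) ** 2) + np.sin(5 * np.sum(x))

def fit_cubic_rbf(X, y):
    """Interpolant s(x) = sum_i w_i * ||x - x_i||^3 (no polynomial tail, for brevity)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    w = np.linalg.solve(d ** 3 + 1e-10 * np.eye(len(X)), y)
    return lambda x: float(np.dot(w, np.linalg.norm(X - x, axis=1) ** 3))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 2))                    # initial design
y = np.array([expensive_f(x) for x in X])

for _ in range(20):                                   # sequential version of the evaluation loop
    s = fit_cubic_rbf(X, y)
    cand = rng.uniform(0, 1, size=(500, 2))           # random candidate points
    dist = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2), axis=1)
    cand = cand[dist > 0.05]                          # keep candidates away from evaluated points
    x_next = min(cand, key=s)                         # minimize the surrogate
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_f(x_next))

print(X[np.argmin(y)], y.min())
```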

11.
The algorithm known as Pure Adaptive Search is a global optimisation ideal with desirable complexity. In this paper we temper it to a framework we term Somewhat Adaptive Search. This retains the desirable complexity but allows scope for a practical realisation. We introduce a new algorithm, termed Pure Localisation Search, which attempts to reach this practical ideal. For a certain class of one-variable functions the gap is bridged.

12.
A Numerical Comparison of Some Modified Controlled Random Search Algorithms
In this paper we propose a new version of the Controlled Random Search (CRS) algorithm of Price. The new algorithm has been tested on thirteen global optimization test problems. Numerical experiments indicate that the resulting algorithm performs considerably better than the earlier versions of the CRS algorithms. The algorithm, therefore, could offer a reasonable alternative to many currently available stochastic algorithms, especially for problems requiring direct-search-type methods. Also a classification of the CRS algorithms is made based on global technique – local technique, and the relative performance of the classes is numerically explored.
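As background, the sketch below shows a basic Controlled Random Search step in the spirit of Price's original algorithm (not the modified variants compared in the paper): reflect a randomly chosen stored point through the centroid of n other stored points and replace the current worst point when the trial improves on it. The Rosenbrock objective, population size, and iteration budget are assumptions for illustration.

```python
# Hedged sketch of a basic CRS reflection step (not the paper's modified algorithms).
import random

def f(x):                                            # example objective: 2-D Rosenbrock
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def crs(n=2, pop_size=50, iters=20000, lo=-2.0, hi=2.0, seed=0):
    rng = random.Random(seed)
    pts = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(iters):
        sample = rng.sample(pts, n + 1)
        centroid = [sum(p[j] for p in sample[:-1]) / n for j in range(n)]
        trial = [2 * centroid[j] - sample[-1][j] for j in range(n)]   # reflection step
        if all(lo <= v <= hi for v in trial):
            worst = max(range(pop_size), key=lambda i: f(pts[i]))
            if f(trial) < f(pts[worst]):                              # replace the worst stored point
                pts[worst] = trial
    return min(pts, key=f)

best = crs()
print(best, f(best))
```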

13.
A new optimization technique called the Grenade Explosion Method (GEM) is introduced and its underlying ideas, including the concept of the Optimal Search Direction (OSD), are elaborated. The applicability and efficiency of the technique are demonstrated using standard benchmark functions. Comparison of the results with those of other, widely used, evolutionary algorithms shows that the proposed algorithm outperforms its rivals both in success rate and in rate of convergence. The method is also shown to be capable of finding most, or even all, optima of functions having multiple global optima. Moreover, it is shown that the performance of GEM is invariant under shifting and scaling of the search space and objective function.

14.
The Probabilistic Traveling Salesman Problem is a variation of the classic traveling salesman problem and one of the most significant stochastic routing problems. In the probabilistic traveling salesman problem only a subset of potential customers needs to be visited on any given instance of the problem, and the number of customers to be visited each time is a random variable. In this paper, a variant of the well-known Greedy Randomized Adaptive Search Procedure (GRASP), Expanding Neighborhood Search–GRASP, is proposed for the solution of the probabilistic traveling salesman problem. Expanding Neighborhood Search–GRASP has proved to be a very efficient algorithm for the solution of the traveling salesman problem. The proposed algorithm is tested on numerous benchmark problems from TSPLIB with very satisfactory results. Comparisons with the classic GRASP algorithm and with a Tabu Search algorithm are also presented. In addition, a comparison is performed with the results of a number of implementations of the Ant Colony Optimization algorithm from the literature, and in six out of ten cases the proposed algorithm gives a new best solution.
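The sketch below is a hedged illustration of the plain GRASP construction/local-search loop on a deterministic TSP instance; the probabilistic-TSP objective and the Expanding Neighborhood Search strategy of the paper are not reproduced, and the restricted-candidate-list size and 2-opt local search are illustrative choices.

```python
# Hedged sketch of a basic GRASP for the TSP (not the paper's ENS-GRASP for the probabilistic TSP).
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]]) for i in range(len(tour)))

def grasp_tsp(pts, iters=50, rcl_size=3, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        # Greedy randomized construction: pick the next city from a restricted candidate list.
        unvisited = set(range(1, len(pts)))
        tour = [0]
        while unvisited:
            last = tour[-1]
            rcl = sorted(unvisited, key=lambda c: math.dist(pts[last], pts[c]))[:rcl_size]
            c = rng.choice(rcl)
            tour.append(c)
            unvisited.remove(c)
        # Local search: first-improvement 2-opt.
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                    if tour_length(cand, pts) < tour_length(tour, pts):
                        tour, improved = cand, True
        if best is None or tour_length(tour, pts) < tour_length(best, pts):
            best = tour
    return best

cities = [(random.random(), random.random()) for _ in range(15)]
print(tour_length(grasp_tsp(cities), cities))
```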

15.
This paper presents the surrogate model based algorithm SO-I for solving purely integer optimization problems that have computationally expensive black-box objective functions and that may have computationally expensive constraints. The algorithm was developed for solving global optimization problems, meaning that the relaxed optimization problems have many local optima. However, the method is also shown to perform well on many local optimization problems, and problems with linear objective functions. The performance of SO-I, a genetic algorithm, Nonsmooth Optimization by Mesh Adaptive Direct Search (NOMAD), SO-MI (Müller et al. in Comput Oper Res 40(5):1383–1400, 2013), variable neighborhood search, and a version of SO-I that only uses a local search has been compared on 17 test problems from the literature, and on eight realizations of two application problems. One application problem relates to hydropower generation, and the other one to throughput maximization. The numerical results show that SO-I finds good solutions most efficiently. Moreover, as opposed to SO-MI, SO-I is able to find feasible points by employing a first optimization phase that aims at minimizing a constraint violation function. A feasible user-supplied point is not necessary.

16.
In this paper, we show the functional similarities between meta-heuristics and aspects of the science of life (biology): (a) meta-heuristics based on gene transfer: Genetic Algorithms (the natural evolution of genes in an organic population) and the Transgenic Algorithm (the transfer of genetic material to a cell that is not a descendant); (b) meta-heuristics based on interactions among individual insects: Ant Colony Optimization (interactions among individual insects in ant colonies), the Firefly Algorithm (fireflies of the family Lampyridae), the Marriage in Honey Bees Optimization algorithm (the reproduction process of honey bees), and the Artificial Bee Colony algorithm (the food-collection process of honey bees); and (c) meta-heuristics based on biological aspects of living beings: the Tabu Search algorithm (classical conditioning in living beings), the Simulated Annealing algorithm (temperature control of spiders), the Particle Swarm Optimization algorithm (the social behavior and movement dynamics of birds and fish), and Artificial Immune Systems (the immunological mechanism of vertebrates).

17.
In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces minimizers entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on stepwise uncertainty reduction, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the Efficient Global Optimization (EGO) algorithm. An empirical comparison is carried out between our criterion and expected improvement, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
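For reference, the expected-improvement criterion that the paper compares against has a simple closed form; the hedged sketch below evaluates it for a candidate point given a Gaussian posterior mean and standard deviation (the minimizer-entropy criterion of IAGO itself requires conditional simulations of the Gaussian process and is not reproduced here; the example numbers are made up).

```python
# Expected improvement for minimization, given the Kriging/GP posterior at a candidate point.
import math

def expected_improvement(mu, sigma, f_min):
    """EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)        # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))             # standard normal cdf
    return (f_min - mu) * Phi + sigma * phi

# Example: a candidate with larger predictive uncertainty can score higher
# even though its predicted mean is worse.
print(expected_improvement(mu=1.0, sigma=0.1, f_min=1.05))
print(expected_improvement(mu=1.2, sigma=0.8, f_min=1.05))
```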

18.
The complexity status of the minimum dilation triangulation (MDT) problem for a general point set is unknown. Therefore, we focus on the development of approximate algorithms to find high-quality triangulations of minimum dilation. As an initial approach, we design a greedy strategy able to obtain approximate solutions to the optimal ones in a simple way. We also propose an operator to generate the neighborhood, which is used in different algorithms: Local Search, Iterated Local Search, and Simulated Annealing. In addition, we present an algorithm called Random Local Search in which good and bad solutions are accepted using the previously mentioned operator. For the experimental study we have created a set of problem instances, since no benchmarks for these problems were found in the literature. We use the sequential parameter optimization toolbox for tuning the parameters of the SA algorithm. We compare our results with those obtained by the OV-MDT algorithm, which uses the obstacle value to sort the edges in the constructive process; this is the only available algorithm found in the literature. Through experimental evaluation and statistical analysis, we assess the performance of the proposed algorithms using this operator.

19.
In this paper we propose an algorithm for the minimization of potential energy functions. The new algorithm is based on the differential evolution algorithm of Storn and Price (Journal of Global Optimization, vol. 11, pp. 341–359, 1997). The algorithm is tested on two different potential energy functions: the first is the Lennard-Jones energy function and the second is the many-body potential energy function of Tersoff (Physical Review B, vol. 37, pp. 6991–7000, 1988; vol. 38, pp. 9902–9905, 1988). The first problem is a pair potential, and the second is a semi-empirical many-body potential energy function considered for silicon–silicon atomic interactions. The minimum binding energies of clusters of up to 30 atoms are reported.
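A hedged sketch of the underlying scheme is given below: the DE/rand/1/bin variant of Storn and Price applied to the Lennard-Jones energy of a three-atom cluster. The population size, F, CR, and iteration budget are common defaults assumed for illustration, not the paper's settings.

```python
# Minimal sketch (not the paper's implementation) of DE/rand/1/bin minimizing the
# Lennard-Jones energy of a 3-atom cluster (9 coordinates, reduced units).
import random

def lj_energy(x):
    """Lennard-Jones potential summed over atom pairs; x holds flattened 3-D coordinates."""
    atoms = [x[i:i + 3] for i in range(0, len(x), 3)]
    e = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            r2 = sum((a - b) ** 2 for a, b in zip(atoms[i], atoms[j]))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def differential_evolution(f, dim, pop_size=30, F=0.5, CR=0.9, iters=2000, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            jrand = rng.randrange(dim)
            trial = [a[j] + F * (b[j] - c[j]) if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):           # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lj_energy, dim=9)
print(lj_energy(best))   # the global minimum for 3 Lennard-Jones atoms is -3 in reduced units
```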

20.
The Maximum Clique Problem (MCP) is regarded here as the maximization of an indefinite quadratic form over the canonical simplex. For solving the MCP, an algorithm based upon Global Optimality Conditions (GOC) is applied. Furthermore, each step of the algorithm is analytically investigated and tested. The computational results for the proposed algorithm are compared with those of other global search approaches.
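As a hedged illustration of the quadratic-form view (not the GOC-based algorithm of the paper), the sketch below uses the Motzkin–Straus formulation: the maximum of x^T A x over the simplex equals 1 − 1/ω(G), and replicator dynamics provides a simple local ascent over the simplex; the small example graph is an assumption for demonstration.

```python
# Motzkin-Straus view of maximum clique with replicator-dynamics ascent (illustrative only).
import numpy as np

# Adjacency matrix of a small example graph whose maximum clique is {0, 1, 2} (size 3).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

x = np.full(len(A), 1.0 / len(A))              # start at the barycentre of the simplex
for _ in range(1000):
    Ax = A @ x
    x = x * Ax / (x @ Ax)                      # replicator update keeps x on the simplex

quad = x @ A @ x                               # approaches 1 - 1/omega(G)
clique_size_estimate = round(1.0 / (1.0 - quad))
support = np.where(x > 1e-6)[0]
print(quad, clique_size_estimate, support)     # expected: quad ~ 2/3, estimate 3, support {0, 1, 2}
```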
