Similar Literature
20 similar records retrieved.
1.
This paper presents a general approach that combines global search strategies with local search and attempts to find a global minimum of a real-valued function of n variables. It assumes that derivative information is unreliable; consequently, it deals with derivative-free algorithms, although derivative information can be easily incorporated. The paper presents a nonmonotone derivative-free algorithm and shows numerically that it may converge to a better minimum when started from a local, nonglobal minimum. This property is then combined with a random population to globalize the algorithm. Convergence to a zero-order stationary point is established for nonsmooth convex functions, and convergence to a first-order stationary point is established for strictly differentiable functions. Preliminary numerical results are encouraging. A Java implementation that can be run directly from the Web allows the interested reader to gain better insight into the performance of the algorithm on several standard functions. The general framework proposed here allows the user to incorporate variants of well-known global search strategies. Research done under the cooperation agreement between Universidade de Vigo and Universidad Simón Bolívar.
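A minimal Python sketch of the nonmonotone idea: a derivative-free coordinate search that accepts a trial point whenever it beats the worst of the last few accepted values, wrapped in a random multistart for globalization. The acceptance rule, memory length, and step-halving schedule are illustrative assumptions, not the authors' exact method.

```python
import random

def nonmonotone_coordinate_search(f, x0, step=1.0, memory=5,
                                  tol=1e-8, max_iter=10000):
    # Derivative-free coordinate search with a nonmonotone acceptance
    # rule: a trial is accepted if it beats the WORST of the last
    # `memory` accepted values, so iterates may temporarily go uphill
    # and escape shallow local minima.
    x = list(x0)
    history = [f(x)]
    best_x, best_f = list(x), history[0]
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < max(history):          # nonmonotone acceptance
                    x, improved = trial, True
                    history = (history + [ft])[-memory:]
                    if ft < best_f:
                        best_x, best_f = list(trial), ft
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                        # refine the mesh
    return best_x, best_f

def globalized(f, dim, n_starts=20, box=(-5.0, 5.0), seed=0):
    # Globalization by a random population of starting points.
    rng = random.Random(seed)
    starts = ([rng.uniform(*box) for _ in range(dim)] for _ in range(n_starts))
    return min((nonmonotone_coordinate_search(f, s) for s in starts),
               key=lambda r: r[1])

if __name__ == "__main__":
    rosenbrock = lambda v: sum(100 * (v[i + 1] - v[i] ** 2) ** 2
                               + (1 - v[i]) ** 2 for i in range(len(v) - 1))
    print(globalized(rosenbrock, 2))
```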

2.
This paper presents a framework for analyzing and comparing the sub-optimal performance of local search algorithms on hard discrete optimization problems. The β-acceptable solution probability is introduced, capturing both how effectively an algorithm has performed to date and how effectively it can be expected to perform in the future. Using this probability, necessary conditions for a local search algorithm to converge in probability to β-acceptable solutions are derived. To evaluate and compare the effectiveness of local search algorithms, two estimators for the expected number of iterations to visit a β-acceptable solution are obtained. Computational experiments are reported with simulated annealing and tabu search applied to four small traveling salesman problem instances, and with the Lin-Kernighan-Helsgaun algorithm applied to eight medium-to-large traveling salesman problem instances (all with known optimal solutions), to illustrate the application of these estimators.

3.
Any global minimization algorithm is made up of several local searches performed sequentially. In the classical multistart algorithm, the starting point for each new local search is selected uniformly at random in the region of interest. In the tunneling algorithm, such a starting point is required to have the same function value obtained by the last local minimization. We introduce the class of acceptance-rejection based algorithms in order to investigate intermediate procedures. A particular instance is to choose the new point at random, approximately according to a Boltzmann distribution whose temperature T is updated during the algorithm. As T → 0, this distribution peaks around the global minima of the cost function, producing a kind of random tunneling effect. The motivation for this approach comes from recent work on the simulated annealing approach to global optimization. The resulting algorithm has been tested on several examples proposed in the literature.
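A sketch of the acceptance-rejection step in Python, under illustrative assumptions (uniform proposals, geometric cooling, and a crude hill-climbing routine standing in for any local minimizer):

```python
import math, random

def crude_local_search(f, x, rng, step=0.5, iters=300):
    # Stand-in local minimizer: random perturbations with a shrinking step.
    best, fb = list(x), f(x)
    for _ in range(iters):
        trial = [v + rng.uniform(-step, step) for v in best]
        ft = f(trial)
        if ft < fb:
            best, fb = trial, ft
        step *= 0.995
    return best, fb

def boltzmann_multistart(f, dim, box=(-10.0, 10.0), T0=1.0,
                         cooling=0.9, n_rounds=30, seed=0):
    # Candidate start points are drawn uniformly but KEPT only with
    # probability exp(-(f(y) - f_best)/T): rejection sampling that
    # approximates a Boltzmann distribution, which concentrates near
    # the global minima as T -> 0 (the "random tunneling" effect).
    rng = random.Random(seed)
    best_x, best_f = crude_local_search(
        f, [rng.uniform(*box) for _ in range(dim)], rng)
    T = T0
    for _ in range(n_rounds):
        for _ in range(10000):                 # capped rejection sampling
            y = [rng.uniform(*box) for _ in range(dim)]
            if rng.random() < math.exp(-max(0.0, f(y) - best_f) / T):
                break
        xs, fs = crude_local_search(f, y, rng)
        if fs < best_f:
            best_x, best_f = xs, fs
        T *= cooling                           # lower the temperature
    return best_x, best_f
```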

4.
In this paper, the method of dual matrices for the minimization of functions is introduced. The method, which is developed on the model of a quadratic function, is characterized by two matrices at each iteration. One matrix is such that a linearly independent set of directions can be generated, regardless of the stepsize employed. The other matrix is such that, at the point where the first matrix fails to yield a gradient linearly independent of all the previous gradients, it generates a displacement leading to the minimal point. Thus, the one-dimensional search is bypassed. For a quadratic function, it is proved that the minimal point is obtained in at most n + 1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n + 2. Three algorithms of the method are presented. A reverse algorithm, which permits the use of only one matrix, is also given. Considerations pertaining to the applications of this method to the minimization of a quadratic function and a nonquadratic function are given. It is believed that, since the one-dimensional search can be bypassed, a considerable amount of computational saving can be achieved. This paper, supported by the National Science Foundation, Grant No. GP-32453, is based on Ref. 1.

5.
A Heuristic for the Vehicle Routing Problem with Time Windows
In this paper we propose a heuristic algorithm to solve the Vehicle Routing Problem with Time Windows. Its framework is a smart combination of three simple procedures: classical k-opt exchanges improve the solution, an ad hoc procedure reduces the number of vehicles, and a second objective function drives the search out of local optima. No parameter tuning is required and no random choice is made: these are the distinguishing features with respect to the recent literature. The algorithm has been tested on benchmark problems, which show it to be more effective than comparable algorithms.
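To make the k-opt component concrete, here is a minimal Python sketch of 2-opt edge exchanges on a single route with a time-window feasibility check. The data layout (travel-time matrix, (earliest, latest) windows, depot at index 0) is assumed for illustration; the vehicle-reduction procedure and the secondary objective are omitted.

```python
def route_feasible(route, travel, windows, service=0.0):
    # Check time-window feasibility of a route starting at the depot
    # (index 0). travel[i][j] is travel time; windows[c] = (earliest,
    # latest) for customer c. Arriving early means waiting until the
    # window opens; arriving after `latest` is infeasible.
    t, prev = 0.0, 0
    for c in route:
        t += travel[prev][c]
        earliest, latest = windows[c]
        if t > latest:
            return False
        t = max(t, earliest) + service
        prev = c
    return True

def two_opt(route, travel, windows):
    # Repeated 2-opt passes keeping only feasible, distance-reducing
    # segment reversals -- a stand-in for the paper's k-opt exchanges.
    def length(r):
        return sum(travel[a][b] for a, b in zip([0] + r, r + [0]))
    improved = True
    while improved:
        improved = False
        for i in range(len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if (length(cand) < length(route)
                        and route_feasible(cand, travel, windows)):
                    route, improved = cand, True
    return route
```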

6.
Direct-type global optimization algorithms often spend an excessive number of function evaluations on problems with many local optima, exploring suboptimal local minima and thereby delaying discovery of the global minimum. In this paper, a globally biased simplicial partition algorithm, Disimpl, for global optimization of expensive Lipschitz continuous functions with an unknown Lipschitz constant is proposed. A scheme for adaptively balancing local and global information during the search is introduced, implemented, experimentally investigated, and compared with the well-known Direct and Directl methods. Extensive numerical experiments executed on 800 multidimensional multiextremal test functions show a promising performance of the new acceleration technique with respect to its competitors.

7.
Conjugate gradient methods have been extensively used to locate unconstrained minimum points of real-valued functions. At present, there are several readily implementable conjugate gradient algorithms that do not require exact line search and yet are shown to be superlinearly convergent. However, these existing algorithms usually require several trials to find an acceptable stepsize at each iteration, and their inexact line search can be very time-consuming. In this paper we present new readily implementable conjugate gradient algorithms that will eventually require only one trial stepsize to find an acceptable stepsize at each iteration. Making the usual continuity assumptions on the function being minimized, we have established the following properties of the proposed algorithms. Without any convexity assumptions on the function being minimized, the algorithms are globally convergent in the sense that every accumulation point of the generated sequences is a stationary point. Furthermore, when the generated sequences converge to local minimum points satisfying second-order sufficient conditions for optimality, the algorithms eventually demand only one trial stepsize at each iteration, and their rate of convergence is n-step superlinear and n-step quadratic. This research was supported in part by the National Science Foundation under Grant No. ENG 76-09913.
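For context, a compact Python sketch of a nonlinear conjugate gradient method (Polak-Ribière+ with Armijo backtracking) in which the first trial stepsize is carried over from the previous iteration, so that near a well-behaved minimizer a single trial often suffices. This is an illustrative stand-in, not the authors' exact stepsize rule.

```python
import numpy as np

def cg_pr_armijo(f, grad, x0, c1=1e-4, shrink=0.5, tol=1e-8, max_iter=500):
    # Polak-Ribiere+ conjugate gradient; the Armijo backtracking line
    # search starts from the previously accepted stepsize, so it
    # frequently succeeds on the first trial near the solution.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    alpha = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        fx = f(x)
        while f(x + alpha * d) > fx + c1 * alpha * g.dot(d) and alpha > 1e-16:
            alpha *= shrink                        # backtrack
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # PR+ formula
        d = -g_new + beta * d
        if g_new.dot(d) >= 0.0:                    # keep d a descent direction
            d = -g_new
        x, g = x_new, g_new
        alpha /= shrink                            # let the trial step grow back
    return x

if __name__ == "__main__":
    quad = lambda v: 0.5 * v.dot(v) + v[0]
    dquad = lambda v: v + np.array([1.0, 0.0])
    print(cg_pr_armijo(quad, dquad, np.array([3.0, -2.0])))  # approx [-1, 0]
```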

8.
The effectiveness of local search algorithms on discrete optimization problems is influenced by the choice of the neighborhood function. A neighborhood function that results in all local minima being global minima is said to have zero L-locals. A polynomially sized neighborhood function with zero L-locals would ensure that at each iteration, a local search algorithm would be able to find an improving solution or conclude that the current solution is a global minimum. This paper presents a recursive relationship for computing the number of neighborhood functions over a generic solution space that result in zero L-locals. Expressions are also given for the number of tree neighborhood functions with zero L-locals. These results provide a first step towards developing expressions that are applicable to discrete optimization problems, as well as adding to the collection of solved graphical enumeration problems.

9.
In this paper, a local integral simplex algorithm is described which, starting with the initial tableau of a set partitioning problem, makes pivots using the pivot-on-one rule until no more such pivots are possible because a local optimum has been found. If the local optimum is also a global optimum, the process stops. Otherwise, a global integral simplex algorithm creates and solves the problems in a search tree consisting of a polynomial number of subproblems, subproblems of subproblems, etc. The solution to at least one of these subproblems is guaranteed to be an optimal solution to the original problem. If that solution has a bounded objective, then it is an optimal set partitioning solution of the original problem; if it has an unbounded objective, then the original problem has no feasible solution. It is shown that the total number of pivots required for the global integral simplex method to solve a set partitioning problem having m rows, where m is an arbitrary but fixed positive integer, is bounded by a polynomial function of n. A method for programming the algorithms in this paper to run on parallel computers is discussed briefly.

10.
Among the penalty based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, thereby providing a better function landscape for search, and (iii) they can compute the optimal Lagrange multiplier for each constraint as a by-product. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters (called multipliers) adaptively, so that the corresponding penalized function dynamically shifts its optimum from the unconstrained minimum point to the constrained minimum point over the iterations. The flip side, however, is that the overall algorithm requires a serial application of a number of unconstrained optimization tasks, a process that is usually time-consuming and tends to be computationally expensive. In this paper, we devise a genetic algorithm based parameter update strategy for a particular AL method. The proposed strategy updates critical parameters in an adaptive manner based on population statistics. Occasionally, a classical optimization method is used to improve the GA-obtained solution, thereby providing the resulting hybrid procedure its theoretical convergence property. The GAAL method is applied to a number of constrained test problems taken from the evolutionary algorithms (EAs) literature. The number of function evaluations required by GAAL in most problems is found to be smaller than that needed by a number of existing evolutionary based constraint handling methods. The GAAL method is found to be accurate, computationally fast, and reliable over multiple runs. Besides solving the problems, the proposed GAAL method is also able to find the optimal Lagrange multiplier associated with each constraint as an added benefit, a matter that is important for a sensitivity analysis of the obtained optimized solution but has not received adequate attention in past evolutionary constrained optimization studies.
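A minimal Python sketch of the augmented Lagrangian bookkeeping for inequality constraints g_i(x) ≤ 0: the standard shifted-penalty form of the penalized objective and the classical first-order multiplier update applied after each inner minimization (which the paper performs with a GA). The penalty form and update rule are the textbook ones, assumed here rather than taken from the paper.

```python
def augmented_lagrangian(f, gs, x, lam, mu):
    # Penalized objective for g_i(x) <= 0:
    #   f(x) + (mu/2) * sum( max(0, g_i(x) + lam_i/mu)^2 - (lam_i/mu)^2 )
    val = f(x)
    for gi, li in zip(gs, lam):
        val += mu / 2.0 * (max(0.0, gi(x) + li / mu) ** 2 - (li / mu) ** 2)
    return val

def update_multipliers(gs, x, lam, mu):
    # First-order multiplier update after each (GA-driven) inner solve.
    return [max(0.0, li + mu * gi(x)) for gi, li in zip(gs, lam)]

# Usage outline: alternate inner minimization and multiplier updates.
# for k in range(n_outer):
#     x = inner_minimize(lambda z: augmented_lagrangian(f, gs, z, lam, mu))
#     lam = update_multipliers(gs, x, lam, mu)
```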

11.
Most parallel efficient global optimization (EGO) algorithms focus only on the parallel architectures for producing multiple updating points, but pay little attention to the balance between global search (i.e., sampling in different areas of the search space) and local search (i.e., sampling more intensely in one promising area of the search space) among the updating points. In this study, a novel approach is proposed that applies this balancing idea to further accelerate the search of parallel EGO algorithms. In each cycle of the proposed algorithm, all local maxima of the expected improvement (EI) function are identified by a multi-modal optimization algorithm. The local EI maxima with values greater than a threshold are then selected, and candidates are sampled around these selected maxima. The results of numerical experiments show that, although the proposed parallel EGO algorithm needs more evaluations to find the optimum than the standard EGO algorithm, it reduces the number of optimization cycles. Moreover, the proposed parallel EGO algorithm achieves better results in terms of both the number of cycles and the number of evaluations, compared to a state-of-the-art parallel EGO algorithm, on six test problems.
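For reference, the expected improvement function whose local maxima drive the sampling above, in a dependency-free Python sketch (minimization convention; `mu` and `sigma` are the Gaussian-process predictive mean and standard deviation at a candidate point, and the threshold-based selection step is an assumption about the paper's rule):

```python
import math

def expected_improvement(mu, sigma, f_best):
    # EI = (f_best - mu) * Phi(z) + sigma * phi(z), with
    # z = (f_best - mu) / sigma; zero when the prediction is exact.
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf

def select_candidates(ei_local_maxima, threshold):
    # Keep only EI local maxima above the threshold; new sample points
    # are then drawn around each survivor (threshold is illustrative).
    return [x for x, ei in ei_local_maxima if ei > threshold]
```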

12.
In this paper, a new heuristic hybrid technique for bound-constrained global optimization is proposed. We develop an iterative algorithm called GLPτS that uses genetic algorithms, LPτ low-discrepancy sequences of points, and heuristic rules to find regions of attraction when searching for the global minimum of an objective function. Subsequently, the Nelder–Mead simplex local search technique is used to refine the solution. The combination of the three techniques (genetic algorithms, LPτO low-discrepancy search, and simplex search) provides a powerful hybrid heuristic optimization method, which is tested on a number of benchmark multimodal functions with 10–150 dimensions; its properties (applicability, convergence, consistency, and stability) are discussed in detail.

13.
The n-queens problem is a classical combinatorial optimization problem which has been proved to be NP-hard. The goal is to place n non-attacking queens on an n×n chessboard. In this paper, two single-solution-based metaheuristics (Local Search (LS) and Tuned Simulated Annealing (SA)) and two population-based metaheuristics (two versions of Scatter Search (SS)) are presented for solving the problem. Since the parameters of heuristic and metaheuristic algorithms have a great influence on their performance, a TOPSIS-Taguchi based parameter tuning method is proposed, which not only considers the number of fitness function evaluations but also aims to minimize the runtime of the presented metaheuristics. The performance of the suggested approaches was investigated through computational analyses, which showed that the Local Search method outperformed the other algorithms in terms of average runtime and average number of fitness function evaluations. The LS was also compared to the Cooperative PSO (CPSO) and SA algorithms, currently the best algorithms in the literature for finding the first solution to the n-queens problem; the results showed that the average number of fitness function evaluations of the LS is approximately 21 and 8 times less than that of SA and CPSO, respectively. A fitness landscape analysis of the n-queens problem was also conducted, indicating that the distribution of local optima is uniformly random and scattered over the search space. The landscape is rugged and there is no significant correlation between fitness and distance of solutions, so a local search heuristic can search such a rugged landscape effectively and find a solution quickly. As a result, it was statistically and analytically shown that single-solution-based metaheuristics outperform population-based metaheuristics in finding the first solution of the n-queens problem.
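As an illustration of how cheap a single-solution local search can be on this landscape, here is the classic min-conflicts search for n-queens in Python; it is a generic textbook heuristic, not the specific tuned LS evaluated in the paper.

```python
import random

def min_conflicts_nqueens(n, max_steps=100000, seed=0):
    # One queen per column; repeatedly move the queen of a conflicted
    # column to the row with the fewest attacks (ties broken at random).
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]   # rows[c] = row of column c's queen

    def conflicts(col, row):
        # Count queens attacking square (row, col) along rows/diagonals.
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not bad:
            return rows                            # solved
        col = rng.choice(bad)
        rows[col] = min(range(n),
                        key=lambda r: (conflicts(col, r), rng.random()))
    return None                                    # step budget exhausted

if __name__ == "__main__":
    print(min_conflicts_nqueens(8))
```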

14.
This paper is concerned with finding two solutions of a set covering problem that have a minimum number of variables in common. We show that this problem is NP-complete, even in the case where we are only interested in completely disjoint solutions. We describe three heuristic methods based on the standard greedy algorithm for set covering problems. Two of these algorithms find the solutions sequentially, while the third finds them simultaneously. A local search method for reducing the overlap of the two given solutions is then described; this method involves the solution of a reduced set covering problem. Finally, extensive computational tests are given demonstrating the performance of these algorithms. These tests are carried out both on randomly generated problems and on problems from the literature.
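A small Python sketch of the greedy flavour described above: the standard greedy set-cover heuristic, run twice, with the second run penalizing sets chosen by the first so as to discourage overlap. The penalty value and unit-cost model are illustrative assumptions, not the paper's exact procedures.

```python
def greedy_cover(universe, subsets, penalized=frozenset(), penalty=10.0):
    # Standard greedy set cover: repeatedly pick the set with the best
    # cost-per-newly-covered-element ratio; sets in `penalized` (e.g.
    # those used by a first solution) are made artificially expensive.
    uncovered, chosen = set(universe), []
    while uncovered:
        def price(i):
            gain = len(subsets[i] & uncovered)
            if gain == 0:
                return float('inf')
            return (1.0 + (penalty if i in penalized else 0.0)) / gain
        best = min(range(len(subsets)), key=price)
        if price(best) == float('inf'):
            raise ValueError("instance is infeasible")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Two covers with small overlap: solve once, then penalize the chosen sets.
universe = range(6)
subsets = [frozenset(s) for s in
           ({0, 1, 2}, {3, 4, 5}, {0, 3}, {1, 4}, {2, 5}, {0, 1, 2, 3, 4, 5})]
first = greedy_cover(universe, subsets)
second = greedy_cover(universe, subsets, penalized=frozenset(first))
print(first, second, set(first) & set(second))
```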

15.
The random search technique is the simplest of the heuristic algorithms. It is stated in the literature that the basic random search technique finds the global minimum with probability 1, but it takes too much time to reach it. Improving the basic random search technique may decrease the solution time. In this study, in order to obtain the global minimum quickly, a new random search algorithm is suggested, called the Dynamic Random Search Technique (DRASET). DRASET consists of two phases: a general search, and a local search based on the general solution. The best solution found during the general search is kept and then used as the initial value of the local search. DRASET's performance was evaluated on 15 test problems, and satisfactory results were obtained.
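A guess at the two-phase structure in Python (the phase budgets, box shrink factor, and sampling rules are assumptions; the paper's actual rules may differ):

```python
import random

def draset_like(f, dim, box=(-10.0, 10.0), n_general=5000,
                n_local=5000, shrink=0.1, seed=0):
    rng = random.Random(seed)
    lo, hi = box
    # Phase 1: general search -- uniform sampling over the whole box.
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    fb = f(best)
    for _ in range(n_general):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        fx = f(x)
        if fx < fb:
            best, fb = x, fx
    # Phase 2: local search -- uniform sampling in a shrunken box
    # centred on the phase-1 incumbent.
    radius = shrink * (hi - lo)
    for _ in range(n_local):
        x = [v + rng.uniform(-radius, radius) for v in best]
        fx = f(x)
        if fx < fb:
            best, fb = x, fx
    return best, fb
```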

16.
In this paper we derive the probability distribution of trial points in the differential evolution (DE) algorithm, in particular the probability distribution of points generated by mutation. We propose a point generation scheme that uses an approximation to this distribution. The scheme can dispense with the differential vector used in the mutation of DE. We propose a DE algorithm that replaces the differential based mutation scheme with a probability distribution based point generation scheme. We also propose a DE algorithm that uses a probabilistic combination of point generation by the probability distribution and point generation by mutation. A numerical study is carried out using a set of 50 test problems, many of which are inspired by practical applications. Numerical results suggest that the new algorithms are superior to the original version both in terms of the number of function evaluations and CPU times.
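For orientation, the classic DE/rand/1/bin scheme whose trial-point distribution the paper analyzes, sketched in Python; the paper's proposal replaces the differential mutation v = a + F·(b − c) with direct sampling from an approximating distribution, which is not reproduced here.

```python
import random

def de_rand_1_bin(f, dim, bounds, pop_size=30, F=0.5, CR=0.9,
                  generations=200, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: v = a + F * (b - c) with distinct a, b, c != i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # force at least one mutated gene
            # Binomial crossover between target pop[i] and mutant.
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

if __name__ == "__main__":
    sphere = lambda v: sum(x * x for x in v)
    print(de_rand_1_bin(sphere, 5, (-5.0, 5.0)))
```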

17.
Usually, interval global optimization algorithms use local search methods to obtain a good upper (lower) bound on the solution. These local methods are based on point evaluations. This paper investigates a new local search method based on interval analysis information and on a new selection criterion to direct the search. When this new method is used alone, the guarantee of obtaining a global solution is lost. To maintain this guarantee, the new local search method can be incorporated into a standard interval GO algorithm, not only to find a good upper bound on the solution, but also to simultaneously carry out part of the work of the interval B&B algorithm. Moreover, the new method permits improvement of the guaranteed upper bound on the solution within the memory requirements established by the user. Thus, the user can avoid the memory problems that may arise in interval GO algorithms, especially when derivative information is not used. The chance of reaching the global solution with this algorithm may depend on the established memory limitations. The algorithm has been evaluated numerically using a wide set of test functions which includes both easy and hard problems. The numerical results show that it is possible to obtain accurate solutions for all the easy functions and also for the investigated hard problems.

18.
We implemented five conversions of the simulated annealing (SA) algorithm from sequential to parallel form on high-performance computers and applied them to a set of standard function optimization problems in order to test their performance. According to the experimental results, we found that the traditional approach to parallelizing simulated annealing, namely parallelizing the moves of sequential SA, handled very difficult problem instances poorly. A divide-and-conquer decomposition of the search space sometimes found the global optimum function value, but it frequently incurred a great time cost when the random search space was considerably expanded. The most effective way we found to identify the global optimum solution is to introduce a genetic algorithm (GA) and build a highly hybrid GA+SA algorithm. In this approach, the GA is applied at each cooling temperature stage. Additionally, the best of the five implemented algorithms was analyzed on an IBM Beowulf PC cluster and compared with some recent global optimization algorithms in terms of the number of function evaluations needed to obtain a global minimum, success rate, and solution quality.

19.
In this paper, we consider the problem of minimizing a function in several variables which could be multimodal and may possess discontinuities. A new algorithm for the problem, based on the genetic technique, is developed. The algorithm is hybrid in nature, in the sense that it utilizes the genetic technique to generate search directions, which are used in an optimization scheme; it is thus different from other methods in the literature. The algorithm has been tested on the Rosenbrock valley functions in 2 and 4 dimensions, and on multimodal functions in 2 and 4 dimensions, which are of a high degree of difficulty. The results are compared with the Adaptive Random Search and Simulated Annealing algorithms. The performance of the algorithm is also compared to recent global algorithms in terms of the number of functional evaluations needed to obtain a global minimum, and the results show that the proposed algorithm is better than these algorithms on a set of standard test problems. It seems that the proposed algorithm is efficient and robust.

20.
When solving real-world optimization problems, evolutionary algorithms often require a large number of fitness evaluations in order to converge to the global optima. Attempts have been made to find techniques to reduce the number of fitness function evaluations. We propose a novel framework in the context of multi-objective optimization where fitness evaluations are distributed by creating a limited number of adaptive spheres spanning the search space. These spheres move towards the global Pareto front as components of a swarm optimization system. We call this process localization. The contribution of the paper is a general framework for distributed evolutionary multi-objective optimization, in which the individuals in each sphere can be controlled by any existing evolutionary multi-objective optimization algorithm in the literature.
