Found 20 related articles (search time: 31 ms)
1.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies, as well as those that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.
2.
Shangyao Yan Der-shin Juang Chien-rong Chen Wei-shen Lai 《Journal of Global Optimization》2005,33(1):123-156
Traditionally, minimum cost transshipment problems have been simplified as linear cost problems, which are not practical in real applications. Recently, some advanced local search algorithms have been developed that can directly solve concave cost bipartite network problems. However, they are not applicable to general transshipment problems, and the effectiveness of these modified local search algorithms for solving general concave cost transshipment problems is doubtful. In this research, we propose a global search algorithm for solving concave cost transshipment problems. Efficient methods for encoding, generating initial populations, selection, crossover, and mutation are proposed according to the problem characteristics. To evaluate the effectiveness of the proposed global search algorithm, four advanced local search algorithms, based on the threshold accepting algorithm, the great deluge algorithm, and the tabu search algorithm, are also developed for comparison purposes. To assist with the comparison of the proposed algorithms, a randomized network generator is designed to produce test problems. All tests were performed on a personal computer. The results indicate that the proposed global search algorithm is more effective than the four advanced local search algorithms for solving concave cost transshipment problems.
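The abstract above names the standard genetic-algorithm components (encoding, initial population, selection, crossover, mutation) without giving details. As a hedged illustration only, a generic binary GA skeleton with those operators might look as follows; the tournament selection, one-point crossover, and parameter values are assumptions, not the paper's problem-specific operators:

```python
import random

def genetic_search(fitness, n_genes, pop_size=30, generations=100,
                   cx_rate=0.8, mut_rate=0.1, seed=0):
    """Generic GA skeleton (minimization): binary encoding, tournament
    selection, one-point crossover, bit-flip mutation. Illustrative only."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)          # pick two, keep the fitter
        return a if fitness(a) < fitness(b) else b

    best = min(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < cx_rate:     # one-point crossover
                cut = rng.randrange(1, n_genes)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_genes):   # bit-flip mutation
                    if rng.random() < mut_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
        best = min(pop + [best], key=fitness)   # elitist record of best-so-far
    return best
```

For a real transshipment instance, the bit string would decode to a network flow and the fitness would be the concave arc-cost total; here the decoding is left abstract.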
3.
We present algorithms for the single-source uncapacitated version of the minimum concave cost network flow problem. Each algorithm exploits the fact that an extreme feasible solution corresponds to a sub-tree of the original network. A global search heuristic based on random extreme feasible initial solutions and local search is developed. The algorithm is used to evaluate the complexity of the randomly generated test problems. An exact global search algorithm is developed, based on enumerative search of rooted subtrees. This exact technique is extended to bound the search based on cost properties and linear underestimation. The technique is accelerated by exploiting the network structure.
4.
U. M. Garcia-Palomares F. J. Gonzalez-Castaño J. C. Burguillo-Rial 《Journal of Global Optimization》2006,34(3):409-426
This paper presents a general approach that combines global search strategies with local search in an attempt to find a global minimum of a real-valued function of n variables. It assumes that derivative information is unreliable; consequently, it deals with derivative-free algorithms, although derivative information can be easily incorporated. The paper presents a nonmonotone derivative-free algorithm and shows numerically that it may converge to a better minimum when started from a local, nonglobal minimum. This property is then incorporated into a random population to globalize the algorithm. Convergence to a zero-order stationary point is established for nonsmooth convex functions, and convergence to a first-order stationary point is established for strictly differentiable functions. Preliminary numerical results are encouraging. A Java implementation that can be run directly from the Web allows the interested reader to gain better insight into the performance of the algorithm on several standard functions. The general framework proposed here allows the user to incorporate variants of well-known global search strategies.
Research done under the cooperation agreement between Universidade de Vigo and Universidad Simón Bolívar.
5.
D. W. Bulger 《Journal of Optimization Theory and Applications》2007,133(3):289-301
Grover’s quantum algorithm promises a quadratic acceleration for any problem formulable as a search. For unstructured search problems, its implementation and performance are well understood. The curse of dimensionality and the intractability of the general global optimization problem require any identifiable structure or regularity to be incorporated into a solution method. This paper addresses the application of Grover’s algorithm when a local search technique is available, thereby combining the quadratic acceleration with the acceleration seen in the multistart method.
The author thanks Dr. Bill Baritompa for helpful discussions.
6.
Global Optimization using Dynamic Search Trajectories
Two global optimization algorithms are presented. Both attempt to minimize an unconstrained objective function through the modeling of dynamic search trajectories. The first, the Snyman–Fatti algorithm, originated in the 1980s and still appears to be an effective global optimization algorithm. The second, denoted the modified bouncing ball algorithm, is currently under development. For both algorithms, the search trajectories are modified to increase the likelihood of convergence to a low local minimum. Numerical results illustrate the effectiveness of both algorithms.
7.
We describe a new algorithm which uses the trajectories of a discrete dynamical system to sample the domain of an unconstrained objective function in search of global minima. The algorithm is unusually adept at avoiding nonoptimal local minima and successfully converging to a global minimum. Trajectories generated by the algorithm for objective functions with many local minima exhibit chaotic behavior, in the sense that they are extremely sensitive to changes in initial conditions and system parameters. In this context, chaos seems to have a beneficial effect: failure to converge to a global minimum from a given initial point can often be rectified by making arbitrarily small changes in the system parameters.
8.
A global minimization algorithm for Lipschitz functions
The global optimization problem of minimizing f(x), with f satisfying the Lipschitz condition |f(x) − f(y)| ≤ L|x − y|, is considered. To solve it, a region-search algorithm is introduced. This combines a local minimum algorithm with a procedure that at the i-th iteration finds a region S_i where the global minimum has to be searched for. Specifically, by making use of the Lipschitz condition, S_i, which is a sequence of intervals, is constructed by leaving out from S_{i−1} an interval where the global minimum cannot be located. A convergence property of the algorithm is given. Further, the ratio between the measure of the initial feasible region and that of the unexplored region may be used as a stop rule. Numerical experiments are carried out; these show that the algorithm works well in finding and reducing the measure of the unexplored region.
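The core region-reduction idea, discarding any interval on which the Lipschitz lower bound already exceeds the best value found, can be sketched in a few lines. This is a minimal one-dimensional branch-and-bound illustration of that principle, not the paper's exact algorithm (which also couples in a local minimizer):

```python
def lipschitz_minimize(f, a, b, L, tol=1e-3):
    """Minimize an L-Lipschitz f on [a, b]. An interval of width w with
    midpoint value fm can be discarded once fm - L*w/2 > best_f, since the
    global minimum cannot lie in it. Illustrative sketch only."""
    best_x, best_f = a, f(a)
    intervals = [(a, b)]
    while intervals:
        lo, hi = intervals.pop()
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        # Lipschitz lower bound over [lo, hi], sampled at the midpoint
        if fm - L * (hi - lo) / 2 > best_f or hi - lo < tol:
            continue          # minimum cannot lie here, or interval is resolved
        intervals += [(lo, mid), (mid, hi)]
    return best_x, best_f
```

The fraction of [a, b] already discarded plays the role of the stop rule mentioned in the abstract.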
9.
The method of partitioned random search has been proposed in recent years to obtain as good a solution as possible for the global optimization problem. A practical algorithm has been developed and applied to real-life problems. However, the design of this algorithm was based mainly on intuition. The theoretical foundation of the method is an important issue in the development of efficient algorithms for such problems. In this paper, we generalize previous theoretical results and propose a sequential sampling policy for the partitioned random search for global optimization with sampling cost and discounting factor. A proof of the optimality of the proposed sequential sampling policy is given by using the theory of optimal stopping.
10.
M. Duran Toksarı Ertan Güner 《Journal of Mathematical Analysis and Applications》2007,328(2):1178-1187
This paper presents a variable neighborhood search (VNS) for the problem of finding the global minimum of a nonconvex function. Variable neighborhood search, which systematically changes neighborhood structures while searching for a better solution, is used to guide a set of standard improvement heuristics. The algorithm was tested on standard test functions with successful results, and its performance compared favorably with that of other algorithms.
11.
The effectiveness of local search algorithms on discrete optimization problems is influenced by the choice of the neighborhood function. A neighborhood function that results in all local minima being global minima is said to have zero L-locals. A polynomially sized neighborhood function with zero L-locals would ensure that at each iteration, a local search algorithm would be able to find an improving solution or conclude that the current solution is a global minimum. This paper presents a recursive relationship for computing the number of neighborhood functions over a generic solution space that results in zero L-locals. Expressions are also given for the number of tree neighborhood functions with zero L-locals. These results provide a first step towards developing expressions that are applicable to discrete optimization problems, as well as providing results that add to the collection of solved graphical enumeration problems.
12.
In this paper a new heuristic hybrid technique for bound-constrained global optimization is proposed. We developed an iterative algorithm called GLPτS that uses genetic algorithms, LPτ low-discrepancy sequences of points, and heuristic rules to find regions of attraction when searching for a global minimum of an objective function. Subsequently, the Nelder–Mead simplex local search technique is used to refine the solution. The combination of the three techniques (genetic algorithms, LPτO low-discrepancy search, and simplex search) provides a powerful hybrid heuristic optimization method, which is tested on a number of benchmark multimodal functions with 10–150 dimensions; the method's properties (applicability, convergence, consistency, and stability) are discussed in detail.
13.
Mend-Amar Majig Abdel-Rahman Hedar Masao Fukushima 《Journal of Global Optimization》2007,38(4):637-651
This paper considers the problem of finding as many solutions as possible, hopefully all, of the general (i.e., not necessarily monotone) variational inequality problem (VIP). Based on a global optimization reformulation of the VIP, we propose a hybrid evolutionary algorithm that incorporates local search in promising regions. In order to prevent the search process from returning to already detected global or local solutions, we employ the tunneling and hump-tunneling function techniques. The proposed algorithm is tested on a set of test problems from the MCPLIB library, and numerical results indicate that it works well in practice.
14.
Johannes Jahn 《Computational Optimization and Applications》2006,35(2):161-175
This paper presents a multiobjective search algorithm with subdivision technique (MOSAST) for the global solution of multiobjective constrained optimization problems with possibly noncontinuous objective or constraint functions. The method is based on a random search method and a new version of the Graef–Younes algorithm, and it uses a subdivision technique. Numerical results are given for bicriterial test problems.
15.
A new multi-start algorithm for global unconstrained minimization is presented in which the search trajectories are derived from the equation of motion of a particle in a conservative force field, where the function to be minimized represents the potential energy. The trajectories are modified to increase the probability of convergence to a comparatively low local minimum, thus increasing the region of convergence of the global minimum. A Bayesian argument is adopted by which, under mild assumptions, the confidence level that the global minimum has been attained may be computed. When applied to standard and other test functions, the algorithm never failed to yield the global minimum.
The first author wishes to thank Prof. M. Levitt of the Department of Chemical Physics of the Weizmann Institute of Science for suggesting this line of research and also Drs. T. B. Scheffler and E. A. Evangelidis for fruitful discussions regarding Conjecture 2.1. He also acknowledges the exchange agreement award received from the National Council for Research and Development in Israel and the Council for Scientific and Industrial Research in South Africa, which made possible the visit to the Weizmann Institute where this work was initiated.
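The trajectory idea above, following a particle whose potential energy is the objective, can be sketched with a plain leapfrog integration of x'' = -∇f(x), recording the lowest point visited. This is a minimal one-dimensional sketch under that interpretation; the published algorithm additionally modifies trajectories and adds the Bayesian stopping rule:

```python
def trajectory_search(f, grad, x0, dt=0.01, steps=5000):
    """Simulate a unit-mass particle with potential f and force -grad(f)
    using leapfrog (velocity-Verlet) integration, starting from rest at x0,
    and keep the lowest point visited. Illustrative sketch only."""
    x, v = x0, 0.0
    best_x, best_f = x, f(x)
    a = -grad(x)
    for _ in range(steps):
        v += 0.5 * dt * a      # half-step velocity update
        x += dt * v            # full-step position update
        a = -grad(x)
        v += 0.5 * dt * a      # second half-step velocity update
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Because energy is (nearly) conserved, the particle can roll through shallow local minima and sample deeper ones along the way, which is what makes the trajectory useful for global search.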
16.
Zelda B. Zabinsky Robert L. Smith J. Fred McDonald H. Edwin Romeijn David E. Kaufman 《Journal of Global Optimization》1993,3(2):171-192
Improving Hit-and-Run is a random search algorithm for global optimization that at each iteration generates a candidate point for improvement that is uniformly distributed along a randomly chosen direction within the feasible region. The candidate point is accepted as the next iterate if it offers an improvement over the current iterate. We show that for positive definite quadratic programs, the expected number of function evaluations needed to arbitrarily well approximate the optimal solution is at most O(n^(5/2)), where n is the dimension of the problem. Improving Hit-and-Run when applied to global optimization problems can therefore be expected to converge polynomially fast as it approaches the global optimum.
Paper presented at the II. IIASA-Workshop on Global Optimization, December 9–14, 1990, Sopron (Hungary).
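The iteration described above is simple enough to sketch directly for a box-constrained feasible region: draw a uniform random direction, find the segment of the line through the current point that stays in the box, sample a candidate uniformly on that segment, and accept only improvements. The box feasible region and parameter values are illustrative assumptions:

```python
import random, math

def improving_hit_and_run(f, lo, hi, x0, iters=2000, seed=1):
    """Improving Hit-and-Run on the box [lo, hi]^n. Illustrative sketch."""
    rng = random.Random(seed)
    n = len(x0)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]   # uniform random direction
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # feasible extent of the line {x + t*d} inside the box
        t_lo, t_hi = -math.inf, math.inf
        for xi, di in zip(x, d):
            if abs(di) > 1e-12:
                t1, t2 = (lo - xi) / di, (hi - xi) / di
                t_lo = max(t_lo, min(t1, t2))
                t_hi = min(t_hi, max(t1, t2))
        t = rng.uniform(t_lo, t_hi)                   # uniform point on segment
        y = [xi + t * di for xi, di in zip(x, d)]
        fy = f(y)
        if fy < fx:                                   # accept improvements only
            x, fx = y, fy
    return x, fx
```

On a convex quadratic this converges quickly, consistent with the polynomial bound quoted in the abstract.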
17.
Many studies on ant behavior have demonstrated that the food searching process starts with Scout Ants scouting all around for food. In this paper, we propose a novel Scout Ant Continuous Optimization (SACO) algorithm which can simulate the food searching process of the Scout Ants. In this algorithm, the solution space of an optimization problem is divided into m subspaces. One Scout Ant is assigned to each subspace, and a total number of m Scout Ants in m subspaces will cooperate in the whole food searching process. The location of a Scout Ant in a subspace corresponds to a solution in that subspace. When an ant moves, the change of its position is driven by two factors. The first factor is the independent, random ergodic search with a small moving step in the ant's assigned subspace, and the second is the collaborative, global search inspired by the global heuristic information accumulated among m ants. Each of these two factors is weighed by an appropriate weight to balance its contribution to the moving step size. This balanced computation helps adapt optimization problems with different features. Our numerical experiments have demonstrated that, in addition to the high accuracy and success rate in seeking the optimized solutions, our algorithm also has very fast convergence speed and impressive performance in optimization applications in high-dimensional spaces.
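The two-factor move rule above (a small random step in the ant's own subspace plus a globally informed pull, each with its own weight) can be sketched as follows. The slab partition of the first coordinate, the greedy acceptance, and the weight values are assumptions for illustration, not the published SACO update:

```python
import random

def scout_ant_search(f, bounds, m=8, iters=500, w_local=0.7, w_global=0.3, seed=0):
    """Sketch of a scout-ant style search: m ants, each owning one slab of
    the first coordinate; moves combine a weighted local random step and a
    weighted pull toward the best solution found so far. Illustrative only."""
    rng = random.Random(seed)
    lo0, hi0 = bounds[0]
    width = (hi0 - lo0) / m
    ants = []
    for j in range(m):                      # one ant per subspace (slab)
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        x[0] = rng.uniform(lo0 + j * width, lo0 + (j + 1) * width)
        ants.append(x)
    best = min(ants, key=f)[:]
    best_f = f(best)
    for _ in range(iters):
        for j in range(m):
            x = ants[j]
            y = []
            for i, (lo, hi) in enumerate(bounds):
                step = (w_local * rng.gauss(0.0, 0.05 * (hi - lo)) +
                        w_global * (best[i] - x[i]) * rng.random())
                y.append(min(max(x[i] + step, lo), hi))
            # keep the ant's first coordinate inside its own slab
            y[0] = min(max(y[0], lo0 + j * width), lo0 + (j + 1) * width)
            fy = f(y)
            if fy < f(x):                   # greedy per-ant acceptance
                ants[j] = y
            if fy < best_f:
                best, best_f = y[:], fy
    return best, best_f
```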
18.
A derivative-free simulated annealing driven multi-start algorithm for continuous global optimization is presented. We first propose a trial point generation scheme in continuous simulated annealing which eliminates the need for gradient-based trial point generation. We then suitably embed the multi-start procedure within the simulated annealing algorithm. We modify the derivative-free pattern search method and use it as the local search in the multi-start procedure. We study the convergence properties of the algorithm and test its performance on a set of 50 problems. Numerical results are presented which show the robustness of the algorithm. Numerical comparisons with a gradient-based simulated annealing algorithm and three population-based global optimization algorithms show that the new algorithm could offer a reasonable alternative to many currently available global optimization algorithms, especially for problems requiring 'direct search' type algorithms.
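The combination described above, simulated annealing with derivative-free trial points inside a multi-start loop, can be sketched as follows. The Gaussian proposal, geometric cooling schedule, and all parameter values are assumptions; the paper's trial point scheme and pattern-search local phase are more elaborate:

```python
import random, math

def sa_multistart(f, bounds, starts=5, iters=400, t0=1.0, cooling=0.99, seed=0):
    """Multi-start simulated annealing with gradient-free Gaussian trial
    points, clipped to the box. Illustrative sketch only."""
    rng = random.Random(seed)
    best_x, best_f = None, math.inf
    for _ in range(starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]   # fresh random start
        fx, t = f(x), t0
        for _ in range(iters):
            y = [min(max(xi + rng.gauss(0.0, 0.05 * (hi - lo)), lo), hi)
                 for xi, (lo, hi) in zip(x, bounds)]
            fy = f(y)
            # Metropolis rule: always accept improvements, sometimes uphill
            if fy < fx or rng.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
            t *= cooling
        if fx < best_f:
            best_x, best_f = x[:], fx
    return best_x, best_f
```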
19.
In this paper we investigate a model where travel time is not necessarily proportional to the distance. Every trip starts at speed zero, then the vehicle accelerates to a cruising speed, stays at the cruising speed for a portion of the trip and then decelerates back to a speed of zero. We define a time equivalent distance which is equal to the travel time multiplied by the cruising speed. This time equivalent distance is referred to as the acceleration–deceleration (A–D) distance. We prove that every demand point is a local minimum for the Weber problem defined by travel time rather than distance. We propose a heuristic approach employing the generalized Weiszfeld algorithm and an optimal approach applying the Big Triangle Small Triangle global optimization method. These two approaches are very efficient and problems of 10,000 demand points are solved in about 0.015 seconds by the generalized Weiszfeld algorithm and in about 1 minute by the BTST technique. When the generalized Weiszfeld algorithm was repeated 1000 times, the optimal solution was found at least once for all test problems.
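The A–D distance defined above follows from elementary kinematics. Assuming, for illustration, a symmetric acceleration and deceleration rate a: reaching cruising speed v takes time v/a over distance v²/(2a), so a trip of length d ≥ v²/a takes time d/v + v/a, while a shorter trip has a triangular speed profile and takes 2·sqrt(d/a). Multiplying by v gives the time-equivalent distance:

```python
import math

def ad_distance(d, v, a):
    """Time-equivalent A-D distance for a trip of length d with cruising
    speed v: travel time multiplied by v. A symmetric acceleration/
    deceleration rate a is an illustrative simplification."""
    if d >= v * v / a:                 # cruising speed is reached
        time = d / v + v / a           # accel + cruise + decel
    else:                              # short trip: never reaches cruising speed
        time = 2.0 * math.sqrt(d / a)
    return time * v
```

For example, d = 100, v = 10, a = 2 gives a travel time of 15 and an A–D distance of 150, strictly larger than the 100 units of physical distance, which is what makes every demand point a local minimum of the travel-time Weber objective.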
20.
Variable neighborhood search: Principles and applications
Systematic change of neighborhood within a possibly randomized local search algorithm yields a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS). We present a basic scheme for this purpose, which can easily be implemented using any local search algorithm as a subroutine. Its effectiveness is illustrated by solving several classical combinatorial or global optimization problems. Moreover, several extensions are proposed for solving large problem instances: using VNS within the successive approximation method yields a two-level VNS, called variable neighborhood decomposition search (VNDS); modifying the basic scheme to explore easily valleys far from the incumbent solution yields an efficient skewed VNS (SVNS) heuristic. Finally, we show how to stabilize column generation algorithms with the help of VNS and discuss various ways to use VNS in graph theory, i.e., to suggest, disprove or give hints on how to prove conjectures, an area where metaheuristics do not appear to have been applied before.
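The basic VNS scheme mentioned above (shake in the k-th neighborhood, run local search, then either move and reset k or try the next, larger neighborhood) can be sketched generically. The `neighborhoods[k](x, rng)` interface and the identity local search used in the example are hypothetical; any local search subroutine can be plugged in, as the abstract notes:

```python
import random

def vns(f, x0, neighborhoods, local_search, iters=100, seed=0):
    """Basic VNS scheme. neighborhoods[k](x, rng) returns a random point in
    the k-th (increasingly large) neighborhood of x; local_search descends
    from the shaken point. Illustrative sketch only."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            y = local_search(neighborhoods[k](x, rng))   # shake, then descend
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
                k = 0           # improvement: restart from smallest neighborhood
            else:
                k += 1          # no improvement: try a larger neighborhood
    return x, fx
```

A toy continuous use: nested interval shakes of radius 0.1, 1, and 5 around the incumbent, with a trivial (identity) local search, already give a workable randomized descent.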