Found 20 similar documents (search time: 46 ms)
1.
Generalized hill climbing algorithms provide a framework for modeling several local search algorithms for hard discrete optimization problems. This paper introduces and analyzes generalized hill climbing algorithm performance measures that reflect how effectively an algorithm has performed to date in visiting a global optimum and how effectively it may perform in the future in visiting such a solution. These measures are also used to obtain a necessary condition for asymptotic convergence (in probability) to a global optimum, which is then used to show that a common formulation of threshold accepting does not converge. These measures assume particularly simple forms when applied to specific search strategies such as Monte Carlo search and threshold accepting.
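For concreteness, a common threshold accepting formulation of the kind analyzed above can be sketched as follows. This is an illustrative toy only, not the paper's model: the objective, neighborhood, and threshold schedule are invented. The defining feature is that any move whose worsening is below the current threshold is accepted.

```python
import random

def threshold_accepting(f, x0, neighbor, thresholds, rng):
    """Threshold accepting: accept any move whose worsening is below
    the current threshold; thresholds typically decrease to zero."""
    x, best = x0, x0
    for t in thresholds:
        y = neighbor(x, rng)
        if f(y) - f(x) < t:      # accept if worsening is below threshold t
            x = y
        if f(x) < f(best):       # track the best solution visited
            best = x
    return best

# toy 1-D integer problem: minimize (x - 7)^2 over the integers
rng = random.Random(0)
f = lambda x: (x - 7) ** 2
neighbor = lambda x, r: x + r.choice([-1, 1])
thresholds = [10.0] * 200 + [1.0] * 200 + [0.0] * 200
result = threshold_accepting(f, 0, neighbor, thresholds, rng)
```

With a high threshold the walk roams; as the thresholds drop, only improving moves survive, which is exactly the regime in which the paper's necessary convergence condition can fail.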
2.
This paper concerns a new nonmonotone strategy and its application to the line search approach for unconstrained optimization. Nonmonotone techniques are believed to improve the likelihood of finding the global optimum and to increase the convergence rate of such algorithms. We first introduce a new nonmonotone strategy based on a convex combination of the maximum function value over some preceding successful iterates and the current function value. We then incorporate the proposed strategy into an inexact Armijo-type line search to construct a more relaxed line search procedure. Global convergence to first-order stationary points is subsequently proved, and an R-linear convergence rate is established under suitable assumptions. Preliminary numerical results show the efficiency and robustness of the proposed approach for solving unconstrained nonlinear optimization problems.
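A minimal sketch of the idea, under stated assumptions: the reference value is a convex combination of the maximum over recent function values and the current value, and it replaces the current value in an Armijo backtracking test. The weights `eta`, `delta` and the backtracking factor `tau` are illustrative names, not the paper's notation.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, fhist, eta=0.5, delta=1e-4,
                       alpha=1.0, tau=0.5):
    """Backtracking line search with a nonmonotone reference value:
    a convex combination of max(recent f values) and the current f."""
    R = eta * max(fhist) + (1 - eta) * f(x)   # nonmonotone reference value
    slope = delta * (grad(x) @ d)             # Armijo sufficient-decrease term
    while f(x + alpha * d) > R + alpha * slope:
        alpha *= tau                          # backtrack
    return alpha

# sketch on f(x) = ||x||^2 with the steepest-descent direction
f = lambda x: float(x @ x)
grad = lambda x: 2 * x
x = np.array([2.0, -1.0])
d = -grad(x)
fhist = [f(x)]                                # history of successful iterates
alpha = nonmonotone_armijo(f, grad, x, d, fhist)
```

Because the reference `R` can exceed `f(x)`, steps rejected by a monotone Armijo test may be accepted, which is the "more relaxed" behavior the abstract describes.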
3.
Fabio Boschetti 《Journal of Heuristics》2008,14(1):95-116
A Local Linear Embedding (LLE) module enhances the performance of two Evolutionary Computation (EC) algorithms employed as search tools in global optimization problems. The LLE exploits the stochastic sampling of the data space inherent in Evolutionary Computation to reconstruct an approximate mapping from the data space back into the parameter space. This allows the target data vector to be mapped directly into the parameter space, yielding a rough estimate of the global optimum, which is then added to the EC generation. Iterating this process considerably improves EC convergence. Thirteen standard test functions and two real-world optimization problems serve to benchmark the performance of the method. In most of our tests, optimization aided by the LLE mapping outperforms standard implementations of a genetic algorithm and of particle swarm optimization. The number and ranges of functions we tested suggest that the proposed algorithm can be considered a valid alternative to traditional EC tools in more general applications. The performance improvement in the early stage of convergence also suggests that this hybrid implementation could be successful as an initial global search to select candidates for subsequent local optimization.
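The back-mapping idea can be illustrated with a deliberately crude stand-in: instead of LLE, fit a linear least-squares map from forward-model outputs back to parameters using the EC population's samples, then push the target data vector through it to seed the next generation. Everything here (the toy forward model, the linear fit) is an invented simplification of the paper's LLE machinery.

```python
import numpy as np

# Crude linear stand-in for the LLE back-mapping: given sampled
# parameters P and forward-model outputs D, fit D -> P by least
# squares, then map the target data vector to a parameter estimate.
rng = np.random.default_rng(0)
true_p = np.array([1.5, -0.7])
forward = lambda p: np.array([2*p[0] + p[1], p[0] - p[1], 3*p[1]])  # toy model

P = rng.uniform(-2, 2, size=(40, 2))         # sampled parameter vectors
D = np.array([forward(p) for p in P])        # corresponding data vectors
W, *_ = np.linalg.lstsq(D, P, rcond=None)    # data-space -> parameter-space map
estimate = forward(true_p) @ W               # back-map the target data vector
```

For this linear toy model the fitted map recovers the target parameters essentially exactly; for nonlinear problems the estimate is only rough, which is why the paper injects it as one extra candidate rather than trusting it outright.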
4.
Mark A. Abramson Charles Audet James W. Chrissis Jennifer G. Walston 《Optimization Letters》2009,3(1):35-47
This paper introduces a new derivative-free class of mesh adaptive direct search (MADS) algorithms for solving constrained mixed variable optimization problems, in which the variables may be continuous or categorical. This new class of algorithms, called mixed variable MADS (MV-MADS), generalizes both mixed variable pattern search (MVPS) algorithms for linearly constrained mixed variable problems and MADS algorithms for general constrained problems with only continuous variables. The convergence analysis, which makes use of the Clarke nonsmooth calculus, similarly generalizes the existing theory for both MVPS and MADS algorithms, and reasonable conditions are established for ensuring convergence of a subsequence of iterates to a suitably defined stationary point in the nonsmooth and mixed variable sense.
5.
Bulger D. Baritompa W. P. Wood G. R. 《Journal of Optimization Theory and Applications》2003,116(3):517-529
Pure adaptive search (PAS) is an idealized stochastic algorithm for unconstrained global optimization. The number of PAS iterations required to solve a problem increases only linearly in the domain dimension. However, each iteration requires the generation of a random domain point uniformly distributed in the current improving region. If no regularity conditions are known to hold for the objective function, then this task requires a number of classical function evaluations varying inversely with the proportion of the domain constituted by the improving region, entirely counteracting the apparent speedup of PAS. The Grover quantum computational search algorithm provides a way to generate the PAS iterates. We show that the resulting implementation, which we call Grover adaptive search (GAS), realizes PAS for functions satisfying certain conditions, and we believe that, once quantum computers become available, GAS will be a practical algorithm.
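A classical sketch of one PAS iteration makes the cost argument concrete: drawing a uniform point in the improving region by rejection sampling takes, in expectation, 1/p draws, where p is the improving region's fraction of the domain. This is the cost that Grover search reduces. The toy objective and domain below are illustrative.

```python
import random

def pas_step(f, level, sample_domain, rng):
    """One pure-adaptive-search iteration via rejection sampling: draw
    uniform domain points until one lands in the improving region
    {x : f(x) < level}; the expected number of draws is 1/p."""
    draws = 0
    while True:
        x = sample_domain(rng)
        draws += 1
        if f(x) < level:
            return x, draws

rng = random.Random(1)
f = lambda x: x * x
sample = lambda r: r.uniform(-1.0, 1.0)
level = 1.0
for _ in range(5):               # the improving level shrinks each iteration
    x, n = pas_step(f, level, sample, rng)
    level = f(x)
```

As `level` shrinks, the improving region does too, so each successive classical iteration needs more draws on average, exactly the inverse-proportion blow-up described above.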
6.
Tzyy-Chyang Lu, Jyh-Ching Juang 《Applied mathematics and computation》2011,218(6):2516-2532
This work presents the evolutionary quantum-inspired space search algorithm (QSSA) for solving numerical optimization problems. In the proposed algorithm, the feasible solution space is decomposed into regions in terms of quantum representation. As the search progresses from one generation to the next, the quantum bits evolve gradually to increase the probability of selecting the regions that render good fitness values. Through the inherent probabilistic mechanism, the QSSA initially behaves as a global search algorithm and gradually evolves into a local search algorithm, yielding a good balance between exploration and exploitation. To prevent a premature convergence and to speed up the overall search speed, an overlapping strategy is also proposed. The QSSA is applied to a series of numerical optimization problems. The experiments show that the results obtained by the QSSA are quite competitive compared to those obtained using state-of-the-art IPOP-CMA-ES and QEA.
7.
《Operations Research Letters》2022,50(3):347-355
Optimal search methods are proposed for solving optimization problems whose objectives cannot be evaluated analytically. This paper proposes a method that incorporates sampling schemes into directional direct search with a variable number of sample paths and investigates its effectiveness in solving stochastic optimization problems. We also explore conditions on the sample sizes at each iteration under which convergence in probability can be guaranteed. Finally, a set of benchmark problems is numerically tested to show the effectiveness of different sampling schemes.
8.
Generalized hill climbing algorithms provide a framework to describe and analyze metaheuristics for addressing intractable discrete optimization problems. The performance of such algorithms can be assessed asymptotically, either through convergence results or by comparison to other algorithms. This paper presents necessary and sufficient convergence conditions for generalized hill climbing algorithms. These conditions are shown to be equivalent to necessary and sufficient convergence conditions for simulated annealing when the generalized hill climbing algorithm is restricted to simulated annealing. Performance measures are also introduced that permit generalized hill climbing algorithms to be compared using random restart local search. These results identify a solution landscape parameter based on the basins of attraction for local optima that determines whether simulated annealing or random restart local search is more effective in visiting a global optimum. The implications and limitations of these results are discussed.
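The random restart local search baseline used in the comparison above can be sketched in a few lines. The toy objective, its basins, and the neighborhood below are invented for illustration; the essential structure is independent local searches from uniform random starts, keeping the best local optimum found.

```python
import random

def local_search(f, x, neighbors):
    """Steepest-descent local search over a finite neighborhood."""
    while True:
        best = min(neighbors(x), key=f)
        if f(best) >= f(x):      # no improving neighbor: local optimum
            return x
        x = best

def random_restart(f, sample, neighbors, restarts, rng):
    """Random restart local search: independent local searches from
    uniform random starts; return the best local optimum found."""
    return min((local_search(f, sample(rng), neighbors)
                for _ in range(restarts)), key=f)

# toy multimodal problem on {0..30}: local optima at 5, 15, 25
f = lambda x: (x % 10 - 5) ** 2 + x // 10
neighbors = lambda x: [max(x - 1, 0), min(x + 1, 30)]
sample = lambda r: r.randint(0, 30)
best = random_restart(f, sample, neighbors, 10, random.Random(0))
```

The chance of visiting the global optimum is governed by how much of the domain its basin of attraction covers, which is precisely the landscape parameter the paper identifies.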
9.
In this paper two nonmonotone curved search (NCS) algorithms for unconstrained optimization are presented. The NCS algorithms possess both a global convergence property and a quadratic rate of convergence. Some numerical results are also reported which show that the NCS algorithm is superior to the usual curved search (UCS) algorithm for typical test problems.
10.
A Class of Trust Region Algorithms with a Nonmonotone Line Search
By combining the nonmonotone Wolfe line search technique with the traditional trust region algorithm, we propose a new class of trust region algorithms for solving unconstrained optimization problems. The new algorithm solves the trust region subproblem only once per iteration, and at each iteration the approximate Hessian satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, the global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain…
11.
In global optimization, a typical population-based stochastic search method works on a set of sample points from the feasible region. In this paper, we study a recently proposed method of this sort. The method utilizes an attraction-repulsion mechanism to move sample points toward optimality and is thus referred to as electromagnetism-like method (EM). The computational results showed that EM is robust in practice, so we further investigate the theoretical structure. After reviewing the original method, we present some necessary modifications for the convergence proof. We show that in the limit, the modified method converges to the vicinity of global optimum with probability one.
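One EM-style attraction-repulsion move can be sketched as follows. This is a simplified illustration, not the paper's modified method: each point gets a "charge" from its relative objective value, better points attract and worse points repel, and each point moves a random step along the unit resultant force. The charge formula and step rule are standard-flavored but illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
pts = rng.uniform(-5, 5, size=(6, 2))        # current population
vals = np.array([f(p) for p in pts])
best = vals.min()
# charge: better objective value -> larger charge
charges = np.exp(-len(pts) * (vals - best)
                 / (vals.sum() - len(pts) * best + 1e-12))

new_pts = pts.copy()
for i in range(len(pts)):
    force = np.zeros(2)
    for j in range(len(pts)):
        if i == j:
            continue
        diff = pts[j] - pts[i]
        qq = charges[i] * charges[j] / (diff @ diff + 1e-12)
        # attract toward better points, repel from worse ones
        force += qq * diff if vals[j] < vals[i] else -qq * diff
    step = rng.uniform(0, 1)                 # random step length in [0, 1)
    new_pts[i] = pts[i] + step * force / (np.linalg.norm(force) + 1e-12)
```

The convergence analysis in the paper concerns a modified version of moves like these, showing that in the limit the population reaches the vicinity of a global optimum with probability one.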
12.
Morteza Alinia Ahandani Mohammad-Taghi Vakil-Baghmisheh Mohammad Talebi 《Computational Optimization and Applications》2014,59(3):725-748
In this paper, we combine two types of local search algorithms for global optimization of continuous functions. In the literature, most hybrid algorithms are produced by combining a global optimization algorithm with a local search algorithm, where the local search is used to improve solution quality rather than to explore the search space and find the global optimum independently. The focus of this research is on some simple and efficient hybrid algorithms that combine Nelder–Mead simplex (NM) variants with the bidirectional random optimization (BRO) method for optimization of continuous functions. The NM explores the whole search space to find some promising areas, and the BRO local search is then applied to exploit the optimal solution as accurately as possible. A new strategy for the shrinkage stage, borrowed from differential evolution (DE), is also incorporated into the NM variants. To examine the efficiency of the proposed algorithms, they are evaluated on the 25 benchmark functions designed for the CEC2005 special session on real-parameter optimization. A comparison study between the hybrid algorithms and some DE algorithms, together with non-parametric analysis of the obtained results, demonstrates that the proposed algorithms outperform most of the other algorithms, and that the difference is in most cases statistically significant. In a later part of the comparative experiments, a comparison of the proposed algorithms with some other evolutionary algorithms reported at CEC2005 confirms the better performance of our proposed algorithms.
13.
We discuss global optimization problems with general constraints and present a random search algorithm for this class of problems, proving that the algorithm converges with probability 1 to a global optimum of the problem. Numerical results show that the method is effective.
14.
Global Optimization using Dynamic Search Trajectories
Two global optimization algorithms are presented. Both algorithms attempt to minimize an unconstrained objective function through the modeling of dynamic search trajectories. The first, namely the Snyman–Fatti algorithm, originated in the 1980s and still appears to be an effective global optimization algorithm. The second algorithm is currently under development, and is denoted the modified bouncing ball algorithm. For both algorithms, the search trajectories are modified to increase the likelihood of convergence to a low local minimum. Numerical results illustrate the effectiveness of both algorithms.
15.
Honggang Wang 《European Journal of Operational Research》2012,217(1):141-148
We propose a family of retrospective optimization (RO) algorithms for optimizing stochastic systems with both integer and continuous decision variables. The algorithms are continuous search procedures embedded in a RO framework using dynamic simplex interpolation (RODSI). By decreasing dimensions (corresponding to the continuous variables) of simplex, the retrospective solutions become closer to an optimizer of the objective function. We present convergence results of RODSI algorithms for stochastic “convex” systems. Numerical results show that a simple implementation of RODSI algorithms significantly outperforms some random search algorithms such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO).
16.
Zelda B. Zabinsky David Bulger Charoenchai Khompatraporn 《Journal of Global Optimization》2010,46(2):273-286
Two common questions when one uses a stochastic global optimization algorithm, e.g., simulated annealing, are when to stop a single run of the algorithm, and whether to restart with a new run or terminate the entire algorithm. In this paper, we develop a stopping and restarting strategy that considers tradeoffs between the computational effort and the probability of obtaining the global optimum. The analysis is based on a stochastic process called Hesitant Adaptive Search with Power-Law Improvement Distribution (HASPLID). HASPLID models the behavior of stochastic optimization algorithms, and motivates an implementable framework, Dynamic Multistart Sequential Search (DMSS). We demonstrate here the practicality of DMSS by using it to govern the application of a simple local search heuristic on three test problems from the global optimization literature.
17.
Diane E. Vaughan Sheldon H. Jacobson Hemanshu Kaul 《Computational Optimization and Applications》2007,37(1):103-119
Simultaneous generalized hill climbing (SGHC) algorithms provide a framework for using heuristics to simultaneously address sets of intractable discrete optimization problems where information is shared between the problems during the algorithm execution. Many well-known heuristics can be embedded within the SGHC algorithm framework. This paper shows that the solutions generated by an SGHC algorithm form a stochastic process that satisfies the Markov property. This allows problem probability mass functions to be formulated for particular sets of problems based on the long-term behavior of the algorithm. Such results can be used to determine the proportion of iterations that an SGHC algorithm will spend optimizing over each discrete optimization problem. Sufficient conditions that guarantee that the algorithm spends an equal number of iterations in each discrete optimization problem are provided. SGHC algorithms can also be formulated such that the overall performance of the algorithm is independent of the initial discrete optimization problem chosen. Sufficient conditions are obtained guaranteeing that an SGHC algorithm will visit the globally optimal solution for each discrete optimization problem. Lastly, rates of convergence for SGHC algorithms are reported that show that given a rate of convergence for the embedded GHC algorithm, the SGHC algorithm can be designed to preserve this rate.
18.
A derivative-free simulated annealing driven multi-start algorithm for continuous global optimization is presented. We first propose a trial point generation scheme in continuous simulated annealing which eliminates the need for gradient-based trial point generation. We then suitably embed the multi-start procedure within the simulated annealing algorithm. We modify the derivative-free pattern search method and use it as the local search in the multi-start procedure. We study the convergence properties of the algorithm and test its performance on a set of 50 problems. Numerical results are presented which show the robustness of the algorithm. Numerical comparisons with a gradient-based simulated annealing algorithm and three population-based global optimization algorithms show that the new algorithm offers a reasonable alternative to many currently available global optimization algorithms, especially for problems requiring 'direct search'-type algorithms.
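The overall architecture (simulated annealing runs from multiple random starts, polished by a derivative-free local search) can be sketched as follows. This is a generic sketch, not the paper's algorithm: the Metropolis SA, the cooling schedule, and the simple coordinate-halving polish standing in for the modified pattern search are all illustrative.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, temps, rng):
    """Plain simulated annealing with Metropolis acceptance."""
    x = x0
    for T in temps:
        y = neighbor(x, rng)
        d = f(y) - f(x)
        if d <= 0 or rng.random() < math.exp(-d / T):
            x = y
    return x

def sa_multistart(f, sample, neighbor, temps, starts, rng):
    """Multi-start wrapper: run SA from several random starts, then
    polish the best result with a tiny 1-D compass search (a crude
    stand-in for the paper's modified pattern search)."""
    best = min((simulated_annealing(f, sample(rng), neighbor, temps, rng)
                for _ in range(starts)), key=f)
    step = 0.1
    while step > 1e-3:
        trial = min([best + step, best - step, best], key=f)
        if f(trial) < f(best):
            best = trial         # keep step size while improving
        else:
            step *= 0.5          # refine when no neighbor improves
    return best

f = lambda x: (x * x - 1) ** 2               # minima at x = +1 and x = -1
neighbor = lambda x, r: x + r.gauss(0, 0.3)
sample = lambda r: r.uniform(-2.0, 2.0)
temps = [1.0 * 0.95 ** k for k in range(200)]
sol = sa_multistart(f, sample, neighbor, temps, 5, random.Random(2))
```

SA supplies the global exploration and the direct-search polish supplies local accuracy, which is the division of labor the abstract describes.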
19.
20.
Ana Maria A.C. Rocha Tiago F.M.C. Martins 《Journal of Computational and Applied Mathematics》2011,235(16):4611-4620
This paper presents an augmented Lagrangian methodology with a stochastic population-based algorithm for solving nonlinear constrained global optimization problems. The method approximately solves a sequence of simple bound global optimization subproblems using a fish swarm intelligent algorithm. A stochastic convergence analysis of the fish swarm iterative process is included. Numerical results with a benchmark set of problems are shown, including a comparison with other stochastic-type algorithms.
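The outer augmented Lagrangian loop can be sketched generically: solve a penalized subproblem approximately, then update the multiplier from the remaining constraint violation. The crude accept-if-better random search below is an invented stand-in for the paper's fish swarm solver, and the quadratic toy problem and parameters (`lam`, `mu`) are illustrative.

```python
import numpy as np

# Augmented Lagrangian sketch for: min f(x)  s.t.  h(x) = 0
rng = np.random.default_rng(3)
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
h = lambda x: x[0] + x[1] - 1                # equality constraint

lam, mu = 0.0, 10.0                          # multiplier and penalty weight
x = np.zeros(2)
L = lambda x: f(x) + lam * h(x) + 0.5 * mu * h(x) ** 2

for _ in range(8):                           # outer multiplier updates
    for _ in range(4000):                    # crude stochastic subproblem solve
        trial = x + rng.normal(0.0, 0.1, 2)
        if L(trial) < L(x):                  # accept-if-better random search
            x = trial
    lam += mu * h(x)                         # first-order multiplier update
```

For this toy problem the constrained minimizer is x = (1, 0), and the multiplier update drives the iterates toward feasibility without sending the penalty weight to infinity.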