Similar Literature
20 similar documents retrieved (search time: 234 ms)
1.
In this paper, we combine two types of local search algorithms for global optimization of continuous functions. In the literature, most hybrid algorithms combine a global optimization algorithm with a local search algorithm, where the local search is used only to improve solution quality, not to explore the search space and find the global optimum on its own. The focus of this research is on simple and efficient hybrid algorithms that combine Nelder–Mead simplex (NM) variants with the bidirectional random optimization (BRO) method for optimization of continuous functions. The NM explores the whole search space to find promising areas, and the BRO local search then exploits these areas to approximate the optimum as accurately as possible. A new strategy for the shrinkage stage, borrowed from differential evolution (DE), is also incorporated into the NM variants. To examine their efficiency, the proposed algorithms are evaluated on the 25 benchmark functions designed for the CEC2005 special session on real-parameter optimization. A comparison with several DE algorithms and a non-parametric analysis of the results show that the proposed algorithms outperform most of the other algorithms, and that the differences are in most cases statistically significant. A further comparison with other evolutionary algorithms reported at CEC2005 confirms the better performance of the proposed algorithms.
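As a rough illustration of the exploration/exploitation split described above (a minimal sketch, not the authors' exact procedure; the DE-based shrinkage strategy is omitted, and the function names, step schedule, and tolerances are assumptions), the following Python fragment runs a budget-limited Nelder–Mead phase and then refines the incumbent with a simple bidirectional random search:

```python
import numpy as np
from scipy.optimize import minimize

def bidirectional_random_search(f, x0, step=0.5, shrink=0.5, max_iter=2000, tol=1e-12, seed=0):
    """Simple bidirectional random local search: try a random direction, and if it
    does not improve, try the opposite direction; shrink the step when both fail."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, dtype=float), f(x0)
    for _ in range(max_iter):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        for cand in (x + step * d, x - step * d):   # forward, then backward
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
                break
        else:                                        # neither direction improved
            step *= shrink
            if step < tol:
                break
    return x, fx

def nm_bro_hybrid(f, x0, nm_budget=200):
    """Phase 1: a budget-limited Nelder-Mead run to locate a promising area.
    Phase 2: bidirectional random search to refine the incumbent."""
    nm = minimize(f, x0, method="Nelder-Mead",
                  options={"maxiter": nm_budget, "xatol": 1e-3, "fatol": 1e-3})
    return bidirectional_random_search(f, nm.x)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(np.asarray(x) ** 2))   # 5-D sphere test function
    x_best, f_best = nm_bro_hybrid(sphere, np.full(5, 3.0))
    print(x_best, f_best)
```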

2.
A multi-objective evolutionary algorithm which can be applied to many nonlinear multi-objective optimization problems is proposed. Its aim is to quickly obtain a fixed-size Pareto-front approximation. It adapts ideas from different multi-objective evolutionary algorithms, but also incorporates new devices. In particular, the search in the feasible region is carried out on promising areas (hyperspheres) determined by a radius value, which decreases as the optimization procedure evolves. This mechanism helps to maintain a balance between exploration and exploitation of the search space. Additionally, a new local search method that accelerates the convergence of the population towards the Pareto-front has been incorporated. It is an extension of the local optimizer SASS and improves a given solution along a search direction (no gradient information is used). Finally, a termination criterion has also been proposed, which stops the algorithm if the distances between the Pareto-front approximations provided by the algorithm in three consecutive iterations are smaller than a given tolerance. To measure how far two of those sets are from each other, a modification of the well-known Hausdorff distance is proposed. In order to analyze the algorithm's performance, it has been compared to the reference algorithms NSGA-II and SPEA2 and the state-of-the-art algorithms MOEA/D and SMS-EMOA. Several quality indicators have been considered, namely, hypervolume, average distance, additive epsilon indicator, spread and spacing. According to the computational tests performed, the new algorithm, named FEMOEA, outperforms the other algorithms.
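The termination rule described above can be made concrete with a small sketch; here the plain Hausdorff distance stands in for the paper's modified variant, and the function names and tolerance are assumptions:

```python
import numpy as np

def hausdorff_distance(A, B):
    """Classic Hausdorff distance between two finite point sets whose rows
    are the objective vectors of a Pareto-front approximation."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def should_stop(fronts, tol=1e-3):
    """Stop when the last three front approximations are pairwise closer than tol."""
    if len(fronts) < 3:
        return False
    f1, f2, f3 = fronts[-3:]
    return all(hausdorff_distance(p, q) < tol
               for p, q in ((f1, f2), (f2, f3), (f1, f3)))
```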

3.
Usually, interval global optimization algorithms use local search methods to obtain a good upper (lower) bound of the solution. These local methods are based on point evaluations. This paper investigates a new local search method based on interval analysis information and on a new selection criterion to direct the search. When this new method is used alone, the guarantee of obtaining a global solution is lost. To maintain this guarantee, the new local search method can be incorporated into a standard interval GO algorithm, not only to find a good upper bound of the solution, but also to simultaneously carry out part of the work of the interval B&B algorithm. Moreover, the new method permits improvement of the guaranteed upper bound of the solution within the memory requirements established by the user. Thus, the user can avoid the memory problems that may arise in interval GO algorithms, mainly when derivative information is not used. The chance of reaching the global solution with this algorithm may depend on the established memory limitations. The algorithm has been evaluated numerically using a wide set of test functions which includes easy and hard problems. The numerical results show that it is possible to obtain accurate solutions for all the easy functions and also for the investigated hard problems.

4.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies and also those algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.

5.
A novel staged continuous Tabu search (SCTS) algorithm is proposed for solving global optimization problems of multi-minima functions with multiple variables. The proposed method comprises three stages that are based on the continuous Tabu search (CTS) algorithm with different neighbor-search strategies, each devoted to one task. Compared with a single CTS process, the method searches the solution space for the global optimum more thoroughly and efficiently. The effectiveness of the proposed SCTS algorithm is evaluated using a set of benchmark multimodal functions whose global and local minima are known. The numerical test results obtained indicate that the proposed method is more efficient than a previously published improved genetic algorithm. The method is also applied to the optimization of fiber grating design for optical communication systems. Compared with two other well-known algorithms, namely, the genetic algorithm (GA) and simulated annealing (SA), the proposed method performs better in the optimization of the fiber grating design.
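A minimal sketch of one continuous Tabu search stage, assuming a simple hyper-rectangular neighbour sampler and a tabu list of recently visited balls (the specific neighbour-search strategies and parameter values of the three-stage SCTS are not given here and are assumptions):

```python
import numpy as np

def continuous_tabu_search(f, x0, bounds, n_neighbors=20, radius=1.0,
                           tabu_radius=0.1, tabu_len=25, max_iter=500, seed=0):
    """One continuous Tabu search stage: sample candidate neighbours around the
    current point, discard those inside tabu balls, and move to the best
    admissible candidate (even if worse, to escape local minima)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = np.asarray(x0, float)
    best_x, best_f = x.copy(), f(x)
    tabu = [x.copy()]
    for _ in range(max_iter):
        cands = np.clip(x + rng.uniform(-radius, radius, (n_neighbors, x.size)), lo, hi)
        admissible = [c for c in cands
                      if all(np.linalg.norm(c - t) > tabu_radius for t in tabu)]
        if not admissible:
            radius *= 0.9                      # contract the neighbourhood if everything is tabu
            continue
        x = min(admissible, key=f)             # best admissible neighbour becomes the new centre
        tabu.append(x.copy())
        tabu = tabu[-tabu_len:]                # bounded tabu list
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```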

6.
A Hybrid Ant Colony–Genetic Algorithm   (total citations: 2; self-citations: 0; citations by others: 2)
The hybrid ant colony–genetic algorithm is applied to optimization problems in discrete and in continuous spaces. For the traveling salesman problem, the hybrid algorithm uses the genetic algorithm as the overall framework and exploits the pheromone information of the ant colony algorithm in the crossover operation; four mutation strategies are given according to the characteristics of the traveling salesman problem; and, to counter the premature convergence of the genetic algorithm, the 2-opt method is added to locally optimize the solutions. Compared with simulated annealing, the standard genetic algorithm and the standard ant colony algorithm, all four hybrid algorithms perform well, and the hybrid algorithm with strategy D performs best. For continuous-space optimization, the ant colony algorithm serves as the overall framework, with the crossover and mutation operations of the genetic algorithm added; the correctness of the hybrid ant colony algorithm is verified on test functions.
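The 2-opt local optimization step mentioned above can be sketched as follows (a generic 2-opt pass over a distance matrix; the helper names are hypothetical and the integration into the GA/ACO framework is not shown):

```python
def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse segments while any reversal shortens the tour."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                # gain of replacing edges (a,b) and (c,d) by (a,c) and (b,d)
                delta = (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```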

7.
1. Introduction. The trust region method is a well-accepted technique in nonlinear optimization to assure global convergence. One of the advantages of the model is that it does not require the objective function to be convex. Many different versions have been suggested using the trust region technique. At each iteration, suppose a current iterate, a local quadratic model of the function, and a trust region centered at the point with a certain radius are given. A point that minimizes the model f…
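The basic iteration sketched in this introduction (quadratic model, trust region, accept/reject test, radius update) might look roughly like the following Python fragment, which uses a Cauchy-point step for brevity; the parameter values and function names are assumptions, not the paper's specific method:

```python
import numpy as np

def trust_region_step(f, grad, hess, x, delta,
                      eta=0.1, shrink=0.25, grow=2.0, delta_max=10.0):
    """One iteration of a basic trust-region framework: approximately minimize
    the local quadratic model inside the ball of radius delta (Cauchy point,
    for brevity), then accept or reject the step and update the radius."""
    g, B = grad(x), hess(x)
    g_norm = np.linalg.norm(g)
    if g_norm == 0.0:                          # model is stationary at x
        return x, delta
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, g_norm ** 3 / (delta * gBg))
    p = -(tau * delta / g_norm) * g            # Cauchy step
    pred = -(g @ p + 0.5 * p @ B @ p)          # predicted decrease of the model
    ared = f(x) - f(x + p)                     # actual decrease of f
    rho = ared / pred if pred > 0 else -1.0    # agreement ratio
    if rho < 0.25:
        delta *= shrink                        # poor agreement: shrink the region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta = min(grow * delta, delta_max)   # good agreement on the boundary: expand
    x_new = x + p if rho > eta else x          # accept only sufficiently good steps
    return x_new, delta
```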

8.
Solving a stochastic optimization problem often involves performing repeated noisy function evaluations at points encountered during the algorithm. Recently, a continuous optimization framework for executing a single observation per search point was shown to exhibit a martingale property so that associated estimation errors are guaranteed to converge to zero. We generalize this martingale single observation approach to problems with mixed discrete–continuous variables. We establish mild regularity conditions for this class of algorithms to converge to a global optimum.

9.
Most parallel efficient global optimization (EGO) algorithms focus only on the parallel architectures for producing multiple updating points, but pay little attention to the balance between the global search (i.e., sampling in different areas of the search space) and the local search (i.e., sampling more intensively in one promising area of the search space) of the updating points. In this study, a novel approach is proposed that applies this idea to further accelerate the search of parallel EGO algorithms. In each cycle of the proposed algorithm, all local maxima of the expected improvement (EI) function are identified by a multi-modal optimization algorithm. Then the local EI maxima with values greater than a threshold are selected and candidates are sampled around these selected EI maxima. The results of numerical experiments show that, although the proposed parallel EGO algorithm needs more evaluations to find the optimum compared with the standard EGO algorithm, it is able to reduce the number of optimization cycles. Moreover, the proposed parallel EGO algorithm achieves better results in terms of both the number of cycles and the number of evaluations compared with a state-of-the-art parallel EGO algorithm over six test problems.
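For reference, the expected improvement criterion used above has a closed form under a Gaussian predictive distribution; a small self-contained sketch for minimization, using only the standard library (the surrogate model that supplies mu and sigma is not shown):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Expected improvement of a candidate point for minimization, given the
    surrogate's predicted mean mu, predicted standard deviation sigma, and the
    best objective value observed so far, f_min."""
    if sigma <= 0.0:
        return 0.0
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    return (f_min - mu) * cdf + sigma * pdf
```

In the parallel scheme described, EI would be evaluated at every local maximum returned by the multi-modal optimizer, and only the maxima whose EI exceeds the chosen threshold would receive new samples.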

10.
In a recent paper the authors introduced an infinite class of global optimization algorithms based upon random sampling from the feasible region and local searches started from selected sample points, according to an acceptance/rejection criterion. All of the algorithms of that class possess strong theoretical properties. Here we analyze a member of that family which, although significantly simpler to implement and more efficient than the well-known Multi-Level Single-Linkage algorithm, enjoys the same theoretical properties. It is shown here that, with very high probability, our method is able to discover the points from which Multi-Level Single-Linkage will decide to start a local search.
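For context, the classical Multi-Level Single-Linkage start rule that this analysis refers to can be sketched as follows (this is the textbook MLSL criterion, not the authors' simpler acceptance/rejection variant; the critical radius r_k is assumed to be supplied by the caller):

```python
import numpy as np

def mlsl_start_points(samples, values, r_k):
    """Multi-Level Single-Linkage start rule: start a local search from a
    sample point only if no sampled point with a strictly better value lies
    within the critical distance r_k of it."""
    samples = np.asarray(samples, float)
    values = np.asarray(values, float)
    starts = []
    for i, (x, fx) in enumerate(zip(samples, values)):
        dists = np.linalg.norm(samples - x, axis=1)
        if not np.any((dists <= r_k) & (values < fx)):
            starts.append(i)                   # no better neighbour: launch a local search here
    return starts
```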

11.
In this paper, we present a new line search and trust region algorithm for unconstrained optimization problems in which the trust region radius converges to zero. The new trust region algorithm performs a backtracking line search from the failed point instead of resolving the subproblem when the trial step results in an increase in the objective function. We show that the algorithm preserves the convergence properties of traditional trust region algorithms. Numerical results are also given.
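A minimal sketch of this idea, assuming the trial step p comes from an already-solved trust region subproblem and is a descent direction (function names and constants are assumptions, not the paper's exact rules):

```python
import numpy as np

def accept_or_backtrack(f, grad, x, p, delta,
                        beta=0.5, c1=1e-4, max_backtracks=30):
    """If the full trust-region trial step p increases f, backtrack along the
    same direction with an Armijo test instead of re-solving the subproblem."""
    fx, g = f(x), grad(x)
    if f(x + p) < fx:                          # trial step already decreases f: accept it
        return x + p, delta
    t = 1.0
    for _ in range(max_backtracks):
        t *= beta
        if f(x + t * p) <= fx + c1 * t * (g @ p):   # Armijo sufficient decrease
            break
    return x + t * p, min(delta, t * np.linalg.norm(p))  # tie the radius to the accepted step
```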

12.
Generalized hill climbing algorithms provide a framework for modeling several local search algorithms for hard discrete optimization problems. This paper introduces and analyzes generalized hill climbing algorithm performance measures that reflect how effectively an algorithm has performed to date in visiting a global optimum and how effectively an algorithm may perform in the future in visiting such a solution. These measures are also used to obtain a necessary asymptotic convergence (in probability) condition to a global optimum, which is then used to show that a common formulation of threshold accepting does not converge. These measures assume particularly simple forms when applied to specific search strategies such as Monte Carlo search and threshold accepting.
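Threshold accepting, whose convergence is analyzed above, is easy to state as code; a minimal sketch with an assumed neighbourhood function and threshold schedule (not the specific formulation shown to fail in the paper):

```python
def threshold_accepting(f, neighbor, x0, thresholds, iters_per_threshold=100):
    """Threshold accepting: accept any candidate whose objective is worse than
    the incumbent by no more than the current threshold; the threshold
    schedule decreases towards zero."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for T in thresholds:                       # e.g. [10.0, 5.0, 2.0, 1.0, 0.5, 0.0]
        for _ in range(iters_per_threshold):
            y = neighbor(x)
            fy = f(y)
            if fy - fx <= T:                   # deterministic acceptance rule
                x, fx = y, fy
                if fy < best_f:
                    best_x, best_f = y, fy
    return best_x, best_f
```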

13.
In the area of broad-band antenna array signal processing, the global minimum of a quadratic equality constrained quadratic cost minimization problem is often required. The problem posed is usually characterized by a large optimization space (around 50–90 tuples), a large number of linear equality constraints, and a few quadratic equality constraints, each having very low-rank quadratic constraint matrices. Two main difficulties arise in this class of problem. Firstly, the feasibility region is nonconvex and multiple local minima abound. This makes conventional numerical search techniques unattractive, as they are unable to locate the global optimum consistently (unless a finite search area is specified). Secondly, the large optimization space makes the use of decision-method algorithms for the theory of the reals unattractive. This is because these algorithms involve solving for the roots of univariate polynomials whose order grows with the square of the size of the optimization space. In this paper we present a new algorithm which exploits the structure of the constraints to reduce the optimization space to a more manageable size. The new algorithm relies on linear-algebra concepts, basic optimization theory, and a multivariate polynomial root-solving tool often used by decision-method algorithms. This research was supported by the Australian Research Council and the Cooperative Research Centre for Broadband Telecommunications and Networking.

14.
In this paper the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well-known stochastic optimization algorithms, namely, simulated annealing, the genetic algorithm and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as a criterion function for the algorithm; in cases where it is not analytically tractable, a Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, with a very low computational cost, it achieves improved performance when compared with popular MCMC algorithms, such as MCMC model composition, as well as with "vanilla" versions of simulated annealing, the genetic algorithm and tabu search.

15.
A niche hybrid genetic algorithm (NHGA) is proposed in this paper to solve continuous multimodal optimization problems more efficiently, accurately and reliably. It provides a new architecture for hybrid algorithms, which organically merges niche techniques and the Nelder–Mead simplex method into GAs. In the new architecture, a simplex search is first performed in the potential niches, which are likely to contain a global optimum, to locate the promising zones within the search space quickly and reliably. Then another simplex search is used to quickly discover the global optimum in the located promising zones. The proposed method not only strengthens the exploration capabilities of GAs through niche techniques, but also provides more powerful exploitation capabilities through the simplex search. It thus effectively alleviates premature convergence and compensates for the weak exploitation capability of GAs. A set of benchmark functions is used to demonstrate the validity of NHGA and the role of every component of NHGA. Numerical experiments show that NHGA can, efficiently and reliably, obtain a more accurate global optimum for complex, high-dimensional multimodal optimization problems. They also demonstrate that the new hybrid architecture is promising and can be used to generate further hybrid algorithms.

16.
Over the last few decades several methods have been proposed for handling functional constraints while solving optimization problems using evolutionary algorithms (EAs). However, the presence of equality constraints makes the feasible space very small compared to the entire search space. As a consequence, the handling of equality constraints has long been a difficult issue for evolutionary optimization methods. This paper presents a Hybrid Evolutionary Algorithm (HEA) for solving optimization problems with both equality and inequality constraints. In HEA, we propose a new local search technique with special emphasis on equality constraints. The basic concept of the new technique is to reach a point on the equality constraint from the current position of an individual solution, and then explore along the constraint landscape. We believe this new concept will influence the future research direction for constrained optimization using population-based algorithms. The proposed algorithm is tested on a set of standard benchmark problems. The results show that the proposed technique works very well on those benchmark problems.

17.
In this paper, we study a few challenging theoretical and numerical issues in the well-known trust region policy optimization for deep reinforcement learning. The goal is to find a policy that maximizes the total expected reward when the agent acts according to the policy. The trust region subproblem is constructed with a surrogate function coherent to the total expected reward and a general distance constraint around the latest policy. We solve the subproblem using a preconditioned stochastic gradient method with a line search scheme to ensure that each step improves the model function and stays within the trust region. To overcome the bias that sampling introduces into the function estimates in this stochastic setting, we add the empirical standard deviation of the total expected reward to the predicted increase in the ratio used to update the trust region radius and to decide whether the trial point is accepted. Moreover, for a Gaussian policy, which is commonly used for continuous action spaces, the maximization with respect to the mean and covariance is performed separately to control the entropy loss. Our theoretical analysis shows that the deterministic version of the proposed algorithm tends to generate a monotonic improvement of the total expected reward and that global convergence is guaranteed under moderate assumptions. Comparisons with state-of-the-art methods demonstrate the effectiveness and robustness of our method on robotic control and game-playing tasks from OpenAI Gym.

18.
A Class of Trust Region Algorithms with Nonmonotone Line Search   (total citations: 1; self-citations: 0; citations by others: 1)
By combining the nonmonotone Wolfe line search technique with the traditional trust region algorithm, we propose a new class of trust region algorithms for unconstrained optimization problems. The new algorithm solves the trust region subproblem only once per iteration, and at every iteration the approximate Hessian satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, the global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain…
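A minimal sketch of a nonmonotone Wolfe-type step-size test, assuming d is a descent direction and using plain backtracking (a full Wolfe search would bracket and zoom; the window length M and the constants are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def nonmonotone_wolfe_step(f, grad, x, d, f_history, M=10,
                           c1=1e-4, c2=0.9, alpha=1.0, beta=0.5, max_trials=50):
    """Backtracking search for a step size satisfying nonmonotone Wolfe-type
    conditions: sufficient decrease is measured against the maximum of the
    last M objective values instead of f(x) alone."""
    f_ref = max(f_history[-M:])                # nonmonotone reference value
    g_d = grad(x) @ d                          # directional derivative at x
    for _ in range(max_trials):
        x_new = x + alpha * d
        sufficient_decrease = f(x_new) <= f_ref + c1 * alpha * g_d
        curvature = grad(x_new) @ d >= c2 * g_d
        if sufficient_decrease and curvature:
            return alpha, x_new
        alpha *= beta
    return alpha, x + alpha * d                # fall back to the last trial step
```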

19.
The conceptual design of aircraft often entails a large number of nonlinear constraints that result in a nonconvex feasible design space and multiple local optima. The design of the high-speed civil transport (HSCT) is used as an example of a highly complex conceptual design with 26 design variables and 68 constraints. This paper compares three global optimization techniques on the HSCT problem and two test problems containing thousands of local optima and noise: multistart local optimizations using either sequential quadratic programming (SQP) as implemented in the design optimization tools (DOT) program or Snyman's dynamic search method, and a modified form of Jones' DIRECT global optimization algorithm. SQP is a local optimizer, while Snyman's algorithm is capable of moving through shallow local minima. The modified DIRECT algorithm is a global search method based on Lipschitzian optimization that locates small promising regions of design space and then uses a local optimizer to converge to the optimum. DOT and the dynamic search algorithms proved to be superior for finding a single optimum masked by noise of trigonometric form. The modified DIRECT algorithm was found to be better for locating the global optimum of functions with many widely separated true local optima.
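The multistart strategy compared above can be sketched in a few lines; here SciPy's SLSQP stands in for the SQP code in DOT, and the bounds handling and start count are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_sqp(f, bounds, n_starts=50, seed=0):
    """Multistart local optimization: run a gradient-based local solver
    (SciPy's SLSQP, standing in for the SQP code in DOT) from many random
    starting points and keep the best local optimum found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, method="SLSQP", bounds=list(zip(lo, hi)))
        if res.success and (best is None or res.fun < best.fun):
            best = res
    return best                                # None if every local run failed
```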

20.
In this paper, we propose a new nonmonotonic interior point backtracking strategy to modify the reduced projective affine scaling trust region algorithm for solving optimization problems subject to nonlinear equality and linear inequality constraints. The general full trust region subproblem for the nonlinear equality and linear inequality constrained optimization is decomposed into a pair of trust region subproblems in the horizontal and vertical subspaces of the linearized equality constraints and extended affine scaling equality constraints. The horizontal subproblem in the proposed algorithm is defined by minimizing a quadratic projective reduced Hessian function subject only to an ellipsoidal trust region constraint in a null subspace of the tangential space, while the vertical subproblem is defined by a least squares subproblem subject only to an ellipsoidal trust region constraint. By introducing Fletcher's penalty function as the merit function, the trust region strategy with the interior point backtracking technique switches to a strictly feasible interior point step generated by a component direction of the two trust region subproblems. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. The nonmonotonic criterion is expected to speed up the convergence progress in some cases involving highly nonlinear, ill-conditioned functions.
