Similar documents
20 similar documents were retrieved (search time: 15 ms).
1.
The grey wolf optimizer, a swarm intelligence optimization technique, was recently presented as a new heuristic search algorithm with satisfactory results on real-valued and binary encoded optimization problems. This algorithm is more effective than some conventional population-based algorithms, such as particle swarm optimization, differential evolution and the gravitational search algorithm. Several grey wolf optimizer variants have been developed by researchers to improve the performance of the basic algorithm. Inspired by the particle swarm optimization algorithm, this study investigates the performance of a new algorithm called the Inspired grey wolf optimizer, which extends the original grey wolf optimizer with two features: a nonlinear adjustment strategy for the control parameter, and a modified position-updating equation based on the personal historical best position and the global best position. Experiments are performed on four classical high-dimensional benchmark functions, four test functions proposed in the IEEE Congress on Evolutionary Computation 2005 special session, three well-known engineering design problems, and one real-world problem. The results show that the proposed algorithm finds more accurate solutions, has a higher convergence rate, and requires fewer fitness function evaluations than the other compared techniques.
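As a hedged illustration of the two added features summarized above, the sketch below assumes a quadratic decay of the GWO control parameter and a PSO-like pull toward the personal and global best positions; the function name, the decay exponent and the random coefficients are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def igwo_position_update(positions, pbest, gbest, t, max_iter, lb, ub, rng):
    """Hypothetical IGWO-style update: a GWO encircling move combined with
    PSO-like attraction to personal and global best positions."""
    # Nonlinear (quadratic) adjustment of the control parameter a (assumed schedule).
    a = 2.0 * (1.0 - (t / max_iter) ** 2)

    r1, r2 = rng.random(positions.shape), rng.random(positions.shape)
    A, C = 2.0 * a * r1 - a, 2.0 * r2
    # GWO-style encircling of the best wolf (global best).
    encircle = gbest - A * np.abs(C * gbest - positions)

    # Modified position update: add pulls toward personal and global bests.
    c1, c2 = rng.random(positions.shape), rng.random(positions.shape)
    new_positions = encircle + c1 * (pbest - positions) + c2 * (gbest - positions)
    return np.clip(new_positions, lb, ub)
```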

2.
Evolutionary computations are very effective at performing global search (in probability); however, their speed of convergence can be slow. This paper presents an evolutionary programming algorithm combined with macro-mutation (MM), local linear bisection search (LBS) and crossover operators for global optimization. The MM operator is designed to explore the whole search space and the LBS operator to exploit the neighborhood of the current solution. Simulated annealing is adopted to prevent premature convergence. The performance of the proposed algorithm is assessed by numerical experiments on 12 benchmark problems. Combined with MM, the effectiveness of various local search operators is also studied.
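The sketch below shows one plausible form of a local bisection-style line search of the kind the LBS operator suggests, under the assumption that the operator bisects a step interval along a search direction and keeps the better half; the function name, interval bounds and iteration count are illustrative, not the paper's definition.

```python
import numpy as np

def local_bisection_search(f, x, direction, max_step=1.0, iters=20):
    """Bisect the step interval [0, max_step] along `direction`, repeatedly
    keeping the half whose endpoint has the lower objective value."""
    a, b = 0.0, max_step
    fa, fb = f(x + a * direction), f(x + b * direction)
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = f(x + m * direction)
        if fa <= fb:
            b, fb = m, fm   # shrink toward the left endpoint
        else:
            a, fa = m, fm   # shrink toward the right endpoint
    t = a if fa <= fb else b
    return x + t * direction
```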

3.
Large-scale global optimization (LSGO) is a very important and challenging task in the optimization domain, embedded in many scientific and engineering applications. In order to strengthen both the effectiveness and efficiency of LSGO algorithms, this paper designs a two-stage ensemble optimization evolutionary algorithm (EOEA) framework, which serially runs two sub-optimizers that focus mainly on exploration and exploitation, respectively. The EOEA framework can be easily generated, flexibly altered and modified according to different implementation conditions. In order to analyze the effects of EOEA's components, its performance on diverse kinds of problems is compared with that of its two sub-optimizers and three variants. To show its superiority over previous LSGO algorithms, its performance is compared with six classical LSGO algorithms on the LSGO test functions of the IEEE Congress on Evolutionary Computation (CEC 2008). The performance of EOEA is further evaluated by experimental comparison with four state-of-the-art LSGO algorithms on the test functions of the CEC 2010 LSGO competition. To benchmark the practical applicability of EOEA, it is applied to the parameter calibration problem of a water pipeline system. The experimental results on systems of diverse scales show that EOEA performs steadily and robustly.

4.
Evolutionary algorithms are robust and powerful global optimization techniques for solving large-scale problems that have many local optima. However, they require long CPU times and their convergence performance is poor. Local search algorithms, on the other hand, can converge in a few iterations but lack a global perspective. The combination of global and local search procedures should offer the advantages of both optimization methods while offsetting their disadvantages. This paper proposes a new hybrid optimization technique that merges a genetic algorithm with a local search strategy based on the interior point method. The efficiency of this hybrid approach is demonstrated by solving a constrained multi-objective mathematical test case.

5.
The classical Differential Evolution (DE) algorithm, one of the population-based Evolutionary Computation methods, has proved to be a successful approach for relatively simple problems, but it does not perform well on difficult multi-dimensional non-convex functions. A number of significant modifications of DE have been proposed in recent years, including a few approaches based on the idea of distributed Evolutionary Algorithms. This paper presents a new algorithm to improve optimization performance, namely DE with Separated Groups (DE-SG), which distributes the population into small groups, defines rules for the exchange of information and individuals between the groups, and uses two different strategies to keep a balance between exploration and exploitation capabilities. The performance of DE-SG is compared to that of eight algorithms belonging to the classes of Evolutionary Strategies (Covariance Matrix Adaptation ES), Particle Swarm Optimization (Comprehensive Learning PSO and Efficient Population Utilization Strategy PSO), Differential Evolution (Distributed DE with explorative-exploitative population families, Self-adaptive DE, DE with global and local neighbours, and Grouping Differential Evolution) and multi-algorithms (AMALGAM). The comparison is carried out on a set of 10-, 30- and 50-dimensional rotated test problems of varying difficulty, including 10- and 30-dimensional composition functions from CEC2005. Although slow on simple functions, the proposed DE-SG algorithm achieves a high success rate on the more difficult 30- and 50-dimensional problems.

6.
This paper introduces a novel global optimization heuristic algorithm based on the basic paradigms of Evolutionary Algorithms (EA). The algorithm greatly extends a previous strategy proposed by the authors in Munteanu and Lazarescu (1998). In the newly designed algorithm, the exploration/exploitation of the search space is adapted on-line based on the current features of the landscape being searched. The on-line adaptation mechanism involves a decision process as to whether more exploitation or exploration is needed, depending on the current progress of the algorithm and on the current estimated potential for discovering better solutions. Convergence with probability 1 in finite time and discrete space is analyzed, and an extensive comparison with other evolutionary optimization heuristics is performed on a set of test functions.

7.
Evolutionary Algorithms (EAs) are emerging as competitive and reliable techniques for several optimization tasks. Juxtaposing their higher-level and implicit correspondence, it is natural to ask whether one optimization algorithm can benefit from another by studying their underlying similarities and dissimilarities. This paper establishes a clear and fundamental algorithmic link between the particle swarm optimization (PSO) algorithm and genetic algorithms (GAs). Specifically, we select the task of solving unimodal optimization problems and demonstrate that key algorithmic features of an effective Generalized Generation Gap based Genetic Algorithm can be introduced into PSO by leveraging this algorithmic link, significantly enhancing PSO's performance. However, the goal of this paper is neither to solve unimodal problems nor to demonstrate that the modified PSO algorithm resembles a GA, but to highlight the concept of algorithmic linking as a step towards designing efficient optimization algorithms. We intend to emphasize that evolutionary and other optimization researchers should direct more effort towards establishing equivalences between different genetic, evolutionary and other nature-inspired or non-traditional algorithms. In addition to achieving performance gains, such an exercise should deepen the understanding and scope of various operators from different paradigms in Evolutionary Computation (EC) and other optimization methods.

8.
Stochastic optimization methods such as evolutionary algorithms and Markov Chain Monte Carlo methods usually involve a Markov search of the optimization domain. Evolutionary annealing is an evolutionary algorithm that leverages all the information gathered by previous queries to the cost function. Evolutionary annealing can be viewed either as simulated annealing with improved sampling or as a non-Markovian selection mechanism for evolutionary algorithms. This article develops the basic algorithm and presents implementation details. Evolutionary annealing is a martingale-driven optimizer, where evaluation yields a source of increasingly refined information about the fitness function. A set of experiments with twelve standard global optimization benchmarks is performed to compare evolutionary annealing with six other stochastic optimization methods. Evolutionary annealing outperforms the other methods on asymmetric, multimodal, non-separable benchmarks and exhibits strong performance on the others. It is therefore a promising new approach to global optimization.

9.
A fruit fly optimization algorithm based on the normal cloud model (NCMFOA) is proposed. By assigning the fruit fly position directly to the smell concentration judgment value and introducing a normal cloud model to characterize the randomness and fuzziness of the flies' olfactory search behavior, the algorithm removes the limitation of the original fruit fly optimization algorithm (FOA) that it cannot search negative-valued spaces, and it effectively overcomes FOA's tendency to fall into local optima on complex optimization problems. Through dynamic adjustment of the entropy of the normal cloud model, NCMFOA exhibits strong randomness and fuzziness in the early stage of evolution, which improves its global exploration ability; as the number of iterations increases, the randomness and fuzziness of the search behavior gradually weaken, so its local exploitation ability gradually strengthens and its convergence accuracy improves. In addition, a real-time vision-update scheme further accelerates convergence. Classical benchmark functions verify the feasibility and effectiveness of NCMFOA; the results show that the algorithm converges quickly, achieves high accuracy and is robust, and it also performs well on high-dimensional complex optimization problems. Applying NCMFOA to parameter estimation of chaotic systems further verifies its ability to solve practical engineering optimization problems.
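As a hedged illustration of the cloud-model search step summarized above, the sketch below samples candidate positions with the standard normal cloud generator (expectation Ex, entropy En, hyper-entropy He) and shrinks the entropy linearly over iterations; the function names and the linear entropy schedule are assumptions, not the paper's exact rules.

```python
import numpy as np

def cloud_model_search(center, En, He, n_drops, rng):
    """Normal cloud model sampling around `center` (the Ex of the cloud):
    the hyper-entropy He perturbs the entropy En, producing 'fuzzy' normal drops."""
    En_prime = rng.normal(En, He, size=(n_drops,) + np.shape(center))
    drops = rng.normal(center, np.abs(En_prime))
    return drops

def entropy_schedule(En0, t, max_iter):
    """Assumed dynamic entropy: broad (exploratory) early, narrow (exploitative) late."""
    return En0 * (1.0 - t / max_iter)
```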

10.
A cuckoo search algorithm for integer programming (total citations: 1; self-citations: 0; citations by others: 1)
Cuckoo search is a new intelligent optimization algorithm. This paper applies the basic cuckoo search algorithm to integer programming problems by truncating candidate solutions to integers. Simulation experiments on standard test functions and comparisons with particle swarm optimization show that the proposed algorithm has better performance and stronger global search ability than particle swarm optimization, and can therefore serve as a practical method for solving integer programming problems.
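The sketch below illustrates the truncation idea described above on top of a standard cuckoo-search move: a Lévy-flight step followed by truncation to the integer lattice and clipping to the box constraints. The step-size rule, function names and parameter values are illustrative assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(rng, size, beta=1.5):
    """Mantegna's algorithm for Levy-stable steps, commonly used in cuckoo search."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def integer_cuckoo_step(nest, best, lb, ub, rng, alpha=1.0):
    """One cuckoo move followed by truncation to integers (assumed variant)."""
    step = alpha * levy_flight(rng, nest.shape) * (nest - best)
    candidate = nest + step
    # Truncate to the integer lattice, then clip to the box constraints.
    return np.clip(np.trunc(candidate), lb, ub).astype(int)
```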

11.
The particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. The PSO algorithm has shown important advantages by providing fast convergence on specific problems, but it tends to get stuck in a near-optimal solution, and it can be difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, the global particle and the group particles. It is tested on a set of eight benchmark functions of different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm significantly improves search performance on the benchmark functions and is effective for solving optimization problems.
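A minimal sketch of a velocity update that mixes personal, global and group best information, in the spirit of the sharing mechanism described above; the coefficients, the random weighting and the definition of the "group best" term are assumptions rather than the paper's exact rule.

```python
import numpy as np

def dglc_velocity_update(v, x, pbest, gbest, group_best, w, c1, c2, c3, rng):
    """Velocity update combining three sources of shared best information."""
    r1, r2, r3 = rng.random(x.shape), rng.random(x.shape), rng.random(x.shape)
    return (w * v
            + c1 * r1 * (pbest - x)        # personal (local) experience
            + c2 * r2 * (gbest - x)        # global best experience
            + c3 * r3 * (group_best - x))  # shared group experience
```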

12.
Evolutionary algorithms (EAs) have become popular in global optimization, with applications in many industrial areas. However, premature convergence can occur when rugged contours are encountered. In the original genetic algorithm (GA), whether in single-population or multi-population settings, premature convergence is typically prevented by implementing various selection methods, penalty functions and mutation approaches. This work proposes a novel approach that performs efficient mutation to prevent premature convergence by introducing concepts from information theory. Information-guided mutation is applied to several variables, which are selected based on the information entropy derived in this work. The areas of search are also determined on the basis of the information amount obtained from previous searches. Several benchmark problems are solved to show the superiority of this information-guided EA. An industrial-scale problem is also presented.
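The sketch below illustrates one way an entropy measure over the population could guide which variables to mutate, as the abstract describes; the binning, the "mutate the highest-entropy variables" rule and the uniform resampling are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def variable_entropy(population, n_bins=10):
    """Per-variable Shannon entropy of the current population (histogram-based)."""
    n, d = population.shape
    ent = np.zeros(d)
    for j in range(d):
        counts, _ = np.histogram(population[:, j], bins=n_bins)
        p = counts[counts > 0] / n
        ent[j] = -np.sum(p * np.log(p))
    return ent

def entropy_guided_mutation(individual, population, lb, ub, rng, k=3):
    """Mutate the k variables with the highest entropy, resampling them
    uniformly within the box bounds (lb, ub are arrays)."""
    ent = variable_entropy(population)
    targets = np.argsort(ent)[-k:]
    mutant = individual.copy()
    mutant[targets] = rng.uniform(lb[targets], ub[targets])
    return mutant
```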

13.
To address the drawbacks of the fruit fly optimization algorithm, namely premature convergence, slow convergence and low search accuracy, a fruit fly optimization algorithm based on polar coordinate encoding is proposed. To improve search accuracy, polar coordinate encoding is adopted to increase the diversity of the search-space representation of a single parent and to let individuals in the population search randomly within the whole hypersphere surrounding them, which widens each individual's search range. During the iterative search, the polar angle is adjusted according to fitness values and probability, gradually reducing the uncertainty of the observations. Simulation experiments on nine benchmark functions show that the proposed algorithm outperforms five other optimization algorithms in convergence and stability, and the test results verify the effectiveness and feasibility of the polar coordinate encoding method.
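As a hedged illustration of the "search within the whole hypersphere around an individual" step described above, the sketch below draws a uniformly distributed point inside a ball around the current position using a random direction and a volume-corrected radius; the paper's fitness-based polar-angle adjustment is not reproduced, and the function name and radius rule are assumptions.

```python
import numpy as np

def hypersphere_sample(center, radius, rng):
    """Draw one point uniformly from the ball of given radius around `center`."""
    d = center.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)     # random unit direction
    r = radius * rng.random() ** (1.0 / d)     # radius giving uniform volume density
    return center + r * direction
```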

14.
The particle swarm optimization (PSO) technique is a powerful stochastic evolutionary algorithm that can be used to find the global optimum solution in a complex search space. This paper presents a variation on the standard PSO algorithm called the rank based particle swarm optimizer, or PSOrank, which employs cooperative behavior of the particles to significantly improve the performance of the original algorithm. In this method, in order to efficiently control the local search and the convergence to the global optimum solution, the γ best particles are taken to contribute to the updating of the position of a candidate particle. The contribution of each particle is proportional to its strength. The strength is a function of three parameters: strivness, immediacy and the number of contributing particles. All particles are sorted according to their fitness values, and only the γ best particles are selected. The value of γ decreases linearly as the iteration count increases. A time-varying, non-linearly decreasing inertia weight is introduced to improve performance. PSOrank is tested on a commonly used set of optimization problems and compared to other variants of the PSO algorithm presented in the literature. As a real application, PSOrank is used for neural network training. The PSOrank strategy outperformed all the methods considered in this investigation for most of the functions. Experimental results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
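A minimal sketch of a rank-based update of the kind described above: the γ best-ranked particles pull the candidate, with weights proportional to a rank-based strength, γ shrinking linearly and the inertia weight decaying non-linearly. The strength function, the schedules and all constants are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def psorank_update(x, v, particles, fitness, t, max_iter, rng,
                   gamma_max=None, w_max=0.9, w_min=0.4, c=2.0):
    """One candidate update pulled by the gamma best particles (minimization)."""
    n = len(particles)
    if gamma_max is None:
        gamma_max = n
    gamma = max(1, int(round(gamma_max * (1.0 - t / max_iter))))  # linear shrink
    w = w_min + (w_max - w_min) * (1.0 - t / max_iter) ** 2       # nonlinear decay

    order = np.argsort(fitness)[:gamma]           # indices of the gamma best
    strengths = 1.0 / (1.0 + np.arange(gamma))    # hypothetical rank-based strength
    strengths /= strengths.sum()

    pull = np.zeros_like(x)
    for s, idx in zip(strengths, order):
        pull += s * rng.random(x.shape) * (particles[idx] - x)
    v_new = w * v + c * pull
    return x + v_new, v_new
```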

15.
In this paper, a new gradient-related algorithm for solving large-scale unconstrained optimization problems is proposed. The new algorithm is a line search method. The basic idea is to choose a combination of the current gradient and some previous search directions as the new search direction and to find a step size using various inexact line searches. Using more information at the current iteration may improve the performance of the algorithm; this motivates the search for new gradient algorithms that may be more effective than standard conjugate gradient methods. The concept of being uniformly gradient-related is useful and can be used to analyze the global convergence of the new algorithm. The global convergence and linear convergence rate of the new algorithm are investigated under diverse weak conditions. Numerical experiments show that the new algorithm converges more stably and is superior to other similar methods in many situations.
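The sketch below illustrates the general idea of one such iteration: mix the steepest-descent direction with previous search directions, keep the result gradient-related (a descent direction), and pick the step by an inexact Armijo backtracking search. The mixing weights, backtracking constants and function names are assumptions, not the paper's scheme.

```python
import numpy as np

def gradient_related_step(f, grad, x, prev_dirs, beta=0.3, c1=1e-4, t0=1.0):
    """One line-search iteration with a gradient-related combined direction."""
    g = grad(x)
    d = -g + sum(0.1 * p for p in prev_dirs)   # hypothetical combination of directions
    if np.dot(g, d) >= 0:                      # not a descent direction:
        d = -g                                 # fall back to steepest descent
    t, fx = t0, f(x)
    while f(x + t * d) > fx + c1 * t * np.dot(g, d):   # Armijo condition
        t *= beta
        if t < 1e-12:
            break
    return x + t * d, d
```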

16.
Scale factor local search in differential evolution (total citations: 8; self-citations: 0; citations by others: 8)
This paper proposes the scale factor local search differential evolution (SFLSDE). The SFLSDE is a differential evolution (DE) based memetic algorithm which employs, within a self-adaptive scheme, two local search algorithms. These local search algorithms aim at detecting a value of the scale factor that corresponds to an offspring with high performance while the generation is executed. The local search algorithms thus assist the global search and generate high-performing offspring, which are in turn expected to promote the generation of enhanced solutions within the evolutionary framework. Despite its simplicity, the proposed algorithm shows very good performance on various test problems. Numerical results are presented to justify the use of a double local search instead of a single search. In addition, the SFLSDE is compared with a standard DE and three other modern DE-based metaheuristics on a large and varied set of test problems. Numerical results are given for relatively low- and high-dimensional cases. A statistical analysis of the optimization results is included in order to compare the methods in terms of final solution detected and convergence speed. The efficiency of the proposed algorithm appears to be very high, especially for large-scale problems and complex fitness landscapes.
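As a simplified stand-in for the scale-factor local search idea above, the sketch below tries several scale-factor values for one DE/rand/1 mutation plus binomial crossover and keeps the best offspring. The real SFLSDE uses golden-section and hill-climb searches on the scale factor within a self-adaptive scheme; the grid of trial values and the function name here are assumptions for illustration only.

```python
import numpy as np

def scale_factor_search(f, x_i, x_r1, x_r2, x_r3, cr, rng,
                        trials=(0.3, 0.5, 0.7, 0.9)):
    """Try several scale factors F for one DE/rand/1 offspring; keep the best."""
    d = x_i.shape[0]
    best_off, best_val = None, np.inf
    for F in trials:
        mutant = x_r1 + F * (x_r2 - x_r3)       # DE/rand/1 mutation
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True            # guarantee at least one mutated gene
        offspring = np.where(mask, mutant, x_i) # binomial crossover
        val = f(offspring)
        if val < best_val:
            best_off, best_val = offspring, val
    return best_off, best_val
```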

17.
In this paper we present a chaos-based evolutionary algorithm (EA) for solving nonlinear programming problems, named the chaotic genetic algorithm (CGA). CGA integrates a genetic algorithm (GA) with a chaotic local search (CLS) strategy to accelerate the optimum-seeking operation and to speed up convergence to the global solution. The integration of the global search performed by the genetic algorithm with the CLS procedure should offer the advantages of both optimization methods while offsetting their disadvantages. In this way, the aim is to enhance global convergence and to avoid getting stuck in a local solution. The inherent characteristics of chaos can enhance optimization algorithms by enabling them to escape from local solutions and improving convergence toward the global solution. Twelve chaotic maps have been analyzed in the proposed approach. The simulation results on the CEC'2005 test set show that applying chaotic maps may be an effective strategy to improve the performance of EAs.
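The sketch below shows a chaotic local search built on the logistic map, one of the many maps such approaches analyze (the paper studies twelve): candidates are generated in a shrinking neighborhood of the incumbent using the chaotic sequence. The neighborhood factor, step count, seed and function name are assumptions.

```python
import numpy as np

def chaotic_local_search(f, best, lb, ub, n_steps=50, shrink=0.1, z0=0.7):
    """Chaotic local search around `best` driven by the logistic map (r = 4)."""
    z = z0
    x_best, f_best = best.copy(), f(best)
    radius = shrink * (ub - lb)                      # local search neighborhood
    for _ in range(n_steps):
        z = 4.0 * z * (1.0 - z)                      # logistic map iterate in (0, 1)
        candidate = np.clip(best + (2.0 * z - 1.0) * radius, lb, ub)
        val = f(candidate)
        if val < f_best:
            x_best, f_best = candidate.copy(), val
    return x_best, f_best
```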

18.
To further improve the convergence speed, accuracy and stability of the differential evolution algorithm, a multi-population technique is adopted to increase the convergence speed and reduce complexity; an elite-region learning strategy is used to further improve the algorithm's global search ability and accuracy; and an adaptive immune search strategy is introduced to adaptively correct the mutation factor and crossover factor of the differential algorithm. On five test functions, the proposed algorithm is compared with algorithms from the recent literature, demonstrating its superiority in convergence speed, accuracy and the ability to optimize high-dimensional problems.

19.
To address the shortcomings of the standard grey wolf optimizer, namely poor population diversity, slow late-stage convergence and a tendency to fall into local optima, an improved grey wolf algorithm is proposed. An improved Tent chaotic map is used to initialize the population and increase its diversity; a spiral function is introduced to speed up convergence; the idea of simulated annealing is incorporated to avoid local optima; a search threshold is set to balance global and local search; the improved Tent chaotic map is used to generate new individuals that replace poorly performing ones and receive Gaussian perturbation, improving search accuracy; and the current solution and the new solution are combined by arithmetic crossover, retaining the merits of the current solution while reducing the perturbation difference. Algorithm performance is tested on benchmark functions and on a model for siting and initially allocating shared-bicycle parking points. The results show that the improved grey wolf algorithm converges faster, searches more accurately and performs better than the standard grey wolf algorithm, the genetic algorithm and particle swarm optimization; applying it to the siting of shared-bicycle parking verifies its effectiveness.
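A minimal sketch of the Tent-map chaotic population initialization mentioned above, assuming the common Tent map with a small random kick to avoid fixed points and short cycles; the map parameter, the perturbation size and the function name are assumptions, and the paper's "improved" variant may differ.

```python
import numpy as np

def tent_chaos_init(pop_size, dim, lb, ub, rng, mu=0.7):
    """Initialize a population by iterating a Tent chaotic sequence in [0, 1]
    and mapping each iterate onto the box [lb, ub]."""
    seq = rng.random(dim)
    population = np.empty((pop_size, dim))
    for i in range(pop_size):
        # Tent map iteration; a tiny random kick avoids degenerate cycles.
        seq = np.where(seq < mu, seq / mu, (1.0 - seq) / (1.0 - mu))
        seq = np.clip(seq + 1e-6 * rng.random(dim), 0.0, 1.0)
        population[i] = lb + seq * (ub - lb)
    return population
```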

20.
Based on the mechanism of biological DNA genetic information and evolution, a modified DNA genetic algorithm (MDNA-GA) is proposed to estimate the kinetic parameters of 2-Chlorophenol oxidation in supercritical water. In this approach, a DNA encoding method, a choose crossover operator and a frame-shift mutation operator inspired by biological DNA are developed to improve the global searching ability. In addition, an adaptive mutation probability that adjusts automatically according to the diversity of the population is adopted. A local search method is used to explore the search space and accelerate convergence towards the global optimum. The performance of MDNA-GA on typical benchmark functions and in kinetic parameter estimation is studied and compared with RNA-GA. The experimental results demonstrate that the proposed algorithm can overcome premature convergence and yield the global optimum with high efficiency.
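The sketch below illustrates one plausible frame-shift-style mutation on a DNA-like quaternary string (bases encoded as 0-3): insert a random base at a random locus and drop the last base so the length is preserved, shifting the downstream "reading frame". The exact operator in the paper may differ; this is only an illustration of the idea.

```python
import numpy as np

def frame_shift_mutation(chromosome, rng):
    """Insert a random base (0-3) at a random position and trim the tail,
    preserving chromosome length while shifting the downstream frame."""
    pos = rng.integers(len(chromosome))
    base = rng.integers(4)                           # one of the four nucleotides
    return np.insert(chromosome[:-1], pos, base)     # insert, keep original length
```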
