Similar literature
 20 similar documents found
1.
Metaheuristic optimization algorithms have become a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods. In the present study an attempt is made to review hybrid optimization techniques in which one main algorithm is a well-known metaheuristic, particle swarm optimization (PSO). Hybridization is a method of combining two (or more) techniques in a judicious manner such that the resulting algorithm retains the positive features of both (or all) of them. Depending on the algorithms used, we make three classifications: (i) hybridization of PSO with genetic algorithms, (ii) hybridization of PSO with differential evolution, and (iii) hybridization of PSO with other techniques, where the other techniques include various local and global search methods. Besides the review, we also compare three hybrid PSO algorithms, hybrid differential evolution particle swarm optimization (DE-PSO), adaptive mutation particle swarm optimization (AMPSO) and hybrid genetic algorithm particle swarm optimization (GA-PSO), on a test suite of nine conventional benchmark problems.
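All of the hybrids surveyed above are built on the same canonical PSO velocity and position update. The sketch below shows only that baseline loop as a point of reference; the parameter values (w, c1, c2), the bounds and the sphere objective are illustrative choices and are not taken from the reviewed algorithms.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Canonical (global-best) PSO minimizing f over [lo, hi]^dim."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))            # positions
    v = np.zeros((n_particles, dim))                       # velocities
    pbest = x.copy()                                       # personal best positions
    pbest_f = np.apply_along_axis(f, 1, x)                 # personal best values
    gbest = pbest[pbest_f.argmin()].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```

The hybrids in the three classes above replace or augment parts of this loop, for example by applying GA crossover and mutation to the swarm or DE-style recombination to the personal bests.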

2.
The particle swarm optimization (PSO) technique is a powerful stochastic evolutionary algorithm that can be used to find the global optimum in a complex search space. This paper presents a variation on the standard PSO algorithm called the rank-based particle swarm optimizer, or PSOrank, which employs cooperative behavior of the particles to significantly improve the performance of the original algorithm. In this method, in order to efficiently control the local search and the convergence to the global optimum, the γ best particles contribute to the updating of the position of a candidate particle. The contribution of each particle is proportional to its strength, where the strength is a function of three parameters: strivness, immediacy and the number of contributing particles. All particles are sorted according to their fitness values, and only the γ best particles are selected; the value of γ decreases linearly as the iterations proceed. A time-varying inertia weight that decreases non-linearly is introduced to further improve performance. PSOrank is tested on a commonly used set of optimization problems and compared to other variants of the PSO algorithm from the literature; as a real application, PSOrank is used for neural network training. The PSOrank strategy outperformed all the methods considered in this investigation on most of the functions, and the experimental results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
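The mechanism described above, with the γ best particles all pulling a candidate particle through strength-weighted contributions, γ shrinking linearly and the inertia weight decaying non-linearly, can be sketched roughly as below. The paper's strength function (strivness, immediacy, number of contributing particles) is not reproduced here; a simple rank-proportional weight stands in for it, and both schedules are illustrative assumptions.

```python
import numpy as np

def rank_based_velocity(v_i, x_i, pbest_i, pbest, pbest_f, w, c1, c2, gamma, rng):
    """Velocity update for one particle pulled by the gamma best personal bests."""
    leaders = np.argsort(pbest_f)[:gamma]            # indices of the gamma best particles
    strength = 1.0 / (np.arange(gamma) + 1.0)        # stand-in: rank-proportional strength
    strength /= strength.sum()
    social = sum(s * rng.random(x_i.shape) * (pbest[j] - x_i)
                 for s, j in zip(strength, leaders))
    cognitive = c1 * rng.random(x_i.shape) * (pbest_i - x_i)
    return w * v_i + cognitive + c2 * social

# Illustrative schedules: gamma decreases linearly from the swarm size n to 1,
# and the inertia weight decays non-linearly from w0 to w1 over T iterations.
def gamma_schedule(t, T, n):
    return max(1, int(round(n - (n - 1) * t / T)))

def inertia_schedule(t, T, w0=0.9, w1=0.4):
    return w1 + (w0 - w1) * (1.0 - t / T) ** 2
```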

3.
The particle swarm optimization (PSO) algorithm has developed rapidly and many results have been reported. PSO has shown important advantages, providing fast convergence on specific problems, but it has a tendency to get stuck in a near-optimal solution, and it can be difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, the global particle and the group particles. It is tested on a set of eight benchmark functions of different dimensions and compared with the original PSO. Experimental results indicate that DGLCPSO significantly improves the search performance on the benchmark functions and show the effectiveness of the algorithm for solving optimization problems.

4.
Cluster analysis is an important task in data mining and refers to grouping a set of objects such that the similarities among objects within the same group are maximal while similarities among objects from different groups are minimal. The particle swarm optimization (PSO) algorithm is one of the best-known metaheuristic optimization algorithms and has been successfully applied to the clustering problem. However, it has two major shortcomings: it converges rapidly during the initial stages of the search process, but near the global optimum the convergence speed becomes very slow; moreover, it may get trapped in a local optimum if the global best and local best values stay equal to the particle's position over a certain number of iterations. In this paper we hybridize PSO with a heuristic search algorithm to overcome these shortcomings. In the proposed algorithm, called PSOHS, particle swarm optimization is used to produce an initial solution to the clustering problem, and a heuristic search algorithm is then applied to improve the quality of this solution by searching around it. The superiority of the proposed PSOHS clustering method over other popular clustering methods is established on seven benchmark and real datasets, including Iris, Wine, Crude Oil, Cancer, CMC, Glass and Vowel.

5.
To address the tendency of the basic particle swarm optimization algorithm to become trapped in local optima, an immune-escape particle swarm optimization algorithm is proposed. Its basic idea is to divide the initial swarm into a parasite population and a host population to simulate biological parasitic behavior: an elite learning strategy is applied to the particles of the parasite population, an exploration strategy is applied to the particles of the host population, and the hypermutation mechanism of the immune system is then introduced to give the parasite population a corresponding immune-escape mechanism, strengthening the swarm's ability to escape local optima and improving the algorithm's global search capability. Experimental results on standard test functions show that the algorithm achieves significant improvements in both convergence speed and solution accuracy.

6.
Chaotic catfish particle swarm optimization (C-CatfishPSO) is a novel optimization algorithm proposed in this paper. C-CatfishPSO introduces chaotic maps into catfish particle swarm optimization (CatfishPSO), which increases the search capability of CatfishPSO via the chaos approach. CatfishPSO itself relies on the incorporation of catfish particles into particle swarm optimization (PSO); the introduced catfish particles improve the performance of PSO considerably. Unlike ordinary particles, the catfish particles initialize a new search from extreme points of the search space when the gbest fitness value (the global optimum at each iteration) has not changed for a certain number of consecutive iterations. This gives the swarm further opportunities to find better solutions by guiding it to promising new regions of the search space and accelerating the search. The introduced chaotic maps strengthen the solution quality of PSO and CatfishPSO significantly; the resulting improved algorithms are called chaotic PSO (C-PSO) and chaotic CatfishPSO (C-CatfishPSO), respectively. PSO, C-PSO, CatfishPSO, C-CatfishPSO, as well as other advanced PSO procedures from the literature, were extensively compared on several benchmark test functions. Statistical analysis of the experimental results indicates that C-CatfishPSO performs better than PSO, C-PSO and CatfishPSO, and that it is also superior to advanced PSO methods from the literature.
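The two ingredients named above can be sketched as follows: a chaotic map used as the random-number source (a logistic map is a common choice; the paper's exact maps are not reproduced here) and a catfish step that re-initializes the worst particles at extreme points of the search space once gbest has stagnated. The stagnation threshold and the fraction of particles reset are illustrative assumptions.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """One step of the logistic map; with mu = 4 it generates a chaotic sequence in (0, 1)."""
    return mu * z * (1.0 - z)

def catfish_reset(x, fx, lo, hi, frac=0.1, rng=None):
    """Reinitialize the worst `frac` of the swarm at extreme points (bounds) of the space."""
    if rng is None:
        rng = np.random.default_rng()
    n = max(1, int(frac * len(x)))
    worst = np.argsort(fx)[-n:]                      # worst particles (minimization)
    x[worst] = np.where(rng.random(x[worst].shape) < 0.5, lo, hi)
    return x

# Inside the main PSO loop one would track stagnation of the global best, e.g.
#   stagnant = stagnant + 1 if gbest_f >= prev_gbest_f else 0
#   if stagnant >= patience:
#       x = catfish_reset(x, fx, lo, hi)
```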

7.
In this paper, a novel memetic algorithm (MA) named GS-MPSO is proposed by combining particle swarm optimization (PSO) with a Gaussian mutation operator and a simulated annealing (SA)-based local search operator. In GS-MPSO, the particles are organized as a ring lattice. The Gaussian mutation operator is applied to stagnant particles to prevent GS-MPSO from being trapped in local optima. The SA-based local search strategy is combined with the cognition-only PSO model to perform a fine-grained local search around promising regions. Experimental results show that GS-MPSO is superior to several other PSO variants, with better performance on the benchmark functions when computing resources are limited. Data clustering is also studied as a real-world case study to further demonstrate its optimization ability and usability.
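A rough sketch of the two operators named above follows: a Gaussian mutation applied to a stagnant particle, and a simulated-annealing acceptance test used during the local search. The step size, bounds handling and temperature usage are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_mutation(x_i, sigma, lo, hi, rng):
    """Perturb a stagnant particle with Gaussian noise, clipped to the search bounds."""
    return np.clip(x_i + rng.normal(0.0, sigma, x_i.shape), lo, hi)

def sa_accept(f_new, f_old, temperature, rng):
    """SA acceptance for a local move (minimization): always accept improvements,
    accept worse moves with Boltzmann probability."""
    if f_new <= f_old:
        return True
    return rng.random() < np.exp(-(f_new - f_old) / max(temperature, 1e-12))
```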

8.
This study presents tribal particle swarm optimization (TPSO) to optimize the parameters of the functional-link-based neurofuzzy inference system (FLNIS) for prediction applications. The proposed TPSO uses particle swarm optimization (PSO) as the evolution strategy of the tribes optimization algorithm (TOA) to balance local and global exploration of the search space. TPSO uses a self-clustering algorithm to divide the particle swarm into multiple tribes and selects suitable evolution strategies to update each particle. It also uses a tribal adaptation mechanism to remove and generate particles and to reconstruct tribal links, which improves the quality of the tribes and their adaptation. Finally, the FLNIS model with the proposed TPSO (FLNIS-TPSO) was used in several predictive applications. Experimental results demonstrate that the proposed TPSO method converges quickly and yields a lower RMS error than other current methods.

9.
Particle swarm optimization (PSO) was originally developed as an unconstrained optimization technique and therefore lacks an explicit mechanism for handling constraints. When solving constrained optimization problems (COPs) with PSO, the existing research mainly focuses on how to handle constraints, while the impact of constraints on the inherent search mechanism of PSO has scarcely been explored. Motivated by this fact, in this paper we mainly investigate how to utilize the impact of constraints (or the knowledge about the feasible region) to improve the optimization ability of the particles. Based on these investigations, we present a modified PSO, called self-adaptive velocity particle swarm optimization (SAVPSO), for solving COPs. To handle constraints, SAVPSO adopts our recently proposed dynamic-objective constraint-handling method (DOCHM), which is essentially a constituent part of the inherent search mechanism of the integrated SAVPSO, i.e., DOCHM + SAVPSO. The performance of the integrated SAVPSO is tested on a well-known benchmark suite, and the experimental results show that appropriately utilizing the knowledge about the feasible region can substantially improve the performance of the underlying algorithm in solving COPs.

10.
Heuristic optimization provides a robust and efficient approach for solving complex real-world problems. The aim of this paper is to introduce a hybrid approach combining two heuristic optimization techniques, particle swarm optimization (PSO) and genetic algorithms (GA). Our approach integrates the merits of both GA and PSO and has two characteristic features. First, the algorithm is initialized with a set of random particles that travel through the search space; during this travel the particles evolve through the integrated PSO and GA operators. Second, to restrict and control the velocity of the particles, we introduce a modified constriction factor. Finally, the results of various experimental studies using a suite of multimodal test functions taken from the literature demonstrate the superiority of the proposed approach in finding the global optimal solution.
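The abstract mentions a modified constriction factor but does not give the modification itself, so the sketch below only shows the standard Clerc–Kennedy constriction coefficient that such modifications typically start from (with φ = c1 + c2 > 4), as an assumption-labelled reference point rather than the paper's formula.

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Standard Clerc-Kennedy constriction coefficient chi (requires c1 + c2 > 4)."""
    phi = c1 + c2
    if phi <= 4.0:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# Constricted velocity update:
#   v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
# For c1 = c2 = 2.05 this gives chi ≈ 0.7298.
```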

11.
A hybrid particle swarm algorithm for nonlinear constrained optimization problems
高岳林  李会荣 《计算数学》2010,32(2):135-146
By combining an exterior-point method for handling constraints with an improved particle swarm optimization algorithm, a hybrid particle swarm optimization algorithm for solving nonlinear constrained optimization problems is proposed. The method combines the advantages of particle swarm optimization and the exterior-point method: infeasible particles that appear during the iterations are processed by the exterior-point method to produce feasible particles. Numerical experiments show that the proposed algorithm is effective, general, and robust.
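The exterior-point idea described above can be pictured as a penalty wrapper: infeasible particles are not discarded but penalized in proportion to their constraint violation, so the swarm effectively minimizes an unconstrained function. The quadratic penalty form and the value of rho below are illustrative assumptions, not the paper's exact construction.

```python
def exterior_penalty(f, ineq, eq, rho=1e3):
    """Build F(x) = f(x) + rho * (sum max(0, g_i(x))^2 + sum h_j(x)^2)
    for constraints g_i(x) <= 0 and h_j(x) = 0."""
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in ineq) + sum(h(x) ** 2 for h in eq)
        return f(x) + rho * violation
    return F

# Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1, written as 1 - x0 - x1 <= 0.
F = exterior_penalty(lambda x: x[0] ** 2 + x[1] ** 2,
                     ineq=[lambda x: 1.0 - x[0] - x[1]],
                     eq=[])
```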

12.
Parametric optimization of flexible satellite controllers is essential for almost all modern satellites. The particle swarm algorithm is a global optimization algorithm, but it suffers from two major shortcomings: premature convergence and low search accuracy. To solve these problems, this paper proposes an improved particle swarm optimization (IPSO) which replaces poorly fitted particles through a crossover operation. Based on a decision probability, the crossover operation can interchange local optima among three particles. The swarm is then split into two halves, and new particles are generated by crossing the dimensions of particles from both halves, producing a new swarm. The new and old swarms are mixed, and half of the particles are selected for the next generation on the basis of relative fitness. As a result of the crossover operation, IPSO can easily escape local optima, has improved search accuracy and converges faster. Several test functions with different dimensions are used to analyze the performance of the IPSO algorithm. Simulation results show that IPSO outperforms standard PSO and genetic-algorithm PSO (GAPSO), with more stable performance and lower complexity. IPSO is then applied to parametric optimization of a flexible satellite controller, for a satellite with solar wings and antennae. Simulation results show that IPSO obtains better controller parameters than the other optimization methods.

13.
This paper proposes the hybrid NM-PSO algorithm based on the Nelder–Mead (NM) simplex search method and particle swarm optimization (PSO) for unconstrained optimization. NM-PSO is very easy to implement in practice since it does not require gradient computation. The modification of both the Nelder–Mead simplex search method and particle swarm optimization is intended to produce faster and more accurate convergence. The main purpose of the paper is to demonstrate how standard particle swarm optimizers can be improved by incorporating a hybridization strategy. On a suite of 20 test function problems taken from the literature, computational results from a comprehensive experimental study, preceded by an investigation of parameter selection, show that the hybrid NM-PSO approach outperforms three other relevant search techniques (the original NM simplex search method, the original PSO, and guaranteed convergence particle swarm optimization (GCPSO)) in terms of solution quality and convergence rate. In a later part of the comparative experiment, NM-PSO is compared to several up-to-date cooperative PSO (CPSO) procedures from the literature; this comparison again largely favors NM-PSO in terms of accuracy, robustness and number of function evaluations. As evidenced by the overall assessment based on these two kinds of computational experience, the new algorithm is shown to be highly effective and efficient at locating high-quality optimal solutions for unconstrained optimization.
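One simple way to picture the coupling described above: PSO supplies a promising candidate and a Nelder–Mead simplex search refines it without any gradient information. The sketch below uses scipy's Nelder–Mead implementation as a stand-in for the simplex component; the actual NM-PSO interleaves the two methods more tightly than this post-hoc refinement step.

```python
import numpy as np
from scipy.optimize import minimize

def nm_refine(f, x0, maxiter=200):
    """Derivative-free refinement of a PSO candidate with a Nelder-Mead simplex search."""
    res = minimize(f, x0, method="Nelder-Mead",
                   options={"maxiter": maxiter, "xatol": 1e-8, "fatol": 1e-8})
    return res.x, res.fun

# Typical use (gbest_from_pso is a placeholder for the swarm's best position):
#   x_refined, f_refined = nm_refine(objective, gbest_from_pso)
```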

14.
A particle swarm with innovation particles added
Particle swarm optimization is a stochastic parallel algorithm based on swarm intelligence that has been applied successfully to many optimization problems. To address the tendency of particle swarms to become trapped in local optima, this paper proposes a particle swarm with added innovation particles. Simulation results show that the particle swarm with innovation particles achieves better results and faster convergence.

15.
宋健  邓雪 《运筹与管理》2018,27(9):148-155
For the fuzzy and uncertain securities market, the possibilistic mean, lower possibilistic variance, and covariance are used to replace the probabilistic mean, variance, and covariance in the portfolio model, yielding a bi-objective mean-variance portfolio model. The bi-objective model is then converted into a single-objective model by linear weighting, and a hybrid PSO-AFSA algorithm is proposed to solve it. In this hybrid algorithm, the result found by particle swarm optimization is used as the initial fish school of the artificial fish swarm algorithm for further search, which effectively prevents the particle swarm algorithm from becoming trapped in local optima. At the same time, the best position found by the artificial fish swarm is fed back into the velocity-update formula of the particle swarm algorithm to guide particle movement and accelerate convergence. Finally, a numerical example shows that the hybrid PSO-AFSA algorithm is effective and that the global optimum it finds is better than that found by the basic particle swarm algorithm.

16.
An algorithm based on particle swarm optimization for solving nonlinear bilevel programming problems
Particle swarm optimization (PSO) is an emerging optimization technique whose ideas originate from artificial life and evolutionary computation. PSO performs optimization by having each particle follow the best solution it has found itself and the best solution found by the whole swarm. The algorithm is simple to implement, has few tunable parameters, and has been widely studied and applied. Exploiting PSO's ability to find global optima of non-convex mathematical programs, this paper solves the upper- and lower-level problems of nonlinear bilevel programs and, based on the structure of bilevel programming, presents an effective algorithm for finding the global optimum of nonlinear bilevel programming problems. Numerical results show that the algorithm is effective.

17.
To address the premature convergence and inefficiency of the standard particle swarm algorithm when solving the traveling salesman problem, this paper proposes a hybrid particle swarm algorithm (SKHPSO) that combines a Greedy Heuristic initial solution with particle swarm optimization. The algorithm uses a Kruskal-like procedure given in this paper as the concrete implementation of the Greedy Heuristic to generate a good initial feasible solution, which is inserted into the swarm as one of its members; an improved hybrid particle swarm algorithm then performs the heuristic search. The local search of SKHPSO draws on Lin-Kernighan neighborhood search, while the global search incorporates the crossover and permutation operations of genetic algorithms. Tests on typical instances from TSPLIB show that SKHPSO clearly improves solution quality and efficiency.

18.
To remedy the weak local search ability of particle swarm optimization and its slow convergence in the later stage, an improved particle swarm algorithm is proposed in which a quasi-Newton method is added in the later phase of the run. This exploits the global search ability of particle swarm optimization and the fine-grained local search ability of the quasi-Newton method, overcoming the shortcomings of PSO. Transcendental equations are recast as function-optimization problems and solved with the algorithm; numerical experiments show that it attains high convergence speed and solution accuracy.
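The recasting described above, where a system of transcendental equations F(x) = 0 is turned into minimization of the squared residual and a quasi-Newton step refines the swarm's rough solution, can be sketched as below. The example system, the starting point and the use of scipy's BFGS routine are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def residual_objective(F):
    """Turn a vector-valued equation system F(x) = 0 into the scalar objective ||F(x)||^2."""
    return lambda x: float(np.sum(np.asarray(F(x)) ** 2))

# Example system:  cos(x0) - x1 = 0,  x0 - sin(x1) = 0
F = lambda x: [np.cos(x[0]) - x[1], x[0] - np.sin(x[1])]
obj = residual_objective(F)

x_rough = np.array([0.5, 0.5])                  # stand-in for the swarm's best particle
res = minimize(obj, x_rough, method="BFGS")     # quasi-Newton refinement of the rough solution
```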

19.
We propose a novel cooperative swarm intelligence algorithm to solve multi-objective discrete optimization problems (MODP). Our algorithm combines a firefly algorithm (FA) and particle swarm optimization (PSO). Basically, we address three main points: the effect of FA and PSO cooperation on the exploration of the search space, the discretization of the two algorithms using a transfer function, and finally the use of the epsilon dominance relation to manage the size of the external archive and to guarantee the convergence and diversity of the Pareto optimal solutions. We compared the results of our algorithm with the results of five well-known meta-heuristics on nine multi-objective knapsack problem benchmarks. The experiments clearly show the ability of our algorithm to provide a better spread of solutions with better convergence behavior.
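The transfer-function discretization mentioned above is commonly realized with a sigmoid that maps a continuous velocity to a bit-flip probability, which is how real-valued PSO/FA updates can drive binary decision vectors such as knapsack selections. The sketch below shows only that generic device; the paper's particular transfer function and its epsilon-dominance archive are not reproduced.

```python
import numpy as np

def sigmoid_transfer(v):
    """Map a real-valued velocity component to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def discretize(v, rng):
    """Sample a 0/1 position vector from the transfer probabilities of velocity v."""
    return (rng.random(v.shape) < sigmoid_transfer(v)).astype(int)

# Example: a velocity of +4 sets the bit to 1 with probability ~0.98, -4 with ~0.02.
rng = np.random.default_rng(0)
bits = discretize(np.array([4.0, -4.0, 0.0]), rng)
```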

20.
Particle swarm optimization (PSO) is characterized by fast convergence, which can lead algorithms of this class to stagnate in local optima. In this paper, a variant of the standard PSO algorithm is presented, called PSO-2S, based on several initializations in different zones of the search space using charged particles. This algorithm uses two kinds of swarms: a main swarm that gathers the best particles of auxiliary swarms, which are initialized several times. The auxiliary swarms are initialized in different areas, and an electrostatic repulsion heuristic is then applied in each area to increase its diversity. We analyse the performance of the proposed approach on a testbed made of unimodal and multimodal test functions, with and without coordinate rotation and shift; the Lennard-Jones potential problem is also used. The proposed algorithm is compared to several other PSO algorithms on this benchmark, and the obtained results show its efficiency.
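An electrostatic-repulsion pass like the one described above can be sketched as each particle in an auxiliary swarm being pushed away from every other particle by a Coulomb-like force that decays with distance, spreading the swarm over its zone. The force law, step size and single-pass form below are illustrative assumptions, not the paper's exact heuristic.

```python
import numpy as np

def repulsion_pass(x, step=0.1, eps=1e-9):
    """One repulsion pass over a small swarm x of shape (n, dim) using Coulomb-like forces."""
    n = len(x)
    forces = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = x[i] - x[j]
            dist = np.linalg.norm(d) + eps
            forces[i] += d / dist ** 3          # magnitude 1/dist^2 along the unit vector d/dist
    return x + step * forces
```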
