Similar articles
10 similar articles found (search time: 109 ms)
1.
Chaotic catfish particle swarm optimization (C-CatfishPSO) is a novel optimization algorithm proposed in this paper. C-CatfishPSO introduces chaotic maps into catfish particle swarm optimization (CatfishPSO), increasing the search capability of CatfishPSO via the chaos approach. Plain CatfishPSO relies on the incorporation of catfish particles into particle swarm optimization (PSO); the introduced catfish particles improve the performance of PSO considerably. Unlike ordinary particles, the catfish particles initialize a new search from extreme points of the search space when the gbest fitness value (the global optimum at each iteration) has not changed for a certain number of consecutive iterations. This creates further opportunities for the swarm to find better solutions by guiding it to promising new regions of the search space and accelerating the search. The introduced chaotic maps significantly strengthen the solution quality of PSO and CatfishPSO; the resulting improved algorithms are called chaotic PSO (C-PSO) and chaotic CatfishPSO (C-CatfishPSO), respectively. PSO, C-PSO, CatfishPSO and C-CatfishPSO, as well as other advanced PSO procedures from the literature, were extensively compared on several benchmark test functions. Statistical analysis of the experimental results indicates that C-CatfishPSO outperforms PSO, C-PSO and CatfishPSO, and that it is also superior to advanced PSO methods from the literature.
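A minimal sketch of the two ideas described above, assuming a logistic map as the chaotic map and a simple stagnation counter; the function names, the stall_limit and ratio parameters, and the way catfish particles are placed at the bounds are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    # One step of the logistic map, a common choice of chaotic map.
    # Chaotic PSO variants typically substitute such sequences for the
    # uniform random numbers in the velocity update (an assumption here;
    # the abstract does not fix a specific map or usage).
    return mu * z * (1.0 - z)

def catfish_reinit(positions, fitness, lower, upper,
                   stall_counter, stall_limit=10, ratio=0.1):
    """If gbest has stagnated for `stall_limit` iterations, replace the worst
    `ratio` of the swarm with 'catfish' particles placed at the extremes of
    the search space (half at the lower bound, half at the upper bound)."""
    if stall_counter < stall_limit:
        return positions
    n = len(positions)
    k = max(1, int(ratio * n))
    worst = np.argsort(fitness)[-k:]      # assume minimization: largest fitness is worst
    half = k // 2
    positions[worst[:half]] = lower       # catfish at one extreme of the search space
    positions[worst[half:]] = upper       # catfish at the other extreme
    return positions
```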

2.
Particle swarm optimization (PSO) was originally developed as an unconstrained optimization technique and therefore lacks an explicit mechanism for handling constraints. When solving constrained optimization problems (COPs) with PSO, existing research mainly focuses on how to handle constraints, while the impact of constraints on the inherent search mechanism of PSO has scarcely been explored. Motivated by this fact, in this paper we investigate how to exploit the impact of constraints (i.e., knowledge about the feasible region) to improve the optimization ability of the particles. Based on these investigations, we present a modified PSO, called self-adaptive velocity particle swarm optimization (SAVPSO), for solving COPs. To handle constraints, SAVPSO adopts our recently proposed dynamic-objective constraint-handling method (DOCHM), which is essentially a constituent part of the inherent search mechanism of the integrated SAVPSO, i.e., DOCHM + SAVPSO. The performance of the integrated SAVPSO is tested on a well-known benchmark suite, and the experimental results show that appropriately utilizing knowledge about the feasible region can substantially improve the performance of the underlying algorithm in solving COPs.

3.
A generalization of the particle swarm optimization (PSO) algorithm is presented in this paper. The novel optimizer, the generalized PSO (GPSO), is inspired by linear control theory and enables direct control over the key aspects of particle dynamics during the optimization process. A detailed theoretical and empirical analysis is presented, and parameter-tuning schemes are proposed. GPSO is compared with the classical PSO and a genetic algorithm (GA) on a set of benchmark problems, and the results clearly demonstrate the effectiveness of the proposed algorithm. Finally, an application of GPSO to fine-tuning a support vector machine classifier for electrical-machine fault detection is presented.

4.
This paper proposes the hybrid NM-PSO algorithm, which combines the Nelder–Mead (NM) simplex search method with particle swarm optimization (PSO) for unconstrained optimization. NM-PSO is easy to implement in practice since it requires no gradient computation. The modification of both the Nelder–Mead simplex search method and particle swarm optimization is intended to produce faster and more accurate convergence. The main purpose of the paper is to demonstrate how standard particle swarm optimizers can be improved by incorporating a hybridization strategy. On a suite of 20 test problems taken from the literature, computational results from a comprehensive experimental study, preceded by an investigation of parameter selection, show that the hybrid NM-PSO approach outperforms three other relevant search techniques (the original NM simplex search method, the original PSO, and guaranteed convergence particle swarm optimization (GCPSO)) in terms of solution quality and convergence rate. In a later part of the comparative experiment, NM-PSO is compared with several recent cooperative PSO (CPSO) procedures from the literature; the comparison still largely favors NM-PSO in terms of accuracy, robustness and number of function evaluations. As evidenced by the overall assessment based on these two sets of computational experience, the new algorithm proves to be highly effective and efficient at locating best-practice optimal solutions for unconstrained optimization.
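The abstract does not spell out the paper's exact hybridization scheme; the sketch below illustrates one common way to couple the two methods, running a standard PSO update and then polishing the global best with a short Nelder–Mead local search (SciPy's implementation is used purely for illustration, and the parameter values are assumptions).

```python
import numpy as np
from scipy.optimize import minimize

def pso_nm_step(f, X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, nm_iters=20):
    """One illustrative NM-PSO iteration: a standard PSO velocity/position
    update followed by a short Nelder-Mead polish of the global best."""
    r1, r2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    # Update personal bests and the global best (minimization assumed).
    fX = np.apply_along_axis(f, 1, X)
    improved = fX < np.apply_along_axis(f, 1, pbest)
    pbest[improved] = X[improved]
    gbest = pbest[np.argmin(np.apply_along_axis(f, 1, pbest))]
    # Gradient-free local refinement of gbest with the Nelder-Mead simplex.
    res = minimize(f, gbest, method="Nelder-Mead", options={"maxiter": nm_iters})
    gbest = res.x if res.fun < f(gbest) else gbest
    return X, V, pbest, gbest
```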

5.
This paper proposes particle swarm optimization with age-group topology (PSOAG), a novel age-based particle swarm optimization (PSO). In this work, we present a new concept of age to measure the local search ability of each particle. To maintain population diversity during the search, particles are separated into different age-groups by their age, and particles in each age-group may only select particles in younger groups or their own group as neighbours. To allow the search to escape from local optima, aging particles are regularly replaced by new, randomly generated ones. In addition, we design an age-group-based parameter-setting method, in which particles in different age-groups use different parameters, to accelerate convergence. The algorithm is applied to nonlinear function optimization and data clustering problems for performance evaluation. In comparison with several PSO variants and other evolutionary algorithms (EAs), we find that the proposed algorithm provides significantly better performance on both the function optimization problems and the data clustering tasks.
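A minimal sketch of the age-group topology and the replacement of aging particles described above; the abstract does not define how age is computed or how groups are formed, so the equal-sized binning, the n_groups and max_age parameters, and the helper names are assumptions.

```python
import numpy as np

def age_group_neighbours(ages, n_groups=3):
    """Illustrative age-group topology: particles are binned by age, and each
    particle may only take neighbours from its own group or younger groups."""
    order = np.argsort(ages)                      # youngest particles first
    groups = np.array_split(order, n_groups)      # equal-sized age-groups (an assumption)
    group_of = np.empty(len(ages), dtype=int)
    for g, members in enumerate(groups):
        group_of[members] = g
    neighbours = {}
    for i in range(len(ages)):
        neighbours[i] = [j for j in range(len(ages))
                         if group_of[j] <= group_of[i] and j != i]
    return neighbours

def replace_aged(positions, ages, lower, upper, max_age=50):
    """Aging particles are replaced by new, randomly generated ones."""
    old = ages >= max_age
    positions[old] = np.random.uniform(lower, upper, size=positions[old].shape)
    ages[old] = 0
    return positions, ages
```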

6.
The artificial bee colony (ABC) algorithm, recently invented by Karaboga, is a biologically inspired optimization algorithm that has been shown to be competitive with other conventional biologically inspired algorithms such as the genetic algorithm (GA), differential evolution (DE) and particle swarm optimization (PSO). However, ABC still has a shortcoming in its solution search equation, which is good at exploration but poor at exploitation. Inspired by PSO, we propose an improved ABC algorithm, called gbest-guided ABC (GABC), which incorporates the information of the global best (gbest) solution into the solution search equation to improve exploitation. Experimental results on a set of numerical benchmark functions show that GABC outperforms the basic ABC algorithm in most of the experiments.
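A sketch of how the gbest-guided candidate generation might look: the standard ABC perturbation term is kept and a gbest-attraction term is added. The per-dimension update and the weighting constant C = 1.5 follow commonly reported descriptions of GABC, but should be treated as assumptions rather than the exact published formula.

```python
import numpy as np

def gabc_candidate(X, i, gbest, C=1.5):
    """Generate a candidate food source for bee i. The usual ABC term
    phi*(x_i - x_k) is retained, and a gbest-guided term psi*(gbest - x_i)
    is added to strengthen exploitation (C is a weighting constant; 1.5 is
    a commonly reported choice, assumed here)."""
    n, dim = X.shape
    j = np.random.randint(dim)                              # one randomly chosen dimension
    k = np.random.choice([p for p in range(n) if p != i])   # a different food source
    phi = np.random.uniform(-1.0, 1.0)
    psi = np.random.uniform(0.0, C)
    v = X[i].copy()
    v[j] = X[i, j] + phi * (X[i, j] - X[k, j]) + psi * (gbest[j] - X[i, j])
    return v
```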

7.
Parametric optimization of a flexible satellite controller is essential for almost all modern satellites. The particle swarm algorithm is a global optimization algorithm, but it suffers from two major shortcomings: premature convergence and low search accuracy. To address these problems, this paper proposes an improved particle swarm optimization (IPSO) that replaces poorly fitted particles through a cross operation. Based on a decision probability, the cross operation can interchange local optima among three particles. The swarm is then split into two halves, and new particles are generated by crossing the dimensions of particles from both halves, producing a new swarm. The new and old swarms are mixed, and based on relative fitness half of the particles are selected for the next generation. As a result of the cross operation, IPSO can easily escape from local optima, has improved search accuracy and converges faster. Test functions of different dimensions are used to analyze the performance of the IPSO algorithm. Simulation results show that IPSO has advantages over standard PSO and genetic-algorithm PSO (GAPSO): it has a more stable performance and lower complexity. IPSO is then applied to the parametric optimization of a flexible satellite controller, for a satellite with solar wings and antennae. Simulation results show that IPSO can effectively obtain the best controller parameters compared with the other optimization methods.
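One possible reading of the cross operation described above, as a sketch only: the swarm is split into two halves, offspring are built by crossing particle dimensions from both halves at a random cut point, and the fitter half of the combined old and new population survives. The cut-point rule, the selection step and the function name are assumptions, not the paper's exact operator.

```python
import numpy as np

def ipso_cross_operation(X, f):
    """Illustrative cross operation: cross dimensions of particles from the
    two halves of the swarm, then keep the fittest half of old + new."""
    n, dim = X.shape
    half = n // 2
    A, B = X[:half], X[half:half * 2]
    cuts = np.random.randint(1, dim, size=half)     # random cut point per pair (assumed)
    offspring = A.copy()
    for r, c in enumerate(cuts):
        offspring[r, c:] = B[r, c:]                 # swap the tail dimensions
    pool = np.vstack([X, offspring])
    pool_fit = np.apply_along_axis(f, 1, pool)
    keep = np.argsort(pool_fit)[:n]                 # minimization: keep the fittest n
    return pool[keep]
```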

8.
This paper proposes a new co-swarm PSO (CSHPSO) for constrained optimization problems, obtained by hybridizing the recently proposed shrinking-hypersphere PSO (SHPSO) with the differential evolution (DE) approach. The total swarm is subdivided into two sub-swarms such that the first sub-swarm uses SHPSO and the second uses DE. Experiments are performed on the state-of-the-art benchmark problems proposed in IEEE CEC 2006, and the results of CSHPSO are compared with SHPSO and DE in a variety of ways; a statistical analysis is applied to establish the significance of the numerical experiments. To further test the efficacy of the proposed CSHPSO, an economic dispatch (ED) problem with valve-point effects for 40 generating units is solved. The results obtained with CSHPSO are compared with SHPSO, DE and existing solutions in the literature. It is concluded that CSHPSO achieves the minimal cost for the ED problem among the algorithms considered. Hence, CSHPSO is a promising new co-swarm PSO that can be used to solve real constrained optimization problems.
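A rough sketch of the co-swarm split described above. The shrinking-hypersphere details of SHPSO are not given in the abstract, so a plain PSO update stands in for the first sub-swarm, and a standard DE/rand/1/bin generation is assumed for the second; all parameter values and function names are illustrative.

```python
import numpy as np

def de_rand_1_bin(S, f, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation on sub-swarm S (a standard DE scheme,
    assumed since the abstract does not specify the variant; needs len(S) >= 4)."""
    n, dim = S.shape
    new = S.copy()
    for i in range(n):
        a, b, c = np.random.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = S[a] + F * (S[b] - S[c])
        mask = np.random.rand(dim) < CR
        mask[np.random.randint(dim)] = True
        trial = np.where(mask, mutant, S[i])
        if f(trial) < f(S[i]):                      # greedy selection, minimization
            new[i] = trial
    return new

def cshpso_iteration(X, V, pbest, gbest, f, w=0.7, c1=1.5, c2=1.5):
    """Co-swarm step: the first half is updated with a PSO-style rule
    (standing in for SHPSO), the second half with DE."""
    half = len(X) // 2
    r1 = np.random.rand(half, X.shape[1])
    r2 = np.random.rand(half, X.shape[1])
    V[:half] = (w * V[:half] + c1 * r1 * (pbest[:half] - X[:half])
                + c2 * r2 * (gbest - X[:half]))
    X[:half] = X[:half] + V[:half]
    X[half:] = de_rand_1_bin(X[half:], f)
    return X, V
```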

9.
Applied Mathematical Modelling, 2014, 38(7–8): 2000–2014
Real engineering design problems are generally characterized by the presence of many, often conflicting and incommensurable, objectives. Naturally, these objectives involve many parameters whose possible values may be assigned by experts. The aim of this paper is to introduce a hybrid approach combining three optimization techniques: dynamic programming (DP), genetic algorithms (GA) and particle swarm optimization (PSO). Our approach integrates the merits of both DP and artificial optimization techniques and has two characteristic features. First, the proposed algorithm converts the fuzzy multiobjective optimization problem into a sequence of crisp nonlinear programming problems. Second, the proposed algorithm uses H-SOA to solve each nonlinear programming problem, so that any complex problem with a suitable structure can be solved without requiring properties such as differentiability and continuity on which traditional methods depend. Finally, different degrees of α yield different α-Pareto optimal solutions of the problem. A numerical example is given to illustrate the results developed in this paper.

10.
The particle swarm optimization (PSO) technique is a powerful stochastic evolutionary algorithm that can be used to find the global optimum in a complex search space. This paper presents a variation on the standard PSO algorithm called the rank-based particle swarm optimizer, or PSOrank, which employs cooperative behaviour of the particles to significantly improve the performance of the original algorithm. In this method, in order to efficiently control the local search and convergence to the global optimum, the γ best particles contribute to the update of a candidate particle's position. The contribution of each particle is proportional to its strength, where strength is a function of three parameters: strivness, immediacy and the number of contributing particles. All particles are sorted according to their fitness values, and only the γ best particles are selected; the value of γ decreases linearly as the iterations proceed. A time-varying, non-linearly decreasing inertia weight is introduced to improve performance. PSOrank is tested on a commonly used set of optimization problems and compared with other PSO variants from the literature. As a real application, PSOrank is used for neural network training. The PSOrank strategy outperformed all the methods considered in this investigation for most of the functions. Experimental results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
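A minimal sketch of the rank-based contribution idea: the γ fittest particles all pull on a candidate particle, each weighted by a strength. The paper's strength function (strivness, immediacy, number of contributing particles) is not defined in the abstract, so simple rank weights, the linear schedule for γ and the quadratic decay of the inertia weight are assumptions.

```python
import numpy as np

def psorank_velocity(X, V, i, pbest, fitness, t, T,
                     w_start=0.9, w_end=0.4, c=1.5):
    """Illustrative rank-based velocity update for particle i at iteration t
    of T: the gamma best particles contribute in proportion to a rank-based
    strength (stand-in for the paper's strength function)."""
    n, dim = X.shape
    gamma = max(2, int(round(n * (1.0 - t / T))))     # gamma decreases linearly with t
    order = np.argsort(fitness)[:gamma]               # gamma fittest particles (minimization)
    strengths = np.arange(gamma, 0, -1, dtype=float)  # better rank -> larger strength
    strengths /= strengths.sum()
    w = w_end + (w_start - w_end) * (1.0 - t / T) ** 2   # non-linearly decreasing inertia weight
    pull = sum(s * (pbest[j] - X[i]) for s, j in zip(strengths, order))
    return w * V[i] + c * np.random.rand(dim) * pull
```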
