Similar Articles
20 similar articles were retrieved.
1.
Memetic particle swarm optimization   (cited 2 times: 0 self-citations, 2 by others)
We propose a new Memetic Particle Swarm Optimization scheme that incorporates local search techniques into the standard Particle Swarm Optimization algorithm, resulting in an efficient and effective optimization method, which is analyzed theoretically. The proposed algorithm is applied to different unconstrained, constrained, minimax and integer programming problems, and the obtained results are compared with those of the global and local variants of Particle Swarm Optimization, demonstrating the superiority of the memetic approach.
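As an illustration of the memetic idea described above (standard PSO updates interleaved with a local refinement of promising solutions), here is a minimal Python sketch. It is a generic reconstruction, not the authors' scheme: the sphere objective, the random-walk hill climber used as the local search, and all parameter values are assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective (assumed for illustration)."""
    return float(np.sum(x ** 2))

def local_search(x, f, step=0.1, trials=20, rng=None):
    """Simple random-walk hill climber used as the 'meme'."""
    rng = rng or np.random.default_rng()
    best, best_f = x.copy(), f(x)
    for _ in range(trials):
        cand = best + rng.normal(0.0, step, size=best.shape)
        cand_f = f(cand)
        if cand_f < best_f:
            best, best_f = cand, cand_f
    return best, best_f

def memetic_pso(f, dim=10, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, ls_every=10):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
        if t % ls_every == 0:                      # memetic step: refine the global best
            g, gf = local_search(g, f, rng=rng)
            i = np.argmin(pbest_f)
            if gf < pbest_f[i]:
                pbest[i], pbest_f[i] = g, gf
    return g, float(np.min(pbest_f))

print(memetic_pso(sphere))
```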

2.
Balanced fuzzy particle swarm optimization   (cited 1 time: 0 self-citations, 1 by others)
In the present study, an extension of the particle swarm optimization (PSO) algorithm that is more in line with actual nature is introduced for solving combinatorial optimization problems. The development of this algorithm is based on balanced fuzzy set theory. Classical fuzzy set theory cannot distinguish between positive and negative information in membership functions, whereas in the new method both kinds of information are treated as equally important. The balanced fuzzy particle swarm optimization algorithm is applied to the fundamental optimization problem known as the traveling salesman problem (TSP). To inspect the convergence of the new algorithm, it was run on TSP instances. The convergence curves show that the balanced fuzzy particle swarm optimization algorithm (BF-PSO) converges quickly, within a limited and low number of iterations, in comparison with the fuzzy particle swarm optimization algorithm (F-PSO).

3.
A new modification to the particle swarm optimization (PSO) algorithm is proposed, aiming to make the algorithm less sensitive to the selection of the initial search domain. To achieve this goal, we release the boundaries of the search domain and enable each boundary to drift independently, guided by the number of collisions with particles involved in the optimization process. The gradual modification of the active search domain range enables us to prevent particles from revisiting less promising regions of the search domain and also to explore areas located outside the initial search domain. With time, the search domain shrinks around a region holding a global extremum. This helps improve the quality of the final solution obtained and also makes the algorithm less sensitive to the initial choice of the search domain ranges. The effectiveness of the proposed Floating Boundary PSO (FBPSO) is demonstrated using a set of standard test functions. New parameters are introduced to control the performance of the algorithm, and their optimal values are determined through numerical examples.
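A hedged sketch of how drifting boundaries of the kind described above might be tracked: count, per dimension, how many particles collide with each face of the box, expand faces that are hit often and slowly shrink the rest. The update rates, the hit-ratio threshold and the function name are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def update_floating_bounds(positions, lo, hi, expand=0.05, shrink=0.01, hit_ratio=0.2):
    """Illustrative boundary drift: expand a face that many particles collide with,
    and slowly shrink it otherwise. Rates and thresholds are assumed, not the paper's."""
    n = positions.shape[0]
    span = hi - lo
    low_hits = (positions <= lo).sum(axis=0)    # collisions per dimension, lower face
    high_hits = (positions >= hi).sum(axis=0)   # collisions per dimension, upper face
    new_lo = np.where(low_hits > hit_ratio * n, lo - expand * span, lo + shrink * span)
    new_hi = np.where(high_hits > hit_ratio * n, hi + expand * span, hi - shrink * span)
    return new_lo, np.maximum(new_hi, new_lo + 1e-9)   # keep the box non-degenerate

rng = np.random.default_rng(1)
lo, hi = np.full(3, -5.0), np.full(3, 5.0)
positions = rng.uniform(-6.0, 6.0, (30, 3))     # some particles sit outside the box
print(update_floating_bounds(positions, lo, hi))
```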

4.
Improved particle swarm optimization combined with chaos   (cited 25 times: 0 self-citations, 25 by others)
As a novel optimization technique, chaos has gained much attention and found some applications during the past decade. For a given energy or cost function, by following chaotic ergodic orbits, a chaotic dynamic system may eventually reach the global optimum, or a good approximation of it, with high probability. To enhance the performance of particle swarm optimization (PSO), an evolutionary computation technique based on individual improvement plus population cooperation and competition, a hybrid particle swarm optimization algorithm is proposed by incorporating chaos. Firstly, an adaptive inertia weight factor (AIWF) is introduced into PSO to efficiently balance the exploration and exploitation abilities. Secondly, PSO with AIWF and chaos are hybridized to form a chaotic PSO (CPSO), which combines the population-based evolutionary searching ability of PSO with chaotic searching behaviour. Simulation results and comparisons with the standard PSO and several meta-heuristics show that CPSO can effectively enhance searching efficiency and greatly improve searching quality.
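The chaotic-search ingredient mentioned above can be sketched with a logistic map that perturbs the current best solution inside a small neighbourhood; the AIWF rule and the full CPSO loop are not reproduced. The neighbourhood radius, step count and greedy acceptance are assumptions made for illustration.

```python
import numpy as np

def chaotic_local_search(best, f, lo, hi, steps=50, radius=0.1, seed=0):
    """Logistic-map chaotic search in a small neighbourhood of the current best point.
    The neighbourhood radius, step count and logistic map are illustrative choices."""
    rng = np.random.default_rng(seed)
    best = np.asarray(best, dtype=float)
    best_f = f(best)
    z = rng.uniform(0.01, 0.99, size=best.shape)      # chaotic state per dimension
    span = (hi - lo) * radius
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                       # logistic map in its chaotic regime
        cand = np.clip(best + span * (2.0 * z - 1.0), lo, hi)
        cand_f = f(cand)
        if cand_f < best_f:                           # greedy acceptance (minimisation)
            best, best_f = cand, cand_f
    return best, best_f

sphere = lambda x: float(np.sum(x ** 2))
print(chaotic_local_search(np.array([0.4, -0.3]), sphere, lo=-5.0, hi=5.0))
```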

5.
Inspired by migratory behaviour in nature, a novel particle swarm optimization algorithm based on particle migration (MPSO) is proposed in this work. In this new algorithm, the population is randomly partitioned into several sub-swarms, each of which evolves using particle swarm optimization with time-varying inertia weight and acceleration coefficients (LPSO-TVAC). At periodic stages in the evolution, some particles migrate from one sub-swarm to another to enhance the diversity of the population and avoid premature convergence, further improving the ability to explore and exploit. Simulations on benchmark test functions illustrate that the proposed algorithm possesses a better ability to find the global optima than other variants and is an effective global optimization tool.
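The time-varying inertia weight and acceleration coefficients (TVAC) used inside each sub-swarm are commonly implemented as linear schedules over the run; a small sketch follows. The start and end values shown are widely used defaults, assumed here rather than taken from the paper.

```python
def tvac_parameters(t, t_max, w_start=0.9, w_end=0.4,
                    c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Linear time-varying inertia weight and acceleration coefficients.
    The start/end values are commonly used defaults, assumed here."""
    frac = t / float(t_max)
    w = w_start - (w_start - w_end) * frac       # inertia weight decreases over time
    c1 = c1_start - (c1_start - c1_end) * frac   # cognitive coefficient decreases
    c2 = c2_start + (c2_end - c2_start) * frac   # social coefficient increases
    return w, c1, c2

# Example: parameter values at the start, middle and end of a 100-iteration run.
for t in (0, 50, 100):
    print(t, tvac_parameters(t, 100))
```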

6.
This paper proposes a new co-swarm PSO (CSHPSO) for constrained optimization problems, obtained by hybridizing the recently proposed shrinking hypersphere PSO (SHPSO) with the differential evolution (DE) approach. The total swarm is subdivided into two sub-swarms such that the first sub-swarm uses SHPSO and the second uses DE. Experiments are performed on the state-of-the-art benchmark problems proposed in IEEE CEC 2006. The results of CSHPSO are compared with those of SHPSO and DE in a variety of ways, and a statistical analysis is applied to establish the significance of the numerical experiments. To further test the efficacy of the proposed CSHPSO, an economic dispatch (ED) problem with valve-point effects for 40 generating units is solved. The results obtained with CSHPSO are compared with SHPSO, DE and the existing solutions in the literature. It is concluded that CSHPSO gives the minimal cost for the ED problem in comparison with the other algorithms considered. Hence, CSHPSO is a promising new co-swarm PSO that can be used to solve real constrained optimization problems.

7.
Particle swarm optimization (PSO) was originally developed as an unconstrained optimization technique and therefore lacks an explicit mechanism for handling constraints. When solving constrained optimization problems (COPs) with PSO, the existing research mainly focuses on how to handle constraints, and the impact of constraints on the inherent search mechanism of PSO has scarcely been explored. Motivated by this fact, in this paper we mainly investigate how to utilize the impact of constraints (or the knowledge about the feasible region) to improve the optimization ability of the particles. Based on these investigations, we present a modified PSO, called self-adaptive velocity particle swarm optimization (SAVPSO), for solving COPs. To handle constraints, SAVPSO adopts our recently proposed dynamic-objective constraint-handling method (DOCHM), which is essentially a constituent part of the inherent search mechanism of the integrated SAVPSO, i.e., DOCHM + SAVPSO. The performance of the integrated SAVPSO is tested on a well-known benchmark suite, and the experimental results show that appropriately utilizing the knowledge about the feasible region can substantially improve the performance of the underlying algorithm in solving COPs.

8.
Improved particle swarm algorithm for hydrological parameter optimization   (cited 1 time: 0 self-citations, 1 by others)
In this paper, a new method named MSSE-PSO (master-slave swarms shuffling evolution algorithm based on particle swarm optimization) is proposed. Firstly, a population of points is sampled randomly from the feasible space and then partitioned into several sub-swarms (one master swarm and several slave swarms). Each slave swarm independently executes PSO or one of its variants, including the update of particle positions and velocities. In the master swarm, the particles improve themselves based on the social knowledge of both the master swarm and the slave swarms. At periodic stages in the evolution, the master swarm and all of the slave swarms are mixed, and the points are then reassigned to sub-swarms to ensure that information is shared. The process is repeated until a user-defined stopping criterion is reached. Numerical simulations and a case study on a hydrological model show that MSSE-PSO remarkably improves calibration accuracy, reduces computation time and enhances stability. It is therefore an effective and efficient global optimization method.

9.
The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. This paper proposes three new nonlinear strategies for selecting the inertia weight, which plays a significant role in a particle's foraging behaviour. The PSO variants employing these strategies are named fine-grained inertia weight PSO (FGIWPSO), Double Exponential Self-Adaptive IWPSO (DESIWPSO) and Double Exponential Dynamic IWPSO (DEDIWPSO). In FGIWPSO, the inertia weight is obtained adaptively, depending on each particle's iteration-wise performance, and decreases exponentially. DESIWPSO and DEDIWPSO employ the Gompertz function, a double exponential function, for selecting the inertia weight. In DESIWPSO the particles' iteration-wise performance is fed as input to the Gompertz function, whereas DEDIWPSO evaluates the inertia weight for the whole swarm iteratively using the Gompertz function with the relative iteration as input. The efficacy and efficiency of the proposed approaches are validated on a suite of benchmark functions. The proposed variants are compared with nonlinear inertia weight and exponential inertia weight strategies. Experimental results assert that the proposed modifications help improve PSO performance in terms of both solution quality and convergence rate.
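Since DESIWPSO and DEDIWPSO feed their inputs through a Gompertz (double exponential) curve, a tiny sketch of a Gompertz-shaped inertia-weight schedule may help. The scaling into [w_min, w_max] and the shape parameters b and c are illustrative assumptions, not the paper's formulas.

```python
import math

def gompertz_inertia_weight(t, t_max, w_min=0.4, w_max=0.9, b=4.0, c=5.0):
    """Map the relative iteration through a Gompertz (double-exponential) curve to
    an inertia weight that decays from roughly w_max towards w_min.
    The scaling and the b, c shape parameters are illustrative assumptions."""
    r = t / float(t_max)                          # relative iteration in [0, 1]
    g = math.exp(-b * math.exp(-c * r))           # Gompertz curve, rises from ~0 to ~1
    return w_max - (w_max - w_min) * g            # so the weight falls as the run proceeds

for t in (0, 25, 50, 75, 100):
    print(t, round(gompertz_inertia_weight(t, 100), 3))
```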

10.
Particle swarm optimization (PSO) is an extensively used evolutionary algorithm. This paper presents a new particle swarm optimizer based on evolutionary games (EGPSO). We map the particles' search for optimal solutions in the PSO algorithm to players' pursuit of maximum utility by choosing strategies in evolutionary games, using replicator dynamics to model the behaviour of particles. To overcome premature convergence, a multi-start technique is introduced. Experimental results show that EGPSO can overcome premature convergence and has better convergence properties than traditional PSO.

11.
The particle swarm optimization (PSO) computational method has recently become popular. However, it has limitations: it may become trapped in local optima and suffer from premature convergence, especially on multimodal and high-dimensional problems. In this paper, we focus on investigating fitness evaluation in terms of a particle's position. In particular, we find that the fitness evaluation strategy in the standard PSO has two drawbacks, namely "two steps forward and one step back" and "two steps back and one step forward". We then propose a general fitness evaluation strategy (GFES), by which a particle is evaluated in multiple subspaces and different contexts in order to take diverse paces towards the destination position. As demonstrations of GFES, a series of PSOs with GFES are presented. Experiments are conducted on several benchmark optimization problems. The results show that GFES is effective at handling multimodal and high-dimensional problems.

12.
Chaotic catfish particle swarm optimization (C-CatfishPSO) is a novel optimization algorithm proposed in this paper. C-CatfishPSO introduces chaotic maps into catfish particle swarm optimization (CatfishPSO), which increases the search capability of CatfishPSO via the chaos approach. Simple CatfishPSO relies on the incorporation of catfish particles into particle swarm optimization (PSO). The introduced catfish particles improve the performance of PSO considerably. Unlike ordinary particles, the catfish particles initialize a new search from extreme points of the search space when the gbest fitness value (the global optimum at each iteration) has not changed for a certain number of consecutive iterations. This creates further opportunities to find better solutions for the swarm by guiding the entire swarm to promising new regions of the search space and accelerating the search. The introduced chaotic maps strengthen the solution quality of PSO and CatfishPSO significantly. The resulting improved algorithms are called chaotic PSO (C-PSO) and chaotic CatfishPSO (C-CatfishPSO), respectively. PSO, C-PSO, CatfishPSO, C-CatfishPSO, as well as other advanced PSO procedures from the literature, were extensively compared on several benchmark test functions. Statistical analysis of the experimental results indicates that C-CatfishPSO outperforms PSO, C-PSO and CatfishPSO, and that it is also superior to advanced PSO methods from the literature.
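The catfish mechanism (re-seeding the worst particles at extreme points of the search space once gbest stagnates) can be sketched as follows. The 10% fraction, the random-corner placement and the stall trigger shown in the comment are assumptions for illustration, and minimization is assumed.

```python
import numpy as np

def inject_catfish(positions, fitness, lo, hi, frac=0.1, rng=None):
    """Re-initialise the worst-performing particles at extreme points of the search
    space ('catfish' particles). The fraction and the corner-picking rule are assumed."""
    rng = rng or np.random.default_rng()
    n, dim = positions.shape
    k = max(1, int(frac * n))
    worst = np.argsort(fitness)[-k:]              # k worst particles (minimisation)
    # Each catfish is placed at a random corner of the search box.
    corners = np.where(rng.random((k, dim)) < 0.5, lo, hi)
    positions[worst] = corners
    return positions, worst

# Typical driver logic (sketch): trigger when gbest stagnates.
# if iterations_since_gbest_improved >= stall_limit:
#     positions, reset_idx = inject_catfish(positions, fitness, lo, hi)
#     velocities[reset_idx] = 0.0
```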

13.
This paper proposes the hybrid NM-PSO algorithm, based on the Nelder-Mead (NM) simplex search method and particle swarm optimization (PSO), for unconstrained optimization. NM-PSO is very easy to implement in practice since it does not require gradient computation. The modification of both the Nelder-Mead simplex search method and particle swarm optimization is intended to produce faster and more accurate convergence. The main purpose of the paper is to demonstrate how standard particle swarm optimizers can be improved by incorporating a hybridization strategy. On a suite of 20 test function problems taken from the literature, computational results from a comprehensive experimental study, preceded by an investigation of parameter selection, show that the hybrid NM-PSO approach outperforms three other relevant search techniques (the original NM simplex search method, the original PSO and the guaranteed convergence particle swarm optimization (GCPSO)) in terms of solution quality and convergence rate. In a later part of the comparative experiment, the NM-PSO algorithm is compared with several of the most up-to-date cooperative PSO (CPSO) procedures in the literature. The comparison still largely favours the NM-PSO algorithm in terms of accuracy, robustness and number of function evaluations. As evidenced by the overall assessment based on these two kinds of computational experience, the new algorithm is extremely effective and efficient at locating best-practice optimal solutions for unconstrained optimization.
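To give a feel for the hybridization, here is a hedged sketch that simply polishes a PSO-found point with an off-the-shelf Nelder-Mead simplex search from SciPy; the paper's NM-PSO interleaves the two methods more tightly, which is not reproduced here. The Rosenbrock objective, the placeholder swarm_best vector and the tolerance settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Toy objective (assumed for illustration)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def nm_refine(f, swarm_best, max_iter=500):
    """Polish a PSO-found point with a derivative-free Nelder-Mead simplex search."""
    res = minimize(f, swarm_best, method="Nelder-Mead",
                   options={"maxiter": max_iter, "xatol": 1e-8, "fatol": 1e-8})
    return res.x, float(res.fun)

# Suppose `swarm_best` is the best position returned by any PSO run.
swarm_best = np.array([0.8, 0.9, 1.1, 0.7, 1.2])   # placeholder PSO output
x_star, f_star = nm_refine(rosenbrock, swarm_best)
print(x_star, f_star)
```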

14.
15.
Social behaviour is largely based on swarm colonies, in which each individual shares its knowledge about the environment with the others in order to reach optimal solutions. Such a cooperative model differs from competitive models in that individuals die and new ones are born by combining the information of those still alive. This paper presents a particle swarm optimization with differential evolution algorithm used to train a neural network, in place of the classic back-propagation algorithm. The performance of a neural network on a particular problem depends critically on the choice of processing elements, the network architecture and the learning algorithm. This work is focused on the development of methods for the evolutionary design of artificial neural networks, and in particular on optimizing the topology and connectivity structure of these networks.

16.
Multi-objective particle swarm optimization (MOPSO) is a promising meta-heuristic for solving multi-objective problems (MOPs). Previous works have shown that selecting a proper combination of leader and archiving methods, which is a challenging task, improves the search ability of the algorithm. A previous study employed a simple hyper-heuristic to select these components, obtaining good results. In this research, an analysis is made to verify whether using more advanced heuristic selection methods improves the search ability of the algorithm. Empirical studies are conducted to investigate this hypothesis. In these studies, four heuristic selection methods are first compared: a choice function, a multi-armed bandit, a random method, and the previously proposed roulette wheel. A second study identifies whether it is best to adapt only the leader method, only the archiving method, or both simultaneously. Moreover, the influence of the interval used to replace the low-level heuristic is analyzed. Finally, the best variant is compared with a hyper-heuristic framework that combines a multi-armed bandit algorithm with the multi-objective optimization based on decomposition with dynamical resource allocation (MOEA/D-DRA), as well as with a state-of-the-art MOPSO. Our results indicate that the resulting algorithm outperforms the hyper-heuristic framework on most of the problems investigated and achieves competitive results compared with a state-of-the-art MOPSO.
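One of the compared selection methods, the multi-armed bandit, can be sketched generically with a UCB1 selector that picks among low-level heuristics according to an externally supplied reward (assumed here to be something like hypervolume improvement). The class name, the heuristic labels and the random reward in the demo are assumptions; the paper's choice-function and roulette-wheel variants are not shown.

```python
import math
import random

class UCBHeuristicSelector:
    """UCB1-style selection among low-level heuristics (e.g. leader/archiving methods).
    The reward signal must be supplied by the caller; this is a generic sketch."""

    def __init__(self, heuristics, c=1.4):
        self.heuristics = list(heuristics)
        self.c = c
        self.counts = [0] * len(self.heuristics)
        self.rewards = [0.0] * len(self.heuristics)

    def select(self):
        for i, n in enumerate(self.counts):
            if n == 0:                      # try every heuristic at least once
                return i
        total = sum(self.counts)
        ucb = [self.rewards[i] / self.counts[i]
               + self.c * math.sqrt(math.log(total) / self.counts[i])
               for i in range(len(self.heuristics))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, i, reward):
        self.counts[i] += 1
        self.rewards[i] += reward

selector = UCBHeuristicSelector(["crowding_leader", "sigma_leader", "random_leader"])
for step in range(30):
    i = selector.select()
    reward = random.random()                # stand-in for a hypervolume-improvement reward
    selector.update(i, reward)
print(selector.counts)
```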

17.
This paper presents a methodology for finding optimal system parameters and optimal control parameters using a novel adaptive particle swarm optimization (APSO) algorithm. In the proposed APSO, every particle dynamically adjusts its inertia weight according to feedback taken from the particles' best memories. The main advantages of the proposed APSO are faster convergence speed and better solution accuracy with minimal additional computational burden. First, we use the proposed algorithm to identify the unknown system parameters, the structure of which is assumed to be known in advance. Next, based on the identified system, the PID gains are optimally tuned, also using the proposed algorithm. Two simulated examples are given to demonstrate the effectiveness of the proposed algorithm. Comparison with PSO with linearly decreasing inertia weight (LDW-PSO) and a genetic algorithm (GA) exhibits the superiority of the APSO-based system.
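The per-particle inertia-weight adaptation can be illustrated with a simple rank-based rule derived from the personal-best memories: better-ranked particles get a smaller weight (more exploitation), worse-ranked particles a larger one (more exploration). This ranking rule and the weight bounds are assumptions, not the paper's adaptation law.

```python
import numpy as np

def adaptive_inertia_weights(pbest_fitness, w_min=0.4, w_max=0.9):
    """Assign each particle its own inertia weight from the ranking of personal-best
    fitness values (minimisation assumed). The rule is an illustrative assumption."""
    rank = np.argsort(np.argsort(pbest_fitness))        # rank 0 = best particle
    rank = rank / max(1, len(pbest_fitness) - 1)        # normalised rank in [0, 1]
    return w_min + (w_max - w_min) * rank

pbest_fitness = np.array([3.2, 0.5, 7.1, 1.8])
print(adaptive_inertia_weights(pbest_fitness))          # best particle gets ~0.4
```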

18.
In this paper, we present a novel multi-modal optimization algorithm for finding multiple local optima of objective function surfaces. We build on species-based particle swarm optimization (SPSO) by using deterministic sampling to generate new particles during the optimization process, by implementing proximity-based speciation coupled with speciation of isolated particles, and by including "turbulence regions" around already-found solutions to prevent unnecessary function evaluations. Instead of using error threshold values, the new algorithm uses the particle's experience, a geometric mean, and an "exclusion factor" to detect local optima and stop the algorithm. The performance of each extension is assessed with leave-it-out tests, and the results are discussed. We use the new algorithm, called Isolated-Speciation-based particle swarm optimization (ISPSO), and a benchmark algorithm called Niche particle swarm optimization (NichePSO) to solve a six-dimensional rainfall characterization problem for 192 rain gages across the United States. We show why it is important to find multiple local optima when solving this complex real-world problem by discussing its high multi-modality. Solutions found by both algorithms are compared, and we conclude that ISPSO is more reliable than NichePSO at finding optima with a significantly lower objective function value.

19.
Heuristic optimization provides a robust and efficient approach for solving complex real-world problems. The aim of this paper is to introduce a hybrid approach combining two heuristic optimization techniques, particle swarm optimization (PSO) and genetic algorithms (GA). Our approach integrates the merits of both GA and PSO and has two characteristic features. Firstly, the algorithm is initialized by a set of random particles which travel through the search space, and an evolution of these particles is performed during this travel by integrating PSO and GA. Secondly, to restrict and control the velocity of the particles, we introduce a modified constriction factor. Finally, the results of various experimental studies using a suite of multimodal test functions taken from the literature demonstrate the superiority of the proposed approach in finding the global optimal solution.
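For context on the velocity control mentioned above, the standard Clerc-Kennedy constriction coefficient is shown below; the paper's modified constriction factor is not reproduced, and the c1 = c2 = 2.05 values are the usual textbook defaults.

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Standard Clerc-Kennedy constriction coefficient; shown for context only,
    the paper's modified factor is not reproduced here."""
    phi = c1 + c2
    assert phi > 4.0, "constriction requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction_factor()          # about 0.7298 for c1 = c2 = 2.05
print(chi)
# Velocity update with constriction:
#   v = chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
```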

20.
The orienteering problem is a well-researched routing problem that generalizes the traveling salesman problem. The team orienteering problem (TOP) is the extended version of the orienteering problem with more than one member in the team. In this paper, the first known discrete particle swarm optimization (DPSO) algorithm is developed for the 2-, 3- and 4-member TOP. In the DPSO meta-heuristic, novel methods are introduced for the initial particle generation process. Reduced variable neighborhood search and 2-opt are applied as the local search tools. The efficacy of the algorithm was tested on seven commonly used benchmark problem sets ranging in size from 21 to 102 nodes. The results of the DPSO algorithm were compared against seven other heuristic algorithms developed for the TOP. It is concluded that the developed DPSO algorithm for the TOP is competitive and robust across the benchmark problem sets.
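The 2-opt local search used by the DPSO can be sketched on a generic closed tour over a distance matrix; the TOP-specific ingredients (score collection, team members and the travel budget) are not modeled, and the random coordinates in the demo are assumptions.

```python
import numpy as np

def route_length(route, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[route[i], route[(i + 1) % len(route)]] for i in range(len(route)))

def two_opt(route, dist):
    """Classic 2-opt: keep reversing segments while that shortens the tour."""
    best = list(route)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, dist) < route_length(best, dist):
                    best, improved = cand, True
    return best

rng = np.random.default_rng(2)
pts = rng.random((8, 2))                                       # random node coordinates
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
tour = list(range(8))
print(route_length(tour, dist), route_length(two_opt(tour, dist), dist))
```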
