Similar Documents
20 similar documents were retrieved (search time: 11 ms).
1.
An algorithm called DE-PSO is proposed, which incorporates concepts from DE and PSO, updating particles not only by DE operators but also by the mechanisms of PSO. The proposed algorithm is tested on several benchmark functions. Numerical comparisons with different hybrid meta-heuristics demonstrate its effectiveness and efficiency.
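Below is a minimal sketch of this kind of DE/PSO hybrid, assuming a standard global-best PSO velocity rule combined with DE/rand/1 mutation, binomial crossover and greedy selection; the parameter values and the exact interleaving of the two operators are illustrative guesses, not the authors' DE-PSO.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def de_pso(f, dim=10, pop=30, iters=200, w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))          # positions
    V = np.zeros((pop, dim))                    # PSO velocities
    P = X.copy()                                # personal bests
    pf = np.array([f(x) for x in X])            # personal-best values
    g = P[pf.argmin()].copy()                   # global best
    for _ in range(iters):
        for i in range(pop):
            # PSO velocity/position update
            r1, r2 = rng.random(dim), rng.random(dim)
            V[i] = w * V[i] + c1 * r1 * (P[i] - X[i]) + c2 * r2 * (g - X[i])
            cand = X[i] + V[i]
            # DE/rand/1 mutation + binomial crossover applied to the same particle
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b - c)
            mask = rng.random(dim) < CR
            cand = np.where(mask, mutant, cand)
            # greedy selection keeps the better of old and new position
            if f(cand) < f(X[i]):
                X[i] = cand
            if f(X[i]) < pf[i]:
                pf[i], P[i] = f(X[i]), X[i].copy()
        g = P[pf.argmin()].copy()
    return g, pf.min()

if __name__ == "__main__":
    best, val = de_pso(sphere)
    print("best value found:", val)
```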

2.
In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of the pattern search, we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values. Support for Luís N. Vicente was provided by Centro de Matemática da Universidade de Coimbra and by FCT under grant POCI/MAT/59442/2004.
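For reference, a bare-bones coordinate (pattern) search on a bound-constrained problem might look like the sketch below; the optional particle-swarm search phase is omitted, and the step-halving rule is a simplification of a full mesh-update scheme.

```python
import numpy as np

def coordinate_search(f, x0, lb, ub, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free coordinate (pattern) search with simple bounds."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for i in range(n):                   # poll along +/- each coordinate direction
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + sign * step, lb[i], ub[i])
                ft = f(trial)
                if ft < fx:                  # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                      # unsuccessful poll: contract the step (mesh)
            if step < tol:
                break
    return x, fx

if __name__ == "__main__":
    f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
    lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
    print(coordinate_search(f, [3.0, 3.0], lb, ub))
```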

3.
In this paper, we present a novel multi-modal optimization algorithm for finding multiple local optima in objective function surfaces. We build on Species-based particle swarm optimization (SPSO) by using deterministic sampling to generate new particles during the optimization process, by implementing proximity-based speciation coupled with speciation of isolated particles, and by including “turbulence regions” around already found solutions to prevent unnecessary function evaluations. Instead of using error threshold values, the new algorithm uses the particle’s experience, geometric mean, and “exclusion factor” to detect local optima and stop the algorithm. The performance of each extension is assessed with leave-it-out tests, and the results are discussed. We use the new algorithm, called Isolated-Speciation-based particle swarm optimization (ISPSO), and a benchmark algorithm called Niche particle swarm optimization (NichePSO) to solve a six-dimensional rainfall characterization problem for 192 rain gages across the United States. We show why it is important to find multiple local optima for solving this real-world complex problem by discussing its high multi-modality. Solutions found by both algorithms are compared, and we conclude that ISPSO is more reliable than NichePSO at finding optima with a significantly lower objective function value.
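A hedged illustration of the proximity-based speciation step that SPSO-style methods rely on: particles are ranked by fitness and grouped around seeds within a radius. The radius and grouping rule below are generic, not the ISPSO implementation.

```python
import numpy as np

def speciate(positions, fitness, radius):
    """Group particles into species around the fittest unassigned particle (the seed)."""
    order = np.argsort(fitness)                 # best (lowest) fitness first
    assigned = np.zeros(len(positions), dtype=bool)
    species = []
    for idx in order:
        if assigned[idx]:
            continue
        dists = np.linalg.norm(positions - positions[idx], axis=1)
        members = np.where((dists <= radius) & (~assigned))[0]
        assigned[members] = True
        species.append({"seed": int(idx), "members": members.tolist()})
    return species

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.uniform(-3, 3, (20, 2))
    fit = np.sum(pos ** 2, axis=1)              # 2-D sphere as a stand-in objective
    for s in speciate(pos, fit, radius=1.5):
        print("seed", s["seed"], "->", len(s["members"]), "members")
```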

4.
Efficient line search algorithm for unconstrained optimization
A new line search algorithm for smooth unconstrained optimization is presented that requires only one gradient evaluation with an inaccurate line search and at most two gradient evaluations with an accurate line search. It terminates in finitely many operations and shares the same theoretical properties as standard line search rules such as the Armijo-Goldstein-Wolfe-Powell rules. This algorithm is especially appropriate when gradient evaluations are very expensive relative to function evaluations. The authors would like to thank Margaret Wright and Jorge Moré for valuable comments on earlier versions of this paper.
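To make the Armijo and Wolfe conditions concrete, here is a generic textbook bracketing/bisection line search; it is not the one-gradient-evaluation algorithm of the paper, which is designed to be far cheaper in gradient calls.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Bracketing/bisection search for a step satisfying the Armijo-Wolfe conditions."""
    fx, gx = f(x), grad(x)
    slope = gx @ d                               # directional derivative, must be negative
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        xa = x + alpha * d
        if f(xa) > fx + c1 * alpha * slope:      # Armijo (sufficient decrease) fails
            hi = alpha
        elif grad(xa) @ d < c2 * slope:          # curvature (Wolfe) condition fails
            lo = alpha
        else:
            return alpha
        alpha = 2 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha

if __name__ == "__main__":
    f = lambda x: float(x @ x)
    grad = lambda x: 2 * x
    x0 = np.array([2.0, -1.0])
    d = -grad(x0)
    print("accepted step:", wolfe_line_search(f, grad, x0, d))
```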

5.
This paper presents a hybrid trust region algorithm for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods, line search and trust region techniques. A feature of the proposed method is that at each iteration, a system of linear equations is solved only once to obtain a trial step. Further, when the trial step is not accepted, the method performs an inexact line search along it instead of solving a new linear system. Under reasonable assumptions, the algorithm is proven to be globally and superlinearly convergent. Numerical results are also reported that show the efficiency of the proposed method.
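A schematic of the "reuse the rejected trial step with a line search" idea; the trust-region subproblem is solved crudely here (a regularized Newton step clipped to the radius), purely for illustration and not as the paper's ODE-based step.

```python
import numpy as np

def tr_with_linesearch(f, grad, hess, x0, delta=1.0, eta=0.1, max_iter=100, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(H + 1e-8 * np.eye(len(x)), -g)   # one linear system per iteration
        if np.linalg.norm(d) > delta:                         # clip trial step to the trust region
            d *= delta / np.linalg.norm(d)
        pred = -(g @ d + 0.5 * d @ H @ d)                     # predicted reduction
        ared = f(x) - f(x + d)                                # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho >= eta:
            x = x + d
            delta = min(2 * delta, 1e3)
        else:
            # rejected step: backtrack along d instead of re-solving the subproblem
            t, fx = 1.0, f(x)
            while t > 1e-8 and f(x + t * d) > fx + 1e-4 * t * (g @ d):
                t *= 0.5
            x = x + t * d
            delta *= 0.5
    return x

if __name__ == "__main__":
    f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2     # Rosenbrock
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                               200 * (x[1] - x[0] ** 2)])
    hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                               [-400 * x[0], 200.0]])
    print(tr_with_linesearch(f, grad, hess, [-1.2, 1.0]))
```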

6.
In this paper, a new descent algorithm for solving unconstrained optimization problems is presented. Its search direction is a descent direction, and the line search procedure can be skipped except at the first iteration. The method is globally convergent under mild conditions. The search direction of the new algorithm is then generalized, and convergence of the corresponding algorithm is also proved. Numerical results show that the algorithm is efficient on the given test problems.

7.
We consider an efficient trust-region framework which employs a new nonmonotone line search technique for unconstrained optimization problems. Unlike the traditional nonmonotone trust-region method, our proposed algorithm avoids re-solving the subproblem whenever a trial step is rejected. Instead, it performs a nonmonotone Armijo-type line search in the direction of the rejected trial step to construct a new point. Theoretical analysis indicates that the new approach preserves global convergence to first-order critical points under classical assumptions. Moreover, superlinear and quadratic convergence are established under suitable conditions. Numerical experiments show the efficiency and effectiveness of the proposed approach for solving unconstrained optimization problems.
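The nonmonotone Armijo test replaces the current function value with the maximum over the last few iterations as the reference value. A small sketch, with the history length and sufficient-decrease constant chosen arbitrarily:

```python
import numpy as np

def nonmonotone_armijo_step(f, grad, x, d, history, M=5, sigma=1e-4, beta=0.5, max_back=30):
    """Backtrack until f(x + t d) <= max(last M values) + sigma * t * g.d (nonmonotone Armijo)."""
    ref = max(history[-M:])              # reference value: max over the recent history
    g_d = grad(x) @ d
    t = 1.0
    for _ in range(max_back):
        if f(x + t * d) <= ref + sigma * t * g_d:
            return t
        t *= beta
    return t

if __name__ == "__main__":
    f = lambda x: float(x @ x)
    grad = lambda x: 2 * x
    x = np.array([3.0, -4.0])
    hist = [f(x)]
    for _ in range(10):
        d = -grad(x)
        t = nonmonotone_armijo_step(f, grad, x, d, hist)
        x = x + t * d
        hist.append(f(x))
    print("final f:", hist[-1])
```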

8.
In this paper we present a new Discrete Particle Swarm Optimization (DPSO) approach to tackle the NP-hard single-machine total weighted tardiness scheduling problem in the presence of sequence-dependent setup times. Unlike previous approaches, the proposed DPSO uses a discrete model for both particle position and velocity together with a coherent sequence metric. We tested the proposed DPSO mainly on a benchmark originally proposed by Cicirello in 2003 and available online. The results show the competitiveness of our DPSO, which outperforms the best known results for the benchmark. In addition, we also tested the DPSO on a set of benchmark instances from ORLIB for the single machine total weighted tardiness problem, and we analysed the role of the DPSO swarm intelligence mechanisms as well as the local search intensification phase included in the algorithm.
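One common way to make PSO discrete for permutation problems is to represent a velocity as a list of swaps; the sketch below uses that generic swap-sequence model on a toy single-machine total weighted tardiness instance (without setup times). It is not the particular position/velocity model or sequence metric used in the paper, only an illustration of the idea.

```python
import random

def weighted_tardiness(seq, proc, due, weight):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += weight[j] * max(0, t - due[j])
    return total

def swaps_to(target, current):
    """Swap sequence that transforms `current` into `target`."""
    cur, swaps = list(current), []
    for i, job in enumerate(target):
        j = cur.index(job)
        if i != j:
            cur[i], cur[j] = cur[j], cur[i]
            swaps.append((i, j))
    return swaps

def dpso_tardiness(proc, due, weight, pop=20, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(proc)
    cost = lambda s: weighted_tardiness(s, proc, due, weight)
    X = [rng.sample(range(n), n) for _ in range(pop)]   # random permutations
    P = [list(x) for x in X]                            # personal bests
    g = min(P, key=cost)                                # global best
    for _ in range(iters):
        for i in range(pop):
            # velocity = random subset of swaps pulling the particle toward pbest and gbest
            vel = [s for s in swaps_to(P[i], X[i]) if rng.random() < 0.5]
            vel += [s for s in swaps_to(g, X[i]) if rng.random() < 0.5]
            for a, b in vel:
                X[i][a], X[i][b] = X[i][b], X[i][a]
            if cost(X[i]) < cost(P[i]):
                P[i] = list(X[i])
        g = min(P, key=cost)
    return g, cost(g)

if __name__ == "__main__":
    proc   = [4, 2, 6, 3, 5]
    due    = [5, 6, 10, 7, 12]
    weight = [3, 1, 4, 2, 2]
    print(dpso_tardiness(proc, due, weight))
```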

9.
Memetic particle swarm optimization
We propose a new Memetic Particle Swarm Optimization scheme that incorporates local search techniques into the standard Particle Swarm Optimization algorithm, resulting in an efficient and effective optimization method, which is analyzed theoretically. The proposed algorithm is applied to different unconstrained, constrained, minimax and integer programming problems, and the obtained results are compared to those of the global and local variants of Particle Swarm Optimization, demonstrating the superiority of the memetic approach.
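A skeletal memetic PSO: global-best PSO with an occasional local refinement (a simple stochastic hill climber stands in for whatever local search one prefers) applied to the best particle; all settings are illustrative rather than those of the paper.

```python
import numpy as np

def local_search(f, x, step=0.1, trials=20, rng=None):
    """Cheap stochastic hill climbing around x."""
    rng = rng or np.random.default_rng()
    best, fbest = x.copy(), f(x)
    for _ in range(trials):
        cand = best + rng.normal(0, step, size=x.size)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def memetic_pso(f, dim=5, pop=20, iters=100, w=0.7, c1=1.5, c2=1.5, ls_every=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    V = np.zeros_like(X)
    P, pf = X.copy(), np.array([f(x) for x in X])
    gi = pf.argmin()
    for t in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (P[gi] - X)
        X = X + V
        fx = np.array([f(x) for x in X])
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        gi = pf.argmin()
        if t % ls_every == 0:                    # memetic step: refine the global best locally
            xb, fb = local_search(f, P[gi], rng=rng)
            if fb < pf[gi]:
                P[gi], pf[gi] = xb, fb
    return P[gi], pf[gi]

if __name__ == "__main__":
    rastrigin = lambda x: 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
    print(memetic_pso(rastrigin))
```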

10.
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity of small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.



11.
This paper investigates the feature subset selection problem for binary classification using a logistic regression model. We develop a modified discrete particle swarm optimization (PSO) algorithm for the feature subset selection problem. This approach embodies an adaptive feature selection procedure which dynamically accounts for the relevance and dependence of the features included in the feature subset. We compare the proposed methodology with the tabu search and scatter search algorithms using publicly available datasets. The results show that the proposed discrete PSO algorithm is competitive in terms of both classification accuracy and computational performance.
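A minimal binary-PSO feature-selection loop wrapped around scikit-learn's LogisticRegression, assuming the usual sigmoid/threshold position rule; the adaptive relevance/dependence mechanism described in the paper is not reproduced here, and the synthetic dataset is only for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Cross-validated accuracy of logistic regression on the selected columns."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=1000), X[:, cols], y, cv=3).mean()

def binary_pso_select(X, y, pop=15, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = (rng.random((pop, n_feat)) < 0.5).astype(int)   # 0/1 feature masks
    vel = np.zeros((pop, n_feat))
    pbest, pscore = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gi = pscore.argmax()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (pbest[gi] - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                  # sigmoid maps velocity to a bit probability
        pos = (rng.random(pos.shape) < prob).astype(int)
        score = np.array([fitness(p, X, y) for p in pos])
        better = score > pscore
        pbest[better], pscore[better] = pos[better], score[better]
        gi = pscore.argmax()
    return pbest[gi], pscore[gi]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 12))
    y = (X[:, 0] + 2 * X[:, 3] - X[:, 7] + 0.3 * rng.normal(size=200) > 0).astype(int)
    mask, acc = binary_pso_select(X, y)
    print("selected features:", np.flatnonzero(mask), "cv accuracy:", round(float(acc), 3))
```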

12.
In this paper, we present a nonmonotone conic trust region method based on a line search technique for unconstrained optimization. The new algorithm can be regarded as a combination of the nonmonotone technique, a line search technique and the conic trust region method. When a trial step is not accepted, the method does not re-solve the trust region subproblem but instead generates a new iterate whose step length satisfies a line search condition. The function value is allowed to increase only when trial steps are not accepted in close succession. The local and global convergence properties are proved under reasonable assumptions. Numerical experiments are conducted to compare this method with existing methods.

13.
The particle swarm optimization (PSO) technique is a powerful stochastic evolutionary algorithm that can be used to find the global optimum in a complex search space. This paper presents a variation on the standard PSO algorithm called the rank based particle swarm optimizer, or PSOrank, employing cooperative behavior of the particles to significantly improve the performance of the original algorithm. In this method, in order to efficiently control the local search and convergence to the global optimum, the γ best particles contribute to the update of a candidate particle's position. The contribution of each particle is proportional to its strength. The strength is a function of three parameters: strivness, immediacy and the number of contributed particles. All particles are sorted according to their fitness values, and only the γ best particles are selected. The value of γ decreases linearly as the iterations proceed. A non-linearly decreasing time-varying inertia weight is introduced to further improve performance. PSOrank is tested on a commonly used set of optimization problems and is compared to other variants of the PSO algorithm from the literature. As a real application, PSOrank is used for neural network training. The PSOrank strategy outperformed all the methods considered in this investigation for most of the functions. Experimental results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
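The structural idea, the γ best particles pulling on each particle with fitness-dependent weights, can be sketched as follows; the paper's strength function (strivness, immediacy, number of contributed particles) is replaced by a simple inverse-rank weighting, so this only illustrates the shape of the update.

```python
import numpy as np

def pso_rank_step(X, V, P, pf, f, gamma, w, c, rng):
    """One iteration: each particle is attracted by the gamma best personal bests,
    with weights proportional to a simple inverse-rank strength."""
    order = np.argsort(pf)[:gamma]               # indices of the gamma best particles
    weights = 1.0 / np.arange(1, gamma + 1)      # best particle gets the largest pull
    weights /= weights.sum()
    for i in range(len(X)):
        pull = np.zeros_like(X[i])
        for wgt, j in zip(weights, order):
            pull += wgt * rng.random(X.shape[1]) * (P[j] - X[i])
        V[i] = w * V[i] + c * rng.random(X.shape[1]) * (P[i] - X[i]) + c * pull
        X[i] = X[i] + V[i]
        fx = f(X[i])
        if fx < pf[i]:
            P[i], pf[i] = X[i].copy(), fx
    return X, V, P, pf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: float(np.sum(x ** 2))
    X = rng.uniform(-5, 5, (30, 4))
    V = np.zeros_like(X)
    P = X.copy()
    pf = np.array([f(x) for x in X])
    for t in range(100):
        gamma = max(2, int(10 * (1 - t / 100)))  # gamma shrinks linearly over the iterations
        inertia = 0.9 - 0.5 * t / 100            # decreasing inertia weight
        X, V, P, pf = pso_rank_step(X, V, P, pf, f, gamma, inertia, 1.5, rng)
    print("best value:", pf.min())
```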

14.
Balanced fuzzy particle swarm optimization
In the present study, an extension of the particle swarm optimization (PSO) algorithm, designed to be more consistent with real-world behaviour, is introduced for solving combinatorial optimization problems. The development of this algorithm is based on balanced fuzzy set theory. Classical fuzzy set theory cannot distinguish between positive and negative information in membership functions, whereas in the new method both kinds of information are treated as equally important. The balanced fuzzy particle swarm optimization algorithm is applied to the fundamental combinatorial optimization problem known as the traveling salesman problem (TSP). To examine convergence, the algorithm was run on TSP instances. The convergence curves show that the balanced fuzzy particle swarm optimization algorithm (BF-PSO) converges quickly, within a small number of iterations, in comparison with the fuzzy particle swarm optimization algorithm (F-PSO).

15.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter $\beta_k$ is computed as a convex combination of the Hestenes-Stiefel and Dai-Yuan formulas, i.e. $\beta_k = (1-\theta_k)\,\beta_k^{HS} + \theta_k\,\beta_k^{DY}$. The parameter $\theta_k$ in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair $(s_k, y_k)$ satisfies the quasi-Newton (secant) equation $\nabla^2 f(x_{k+1})\, s_k = y_k$, where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and the Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.
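Assuming the convex-combination form reconstructed above, a rough sketch of the resulting direction update; θ_k is a fixed placeholder here rather than the secant-based value derived in the paper, and a proper Wolfe line search would replace the simple backtracking used below.

```python
import numpy as np

def hybrid_cg(f, grad, x0, iters=500, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # simple Armijo backtracking (the actual method uses the Wolfe conditions)
        t, fx, gd = 1.0, f(x), g @ d
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * gd:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        if abs(denom) < 1e-12:
            d = -g_new                          # degenerate curvature: restart with steepest descent
        else:
            beta_hs = (g_new @ y) / denom       # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / denom   # Dai-Yuan
            theta = 0.5                         # placeholder; the paper derives theta_k from the secant condition
            d = -g_new + ((1 - theta) * beta_hs + theta * beta_dy) * d
            if g_new @ d > -1e-12:              # safeguard: keep a descent direction
                d = -g_new
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                               200 * (x[1] - x[0] ** 2)])
    print(hybrid_cg(f, grad, [-1.2, 1.0]))
```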

16.
Inspired by migratory behavior in nature, a novel particle swarm optimization algorithm based on particle migration (MPSO) is proposed in this work. In this new algorithm, the population is randomly partitioned into several sub-swarms, each of which evolves according to particle swarm optimization with time-varying inertia weight and acceleration coefficients (LPSO-TVAC). At periodic stages of the evolution, some particles migrate from one sub-swarm to another to enhance the diversity of the population and avoid premature convergence. This further improves the balance between exploration and exploitation. Simulations on benchmark test functions illustrate that the proposed algorithm finds the global optima more reliably than other variants and is an effective global optimization tool.
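A hedged sketch of the migration scheme: several sub-swarms evolve independently and periodically exchange their best particles along a ring; the LPSO-TVAC time-varying acceleration coefficients are reduced here to a linearly decreasing inertia weight for brevity.

```python
import numpy as np

def migrating_pso(f, dim=10, n_swarms=4, swarm_size=10, iters=300, migrate_every=25,
                  n_migrants=2, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_swarms, swarm_size, dim))
    V = np.zeros_like(X)
    P = X.copy()
    pf = np.array([[f(x) for x in sw] for sw in X])
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                    # time-varying inertia weight
        for s in range(n_swarms):
            gi = pf[s].argmin()                      # each sub-swarm follows its own best
            r1, r2 = rng.random(X[s].shape), rng.random(X[s].shape)
            V[s] = w * V[s] + c1 * r1 * (P[s] - X[s]) + c2 * r2 * (P[s][gi] - X[s])
            X[s] = X[s] + V[s]
            fx = np.array([f(x) for x in X[s]])
            better = fx < pf[s]
            P[s][better], pf[s][better] = X[s][better], fx[better]
        if (t + 1) % migrate_every == 0:             # ring migration: best particles replace the next swarm's worst
            for s in range(n_swarms):
                dst = (s + 1) % n_swarms
                src_best = np.argsort(pf[s])[:n_migrants]
                dst_worst = np.argsort(pf[dst])[-n_migrants:]
                X[dst][dst_worst] = P[s][src_best]
                P[dst][dst_worst] = P[s][src_best]
                pf[dst][dst_worst] = pf[s][src_best]
    return pf.min()

if __name__ == "__main__":
    rastrigin = lambda x: 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
    print("best value:", migrating_pso(rastrigin))
```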

17.
Improved particle swarm algorithm for hydrological parameter optimization
In this paper, a new method named MSSE-PSO (master-slave swarms shuffling evolution algorithm based on particle swarm optimization) is proposed. First, a population of points is sampled randomly from the feasible space and then partitioned into several sub-swarms (one master swarm and several slave swarms). Each slave swarm independently executes PSO or one of its variants, including the update of particle positions and velocities. In the master swarm, the particles improve themselves based on the social knowledge of the master swarm and that of the slave swarms. At periodic stages of the evolution, the master swarm and all slave swarms are forced to mix, and points are then reassigned to the sub-swarms to ensure the sharing of information. The process is repeated until a user-defined stopping criterion is reached. Numerical simulations and a case study on a hydrological model show that MSSE-PSO markedly improves calibration accuracy, reduces computation time and enhances stability. Therefore, it is an effective and efficient global optimization method.

18.
A modified conjugate gradient method is presented for solving unconstrained optimization problems, which possesses the following properties: (i) The sufficient descent property is satisfied without any line search; (ii) The search direction will be in a trust region automatically; (iii) The Zoutendijk condition holds for the Wolfe–Powell line search technique; (iv) This method inherits an important property of the well-known Polak–Ribière–Polyak (PRP) method: the tendency to turn towards the steepest descent direction if a small step is generated away from the solution, preventing a sequence of tiny steps from happening. The global convergence and the linearly convergent rate of the given method are established. Numerical results show that this method is interesting.

19.
This paper proposes a new co-swarm PSO (CSHPSO) for constrained optimization problems, obtained by hybridizing the recently proposed shrinking hypersphere PSO (SHPSO) with the differential evolution (DE) approach. The total swarm is subdivided into two sub-swarms such that the first sub-swarm uses SHPSO and the second uses DE. Experiments are performed on the state-of-the-art problems proposed in IEEE CEC 2006. The results of CSHPSO are compared with SHPSO and DE in a variety of ways. A statistical approach is applied to assess the significance of the numerical experiments. To further test the efficacy of the proposed CSHPSO, an economic dispatch (ED) problem with valve-point effects for 40 generating units is solved. The results obtained with CSHPSO are compared with SHPSO, DE and the existing solutions in the literature. It is concluded that CSHPSO gives the minimal cost for the ED problem in comparison with the other algorithms considered. Hence, CSHPSO is a promising new co-swarm PSO which can be used to solve real constrained optimization problems.

20.
Particle swarm optimization (PSO) was originally developed as an unconstrained optimization technique and therefore lacks an explicit mechanism for handling constraints. When solving constrained optimization problems (COPs) with PSO, existing research mainly focuses on how to handle constraints, and the impact of constraints on the inherent search mechanism of PSO has scarcely been explored. Motivated by this fact, in this paper we mainly investigate how to utilize the impact of constraints (or the knowledge about the feasible region) to improve the optimization ability of the particles. Based on these investigations, we present a modified PSO, called self-adaptive velocity particle swarm optimization (SAVPSO), for solving COPs. To handle constraints, in SAVPSO we adopt our recently proposed dynamic-objective constraint-handling method (DOCHM), which is essentially a constituent part of the inherent search mechanism of the integrated SAVPSO, i.e., DOCHM + SAVPSO. The performance of the integrated SAVPSO is tested on a well-known benchmark suite, and the experimental results show that appropriately utilizing the knowledge about the feasible region can substantially improve the performance of the underlying algorithm in solving COPs.

