Similar Literature
20 similar documents were retrieved.
1.
Traditionally, minimum cost transshipment problems have been simplified as linear cost problems, which are not practical in real applications. Some advanced local search algorithms have been developed to solve concave cost bipartite network problems. These have been found to be more effective than the traditional linear approximation methods and local search methods. Recently, a genetic algorithm and an ant colony system algorithm were employed to develop two global search algorithms for solving concave cost transshipment problems, and these were found to be more effective than the advanced local search algorithms. Although the particle swarm optimization algorithm has been used to obtain good results in many applications, to the best of our knowledge it has not yet been applied to minimum concave cost network flow problems. Thus, in this study, we employ an arc-based particle swarm optimization algorithm, coupled with techniques from genetic algorithms and the threshold accepting method, as well as concave cost network heuristics, to develop a hybrid global search algorithm for efficiently solving minimum cost network flow problems with concave arc costs. The proposed algorithm is evaluated by solving several randomly generated network flow problems. The results indicate that the proposed algorithm is more effective than several other recently designed methods, such as local search algorithms, genetic algorithms and ant colony system algorithms, for solving minimum cost network flow problems with concave arc costs.
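As a rough illustration of the setting, and not the authors' implementation, the sketch below shows a concave arc cost (a fixed charge plus an economies-of-scale term) and the standard PSO velocity/position update that an arc-based particle encoding would build on. The flow-conservation repair heuristics, the GA crossover and the threshold-accepting steps of the hybrid are omitted; all names and constants are illustrative.

```python
import numpy as np

def concave_arc_cost(flow, fixed=10.0, unit=2.0):
    """Concave arc cost: a fixed charge plus a square-root term (economies of scale)."""
    return 0.0 if flow <= 0 else fixed + unit * np.sqrt(flow)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng()):
    """One standard PSO velocity/position update on a vector of arc flows."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.maximum(x + v, 0.0)          # arc flows must stay non-negative
    return x, v

# total cost of a candidate flow vector over all arcs
total_cost = lambda flows: sum(concave_arc_cost(f) for f in flows)
```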

2.
In this paper, we combine two types of local search algorithms for the global optimization of continuous functions. In the literature, most hybrid algorithms are produced by combining a global optimization algorithm with a local search algorithm, where the local search is used to improve the solution quality rather than to explore the search space and find the global optimum independently. The focus of this research is on some simple and efficient hybrid algorithms that combine Nelder–Mead simplex (NM) variants with the bidirectional random optimization (BRO) method for the optimization of continuous functions. The NM explores the whole search space to find promising areas, and the BRO local search then exploits the optimal solution as accurately as possible. A new strategy for the shrinkage stage, borrowed from differential evolution (DE), is also incorporated into the NM variants. To examine the efficiency of the proposed algorithms, they are evaluated on the 25 benchmark functions designed for the CEC2005 special session on real-parameter optimization. A comparison between the hybrid algorithms and some DE algorithms, together with a non-parametric analysis of the results, demonstrates that the proposed algorithms outperform most of the other algorithms and that the difference is in most cases statistically significant. A further comparison with other evolutionary algorithms reported at CEC2005 confirms the better performance of the proposed algorithms.
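A minimal sketch of the two-stage idea, assuming SciPy is available and using a simplified bidirectional random step in place of the paper's exact BRO rules: Nelder–Mead explores from several random starts, and the best point found is then refined by repeatedly probing a random direction in both senses. The test function and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):                      # illustrative test function
    return float(np.sum(np.asarray(x) ** 2))

def nm_then_bro(f, dim, n_starts=5, bro_iters=200, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: Nelder-Mead from several random starts to locate a promising region.
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        res = minimize(f, rng.uniform(-5, 5, dim), method="Nelder-Mead")
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    # Stage 2: bidirectional random search around the incumbent (simplified BRO).
    for _ in range(bro_iters):
        d = rng.normal(size=dim) * step
        for cand in (best_x + d, best_x - d):   # probe both directions
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        step *= 0.99                            # slowly shrink the step size
    return best_x, best_f

print(nm_then_bro(sphere, dim=5))
```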

3.
Metaheuristic optimization algorithms have become a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods. In the present study, we review hybrid optimization techniques in which one of the main algorithms is the well-known metaheuristic particle swarm optimization (PSO). Hybridization is a method of combining two (or more) techniques in a judicious manner such that the resulting algorithm contains the positive features of both (or all) of them. Depending on the algorithms used, we classify the hybrids into three groups: (i) hybridization of PSO and genetic algorithms, (ii) hybridization of PSO with differential evolution, and (iii) hybridization of PSO with other techniques, where the other techniques include various local and global search methods. Besides the review, we also compare three hybrid PSO algorithms: hybrid differential evolution particle swarm optimization (DE-PSO), adaptive mutation particle swarm optimization (AMPSO) and hybrid genetic algorithm particle swarm optimization (GA-PSO), on a test suite of nine conventional benchmark problems.

4.
Traditionally, minimum cost transshipment problems have been simplified as linear cost problems, which are not practical in real applications. Recently, some advanced local search algorithms have been developed that can directly solve concave cost bipartite network problems. However, they are not applicable to general transshipment problems, and the effectiveness of these modified local search algorithms for solving general concave cost transshipment problems is doubtful. In this research, we propose a global search algorithm for solving concave cost transshipment problems. Efficient methods for encoding, generating initial populations, selection, crossover and mutation are proposed according to the problem characteristics. To evaluate the effectiveness of the proposed global search algorithm, four advanced local search algorithms based on the threshold accepting algorithm, the great deluge algorithm, and the tabu search algorithm are also developed and used for comparison purposes. To assist with the comparison, a randomized network generator is designed to produce test problems. All tests are performed on a personal computer. The results indicate that the proposed global search algorithm is more effective than the four advanced local search algorithms for solving concave cost transshipment problems.
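The threshold accepting rule used in one of the comparison heuristics can be sketched as follows. This is a generic version, not the paper's exact parameterization: a worse neighbor is accepted as long as the deterioration stays below a threshold that shrinks over time. The neighborhood and objective are toy examples.

```python
import random

def threshold_accepting(f, x0, neighbor, threshold=1.0, decay=0.98, iters=1000):
    """Generic threshold accepting: accept a move if it worsens the objective
    by less than the current threshold; the threshold decays every iteration."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        if fy - fx < threshold:          # accept small deteriorations too
            x, fx = y, fy
            if fy < best_f:
                best_x, best_f = y, fy
        threshold *= decay               # tighten the acceptance criterion
    return best_x, best_f

# toy usage on a 1-D function with a random-perturbation neighborhood
f = lambda x: (x - 3) ** 2 + 2 * abs(x)
print(threshold_accepting(f, x0=10.0, neighbor=lambda x: x + random.uniform(-1, 1)))
```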

5.
This paper studies a portfolio selection model that accounts for transaction costs, using Value at Risk (VaR) as the risk measure and the Sharpe ratio (SR) as the performance measure of the portfolio. To solve the model effectively, a hybrid optimization algorithm (IN-GSA-PSO) is proposed on the basis of the gravitational search algorithm (GSA) and particle swarm optimization (PSO): the global-best and personal-best positions of PSO are combined with the acceleration operator of GSA, so that the hybrid algorithm fully exploits the exploitation and exploration capabilities of the individual algorithms. With appropriate parameter settings, the algorithm balances global and local search and converges quickly to the optimal solution of the model. Using data on the SSE 50 constituent stocks over 126 trading days in the second half of 2014, simulation experiments in Matlab show that the portfolio model with transaction costs allows investors to obtain higher returns. The study also shows that the hybrid PSO-GSA algorithm performs better than either single algorithm when solving the portfolio model and obtains satisfactory optimization results.  相似文献   
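For reference, the two evaluation measures used in the model can be computed from a return series as sketched below. This is a generic historical estimate, not tied to the paper's data; the confidence level, risk-free rate and synthetic return series are illustrative assumptions.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by the standard deviation of returns."""
    excess = np.asarray(returns) - risk_free
    return excess.mean() / excess.std(ddof=1)

def historical_var(returns, alpha=0.95):
    """Historical Value at Risk: the loss threshold exceeded with probability 1 - alpha."""
    return -np.percentile(np.asarray(returns), 100 * (1 - alpha))

daily = np.random.default_rng(1).normal(0.0005, 0.01, 126)   # 126 synthetic daily returns
print(sharpe_ratio(daily), historical_var(daily))
```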

6.
This paper proposes a self-adaptive penalty function and presents a penalty-based algorithm for solving nonsmooth and nonconvex constrained optimization problems. We prove that the general constrained optimization problem is equivalent to a bound constrained problem in the sense that they have the same global solutions. The global minimizer of the penalty function subject to a set of bound constraints may be obtained by a population-based meta-heuristic. Further, a hybrid self-adaptive penalty firefly algorithm, with a local intensification search, is designed, and its convergence analysis is established. The numerical experiments and a comparison with other penalty-based approaches show the effectiveness of the new self-adaptive penalty algorithm in solving constrained global optimization problems.
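A minimal static-penalty wrapper, shown only to fix ideas (the paper's penalty is self-adaptive and embedded in a firefly algorithm): constraint violations are added to the objective so that a bound-constrained metaheuristic can be applied. The penalty weight and example constraint are illustrative.

```python
import numpy as np

def penalized(f, ineq_constraints, mu=1e3):
    """Return F(x) = f(x) + mu * sum(max(0, g_i(x))^2) for constraints g_i(x) <= 0.
    A static penalty; the paper adapts the penalty automatically during the run."""
    def F(x):
        viol = sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
        return f(x) + mu * viol
    return F

# example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1  (i.e. 1 - x0 - x1 <= 0)
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
F = penalized(f, [g])
print(F(np.array([0.5, 0.5])), F(np.array([0.0, 0.0])))   # feasible vs. penalized point
```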

7.
The nonmonotone line search approach is a new technique for solving optimization problems. It relaxes the line search range and finds a larger step-size at each iteration, so as to possibly avoid local minimizers and escape from narrow curved valleys, which helps in finding the global minimizer of optimization problems. In this paper we develop a new modification of the matrix-free nonmonotone Armijo line search and analyze the global convergence and convergence rate of the resulting method. We also address several approaches to estimating the Lipschitz constant of the gradient of the objective function that can be used in line search algorithms. Numerical results show that this new modification of the Armijo line search is efficient for solving large scale unconstrained optimization problems.
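The core acceptance test can be sketched as follows. This is a generic nonmonotone Armijo rule over the last few objective values, not the paper's matrix-free modification: the trial step is accepted if it improves on the maximum of the recent history rather than on the current value alone. Parameter values and the toy objective are illustrative.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, grad, history, c=1e-4, beta=0.5, t0=1.0, max_back=30):
    """Backtracking line search with the nonmonotone Armijo condition:
    f(x + t d) <= max(recent f-values) + c * t * grad(x)^T d."""
    ref = max(history)                      # reference value over the last M iterates
    slope = float(np.dot(grad, d))          # directional derivative (should be < 0)
    t = t0
    for _ in range(max_back):
        if f(x + t * d) <= ref + c * t * slope:
            return t
        t *= beta                           # shrink the step and try again
    return t

# toy usage on f(x) = ||x||^2 with the steepest-descent direction
f = lambda x: float(np.dot(x, x))
x = np.array([2.0, -1.0]); g = 2 * x; d = -g
print(nonmonotone_armijo(f, x, d, g, history=[f(x), f(x) + 0.5]))
```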

8.
In this paper, we are concerned with the development of parallel algorithms for solving some classes of nonconvex optimization problems. We present an introductory survey of parallel algorithms that have been used to solve structured problems (partially separable and large-scale block structured problems), and algorithms based on parallel local searches for solving general nonconvex problems. Indefinite quadratic programming, posynomial optimization, and the general global concave minimization problem can be solved using these approaches. In addition, for the minimum concave cost network flow problem, we present new parallel search algorithms for large-scale problems. Computational results of an efficient implementation on a multi-transputer system are also presented.

9.
A hybrid ant colony and genetic algorithm
Hybrid ant colony and genetic algorithms are applied to optimization problems in discrete and continuous spaces, respectively. The hybrid algorithm for the traveling salesman problem uses the genetic algorithm as the overall framework and exploits the pheromone information of the ant colony algorithm to perform the crossover operation; four mutation strategies are given according to the characteristics of the traveling salesman problem; and, to address the premature convergence of genetic algorithms, a 2-opt method is added to locally optimize the solutions. Compared with simulated annealing, the standard genetic algorithm and the standard ant colony algorithm, all four hybrid algorithms perform well, with strategy D performing best. For continuous optimization problems, the ant colony algorithm serves as the overall framework, with the crossover and mutation operations of the genetic algorithm added; test functions verify the correctness of the hybrid ant colony algorithm.
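The 2-opt local improvement mentioned above can be sketched generically as follows (illustrative, not the paper's exact implementation): a segment of the tour is reversed whenever doing so shortens the tour. The 4-city distance matrix is a toy example.

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while that reduces the tour length."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

# toy 4-city instance given as a symmetric distance matrix
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(two_opt([0, 2, 1, 3], dist))
```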

10.
Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the logistic map, to generate pseudo-random numbers that are mapped to the design variables for global optimization. Many existing studies have indicated that a COA can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and the search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. The computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, where BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of a COA. To achieve high efficiency, it is recommended to adopt a chaotic map that generates chaotic sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
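The way a COA turns a chaotic map into candidate design variables can be sketched as below. Only the logistic map is shown; the paper compares eight maps and couples them with BFGS for local refinement, which is omitted here. The toy objective and bounds are illustrative.

```python
def logistic_sequence(n, x0=0.7, r=4.0):
    """Generate n values of the logistic map x_{k+1} = r * x_k * (1 - x_k) in (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_candidates(n, lower, upper, x0=0.7):
    """Map a chaotic sequence onto the design-variable range [lower, upper]."""
    return [lower + z * (upper - lower) for z in logistic_sequence(n, x0)]

f = lambda x: x ** 2 + 10 * abs(x - 1)          # toy objective
cands = chaotic_candidates(200, -5.0, 5.0)
print(min(cands, key=f))                        # best chaotic sample; a local method would refine it
```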

11.
In this paper a new heuristic hybrid technique for bound-constrained global optimization is proposed. We develop an iterative algorithm called GLPτS that uses genetic algorithms, LPτ low-discrepancy sequences of points and heuristic rules to find regions of attraction when searching for the global minimum of an objective function. Subsequently, the Nelder–Mead simplex local search technique is used to refine the solution. The combination of the three techniques (genetic algorithms, LPτ low-discrepancy search and simplex search) provides a powerful hybrid heuristic optimization method, which is tested on a number of benchmark multimodal functions with 10 to 150 dimensions, and the method's properties (applicability, convergence, consistency and stability) are discussed in detail.
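LPτ sequences are Sobol'-type low-discrepancy sequences, so the exploration-plus-refinement idea can be sketched with SciPy's quasi-Monte Carlo module. This is a simplified stand-in for GLPτS, which additionally uses genetic operators and heuristic rules; the sample size and test function are illustrative.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def rastrigin(x):
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

dim, lo, hi = 10, -5.12, 5.12
# Stage 1: cover the box with a Sobol (LPtau-type) low-discrepancy point set.
sample = qmc.Sobol(d=dim, scramble=True, seed=0).random(256)
points = qmc.scale(sample, [lo] * dim, [hi] * dim)
best = min(points, key=rastrigin)
# Stage 2: refine the best sampled point with the Nelder-Mead simplex method.
res = minimize(rastrigin, best, method="Nelder-Mead")
print(res.fun)
```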

12.
A hybrid immune multiobjective optimization algorithm
In this paper, we develop a hybrid immune multiobjective optimization algorithm (HIMO) based on the clonal selection principle. In HIMO, a hybrid mutation operator combining Gaussian and polynomial mutations (the GP-HM operator) is proposed. The GP-HM operator adopts an adaptive switching parameter to control the mutation process, using relatively large steps with high probability for boundary individuals and less-crowded individuals. As the generations proceed, the probability of performing relatively large steps is gradually reduced. In this way, the exploratory capabilities are enhanced while keeping a desirable balance between global search and local search, so as to accelerate convergence towards the true Pareto-optimal front in global spaces with many local Pareto-optimal fronts. When HIMO is compared with various state-of-the-art multiobjective optimization algorithms developed recently, simulation results show that it clearly performs better.
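The two mutation components combined in the GP-HM operator can be sketched as follows. These are generic forms: HIMO's adaptive switching parameter is reduced here to a fixed probability, and the step sizes and distribution index are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(0)

def gaussian_mutation(x, lo, hi, sigma=0.1):
    """Local perturbation: add Gaussian noise scaled to the variable range."""
    return np.clip(x + rng.normal(0.0, sigma * (hi - lo), x.shape), lo, hi)

def polynomial_mutation(x, lo, hi, eta=20.0):
    """Deb's polynomial mutation (basic form); eta controls the perturbation size."""
    u = rng.random(x.shape)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1.0 / (eta + 1)) - 1.0,
                     1.0 - (2 * (1 - u)) ** (1.0 / (eta + 1)))
    return np.clip(x + delta * (hi - lo), lo, hi)

def gp_mutation(x, lo, hi, p_poly=0.8):
    """Switch between the two operators; HIMO adapts the switching parameter during the run."""
    return polynomial_mutation(x, lo, hi) if rng.random() < p_poly else gaussian_mutation(x, lo, hi)

print(gp_mutation(np.zeros(5), -1.0, 1.0))
```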

13.
Global Minimization Algorithms for Hölder Functions
This paper deals with the one-dimensional global optimization problem where the objective function satisfies a Hölder condition over a closed interval. A direct extension of the popular Piyavskii method, proposed for Lipschitz functions, to Hölder optimization requires an a priori estimate of the Hölder constant and the solution of an equation of degree N at each iteration. In this paper a new scheme is introduced, and three algorithms are proposed for solving one-dimensional Hölder global optimization problems. All of them work without solving equations of degree N. The case (arising very often in applications) when a Hölder constant is not given a priori is also considered. It is shown that local information about the objective function used inside the global procedure can accelerate the search significantly. Numerical experiments show quite promising performance of the new algorithms.
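A hedged illustration of the Hölder lower-bounding idea behind such methods, using a simplified grid evaluation of the lower envelope rather than one of the paper's three algorithms: with constant H and exponent alpha, every evaluated point x_i yields the bound f(x) >= f(x_i) - H |x - x_i|^alpha, and the next trial point can be placed where the envelope is lowest. The test function and constants are illustrative assumptions.

```python
import numpy as np

def holder_search(f, a, b, H, alpha, iters=30, grid=2001):
    """Sequentially evaluate f where the piecewise Hölder lower bound is smallest."""
    xs = list(np.linspace(a, b, 3))          # initial evaluation points
    fs = [f(x) for x in xs]
    grid_x = np.linspace(a, b, grid)
    for _ in range(iters):
        # lower envelope: max over i of f(x_i) - H * |x - x_i|^alpha
        env = np.max([fi - H * np.abs(grid_x - xi) ** alpha for xi, fi in zip(xs, fs)], axis=0)
        x_new = float(grid_x[np.argmin(env)])
        xs.append(x_new); fs.append(f(x_new))
    i = int(np.argmin(fs))
    return xs[i], fs[i]

f = lambda x: np.sin(3 * x) + 0.3 * np.sqrt(abs(x - 1))   # Hölder but not Lipschitz at x = 1
print(holder_search(f, 0.0, 4.0, H=8.0, alpha=0.5, iters=40))
```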

14.
Variable neighborhood search: Principles and applications
Systematic change of neighborhood within a possibly randomized local search algorithm yields a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS). We present a basic scheme for this purpose, which can easily be implemented using any local search algorithm as a subroutine. Its effectiveness is illustrated by solving several classical combinatorial and global optimization problems. Moreover, several extensions are proposed for solving large problem instances: using VNS within the successive approximation method yields a two-level VNS, called variable neighborhood decomposition search (VNDS); modifying the basic scheme to easily explore valleys far from the incumbent solution yields an efficient skewed VNS (SVNS) heuristic. Finally, we show how to stabilize column generation algorithms with the help of VNS and discuss various ways to use VNS in graph theory, i.e., to suggest, disprove or give hints on how to prove conjectures, an area where metaheuristics do not appear to have been applied before.
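The basic scheme can be sketched as follows, in a generic continuous version with nested box neighborhoods; any local search routine can be plugged in (Nelder–Mead is used here only for convenience), and the neighborhood radii and test function are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def basic_vns(f, x0, radii=(0.1, 0.5, 1.0, 2.0), iters=50, seed=0):
    """Basic VNS: shake in the k-th neighborhood, run a local search,
    move and reset k on improvement, otherwise try the next neighborhood."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        k = 0
        while k < len(radii):
            y = x + rng.uniform(-radii[k], radii[k], x.shape)   # shaking
            res = minimize(f, y, method="Nelder-Mead")          # local search
            if res.fun < fx:
                x, fx = res.x, res.fun                          # move and restart from the first neighborhood
                k = 0
            else:
                k += 1                                          # enlarge the neighborhood
    return x, fx

print(basic_vns(rastrigin, x0=[3.0, -2.5]))
```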

15.
In this paper new global optimization algorithms are proposed for solving problems where the objective function is univariate and has Lipschitzean first derivatives. To solve this problem, smooth auxiliary functions, which are adaptively improved during the course of the search, are constructed. Three new algorithms are introduced: the first uses the exact, a priori known Lipschitz constant for the derivatives; the second, when this constant is unknown, estimates it during the course of the search; and the last method uses neither the exact global Lipschitz constant nor its estimate, but instead adaptively estimates the local Lipschitz constants in different sectors of the search region during the course of the optimization. Convergence conditions of the methods are investigated from a general viewpoint and some numerical results are also given.

16.
Several meta-heuristic algorithms, such as evolutionary algorithms (EAs) and genetic algorithms (GAs), have been developed for solving feature selection problems because of their efficiency in searching feature subset spaces. Recently, hybrid GAs have been proposed to improve the performance of conventional GAs by embedding a local search operation, such as a sequential forward floating search mutation, into the GA. Existing hybrid algorithms may damage the genetic information that individuals obtain from genetic operations during the local improvement procedure, because the mutation operation and the local improvement operation are carried out sequentially. Another issue with the local search operation used in existing hybrid algorithms is that it is not appropriate for large-scale problems. Therefore, we propose a novel approach for solving large-sized feature selection problems, namely an EA with a partial sequential forward floating search mutation (EAwPS). The proposed approach integrates a local search technique, the partial sequential forward floating search mutation, into an EA method. Two algorithms have been developed: EAwPS-binary representation (EAwPS-BR) for medium-sized problems and EAwPS-integer representation (EAwPS-IR) for large-sized problems. Integrating a local improvement method into the EA speeds up the search and directs it towards promising solution areas. We compare the performance of the proposed algorithms with other popular meta-heuristic algorithms using medium- and large-sized data sets. Experimental results demonstrate that the proposed EAwPS extracts better features within reasonable computational times.
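The forward-floating idea used in the mutation can be sketched on a binary feature mask as follows. This is a simplified, generic single step with a placeholder scoring function; EAwPS applies the search only partially and embeds it in the EA, and the score and feature indices here are purely illustrative.

```python
import numpy as np

def sffs_step(mask, score):
    """One sequential-forward-floating step on a boolean feature mask:
    add the single unselected feature that improves the score most, then drop any
    selected feature whose removal improves the score (the 'floating' backtrack)."""
    mask = mask.copy()
    candidates = np.flatnonzero(~mask)
    if candidates.size:
        gains = []
        for j in candidates:
            trial = mask.copy(); trial[j] = True
            gains.append(score(trial))
        j_best = candidates[int(np.argmax(gains))]
        if max(gains) > score(mask):
            mask[j_best] = True
    improved = True
    while improved and mask.sum() > 1:
        improved = False
        for j in np.flatnonzero(mask):
            trial = mask.copy(); trial[j] = False
            if score(trial) > score(mask):
                mask, improved = trial, True
                break
    return mask

# placeholder score: reward two 'useful' features and penalize subset size
useful = {1, 4}
score = lambda m: sum(1.0 for j in np.flatnonzero(m) if j in useful) - 0.1 * m.sum()
print(sffs_step(np.zeros(8, dtype=bool), score))
```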

17.
In some manufacturing scenarios, the time needed to transfer jobs between machines has a significant impact on the total tardiness of the shop schedule. On this basis, this paper extends the flexible job-shop scheduling model that minimizes total tardiness. Given the complexity of the model, a hybrid of particle swarm optimization and a genetic algorithm is used to solve it. During initialization, machines with short processing and transfer times are preferentially selected with a certain probability and frequently scheduled machines are excluded, so that the population favors individuals with good optimization results while maintaining diversity; the crossover and mutation probabilities are adjusted dynamically in a linear manner so that the population has different search intensities at different stages of the genetic algorithm; and particle swarm optimization is used for local search to compensate for the weak local search ability of the genetic algorithm. Finally, the proposed method and other methods are applied to flexible job-shop scheduling instances, and the total tardiness under different levels of transfer time is compared, verifying the effectiveness of the proposed method.  相似文献   
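The linear adjustment of the crossover and mutation probabilities over the generations can be sketched as follows. The endpoint values, and the choice of which probability increases or decreases, are illustrative assumptions rather than the paper's settings.

```python
def linear_schedule(gen, max_gen, start, end):
    """Linearly interpolate a probability from `start` at generation 0 to `end` at max_gen."""
    return start + (end - start) * gen / max_gen

MAX_GEN = 200
for gen in (0, 100, 200):
    pc = linear_schedule(gen, MAX_GEN, start=0.9, end=0.6)   # illustrative: crossover probability decreases
    pm = linear_schedule(gen, MAX_GEN, start=0.01, end=0.1)  # illustrative: mutation probability increases
    print(gen, round(pc, 3), round(pm, 3))
```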

18.
Genetic algorithms have attracted a good deal of interest in the heuristic search community. Yet there are several different types of genetic algorithms with varying performance and search characteristics. In this article we look at three genetic algorithms: an elitist simple genetic algorithm, the CHC algorithm and Genitor. One problem in comparing algorithms is that most test problems in the genetic algorithm literature can be solved using simple local search methods. In this article, the three algorithms are compared using new test problems that are not readily solved using simple local search methods. We then compare a local search method to genetic algorithms for geometric matching and examine a hybrid algorithm that combines local and genetic search. The geometric matching problem matches a model (e.g., a line drawing) to a subset of lines contained in a field of line fragments. Local search is currently the best known method for solving general geometric matching problems.

19.
In this paper we present a chaos-based evolutionary algorithm (EA) for solving nonlinear programming problems, named the chaotic genetic algorithm (CGA). CGA integrates a genetic algorithm (GA) with a chaotic local search (CLS) strategy to accelerate the search for the optimum and to speed up convergence to the global solution. The integration of the global search of the genetic algorithm with the CLS procedure should offer the advantages of both optimization methods while offsetting their disadvantages. In this way, the approach is intended to enhance global convergence and to prevent the search from getting stuck at a local solution. The inherent characteristics of chaos can enhance optimization algorithms by enabling them to escape from local solutions and by increasing convergence towards the global solution. Twelve chaotic maps have been analyzed in the proposed approach. The simulation results on the CEC'2005 benchmark set show that the application of chaotic mapping can be an effective strategy to improve the performance of EAs.
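The chaotic local search component can be sketched as follows, in a logistic-map version with a shrinking radius around the incumbent; the paper analyzes twelve different maps, and the radius schedule, seeds and toy objective here are illustrative assumptions.

```python
import numpy as np

def chaotic_local_search(f, x_best, lo, hi, iters=100, radius=1.0, shrink=0.97, seed=0):
    """Perturb the incumbent with logistic-map values rescaled to [-radius, radius],
    keep improvements, and shrink the radius as the search proceeds."""
    x_best = np.asarray(x_best, dtype=float)
    f_best = f(x_best)
    z = np.random.default_rng(seed).uniform(0.1, 0.9, x_best.shape)  # independent map seeds per coordinate
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                          # logistic map, chaotic for r = 4
        cand = np.clip(x_best + radius * (2.0 * z - 1.0), lo, hi)
        fc = f(cand)
        if fc < f_best:
            x_best, f_best = cand, fc
        radius *= shrink
    return x_best, f_best

f = lambda x: float(np.sum(np.asarray(x) ** 2 + 5 * np.abs(np.sin(np.asarray(x)))))
print(chaotic_local_search(f, [2.0, -1.5], lo=-5.0, hi=5.0))
```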

20.
In this paper, we consider the bioprocess system optimal control problem. Generally speaking, it is very difficult to solve this problem analytically. To obtain a numerical solution, the problem is transformed into a parameter optimization problem with variable bounds, which can be solved efficiently using conventional optimization algorithms, e.g. the improved Broyden–Fletcher–Goldfarb–Shanno algorithm. However, although the improved Broyden–Fletcher–Goldfarb–Shanno algorithm is very efficient for local search, the solution obtained is usually a local extremum for non-convex optimal control problems. In order to escape from the local extremum, we develop a novel stochastic search method. A large number of numerical experiments show that the novel stochastic search method is excellent at exploration but poor at exploitation. In order to improve the exploitation, we propose a hybrid numerical optimization algorithm based on the novel stochastic search method and the improved Broyden–Fletcher–Goldfarb–Shanno algorithm. Convergence results indicate that any global optimal solution of the approximate problem is also a global optimal solution of the original problem. Finally, two bioprocess system optimal control problems illustrate that the proposed hybrid numerical optimization algorithm is less time-consuming and obtains a better cost function value than the existing approaches.
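The exploration-plus-BFGS idea can be sketched generically as follows, once the control problem has been discretized into a bounded parameter vector. A plain random multistart stands in for the paper's stochastic search method and SciPy's BFGS stands in for the improved BFGS; the surrogate objective and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_search(f, bounds, n_starts=20, seed=0):
    """Random exploration of the parameter box followed by BFGS refinement of each start."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                       # stochastic exploration
        res = minimize(f, x0, method="BFGS")           # local exploitation
        x = np.clip(res.x, lo, hi)                     # keep the result inside the bounds
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# toy multimodal surrogate for a discretized control parameterization
f = lambda u: float(np.sum((np.asarray(u) - 1.0) ** 2) + np.sin(5 * np.asarray(u)).sum())
print(hybrid_search(f, bounds=[(-2.0, 2.0)] * 4))
```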
