Similar Literature
20 similar documents found.
1.
Based on the mechanism of biological DNA genetic information and evolution, a modified DNA genetic algorithm (MDNA-GA) is proposed to estimate the kinetic parameters of 2-chlorophenol oxidation in supercritical water. In this approach, a DNA encoding method, a choose-crossover operator, and a frame-shift mutation operator inspired by biological DNA are developed to improve the global search ability. In addition, an adaptive mutation probability that adjusts automatically according to the diversity of the population is adopted, and a local search method is used to explore the search space and accelerate convergence towards the global optimum. The performance of MDNA-GA on typical benchmark functions and on the kinetic parameter estimation problem is studied and compared with RNA-GA. The experimental results demonstrate that the proposed algorithm can overcome premature convergence and find the global optimum with high efficiency.
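For illustration, the diversity-driven mutation probability can be sketched in Python; the exponential mapping and the bounds p_min and p_max below are assumptions of the sketch, not the actual MDNA-GA formula:

import numpy as np

def adaptive_mutation_prob(population, p_min=0.01, p_max=0.3):
    # Diversity is taken here as the mean per-gene standard deviation;
    # low diversity pushes the mutation probability towards p_max.
    diversity = population.std(axis=0).mean()
    return p_min + (p_max - p_min) * np.exp(-diversity)

# A nearly converged population triggers a high mutation rate.
pop = np.random.normal(0.5, 0.001, size=(50, 10))
print(adaptive_mutation_prob(pop))   # close to p_max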

2.
A level-value descent algorithm for unconstrained global optimization
彭拯  张海东  邬冬华 《应用数学》2007,20(1):213-219
This paper studies the unconstrained global optimization problem and establishes a new level-value descent method (LDM). A necessary and sufficient condition for attaining the global minimum in a probabilistic sense is discussed and established, and the LDM algorithm is proved to converge in probability measure. LDM is implemented via importance sampling and Markov chain Monte-Carlo simulation, with the sampling density updated automatically by the cross-entropy method. Numerical examples show that LDM attains high numerical accuracy and good global convergence.
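A minimal Python sketch of the importance-sampling idea behind such level-value methods, with a Gaussian sampling family and an elite fraction assumed as stand-ins for the paper's cross-entropy density update:

import numpy as np

def ce_level_value(f, dim, n=200, elite_frac=0.1, iters=50):
    # Sample from the current density, keep the best fraction, refit the
    # density to the elites, and track the descending level value.
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    n_elite = max(1, int(n * elite_frac))
    level = np.inf
    for _ in range(iters):
        x = np.random.normal(mu, sigma, size=(n, dim))
        vals = np.apply_along_axis(f, 1, x)
        elite = x[np.argsort(vals)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
        level = np.sort(vals)[n_elite - 1]   # current level value
    return mu, level

print(ce_level_value(lambda x: np.sum(x**2), dim=2))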

3.
Stochastic global search algorithms such as genetic algorithms are used to attack difficult combinatorial optimization problems. However, genetic algorithms suffer from the lack of a convergence proof, which makes it difficult to establish reliable stopping criteria without extensive a priori knowledge of the solution space. The hybrid genetic algorithm presented here combines a genetic algorithm with simulated annealing in order to overcome this convergence problem. The genetic algorithm runs inside the simulated annealing algorithm and achieves convergence via a Boltzmann cooling process. The hybrid algorithm was used successfully to solve a classical 30-city traveling salesman problem; it consistently outperformed both a conventional genetic algorithm and a conventional simulated annealing algorithm. This work was supported by the University of Colorado at Colorado Springs.
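The coupling of GA survival to a Boltzmann acceptance test can be sketched as follows; the geometric cooling schedule and Gaussian mutation are illustrative assumptions, not the cited hybrid's exact operators:

import math, random

def boltzmann_accept(delta, T):
    # Accept a worse candidate with Boltzmann probability exp(-delta/T).
    return delta <= 0 or random.random() < math.exp(-delta / T)

def hybrid_step(parent, fitness, mutate, T):
    # One GA-inside-SA step: produce a child, then apply the annealing test.
    child = mutate(parent)
    return child if boltzmann_accept(fitness(child) - fitness(parent), T) else parent

# Toy usage: minimize x^2 with geometric cooling.
x, T = 10.0, 5.0
for _ in range(1000):
    x = hybrid_step(x, lambda v: v * v, lambda v: v + random.gauss(0, 0.5), T)
    T *= 0.995
print(round(x, 3))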

4.
In this paper we present a chaos-based evolutionary algorithm (EA) for solving nonlinear programming problems, named the chaotic genetic algorithm (CGA). CGA integrates a genetic algorithm (GA) with a chaotic local search (CLS) strategy to accelerate the search and speed convergence to the global solution. Combining the global search of the genetic algorithm with the CLS procedure should offer the advantages of both optimization methods while offsetting their disadvantages, enhancing global convergence and preventing the algorithm from becoming stuck at a local solution. The inherent characteristics of chaos can enhance optimization algorithms by enabling them to escape from local solutions and improving convergence to the global solution. Twelve chaotic maps are analyzed in the proposed approach. Simulation results on the CEC'2005 test set show that chaotic mapping can be an effective strategy for improving the performance of EAs.
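A chaotic local search step can be sketched with the logistic map, one standard chaotic map; the shrinking radius and this particular map are assumptions of the sketch rather than the paper's exact CLS:

import numpy as np

def chaotic_local_search(f, x_best, radius=0.1, steps=100, z0=0.7):
    # Perturb the incumbent with a logistic-map chaotic sequence mapped
    # into a shrinking neighbourhood around the best point found so far.
    z = np.full_like(x_best, z0, dtype=float)
    best, f_best = x_best.copy(), f(x_best)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                 # logistic iterate in (0, 1)
        cand = best + radius * (2.0 * z - 1.0)  # step within [-radius, radius]
        f_cand = f(cand)
        if f_cand < f_best:
            best, f_best = cand, f_cand
        radius *= 0.99                          # gradually focus the search
    return best, f_best

print(chaotic_local_search(lambda x: np.sum(x**2), np.array([0.3, -0.2])))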

5.
Generalized hill climbing algorithms provide a framework for modeling several local search algorithms for hard discrete optimization problems. This paper introduces and analyzes generalized hill climbing algorithm performance measures that reflect how effectively an algorithm has performed to date in visiting a global optimum and how effectively an algorithm may perform in the future in visiting such a solution. These measures are also used to obtain a necessary asymptotic convergence (in probability) condition to a global optimum, which is then used to show that a common formulation of threshold accepting does not converge. These measures assume particularly simple forms when applied to specific search strategies such as Monte Carlo search and threshold accepting.

6.
This paper presents some simple technical conditions that guarantee the convergence of a general class of adaptive stochastic global optimization algorithms. By imposing some conditions on the probability distributions that generate the iterates, these stochastic algorithms can be shown to converge to the global optimum in a probabilistic sense. These results also apply to global optimization algorithms that combine local and global stochastic search strategies and also those algorithms that combine deterministic and stochastic search strategies. This makes the results applicable to a wide range of global optimization algorithms that are useful in practice. Moreover, this paper provides convergence conditions involving the conditional densities of the random vector iterates that are easy to verify in practice. It also provides some convergence conditions in the special case when the iterates are generated by elliptical distributions such as the multivariate Normal and Cauchy distributions. These results are then used to prove the convergence of some practical stochastic global optimization algorithms, including an evolutionary programming algorithm. In addition, this paper introduces the notion of a stochastic algorithm being probabilistically dense in the domain of the function and shows that, under simple assumptions, this is equivalent to seeing any point in the domain with probability 1. This, in turn, is equivalent to almost sure convergence to the global minimum. Finally, some simple results on convergence rates are also proved.

7.
A class of simulated annealing algorithms for continuous global optimization is considered in this paper. The global convergence property is analyzed with respect to the objective value sequence and the minimum objective value sequence induced by simulated annealing algorithms. The convergence analysis provides the appropriate conditions on both the generation probability density function and the temperature updating function. Different forms of temperature updating functions are obtained with respect to different kinds of generation probability density functions, leading to different types of simulated annealing algorithms which all guarantee the convergence to the global optimum.
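Two classical pairings from the simulated annealing literature illustrate the kind of condition involved: logarithmic cooling is associated with Gaussian generation densities and 1/k cooling with heavy-tailed Cauchy densities (standard forms, not necessarily the exact ones derived in this paper):

import math

def boltzmann_temperature(T0, k):
    # Logarithmic cooling, classically paired with Gaussian generation densities.
    return T0 / math.log(k + math.e)

def cauchy_temperature(T0, k):
    # Faster 1/k cooling, classically paired with Cauchy generation densities.
    return T0 / (1.0 + k)

for k in (0, 10, 100, 1000):
    print(k, round(boltzmann_temperature(100.0, k), 2),
          round(cauchy_temperature(100.0, k), 2))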

8.
To address the grey wolf optimizer's tendency to fall into local optima, its limited convergence accuracy, and its slow convergence, an improved grey wolf algorithm is proposed. Lévy flight is introduced to widen the search range and strengthen global exploration, avoiding local optima; a greedy principle is introduced to improve the quality of the population and thus the convergence accuracy; an adaptive convergence factor is introduced to speed up convergence; and a dynamic weighting strategy is introduced to balance the interplay between global and local search. The improved algorithm is compared with four other algorithms, and the experiments ...
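The Lévy flight ingredient is commonly generated with Mantegna's algorithm; the sketch below shows that standard construction (the exponent beta is an assumption):

import math
import numpy as np

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm: a ratio of Gaussians whose scale is chosen so
    # the result follows a heavy-tailed Levy-stable distribution.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Occasional long jumps widen the search range of the wolves.
print(levy_step(5))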

9.
The EM algorithm is widely used in incomplete-data problems (and some complete-data problems) for parameter estimation. One limitation of the EM algorithm is that, upon termination, it is not always near a global optimum. As reported by Wu (1982), when several stationary points exist, convergence to a particular stationary point depends on the choice of starting point; convergence to a saddle point or local minimum is also possible. In the EM algorithm, although the log-likelihood is unknown, an interval containing the gradient of the EM Q-function can be computed at individual points using interval analysis methods. By using interval analysis to enclose the gradient of the EM Q-function (and, consequently, of the log-likelihood), an algorithm is developed that is able to locate all stationary points of the log-likelihood within any designated region of the parameter space. The algorithm is applied to several examples. In one example involving the t distribution, the algorithm successfully locates all seven stationary points of the log-likelihood.
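The core interval-analysis move — enclosing a gradient over a box and discarding the box when the enclosure excludes zero — can be sketched with a minimal interval class (a generic illustration, not the paper's implementation):

class Interval:
    # Just enough interval arithmetic to enclose a polynomial gradient.
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

# Gradient of f(x) = x^2 - 2x is 2x - 2; enclose it over the box [1.5, 3].
x = Interval(1.5, 3.0)
g = Interval(2.0, 2.0) * x + Interval(-2.0, -2.0)
print(g.lo, g.hi, g.contains_zero())   # 1.0 4.0 False: no stationary point here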

10.
We propose a stochastic algorithm for the global optimization of chance constrained problems. We assume that the probability measure with which the constraints are evaluated is known only through its moments. The algorithm proceeds in two phases. In the first phase the probability distribution is (coarsely) discretized and solved to global optimality using a stochastic algorithm. We only assume that the stochastic algorithm exhibits a weak* convergence to a probability measure assigning all its mass to the discretized problem. A diffusion process is derived that has this convergence property. In the second phase, the discretization is improved by solving another nonlinear programming problem. It is shown that the algorithm converges to the solution of the original problem. We discuss the numerical performance of the algorithm and its application to process design.

11.
秦晓伟  刘新国  赵娜 《计算数学》2011,33(4):345-356
Further convergence results are obtained for the P-SOR method for solving the maximal correlation problem, and some new convergence conditions are derived. To raise the likelihood of converging to the global maximizer, a new strategy for choosing the initial vector is proposed. A symmetric form of the P-SOR algorithm (P-SSOR) is given, together with a refinement strategy for the algorithm. Finally, numerical examples demonstrate the effectiveness of the new method.

12.
Zheng Quan proposed a conceptual integral level-set algorithm in 1978. Since the level set is generally difficult to obtain, the algorithm is implemented by Monte-Carlo random sampling. This paper proposes a conceptual algorithm that approximates the global minimum through expectation-type level values; drawing on the main idea of cross-entropy (relative entropy) and changing the importance-sampling density function, it overcomes the difficulty in Zheng's algorithm that the level set, and hence the level value, is hard to compute. A convergence criterion for finding the global minimum is also given, and the asymptotic convergence of the method is proved.

13.
We derive the value of the mutation probability which maximizes the probability that the genetic algorithm finds the optimum value of the objective function under simple assumptions. This value is compared with the optimum mutation probability derived in other studies. An empirical study shows that this value, when used with a larger scaling factor in linear scaling, improves the performance of the genetic algorithm. This feature is then added to a model developed by Hinton and Nowlan which allows certain bits to be guessed in an effort to increase the probability of finding the optimum solution.
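Analyses of bit-string GAs under simple assumptions frequently arrive at a mutation rate near 1/L for chromosome length L; the sketch below illustrates that rule of thumb without asserting it matches this paper's derived value:

def rule_of_thumb_mutation_rate(chromosome_length):
    # Mutating each bit with probability 1/L flips one bit per
    # chromosome on average.
    return 1.0 / chromosome_length

def expected_flips(p_m, chromosome_length):
    # Expected number of flipped bits per chromosome.
    return p_m * chromosome_length

L = 64
p = rule_of_thumb_mutation_rate(L)
print(p, expected_flips(p, L))   # 0.015625 1.0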

14.
In this paper, the optimization of time-varying objective functions, known only through estimates, is considered. Recent research defined algorithms for static optimization problems. Based on one of these algorithms, we derive an optimization scheme for the time-varying case. In stochastic optimization problems, convergence of an algorithm to the optimum prevents the algorithm from being efficiently adaptive to changes of the objective function if it is time-varying. So, convergence cannot be required in a time-varying scenario. Rather, we require convergence to the optimum with high probability together with a satisfactory dynamical behavior. Analytical and simulative results illustrate the performance of the proposed algorithm compared with other optimization techniques.

15.
In Floudas and Visweswaran (1990, 1993), a deterministic global optimization approach was proposed for solving certain classes of nonconvex optimization problems. An algorithm, GOP, was presented for the solution of the problem through a series of primal and relaxed dual problems that provide valid upper and lower bounds, respectively, on the global solution. The algorithm was proved to have finite convergence to an ε-global optimum. In this paper, new theoretical properties are presented that help to enhance the computational performance of the GOP algorithm applied to problems of special structure. The effect of the new properties is illustrated through application of the GOP algorithm to a difficult indefinite quadratic problem, a multiperiod tankage quality problem that occurs frequently in the modeling of refinery processes, and a set of pooling/blending problems from the literature. In addition, extensive computational experience is reported for randomly generated concave and indefinite quadratic programming problems of different sizes. The results show that the properties help to make the algorithm computationally efficient for fairly large problems.

16.
Given that the genetic algorithm has weak hill-climbing ability but strong global search ability, this paper embeds a genetic algorithm into a quasi-descent algorithm based on traditional optimization and improves the quasi-descent step so that the whole algorithm is globally convergent. The global convergence is further proved from a Markov-chain viewpoint, and numerical examples on notoriously hard-to-optimize test functions show that the proposed algorithm is a feasible global optimization method.

17.
Owing to its characteristics, the genetic algorithm uses crossover, reproduction, and mutation to obtain globally optimal solutions without computing derivatives of the function; it treats optimization as a black-box problem concerned only with the input-output relation and is suited to various complex problems. Based on the idea of elitist preservation, this paper combines the steepest descent method with elitism and an adaptive genetic algorithm to solve nonlinear function optimization problems, proposing a global optimization method for nonlinear functions based on an adaptive hybrid genetic algorithm.
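A toy hybrid in the spirit described — GA-style variation plus a steepest-descent polish of the elite, with the best individual always preserved — might look like this (the population model and step sizes are assumptions):

import numpy as np

def descend(grad, x, lr=0.05, steps=20):
    # Polish one point with steepest descent.
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def hybrid_ga(f, grad, dim, pop_size=30, gens=50):
    pop = np.random.uniform(-5, 5, (pop_size, dim))
    for _ in range(gens):
        vals = np.apply_along_axis(f, 1, pop)
        elite = descend(grad, pop[vals.argmin()].copy())   # local refinement
        parents = pop[np.argsort(vals)[:pop_size // 2]]
        children = parents + np.random.normal(0, 0.5, parents.shape)
        pop = np.vstack([elite, parents[:-1], children])   # elitism keeps the best
    vals = np.apply_along_axis(f, 1, pop)
    return pop[vals.argmin()]

print(hybrid_ga(lambda x: np.sum(x**2), lambda x: 2 * x, dim=3))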

18.
An archive-based genetic algorithm for multi-objective optimization and its convergence analysis
An effective genetic algorithm for multi-objective optimization based on an archiving strategy is designed, and its convergence is discussed. First, the archive is defined and a dominance-based genetic algorithm with archiving is designed, whose effectiveness is verified on numerical examples. Then a distance between two archives is introduced, and on the basis of this distance the algorithm is proved to converge in the probabilistic sense.
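The dominance-based archive update at the heart of such methods can be sketched directly; this minimal rule is a generic illustration, not the paper's exact algorithm:

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    # Reject a candidate the archive already dominates; otherwise insert
    # it and drop every archived point it dominates.
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

arch = []
for point in [(3, 4), (2, 5), (2, 3), (4, 1)]:
    arch = update_archive(arch, point)
print(arch)   # [(2, 3), (4, 1)] -- only non-dominated points remain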

19.
With advanced capability in data collection, applications of linear regression analysis now often involve a large number of predictors. Variable selection thus has become an increasingly important issue in building a linear regression model. For a given selection criterion, variable selection is essentially an optimization problem that seeks the optimal solution over the 2^m possible linear regression models, where m is the total number of candidate predictors. When m is large, exhaustive search becomes practically impossible. Simple suboptimal procedures such as forward addition, backward elimination, and the backward-forward stepwise procedure are fast but can easily be trapped in a local solution. In this article we propose a relatively simple algorithm for selecting explanatory variables in a linear regression for a given variable selection criterion. Although the algorithm is still a suboptimal algorithm, it has been shown to perform well in extensive empirical study. The main idea of the procedure is to partition the candidate predictors into a small number of groups. Working with various combinations of the groups and iterating the search through random regrouping, the search space is substantially reduced, hence increasing the probability of finding the global optimum. By identifying and collecting “important” variables throughout the iterations, the algorithm finds increasingly better models until convergence. The proposed algorithm performs well in simulation studies with 60 to 300 predictors. As a by-product of the proposed procedure, we are able to study the behavior of variable selection criteria when the number of predictors is large. Such a study has not been possible with traditional search algorithms.

This article has supplementary material online.
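A toy version of the regroup-and-search idea — partition the predictors, score every union of groups, and re-randomize the partition each round — might look like this, with BIC assumed as the selection criterion:

import itertools
import numpy as np

def bic(X, y, subset):
    # BIC of the least-squares fit on the chosen columns.
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + (len(subset) + 1) * np.log(n)

def grouped_search(X, y, n_groups=4, rounds=10, seed=0):
    # Regrouping shrinks the search space from 2^m subsets to a handful
    # of group unions per round while still mixing variables over rounds.
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    best, best_bic = (), np.inf
    for _ in range(rounds):
        groups = np.array_split(rng.permutation(m), n_groups)
        for r in range(1, n_groups + 1):
            for combo in itertools.combinations(groups, r):
                subset = tuple(np.concatenate(combo))
                score = bic(X, y, subset)
                if score < best_bic:
                    best, best_bic = subset, score
    return sorted(int(j) for j in best), best_bic

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 12))
y = 2 * X[:, 0] - 3 * X[:, 5] + rng.normal(size=100)
print(grouped_search(X, y))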

20.
Chew Soo Hong and Zheng Quan proposed a conceptual integral level-set algorithm for finding global optima, together with a Monte-Carlo random sampling implementation, which has been applied successfully to many practical problems; the convergence of this implementation, however, remained an open question. Using the now widely applied genetic algorithm, this paper gives an alternative implementation of the algorithm and verifies its feasibility both theoretically and numerically.
