20 similar documents found; search time 125 ms
1.
2.
The key to stability analysis of stepped loess slopes is estimating the minimum value of the stability factor. Solving for the stability factor involves many variables and a complicated computation, and traditional optimization algorithms often fail to search out its global minimum. To this end, an improved adaptive genetic algorithm is proposed. The algorithm partitions the gene-variable space into a grid, builds a uniformly distributed initial population by iterative selection, applies an elitist retention strategy, and adaptively adjusts the crossover and mutation probabilities according to specified criteria, improving the algorithm's global search ability and convergence speed. A practical example shows that the algorithm converges quickly and effectively to the global minimum of the slope stability factor, and that the computed results agree better with reality.
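The abstract does not spell out the adaptive adjustment rule, so the sketch below uses the common Srinivas-Patnaik scheme as a stand-in: above-average individuals get rates interpolated toward the minimum (protecting good solutions), below-average ones get the maximum rates (encouraging exploration). All function names and default values are illustrative.

```python
def adaptive_rates(f, f_avg, f_max,
                   pc_max=0.9, pc_min=0.6, pm_max=0.1, pm_min=0.01):
    """Adaptive crossover/mutation probabilities (Srinivas-Patnaik style).

    `f` is the fitness of the individual (larger = fitter).
    Fitter-than-average individuals get lower rates (preserved);
    below-average individuals get the maximum rates (explored).
    """
    if f_max == f_avg:              # degenerate: population has converged
        return pc_max, pm_max
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return (pc_min + (pc_max - pc_min) * scale,
                pm_min + (pm_max - pm_min) * scale)
    return pc_max, pm_max
```

The best individual receives the minimum rates, so elites survive recombination largely intact.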
3.
Global convergence of a variable metric projection algorithm for constrained optimization problems. Cited: 3 (self-citations: 1; by others: 2)
Ye Liuqing, 《数学的实践与认识》 (Mathematics in Practice and Theory), 2004, 34(2): 115-117
Using the projection operator P_Ω, a variable metric projection algorithm for problem (P) is established, and the algorithm's global convergence is discussed.
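As a concrete instance of projection-based iteration, here is a minimal fixed-step projected-gradient sketch for a box constraint set; the paper's variable metric version would additionally rescale the gradient by a matrix H_k, which is omitted here for brevity.

```python
def project_box(x, lo, hi):
    """P_Omega for the box constraint set Omega = [lo, hi]^n."""
    return [min(hi, max(lo, xi)) for xi in x]

def projected_gradient(grad, x, lo, hi, step=0.1, iters=200):
    """Fixed-step projected-gradient iteration:
        x_{k+1} = P_Omega(x_k - t * grad f(x_k)).
    A variable metric method would replace `step * g` with H_k @ g."""
    for _ in range(iters):
        x = project_box([xi - step * gi for xi, gi in zip(x, grad(x))],
                        lo, hi)
    return x
```

For a convex objective the iterates converge to the constrained minimizer, which sits on the boundary whenever the unconstrained minimizer lies outside the box.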
4.
To solve a mean-variance model with a conditional value-at-risk (CVaR) constraint, a particle swarm optimization algorithm based on generalized learning and Cauchy mutation (CCPSO) is proposed. In CCPSO, a generalized learning strategy is introduced to improve the population's ability to escape local optima, raising the probability that particles fly toward the optimum; a dynamic mutation probability is introduced, under which each particle's personal-best position undergoes Cauchy mutation to better guide the swarm's flight; finally, according to the state of the global best particle, it is mutated every few generations to produce a new global leader. On benchmark test functions, CCPSO performs well; in CVaR-constrained portfolio optimization, the results obtained by CCPSO are effective and superior to those of the other algorithms compared.
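The Cauchy mutation of a personal-best position can be sketched as follows: the Cauchy distribution's heavy tails produce occasional very long jumps, which is exactly what helps a stalled swarm escape a local optimum. The per-dimension mutation probability, scale, and bounds below are illustrative assumptions, not values from the paper.

```python
import math
import random

def cauchy_mutate(pbest, scale=1.0, p_mut=0.1, bounds=(-10.0, 10.0)):
    """Cauchy-mutate a particle's personal-best position.

    A standard Cauchy sample is tan(pi * (U - 0.5)) for U ~ Uniform(0, 1);
    its heavy tails yield occasional large jumps out of local optima.
    Mutated coordinates are clipped back into the search bounds."""
    lo, hi = bounds
    out = []
    for x in pbest:
        if random.random() < p_mut:
            x += scale * math.tan(math.pi * (random.random() - 0.5))
        out.append(min(hi, max(lo, x)))
    return out
```

In a full CCPSO loop this would be applied to `pbest` under the dynamic mutation probability, with the mutant kept only if it improves the objective.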
5.
Sampling is a general and effective approximation technique, and sample-based approximate aggregate query processing is common in decision-support systems and data-mining tools; producing correct, effective approximate answers while minimizing the approximation error is the key goal of such query processing. Building on a detailed study of Congressional Samples, a representative sampling method for approximate aggregate queries, this paper identifies its shortcomings and the limits of its applicability, and proposes an optimized Congressional sampling method, the OptCongress algorithm. When the data within a group have a high-variance distribution, the algorithm overcomes the original method's simple uniform sampling and improves the quality of approximate aggregate queries; it also improves the per-group sample-allocation procedure, which in the original algorithm lacked a rigorous formulaic description and was hard to evaluate theoretically. Experiments confirm the effectiveness and correctness of the optimized algorithm.
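The per-group allocation idea, giving high-variance groups a larger share of the sample budget than uniform allocation would, can be illustrated with classical Neyman allocation. This is a generic sketch, not the OptCongress formula; the rounding step is a simplification that may make the allocated total differ slightly from the budget.

```python
def allocate_samples(groups, total):
    """Allocate a sample budget across groups in proportion to
    N_g * sigma_g (Neyman allocation), so high-variance groups
    receive more samples than uniform per-group allocation gives.

    `groups` maps a group key to (group_size, std_dev).
    Each group gets at least one sample; rounding is naive."""
    weights = {g: n * s for g, (n, s) in groups.items()}
    w_sum = sum(weights.values()) or 1.0
    return {g: max(1, round(total * w / w_sum))
            for g, w in weights.items()}
```

With two equal-sized groups whose standard deviations differ by 3x, the noisier group receives 3x the samples.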
6.
We study the two parallel-machine scheduling problem with a machine learning effect and makespan objective; the problem is NP-hard. First, an integer programming model for the optimal solution is established. Next, an approximation algorithm SA based on simulated annealing is given, and the algorithm is proved to converge to the global optimum with probability 1. Finally, the algorithm's performance is analyzed by numerical simulation. The results show that SA reaches 99% of the optimal value: it is accurate and effective.
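A minimal sketch of such a simulated-annealing approach, assuming a positional learning effect of the common form p_j * r^a with a < 0 (the abstract does not specify the learning model); the neighborhood move, cooling schedule, and all parameter values are illustrative.

```python
import math
import random

def makespan(assignment, p, a=-0.2):
    """Makespan on two machines under a positional learning effect:
    the job scheduled r-th on a machine takes p_j * r**a time
    (a < 0, so a machine 'learns' and later jobs run faster)."""
    loads, pos = [0.0, 0.0], [0, 0]
    for j, m in enumerate(assignment):
        pos[m] += 1
        loads[m] += p[j] * pos[m] ** a
    return max(loads)

def anneal(p, iters=2000, t0=10.0, cooling=0.995, seed=0):
    """Simulated annealing over machine assignments (a 0/1 vector);
    a move flips one job to the other machine."""
    rng = random.Random(seed)
    cur = [rng.randint(0, 1) for _ in p]
    cur_val = makespan(cur, p)
    best, best_val = cur[:], cur_val
    t = t0
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(len(p))] ^= 1      # flip one job's machine
        val = makespan(cand, p)
        # accept improvements always, uphill moves with Boltzmann prob.
        if val < cur_val or rng.random() < math.exp((cur_val - val) / t):
            cur, cur_val = cand, val
            if val < best_val:
                best, best_val = cand[:], val
        t = max(t * cooling, 1e-9)
    return best, best_val
```

Tracking `best` separately from the current state is what makes the convergence-with-probability-1 guarantee meaningful in practice.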
7.
We discuss the unconstrained minimax problem. Based on an active-set identification technique, combined with a perturbed sequential quadratic programming (SQP) method, a numerical method for the problem is established. Under rather weak conditions the algorithm is weakly globally convergent, and preliminary numerical experiments are reported.
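The SQP-based method itself cannot be reconstructed from the abstract; as a simpler illustration of how a nonsmooth minimax objective max_i f_i(x) is made tractable, the sketch below shows log-sum-exp smoothing, a standard differentiable surrogate (an alternative to, not a reproduction of, the paper's approach).

```python
import math

def smooth_max(fs, x, mu=0.01):
    """Log-sum-exp smoothing of max_i f_i(x):
        mu * log(sum_i exp(f_i(x) / mu)),
    computed in a shifted, overflow-safe form. As mu -> 0 this
    approaches the true max, so minimizing it approximates the
    minimax problem with a smooth objective."""
    vals = [f(x) for f in fs]
    m = max(vals)
    return m + mu * math.log(sum(math.exp((v - m) / mu) for v in vals))
```

The smoothed surrogate overestimates the true max by at most mu * log(k) for k component functions, so mu directly controls the approximation error.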
8.
9.
《数学的实践与认识》 (Mathematics in Practice and Theory), 2015, (19)
To address the premature-convergence problem of the IAGA adaptive genetic algorithm, an improved adaptive genetic algorithm (NIAGA) is proposed. A self-defined discriminant determines whether the population shows a premature-convergence tendency; depending on the case, macro-level regulation or micro-level processing is used to set the crossover probability Pc and mutation probability Pm, driving the algorithm out of premature convergence. Simulation results show that the new algorithm effectively mitigates IAGA's premature convergence and exhibits stronger global convergence.
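The paper's discriminant is self-defined and not given in the abstract; the sketch below uses a stand-in test (collapsed fitness variance) together with a macro-level rate adjustment, purely to illustrate the detect-then-adjust control loop. Thresholds and scaling factors are illustrative.

```python
def premature_convergence(fitness, eps=1e-3):
    """Stand-in discriminant for a premature-convergence tendency:
    the population's fitness variance has collapsed below eps.
    (The paper's actual discriminant is not reproduced here.)"""
    n = len(fitness)
    mean = sum(fitness) / n
    var = sum((f - mean) ** 2 for f in fitness) / n
    return var < eps

def adjust_rates(converging, pc, pm):
    """Macro-level regulation: when a premature-convergence tendency
    is detected, lower the crossover probability and raise the
    mutation probability to push the population back to exploring."""
    if converging:
        return max(0.5, pc * 0.9), min(0.5, pm * 2.0)
    return pc, pm
```

Micro-level processing in the paper would act on individual chromosomes instead; only the population-level branch is sketched here.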
10.
11.
In this paper, based on the transfer relationship between reciprocal preference relations and multiplicative preference relations, we propose a least deviation method (LDM) to obtain a priority vector for group decision making (GDM) problems in which decision-makers' (DMs') assessments of alternatives are furnished as incomplete reciprocal preference relations with missing values. Relevant theorems are investigated and a convergent iterative algorithm for the LDM is developed. Using three numerical examples, the LDM is compared with other prioritization methods on two performance evaluation criteria: maximum deviation and maximum absolute deviation. A statistical comparative study, the computational complexity of the different algorithms, and comparative analyses show its advantages over existing approaches.
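For context, a standard baseline prioritization method against which an LDM is typically compared is the row geometric mean of a complete multiplicative preference relation; the LDM's iterative scheme and its handling of missing entries are not reproduced here.

```python
import math

def priority_geometric_mean(A):
    """Row geometric-mean priority vector from a complete
    multiplicative preference relation (pairwise comparison matrix
    with a_ij = 1 / a_ji). Returns weights normalized to sum to 1.
    Missing-value handling, as in the LDM, is omitted."""
    n = len(A)
    w = [math.prod(row) ** (1.0 / n) for row in A]
    s = sum(w)
    return [wi / s for wi in w]
```

For a perfectly consistent matrix (a_ij = w_i / w_j) this recovers the underlying weights exactly.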
12.
Journal of the Egyptian Mathematical Society, 2014, 22(1): 102-114
This paper is devoted to a numerical comparison of methods for solving the generalized Ito system. Four numerical methods are compared against the exact solution: the Laplace decomposition method (LDM), the variational iteration method (VIM), the homotopy perturbation method (HPM), and the Laplace decomposition method with the Padé approximant (LD-PA).
13.
A parallel stochastic algorithm is presented for solving the linearly constrained concave global minimization problem. The algorithm is a multistart method and makes use of a Bayesian stopping rule to identify the global minimum with high probability. Computational results are presented for more than 200 problems on a Cray X-MP EA/464 supercomputer.
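A minimal serial sketch of a multistart method with a Boender/Rinnooy Kan-style Bayesian stopping rule, using a crude 1-D numerical-gradient descent as the local solver; the paper's parallel implementation and its exact stopping rule may differ from this stand-in.

```python
import random

def descend(f, x, step=0.01, iters=1000):
    """Crude 1-D gradient descent with a central-difference derivative."""
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def multistart(f, sample, max_starts=100, seed=0):
    """Multistart with a Bayesian stopping rule: after n starts
    yielding w distinct local minima, a posterior estimate of the
    total number of minima is w(n - 1)/(n - w - 2); stop once that
    estimate falls within 0.5 of w (i.e. all minima likely found)."""
    rng = random.Random(seed)
    minima = []
    for n in range(1, max_starts + 1):
        x = descend(f, sample(rng))
        if all(abs(x - m) > 1e-3 for m in minima):
            minima.append(x)          # a not-yet-seen local minimum
        w = len(minima)
        if n - w - 2 > 0 and w * (n - 1) / (n - w - 2) < w + 0.5:
            break
    return min(minima, key=f)
```

The rule stops early when new starts keep rediscovering known minima, which is what makes multistart practical on the scale of hundreds of problems.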
14.
This paper considers the nonlinearly constrained continuous global minimization problem. Based on the idea of the penalty function method, an auxiliary function, which has approximately the same global minimizers as the original problem, is constructed. An algorithm is developed to minimize the auxiliary function to find an approximate constrained global minimizer of the constrained global minimization problem. The algorithm can escape from the previously converged local minimizers, and can converge to an approximate global minimizer of the problem asymptotically with probability one. Numerical experiments show that it is better than some other well known recent methods for constrained global minimization problems.
15.
16.
In this paper, we consider the box constrained nonlinear integer programming problem. We present an auxiliary function, which has the same discrete global minimizers as the problem. The minimization of the function using a discrete local search method can escape successfully from previously converged discrete local minimizers by taking increasing values of a parameter. We propose an algorithm to find a global minimizer of the box constrained nonlinear integer programming problem. The algorithm minimizes the auxiliary function from random initial points. We prove that the algorithm can converge asymptotically with probability one. Numerical experiments on a set of test problems show that the algorithm is efficient and robust.
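The discrete local search such a method builds on can be sketched as greedy ±1 coordinate moves over the integer box; the auxiliary-function wrapper that escapes previously found local minimizers is omitted here, so this is only the inner component, with illustrative names.

```python
def discrete_local_search(f, x0, lo, hi):
    """Greedy discrete local search on the integer box [lo, hi]^n:
    repeatedly take any +/-1 coordinate move that strictly improves
    f, and stop at a point with no improving neighbor -- a discrete
    local minimizer."""
    x = list(x0)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for d in (-1, 1):
                y = x[:]
                y[i] += d
                if lo <= y[i] <= hi and f(y) < f(x):
                    x, improved = y, True
    return x
```

Each accepted move strictly decreases f, and f takes finitely many values on the box, so termination is guaranteed.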
17.
A new method for continuous global minimization problems, acronymed SCM, is introduced. The method gives a simple transformation that converts the objective function into an auxiliary function with gradually fewer local minimizers. All local minimizers except a prefixed one of the auxiliary function lie in the region where the objective function's value is lower than its current minimal value. Based on this method, an algorithm is designed that uses a local optimization method to minimize the auxiliary function and find a local minimizer at which the objective value is lower than the current minimal value. The algorithm converges asymptotically with probability one to a global minimizer of the objective function. Numerical experiments on a set of standard test problems, with problem dimensions up to 50, show that the algorithm is very efficient compared with other global optimization methods.
18.
19.
20.
This paper presents a kind of dynamic genetic algorithm based on a continuous neural network, which is intrinsically the steepest descent method for constrained optimization problems. The proposed algorithm combines the local searching ability of steepest descent with the global searching ability of genetic algorithms. The genetic algorithm is used to decide each initial point of the steepest descent method so that all initial points can be searched intelligently, and the steepest descent method is used to decide the fitness of the genetic algorithm so that good initial points can be selected. The proposed algorithm is motivated theoretically and biologically. It can be used to solve non-convex optimization problems that are quadratic or even more strongly nonlinear. Compared with standard genetic algorithms, it improves the precision of the solution while decreasing the searching scale. In contrast to the ordinary steepest descent method, it can obtain a global sub-optimal solution while lessening the complexity of calculation.
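A minimal sketch of this hybrid coupling (without the neural-network component): the GA maintains a population of starting points, steepest descent refines each point, and the refined objective value serves as the fitness, so selection keeps starting points that lead to good basins. All names, operators, and parameters are illustrative assumptions.

```python
import random

def descend(grad, x, step=0.05, iters=60):
    """A short steepest-descent run from x (the local phase)."""
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, grad(x))]
    return x

def hybrid_ga(f, grad, dim, pop=20, gens=15, seed=0):
    """GA proposes starting points; steepest descent refines them
    and supplies the fitness used for selection."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        # fitness of a starting point = objective after local descent
        scored = sorted(P, key=lambda x: f(descend(grad, x)))
        elite = scored[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2.0 + rng.gauss(0.0, 0.2)
                             for ai, bi in zip(a, b)])   # blend + mutate
        P = elite + children
    return min((descend(grad, x) for x in P), key=f)
```

Evaluating fitness through the descent result is the key design choice: the GA searches over basins of attraction rather than over raw points.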