Similar Literature
Found 19 similar documents (search time: 46 ms)
1.
An improved genetic algorithm for optimizing nonlinear programming problems   (Total citations: 1, self: 0, other: 1)
Motivated by the particular strengths of genetic algorithms on optimization problems, this paper studies improvements to the genetic algorithm and applies them to nonlinear programming. The evolution strategy preserves the population elite and mutates individuals of low fitness; the crossover operator crosses segments by decision variable, which speeds up evolution; for constrained nonlinear programs, an operator-repair method is introduced to improve infeasible individuals. MATLAB simulation experiments show the method to be effective, reliable, and convenient.

2.
An adjustable entropy method for unconstrained nonlinear l_p problems   (Total citations: 1, self: 0, other: 1)
This paper constructs a new method for solving unconstrained nonlinear l_p problems: the adjustable entropy function method. A numerical algorithm is given and its convergence is proved. Numerical simulations comparing the method with the maximum entropy function method for unconstrained nonlinear l_p problems show that the algorithm is highly effective.
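The abstract does not reproduce the adjustable entropy function itself; as background, here is a minimal sketch of the classical maximum-entropy smoothing it is compared against, applied to a toy minimax problem. All function names, the smoothing parameter p, and the gradient-descent settings are illustrative, not taken from the paper.

```python
import numpy as np

def entropy_smooth_max(f_vals, p):
    # Maximum-entropy approximation of max_i f_i:
    #   F_p = (1/p) * ln(sum_i exp(p * f_i)),  F_p -> max_i f_i as p -> inf
    m = np.max(f_vals)                      # shift for numerical stability
    return m + np.log(np.sum(np.exp(p * (f_vals - m)))) / p

def minimize_minimax(fs, grads, x0, p=50.0, lr=0.01, iters=2000):
    # Minimize the smoothed surrogate of max_i f_i(x) by plain gradient descent.
    x = float(x0)
    for _ in range(iters):
        f_vals = np.array([f(x) for f in fs])
        w = np.exp(p * (f_vals - f_vals.max()))
        w /= w.sum()                        # softmax weights of the surrogate gradient
        g = sum(wi * gi(x) for wi, gi in zip(w, grads))
        x -= lr * g
    return x

# Illustrative problem: min_x max((x-1)^2, (x+1)^2); the minimizer is x = 0.
fs = [lambda x: (x - 1.0) ** 2, lambda x: (x + 1.0) ** 2]
grads = [lambda x: 2.0 * (x - 1.0), lambda x: 2.0 * (x + 1.0)]
x_star = minimize_minimax(fs, grads, x0=0.7)
smoothed_val = entropy_smooth_max(np.array([f(x_star) for f in fs]), p=50.0)
```

As p grows, the smoothed surrogate hugs the true max more tightly but becomes harder to optimize numerically; the shift by the maximum inside `entropy_smooth_max` keeps the exponentials from overflowing.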

3.
Dong Li, Zhou Jinchuan. Journal of Mathematics (数学杂志), 2015, 35(1): 173-179
This paper studies unconstrained optimization. Using information from the current and previous iterates together with a curve-search technique to generate new iterates, a new descent method for unconstrained optimization is obtained. Global convergence is proved under rather weak conditions, and a linear convergence rate is established when the objective function is uniformly convex. Preliminary numerical experiments show that the algorithm is effective.

4.
A parameter-free filled function for unconstrained nonlinear programming is proposed and its properties are analyzed. A filter technique is introduced, and on this basis a parameter-free filter filled-function algorithm is designed; numerical experiments show the algorithm to be effective.

5.
Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems. A new nonlinear conjugate gradient (CG) method is proposed, and the analysis shows that the new algorithm enjoys sufficient descent under a variety of line-search conditions. A global convergence theorem for the new CG algorithm is proved. Finally, extensive numerical experiments show that the new algorithm is computationally more efficient than several traditional CG methods.
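The paper's new CG formula is not stated in the abstract; as a generic point of reference, a classical Fletcher-Reeves nonlinear CG with Armijo backtracking and a steepest-descent restart safeguard might look like the sketch below. This is not the proposed method, only a baseline of the same family.

```python
import numpy as np

def cg_fr(f, grad, x0, tol=1e-6, max_iter=2000):
    # Fletcher-Reeves nonlinear CG with Armijo backtracking; restarts with
    # steepest descent whenever the CG direction fails to be a descent direction.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0.0:                 # safeguard: restart with -g
            d = -g
        fx, slope, t = f(x), g.dot(d), 1.0
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: the minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = cg_fr(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
```

The restart safeguard is one simple way to obtain the descent property the abstract emphasizes; stronger line searches (e.g. strong Wolfe) are the usual route in the CG literature.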

6.
An adaptive trust-region method for unconstrained optimization   (Total citations: 7, self: 0, other: 7)
This paper proposes an adaptive trust-region method for unconstrained optimization. At each iteration, the method makes full use of information from previous iterates to generate an appropriate trust-region radius automatically, so that the quadratic model matches the objective function as closely as possible within the region; this avoids blind trial steps and improves computational efficiency. Global convergence and local superlinear convergence are proved under the usual conditions, and numerical comparisons with the traditional trust-region method confirm the effectiveness of the new method.
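As background for the trust-region machinery this abstract describes, here is a minimal trust-region loop with a Cauchy-point step and the textbook ratio-based radius update. The paper's adaptive radius rule is not given in the abstract, so the standard update stands in for it; all thresholds are conventional defaults.

```python
import numpy as np

def trust_region_cauchy(f, grad, hess, x0, delta=1.0, max_iter=200):
    # Basic trust-region loop: quadratic model, Cauchy-point step, and the
    # standard ratio-based radius update (a textbook stand-in for the
    # adaptive rule described in the abstract).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gn = np.linalg.norm(g)
        if gn < 1e-8:
            break
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0.0 else min(gn ** 3 / (delta * gBg), 1.0)
        p = -tau * (delta / gn) * g          # Cauchy point inside the ball
        pred = -(g @ p + 0.5 * p @ B @ p)    # predicted (model) decrease
        ared = f(x) - f(x + p)               # actual decrease
        rho = ared / pred if pred > 0.0 else -1.0
        if rho < 0.25:
            delta *= 0.25                    # shrink: model was poor
        elif rho > 0.75 and np.linalg.norm(p) > 0.99 * delta:
            delta *= 2.0                     # grow: good model, step at boundary
        if rho > 0.1:
            x = x + p                        # accept the trial step
    return x

x_min = trust_region_cauchy(lambda x: x[0] ** 2 + 2.0 * x[1] ** 2,
                            lambda x: np.array([2.0 * x[0], 4.0 * x[1]]),
                            lambda x: np.diag([2.0, 4.0]),
                            [3.0, -2.0])
```

The ratio rho measures how well the quadratic model predicted the actual decrease; adaptive methods like the one in this record replace the fixed shrink/grow factors with radii derived from iterate history.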

7.
Qi Youjian. Mathematical Communications (数学通讯), 2013(Z1): 28-29
As is well known, linear programming studies the extrema of a linear objective function under linear constraints; analogously, there are nonlinear programming problems, mainly of the following three types: (1) finding the extrema of a linear objective under nonlinear constraints; (2) finding the extrema of a nonlinear objective under linear constraints;

8.
A new class of descent algorithms for unconstrained optimization   (Total citations: 2, self: 0, other: 2)
This paper proposes a new class of descent algorithms for unconstrained optimization, together with two hybrid algorithms that combine them with the HS algorithm. Global convergence is proved under the Wolfe line search without requiring a prescribed descent condition. Numerical experiments show that the new algorithms are very effective, especially for large-scale problems.

9.
Owing to their characteristics, genetic algorithms use crossover, reproduction, and mutation to obtain globally optimal solutions without computing derivatives of the function; they treat optimization as a black-box problem relating only inputs to outputs, and are suited to a wide range of complex problems. Based on the idea of elitist preservation, this paper combines the steepest-descent method with elitist preservation and an adaptive genetic algorithm to solve nonlinear function optimization, proposing a global optimization method for nonlinear functions based on an adaptive hybrid genetic algorithm.
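A toy illustration of the hybrid idea (elitist preservation plus steepest-descent refinement inside a GA); the population size, crossover/mutation rates, 1-D search span, and test function are all invented for this sketch and are not the paper's settings.

```python
import random

def hybrid_ga(f, grad, pop_size=30, gens=80, span=(-10.0, 10.0)):
    # Toy 1-D hybrid GA: elitism keeps the best individual, arithmetic
    # crossover mixes parents from the fitter half, random mutation perturbs
    # children, and the elite is polished by a few steepest-descent steps.
    random.seed(0)                            # deterministic for the demo
    pop = [random.uniform(*span) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                       # ascending objective: pop[0] is best
        elite = pop[0]
        for _ in range(5):                    # steepest-descent refinement
            elite -= 0.1 * grad(elite)
        children = [elite]                    # elitist preservation
        while len(children) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)
            t = random.random()
            child = t * a + (1.0 - t) * b     # arithmetic crossover
            if random.random() < 0.2:
                child += random.gauss(0.0, 1.0)   # mutation
            children.append(child)
        pop = children
    return min(pop, key=f)

x_best = hybrid_ga(lambda x: (x - 2.0) ** 2, lambda x: 2.0 * (x - 2.0))
```

The gradient polish accelerates local convergence while the GA machinery keeps exploring; note that the pure-GA claim in the abstract (no derivatives needed) no longer holds once the steepest-descent step is mixed in.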

10.
This paper proposes a new nonmonotone adaptive trust-region method for unconstrained optimization. The trust-region radius is determined with the help of a scalar-matrix approximation to the Hessian of the objective function. Under the usual conditions, global convergence and local superlinear convergence of the new algorithm are established, and numerical experiments verify the effectiveness of the new nonmonotone method.

11.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.

12.
We propose an outer-approximation (cutting plane) method for minimizing a function f(X) subject to semidefinite constraints on the variables X ∈ R^n. A number of efficient algorithms have been proposed when the objective function is linear; however, there are very few practical algorithms when the objective function is nonlinear. The algorithm proposed here is a kind of outer-approximation (cutting plane) method, which has been successfully applied to several low-rank global optimization problems, including generalized convex multiplicative programming problems and generalized linear fractional programming problems. We show that this algorithm works well when f is convex and n is relatively small, and we prove its convergence under various technical assumptions.

13.
Global Optimization of Nonlinear Bilevel Programming Problems   (Total citations: 5, self: 0, other: 5)
A novel technique that addresses the solution of the general nonlinear bilevel programming problem to global optimality is presented. Global optimality is guaranteed for problems that involve twice differentiable nonlinear functions, as long as the linear independence constraint qualification condition holds for the inner problem constraints. The approach is based on the relaxation of the feasible region by convex underestimation, embedded in a branch-and-bound framework utilizing the basic principles of the deterministic global optimization algorithm αBB [2, 4, 5, 11]. ε-global optimality in a finite number of iterations is theoretically guaranteed. Computational studies on several literature problems are reported.

14.
15.
A nonmonotone adaptive trust-region algorithm with fixed step size   (Total citations: 1, self: 0, other: 1)
A nonmonotone adaptive trust-region algorithm with fixed step size for unconstrained optimization is proposed. The trust-region radius is updated with an adaptive technique, and when the trial step is rejected the algorithm uses a fixed step size to find the next iterate. Global convergence and superlinear convergence are proved under suitable conditions. Preliminary numerical experiments show that the algorithm performs well on high-dimensional problems.

16.
We show that, for an unconstrained optimization problem, the long-term optimal trajectory consists of a sequence of greatest-descent directions and a Newton step in the final iteration. The greatest-descent direction can be computed approximately by using a Levenberg-Marquardt-like formula. This supports the view that the Newton method approximates a Levenberg-Marquardt-like formula at a finite distance from the minimum point, rather than the standard view that the Levenberg-Marquardt formula is a way to approximate the Newton method. With the insight gained from this analysis, we develop a two-dimensional version of the Levenberg-Marquardt-like formula: we make use of the two numerically largest components of the gradient vector to define the new search directions. In this way, we avoid the need to invert a high-dimensional matrix, which also reduces the storage requirements for the full Hessian matrix in problems with a large number of variables. The author thanks Mark Wu, Professors Sanyang Liu, Junmin Li, Shuisheng Zhou and Feng Ye for support and help in this research, as well as the referees for helpful comments.
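The Levenberg-Marquardt-like direction and its reduced two-component variant described in this abstract can be sketched as follows; the component-selection rule and the damping parameter mu here are generic stand-ins, not the paper's exact formulas.

```python
import numpy as np

def lm_like_step(g, H, mu):
    # Levenberg-Marquardt-like direction d = -(H + mu*I)^{-1} g:
    # mu -> 0 recovers the Newton step; large mu approaches steepest descent.
    return -np.linalg.solve(H + mu * np.eye(len(g)), g)

def two_dim_lm_step(g, H, mu):
    # Reduced variant: act only on the two numerically largest gradient
    # components, so a 2x2 system is solved instead of inverting the full
    # matrix (selection rule and mu are illustrative stand-ins).
    idx = np.argsort(np.abs(g))[-2:]
    d = np.zeros_like(g)
    d[idx] = -np.linalg.solve(H[np.ix_(idx, idx)] + mu * np.eye(2), g[idx])
    return d

g = np.array([0.1, 2.0, 0.3, 4.0])
H = np.eye(4)
d_full = lm_like_step(g, H, 0.0)       # with H = I and mu = 0 this is just -g
d_red = two_dim_lm_step(g, H, 0.0)     # moves only along components 1 and 3
```

The reduced step touches only a 2x2 submatrix of H, which is the storage saving the abstract points to for problems with many variables.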

17.
Globally Convergent Algorithms for Unconstrained Optimization   (Total citations: 2, self: 0, other: 2)
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining the Newton direction and the steepest-descent direction within each iteration. Global convergence is guaranteed from an arbitrary initial point. The search direction in each iteration is chosen to be as close to the Newton direction as possible, and may be the Newton direction itself. Asymptotically, the Newton step is taken in each iteration, so the local convergence is quadratic. Numerical experiments are also reported, including a possible combination of a quasi-Newton direction with the steepest-descent direction. The differences between the proposed strategy and several other strategies are also discussed.
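A minimal sketch of one common way to combine the Newton and steepest-descent directions: take the Newton direction when it is a sufficient-descent direction and fall back to steepest descent otherwise. The paper's actual strategy blends the two more carefully; this is only the crudest member of that family, with an illustrative threshold eta.

```python
import numpy as np

def newton_or_steepest(g, H, eta=1e-4):
    # Use the Newton direction when it is a sufficient-descent direction;
    # otherwise fall back to steepest descent. A simplified stand-in for
    # the combination strategy described in the abstract.
    try:
        d = -np.linalg.solve(H, g)
    except np.linalg.LinAlgError:
        return -g                             # singular Hessian: steepest descent
    if g.dot(d) <= -eta * np.linalg.norm(g) * np.linalg.norm(d):
        return d                              # Newton direction accepted
    return -g                                 # ascent direction: fall back

g = np.array([1.0, 2.0])
d_pd = newton_or_steepest(g, np.eye(2))       # positive definite H: Newton step kept
d_nd = newton_or_steepest(g, -np.eye(2))      # negative definite H: falls back to -g
```

Near the minimizer of a smooth strongly convex function the Hessian is positive definite, so the Newton branch is eventually always taken, which is what yields the quadratic local convergence the abstract claims.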

18.
In this paper a canonical neural network with adaptively changing synaptic weights and activation function parameters is presented to solve general nonlinear programming problems. The basic part of the model is a sub-network used to find a solution of quadratic programming problems with simple upper and lower bounds. By sequentially activating the sub-network under the control of an external computer or a special analog or digital processor that adjusts the weights and parameters, one then solves general nonlinear programming problems. Convergence proof and numerical results are given.

19.
This paper describes a class of frame-based direct search methods for unconstrained optimization without derivatives. A template for convergent direct search methods is developed, some requiring only the relative ordering of function values. At each iteration, the template considers a number of search steps which form a positive basis and conducts a ray search along a step giving adequate decrease. Various ray search strategies are possible, including discrete equivalents of the Goldstein–Armijo and one-sided Wolfe–Powell ray searches. Convergence is shown under mild conditions which allow successive frames to be rotated, translated, and scaled relative to one another.
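A compass search, the simplest relative of this family, polls the positive basis {+e_i, -e_i} and shrinks the frame when no poll point improves; the sketch below uses only the relative ordering of function values, as the template allows, but it is not the paper's general template (no rotation or translation of frames, and simple decrease in place of a ray search).

```python
import numpy as np

def compass_search(f, x0, h=1.0, tol=1e-6, max_iter=10000):
    # Derivative-free compass search: poll the positive basis {+e_i, -e_i},
    # move on simple decrease, and halve the frame size h when no poll
    # point improves. Only comparisons of function values are used.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if h <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (1.0, -1.0):
                y = x.copy()
                y[i] += s * h
                fy = f(y)
                if fy < fx:                  # simple decrease: accept the poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            h *= 0.5                         # shrink the frame
    return x

x_star = compass_search(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        [0.0, 0.0])
```

The 2n coordinate steps {+e_i, -e_i} form the canonical positive basis of R^n; the frame-based methods in the paper generalize exactly this polling set.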


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号