Similar Documents
19 similar documents found (search time: 78 ms)
1.
李柏林  陈永 《计算数学》1993,15(3):303-309
§1. Introduction. Some optimization problems arising in practice can be treated as unconstrained, and many highly effective constrained optimization algorithms rely on unconstrained optimization methods, so unconstrained methods are of considerable practical importance. Consider the following unconstrained optimization problem with a quadratic objective function F(X):

2.
Starting from the general form of the mixed-integer nonlinear constrained optimization problem (MINLP), several equivalent formulations are derived via penalty functions, and the equivalence of their optimal solutions is proved. The constrained optimization problem is converted into an easier unconstrained nonlinear optimization problem, and the mixed-integer program is converted into a non-integer one, so that solving the MINLP reduces to solving a single continuous unconstrained nonlinear optimization problem, to which existing general-purpose unconstrained optimization algorithms can be applied.
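The penalty idea summarized in this abstract can be illustrated with a minimal sketch (the toy problem, the quadratic penalty form, and all parameter values below are illustrative assumptions, not taken from the paper): the constraint is folded into the objective, and the resulting unconstrained function is minimized by plain gradient descent.

```python
# Quadratic-penalty sketch: min x^2 subject to x >= 1 becomes the
# unconstrained problem  min x^2 + rho * max(0, 1 - x)^2  for large rho.
# Toy problem, penalty form, and parameters are illustrative assumptions.

def penalized_grad(x, rho):
    g = 2.0 * x                         # gradient of the objective x^2
    if x < 1.0:                         # penalty active only when infeasible
        g += 2.0 * rho * (x - 1.0)      # gradient of rho * (1 - x)^2
    return g

def minimize(rho, x=0.0, lr=1e-3, iters=500):
    # plain gradient descent on the penalized objective
    for _ in range(iters):
        x -= lr * penalized_grad(x, rho)
    return x

x_star = minimize(rho=100.0)            # exact penalized minimizer: 100/101
```

Driving rho to infinity pushes the penalized minimizer toward the constrained solution x = 1; exact penalty functions (as in items 13 and 14 below) achieve equivalence for a finite rho.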

3.
To address the shortcomings of traditional optimization methods for unconstrained nonlinear programming, this paper applies an interval-adaptive genetic algorithm to such problems. The algorithm uses the information gathered during evolution to adaptively shift the search interval and locate the global optimum, which shortens the search interval, improves encoding precision, and reduces computational cost. It also removes the requirement of traditional genetic algorithms that the given interval must contain the optimal solution, a distinctive advantage over other optimization methods, and thus offers an effective and practical approach for unconstrained nonlinear programming problems whose optimal solution lies in an interval that is hard to estimate in advance. The paper presents the principle of the interval-adaptive genetic algorithm, gives the steps for optimizing unconstrained nonlinear programming problems with it, and tests the algorithm on examples by simulation in MATLAB R2016b. The results show the method to be a computationally stable, correct, effective, and reliable approach to unconstrained nonlinear programming.

4.
A Diagonal Sparse Quasi-Newton Method for Unconstrained Optimization Problems   Cited: 3 (self-citations 0, other citations 3)
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm uses an inexact Armijo line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, which markedly reduces the storage and work needed to compute the search direction and offers a new approach to solving large-scale unconstrained optimization problems. Under the usual assumptions, global convergence and a linear convergence rate are proved, and the superlinear convergence properties are analyzed. Numerical experiments show the algorithm to be more effective than the conjugate gradient method and well suited to large-scale unconstrained optimization problems.
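As a rough illustration of the idea, here is a minimal diagonal quasi-Newton sketch with Armijo backtracking. The per-coordinate secant update, the safeguards, and the test function are illustrative assumptions, not the correction formula from the paper; the point is that only a length-n diagonal is stored instead of an n-by-n matrix.

```python
import numpy as np

def f(x):                                # separable test objective
    return 0.5 * (x[0]**2 + 10.0 * x[1]**2)

def grad(x):
    return np.array([x[0], 10.0 * x[1]])

def diag_quasi_newton(x0, iters=50):
    x = np.array(x0, dtype=float)
    D = np.ones_like(x)                  # diagonal Hessian approximation
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -g / D                       # O(n) direction: no full matrix stored
        # Armijo backtracking line search
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        s, y = x_new - x, grad(x_new) - g
        # per-coordinate secant estimate of the Hessian diagonal,
        # safeguarded to stay positive (illustrative, not the paper's update)
        mask = np.abs(s) > 1e-12
        D[mask] = np.clip(y[mask] / s[mask], 1e-4, 1e4)
        x = x_new
    return x

x_star = diag_quasi_newton([3.0, -2.0])
```

On this separable quadratic the per-coordinate secant recovers the exact Hessian diagonal after one step, so the method reaches the minimizer quickly.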

5.
A New Class of Memory Gradient Methods and Their Global Convergence   Cited: 1 (self-citations 0, other citations 1)
This paper studies memory gradient methods for unconstrained optimization. Using information from the current and previous iterates to generate a descent direction yields a new class of unconstrained optimization algorithms, whose global convergence is proved under the Wolfe line search. The new algorithms have a simple structure, require no matrix computation or storage, and are well suited to large-scale optimization problems. Numerical experiments show the algorithms to be effective.
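A generic sketch of the memory-gradient idea (the particular blending rule, the Armijo backtracking used here in place of the Wolfe search, and the test function are all illustrative assumptions, not the paper's scheme): the previous direction is blended into the new one, scaled so the result provably remains a descent direction, and no matrix is ever formed.

```python
import numpy as np

def f(x):                                # simple convex test function
    return x[0]**2 + 4.0 * x[1]**2

def grad(x):
    return np.array([2.0 * x[0], 8.0 * x[1]])

def memory_gradient(x0, iters=200):
    x = np.array(x0, dtype=float)
    d_prev = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        # blend in the previous direction, scaled so that
        # g.d <= -0.9 * |g|^2, i.e. d stays a descent direction
        beta = 0.1 * np.linalg.norm(g) / (np.linalg.norm(d_prev) + 1e-12)
        d = -g + beta * d_prev
        # Armijo backtracking (the paper analyzes a Wolfe search instead)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        d_prev = d
    return x

x_star = memory_gradient([2.0, 1.0])     # minimizer of f is the origin
```

Only two vectors (the gradient and the previous direction) are kept between iterations, which is what makes this family attractive for large-scale problems.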

6.
This paper proposes a new memory gradient algorithm for unconstrained optimization. Under the Armijo line search, the algorithm uses information from previous iterates at each step, which increases the freedom in parameter selection and makes it well suited to large-scale unconstrained optimization problems. Its global convergence is analyzed.

7.
This paper proposes a new memory gradient algorithm for unconstrained optimization. The algorithm uses information from previous iterates at each step, which increases the freedom in parameter selection and makes it well suited to large-scale unconstrained optimization problems. Its global convergence is analyzed, and numerical experiments show the algorithm to be effective.

8.
An Effective Method for a Class of Nondifferentiable Optimization Problems   Cited: 3 (self-citations 0, other citations 3)
李兴斯 《中国科学A辑》1994,37(4):371-377
This paper presents a smoothing technique, based on the maximum entropy method, for solving a class of nondifferentiable optimization problems involving the max function. The basic idea is to replace the nondifferentiable max function directly with a smooth function called the aggregate function; the paper derives this function and proves several of its useful properties. With this smoothing technique, both unconstrained and constrained minimax problems can be converted into unconstrained optimization of a smooth function, so existing unconstrained optimization software can be applied directly to this class of nondifferentiable problems. The method is particularly easy to implement on a computer, converges quickly, and is numerically stable.
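The aggregate function of the maximum entropy method is commonly written, for F(x) = max_i f_i(x), as F_p(x) = (1/p) ln Σ_i exp(p f_i(x)), which satisfies F(x) ≤ F_p(x) ≤ F(x) + ln(m)/p, so the error vanishes as p grows. A minimal sketch of this smooth surrogate (the max-shift for numerical stability is a standard implementation detail, not from the paper):

```python
import math

def aggregate_max(values, p=100.0):
    # Smooth (log-sum-exp) surrogate for max(values):
    #   max(values) <= aggregate_max(values, p) <= max(values) + ln(m)/p
    # so the approximation error vanishes as the parameter p grows.
    m = max(values)                      # shift to avoid exp overflow
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

vals = [1.0, 2.0, 3.0]
approx = aggregate_max(vals, p=100.0)    # close to max(vals) = 3.0
```

Because the surrogate is differentiable, a minimax problem min_x max_i f_i(x) can be handed directly to any smooth unconstrained solver, which is the point of the paper's technique.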

9.
Based on the memory gradient method for unconstrained single-objective optimization, this paper proposes a memory gradient method for unconstrained multiobjective optimization problems and proves its convergence under the Armijo line search. Numerical results verify the effectiveness of the algorithm.

10.
董丽  周金川 《数学杂志》2015,35(1):173-179
This paper studies unconstrained optimization problems. Using information from the current and previous iterates together with a curve search technique to generate new iterates, a new descent method for unconstrained optimization is obtained. Global convergence is proved under fairly weak conditions, and a linear convergence rate is proved when the objective function is uniformly convex. Preliminary numerical experiments show the algorithm to be effective.

11.
A Dynamical-System-Based Method for Solving Systems of Linear Inequalities   Cited: 1 (self-citations 0, other citations 1)
This paper proposes a new method for finding a feasible solution of a system of linear inequalities: a method based on dynamical systems. Assuming the feasible region of the system is nonempty, a nonlinear relation is constructed on the relative interior of the feasible region, yielding a dynamical system model with a simple structure. A crossing direction is also defined. Numerical results at the end of the paper show the algorithm to be effective.

12.
A Method for Solving Systems of Linear Inequalities   Cited: 5 (self-citations 0, other citations 5)
This paper proposes a new method for finding a feasible solution of a system of linear inequalities: an unconstrained extremum method. By setting up a nonlinear extremum problem on the relative interior of the nonempty feasible region of the system and using duality, one obtains an unconstrained extremum problem in the dual space together with a simple linear map between the primal and dual variables, thereby converting the original linear inequality problem into an unconstrained extremum problem. The paper mainly discusses a conjugate gradient algorithm for solving this unconstrained extremum problem. A crossing direction is also defined in the search for a feasible solution, which greatly reduces the computational cost. Numerical results at the end of the paper show the algorithm to be effective.

13.
The exact penalty approach aims at replacing a constrained optimization problem by an equivalent unconstrained optimization problem. Most results in the literature of exact penalization are mainly concerned with finding conditions under which a solution of the constrained optimization problem is a solution of an unconstrained penalized optimization problem, and the reverse property is rarely studied. In this paper, we study the reverse property. We give the conditions under which the original constrained (single and/or multiobjective) optimization problem and the unconstrained exact penalized problem are exactly equivalent. The main conditions to ensure the exact penalty principle for optimization problems include the global and local error bound conditions. By using variational analysis, these conditions may be characterized by using generalized differentiation.

14.
In this paper, we consider a general class of nonlinear mixed discrete programming problems. By introducing continuous variables to replace the discrete variables, the problem is first transformed into an equivalent nonlinear continuous optimization problem subject to original constraints and additional linear and quadratic constraints. Then, an exact penalty function is employed to construct a sequence of unconstrained optimization problems, each of which can be solved effectively by unconstrained optimization techniques, such as conjugate gradient or quasi-Newton methods. It is shown that any local optimal solution of the unconstrained optimization problem is a local optimal solution of the transformed nonlinear constrained continuous optimization problem when the penalty parameter is sufficiently large. Numerical experiments are carried out to test the efficiency of the proposed method.

15.
Nonlinear complementarity as unconstrained and constrained minimization   Cited: 11 (self-citations 0, other citations 11)
The nonlinear complementarity problem is cast as an unconstrained minimization problem that is obtained from an augmented Lagrangian formulation. The dimensionality of the unconstrained problem is the same as that of the original problem, and the penalty parameter need only be greater than one. Another feature of the unconstrained problem is that it has global minima of zero at precisely all the solution points of the complementarity problem without any monotonicity assumption. If the mapping of the complementarity problem is differentiable, then so is the objective of the unconstrained problem, and its gradient vanishes at all solution points of the complementarity problem. Under assumptions of nondegeneracy and linear independence of gradients of active constraints at a complementarity problem solution, the corresponding global unconstrained minimum point is locally unique. A Wolfe dual to a standard constrained optimization problem associated with the nonlinear complementarity problem is also formulated under a monotonicity and differentiability assumption. Most of the standard duality results are established even though the underlying constrained optimization problem may be nonconvex. Preliminary numerical tests on two small nonmonotone problems from the published literature converged to degenerate or nondegenerate solutions from all attempted starting points in 7 to 28 steps of a BFGS quasi-Newton method for unconstrained optimization. Dedicated to Phil Wolfe on his 65th birthday, in appreciation of his major contributions to mathematical programming. This material is based on research supported by Air Force Office of Scientific Research Grant AFOSR-89-0410 and National Science Foundation Grant CCR-9101801.

16.
The paper provides some examples of mutually dual unconstrained optimization problems originating from regularization problems for systems of linear equations and/or inequalities. The solution of each of these mutually dual problems can be found from the solution of the other problem by means of simple formulas. Since mutually dual problems have different dimensions, it is natural to solve the unconstrained optimization problem of the smaller dimension.

17.
As a synchronization parallel framework, the parallel variable transformation (PVT) algorithm is effective for solving unconstrained optimization problems. In this paper, based on the idea that a constrained optimization problem is equivalent to a differentiable unconstrained optimization problem obtained by introducing the Fischer function, we propose an asynchronous PVT algorithm for solving large-scale linearly constrained convex minimization problems. The new algorithm can terminate when some processor satisfies the termination condition, without waiting for the other processors, and it enhances practical efficiency on large-scale optimization problems. Global convergence of the new algorithm is established under suitable assumptions. In particular, the linear rate of convergence does not depend on the number of processors.
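The Fischer function referred to here is, in the complementarity literature, usually the Fischer-Burmeister function φ(a, b) = √(a² + b²) − a − b, whose zeros are exactly the pairs with a ≥ 0, b ≥ 0, and ab = 0; squaring it turns complementarity conditions into a differentiable unconstrained objective. A minimal sketch (the test values are illustrative):

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, and a * b = 0,
    # so sum of phi(a_i, b_i)**2 over all pairs is a differentiable
    # merit function whose zeros are the complementarity solutions.
    return math.sqrt(a * a + b * b) - a - b

# zero exactly on the complementarity set, nonzero off it
on_set  = fischer_burmeister(0.0, 5.0)   # a = 0, b >= 0  ->  0
off_set = fischer_burmeister(1.0, 1.0)   # a * b != 0     ->  nonzero
```

This equivalence is what lets the algorithm above treat the linearly constrained problem with the machinery of smooth unconstrained optimization.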

18.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.

19.
This paper presents a study of solutions to the global minimization of polynomials. The backward differential flow given by the K-T equation of the optimization problem is introduced to deal with a ball-constrained optimization problem. The unconstrained optimization problem is reduced to a ball-constrained problem which can be solved by a backward differential flow. Some examples are illustrated with an algorithm for computing the backward flow.
