1.
Based on a penalty method and the augmented Lagrangian functional, this paper presents an augmented Lagrangian iterative method for solving fourth-order obstacle problems with a gradient constraint, and proves the convergence of the algorithm. Numerical experiments using a nonconforming finite element discretization show that the algorithm is effective.
2.
Starting from the governing equations and nonlinear complementarity conditions that an impact-contact model must satisfy, the equivalence between nonlinear complementarity problems and constrained optimization is used to recast the nonlinear complementarity contact problem as a nonlinear programming problem, and a bi-conjugate projection gradient method for impact-contact problems is systematically derived. The augmented Lagrange multiplier method removes the penalty-function requirement of shrinking the iteration stepsize to keep the computation stable, so a relatively large stepsize can be used even for impact-contact problems. Within the resulting unconstrained programming formulation, equivalent to the original complementarity problem, the bi-conjugate projection gradient algorithm improves the nonlinear search speed and computational efficiency. Computational results for the algorithmic model show that the proposed bi-conjugate projection gradient theory and method are correct and effective.
3.
This paper proposes a trust-region method, based on an exponential augmented Lagrange function, for solving inequality-constrained optimization problems. Using the exponential augmented Lagrange function, the exactly solved subproblem of the traditional augmented Lagrangian method is replaced by a trust-region subproblem, which reduces the computational cost, and a corresponding trust-region algorithm is constructed. Under certain assumptions, global convergence of the algorithm is proved, and numerical results on classical test problems are reported.
4.
The augmented Lagrangian method is an effective algorithm for convex optimization problems with linear constraints. The linearized augmented Lagrangian method linearizes the quadratic penalty term of the augmented Lagrangian function and adds a proximal regularization term, so that the subproblems become easy to solve; a proper choice of the regularization coefficient is crucial for both convergence and the convergence rate. A large coefficient guarantees convergence but tends to force small stepsizes, while a small coefficient allows larger stepsizes but may destroy convergence. This paper considers convex optimization problems with linear equality or inequality constraints. Using an adaptive technique, we design a class of indefinite linearized augmented Lagrangian methods that choose a suitable regularization coefficient from information at the current iterate, enlarging the admissible stepsize range of the subproblems as much as possible while preserving convergence, and thereby accelerating the algorithm. Global convergence is proved, and numerical experiments illustrate the effectiveness of the method.
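The linearized proximal step described in this abstract can be sketched as follows. This is a minimal sketch with a fixed penalty parameter `beta` and the classical conservative regularization coefficient `r > beta * lambda_max(A^T A)`; the toy problem and all parameter values are illustrative assumptions, not the paper's adaptive rule:

```python
import numpy as np

def linearized_alm(grad_f, A, b, x0, beta=1.0, iters=2000):
    """Linearized augmented Lagrangian method for min f(x) s.t. Ax = b.

    The quadratic penalty (beta/2)||Ax - b||^2 is linearized at the
    current iterate and a proximal term (r/2)||x - x_k||^2 is added,
    so the x-update reduces to a single explicit step.
    """
    r = beta * np.linalg.norm(A.T @ A, 2) * 1.01  # conservative regularizer
    x, lam = x0.astype(float), np.zeros(A.shape[0])
    for _ in range(iters):
        g = grad_f(x) + A.T @ lam + beta * A.T @ (A @ x - b)
        x = x - g / r                    # explicit proximal-gradient step
        lam = lam + beta * (A @ x - b)   # dual ascent on the multiplier
    return x

# Toy problem: min (1/2)||x||^2  s.t.  sum(x) = 1  ->  x* = (1/3, 1/3, 1/3)
A = np.ones((1, 3)); b = np.array([1.0])
x = linearized_alm(lambda x: x, A, b, np.zeros(3))
```

The adaptive method of this item would replace the fixed conservative `r` with a possibly indefinite coefficient chosen at each iterate.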
5.
A reduced preconditioned conjugate gradient path method combined with a nonmonotone technique is used to solve nonlinear optimization problems with linear equality constraints. Based on generalized elimination, the original problem is transformed into an unconstrained optimization problem in the null space of the constraint matrix; the reduced preconditioned equations are obtained from an augmented system, and a conjugate gradient path is constructed to solve the quadratic model, yielding the search direction and iteration stepsize. Exploiting the favorable properties of the conjugate gradient path, under reasonable assumptions the algorithm is shown not only to be globally convergent but also to retain a fast superlinear convergence rate. Numerical computations further demonstrate the feasibility and effectiveness of the algorithm.
6.
Nonlinear complementarity problems can be transformed into nonlinearly constrained optimization problems. A feasible SQP method with nonmonotone line search is proposed. A feasible descent direction is obtained from the KKT point of the QP subproblem, and a higher-order correction step is introduced to overcome the Maratos effect. The stepsize is obtained by a nonmonotone line search technique. Global convergence is proved without the strict complementarity condition, and numerical experiments are reported.
7.
This paper studies the situation in trust-region methods for constrained optimization where the linearized constraints have no solution within the trust region. Using an approach based on an augmented Lagrange function, an improved trust-region method for constrained optimization is obtained whose linearized constraints are solvable within the trust region, and which is globally and superlinearly convergent.
8.
《数学的实践与认识》 (Mathematics in Practice and Theory), 2015, (9)
An algorithm for solving nonlinearly constrained optimization problems is given. Using a Lagrange function, the constrained problem is transformed into an unconstrained one and thereby solved. The method relies only on solving a linear system of equations, which reduces the computational cost and speeds up the computation. Under certain conditions, a convergence proof is given. Numerical experiments show that the method is effective.
9.
10.
A continuous minimization method with fixed stepsize...    Cited by: 4 (self-citations: 0, others: 4)
Feng Guosheng, 《纯粹数学与应用数学》 (Pure and Applied Mathematics), 1994, 10(1): 69-76
Solving a system of nonlinear equations F(x)=0 can be transformed into finding a minimizer of the nonlinear least-squares problem min (1/2)F(x)^T F(x). This article proposes a continuous minimization method for this nonlinear least-squares problem; the method uses a prescribed fixed stepsize and is proved to be globally and linearly convergent. Two numerical examples illustrate the advantages of the method.
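A minimal sketch of a fixed-stepsize gradient iteration on phi(x) = (1/2)F(x)^T F(x), whose gradient is J(x)^T F(x). The stepsize value and the linear test system are illustrative assumptions; the paper's rule for determining the fixed stepsize may differ:

```python
import numpy as np

def fixed_step_nls(F, J, x0, alpha=0.1, iters=500):
    """Gradient descent with a fixed stepsize on phi(x) = (1/2)||F(x)||^2.
    The gradient of phi is J(x)^T F(x)."""
    x = x0.astype(float)
    for _ in range(iters):
        x = x - alpha * J(x).T @ F(x)   # fixed-stepsize gradient step
    return x

# Solve F(x) = 0 for F(x) = (x1 + x2 - 3, x1 - x2 - 1); the root is (2, 1).
F = lambda x: np.array([x[0] + x[1] - 3.0, x[0] - x[1] - 1.0])
J = lambda x: np.array([[1.0, 1.0], [1.0, -1.0]])
x = fixed_step_nls(F, J, np.zeros(2))
```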
11.
12.
In this paper, a new type of stepsize for the gradient method, the approximate optimal stepsize, is introduced to interpret the Barzilai–Borwein (BB) method, and an efficient gradient method with an approximate optimal stepsize for the strictly convex quadratic minimization problem is presented. Based on a multi-step quasi-Newton condition, we construct a new quadratic approximation model to generate the approximate optimal stepsize. We then truncate it with the two well-known BB stepsizes to improve numerical performance and use the resulting approximate optimal stepsize as the new stepsize for the gradient method. We establish the global convergence and R-linear convergence of the proposed method. Numerical results show that the proposed method outperforms some well-known gradient methods.
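A sketch of the plain BB gradient method that this abstract builds on, using the BB1 stepsize. The quadratic test problem and the safeguarded first stepsize are illustrative assumptions; the paper's approximate optimal stepsize is a refinement of this basic scheme:

```python
import numpy as np

def bb_gradient(H, c, x0, iters=50):
    """Barzilai-Borwein gradient method for min (1/2) x^T H x - c^T x.
    BB1 stepsize: alpha_k = s^T s / s^T y with s = x_k - x_{k-1} and
    y = g_k - g_{k-1}."""
    x = x0.astype(float)
    g = H @ x - c
    alpha = 1.0 / np.linalg.norm(H, 2)   # safe (short) first stepsize
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = H @ x_new - c
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)    # BB1 stepsize
        x, g = x_new, g_new
    return x

H = np.diag([1.0, 10.0]); c = np.array([1.0, 10.0])  # minimizer (1, 1)
x = bb_gradient(H, c, np.ones(2) * 5)
```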
13.
14.
D. Sciutti 《Journal of Optimization Theory and Applications》1977,22(2):227-237
The connection between the convergence of the Hestenes method of multipliers and the existence of augmented Lagrange multipliers for the constrained minimum problem (P): minimize f(x), subject to g(x)=0, is investigated under very general assumptions on X, f, and g. In the first part, we use the existence of augmented Lagrange multipliers as a sufficient condition for the convergence of the algorithm. In the second part, we prove that this is also a necessary condition for the convergence of the method and the boundedness of the sequence of the multiplier estimates. Further, we give very simple examples to show that the existence of augmented Lagrange multipliers is independent of smoothness conditions on f and g. Finally, an application to the linear-convex problem is given.
15.
O. Knoth 《Numerische Mathematik》1989,56(6):591-607
For solving an equality-constrained nonlinear least squares problem, a globalization scheme for the generalized Gauss-Newton method via damping is proposed. The stepsize strategy is based on a special exact penalty function. Under natural conditions the global convergence of the algorithm is proved. Moreover, if the algorithm converges to a solution having a sufficiently small residual, it is shown to switch automatically to the undamped generalized Gauss-Newton method with a fast linear rate of convergence. The behaviour of the method is demonstrated on some examples taken from the literature.
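A sketch of damped Gauss-Newton for the unconstrained least-squares core of such a method: solve the linearized least-squares subproblem for the direction, then backtrack (damp) on the residual norm. The simple backtracking rule and the test system are illustrative assumptions; the paper's stepsize strategy uses a special exact penalty function to handle the equality constraints:

```python
import numpy as np

def damped_gauss_newton(F, J, x0, iters=50):
    """Damped Gauss-Newton for min (1/2)||F(x)||^2."""
    x = x0.astype(float)
    for _ in range(iters):
        r, Jx = F(x), J(x)
        d, *_ = np.linalg.lstsq(Jx, -r, rcond=None)   # Gauss-Newton step
        t, phi = 1.0, 0.5 * r @ r
        while 0.5 * np.sum(F(x + t * d) ** 2) > phi and t > 1e-12:
            t *= 0.5                                   # damping by halving
        x = x + t * d
    return x

# Solve F(x) = 0 for F(x) = (x1^2 + x2 - 2, x1 + x2^2 - 2); (1, 1) is a root.
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
x = damped_gauss_newton(F, J, np.array([2.0, 0.5]))
```

Near a small-residual solution the full step t = 1 is always accepted, which mirrors the automatic switch to the undamped method described in the abstract.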
16.
17.
Kunrada Kankam Nattawut Pholasa Prasit Cholamjiak 《Mathematical Methods in the Applied Sciences》2019,42(5):1352-1362
In optimization theory, convex minimization problems have been intensively investigated in the current literature due to their wide range of applications. A major and effective tool for solving such problems is the forward-backward splitting algorithm. However, to guarantee convergence, it is usually assumed that the gradient of the functions is Lipschitz continuous and that the stepsize depends on the Lipschitz constant, which is not easy to obtain in practice. In this work, we propose a modified forward-backward splitting method using new linesearches for choosing suitable stepsizes, and discuss the convergence analysis, including its complexity, without any Lipschitz continuity assumption on the gradient. Finally, we provide numerical experiments in signal recovery to demonstrate the computational performance of our algorithm in comparison with some well-known methods. Our results show that the proposed algorithm has good convergence behavior and can outperform the compared methods.
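A sketch of forward-backward splitting with a backtracking linesearch on the stepsize, in the spirit described above: no Lipschitz constant is supplied, and the trial stepsize is halved until a descent-lemma test holds. The specific test, the l1 regularizer, and the toy data are illustrative assumptions; the paper's linesearches may differ:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_linesearch(f, gradf, prox_g, x0, iters=100):
    """Forward-backward splitting with backtracking on the stepsize."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        g, t = gradf(x), 1.0
        while True:
            z = prox_g(x - t * g, t)
            d = z - x
            # descent-lemma test: f(z) <= f(x) + <g, d> + ||d||^2 / (2t)
            if f(z) <= f(x) + g @ d + (d @ d) / (2.0 * t) + 1e-12:
                break
            t *= 0.5
        x = z
    return x

# Toy LASSO: min (1/2)||Ax - b||^2 + 0.1 * ||x||_1 with A = I
A = np.eye(2); b = np.array([1.0, 0.05])
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
gradf = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: soft_threshold(v, 0.1 * t)   # prox of 0.1 * ||.||_1
x = fbs_linesearch(f, gradf, prox_g, np.zeros(2))  # solution (0.9, 0.0)
```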
18.
The classical method for optimizing a functional subject to an integral constraint is to introduce the Lagrange multiplier and apply the Euler-Lagrange equations to the augmented integrand. The Lagrange multiplier is a constant whose value is selected such that the integral constraint is satisfied. This value is frequently an eigenvalue of the boundary-value problem and is determined by a trial-and-error procedure. A new approach for solving this isoperimetric problem is presented. The Lagrange multiplier is introduced as a state variable and evaluated simultaneously with the optimum solution. A numerical example is given and is shown to have a large region of convergence.
19.
J. T. Betts 《Journal of Optimization Theory and Applications》1975,16(1-2):1-24
An effective algorithm is described for solving the general constrained parameter optimization problem. The method is quasi-second-order and requires only function and gradient information. An exterior point penalty function method is used to transform the constrained problem into a sequence of unconstrained problems. The penalty weight r is chosen as a function of the point x such that the sequence of optimization problems is computationally easy. A rank-one optimization algorithm is developed that takes advantage of the special properties of the augmented performance index. The optimization algorithm accounts for the usual difficulties associated with discontinuous second derivatives of the augmented index. Finite convergence is exhibited for a quadratic performance index with linear constraints; accelerated convergence is demonstrated for nonquadratic indices and nonlinear constraints. A computer program has been written to implement the algorithm and its performance is illustrated in fourteen test problems.
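A sketch of the exterior-point penalty framework described above: violated constraints are penalized quadratically, and the penalty weight is increased over a sequence of unconstrained subproblems. The gradient-descent inner solver, its stepsize heuristic, and the toy problem are illustrative assumptions; the paper instead develops a rank-one quasi-second-order algorithm tailored to the augmented performance index:

```python
import numpy as np

def exterior_penalty(f_grad, g, g_jac, x0, outer=5, inner=500):
    """Exterior-point penalty sketch for min f(x) s.t. g(x) <= 0:
    minimize P(x, r) = f(x) + r * sum(max(g_i(x), 0)^2) for an
    increasing sequence of penalty weights r."""
    x, r = np.asarray(x0, float), 1.0
    for _ in range(outer):
        lr = 0.4 / (1.0 + r)          # heuristic stepsize shrinking with r
        for _ in range(inner):
            viol = np.maximum(g(x), 0.0)
            x = x - lr * (f_grad(x) + 2.0 * r * g_jac(x).T @ viol)
        r *= 10.0                     # tighten the penalty, warm-start x
    return x

# Toy problem: min (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2;  x* = (1, 1)
f_grad = lambda x: 2.0 * (x - 2.0)
g = lambda x: np.array([x[0] + x[1] - 2.0])
g_jac = lambda x: np.array([[1.0, 1.0]])
x = exterior_penalty(f_grad, g, g_jac, np.zeros(2))
```

Each subproblem's minimizer approaches the constrained solution from the infeasible side as r grows, which is the exterior-point behavior the abstract refers to.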