Found 20 related articles (search time: 171 ms)
1.
An algorithm is proposed that uses the conjugate gradient method to minimize a quadratic loss function in a distributed environment. The algorithm updates the iterate using first-order derivative information from the local loss functions on the worker machines, performing two rounds of communication per iteration so that, through cooperative communication, the sum of the loss functions is minimized on the master machine. A theoretical analysis proves that the algorithm converges linearly. Comparisons with the distributed alternating direction method of multipliers (ADMM) on simulated data sets show that the distributed conjugate gradient algorithm comes closer to centralized performance. …
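As a toy sketch of this setup (the simulated data and all names below are illustrative assumptions, not the paper's algorithm or experiments): for a quadratic loss split across machines, a master can run conjugate gradient using only aggregated first-order information, with two gradient-aggregation rounds per iteration.

```python
import numpy as np

# Simulated distributed quadratic loss: machine i holds f_i(x) = 0.5 x^T A_i x - b_i^T x.
rng = np.random.default_rng(0)
d, m = 5, 4
A_locals = []
for _ in range(m):
    Mi = rng.standard_normal((d, d))
    A_locals.append(Mi @ Mi.T + np.eye(d))     # symmetric positive definite pieces
b_locals = [rng.standard_normal(d) for _ in range(m)]
A, b = sum(A_locals), sum(b_locals)            # aggregated problem (for checking only)

def local_grad(i, x):
    """What machine i sends to the master: its local first-order information."""
    return A_locals[i] @ x - b_locals[i]

x = np.zeros(d)
g = sum(local_grad(i, x) for i in range(m))    # communication round 1
p = -g
for _ in range(d):
    if g @ g < 1e-20:
        break
    # Round 2: a Hessian-vector product assembled from two gradient evaluations,
    # so only first-order information crosses the network (exact for quadratics).
    Ap = sum(local_grad(i, x + p) - local_grad(i, x) for i in range(m))
    alpha = (g @ g) / (p @ Ap)
    x = x + alpha * p
    g_new = g + alpha * Ap
    beta = (g_new @ g_new) / (g @ g)
    g, p = g_new, -g_new + beta * p

print(np.linalg.norm(A @ x - b))
```

On a d-dimensional quadratic, conjugate gradient with exact steplengths reaches the minimizer in at most d iterations in exact arithmetic, so the aggregated residual ends up at machine-precision level.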
2.
王在华 (Wang Zaihua), 《数学的实践与认识》 (Mathematics in Practice and Theory), 2021, (7): 119-126
An iterative method is studied for a class of least-squares problems for linear matrix equations. Using the relationship between the objective function and the matrix trace, a matrix-form "gradient" descent iteration is constructed that generalizes the classical vector-form gradient descent method, and a notion of weak orthogonality between two matrices is introduced to characterize the iterative corrections. As an application of the algorithm, an iterative solution scheme is given for an optimization problem arising in machine translation.
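To make the matrix-form gradient concrete, here is a sketch for one illustrative model problem (an assumption; the paper's problem class may differ): minimize 0.5*||A X B - C||_F^2 over X, whose gradient is A^T (A X B - C) B^T, with a closed-form exact steplength along the negative gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
# Well-conditioned test data with known singular values (illustrative choice).
U, _ = np.linalg.qr(rng.standard_normal((5, 3)))
A = U * np.array([1.0, 2.0, 3.0])        # 5x3, full column rank
V, _ = np.linalg.qr(rng.standard_normal((4, 2)))
B = (V * np.array([1.0, 2.0])).T         # 2x4, full row rank
C = rng.standard_normal((5, 4))

X = np.zeros((3, 2))
for _ in range(1000):
    R = A @ X @ B - C                    # residual
    G = A.T @ R @ B.T                    # matrix-form gradient of 0.5*||AXB - C||_F^2
    AGB = A @ G @ B
    denom = np.sum(AGB * AGB)
    if denom < 1e-30:                    # gradient is (numerically) zero: done
        break
    alpha = np.sum(G * G) / denom        # exact minimizing steplength along -G
    X = X - alpha * G

grad_norm = np.linalg.norm(A.T @ (A @ X @ B - C) @ B.T)
print(grad_norm)
```

The exact steplength follows from minimizing the quadratic f(X - alpha*G) in alpha, which avoids any line search on this model problem.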
3.
For separable optimization problems whose objective is a sum of several smooth functions, a normalized incremental gradient algorithm is proposed. In each subiteration, the algorithm only needs to compute the unit negative gradient direction of one (or a few) component functions as the iteration direction. Under certain conditions, convergence of the normalized incremental gradient algorithm with divergent-sum step sizes is proved. As an application, the new algorithm and the (non-normalized) incremental gradient algo… proposed by D. P. Bertsekas and J. N. Tsitsiklis …
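The flavor of the update on a toy instance (illustrative only; this is not the authors' algorithm statement or its convergence conditions): each subiteration steps along the unit negative gradient of a single component with a diminishing, divergent-sum steplength.

```python
import numpy as np

# Component functions f_i(x) = 0.5*(x - c_i)^2; the full objective is their sum.
c = np.array([1.0, 2.0, 4.0, 8.0])

x = 10.0
k = 0
for epoch in range(200):
    for i in range(len(c)):
        g = x - c[i]                              # gradient of a single component f_i
        if g != 0.0:
            x -= (1.0 / (k + 1)) * (g / abs(g))   # unit direction, diminishing step
        k += 1

total = 0.5 * np.sum((x - c) ** 2)
print(x, total)
```

Note that normalizing the direction changes which point the iterates approach, so this sketch only illustrates the update rule and step-size schedule, not the limit behavior of the authors' method.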
4.
Conjugate gradient methods are an important class of methods for large-scale unconstrained optimization. Since the search directions they generate are not necessarily descent directions, this paper proposes a spectral conjugate gradient algorithm for unconstrained optimization in which every search direction is a descent direction. Assuming that the objective function is uniformly convex, that its gradient satisfies a Lipschitz condition, and that the line search satisfies the Wolfe conditions, the global convergence of the proposed algorithm is discussed.
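A standard way to get guaranteed descent directions in a spectral conjugate gradient method is to choose the spectral parameter theta_k so that g_k^T d_k = -||g_k||^2 holds identically. The sketch below uses that device with a PRP-type beta_k and backtracking Armijo (simpler than the Wolfe search assumed above); all concrete choices are illustrative, not the paper's.

```python
import numpy as np

Q = np.diag([1.0, 2.0, 3.0, 4.0])

def f(x):
    return 0.5 * x @ Q @ x + 0.25 * np.sum(x**4)   # uniformly convex test function

def grad(x):
    return Q @ x + x**3

x = 2.0 * np.ones(4)
g = grad(x)
d = -g                                             # first direction: steepest descent
for _ in range(1000):
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # backtracking Armijo
        t *= 0.5
    x_new = x + t * d
    g_new = grad(x_new)
    if g_new @ g_new < 1e-20:
        x, g = x_new, g_new
        break
    beta = g_new @ (g_new - g) / (g @ g)           # PRP-type beta (illustrative)
    # Spectral parameter chosen so that g_new @ d_new = -||g_new||^2 exactly:
    theta = 1.0 + beta * (g_new @ d) / (g_new @ g_new)
    d = -theta * g_new + beta * d
    x, g = x_new, g_new

print(np.linalg.norm(g))
```

Whatever the sign of beta, the chosen theta forces the new direction's slope to be exactly -||g||^2, so every iteration is a descent step.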
5.
6.
A conjugate projection gradient method for linearly constrained optimization    Cited by: 1 (self-citations: 0, other citations: 1)
Combining the ideas of the conjugate gradient method and the gradient projection method, this paper establishes a new algorithm for linearly equality-constrained optimization, called the conjugate projection gradient method. Important properties and the convergence of the algorithm are analyzed and proved both for quadratic convex objective functions and for general objective functions.
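A minimal sketch of combining projection with conjugate directions (an illustrative special case, a convex quadratic with one linear equality constraint, not the paper's general algorithm): project gradients onto the null space of the constraint matrix and run conjugate gradient there, so every iterate stays feasible.

```python
import numpy as np

# Equality-constrained quadratic test problem:
#   min 0.5 x^T Q x - c^T x   subject to   E x = e
Q = np.diag([1.0, 2.0, 3.0, 4.0])
c = np.array([1.0, 0.0, -1.0, 2.0])
E = np.array([[1.0, 1.0, 1.0, 1.0]])
e = np.array([2.0])

# Orthogonal projector onto the null space of E.
P = np.eye(4) - E.T @ np.linalg.solve(E @ E.T, E)

x = np.linalg.lstsq(E, e, rcond=None)[0]      # a feasible starting point
g = P @ (Q @ x - c)                           # projected gradient
d = -g
for _ in range(4):
    if g @ g < 1e-20:
        break
    lam = -(g @ d) / (d @ Q @ d)              # exact steplength for the quadratic
    x = x + lam * d                           # stays feasible because E d = 0
    g_new = P @ (Q @ x - c)
    d = -g_new + ((g_new @ g_new) / (g @ g)) * d
    g = g_new

print(x, np.linalg.norm(E @ x - e))
```

Since the directions lie in the 3-dimensional null space of E, the iteration terminates in at most three steps on this quadratic.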
7.
An efficient algorithm is studied for minimization problems involving the generalized Sylvester equation under column-orthogonality constraints. Based on the geometry of the Stiefel manifold and the MPRP conjugate gradient method in Euclidean space, a Riemannian MPRP conjugate gradient iteration is constructed and its global convergence is established. The search directions produced by this iterative scheme always guarantee descent of the objective function. Numerical experiments and comparisons verify that the proposed algorithm is efficient and feasible for the problem model.
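A simplified Riemannian sketch (illustrative: plain Riemannian gradient descent with a QR retraction and Armijo backtracking, not the MPRP conjugate gradient iteration itself, on an assumed toy objective ||A X - B||_F^2):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 2))

def fval(X):
    return 0.5 * np.sum((A @ X - B) ** 2)

def retract(Y):
    """QR retraction onto the Stiefel manifold, normalized to a positive-diagonal R."""
    Q, Rq = np.linalg.qr(Y)
    return Q * np.where(np.diag(Rq) < 0.0, -1.0, 1.0)

X = retract(rng.standard_normal((6, 2)))     # feasible starting point, X^T X = I
f0 = fval(X)
for _ in range(200):
    G = A.T @ (A @ X - B)                    # Euclidean gradient
    sym = 0.5 * (X.T @ G + G.T @ X)
    R = G - X @ sym                          # projection onto the tangent space
    if np.sum(R * R) < 1e-16:
        break
    t = 1.0
    while fval(retract(X - t * R)) > fval(X) - 1e-4 * t * np.sum(R * R):
        t *= 0.5                             # Armijo backtracking on the manifold
    X = retract(X - t * R)

print(fval(X), np.linalg.norm(X.T @ X - np.eye(2)))
```

The retraction keeps every iterate exactly on the manifold, and backtracking along the retracted curve guarantees monotone descent of the objective.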
8.
9.
10.
11.
A new class of nonmonotone memory gradient methods and their global convergence    Cited by: 1 (self-citations: 0, other citations: 1)
Building on the nonmonotone Armijo line search, a new nonmonotone line search is proposed, and a class of memory gradient methods using it is studied; global convergence is proved under fairly weak conditions. Compared with the nonmonotone Armijo line search, the new line search can produce larger step sizes at each iteration, yielding sufficient decrease of the objective value and reducing the computational cost of the algorithm.
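The acceptance rule can be sketched as follows (illustrative constants and a toy objective; the paper's line search differs in how the reference value and step are formed): the Armijo test compares against the maximum of the last M accepted objective values, so individual steps may increase f while the overall sequence still converges.

```python
import numpy as np

def f(x):
    return 0.5 * (x[0]**2 + 10.0 * x[1]**2) + 0.25 * x[0]**4

def grad(x):
    return np.array([x[0] + x[0]**3, 10.0 * x[1]])

M = 5                                   # memory length of the nonmonotone reference
x = np.array([3.0, -2.0])
hist = [f(x)]
d = -grad(x)
for _ in range(1000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    # Memory gradient direction; beta is damped so that g @ d <= -0.6 * ||g||^2.
    beta = 0.4 * np.linalg.norm(g) / max(np.linalg.norm(d), 1e-16)
    d = -g + beta * d
    t = 1.0
    f_ref = max(hist[-M:])              # nonmonotone Armijo: compare to recent max
    while f(x + t * d) > f_ref + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
    hist.append(f(x))

print(np.linalg.norm(grad(x)))
```

Because the reference value is a running maximum rather than f(x), the test is easier to satisfy and typically accepts larger steps than the monotone Armijo rule.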
12.
Changyu Wang, Chengyun Gao, Zhenjun Shi, Computational Optimization and Applications, 1997, 7(2): 239-253
In this paper, we extend ordinary discrete facility location problems to continuous ones. Unlike the discrete facility location problem, in which the objective function is not everywhere differentiable, the objective function in the continuous facility location problem is strictly convex and continuously differentiable. An algorithm without line search for solving continuous facility location problems is proposed, and its global convergence and linear convergence rate are proved. Numerical experiments illustrate that the algorithm suggested in this paper requires less computation and converges more quickly, in some sense, than the gradient method and the conjugate direction method.
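As a concrete smoothed location objective in this spirit (an assumption for illustration, not necessarily the paper's model), take f(x) = sum_i sqrt(||x - a_i||^2 + eps^2), which is strictly convex and continuously differentiable; since its gradient is Lipschitz with constant at most m/eps, a fixed step 1/L needs no line search.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 4.0]])
eps = 1.0   # smoothing parameter making the objective strictly convex and C^1

def grad(x):
    diff = x - anchors
    s = np.sqrt(np.sum(diff**2, axis=1) + eps**2)
    return np.sum(diff / s[:, None], axis=0)

L = len(anchors) / eps          # upper bound on the gradient's Lipschitz constant
x = np.zeros(2)
for _ in range(3000):
    x = x - (1.0 / L) * grad(x)  # fixed step, no line search
print(x, np.linalg.norm(grad(x)))
```

Each term's Hessian has eigenvalues at most 1/eps, which gives the bound L <= m/eps and makes the constant step 1/L safe everywhere.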
13.
Nonmonotonic trust region algorithm    Cited by: 24 (self-citations: 0, other citations: 24)
A nonmonotonic trust region method for unconstrained optimization problems is presented. Although the method allows the sequence of values of the objective function to be nonmonotonic, convergence properties similar to those for the usual trust region method are proved under certain conditions, including conditions on the approximate solutions to the subproblem. To make the solution satisfy these conditions, an algorithm to solve the subproblem is also established. Finally, some numerical results are reported which show that the nonmonotonic trust region method is superior to the usual trust region method according to both the number of gradient evaluations and the number of function evaluations. The authors would like to thank Professor L. C. W. Dixon for his useful suggestions.
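A compact sketch of the nonmonotone acceptance test (simplified steps: a possibly truncated Newton step with a Cauchy-point fallback rather than a full subproblem solver; all constants illustrative): the ratio compares the actual reduction against the maximum of the last few objective values instead of the current one.

```python
import numpy as np

def f(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2    # Rosenbrock

def grad(x):
    return np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                     200.0 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[2.0 - 400.0 * x[1] + 1200.0 * x[0]**2, -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

x = np.array([-1.2, 1.0])
Delta, M = 1.0, 5
hist = [f(x)]
for _ in range(300):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-10:
        break
    # Trial step: (truncated) Newton step if it is a descent step, else Cauchy point.
    use_cauchy = True
    try:
        p = np.linalg.solve(H, -g)
        if p @ g < 0.0:
            if np.linalg.norm(p) > Delta:
                p = p * (Delta / np.linalg.norm(p))
            use_cauchy = False
    except np.linalg.LinAlgError:
        pass
    if use_cauchy:
        gHg = g @ H @ g
        t = Delta / np.linalg.norm(g)
        if gHg > 0.0:
            t = min(t, (g @ g) / gHg)
        p = -t * g
    pred = -(g @ p + 0.5 * p @ H @ p)        # predicted model decrease (positive)
    f_ref = max(hist[-M:])                   # nonmonotone reference value
    rho = (f_ref - f(x + p)) / max(pred, 1e-16)
    if rho > 0.1:                            # accept even if f increased slightly
        x = x + p
        hist.append(f(x))
        if rho > 0.75 and np.linalg.norm(p) > 0.9 * Delta:
            Delta *= 2.0
    else:
        Delta *= 0.5

print(f(x))
```

Using the recent maximum as the reference lets the method accept steps that temporarily raise f, which is exactly the nonmonotone behavior the abstract describes.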
14.
1. Introduction  Traditional iterative methods for finding zeros discuss only the convergence order of the iterate sequence {x_n}. In recent years, G. Alefeld and F. A. Potra have studied the convergence of the sequence of radii of intervals containing the zero [2][3], while we proposed a derivative-free iterative method for which both the point sequence and the interval-radius sequence converge quadratically [1]; that is, when n is sufficiently large, the sequence {x_n} and the sequence of radii {(b_n - a_n)} of the intervals containing the zero both converge quadratically. Further analysis shows that the results of [1] can still be improved and, moreover, there is no need …
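For context, a classical bracketing iteration in which both the points and the interval radii converge is the Illinois variant of regula falsi (shown below as background; it is not the method of the paper, whose point and radius sequences are both quadratically convergent):

```python
def f(x):
    return x * x - 2.0

a, b = 1.0, 2.0
fa, fb = f(a), f(b)          # f(a) < 0 < f(b): [a, b] brackets the zero
for _ in range(100):
    c = b - fb * (b - a) / (fb - fa)   # secant point, always inside (a, b)
    fc = f(c)
    if fa * fc < 0.0:
        b, fb = c, fc
        fa = fa / 2.0        # Illinois modification: damp the retained endpoint
    else:
        a, fa = c, fc
        fb = fb / 2.0
    if b - a < 1e-14 or fc == 0.0:
        break

print(0.5 * (a + b), b - a)
```

Damping the retained endpoint forces both endpoints to update infinitely often, so the bracketing radius b - a itself shrinks to zero, unlike plain regula falsi, where one endpoint can stall.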
15.
In this work, we present an algorithm for solving constrained optimization problems that does not make explicit use of the objective function derivatives. The algorithm mixes an inexact restoration framework with filter techniques, where the forbidden regions can be given by the flat or slanting filter rule. Each iteration is decomposed into two independent phases: a feasibility phase which reduces an infeasibility measure without evaluations of the objective function, and an optimality phase which reduces the objective function value. As the derivatives of the objective function are not available, the optimality step is computed by derivative-free trust-region internal iterations. Any technique to construct the trust-region models can be used since the gradient of the model is a reasonable approximation of the gradient of the objective function at the current point. Assuming this and classical assumptions, we prove that the full steps are efficient in the sense that near a feasible nonstationary point, the decrease in the objective function is relatively large, ensuring the global convergence results of the algorithm. Numerical experiments show the effectiveness of the proposed method.
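The flat filter rule can be sketched as a dominance test on pairs (h, f) of infeasibility measure and objective value (with an illustrative margin parameter gamma; the paper's exact envelopes may differ):

```python
def acceptable(filter_pairs, h, f, gamma=0.1):
    """Flat filter rule: the trial pair (h, f) is acceptable only if, for every
    stored pair (hj, fj), it sufficiently improves feasibility or optimality."""
    return all(h <= (1.0 - gamma) * hj or f <= fj - gamma * hj
               for hj, fj in filter_pairs)

filt = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable(filt, 0.2, 10.0))   # True: much more feasible than every entry
print(acceptable(filt, 1.0, 5.0))    # False: lies in the forbidden region
```

A trial point rejected by the filter falls in the forbidden region; accepted pairs are added to the filter, which is what drives the interplay between the feasibility and optimality phases.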
16.
17.
For the problem of finding a simple root of a nonlinear equation, a new Newton predictor-corrector scheme is proposed. By computing one additional function value and one additional first-derivative value per iteration, each iteration requires evaluating two function values and two first-derivative values. Compared with the quadratic convergence of the standard Newton method, the new algorithm attains the higher convergence order $2+\sqrt{6}$. The new algorithm is tested on benchmark functions and compared with related algorithms, showing clear advantages in iteration counts, running time, and attained optima. Finally, the new scheme is extended to multidimensional vector-valued functions, its convergence is proved using Taylor's formula, and two two-dimensional examples are given to verify its effectiveness.
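For comparison, a generic predictor-corrector composition of two Newton steps per iteration likewise uses two function and two derivative evaluations, though it attains order 4 rather than the $2+\sqrt{6}$ of the scheme above; this sketch is not the authors' method.

```python
def f(x):
    return x**3 - 2.0 * x - 5.0

def df(x):
    return 3.0 * x**2 - 2.0

x = 2.0
for _ in range(6):
    y = x - f(x) / df(x)        # predictor: one Newton step (one f, one f')
    x = y - f(y) / df(y)        # corrector: a second Newton step (one more f, f')

print(x)   # root of x^3 - 2x - 5 near 2.0945515
```

On this classic test equation, the composed iteration reaches machine precision in a handful of outer iterations.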
18.
Two fundamental convergence theorems for nonlinear conjugate gradient methods and their applications
1. Introduction  We consider the global convergence of conjugate gradient methods for the unconstrained nonlinear optimization problem min f(x), where f: R^n → R^1 is continuously differentiable and its gradient is denoted by g. We consider only the case where the methods are implemented without regular restarts. The iterative formula is given by x_{k+1} = x_k + λ_k d_k, (1.1) and the search direction d_k is defined by d_1 = -g_1 and d_k = -g_k + β_k d_{k-1} for k ≥ 2, (1.2) where β_k is a scalar, λ_k is a steplength, and g_k denotes g(x_k). The best-known formulas fo…
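When f is a strictly convex quadratic and λ_k is the exact steplength, the scheme (1.1)-(1.2) with the Fletcher-Reeves choice of β_k reduces to linear conjugate gradient and terminates in at most n steps; a sketch on simulated data (illustrative):

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite; g(x) = A x - b.
rng = np.random.default_rng(2)
Mat = rng.standard_normal((6, 6))
A = Mat @ Mat.T + np.eye(6)
b = rng.standard_normal(6)

x = np.zeros(6)
g = A @ x - b
d = -g                                   # d_1 = -g_1
for _ in range(6):
    if g @ g < 1e-20:
        break
    lam = -(g @ d) / (d @ A @ d)         # exact steplength lambda_k for a quadratic
    x = x + lam * d                      # (1.1): x_{k+1} = x_k + lambda_k d_k
    g_new = A @ x - b
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta_k
    d = -g_new + beta * d                # (1.2): d_k = -g_k + beta_k d_{k-1}
    g = g_new

print(np.linalg.norm(A @ x - b))
```

For nonquadratic f, the same recursion is used with an inexact line search for λ_k, which is exactly the setting whose global convergence the paper analyzes.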
19.
20.
This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
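A simplified sketch in the spirit of the method, specialized (as an assumption) to an l1-regularized least-squares instance: each coordinate step minimizes a second-order model of the smooth part plus the convex term, which reduces to a soft-threshold update.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)
lam = 0.5
H_diag = np.sum(A * A, axis=0)       # per-coordinate second-order coefficients

x = np.zeros(8)
r = A @ x - b                        # residual, kept up to date incrementally
for _ in range(200):
    for j in range(8):
        g_j = A[:, j] @ r            # partial gradient of the smooth part
        z = x[j] - g_j / H_diag[j]
        # Coordinate subproblem: quadratic model + lam*|.|  =>  soft threshold.
        new = np.sign(z) * max(abs(z) - lam / H_diag[j], 0.0)
        if new != x[j]:
            r += (new - x[j]) * A[:, j]
            x[j] = new

print(x)
```

At a solution, the subgradient optimality conditions hold coordinatewise: the partial gradient of the smooth part equals -lam*sign(x_j) on the support and is bounded by lam elsewhere.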