Similar Documents
A total of 20 similar documents were retrieved.
1.
A Diagonal Sparse Quasi-Newton Method for Unconstrained Optimization Problems   Cited by: 3 (self-citations: 0, others: 3)
A diagonal sparse quasi-Newton method is proposed for unconstrained optimization problems. The algorithm employs an inexact Armijo line search and, at each iteration, approximates the quasi-Newton correction matrix by a diagonal matrix, which substantially reduces the storage and work needed to compute the search direction and offers a new approach to solving large-scale unconstrained optimization problems. Under standard assumptions, global convergence and a linear convergence rate are proved, and the superlinear convergence properties are analyzed. Numerical experiments show that the algorithm is more effective than the conjugate gradient method and is well suited to large-scale unconstrained optimization problems.
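
As an illustration of the two ingredients named above, the following sketch combines Armijo backtracking with a diagonal Hessian approximation, so only a vector is stored. The componentwise secant update used here is an illustrative stand-in, not the paper's exact correction rule.

```python
import numpy as np

def armijo(f, x, d, g, beta=0.5, sigma=1e-4):
    """Armijo backtracking: largest t = beta^k with sufficient decrease."""
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + sigma * t * g @ d:
        t *= beta
    return t

def diagonal_qn(f, grad, x0, tol=1e-6, max_iter=500):
    """Quasi-Newton iteration with a diagonal Hessian approximation.
    Storage and work per step are O(n)."""
    x = x0.copy()
    D = np.ones_like(x)                  # diagonal Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g / D                       # direction from the diagonal "inverse"
        t = armijo(f, x, d, g)
        s, x = t * d, x + t * d
        g_new = grad(x)
        y = g_new - g
        # Illustrative componentwise secant update, kept positive and bounded
        # (an assumption, not the paper's exact correction rule).
        ratio = np.divide(y, s, out=D.copy(), where=np.abs(s) > 1e-12)
        D = np.clip(ratio, 1e-4, 1e4)
        g = g_new
    return x

# Usage: a convex quadratic with a diagonal Hessian.
A = np.diag(np.arange(1.0, 101.0))
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(np.linalg.norm(diagonal_qn(f, grad, np.ones(100))))  # ~0
```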

2.
A Minimization Method for Nondifferentiable Composite Functions   Cited by: 1 (self-citations: 0, others: 1)
This paper presents a descent algorithm for minimizing nondifferentiable composite functions. The algorithm finds a descent direction through inner iterations, each of which solves a quadratic program. The outer iterates are obtained by an inexact line search; the algorithm reaches an approximate stationary point within finitely many steps, and with a suitable modification it converges globally to a stationary point.

3.
刘景辉  马昌凤  陈争 《计算数学》2012,34(3):275-284
Building on the classical trust-region framework, a new trust-region algorithm with line search is proposed for unconstrained optimization. The algorithm obtains the step length by a large-step Armijo line search, which avoids the considerable cost of solving a trust-region subproblem at every iteration and therefore makes it suitable for large-scale optimization problems. Global convergence is proved under suitable conditions, and numerical results show that the proposed algorithm is effective.
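
A minimal sketch of the idea, assuming one plausible reading: the trust-region direction is computed once per iteration (here by the standard dogleg rule, a stand-in for the paper's subproblem solver), and the step length then comes from a "large-step" Armijo search rather than from radius shrinking and subproblem re-solves.

```python
import numpy as np

def dogleg(g, B, delta):
    """Approximate solution of min g'd + 0.5 d'Bd s.t. ||d|| <= delta."""
    pB = np.linalg.solve(B, -g)                 # full Newton-like step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / (g @ B @ g) * g             # Cauchy (steepest-descent) step
    if np.linalg.norm(pU) >= delta:
        return -delta * g / np.linalg.norm(g)
    w = pB - pU                                 # walk the dogleg to the boundary
    a, b, c = w @ w, 2 * pU @ w, pU @ pU - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pU + tau * w

def tr_with_linesearch(f, grad, hess, x0, delta=1.0, tol=1e-8, max_iter=300):
    """Trust-region direction once per iteration; step length from a
    'large-step' Armijo search (starting above 1)."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = dogleg(g, hess(x), delta)
        if g @ d >= 0:                          # safeguard: force descent
            d = -g
        t, fx = 2.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g @ d:
            t *= 0.5
        x = x + t * d
    return x

# Usage on the Rosenbrock function.
f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(tr_with_linesearch(f, grad, hess, np.array([-1.2, 1.0])))  # ~[1, 1]
```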

4.
汤京永  董丽  郭淑利 《运筹与管理》2009,18(4):79-81,117
This paper presents a class of nonmonotone curve search methods for unconstrained optimization and proves their convergence under rather weak conditions. The algorithm has the following features: (1) a curve search is used, so the descent direction and the step length are determined simultaneously at each iteration; (2) a nonmonotone search technique admits larger step lengths and reduces the computational cost; (3) the descent direction is generated from information at the current and previous iterates, so no matrices need to be computed or stored, making the method suitable for large-scale optimization problems.

5.
A new feasible QP-free algorithm is proposed for nonlinear optimization problems with inequality constraints. The new algorithm preserves the advantages of existing methods and has the following features: (1) each iteration solves only three systems of linear equations with the same coefficient matrix, so the computational cost is low; (2) a feasible descent direction is obtained by solving a single linear system, avoiding the earlier practice of solving two separate linear systems for a descent direction and a feasibility direction and then forming their convex combination; (3) all iterates are feasible points and are not required to be strictly interior; (4) a tentative (trial) line search further reduces the computational cost; (5) the algorithm involves few parameters, and numerical experiments show good performance and strong stability.

6.
周鑫  欧宜贵 《应用数学》2018,31(2):400-407
A nonmonotone quasi-Newton ODE-type method is presented for unconstrained optimization. Its main feature is that, at each iteration, the search direction is obtained using only matrix-vector products, so no system of linear equations has to be solved, which reduces the computational cost. A modified nonmonotone line search is then used to obtain the next iterate. Under suitable conditions the method is globally convergent and locally superlinearly convergent. Preliminary numerical results demonstrate its effectiveness.

7.
A New Class of Nonmonotone Memory Gradient Methods and Their Global Convergence   Cited by: 1 (self-citations: 0, others: 1)
A new nonmonotone line search is proposed on the basis of the nonmonotone Armijo line search, and a class of memory gradient methods using this line search is studied; their global convergence is proved under rather weak conditions. Compared with the nonmonotone Armijo line search, the new line search can produce larger step lengths at each iteration, so the objective function value decreases sufficiently and the computational cost of the algorithm is reduced.
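
For reference, the following sketch implements the classical nonmonotone Armijo rule (Grippo-Lampariello-Lucidi type) that the abstract builds on: sufficient decrease is measured against the maximum of the last M objective values, so larger steps survive. The paper's new rule, which admits still larger steps, is not reproduced.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, hist, sigma=1e-4, beta=0.5):
    """Accept t when f(x + t d) <= max(recent f-values) + sigma t g'd."""
    f_ref = max(hist)                     # nonmonotone reference value
    t = 1.0
    while f(x + t * d) > f_ref + sigma * t * g @ d:
        t *= beta
    return t

def gradient_descent_nm(f, grad, x0, M=10, tol=1e-8, max_iter=1000):
    x = x0.copy()
    hist = deque([f(x)], maxlen=M)        # sliding window of recent f-values
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = nonmonotone_armijo(f, x, -g, g, hist)
        x = x - t * g
        hist.append(f(x))
    return x

# Usage: an ill-conditioned quadratic, where monotone Armijo forces tiny steps.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(gradient_descent_nm(f, grad, np.array([1.0, 1.0])))  # ~[0, 0]
```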

8.
Global Convergence of a New Class of Conjugate Descent Algorithms   Cited by: 2 (self-citations: 1, others: 1)
This paper proposes a new class of conjugate descent algorithms in which the search direction remains a descent direction throughout the iterations; global convergence is proved under an inexact line search.
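
As background, here is a sketch of Fletcher's classical conjugate descent update beta_k = ||g_k||^2 / (-d_{k-1}^T g_{k-1}), on which such algorithms are based; the paper's specific modification is not reproduced, and SciPy's strong Wolfe line search stands in for the inexact search.

```python
import numpy as np
from scipy.optimize import line_search

def conjugate_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Fletcher's CD method: under a strong Wolfe line search every
    direction d_k is a descent direction."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]   # strong Wolfe search
        if alpha is None:                              # fallback safeguard
            alpha = 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (-(d @ g))            # CD formula
        d = -g_new + beta * d
        g = g_new
    return x

# Usage: a smooth strictly convex test function, minimizer at the origin.
f = lambda x: np.sum(x**4 + x**2)
grad = lambda x: 4 * x**3 + 2 * x
print(np.linalg.norm(conjugate_descent(f, grad, np.arange(1.0, 11.0))))  # ~0
```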

9.
孙青青  王川龙 《计算数学》2021,43(4):516-528
For a nonconvex optimization model of the low-rank and sparse matrix recovery problem, this paper proposes a fast nonmonotone alternating minimization method. The main idea is to update the low-rank part by alternating minimization and the sparse part by a nonmonotone line search technique. The nonmonotone line search relaxes the single-step decrease requirement to a multi-step one, thereby improving computational efficiency. A convergence analysis of the new algorithm is also given. Finally, numerical comparisons show that the nonmonotone alternating minimization method is more effective for matrix recovery than the original monotone methods.
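
A sketch of the underlying monotone alternating scheme for the low-rank-plus-sparse split, with a truncated SVD for the low-rank block and soft-thresholding for the sparse block; the paper's nonmonotone line search acceleration is not reproduced.

```python
import numpy as np

def svd_truncate(X, r):
    """Best rank-r approximation (projection onto rank-<=r matrices)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def soft_threshold(X, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_sparse_split(D, r, lam, iters=100):
    """Alternate exact minimization in L and S for
    min 0.5*||D - L - S||_F^2 + lam*||S||_1  s.t. rank(L) <= r."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svd_truncate(D - S, r)        # low-rank block update
        S = soft_threshold(D - L, lam)    # sparse block update
    return L, S

# Usage: recover a planted rank-2 matrix corrupted by sparse spikes.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S0 = np.where(rng.random((50, 50)) < 0.05, 10.0, 0.0)
L, S = lowrank_sparse_split(L0 + S0, r=2, lam=0.5)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small relative error
```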

10.
This paper proposes a new generalized gradient projection method for linearly constrained optimization problems. The algorithm employs an inexact line search and, at each iteration, determines the search direction by combining the generalized projection matrix with ideas from variable metric methods. Under standard assumptions, global convergence and a superlinear convergence rate are proved.
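
A minimal projected-gradient sketch for the equality-constrained case min f(x) s.t. Ax = b, using the orthogonal projector P = I - A^T (A A^T)^{-1} A onto the null space of A; the paper's generalized projection and variable metric refinements are not reproduced.

```python
import numpy as np

def gradient_projection(f, grad, A, x0, tol=1e-8, max_iter=1000):
    """Projected gradient for min f(x) s.t. Ax = b, from a feasible x0.
    P maps gradients into the null space of A, so iterates stay feasible."""
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        d = -P @ g                        # projected steepest descent direction
        if np.linalg.norm(d) < tol:       # d = 0 is the stationarity condition
            break
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g @ d:   # Armijo backtracking
            t *= 0.5
        x = x + t * d
    return x

# Usage: minimize ||x||^2 subject to x1 + x2 + x3 = 3 (solution: all ones).
A = np.array([[1.0, 1.0, 1.0]])
f = lambda x: x @ x
grad = lambda x: 2 * x
x0 = np.array([3.0, 0.0, 0.0])            # feasible start
print(gradient_projection(f, grad, A, x0))  # ~[1, 1, 1]
```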

11.
We investigate the problem of robust matrix completion in which a fraction of the observations is corrupted by sparse outlier noise. We propose an algorithmic framework based on the ADMM algorithm for a non-convex optimization problem whose objective function consists of an $\ell_1$-norm data fidelity term and a rank constraint. To reduce the computational cost per iteration, two inexact schemes are developed to replace the most time-consuming step in the generic ADMM algorithm. The resulting algorithms remarkably outperform existing solvers for robust matrix completion with outlier noise. When the noise is severe and the underlying matrix is ill-conditioned, the proposed algorithms are faster and give more accurate solutions than state-of-the-art robust matrix completion approaches.
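
A generic ADMM sketch for the model min ||P_Omega(X - M)||_1 s.t. rank(X) <= r, with the splitting S = P_Omega(X - M). The X-update heuristic below (carrying unobserved entries over from the previous X before the rank-r projection) is an assumption standing in for the paper's inexact schemes, not the authors' exact construction.

```python
import numpy as np

def svd_truncate(X, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def robust_mc_admm(M, mask, r, rho=1.0, iters=300):
    """ADMM for  min ||S||_1  s.t.  S = mask*(X - M), rank(X) <= r."""
    X = mask * M
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # scaled-by-rho dual variable
    for _ in range(iters):
        # X-update: rank-r projection of the current completion estimate.
        Z = mask * (M + S - Y / rho) + (1 - mask) * X
        X = svd_truncate(Z, r)
        # S-update: soft-thresholding (prox of ||.||_1 / rho).
        R = mask * (X - M) + Y / rho
        S = np.sign(R) * np.maximum(np.abs(R) - 1.0 / rho, 0.0)
        # Dual ascent on the splitting constraint.
        Y = Y + rho * (mask * (X - M) - S)
    return X

# Usage: rank-3 ground truth, 60% observed, ~5% of entries hit by outliers.
rng = np.random.default_rng(1)
X0 = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 60))
mask = (rng.random(X0.shape) < 0.6).astype(float)
M = X0 + np.where(rng.random(X0.shape) < 0.05, 10.0, 0.0)
Xhat = robust_mc_admm(M, mask, r=3)
print(np.linalg.norm(Xhat - X0) / np.linalg.norm(X0))  # relative error
```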

12.
Low Tucker rank tensor completion has wide applications in science and engineering. Many existing approaches handle the Tucker rank through the ranks of the unfolding matrices. However, unfolding a tensor into a matrix destroys the data's original multi-way structure, resulting in vital information loss and degraded performance. In this article, we establish a relationship between the Tucker ranks and the ranks of the factor matrices in the Tucker decomposition. We then reformulate the low Tucker rank tensor completion problem as a multilinear low-rank matrix completion problem. For the reformulated problem, a symmetric block coordinate descent method is customized. For each matrix rank minimization subproblem, the classical truncated nuclear norm minimization is adopted. Furthermore, temporal characteristics of image and video data are incorporated into the model, which benefits the performance of the method. Numerical simulations illustrate the efficiency of our proposed models and methods.

13.
A recent work of Shi (Numer. Linear Algebra Appl. 2002; 9:195-203) proposed a hybrid algorithm which combines a primal-dual potential reduction algorithm with the use of the steepest descent direction of the potential function. The complexity of the potential reduction algorithm remains valid but the overall computational cost can be reduced. In this paper, we seek to reduce the computational costs further. We note that obtaining the steepest descent direction of the potential function requires computing the Hessian matrix of second-order partial derivatives of the objective function. To avoid this, we propose another hybrid algorithm which uses a projected steepest descent direction of the objective function instead of the steepest descent direction of the potential function. The complexity of the original potential reduction algorithm still remains valid, while the overall computational cost is further reduced. Our numerical experiments are also reported. Copyright © 2004 John Wiley & Sons, Ltd.

14.
For solving large-scale linear least-squares problems by iterative methods, we introduce an effective probability criterion for selecting the working columns from the coefficient matrix and construct a greedy randomized coordinate descent method. It is proved that this method converges to the unique solution of the linear least-squares problem when its coefficient matrix is of full rank, with the number of rows no less than the number of columns. Numerical results show that the greedy randomized coordinate descent method is more efficient than the randomized coordinate descent method.
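
One plausible instantiation of such a method, with the caveat that the exact probability criterion is the paper's contribution and is not reproduced here: columns are sampled among those whose scaled residual correlation |A_j^T r|^2 / ||A_j||^2 lies above a threshold between the mean and the maximum, in the spirit of greedy randomized Kaczmarz selection rules.

```python
import numpy as np

def greedy_rcd(A, b, iters=5000, theta=0.5, seed=0):
    """Coordinate descent for min ||Ax - b||^2 with a greedy randomized
    column choice (illustrative criterion, not the paper's exact rule)."""
    rng = np.random.default_rng(seed)
    col_norms2 = (A * A).sum(axis=0)
    x = np.zeros(A.shape[1])
    r = b.copy()                                  # residual b - Ax
    for _ in range(iters):
        corr = (A.T @ r) ** 2 / col_norms2        # scaled correlations
        if corr.max() < 1e-24:                    # A'r = 0: solved
            break
        bound = theta * corr.max() + (1 - theta) * corr.mean()
        idx = np.flatnonzero(corr >= bound)       # greedy candidate set
        j = rng.choice(idx, p=corr[idx] / corr[idx].sum())  # randomized pick
        delta = (A[:, j] @ r) / col_norms2[j]     # exact 1-D minimization
        x[j] += delta
        r -= delta * A[:, j]
    return x

# Usage: overdetermined full-rank least squares.
rng = np.random.default_rng(2)
A = rng.standard_normal((500, 50))
b = rng.standard_normal(500)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(greedy_rcd(A, b) - x_ref))   # ~0
```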

15.
This paper proposes new iterative methods for the efficient computation of the smallest eigenvalue of symmetric nonlinear matrix eigenvalue problems of large order with a monotone dependence on the spectral parameter. Monotone nonlinear eigenvalue problems for differential equations have important applications in mechanics and physics. The discretization of these eigenvalue problems leads to nonlinear eigenvalue problems with very large sparse ill-conditioned matrices monotonically depending on the spectral parameter. To compute the smallest eigenvalue of large-scale matrix nonlinear eigenvalue problems, we suggest preconditioned iterative methods: preconditioned simple iteration method, preconditioned steepest descent method, and preconditioned conjugate gradient method. These methods use only matrix-vector multiplications, preconditioner-vector multiplications, linear operations with vectors, and inner products of vectors. We investigate the convergence and derive grid-independent error estimates for these methods. Numerical experiments demonstrate the practical effectiveness of the proposed methods for a model problem.
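
A sketch of the preconditioned steepest descent idea in the simpler linear special case (smallest eigenvalue of a symmetric positive definite matrix): precondition the eigenresidual, then solve a 2x2 Rayleigh-Ritz problem on span{x, w}. The monotone nonlinear dependence on the spectral parameter treated in the paper is omitted, and an exact Cholesky factorization stands in for the incomplete factorizations typically used as preconditioners.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, eigh
from scipy.sparse import diags

def psd_smallest_eig(A, apply_prec, x0, iters=50):
    """Preconditioned steepest descent for the smallest eigenvalue of a
    symmetric positive definite matrix. Uses only matrix-vector products,
    preconditioner applications, vector operations, and inner products."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        lam = x @ (A @ x)                 # Rayleigh quotient
        w = apply_prec(A @ x - lam * x)   # preconditioned eigenresidual
        V, _ = np.linalg.qr(np.column_stack([x, w]))
        vals, vecs = eigh(V.T @ (A @ V))  # 2x2 Rayleigh-Ritz problem
        x = V @ vecs[:, 0]                # Ritz vector for the smallest value
    return x @ (A @ x), x

# Usage: 1-D Laplacian model problem.
n = 400
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).toarray()
c = cho_factor(A)                         # illustrative preconditioner
x0 = np.random.default_rng(3).standard_normal(n)
lam, x = psd_smallest_eig(A, lambda r: cho_solve(c, r), x0)
print(lam, 4 * np.sin(np.pi / (2 * (n + 1)))**2)  # computed vs. exact
```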

16.
Globally Convergent Algorithms for Unconstrained Optimization   Cited by: 2 (self-citations: 0, others: 2)
A new globalization strategy for solving an unconstrained minimization problem is proposed, based on the idea of combining the Newton direction and the steepest descent direction within each iteration. Global convergence is guaranteed from an arbitrary initial point. The search direction in each iteration is chosen to be as close to the Newton direction as possible, and may be the Newton direction itself. Asymptotically the Newton step is taken in each iteration, so the local convergence is quadratic. Numerical experiments are also reported. A possible combination of a quasi-Newton direction with the steepest descent direction is also considered in our numerical experiments. The differences between the proposed strategy and a few other strategies are also discussed.
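
A minimal sketch of the globalization idea, assuming a simple blending rule: accept the Newton direction when it passes a descent test, otherwise shrink toward the steepest descent direction until the test holds. The paper's exact "as close to Newton as possible" construction is not reproduced.

```python
import numpy as np

def hybrid_newton(f, grad, hess, x0, c=1e-4, tol=1e-10, max_iter=100):
    """Globalized Newton: blend the Newton direction with -g until it is a
    good descent direction; near the solution the pure Newton step passes."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:
            dN = np.linalg.solve(hess(x), -g)
        except np.linalg.LinAlgError:
            dN = -g                               # singular Hessian fallback
        d, theta = dN, 1.0
        # shrink toward steepest descent until the angle test holds
        while d @ g > -c * np.linalg.norm(d) * np.linalg.norm(g):
            theta *= 0.5
            d = theta * dN + (1 - theta) * (-g)
        t, fx = 1.0, f(x)                         # Armijo backtracking
        while f(x + t * d) > fx + 1e-4 * t * g @ d:
            t *= 0.5
        x = x + t * d
    return x

# Usage on the Rosenbrock function: quadratic local convergence near [1, 1].
f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(hybrid_newton(f, grad, hess, np.array([-1.2, 1.0])))  # ~[1, 1]
```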

17.
In this paper, an improved feasible QP-free method is proposed to solve nonlinear inequality constrained optimization problems. A new modification is presented to obtain the revised feasible descent direction. In view of the computational cost, the most attractive feature of the new algorithm is that only one system of linear equations is required to obtain the revised feasible descent direction; thereby, per single iteration, it is only necessary to solve three systems of linear equations with the same coefficient matrix. In particular, the proposed algorithm is globally convergent without the positive definiteness assumption on the Hessian estimate. Under some suitable conditions, a superlinear convergence rate is obtained.

18.
周茜  雷渊  乔文龙 《计算数学》2016,38(2):171-186
This paper considers a class of linear matrix inequalities and the associated least-squares problem, which is equivalent to a minimal nonnegative deviation problem for the matrix inequality. Earlier work proposed an iterative method for this minimal nonnegative deviation problem, but that method must solve a constrained least-squares subproblem exactly at every iteration, so for large-scale problems the overall iteration is computationally expensive. To improve efficiency, this paper builds on the existing algorithm and proposes a modified iterative method that, at each iteration, applies a finite number of steps of a matrix-form LSQR method to solve a constrained least-squares subproblem over a low-dimensional matrix Krylov subspace, reducing the cost of the overall iteration. The convergence of the modified algorithm is proved using the projection theorem and related techniques of matrix analysis, and numerical examples verify the theoretical results and the effectiveness of the algorithm.

19.
This paper studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive definite matrices. First, for a semi-supervised metric learning model that can be recast as an unconstrained optimization problem on the group of symmetric positive definite matrices, an intrinsic steepest descent algorithm with adaptively varying step size is proposed. Then, using the Taylor expansion with integral remainder of a smooth function at an arbitrary point of a Lie group, the algorithm is shown to converge linearly on the group of symmetric positive definite matrices. Finally, numerical experiments on classification problems illustrate the effectiveness of the algorithm.
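
A sketch of intrinsic steepest descent on the SPD manifold under the affine-invariant metric, with a fixed step size standing in for the paper's adaptive rule. The geodesic update X <- X^{1/2} expm(-t X^{1/2} G X^{1/2}) X^{1/2} uses the symmetrized Euclidean gradient G; the test objective below is an illustrative choice, not the paper's metric learning model.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def spd_steepest_descent(egrad, X0, t=0.5, iters=100):
    """Intrinsic steepest descent on the SPD manifold with the
    affine-invariant metric, following geodesics at each step."""
    X = X0.copy()
    for _ in range(iters):
        G = egrad(X)
        G = 0.5 * (G + G.T)                      # symmetrize the gradient
        Xh = np.real(sqrtm(X))                   # X^{1/2}
        X = Xh @ expm(-t * Xh @ G @ Xh) @ Xh     # geodesic step, stays SPD
        X = 0.5 * (X + X.T)                      # guard against round-off
    return X

# Usage: f(X) = tr(X) - log det(X), Euclidean gradient I - X^{-1},
# whose unique SPD minimizer is the identity matrix.
egrad = lambda X: np.eye(X.shape[0]) - np.linalg.inv(X)
rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
X0 = B @ B.T + 5 * np.eye(5)                     # random SPD start
X = spd_steepest_descent(egrad, X0)
print(np.linalg.norm(X - np.eye(5)))             # ~0
```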

20.
We focus on efficient preconditioning techniques for sequences of Karush-Kuhn-Tucker (KKT) linear systems arising from the interior point (IP) solution of large convex quadratic programming problems. Constraint preconditioners (CPs), although very effective in accelerating Krylov methods in the solution of KKT systems, have a very high computational cost in some instances, because their factorization may be the most time-consuming task at each IP iteration. We overcome this problem by computing the CP from scratch only at selected IP iterations and by updating the last computed CP at the remaining iterations, via suitable low-rank modifications based on a BFGS-like formula. This work extends the limited-memory preconditioners (LMPs) for symmetric positive definite matrices proposed by Gratton, Sartenaer and Tshimanga in 2011, by exploiting specific features of KKT systems and CPs. We prove that the updated preconditioners still belong to the class of exact CPs, thus allowing the use of the conjugate gradient method. Furthermore, they have the property of increasing the number of unit eigenvalues of the preconditioned matrix as compared with the generally used CPs. Numerical experiments are reported, which show the effectiveness of our updating technique when the cost for the factorization of the CP is high.
