Similar Documents
Found 20 similar documents (search time: 62 ms)
1.
景书杰  赵海燕 《数学杂志》2014,34(6):1193-1199
This paper studies the constrained optimization problem $\min_{x\in\Omega} f(x)$. By combining the conjugate gradient algorithm with the GLP gradient-projection idea, a new conjugate gradient projection algorithm is constructed, and global convergence of the algorithm under the Wolfe line search is established.
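As a rough illustration of the gradient-projection idea underlying such methods (not the paper's GLP scheme, and with the Wolfe line search omitted), here is a minimal projected-gradient sketch, assuming the feasible set $\Omega$ is a box so that the projection is a simple clip:

```python
import numpy as np

def projected_gradient(grad, project, x0, step=1e-2, tol=1e-6, max_iter=1000):
    """Minimal projected-gradient sketch for min f(x) over x in Omega.

    `project` maps a point onto the feasible set Omega; here we assume
    it is cheap (e.g. a box). The paper combines this projection idea
    with conjugate gradient directions and a Wolfe line search, both
    of which this sketch omits for brevity.
    """
    x = project(x0)
    for _ in range(max_iter):
        x_new = project(x - step * grad(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^n.
c = np.array([1.5, -0.3, 0.4])
grad = lambda x: 2 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
print(projected_gradient(grad, project, np.zeros(3)))
```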

2.
The development of the Lanczos algorithm for finding eigenvalues of large sparse symmetric matrices was followed by that of block forms of the algorithm. In this paper, similar extensions are carried out for a relative of the Lanczos method, the conjugate gradient algorithm. The resulting block algorithms are useful for simultaneously solving multiple linear systems or for solving a single linear system in which the matrix has several separated eigenvalues or is not easily accessed on a computer. We develop a block biconjugate gradient algorithm for general matrices, and develop block conjugate gradient, minimum residual, and minimum error algorithms for symmetric semidefinite matrices. Bounds on the rate of convergence of the block conjugate gradient algorithm are presented, and issues related to computational implementation are discussed. Variants of the block conjugate gradient algorithm applicable to symmetric indefinite matrices are also developed.
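A minimal sketch of the block conjugate gradient idea for an SPD system with several right-hand sides, where the scalar coefficients of ordinary CG become small k-by-k matrices (a textbook variant, not necessarily the exact algorithms of the paper):

```python
import numpy as np

def block_cg(A, B, tol=1e-10, max_iter=200):
    """Block CG for A X = B with SPD A; B holds one column per
    right-hand side, and all systems share the Krylov information."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    for _ in range(max_iter):
        AP = A @ P
        # k-by-k solves replace the scalar alpha/beta of ordinary CG.
        alpha = np.linalg.solve(P.T @ AP, R.T @ R)
        X = X + P @ alpha
        R_new = R - AP @ alpha
        if np.linalg.norm(R_new) < tol:
            return X
        beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
        P = R_new + P @ beta
        R = R_new
    return X

# SPD test matrix with two right-hand sides.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
B = rng.standard_normal((50, 2))
X = block_cg(A, B)
print(np.linalg.norm(A @ X - B))
```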

3.
The stochastic approximation problem is to find some root or minimum of a nonlinear function in the presence of noisy measurements. The classical algorithm for the stochastic approximation problem is the Robbins-Monro (RM) algorithm, which uses the noisy negative gradient direction as the iterative direction. In order to accelerate the classical RM algorithm, this paper gives a new combined-direction stochastic approximation algorithm which employs a weighted combination of the current noisy negative gradient and a former noisy negative gradient as its iterative direction. Both the almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Numerical experiments show that the new algorithm outperforms the classical RM algorithm.
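A hedged sketch of such a combined-direction iteration: the update direction mixes the current noisy negative gradient with the previous combined direction (an exponential weighting of former gradients); the weight `mu` and the step rule $a_k = 1/k$ are illustrative choices, not the paper's exact tuning:

```python
import numpy as np

def combined_rm(noisy_grad, x0, mu=0.7, n_iter=5000):
    """Combined-direction Robbins-Monro sketch."""
    x = np.asarray(x0, dtype=float)
    d_prev = np.zeros_like(x)
    for k in range(1, n_iter + 1):
        g = noisy_grad(x)
        d = mu * (-g) + (1.0 - mu) * d_prev   # mix in the previous direction
        x = x + (1.0 / k) * d                 # classical RM step a_k = 1/k
        d_prev = d
    return x

# Minimize f(x) = ||x - 3||^2 from noisy gradient measurements.
rng = np.random.default_rng(1)
noisy_grad = lambda x: 2 * (x - 3.0) + 0.5 * rng.standard_normal(x.shape)
print(combined_rm(noisy_grad, np.zeros(2)))
```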

4.
A version of the facility location problem (the well-known p-median minimization problem) and its generalization, the problem of minimizing a supermodular set function, are studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
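As one concrete reading of a discrete steepest-descent heuristic for the p-median problem, here is a best-single-swap local search sketch; the paper's exact gradient algorithm and its worst-case bounds are not reproduced here:

```python
import numpy as np
from itertools import product

def pmedian_cost(C, S):
    """Total cost: each client uses its cheapest open median in S."""
    return C[:, sorted(S)].min(axis=1).sum()

def greedy_descent(C, p, seed=0):
    """From a random feasible set, repeatedly make the single swap
    (close one median, open another) that most decreases the cost,
    stopping at a local minimum: a discrete steepest descent."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    S = set(rng.choice(n, size=p, replace=False).tolist())
    best = pmedian_cost(C, S)
    improved = True
    while improved:
        improved = False
        for out, inn in product(list(S), set(range(n)) - S):
            cand = (S - {out}) | {inn}
            c = pmedian_cost(C, cand)
            if c < best - 1e-12:
                S, best, improved = cand, c, True
                break
    return S, best

rng = np.random.default_rng(2)
C = rng.random((30, 10))   # clients x candidate-sites cost matrix
print(greedy_descent(C, p=3))
```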

5.
New accelerated nonlinear conjugate gradient algorithms for unconstrained optimization, mainly modifications of Dai and Yuan's method, are proposed. Under the exact line search, the algorithm reduces to the Dai-Yuan conjugate gradient computational scheme; for inexact line searches it satisfies the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary in a very unpredictable manner, the algorithms are equipped with an acceleration scheme able to improve their efficiency. Computational results on a set of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai-Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes-Stiefel, Polak-Ribière-Polyak, and CONMIN conjugate gradient algorithms, and the limited-memory quasi-Newton algorithm L-BFGS, and compare favorably with CG_DESCENT. Within this numerical study, the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be more robust.
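For orientation, a plain Dai-Yuan nonlinear CG sketch without the paper's acceleration scheme, using SciPy's Wolfe line search (`scipy.optimize.line_search`); the steepest-descent restart on line-search failure is an illustrative safeguard:

```python
import numpy as np
from scipy.optimize import line_search

def dai_yuan_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Dai-Yuan CG: beta = ||g_new||^2 / d^T (g_new - g)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:            # line search failed: restart
            d, alpha = -g, 1e-4
        x = x + alpha * d
        g_new = grad(x)
        beta = g_new @ g_new / (d @ (g_new - g))
        d = -g_new + beta * d
        g = g_new
    return x

# Rosenbrock test.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(dai_yuan_cg(f, grad, np.array([-1.2, 1.0])))
```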

6.
Global convergence of a class of hybrid conjugate gradient methods under the Wolfe line search
A new conjugate gradient formula is given, which is equivalent to the DY formula under the exact line search, and related properties of the new formula are established. Combining the new formula with the DY formula, a new hybrid conjugate gradient method is proposed. The new algorithm generates a descent direction under the Wolfe line search, its global convergence is proved, and numerical examples are given.

7.
Techniques for estimating the condition number of a nonsingular matrix are developed. It is shown that Hager's 1-norm condition number estimator is equivalent to the conditional gradient algorithm applied to the problem of maximizing the 1-norm of a matrix-vector product over the unit sphere in the 1-norm. By changing the constraint in this optimization problem from the unit sphere to the unit simplex, a new formulation is obtained which is the basis for both conditional gradient and projected gradient algorithms. In the test problems, the spectral projected gradient algorithm yields condition number estimates at least as good as those obtained by the previous approach. Moreover, in some cases, the spectral projected gradient algorithm, with a careful choice of the parameters, yields improved condition number estimates.
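A sketch of Hager's 1-norm estimator in its conditional-gradient reading: ascend $\|Ax\|_1$ over the $\ell_1$ unit ball by jumping to the best unit-vector vertex until the subgradient test detects a stationary point. Applying it to $A^{-1}$ via LU solves gives the condition number estimate; the spectral projected gradient refinement of the paper is not shown:

```python
import numpy as np
import scipy.linalg as sla

def hager_norm1_estimate(apply_A, apply_AT, n, max_iter=10):
    """Conditional-gradient ascent for max ||A x||_1 over the unit
    1-norm ball, needing only products (or solves) with A and A^T."""
    x = np.full(n, 1.0 / n)                  # start at the simplex center
    for _ in range(max_iter):
        y = apply_A(x)
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = apply_AT(xi)                     # subgradient of x -> ||A x||_1
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:            # no vertex improves: stop
            return np.abs(y).sum()
        x = np.zeros(n)
        x[j] = 1.0                           # jump to the best vertex e_j
    return np.abs(apply_A(x)).sum()

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 40)) + 40 * np.eye(40)
lu, piv = sla.lu_factor(A)
inv_norm = hager_norm1_estimate(lambda v: sla.lu_solve((lu, piv), v),
                                lambda v: sla.lu_solve((lu, piv), v, trans=1),
                                40)
print(np.abs(A).sum(axis=0).max() * inv_norm)   # estimate of kappa_1(A)
```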

8.
In this paper, a new gradient-related algorithm for solving large-scale unconstrained optimization problems is proposed. The new algorithm is a line search method whose basic idea is to choose a combination of the current gradient and some previous search directions as the new search direction and to find a step size by various inexact line searches. Using more information at the current iterate may improve the performance of the algorithm, which motivates the search for new gradient algorithms that may be more effective than standard conjugate gradient methods. The concept of uniform gradient-relatedness is useful for analyzing the global convergence of the new algorithm. The global convergence and the linear convergence rate of the new algorithm are established under diverse weak conditions. Numerical experiments show that the new algorithm converges more stably and is superior to other similar methods in many situations.

9.
This paper studies $\ell_1$-regularized optimization problems on the sphere, whose objective is the sum of a general smooth term and a nonsmooth $\ell_1$-regularization term, under the assumption that a stochastic gradient of the smooth term can be estimated by a stochastic first-order oracle. Such problems are widely used in machine learning, image and signal processing, and statistics. Based on the manifold proximal gradient method and stochastic gradient estimation techniques, a spherical stochastic proximal gradient algorithm is proposed. Using a global implicit function theorem for nonsmooth functions, the Lipschitz continuity of the subproblem solution with respect to its parameters is analyzed, and global convergence of the algorithm is proved. Numerical experiments on spherical $\ell_1$-regularized quadratic programming, finite-sum SPCA, and spherical $\ell_1$-regularized logistic regression problems, with both synthetic and real data sets, show that the proposed algorithm has an advantage in CPU time over the manifold proximal gradient method and the Riemannian stochastic proximal gradient method.

10.
By combining Newton's method with the PRP conjugate gradient method, this paper proposes a modified PRP method that incorporates second-order derivative information. Under suitable assumptions the algorithm is globally convergent, and numerical examples demonstrate its effectiveness.
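For context, the textbook PRP update that the modification builds on is

$\beta_k^{PRP} = \dfrac{g_{k+1}^{T}(g_{k+1}-g_k)}{\|g_k\|^2}, \qquad d_{k+1} = -g_{k+1} + \beta_k^{PRP}\, d_k,$

where $g_k = \nabla f(x_k)$; how the second-order (Newton-type) information enters the update is specific to the paper and not reproduced here.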

11.
The adjoint method from optimal control of partial differential equations is used to discuss the numerical solution of the permeability-coefficient inversion problem for the one-dimensional Boussinesq seepage equation. The least-squares method is modified by incorporating regularization ideas, and a new iterative algorithm is constructed using the variational adjoint approach. In the iteration, the first search direction is the negative gradient, along which the functional decreases fastest, while the second and subsequent search directions use a new globally convergent descent algorithm (the Pan-Chen algorithm). Compared with the conjugate gradient method, the new algorithm has better convergence, and numerical simulation results verify the reliability of the theoretical algorithm.

12.
A combined algorithm for a class of constrained facility location models
杨益民 《应用数学》2003,16(3):70-74
For a general single-facility location model with a closed convex constraint set, a globally convergent combined algorithm is proposed. In each iteration the algorithm first applies a trust-region technique; when an "inner loop" occurs, it switches to a gradient method without line search. The algorithm retains the advantages of trust-region methods while avoiding the implicit iterations caused by inner loops. Moreover, it usually requires no line search, making it simpler and more practical than other combined algorithms.

13.
In this paper, a truncated conjugate gradient method with an inexact Gauss-Newton technique is proposed for solving nonlinear systems. The iterative direction is obtained by applying the conjugate gradient method to the inexact Gauss-Newton equation. Global convergence and the local superlinear convergence rate of the proposed algorithm are established under reasonable conditions. Finally, numerical results are presented to illustrate the effectiveness of the proposed algorithm.
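A self-contained sketch of the idea, assuming an explicit residual F and Jacobian J: the Gauss-Newton equation $J^T J\, d = -J^T F$ is solved only approximately by a truncated CG loop (the forcing term `cg_tol` controls the truncation), and the paper's globalization safeguards are omitted:

```python
import numpy as np

def truncated_gn(F, J, x0, cg_tol=0.1, tol=1e-8, max_iter=50):
    """Inexact Gauss-Newton for F(x) = 0 with a truncated CG inner loop."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < tol:
            break
        A, b = Jx.T @ Jx, -Jx.T @ Fx
        # Truncated CG on the Gauss-Newton equation A d = b.
        d = np.zeros_like(x)
        r = b.copy(); p = r.copy()
        while np.linalg.norm(r) > cg_tol * np.linalg.norm(b):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            d += alpha * p
            r_new = r - alpha * Ap
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        x = x + d
    return x

# Small nonlinear system: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0.
F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
print(truncated_gn(F, J, np.array([1.0, 1.0])))   # converges to (1, 2)
```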

14.
In this paper, we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is selected as a linear combination of the gradient and the previous direction. The coefficients in this linear combination are selected in such a way that both the descent and the conjugacy condition are satisfied at every iteration. The algorithm introduces the modified Wolfe line search, in which the parameter in the second Wolfe condition is modified at every iteration. It is shown that both for uniformly convex functions and for general nonlinear functions, the algorithm with strong Wolfe line search generates directions bounded away from infinity. The algorithm uses an acceleration scheme modifying the step length in such a manner as to improve the reduction of the function values along the iterations. Numerical comparisons with some conjugate gradient algorithms using a set of 75 unconstrained optimization problems with different dimensions show that the computational scheme outperforms the known conjugate gradient algorithms like Hestenes and Stiefel; Polak, Ribière and Polyak; Dai and Yuan or the hybrid Dai and Yuan; CG_DESCENT with Wolfe line search, as well as the quasi-Newton L-BFGS.

15.
In this paper, we present a new hybrid conjugate gradient algorithm for unconstrained optimization. The method is a convex combination of the Liu-Storey and Fletcher-Reeves conjugate gradient methods. We also prove that the search direction of any hybrid conjugate gradient method that is a convex combination of two conjugate gradient methods satisfies the Dai-Liao (D-L) conjugacy condition and, at the same time, agrees with the Newton direction under suitable conditions; furthermore, this property does not depend on the line search. We then prove that, up to the value of the parameter t, the Newton direction condition is equivalent to the Dai-Liao conjugacy condition. The strong Wolfe line search conditions are used, and the global convergence of the new method is proved. Numerical comparisons show that the present hybrid conjugate gradient algorithm is efficient.
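For reference, the two textbook parameters being convexly combined are

$\beta_k^{LS} = \dfrac{g_{k+1}^{T}(g_{k+1}-g_k)}{-d_k^{T} g_k}, \qquad \beta_k^{FR} = \dfrac{\|g_{k+1}\|^2}{\|g_k\|^2},$

so the hybrid uses $\beta_k(\theta_k) = (1-\theta_k)\,\beta_k^{LS} + \theta_k\,\beta_k^{FR}$ with $\theta_k \in [0,1]$, and the Dai-Liao condition mentioned above reads $d_{k+1}^{T} y_k = -t\, g_{k+1}^{T} s_k$ with $y_k = g_{k+1}-g_k$ and $s_k = x_{k+1}-x_k$. These are standard formulas; the paper's rule for choosing $\theta_k$ is not reproduced here.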

16.
A convex-combination conjugate gradient algorithm is proposed and applied to parameter estimation of ARIMA models. The new algorithm is constructed as a convex combination of a modified spectral conjugate gradient algorithm and the conjugate gradient algorithm, and has the following properties: 1) it satisfies the conjugacy condition; 2) it automatically satisfies sufficient descent. Global convergence of the new algorithm under the standard Wolfe line search is proved. Finally, numerical experiments show that by tuning the convex-combination parameter the new algorithm is faster and more effective, and a concrete example confirms the model's good fit.

17.
Iterative methods for a class of least-squares problems for linear matrix equations are studied. Using the relation between the objective function and the matrix trace, a matrix-form "gradient" descent iteration is constructed, generalizing the classical vector-form gradient descent, and a weak orthogonality between two matrices is introduced to characterize the iterative corrections. As an application of the algorithm, an iterative scheme for a machine-translation optimization problem is given.
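A minimal matrix-form gradient descent sketch for $\min_X \|AXB - C\|_F^2$, the kind of iteration described; the gradient $2A^{T}(AXB - C)B^{T}$ follows from trace identities, while the fixed step below is an illustrative safe choice rather than the paper's weak-orthogonality-based correction:

```python
import numpy as np

def matrix_gradient_descent(A, B, C, step=None, n_iter=2000):
    """Gradient descent for min_X ||A X B - C||_F^2.

    From d/dX tr((A X B - C)^T (A X B - C)) = 2 A^T (A X B - C) B^T,
    and a step of 1/L with L = 2 ||A||_2^2 ||B||_2^2, the Lipschitz
    constant of the gradient."""
    X = np.zeros((A.shape[1], B.shape[0]))
    if step is None:
        step = 1.0 / (2 * np.linalg.norm(A, 2)**2 * np.linalg.norm(B, 2)**2)
    for _ in range(n_iter):
        G = 2 * A.T @ (A @ X @ B - C) @ B.T
        X -= step * G
    return X

rng = np.random.default_rng(4)
A, B = rng.standard_normal((8, 5)), rng.standard_normal((4, 6))
C = rng.standard_normal((8, 6))
X = matrix_gradient_descent(A, B, C)
print(np.linalg.norm(A @ X @ B - C))   # least-squares residual
```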

18.
In this paper we report a sparse truncated Newton algorithm for large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method is used to update the variables with indices outside the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved and some numerical tests are given.

19.
Learning gradients is one approach for variable selection and feature covariation estimation when dealing with large data of many variables or coordinates. In a classification setting involving a convex loss function, a possible algorithm for gradient learning is implemented by solving convex quadratic programming optimization problems induced by regularization schemes in reproducing kernel Hilbert spaces. The complexity for such an algorithm might be very high when the number of variables or samples is huge. We introduce a gradient descent algorithm for gradient learning in classification. The implementation of this algorithm is simple and its convergence is elegantly studied. Explicit learning rates are presented in terms of the regularization parameter and the step size. Deep analysis for approximation by reproducing kernel Hilbert spaces under some mild conditions on the probability measure for sampling allows us to deal with a general class of convex loss functions.

20.
In this paper a modified gradient-based algorithm for solving Sylvester equations is presented. Different from the gradient-based method introduced by Ding and Chen [7] and the relaxed gradient-based algorithm proposed by Niu et al. [18], the information generated in the first half-iterative step is fully exploited and used to construct the approximate solution. Theoretical analysis shows that the new method converges under certain assumptions. Numerical results are given to verify the efficiency of the new method.
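A sketch of the plain gradient-based iteration that such methods start from (the half-iterate reuse of the paper is not shown); the step size `mu` is a conservative heuristic, not the paper's choice:

```python
import numpy as np

def gradient_sylvester(A, B, C, mu=None, n_iter=5000):
    """Gradient iteration for the Sylvester equation A X + X B = C.

    R is the residual, and A^T R + R B^T is the negative gradient of
    ||A X + X B - C||_F^2 / 2, so each step is plain gradient descent."""
    X = np.zeros((A.shape[0], B.shape[0]))
    if mu is None:  # conservative step from the spectral norms of A and B
        mu = 1.0 / (2 * (np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2))
    for _ in range(n_iter):
        R = C - A @ X - X @ B
        X = X + mu * (A.T @ R + R @ B.T)
    return X

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # diagonally shifted
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # so the equation is solvable
C = rng.standard_normal((5, 4))
X = gradient_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))
```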

