Similar Documents
 20 similar documents found
1.
1. Introduction. Newton-type methods are an important class of iterative numerical algorithms for solving variational inequalities, and the study of their local convergence properties has produced strong results (see [5] and others). In recent years, much progress has also been made on the global convergence of such algorithms, for example in the study of the local superlinear and even quadratic convergence of damped Newton methods (see [4, 6, 9, 11, 12, 14, 16] and others). However, research on quasi-Newton methods, which are more practical computationally, remains scarce. Based on the successive-approximation Newton-type method given by Qi Liqun et al. in [14], reference [18] established a quasi-Newton method for solving nonlinear complementarity problems and obtained global convergence of a Broyden-like algorithm. That method, however, has the following two defects: 1. the line search may fail to be carried out…

2.
Convergence of variable metric algorithms with a generalized Wolfe line search
This paper proposes a class of generalized Wolfe line-search models and combines it with the well-known BFGS method. For the resulting algorithm it is proved that, on convex functions, the algorithm is globally convergent with a superlinear convergence rate, which generalizes the results of reference [1].
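
As a concrete illustration of the kind of scheme analyzed here, the following is a minimal BFGS sketch driven by a Wolfe line search. SciPy's standard strong-Wolfe search stands in for the paper's generalized model; the fallback step and tolerances are illustrative assumptions, not taken from the paper.

```python
# Minimal BFGS-with-Wolfe-line-search sketch (illustrative only).
import numpy as np
from scipy.optimize import line_search

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    n = len(x0)
    H = np.eye(n)                  # inverse-Hessian approximation
    x, g = x0.astype(float), grad(x0)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                 # quasi-Newton direction
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # Wolfe step
        if alpha is None:          # line search failed; tiny fallback step
            alpha = 1e-4
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:             # curvature condition keeps H positive definite
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Usage on a convex quadratic with minimum at (1, 2).
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] - 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] - 2)])
print(bfgs(f, grad, np.array([5.0, 5.0])))
```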

3.
Convergence of the BFGS algorithm for nonconvex optimization problems
The BFGS algorithm is one of the most famous numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm with Wolfe line searches applied to nonconvex objective functions and gives a sufficient condition under which the algorithm converges.

4.
This paper studies whether the non-quasi-Newton family retains global convergence for unconstrained optimization problems when a nonmonotone line search is adopted. Under the condition that the objective function is uniformly convex, the non-quasi-Newton family is proved to be globally convergent.

5.
1. Introduction. The properties of variational inequalities and methods for their solution are an important topic in optimization. The variational inequality problem is: find a point \(x^* \in X\) such that
\(\langle F(x^*),\, x - x^*\rangle \ge 0\) for all \(x \in X\), (1.1)
where \(X\) is a nonempty closed convex set in \(R^n\), \(F\) is a mapping on \(R^n\), and \(\langle\cdot,\cdot\rangle\) denotes the inner product in \(R^n\). There are several approaches to solving problem (1.1) [1, 4, 5]; one of them is to transform (1.1) into some equivalent problem and then solve that. Masao Fukushima gave the following equivalent problem of (1.1):
\(\min_{x \in X} f(x)\), where \(f(x) = \max_{y \in X}\{\langle F(x),\, x - y\rangle - \tfrac{1}{2}\langle x - y,\, G(x - y)\rangle\}\), (1.2)
and \(G\) is a symmetric positive definite matrix. He proposed two convergent algorithms for solving (1.2), one with an exact line search and one with an Armijo search. This paper establishes the concept of a "D-function" and uses it to give (1.1)…
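
For concreteness, here is a small sketch of evaluating the regularized gap function \(f\) of (1.2) on a box-constrained affine VI, assuming \(G = \gamma I\) so that the inner maximization has the closed-form solution \(y = P_X(x - F(x)/\gamma)\). The toy data are illustrative, and the paper's D-function construction is not reproduced.

```python
# Regularized-gap-function sketch for a box-constrained VI, with G = gamma*I
# so that the G-norm projection reduces to a Euclidean clip.
import numpy as np

def reg_gap(F, x, lo, hi, gamma=1.0):
    """f(x) = max_{y in X} <F(x), x-y> - (gamma/2)||x-y||^2 for X a box."""
    y = np.clip(x - F(x) / gamma, lo, hi)    # closed-form inner maximizer
    d = x - y
    return F(x) @ d - 0.5 * gamma * (d @ d)  # >= 0 on X; == 0 iff x solves the VI

# Toy affine VI: F(x) = M x + q on the box [0, 2]^2.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-2.0, -1.0])
F = lambda x: M @ x + q
lo, hi = np.zeros(2), 2 * np.ones(2)
print(reg_gap(F, np.array([1.0, 1.0]), lo, hi))  # positive unless x solves the VI
```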

6.
The study of quasi-Newton methods, and of their global convergence, for minimizing general objective functions has become one of the most fundamental open problems in quasi-Newton theory. This paper studies the problem further: it proposes a new class of generalized quasi-Newton algorithms for unconstrained optimization and, in combination with the Goldstein line search, proves the global convergence of the algorithms for minimizing general nonconvex objective functions.

7.
A family of projected quasi-Newton algorithms with superlinear convergence
This paper combines gradient projection with quasi-Newton methods to give a family of algorithms, containing two groups of parameters, for solving general linearly constrained nonlinear programming problems. Under certain conditions, the global convergence of the family and the superlinear convergence rate of one of its subfamilies are proved, and special cases such as the projected DFP method and the projected BFGS method are presented.

8.
A general model of line searches for unconstrained optimization and the global convergence of the BFGS method
This paper presents a general form of acceptable step-size rules for the line searches used in unconstrained optimization algorithms; it subsumes most existing step-size rules as special cases. Its basic properties are studied, and finally the global convergence of the BFGS algorithm for unconstrained optimization combined with this general line-search model is proved.

9.
Global convergence of a generalized quasi-Newton method for nonconvex unconstrained optimization problems
Chen Lanping, Jiao Baocong. 《应用数学》 2005, 18(4): 573-579
This paper proposes a new class of generalized quasi-Newton methods for unconstrained optimization problems and, adopting a class of inexact line searches, proves the global convergence of the algorithm for minimizing general nonconvex objective functions.

10.
A class of generalized quasi-Newton algorithms with superlinear convergence
1. Introduction. Consider the unconstrained optimization problem \(\min f(x)\), where the objective function \(f(x)\) is twice continuously differentiable; write \(f_k = f(x_k)\). When the displacement is sufficiently small, the following approximate relations hold, and all of them hold exactly for quadratic functions. Consider the iteration \(x_{k+1} = x_k - \lambda_k B_k^{-1} g_k\), where \(B\) (an approximation of the Hessian \(G\)) is known and \(\lambda_k\) is a step size determined by some line search. Updating \(B\) produces \(B_+ = B + U\), where \(U\) is an \(n \times n\) matrix to be determined. If \(B_+\) is required to satisfy relation (1.6), i.e., the quasi-Newton equation, many famous quasi-Newton algorithms can be derived from it ([1]-[4]). If instead \(B_+\) is required to satisfy relation (1.7), the pseudo-Newton-\(\delta\) family of update formulas is obtained, which is no longer a member of the Huang family [6]. From the standpoint of information usage, (1.6) uses only the … and … information, while (1.7) uses only the … and … information. Generally speaking, more inf…
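
For reference, the standard quasi-Newton relations this introduction alludes to are as follows (well-known facts; the paper's specific relations (1.6)-(1.7) are not reconstructed here).

```latex
\begin{align*}
  s_k &= x_{k+1} - x_k, \qquad y_k = g_{k+1} - g_k,\\
  y_k &\approx \nabla^2 f(x_{k+1})\, s_k
      \quad\text{(exact when $f$ is quadratic)},\\
  B_{k+1} s_k &= y_k \qquad\text{(quasi-Newton / secant equation)}.
\end{align*}
```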

11.
Global convergence of the Broyden family with a class of inexact line searches
Liu Guanghui, Han Jiye. 《计算数学》 1996, 18(3): 233-240
Global Convergence of the Broyden's Family with a Class of Inexact Line Searches. Liu Guang-hui, Han Ji-ye (Institute of Applied Mathematics, Chinese Academy of Sciences)…

12.
Gao Yuelin, Jing Xia. 《计算数学》 2013, 35(1): 89-98
A branch-and-bound reduction method is proposed for solving a class of linear multiplicative programming problems, and its convergence is proved. In this method, the convex-envelope technique for the product of two variables is used to bound from below the products appearing in the objective and constraint functions; this determines a convex programming relaxation of the original problem, from which a lower bound on the global optimal value of the original problem and feasible solutions are obtained. To accelerate the convergence of the proposed algorithm, a hyperrectangle reduction strategy is employed. Numerical results show that the proposed algorithm is feasible.
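
The convex-envelope technique for a two-variable product is, in its standard form, the McCormick envelope; whether the paper uses exactly this variant is not stated in the abstract, but the textbook bounds for \(w = xy\) over the box \([x^l, x^u] \times [y^l, y^u]\) are:

```latex
\begin{align*}
  w &\ge x^{l} y + y^{l} x - x^{l} y^{l}, & w &\ge x^{u} y + y^{u} x - x^{u} y^{u},\\
  w &\le x^{u} y + y^{l} x - x^{u} y^{l}, & w &\le x^{l} y + y^{u} x - x^{l} y^{u}.
\end{align*}
```

Replacing each product by a new variable \(w\) subject to these four linear inequalities is what turns the original problem into the relaxed convex program mentioned above.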

13.
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables, even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied, for example, when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth \(\ell _1\) regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
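
As a concrete instance of the two-block case discussed above, here is a minimal ADMM sketch for LASSO, splitting via the constraint \(x = z\). The penalty \(\rho\), iteration count, and toy data are illustrative choices, not taken from the paper.

```python
# Two-block ADMM for LASSO: min 0.5||Ax-b||^2 + lam*||z||_1  s.t.  x = z.
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor once: (A^T A + rho I) is fixed across iterations.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)   # prox of the l1 term
        u = u + x - z                # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = [3, -2, 1.5, 2, -1]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(np.round(admm_lasso(A, b, lam=1.0), 2))[0][:10])
```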

14.
In this paper, we propose a parallel decomposition algorithm for solving a class of convex optimization problems, which is broad enough to contain ordinary convex programming problems with a strongly convex objective function. The algorithm is a variant of the trust region method applied to the Fenchel dual of the given problem. We prove global convergence of the algorithm and report some computational experience with the proposed algorithm on the Connection Machine Model CM-5.
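
The abstract does not spell out the problem format; the textbook Fenchel duality that such an approach rests on is:

```latex
\[
  \inf_x \{\, f(x) + g(Ax) \,\}
  \;=\; \sup_y \{\, -f^{*}(-A^{\top} y) - g^{*}(y) \,\},
  \qquad f^{*}(u) = \sup_x \{\, \langle u, x\rangle - f(x) \,\},
\]
```

valid under standard constraint qualifications for convex \(f, g\). Strong convexity of the primal objective makes the dual smooth, which is what applying a trust-region method on the dual exploits.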

15.
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated in the method to reduce the variance of the stochastic gradient. The main feature of the IPSG algorithm is that it allows solving the proximal subproblems inexactly while still keeping global convergence with desirable complexity bounds. Different subproblem stopping criteria are proposed. Global convergence and component gradient complexity bounds are derived for both cases, when the objective function is strongly convex and when it is just generally convex. Preliminary numerical experiments show the overall efficiency of the IPSG algorithm.
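
A minimal sketch of a proximal stochastic gradient loop with SVRG-style variance reduction follows, with the proximal subproblem solved exactly via soft-thresholding; the paper's inexact-subproblem machinery and stopping criteria are omitted, and all names and parameter values are illustrative.

```python
# Prox-SVRG sketch for min (1/n) sum_i f_i(x) + lam*||x||_1 (exact prox here).
import numpy as np

def prox_svrg(grad_i, n, x0, lam, step, epochs=30, m=100):
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x_ref = x0.copy()
    for _ in range(epochs):
        # Full gradient at the reference point, recomputed once per epoch.
        full = np.mean([grad_i(x_ref, i) for i in range(n)], axis=0)
        x = x_ref.copy()
        for _ in range(m):
            i = np.random.randint(n)
            # Variance-reduced gradient estimate.
            v = grad_i(x, i) - grad_i(x_ref, i) + full
            x = soft(x - step * v, step * lam)   # proximal step (exact)
        x_ref = x
    return x_ref

# Usage on least squares: f_i(x) = 0.5*(a_i^T x - b_i)^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20)); b = A @ rng.standard_normal(20)
gi = lambda x, i: A[i] * (A[i] @ x - b[i])
print(prox_svrg(gi, 200, np.zeros(20), lam=0.1, step=0.01)[:5])
```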

16.
A class of modified BFGS algorithms and their convergence analysis
For unconstrained optimization problems, based on a local quadratic model approximation of the objective function, this paper proposes a class of modified BFGS algorithms, called MBFGS algorithms. The formula for updating \(B_k\) contains a parameter \(\theta \in [0,1]\); when \(\theta = 1\) the classical BFGS formula is obtained, while for \(\theta \in [0,1)\) the resulting formula no longer belongs to the quasi-Newton class. Under the assumption that the objective function is uniformly convex, the global convergence and the local superlinear convergence of the proposed algorithms are proved.
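
The abstract does not reproduce the MBFGS update formula itself; for reference, the classical BFGS update that the case \(\theta = 1\) recovers is:

```latex
\[
  B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
  \;+\; \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
  \qquad s_k = x_{k+1}-x_k,\quad y_k = g_{k+1}-g_k.
\]
```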

17.
In this paper, we analyze the global convergence of the least-change secant method proposed by Dennis and Wolkowicz, when applied to convex objective functions. One of the most distinguished features of this method is that the Dennis-Wolkowicz update does not necessarily belong to the Broyden convex family and can be close to the DFP update, yet it still has the self-correcting property. We prove that, for convex objective functions, this method with the commonly used Wolfe line search is globally convergent. We also provide some numerical results.

18.
Recently, Fukushima and Qi proposed a proximal Newton method for minimizing a nonsmooth convex function. An alternative global convergence proof for that method is presented in this paper. Global convergence is established without any additional assumption on the objective function. We also show that the infimum of a convex function is always equal to the infimum of its Moreau-Yosida regularization.
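
The regularization referred to in the last sentence is standard; as a reminder (a well-known definition, not the paper's proof):

```latex
\[
  F_{\lambda}(x) \;=\; \min_{z}\Big\{ f(z) + \tfrac{1}{2\lambda}\,\|z-x\|^2 \Big\},
  \qquad \inf_x F_{\lambda}(x) \;=\; \inf_x f(x),
\]
```

since taking \(z = x\) gives \(F_{\lambda}(x) \le f(x)\) for every \(x\), while \(F_{\lambda}(x) \ge \inf f\) by definition.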

19.
This paper is concerned with the quadratic and superlinear convergence of structured quasi-Newton methods for solving nonlinear least squares problems. These methods make use of a special structure of the Hessian matrix of the objective function. Recently, Huschens proposed a new kind of structured quasi-Newton method, dealt with the convex class of the structured Broyden family, and showed its quadratic and superlinear convergence properties for zero and nonzero residual problems, respectively. In this paper, we extend the results by Huschens to a wider class of the structured Broyden family. We prove local convergence properties of the method in a way different from the proof by Huschens.
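
The "special structure" in question is the standard splitting of the least-squares Hessian (a well-known fact, not a result of the paper): for \(f(x) = \tfrac{1}{2}\|r(x)\|^2\) with Jacobian \(J(x)\),

```latex
\[
  \nabla^2 f(x) \;=\; J(x)^{\top} J(x) \;+\; \sum_{i} r_i(x)\, \nabla^2 r_i(x),
\]
```

where the first term is available exactly from \(J\), and structured quasi-Newton updates approximate only the second (residual) term, which vanishes at a zero-residual solution; this is what underlies the quadratic versus superlinear distinction between zero and nonzero residual problems.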

20.
We study the convergence rate of the proximal-gradient homotopy algorithm applied to norm-regularized linear least squares problems, for a general class of norms. The homotopy algorithm reduces the regularization parameter in a series of steps and uses a proximal-gradient algorithm to solve the problem at each step. The proximal-gradient algorithm has a linear rate of convergence provided that the objective function is strongly convex and the gradient of the smooth component of the objective function is Lipschitz continuous. In many applications, the objective function in this type of problem is not strongly convex, especially when the problem is high-dimensional and the chosen regularizers induce sparsity or low-dimensionality. We show that if the linear sampling matrix satisfies certain assumptions and the regularizing norm is decomposable, the proximal-gradient homotopy algorithm converges at a linear rate even though the objective function is not strongly convex. Our result generalizes results on the linear convergence of the homotopy algorithm for \(\ell _1\)-regularized least squares problems. Numerical experiments are presented that support the theoretical convergence rate analysis.
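
A minimal sketch of the homotopy strategy for the \(\ell_1\) special case: shrink the regularization parameter geometrically and warm-start a proximal-gradient (ISTA) solve at each stage. The stage schedule, iteration counts, and toy data below are illustrative assumptions, not the paper's algorithm for general decomposable norms.

```python
# Proximal-gradient homotopy sketch for min 0.5||Ax-b||^2 + lam*||x||_1.
import numpy as np

def ista(A, b, lam, x, step, iters=100):
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        x = soft(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

def pg_homotopy(A, b, lam_target, eta=0.5):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    lam = np.max(np.abs(A.T @ b))            # above this value, x = 0 is optimal
    x = np.zeros(A.shape[1])
    while lam > lam_target:
        lam = max(eta * lam, lam_target)     # reduce the regularization
        x = ista(A, b, lam, x, step)         # warm start from the previous stage
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 120))
x0 = np.zeros(120); x0[[3, 7, 11]] = [2.0, -3.0, 1.5]
b = A @ x0
print(np.nonzero(np.abs(pg_homotopy(A, b, 0.05)) > 0.1)[0])
```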

