Similar Literature
10 similar documents found.
1.
Global Convergence of Conjugate Gradient Methods without Line Search
Global convergence results are derived for well-known conjugate gradient methods in which the line search step is replaced by a step whose length is determined by a formula. The results cover the following cases: (1) the Fletcher–Reeves method, the Hestenes–Stiefel method, and the Dai–Yuan method applied to a strongly convex $LC^1$ objective function; (2) the Polak–Ribière method and the Conjugate Descent method applied to a general, not necessarily convex, $LC^1$ objective function.
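
As a rough sketch of the idea (the step formula below is an illustrative stand-in, not the one analyzed in the paper), here is Fletcher–Reeves CG on a strongly convex quadratic where the line search is replaced by the closed-form step $\alpha_k = -\delta\, g_k^{\top} d_k / (L \lVert d_k\rVert^2)$, with $L$ a Lipschitz constant of the gradient:

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 x'Qx - b'x
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: Q @ x - b
L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient
delta = 0.5                       # damping factor in the step formula

x = np.zeros(2)
g = grad(x)
d = -g
for k in range(500):
    # Step length from a formula instead of a line search
    alpha = -delta * (g @ d) / (L * (d @ d))
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new

print("iterations:", k, " solution:", x, " residual:", np.linalg.norm(Q @ x - b))
```

For this quadratic, any $\delta \in (0, 2)$ makes every step a strict descent step whenever $g_k^{\top} d_k \neq 0$, which is what allows the line search to be dropped.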

2.
We consider the linearly constrained separable convex minimization problem whose objective function is the sum of $m$ individual convex functions with no coupling variables. While augmented Lagrangian-based decomposition methods for such problems are well developed in the literature, they typically require an additional correction step to guarantee convergence. This note shows that a straightforward Jacobian decomposition of the augmented Lagrangian method is globally convergent if the involved functions are further assumed to be strongly convex.
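
A minimal sketch of the scheme on a two-block toy problem with strongly convex quadratic blocks; the matrices, penalty $\rho$, and stopping rule are illustrative, and the note's precise conditions on $\rho$ are not reproduced here:

```python
import numpy as np

# min 0.5||x1 - c1||^2 + 0.5||x2 - c2||^2  s.t.  A1 x1 + A2 x2 = b
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])
A2 = np.array([[1.0, 1.0], [0.0, 1.0]])
bvec = np.array([1.0, 2.0])
c1, c2 = np.array([1.0, -1.0]), np.array([0.5, 0.5])

rho = 0.1                                 # penalty, kept conservatively small
x1, x2, lam = np.zeros(2), np.zeros(2), np.zeros(2)
I = np.eye(2)

for k in range(5000):
    # Jacobi step: both blocks minimize the augmented Lagrangian in
    # parallel, each using the OTHER block's previous iterate.
    rhs1 = c1 - A1.T @ lam - rho * A1.T @ (A2 @ x2 - bvec)
    rhs2 = c2 - A2.T @ lam - rho * A2.T @ (A1 @ x1 - bvec)
    x1_new = np.linalg.solve(I + rho * A1.T @ A1, rhs1)
    x2_new = np.linalg.solve(I + rho * A2.T @ A2, rhs2)
    x1, x2 = x1_new, x2_new
    r = A1 @ x1 + A2 @ x2 - bvec          # primal residual
    lam = lam + rho * r                   # dual (multiplier) update
    if np.linalg.norm(r) < 1e-8:
        break

print("iterations:", k, " feasibility:", np.linalg.norm(r))
```

The point of the note is that strong convexity of the blocks removes the need for the correction step that plain Jacobi splitting otherwise requires.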

3.
Global Convergence of the Hager–Zhang Conjugate Gradient Method under Armijo Line Search
Hager and Zhang [4] proposed a new nonlinear conjugate gradient method (the HZ method) and proved its global convergence for strongly convex problems under the Wolfe and Goldstein line searches. Whether the HZ method converges globally for nonconvex problems under the standard Armijo line search, however, remains open. This paper proposes a conservative HZ conjugate gradient method and proves its global convergence for nonconvex optimization problems under the Armijo line search. Numerical results are also reported to test the effectiveness of the method.
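
For orientation, here is a sketch of the plain Hager–Zhang direction combined with Armijo backtracking on the Rosenbrock function; this is the standard HZ update with a simple restart safeguard, not the conservative variant the paper constructs:

```python
import numpy as np

def f(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x = np.array([-1.2, 1.0])
g = grad(x)
d = -g
for k in range(5000):
    # Armijo backtracking: f(x + a d) <= f(x) + sigma * a * g'd
    a, sigma = 1.0, 1e-4
    while f(x + a * d) > f(x) + sigma * a * (g @ d):
        a *= 0.5
        if a < 1e-16:
            break
    x = x + a * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-8:
        break
    y = g_new - g
    dy = d @ y
    if abs(dy) > 1e-12:
        # Hager-Zhang coefficient
        beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
    else:
        beta = 0.0
    d = -g_new + beta * d
    if g_new @ d > -1e-12 * (g_new @ g_new):
        d = -g_new                # restart safeguard: force a descent direction
    g = g_new

print("iterations:", k, " x:", x, " f:", f(x))
```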

4.
This paper proposes a feedback neural network model for solving convex nonlinear programming (CNLP) problems. Under the condition that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex, the proposed neural network is proved to be stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated on several examples.
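
Such models are typically simulated as an ODE. The sketch below integrates one classical projection-type network, $\dot{x} = P_\Omega(x - \nabla f(x)) - x$, with forward Euler on a box-constrained strictly convex quadratic; it is a representative instance of this model class, not necessarily the paper's exact network:

```python
import numpy as np

# Strictly convex objective 0.5*||x - c||^2 over the box Omega = [0, 1]^2
c = np.array([1.5, -0.5])
f_grad = lambda x: x - c
proj = lambda x: np.clip(x, 0.0, 1.0)      # projection onto Omega

x = np.array([0.2, 0.8])                   # arbitrary initial state
h = 0.05                                   # Euler step size
for k in range(2000):
    dx = proj(x - f_grad(x)) - x           # network dynamics
    x = x + h * dx
    if np.linalg.norm(dx) < 1e-12:
        break

print("equilibrium:", x)   # expected: the constrained minimizer (1.0, 0.0)
```

An equilibrium of these dynamics satisfies $x = P_\Omega(x - \nabla f(x))$, which is exactly the optimality condition for the constrained problem.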

5.
Optimization, 2012, 61(6): 889–905
A family of supermemory gradient projection methods for solving the convex constrained optimization problem is presented in this article. It is proven to have stronger convergence properties than the traditional gradient projection method. In particular, it is shown to be globally convergent if the objective function is convex.
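
To fix ideas only (the construction below is an illustrative one-term-memory variant, not the article's family), here is projected gradient descent with a single memory term whose weight is capped so the search direction remains a descent direction, plus Armijo backtracking along the projection arc:

```python
import numpy as np

# Minimize f(x) = 0.5*||x - c||^2 over the box [0, 1]^3
c = np.array([2.0, -1.0, 0.5])
f = lambda x: 0.5 * np.sum((x - c)**2)
grad = lambda x: x - c
proj = lambda x: np.clip(x, 0.0, 1.0)

x = np.full(3, 0.5)
d_prev = np.zeros(3)
for k in range(200):
    g = grad(x)
    # Memory term: reuse the previous direction, its weight capped so
    # that g'd <= -0.5*||g||^2 (a sufficient-descent condition).
    beta = 0.5 * (g @ g) / max(abs(g @ d_prev), 1e-12)
    d = -g + min(beta, 1.0) * d_prev
    # Armijo backtracking along the projection arc
    a = 1.0
    while True:
        x_new = proj(x + a * d)
        if f(x_new) <= f(x) + 1e-4 * (g @ (x_new - x)) or a < 1e-12:
            break
        a *= 0.5
    if np.allclose(x_new, x):
        break                     # stationary for the constrained problem
    d_prev, x = d, x_new

print("solution:", x)             # expected: clip(c) = [1.0, 0.0, 0.5]
```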

6.
This paper investigates a new method for convex programming problems with a general convex objective function and general convex set constraints; it extends and improves a new algorithm for linear programming. The basic idea of the algorithm is to seek the optimum within the feasible region by advancing successively from one constructed cutting plane to the next. For the case of a quadratic objective with general convex set constraints, and the case of a quadratic objective with linear constraints, simpler algorithms are given.
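
As a loose classical analogue of the advancing-cutting-plane idea, the sketch below runs Kelley's cutting-plane method for a convex quadratic over a box, solving a small LP over the accumulated cuts at every step; this is textbook Kelley, not the article's algorithm:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize convex f(x) = ||x - ctr||^2 over the box [0, 1]^2
ctr = np.array([0.8, 1.5])
f = lambda x: np.sum((x - ctr)**2)
grad = lambda x: 2.0 * (x - ctr)

cuts_A, cuts_b = [], []           # cuts: g_i'x - t <= g_i'x_i - f(x_i)
x = np.array([0.0, 0.0])
for k in range(500):
    g = grad(x)
    cuts_A.append([g[0], g[1], -1.0])
    cuts_b.append(g @ x - f(x))
    # LP over (x, t): minimize t subject to all cuts and the box
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=np.array(cuts_A),
                  b_ub=np.array(cuts_b),
                  bounds=[(0.0, 1.0), (0.0, 1.0), (-1e6, 1e6)])
    x, lower = res.x[:2], res.x[2]
    if f(x) - lower < 1e-6:       # gap between upper and lower bound
        break

print("solution:", x, " f:", f(x))   # expected: approx [0.8, 1.0]
```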

7.
A trust-region algorithm is proposed for a class of composite nonsmooth optimization problems with general convex constraints. Unlike the usual trust-region methods, it does not force the objective function to decrease strictly monotonically at every iteration; instead it adopts a nonmonotone strategy. Since smooth functions, piecewise smooth functions, convex functions, and their composites are all locally Lipschitz, the proposed method generalizes existing methods for problems of this type, including nonlinear optimization with bound constraints, thereby widening the scope of trust-region methods. Under certain conditions the algorithm is also globally convergent. Numerical results indicate that, computationally, the nonmonotone strategy is very effective for highly nonlinear optimization problems.
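
To isolate just the nonmonotone acceptance mechanism (shown here on a smooth test function, whereas the paper treats composite nonsmooth problems with general convex constraints), the trial point below is judged against the maximum of the last few objective values instead of the current one:

```python
import numpy as np

def f(x):   # Rosenbrock: smooth and highly nonlinear
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x, radius, M = np.array([-1.2, 1.0]), 1.0, 5
history = [f(x)]                       # recent objective values
for k in range(20000):
    g = grad(x)
    gn = np.linalg.norm(g)
    if gn < 1e-8:
        break
    # Cauchy step for the model m(s) = f + g's + 0.5*||s||^2
    tau = min(radius, gn)
    s = -(tau / gn) * g
    pred = tau * gn - 0.5 * tau**2     # model decrease (always positive)
    # Nonmonotone ratio: compare against the worst recent value
    f_trial = f(x + s)
    rho = (max(history) - f_trial) / pred
    if rho > 0.1:                      # may accept even if f increased vs. f(x)
        x = x + s
        history = (history + [f_trial])[-M:]
    radius = min(2.0 * radius, 1e3) if rho > 0.75 \
        else (radius if rho > 0.1 else 0.5 * radius)

print("iterations:", k, " x:", x, " f:", f(x))
```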

8.
This paper describes a new technique to find the minimum norm solution of a linear program. The main idea is to reformulate this problem as an unconstrained minimization problem with a convex and smooth objective function. The minimization of this objective function can be carried out by a Newton-type method which is shown to be globally convergent. Furthermore, under certain assumptions, this Newton-type method converges in a finite number of iterations to the minimum norm solution of the underlying linear program.
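
One standard route to such a reformulation (in the spirit of the abstract, though not necessarily the paper's exact construction) is Tikhonov regularization: for small $\varepsilon$, minimize $c^{\top}x + (\varepsilon/2)\lVert x\rVert^2$ over the LP's feasible set; its dual is the unconstrained, smooth, concave problem $\max_y\, b^{\top}y - \tfrac{1}{2\varepsilon}\lVert (A^{\top}y - c)_+\rVert^2$. The sketch below solves the dual with BFGS as a stand-in for the paper's Newton-type method:

```python
import numpy as np
from scipy.optimize import minimize

# LP: min c'x  s.t.  Ax = b, x >= 0.  Here every feasible point is
# optimal, so the solution set is a segment; its min-norm point is (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])
eps = 1e-2   # regularization; assumed small enough to hit the min-norm point

def neg_dual(y):
    # -g(y), where g(y) = b'y - (1/(2 eps)) ||(A'y - c)_+||^2 is smooth, concave
    r = np.maximum(A.T @ y - c, 0.0)
    return -(b @ y - (r @ r) / (2.0 * eps))

def neg_dual_grad(y):
    r = np.maximum(A.T @ y - c, 0.0)
    return -(b - A @ r / eps)

res = minimize(neg_dual, x0=np.zeros(1), jac=neg_dual_grad, method="BFGS")
x_star = np.maximum(A.T @ res.x - c, 0.0) / eps    # recover the primal point
print("minimum norm solution:", x_star)            # expected [0.5, 0.5]
```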

9.
An inverse problem of determining a coefficient in an elliptic equation is considered. The problem is ill-posed in the sense of Hadamard, and Tikhonov's regularization method is used to solve it in a stable way. This method requires globally solving nonconvex optimization problems, whose solution methods have received little study in the inverse problems community. It is proved that the objective function of the corresponding optimization problem can be represented as the difference of two convex functions (d.c. functions), so that the difference-of-convex-functions algorithm (DCA), combined with a branch-and-bound technique, can be used to solve it globally. Numerical examples are presented which show the efficiency of the method.
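
The DCA iteration itself is short: write $f = g - h$ with $g, h$ convex and iterate $x_{k+1} \in \arg\min_x\, g(x) - \langle \nabla h(x_k), x\rangle$. Below is a one-dimensional toy (not the paper's coefficient inverse problem) where the subproblem has a closed form:

```python
import numpy as np

# Nonconvex f(t) = 0.25 t^4 - 0.5 t^2, written as a difference of
# convex functions: g(t) = 0.25 t^4, h(t) = 0.5 t^2.
# DCA step: minimize g(t) - h'(t_k) t  =>  t^3 = t_k  =>  t = cbrt(t_k).
t = 3.0
for k in range(60):
    t_new = np.cbrt(t)
    if abs(t_new - t) < 1e-12:
        break
    t = t_new

print("DCA stationary point:", t)   # converges to 1.0, a local minimum of f
```

DCA alone guarantees only a stationary point, which is why the paper pairs it with branch-and-bound to certify a global solution.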

10.
We propose a trust-region type method for a class of nonsmooth nonconvex optimization problems in which the objective function is the sum of a (possibly nonconvex) smooth function and a (possibly nonsmooth) convex function. The model function of our trust-region subproblem is always quadratic, and the linear term of the model is generated using abstract descent directions. Therefore, the trust-region subproblems can be easily constructed as well as efficiently solved by cheap and standard methods. When the accuracy of the model function at the solution of the subproblem is not sufficient, we add a safeguard on the stepsizes to improve the accuracy. For a class of functions that can be "truncated", an additional truncation step is defined and a stepsize modification strategy is designed. The overall scheme converges globally, and we establish fast local convergence under suitable assumptions. In particular, using a connection with a smooth Riemannian trust-region method, we prove local quadratic convergence for partly smooth functions under a strict complementarity condition. Preliminary numerical results on a family of $\ell_1$-optimization problems are reported and demonstrate the efficiency of our approach.
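
For the composite structure smooth-plus-convex, the simplest "quadratic model plus stepsize safeguard" scheme is proximal gradient with backtracking: the model $g^{\top}d + \tfrac{1}{2\alpha}\lVert d\rVert^2 + \lambda\lVert x+d\rVert_1$ has a closed-form minimizer (soft-thresholding), and $\alpha$ is shrunk whenever the model fails to majorize the smooth part. This is a heavily simplified relative of the paper's trust-region method, shown only to fix ideas:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5

smooth = lambda x: 0.5 * np.sum((A @ x - b)**2)    # smooth part (convex here)
grad = lambda x: A.T @ (A @ x - b)
F = lambda x: smooth(x) + lam * np.sum(np.abs(x))  # composite objective

def soft(v, t):   # prox of t*||.||_1: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, alpha = np.zeros(10), 1.0
for k in range(500):
    g = grad(x)
    while True:
        x_trial = soft(x - alpha * g, alpha * lam)  # minimizer of the model
        d = x_trial - x
        # Safeguard: shrink alpha until the smooth part is majorized
        if smooth(x_trial) <= smooth(x) + g @ d + (d @ d) / (2.0 * alpha):
            break
        alpha *= 0.5
    if np.linalg.norm(d) < 1e-10:
        break
    x = x_trial

print("iterations:", k, " F:", F(x), " nonzeros:", np.count_nonzero(x))
```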
