Similar Documents (20 results)
1.
Convergence of the BFGS algorithm for nonconvex optimization problems
The BFGS algorithm is one of the best-known numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm under a Wolfe line search applied to a nonconvex objective function, and gives a sufficient condition under which the algorithm converges.
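For readers unfamiliar with the setting, the following Python sketch shows a generic BFGS iteration under a Wolfe line search. It is a minimal illustration of the standard method, not the algorithm analyzed in the paper; the use of scipy.optimize.line_search (which enforces the strong Wolfe conditions) and all parameter values are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import line_search

def bfgs(f, grad, x0, tol=1e-6, max_iter=200):
    """Minimal BFGS with a Wolfe line search (illustrative sketch)."""
    n = x0.size
    H = np.eye(n)                  # inverse Hessian approximation
    x, g = x0, grad(x0)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # quasi-Newton search direction
        # line_search enforces the (strong) Wolfe conditions
        alpha = line_search(f, grad, x, p)[0]
        if alpha is None:          # line search failed; crude fallback
            alpha = 1e-3
        s = alpha * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-10:             # curvature safeguard keeps H SPD
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

For convex objectives the Wolfe conditions guarantee the curvature product sy stays positive; for nonconvex f the safeguard above is what keeps H positive definite, which is exactly where the convergence question discussed in the abstract becomes delicate.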

2.
This paper describes a class of optimization methods that interlace iterations of the limited memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited memory matrix, and plays the dual role of preconditioning the inner conjugate gradient iteration in the HFN method and of providing an initial matrix for L-BFGS iterations. The lengths of the L-BFGS and HFN cycles are adjusted dynamically during the course of the optimization. Numerical experiments indicate that the new algorithms are effective and insensitive to the choice of parameters.

3.

A displacement aggregation strategy is proposed for the curvature pairs stored in a limited-memory BFGS (a.k.a. L-BFGS) method such that the resulting (inverse) Hessian approximations are equal to those that would be derived from a full-memory BFGS method. This means that, if a sufficiently large number of pairs are stored, then an optimization algorithm employing the limited-memory method can achieve the same theoretical convergence properties as when full-memory (inverse) Hessian approximations are stored and employed, such as a local superlinear rate of convergence under assumptions that are common for attaining such guarantees. To the best of our knowledge, this is the first work in which a local superlinear convergence rate guarantee is offered by a quasi-Newton scheme that does not either store all curvature pairs throughout the entire run of the optimization algorithm or store an explicit (inverse) Hessian approximation. Numerical results are presented to show that displacement aggregation within an adaptive L-BFGS scheme can lead to better performance than standard L-BFGS.


4.
This paper considers simple modifications of the limited memory BFGS (L-BFGS) method for large scale optimization. It describes algorithms that re-use a given set of stored difference vectors in alternative ways. The proposed algorithms resemble the L-BFGS method, except that the initial Hessian approximation is defined implicitly, like the L-BFGS Hessian, in terms of some stored vectors, rather than as the usual multiple of the unit matrix. Numerical experiments show that the new algorithms yield a worthwhile improvement over the L-BFGS method.
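The modification described here is easiest to see against the standard two-loop recursion, in which the initial matrix H0 enters through a single matrix-vector product only. The sketch below (a hypothetical rendering, with h0_apply as an illustrative name) exposes that slot; the paper's algorithms amount to defining h0_apply implicitly from stored difference vectors instead of a scaled identity.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list, h0_apply):
    """Two-loop recursion: returns -H @ g, where H is the L-BFGS inverse
    Hessian built from the stored pairs (s_i, y_i), oldest first, on top
    of an initial matrix supplied implicitly via h0_apply(q) = H0 @ q."""
    q = g.copy()
    alphas = []
    rhos = [1.0 / (s @ y) for s, y in zip(s_list, y_list)]
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    r = h0_apply(q)                    # the only place H0 appears
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```

The usual choice is h0_apply(q) = gamma * q with a scalar gamma; the algorithms of this paper would instead define h0_apply itself in terms of another set of stored difference vectors.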

5.
In this paper, we propose a new method for image restoration problems degraded by impulsive noise, with a nonconvex data-fitting term and a nonconvex regularizer. The proposed method possesses the advantages of nonconvex data fitting and a nonconvex regularizer simultaneously, namely, robustness to impulsive noise and efficiency in restoring images with neat edges. Further, we propose an efficient algorithm that solves this “Nonconvex+Nonconvex” structured problem by alternating direction minimization, and prove that the algorithm is globally convergent when the regularization parameter is known. Since the regularization parameter is in general unavailable, we combine the algorithm with a continuation technique and a modified Morozov discrepancy principle to obtain an improved algorithm in which a suitable regularization parameter is chosen automatically. The experiments reveal the superior performance of the proposed algorithm in comparison with some existing methods.

6.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line searches are recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
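A common max-type nonmonotone line search, in the spirit of Grippo, Lampariello and Lucidi, accepts a step as soon as the objective falls below the largest of the last M function values rather than below the current one. A minimal sketch with illustrative parameter names follows; the two forms studied in the paper may differ from this one.

```python
from collections import deque

def nonmonotone_backtracking(f, x, d, g, f_hist, delta=1e-4, beta=0.5):
    """Backtrack until f(x + a*d) <= max(recent f values) + delta*a*g'd.
    f_hist is a deque of the last M objective values (most recent last);
    d is assumed to be a descent direction, i.e. g @ d < 0."""
    f_ref = max(f_hist)            # nonmonotone reference value
    gtd = g @ d                    # directional derivative
    a = 1.0
    while f(x + a * d) > f_ref + delta * a * gtd:
        a *= beta                  # shrink the step
    return a

# usage inside an outer loop, keeping M = 10 past values:
# f_hist = deque([f(x0)], maxlen=10)
# a = nonmonotone_backtracking(f, x, d, g, f_hist)
# x = x + a * d; f_hist.append(f(x))
```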

7.
Two-phase image segmentation is the fundamental task of partitioning an image into foreground and background. In this paper, two types of nonconvex and nonsmooth regularization models are proposed for basic two-phase segmentation. They extend convex regularization of the characteristic function on the image domain to the nonconvex case, which better recovers piecewise-constant regions with neat boundaries. By analyzing the proposed non-Lipschitz model, we combine the proximal alternating minimization framework with support-shrinkage and linearization strategies to design our algorithm. This leads to two alternating strongly convex subproblems which can be easily solved. Similarly, we present an algorithm without the support-shrinkage operation for the nonconvex Lipschitz case. Using the Kurdyka-Łojasiewicz property of the objective function, we prove that any limit point of the generated sequence is a critical point of the original nonconvex nonsmooth problem. Numerical experiments and comparisons illustrate the effectiveness of our method in two-phase image segmentation.

8.
We propose an implementable BFGS method for solving a nonsmooth convex optimization problem by converting the original objective function into a once continuously differentiable function by way of the Moreau–Yosida regularization. The proposed method uses approximate function and gradient values of the Moreau–Yosida regularization instead of the corresponding exact values. We prove the global convergence of the proposed method under the assumption of strong convexity of the objective function.
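Recall that the Moreau–Yosida regularization is F_λ(x) = min_z { f(z) + ||z − x||²/(2λ) }, with gradient ∇F_λ(x) = (x − p(x))/λ at the proximal point p(x), so approximate values of both follow from an approximate proximal point. The sketch below is only a schematic reading of this idea; the inner solver and tolerance are placeholders, not the paper's scheme.

```python
import numpy as np
from scipy.optimize import minimize

def moreau_yosida(f, x, lam=1.0, tol=1e-8):
    """Approximate value and gradient of the Moreau-Yosida regularization
    of a convex f at x, via an inexact proximal point computation."""
    obj = lambda z: f(z) + np.dot(z - x, z - x) / (2.0 * lam)
    # inner solve: any routine returning an approximate proximal point works;
    # a derivative-free method is used here since f may be nonsmooth
    p = minimize(obj, x, method="Nelder-Mead", tol=tol).x
    F_val = obj(p)                 # approximate F_lambda(x)
    F_grad = (x - p) / lam         # exact formula at the exact prox point
    return F_val, F_grad
```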

9.
On the limited memory BFGS method for large scale optimization
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems. This work was supported by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under contract DE-FG02-87ER25047, and by National Science Foundation Grant No. DCR-86-02071.
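The “simple scaling” referred to is the now-standard choice of initial matrix H0 = γI with γ = sᵀy / yᵀy, recomputed from the most recent pair at every iteration. In code (illustrative lines that plug into the two-loop sketch shown earlier):

```python
def lbfgs_initial_scaling(s, y):
    """Liu-Nocedal scaling: H0 = gamma * I, where gamma = s'y / y'y
    estimates the size of the inverse Hessian along the latest step."""
    gamma = (s @ y) / (y @ y)
    return lambda q: gamma * q   # usable as h0_apply in the two-loop sketch
```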

10.
The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, and (iii) an update formula for the proximal parameter. In this work we focus on (ii), when smoothness is present, so that Newton-like methods can be used for (i): we aim at giving adequate stopping rules to reach overall efficiency of the method. Roughly speaking, the usual rules consist in stopping the inner iterations when the current iterate is close to the proximal point. By contrast, we use the standard paradigm of numerical optimization: the basis for our stopping test is a “sufficient” decrease of the objective function, namely a fraction of the ideal decrease. We establish convergence of the algorithm thus obtained and illustrate it on some ill-conditioned problems. The experiments show that combining the proposed inexact proximal scheme with a standard smooth optimization algorithm improves the numerical behaviour of the latter on those ill-conditioned problems.
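The flavor of such a test can be sketched in a few lines: the inner Newton-like iterations on the prox subproblem stop once the achieved decrease reaches a fixed fraction of an estimate of the ideal decrease. Everything below (names, and the way the ideal decrease would be estimated) is a hypothetical reading of the abstract, not the paper's actual rule.

```python
def sufficient_decrease_stop(f_xk, phi_z, ideal_decrease_est, m=0.5):
    """Stopping test for the inner solver of the prox subproblem
    phi(z) = f(z) + ||z - x_k||^2 / (2*lam): accept the inexact proximal
    point z once the decrease f(x_k) - phi(z) reaches a fraction m of an
    estimate of the ideal decrease f(x_k) - phi(exact prox point)."""
    return f_xk - phi_z >= m * ideal_decrease_est
```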

11.
The forward–backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method on a continuously differentiable function which we call the forward–backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization and apply them to nonsmooth (possibly constrained) problems. Since the FBE can be computed simply by evaluating forward–backward steps, the resulting methods rely on a black-box oracle similar to that of FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points. Moreover, when quasi-Newton directions are used, the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that the limited-memory variant L-BFGS is well suited for large-scale problems, greatly outperforming FBS and its accelerated version in practice, as well as ADMM and other problem-specific solvers. The analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem to the proposed algorithmic scheme.
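To fix ideas: for minimizing f + g, the forward–backward step is x̄ = prox_{γg}(x − γ∇f(x)), and the FBE at x can be evaluated from the same quantities as φ_γ(x) = f(x) + ∇f(x)ᵀ(x̄ − x) + ||x̄ − x||²/(2γ) + g(x̄). Here is a sketch for the ℓ1-regularized case, where the prox is soft-thresholding; the concrete choice of g is an assumption for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """prox of t*||.||_1: componentwise shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fb_step_and_envelope(x, f, grad_f, lam, gamma):
    """One forward-backward step for f(x) + lam*||x||_1, plus the
    forward-backward envelope (FBE) value at x."""
    gf = grad_f(x)
    x_bar = soft_threshold(x - gamma * gf, gamma * lam)   # FBS step
    d = x_bar - x
    fbe = (f(x) + gf @ d + (d @ d) / (2.0 * gamma)
           + lam * np.abs(x_bar).sum())
    return x_bar, fbe
```

Since one evaluation of the FBE costs exactly one forward–backward step, any smooth minimization scheme driven by this oracle has the same per-iteration cost profile as FBS itself.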

12.
The application of quasi-Newton methods is widespread in numerical optimization. Independently of the application, the techniques used to update the BFGS matrices seem to play an important role in the performance of the overall method. In this paper, we address precisely this issue. We compare two implementations of the limited memory BFGS method for large-scale unconstrained problems. They differ in the updating technique and the choice of initial matrix. L-BFGS performs continuous updating, whereas SNOPT uses a restarted limited memory strategy. Our study shows that continuous updating techniques are more effective, particularly for large problems.

13.
Journal of Complexity, 2002, 18(2): 557–572
This paper studies recent modifications of the limited memory BFGS (L-BFGS) method for solving large scale unconstrained optimization problems. Each modification technique attempts to improve the quality of the L-BFGS Hessian by employing (extra) updates in a certain sense. Because at some iterations these updates might be redundant or might worsen the quality of this Hessian, this paper proposes an update criterion to measure that quality. Extra updates are then employed only to improve a poor approximation of the L-BFGS Hessian. The presented numerical results illustrate the usefulness of this criterion and show that extra updates improve the performance of the L-BFGS method substantially.

14.
We present a new matrix-free method for the computation of negative curvature directions based on the eigenstructure of minimal-memory BFGS matrices. We determine the eigenvalues of these matrices via simple formulas and compute the desired eigenvectors in explicit form. Consequently, a negative curvature direction is computed in a way that avoids the storage and factorization of any matrix. We propose a modification of the L-BFGS method in which no information is kept from old iterations, so that memory requirements are minimal. The proposed algorithm incorporates a curvilinear path and a linesearch procedure, which combines two search directions: a memoryless quasi-Newton direction and a direction of negative curvature. Results of numerical experiments for large scale problems are also presented.

15.
In this paper, a method is developed for solving nonsmooth nonconvex minimization problems. This method extends the classical BFGS framework. First, we generalize the Wolfe conditions for locally Lipschitz functions and prove that this generalization is well defined. Then, a line search algorithm is presented to find a step length satisfying the generalized Wolfe conditions. Next, the Goldstein ε-subgradient is approximated by an iterative method and a descent direction is computed using a positive definite matrix. This matrix is updated using the BFGS method. Finally, a minimization algorithm based on the BFGS method is described. The algorithm is implemented in MATLAB and numerical results obtained with it are reported.

16.
Iterative methods, such as Newton's, behave poorly when solving ill-conditioned problems: they become slow (first order) and lose accuracy. In this paper we analyze in depth the convergence of a modified Newton method, which we call perturbed Newton, designed to overcome the usual disadvantages of Newton's method. The key point of this method is its dependence on a parameter affording a degree of freedom that introduces regularization. Choices for that parameter are proposed. The theoretical analysis is illustrated through examples.
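The abstract does not spell out the perturbation, but one natural Levenberg-style reading is to solve (J(x) + μI) p = −F(x) in place of the pure Newton system, with μ the regularizing parameter. The sketch below is that hypothetical reading, not the paper's method.

```python
import numpy as np

def perturbed_newton(F, J, x0, mu=1e-6, tol=1e-10, max_iter=100):
    """Newton iteration for F(x) = 0 with a regularizing perturbation mu*I
    added to the Jacobian to control ill-conditioning (illustrative only)."""
    x = x0
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x) + mu * np.eye(x.size)   # perturbed Jacobian
        x = x + np.linalg.solve(Jx, -Fx)  # regularized Newton step
    return x
```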

17.
In many science and engineering applications, the discretization of linear ill-posed problems gives rise to large ill-conditioned linear systems with a right-hand side degraded by noise. The solution of such linear systems requires the solution of minimization problems with one quadratic constraint, depending on an estimate of the variance of the noise. This strategy is known as regularization. In this work, we propose a modification of the Lagrange method for the solution of the noise-constrained regularization problem. We present numerical results for test problems, image restoration, and medical image denoising. Our results indicate that the proposed Lagrange method is effective and efficient in computing good regularized solutions of ill-conditioned linear systems and the corresponding Lagrange multipliers. Moreover, our numerical experiments show that the Lagrange method is computationally convenient. The Lagrange method is therefore a promising approach for dealing with ill-posed problems. This work was supported by the Italian FIRB Project “Parallel algorithms and Nonlinear Numerical Optimization” RBAU01JYPN.

18.
A family of variable metric proximal methods
We consider conceptual optimization methods combining two ideas: the Moreau–Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions. We outline several approaches based on this combination, and establish their global convergence. Then we study theoretically the local convergence properties of one of these approaches, which uses quasi-Newton updates of the objective function itself. Also, we obtain a globally and superlinearly convergent BFGS proximal method. At each step of our study, we single out the assumptions that are useful to derive the result concerned.

19.
The numerical solution of linear discrete ill-posed problems typically requires regularization, i.e., replacement of the available ill-conditioned problem by a nearby better conditioned one. The most popular regularization methods for problems of small to moderate size, which allow evaluation of the singular value decomposition of the matrix defining the problem, are the truncated singular value decomposition and Tikhonov regularization. The present paper proposes a novel choice of regularization matrix for Tikhonov regularization that bridges the gap between Tikhonov regularization and truncated singular value decomposition. Computed examples illustrate the benefit of the proposed method.
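In the standard-form case (regularization matrix equal to the identity), both methods are filtered SVD expansions: Tikhonov weighs the i-th component by σ_i²/(σ_i² + μ²), while TSVD keeps the components below a truncation index k. A sketch of this baseline follows; the paper's proposed regularization matrix is more general and is not reproduced here.

```python
import numpy as np

def filtered_svd_solutions(A, b, mu, k):
    """Tikhonov (standard form, parameter mu) and TSVD (truncation index k)
    solutions of the ill-conditioned system A x ~ b, both written as
    filtered SVD expansions."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                                # SVD coefficients of b
    x_tik = Vt.T @ (s * beta / (s**2 + mu**2))    # filters s^2/(s^2 + mu^2)
    coeff = np.zeros_like(s)
    coeff[:k] = beta[:k] / s[:k]                  # keep the k largest modes
    x_tsvd = Vt.T @ coeff
    return x_tik, x_tsvd
```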

20.
This letter presents a scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving unconstrained optimization problems. The basic idea is to combine the scaled memoryless BFGS method and the preconditioning technique in the framework of the conjugate gradient method. The preconditioner, which is also a scaled memoryless BFGS matrix, is reset when the Powell restart criterion holds. The parameter scaling the gradient is selected as the spectral gradient. Computational results on a set of 750 unconstrained test problems show that this new scaled conjugate gradient algorithm substantially outperforms known conjugate gradient methods such as the spectral conjugate gradient SCG of Birgin and Martínez [E. Birgin, J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128] and the classical conjugate gradient of Polak and Ribière [E. Polak, G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année 16 (1969) 35–43]; measured by CPU time, however, it is outperformed by L-BFGS [D. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. B 45 (1989) 503–528; J. Nocedal, http://www.ece.northwestern.edu/~nocedal/lbfgs.html].
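For concreteness, the scaled memoryless BFGS matrix is the BFGS update of H0 = θI by the single most recent pair (s, y), so the search direction −Hg is available in closed form without storing any matrix. The sketch below follows that reading; the Powell restart logic and the precise role of the preconditioner inside the CG recurrence are omitted.

```python
def scaled_memoryless_bfgs_direction(g, s, y, theta):
    """d = -H g, where H is the BFGS update of H0 = theta*I by the single
    most recent pair (s, y); no matrix is ever stored or factored."""
    rho = 1.0 / (y @ s)
    Hg = (theta * g
          - theta * rho * ((y @ g) * s + (s @ g) * y)
          + rho * (1.0 + theta * rho * (y @ y)) * (s @ g) * s)
    return -Hg

# spectral-gradient choice of the scaling parameter, as the abstract states:
# theta = (s @ s) / (s @ y)
```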
