Similar Literature
20 similar records found
1.
In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods that satisfy a sufficient-decrease assumption and allow a relative error tolerance. Our result guarantees the convergence of bounded sequences under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption covers a wide range of problems, including nonsmooth semi-algebraic (or, more generally, tame) minimization. Specializing our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection method and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems and iterative thresholding procedures for compressive sensing.
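Since the abstract mentions forward–backward splitting and iterative thresholding for compressive sensing, here is a minimal sketch of such an iteration for an ℓ1-regularized least-squares model; the function names and parameters (A, b, lam, step) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_l1(A, b, lam, step, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Forward (gradient) step on the smooth part, backward (proximal) step on
    the nonsmooth l1 term; for a step size below 1/||A^T A|| this is a
    descent iteration satisfying a sufficient-decrease condition.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```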

2.
This paper presents a method for minimizing the sum of a possibly nonsmooth convex function and a continuously differentiable function. As in the convex case developed by the author, the algorithm is a descent method which generates successive search directions by solving quadratic programming subproblems. An inexact line search ensures global convergence of the method to stationary points.

3.
This paper gives a simpler proof of the global convergence of the Broyden class of quasi-Newton methods with an inexact line search. The proof generalizes and modifies the proof of the global convergence of the BFGS method given by Nocedal and Wright in [3].
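As a quick reference for the family this entry refers to, the sketch below performs a single Broyden-class update of the Hessian approximation; the skip tolerance and interface are illustrative choices, not taken from the paper.

```python
import numpy as np

def broyden_class_update(B, s, y, phi=0.0):
    """One Broyden-class update of the Hessian approximation B.

    s = x_{k+1} - x_k, y = g_{k+1} - g_k.  phi = 0 gives BFGS, phi = 1
    gives DFP; phi in [0, 1] is the (restricted) Broyden class usually
    treated in global-convergence analyses with inexact line searches.
    """
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= 1e-12:            # curvature condition fails: skip the update
        return B
    v = y / ys - Bs / sBs
    return (B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
            + phi * sBs * np.outer(v, v))
```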

4.
We present modifications of the generalized conjugate gradient algorithm of Liu and Storey for unconstrained optimization problems (Ref. 1), extending its applicability to situations where the search directions are not defined. The use of new search directions is proposed and one additional condition is imposed on the inexact line search. The convergence of the resulting algorithm can be established under standard conditions for a twice continuously differentiable function with a bounded level set. Algorithms based on these modifications have been tested on a number of problems, showing considerable improvements. Comparisons with the BFGS and other quasi-Newton methods are also given.

5.
6.
This work is an attempt to develop multiobjective versions of some well-known single-objective quasi-Newton methods, including BFGS, self-scaling BFGS (SS-BFGS), and Huang's BFGS (H-BFGS). A comprehensive and comparative study of these methods is presented in this paper. The Armijo line search is used for the implementation of these methods. The numerical results show that the Armijo rule does not work the same way for the multiobjective case as for the single-objective case, because it imposes a large computational effort and significantly decreases the speed of convergence compared with the single-objective case. Hence, we consider two cases for all multiobjective versions of the quasi-Newton methods: with the Armijo line search and without any line search. Moreover, we show the convergence of these methods without any line search under some mild conditions. Also, by introducing a multiobjective subproblem for finding the quasi-Newton multiobjective search direction, a simple representation of the Karush–Kuhn–Tucker conditions is derived. The H-BFGS quasi-Newton multiobjective optimization method provides a higher-order accuracy in approximating the second-order curvature of the problem functions than the BFGS and SS-BFGS methods, and therefore offers some benefits over the other methods, as shown in the numerical results. All methods proposed in this paper are evaluated and compared with each other in different aspects, using some well-known test problems and performance assessment criteria, and with regard to the expended CPU time, the number of iterations, and the number of function evaluations.
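To illustrate why the Armijo rule becomes expensive in the multiobjective setting, here is a minimal sketch of a backtracking Armijo test applied to all objectives at once; F, JF and the constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multiobjective_armijo(F, JF, x, d, c=1e-4, rho=0.5, t0=1.0, max_back=30):
    """Backtracking Armijo rule applied to every objective simultaneously.

    F(x)  returns the vector of objective values,
    JF(x) returns the Jacobian (rows = gradients of the objectives).
    A step t is accepted only when the sufficient-decrease inequality
    F_i(x + t*d) <= F_i(x) + c*t*grad_i(x)^T d holds for all i, which is
    why the rule can require many extra function evaluations per iteration.
    """
    Fx = F(x)
    slope = JF(x) @ d            # directional derivatives of all objectives
    t = t0
    for _ in range(max_back):
        if np.all(F(x + t * d) <= Fx + c * t * slope):
            return t
        t *= rho
    return t
```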

7.
This article studies a modified BFGS algorithm for solving smooth unconstrained strongly convex minimization problems. The modified BFGS method is based on the new quasi-Newton equation B_{k+1} s_k = y_k^*, where y_k^* = y_k + A_k s_k and A_k is a matrix. Wei, Li and Qi [WLQ] have proven that the average performance of two of those algorithms is better than that of the classical one. In this paper, we prove the global convergence of these algorithms associated with a general line search rule.
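A minimal sketch of a BFGS update built on a modified secant vector y* = y + A s is shown below; the particular choice of A (a multiple of the identity involving function values, in the spirit of Wei, Li and Qi) is an assumption for illustration, not necessarily the matrix used in the paper.

```python
import numpy as np

def modified_bfgs_update(B, s, y, f_old, f_new, g_old, g_new):
    """BFGS update satisfying the modified secant equation B_{k+1} s = y*.

    Here A_k is taken as (theta / ||s||^2) * I with
    theta = 2*(f_old - f_new) + (g_new + g_old)^T s, so the modified
    vector y* = y + A_k s also carries function-value information.
    """
    theta = 2.0 * (f_old - f_new) + (g_new + g_old) @ s
    y_star = y + (theta / (s @ s)) * s
    Bs = B @ s
    sBs = s @ Bs
    ys = y_star @ s
    if ys <= 1e-12:                      # keep B positive definite
        return B
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_star, y_star) / ys
```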

8.
In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The best spectral conjugate gradient algorithm SCG of Birgin and Martínez (2001), which is mainly a scaled variant of Perry's (1977), is modified so as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the restart philosophy of Beale–Powell. The parameter scaling the gradient is selected as the spectral gradient or in an anticipative manner by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results, for a set of 500 unconstrained optimization test problems, show that this new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient SCG algorithm. The author was awarded the Romanian Academy Grant 168/2003.
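For context, a sketch of a spectral-scaled Perry-type direction of the kind SCG starts from is given below, assuming the standard spectral scaling theta = s^T s / s^T y; this is only the baseline direction, not the paper's BFGS-based modification.

```python
import numpy as np

def spectral_cg_direction(g_new, s, y):
    """Spectral (scaled) Perry-type conjugate gradient direction.

    theta is the spectral scaling s^T s / s^T y and the direction is
    d = -theta*g_{k+1} + beta*s with
    beta = ((theta*y - s)^T g_{k+1}) / (y^T s).
    """
    theta = (s @ s) / (s @ y)
    beta = ((theta * y - s) @ g_new) / (y @ s)
    return -theta * g_new + beta * s
```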

9.
The study of quasi-Newton methods for minimizing a general objective function, and of their global convergence, has become one of the most basic open problems in the theory of quasi-Newton methods. This paper investigates the problem further: a new class of generalized quasi-Newton algorithms is proposed for unconstrained optimization, and, combined with the Goldstein line search, the global convergence of the algorithms for minimizing general nonconvex objective functions is proved.

10.

The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh–Abe methods. These are generalisations of the Itoh–Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods and apply them to various test problems. The numerical results indicate that the randomised Itoh–Abe methods can be superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while still remaining competitive in terms of efficiency.


11.
This paper is concerned with a new nonmonotone strategy and its application to the line search approach for unconstrained optimization. It is believed that nonmonotone techniques can improve the possibility of finding the global optimum and increase the convergence rate of the algorithms. We first introduce a new nonmonotone strategy which uses a convex combination of the maximum function value of some preceding successful iterates and the current function value. We then incorporate the proposed nonmonotone strategy into an inexact Armijo-type line search to construct a more relaxed line search procedure. Global convergence to first-order stationary points is subsequently proved and an R-linear convergence rate is established under suitable assumptions. Preliminary numerical results show the efficiency and robustness of the proposed approach for solving unconstrained nonlinear optimization problems.
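A minimal sketch of such a nonmonotone Armijo-type acceptance test is given below; the reference value is the convex combination described in the abstract, while the parameter names and defaults (eta, delta, rho) are illustrative assumptions.

```python
import numpy as np

def nonmonotone_armijo(f, g, x, d, f_hist, eta=0.5, delta=1e-4, rho=0.5, max_back=30):
    """Armijo-type backtracking against a relaxed (nonmonotone) reference value.

    R is a convex combination of the maximum of the stored recent function
    values f_hist and the current value f(x); eta = 0 recovers the classical
    monotone Armijo rule.
    """
    f_x = f(x)
    R = eta * max(f_hist) + (1.0 - eta) * f_x
    slope = g(x) @ d                      # directional derivative, assumed < 0
    t = 1.0
    for _ in range(max_back):
        if f(x + t * d) <= R + delta * t * slope:
            return t
        t *= rho
    return t
```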

12.
Min Li, Optimization Letters, 2018, 12(8): 1911–1927
Based on the memoryless BFGS quasi-Newton method, a family of three-term nonlinear conjugate gradient methods is proposed. For any line search, the directions generated by the new methods satisfy the sufficient descent condition. Using some efficient techniques, global convergence results are established when the line search fulfills the Wolfe or the Armijo conditions. Moreover, the r-linear convergence rate of the methods is analyzed as well. Numerical comparisons show that the proposed methods are efficient for the unconstrained optimization problems in the CUTEr library.
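For reference, the sketch below computes the direction d = -H g obtained when the BFGS update of the identity is applied to only the most recent pair (s, y), expanded into a matrix-free three-term formula; this is the memoryless BFGS direction such families build on, not the paper's specific methods.

```python
import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    """Three-term search direction d = -H_{k+1} g_{k+1}, where H_{k+1} is the
    BFGS update of the identity using only the latest pair (s, y).

    Expanding H_{k+1} g_{k+1} gives a three-term combination of g_{k+1}, s
    and y, computed here without forming any matrix.
    """
    rho = 1.0 / (y @ s)
    sg, yg, yy = s @ g_new, y @ g_new, y @ y
    return (-g_new
            + rho * sg * y
            + (rho * yg - rho * sg - rho**2 * yy * sg) * s)
```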

13.
In this paper, a method is developed for solving nonsmooth nonconvex minimization problems. This method extends the classical BFGS framework. First, we generalize the Wolfe conditions to locally Lipschitz functions and prove that this generalization is well defined. Then, a line search algorithm is presented to find a step length satisfying the generalized Wolfe conditions. Next, the Goldstein ε-subgradient is approximated by an iterative method and a descent direction is computed using a positive definite matrix. This matrix is updated using the BFGS method. Finally, a minimization algorithm based on the BFGS method is described. The algorithm is implemented in MATLAB and numerical results obtained with it are reported.

14.
Since 1965, there has been significant progress in the theoretical study of quasi-Newton methods for solving nonlinear equations, especially in local convergence analysis. However, studies of the global convergence of quasi-Newton methods are relatively scarce, especially for the BFGS method. To ensure global convergence, some merit function such as the squared norm merit function is typically used. In this paper, we propose an algorithm for solving nonlinear monotone equations which combines the BFGS method and the hyperplane projection method. We also prove that the proposed BFGS method converges globally if the equation is monotone and Lipschitz continuous, without any differentiability requirement on the equation, which makes it possible to solve some nonsmooth equations. An attractive property of the proposed method is that its global convergence is independent of any merit function. We also report some numerical results to show the efficiency of the proposed method.
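The sketch below shows the hyperplane-projection step that such hybrid methods use once a trial point z = x + alpha*d with F(z)^T (x - z) > 0 has been found; the interface is an illustrative assumption, and the line search that produces alpha is omitted.

```python
import numpy as np

def hyperplane_projection_step(F, x, d, alpha):
    """One projection step for monotone equations F(x) = 0.

    z = x + alpha*d is a trial point with F(z)^T (x - z) > 0.  By
    monotonicity every solution lies on the far side of the hyperplane
    {u : F(z)^T (u - z) = 0}, so projecting x onto that hyperplane gives
    a new iterate strictly closer to the solution set.
    """
    z = x + alpha * d
    Fz = F(z)
    return x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
```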



15.
For the unconstrained minimization of a nonconvex function, this article gives a modified BFGS algorithm. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and guarantee the effectiveness of the quasi-Newton iteration. We prove global convergence of the algorithm under a general form of line search, and prove a quadratic convergence rate of the algorithm under some conditions.

16.
Gonglin Yuan, Pengyuan Li, Junyu Lu, Numerical Algorithms, 2022, 91(1): 353–365
The BFGS method, which has great numerical stability, is one of the quasi-Newton line search methods. However, the global convergence of the BFGS method with a Wolfe line...

17.
For the unconstrained minimization of a nonconvex function, this article gives a modified BFGS algorithm associated with a general line search model. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and guarantee the effectiveness of the new quasi-Newton iteration equation B_{k+1} s_k = y_k^*, where y_k^* is the sum of y_k and A_k s_k, and A_k is some matrix. Global convergence of the algorithm under the general form of line search is proved.

18.
The quasi-Newton method is a well-known and effective method for solving optimization problems. Since it is a line search method, which needs a line search procedure after determining a search direction at each iteration, we must choose a line search rule for selecting a step size along the search direction. In this paper, we propose a new inexact line search rule for the quasi-Newton method and establish some global convergence results for this method. These results are useful in designing new quasi-Newton methods. Moreover, we analyze the convergence rate of the quasi-Newton method with the new line search rule.

19.
New accelerated nonlinear conjugate gradient algorithms, which are mainly modifications of Dai and Yuan's method for unconstrained optimization, are proposed. With an exact line search the algorithm reduces to the Dai and Yuan conjugate gradient computational scheme; with an inexact line search the algorithm satisfies the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary in a very unpredictable manner, the algorithms are equipped with an acceleration scheme able to improve their efficiency. Computational results for a set of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai–Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes–Stiefel, Polak–Ribière–Polyak and CONMIN conjugate gradient algorithms, and the limited-memory quasi-Newton algorithm LBFGS, and compare favorably with CG_DESCENT. Within this numerical study the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be more robust.

20.
One impediment to the use of neural networks in pattern classification problems is the excessive time required for supervised learning in larger multilayer feedforward networks. The use of nonlinear optimization techniques to perform neural network training offers a means of reducing that computing time. Two key issues in the implementation of nonlinear programming are the choice of a method for computing search direction and the degree of accuracy required of the subsequent line search. This paper examines these issues through a designed experiment using six different pattern classification tasks, four search direction methods (conjugate gradient, quasi-Newton, and two levels of limited memory quasi-Newton), and three levels of line search accuracy. It was found that for the simplest pattern classification problems, the conjugate gradient performed well. For more complicated pattern classification problems, the limited memory BFGS or the BFGS should be preferred. For very large problems, the best choice seems to be the limited memory BFGS. It was also determined that, for the line search methods used in this study, increasing accuracy did not improve efficiency.
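Since the study singles out limited-memory BFGS for the largest problems, here is a minimal sketch of the two-loop recursion at its core, which computes the quasi-Newton direction from the most recent curvature pairs without forming a matrix; the interface is an illustrative assumption and at least one stored pair is required.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H*g for the limited-memory BFGS matrix H
    built from the stored pairs (s_i, y_i), using only vector operations.
    """
    q = g.copy()
    alphas, rhos = [], []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest to oldest
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append(a)
        rhos.append(rho)
    # initial Hessian scaling gamma = s^T y / y^T y from the latest pair
    s, y = s_list[-1], y_list[-1]
    r = (s @ y) / (y @ y) * q
    for (s, y), a, rho in zip(zip(s_list, y_list),          # oldest to newest
                              reversed(alphas), reversed(rhos)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```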
