Similar Literature
1.
In this paper, we propose a globally convergent Polak-Ribière-Polyak (PRP) conjugate gradient method for nonconvex minimization of differentiable functions, employing an Armijo-type line search that is simpler and less demanding than those defined in [4,10]. A favorable property of this method is that the initial stepsize can be chosen as the one-dimensional minimizer of a quadratic model $\Phi(t) := f(x_k) + t g_k^T d_k + \frac{1}{2} t^2 d_k^T Q_k d_k$, where $Q_k$ is a positive definite matrix that carries some second-order information of the objective function $f$. This line search may therefore make the stepsize $t_k$ more easily accepted. Preliminary numerical results show that this method is efficient.
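A minimal sketch of the idea, assuming the abstract's quadratic model: the initial trial step is the minimizer $t_0 = -g_k^T d_k / (d_k^T Q_k d_k)$ of $\Phi$, followed by plain Armijo backtracking. The function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def armijo_with_quadratic_init(f, x, g, d, Q, sigma=1e-4, beta=0.5, max_backtracks=50):
    """Armijo backtracking whose first trial step minimizes the quadratic
    model Phi(t) = f(x) + t*g^T d + 0.5*t^2 * d^T Q d (a sketch, not the
    paper's exact rule)."""
    gTd = g @ d
    dQd = d @ (Q @ d)
    t = -gTd / dQd if dQd > 0 else 1.0          # one-dimensional minimizer of Phi
    fx = f(x)
    for _ in range(max_backtracks):
        if f(x + t * d) <= fx + sigma * t * gTd:  # Armijo sufficient decrease
            return t
        t *= beta                                 # backtrack
    return t
```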

2.
The linear conjugate gradient method is optimal for convex quadratic minimization because of the Krylov subspace minimization property. The advent of the limited-memory BFGS method and the Barzilai-Borwein gradient method, however, heavily restricted the use of the conjugate gradient method for large-scale nonlinear optimization. This is, to a great extent, due to the requirement of a relatively exact line search at each iteration and the loss of the conjugacy property of the search directions on various occasions. In contrast, the limited-memory BFGS method and the Barzilai-Borwein gradient method share the so-called asymptotic one-stepsize-per-line-search property: the trial stepsize is asymptotically accepted by the line search when the iteration is close to the solution. This paper focuses on the analysis of the subspace minimization conjugate gradient method of Yuan and Stoer (1995). Specifically, by choosing the parameter in the method according to the Barzilai-Borwein idea, we obtain some efficient Barzilai-Borwein conjugate gradient (BBCG) methods. Initial numerical experiments show that one of the variants, BBCG3, is especially efficient among many others without line searches. This variant of the BBCG method might enjoy the asymptotic one-stepsize-per-line-search property and become a strong candidate for large-scale nonlinear optimization.
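For reference, a minimal sketch of the two classical Barzilai-Borwein stepsizes that the BBCG parameter choice builds on; the paper's actual BBCG3 parameter combines this idea with the Yuan-Stoer subspace direction and is not reproduced here.

```python
import numpy as np

def bb_stepsizes(s, y):
    """Barzilai-Borwein stepsizes from s = x_k - x_{k-1} and
    y = g_k - g_{k-1}; each solves a least-squares form of the
    secant equation with a scalar matrix (1/alpha) * I."""
    alpha_bb1 = (s @ s) / (s @ y)   # "long" BB stepsize
    alpha_bb2 = (s @ y) / (y @ y)   # "short" BB stepsize
    return alpha_bb1, alpha_bb2
```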

3.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
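A hedged sketch of one common self-scaling step, using an Oren-Luenberger-type factor on the inverse Hessian approximation; the paper's precise scaling and its nonmonotone safeguards may differ.

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """One self-scaling BFGS update of the inverse Hessian approximation H:
    H is first multiplied by tau = s^T y / (y^T H y) to damp large
    eigenvalues, then the standard inverse BFGS update is applied."""
    sy = s @ y
    Hy = H @ y
    tau = sy / (y @ Hy)                    # self-scaling factor
    H = tau * H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```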

4.
This work is an attempt to develop multiobjective versions of some well-known single-objective quasi-Newton methods, including BFGS, self-scaling BFGS (SS-BFGS), and Huang BFGS (H-BFGS). A comprehensive, comparative study of these methods is presented. The Armijo line search is used for their implementation. The numerical results show that the Armijo rule does not work the same way for the multiobjective case as for the single-objective case: it imposes a large computational effort and significantly decreases the speed of convergence. Hence, we consider two cases for all multiobjective versions of the quasi-Newton methods: with the Armijo line search and without any line search. Moreover, convergence of these methods without any line search is shown under some mild conditions. Also, by introducing a multiobjective subproblem for finding the quasi-Newton multiobjective search direction, a simple representation of the Karush–Kuhn–Tucker conditions is derived. The H-BFGS method provides higher-order accuracy in approximating the second-order curvature of the problem functions than the BFGS and SS-BFGS methods, and thus has some benefits over the other methods, as shown in the numerical results. All of the proposed methods are evaluated and compared on well-known test problems using standard performance assessment criteria, with regard to CPU time, number of iterations, and number of function evaluations.

5.
The descent property is very important for an iterative method to be globally convergent. In this paper, we propose a way to construct sufficient descent directions for unconstrained optimization. We then apply the technique to derive a PSB (Powell-Symmetric-Broyden) based method. The PSB-based method locally reduces to the standard PSB method with unit steplength. Under appropriate conditions, we show that the PSB-based method with Armijo or Wolfe line search is globally and superlinearly convergent for uniformly convex problems. We also report some numerical experiments; the results show that the PSB-based method is competitive with the standard BFGS method.
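For reference, a minimal sketch of the standard PSB update on which the method is built; the paper's sufficient-descent construction on top of it is not reproduced.

```python
import numpy as np

def psb_update(B, s, y):
    """Powell-Symmetric-Broyden update of the Hessian approximation B.
    With r = y - B s, the updated matrix satisfies the secant equation
    B_plus @ s = y while staying symmetric."""
    r = y - B @ s
    ss = s @ s
    return (B
            + (np.outer(r, s) + np.outer(s, r)) / ss
            - ((r @ s) / ss**2) * np.outer(s, s))
```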

6.
In this paper, we propose a modified BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with nonmonotone line search for unconstrained optimization. Under some mild conditions, we show that the method is globally convergent without a convexity assumption on the objective function. We also report some preliminary numerical results to show the efficiency of the proposed method.
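The abstract does not spell out the nonmonotone rule; as a generic illustration, here is the classical max-type nonmonotone Armijo backtracking of Grippo, Lampariello, and Lucidi, which such methods commonly use (the paper's exact rule may differ).

```python
def nonmonotone_armijo(f, x, g, d, f_hist, sigma=1e-4, beta=0.5, max_backtracks=50):
    """Nonmonotone Armijo backtracking: the sufficient-decrease test
    compares against the maximum of the last few function values
    (f_hist) rather than f(x) alone. x, g, d are NumPy arrays."""
    f_ref = max(f_hist)              # reference value over recent iterates
    gTd = g @ d
    t = 1.0
    for _ in range(max_backtracks):
        if f(x + t * d) <= f_ref + sigma * t * gTd:
            return t
        t *= beta
    return t
```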

7.
This paper presents a family of projected descent direction algorithms with inexact line search for solving large-scale minimization problems subject to simple bounds on the decision variables. The global convergence of algorithms in this family is ensured by conditions on the descent directions and on the line search. Whenever a sequence constructed by an algorithm in this family enters a sufficiently small neighborhood of a local minimizer satisfying standard second-order sufficiency conditions, it gets trapped and converges to this local minimizer. Furthermore, in this case, the active constraint set at the minimizer is identified in a finite number of iterations. This fact is used to ensure that the rate of convergence to such a local minimizer depends only on the behavior of the algorithm in the unconstrained subspace. As particular examples, we present projected versions of the modified Polak–Ribière conjugate gradient method and the limited-memory BFGS quasi-Newton method that retain the convergence properties associated with those algorithms applied to unconstrained problems.
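A minimal sketch of the projection operation at the core of this family, assuming simple bound constraints l ≤ x ≤ u; the paper's direction choices and line-search conditions are not reproduced.

```python
import numpy as np

def projected_descent_step(x, d, t, lower, upper):
    """One projected descent step for simple bounds: move along the
    descent direction d with stepsize t, then project componentwise
    back onto the box [lower, upper]."""
    return np.clip(x + t * d, lower, upper)
```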

8.
This paper studies convergence properties of regularized Newton methods for minimizing a convex function whose Hessian matrix may be singular everywhere. We show that if the objective function is LC², then the methods possess local quadratic convergence under a local error bound condition, without requiring isolated nonsingular solutions. By using a backtracking line search, we globalize an inexact regularized Newton method and show that the unit stepsize is eventually accepted. Limited numerical experiments are presented that show the practical advantage of the method.
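A hedged sketch of a regularized Newton direction: the regularization parameter is taken proportional to the gradient norm so the linear system stays nonsingular even where the Hessian is singular. The proportionality rule and the direct solve are illustrative; the paper analyzes an inexact variant.

```python
import numpy as np

def regularized_newton_direction(H, g, c=1.0):
    """Solve (H + mu*I) d = -g with mu = c * ||g||, so the shifted
    Hessian is positive definite whenever H is positive semidefinite."""
    mu = c * np.linalg.norm(g)
    return np.linalg.solve(H + mu * np.eye(len(g)), -g)
```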

9.
In this paper, a new nonmonotone inexact line search rule is proposed and applied to the trust region method for unconstrained optimization problems. In our line search rule, the current nonmonotone term is a convex combination of the previous nonmonotone term and the current objective function value, instead of the current objective function value alone. We can obtain a larger stepsize in each line search procedure and retain nonmonotonicity when incorporating the nonmonotone term into the trust region method. Unlike the traditional trust region method, the algorithm avoids re-solving the subproblem if a trial step is not accepted. Under suitable conditions, global convergence is established. Numerical results show that the new method is effective for solving unconstrained optimization problems.
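A minimal sketch of a convex-combination nonmonotone term in the style of Zhang and Hager; the paper's exact recursion and weights may differ.

```python
def update_nonmonotone_term(C_prev, Q_prev, f_new, eta=0.85):
    """Convex-combination nonmonotone reference value:
    Q_k = eta*Q_{k-1} + 1 and C_k = (eta*Q_{k-1}*C_{k-1} + f_k) / Q_k,
    so C_k is a weighted average of all function values seen so far.
    Initialize with C_0 = f(x_0) and Q_0 = 1."""
    Q_new = eta * Q_prev + 1.0
    C_new = (eta * Q_prev * C_prev + f_new) / Q_new
    return C_new, Q_new
```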

10.
The nonmonotone line search is a relatively recent technique for solving optimization problems. It relaxes the line search requirement and admits a larger stepsize at each iteration, so as to possibly avoid local minimizers and escape narrow curved valleys; this makes it helpful for finding the global minimizer of an optimization problem. In this paper we develop a new modification of the matrix-free nonmonotone Armijo line search and analyze the global convergence and convergence rate of the resulting method. We also address several approaches for estimating the Lipschitz constant of the gradient of the objective function, to be used in line search algorithms. Numerical results show that this new modification of the Armijo line search is efficient for solving large-scale unconstrained optimization problems.
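One simple secant-based estimator of the gradient's Lipschitz constant, offered as an illustration; the paper discusses several approaches and this is only one obvious candidate.

```python
import numpy as np

def estimate_gradient_lipschitz(grad, x_prev, x_curr):
    """Estimate the Lipschitz constant of the gradient from two
    iterates: L ~= ||g(x_k) - g(x_{k-1})|| / ||x_k - x_{k-1}||.
    This is a lower bound on the true constant over that segment."""
    return (np.linalg.norm(grad(x_curr) - grad(x_prev))
            / np.linalg.norm(x_curr - x_prev))
```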

11.
In this paper, we scale the quasi-Newton equation and propose a spectral scaling BFGS method. The method has a good self-correcting property and can improve the behavior of the BFGS method. Compared with the standard BFGS method, the single-step convergence rate of the spectral scaling BFGS method is not inferior to that of the steepest descent method when minimizing an n-dimensional quadratic function. In addition, when the method with exact line search is applied to minimize an n-dimensional strictly convex quadratic function, it terminates within n steps. Under appropriate conditions, we show that the spectral scaling BFGS method with Wolfe line search is globally and R-linearly convergent for uniformly convex optimization problems. The reported numerical results show that the spectral scaling BFGS method outperforms the standard BFGS method.

12.
A New Class of Nonmonotone Memory Gradient Methods and Their Global Convergence
Based on the nonmonotone Armijo line search, a new nonmonotone line search is proposed, and a class of memory gradient methods using this line search is studied; global convergence is proved under fairly weak conditions. Compared with the nonmonotone Armijo line search, the new line search can generate a larger stepsize at each iteration, so that the objective function value decreases sufficiently and the computational cost of the algorithm is reduced.

13.
We present examples of divergence for the BFGS and Gauss–Newton methods. These examples have objective functions with bounded level sets and share other properties with the examples published recently in this journal, such as unit steps and convexity along the search lines. As in those examples, the iterates, function values, and gradients in the new examples fit into the general formulation in our previous work, Mascarenhas (Comput Appl Math 26(1), 2007), which also presents an example of divergence for Newton's method.

14.
Convergence of the BFGS Algorithm for Nonconvex Optimization Problems
The BFGS algorithm is one of the most famous numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm under the Wolfe line search with a nonconvex objective function, and we give a sufficient condition for the algorithm to converge.
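For reference, a minimal check of the standard (weak) Wolfe conditions assumed by such analyses; the paper's sufficient condition for convergence is separate from this.

```python
def wolfe_conditions_hold(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """Check the weak Wolfe conditions for stepsize t along a descent
    direction d: sufficient decrease plus a curvature condition.
    x and d are NumPy arrays; f and grad are callables."""
    gTd = grad(x) @ d
    x_new = x + t * d
    sufficient_decrease = f(x_new) <= f(x) + c1 * t * gTd
    curvature = grad(x_new) @ d >= c2 * gTd
    return sufficient_decrease and curvature
```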

15.
A new subspace minimization conjugate gradient algorithm with a nonmonotone Wolfe line search is proposed and analyzed. In the scheme, we propose two choices of the search direction, obtained by minimizing a quadratic approximation of the objective function over special subspaces, and state criteria for choosing between them. Under given conditions, we show that each choice of direction satisfies the sufficient descent property. Based on a measure of how close the function is to a quadratic, a new strategy for choosing the initial stepsize of the line search is presented. With the nonmonotone Wolfe line search, we prove global convergence of the proposed method for general nonlinear functions under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.
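A hedged sketch of the subspace idea: minimize the quadratic model m(d) = g^T d + ½ d^T B d over the two-dimensional subspace spanned by the current gradient g and the previous step s. The matrix B is assumed available here; in practice it is approximated from secant or BB-type information.

```python
import numpy as np

def subspace_min_direction(g, s, B):
    """Search direction minimizing g^T d + 0.5 * d^T B d over
    span{g, s}: reduce to a 2x2 system in subspace coordinates."""
    Z = np.column_stack([g, s])      # basis of the 2D subspace
    A = Z.T @ B @ Z                  # reduced Hessian (2x2)
    b = Z.T @ g                      # reduced gradient
    coeffs = np.linalg.solve(A, -b)  # minimizer in subspace coordinates
    return Z @ coeffs
```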

16.
In this paper, we introduce a new concept of approximate optimal stepsize for the gradient method, use it to interpret the Barzilai-Borwein (BB) method, and present an efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. If the objective function f is not close to a quadratic on the line segment between the current iterate $x_k$ and the previous iterate $x_{k-1}$, we construct a conic model to generate the approximate optimal stepsize, provided the conic model is suitable; otherwise, we construct a new quadratic model, or two other new approximation models, to generate the approximate optimal stepsize. We analyze the convergence of the proposed method under some suitable conditions. Numerical results show that the proposed method is very promising.

17.
A new adaptive scaled Broyden-Fletcher-Goldfarb-Shanno (BFGS) method for unconstrained optimization is presented. The third term in the standard BFGS update formula is scaled in order to reduce large eigenvalues of the approximation to the Hessian of the minimizing function. Under inexact Wolfe line search conditions, global convergence of the adaptive scaled BFGS method is proved under very general conditions, without assuming convexity of the minimizing function. On 80 unconstrained optimization test functions with a medium number of variables, preliminary numerical experiments show that this variant of the scaled BFGS method is more efficient than the standard BFGS update and some other scaled BFGS methods.

18.
Adaptive Two-Point Stepsize Gradient Algorithm
Combined with a nonmonotone line search, the two-point stepsize gradient method has been applied successfully to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters in the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.

19.
Recently, an affine-scaling interior-point algorithm, ASL, was developed for box-constrained optimization problems with a single linear constraint (Gonzalez-Lima et al., SIAM J. Optim. 21:361–390, 2011). This note extends the algorithm to handle more general polyhedral constraints. With a line search, the resulting algorithm, ASP, maintains the global and R-linear convergence properties of ASL. In addition, it is shown that the unit-step version of the algorithm (without line search) is locally R-linearly convergent at a nondegenerate local minimizer where the second-order sufficient optimality conditions hold. For a quadratic objective function, a sublinear convergence property is obtained without assuming either nondegeneracy or the second-order sufficient optimality conditions.

20.
A novel method, called the discrete global descent method, is developed in this paper to solve discrete global optimization problems and nonlinear integer programming problems. This method moves from one discrete minimizer of the objective function f to a better one at each iteration with the help of an auxiliary function, called the discrete global descent function. The discrete global descent function guarantees that its discrete minimizers coincide with the better discrete minimizers of f under some standard assumptions. This property also ensures that a better discrete minimizer of f can be found by classical local search methods. Numerical experiments on several test problems with up to 100 integer variables and up to 1.38 × 10^104 feasible points demonstrate the applicability and efficiency of the proposed method.
