Similar Literature
20 similar documents found.
1.
In this paper, we present two new Dai–Liao-type conjugate gradient methods for unconstrained optimization problems. Their convergence under the strong Wolfe line search conditions is analysed for uniformly convex objective functions and for general objective functions, respectively. Numerical experiments, evaluated with Dolan and Moré's performance profiles, show that our methods can outperform some existing Dai–Liao-type methods.
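As background, the classical Dai–Liao search direction the abstract's new variants build on can be sketched as below; this is a minimal illustration of the generic formula, not the paper's new β choices, and the parameter t and safeguard are assumptions.

```python
import numpy as np

def dai_liao_direction(g_new, g_old, d_old, s_old, t=0.1):
    """Return d_{k+1} = -g_{k+1} + beta_DL * d_k, where
    beta_DL = g_{k+1}^T (y_k - t * s_k) / (d_k^T y_k),
    with y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k."""
    y = g_new - g_old
    denom = d_old @ y
    if abs(denom) < 1e-12:          # safeguard: fall back to steepest descent
        return -g_new
    beta = g_new @ (y - t * s_old) / denom
    return -g_new + beta * d_old
```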

2.
Three hybrid methods for solving unconstrained optimization problems are introduced. These methods are defined by suitable combinations of the search directions and parameters of conjugate gradient and quasi-Newton methods. The convergence of the proposed methods under the underlying backtracking line search is analyzed for general objective functions and, in particular, for uniformly convex objective functions. Numerical experiments show the superiority of the proposed methods over some existing methods in terms of Dolan and Moré's performance profiles.
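A minimal sketch of the backtracking (Armijo) line search the convergence analysis is carried out under; the parameters rho and c1 and the function interface are assumptions, not the paper's exact settings.

```python
import numpy as np

def backtracking(f, grad, x, d, alpha0=1.0, rho=0.5, c1=1e-4, max_iter=50):
    """Shrink alpha until f(x + alpha d) <= f(x) + c1 * alpha * grad(x)^T d."""
    fx, gTd = f(x), grad(x) @ d
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * gTd:
            return alpha
        alpha *= rho
    return alpha
```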

3.
In this article, by slightly modifying the search direction of the nonmonotone Hestenes–Stiefel method, a variant Hestenes–Stiefel conjugate gradient method is proposed that satisfies the sufficient descent condition independently of any line search. The algorithm also exploits both gradient and function-value information. We establish the global convergence of our method without assuming that the steplength is bounded away from zero. Numerical results illustrate that our method solves the test problems efficiently and is therefore promising.
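The paper's precise modification is not reproduced in the abstract; the sketch below shows the standard Hestenes–Stiefel direction plus an explicit sufficient-descent safeguard of the kind such variants enforce, with the constant c an assumption.

```python
import numpy as np

def hs_direction(g_new, g_old, d_old, c=1e-4):
    """Hestenes-Stiefel direction with a sufficient-descent safeguard:
    if g^T d <= -c * ||g||^2 fails, restart with steepest descent."""
    y = g_new - g_old
    denom = d_old @ y
    beta = (g_new @ y) / denom if abs(denom) > 1e-12 else 0.0
    d = -g_new + beta * d_old
    if g_new @ d > -c * (g_new @ g_new):   # sufficient descent check
        d = -g_new                          # fallback restart
    return d
```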

4.
This paper presents a new trust region algorithm for solving a class of composite nonsmooth optimization problems. It is distinguished by the fact that it does not enforce strict monotonicity of the objective function values at successive iterates, and that it extends existing results for this type of nonlinear optimization with smooth, piecewise smooth, or convex objective functions, or compositions thereof. The algorithm is proved to be globally convergent under certain conditions. Finally, numerical results for several optimization problems are reported which show that the nonmonotone trust region method is competitive with the usual trust region method.
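A minimal sketch of the nonmonotone acceptance test such methods use: the actual reduction is measured against the maximum of the last M function values rather than f(x_k) alone. The memory length M and the surrounding trust-region loop are assumptions.

```python
def nonmonotone_ratio(f_history, f_trial, pred_decrease, M=5):
    """rho = (max of last M f-values - f(trial)) / predicted model decrease."""
    f_ref = max(f_history[-M:])
    return (f_ref - f_trial) / pred_decrease

# Typical usage inside a trust-region loop (hypothetical names):
#   rho = nonmonotone_ratio(f_hist, f(x + s), model(0) - model(s))
#   accept the step if rho >= eta1, then enlarge/shrink the radius as usual.
```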

5.
The regularized Newton method (RNM) is one of the efficient solution methods for unconstrained convex optimization. It is well known that the RNM has good convergence properties compared to the steepest descent method and the pure Newton method. For example, Li, Fukushima, Qi and Yamashita showed that the RNM has a quadratic rate of convergence under the local error bound condition. Recently, Polyak showed that the global complexity bound of the RNM, i.e., the first iteration k such that ‖∇f(x_k)‖ ≤ ε, is O(ε⁻⁴), where f is the objective function and ε is a given positive constant. In this paper, we consider an RNM extended to unconstrained “nonconvex” optimization. We show that the extended RNM (E-RNM) has the following properties. (a) The E-RNM has a global convergence property under appropriate conditions. (b) The global complexity bound of the E-RNM is O(ε⁻²) if ∇²f is Lipschitz continuous on a certain compact set. (c) The E-RNM has a superlinear rate of convergence under the local error bound condition.
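A minimal sketch of a single regularized Newton step, solving (∇²f + μI) d = −∇f; the rule μ = c‖∇f‖ is one common choice and is an assumption here, not necessarily the paper's.

```python
import numpy as np

def regularized_newton_step(g, H, c=1.0):
    """One RNM step: d solves (H + mu * I) d = -g with mu = c * ||g||."""
    mu = c * np.linalg.norm(g)                     # regularization weight
    return np.linalg.solve(H + mu * np.eye(len(g)), -g)
```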

6.
We discuss a filter-based pattern search method for unconstrained optimization. To broaden the search range, we use both the filter technique and frames, which are fragments of grids, to provide a new criterion for accepting iterates. Convergence can be ensured under some conditions. Numerical results show that the method is practical and efficient.
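A minimal coordinate pattern search on a shrinking mesh, shown only as the basic poll step such methods build on; the filter and frame machinery of the paper is not reproduced, and the shrink factor and tolerances are assumptions.

```python
import numpy as np

def pattern_search(f, x0, delta=1.0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, float)
    fx = f(x)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])   # +/- coordinate steps
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + delta * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            delta *= 0.5                               # refine the mesh
            if delta < tol:
                break
    return x, fx
```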

7.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
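A minimal sketch of one self-scaling BFGS update with the classical Oren–Luenberger scaling factor τ = yᵀs / (sᵀBs); the paper's two nonmonotone line searches are not reproduced, and the curvature safeguard is an assumption.

```python
import numpy as np

def self_scaling_bfgs_update(B, s, y):
    """B_{k+1} = tau * (B - B s s^T B / (s^T B s)) + y y^T / (y^T s)."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s
    if ys <= 1e-12 or sBs <= 1e-12:      # curvature safeguard: skip update
        return B
    tau = ys / sBs                        # self-scaling factor
    return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / ys
```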

8.
A Simple Multistart Algorithm for Global Optimization
1. Introduction. Consider the unconstrained optimization problem: find x* such that f(x*) ≤ f(x) for all x ∈ X, (1) where f(x) is a nonlinear function defined on ℝⁿ and X ⊆ ℝⁿ. Our objective is to find the global minimizer of f(x) in the feasible set. Without assuming any conditions on f(x), global optimization problems are unsolvable in the following sense: no algorithm can be guaranteed to find a global minimizer of a general nonlinear function within finitely many iterations. Suppose that an algorithm applied to a nonlinear function f(x) produces iterates x_k and terminates after K iterations. …
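A minimal multistart loop in the spirit of the title: run a local solver from random starting points and keep the best minimizer found. The box bounds and the use of SciPy's BFGS as the local solver are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=20, seed=0):
    """bounds: list of (lo, hi) pairs, one per variable."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)              # random start in the box
        res = minimize(f, x0, method="BFGS")  # local search
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun
```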

9.
This paper presents a restarted conjugate gradient iterative algorithm for solving ill-posed problems. The damped Morozov discrepancy principle is used as a stopping rule. Numerical experiments are given to illustrate the efficiency of the method.
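A minimal conjugate gradient iteration on the normal equations with a Morozov-type discrepancy stopping rule ‖Ax − b‖ ≤ τδ, where δ is the noise level; the damping and the restarts of the paper are omitted, and τ is an assumption.

```python
import numpy as np

def cg_discrepancy(A, b, delta, tau=1.1, max_iter=200):
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)                  # gradient of 0.5 * ||Ax - b||^2
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(b - A @ x) <= tau * delta:
            break                          # discrepancy principle satisfied
        Ap = A @ p
        alpha = (r @ r) / (Ap @ Ap)
        x += alpha * p
        r_new = A.T @ (b - A @ x)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x
```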

10.
This paper proposes a new algorithm for solving constrained optimization problems: a projected-gradient-type center method. Under the assumptions of continuous differentiability and nondegeneracy, its global convergence is proved. The proposed algorithm is computationally simple and flexible in form.
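A minimal projected-gradient iteration onto a box, as a generic stand-in for the projected-gradient-type step the method builds on; the fixed step size and the box feasible set are assumptions.

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, step=1e-2, n_iter=500):
    x = np.clip(np.asarray(x0, float), lo, hi)
    for _ in range(n_iter):
        x = np.clip(x - step * grad(x), lo, hi)   # gradient step, then projection
    return x
```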

11.
We propose an algorithm for the global optimization of continuous minimax problems involving polynomials. The method can be described as a discretization approach to the well-known semi-infinite formulation of the problem. We proceed by approximating the infinite number of constraints using tools and techniques from semidefinite programming. We then show that, under appropriate conditions, the SDP approximation converges to the globally optimal solution of the problem. We also discuss the numerical performance of the method on some test problems. Financial support of EPSRC Grant GR/T02560/01 is gratefully acknowledged.
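A toy illustration of the discretization idea only: replace the inner maximization of min_x max_y f(x, y) by a maximum over a finite grid of y values and minimize with a local solver. The SDP machinery of the paper is not reproduced; the example polynomial f and the grid are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, y):                               # example polynomial in x and y
    return (x[0] - y) ** 2 - 0.5 * y * x[1] + x[1] ** 2

y_grid = np.linspace(-1.0, 1.0, 51)        # discretized constraint index set

def worst_case(x):
    return max(f(x, y) for y in y_grid)    # inner max over the grid

res = minimize(worst_case, x0=np.zeros(2), method="Nelder-Mead")
print(res.x, res.fun)
```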

12.
Conjugate Gradient Methods with Armijo-type Line Searches
Two Armijo-type line searches are proposed in this paper for nonlinear conjugate gradient methods. Under these line searches, global convergence results are established for several famous conjugate gradient methods, including the Fletcher–Reeves method, the Polak–Ribière–Polyak method, and the conjugate descent method.
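A minimal Fletcher–Reeves iteration under a plain Armijo backtracking line search, the kind of combination the paper analyzes; the Armijo constants and the descent-restart safeguard are assumptions.

```python
import numpy as np

def fr_armijo(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                      # safeguard: restart if not descent
            d = -g
        alpha, fx, gTd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * gTd and alpha > 1e-12:
            alpha *= 0.5                    # Armijo backtracking
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves beta
        d = -g_new + beta * d
        g = g_new
    return x
```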

13.
14.
This paper studies a substitution secant/finite difference (SSFD) method for solving large-scale sparse unconstrained optimization problems. The method combines a secant method with a finite difference method, relying on a consistent partition of the columns of the lower triangular part of the Hessian matrix. A q-superlinear convergence result and an r-convergence rate estimate show that the method has good local convergence properties. Numerical results show that the method may be competitive with some currently used algorithms.
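A minimal sketch of the finite-difference half of the idea: estimate a sparse Hessian from gradient differences using a partition of the columns into structurally orthogonal groups (no two columns in a group share a nonzero row). The secant substitution part of the SSFD method is not reproduced, and the interface is an assumption.

```python
import numpy as np

def fd_sparse_hessian(grad, x, groups, pattern, h=1e-6):
    """groups: lists of column indices; pattern[i][j] is True iff H[i, j] != 0."""
    n = len(x)
    H = np.zeros((n, n))
    g0 = np.asarray(grad(x), float)
    for group in groups:
        e = np.zeros(n)
        e[group] = h                      # perturb all columns of the group at once
        dg = (np.asarray(grad(x + e), float) - g0) / h
        for j in group:
            rows = [i for i in range(n) if pattern[i][j]]
            H[rows, j] = dg[rows]         # structural orthogonality identifies column j
    return 0.5 * (H + H.T)                # symmetrize
```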

15.
We consider the problem of minimizing the weighted sum of a smooth function f and a convex function P of n real variables subject to m linear equality constraints. We propose a block-coordinate gradient descent method for solving this problem, with the coordinate block chosen by a Gauss–Southwell-q rule based on sufficient predicted descent. We establish global convergence to first-order stationarity for this method and, under a local error bound assumption, a linear rate of convergence. If f is convex with Lipschitz continuous gradient, then the method terminates in O(n²/ε) iterations with an ε-optimal solution. If P is separable, then the Gauss–Southwell-q rule is implementable in O(n) operations when m = 1 and in O(n²) operations when m > 1. In the special case of support vector machine training, for which f is convex quadratic, P is separable, and m = 1, this complexity bound is comparable to the best known bound for decomposition methods. If f is convex, then, by gradually reducing the weight on P to zero, the method can be adapted to solve the bilevel problem of minimizing P over the set of minima of f + δ_X, where X denotes the closure of the feasible set and δ_X its indicator function. This has application in the least 1-norm solution of maximum-likelihood estimation. This research was supported by the National Science Foundation, Grant No. DMS-0511283.
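A minimal Gauss–Southwell coordinate gradient descent for a smooth f, dropping the nonsmooth term P and the linear constraints of the paper: each step updates the coordinate with the largest gradient magnitude. The 1/L step size and the interface are assumptions.

```python
import numpy as np

def gauss_southwell(grad, x0, lipschitz, n_iter=1000):
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        g = grad(x)
        j = int(np.argmax(np.abs(g)))     # Gauss-Southwell coordinate choice
        x[j] -= g[j] / lipschitz          # coordinate step with step size 1/L
    return x
```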

16.
Optimal control problems in which the control variable appears linearly are studied. A method is presented for optimization with respect to the switching times of controls containing both bang-bang and singular arcs. The method is based on transforming the control problem into a finite-dimensional optimization problem. First- and second-order optimality conditions are thoroughly discussed. Explicit representations of the first- and second-order variational derivatives of the state trajectory with respect to the switching times are given. These formulas are used to prove that the second-order sufficient conditions can be verified on the basis of only first-order variational derivatives of the state trajectory. The effectiveness of the proposed method is tested on two numerical examples.
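A toy sketch of switching-time optimization for a single switching time t_s of a bang-bang control u(t) = +1 for t < t_s and −1 afterwards, on a double integrator. Everything here (dynamics, cost, horizon) is an assumption rather than the paper's setting, and the paper's explicit variational derivatives are replaced by a derivative-free scalar solver.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def cost(t_s, T=2.0, x0=(1.0, 0.0)):
    def rhs(t, x):
        u = 1.0 if t < t_s else -1.0      # bang-bang control with one switch
        return [x[1], u]                  # double-integrator dynamics
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-8, atol=1e-8)
    xT = sol.y[:, -1]
    return xT @ xT                        # drive the final state toward zero

res = minimize_scalar(cost, bounds=(0.0, 2.0), method="bounded")
print(res.x, res.fun)                     # optimized switching time and cost
```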

17.
The aim of this paper is to give dual representations for different convex risk measures by employing their conjugate functions. To establish the formulas for the conjugates, we use classical results from convex analysis on the one hand and tools from conjugate duality theory on the other. Some characterizations of the so-called deviation measures recently given in the literature turn out to be direct consequences of our results.
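As background, the biconjugation identity such dual representations rest on can be written as below; this is the standard penalized-supremum form for a proper, convex, lower semicontinuous risk measure ρ, stated here as context rather than as the paper's exact theorem.

```latex
% Biconjugation: \rho = \rho^{**} unfolds into a supremum over measures Q
% penalized by the conjugate-induced penalty \alpha.
\rho(X) = \sup_{Q}\left\{ \mathbb{E}_{Q}[-X] - \alpha(Q) \right\},
\qquad
\alpha(Q) = \sup_{X}\left\{ \mathbb{E}_{Q}[-X] - \rho(X) \right\}.
```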

18.
Optimization problems over compact Lie groups have been studied extensively owing to their broad applications in linear programming and optimal control. This paper analyzes an optimization problem over the noncompact symplectic Lie group Sp(2N, ℝ), namely minimizing the Frobenius distance from a target symplectic transformation, which can be used to assess the fidelity of dynamical transformations in classical mechanics and quantum optics. The set of critical points is proven to contain a unique local minimum and a number of saddle-point submanifolds, exhibiting the absence of local suboptima that might hinder the search for the optimal solution. Compared with previously studied problems on compact Lie groups, such as the orthogonal and unitary groups, the topology is more complicated owing to the significant nonlinearity introduced by the incompatibility of the Frobenius norm with the pseudo-Riemannian structure on the symplectic group.

19.
20.
We propose a family of gradient algorithms for minimizing a quadratic function f(x) = (Ax, x)/2 − (x, y) in ℝ^d or a Hilbert space, with simple rules for choosing the step size at each iteration. We show that when the step sizes are generated by a dynamical system whose ergodic distribution has the arcsine density on a subinterval of the spectrum of A, the asymptotic rate of convergence of the algorithm can approach the (tight) bound on the rate of convergence of a conjugate gradient algorithm stopped before d iterations, with d ≤ ∞ the space dimension.
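A minimal sketch of the step-size idea: steepest descent for the quadratic with steps 1/t_k, where the t_k follow the arcsine density on a subinterval [a, c] of the spectrum of A. Here the t_k are sampled i.i.d. for simplicity rather than generated by the paper's dynamical system, and b stands for the linear term.

```python
import numpy as np

def arcsine_gradient(A, b, a, c, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(n_iter):
        u = rng.uniform()
        # cos(pi * U) with U ~ Uniform(0, 1) has the arcsine density on [-1, 1];
        # rescale it to the spectral subinterval [a, c].
        t = (a + c) / 2 + (c - a) / 2 * np.cos(np.pi * u)
        x -= (A @ x - b) / t               # gradient step with step size 1/t
    return x
```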
