Similar Documents
1.
This paper presents a method for finding the minimum of a class of nonconvex and nondifferentiable functions consisting of the sum of a convex function and a continuously differentiable function. The algorithm is a descent method that generates successive search directions by solving successive convex subproblems. The algorithm is shown to converge to a critical point. The authors wish to express their appreciation to the referees for their careful review and helpful comments.
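As a hedged illustration (the notation below is assumed for exposition and is not taken from the paper), the problem class and a typical convex direction-finding subproblem for descent methods of this kind can be written as

$$\min_{x\in\mathbb{R}^n} F(x) = f(x) + g(x), \qquad f \ \text{convex (possibly nondifferentiable)},\ \ g \in C^1,$$

$$d_k \in \arg\min_{d}\ \nabla g(x_k)^{\top} d + f(x_k + d) - f(x_k) + \tfrac{1}{2}\|d\|^2,$$

where the quadratic term keeps the subproblem well posed; the actual subproblem used in the paper may differ.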

2.
This paper deals with approximate value iteration (AVI) algorithms applied to discounted dynamic programming (DP) problems. For a fixed control policy, the span semi-norm of the so-called Bellman residual is shown to be convex in the Banach space of candidate solutions to the DP problem. This fact motivates the introduction of an AVI algorithm with local search that seeks to minimize the span semi-norm of the Bellman residual over a convex value function approximation space. The novelty here is that the optimality of a point in the approximation architecture is characterized by means of convex optimization concepts, and necessary and sufficient conditions for local optimality are derived. The procedure employs the classical AVI algorithm direction (the Bellman residual) combined with a set of independent search directions to improve the convergence rate. It has guaranteed convergence and satisfies, at least, the necessary optimality conditions over a prescribed set of directions. To illustrate the method, examples are presented that deal with a class of problems from the literature and a queueing problem with a large state space.
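For reference, the span semi-norm and the Bellman residual that the local search acts on can be written as follows (notation assumed here; α is the discount factor and T_μ the Bellman operator for the fixed policy μ):

$$\mathrm{sp}(v) = \max_{s} v(s) - \min_{s} v(s), \qquad (T_{\mu}v)(s) = r(s,\mu(s)) + \alpha \sum_{s'} p(s' \mid s,\mu(s))\, v(s'),$$

and the algorithm seeks a point $v_\theta$ in the approximation space that minimizes $\mathrm{sp}(T_{\mu} v_\theta - v_\theta)$.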

3.
This paper presents a feasible descent method for a class of nondifferentiable optimization problems with linear constraints, in which the objective function is a composite of a convex function and a differentiable function. The algorithm finds feasible descent directions by solving a sequence of quadratic programs, and new iterates are generated by an inexact line search. Under rather weak conditions, we prove the global convergence of the algorithm.

4.
A method is presented for the construction of test problems involving the minimization, over convex sets, of sums of ratios of affine functions. Given a nonempty, compact convex set, the method determines a function that is the sum of linear fractional functions and attains a global minimum over the set at a point that can be found by convex programming and univariate search. Generally, the function will also have local minima over the set that are not global minima.
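For concreteness, a sum of ratios of affine (linear fractional) functions has the generic form below; the symbols are assumed for illustration, and the denominators are taken to be positive on the feasible set:

$$f(x) = \sum_{i=1}^{m} \frac{a_i^{\top}x + b_i}{c_i^{\top}x + d_i}, \qquad c_i^{\top}x + d_i > 0 \ \text{ on the convex set } C.$$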

5.
The Powell singular function was introduced in 1962 by M.J.D. Powell as an unconstrained optimization problem. The function is also used as a nonlinear least squares problem and as a system of nonlinear equations. It is a classic test function included in collections of optimization test problems as well as an example problem in textbooks. In the global optimization literature the function is regarded as a difficult test case. The function is convex and the Hessian has a double singularity at the solution. In this paper we consider Newton's method and methods in the Halley class, and we discuss the relationship between these methods on the Powell singular function. We show that these methods have a global but linear rate of convergence. The function belongs to a subclass of unary functions, and the results for Newton's method and methods in the Halley class can be extended to this class. Newton's method is often made globally convergent by introducing a line search. We show that a full Newton step satisfies many of the standard step length rules and that exact line searches yield a slightly faster linear rate of convergence than Newton's method. We illustrate some of these properties with numerical experiments.
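A minimal sketch (an illustrative reconstruction, not code from the paper) of the Powell singular function and a plain Newton iteration on it, showing the linear rather than quadratic decrease of the function value caused by the (doubly) singular Hessian at the solution:

```python
import numpy as np

def powell_singular(x):
    # Powell (1962) singular function; the minimum value 0 is attained at the
    # origin, where the Hessian is doubly singular.
    x1, x2, x3, x4 = x
    return (x1 + 10*x2)**2 + 5*(x3 - x4)**2 + (x2 - 2*x3)**4 + 10*(x1 - x4)**4

def grad(x):
    x1, x2, x3, x4 = x
    return np.array([
        2*(x1 + 10*x2) + 40*(x1 - x4)**3,
        20*(x1 + 10*x2) + 4*(x2 - 2*x3)**3,
        10*(x3 - x4) - 8*(x2 - 2*x3)**3,
        -10*(x3 - x4) - 40*(x1 - x4)**3,
    ])

def hess(x):
    x1, x2, x3, x4 = x
    a = 120*(x1 - x4)**2
    b = 12*(x2 - 2*x3)**2
    return np.array([
        [2 + a, 20.0,    0.0,      -a],
        [20.0,  200 + b, -2*b,     0.0],
        [0.0,   -2*b,    10 + 4*b, -10.0],
        [-a,    0.0,     -10.0,    10 + a],
    ])

x = np.array([3.0, -1.0, 0.0, 1.0])  # standard starting point for this test problem
for k in range(25):
    # Full Newton step; a least-squares solve is used because the Hessian
    # becomes (nearly) singular as the iterates approach the solution.
    d = np.linalg.lstsq(hess(x), -grad(x), rcond=None)[0]
    x = x + d
    print(k, powell_singular(x))  # the values decrease at a roughly constant linear rate
```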

6.
A branch-and-bound method is proposed for minimizing a convex-concave function over a convex set. The minimization of a DC function is a special case, for which the subproblems connected with the bounding operation can be solved effectively. On leave at Mannheim University, supported by a grant from the Alexander von Humboldt Foundation.
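As background (the reduction shown is a standard one and is only assumed to correspond to the paper's setting): a DC function is the difference of two convex functions, and it can be viewed as the restriction of a convex-concave function to the diagonal:

$$f(x) = g(x) - h(x),\ \ g,h \ \text{convex}; \qquad F(x,y) = g(x) - h(y) \ \text{is convex in } x,\ \text{concave in } y,\ \ F(x,x) = f(x).$$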

7.
Based on an augmented Lagrangian line search function, a sequential quadratically constrained quadratic programming (SQCQP) method is proposed for solving nonlinearly constrained optimization problems. In contrast to the quadratic programs solved in traditional SQP methods, a convex quadratically constrained quadratic program is solved here to obtain a search direction, and the Maratos effect does not occur, without the need for any additional correction steps. The “active set” strategy used in this subproblem avoids recalculating unnecessary gradients and (approximate) Hessian matrices of the constraints. Under certain assumptions, the proposed method is proved to be globally, superlinearly, and quadratically convergent. As an extension, general problems with inequality and equality constraints, as well as a nonmonotone line search, are also considered.
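For orientation, a generic convex QCQP direction-finding subproblem of the kind used in SQCQP-type methods can be written as follows (the notation is assumed for illustration and may differ from the paper's):

$$\min_{d}\ \nabla f(x_k)^{\top} d + \tfrac{1}{2} d^{\top} B_k d \quad \text{s.t.} \quad g_i(x_k) + \nabla g_i(x_k)^{\top} d + \tfrac{1}{2} d^{\top} C_{i,k}\, d \le 0,\ \ i = 1,\dots,m,$$

with $B_k$ and $C_{i,k}$ positive semidefinite so that the subproblem is convex.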

8.
In this paper, we present a nonmonotone algorithm for solving nonsmooth composite optimization problems, whose objective function is composed of a nonsmooth convex function and a differentiable function. The method generates search directions by successively solving quadratic programs and uses a nonmonotone line search in place of the usual Armijo-type line search. Global convergence is proved under standard assumptions. Numerical results are given.
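A minimal sketch of a standard nonmonotone (Grippo-Lampariello-Lucidi-type) Armijo test, in which the sufficient-decrease comparison uses the maximum of the last few objective values instead of only the current one; the parameter names are assumptions, and the paper's exact rule may differ:

```python
from collections import deque

def nonmonotone_armijo(f, x, d, g_dot_d, memory,
                       sigma=1e-4, beta=0.5, alpha0=1.0, max_backtracks=50):
    """Backtracking line search with a nonmonotone sufficient-decrease test.

    f        : objective function
    x, d     : current point and (descent) search direction
    g_dot_d  : directional derivative f'(x; d), assumed negative
    memory   : deque of recent objective values (most recent last)
    """
    f_ref = max(memory)          # reference value: max over the last few iterates
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f_ref + sigma * alpha * g_dot_d:
            return alpha
        alpha *= beta            # shrink the step and try again
    return alpha                 # fall back to the last trial step

# Usage sketch: keep a bounded memory of recent objective values.
# memory = deque(maxlen=10); memory.append(f(x0))
# alpha = nonmonotone_armijo(f, x, d, g.dot(d), memory)
# memory.append(f(x + alpha * d))
```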

9.
Any constraint g(x) ≥ 0 is called a reverse convex constraint if g: R^n → R^1 is a continuous convex function. This paper establishes a finite method for finding an optimal solution to a concave program with an additional reverse convex constraint. The method presented is a new approach to global optimization problems, since it combines the idea of the branch-and-bound method with the idea of the cutting plane method. This paper is dedicated to Professor A. Pelczar.
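In symbols (notation assumed for illustration), the problem class treated here is a concave minimization over a convex set with one reverse convex constraint:

$$\min\ f(x) \quad \text{s.t.} \quad x \in D,\ \ g(x) \ge 0,$$

where $f$ is concave, $D \subset \mathbb{R}^n$ is convex, and $g$ is convex, so the feasible set is the part of $D$ lying outside the open convex set $\{x : g(x) < 0\}$.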

10.
In this paper, we present a successive quadratic programming (SQP) method for minimizing a class of nonsmooth functions which are the sum of a convex function and a nonsmooth composite function. The method generates new iterates by using an Armijo-type line search technique after the search directions have been found. The global convergence property is established under mild assumptions. Numerical results are also provided.

11.
In this paper, a new steplength formula is proposed for unconstrained optimization, which can determine the step size in a single step and avoids the line search. Global convergence of five well-known conjugate gradient methods with this formula is analyzed, and the corresponding results are as follows: (1) the DY method globally converges for a strongly convex LC^1 objective function; (2) the CD method, the FR method, the PRP method and the LS method globally converge for a general, not necessarily convex, LC^1 objective function.
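For readers without the abbreviations at hand, the five methods differ only in the conjugacy parameter $\beta_k$ used in the direction update $d_{k+1} = -g_{k+1} + \beta_k d_k$; the standard formulas from the conjugate gradient literature are listed below (with $y_k = g_{k+1} - g_k$; the paper's new steplength formula itself is not reproduced here):

$$\beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \quad \beta_k^{PRP} = \frac{g_{k+1}^{\top} y_k}{\|g_k\|^2}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^{\top} y_k}, \quad \beta_k^{CD} = \frac{\|g_{k+1}\|^2}{-d_k^{\top} g_k}, \quad \beta_k^{LS} = \frac{g_{k+1}^{\top} y_k}{-d_k^{\top} g_k}.$$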

12.
This paper presents a coordinate gradient descent approach for minimizing the sum of a smooth function and a nonseparable convex function. We find a search direction by solving a subproblem obtained by a second-order approximation of the smooth function and adding a separable convex function. Under a local Lipschitzian error bound assumption, we show that the algorithm possesses global and local linear convergence properties. We also give some numerical tests (including image recovery examples) to illustrate the efficiency of the proposed method.
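A generic form of such a direction-finding subproblem is sketched below, heavily hedged: the smooth part $f$ is replaced by a second-order model, a convex term $Q$ (in the paper, a separable convex function) is added, and restricting the search to a coordinate block $J_k$ gives the coordinate-descent flavor. The notation is assumed and may not match the paper exactly:

$$d_k \in \arg\min_{d:\ d_i = 0\ \forall i \notin J_k}\ \nabla f(x_k)^{\top} d + \tfrac{1}{2} d^{\top} H_k d + Q(x_k + d),$$

where $H_k \succ 0$ approximates $\nabla^2 f(x_k)$.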

13.
《Optimization》2012,61(3-4):237-248
A linearly constrained global optimization problem is studied, where the objective function is the sum of a convex function g(x) and a nonconvex function f(x) satisfying a rank two condition. Roughly speaking, the latter means that all the nonconvexity of f(x) is concentrated on a linear manifold of dimension 2. A solution method based on exploiting this special structure is proposed.

14.
The problem of globally minimizing a convex function subject to general continuous inequality constraints is investigated. A convergent outer approximation method is proposed which systematically exploits the convexity of the objective function in order to transcend local optimality. Also the question of finding a good starting point by using a local approach is discussed.

15.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, in order to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under conditions weaker than those in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
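For reference, one standard (Oren-Luenberger-type) self-scaling BFGS update multiplies the previous approximation by a scalar before applying the usual BFGS correction; the formula below is this standard form and is only assumed to be representative of the update analyzed in the paper:

$$B_{k+1} = \tau_k\left(B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}\right) + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}, \qquad \tau_k = \frac{y_k^{\top} s_k}{s_k^{\top} B_k s_k},$$

with $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$.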

16.
We modify a Lagrangian penalty function method proposed in [4] for constrained convex mathematical programming problems in order to obtain a geometric rate of convergence. For nonconvex problems we show that a special case of the algorithm in the above paper is still convergent without coercivity and convexity assumptions. On leave from the Institute of Mathematics, Hanoi, supported by a grant from the Alexander-von-Humboldt-Stiftung.

17.
We generalize the concept of a gap function previously defined for a convex (scalar) optimization problem to a convex multicriteria optimization problem and study its various properties.
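For background, the classical gap function for a scalar convex problem $\min_{x \in C} f(x)$ with $C$ convex can be written as below; this is the standard definition from the literature and is assumed here, not quoted from the paper:

$$\gamma(x) = \max_{y \in C}\ \nabla f(x)^{\top}(x - y), \qquad \gamma(x) \ge 0, \quad \gamma(x) = 0 \iff x \ \text{satisfies the first-order optimality condition.}$$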

18.
In this paper, we scale the quasi-Newton equation and propose a spectral scaling BFGS method. The method has a good self-correcting property and can improve the behavior of the BFGS method. Compared with the standard BFGS method, the single-step convergence rate of the spectral scaling BFGS method is not inferior to that of the steepest descent method when minimizing an n-dimensional quadratic function. In addition, when the method with exact line search is applied to minimize an n-dimensional strictly convex function, it terminates within n steps. Under appropriate conditions, we show that the spectral scaling BFGS method with Wolfe line search is globally and R-linearly convergent for uniformly convex optimization problems. The reported numerical results show that the spectral scaling BFGS method outperforms the standard BFGS method.

19.
We consider the gradient (or steepest) descent method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme and show that the worst-case behavior is exhibited by a certain convex quadratic function. We also give the tight worst-case complexity bound for a noisy variant of the gradient descent method, in which exact line search is performed in a search direction that differs from the negative gradient by at most a prescribed relative tolerance. The proofs are computer-assisted and rely on solving semidefinite programming performance estimation problems, as introduced by Drori and Teboulle (Math. Program. 145(1-2):451-482, 2014).
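A minimal sketch (not from the paper) of exact-line-search gradient descent on a strongly convex quadratic, for which the step length has a closed form and the classical worst-case per-step contraction of the objective value is $((\kappa-1)/(\kappa+1))^2$, with $\kappa = L/\mu$ the condition number; the starting point below is the classical zig-zag example that attains this rate:

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 * x^T A x with eigenvalues mu = 1 and L = 10.
mu, L = 1.0, 10.0
A = np.diag([mu, L])
kappa = L / mu
worst_rate = ((kappa - 1.0) / (kappa + 1.0)) ** 2   # classical per-step bound on f

def f(x):
    return 0.5 * x @ A @ x

# Worst-case starting point: equal "energy" in the extreme eigendirections.
x = np.array([1.0 / mu, 1.0 / L])
for k in range(10):
    g = A @ x                                 # gradient of the quadratic
    alpha = (g @ g) / (g @ (A @ g))           # exact line search has a closed form here
    x_new = x - alpha * g
    print(k, f(x_new) / f(x), worst_rate)     # observed contraction matches the bound
    x = x_new
```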

20.
李梅霞  籍法俊 《应用数学》2008,21(1):213-218
In this paper, we propose a new three-term memory gradient hybrid projection algorithm with perturbations. A generalized Armijo line search is used in the method, and global convergence is proved under the sole condition that the gradient function is uniformly continuous on an open convex set containing the iteration sequence. Finally, several numerical examples are given.
