Similar Documents
20 similar documents found.
1.
This paper introduces an algorithm for convex minimization which includes quasi-Newton updates within a proximal point algorithm that depends on a preconditioned bundle subalgorithm. The method uses the Hessian of a certain outer function which depends on the Jacobian of a proximal point mapping which, in turn, depends on the preconditioner matrix and on a Lagrangian Hessian relative to a certain tangent space. Convergence is proved under boundedness assumptions on the preconditioner sequence. Research supported by NSF Grant No. DMS-9402018 and by Institut National de Recherche en Informatique et en Automatique, France.

2.
In this paper we describe a number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games, and variational inequalities. We outline the ideas underlying these methods and present rate-of-convergence estimates.

3.
We develop a class of methods for minimizing a nondifferentiable function which is the maximum of a finite number of smooth functions. The methods proceed by iteratively solving quadratic programming problems to generate search directions. For efficiency, the matrices in the quadratic programming problems are updated in a variable metric way. By doing so, the methods retain many attractive features of variable metric methods and can be viewed as their natural extension to the nondifferentiable case. To avoid the difficulties of an exact line search, a practical stepsize procedure is also introduced. Under mild assumptions the resulting methods converge globally. Research supported by the National Science Foundation under grant number ENG 7903881.

4.
We propose a class of modified proximal point algorithms and discuss their convergence properties, as well as the convergence properties of their bundle variants.

5.
We give a bundle method for constrained convex optimization. Instead of using penalty functions, it shifts iterates towards feasibility, by way of a Slater point, assumed to be known. Besides, the method accepts an oracle delivering function and subgradient values with unknown accuracy. Our approach is motivated by a number of applications in column generation, in which constraints are positively homogeneous (so that zero is a natural Slater point) and an exact oracle may be time consuming. Finally, our convergence analysis employs arguments which have been little used so far in the bundle community. The method is illustrated on a number of cutting-stock problems. Research supported by INRIA New Investigation Grant "Convex Optimization and Dantzig–Wolfe Decomposition".
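The feasibility shift described above can be made concrete: by convexity, moving an infeasible iterate along the segment toward a strictly feasible Slater point must reach the feasible set, and the smallest such shift can be located by bisection. The sketch below illustrates this under assumed example constraints (the unit ball, with the origin as Slater point); the bisection tolerance and constraint choice are illustrative, not from the paper.

```python
# Sketch of the feasibility shift via a known Slater point: an infeasible
# iterate x is moved along the segment toward x_slater just far enough to
# satisfy the convex constraints c_i(z) <= 0.
import numpy as np

def shift_to_feasible(x, x_slater, constraints, tol=1e-10):
    """Smallest t in [0,1] with x + t*(x_slater - x) feasible, found by
    bisection (valid because each c_i is convex and c_i(x_slater) < 0)."""
    def feasible(t):
        z = x + t * (x_slater - x)
        return all(c(z) <= 0.0 for c in constraints)
    if feasible(0.0):
        return x                      # already feasible, no shift needed
    lo, hi = 0.0, 1.0                 # invariant: feasible(hi) is True
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return x + hi * (x_slater - x)

# Illustrative constraint: ||z||^2 - 1 <= 0, with Slater point 0.
cons = [lambda z: float(z @ z) - 1.0]
x_feas = shift_to_feasible(np.array([2.0, 0.0]), np.zeros(2), cons)
print(x_feas)  # on (or just inside) the unit ball
```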

6.
We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov. This research was supported by the Polish Academy of Sciences. Supported by a grant from the French Ministry of Research and Technology.

7.
We propose a class of self-adaptive proximal point methods suitable for degenerate optimization problems where multiple minimizers may exist, or where the Hessian may be singular at a local minimizer. If the proximal regularization parameter takes a power form with exponent η∈[0,2) and constant β>0, we obtain convergence to the set of minimizers that is linear for η=0 and β sufficiently small, superlinear for η∈(0,1), and at least quadratic for η∈[1,2). Two different acceptance criteria for an approximate solution to the proximal problem are analyzed. These criteria are expressed in terms of the gradient of the proximal function, the gradient of the original function, and the iteration difference. With either acceptance criterion, the convergence results are analogous to those of the exact iterates. Preliminary numerical results are presented using some ill-conditioned CUTE test problems. This material is based upon work supported by the National Science Foundation under Grant Nos. 0203270, 0619080, and 0620286.
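To make the self-adaptive idea concrete, here is a minimal sketch of a proximal point iteration in which the regularization parameter is tied to the current gradient norm, mu_k = beta * |f'(x_k)|**eta. This coupling is an assumed illustrative instance of the "power form" parameter, not taken verbatim from the paper, and the test problem f(x) = x^4 (whose Hessian vanishes at the minimizer x* = 0) is chosen only because it is degenerate.

```python
# Self-adaptive proximal point sketch on the degenerate problem f(x) = x^4.
# ASSUMPTION: mu_k = beta * |f'(x_k)|**eta is an illustrative choice of the
# power-form regularization parameter.

def f(x):       return x**4
def fprime(x):  return 4.0 * x**3

def prox_step(x, mu, iters=50):
    """Solve min_y f(y) + (mu/2)*(y - x)**2 by Newton's method."""
    y = x
    for _ in range(iters):
        g = fprime(y) + mu * (y - x)      # gradient of the prox objective
        h = 12.0 * y**2 + mu              # its second derivative (>= mu > 0)
        y -= g / h
    return y

def self_adaptive_ppa(x0, beta=1.0, eta=1.0, iters=30):
    x = x0
    for _ in range(iters):
        mu = beta * abs(fprime(x))**eta   # adaptive regularization parameter
        if mu == 0.0:
            break                         # already stationary
        x = prox_step(x, mu)
    return x

x_star = self_adaptive_ppa(2.0)
print(abs(x_star))  # close to the minimizer 0
```

Because mu shrinks with the gradient, the regularization fades near the solution, which is the mechanism behind the fast local rates claimed above.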

8.
Attouch and Wets have recently introduced a variational metric between closed proper convex functions. The aim of this note is to give an estimate of this metric in the case of exponential penalties. We can thereby recover some convergence results for the exponential penalty method. The authors would like to thank the referees for their suggestions.

9.
This paper is concerned with some of the most powerful methods of minimizing functionals on Hilbert space. It is established that certain classes of these methods are equivalent and their convergence is proved for certain nonquadratic functionals on a Hilbert space. A computational study of these methods applied to a control problem is also included with particular reference to the equivalence of methods mentioned above.

10.
Semidefinite programming (SDP) has recently turned out to be a very powerful tool for approximating some NP-hard problems. The nature of the quadratic assignment problem (QAP) suggests SDP as a way to derive tractable relaxations. We recall some SDP relaxations of QAP and solve them approximately using a dynamic version of the bundle method. The computational results demonstrate the efficiency of the approach. Our bounds are currently among the strongest ones available for QAP. We investigate their potential for branch and bound settings by looking also at the bounds in the first levels of the branching tree.

11.
Two recent suggestions in the field of variable metric methods for function minimization are reviewed: the self-scaling method, first introduced by Oren and Luenberger, and the method of Biggs. The two proposals are considered from both theoretical and computational viewpoints. They are compared with methods which use correction formulae from the Broyden one-parameter family, in particular the BFGS formula and the Fletcher switching strategy.
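For reference, the Broyden one-parameter family mentioned above can be written as the BFGS update plus a rank-one correction scaled by the parameter phi, with phi = 0 giving BFGS and phi = 1 giving DFP. The sketch below (a standard textbook form, not code from the paper) verifies the key shared property: every member satisfies the secant condition B_new @ s = y.

```python
import numpy as np

def broyden_family_update(B, s, y, phi):
    """One Broyden-family update of the Hessian approximation B.
    phi = 0 is the BFGS formula, phi = 1 is DFP; every member satisfies
    the secant condition B_new @ s = y (since v @ s = 0 below)."""
    Bs = B @ s
    sBs = s @ Bs
    ys = y @ s                                # curvature, must be > 0
    v = y / ys - Bs / sBs
    return (B
            - np.outer(Bs, Bs) / sBs          # BFGS rank-two part
            + np.outer(y, y) / ys
            + phi * sBs * np.outer(v, v))     # one-parameter correction

rng = np.random.default_rng(0)
n = 5
B = np.eye(n)
s = rng.standard_normal(n)
y = s + 0.1 * rng.standard_normal(n)          # keeps y @ s > 0 here
for phi in (0.0, 0.5, 1.0):
    B_new = broyden_family_update(B, s, y, phi)
    print(np.allclose(B_new @ s, y))          # secant condition
```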

12.
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. These authors were supported by the Air Force Office of Scientific Research under Grant AFOSR-90-0109, the Army Research Office under Grant DAAL03-91-0151 and the National Science Foundation under Grants CCR-8920519 and CCR-9101795. This author was supported by the U.S. Department of Energy, under Grant DE-FG02-87ER25047-A001, and by National Science Foundation Grants CCR-9101359 and ASC-9213149.
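The compact BFGS representation referred to here expresses the k-th Hessian approximation as B_0 plus a correction built from the stored pairs S = [s_0,...,s_{k-1}], Y = [y_0,...,y_{k-1}], rather than as k sequential rank-two updates. A minimal numpy sketch of this form, checked against the sequential updates it must reproduce (the example data are illustrative):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard rank-two BFGS update of B."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def compact_bfgs(B0, S, Y):
    """Compact representation:
       B_k = B0 - W M^{-1} W^T, with W = [B0 S, Y] and
       M = [[S^T B0 S, L], [L^T, -D]], where L is the strictly lower
       triangle of S^T Y and D = diag(S^T Y)."""
    STY = S.T @ Y
    L = np.tril(STY, -1)
    D = np.diag(np.diag(STY))
    W = np.hstack([B0 @ S, Y])
    M = np.block([[S.T @ B0 @ S, L], [L.T, -D]])
    return B0 - W @ np.linalg.solve(M, W.T)

rng = np.random.default_rng(1)
n, m = 6, 3
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)        # SPD model Hessian, so s_i^T y_i > 0
S = rng.standard_normal((n, m))
Y = A @ S
B = np.eye(n)
for i in range(m):                 # m sequential rank-two updates...
    B = bfgs_update(B, S[:, i], Y[:, i])
print(np.allclose(B, compact_bfgs(np.eye(n), S, Y)))  # ...match compact form
```

The payoff in limited memory methods is that products and projections with B_k reduce to small (2m x 2m) dense linear algebra plus a few tall-skinny matrix products.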

13.
The aim of this work is to propose implicit and explicit viscosity-like methods for finding specific common fixed points of infinite countable families of nonexpansive self-mappings in Hilbert spaces. Two numerical approaches to solving this problem are considered: an implicit anchor-like algorithm and a nonimplicit one. The considered methods are of practical interest from the numerical point of view, and strong convergence results are proved.

14.
This paper presents a quasi-Newton-type algorithm for nonconvex multiobjective optimization. In this algorithm, the iterations are repeated until the termination condition is met, namely when a suitable descent direction can no longer be found. Under suitable assumptions, global convergence is established.

15.
In this paper we discuss the main concepts of structural optimization, a field of nonlinear programming, which was formed by the intensive development of modern interior-point schemes.

16.
Nonlinear rescaling and proximal-like methods in convex optimization
The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their sets of optimal solutions coincide. A nonlinear transformation, parameterized by a positive scalar parameter and based on a smooth scaling function, is used to transform the constraints. The methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular, a new class of exponential penalty modified barrier function methods is introduced. Partially supported by the National Science Foundation under Grants DMS-9201297 and DMS-9401871. Partially supported by NASA Grant NAG3-1397 and NSF Grant DMS-9403218.
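A minimal one-dimensional sketch of the scheme just described, using the exponential scaling function psi(t) = 1 - exp(-t): minimize the transformed Lagrangian in x, then update the multiplier by the explicit formula lam <- lam * psi'(k*c(x)). The test problem min x^2 subject to x - 1 >= 0 (optimum x* = 1, multiplier lam* = 2) and all constants are illustrative assumptions, not from the paper.

```python
import math

# Exponential multiplier sketch for  min f(x) = x^2  s.t.  c(x) = x - 1 >= 0.
# With psi(t) = 1 - exp(-t) we minimize
#   L(x, lam, k) = f(x) - (lam / k) * psi(k * c(x))
# in x, then update  lam <- lam * psi'(k * c(x)) = lam * exp(-k * c(x)).

def minimize_L(lam, k, x0):
    """Newton's method on L'(x) = 2x - lam * exp(-k*(x - 1))."""
    x = x0
    for _ in range(100):
        e = math.exp(-k * (x - 1.0))
        g = 2.0 * x - lam * e            # dL/dx
        h = 2.0 + lam * k * e            # d2L/dx2 > 0, so Newton is safe
        x -= g / h
    return x

lam, k, x = 1.0, 5.0, 0.0
for _ in range(30):                      # outer multiplier iterations
    x = minimize_L(lam, k, x)
    lam *= math.exp(-k * (x - 1.0))      # explicit multiplier update
print(round(x, 4), round(lam, 4))        # prints 1.0 2.0
```

Note that the multiplier update is exactly the "explicit formula" of the abstract; no penalty parameter needs to go to infinity for convergence here.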

17.
This paper studies the vector optimization problem of finding weakly efficient points for mappings in a Banach space Y, with respect to the partial order induced by a closed, convex, and pointed cone C ⊂ Y with a nonempty interior. The proximal method in vector optimization is extended to develop an approximate proximal method for this problem by virtue of the approximate proximal point method for finding a root of a maximal monotone operator. In this approximate proximal method, the subproblems consist of finding weakly efficient points for suitable regularizations of the original mapping. We present both an absolute and a relative version, in which the subproblems are solved only approximately. Weak convergence of the generated sequence to a weakly efficient point is established. In addition, we also discuss an extension to Bregman-function-based proximal algorithms for finding weakly efficient points for mappings.

18.
The subject of this paper is the inexact proximal point algorithm, of both the usual and the Halpern type, in non-positive curvature metric spaces. We study the convergence of the sequences given by the inexact proximal point algorithm with non-summable errors. We also prove the strong convergence of the Halpern proximal point algorithm to a minimum point of the convex function. The results extend several known results in Hilbert spaces, Hadamard manifolds, and non-positive curvature metric spaces.

19.
Following the lines of our previous approach, which devises proximal algorithms for nonsmooth convex optimization by applying Nesterov's fast gradient concept to the Moreau–Yosida regularization of a convex function, we develop three new proximal algorithms for nonsmooth convex optimization. In these algorithms, the errors in computing approximate solutions of the Moreau–Yosida regularization are not fixed beforehand, while the complexity estimates already established are preserved. We report some preliminary computational results to give a first estimate of their performance.
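The underlying construction can be sketched in one dimension: the Moreau–Yosida regularization F_mu of a nonsmooth f is (1/mu)-smooth with grad F_mu(x) = (x - prox_{mu f}(x))/mu, so Nesterov-style acceleration applies to F_mu with step size mu. The example below uses the illustrative choice f(x) = |x - 3| (whose prox is a shifted soft-threshold) and exact prox evaluations; it is a hedged sketch of the concept, not the inexact algorithms of the paper.

```python
# Nesterov-accelerated gradient descent on the Moreau-Yosida
# regularization F_mu of f(x) = |x - 3|.  Problem and constants are
# illustrative assumptions.

def prox(x, mu):
    """prox of mu*|. - 3|: shift, soft-threshold, shift back."""
    z = x - 3.0
    s = max(abs(z) - mu, 0.0) * (1.0 if z >= 0 else -1.0)
    return 3.0 + s

def grad_F(x, mu):
    """Gradient of the Moreau-Yosida envelope F_mu."""
    return (x - prox(x, mu)) / mu

mu = 0.5
x = y = 10.0
t = 1.0
for _ in range(200):
    x_new = y - mu * grad_F(y, mu)                   # step 1/L = mu on F_mu
    t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
    x, t = x_new, t_new
print(round(x, 3))  # prints 3.0
```

Note that the gradient step on F_mu with step size mu is exactly a proximal step on f, which is why errors in the prox computation translate directly into inexact gradients of F_mu.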

20.
Optimization, 2012, 61(11): 2289–2306
In this paper, the existence of critical points and weakly efficient points of a vector optimization problem is studied. A sequence of points in n-dimensional space is generated using positive definite matrices, as in quasi-Newton methods. It is proved that accumulation points of this sequence are critical points or weakly efficient points under different conditions. An algorithm is provided in this context. This method is free from any a priori chosen weighting factors or any other form of a priori ranking or ordering information for the objective functions. Moreover, the method does not depend on the initial point. The algorithm is verified on numerical examples.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号