1.
Error bounds for proximal point subproblems and associated inexact proximal point algorithms (cited: 1; self: 0; others: 1)
We study various error measures for approximate solution of proximal point regularizations of the variational inequality problem,
and of the closely related problem of finding a zero of a maximal monotone operator. A new merit function is proposed for
proximal point subproblems associated with the latter. This merit function is based on Burachik-Iusem-Svaiter’s concept of
ε-enlargement of a maximal monotone operator. For variational inequalities, we establish a precise relationship between the
regularized gap function, which is a natural error measure in this context, and our new merit function. Some error bounds
are derived using both merit functions for the corresponding formulations of the proximal subproblem. We further use the regularized
gap function to devise a new inexact proximal point algorithm for solving monotone variational inequalities. This inexact
proximal point method preserves all the desirable global and local convergence properties of the classical exact/inexact method,
while providing a constructive error tolerance criterion, suitable for further practical applications. The use of other tolerance
rules is also discussed.
Received: April 28, 1999 / Accepted: March 24, 2000 / Published online July 20, 2000
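The constructive tolerance idea described above can be sketched generically. Below is a minimal, illustrative inexact proximal point iteration for finding a zero of a maximal monotone operator, here the affine operator T(x) = Ax + b with A positive semidefinite; the relative-error acceptance test and all names are assumptions for illustration, not the paper's criterion.

```python
import numpy as np

# Illustrative sketch (not the paper's criterion): an inexact proximal point
# method for finding a zero of the maximal monotone affine operator
# T(x) = A x + b with A positive semidefinite. The exact subproblem is
# (I + c A) y = x_k - c b; we solve it only approximately with Richardson
# iterations and accept y once the subproblem residual is small relative to
# the step length ||x_k - y|| (a constructive, checkable tolerance).
def inexact_proximal_point(A, b, x0, c=1.0, tol=1e-8, sigma=0.1, max_iter=500):
    x = x0.astype(float)
    M = np.eye(len(b)) + c * A
    step = 0.5 / np.linalg.norm(M, 2)
    for _ in range(max_iter):
        rhs = x - c * b
        y = x.copy()
        for _ in range(200):                      # crude inner solver
            r = rhs - M @ y
            if np.linalg.norm(r) <= sigma * np.linalg.norm(x - y) + 1e-14:
                break                             # relative-error acceptance
            y = y + step * r
        if np.linalg.norm(x - y) <= tol:          # outer stopping test
            return y
        x = y
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -1.0])                        # T(x) = 0 at x* = (1, 1)
x_sol = inexact_proximal_point(A, b, np.zeros(2))
```

The point of the relative test is that the inner accuracy is tied to observable quantities, so it can actually be checked during the computation.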
2.
On the superlinear convergence of the variable metric proximal point algorithm using Broyden and BFGS matrix secant updating (cited: 2; self: 0; others: 2)
In previous work, the authors provided a foundation for the theory of variable metric proximal point algorithms in Hilbert
space. In that work, conditions were developed for global, linear, and superlinear convergence. This paper focuses attention
on two matrix secant updating strategies for the finite-dimensional case. These are the Broyden and BFGS updates. The BFGS
update is considered for application in the symmetric case, e.g., convex programming applications, while the Broyden update
can be applied to general monotone operators. Subject to the linear convergence of the iterates and a quadratic growth condition
on the inverse of the operator at the solution, superlinear convergence of the iterates is established for both updates.
These results are applied to show that the Chen–Fukushima variable metric proximal point algorithm is superlinearly convergent
when implemented with the BFGS update.
Received: September 12, 1996 / Accepted: January 7, 2000 / Published online March 15, 2000
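For reference, the BFGS matrix secant update mentioned above has the standard textbook closed form; the sketch below only verifies the secant equation and is not tied to the paper's variable metric setting.

```python
import numpy as np

# Standard BFGS matrix secant update (textbook form): given the current
# approximation B, the step s, and the gradient change t, the updated matrix
# satisfies the secant equation B_new @ s = t. All names are illustrative.
def bfgs_update(B, s, t):
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(t, t) / (t @ s)

B_new = bfgs_update(np.eye(2), np.array([1.0, 0.0]), np.array([2.0, 1.0]))
secant_residual = B_new @ np.array([1.0, 0.0]) - np.array([2.0, 1.0])
```

The update preserves symmetry when B is symmetric, which is why it suits the convex programming case mentioned in the abstract, while the (unsymmetric) Broyden update is the natural choice for general monotone operators.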
3.
Improving the robustness of descent-based methods for semismooth equations using proximal perturbations (cited: 1; self: 0; others: 1)
Stephen C. Billups 《Mathematical Programming》2000,87(1):153-175
A common difficulty encountered by descent-based equation solvers is convergence to a local (but not global) minimum of an
underlying merit function. Such difficulties can be avoided by using a proximal perturbation strategy, which allows the iterates
to escape the local minimum in a controlled fashion. This paper presents the proximal perturbation strategy for a general
class of methods for solving semismooth equations and proves subsequential convergence to a solution based upon a pseudomonotonicity
assumption. Based on this approach, two sample algorithms for solving mixed complementarity problems are presented. The first
uses an extremely simple (but not very robust) basic algorithm enhanced by the proximal perturbation strategy. The second
uses a more sophisticated basic algorithm based on the Fischer-Burmeister function. Test results on the MCPLIB and GAMSLIB
complementarity problem libraries demonstrate the improvement in robustness realized by employing the proximal perturbation
strategy.
Received July 15, 1998 / Revised version received June 28, 1999 / Published online November 9, 1999
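The Fischer-Burmeister function named above is a standard NCP-function; a minimal textbook sketch of its defining property (not the paper's algorithm):

```python
import math

# The Fischer-Burmeister NCP-function: phi(a, b) = sqrt(a^2 + b^2) - a - b
# vanishes exactly when a >= 0, b >= 0 and a*b = 0, which is what makes it a
# building block for equation-based complementarity methods. A generic
# textbook sketch, not the paper's algorithm.
def fischer_burmeister(a, b):
    return math.sqrt(a * a + b * b) - a - b

at_solution = fischer_burmeister(3.0, 0.0)    # complementary pair: phi = 0
off_solution = fischer_burmeister(1.0, 1.0)   # not complementary: phi != 0
```

Reformulating a mixed complementarity problem through this function yields the semismooth equation to which descent methods, and the proximal perturbation strategy, are applied.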
4.
An efficient algorithm for globally minimizing a quadratic function under convex quadratic constraints (cited: 12; self: 0; others: 12)
Le Thi Hoai An 《Mathematical Programming》2000,87(3):401-426
In this paper we investigate two approaches to minimizing a quadratic form subject to the intersection of finitely many ellipsoids.
The first approach is the d.c. (difference of convex functions) optimization algorithm (abbr. DCA) whose main tools are the
proximal point algorithm and/or the projection subgradient method in convex minimization. The second is a branch-and-bound
scheme using Lagrangian duality for bounding and ellipsoidal bisection in branching. The DCA, first introduced by Pham
Dinh in 1986 for general d.c. programs and later developed in our various works, is a local method, but from a good starting
point it often provides a global solution. This motivates us to combine the DCA and our branch-and-bound algorithm in order
to obtain a good initial point for the DCA and to prove the globality of the DCA. In both approaches we attempt to use
ellipsoidal constrained quadratic programs as the main subproblems. The idea is based upon the fact that these programs can
be efficiently solved by several available (polynomial and nonpolynomial time) algorithms; among them, the DCA with the restarting
procedure recently proposed by Pham Dinh and Le Thi has been shown to be the most robust and fastest for large-scale problems.
Several numerical experiments with dimension up to 200 are given which show the effectiveness and the robustness of the DCA
and the combined DCA-branch-and-bound algorithm.
Received: April 22, 1999 / Accepted: November 30, 1999 / Published online February 23, 2000
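A toy DCA iteration can illustrate the scheme described above. The decomposition and the closed-form subproblem below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy DCA sketch (illustrative, not the paper's implementation): minimize
# f(x) = (x^2 - 1)^2 written as a difference of convex functions,
#   f = g - h  with  g(x) = x^4 + 1  and  h(x) = 2 x^2.
# Each iteration linearizes h at x_k (y_k = h'(x_k) = 4 x_k) and minimizes
# the convex function g(x) - y_k x, which here has the closed-form
# solution x = (y_k / 4)^(1/3).
def dca(x0, iters=60):
    x = x0
    for _ in range(iters):
        y = 4.0 * x                  # (sub)gradient of h at the current point
        x = float(np.cbrt(y / 4.0))  # exact convex-subproblem minimizer
    return x

x_star = dca(0.5)
```

Started at x0 = 0.5 the iterates approach the global minimizer x = 1; started exactly at the stationary point x0 = 0 they stay there, which is the local-method behavior that motivates combining DCA with branch and bound.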
5.
Stephen M. Robinson 《Mathematical Programming》1999,86(1):41-50
This paper establishes a linear convergence rate for a class of epsilon-subgradient descent methods for minimizing certain
convex functions on ℝ^n. Currently prominent methods belonging to this class include the resolvent (proximal point) method and the bundle method
in proximal form (considered as a sequence of serious steps). Other methods, such as a variant of the proximal point method
given by Correa and Lemaréchal, can also fit within this framework, depending on how they are implemented. The convex functions
covered by the analysis are those whose conjugates have subdifferentials that are locally upper Lipschitzian at the origin,
a property generalizing classical regularity conditions.
Received March 29, 1996 / Revised version received March 5, 1999 / Published online June 11, 1999
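The resolvent (proximal point) step referenced above has a well-known closed form for simple functions; here is a generic textbook sketch for f(x) = |x| (soft-thresholding), unrelated to the paper's analysis:

```python
# Textbook sketch of one resolvent (proximal point) step for the convex
# function f(x) = |x|: the resolvent (I + c * subdifferential)^(-1) has the
# closed-form soft-thresholding expression below. A generic illustration,
# not code from the paper.
def prox_abs(x, c):
    # argmin_y |y| + (1 / (2c)) * (y - x)^2
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

# repeated proximal steps drive the iterate to the minimizer 0 of |x|
x = 5.0
for _ in range(10):
    x = prox_abs(x, 1.0)
```

Each step shrinks the distance to the minimizer by at least the proximal parameter, a toy instance of the linear-rate behavior the paper analyzes for the whole class.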
6.
A class of affine-scaling interior-point methods for bound constrained optimization problems is introduced which are locally
q-superlinearly or q-quadratically convergent. It is assumed that the strong second order sufficient optimality conditions at the
solution are satisfied, but strict complementarity is not required. The methods are modifications of the affine-scaling interior-point
Newton methods introduced by T. F. Coleman and Y. Li (Math. Programming, 67, 189–224, 1994). There are two modifications. One is a modification of the scaling matrix, the other one is the use of a
projection of the step to maintain strict feasibility rather than a simple scaling of the step. A comprehensive local convergence
analysis is given. A simple example is presented to illustrate the pitfalls of the original approach by Coleman and Li in
the degenerate case and to demonstrate the performance of the fast converging modifications developed in this paper.
Received October 2, 1998 / Revised version received April 7, 1999 / Published online July 19, 1999
7.
An interior Newton method for quadratic programming (cited: 2; self: 0; others: 2)
We propose a new (interior) approach for the general quadratic programming problem. We establish that the new method has strong
convergence properties: the generated sequence converges globally to a point satisfying the second-order necessary optimality
conditions, and the rate of convergence is 2-step quadratic if the limit point is a strong local minimizer. Published alternative
interior approaches do not share such strong convergence properties for the nonconvex case. We also report on the results
of preliminary numerical experiments: the results indicate that the proposed method has considerable practical potential.
Received October 11, 1993 / Revised version received February 20, 1996
Published online July 19, 1999
8.
This paper presents a method, based on Rockafellar’s proximal point algorithm and a cutting-plane technique, for minimizing a possibly nondifferentiable convex function f on ℝ^n. At each step, we use an approximate
proximal point p_a(x^k) of x^k to define v^k ∈ ∂_{ε_k} f(p_a(x^k)) with ε_k ≤ α‖v^k‖, where α is a constant. The method monitors the reduction in the value of ‖v^k‖ to identify when a line search on f should be used. The quasi-Newton step is used to reduce the value of ‖v^k‖. Without the differentiability of f, the method converges globally and the rate of convergence is Q-linear. Superlinear
convergence is also discussed, extending the characterization result of Dennis and Moré. Numerical results show the good performance
of the method.
Received October 3, 1995 / Revised version received August 20, 1998
Published online January 20, 1999
9.
Jan-J. Rückmann 《Mathematical Programming》1999,86(2):387-415
The paper deals with semi-infinite optimization problems which are defined by finitely many equality constraints and infinitely
many inequality constraints. We generalize the concept of strongly stable stationary points which was introduced by Kojima
for finite problems; it refers to the local existence and uniqueness of a stationary point for each sufficiently small perturbed
problem, where perturbations up to second order are allowed. Under the extended Mangasarian-Fromovitz constraint qualification
we present equivalent conditions for the strong stability of a considered stationary point in terms of first and second derivatives
of the involved functions. In particular, we discuss the case where the reduction approach is not satisfied.
Received June 30, 1995 / Revised version received October 9, 1998
Published online June 11, 1999
10.
The alternating directions method (ADM) is an effective method for solving a class of variational inequalities (VI) when the
proximal and penalty parameters in the sub-VI problems are properly selected. In this paper, we propose a new ADM variant which
approximately solves two strongly monotone sub-VI problems in each iteration and allows the parameters to vary from
iteration to iteration. The convergence of the proposed method is proved under quite mild assumptions and flexible parameter
conditions.
Received: January 4, 2000 / Accepted: October 2001 / Published online February 14, 2002
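The flavor of an alternating directions iteration can be sketched on a toy problem. The classic fixed-parameter two-block scheme below is illustrative only; the paper's method is more general, allowing varying parameters and approximate sub-VI solves:

```python
# Illustrative fixed-parameter alternating directions sketch on a toy
# problem (not the paper's variable-parameter method). We minimize
#   (x - 3)^2 + (z - 1)^2  subject to  x = z,
# whose solution is x = z = 2. Each subproblem of the augmented Lagrangian
# has a closed form here.
def alternating_directions(rho=1.0, iters=200):
    x = z = u = 0.0
    for _ in range(iters):
        x = (6.0 + rho * (z - u)) / (2.0 + rho)  # x-subproblem, closed form
        z = (2.0 + rho * (x + u)) / (2.0 + rho)  # z-subproblem, closed form
        u = u + x - z                            # multiplier update
    return x, z

x_out, z_out = alternating_directions()
```

In the VI setting the two minimization subproblems are replaced by (strongly monotone) sub-VI problems, which is the structure the abstract refers to.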
11.
In this paper we show that the cut does not need to go through the query point: it can be deep or shallow. The primal framework
leads to a simple analysis of the potential variation, which shows that the inequality needed for convergence of the algorithm
is in fact attained at the first iterate of the feasibility step.
Received July 3, 1996 / Revised version received July 11, 1997 / Published online August 18, 1998
12.
A trust region and affine scaling interior point method for nonconvex minimization with linear inequality constraints (cited: 12; self: 0; others: 12)
A trust region and affine scaling interior point method (TRAM) is proposed for a general nonlinear minimization with linear
inequality constraints [8]. In the proposed approach, a Newton step is derived from the complementarity conditions. Based
on this Newton step, a trust region subproblem is formed, and the original objective function is monotonically decreased.
Explicit sufficient decrease conditions are proposed for satisfying the first order and second order necessary conditions. The
objective of this paper is to establish global and local convergence properties of the proposed trust region and affine scaling
interior point method. It is shown that the proposed explicit decrease conditions are sufficient to satisfy complementarity,
dual feasibility and second order necessary conditions, respectively. It is also established that a trust region solution is
asymptotically in the interior of the proposed trust region subproblem and a properly damped trust region step can achieve
quadratic convergence.
Received: January 29, 1999 / Accepted: November 22, 1999 / Published online February 23, 2000
13.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good
bounds for use in a branch and bound method. Methods such as the subgradient method and bundle methods assume only one subgradient
is available at each point, but in many situations there is more information available. We present a new method for optimizing
such functions, which is related to steepest descent, but uses an outer approximation to the subdifferential to avoid some
of the numerical problems with the steepest descent approach. We provide convergence results for a class of outer approximations,
and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation
of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the
projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe.
Received September 10, 1998 / Revised version received August 1999 / Published online December 15, 1999
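The baseline subgradient method that the paper improves on can be sketched on a small piecewise affine function; everything below is an illustrative toy, not the paper's outer approximation:

```python
# Toy subgradient descent on a piecewise affine function
# f(x) = max(-x, x - 2), minimized at x = 1. A subgradient at x is the slope
# of an active piece; diminishing steps 1/k give (slow) convergence. This is
# the simple baseline, not the paper's outer approximation method.
def subgradient(x, pieces):
    vals = [a * x + b for a, b in pieces]
    i = max(range(len(pieces)), key=lambda j: vals[j])
    return pieces[i][0]                  # slope of an active piece

pieces = [(-1.0, 0.0), (1.0, -2.0)]      # f(x) = max(-x, x - 2)
x = 5.0
for k in range(1, 200):
    x -= (1.0 / k) * subgradient(x, pieces)
```

Near the kink the iterates oscillate with shrinking amplitude, using only one subgradient per point; exploiting several subgradients at once, via an outer approximation of the subdifferential, is exactly the extra information the paper's method uses.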
14.
《Numerical Functional Analysis & Optimization》2013,34(7-8):1013-1035
We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection–proximal and extragradient–proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem.
15.
Entropic proximal decomposition methods for convex programs and variational inequalities (cited: 2; self: 0; others: 2)
We consider convex optimization and variational inequality problems with a given separable structure. We propose a new decomposition
method for these problems which combines the recent logarithmic-quadratic proximal theory introduced by the authors with a
decomposition method given by Chen-Teboulle for convex problems with particular structure. The resulting method allows one to
produce, for the first time, provably convergent decomposition schemes based on C^∞ Lagrangians for solving convex structured
problems. Under the sole assumption that the primal-dual problems have nonempty
solution sets, global convergence of the primal-dual sequences produced by the algorithm is established.
Received: October 6, 1999 / Accepted: February 2001 / Published online September 17, 2001
16.
This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality
constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods
for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the
iterates is progressively enforced thanks to shift variables and an exact penalty approach. Global and q-superlinear convergence is obtained for a fixed penalty parameter; global convergence to the analytic center of the optimal
set is ensured when the barrier parameter tends to zero, provided strict complementarity holds.
Received: December 21, 2000 / Accepted: July 13, 2001 / Published online February 14, 2002
17.
Inexact implicit methods for monotone general variational inequalities (cited: 32; self: 0; others: 32)
Bingsheng He 《Mathematical Programming》1999,86(1):199-217
Solving a variational inequality problem is equivalent to finding a solution of a system of nonsmooth equations. Recently,
we proposed an implicit method which solves the monotone variational inequality problem via a series of systems of nonlinear
smooth (whenever the operator is smooth) equations. It can exploit the facilities of classical Newton-like methods for
smooth equations. In this paper, we extend the method to solve a class of general variational inequality problems. Moreover, we improve the implicit method to allow inexact solutions of the systems of nonlinear equations at each iteration.
The method is shown to preserve the same convergence properties as the original implicit method.
Received July 31, 1995 / Revised version received January 15, 1999 / Published online May 28, 1999
18.
A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities (cited: 17; self: 0; others: 17)
In this paper we take a new look at smoothing Newton methods for solving the nonlinear complementarity problem (NCP) and the
box constrained variational inequalities (BVI). Instead of using an infinite sequence of smoothing approximation functions,
we use a single smoothing approximation function and Robinson’s normal equation to reformulate NCP and BVI as an equivalent
nonsmooth equation H(u,x)=0, where H: ℝ^{2n} → ℝ^{2n}, u ∈ ℝ^n is a parameter variable and x ∈ ℝ^n is the original variable.
The central idea of our smoothing Newton methods is that we construct a sequence {z^k = (u^k, x^k)} such that the mapping H(·)
is continuously differentiable at each z^k and may be non-differentiable at the limiting point of {z^k}. We prove that the
three most often used Gabriel-Moré smoothing functions can generate strongly semismooth functions, which
play a fundamental role in establishing superlinear and quadratic convergence of our new smoothing Newton methods. We do not
require any function value of F or its derivative value outside the feasible region, while at each step we only solve a linear
system of equations; if we choose a certain smoothing function, only a reduced form needs to be solved. Preliminary numerical
results show that the proposed methods for particularly chosen smoothing functions are very promising.
Received June 23, 1997 / Revised version received July 29, 1999 / Published online December 15, 1999
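One function often cited alongside the Gabriel-Moré smoothing class is the Chen-Harker-Kanzow-Smale smoothing of min(a, b); a minimal sketch of the smoothing idea, with the pairing to Robinson's normal equation omitted:

```python
import math

# Chen-Harker-Kanzow-Smale smoothing of min(a, b), often cited in connection
# with the Gabriel-More class: for mu > 0 the function is smooth, and as
# mu -> 0 it recovers min(a, b), which encodes the complementarity condition
# min(a, b) = 0. A sketch of the smoothing idea only; the reformulation via
# Robinson's normal equation used in the paper is omitted.
def phi(mu, a, b):
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

exact = phi(0.0, 3.0, 1.0)       # equals min(3, 1) = 1
smoothed = phi(1e-6, 3.0, 1.0)   # close smooth approximation for small mu
```

The benefit exploited by smoothing Newton methods is that phi is continuously differentiable wherever mu > 0, so ordinary Newton steps apply even though the limit function min(a, b) is nonsmooth.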
19.
G. Still 《Mathematical Programming》2001,91(1):53-69
The discretization approach for solving semi-infinite optimization problems is considered. We are interested in the convergence
rate of the error between the solution of the semi-infinite problem and the solution of the discretized program depending
on the discretization mesh-size. It will be shown how this rate depends on whether the minimizer is strict of order one or
two and on whether the discretization includes boundary points of the index set in a specific way. This is done for ordinary
and for generalized semi-infinite problems.
Received: November 21, 2000 / Accepted: May 2001 / Published online September 17, 2001
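The discretization error behavior can be illustrated on a toy semi-infinite constraint; the example below is an assumption-laden sketch, not from the paper:

```python
import numpy as np

# Toy illustration of discretization for a semi-infinite program (sketch
# only, not the paper's setting): minimize x subject to x >= sin(t) for all
# t in [0, 2*pi]. The exact solution is x* = 1. The discretized program
# keeps only finitely many grid constraints, so its value is the maximum of
# sin over the grid, and the error shrinks as the mesh is refined.
def discretized_value(n):
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return float(np.max(np.sin(t)))

errors = [1.0 - discretized_value(n) for n in (10, 100, 1000)]
```

Since the maximizer t = pi/2 is interior and nondegenerate, the error here shrinks roughly quadratically in the mesh size, matching the kind of mesh-dependence the paper quantifies.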
20.
Javad Mashreghi 《Numerical Functional Analysis & Optimization》2013,34(9):1053-1071
We develop an inexact proximal point algorithm for solving equilibrium problems in Banach spaces which consists of two principal steps and admits an interesting geometric interpretation. At a certain iterate, first we solve an inexact regularized equilibrium problem with a flexible error criterion to obtain an auxiliary point. Using this auxiliary point and the inexact solution of the previous iterate, we construct two appropriate hyperplanes which separate the current iterate from the solution set of the given problem. Then the next iterate is defined as the Bregman projection of the initial point onto the intersection of two halfspaces, obtained from the two constructed hyperplanes, containing the solution set of the original problem. Assuming standard hypotheses, we present a convergence analysis for our algorithm, establishing that the generated sequence strongly and globally converges to a solution of the problem which is the closest one to the starting point of the algorithm.
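In the Euclidean case the Bregman projection described above reduces to the ordinary orthogonal projection, and projecting onto a single halfspace has a closed form; a generic sketch with illustrative names:

```python
import numpy as np

# Euclidean sketch of the projection step: with the squared-norm Bregman
# function, the Bregman projection is the ordinary orthogonal projection.
# Projecting a point onto the halfspace {x : a.x <= beta} has a closed form.
# Names are illustrative, not from the paper.
def project_halfspace(x, a, beta):
    viol = a @ x - beta
    if viol <= 0.0:
        return x                          # already in the halfspace
    return x - (viol / (a @ a)) * a       # land on the bounding hyperplane

x1 = project_halfspace(np.array([3.0, 0.0]), np.array([1.0, 0.0]), 1.0)
```

In the Banach-space algorithm the same geometric step is carried out with a Bregman distance, and the projection is taken onto the intersection of the two separating halfspaces rather than a single one.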