20 similar documents retrieved (search time: 612 ms)
1.
This paper presents a unified framework of proximal point algorithms (PPAs) for solving general variational inequalities (GVIs).
Some existing PPAs for classical variational inequalities, including both the exact and inexact versions, are extended to
solving GVIs. Consequently, several new PPA-based algorithms are proposed.
M. Li was supported by NSFC Grant 10571083 and SRFDP Grant 200802861031.
L.Z. Liao was supported in part by grants from Hong Kong Baptist University and the Research Grant Council of Hong Kong.
X.M. Yuan was supported in part by FRG/08-09/II-40 from Hong Kong Baptist University and NSFC Grant 10701055.
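The exact proximal point iteration underlying these extensions can be sketched concretely. As an illustrative assumption (not taken from the paper), the operator below is $T = \partial|\cdot|$, whose resolvent $(I + \lambda T)^{-1}$ is the soft-thresholding map, so the iterates reach the unique zero $x^* = 0$:

```python
# Exact PPA x_{k+1} = (I + lam*T)^{-1}(x_k) for a maximal monotone T.
# Illustrative choice: T = subdifferential of |x|, whose resolvent is
# the soft-thresholding map; the unique zero of T is x* = 0.
def soft_threshold(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(x0, lam=1.0, iters=10):
    x = x0
    for _ in range(iters):
        x = soft_threshold(x, lam)   # one resolvent (proximal) step
    return x
```

Each iteration here evaluates the resolvent exactly; the inexact variants surveyed in these abstracts replace this exact evaluation with an approximate one.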
2.
Since proximal point algorithms (PPAs) are attractive for solving monotone variational inequalities, various versions of PPAs have been developed for variant variational inequalities. In this paper, we prove a worst-case $O(1/t)$ convergence rate in an ergodic sense for a PPA-based numerical algorithm from the literature. Some numerical results are also reported to verify the computational efficiency.
3.
We extend some results due to Thanh-Hao (Acta Math. Vietnam. 31: 283–289, [2006]) and Noor (J. Optim. Theory Appl. 115:447–452, [2002]). The first paper established a convergence theorem for the Tikhonov regularization method (TRM) applied to finite-dimensional
pseudomonotone variational inequalities (VIs), answering in the affirmative an open question stated by Facchinei and Pang
(Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer, New York, [2003]). The second paper discussed the application of the proximal point algorithm (PPA) to pseudomonotone VIs. In this paper,
new facts on the convergence of TRM and PPA (both the exact and inexact versions of PPA) for pseudomonotone VIs in Hilbert
spaces are obtained and a partial answer to a question stated in (Acta Math. Vietnam. 31:283–289, [2006]) is given. As a byproduct, we show that the convergence theorem for inexact PPA applied to infinite-dimensional monotone
variational inequalities can be proved without using the theory of maximal monotone operators.
This research was supported in part by a grant from the National Sun Yat-Sen University, Kaohsiung, Taiwan. It has been carried
out under the agreement between the National Sun Yat-Sen University, Kaohsiung, Taiwan and the University of Pisa, Pisa, Italy.
The authors thank the anonymous referee for useful comments and suggestions.
4.
On the convergence of projection methods: Application to the decomposition of affine variational inequalities. Cited by 3 (0 self-citations)
In this paper, we first discuss the global convergence of symmetric projection methods for solving nonlinear monotone variational inequalities under a cocoercivity assumption. A similar analysis is applied to asymmetric projection methods, when the mapping is affine and monotone. Under a suitable choice of the projection matrix, decomposition can be achieved. It is proved that this scheme achieves a linear convergence rate, thus enhancing results previously obtained by Tseng (Ref. 1) and by Luo and Tseng (Ref. 2). The research of the first author was supported by NSERC Grant A5789 and DND-FUHBP. The research of the second author was supported by NSERC Grant OGP-0157735. The authors are indebted to the referees and Associate Editor P. Tseng for their constructive comments.
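The basic projection scheme analyzed in this entry can be sketched in a few lines. The data are illustrative assumptions: $F(x) = 2x + (1,-1)$ is affine and monotone (indeed cocoercive), the feasible set is the box $[0,1]^2$, so the projection is a componentwise clip:

```python
# Projection method x_{k+1} = P_Omega(x_k - gamma*F(x_k)) on a small
# illustrative example: F affine and cocoercive, Omega = [0,1]^2.
def project_box(x, lo=0.0, hi=1.0):
    return [min(max(xi, lo), hi) for xi in x]

def F(x):
    # F(x) = 2x + (1, -1): affine, monotone, cocoercive with modulus 1/2
    return [2.0 * x[0] + 1.0, 2.0 * x[1] - 1.0]

def projection_method(x, gamma=0.25, iters=200):
    for _ in range(iters):
        x = project_box([xi - gamma * fi for xi, fi in zip(x, F(x))])
    return x

sol = projection_method([1.0, 0.0])  # converges to the solution (0, 0.5)
```

With the step-size below the cocoercivity threshold, the fixed-point map is a contraction and the iterates converge linearly, in line with the rate results discussed above.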
5.
An extended descent framework for variational inequalities. Cited by 1 (0 self-citations)
In this paper, we develop a very general descent framework for solving asymmetric, monotone variational inequalities. We introduce two classes of differentiable merit functions and the associated global convergence frameworks which include, as special instances, the projection, Newton, quasi-Newton, linear Jacobi, and nonlinear methods. The generic algorithm is very flexible and consequently well suited for exploiting any particular structure of the problem. This research was supported by the National Science and Engineering Research Council of Canada, Grant A5789, and by the Department of National Defence of Canada, Grant FUHBP.
6.
To solve monotone variational inequalities, some existing APPA-based descent methods utilize the iterates generated by the well-known approximate proximal point algorithms (APPA) to construct descent directions. This paper aims at improving these APPA-based descent methods by incorporating optimal step-sizes in both the extra-gradient steps and the descent steps. Global convergence is proved under mild assumptions. The superiority to existing methods is verified both theoretically and computationally.
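The extra-gradient step mentioned here can be sketched as follows. The skew-symmetric mapping below is an illustrative assumption: it is monotone but not cocoercive, the classical situation where a plain projection step fails while the extragradient step converges. The example is unconstrained, so the projection is the identity:

```python
# Extragradient sketch: predictor y_k = x_k - gamma*F(x_k), then
# corrector x_{k+1} = x_k - gamma*F(y_k).
def F(x):
    # skew-symmetric mapping: monotone, not cocoercive; the plain step
    # x - gamma*F(x) strictly increases the norm, yet extragradient converges
    return [x[1], -x[0]]

def extragradient(x, gamma=0.5, iters=200):
    for _ in range(iters):
        y = [xi - gamma * fi for xi, fi in zip(x, F(x))]   # extra-gradient step
        x = [xi - gamma * fi for xi, fi in zip(x, F(y))]   # update step
    return x

sol = extragradient([1.0, 1.0])
```

For this mapping the update matrix has spectral norm $\sqrt{(1-\gamma^2)^2 + \gamma^2} < 1$ for small $\gamma$, so the iterates contract to the unique solution $x^* = 0$.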
7.
8.
Convergence analysis of inexact proximal point algorithms for monotone variational inequalities. Cited by 7 (0 self-citations)
Wang Zhihua, Numerical Mathematics: A Journal of Chinese Universities (高等学校计算数学学报), 2003, 25(4):336-343
We consider a proximal point algorithm (PPA) for solving monotone variational inequalities. PPA generates a sequence by solving a sequence of strongly monotone subproblems. However, solving the subproblems exactly is either expensive or impossible, and several inexact proximal point algorithms (IPPA) have been developed in the literature. In this paper, we present a criterion for approximately solving the subproblems. It requires only one simple additional computation on top of the original algorithm, and the convergence criterion becomes milder. We show that the method converges globally under the new criterion, provided that the solution set of the problem is nonempty.
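A sketch of an inexact PPA of the kind discussed here. The residual-based stopping rule with a summable tolerance sequence is an illustrative assumption, not the paper's specific criterion, and the operator $F(x) = x^3 + x$ is a hypothetical example with unique zero $x^* = 0$:

```python
# Inexact PPA sketch: each strongly monotone subproblem
# F(z) + (z - x_k)/lam = 0 is solved only until its residual drops
# below a summable tolerance eps_k = 1/k^2 (assumed criterion).
def inexact_ppa(x0, lam=1.0, outer=30):
    F = lambda z: z ** 3 + z                  # monotone, unique zero x* = 0
    x = x0
    for k in range(1, outer + 1):
        eps = 1.0 / k ** 2                    # summable inexactness sequence
        g = lambda z: F(z) + (z - x) / lam    # strongly monotone subproblem
        lo, hi = -10.0, 10.0                  # bracket for bisection
        z = 0.5 * (lo + hi)
        while abs(g(z)) > eps:                # stop early: approximate solve
            if g(z) > 0.0:
                hi = z
            else:
                lo = z
            z = 0.5 * (lo + hi)
        x = z                                 # accept the approximate solution
    return x
```

Since the subproblem map is strongly monotone with modulus $1 + 1/\lambda$, a residual below $\varepsilon_k$ bounds the distance to the exact subproblem solution, and the summability of $\varepsilon_k$ preserves global convergence.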
9.
Error bounds for proximal point subproblems and associated inexact proximal point algorithms. Cited by 1 (0 self-citations)
We study various error measures for approximate solution of proximal point regularizations of the variational inequality problem,
and of the closely related problem of finding a zero of a maximal monotone operator. A new merit function is proposed for
proximal point subproblems associated with the latter. This merit function is based on Burachik-Iusem-Svaiter’s concept of
ε-enlargement of a maximal monotone operator. For variational inequalities, we establish a precise relationship between the
regularized gap function, which is a natural error measure in this context, and our new merit function. Some error bounds
are derived using both merit functions for the corresponding formulations of the proximal subproblem. We further use the regularized
gap function to devise a new inexact proximal point algorithm for solving monotone variational inequalities. This inexact
proximal point method preserves all the desirable global and local convergence properties of the classical exact/inexact method,
while providing a constructive error tolerance criterion, suitable for further practical applications. The use of other tolerance
rules is also discussed.
Received: April 28, 1999 / Accepted: March 24, 2000 / Published online: July 20, 2000
10.
The augmented Lagrangian method is attractive for constrained optimization. When it is applied to a class of constrained variational
inequalities, the sub-problem in each iteration is a nonlinear complementarity problem (NCP). By introducing a logarithmic-quadratic
proximal term, the sub-NCP becomes a system of nonlinear equations, which we call the LQP system. Solving a system of nonlinear equations is easier than the related NCP, because the solution of the NCP has combinatorial
properties. In this paper, we present an inexact logarithmic-quadratic proximal augmented Lagrangian method for a class of
constrained variational inequalities, in which the LQP system is solved approximately under a rather relaxed inexactness criterion.
The generated sequence is Fejér monotone and the global convergence is proved. Finally, some numerical test results for traffic
equilibrium problems are presented to demonstrate the efficiency of the method.
11.
This paper is devoted to the study of a new necessary condition in variational inequality problems: approximated gradient
projection (AGP). A feasible point satisfies such condition if it is the limit of a sequence of the approximated solutions
of approximations of the variational problem. This condition comes from optimization where the error in the approximated solution
is measured by the projected gradient onto the approximated feasible set, which is obtained from a linearization of the constraints
with slack variables to make the current point feasible.
We state the AGP condition for variational inequality problems and show that it is necessary for a point to be a solution
even without constraint qualifications (e.g., Abadie’s). Moreover, the AGP condition is sufficient in convex variational inequalities.
Sufficiency also holds for variational inequalities involving maximal monotone operators subject to the boundedness of the
vectors in the image of the operator (playing the role of the gradients). Since AGP is a condition verified by a sequence,
it is particularly interesting for iterative methods.
Research of R. Gárciga Otero was partially supported by CNPq, FAPERJ/Cientistas do Nosso Estado, and PRONEX Optimization.
Research of B.F. Svaiter was partially supported by CNPq Grants 300755/2005-8 and 475647/2006-8 and by PRONEX Optimization.
12.
The approximate proximal point algorithm (APPA) is an effective method for solving monotone variational inequalities: it approximates a solution of the variational inequality by solving a sequence of strongly monotone subproblems, which generates a sequence of approximate proximal points. The extragradient algorithm, in turn, adds one extra projection per iteration to overcome the overly restrictive conditions of general projection algorithms. Neither method, however, removes the difficulty of computing projections onto the irregular closed convex regions that arise in the iterations. In this paper, building on the iterative scheme of the extragradient algorithm, we construct a half-space containing the original projection region and carry out the projection onto this half-space instead, which simplifies the projection computation; with a corresponding restriction on the new proximal point sequence, the modified algorithm retains good convergence properties.
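The half-space projection that replaces the difficult projection in this entry has a simple closed form: for $H = \{z : \langle a, z\rangle \le b\}$, the projection of $x$ is $x - \max(0, (\langle a, x\rangle - b)/\|a\|^2)\,a$. The data passed in below are illustrative:

```python
# Closed-form projection onto the half-space H = {z : <a, z> <= b},
# the cheap substitute for projecting onto an irregular convex region.
def project_halfspace(x, a, b):
    viol = sum(ai * xi for ai, xi in zip(a, x)) - b
    if viol <= 0.0:
        return list(x)                       # x already lies in H
    scale = viol / sum(ai * ai for ai in a)  # (<a, x> - b) / ||a||^2
    return [xi - scale * ai for xi, ai in zip(x, a)]
```

Because the half-space contains the original feasible region, this projection is well defined for any iterate and costs only one inner product, regardless of how complicated the original region is.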
13.
P. Tossings, Applied Mathematics and Optimization, 1994, 29(2):125-159
Following the works of R. T. Rockafellar, to search for a zero of a maximal monotone operator, and of B. Lemaire, to solve convex optimization problems, we present a perturbed version of the proximal point algorithm. We apply this new algorithm to convex optimization and to variational inclusions or, more particularly, to variational inequalities.
14.
Convergence rate analysis of iterative algorithms for solving variational inequality problems. Cited by 3 (0 self-citations)
M.V. Solodov, Mathematical Programming, 2003, 96(3):513-528
We present a unified convergence rate analysis of iterative methods for solving the variational inequality problem. Our results
are based on certain error bounds; they subsume and extend the linear and sublinear rates of convergence established in several
previous studies. We also derive a new error bound for $\gamma$-strictly monotone variational inequalities. The class of algorithms
covered by our analysis is fairly broad. It includes some classical methods for variational inequalities, e.g., the extragradient,
matrix splitting, and proximal point methods. For these methods, our analysis gives estimates not only for linear convergence
(which had been studied extensively), but also sublinear, depending on the properties of the solution. In addition, our framework
includes a number of algorithms to which previous studies are not applicable, such as the infeasible projection methods, a
separation-projection method, (inexact) hybrid proximal point methods, and some splitting techniques. Finally, our analysis
covers certain feasible descent methods of optimization, for which similar convergence rate estimates have been recently obtained
by Luo [14].
Received: April 17, 2001 / Accepted: December 10, 2002
Published online: April 10, 2003
Research of the author is partially supported by CNPq Grant 200734/95–6, by PRONEX-Optimization, and by FAPERJ.
Key Words. Variational inequality – error bound – rate of convergence
Mathematics Subject Classification (2000): 90C30, 90C33, 65K05
15.
Using duality, we reformulate the asymmetric variational inequality (VI) problem over a conic region as an optimization problem. We give sufficient conditions for the convexity of this reformulation. We thereby identify a class of VIs that includes monotone affine VIs over polyhedra, which may be solved by commercial optimization solvers.
16.
Paulo J. S. Silva, Jonathan Eckstein, Computational Optimization and Applications, 2006, 33(2-3):115-156
We consider the variational inequality problem formed by a general set-valued maximal monotone operator and a possibly unbounded
“box” in $\mathbb{R}^n$, and study its solution by proximal methods whose distance regularizations are coercive over the box. We prove convergence
for a class of double regularizations generalizing a previously-proposed class of Auslender et al. Using these results, we derive a broadened class of augmented
Lagrangian methods. We point out some connections between these methods and earlier work on “pure penalty” smoothing methods
for complementarity; this connection leads to a new form of augmented Lagrangian based on the “neural” smoothing function.
Finally, we computationally compare this new kind of augmented Lagrangian to three previously-known varieties on the MCPLIB
problem library, and show that the neural approach offers some advantages. In these tests, we also consider primal-dual approaches
that include a primal proximal term. Such a stabilizing term tends to slow down the algorithms, but makes them more robust.
This author was partially supported by CNPq, Grant PQ 304133/2004-3 and PRONEX-Optimization.
17.
Bruce Cox, Anatoli Juditsky, Arkadi Nemirovski, Journal of Optimization Theory and Applications, 2017, 172(2):402-435
The majority of first-order methods for large-scale convex–concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. To make such an algorithm practical, the problem's domain should be proximal-friendly, that is, admit a strongly convex function with easy-to-minimize linear perturbations. As a by-product, such a domain admits a computationally cheap linear minimization oracle (LMO) capable of minimizing linear forms. There are, however, important situations where a cheap LMO is available but the problem domain is not proximal-friendly, which motivates the search for algorithms based solely on an LMO. For smooth convex minimization, there exists a classical algorithm using an LMO: conditional gradient. In contrast, the similar techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex–concave saddle point problems, even as simple as bilinear ones, and variational inequalities with monotone operators, even as simple as affine ones) are quite recent and utilize a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper was to develop alternative (and seemingly much simpler) decomposition techniques based on an LMO for bilinear saddle point problems and for variational inequalities with affine monotone operators.
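For the smooth convex case mentioned above, the classical LMO-based method (conditional gradient, or Frank–Wolfe) can be sketched. The probability simplex as domain, the quadratic objective $f(x) = \tfrac12\|x - c\|^2$, and the point $c$ are illustrative assumptions, not taken from the paper:

```python
# Conditional gradient (Frank-Wolfe) using only a linear minimization
# oracle (LMO), here over the probability simplex.
def lmo_simplex(g):
    # <g, .> is minimized over the simplex at the vertex whose index
    # corresponds to the smallest component of g
    i = min(range(len(g)), key=lambda j: g[j])
    v = [0.0] * len(g)
    v[i] = 1.0
    return v

def frank_wolfe(grad, x, iters=500):
    for t in range(iters):
        v = lmo_simplex(grad(x))
        gamma = 2.0 / (t + 2.0)    # standard open-loop step size
        x = [(1.0 - gamma) * xi + gamma * vi for xi, vi in zip(x, v)]
    return x

c = [0.5, 0.3, 0.2]                # illustrative target inside the simplex
sol = frank_wolfe(lambda x: [xi - ci for xi, ci in zip(x, c)], [1.0, 0.0, 0.0])
```

The method never projects: each iterate is a convex combination of simplex vertices returned by the LMO, which is exactly the proximal-free structure the paper exploits.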
18.
In this paper, we prove that each monotone variational inequality is equivalent to a two-mapping variational inequality problem. On the basis of this fact, a new class of iterative methods for the solution of nonlinear monotone variational inequality problems is presented. The global convergence of the proposed methods is established under the monotonicity assumption. The conditions concerning the implementability of the algorithms are also discussed. The proposed methods have a close relationship to the Douglas–Rachford operator splitting method for monotone variational inequalities.
19.
For variational inequalities characterizing saddle points of Lagrangians associated with convex programming problems in Hilbert spaces, the convergence of an interior proximal method based on Bregman distance functionals is studied. The convergence results admit a successive approximation of the variational inequality and an inexact treatment of the proximal iterations. An analogous analysis is performed for finite-dimensional complementarity problems with multi-valued monotone operators.
20.
M. Boulbrachene, Applied Mathematics Letters, 2002, 15(8):6443-1017
We establish an optimal L∞-error estimate for a class of variational inequalities (VIs) with a nonlinear source term, using a very simple argument based mainly on the discrete L∞-stability property with respect to the right-hand side in elliptic VIs. We also show that the same approach extends to the corresponding noncoercive problems, where an optimal uniform convergence order is obtained as well.