Similar Documents
20 similar documents found (search time: 546 ms)
1.
Abstract

This paper is devoted to the study of proximal distances defined over symmetric cones, which include the non-negative orthant, the second-order cone and the cone of positive semi-definite symmetric matrices. Specifically, our first aim is to provide two ways to build them. For this, we consider two classes of real-valued functions satisfying some assumptions. Then, we show that the corresponding spectrally defined functions define proximal distances. In addition, we present several examples and some properties of this distance. Taking into account these properties, we analyse the convergence of proximal-type algorithms for solving convex symmetric cone programming (SCP) problems, and we study the asymptotic behaviour of primal central paths associated with a proximal distance. Finally, for linear SCP problems, we provide a relationship between the proximal sequence and the primal central path.
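As a concrete illustration of the spectral machinery behind such constructions, the following minimal NumPy sketch specializes to the second-order cone: a vector is decomposed into its spectral values and idempotents, and a scalar kernel (here the entropy kernel, chosen only for illustration) is lifted to a spectrally defined function. The function names and the kernel are assumptions for this example, not the paper's exact construction.

```python
import numpy as np

def soc_spectral(x):
    """Spectral decomposition of x = (x0, xbar) with respect to the
    second-order cone: x = lam1*u1 + lam2*u2, lam_{1,2} = x0 -/+ ||xbar||."""
    x0, xbar = x[0], x[1:]
    nrm = np.linalg.norm(xbar)
    w = xbar / nrm if nrm > 0 else np.zeros_like(xbar)
    lam1, lam2 = x0 - nrm, x0 + nrm
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return (lam1, lam2), (u1, u2)

def spectral_value(phi, x):
    """Spectrally defined (trace-type) function Phi(x) = phi(lam1) + phi(lam2)."""
    (lam1, lam2), _ = soc_spectral(x)
    return phi(lam1) + phi(lam2)

# Entropy kernel, defined on positive spectral values (illustrative choice).
phi = lambda t: t * np.log(t) - t + 1.0

x = np.array([2.0, 0.5, 1.0])   # interior point of the cone: 2 > ||(0.5, 1)||
print(spectral_value(phi, x))
```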

2.
We present an interior proximal method with a Bregman distance for solving the minimization problem with a quasiconvex objective function under nonnegativity constraints. The Bregman function is taken to be separable and zone coercive, with the zone being the interior of the positive orthant. Under the assumption that the solution set is nonempty and the objective function is continuously differentiable, we establish the well-definedness of the sequence generated by our algorithm and obtain two important convergence results; the main one shows that the sequence converges to a solution point of the problem when the regularization parameters go to zero.

3.
In this paper, we study the weak sharpness of the solution set of the variational inequality problem (in short, VIP) and the finite convergence property of sequences generated by algorithms for finding solutions of the VIP. In particular, we give some characterizations of weak sharpness of the solution set of the VIP without considering the primal or dual gap function. We establish an abstract result on the finite convergence property for a sequence generated by some iterative methods. We then apply this abstract result to discuss the finite termination property of the sequences generated by the proximal point method, the exact proximal point method and the gradient projection method. We also give an estimate of the number of iterations needed for the sequence to reach a solution of the VIP.

4.
We consider an extended second-order cone linear complementarity problem (SOCLCP), which includes the generalized SOCLCP, the horizontal SOCLCP, the vertical SOCLCP, and the mixed SOCLCP as special cases. In this paper, we present some simple second-order cone constrained and unconstrained reformulation problems, and under mild conditions prove the equivalence between the stationary points of these optimization problems and the solutions of the extended SOCLCP. In particular, we develop a proximal gradient descent method for solving the second-order cone constrained problems. This method is very simple and at each iteration makes only one Euclidean projection onto second-order cones. We establish global convergence and, under a local Lipschitzian error bound assumption, a linear rate of convergence. Numerical comparisons are made with the limited-memory BFGS method for the unconstrained reformulations, which verify the effectiveness of the proposed method.
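The single projection per iteration mentioned above has a well-known closed form. The sketch below (NumPy) pairs that projection with a generic projected-gradient loop on a made-up toy objective; the toy problem and the function names are assumptions for illustration, not the paper's reformulation.

```python
import numpy as np

def proj_soc(z):
    """Closed-form Euclidean projection of z = (z0, zbar) onto the
    second-order cone K = {(x0, xbar) : x0 >= ||xbar||}."""
    z0, zbar = z[0], z[1:]
    r = np.linalg.norm(zbar)
    if r <= z0:                      # already in the cone
        return z.copy()
    if r <= -z0:                     # in the polar cone -> projects to 0
        return np.zeros_like(z)
    alpha = 0.5 * (z0 + r)
    return np.concatenate(([alpha], alpha * zbar / r))

def proximal_gradient(grad_f, x0, step, iters=200):
    """Projected/proximal gradient: one SOC projection per iteration."""
    x = x0.copy()
    for _ in range(iters):
        x = proj_soc(x - step * grad_f(x))
    return x

# Toy quadratic over the cone (illustrative only): min 0.5*||x - a||^2, x in K.
a = np.array([-1.0, 2.0, 0.5])
x_star = proximal_gradient(lambda x: x - a, np.zeros(3), step=0.5)
print(x_star, proj_soc(a))           # both should agree: the projection of a onto K
```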

5.
Optimization, 2012, 61(8): 1259–1274
We analyse a proximal point method for equilibrium problems in Hilbert spaces, improving upon previously known convergence results. We prove global weak convergence of the generated sequence to a solution of the problem, assuming existence of solutions and rather mild monotonicity properties of the bifunction which defines the equilibrium problem, and we establish existence of solutions of the proximal subproblems. We also present a new reformulation of equilibrium problems as variational inequality problems.

6.
Optimization, 2012, 61(8): 1173–1197
We consider a class of derivative-free descent methods for solving the second-order cone complementarity problem (SOCCP). The algorithm is based on the Fischer–Burmeister (FB) unconstrained minimization reformulation of the SOCCP, and utilizes a convex combination of the negative partial gradients of the FB merit function ψ_FB as the search direction. We establish the global convergence of the algorithm under monotonicity and the uniform Jordan P-property, and show that under strong monotonicity the sequence of merit function values converges to zero at a linear rate. In particular, the rate of convergence depends on the structure of the second-order cones. Numerical comparisons are also made with the limited-memory BFGS method used by Chen and Tseng (An unconstrained smooth minimization reformulation of the second-order cone complementarity problem, Math. Program. 104 (2005), pp. 293–327), which confirm the theoretical results and the effectiveness of the algorithm.
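For readers unfamiliar with the FB reformulation, the following minimal sketch (NumPy) builds the vector-valued FB function and the merit function ψ_FB for a single second-order cone, using the Jordan-algebra square and square root. The derivative-free search directions of the paper are not reproduced, and the helper names are assumptions for this illustration.

```python
import numpy as np

def soc_sqrt(w):
    """Jordan-algebra square root of w in the second-order cone, via the
    spectral decomposition w = lam1*u1 + lam2*u2, sqrt(w) = sum sqrt(lam_i)*u_i."""
    w0, wbar = w[0], w[1:]
    nrm = np.linalg.norm(wbar)
    d = wbar / nrm if nrm > 0 else np.zeros_like(wbar)
    lam1, lam2 = max(w0 - nrm, 0.0), max(w0 + nrm, 0.0)   # clip tiny negatives
    u1 = 0.5 * np.concatenate(([1.0], -d))
    u2 = 0.5 * np.concatenate(([1.0], d))
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def jordan_square(x):
    """x o x = (||x||^2, 2*x0*xbar) under the second-order cone Jordan product."""
    return np.concatenate(([x @ x], 2.0 * x[0] * x[1:]))

def fb_function(x, y):
    """Vector-valued Fischer-Burmeister function: (x^2 + y^2)^{1/2} - x - y."""
    return soc_sqrt(jordan_square(x) + jordan_square(y)) - x - y

def fb_merit(x, y):
    """FB merit function psi_FB = 0.5 * ||phi_FB(x, y)||^2."""
    r = fb_function(x, y)
    return 0.5 * (r @ r)

# Complementary pair: x, y in the cone with x o y = 0, so the merit value is ~0.
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
print(fb_merit(x, y))
```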

7.
In this paper, we introduce the notion of a weak sharp set of solutions to a variational inequality problem (VIP) in a reflexive, strictly convex and smooth Banach space, and present several equivalent conditions for it. We also prove, under some continuity and monotonicity assumptions, that if any sequence generated by an algorithm for solving (VIP) converges to a weak sharp solution, then we can obtain solutions for (VIP) by solving a finite number of convex optimization subproblems with linear objective. Moreover, in order to characterize finite convergence of an iterative algorithm, we introduce the notion of a weak subsharp set of solutions to a variational inequality problem (VIP), which is more general than that of weak sharp solutions in Hilbert spaces. We establish a necessary and sufficient condition for the finite convergence of an algorithm for solving (VIP) whose generated sequence converges to a weak subsharp solution of (VIP), and show that the proximal point algorithm satisfies this condition. As a consequence, we prove that the proximal point algorithm possesses finite convergence whenever its generated sequence converges to a weak subsharp solution of (VIP).

8.
Interior estimates are proved in the L∞ norm for stable finite element discretizations of the Stokes equations on translation-invariant meshes. These estimates yield information about the quality of the finite element solution in subdomains a positive distance from the boundary. While they have been established for second-order elliptic problems, these interior, or local, maximum norm estimates for the Stokes equations are new. By applying finite differencing methods on a translation-invariant mesh, we obtain optimal convergence rates in the mesh size h in the maximum norm. These results can be used for analyzing superconvergence in finite element methods for the Stokes equations.

9.
In this paper, we present a measure of distance in a second-order cone based on a class of continuously differentiable strictly convex functions on ℝ++. Since the distance function has some favorable properties similar to those of the D-function (Censor and Zenios in J. Optim. Theory Appl. 73:451–464 [1992]), we refer to it as a quasi D-function. Then, a proximal-like algorithm using the quasi D-function is proposed and applied to the second-order cone programming problem, which is to minimize a closed proper convex function with general second-order cone constraints. Like the proximal point algorithm using the D-function (Censor and Zenios in J. Optim. Theory Appl. 73:451–464 [1992]; Chen and Teboulle in SIAM J. Optim. 3:538–543 [1993]), under some mild assumptions we establish the global convergence of the algorithm expressed in terms of function values; we show that the sequence generated by the proposed algorithm is bounded and that every accumulation point is a solution to the considered problem. Research of Shaohua Pan was partially supported by the Doctoral Starting-up Foundation (B13B6050640) of GuangDong Province. Jein-Shan Chen is a member of the Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author’s work was partially supported by the National Science Council of Taiwan.

10.
The proximal point method for convex optimization has been extended recently through the use of generalized distances (e.g., Bregman distances) instead of the Euclidean one. One advantage of these extensions is the possibility of eliminating certain constraints (mainly positivity) from the subproblems, transforming an inequality constrained problem into a sequence of unconstrained or equality constrained problems. We consider here methods obtained using a certain class of Bregman functions applied to convex quadratic (including linear) programming, which are of the interior-point type. We prove that the limit of the sequence generated by the method lies in the relative interior of the solution set, and furthermore is the closest optimal solution to the initial point, in the sense of the Bregman distance. These results do not hold for the standard proximal point method, i.e., when the square of the Euclidean norm is used as the Bregman distance. The research leading to this paper was partially supported by CNPq Grant 301280/86. We thank two anonymous referees whose comments and suggestions allowed us to improve our results in a significant way.
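To make the constraint-elimination idea concrete, here is a minimal sketch (NumPy) of a Bregman proximal iteration with the entropy kernel on a small convex quadratic over the nonnegative orthant: the positivity constraint never appears explicitly, because the entropic distance keeps the iterates strictly positive. The toy data, the fixed-point inner solver and all names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def entropic_prox_step(grad_f, xk, lam, inner=30):
    """Approximately solve  min_x f(x) + (1/lam) * D(x, xk)  over x > 0,
    where D(x, y) = sum(x*log(x/y) - x + y) is the entropy Bregman distance.
    The optimality condition x = xk * exp(-lam * grad_f(x)) is handled by a
    crude fixed-point loop -- adequate here, not a production inner solver."""
    x = xk.copy()
    for _ in range(inner):
        x = xk * np.exp(-lam * grad_f(x))
    return x

# Toy convex quadratic: f(x) = 0.5*||A x - b||^2, minimized over x >= 0.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, -2.0, 0.0])
grad_f = lambda x: A.T @ (A @ x - b)

x = np.ones(2)                        # strictly positive starting point
for _ in range(300):
    x = entropic_prox_step(grad_f, x, lam=0.02)
print(x)   # approaches the constrained minimizer (0.5, 0); stays positive throughout
```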

11.
Abstract

We present an interior proximal method for solving constrained nonconvex optimization problems where the objective function is given by the difference of two convex functions (a DC function). To this end, we consider a linearized proximal method with a proximal distance as regularization. Convergence analyses for particular choices of the proximal distance, namely second-order homogeneous proximal distances and Bregman distances, are considered. Finally, some academic numerical results are presented for a constrained DC problem and for generalized Fermat–Weber location problems.
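A minimal sketch of the linearized proximal DC idea, with the squared Euclidean distance standing in for the general proximal distances treated in the paper (second-order homogeneous or Bregman). The DC decomposition g(x) = 0.5*||x - a||^2, h(x) = gamma*||x||_1 and all names are made up for illustration; each step linearizes h at the current iterate and solves the resulting regularized convex subproblem in closed form.

```python
import numpy as np

def dca_prox(a, gamma, x0, beta=1.0, iters=100):
    """Linearized proximal DC iteration for f = g - h with
    g(x) = 0.5*||x - a||^2 and h(x) = gamma*||x||_1.
    Each step minimizes  g(x) - <s_k, x> + (1/(2*beta))*||x - x_k||^2,
    where s_k = gamma*sign(x_k) is a subgradient of h at x_k; the minimizer
    of this strongly convex subproblem is available in closed form."""
    x = x0.copy()
    for _ in range(iters):
        s = gamma * np.sign(x)                    # subgradient of h at x_k
        x = (beta * (a + s) + x) / (beta + 1.0)   # closed-form subproblem solution
    return x

a = np.array([1.0, -0.3])
print(dca_prox(a, gamma=0.5, x0=np.ones(2)))      # converges to a DC critical point
```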

12.
This paper is devoted to the study of strong convergence in inexact proximal-like methods for finding zeroes of maximal monotone operators in Banach spaces. Convergence properties of proximal point methods in Banach spaces can be summarized as follows: if the operator has zeroes, then the sequence of iterates is bounded and all its weak accumulation points are solutions. Whether the whole sequence converges weakly to a solution, and how the weak limit is related to the initial iterate, are key questions. We present a hybrid proximal Bregman projection method, allowing for inexact solutions of the proximal subproblems, that guarantees strong convergence of the sequence to the solution closest, in the sense of the Bregman distance, to the initial iterate.

13.
In a Hilbert space, we study the finite termination of iterative methods for solving a monotone variational inequality under a weak sharpness assumption. Most results to date require that the sequence generated by the method converges strongly to a solution. In this paper, we show that the proximal point algorithm for solving the variational inequality terminates at a solution in a finite number of iterations if the solution set is weakly sharp. Consequently, we derive finite convergence results for the gradient projection and extragradient methods. Our results show that the assumption of strong convergence of sequences can be removed in the Hilbert space case.
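The finite-termination phenomenon is easy to observe in a finite-dimensional toy case. The sketch below (NumPy) runs the gradient projection method on a variational inequality with a constant operator over a box, whose solution set is weakly sharp; the projection lands exactly on the optimal face after finitely many steps. The data are made up, and this is only an illustration, not the paper's Hilbert-space argument.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

# VI(F, C) with constant F(x) = c over the box C = [0,1]^n: its solution set is
# a face of the box and is weakly sharp, so the gradient projection method stops
# after finitely many steps -- each clip lands exactly on that face.
c = np.array([3.0, -2.0, 1.5])
x = np.array([0.4, 0.6, 0.2])
step = 0.5
for k in range(1, 20):
    x_new = proj_box(x - step * c)
    if np.array_equal(x_new, x):                 # exact fixed point reached
        print(f"terminated exactly at iteration {k}: x = {x}")
        break
    x = x_new
```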

14.
We discuss here generalized proximal point methods applied to variational inequality problems. These methods differ from the classical proximal point method in that a so-called Bregman distance substitutes for the Euclidean distance and forces the sequence generated by the algorithm to remain in the interior of the feasible region, assumed to be nonempty. We consider here the case in which this region is a polyhedron (which includes linear and nonlinear programming, monotone linear complementarity problems, and also certain nonlinear complementarity problems), and present two alternatives to deal with linear equality constraints. We prove that the sequences generated by either of these alternatives, which in general are different, converge to the same point, namely the solution of the problem which is closest, in the sense of the Bregman distance, to the initial iterate, for a certain class of operators. This class consists essentially of point-to-point and differentiable operators such that their Jacobian matrices are positive semidefinite (not necessarily symmetric) and their kernels are constant in the feasible region and invariant under symmetrization. For these operators, the solution set of the problem is also a polyhedron. Thus, we extend a previous similar result which covered only linear operators with symmetric and positive-semidefinite matrices.

15.
In this paper we propose an extension of proximal methods to solve minimization problems with quasiconvex objective functions on the nonnegative orthant. Assuming that the function is bounded from below and lower semicontinuous, and using a general proximal distance, it is proved that the iterates generated by our algorithm are well defined and stay in the positive orthant. If the objective function is quasiconvex, we obtain convergence of the iterates to a certain set which contains the set of optimal solutions, and convergence to a KKT point if the function is continuously differentiable and the proximal parameters are bounded. Furthermore, we introduce a sufficient condition on the proximal distance under which the sequence converges to an optimal solution of the problem.

16.
We develop and analyze an affine scaling inexact generalized Newton algorithm combined with a nonmonotone interior backtracking line search technique for solving systems of semismooth equations subject to bounds on the variables. By combining inexact affine scaling generalized Newton steps with an interior backtracking line search, each iterate maintains strict interior-point feasibility. The global convergence results are developed in a very general setting in which trial steps are computed by an affine scaling generalized Newton-like method augmented with an interior backtracking line search and a projection onto the feasible set. Under reasonable conditions we establish that, close to a regular solution, the inexact generalized Newton method converges locally at a q-superlinear rate of order p. We characterize the order of local convergence in terms of the quality of the approximate subdifferentials and indicate how to choose an inexact forcing sequence that preserves the rapid convergence of the proposed algorithm. The nonmonotone criterion can speed up convergence in some ill-conditioned cases.

17.
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported.

18.
This paper studies a general vector optimization problem of finding weakly efficient points for mappings from Hilbert spaces to arbitrary Banach spaces, where the latter are partially ordered by some closed, convex, and pointed cones with nonempty interiors. To find solutions of this vector optimization problem, we introduce an auxiliary variational inequality problem for a monotone and Lipschitz continuous mapping. The approximate proximal method in vector optimization is extended to develop a hybrid approximate proximal method for the general vector optimization problem under consideration by combining an extragradient method to find a solution of the variational inequality problem and an approximate proximal point method for finding a root of a maximal monotone operator. In this hybrid approximate proximal method, the subproblems consist of finding approximate solutions to the variational inequality problem for a monotone and Lipschitz continuous mapping, and then finding weakly efficient points for a suitable regularization of the original mapping. We present both absolute and relative versions of our hybrid algorithm, in which the subproblems are solved only approximately. The weak convergence of the generated sequence to a weakly efficient point is established under quite mild assumptions. In addition, we develop some extensions of our hybrid algorithms for vector optimization by using Bregman-type functions.
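The extragradient subroutine referred to above is standard; the following minimal sketch (NumPy) shows it in a finite-dimensional setting on a small monotone complementarity problem over the nonnegative orthant. The operator, step size and function names are assumptions for illustration; the paper's hybrid method in Hilbert and Banach spaces is not reproduced here.

```python
import numpy as np

def extragradient(F, proj, x0, tau, iters=200):
    """Korpelevich extragradient method for VI(F, C): a predictor projection
    followed by a corrector projection using the operator value at the predictor."""
    x = x0.copy()
    for _ in range(iters):
        y = proj(x - tau * F(x))      # predictor step
        x = proj(x - tau * F(y))      # corrector step
    return x

# Monotone (skew) affine operator on R^2 with the nonnegative orthant as feasible set.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q
proj = lambda z: np.maximum(z, 0.0)

print(extragradient(F, proj, np.array([2.0, 2.0]), tau=0.3))   # approaches (1, 1)
```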

19.
We develop a new notion of second-order complementarity with respect to the tangent subspace related to second-order necessary optimality conditions, by introducing so-called tangent multipliers. We prove that around a local minimizer, a second-order stationarity residual can be driven to zero while controlling the growth of Lagrange multipliers and tangent multipliers, which gives a new second-order optimality condition without constraint qualifications, stronger than previous ones associated with global convergence of algorithms. We prove that second-order variants of augmented Lagrangian (under an additional smoothness assumption based on the Łojasiewicz inequality) and interior point methods generate sequences satisfying our optimality condition. We also present a companion minimal constraint qualification, weaker than the ones known for second-order methods, that ensures the usual global convergence results to a classical second-order stationary point. Finally, our optimality condition naturally suggests a definition of second-order stationarity suitable for the computation of iteration complexity bounds and for the definition of stopping criteria.

20.
Iterative methods for variational and complementarity problems
In this paper, we study both the local and global convergence of various iterative methods for solving the variational inequality and the nonlinear complementarity problems. Included among such methods are the Newton and several successive overrelaxation algorithms. For the most part, the study is concerned with the family of linear approximation methods. These are iterative methods in which a sequence of vectors is generated by solving certain linearized subproblems. Convergence to a solution of the given variational or complementarity problem is established by using three different yet related approaches. The paper also studies a special class of variational inequality problems arising from such applications as computing traffic and economic spatial equilibria. Finally, several convergence results are obtained for some nonlinear approximation methods. This research was based on work supported by the National Science Foundation under grant ECS-7926320.
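As one classical member of the iterative families discussed here, the sketch below implements the projected SOR method for a linear complementarity problem with a symmetric positive definite matrix (NumPy). The data are made up, and this is a hedged illustration of the general idea, not the specific algorithms analyzed in the paper.

```python
import numpy as np

def projected_sor(M, q, x0, omega=1.0, iters=200):
    """Projected SOR for the LCP: find x >= 0 with M x + q >= 0 and
    x' (M x + q) = 0.  Converges for symmetric positive definite M and 0 < omega < 2."""
    x = x0.astype(float)
    n = len(q)
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ x - M[i, i] * x[i]          # residual excluding x_i
            x[i] = max(0.0, (1 - omega) * x[i] - omega * r / M[i, i])
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
x = projected_sor(M, q, np.zeros(2))
print(x, M @ x + q)         # x >= 0, Mx + q >= 0, and the two are complementary
```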
