Similar Literature
20 similar records found.
1.
In this paper, we propose a non-interior continuation method for solving generalized linear complementarity problems (GLCP) introduced by Cottle and Dantzig. The method is based on a smoothing function derived from the exponential penalty function first introduced by Kort and Bertsekas for constrained minimization. This smoothing function can also be viewed as a natural extension of Chen-Mangasarian's neural network smoothing function. By using the smoothing function, we approximate the GLCP by a family of parameterized smooth equations. An algorithm is presented to follow the smoothing path. Under suitable assumptions, it is shown that the algorithm is globally convergent and locally Q-quadratically convergent. A few preliminary numerical results are also reported. Received September 3, 1997 / Revised version received April 27, 1999 / Published online July 19, 1999
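As a rough illustration of the smoothing idea (not the paper's exact GLCP formulation), the Chen-Mangasarian neural-network function smooths the plus function max(t, 0); a minimal Python sketch, with a numerically stable evaluation:

    import numpy as np

    def plus_smooth(t, mu):
        # Chen-Mangasarian smoothing of max(t, 0), written in a stable form
        # equivalent to t + mu*log(1 + exp(-t/mu)).
        return np.maximum(t, 0.0) + mu * np.log1p(np.exp(-np.abs(t) / mu))

    # The gap to the plus function is at most mu*log(2), so it vanishes as mu -> 0.
    t = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    for mu in (1.0, 0.1, 0.01):
        print(mu, np.max(np.abs(plus_smooth(t, mu) - np.maximum(t, 0.0))))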

2.
We study various error measures for approximate solution of proximal point regularizations of the variational inequality problem, and of the closely related problem of finding a zero of a maximal monotone operator. A new merit function is proposed for proximal point subproblems associated with the latter. This merit function is based on Burachik-Iusem-Svaiter's concept of ε-enlargement of a maximal monotone operator. For variational inequalities, we establish a precise relationship between the regularized gap function, which is a natural error measure in this context, and our new merit function. Some error bounds are derived using both merit functions for the corresponding formulations of the proximal subproblem. We further use the regularized gap function to devise a new inexact proximal point algorithm for solving monotone variational inequalities. This inexact proximal point method preserves all the desirable global and local convergence properties of the classical exact/inexact method, while providing a constructive error tolerance criterion, suitable for further practical applications. The use of other tolerance rules is also discussed. Received: April 28, 1999 / Accepted: March 24, 2000 / Published online July 20, 2000
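For orientation, a generic Fukushima-type regularized gap function for VI(F, C) is sketched below (the paper's precise merit function, built from the ε-enlargement, is not reproduced); it assumes a projection onto C is available:

    import numpy as np

    def regularized_gap(x, Fx, proj_C, c=1.0):
        # g_c(x) = max_{y in C} { <F(x), x - y> - (c/2)||x - y||^2 };
        # the maximizer is y_c(x) = P_C(x - F(x)/c), so the gap is evaluated there.
        y = proj_C(x - Fx / c)
        d = x - y
        return Fx @ d - 0.5 * c * (d @ d)

    # Example with C = nonnegative orthant (a hypothetical choice for illustration):
    proj = lambda z: np.maximum(z, 0.0)
    x = np.array([1.0, 0.5]); Fx = np.array([0.2, -0.3])
    print(regularized_gap(x, Fx, proj))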

3.
Smooth methods of multipliers for complementarity problems
This paper describes several methods for solving nonlinear complementarity problems. A general duality framework for pairs of monotone operators is developed and then applied to the monotone complementarity problem, obtaining primal, dual, and primal-dual formulations. We derive Bregman-function-based generalized proximal algorithms for each of these formulations, generating three classes of complementarity algorithms. The primal class is well-known. The dual class is new and constitutes a general collection of methods of multipliers, or augmented Lagrangian methods, for complementarity problems. In a special case, it corresponds to a class of variational inequality algorithms proposed by Gabay. By appropriate choice of Bregman function, the augmented Lagrangian subproblem in these methods can be made continuously differentiable. The primal-dual class of methods is entirely new and combines the best theoretical features of the primal and dual methods. Some preliminary computation shows that this class of algorithms is effective at solving many of the standard complementarity test problems. Received February 21, 1997 / Revised version received December 11, 1998 / Published online May 12, 1999
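For reference, the Bregman distance and the resulting generalized proximal step, written for a generic convex objective f (the paper applies this construction to the primal, dual, and primal-dual complementarity formulations, and chooses specific Bregman functions to make the subproblems differentiable):

    D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩,
    x^{k+1} = argmin_x { f(x) + (1/λ_k) D_h(x, x^k) }.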

4.
Forcing strong convergence of proximal point iterations in a Hilbert space
This paper concerns the convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible since it amounts, at most, to solving a linear system of two equations in two unknowns. Received January 6, 1998 / Revised version received August 9, 1999 / Published online November 30, 1999
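A minimal sketch of the extra step described above: the Euclidean projection of a point onto the intersection of two halfspaces, which indeed reduces at most to a 2-by-2 linear solve. It assumes the two normals are linearly independent and the intersection is nonempty; the specific halfspaces used by the algorithm are those defined in the paper.

    import numpy as np

    def project_two_halfspaces(x, a1, b1, a2, b2, tol=1e-12):
        # Projection of x onto {z : a1.z <= b1} ∩ {z : a2.z <= b2}.
        if a1 @ x <= b1 + tol and a2 @ x <= b2 + tol:
            return x                      # already feasible
        p1 = x - max(a1 @ x - b1, 0.0) / (a1 @ a1) * a1
        if a2 @ p1 <= b2 + tol:
            return p1                     # only the first constraint is active
        p2 = x - max(a2 @ x - b2, 0.0) / (a2 @ a2) * a2
        if a1 @ p2 <= b1 + tol:
            return p2                     # only the second constraint is active
        # Both constraints active: solve the 2x2 Gram system for the multipliers.
        G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
        lam = np.linalg.solve(G, np.array([a1 @ x - b1, a2 @ x - b2]))
        return x - lam[0] * a1 - lam[1] * a2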

5.
A trust region and affine scaling interior point method (TRAM) is proposed for general nonlinear minimization with linear inequality constraints [8]. In the proposed approach, a Newton step is derived from the complementarity conditions. Based on this Newton step, a trust region subproblem is formed, and the original objective function is monotonically decreased. Explicit sufficient decrease conditions are proposed for satisfying the first order and second order necessary conditions. The objective of this paper is to establish global and local convergence properties of the proposed trust region and affine scaling interior point method. It is shown that the proposed explicit decrease conditions are sufficient for satisfying complementarity, dual feasibility and second order necessary conditions, respectively. It is also established that a trust region solution is asymptotically in the interior of the proposed trust region subproblem and that a properly damped trust region step can achieve quadratic convergence. Received: January 29, 1999 / Accepted: November 22, 1999 / Published online February 23, 2000

6.
We obtain local estimates of the distance to a set defined by equality constraints under assumptions which are weaker than those previously used in the literature. Specifically, we assume that the constraints mapping has a Lipschitzian derivative, and satisfies a certain 2-regularity condition at the point under consideration. This setting directly subsumes the classical regular case and the twice differentiable 2-regular case, for which error bounds are known, but it is significantly richer than either of these two cases. When applied to a certain equation-based reformulation of the nonlinear complementarity problem, our results yield an error bound under an assumption more general than b-regularity. The latter appears to be the weakest assumption under which a local error bound for complementarity problems was previously available. We also discuss an application of our results to the convergence rate analysis of the exterior penalty method for solving irregular problems. Received: February 2000 / Accepted: November 2000 / Published online January 17, 2001

7.
We consider convex optimization and variational inequality problems with a given separable structure. We propose a new decomposition method for these problems which combines the recent logarithmic-quadratic proximal theory introduced by the authors with a decomposition method given by Chen-Teboulle for convex problems with particular structure. The resulting method makes it possible to produce, for the first time, provably convergent decomposition schemes based on C∞ Lagrangians for solving convex structured problems. Under the only assumption that the primal-dual problems have nonempty solution sets, global convergence of the primal-dual sequences produced by the algorithm is established. Received: October 6, 1999 / Accepted: February 2001 / Published online September 17, 2001

8.
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximally complementary solutions. Maximally complementary solutions can be characterized by optimal partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A partition identification algorithm is an algorithm which generates a maximally complementary solution (and its corresponding partition), starting from any complementary solution. In linear programming such algorithms were respectively proposed by Megiddo in 1991 and Balinski and Tucker in 1969. In this paper we will present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal pivot transform and the orthogonality property of basis tableaus. Received April 9, 1996 / Revised version received April 27, 1998 / Published online May 12, 1999

9.
We propose a one-step smoothing Newton method for solving the non-linear complementarity problem with P0-function (P0-NCP) based on the smoothing symmetric perturbed Fischer function (the SSPF-function for short). The proposed algorithm has to solve only one linear system of equations and performs only one line search per iteration. Without requiring any strict complementarity assumption at the P0-NCP solution, we show that the proposed algorithm converges globally and superlinearly under mild conditions. Furthermore, the algorithm has local quadratic convergence under suitable conditions. The main feature of our global convergence results is that we do not assume a priori the existence of an accumulation point. Compared with the previous literature, our algorithm has stronger convergence results under weaker conditions.
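For context only, one standard Kanzow-type smoothed Fischer-Burmeister function is sketched below; the paper's SSPF-function may differ in its exact perturbation:

    import numpy as np

    def fb_smoothed(a, b, mu):
        # phi_mu(a, b) = sqrt(a^2 + b^2 + 2*mu) - a - b;
        # for mu > 0 it vanishes exactly when a > 0, b > 0 and a*b = mu,
        # and it reduces to the Fischer-Burmeister function as mu -> 0.
        return np.sqrt(a * a + b * b + 2.0 * mu) - a - b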

10.
Feasible descent algorithms for mixed complementarity problems
In this paper we consider a general algorithmic framework for solving nonlinear mixed complementarity problems. The main features of this framework are: (a) it is well-defined for an arbitrary mixed complementarity problem, (b) it generates only feasible iterates, (c) it has a strong global convergence theory, and (d) it is locally fast convergent under standard regularity assumptions. This framework is applied to the PATH solver in order to show viability of the approach. Numerical results for an appropriate modification of the PATH solver indicate that this framework leads to substantial computational improvements. Received April 9, 1998 / Revised version received November 23, 1998 / Published online March 16, 1999

11.
This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the iterates is progressively enforced thanks to shift variables and an exact penalty approach. Global and q-superlinear convergence is obtained for a fixed penalty parameter; global convergence to the analytic center of the optimal set is ensured when the barrier parameter tends to zero, provided strict complementarity holds. Received: December 21, 2000 / Accepted: July 13, 2001 / Published online February 14, 2002

12.
Let the DFP algorithm for unconstrained optimization be applied to an objective function that has continuous second derivatives and bounded level sets, where each line search finds the first local minimum. It is proved that the calculated gradients are not bounded away from zero if there are only two variables. The new feature of this work is that there is no need for the objective function to be convex. Received: June 16, 1999 / Accepted: December 24, 1999 / Published online March 15, 2000
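For reference, the standard DFP update of the inverse-Hessian approximation used by the algorithm (the paper's contribution is the convergence result, not the update itself):

    import numpy as np

    def dfp_update(H, s, y):
        # s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k); assumes s.y > 0.
        Hy = H @ y
        return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)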

13.
Under some assumptions, the solution set of a nonlinear complementarity problem coincides with the set of local minima of the corresponding minimization problem. This paper uses a family of new merit functions to deal with nonlinear complementarity problems in which the underlying function is assumed to be continuous but not necessarily locally Lipschitzian, and gives a descent algorithm for solving these nonsmooth continuous complementarity problems. In addition, the global convergence of the derivative-free descent algorithm is also proved.

14.
We propose an infeasible non-interior path-following method for nonlinear complementarity problems with uniform P-functions. This method is based on the smoothing techniques introduced by Kanzow. A key to our analysis is the introduction of a new notion of neighborhood for the central path which is suitable for infeasible non-interior path-following methods. By restricting the iterates in the neighborhood of the central path, we provide a systematic procedure to update the smoothing parameter and establish the global linear convergence of this method. Some preliminary computational results are reported. Received: March 13, 1997 / Accepted: December 17, 1999 / Published online February 23, 2000

15.
The Josephy-Newton method for nonlinear complementarity problems consists of solving, possibly inexactly, a sequence of linear complementarity problems. Under appropriate regularity assumptions, this method is known to be locally (superlinearly) convergent. Utilizing the filter method, we present a new globalization strategy for this Newton method applied to the nonlinear complementarity problem, without any merit function. The strategy is based on the projection-proximal point and filter methodology. Our linesearch procedure uses the regularized Newton direction to force global convergence by means of a projection step which reduces the distance to the solution of the problem. The resulting algorithm is globally convergent to a solution. Under natural assumptions, a locally superlinear rate of convergence is established.
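As background, the data of the linear complementarity subproblem solved at each Josephy-Newton iterate x_k can be assembled as below (a generic sketch; the filter/projection globalization of the paper is not shown, and the function names are illustrative):

    import numpy as np

    def josephy_newton_lcp_data(F, J, xk):
        # The subproblem is: find x >= 0 with M x + q >= 0 and x.(M x + q) = 0,
        # where M x + q = F(xk) + F'(xk)(x - xk).
        M = J(xk)
        q = F(xk) - M @ xk
        return M, q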

16.
In this paper, a nonsmooth Levenberg-Marquardt (LM) method with double parameter adjusting strategies is presented for solving vertical complementarity problems, based on the computation of an element of the B-differential of a vector-valued minimum function. At each iteration, the LM parameter is adjusted based on the norm of the vector-valued minimum function and the ratio between the actual reduction and the predicted reduction. Under the local error bound condition, which is strictly weaker than the nonsingularity assumption, the local convergence rate is discussed. Finally, the numerical tests indicate that the present algorithm is effective.
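A minimal sketch of an LM step and a ratio-based parameter adjustment (the thresholds are hypothetical; the paper's double adjusting strategy additionally uses the norm of the minimum function, which is not reproduced here):

    import numpy as np

    def lm_step(J, Fx, lam):
        # Solve (J^T J + lam I) d = -J^T F for the LM direction d.
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + lam * np.eye(n), -(J.T @ Fx))

    def update_lambda(lam, rho, shrink=0.25, grow=4.0):
        # rho = actual reduction / predicted reduction of the merit function.
        if rho > 0.75:
            return lam * shrink
        if rho < 0.25:
            return lam * grow
        return lam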

17.
Piecewise affine functions arise from Lagrangian duals of integer programming problems, and optimizing them provides good bounds for use in a branch and bound method. Methods such as the subgradient method and bundle methods assume only one subgradient is available at each point, but in many situations there is more information available. We present a new method for optimizing such functions, which is related to steepest descent, but uses an outer approximation to the subdifferential to avoid some of the numerical problems with the steepest descent approach. We provide convergence results for a class of outer approximations, and then develop a practical algorithm using such an approximation for the compact dual to the linear programming relaxation of the uncapacitated facility location problem. We make a numerical comparison of our outer approximation method with the projection method of Conn and Cornuéjols, and the bundle method of Schramm and Zowe. Received September 10, 1998 / Revised version received August 1999 / Published online December 15, 1999
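To illustrate why having several subgradients helps (this is a generic steepest-descent computation, not the paper's outer approximation scheme): for a convex piecewise affine function, the steepest-descent direction is the negative of the minimum-norm element of the subdifferential, which can be found from a small quadratic program over the known subgradients:

    import numpy as np
    from scipy.optimize import minimize

    def min_norm_element(G):
        # Columns of G are known subgradients; find the minimum-norm point of
        # their convex hull by minimizing ||G @ lam||^2 over the unit simplex.
        m = G.shape[1]
        res = minimize(
            lambda lam: float(np.sum((G @ lam) ** 2)),
            np.full(m, 1.0 / m),
            jac=lambda lam: 2.0 * (G.T @ (G @ lam)),
            bounds=[(0.0, None)] * m,
            constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        )
        return G @ res.x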

18.
In this paper we investigate two approaches to minimizing a quadratic form subject to the intersection of finitely many ellipsoids. The first approach is the d.c. (difference of convex functions) optimization algorithm (abbr. DCA), whose main tools are the proximal point algorithm and/or the projection subgradient method in convex minimization. The second is a branch-and-bound scheme using Lagrangian duality for bounding and ellipsoidal bisection in branching. The DCA was first introduced by Pham Dinh in 1986 for a general d.c. program and later developed in our various works. It is a local method but, from a good starting point, it often provides a global solution. This motivates us to combine the DCA and our branch-and-bound algorithm in order to obtain a good initial point for the DCA and to prove the globality of the DCA. In both approaches we attempt to use ellipsoid-constrained quadratic programs as the main subproblems. The idea is based upon the fact that these programs can be efficiently solved by some available (polynomial and nonpolynomial time) algorithms, among which the DCA with a restarting procedure, recently proposed by Pham Dinh and Le Thi, has been shown to be the most robust and fastest for large-scale problems. Several numerical experiments with dimension up to 200 are given which show the effectiveness and robustness of the DCA and the combined DCA-branch-and-bound algorithm. Received: April 22, 1999 / Accepted: November 30, 1999 / Published online February 23, 2000
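For orientation, one generic DCA iteration for minimizing f = g - h with g, h convex, written with user-supplied callables (hypothetical names, not the paper's interface):

    def dca_step(grad_h, argmin_g_minus_linear, xk):
        # y_k in the subdifferential of h at x_k (a gradient is assumed here), then
        # x_{k+1} in argmin_x { g(x) - <y_k, x> }, i.e. x_{k+1} in ∂g*(y_k).
        yk = grad_h(xk)
        return argmin_g_minus_linear(yk)

For instance, when g(x) = (1/2) x^T A x with A positive definite, the inner argmin reduces to the linear solve A x = y_k.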

19.
A class of affine-scaling interior-point methods for bound constrained optimization problems is introduced which are locally q-superlinearly or q-quadratically convergent. It is assumed that the strong second order sufficient optimality conditions at the solution are satisfied, but strict complementarity is not required. The methods are modifications of the affine-scaling interior-point Newton methods introduced by T. F. Coleman and Y. Li (Math. Programming, 67, 189–224, 1994). There are two modifications. One is a modification of the scaling matrix; the other is the use of a projection of the step to maintain strict feasibility, rather than a simple scaling of the step. A comprehensive local convergence analysis is given. A simple example is presented to illustrate the pitfalls of the original approach by Coleman and Li in the degenerate case and to demonstrate the performance of the fast converging modifications developed in this paper. Received October 2, 1998 / Revised version received April 7, 1999 / Published online July 19, 1999

20.
We propose a class of non-interior point algorithms for solving the complementarity problem (CP): find a nonnegative pair (x, y) ∈ ℝ^{2n} satisfying y = f(x) and x_i y_i = 0 for every i ∈ {1, 2, ..., n}, where f is a continuous mapping from ℝ^n to ℝ^n. The algorithms are based on the Chen-Harker-Kanzow-Smale smoothing functions for the CP, and have the following features: (a) they trace a trajectory in ℝ^{3n} which consists of solutions of a family of systems of equations with a parameter, (b) they can be started from an arbitrary (not necessarily positive) point in ℝ^{2n}, in contrast to most interior-point methods, and (c) their global convergence is ensured for a class of problems including (not strongly) monotone complementarity problems having a feasible interior point. To construct the algorithms, we give a homotopy and show the existence of a trajectory leading to a solution under a relatively mild condition, and we propose a class of algorithms involving suitable neighborhoods of the trajectory. We also give a sufficient condition on the neighborhoods for global convergence and two examples satisfying it. Received April 9, 1997 / Revised version received September 2, 1998 / Published online May 28, 1999
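For reference, the Chen-Harker-Kanzow-Smale smoothing function mentioned above, in one common form:

    import numpy as np

    def chks(a, b, mu):
        # phi(a, b, mu) = a + b - sqrt((a - b)^2 + 4*mu); for mu > 0 it is zero
        # exactly when a > 0, b > 0 and a*b = mu, and for mu = 0 it equals 2*min(a, b).
        return a + b - np.sqrt((a - b) ** 2 + 4.0 * mu)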
