Similar literature
20 similar records found.
1.
《Optimization》2012,61(1-2):63-73
Serial and parallel implementations of the interior dual proximal point algorithm for the solution of large linear programs are described. A preconditioned conjugate gradient method is used to solve the linear system of equations that arises at each interior point iteration. Numerical results for a set of multicommodity network flow problems are given. For larger problems the preconditioned conjugate gradient method outperforms direct methods of solution; in fact, it is impossible to handle very large problems by direct methods.
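The core linear-algebra kernel described above can be illustrated with a minimal sketch of a preconditioned conjugate gradient solver. The Jacobi (diagonal) preconditioner, the test matrix, and all names below are illustrative stand-ins, not details taken from the paper.

```python
# Minimal sketch of preconditioned conjugate gradient (PCG) for a symmetric
# positive definite system A x = b, with a Jacobi (diagonal) preconditioner.

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    M_inv = [1.0 / A[i][i] for i in range(n)]           # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                            # residual b - A x (x = 0)
    z = [M_inv[i] * r[i] for i in range(n)]             # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Small SPD test system with exact solution (1, 2)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = pcg(A, b)
```

In the interior point setting the same loop is applied to the (much larger and sparser) normal-equation systems, where the choice of preconditioner decides whether the iterative solver beats a direct factorization.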

2.
《Optimization》2012,61(1-4):117-147
The paper deals with the theoretical analysis of a regularized logarithmic barrier method for solving ill-posed convex programming problems. In this method a multi-step proximal regularization of the auxiliary problems is performed in the framework of the interior point approach. The termination of the proximal iterations for each auxiliary problem is controlled by means of estimates characterizing the efficiency of the iterations. A special deleting rule permits the use of only a part of the constraints of the original problem for constructing the auxiliary problems.

Convergence and the rate of convergence of the suggested method are studied, as well as its stability with respect to data perturbations. An example is given showing the behavior of the classical barrier method in the case of ill-posed convex programming problems.

3.
We investigate an ellipsoid algorithm for nonlinear programming. After describing the basic steps of the algorithm, we discuss its computer implementation and present a method for measuring computational efficiency. The computational results obtained from experimenting with the algorithm are discussed and the algorithm's performance is compared with that of a widely used commercial code. This research was supported in part by The National Science Foundation, Grant No. MCS78-02096.

4.
The pivot and probe algorithm for solving a linear program
In [7] we defined a candidate constraint as one which, for at least one pivot step, contains a potential pivot; we discovered that most constraints are never candidates, and devised a modification of the simplex method in which only constraints that are currently candidates are updated. In this paper we present another way to take advantage of this fact. We begin by solving a relaxed linear program consisting of those constraints of the original problem which are initially candidates. Its solution gives an upper bound on the value of the original problem. We also introduce the idea of a probe, that is, a line segment joining two vectors for the primal problem, one of which is primal feasible, and use it to identify a most violated constraint; at the same time this gives a lower bound on the objective value of the original problem. This violated constraint is added to the relaxed problem, which is solved again, giving a new upper bound, and so on. We present computational experience indicating that time savings of 50–80% over the simplex method can be obtained by this method, which we call PAPA, the Pivot and Probe Algorithm. This report was prepared as part of the activities of the Management Science Research Group, Carnegie-Mellon University, under Contract No. N00014-75-C-0621 NR 047-048 with the U.S. Office of Naval Research. Reproduction in whole or in part is permitted for any purpose of the U.S. Government.
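The relax-and-refine loop underlying this abstract can be sketched as follows. The tiny vertex-enumeration "solver" and the constraint data are illustrative stand-ins for the simplex-based implementation of PAPA; only the structure (solve a relaxation, probe for a most violated constraint, add it, repeat) reflects the idea described above.

```python
# Sketch of constraint generation: solve an LP over a subset of constraints,
# then repeatedly add a most violated constraint of the full problem.
from itertools import combinations

def solve_2d_lp(c, cons):
    """Maximize c.x over {x : a.x <= b for (a, b) in cons} by enumerating
    vertices (intersections of constraint pairs).  Assumes a bounded 2-D LP."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                                    # parallel constraints
        v = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * v[0] + a[1] * v[1] <= b + 1e-9 for a, b in cons):
            val = c[0] * v[0] + c[1] * v[1]
            if val > best_val:
                best, best_val = v, val
    return best, best_val

c = (1.0, 1.0)                                          # maximize x + y
full = [((1, 0), 4), ((0, 1), 4), ((1, 1), 6), ((1, 2), 9)]
active = full[:2]                                       # initial "candidate" subset
while True:
    x, ub = solve_2d_lp(c, active)                      # upper bound for the full LP
    viol, worst = 0.0, None
    for a, b in full:                                   # probe for a most violated constraint
        v = a[0] * x[0] + a[1] * x[1] - b
        if v > viol + 1e-9:
            viol, worst = v, (a, b)
    if worst is None:
        break                                           # x is optimal for the full LP
    active.append(worst)
```

Each relaxation's optimum over-estimates the true value, and the loop terminates once no constraint of the full problem is violated.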

5.
In this study, we combine least-index pivot selection rules with Keller's algorithm for quadratic programming to obtain a finite method for processing degenerate problems. Research and reproduction of this report were partially supported by National Science Foundation Grant MCS76-81259; and the Office of Naval Research Contract N00014-75-C-0267.

6.
《Optimization》2012,61(1-4):57-68
The purpose of the present paper is a statement of the nonlinear complementarity problem associated with monotone operators in Hilbert spaces. Existence results are proved, and proximal point algorithms are given.

7.
We propose a proximal Newton method for solving nondifferentiable convex optimization. This method combines the generalized Newton method with Rockafellar’s proximal point algorithm. At each step, the proximal point is found approximately and the regularization matrix is preconditioned to overcome inexactness of this approximation. We show that such a preconditioning is possible within some accuracy and the second-order differentiability properties of the Moreau-Yosida regularization are invariant with respect to this preconditioning. Based upon these, superlinear convergence is established under a semismoothness condition. This work is supported by the Australian Research Council.

8.
The goal of this paper is to discover some possibilities for applying the proximal point method to nonconvex problems. It can be proved that, for a wide class of problems, proximal regularization performed with appropriate regularization parameters ensures convexity of the auxiliary problems, and that each accumulation point of the method satisfies the necessary optimality conditions.

9.
This paper studies the vector optimization problem of finding weakly efficient points for mappings in a Banach space Y, with respect to the partial order induced by a closed, convex, and pointed cone C ⊂ Y with a nonempty interior. The proximal method in vector optimization is extended to develop an approximate proximal method for this problem by virtue of the approximate proximal point method for finding a root of a maximal monotone operator. In this approximate proximal method, the subproblems consist of finding weakly efficient points for suitable regularizations of the original mapping. We present both an absolute and a relative version, in which the subproblems are solved only approximately. Weak convergence of the generated sequence to a weakly efficient point is established. In addition, we also discuss an extension to Bregman-function-based proximal algorithms for finding weakly efficient points for mappings.

10.
In a Hilbert space, we study the convergence of a proximal point method to a common zero of a finite family of maximal monotone operators in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
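The phenomenon described above can be seen in a one-dimensional toy experiment. The operator, step size, and error model below are illustrative choices, not the paper's setting: for T(x) = x the resolvent is J_lam(x) = x / (1 + lam), and iterating it with bounded (non-summable) perturbations still lands within a neighborhood of the zero of T whose radius scales with the error bound.

```python
# Inexact proximal point iteration x_{k+1} = J_lam(x_k) + e_k for the maximal
# monotone operator T(x) = x, with resolvent J_lam(x) = x / (1 + lam) and
# bounded (but not summable) computational errors |e_k| <= eps.
import random

random.seed(0)
lam, eps = 1.0, 1e-3        # regularization parameter and error bound
x = 10.0                    # starting point far from the zero of T
for _ in range(200):
    e = random.uniform(-eps, eps)
    x = x / (1.0 + lam) + e
# |x_{k+1}| <= |x_k| / 2 + eps, so the iterates settle within about 2 * eps of 0
```

With lam = 1 the exact resolvent is a contraction with factor 1/2, so the steady-state error is bounded by roughly 2 * eps even though the errors never vanish.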

11.
We study various error measures for approximate solution of proximal point regularizations of the variational inequality problem, and of the closely related problem of finding a zero of a maximal monotone operator. A new merit function is proposed for proximal point subproblems associated with the latter. This merit function is based on Burachik-Iusem-Svaiter’s concept of ε-enlargement of a maximal monotone operator. For variational inequalities, we establish a precise relationship between the regularized gap function, which is a natural error measure in this context, and our new merit function. Some error bounds are derived using both merit functions for the corresponding formulations of the proximal subproblem. We further use the regularized gap function to devise a new inexact proximal point algorithm for solving monotone variational inequalities. This inexact proximal point method preserves all the desirable global and local convergence properties of the classical exact/inexact method, while providing a constructive error tolerance criterion, suitable for further practical applications. The use of other tolerance rules is also discussed. Received: April 28, 1999 / Accepted: March 24, 2000 / Published online July 20, 2000

12.
The proximal point mapping is the basis of many optimization techniques for convex functions. By means of variational analysis, the concept of proximal mapping was recently extended to nonconvex functions that are prox-regular and prox-bounded. In such a setting, the proximal point mapping is locally Lipschitz continuous and its set of fixed points coincides with the critical points of the original function. This suggests that the many uses of proximal points, and their corresponding proximal envelopes (Moreau envelopes), will have a natural extension from convex optimization to nonconvex optimization. For example, the inexact proximal point methods for convex optimization might be redesigned to work for nonconvex functions. In order to begin the practical implementation of proximal points in a nonconvex setting, a first crucial step would be to design efficient methods of approximating nonconvex proximal points. This would provide a solid foundation on which future design and analysis for nonconvex proximal point methods could flourish. In this paper we present a methodology based on the computation of proximal points of piecewise affine models of the nonconvex function. These models can be built with only the knowledge obtained from a black box providing, for each point, the function value and one subgradient. Convergence of the method is proved for the class of nonconvex functions that are prox-bounded and lower-${\mathcal{C}}^2$, and encouraging preliminary numerical testing is reported.
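The object being approximated here, prox_{r f}(x) = argmin_y f(y) + (1/(2r))(y - x)^2 for a nonconvex but prox-bounded f, can be computed by brute force in one dimension. The function and parameters below are illustrative; the paper's bundle method builds piecewise-affine models from subgradients instead, and this grid search serves only as a baseline illustration of what a nonconvex proximal point is.

```python
# Brute-force approximation of the proximal point of a nonconvex,
# prox-bounded, lower-C^2 function f (here the minimum of two parabolas).

def f(y):
    return min((y - 1.0) ** 2, (y + 1.0) ** 2)   # nonconvex, two valleys at +/- 1

def prox_grid(x, r=0.25, lo=-3.0, hi=3.0, n=60001):
    """argmin_y f(y) + (1/(2r)) (y - x)^2, approximated on a dense grid."""
    step = (hi - lo) / (n - 1)
    pts = (lo + i * step for i in range(n))
    return min(pts, key=lambda y: f(y) + (y - x) ** 2 / (2.0 * r))

p = prox_grid(0.5)   # for x > 0 the nearer valley at y = 1 wins; exact answer is 2/3
```

For x = 0.5 and r = 0.25 the subproblem is smooth near its minimizer, and the first-order condition on the nearer branch gives y = 2/3, which the grid search recovers up to the grid spacing.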

13.
In a finite-dimensional Euclidean space, we study the convergence of a proximal point method to a solution of the inclusion induced by a maximal monotone operator, in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.

14.
This paper is devoted to the study of the proximal point algorithm for solving monotone second-order cone complementarity problems. The proximal point algorithm is to generate a sequence by solving subproblems that are regularizations of the original problem. After giving an appropriate criterion for approximate solutions of the subproblems by adopting a merit function, the proximal point algorithm is verified to have global and superlinear convergence properties. For the purpose of solving the subproblems efficiently, we introduce a generalized Newton method and show that only one Newton step is eventually needed to obtain a desired approximate solution that approximately satisfies the appropriate criterion under mild conditions. Numerical comparisons are also made with the derivative-free descent method used by Pan and Chen (Optimization 59:1173–1197, 2010), which confirm the theoretical results and the effectiveness of the algorithm.

15.
In this paper, we concentrate on the maximal inclusion problem of locating the zeros of the sum of maximal monotone operators in the framework of the proximal point method. Such problems arise widely in several applied mathematical fields such as signal and image processing. We define two new maximal monotone operators and characterize the solutions of the considered problem via the zeros of the new operators. The maximal monotonicity and resolvent of both of the defined operators are proved and calculated, respectively. The traditional proximal point algorithm can therefore be applied to the considered maximal inclusion problem, and the convergence is ensured. Furthermore, by exploring the relationship between the proposed method and the generalized forward-backward splitting algorithm, we point out that that algorithm is essentially the proximal point algorithm when the operator corresponding to the forward step is the zero operator. Copyright © 2013 John Wiley & Sons, Ltd.
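The forward-backward scheme mentioned above can be sketched in one dimension: a gradient (forward) step on a smooth term followed by a proximal (backward) step on a nonsmooth term. The objective f(x) = 0.5 (x - 3)^2 + |x| and the step size are illustrative choices, not the paper's operators; note that if the smooth term is dropped, the forward step vanishes and the scheme reduces to a pure proximal point iteration, which is exactly the degenerate case the abstract points out.

```python
# Forward-backward splitting x+ = prox_{lam g}(x - lam * grad f(x))
# for f(x) = 0.5 (x - 3)^2 (forward/gradient step) and g(x) = |x|
# (backward/proximal step: soft-thresholding).

def soft_threshold(x, t):
    """prox of t * |.| : shrink x toward 0 by t."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

lam = 0.5            # step size / regularization parameter
x = 0.0
for _ in range(100):
    x = soft_threshold(x - lam * (x - 3.0), lam)
# converges to argmin 0.5 (x - 3)^2 + |x|, i.e. x = 2
```

The fixed point satisfies the optimality condition 0 ∈ (x - 3) + ∂|x|, giving x = 2, and the iteration contracts toward it with factor 1 - lam here.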

16.
We study the local convergence of a proximal point method in a metric space in the presence of computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by some constant. The principal assumption is a local error bound condition, which relates the growth of an objective function to the distance to the set of minimizers, introduced by Hager and Zhang (SIAM J Control Optim 46:1683–1704, 2007).

17.
In this paper, we study some non-traditional schemes of the proximal point algorithm for nonsmooth convex functionals in a Banach space. The proximal approximations to their minimal points and/or their minimal values are considered separately for unconstrained and constrained minimization problems on convex closed sets. For the latter we use proximal point algorithms with metric projection operators and first establish estimates of the convergence rate with respect to functionals. We also investigate the perturbed projection proximal point algorithms and prove their stability. Some results concerning the classical proximal point method for minimization problems in a Banach space are also presented.

18.
Local convergence analysis of the proximal point method for a special class of nonconvex functions on Hadamard manifolds is presented in this paper. The well-definedness of the sequence generated by the proximal point method is guaranteed. Moreover, it is proved that each cluster point of this sequence satisfies the necessary optimality conditions and, under additional assumptions, its convergence to a minimizer is obtained.

19.
Parin Chaipunya 《Optimization》2017,66(10):1647-1665
The proximal point method is one of the most influential procedures for solving nonlinear variational problems. It has recently been introduced in Hadamard spaces for solving convex optimization, and later for variational inequalities. In this paper, we study the general proximal point method for finding a zero point of a maximal monotone set-valued vector field defined on a Hadamard space and valued in its dual. We also give the relation between maximality and Minty’s surjectivity condition, which is essential for the proximal point method to be well defined. By exploring the properties of monotonicity and the surjectivity condition, we show under mild assumptions that the proximal point method converges weakly to a zero point. Additionally, by taking into account metric subregularity, we obtain local strong convergence at linear and superlinear rates.

20.
This paper reports the development of a new algorithm for solving the general constrained optimization problem (that of optimizing an objective function subject to both equality and inequality constraints). The approach is based on the complementary pivoting algorithms which have been developed to solve certain classes of fixed point problems. The specific approach is to use the equality constraints to solve for some variables in terms of the remaining ones, thus enabling one to eliminate the equality constraints altogether. The result, under certain circumstances, is an optimization problem which may be transformed into a fixed point problem in such a way that a complementary pivoting code may be used to search for a solution. Seventeen test problems have been solved by this method and the results are compared against those obtained from GRG (Generalized Reduced Gradient method). The results of the tests indicate that the fixed point approach is robust (all 17 problems were solved by this method, whereas GRG solved 16). As to computer times, the fixed point code proved to be as fast as or faster than GRG on the lower-dimensional problems; however, as the dimension increased, the trend reversed, and on a 40-dimensional problem GRG was approximately 11 times faster. The conclusion from these tests is that when the dimension of the original problem can be reduced sufficiently by the equality constraints, the fixed point approach appears to be more effective than GRG.

