Similar literature
20 similar documents found (search time: 78 ms)
1.
Mascarenhas gave an instance of a linear programming problem to show that the long-step affine scaling algorithm can fail to converge to an optimal solution with step-size λ = 0.999. In this note, we give a simple and clear geometrical explanation for this phenomenon in terms of the Newton barrier flow induced by projecting the homogeneous affine scaling vector field conically onto a hyperplane on which the objective function is constant. Based on this interpretation, we show that the algorithm can fail for "any" λ greater than about 0.91 (a more precise value is 0.91071), which is considerably smaller than the values λ = 0.95 and 0.99 recommended for efficient implementations. Accepted 17 February 1998
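
For readers unfamiliar with the method under discussion, here is a minimal sketch of one long-step (fraction-to-the-boundary) affine scaling iteration for a standard-form LP, min cᵀx subject to Ax = b, x ≥ 0; it shows where the step-size parameter λ enters. The function name and the use of dense linear algebra are illustrative choices, not taken from the paper.

```python
import numpy as np

def affine_scaling_step(A, c, x, lam=0.95):
    """One long-step affine scaling iteration; x must satisfy Ax = b, x > 0.

    lam in (0, 1) is the fraction of the distance to the boundary that is taken;
    values such as 0.95 or 0.99 are the "efficient" choices mentioned above.
    """
    X2 = np.diag(x ** 2)                            # scaling matrix diag(x)^2
    w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
    r = c - A.T @ w                                 # reduced costs
    d = -X2 @ r                                     # affine scaling direction (A d = 0)
    if np.all(d >= 0):                              # no blocking component: optimal or unbounded
        return x, w
    alpha_max = np.min(-x[d < 0] / d[d < 0])        # largest step keeping x >= 0
    return x + lam * alpha_max * d, w
```

With λ pushed close to 1 (for example 0.999), this is the update whose convergence can fail on the example discussed above.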

2.
In this paper we show that a variant of the long-step affine scaling algorithm (with variable stepsizes) is two-step superlinearly convergent when applied to general linear programming (LP) problems. Superlinear convergence of the sequence of dual estimates is also established. For homogeneous LP problems having the origin as the unique optimal solution, we also show that 2/3 is a sharp upper bound on the (fixed) stepsize that provably guarantees that the sequence of primal iterates converges to the optimal solution along a unique direction of approach. Since the point to which the sequence of dual estimates converges depends on the direction of approach of the sequence of primal iterates, this result gives a plausible (but not accurate) theoretical explanation for why 2/3 is a sharp upper bound on the (fixed) stepsize that guarantees the convergence of the dual estimates. The work of the first author was based on research supported by the Overseas Research Scholars program of the Ministry of Education, Science and Culture of Japan, 1992. The work of the second author was based on research supported by the National Science Foundation (NSF) under grant DDM-9109404 and the Office of Naval Research (ONR) under grant N00014-93-1-0234. This work was done while the second author was a faculty member of the Systems and Industrial Engineering Department at the University of Arizona.

3.
This paper presents a simplified and self-contained global convergence proof for the affine scaling algorithm applied to degenerate linear programming problems. Convergence of the sequence of dual estimates to the center of the optimal dual face is also proven. In addition, we give a sharp rate of convergence result for the sequence of objective function values. All these results are proved with respect to the long-step version of the affine scaling algorithm, in which we move a fraction λ, with λ ∈ (0, 2/3), of the step to the boundary of the feasible region. This research was supported by the National Science Foundation (NSF) under Grant No. DDM-9109404 and the Overseas Research Scholars program of the Ministry of Education, Science and Culture of Japan.

4.
We study a trust region affine scaling algorithm for solving the linearly constrained convex or concave programming problem. Under a primal nondegeneracy assumption, we prove that every accumulation point of the sequence generated by the algorithm satisfies the first order necessary condition for optimality of the problem. For a special class of convex or concave functions satisfying a certain invariance condition on their Hessians, it is shown that the sequences of iterates and objective function values generated by the algorithm converge R-linearly and Q-linearly, respectively. Moreover, under primal nondegeneracy and for this class of objective functions, it is shown that the limit point of the sequence of iterates satisfies the first and second order necessary conditions for optimality of the problem. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. The work of these authors was based on research supported by the National Science Foundation under grant INT-9600343 and the Office of Naval Research under grants N00014-93-1-0234 and N00014-94-1-0340.

5.
This paper presents an extension to symmetric cones of the variant of Mehrotra's predictor–corrector algorithm proposed by Salahi and Mahdavi-Amiri (Appl. Math. Comput. 183:646–658, 2006) for linear programming. The algorithm incorporates a safeguard into Mehrotra's original predictor–corrector algorithm which keeps the iterates in the prescribed neighborhood and allows a reasonably large step size. In our algorithm, the safeguard strategy is not necessarily invoked when the affine scaling step behaves poorly, which differs from the algorithm of Salahi and Mahdavi-Amiri. We slightly modify the maximum step size in the affine scaling step and extend the algorithm to symmetric cones using the machinery of Euclidean Jordan algebras. Based on the Nesterov–Todd direction, we establish an iteration-complexity bound for the proposed algorithm in terms of the rank r of the associated Euclidean Jordan algebra and the required precision ε > 0.
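
As background, the adaptive centering heuristic at the heart of Mehrotra-type predictor–corrector methods, stated here in its standard LP form rather than the paper's safeguarded symmetric-cone version, chooses the centering parameter σ from the outcome of the affine scaling (predictor) step:

```latex
\mu = \frac{x^{\mathsf T}s}{n}, \qquad
\mu_{\mathrm{aff}} = \frac{\bigl(x+\alpha_{\mathrm{aff}}^{P}\,\Delta x^{\mathrm{aff}}\bigr)^{\mathsf T}
                           \bigl(s+\alpha_{\mathrm{aff}}^{D}\,\Delta s^{\mathrm{aff}}\bigr)}{n}, \qquad
\sigma = \left(\frac{\mu_{\mathrm{aff}}}{\mu}\right)^{3},
```

where (Δx^aff, Δs^aff) is the affine scaling direction and α_aff^P, α_aff^D are the largest feasible primal and dual step sizes along it; the corrector step then aims at the target σμe corrected by the second-order term.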

6.
In this paper, we introduce an affine scaling algorithm for semidefinite programming (SDP), and give an example of a semidefinite program such that the affine scaling algorithm converges to a non-optimal point. Both our program and its dual have interior feasible solutions and unique optimal solutions which satisfy strict complementarity, and they are non-degenerate everywhere.

7.
Interior-point methods for semidefinite optimization have been studied intensively, due to their polynomial complexity and practical efficiency. Recently, the second author designed a primal-dual infeasible interior-point algorithm with the currently best iteration bound for linear optimization problems. Since the algorithm uses only full Newton steps, it has the advantage that no line searches are needed. In this paper we extend the algorithm to semidefinite optimization. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem, close to their central paths. Two types of full Newton steps are used: feasibility steps and (ordinary) centering steps. The algorithm starts from strictly feasible iterates of a perturbed pair, on its central path, and feasibility steps find strictly feasible iterates for the next perturbed pair. By using centering steps for the new perturbed pair, we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The starting point depends on a positive number ζ. The algorithm terminates either by finding an ε-solution or by detecting that the primal-dual problem pair has no optimal solution (X*, y*, S*) with vanishing duality gap such that the eigenvalues of X* and S* do not exceed ζ. The iteration bound coincides with the currently best iteration bound for semidefinite optimization problems.

8.
We consider the continuous trajectories of the vector field induced by the primal affine scaling algorithm as applied to linear programming problems in standard form. By characterizing these trajectories as solutions of certain parametrized logarithmic barrier families of problems, we show that these trajectories tend to an optimal solution which in general depends on the starting point. By considering the trajectories that arise from the Lagrange multipliers of the above-mentioned logarithmic barrier families of problems, we show that the trajectories of the dual estimates associated with the affine scaling trajectories converge to the so-called centered optimal solution of the dual problem. We also present results related to the asymptotic direction of the affine scaling trajectories. We briefly discuss how to apply our results to linear programs formulated in formats different from the standard form. Finally, we extend the results to the primal-dual affine scaling algorithm.
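
For reference, the primal affine scaling vector field whose continuous trajectories are the object of study here has the standard closed form below, where X = diag(x) and w(x) is the usual affine scaling dual estimate; this is the textbook expression, stated for orientation.

```latex
\dot{x} \;=\; -\,X^{2}\bigl(c - A^{\mathsf T}w(x)\bigr),
\qquad
w(x) \;=\; \bigl(A X^{2} A^{\mathsf T}\bigr)^{-1} A X^{2} c,
\qquad
X = \operatorname{diag}(x).
```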

9.
We present a new projective interior point method for linear programming with unknown optimal value. This algorithm requires only that an interior feasible point be provided. It generates a strictly decreasing sequence of objective values and, within polynomial time, either determines an optimal solution or proves that the problem is unbounded. We also analyze the asymptotic convergence rate of our method and discuss its relationship to other polynomial time projective interior point methods and the affine scaling method. This research was supported in part by NSF Grants DMS-85-12277 and CDR-84-21402 and ONR Contract N00014-87-K0214.

10.
We describe an O(n^4 h min{log U, n^2 log n}) capacity scaling algorithm for the minimum cost submodular flow problem. Our algorithm modifies and extends the Edmonds–Karp capacity scaling algorithm for minimum cost flow to solve the minimum cost submodular flow problem. The modification entails scaling a relaxation parameter δ. Capacities are relaxed by attaching a complete directed graph with uniform arc capacity δ in each scaling phase. We then modify a feasible submodular flow by relaxing the submodular constraints, so that complementary slackness is satisfied. This creates discrepancies between the boundary of the flow and the base polyhedron of a relaxed submodular function. To reduce these discrepancies, we use a variant of the successive shortest path algorithm that augments flow along minimum cost paths of residual capacity at least δ. The shortest augmenting path subroutine we use is a variant of Dijkstra's algorithm modified to handle exchange capacity arcs efficiently. The result is a weakly polynomial time algorithm whose running time is better than that of any existing submodular flow algorithm when U is small and C is big. We also show how to use maximum mean cuts to make the algorithm strongly polynomial. The resulting algorithm is the first capacity scaling algorithm to match the current best strongly polynomial bound for submodular flow. Received: August 6, 1999 / Accepted: July 2001 / Published online: October 2, 2001
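
To illustrate the δ-scaling idea of augmenting only along arcs with residual capacity at least δ, here is a generic Dijkstra sketch that simply skips arcs below the threshold; it omits the node potentials, exchange-capacity arcs, and all submodular machinery of the actual algorithm, and the data layout is an illustrative assumption.

```python
import heapq

def dijkstra_with_threshold(n, arcs, source, delta):
    """Shortest path distances using only arcs of residual capacity >= delta.

    arcs: list of (tail, head, cost, residual_capacity); costs are assumed
    nonnegative (in the real algorithm, reduced costs via node potentials
    keep them so).
    """
    adj = [[] for _ in range(n)]
    for u, v, cost, cap in arcs:
        if cap >= delta:                  # delta-scaling: ignore small arcs
            adj[u].append((v, cost))
    dist = [float("inf")] * n
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, cost in adj[u]:
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist
```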

11.
In this paper we deal with global convergence of the affine scaling algorithm for strictly convex QP problems satisfying a dual nondegeneracy condition. By means of the local Karmarkar potential function, which was successfully applied to demonstrate global convergence of the affine scaling algorithm for LP, we show global convergence of the algorithm when the step-size 1/8 is adopted, without requiring any primal nondegeneracy condition. This paper was presented at the 14th International Symposium on Mathematical Programming, held August 5–9, 1991, in Amsterdam, The Netherlands.

12.
We study a new trust region affine scaling method for general bound constrained optimization problems. At each iteration, we compute two trial steps. One is computed along a direction obtained by minimizing an appropriate quadratic model over an ellipsoidal region. This region is defined by an affine scaling technique and depends both on the distances of the current iterate to the boundaries and on the trust region radius. To ensure convergence and to avoid iterates becoming trapped near nonstationary points, an auxiliary step is defined along a newly defined approximate projected gradient. Taking as the trial step, for generating the next iterate, whichever of these two steps achieves the greater reduction of the quadratic model, we prove that the iterates generated by the new algorithm are not bounded away from stationary points. Assuming, in addition, that the second-order sufficient condition holds at some nondegenerate stationary point, we prove Q-linear convergence of the objective function values. Preliminary numerical experience on problems with bound constraints from the CUTEr collection is also reported.
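
One common way, in the spirit of Coleman and Li, to build a diagonal scaling from the distances of the current iterate to its bounds is sketched below. It is a generic illustration of how the ellipsoidal region can depend on the iterate, not the paper's exact definition; the function name and the choice of scaling are assumptions.

```python
import numpy as np

def bound_scaling(x, g, lower, upper):
    """Diagonal scaling built from distances to the bounds a step would hit.

    Coleman-Li style choice: when the gradient pushes a component toward a
    finite bound, that component's scaling shrinks as the bound is approached.
    Assumes x is strictly inside its (possibly infinite) bounds.
    """
    v = np.ones_like(x)
    for i in range(x.size):
        if g[i] < 0 and np.isfinite(upper[i]):
            v[i] = upper[i] - x[i]        # headroom toward the upper bound
        elif g[i] >= 0 and np.isfinite(lower[i]):
            v[i] = x[i] - lower[i]        # headroom toward the lower bound
    D = np.diag(np.sqrt(v))               # ellipsoid { p : ||D^{-1} p|| <= radius }
    return D
```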

13.
Consider linear programs in dual standard form with n constraints and m variables. When typical interior-point algorithms are used for the solution of such problems, updating the iterates, using direct methods for solving the linear systems and assuming a dense constraint matrix A, requires O(nm^2) operations per iteration. When n ≫ m it is often the case that at each iteration most of the constraints are not very relevant for the construction of a good update and could be ignored to achieve computational savings. This idea was considered in the 1990s by Dantzig and Ye, Tone, Kaliski and Ye, den Hertog et al., and others. More recently, Tits et al. proposed a simple "constraint-reduction" scheme and proved global and local quadratic convergence for a dual-feasible primal-dual affine-scaling method modified according to that scheme. In the present work, similar convergence results are proved for a dual-feasible constraint-reduced variant of Mehrotra's predictor-corrector algorithm, under less restrictive nondegeneracy assumptions. These stronger results extend to primal-dual affine scaling as a limiting case. Promising numerical results are reported.

14.
We analyze several affine potential reduction algorithms for linear programming under simplifying assumptions. We show that, under a strong probabilistic assumption regarding the distribution of the data in an iteration, the decrease in the primal potential function will, with high probability, exceed the guaranteed Ω(1) decrease. (2n is a parameter in the potential function and n is the number of variables.) Under the same assumption, we further show that, with high probability, Dikin's affine scaling algorithm attains a definite objective reduction rate, whereas no convergence rate is guaranteed in the worst case. Research supported in part by NSF Grant DDM-8922636.
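
For orientation, the primal potential function used by affine potential reduction methods of this kind is usually of the Karmarkar–Ye form below, with z a lower bound on the optimal value and ρ the parameter referred to above; this is standard background rather than the paper's exact setup.

```latex
\phi_{\rho}(x) \;=\; \rho\,\ln\!\bigl(c^{\mathsf T}x - z\bigr) \;-\; \sum_{i=1}^{n}\ln x_{i}.
```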

15.
We present an extension of Karmarkar's linear programming algorithm for solving a more general group of optimization problems: convex quadratic programs. This extension is based on the iterated application of the objective augmentation and the projective transformation, followed by optimization over an inscribing ellipsoid centered at the current solution. It creates a sequence of interior feasible points that converge to the optimal feasible solution in O(Ln) iterations; each iteration can be computed in O(Ln^3) arithmetic operations, where n is the number of variables and L is the number of bits in the input. In this paper, we emphasize its convergence property, practical efficiency, and relation to the ellipsoid method.

16.
In this paper, we propose a new trust-region projected Hessian algorithm with a nonmonotonic backtracking interior point technique for linearly constrained optimization. By performing a QR decomposition of the affine scaling equality constraint matrix, the subproblem in the algorithm is transformed into a general trust-region subproblem defined by minimizing a quadratic function subject only to an ellipsoidal constraint. By using both the trust-region strategy and the line-search technique, each iteration switches to a backtracking interior point step generated by the trust-region subproblem. Global convergence and fast local convergence rates for the proposed algorithm are established under reasonable assumptions. A nonmonotonic criterion is used to speed up convergence in some ill-conditioned cases. Selected from Journal of Shanghai Normal University (Natural Science), 2003, 32(4): 7–13.

17.
In this paper, we establish a theoretical framework of path-following interior point algorithms for linear complementarity problems over symmetric cones (SCLCP) with the Cartesian P*(κ)-property, a weaker condition than monotonicity. Based on the Nesterov–Todd, xy, and yx directions employed as commutative search directions for semidefinite programming, we extend the variants of the short-, semilong-, and long-step path-following algorithms for symmetric conic linear programming proposed by Schmieta and Alizadeh to the Cartesian P*(κ)-SCLCP, and in particular establish the global convergence and iteration complexities of the proposed algorithms. This work was supported by the National Natural Science Foundation of China (Grant Nos. 10671010, 70841008).
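
In the semidefinite specialization, the Nesterov–Todd direction mentioned above is obtained by scaling with the NT point W, the unique symmetric positive definite matrix satisfying W S W = X; its standard closed form is shown below. In the general symmetric-cone setting the same construction is expressed through the quadratic representation of the Euclidean Jordan algebra.

```latex
W \;=\; S^{-1/2}\bigl(S^{1/2} X S^{1/2}\bigr)^{1/2} S^{-1/2},
\qquad
W S W = X .
```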

18.
A primal interior point method is developed for linear programming problems in which a linear objective function is to be maximised over polyhedra that are not necessarily in standard form. The algorithm coincides with the affine scaling method of Dikin when the polyhedron is in standard form and satisfies the usual conditions imposed for using that method. If the search direction is regarded as a function of the current iterate, then it is shown that this function has a unique, continuous extension to the boundary. In fact, on any given face, this extension is just the value the search direction would have for the problem of maximising the objective function over that face. This extension is exploited to prove convergence. The algorithm presented here can be used to exploit such special constraint structure as bounds, ranges, and free variables without increasing the size of the linear programming problem. This paper is in final form and no version of it will be submitted for publication elsewhere.

19.
Karmarkar's linear programming algorithm and Newton's method
This paper describes a full-dimensional version of Karmarkar's linear programming algorithm, the projective scaling algorithm, which is defined for any linear program in ℝ^n having a bounded, full-dimensional polytope of feasible solutions. If such a linear program has m inequality constraints, then it is equivalent under an injective affine mapping J: ℝ^n → ℝ^m to Karmarkar's original algorithm for a linear program in ℝ^m having m − n equality constraints and m inequality constraints. Karmarkar's original algorithm minimizes a potential function g(x), and the projective scaling algorithm is equivalent to the version of Karmarkar's algorithm whose step size minimizes the potential function in the step direction. The projective scaling algorithm is shown to be a global Newton method for minimizing a logarithmic barrier function in a suitable coordinate system. The new coordinate system is obtained from the original coordinate system by a fixed projective transformation y = Φ(x) which maps the hyperplane H_opt = {x : ⟨c, x⟩ = c_0} specified by the optimal value of the objective function to the hyperplane at infinity. The feasible solution set is mapped under Φ to an unbounded polytope. Let f_LB(y) denote the logarithmic barrier function associated to the m inequality constraints in the new coordinate system. It coincides up to an additive constant with Karmarkar's potential function in the new coordinate system. The global Newton method iterate y* for a strictly convex function f(y) defined on a suitable convex domain is the point y* that minimizes f(y) on the search ray {y + λ v_N(y) : λ ≥ 0}, where v_N(y) = −(∇²f(y))^{−1} ∇f(y) is the Newton's method vector. If {x^(k)} are a set of projective scaling algorithm iterates in the original coordinate system and y^(k) = Φ(x^(k)), then {y^(k)} are a set of global Newton method iterates for f_LB(y), and conversely. Karmarkar's algorithm with step size chosen to minimize the potential function is known to converge at least at a linear rate. It is shown (by example) that this algorithm does not have a superlinear convergence rate.
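
A small sketch of the "global Newton method iterate" defined above: compute the Newton direction and minimize the function along that ray. The callables f, grad, and hess and the truncation of the ray at lam_max are illustrative assumptions; SciPy's bounded scalar minimizer handles the one-dimensional search.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def global_newton_step(f, grad, hess, y, lam_max=10.0):
    """One global Newton iterate: minimize f along the Newton ray from y.

    The Newton direction is v = -(hess f(y))^{-1} grad f(y); the next iterate
    minimizes f(y + lam * v) over lam >= 0 (truncated at lam_max here so the
    one-dimensional search has a finite interval).
    """
    v = -np.linalg.solve(hess(y), grad(y))          # Newton's method vector
    res = minimize_scalar(lambda lam: f(y + lam * v),
                          bounds=(0.0, lam_max), method="bounded")
    return y + res.x * v
```

For a strictly convex quadratic, the line search returns λ ≈ 1 and a single step reaches the minimizer.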

20.
The Fermat–Weber location problem is to find a point in ℝ^n that minimizes the sum of the weighted Euclidean distances from m given points in ℝ^n. A popular iterative solution method for this problem was first introduced by Weiszfeld in 1937. In 1973 Kuhn claimed that if the m given points are not collinear then, for all but a denumerable number of starting points, the sequence of iterates generated by Weiszfeld's scheme converges to the unique optimal solution. We demonstrate that Kuhn's convergence theorem is not always correct. We then conjecture that if this algorithm is initiated in the affine subspace spanned by the m given points, convergence is ensured for all but a denumerable number of starting points.
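
The Weiszfeld scheme referred to here is the classical fixed-point iteration sketched below; it follows the textbook update and, as Kuhn's analysis warns, is undefined when an iterate lands exactly on one of the m given points (it simply stops in that case). The function name and stopping rule are illustrative.

```python
import numpy as np

def weiszfeld(points, weights, x0, iters=100, tol=1e-10):
    """Weiszfeld iteration for the weighted Fermat-Weber point.

    points: (m, n) array of the m given points; weights: (m,) positive weights.
    Classical update: x <- (sum_i w_i a_i / ||x - a_i||) / (sum_i w_i / ||x - a_i||).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)      # distances to the given points
        if np.any(d == 0.0):                        # landed on a given point: stop
            break
        inv = weights / d
        x_new = (inv[:, None] * points).sum(axis=0) / inv.sum()
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```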
