Similar articles
20 similar articles found.
1.
Lagrangean dualization and subgradient optimization techniques are frequently used within the field of computational optimization for finding approximate solutions to large, structured optimization problems. The dual subgradient scheme does not automatically produce primal feasible solutions; there is an abundance of techniques for computing such solutions (via penalty functions, tangential approximation schemes, or the solution of auxiliary primal programs), all of which require a fair amount of computational effort. We consider a subgradient optimization scheme applied to a Lagrangean dual formulation of a convex program, and construct, at minor cost, an ergodic sequence of subproblem solutions which converges to the primal solution set. Numerical experiments performed on a traffic equilibrium assignment problem under road pricing show that the computation of the ergodic sequence results in a considerable improvement in the quality of the primal solutions obtained, compared to those generated in the basic subgradient scheme. Received February 11, 1997 / Revised version received June 19, 1998 / Published online June 28, 1999
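To make the ergodic-averaging idea concrete, here is a minimal sketch (not the authors' exact scheme; the problem data, the harmonic step-length rule and the step-length-proportional averaging weights are illustrative assumptions): projected subgradient ascent on the Lagrangian dual of a small box-constrained linear program, where the raw subproblem solutions jump between vertices while their weighted average drifts toward primal feasibility.

```python
import numpy as np

# Sketch of primal recovery from a dual subgradient scheme for
#   min c^T x  s.t.  A x <= b,  0 <= x <= u     (illustrative data).
# Relaxing A x <= b with multipliers lam >= 0 gives box subproblems solved by
# inspecting reduced costs; the "ergodic" iterate is a weighted average of the
# subproblem solutions, with weights proportional to the dual step lengths.

rng = np.random.default_rng(0)
m, n = 5, 12
A = rng.uniform(0.0, 1.0, (m, n))
b = 0.5 * A.sum(axis=1)           # keeps the feasible region nonempty
c = -rng.uniform(0.5, 1.5, n)     # negative costs make the constraints bind
u = np.ones(n)

lam = np.zeros(m)
x_erg, weight_sum = np.zeros(n), 0.0
for k in range(1, 2001):
    red_cost = c + A.T @ lam
    x = np.where(red_cost < 0, u, 0.0)        # Lagrangian subproblem over the box
    g = A @ x - b                             # subgradient of the concave dual function
    step = 1.0 / k                            # divergent-series step-length rule
    x_erg = (weight_sum * x_erg + step * x) / (weight_sum + step)
    weight_sum += step
    lam = np.maximum(0.0, lam + step * g)     # projected dual ascent step

print("max violation, last raw subproblem solution:", np.max(A @ x - b))
print("max violation, ergodic average:             ", np.max(A @ x_erg - b))
```

The raw subproblem solutions are vertices of the box and typically remain infeasible, while the ergodic average approaches the primal feasible set, which is the behavior the abstract reports for the traffic assignment experiments.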

2.
The volume algorithm: producing primal solutions with a subgradient method
We present an extension to the subgradient algorithm to produce primal as well as dual solutions. It can be seen as a fast way to carry out an approximation of Dantzig-Wolfe decomposition. This gives a fast method for producing approximations for large scale linear programs. It is based on a new theorem in linear programming duality. We present successful experience with linear programs coming from set partitioning, set covering, max-cut and plant location. Received: June 15, 1998 / Accepted: November 15, 1999 / Published online March 15, 2000
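As a hedged aside on the primal update that gives the method its name (notation mine; α in (0,1] is a mixing parameter and x^k solves the Lagrangian subproblem at the current multipliers), the approximate primal solution is maintained as

```latex
% exponentially weighted average of past Lagrangian subproblem solutions
\bar{x}^{\,k} \;=\; \alpha\, x^{k} \;+\; (1-\alpha)\,\bar{x}^{\,k-1}, \qquad \alpha \in (0,1],
```

so that the primal estimate averages past subproblem solutions with geometrically decaying weights; roughly speaking, those weights play the role of the volumes below the faces active at the dual point, which is how the method mimics the primal variables of a Dantzig-Wolfe master problem.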

3.
In this paper we investigate two approaches to minimizing a quadratic form subject to the intersection of finitely many ellipsoids. The first approach is the d.c. (difference of convex functions) optimization algorithm (abbr. DCA), whose main tools are the proximal point algorithm and/or the projection subgradient method in convex minimization. The second is a branch-and-bound scheme using Lagrangian duality for bounding and ellipsoidal bisection for branching. The DCA, first introduced by Pham Dinh in 1986 for general d.c. programs and later developed in our various works, is a local method, but from a good starting point it often provides a global solution. This motivates us to combine the DCA and our branch-and-bound algorithm in order to obtain a good initial point for the DCA and to verify the global optimality of the solution it computes. In both approaches we use ellipsoidal constrained quadratic programs as the main subproblems. The idea is based upon the fact that these programs can be efficiently solved by several available (polynomial and nonpolynomial time) algorithms, among which the DCA with a restarting procedure, recently proposed by Pham Dinh and Le Thi, has been shown to be the most robust and fastest for large-scale problems. Several numerical experiments with dimensions up to 200 are reported, which show the effectiveness and robustness of the DCA and of the combined DCA-branch-and-bound algorithm. Received: April 22, 1999 / Accepted: November 30, 1999 / Published online February 23, 2000
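As a hedged illustration of a basic DCA step (a simplification, not the paper's algorithm: the feasible set is a box rather than an intersection of ellipsoids, and neither restarting nor branch-and-bound is used), consider minimizing an indefinite quadratic form with the split q = g - h, where g is a strongly convex quadratic plus the box indicator and h is convex by the choice of the regularization parameter; each iteration linearizes h and solves the convex subproblem in closed form by a projection.

```python
import numpy as np

# DCA sketch for min q(x) = 0.5 x^T Q x + c^T x over the box [-1, 1]^n.
# DC split: q = g - h with  g(x) = 0.5*rho*||x||^2 + indicator of the box,
#                           h(x) = 0.5*rho*||x||^2 - q(x),
# which is convex once rho >= lambda_max(Q). Each DCA step linearizes h at the
# current point and minimizes g minus that linearization, i.e. a box projection.

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
Q = 0.5 * (M + M.T)                          # symmetric, indefinite in general
c = rng.standard_normal(n)
rho = np.max(np.abs(np.linalg.eigvalsh(Q))) + 1.0

q = lambda x: 0.5 * x @ Q @ x + c @ x
x = rng.uniform(-1.0, 1.0, n)                # a good starting point matters (see text)
for _ in range(500):
    y = rho * x - (Q @ x + c)                # gradient of h at the current point
    x_new = np.clip(y / rho, -1.0, 1.0)      # argmin_x g(x) - y^T x
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

print("objective value at the DCA critical point:", q(x))
```

Each iteration costs one matrix-vector product and a clipping operation, which is why ellipsoid- or box-constrained quadratic subproblems are attractive building blocks.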

4.
We propose an inexact proximal bundle method for constrained nonsmooth nonconvex optimization problems whose objective and constraint functions are known through oracles which provide inexact information. The errors in function and subgradient evaluations may be unknown, but they are assumed to be bounded. To handle the nonconvexity we use the redistribution idea, extended to cope with the additional difficulties introduced by the inexactness of the available information. We further employ a modified improvement function to deal with the difficulties caused by the constraint functions. The numerical results show the good performance of our inexact method on a large class of nonconvex optimization problems. The approach is also assessed on semi-infinite programming problems, and some encouraging numerical results are reported.

5.
We present an approximate bundle method for solving nonsmooth equilibrium problems. An inexact cutting-plane linearization of the objective function is established at each iteration; it is an approximation produced by an oracle that gives inaccurate values for the functions and subgradients. The errors in function and subgradient evaluations are bounded, and they need not vanish in the limit. A descent criterion adapted to the setting of inexact oracles is put forward to measure the current descent behavior. The sequence generated by the algorithm converges to approximately critical points of the equilibrium problem under proper assumptions. As a special illustration, the proposed algorithm is applied to generalized variational inequality problems. The numerical experiments show that the algorithm is effective for solving nonsmooth equilibrium problems.

6.
This paper establishes a linear convergence rate for a class of epsilon-subgradient descent methods for minimizing certain convex functions on ℝ^n. Currently prominent methods belonging to this class include the resolvent (proximal point) method and the bundle method in proximal form (considered as a sequence of serious steps). Other methods, such as a variant of the proximal point method given by Correa and Lemaréchal, can also fit within this framework, depending on how they are implemented. The convex functions covered by the analysis are those whose conjugates have subdifferentials that are locally upper Lipschitzian at the origin, a property generalizing classical regularity conditions. Received March 29, 1996 / Revised version received March 5, 1999 / Published online June 11, 1999

7.
In this paper, we consider a special class of nonconvex programming problems for which the objective function and constraints are defined in terms of general nonconvex factorable functions. We propose a branch-and-bound approach based on linear programming relaxations generated through various approximation schemes that utilize, for example, the Mean-Value Theorem and Chebyshev interpolation polynomials coordinated with a Reformulation-Linearization Technique (RLT). A suitable partitioning process is proposed that induces convergence to a global optimum. The algorithm has been implemented in C++ and some preliminary computational results are reported on a set of fifteen engineering process control and design test problems from various sources in the literature. The results indicate that the proposed procedure generates tight relaxations, even via the initial node linear program itself. Furthermore, for nine of these fifteen problems, the application of a local search method that is initialized at the LP relaxation solution produced the actual global optimum at the initial node of the enumeration tree. Moreover, for two test cases, the global optimum found improves upon the solutions previously reported in the source literature. Received: January 14, 1998 / Accepted: June 7, 1999 / Published online December 15, 2000

8.
In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build, separately, an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested on some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.

9.
The conditional gradient method and the steepest descent method, which are conventionally used for solving convex programming problems, are extended to the case where the feasible set is the set-theoretic difference between a convex set and the union of several convex sets. Iterative algorithms are proposed, and their convergence is examined.

10.
We propose a branch and bound algorithm for calculating a globally optimal solution of a portfolio construction/rebalancing problem under concave transaction costs and minimal transaction unit constraints. We employ the absolute deviation of the rate of return of the portfolio as the measure of risk and solve linear programming subproblems obtained by introducing (piecewise) linear underestimating functions for the concave transaction cost functions. A series of numerical experiments shows that the algorithm can solve problems of practical size efficiently. Received: July 15, 1999 / Accepted: October 1, 2000 / Published online December 15, 2000
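As a hedged sketch of the bounding device (the square-root cost below is purely illustrative, not the paper's transaction-cost data): on any interval a concave cost function lies above the chord through its endpoints, so replacing each concave cost by its chord yields a linear programming relaxation, and bisecting the interval during branching tightens the underestimate.

```python
import numpy as np

# Chord (convex-envelope) underestimator of a concave transaction cost on an
# interval: the basic device that turns each branch-and-bound node into an LP.
cost = np.sqrt                       # concave, increasing cost (illustrative)

def chord(a, b):
    """Linear function agreeing with `cost` at a and b; underestimates it on [a, b]."""
    slope = (cost(b) - cost(a)) / (b - a)
    return lambda x: cost(a) + slope * (x - a)

a, b = 0.0, 4.0
x = np.linspace(a, b, 9)
gap = cost(x) - chord(a, b)(x)       # nonnegative by concavity
print("max underestimation gap on [0, 4]:", gap.max())

# Branching: bisect the interval and rebuild the chords; the gap shrinks.
m = 0.5 * (a + b)
gap_left = cost(x[x <= m]) - chord(a, m)(x[x <= m])
gap_right = cost(x[x >= m]) - chord(m, b)(x[x >= m])
print("max gap after one bisection:      ", max(gap_left.max(), gap_right.max()))
```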

11.
Logarithmic SUMT limits in convex programming
The limits of a class of primal and dual solution trajectories associated with the Sequential Unconstrained Minimization Technique (SUMT) are investigated for convex programming problems with non-unique optima. Logarithmic barrier terms are assumed. For linear programming problems, such limits – of both primal and dual trajectories – are strongly optimal, strictly complementary, and can be characterized as analytic centers of, loosely speaking, optimality regions. Examples are given which show that those results do not hold in general for convex programming problems. If the latter are weakly analytic (Bank et al. [3]), primal trajectory limits can be characterized in analogy to the linear programming case and without assuming differentiability. That class of programming problems contains faithfully convex, linear, and convex quadratic programming problems as strict subsets. In the differentiable case, dual trajectory limits can be characterized similarly, albeit under different conditions, one of which suffices for strict complementarity. Received: November 13, 1997 / Accepted: February 17, 1999 / Published online February 22, 2001

12.
An interior Newton method for quadratic programming
We propose a new (interior) approach for the general quadratic programming problem. We establish that the new method has strong convergence properties: the generated sequence converges globally to a point satisfying the second-order necessary optimality conditions, and the rate of convergence is 2-step quadratic if the limit point is a strong local minimizer. Published alternative interior approaches do not share such strong convergence properties for the nonconvex case. We also report on the results of preliminary numerical experiments: the results indicate that the proposed method has considerable practical potential. Received October 11, 1993 / Revised version received February 20, 1996 / Published online July 19, 1999

13.
We propose a variant of the numerical method of steepest descent for oscillatory integrals, using a low-cost explicit polynomial approximation of the paths of steepest descent. A loss of asymptotic order is observed, but in the most relevant cases the overall asymptotic order remains higher than that of a truncated asymptotic expansion at similar computational effort. Theoretical results based on number theory, underpinning the mechanisms behind this effect, are presented.
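A hedged toy example of the underlying machinery (linear phase only, with the exact vertical paths rather than the paper's low-cost polynomial approximations, so it does not exhibit the order loss being analysed): for the integral of f(x)·exp(iωx) over [0,1] with f analytic, the contour is deformed onto the steepest-descent paths through the endpoints, where the oscillation becomes an exp(-t) decay handled by Gauss-Laguerre quadrature.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def nsd_linear_phase(f, omega, nodes=12):
    # I = int_0^1 f(x) exp(i*omega*x) dx, f analytic. With the linear phase the
    # steepest-descent paths at the endpoints a = 0, 1 are the exact vertical
    # lines z = a + i*t/omega, along which exp(i*omega*z) = exp(i*omega*a)*exp(-t);
    # Gauss-Laguerre quadrature absorbs the exp(-t) weight.
    t, w = laggauss(nodes)
    endpoint = lambda a: (1j / omega) * np.exp(1j * omega * a) * np.sum(w * f(a + 1j * t / omega))
    return endpoint(0.0) - endpoint(1.0)

omega = 200.0
approx = nsd_linear_phase(np.exp, omega)                       # f(z) = e^z is entire
exact = (np.exp(1.0 + 1j * omega) - 1.0) / (1.0 + 1j * omega)  # closed form for this f
print(abs(approx - exact))
```

Only a dozen evaluations of f are needed even though the integrand oscillates rapidly, whereas a rule on the real interval needs several points per oscillation; the paper's contribution is to keep most of this efficiency when the paths themselves must be approximated.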

14.
In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied and preliminary results of numerical experiments are presented. The proposed method is compared with the subgradient and proximal bundle methods using the results of numerical experiments.

15.
We describe an algorithm for minimizing convex, not necessarily smooth, functions of several variables, based on a descent direction finding procedure that inherits characteristics both of the standard bundle method and of Wolfe's conjugate subgradient method. This is obtained by allowing appropriate upward shifting of the affine approximations of the objective function which contribute to the classic definition of the cutting plane function. The algorithm embeds a proximity control strategy. Finite termination is proved at a point satisfying an approximate optimality condition, and some numerical results are provided.

16.
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximally complementary solutions. Maximally complementary solutions can be characterized by optimal partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A partition identification algorithm is an algorithm which generates a maximally complementary solution (and its corresponding partition), starting from any complementary solution. In linear programming such algorithms were proposed by Megiddo in 1991 and by Balinski and Tucker in 1969, respectively. In this paper we present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal pivot transform and the orthogonality property of basis tableaus. Received April 9, 1996 / Revised version received April 27, 1998 / Published online May 12, 1999

17.
Given an undirected graph G=(V,E) with |V|=n and an integer k between 0 and n, the maximization graph partition (MAX-GP) problem is to determine a subset S ⊆ V of k nodes such that an objective function w(S) is maximized. The MAX-GP problem can be formulated as a binary quadratic program and is NP-hard. Semidefinite programming (SDP) relaxations of such quadratic programs have been used to design approximation algorithms with guaranteed performance ratios for various MAX-GP problems. Based on several earlier results, we present an improved rounding method using an SDP relaxation, and establish improved approximation ratios for several MAX-GP problems, including Dense-Subgraph, Max-Cut, Max-Not-Cut, and Max-Vertex-Cover. Received: March 10, 2000 / Accepted: July 13, 2001 / Published online February 14, 2002
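For readers unfamiliar with the relax-and-round paradigm, here is a standard sketch for plain Max-Cut in the Goemans-Williamson style (random hyperplane rounding; cvxpy with a bundled SDP solver is assumed to be available). It is not the paper's improved rounding, which additionally handles the cardinality constraint |S| = k of the MAX-GP variants.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 12
W = rng.integers(0, 2, (n, n)).astype(float)   # random 0/1 edge weights
W = np.triu(W, 1); W = W + W.T                 # symmetric, zero diagonal

# SDP relaxation: maximize sum_ij W_ij (1 - X_ij) / 4  s.t.  X PSD, diag(X) = 1.
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
cp.Problem(objective, [cp.diag(X) == 1]).solve()

# Random hyperplane rounding: factor X ~ V V^T and take signs of a random projection.
eigval, eigvec = np.linalg.eigh(X.value)
V = eigvec * np.sqrt(np.clip(eigval, 0, None))
signs = np.sign(V @ rng.standard_normal(n))
cut_value = 0.25 * np.sum(W * (1 - np.outer(signs, signs)))
print("SDP upper bound:", objective.value, "  rounded cut value:", cut_value)
```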

18.
Semidefinite relaxations of quadratic 0-1 programming or graph partitioning problems are well known to be of high quality. However, solving them by primal-dual interior point methods can take much time even for problems of moderate size. The recent spectral bundle method of Helmberg and Rendl can solve quite efficiently large structured equality-constrained semidefinite programs if the trace of the primal matrix variable is fixed, as happens in many applications. We extend the method so that it can handle inequality constraints without seriously increasing computation time. In addition, we introduce inexact null steps. This abolishes the need of computing exact eigenvectors for subgradients, which brings along significant advantages in theory and in practice. Encouraging preliminary computational results are reported. Received: February 1, 2000 / Accepted: September 26, 2001 / Published online August 27, 2002. A preliminary version of this paper appeared in the proceedings of IPCO ’98 [12].

19.
One way of solving multiple objective mathematical programming problems is finding discrete representations of the efficient set. The modified goal of finding good discrete representations of the efficient set would contribute to the practicality of vector maximization algorithms. We define coverage, uniformity and cardinality as the three attributes of quality of discrete representations, and introduce a framework that includes these attributes, in which discrete representations can be evaluated, compared to each other, and judged satisfactory or unsatisfactory by a Decision Maker. We provide simple mathematical programming formulations that can be used to compute the coverage error of a given discrete representation. Our formulations are practically implementable when the problem under study is a multiobjective linear programming problem. We believe that interactive algorithms, along with vector maximization methods, can make use of our framework and its tools. Received April 7, 1998 / Revised version received March 1999 / Published online November 9, 1999
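One standard way to formalize the three attributes (the notation below is mine and only meant to be consistent with the abstract; N is the nondominated set, R ⊆ N a discrete representation, and d a metric in objective space):

```latex
% coverage error: worst distance from a nondominated point to the representation;
% uniformity: smallest spacing between distinct representative points;
% cardinality: the number of representative points.
\varepsilon_{\mathrm{cov}}(R) = \max_{z \in N} \min_{r \in R} d(z, r), \qquad
\delta_{\mathrm{unif}}(R) = \min_{\substack{r, r' \in R \\ r \neq r'}} d(r, r'), \qquad
\kappa(R) = |R|.
```

A representation is then judged satisfactory when the coverage error is small enough, the uniformity level is large enough, and the cardinality stays within what the Decision Maker is willing to inspect.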

20.
This paper studies the possibility of combining an interior point strategy with a steepest descent method when solving convex programming problems, in such a way that the convergence property of the interior point method remains valid but many iterations do not require the solution of a system of equations. Motivated by this general idea, we propose a hybrid algorithm which combines a primal–dual potential reduction algorithm with the use of the steepest descent direction of the potential function. The complexity of the potential reduction algorithm remains valid but the overall computational cost can be reduced. Our numerical experiments are also reported. Copyright © 2002 John Wiley & Sons, Ltd.
