Similar Documents
20 similar documents retrieved.
1.
In this paper, we introduce a minimax model for network connection problems with interval parameters. We consider how to connect given nodes in a network with a path or a spanning tree under a given budget, where each link is associated with an interval and can be established at a cost of any value in that interval. The quality of an individual link (or the risk of link failure, etc.) depends on its construction cost and associated interval. To achieve fairness of the network connection, our model aims at minimizing the maximum risk over all links used. We propose two algorithms that find optimal paths and spanning trees, respectively, in polynomial time. The polynomial solvability indicates a salient difference between our minimax model and the robust deviation criterion model for network connection with interval data, which gives rise to NP-hard optimization problems.
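As a minimal sketch of the model (with an assumed risk function, since the abstract does not give one explicitly), suppose link e carries cost interval [l_e, u_e], the budget is B, and the risk of a link decreases with the amount spent on it, e.g. r_e(c_e) = (u_e - c_e)/(u_e - l_e). For an s-t path the problem then reads

\[
\min_{P \in \mathcal{P}_{st}} \; \min_{c} \; \max_{e \in P} \; \frac{u_e - c_e}{u_e - l_e}
\quad \text{s.t.} \quad \sum_{e \in P} c_e \le B, \qquad l_e \le c_e \le u_e \ \ \forall e \in P.
\]

Under such a monotone risk, one polynomial-time strategy consistent with the abstract (though not necessarily the authors' algorithm) is to binary-search a risk threshold t: link e then needs cost at least u_e - t(u_e - l_e) to keep its risk at most t, and checking whether some s-t path fits within the budget B with these costs is an ordinary shortest-path computation.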

2.
We propose techniques for the solution of the LP relaxation and the Lagrangean dual in combinatorial optimization and nonlinear programming problems. Our techniques find the optimal solution value and the optimal dual multipliers of the LP relaxation and the Lagrangean dual in polynomial time, using as a subroutine either the Ellipsoid algorithm or the recent algorithm of Vaidya. Moreover, in problems of a certain structure our techniques find not only the optimal solution value, but the solution as well. Our techniques lead to significant improvements in the theoretical running time compared with previously known methods (interior point methods, the Ellipsoid algorithm, Vaidya's algorithm). We apply our method to the solution of the LP relaxation and the Lagrangean dual of several classical combinatorial problems, such as the traveling salesman problem, the vehicle routing problem, the Steiner tree problem, the k-connected problem, multicommodity flows, network design problems, network flow problems with side constraints, facility location problems, K-polymatroid intersection, the multiple-item capacitated lot-sizing problem, and stochastic programming. In all these problems our techniques significantly improve the theoretical running time and yield the fastest way to solve them.
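For reference, the Lagrangean dual treated by such schemes has the generic textbook form: for min{c^T x : Ax >= b, x in X} with complicating constraints Ax >= b,

\[
L(\lambda) \;=\; \min_{x \in X} \; c^{\top}x + \lambda^{\top}(b - Ax), \qquad
z_{LD} \;=\; \max_{\lambda \ge 0} L(\lambda).
\]

The dual function is concave (piecewise linear for combinatorial X), b - Ax(\lambda) is a subgradient at \lambda for any minimizer x(\lambda), and cutting-plane schemes driven by the Ellipsoid algorithm or Vaidya's algorithm maximize L using exactly this oracle; z_{LD} coincides with the LP-relaxation bound whenever the subproblem over X has the integrality property.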

3.
We investigate here the class—denoted R-LP-RHSU—of two-stage robust linear programming problems with right-hand-side uncertainty. Such problems arise in many applications, e.g., robust PERT scheduling (with uncertain task durations), robust maximum flow (with uncertain arc capacities), robust network capacity expansion problems, robust inventory management, and some robust production planning problems in the context of power production/distribution systems. It is shown that such problems can be formulated as large-scale linear programs with an associated nonconvex separation subproblem. A formal proof of strong NP-hardness for the general case is then provided, and polynomially solvable subclasses are exhibited. Differences with other previously described robust LP problems (featuring row-wise uncertainty instead of column-wise uncertainty) are highlighted.
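In generic form (notation chosen here for illustration), an R-LP-RHSU instance can be written as

\[
\min_{x \in X} \;\; c^{\top}x \;+\; \max_{h \in \mathcal{H}} \;\; \min_{y \ge 0} \;\{\, q^{\top}y \;:\; W y \ge h - T x \,\},
\]

where only the right-hand side h is uncertain and ranges over a polyhedron \mathcal{H}. Dualizing the inner minimization turns the adversarial step into a maximization that is bilinear in h and the dual multipliers, which is precisely the nonconvex separation subproblem mentioned above.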

4.
We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^1.5). The random LP problem is Todd's probabilistic model with the standard Gauss distribution.

5.
In this paper, we propose to enhance Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations for polynomial programming problems by developing cutting plane strategies using concepts derived from semidefinite programming. Given an RLT relaxation, we impose positive semidefiniteness on suitable dyadic variable-product matrices, and correspondingly derive implied semidefinite cuts. In the case of polynomial programs, there are several possible variants for selecting such particular variable-product matrices on which positive semidefiniteness restrictions can be imposed in order to derive implied valid inequalities. This leads to a new class of cutting planes that we call v-semidefinite cuts. We explore various strategies for generating such cuts, and exhibit their relative effectiveness towards tightening the RLT relaxations and solving the underlying polynomial programming problems in conjunction with an RLT-based branch-and-cut scheme, using a test-bed of problems from the literature as well as randomly generated instances. Our results demonstrate that these cutting planes achieve a significant tightening of the lower bound in contrast with using RLT as a stand-alone approach, thereby enabling a more robust algorithm with an appreciable reduction in the overall computational effort, even in comparison with the commercial software BARON and the polynomial programming problem solver GloptiPoly.
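As an illustration of the basic mechanism (the paper's v-semidefinite cuts use more general variable-product matrices than the one shown here), let X denote the RLT variable that linearizes xx^T. One may impose

\[
M(x,X) \;=\; \begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \;\succeq\; 0,
\]

and, given a relaxation solution (\bar{x},\bar{X}) violating this condition, any vector \alpha with \alpha^{\top} M(\bar{x},\bar{X})\,\alpha < 0 (for instance an eigenvector associated with a negative eigenvalue) yields the cut \alpha^{\top} M(x,X)\,\alpha \ge 0, which is linear in (x,X) and hence directly usable in the LP relaxation.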

6.
In this paper, a new model for inverse network flow problems, the robust partial inverse problem, is presented. For a given partial solution, the robust partial inverse problem is to modify the coefficients optimally such that all full solutions containing the partial solution become optimal under the new coefficients. It has been shown that the robust partial inverse spanning tree problem can be formulated as a combinatorial linear program, while the robust partial inverse minimum cut problem and the robust partial inverse assignment problem can be solved by combinatorial strongly polynomial algorithms.

7.
In this paper, we address uncapacitated network design problems characterised by uncertainty in the input data. Network design choices have a decisive impact on the effectiveness of the system, yet design decisions are frequently made with a great degree of uncertainty about the conditions under which the system will be required to operate. Instead of finding optimal designs for a given future scenario, designers often search for network configurations that are “good” for a variety of likely future scenarios. This approach is referred to as the “robustness” approach to system design. We present a formal definition of “robustness” for the uncapacitated network design problem and develop algorithms aimed at finding robust network designs. These algorithms are adaptations of the Benders decomposition methodology, tailored so that they can efficiently identify robust network designs. We tested the proposed algorithms on a set of randomly generated problems. Our computational experiments revealed two important properties: first, robust solutions are abundant in uncapacitated network design problems, and second, the proposed algorithms' performance is satisfactory in terms of cost and the number of robust network designs obtained.
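As a generic sketch of the decomposition being adapted (not the authors' robust variant), let y denote the design variables and x the operational variables in min{f^T y + c^T x : Ax >= b - By, x >= 0, y in Y}. The Benders master problem keeps only y and a value variable \eta,

\[
\min_{y \in Y,\ \eta} \;\; f^{\top}y + \eta
\quad \text{s.t.} \quad \eta \;\ge\; \pi_k^{\top}\,(b - B y) \quad \forall k,
\]

where each \pi_k is an optimal dual solution of the subproblem min{c^T x : Ax >= b - B\bar{y}_k, x >= 0} for a previously examined design \bar{y}_k (feasibility cuts from dual rays are added analogously). The robustness-oriented algorithms of the paper tailor how these cuts are generated and how candidate designs are screened across scenarios.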

8.
The simplex algorithm is still the best known and most frequently used way to solve LP problems. Khachian has suggested a method to solve these problems in polynomial time. The average behaviour of his method, however, is still inferior to modern simplex-based LP codes. A new gradient-based approach, which also has polynomial worst-case behaviour, has been suggested by Karmarkar. This method was modified, programmed and compared with other available LP codes. It is shown that the numerical efficiency of Karmarkar's method compares favourably with other LP codes, particularly for problems with large numbers of variables and few constraints.
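The qualitative comparison can be reproduced today with off-the-shelf solvers; the sketch below (an illustration, not the paper's original codes) solves a random dense LP with many variables and few constraints — the regime where the abstract reports Karmarkar's method doing well — with both a simplex and an interior-point method.

    # Compare a dual-simplex and an interior-point LP solver on a random
    # instance with few constraints and many variables (illustrative only).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n = 20, 2000                       # few constraints, many variables
    A = rng.standard_normal((m, n))
    x_feasible = rng.random(n)            # used only to guarantee feasibility
    b = A @ x_feasible + 1.0
    c = rng.standard_normal(n)

    for method in ("highs-ds", "highs-ipm"):   # dual simplex vs. interior point
        res = linprog(c, A_ub=A, b_ub=b, bounds=(0, 1), method=method)
        print(method, res.status, round(res.fun, 4))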

9.
We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the expected and anticipated number of iterations of these algorithms is bounded above by O(n^1.5). The random LP problem is Todd's probabilistic model with the Cauchy distribution.

10.
The best formulations for some combinatorial optimization problems are integer linear programming models with an exponential number of rows and/or columns, which are solved incrementally by generating missing rows and columns only when needed. As an alternative to row generation, some exponential formulations can be rewritten in a compact extended form, which has only a polynomial number of constraints and a polynomial, although larger, number of variables. As an alternative to column generation, there are compact extended formulations for the dual problems, which lead to compact equivalent primal formulations, again with only a polynomial number of constraints and variables. In this paper we introduce a tool to derive compact extended formulations and survey many combinatorial optimization problems to which it can be applied. The tool is based on the possibility of formulating the separation procedure as an LP model. It can be seen as one further method to generate compact extended formulations, besides other tools of a geometric and combinatorial nature present in the literature.
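The mechanism can be summarised as follows (stated informally, with notation chosen here for illustration): suppose that, for a fixed point x, the separation problem over an exponential family of valid inequalities can be written as an LP of the form min_u {(Gx + g)^T u : Du >= d}, with x declared cut-free exactly when this minimum is nonnegative. If that LP is feasible and bounded, strong duality gives

\[
\min_{u}\{(Gx+g)^{\top}u \,:\, Du \ge d\} \;\ge\; 0
\quad\Longleftrightarrow\quad
\exists\, w \ge 0:\;\; D^{\top}w = Gx + g,\;\; d^{\top}w \ge 0,
\]

so adjoining the polynomially many variables w and the constraints on the right-hand side to the original model yields a compact extended formulation.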

11.
Multicategory Classification by Support Vector Machines
We examine the problem of how to discriminate between objects of three or more classes. Specifically, we investigate how two-class discrimination methods can be extended to the multiclass case. We show how linear programming (LP) approaches based on the work of Mangasarian and quadratic programming (QP) approaches based on Vapnik's Support Vector Machine (SVM) can be combined to yield two new approaches to the multiclass problem. In LP multiclass discrimination, a single linear program is used to construct a piecewise-linear classification function. In our proposed multiclass SVM method, a single quadratic program is used to construct a piecewise-nonlinear classification function. Each piece of this function can take the form of a polynomial, a radial basis function, or even a neural network. For k > 2 classes, the SVM method as originally proposed required the construction of a two-class SVM to separate each class from the remaining classes. Similarly, k two-class linear programs can be used for the multiclass problem. We performed an empirical study of the original LP method, the proposed k LP method, the proposed single QP method, and the original k QP methods. We discuss the advantages and disadvantages of each approach.
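As a present-day illustration of the two strategies (using scikit-learn, not the authors' original LP/QP formulations), the sketch below contrasts the one-classifier-per-class scheme with a multiclass SVM trained as a single optimization problem (the Crammer-Singer formulation, which differs from the paper's QP but shares the single-problem structure).

    # One-vs-rest (k separate two-class SVMs) vs. a single joint multiclass SVM.
    from sklearn.datasets import load_iris
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC, LinearSVC

    X, y = load_iris(return_X_y=True)

    ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)   # k two-class QPs
    joint = LinearSVC(multi_class="crammer_singer", max_iter=10000).fit(X, y)  # one joint problem

    print("one-vs-rest accuracy:", ovr.score(X, y))
    print("single-problem accuracy:", joint.score(X, y))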

12.
Down, D., Meyn, S.P. Queueing Systems, 1997, 27(3-4): 205-226
We develop the use of piecewise linear test functions for the analysis of stability of multiclass queueing networks and their associated fluid limit models. It is found that if an associated LP admits a positive solution, then a Lyapunov function exists. This implies that the fluid limit model is stable and hence that the network model is positive Harris recurrent with a finite polynomial moment. Also, it is found that if a particular LP admits a solution, then the network model is transient.

13.
We address the problem of determining a robust maximum flow value in a network with uncertain link capacities taken in a polyhedral uncertainty set. Besides a few polynomial cases, we focus on the case where the uncertainty set is taken to be the solution set of an associated (continuous) knapsack problem. This class of problems is shown to be polynomially solvable for planar graphs, but NP-hard for graphs without special structure. The latter result provides evidence that the problem investigated here has a structure fundamentally different from the robust network flow models proposed in various other published works.
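Under one common reading of the worst-case model (notation chosen here for illustration, not necessarily the paper's exact definition), the robust maximum flow value is

\[
\min_{c \,\in\, \mathcal{U}} \;\; \max_{f \ \text{feasible w.r.t. } c} \;\; \mathrm{val}(f),
\qquad
\mathcal{U} \;=\; \{\, c \;:\; a^{\top}c \le b,\;\; 0 \le c \le \bar{c} \,\},
\]

where \mathcal{U} is the solution set of a continuous knapsack constraint on the link capacities; the inner problem is an ordinary maximum flow computation for each fixed capacity vector c.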

14.
This contribution gives an overview of the state of the art and recent advances in mixed integer optimization for solving planning and design problems in the process industry. In some case studies specific aspects are stressed and the typical difficulties of real-world problems are addressed. Mixed integer linear optimization is widely used to solve supply chain planning problems. Some of the complicating features, such as origin tracing and shelf-life constraints, are discussed in more detail. If properly done, the planning models can also be used for product and customer portfolio analysis. We also stress the importance of multi-criteria optimization and of correct modeling for optimization under uncertainty. Stochastic programming for continuous LP problems is now part of most optimization packages, and there is encouraging progress in the field of stochastic MILP and robust MILP. Process and network design problems often lead to nonconvex mixed integer nonlinear programming models. If the time to compute the solution is not bounded, there are already commercial solvers available that can compute the global optima of such problems within hours. If time is more restricted, then tailored solution techniques are required.

15.
In this paper, we present a new class of alternative theorems for SOS-convex inequality systems without any qualifications. This class of theorems provides alternative equations, in terms of sums of squares, to the solvability of the given inequality system. A strong separation theorem for convex sets described by convex polynomial inequalities plays a key role in establishing the class of alternative theorems. Consequently, we show that the optimal values of various classes of robust convex optimization problems are equal to the optimal values of related semidefinite programming problems (SDPs), and so the value of the robust problem can be found by solving a single SDP. The class of problems includes programs with SOS-convex polynomials under data uncertainty in the objective function, such as uncertain quadratically constrained quadratic programs. SOS-convexity is a computationally tractable relaxation of convexity for real polynomials. We also provide an application of our theorem of the alternative to multi-objective convex optimization under data uncertainty.

16.
The Simplex Stochastic Collocation (SSC) method is an efficient algorithm for uncertainty quantification (UQ) in computational problems with random inputs. In this work, we show how its formulation based on simplex tessellation, high-degree polynomial interpolation, and adaptive refinement can be employed in problems involving optimization under uncertainty. The optimization approach used is the Nelder-Mead algorithm (NM), also known as the Downhill Simplex Method. The resulting SSC/NM method, called Simplex2, is based on (i) a coupled stopping criterion and (ii) the use of a high-degree polynomial interpolation in the optimization space to accelerate some NM operators. Numerical results show that this method is very efficient for mono-objective optimization and minimizes the global number of deterministic evaluations needed to determine a robust design. The method is applied to some analytical test cases and to a realistic problem of robust optimization of a multi-component airfoil.
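The coupling of a deterministic simplex search with a statistic of the uncertain response can be illustrated with a much simpler surrogate of the SSC machinery (the objective, samples, and robustness measure below are hypothetical placeholders, not taken from the paper):

    # Robust (mean + spread) minimization of a noisy objective with Nelder-Mead.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    xi = rng.normal(0.0, 0.1, size=64)        # samples of the uncertain parameter

    def robust_objective(x):
        # hypothetical test function; uncertainty enters additively through xi
        vals = (x[0] - 1.0 + xi) ** 2 + 0.5 * (x[1] + xi) ** 2
        return vals.mean() + vals.std()       # simple mean-plus-spread measure

    res = minimize(robust_objective, x0=[0.0, 0.0], method="Nelder-Mead")
    print(res.x, res.fun)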

17.
The estimation of the Lyapunov spectrum for a chaotic time series is discussed in this study. Three models are compared for estimating the Lyapunov spectrum: the local linear (LL) model, the local polynomial (LP) model, and the global radial basis function (RBF) model. The number of neighbors used to train the LL and LP models, and the number of centers used to build the RBF model, are determined by the generalized degrees of freedom of the chaotic time series. The models are applied to three artificial chaotic time series and two real-world time series; the numerical results show that the LL model with the chosen parameters provides more accurate estimates than the other models on clean data sets, while the RBF model is more robust to noise on noisy data sets.
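A compact sketch of the local linear (LL) idea follows (a simplified illustration, not the paper's full procedure with generalized-degrees-of-freedom model selection): local Jacobians are estimated by least squares over nearest neighbours in a delay embedding, and their QR factors are accumulated to yield the Lyapunov spectrum.

    # Local-linear estimation of the Lyapunov spectrum from a scalar time series.
    import numpy as np

    def lyapunov_spectrum_ll(x, dim=2, k=20, step=1):
        N = len(x) - (dim - 1)
        Y = np.column_stack([x[i:i + N] for i in range(dim)])  # delay embedding
        n = N - step
        sums, Q = np.zeros(dim), np.eye(dim)
        for t in range(n):
            d = np.linalg.norm(Y[:n] - Y[t], axis=1)           # neighbours of Y[t]
            d[t] = np.inf
            idx = np.argsort(d)[:k]
            A = Y[idx] - Y[t]                                  # local displacements
            B = Y[idx + step] - Y[t + step]                    # their images
            J = np.linalg.lstsq(A, B, rcond=None)[0].T         # local Jacobian
            Q, R = np.linalg.qr(J @ Q)                         # re-orthonormalize
            sums += np.log(np.abs(np.diag(R)))
        return sums / n                                        # exponents per step

    # usage: x-component of the Henon map (largest exponent is about 0.42)
    xs, (u, v) = np.empty(3000), (0.1, 0.1)
    for i in range(3000):
        u, v = 1.0 - 1.4 * u * u + v, 0.3 * u
        xs[i] = u
    print(lyapunov_spectrum_ll(xs, dim=2, k=20))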

18.
In this paper, we investigate how an embedded pure network structure arising in many linear programming (LP) problems can be exploited to create improved sparse simplex solution algorithms. The original coefficient matrix is partitioned into network and non-network parts. For this partitioning, a decomposition technique can be applied. The embedded network flow problem can be solved to optimality using a fast network flow algorithm. We investigate two alternative decompositions, namely Lagrangean and Benders. In the Lagrangean approach the optimal solution of a network flow problem, and in the Benders approach the combined solution of the master problem and the subproblem, are used to compute good (near-optimal and near-feasible) solutions for a given LP problem. In both cases, we terminate the decomposition algorithms after a preset number of passes, and the active variables identified by this procedure are then used to create an advanced basis for the original LP problem. We present comparisons with a unit basis and a well-established crash procedure. We find that the computational results of applying these techniques to a selection of Netlib models are promising enough to encourage further research in this area.
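In the notation used here for illustration, write the partitioned LP as min{c^T x : Nx = b, Sx = d, x >= 0}, with N the embedded network rows and S the non-network (side) rows. The Lagrangean variant relaxes the side constraints,

\[
L(\lambda) \;=\; \min_{Nx = b,\ x \ge 0} \; c^{\top}x + \lambda^{\top}(d - Sx),
\]

so that each dual evaluation is a pure network flow problem solvable by a fast network algorithm; the variables active across a few such passes (or across Benders master/subproblem passes) are the ones used to seed the advanced basis.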

19.
Constraint Programming (CP) has been successful in a number of combinatorial search and discrete optimisation problems. Yet other, more traditional approaches, such as Integer Programming (IP), can still give better performance on the same problem types. Central to IP's success is its reliance on a fast Linear Programming (LP) solver providing solutions to the corresponding relaxed problems during the search. These solutions are used to guide the search within IP as well as to detect infeasibility and integrality. This paper shows that there is also scope to include LP within the CP framework, in order to guide the CP search in a similar way. The problems examined here are ones for which CP on its own had proved markedly inferior to IP. Hence a hybrid solver based on the CP search and using an LP solver is configured and run on these problems. The outcome shows that using the LP solver within the CP search is a valuable addition to the available search strategies. Improved performance over the CP-only strategies is obtained and, further, results comparable to those from IP are achieved. Overall, CP+LP can be considered a more robust approach than either CP or IP alone on a variety of combinatorial search problems.

20.
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase in mean return compared to risk-free investment opportunities. In the classical model, following Markowitz, risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several polyhedral risk measures, being linear programming (LP) computable in the case of discrete random variables represented by their realizations under specified scenarios, have been introduced and applied in portfolio optimization. Reward-risk ratio optimization with polyhedral risk measures can be transformed into LP formulations. The LP models typically contain a number of constraints proportional to the number of scenarios, while the number of variables (matrix columns) is proportional to the total of the number of scenarios and the number of instruments. Real-life financial decisions are usually based on more advanced simulation models employed for scenario generation, where one may get several thousand scenarios. This may lead to LP models with a huge number of variables and constraints, decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by alternative models based on inverse ratio minimization and taking advantage of LP duality. In the introduced models the number of structural constraints (matrix rows) is proportional to the number of instruments, so that the simplex method's efficiency is not seriously affected by the number of scenarios, which guarantees easy solvability.
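The inverse-ratio device can be illustrated with a generic positively homogeneous, LP-computable risk measure \rho (notation chosen here, not taken from the paper). Instead of maximizing the ratio (\mu^{\top}x - r_0)/\rho(x) over the portfolio simplex, one minimizes the inverse ratio and homogenizes it with a scaling variable t:

\[
\min_{\tilde{x} \ge 0,\ t \ge 0} \;\; \rho(\tilde{x})
\quad \text{s.t.} \quad \mu^{\top}\tilde{x} - r_0\, t = 1, \qquad \mathbf{1}^{\top}\tilde{x} = t,
\]

after which the original portfolio is recovered as x = \tilde{x}/t. Taking the LP dual of the resulting model moves the scenario-indexed rows into columns, leaving a number of structural constraints proportional to the number of instruments, as described above.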
