Similar Documents
20 similar documents retrieved.
1.
The inverse traveling salesman problem belongs to the class of "inverse combinatorial optimization" problems. In an inverse combinatorial optimization problem, we are given a feasible solution for an instance of a particular combinatorial optimization problem, and the task is to adjust the instance parameters as little as possible so that the given solution becomes optimal in the new instance. In this paper, we consider a variant of the inverse traveling salesman problem, denoted by ITSP_{W,A}, which takes into account a set W of admissible weight systems and a specific algorithm A. We are given an edge-weighted complete graph (an instance of TSP), a Hamiltonian tour (a feasible solution of TSP) and a specific algorithm solving TSP. Then ITSP_{W,A} is the problem of finding a new weight system in W which minimizes the difference from the original weight system such that the given tour can be selected by the algorithm as a solution. We consider the cases ${W \in \{\mathbb{R}^{+m}, \{1, 2\}^m , \Delta\}}$, where Δ denotes the set of edge weight systems satisfying the triangle inequality and m is the number of edges. As for algorithms, we consider the 2-opt local search algorithm, the closest-neighbor greedy algorithm, and any optimal algorithm. We derive both complexity and approximation results. We also deal with the inverse traveling salesman problem on a line, for which we modify the positions of vertices instead of the edge weights. Here we handle the cases ${W \in \{\mathbb{R}^{+n}, \mathbb{N}^n\}}$, where n is the number of vertices.
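Written out, the variant described above has roughly the following shape (our reconstruction in generic notation; the norm measuring the difference between weight systems is whichever the paper adopts):

$$ \mathrm{ITSP}_{W,A}:\qquad \min_{w' \in W}\ \lVert w' - w \rVert \quad\text{s.t. the given tour } T \text{ can be returned by algorithm } A \text{ on the re-weighted instance,} $$

where w is the original weight system and T the given Hamiltonian tour.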

2.
In this paper we develop the Complex method, an algorithm for solving linear programming (LP) problems along interior search directions. The Complex Interior-Boundary method (as the name suggests) moves through the interior of the feasible region from one boundary point to another, bypassing several extreme points at a time. These directions of movement are guaranteed to improve the objective function. As a result, the Complex method aims to reach the optimal point faster than the Simplex method on large LP problems. The method also extends to nonlinear programming (NLP) with linear constraints, as compared with the generalized reduced gradient method. The Complex method is based on a pivoting operation, which is a computationally efficient operation compared to some interior-point methods. In addition, our algorithm offers more flexibility in choosing the search direction than other pivoting methods (such as reduced gradient methods). The interior direction of movement aims at reducing the number of iterations and the running time needed to obtain the optimal solution of the LP problem compared to the Simplex method. Furthermore, the method has an advantage over the Simplex method and other convex programming methods with regard to the starting point: it does not require a basic feasible solution (BFS) and can start at any given feasible solution. Preliminary testing shows that the reduction in computational effort compared to the Simplex method is promising.

3.
Location-allocation with l_p distances is studied. It is shown that this problem can be expressed as a concave minimization programming problem. Since concave minimization algorithms are not yet well developed, five solution methods are developed that exploit the special properties of the location-allocation problem. Using the rectilinear distance measure, two of these algorithms achieved optimal solutions in all 102 test problems for which solutions were known. The algorithms can be applied to much larger problems than any existing exact methods.

4.
We propose a generalization of the inverse problem, which we call the adjustment problem. For an optimization problem with a linear objective function and a restriction of it defined by a given subset of the feasible solutions, the adjustment problem consists in finding the least costly perturbation of the original objective function coefficients that guarantees that an optimal solution of the perturbed problem is also feasible for the considered restriction. We describe a method for solving the adjustment problem for continuous linear programming problems when the variables in the restriction are required to be binary.
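In generic notation (ours, not necessarily the authors'), with original cost vector c, feasible set X and restriction X' ⊆ X, the adjustment problem can be stated as

$$ \min_{c'} \; \lVert c' - c \rVert \quad\text{s.t.}\quad \Bigl(\operatorname*{arg\,min}_{x \in X} \, c'^{\top} x\Bigr) \cap X' \neq \emptyset, $$

i.e., the perturbed costs must admit at least one optimal solution that also satisfies the restriction.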

5.
In this paper, we consider different formulations for the r-separation problem, where the objective is to choose as many points as possible from a given set of points subject to the constraint that no two selected points can be closer than r units to one another. Our goal is to devise a mathematical programming formulation whose LP-relaxation yields integer solutions with great frequency. We consider six different formulations of the r-separation problem. We show that the LP-relaxations of the most obvious formulations yield fractional results in all instances of the problem in which an optimal solution contains fewer than half of the given points. To build computationally effective formulations for the r-separation problem, we write dense constraints with unit right-hand sides. The LP formulation that performs best in our computational tests almost always finds 0–1 solutions to the problem.
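To make the contrast concrete (our reading of the abstract, not the paper's exact models): the obvious formulation uses one constraint per conflicting pair,

$$ \max \sum_{i} x_i \quad\text{s.t.}\quad x_i + x_j \le 1 \;\; \text{for all } i<j \text{ with } d(i,j) < r, \qquad x_i \in \{0,1\}, $$

whereas the dense constraints with unit right-hand sides aggregate whole sets S of mutually conflicting points into single constraints of the form $\sum_{i \in S} x_i \le 1$, which can only tighten the LP relaxation obtained by replacing x_i ∈ {0,1} with 0 ≤ x_i ≤ 1.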

6.
In this paper, we study inverse optimization for linearly constrained convex separable programming problems, which have wide applications in industrial and managerial areas. For a given feasible point of a convex separable program, inverse optimization is to determine whether the feasible point can be made optimal by adjusting the parameter values in the problem and, when the answer is positive, to find the parameter values that require the smallest adjustments. A necessary and sufficient condition is given for a feasible point to be able to become optimal by adjusting the parameter values. Inverse optimization formulations are presented under the l_1 and l_2 norms. These inverse optimization problems are linear programs when the l_1 norm is used in the formulation, and convex quadratic separable programs when the l_2 norm is used.
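A generic version of the formulations mentioned above (our sketch; the paper's exact parameterization of the separable objective may differ) is

$$ \min_{c'} \; \lVert c' - c \rVert_1 \ \ \text{or} \ \ \lVert c' - c \rVert_2^2 \quad\text{s.t.}\quad x^{0} \text{ satisfies the optimality (KKT) conditions of the program with parameters } c', $$

where c collects the adjustable parameters and x^0 is the given feasible point; the l_1 version leads to the linear program and the l_2 version to the convex quadratic separable program stated above.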

7.
A new exact penalty function method, called the l1 exact exponential penalty function method, is introduced. In this approach, the so-called exponential penalized optimization problem with the l1 exact exponential penalty function is associated with the original optimization problem with both inequality and equality constraints. The l1 exact exponential penalty function method is used to solve nonconvex mathematical programming problems with r-invex functions (with respect to the same function η). The equivalence between the sets of optimal solutions of the original mathematical programming problem and of its associated exponential penalized optimization problem is established under suitable r-invexity assumptions. Lower bounds on the penalty parameter are also given above which this result holds.

8.
Consider the relaxation of an integer programming (IP) problem in which the feasible region is replaced by the intersection of the linear programming (LP) feasible region and the corner polyhedron for a particular LP basis. Recently a primal-dual ascent algorithm has been given for solving this relaxation. Given an optimal solution of this relaxation, we state criteria for selecting a new LP basis for which the associated relaxation is stronger. These criteria may be successively applied to obtain either an optimal IP solution or a lower bound on the cost of such a solution. Conditions are given for equality of the convex hull of feasible IP solutions and the intersection of all corner polyhedra.

9.
We consider 0–1 programming problems with a minimax objective function and an arbitrary set of constraints. Upon appropriate transformation of its cost coefficients, such a minimax problem can be reduced to a linear minisum problem over the same set of feasible solutions, such that an optimal solution to the latter also solves the original minimax problem. Although this reducibility applies to any 0–1 programming problem, it is of particular interest for certain locational decision models. Among the obvious implications is that an algorithm for solving a p-median (minisum) problem in a network will also solve a corresponding p-center (minimax) problem. It should be emphasized that the results presented will in general only hold for 0–1 problems, due to intrinsic properties of the minimax criterion.
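One standard way to realize such a reduction (an illustrative scaling, not necessarily the authors' exact transformation) maps the k-th smallest distinct cost value to a weight of order (n+1)^k, so that the weight sum of any feasible solution is dominated by the largest cost class it uses; a minimal self-contained sketch:

```python
from itertools import product

def minimax_to_minisum_costs(costs):
    """Map each cost to a weight that grows so fast with the cost's rank that
    comparing weight sums first compares the original maxima.
    Illustrative scaling only; the paper's exact transformation may differ."""
    n = len(costs)
    rank = {c: k for k, c in enumerate(sorted(set(costs)))}
    return [(n + 1) ** rank[c] for c in costs]

def best(costs, feasible, objective):
    """Brute-force the best feasible 0-1 vector (fine for a toy instance)."""
    candidates = (x for x in product((0, 1), repeat=len(costs)) if feasible(x))
    return min(candidates, key=lambda x: objective(x, costs))

# Toy instance: choose exactly two of four items.
costs = [7, 3, 5, 3]
feasible = lambda x: sum(x) == 2
minimax = lambda x, c: max(cj for xj, cj in zip(x, c) if xj)
minisum = lambda x, c: sum(cj for xj, cj in zip(x, c) if xj)

x_star = best(costs, feasible, minimax)                           # direct minimax optimum
x_red = best(minimax_to_minisum_costs(costs), feasible, minisum)  # via the reduction
print(minimax(x_star, costs), minimax(x_red, costs))              # both print 3
```

The same device turns a p-center instance into a p-median instance by transforming the distance matrix, which is the locational implication mentioned above.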

10.
This paper considers the following inverse optimization problem: given a linear program, a desired optimal objective value, and a set of feasible cost vectors, determine a cost vector such that the corresponding optimal objective value of the linear program is closest to the desired value. The above problem, referred to here as the inverse optimal value problem, is significantly different from standard inverse optimization problems, which involve determining a cost vector for a linear program such that a pre-specified solution vector is optimal. In this paper, we show that the inverse optimal value problem is NP-hard in general. We identify conditions under which the problem reduces to a concave maximization or a concave minimization problem. We provide sufficient conditions under which the associated concave minimization problem and, correspondingly, the inverse optimal value problem are polynomially solvable. For the case when the set of feasible cost vectors is polyhedral, we describe an algorithm for the inverse optimal value problem based on solving linear and bilinear programming problems. Some preliminary computational experience is reported.
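In symbols (our generic notation), with feasible cost set C, target value z*, and LP feasible region P, the problem is

$$ \min_{c \in C} \; \bigl|\, z(c) - z^{*} \bigr|, \qquad z(c) = \min\{\, c^{\top} x \;:\; x \in P \,\}, $$

in contrast to classical inverse LP, which fixes a target solution x^0 rather than a target value.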

11.
For the integrodifferential viscoelasticity equations, we study the problem of determining the coefficients of the equations and the kernels occurring in the integral terms of the system of equations. The density of the medium is assumed to be given. We suppose that the inhomogeneity support of the sought functions is contained in some compact domain B_0. We consider a series of inverse problems in which an impulse source is concentrated at points y of the boundary of B_0. The point y is the parameter of the problem. The given information about the solution is the trace of the solution to the Cauchy problem with zero initial data. This trace is given on the boundary of B_0 for all y ∈ ∂B_0 and for a finite time interval. The main result of the article consists in obtaining uniqueness theorems for a solution to the original inverse problem.

12.
A method is proposed for solving the following inverse linear programming (LP) problem: for a given LP problem and one of its feasible vectors, it is required to adjust the objective function vector as little as possible so that the given vector becomes optimal. The closeness of vectors is measured by the Euclidean norm. The inverse LP problem is reduced to an unconstrained minimization problem for a convex piecewise quadratic function, and this minimization problem is solved by the generalized Newton method.
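For intuition, the same inverse LP can also be posed directly as a convex quadratic program through the LP optimality (complementary slackness) conditions. The sketch below does that with cvxpy on a toy instance of our own; it is not the paper's piecewise-quadratic reformulation or its generalized Newton solver.

```python
import numpy as np
import cvxpy as cp

def inverse_lp(A, c, x0, tol=1e-9):
    """Smallest Euclidean change of c making the feasible point x0 optimal for
    min c'^T x  s.t.  A x = b, x >= 0.  x0 is optimal for cost c' iff some dual
    vector y satisfies A^T y <= c' with equality on the support of x0
    (complementary slackness), which gives a convex QP in (c', y)."""
    m, n = A.shape
    c_new = cp.Variable(n)
    y = cp.Variable(m)                      # duals of the equality constraints
    support = np.flatnonzero(x0 > tol)      # components where slackness must bind
    constraints = [A.T @ y <= c_new,
                   (c_new - A.T @ y)[support] == 0]
    cp.Problem(cp.Minimize(cp.sum_squares(c_new - c)), constraints).solve()
    return c_new.value

# Toy instance: feasible set x1 + x2 = 1, x >= 0; original cost (1, 2) prefers
# x = (1, 0); the smallest adjustment making x0 = (0, 1) optimal is (1.5, 1.5).
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
x0 = np.array([0.0, 1.0])
print(inverse_lp(A, c, x0))
```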

13.
Feature selection consists of choosing a subset of the available features that captures the relevant properties of the data. In supervised pattern classification, a good choice of features is fundamental for building compact and accurate classifiers. In this paper, we develop an efficient feature selection method using the zero norm l_0 in the context of support vector machines (SVMs). The discontinuity of l_0 at the origin makes the corresponding optimization problem difficult to solve. To overcome this drawback, we use a robust DC (difference of convex functions) programming approach, a general framework for non-convex continuous optimization. We consider an appropriate continuous approximation to l_0 such that the resulting problem can be formulated as a DC program. Our DC algorithm (DCA) has finite convergence and requires solving one linear program at each iteration. Computational experiments on standard datasets, including challenging feature-selection problems from the NIPS 2003 feature selection challenge and gene selection for cancer classification, show that the proposed method is promising: while suppressing more than 99% of the features in some cases, it can still provide good classification. Moreover, the comparative results illustrate the superiority of the proposed approach over standard methods such as classical SVMs and feature selection via concave minimization.
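A frequently used continuous surrogate for the zero norm in this line of work (shown for illustration; the paper may use a different approximation or DC decomposition) is the concave exponential

$$ \lVert w \rVert_0 \;\approx\; \sum_{i=1}^{n} \bigl(1 - e^{-\alpha \lvert w_i \rvert}\bigr), \qquad \alpha > 0, $$

whose DC structure is what allows each DCA iteration to reduce to a weighted l_1, i.e. linear programming, subproblem of the kind mentioned above.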

14.
The l_1 and constrained l_1 estimation problems are viewed in the light of extended geometric programming. As a result of this point of view we are able to establish an equivalence between geometric and linear programming duality results for these classes of problems. In addition, the duality results provide some useful insights into the properties of l_1 estimation problems. Finally, we establish an equivalence between the l_∞ norm problem and a class of constrained l_1 estimation problems.

15.
We propose techniques for the solution of the LP relaxation and the Lagrangean dual in combinatorial optimization and nonlinear programming problems. Our techniques find the optimal solution value and the optimal dual multipliers of the LP relaxation and the Lagrangean dual in polynomial time, using as a subroutine either the Ellipsoid algorithm or the recent algorithm of Vaidya. Moreover, for problems of a certain structure, our techniques find not only the optimal solution value but the solution as well. Our techniques lead to significant improvements in the theoretical running time compared with previously known methods (interior point methods, the Ellipsoid algorithm, Vaidya's algorithm). We apply our method to the solution of the LP relaxation and the Lagrangean dual of several classical combinatorial problems, such as the traveling salesman problem, the vehicle routing problem, the Steiner tree problem, the k-connected problem, multicommodity flows, network design problems, network flow problems with side constraints, facility location problems, K-polymatroid intersection, the multiple-item capacitated lot sizing problem, and stochastic programming. In all these problems our techniques significantly improve the theoretical running time and yield the fastest way to solve them.

16.
We consider a quadratic programming (QP) problem (Π) of the form min x^T C x subject to Ax ≥ b, x ≥ 0, where \({C\in {\mathbb R}^{n \times n}_+, {\rm rank}(C)=1}\) and \({A\in {\mathbb R}^{m \times n}, b\in {\mathbb R}^m}\). We present a fully polynomial time approximation scheme (FPTAS) for this problem by reformulating the QP (Π) as a parameterized LP and "rounding" the optimal solution. Furthermore, our algorithm returns an extreme point solution of the polytope. Therefore, our results apply directly to 0–1 problems for which the convex hull of feasible integer solutions is known, such as spanning trees, matchings and submodular flows. They also apply to problems for which the convex hull of the dominant of the feasible integer solutions is known, such as s,t-shortest paths and s,t-min-cuts. For these discrete problems, the quadratic program Π models the problem of obtaining an integer solution that minimizes the product of two linear non-negative cost functions.
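The parameterized-LP idea can be sketched as follows (our simplified version with a geometric grid over guesses of one factor and a generic LP solver; it omits the paper's rounding to an extreme point, and the grid bounds are assumed rather than derived). Because C is non-negative with rank one, the objective factors as x^T C x = (u^T x)(v^T x) for non-negative vectors u, v:

```python
import numpy as np
from scipy.optimize import linprog

def approx_min_product(u, v, A, b, eps=0.05, lam_min=1e-3, lam_max=1e3):
    """Approximately minimize (u^T x)(v^T x) over {x : A x >= b, x >= 0} by
    guessing the value of u^T x on a geometric grid and solving, for each
    guess lam, the LP  min v^T x  s.t.  A x >= b, u^T x <= lam, x >= 0."""
    best_val, best_x = np.inf, None
    lam = lam_min
    while lam <= lam_max * (1 + eps):
        res = linprog(c=v,
                      A_ub=np.vstack([-A, u.reshape(1, -1)]),
                      b_ub=np.concatenate([-b, [lam]]),
                      bounds=(0, None), method="highs")
        if res.success:
            val = float(u @ res.x) * float(v @ res.x)
            if val < best_val:
                best_val, best_x = val, res.x
        lam *= 1 + eps
    return best_val, best_x

# Tiny example: minimize (x1 + x2)*(2*x1 + x2) s.t. x1 + 2*x2 >= 1, x >= 0;
# the optimum is 0.25 at x = (0, 0.5).
u = np.array([1.0, 1.0])
v = np.array([2.0, 1.0])
A = np.array([[1.0, 2.0]])
b = np.array([1.0])
print(approx_min_product(u, v, A, b))
```

If λ is the smallest grid point above u^T x* for an optimal x*, the LP at λ returns a point whose v-cost is at most v^T x*, so its product is within a factor (1+ε) of the optimum; this is the essence of the approximation guarantee behind the parameterized-LP reformulation.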

17.
18.
In this article, the vector exact l1 penalty function method used for solving nonconvex nondifferentiable multiobjective programming problems is analyzed. In this method, the vector penalized optimization problem with the vector exact l1 penalty function is defined. Conditions are given guaranteeing the equivalence of the sets of (weak) Pareto optimal solutions of the considered nondifferentiable multiobjective programming problem and of the associated vector penalized optimization problem with the vector exact l1 penalty function. This equivalence is established for nondifferentiable invex vector optimization problems. Some examples of vector optimization problems are presented to illustrate the results established in the article.

19.
In connection with the optimal design of centralized circuit-free networks, linear 0–1 programming problems arise which are related to rooted trees. For each problem the variables correspond to the edges of a given rooted tree T. Each path from a leaf to the root of T, together with edge weights, defines a linear constraint, and a global linear objective is to be maximized. We consider relaxations of such problems where the variables are not restricted to 0 or 1 but are allowed to vary continuously between these bounds. The values of the optimal solutions of such relaxations may be used in a branch-and-bound procedure for the original 0–1 problem. While a primal algorithm for these relaxations is discussed in [10], in this paper we deal with the dual linear program and present a version of the simplex algorithm for its solution which can be implemented to run in time O(n^2). For balanced trees T this time can be reduced to O(n log n).

20.
In the linear inverse problems considered in this paper, a vector with positive components is to be selected from a feasible set defined by linear constraints. The selection rule involves the minimization of a certain function which measures the distance from an a priori guess. Csiszár took an axiomatic approach to defining a family of such functions, which we call α-divergences, that can serve as logically consistent selection rules. In this paper we present an explicit and perfect dual of the resulting convex programming problem, prove the corresponding duality theorem and optimality criteria, and make some suggestions concerning an algorithmic solution.
