Similar Documents
20 similar documents found (search time: 31 ms)
1.
On the Windy Postman Problem on Eulerian graphs

2.
A truncated permutation matrix polytope is defined as the convex hull of a proper subset of n-permutations represented as 0/1 matrices. We present a linear system that models the coNP-complete non-Hamilton tour decision problem based upon constructing the convex hull of a set of truncated permutation matrix polytopes. Define the polytope P_{n-1} as the convex hull of all (n-1) × (n-1) permutation matrices. Each extreme point of P_{n-1} is placed in correspondence (a bijection) with each Hamilton tour of a complete directed graph on n vertices. Given any n-vertex graph G_n, a polynomial-sized linear system F(n) is constructed for which the image of its solution set, under an orthogonal projection, is the convex hull of the complete set of extrema of a subset of truncated permutation matrix polytopes, where each extreme point is in correspondence with each Hamilton tour not in G_n. The non-Hamilton tour decision problem is modeled by F(n) such that G_n is non-Hamiltonian if and only if, under an orthogonal projection, the image of the solution set of F(n) is P_{n-1}. The decision problem "Is the projection of the solution set of F(n) equal to P_{n-1}?" is therefore coNP-complete, and this particular model of the non-Hamilton tour problem appears to be new. Dedicated to the 250+ families in Kelowna BC, who lost their homes due to forest fires in 2003. I visited Ted at his home in Kelowna during this time; his family opened their home to evacuees and we shared happy and sad times with many wonderful people.

3.
We give an O(n^2) time algorithm to find the population variance of tour costs over the solution space of the n-city symmetric Traveling Salesman Problem (TSP). The algorithm has application in both the stochastic case, where the problem is specified in terms of edge costs which are pairwise independently distributed random variables with known mean and variance, and the numeric edge cost case. We apply this result to provide empirical evidence that, in a range of real-world problem sets, the optimal tour cost correlates with a simple function of the mean and variance of tour costs.
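As a concrete illustration of the quantities involved, the sketch below computes the population mean and variance of tour costs by brute-force enumeration on a small hypothetical distance matrix; it is a reference check only, not the O(n^2) algorithm described in the abstract.

```python
import itertools
import statistics

def tour_cost(tour, dist):
    # Cost of the closed tour visiting the cities in the given order.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tour_cost_moments(dist):
    """Population mean and variance of tour costs by brute force.

    Fixes city 0 and keeps only one direction of each tour, so every
    undirected tour of the symmetric instance is counted exactly once.
    """
    n = len(dist)
    costs = []
    for perm in itertools.permutations(range(1, n)):
        if perm[0] < perm[-1]:      # count each direction of a tour once
            costs.append(tour_cost((0,) + perm, dist))
    return statistics.mean(costs), statistics.pvariance(costs)

if __name__ == "__main__":
    # Hypothetical symmetric distance matrix for 5 cities.
    d = [[0, 2, 9, 10, 7],
         [2, 0, 6, 4, 3],
         [9, 6, 0, 8, 5],
         [10, 4, 8, 0, 6],
         [7, 3, 5, 6, 0]]
    mu, var = tour_cost_moments(d)
    print(f"mean tour cost = {mu:.3f}, variance = {var:.3f}")
```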

4.
Consider the shortest tour through n points X_1, ..., X_n independently and uniformly distributed over [0,1]^2. We show that, for some universal constant K, the number of edges of length at least u n^{-1/2} is at most K n exp(-u^2/K) with overwhelming probability. This research is in part supported by an NSF grant.

5.
We develop k-interchange procedures to perform local search in a precedence-constrained routing problem. The problem in question is known in the transportation literature as the single-vehicle many-to-many Dial-A-Ride Problem, or DARP. The DARP is the problem of minimizing the length of the tour traveled by a vehicle to service N customers, each of whom wishes to go from a distinct origin to a distinct destination. The vehicle departs from a specified point and returns to that point upon service of all customers. Precedence constraints in the DARP exist because the origin of each customer must precede his/her destination on the route. As in the interchange procedure of Lin for the Traveling Salesman Problem (TSP), a k-interchange is a substitution of k of the links of an initial feasible DARP tour with k other links, and a DARP tour is k-optimal if it is impossible to obtain a shorter tour by replacing any k of its links by k other links. However, in contrast to the TSP, where each individual interchange takes O(1) time, checking whether each individual DARP interchange satisfies the origin-destination precedence constraints normally requires O(N^2) time. In this paper we develop a method which still finds the best k-interchange that can be produced from an initial feasible DARP tour in O(N^k) time, the same order of magnitude as in the Lin heuristic for the TSP. This method is then embedded in a breadth-first or a depth-first search procedure to produce a k-optimal DARP tour. The paper focuses on the k = 2 and k = 3 cases. Experience with the procedures is presented, in which k-optimal tours are produced by applying a 2-opt or 3-opt search to initial DARP tours produced either randomly or by a fast O(N^2) heuristic. The breadth-first and depth-first search modes are compared. The heuristics are seen to produce very good or near-optimal DARP tours.
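The sketch below illustrates the naive baseline this abstract contrasts against: a 2-opt local search on a small hypothetical DARP instance in which every candidate interchange's precedence feasibility is re-checked from scratch. It is an illustration of the problem structure only, not the authors' O(N^k) method; the instance data and stop labels are invented for the example.

```python
import itertools

def is_feasible(tour):
    # Each customer's origin ('+', i) must precede its destination ('-', i).
    seen_origins = set()
    for kind, cust in tour:
        if kind == '+':
            seen_origins.add(cust)
        elif cust not in seen_origins:
            return False
    return True

def tour_length(tour, dist, depot):
    # Closed tour: depot -> stops in order -> depot.
    stops = [depot] + list(tour) + [depot]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def two_opt(tour, dist, depot):
    """Naive 2-opt local search for a DARP tour.

    Every candidate segment reversal is re-checked for precedence
    feasibility from scratch -- the slow approach the paper improves upon.
    """
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best) + 1), 2):
            candidate = best[:i] + best[i:j][::-1] + best[j:]
            if (is_feasible(candidate)
                    and tour_length(candidate, dist, depot) < tour_length(best, dist, depot)):
                best, improved = candidate, True
    return best

if __name__ == "__main__":
    # Depot 'D' plus two customers; ('+', i) is customer i's origin, ('-', i) its destination.
    coords = {'D': (0, 0), ('+', 1): (1, 0), ('-', 1): (4, 1),
              ('+', 2): (2, 3), ('-', 2): (0, 2)}
    dist = {a: {b: ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
                for b, (xb, yb) in coords.items()}
            for a, (xa, ya) in coords.items()}
    start = [('+', 1), ('+', 2), ('-', 1), ('-', 2)]   # any feasible initial tour
    best = two_opt(start, dist, 'D')
    print(best, round(tour_length(best, dist, 'D'), 3))
```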

6.
7.
A "cover tour" of a connected graph G from a vertex x is a random walk that begins at x, moves at each step with equal probability to any neighbor of its current vertex, and ends when it has hit every vertex of G. The cycle C_n is well known to have the curious property that a cover tour from any vertex is equally likely to end at any other vertex; the complete graph K_n shares this property, trivially, by symmetry. Ronald L. Graham has asked whether there are any other graphs with this property; we show that there are not. © 1993 John Wiley & Sons, Inc.
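A quick way to see the stated property of the cycle is by simulation. The Monte Carlo sketch below (using a small cycle C_7 and a hypothetical trial count) estimates the distribution of the vertex at which a cover tour from vertex 0 ends; on a cycle it comes out close to uniform over the other vertices.

```python
import random
from collections import Counter

def cover_tour_end(adj, start):
    """Run one cover tour (random walk until every vertex is hit); return the last new vertex."""
    current = start
    unvisited = set(adj) - {start}
    last = start
    while unvisited:
        current = random.choice(adj[current])
        if current in unvisited:
            unvisited.remove(current)
            last = current
    return last

def cycle_graph(n):
    # Adjacency lists of the cycle C_n on vertices 0..n-1.
    return {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

if __name__ == "__main__":
    n, trials = 7, 20000
    counts = Counter(cover_tour_end(cycle_graph(n), 0) for _ in range(trials))
    # On C_n the end vertex should be (nearly) uniform over the other n - 1 vertices.
    for v in sorted(counts):
        print(v, counts[v] / trials)
```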

8.
E. Cuesta, PAMM 2007, 7(1): 1030203-1030204
In this paper we present adaptive time discretizations of a fractional integro-differential equation ∂_t^α u = Au + f, where A is a linear operator in a complex Banach space X and ∂_t^α stands for the fractional time derivative, for 1 < α < 2. Some numerical illustrations are provided showing practical applications where the computational cost is one of the drawbacks, e.g., some problems related to image processing. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
It is common to subsample Markov chain output to reduce the storage burden. Geyer shows that discarding k − 1 out of every k observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning Markov chain Monte Carlo (MCMC) output cannot improve statistical efficiency. Here, we suppose that it costs one unit of time to advance a Markov chain and then θ > 0 units of time to compute a sampled quantity of interest. For a thinned process, that cost θ is incurred less often, so the chain can be advanced through more stages. Here, we provide examples to show that thinning will improve statistical efficiency if θ is large and the sample autocorrelations decay slowly enough. If the lag-ℓ autocorrelations of a scalar measurement satisfy ρ_ℓ > ρ_{ℓ+1} > 0 for ℓ ≥ 1, then there is always a θ < ∞ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble first-order AR(1) processes with ρ_ℓ = ρ^{|ℓ|} for some −1 < ρ < 1. For an AR(1) process, it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases toward 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0, it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). This efficiency gain never exceeds 1 + θ. This article also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes. Supplementary materials for this article are available online.
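The trade-off in this abstract can be made concrete under a standard asymptotic-variance model. The sketch below is an assumption-laden illustration (AR(1) autocorrelations, unit cost per chain step, cost θ per evaluation, variance factor (1 + r)/(1 − r) for the thinned chain), not the paper's own derivation; the numerical values are hypothetical.

```python
def thinned_cost_times_variance(k, rho, theta):
    """Time-normalized asymptotic variance proxy when averaging every k-th sample.

    Assumes AR(1) autocorrelations rho**|l|, unit cost per chain step, cost
    theta per evaluation of the quantity of interest, and the usual
    large-sample variance factor (1 + r)/(1 - r) for a thinned chain whose
    lag-1 autocorrelation is r = rho**k.  Smaller is better.
    """
    r = rho ** k
    return (k + theta) * (1.0 + r) / (1.0 - r)

def best_thinning(rho, theta, k_max=1000):
    # Brute-force search for the most efficient subsampling frequency k.
    return min(range(1, k_max + 1),
               key=lambda k: thinned_cost_times_variance(k, rho, theta))

if __name__ == "__main__":
    rho, theta = 0.99, 5.0
    k_star = best_thinning(rho, theta)
    gain = (thinned_cost_times_variance(1, rho, theta)
            / thinned_cost_times_variance(k_star, rho, theta))
    print(f"optimal k = {k_star}, efficiency gain over k = 1: {gain:.2f}")
    # The abstract's threshold: k = 1 is optimal iff theta <= (1 - rho)^2 / (2 rho).
    print("no-thinning threshold on theta:", (1 - rho) ** 2 / (2 * rho))
```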

10.
On Approximate Geometric k-Clustering
For a partition of an n-point set into k subsets (clusters) S_1, S_2, ..., S_k, we consider the cost function given by the sum, over all clusters S_i, of the squared Euclidean distances of the points of S_i to c(S_i), where c(S_i) denotes the center of gravity of S_i. For k = 2 and for any fixed d and ε > 0, we present a deterministic algorithm that finds a 2-clustering with cost no worse than (1+ε) times the minimum cost in time O(n log n); the constant of proportionality depends polynomially on ε. For an arbitrary fixed k, we get an O(n log^k n) algorithm for a fixed ε, again with a polynomial dependence on ε. Received October 19, 1999, and in revised form January 19, 2000.
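For concreteness, the sketch below evaluates the clustering cost assumed above (sum over clusters of squared Euclidean distances to the cluster's center of gravity) on a hypothetical 2-clustering; the paper's approximation algorithm itself is not reproduced here.

```python
from typing import List, Sequence

def centroid(points: Sequence[Sequence[float]]) -> List[float]:
    # Center of gravity of a non-empty cluster.
    d = len(points[0])
    return [sum(p[j] for p in points) / len(points) for j in range(d)]

def clustering_cost(clusters: Sequence[Sequence[Sequence[float]]]) -> float:
    """Sum over clusters of squared Euclidean distances to the cluster centroid.

    A sketch of the assumed k-means-type objective, not code from the paper.
    """
    total = 0.0
    for cluster in clusters:
        c = centroid(cluster)
        total += sum(sum((x[j] - c[j]) ** 2 for j in range(len(c))) for x in cluster)
    return total

if __name__ == "__main__":
    # Hypothetical 2-clustering of six points in the plane.
    S1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    S2 = [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]
    print(f"cost = {clustering_cost([S1, S2]):.3f}")
```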

11.
The question whether large turbulent drag reduction can be achieved at the high values of Re typical of applications is addressed. Answering such a question, either by experiments or DNS, is obviously challenging. For DNS, the problem lies in the tremendous increase of the computational cost with Re, which has to be appreciated in view of the need to carry out an entire parametric study at every Re, owing to the unknown location of the optimal forcing parameters. In this paper we limit ourselves to considering an open-loop technique based on spanwise forcing, the streamwise-traveling waves introduced by [1], and explore via Direct Numerical Simulations (DNS) how the drag reduction varies when the friction Reynolds number is increased from Re_τ = 200 to Re_τ = 2000. To achieve high Re while keeping the computational cost affordable, computational domains of reduced size are employed. We took special care in interpreting results that are still box-size dependent, and adopted strategies to compute the random errors and give the results an error bar. Our results indicate that R = 0.29 can still be obtained at Re_τ = 2000 in the portion of the parameter space studied. The maximum R is found to decrease as R ~ Re_τ^{-0.22} in the Reynolds range investigated. As the most important outcome, we find that the sensitivity of R to Re becomes smaller far from the low-Re optimum parameters: in this region, we suggest R ~ Re_τ^{-0.08}. (© 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
We consider the problem of preemptively scheduling a set of imprecise computation tasks on m ≥ 1 identical processors, with each task T_i having two weights, w_i and w_i′. Two performance metrics are considered: (1) the maximum w′-weighted error; (2) the total w-weighted error subject to the constraint that the maximum w′-weighted error is minimized. For the problem of minimizing the maximum w′-weighted error, we give an algorithm which runs in O(n^3 log^2 n) time for multiprocessors and O(n^2) time for a single processor. For the problem of minimizing the total w-weighted error subject to the constraint that the maximum w′-weighted error is minimized, we give an algorithm which also has the same time complexity.

13.
Optimization, 2012, 61(4): 575-587
We consider linear discrete-time systems controlled by inputs in L^2([0, t_N], U), where (t_i)_{1 ≤ i ≤ N} is a given sequence of times. The final time t_N (or N) is considered to be free. Given an initial state x_0 and a final one x_d, we investigate the optimal control which steers the system from x_0 to x_d with a minimal cost J(N, u) that includes final-time and energy terms. We treat this problem for both infinite- and finite-dimensional state spaces. We use a method similar to the Hilbert Uniqueness Method. A numerical simulation is given.

14.
Being mainly interested in the control of satellites, we investigate the problem of maneuvering a rigid body from a given initial attitude to a desired final attitude at a specified end time in such a way that a cost functional measuring the overall angular velocity is minimized. This problem is solved by applying a recent technique of Jurdjevic in geometric control theory. Essentially, this technique is just the classical calculus-of-variations approach to optimal control problems without control constraints, but formulated for control problems on arbitrary manifolds and presented in coordinate-free language. We model the state evolution as a differential equation on the nonlinear state space G = SO(3), thereby completely circumventing the inevitable difficulties (singularities and ambiguities) associated with the use of parameters such as Euler angles or quaternions. The angular velocities ω_k about the body's principal axes are used as (unbounded) control variables. Applying Pontryagin's Maximum Principle, we lift any optimal trajectory t ↦ g*(t) to a trajectory on T*G which is then revealed as an integral curve of a certain time-invariant Hamiltonian vector field. Next, the calculus of Poisson brackets is applied to derive a system of differential equations for the optimal angular velocities t ↦ ω_k*(t); once these are known, the controlling torques which need to be applied are determined by Euler's equations. In special cases an analytical solution in closed form can be obtained. In general, the unknown initial values ω_k*(t_0) can be found by a shooting procedure which is numerically much less delicate than the straightforward transformation of the optimization problem into a two-point boundary-value problem. In fact, our approach completely avoids the explicit introduction of costate (or adjoint) variables and yields a differential equation for the control variables rather than one for the adjoint variables. This has the consequence that only variables with a clear physical significance (namely angular velocities) are involved, for which good a priori estimates of the initial values are available.

15.
The edges of a complete graph on n vertices are assigned i.i.d. random costs from a distribution for which the interval [0, t] has probability asymptotic to t as t → 0 through positive values. In this so-called pseudo-dimension 1 mean-field model, we study several optimization problems, of which the traveling salesman is the best known. We prove that, as n → ∞, the cost of the minimum traveling salesman tour converges in probability to a certain number, approximately 2.0415, which is characterized analytically.

16.
This paper is concerned with classical concave-cost multi-echelon production/inventory control problems studied by W. Zangwill and others. It is well known that the problem with m production steps and n time periods can be solved by a dynamic programming algorithm in O(n^4 m) steps, which is considered the fastest algorithm for solving this class of problems. In this paper, we will show that an alternative 0-1 integer programming approach can solve the same problem much faster, particularly when n is large and the number of 0-1 integer variables is relatively small. This class of problems includes, among others, problems with set-up cost functions and piecewise linear cost functions with few linear pieces. The new approach can solve problems with mixed concave/convex cost functions, which cannot be solved by dynamic programming algorithms.

17.
A clique-transversal of a graph G is a subset of vertices intersecting all the cliques of G. It is NP-hard to determine the minimum cardinality τ_c of a clique-transversal of G. In this work, first we propose an algorithm for determining this parameter for a general graph, which runs in polynomial time for fixed τ_c. This algorithm is employed for finding the minimum-cardinality clique-transversal of \overline{3K_2}-free circular-arc graphs in O(n^4) time. Further, we describe an algorithm for determining τ_c of a Helly circular-arc graph in O(n) time. This represents an improvement over an existing algorithm by Guruswami and Pandu Rangan which requires O(n^2) time. Finally, the last proposed algorithm is modified so as to solve the weighted version of the corresponding problem in O(n^2) time.

18.
In this paper we consider the natural generalizations of two fundamental problems, the Set-Cover problem and the Min-Knapsack problem. We are given a hypergraph, each vertex of which has a nonnegative weight, and each edge of which has a nonnegative length. For a given threshold, our objective is to find a subset of the vertices with minimum total cost such that the total length of the covered edges is at least the threshold. This problem is called the partial set cover problem. We present an O(|V|^2 + |H|)-time, Δ_E-approximation algorithm for this problem, where Δ_E ≥ 2 is an upper bound on the edge cardinality of the hypergraph and |H| is the size of the hypergraph (i.e., the sum of all its edge cardinalities). The special case where Δ_E = 2 is called the partial vertex cover problem. For this problem a 2-approximation was previously known; however, the time complexity of our solution, i.e., O(|V|^2), is a dramatic improvement. We show that if the weights are homogeneous (i.e., proportional to the potential coverage of the sets) then any minimal cover is a good approximation. Now, using the local-ratio technique, it is sufficient to repeatedly subtract a homogeneous weight function from the given weight function.

19.
This paper is concerned with the problem of scheduling n jobs with a common due date on a single machine so as to minimize the total cost arising from earliness and tardiness. A general model is examined, in which the earliness penalty and tardiness penalty are, respectively, arbitrary non-decreasing functions. Moreover, the model includes two important features that commonly appear in practical problems, namely: 1) earliness and tardiness are penalized with different weights which are job-dependent, and 2) the earliness (or tardiness) penalty consists of two parts, one a variable cost dependent on the length of earliness (or tardiness), the other a fixed cost incurred when a job is early (or tardy). This model provides a general and flexible performance measure for earliness/tardiness scheduling, which has not been addressed before. We establish a number of results on the characterizations of optimal and sub-optimal solutions, and propose two algorithms based on these results. The first algorithm can find, under an agreeable weight condition, an optimum in time O(n^2 P_n), and the second algorithm can generate a sub-optimum in time O(n P_n), where P_n is the sum of the processing times. Further, we derive an upper bound on the relative error of the sub-optimal solution and show that, under certain conditions, the error tends to zero as n increases. Computational results are also reported to demonstrate the effectiveness of the algorithms proposed.

20.
The accuracy of many schemes for interpolating scattered data with radial basis functions depends on a shape parameter c of the radial basis function. In this paper we study the effect of c on the quality of fit of the multiquadric, inverse multiquadric and Gaussian interpolants. We show, numerically, that the value of the optimal c (the value of c that minimizes the interpolation error) depends on the number and distribution of data points, on the data vector, and on the precision of the computation. We present an algorithm for selecting a good value for c that implicitly takes all the above considerations into account. The algorithm selects c by minimizing a cost function that imitates the error between the radial interpolant and the (unknown) function from which the data vector was sampled. The cost function is defined by taking some norm of the error vector E = (E_1, ..., E_N)^T, where E_k = f_k - S_k(x_k) and S_k is the interpolant to a reduced data set obtained by removing the point x_k and the corresponding data value f_k from the original data set. The cost function can be defined for any radial basis function and any dimension. We present the results of many numerical experiments involving interpolation of two-dimensional data sets by the multiquadric, inverse multiquadric and Gaussian interpolants, and we show that our algorithm consistently produces good values for the parameter c. This revised version was published online in June 2006 with corrections to the Cover Date.
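The leave-one-out cost function described above can be sketched directly. The example below assumes a multiquadric basis, the 2-norm of the error vector, and hypothetical data; it computes E_k = f_k - S_k(x_k) by refitting the interpolant with each point removed, which is a brute-force stand-in rather than the paper's own algorithm (faster closed-form leave-one-out formulas such as Rippa's are not used here).

```python
import numpy as np

def multiquadric_interpolate(centers, values, c, x_eval):
    """Fit a multiquadric RBF interpolant to (centers, values) and evaluate it at x_eval."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.sqrt(r ** 2 + c ** 2)                       # multiquadric kernel matrix
    coeffs = np.linalg.solve(A, values)
    r_eval = np.linalg.norm(x_eval[:, None, :] - centers[None, :, :], axis=-1)
    return np.sqrt(r_eval ** 2 + c ** 2) @ coeffs

def loocv_cost(points, values, c):
    # ||E||_2 with E_k = f_k - S_k(x_k), where S_k is fitted with point k removed.
    errors = []
    for k in range(len(points)):
        mask = np.arange(len(points)) != k
        s_k = multiquadric_interpolate(points[mask], values[mask], c, points[k:k + 1])
        errors.append(values[k] - s_k[0])
    return np.linalg.norm(errors)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((30, 2))                          # hypothetical scattered 2-D data
    f = np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])  # hypothetical data vector
    candidates = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
    best = min(candidates, key=lambda c: loocv_cost(pts, f, c))
    print("chosen shape parameter c =", best)
```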
