Similar Documents
20 similar documents found (search time: 31 ms)
1.
We give an O(n log n)-time method for finding a best k-link piecewise-linear function approximating an n-point planar point set, using the well-known uniform metric to measure the error, ε≥0, of the approximation. Our method is based upon new characterizations of such functions, which we exploit to design an efficient algorithm using a plane sweep in “ε space” followed by several applications of the parametric-searching technique. The previous best running time for this problem was O(n^2). This research was announced in preliminary form at the 10th ACM Symposium on Computational Geometry. The author was partially supported by the NSF and DARPA under Grant CCR-8908092, and by the NSF under Grants IRI-9116843 and CCR-9300079.
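As a point of reference, the quantity minimized above can be written out explicitly (a sketch in our own notation: F_k is the class of piecewise-linear functions with at most k links, and (x_i, y_i) are the input points):

```latex
% Best k-link approximation under the uniform metric (assumed notation):
\varepsilon^{*} \;=\; \min_{f \in \mathcal{F}_k}\; \max_{1 \le i \le n} \bigl| f(x_i) - y_i \bigr|
```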

2.
We consider the problem of testing the uniqueness of maximum matchings, both in the unweighted and in the weighted case. For the unweighted case, we have two results. First, given a graph with n vertices and m edges, we can test whether the graph has a unique perfect matching, and find it if it exists, in O(m log^4 n) time. This algorithm uses a recent dynamic connectivity algorithm and an old result of Kotzig characterizing unique perfect matchings in terms of bridges. For the special case of planar graphs, we improve the algorithm to run in O(n log n) time. Second, given one perfect matching, we can test for the existence of another in linear time. This algorithm is a modification of Edmonds' blossom-shrinking algorithm implemented using depth-first search. A generalization of Kotzig's theorem proved by Jackson and Whitty allows us to give a modification of the first algorithm that tests whether a given graph has a unique f-factor, and find it if it exists. We also show how to modify the second algorithm to check whether a given f-factor is unique. Both extensions have the same time bounds as their perfect matching counterparts. For the weighted case, we can test in linear time whether a maximum-weight matching is unique, given the output from Edmonds' algorithm for computing such a matching. The method is an extension of our algorithm for the unweighted case.
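The bridge-based test suggested by Kotzig's theorem can be sketched directly, though only in a simplified quadratic-time form; the paper's O(m log^4 n) bound comes from replacing the repeated bridge searches below with a dynamic connectivity structure. The graph representation and helper names are illustrative assumptions, not the authors' code.

```python
# Simplified sketch of the bridge-based uniqueness test suggested by Kotzig's theorem.
# (Not the paper's algorithm: bridges are recomputed from scratch each round.)

def find_bridge(adj, nodes):
    """Return one bridge (u, v) of the subgraph induced by `nodes`, or None."""
    order, low, seen, bridges = {}, {}, set(), []

    def dfs(u, parent):
        seen.add(u)
        order[u] = low[u] = len(order)
        for v in adj[u]:
            if v not in nodes or v == parent:
                continue
            if v not in seen:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > order[u]:
                    bridges.append((u, v))
            else:
                low[u] = min(low[u], order[v])

    for s in nodes:
        if s not in seen:
            dfs(s, None)
    return bridges[0] if bridges else None

def unique_perfect_matching(adj):
    """Return the unique perfect matching as a set of edges, or None if the graph
    has no perfect matching or has more than one. `adj` maps each vertex to a set
    of neighbours and is modified in place."""
    nodes = set(adj)
    matching = set()
    if len(nodes) % 2:                      # odd order: no perfect matching
        return None
    while nodes:
        e = find_bridge(adj, nodes)
        if e is None:
            # Kotzig: a graph with a unique perfect matching has a bridge in that
            # matching, so a bridgeless remainder means "none or more than one".
            return None
        u, v = e
        # Explore u's side of the bridge; its parity decides the bridge's fate.
        side, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y in nodes and y not in side and {x, y} != {u, v}:
                    side.add(y)
                    stack.append(y)
        if len(side) % 2:                   # odd side: e lies in every perfect matching
            matching.add((u, v))
            nodes.discard(u)
            nodes.discard(v)
        else:                               # even side: e lies in no perfect matching
            adj[u].discard(v)
            adj[v].discard(u)
    return matching

# A path on four vertices has the unique perfect matching {1-2, 3-4}.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(unique_perfect_matching(path))
```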

3.
A typical problem in network design is to find a minimum-cost sub-network H of a given network G such that H satisfies some prespecified connectivity requirements. Our focus is on approximation algorithms for designing networks that satisfy vertex connectivity requirements. Our main tool is a linear programming relaxation of the following setpair formulation due to Frank and Jordan: a setpair consists of two subsets of vertices (of the given network G); each setpair has an integer requirement, and the goal is to find a minimum-cost subset of the edges of G such that each setpair is covered by at least as many edges as its requirement. We introduce the notion of skew bisupermodular functions and use it to prove that the basic solutions of the linear program are characterized by “non-crossing families” of setpairs. This allows us to apply Jain's iterative rounding method to find approximately optimal integer solutions. We give two applications. (1) In the k-vertex connectivity problem we are given a (directed or undirected) graph G=(V,E) with non-negative edge costs, and the task is to find a minimum-cost spanning subgraph H such that H is k-vertex connected. Let n=|V|, and let ε<1 be a positive number such that k≤(1−ε)n. We give an -approximation algorithm for both problems (directed or undirected), improving on the previous best approximation guarantees for k in the range . (2) We give a 2-approximation algorithm for the element connectivity problem, matching the previous best approximation guarantee due to Fleischer, Jain and Williamson. * Supported in part by NSERC research grant OGP0138432. † Supported in part by NSF Career Award CCR-9875024.
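For concreteness, the setpair covering relaxation described above can be written as the following linear program (a sketch in our own notation: x_e are edge variables, δ(W, W′) is the set of edges of G covering the setpair (W, W′), and r is the requirement function):

```latex
% Setpair covering LP relaxation (sketch, assumed notation):
\begin{aligned}
\min\;& \sum_{e \in E} c_e\, x_e\\
\text{s.t.}\;& \sum_{e \in \delta(W, W')} x_e \;\ge\; r(W, W') &&\text{for every setpair } (W, W'),\\
& 0 \le x_e \le 1 &&\text{for every } e \in E.
\end{aligned}
```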

4.
The dynamic programming algorithm of [12] for the bandwidth minimization problem is improved. It is shown that, for all k > 1, BANDWIDTH(k) can be solved in O(n^k) steps and simultaneous O(n^k) space, where n is the number of vertices in the graph, and that each such problem is in NSPACE(log n). The same improved dynamic programming approach works to show that the MINCUT LINEAR ARRANGEMENT problem restricted to the fixed value k, denoted by MINCUT(k), is solvable in O(n^k) steps and simultaneous O(n^k) space and is in the class NSPACE(log n).
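For reference, BANDWIDTH(k) asks whether the following standard quantity (our notation, not the paper's) is at most k:

```latex
% Bandwidth of G = (V, E) over all bijective layouts f : V -> {1, ..., n}:
\mathrm{bw}(G) \;=\; \min_{f}\; \max_{\{u, v\} \in E} \bigl| f(u) - f(v) \bigr|
```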

5.
We analyze the complexity of the restrictions of linear arrangement problems that are obtained if the legal permutations of the nodes are restricted to those that can be obtained by orderings of a binary tree structuring the nodes of the graph, the so-called p-tree. These versions of the linear arrangement problems occur in several places in current circuit layout systems. There the p-tree is the result of a recursive partitioning process of the graph. We show that the MINCUT LINEAR ARRANGEMENT problem and the OPTIMAL LINEAR ARRANGEMENT problem can be solved in polynomial time, if the p-tree is balanced. All other versions of the linear arrangement problems we analyzed are NP-complete.

6.
We show that for every initial data a ∈ L^2(Ω) there exists a weak solution u of the Navier-Stokes equations satisfying the generalized energy inequality introduced by Caffarelli-Kohn-Nirenberg for n = 3. We also show that if a weak solution u ∈ L^s(0, T; L^q(Ω)) with 2/q + 2/s ≤ 1 and 3/q + 1/s ≤ 1 for n = 3, or 2/q + 2/s ≤ 1 and q ≥ 4 for n ≥ 4, then u satisfies both the generalized and the usual energy equalities. Moreover, we show that the generalized energy equality holds under only the local hypothesis that u ∈ L^s(ε, T; L^q(K)) for all compact sets K ⊂⊂ Ω and all 0 < ε < T, with the same (q, s) as above, when 3 ≤ n ≤ 10.
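For orientation, the “usual energy equality” mentioned above is the standard relation below (stated here from general knowledge in the zero-force setting, not copied from the paper; the generalized Caffarelli-Kohn-Nirenberg inequality additionally carries a localized test function):

```latex
% Usual energy equality for a weak solution u with initial data a:
\tfrac{1}{2}\,\|u(t)\|_{L^2(\Omega)}^{2}
\;+\; \int_{0}^{t} \|\nabla u(s)\|_{L^2(\Omega)}^{2}\, ds
\;=\; \tfrac{1}{2}\,\|a\|_{L^2(\Omega)}^{2}.
```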

7.
In this paper we study primal-dual path-following algorithms for second-order cone programming (SOCP) based on a family of directions that is a natural extension of the Monteiro-Zhang (MZ) family for semidefinite programming. We show that the polynomial iteration-complexity bounds of two well-known algorithms for linear programming, namely the short-step path-following algorithm of Kojima et al. and Monteiro and Adler, and the predictor-corrector algorithm of Mizuno et al., carry over to the context of SOCP, that is, they have an O(√n log ε^{-1}) iteration-complexity to reduce the duality gap by a factor of ε, where n is the number of second-order cones. Since the MZ-type family studied in this paper includes an analogue of the Alizadeh, Haeberly and Overton pure Newton direction, we establish for the first time the polynomial convergence of primal-dual algorithms for SOCP based on this search direction. Received: June 5, 1998 / Accepted: September 8, 1999 / Published online: April 20, 2000
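For context, the standard form of the SOCP treated by such path-following methods is the following (a sketch in common notation, not taken from the paper; K_i is the i-th second-order cone of dimension n_i):

```latex
% Second-order cone program over n cones (assumed standard notation):
\min_{x}\; c^{\mathsf T} x \quad \text{s.t.}\quad A x = b,\quad
x \in \mathcal{K}_1 \times \cdots \times \mathcal{K}_n,
\qquad
\mathcal{K}_i = \bigl\{ (x_0, \bar{x}) \in \mathbb{R}\times\mathbb{R}^{\,n_i - 1} : \|\bar{x}\|_2 \le x_0 \bigr\}.
```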

8.
In this paper we study the problem of finding placement tours for pick-and-place robots, also known as the printed circuit board assembly problem, with m positions on a board, n bins containing m components and n locations for the bins. In the standard model, where the working time of the robot is proportional to the distances travelled, the general problem appears as a combination of the travelling salesman problem and the matching problem, and for m=n we have a Euclidean, bipartite travelling salesman problem. We give a polynomial-time algorithm which achieves an approximation guarantee of 3+. An important special instance of the problem is the case of a fixed assignment of bins to bin-locations. This appears as a special case of a bipartite TSP satisfying the quadrangle inequality and given some fixed matching arcs. We obtain a 1.8 factor approximation with the stacker crane algorithm of Frederikson, Hecht and Kim. For the general bipartite case we also show a 2.0 factor approximation algorithm which is based on a new insertion technique for bipartite TSPs with quadrangle inequality. Implementations and experiments on real-world as well as random point configurations conclude this paper.

9.
We consider the problem of computing a (1+ε)-approximation to the minimum volume enclosing ellipsoid (MVEE) of a given set of m points in R^n. Based on the idea of the sequential minimal optimization (SMO) method, we develop a rank-two update algorithm. This algorithm computes an approximate solution to the dual optimization formulation of the MVEE problem, updating only two weights of the dual variable at each iteration. We establish that this algorithm computes a (1+ε)-approximation to the MVEE in O(mn^3/ε) operations and returns a core set of size O(n^2/ε) for ε∈(0,1). In addition, we give an extension of this rank-two update algorithm. Computational experiments show the proposed algorithms are very efficient for solving large-scale problems with high accuracy.
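To make the dual weight formulation concrete, here is a hedged sketch of the classical rank-one (Khachiyan-style) coordinate-ascent baseline for the same dual; the paper's method instead updates two weights per iteration (the SMO-inspired rank-two step), and all function names below are our own, not the authors' code.

```python
# Rank-one baseline for the MVEE dual: repeatedly shift weight onto the point
# that most violates the optimality condition.
import numpy as np

def mvee_weights(P, eps=1e-3, max_iter=100000):
    """Return approximate dual weights u for the MVEE of the rows of P (m x n)."""
    m, n = P.shape
    Q = np.hstack([P, np.ones((m, 1))])          # lift points to homogeneous coordinates
    u = np.full(m, 1.0 / m)
    for _ in range(max_iter):
        X = Q.T @ (u[:, None] * Q)               # (n+1) x (n+1) weighted scatter matrix
        M = np.einsum('ij,jk,ik->i', Q, np.linalg.inv(X), Q)
        j = int(np.argmax(M))
        if M[j] <= (1.0 + eps) * (n + 1):        # (1+eps)-optimality certificate
            break
        step = (M[j] - n - 1.0) / ((n + 1.0) * (M[j] - 1.0))
        u *= 1.0 - step                          # rank-one update toward point j
        u[j] += step
    return u

def mvee_from_weights(P, u):
    """Recover (c, A) with ellipsoid {x : (x - c)^T A (x - c) <= 1} from weights u."""
    n = P.shape[1]
    c = u @ P
    cov = (P * u[:, None]).T @ P - np.outer(c, c)
    return c, np.linalg.inv(cov) / n

pts = np.random.default_rng(0).normal(size=(200, 3))
center, shape = mvee_from_weights(pts, mvee_weights(pts, eps=1e-2))
```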

10.
Recently, B. Y. Chen introduced a new intrinsic invariant of a manifold, and proved that every n-dimensional submanifold of a real space form R^m(ε) of constant sectional curvature ε satisfies a basic inequality δ(n_1,…,n_k) ≤ c(n_1,…,n_k)H^2 + b(n_1,…,n_k)ε, where H is the mean curvature of the immersion, and c(n_1,…,n_k) and b(n_1,…,n_k) are constants depending only on n_1,…,n_k, n and k. The immersion is called ideal if it satisfies the equality case of the above inequality identically for some k-tuple (n_1,…,n_k). In this paper, we first prove that every ideal Einstein immersion satisfying n ≥ n_1+…+n_k+1 is totally geodesic, and that every ideal conformally flat immersion satisfying n ≥ n_1+…+n_k+2 and k ≥ 2 is also totally geodesic. Secondly, we completely classify all ideal semi-symmetric hypersurfaces in real space forms. The author was supported by the NSFC and RFDP.

11.
Matching forests generalize branchings in a directed graph and matchings in an undirected graph. We present an efficient algorithm, the PMF Algorithm, for the problem: given a mixed graph G and a real weight on each of its edges, find a perfect matching forest of maximum weight-sum. The PMF Algorithm proves the sufficiency of a linear system which defines P^=(G) and P(G), the convex hulls of incidence vectors of perfect matching forests and of matching forests, respectively, of G. The algorithm also provides a generalization of Tutte's theorem on the existence of perfect matchings in an undirected graph. Research partially supported by a N.R.C. of Canada Postdoctorate Fellowship.

12.
In this paper, we study the problems of (approximately) representing a functional curve in 2-D by a set of curves with fewer peaks. Representing a function (or its curve) by certain classes of structurally simpler functions (or their curves) is a basic mathematical problem. Problems of this kind also find applications in applied areas such as intensity-modulated radiation therapy (IMRT). Let f be an input piecewise linear functional curve of size n. We consider several variations of the problems. (1) Uphill–downhill pair representation (UDPR): Find two nonnegative piecewise linear curves, one nondecreasing (uphill) and one nonincreasing (downhill), such that their sum exactly or approximately represents f. (2) Unimodal representation (UR): Find a set of unimodal (single-peak) curves such that their sum exactly or approximately represents f. (3) Fewer-peak representation (FPR): Find a piecewise linear curve with at most k peaks that exactly or approximately represents f. Furthermore, for each problem, we consider two versions. For the UDPR problem, we study its feasibility version: given ε>0, determine whether there is a feasible UDPR solution for f with an approximation error ε; and its min-ε version: compute the minimum approximation error ε such that there is a feasible UDPR solution for f with error ε. For the UR problem, we study its min-k version: given ε>0, find a feasible solution with the minimum number k of unimodal curves for f with an error ε; and its min-ε version: given k>0, compute the minimum error ε such that there is a feasible solution with at most k unimodal curves for f with error ε. For the FPR problem, we study its min-k version: given ε>0, find one feasible curve with the minimum number k of peaks for f with an error ε; and its min-ε version: given k≥0, compute the minimum error ε such that there is a feasible curve with at most k peaks for f with error ε. Little work has been done previously on solving these functional curve representation problems. We solve all the problems (except the UR min-ε version) in optimal O(n) time, and the UR min-ε version in O(n + m log m) time, where m < n is the number of peaks of f. Our algorithms are based on new geometric observations and interesting techniques.
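As an illustration of the UDPR variant, the exact (ε = 0) split has a simple greedy construction over the breakpoint values: the uphill curve absorbs every increase of f and the downhill curve absorbs every decrease. This is only a feasibility-style sketch under our own conventions (f nonnegative, given by its values at breakpoints), not the paper's O(n) algorithms.

```python
# Exact uphill-downhill pair for a piecewise linear curve given by breakpoint values.
def exact_udpr(values):
    """Split the curve into a nondecreasing (uphill) and a nonincreasing (downhill)
    nonnegative pair summing exactly to it, or return None if no exact split exists."""
    uphill, downhill = [0.0], [float(values[0])]
    for prev, cur in zip(values, values[1:]):
        delta = cur - prev
        uphill.append(uphill[-1] + max(delta, 0.0))      # uphill absorbs every increase
        downhill.append(downhill[-1] + min(delta, 0.0))  # downhill absorbs every decrease
    if downhill[-1] < 0:    # total decrease exceeds the curve's value at the left end
        return None
    return uphill, downhill

# Breakpoint values 1, 3, 2, 4 split into (0, 2, 2, 4) + (1, 1, 0, 0).
print(exact_udpr([1.0, 3.0, 2.0, 4.0]))
```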

13.
Let P be a set of n points in ℝ^d. A subset of P is called a (k,ε)-kernel if, for every direction, the directional width of the subset ε-approximates that of P when k “outliers” can be ignored in that direction. We show that a (k,ε)-kernel of P of size O(k/ε^{(d−1)/2}) can be computed in time O(n + k^2/ε^{d−1}). The new algorithm works by repeatedly “peeling” away (0,ε)-kernels from the point set. We also present a simple ε-approximation algorithm for fitting various shapes through a set of points with at most k outliers. The algorithm is incremental and works by repeatedly “grating” critical points into a working set, till the working set provides the required approximation. We prove that the size of the working set is independent of n, and thus results in a simple and practical, near-linear ε-approximation algorithm for shape fitting with outliers in low dimensions. We demonstrate the practicality of our algorithms by showing their empirical performance on various inputs and problems. A preliminary version of this paper appeared in Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 182–191. P.A. and H.Y. are supported by NSF under grants CCR-00-86013, EIA-01-31905, CCR-02-04118, and DEB-04-25465, by ARO grants W911NF-04-1-0278 and DAAD19-03-1-0352, and by a grant from the U.S.–Israel Binational Science Foundation. S.H.-P. is supported by an NSF CAREER award CCR-0132901.

14.
An ε-fuzzy dissimilarity relation is defined using the concept of an ε-fuzzy equivalence relation and a strong negator. It is proved that the ε-fuzzy dissimilarity relation so defined satisfies inequalities resembling the generalized triangle inequality.

15.
We show that if 0 < ε ≤ 1, 1 ≤ p < 2 and x_1, …, x_n is a sequence of unit vectors in a normed space X such that E‖∑_{i=1}^{n} ε_i x_i‖ ≥ n^{1/p}, then one can find a block basis y_1, …, y_m of x_1, …, x_n which is (1+ε)-symmetric and has cardinality at least γ n^{2/p−1}(log n)^{−1}, where γ depends on ε only. Two examples are given which show that this bound is close to being best possible. The first is a sequence x_1, …, x_n satisfying the above conditions with no 2-symmetric block basis of cardinality exceeding 2n^{2/p−1}. This sequence is not linearly independent. The second example is a sequence which satisfies a lower p-estimate but which has no 2-symmetric block basis of cardinality exceeding Cn^{2/p−1}(log n)^{4/3}, where C is an absolute constant. This applies when 1 ≤ p ≤ 3/2. Finally, we obtain improvements of the lower bound when the space X containing the sequence satisfies a certain type condition. These results extend results of Amir and Milman in [1] and [2]. We include an appendix giving a simple counterexample to a question about norm-attaining operators.

16.
Some modified Levitin-Polyak projection methods are proposed in this paper for solving the monotone linear variational inequality: find x ∈ Ω such that (x′ − x)^T(Hx + c) ≥ 0 for any x′ ∈ Ω. It is pointed out that there are similar methods for solving a general linear variational inequality.
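A minimal sketch of the underlying Levitin-Polyak projection step (the fixed-point iteration x ← P_Ω(x − β(Hx + c))) is shown below for a box-shaped Ω; the feasible set, step size and stopping rule are illustrative assumptions of ours, and the paper's modified methods refine this basic scheme.

```python
# Basic projection iteration for the monotone linear variational inequality.
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.minimum(np.maximum(x, lo), hi)

def lp_projection_method(H, c, lo, hi, beta=0.1, tol=1e-8, max_iter=10000):
    """Iterate x <- P_Omega(x - beta*(Hx + c)); a fixed point solves the LVI over the box."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(max_iter):
        x_new = project_box(x - beta * (H @ x + c), lo, hi)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Toy example: H positive definite, Omega = [0, 1]^2; the solution is (0.5, 0).
H = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-1.0, 0.5])
print(lp_projection_method(H, c, lo=0.0, hi=1.0))
```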

17.

Relaxation and rounding approaches have become a standard and extremely versatile tool for constrained submodular function maximization. Among the most common rounding techniques in this context are contention resolution schemes. Such schemes round a fractional point by first rounding each coordinate independently, and then dropping some elements to reach a feasible set. Also the second step, where elements are dropped, is typically randomized. This leads to an additional source of randomization within the procedure, which can complicate the analysis. We suggest a different, polyhedral viewpoint for designing contention resolution schemes, which avoids dealing explicitly with the randomization in the second step. This is achieved by focusing on the marginals of a dropping procedure. Apart from avoiding one source of randomization, our viewpoint allows for employing polyhedral techniques. Both can significantly simplify the construction and analysis of contention resolution schemes. We show how, through our framework, one can obtain an optimal monotone contention resolution scheme for bipartite matchings, which has a balancedness of 0.4762. So far, only very few results are known about optimality of monotone contention resolution schemes. Our contention resolution scheme for the bipartite case also improves the lower bound on the correlation gap for bipartite matchings. Furthermore, we derive a monotone contention resolution scheme for matchings that significantly improves over the previously best one. More precisely, we obtain a balancedness of 0.4326, improving on a prior 0.1997-balanced scheme. At the same time, our scheme implies that the currently best lower bound on the correlation gap for matchings is not tight. Our results lead to improved approximation factors for various constrained submodular function maximization problems over a combination of matching constraints with further constraints.
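The two-step structure described above (independent coordinate-wise rounding followed by dropping down to feasibility) can be illustrated with a deliberately naive dropping rule for bipartite matchings; this toy rule is monotone but far weaker than the 0.4762-balanced scheme of the paper, and the data layout is an assumption of ours.

```python
# Sample-then-resolve sketch for a fractional bipartite matching.
import random

def sample_then_resolve(x):
    """x maps bipartite edges (u, v) to fractional values in [0, 1]."""
    sampled = [e for e in x if random.random() < x[e]]     # step 1: independent rounding
    degree = {}
    for u, v in sampled:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Step 2: keep an edge only if it is the sole sampled edge at both endpoints,
    # so the survivors always form a matching.
    return [(u, v) for (u, v) in sampled if degree[u] == 1 and degree[v] == 1]

# Fractional matching on K_{2,2} with weight 1/2 on every edge.
x = {('a', '1'): 0.5, ('a', '2'): 0.5, ('b', '1'): 0.5, ('b', '2'): 0.5}
print(sample_then_resolve(x))
```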


18.
The use of matchings is a powerful technique for scaling and ordering sparse matrices prior to the solution of a linear system Ax = b. Traditional methods, such as that implemented by the HSL software package MC64, use the Hungarian algorithm to solve the maximum weight maximum cardinality matching problem. However, with advances in the algorithms and hardware used by direct methods for the parallelization of the factorization and solve phases, the serial Hungarian algorithm can represent an unacceptably large proportion of the total solution time for such solvers. Recently, auction algorithms and approximation algorithms have been suggested as alternatives for achieving near-optimal solutions for the maximum weight maximum cardinality matching problem. In this paper, the efficacy of auction and approximation algorithms as replacements for the Hungarian algorithm is assessed in the context of sparse symmetric direct solvers when used in problems arising from a range of practical applications. High-cardinality suboptimal matchings are shown to be as effective as optimal matchings for the purposes of scaling. However, matching-based ordering techniques require that matchings are much closer to optimality before they become effective. The auction algorithm is demonstrated to be capable of finding such matchings significantly faster than the Hungarian algorithm, but our ‐approximation matching approach fails to consistently achieve a sufficient cardinality. Copyright © 2015 The Authors. Numerical Linear Algebra with Applications published by John Wiley & Sons Ltd.
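To indicate what an auction-based alternative looks like, here is a hedged sketch of the basic sequential (Gauss-Seidel) auction for a dense assignment problem; practical sparse implementations used alongside MC64-style scaling are considerably more elaborate, and the function below is our own illustration, not the code evaluated in the paper.

```python
# Basic sequential auction: bidders repeatedly bid on their best object at
# current prices; each bid raises the price, and prices drive near-optimality.
def auction_assignment(weights, eps=0.01):
    """Assign n bidders to n objects, nearly maximizing total weight:
    bidder i values object j at weights[i][j] - price[j]."""
    n = len(weights)
    price = [0.0] * n
    owner = [None] * n          # owner[j]: bidder currently holding object j
    assigned = [None] * n       # assigned[i]: object currently held by bidder i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [weights[i][j] - price[j] for j in range(n)]
        j = max(range(n), key=values.__getitem__)
        best = values[j]
        second = max(v for k, v in enumerate(values) if k != j) if n > 1 else best
        price[j] += best - second + eps          # raise the price by the bid increment
        if owner[j] is not None:                 # evict the previous owner
            assigned[owner[j]] = None
            unassigned.append(owner[j])
        owner[j], assigned[i] = i, j
    return assigned

# Optimal assignment here is bidder 0 -> object 0 and bidder 1 -> object 1.
print(auction_assignment([[3.0, 1.0], [2.0, 4.0]]))
```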

19.
We apply Megiddo's parametric searching technique to several geometric optimization problems and derive significantly improved solutions for them. We obtain, for any fixed ε>0, an O(n^{1+ε}) algorithm for computing the diameter of a point set in 3-space, an O(n^{8/5+ε}) algorithm for computing the width of such a set, and an O(n^{8/5+ε}) algorithm for computing the closest pair in a set of n lines in space. All these algorithms are deterministic. Work by Bernard Chazelle was supported by NSF Grant CCR-90-02352. Work by Herbert Edelsbrunner was supported by NSF Grant CCR-89-21421. Work by Leonidas Guibas and Micha Sharir was supported by a grant from the U.S.-Israeli Binational Science Foundation. Work by Micha Sharir was also supported by ONR Grant N00014-90-J-1284, by NSF Grant CCR-89-01484, and by grants from the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development.

20.
In this paper we consider an optimization version of the multicommodity flow problem which is known as the maximum concurrent flow problem. We show that an approximate solution to this problem can be computed deterministically using O(k(ε^{−2} + log k) log n) single-commodity minimum-cost flow computations, where k is the number of commodities, n is the number of nodes, and ε is the desired precision. We obtain this bound by proving that in the randomized algorithm developed by Leighton et al. (1995) the random selection of commodities can be replaced by deterministic round-robin without increasing the total running time. Our bound significantly improves the previously known deterministic upper bounds and matches the best known randomized upper bound for the approximate concurrent flow problem. A preliminary version of this paper appeared in Proceedings of the 6th ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, 1995, pp. 486–492.
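For reference, the maximum concurrent flow problem approximated above can be stated as follows (a sketch in common notation, not copied from the paper; d_i is the demand of commodity i, f_i its flow, and u_e the capacity of edge e):

```latex
% Maximum concurrent flow: route the largest common fraction of every demand.
\max\; \lambda \quad \text{s.t.}\quad
f_i \ \text{routes}\ \lambda\, d_i \ \text{units of commodity } i \ (i = 1, \dots, k),
\qquad
\sum_{i=1}^{k} f_i(e) \;\le\; u_e \quad \forall e \in E.
```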
