Similar Documents
 Found 20 similar documents (search time: 547 ms)
1.
This paper develops an algorithm for solving a standard-form linear program directly from an infeasible “warm start”, i.e., directly from a given infeasible solution \(\hat x\) that satisfies \(A\hat x = b\) but \(\hat x \ngeqslant 0\). The algorithm is a potential function reduction algorithm, but the potential function is somewhat different from other interior-point method potential functions, and is given by $$F(x,B) = q\ln (c^T x - B) - \sum\limits_{j = 1}^n {\ln (x_j + h_j (c^T x - B))}$$ where \(q = n + \sqrt n\) is a given constant, \(h\) is a given strictly positive shift vector used to shift the nonnegativity constraints, and \(B\) is a lower bound on the optimal value of the linear program. The duality gap \(c^T x - B\) is used both in the leading term and in the barrier term to help shift the nonnegativity constraints. The algorithm is shown under suitable conditions to achieve a constant decrease in the potential function, and so achieves a constant decrease in the duality gap (and hence also in the infeasibility) in O(n) iterations. Under more restrictive assumptions regarding the dual feasible region, this algorithm is modified by the addition of a dual barrier term, and achieves a constant decrease in the duality gap (and in the infeasibility) in \(O(\sqrt n)\) iterations.
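As a rough illustration (our own sketch, not code from the paper), the shifted-barrier potential \(F(x,B)\) can be evaluated directly from its definition. The point of the shift is visible numerically: \(F\) remains finite at points with some \(x_j < 0\), as long as \(x_j + h_j(c^Tx - B) > 0\), which is what makes an infeasible warm start usable. All names below are illustrative:

```python
import math

def potential(x, c, h, B, q):
    """Shifted-barrier potential from the abstract:
    F(x, B) = q*ln(c'x - B) - sum_j ln(x_j + h_j*(c'x - B)).
    Illustrative sketch only; names are ours, not the paper's."""
    gap = sum(ci * xi for ci, xi in zip(c, x)) - B  # duality gap c'x - B
    assert gap > 0, "B must be a strict lower bound on c'x"
    return q * math.log(gap) - sum(math.log(xj + hj * gap)
                                   for xj, hj in zip(x, h))

# An infeasible point (x_1 < 0) where F is nonetheless finite,
# because x_1 + h_1 * gap = -0.5 + 1 * 1.5 = 1.0 > 0.
q = 2 + math.sqrt(2)                      # q = n + sqrt(n), n = 2
F = potential([-0.5, 1.0], [1.0, 0.0], [1.0, 1.0], -2.0, q)
assert math.isfinite(F)
```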

2.
The hypergraph matching problem is to find a largest collection of disjoint hyperedges in a hypergraph. This is a well-studied problem in combinatorial optimization and graph theory with various applications. The best known approximation algorithms for this problem are all local search algorithms. In this paper we analyze different linear and semidefinite programming relaxations for the hypergraph matching problem, and study their connections to the local search method. Our main results are the following:
  • We consider the standard linear programming relaxation of the problem. We provide an algorithmic proof of a result of Füredi, Kahn and Seymour, showing that the integrality gap is exactly ${k-1+\frac{1}{k}}$ for k-uniform hypergraphs, and is exactly $k-1$ for k-partite hypergraphs. This yields an improved approximation algorithm for the weighted 3-dimensional matching problem. Our algorithm combines the use of the iterative rounding method and the fractional local ratio method, showing a new way to round linear programming solutions for packing problems.
  • We study the strengthening of the standard LP relaxation by local constraints. We show that, even after a linear number of rounds of the Sherali-Adams lift-and-project procedure on the standard LP relaxation, there are k-uniform hypergraphs with integrality gap at least $k-2$. On the other hand, we prove that for every constant k, there is a strengthening of the standard LP relaxation by only a polynomial number of constraints, with integrality gap at most ${\frac{k+1}{2}}$ for k-uniform hypergraphs. The construction uses a result in extremal combinatorics.
  • We consider the standard semidefinite programming relaxation of the problem. We prove that the Lovász ${\vartheta}$ -function provides an SDP relaxation with integrality gap at most ${\frac{k+1}{2}}$ . The proof gives an indirect way (not by a rounding algorithm) to bound the ratio between any local optimal solution and any optimal SDP solution. This shows a new connection between local search and linear and semidefinite programming relaxations.
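As a hedged sanity check (a toy example of ours, not the paper's machinery), the integrality gap of the standard matching LP can be verified by hand in the smallest case $k=2$: on the triangle $K_3$ the uniform fractional point $x_e = 1/2$ is feasible with value $3/2$, while the best integral matching has size $1$, matching the bound $k-1+\frac{1}{k}=\frac{3}{2}$:

```python
from itertools import combinations

# Standard matching LP on the triangle K3 (a 2-uniform hypergraph):
# maximize sum_e x_e  s.t.  sum_{e : v in e} x_e <= 1 for each vertex v.
edges = [(1, 2), (2, 3), (1, 3)]
vertices = {1, 2, 3}

def feasible(x, tol=1e-9):
    """Check the degree constraints of the matching LP."""
    return all(sum(xe for e, xe in zip(edges, x) if v in e) <= 1 + tol
               for v in vertices)

# The uniform fractional point x_e = 1/2 is feasible with value 3/2.
frac = [0.5, 0.5, 0.5]
assert feasible(frac)
frac_value = sum(frac)

# Best integral matching: brute force over all subsets of edges.
int_value = max(
    len(S) for r in range(len(edges) + 1)
    for S in combinations(edges, r)
    if all(sum(1 for e in S if v in e) <= 1 for v in vertices)
)

k = 2
assert frac_value / int_value == k - 1 + 1 / k  # gap 3/2 for k = 2
```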

    3.
We present two linearization-based algorithms for mixed-integer nonlinear programs (MINLPs) having a convex continuous relaxation. The key feature of these algorithms is that, in contrast to most existing linearization-based algorithms for convex MINLPs, they do not require the continuous relaxation to be defined by convex nonlinear functions. For example, these algorithms can solve to global optimality MINLPs with constraints defined by quasiconvex functions. The first algorithm is a slightly modified version of the LP/NLP-based branch-and-bound \((\text{ LP/NLP-BB })\) algorithm of Quesada and Grossmann, and is closely related to an algorithm recently proposed by Bonami et al. (Math Program 119:331–352, 2009). The second algorithm is a hybrid between this algorithm and nonlinear programming based branch-and-bound. Computational experiments indicate that the modified LP/NLP-BB method has comparable performance to LP/NLP-BB on instances defined by convex functions. Thus, this algorithm has the potential to solve a wider class of MINLP instances without sacrificing performance.
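The linearization idea these methods rely on can be sketched in one dimension with a generic Kelley-style cutting-plane loop (an illustration of outer approximation under our own toy setup, not the authors' LP/NLP-BB implementation): maximize $x$ subject to the convex constraint $x^2 - 4 \le 0$, replacing the constraint by tangent cuts at successive trial points.

```python
# Kelley-style cutting planes for: maximize x  s.t.  g(x) = x**2 - 4 <= 0.
# Each cut g(xk) + g'(xk)*(x - xk) <= 0 is valid because g is convex.
# (Generic illustration of linearization, not the paper's algorithm.)

def solve_relaxation(cuts, upper=10.0):
    """The LP relaxation in 1-D: largest x satisfying all linear cuts."""
    x = upper
    for xk in cuts:
        # cut: xk**2 - 4 + 2*xk*(x - xk) <= 0  =>  x <= (4 + xk**2)/(2*xk)
        x = min(x, (4 + xk * xk) / (2 * xk))
    return x

cuts = [10.0]                      # initial trial point
for _ in range(30):
    x = solve_relaxation(cuts)
    if x * x - 4 <= 1e-10:         # relaxation point feasible: optimal
        break
    cuts.append(x)                 # otherwise add a tangent cut at x

assert abs(x - 2.0) < 1e-5         # converges to the true optimum x* = 2
```

Only a handful of cuts are needed here; in 1-D each new cut is exactly a Newton step on $x^2 = 4$, which is why convergence is so fast.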

    4.
We propose a polynomial-time algorithm for linear programming that works on the entire space. First, we give a class of penalty functions on the entire space for linear programming, by which the dual of a linear program in standard form can be converted into an unconstrained optimization problem. The relevant properties of the unconstrained optimization problem, such as duality, boundedness of the solution, and a path-following lemma, are proved. Second, a self-concordant function on the entire space that can be used as a penalty for linear programming is constructed. For this specific function, more results are obtained. In particular, we show that, by taking a parameter large enough, the optimal solution of the unconstrained optimization problem is located in the increasing interval of the self-concordant function, which ensures the feasibility of solutions. Then, by means of the self-concordant penalty function on the entire space, a path-following algorithm on the entire space for linear programming is presented. The number of Newton steps of the algorithm is no more than $O(nL\log (nL/ {\varepsilon }))$, and moreover, in the short-step variant, it is no more than $O(\sqrt{n}\log (nL/{\varepsilon }))$.

    5.
    6.
This paper presents extensions and further analytical properties of algorithms for linear programming based only on primal scaling and projected gradients of a potential function. The paper contains extensions and analysis of two polynomial-time algorithms for linear programming. We first present an extension of Gonzaga's O(nL) iteration algorithm that computes dual variables and does not assume a known optimal objective function value. This algorithm uses only affine scaling, and is based on computing the projected gradient of the potential function $$q\ln (x^T s) - \sum\limits_{j = 1}^n {\ln (x_j )}$$ where \(x\) is the vector of primal variables, \(s\) is the vector of dual slack variables, and \(q = n + \sqrt n\). The algorithm takes either a primal step or recomputes dual variables at each iteration. We next present an alternate form of Ye's \(O(\sqrt n L)\) iteration algorithm, which is an extension of the first algorithm of the paper but uses the potential function $$q\ln (x^T s) - \sum\limits_{j = 1}^n {\ln (x_j )} - \sum\limits_{j = 1}^n {\ln (s_j )}$$ where \(q = n + \sqrt n\). We use this alternate form of Ye's algorithm to show that Ye's algorithm is optimal with respect to the choice of the parameter \(q\) in the following sense. Suppose that \(q = n + n^t\) where \(t \ge 0\). Then the algorithm will solve the linear program in \(O(n^r L)\) iterations, where \(r = \max\{t, 1 - t\}\). Thus the value of \(t\) that minimizes the complexity bound is \(t = 1/2\), yielding Ye's \(O(\sqrt n L)\) iteration bound.

    7.
We prove that there are 0/1 polytopes ${P \subseteq \mathbb{R}^{n}}$ that do not admit a compact LP formulation. More precisely, we show that for every n there is a set ${X \subseteq \{ 0,1\}^n}$ such that conv(X) must have extension complexity at least ${2^{n/2\cdot(1-o(1))}}$. In other words, every polyhedron Q that can be linearly projected onto conv(X) must have exponentially many facets. In fact, the same result also applies if conv(X) is restricted to be a matroid polytope. Conditioning on ${\mathbf{NP}\not\subseteq \mathbf{P_{/poly}}}$, our result rules out the existence of a compact formulation for any ${\mathbf{NP}}$-hard optimization problem even if the formulation may contain arbitrary real numbers.

    8.
For $x\in [0,1)$, let $x=[a_1(x), a_2(x),\ldots ]$ be its continued fraction expansion with partial quotients $\{a_n(x), n\ge 1\}$. Let $\psi : \mathbb{N} \rightarrow \mathbb{N}$ be a function with $\psi (n)/n\rightarrow \infty$ as $n\rightarrow \infty$. In this note, the fast Khintchine spectrum, i.e., the Hausdorff dimension of the set $$E(\psi ):=\left\{ x\in [0,1): \lim _{n\rightarrow \infty }\frac{1}{\psi (n)}\sum _{j=1}^n\log a_j(x)=1\right\}$$ is completely determined without any extra condition on $\psi$. This fills a gap in the former work of Fan et al. (Ergod Theor Dyn Syst 29:73–109, 2009).
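For background, the partial quotients $a_n(x)$ are generated by iterating the Gauss map $x \mapsto 1/x - \lfloor 1/x \rfloor$. A minimal sketch using exact rational arithmetic, so that rounding cannot corrupt the quotients (the function name is ours):

```python
from fractions import Fraction

def partial_quotients(x, n):
    """First n partial quotients of a rational x in [0,1) via the Gauss
    map x -> 1/x - floor(1/x); Fraction arithmetic stays exact."""
    a = []
    for _ in range(n):
        if x == 0:
            break
        inv = 1 / x            # exact Fraction division
        q = int(inv)           # floor, since inv > 0
        a.append(q)
        x = inv - q
    return a

# 3/4 = 1/(1 + 1/3), so its expansion is [a_1, a_2] = [1, 3].
assert partial_quotients(Fraction(3, 4), 5) == [1, 3]
# 5/8 = 1/(1 + 1/(1 + 1/(1 + 1/2))) = [1, 1, 1, 2].
assert partial_quotients(Fraction(5, 8), 5) == [1, 1, 1, 2]
```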

    9.
We study the category $\mathcal{I}_{\mathrm{gr}}$ of graded representations with finite-dimensional graded pieces for the current algebra $\mathfrak{g}\otimes \mathbf{C}[t]$ where $\mathfrak{g}$ is a simple Lie algebra. This category has many similarities with the category $\mathcal{O}$ of modules for $\mathfrak{g}$, and in this paper, we prove an analog of the famous BGG duality in the case of $\mathfrak{sl}_{n+1}$.

    10.
We present an approximation algorithm for computing shortest paths in weighted three-dimensional domains. Given a polyhedral domain $\mathcal{D}$, consisting of $n$ tetrahedra with positive weights, and a real number $\varepsilon \in (0,1)$, our algorithm constructs paths in $\mathcal{D}$ from a fixed source vertex to all vertices of $\mathcal{D}$, the costs of which are at most $1+\varepsilon$ times the costs of (weighted) shortest paths, in $O(\mathcal{C}(\mathcal{D})\frac{n}{\varepsilon ^{2.5}}\log \frac{n}{\varepsilon }\log ^3\frac{1}{\varepsilon })$ time, where $\mathcal{C}(\mathcal{D})$ is a geometric parameter related to the aspect ratios of tetrahedra. The efficiency of the proposed algorithm is based on an in-depth study of the local behavior of geodesic paths and additive Voronoi diagrams in weighted three-dimensional domains, which are of independent interest. The paper extends the results of Aleksandrov et al. (J ACM 52(1):25–53, 2005) to three dimensions.

    11.
The Golub–Kahan–Lanczos (GKL) bidiagonal reduction generates, by recurrence, the matrix factorization of $X \in \mathbb{R }^{m \times n}, m \ge n$ , given by $$\begin{aligned} X = UBV^T \end{aligned}$$ where $U \in \mathbb{R }^{m \times n}$ is left orthogonal, $V \in \mathbb{R }^{n \times n}$ is orthogonal, and $B \in \mathbb{R }^{n \times n}$ is bidiagonal. When the GKL recurrence is implemented in finite precision arithmetic, the columns of $U$ and $V$ tend to lose orthogonality, making a reorthogonalization strategy necessary to preserve convergence of the singular values. The use of an approach started by Simon and Zha (SIAM J Sci Stat Comput, 21:2257–2274, 2000) that reorthogonalizes only one of the two left orthogonal matrices $U$ and $V$ is shown to be very effective by the results presented here. Supposing that $V$ is the matrix reorthogonalized, the reorthogonalized GKL algorithm proposed here is modeled as the Householder Q–R factorization of $\left( \begin{array}{c} 0_{n \times k} \\ X V_k \end{array}\right) $ where $V_k = V(:,1:k)$ . That model is used to show that if $\varepsilon _M $ is the machine unit and $$\begin{aligned} \bar{\eta }= \Vert \mathbf{tril }(I-V^T\!~V)\Vert _F, \end{aligned}$$ where $\mathbf{tril }(\cdot )$ is the strictly lower triangular part of the contents, then: (1) the GKL recurrence produces Krylov spaces generated by a nearby matrix $X + \delta X$ , $\Vert \delta X\Vert _F = \mathcal O (\varepsilon _M + \bar{\eta }) \Vert X\Vert _F$ ; (2) singular values converge in the Lanczos process at the rate expected from the GKL algorithm in exact arithmetic on a nearby matrix; (3) a new proposed algorithm for recovering leading left singular vectors produces better bounds on loss of orthogonality and residual errors.

    12.
Let $(Q(k):k\ge 0)$ be an $M/M/1$ queue with traffic intensity $\rho \in (0,1)$. Consider the quantity $$\begin{aligned} S_{n}(p)=\frac{1}{n}\sum _{j=1}^{n}Q\left( j\right) ^{p} \end{aligned}$$ for any $p>0$. The ergodic theorem yields that $S_{n}(p) \rightarrow \mu (p) :=E[Q(\infty )^{p}]$, where $Q(\infty )$ is geometrically distributed with mean $\rho /(1-\rho )$. It is known that one can explicitly characterize $I(\varepsilon )>0$ such that $$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{1}{n}\log P\big (S_{n}(p)<\mu \left( p\right) -\varepsilon \big ) =-I\left( \varepsilon \right) ,\quad \varepsilon >0. \end{aligned}$$ In this paper, we show that the approximation of the right tail asymptotics requires a different logarithmic scaling, giving $$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{1}{n^{1/(1+p)}}\log P\big (S_{n} (p)>\mu \big (p\big )+\varepsilon \big )=-C\big (p\big ) \varepsilon ^{1/(1+p)}, \end{aligned}$$ where $C(p)>0$ is obtained as the solution of a variational problem. We discuss why this phenomenon—Weibullian right tail asymptotics rather than exponential asymptotics—can be expected to occur in more general queueing systems.
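For intuition about the limit $\mu(p)$: the stationary queue length satisfies $P(Q(\infty)=k)=(1-\rho)\rho^k$, so $\mu(1)=\rho/(1-\rho)$. A small Monte Carlo sanity check of this stationary mean (illustrative only, not the paper's analysis):

```python
import random

random.seed(0)
rho = 0.5

def sample_stationary_queue(rho):
    """Sample Q(infinity): P(Q = k) = (1 - rho) * rho**k, k = 0, 1, 2, ..."""
    k = 0
    while random.random() < rho:
        k += 1
    return k

n = 200_000
est = sum(sample_stationary_queue(rho) for _ in range(n)) / n
exact = rho / (1 - rho)   # mu(1) = E[Q(infinity)] = 1.0 for rho = 0.5
assert abs(est - exact) < 0.02
```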

    13.
We present a new algorithm for computing motorcycle graphs that runs in \(O(n^{4/3+\varepsilon })\) time for any \(\varepsilon >0\) , improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with \(h\) holes in \(O(n \sqrt{h+1} \log ^2 n+n^{4/3+\varepsilon })\) expected time. If all input coordinates are \(O(\log n)\) -bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with \(h\) holes in \(O(n \sqrt{h+1}\log ^3 n)\) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in \(O(n\log ^3n)\) expected time if all input coordinates are \(O(\log n)\) -bit rationals, while all previously known algorithms have worst-case running time \(\omega (n^{3/2})\) .

    14.
Given a distribution \(\rho \) on persistence diagrams and observations \(X_{1},\ldots ,X_{n} \mathop {\sim }\limits ^{iid} \rho \), we introduce an algorithm in this paper that estimates a Fréchet mean from the set of diagrams \(X_{1},\ldots ,X_{n}\). If the underlying measure \(\rho \) is a combination of Dirac masses \(\rho = \frac{1}{m} \sum _{i=1}^{m} \delta _{Z_{i}}\), then we prove that the algorithm converges to a local minimum, and we establish a law of large numbers for a Fréchet mean computed by the algorithm from observations drawn iid from \(\rho \). We illustrate the convergence of an empirical mean computed by the algorithm to a population mean by simulations from Gaussian random fields.

    15.
We investigate in this paper the duality gap between the binary quadratic optimization problem and its semidefinite programming relaxation. We show that the duality gap can be underestimated by ${\xi_{r+1}\delta^2}$, where $\delta$ is the distance between $\{-1, 1\}^n$ and a certain affine subspace, and $\xi_{r+1}$ is the smallest positive eigenvalue of a perturbed matrix. We also establish the connection between the computation of $\delta$ and the cell enumeration of hyperplane arrangement in discrete geometry.

    16.
Around 1958, Hill described how to draw the complete graph $K_n$ with $$Z(n) :=\frac{1}{4}\Big \lfloor \frac{n}{2}\Big \rfloor \Big \lfloor \frac{n-1}{2}\Big \rfloor \Big \lfloor \frac{n-2}{2}\Big \rfloor \Big \lfloor \frac{n-3}{2}\Big \rfloor$$ crossings, and conjectured that the crossing number ${{\mathrm{cr}}}(K_{n})$ of $K_n$ is exactly $Z(n)$. This is also known as Guy's conjecture, as he later popularized it. Towards the end of the century, substantially different drawings of $K_{n}$ with $Z(n)$ crossings were found. These drawings are 2-page book drawings, that is, drawings where all the vertices are on a line $\ell$ (the spine) and each edge is fully contained in one of the two half-planes (pages) defined by $\ell$. The 2-page crossing number of $K_{n}$, denoted by $\nu _{2}(K_{n})$, is the minimum number of crossings determined by a 2-page book drawing of $K_{n}$. Since ${{\mathrm{cr}}}(K_{n}) \le \nu _{2}(K_{n})$ and $\nu _{2}(K_{n}) \le Z(n)$, a natural step towards Hill's conjecture is the weaker conjecture $\nu _{2}(K_{n}) = Z(n)$, popularized by Vrt'o. In this paper we develop a new technique to investigate crossings in drawings of $K_{n}$, and use it to prove that $\nu _{2}(K_{n}) = Z(n)$. To this end, we extend the inherent geometric definition of $k$-edges for finite sets of points in the plane to topological drawings of $K_{n}$. We also introduce the concept of ${\le }{\le }k$-edges as a useful generalization of ${\le }k$-edges and extend a powerful theorem that expresses the number of crossings in a rectilinear drawing of $K_{n}$ in terms of its number of ${\le }k$-edges to the topological setting.
Finally, we give a complete characterization of crossing-minimal 2-page book drawings of $K_{n}$ and show that, up to equivalence, they are unique for $n$ even, but that there exist an exponential number of non-homeomorphic such drawings for $n$ odd.
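For reference, $Z(n)$ is easy to tabulate; a quick sketch (the small values below agree with the known crossing numbers $\mathrm{cr}(K_5)=1$, $\mathrm{cr}(K_6)=3$, $\mathrm{cr}(K_7)=9$, $\mathrm{cr}(K_8)=18$):

```python
def Z(n):
    """Hill's conjectured crossing number of K_n.  Pure integer
    arithmetic: the product of the four floors is divisible by 4."""
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

assert [Z(n) for n in range(5, 9)] == [1, 3, 9, 18]
```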

    17.
We generalize the second pinching theorem for minimal hypersurfaces in a sphere due to Peng–Terng, Wei–Xu, Zhang, and Ding–Xin to the case of hypersurfaces with small constant mean curvature. Let $M^n$ be a compact hypersurface with constant mean curvature $H$ in $S^{n+1}$ . Denote by $S$ the squared norm of the second fundamental form of $M$ . We prove that there exist two positive constants $\gamma (n)$ and $\delta (n)$ depending only on $n$ such that if $|H|\le \gamma (n)$ and $\beta (n,H)\le S\le \beta (n,H)+\delta (n)$ , then $S\equiv \beta (n,H)$ and $M$ is one of the following cases: (i) $S^{k}\Big (\sqrt{\frac{k}{n}}\Big )\times S^{n-k}\Big (\sqrt{\frac{n-k}{n}}\Big )$ , $\,1\le k\le n-1$ ; (ii) $S^{1}\Big (\frac{1}{\sqrt{1+\mu ^2}}\Big )\times S^{n-1}\Big (\frac{\mu }{\sqrt{1+\mu ^2}}\Big )$ . Here $\beta (n,H)=n+\frac{n^3}{2(n-1)}H^2+\frac{n(n-2)}{2(n-1)} \sqrt{n^2H^4+4(n-1)H^2}$ and $\mu =\frac{n|H|+\sqrt{n^2H^2+ 4(n-1)}}{2}$ .

    18.
Let $P \subseteq \mathbb{R}^d$ be a $d$-dimensional $n$-point set. A Tverberg partition is a partition of $P$ into $r$ sets $P_1, \dots , P_r$ such that the convex hulls $\hbox {conv}(P_1), \dots , \hbox {conv}(P_r)$ have non-empty intersection. A point in $\bigcap _{i=1}^{r} \hbox {conv}(P_i)$ is called a Tverberg point of depth $r$ for $P$. A classic result by Tverberg shows that there always exists a Tverberg partition of size $\lceil n/(d+1) \rceil$, but it is not known how to find such a partition in polynomial time. Therefore, approximate solutions are of interest. We describe a deterministic algorithm that finds a Tverberg partition of size $\lceil n/4(d+1)^3 \rceil$ in time $d^{O(\log d)} n$. This means that for every fixed dimension we can compute an approximate Tverberg point (and hence also an approximate centerpoint) in linear time. Our algorithm is obtained by combining a novel lifting approach with a recent result by Miller and Sheehy (Comput Geom Theory Appl 43(8):647–654, 2010).
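In dimension $d=1$ the exact Tverberg bound $\lceil n/2 \rceil$ is easy to achieve, which gives some intuition for the general problem (this toy construction is ours, not the paper's lifting algorithm): sort the points and pair the $i$-th smallest with the $i$-th largest; every resulting interval contains the median.

```python
def tverberg_partition_1d(points):
    """Partition a 1-D point set into ceil(n/2) parts whose convex hulls
    (intervals) all contain the median -- the optimal bound for d = 1."""
    pts = sorted(points)
    n = len(pts)
    # Pair i-th smallest with i-th largest; a middle leftover point
    # forms its own singleton part.
    parts = [[pts[i], pts[n - 1 - i]] for i in range(n // 2)]
    if n % 2:
        parts.append([pts[n // 2]])
    return parts

pts = [7, 1, 4, 9, 2, 6, 3]
parts = tverberg_partition_1d(pts)
median = sorted(pts)[len(pts) // 2]          # a Tverberg point of depth r
assert len(parts) == (len(pts) + 1) // 2     # r = ceil(n/(d+1)), d = 1
assert all(min(p) <= median <= max(p) for p in parts)
```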

    19.
We give a recursive algorithm for computing the character of the cohomology of the moduli space ${\overline{M}}_{0,n}$ of stable $n$ -pointed genus zero curves as a representation of the symmetric group $\mathbb{S }_n$ on $n$ letters. Using the algorithm we can show a formula for the maximum length of this character. Our main tool is connected to the moduli spaces of weighted stable curves introduced by Hassett.

    20.
In this paper, a full-Newton step feasible interior-point algorithm is proposed for solving $P_*(\kappa )$-linear complementarity problems. We prove that the full Newton step to the central path is locally quadratically convergent and that the proposed algorithm has polynomial iteration complexity, namely $O\left( (1+4\kappa )\sqrt{n}\log {\frac{n}{\varepsilon }}\right)$, which matches the currently best known iteration bound for $P_*(\kappa )$-linear complementarity problems. Some preliminary numerical results are provided to demonstrate the computational performance of the proposed algorithm.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号