Similar Literature
1.
The problem of finding the kth smallest of n elements can be solved either with O(n) algorithms or with O(n^2) algorithms. Although they require a higher number of operations in the worst case, O(n^2) algorithms are generally preferred to O(n) algorithms because of their better average performance. We present a hybrid algorithm which is O(n) in the worst case and efficient in the average case.
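A hybrid of this kind can be sketched as quickselect with a guaranteed fallback, in the spirit of introselect. This is an illustrative sketch, not the paper's algorithm; for brevity the fallback is sorting (O(n log n) worst case) rather than a true O(n) method such as median-of-medians, and the function name is assumed.

```python
import random

def select_kth(a, k, depth_limit=None):
    """Return the k-th smallest (0-indexed) element of a.

    Quickselect gives good average performance; if pivoting goes badly
    too often, fall back to a guaranteed method so the worst case stays
    bounded (here: sorting, for brevity).
    """
    a = list(a)
    if depth_limit is None:
        depth_limit = 2 * max(1, len(a)).bit_length()
    while True:
        if depth_limit == 0:          # too many bad pivots: finish safely
            return sorted(a)[k]
        depth_limit -= 1
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        hi = [x for x in a if x > pivot]
        if k < len(lo):
            a = lo
        elif k < len(lo) + len(eq):
            return pivot
        else:
            k -= len(lo) + len(eq)
            a = hi
```

On random inputs the fallback is almost never triggered, so the average behavior is that of plain quickselect.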

2.
Under study are two problems of choosing a subset of m vectors with the maximum norm of the sum of its elements from a set of n vectors in Euclidean space ℝ^k. The vectors are assumed to have integer coordinates. Using the dynamic programming technique, new optimal algorithms are suggested for solving these problems; the algorithms run in pseudopolynomial time when the dimension of the space is fixed. The new algorithms have certain advantages over the available ones: the vector subset problem can be solved faster for m < (k/2)^k, while, after taking into account an additional restriction on the order of the vectors, the time complexity is k^(k−1) times smaller independently of m.
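For checking small instances of the exact-m variant, a brute-force baseline is easy to write; the enumeration below is exponential in n, which is precisely the blowup the paper's pseudopolynomial dynamic-programming algorithms avoid. Names are illustrative.

```python
from itertools import combinations

def max_norm_subset(vectors, m):
    """Among all subsets of exactly m vectors, return one maximizing
    the squared Euclidean norm of the sum of its elements."""
    best, best_sq = None, -1
    for sub in combinations(vectors, m):
        s = [sum(c) for c in zip(*sub)]          # coordinate-wise sum
        sq = sum(x * x for x in s)               # squared norm of the sum
        if sq > best_sq:
            best, best_sq = sub, sq
    return best, best_sq
```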

3.
In this paper, we describe an algorithm to stably sort an array of n elements using only a linear number of data movements and constant extra space, albeit in quadratic time. It was not known previously whether such an algorithm existed. When the input contains only a constant number of distinct values, we present a sequence of in situ stable sorting algorithms making O(n lg^(k+1) n + kn) comparisons (lg^(k) means lg iterated k times, and lg* the number of times the logarithm must be taken to give a result ≤ 1) and O(kn) data movements for any fixed value k, culminating in one that makes O(n lg* n) comparisons and data movements. Stable versions of quicksort follow from these algorithms. Research supported by Natural Sciences and Engineering Research Council of Canada grant No. A-8237 and the Information Technology Research Centre of Ontario. Supported in part by a Research Initiation Grant from the Virginia Engineering Foundation.
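The iterated-logarithm notation in these bounds can be made concrete with two small helpers; these are assumptions of this note, not code from the paper.

```python
import math

def lg_iter(x, k):
    """lg^(k): the base-2 logarithm applied k times."""
    for _ in range(k):
        x = math.log2(x)
    return x

def lg_star(x):
    """lg*: how many times lg must be applied before the result is <= 1.
    Grows extremely slowly: lg*(65536) is only 4."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count
```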

4.
Summary: A k-in-a-row procedure is proposed to select the most demanded element in a set of n elements. We show that the least favorable configuration of the proposed procedure, which selects an element once the same element has been demanded (or observed) k times in a row, has a simple form similar to those of classical selection procedures. Moreover, numerical evidence is provided to illustrate that the k-in-a-row procedure is better than the usual inverse sampling procedure and the fixed sample size procedure when the distance between the most demanded element and the other elements is large and when the number of elements is small.
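The stopping rule itself is simple to state in code: scan the demand stream and stop at the first run of length k. A minimal sketch (function name assumed):

```python
def k_in_a_row_select(observations, k):
    """Select the first element observed k times in a row; return
    (element, observations consumed), or (None, total) if no run of
    length k appears in the stream."""
    run_elem, run_len = None, 0
    for i, x in enumerate(observations, start=1):
        if x == run_elem:
            run_len += 1
        else:
            run_elem, run_len = x, 1
        if run_len == k:
            return run_elem, i
    return None, len(observations)
```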

5.
In this paper we consider an optimization version of the multicommodity flow problem known as the maximum concurrent flow problem. We show that an approximate solution to this problem can be computed deterministically using O(k(ε^−2 + log k) log n) single-commodity minimum-cost flow computations, where k is the number of commodities, n is the number of nodes, and ε is the desired precision. We obtain this bound by proving that in the randomized algorithm developed by Leighton et al. (1995) the random selection of commodities can be replaced by a deterministic round-robin without increasing the total running time. Our bound significantly improves the previously known deterministic upper bounds and matches the best known randomized upper bound for the approximate concurrent flow problem. A preliminary version of this paper appeared in Proceedings of the 6th ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, 1995, pp. 486–492.

6.
This paper proposes a new model that generalizes the linear sliding window system to the case of multiple failures. The considered k-within-m-from-r/n sliding window system consists of n linearly ordered multi-state elements and fails if at least k groups out of m consecutive groups of r consecutive multi-state elements have cumulative performance lower than the demand W. A reliability evaluation algorithm is suggested for the proposed system. In order to increase the system availability, maintenance actions can be performed, and the elements can be optimally allocated. A joint element allocation and maintenance optimization model is formulated with the objective of minimizing the total maintenance cost subject to a pre-specified system availability requirement. Basic procedures of genetic algorithms are adapted to solve the optimization problem. Numerical experiments are presented to illustrate the applications. Copyright © 2015 John Wiley & Sons, Ltd.
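The failure condition of the k-within-m-from-r/n system translates directly into a check over sliding windows. A minimal sketch (names assumed; this checks one realization of element performances, not the paper's reliability-evaluation algorithm, which works with probability distributions):

```python
def system_fails(perf, r, m, k, W):
    """k-within-m-from-r/n failure test: the system fails if, among some
    m consecutive groups of r consecutive elements, at least k groups
    have cumulative performance below the demand W."""
    n = len(perf)
    group_sums = [sum(perf[i:i + r]) for i in range(n - r + 1)]
    for j in range(len(group_sums) - m + 1):
        window = group_sums[j:j + m]
        if sum(1 for s in window if s < W) >= k:
            return True
    return False
```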

7.
Let G be a connected simple graph on n vertices. The Laplacian index of G, namely, the greatest Laplacian eigenvalue of G, is well known to be bounded above by n. In this paper, we give structural characterizations for graphs G with the largest Laplacian index n. Regular graphs, Hamiltonian graphs and planar graphs with the largest Laplacian index are investigated. We present a necessary and sufficient condition on n and k for the existence of a k-regular graph G of order n with the largest Laplacian index n. We prove that for a graph G of order n ⩾ 3 with the largest Laplacian index n, G is Hamiltonian if G is regular or its maximum vertex degree is Δ(G) = n/2. Moreover, we obtain some useful inequalities concerning the Laplacian index and the algebraic connectivity which produce miscellaneous related results. The first author is supported by NNSF of China (No. 10771080) and SRFDP of China (No. 20070574006). The work was done when Z. Chen was on sabbatical in China.
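The bound λ_max(L) ≤ n, with equality for example for the complete graph K_n and the star, can be checked numerically; below is a small illustration using plain power iteration (valid here because the Laplacian is symmetric positive semidefinite). Function names are assumed.

```python
def laplacian(adj):
    """Laplacian L = D - A from a 0/1 adjacency matrix (list of lists)."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def largest_eigenvalue(M, iters=500):
    """Estimate the largest eigenvalue of a symmetric PSD matrix by
    power iteration, returning the final Rayleigh quotient."""
    n = len(M)
    v = [float(i + 1) for i in range(n)]        # generic start vector
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0:
            return 0.0
        v = [x / norm for x in w]
    Mv = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Mv[i] for i in range(n))  # Rayleigh quotient
```

For K_4 and the star K_{1,3} (both on 4 vertices) the estimate comes out at n = 4, matching the extremal case discussed in the abstract.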

8.
The q-round Rényi–Ulam pathological liar game with k lies on the set [n] ≔ {1,…,n} is a 2-player perfect information zero-sum game. In each round Paul chooses a subset A ⊆ [n] and Carole either assigns 1 lie to each element of A or to each element of [n] \ A. Paul wins if after q rounds there is at least one element with k or fewer lies. The game is dual to the original Rényi–Ulam liar game, for which the winning condition is that at most one element has k or fewer lies. Define the critical size to be the minimum n such that Paul can win the q-round pathological liar game with k lies and initial set [n]. For fixed k we prove that this critical size is within an absolute constant (depending only on k) of the sphere bound; this is already known to hold for the original Rényi–Ulam liar game due to a result of J. Spencer.

9.
The graph coloring problem is to color a given graph with the minimum number of colors. This problem is known to be NP-hard even if we are only aiming at approximate solutions. On the other hand, the best known approximation algorithms require n^δ (δ > 0) colors even for bounded chromatic (k-colorable for fixed k) n-vertex graphs. The situation changes dramatically if we look at the average performance of an algorithm rather than its worst case performance. A k-colorable graph drawn from certain classes of distributions can be k-colored almost surely in polynomial time. It is also possible to k-color such random graphs in polynomial average time. In this paper, we present polynomial time algorithms for k-coloring graphs drawn from the semirandom model. In this model, the graph is supplied by an adversary each of whose decisions regarding inclusion of edges is reversed with some probability p. In terms of randomness, this model lies between the worst case model and the usual random model where each edge is chosen with equal probability. We present polynomial time algorithms of two different types. The first type of algorithms always run in polynomial time and succeed almost surely. Blum and Spencer [J. Algorithms, 19, 204–234 (1995)] have also obtained such algorithms independently, but our results are based on different proof techniques which are interesting in their own right. The second type of algorithms always succeed and have polynomial running time on the average. Such algorithms are more useful and more difficult to obtain than the first type. Our algorithms work for semirandom graphs drawn from a wide range of distributions and work as long as p ≥ n^(−α(k)+ϵ), where α(k) = 2k/((k−1)(k+2)) and ϵ is a positive constant. © 1998 John Wiley & Sons, Inc. Random Struct. Alg., 13, 125–158 (1998)

10.
We are given n coins of which k are heavy (defective), while the remaining n − k are light (good). We know both the weight of the good coins and the weight of the defective ones. Therefore, if we weigh a subset Q ⊆ S with a spring scale, the outcome tells us exactly the number of defectives contained in Q. The problem, known as the Counterfeit Coins problem, is to identify the set of defective coins while minimizing the number of weighings, also called queries. It is well known that Θ(k log_{k+1}(n/k)) queries are enough, even for non-adaptive algorithms, in case k ≤ cn for some constant 0 < c < 1. A natural generalization arises when we are required to identify any subset of m ≤ k defectives. We show that while for randomized algorithms Θ̃(m) queries are sufficient, the deterministic non-adaptive counterpart still requires Θ(k log_{k+1}(n/k)) queries in case k ≤ n/28; therefore, finding any subset of defectives is not easier than finding all of them by a non-adaptive deterministic algorithm. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2012
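The query model can be illustrated in the easiest special case k = 1, where ⌈log₂ n⌉ non-adaptive bit-mask weighings identify the single defective coin. This classic warm-up is an assumption of this note, not the paper's construction.

```python
def weigh(subset, defectives):
    """Query model from the abstract: a weighing reveals exactly how
    many defective coins the weighed subset contains."""
    return len(set(subset) & set(defectives))

def find_single_defective(n, defectives):
    """Non-adaptive identification for k = 1: weigh the coins whose
    index has bit b set, for each bit b.  The outcomes spell out the
    binary index of the defective coin."""
    assert len(defectives) == 1
    bits = max(1, (n - 1).bit_length())
    index = 0
    for b in range(bits):
        group = [i for i in range(n) if (i >> b) & 1]
        if weigh(group, defectives) == 1:
            index |= 1 << b
    return index
```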

11.
Let G be an algebraic group over a field k. We call g ∈ G(k) real if g is conjugate to g^−1 in G(k). In this paper we study reality for groups of type G_2 over fields of characteristic different from 2. Let G be such a group over k. We discuss reality for both semisimple and unipotent elements. We show that a semisimple element in G(k) is real if and only if it is a product of two involutions in G(k). Every unipotent element in G(k) is a product of two involutions in G(k). We discuss reality for G_2 over special fields and construct examples to show that reality fails for semisimple elements in G_2 over ℚ and ℚ_p. We show that semisimple elements are real for G_2 over k with cd(k) ≤ 1. We conclude with examples of nonreal elements in G_2 over k finite, with characteristic of k not 2 or 3, which are not semisimple or unipotent.

12.
The main theme is the distribution of polynomials of given degree which split into a product of linear factors over a finite field. The work was motivated by the following problem on regular directed graphs. Extending a notion of Chung, Katz has defined a regular directed graph based on the k-algebra k[X]/(f), where k is the finite field of order q and f a monic polynomial of degree n over k. It is shown that the diameter of this graph is at most n + 2 whenever q ≥ B(n) = [n(n+2)!]^2. This improves on the work of Katz, who gave a similar result for square-free polynomials f without specifying B(n).

13.
Faster Subtree Isomorphism
We study the subtree isomorphism problem: given trees H and G, find a subtree of G which is isomorphic to H or decide that there is no such subtree. We give an O((k^1.5/log k)·n)-time algorithm for this problem, where k and n are the numbers of vertices in H and G, respectively. This improves over the O(k^1.5 n) algorithms of Chung and Matula. We also give a randomized (Las Vegas) O(k^1.376 n)-time algorithm for the decision problem.

14.
We study resilient functions and exposure-resilient functions in the low-entropy regime. A resilient function (a.k.a. deterministic extractor for oblivious bit-fixing sources) maps any distribution on n-bit strings in which k bits are uniformly random and the rest are fixed into an output distribution that is close to uniform. With exposure-resilient functions, all the input bits are random, but we ask that the output be close to uniform conditioned on any subset of n − k input bits. In this paper, we focus on the case that k is sublogarithmic in n. We simplify and improve an explicit construction of resilient functions for k sublogarithmic in n due to Kamp and Zuckerman (SICOMP 2006), achieving error exponentially small in k rather than polynomially small in k. Our main result is that when k is sublogarithmic in n, the short output length of this construction (O(log k) output bits) is optimal for extractors computable by a large class of space-bounded streaming algorithms. Next, we show that a random function is a resilient function with high probability if and only if k is superlogarithmic in n, suggesting that our main result may apply more generally. In contrast, we show that a random function is a static (resp. adaptive) exposure-resilient function with high probability even if k is as small as a constant (resp. log log n). No explicit exposure-resilient functions achieving these parameters are known. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013

15.
Random orders     
Peter Winkler, Order, 1985, 1(4):317–331
Let k and n be positive integers and fix a set S of cardinality n; let P_k(n) be the (partial) order on S given by the intersection of k randomly and independently chosen linear orders on S. We begin study of the basic parameters of P_k(n) (e.g., height, width, number of extremal elements) for fixed k and large n. Our object is to illustrate some techniques for dealing with these random orders and to lay the groundwork for future research, hoping that they will be found to have useful properties not obtainable by known constructions. Supported by NSF grant MCS 84-02054.
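The model is easy to experiment with: intersect k linear orders and read off parameters such as the height. A minimal sketch (names assumed), shown here with fixed rather than random orders so the outcome is deterministic:

```python
def intersection_order(orders):
    """Build the order from k linear orders (each a list ranking the
    elements): x < y in the intersection iff x precedes y in every
    order.  Returns the strict 'less than' relation as a set of pairs."""
    pos = [{x: i for i, x in enumerate(o)} for o in orders]
    elems = orders[0]
    return {(x, y) for x in elems for y in elems
            if all(p[x] < p[y] for p in pos)}

def height(elems, less):
    """Longest chain, via longest path in the comparability DAG."""
    memo = {}
    def h(x):
        if x not in memo:
            memo[x] = 1 + max((h(y) for y in elems if (x, y) in less),
                              default=0)
        return memo[x]
    return max(h(x) for x in elems)
```

Intersecting a permutation with its reverse yields an antichain (height 1); a single order gives a chain (height n). To sample P_k(n), pass k independently shuffled copies of range(n).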

16.
Let h, k be fixed positive integers, and let A be any set of positive integers. Let hA ≔ {a_1 + a_2 + ... + a_r : a_i ∈ A, r ≤ h} denote the set of all integers representable as a sum of no more than h elements of A, and let n(h, A) denote the largest integer n such that {1, 2, ..., n} ⊆ hA. Let n(h, k) := max n(h, A), where the maximum is taken over all sets A with k elements. We determine n(h, A) when the elements of A are in geometric progression. In particular, this results in the evaluation of n(h, 2) and yields surprisingly sharp lower bounds for n(h, k), particularly for k = 3.
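For small sets A, n(h, A) can be computed directly from the definition (sums of at most h elements of A, with repetition allowed, as in the postage stamp problem). Names and the cutoff parameter are assumptions of this sketch:

```python
def n_h_A(h, A, limit=10_000):
    """n(h, A): largest n with {1,...,n} contained in hA, where hA is
    the set of sums of at most h elements of A (repetition allowed).
    Simple dynamic programming over reachable sums."""
    reachable = {0}                      # sums using exactly 0 elements
    sums = set()
    for _ in range(h):
        # sums of exactly r elements = sums of r-1 elements + one more
        reachable = {s + a for s in reachable for a in A if s + a <= limit}
        sums |= reachable
    n = 0
    while n + 1 in sums:
        n += 1
    return n
```

For example A = {1, 2} with h = 3 covers 1 through 6, so n(3, {1, 2}) = 6.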

17.
In this paper several recurrences and formulas are presented leading to upper and lower bounds, both logarithmic, for the expected height of a node in a heap. These bounds are of interest for algorithms that select the kth smallest element in a heap.

18.
Summary: Let X_1, X_2, ..., be a sequence of independent and identically distributed random variables in the domain of normal attraction of a nonnormal stable law. It is known that only the sum of the k_n largest and k_n smallest extreme values in the nth partial sum, with k_n → ∞ and k_n/n → 0, are responsible for the asymptotic stable distribution of the whole sum. We investigate the rate at which such sums of extreme values converge to a stable law, in conjunction with the rate at which the sums of the middle terms become asymptotically negligible. In terms of rates of convergence, our results provide in many cases a quantitative measure of exactly what portion of the sample is asymptotically stable. Research partially supported by the Deutsche Forschungsgemeinschaft while visiting the University of Delaware. Research partially supported by NSF Grant no. DMS-8803209.

19.
Standard methods for calculating over GF(p^n), the finite field of p^n elements, require an irreducible polynomial of degree n with coefficients in GF(p). Such a polynomial is usually obtained by choosing it randomly and then verifying that it is irreducible, using a probabilistic algorithm. If it is not, the procedure is repeated. Here we give an explicit basis, with multiplication table, for the fields GF(p^(p^k)), for k = 0, 1, 2, …, and their union. This leads to efficient computational methods, not requiring the preliminary calculation of irreducible polynomials over finite fields, and, at the same time, yields a simple recursive formula for irreducible polynomials which generate the fields. The fast Fourier transform (FFT) is a method for efficiently evaluating (or interpolating) a polynomial of degree < n at all of the nth roots of unity, i.e., on the finite multiplicative subgroups of F, in O(n log n) operations in the underlying field. We give an analogue of the fast Fourier transform which efficiently evaluates a polynomial on some of the additive subgroups of F. This yields new "fast" algorithms for polynomial computation.
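The "choose randomly and verify" procedure mentioned at the start can be illustrated over GF(2), where polynomials pack into bitmasks and irreducibility at small degrees can be verified by trial division (a deterministic stand-in for the probabilistic test the abstract refers to). All names are assumed:

```python
import random

def gf2_mod(a, b):
    """Remainder of polynomial a modulo b over GF(2); polynomials are
    bitmasks (bit i = coefficient of x^i), so addition is XOR."""
    db = b.bit_length() - 1
    while a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible_gf2(f):
    """Trial division: a degree-n polynomial over GF(2) is irreducible
    iff no polynomial of degree 1..n//2 divides it."""
    n = f.bit_length() - 1
    if n < 1:
        return False
    for d in range(1, n // 2 + 1):
        for g in range(1 << d, 1 << (d + 1)):   # all degree-d polynomials
            if gf2_mod(f, g) == 0:
                return False
    return True

def random_irreducible_gf2(n):
    """The randomized procedure: pick a random monic degree-n
    polynomial and retry until it is irreducible."""
    while True:
        f = (1 << n) | random.getrandbits(n)
        if is_irreducible_gf2(f):
            return f
```

Since roughly a 1/n fraction of degree-n polynomials over GF(2) is irreducible, the expected number of retries is about n, which is what makes the random-choice approach practical.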

20.
It is well known that finite element spaces used for approximating the velocity and the pressure in an incompressible flow problem have to be stable in the sense of the inf-sup condition of Babuška and Brezzi if a stabilization of the incompressibility constraint is not applied. In this paper we consider a recently introduced class of triangular nonconforming finite elements of nth-order accuracy in the energy norm, called P_n^mod elements. For n ≤ 3 we show that the stability condition holds if the velocity space is constructed using the P_n^mod elements and the pressure space consists of continuous piecewise polynomial functions of degree n. This research has been supported by the Grant Agency of the Czech Republic under grant No. 201/05/0005 and by grant MSM 0021620839.
