Similar Documents (20 results)
1.
The study of locally testable codes (LTCs) has benefited from a number of nontrivial constructions discovered in recent years. Yet we still lack a good understanding of what makes a linear error-correcting code locally testable, and as a result we do not know the rate limit of LTCs, or whether asymptotically good linear LTCs with constant query complexity exist. In this paper, we provide a combinatorial characterization of smooth locally testable codes, which are locally testable codes whose associated tester queries every bit of the tested word with equal probability. Our main contribution is a combinatorial property defined on the Tanner graph associated with the code tester (a “well-structured tester”). We show that a family of codes is smoothly locally testable if and only if it has a well-structured tester. As a case study we show that the standard tester for the Hadamard code is “well-structured,” giving an alternative proof of the local testability of the Hadamard code, originally proved by Blum, Luby, and Rubinfeld (J. Comput. Syst. Sci. 47 (1993) 549–595; STOC 1990). Additional connections to the works of Ben-Sasson, Harsha, and Raskhodnikova (SIAM J. Comput. 35 (2005) 1–21) and of Lachish, Newman, and Shapira (Comput. Complex. 17 (2008) 70–93) are also discussed. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 280–307, 2016
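As a concrete illustration of the kind of tester discussed above, here is a minimal Python sketch of the standard three-query BLR test for the Hadamard code. Since each of the three query points x, y, x⊕y is uniformly distributed, every bit of the tested word is queried with equal probability, which is exactly the smoothness property the paper characterizes. The function names are our own, and this is an illustrative sketch rather than the paper's construction.

```python
import random

def hadamard_encode(msg_bits):
    """Hadamard codeword of a k-bit message: one bit <a, x> mod 2 per x in {0,1}^k."""
    k = len(msg_bits)
    return {x: sum(a & ((x >> i) & 1) for i, a in enumerate(msg_bits)) % 2
            for x in range(2 ** k)}

def blr_test(word, k, trials=100):
    """3-query BLR linearity test: accept iff f(x) + f(y) = f(x XOR y) on random x, y.
    Each of the three query points is uniform, so the tester is smooth."""
    for _ in range(trials):
        x, y = random.randrange(2 ** k), random.randrange(2 ** k)
        if (word[x] + word[y]) % 2 != word[x ^ y]:
            return False
    return True
```

A true Hadamard codeword always passes, while a function far from linear (e.g., AND of two bits) is rejected with overwhelming probability over 100 trials.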

2.
The genetic code is the interface between the genetic information stored in DNA molecules and the proteins. Starting from the hypothesis that the genetic code evolved to its current structure, some researchers use optimization algorithms to find hypothetical codes to compare with the canonical genetic code. For this purpose, a single-objective function is employed to evaluate the codes, generally one based on the robustness of the code against mutations. Very few random codes are better than the canonical genetic code under this robustness-based evaluation. However, when only robustness is employed, most codons in the best hypothetical codes are associated with just a few amino acids, which makes it hard to believe that the genetic code evolved based on only one objective, i.e., robustness against mutations. We therefore propose to use entropy as a second objective for the evaluation of the codes, and we propose a Pareto approach to deal with both objectives. The results indicate that the Pareto approach generates codes closer to the canonical genetic code than the single-objective approach employed in the literature.
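The Pareto idea can be sketched in a few lines: each hypothetical code receives a two-component score (say, robustness and entropy, both normalized so that smaller is better), and only non-dominated codes are kept. The scoring functions themselves are problem-specific and omitted; this fragment shows only the dominance filter, and all names are illustrative rather than the paper's implementation.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one
    (both objectives are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep the codes whose (robustness, entropy) score vector is non-dominated."""
    return [c for c, s in scored
            if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]
```

For example, with scores A=(1,2), B=(2,1), C=(2,2), D=(3,3), only A and B survive: C and D are dominated by A.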

3.
Coding-theoretic and complexity-theoretic considerations naturally lead to the question of generating symmetric, sparse, redundant linear systems. This paper provides a new way of constructing such systems, with better parameters, and new lower bounds.

Low Density Parity Check (LDPC) codes are linear codes defined by short constraints (a property essential for local testing of a code). Some of the best codes, theoretically and practically, are LDPC. Symmetric codes are those in which all coordinates “look the same”: there is some transitive group acting on the coordinates which preserves the code. Some of the most commonly used locally testable codes (especially in PCPs and other proof systems), including all “low-degree” codes, are symmetric. Requiring that a symmetric binary code of length n have large (linear or near-linear) distance seems to suggest a “conflict” between 1/rate and density (constraint length): in known constructions, if one is constant, then the other is almost the worst possible, n/poly(log n).

Our main positive result simultaneously achieves symmetric, low-density, constant-rate codes generated by a single constraint. We present an explicit construction of a symmetric and transitive binary code of length n, near-linear distance n/(log log n)², constant rate, and constraints of length (log n)⁴. The construction is in the spirit of Tanner codes: the codewords are indexed by the edges of a sparse regular expander graph. The main novelty is our construction of a transitive (non-Abelian!) group acting on these edges which preserves the code. Our construction is one instantiation of a framework we call Cayley codes, developed here, which may be viewed as extending the zig-zag product to symmetric codes.

Our main negative result is that the parameters obtained above cannot be significantly improved, as long as the acting group is solvable (like the one we use).
More specifically, we show that in constant-rate, linear-distance codes (a.k.a. “good” codes) invariant under solvable groups, the density (length of generating constraints) cannot go down to a constant: it is bounded below by log^(Ω(ℓ)) n (an Ω(ℓ)-times iterated logarithm) if the group has a derived series of length ℓ. This negative result precludes natural local tests with constantly many queries for such solvable “good” codes.
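To make the Tanner-code construction concrete on a toy graph (rather than the expanders and group actions used in the paper), the following sketch places code bits on the edges of K4 with one even-parity constraint per vertex; the resulting code is the GF(2) cycle space of the graph, and GF(2) elimination gives its dimension. All of this is our own illustration, not the paper's Cayley-code construction.

```python
from itertools import combinations

def gf2_rank(matrix):
    """Rank over GF(2) by Gaussian elimination on 0/1 row lists."""
    m = [row[:] for row in matrix]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Tanner-style toy code on K4: bits live on edges, local check = even parity at each vertex.
edges = list(combinations(range(4), 2))                       # 6 edges
H = [[1 if v in e else 0 for e in edges] for v in range(4)]   # vertex-edge incidence
dim = len(edges) - gf2_rank(H)                                # dimension of the cycle space
```

For K4 the incidence matrix has rank 3, so the code has dimension 6 − 3 = 3 and exactly 8 codewords (the even-degree edge subsets).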

4.
Despite the recent wave of interest in the social and physical sciences regarding “complexity,” relatively little attention has been given to the logical foundation of complexity measurement. With this in mind, a number of fairly simple, “reasonable” axioms for the measurement of network complexity are presented here, and some of their implications are considered. It is shown that the only family of graph complexity measures satisfying the “reasonable” axioms is of limited theoretical utility, and hence that those seeking more interesting measures of complexity must be willing to sacrifice at least one intuitively reasonable constraint. Several existing complexity measures are also described and differentiated from one another on an axiomatic basis. Finally, some suggestions are offered regarding future efforts at measuring graph complexity.

5.
We generalize Gabidulin codes to a large family of fields, not necessarily finite, possibly of characteristic zero. We consider a general field extension and any automorphism in the Galois group of the extension. This setting enables several definitions of metrics related to the rank metric, yet potentially different. We provide sufficient conditions on the given automorphism to ensure that the associated rank metrics are indeed all equal and proper, in coherence with the usual definition from linearized polynomials over finite fields. Under these conditions, we generalize the notion of Gabidulin codes. We also present an algorithm for decoding errors and erasures, whose complexity is given in terms of arithmetic operations. Over infinite fields the notion of code alphabet is essential, and more issues appear than in the finite field case. We first focus on codes over integer rings and study their associated decoding problem. Even if the code alphabet is small, we have to deal with the growth of intermediate values. A classical solution to this problem is to perform the computations modulo a prime ideal. For this, we need to study the reduction of generalized Gabidulin codes modulo an ideal, and we show that the codes obtained by reduction are the classical Gabidulin codes over finite fields. As a consequence, under some conditions, decoding generalized Gabidulin codes over integer rings can be reduced to decoding Gabidulin codes over a finite field.

6.
We define two measures, γ and c, of complexity for Boolean functions. These measures are related to issues of functional decomposition which (for continuous functions) were studied by Arnol'd, Kolmogorov, Vituškin and others in connection with Hilbert's 13th Problem. This perspective was first applied to Boolean functions in [1]. Our complexity measures differ from those considered earlier [3, 5, 6, 9, 10], which were used by Ehrenfeucht and others to demonstrate the great complexity of most decision procedures. In contrast to other measures, both γ and c (which range between 0 and 1) have a more combinatorial flavor, and it is easy to show that both of them are close to 0 for literally all “meaningful” Boolean functions of many variables. It is not trivial to prove that there exist functions for which c is close to 1, and for γ the same question is still open. The same problem for all traditional measures of complexity is easily resolved by statistical considerations.

7.
Bora Moon 《Discrete Mathematics》2018,341(11):3174-3181
It is known that the binary generalized Goppa codes are perfect codes for weighted Hamming metrics. In this paper, we establish the existence of a weighted Hamming metric under which the binary Hamming code (resp. the extended binary Hamming code) is a perfect code. For a special weighted Hamming metric, we also give some structural results on 2-perfect codes, show how to construct a 2-perfect linear code, and obtain the weight distribution of a 2-perfect code from partial information about the code.
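A small sketch of the unit-weight special case: with all coordinate weights equal to 1, a weighted Hamming metric reduces to the classical one, and the [7,4] Hamming code is perfect with radius 1 (every word of F₂⁷ lies in exactly one radius-1 ball around a codeword). The generator matrix and helper names below are our own illustration, not the paper's constructions.

```python
from itertools import product

# Systematic generator matrix of the [7,4] binary Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg):
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

def wdist(u, v, w):
    """Weighted Hamming distance: sum the weight of each coordinate where u, v differ."""
    return sum(wi for ui, vi, wi in zip(u, v, w) if ui != vi)

weights = [1] * 7   # unit weights recover the classical Hamming metric
codewords = [encode(m) for m in product([0, 1], repeat=4)]
# Perfect with radius 1: every word of F_2^7 is within distance 1 of exactly one codeword.
perfect = all(sum(1 for c in codewords if wdist(x, c, weights) <= 1) == 1
              for x in product([0, 1], repeat=7))
```

The check confirms the sphere-packing identity 16 · 8 = 2⁷ together with disjointness of the balls.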

8.
Project dynamics and emergent complexity
This paper presents a theoretical analysis of project dynamics and emergent complexity in new product development (NPD) projects subjected to the management concept of concurrent engineering. To provide a comprehensive study, the complexity frameworks, theories and measures that have been developed in organizational theory, systematic engineering design and basic scientific research are reviewed. For the evaluation of emergent complexity in NPD projects, an information-theoretic quantity—termed “effective measure complexity” (EMC)—is selected from a variety of measures, because it can be derived from first principles and therefore has high construct validity. Furthermore, it can be calculated efficiently from dynamic generative models or purely from historical data, without intervening models. The EMC measures the mutual information between the infinite past and future histories of a stochastic process. According to this principle, it is particularly interesting to evaluate the time-dependent complexity in NPD and to uncover the relevant interactions. To obtain analytical results, a model-driven approach is taken and a vector autoregression (VAR) model of cooperative work is formulated. The formulated VAR model provides the foundation for the calculation of a closed-form solution of the EMC in the original state space. This solution can be used to analyze and optimize complexity based on the model's independent parameters. Moreover, a transformation into the spectral basis is carried out to obtain more expressive solutions in matrix form. The matrix form allows identification of the surprisingly few essential parameters and calculation of two lower complexity bounds. The essential parameters include the eigenvalues of the work transformation matrix of the VAR model and the correlations between components of performance fluctuations.
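For intuition about EMC, consider the simplest special case: a scalar stationary Gaussian AR(1) process X_t = a·X_{t−1} + ε_t. By the Markov property, the mutual information between the infinite past and future collapses to I(X₀; X₁), which for jointly Gaussian variables with correlation a equals −½ ln(1 − a²). The sketch below (standard library only; our own derivation for this special case, not the paper's matrix-form solution) compares the closed form with a simulated lag-1 correlation.

```python
import math
import random

def emc_ar1(a):
    """Closed-form EMC (excess entropy) of a stationary Gaussian AR(1) process:
    by the Markov property, I(past; future) = I(X_0; X_1) = -1/2 ln(1 - a^2)."""
    return -0.5 * math.log(1.0 - a * a)

def simulate_corr(a, n=100_000, seed=1):
    """Estimate the lag-1 correlation of X_t = a*X_{t-1} + eps_t by simulation."""
    rng = random.Random(seed)
    x, pairs = 0.0, []
    for _ in range(n):
        x_new = a * x + rng.gauss(0.0, 1.0)
        pairs.append((x, x_new))
        x = x_new
    mx = sum(p for p, _ in pairs) / n
    my = sum(q for _, q in pairs) / n
    cov = sum((p - mx) * (q - my) for p, q in pairs) / n
    vx = sum((p - mx) ** 2 for p, _ in pairs) / n
    vy = sum((q - my) ** 2 for _, q in pairs) / n
    return cov / math.sqrt(vx * vy)
```

As the coupling a approaches 1, EMC diverges, matching the intuition that strongly coupled task dynamics carry more emergent complexity.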

9.
A new Lee–Carter model parameterization is introduced with two advantages. First, the Lee–Carter parameters are normalized such that they have a direct and intuitive interpretation, comparable across populations. Second, the model is stated in terms of the “needed-exposure” (NE). The NE is the number required in order to get one expected death and is closely related to the “needed-to-treat” measure used to communicate risks and benefits of medical treatments. In the new parameterization, time parameters are directly interpretable as an overall across-age NE. Age parameters are interpretable as age-specific elasticities: percentage changes in the NE at a particular age in response to a percent change in the overall NE. A similar approach can be used to confer interpretability on parameters of other mortality models.
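The NE itself is just the reciprocal of a mortality rate, and the elasticity reading of the age parameters can be illustrated with log-ratios. The rates below are made-up illustrative numbers, not fitted Lee–Carter output, and the function names are our own.

```python
import math

def needed_exposure(m):
    """Needed-exposure: exposure (person-years) required for one expected death."""
    return 1.0 / m

# Hypothetical age-specific mortality rates (purely illustrative numbers).
rates = {40: 0.002, 60: 0.01, 80: 0.06}
ne = {age: needed_exposure(m) for age, m in rates.items()}

def elasticity(ne_age_before, ne_age_after, ne_all_before, ne_all_after):
    """Age-specific elasticity: % change in the age-level NE per % change in the
    overall across-age NE, computed as a ratio of log changes."""
    return (math.log(ne_age_after / ne_age_before)
            / math.log(ne_all_after / ne_all_before))
```

For example, if the overall NE rises 10% while the NE at some age rises 5%, that age's elasticity is about 0.51.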

10.
The complexity of linear programming is discussed in the “integer” and “real number” models of computation. Even though the integer model is widely used in theoretical computer science, the real number model is more useful for estimating an algorithm's running time in actual computation. Although the ellipsoid algorithm is a polynomial-time algorithm in the integer model, we prove that it has unbounded complexity in the real number model. We conjecture that there exists no polynomial-time algorithm for the linear inequalities problem in the real number model. We also conjecture that linear inequalities are strictly harder than linear equalities in all “reasonable” models of computation.

11.
王昱  邱涌钦  武玮 《运筹与管理》2022,31(5):143-149
Using Chinese A-share listed manufacturing firms, combined with the customs database and CEPII's BACI import-export data, this paper applies a nonparametric quantile model based on B-spline expansion to explore the nonlinear and heterogeneous effects of financialization on export technological sophistication. We find: (1) Financialization has a nonlinear effect on export sophistication: in the early stage of a firm's allocation of financial assets, financial assets mainly crowd out sophistication, and as financial assets increase the effect gradually turns into promotion. (2) The effect of financialization on export sophistication is heterogeneous: firms at low quantiles exhibit a U-shaped pattern as financialization deepens, while firms at high quantiles exhibit an inverted-N-shaped crowding-out pattern. (3) The inhibiting effect of financialization on the export sophistication of state-owned enterprises is stronger than the overall level, while its promoting effect on non-state-owned enterprises is stronger than the overall level. At low levels of financialization, it exerts a strong inhibiting effect on high-tech firms, whereas for non-high-tech industries, under the catch-up effect, financial assets act more as a “reservoir.” These conclusions provide a decision-making basis for choosing a reasonable level of financialization so as to raise firms' export sophistication.

13.
A new algorithm for clique detection in a graph is introduced. The method rests on the so-called “decomposition of a graph into a chain of subgraphs” and the corresponding “quasi-block-diagonalisation” of the adjacency matrix. A FORTRAN IV computer program is presented.
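The paper's chain-decomposition method and FORTRAN IV listing are not reproduced here; as a point of reference, the sketch below enumerates maximal cliques with the classical Bron–Kerbosch recursion, a different but standard clique-detection technique.

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate all maximal cliques: R = current clique, P = candidates, X = excluded."""
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def maximal_cliques(edges, n):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = []
    bron_kerbosch(set(), set(range(n)), set(), adj, out)
    return out
```

On a triangle with one pendant edge, the algorithm finds the two maximal cliques {0,1,2} and {2,3}.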

15.
New efficient methods are developed for optimal maximum-likelihood (ML) decoding of an arbitrary binary linear code based on data received from any discrete Gaussian channel. The decoding algorithm is based on monotonic optimization: minimizing a difference of monotonic (d.m.) objective functions subject to 0–1 constraints on the bit variables. The iterative process converges to the globally optimal ML solution after finitely many steps. The proposed algorithm's computational complexity depends on the input sequence length k, which is much smaller than the codeword length n, especially for codes with small code rate. The viability of the developed methods is verified through simulations on different coding schemes.
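The monotonic-optimization machinery itself is not reproduced; as a baseline, here is the brute-force ML rule it competes with: scan all 2^k messages and return the codeword nearest to the received vector, which makes the exponential dependence on k (rather than n) explicit. The BPSK mapping and names are illustrative assumptions.

```python
from itertools import product

def ml_decode(received, G):
    """Exhaustive ML decoding for a Gaussian channel with BPSK (0 -> +1, 1 -> -1):
    scan all 2^k messages and return the codeword nearest to `received` in
    squared Euclidean distance (the ML rule under Gaussian noise)."""
    k = len(G)
    best, best_d = None, float("inf")
    for msg in product([0, 1], repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        sig = [1.0 if b == 0 else -1.0 for b in cw]
        d = sum((r - s) ** 2 for r, s in zip(received, sig))
        if d < best_d:
            best, best_d = cw, d
    return best
```

For the length-3 repetition code, a noisy all-positive observation decodes to the all-zero codeword even when one sample is flipped.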

16.
We present game-theoretic characterizations of the complexity/expressibility measures “dimension” and “the number of variables” as Least Fixed Point queries. As an example, we use these characterizations to compute the dimension and number of variables of Connectivity and Connectivity.

18.
Let G(n,k) be the graph whose vertices are the k-element subsets of an n-set, represented as n-tuples of “0”s and “1”s with k “1”s. Two such subsets are adjacent if one can be obtained from the other by switching a “0” and a “1” in adjacent positions, where the first and nth positions are also considered adjacent. The problem of finding Hamiltonian cycles in G(n,k) is discussed; it may be considered a problem of finding “Gray codes” of the k-element subsets of an n-set. It is shown that no such cycle exists if n and k are both even, or if k = 2 and n ≥ 7, and that such a cycle does exist in all other cases where k ≤ 5.
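The small cases of this result can be checked by exhaustive search. The sketch below builds G(n,k) with the cyclic adjacent-transposition adjacency and looks for a Hamiltonian cycle by backtracking; it is only feasible for tiny n and is our own illustration, not the paper's argument.

```python
from itertools import combinations

def adjacent(a, b, n):
    """Adjacent iff b is obtained from a by swapping a 1 and a 0 in cyclically
    adjacent positions (positions 0 and n-1 count as adjacent)."""
    diff = a ^ b                      # symmetric difference of the two k-subsets
    if len(diff) != 2:
        return False
    i, j = sorted(diff)
    return j - i == 1 or (i == 0 and j == n - 1)

def has_hamiltonian_cycle(n, k):
    verts = [frozenset(c) for c in combinations(range(n), k)]
    adj = {v: [u for u in verts if adjacent(v, u, n)] for v in verts}
    start = verts[0]

    def dfs(path, seen):
        if len(path) == len(verts):
            return start in adj[path[-1]]     # close the cycle
        return any(dfs(path + [u], seen | {u})
                   for u in adj[path[-1]] if u not in seen)

    return dfs([start], {start})
```

Searching confirms the stated pattern on tiny instances: G(5,2) has a Hamiltonian cycle, while G(4,2) (n and k both even) does not.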

19.
We consider locally balanced Gray codes. We say that a Gray code is locally balanced if every “short” subword of its transition sequence contains all letters of the alphabet {1, 2, ..., n}. The minimal length of these subwords is the window width of the code. We show that for each n ≥ 3 there exists a Gray code with window width at most n + 3⌈log n⌉.
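The window-width notion is easy to compute for a given transition sequence. The sketch below measures it for the cyclic transition sequence of the standard binary reflected Gray code with n = 3 (which is far from balanced for large n, but fine for illustration); the function name is our own.

```python
def window_width(transitions, n):
    """Smallest w such that every cyclic window of length w of the transition
    sequence contains all letters 1..n."""
    L = len(transitions)
    for w in range(n, L + 1):
        if all(set(transitions[i:i + w] + transitions[:max(0, i + w - L)])
               == set(range(1, n + 1)) for i in range(L)):
            return w
    return None

# Cyclic transition sequence of the standard reflected Gray code for n = 3:
# 000 -> 001 -> 011 -> 010 -> 110 -> 111 -> 101 -> 100 -> 000.
brgc3 = [1, 2, 1, 3, 1, 2, 1, 3]
```

For n = 3 this sequence already achieves window width 4, within the paper's bound n + 3⌈log n⌉.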

20.
In this paper, two general methods for constructing self-dual codes are presented. These methods use circulant matrices, in circulant or bordered circulant structures, to construct suitable generator matrices. Necessary and sufficient conditions for the generated codes to be self-dual are provided. Special cases of the proposed methods include the well-known “pure double circulant” and “bordered double circulant” constructions of self-dual codes. As an example, the methods were applied to search for self-dual codes over GF(5); many new inequivalent self-dual codes with the best known distance were found.
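For the pure double circulant construction [I | A], self-duality amounts to the single condition I + A·Aᵀ ≡ 0 (mod q). A tiny GF(5) search (length-4 codes only, far smaller than those in the paper) sketches the check; the function names are our own.

```python
from itertools import product

def circulant(first_row):
    """Circulant matrix whose i-th row is the first row cyclically shifted i steps."""
    k = len(first_row)
    return [[first_row[(j - i) % k] for j in range(k)] for i in range(k)]

def is_self_dual_pure_dc(first_row, q):
    """Check whether the pure double circulant code [I | A] over GF(q) is self-dual,
    i.e. G * G^T = I + A * A^T = 0 (mod q)."""
    A = circulant(first_row)
    k = len(A)
    for i in range(k):
        for j in range(k):
            s = (1 if i == j else 0) + sum(A[i][t] * A[j][t] for t in range(k))
            if s % q != 0:
                return False
    return True

# Search all length-2 circulant first rows over GF(5).
good = [r for r in product(range(5), repeat=2) if is_self_dual_pure_dc(list(r), 5)]
```

The search finds exactly the rows (a, b) with ab ≡ 0 and a² + b² ≡ 4 (mod 5), i.e. (2,0), (3,0), (0,2) and (0,3).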
