Similar Documents
1.
In this paper, we describe a solution to the register synthesis problem for a class of sequence generators known as algebraic feedback shift registers (AFSRs). These registers are based on the algebra of π-adic numbers, where π is an element of a ring R, and they produce sequences of elements of R/(π). We give several cases in which the register synthesis problem can be solved by an efficient algorithm. Consequently, any keystream over R/(π) used in a stream cipher must not be generatable by a small register in these classes. This paper extends the analyses of feedback with carry shift registers and algebraic feedback shift registers by Goresky, Klapper, and Xu.
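
A concrete special case is π = 2, which gives a binary feedback-with-carry shift register (FCSR). The following minimal Python sketch is illustrative only; the connection integer q = 37 and the initial state are arbitrary choices, not taken from the paper.

    # Minimal binary FCSR (the pi = 2 case of an AFSR); all constants are illustrative.
    # Connection integer q = -1 + q_1*2 + ... + q_k*2^k, so the taps are the bits of q + 1.
    def fcsr(q, state, carry, steps):
        k = q.bit_length() - 1                         # register length
        taps = [(q + 1) >> i & 1 for i in range(1, k + 1)]
        out = []
        for _ in range(steps):
            sigma = carry + sum(t * s for t, s in zip(taps, state))
            a = sigma % 2                              # bit fed back into the register
            carry = sigma // 2                         # integer carry kept in memory
            out.append(a)
            state = [a] + state[:-1]                   # shift: newest cell first
        return out

    # 2 is a primitive root mod 37, so apart from degenerate states the output is
    # eventually periodic with the maximal period q - 1 = 36 (an l-sequence).
    print(fcsr(q=37, state=[1, 0, 0, 1, 0], carry=0, steps=40))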

2.
Sequences with almost perfect linear complexity profile, as defined by Niederreiter (1997, Lecture Notes in Computer Science, Vol. 304, pp. 37–51, Springer-Verlag, Berlin/New York), are quite important for stream ciphers. In this paper, we investigate multi-sequences with almost perfect linear complexity profile and obtain a construction of such multi-sequences using function fields over finite fields. Some interesting examples arising from this construction are presented to illustrate it.

3.
Stream ciphers based on linear feedback shift registers have been subject to algebraic attacks. To avoid such attacks, feedback with carry shift registers (FCSRs), which are well suited to hardware implementation, have been proposed as an alternative. FCSRs have been implemented in a ring representation in order to circumvent some weaknesses of the traditional representations. In this paper, we study the simplest case of FCSRs, binary FCSRs, which are common in applications, and give a fast algorithm to construct binary ring FCSRs for hardware stream ciphers.

4.
For several decades the standard algorithm for factoring polynomials f with rational coefficients has been the Berlekamp-Zassenhaus algorithm. The complexity of this algorithm depends exponentially on n, where n is the number of modular factors of f. This exponential time complexity is due to a combinatorial problem: the problem of choosing the right subsets of these n factors. In this paper, this combinatorial problem is reduced to a type of Knapsack problem that can be solved with lattice reduction algorithms. The result is a practical algorithm that can factor polynomials that are far out of reach for previous algorithms. The presented solution to the combinatorial problem is different from previous lattice-based factorizers; these algorithms avoided the combinatorial problem by solving the entire factorization problem with lattice reduction. This led to lattices of large dimension and coefficients, and thus poor performance. This is why lattice-based algorithms, despite their polynomial time complexity, did not replace Berlekamp-Zassenhaus as the standard method. That is now changing; new versions of computer algebra systems such as Maple, Magma, NTL and Pari have already switched to the algorithm presented here.
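
As a toy illustration of the combinatorial bottleneck only (not of the lattice-based solution described above), the sketch below factors a polynomial modulo a small prime with SymPy and then tries every subset of the modular factors; the real Berlekamp-Zassenhaus algorithm additionally Hensel-lifts the factors to a large prime power before recombination.

    # Toy Berlekamp-Zassenhaus recombination step (no Hensel lifting, tiny prime):
    # every subset of the modular factors may have to be tried, which is the
    # 2^n combinatorial bottleneck that the knapsack/lattice approach removes.
    from itertools import combinations
    from sympy import symbols, Poly, div

    x = symbols('x')
    f = Poly(x**4 + 1, x)                         # irreducible over the integers
    fp = Poly(x**4 + 1, x, modulus=5)             # but it splits modulo 5
    _, fac = fp.factor_list()
    factors = [g for g, _ in fac]                 # e.g. x**2 + 2 and x**2 - 2 (mod 5)

    true_factors = []
    for r in range(1, len(factors) + 1):          # up to 2^n - 1 non-empty subsets
        for subset in combinations(factors, r):
            cand = Poly(1, x, modulus=5)
            for g in subset:
                cand = cand * g
            lifted = Poly(cand.as_expr(), x)      # lift the mod-5 coefficients to Z
            if div(f, lifted)[1].is_zero:         # does the lifted product divide f?
                true_factors.append(lifted)

    print(true_factors)                           # only the full product survives here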

5.
Nonlinear filter generators are commonly used as keystream generators in stream ciphers. A nonlinear filter generator uses a nonlinear filtering function to combine the outputs of a linear feedback shift register (LFSR) in order to improve the linear complexity of the keystream sequence. However, LFSR-based stream ciphers remain potentially vulnerable to algebraic attacks that recover the key from some keystream bits. Although the known algebraic attacks require only polynomial time, each has its own constraints. This paper uses the linearization of nonlinear filter generators to cryptanalyze LFSR-based stream ciphers; the method works for any nonlinear filter generator. Viewing a nonlinear filter generator as a Boolean network that evolves as an automaton through Boolean functions, we first give its linearization representation. Compared to the linearization representation in Limniotis et al. (2008), this representation has lower space complexity in most cases. Based on the representation, key recoverability is analyzed via the observability of Boolean networks, and an algorithm for key recovery is given. Compared with exhaustive key search, the linearization representation requires less computation time, although the complexity is still exponential.
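
For concreteness, a nonlinear filter generator can be sketched as follows; the LFSR taps, filter inputs and filtering function below are arbitrary illustrative choices, not the construction analysed in the paper.

    # Toy nonlinear filter generator: an LFSR whose state is filtered by a
    # nonlinear Boolean function. All constants are illustrative choices.
    def lfsr_step(state, taps):
        fb = 0
        for t in taps:
            fb ^= state[t]
        return [fb] + state[:-1]

    def keystream(state, n):
        taps = [0, 2, 3, 5]                   # LFSR feedback positions (illustrative)
        filt = lambda a, b, c: (a & b) ^ c    # nonlinear filtering function (degree 2)
        out = []
        for _ in range(n):
            out.append(filt(state[1], state[4], state[5]))   # filter three state cells
            state = lfsr_step(state, taps)
        return out

    print(keystream([1, 0, 1, 1, 0, 0], 20))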

6.
This paper considers the security implications of k-normal Boolean functions when they are employed in certain stream ciphers. A generic algorithm is proposed for cryptanalysis of the considered class of stream ciphers, based on a security weakness of k-normal Boolean functions. The proposed algorithm yields a framework for mounting cryptanalysis against particular stream ciphers within the considered class, and it also implies design guidelines for avoiding certain weak stream cipher constructions. A particular objective of this paper is the security evaluation of the stream cipher Grain-128 using the developed generic algorithm. Whereas the best known attacks against Grain-128 achieve a key-recovery complexity lower than exhaustive search only over a small subset (up to 5%) of all possible secret keys, the cryptanalysis proposed in this paper achieves significantly lower complexity than exhaustive search for any secret key. The proposed approach primarily depends on the order of normality of the Boolean function employed in Grain-128. Accordingly, in addition to insights into the security of Grain-128, the results of this paper are also evidence of the cryptographic significance of the normality criterion for Boolean functions.

7.
Maximal length FCSR sequences, or l-sequences, are an important type of nonlinear sequence used for building stream ciphers. This paper studies the linearity properties of l-sequences. Although it is widely accepted that l-sequences have high linear complexity, close to half their period, it is shown that for most l-sequences linear relations with a large statistical advantage exist.
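
For an l-sequence the period equals the multiplicative order of 2 modulo the odd connection integer q, and it is maximal (q - 1) exactly when 2 is a primitive root modulo q. A quick check, with q = 37 chosen purely for illustration:

    # Period of an l-sequence with connection integer q = multiplicative order of 2 mod q.
    def ord2(q):                  # q odd, q > 1
        t, e = 2 % q, 1
        while t != 1:
            t = (t * 2) % q
            e += 1
        return e

    q = 37                        # illustrative choice
    print(ord2(q), q - 1)         # 36 36 -> 2 is a primitive root, so the period is maximal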

8.
We study stream ciphers based on feedback shift registers. For a stream generator in general form, we prove a theorem that equates invertibility of the next-state function with recurrency of the shift control function. We then study the generator of the stream cipher A5/1, used in the GSM cellular telephone standard to ensure the confidentiality of conversations. For this generator we count the number of states that can be reached after t clock cycles from initial states without predecessors but cannot be reached after any smaller number of cycles. We show how to exponentially reduce the key space of A5/1 during clocking. The results can be used directly in the cryptanalysis of A5/1.
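
For reference, an A5/1-style generator (three LFSRs with majority-controlled clocking) can be sketched as below. The register lengths, feedback taps and clocking positions are the ones commonly cited in the open literature, and the sketch is illustrative rather than a faithful reimplementation; the majority clocking is what makes the next-state function non-injective, so some states have no predecessors.

    # A5/1-style generator with majority clocking (illustrative sketch; the lengths,
    # feedback taps and clocking bits below are the commonly cited ones).
    import random

    R_LEN   = (19, 22, 23)
    FB_TAPS = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))
    CLK_BIT = (8, 10, 10)

    def step(regs):
        clk = [r[CLK_BIT[i]] for i, r in enumerate(regs)]
        maj = 1 if sum(clk) >= 2 else 0
        new = []
        for i, r in enumerate(regs):
            if clk[i] == maj:                        # a register moves only if it agrees
                fb = 0
                for t in FB_TAPS[i]:
                    fb ^= r[t]
                r = [fb] + r[:-1]
            new.append(r)
        out = new[0][-1] ^ new[1][-1] ^ new[2][-1]   # XOR of the three output cells
        return new, out

    regs = [[random.randint(0, 1) for _ in range(n)] for n in R_LEN]
    bits = []
    for _ in range(32):
        regs, b = step(regs)
        bits.append(b)
    print(bits)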

9.
The construction of shortest feedback shift registers for a finite sequence \(S_1,\ldots ,S_N\) is considered over finite chain rings, such as \({\mathbb Z}_{p^r}\). A novel algorithm is presented that yields a parametrization of all shortest feedback shift registers for the sequence of numbers \(S_1,\ldots ,S_N\), thus solving an open problem in the literature. The algorithm iteratively processes each number, starting with \(S_1\), and constructs at each step a particular type of minimal basis. The construction involves a simple update rule at each step which leads to computational efficiency. It is shown that the algorithm simultaneously computes a similar parametrization for the reverse sequence \(S_N,\ldots ,S_1\). The complexity order of the algorithm is shown to be \(O(r N^2)\).
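
For comparison, the field case of this problem (a shortest LFSR for a binary sequence) is solved by the classical Berlekamp-Massey algorithm, sketched below; this is only the baseline, not the chain-ring algorithm of the paper, which additionally parametrizes all shortest registers over rings such as \({\mathbb Z}_{p^r}\).

    # Classical Berlekamp-Massey over GF(2): length of a shortest LFSR generating s.
    # Field-case baseline only, not the chain-ring algorithm described above.
    def berlekamp_massey(s):
        n = len(s)
        c = [1] + [0] * n             # current connection polynomial
        b = [1] + [0] * n             # connection polynomial before the last length change
        L, m = 0, 1
        for i in range(n):
            d = s[i]                  # discrepancy between s[i] and the LFSR prediction
            for j in range(1, L + 1):
                d ^= c[j] & s[i - j]
            if d == 0:
                m += 1
            elif 2 * L <= i:
                t = c[:]
                for j in range(n + 1 - m):
                    c[j + m] ^= b[j]
                L, b, m = i + 1 - L, t, 1
            else:
                for j in range(n + 1 - m):
                    c[j + m] ^= b[j]
                m += 1
        return L                      # the linear complexity of s

    print(berlekamp_massey([0, 1, 0, 1, 0, 1]))   # -> 2 (the sequence satisfies s_n = s_{n-2})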

10.
In this paper we present an improved algorithm for finding low-weight multiples of polynomials over the binary field using coding-theoretic methods. The code defined by the given polynomial has a cyclic structure, which allows an algorithm to search for shifts of the sought minimum-weight codeword. A code of higher dimension is therefore constructed, which has a larger number of low-weight codewords and, after some additional processing, also a reduced minimum distance. Applying an algorithm for finding low-weight codewords in the constructed code yields a lower complexity for finding low-weight polynomial multiples than previous approaches. As an application, we show a key-recovery attack against  that has lower complexity than the chosen security level indicates. Using similar ideas we also present a new probabilistic algorithm for finding a multiple of weight 4 that is faster than previous approaches; this is relevant, for example, in correlation attacks on stream ciphers.
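
As a simple baseline (a brute-force search rather than the coding-theoretic method described above), weight-3 multiples x^j + x^i + 1 of a binary polynomial p(x) can be found by storing the residues x^i mod p(x) and looking for a matching pair:

    # Brute-force search for a weight-3 multiple x^j + x^i + 1 of a binary polynomial
    # p(x); GF(2) polynomials are encoded as Python ints. Baseline illustration only.
    def weight3_multiple(p, max_deg):
        d = p.bit_length() - 1
        seen = {}                       # residue of x^i mod p(x)  ->  i
        r = 1                           # x^0 mod p(x)
        for i in range(1, max_deg + 1):
            r <<= 1                     # multiply by x
            if r >> d & 1:
                r ^= p                  # reduce modulo p(x)
            if r == 1:
                return None             # x has completed its period: residues repeat
            want = r ^ 1                # need a previous j with x^j = x^i + 1 (mod p)
            if want in seen:
                return (i, seen[want])  # x^i + x^j + 1 is divisible by p(x)
            seen[r] = i
        return None

    # p(x) = x^4 + x + 1: the search returns (4, 1), i.e. p itself, a weight-3 multiple.
    print(weight3_multiple(0b10011, 100))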

11.
Ralph Freese, Order, 1987, 3(4): 331-344
In the late 1930s Philip Whitman gave an algorithm for deciding, for lattice terms v and u, whether v ≤ u in the free lattice on the variables in v and u. He also showed that each element of the free lattice has a shortest term representing it, that this term is unique up to commutativity and associativity, and he gave an algorithm for finding it. Almost all the work on free lattices uses these algorithms. Building on the work of Ralph McKenzie, J. B. Nation and the author have developed very efficient algorithms for deciding whether a lattice term v has a lower cover (i.e., whether there is a w covered by v, written w ≺ v) and for finding the lower covers if they exist. This paper studies the efficiency of both Whitman's algorithm and the algorithms of Freese and Nation. It is shown that although it is often quite fast, the straightforward implementation of Whitman's algorithm for testing v ≤ u is exponential in time in the worst case. A modification of Whitman's algorithm is given which is polynomial and has constant minimum time. The algorithms of Freese and Nation are then shown to be polynomial.
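
The straightforward recursion behind Whitman's test for v ≤ u, whose worst case is exponential, can be sketched as follows (terms are encoded as nested tuples; this is an illustrative sketch, not the polynomial-time modification discussed in the paper):

    # Whitman's test for v <= u in the free lattice, as a straightforward recursion.
    # A generator is a string; ('join', t1, ..., tk) and ('meet', t1, ..., tk) are
    # formal joins and meets. Illustrative only; the worst case is exponential.
    def leq(v, u):
        if isinstance(v, tuple) and v[0] == 'join':
            return all(leq(vi, u) for vi in v[1:])
        if isinstance(u, tuple) and u[0] == 'meet':
            return all(leq(v, uj) for uj in u[1:])
        if isinstance(v, str) and isinstance(u, str):
            return v == u
        if isinstance(v, str):                       # u is a join
            return any(leq(v, uj) for uj in u[1:])
        if isinstance(u, str):                       # v is a meet
            return any(leq(vi, u) for vi in v[1:])
        # v is a meet and u is a join: Whitman's condition (W)
        return any(leq(vi, u) for vi in v[1:]) or any(leq(v, uj) for uj in u[1:])

    # x ∧ (y ∨ z) <= (x ∧ y) ∨ (x ∧ z) fails in the free lattice (no distributivity):
    print(leq(('meet', 'x', ('join', 'y', 'z')),
              ('join', ('meet', 'x', 'y'), ('meet', 'x', 'z'))))   # False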

12.

This paper is concerned with algorithms for computing in the divisor class group of a nonsingular plane curve with only one point at infinity. Divisors are represented as ideals, and an ideal reduction algorithm based on lattice reduction is given. We obtain a unique representative for each divisor class, and the algorithms for addition and reduction of divisors run in polynomial time. An algorithm is also given for solving the discrete logarithm problem when the curve is defined over a finite field.

13.
Complexity measures for sequences over finite fields, such as the linear complexity and the k-error linear complexity, play an important role in cryptology. Recent developments in stream ciphers point towards an interest in word-based stream ciphers, which require the study of the complexity of multisequences. We introduce various options for error linear complexity measures for multisequences. For finite multisequences as well as for periodic multisequences with prime period, we present formulas for the number of multisequences with given error linear complexity for several cases, and we present lower bounds for the expected error linear complexity.

14.
Nonlinear feedback shift registers (NFSRs) are widely used as building blocks in stream cipher design. The cascade connection of NFSRs, an important architecture, has been adopted in the Grain family of stream ciphers. In this paper, a new sufficient condition under which an NFSR cannot be decomposed into the cascade connection of two smaller NFSRs is presented; the condition is easy to verify from the algebraic normal form (ANF) of the characteristic function. In fact, our results also apply to nonsingular Boolean functions, improving a previous result of Rhodes [6] that does not cover the characteristic functions of NFSRs.

15.
Being able to compute a low-weight multiple of a given binary polynomial efficiently is often a key ingredient of correlation attacks on LFSR-based stream ciphers. The best known general-purpose algorithm is based on the generalized birthday problem. We describe an alternative approach which is based on discrete logarithms and can take advantage of the structure of the polynomial. In some cases it has much lower memory requirements with comparable time complexity.
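
A minimal version of the discrete-logarithm idea for the simplest case (weight-3 multiples of a primitive polynomial p(x), where the exp/log tables of GF(2^d) with respect to the root x can be precomputed) might look like the sketch below; the algorithm described above is more general and, in particular, does not assume that p is primitive.

    # Discrete-log search for weight-3 multiples x^j + x^i + 1 of a primitive binary
    # polynomial p(x) of degree d (GF(2) polynomials encoded as ints). Minimal sketch
    # under the assumption that p is primitive, so x generates the multiplicative group.
    def trinomial_multiples(p, limit=5):
        d = p.bit_length() - 1
        order = (1 << d) - 1
        exp, log = [0] * order, {}
        r = 1
        for i in range(order):            # exp/log tables for GF(2^d) w.r.t. the root x
            exp[i] = r
            log[r] = i
            r <<= 1
            if r >> d & 1:
                r ^= p
        found = []
        for i in range(1, order):
            j = log[exp[i] ^ 1]           # solve x^j = x^i + 1 with one table lookup
            if j > i:
                found.append((j, i))      # x^j + x^i + 1 is a multiple of p(x)
            if len(found) == limit:
                break
        return found

    print(trinomial_multiples(0b1000011, limit=3))   # p(x) = x^6 + x + 1 (primitive)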

16.
We outline a relatively new research agenda aiming at building a new approximation paradigm by matching two distinct domains: polynomial approximation and the exact solution of NP-hard problems by algorithms with guaranteed, non-trivial upper complexity bounds. We show how one can design approximation algorithms achieving ratios that are “forbidden” in polynomial time (unless a very unlikely complexity conjecture is confirmed), with worst-case complexity much lower than that of an exact computation.

17.
18.
This study presents an algorithm for generating efficient schedules with respect to total flow time and maximum earliness. All the algorithms in the literature for solving this problem are based on heuristic procedures and cannot necessarily generate all efficient schedules. This study shows that the problem can actually be solved in pseudo-polynomial time, and it develops an algorithm for doing so. The complexity of the algorithm is \(O(n^2 \bar{p} \log n)\). Its computational performance on problems of various sizes is evaluated.
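
To make the bicriteria objective concrete, the set of efficient (Pareto-optimal) schedules for a toy single-machine instance can be obtained by brute force, as in the sketch below; the processing times and due dates are made up, and this enumeration is unrelated to the pseudo-polynomial algorithm of the paper.

    # Brute-force enumeration of Pareto-efficient schedules for total flow time
    # and maximum earliness on a toy single-machine instance (made-up data).
    from itertools import permutations

    jobs = {'A': (3, 10), 'B': (5, 9), 'C': (2, 14)}   # job -> (processing time, due date)

    def criteria(seq):
        t, flow, max_early = 0, 0, 0
        for j in seq:
            p, d = jobs[j]
            t += p                                     # completion time of j
            flow += t                                  # total flow time = sum of completions
            max_early = max(max_early, max(0, d - t))  # earliness of j
        return flow, max_early

    points = {seq: criteria(seq) for seq in permutations(jobs)}
    efficient = [s for s, c in points.items()
                 if not any(o[0] <= c[0] and o[1] <= c[1] and o != c for o in points.values())]
    for s in efficient:
        print(s, points[s])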

19.
The vertex k-center selection problem is a well-known NP-hard minimization problem that cannot be solved in polynomial time within a \(\rho < 2\) approximation factor, unless \(P=NP\). Even though there are algorithms that achieve this 2-approximation bound, they perform poorly on most benchmarks compared to some heuristic algorithms. This seems to happen because the 2-approximation algorithms take, at every step, very conservative decisions in order to keep the approximation guarantee. In this paper we propose an algorithm that exploits the same structural properties of the problem that the 2-approximation algorithms use, but in a more relaxed manner. Instead of taking the decision that guarantees a 2-approximation, our algorithm takes the best decision near the one that guarantees the 2-approximation. This results in an algorithm with a worse approximation factor (a 3-approximation), but one that outperforms all the previously known approximation algorithms on the most well-known benchmarks for the problem, namely the pmed instances from OR-Lib (Beasley in J Oper Res Soc 41(11):1069–1072, 1990) and some instances from TSP-Lib (Reinelt in ORSA J Comput 3:376–384, 1991). However, the \(O(n^4)\) running time of this algorithm becomes impractical as the input grows. In order to improve the running time, we modified this algorithm, obtaining an \(O(n^2 \log n)\) heuristic that outperforms not only all the previously known approximation algorithms but also all the polynomial heuristics proposed to date.
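
For reference, the classic 2-approximation alluded to above can be implemented as a farthest-first traversal (Gonzalez-style); the sketch below shows that baseline on a made-up metric, not the 3-approximation heuristic proposed in the paper.

    # Farthest-first traversal: the classic 2-approximation for metric k-center,
    # shown only as the conservative baseline. dist is a symmetric metric matrix.
    def k_center_2approx(dist, k, first=0):
        n = len(dist)
        centers = [first]
        d = list(dist[first])                        # d[v] = distance to nearest chosen center
        while len(centers) < k:
            v = max(range(n), key=lambda i: d[i])    # farthest vertex from current centers
            centers.append(v)
            d = [min(d[i], dist[v][i]) for i in range(n)]
        return centers, max(d)                       # centers and their covering radius

    # Toy metric: points on a line at coordinates 0, 1, 5, 6, 10.
    pts = [0, 1, 5, 6, 10]
    dist = [[abs(a - b) for b in pts] for a in pts]
    print(k_center_2approx(dist, k=3))               # e.g. ([0, 4, 2], 1)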

20.
A critical step in a cutting plane algorithm is separation, i.e., establishing whether a given vector x violates an inequality belonging to a specific class. It is customary to express the time complexity of a separation algorithm in terms of the number of variables n. Here, we argue that a separation algorithm may instead process the vector containing the positive components of x, denoted supp(x), which offers a more compact representation, especially if x is sparse; we also propose to express the time complexity in terms of |supp(x)|. Although several well-known separation algorithms exploit the sparsity of x, we revisit this idea in order to take sparsity explicitly into account in the time complexity of separation and also to design faster algorithms. We apply this approach to two classes of facet-defining inequalities for the three-index assignment problem, and obtain separation algorithms whose time complexity is linear in |supp(x)| instead of n. We indicate that this can be generalized to the axial k-index assignment problem, and we show empirically how the separation algorithms exploiting sparsity improve on existing ones by running them on the largest instances reported in the literature.

