Similar Documents
20 similar documents found.
1.
We describe a lattice attack on the Digital Signature Algorithm (DSA) when used to sign many messages, m_i, under the assumption that a proportion of the bits of each of the associated ephemeral keys, y_i, can be recovered by alternative techniques.
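To make the lattice step concrete, the standard reduction from DSA signatures with partially known nonces to a hidden number problem runs as follows (textbook material, not the paper's exact formulation):

```latex
% A DSA signature (r_i, s_i) on m_i, made with ephemeral key y_i and
% long-term secret key x, satisfies s_i = y_i^{-1}(H(m_i) + x r_i) mod q,
% hence each signature gives one linear relation in x:
\[
  y_i \equiv s_i^{-1} H(m_i) + s_i^{-1} r_i \, x \pmod{q}.
\]
% Writing y_i = a_i + b_i with the bits a_i known and b_i small turns
% key recovery into a hidden number problem, which is solved by a
% closest-vector computation in a suitable lattice.
```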

2.
In a related-key attack, an attacker seeks to discover the secret key by requesting encryptions under keys related to the secret key in a manner chosen by the attacker. We describe a new related-key attack against generic ciphers, requiring just O(1) work to distinguish a cipher from random, and O(key length) to completely recover the secret key. This attack applies within a model which was not previously known to be vulnerable, undermining the theoretical foundation of the related-key attack concept. We propose a new definition of related-key security, which prevents all known generic attacks, including this new attack. We discuss the theoretical consequences of this new definition.

3.
Nguyen and Shparlinski have recently presented a polynomial-time algorithm that provably recovers the signer's secret DSA key when a few consecutive bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting, as usual, the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log^(1/2) q, but can be decreased to log log q with a running time q^(O(1/log log q)) subexponential in log q, and even further to two in polynomial time if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart, who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).

4.
The Advanced Encryption Standard (AES) is a 128-bit block cipher with a user key of 128, 192 or 256 bits, released by NIST in 2001 as the next-generation data encryption standard for use in the USA. It was adopted as an ISO international standard in 2005. Impossible differential cryptanalysis and the boomerang attack are powerful variants of differential cryptanalysis for analysing the security of a block cipher. In this paper, building on the notions of impossible differential cryptanalysis and the boomerang attack, we propose a new cryptanalytic technique, which we call the impossible boomerang attack, and then describe an extension of this attack which applies in a related-key attack scenario. Finally, we apply the impossible boomerang attack to break 6-round AES with 128 key bits and 7-round AES with 192/256 key bits, and using two related keys we apply the related-key impossible boomerang attack to break 8-round AES with 192 key bits and 9-round AES with 256 key bits. In the two-key related-key attack scenario, our results, which were the first to reach this number of attacked rounds, match the best currently known results for AES with 192/256 key bits in terms of the number of attacked rounds. The (related-key) impossible boomerang attack is a general cryptanalytic technique, and can potentially be used to cryptanalyse other block ciphers.

5.
Generic Groups, Collision Resistance, and ECDSA
Proved here is the sufficiency of certain conditions to ensure that the Elliptic Curve Digital Signature Algorithm (ECDSA) is existentially unforgeable under adaptive chosen-message attacks. The sufficient conditions include (i) a uniformity property and collision resistance for the underlying hash function, (ii) pseudorandomness in the private key space for the ephemeral private key generator, (iii) generic treatment of the underlying group, and (iv) a further condition on how the ephemeral public keys are mapped into the private key space. For completeness, a brief survey of necessary security conditions is also given. Some of the necessary conditions are weaker than the corresponding sufficient conditions used in the security proofs here, but others are identical. Despite the similarity between DSA and ECDSA, the main result is not appropriate for DSA, because the fourth condition above seems to fail for DSA. (The corresponding necessary condition is plausible for DSA, but it is neither proved here nor is the security of DSA proved assuming this weaker condition.) Brickell et al. [Vol. 1751 of Lecture Notes in Computer Science, pp. 276-292], Jakobsson et al. [Vol. 1976 of Lecture Notes in Computer Science, pp. 73-89] and Pointcheval et al. [Vol. 13 of Journal of Cryptology, pp. 361-396] only consider signature schemes that include the ephemeral public key in the hash input, which ECDSA does not do, and moreover assume a condition on the hash function stronger than the first condition above. This work seems to be the first advance in the provable security of ECDSA.
AMS classification: 94A60
Supported in part by a Natural Sciences and Engineering Research Council of Canada Industrial Research Fellowship.
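For reference, a sketch of ECDSA signing makes condition (iv) concrete: the "mapping of ephemeral public keys into the private key space" is the conversion R -> x(R) mod n below (standard notation, not quoted from the paper):

```latex
% ECDSA signing over a group of prime order n with generator G,
% private key d, message m, hash H:
%   choose ephemeral private key k in {1,...,n-1}
\[
  R = kG, \qquad r = x(R) \bmod n, \qquad
  s = k^{-1}\bigl(H(m) + d\,r\bigr) \bmod n,
\]
% with signature (r, s).  Condition (iv) is a property of the
% conversion function R -> x(R) mod n used to form r.
```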

6.
An identity-based non-interactive public key distribution system is presented that is based on a novel trapdoor one-way function which allows a trusted authority to compute discrete logarithms modulo a publicly known composite number m, while this is infeasible for an adversary who does not know the factorization of m. Without interacting with a key distribution center or with the recipient of a given message, a user can generate a shared secure cipher key based solely on the recipient's identity and the user's own secret key, and subsequently send the message, encrypted under the generated key with a conventional cipher, over an insecure channel to the recipient. In contrast to previously proposed identity-based systems, no public keys, certificates for public keys or other information need be exchanged, and thus the system is suitable for certain applications that do not allow for interaction. The paper solves an open problem posed by Shamir in 1984.

7.
Recently, an image encryption scheme based on chaotic standard and logistic maps was proposed by Patidar et al. It was later reported by Rhouma et al. that an equivalent secret key can be reconstructed with only one known/chosen plaintext and the corresponding ciphertext. Patidar et al. soon modified the original scheme and claimed that the modified scheme is secure against Rhouma et al.'s attack. In this paper, we point out that the modified scheme is still insecure against the same known/chosen-plaintext attack. In addition, some other security defects present in both the original and the modified schemes are also reported.

8.
Many round-based chaotic image encryption algorithms employ the permutation-diffusion structure. This structure has been found insecure when the number of iteration rounds is one, and the secret permutation of some existing schemes can be recovered even when more rounds are used. In this paper, we present a single-round permutation-diffusion chaotic cipher for gray images, in which some temp-value feedback mechanisms are introduced to resist the known attacks. Specifically, we first embed a plaintext feedback technique in the permutation process to derive different permutation sequences for different plain-images, and then employ plaintext/ciphertext feedback in the diffusion process to generate the equivalent secret key dynamically. Experimental results show that the new scheme has a large key space and can resist differential attack. It is also efficient.
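A minimal sketch of a single-round permutation-diffusion cipher with plaintext and ciphertext feedback, driven by a logistic map. This is illustrative only: the key schedule, map parameters and feedback rules of the published scheme differ, and all names below are our own.

```python
import numpy as np

def logistic_stream(x0, r, n):
    # Iterate the logistic map x -> r*x*(1-x) and return n samples.
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img, x0=0.37, r=3.99):
    # img: flattened gray image as a 1-D uint8 array.
    n = img.size
    # Plaintext feedback in the permutation stage: mix a digest of the
    # plain-image into the chaotic seed, so every plain-image gets its
    # own permutation and a chosen-plaintext attacker cannot recover a
    # single fixed permutation.
    seed = (x0 + (int(img.sum()) % 251) / 251.0) % 1.0
    perm = np.argsort(logistic_stream(seed, r, n))
    p = img[perm]
    # Diffusion stage with ciphertext feedback: each cipher byte depends
    # on a keystream byte, the permuted plain byte and the previous
    # cipher byte, so the equivalent key varies along the image.
    ks = np.floor(logistic_stream((x0 + 0.5) % 1.0, r, n) * 256).astype(int)
    c = np.empty(n, dtype=np.uint8)
    prev = 0
    for i in range(n):
        c[i] = (int(p[i]) + ks[i] + prev) % 256
        prev = int(c[i])
    return c
```

Note that a decryptor must be able to re-derive the plaintext-dependent seed; conveying it implicitly through the feedback values is exactly the kind of detail the paper's temp-value feedback mechanisms address.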

9.
Although various hash functions based on chaos or chaotic neural networks have been proposed, most of them cannot work efficiently in a parallel computing environment. Recently, an algorithm for parallel keyed hash function construction based on a chaotic neural network was proposed [13]. However, this scheme has a strict limitation: its secret keys must be nonces. In other words, if a key is used more than once, there is a potential security flaw. In this paper, we analyze the cause of this vulnerability in detail, and then propose corresponding enhancement measures, which remove the limitation on the secret keys. Theoretical analysis and computer simulation indicate that the modified hash function is more secure and practical than the original one. At the same time, it keeps the parallel merit and satisfies the other performance requirements of a hash function, such as good statistical properties, high message and key sensitivity, and strong collision resistance.

10.
In this paper we provide upper and lower bounds on the randomness required by the dealer to set up a secret sharing scheme for infinite classes of access structures. Lower bounds are obtained using entropy arguments. Upper bounds derive from a decomposition construction based on combinatorial designs (in particular, t-(v,k,λ) designs). We prove a general result on the randomness needed to construct a scheme for the cycle C_n; when n is odd our bound is tight. We study the access structures on at most four participants and the connected graphs on five vertices, obtaining exact values of the randomness for all of them. We also analyze the number of random bits required to construct anonymous threshold schemes, giving upper bounds. (Informally, anonymous threshold schemes are schemes in which the secret can be reconstructed without knowledge of which participants hold which shares.)

11.
Lightweight cipher designs try to minimize the implementation complexity of the cipher while maintaining a specified security level. Using only a small number of AND gates lowers implementation costs and enables easier protection against side-channel attacks. In this paper we study the connection between the number of AND gates (multiplicative complexity) and the complexity of algebraic attacks. We model the encryption with multiple right-hand sides (MRHS) equations. The resulting equation system is transformed into a syndrome decoding problem. The complexity of the decoding problem depends on the number of AND gates and on the number of known output bits relative to the number of unknown key bits. This allows us to apply results from coding theory and to explicitly connect the complexity of algebraic cryptanalysis to the multiplicative complexity of the cipher. We can thus provide asymptotic upper bounds on the complexity of algebraic attacks on selected families of ciphers, based on the hardness of the decoding problem.

12.
Tight Bounds on the Information Rate of Secret Sharing Schemes
A secret sharing scheme is a protocol by means of which a dealer distributes a secret s among a set of participants P in such a way that only qualified subsets of P can reconstruct the value of s, whereas any other subset of P, non-qualified to know s, cannot determine anything about the value of the secret. In this paper we provide a general technique for proving upper bounds on the information rate of secret sharing schemes. The information rate is the ratio between the size of the secret and the size of the largest share given to any participant. Most of the recent upper bounds on the information rate obtained in the literature can be seen as corollaries of our result. Moreover, we prove that for any integer d there exists a d-regular graph for which any secret sharing scheme has information rate upper bounded by 2/(d+1). This improves on van Dijk's result and matches the corresponding lower bound proved by Stinson [22].
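In symbols, the information rate referred to above is (a standard formalization, not quoted from the paper):

```latex
% Information rate of a scheme with secret space S and share space S_i
% for participant i in P:
\[
  \rho \;=\; \frac{\log_2 |S|}{\max_{i \in P} \log_2 |S_i|} ,
\]
% so rho <= 1 always, and the result above exhibits, for each d, a
% d-regular graph access structure on which rho <= 2/(d+1).
```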

13.
This work emphasizes an important problem of braid-based cryptography: the random generation of good keys. We present a deterministic, polynomial-time algorithm that reduces the conjugacy search problem in braid groups. The algorithm is based on the decomposition of braids into products of canonical factors and gives a partial factorization of the secret: a divisor and a multiple. The tests we performed on different keys of existing protocols showed that many protocols in their current form are broken, and that the efficiency of our attack depends on the random generator used to create the key. This method therefore gives new criteria for testing weak keys. We also propose a new random key generator which is secure against our attack and against that of Hofheinz and Steinwandt.

14.
The concepts of convex order and comonotonicity have become quite popular in risk theory, essentially since Kaas et al. [Kaas, R., Dhaene, J., Goovaerts, M.J., 2000. Upper and lower bounds for sums of random variables. Insurance: Math. Econ. 27, 151-168] constructed bounds in the convex order sense for a sum S of random variables without imposing any dependence structure on it. Those bounds are especially helpful if the distribution of S cannot be calculated explicitly or is too cumbersome to work with. This is the case for sums of lognormally distributed random variables, which frequently appear in the context of insurance and finance. In this article we quantify, in terms of truncated first moments, the maximal error made when S is approximated by a lower or an upper convex order bound. We make use of geometrical arguments; of the unknown distribution of S, only its variance enters the computation of the error bounds. The results are illustrated by pricing an Asian option. It is shown that under certain circumstances our error bounds outperform other known error bounds, e.g. the bound proposed by Nielsen and Sandmann [Nielsen, J.A., Sandmann, K., 2003. Pricing bounds on Asian options. J. Financ. Quant. Anal. 38, 449-473].
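For context, the convex-order bounds of Kaas et al. referred to above can be stated as follows (standard notation, with U uniform on (0,1) and Λ a conditioning variable):

```latex
% Convex-order bounds for S = X_1 + ... + X_n (Kaas et al., 2000):
\[
  S^{\ell} = \sum_{i=1}^{n} \mathbb{E}[X_i \mid \Lambda]
  \;\le_{\mathrm{cx}}\;
  S
  \;\le_{\mathrm{cx}}\;
  S^{c} = \sum_{i=1}^{n} F_{X_i}^{-1}(U),
\]
% where F_{X_i}^{-1} is the quantile function of X_i.  S^c is the
% comonotonic sum, whose distribution follows from the marginals alone.
```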

15.
Based on an analysis of a chaos block cipher for wireless sensor networks (WSN), we find a fatal flaw in its security: the number of rounds is too small and the calculation precision of the round function is too low. The scheme can be cryptanalyzed using differential cryptanalysis. First, the third round key is recovered by a chosen-plaintext attack exploiting the characteristics of the round function. Then, the second round key is deduced from the relationship between the sub-keys of the second and third rounds. Building on these attacks, the first round key is broken by brute force. Finally, by exploiting the characteristics of the Feistel structure, the fourth round key is also obtained. Since all round keys are thus recovered, any ciphertext can be decrypted. The encryption scheme is consequently proven insecure.
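For reference, the Feistel round structure that enables this round-by-round key peeling (standard notation, not specific to the attacked cipher):

```latex
% One Feistel round with round key K_i and round function F:
\[
  L_i = R_{i-1}, \qquad R_i = L_{i-1} \oplus F(R_{i-1}, K_i).
\]
% Once one round key is known, that round can be inverted and the
% attack repeats against the shortened cipher, which is why recovering
% the round keys one at a time suffices.
```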

16.
This paper provides new combinatorial bounds and characterizations of authentication codes (A-codes) and key predistribution schemes (KPS). We first prove a new lower bound on the number of keys in an A-code without secrecy, which can be thought of as a generalization of the classical Rao bound for orthogonal arrays. We also prove a new lower bound on the number of keys in a general A-code, based on the Petrenjuk, Ray-Chaudhuri and Wilson bound for t-designs. We further present new lower bounds on the size of keys and the amount of users' secret information in KPS, the latter obtained by showing that a certain A-code is hidden inside any KPS.

17.
Perfect Secret Sharing Schemes on Five Participants
A perfect secret sharing scheme is a system for the protection of a secret among a number of participants in such a way that only certain subsets of these participants can reconstruct the secret, and the remaining subsets can obtain no additional information about the secret. The efficiency of a perfect secret sharing scheme can be assessed in terms of its information rates. In this paper we discuss techniques for obtaining bounds on the information rates of perfect secret sharing schemes and illustrate these techniques using the set of monotone access structures on five participants. We give a full listing of the known information rate bounds for all the monotone access structures on five participants.
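For contrast with the access structures studied here, threshold structures are ideal: they admit perfect schemes with information rate 1. Below is a minimal sketch of Shamir's (t, n)-threshold scheme (the standard construction, not one of the paper's five-participant schemes; the prime P and function names are our own illustrative choices):

```python
import random

P = 2**61 - 1  # prime modulus; secret and shares live in GF(P)

def share(secret, t, n):
    # Pick a random polynomial of degree t-1 with constant term = secret;
    # participant x's share is the polynomial evaluated at x.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term; any
    # t shares determine the polynomial, while fewer reveal nothing.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

if __name__ == "__main__":
    pieces = share(42, t=3, n=5)
    assert reconstruct(pieces[:3]) == 42  # any 3 of the 5 shares suffice
```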

18.
We cryptanalyse here two variants of the McEliece cryptosystem based on quasi-cyclic codes. Both aim at reducing the key size by restricting the public and secret generator matrices to quasi-cyclic form. The first variant considers subcodes of a primitive BCH code. The constraint on the public and secret keys forces the choice of very structured permutations. We prove that this variant is not secure by producing many linear equations that the entries of the secret permutation matrix must satisfy, using the fact that the secret code is a subcode of a known BCH code. This attack has been implemented, and in all experiments we performed the solution space of the linear system had dimension one and revealed the permutation matrix. The other variant uses quasi-cyclic low-density parity-check (LDPC) codes. This scheme was devised to be immune against the general attacks on McEliece-type cryptosystems based on LDPC codes by choosing, in the McEliece scheme, more general one-to-one mappings than permutation matrices. We present a structural attack exploiting the quasi-cyclic structure of the code and a weakness in the choice of the linear transformations that hide the generator matrix of the code. This cryptanalysis adopts a polynomial-oriented approach and basically consists in searching for two polynomials of low weight whose product is a public polynomial. Our analysis shows that with high probability a parity-check matrix of a punctured version of the secret code can be recovered with time complexity O(n^3), where n is the length of the considered code. The complete reconstruction of the secret parity-check matrix of the quasi-cyclic LDPC codes requires a search for codewords of low weight, which can be done with about 2^37 operations for the specific parameters proposed.

19.
We investigate in this paper the security of HFE and Multi-HFE schemes as well as their minus and embedding variants. Multi-HFE is a generalization of the well-known HFE schemes: the idea is to use a multivariate quadratic system, instead of a univariate polynomial as in HFE, over an extension field as the private key. According to the authors, this should make the classical direct algebraic (message-recovery) attack proposed by Faugère and Joux on HFE no longer efficient against Multi-HFE. We consider here the hardness of key recovery in Multi-HFE and its variants, but also in HFE (for both odd and even characteristic). We first improve and generalize the basic key-recovery attack proposed by Kipnis and Shamir on HFE. To do so, we express the attack as matrix/vector operations. On the one hand, this improves the basic Kipnis-Shamir (KS) attack on HFE; on the other hand, it allows the attack to be generalized to Multi-HFE. Due to its structure, we prove that a Multi-HFE scheme has many more equivalent keys than a basic HFE scheme. This induces a structural weakness which can be exploited to adapt the KS attack to classical modifiers of multivariate schemes such as minus and embedding. Along the way, we discovered that the KS attack as initially described cannot be applied against HFE in characteristic 2, and we have strongly revised it to make it work in that case. In all cases, the cost of our attacks is related to the complexity of solving MinRank. Thanks to recent complexity results on this problem, we prove that our attack is polynomial in the degree of the extension field for all practical settings used in HFE and Multi-HFE. This makes Multi-HFE less secure than basic HFE for equally sized keys. As a proof of concept, we were able to practically break the most conservative proposed parameters of Multi-HFE in a few days (256-bit security broken in 9 days).

20.
The complexity of a secret sharing scheme is defined as the ratio between the maximum length of the shares and the length of the secret. This paper deals with the open problem of optimizing this parameter for secret sharing schemes with general access structures. Specifically, our objective is to determine the optimal complexity of the access structures with exactly four minimal qualified subsets. Lower bounds on the optimal complexity are obtained by using the known polymatroid technique in combination with linear programming. Upper bounds are derived from decomposition constructions of linear secret sharing schemes. In this way, the exact value of the optimal complexity is determined for several access structures in this family. For the others, we present the best known lower and upper bounds.
