Similar documents
20 similar documents found (query time: 46 ms)
1.
Network coding is an emerging telecommunication technique in which any intermediate node is allowed to recombine incoming data if necessary. This technique helps to increase throughput, but often at the cost of substantial computational overhead due to the packet recombination performed (i.e., coding operations). Hence, it is of practical importance to reduce coding operations while retaining the benefits that network coding brings. In this paper, we propose a novel evolutionary algorithm (EA) to minimize the number of coding operations involved. Unlike state-of-the-art EAs, which all use binary encodings for the problem, our EA is based on a path-oriented encoding. In this new encoding scheme, each chromosome is represented by a union of paths originating from the source and terminating at one of the receivers. Employing path-oriented encoding leads to a search space in which all solutions are feasible, which fundamentally facilitates a more efficient search by EAs. Based on the new encoding, we develop three basic operators: initialization, crossover and mutation. In addition, we design a local search operator to improve the solution quality and hence the performance of our EA. The simulation results demonstrate that our EA significantly outperforms the state-of-the-art algorithms in terms of global exploration and computational time.
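The path-oriented idea can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the toy DAG, the one-path-per-receiver chromosome, and the fitness function (counting nodes forced to combine flows arriving over distinct incoming edges) are all hypothetical simplifications.

```python
import random

# Hypothetical toy DAG: source 's', relays 'a', 'b', 'c', receivers 't1', 't2'.
GRAPH = {
    's': ['a', 'b'],
    'a': ['c', 't1'],
    'b': ['c', 't2'],
    'c': ['t1', 't2'],
}
SOURCE, RECEIVERS = 's', ['t1', 't2']

def random_path(node, target, rng):
    """Random walk to a sink of the DAG; restart if it hits the wrong
    receiver (acceptable for a toy example)."""
    while True:
        path, cur = [node], node
        while cur in GRAPH:
            cur = rng.choice(GRAPH[cur])
            path.append(cur)
        if cur == target:
            return path

def random_chromosome(rng):
    # A chromosome is a union of paths: one source-to-receiver path each.
    return {t: random_path(SOURCE, t, rng) for t in RECEIVERS}

def coding_nodes(chrom):
    """Count nodes that receive flows over more than one distinct incoming
    edge; such nodes must recombine packets, so fewer means cheaper coding."""
    incoming = {}
    for path in chrom.values():
        for u, v in zip(path, path[1:]):
            incoming.setdefault(v, set()).add(u)
    return sum(1 for srcs in incoming.values() if len(srcs) > 1)

def crossover(p1, p2, rng):
    # Path-oriented crossover: the child inherits each receiver's whole path
    # from one parent, so every offspring is feasible by construction.
    return {t: (p1 if rng.random() < 0.5 else p2)[t] for t in RECEIVERS}

rng = random.Random(0)
pop = [random_chromosome(rng) for _ in range(20)]
best = min(pop, key=coding_nodes)
print(coding_nodes(best))
```

Because a chromosome is always a union of valid source-to-receiver paths, mutation and crossover never leave the feasible region, which is the point of the encoding.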

2.
In this paper, we provide a semantic study of first-order predicate logic for situations involving uncertainty. We introduce the concepts of uncertain predicate proposition, uncertain predicate formula, uncertain interpretation and degree of truth in the framework of uncertainty theory. Compared with a classical predicate formula, which takes a truth value in {0, 1}, the degree of truth of an uncertain predicate formula may take any value in the unit interval [0, 1]. We also show that uncertain first-order predicate logic is consistent with classical first-order predicate logic with respect to some laws of the degree of truth.

3.
4.
Zhou Yanwei, Yang Bo, Xia Zhe, Zhang Mingwu, Mu Yi, Designs, Codes and Cryptography (2021) 89(7): 1575-1614

Leakage of private state information (e.g., secret keys) through various leakage attacks (e.g., side-channel attacks, cold-boot attacks, etc.) has become a serious threat to the security of computer systems in practice. Nowadays, it has become a common requirement that cryptographic schemes should withstand leakage attacks. Although some research progress has been made towards designing leakage-resilient cryptographic schemes, some issues remain unsolved. For example, the computational costs of existing generic constructions of leakage-resilient public-key encryption (PKE) schemes are generally very high. One of the main reasons is that the underlying building blocks, e.g., non-interactive zero-knowledge arguments, one-time lossy filters or one-time signatures, are computationally expensive. Moreover, these constructions of PKE with leakage resilience normally require the upper bound of leakage to be fixed. However, in many real-world applications, this requirement cannot provide sufficient protection against various leakage attacks. In order to mitigate these problems, this paper presents a generic method of designing leakage-amplified PKE schemes with leakage resilience and chosen-ciphertext attack (CCA) security. Firstly, we define a new cryptographic primitive, called an identity-based hash proof system with two encapsulated keys (T-IB-HPS). Then, two generic constructions of leakage-resilient PKE schemes are proposed using a T-IB-HPS and a message authentication code (MAC). The CCA security of our proposed constructions can be reduced to the security of the underlying T-IB-HPS and MAC. In the proposed generic method, the leakage parameter has an arbitrary length that can be flexibly adjusted according to the specific leakage requirements. In order to demonstrate the practicability of our generic method, two instantiations of T-IB-HPS are introduced. The first instantiation is proved secure based on the truncated augmented bilinear Diffie–Hellman exponent assumption, and the second is proved secure based on related security assumptions over composite-order bilinear groups.

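The "two encapsulated keys plus MAC" shape of the construction can be illustrated in miniature. The sketch below is emphatically not the T-IB-HPS scheme (which has no stdlib analogue); it only shows the generic pattern behind the CCA argument, with an assumed shared secret standing in for the encapsulation: derive two independent keys, mask with one, authenticate with the other, and reject any tampered ciphertext before decrypting.

```python
import hashlib
import hmac
import secrets

def derive_keys(shared_secret: bytes):
    # Two independent keys from one encapsulation:
    # k1 masks the message, k2 keys the MAC.
    k1 = hashlib.sha256(b'enc' + shared_secret).digest()
    k2 = hashlib.sha256(b'mac' + shared_secret).digest()
    return k1, k2

def encrypt(shared_secret: bytes, msg: bytes):
    k1, k2 = derive_keys(shared_secret)
    pad = hashlib.sha256(k1).digest()   # one-block pad; msg must fit in 32 bytes
    ct = bytes(m ^ p for m, p in zip(msg, pad))
    tag = hmac.new(k2, ct, hashlib.sha256).digest()
    return ct, tag

def decrypt(shared_secret: bytes, ct: bytes, tag: bytes):
    k1, k2 = derive_keys(shared_secret)
    expected = hmac.new(k2, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None                     # CCA-style rejection of tampering
    pad = hashlib.sha256(k1).digest()
    return bytes(c ^ p for c, p in zip(ct, pad))

ss = secrets.token_bytes(32)
ct, tag = encrypt(ss, b'hello')
assert decrypt(ss, ct, tag) == b'hello'
# Flipping one ciphertext bit breaks the MAC check:
assert decrypt(ss, ct[:-1] + bytes([ct[-1] ^ 1]), tag) is None
```

In the actual paper the role of `derive_keys` is played by the two encapsulated keys of the T-IB-HPS, and the security reduction is to that primitive and the MAC rather than to hash functions.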

5.
Formal topologies are today an established topic in the development of constructive mathematics. One of the main tools in formal topology is inductive generation, since it allows one to introduce inductive methods in topology. The problem of inductively generating formal topologies with a cover relation and a unary positivity predicate has been solved in [CSSV]. However, to deal with both open and closed subsets, a binary positivity predicate has to be considered. In this paper we show how to adapt to this framework the method used to inductively generate formal topologies with a unary positivity predicate; the main problem that one has to face in this new setting is that, as a consequence of the lack of a complete formalization, both the cover relation and the positivity predicate can have proper axioms.

6.
In recent years a number of approaches to information modeling have been presented. An information model is then assumed to be expressed in some formalism based on a set of basic concepts and construction rules. Some approaches also include inference rules, but few include consistency criteria for information models. Two different approaches to information modeling have been analyzed within the framework of first-order predicate logic. In particular, their consistency criteria are compared with that of predicate logic. The approaches are completely expressible in predicate logic, and the consistency criteria have a logical counterpart only when a set of implicit assumptions is stated explicitly. This work is supported by the National Swedish Board for Technical Development.

7.
In this study, the impacts of different crossover and encoding schemes on the performance of a genetic algorithm (GA) in finding optimal pump-and-treat (P&T) remediation designs are investigated. For this purpose, binary and Gray encodings of the decision variables are tested. Uniform and two-point crossover schemes are evaluated for two different crossover probabilities. The analysis is performed for two P&T system optimization scenarios. Results show that the uniform crossover operator with Gray encoding outperforms the other alternatives for the complex problem with a higher number of decision variables. On the other hand, when a simpler problem with fewer decision variables is solved, the efficiency of the GA is independent of the encoding and crossover schemes.
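The Gray encoding mentioned above is a standard construction, sketched here for reference. Its property that adjacent integers differ in exactly one bit is what makes it attractive for GAs: a single-bit mutation changes the decoded decision variable only slightly.

```python
def binary_to_gray(n: int) -> int:
    # Reflected binary (Gray) code: adjacent integers differ in one bit.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert by cumulative XOR of the shifted code.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Round trip, and the one-bit-difference property for neighbors:
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count('1') == 1
           for i in range(255))
```

Under plain binary encoding, by contrast, the step from 7 (0111) to 8 (1000) flips four bits, creating the "Hamming cliffs" that Gray coding avoids.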

8.
We prove that every proper subclass of the 321-avoiding permutations that is either defined by only finitely many additional restrictions or is well-quasi-ordered has a rational generating function. To do so we show that any such class is in bijective correspondence with a regular language. The proof makes significant use of formal languages and of a host of encodings, including a new mapping called the panel encoding that maps languages over the infinite alphabet of positive integers avoiding certain subwords to languages over finite alphabets.
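For readers unfamiliar with the base class: a permutation is 321-avoiding when it contains no decreasing subsequence of length three. A small check (not from the paper) makes this concrete and recovers the well-known Catalan count of the class:

```python
from itertools import permutations

def avoids_321(perm):
    """True iff perm has no decreasing subsequence of length 3, i.e.
    no positions i < j < k with perm[i] > perm[j] > perm[k]."""
    n = len(perm)
    max_left = []                      # max of the prefix before each j
    m = float('-inf')
    for x in perm:
        max_left.append(m)
        m = max(m, x)
    min_right = [0] * n                # min of the suffix after each j
    m = float('inf')
    for j in range(n - 1, -1, -1):
        min_right[j] = m
        m = min(m, perm[j])
    # A length-3 decreasing subsequence exists iff some middle element
    # has a larger value before it and a smaller value after it.
    return not any(max_left[j] > perm[j] > min_right[j] for j in range(n))

# The 321-avoiding permutations of length n are counted by the Catalan
# numbers 1, 2, 5, 14, ...
counts = [sum(avoids_321(p) for p in permutations(range(n)))
          for n in range(1, 5)]
print(counts)  # → [1, 2, 5, 14]
```

The paper's result concerns proper subclasses of this class, whose enumeration it shows to be rational via regular languages.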

9.
10.
An elementary algebraic approach to unified quantum information theory is given. The operational meaning of entanglement as specifically quantum encoding is disclosed. General relative entropy as information divergence is introduced, and the three most important types of relative information, namely, the Araki-Umegaki type (A-type), the Belavkin-Staszewski type (B-type), and the thermodynamical type (C-type), are discussed. It is shown that true quantum entanglement-assisted entropy is greater than semiclassical (von Neumann) quantum entropy, and the proper positive quantum conditional entropy is introduced. The general quantum mutual information via entanglement is defined, and the corresponding types of quantum channel capacities as a supremum via the generalized encodings are formulated. The additivity problem for quantum logarithmic capacities for products of arbitrary quantum channels under appropriate constraints on encodings is discussed. It is proved that true quantum capacity, which is achieved on the standard entanglement as an optimal quantum encoding, retains the additivity property of the logarithmic quantum channel entanglement-assisted capacities on the products of quantum input states. This result for quantum logarithmic information of A-type, which was obtained earlier by the author, is extended to any type of quantum information.

11.
We introduce a logical system in which the principles of fuzzy logic are interpreted from the point of view of the constructive approach. The language of predicate formulas without functional symbols and constant symbols is considered. The notion of an identically true predicate formula in the framework of the introduced logic is defined; two variants of this definition are given. Theorems concerning identically true predicate formulas are proved. Some connections between the introduced logic and the constructive (intuitionistic) predicate calculus are established. Bibliography: 40 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 358, 2008, pp. 130–152.

12.
Embedding a partially ordered set into a product of chains is a classical way to encode it. Such encodings have been used in various fields such as object-oriented programming and distributed computing. The embedding associates with each element a sequence of integers which is used to perform comparisons between elements. A critical measure is the space required by the encoding, and several authors have investigated ways to minimize it, which amounts to embedding partially ordered sets into small products of chains. The minimum size of such an encoding has been called the encoding dimension (Habib et al. 1995), and the string dimension (Garg and Skawratananand 2001) for a slightly different definition of embeddings. This paper investigates some new properties of the encoding dimension. We clear up the links with the string dimension and we answer the computational complexity questions raised in Garg and Skawratananand (2001) and Habib et al. (1995): both of these parameters are NP-hard to compute.
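A tiny example (ours, not the paper's) shows what such an encoding looks like: the four-element "diamond" poset embeds into a product of two chains, so comparisons reduce to componentwise integer comparison of the code vectors.

```python
# Hypothetical diamond poset: bot < a, b < top, with a and b incomparable.
# leq lists all pairs (x, y) with x <= y in the poset (including reflexive).
leq = {('bot', 'bot'), ('a', 'a'), ('b', 'b'), ('top', 'top'),
       ('bot', 'a'), ('bot', 'b'), ('bot', 'top'),
       ('a', 'top'), ('b', 'top')}

# Embedding into a product of two 2-element chains: each element becomes
# a pair of integers, one coordinate per chain.
code = {'bot': (0, 0), 'a': (1, 0), 'b': (0, 1), 'top': (1, 1)}

def vec_leq(u, v):
    # Order in a product of chains is componentwise comparison.
    return all(x <= y for x, y in zip(u, v))

# The encoding is faithful: x <= y in the poset iff code[x] <= code[y]
# componentwise. No single chain suffices (a and b are incomparable),
# so this poset's encoding dimension is 2.
assert all(((x, y) in leq) == vec_leq(code[x], code[y])
           for x in code for y in code)
```

The hardness result in the paper says that finding the minimum number of chains (here, 2) is NP-hard in general.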

13.
The capability of logical systems to express their own satisfaction relation is a key issue in mathematical logic. Our notion of self definability is based on encodings of pairs of the type (structure, formula) into single structures in which the two components can be clearly distinguished. Hence, the ambiguity between structures and formulas, which forms the basis for many classical results, is avoided. We restrict ourselves to countable, regular logics over finite vocabularies. Our main theorem states that self definability, in this framework, is equivalent to the existence of complete problems under quantifier-free reductions. Whereas this holds true for arbitrary structures, we focus on examples from finite model theory. Here, the theorem sheds new light on nesting hierarchies for certain generalized quantifiers: they can be interpreted as failures of self definability in the corresponding extensions of first-order logic. As a further application we study the possibility of the existence of recursive logics for PTIME. We restate a result of Dawar concluding from recursive logics to complete problems. We show that for the model-checking Turing machines associated with a recursive logic, it makes no difference whether or not they may use built-in clocks. Received: 7 February 1997

14.
Sampling information using timing is an approach that has received renewed attention in sampling theory. The question is how to map amplitude information into the timing domain. One such encoder, called a time encoding machine, was introduced by Lazar and Tóth (2004 [23]) for the special case of band-limited functions. In this paper, we extend their result to a general framework including shift-invariant subspaces. We prove that time encoding machines may be considered as non-uniform sampling devices, where time locations are unknown a priori. Using this fact, we show that perfect representation and reconstruction of a signal with a time encoding machine is possible whenever this device satisfies a certain density property. We prove that this method is robust under timing quantization, and therefore can lead to the design of simple and energy-efficient sampling devices.
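To make the amplitude-to-timing idea concrete, here is a crude discretized integrate-and-fire encoder in the spirit of a time encoding machine. It is an illustrative sketch only; the parameters b, kappa, delta and the test signal are arbitrary choices, with b larger than max|x| so the integrator always fires.

```python
import math

def time_encode(x, b=2.0, kappa=1.0, delta=0.7, dt=1e-4, T=2.0):
    """Integrate (x(t) + b) / kappa over time; whenever the running
    integral reaches the threshold delta, record the time and subtract
    delta (reset). The output is a strictly increasing list of firing
    times that encodes the amplitude of x."""
    times, integral, t = [], 0.0, 0.0
    while t < T:
        integral += (x(t) + b) / kappa * dt
        if integral >= delta:
            times.append(t)
            integral -= delta
        t += dt
    return times

x = lambda t: math.sin(2 * math.pi * t)   # a toy band-limited signal
spikes = time_encode(x)

# Each inter-spike interval carries one amplitude measurement: the
# integral of x between consecutive firing times equals
# kappa * delta - b * (t_{k+1} - t_k), so the spike times act as a
# non-uniform sampling of the signal.
print(len(spikes), spikes[:3])
```

The paper's density property corresponds to these firing times being frequent enough (b and delta chosen suitably) for the non-uniform samples to determine the signal in its subspace.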

15.
This paper studies the solution of graph coloring problems by encoding them into propositional satisfiability problems. The study covers three kinds of satisfiability solvers, based on postorder reasoning (e.g., grasp, chaff), preorder reasoning (e.g., 2cl, 2clsEq), and back-chaining (modoc). The study evaluates three encodings, one of them believed to be new. Some new symmetry-breaking methods, specific to coloring, are used to reduce the redundancy of solutions. A by-product of this research is an implemented lower-bound technique that has yielded improved lower bounds for the chromatic numbers of the long-standing unsolved random graphs known as DSJC125.5 and DSJC125.9. Independent-set analysis shows that the chromatic numbers of DSJC125.5 and DSJC125.9 are at least 18 and 40, respectively, but satisfiability encoding was able to demonstrate only that the chromatic numbers are at least 13 and 38, respectively, within the available time and space.
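The standard direct encoding of k-coloring into CNF (one of the textbook encodings; not necessarily the paper's new one) can be sketched in a few lines: one Boolean variable per (vertex, color) pair, with at-least-one and at-most-one clauses per vertex and a conflict clause per edge and color.

```python
from itertools import combinations

def coloring_to_cnf(n_vertices, edges, k):
    """Direct encoding: variable var(i, c) is true iff vertex i gets
    color c. Clauses use DIMACS-style signed integer literals."""
    var = lambda i, c: i * k + c + 1
    cnf = []
    for i in range(n_vertices):
        cnf.append([var(i, c) for c in range(k)])        # at least one color
        for c1, c2 in combinations(range(k), 2):         # at most one color
            cnf.append([-var(i, c1), -var(i, c2)])
    for (i, j) in edges:                                 # endpoints differ
        for c in range(k):
            cnf.append([-var(i, c), -var(j, c)])
    return cnf

# Triangle with 3 colors: 3 at-least-one + 9 at-most-one + 9 edge clauses.
cnf = coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)], 3)
print(len(cnf))  # → 21
```

The symmetry-breaking methods in the paper address the fact that any satisfying assignment remains satisfying under a permutation of the k colors, which this bare encoding does nothing to prevent.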

16.
Regular-SAT is a constraint programming language between CSP and SAT that—by combining many of the good properties of each paradigm—offers a good compromise between performance and expressive power. Its similarity to SAT allows us to define a uniform encoding formalism, to extend existing SAT algorithms to Regular-SAT without incurring excessive overhead in terms of computational cost, and to identify phase transition phenomena in randomly generated instances. On the other hand, Regular-SAT inherits from CSP more compact and natural encodings that better preserve the structure of the original problem. Our experimental results—using a range of benchmark problems—provide evidence that Regular-SAT offers practical computational advantages for solving combinatorial problems.

17.
Recently, Chen et al. proposed a framework for authenticated key exchange (AKE) protocols (referred to as the CMYSG scheme) in Designs, Codes and Cryptography (available at http://link.springer.com/article/10.1007/s10623-016-0295-3). It is claimed that the proposed AKE protocol is secure in a new leakage-resilient eCK model w.r.t. auxiliary inputs (AI-LR-eCK). The main tool used for the generic construction is the smooth projective hash function (SPHF). In this note, we revisit the CMYSG scheme and point out a subtle flaw in the original security proof. Precisely, we show that the AI-LR-eCK security of the proposed construction cannot be successfully reduced to a pseudo-random SPHF, and thus the CMYSG scheme is not secure as claimed. To restore the security proof, we replace the underlying typical SPHF with a 2-smooth SPHF, and show that such a replacement, combined with a π-PRF, suffices to overcome the subtle flaw.

18.
Can stochastic search algorithms outperform existing deterministic heuristics for the NP-hard problem Number Partitioning if given a sufficient, but practically realizable, amount of time? In a thorough empirical investigation using a straightforward implementation of one such algorithm, simulated annealing, Johnson et al. (Ref. 1) concluded tentatively that the answer is negative. In this paper, we show that the answer can be positive if attention is devoted to the issue of problem representation (encoding). We present results from empirical tests of several encodings of Number Partitioning with problem instances consisting of multiple-precision integers drawn from a uniform probability distribution. With these instances and with an appropriate choice of representation, stochastic and deterministic searches can—routinely and in a practical amount of time—find solutions several orders of magnitude better than those constructed by the best heuristic known (Ref. 2), which does not employ searching.
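The deterministic baseline in this line of work is the Karmarkar-Karp largest differencing heuristic; a minimal version (our sketch, not the paper's code) fits in a few lines. It repeatedly commits the two largest numbers to opposite sides of the partition by replacing them with their difference.

```python
import heapq

def karmarkar_karp(nums):
    """Largest differencing method for two-way Number Partitioning:
    repeatedly replace the two largest numbers with their difference.
    The final value is the discrepancy of the implied partition."""
    heap = [-n for n in nums]        # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)     # largest
        b = -heapq.heappop(heap)     # second largest
        heapq.heappush(heap, -(a - b))
    return -heap[0]

print(karmarkar_karp([8, 7, 6, 5, 4]))  # → 2
```

Note the heuristic is not optimal: for [8, 7, 6, 5, 4] it returns a discrepancy of 2 although the perfect split {8, 7} vs {6, 5, 4} has discrepancy 0, which is exactly the kind of gap the searched encodings in the paper exploit.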

19.
Two tree encoding techniques for arrays are compared and contrasted. It is shown that pyramids and their generalization for d-dimensional arrays are much superior to binary tree encodings for three measures: access time, storage, and average proximity. Work supported by a National Research Council of Canada Grant No. A-7700.

20.
Hidden vector encryption (HVE) is a particular kind of predicate encryption; it is an important cryptographic primitive with many applications, and it provides conjunctive equality, subset, and comparison queries on encrypted data. In predicate encryption, a ciphertext is associated with attributes, and a token corresponds to a predicate. The token that corresponds to a predicate f can decrypt the ciphertext associated with attributes x if and only if f(x) = 1. Several HVE schemes have been proposed in which the ciphertext size, the token size, and the decryption cost are proportional to the number of attributes in the ciphertext. In this paper, we construct efficient HVE schemes in which the token consists of just four group elements and decryption requires only four bilinear map computations, independent of the number of attributes in the ciphertext. We first construct an HVE scheme in composite-order bilinear groups and prove its selective security under well-known assumptions. Next, we convert it to use prime-order asymmetric bilinear groups, where there are no efficiently computable isomorphisms between the two groups.
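The predicate family underlying HVE is easy to state in the clear: a query vector with wildcards matches an attribute vector iff every non-wildcard position agrees. The sketch below shows only this plaintext predicate (attribute names are made up); the substance of HVE is evaluating it obliviously on encrypted attributes via the token, which no short snippet can reproduce.

```python
WILDCARD = '*'

def hve_predicate(sigma, x):
    """f_sigma(x) = 1 iff every non-wildcard position of the query
    vector sigma equals the corresponding attribute of x. Wildcards
    are what make conjunctive equality and subset queries expressible."""
    return all(s == WILDCARD or s == xi for s, xi in zip(sigma, x))

# Conjunctive equality query: "field1 = crypto AND field2 = 2021",
# with the third attribute unconstrained.
attrs = ('crypto', '2021', 'alice')
assert hve_predicate(('crypto', '2021', WILDCARD), attrs)
assert not hve_predicate(('crypto', '2020', WILDCARD), attrs)
```

In the schemes of this paper, a token for some sigma decrypts a ciphertext for attributes x exactly when this predicate holds, yet the token itself stays a constant four group elements regardless of the length of sigma.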


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号