Similar Literature
A total of 20 similar articles were retrieved.
1.
Caching is a promising approach to reducing the heavy traffic load and improving the latency experienced by users in the Internet of Things (IoT). In this paper, by exploiting edge cache resources and communication opportunities in device-to-device (D2D) networks and broadcast networks, two novel coded caching schemes are proposed that greatly reduce transmission latency in the centralized and decentralized caching settings, respectively. In addition to the multicast gain, both schemes obtain an additional cooperation gain offered by user cooperation and an additional parallel gain offered by parallel transmission among the server and users. With a newly established lower bound on the transmission delay, we prove that the centralized coded caching scheme is order-optimal, i.e., it achieves a constant multiplicative gap to the minimum transmission delay. The decentralized coded caching scheme is also order-optimal if each user's cache size is larger than a threshold that approaches zero as the total number of users tends to infinity. Moreover, theoretical analysis shows that, to reduce the transmission delay, the number of users sending signals simultaneously should be chosen according to the users' cache size; always letting more users send information in parallel can increase the transmission delay.

2.
The empirical entropy is a key statistical measure of data frequency vectors, enabling one to estimate how diverse the data are. From the computational point of view, it is important to quickly compute, approximate, or bound the entropy. In a distributed system, the representative (“global”) frequency vector is the average of the “local” frequency vectors, each residing in a distinct node. Typically, the trivial solution of aggregating the local vectors and computing their average incurs a huge communication overhead. Hence, the challenge is to approximate, or bound, the entropy of the global vector, while reducing communication overhead. In this paper, we develop algorithms which achieve this goal.
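
A minimal sketch of this setting (Python with NumPy; the data and variable names are illustrative, not from the paper): the global frequency vector is the average of the local vectors, and, since Shannon entropy is concave in the distribution, the average of the local entropies already gives a communication-free lower bound on the global entropy.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a frequency/probability vector."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    nz = p[p > 0]                        # ignore zero entries (0 log 0 = 0)
    return -np.sum(nz * np.log2(nz))

# Local frequency vectors held by three distinct nodes (illustrative data).
local = np.array([[4, 1, 0, 3],
                  [2, 2, 2, 2],
                  [0, 5, 1, 2]], dtype=float)
local = local / local.sum(axis=1, keepdims=True)   # normalize each node

global_vec = local.mean(axis=0)          # the "global" vector is the average

exact = entropy(global_vec)              # requires shipping all local vectors
lower_bound = np.mean([entropy(v) for v in local])  # concavity of entropy

print(f"exact global entropy    : {exact:.4f} bits")
print(f"avg-of-local lower bound: {lower_bound:.4f} bits")
assert lower_bound <= exact + 1e-12
```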

3.
Random Boolean Networks (RBNs for short) are strongly simplified models of gene regulatory networks (GRNs), which have also been widely studied as abstract models of complex systems and have been used to simulate different phenomena. We define the “common sea” (CS) as the set of nodes that take the same value in all the attractors of a given network realization, and the “specific part” (SP) as the set of all the other nodes, and we study their properties in different ensembles, generated with different parameter values. Both the CS and the SP can be composed of one or more weakly connected components, which are emergent intermediate-level structures. We show that the study of these sets provides very important information about the behavior of the model. The distribution of distances between attractors is also examined. Moreover, we show how the notion of a “common sea” of genes can be used to analyze data from single-cell experiments.
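
A minimal sketch (plain Python; the network size, connectivity, and seed are illustrative, not the ensembles studied in the paper) of how the common sea can be extracted for a small RBN: enumerate all states, collect every attractor under synchronous update, and keep the nodes whose value is identical across all attractor states.

```python
import itertools, random

N, K = 8, 2                      # small network: 8 nodes, 2 inputs per node
random.seed(0)
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1)
           for bits in itertools.product((0, 1), repeat=K)} for _ in range(N)]

def step(state):
    """Synchronous update of all nodes."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

# Find all attractors by following the trajectory from every initial state.
attractor_states = set()
for init in itertools.product((0, 1), repeat=N):
    seen, s = {}, init
    while s not in seen:
        seen[s] = len(seen)
        s = step(s)
    cycle_start = seen[s]                       # first repeated state closes a cycle
    cycle = [x for x, t in seen.items() if t >= cycle_start]
    attractor_states.update(cycle)

# "Common sea": nodes frozen to the same value in every attractor state.
common_sea = [i for i in range(N)
              if len({s[i] for s in attractor_states}) == 1]
specific_part = [i for i in range(N) if i not in common_sea]
print("common sea nodes:", common_sea)
print("specific part   :", specific_part)
```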

4.
Extraction of subsets of highly connected nodes (“communities” or modules) is a standard step in the analysis of complex social and biological networks. We here consider the problem of finding a relatively small set of nodes in two labeled weighted graphs that is highly connected in both. While many scoring functions and algorithms tackle the problem, the typically high computational cost of permutation testing required to establish the p-value for the observed pattern presents a major practical obstacle. To address this problem, we here extend the recently proposed CTD (“Connect the Dots”) approach to establish information-theoretic upper bounds on the p-values and lower bounds on the size and connectedness of communities that are detectable. This is an innovation on the applicability of CTD, broadening its use to pairs of graphs.

5.
We present a novel algorithm for dynamic routing with dedicated path protection which, as the presented simulation results suggest, can be both efficient and exact. We present the algorithm in the setting of optical networks, but it should be applicable to other networks where services have to be protected and where the network resources are finite and discrete, e.g., wireless radio or networks capable of advance resource reservation. To the best of our knowledge, we are the first to propose an algorithm for this long-standing fundamental problem that is efficient, in that it can solve large problems, and exact, in that its results are optimal, as corroborated by simulations. We offer a worst-case analysis to argue that the search space is polynomially upper bounded. Network operations, management, and control require efficient and exact algorithms, especially now, when greater emphasis is placed on network performance, reliability, softwarization, agility, and return on investment. The proposed algorithm uses our generic Dijkstra algorithm on a search graph generated “on-the-fly” from the input graph. We corroborated the optimality of the results of the proposed algorithm with brute-force enumeration for networks of up to 15 nodes. We present extensive simulation results of dedicated-path protection with signal modulation constraints for elastic optical networks of 25, 50, and 100 nodes, and with 160, 320, and 640 spectrum units. We also compare the bandwidth blocking probability with the commonly used edge-exclusion algorithm. Our study comprised 48,600 simulation runs with about 41 million searches.
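
The following is not the proposed generic-Dijkstra algorithm, but a minimal sketch (Python with networkx; the toy graph is illustrative) of the kind of brute-force enumeration mentioned above for corroborating optimality: enumerate all simple paths and pick the cheapest edge-disjoint working/protection pair.

```python
import itertools
import networkx as nx

def cheapest_disjoint_pair(G, s, t, weight="weight"):
    """Brute force: cheapest pair of edge-disjoint s-t paths (working + protection)."""
    def cost(path):
        return sum(G[u][v][weight] for u, v in zip(path, path[1:]))
    def edges(path):
        return {frozenset(e) for e in zip(path, path[1:])}

    paths = list(nx.all_simple_paths(G, s, t))
    best = None
    for p, q in itertools.combinations(paths, 2):
        if edges(p) & edges(q):
            continue                      # the two paths must not share any edge
        total = cost(p) + cost(q)
        if best is None or total < best[0]:
            best = (total, p, q)
    return best

# Tiny example network (edge weights are illustrative).
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "d", 1), ("a", "c", 2),
                           ("c", "d", 2), ("b", "c", 1)])
print(cheapest_disjoint_pair(G, "a", "d"))
```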

6.
The properties of decays that take place during jet formation cannot be easily deduced from the final distribution of particles in a detector. In this work, we first simulate a system of particles with well-defined masses, decay channels, and decay probabilities. This presents the “true system” for which we want to reproduce the decay probability distributions. Assuming we only have the data that this system produces in the detector, we decided to employ an iterative method which uses a neural network as a classifier between events produced in the detector by the “true system” and some arbitrary “test system”. In the end, we compare the distributions obtained with the iterative method to the “true” distributions.

7.
In 1976 we reported our first autopsied case of diffuse Lewy body disease (DLBD), a term we proposed in 1984. We also proposed the term “Lewy body disease” (LBD) in 1980. Subsequently, we classified LBD into three types according to the distribution pattern of Lewy bodies: a brain stem type, a transitional type and a diffuse type. Later, we added the cerebral type. As we have proposed since 1980, LBD has recently been used as a generic term to include Parkinson’s disease (PD), Parkinson’s disease with dementia (PDD) and dementia with Lewy bodies (DLB), which was proposed in 1996 on the basis of our reports of DLBD. DLB is now known to be the second most frequent dementia following Alzheimer’s disease (AD). In this paper we introduce our studies of DLBD and LBD.

8.
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.

9.
To address the inconvenience of communication between devices in automatic grinding-process control, a mill-load automatic control system based on OPC technology is proposed. The OPC Data Access specification is analyzed; an OPC client is written in VB, with WinCC acting as the OPC server, so that data can be exchanged between WinCC and the VB program and the mill load can be controlled automatically in real time.

10.
In recent years, the task of translating from one language to another has attracted wide attention from researchers due to numerous practical uses, ranging from the translation of various texts and speeches, including so-called “machine” translation, to the dubbing of films and numerous other video materials. To study this problem, we propose to use an information-theoretic method for assessing the quality of translations. We base our approach on the classification of sources of text variability proposed by A.N. Kolmogorov: information content, form, and the unconscious author’s style. Clearly, the unconscious “author’s” style is influenced by the translator, so researchers need special methods to determine how accurately the author’s style is conveyed, since this, in a sense, determines the quality of the translation. In this paper, we propose a method that allows us to estimate the quality of translations produced by different translators. The method is used to study translations of classical English-language works into Russian and, conversely, of Russian classics into English. We also successfully used this method to determine the attribution of literary texts.

11.
Integrated information has been recently suggested as a possible measure to identify a necessary condition for a system to display conscious features. Recently, we have shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical “whole minus sum” version of integrated information in comparison to the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature. This motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures are found to bear a lot of similarity—they have mutual asymptotic convergence as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims at creating a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information.
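
A minimal sketch (NumPy; the joint distribution is a toy example, not the spiking–bursting model analyzed in the paper) of the empirical whole-minus-sum quantity for a bipartite system: the mutual information between the whole past and the whole present minus the sum of the part-wise mutual informations, a quantity that, as noted above, can also become negative.

```python
import numpy as np

def mutual_info(pxy):
    """Mutual information (bits) from a joint distribution table over (x, y)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy joint distribution p(x1, x2, y1, y2) over a binary past (x1, x2)
# and a binary present (y1, y2); XOR-like coupling plus noise (illustrative).
eps = 0.1
p = np.zeros((2, 2, 2, 2))
for x1, x2, y1, y2 in np.ndindex(2, 2, 2, 2):
    py1 = 1 - eps if y1 == (x1 ^ x2) else eps   # y1 tracks the XOR of the past
    py2 = 1 - eps if y2 == x2 else eps          # y2 copies x2
    p[x1, x2, y1, y2] = 0.25 * py1 * py2        # uniform past
p /= p.sum()

# Whole: MI between (x1, x2) and (y1, y2).
whole = mutual_info(p.reshape(4, 4))
# Parts: MI(x1; y1) and MI(x2; y2) from the marginals.
part1 = mutual_info(p.sum(axis=(1, 3)))         # joint of (x1, y1)
part2 = mutual_info(p.sum(axis=(0, 2)))         # joint of (x2, y2)

phi_wms = whole - (part1 + part2)
print(f"whole = {whole:.3f}, parts = {part1 + part2:.3f}, "
      f"whole-minus-sum = {phi_wms:.3f} bits")
```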

12.
5G is a promising technology to cope with the increasing demand for higher data rates and quality of service. In this paper, two techniques are proposed for a multiple input multiple output (MIMO) self-heterodyne OFDM system to enhance the data rate and minimize the bit error rate (BER). In both techniques, a Band Selection (BS) approach is used, once with Space Time Block Coding (STBC) in the first proposed technique (BS-STBC), and once with Frequency Space Time Block Coding (FSTBC) in the second proposed technique (BS-FSTBC). The use of BS in the proposed techniques helps to choose the sub-band with the best subchannel gains for sending the information and, consequently, to minimize the BER. Moreover, the use of FSTBC instead of STBC helps to use the spectrum more efficiently and hence increases the data rate. The simulation results show that the proposed BS-STBC and BS-FSTBC techniques, for the MIMO self-heterodyne OFDM system, provide a great enhancement in BER performance when compared to the conventional techniques. Moreover, the simulation results show that BS-FSTBC outperforms BS-STBC in terms of BER performance.
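
The following is not the paper's BS-STBC/BS-FSTBC schemes, but a minimal sketch (NumPy; a 2×1 link with QPSK, all parameters illustrative) of the two ingredients they combine: selecting the sub-band with the best channel gains and applying Alamouti space-time block coding over it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subbands, n_pairs, snr_db = 8, 1000, 10
noise_var = 10 ** (-snr_db / 10)

# Rayleigh channel gains from the two transmit antennas for each sub-band.
h = (rng.standard_normal((n_subbands, 2)) +
     1j * rng.standard_normal((n_subbands, 2))) / np.sqrt(2)

# Band selection: pick the sub-band with the largest combined channel gain.
best = int(np.argmax(np.sum(np.abs(h) ** 2, axis=1)))
h1, h2 = h[best]

# QPSK symbols, transmitted in Alamouti pairs over the selected sub-band.
bits = rng.integers(0, 2, size=(n_pairs, 4))
sym = ((1 - 2 * bits[:, 0::2]) + 1j * (1 - 2 * bits[:, 1::2])) / np.sqrt(2)
s1, s2 = sym[:, 0], sym[:, 1]

noise = np.sqrt(noise_var / 2) * (rng.standard_normal((n_pairs, 2)) +
                                  1j * rng.standard_normal((n_pairs, 2)))
r1 = h1 * s1 + h2 * s2 + noise[:, 0]                      # slot 1: (s1, s2)
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[:, 1]   # slot 2: (-s2*, s1*)

# Alamouti combining yields two interference-free symbol estimates.
s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)

est = np.stack([s1_hat, s2_hat], axis=1)
bits_hat = np.stack([np.real(est) < 0, np.imag(est) < 0], axis=2)\
             .reshape(n_pairs, 4).astype(int)
ber = np.mean(bits_hat != bits)
print(f"selected sub-band {best}, empirical BER = {ber:.4f}")
```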

13.
We analyze the price return distributions of currency exchange rates, cryptocurrencies, and contracts for differences (CFDs) representing stock indices, stock shares, and commodities. Based on recent data from the years 2017–2020, we model tails of the return distributions at different time scales by using power-law, stretched exponential, and q-Gaussian functions. We focus on the fitted function parameters and how they change over the years by comparing our results with those from earlier studies and find that, on the time horizons of up to a few minutes, the so-called “inverse-cubic power-law” still constitutes an appropriate global reference. However, we no longer observe the hypothesized universal constant acceleration of the market time flow that was manifested before in an ever faster convergence of empirical return distributions towards the normal distribution. Our results do not exclude such a scenario but, rather, suggest that some other short-term processes related to a current market situation alter market dynamics and may mask this scenario. Real market dynamics is associated with a continuous alternation of different regimes with different statistical properties. An example is the COVID-19 pandemic outburst, which had an enormous yet short-time impact on financial markets. We also point out that two factors—speed of the market time flow and the asset cross-correlation magnitude—while related (the larger the speed, the larger the cross-correlations on a given time scale), act in opposite directions with regard to the return distribution tails, which can affect the expected distribution convergence to the normal distribution.
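
A minimal sketch (NumPy; synthetic Student-t data in place of the market data used in the paper) of one standard way to quantify a return-distribution tail: the Hill estimator of the tail exponent applied to the largest absolute normalized returns. An exponent near 3 corresponds to the inverse-cubic power law mentioned above.

```python
import numpy as np

def hill_exponent(returns, tail_fraction=0.02):
    """Hill estimator of the power-law tail exponent alpha, P(|r| > x) ~ x^(-alpha)."""
    x = np.sort(np.abs(returns))[::-1]          # absolute returns, descending
    k = max(int(tail_fraction * len(x)), 10)    # number of order statistics in the tail
    tail, threshold = x[:k], x[k]
    return k / np.sum(np.log(tail / threshold))

# Synthetic heavy-tailed "returns": Student-t with 3 degrees of freedom,
# whose tail exponent is 3 (the inverse-cubic benchmark).
rng = np.random.default_rng(1)
r = rng.standard_t(df=3, size=1_000_000)
r = (r - r.mean()) / r.std()                    # normalize, as is done for returns

print(f"estimated tail exponent: {hill_exponent(r):.2f}")  # value near 3 expected
```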

14.
We consider information-theoretic bounds on the expected generalization error for statistical learning problems in a network setting. In this setting, there are K nodes, each with its own independent dataset, and the models from the K nodes have to be aggregated into a final centralized model. We consider both simple averaging of the models as well as more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that exhibit an improved 1/K dependence on the number of nodes. These “per node” bounds are in terms of the mutual information between the training dataset and the trained weights at each node and are therefore useful in describing the generalization properties inherent to having communication or privacy constraints at each node.
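
For reference (this is the classical single-dataset bound that results of this type build on, not the paper's exact statements): for a σ-sub-Gaussian loss and an n-sample training set S producing weights W, the expected generalization error obeys the bound below; the per-node bounds described above replace I(S; W) with the per-node quantities I(S_k; W_k) and improve with the number of nodes K.

```latex
% Classical information-theoretic generalization bound (Xu & Raginsky):
% sigma-sub-Gaussian loss, n training samples, weights W trained on S.
\[
  \bigl|\,\mathbb{E}\!\left[\mathrm{gen}(S, W)\right]\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^{2}\, I(S; W)}{n}} .
\]
```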

15.
Information bottleneck (IB) and privacy funnel (PF) are two closely related optimization problems which have found applications in machine learning, design of privacy algorithms, capacity problems (e.g., Mrs. Gerber’s Lemma), and strong data processing inequalities, among others. In this work, we first investigate the functional properties of IB and PF through a unified theoretical framework. We then connect them to three information-theoretic coding problems, namely hypothesis testing against independence, noisy source coding, and dependence dilution. Leveraging these connections, we prove a new cardinality bound on the auxiliary variable in IB, making its computation more tractable for discrete random variables. In the second part, we introduce a general family of optimization problems, termed “bottleneck problems”, by replacing mutual information in IB and PF with other notions of mutual information, namely f-information and Arimoto’s mutual information. We then argue that, unlike IB and PF, these problems lead to easily interpretable guarantees in a variety of inference tasks with statistical constraints on accuracy and privacy. While the underlying optimization problems are non-convex, we develop a technique to evaluate bottleneck problems in closed form by equivalently expressing them in terms of lower convex or upper concave envelope of certain functions. By applying this technique to a binary case, we derive closed form expressions for several bottleneck problems.
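
For reference, one common way to write the two problems (notation is illustrative; the bottleneck problems mentioned above further replace Shannon mutual information with f-information or Arimoto mutual information): in IB the representation T should retain as much information about the relevant variable Y as possible under a compression budget on I(X; T), while in PF the released T should leak as little as possible about the private Y while guaranteeing a minimum utility I(X; T). Both enforce the Markov chain Y - X - T.

```latex
% Information bottleneck (IB) and privacy funnel (PF) in one common
% formulation; T is the auxiliary (compressed / released) variable and the
% Markov chain Y -> X -> T is enforced in both problems.
\[
  \mathrm{IB}(R) \;=\; \max_{P_{T \mid X}\,:\; I(X;T) \,\le\, R} I(Y;T),
  \qquad
  \mathrm{PF}(R) \;=\; \min_{P_{T \mid X}\,:\; I(X;T) \,\ge\, R} I(Y;T).
\]
```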

16.
The paper addresses the problem of assessing complex socio-economic phenomena using questionnaire surveys. The data are represented on an ordinal scale; the object assessments may contain positive answers, negative answers, missing answers, and “difficult to say” or “no opinion” answers. The general framework of the Intuitionistic Fuzzy Synthetic Measure (IFSM), based on distances to the pattern object (ideal solution), is used to analyze the survey data. First, Euclidean and Hamming distances are applied in the procedure. Second, two pattern object constructions are proposed: one based on maximum values from the survey data, and the second on maximum intuitionistic values. Third, a method for criteria comparison with the Intuitionistic Fuzzy Synthetic Measure is presented. Finally, a case study solving the problem of rank-ordering cities in terms of satisfaction with local public administration, obtained using different variants of the proposed method, is discussed. Additionally, the results of a comparative analysis using the Intuitionistic Fuzzy Synthetic Measure and the Intuitionistic Fuzzy TOPSIS (IFT) framework are presented.

17.
Maxwell’s demon is an entity in a 150-year-old thought experiment that paradoxically appears to violate the second law of thermodynamics by reducing entropy without doing work. It has increasingly practical implications as advances in nanomachinery produce devices that push the thermodynamic limits imposed by the second law. A well-known explanation claiming that information erasure restores second law compliance fails to resolve the paradox because it assumes the second law a priori, and does not predict irreversibility. Instead, a purely mechanical resolution that does not require information theory is presented. The transport fluxes of mass, momentum, and energy involved in the demon’s operation are analyzed and show that they imply “hidden” external work and dissipation. Computing the dissipation leads to a new lower bound on entropy production by the demon. It is strictly positive in all nontrivial cases, providing a more stringent limit than the second law and implying intrinsic thermodynamic irreversibility. The thermodynamic irreversibility is linked with mechanical irreversibility resulting from the spatial asymmetry of the demon’s speed selection criteria, indicating one mechanism by which macroscopic irreversibility may emerge from microscopic dynamics.

18.
We present an unsupervised method to detect anomalous time series among a collection of time series. To do so, we extend traditional Kernel Density Estimation for estimating probability distributions in Euclidean space to Hilbert spaces. The estimated probability densities we derive can be obtained formally through treating each series as a point in a Hilbert space, placing a kernel at those points, and summing the kernels (a “point approach”), or through using Kernel Density Estimation to approximate the distributions of Fourier mode coefficients to infer a probability density (a “Fourier approach”). We refer to these approaches as Functional Kernel Density Estimation for Anomaly Detection as they both yield functionals that can score a time series for how anomalous it is. Both methods naturally handle missing data and apply to a variety of settings, performing well when compared with an outlyingness score derived from a boxplot method for functional data, with a Principal Component Analysis approach for functional data, and with the Functional Isolation Forest method. We illustrate the use of the proposed methods with aviation safety report data from the International Air Transport Association (IATA).
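
A minimal sketch (NumPy/SciPy; data and parameters are illustrative) of the Fourier approach described above: represent each series by a few Fourier-mode coefficient magnitudes, fit a kernel density estimate over those coefficients, and score each series by its negative log estimated density, so that low-density series are flagged as anomalous.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n_series, length, n_modes = 200, 128, 4

# Mostly smooth sinusoidal series plus a few anomalous noisy ones.
t = np.linspace(0, 1, length)
X = np.sin(2 * np.pi * (2 * t)[None, :] + rng.normal(0, 0.3, (n_series, 1)))
X += rng.normal(0, 0.1, X.shape)
X[:5] = rng.normal(0, 1.0, (5, length))          # inject 5 anomalous series

# "Fourier approach": keep the first few Fourier-mode magnitudes as features.
coeffs = np.abs(np.fft.rfft(X, axis=1))[:, 1:1 + n_modes]

# Kernel density estimate over the coefficient vectors; score = -log density.
kde = gaussian_kde(coeffs.T)                     # gaussian_kde expects (dims, samples)
scores = -kde.logpdf(coeffs.T)

flagged = np.argsort(scores)[-5:]                # the 5 most anomalous series
print("flagged as anomalous:", sorted(flagged.tolist()))
```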

19.
In recent studies of generative adversarial networks (GANs), researchers have attempted to combine adversarial perturbation with data hiding in order to protect the privacy and authenticity of the host image simultaneously. However, most of the studied approaches can only achieve adversarial perturbation through a visible watermark: the quality of the host image is low, and the concealment of data hiding cannot be achieved. In this work, we propose a true data hiding method with an adversarial effect for generating high-quality covers. Based on a GAN, the data hiding area is selected precisely by limiting the modification strength in order to preserve the fidelity of the image. We devise a genetic algorithm that can explore decision boundaries in an artificially constrained search space to improve the attack effect, and we construct aggressive covert adversarial samples by detecting “sensitive pixels” in ordinary samples and placing discontinuous perturbations there. The results reveal that the stego-image has good visual quality and a strong attack effect. To the best of our knowledge, this is the first attempt to use covert data hiding to generate adversarial samples based on a GAN.

20.
Abdominal aortic aneurysm (AAA) is a localized enlargement of the abdominal aorta. Once a ruptured AAA (rAAA) occurs, repair must be performed immediately, for which there are two main options: open aortic repair (OAR) and endovascular aortic repair (EVAR). It is of great clinical significance to objectively compare the survival outcomes of OAR versus EVAR using randomized clinical trials; however, this has serious feasibility issues. In this study, using Medicare data, we conduct an emulation analysis and explicitly “assemble” a clinical trial with rigorously defined inclusion/exclusion criteria. A total of 7826 patients are “recruited”, with 3866 and 3960 in the OAR and EVAR arms, respectively. Mimicking, but significantly advancing beyond, the regression-based literature, we adopt a deep learning-based analysis strategy, which consists of a propensity score step, a weighted survival analysis step, and a bootstrap step. The key finding is that, for both short- and long-term mortality, EVAR has survival advantages. This study delivers a new big data strategy for addressing critical clinical problems and provides valuable insights into treating rAAA using OAR and EVAR.
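
A minimal sketch (NumPy and scikit-learn on synthetic data; a plain logistic regression stands in for the paper's deep learning propensity model) of the first two steps named above: estimate propensity scores, form inverse-probability-of-treatment weights, and compare weighted Kaplan-Meier survival curves between the two arms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))                          # baseline covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # confounded treatment (e.g., EVAR = 1)
time = rng.exponential(1 / np.exp(-0.5 * treat + 0.3 * X[:, 0]))  # survival time
event = rng.binomial(1, 0.8, size=n)                 # 1 = death observed, 0 = censored

# Step 1: propensity scores and stabilized IPTW weights.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
w = np.where(treat == 1, treat.mean() / ps, (1 - treat.mean()) / (1 - ps))

def weighted_km(t, e, w):
    """Weighted Kaplan-Meier estimate evaluated at each event time."""
    order = np.argsort(t)
    t, e, w = t[order], e[order], w[order]
    surv, times, s = [], [], 1.0
    at_risk = w.sum()
    for i in range(len(t)):
        if e[i]:
            s *= 1.0 - w[i] / at_risk        # weighted hazard at this event time
            times.append(t[i]); surv.append(s)
        at_risk -= w[i]
    return np.array(times), np.array(surv)

# Step 2: weighted survival curves for the two (synthetic) arms.
for arm, name in [(0, "OAR-like arm"), (1, "EVAR-like arm")]:
    m = treat == arm
    times, surv = weighted_km(time[m], event[m], w[m])
    print(f"{name}: weighted 1-year survival ~ {surv[times <= 1.0][-1]:.3f}")
```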
