Similar Literature
 20 similar documents found (search time: 31 ms)
1.
Integrated information has recently been suggested as a possible measure to identify a necessary condition for a system to display conscious features. We have previously shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (a discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical “whole minus sum” version of integrated information in comparison to the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature. This motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures are found to be broadly similar: they converge asymptotically as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims to create a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information.
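A minimal sketch of the “whole minus sum” quantity discussed above, for a generic two-unit discrete-state system. The joint distribution here is a random placeholder, not the paper's spiking–bursting model, and the function names are ours:

```python
# "Whole minus sum" integrated information for a two-unit discrete system:
# Phi = I(X_past; X_present) - sum_i I(X_i_past; X_i_present).
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits from a joint probability table pxy[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# p[a0, b0, a1, b1]: joint distribution over past (a0, b0) and present (a1, b1)
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2, 2)); p /= p.sum()

# Whole: treat (a, b) jointly on both sides of the time step.
whole = mutual_information(p.reshape(4, 4))
# Parts: marginal channels a0 -> a1 and b0 -> b1.
part_a = mutual_information(p.sum(axis=(1, 3)))
part_b = mutual_information(p.sum(axis=(0, 2)))

phi_wms = whole - (part_a + part_b)   # may be negative ("net redundancy")
print(f"whole={whole:.3f}  parts={part_a + part_b:.3f}  Phi={phi_wms:.3f}")
```

A negative value corresponds to the "net synergy" sign transition that the abstract singles out.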

2.
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.
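A minimal sketch of a fragmentation-style analysis, assuming a toy three-node network (this is illustrative, not the authors' implementation): mutual information between every node subset and a target concept, estimated from sampled states.

```python
from itertools import combinations
from collections import Counter
import math

def mi(pairs):
    """I(X;Y) in bits from a list of (x, y) samples; x may be a tuple."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Three-node "brain"; the concept y = n0 XOR n1 is spread across nodes
# 0 and 1, while node 2 is merely a copy of node 0.
states = [((0, 0, 0), 0), ((0, 1, 0), 1), ((1, 0, 1), 1), ((1, 1, 1), 0)] * 25

for size in (1, 2):
    for subset in combinations(range(3), size):
        pairs = [(tuple(s[i] for i in subset), y) for s, y in states]
        print(subset, round(mi(pairs), 3))
# No single node carries any bits about y, but pairs containing both XOR
# inputs recover the full bit: the information is fragmented and, in the
# XOR sense, encrypted.
```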

3.
Many small biological objects, such as viruses, survive in a water environment and cannot remain active in dry air without condensation of water vapor. From a physical point of view, these objects belong to the mesoscale, where small thermal fluctuations with the characteristic kinetic energy of kBT (where kB is Boltzmann’s constant and T is the absolute temperature) play a significant role. The self-assembly of viruses, including protein folding and the formation of a protein capsid and lipid bilayer membrane, is controlled by hydrophobic forces (i.e., the repulsive forces between hydrophobic particles and regions of molecules) in a water environment. Hydrophobic forces are entropic, and they are driven by a system’s tendency to attain the maximally disordered state. On the other hand, in information systems, entropic forces are responsible for erasing information if the energy barrier between two states of a switch is on the order of kBT, which is referred to as Landauer’s principle. We treated the hydrophobic interactions responsible for the self-assembly of viruses as an information-processing mechanism. We further showed the similarity of these submicron-scale processes to self-assembly in colloidal crystals, droplet clusters, and liquid marbles.
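A back-of-the-envelope check of the two energy scales the abstract compares, at an assumed physiological temperature:

```python
# Thermal fluctuation energy k_B*T versus the Landauer limit k_B*T*ln(2)
# for erasing one bit. Temperature value is an assumption.
import math

k_B = 1.380649e-23   # J/K (exact, SI 2019)
T = 310.0            # K, approx. human body temperature

thermal = k_B * T
landauer = thermal * math.log(2)

print(f"k_B*T        = {thermal:.3e} J ({thermal / 1.602e-19:.4f} eV)")
print(f"Landauer bit = {landauer:.3e} J ({landauer / 1.602e-19:.4f} eV)")
# Both are ~1e-21 J: switching barriers of order k_B*T are exactly the
# regime where entropic (hydrophobic) forces can set or erase states.
```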

4.
Deep learning methods have achieved outstanding performance in various fields. A fundamental question is why they are so effective. Information theory provides a potential answer by interpreting the learning process as the information transmission and compression of data. The information flows can be visualized on the information plane of the mutual information among the input, hidden, and output layers. In this study, we examine how the information flows are shaped by the network parameters, such as depth, sparsity, weight constraints, and hidden representations. Here, we adopt autoencoders as models of deep learning, because (i) they have clear guidelines for their information flows, and (ii) they come in various flavors, such as vanilla, sparse, tied, variational, and label autoencoders. We measured their information flows using Rényi’s matrix-based α-order entropy functional. As learning progresses, they show a typical fitting phase where the amounts of input-to-hidden and hidden-to-output mutual information both increase. In the last stage of learning, however, some autoencoders show a simplifying phase, previously called the “compression phase”, where input-to-hidden mutual information diminishes. In particular, the sparsity regularization of hidden activities amplifies the simplifying phase. However, tied, variational, and label autoencoders do not have a simplifying phase. Nevertheless, all autoencoders have similar reconstruction errors for training and test data. Thus, the simplifying phase does not seem to be necessary for the generalization of learning.
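A sketch of the matrix-based α-order entropy estimator in the Giraldo–Rao–Príncipe style that the abstract refers to, on stand-in data (the kernel bandwidth and toy activations are assumptions):

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=1.01, sigma=1.0):
    """Matrix-based alpha-order entropy of samples X (n x d), in bits:
    S_alpha = log2(sum_i lambda_i^alpha) / (1 - alpha), where lambda_i
    are eigenvalues of a trace-normalized Gaussian Gram matrix."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    A = K / np.trace(K)                      # normalized Gram matrix
    lam = np.clip(np.linalg.eigvalsh(A), 0, None)
    return np.log2((lam ** alpha).sum()) / (1 - alpha)

rng = np.random.default_rng(1)
hidden = rng.normal(size=(128, 8))           # stand-in for hidden activations
print(f"S_alpha(hidden) = {matrix_renyi_entropy(hidden):.3f} bits")
# Joint entropies (and hence the mutual informations on the information
# plane, I = S_X + S_H - S_XH) use the Hadamard product of the two
# normalized Gram matrices.
```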

5.
By assimilating biological systems, both structural and functional, to multifractal objects, we can describe their behavior in the framework of scale relativity theory, in any of its forms (the standard form in Nottale’s sense and/or the form of the multifractal theory of motion). By operating in the context of the multifractal theory of motion, based on multifractalization through non-Markovian stochastic processes, the main results of Nottale’s theory can be generalized (specific momentum conservation laws at both differentiable and non-differentiable resolution scales, a specific momentum conservation law associated with the differentiable–non-differentiable scale transition, etc.). In such a context, all results are illustrated by analyzing biological processes, such as acute arterial occlusions treated as scale transitions. Thus, we show through a biophysical multifractal model that the blocking of the lumen of a healthy artery can happen as a result of the “stopping effect” associated with the differentiable–non-differentiable scale transition. We consider that blood entities move on continuous but non-differentiable (multifractal) curves. We determine the biophysical parameters that characterize blood flow as a Bingham-type rheological fluid through a normal arterial structure, assimilated to a horizontal “pipe” with circular symmetry. Our model was validated against experimental clinical data.
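A sketch of the classical Bingham-fluid pipe flow that the model builds on: the Buckingham–Reiner relation for volumetric flow through a circular "pipe" artery. All parameter values below are rough assumed magnitudes, not the paper's fitted clinical values:

```python
import math

dP   = 100.0      # Pa, pressure drop over the segment (assumed)
L    = 0.1        # m, segment length (assumed)
R    = 2e-3       # m, lumen radius (assumed)
mu   = 3.5e-3     # Pa*s, plastic viscosity of blood (approx.)
tau0 = 5e-3       # Pa, yield stress of blood (approx.)

tau_w = dP * R / (2 * L)                  # wall shear stress
ratio = tau0 / tau_w                      # valid regime: tau_w >= tau0
Q = (math.pi * dP * R**4 / (8 * mu * L)) * (1 - 4 * ratio / 3 + ratio**4 / 3)

print(f"tau_w = {tau_w:.3f} Pa, Q = {Q * 1e6:.2f} mL/s")
# As the effective lumen narrows, tau_w falls toward tau0 and Q -> 0:
# flow stops entirely below the yield threshold, the "stopping effect"
# the multifractal model reinterprets as a scale transition.
```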

6.
Information bottleneck (IB) and privacy funnel (PF) are two closely related optimization problems which have found applications in machine learning, design of privacy algorithms, capacity problems (e.g., Mrs. Gerber’s Lemma), and strong data processing inequalities, among others. In this work, we first investigate the functional properties of IB and PF through a unified theoretical framework. We then connect them to three information-theoretic coding problems, namely hypothesis testing against independence, noisy source coding, and dependence dilution. Leveraging these connections, we prove a new cardinality bound on the auxiliary variable in IB, making its computation more tractable for discrete random variables. In the second part, we introduce a general family of optimization problems, termed “bottleneck problems”, by replacing mutual information in IB and PF with other notions of mutual information, namely f-information and Arimoto’s mutual information. We then argue that, unlike IB and PF, these problems lead to easily interpretable guarantees in a variety of inference tasks with statistical constraints on accuracy and privacy. While the underlying optimization problems are non-convex, we develop a technique to evaluate bottleneck problems in closed form by equivalently expressing them in terms of the lower convex or upper concave envelopes of certain functions. By applying this technique to a binary case, we derive closed-form expressions for several bottleneck problems.
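For orientation, a sketch of the standard discrete IB self-consistent iteration (Tishby-style), which is the computation that the paper's cardinality bound makes tractable. The joint distribution is synthetic, and |T| is fixed by hand:

```python
import numpy as np

rng = np.random.default_rng(2)
pxy = rng.random((4, 3)); pxy /= pxy.sum()        # joint p(x, y)
px = pxy.sum(1); py_x = pxy / px[:, None]         # p(x), p(y|x)

nT, beta = 4, 5.0
qt_x = rng.random((4, nT)); qt_x /= qt_x.sum(1, keepdims=True)  # q(t|x)

for _ in range(200):
    qt = px @ qt_x                                        # q(t)
    qy_t = (qt_x * px[:, None]).T @ py_x / qt[:, None]    # q(y|t)
    # KL(p(y|x) || q(y|t)) for every (x, t) pair
    kl = (py_x[:, None, :] *
          np.log(py_x[:, None, :] / qy_t[None, :, :])).sum(-1)
    qt_x = qt[None, :] * np.exp(-beta * kl)               # IB update
    qt_x /= qt_x.sum(1, keepdims=True)

qt = px @ qt_x
qy_t = (qt_x * px[:, None]).T @ py_x / qt[:, None]

def mi(pab):
    m = pab > 0
    return (pab[m] * np.log2(pab[m] / np.outer(pab.sum(1), pab.sum(0))[m])).sum()

print(f"I(X;T)={mi(qt_x * px[:, None]):.3f}  I(T;Y)={mi(qy_t * qt[:, None]):.3f}")
```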

7.
Fitness landscapes are a powerful metaphor for understanding the evolution of biological systems. These landscapes describe how genotypes are connected to each other through mutation and related through fitness. Empirical studies of fitness landscapes have increasingly revealed conserved topographical features across diverse taxa, e.g., the accessibility of genotypes and “ruggedness”. As a result, theoretical studies are needed to investigate how evolution proceeds on fitness landscapes with such conserved features. Here, we develop and study a model of evolution on fitness landscapes using the lens of Gene Regulatory Networks (GRNs), where the regulatory products are computed from multiple genes and collectively treated as phenotypes. With the assumption that regulation is a binary process, we prove the existence of empirically observed topographical features such as accessibility and connectivity. We further show that these results hold across arbitrary fitness functions and that a trade-off between accessibility and ruggedness need not exist. Then, using graph theory and a coarse-graining approach, we deduce a mesoscopic structure underlying GRN fitness landscapes where the information necessary to predict a population’s evolutionary trajectory is retained with minimal complexity. Using this coarse-graining, we develop a bottom-up algorithm to construct such mesoscopic backbones, which does not require computing the genotype network and is therefore far more efficient than brute-force approaches. Altogether, this work provides mathematical results on high-dimensional fitness landscapes and a path toward connecting theory to empirical studies.
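A minimal sketch of the kind of landscape statistic under discussion: counting fitness-monotonic ("accessible") mutational paths between two genotypes on the L-bit hypercube. A random fitness assignment stands in for the paper's GRN-derived phenotypes:

```python
from itertools import permutations
import random

L = 4
random.seed(3)
fitness = {g: random.random() for g in range(2 ** L)}  # genotype -> fitness

src, dst = 0, 2 ** L - 1          # all-zeros to all-ones genotype
flips = [1 << k for k in range(L)]

accessible = total = 0
for order in permutations(flips):  # each ordering of the L mutations is a path
    total += 1
    g, ok = src, True
    for bit in order:
        nxt = g ^ bit             # apply one point mutation
        ok = ok and fitness[nxt] > fitness[g]
        g = nxt
    accessible += ok
print(f"{accessible}/{total} direct paths are fitness-monotonic")
```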

8.
The article argues that—at least in certain interpretations, such as the one assumed in this article under the heading of “reality without realism”—the quantum-theoretical situation appears as follows: While—in terms of probabilistic predictions—connected to and connecting the information obtained in quantum phenomena, the mathematics of quantum theory (QM or QFT), which is continuous, does not represent and is discontinuous with both the emergence of quantum phenomena and the physics of these phenomena, phenomena that are physically discontinuous with each other as well. These phenomena, and thus this information, are described by classical physics. All actually available information (in the mathematical sense of information theory) is classical: it is composed of units, such as bits, that are—or are contained in—entities described by classical physics. On the other hand, classical physics cannot predict this information when it is created, as manifested in measuring instruments in quantum experiments, while quantum theory can. In this epistemological sense, this information is quantum. The article designates the discontinuity between quantum theory and the emergence of quantum phenomena the “Heisenberg discontinuity”, because it was introduced by W. Heisenberg along with QM, and the discontinuity between QM or QFT and the classical physics of quantum phenomena the “Bohr discontinuity”, because it was introduced as part of Bohr’s interpretation of quantum phenomena and QM, under the assumption of Heisenberg discontinuity. Combining both discontinuities precludes QM or QFT from being connected to either physical reality—the reality ultimately responsible for quantum phenomena or that of these phenomena themselves—other than by means of probabilistic predictions concerning the information, classical in character, contained in quantum phenomena. The nature of quantum information is, in this view, defined by this situation. A major implication, discussed in the Conclusion, is the existence and arguably the necessity of two (classical and quantum)—or, with relativity, three and possibly more—essentially different theories in fundamental physics.

9.
This paper starts from Schrödinger’s famous question “What is life?” and elucidates answers that invoke, in particular, Friston’s free energy principle and its relation to the method of Bayesian inference and to the second foundation of Synergetics, which utilizes Jaynes’ maximum entropy principle. Our presentation reflects the shift from the emphasis on physical principles to principles of information theory and Synergetics. In view of the expected general audience of this issue, we have chosen a somewhat tutorial style that does not require special knowledge of physics but familiarizes the reader with concepts rooted in information theory and Synergetics.

10.
Neuroscience uses information theory extensively to describe neural communication—among other things, to calculate the amount of information transferred in neural communication and to attempt to crack its code. There are fierce debates on how information is represented in the brain and during transmission inside the brain. Neural information theory borrows the assumptions of electronic communication, despite the experimental evidence that neural spikes carry information about non-discrete states, that communication speed is low, and that the spikes’ timing precision matters. Furthermore, in biology the communication channel is active, which enforces an additional power-bandwidth limitation on neural information transfer. The paper revisits the notions needed to describe information transfer in technical and biological communication systems. It argues that biology uses Shannon’s idea outside its range of validity and introduces an adequate interpretation of information. In addition, the presented time-aware approach to information theory reveals evidence for the role of processes (as opposed to states) in neural operations. The generalized information theory describes both kinds of communication, with the classic theory as a particular case.
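A numeric sketch of the power-bandwidth limit the abstract invokes: the Shannon–Hartley capacity for an assumed, loosely neural-scale channel (all values are illustrative assumptions, not measurements):

```python
import math

B = 1_000.0          # Hz: effective bandwidth of a slow biological channel
snr_db = 10.0        # dB: assumed signal-to-noise ratio
snr = 10 ** (snr_db / 10)

C = B * math.log2(1 + snr)   # Shannon-Hartley capacity
print(f"C = {C:.0f} bit/s at B = {B:.0f} Hz, SNR = {snr_db} dB")
# A spike train at ~100 Hz sits far below such a limit, which is one
# reason spike timing, not just rate, must carry information.
```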

11.
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, and discuss recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss promising future directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
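A minimal sketch of one concrete PID redundancy measure, the Williams–Beer I_min, on the canonical XOR example where all information is synergistic (this is one of several proposed redundancy functions, not the review's only option):

```python
import numpy as np

# p[s1, s2, t] with t = s1 XOR s2 and uniform inputs
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a ^ b] = 0.25

def specific_info(p_joint, pt):
    """I_spec(S; T=t) for each t, from the joint table p_joint[s, t].
    (A general table with zeros would need masking; none occur here.)"""
    ps = p_joint.sum(1)
    out = []
    for t in range(p_joint.shape[1]):
        ps_given_t = p_joint[:, t] / pt[t]
        pt_given_s = p_joint[:, t] / ps
        out.append((ps_given_t * np.log2(pt_given_s / pt[t])).sum())
    return np.array(out)

pt = p.sum((0, 1))
spec1 = specific_info(p.sum(1), pt)          # source S1 vs target T
spec2 = specific_info(p.sum(0), pt)          # source S2 vs target T
redundancy = (pt * np.minimum(spec1, spec2)).sum()

def mi(pab):
    m = pab > 0
    return (pab[m] * np.log2(pab[m] / np.outer(pab.sum(1), pab.sum(0))[m])).sum()

joint_mi = mi(p.reshape(4, 2))               # I({S1,S2}; T) = 1 bit
print(f"redundancy = {redundancy:.3f} bits, synergy = {joint_mi - redundancy:.3f} bits")
```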

12.
13.
Representation and abstraction are two of the fundamental concepts of computer science. Together they enable “high-level” programming: without abstraction, programming would be tied to machine code; without a machine representation, it would be a pure mathematical exercise. Representation begins with an abstract structure and seeks to find a more concrete one. Abstraction does the reverse: it starts with concrete structures and abstracts away. While formal accounts of representation are easy to find, abstraction is a different matter. In this paper, we provide an analysis of data abstraction based upon some contemporary work in the philosophy of mathematics. The paper contains a mathematical account of how Frege’s approach to abstraction may be interpreted, modified, extended, and imported into type theory. We argue that representation and abstraction, while mathematical siblings, are philosophically quite different. A case of special interest concerns the abstract/physical interface, which houses both the physical representation of abstract structures and the abstraction of physical systems.

14.
Quantum information theorists have created axiomatic reconstructions of quantum mechanics (QM) that are very successful at identifying precisely what distinguishes quantum probability theory from classical and more general probability theories in terms of information-theoretic principles. Herein, we show how one such principle, Information Invariance and Continuity, at the foundation of those axiomatic reconstructions, maps to “no preferred reference frame” (NPRF, aka “the relativity principle”) as it pertains to the invariant measurement of Planck’s constant h for Stern-Gerlach (SG) spin measurements. This is in exact analogy to the relativity principle as it pertains to the invariant measurement of the speed of light c at the foundation of special relativity (SR). Essentially, quantum information theorists have extended Einstein’s use of NPRF from the boost invariance of measurements of c to include the SO(3) invariance of measurements of h between different reference frames of mutually complementary spin measurements via the principle of Information Invariance and Continuity. Consequently, the “mystery” of the Bell states is understood to result from conservation per Information Invariance and Continuity between different reference frames of mutually complementary qubit measurements, and this maps to conservation per NPRF in spacetime. If one falsely conflates the relativity principle with the classical theory of SR, then it may seem impossible that the relativity principle resides at the foundation of non-relativistic QM. In fact, there is nothing inherently classical or quantum about NPRF. Thus, the axiomatic reconstructions of QM have succeeded in producing a principle account of QM that reveals as much about Nature as the postulates of SR.

15.
With the increasing number of connected devices, complex systems such as smart homes record a multitude of events of various types, magnitudes, and characteristics. Current systems struggle to identify which events can be considered more memorable than others. In contrast, humans are able to quickly categorize some events as being more “memorable” than others. They do so without relying on knowledge of the system’s inner workings or large previous datasets. Having this ability would allow the system to: (i) identify and summarize a situation to the user by presenting only memorable events; (ii) suggest the most memorable events as possible hypotheses in an abductive inference process. Our proposal is to use Algorithmic Information Theory to define a “memorability” score by retrieving events using predicative filters. We use smart-home examples to illustrate how our theoretical approach can be implemented in practice.
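A minimal sketch of an algorithmic-information "memorability" proxy in the same spirit: an event is memorable when it is hard to compress given the history of prior events. Here zlib stands in for Kolmogorov complexity, and the event log and scoring rule are our illustrative assumptions, not the paper's definition:

```python
import zlib

def clen(s: str) -> int:
    """Compressed length as a computable proxy for complexity K(s)."""
    return len(zlib.compress(s.encode()))

log = ["door:open", "light:on", "door:open", "light:on"] * 10
candidate_events = ["light:on", "smoke:alarm"]

history = "\n".join(log)
for ev in candidate_events:
    # conditional complexity proxy: K(history + event) - K(history)
    score = clen(history + "\n" + ev) - clen(history)
    print(f"{ev:12s} memorability ~ {score} bytes")
# The routine event compresses almost for free; the unprecedented
# "smoke:alarm" costs extra bits, i.e., it is memorable.
```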

16.
Bell inequalities were created with the goal of improving the understanding of foundational questions in quantum mechanics. To this end, they are typically applied to measurement results generated from entangled systems of particles. They can, however, also be used as a statistical tool for macroscopic systems, where they can describe the connection strength between two components of a system under a causal model. We show that, in principle, data from macroscopic observations analyzed with Bell’s approach can invalidate certain causal models. To illustrate this use, we describe a macroscopic game setting, without a quantum mechanical measurement process, and analyze it using the framework of Bell experiments. In the macroscopic game, violations of the inequalities can be created by cheating with classically defined strategies. In the physical context, the meaning of violations is less clear and is still vigorously debated. We discuss two measures for optimal strategies to generate a given statistic that violates the inequalities. We show their mathematical equivalence and how they can be computed from CHSH quantities alone, if non-signaling applies. As a macroscopic example from the financial world, we show how the unfair use of insider knowledge could be picked up using Bell statistics. Finally, in the discussion of realist interpretations of quantum mechanical Bell experiments, cheating strategies are often expressed through the ideas of free choice and locality. In this regard, violations of free choice and locality can be interpreted as two sides of the same coin, which underscores the view that the meaning these terms are given in Bell’s approach should not be confused with their everyday use. In general, we conclude that Bell’s approach also carries lessons for understanding macroscopic systems whose connectedness conforms to different causal structures.
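A minimal sketch of the CHSH quantity S = E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′) used throughout such analyses, estimated for an assumed local (non-cheating) strategy. The strategy and sampling setup are our toy construction:

```python
import numpy as np

rng = np.random.default_rng(4)

def chsh(strategy, n=100_000):
    """Estimate S for a local strategy: outcome = f(party, setting, lambda)."""
    E = {}
    for x in (0, 1):            # Alice's setting a / a'
        for y in (0, 1):        # Bob's setting b / b'
            lam = rng.random(n)                 # shared hidden variable
            a = strategy("A", x, lam)
            b = strategy("B", y, lam)
            E[x, y] = float(np.mean(a * b))
    return E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]

# A deterministic local strategy (both parties always answer +1) attains
# the classical bound |S| = 2; exceeding 2 requires "cheating" (signaling
# or correlated setting choices) or, physically, quantum correlations up
# to Tsirelson's bound 2*sqrt(2).
print(f"S = {chsh(lambda who, s, lam: np.ones_like(lam)):.3f}")
```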

17.
Entropy is a concept that emerged in the 19th century. It was originally associated with the heat harnessed by a thermal machine to perform work during the Industrial Revolution. The 20th century, however, brought an unprecedented scientific revolution through one of its most essential innovations, information theory, which also encompasses a concept of entropy. This naturally raises the question: what is the difference, if any, between the concepts of entropy in each field of knowledge? Misconceptions abound, as there have been multiple attempts to reconcile the entropy of thermodynamics with that of information theory. Entropy is most commonly defined as “disorder”, although this is not a good analogy, since “order” is a subjective human concept and “disorder” cannot always be obtained from entropy. Therefore, this paper presents a historical background on the evolution of the term “entropy”, and provides mathematical evidence and logical arguments regarding its interconnection in various scientific areas, with the objective of providing a theoretical review and reference material for a broad audience.
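A toy numeric comparison of the two entropies the review connects: Boltzmann's S = k_B ln W for W equally likely microstates versus Shannon's H = −Σ p log₂ p for the same uniform distribution (the 10-particle system is an assumed illustration):

```python
import math

k_B = 1.380649e-23          # J/K
W = 2 ** 10                 # microstates of a toy 10-particle, 2-state system

S_thermo  = k_B * math.log(W)   # Boltzmann entropy, J/K
H_shannon = math.log2(W)        # Shannon entropy of the uniform case, bits

print(f"S = {S_thermo:.3e} J/K, H = {H_shannon:.0f} bits")
print(f"S / (k_B * ln 2) = {S_thermo / (k_B * math.log(2)):.0f} bits")
# For uniform distributions the two differ only by the conversion factor
# k_B * ln 2 per bit; the popular "disorder" reading adds nothing that
# the underlying probability distribution does not already carry.
```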

18.
In most existing multi-task learning (MTL) models, the information common to multiple tasks is learned by sharing parameters across hidden layers, as in hard sharing, soft sharing, and hierarchical sharing. One promising approach is to introduce model pruning into this learning, as in sparse sharing, which is regarded as outstanding at knowledge transfer. However, these methods perform poorly on conflicting tasks, learning tasks’ private information inadequately or suffering from negative transfer. In this paper, we propose a multi-task learning model (Pruning-Based Feature Sharing, PBFS) that merges a soft parameter sharing structure with model pruning and adds a prunable shared network among the task-specific subnets. In this way, each task can select parameters in the shared subnet according to its requirements. Experiments are conducted on three public benchmark datasets and one synthetic dataset, and the impact of subnet sparsity and task correlation on model performance is analyzed. Results show that the proposed model’s information-sharing strategy aids transfer learning and outperforms several comparison models.
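A minimal PyTorch sketch of the core idea as the abstract describes it: a shared subnet whose parameters each task selects via a binary pruning mask, plus task-private heads. The architecture, mask policy, and all names are our assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrunableShared(nn.Module):
    def __init__(self, dim, n_tasks, keep=0.5):
        super().__init__()
        self.shared = nn.Linear(dim, dim)
        # one fixed random mask per task: which shared weights it may use
        masks = (torch.rand(n_tasks, dim, dim) < keep).float()
        self.register_buffer("masks", masks)
        self.heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(n_tasks))

    def forward(self, x, task):
        w = self.shared.weight * self.masks[task]   # task-selected subnet
        h = torch.relu(F.linear(x, w, self.shared.bias))
        return self.heads[task](h)                  # task-private output

model = PrunableShared(dim=16, n_tasks=3)
x = torch.randn(4, 16)
print(model(x, task=0).shape, model(x, task=2).shape)
```

In a real system the masks would be learned or derived from magnitude pruning rather than fixed at random; the point here is only that overlapping masks give partial sharing while disjoint mask regions keep conflicting tasks apart.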

19.
Integrated information theory (IIT) provides a mathematical framework to characterize the cause-effect structure of a physical system and its amount of integrated information (Φ). An accompanying Python software package (“PyPhi”) was recently introduced to implement this framework for the causal analysis of discrete dynamical systems of binary elements. Here, we present an update to PyPhi that extends its applicability to systems constituted of discrete but multi-valued elements. This allows us to analyze and compare general causal properties of random networks made up of binary, ternary, quaternary, and mixed nodes. Moreover, we apply the developed tools for causal analysis to a simple non-binary regulatory network model (p53-Mdm2) and discuss commonly used binarization methods in light of their capacity to preserve the causal structure of the original system with multi-valued elements.
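A usage sketch of the binary PyPhi workflow that the multi-valued extension builds on (API names as in PyPhi 1.x; check your installed version). The two-node copy network below is a toy, not the p53-Mdm2 model:

```python
import numpy as np
import pyphi

# State-by-node TPM for 2 nodes, rows indexed little-endian by current
# state (A, B); each entry is the probability the node is ON next step.
# Here each node simply copies the other.
tpm = np.array([
    [0, 0],   # (A=0, B=0) -> (0, 0)
    [0, 1],   # (A=1, B=0) -> (0, 1)
    [1, 0],   # (A=0, B=1) -> (1, 0)
    [1, 1],   # (A=1, B=1) -> (1, 1)
])
network = pyphi.Network(tpm, node_labels=("A", "B"))
subsystem = pyphi.Subsystem(network, state=(1, 0))
sia = pyphi.compute.sia(subsystem)      # system irreducibility analysis
print(f"Phi = {sia.phi}")
```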

20.
The consensus regarding quantum measurements rests on two statements: (i) von Neumann’s standard quantum measurement theory leaves undetermined the basis in which observables are measured, and (ii) the environmental decoherence of the measuring device (the “meter”) unambiguously determines the measuring (“pointer”) basis. The latter statement means that the environment monitors (measures) selected observables of the meter and (indirectly) of the system. Equivalently, a measured quantum state must end up in one of the “pointer states” that persist in the presence of the environment. We find that, unless we restrict ourselves to projective measurements, decoherence does not necessarily determine the pointer basis of the meter. Namely, generalized measurements commonly allow the observer to choose from a multitude of alternative pointer bases that provide the same information on the observables, regardless of decoherence. By contrast, the measured observable does not depend on the pointer basis, whether in the presence or in the absence of decoherence. These results grant further support to our notion of Quantum Lamarckism, whereby the observer’s choices play an indispensable role in quantum mechanics.
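A numerical illustration of the underlying textbook fact: two different Kraus decompositions (different "pointer" conventions) can induce the same POVM and hence identical outcome statistics on the system. The specific operators below are our toy choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# random pure qubit state rho
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# a two-outcome generalized measurement (Kraus operators M0, M1)
M0 = np.array([[1, 0], [0, np.sqrt(0.5)]], dtype=complex)
M1 = np.array([[0, 0], [0, np.sqrt(0.5)]], dtype=complex)

# alternative decomposition: rotate each Kraus operator by a unitary;
# N_k^dag N_k = M_k^dag M_k, so the POVM is unchanged
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
N0, N1 = U @ M0, U @ M1

for k, (M, N) in enumerate([(M0, N0), (M1, N1)]):
    p_M = np.trace(M.conj().T @ M @ rho).real
    p_N = np.trace(N.conj().T @ N @ rho).real
    print(f"outcome {k}: p = {p_M:.4f} vs {p_N:.4f}")  # identical
# The post-measurement ("pointer") states M rho M^dag and N rho N^dag
# differ, yet every outcome probability is the same: statistics alone
# do not fix the pointer basis of a generalized measurement.
```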
