Similar Literature
20 similar documents retrieved.
1.
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify the complexity of information processing in these networks, and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.
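A minimal sketch of the core measurement behind such an analysis, assuming binary node states recorded alongside a task-relevant feature: mutual information is computed for every small node subset, so that information carried only jointly (here an XOR, i.e. "encrypted") shows up in pairs but not in single nodes. The data and the names (node_states, feature) are invented for illustration, not the authors' released tooling.

```python
from collections import Counter
from itertools import combinations

import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a sequence of hashable labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_information(x_rows, y):
    """I(X; Y) where each row of x_rows is the joint state of a node subset."""
    x = [tuple(r) for r in x_rows]
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

def fragmentation_scores(node_states, feature, max_size=2):
    """I(subset; feature) for all node subsets up to max_size nodes.
    Information visible only in larger subsets is fragmented (and possibly
    encrypted); information in single nodes is localized."""
    n = node_states.shape[1]
    return {s: mutual_information(node_states[:, list(s)], feature)
            for k in range(1, max_size + 1)
            for s in combinations(range(n), k)}

rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=(500, 6))   # 500 time steps, 6 binary nodes
feature = states[:, 0] ^ states[:, 3]        # "encrypted" across nodes 0 and 3
top = sorted(fragmentation_scores(states, feature).items(), key=lambda kv: -kv[1])
for subset, mi in top[:3]:
    print(subset, round(float(mi), 3))       # (0, 3) carries ~1 bit; singles ~0
```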

2.
Causal Geometry     
Information geometry has offered a way to formally study the efficacy of scientific models by quantifying the impact of model parameters on the predicted effects. However, there has been little formal investigation of causation in this framework, despite causal models being a fundamental part of science and explanation. Here, we introduce causal geometry, which formalizes not only how outcomes are impacted by parameters, but also how the parameters of a model can be intervened upon. Therefore, we introduce a geometric version of “effective information”—a known measure of the informativeness of a causal relationship. We show that it is given by the matching between the space of effects and the space of interventions, in the form of their geometric congruence. Consequently, given a fixed intervention capability, an effective causal model is one that is well matched to those interventions. This is a consequence of “causal emergence,” wherein macroscopic causal relationships may carry more information than “fundamental” microscopic ones. We thus argue that a coarse-grained model may, paradoxically, be more informative than the microscopic one, especially when it better matches the scale of accessible interventions—as we illustrate on toy examples.
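A minimal sketch of how the non-geometric "effective information" it builds on can be computed for a discrete causal model given as a transition probability matrix: intervene with a maximum-entropy (uniform) distribution over states and measure the mutual information between interventions and effects. The two toy TPMs are invented; the paper's geometric generalization is not reproduced here.

```python
import numpy as np

def effective_information(tpm):
    """EI = I(X; Y) under do(X) ~ uniform, for a row-stochastic tpm[x, y]."""
    n = tpm.shape[0]
    p_x = np.full(n, 1.0 / n)        # maximum-entropy intervention distribution
    p_y = p_x @ tpm                  # resulting distribution over effects
    ei = 0.0
    for xi in range(n):
        for yi in range(n):
            p_xy = p_x[xi] * tpm[xi, yi]
            if p_xy > 0:
                ei += p_xy * np.log2(p_xy / (p_x[xi] * p_y[yi]))
    return ei

noisy = np.array([[0.5, 0.5],        # maximally noisy mechanism: EI = 0 bits
                  [0.5, 0.5]])
deterministic = np.eye(2)            # perfectly informative mechanism: EI = 1 bit
print(effective_information(noisy), effective_information(deterministic))
```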

3.
Active inference is a normative framework for explaining behaviour under the free energy principle—a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state estimation in terms of a descent on (variational) free energy—a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error—plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference.
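A minimal sketch contrasting vanilla and natural gradient descent on a variational objective, with the information length (the metabolic-cost proxy mentioned above) accumulated from the Fisher metric line element. The categorical target belief, step size, and softmax parameterisation are illustrative assumptions, not the paper's neuronal model.

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])        # target (posterior) belief, invented

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def descend(natural, steps=200, lr=0.1):
    theta = np.zeros(3)              # logits: start from a uniform belief
    length = 0.0                     # information length travelled so far
    for _ in range(steps):
        q = softmax(theta)
        kl_terms = np.log(q) - np.log(p)
        grad = q * (kl_terms - q @ kl_terms)       # d KL(q||p) / d theta
        fisher = np.diag(q) - np.outer(q, q)       # Fisher metric of the softmax family
        step = lr * (np.linalg.pinv(fisher) @ grad if natural else grad)
        length += np.sqrt(step @ fisher @ step)    # metric line element
        theta = theta - step
    return softmax(theta), length

for natural in (False, True):
    q, L = descend(natural)
    print("natural" if natural else "vanilla", np.round(q, 3), round(float(L), 3))
```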

4.
5.
Evolution is full of coevolving systems characterized by complex spatio-temporal interactions that lead to intertwined processes of adaptation. Yet, how adaptation is achieved across multiple temporal scales and levels of biological complexity remains unclear. Here, we formalize how evolutionary multi-scale processing underlying adaptation constitutes a form of metacognition, flowing from definitions of metaprocessing in machine learning. We show (1) how the evolution of metacognitive systems can be expected when fitness landscapes vary on multiple time scales, and (2) how multiple time scales emerge during coevolutionary processes of sufficiently complex interactions. After defining a metaprocessor as a regulator with local memory, we prove that metacognition is more energetically efficient than purely object-level cognition when selection operates at multiple timescales in evolution. Furthermore, we show that existing modeling approaches to coadaptation and coevolution—here active inference networks, predator–prey interactions, coupled genetic algorithms, and generative adversarial networks—lead to multiple emergent timescales underlying forms of metacognition. Lastly, we show how coarse-grained structures emerge naturally in any resource-limited system, providing sufficient evidence for metacognitive systems to be a prevalent and vital component of (co-)evolution. Therefore, multi-scale processing is a necessary requirement for many evolutionary scenarios, leading to de facto metacognitive evolutionary outcomes.
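A minimal sketch (our construction, not the paper's proof) of the claimed energetic advantage: when the fitness landscape alternates on a slow timescale, an object-level hill-climber pays the full re-adaptation cost after every switch, while a metaprocessor, i.e. a regulator with local memory, recalls a cached solution per regime. Landscape, dynamics, and cost measure are all invented.

```python
import numpy as np

rng = np.random.default_rng(6)
optima = {0: np.zeros(10), 1: np.ones(10)}       # two regimes, two optima

def adapt(start, target, tol=0.05):
    """Hill-climb towards target; return (steps used, final point)."""
    x, steps = start.copy(), 0
    while np.linalg.norm(x - target) > tol:
        x += 0.1 * (target - x)                  # one costly adaptation step
        steps += 1
    return steps, x

def run(meta, switches=20):
    x, memory, cost = rng.normal(size=10), {}, 0
    for epoch in range(switches):
        regime = epoch % 2                       # slow regime alternation
        if meta and regime in memory:
            x = memory[regime].copy()            # recall the cached solution
        steps, x = adapt(x, optima[regime])
        memory[regime] = x.copy()                # local memory of the regulator
        cost += steps
    return cost

print("object-level cost:", run(meta=False), " meta-level cost:", run(meta=True))
```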

6.
What information-processing strategies and general principles are sufficient to enable self-organized morphogenesis in embryogenesis and regeneration? We designed and analyzed a minimal model of self-scaling axial patterning consisting of a cellular network that develops activity patterns within implicitly set bounds. The properties of the cells are determined by internal ‘genetic’ networks with an architecture shared across all cells. We used machine learning to identify models that enable this virtual mini-embryo to pattern a typical axial gradient while simultaneously sensing the set boundaries within which to develop it from homogeneous conditions—a setting that captures the essence of early embryogenesis. Interestingly, the model revealed several features (such as planar polarity and regenerative re-scaling capacity) for which it was not directly selected, showing how these common biological design principles can emerge as a consequence of simple patterning modes. A novel “causal network” analysis of the best model furthermore revealed that the originally symmetric model dynamically integrates into intercellular causal networks characterized by broken symmetry, long-range influence, and modularity, offering an interpretable macroscale, circuit-based explanation for phenotypic patterning. This work shows how computation could occur in biological development and how machine-learning approaches can generate hypotheses and deepen our understanding of how featureless tissues might develop sophisticated patterns—an essential step towards predictive control of morphogenesis in regenerative medicine and synthetic bioengineering contexts. The tools developed here also have the potential to benefit machine learning via new forms of backpropagation and by leveraging novel distributed self-representation mechanisms to improve robustness and generalization.

7.
With the increasing number of connected devices, complex systems such as smart homes record a multitude of events of various types, magnitudes, and characteristics. Current systems struggle to identify which events can be considered more memorable than others. In contrast, humans are able to quickly categorize some events as being more “memorable” than others. They do so without relying on knowledge of the system’s inner workings or large previous datasets. Having this ability would allow the system to: (i) identify and summarize a situation to the user by presenting only memorable events; (ii) suggest the most memorable events as possible hypotheses in an abductive inference process. Our proposal is to use Algorithmic Information Theory to define a “memorability” score by retrieving events using predicative filters. We use smart-home examples to illustrate how our theoretical approach can be implemented in practice.
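A minimal sketch of a compression-based memorability proxy in this spirit: an event is scored by how poorly it compresses given the rest of the log, a standard practical approximation of conditional algorithmic complexity. The smart-home log and the event names are invented, and this is not the paper's filter mechanism.

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length: a crude upper bound on algorithmic complexity."""
    return len(zlib.compress(s, 9))

def memorability(event: str, context: list[str]) -> int:
    """K(event | context) approximated as C(context + event) - C(context)."""
    ctx = "\n".join(context).encode()
    return c(ctx + b"\n" + event.encode()) - c(ctx)

log = ["door:open", "light:on", "heating:21C", "light:off"] * 10   # routine
for event in ("light:on", "smoke_alarm:triggered"):
    print(event, memorability(event, log))   # the rare event scores higher
```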

8.
For a large ensemble of complex systems, a Many-System Problem (MSP) studies how heterogeneity constrains and hides structural mechanisms, and how to uncover hidden major factors from homogeneous parts. All member systems in an MSP share common governing principles of dynamics, but differ in idiosyncratic characteristics. A typical dynamic underlies response features with respect to covariate features of quantitative or qualitative data types. Neither all-system-as-one-whole nor individual system-specific functional structures are assumed in such response-vs-covariate (Re–Co) dynamics. We developed a computational protocol for identifying various collections of major factors of various orders underlying Re–Co dynamics. We first demonstrate the immanent effects of heterogeneity among member systems, which constrain compositions of major factors and even hide essential ones. Second, we show that fuller collections of major factors are discovered by breaking heterogeneity into many homogeneous parts. This process further realizes Anderson’s “More is Different” phenomenon. We employ the categorical nature of all features and develop a Categorical Exploratory Data Analysis (CEDA)-based major factor selection protocol. Information-theoretic measurements—conditional mutual information and entropy—are heavily used in two selection criteria: C1—confirmable and C2—irreplaceable. All conditional entropies are evaluated through contingency tables with algorithmically computed reliability against the finite-sample phenomenon. We study one artificially designed MSP and then two real collectives of Major League Baseball (MLB) pitching dynamics with 62 slider pitchers and 199 fastball pitchers, respectively. Finally, our MSP data-analysis techniques are applied to resolve a scientific issue related to the Rosenberg Self-Esteem Scale.
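A minimal sketch of the entropy machinery such selection criteria rest on: conditional entropy and conditional mutual information evaluated from contingency tables of categorical features. The synthetic data illustrate how heterogeneity (here a latent label z) can hide a major factor that reappears once conditioned upon; these are not the MLB data or the full CEDA protocol.

```python
from collections import Counter

import numpy as np

def H(*cols):
    """Joint Shannon entropy (bits) of one or more categorical columns."""
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def cond_entropy(y, x):            # H(Y | X) = H(X, Y) - H(X)
    return H(x, y) - H(x)

def cond_mutual_info(x, y, z):     # I(X; Y | Z)
    return H(x, z) + H(y, z) - H(x, y, z) - H(z)

rng = np.random.default_rng(1)
z = rng.integers(0, 2, 1000)       # latent heterogeneity label
x = rng.integers(0, 2, 1000)       # candidate major factor
y = x ^ z                          # response: depends on x only given z
print(round(float(H(y) - cond_entropy(y, x)), 3))   # marginal I(X;Y) ~ 0: hidden
print(round(float(cond_mutual_info(x, y, z)), 3))   # I(X;Y|Z) ~ 1 bit: revealed
```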

9.
In a blind adaptive deconvolution problem, the convolutional noise observed at the output of the deconvolution process, in addition to the required source signal, is—according to the literature—assumed to be a Gaussian process when the deconvolution process (the blind adaptive equalizer) is deep in its convergence state, namely, when the convolutional noise sequence or, equivalently, the residual inter-symbol interference (ISI) is considered small. Up to now, no closed-form approximate expression has been given for the residual ISI at which the Gaussian model can be used to describe the convolutional noise probability density function (pdf). In this paper, we use the Maximum Entropy density technique, Lagrange’s Integral method, and a quasi-moment truncation technique to obtain an approximate closed-form equation for the residual ISI at which the Gaussian model can be used to approximately describe the convolutional noise pdf. We show, based on this approximate closed-form equation for the residual ISI, that the Gaussian model can be used to approximately describe the convolutional noise pdf just before the equalizer has converged, even at a residual ISI level where the “eye diagram” is still very closed, namely, where the residual ISI cannot be considered small.
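A minimal sketch of the setting rather than the paper's derivation: a blind CMA equalizer is run on a QPSK signal through an invented channel, the residual ISI is computed from the combined channel-equalizer response, and the excess kurtosis of the convolutional noise gives a crude empirical check of how Gaussian it looks. Channel taps, step size, and lengths are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 60_000
s = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK
h = np.array([1.0, 0.25, -0.1])                  # invented channel taps
x = np.convolve(s, h)[:N]                        # received signal

taps = 11
w = np.zeros(taps, dtype=complex)
w[taps // 2] = 1.0                               # centre-spike initialisation
mu, R = 1e-3, 1.0                                # step size; R = E|s|^4 / E|s|^2
for n in range(taps, N):
    xn = x[n - taps:n][::-1]
    y = w @ xn
    w -= mu * (abs(y) ** 2 - R) * y * xn.conj()  # CMA stochastic gradient step

c = np.convolve(h, w)                            # combined channel + equalizer
peak = np.max(np.abs(c) ** 2)
isi = (np.sum(np.abs(c) ** 2) - peak) / peak     # residual ISI
print("residual ISI:", round(float(10 * np.log10(isi)), 1), "dB")

d = int(np.argmax(np.abs(c)))                    # overall system delay
out = np.convolve(x, w)[d:N]                     # equalized output, aligned
noise = (out - c[d] * s[:N - d]).real            # convolutional noise
kurt = np.mean(noise**4) / np.mean(noise**2)**2 - 3
print("excess kurtosis:", round(float(kurt), 2)) # ~0 would look Gaussian
```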

10.
Active inference is an increasingly prominent paradigm in theoretical biology. It frames the dynamics of living systems as if they were solving an inference problem. This rests upon their flow towards some (non-equilibrium) steady state—or equivalently, their maximisation of the Bayesian model evidence for an implicit probabilistic model. For many models, these self-evidencing dynamics manifest as messages passed among elements of a system. Such messages resemble synaptic communication at a neuronal network level but could also apply to other network structures. This paper attempts to apply the same formulation to biochemical networks. The chemical computation that occurs in the regulation of metabolism relies upon sparse interactions between coupled reactions, where enzymes induce conditional dependencies between reactants. We will see that these reactions may be viewed as the movement of probability mass between alternative categorical states. When framed in this way, the master equations describing such systems can be reformulated in terms of their steady-state distribution. This distribution plays the role of a generative model, affording an inferential interpretation of the underlying biochemistry. Finally, we see that—in analogy with computational neurology and psychiatry—metabolic disorders may be characterized as false inference under aberrant prior beliefs.
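A minimal sketch of the reformulation described above: the generator (rate matrix) of a toy three-state master equation is solved for its steady-state distribution, which can then be read as a generative model's marginal, with surprisal as the quantity a self-evidencing system descends. The three states and their rates are invented.

```python
import numpy as np

# Q[i, j] = transition rate from state i to state j (i != j); the diagonal
# is set so rows sum to zero, making Q the generator of a master equation.
Q = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 3.0, 0.0]])
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: pi @ Q = 0 with pi summing to one.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(pi, 4))              # steady-state "prior" over categorical states
print(np.round(-np.log(pi), 3))     # surprisal / self-information of each state
```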

11.
The inference of causal relations between observable phenomena is paramount across scientific disciplines; however, the means for such an enterprise without experimental manipulation are limited. A commonly applied principle is that of the cause preceding and predicting the effect, taking into account other circumstances. Intuitively, when the temporal order of events is reversed, one would expect cause and effect to apparently switch roles. This was previously demonstrated in bivariate linear systems and used in the design of improved causal inference scores, whereas such behaviour in linear systems has been contrasted with nonlinear chaotic systems, where the inferred causal direction appears unchanged under time reversal. The present work explores the conditions under which this causal reversal happens—either perfectly, approximately, or not at all—using theoretical analysis, low-dimensional examples, and network simulations, focusing on the simplified yet illustrative linear vector autoregressive process of order one. We start with a theoretical analysis demonstrating that perfect coupling reversal under time reversal occurs only under very specific conditions, and follow up by constructing low-dimensional examples in which the dominant causal direction is in fact conserved rather than reversed. Finally, simulations of random as well as realistically motivated network coupling patterns from brain and climate applications show that the level of coupling reversal and conservation can be well predicted by the asymmetry and anormality indices introduced on the basis of the theoretical analysis. The consequences for causal inference are discussed.
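A minimal sketch of the VAR(1) calculation at the heart of this analysis: for a stationary process x_t = A x_{t-1} + e_t with covariance S solving the discrete Lyapunov equation S = A S A^T + Q, the time-reversed process is again VAR(1) with coefficient matrix S A^T S^{-1}, so one can check directly whether an imposed one-way coupling flips. The coupling matrix below is an invented example, not one from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.0],
              [0.4, 0.5]])           # coupling only from series 1 to series 2
Q = np.eye(2)                        # innovation covariance

S = solve_discrete_lyapunov(A, Q)    # stationary covariance: S = A S A^T + Q
A_rev = S @ A.T @ np.linalg.inv(S)   # coefficients of the time-reversed VAR(1)

print(np.round(A, 3))
print(np.round(A_rev, 3))            # both off-diagonals are nonzero: the
                                     # reversal is imperfect, as the text predicts
```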

12.
Dissipative accounts of structure formation show that the self-organisation of complex structures is thermodynamically favoured, whenever these structures dissipate free energy that could not be accessed otherwise. These structures therefore open transition channels for the state of the universe to move from a frustrated, metastable state to another metastable state of higher entropy. However, these accounts apply just as well to relatively simple dissipative systems, such as convection cells, hurricanes, candle flames, lightning strikes, or mechanical cracks, as they do to complex biological systems. Conversely, the interesting computational properties that characterize complex biological systems, such as efficient, predictive representations of environmental dynamics, can be linked to the thermodynamic efficiency of underlying physical processes. However, the potential mechanisms that underwrite the selection of dissipative structures with thermodynamically efficient subprocesses are not completely understood. We address these mechanisms by explaining how bifurcation-based, work-harvesting processes—required to sustain complex dissipative structures—might be driven towards thermodynamic efficiency. We first demonstrate a simple mechanism that leads to self-selection of efficient dissipative structures in a stochastic chemical reaction network, when the dissipated driving chemical potential difference is decreased. We then discuss how such a drive can emerge naturally in a hierarchy of self-similar dissipative structures, each feeding on the dissipative structures of a previous level, when moving away from the initial, driving disequilibrium.
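A minimal Gillespie-style sketch of the kind of driven stochastic chemical reaction network such arguments are tested on: fuel is dissipated through a pure leak and through a pathway coupled to assembling a decaying "structure". Species, rates, and the coupling are invented, and no claim is made that this reproduces the paper's selection mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)
state = {"F": 300, "W": 0, "S": 0}               # fuel, waste, structure

def propensities(s):
    return np.array([
        0.01 * s["F"],        # leak:    F -> W      (pure dissipation)
        0.02 * s["F"],        # coupled: F -> W + S  (work-harvesting pathway)
        0.05 * s["S"],        # decay:   S -> (nothing)
    ])

updates = [{"F": -1, "W": 1}, {"F": -1, "W": 1, "S": 1}, {"S": -1}]

t = 0.0
while t < 50.0 and state["F"] > 0:
    a = propensities(state)
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)               # waiting time to next event
    r = rng.choice(len(a), p=a / a0)             # which reaction fires
    for species, d in updates[r].items():
        state[species] += d

print(state)   # remaining fuel, dissipated waste, surviving structure
```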

13.
Electron crystallography is especially useful for studying the structure and function of membrane proteins, key molecules with important functions in neural and other cells. It is now an established technique for analyzing the structures of membrane proteins in lipid bilayers that closely simulate their natural biological environment. Utilizing cryo-electron microscopes with helium-cooled specimen stages, which were developed through a personal motivation to understand the functions of neural systems from a structural point of view, the structures of membrane proteins can be analyzed at resolutions higher than 3 Å. This review has four objectives. First, I introduce the new research field of structural physiology. Second, I recount some of the struggles involved in developing cryo-electron microscopes. Third, I review the structural and functional analyses of membrane proteins, mainly by electron crystallography using cryo-electron microscopes. Finally, I discuss multifunctional channels named “adhennels” based on structures analyzed using electron and X-ray crystallography.

14.
This paper is a review of a particular approach to the method of maximum entropy as a general framework for inference. The discussion emphasizes pragmatic elements in the derivation. An epistemic notion of information is defined in terms of its relation to the Bayesian beliefs of ideally rational agents. The method of updating from a prior to a posterior probability distribution is designed through an eliminative induction process. The logarithmic relative entropy is singled out as the unique tool for updating that (a) is of universal applicability, (b) recognizes the value of prior information, and (c) recognizes the privileged role played by the notion of independence in science. The resulting framework—the ME method—can handle arbitrary priors and arbitrary constraints. It includes the MaxEnt and Bayes’ rules as special cases and, therefore, unifies entropic and Bayesian methods into a single general inference scheme. The ME method goes beyond the mere selection of a single posterior and also addresses the question of how much less probable other distributions might be, which provides a direct bridge to the theories of fluctuations and large deviations.
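A minimal sketch of the MaxEnt special case inside this framework, using the classic Brandeis dice illustration rather than an example from the paper: the maximum-entropy distribution over die faces subject to a prescribed mean has exponential-family form, and its Lagrange multiplier can be found by one-dimensional root-finding.

```python
import numpy as np
from scipy.optimize import brentq

k = np.arange(1, 7)                          # die faces 1..6

def mean_of(lam):
    """Mean of the MaxEnt pmf p_k proportional to exp(-lam * k)."""
    p = np.exp(-lam * k)
    p /= p.sum()
    return (k * p).sum()

lam = brentq(lambda l: mean_of(l) - 4.5, -5.0, 5.0)   # fit the multiplier
p = np.exp(-lam * k)
p /= p.sum()
print(np.round(p, 4), round(float((k * p).sum()), 3))  # MaxEnt pmf with mean 4.5
```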

15.
In this treatment of random dynamical systems, we consider the existence—and identification—of conditional independencies at nonequilibrium steady state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions—and the functional form of the underlying densities—have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition—and polynomial expansions—to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified—using the accompanying Hessian—to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.
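A minimal sketch of the synchronisation-of-chaos idea invoked above, using the classic Pecora-Carroll construction rather than the paper's Helmholtz machinery: a response (y, z) subsystem that sees a drive Lorenz system only through the shared coordinate x (a stand-in for blanket states) synchronises with it from a very different initial condition. Parameters are the standard Lorenz values.

```python
sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.001

x, y, z = 1.0, 1.0, 1.0        # "external" drive system
yr, zr = -7.0, 2.0             # "internal" response subsystem, different start

for _ in range(200_000):       # forward Euler integration, 200 time units
    # The response subsystem is coupled to the drive only through x:
    yr, zr = (yr + dt * (x * (rho - zr) - yr),
              zr + dt * (x * yr - beta * zr))
    x, y, z = (x + dt * sigma * (y - x),
               y + dt * (x * (rho - z) - y),
               z + dt * (x * y - beta * z))

print(round(y - yr, 6), round(z - zr, 6))   # synchronisation errors near zero
```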

16.
The article argues that—at least in certain interpretations, such as the one assumed in this article under the heading of “reality without realism”—the quantum-theoretical situation appears as follows: while—in terms of probabilistic predictions—connected to and connecting the information obtained in quantum phenomena, the mathematics of quantum theory (QM or QFT), which is continuous, does not represent and is discontinuous with both the emergence of quantum phenomena and the physics of these phenomena, phenomena that are physically discontinuous with each other as well. These phenomena, and thus this information, are described by classical physics. All actually available information (in the mathematical sense of information theory) is classical: it is composed of units, such as bits, that are—or are contained in—entities described by classical physics. On the other hand, classical physics cannot predict this information when it is created, as manifested in measuring instruments, in quantum experiments, while quantum theory can. In this epistemological sense, this information is quantum. The article designates the discontinuity between quantum theory and the emergence of quantum phenomena the “Heisenberg discontinuity”, because it was introduced by W. Heisenberg along with QM, and the discontinuity between QM or QFT and the classical physics of quantum phenomena the “Bohr discontinuity”, because it was introduced as part of Bohr’s interpretation of quantum phenomena and QM, under the assumption of Heisenberg discontinuity. Combining both discontinuities precludes QM or QFT from being connected to either physical reality, that ultimately responsible for quantum phenomena, or that of these phenomena themselves, other than by means of probabilistic predictions concerning the information, classical in character, contained in quantum phenomena. The nature of quantum information is, in this view, defined by this situation. A major implication, discussed in the Conclusion, is the existence and arguably the necessity of two—classical and quantum—or with relativity, three and possibly more essentially different theories in fundamental physics.

17.
The heterogeneous graphical Granger model (HGGM) for causal inference among processes with distributions from an exponential family is efficient in scenarios where the number of time observations is much greater than the number of time series, normally by several orders of magnitude. However, in the case of “short” time series, the inference in HGGM often suffers from overestimation. To remedy this, we use the minimum message length (MML) principle to determine the causal connections in the HGGM. The minimum message length, as a Bayesian information-theoretic method for statistical model selection, applies Occam’s razor in the following way: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of the data is more likely to be correct. Based on the dispersion coefficient of the target time series and on the initial maximum likelihood estimates of the regression coefficients, we propose a minimum message length criterion to select the subset of time series causally connected with each target time series and derive its form for various exponential distributions. We propose two algorithms, a genetic-type algorithm (HMMLGA) and exHMML, to find the subset. We demonstrated the superiority of both algorithms in synthetic experiments with respect to the comparison methods Lingam, HGGM and the statistical framework Granger causality (SFGC). In the real-data experiments, we used the methods to discriminate between the pregnancy and labor phases using electrohysterogram data of Icelandic mothers from the PhysioNet database. We further analysed Austrian climatological measurements and their temporal interactions in rainy- and sunny-day scenarios. In both experiments, the results of HMMLGA had the most realistic interpretation with respect to the comparison methods. We provide our code in Matlab. To the best of our knowledge, this is the first work using the MML principle for causal inference in HGGM.
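A minimal sketch of the Occam's-razor logic behind message-length model selection, with a crude two-part cost (parameter cost plus data cost) standing in for the paper's exponential-family MML criterion: candidate parent sets for a synthetic target series are scored, and the shortest message identifies the truly causal series. All series and costs are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
target = np.zeros(n)
for t in range(1, n):
    target[t] = 0.8 * x1[t - 1] + rng.normal(scale=0.5)   # only x1 is causal

series = {"x1": x1, "x2": x2}

def message_length(parents):
    """Two-part cost: parameter cost + data cost (Gaussian code length)."""
    X = np.column_stack([np.ones(n - 1)] + [series[p][:-1] for p in parents])
    beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    rss = np.sum((target[1:] - X @ beta) ** 2)
    k = X.shape[1]
    return 0.5 * k * np.log(n - 1) + 0.5 * (n - 1) * np.log(rss / (n - 1))

for parents in ([], ["x1"], ["x2"], ["x1", "x2"]):
    print(parents, round(float(message_length(parents)), 1))
# The shortest message should select ["x1"]: its fit gain beats the extra
# parameter cost, while adding "x2" only lengthens the message.
```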

18.
Quantum candies (qandies) are a simple pedagogical model that intuitively describes many concepts from quantum information processing (QIP) without the need to understand or use superpositions, and without complex algebra. One topic in quantum cryptography that has gained research attention in recent years is quantum digital signatures (QDS), which involve protocols for securely signing classical bits using quantum methods. In this paper, we show how the “qandy model” can be used to describe three QDS protocols, providing an important and potentially practical example of the power of “superpositionless” quantum information processing for individuals without background knowledge in the field.

19.
This is an informal and sketchy review of five topical, somewhat unrelated subjects in quantitative finance and econophysics: (i) models of price changes; (ii) linear correlations and random matrix theory; (iii) non-linear dependence and copulas; (iv) high-frequency trading and market stability; and finally—but perhaps most importantly—(v) “radical complexity”, which prompts a scenario-based approach to macroeconomics relying heavily on Agent-Based Models. Some open questions and future research directions are outlined.

20.
We analyze the price return distributions of currency exchange rates, cryptocurrencies, and contracts for difference (CFDs) representing stock indices, stock shares, and commodities. Based on recent data from the years 2017–2020, we model the tails of the return distributions at different time scales using power-law, stretched exponential, and q-Gaussian functions. We focus on the fitted function parameters and how they change over the years, comparing our results with those from earlier studies, and find that, on time horizons of up to a few minutes, the so-called “inverse-cubic power-law” still constitutes an appropriate global reference. However, we no longer observe the hypothesized universal, constant acceleration of market time flow that previously manifested itself in an ever-faster convergence of empirical return distributions towards the normal distribution. Our results do not exclude such a scenario but rather suggest that other short-term processes related to the current market situation alter market dynamics and may mask it. Real market dynamics are associated with a continuous alternation of regimes with different statistical properties. An example is the COVID-19 pandemic outbreak, which had an enormous yet short-lived impact on financial markets. We also point out that two factors—the speed of market time flow and the magnitude of asset cross-correlations—while related (the larger the speed, the larger the cross-correlations on a given time scale), act in opposite directions with regard to the return distribution tails, which can affect the expected convergence of the distributions to the normal distribution.
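A minimal sketch of the tail analysis: a Hill estimator applied to synthetic Student-t returns, which have cubic tails by construction, recovers an exponent near the "inverse-cubic" benchmark. The synthetic series stands in for the 2017–2020 market data, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_t(df=3, size=200_000)   # cubic tails by construction

def hill(x, k=1000):
    """Hill estimate of the tail exponent from the k largest |x|."""
    tail = np.sort(np.abs(x))[-k:]             # ascending; tail[0] = threshold
    return 1.0 / np.mean(np.log(tail / tail[0]))

print(round(float(hill(returns)), 2))          # should land near 3
```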
