Similar Documents
 20 similar documents found (search time: 46 ms)
1.
In this treatment of random dynamical systems, we consider the existence—and identification—of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions—and the functional form of the underlying densities—have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition—and polynomial expansions—to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified—using the accompanying Hessian—to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.
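To make the Hessian criterion concrete, here is a minimal sketch (not the paper's code): if the surprisal J(x) = -ln p(x) of a steady-state density has a vanishing mixed partial derivative between internal and external states everywhere, those states are conditionally independent given the blanket. The polynomial surprisal below is hypothetical.

```python
# Minimal sketch of the Hessian test for a Markov blanket: with surprisal
# J(x) = -ln p(x), a vanishing mixed partial between internal (mu) and
# external (eta) states means eta and mu are conditionally independent
# given the blanket state b.
import sympy as sp

eta, b, mu = sp.symbols('eta b mu')

# Hypothetical polynomial surprisal: eta couples to b, b couples to mu,
# but there is no direct eta-mu term.
J = eta**2 + b**2 + mu**2 + eta*b + b*mu + sp.Rational(1, 10)*b**4

x = [eta, b, mu]
H = sp.Matrix([[sp.diff(J, xi, xj) for xj in x] for xi in x])
print(H)

# The (eta, mu) entry of the Hessian is identically zero, so the blanket
# condition holds; adding an eta*mu term to J would break it.
assert sp.simplify(H[0, 2]) == 0
```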

2.
Evolution is full of coevolving systems characterized by complex spatio-temporal interactions that lead to intertwined processes of adaptation. Yet, how adaptation across multiple levels of temporal scales and biological complexity is achieved remains unclear. Here, we formalize how evolutionary multi-scale processing underlying adaptation constitutes a form of metacognition flowing from definitions of metaprocessing in machine learning. We show (1) how the evolution of metacognitive systems can be expected when fitness landscapes vary on multiple time scales, and (2) how multiple time scales emerge during coevolutionary processes of sufficiently complex interactions. After defining a metaprocessor as a regulator with local memory, we prove that metacognition is more energetically efficient than purely object-level cognition when selection operates at multiple timescales in evolution. Furthermore, we show that existing modeling approaches to coadaptation and coevolution—here active inference networks, predator–prey interactions, coupled genetic algorithms, and generative adversarial networks—lead to multiple emergent timescales underlying forms of metacognition. Lastly, we show how coarse-grained structures emerge naturally in any resource-limited system, providing sufficient evidence for metacognitive systems to be a prevalent and vital component of (co-)evolution. Therefore, multi-scale processing is a necessary requirement for many evolutionary scenarios, leading to de facto metacognitive evolutionary outcomes.

3.
In many applications of interacting systems, we are only interested in the dynamic behavior of a subset of all possible active species. For example, this is true in combustion models (many transient chemical species are not of interest in a given reaction) and in epidemiological models (only certain subpopulations are consequential). Thus, it is common to use greatly reduced or partial models in which only the interactions among the species of interest are known. In this work, we explore the use of an embedded, sparse, and data-driven discrepancy operator to augment these partial interaction models. Preliminary results show that the model error caused by severe reductions—e.g., elimination of hundreds of terms—can be captured with sparse operators, built with only a small fraction of that number. The operator is embedded within the differential equations of the model, which allows the action of the operator to be interpretable. Moreover, it is constrained by available physical information and calibrated over many scenarios. These qualities of the discrepancy model—interpretability, physical consistency, and robustness to different scenarios—are intended to support reliable predictions under extrapolative conditions.
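As an illustration of the general idea (a sketch under simplifying assumptions, not the authors' operator), the snippet below augments a severely reduced one-dimensional model with a sparse discrepancy term fitted by L1-regularised regression over a small library of candidate terms; the "true" model and the library are hypothetical.

```python
# Illustrative sketch: augment a reduced model dx/dt = f_reduced(x) with a
# sparse discrepancy term d(x; theta), fitting theta by L1-regularised
# regression so that only a few library terms survive.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def f_true(x):        # hypothetical full model
    return -1.0 * x + 0.3 * x**2 - 0.05 * x**3

def f_reduced(x):     # severe reduction: keep only the linear term
    return -1.0 * x

# Observe residuals of the reduced model over many states/scenarios.
x = rng.uniform(-3, 3, size=500)
residual = f_true(x) - f_reduced(x)

# Library of candidate discrepancy terms to embed in the right-hand side.
library = np.column_stack([x, x**2, x**3, np.sin(x), np.exp(-x**2)])

fit = Lasso(alpha=1e-3).fit(library, residual)
print("sparse coefficients:", np.round(fit.coef_, 3))
# Only the x**2 and x**3 columns should be (near) active, so the learned
# operator stays interpretable: dx/dt ~ f_reduced(x) + 0.3 x^2 - 0.05 x^3.
```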

4.
The free energy principle (FEP) from neuroscience has recently gained traction as one of the most prominent brain theories that can emulate the brain’s perception and action in a bio-inspired manner. This gives the theory the potential to hold the key to general artificial intelligence. Leveraging this potential, this paper aims to bridge the gap between neuroscience and robotics by reformulating an FEP-based inference scheme—Dynamic Expectation Maximization (DEM)—into an algorithm that can perform simultaneous state, input, parameter, and noise hyperparameter estimation of any stable linear state space system subjected to colored noises. The resulting estimator was proved to take the form of an augmented coupled linear estimator. Using this mathematical formulation, we proved that the estimation steps have theoretical guarantees of convergence. The algorithm was rigorously tested in simulation on a wide variety of linear systems with colored noises. The paper concludes by demonstrating the superior performance of DEM for parameter estimation under colored noise in simulation, when compared to state-of-the-art estimators such as the subspace method, Prediction Error Minimization (PEM), and the Expectation Maximization (EM) algorithm. These results contribute to the applicability of DEM as a robust learning algorithm for safe robotic applications.

5.
We compare and contrast three different, but complementary views of “structure” and “pattern” in spatial processes. For definiteness and analytical clarity, we apply all three approaches to the simplest class of spatial processes: one-dimensional Ising spin systems with finite-range interactions. These noncritical systems are well-suited for this study since the change in structure as a function of system parameters is more subtle than that found in critical systems where, at a phase transition, many observables diverge, thereby making the detection of change in structure obvious. This survey demonstrates that the measures of pattern from information theory and computational mechanics differ from known thermodynamic and statistical mechanical functions. Moreover, they capture important structural features that are otherwise missed. In particular, a type of mutual information called the excess entropy—an information theoretic measure of memory—serves to detect ordered, low entropy density patterns. It is superior in several respects to other functions used to probe structure, such as magnetization and structure factors. ϵ-Machines—the main objects of computational mechanics—are seen to be the most direct approach to revealing the (group and semigroup) symmetries possessed by the spatial patterns and to estimating the minimum amount of memory required to reproduce the configuration ensemble, a quantity known as the statistical complexity. Finally, we argue that the information theoretic and computational mechanical analyses of spatial patterns capture the intrinsic computational capabilities embedded in spin systems—how they store, transmit, and manipulate configurational information to produce spatial structure.
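A minimal sketch of the excess-entropy estimate described here, using a persistent two-state Markov chain as a stand-in for a finite-range spin configuration; block entropies are estimated by simple counting, so the numbers are approximate.

```python
# Sketch of the excess entropy E = lim_{L->inf} [H(L) - L*h] for a 1-D
# binary (spin-like) sequence, estimated from block entropies H(L).
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Two-state chain with persistence p (an Ising-like nearest-neighbour bias).
p, N = 0.9, 100_000
s = np.empty(N, dtype=int)
s[0] = 0
for t in range(1, N):
    s[t] = s[t-1] if rng.random() < p else 1 - s[t-1]

def block_entropy(seq, L):
    """Shannon entropy (bits) of length-L blocks, estimated from counts."""
    counts = Counter(tuple(seq[i:i+L]) for i in range(len(seq) - L + 1))
    q = np.array(list(counts.values()), dtype=float)
    q /= q.sum()
    return -(q * np.log2(q)).sum()

H = [block_entropy(s, L) for L in range(1, 9)]
h = H[-1] - H[-2]           # entropy-rate estimate from the longest blocks
E = H[-1] - 8 * h           # excess-entropy estimate
print(f"h = {h:.3f} bits/site, E = {E:.3f} bits")
# For this chain h = H2(0.1) ~ 0.469 and E = H(1) - h ~ 0.531 bits: memory
# stored in the configuration, which magnetization alone would not reveal.
```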

6.
In theoretical biology, we are often interested in random dynamical systems—like the brain—that appear to model their environments. This can be formalized by appealing to the existence of a (possibly non-equilibrium) steady state, whose density preserves a conditional independence between a biological entity and its surroundings. From this perspective, the conditioning set, or Markov blanket, induces a form of vicarious synchrony between creature and world—as if one were modelling the other. However, this results in an apparent paradox. If all conditional dependencies between a system and its surroundings depend upon the blanket, how do we account for the mnemonic capacity of living systems? It might appear that any shared dependence upon past blanket states violates the independence condition, as the variables on either side of the blanket now share information not available from the current blanket state. This paper aims to resolve this paradox, and to demonstrate that conditional independence does not preclude memory. Our argument rests upon drawing a distinction between the dependencies implied by a steady state density, and the density dynamics of the system conditioned upon its configuration at a previous time. The interesting question then becomes: What determines the length of time required for a stochastic system to ‘forget’ its initial conditions? We explore this question for an example system, whose steady state density possesses a Markov blanket, through simple numerical analyses. We conclude with a discussion of the relevance for memory in cognitive systems like us.
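The following toy calculation (ours, not the paper's example system) illustrates one standard answer to the forgetting question: for a finite-state Markov process, the relaxation time is governed by the spectral gap of the transition matrix. The three-state matrix is hypothetical.

```python
# Sketch of 'forgetting' in a toy Markov chain: how fast does the
# distribution conditioned on an initial state relax to steady state?
# The timescale is set by the second-largest eigenvalue modulus.
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

evals = np.linalg.eigvals(P)
lam2 = sorted(np.abs(evals))[-2]          # second-largest modulus
tau = -1.0 / np.log(lam2)                 # relaxation (memory) timescale
print(f"|lambda_2| = {lam2:.4f}, forgetting time ~ {tau:.1f} steps")

# Explicit check: the distance between p(t | x0=0) and p(t | x0=2)
# decays roughly like lam2**t.
p0, p2 = np.eye(3)[0], np.eye(3)[2]
for t in (1, 10, 50):
    Pt = np.linalg.matrix_power(P, t)
    print(t, np.abs(p0 @ Pt - p2 @ Pt).sum())
```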

7.
Probabilistic inference—the process of estimating the values of unobserved variables in probabilistic models—has been used to describe various cognitive phenomena related to learning and memory. While the study of biological realizations of inference has focused on animal nervous systems, single-celled organisms also show complex and potentially “predictive” behaviors in changing environments. Yet, it is unclear how the biochemical machinery found in cells might perform inference. Here, we show how inference in a simple Markov model can be approximately realized, in real-time, using polymerizing biochemical circuits. Our approach relies on assembling linear polymers that record the history of environmental changes, where the polymerization process produces molecular complexes that reflect posterior probabilities. We discuss the implications of realizing inference using biochemistry, and the potential of polymerization as a form of biological information-processing.

8.
9.
One of the most effective image processing techniques is the use of convolutional neural networks that use convolutional layers. In each such layer, the value of the layer’s output signal at each point is a combination of the layer’s input signals corresponding to several neighboring points. To improve accuracy, researchers have developed a version of this technique in which only data from some of the neighboring points is processed. It turns out that the most efficient case—called dilated convolution—is when we select the neighboring points whose differences in both coordinates are divisible by some constant. In this paper, we explain this empirical efficiency by proving that for all reasonable optimality criteria, dilated convolution is indeed better than the possible alternatives.
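For concreteness, here is a minimal NumPy sketch of the neighbour-selection rule just described: a two-dimensional dilated convolution that combines only input points whose coordinate offsets are multiples of a dilation constant d.

```python
# Minimal sketch of a 2-D dilated convolution: the output at each point
# combines input values whose coordinate offsets are multiples of d.
import numpy as np

def dilated_conv2d(x, kernel, d=2):
    """Valid-mode 2-D convolution with dilation factor d (no padding/stride)."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * d + 1, (kw - 1) * d + 1   # effective receptive field
    H, W = x.shape[0] - eh + 1, x.shape[1] - ew + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # sample only points whose offsets are divisible by d
            patch = x[i:i + eh:d, j:j + ew:d]
            out[i, j] = (patch * kernel).sum()
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0
print(dilated_conv2d(x, k, d=2))   # 2x2 output: receptive field is 5x5
```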

10.
11.
Active inference is a normative framework for explaining behaviour under the free energy principle—a theory of self-organisation originating in neuroscience. It specifies neuronal dynamics for state-estimation in terms of a descent on (variational) free energy—a measure of the fit between an internal (generative) model and sensory observations. The free energy gradient is a prediction error—plausibly encoded in the average membrane potentials of neuronal populations. Conversely, the expected probability of a state can be expressed in terms of neuronal firing rates. We show that this is consistent with current models of neuronal dynamics and establish face validity by synthesising plausible electrophysiological responses. We then show that these neuronal dynamics approximate natural gradient descent, a well-known optimisation algorithm from information geometry that follows the steepest descent of the objective in information space. We compare the information length of belief updating in both schemes, a measure of the distance travelled in information space that has a direct interpretation in terms of metabolic cost. We show that neural dynamics under active inference are metabolically efficient and suggest that neural representations in biological agents may evolve by approximating steepest descent in information space towards the point of optimal inference.
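The contrast between a vanilla gradient step and a natural gradient step, and the notion of information length, can be made concrete with a toy model (ours, not the paper's simulations): descent on a KL objective over the mean of a fixed-covariance Gaussian, where the Fisher metric is diagonal.

```python
# Sketch contrasting vanilla and natural gradient descent for a Gaussian
# model N(mu, Sigma) with fixed diagonal Sigma. The objective is a KL
# divergence to a target mean; the Fisher metric is diag(1/sigma_i^2), and
# the natural gradient preconditions by its inverse.
import numpy as np

sigma2 = np.array([1.0, 9.0])        # fixed variances (hypothetical)
target = np.array([1.0, 1.0])        # target mean

def grad(mu):                        # gradient of the KL w.r.t. mu
    return (mu - target) / sigma2

def run(natural, lr=0.1, steps=2000):
    mu = np.zeros(2)
    length = 0.0                     # information length along the path
    for _ in range(steps):
        step = -lr * grad(mu)
        if natural:
            step *= sigma2           # precondition by inverse Fisher metric
        length += np.sqrt((step**2 / sigma2).sum())  # ds = sqrt(dmu' F dmu)
        mu = mu + step
    return mu, length

for natural in (False, True):
    mu, L = run(natural)
    print(f"natural={natural}: mu={np.round(mu, 3)}, info length={L:.3f}")
# Both schemes reach the target, but the natural-gradient path follows the
# geodesic and is shorter in information space (about 1.05 versus roughly
# 1.3 here), i.e. metabolically cheaper in the paper's reading.
```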

12.
Active inference is an increasingly prominent paradigm in theoretical biology. It frames the dynamics of living systems as if they were solving an inference problem. This rests upon their flow towards some (non-equilibrium) steady state—or equivalently, their maximisation of the Bayesian model evidence for an implicit probabilistic model. For many models, these self-evidencing dynamics manifest as messages passed among elements of a system. Such messages resemble synaptic communication at a neuronal network level but could also apply to other network structures. This paper attempts to apply the same formulation to biochemical networks. The chemical computation that occurs in regulation of metabolism relies upon sparse interactions between coupled reactions, where enzymes induce conditional dependencies between reactants. We will see that these reactions may be viewed as the movement of probability mass between alternative categorical states. When framed in this way, the master equations describing such systems can be reformulated in terms of their steady-state distribution. This distribution plays the role of a generative model, affording an inferential interpretation of the underlying biochemistry. Finally, we see that—in analogy with computational neurology and psychiatry—metabolic disorders may be characterized as false inference under aberrant prior beliefs.
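A minimal sketch of the reformulation described here, using a hypothetical rate matrix: the master equation dp/dt = Qp over categorical states has a steady state in the null space of Q, and the negative log of that steady state serves as the surprisal of the implicit generative model.

```python
# Sketch: from a master equation dp/dt = Q p over categorical chemical
# states to the steady-state density, via the null space of Q. The
# steady-state surprisal -ln p_ss plays the role of a generative model.
import numpy as np
from scipy.linalg import null_space

# Hypothetical rate matrix for 3 coupled states (columns sum to 0;
# off-diagonal Q[i, j] is the rate of state j -> state i).
Q = np.array([[-1.0,  0.5,  0.2],
              [ 0.8, -0.7,  0.3],
              [ 0.2,  0.2, -0.5]])

p_ss = null_space(Q)[:, 0]
p_ss = p_ss / p_ss.sum()          # normalise the stationary distribution
surprisal = -np.log(p_ss)

print("steady state:", np.round(p_ss, 3))
print("surprisal   :", np.round(surprisal, 3))
# States with low surprisal are the 'preferred' configurations the system
# looks as if it is inferring as probability mass flows between categories.
```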

13.
We illustrate how, contrary to common belief, transient Fluctuation Relations (FRs) for systems in constant external magnetic field hold without the inversion of the field. Building on previous work providing generalized time-reversal symmetries for systems in parallel external magnetic and electric fields, we observe that the standard proof of these important nonequilibrium properties can be fully reinstated in the presence of net dissipation. This generalizes recent results for the FRs in orthogonal fields—an interesting but less commonly investigated geometry—and enables direct comparison with existing literature. We also present for the first time a numerical demonstration of the validity of the transient FRs with nonzero magnetic field via nonequilibrium molecular dynamics simulations of a realistic model of liquid NaCl.

14.
Causal Geometry     
Information geometry has offered a way to formally study the efficacy of scientific models by quantifying the impact of model parameters on the predicted effects. However, there has been little formal investigation of causation in this framework, despite causal models being a fundamental part of science and explanation. Here, we introduce causal geometry, which formalizes not only how outcomes are impacted by parameters, but also how the parameters of a model can be intervened upon. To this end, we introduce a geometric version of “effective information”—a known measure of the informativeness of a causal relationship. We show that it is given by the matching between the space of effects and the space of interventions, in the form of their geometric congruence. Therefore, given a fixed intervention capability, an effective causal model is one that is well matched to those interventions. This is a consequence of “causal emergence,” wherein macroscopic causal relationships may carry more information than “fundamental” microscopic ones. We thus argue that a coarse-grained model may, paradoxically, be more informative than the microscopic one, especially when it better matches the scale of accessible interventions—as we illustrate on toy examples.
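A small numerical sketch of effective information (following Hoel-style definitions, with a toy transition matrix of our own choosing) shows the coarse-graining effect directly.

```python
# Sketch of 'effective information' (EI) for a transition-probability
# matrix: EI is the mutual information between cause and effect when
# interventions are uniform over states. Coarse-graining the degenerate
# micro states below raises EI, a minimal instance of causal emergence.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def effective_information(tpm):
    """EI = H(average effect) - mean over states of H(effect | do(state))."""
    avg_effect = tpm.mean(axis=0)                  # uniform interventions
    return entropy(avg_effect) - np.mean([entropy(row) for row in tpm])

# Micro model: states {0,1,2} scatter uniformly among themselves; state 3
# maps to itself deterministically.
micro = np.array([[1/3, 1/3, 1/3, 0.0]] * 3 + [[0.0, 0.0, 0.0, 1.0]])

# Macro model after grouping {0,1,2} -> A and {3} -> B: both deterministic.
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(f"EI micro = {effective_information(micro):.3f} bits")   # ~ 0.811
print(f"EI macro = {effective_information(macro):.3f} bits")   # = 1.000
# The macro description carries more effective information, matching the
# claim that a coarse-grained model can be more informative.
```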

15.
Dissipative accounts of structure formation show that the self-organisation of complex structures is thermodynamically favoured, whenever these structures dissipate free energy that could not be accessed otherwise. These structures therefore open transition channels for the state of the universe to move from a frustrated, metastable state to another metastable state of higher entropy. However, these accounts apply as well to relatively simple, dissipative systems, such as convection cells, hurricanes, candle flames, lightning strikes, or mechanical cracks, as they do to complex biological systems. Conversely, interesting computational properties—that characterize complex biological systems, such as efficient, predictive representations of environmental dynamics—can be linked to the thermodynamic efficiency of underlying physical processes. However, the potential mechanisms that underwrite the selection of dissipative structures with thermodynamically efficient subprocesses are not completely understood. We address these mechanisms by explaining how bifurcation-based, work-harvesting processes—required to sustain complex dissipative structures—might be driven towards thermodynamic efficiency. We first demonstrate a simple mechanism that leads to self-selection of efficient dissipative structures in a stochastic chemical reaction network, when the dissipated driving chemical potential difference is decreased. We then discuss how such a drive can emerge naturally in a hierarchy of self-similar dissipative structures, each feeding on the dissipative structures of a previous level, when moving away from the initial, driving disequilibrium.
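The class of model referred to here, a driven stochastic chemical reaction network, is typically sampled with the Gillespie algorithm. The sketch below (with hypothetical species and rates, not the authors' network) shows only that basic simulation machinery, not the efficiency-selection mechanism itself.

```python
# Generic Gillespie simulation of a small driven reaction network, the
# class of stochastic chemical model the abstract refers to.
import numpy as np

rng = np.random.default_rng(2)

# Reactions: F -> A (feeding on the chemical drive), A -> W (dissipation),
# 2A -> A2 and A2 -> 2A (an internal 'structure-forming' step).
stoich = np.array([      # columns: F, A, A2, W
    [-1, +1,  0,  0],    # F -> A
    [ 0, -1,  0, +1],    # A -> W
    [ 0, -2, +1,  0],    # 2A -> A2
    [ 0, +2, -1,  0],    # A2 -> 2A
])
rates = np.array([1.0, 0.1, 0.05, 0.2])

def propensities(x):
    F, A, A2, W = x
    return rates * np.array([F, A, A * (A - 1) / 2, A2])

x, t = np.array([300, 0, 0, 0]), 0.0
while t < 20.0:
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)          # time to next reaction
    j = rng.choice(len(a), p=a / a0)        # which reaction fires
    x = x + stoich[j]

print(f"t = {t:.2f}, state (F, A, A2, W) = {tuple(x)}")
```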

16.
Psychotherapy involves the modification of a client’s worldview to reduce distress and enhance well-being. We take a human dynamical systems approach to modeling this process, using Reflexively Autocatalytic foodset-derived (RAF) networks. RAFs have been used to model the self-organization of adaptive networks associated with the origin and early evolution of both biological life, as well as the evolution and development of the kind of cognitive structure necessary for cultural evolution. The RAF approach is applicable in these seemingly disparate cases because it provides a theoretical framework for formally describing under what conditions systems composed of elements that interact and ‘catalyze’ the formation of new elements collectively become integrated wholes. In our application, the elements are mental representations, and the whole is a conceptual network. The initial components—referred to as foodset items—are mental representations that are innate, or were acquired through social learning or individual learning (of pre-existing information). The new elements—referred to as foodset-derived items—are mental representations that result from creative thought (resulting in new information). In clinical psychology, a client’s distress may be due to, or exacerbated by, one or more beliefs that diminish self-esteem. Such beliefs may be formed and sustained through distorted thinking, and the tendency to interpret ambiguous events as confirmation of these beliefs. We view psychotherapy as a creative collaborative process between therapist and client, in which the output is not an artwork or invention but a more well-adapted worldview and approach to life on the part of the client. In this paper, we model a hypothetical albeit representative example of the formation and dissolution of such beliefs over the course of a therapist–client interaction using RAF networks. We show how the therapist is able to elicit this worldview from the client and create a conceptualization of the client’s concerns. We then formally demonstrate four distinct ways in which the therapist is able to facilitate change in the client’s worldview: (1) challenging the client’s negative interpretations of events, (2) providing direct evidence that runs contrary to and counteracts the client’s distressing beliefs, (3) using self-disclosure to provide examples of strategies one can use to defuse a negative conclusion, and (4) reinforcing the client’s attempts to assimilate such strategies into their own ways of thinking. We then discuss the implications of such an approach to expanding our knowledge of the development of mental health concerns and the trajectory of therapeutic change.
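The RAF condition itself is algorithmically checkable. Below is a minimal sketch of the standard maxRAF reduction (after Hordijk and Steel) applied to a toy network whose items stand in for mental representations; the reactions and foodset are hypothetical.

```python
# Sketch of the maxRAF reduction: repeatedly discard 'reactions' that are
# not catalysed, or whose reactants are not reachable from the foodset,
# until a self-sustaining (RAF) core remains.

# Each reaction: (reactants, products, catalysts).
reactions = {
    "r1": ({"f1", "f2"}, {"b1"}, {"b2", "f1"}),
    "r2": ({"f2", "b1"}, {"b2"}, {"b1"}),
    "r3": ({"b1", "b2"}, {"b3"}, {"x9"}),   # catalyst never produced
}
foodset = {"f1", "f2"}

def closure(food, rs):
    """All elements producible from the foodset using reactions in rs."""
    present = set(food)
    changed = True
    while changed:
        changed = False
        for reac, prod, _ in rs.values():
            if reac <= present and not prod <= present:
                present |= prod
                changed = True
    return present

rs = dict(reactions)
while True:
    avail = closure(foodset, rs)
    keep = {name: r for name, r in rs.items()
            if r[0] <= avail and r[2] & avail}  # reachable and catalysed
    if keep == rs:
        break
    rs = keep

print("maxRAF:", sorted(rs))   # r3 drops out: its catalyst is unreachable
```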

17.
For a large ensemble of complex systems, a Many-System Problem (MSP) studies how heterogeneity constrains and hides structural mechanisms, and how to uncover and reveal hidden major factors from homogeneous parts. All member systems in an MSP share common governing principles of dynamics, but differ in idiosyncratic characteristics. A typical dynamic is found underlying response features with respect to covariate features of quantitative or qualitative data types. Neither all-system-as-one-whole nor individual system-specific functional structures are assumed in such response-vs-covariate (Re–Co) dynamics. We developed a computational protocol for identifying various collections of major factors of various orders underlying Re–Co dynamics. We first demonstrate the immanent effects of heterogeneity among member systems, which constrain compositions of major factors and even hide essential ones. Secondly, we show that fuller collections of major factors are discovered by breaking heterogeneity into many homogeneous parts. This process further realizes Anderson’s “More is Different” phenomenon. We employ the categorical nature of all features and develop a Categorical Exploratory Data Analysis (CEDA)-based major factor selection protocol. Information theoretical measurements—conditional mutual information and entropy—are heavily used in two selection criteria: C1—confirmable and C2—irreplaceable. All conditional entropies are evaluated through contingency tables with algorithmically computed reliability against the finite sample phenomenon. We study one artificially designed MSP and then two real collectives of Major League Baseball (MLB) pitching dynamics with 62 slider pitchers and 199 fastball pitchers, respectively. Finally, our MSP data-analysis techniques are applied to resolve a scientific issue related to the Rosenberg Self-Esteem Scale.
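A minimal sketch of the information measure at the heart of the C1/C2 criteria: conditional mutual information I(X;Y|Z) computed from a three-way contingency table (the counts here are synthetic).

```python
# Sketch of conditional mutual information I(X;Y|Z) estimated from a
# three-way contingency table of categorical features.
import numpy as np

# counts[x, y, z]: joint contingency table for categorical X, Y, Z.
rng = np.random.default_rng(3)
counts = rng.integers(1, 50, size=(3, 4, 2)).astype(float)

p = counts / counts.sum()
pz  = p.sum(axis=(0, 1))
pxz = p.sum(axis=1)
pyz = p.sum(axis=0)

# I(X;Y|Z) = sum_{x,y,z} p(x,y,z) log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
cmi = 0.0
for x in range(p.shape[0]):
    for y in range(p.shape[1]):
        for z in range(p.shape[2]):
            if p[x, y, z] > 0:
                cmi += p[x, y, z] * np.log2(
                    p[x, y, z] * pz[z] / (pxz[x, z] * pyz[y, z]))
print(f"I(X;Y|Z) = {cmi:.4f} bits")
# In a major-factor search, a candidate feature is kept when conditioning
# on it reliably changes such measures (C1: confirmable) and no other
# feature can substitute for it (C2: irreplaceable).
```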

18.
Molecular Physics, 2012, 110(11–12): 1069–1079
We present a detailed study on the finite size scaling behaviour of thermodynamic properties for small systems of particles embedded in a reservoir. Previously, we derived that the leading finite size effects of thermodynamic properties for small systems scale with the inverse of the linear length of the small system, and we showed how this can be used to describe systems in the thermodynamic limit [Chem. Phys. Lett. 504, 199 (2011)]. This approach takes into account an effective surface energy, as a result of the non-periodic boundaries of the small embedded system. Deviations from the linear behaviour occur when the small system becomes very small, i.e., smaller than three times the particle diameter in each direction. At this scale, so-called nook and corner effects will become important. In this work, we present a detailed analysis to explain this behaviour. In addition, we present a model for the finite size scaling when the size of the small system is of the same order of magnitude as the reservoir. The developed theory is validated using molecular simulations of systems containing Lennard-Jones and WCA particles, and leads to significant improvements over our previous approach. Our approach eventually leads to an efficient method to compute the thermodynamic factor of macroscopic systems from finite size scaling, which is, for example, required for converting between Fick and Maxwell–Stefan transport diffusivities.
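A minimal sketch of the 1/L extrapolation described above: small-system properties scale linearly in the inverse linear size, so a fit over several L gives the thermodynamic-limit value at 1/L -> 0. The "measured" values below are synthetic stand-ins for simulation output.

```python
# Sketch of the 1/L finite-size-scaling extrapolation for a small-system
# thermodynamic property (e.g. the thermodynamic factor).
import numpy as np

L = np.array([4.0, 6.0, 8.0, 12.0, 16.0])        # small-system sizes
# Synthetic data with a surface (1/L) term plus noise.
gamma_inf, a = 1.30, -0.80
rng = np.random.default_rng(4)
gamma_L = gamma_inf + a / L + rng.normal(0, 0.005, L.size)

# Linear fit in x = 1/L; the intercept estimates the thermodynamic limit.
slope, intercept = np.polyfit(1.0 / L, gamma_L, 1)
print(f"Gamma(L -> inf) = {intercept:.3f} (true value {gamma_inf})")
# Points with L below about three particle diameters would bend away from
# this line (nook and corner effects) and are excluded from the fit.
```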

19.
We discuss how to construct a direct and experientially natural path to entropy as an extensive quantity of a macroscopic theory of thermal systems and processes. The scientific aspects of this approach are based upon continuum thermodynamics. We ask what the roots of an experientially natural approach might be—to this end we investigate and describe in some detail (a) how humans experience and conceptualize an extensive thermal quantity (i.e., an amount of heat), and (b) how this concept evolved during the early development of the science of thermal phenomena (beginning with the Experimenters of the Accademia del Cimento and ending with Sadi Carnot). We show that a direct approach to entropy, as the extensive quantity of models of thermal systems and processes, is possible and how it can be applied to the teaching of thermodynamics for various audiences.

20.
The dependability of systems and networks has been the target of research for many years now. In the 1970s, what is now known as the top conference on dependability—the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)—emerged, gathering international researchers and sparking the interest of the scientific community. Although it started in niche systems, nowadays dependability is viewed as highly important in most computer systems. The goal of this work is to analyze the research published in the proceedings of well-established dependability conferences (i.e., DSN, the International Symposium on Software Reliability Engineering (ISSRE), the International Symposium on Reliable Distributed Systems (SRDS), the European Dependable Computing Conference (EDCC), the Latin-American Symposium on Dependable Computing (LADC), and the Pacific Rim International Symposium on Dependable Computing (PRDC)), using Natural Language Processing (NLP), specifically the Latent Dirichlet Allocation (LDA) algorithm, to identify active, collapsing, ephemeral, and new lines of research in the dependability field. Results show a strong emphasis on terms like ‘security’, despite the general focus of the conferences on dependability, and new trends related to ‘machine learning’ and ‘blockchain’. We used the PRDC conference as a use case, which showed similarity with the overall set of conferences, although we also found specific terms, like ‘cyber-physical’, that are popular at PRDC but not in the overall dataset.
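A minimal sketch of this pipeline with scikit-learn (the five-document corpus is a hypothetical stand-in for conference proceedings): vectorise the texts, fit LDA, and read off the top terms per topic to label research lines.

```python
# Sketch of the LDA topic-modeling pipeline described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "fault injection for dependability benchmarking",
    "security of distributed systems under intrusion",
    "machine learning for failure prediction",
    "blockchain consensus and byzantine fault tolerance",
    "software reliability growth modeling",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
# Tracking topic prevalence per conference and per year is what separates
# active, collapsing, ephemeral, and new lines of research.
```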
