Similar Documents
20 similar documents found.
1.
In this paper, we begin by introducing a novel scale mixture of normal distribution whose leptokurticity and fat-tailedness are only local, with this “locality” controlled separately by two censoring parameters. This new, locally leptokurtic and fat-tailed (LLFT) distribution is a viable alternative to the globally leptokurtic, fat-tailed and symmetric distributions typically entertained in financial volatility modelling. We then incorporate the LLFT distribution into a basic stochastic volatility (SV) model to yield a flexible alternative to common heavy-tailed SV models. For the resulting LLFT-SV model, we develop a Bayesian statistical framework and effective MCMC methods to enable posterior sampling of the parameters and latent variables. Empirical results indicate the validity of the LLFT-SV specification for modelling both “non-standard” financial time series with repeating zero returns and more “typical” data on the S&P 500 and DAX indices. For the former, the LLFT-SV model is also shown to markedly outperform a common, globally heavy-tailed, t-SV alternative in terms of density forecasting. Applications of the proposed distribution in more advanced SV models appear readily attainable.
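As a point of reference only, the sketch below simulates the generic template such models build on: a basic SV model whose return innovation is a scale mixture of normals. The abstract does not give the LLFT construction, so an inverse-gamma mixing variable (yielding a Student-t innovation) stands in for the LLFT mixing law; function and parameter names are illustrative assumptions.

```python
import numpy as np

def simulate_sv(T, mu=-1.0, phi=0.97, sigma_eta=0.2, nu=5.0, seed=0):
    """Simulate a basic stochastic volatility model.

    Log-volatility follows an AR(1):  h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t.
    Returns are y_t = exp(h_t / 2) * eps_t, where eps_t is a scale mixture of
    normals; here the mixing variable `lam` is inverse-gamma (a Student-t
    innovation), standing in for the paper's LLFT mixing law.
    """
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    y = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
    for t in range(T):
        if t > 0:
            h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
        lam = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu)  # mixing variable
        y[t] = np.exp(h[t] / 2) * np.sqrt(lam) * rng.standard_normal()
    return y, h

returns, log_vol = simulate_sv(1000)
```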

2.
Extraction of subsets of highly connected nodes (“communities” or modules) is a standard step in the analysis of complex social and biological networks. We here consider the problem of finding a relatively small set of nodes in two labeled weighted graphs that is highly connected in both. While many scoring functions and algorithms tackle the problem, the typically high computational cost of the permutation testing required to establish a p-value for the observed pattern presents a major practical obstacle. To address this problem, we here extend the recently proposed CTD (“Connect the Dots”) approach to establish information-theoretic upper bounds on the p-values and lower bounds on the size and connectedness of communities that are detectable. This innovation broadens the applicability of CTD to pairs of graphs.
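To make the cost of that step concrete, here is a minimal sketch of a label-permutation test for the connectedness of a candidate node set in two weighted graphs. The score function and all names are illustrative placeholders, not the CTD scoring function; the information-theoretic bounds of the paper are precisely what allow one to avoid this loop.

```python
import numpy as np

def within_weight(W1, W2, nodes):
    """Illustrative connectedness score: total within-set edge weight in both graphs."""
    idx = np.ix_(nodes, nodes)
    return W1[idx].sum() + W2[idx].sum()

def permutation_pvalue(score_fn, W1, W2, node_set, n_perm=1000, seed=0):
    """Empirical p-value for `node_set` by permuting node labels; each of the
    n_perm iterations re-evaluates the score, which is what makes this costly."""
    rng = np.random.default_rng(seed)
    n = W1.shape[0]
    observed = score_fn(W1, W2, node_set)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        permuted_set = [perm[i] for i in node_set]
        if score_fn(W1, W2, permuted_set) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one correction
```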

3.
In this paper, I investigate a connection between a common characterisation of freedom and how uncertainty is managed in a Bayesian hierarchical model. To do this, I consider a distributed factorization of a group’s optimization of free energy, in which each agent is attempting to align with the group and with its own model. I show how this can lead to equilibria for groups, defined by the capacity of the model being used, essentially how many different datasets it can handle. In particular, I show that there is a “sweet spot” in the capacity of a normal model in each agent’s decentralized optimization, and that this “sweet spot” corresponds to minimal free energy for the group. At the sweet spot, an agent can predict what the group will do and the group is not surprised by the agent. However, there is an asymmetry. A higher capacity model for an agent makes it harder for the individual to learn, as there are more parameters. Simultaneously, a higher capacity model for the group, implemented as a higher capacity model for each member agent, makes it easier for a group to integrate a new member. To optimize for a group of agents then requires a trade-off in capacity, as each individual agent seeks to decrease capacity, but there is pressure from the group to increase the capacity of all members. This pressure exists because as individual agents’ capacities are reduced, so too are their abilities to model other agents, and thereby to establish pro-social behavioural patterns. I then consider a basic two-level (dual process) Bayesian model of social reasoning and a set of three capacity parameters required to implement such a model. Considering these three capacities as dependent elements in a free energy minimization for a group leads to a “sweet surface” in a three-dimensional space defining the triplet of parameters that each agent must use should they hope to minimize free energy as a group. Finally, I relate these three parameters to three notions of freedom and equality in human social organization, and postulate a correspondence between freedom and model capacity. That is, models with higher capacity have more freedom, as they can interact with more datasets.

4.
“No free lunch” results state that it is impossible to obtain meaningful bounds on the error of a learning algorithm without prior assumptions and modelling, which may be more or less realistic for a given problem. Some models are “expensive” (strong assumptions, such as sub-Gaussian tails), others are “cheap” (simply finite variance). As is well known, the more you pay, the more you get: in other words, the most expensive models yield the most interesting bounds. Recent advances in robust statistics have investigated procedures to obtain tight bounds while keeping the cost of assumptions minimal. The present paper explores and exhibits the limits of obtaining tight probably approximately correct (PAC)-Bayes bounds in a robust setting for cheap models.

5.
6.
Integrated information has recently been suggested as a possible measure to identify a necessary condition for a system to display conscious features. Recently, we have shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (a discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical “whole minus sum” version of integrated information in comparison to the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature. This motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures are found to be largely similar: they converge to each other asymptotically as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims to create a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information.
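For orientation, one standard form of the empirical “whole minus sum” measure, written here for a bipartition into parts M1 and M2 and a time lag tau (a common textbook convention that may differ in detail from the paper's), is

```latex
\Phi_{\mathrm{WMS}} \;=\; I\!\left(X_{t-\tau};\, X_{t}\right) \;-\; \sum_{i=1}^{2} I\!\left(M^{i}_{t-\tau};\, M^{i}_{t}\right).
```

A negative value signals that redundancy between the parts outweighs synergy, which is the “net synergy” reading of the sign transition mentioned above.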

7.
We introduce the Redundant Information Neural Estimator (RINE), a method that allows efficient estimation of the component of information about a target variable that is common to a set of sources, known as the “redundant information”. We show that existing definitions of the redundant information can be recast in terms of an optimization over a family of functions. In contrast to previous information decompositions, which can only be evaluated for discrete variables over small alphabets, we show that optimizing over functions enables the approximation of the redundant information for high-dimensional and continuous predictors. We demonstrate this on high-dimensional image classification and motor-neuroscience tasks.

8.
We present a critique of the many-world interpretation of quantum mechanics, based on different “pictures” that describe the time evolution of an isolated quantum system. Without an externally imposed frame to restrict these possible pictures, the theory cannot yield non-trivial interpretational statements. This is analogous to Goodman’s famous “grue-bleen” problem of language and induction. Using a general framework applicable to many kinds of dynamical theories, we try to identify the kind of additional structure (if any) required for the meaningful interpretation of a theory. We find that the “grue-bleen” problem is not restricted to quantum mechanics, but also affects other theories including classical Hamiltonian mechanics. For all such theories, absent external frame information, an isolated system has no interpretation.

9.
Causal Geometry     
Information geometry has offered a way to formally study the efficacy of scientific models by quantifying the impact of model parameters on the predicted effects. However, there has been little formal investigation of causation in this framework, despite causal models being a fundamental part of science and explanation. Here, we introduce causal geometry, which formalizes not only how outcomes are impacted by parameters, but also how the parameters of a model can be intervened upon. To this end, we introduce a geometric version of “effective information”, a known measure of the informativeness of a causal relationship. We show that it is given by the matching between the space of effects and the space of interventions, in the form of their geometric congruence. Therefore, given a fixed intervention capability, an effective causal model is one that is well matched to those interventions. This is a consequence of “causal emergence,” wherein macroscopic causal relationships may carry more information than “fundamental” microscopic ones. We thus argue that a coarse-grained model may, paradoxically, be more informative than the microscopic one, especially when it better matches the scale of accessible interventions, as we illustrate on toy examples.
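For reference, the standard (non-geometric) definition of effective information, which the geometric version is presumably built to generalize, evaluates the mutual information between cause and effect when the cause variable is set by intervention to the maximum-entropy (uniform) distribution:

```latex
\mathrm{EI}(X \to Y) \;=\; I(X;Y)\,\big|_{\,\mathrm{do}(X)\,\sim\,\mathrm{Unif}}
\;=\; \frac{1}{|X|}\sum_{x,\,y} p\bigl(y \mid \mathrm{do}(x)\bigr)\,
\log_2 \frac{p\bigl(y \mid \mathrm{do}(x)\bigr)}{\tfrac{1}{|X|}\sum_{x'} p\bigl(y \mid \mathrm{do}(x')\bigr)} .
```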

10.
Entropy indicates the irregularity or randomness of a dynamic system. Over the decades, entropy calculated at different scales of the system, through subsampling or coarse graining, has been used as a surrogate measure of system complexity. One popular multi-scale entropy analysis is multi-scale sample entropy (MSE), which calculates entropy through the sample entropy (SampEn) formula at each time scale. SampEn is defined by the “logarithmic likelihood” that small sections of the data (within a window of length m) that “match” other sections will still “match” them if the window length increases by one. A “match” is defined by a threshold of r times the standard deviation of the entire time series. A problem with the current MSE algorithm is that the SampEn calculations at all scales use the same matching threshold, defined from the original time series, even though the data standard deviation actually changes with the subsampling scale. Using a fixed threshold automatically introduces a systematic bias into the calculated results. The purpose of this paper is to present this systematic bias mathematically and to provide methods for correcting it. Our work will help the large MSE user community avoid introducing this bias into their multi-scale SampEn calculations.
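A minimal sketch of the coarse-graining step and the threshold choice at issue is given below; it is not the paper's correction method, only an illustration. With fixed_threshold=True the tolerance is taken from the original series (the biased variant the abstract describes); with False it is recomputed from each coarse-grained series, one straightforward way to keep r tied to the scale-dependent standard deviation.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages over windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r): -log of the conditional probability that sequences matching
    for m points (Chebyshev distance <= r) still match for m + 1 points."""
    x = np.asarray(x, dtype=float)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def mse(x, scales, m=2, r_factor=0.15, fixed_threshold=True):
    """Multi-scale sample entropy with either a fixed or a scale-adapted threshold."""
    r0 = r_factor * np.std(x)  # threshold from the ORIGINAL series
    out = []
    for s in scales:
        xs = coarse_grain(x, s)
        r = r0 if fixed_threshold else r_factor * np.std(xs)
        out.append(sample_entropy(xs, m=m, r=r))
    return out
```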

11.
This article reconsiders the concept of physical reality in quantum theory and the concept of quantum measurement, following Bohr, whose analysis of quantum measurement led him to his concept of a (quantum) “phenomenon,” referring to “the observations obtained under the specified circumstances,” in the interaction between quantum objects and measuring instruments. This situation makes the terms “observation” and “measurement,” as conventionally understood, inapplicable. These terms are remnants of classical physics, or of the still earlier history from which classical physics inherited them. As defined here, a quantum measurement does not measure any preexisting property of the ultimate constitution of the reality responsible for quantum phenomena. An act of measurement establishes a quantum phenomenon by an interaction between the instrument and the quantum object, or, in the present view, the ultimate constitution of the reality responsible for quantum phenomena and, at the time of measurement, also quantum objects. In the view advanced in this article, in contrast to that of Bohr, quantum objects, such as electrons or photons, are assumed to exist only at the time of measurement and not independently, a view that redefines the concept of quantum object as well. This redefinition becomes especially important in high-energy quantum regimes and quantum field theory, and allows this article to define a new concept of quantum field. The article also considers, now following Bohr, the quantum measurement as the entanglement between quantum objects and measurement instruments. The argument of the article is grounded in the concept of “reality without realism” (RWR), as underlying quantum measurement thus understood, and the view, the RWR view, of quantum theory defined by this concept. The RWR view places the stratum of physical reality thus designated, here the reality ultimately responsible for quantum phenomena, beyond representation or knowledge, or even conception, and defines the corresponding set of interpretations of quantum mechanics or quantum field theory, such as the one assumed in this article, in which, again, not only quantum phenomena but also quantum objects are (as idealizations) defined by measurement. As such, the article also offers a broadly conceived response to J. Bell’s argument “against ‘measurement’”.

12.
Wigner’s friend scenarios involve an Observer, or Observers, measuring a Friend, or Friends, who themselves make quantum measurements. In recent discussions, it has been suggested that quantum mechanics may not always be able to provide a consistent account of a situation involving two Observers and two Friends. We investigate this problem by invoking the basic rules of quantum mechanics as outlined by Feynman in the well-known “Feynman Lectures on Physics”. We show here that these “Feynman rules” constrain the a priori assumptions which can be made in generalised Wigner’s friend scenarios, because the existence of the probabilities of interest ultimately depends on the availability of physical evidence (material records) of the system’s past. With these constraints obeyed, an unambiguous and consistent account of all measurement outcomes is obtained for all agents taking part in the various Wigner’s friend scenarios.

13.
The task of reconstructing the system’s state from the measurement results, known as the Pauli problem, usually requires the repetition of two successive steps. Preparation in an initial state to be determined is followed by an accurate measurement of one of several chosen operators in order to provide the necessary “Pauli data”. We consider a similar yet more general problem of recovering Feynman’s transition (path) amplitudes from the results of at least three consecutive measurements. The three-step histories of a pre- and post-selected quantum system are subjected to a type of interference not available to their two-step counterparts. We show that this interference can be exploited, and that if the intermediate measurement is “fuzzy”, the path amplitudes can be successfully recovered. The simplest case of a two-level system is analysed in detail. The “weak measurement” limit and the usefulness of the path amplitudes are also discussed.
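For orientation only, in standard notation (the paper's conventions may differ), the amplitude of a three-step history of a system prepared in state |i⟩ at time t1, passing through |m⟩ at t2 and post-selected in |f⟩ at t3 is a product of two evolution matrix elements; the interference referred to above arises when these amplitudes, rather than their probabilities, are summed over the intermediate state:

```latex
A(i \to m \to f) \;=\; \langle f \mid \hat U(t_3, t_2) \mid m \rangle \,
\langle m \mid \hat U(t_2, t_1) \mid i \rangle ,
\qquad
P(f \mid i) \;=\; \Bigl|\, \sum_{m} A(i \to m \to f) \Bigr|^{2}
\quad \text{(no intermediate measurement)} .
```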

14.
In recent years, law enforcement authorities have increasingly used mathematical tools to support criminal investigations, such as those related to terrorism. In this work, two relevant questions are discussed: “How can the different roles of members of a terrorist organization be recognized?” and “Are there early signs of impending terrorist acts?” These questions are addressed using the tools of entropy and network theory, more specifically centralities (degree, betweenness, clustering) and their entropies. These tools were applied to data (physical contacts) on four real terrorist networks from different countries. The different roles of the members are clearly recognized from the values of the selected centralities. An early sign of impending terrorist acts is the evolutionary pattern of the values of the entropies of the selected centralities. These results were confirmed in all four terrorist networks. They should help law enforcement authorities identify the roles of the members of terrorist organizations, as the members with high centrality, and anticipate when a terrorist attack is imminent by observing the evolution of the entropies of the centralities.
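A minimal sketch of this kind of computation is shown below, using networkx on an illustrative contact graph. Treating the normalized centrality values as a distribution over nodes is one simple way to define the entropy of a centrality; the paper's exact definitions and data are not reproduced here.

```python
import networkx as nx
import numpy as np

def centrality_entropies(G):
    """Degree, betweenness and clustering values per node, plus the Shannon
    entropy of each measure (normalized values treated as a distribution)."""
    measures = {
        "degree": nx.degree_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
        "clustering": nx.clustering(G),
    }
    entropies = {}
    for name, values in measures.items():
        v = np.array(list(values.values()), dtype=float)
        total = v.sum()
        if total == 0:
            entropies[name] = 0.0
            continue
        p = v / total
        p = p[p > 0]
        entropies[name] = float(-(p * np.log2(p)).sum())
    return measures, entropies

# Example on a small built-in contact network (illustrative, not one of the four real networks)
G = nx.karate_club_graph()
measures, entropies = centrality_entropies(G)
```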

15.
The problem of data exchange between multiple nodes with storage and communication capabilities models several current multi-user communication problems, such as Coded Caching, Data Shuffling, and Coded Computing. The goal in such problems is to design communication schemes which accomplish the desired data exchange between the nodes with the optimal (minimum) communication load. In this work, we present a converse to such a general data exchange problem. The expression of the converse depends only on the number of bits to be moved between different subsets of nodes, and assumes nothing further about the parameters of the problem. Specific problem formulations, such as those in Coded Caching, Coded Data Shuffling, and Coded Distributed Computing, can be seen as instances of this generic data exchange problem. Applying our generic converse, we can efficiently recover known important converses in these formulations. Further, for a generic coded caching problem with heterogeneous cache sizes at the clients, with or without a central server, we obtain a new general converse, which subsumes some existing results. Finally, we relate a “centralized” version of our bound to the known generalized independence number bound in index coding and discuss our bound’s tightness in this context.

16.
In this paper, we focus on the critical periods in the economy that are characterized by unusual and large fluctuations in macroeconomic indicators, like those measuring inflation and unemployment. We analyze U.S. data for the 70 years from 1948 until 2018. To capture the essence of these fluctuations, we concentrate on the non-Gaussianity of their distributions. We investigate how the non-Gaussianity of these variables affects their coupling structure. We distinguish “regular” from “rare” events in calculating the correlation coefficient, emphasizing that the two cases might lead to different responses of the economy. Through the “multifractal random walk” model, one can see that the non-Gaussianity depends on the time scale. The non-Gaussianity of unemployment is noticeable only for periods shorter than one year; for longer periods, the fluctuation distribution tends to Gaussian behavior. In contrast, the non-Gaussianities of inflation fluctuations persist at all time scales. We observe through the “bivariate multifractal random walk” that, despite the inflation features, the non-Gaussianity of the coupled structure is finite for scales less than one year, drops for periods larger than one year, and becomes small for scales greater than two years. This means that the footprint of monetary policies intentionally influencing the inflation–unemployment couple is observed only for time horizons smaller than two years. Finally, to improve our understanding of the effect of rare events, we calculate high moments of the variables’ increments for various q orders and various time scales. The results show that the coupling in the high moments sharply increases during crises.
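As an illustration of the last step, the sketch below computes empirical q-th order moments of a series' increments across time scales, a structure-function-style diagnostic of scale-dependent non-Gaussianity. It is not the paper's bivariate multifractal random walk estimator, and the synthetic series stands in for the actual inflation and unemployment data.

```python
import numpy as np

def increment_moments(x, scales, q_orders):
    """Empirical q-th order moments of the increments |x(t + tau) - x(t)|
    for each time scale tau; rows index scales, columns index q orders."""
    x = np.asarray(x, dtype=float)
    out = np.empty((len(scales), len(q_orders)))
    for i, tau in enumerate(scales):
        dx = np.abs(x[tau:] - x[:-tau])
        for j, q in enumerate(q_orders):
            out[i, j] = np.mean(dx ** q)
    return out

# Example on a synthetic random walk (real use: monthly inflation/unemployment series)
rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(5000))
moments = increment_moments(series, scales=[1, 3, 12, 24], q_orders=[1, 2, 4, 6])
```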

17.
Time-varying autoregressive (TVAR) models are widely used for modeling non-stationary signals. Unfortunately, online joint adaptation of both states and parameters in these models remains a challenge. In this paper, we represent the TVAR model by a factor graph and solve the joint state and parameter inference problem by automated message passing. We derive structured variational update rules for a composite “AR node” with probabilistic observations that can be used as a plug-in module in hierarchical models, for example, to model the time-varying behavior of the hyper-parameters of a time-varying AR model. Our method includes tracking of the variational free energy (FE) as a Bayesian measure of TVAR model performance. The proposed methods are verified on a synthetic data set and validated on real-world data from temperature modeling and speech enhancement tasks.
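For concreteness, the sketch below only generates data from the underlying model: a TVAR(p) process whose coefficients drift as a slow random walk. The factor-graph message-passing inference that is the paper's actual contribution is not reproduced; names and drift parameters are illustrative assumptions.

```python
import numpy as np

def simulate_tvar(T, p=2, coef_drift=0.01, noise_std=1.0, seed=0):
    """Generate a time-varying AR(p) signal: y_t = sum_k a_k(t) * y_{t-k} + e_t,
    with coefficients a(t) following a small random walk."""
    rng = np.random.default_rng(seed)
    a = np.array([0.5, -0.3][:p] + [0.0] * max(0, p - 2))  # initial coefficients
    y = np.zeros(T)
    coeffs = np.zeros((T, p))
    for t in range(p, T):
        a = a + coef_drift * rng.standard_normal(p)         # random-walk drift
        coeffs[t] = a
        y[t] = a @ y[t - p:t][::-1] + noise_std * rng.standard_normal()
    return y, coeffs

signal, coefficient_paths = simulate_tvar(2000)
```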

18.
19.
This study aimed to investigate consumers’ visual image evaluation of wrist wearables based on Kansei engineering. A total of 8 representative samples were screened from 99 samples using the multidimensional scaling (MDS) method. Five groups of adjectives were identified, through a questionnaire survey and factor analysis, to allow participants to express their visual impressions of wrist wearable devices. The evaluation of the eight samples on the five groups of adjectives was analyzed using triangle fuzzy theory. The results showed relatively different evaluations of the eight samples for the groups “fashionable and individual” and “rational and decent”, but little distinction for the groups “practical and durable”, “modern and smart” and “convenient and multiple”. Furthermore, wrist wearables with a shape close to a traditional watch dial (round), with a bezel and mechanical buttons (moderate complexity), and with asymmetric forms received higher evaluations. The acceptance of square- and elliptical-shaped wrist wearables was relatively low. Among square- and rectangular-shaped wrist wearables, the greater the curvature of the chamfer, the higher the acceptance. A clear contrast between the color of the screen and the casing was well accepted. The influence of display size on consumer evaluations was relatively small. Similar results were obtained for evaluations of preference and willingness to purchase. The results of this study objectively and effectively reflect consumers’ evaluations of, and potential demand for, the visual images of wrist wearables and provide a reference for designers and industry professionals.

20.
Influenza A virus (IAV) causes significant morbidity and mortality. The knowledge gained within the last decade on the pandemic IAV(H1N1)2009 has improved our understanding not only of viral pathogenicity but also of the host cellular factors involved in the pathogenicity of multiorgan failure (MOF), such as cellular trypsin-type hemagglutinin (HA0) processing proteases for viral multiplication, cytokine storm, metabolic disorders and energy crisis. The HA processing proteases in the airway and organs have been identified for all IAV known to date. Recently, a new concept for the pathogenicity of MOF, the “influenza virus–cytokine–trypsin” cycle, has been proposed, involving up-regulation of trypsin through pro-inflammatory cytokines and potentiation of viral multiplication in various organs. Furthermore, the relationship between the causative factors has been summarized as the “influenza virus–cytokine–trypsin” cycle interconnected with the “metabolic disorders–cytokine” cycle. These cycles provide new treatment concepts for the ATP crisis and MOF. This review discusses IAV pathogenicity in relation to cellular proteases, cytokines, metabolites and therapeutic options.
