Similar Articles
20 similar articles found.
1.
Integrated information has recently been suggested as a possible measure for identifying a necessary condition for a system to display conscious features. Recently, we have shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (a discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical “whole minus sum” version of integrated information in comparison to the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature; this motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures turn out to be quite similar: they converge asymptotically to each other as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims to create a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information.
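A minimal sketch of the “whole minus sum” computation for a toy two-unit system may help fix ideas; the joint past–future distribution below is randomly generated, and the encoding and function names are illustrative assumptions, not the paper's spiking–bursting model.

```python
# Hypothetical toy example: "whole minus sum" integrated information for a
# two-unit binary system with a known joint (past, future) distribution.
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability matrix p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
# Joint distribution over (past pair, future pair); state index = 2*a + b.
p = rng.dirichlet(np.ones(16)).reshape(4, 4)      # illustrative, not the model

I_whole = mutual_information(p)                   # full past vs. full future
p4 = p.reshape(2, 2, 2, 2)                        # (a_past, b_past, a_fut, b_fut)
I_A = mutual_information(p4.sum(axis=(1, 3)))     # unit A's own past/future
I_B = mutual_information(p4.sum(axis=(0, 2)))     # unit B's own past/future
phi = I_whole - I_A - I_B                         # may be negative ("net synergy")
print(f"whole-minus-sum Phi = {phi:.4f} bits")
```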

2.
The best-known and most widely used abstract model of the financial market is based on the concept of that market's informational efficiency (the efficient market hypothesis, EMH). The paper proposes an alternative, which could be named the behavioural efficiency of the financial market, based on behavioural entropy instead of informational entropy. More specifically, the paper supports the idea that, in the financial market, the only measure (if any) of entropy is the set of available behaviours indicated by the implicit information. The behavioural entropy is therefore linked to the concept of behavioural efficiency. The paper argues that, in fact, financial markets exhibit not a (real) informational efficiency but a behavioural efficiency instead. The proposal is based both on a new typology of information in the financial market (which provides the concept of implicit information, i.e., information “translated” by economic agents from observing actual behaviours) and on a non-linear (more exactly, logistic) curve linking the behavioural entropy to the behavioural efficiency of financial markets. Finally, the paper proposes a synergic overcoming of both the EMH and the adaptive market hypothesis (AMH) based on the new concept of behavioural entropy in the financial market.

3.
In the paper, we begin by introducing a novel scale mixture of normal distributions whose leptokurticity and fat-tailedness are only local, with this “locality” separately controlled by two censoring parameters. This new, locally leptokurtic and fat-tailed (LLFT) distribution makes a viable alternative to the globally leptokurtic, fat-tailed and symmetric distributions typically entertained in financial volatility modelling. We then incorporate the LLFT distribution into a basic stochastic volatility (SV) model to yield a flexible alternative to common heavy-tailed SV models. For the resulting LLFT-SV model, we develop a Bayesian statistical framework and effective MCMC methods to enable posterior sampling of the parameters and latent variables. Empirical results indicate the validity of the LLFT-SV specification for modelling both “non-standard” financial time series with repeating zero returns and more “typical” data on the S&P 500 and DAX indices. For the former, the LLFT-SV model is also shown to markedly outperform a common, globally heavy-tailed, t-SV alternative in terms of density forecasting. Applications of the proposed distribution in more advanced SV models seem to be easily attainable.
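For orientation, here is a minimal sketch of the basic SV skeleton such models extend; the parameter values are illustrative, and the standard normal innovation is a placeholder at the point where the paper's LLFT distribution (defined only in the paper itself) would enter.

```python
# Basic stochastic volatility simulation (illustrative parameters).
import numpy as np

def simulate_sv(T=1000, mu=-1.0, phi=0.97, sigma_eta=0.15, seed=0):
    rng = np.random.default_rng(seed)
    h = np.empty(T)                     # latent log-variance, AR(1) around mu
    h[0] = mu
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    # Placeholder innovation: the LLFT-SV model would draw from the LLFT
    # distribution here instead of the standard normal.
    return np.exp(h / 2) * rng.standard_normal(T), h

returns, h = simulate_sv()
print(f"sample std of simulated returns: {returns.std():.3f}")
```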

4.
Text mining is applied to 510 articles on econophysics to reconstruct the lexical evolution of the discipline from 1999 to 2020. The analysis of the relative frequency of the words used in the articles and their “visualization” allow us to draw some conclusions about the evolution of the discipline. The traditional areas of research, financial markets and distribution of wealth, remain central, but they are flanked by other strands of research—production, currencies, networks—which broaden the discipline by pushing towards a dialectical application of traditional concepts and tools drawn from statistical physics.
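The relative-frequency measure behind such an analysis is simple to state in code; the sketch below is a generic illustration on made-up tokens, not the authors' pipeline.

```python
# Relative word frequencies in a (hypothetical) one-year corpus of articles.
from collections import Counter

def relative_frequencies(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

corpus = "market wealth network market currency network market".split()
print(relative_frequencies(corpus)["market"])   # 3/7 ~ 0.43
```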

5.
To estimate the amount of evapotranspiration in a river basin, the “short period water balance method” was formulated. Then, by introducing the “complementary relationship method,” the amount of evapotranspiration was estimated seasonally, and with reasonable accuracy, for both small and large areas. Moreover, to accurately estimate river discharge in the low water season, the “weighted statistical unit hydrograph method” was proposed and a procedure for the calculation of the unit hydrograph was developed. Also, a new model, based on the “equivalent roughness method,” was successfully developed for the estimation of flood runoff from newly reclaimed farmlands. Based on the results of this research, a “composite reservoir model” was formulated to analyze the repeated use of irrigation water over large spatial areas. The application of this model to a number of watershed areas provided useful information with regard to the realities of water demand-supply systems in watersheds predominantly dedicated to paddy fields in Japan.

6.
Entropy is a concept that emerged in the 19th century. It used to be associated with the heat harnessed by a thermal machine to perform work during the Industrial Revolution. However, the 20th century saw an unprecedented scientific revolution driven by one of its most essential innovations, information theory, which also encompasses the concept of entropy. The following question is therefore naturally raised: “what is the difference, if any, between the concepts of entropy in each field of knowledge?” Misconceptions abound, as there have been multiple attempts to reconcile the entropy of thermodynamics with that of information theory. Entropy is most commonly defined as “disorder”, although this is not a good analogy, since “order” is a subjective human concept and “disorder” cannot always be obtained from entropy. This paper therefore presents a historical background on the evolution of the term “entropy”, and provides mathematical evidence and logical arguments regarding its interconnection in various scientific areas, with the objective of providing a theoretical review and reference material for a broad audience.
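The formal bridge the abstract alludes to can be stated in two lines; this is the standard textbook correspondence rather than a result specific to the paper:

```latex
% Boltzmann's thermodynamic entropy over W equiprobable microstates,
% and Shannon's information entropy over a distribution {p_i}:
S = k_B \ln W, \qquad H = -\sum_i p_i \log_2 p_i .
% For equiprobable outcomes (p_i = 1/W), H = \log_2 W, so S = (k_B \ln 2)\, H:
% the two notions coincide up to a constant factor.
```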

7.
Causal Geometry     
Information geometry has offered a way to formally study the efficacy of scientific models by quantifying the impact of model parameters on the predicted effects. However, there has been little formal investigation of causation in this framework, despite causal models being a fundamental part of science and explanation. Here, we introduce causal geometry, which formalizes not only how outcomes are impacted by parameters, but also how the parameters of a model can be intervened upon. To this end, we introduce a geometric version of “effective information”—a known measure of the informativeness of a causal relationship. We show that it is given by the matching between the space of effects and the space of interventions, in the form of their geometric congruence. Therefore, given a fixed intervention capability, an effective causal model is one that is well matched to those interventions. This is a consequence of “causal emergence,” wherein macroscopic causal relationships may carry more information than “fundamental” microscopic ones. We thus argue that a coarse-grained model may, paradoxically, be more informative than the microscopic one, especially when it better matches the scale of accessible interventions—as we illustrate on toy examples.
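In its original discrete form, effective information can be computed directly from a transition matrix; the sketch below shows that standard (non-geometric) version on an illustrative matrix, leaving aside the paper's geometric generalization.

```python
# Effective information: mutual information between cause and effect when
# interventions are distributed uniformly over the cause states.
import numpy as np

def effective_information(tpm):
    """tpm[i, j] = P(effect j | do(cause i))."""
    n = tpm.shape[0]
    p_cause = np.full(n, 1.0 / n)            # uniform intervention distribution
    p_joint = p_cause[:, None] * tpm
    p_effect = p_joint.sum(axis=0)
    indep = p_cause[:, None] * p_effect[None, :]
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / indep[mask])))

deterministic = np.eye(4)                    # each cause fixes its effect
noisy = np.full((4, 4), 0.25)                # causes tell us nothing
print(effective_information(deterministic))  # 2.0 bits
print(effective_information(noisy))          # 0.0 bits
```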

8.
The financial market is a complex system, which has become more complicated still due to the sudden impact of the COVID-19 pandemic in 2020. As a result, there may be a much higher degree of uncertainty and volatility clustering in stock markets. How does this “black swan” event affect the fractal behaviors of the stock market? How can forecasting accuracy be improved afterwards? Here we study the multifractal behaviors of 5-min time series of the CSI300 and S&P500 indices, which represent the stock markets of China and the United States. Using the Overlapped Sliding Window-based Multifractal Detrended Fluctuation Analysis (OSW-MF-DFA) method, we found that the two markets always have multifractal characteristics, and that the degree of multifractality intensified during the first panic period of the pandemic. Based on the long- and short-term memory described by the fractal test results, we use a Gated Recurrent Unit (GRU) neural network model to forecast these indices. We found that during the period of large volatility clustering, the prediction accuracy for the time series can be significantly improved by adding the time-varying Hurst index to the GRU neural network.
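As a reference point, the sketch below estimates a (monofractal) Hurst exponent with plain DFA on non-overlapping windows; the paper's OSW-MF-DFA generalizes this with overlapped sliding windows and q-order fluctuation functions, which are omitted here.

```python
# First-order detrended fluctuation analysis (DFA1), illustrative scales.
import numpy as np

def dfa_hurst(x, scales=(16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    F = []
    for s in scales:
        rms = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]    # slope = H

rng = np.random.default_rng(1)
print(f"H of white noise ≈ {dfa_hurst(rng.standard_normal(4096)):.2f}")  # ~0.5
```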

9.
In this paper, a new parametric compound G family of continuous probability distributions, called the Poisson generalized exponential G (PGEG) family, is derived and studied. Relevant mathematical properties are derived. Some new bivariate G families are presented using the “Farlie–Gumbel–Morgenstern copula”, the “modified Farlie–Gumbel–Morgenstern copula”, the “Clayton copula”, and the “Renyi entropy copula”. Many special members are derived, and special attention is devoted to the exponential and the one-parameter Pareto type II models. The maximum likelihood method is used to estimate the model parameters. A graphical simulation is performed to assess the finite-sample behavior of the maximum likelihood estimators. Two real-life data applications are proposed to illustrate the importance of the new family.
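For reference, two of the copulas named above have the following standard textbook forms (which the “modified” variants in the paper adjust); u and v denote the marginal CDF values:

```latex
C_{\mathrm{FGM}}(u,v) = uv\bigl[1 + \lambda(1-u)(1-v)\bigr],
    \quad \lambda \in [-1,1],
\qquad
C_{\mathrm{Clayton}}(u,v) = \bigl(u^{-\theta} + v^{-\theta} - 1\bigr)^{-1/\theta},
    \quad \theta > 0 .
```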

10.
The cerebral cortex performs its computations with many six-layered fundamental units, collectively spreading along the cortical sheet. What is the local network structure and the operating dynamics of such a fundamental unit? Previous investigations of primary sensory areas revealed a classic “canonical” circuit model, leading to an expectation of similar circuit organization and dynamics throughout the cortex. This review clarifies the different circuit dynamics at play in the higher association cortex of primates that implements computation for high-level cognition such as memory and attention. Instead of feedforward processing of response selectivity through Layers 4 to 2/3 that the classic canonical circuit stipulates, memory recall in primates occurs in Layer 5/6 with local backward projection to Layer 2/3, after which the retrieved information is sent back from Layer 6 to lower-level cortical areas for further retrieval of nested associations of target attributes. In this review, a novel “dynamic multimode module (D3M)” in the primate association cortex is proposed, as a new “canonical” circuit model performing this operation.

11.
The lack of adequate indicators in digital economy research may leave governments short of data support for decision making. To solve this problem, we first establish a digital economy indicator evaluation system by dividing the digital economy into four types—“basic type”, “technology type”, “integration type” and “service type”—and selecting five indicators for each type. On this basis, the weight of each indicator is calculated with an improved entropy method in order to find deficiencies in the development of particular digital economy fields. Drawing on the weighting idea of the Analytic Hierarchy Process, the improved entropy method first compares the difference coefficients of indicators in pairs and maps the comparison results to the scale 1–9. Then, a judgment matrix is constructed based on the information entropy, which mitigates, as far as possible, the traditional entropy method's problem of excessively large differences among indicator weights. The results indicate that the development of the digital economy in Guangdong Province was relatively balanced from 2015 to 2018 and is set to improve, while the development of rural e-commerce in Guangdong Province is relatively backward, with an obvious digital gap between urban and rural areas. Next, using principal component analysis and factor analysis, we extract two new variables to replace the 20 selected indicators, which retains the original information to the greatest extent and provides convenience for further research. Finally, we provide constructive comments on the digital economy in Guangdong Province from 2015 to 2018.
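A minimal sketch of the classic entropy weight method that the paper improves upon is given below; the data and dimensions are made up, and the paper's AHP-style pairwise mapping to a 1–9 scale is not reproduced.

```python
# Classic entropy weight method: n observations (e.g., years) x m indicators.
import numpy as np

def entropy_weights(X):
    P = X / X.sum(axis=0, keepdims=True)             # column-wise proportions
    n = X.shape[0]
    e = -np.sum(P * np.log(P), axis=0) / np.log(n)   # entropy per indicator
    d = 1.0 - e                                      # difference coefficients
    return d / d.sum()                               # normalized weights

rng = np.random.default_rng(2)
X = rng.random((4, 5)) + 0.1                         # hypothetical positive data
print(entropy_weights(X).round(3))
```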

12.
Quantum candies (qandies) are a simple pedagogical model that intuitively describes many concepts from quantum information processing (QIP) without the need to understand or make use of superpositions, and without complex algebra. One topic in quantum cryptography that has gained research attention in recent years is quantum digital signatures (QDS), which involve protocols to securely sign classical bits using quantum methods. In this paper, we show how the “qandy model” can be used to describe three QDS protocols, providing an important and potentially practical example of the power of “superpositionless” quantum information processing for individuals without background knowledge in the field.

13.
Random Boolean Networks (RBNs for short) are strongly simplified models of gene regulatory networks (GRNs), which have also been widely studied as abstract models of complex systems and used to simulate different phenomena. We define the “common sea” (CS) as the set of nodes that take the same value in all the attractors of a given network realization, and the “specific part” (SP) as the set of all the other nodes, and we study their properties in different ensembles generated with different parameter values. Both the CS and the SP can be composed of one or more weakly connected components, which are emergent intermediate-level structures. We show that the study of these sets provides very important information about the behavior of the model. The distribution of distances between attractors is also examined. Moreover, we show how the notion of a “common sea” of genes can be used to analyze data from single-cell experiments.
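For small networks, the common sea can be computed by exhaustively enumerating attractors, as in the sketch below; the network size, connectivity, and update rules are arbitrary illustrative choices, not the ensembles studied in the paper.

```python
# Find the "common sea" of a small random Boolean network by enumerating
# all attractors and collecting nodes frozen to one value across them.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, K = 8, 2                                        # nodes, inputs per node
inputs = [rng.choice(N, size=K, replace=False) for _ in range(N)]
tables = [rng.integers(0, 2, size=2 ** K) for _ in range(N)]  # random rules

def step(state):
    nxt = []
    for i in range(N):
        idx = 0
        for j in inputs[i]:                        # pack inputs into an index
            idx = (idx << 1) | state[j]
        nxt.append(int(tables[i][idx]))
    return tuple(nxt)

attractor_states = set()
for s0 in product((0, 1), repeat=N):               # all 2^N initial states
    seen, s = {}, s0
    while s not in seen:
        seen[s] = len(seen)
        s = step(s)
    entry = seen[s]                                # time the cycle was entered
    attractor_states |= {st for st, t in seen.items() if t >= entry}

arr = np.array(sorted(attractor_states))
common_sea = [i for i in range(N) if len(set(arr[:, i])) == 1]
print("common sea (frozen nodes):", common_sea)
```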

14.
In this paper, we focus on critical periods in the economy that are characterized by unusual and large fluctuations in macroeconomic indicators, like those measuring inflation and unemployment. We analyze U.S. data for the 70 years from 1948 until 2018. To capture the essence of these fluctuations, we concentrate on the non-Gaussianity of their distributions, and investigate how the non-Gaussianity of these variables affects their coupling structure. We distinguish “regular” from “rare” events when calculating the correlation coefficient, emphasizing that the two cases might elicit different responses from the economy. Through the “multifractal random walk” model, one can see that the non-Gaussianity depends on time scales. The non-Gaussianity of unemployment is noticeable only for periods shorter than one year; for longer periods, the fluctuation distribution tends to a Gaussian behavior. In contrast, the non-Gaussianities of inflation fluctuations persist for all time scales. We observe through the “bivariate multifractal random walk” that, despite these inflation features, the non-Gaussianity of the coupled structure is finite for scales less than one year, drops for periods larger than one year, and becomes small for scales greater than two years. This means that the footprint of monetary policies intentionally influencing the inflation–unemployment couple is observed only for time horizons shorter than two years. Finally, to improve understanding of the effect of rare events, we calculate high moments of the variables’ increments for various orders q and various time scales. The results show that coupling at high moments sharply increases during crises.
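A generic version of the high-moment diagnostic mentioned in the last step can be sketched as follows; the toy Brownian series stands in for the macroeconomic data, and the bivariate multifractal random walk machinery is omitted.

```python
# q-th order moments of increments across time scales ("structure functions").
import numpy as np

def structure_functions(x, scales, qs):
    Z = np.empty((len(qs), len(scales)))
    for j, s in enumerate(scales):
        incr = np.abs(x[s:] - x[:-s])          # increments at lag s
        for i, q in enumerate(qs):
            Z[i, j] = np.mean(incr ** q)       # q-th absolute moment
    return Z

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(10_000))     # monofractal toy series (H=0.5)
scales, qs = [1, 2, 4, 8, 16, 32], [1, 2, 4, 6]
Z = structure_functions(x, scales, qs)
# Scaling exponents zeta(q): slope of log Z_q(s) vs log s; for a monofractal
# series zeta(q) = q*H, and curvature of zeta in q signals multifractality.
zeta = [np.polyfit(np.log(scales), np.log(Zq), 1)[0] for Zq in Z]
print(np.round(zeta, 2))                       # ≈ [0.5, 1.0, 2.0, 3.0]
```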

15.
16.
Since the “high stock dividend” of A-share companies in China often leads to a short-term stock price increase, predicting this phenomenon has drawn wide attention from academia and industry. In this study, a new multi-layer stacking ensemble algorithm is proposed. Unlike the classic stacking ensemble algorithm, which focuses on the differentiation of base models, this paper uses an equal-weight comprehensive feature evaluation method to select features before training the base models, and a genetic algorithm to match an optimal feature subset to each base model. After the base models produce their predictions, a LightGBM (LGB) model is added to the algorithm as a secondary information-extraction layer. Finally, the algorithm feeds the extracted information into a Logistic Regression (LR) model to complete the prediction of the “high stock dividend” phenomenon. Using A-share market data from 2010 to 2019 for simulation and evaluation, the proposed model improves the AUC (Area Under Curve) and F1 score by 0.173 and 0.303, respectively, compared to the baseline model. The prediction results shed light on event-driven investment strategies.
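For context, scikit-learn's stock stacking classifier captures the basic two-layer idea on synthetic data; the sketch below omits the paper's genetic-algorithm feature matching and the intermediate LightGBM extraction layer, and the base models here are arbitrary stand-ins.

```python
# Generic stacking: heterogeneous base models, Logistic Regression meta-model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),      # LR as the last layer
    cv=5,                                      # out-of-fold base predictions
)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.3f}")
```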

17.
Objectives: To reveal self-rated changes in health status during stay-at-home orders among older adults, and to verify whether a decrease in the frequency of going outdoors during these orders was related to self-rated changes in health status. Method: A self-completed questionnaire for older adults was provided in two day-service facilities and a nursing station. We operationally defined health status with four domains (motor function, oral and swallowing function, depression, and social networks) and designed the questionnaire to determine self-rated changes in health status using factor analysis. After the factor analysis, regression analyses were conducted, with each factor score (self-rated change in health status) as the dependent variable and the decrease in frequency of going outdoors as the independent variable. Results: Approximately 80% of participants answered that their health status had “worsened” in motor function (75.0%–87.2%). Moreover, more than 70% of participants answered “worsened” for “Feeling energy” and “Getting together and speaking with friends” (72.3% and 75.7%, respectively). The regression analyses demonstrated that, after adjusting for covariates, the decrease in frequency of going outdoors was related to self-rated changes in motor function and friend network. Conclusion: During stay-at-home orders, older adults felt deterioration in their motor function, in feeling energy, and in their friend network; in particular, people who had decreased their frequency of going outdoors felt more deterioration in their motor function and friend network.

18.
The task of reconstructing a system's state from measurement results, known as the Pauli problem, usually requires repetition of two successive steps: preparation in an initial state to be determined, followed by an accurate measurement of one of several chosen operators to provide the necessary “Pauli data”. We consider a similar yet more general problem of recovering Feynman's transition (path) amplitudes from the results of at least three consecutive measurements. The three-step histories of a pre- and post-selected quantum system are subject to a type of interference not available to their two-step counterparts. We show that this interference can be exploited, and that if the intermediate measurement is “fuzzy”, the path amplitudes can be successfully recovered. The simplest case of a two-level system is analysed in detail. The “weak measurement” limit and the usefulness of the path amplitudes are also discussed.

19.
We analyze the price return distributions of currency exchange rates, cryptocurrencies, and contracts for differences (CFDs) representing stock indices, stock shares, and commodities. Based on recent data from the years 2017–2020, we model tails of the return distributions at different time scales by using power-law, stretched exponential, and q-Gaussian functions. We focus on the fitted function parameters and how they change over the years by comparing our results with those from earlier studies and find that, on the time horizons of up to a few minutes, the so-called “inverse-cubic power-law” still constitutes an appropriate global reference. However, we no longer observe the hypothesized universal constant acceleration of the market time flow that was manifested before in an ever faster convergence of empirical return distributions towards the normal distribution. Our results do not exclude such a scenario but, rather, suggest that some other short-term processes related to a current market situation alter market dynamics and may mask this scenario. Real market dynamics is associated with a continuous alternation of different regimes with different statistical properties. An example is the COVID-19 pandemic outburst, which had an enormous yet short-time impact on financial markets. We also point out that two factors—speed of the market time flow and the asset cross-correlation magnitude—while related (the larger the speed, the larger the cross-correlations on a given time scale), act in opposite directions with regard to the return distribution tails, which can affect the expected distribution convergence to the normal distribution.
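One standard diagnostic behind such tail statements is the Hill estimator of the tail exponent; the sketch below applies it to toy Student-t data (the power-law, stretched-exponential, and q-Gaussian fits in the paper require full curve fitting and are not shown). An “inverse cubic” tail corresponds to an exponent near 3.

```python
# Hill estimator of the tail exponent alpha from the k largest |returns|.
import numpy as np

def hill_alpha(returns, k=500):
    a = np.sort(np.abs(returns))
    tail, x_min = a[-k:], a[-k - 1]      # k largest values and the threshold
    return k / np.sum(np.log(tail / x_min))

rng = np.random.default_rng(4)
r = rng.standard_t(df=3, size=100_000)   # t(3): power-law tails with alpha = 3
print(f"alpha ≈ {hill_alpha(r):.2f}")
```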

20.
The properties of decays that take place during jet formation cannot be easily deduced from the final distribution of particles in a detector. In this work, we first simulate a system of particles with well-defined masses, decay channels, and decay probabilities. This serves as the “true system” whose decay probability distributions we want to reproduce. Assuming we only have the data that this system produces in the detector, we employ an iterative method that uses a neural network as a classifier between events produced in the detector by the “true system” and by some arbitrary “test system”. In the end, we compare the distributions obtained with the iterative method to the “true” distributions.
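The core ingredient of such classifier-based iteration is the density-ratio trick: a classifier's output s(x) for “true vs. test” events estimates p_true(x)/p_test(x) as s/(1−s), which can then drive updates of the test system. The sketch below illustrates this on one-dimensional toy data; the distributions, network size, and reweighting target are all assumptions for illustration.

```python
# Density-ratio estimation with a small neural network classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
true_x = rng.normal(0.0, 1.0, (5000, 1))      # stand-in "true system" events
test_x = rng.normal(0.5, 1.2, (5000, 1))      # stand-in "test system" events

X = np.vstack([true_x, test_x])
y = np.concatenate([np.ones(5000), np.zeros(5000)])   # 1 = true, 0 = test
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0).fit(X, y)

s = clf.predict_proba(test_x)[:, 1]           # P(true | x) on test events
w = s / (1.0 - s)                             # ≈ p_true(x) / p_test(x)
# Reweighting test events by w approximates the true distribution; iterating
# this against an updated test system is the essence of the method.
print(f"reweighted test mean ≈ {np.average(test_x[:, 0], weights=w):.2f}")
```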
