Similar Literature
 20 similar records found (search time: 31 ms)
1.
This paper investigates the statistical properties of within-country gross domestic product (GDP) and industrial production (IP) growth-rate distributions. Many recent empirical contributions have pointed out that cross-section growth rates of firms, industries and countries all follow Laplace distributions. In this work, we test whether within-country, time-series GDP and IP growth rates can also be approximated by tent-shaped distributions. We fit output growth rates with the exponential-power (Subbotin) family of densities, which includes both the Gaussian and the Laplace distribution as particular cases. We find that, for a large number of OECD (Organization for Economic Cooperation and Development) countries including the US, both GDP and IP growth rates are Laplace distributed. Moreover, we show that fat-tailed distributions robustly emerge even after controlling for outliers, autocorrelation and heteroscedasticity.
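As an illustration of this kind of fit: the exponential-power (Subbotin) family is available in SciPy as `gennorm`, whose shape parameter β recovers the Gaussian at β = 2 and the Laplace at β = 1. A minimal sketch on synthetic data (the Laplace "growth rates" below are simulated with illustrative parameters, not the paper's OECD series):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for a country's output growth-rate series: simulated
# Laplace-distributed "growth rates" (loc/scale are illustrative).
growth = rng.laplace(loc=0.005, scale=0.02, size=2000)

# Fit the exponential-power (Subbotin) family, exposed in SciPy as
# `gennorm`; its shape parameter beta gives the Gaussian at beta = 2
# and the Laplace at beta = 1.
beta, loc, scale = stats.gennorm.fit(growth)

print(f"fitted shape beta = {beta:.2f}")  # near 1 indicates a tent shape
```

A fitted β close to 1 on real data is the "tent shape" the paper reports; β close to 2 would instead indicate Gaussian growth rates.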

2.
We address the issue of the distribution of firm size. To this end we propose a model of firms in a closed, conserved economy populated with zero-intelligence agents who continuously move from one firm to another. We then analyze the size distribution and related statistics obtained from the model. Three statistical features are well known from panel studies of firms: the power law in size (in terms of income and/or employment), the Laplace distribution of growth rates, and the slowly declining standard deviation of growth rates conditional on firm size. First, we show that the model generalizes the usual kinetic exchange models with binary interaction to interactions between an arbitrary number of agents. When the number of interacting agents is of the order of the system size itself, it is possible to decouple the model. We provide exact results on the distributions that are not yet known for binary interactions. Our model easily reproduces the power law for the size distribution of firms (Zipf’s law). The fluctuations in the growth rate fall with increasing size following a power law (though the exponent does not match the data). However, the difference of firm sizes in this model follows a Laplace distribution, whereas the real data suggest that it is the difference of the logarithms of sizes that has this distribution.
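The flavor of such a closed, conserved economy can be sketched with a toy urn dynamics: agents hop between firms, with destination firms chosen proportionally to their current size. This is an illustrative "rich-get-richer" sketch, not the authors' exact exchange model:

```python
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, N_FIRMS, STEPS = 10_000, 500, 50_000

# Firms as urns over a fixed pool of agents; total employment is conserved.
size = np.full(N_FIRMS, N_AGENTS // N_FIRMS)

for _ in range(STEPS):
    src = rng.integers(N_FIRMS)          # a random firm loses one agent...
    if size[src] == 0:
        continue
    # ...who joins a firm chosen proportionally to its current size
    dst = rng.choice(N_FIRMS, p=size / size.sum())
    size[src] -= 1
    size[dst] += 1

# Conservation holds by construction; sizes become broadly dispersed.
print(f"largest firm: {size.max()} agents; total: {size.sum()}")
```

Starting from a uniform allocation, the preferential moves quickly generate a broad, right-skewed size distribution while the agent count stays fixed; reproducing the paper's exact Zipf exponent would require its specific interaction rules.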

3.
A Laplace distribution for firm profit rates (or returns on assets) can be obtained through the sum of many independent shocks if the number of shocks is Poisson distributed. Interpreting this as a linear chain of events, we generalize the process to a hierarchical network structure. The hierarchical model reproduces the observed distributional patterns of firm profitability, which crucially depend on the life span of firms. While the profit rates of long-lived firms obey a symmetric Laplacian, short-lived firms display a different behavior depending on whether they are capable of generating positive profits or not. Successful short-lived firms exhibit a symmetric yet more leptokurtic pdf than long-lived firms. Our model suggests that these firms are more dynamic in their organizational capabilities, but on average also face more risk than long-lived firms. Finally, short-lived firms that fail to generate positive profits have the most leptokurtic distribution among the three classes, and on average lose slightly more than their total assets within a year.

4.
5.
We analyze the hitting time distributions of stock price returns in different time windows, characterized by different levels of noise present in the market. The study has been performed on two sets of data from US markets. The first consists of daily prices of 1071 stocks traded over the 12-year period 1987-1998; the second consists of high-frequency data for 100 stocks over the 4-year period 1995-1998. We compare the probability distributions obtained from our empirical analysis with those obtained from different models of stock market evolution. Specifically, focusing on the statistical properties of the hitting times to reach a barrier or a given threshold, we compare the probability density function (PDF) of three models, namely the geometric Brownian motion, the GARCH model and the Heston model, with that obtained from real market data. We also present some results of a generalized Heston model.
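For the simplest of the three models, geometric Brownian motion, the hitting-time distribution is easy to sample by Monte Carlo. A sketch with illustrative parameters (the drift, volatility and barrier below are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not from the paper): drift, volatility per step,
# and a barrier 5% above the starting price.
mu, sigma, barrier = 0.0, 0.02, 1.05
n_paths, horizon = 2000, 1000

hit_times = np.full(n_paths, np.nan)
for i in range(n_paths):
    s = 1.0
    for t in range(horizon):
        # one step of geometric Brownian motion (dt = 1)
        s *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.standard_normal())
        if s >= barrier:
            hit_times[i] = t + 1
            break

hit = hit_times[~np.isnan(hit_times)]
print(f"{hit.size}/{n_paths} paths hit the barrier; "
      f"median hitting time = {np.median(hit):.0f} steps")
```

Binning `hit` yields the empirical hitting-time PDF for this model; the paper's comparison repeats the exercise for GARCH and Heston dynamics and against real data.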

6.
We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field focused exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.

7.
A new method for the approximation of multivariate scalar probability density functions (PDFs) in turbulent reacting flow by means of a joint presumed discrete distribution (jPDD) is presented. The jPDDs can be generated with specified mean values and variances as well as covariances. Correlations between variables – e.g. fluctuating mixture fractions and/or reaction progress – can thereby be taken into account. In this way the new approach overcomes an important limitation of ordinary presumed PDF methods, where statistical independence between the variables is often assumed. Different methods are presented to generate discrete distributions, based either on biased random number generators or on mixing models familiar from PDF transport models.

The new approach is extensively validated on a turbulent flow configuration with simultaneous mixing and reaction. Large eddy simulation data as well as results from a transported PDF model are used for the validation of the jPDD approach. The comparison shows that in particular distributions generated with mixing models are able to predict mean reaction rates accurately. For the configuration considered, the neglect of correlations results in significant underestimation of reaction rates. Moreover it is found that higher statistical moments (e.g. the skewness) can influence reaction rates. The consequences for the generation of jPDDs are discussed.

In summary, the new jPDD model has the potential to be significantly more accurate than established presumed PDF methods, because correlations between fluctuating variables can be taken into account. At the same time, the new approach is nearly as efficient as standard presumed PDF formulations, since mean rates are computed in a pre-processing step and stored in look-up tables as a function of the first and second moments of the relevant variables.

8.
In this article, the “truncated-composed” scheme was applied to the Burr X distribution to motivate a new family of univariate continuous-type distributions, called the truncated Burr X generated family. It is mathematically simple and provides more modeling freedom for any parental distribution. Additional functionality is conferred on the probability density and hazard rate functions, improving their peak, asymmetry, tail, and flatness levels. These characteristics are represented analytically and graphically with three special distributions of the family derived from the exponential, Rayleigh, and Lindley distributions. Subsequently, we conducted asymptotic, first-order stochastic dominance, series expansion, Tsallis entropy, and moment studies. Useful risk measures were also investigated. The remainder of the study was devoted to the statistical use of the associated models. In particular, we developed an adapted maximum likelihood methodology aiming to efficiently estimate the model parameters. The special distribution extending the exponential distribution was applied as a statistical model to fit two sets of actuarial and financial data. It performed better than a wide variety of selected competing non-nested models. Numerical applications for risk measures are also given.

9.
The key idea of this model is that firms are the result of an evolutionary process. Based on demand and supply considerations, the evolutionary model presented here derives Gibrat’s law of proportionate effects explicitly as the result of competition between products. Applying a preferential attachment mechanism for firms, the theory establishes the size distribution of products and firms, as well as the growth-rate and price distributions of consumer goods. Taking into account the characteristic property of human activities to occur in bursts, the model also explains the size–variance relationship of the growth-rate distribution of products and firms. Further, the product life cycle, the learning (experience) curve and the market size, in terms of the mean number of firms that can survive in a market, are derived. The model also suggests the existence of a market invariant: the ratio of total profit to total revenue. The relationship between a neo-classical and an evolutionary view of a market is discussed. Comparison with empirical investigations suggests that the theory is able to describe the main stylized facts concerning the size and growth of firms.

10.
Low-frequency variability (LFV) of the atmosphere refers to its behavior on time scales of 10–100 days, longer than the life cycle of a mid-latitude cyclone but shorter than a season. This behavior is still poorly understood and hard to predict. The present study compares various model reduction strategies that help in deriving simplified models of LFV.

Three distinct strategies are applied here to reduce a fairly realistic, high-dimensional, quasi-geostrophic, 3-level (QG3) atmospheric model to lower dimensions: (i) an empirical–dynamical method, which retains only a few components in the projection of the full QG3 model equations onto a specified basis, and finds the linear deterministic and the stochastic corrections empirically as in Selten (1995) [5]; (ii) a purely dynamics-based technique, employing the stochastic mode reduction strategy of Majda et al. (2001) [62]; and (iii) a purely empirical, multi-level regression procedure, which specifies the functional form of the reduced model and finds the model coefficients by multiple polynomial regression as in Kravtsov et al. (2005) [3]. The empirical–dynamical and dynamical reduced models were further improved by sequential parameter estimation and benchmarked against multi-level regression models; the extended Kalman filter was used for the parameter estimation.

Overall, the reduced models perform better when more statistical information is used in the model construction. Thus, the purely empirical stochastic models with quadratic nonlinearity and additive noise reproduce very well the linear properties of the full QG3 model’s LFV, i.e. its autocorrelations and spectra, as well as the nonlinear properties, i.e. the persistent flow regimes that induce non-Gaussian features in the model’s probability density function.

The empirical–dynamical models capture the basic statistical properties of the full model’s LFV, such as the variance and integral correlation time scales of the leading LFV modes, as well as some of the regime behavior features, but fail to reproduce the detailed structure of autocorrelations and distort the statistics of the regimes. Dynamical models that use data assimilation corrections do capture the linear statistics to a degree comparable with that of empirical–dynamical models, but do much less well on the full QG3 model’s nonlinear dynamics. These results are discussed in terms of their implications for a better understanding and prediction of LFV.

11.
The heterogeneous graphical Granger model (HGGM) for causal inference among processes with distributions from an exponential family is efficient in scenarios where the number of time observations is much greater than the number of time series, normally by several orders of magnitude. However, in the case of “short” time series, inference in the HGGM often suffers from overestimation. To remedy this, we use the minimum message length (MML) principle to determine the causal connections in the HGGM. The minimum message length, as a Bayesian information-theoretic method for statistical model selection, applies Occam’s razor in the following way: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of the data is more likely to be correct. Based on the dispersion coefficient of the target time series and on the initial maximum likelihood estimates of the regression coefficients, we propose a minimum message length criterion to select the subset of time series causally connected with each target time series and derive its form for various exponential distributions. We propose two algorithms, a genetic-type algorithm (HMMLGA) and exHMML, to find the subset. We demonstrated the superiority of both algorithms in synthetic experiments with respect to the comparison methods Lingam, HGGM and the statistical framework Granger causality (SFGC). In the real-data experiments, we used the methods to discriminate between the pregnancy and labor phases using electrohysterogram data of Icelandic mothers from the PhysioNet database. We further analysed Austrian climatological time measurements and their temporal interactions in rainy- and sunny-day scenarios. In both experiments, the results of HMMLGA had the most realistic interpretation with respect to the comparison methods. We provide our code in Matlab. To the best of our knowledge, this is the first work using the MML principle for causal inference in the HGGM.

12.
J. Jiang, W. Li, X. Cai, Physica A 388 (9) (2009) 1893-1907
We investigate the statistical properties of empirical data taken from the Chinese stock market during the period from January 2006 to July 2007. Using detrended fluctuation analysis (DFA) and correlation coefficients, we find evidence of strong correlations among different stock types, the stock index, stock volume turnover, A-share (B-share) seat numbers, and GDP per capita. In addition, we study the behavior of “volatility”, here defined as the difference between the new account numbers on two consecutive days. It is shown that the empirical power law for the number of aftershock events exceeding a selected threshold is analogous to the Omori law originally observed in geophysics. Furthermore, we find that the cumulative distributions of stock return, trade volume and trade number are all exponential-like, and thus do not belong to the universality class of such distributions found by Xavier Gabaix et al. [Xavier Gabaix, Parameswaran Gopikrishnan, Vasiliki Plerou, H. Eugene Stanley, Nature, 423 (2003)] for major western markets. From this comparison we conclude that, in both developed and emerging stock markets, the “cubic law of returns” is valid only for long-term absolute returns, while short-term distributions are exponential-like. Specifically, the distributions of both trade volume and trade number display distinct decaying behaviors in two separate regimes. Lastly, we analyze the scaling behavior of the relation between the dispersion and the mean of monthly trade values for each administrative area in China.
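Detrended fluctuation analysis itself is straightforward to sketch. The order-1 variant below is a generic implementation applied to synthetic white noise (not the Chinese-market data), for which the expected exponent is α ≈ 0.5:

```python
import numpy as np

def dfa(x, scales):
    """Order-1 detrended fluctuation analysis: fluctuation F(s) per scale."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # linearly detrend each segment, collect mean squared residuals
        msq = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)
            msq.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(7)
x = rng.standard_normal(2**14)                 # uncorrelated noise
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(f"DFA exponent alpha = {alpha:.2f}")     # ~0.5 for uncorrelated noise
```

An exponent α above 0.5 on a real series would indicate persistent long-range correlations, which is the kind of signal the DFA step in the paper probes.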

13.
When dating older sedimentary deposits using quartz, there are no unambiguous methods for identifying the presence of incomplete bleaching. Current statistical analysis of dose distributions depends entirely on the assumption that incomplete bleaching and mixing are the main causes of any excess dispersion in the distribution; the only existing way to test this assumption is using independent age control. Here we suggest a new approach to this question, based on the differential bleaching rates of quartz and feldspar luminescence signals. We first present data that confirm the differences in relative bleaching rates of quartz optically stimulated luminescence (OSL) and feldspar luminescence stimulated at 50 °C by infrared light (IR50) and feldspar luminescence stimulated at 290 °C by infrared light after a stimulation at 50 °C (pIRIR290), and use recently deposited samples to determine the likely significance of the difficult-to-bleach residual feldspar signals in non-aeolian samples. For a set of mainly Late Pleistocene non-aeolian sediments, large aliquot quartz doses are then used to predict feldspar doses (based on a knowledge of the sample dose rates). The differences between observed and predicted feldspar doses as a function of the quartz dose, combined with a conservative assumption concerning the relative feldspar and quartz residual signals after natural bleaching prior to deposition, are used to identify those samples for which the quartz is very likely to be well bleached (20 out of 24). Two of these apparently well-bleached samples are then examined using single-grain quartz dose distributions; one of these is consistent with the well-bleached hypothesis, and one indicates poor bleaching or a multi-component mixture. However, independent age control makes it clear that the large aliquot data are more likely to be correct. 
We conclude that a comparison of quartz and feldspar doses provides a useful independent method for identifying well-bleached quartz samples, and that it is unwise to apply statistical models to dose distributions without clear evidence for the physical origins of the distributions.

14.
To account quantitatively for many reported “natural” fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among them its economy: only two adjustable parameters, each with a clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of Raup and Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions. Received: 20 January 1998 / Received in final form: 27 January 1998 / Accepted: 6 February 1998
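The two-parameter form referred to here is the survival function P(X > x) = exp(−(x/x₀)^c); taking logarithms twice linearizes it, so both parameters can be read off a straight-line fit. A sketch on synthetic Weibull data (c and x₀ below are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stretched-exponential survival: P(X > x) = exp(-(x / x0)**c).
# c and x0 are illustrative; Weibull samples realize this law exactly.
c_true, x0_true = 0.7, 2.0
x = x0_true * rng.weibull(c_true, size=50_000)

xs = np.sort(x)
S = 1.0 - np.arange(1, xs.size + 1) / (xs.size + 1.0)  # empirical survival

# Taking logs twice linearizes: log(-log S) = c*log(x) - c*log(x0)
mask = S > 1e-3                                        # avoid log(0) in tail
slope, intercept = np.polyfit(np.log(xs[mask]), np.log(-np.log(S[mask])), 1)
x0_est = np.exp(-intercept / slope)
print(f"fitted c = {slope:.2f}, x0 = {x0_est:.2f}")
```

On real fat-tailed data the same double-log plot distinguishes a stretched exponential (a straight line) from a power law (systematic curvature), which is the comparison the paper carries out across its datasets.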

15.
Measurements of first and second moments of γ-ray multiplicity distributions from deep inelastic collisions of 86Kr + 154Sm are reported. A global systematics of the angular momentum distributions from deep inelastic reactions with projectile masses ? 40 is presented. The average angular momentum is found to depend linearly on the incident channel average angular momentum, while no simple systematics for the second moment appears obvious. In order to illuminate the question whether the angular momentum transfer process reaches statistical equilibrium in deep inelastic collisions, numerical calculations have been performed on two models: a two-sphere classical model including the collective modes of twisting, bending, wriggling and tilting, and a statistical equilibrium Fermi-gas model. The two-sphere classical model is not able to account for the observed second moments, and neither does the Fermi-gas model give an explanation of the deep inelastic multiplicity data.

16.
In this paper, we consider daily financial data of a collection of different stock market indices, exchange rates, and interest rates, and we analyze their multi-scaling properties by estimating a simple specification of the Markov-switching multifractal (MSM) model. In order to see how well the estimated model captures the temporal dependence of the data, we estimate and compare the scaling exponents H(q) (for q=1,2) for both empirical data and simulated data of the MSM model. In most cases the multifractal model appears to generate ‘apparent’ long memory in agreement with the empirical scaling laws.
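The scaling exponents H(q) can be estimated from structure functions, using E|x(t+τ) − x(t)|^q ∼ τ^{qH(q)}. A sketch on a synthetic random walk rather than the MSM model (a plain walk is uniscaling, so H(1) = H(2) = 0.5 is expected; a multifractal series would give q-dependent exponents):

```python
import numpy as np

rng = np.random.default_rng(11)
# uniscaling random-walk "log price"; an MSM series would add multifractality
logp = 0.01 * np.cumsum(rng.standard_normal(2**15))

taus = np.array([1, 2, 4, 8, 16, 32, 64])

def H(q):
    # structure function: E|x(t+tau) - x(t)|**q ~ tau**(q * H(q))
    m = [np.mean(np.abs(logp[tau:] - logp[:-tau]) ** q) for tau in taus]
    return np.polyfit(np.log(taus), np.log(m), 1)[0] / q

print(f"H(1) = {H(1):.2f}, H(2) = {H(2):.2f}")  # both ~0.5 for a plain walk
```

Applying the same estimator to empirical returns and to MSM-simulated paths, and comparing the resulting H(q) curves, is the consistency check described in the abstract.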

17.
The total dielectronic recombination (DR) rates were calculated within the framework of a statistical model of atoms. The model is based on the idea of collective excitations of atomic electrons at the local plasma frequency, which depends on the atomic electron density distribution. The electron density is described within the Thomas-Fermi model of the atom. Simple scaling laws for the temperature (Te) and nuclear charge (Z) dependences follow from the statistical model of DR. Results of the statistical model were compared with numerical data from detailed level-by-level computations for different multielectron ions. Specific attention is paid to Ni-like ion sequences of different chemical elements in order to check the Z-dependence of the DR rates. A comparison with numerical data from the Flexible Atomic Code (FAC) is presented for tungsten ions. Reasonable agreement between the statistical model and the detailed numerical data is demonstrated. The statistical model provides very simple and fast calculations of DR rates, useful in modern plasma modelling.

18.
Correlations of foreign exchange rates in currency markets are investigated using empirical USD/DEM and USD/JPY exchange-rate data for the period from February 1, 1986 to December 31, 1996. The return series of each exchange rate is first decomposed into a number of intrinsic mode functions (IMFs) by the empirical mode decomposition method. The instantaneous phases of the resultant IMFs, calculated by the Hilbert transform, are then used to characterize the behavior of pricing transmissions, and the correlation is probed by measuring the phase differences between two IMFs of the same order. From the distribution of phase differences, our results show explicitly that the correlations are stronger on the daily time scale than on longer time scales. A comparison between the periods 1986–1989 and 1990–1993 indicates that the two exchange rates were more correlated in the former period than in the latter. This result is consistent with the observations from the cross-correlation calculation.

19.
This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of sequences of action potentials (“spike trains”) produced by neuronal networks; and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories, and actually constitute a symbolic coding in important explicit examples (the so-called gIF models). On this basis, we use the thermodynamic formalism from ergodic theory to show how Gibbs distributions are natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering “slow” synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics.

20.
The effects of internal quantum number conservation are considered in the statistical chain decay of fireballs. A simple model containing I = 1, G = −1, S = 0 mesons (ground-state and excited pions) and I = 1/2, S = 0 baryons (ground-state and excited nucleons) is investigated in detail: inclusive distributions, multiplicity distributions and semi-inclusive distributions are determined. The methods are completely general and can be used in more general models with any set of states. As examples, a model with ? and π mesons, and nucleons, and another one with π and K mesons, and N and Λ baryons are considered.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号