Similar Documents
20 similar documents found (search time: 46 ms)
1.
Low-frequency variability (LFV) of the atmosphere refers to its behavior on time scales of 10–100 days, longer than the life cycle of a mid-latitude cyclone but shorter than a season. This behavior is still poorly understood and hard to predict. The present study compares various model reduction strategies that help in deriving simplified models of LFV. Three distinct strategies are applied here to reduce a fairly realistic, high-dimensional, quasi-geostrophic, 3-level (QG3) atmospheric model to lower dimensions: (i) an empirical–dynamical method, which retains only a few components in the projection of the full QG3 model equations onto a specified basis, and finds the linear deterministic and the stochastic corrections empirically as in Selten (1995) [5]; (ii) a purely dynamics-based technique, employing the stochastic mode reduction strategy of Majda et al. (2001) [62]; and (iii) a purely empirical, multi-level regression procedure, which specifies the functional form of the reduced model and finds the model coefficients by multiple polynomial regression as in Kravtsov et al. (2005) [3]. The empirical–dynamical and dynamical reduced models were further improved by sequential parameter estimation and benchmarked against multi-level regression models; the extended Kalman filter was used for the parameter estimation. Overall, the reduced models perform better when more statistical information is used in the model construction. Thus, the purely empirical stochastic models with quadratic nonlinearity and additive noise reproduce very well the linear properties of the full QG3 model’s LFV, i.e. its autocorrelations and spectra, as well as the nonlinear properties, i.e. the persistent flow regimes that induce non-Gaussian features in the model’s probability density function.
The empirical–dynamical models capture the basic statistical properties of the full model’s LFV, such as the variance and integral correlation time scales of the leading LFV modes, as well as some of the regime behavior features, but fail to reproduce the detailed structure of autocorrelations and distort the statistics of the regimes. Dynamical models that use data assimilation corrections capture the linear statistics to a degree comparable with that of empirical–dynamical models, but perform much less well on the full QG3 model’s nonlinear dynamics. These results are discussed in terms of their implications for a better understanding and prediction of LFV.
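As a toy illustration of the purely empirical reduction strategy (iii), the sketch below fits a one-variable reduced model with quadratic drift and additive noise to a synthetic "full model" trajectory by polynomial regression of finite-difference tendencies. The SDE, its parameters, and the single-level/single-variable setting are all hypothetical simplifications, not the QG3 configuration of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(a, b, c, s, n, dt, rng):
    """Euler-Maruyama for dx = (a + b*x + c*x**2) dt + s dW."""
    x = np.empty(n)
    x[0] = 0.0
    noise = s * np.sqrt(dt) * rng.standard_normal(n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] + (a + b * x[i] + c * x[i] ** 2) * dt + noise[i]
    return x

# Synthetic "full model" output (stands in for a leading LFV component).
dt, n = 0.01, 200_000
x = simulate(0.0, -1.0, -0.5, 0.3, n, dt, rng)

# Empirical reduction: regress finite-difference tendencies on the
# polynomial basis {1, x, x^2}; the regression residual fixes the
# amplitude of the additive stochastic forcing.
dxdt = np.diff(x) / dt
X = np.column_stack([np.ones(n - 1), x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(X, dxdt, rcond=None)
s_hat = np.sqrt((dxdt - X @ coef).var() * dt)

# The reduced model should reproduce the full model's linear statistics.
y = simulate(*coef, s_hat, n, dt, rng)
print(coef, s_hat, x.var(), y.var())
```

The comparison of `x.var()` and `y.var()` mirrors, in miniature, the benchmarking of reduced-model statistics against the full model.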

2.
Stochastic analysis of random heterogeneous media provides useful information only if realistic input models of the material property variations are used. These input models are often constructed from a set of experimental samples of the underlying random field. To this end, the Karhunen–Loève (K–L) expansion, also known as principal component analysis (PCA), is the most popular model reduction method due to its uniform mean-square convergence. However, it only projects the samples onto an optimal linear subspace, which results in an unreasonable representation of the original data if they are non-linearly related to each other. In other words, it preserves only the first-order (mean) and second-order (covariance) statistics of a random field, which is insufficient for reproducing complex structures. This paper applies kernel principal component analysis (KPCA) to construct a reduced-order stochastic input model for the material property variation in heterogeneous media. KPCA can be considered a nonlinear version of PCA. Through the use of kernel functions, KPCA enables the preservation of higher-order statistics of the random field, instead of just the two-point statistics of the standard K–L expansion. Thus, this method can model non-Gaussian, non-stationary random fields. In this work, we also propose a new approach to solve the pre-image problem involved in KPCA. In addition, a polynomial chaos (PC) expansion is used to represent the random coefficients in KPCA, which provides a parametric stochastic input model. Thus, realizations that are statistically consistent with the experimental data can be generated in an efficient way. We showcase the methodology by constructing a low-dimensional stochastic input model to represent channelized permeability in porous media.
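A minimal sketch of the PCA-vs-KPCA contrast, using scikit-learn rather than the paper's own pre-image method: samples lying along a nonlinear (parabolic) curve are reduced to one mode by linear PCA and by RBF kernel PCA, with scikit-learn's built-in ridge-regression approximation of the pre-image. The data set, kernel, and parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(1)

# Toy "experimental samples": noisy points on a parabola, i.e. two
# coordinates related nonlinearly, which one linear K-L/PCA mode
# cannot represent faithfully.
t = rng.uniform(-1.0, 1.0, 500)
samples = np.column_stack([t, t ** 2]) + 0.02 * rng.standard_normal((500, 2))

# One-mode linear PCA (discrete K-L expansion).
pca = PCA(n_components=1)
pca_recon = pca.inverse_transform(pca.fit_transform(samples))

# One-mode kernel PCA; fit_inverse_transform=True makes sklearn learn
# an approximate solution of the pre-image problem by kernel ridge
# regression (alpha is its regularization strength).
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=1e-3)
kpca_recon = kpca.inverse_transform(kpca.fit_transform(samples))

err_pca = np.mean(np.sum((samples - pca_recon) ** 2, axis=1))
err_kpca = np.mean(np.sum((samples - kpca_recon) ** 2, axis=1))
print(err_pca, err_kpca)
```

Here the linear one-mode reconstruction necessarily loses the curvature of the data, which is exactly the limitation the abstract attributes to the K–L expansion.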

3.
Using the Markovian method, we study the stochastic nature of electrical discharge current fluctuations in a helium plasma. Sinusoidal trends are extracted from the data set by Fourier-detrended fluctuation analysis, and a cleaned data set is thereby retrieved. We determine the Markov time scale of the detrended data set by using likelihood analysis. We also estimate the Kramers-Moyal coefficients of the discharge current fluctuations and derive the corresponding Fokker-Planck equation. In addition, the obtained Langevin equation enables us to reconstruct discharge time series with statistical properties similar to those observed in the experiment. We also provide an exact decomposition of the temporal correlation function by using the Kramers-Moyal coefficients. We show that for the stationary time series, the two-point temporal correlation function has an exponentially decaying behavior with a characteristic correlation time scale. Our results confirm that there is no definite relation between the correlation and Markov time scales; however, both behave as monotonically increasing functions of the discharge current intensity. Finally, to complete our analysis, the multifractal behavior of the time series reconstructed from the Kramers-Moyal coefficients and that of the original data set are investigated. Extended self-similarity analysis demonstrates that fluctuations in our experimental setup deviate from the Kolmogorov (K41) theory for the fully developed turbulence regime.
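The Kramers-Moyal coefficients mentioned above are estimated from conditional moments of the increments, D^(n)(x) = lim_{τ→0} ⟨(x(t+τ)-x(t))^n | x(t)=x⟩ / (n! τ). A minimal sketch on a synthetic Ornstein-Uhlenbeck series (the plasma data, detrending, and likelihood analysis of the paper are not reproduced; all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stationary Langevin (OU) process: dx = -gamma*x dt + sqrt(2D) dW
gamma, D, dt, n = 1.0, 0.25, 0.01, 500_000
x = np.empty(n)
x[0] = 0.0
noise = np.sqrt(2 * D * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + noise[i]

# Kramers-Moyal coefficients from binned conditional moments of increments:
#   D1(x) = <dx | x> / dt        (drift)
#   D2(x) = <dx^2 | x> / (2 dt)  (diffusion)
dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1
D1 = np.array([dx[idx == k].mean() / dt for k in range(20)])
D2 = np.array([(dx[idx == k] ** 2).mean() / (2 * dt) for k in range(20)])
print(D1, D2)
```

For this process the estimates should recover the linear drift D1(x) ≈ -γx and the constant diffusion D2(x) ≈ D, i.e. the ingredients of the corresponding Fokker-Planck equation.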

4.
We generalize the Ornstein–Uhlenbeck (OU) process using Doob’s theorem. We relax the Gaussian and stationary conditions, assuming a linear and time-homogeneous process. The proposed generalization retains much of the simplicity of the original stochastic process, while exhibiting a somewhat richer behavior. Analytical results are obtained using the transition probability and the characteristic function formalism and are compared with empirical stock market data, which are notorious for their non-Gaussian behavior. The analysis focuses on the decay patterns and the convergence of the first four cumulants of the logarithmic returns of stock prices. It is shown that the proposed model offers a good improvement over the classical OU model.
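For reference, the classical OU benchmark that the paper generalizes can be sampled exactly from its Gaussian transition density, with exponentially decaying autocorrelation and vanishing fourth cumulant. The parameters below are illustrative, not fitted to any market data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Classical OU: dx = -(x/theta) dt + sqrt(2 sigma^2/theta) dW has the
# exact transition law N(x0*exp(-tau/theta), sigma^2*(1-exp(-2tau/theta))),
# so we can sample on a grid with no discretization error.
theta, sigma, tau, n = 5.0, 1.0, 1.0, 200_000
r = np.exp(-tau / theta)
x = np.empty(n)
x[0] = 0.0
eps = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = r * x[i] + sigma * np.sqrt(1 - r * r) * eps[i]

# Autocorrelation at lag k decays as exp(-k*tau/theta); the fourth
# cumulant (excess kurtosis) of the stationary law is zero (Gaussian).
def acf(series, k):
    s = series - series.mean()
    return np.dot(s[:-k], s[k:]) / np.dot(s, s)

kurt = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3.0
print(acf(x, 1), acf(x, 5), kurt)
```

The non-Gaussian generalizations discussed in the abstract are precisely those that make the fourth cumulant depart from this zero baseline.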

5.
We analyze the S&P 500 index data for the 13-year period from January 1, 1984 to December 31, 1996, with one data point every 10 min. For this database, we study the distribution and clustering of volatility return intervals, which are defined as the time intervals between successive volatilities above a certain threshold q. We find that the long memory in the volatility leads to a clustering of above-median as well as below-median return intervals. In addition, it turns out that the short return intervals form larger clusters compared to the long return intervals. When comparing the empirical results to the ARMA-FIGARCH and fBm models for volatility, we find that the fBm model predicts scaling better than the ARMA-FIGARCH model, which is consistent with the argument that both ARMA-FIGARCH and fBm capture the long-term dependence in return intervals to a certain extent, but only fBm accounts for the scaling. We perform Student's t-test to compare the empirical data with the shuffled records, ARMA-FIGARCH and fBm. We analyze separately the clusters of above-median return intervals and the clusters of below-median return intervals for different thresholds q. We find that the empirical data are statistically different from the shuffled data for all thresholds q. Our results also suggest that the ARMA-FIGARCH model is statistically different from the S&P 500 for intermediate q for both above-median and below-median clusters, while fBm is statistically different from the S&P 500 for small and large q for above-median clusters and for small q for below-median clusters. Neither model can fully explain the entire regime of q studied.
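The return-interval construction can be sketched in a few lines. Below, a persistent log-volatility series stands in for the S&P 500 data (the AR(1) proxy, the 90th-percentile threshold, and the clustering diagnostic are illustrative choices, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy long-memory-ish volatility proxy: exp of a persistent AR(1).
n = 100_000
h = np.empty(n)
h[0] = 0.0
eps = rng.standard_normal(n - 1)
for i in range(n - 1):
    h[i + 1] = 0.98 * h[i] + 0.2 * eps[i]
vol = np.exp(h)

# Return intervals: times between successive volatilities above a
# threshold q (here the 90th percentile, so ~10% of points exceed it).
q = np.quantile(vol, 0.9)
exceed = np.flatnonzero(vol > q)
intervals = np.diff(exceed)

# Clustering diagnostic: probability that a below-median ("short")
# interval is followed by another short interval. Without memory this
# equals the overall short fraction; clustering pushes it above.
med = np.median(intervals)
short = intervals <= med
p_short_short = np.mean(short[1:][short[:-1]])
print(len(intervals), short.mean(), p_short_short)
```

The excess of `p_short_short` over the unconditional short fraction is a minimal stand-in for the clustering of below-median return intervals described in the abstract.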

6.
The empirical relationship between the return of an asset and the volatility of the asset has been well documented in the financial literature. Named the leverage effect or sometimes the risk-premium effect, it is observed in real data that, when the return of the asset decreases, the volatility increases, and vice versa. Consequently, it is important to demonstrate that any formulated model for the asset price is capable of generating this effect observed in practice. Furthermore, we need to understand the conditions on the parameters present in the model that guarantee the emergence of the leverage effect. In this paper we analyze two general specifications of stochastic volatility models and their capability of generating the perceived leverage effect. We derive conditions for the emergence of the leverage effect in both of these stochastic volatility models. We exemplify using stochastic volatility models used in practice and we explicitly state the conditions for the existence of the leverage effect in these examples.
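The standard mechanism for the leverage effect is a negative correlation between return shocks and volatility shocks; a discrete-time sketch (the model form, parameters, and diagnostic below are illustrative, not the two specifications analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete-time stochastic volatility with correlated shocks:
#   h_{t+1} = a*h_t + s*eta_t   (log-variance)
#   r_t     = exp(h_t / 2) * z_t
# with corr(z_t, eta_t) = rho < 0 producing the leverage effect.
n, a, s, rho = 200_000, 0.98, 0.2, -0.7
z = rng.standard_normal(n)
eta = rho * z + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
h = np.zeros(n)
for t in range(n - 1):
    h[t + 1] = a * h[t] + s * eta[t]
r = np.exp(h / 2) * z

# Leverage correlation L(tau) = corr(r_t, r_{t+tau}^2): negative for
# tau > 0 (a negative return today raises volatility tomorrow).
def lev(r, tau):
    return np.corrcoef(r[:-tau], r[tau:] ** 2)[0, 1]

print(lev(r, 1), lev(r, 5))
```

With rho = 0 the same code produces L(tau) ≈ 0, which is the kind of parameter condition the paper makes precise.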

7.
Yu Wei, Peng Wang. Physica A, 2008, 387(7): 1585-1592
In this paper, taking about 7 years of high-frequency data of the Shanghai Stock Exchange Composite Index (SSEC) as an example, we propose a daily volatility measure based on the multifractal spectrum of the high-frequency price variability within a trading day. An ARFIMA model is used to depict the dynamics of this multifractal volatility (MFV) measure. The one-day-ahead volatility forecasting performances of the MFV model and several existing volatility models, such as the realized volatility model, the stochastic volatility model and GARCH, are evaluated by the superior predictive ability (SPA) test. The empirical results show that under several loss functions, the MFV model obtains the best forecasting accuracy.
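The FI part of ARFIMA (fractional integration) is what lets the model capture long memory in a volatility measure. A minimal sketch of the fractional-differencing filter, applied to white noise to generate a long-memory series (d and all sizes are illustrative; this is not the paper's MFV construction):

```python
import numpy as np

rng = np.random.default_rng(14)

# Weights of (1-B)^{-d} from the recurrence psi_k = psi_{k-1}*(k-1+d)/k;
# filtering white noise with them gives an ARFIMA(0, d, 0) series.
d, n_w, n = 0.3, 1000, 50_000
w = np.empty(n_w)
w[0] = 1.0
for k in range(1, n_w):
    w[k] = w[k - 1] * (k - 1 + d) / k
x = np.convolve(rng.standard_normal(n + n_w), w, mode="valid")[:n]

# Long memory: the sample ACF decays hyperbolically (~k^{2d-1}) and
# remains positive at large lags, unlike an AR(1) with the same lag-1 ACF.
def acf(s, k):
    s = s - s.mean()
    return np.dot(s[:-k], s[k:]) / np.dot(s, s)

tail = np.mean([acf(x, k) for k in range(50, 150)])
print(acf(x, 1), tail)
```

For ARFIMA(0, d, 0) the theoretical lag-1 autocorrelation is d/(1-d), here about 0.43, while the averaged far-lag autocorrelation stays visibly above zero.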

8.
9.
T.S. Biró. Physica A, 2008, 387(7): 1603-1612
In this paper we study the possible microscopic origin of heavy-tailed probability density distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component, as in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (t-Student) distribution, whose parameters are given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, and we obtain the relevant parameters.
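Fitting a Student-t (Tsallis) law to log-returns, as in the last step of the abstract, is a one-liner with SciPy. The synthetic heavy-tailed "returns" below are an assumption standing in for Dow Jones data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic heavy-tailed log-returns (Student-t with 4 degrees of
# freedom, scaled); a stand-in for real index data.
returns = 0.01 * rng.standard_t(4, size=20_000)

# Maximum-likelihood Student-t fit vs a Gaussian fit, compared by
# log-likelihood: the t (Tsallis) law should win for heavy tails.
df_hat, loc_hat, scale_hat = stats.t.fit(returns)
ll_t = stats.t.logpdf(returns, df_hat, loc_hat, scale_hat).sum()
ll_g = stats.norm.logpdf(returns, returns.mean(), returns.std()).sum()
print(df_hat, ll_t > ll_g)
```

The fitted tail parameter `df_hat` plays the role of the Tsallis parameter that the paper relates back to the microscopic volatility model.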

10.
Abby Tan. Physica A, 2006, 370(2): 689-696
The aim of this work is to take into account the effects of long memory in volatility on derivative hedging. This idea extends the work of Fedotov and Tan [Stochastic long memory process in option pricing, Int. J. Theor. Appl. Finance 8 (2005) 381–392], who incorporate long-memory stochastic volatility in option pricing and derive pricing bands for option values. The starting point is the stochastic Black–Scholes hedging strategy, which involves volatility with a long-range dependence. The stochastic hedging strategy is the sum of a deterministic term, the classical Black–Scholes hedging strategy with a constant volatility, and a random deviation term which describes the risk arising from the random volatility. Using the fact that stock price and volatility fluctuate on different time scales, we derive an asymptotic equation for this deviation in terms of the Green's function and the fractional Brownian motion. The solution to this equation allows us to find hedging confidence intervals.
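The fractional Brownian motion driving the long-memory volatility can be simulated exactly on a grid by Cholesky factorization of its covariance; a sketch with illustrative parameters (H > 1/2 is the long-memory regime, but the specific values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Fractional Brownian motion B_H via Cholesky factorization of its
# covariance C(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
H, n, T, m = 0.75, 400, 1.0, 2000
t = np.linspace(T / n, T, n)
S, U = np.meshgrid(t, t, indexing="ij")
C = 0.5 * (S ** (2 * H) + U ** (2 * H) - np.abs(S - U) ** (2 * H))
L = np.linalg.cholesky(C + 1e-12 * np.eye(n))   # tiny jitter for stability
fbm = L @ rng.standard_normal((n, m))           # m sample paths, one per column

# Sanity check of the self-similar variance law Var[B_H(t)] = t^{2H}.
print(fbm.shape, fbm[-1].var())
```

The Cholesky method is exact but O(n^3); for long grids, circulant-embedding methods are the usual alternative.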

11.
A computational methodology is developed to efficiently perform uncertainty quantification for fluid transport in porous media in the presence of both stochastic permeability and multiple scales. In order to capture the small-scale heterogeneity, a new mixed multiscale finite element method is developed within the framework of the heterogeneous multiscale method (HMM) in the spatial domain. This new method ensures both local and global mass conservation. Starting from a specified covariance function, the stochastic log-permeability is discretized in the stochastic space using a truncated Karhunen–Loève expansion with several random variables. Due to the small correlation length of the covariance function, this often results in a high stochastic dimensionality. Therefore, a newly developed adaptive high dimensional model representation (HDMR) technique is used in the stochastic space. This results in a set of low-dimensional stochastic subproblems which are efficiently solved using the adaptive sparse grid collocation method (ASGC). Numerical examples are presented for both deterministic and stochastic permeability to show the accuracy and efficiency of the developed stochastic multiscale method.
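The truncated Karhunen–Loève discretization of the log-permeability can be sketched in 1-D by eigendecomposition of the covariance matrix (a Nyström-style discrete K-L). The exponential covariance, grid, and 95% energy criterion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

# Discrete K-L expansion of a Gaussian log-permeability field with
# exponential covariance C(x, y) = sigma^2 * exp(-|x - y| / l) on [0, 1].
n, sigma, corr_len = 200, 1.0, 0.1
x = np.linspace(0.0, 1.0, n)
C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]          # sort descending

# Keep enough modes to capture 95% of the variance; k is the
# "stochastic dimension" of the reduced input model, and it grows as
# the correlation length shrinks.
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95)) + 1

# One realization: log K(x) = sum_i sqrt(lambda_i) * xi_i * phi_i(x).
xi = rng.standard_normal(k)
log_perm = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)
print(k, log_perm.shape)
```

The high value of `k` for a short correlation length is exactly the high stochastic dimensionality that motivates the HDMR decomposition in the abstract.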

12.
Josep Perelló. Physica A, 2007, 382(1): 213-218
The expOU stochastic volatility model is capable of reproducing fairly well most important statistical properties of daily financial market data. Among them, the presence of multiple time scales in the volatility autocorrelation is perhaps the most relevant, giving rise to fat tails in the return distributions. This paper goes further with the expOU model we studied in Ref. [J. Masoliver, J. Perelló, Quant. Finance 6 (2006) 423] by exploring an aspect of practical interest. Taking as a benchmark the parameters estimated from Dow Jones daily data, we compute the price of a European option. This is done by Monte Carlo, running a large number of simulations. Our main interest is to “see” the effects of a long-range market memory from our expOU model on the resulting European call option. We pay attention to the effects of the existence of a broad range of time scales in the volatility. We find that a richer set of time scales raises the price of the option. This stands in clear contrast to the presence of memory in the price itself, which makes the option cheaper.
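A bare-bones Monte Carlo pricer under an expOU-style volatility, with arbitrary (not the paper's Dow Jones-calibrated) parameters, zero interest rate, and uncorrelated noises for simplicity:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy expOU stochastic volatility:
#   dY = -alpha*Y dt + k dW1,  sigma_t = m * exp(Y_t),  dS/S = sigma_t dW2.
S0, K, T = 100.0, 100.0, 0.5
m, alpha, kvol = 0.2, 2.0, 0.3
steps, paths = 100, 20_000
dt = T / steps

S = np.full(paths, S0)
Y = np.zeros(paths)
for _ in range(steps):
    w1 = rng.standard_normal(paths)
    w2 = rng.standard_normal(paths)
    sig = m * np.exp(Y)
    # log-Euler step for the price, Euler-Maruyama for the OU log-vol
    S *= np.exp(-0.5 * sig ** 2 * dt + sig * np.sqrt(dt) * w2)
    Y += -alpha * Y * dt + kvol * np.sqrt(dt) * w1

# European call payoff averaged over paths (zero discount rate).
price = np.mean(np.maximum(S - K, 0.0))
print(price)
```

Repeating the experiment with a smaller `alpha` (slower volatility relaxation, i.e. longer memory) is the kind of comparison behind the paper's observation that richer time scales raise the option price.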

13.
In this paper, we provide a simple, “generic” interpretation of multifractal scaling laws and multiplicative cascade process paradigms in terms of volatility correlations. We show that in this context 1/f power spectra, as recently observed in reference [23], naturally emerge. We then propose a simple solvable “stochastic volatility” model for return fluctuations. This model is able to reproduce most recent empirical findings concerning financial time series: no correlation between price variations, long-range volatility correlations and multifractal statistics. Moreover, its extension to a multivariate context, in order to model portfolio behavior, is very natural. Comparisons to real data and to other models proposed elsewhere are provided. Received 22 May 2000
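A minimal dyadic multiplicative cascade for the volatility, reproducing the two stylized facts named above: uncorrelated returns but correlated volatility. The depth, intermittency parameter, and lognormal weights are illustrative assumptions, not the paper's solvable model:

```python
import numpy as np

rng = np.random.default_rng(10)

# Lognormal cascade on a dyadic tree: log-volatility at each point is
# the sum of one Gaussian weight per scale, constant over that scale's
# dyadic cell. This produces slowly decaying volatility correlations.
J, lam2 = 14, 0.02           # 2^14 = 16384 points; lam2 = intermittency
n = 2 ** J
logvol = np.zeros(n)
for j in range(J):
    w = np.sqrt(lam2) * rng.standard_normal(2 ** (j + 1))
    logvol += np.repeat(w, n // 2 ** (j + 1))
vol = np.exp(logvol - logvol.mean())
ret = vol * rng.standard_normal(n)   # iid signs, cascade-correlated sizes

# Returns are uncorrelated while |returns| are not:
def acf(s, k):
    s = s - s.mean()
    return np.dot(s[:-k], s[k:]) / np.dot(s, s)

print(acf(ret, 10), acf(np.abs(ret), 10))
```

The same construction underlies the multifractal statistics: moments of local averages of `vol` scale with nonlinear exponents in the cell size.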

14.
Pekka Malo. Physica A, 2009, 388(22): 4763-4779
Electricity prices are known to exhibit multifractal properties. We accommodate this finding by investigating multifractal models for electricity prices. In this paper we propose a flexible Copula-MSM (Markov Switching Multifractal) approach for modeling spot and weekly futures price dynamics. By using a conditional copula function, the framework allows us to model the dependence structure separately, while enabling the use of multifractal stochastic volatility models to characterize fluctuations in marginal returns. An empirical experiment is carried out using data from Nord Pool. A study of volatility forecasting performance for electricity spot prices reveals that multifractal techniques are a competitive alternative to GARCH models. We also demonstrate how the Copula-MSM model can be employed for finding optimal portfolios that minimize the Conditional Value-at-Risk.
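The copula idea, separating the dependence structure from the marginals, can be sketched with a Gaussian copula (the paper's conditional copula and MSM marginals are not reproduced; the correlation and marginal choices here are assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)

# Gaussian copula: correlate two uniforms, then give each its own
# marginal (heavy-tailed "spot", Gaussian "futures" as placeholders).
rho, n = 0.8, 100_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                       # uniforms carrying the copula
spot = stats.t.ppf(u[:, 0], df=3)           # heavy-tailed marginal
fut = stats.norm.ppf(u[:, 1], scale=0.5)    # Gaussian marginal

# Rank (Spearman) correlation depends only on the copula, not the
# marginals; for a Gaussian copula it equals (6/pi)*arcsin(rho/2).
rank_corr, _ = stats.spearmanr(spot, fut)
print(rank_corr)
```

Swapping in different marginals leaves `rank_corr` unchanged, which is exactly the separation the Copula-MSM framework exploits.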

15.
This paper investigates the direct relationship between inflation and inflation uncertainty by employing a dynamic method for monthly United States data for the period 1976–2007. While the bulk of previous studies has employed GARCH models in investigating the link between inflation and inflation uncertainty, in this study Stochastic Volatility in Mean models are used to capture the shocks to inflation uncertainty within a dynamic framework. These models allow researchers to assess the dynamic effects of innovations in inflation as well as in inflation volatility on inflation and inflation volatility over time, by incorporating the unobserved volatility as an explanatory variable in the mean (inflation) equation. Empirical findings suggest that innovations in inflation volatility increase inflation. This evidence is robust across various definitions of inflation and different sub-periods.

16.
This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
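A crude check of the Markov property behind the Markov-Einstein scale: for a Markov process, regressing x_{t+2L} on both x_{t+L} and x_t should give essentially zero weight to the older value x_t. The AR(1)/MA(2) test pair below is an illustrative construction, not a method from the review:

```python
import numpy as np

rng = np.random.default_rng(13)

# AR(1) is Markov at every lag; MA(2) is not Markov at lag 1, so its
# extra memory shows up below its Markov-Einstein scale.
n = 200_000
e = rng.standard_normal(n)
ar1 = np.empty(n)
ar1[0] = 0.0
for i in range(n - 1):
    ar1[i + 1] = 0.9 * ar1[i] + e[i]
ma2 = e[2:] + 0.8 * e[1:-1] + 0.5 * e[:-2]

def extra_memory(x, L):
    """|regression weight| of x_t when predicting x_{t+2L} from (x_{t+L}, x_t)."""
    X = np.column_stack([x[L:-L], x[:-2 * L]])
    beta, *_ = np.linalg.lstsq(X, x[2 * L:], rcond=None)
    return abs(beta[1])

print(extra_memory(ar1, 1), extra_memory(ma2, 1))
```

Scanning `L` until the extra weight vanishes gives a rough estimate of the scale above which a Markovian (Langevin/Fokker-Planck) description becomes admissible.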

17.
We investigate the time behaviour of the Italian MIB30 stock index collected every minute during two months in the period from May 17, 2006, up to July 24, 2006. We find short-range correlations in the price returns and, by contrast, a long persistent time lag and slow decay in the autocorrelation functions of volatility. Moreover, we find that the probability density functions (PDFs) of returns show fat tails, which are well fit by the log-normal model of Castaing [B. Castaing, Y. Gagne, E.J. Hopfinger, Physica D 46 (1990) 177], and a convergence toward a normal distribution for large time scales; we also find that the PDFs of volatility, for short time horizons, fit better with a log-normal distribution than with a Gaussian. Most of these features characterize the indexes and stocks of the largest American, European and Asian markets. We also investigate the distribution of stochastic separation between isolated strong events in the volatility signal. This is interesting because it gives us a deeper understanding of the price formation process. By using a test for the occurrence of the local Poisson hypothesis, we show that the process we examined strongly departs from Poisson statistics, the origin of this failure stemming from the presence of temporal clustering and of a certain amount of memory.
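The local-Poisson test amounts to checking whether waiting times between strong events are exponential. A sketch on synthetic clustered events (the two-state burst generator, rates, and the Kolmogorov-Smirnov test are illustrative stand-ins for the paper's data and test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Clustered events from a bursty two-state process: events fire at a
# high rate in the "active" state and never in the "quiet" state.
n = 50_000
events = np.zeros(n, dtype=bool)
active = False
for i in range(n):
    if rng.random() < (0.02 if active else 0.005):
        active = not active            # switch quiet <-> active
    events[i] = active and rng.random() < 0.3
waits = np.diff(np.flatnonzero(events))

# KS test of the waiting times against an exponential law with the
# empirical mean; a genuinely Poisson control should pass it.
_, p_burst = stats.kstest(waits, "expon", args=(0, waits.mean()))
ctrl = rng.exponential(scale=waits.mean(), size=len(waits))
_, p_ctrl = stats.kstest(ctrl, "expon", args=(0, ctrl.mean()))
print(p_burst, p_ctrl)
```

The mixture of short within-burst gaps and long between-burst gaps is what makes the clustered series depart strongly from Poisson statistics, as the abstract reports for the MIB30 volatility events.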

18.
Physica A, 2006, 362(2): 450-464
This paper develops a multivariate long-memory stochastic volatility model which allows for multi-asset long-range dependence in the volatility process. The motivation comes from the fact that both the autocorrelations and the cross-correlations of some proxies of exchange rate volatility exhibit strong evidence of long-memory behavior. The statistical properties of the new stochastic volatility model provide a theoretical explanation for the common finding that long-memory volatility properties are more apparent when absolute returns are used as a volatility proxy than when squared returns are. Results of the real data application show that our model outperforms an existing multivariate stochastic volatility model.

19.
Motivated by recent developments on solvable directed polymer models, we define a ‘multi-layer’ extension of the stochastic heat equation involving non-intersecting Brownian motions. By developing a connection with Darboux transformations and the two-dimensional Toda equations, we conjecture a Markovian evolution in time for this multi-layer process. As a first step in this direction, we establish an analogue of the Karlin-McGregor formula for the stochastic heat equation and use it to prove a special case of this conjecture.
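For context, the classical Karlin-McGregor formula whose stochastic-heat-equation analogue the paper establishes (this is the standard textbook statement, not the paper's new result):

```latex
% Karlin-McGregor: for n independent copies of a one-dimensional strong
% Markov process with transition density p_t(x, y), started at
% x_1 < x_2 < \dots < x_n, the sub-probability density of reaching
% y_1 < y_2 < \dots < y_n at time t with no two paths having collided
% is a determinant:
\[
  P\bigl(X_i(t) \in \mathrm{d}y_i,\ i = 1, \dots, n;\ \tau > t\bigr)
  = \det\bigl[\, p_t(x_i, y_j) \,\bigr]_{i,j=1}^{n}
    \,\mathrm{d}y_1 \cdots \mathrm{d}y_n ,
\]
% where \tau denotes the first collision time of the n paths.
```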

20.
The values of stocks, indices and other assets are examples of stochastic processes with unpredictable dynamics. In this paper, we discuss asymmetries in short-term price movements that cannot be associated with a long-term positive trend. These empirical asymmetries predict that stock index drops are more common on a relatively short time scale than the corresponding rises. We present several empirical examples of such asymmetries. Furthermore, a simple model featuring occasional short periods of synchronized dropping prices for all stocks constituting the index is introduced with the aim of explaining these facts. The collective negative price movements are imagined to be triggered by factors external to our society as well as internal to the economy that create fear of the future among investors. This is parameterized by a “fear factor” defining the frequency of synchronized events. It is demonstrated that such a simple fear-factor model can reproduce several empirical facts concerning index asymmetries. It is also pointed out that in its simplest form, the model has certain shortcomings.
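A minimal version of such a fear-factor model can be simulated in a few lines. All numbers below (number of stocks, step size, fear probability, compensating drift) are illustrative choices, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(12)

# N stocks follow iid binary random walks, but with probability p (the
# "fear factor") a day is a synchronized event on which every stock
# drops together; a small uniform drift keeps the mean return ~zero.
n_stocks, n_days, p, step = 30, 100_000, 0.05, 0.01
fear = rng.random(n_days) < p
moves = step * rng.choice([-1.0, 1.0], size=(n_days, n_stocks))
moves[fear] = -step                  # synchronized collective drop
moves += step * p                    # compensating drift, zero mean overall
index_ret = moves.mean(axis=1)

# Asymmetry: large one-day index drops are far more frequent than
# equally large rises, even though the average return is ~0.
thresh = 0.8 * step
print(index_ret.mean(),
      (index_ret < -thresh).mean(),   # ~p: every fear day is a big drop
      (index_ret > thresh).mean())    # ~0: diversification kills big rises
```

On ordinary days the index move averages over 30 independent stocks and is tightly concentrated, so only the synchronized fear days exceed the threshold, reproducing the drop/rise asymmetry in the abstract.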

