Similar Literature
20 similar documents found.
1.
In this paper, we introduce a robust extension of the three-factor model of Diebold and Li (J. Econometrics, 130: 337–364, 2006) using the class of symmetric scale mixtures of normal distributions. Specific distributions examined include the multivariate normal, Student-t, slash, and variance gamma distributions. In the presence of non-normality in the data, these distributions provide an appealing robust alternative to the routine use of the normal distribution. Using a Bayesian paradigm, we develop an efficient MCMC algorithm for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. Our results reveal that the Diebold–Li models based on the Student-t and slash distributions provide a significant improvement in in-sample fit and out-of-sample forecasts for the US yield data over the usual normal-based model. Copyright © 2011 John Wiley & Sons, Ltd.
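The abstract above relies on the scale-mixture-of-normals representation: a Student-t draw is a standard normal draw divided by the square root of an independent gamma mixing variable, and a small mixing value flags a potential outlier. A minimal sketch of the Student-t case only (variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 5.0, 20000

# Student-t as a scale mixture: t = z / sqrt(w), w ~ Gamma(nu/2, rate nu/2).
# numpy's gamma takes (shape, scale), so rate nu/2 means scale 2/nu.
w = rng.gamma(nu / 2.0, scale=2.0 / nu, size=n)
t_draws = rng.standard_normal(n) / np.sqrt(w)

# A small mixing value w inflates the conditional variance 1/w,
# so 1/w serves as a crude per-observation outlier score.
outlier_score = 1.0 / w
```

In a full MCMC implementation the mixing variables `w` would be sampled alongside the model parameters, and posterior draws of `1/w` would identify outlying yields.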

2.
The optimal investment–consumption problem under the constant elasticity of variance (CEV) model is solved using the invariant approach. First, the invariance criteria for scalar linear second-order parabolic partial differential equations in two independent variables are reviewed. These criteria are then employed to reduce the CEV model to one of the four Lie canonical forms. It is found that the invariance criteria help transform the original equation to the second Lie canonical form and, with a proper parameter selection, the required transformation converts the original equation to the first Lie canonical form, that is, the heat equation. As a consequence, we find some new classes of closed-form solutions of the CEV model for the case of reduction into the heat equation and also into the second Lie canonical form. The closed-form analytical solution of the Cauchy initial value problem for the CEV model under investigation is also obtained. Copyright © 2016 John Wiley & Sons, Ltd.

3.
When the data have heavy tails or contain outliers, conventional variable selection methods based on penalized least squares or likelihood functions perform poorly. Based on Bayesian inference, we study the variable selection problem for median linear models. A Bayesian estimation method is proposed by combining Bayesian model selection theory with a spike-and-slab prior for the regression coefficients, and an efficient posterior Gibbs sampling procedure is given. Extensive numerical simulations and an analysis of the Boston house price data illustrate the effectiveness of the proposed method.
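The robustness claim above can be illustrated without the full Gibbs sampler: a median (LAD) fit resists an outlier that badly distorts least squares. A rough sketch using iteratively reweighted least squares as a stand-in for the paper's Bayesian machinery (all data and names below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 0.05 * rng.standard_normal(50)
y[-1] += 10.0  # plant a gross outlier at a high-leverage point

X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: pulled toward the outlier.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# LAD (median) regression approximated by iteratively reweighted
# least squares with weights 1/|residual|.
b_lad = b_ols.copy()
for _ in range(100):
    r = np.abs(y - X @ b_lad).clip(min=1e-8)
    W = 1.0 / r
    b_lad = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
```

The LAD slope stays near the true value 2 while the OLS slope is dragged off by the single contaminated point; the paper's spike-and-slab Gibbs sampler adds variable selection on top of this robustness.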

5.
This article proposes a four-pronged approach to efficient Bayesian estimation and prediction for complex Bayesian hierarchical Gaussian models for spatial and spatiotemporal data. The method involves reparameterizing the covariance structure of the model, reformulating the mean structure, marginalizing the joint posterior distribution, and applying a simplex-based slice sampling algorithm. The approach permits fusion of point-source data and areal data measured at different resolutions and accommodates nonspatial correlation and variance heterogeneity as well as spatial and/or temporal correlation. The method produces Markov chain Monte Carlo samplers with low autocorrelation in the output, so that fewer iterations are needed for Bayesian inference than would be the case with other sampling algorithms. Supplemental materials are available online.
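As a toy illustration of the slice-sampling ingredient above (not the paper's simplex-based multivariate version), here is a univariate stepping-out/shrinkage slice sampler run on a standard normal target:

```python
import numpy as np

def slice_sample(logpdf, x0, n_draws, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage."""
    rng = np.random.default_rng(seed)
    x, out = x0, []
    for _ in range(n_draws):
        # Draw an auxiliary "height" uniformly under the density at x.
        logy = logpdf(x) + np.log(rng.uniform())
        # Step out to bracket the slice {t : logpdf(t) > logy}.
        left = x - w * rng.uniform()
        right = left + w
        while logpdf(left) > logy:
            left -= w
        while logpdf(right) > logy:
            right += w
        # Shrink the bracket until a point inside the slice is found.
        while True:
            x_new = rng.uniform(left, right)
            if logpdf(x_new) > logy:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        out.append(x)
    return np.array(out)

draws = slice_sample(lambda t: -0.5 * t * t, 0.0, 5000)
```

Slice samplers of this kind need no step-size tuning beyond the bracket width `w`, which is one reason they produce low-autocorrelation output in hierarchical models.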

6.
It is well known that the constant elasticity of variance (CEV) model fails to capture the empirical finding that the implied volatility of equity options displays smile and skew curves at the same time. In this study, to overcome this limitation of the CEV model, we introduce a new model, a generalization of the CEV model, and show that it can capture the smile and skew effects of implied volatility. Using an asymptotic analysis for two small parameters that determine the volatility shape, we obtain approximate solutions for option prices in the extended model. In addition, we demonstrate the stability of the solution for the expansion of the option price. Furthermore, we show the convergence rate of the solutions in Monte Carlo simulation and compare our model with the CEV, Heston, and other extended stochastic volatility models to verify its flexibility and efficiency when fitting option data from the S&P 500 index.

7.
We consider an extended constant elasticity of variance (CEV) model in which the elasticity follows a stochastic process driven by a fast mean-reverting Ornstein–Uhlenbeck process. Then, we use the proposed model to examine a turbo warrant option, which is a type of exotic option. Based on an asymptotic analysis, we derive the partial differential equation of the leading and the corrected terms, which we use to determine the analytic formula for the turbo warrant call option. The parameter analysis using the extended CEV model provides us with a better understanding of the price structure of a turbo warrant call. Moreover, by comparing the turbo warrant call with a European vanilla call, we can examine the sensitivity of options with respect to the model parameters.

8.
We consider the pricing of various types of exotic discrete variance swaps, such as gamma swaps and corridor variance swaps, under 3/2-stochastic volatility models (SVMs) with jumps in the asset price. The class of SVMs that use a constant-elasticity-of-variance (CEV) process for the instantaneous variance exhibits good analytical tractability only when the CEV parameter takes a few special values (namely 0, 1/2, 1 and 3/2). The popular Heston model corresponds to setting the CEV parameter to 1/2. However, the stochastic volatility dynamics implied by the Heston model fail to capture some important empirical features of market data. The choice of 3/2 for the CEV parameter in the SVM shows better agreement with empirical studies while maintaining a good level of analytical tractability. Using the partial integro-differential equation (PIDE) formulation, we derive quasi-closed-form pricing formulas for the fair strike prices of various types of exotic discrete variance swaps with various weight processes and different return specifications under the 3/2-model. Pricing properties of these exotic discrete variance swaps with respect to various model parameters are explored.
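For intuition on the discrete payoffs discussed above: the floating leg of a plain variance swap is a sum of squared log returns, and a gamma swap weights each term by S_t/S_0. A sketch on a simulated lognormal path (Black-Scholes dynamics as a stand-in; the paper's 3/2-model with jumps is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T, n = 0.2, 4.0, 1008            # 4 years of roughly daily observations
dt = T / n

# Simulate a zero-drift GBM path as a stand-in price process.
log_ret = sigma * np.sqrt(dt) * rng.standard_normal(n) - 0.5 * sigma**2 * dt
S = 100.0 * np.exp(np.concatenate([[0.0], np.cumsum(log_ret)]))

r2 = np.diff(np.log(S)) ** 2

# Annualized realized variance: floating leg of a plain variance swap.
realized_var = r2.sum() / T

# Gamma swap: each squared return weighted by S_t / S_0.
gamma_leg = (S[1:] / S[0] * r2).sum() / T
```

The realized variance here comes out near the squared volatility 0.04, while a corridor variance swap would simply mask `r2` to the observations whose price lies inside a given corridor.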

9.
李素芳  张虎  吴芳 《运筹与管理》2019,28(10):89-99
To address the sensitivity of traditional panel cointegration tests to outliers and the subjective choice involved in setting their null hypothesis, this paper uses dynamic common factors to characterize the latent cross-sectional dependence structure of panel data and proposes a Bayesian quantile panel cointegration test based on this dynamic-factor structure. Combining the conditional posterior distributions of the parameters at each major quantile level, a Gibbs sampling algorithm incorporating the Kalman filter is designed to carry out the Bayesian quantile panel cointegration test, and Monte Carlo simulations verify its feasibility and effectiveness. An empirical study using panel data on financial development and economic growth across Chinese provinces finds a cointegration relationship between the two at each major quantile level. The results show that the Bayesian quantile panel cointegration test avoids the misjudgments that traditional panel cointegration methods may incur under different null hypothesis settings, overcomes the influence of outliers, and provides comprehensive and accurate parameter estimates and cointegration test results.

10.
We develop a highly efficient procedure to forecast the parameters of the constant elasticity of variance (CEV) model implied by American options. In particular, first, the American option prices predicted by the CEV model are calculated using an accurate and fast finite difference scheme. Then, the parameters of the CEV model are obtained by minimizing the distance between theoretical and empirical option prices, which yields an optimization problem that is solved using an ad hoc numerical procedure. The proposed approach, which turns out to be very efficient from the computational standpoint, is used to test the goodness-of-fit of the CEV model in predicting the prices of American options traded on the NYSE. The results obtained reveal that the CEV model does not provide a very good agreement with real market data and yields only a marginal improvement over the more popular Black–Scholes model.
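The calibration step described above, minimizing the distance between theoretical and market prices, can be sketched with a simple grid search. Black–Scholes is used below as a stand-in pricer (the paper's finite-difference CEV pricer for American options is not reproduced), and the "market" prices are synthetic:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call; a toy stand-in for the CEV pricer."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

S0, r, T = 100.0, 0.02, 0.5
strikes = [90.0, 100.0, 110.0]
# Synthetic "market" prices generated with sigma = 0.25.
market = [bs_call(S0, K, T, r, 0.25) for K in strikes]

# Calibrate by minimizing the squared price distance over a grid.
def loss(sigma):
    return sum((bs_call(S0, K, T, r, sigma) - p) ** 2
               for K, p in zip(strikes, market))

grid = [0.05 + 0.001 * i for i in range(551)]   # 0.05 .. 0.60
sigma_hat = min(grid, key=loss)
```

In the paper's setting the inner pricer is a finite-difference solver for the American CEV problem and the outer loop is a purpose-built optimizer rather than a grid, but the fit-to-prices structure is the same.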

11.
This paper develops a Bayesian approach to analyzing quantile regression models for censored dynamic panel data. We employ a likelihood-based approach using the asymmetric Laplace error distribution and introduce lagged observed responses into the conditional quantile function. We also deal with the initial conditions problem in dynamic panel data models by introducing correlated random effects into the model. For posterior inference, we propose a Gibbs sampling algorithm based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the mixture representation provides fully tractable conditional posterior densities and considerably simplifies existing estimation procedures for quantile regression models. In addition, we explain how the proposed Gibbs sampler can be utilized for the calculation of the marginal likelihood and for modal estimation. Our approach is illustrated with real data on medical expenditures.
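The location-scale mixture above can be checked directly: with z exponential and u standard normal, y = θz + τ√z·u follows an asymmetric Laplace law whose p-th quantile sits exactly at the location parameter, where θ = (1 − 2p)/(p(1 − p)) and τ² = 2/(p(1 − p)) (the Kozumi–Kobayashi parameterization, which I assume matches the paper's). A quick simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 0.25, 50000

theta = (1.0 - 2.0 * p) / (p * (1.0 - p))
tau = np.sqrt(2.0 / (p * (1.0 - p)))

z = rng.exponential(1.0, size=n)        # latent mixing variable
u = rng.standard_normal(n)
y = theta * z + tau * np.sqrt(z) * u    # asymmetric Laplace draws, location 0

# By construction P(y <= 0) should equal p = 0.25.
frac_below = (y <= 0.0).mean()
```

Conditional on z, y is Gaussian, which is what makes all the conditional posteriors in the Gibbs sampler tractable.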

12.
This paper studies the pricing of Asian options, whose payoffs depend on the average value of an underlying asset over the period to maturity. Since an Asian option is not very sensitive to the value of the underlying asset, the possibility of manipulation is smaller than for other options such as European vanilla and barrier options. We derive the pricing formula for geometric Asian options under the constant elasticity of variance (CEV) model, one of the local volatility models, and investigate the implications of the CEV model for geometric Asian options.

13.
Generalized linear mixed models (GLMMs) have been applied widely in the analysis of longitudinal data. This model confers two important advantages, namely, the flexibility to include random effects and the ability to make inference about complex covariances. In practice, however, the inference of variance components can be a difficult task due to the complexity of the model itself and the dimensionality of the covariance matrix of random effects. Here we first discuss for GLMMs the relation between Bayesian posterior estimates and penalized quasi-likelihood (PQL) estimates, based on the generalization of Harville's result for general linear models. Next, we perform fully Bayesian analyses for the random covariance matrix using three different reference priors: two with Jeffreys' priors derived from approximate likelihoods and one with the approximate uniform shrinkage prior. Computations are carried out via a combination of asymptotic approximations and Markov chain Monte Carlo methods. Under the criterion of the squared Euclidean norm, we compare the performance of Bayesian estimates of variance components with that of PQL estimates when the responses are non-normal, and with that of restricted maximum likelihood (REML) estimates when the data are assumed normal. Three applications and simulations involving binary, normal, and count responses with multiple random effects and small sample sizes are presented. The analyses examine the differences in estimation performance when the covariance structure is complex, and demonstrate the equivalence between PQL and the posterior modes when the former can be derived. The results also show that the Bayesian approach, particularly under the approximate Jeffreys' priors, outperforms other procedures.

14.
The gamma distribution is one of the most commonly used statistical distributions in reliability. While maximum likelihood has traditionally been the main method for estimating gamma parameters, Hirose has proposed a continuation method for parameter estimation of the three-parameter gamma distribution. In this paper, we propose to apply Markov chain Monte Carlo techniques to carry out a Bayesian estimation procedure using Hirose's simulated data as well as two real data sets. The method is flexible, and inference for any quantity of interest is readily available.
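A minimal version of the MCMC approach above for the two-parameter gamma (shape α, rate β) is a random-walk Metropolis sampler on (log α, log β) with a flat prior on the log scale; the three-parameter case and Hirose's data are not reproduced here, and all tuning values are made up:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
data = rng.gamma(shape=3.0, scale=2.0, size=500)   # synthetic data, rate = 0.5
n, sx, slx = len(data), data.sum(), np.log(data).sum()

def loglik(a, b):
    """Gamma log-likelihood with shape a and rate b."""
    return n * (a * math.log(b) - math.lgamma(a)) + (a - 1.0) * slx - b * sx

la, lb = 0.0, 0.0                 # start at alpha = beta = 1
cur = loglik(1.0, 1.0)
draws = []
for _ in range(6000):
    la_p = la + 0.1 * rng.standard_normal()
    lb_p = lb + 0.1 * rng.standard_normal()
    prop = loglik(math.exp(la_p), math.exp(lb_p))
    if math.log(rng.uniform()) < prop - cur:    # flat prior on (la, lb)
        la, lb, cur = la_p, lb_p, prop
    draws.append((math.exp(la), math.exp(lb)))

alpha_mean = np.mean([d[0] for d in draws[2000:]])
beta_mean = np.mean([d[1] for d in draws[2000:]])
```

Once the chain is available, "inference for any quantity of interest" is just a transformation of the draws, e.g. posterior draws of the mean α/β or of any reliability quantile.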

15.
Flexible modelling of the response variance in regression is interesting for understanding the causes of variability in the responses, and is crucial for efficient estimation and correct inference for mean parameters. In this paper we describe methods for mean and variance estimation where the responses are modelled using the double exponential family of distributions and mean and dispersion parameters are described as an additive function of predictors. The additive terms in the model are represented by penalized splines. A simple and unified computational methodology is presented for carrying out the calculations required for Bayesian inference in this class of models based on an adaptive Metropolis algorithm. Application of the adaptive Metropolis algorithm is fully automatic and does not require any kind of pretuning runs. The methodology presented provides flexible methods for modelling heterogeneous Gaussian data, as well as overdispersed and underdispersed count data. Performance is considered in a variety of examples involving real and simulated data sets.
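The adaptive Metropolis ingredient above (Haario-style: proposal covariance taken from the chain's own history, scaled by 2.38²/d) can be sketched on a toy 2-D correlated Gaussian target; the double exponential family and spline layers are omitted, and the adaptation schedule below is a simplification of the diminishing-adaptation schemes used in practice:

```python
import numpy as np

rng = np.random.default_rng(5)
prec = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))  # target precision

def logpost(x):
    return -0.5 * x @ prec @ x    # correlated Gaussian log-density

d, n_iter, burn = 2, 8000, 1000
x = np.zeros(d)
cur = logpost(x)
chain = []
cov = np.eye(d) * 0.1             # initial proposal covariance
for i in range(n_iter):
    if i >= burn and i % 100 == 0:
        # Adapt: empirical covariance of the history, Haario scaling 2.38^2/d.
        hist = np.array(chain)
        cov = 2.38**2 / d * (np.cov(hist.T) + 1e-6 * np.eye(d))
    prop = rng.multivariate_normal(x, cov)
    lp = logpost(prop)
    if np.log(rng.uniform()) < lp - cur:
        x, cur = prop, lp
    chain.append(x.copy())

samples = np.array(chain[burn:])
```

Because the proposal learns the target's correlation structure from the chain itself, no pretuning runs are needed, which is the property the abstract emphasizes.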

16.
This paper highlights recent developments in a rich class of counting process models for the micromovement of asset price and in the Bayesian inference (estimation and model selection) via filtering for the class of models. A specific micromovement model built upon linear Brownian motion with jumping stochastic volatility is used to demonstrate the procedure to develop a micromovement model with specific tick-level sample characteristics. The model is further used to demonstrate the procedure to implement Bayes estimation via filtering, namely, to construct a recursive algorithm for computing the trade-by-trade Bayes parameter estimates, especially for the stochastic volatility. The consistency of the recursive algorithm is proven. Simulation and real-data examples are provided, as well as a brief example of Bayesian model selection via filtering.

17.
We propose sequential Monte Carlo-based algorithms for maximum likelihood estimation of the static parameters in hidden Markov models with an intractable likelihood using ideas from approximate Bayesian computation. The static parameter estimation algorithms are gradient-based and cover both offline and online estimation. We demonstrate their performance by estimating the parameters of three intractable models, namely the α-stable distribution, g-and-k distribution, and the stochastic volatility model with α-stable returns, using both real and synthetic data.
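The g-and-k distribution mentioned above is defined through its quantile function, so it is trivial to simulate from but has no closed-form density, which is exactly the setting for approximate Bayesian computation. A toy rejection-ABC sketch estimating only the location parameter a, with the other parameters fixed at made-up values (the paper's gradient-based SMC algorithms are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)

def gk_sample(a, b, g, k, n, rng, c=0.8):
    """Draw from the g-and-k distribution via its quantile function."""
    z = rng.standard_normal(n)
    return a + b * (1 + c * np.tanh(g * z / 2.0)) * (1 + z**2) ** k * z

# "Observed" data with true location a = 3.
obs = gk_sample(3.0, 1.0, 2.0, 0.5, 500, rng)
s_obs = np.median(obs)                      # summary statistic

# Rejection ABC: keep the prior draws whose simulated summary is closest.
prior_draws = rng.uniform(0.0, 10.0, size=2000)
dist = np.array([abs(np.median(gk_sample(a, 1.0, 2.0, 0.5, 500, rng)) - s_obs)
                 for a in prior_draws])
accepted = prior_draws[np.argsort(dist)[:100]]
a_post_mean = accepted.mean()
```

Since the median of a g-and-k variate equals a, the accepted draws concentrate around the true location; richer summaries (quantile spreads, skewness) would be needed to identify b, g and k.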

18.
Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalized posterior distribution over the model parameters which is rarely available in closed form. For doubly intractable models, estimating the evidence adds another layer of difficulty. Consequently, the selection of the model that best describes an observed network among a collection of exponential random graph models for network analysis is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to an unreasonable inference. This article specifies a method to adjust pseudolikelihoods to obtain a reasonable, yet tractable, approximation to the likelihood. This allows implementation of widely used computational methods for evidence estimation and pursuit of Bayesian model selection of exponential random graph models for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost. Supplementary material for this article is available online.

19.
The Black-Scholes model does not account for non-Markovian properties or the volatility smile and skew, although the asset price may depend on its own past movements and real market data exhibit a non-flat implied volatility surface. Therefore, in this paper, we formulate an underlying asset model by adding a delayed structure to the constant elasticity of variance (CEV) model, one of the well-known alternative models addressing the shortcomings of geometric Brownian motion. However, it is still a one-factor volatility model, which usually does not capture the full dynamics of the volatility, showing a discrepancy between its predicted prices and market prices for a certain range of options. Based on this observation, we combine a stochastic volatility factor with the delayed CEV structure and develop a delayed hybrid model of stochastic and local volatilities. Using both a martingale approach and a singular perturbation method, we demonstrate the delayed CEV correction effects on the European vanilla option price under this hybrid volatility model as a direct extension of our previous work [12].

20.
This paper studies the threshold estimation of a TAR model when the underlying threshold parameter is a random variable. It is shown that the Bayesian estimator is consistent and its limit distribution is expressed in terms of a limit likelihood ratio. Furthermore, convergence of moments of the estimators is also established. The limit distribution can be computed via explicit simulations from which testing and inference for the threshold parameter can be conducted. The obtained results are illustrated with numerical simulations.
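A conditional-least-squares counterpart of the threshold estimation above can be sketched with a grid search: simulate a two-regime TAR(1), then pick the threshold minimizing the pooled residual sum of squares over candidate values (all parameter values below are made up; the Bayesian estimator and its limit theory are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)
n, r_true = 2000, 0.5

# Simulate a self-exciting TAR(1): slope 0.6 below the threshold, -0.6 above.
y = np.zeros(n)
for t in range(1, n):
    phi = 0.6 if y[t - 1] <= r_true else -0.6
    y[t] = phi * y[t - 1] + rng.standard_normal()

y_lag, y_now = y[:-1], y[1:]

def ssr(r):
    """Pooled SSR from separate no-intercept OLS AR(1) fits per regime."""
    total = 0.0
    for mask in (y_lag <= r, y_lag > r):
        if mask.sum() < 10:
            return np.inf
        phi_hat = (y_lag[mask] @ y_now[mask]) / (y_lag[mask] @ y_lag[mask])
        total += ((y_now[mask] - phi_hat * y_lag[mask]) ** 2).sum()
    return total

# Search over interior sample quantiles of the threshold variable.
grid = np.quantile(y_lag, np.linspace(0.10, 0.90, 161))
r_hat = min(grid, key=ssr)
```

The grid is restricted to interior quantiles so that each regime retains enough observations, the usual trimming convention in threshold estimation.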
