Similar Articles
18 similar articles found.
1.
In economics, time series exhibit features such as serial correlation and long memory; analyzing economic time series with the ARFIMA model, which accounts for both the short-memory and long-memory behavior of a series, helps improve fitting and forecasting accuracy. In recent decades, research on estimating the ARFIMA model parameters and the order d of the fractional differencing operator has grown, and the model has been applied ever more widely. Given the advantages of the Bayesian approach to parameter estimation, this paper draws on the posterior-distribution characteristics reported in the many studies applying this approach to propose reasonable prior distributions and, in view of the computational difficulty, estimates the model parameters by MCMC. An empirical analysis of China's GDP data over the past few decades yields posterior density plots, means, variances, and 95% credible intervals for the ARFIMA model parameters.
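A minimal sketch of MCMC estimation of this kind, assuming an ARFIMA(0, d, 0) specification, a flat prior on d over (-0.5, 0.5), and a Whittle quasi-likelihood with the innovation variance profiled out; the paper's actual priors and likelihood may differ, and the data below are placeholders.

```python
import numpy as np

def whittle_loglik(x, d):
    """Whittle log-likelihood of ARFIMA(0, d, 0) with the innovation
    variance profiled out (a common quasi-likelihood)."""
    n = len(x)
    m = (n - 1) // 2
    fft = np.fft.fft(x - x.mean())
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)     # periodogram
    lam = 2 * np.pi * np.arange(1, m + 1) / n             # Fourier frequencies
    g = np.abs(2 * np.sin(lam / 2)) ** (-2 * d)           # spectral shape, up to scale
    sigma2 = np.mean(I / g)                                # profiled scale
    return -np.sum(np.log(sigma2 * g) + I / (sigma2 * g))

def metropolis_d(x, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for d with a flat prior on (-0.5, 0.5)."""
    rng = np.random.default_rng(seed)
    d, ll = 0.0, whittle_loglik(x, 0.0)
    draws = []
    for _ in range(n_iter):
        d_new = d + step * rng.standard_normal()
        if -0.5 < d_new < 0.5:
            ll_new = whittle_loglik(x, d_new)
            if np.log(rng.uniform()) < ll_new - ll:       # accept/reject
                d, ll = d_new, ll_new
        draws.append(d)
    return np.array(draws)

# toy usage on white noise: the posterior should concentrate near d = 0
draws = metropolis_d(np.random.default_rng(1).standard_normal(500))
print(draws[1000:].mean(), np.percentile(draws[1000:], [2.5, 97.5]))
```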

2.
Against the background that volatility in financial time series shows pronounced long memory, this paper studies estimation of the long-memory parameter of the LMSV model. It first analyzes the relevant properties of the LMSV model; then, exploiting the close correspondence between the LMSV model and the ARFIMA model, it proposes a semiparametric method for estimating the LMSV long-memory parameter; finally, it verifies the effectiveness of the semiparametric volatility method on stock market data.
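A minimal illustration of a semiparametric long-memory estimator of this kind: a GPH-style log-periodogram regression applied to log squared returns, a standard route for LMSV-type models. The bandwidth rule and the data below are placeholders, not the paper's choices.

```python
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """GPH log-periodogram regression estimate of the long-memory parameter d,
    using the first m ~ n**bandwidth_power Fourier frequencies."""
    n = len(x)
    m = int(n ** bandwidth_power)
    fft = np.fft.fft(x - np.mean(x))
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)      # periodogram
    X = -2 * np.log(2 * np.sin(lam / 2))                   # regressor; slope = d
    return np.polyfit(X, np.log(I), 1)[0]

# for an LMSV-type series, apply the estimator to log squared returns
returns = np.random.default_rng(0).standard_normal(2000)   # placeholder data
d_hat = gph_estimate(np.log(returns ** 2 + 1e-12))
print(d_hat)
```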

3.
This paper considers a semiparametric regression model whose errors form an NA (negatively associated) sequence. Using nonparametric estimation techniques, least squares and weighted least squares estimators of the model parameters are obtained, and their moment consistency is established under suitable conditions.

4.
Modeling economic time series based on a class of long-memory processes
This paper introduces the concept of long memory and proposes, for the first time, a method of detecting long memory that is combined with the ADF test. It presents a procedure for building ARFIMA models that combines an initial estimate of the parameter d with approximate maximum likelihood estimation, and that integrates long-memory and short-memory analysis of the time series. An empirical study is carried out and demonstrates the superiority of this model over alternative models.
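A rough sketch of a two-stage workflow in this spirit, assuming an initial estimate of d is already available (e.g. from a log-periodogram regression): screen the series with an ADF test, fractionally difference with the initial d, and fit a short-memory ARMA to the filtered series. The paper's exact tests and estimators are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

def frac_diff(x, d, n_weights=100):
    """Apply the fractional difference (1 - L)**d via a truncated binomial expansion."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return np.convolve(x, w)[:len(x)]

series = np.cumsum(np.random.default_rng(0).standard_normal(300))  # placeholder data
print("ADF p-value:", adfuller(series)[1])        # unit-root / stationarity screening
d0 = 0.3                                          # hypothetical initial estimate of d
short_memory = frac_diff(series, d0)              # remove the long-memory component
arma = ARIMA(short_memory, order=(1, 0, 1)).fit() # short-memory ARMA stage
print(arma.params)
```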

5.
Application of an improved adaptive Lasso method to the stock market
《数理统计与管理》2019,(4):750-760
In finance, the adaptive Lasso is widely used for variable selection and parameter estimation in stock-price prediction models. However, the adaptive Lasso was developed for non-time-series models and ignores the specific structure of time-series models; in particular, the predictive power of a lag typically weakens as the lag order increases, which can make estimation and prediction inaccurate. The penalty used for variable selection in time-series models should therefore depend on the lag order, with larger penalties applied to more distant lags. To account for these features while retaining the advantages of the adaptive Lasso, this paper proposes a modified adaptive Lasso (MA Lasso) method for the time-series AR(p) model, in which the adaptive Lasso penalty is multiplied by a function that is monotonically nondecreasing in the lag order. A further advantage of this penalty design is that, for particular choices of the penalty parameter, both the Lasso and the adaptive Lasso are special cases of MA Lasso. For choosing the other key parameter of the AR(p) model, the order p, a modified BIC criterion is proposed. Finally, MA Lasso is applied to the CSI 100 index; the empirical analysis shows that, compared with the Lasso and the adaptive Lasso, MA Lasso selects the most parsimonious model and predicts best, i.e., it selects the fewest predictors while attaining the smallest prediction error.
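Purely as an illustration of the penalty idea (not the paper's exact MA Lasso), the sketch below builds an AR(p) design matrix and applies an adaptive-Lasso penalty multiplied by a nondecreasing function of the lag order, implemented through column rescaling; the tuning values lam and gamma and the simulated data are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def lag_weighted_adaptive_lasso(x, p, lam=0.05, gamma=1.0):
    """Adaptive Lasso for an AR(p) model with the penalty on lag j proportional
    to j**gamma / |OLS estimate|, so later lags are penalized more heavily."""
    n = len(x)
    X = np.column_stack([x[p - j:n - j] for j in range(1, p + 1)])  # column j = lag-j values
    y = x[p:]
    ols = LinearRegression().fit(X, y)
    weights = (np.arange(1, p + 1) ** gamma) / np.maximum(np.abs(ols.coef_), 1e-8)
    Xw = X / weights                     # column rescaling turns the weighted L1 into a plain L1
    fit = Lasso(alpha=lam, max_iter=10000).fit(Xw, y)
    return fit.coef_ / weights           # map back to the original AR coefficients

# toy usage on a simulated AR(2) series, over-specified with p = 6 candidate lags
rng = np.random.default_rng(0)
x = np.zeros(1000)
for t in range(2, 1000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
print(lag_weighted_adaptive_lasso(x, p=6))
```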

6.
For the case in which the structural equations are exactly identified, this paper studies parameter estimation in simultaneous-equations models when the design matrix X of the exogenous variables is multicollinear. A modified indirect ridge estimator of the parameters is proposed, its favorable statistical properties are proved, and a method for choosing the ridge parameter that minimizes the mean squared error of the modified indirect ridge estimator is given.

7.
Nonparametric estimation is a recent direction in modern statistics with important applications in many fields. This paper applies three nonparametric estimation methods to the analysis of the relationship between Shanghai copper futures prices and LME spot prices, obtaining the fitted values and fitted curves of the model. A comparison with ordinary least squares results confirms the advantage of nonparametric estimation in achieving higher forecast accuracy across different historical periods.
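One standard nonparametric estimator that could serve in an analysis of this kind is Nadaraya-Watson kernel regression; the sketch below is generic, and the variable names, bandwidth, and data are placeholders rather than the paper's actual choices.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# toy usage: a nonlinear relation between two placeholder price series
rng = np.random.default_rng(0)
lme_spot = rng.uniform(0, 10, 300)
shfe_futures = np.sin(lme_spot) + 0.2 * rng.standard_normal(300)
fitted = nadaraya_watson(np.linspace(0, 10, 100), lme_spot, shfe_futures, h=0.5)
print(fitted[:5])
```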

8.
Based on an extreme value theory model, this paper fits and forecasts old-age mortality rates in China and Japan, avoiding the subjectivity of other parametric extrapolation models for mortality. Weighted least squares is applied within the extreme value theory model for old-age mortality, the optimal threshold age and parameter estimates are chosen by repeated trials, and the maximum attainable age in China and Japan, together with its interval estimate, is forecast. The study provides a reference for the compilation of China's experience life tables.
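The extreme value model itself is not reproduced here; as a hedged illustration of the weighted-least-squares and trial-threshold mechanics mentioned in the abstract, the sketch below fits a simple log-linear curve to placeholder old-age death rates above several candidate threshold ages, weighting by exposure.

```python
import numpy as np

# placeholder old-age death rates q_x and exposures (illustrative, not real data)
ages = np.arange(80, 106)
rng = np.random.default_rng(0)
qx = 0.05 * np.exp(0.09 * (ages - 80)) * np.exp(0.02 * rng.standard_normal(len(ages)))
exposure = 1e5 * np.exp(-0.15 * (ages - 80))

# weighted least squares fit of log q_x on age above each trial threshold age;
# the threshold would then be chosen by comparing such repeated fits
for threshold in (85, 90, 95):
    sel = ages >= threshold
    slope, intercept = np.polyfit(ages[sel], np.log(qx[sel]), 1, w=np.sqrt(exposure[sel]))
    print(threshold, round(slope, 4), round(intercept, 4))
```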

9.
This paper studies parameter estimation in the semiparametric regression model. Using shrinkage estimation, a class of biased estimators of the model is derived and compared with the least squares estimator, the ridge estimator, and the almost unbiased ridge estimator. In the mean squared error sense, the new shrinkage estimator clearly outperforms the least squares estimator. Finally, the choice of the biasing parameter is discussed.

10.
Using SAS, this paper builds a time series model for China's per capita GDP over 1952-2009 and forecasts per capita GDP for 2010-2013. On this basis, two Bayesian graduation (smoothing) models and algorithms are constructed that take the parameter information obtained from the time series model as prior information. The resulting Bayesian parameter estimates and forecasts make full use of both the sample information and the prior information on the parameters, and therefore have smaller variance or squared error, giving sounder estimates. To examine the method's sensitivity to the prior, simulated forecasts are produced under two prior distributions. Compared with the traditional time series forecasts, the Bayesian model based on a single normal observation with known variance gives more accurate predictions, whereas the Bayesian model with an exponential prior has larger prediction errors and performs poorly.
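A minimal sketch of the conjugate normal updating that underlies a "single normal observation, known variance" prior of this kind; all numbers are placeholders, not the paper's GDP figures.

```python
import numpy as np

def normal_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal update: combine a prior (e.g. centered at a time-series
    forecast) with a single normal observation of known variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# toy usage: the time-series point forecast acts as the prior mean for the next value
print(normal_posterior(prior_mean=30000.0, prior_var=500.0 ** 2,
                       obs=30750.0, obs_var=800.0 ** 2))
```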

11.
Computing efficient frontiers using estimated parameters
The mean-variance model for portfolio selection requires estimates of many parameters. This paper investigates the effect of errors in parameter estimates on the results of mean-variance analysis. Using a small amount of historical data to estimate parameters exposes the model to estimation errors. However, using a long time horizon to estimate parameters increases the possibility of nonstationarity in the parameters. This paper investigates the tradeoff between estimation error and stationarity. A simulation study shows that the effects of estimation error can be surprisingly large. The magnitude of the errors increases with the number of securities in the analysis. Due to the error maximization property of mean-variance analysis, estimates of portfolio performance are optimistically biased predictors of actual portfolio performance. It is important for users of mean-variance analysis to recognize and correct for this phenomenon in order to develop more realistic expectations of the future performance of a portfolio. This paper suggests a method for adjusting for the bias. A statistical test is proposed to check for nonstationarity in historical data.
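A small simulation in the spirit of this study, under hypothetical parameter values: estimate means and covariances from a short window, form unconstrained mean-variance weights, and compare the portfolio's estimated mean with its mean under the true parameters to see the optimistic bias.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 20, 60                                    # assets, months of estimation history
mu_true = rng.normal(0.08, 0.03, p) / 12         # hypothetical true monthly means
cov_true = (0.20 ** 2 / 12) * (0.3 * np.ones((p, p)) + 0.7 * np.eye(p))  # hypothetical covariances

def mv_weights(mu, cov):
    """Unconstrained mean-variance weights, rescaled to sum to one."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

returns = rng.multivariate_normal(mu_true, cov_true, size=n)
mu_hat, cov_hat = returns.mean(axis=0), np.cov(returns, rowvar=False)
w = mv_weights(mu_hat, cov_hat)
print("estimated portfolio mean:", w @ mu_hat)   # optimistically biased predictor
print("true portfolio mean:     ", w @ mu_true)
```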

12.
We address the problem of parameter estimation of long memory time series. We consider k-factors Gegenbauer Autoregressive Moving Average (k-GARMA) processes and we estimate their parameters by the minimum Hellinger distance estimator. We establish the consistency of the estimator and the asymptotic normality for some bandwidth choice.

13.
We consider band-limited frequency-domain goodness-of-fit testing for stationary time series, without smoothing or tapering the periodogram, while taking into account the effects of parameter uncertainty (from maximum-likelihood estimation). We are principally interested in modeling short econometric time series, typically with 100 to 150 observations, for which data-driven bandwidth selection procedures for kernel-smoothed spectral density estimates are unlikely to have adequate levels. Our mathematical results take parameter uncertainty directly into account, allowing us to obtain adequate level properties at small sample sizes. The main theorems provide very general results involving joint normality for linear functionals of powers of the periodogram, while accounting for parameter uncertainty, which can be used to determine the level and power of a wide array of statistics. We discuss several applications, such as spectral peak testing and testing for the inclusion of an Unobserved Component, and illustrate our methods on a time series from the Energy Information Administration.

14.
In the context of semi-functional partial linear regression model, we study the problem of error density estimation. The unknown error density is approximated by a mixture of Gaussian densities with means being the individual residuals, and variance a constant parameter. This mixture error density has the form of a kernel density estimator of the residuals, where the regression function, consisting of parametric and nonparametric components, is estimated by the ordinary least squares and functional Nadaraya–Watson estimators. The estimation accuracy of the ordinary least squares and functional Nadaraya–Watson estimators jointly depends on the same bandwidth parameter. A Bayesian approach is proposed to simultaneously estimate the bandwidths in the kernel-form error density and in the regression function. Under the kernel-form error density, we derive a kernel likelihood and posterior for the bandwidth parameters. For estimating the regression function and error density, a series of simulation studies show that the Bayesian approach yields better accuracy than the benchmark functional cross validation. Illustrated by a spectroscopy data set, we found that the Bayesian approach gives better point forecast accuracy of the regression function than the functional cross validation, and it is capable of producing prediction intervals nonparametrically.
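The kernel-form error density described above, i.e. an equal-weight mixture of Gaussians centered at the residuals with a common bandwidth, takes only a few lines to evaluate; the function below is a generic sketch, and the bandwidth h stands in for the parameter that the paper estimates by its Bayesian procedure.

```python
import numpy as np

def kernel_form_error_density(eval_points, residuals, h):
    """Equal-weight mixture of N(residual_i, h**2) densities at eval_points,
    i.e. a Gaussian kernel density estimate of the residuals with bandwidth h."""
    z = (eval_points[:, None] - residuals[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

# toy usage with placeholder residuals
res = 0.5 * np.random.default_rng(0).standard_normal(200)
grid = np.linspace(-2, 2, 101)
density = kernel_form_error_density(grid, res, h=0.2)
print(density.max())
```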

15.
We study the accuracy of estimation of unknown parameters in the case of two-step statistical estimates admitting special representations. An approach to the study of such problems previously proposed by the authors is extended to the case of the estimation of a multidimensional parameter. As a result, we obtain necessary and sufficient conditions for the weak convergence of the normalized estimation error to a multidimensional normal distribution.

16.
We construct a two-sample test for comparison of long memory parameters based on ratios of two rescaled variance (V/S) statistics studied in Giraitis et al. [L. Giraitis, R. Leipus, A. Philippe, A test for stationarity versus trends and unit roots for a wide class of dependent errors, Econometric Theory 21 (2006) 989-1029]. The two samples have the same length and can be mutually independent or dependent. In the latter case, the test statistic is modified to make it asymptotically free of the long-run correlation coefficient between the samples. To diminish the sensitivity of the test on the choice of the bandwidth parameter, an adaptive formula for the bandwidth parameter is derived using the asymptotic expansion in Abadir et al. [K. Abadir, W. Distaso, L. Giraitis, Two estimators of the long-run variance: beyond short memory, Journal of Econometrics 150 (2009) 56-70]. A simulation study shows that the above choice of bandwidth leads to a good size of our comparison test for most values of fractional and ARMA parameters of the simulated series.
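A sketch of one common form of the rescaled-variance (V/S) statistic with a Bartlett-kernel long-run variance estimate; the fixed bandwidth q and the simulated data are placeholders, and the paper's adaptive bandwidth formula and dependence correction are not reproduced.

```python
import numpy as np

def long_run_variance(x, q):
    """Bartlett-kernel (Newey-West type) long-run variance estimate with bandwidth q."""
    xc = x - x.mean()
    n = len(xc)
    s2 = np.sum(xc ** 2) / n
    for k in range(1, q + 1):
        gamma_k = np.sum(xc[k:] * xc[:-k]) / n
        s2 += 2.0 * (1.0 - k / (q + 1.0)) * gamma_k
    return s2

def vs_statistic(x, q):
    """Rescaled-variance (V/S) statistic: sample variance of the partial sums of
    deviations from the mean, divided by n times the long-run variance estimate."""
    n = len(x)
    s = np.cumsum(x - x.mean())
    return (np.mean(s ** 2) - np.mean(s) ** 2) / (n * long_run_variance(x, q))

# toy two-sample comparison via the ratio of V/S statistics
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(1000), rng.standard_normal(1000)
print(vs_statistic(x1, q=10) / vs_statistic(x2, q=10))
```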

17.
Based on the weekly closing price of the Shenzhen Integrated Index, this article studies the volatility of the Shenzhen Stock Market using three different models: Logistic, AR(1) and AR(2). The time-varying parameters of the Logistic regression model are estimated using both the exponential smoothing method and the time-varying parameter estimation method. AR(1) and AR(2) models are established for the zero-mean series of the weekly closing price and for the zero-mean series of its volatility rate, based on the analysis of the zero-mean closing-price series. Six common statistical measures of prediction error are used to evaluate the forecasts: mean error (ME), mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), Akaike's information criterion (AIC), and Bayesian information criterion (BIC). The investigation shows that the AR(1) model gives the best forecasts, whereas the AR(2) model gives results intermediate between those of the AR(1) model and the Logistic regression model.
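A minimal sketch of a comparison loop of this kind, assuming simulated data: fit AR(1) and AR(2) with statsmodels and report the prediction-error summaries (ME, MAE, RMSE, MAPE) together with AIC and BIC. Note that MAPE can blow up when actual values are near zero.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def forecast_errors(actual, pred):
    """ME, MAE, RMSE and MAPE of a set of predictions."""
    e = actual - pred
    return {"ME": e.mean(), "MAE": np.abs(e).mean(),
            "RMSE": np.sqrt((e ** 2).mean()),
            "MAPE": 100 * np.abs(e / actual).mean()}

rng = np.random.default_rng(0)
x = np.zeros(400)                                 # placeholder zero-mean series
for t in range(1, 400):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

for p in (1, 2):
    fit = AutoReg(x, lags=p).fit()
    metrics = forecast_errors(x[p:], fit.fittedvalues)   # in-sample one-step fits
    print(p, metrics, "AIC:", round(fit.aic, 1), "BIC:", round(fit.bic, 1))
```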

18.
We study the parameter estimation for parabolic, linear, second-order, stochastic partial differential equations (SPDEs) observing a mild solution on a discrete grid in time and space. A high-frequency regime is considered where the mesh of the grid in the time variable goes to zero. Focusing on volatility estimation, we provide an explicit and easy to implement method of moments estimator based on squared increments. The estimator is consistent and admits a central limit theorem. This is established moreover for the joint estimation of the integrated volatility and parameters in the differential operator in a semi-parametric framework. Starting from a representation of the solution of the SPDE with Dirichlet boundary conditions as an infinite factor model and exploiting mixing-type properties of time series, the theory considerably differs from the statistics for semi-martingales literature. The performance of the method is illustrated in a simulation study.
