Similar Documents
Found 19 similar documents (search time: 984 ms)
1.
Local Linear Smoothing Analysis of Volatility Clustering in Shanghai Stock Market Return Series   Total citations: 1 (self-citations: 1, other citations: 0)
Starting from the basic characteristics of return series in the Shanghai stock market, this paper focuses on nonparametric methods for analyzing the clustering of volatility in the return series. A set of descriptive statistics first illustrates the basic features of stock-market returns, and a nonparametric method is used to estimate the density function of the return series. Nonparametric regression is then applied to analyze stock-market volatility, showing that volatility clustering in return series is a general regularity that should be kept in mind when guarding against stock-market risk.
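The kernel density estimation step described above can be sketched as follows. This is a minimal illustration, not the paper's own code: the returns are synthetic, and the bandwidth uses Silverman's rule of thumb as an assumed default.

```python
import numpy as np

def gaussian_kde(x, grid, h=None):
    """Nonparametric density estimate with a Gaussian kernel.

    h defaults to Silverman's rule of thumb (an assumed choice here)."""
    x = np.asarray(x, dtype=float)
    if h is None:
        h = 1.06 * x.std() * len(x) ** (-1 / 5)  # Silverman's rule
    # kernel-weighted average over the sample at each grid point
    u = (grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / h

rng = np.random.default_rng(0)
returns = rng.standard_normal(5000) * 0.01       # synthetic daily returns
grid = np.linspace(-0.04, 0.04, 201)
dens = gaussian_kde(returns, grid)               # estimated return density
```

The same kernel-weighting idea extends directly to the nonparametric regression of squared returns used to study volatility.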

2.
Monthly mean temperature data for Anhui Province from 1955 to 2010 are analyzed with functional nonparametric kernel regression estimation; a functional nonparametric regression model is built and an empirical study of the 2010 temperature data is carried out. Compared with the predictions of the classical nonparametric regression model, the mean squared prediction error of the proposed method is clearly smaller, demonstrating the advantage of the functional nonparametric model.

3.
This paper applies nonparametric statistical methods to analyze and evaluate the scoring performance of individual judges in score-based competitions, and tests the overall scoring quality of a panel of judges. Finally, the results are compared with those obtained by parametric methods.

4.
Nonparametric regression estimation is a useful tool for studying nonlinear time series. Under mixing-dependent samples, return volatility is studied with nonparametric kernel regression estimation; the smoothing parameter is selected by an improved cross-validation function, and some interesting empirical results for the Shanghai and Shenzhen stock markets are presented.
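The kernel regression with cross-validated bandwidth described above can be sketched as a Nadaraya-Watson estimator with leave-one-out bandwidth selection. This is a generic sketch on simulated data; the paper's improved cross-validation function and mixing-dependence setting are not reproduced.

```python
import numpy as np

def nw_regression(x, y, x0, h):
    """Nadaraya-Watson kernel regression estimate at points x0."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, candidates):
    """Pick the bandwidth minimizing leave-one-out squared prediction error."""
    best_h, best_err = None, np.inf
    for h in candidates:
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)          # leave each point out of its own fit
        pred = (w * y).sum(axis=1) / w.sum(axis=1)
        err = np.mean((y - pred) ** 2)
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(300)
h = loo_cv_bandwidth(x, y, np.linspace(0.01, 0.2, 20))
fit = nw_regression(x, y, x, h)
```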

5.
A two-stage least squares estimation method is developed for a semiparametric panel spatial lag model. The parametric component estimator is shown to be asymptotically normal with convergence rate n^{-1/2}, and the nonparametric component estimator is asymptotically normal at interior points, attaining the optimal convergence rate for nonparametric function estimation. The method is applied to analyze the effect of foreign direct investment on the labor income share.
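The two-stage least squares idea behind the parametric component can be illustrated on a plain linear instrumental-variables problem. This is a generic 2SLS sketch with simulated data and an assumed true coefficient of 2; it is not the paper's semiparametric panel estimator.

```python
import numpy as np

def two_stage_ls(y, X, Z):
    """Two-stage least squares: project X onto the instruments Z,
    then regress y on the projected X. Returns the 2SLS coefficients."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto span(Z)
    Xhat = Pz @ X                            # first stage: fitted X
    return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)

rng = np.random.default_rng(4)
n = 5000
z = rng.standard_normal((n, 1))                   # instrument
u = rng.standard_normal(n)                        # unobserved confounder
x = z[:, 0] + u + 0.5 * rng.standard_normal(n)    # endogenous regressor
y = 2.0 * x + u + rng.standard_normal(n)          # true coefficient is 2
beta = two_stage_ls(y, x[:, None], z)
```

Ordinary least squares would be biased upward here because x is correlated with the confounder u; the instrument restores consistency.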

6.
In recent years, several semiparametric statistical methods built on the semiparametric density ratio model have been reported; these methods are often more robust than parametric methods and more efficient than nonparametric ones. In this paper, we propose a semiparametric hypothesis test for the difference between two population means, built mainly on a semiparametric estimator of that difference. We report theoretical and simulation results showing that the method is slightly better than commonly used parametric and nonparametric methods when the data satisfy the normality assumption, and clearly superior when they do not. We also apply the proposed method to two real data sets.

7.
Simultaneous equations models play an important role in economic policy making, structural analysis, and forecasting. Research on nonparametric econometric models has so far focused mainly on single-equation models, while work on nonparametric simultaneous equations models is only just emerging internationally. This paper combines local linear estimation of nonparametric regression models with traditional estimation methods for simultaneous equations models and, for the first time, proposes a local linear instrumental-variable variable-bandwidth estimator for nonparametric simultaneous econometric models, applying it to a nonparametric simultaneous model of China's macroeconomy. The results show that the nonparametric simultaneous model outperforms the linear simultaneous model, whose linear form introduces unnecessary specification error, and that for the nonparametric simultaneous model the local linear instrumental-variable variable-bandwidth estimator outperforms ordinary local linear estimation.

8.
赵川  薛红 《运筹与管理》2013,22(1):252-255
An efficient logistics management model is the foundation and guarantee of rapid growth for chain retail enterprises. In an increasingly complex market environment, chain retailers must solve problems such as high inventory levels, high distribution costs, stock-outs, disorderly distribution, and delivery delays. To address these problems, an optimization model for store and distribution-center inventory levels under non-equal-period replenishment is proposed, solving the multi-echelon inventory optimization problem in chain retailing; a multi-echelon intelligent inventory management system based on a Multi-Agent System is then built, addressing the disorderly distribution and delivery delays that are common in the multi-echelon inventories of chain retailers.

9.
王淑玲  冯予  杭丹 《大学数学》2012,28(2):29-33
This paper studies deviance measures for nonparametric fixed-design regression models under random censoring. Using properties of the randomly censored nonparametric regression model and the Kaplan-Meier product-limit estimator of the survival distribution, the original model is first transformed into a standard nonparametric regression model; a deviance analysis of the model is then carried out, yielding a sequence of deviance measures; finally, a worked example verifies the effectiveness of the diagnostic method.
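The Kaplan-Meier product-limit estimator used in the transformation step above can be sketched as follows. This is a minimal, self-contained implementation on a tiny made-up sample, not the paper's code.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator of the survival function.

    events[i] = 1 if observation i is an actual event, 0 if censored.
    Returns (distinct event times, survival probability just after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, s = [], 1.0
    uniq = np.unique(times[events == 1])
    for t in uniq:
        at_risk = np.sum(times >= t)                  # still under observation
        d = np.sum((times == t) & (events == 1))      # events at time t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

t = [3, 5, 5, 8, 10]   # toy follow-up times
e = [1, 1, 0, 1, 1]    # the second observation at t=5 is censored
ts, s = kaplan_meier(t, e)
```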

10.
Estimating loss distributions has always been an important problem for insurance companies. Many parametric and nonparametric methods exist for fitting loss distributions. This paper proposes a combined parametric-nonparametric approach to the fitting problem: the threshold separating large and small losses is first determined from the mean excess plot; losses above the threshold are then fitted with a generalized Pareto distribution, and losses below it with a transformed kernel density estimate. Finally, an empirical error comparison of this method against alternatives yields satisfactory results.
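The threshold-selection and tail-fitting steps above can be sketched with an empirical mean excess function and a generalized Pareto fit. The data are synthetic exponential losses (whose mean excess function is flat and whose GPD shape is zero), and the 90% quantile threshold is an assumed choice for illustration.

```python
import numpy as np
from scipy.stats import genpareto

def mean_excess(losses, thresholds):
    """Empirical mean excess function e(u) = E[X - u | X > u]."""
    losses = np.asarray(losses, dtype=float)
    return np.array([losses[losses > u].mean() - u for u in thresholds])

rng = np.random.default_rng(2)
losses = rng.exponential(1.0, 20000)             # synthetic claim sizes
u_grid = np.quantile(losses, [0.5, 0.75, 0.9])
me = mean_excess(losses, u_grid)                 # roughly flat for exponential

u = np.quantile(losses, 0.9)                     # chosen tail threshold
excess = losses[losses > u] - u
shape, _, scale = genpareto.fit(excess, floc=0)  # GPD fit to tail excesses
```

In practice the threshold is read off the mean excess plot as the point beyond which the function is approximately linear.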

11.
Several different approaches have been suggested for the numerical solution of the global optimization problem: space covering methods, trajectory methods, random sampling, random search and methods based on a stochastic model of the objective function are considered in this paper and their relative computational effectiveness is discussed. A closer analysis is performed of random sampling methods along with cluster analysis of sampled data and of Bayesian nonparametric stopping rules.

12.
Isotonic nonparametric least squares (INLS) is a regression method for estimating a monotonic function by fitting a step function to data. In the literature of frontier estimation, the free disposal hull (FDH) method is similarly based on the minimal assumption of monotonicity. In this paper, we link these two separately developed nonparametric methods by showing that FDH is a sign-constrained variant of INLS. We also discuss the connections to related methods such as data envelopment analysis (DEA) and convex nonparametric least squares (CNLS). Further, we examine alternative ways of applying isotonic regression to frontier estimation, analogous to corrected and modified ordinary least squares (COLS/MOLS) methods known in the parametric stream of frontier literature. We find that INLS is a useful extension to the toolbox of frontier estimation both in the deterministic and stochastic settings. In the absence of noise, the corrected INLS (CINLS) has a higher discriminating power than FDH. In the case of noisy data, we propose to apply the method of non-convex stochastic envelopment of data (non-convex StoNED), which disentangles inefficiency from noise based on the skewness of the INLS residuals. The proposed methods are illustrated by means of simulated examples.
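The isotonic least-squares fit at the core of INLS is usually computed with the pool-adjacent-violators algorithm (PAVA), which can be sketched as follows. This is a generic unit-weight implementation on a toy sequence, not the paper's code.

```python
import numpy as np

def isotonic_regression(y):
    """Pool-adjacent-violators: least-squares fit of a nondecreasing
    step function to y (unit weights)."""
    # maintain blocks of (mean, weight), merging whenever monotonicity fails
    means, weights = [], []
    for v in map(float, y):
        means.append(v)
        weights.append(1.0)
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (means[-2] * weights[-2] + means[-1] * weights[-1]) / w
            means[-2:] = [m]
            weights[-2:] = [w]
    # expand block means back to one fitted value per observation
    fit = []
    for m, w in zip(means, weights):
        fit.extend([m] * int(w))
    return np.array(fit)

y = [1.0, 3.0, 2.0, 4.0, 3.0, 5.0]
f = isotonic_regression(y)   # nondecreasing step function closest to y
```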

13.
A penalized approach is proposed for performing large numbers of parallel nonparametric analyses of either of two types: restricted likelihood ratio tests of a parametric regression model versus a general smooth alternative, and nonparametric regression. Compared with naïvely performing each analysis in turn, our techniques reduce computation time dramatically. Viewing the large collection of scatterplot smooths produced by our methods as functional data, we develop a clustering approach to summarize and visualize these results. Our approach is applicable to ultra-high-dimensional data, particularly data acquired by neuroimaging; we illustrate it with an analysis of developmental trajectories of functional connectivity at each of approximately 70,000 brain locations. Supplementary materials, including an appendix and an R package, are available online.

14.
The additive model is a flexible nonparametric statistical model which allows a data-analytic transform of the covariates. When the number of covariates is large and grows exponentially with the sample size, the urgent issue is to reduce dimensionality from high to a moderate scale. In this paper, we propose and investigate marginal empirical likelihood screening methods in ultra-high dimensional additive models. The proposed nonparametric screening method selects variables by ranking a measure of the marginal empirical likelihood ratio evaluated at zero, so as to differentiate the contribution of each covariate to the response variable. We show that, under some mild technical conditions, the proposed marginal empirical likelihood screening methods have a sure screening property, and we explicitly quantify the extent to which the dimensionality can be reduced. We also propose a data-driven thresholding method and an iterative marginal empirical likelihood method to enhance the finite-sample performance for fitting sparse additive models. Simulation results and real data analysis demonstrate that the proposed methods are competitive and outperform alternatives in terms of error in a heteroscedastic case.

15.
黄亚伟 《经济数学》2017,34(1):59-64
This paper is the first to analyze the Hong Kong Interbank Offered Rate (Hibor) with short-term interest rate models, revealing the basic features of Hibor over the past decade. Preliminary analysis shows that the stationarity of the Hibor data cannot be guaranteed, so nonparametric statistical methods are adopted. Following the method in Bandi's paper, nonparametric estimates of the drift and diffusion terms of the function are given, together with an estimate of the local time of the process. The empirical analysis finds that, over 2006 to 2015 and with 2009 as the dividing point, the Hibor data exhibit different characteristics in the two subperiods, and the local time function of the sample data displays a bimodal distribution.

16.
So far, in the nonparametric literature only full frontier nonparametric methods have been applied to search for economies of scope and scale, particularly the data envelopment analysis method (DEA). However, these methods present some drawbacks that might lead to biased results. This paper proposes a methodology based on more robust partial frontier nonparametric methods to look for scope and scale economies. Through this methodology it is possible to assess the robustness of these economies, and in particular the influence that extreme data or outliers might have on them. The influence of imposing convexity on the production set of firms was also investigated. This methodology was applied to the water utilities that operated in Portugal between 2002 and 2008. There is evidence of economies of vertical integration and economies of scale in drinking water supply utilities and in water and wastewater utilities operating mainly in the retail segment. Economies of scale were found in water and wastewater utilities operating exclusively in the wholesale segment, and in some of these utilities diseconomies of scope were also found. The proposed methodology also allowed us to conclude that the existence of some smaller utilities makes the minimum optimal scales go down.

17.
Based on real efficiency data from a commercial bank, this paper computes the efficiency of its branch offices using three approaches: factor analysis, clustering combined with linear regression, and DEA, and compares the differences between parametric and nonparametric methods of efficiency evaluation. It then uses historical data to empirically test the validity of the methods for analyzing the determinants of efficiency.

18.
In this review paper we summarise several nonparametric methods recently applied to the pricing of financial options. After a short introduction to martingale-based option pricing theory, we focus on two possible fields of application for nonparametric methods: the estimation of risk-neutral probabilities and the estimation of the dynamics of the underlying instruments in order to construct an internally consistent model.

19.
In this paper, the Conditional Value-at-Risk (CVaR) is adopted to measure the total loss of multiple lines of insurance business and two nonparametric estimation methods are introduced to explore the optimal multivariate quota-share reinsurance under a mean-CVaR framework. While almost all the existing literature on optimal reinsurance are based on a probabilistic derivation, the present paper relies on a statistical analysis. The proposed optimal reinsurance models are directly formulated on empirical data and no explicit distributional assumption on the underlying risk vector is required. The resulting nonparametric reinsurance models are convex and computationally amenable, circumventing the difficulty of computing CVaR of the sum of a generally dependent random vector. Statistical consistency of the resulting estimators for the best CVaR is established for both nonparametric models, allowing empirical data to be generated from any stationary process satisfying strong mixing conditions. Finally, numerical experiments are presented to show that a routine bootstrap procedure can capture the distributions of the resulting risk measures well for independent data.
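The empirical (nonparametric) CVaR estimation underlying the models above can be sketched as a tail average beyond the empirical VaR. This is a generic sketch on simulated standard-normal losses (for which the 95% CVaR is about 2.063), not the paper's reinsurance formulation.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Nonparametric CVaR estimate: average of the losses at or above
    the empirical alpha-quantile (the empirical VaR)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)      # empirical Value-at-Risk
    return losses[losses >= var].mean()   # mean of the worst (1-alpha) tail

rng = np.random.default_rng(3)
total = rng.standard_normal(100000)       # synthetic aggregate loss
cvar = empirical_cvar(total, 0.95)
```

Because it is an average over the empirical tail, this estimator needs no distributional assumption on the loss vector, which is the point of the nonparametric formulation.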

