Found 20 similar documents; search took 93 ms.
1.
Tail behavior of over-dispersed count distribution models    Total citations: 1 (self-citations: 0, citations by others: 1)
Discrete count distribution models are widely used in actuarial science, biostatistics, and related fields. When the observed data have a long tail (i.e., are over-dispersed) and carry a large probability mass at zero, many models fit poorly. By computing the limit of the ratio of successive probabilities and the skewness coefficient, this paper compares the right-tail behavior and the zero probability of mixed Poisson and compound Poisson distributions, derives their tail ordering, and characterizes how tail length relates to the probability at zero, thereby offering guidance for constructing or selecting a model. The paper closes with a real data set illustrating how accounting for tail behavior when constructing or selecting a count distribution model improves the fit to the data.
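The ratio-of-successive-probabilities criterion described in this abstract can be illustrated with a minimal sketch (not the paper's actual computation; the distributions and parameter values below are illustrative assumptions), taking the negative binomial as a concrete gamma-mixed Poisson:

```python
from math import comb, exp, factorial

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return exp(-lam) * lam**k / factorial(k)

def negbin_pmf(k, r, p):
    """Negative binomial pmf (a gamma-mixed Poisson): P(X = k)."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

# Ratio of successive probabilities P(X = k+1) / P(X = k): it tends to 0
# for the Poisson but to the constant 1 - p for the negative binomial,
# so the mixed Poisson has the heavier right tail.
for k in (5, 20, 80):
    rp = poisson_pmf(k + 1, 3.0) / poisson_pmf(k, 3.0)
    rn = negbin_pmf(k + 1, 2, 0.4) / negbin_pmf(k, 2, 0.4)
    print(k, round(rp, 4), round(rn, 4))
```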
2.
Combining the statistical features of gold-price data — leptokurtosis and heavy tails, heteroskedasticity, and the leverage effect — with the dynamics captured by a Markov transition probability matrix, this paper proposes an improved grey Markov model. The model first performs a statistical analysis of the data, builds a corresponding probabilistic model, and uses it to fit the trend of the system. On the basis of the fitted series, a state transition probability matrix is constructed from the Markov chain dynamics, and a dynamic data-driven principle is used to forecast each future step. The model combines statistical methods with data-driven dynamics and overcomes the traditional grey Markov model's neglect of the data's intrinsic statistical regularities; empirical results show higher forecasting accuracy than the grey Markov model, indicating good practical value.
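A minimal sketch of the state-transition-matrix step described above, estimated by counting one-step transitions in a sequence of fitted-residual states (the state coding and data are invented for illustration, not taken from the paper):

```python
def transition_matrix(states, n_states):
    """Estimate a Markov transition matrix from an observed state
    sequence by counting one-step transitions and normalising rows."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        # fall back to a uniform row when a state was never visited
        matrix.append([c / total if total else 1.0 / n_states for c in row])
    return matrix

# Residual states of a fitted series (0 = below trend, 1 = near trend,
# 2 = above trend), purely illustrative data:
seq = [0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0]
P = transition_matrix(seq, 3)
```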
3.
《数学的实践与认识》2013,(20)
Pricing of automobile insurance products generally takes two factors into account: the policyholder's claim probability and the expected claim amount. The zero-adjusted inverse Gaussian regression model is a powerful tool for this problem, but the restriction it places on the response distribution limits its applicability. To address this, drawing on the ideas of the zero-adjusted inverse Gaussian regression model and of quantile regression, this paper proposes a zero-adjusted quantile regression model and fits it to real data. A comparison with the fit of the zero-adjusted inverse Gaussian regression model shows that the zero-adjusted quantile regression model can serve as a useful tool for studying claim amounts in automobile insurance.
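The zero-adjusted idea — a point mass at zero combined with a model for the positive claim amounts — can be sketched in toy form via the pinball (check) loss that underlies quantile regression (the data and the constant-quantile simplification are illustrative assumptions, not the paper's model):

```python
def pinball_loss(q, y, tau):
    """Check (pinball) loss of a candidate quantile q at level tau."""
    return sum((tau - (yi < q)) * (yi - q) for yi in y)

def sample_quantile(y, tau):
    """Minimise the pinball loss over the observed values; a minimiser
    is a sample tau-quantile."""
    return min(y, key=lambda q: pinball_loss(q, y, tau))

# Two-part ("zero-adjusted") view of claim data: a point mass at zero
# plus a right-skewed positive part (illustrative numbers).
claims = [0, 0, 0, 0, 120, 340, 560, 900, 2400, 8000]
p_zero = sum(c == 0 for c in claims) / len(claims)
positive = [c for c in claims if c > 0]
median_pos = sample_quantile(positive, 0.5)
```

In the paper the quantile of the positive part depends on rating covariates; here it is a constant for brevity.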
4.
When the Weibull model is applied to study new-product market penetration, three factors need to be considered: "never adopters", heterogeneity among individual consumers, and heterogeneity across consumer groups. This paper builds three extended Weibull models, one for each factor, and conducts an empirical study on panel data, finding that all three extensions significantly improve both data fit and prediction. The three factors are then integrated into a single model, yielding a new comprehensive Weibull model; empirical analysis shows that the new model fits and forecasts new-product market penetration data very well.
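A hedged sketch of a Weibull adoption curve with a "never adopter" ceiling, one of the three extensions mentioned (the functional form with ceiling c and all parameter values are illustrative assumptions, not the paper's specification):

```python
from math import exp

def adoption_share(t, c, lam, k):
    """Cumulative adoption share at time t: a Weibull CDF with scale lam
    and shape k, capped at a ceiling c < 1 of eventual adopters."""
    return c * (1 - exp(-((t / lam) ** k)))

# With c = 0.8, 20% of the market never adopts, however long we wait.
for t in (1.0, 5.0, 20.0):
    print(t, round(adoption_share(t, 0.8, 5.0, 1.5), 4))
```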
5.
Non-probability sampling has broad applications in the big-data era, but its statistical inference remains an open problem. To address it, this paper proposes model-based inference combined with quota sampling: first specify a superpopulation model in linear-regression form, then fit the model to the observed quota-sample data to estimate the unknown parameters, and finally use the model to predict the unobserved units. A case study shows that inference based on a superpopulation model is an effective route to statistical inference for non-probability samples and merits further research.
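The fit-then-predict workflow described above can be sketched with a one-covariate superpopulation model (all data and the single-covariate simplification are illustrative assumptions):

```python
def ols_fit(x, y):
    """Least-squares intercept and slope for a simple linear
    superpopulation model y = a + b * x + e."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Quota sample: observed covariates and responses (invented numbers).
x_obs = [1.0, 2.0, 3.0, 4.0]
y_obs = [2.1, 3.9, 6.2, 7.8]
a, b = ols_fit(x_obs, y_obs)

# Predict the unobserved units from their known covariates, then
# estimate the population total as observed plus predicted.
x_miss = [5.0, 6.0]
y_pred = [a + b * xi for xi in x_miss]
total_hat = sum(y_obs) + sum(y_pred)
```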
6.
7.
Missing data are ubiquitous in applications; they reduce efficiency and can bias parameter estimates. Under the assumption that covariates are missing at random (MAR), this paper estimates the parameters of a linear model by combining modal regression with inverse probability weighting. The method uses two propensity-score estimators — parametric logistic regression and nonparametric Nadaraya-Watson estimation — to construct the IPWM-L and IPWM-NW estimators, respectively. Simulations and a real-data analysis show that the modal regression model is more robust than mean regression, that the inverse-probability-weighted modal (IPWM) estimators fit better under missing data, and that the IPWM-NW estimator is more robust than IPWM-L.
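A toy sketch of the inverse-probability-weighting building block (applied to the mean rather than the mode, and with propensity scores taken as given rather than estimated by logistic regression or Nadaraya-Watson smoothing as in the paper):

```python
def ipw_mean(y, observed, propensity):
    """Horvitz-Thompson style inverse-probability-weighted mean: each
    observed response is reweighted by 1 / P(observed | x)."""
    n = len(y)
    return sum(yi / pi
               for yi, obs, pi in zip(y, observed, propensity) if obs) / n

# Illustrative data: responses, missingness indicators, and assumed-known
# propensity scores P(observed | x).
y = [1.0, 2.0, 3.0, 4.0]
obs = [1, 1, 0, 1]
prop = [0.8, 0.8, 0.5, 0.5]
print(ipw_mean(y, obs, prop))
```

Reweighting compensates for units that are more likely to be missing; with complete data and unit propensities it reduces to the plain mean.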
8.
A cyclic search method for determining initial parameter values of double-exponential curves    Total citations: 3 (self-citations: 3, citations by others: 0)
朱珉仁 《数学的实践与认识》2003,33(12):72-81
An algorithm, called the cyclic search method, is proposed for determining initial parameter values when fitting double-exponential curves with the Gauss-Newton method in the least-squares sense; it makes full use of the observed values. On this basis, a Qbasic program can be written that automatically fits any specified curve among 20 single- and double-exponential forms. The approach is verified successfully on several example models.
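A rough sketch of the underlying idea — searching over candidate exponents while solving for the amplitudes by linear least squares — follows (this is an assumed simplification of the cyclic search, written in Python rather than Qbasic; the grid and data are invented):

```python
from math import exp
from itertools import product

def fit_amplitudes(x, y, b1, b2):
    """For fixed exponents, the amplitudes of
    y = a1*exp(b1*x) + a2*exp(b2*x) follow from 2x2 linear least squares."""
    u = [exp(b1 * xi) for xi in x]
    v = [exp(b2 * xi) for xi in x]
    s_uu = sum(ui * ui for ui in u)
    s_uv = sum(ui * vi for ui, vi in zip(u, v))
    s_vv = sum(vi * vi for vi in v)
    s_uy = sum(ui * yi for ui, yi in zip(u, y))
    s_vy = sum(vi * yi for vi, yi in zip(v, y))
    det = s_uu * s_vv - s_uv * s_uv
    return ((s_uy * s_vv - s_vy * s_uv) / det,
            (s_vy * s_uu - s_uy * s_uv) / det)

def search_initial_values(x, y, grid):
    """Loop over a grid of exponent pairs, keep the pair with the
    smallest residual sum of squares: starting values for Gauss-Newton."""
    best = None
    for b1, b2 in product(grid, repeat=2):
        if b1 >= b2:          # skip symmetric duplicates / singular case
            continue
        a1, a2 = fit_amplitudes(x, y, b1, b2)
        rss = sum((yi - a1 * exp(b1 * xi) - a2 * exp(b2 * xi)) ** 2
                  for xi, yi in zip(x, y))
        if best is None or rss < best[0]:
            best = (rss, a1, a2, b1, b2)
    return best

# Synthetic data from y = exp(-2x) + 2*exp(-0.5x) (illustrative):
xs = [0.1 * i for i in range(20)]
ys = [exp(-2.0 * xi) + 2 * exp(-0.5 * xi) for xi in xs]
rss, a1, a2, b1, b2 = search_initial_values(xs, ys,
                                            [-2.5, -2.0, -1.0, -0.5, -0.1])
```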
9.
10.
In the absence of a gold standard, this paper investigates the reasonableness and feasibility of a set of standard photographs for grading skin pores, and evaluates the accuracy of physicians' diagnoses. Standard photographs with five levels of pore coarseness were prepared. Photographs of the nasal-wing pores of 128 female volunteers were taken, and five dermatologists of similar seniority independently graded the 128 photographs against the diagnostic criteria and standard photographs. The ratings were analyzed with a latent class model (LCM), fitting both a model in which the five physicians share the same diagnostic conditional probabilities and one in which they differ, and computing the conditional and posterior probabilities of diagnosis. The latent-variable analysis suggests that the diagnostic criteria were too fine-grained and the categories ambiguous; a model that collapses the original categories into three classes, guided by the conditional probabilities, fits the diagnostic data well. Objective and accurate diagnostic criteria that truly reflect and discriminate individual conditions are the foundation of diagnostic-test evaluation. Latent class models can effectively handle repeatability and agreement data from diagnostic studies without a gold standard.
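The posterior-probability computation of a latent class model under local independence can be sketched as follows (the two-class, two-rater setup and all probabilities are invented for illustration, not the study's five-rater data):

```python
def lcm_posterior(ratings, class_prior, cond_prob):
    """Posterior probability of each latent class given one subject's
    ratings from several raters, assuming raters are conditionally
    independent given the class (local independence)."""
    joint = []
    for c, prior in enumerate(class_prior):
        p = prior
        for rater, score in enumerate(ratings):
            p *= cond_prob[c][rater][score]
        joint.append(p)
    total = sum(joint)
    return [p / total for p in joint]

# Two latent classes, two raters, binary score (illustrative numbers):
# cond[class][rater][score] = P(score | class, rater)
cond = [
    [[0.9, 0.1], [0.8, 0.2]],   # class 0: raters mostly score 0
    [[0.2, 0.8], [0.3, 0.7]],   # class 1: raters mostly score 1
]
post = lcm_posterior([1, 1], [0.5, 0.5], cond)
```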
11.
12.
Generalized additive models for location, scale and shape define a flexible, semi-parametric class of regression models for analyzing insurance data in which the exponential family assumption for the response is relaxed. This approach allows the actuary to include risk factors not only in the mean but also in other key parameters governing the claiming behavior, like the degree of residual heterogeneity or the no-claim probability. In this broader setting, the Negative Binomial regression with cell-specific heterogeneity and the zero-inflated Poisson regression with cell-specific additional probability mass at zero are applied to model claim frequencies. New models for claim severities that can be applied either per claim or aggregated per year are also presented. Bayesian inference is based on efficient Markov chain Monte Carlo simulation techniques and allows for the simultaneous estimation of linear effects as well as of possible nonlinear effects, spatial variations and interactions between risk factors within the data set. To illustrate the relevance of this approach, a detailed case study is proposed based on the Belgian motor insurance portfolio studied in Denuit and Lang (2004).
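As a minimal building block for the zero-inflated Poisson component mentioned above (the link functions and coefficients are illustrative assumptions; the paper's Bayesian MCMC machinery is not reproduced):

```python
from math import exp, factorial

def zip_pmf(k, lam, pi0):
    """Zero-inflated Poisson: extra probability mass pi0 at zero on top
    of a Poisson(lam) component with weight 1 - pi0."""
    pois = exp(-lam) * lam**k / factorial(k)
    return pi0 * (k == 0) + (1 - pi0) * pois

# In the regression setting both parameters may depend on risk factors,
# e.g. log(lam) = x'beta and logit(pi0) = x'gamma (assumed link choices):
def zip_params(x, beta, gamma):
    eta_l = sum(b * xi for b, xi in zip(beta, x))
    eta_p = sum(g * xi for g, xi in zip(gamma, x))
    return exp(eta_l), 1 / (1 + exp(-eta_p))
```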
13.
14.
Binomial coefficients are used in many fields such as computational and applied mathematics, statistics and probability, and theoretical physics and chemistry. For accurate numerical results, the correct calculation of these coefficients is very important. We present some new recurrence relationships and numerical methods for evaluating binomial coefficients for negative integers. For this purpose, we compare the outputs of different programming languages in the negative-integer case, and we present two new algorithms for the computations.
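One standard way to evaluate binomial coefficients for a negative upper argument is via the falling factorial; the sketch below (not necessarily either of the paper's two algorithms) also checks that the Pascal recurrence and the negation identity continue to hold:

```python
from math import factorial

def gbinom(n, k):
    """Generalised binomial coefficient C(n, k) = n(n-1)...(n-k+1) / k!,
    valid for negative integer n as well (k a non-negative integer).
    The falling factorial of an integer is always divisible by k!,
    so integer division is exact."""
    if k < 0:
        return 0
    num = 1
    for i in range(k):
        num *= n - i
    return num // factorial(k)

# e.g. C(-1, k) alternates between +1 and -1:
print([gbinom(-1, k) for k in range(5)])
```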
15.
Maria Iannario, Advances in Data Analysis and Classification, 2012, 6(3): 163-184
In this paper, we propose preliminary estimators for the parameters of a mixture distribution introduced for the analysis of ordinal data, where the mixture components are given by a Combination of a discrete Uniform and a shifted Binomial distribution (CUB model). After reviewing some preliminary concepts related to the meaning of the parameters that characterize such models, we introduce estimators related, respectively, to the location and heterogeneity of the observed distributions, in order to accelerate the EM procedure for maximum likelihood estimation. A simulation experiment has been performed to investigate their main features and to confirm their usefulness. A check of the proposal on real case studies and some comments conclude the paper.
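A sketch of the CUB probability mass function, the mixture the preliminary estimators target (the parameter values are illustrative assumptions):

```python
from math import comb

def cub_pmf(r, m, pi_, xi):
    """CUB model on ratings r in {1, ..., m}: a mixture of a shifted
    Binomial(m-1, 1-xi) component with weight pi_ and a discrete
    Uniform component with weight 1 - pi_."""
    shifted_binom = comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
    return pi_ * shifted_binom + (1 - pi_) / m

# A 7-point rating scale; pi_ governs heterogeneity, xi the location.
probs = [cub_pmf(r, 7, 0.8, 0.3) for r in range(1, 8)]
```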
16.
Firms are increasingly looking to provide a satisfactory prediction of customer lifetime value (CLV), a determining metric to target future profitable customers and to optimize marketing resources. One of the major challenges associated with the measurement of CLV is the choice of the appropriate model for predicting customer value, because of the large number of models proposed in the literature. Earlier models to forecast CLV are relatively unsuccessful, whereas simple models often provide results which are equivalent or even better than sophisticated ones. To predict CLV, Rust et al. (2011) proposed a framework model that performs better than simple managerial heuristic models, but its implementation excludes cases where customer profit is negative and does not handle lost-for-good situations. In this paper, we propose a modified model based on the Markov chain model (MCM) that handles both negative and positive profits, hence offering greater flexibility by covering both always-a-share and lost-for-good situations. The proposed model is compared with the Pareto/Negative Binomial Distribution (Pareto/NBD), the Beta Geometric/Negative Binomial Distribution (BG/NBD), the MCM, and the Rust et al. (2011) models. Based on customer credit card transactions provided by a North African retail bank, an empirical study shows that the proposed model has better forecasting performance than competing models. Copyright © 2014 John Wiley & Sons, Ltd.
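The Markov-chain CLV computation with possibly negative per-state profits and an absorbing "lost for good" state can be sketched as follows (the chain, rewards, and discount factor are invented for illustration, not the bank's data or the paper's exact model):

```python
def clv(P, rewards, start, horizon, discount):
    """Expected discounted customer value over a finite horizon for a
    Markov chain with a per-state profit (which may be negative)."""
    n = len(P)
    dist = [1.0 if s == start else 0.0 for s in range(n)]
    value = 0.0
    for t in range(horizon):
        value += discount ** t * sum(d * r for d, r in zip(dist, rewards))
        # propagate the state distribution one step: dist' = dist * P
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return value

# Illustrative 3-state chain: active, dormant, lost (absorbing,
# "lost for good"); dormant customers generate a negative profit.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.0, 0.0, 1.0]]
rewards = [100.0, -10.0, 0.0]
v = clv(P, rewards, start=0, horizon=3, discount=0.9)
```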
17.
18.
A new algorithm, called the optimized binomial-expansion method, is proposed for the formula computing the kill probability of an anti-aircraft weapon system employing impact-fuzed fire. It is an improvement on the commonly used binomial-expansion method. The new algorithm retains the ease of implementation and computational speed of the binomial expansion, while to a certain extent avoiding the rounding errors that arise in computing the combination numbers C_N^k. Finally, several algorithms are compared on examples in terms of the accuracy of the computed kill probabilities.
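The contrast between the direct formula and the alternating binomial expansion, whose combination numbers C(N, k) drive the rounding error discussed above, can be sketched as follows (parameter values are illustrative; the paper's optimized variant is not reproduced):

```python
from math import comb

def kill_prob_direct(p, n):
    """Probability of at least one kill in n independent shots,
    each with single-shot kill probability p."""
    return 1 - (1 - p) ** n

def kill_prob_expansion(p, n):
    """Binomial expansion of the same quantity; the alternating terms
    with large C(n, k) are the source of the rounding error the
    optimized variant tries to reduce."""
    return sum((-1) ** (k + 1) * comb(n, k) * p ** k
               for k in range(1, n + 1))

d = kill_prob_direct(0.05, 40)
e = kill_prob_expansion(0.05, 40)
```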
19.
20.
Discrete time Markov chains with interval probabilities    Total citations: 1 (self-citations: 0, citations by others: 1)
Damjan Škulj, International Journal of Approximate Reasoning, 2009, 50(8): 1314-1329
The parameters of Markov chain models are often not known precisely. Instead of ignoring this problem, a better way to cope with it is to incorporate the imprecision into the models. This has become possible with the development of models of imprecise probabilities, such as the interval probability model. In this paper we discuss some modelling approaches which range from simple probability intervals to the general interval probability models and further to the models allowing completely general convex sets of probabilities. The basic idea is that precisely known initial distributions and transition matrices are replaced by imprecise ones, which effectively means that sets of possible candidates are considered. Consequently, sets of possible results are obtained and represented using similar imprecise probability models. We first set up the model and then show how to perform calculations of the distributions corresponding to the consecutive steps of a Markov chain. We present several approaches to such calculations and compare them with respect to the accuracy of the results. Next we consider a generalisation of the concept of regularity and study the convergence of regular imprecise Markov chains. We also give some numerical examples to compare different approaches to calculations of the sets of probabilities.
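A toy version of one step of an interval-probability Markov chain — bounding the next-step distribution row by row — might look like this (the interval matrix is invented, and the independent row-wise bound is a simplification of the model classes discussed in the paper):

```python
def step_bounds(lower, upper, dist):
    """One step of an imprecise Markov chain with interval transition
    probabilities: bounds on the next-step distribution, assuming each
    row is chosen independently within its intervals."""
    n = len(dist)
    lo, hi = [], []
    for j in range(n):
        lo_j = hi_j = 0.0
        for i in range(n):
            # tightest attainable p_ij consistent with row i summing to 1
            p_min = max(lower[i][j],
                        1 - sum(upper[i][k] for k in range(n) if k != j))
            p_max = min(upper[i][j],
                        1 - sum(lower[i][k] for k in range(n) if k != j))
            lo_j += dist[i] * p_min
            hi_j += dist[i] * p_max
        lo.append(lo_j)
        hi.append(hi_j)
    return lo, hi

# Illustrative 2-state interval transition matrix and a precise start:
L = [[0.6, 0.3], [0.2, 0.7]]
U = [[0.7, 0.4], [0.3, 0.8]]
lo, hi = step_bounds(L, U, [1.0, 0.0])
```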