Similar Literature
20 similar documents found.
1.
Parameter estimation for the geometric distribution and its application (Total citations: 1; self-citations: 0; other citations: 1)
Based on a single observation from a geometric distribution, a parameter estimation method is derived by exploiting the relationship between hypothesis testing and parameter estimation. The bias and the mean squared error of the estimator are computed, showing that the estimator is acceptable. Finally, an application of the method to a discrete reliability growth model is given.
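The abstract's test-based estimator from a single observation is not spelled out here; as a baseline, the sketch below simulates the bias and mean squared error of the textbook maximum likelihood estimator p̂ = 1/x̄ for the geometric parameter. The true p, sample size, and replication count are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, n, reps = 0.3, 50, 2000

biases, sq_errs = [], []
for _ in range(reps):
    x = rng.geometric(p_true, size=n)   # support {1, 2, ...}
    p_hat = 1.0 / x.mean()              # MLE of the geometric parameter
    biases.append(p_hat - p_true)
    sq_errs.append((p_hat - p_true) ** 2)

print(f"bias ≈ {np.mean(biases):.4f}, MSE ≈ {np.mean(sq_errs):.5f}")
```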

2.
Estimation of Value at Risk and analysis of its cycles (Total citations: 1; self-citations: 0; other citations: 1)
This paper proposes two methods for estimating Value at Risk (VaR); both first estimate the distribution of returns and then take the p-quantile of its left tail as the VaR estimate. The first method obtains the return distribution by kernel density estimation; the second computes the mode of the returns from the kernel estimate and fits a so-called generalized half-t distribution to the sample on the left of the mode. Using the Shanghai Composite Index as an example, the feasibility and accuracy of both methods are verified. Finally, the two estimation methods are used to obtain the dominant cycle of fluctuations in the VaR of the Shanghai Composite Index.
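A minimal sketch of the first method: estimate the return density with a Gaussian kernel, integrate numerically to a CDF, and read off the left-tail p-quantile as the VaR estimate. The t-distributed returns are a placeholder for real index data, and the generalized half-t fit of the second method is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Kernel-based VaR: estimate the return density, integrate to a CDF on a
# grid, and take the left-tail p-quantile.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=1000) * 0.01   # placeholder fat-tailed returns

kde = gaussian_kde(returns)
grid = np.linspace(returns.min() - 0.02, returns.max() + 0.02, 2000)
cdf = np.cumsum(kde(grid)) * (grid[1] - grid[0])   # crude numerical CDF

p = 0.05
var_estimate = grid[np.searchsorted(cdf, p)]       # left-tail p-quantile
print(f"estimated VaR at level {p:.0%}: {var_estimate:.4f}")
```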

3.
A study of the balance of member fee payments for the Shanghai elderly nursing mutual aid society (Total citations: 1; self-citations: 0; other citations: 1)
Elderly nursing care in Shanghai is becoming a very prominent social problem, and the establishment of an "elderly nursing mutual aid society" has been proposed to address it. This paper first introduces a frequency-based method for estimating the distribution function of a multi-stage survival model and approximates that distribution function by a parametric family. It then derives the balance conditions for the payment of membership fees to the society. Based on the estimated distribution, numerical results for the fee-payment balance conditions are given for several scenarios.
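A sketch of a balance (equivalence) condition of the kind described: with a hypothetical healthy-to-care-to-dead multi-stage model, the annual fee is set so that the expected present value of fees collected while healthy equals the expected present value of care benefits. All transition rates, the discount rate, and the benefit level are assumptions, not the paper's estimates.

```python
import numpy as np

v = 1 / 1.03                      # annual discount factor (assumed)
T = 30                            # horizon in years
p_hc, p_hd = 0.04, 0.02           # healthy -> care, healthy -> dead (hypothetical)
p_cd = 0.15                       # care -> dead (hypothetical)
benefit = 20000.0                 # annual care benefit (hypothetical)

# State occupancy probabilities over time, starting healthy.
ph, pc = np.zeros(T + 1), np.zeros(T + 1)
ph[0] = 1.0
for t in range(T):
    ph[t + 1] = ph[t] * (1 - p_hc - p_hd)
    pc[t + 1] = pc[t] * (1 - p_cd) + ph[t] * p_hc

disc = v ** np.arange(T + 1)
epv_annuity = (ph * disc).sum()            # EPV of 1 per year while healthy
epv_benefit = benefit * (pc * disc).sum()  # EPV of care benefits
fee = epv_benefit / epv_annuity            # balance (equivalence) condition
print(f"balanced annual fee: {fee:.0f}")
```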

4.
《数理统计与管理》2019,(5):836-848
This paper proposes a new fatigue life distribution, the two-parameter generalized Birnbaum-Saunders minimum distribution BSMin(α, β), and studies the shapes of its density and failure rate functions. Quantile and regression estimators of the two parameters under complete samples are given, and Monte Carlo comparisons show that the quantile estimator performs better; moment, maximum likelihood, and log-moment estimators of the two parameters are also discussed. The paper further notes that, after a log transformation and a Taylor expansion, the BSMin(α, β) distribution can be approximated by a two-parameter smallest extreme value distribution, which yields approximate interval estimates of the parameters whose accuracy is examined by Monte Carlo simulation. Finally, simulated data illustrate the proposed point and approximate interval estimation methods.
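The BSMin(α, β) distribution is not available in standard libraries, so the sketch below only illustrates the Monte Carlo estimator-comparison workflow on the ordinary Birnbaum-Saunders distribution (scipy's fatiguelife), whose median equals its scale: a quantile (median) estimator of the scale is compared with the maximum likelihood fit.

```python
import numpy as np
from scipy.stats import fatiguelife

# Monte Carlo comparison of a quantile-based estimator vs. MLE for the scale
# of the standard Birnbaum-Saunders distribution (median = scale beta).
rng = np.random.default_rng(2)
c_true, beta_true, n, reps = 0.5, 2.0, 100, 500

err_q, err_mle = [], []
for _ in range(reps):
    x = fatiguelife.rvs(c_true, scale=beta_true, size=n, random_state=rng)
    beta_q = np.median(x)                            # quantile (median) estimator
    c_mle, _, beta_mle = fatiguelife.fit(x, floc=0)  # MLE, location fixed at 0
    err_q.append((beta_q - beta_true) ** 2)
    err_mle.append((beta_mle - beta_true) ** 2)

print(f"MSE quantile: {np.mean(err_q):.4f}, MSE MLE: {np.mean(err_mle):.4f}")
```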

5.
汪浩 《应用概率统计》2003,19(3):267-276
Because sample data of daily or short-period log returns in financial markets mostly exhibit fat-tailed distributions, existing normal or lognormal models fail to varying degrees. To simulate such fat tails accurately and to improve investment risk estimation and financial management, this paper introduces a Monte Carlo simulation method that can be calibrated to actual financial market data. The method effectively replicates the fat-tailed distribution of daily log returns of financial asset prices. Combined with nonparametric estimation, the simulation also yields accurate estimates of high-risk values and of their confidence intervals.
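A minimal data-driven Monte Carlo in this spirit: resample observed log returns with replacement so that simulated samples inherit the empirical fat tails, then estimate a low quantile (the high-risk value) together with a nonparametric bootstrap confidence interval. The t-distributed sample stands in for real market returns.

```python
import numpy as np

# Resampling Monte Carlo: simulated samples inherit the empirical fat tails;
# the 1% quantile (VaR) gets a bootstrap percentile confidence interval.
rng = np.random.default_rng(3)
returns = rng.standard_t(df=3, size=1500) * 0.01   # placeholder fat-tailed data

B, p = 2000, 0.01
var_boot = []
for _ in range(B):
    sim = rng.choice(returns, size=returns.size, replace=True)
    var_boot.append(np.quantile(sim, p))

var_hat = np.quantile(returns, p)
lo, hi = np.percentile(var_boot, [2.5, 97.5])
print(f"VaR({p:.0%}) = {var_hat:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```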

6.
In many practical problems there are variables that cannot be observed directly; statisticians have proposed deconvolution and mixture distribution models to estimate the distribution of such a variable. This paper applies a bootstrap simulation approach to obtain an estimate of the distribution function and further constructs nonparametric bootstrap percentile intervals for it. Numerical experiments compare this approach with the distribution estimate obtained from the classical EM algorithm and with normal-approximation intervals; the results show that the bootstrap simulation approach is more accurate and numerically more satisfactory.
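A generic sketch of the bootstrap percentile interval for a distribution function value F(t), estimated here by the empirical CDF of a placeholder sample; the paper's deconvolution- and mixture-specific resampling is not reproduced.

```python
import numpy as np

# Nonparametric bootstrap percentile interval for F(t) via the empirical CDF.
rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=200)   # placeholder sample
t, B = 1.5, 2000

F_hat = np.mean(x <= t)
boot = [np.mean(rng.choice(x, size=x.size, replace=True) <= t) for _ in range(B)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"F({t}) ≈ {F_hat:.3f}, 95% percentile interval [{lo:.3f}, {hi:.3f}]")
```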

7.
Rao and Zhao (1992) proposed a random weighting method to approximate the asymptotic distribution of M-estimates in linear regression models. Earlier, Fang and Zhao (2002) extended this method to censored regression models with a random design matrix. In this paper we extend the result to censored regression models with a nonrandom design matrix and establish several large-sample properties of the random weighting method.
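Random weighting in the spirit of Rao and Zhao (1992): perturb the estimating objective with i.i.d. positive random weights and re-solve, using the spread of the re-solved estimates to approximate the sampling distribution. The sketch below applies the idea to plain weighted least squares on simulated data, not to the censored M-estimation treated in the paper.

```python
import numpy as np

# Random weighting: exponential weights (Dirichlet after normalization)
# perturb the least-squares objective; the draws approximate the estimator's
# sampling distribution.
rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

def wls(w):
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

beta_hat = wls(np.ones(n))
draws = np.array([wls(rng.exponential(size=n)) for _ in range(1000)])
print(f"slope {beta_hat[1]:.3f}, random-weighting SE {draws[:, 1].std():.3f}")
```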

8.
An isotonic regression analysis method for storage reliability test data (Total citations: 1; self-citations: 0; other citations: 1)
For a class of distributions, this paper proposes a non-iterative estimator of the distribution parameters in the setting of storage reliability tests, called the ISRDF estimator, which is consistent and asymptotically normal. The method is convenient and practical, and simulations show good accuracy for some distributions. Applied to the analysis of field data, it yields good results.
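The ISRDF estimator itself is not specified in the abstract; the sketch below shows the isotonic-regression ingredient on hypothetical storage data, using scikit-learn's pooled-adjacent-violators implementation to enforce that the fitted failure probability is nondecreasing in storage time.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Isotonic regression of observed failure proportions on storage time:
# reliability should not improve with storage, so the fitted failure
# probability is constrained to be nondecreasing.
t = np.array([1, 2, 3, 4, 5, 6], dtype=float)          # storage times (years)
fail = np.array([0.05, 0.04, 0.10, 0.09, 0.18, 0.22])  # raw failure proportions
n = np.array([40, 50, 45, 40, 50, 45], dtype=float)    # sample sizes as weights

iso = IsotonicRegression(increasing=True)
p_hat = iso.fit_transform(t, fail, sample_weight=n)    # pooled-adjacent-violators
print(dict(zip(t, np.round(p_hat, 3))))
```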

9.
Based on Bayesian estimation theory, this paper discusses order identification for MA models. Under a general prior distribution on the order and the parameters, a Bayesian criterion for estimating the order is given, and the estimation method is shown to be strongly consistent.
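The paper's Bayesian criterion is not reproduced here; as a stand-in, the sketch selects the MA order by BIC, a large-sample (Laplace) approximation to Bayesian model comparison, on a simulated MA(2) series, using statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# MA order identification via BIC as a proxy for a Bayesian criterion.
rng = np.random.default_rng(6)
e = rng.normal(size=600)
y = e[2:] + 0.6 * e[1:-1] - 0.3 * e[:-2]    # true MA(2) process

bics = {}
for q in range(0, 5):
    res = ARIMA(y, order=(0, 0, q)).fit()
    bics[q] = res.bic
q_hat = min(bics, key=bics.get)
print("selected MA order:", q_hat)
```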

10.
This paper considers estimation in partially linear models whose nonparametric component is contaminated with additive measurement error. Two estimation methods are proposed: the first is an integral moment estimator based on deconvolution, for which strong consistency is established; the second is a simulation-based estimator that avoids the integration required by the integral moment method. Finally, numerical results illustrate the performance of both estimators.
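A textbook deconvoluting-kernel sketch of the first idea: recover the density of X from contaminated observations W = X + ε with known normal error, using the sinc kernel whose Fourier transform is the indicator of [-1, 1]. Sample size, error scale, and bandwidth are assumptions; the paper's partially linear setting is not reproduced.

```python
import numpy as np
from scipy.integrate import trapezoid

# Deconvoluting kernel density estimate of X from W = X + eps, eps ~ N(0, s^2):
#   f_hat(x) = (1/(n*h)) * sum_j K((x - W_j)/h),
#   K(u) = (1/pi) * int_0^1 cos(t*u) * exp(s^2 * t^2 / (2*h^2)) dt.
rng = np.random.default_rng(10)
n, s, h = 500, 0.3, 0.45
X = rng.normal(0.0, 1.0, size=n)
W = X + rng.normal(0.0, s, size=n)

t = np.linspace(0.0, 1.0, 201)

def f_hat(x):
    u = (x - W)[:, None] / h                       # (n, 1) scaled arguments
    integrand = np.cos(t * u) * np.exp(s**2 * t**2 / (2 * h**2))
    K = trapezoid(integrand, t, axis=1) / np.pi    # deconvoluting kernel values
    return K.mean() / h

print(f"f_hat(0) = {f_hat(0.0):.4f}, true N(0,1) density {1/np.sqrt(2*np.pi):.4f}")
```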

11.
The problem of determining probability densities of positive random variables from empirical data is important in many fields, in particular in insurance and risk analysis. The method of maximum entropy has proven to be a powerful tool to determine probability densities from a few values of its Laplace transform. This is so even when the amount of data to compute numerically the Laplace transform is small. But in this case, the variability of the reconstruction due to the sample variability in the available data can lead to quite different results. It is the purpose of this note to quantify as much as possible the variability of the densities reconstructed by means of two maxentropic methods: the standard maximum entropy method and its extension to incorporate data with errors. The issues that we consider are of special interest for the advanced measurement approach in operational risk, which is based on loss data analysis to determine regulatory capital, as well as to determine the loss distribution of risks that occur with low frequency.
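A sketch of the standard maximum entropy method the note builds on: given a few Laplace-transform values μ_j = E[exp(-a_j X)], change variables to Y = exp(-X) on (0, 1) so the constraints become fractional moments, and solve the convex dual for the multipliers. The data here come from X ~ Exp(1), for which the exact solution has all multipliers zero, so the reconstruction can be checked.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Maxent from Laplace-transform data: with Y = exp(-X), the constraints
# E[exp(-a_j X)] = mu_j become E[Y**a_j] = mu_j, and the maxent density is
# g(y) = exp(-sum_j lam_j * y**a_j) / Z.
a = np.array([0.5, 1.0, 1.5])
mu = 1.0 / (1.0 + a)                       # Laplace transform of Exp(1) at a_j

def dual(lam):                             # convex dual: log Z(lam) + lam . mu
    z, _ = quad(lambda y: np.exp(-(lam * y ** a).sum()), 0.0, 1.0)
    return np.log(z) + lam @ mu

lam = minimize(dual, np.full(a.size, 0.1), method="Nelder-Mead").x
z, _ = quad(lambda y: np.exp(-(lam * y ** a).sum()), 0.0, 1.0)
f = lambda x: np.exp(-(lam * np.exp(-a * x)).sum() - x) / z   # density of X
print(f"f(1.0) = {f(1.0):.4f}  (Exp(1) density at 1: {np.exp(-1.0):.4f})")
```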

12.
In this paper we focus on computing the minimal relative entropy between the original probability measure and the set of equivalent martingale measures for a Lévy process; for this purpose the quasi-Monte Carlo method is used. The minimal-entropy measure has many useful properties: it has the smallest Kullback-Leibler distance to the original probability, and the exponential utility indifference price can be found by using the minimal relative entropy. Both the Monte Carlo and quasi-Monte Carlo methods are applied; in the quasi-Monte Carlo method, two widely used low-discrepancy sequences, the Halton sequence and the Sobol sequence, are employed. These methods are applied to exponential Lévy processes such as the variance gamma and CGMY processes, for which the minimal relative entropy is computed by Monte Carlo and quasi-Monte Carlo and the results are compared. The results show that quasi-Monte Carlo with the Sobol sequence performs better in terms of fast convergence and small error. Finally, the method is implemented for financial data by fitting the variance gamma model and estimating its parameters, and the corresponding minimal relative entropy is computed.
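A generic illustration of the Monte Carlo vs. quasi-Monte Carlo comparison, not the Lévy-process entropy computation itself: estimate the toy expectation E[exp(Z)] for standard normal Z (true value e^{1/2}) with plain Monte Carlo and with Halton and Sobol points mapped through the normal inverse CDF.

```python
import numpy as np
from scipy.stats import norm, qmc

# MC vs. QMC (Halton, Sobol) on E[exp(Z)], Z ~ N(0, 1); true value exp(0.5).
true = np.exp(0.5)
n = 2 ** 12
rng = np.random.default_rng(7)

mc = np.exp(rng.normal(size=n)).mean()
halton = np.exp(norm.ppf(qmc.Halton(d=1, seed=7).random(n))).mean()
sobol = np.exp(norm.ppf(qmc.Sobol(d=1, scramble=True, seed=7).random(n))).mean()

for name, est in [("MC", mc), ("Halton", halton), ("Sobol", sobol)]:
    print(f"{name:7s} error = {abs(est - true):.2e}")
```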

13.
In this work we present two different numerical methods to determine the probability of ultimate ruin as a function of the initial surplus. Both methods use moments obtained from the Pollaczek–Khinchine identity for the Laplace transform of the probability of ultimate ruin. One method uses fractional moments combined with the maximum entropy method and the other is a probabilistic approach that uses integer moments directly to approximate the density.
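A simulation baseline for the quantity both methods approximate: the Pollaczek–Khinchine identity represents the ruin probability as the tail of a geometric compound of equilibrium claim sizes. With exponential claims the equilibrium law is again exponential, and ψ(u) = ρ exp(-(1-ρ)u/μ) is available as an exact check; ρ, μ, and u below are placeholder values.

```python
import numpy as np

# Pollaczek-Khinchine Monte Carlo: psi(u) = P(L_1 + ... + L_N > u) with
# N ~ Geometric(rho) on {0, 1, ...} and L_i from the equilibrium claim-size
# law (again Exp(mu) for exponential claims).
rng = np.random.default_rng(8)
rho, mu, u, reps = 0.7, 1.0, 5.0, 50_000

N = rng.geometric(1.0 - rho, size=reps) - 1      # P(N = k) = (1 - rho) * rho**k
totals = np.array([rng.exponential(mu, size=k).sum() for k in N])
psi_mc = np.mean(totals > u)
psi_exact = rho * np.exp(-(1.0 - rho) * u / mu)
print(f"simulated psi({u}) = {psi_mc:.4f}, exact = {psi_exact:.4f}")
```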

14.
This paper describes methods for calculating the most likely values of link flows in networks with incomplete data. The objective is to present a thorough and rigorous treatment of maximum entropy flow estimation methods and to develop a methodological framework capable of handling different types of network problems. A multiple probability space conditional entropy approach is described for the general network problem. Results are presented and discussed for an example network intended for water supply.
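A minimal instance of maximum entropy flow estimation: maximize the Shannon entropy of link-flow shares subject to linear measurement constraints. The four-link network and the 60% measurement are hypothetical, and the paper's multiple-probability-space conditional formulation is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy link-flow shares under linear constraints: maximize
# -sum p_i log p_i subject to A p = b over the probability simplex.
A = np.array([[1.0, 1.0, 1.0, 1.0],    # shares sum to one
              [1.0, 1.0, 0.0, 0.0]])   # measured: links 1 and 2 carry 60%
b = np.array([1.0, 0.6])

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

res = minimize(neg_entropy, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda p: A @ p - b}])
print("maxent link shares:", np.round(res.x, 3))   # expect [0.3 0.3 0.2 0.2]
```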

15.
The combination of mathematical models and uncertainty measures can be applied in the area of data mining for diverse objectives with as final aim to support decision making. The maximum entropy function is an excellent measure of uncertainty when the information is represented by a mathematical model based on imprecise probabilities. In this paper, we present algorithms to obtain the maximum entropy value when the information available is represented by a new model based on imprecise probabilities: the nonparametric predictive inference model for multinomial data (NPI-M), which represents a type of entropy-linear program. To reduce the complexity of the model, we prove that the NPI-M lower and upper probabilities for any general event can be expressed as a combination of the lower and upper probabilities for the singleton events, and that this model cannot be associated with a closed polyhedral set of probabilities. An algorithm to obtain the maximum entropy probability distribution on the set associated with NPI-M is presented. We also consider a model which uses the closed and convex set of probability distributions generated by the NPI-M singleton probabilities, a closed polyhedral set. We call this model A-NPI-M. A-NPI-M can be seen as an approximation of NPI-M, this approximation being simpler to use because it is not necessary to consider the set of constraints associated with the exact model.
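A sketch in the spirit of A-NPI-M: maximize entropy over a closed polyhedral set defined by lower and upper bounds on the singleton probabilities (the exact NPI-M set, as the paper proves, is not polyhedral). The bounds below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy distribution over a credal set given by lower/upper
# bounds on singleton probabilities.
lower = np.array([0.1, 0.2, 0.0, 0.1])   # hypothetical bounds
upper = np.array([0.5, 0.6, 0.3, 0.4])

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

res = minimize(neg_entropy, x0=(lower + upper) / 2, method="SLSQP",
               bounds=list(zip(lower, upper)),
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print("maxent distribution:", np.round(res.x, 3))   # here the uniform is feasible
```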

16.
Increasingly, fuzzy partitions are being used in multivariate classification problems as an alternative to the crisp classification procedures commonly used. One such fuzzy partition, the grade of membership model, partitions individuals into fuzzy sets using multivariate categorical data. Although the statistical methods used to estimate fuzzy membership for this model are based on maximum likelihood methods, large sample properties of the estimation procedure are problematic for two reasons. First, the number of incidental parameters increases with the size of the sample. Second, estimated parameters fall on the boundary of the parameter space with non-zero probability. This paper examines the consistency of the likelihood approach when estimating the components of a particular probability model that gives rise to a fuzzy partition. The results of the consistency proof are used to determine the large sample distribution of the estimates. Common methods of classifying individuals based on multivariate observations attempt to place each individual into crisply defined sets. The fuzzy partition allows for individual to individual heterogeneity, beyond simply errors in measurement, by defining a set of pure type characteristics and determining each individual's distance from these pure types. Both the profiles of the pure types and the heterogeneity of the individuals must be estimated from data. These estimates empirically define the fuzzy partition. In the current paper, this data is assumed to be categorical data. Because of the large number of parameters to be estimated and the limitations of categorical data, one may be concerned about whether or not the fuzzy partition can be estimated consistently. This paper shows that if heterogeneity is measured with respect to a fixed number of moments of the grade of membership scores of each individual, the estimated fuzzy partition is consistent.

17.
Voltage sag caused by faults on an electric power transmission line is one of the most intractable power quality issues for both utility companies and customers. Faults occur randomly along transmission lines due to the combination of many uncertain factors. To predict and assess the annual expected sag frequency (ESF) deriving from the faults along lines, a stochastic-based method that employs the maximum entropy principle, namely the maximum entropy probability method (MEPM), is introduced in this paper. Moreover, various types of faults are considered systematically. With the fault line intervals and the sample moments taken into account, the discrete values of the distribution probability of fault locations along the transmission lines are estimated by means of the MEPM. For a given voltage sag magnitude corresponding to the voltage tolerance level of sensitive equipment at the tested bus, the ESF is calculated in view of the statistical fault rates of the interrelated transmission lines. The implementation and application to a classical five-bus system and the IEEE RTS-30 test system are presented. The simulations reveal that, compared with the four existing methods, the MEPM is accurate, flexible, and immune to round-off errors, and it can therefore be widely applied to various aspects of power systems.
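A compact sketch of the maximum entropy probability method step: estimate a discrete fault-location distribution over line intervals by maximizing entropy subject to sample-moment constraints. The interval midpoints and moment values below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Maxent estimate of a discrete fault-location distribution matching the
# first two sample moments of the normalized fault position.
x = np.linspace(0.05, 0.95, 10)          # midpoints of 10 equal line intervals
m1, m2 = 0.45, 0.28                      # assumed first two sample moments

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1},
        {"type": "eq", "fun": lambda p: p @ x - m1},
        {"type": "eq", "fun": lambda p: p @ x**2 - m2}]
res = minimize(neg_entropy, np.full(x.size, 0.1), method="SLSQP",
               bounds=[(0.0, 1.0)] * x.size, constraints=cons)
print("maxent interval probabilities:", np.round(res.x, 3))
```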

18.
The reliability of the Weibull distribution with homogeneous, heavily censored data is analyzed in this study. The universal model of heavily censored data and existing methods, including maximum likelihood, least-squares, E-Bayesian estimation, and hierarchical Bayesian methods, are introduced. An improved method is proposed based on Bayesian inference and the least-squares method. The method focuses on the Bayes estimates of the failure probabilities for all samples. A conjugate prior distribution of the failure probability is set, and an optimization model is developed that determines the hyper-parameters by maximizing the information entropy of the prior distribution. By integrating the likelihood function, the posterior distribution of the failure probability is then derived to yield the Bayes estimate of the failure probability. The estimates of the reliability parameters are obtained by fitting the distribution curve using the least-squares method. The four existing methods are compared with the proposed method in terms of applicability, precision, efficiency, robustness, and simplicity. Specifically, closed-form expressions for the E-Bayesian and hierarchical Bayesian estimates are derived and used. The comparisons demonstrate that the improved method is superior. Finally, three illustrative examples are presented to show the application of the proposed method.
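A sketch of the two-stage idea under stated assumptions: conjugate Beta posterior means for the failure probabilities at each inspection time (the paper derives its hyper-parameters by maximizing the prior's information entropy; fixed values are assumed here), followed by a least-squares fit of the linearized Weibull CDF to recover shape and scale.

```python
import numpy as np

# Stage 1: Bayes estimates of failure probabilities p_i with a Beta prior.
# Stage 2: least-squares fit of ln(-ln(1 - p)) = m ln t - m ln eta.
t = np.array([100.0, 200.0, 400.0, 800.0])   # inspection times (hypothetical)
r = np.array([0, 1, 1, 2])                   # failures observed by each time
n = np.array([10, 10, 10, 10])               # items on test (hypothetical)
a, b = 0.5, 2.0                              # assumed Beta hyper-parameters

p = (r + a) / (n + a + b)                    # posterior-mean failure probabilities
y = np.log(-np.log(1.0 - p))
X = np.column_stack([np.log(t), np.ones_like(t)])
(m, c), *_ = np.linalg.lstsq(X, y, rcond=None)
eta = np.exp(-c / m)
print(f"Weibull shape m = {m:.3f}, scale eta = {eta:.0f}")
```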

19.
20.
The maximum entropy mean-variance premium principle (Total citations: 1; self-citations: 0; other citations: 1)
Exploiting two roles of entropy in financial markets, measuring the investment risk of risky assets and inferring the probability distribution of assets, this paper captures the essence of uncertainty: the entropy value quantifies the uncertainty in the passage from a probability distribution to information. On this basis a new premium principle, the maximum entropy mean-variance premium principle, is established, which makes premium setting more reasonable.
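The abstract does not state the premium formula, so the sketch below only illustrates the ingredients: under mean and variance constraints the maximum entropy density is normal with entropy ½ ln(2πeσ²), and a hypothetical premium combines a mean-variance loading with that entropy term.

```python
import numpy as np

# Illustration only: maxent (normal) entropy under mean-variance constraints,
# combined with a hypothetical mean-variance loading into a premium.
losses = np.random.default_rng(9).gamma(shape=2.0, scale=500.0, size=5000)

mean, var = losses.mean(), losses.var()
entropy_maxent = 0.5 * np.log(2 * np.pi * np.e * var)   # maxent (normal) entropy
alpha, beta = 0.1, 10.0                                  # hypothetical loadings
premium = mean + alpha * np.sqrt(var) + beta * entropy_maxent
print(f"mean {mean:.1f}, premium {premium:.1f}")
```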
