Similar documents
20 similar documents found (search time: 640 ms)
1.
In this paper, we consider Bayesian inference and estimation of finite time ruin probabilities for the Sparre Andersen risk model. The dense family of Coxian distributions is considered for the approximation of both the inter‐claim time and claim size distributions. We illustrate that the Coxian model can be well fitted to real, long‐tailed claims data and that this compares well with the generalized Pareto model. The main advantage of using the Coxian model for inter‐claim times and claim sizes is that it is possible to compute finite time ruin probabilities making use of recent results from queueing theory. In practice, finite time ruin probabilities are much more useful than infinite time ruin probabilities as insurance companies are usually interested in predictions for short periods of future time and not just in the limit. We show how to obtain predictive distributions of these finite time ruin probabilities, which are more informative than simple point estimations and take account of model and parameter uncertainty. We illustrate the procedure with simulated data and the well‐known Danish fire loss data set. Copyright © 2009 John Wiley & Sons, Ltd.
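The paper computes finite time ruin probabilities analytically via queueing-theoretic results; purely as an illustration of the quantity being estimated, here is a crude Monte Carlo sketch for a Sparre Andersen surplus process. The gamma inter-claim times and Pareto claim sizes below are illustrative assumptions standing in for the fitted Coxian distributions, not the paper's model.

```python
import random

def finite_time_ruin_prob(u, c, horizon, n_sims=20000, seed=1):
    """Crude Monte Carlo estimate of the finite time ruin probability
    psi(u, T): the probability that the surplus u + c*t - S(t) drops
    below zero at some claim instant before time `horizon`.
    Illustrative assumptions: gamma(2) inter-claim times (mean 1) and
    Pareto claim sizes (tail index 2.5, mean 5/3)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_sims):
        t, claims = 0.0, 0.0
        while True:
            t += rng.gammavariate(2.0, 0.5)   # next inter-claim time
            if t > horizon:
                break                          # survived the horizon
            claims += rng.paretovariate(2.5)   # heavy-tailed claim size
            if u + c * t - claims < 0:
                ruined += 1                    # ruin at this claim instant
                break
    return ruined / n_sims
```

With a positive safety loading (premium rate above the expected claim rate), the estimate decreases in the initial capital u, which gives a quick sanity check on the simulator.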

2.
Discovering the preferences and the behaviour of consumers is a key challenge in marketing. Information about such topics can be gathered through surveys in which the respondents must assign a score to a number of items. A strategy based on different latent class models can be used to analyze such data and achieve this objective: it consists in identifying groups of consumers whose response patterns are similar and characterizing them in terms of preferences and covariates. The basic latent class model can be extended by including covariates to model differences in (1) latent class probabilities and (2) conditional probabilities. A strategy for fitting and choosing a suitable model among them is proposed taking into account identifiability issues, the identification of potential covariates and the checking of goodness-of-fit. The tools to perform this analysis are implemented in the R package covLCA available from CRAN. We illustrate and explain the application of this strategy using data about the preferences of Belgian households for supermarkets.
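As a rough illustration of the basic latent class machinery that the covLCA strategy builds on, here is a minimal EM fit for binary items. It is a sketch only: it omits the covariate extensions discussed above, and the random initialization and clipping constants are arbitrary choices.

```python
import numpy as np

def latent_class_em(X, K, n_iter=200, seed=0):
    """EM for a basic latent class model with binary items.
    X: (n, J) array of 0/1 responses.  Returns class probabilities
    pi (K,) and item-response probabilities theta (K, J).
    Sketch only: no covariates, unlike the covLCA extensions."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(K, 1.0 / K)
    theta = rng.uniform(0.3, 0.7, size=(K, J))   # arbitrary random start
    for _ in range(n_iter):
        # E-step: posterior class-membership probabilities per respondent
        logp = (np.log(pi)
                + X @ np.log(theta).T
                + (1 - X) @ np.log(1 - theta).T)
        logp -= logp.max(axis=1, keepdims=True)  # stabilize before exp
        post = np.exp(logp)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class and item-response probabilities
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)       # keep logs finite
    return pi, theta
```

On well-separated simulated classes the fitted item-response profiles recover the generating ones (up to label switching).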

3.
We present a novel stochastic model for claims reserving that allows us to combine claims payments and incurred losses information. The main idea is to combine two claims reserving models (Hertig’s (1985) model and Gogol’s (1993) model) leading to a log-normal paid-incurred chain (PIC) model. Using a Bayesian point of view for the parameter modelling we derive in this Bayesian PIC model the full predictive distribution of the outstanding loss liabilities. On the one hand, this allows for an analytical calculation of the claims reserves and the corresponding conditional mean square error of prediction. On the other hand, simulation algorithms provide any other statistics and risk measure on these claims reserves.

4.
A univariate polynomial over the real or the complex numbers is given approximately. We present a Bayesian method for the computation of the posterior probabilities of different multiplicity patterns. The method is based on interpreting the root computation problem as an inverse problem which is then treated in the Bayesian framework. The method can be used to select the most probable multiplicity pattern when the coefficients of a univariate polynomial are not known exactly. The method is illustrated by several numerical examples.

5.
In multivariate categorical data, models based on conditional independence assumptions, such as latent class models, offer efficient estimation of complex dependencies. However, Bayesian versions of latent structure models for categorical data typically do not appropriately handle impossible combinations of variables, also known as structural zeros. Allowing nonzero probability for impossible combinations results in inaccurate estimates of joint and conditional probabilities, even for feasible combinations. We present an approach for estimating posterior distributions in Bayesian latent structure models with potentially many structural zeros. The basic idea is to treat the observed data as a truncated sample from an augmented dataset, thereby allowing us to exploit the conditional independence assumptions for computational expediency. As part of the approach, we develop an algorithm for collapsing a large set of structural zero combinations into a much smaller set of disjoint marginal conditions, which speeds up computation. We apply the approach to sample from a semiparametric version of the latent class model with structural zeros in the context of a key issue faced by national statistical agencies seeking to disseminate confidential data to the public: estimating the number of records in a sample that are unique in the population on a set of publicly available categorical variables. The latent class model offers remarkably accurate estimates of population uniqueness, even in the presence of a large number of structural zeros.
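The truncation idea can be illustrated with a toy rejection sampler: draw records from a latent class model and discard any draw that matches a structural-zero pattern. This sketches only the feasibility constraint, not the authors' collapsing algorithm or their MCMC scheme; the wildcard pattern encoding is an assumption made for the example.

```python
import random

def sample_truncated(pi, theta, structural_zeros, n, seed=0):
    """Draw n feasible records from a latent class model for binary
    items, rejecting structural zeros.  A pattern uses None as a
    wildcard: e.g. (1, 1, None) forbids any record with item0 = 1 and
    item1 = 1, regardless of item2.  Illustrative sketch only."""
    rng = random.Random(seed)
    K, J = len(pi), len(theta[0])

    def forbidden(x):
        # x matches a pattern if it agrees on every non-wildcard slot
        return any(all(p is None or p == xi for p, xi in zip(pat, x))
                   for pat in structural_zeros)

    out = []
    while len(out) < n:
        k = rng.choices(range(K), weights=pi)[0]      # draw latent class
        x = tuple(int(rng.random() < theta[k][j]) for j in range(J))
        if not forbidden(x):                          # keep only feasible draws
            out.append(x)
    return out
```

Every returned record respects the impossible-combination constraints by construction, which is exactly the property the augmented-data view is designed to preserve.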

6.
The Bradley–Terry model is a popular approach to describe probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications including animal behavior, chess ranking, and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature including models with ties, multiple comparisons, group comparisons, and random graphs. From a computational point of view, Hunter has proposed efficient iterative minorization-maximization (MM) algorithms to perform maximum likelihood estimation for these generalized Bradley–Terry models whereas Bayesian inference is typically performed using Markov chain Monte Carlo algorithms based on tailored Metropolis–Hastings proposals. We show here that these MM algorithms can be reinterpreted as special instances of expectation-maximization algorithms associated with suitable sets of latent variables and propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
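For the basic Bradley–Terry model, Hunter's MM iteration has a simple closed form: each strength p_i is updated to the item's win count divided by a sum of pairwise comparison counts weighted by 1/(p_i + p_j). A minimal sketch (assuming the comparison graph is connected and every item has at least one win, so the MLE exists):

```python
def bradley_terry_mm(wins, n_iter=1000, tol=1e-10):
    """MM updates for Bradley-Terry strengths.
    wins[i][j] is the number of times item i beat item j.
    Returns strengths normalized to sum to 1."""
    m = len(wins)
    p = [1.0 / m] * m
    for _ in range(n_iter):
        new = []
        for i in range(m):
            W_i = sum(wins[i])                       # total wins of item i
            # pairwise terms: n_ij comparisons weighted by 1/(p_i + p_j)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(m) if j != i)
            new.append(W_i / denom)
        s = sum(new)
        new = [x / s for x in new]                   # normalize each sweep
        if max(abs(a - b) for a, b in zip(new, p)) < tol:
            p = new
            break
        p = new
    return p
```

Each sweep increases the likelihood, and on a small win matrix the item with the best record comes out with the largest strength.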

7.
Functional magnetic resonance imaging (fMRI) is the most popular technique in human brain mapping, with statistical parametric mapping (SPM) as a classical benchmark tool for detecting brain activity. Smith and Fahrmeir (J Am Stat Assoc 102(478):417–431, 2007) proposed a competing method based on a spatial Bayesian variable selection in voxelwise linear regressions, with an Ising prior for latent activation indicators. In this article, we alternatively link activation probabilities to two types of latent Gaussian Markov random fields (GMRFs) via a probit model. Statistical inference in resulting high-dimensional hierarchical models is based on Markov chain Monte Carlo approaches, providing posterior estimates of activation probabilities and enhancing formation of activation clusters. Three algorithms are proposed depending on GMRF type and update scheme. An application to an active acoustic oddball experiment and a simulation study show a substantial increase in sensitivity compared to existing fMRI activation detection methods like classical SPM and the Ising model.

8.
The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilizing a Markov chain Monte Carlo (MCMC) technique, ABC and a Bayesian bootstrap procedure was developed in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting in which we make no parametric assumptions, meaning we cannot evaluate the likelihood point-wise or in this case simulate directly from the likelihood model. The use of a bootstrap procedure allows us to generate samples from the intractable likelihood without the requirement of distributional assumptions; this is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then obtain predictive Bayesian capital estimates, the value at risk (VaR) and the mean square error of prediction (MSEP). The latter is compared with the classical bootstrap and credibility methods.
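The plain ABC rejection idea underlying this methodology can be sketched generically: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands closest to the observed one. The paper's actual scheme combines ABC with MCMC and a Bayesian bootstrap, which this toy version does not attempt to reproduce.

```python
import random

def abc_rejection(observed_summary, prior_sample, simulate, summary,
                  n_draws=5000, quantile=0.01, seed=0):
    """Plain ABC rejection sampler.
    prior_sample(rng) -> theta; simulate(theta, rng) -> dataset;
    summary(dataset) -> scalar summary statistic.
    Keeps the `quantile` fraction of draws whose simulated summary is
    closest to `observed_summary` (a common way to set the tolerance)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        dist = abs(summary(simulate(theta, rng)) - observed_summary)
        draws.append((dist, theta))
    draws.sort(key=lambda t: t[0])            # closest simulations first
    keep = max(1, int(n_draws * quantile))
    return [theta for _, theta in draws[:keep]]
```

On a toy problem (inferring the mean of normal data from its sample mean, with a uniform prior) the accepted draws concentrate around the observed summary, as expected.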

9.
Precise large deviations and finite-time ruin probabilities for a compound binomial process risk model
马学敏, 胡亦钧. 《数学学报》 2008, 51(6): 1119-113
We discuss a compound binomial process risk model based on customer arrivals. In this model, the claim sizes are assumed to form a sequence of i.i.d. heavy-tailed random variables, and different insurance policies may have different probabilities of generating actual claims. Under the condition that the claim sizes belong to the class ERV, we obtain precise large deviations for the loss process; furthermore, we obtain a Lundberg-type limit result for the finite-time ruin probabilities.

10.
We compare different selection criteria to choose the number of latent states of a multivariate latent Markov model for longitudinal data. This model is based on an underlying Markov chain to represent the evolution of a latent characteristic of a group of individuals over time. Then, the response variables observed at different occasions are assumed to be conditionally independent given this chain. Maximum likelihood estimation of the model is carried out through an Expectation–Maximization algorithm based on forward–backward recursions which are well known in the hidden Markov literature for time series. The selection criteria we consider are based on penalized versions of the maximum log-likelihood or on the posterior probabilities of belonging to each latent state, that is, the conditional probability of the latent state given the observed data. Among the latter criteria, we propose an appropriate entropy measure tailored for the latent Markov models. We show the results of a Monte Carlo simulation study aimed at comparing the performance of the above state selection criteria on the basis of a wide set of model specifications.
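A common entropy-based ingredient in such criteria is the classification entropy of the posterior state memberships: it is zero when every unit is assigned to a single state with certainty and grows as the memberships become diffuse. A generic sketch of that quantity (not the paper's tailored measure for latent Markov models):

```python
import math

def classification_entropy(post):
    """Classification entropy of posterior membership probabilities.
    `post` is a list of per-unit posterior probability vectors (each
    summing to 1).  Returns 0 for perfectly crisp assignments; larger
    values indicate less separated latent states."""
    return -sum(p * math.log(p) for row in post for p in row if p > 0)
```

For two units assigned with certainty the entropy is 0; a single maximally uncertain two-state unit contributes log 2.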

11.
We discuss Bayesian modelling of the delay between dates of diagnosis and settlement of claims in Critical Illness Insurance using a Burr distribution. The data are supplied by the UK Continuous Mortality Investigation and relate to claims settled in the years 1999-2005. There are unrecorded dates of diagnosis and settlement, and these are included in the analysis as missing values using their posterior predictive distribution and MCMC methodology. The possible factors affecting the delay (age, sex, smoker status, policy type, benefit amount, etc.) are investigated under a Bayesian approach. A 3-parameter Burr generalised-linear-type model is fitted, where the covariates are linked to the mean of the distribution. Variable selection using Bayesian methodology to obtain the best model with different prior distribution setups for the parameters is also applied. In particular, Gibbs variable selection methods are considered, and results are confirmed using exact marginal likelihood findings and related Laplace approximations. For comparison purposes, a lognormal model is also considered.

12.
This article presents a Bayesian kernel-based clustering method. The associated model arises as an embedding of the Potts density for class membership probabilities into an extended Bayesian model for joint data and class membership probabilities. The method may be seen as a principled extension of the super-paramagnetic clustering. The model depends on two parameters: the temperature and the kernel bandwidth. The clustering is obtained from the posterior marginal adjacency membership probabilities and does not depend on any particular value of the parameters. We elicit an informative prior based on random graph theory and kernel density estimation. A stochastic population Monte Carlo algorithm, based on parallel runs of the Wang–Landau algorithm, is developed to estimate the posterior adjacency membership probabilities and the parameter posterior. The convergence of the algorithm is also established. The method is applied to the whole human proteome to uncover human genes that share common evolutionary history. Our experiments and application show that good clustering results are obtained at many different values of the temperature and bandwidth parameters. Hence, instead of focusing on finding adequate values of the parameters, we advocate making clustering inference based on the study of the distribution of the posterior adjacency membership probabilities. This article has online supplementary material.

13.
We consider Bayesian shrinkage predictions for the Normal regression problem under the frequentist Kullback-Leibler risk function. Firstly, we consider the multivariate Normal model with an unknown mean and a known covariance. While the unknown mean is fixed, the covariance of future samples can be different from that of training samples. We show that the Bayesian predictive distribution based on the uniform prior is dominated by that based on a class of priors if the prior distributions for the covariance and future covariance matrices are rotation invariant. Then, we consider a class of priors for the mean parameters depending on the future covariance matrix. With such a prior, we can construct a Bayesian predictive distribution dominating that based on the uniform prior. Lastly, applying this result to the prediction of response variables in the Normal linear regression model, we show that there exists a Bayesian predictive distribution dominating that based on the uniform prior. Minimaxity of these Bayesian predictions follows from these results.

14.
Bayesian predictive densities for the 2-dimensional Wishart model are investigated. The performance of predictive densities is evaluated by using the Kullback–Leibler divergence. It is proved that a Bayesian predictive density based on a prior exactly dominates that based on the Jeffreys prior if the prior density satisfies some geometric conditions. An orthogonally invariant prior is introduced and it is shown that the Bayesian predictive density based on the prior is minimax and dominates that based on the right invariant prior with respect to the triangular group.

15.
A flexible Bayesian periodic autoregressive model is used for the prediction of quarterly and monthly time series data. Since the unknown autoregressive lag order, the occurrence of structural breaks, and their respective break dates are common sources of uncertainty, these are treated as random quantities within the Bayesian framework. Since no analytical expressions for the corresponding marginal posterior predictive distributions exist, a Markov Chain Monte Carlo approach based on data augmentation is proposed. Its performance is demonstrated in Monte Carlo experiments. Instead of resorting to a model selection approach by choosing a particular candidate model for prediction, a forecasting approach based on Bayesian model averaging is used in order to account for model uncertainty and to improve forecasting accuracy. For model diagnosis a Bayesian sign test is introduced to compare the predictive accuracy of different forecasting models in terms of statistical significance. In an empirical application using monthly unemployment rates of Germany, the performance of the model averaging prediction approach is compared to those of model-selected Bayesian and classical (non)periodic time series models.
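The model averaging step itself is simple once each candidate model's marginal likelihood is available: weight each model's forecast by its normalized marginal likelihood (under equal model priors). A minimal sketch of just that averaging step, not of the paper's MCMC machinery:

```python
import math

def bma_forecast(forecasts, log_marg_liks):
    """Bayesian model averaging of point forecasts.
    forecasts[k] is model k's point forecast; log_marg_liks[k] its log
    marginal likelihood.  Equal model priors are assumed.  Returns the
    averaged forecast and the posterior model weights."""
    m = max(log_marg_liks)                    # subtract max for stability
    w = [math.exp(l - m) for l in log_marg_liks]
    s = sum(w)
    w = [x / s for x in w]                    # posterior model probabilities
    return sum(wi * fi for wi, fi in zip(w, forecasts)), w
```

For two models with marginal likelihoods in ratio 3:1, the weights are 0.75 and 0.25, and the averaged forecast interpolates accordingly.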

16.
We propose a risk model based on customer arrivals in which claims occur according to a Poisson process and different insurance policies may have different probabilities of generating actual claims. Assuming that the potential claim sizes form a sequence of negatively dependent, identically distributed heavy-tailed random variables whose common distribution belongs to the intersection class L ∩ D, we obtain an asymptotic expression for the finite-time ruin probability.

17.
In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis–Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982 to 2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplementary materials available online.

18.
To predict future claims, it is well-known that the most recent claims are more predictive than older ones. However, classic panel data models for claim counts, such as the multivariate negative binomial distribution, do not put any time weight on past claims. More complex models can be used to capture this property, but they often need numerical procedures to estimate parameters. When we want to add dependence between different claim count types, the task becomes even more difficult to handle. In this paper, we propose a bivariate dynamic model for claim counts, where past claims experience of a given claim type is used to better predict the other type of claims. This new bivariate dynamic distribution for claim counts is based on random effects that come from the Sarmanov family of multivariate distributions. To obtain a proper dynamic distribution based on this kind of bivariate prior, an approximation of the posterior distribution of the random effects is proposed. The resulting model can be seen as an extension of the dynamic heterogeneity model described in Bolancé et al. (2007). We apply this model to two samples of data from a major Canadian insurance company, where we show that the proposed model is one of the best models to adjust the data. We also show that the proposed model allows more flexibility in computing predictive premiums because closed-form expressions can be easily derived for the predictive distribution, the moments and the predictive moments.

19.
Non-linear structural equation models are widely used to analyze the relationships among outcomes and latent variables in modern educational, medical, social and psychological studies. However, the existing theories and methods for analyzing non-linear structural equation models focus on the assumption of outcomes from an exponential family, and hence cannot be used to analyze non-exponential family outcomes. In this paper, a Bayesian method is developed to analyze non-linear structural equation models in which the manifest variables are from a reproductive dispersion model (RDM) and/or may be missing with a non-ignorable missingness mechanism. The non-ignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm combining the Gibbs sampler and the Metropolis–Hastings algorithm is used to obtain the joint Bayesian estimates of structural parameters, latent variables and parameters in the logistic regression model, and a procedure calculating the Bayes factor for model comparison is given via path sampling. A goodness-of-fit statistic is proposed to assess the plausibility of the posited model. A simulation study and a real example are presented to illustrate the newly developed Bayesian methodologies.

20.
In this paper, we propose a customer-based individual risk model, in which potential claims by customers are described as i.i.d. heavy-tailed random variables, but different insurance policy holders are allowed to have different probabilities to make actual claims. Some precise large deviation results for the prospective-loss process are derived under certain mild assumptions, with emphasis on the case of heavy-tailed distribution function class ERV (extended regular variation). Lundberg type limiting results on the finite time ruin probabilities are also investigated.
