Similar Literature
20 similar documents found (search time: 31 ms)
1.
Our article considers the class of recently developed stochastic models that combine claims payments and incurred losses information into a coherent reserving methodology. In particular, we develop a family of hierarchical Bayesian paid–incurred claims (PIC) models, combining the claims reserving models of Hertig (1985) and Gogol (1993). In the process we extend the independent log-normal model of Merz and Wüthrich (2010) by incorporating different dependence structures using a data-augmented mixture copula PIC model. In this way the paper makes two main contributions: first, we develop an extended class of model structures for the paid–incurred chain ladder models, giving a precise Bayesian formulation of such models; second, we explain how to develop advanced Markov chain Monte Carlo sampling algorithms to make inference under these copula-dependence PIC models accurately and efficiently, making such models accessible to practitioners who wish to explore their suitability in practice. In this regard the focus of the paper should be considered in two parts: first, the development of Bayesian PIC models for general dependence structures, with specialised properties relating to conjugacy and consistency of tail dependence across development years and accident years and between payment and incurred loss data; second, the development of techniques that allow general audiences to work efficiently with such Bayesian models to make inference. The aim of the paper is not to illustrate that the PIC framework is a good class of models for a particular data set; the suitability of such PIC-type models is discussed in Merz and Wüthrich (2010) and Happ and Wüthrich (2013). Instead we develop generalised model classes for the PIC family of Bayesian models and, in addition, provide advanced Monte Carlo methods for inference that practitioners may use with confidence in their efficiency and validity.

2.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation–maximization (SAEM) algorithm to maximize the likelihood function, with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state, based on sequential Monte Carlo methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented: first a nonlinear Gaussian state-space model, then a state-space model whose dynamics are expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and with Gibbs sampling and particle marginal methods for Bayesian inference, are presented.
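The bootstrap filter used as a baseline above can be sketched in a few lines. The following is a minimal toy implementation; the AR(1) dynamics, noise scales, and particle count are illustrative assumptions, not the models from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_filter(y, n_particles, init, propagate, log_obs_dens):
    """Minimal bootstrap particle filter: propagate particles through the
    state dynamics, weight by the observation density, then resample.
    Returns a log-likelihood estimate (up to a constant if the density
    is unnormalised) and the filtered state means."""
    x = init(n_particles)
    loglik, means = 0.0, []
    for yt in y:
        x = propagate(x)
        logw = log_obs_dens(yt, x)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())   # running likelihood estimate
        w /= w.sum()
        means.append(np.sum(w * x))      # filtered mean E[x_t | y_1:t]
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return loglik, np.array(means)

# Toy linear-Gaussian SSM: x_t = 0.8 x_{t-1} + N(0,1), y_t = x_t + N(0, 0.5^2)
T, sd_obs = 100, 0.5
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.8 * x_true[t - 1] + rng.normal()
y = x_true + rng.normal(0.0, sd_obs, size=T)

ll, means = bootstrap_filter(
    y, 1000,
    init=lambda n: rng.normal(0.0, 1.0, size=n),
    propagate=lambda x: 0.8 * x + rng.normal(size=x.shape),
    log_obs_dens=lambda yt, x: -0.5 * ((yt - x) / sd_obs) ** 2,
)
print(np.corrcoef(means, x_true)[0, 1])  # filtered means track the states
```

SAEM-ABC replaces the exact weighting step with an ABC kernel on the observations; the structure of the filter loop is otherwise the same.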

3.
In earlier articles, we developed an automated methodology for using cubic splines with tail linear constraints to model the logarithm of a univariate density function. This methodology was subsequently modified so that the knots were determined by stepwise addition-deletion and the remaining coefficients were determined by maximum likelihood estimation. An alternative approach, referred to as the free knot spline procedure, is to use the maximum likelihood method to estimate the knot locations as well as the remaining coefficients. This article compares various approaches to constructing confidence intervals for logspline density estimates, for both the stepwise procedure and the free knot procedure. It is concluded that a variation of the bootstrap, in which only a limited number of bootstrap simulations are used to estimate standard errors that are combined with standard normal quantiles, seems to perform the best, especially when coverages and computing time are both taken into account.
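The "limited bootstrap plus normal quantiles" interval described above can be illustrated generically. This is a minimal sketch under assumed toy inputs (the estimator, replicate count, and data are illustrative, not the logspline procedure itself):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

def bootstrap_normal_ci(data, estimator, n_boot=50, level=0.95):
    """Use a small number of bootstrap replicates only to estimate the
    standard error, then combine it with standard normal quantiles."""
    est = estimator(data)
    reps = np.array([
        estimator(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    se = reps.std(ddof=1)
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for 95%
    return est - z * se, est + z * se

x = rng.normal(10.0, 2.0, size=200)
lo, hi = bootstrap_normal_ci(x, np.median)
print(lo, hi)  # interval centred on the sample median
```

Because only the standard error is bootstrapped (rather than the full percentile distribution), a few dozen replicates suffice, which is the computing-time advantage the abstract highlights.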

4.
We discuss Bayesian modelling of the delay between dates of diagnosis and settlement of claims in Critical Illness Insurance using a Burr distribution. The data are supplied by the UK Continuous Mortality Investigation and relate to claims settled in the years 1999-2005. Some dates of diagnosis and settlement are unrecorded; these are included in the analysis as missing values using their posterior predictive distribution and MCMC methodology. The possible factors affecting the delay (age, sex, smoker status, policy type, benefit amount, etc.) are investigated under a Bayesian approach. A 3-parameter Burr generalised-linear-type model is fitted, where the covariates are linked to the mean of the distribution. Bayesian variable selection with different prior distribution setups for the parameters is also applied to obtain the best model. In particular, Gibbs variable selection methods are considered, and results are confirmed using exact marginal likelihood findings and related Laplace approximations. For comparison purposes, a lognormal model is also considered.

5.
In this paper, we consider the additive loss reserving (ALR) method in a Bayesian and credibility setup. The classical ALR method is a simple claims reserving method that combines prior information (e.g., premiums, number of contracts, market statistics) with claims observations. The Bayesian setup that we present additionally allows the information from a single runoff portfolio (e.g., company-specific data) to be combined with the information from a collective (e.g., industry-wide data) to analyze the claims reserves and the claims development result. However, in insurance practice, the associated distributions are usually unknown. Therefore, we do not follow the full Bayesian approach but apply credibility theory, which is distribution free and requires only the first and second moments. That is, we derive the credibility predictors that minimize the expected squared loss within the class of affine-linear functions of the observations (i.e., we derive linear Bayesian predictors). Using non-informative priors, we link our credibility-based ALR method to the classical ALR method and show that the credibility predictors coincide with the predictors in the classical ALR method. Moreover, we quantify the 1-year risk and the full reserve risk by means of the conditional mean square error of prediction. Copyright © 2011 John Wiley & Sons, Ltd.

6.
A computationally simple approach to inference in state space models is proposed, using approximate Bayesian computation (ABC). ABC avoids evaluation of an intractable likelihood by matching summary statistics for the observed data with statistics computed from data simulated from the true process, based on parameter draws from the prior. Draws that produce a “match” between observed and simulated summaries are retained, and used to estimate the inaccessible posterior. With no reduction to a low-dimensional set of sufficient statistics being possible in the state space setting, we define the summaries as the maximum of an auxiliary likelihood function, and thereby exploit the asymptotic sufficiency of this estimator for the auxiliary parameter vector. We derive conditions under which this approach—including a computationally efficient version based on the auxiliary score—achieves Bayesian consistency. To reduce the well-documented inaccuracy of ABC in multiparameter settings, we propose the separate treatment of each parameter dimension using an integrated likelihood technique. Three stochastic volatility models for which exact Bayesian inference is either computationally challenging, or infeasible, are used for illustration. We demonstrate that our approach compares favorably against an extensive set of approximate and exact comparators. An empirical illustration completes the article. Supplementary materials for this article are available online.
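The basic rejection form of ABC described in the first sentences above can be sketched as follows. The Gaussian toy model, prior, and quantile-based tolerance are illustrative assumptions; in the paper the summaries come from an auxiliary likelihood estimator rather than the sample mean used here:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(y_obs, summary, simulate, prior_draw,
                  n_draws=20000, quantile=0.01):
    """Plain ABC rejection: keep the prior draws whose simulated summary
    lands closest to the observed summary."""
    s_obs = summary(y_obs)
    thetas = np.array([prior_draw() for _ in range(n_draws)])
    dists = np.array([np.linalg.norm(summary(simulate(t)) - s_obs)
                      for t in thetas])
    eps = np.quantile(dists, quantile)   # adaptive tolerance
    return thetas[dists <= eps]

# Toy example: infer the mean of a Gaussian with known sd = 1.
y_obs = rng.normal(2.0, 1.0, size=100)
summary = lambda y: np.array([y.mean()])     # sufficient in this toy case
simulate = lambda t: rng.normal(t, 1.0, size=100)
prior_draw = lambda: rng.normal(0.0, 5.0)    # N(0, 25) prior

post = abc_rejection(y_obs, summary, simulate, prior_draw)
print(post.mean())   # accepted draws concentrate near the true mean, 2.0
```

The retained draws approximate the posterior; the quality of the approximation depends entirely on how informative the chosen summaries are, which motivates the paper's use of asymptotically sufficient auxiliary estimators.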

7.
This paper introduces the “piggyback bootstrap.” Like the weighted bootstrap, this bootstrap procedure can be used to generate random draws that approximate the joint sampling distribution of the parametric and nonparametric maximum likelihood estimators in various semiparametric models, but the dimension of the maximization problem for each bootstrapped likelihood is smaller. This reduction results in significant computational savings in comparison to the weighted bootstrap. The procedure can be stated quite simply. First obtain a valid random draw for the parametric component of the model. Then take the draw for the nonparametric component to be the maximizer of the weighted bootstrap likelihood with the parametric component fixed at the parametric draw. We prove the procedure is valid for a class of semiparametric models that includes frailty regression models arising in survival analysis and biased sampling models that have application to vaccine efficacy trials. Bootstrap confidence sets from the piggyback and weighted bootstraps are compared for biased sampling data from simulated vaccine efficacy trials.

8.
A bootstrap procedure useful in latent class, or more general mixture models, has been developed to determine the number of latent classes or components sufficient to account for systematic group differences in the data. The procedure is illustrated in the context of a multidimensional scaling latent class model, CLASCAL. Also presented is a bootstrap technique for determining standard errors for estimates of the stimulus co-ordinates, parameters of the multidimensional scaling model. Real and artificial data are presented. The bootstrap procedure for selecting a sufficient number of classes appears to select the correct number of latent classes at both low and high error levels. At higher error levels it outperforms Hope's (J. Roy. Statist. Soc. Ser. B 1968; 30: 582) procedure. The bootstrap procedures to estimate parameter stability appear to correctly reproduce Monte Carlo results. Copyright © 2002 John Wiley & Sons, Ltd.

9.
Having the ability to work with complex models can be highly beneficial. However, complex models often have intractable likelihoods, so methods that involve evaluation of the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood that uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a viable alternative when the model of interest is complex. One of these methods is called the synthetic likelihood (SL), which uses a multivariate normal approximation of the distribution of a set of summary statistics. This article explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison to a competitor known as approximate Bayesian computation (ABC) and its sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose to use an alternative SL that uses an unbiased estimator of the SL, when the summary statistics have a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this article. Supplemental materials are available online. Computer code for implementing the methods on all examples is available at https://github.com/cdrovandi/Bayesian-Synthetic-Likelihood.
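The core synthetic likelihood computation, a multivariate normal fitted to simulated summaries, is compact enough to sketch. The toy Gaussian model and the (mean, variance) summaries below are assumptions for illustration; BSL embeds this estimate inside an MCMC sampler:

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_loglik(theta, s_obs, simulate_summaries, n_sim=200):
    """Estimate the synthetic log-likelihood at theta: fit a multivariate
    normal to summaries simulated at theta, then evaluate the observed
    summary under that fitted normal."""
    S = np.array([simulate_summaries(theta) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    cov = np.atleast_2d(np.cov(S, rowvar=False))
    d = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(d) * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(cov, d))

# Toy model: y ~ N(theta, 1); summaries = (sample mean, sample variance)
def sim_summ(theta, n=100):
    y = rng.normal(theta, 1.0, size=n)
    return np.array([y.mean(), y.var(ddof=1)])

s_obs = sim_summ(3.0)   # "observed" summaries generated at theta = 3
print(synthetic_loglik(3.0, s_obs, sim_summ))   # high at the truth
print(synthetic_loglik(0.0, s_obs, sim_summ))   # much lower far from it
```

Because the plug-in normal density is a biased estimator of the exact SL, the article's unbiased variant matters for the pseudo-marginal interpretation it develops.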

10.
ABC (approximate Bayesian computation) is a general approach for dealing with models with an intractable likelihood. In this work, we derive ABC algorithms based on QMC (quasi-Monte Carlo) sequences. We show that the resulting ABC estimates have a lower variance than their Monte Carlo counterparts. We also develop QMC variants of sequential ABC algorithms, which progressively adapt the proposal distribution and the acceptance threshold. We illustrate our QMC approach through several examples taken from the ABC literature.
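The QMC idea can be sketched by replacing pseudo-random prior draws with a low-discrepancy sequence pushed through the prior quantile function. The Halton sequence, uniform prior, and Gaussian toy model below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(4)

def halton(n, base=2):
    """First n points of the 1-D Halton low-discrepancy sequence in [0, 1)."""
    seq = np.empty(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# QMC-ABC: spread the prior draws evenly instead of sampling them at random.
n_draws = 4000
thetas = 10.0 * halton(n_draws)          # Uniform(0, 10) prior via inverse CDF
y_obs = rng.normal(7.0, 1.0, size=100)   # data generated at theta = 7
dists = np.array([abs(rng.normal(t, 1.0, size=100).mean() - y_obs.mean())
                  for t in thetas])
eps = np.quantile(dists, 0.01)           # keep the closest 1%
post = thetas[dists <= eps]
print(post.mean())   # concentrates near 7
```

The variance reduction comes from the even coverage of the parameter space: no region of the prior is over- or under-sampled by chance, which is exactly the property the paper exploits.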

11.
12.
In this report, we study the reliability of a system exposed to stress, with the standard two-sided power distribution assumed to be the underlying distribution. We obtain exact expressions and estimates for the reliability by applying different methods, such as maximum likelihood and Bayesian estimation. Three scenarios are examined: known and equal reflection parameters; known but unequal reflection parameters; and all parameters unknown. Practical guidance and recommendations for estimator design are provided. For large samples, we recommend the parametric bootstrap method with the maximum likelihood estimate. Real data sets are used to illustrate the performance of the estimators.

13.
Parametric methods for assessing individual bioequivalence (IBE) typically rely on the hypothesis that the pharmacokinetic (PK) responses are normal, whereas the bootstrap provides a nonparametric alternative. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between a test drug and a reference drug using the bootstrap and Bayesian bootstrap methods. We study the power of the bootstrap test procedures and of the parametric test procedures in FDA (2001). We find that the Bayesian bootstrap method performs best.

14.
Approximate Bayesian computation (ABC) is typically used when the likelihood is either unavailable or intractable but where data can be simulated under different parameter settings using a forward model. Despite the recent interest in ABC, high-dimensional data and costly simulations still remain a bottleneck in some applications. There is also no consensus as to how to best assess the performance of such methods without knowing the true posterior. We show how a nonparametric conditional density estimation (CDE) framework, which we refer to as ABC–CDE, helps address three nontrivial challenges in ABC: (i) how to efficiently estimate the posterior distribution with limited simulations and different types of data, (ii) how to tune and compare the performance of ABC and related methods in estimating the posterior itself, rather than just certain properties of the density, and (iii) how to efficiently choose among a large set of summary statistics based on a CDE surrogate loss. We provide theoretical and empirical evidence that justifies ABC–CDE procedures that directly estimate and assess the posterior based on an initial ABC sample, and we describe settings where standard ABC and regression-based approaches are inadequate. Supplemental materials for this article are available online.

15.
Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test therefore estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.

16.
This paper establishes a Bayesian model and discusses the estimation of the parameters of a Pareto claim-size distribution, obtaining the maximum likelihood, Bayes, and credibility estimators of the risk parameter and proving the strong consistency of these estimators. The estimators are compared in the sense of mean squared error, and the mean squared errors are verified through numerical simulation; the results show that the Bayes estimator has a smaller mean squared error than the other estimators. Finally, estimators of the structural parameters are given, and the asymptotic optimality of the empirical Bayes estimator and the empirical Bayes credibility estimator is proved.

17.
This article proposes a non-Bayesian procedure for constructing inferential distributions which can be used for producing predictive distributions. The concepts of bootstrap and of predictive likelihood are employed for developing the method. A result is obtained for exponential families, and the Bayesian prediction based on Jeffreys' prior is newly justified.

18.
Variational Bayes (VB) is rapidly becoming a popular tool for Bayesian inference in statistical modeling. However, the existing VB algorithms are restricted to cases where the likelihood is tractable, which precludes their use in many interesting situations such as in state-space models and in approximate Bayesian computation (ABC), where application of VB methods was previously impossible. This article extends the scope of application of VB to cases where the likelihood is intractable, but can be estimated unbiasedly. The proposed VB method therefore makes it possible to carry out Bayesian inference in many statistical applications, including state-space models and ABC. The method is generic in the sense that it can be applied to almost all statistical models without requiring too much model-based derivation, which is a drawback of many existing VB algorithms. We also show how the proposed method can be used to obtain highly accurate VB approximations of marginal posterior distributions. Supplementary material for this article is available online.

19.
In this paper, we continue the development of the ideas introduced in England and Verrall (2001) by suggesting the use of a reparameterized version of the generalized linear model (GLM) which is frequently used in stochastic claims reserving. This model enables us to smooth the origin, development and calendar year parameters in a similar way as is often done in practice, but still keep the GLM structure. Specifically, we use this model structure in order to obtain reserve estimates and to systemize the model selection procedure that arises in the smoothing process. Moreover, we provide a bootstrap procedure to achieve a full predictive distribution.

20.
The defining feature of the Cape Cod algorithm in the current literature is its assumption of a constant loss ratio over accident periods. This is a highly simplifying assumption relative to the chain ladder model which, in effect, allows the loss ratio to vary freely over accident periods. Much of the literature on Cape Cod reserving treats it as essentially just an algorithm and does not posit a parametric model supporting the algorithm. There are one or two exceptions to this. The present paper extends them by introducing a couple of more general stochastic models under which maximum likelihood estimation yields parameter estimates closely resembling those of the classical Cape Cod algorithm. For one of these models, these estimators are shown to be minimum variance unbiased, and so are superior to the conventional estimators, which rely on the chain ladder. A Bayesian Cape Cod model is also introduced, and a MAP estimator is calculated. A numerical example is included.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号