Similar Documents
1.
Quantile regression estimates the relationship between a quantile of the response distribution and the regression parameters, and was originally developed for linear models with continuous responses. In this paper, we apply a Bayesian quantile regression model to Malaysian motor insurance claim count data to study how changes in the estimates of the regression parameters (the rating factors) affect the magnitude of the response variable (the claim count). We also compare the results of quantile regression models under the Bayesian and frequentist approaches, and the results of mean regression models under the Poisson and negative binomial distributions. The comparison of the Poisson and Bayesian quantile regression models shows that the effects of vehicle year decrease as the quantile increases, suggesting that this rating factor carries lower risk for higher claim counts. In contrast, the effects of vehicle type increase as the quantile increases, indicating that this rating factor carries higher risk for higher claim counts.
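The objective underlying both the Bayesian and frequentist quantile fits above is the pinball (check) loss. As a minimal illustration (our own sketch, not code from the paper; function names are made up), the value minimizing the total pinball loss over a sample recovers the empirical τ-quantile:

```python
import numpy as np

def pinball_loss(u, tau):
    # check (pinball) loss: tau*u for u >= 0, (tau - 1)*u for u < 0
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def sample_quantile_by_loss(y, tau):
    # total pinball loss is piecewise linear in the candidate value c,
    # so a minimizer always lies at an observed data point: search there
    losses = [pinball_loss(y - c, tau).sum() for c in y]
    return y[int(np.argmin(losses))]

rng = np.random.default_rng(0)
y = rng.normal(size=1001)
```

For a sample of this size the loss-minimizing value and the order-statistic quantile agree up to the spacing between adjacent data points.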

2.
Multilevel (hierarchical) modeling is a generalization of linear and generalized linear modeling in which the regression coefficients are themselves given a model, whose parameters are also estimated from data. A multilevel model typically fails to fit well via the EM algorithm once one of the level error variances tends to infinity (as for a Cauchy distribution). This paper proposes a composite multilevel quantile regression model that combines the nested structure of multilevel data with the robustness of composite quantile regression, greatly improving the efficiency and precision of estimation. The new approach, which is based on Gauss-Seidel iteration and takes full advantage of composite quantile regression and multilevel models, still works well when the error variance tends to infinity. We show that even when the error distribution is normal, the MSE of the composite multilevel quantile regression estimator is nearly equal to that of mean regression; when the error distribution is not normal, our method enjoys substantial advantages in estimation efficiency.

3.
Regression models are popular tools for rate-making in the framework of heterogeneous insurance portfolios; however, the traditional regression methods have some disadvantages, particularly their sensitivity to model assumptions, which significantly restricts their area of application. This paper is devoted to an alternative approach, quantile regression, which is free of some of these disadvantages. The quality of the estimators under the described approach is approximately the same as, and sometimes better than, that of the traditional regression methods. Moreover, quantile regression is consistent with the idea of using distribution quantiles for rate-making. The paper provides a detailed comparison between the approaches and gives a practical example of using the new methodology.

4.
In actuarial practice, regression models serve as a popular statistical tool for analyzing insurance data and tariff ratemaking. In this paper, we consider classical credibility models that can be embedded within the framework of mixed linear models. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators are commonly pursued. However, it is well known that these standard and fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to the occurrence of outliers. To obtain better estimators for premium calculation and prediction of future claims, various robust methods have been successfully adapted to credibility theory in the actuarial literature. The objective of this work is to develop robust and efficient methods for credibility when heavy-tailed claims are approximately log-location-scale distributed. To accomplish that, we first show how to express additive credibility models such as the Bühlmann-Straub and Hachemeister ones as mixed linear models with symmetric or asymmetric errors. Then, we adjust adaptively truncated likelihood methods and compute highly robust credibility estimates for the ordinary but heavy-tailed claims part. Finally, we treat the identified excess claims separately and find robust-efficient credibility premiums. Practical performance of this approach is examined, via simulations, under several contaminating scenarios. A widely studied real-data set from workers’ compensation insurance is used to illustrate the functional capabilities of the new robust credibility estimators.

5.
Recent developments in actuarial literature have shown that credibility theory can serve as an effective tool in mortality modelling, leading to accurate forecasts when applied to single or multi-population datasets. This paper presents a crossed classification credibility formulation of the Lee–Carter method particularly designed for multi-population mortality modelling. Differently from the standard Lee–Carter methodology, where the time index is assumed to follow an appropriate time series process, herein, future mortality dynamics are estimated under a crossed classification credibility framework, which models the interactions between various risk factors (e.g. genders, countries). The forecasting performances between the proposed model, the original Lee–Carter model and two multi-population Lee–Carter extensions are compared for both genders of multiple countries. Numerical results indicate that the proposed model produces more accurate forecasts than the Lee–Carter type models, as evaluated by the mean absolute percentage forecast error measure. Applications with life insurance and annuity products are also provided and a stochastic version of the proposed model is presented.

6.
In classical credibility theory, the claim amounts of different insurance policies in a portfolio are assumed to be independent and the premiums are derived under a squared-error loss function. Wen et al. (2012) studied credibility models with a dependence structure among the claim amounts of a single insurance policy, called time-changeable effects, and obtained the credibility formula. In this paper, we generalize this time-changeable-effects dependence structure to the claim amounts of different insurance policies in a portfolio. Credibility premiums are obtained for the Bühlmann and Bühlmann-Straub credibility models with this dependence structure under a balanced loss function.
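For reference, the classical independent-policy, squared-error setting that this paper generalizes admits a short empirical-Bayes computation. The sketch below (our own illustration with made-up names; it assumes a nonzero within-policy variance and a balanced design, not the paper's dependent model) computes Bühlmann credibility premiums Z·X̄ᵢ + (1−Z)·m̂ with Z = na/(na + s²):

```python
import numpy as np

def buhlmann_premium(claims):
    # classical Buhlmann credibility premiums: independent policies,
    # squared-error loss, one row of yearly claims per policy
    claims = np.asarray(claims, dtype=float)
    n = claims.shape[1]
    ind_means = claims.mean(axis=1)            # individual means X-bar_i
    grand_mean = ind_means.mean()              # estimate of the collective mean m
    s2 = claims.var(axis=1, ddof=1).mean()     # within-policy variance E[s^2(theta)]
    a = max(ind_means.var(ddof=1) - s2 / n, 0.0)  # between-policy variance, floored at 0
    z = n * a / (n * a + s2)                   # credibility factor Z = n/(n + s2/a)
    return z * ind_means + (1 - z) * grand_mean

portfolio = np.array([[1.0, 2.0, 3.0],
                      [2.0, 3.0, 4.0],
                      [10.0, 11.0, 12.0]])
premiums = buhlmann_premium(portfolio)
# each premium is pulled from the individual mean toward the grand mean
```

Each policy's premium lies between its own mean and the portfolio mean, and the premiums sum to the total of the individual means.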

7.
Longitudinal data are often analyzed with normal mixed-effects models; however, violations of the normality assumption can lead to invalid inference. Compared with traditional mean regression, quantile regression gives a complete characterization of the conditional distribution of the response and yields robust estimates under non-normal error distributions. This paper considers quantile regression estimation and variable selection for longitudinal mixed-effects models with right-censored responses. First, an inverse censoring probability weighting method is used to obtain the parameter estimates. Second, variable selection is carried out by combining inverse censoring probability weighting with the LASSO penalty. Monte Carlo simulations show that the proposed method outperforms estimation that simply discards the censored observations. Finally, an AIDS data set is analyzed to illustrate the practical performance of the proposed method.

8.
Applications of regression models for binary responses are very common, and models specific to these problems are widely used. Quantile regression for binary response data has recently attracted attention, and regularized quantile regression methods have been proposed for high-dimensional problems. When the predictors have a natural group structure, such as categorical predictors converted into dummy variables, a group lasso penalty is used in regularized methods. In this paper, we present a Bayesian Gibbs sampling procedure to estimate the parameters of a quantile regression model under a group lasso penalty for classification problems with a binary response. Simulated and real data show good performance of the proposed method in comparison to mean-based approaches and to quantile-based approaches that do not exploit the group structure of the predictors.

9.
In this paper we discuss the asymptotic properties of quantile processes under random censoring. In contrast to most work in this area, we prove weak convergence of an appropriately standardized quantile process under the assumption that the quantile regression model is linear only in the region where the process is investigated. Additionally, we discuss properties of the quantile process in sparse regression models, including quantile processes obtained from the Lasso and adaptive Lasso. The results are derived by a combination of modern empirical process theory, classical martingale methods and a recent result of Kato (2009).

10.
Support vector machines (SVMs) belong to the class of modern statistical machine learning techniques and can be described as M-estimators with a Hilbert norm regularization term for functions. SVMs are consistent and robust for classification and regression purposes if based on a Lipschitz continuous loss and a bounded continuous kernel with a dense reproducing kernel Hilbert space. For regression, one of the conditions used is that the output variable Y has a finite first absolute moment. This assumption, however, excludes heavy-tailed distributions. Recently, the applicability of SVMs was enlarged to these distributions by considering shifted loss functions. In this review paper, we briefly describe the approach of SVMs based on shifted loss functions and list some properties of such SVMs. Then, we prove that SVMs based on a bounded continuous kernel and on a convex and Lipschitz continuous, but not necessarily differentiable, shifted loss function have a bounded Bouligand influence function for all distributions, even for heavy-tailed distributions including extreme value distributions and Cauchy distributions. SVMs are thus robust in this sense. Our result covers the important loss functions $\epsilon$-insensitive for regression and pinball for quantile regression, which were not covered by earlier results on the influence function. We demonstrate the usefulness of SVMs even for heavy-tailed distributions by applying SVMs to a simulated data set with Cauchy errors and to a data set of large fire insurance claims of Copenhagen Re.
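The bounded-influence result above hinges on the loss being Lipschitz continuous. As a small numerical illustration (our own sketch, not the paper's code; parameter defaults are arbitrary), the two losses named in the abstract both have Lipschitz constant at most 1:

```python
import numpy as np

def eps_insensitive(r, eps=0.1):
    # epsilon-insensitive loss (SVM regression)
    return np.maximum(np.abs(r) - eps, 0.0)

def pinball(r, tau=0.3):
    # pinball loss (quantile regression at level tau)
    return np.where(r >= 0, tau * r, (tau - 1) * r)

# both losses are convex and Lipschitz with constant at most 1:
# finite-difference slopes on a grid never exceed 1 in absolute value
grid = np.linspace(-5.0, 5.0, 10_001)
max_slope_eps = np.abs(np.diff(eps_insensitive(grid)) / np.diff(grid)).max()
max_slope_pin = np.abs(np.diff(pinball(grid)) / np.diff(grid)).max()
```

The pinball slope is bounded by max(τ, 1−τ) ≤ 1, and the ε-insensitive slope by 1; this is the property that keeps the influence function bounded.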

11.
The insurance industry is known to have high operating expenses within the financial services sector. Insurers, investors and regulators are interested in models to understand the behavior of expenses. However, current practice ignores skewness, occasional negative values, and their temporal dependence. Addressing these three features, this paper develops a longitudinal model of insurance company expenses that can be used for prediction, to identify unusual behavior, and to measure firm efficiency. Specifically, we use a three-parameter asymmetric Laplace density for the marginal distribution of insurers’ expenses in each year. Copula functions are employed to accommodate their temporal dependence. As a function of explanatory variables, the location parameter allows us to analyze an insurer’s expenses in light of the firm’s characteristics. Our model can be interpreted as a longitudinal quantile regression. The analysis is performed using property-casualty insurance company data from the National Association of Insurance Commissioners for the years 2001-2006. Due to the long-tailed nature of insurers’ expenses, two alternative approaches are proposed to improve the performance of the longitudinal quantile regression model: rescaling and transformation. Predictive densities are derived that allow one to compare the predictions for individual insurers in a hold-out sample. Both predictive models are shown to be reasonable, with the rescaling method outperforming the transformation method. Compared with standard longitudinal models, our model is shown to be superior in identifying insurers’ unusual behavior.

12.
In this paper, a Bayesian hierarchical model for variable selection and estimation in the context of binary quantile regression is proposed. Existing approaches to variable selection in a binary classification context are sensitive to outliers, heteroskedasticity or other anomalies of the latent response. The method proposed in this study overcomes these problems in an attractive and straightforward way. A Laplace likelihood and Laplace priors for the regression parameters are proposed and estimated with Markov chain Monte Carlo. The resulting model is equivalent to the frequentist lasso procedure. A conceptual result is that, by doing so, the binary regression model is moved from a Gaussian to a full Laplacian framework without sacrificing much computational efficiency. In addition, an efficient Gibbs sampler to estimate the model parameters is proposed that is superior to the Metropolis algorithm used in previous studies on Bayesian binary quantile regression. Both the simulation studies and the real data analysis indicate that the proposed method performs well in comparison to the other methods. Moreover, as the base model is binary quantile regression, the approach provides much more detailed insight into the effects of the covariates. An implementation of the lasso procedure for binary quantile regression models is available in the R package bayesQR.

14.
It is very common in AIDS studies that the response variable (e.g., HIV viral load) may be subject to censoring due to detection limits, while covariates (e.g., CD4 cell count) may be measured with error. Failure to take censoring in the response variable and measurement error in covariates into account may introduce substantial bias in estimation and thus lead to unreliable inference. Moreover, with non-normal and/or heteroskedastic data, traditional mean regression models are not robust in the tails. In this case, one may find it attractive to estimate extreme causal relationships of covariates to the dependent variable, which can be suitably studied in the quantile regression framework. In this paper, we consider joint inference for mixed-effects quantile regression models with right-censored responses and errors in covariates. The inverse censoring probability weighting method and the orthogonal regression method are combined to reduce the biases of estimation caused by censored data and measurement errors. Under some regularity conditions, the consistency and asymptotic normality of the estimators are derived. Finally, simulation studies are carried out, and an HIV/AIDS clinical data set is analyzed to illustrate the proposed procedure.

15.
In classical credibility theory, the credibility premium is derived on the basis of the pure premium. However, insurance practice demands that the premium be charged under some adaptable premium principle that serves the purposes of the insurance business. In this paper, balanced credibility models are built under the exponential principle, and the credibility estimator of the individual exponential premium is derived. The result is also extended to the case of multiple contracts, and estimation of the structure parameters is investigated. Finally, simulations are presented to show the consistency of the credibility estimator and its differences from the classical one.

16.
To better forecast the Value-at-Risk of aggregate insurance losses, Heras et al. (2018) propose a two-step inference procedure using logistic regression and quantile regression, without providing detailed model assumptions, deriving the related asymptotic properties, or quantifying the inference uncertainty. This paper argues that applying quantile regression at the second step is unnecessary when the explanatory variables are categorical. After describing the explicit model assumptions, we propose another two-step procedure using logistic regression and the sample quantile. We also provide an efficient empirical likelihood method to quantify the uncertainty. A simulation study confirms the good finite sample performance of the proposed method.

17.

This paper considers estimation and inference in semiparametric quantile regression models when the response variable is subject to random censoring. The paper considers both the cases of independent and dependent censoring and proposes three iterative estimators based on inverse probability weighting, where the weights are estimated from the censoring distribution using the Kaplan–Meier, a fully parametric and the conditional Kaplan–Meier estimators. The paper proposes a computationally simple resampling technique that can be used to approximate the finite sample distribution of the parametric estimator. The paper also considers inference for both the parametric and nonparametric components of the quantile regression model. Monte Carlo simulations show that the proposed estimators and test statistics have good finite sample properties. Finally, the paper contains a real data application, which illustrates the usefulness of the proposed methods.


18.
This paper considers linear models for two samples with incomplete data, where the covariate observations are not missing but the response observations are missing at random (MAR). We use an inverse probability weighted imputation method to fill in the missing responses, yielding "complete" samples for the two linear regression models, and on this basis construct a log empirical likelihood ratio statistic for the difference between the response quantiles. In contrast to previous results, we prove that under certain conditions this statistic has a standard chi-squared limiting distribution, which reduces the error introduced by estimating the weight coefficients and yields more accurate empirical likelihood confidence intervals for the quantile difference.
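The inverse probability weighting idea used above can be seen in miniature on simulated data (our own toy setup, not the paper's: the true observation probabilities are assumed known here rather than estimated, and we compare means instead of quantiles for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.uniform(size=n)                  # fully observed covariate
y = 2.0 * x + rng.normal(size=n)         # response, true mean E[Y] = 1
pi = 0.3 + 0.6 * x                       # P(Y observed | x): depends only on x (MAR)
obs = rng.uniform(size=n) < pi

# the complete-case mean is biased upward: large-x (hence large-y)
# rows are observed more often
naive = y[obs].mean()

# weighting each observed response by 1/pi(x) undoes the selection
ipw = np.sum(y[obs] / pi[obs]) / np.sum(1.0 / pi[obs])
```

The weighted estimate recovers the true mean of 1, while the unweighted complete-case mean overshoots by roughly 0.17 in this setup.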

19.
Zero-inflated generalized Poisson regression models and insurance ratemaking
In classification ratemaking for insurance products, one of the most commonly used models is the Poisson regression model. When the loss data exhibit zero inflation, a zero-inflated Poisson regression model is usually adopted. In the zero-inflated Poisson regression model, the proportion parameter φ for structural zeros is generally assumed to be a constant unaffected by the rating factors, which may depart from reality. Assuming instead that the parameter φ is related to the rating factors, we build a zero-inflated generalized Poisson regression model, the ZIGP(τ) regression model. Fitting the model to a set of automobile insurance loss data shows that the ZIGP(τ) regression model can effectively improve the fit to the actual data and thereby make the resulting rates more reasonable.

20.
This paper develops a Bayesian approach to analyzing quantile regression models for censored dynamic panel data. We employ a likelihood-based approach using the asymmetric Laplace error distribution and introduce lagged observed responses into the conditional quantile function. We also deal with the initial conditions problem in dynamic panel data models by introducing correlated random effects into the model. For posterior inference, we propose a Gibbs sampling algorithm based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the mixture representation provides fully tractable conditional posterior densities and considerably simplifies existing estimation procedures for quantile regression models. In addition, we explain how the proposed Gibbs sampler can be utilized for the calculation of the marginal likelihood and for modal estimation. Our approach is illustrated with real data on medical expenditures.
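The location-scale mixture representation underlying such Gibbs samplers can be checked numerically. In the sketch below (our own illustration; the mixture constants follow the widely used Kozumi-Kobayashi parameterization, which we assume the abstract refers to), an asymmetric Laplace variate at quantile level p is built from a normal variate with an exponential mixing variable, and the location μ is then the p-quantile by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
p, mu, sigma = 0.25, 1.0, 0.5            # quantile level, location, scale
theta = (1 - 2 * p) / (p * (1 - p))      # mixture location coefficient
tau2 = 2.0 / (p * (1 - p))               # mixture scale coefficient
w = rng.exponential(1.0, size=200_000)   # exponential mixing variable
z = rng.normal(size=200_000)             # standard normal component
# ALD(mu, sigma, p) draws: normal conditional on w, exponential mixing over w
y = mu + sigma * theta * w + sigma * np.sqrt(tau2 * w) * z
share_below = np.mean(y <= mu)           # should be close to p
```

Conditional on w, y is Gaussian, which is exactly what makes the full conditionals in the Gibbs sampler tractable.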
