Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
In this paper, R²-type measures of the explanatory power of multivariate linear and categorical probit models proposed in the literature are reviewed and their deficiencies discussed. It is argued that a measure of explanatory power should take into account the components that are explicitly modelled when a regression model is estimated, while being indifferent to components not explicitly modelled. Based on this view, three different measures for multivariate probit models are proposed. Results of a simulation study are presented, designed to compare two of the measures in various situations, to evaluate the BCa bootstrap technique for testing the hypothesis that the corresponding measure is zero, and to calculate approximate confidence intervals. The BCa bootstrap technique turned out to work quite well over a wide range of situations, but may lead to misleading results when the true value of the corresponding measure is close to zero.
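The BCa interval used in the abstract above can be sketched in a few lines. The statistic below (a squared correlation standing in for an R²-type measure) and all data are illustrative stand-ins, not the paper's probit-specific measures:

```python
import numpy as np
from scipy.stats import norm

def bca_ci(x, y, stat, B=2000, alpha=0.05, seed=0):
    """BCa bootstrap confidence interval for a paired statistic stat(x, y)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    theta_hat = stat(x, y)
    # Bootstrap replicates: resample (x_i, y_i) pairs with replacement.
    idx = rng.integers(0, n, size=(B, n))
    boot = np.array([stat(x[i], y[i]) for i in idx])
    # Bias correction z0 and jackknife acceleration a.
    z0 = norm.ppf(np.mean(boot < theta_hat))
    jack = np.array([stat(np.delete(x, i), np.delete(y, i)) for i in range(n)])
    d = jack.mean() - jack
    a = (d**3).sum() / (6.0 * (d**2).sum() ** 1.5)
    # Adjusted percentiles of the bootstrap distribution.
    z = norm.ppf([alpha / 2, 1 - alpha / 2])
    p = norm.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
    return np.quantile(boot, p[0]), np.quantile(boot, p[1])

# Toy stand-in for an explanatory-power measure: squared correlation.
r2_like = lambda x, y: np.corrcoef(x, y)[0, 1] ** 2
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = x + rng.normal(scale=0.5, size=200)
lo, hi = bca_ci(x, y, r2_like)
# H0: measure = 0 is rejected at level alpha when the interval excludes 0.
print(lo, hi)
```

As the abstract warns, when the true measure is near zero the statistic piles up at the boundary and such intervals become unreliable.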

2.
ABSTRACT. Data on homeowner preferences for various Formosan subterranean termite control methods were analyzed using ordered probit and exploded logit models. The ordered probit models for the first, second and fourth preferences had similar sets of significant variables, although the first-preference model had the most. The most important variable, in terms of significance, across all preferences was the respondent's perception that termites are a problem in their neighborhood. The results from the exploded logit model indicated that a control option combining liquid treatment with more visits by a pest control agency is less preferred. This paper was presented at the 2004 Research Modeling Association World Conference on Natural Resource Modeling in Melbourne, Australia.

3.
We calibrate and contrast the recent generalized multinomial logit model and the widely used latent class logit model for studying heterogeneity in consumer purchases. We estimate the parameters of the models on panel data of household ketchup purchases and find that the generalized multinomial logit model outperforms the best-fitting latent class logit model in terms of the Bayesian information criterion. We compare the posterior estimates of coefficients for individual customers under the two models and discuss how the differences could affect marketing strategies such as pricing. We also describe extensions of the scale heterogeneity model that include the effects of state dependence and purchase history. Copyright © 2011 John Wiley & Sons, Ltd.

4.
The dependent variable in a regular linear regression is numerical, and in a logistic regression it is binary or categorical. In these models the dependent variable takes varying values. However, some problems yield an identity output of a constant value, and these can also be modelled in a linear or logistic regression with this constant in place of a numerical or binary response. In a linear model with a positive response, dividing through by the response values yields a regression of a constant output on the relative shares of individual predictors in the total response. Chemical reaction models use the agents' concentrations, which sum to a constant 100%. Another example arises in priority modelling by Thurstone scaling for ranked or paired-comparison data: the Thurstone scale can be estimated by probit or logit models with identical output across all responses. Models with a unitary output can be fitted with software for regular regressions, but the results carry a different interpretation. For instance, the coefficient of multiple determination is not an estimate of the explained share of the total response variance (which is zero), but a measure of how well an aggregate of predictors approximates the constant.
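The division-by-response device described above can be illustrated with ordinary least squares; the data below are simulated under an assumed positive linear response:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
X = rng.uniform(1.0, 2.0, size=(n, 3))
beta_true = np.array([0.5, 1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.05, size=n)  # strictly positive response

# Divide the regression through by y: the response becomes the constant 1
# and each predictor becomes its relative share x_j / y of the total.
shares = X / y[:, None]
b, *_ = np.linalg.lstsq(shares, np.ones(n), rcond=None)  # no intercept
print(b)  # recovers beta_true up to noise
```

The fit regresses a vector of ones on the share columns, exactly the "unitary output" setup the abstract describes.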

5.
An important feature of insurance loss data is its peakedness and heavy tails: there are many small losses alongside a few very large ones, so that common loss distribution models fit such data poorly, which has motivated attempts to improve various loss distribution models. An improved model must have both a high peak and a heavy tail. The improved models appearing in the literature in recent years are mainly composite models, which combine a model with a nonzero mode (such as the lognormal or Weibull distribution) with a heavy-tailed model (such as the Pareto or generalized Pareto distribution). This paper discusses the properties and characteristics of these composite models and compares them with the skew-t-normal and skew-t distributions. Model parameters are estimated by MCMC, and a fitting analysis of a real loss data set shows that the skew-t distribution fits peaked, heavy-tailed loss data better than the various composite models proposed to date.

6.
In this paper, we extend the concept of tail subadditivity (Belles-Sampera et al., 2014a; Belles-Sampera et al., 2014b) for distortion risk measures and give necessary and sufficient conditions for a distortion risk measure to be tail subadditive. We also introduce the generalized GlueVaR risk measures, which can be used to approximate any coherent distortion risk measure. To further illustrate the applications of tail subadditivity, we propose multivariate tail distortion (MTD) risk measures, generalizing the multivariate tail conditional expectation (MTCE) risk measure introduced by Landsman et al. (2016). The properties of multivariate tail distortion risk measures, such as positive homogeneity, translation invariance, monotonicity, and subadditivity, are discussed as well. Moreover, we discuss the application of multivariate tail distortion risk measures to capital allocation for a portfolio of risks and explore how the dependence between risks in a portfolio and extreme tail events of a risk portfolio affect capital allocations.

7.
This paper considers statistical modeling of the types of claim in a portfolio of insurance policies. For some classes of insurance contracts, in a particular period, it is possible to have a record of whether or not there is a claim on the policy, the types of claims made on the policy, and the amount of claims arising from each of the types. A typical example is automobile insurance, where in the event of a claim we are able to observe the amounts that arise from, say, injury to oneself, damage to one's own property, damage to a third party's property, and injury to a third party. Modeling the frequency and the severity components of the claims can be handled using traditional actuarial procedures. However, modeling the claim-type component is less well studied, and in this paper we recommend analyzing the distribution of these claim types using multivariate probit models, which can be viewed as latent-variable threshold models for the analysis of multivariate binary data. A recent article by Valdez and Frees [Valdez, E.A., Frees, E.W., Longitudinal modeling of Singapore motor insurance. University of New South Wales and the University of Wisconsin-Madison. Working Paper. Dated 28 December 2005, available from: http://wwwdocs.fce.unsw.edu.au/actuarial/research/papers/2006/Valdez-Frees-2005.pdf] considered this decomposition to extend the traditional model by including the conditional claim-type component, and proposed the multinomial logit model to empirically estimate this component. However, it is well known in the literature that this type of model assumes independence across the different outcomes. We investigate the appropriateness of fitting a multivariate probit model to the conditional claim-type component, in which the outcomes may in fact be correlated, with possible inclusion of important covariates.
Our estimation results show, first, that when the outcomes are correlated, the multinomial logit model produces predictions substantially different from the true ones; and second, through a simulation analysis, that even under ideal conditions in which the outcomes are independent, the multinomial logit model is still a poor approximation to the true underlying outcome probabilities relative to the multivariate probit model. The results of this paper serve to highlight the trade-off between tractability and flexibility when choosing the appropriate model.

8.
Business failure prediction is one of the most essential problems in the field of financial management. Research on developing quantitative business failure prediction models has focused on building discriminant models to distinguish between failed and non-failed firms, and several researchers in this field have proposed multivariate statistical discrimination techniques. This paper explores the applicability of multicriteria analysis to predicting business failure. Four preference disaggregation methods, namely the UTADIS method and three of its variants, are compared to three well-known multivariate statistical and econometric techniques: discriminant analysis, logit analysis and probit analysis. A basic (learning) sample and a holdout (testing) sample are used to perform the comparison. Through this comparison, the relative performance of all the aforementioned methods is investigated with regard to their discriminating and predictive ability.

9.
Network equilibrium models are widely used by traffic practitioners to aid them in making decisions concerning the operation and management of traffic networks. The common practice is to test a prescribed range of hypothetical changes or policy measures through adjustments to the input data, namely the trip demands, the arc performance (travel time) functions, and policy variables such as tolls or signal timings. Relatively little use is made, however, of the full implicit relationship between model inputs and outputs inherent in these models. By exploiting the representation of such models as an equivalent optimisation problem, classical results on the sensitivity analysis of non-linear programs may be applied to produce linear relationships between input data perturbations and model outputs. We specifically focus on recent results relating to the probit Stochastic User Equilibrium (PSUE) model, which has the advantage of greater behavioural realism and flexibility relative to the conventional Wardrop user equilibrium and logit SUE models. The paper goes on to explore four applications of these sensitivity expressions in gaining insight into the operation of road traffic networks: identification of sensitive, 'critical' parameters; computation of approximate, re-equilibrated solutions following a change (post-optimisation); robustness analysis of model forecasts to input data errors, in the form of confidence interval estimation; and the solution of problems of the bi-level, optimal network design variety. Finally, numerical experiments applying these methods are reported.

10.
This paper addresses one of the main challenges faced by insurance companies and risk management departments: how to develop a standardised framework for measuring the risks of underlying portfolios and, in particular, how to most reliably estimate the loss severity distribution from historical data. We investigate the tail conditional expectation (TCE) and tail variance premium (TVP) risk measures for the family of symmetric generalised hyperbolic (SGH) distributions. In contrast to the widely used Value-at-Risk (VaR) measure, TCE satisfies the requirements of a coherent risk measure, taking into account the expected loss in the tail of the distribution, while TVP also incorporates variability in the tail, providing the most conservative estimator of risk. We examine various distributions from the SGH class, which turn out to fit financial return data well and admit explicit formulas for the TCE and TVP risk measures. In parallel, we obtain the asymptotic behaviour of the TCE and TVP risk measures for large quantile levels. Furthermore, we extend our analysis to the multivariate framework, allowing multivariate distributions to model combinations of correlated risks, and demonstrate how TCE can be decomposed into individual components representing the contribution of each risk to the aggregate portfolio risk.
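As a concrete special case of TCE (the standard normal, the simplest symmetric distribution rather than the paper's SGH family), there is the closed form TCE_q = μ + σ·φ(z_q)/(1−q), which a quick Monte Carlo check confirms:

```python
import numpy as np
from scipy.stats import norm

def tce_normal(mu, sigma, q):
    """Tail conditional expectation E[X | X > VaR_q] for X ~ N(mu, sigma^2)."""
    z = norm.ppf(q)
    return mu + sigma * norm.pdf(z) / (1.0 - q)

mu, sigma, q = 0.0, 1.0, 0.975
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, size=1_000_000)
var_q = np.quantile(x, q)          # empirical VaR at level q
tce_mc = x[x > var_q].mean()       # average loss beyond VaR
print(tce_normal(mu, sigma, q), tce_mc)  # the two estimates agree closely
```

TCE averages the whole tail beyond VaR, which is why it sees extreme losses that VaR itself ignores.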

11.
When outcome variables are ordinal rather than continuous, the ordered logit model, also known as the proportional odds model (ologit/po), is a popular analytical method. However, generalized ordered logit/partial proportional odds models (gologit/ppo) are often a superior alternative: they can be less restrictive than proportional odds models and more parsimonious than methods that ignore the ordering of categories altogether. In practice, though, the use of gologit/ppo models has itself been problematic, or at least sub-optimal: researchers typically note that such models fit better but fail to explain why the ordered logit model was inadequate or what substantive insights are gained by using the gologit alternative. This paper uses both hypothetical examples and data from the 2012 European Social Survey to address these shortcomings.
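A minimal sketch of the proportional-odds structure the paper starts from: one common index x'β and category-specific cutpoints α_j, so P(Y ≤ j) = logistic(α_j − x'β); gologit/ppo relaxes this by letting some slopes vary with j. The cutpoints and index value below are hypothetical:

```python
import numpy as np

def ologit_probs(xb, cutpoints):
    """Category probabilities under the proportional-odds (ordered logit)
    model: P(Y <= j) = logistic(alpha_j - x'beta), one beta for all j."""
    expit = lambda t: 1.0 / (1.0 + np.exp(-t))
    # Appending +inf gives the final cumulative probability P(Y <= J) = 1.
    cum = expit(np.append(cutpoints, np.inf) - xb)
    return np.diff(np.concatenate(([0.0], cum)))

alphas = np.array([-1.0, 0.5, 2.0])        # hypothetical cutpoints: 4 categories
p = ologit_probs(xb=0.3, cutpoints=alphas)
print(p, p.sum())  # per-category probabilities, summing to 1
```

The single β is exactly the restriction the gologit/ppo family tests and, where needed, relaxes.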

12.
Summary. Reaction-diffusion processes were introduced by Nicolis and Prigogine, and by Haken. Existence theorems have been established for most models, but little is known about their ergodic properties. In this paper we study a class of models which have a reversible measure. We show that the stationary distribution is unique and is the limit starting from any initial distribution. The work was begun while the first author was visiting Cornell, supported by the Chinese government. The initial results (for Schlögl's first model) were generalized while the three authors were visiting the Nankai Institute for Mathematics, Tianjin, People's Republic of China. Partially supported by the National Science Foundation and the Army Research Office through the Mathematical Sciences Institute at Cornell University, and by NSF grant DMS 86-01800.

13.
We discuss properties of score statistics for testing the null hypothesis of homogeneity in a Weibull mixing model in which the group effect is modelled as a random variable and some of the covariates are measured with error. The proposed statistics are based on the corrected score approach; they require estimation only under the conventional Weibull model with measurement errors and do not require that the distribution of the random effect be specified. The results in this paper extend those of Gimenez, Bolfarine, and Colosimo (Annals of the Institute of Statistical Mathematics, 52, 698–711, 2000) for the case of independent Weibull models. A simulation study is provided. An erratum to this article has been published.

14.
Predicting insurance losses is a perennial focus of actuarial science. Due to complicated features such as skewness, heavy tails, and multi-modality, traditional parametric models are often inadequate to describe the distribution of losses, calling for a mature application of Bayesian methods. In this study we explore a Gaussian mixture model based on Dirichlet process priors. Using three automobile insurance datasets, we employ the probit stick-breaking method to incorporate the effect of covariates into the weights of the mixture components, improve its hierarchical structure, and propose a Bayesian nonparametric model that can identify distinct regression patterns across samples. Moreover, an advanced slice-sampling update is integrated to provide an improved approximation to the infinite mixture model. We compare our framework with four common regression techniques: three generalized linear models and a dependent Dirichlet process ANOVA model. The empirical results show that the proposed framework flexibly characterizes the actual loss distributions in the insurance datasets and achieves superior accuracy in data fitting and extrapolated prediction, greatly extending the application of Bayesian methods in the insurance sector.
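The probit stick-breaking construction referenced above turns a sequence of probit-transformed scores into mixture weights; a truncated sketch with hypothetical scores (in the paper's model the scores would depend on covariates, which is omitted here):

```python
import numpy as np
from scipy.stats import norm

def stick_breaking_weights(alpha):
    """Probit stick-breaking: break fractions v_k = Phi(alpha_k);
    weights w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = norm.cdf(alpha)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

alpha = np.array([0.0, 0.5, -0.3, 1.0])  # hypothetical probit scores (truncated)
w = stick_breaking_weights(alpha)
print(w, w.sum())  # positive weights; the sum is < 1 (truncation remainder)
```

Letting each α_k be a regression in the covariates is what makes the mixture weights, and hence the regression pattern, sample-specific.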

15.
We study bootstrapping for the generalized logit model of nominal type with random regressors. We assess the accuracy of several estimators for this model using a Monte Carlo simulation; that is, we study the finite-sample properties of the maximum likelihood estimators, including consistency and asymptotic normality. We also compare the Newton-Raphson algorithm with the BHHH algorithm.

16.
Summary. New Bayesian cohort models designed to resolve the identification problem in cohort analysis are proposed in this paper. First, the basic cohort model, which represents the statistical structure of time-series social survey data in terms of age, period and cohort effects, is explained. The logit cohort model for qualitative data from a binomial distribution and the normal-type cohort model for quantitative data from a normal distribution are considered as two special cases of the basic model. To overcome the identification problem in cohort analysis, a Bayesian approach is adopted, based on the assumption that the effect parameters change gradually. A Bayesian information criterion, ABIC, is introduced for selecting the optimal model. The approach is flexible enough that both the logit and normal-type cohort models apply not only to standard cohort tables but also to general cohort tables in which the width of the age groups is not equal to the interval between periods. The practical utility of the proposed models is demonstrated by analysing two data sets from the literature on cohort analysis. The Institute of Statistical Mathematics.

17.
A realized generalized autoregressive conditional heteroskedastic (GARCH) model is developed within a Bayesian framework for the purpose of forecasting value at risk and conditional value at risk. Student-t and skewed-t return distributions are combined with Gaussian and Student-t distributions in the measurement equation to forecast tail risk in eight international equity index markets over a 4-year period. Three realized measures are considered within this framework. A Bayesian estimator is developed that compares favourably, in simulations, with maximum likelihood, both in estimation and in forecasting. The realized GARCH models show a marked improvement over ordinary GARCH for both value-at-risk and conditional value-at-risk forecasting. This improvement is consistent across a variety of data and choices of distribution. Realized GARCH models incorporating a skewed Student-t distribution for returns are favoured overall, with the choice of measurement-equation error distribution and realized measure being of lesser importance. Copyright © 2017 John Wiley & Sons, Ltd.

18.
We develop several new composite models based on the Weibull distribution for heavy-tailed insurance loss data. A composite model assumes different weighted distributions for the head and the tail of the distribution, and several such models have been introduced in the literature for modeling insurance loss data. For each model proposed in this paper, we specify two parameters as functions of the remaining parameters. The models are fitted to two real insurance loss data sets and their goodness-of-fit is tested. We also present an application to risk measurement and compare the suitability of the models to empirical results.

19.
Logit models have been widely used in marketing to predict brand choice and to draw inferences about the impact of marketing mix variables on these choices. Most researchers have followed the pioneering example of Guadagni and Little, building choice models and drawing inference conditional on the assumption that the logit model is the correct specification for household purchase behaviour. To the extent that logit models fail to describe household purchase behaviour adequately, statistical inferences from them may be flawed; more importantly, marketing decisions based on these models may be incorrect. This research applies White's robust inference method to logit brand choice models. The method does not impose the restrictive assumption that the assumed logit specification is true: a sandwich estimator of the covariance, 'corrected' for possible mis-specification, is the basis for inference about the logit model parameters. An important feature of this method is that it yields correct standard errors for the marketing-mix parameter estimates even if the assumed logit specification is not correct. Empirical examples use household panel data sets from three product categories to estimate logit models of brand choice, and the standard errors obtained using traditional methods are compared with those obtained by White's robust method. The findings illustrate that incorrectly assuming the logit model to be true typically yields standard errors which are biased downward by 10–40 per cent. Conditions under which the bias is particularly severe are explored; under these conditions, the robust approach is recommended. Copyright © 2000 John Wiley & Sons, Ltd.
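White's sandwich covariance can be sketched directly for a binary logit (the papers' brand-choice models are multinomial, but the bread-meat-bread structure is the same); the data, the Newton-Raphson fit, and all values below are simulated for illustration:

```python
import numpy as np

def logit_sandwich_se(X, y, beta):
    """Robust (sandwich) standard errors for a fitted logit:
    bread = (X'WX)^-1 with Hessian weights w_i = p_i (1 - p_i);
    meat  = sum_i s_i s_i' with score contribution s_i = (y_i - p_i) x_i."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    bread = np.linalg.inv(X.T @ (X * (p * (1 - p))[:, None]))
    meat = X.T @ (X * ((y - p) ** 2)[:, None])
    return np.sqrt(np.diag(bread @ meat @ bread))

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)

beta = np.zeros(2)
for _ in range(25):  # Newton-Raphson iterations for the logit MLE
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    beta += np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))

print(beta, logit_sandwich_se(X, y, beta))
```

When the model is correctly specified, bread and meat estimate the same matrix and the sandwich collapses to the usual inverse-Hessian standard errors; under mis-specification only the sandwich remains valid, which is the source of the 10–40 per cent bias reported above.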

20.
We analyze the concept of credibility in claim frequency in two generalized count models, the Mittag-Leffler and Weibull count models, which can handle both underdispersion and overdispersion in count data and nest the commonly used Poisson model as a special case. We find evidence, using data from a Danish insurance company, that the simple Poisson model can set the credibility weight to one even when only three years of individual experience data are available, owing to the large heterogeneity among policyholders, and can thereby break down the credibility model. The generalized count models, on the other hand, allow the weight to adjust to the number of years of experience available. We propose parametric estimators for the structural parameters in the credibility formula, using the mean and variance of the assumed distributions and maximum likelihood estimation over collective data. As an example, we show that the proposed parameters from the Mittag-Leffler model provide weights that are consistent with the idea of credibility. A simulation study investigates the stability of the maximum likelihood estimates from the Weibull count model. Finally, we extend the analysis to multidimensional lines and explain how our approach can be used to select profitable customers in cross-selling: customers can be selected by estimating a function of their unknown risk profiles, namely the mean of the assumed distribution of their number of claims.
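The credibility mechanics discussed above follow the Bühlmann form Z = n/(n + k); the values of k below are illustrative, not estimates from the Danish data:

```python
def credibility_weight(n_years: float, k: float) -> float:
    """Buhlmann credibility weight Z = n / (n + k), where
    k = (expected process variance) / (variance of hypothetical means)."""
    return n_years / (n_years + k)

# Large heterogeneity among policyholders makes k small under an
# equidispersed (Poisson) model, pushing Z toward 1 after only 3 years;
# an overdispersed count model yields a larger k and a tempered Z.
print(credibility_weight(3, 0.5))   # ≈ 0.857: individual experience dominates
print(credibility_weight(3, 10.0))  # ≈ 0.231: collective experience dominates
```

The generalized count models effectively enlarge k when the data are overdispersed, which is how they keep Z responsive to the number of years of experience.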
