Similar Articles
20 similar articles found.
1.
Loss Given Default (LGD) is the loss borne by a bank when a customer defaults on a loan. LGD for unsecured retail loans is often difficult to model. In the frequentist (non-Bayesian) two-step approach, two separate regression models are estimated independently, which is potentially problematic when they are combined to make predictions about LGD; the result is a single point estimate of LGD for each loan. Alternatively, LGD can be modelled using Bayesian methods. In the Bayesian framework, one can build a single hierarchical model instead of two separate ones, which makes the approach more coherent. In this paper, both Bayesian methods and the frequentist approach are applied to data on personal loans provided by a large UK bank. As expected, the posterior means of parameters produced by the Bayesian methods are very similar to the frequentist estimates. The most important advantage of the Bayesian model is that it generates an individual predictive distribution of LGD for each loan. Potential applications of such distributions include downturn LGD and stressed LGD under Basel II.
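A minimal sketch of the frequentist two-step approach described above, on synthetic data (the bank's loan data and covariates are not public, so all variables and numbers here are illustrative): a logistic regression for the probability of a positive loss, a linear regression for its severity, and a point estimate obtained by multiplying the two.

```python
# Two-step LGD sketch on synthetic data; all covariates are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                      # hypothetical loan covariates
p_pos = 1 / (1 + np.exp(-(0.5 + X @ [1.0, -0.5, 0.2])))
positive = rng.random(n) < p_pos                 # did the loan incur any loss?
lgd = np.where(positive,
               np.clip(0.4 + X @ [0.1, 0.05, -0.1] + rng.normal(0, 0.1, n), 0, 1),
               0.0)

# Step 1: model the probability of a positive loss.
stage1 = LogisticRegression().fit(X, positive)
# Step 2: model the loss severity on the loans that had a positive loss.
stage2 = LinearRegression().fit(X[positive], lgd[positive])

# Point prediction: E[LGD] = P(loss > 0) * E[LGD | loss > 0].
# Unlike the Bayesian hierarchical model, this yields no predictive distribution.
pred = stage1.predict_proba(X)[:, 1] * stage2.predict(X)
print(pred[:5])
```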

2.
Multimodel inference draws statistical conclusions from a set of plausible models rather than from a single model. In this paper, we focus on multimodel inference based on the smoothed information criteria proposed in the seminal monographs of Buckland et al. (1997) and Burnham and Anderson (2003), termed the smoothed Akaike information criterion (SAIC) and smoothed Bayesian information criterion (SBIC) methods. Owing to their simplicity and applicability, these methods are widely used in many fields. Using an illustrative example and by deriving limiting properties for the weights in linear regression, we find that the existing variance estimation for SAIC is not applicable because of a restrictive condition, whereas for SBIC it is applicable. In particular, we propose a simulation-based inference for SAIC based on the limiting properties. Both a simulation study and a real data example show the promising performance of the proposed simulation-based inference.
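For concreteness, a minimal sketch of the smoothed-criterion weights on synthetic linear regression data: following Buckland et al. (1997), each candidate model receives weight proportional to exp(-IC/2), normalised over the candidate set. The data-generating process below is illustrative.

```python
# SAIC/SBIC weights for all subsets of three regressors, synthetic data.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)    # only the first regressor matters

def info_criteria(cols):
    # Gaussian AIC and BIC for an OLS fit with intercept on the given columns.
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1] + 1                          # coefficients plus error variance
    loglik = -n / 2 * (np.log(2 * np.pi * rss / n) + 1)
    return -2 * loglik + 2 * k, -2 * loglik + np.log(n) * k

models = [cols for r in range(4) for cols in itertools.combinations(range(3), r)]
aic, bic = np.array([info_criteria(m) for m in models]).T
w_saic = np.exp(-(aic - aic.min()) / 2)
w_saic /= w_saic.sum()                          # SAIC weights: w_i proportional to exp(-AIC_i / 2)
w_sbic = np.exp(-(bic - bic.min()) / 2)
w_sbic /= w_sbic.sum()                          # SBIC weights: w_i proportional to exp(-BIC_i / 2)
for m, wa, wb in zip(models, w_saic, w_sbic):
    print(f"model {m}: SAIC weight {wa:.3f}, SBIC weight {wb:.3f}")
```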

3.
Model selection strategies have been routinely employed to determine a model for data analysis in statistics, and further study and inference then often proceed as though the selected model were the true model, known a priori. Model averaging approaches, on the other hand, combine estimators from a set of candidate models: instead of deciding which model is the 'right' one, a model averaging approach fits a set of candidate models and averages over their estimators using data-adaptive weights. In this paper we establish a general frequentist model averaging framework that places no restrictions on the set of candidate models, broadening the scope of existing methodologies in the frequentist model averaging literature. Assuming the data come from an unknown model, we derive the model averaging estimator and study its limiting distributions and related predictions while taking possible modeling biases into account. We propose a set of optimal weights for combining the individual estimators so that the expected mean squared error of the averaged estimator is minimized. Simulation studies are conducted to compare the performance of the estimator with that of existing methods. The results show the benefits of the proposed approach over traditional model selection approaches as well as existing model averaging methods.
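A minimal sketch of model averaging with data-adaptive weights, on synthetic data. The paper's optimal weights minimise the expected mean squared error of the averaged estimator; as a simple stand-in (not the paper's derivation), this sketch selects the weight for two candidate regressions on a grid by validation error.

```python
# Grid-based weight choice for averaging two OLS models; data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = rng.normal(size=(n, 2))
y = 1.0 + 1.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)   # second effect is weak

tr, va = slice(0, 150), slice(150, None)        # simple train/validation split

def ols_pred(cols):
    # Fit OLS with intercept on the training slice; predict for all rows.
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z[tr], y[tr], rcond=None)
    return Z @ beta

pred_small, pred_big = ols_pred([0]), ols_pred([0, 1])          # two candidate models
grid = np.linspace(0, 1, 101)
err = [np.mean((y[va] - (w * pred_big[va] + (1 - w) * pred_small[va])) ** 2)
       for w in grid]
w_star = grid[int(np.argmin(err))]
avg_pred = w_star * pred_big + (1 - w_star) * pred_small        # model-averaged prediction
print("weight on the larger model:", w_star)
```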

4.
Fiducial inference in the pivotal family of distributions
In this paper a family of distributions, called the pivotal family, is considered. A pivotal family is determined by a generalized pivotal model. Analytical results show that a great many parametric families of distributions are pivotal. For a pivotal family of distributions, a general method of deriving fiducial distributions of parameters is proposed. A fiducial model plays an important role in the method: it is a function of a random variable with a known distribution, called the pivotal random element, given the observation of a statistic. The method of this paper subsumes several existing methods of deriving fiducial distributions; in particular, the first fiducial distribution given by Fisher can be derived by the method. For the monotone likelihood ratio family of distributions, which is a pivotal family, the fiducial distributions have a frequentist property in the Neyman-Pearson sense. Fiducial distributions of regular parametric functions also have this frequentist property. Some advantages of fiducial inference are exhibited in four applications of the fiducial distribution. Many examples are given in which the fiducial distributions cannot be derived by existing methods.

5.
This article develops Bayesian inference for spatial models with a flexible skew latent structure. Using the multivariate skew-normal distribution of Sahu et al., a valid random field model with a stochastic skewing structure is proposed to take non-Gaussian features into account. The skewed spatial model is further improved via scale mixing to accommodate more extreme observations. Finally, the skewed and heavy-tailed random field model is used to describe the parameters of extreme value distributions. Bayesian prediction is carried out with a well-known Gibbs sampling algorithm, including slice sampling and adaptive simulation techniques. The model performance, particularly the identifiability of the parameters, is assessed by a simulation study and an analysis of extreme wind speeds across Iran. We conclude that our model provides more satisfactory results according to Bayesian model selection and predictive-based criteria. R code to implement the methods used is available as online supplementary material.

6.
Count data, most often modeled by a Poisson distribution, are common in statistical process control. They are traditionally monitored by frequentist c or u charts, by cumulative sum charts, and by exponentially weighted moving average charts. These charts all assume that the in-control true mean is known, a common fiction that is addressed by gathering a large Phase I sample and using it to estimate the mean. "Self-starting" proposals that lessen the need for a large Phase I sample have also appeared. All these methods are frequentist, i.e., they allow only retrospective inference during Phase I, and they have no coherent way to incorporate less-than-perfect prior information about the in-control mean. In this paper, we introduce a Bayesian procedure that can incorporate prior information and allow online inference, and that should be particularly attractive for short-run settings where large Phase I calibration exercises are impossible or unreasonable.
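A minimal sketch of the conjugate Gamma-Poisson updating that such a Bayesian procedure can build on (the paper's exact procedure may differ): a Gamma(a, b) prior on the in-control mean is updated online with each observed count, and the next count is flagged when it is improbable under the posterior predictive, which is negative binomial. The prior parameters and signal limit below are illustrative assumptions.

```python
# Online Bayesian monitoring of Poisson counts via Gamma-Poisson conjugacy.
from scipy import stats

a, b = 8.0, 2.0                       # Gamma(shape, rate) prior: mean 4, rough prior knowledge
counts = [3, 5, 4, 2, 6, 4, 12]       # incoming counts; the last one is suspect
for x in counts:
    # Posterior predictive of the next count: NegBin(a, b / (b + 1)).
    pred = stats.nbinom(a, b / (b + 1))
    p_upper = pred.sf(x - 1)          # P(X >= x) under the predictive
    flag = "  <- signal" if p_upper < 0.005 else ""
    print(f"x={x:2d}  P(X>={x}) = {p_upper:.4f}{flag}")
    a, b = a + x, b + 1               # conjugate update with the new observation
```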

7.
In this article we study penalized regression splines (P-splines), which are low-order basis splines with a penalty to avoid undersmoothing. Such P-splines are typically not spatially adaptive, and hence can have trouble when functions are varying rapidly. Our approach is to model the penalty parameter inherent in the P-spline method as a heteroscedastic regression function. We develop a full Bayesian hierarchical structure to do this and use Markov chain Monte Carlo techniques for drawing random samples from the posterior for inference. The advantage of using a Bayesian approach to P-splines is that it allows for simultaneous estimation of the smooth functions and the underlying penalty curve in addition to providing uncertainty intervals of the estimated curve. The Bayesian credible intervals obtained for the estimated curve are shown to have pointwise coverage probabilities close to nominal. The method is extended to additive models with simultaneous spline-based penalty functions for the unknown functions. In simulations, the approach achieves very competitive performance with the current best frequentist P-spline method in terms of frequentist mean squared error and coverage probabilities of the credible intervals, and performs better than some of the other Bayesian methods.
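As a baseline for comparison, a minimal frequentist P-spline sketch: a cubic B-spline basis with a second-order difference penalty and a single global penalty parameter lambda. The Bayesian approach in the article instead models the penalty as a heteroscedastic regression function and estimates it jointly; the data and lambda below are illustrative.

```python
# Frequentist P-spline fit with a fixed global penalty, synthetic data.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(8 * x) + rng.normal(0, 0.2, 150)

k, n_inner = 3, 20
t = np.r_[np.repeat(0.0, k + 1), np.linspace(0, 1, n_inner)[1:-1], np.repeat(1.0, k + 1)]
nb = len(t) - k - 1
# Evaluate each B-spline basis function on the data points.
B = np.column_stack([BSpline(t, np.eye(nb)[i], k)(x) for i in range(nb)])

D = np.diff(np.eye(nb), n=2, axis=0)            # second-order difference matrix
lam = 1.0                                       # fixed penalty; the article lets this vary with x
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef                                  # penalised spline estimate of the curve
print("fitted values (first five):", fit[:5].round(3))
```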

8.
A fundamental problem of interest to contemporary natural resource scientists is that of assessing whether a critical population parameter such as population proportion p has been maintained above (or below) a specified critical threshold level pc. This problem has been traditionally analyzed using frequentist estimation of parameters with confidence intervals or frequentist hypothesis testing. Bayesian statistical analysis provides an alternative approach that has many advantages. It has a more intuitive interpretation, providing probability assessments of parameters. It provides the Bayesian logic of "if (data), then probability (parameters)" rather than the frequentist logic of "if (parameters), then probability (data)." It provides a sequential, cumulative, scientific approach to analysis, using prior information and reassessing the probability distribution of parameters for adaptive management decision making. It has been integrated with decision theory and provides estimates of risk. Natural resource scientists have the opportunity of using Bayesian statistical analysis to their advantage now that this alternative approach to statistical inference has become practical and accessible.
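A minimal sketch of the Bayesian assessment described above: with a Beta(1, 1) prior on the population proportion p and binomial data, the posterior is Beta(1 + successes, 1 + failures), so P(p > pc | data) can be reported directly. The threshold and counts below are illustrative.

```python
# Posterior probability that a proportion exceeds a critical threshold.
from scipy import stats

p_c = 0.5                        # critical threshold for the proportion
successes, n = 34, 60            # e.g., occupied habitat plots out of those surveyed
posterior = stats.beta(1 + successes, 1 + n - successes)   # Beta(1,1) prior updated
print("P(p > p_c | data) =", round(posterior.sf(p_c), 3))
```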

9.
Widely used parametric generalized linear models are, unfortunately, a somewhat limited class of specifications. Nonparametric aspects are often introduced to enrich this class, resulting in semiparametric models. Focusing on single or k-sample problems, many classical nonparametric approaches are limited to hypothesis testing. Those that allow estimation are limited to certain functionals of the underlying distributions. Moreover, the associated inference often relies upon asymptotics when nonparametric specifications are often most appealing for smaller sample sizes. Bayesian nonparametric approaches avoid asymptotics but have, to date, been limited in the range of inference. Working with Dirichlet process priors, we overcome the limitations of existing simulation-based model fitting approaches which yield inference that is confined to posterior moments of linear functionals of the population distribution. This article provides a computational approach to obtain the entire posterior distribution for more general functionals. We illustrate with three applications: investigation of extreme value distributions associated with a single population, comparison of medians in a k-sample problem, and comparison of survival times from different populations under fairly heavy censoring.
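A minimal sketch in the article's spirit: simulate the posterior of a functional (here the median) of an unknown population distribution. The sketch uses the Bayesian bootstrap, the limiting case of a Dirichlet process posterior as the prior mass goes to zero; the article's full Dirichlet process machinery handles more general functionals and censoring.

```python
# Bayesian bootstrap posterior for the median of an unknown distribution.
import numpy as np

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=80)      # illustrative sample
order = np.argsort(data)
sorted_data = data[order]

draws = []
for _ in range(4000):
    w = rng.dirichlet(np.ones(len(data)))[order]   # posterior weights on data points
    # Weighted median: first sorted point where the cumulative weight reaches 0.5.
    draws.append(sorted_data[np.searchsorted(np.cumsum(w), 0.5)])
draws = np.array(draws)
print("posterior median:", np.median(draws).round(3))
print("95% credible interval:", np.percentile(draws, [2.5, 97.5]).round(3))
```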

10.
Bayesian vector autoregressive analysis and its application
Because the economic environment changes frequently, economic forecasting often faces the modeling difficulty of small samples, a setting in which Bayesian methods have clear advantages. Under the conjugate likelihood setting of the matrix normal-Wishart distribution, this paper first discusses Bayesian analysis of the vector autoregressive model, obtaining the posterior distribution of the model parameters and the one-step-ahead predictive distribution. Second, the corresponding results for the component equations are given, and a method for inferring the model order is described. Finally, the computational steps are listed and, as an application, a forecasting model is built for the Shanghai real estate price index data, with good results.
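A minimal sketch of the conjugate Bayesian step underlying such a model: under a matrix normal-Wishart prior, the posterior mean of the VAR coefficient matrix has a closed ridge-like form, and a one-step forecast follows directly. The series and prior below are simulated illustrations, not the Shanghai price-index data.

```python
# Conjugate posterior mean for a VAR(1) coefficient matrix, synthetic data.
import numpy as np

rng = np.random.default_rng(4)
T, m = 60, 2                                    # short sample, two series
A_true = np.array([[0.6, 0.1], [0.0, 0.5]])
Y = np.zeros((T, m))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.3, m)

X, Z = Y[:-1], Y[1:]                            # lagged regressors and current values
B0 = np.zeros((m, m))                           # prior mean: coefficients shrunk to zero
Omega0_inv = np.eye(m)                          # prior precision on the coefficients
# Posterior mean: (Omega0^-1 + X'X)^-1 (Omega0^-1 B0 + X'Z), valid for any error
# covariance under the matrix normal-Wishart conjugate prior.
B_post = np.linalg.solve(Omega0_inv + X.T @ X, Omega0_inv @ B0 + X.T @ Z)
print("posterior mean of the VAR(1) coefficient matrix:\n", B_post.T.round(2))
print("one-step-ahead forecast:", (Y[-1] @ B_post).round(2))
```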

11.
Higher-order asymptotic arguments for a scalar parameter of interest have been widely investigated for Bayesian inference. In this paper the theory of asymptotic expansions is discussed for a vector parameter of interest. A modified log-likelihood ratio is suggested, which can be used to derive approximate Bayesian credible sets with accurate frequentist coverage. Three examples are illustrated.

12.
One of the main advantages of Bayesian approaches is that they offer principled methods of inference in models of varying dimensionality and in models of infinite dimensionality. What is less widely appreciated is how sensitive the model inference is to prior distributions, and therefore how priors should be set for real problems. In this paper prior sensitivity is considered with respect to the problem of inference in Gaussian mixture models. Two distinct Bayesian approaches have been proposed: the first is to use Bayesian model selection based upon the marginal likelihood; the second is to use an infinite mixture model which 'sidesteps' model selection. Explanations for the prior sensitivity are given in order to give practitioners guidance in setting prior distributions. In particular, the use of conditionally conjugate prior distributions instead of purely conjugate prior distributions is advocated as a method for investigating prior sensitivity of the mean and variance individually.

13.
We consider Bayesian shrinkage predictions for the Normal regression problem under the frequentist Kullback-Leibler risk function. First, we consider the multivariate Normal model with an unknown mean and a known covariance. While the unknown mean is fixed, the covariance of future samples can be different from that of the training samples. We show that the Bayesian predictive distribution based on the uniform prior is dominated by that based on a class of priors if the prior distributions for the covariance and future covariance matrices are rotation invariant. Then, we consider a class of priors for the mean parameters depending on the future covariance matrix. With such a prior, we can construct a Bayesian predictive distribution dominating that based on the uniform prior. Lastly, applying this result to the prediction of response variables in the Normal linear regression model, we show that there exists a Bayesian predictive distribution dominating that based on the uniform prior. Minimaxity of these Bayesian predictions follows from these results.

14.
One of the issues contributing to the success of any extreme value modeling is the choice of the number of upper order statistics used for inference, or equivalently, the selection of an appropriate threshold. In this paper we propose a Bayesian predictive approach to the peaks over threshold method with the purpose of estimating extreme quantiles beyond the range of the data. In the peaks over threshold (POT) method, we assume that the threshold identifies a model with a specified prior probability, from a set of possible models. For each model, the predictive distribution of a future excess over the corresponding threshold is computed, as well as a conditional estimate for the corresponding tail probability. The unconditional tail probability for a given future extreme observation from the unknown distribution is then obtained as an average of the conditional tail estimates with weights given by the posterior probability of each model.
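A minimal sketch of the threshold-averaging idea: fit a generalized Pareto distribution to the excesses over each candidate threshold u, estimate the tail probability as P(X > u) times the fitted survival function at x0 - u, and average across thresholds. For simplicity the weights below are uniform; the paper instead weights by the posterior probability of each threshold model.

```python
# Peaks-over-threshold tail estimates averaged over candidate thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.standard_t(df=4, size=2000)             # illustrative heavy-tailed sample
x0 = 6.0                                        # extreme level beyond most of the data
thresholds = np.quantile(x, [0.90, 0.95, 0.975])

tail_estimates = []
for u in thresholds:
    excesses = x[x > u] - u
    c, _, scale = stats.genpareto.fit(excesses, floc=0)   # GPD fit to the excesses
    # P(X > x0) = P(X > u) * P(excess > x0 - u | X > u).
    tail_estimates.append(np.mean(x > u) * stats.genpareto.sf(x0 - u, c, 0, scale))

print("per-threshold estimates of P(X > x0):", np.round(tail_estimates, 5))
print("averaged estimate:", round(float(np.mean(tail_estimates)), 5))
```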

15.
Statistical inference about unknown parameter values that have known constraints is a challenging problem for both frequentist and Bayesian methods. As an alternative, inferential models created with the weak belief method can generate inferential results with desirable frequency properties for constrained parameter problems. To accomplish this, we propose an extension of weak belief called the elastic belief method. Compared to an existing rule for conditioning on constraint information, the elastic belief method produces more efficient probabilistic inference while maintaining desirable frequency properties. The application of this new method is demonstrated in two well-studied examples: inference about a nonnegative quantity measured with Gaussian error and inference about the signal rate of a Poisson count with a known background rate. Compared to several previous interval-forming methods for the constrained Poisson signal rate, the new method gives an interval with better coverage probability or a simpler construction. More importantly, the inferential model provides a post-data predictive measure of uncertainty about the unknown parameter value that is not inherent in other interval-forming methods.

16.
A Bayesian inference for a linear Gaussian random coefficient regression model with inhomogeneous within-class variances is presented. The model is motivated by an application in metrology, but it may well find interest in other fields. We consider the selection of a noninformative prior for the Bayesian inference to address applications where the available prior knowledge is either vague or shall be ignored. The noninformative prior is derived by applying the Berger and Bernardo reference prior principle with the means of the random coefficients forming the parameters of interest. We show that the resulting posterior is proper and specify conditions for the existence of first and second moments of the marginal posterior. Simulation results are presented which suggest good frequentist properties of the proposed inference. The calibration of sonic nozzle data is considered as an application from metrology. The proposed inference is applied to these data and the results are compared to those obtained by alternative approaches.

17.
The fitting of finite mixture models is an ill-defined estimation problem, as completely different parameterizations can induce similar mixture distributions. This leads to multiple modes in the likelihood, which is a problem for frequentist maximum likelihood estimation, and complicates statistical inference from Markov chain Monte Carlo draws in Bayesian estimation. For the analysis of the posterior density of these draws, a suitable separation into different modes is desirable. In addition, a unique labelling of the component-specific estimates is necessary to solve the label switching problem. This paper presents and compares two approaches to achieve these goals: relabelling under multimodality and constrained clustering. The algorithmic details are discussed, and their application is demonstrated on artificial and real-world data.
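A minimal sketch of the label switching problem and the simplest remedy, an identifiability (ordering) constraint on the component means; the relabelling-under-multimodality and constrained-clustering algorithms compared in the paper are more refined. The draws below are simulated to mimic MCMC output with random label switches.

```python
# Label switching in mixture MCMC output, fixed by an ordering constraint.
import numpy as np

rng = np.random.default_rng(6)
true_means = np.array([-2.0, 3.0])
draws = true_means + rng.normal(0, 0.1, size=(1000, 2))    # draws of (mu_1, mu_2)
flip = rng.random(1000) < 0.5
draws[flip] = draws[flip][:, ::-1]                         # random label switches

print("raw component means (corrupted by switching):", draws.mean(axis=0).round(2))
relabelled = np.sort(draws, axis=1)                        # enforce mu_1 < mu_2 per draw
print("relabelled component means:", relabelled.mean(axis=0).round(2))
```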

18.
In this paper, the objective Bayesian method is applied to investigate a competing risks model involving both catastrophic and degradation failures. By modeling soft failures as a Wiener degradation process and hard failures with a Weibull distribution, we obtain noninformative priors (the Jeffreys prior and two reference priors) for the parameters. Moreover, we show that their posterior distributions have good properties, and we propose Gibbs sampling algorithms for Bayesian inference based on the Jeffreys prior and the two reference priors. Some simulation studies are conducted to illustrate the superiority of the objective Bayesian method. Finally, we apply our methods to two real data examples and compare the objective Bayesian estimates with the other estimates.

19.
This paper introduces a new mathematical object: the confidence structure. A confidence structure represents inferential uncertainty in an unknown parameter by defining a belief function whose output is commensurate with Neyman–Pearson confidence. Confidence structures on a group of input variables can be propagated through a function to obtain a valid confidence structure on the output of that function. The theory of confidence structures is created by enhancing the extant theory of confidence distributions with the mathematical generality of Dempster–Shafer evidence theory. Mathematical proofs grounded in random set theory demonstrate the operative properties of confidence structures. The result is a new theory which achieves the holistic goals of Bayesian inference while maintaining the empirical rigor of frequentist inference.

20.
Estimating the probability of extreme temperature events is difficult because of limited records across time and the need to extrapolate the distributions of these events, as opposed to just the mean, to locations where observations are not available. Another related issue is the need to characterize the uncertainty in the estimated probability of extreme events at different locations. Although the tools for statistical modeling of univariate extremes are well-developed, extending these tools to model spatial extreme data is an active area of research. In this paper, in order to make inference about spatial extreme events, we introduce a new nonparametric model for extremes. We present a Dirichlet-based copula model that is a flexible alternative to parametric copula models such as the normal and t-copula. The proposed modelling approach is fitted using a Bayesian framework that allows us to take into account different sources of uncertainty in the data and models. We apply our methods to annual maximum temperature values in the east-south-central United States.
