Similar Articles
20 similar articles found
1.
Recent results concerning the instability of Bayes factor search over Bayesian networks (BNs) lead us to ask whether learning the parameters of a selected BN might also depend heavily on the often rather arbitrary choice of prior density. Robustness of inferences to misspecification of the prior density would at least ensure that a selected candidate model would give similar predictions of future data points under somewhat different priors and a given large training data set. In this paper we derive new explicit total variation bounds on the calculated posterior density as a function of the closeness of the genuine prior to the approximating one used and of certain summary statistics of the calculated posterior density. We show that the approximating posterior density often converges to the genuine one as the number of sample points increases, and our bounds allow us to identify when the posterior approximation might not. To prove our general results we needed to develop a new family of distance measures called local DeRobertis distances. These provide coarse non-parametric neighbourhoods and allow us to derive elegant explicit posterior bounds in total variation. The bounds can be routinely calculated for BNs even when the sample has systematically missing observations and no conjugate analyses are possible.
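The convergence claim can be illustrated numerically: under two different priors on a binomial success probability, the total variation distance between the resulting posteriors shrinks as the sample grows. This is a minimal sketch, not the paper's local DeRobertis bounds; the priors, true parameter, and sample sizes are illustrative choices.

```python
import numpy as np
from scipy import stats

def tv_distance(p, q, dx):
    # Total variation distance = 0.5 * integral of |p - q|
    return 0.5 * np.sum(np.abs(p - q)) * dx

grid = np.linspace(1e-6, 1 - 1e-6, 20000)
dx = grid[1] - grid[0]
theta_true = 0.3
rng = np.random.default_rng(0)

for n in (10, 100, 1000):
    s = rng.binomial(n, theta_true)
    # Posteriors under a uniform prior and the Jeffreys Beta(1/2, 1/2) prior
    post_unif = stats.beta.pdf(grid, 1.0 + s, 1.0 + n - s)
    post_jeff = stats.beta.pdf(grid, 0.5 + s, 0.5 + n - s)
    print(n, tv_distance(post_unif, post_jeff, dx))
```

The printed distances decrease with n, matching the abstract's point that the approximating posterior typically converges to the genuine one as data accumulate.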

2.
Adaptive smoothing has been proposed for curve-fitting problems where the underlying function is spatially inhomogeneous. Two Bayesian adaptive smoothing models, Bayesian adaptive smoothing splines on a lattice and Bayesian adaptive P-splines, are studied in this paper. Estimation is fully Bayesian and carried out by efficient Gibbs sampling. The choice of prior is critical in any Bayesian non-parametric regression method. We use objective priors on the first-level parameters where feasible, specifically independent Jeffreys priors (right Haar priors) on the implied base linear model and error variance, and we derive sufficient conditions on higher-level components to ensure that the posterior is proper. Through simulation, we demonstrate that the common practice of approximating improper priors by proper but diffuse priors may lead to invalid inference, and we show how appropriate choices of proper but only weakly informative priors yield satisfactory inference.

3.
In this paper, an objective Bayesian method is applied to analyze a degradation model based on the inverse Gaussian process. Noninformative priors (the Jeffreys prior and two reference priors) for the model parameters are obtained and their properties are discussed. Moreover, we propose a class of modified reference priors to remedy weaknesses of the usual reference priors, and we show that the modified reference priors not only have proper posterior distributions but also have probability-matching properties for the model parameters. Gibbs sampling algorithms for Bayesian inference based on the Jeffreys prior and the modified reference priors are studied. Simulations are conducted to compare the objective Bayesian estimates with the maximum likelihood estimates and subjective Bayesian estimates; they show better performance of the objective method than the other two, especially for small sample sizes. Finally, two real data examples are analyzed for illustration.

4.
This paper introduces a new family of local density separations for assessing robustness of finite-dimensional Bayesian posterior inferences with respect to their priors. Unlike their global equivalents, under these novel separations posterior robustness is recovered even when the functioning posterior converges to a defective distribution, irrespective of whether the prior densities are grossly misspecified and of the form and validity of the assumed data sampling distribution. For exponential family models, the local density separations are shown to form the basis of a weak topology closely linked to the Euclidean metric on the natural parameters. In general, the local separations are shown to measure relative roughness of the prior distribution with respect to its corresponding posterior, and they provide explicit bounds for the total variation distance between an approximating posterior density and the genuine posterior. We illustrate the application of these bounds for assessing robustness of the posterior inferences for a dynamic time series model of blood glucose concentration in diabetes mellitus patients with respect to alternative prior specifications.

5.
Conditional autoregressive (CAR) models have been extensively used for the analysis of spatial data in diverse areas, such as demography, economics, epidemiology and geography, as models for both latent and observed variables. In the latter case, the most common inferential method has been maximum likelihood, and the Bayesian approach has not been used much. This work proposes default (automatic) Bayesian analyses of CAR models. Two versions of the Jeffreys prior, the independence Jeffreys and Jeffreys-rule priors, are derived for the parameters of CAR models, and properties of the priors and resulting posterior distributions are obtained. The two priors and their respective posteriors are compared based on simulated data. Also, frequentist properties of inferences based on maximum likelihood are compared with those based on the Jeffreys priors and the uniform prior. Finally, the proposed Bayesian analysis is illustrated by fitting a CAR model to a phosphate dataset from an archaeological region.

6.
As a compromise between the nonhomogeneous Poisson process and the renewal process, the modulated power law process is more appropriate for modeling the failures of repairable systems. In this article, objective Bayesian methods are proposed to analyze the modulated power law process. Seven reference priors, one of which is also the Jeffreys prior, are derived. However, only four of them are taken into consideration because of their practicality. Propriety of the posterior densities under the four reference priors is proved. A predictive distribution for the future failure time is obtained as well. For comparison, simulations and a real data analysis are carried out under both the objective Bayesian and maximum likelihood approaches; these show that objective Bayesian estimation and prediction have much better statistical properties in a frequentist context and outperform the maximum likelihood method even with small or moderate sample sizes.

7.
We apply a Bayesian approach, through noninformative priors, to analyze a random coefficient regression (RCR) model. The Fisher information matrix, the Jeffreys prior and reference priors are derived for this model. Then, we prove that the corresponding posteriors are proper when the number of full-rank design matrices is greater than or equal to twice the number of regression coefficient parameters plus 1, and that the posterior means for all parameters exist if one additional full-rank design matrix is available. A hybrid Markov chain sampling scheme is developed for computing the Bayesian estimators of the parameters of interest. A small-scale simulation study is conducted to compare the performance of the different noninformative priors. A real data example is also provided, and the data are analyzed by a non-Bayesian method as well as by Bayesian methods with noninformative priors.

8.
A step-stress accelerated degradation test (SSADT) is a useful tool for assessing the lifetime distribution of highly reliable products when the available test items are very few. In this paper, we discuss multi-step SSADT models based on the Wiener process, and we apply the objective Bayesian method to these analytically intractable models to obtain noninformative priors (the Jeffreys prior and two reference priors). Moreover, we show that their posterior distributions are proper, and we propose Gibbs sampling algorithms for Bayesian inference based on the Jeffreys prior and the two reference priors. Finally, we present simulation studies comparing the objective Bayesian estimates with other Bayesian estimates and the maximum likelihood estimates (MLEs). The simulation results demonstrate the superiority of the objective Bayesian method.

9.
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process (GP) prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking mixtures. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function, generalizing existing theory focused on parametric residual distributions. The homoscedastic priors are generalized to allow residual densities to change nonparametrically with predictors by incorporating GPs in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally adaptive manner. The methods are illustrated using simulated and real data applications.

10.
Geyer (J. Roy. Statist. Soc. 56 (1994) 291) proposed a Monte Carlo method to approximate the whole likelihood function. His method is limited by the need to choose a proper reference point. We attempt to improve the method by assigning prior information to the parameters and using the Gibbs output to evaluate the marginal likelihood and its derivatives through a Monte Carlo approximation. Vague priors are assigned to the parameters as well as to the random effects within the Bayesian framework to represent a non-informative setting. The maximum likelihood estimates are then obtained through the Newton-Raphson method. Our method thus serves as a bridge between the Bayesian and classical approaches. The method is illustrated by analyzing the famous salamander mating data with generalized linear mixed models.

11.
This paper describes and tests methods for piecewise polynomial approximation of probability density functions using orthogonal polynomials. Empirical tests indicate that the procedure described in this paper can provide very accurate estimates of probabilities and means when the probability density function cannot be integrated in closed form. Furthermore, the procedure lends itself to approximating convolutions of probability densities. Such approximations are useful in project management, inventory modeling, and reliability calculations, to name a few applications. In these applications, decision makers desire an approximation method that is robust rather than customized. Also, for these applications the most appropriate criterion for accuracy is the average percent error over the support of the density function, as opposed to the conventional average absolute error or average squared error. In this paper, we develop methods for using five well-known orthogonal polynomials for approximating density functions and recommend one of them as giving the best performance overall.
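The basic idea can be sketched with one orthogonal family, the Legendre polynomials, using `numpy.polynomial.legendre`. A smooth bump on [-1, 1] stands in for a density without a closed-form integral; the target function, polynomial degree, and mask threshold are illustrative assumptions, not the paper's tuned procedure.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Stand-in density: an unnormalized Gaussian bump restricted to [-1, 1]
def f(x):
    return np.exp(-8.0 * x**2)

x = np.linspace(-1.0, 1.0, 2001)
coef = leg.legfit(x, f(x), deg=20)   # least-squares Legendre-series fit
approx = leg.legval(x, coef)         # evaluate the fitted series

# Average percent error over the support, the accuracy criterion favored above
mask = f(x) > 1e-3                   # avoid dividing by near-zero tail values
ape = np.mean(np.abs(approx[mask] - f(x)[mask]) / f(x)[mask])
print(f"average percent error: {100 * ape:.5f}%")
```

For a smooth target like this, the Legendre coefficients decay rapidly, so a modest degree already drives the average percent error far below one percent.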

12.
We establish the posterior consistency for parametric, partially observed, fully dominated Markov models. The prior is assumed to assign positive probability to all neighborhoods of the true parameter, for a distance induced by the expected Kullback-Leibler divergence between the parametric family members' Markov transition densities. This assumption is easily checked in general. In addition, we show that the posterior consistency is implied by the consistency of the maximum likelihood estimator. The result is extended to possibly improper priors and non-stationary observations. Finally, we check our assumptions on a linear Gaussian model and a well-known stochastic volatility model.

13.
We consider Bayesian updating of demand in a lost sales newsvendor model with censored observations. In a lost sales environment, where the arrival process is not recorded, the exact demand is not observed if it exceeds the beginning stock level, resulting in censored observations. Adopting a Bayesian approach for updating the demand distribution, we develop expressions for the exact posteriors starting with conjugate priors, for negative binomial, gamma, Poisson and normal distributions. Having shown that non-informative priors result in degenerate predictive densities except for negative binomial demand, we propose an approximation within the conjugate family by matching the first two moments of the posterior distribution. The conjugacy property of the priors also ensures analytical tractability and ease of computation in successive updates. In our numerical study, we show that the posteriors and the predictive demand distributions obtained exactly and with the approximation are very close to each other, and that the approximation works very well from both probabilistic and operational perspectives in a sequential updating setting as well.
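The flavor of the moment-matching approximation can be sketched for one of the conjugate pairs above: Poisson demand with a Gamma prior on its rate. A censored observation (demand at least the stock level S) breaks conjugacy, but matching the first two moments of the exact posterior recovers a member of the conjugate family. The prior parameters and stock level are illustrative, not values from the paper.

```python
import numpy as np
from scipy import stats

a, b = 4.0, 1.0                         # Gamma(shape a, rate b) prior on the Poisson rate
S = 6                                   # stock level; a stockout reveals only that demand >= S

lam = np.linspace(1e-3, 30.0, 6000)
dx = lam[1] - lam[0]

prior = stats.gamma.pdf(lam, a, scale=1.0 / b)
lik = stats.poisson.sf(S - 1, lam)      # censored likelihood P(demand >= S | lam)
post = prior * lik
post /= post.sum() * dx                 # normalize the exact (non-conjugate) posterior

m = np.sum(lam * post) * dx             # posterior mean
v = np.sum((lam - m) ** 2 * post) * dx  # posterior variance

# Moment-matched Gamma approximation: shape = m^2 / v, rate = m / v
approx = stats.gamma.pdf(lam, m**2 / v, scale=v / m)
tv = 0.5 * np.sum(np.abs(post - approx)) * dx
print(f"matched shape={m**2 / v:.3f}, rate={m / v:.3f}, TV gap={tv:.4f}")
```

The small total variation gap mirrors the paper's finding that the moment-matched conjugate posterior tracks the exact one closely.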

14.
The Weibull distribution is one of the most widely used lifetime distributions in reliability engineering. Here, noninformative priors for the ratio of the shape parameters of two Weibull models are introduced. The first criterion used is the asymptotic matching of the coverage probabilities of Bayesian credible intervals with the corresponding frequentist coverage probabilities. We develop probability matching priors for the ratio of the shape parameters using the following matching criteria: quantile matching, matching of the distribution function, highest posterior density matching, and matching via inversion of the test statistics. We obtain one particular prior that meets all the matching criteria. Next, we derive the reference priors for different group orderings. Our findings show that some of the reference priors satisfy a first-order matching criterion and that the one-at-a-time reference prior is a second-order matching prior. Lastly, we perform a simulation study and provide a real-world example.

15.
The solution of nonparametric regression problems is addressed via polynomial approximators and one-hidden-layer feedforward neural approximators. Such families of approximating functions are compared as to both complexity and experimental performance in finding a nonparametric mapping that interpolates a finite set of samples according to the empirical risk minimization approach. The theoretical background that is necessary to interpret the numerical results is presented. Two simulation case studies are analyzed to fully understand the practical issues that may arise in solving such problems. The issues depend on both the approximation capabilities of the approximating functions and the effectiveness of the methodologies that are available to select the tuning parameters, i.e., the coefficients of the polynomials and the weights of the neural networks. The simulation results show that the neural approximators perform better than the polynomial ones with the same number of parameters. However, this superiority can be jeopardized by the presence of local minima, which affects the neural networks but does not affect the polynomial approach.

16.
The status of sequential analysis in Bayesian inference is revisited. The information on the experimental design, including the stopping rule, is one part of the evidence, prior to the sampling. Consequently this information must be incorporated in the prior distribution. This approach allows one to relax the likelihood principle when appropriate. It is illustrated in the case of successive binomial trials. Using Jeffreys' rule, a prior based on the Fisher information and conditional on the design characteristics is derived. The corrected Jeffreys prior, which involves a new distribution called Beta-J, extends the classical Jeffreys priors for the binomial and Pascal sampling models to more general stopping rules. As an illustration, we show that the correction induced on the posterior is proportional to the bias induced by the stopping rule on the maximum likelihood estimator. To cite this article: P. Bunouf, B. Lecoutre, C. R. Acad. Sci. Paris, Ser. I 343 (2006).
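The design dependence of Jeffreys' rule is easy to exhibit for the two classical cases the note extends. With s successes and f failures, the binomial design (n fixed) gives the Beta(1/2, 1/2) Jeffreys prior and hence a Beta(s + 1/2, f + 1/2) posterior, while the Pascal design (stop after s successes) gives the prior proportional to 1/(theta * sqrt(1 - theta)) and hence a Beta(s, f + 1/2) posterior. The same data then yield different posterior means; the counts below are illustrative.

```python
s, f = 7, 3   # 7 successes, 3 failures (illustrative data)

# Binomial design: Jeffreys prior Beta(1/2, 1/2), posterior Beta(s + 1/2, f + 1/2)
mean_binomial = (s + 0.5) / (s + f + 1.0)

# Pascal design: Jeffreys prior ~ theta^(-1) (1 - theta)^(-1/2), posterior Beta(s, f + 1/2)
mean_pascal = s / (s + f + 0.5)

print(f"binomial-design posterior mean: {mean_binomial:.4f}")
print(f"Pascal-design posterior mean:   {mean_pascal:.4f}")
```

This gap between the two posterior means is exactly the kind of stopping-rule effect that the corrected Jeffreys prior is designed to account for systematically.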

17.
A maximum a posteriori method has been developed for Gaussian priors over infinite-dimensional function spaces. In particular, variational equations based on a generalisation of the representer theorem and an equivalent optimisation problem are presented. This amounts to a generalisation of the ordinary Bayesian maximum a posteriori approach which is nontrivial as infinite-dimensional domains do not admit any probability densities. Instead of the gradient of the density, the logarithmic gradient of the probability distribution is used. Galerkin methods are proposed for the approximate solution of the variational equations. In summary, a framework and some foundations are provided which are required for the application of numerical approximation to an important class of machine learning problems.

18.
Nine classes of fuzzy controllers with the universal function-approximation property are constructed, all of them built from fuzzy implication operators. A truck-reversing simulation demonstrates that fuzzy controllers with the universal approximation property can be used in practical fuzzy control systems.

19.
It is well known that nonlinear approximation has an advantage over linear schemes in the sense that it provides comparable approximation rates to those of the linear schemes, but for a larger class of approximands. This was established for spline approximations and for wavelet approximations, and more recently by DeVore and Ron (in press) [2] for homogeneous radial basis function (surface spline) approximations. However, no such results are known for the Gaussian function, the preferred kernel in machine learning and several engineering problems. We introduce and analyze in this paper a new algorithm for approximating functions using translates of Gaussian functions with varying tension parameters. At heart it employs the strategy for nonlinear approximation of DeVore-Ron, but it selects kernels by a method that is not straightforward. The crux of the difficulty lies in the necessity to vary the tension parameter in the Gaussian function spatially according to local information about the approximand: error analysis of Gaussian approximation schemes with varying tension is, by and large, an elusive target for approximators. We show that our algorithm is suitably optimal in the sense that it provides approximation rates similar to other established nonlinear methodologies like spline and wavelet approximations. As expected and desired, the approximation rates can be as high as needed and are essentially saturated only by the smoothness of the approximand.

20.
A variety of methods for approximating probability density functions on the positive half-line are presented and discussed. In particular, the method of moments and orthogonal expansion methods are studied. We give a new, computational proof that continuous probability densities vanishing at infinity can be uniformly approximated by generalized hyper-exponential densities. The same denseness property is also shown to hold for families of densities expressible as sums of Erlang densities with common fixed rate parameter. This research was supported in part by Air Force Office of Scientific Research Contract F49620-86-C-0022.
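The Erlang-sum denseness result can be checked numerically with the classical construction: weight the k-th Erlang density (shape k, common rate) by the target's probability of the cell ((k-1)/rate, k/rate], and let the common rate grow. The Gamma(2.5) target and the rates below are illustrative stand-ins, not the paper's examples.

```python
import numpy as np
from scipy import stats

target = stats.gamma(2.5)                 # stand-in density on the positive half-line

def erlang_mixture_pdf(x, rate, K):
    ks = np.arange(1, K + 1)
    # Weight on the k-th Erlang = target mass of the cell ((k-1)/rate, k/rate]
    w = target.cdf(ks / rate) - target.cdf((ks - 1) / rate)
    dens = np.array([stats.gamma.pdf(x, k, scale=1.0 / rate) for k in ks])
    return w @ dens

x = np.linspace(0.01, 12.0, 1200)
for rate in (5, 20):
    err = np.max(np.abs(erlang_mixture_pdf(x, rate, K=15 * rate) - target.pdf(x)))
    print(f"rate {rate:2d}: sup error {err:.4f}")
```

The sup-norm error shrinks as the common rate increases, which is the uniform approximation property the abstract asserts.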
