Similar Documents
A total of 20 similar documents were retrieved.
1.
Random processes for which a single sample path of data is available on a fine time scale abound in many areas, including finance and genetics. An effective way to model such data is to consider a suitable continuous-time-scale analog, say X_t, for the underlying process. We consider three diffusion models for the process X_t and address model selection under improper priors. Specifically, fractional and intrinsic Bayes factors (FBF and IBF) for model selection are considered. Here, we focus on the asymptotic stability of the IBFs and FBFs for comparing these models. Specifically, we propose to employ certain novel transformations of the data in order to ensure the asymptotic stability of the IBFs. While we use different transformations for pairwise comparisons of the models, we also show that a single common transformation can be used when simultaneously comparing all three models. We then demonstrate that, when FBFs are used to compare these models, we may have to employ different, model-specific training fractions in order to achieve asymptotic stability of the FBFs.
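A diffusion observed on a fine time grid can be illustrated with an Euler-Maruyama discretization; the drift and diffusion functions below (an Ornstein-Uhlenbeck-type example) are illustrative assumptions, not the three models compared in the abstract.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T, n, rng):
    """Simulate one sample path of dX_t = drift(X_t) dt + diffusion(X_t) dW_t
    on [0, T] with n equally spaced steps."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
    return x

rng = np.random.default_rng(0)
# Mean-reverting drift, constant volatility (illustrative parameter values)
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3, 1.0, 1.0, 1000, rng)
```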

2.
A Bayesian shrinkage estimate for the mean in the generalized linear empirical Bayes model is proposed. The posterior mean under the empirical Bayes model has a shrinkage pattern. The shrinkage factor is estimated by a Bayesian method with the regression coefficients fixed at the maximum extended quasi-likelihood estimates. This approach yields a Bayesian shrinkage estimate of the mean which is numerically quite tractable. The method is illustrated with a data set, and the estimate is compared with an earlier one based on an empirical Bayes method. In the special case of the homogeneous model with exchangeable priors, the performance of the Bayesian estimate is illustrated by computer simulations. The simulation results show an improvement of the Bayesian estimate over the empirical Bayes estimate in some situations.
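The shrinkage pattern of a posterior mean can be sketched in the simplest conjugate normal-normal case; this is a stand-in for the generalized linear setting of the abstract, and the names and setup below are illustrative assumptions.

```python
import numpy as np

def shrinkage_posterior_mean(y, prior_mean, sigma2, tau2):
    """Posterior mean for y_i ~ N(theta_i, sigma2) with theta_i ~ N(prior_mean, tau2):
    each observation is pulled toward the prior mean by the shrinkage factor b."""
    b = sigma2 / (sigma2 + tau2)  # shrinkage factor, in (0, 1)
    return b * prior_mean + (1.0 - b) * np.asarray(y, dtype=float)

est = shrinkage_posterior_mean([0.0, 4.0], prior_mean=2.0, sigma2=1.0, tau2=1.0)
# with sigma2 = tau2, each estimate lies halfway between observation and prior mean
```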

3.
Lerchez (Ann. Statist. 14, 1986b, 1030–1048) considered a sequential Bayes-test problem for the drift of the Wiener process. In the case of a normal prior an o(c)-optimal test could be constructed. In this paper a new martingale approach is presented, which provides an expansion of the Bayes risk for a one-sided SPRT. Relations to the optimal Bayes risk are given, which show the o(c)-optimality for suitable nonnormal priors.

4.
A two-parameter distribution was revisited by Chen (2000) [7]. This distribution can have a bathtub-shaped or increasing failure rate function, which enables it to fit real lifetime data sets. Maximum likelihood and Bayes estimates of the two unknown parameters are discussed in this paper. In the Bayes case it is assumed that the unknown parameters have gamma priors. Explicit forms of the Bayes estimators cannot be obtained, so different approximations are used to establish point estimates and two-sided Bayesian probability intervals for the parameters. Monte Carlo simulations are used to compare the maximum likelihood estimates with the approximate Bayes estimates obtained under non-informative prior assumptions. Analysis of a real data set is also presented for illustrative purposes.
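For reference, a sketch of the Chen (2000) distribution in its standard parameterization, with survival function S(t) = exp(lam * (1 - exp(t**beta))); its hazard is bathtub-shaped when beta < 1. The parameter values below are illustrative assumptions.

```python
import numpy as np

def chen_cdf(t, lam, beta):
    """CDF of the Chen (2000) distribution: F(t) = 1 - exp(lam * (1 - exp(t**beta)))."""
    t = np.asarray(t, dtype=float)
    return 1.0 - np.exp(lam * (1.0 - np.exp(t ** beta)))

def chen_hazard(t, lam, beta):
    """Hazard rate h(t) = lam * beta * t**(beta-1) * exp(t**beta);
    bathtub-shaped for beta < 1, increasing for beta >= 1."""
    t = np.asarray(t, dtype=float)
    return lam * beta * t ** (beta - 1.0) * np.exp(t ** beta)
```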

5.
We consider estimation of loss for generalized Bayes or pseudo-Bayes estimators of a multivariate normal mean vector, θ. In 3 and higher dimensions, the MLE X is UMVUE and minimax but is inadmissible. It is dominated by the James-Stein estimator and by many others. Johnstone (1988, On inadmissibility of some unbiased estimates of loss, Statistical Decision Theory and Related Topics, IV (eds. S. S. Gupta and J. O. Berger), Vol. 1, 361–379, Springer, New York) considered the estimation of loss for the usual estimator X and the James-Stein estimator. He found improvements over the Stein unbiased estimator of risk. In this paper, for a generalized Bayes point estimator of θ, we compare generalized Bayes estimators to unbiased estimators of loss. We find, somewhat surprisingly, that the unbiased estimator often dominates the corresponding generalized Bayes estimator of loss for priors which give minimax estimators in the original point estimation problem. In particular, we give a class of priors for which the generalized Bayes estimator of θ is admissible and minimax but for which the unbiased estimator of loss dominates the generalized Bayes estimator of loss. We also give a general inadmissibility result for a generalized Bayes estimator of loss. Research supported by NSF Grant DMS-97-04524.
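For context, a sketch of the James-Stein estimator that dominates the MLE X in p >= 3 dimensions, together with Stein's unbiased estimate of its risk; the generalized Bayes estimators studied in the paper are not reproduced here.

```python
import numpy as np

def james_stein(x):
    """James-Stein estimator of a p-variate normal mean (p >= 3),
    shrinking the observation x toward the origin."""
    x = np.asarray(x, dtype=float)
    p = x.size
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def sure_james_stein(x):
    """Stein's unbiased estimate of the risk of the James-Stein estimator;
    the MLE X itself has constant unbiased risk estimate p."""
    x = np.asarray(x, dtype=float)
    p = x.size
    return p - (p - 2) ** 2 / np.dot(x, x)
```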

6.
Motivated by the problem of minefield detection, we investigate the problem of classifying mixtures of spatial point processes. In particular we are interested in testing the hypothesis that a given dataset was generated by a Poisson process versus a mixture of a Poisson process and a hard-core Strauss process. We propose testing this hypothesis by comparing the evidence for each model by using partial Bayes factors. We use the term partial Bayes factor to describe a Bayes factor, a ratio of integrated likelihoods, based on only part of the available information, namely that information contained in a small number of functionals of the data. We applied our method to both real and simulated data, and considering the difficulty of classifying these point patterns by eye, our approach overall produced good results.

7.
An objective Bayesian model selection procedure is proposed for the one-way analysis of variance under homoscedasticity. Bayes factors for the usual default prior distributions are not well defined, and thus Bayes factors for intrinsic priors are used instead. The intrinsic priors depend on a training sample, which is typically a unique random vector; however, for the homoscedastic ANOVA this is not the case. Nevertheless, we are able to illustrate that the Bayes factors for the intrinsic priors are not sensitive to the minimal training sample chosen; furthermore, we propose an alternative pooled prior that yields similar Bayes factors. Bayesian computing methods are required to compute these Bayes factors when the sample sizes of the involved populations are large. Finally, a one-to-one relationship, which we call the calibration curve, between the posterior probability of the null hypothesis and the classical p value is found, thus allowing comparisons between these two measures of evidence. The behavior of the calibration curve as a function of the sample size is studied and conclusions relating both procedures are stated.
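A well-known calibration in the same spirit (not the intrinsic-prior calibration curve derived in the paper) is the Sellke-Bayarri-Berger bound -e*p*log(p), which converts a p value into a lower bound on the Bayes factor in favor of the null and hence on its posterior probability:

```python
import math

def bf_lower_bound(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favor of H0,
    valid for p < 1/e; the bound is 1 otherwise."""
    return -math.e * p * math.log(p) if p < 1.0 / math.e else 1.0

def posterior_prob_null_lb(p, prior_null=0.5):
    """Lower bound on P(H0 | data) implied by the Bayes-factor bound."""
    odds = (prior_null / (1.0 - prior_null)) * bf_lower_bound(p)
    return odds / (1.0 + odds)

# a p value of 0.05 corresponds to P(H0 | data) of at least about 0.29
```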

8.
This paper considers the problem of testing the common mean of several normal distributions. We propose a solution based on a Bayesian model selection procedure in which no subjective input is required. We construct proper priors for testing hypotheses about the common mean based on measures of divergence between the competing models; this is the method of divergence-based (DB) priors (Bayarri and García-Donato in J R Stat Soc B 70:981–1003, 2008). The behavior of the Bayes factors based on the DB priors is compared with the fractional Bayes factor in a simulation study and with existing tests in two real examples.

9.
We study the robustness of using prior information in Bayesian statistical analysis. First, a general method is used to study the robustness of the prior distribution of the failure rate in the exponential lifetime distribution. Then, using characteristic properties of the Gamma distribution function, and taking the posterior expected loss under squared error loss as the criterion, the optimal robust Bayes interval for the failure rate is discussed, and the optimal robust Bayes point estimate of the failure rate is given.

10.
The article develops a hybrid variational Bayes (VB) algorithm that combines the mean-field and stochastic linear regression fixed-form VB methods. The new estimation algorithm can be used to approximate any posterior without relying on conjugate priors. We propose a divide and recombine strategy for the analysis of large datasets, which partitions a large dataset into smaller subsets and then combines the variational distributions that have been learned in parallel on each separate subset using the hybrid VB algorithm. We also describe an efficient model selection strategy using cross-validation, which is straightforward to implement as a by-product of the parallel run. The proposed method is applied to fitting generalized linear mixed models. The computational efficiency of the parallel and hybrid VB algorithm is demonstrated on several simulated and real datasets. Supplementary material for this article is available online.
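The recombination step can be sketched in the simplest case of Gaussian variational approximations learned on each subset: multiplying the subset densities amounts to precision-weighted averaging. This is a schematic of the idea only; the hybrid algorithm's actual combination rule is not reproduced here.

```python
import numpy as np

def combine_gaussians(means, variances):
    """Combine independent Gaussian approximations q_k(theta) = N(m_k, v_k)
    by multiplying their densities: precisions add, means are precision-weighted."""
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    prec = np.sum(1.0 / v)
    mean = np.sum(m / v) / prec
    return mean, 1.0 / prec

mean, var = combine_gaussians([0.0, 2.0], [1.0, 1.0])
# with equal variances the combined mean is the average, and the variance halves
```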

11.
Bayesian reliability estimation under multi-source prior information
This paper considers the situation in which multiple sources of prior information exist, taking the binomial distribution as an example. First, the various kinds of prior information are converted into different constraint conditions, and the maximum entropy principle is used to derive the prior distribution corresponding to each kind of prior information. These distributions are then synthesized into a final prior distribution. Finally, the posterior distribution of the reliability parameter is obtained from the system's life-test data and Bayesian inference is carried out. A simulation example is given to illustrate the effectiveness of the method.
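The maximum-entropy step can be sketched for a single mean constraint on a binomial success probability: the maximum-entropy density on [0, 1] with a fixed mean is proportional to exp(lam * x), with lam chosen to match the constraint. This is a simplified, illustrative version of the idea, not the paper's full construction.

```python
import math

def maxent_mean(lam, n=10000):
    """Mean of the density proportional to exp(lam * x) on [0, 1],
    computed by midpoint-rule numerical integration."""
    xs = [(i + 0.5) / n for i in range(n)]
    w = [math.exp(lam * x) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

def solve_maxent(target, lo=-50.0, hi=50.0, tol=1e-8):
    """Bisection for lam such that the maxent density has the target mean;
    maxent_mean is strictly increasing in lam."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if maxent_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```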

12.
In this paper, the objective Bayesian method is applied to a competing risks model involving both catastrophic and degradation failures. Modeling soft failures by a Wiener degradation process and hard failures by a Weibull distribution, we obtain noninformative priors (the Jeffreys prior and two reference priors) for the parameters. Moreover, we show that the resulting posterior distributions have good properties, and we propose Gibbs sampling algorithms for Bayesian inference based on the Jeffreys prior and the two reference priors. Simulation studies are conducted to illustrate the advantages of the objective Bayesian method. Finally, we apply our methods to two real data examples and compare the objective Bayesian estimates with other estimates.
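The data-generating mechanism described above can be sketched as follows: soft failure is the first passage of a linear-drift Wiener process over a threshold, hard failure is an independent Weibull draw, and the observed failure time is the minimum. All parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_unit(mu, sigma, level, shape, scale, dt, t_max, rng):
    """One unit under competing risks: Wiener degradation X(t) = mu*t + sigma*B(t)
    causes soft failure at first passage of `level`; hard failure time is Weibull."""
    t_hard = scale * rng.weibull(shape)
    x, t, t_soft = 0.0, 0.0, t_max
    while t < t_max:
        t += dt
        x += mu * dt + sigma * np.sqrt(dt) * rng.normal()
        if x >= level:
            t_soft = t
            break
    t_obs = min(t_soft, t_hard)
    cause = "soft" if t_soft <= t_hard else "hard"
    return t_obs, cause

rng = np.random.default_rng(1)
t_obs, cause = simulate_unit(mu=1.0, sigma=0.2, level=2.0, shape=1.5,
                             scale=5.0, dt=0.01, t_max=50.0, rng=rng)
```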

13.
An objective Bayesian procedure for testing in the two-way analysis of variance is proposed. In the classical methodology the main effects of the two factors and the interaction effect are formulated as linear contrasts between means of normal populations, and hypotheses about the existence of such effects are tested. In this paper, for the first time, these hypotheses are formulated as objective Bayesian model selection problems. Our development covers both homoscedasticity and heteroscedasticity, providing exact solutions in both cases. Bayes factors are the key tool for choosing between the models under comparison, but for the usual default prior distributions they are not well defined. To avoid this difficulty, Bayes factors for intrinsic priors are proposed and applied in this setting to test the existence of the main effects and the interaction effect. The method is illustrated with an example and compared with the classical method. For this example, both approaches point in the same direction, although the large p value for interaction (0.79) only warns us against rejecting the null, whereas the posterior probability of the null (0.95) is conclusive.

14.
We give a sufficient condition for admissibility of generalized Bayes estimators of the location vector of spherically symmetric distribution under squared error loss. Compared to the known results for the multivariate normal case, our sufficient condition is very tight and is close to being a necessary condition. In particular, we establish the admissibility of generalized Bayes estimators with respect to the harmonic prior and priors with slightly heavier tail than the harmonic prior. We use the theory of regularly varying functions to construct a sequence of smooth proper priors approaching an improper prior fast enough for establishing the admissibility. We also discuss conditions of minimaxity of the generalized Bayes estimator with respect to the harmonic prior.

15.
In this paper a Bayesian statistical analysis of masked data is considered based on the Pareto distribution. The likelihood function is simplified by introducing auxiliary variables that describe the causes of failure. Three Bayesian approaches (Bayes with subjective priors, hierarchical Bayes, and empirical Bayes) are used to estimate the parameters, and we compare these methods by analyzing a real data set. Finally, we discuss a method for avoiding the choice of the hyperparameters in the prior distributions.

16.
As a flexible Bayesian test criterion for nested point null hypotheses, asymmetric and multiple Bayes factors are introduced in the form of a modified Savage-Dickey density ratio. This leads to a simple method for obtaining pairwise comparisons of hypotheses in a statistical experiment with a partition on the parameter space. The method derives from the fact that, in general, the asymmetric Bayes factor can be written as the product of the Savage-Dickey ratio and a correction factor, where both terms are easily estimated by means of posterior simulation. The method is illustrated with analyses of a censored-data problem and a serial-correlation problem. For these cases, the method is straightforward to specify distributionally and to implement computationally, with output readily adapted to the required tests.
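The plain Savage-Dickey density ratio, which the asymmetric Bayes factor modifies, can be sketched in a conjugate normal example where both densities are available in closed form. The model and prior here are illustrative assumptions, not the censored-data or serial-correlation settings of the paper.

```python
import math

def normal_pdf(x, mu, var):
    """Density of N(mu, var) evaluated at x."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def savage_dickey_bf01(ybar, n, tau2):
    """Bayes factor in favor of H0: theta = 0 for y_i ~ N(theta, 1), i = 1..n,
    with prior theta ~ N(0, tau2): the ratio of posterior to prior density at 0."""
    v = 1.0 / (n + 1.0 / tau2)   # posterior variance
    m = v * n * ybar             # posterior mean
    return normal_pdf(0.0, m, v) / normal_pdf(0.0, 0.0, tau2)
```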

17.
Summary. The Bayes method is seldom applied to nonparametric statistical problems, for the reason that it is hard to find mathematically tractable prior distributions on a set of probability measures. However, it is found that the Dirichlet process generates randomly a family of probability distributions which can be taken as a family of prior distributions for an application of the Bayes method to such problems. This paper presents a Bayesian analysis of a nonparametric problem of selecting a distribution with the largest pth quantile value, from k ≥ 2 given distributions. It is assumed a priori that the given distributions have been generated from a Dirichlet process. This work was supported by the U.S. Office of Naval Research under Contract No. 00014-75-C-0451.

18.
In this article, we propose an improvement on the sequential updating and greedy search (SUGS) algorithm for fast fitting of Dirichlet process mixture models. The SUGS algorithm provides a means for very fast approximate Bayesian inference for mixture data which is particularly of use when datasets are so large that many standard Markov chain Monte Carlo (MCMC) algorithms cannot be applied efficiently, or take a prohibitively long time to converge. In particular, these ideas are used to initially interrogate the data, and to refine models such that one can potentially apply exact data analysis later on. SUGS relies upon sequentially allocating data to clusters and proceeding with an update of the posterior on the subsequent allocations and parameters which assumes this allocation is correct. Our modification softens this approach, by providing a probability distribution over allocations, with a similar computational cost; this approach has an interpretation as a variational Bayes procedure and hence we term it variational SUGS (VSUGS). It is shown in simulated examples that VSUGS can outperform, in terms of density estimation and classification, a version of the SUGS algorithm in many scenarios. In addition, we present a data analysis for flow cytometry data, and SNP data via a three-class Dirichlet process mixture model, illustrating the apparent improvement over the original SUGS algorithm.
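The softened allocation step can be sketched as follows: instead of greedily assigning each point to the single best cluster, keep the full probability vector over existing clusters plus a new one, with CRP-style weights multiplied by the predictive likelihood. This is a schematic of the idea, not the authors' exact update.

```python
import numpy as np

def soft_allocation(log_pred, counts, alpha):
    """Probabilities of allocating the next point to each existing cluster
    (weight = cluster count) or to a new cluster (weight = alpha), each
    multiplied by the point's predictive likelihood. `log_pred` has one entry
    per existing cluster plus a final entry for a new cluster."""
    counts = np.asarray(counts, dtype=float)
    log_w = np.log(np.append(counts, alpha)) + np.asarray(log_pred, dtype=float)
    log_w -= log_w.max()          # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# two existing clusters (sizes 8 and 2) plus the "new cluster" option
probs = soft_allocation(log_pred=[-1.0, -2.5, -4.0], counts=[8, 2], alpha=1.0)
```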

19.
This article introduces a model that can be considered as an autoregressive extension of the ordered probit model. For parameter estimation we first develop a standard Gibbs sampler which however exhibits bad convergence properties. Using a special transformation group on the sample space we develop a grouped move multigrid Monte Carlo (GM-MGMC) Gibbs sampler and illustrate its fundamental superiority in convergence compared to the standard sampler. To be able to compare the autoregressive ordered probit (AOP) model to other models we further provide an estimation procedure for the marginal likelihood which enables us to compute Bayes factors. We apply the new model to absolute price changes of the IBM stock traded on December 4, 2000, at the New York Stock Exchange. To detect whether the data contain an autoregressive structure we then fit the AOP model as well as the common ordered probit (OP) model to the data. By estimating the corresponding Bayes factor we show that the AOP model fits the data decisively better than the common OP model.

20.
Under a normal-inverse Wishart prior, the empirical Bayes estimation of the parameters in the multivariate linear model and its optimality are studied. When the prior distribution contains unknown parameters, empirical Bayes estimators of the regression coefficient matrix and the error covariance matrix are constructed, and under the Bayes mean squared error (BMSE) criterion and the Bayes mean squared error matrix (BMSEM) criterion, the empirical Bayes estimators are shown to be superior to the least squares estimator. Finally, a Monte Carlo simulation study is carried out to further verify the theoretical results.
