Similar Literature
 20 similar records found.
1.
We present a Bayesian decision-theoretic approach for developing replacement strategies. In so doing, we consider a semiparametric model to describe the failure characteristics of systems, specifying a nonparametric form for the cumulative intensity function and taking the effect of covariates into account through a parametric form. Use of a gamma process prior for the cumulative intensity function complicates the Bayesian analysis when the updating is based on failure count data. We develop a Bayesian analysis of the model using Markov chain Monte Carlo methods and determine replacement strategies. Adoption of Markov chain Monte Carlo methods involves a data augmentation algorithm. We show the implementation of our approach using actual data from railroad tracks. Copyright © 2016 John Wiley & Sons, Ltd.
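A minimal sketch of the kind of update this setup admits, assuming a discretized gamma process prior (independent gamma increments) and Poisson failure counts with a log-linear covariate effect; all names and constants are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_sys systems, each observed over n_int time intervals,
# with one covariate z per system and a log-linear covariate effect beta.
n_sys, n_int = 5, 8
z = rng.normal(size=n_sys)
true_beta, true_dL = 0.5, rng.gamma(2.0, 0.5, size=n_int)
counts = rng.poisson(true_dL[None, :] * np.exp(true_beta * z)[:, None])

# Gamma process prior, discretized to independent gamma increments.
c, delta_m = 1.0, np.full(n_int, 1.0)
beta, dL = 0.0, np.ones(n_int)
step, n_iter = 0.3, 5000
beta_draws = np.empty(n_iter)

def log_post_beta(b):
    rate = dL[None, :] * np.exp(b * z)[:, None]
    return np.sum(counts * np.log(rate) - rate) - 0.5 * b**2 / 10.0  # N(0, 10) prior

for it in range(n_iter):
    # Gibbs step: gamma-Poisson conjugacy gives the baseline increments in closed form.
    shape = c * delta_m + counts.sum(axis=0)
    rate = c + np.exp(beta * z).sum()
    dL = rng.gamma(shape, 1.0 / rate)
    # Random-walk Metropolis step for the covariate coefficient.
    prop = beta + step * rng.normal()
    if np.log(rng.uniform()) < log_post_beta(prop) - log_post_beta(beta):
        beta = prop
    beta_draws[it] = beta

print("posterior mean of beta:", beta_draws[1000:].mean())
```

The gamma-Poisson conjugacy makes the baseline increments a closed-form Gibbs step, so only the covariate coefficient needs a Metropolis move.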

2.
We consider Bayesian inference when priors and likelihoods are both available for inputs and outputs of a deterministic simulation model. This problem is fundamentally related to the issue of aggregating (i.e., pooling) expert opinion. We survey alternative strategies for aggregation, then describe computational approaches for implementing pooled inference for simulation models. Our approach (1) numerically transforms all priors to the same space; (2) uses log pooling to combine priors; and (3) draws standard Bayesian inference. We use importance sampling methods, including an iterative, adaptive approach that is more flexible and in some instances less biased than a simpler alternative. Our exploratory examples are the first steps toward extending the approach to highly complex and even noninvertible models.
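A minimal sketch of steps (2) and (3), assuming two normal expert priors on a scalar parameter and a normal likelihood; the pooling weight `w` and all densities are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

prior1 = stats.norm(0.0, 1.0)          # expert 1's prior on theta
prior2 = stats.norm(1.0, 2.0)          # expert 2's prior on theta
w = 0.6                                # pooling weight for expert 1

def log_pooled_prior(theta):
    # Log (geometric) pool: p(theta) proportional to p1^w * p2^(1-w).
    return w * prior1.logpdf(theta) + (1 - w) * prior2.logpdf(theta)

data = rng.normal(0.8, 1.0, size=20)   # hypothetical observations

def log_lik(theta):
    return stats.norm(theta, 1.0).logpdf(data[:, None]).sum(axis=0)

# Importance sampling with a broad, heavy-tailed proposal.
proposal = stats.t(df=4, loc=0.5, scale=2.0)
draws = proposal.rvs(size=20000, random_state=rng)
logw = log_pooled_prior(draws) + log_lik(draws) - proposal.logpdf(draws)
wts = np.exp(logw - logw.max())
wts /= wts.sum()

print("pooled-posterior mean:", np.sum(wts * draws))
print("effective sample size:", 1.0 / np.sum(wts**2))
```

Log pooling multiplies densities raised to the weights, so it is convenient to work entirely on the log scale and let importance sampling absorb the unknown normalizing constant.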

3.
4.
We demonstrate that, if there are sufficiently many players, any Bayesian equilibrium of an incomplete information game can be “ε-purified”. That is, close to any Bayesian equilibrium there is an approximate Bayesian equilibrium in pure strategies. Our main contribution is obtaining this result for games with a countable set of pure strategies. In order to do so we derive a mathematical result, in the spirit of the Shapley–Folkman Theorem, permitting countable strategy sets. Our main assumption is a “large game property,” dictating that the actions of relatively small subsets of players cannot have large effects on the payoffs of other players. E. Cartwright and M. Wooders are indebted to Phillip Reny, Frank Page and two anonymous referees for helpful comments.

5.
We present large deviation results for estimators of unknown probabilities that satisfy a suitable exponential decay condition. These results extend the large deviation estimates given in Macci and Petrella (2006). Furthermore, we propose a classical approach that differs from the one presented in Ganesh et al. (1998), and, unlike that paper, we cannot conclude that the Bayesian approach is more conservative.

6.
This paper introduces Bayesian optimal design methods for step-stress accelerated life test planning with one accelerating variable, when the acceleration model is linear in the accelerating variable or a function of it, based on censored data from a log-location-scale distribution. In order to find the optimal plan, we propose different Monte Carlo simulation algorithms for different Bayesian optimality criteria. We present an example using the lognormal life distribution with Type-I censoring to illustrate the different Bayesian methods and to examine the effects of the prior distribution and sample size. Comparing the different Bayesian methods, we suggest adopting the B1(τ) method when the sample size is large and the B2(τ) method when it is small. Finally, the Bayesian optimal plans are compared with the plan obtained by the maximum likelihood method.
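A minimal preposterior sketch of this kind of design search, substituting an exponential-lifetime simplification (piecewise-constant hazard under cumulative exposure) for a full log-location-scale model, and a grid posterior for MCMC; the criterion shown (expected posterior variance of the stress coefficient) and all priors, stresses, and constants are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
s1, s2, T_c, n_units = 0.0, 1.0, 10.0, 30    # standardized stresses, censoring time

# Grid prior on (b0, b1), with log-hazard log(lam) = b0 + b1 * s.
b0_grid = np.linspace(-3.0, 0.0, 40)
b1_grid = np.linspace(0.0, 2.0, 40)
B0, B1 = np.meshgrid(b0_grid, b1_grid, indexing="ij")
log_prior = stats.norm(-1.5, 0.7).logpdf(B0) + stats.norm(1.0, 0.5).logpdf(B1)

def simulate(b0, b1, tau):
    # Inverse-CDF draw from the piecewise-exponential step-stress model.
    lam1, lam2 = np.exp(b0 + b1 * s1), np.exp(b0 + b1 * s2)
    e = rng.exponential(size=n_units)
    t = np.where(e <= lam1 * tau, e / lam1, tau + (e - lam1 * tau) / lam2)
    d = t < T_c                              # failure indicator (Type-I censoring)
    return np.minimum(t, T_c), d

def post_var_b1(t, d, tau):
    lam1, lam2 = np.exp(B0 + B1 * s1), np.exp(B0 + B1 * s2)
    ll = np.zeros_like(B0)
    for ti, di in zip(t, d):
        H = lam1 * min(ti, tau) + lam2 * max(0.0, ti - tau)   # cumulative hazard
        log_haz = np.log(lam1) if ti <= tau else np.log(lam2)
        ll += di * log_haz - H
    lp = log_prior + ll
    w = np.exp(lp - lp.max()); w /= w.sum()
    m = np.sum(w * B1)
    return np.sum(w * (B1 - m) ** 2)

for tau in [2.0, 4.0, 6.0, 8.0]:
    crits = []
    for _ in range(30):                      # prior draws, kept small for speed
        b0, b1 = rng.normal(-1.5, 0.7), rng.normal(1.0, 0.5)
        t, d = simulate(b0, b1, tau)
        crits.append(post_var_b1(t, d, tau))
    print(f"tau={tau}: expected posterior var(b1) = {np.mean(crits):.4f}")
```

The candidate stress-change time with the smallest prior-averaged criterion would be selected; replacing the grid posterior with MCMC and the exponential model with a lognormal one recovers the structure of the paper's algorithms.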

7.
This article describes advances in statistical computation for large-scale data analysis in structured Bayesian mixture models via graphics processing unit (GPU) programming. The developments are partly motivated by computational challenges arising in fitting models of increasing heterogeneity to increasingly large datasets. An example context concerns common biological studies using high-throughput technologies that generate many very large datasets and require increasingly high-dimensional mixture models with large numbers of mixture components. We outline important strategies and processes for GPU computation in Bayesian simulation and optimization approaches, give examples of the benefits of GPU implementations in terms of processing speed and scale-up in ability to analyze large datasets, and provide a detailed, tutorial-style exposition that will benefit readers interested in developing GPU-based approaches in other statistical models. Novel, GPU-oriented approaches to modifying existing algorithms and software design can lead to vast speed-ups and, critically, enable statistical analyses that would otherwise not be performed due to compute time limitations in traditional computational environments. Supplemental materials are provided with all source code, example data, and details that will enable readers to implement and explore the GPU approach in this mixture modeling context.
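A minimal sketch of the data-parallel kernel such GPU implementations accelerate: the n-by-k responsibility computation of a Gaussian mixture, written so the same vectorized code can run on a GPU by swapping the array backend (the `cupy` swap noted in the comment is a hypothetical drop-in, not code from the article); the spherical-Gaussian model and constants are illustrative:

```python
import numpy as np  # import cupy as np  <- hypothetical GPU drop-in

rng = np.random.default_rng(3)
n, k, p = 100_000, 8, 5                       # observations, components, dimensions

X = rng.normal(size=(n, p))
means = rng.normal(size=(k, p))
log_pi = np.log(np.full(k, 1.0 / k))          # mixing weights
var = 1.0                                     # shared spherical variance

# log N(x | mu_j, var*I) for every (i, j) pair, fully vectorized:
sq = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)   # shape (n, k)
log_dens = -0.5 * (sq / var + p * np.log(2 * np.pi * var))

# Responsibilities via a numerically stable log-sum-exp.
log_num = log_pi[None, :] + log_dens
log_norm = log_num.max(axis=1, keepdims=True)
resp = np.exp(log_num - log_norm)
resp /= resp.sum(axis=1, keepdims=True)

print("responsibility matrix shape:", resp.shape)  # (100000, 8)
```

Because every (observation, component) pair is independent, this step maps directly onto thousands of GPU threads, which is where the speed-ups reported for large mixture models come from.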

8.
This paper derives a particle filter algorithm within the Dempster–Shafer framework. Particle filtering is a well-established Bayesian Monte Carlo technique for estimating the current state of a hidden Markov process using a fixed number of samples. When dealing with incomplete information or qualitative assessments of uncertainty, however, Dempster–Shafer models, with their explicit representation of ignorance, often turn out to be more appropriate than Bayesian models. The contribution of this paper is twofold. First, the Dempster–Shafer formalism is applied to the problem of maintaining a belief distribution over the state space of a hidden Markov process by deriving the corresponding recursive update equations, which turn out to be a strict generalization of Bayesian filtering. Second, it is shown how the solution of these equations can be efficiently approximated via particle filtering based on importance sampling, which makes the Dempster–Shafer approach tractable even for large state spaces. The performance of the resulting algorithm is compared to exact evidential as well as Bayesian inference.
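A minimal sketch of the Bayesian special case the evidential filter generalizes: a bootstrap particle filter with importance weighting and resampling. The linear-Gaussian model is an illustrative assumption chosen to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_part = 50, 1000
a, q, r = 0.9, 0.5, 1.0                # transition coefficient, noise std devs

# Simulate a hidden Markov process and noisy observations.
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.normal()
    y[t] = x[t] + r * rng.normal()

particles = rng.normal(size=n_part)
est = np.zeros(T)
for t in range(1, T):
    # Propagate through the transition kernel (the bootstrap proposal).
    particles = a * particles + q * rng.normal(size=n_part)
    # Weight by the observation likelihood.
    logw = -0.5 * ((y[t] - particles) / r) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = np.sum(w * particles)
    # Multinomial resampling keeps the particle count fixed.
    particles = particles[rng.choice(n_part, size=n_part, p=w)]

print("RMSE vs truth:", np.sqrt(np.mean((est[1:] - x[1:]) ** 2)))
```

The evidential filter of the paper replaces the single weighted sample set with mass assignments over sets of states; the propagate-weight-resample skeleton above is the structure it generalizes.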

9.
This paper investigates strategy selection for a participant in a two-party non-cooperative conflict involving both uncertainty and multiple goals. Uncertainty arises from the players not knowing the utility functions. Multiple objectives appear because the payoff is a vector of prizes and the players attempt to attain various goals for each prize separately. The main objective is to present a fuzzy set/fuzzy programming solution concept for the conflict situation. In doing so, we compare a Bayesian player to one that employs fuzzy set techniques, and we point out some of the advantages of the fuzzy set method. The necessary computations in the fuzzy set method are explained in detail through an example.

10.
We investigate the class of σ-stable Poisson–Kingman random probability measures (RPMs) in the context of Bayesian nonparametric mixture modeling. This is a large class of discrete RPMs encompassing most of the popular discrete RPMs used in Bayesian nonparametrics, such as the Dirichlet process, the Pitman–Yor process, the normalized inverse Gaussian process, and the normalized generalized gamma process. We show how certain sampling properties and marginal characterizations of σ-stable Poisson–Kingman RPMs can be usefully exploited for devising a Markov chain Monte Carlo (MCMC) algorithm for performing posterior inference with a Bayesian nonparametric mixture model. Specifically, we introduce a novel and efficient MCMC sampling scheme in an augmented space that has a small number of auxiliary variables per iteration. We apply our sampling scheme to density estimation and clustering tasks with unidimensional and multidimensional datasets, and compare it against competing MCMC sampling schemes. Supplementary materials for this article are available online.
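A minimal sketch of the Dirichlet process special case of this class, using a marginal (Chinese restaurant process) Gibbs sampler with a conjugate normal base measure; the general σ-stable Poisson–Kingman samplers of the paper need additional auxiliary variables not shown here, and all constants are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 60), rng.normal(2, 0.5, 40)])
n = len(x)
alpha, sigma2, tau2 = 1.0, 0.25, 4.0   # DP concentration, noise / base variances

z = np.zeros(n, dtype=int)             # cluster labels, start in one cluster

def cluster_pred_logpdf(xi, members):
    # Posterior predictive of a new point given a cluster's members
    # (normal likelihood with known variance, normal base measure).
    m = len(members)
    post_var = sigma2 * tau2 / (sigma2 + m * tau2)
    post_mean = post_var * members.sum() / sigma2
    return stats.norm(post_mean, np.sqrt(post_var + sigma2)).logpdf(xi)

for it in range(100):
    for i in range(n):
        z[i] = -1                                     # remove point i
        labels = [c for c in np.unique(z) if c >= 0]
        logp = []
        for c in labels:                              # join an existing cluster
            members = x[z == c]
            logp.append(np.log(len(members)) + cluster_pred_logpdf(x[i], members))
        # ... or open a new cluster, with weight alpha times the prior predictive.
        logp.append(np.log(alpha) + stats.norm(0, np.sqrt(tau2 + sigma2)).logpdf(x[i]))
        p = np.exp(np.array(logp) - max(logp)); p /= p.sum()
        choice = rng.choice(len(p), p=p)
        z[i] = labels[choice] if choice < len(labels) else max(labels, default=-1) + 1

print("number of occupied clusters:", len(np.unique(z)))
```

The augmented-space scheme of the paper plays the same role for the wider σ-stable Poisson–Kingman family, where the exchangeable partition probabilities are no longer as simple as the CRP weights above.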

11.
Using Bayesian decision theory, this paper studies a general model for acceptance sampling plans under the exponential distribution with randomly censored life tests. We prove that the optimal Bayes rule is monotone, and for two particular decision loss functions we derive explicit expressions for the optimal Bayes rule and the corresponding Bayes risk.
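A minimal numerical sketch of such a Bayes acceptance rule, assuming a gamma prior on the exponential failure rate and simple two-action losses; the constants are illustrative, and the output displays the monotone structure of the optimal rule in the total time on test:

```python
import numpy as np
from scipy import stats

a0, b0 = 2.0, 2.0               # gamma prior on the failure rate lambda
lam0 = 1.0                      # quality threshold: lots with lambda > lam0 are "bad"
c_accept, c_reject = 5.0, 1.0   # losses: accepting a bad lot / rejecting a good one
n = 10                          # observed number of failures in the sample

for T in [4.0, 8.0, 12.0, 16.0, 20.0]:
    # Conjugate update: posterior of lambda is Gamma(a0 + n, rate b0 + T).
    post = stats.gamma(a0 + n, scale=1.0 / (b0 + T))
    p_bad = post.sf(lam0)
    risk_accept = c_accept * p_bad
    risk_reject = c_reject * (1.0 - p_bad)
    action = "accept" if risk_accept < risk_reject else "reject"
    print(f"T={T:5.1f}  P(bad | data)={p_bad:.3f}  -> {action}")
```

As the total time on test T grows with the number of failures fixed, the posterior probability of a bad lot decreases and the decision switches from reject to accept exactly once, which is the monotonicity property the paper proves.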

12.
This note is a rejoinder to comments by Dubois and Moral about my paper “Likelihood-based belief function: justification and some extensions to low-quality data” published in this issue. The main comments concern (1) the axiomatic justification for defining a consonant belief function in the parameter space from the likelihood function and (2) the Bayesian treatment of statistical inference from uncertain observations, when uncertainty is quantified by belief functions. Both issues are discussed in this note, in response to the discussants' comments.

13.
This paper investigates the effect of resale allowance on entry strategies in a second price auction with two bidders whose entries are sequential and costly. We first characterize the perfect Bayesian equilibrium in cutoff strategies. We then show that there exists a unique threshold such that if the reseller’s bargaining power is greater (less) than the threshold, resale allowance gives the leading bidder (the following bidder) a higher (lower) incentive to enter; i.e., the entry cutoff becomes lower (higher). We also discuss asymmetric bidders and the original seller’s expected revenue.

14.
Data assimilation refers to the methodology of combining dynamical models and observed data with the objective of improving state estimation. Most data assimilation algorithms are viewed as approximations of the Bayesian posterior (filtering distribution) on the signal given the observations. Some of these approximations are controlled, such as particle filters, which may be refined to produce the true filtering distribution in the large particle number limit, and some are uncontrolled, such as ensemble Kalman filter methods, which do not recover the true filtering distribution in the large ensemble limit. Other data assimilation algorithms, such as cycled 3DVAR methods, may be thought of as controlled estimators of the state in the small observational noise scenario, but are in general uncontrolled in relation to the true filtering distribution. For particle filters and ensemble Kalman filters it is of practical importance to understand how and why data assimilation methods can be effective when used with a fixed small number of particles, since for many large-scale applications it is not practical to deploy algorithms close to the large particle number asymptotic regime. In this paper, the authors address this question for particle filters and, in particular, study their accuracy (in the small noise limit) and ergodicity (for noisy signal and observation) without appealing to the large particle number limit. The authors first review the accuracy and minorization properties of the true filtering distribution, working in the setting of conditional Gaussianity for the dynamics-observation model. They then show that these properties are inherited by optimal particle filters for any fixed number of particles, and use the minorization to establish ergodicity of the filters. For completeness, they also prove large particle number consistency results for the optimal particle filters, by writing the update equations for the underlying distributions as recursions. In addition to the optimal particle filter with standard resampling, they derive all the above results for (what they term) the Gaussianized optimal particle filter and show that its theoretical properties are favorable compared to those of the standard optimal particle filter.
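A minimal sketch of an optimal particle filter in a conditionally Gaussian setting, where the locally optimal proposal p(x_t | x_{t-1}, y_t) and its weight p(y_t | x_{t-1}) are available in closed form; the sinusoidal dynamics and all constants are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_part, q, r = 60, 500, 0.4, 0.6

f = lambda x: 0.8 * x + np.sin(x)          # nonlinear signal dynamics

x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = f(x[t - 1]) + q * rng.normal()
    y[t] = x[t] + r * rng.normal()

s2 = 1.0 / (1.0 / q**2 + 1.0 / r**2)       # optimal proposal variance
particles = rng.normal(size=n_part)
est = np.zeros(T)
for t in range(1, T):
    fx = f(particles)
    # Weight with the predictive likelihood p(y_t | x_{t-1}) ...
    logw = -0.5 * (y[t] - fx) ** 2 / (q**2 + r**2)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # ... then resample and draw from the optimal proposal p(x_t | x_{t-1}, y_t).
    idx = rng.choice(n_part, size=n_part, p=w)
    mean = s2 * (fx[idx] / q**2 + y[t] / r**2)
    particles = mean + np.sqrt(s2) * rng.normal(size=n_part)
    est[t] = particles.mean()

print("RMSE:", np.sqrt(np.mean((est[1:] - x[1:]) ** 2)))
```

Because the proposal already conditions on the current observation, the weights depend only on the previous particles, which is what makes the optimal filter accurate with a fixed small particle count.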

15.
Consider a family of zero-sum games indexed by a parameter that determines each player’s payoff function and feasible strategies. Our first main result characterizes continuity assumptions on the payoffs and the constraint correspondence such that the equilibrium value and strategies depend continuously and upper hemicontinuously (respectively) on the parameter. This characterization uses two topologies in order to overcome a topological tension that arises when players’ strategy sets are infinite-dimensional. Our second main result is an application to Bayesian zero-sum games in which each player’s information is viewed as a parameter. We model each player’s information as a sub-σ-field, so that it determines her feasible strategies: those that are measurable with respect to the player’s information. We thereby characterize conditions under which the equilibrium value and strategies depend continuously and upper hemicontinuously (respectively) on each player’s information.

16.
We show that the value of a zero-sum Bayesian game is a Lipschitz continuous function of the players’ common prior belief with respect to the total variation metric on beliefs. This is unlike the case of general Bayesian games where lower semi-continuity of Bayesian equilibrium (BE) payoffs rests on the “almost uniform” convergence of conditional beliefs. We also show upper semi-continuity (USC) and approximate lower semi-continuity (ALSC) of the optimal strategy correspondence, and discuss ALSC of the BE correspondence in the context of zero-sum games. In particular, the interim BE correspondence is shown to be ALSC for some classes of information structures with highly non-uniform convergence of beliefs, that would not give rise to ALSC of BE in non-zero-sum games.

17.
We present a three-player Bayesian game for which there are no ε-equilibria in Borel measurable strategies for small enough positive ε, although there are non-measurable equilibria. The structure of the game employs a nonamenable semigroup action corresponding to the knowledge of the players. The equilibrium property is related to the proper colouring of graphs and the Borel chromatic number; but rather than keeping adjacent vertices coloured differently, there are algebraic conditions relating to the topology of the space and some ergodic operators.

18.
We consider an MRI scanning facility run by a Radiology department. Several hospital departments compete for capacity and have private information regarding their demand for scans. The fairness of the capacity allocation by the Radiology department depends on the quality of the information provided by the hospital departments. We employ a generic Bayesian game approach that stimulates the disclosure of true demand (truth-telling), so that capacity can be allocated fairly. We derive conditions under which truth-telling is a Bayesian Nash equilibrium. The usefulness of the approach is illustrated with a numerical example.

19.
Genome-wide association studies (GWAS) aim to assess relationships between single nucleotide polymorphisms (SNPs) and diseases. They are among the most popular problems in genetics, and have some peculiarities given the large number of SNPs compared to the number of subjects in the study. Individuals might not be independent, especially in animal breeding studies or in genetic diseases in isolated populations with highly inbred individuals. We propose a family-based GWAS model in a two-stage approach comprising a dimension reduction and a subsequent model selection. The first stage, in which the genetic relatedness between the subjects is taken into account, selects the promising SNPs. The second stage uses Bayes factors for comparison among all candidate models and a random search strategy for exploring the space of all the regression models in a fully Bayesian approach. A simulation study shows that our approach is superior to the Bayesian lasso for model selection in this setting. We also illustrate its performance in a study on beta-thalassemia disorder in an isolated population from Sardinia. Supplementary material describing the implementation of the method proposed in this article is available online.
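A minimal two-stage sketch in the spirit of this approach: marginal screening followed by a random search over candidate models scored by a BIC approximation to the Bayes factor. The kinship adjustment and the exact Bayes factors of the paper are omitted, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 300, 2000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
causal = [10, 500, 1500]
y = X[:, causal] @ np.array([0.8, 0.6, 0.5]) + rng.normal(size=n)

# Stage 1: marginal screening by absolute correlation with the phenotype.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
candidates = np.argsort(corr)[-20:]                    # keep the top 20 SNPs

def neg_half_bic(subset):
    # Gaussian linear model; -BIC/2 approximates the log marginal likelihood.
    Z = np.column_stack([np.ones(n), X[:, subset]]) if len(subset) else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return -0.5 * (n * np.log(rss / n) + Z.shape[1] * np.log(n))

# Stage 2: random search over subsets of the screened candidates.
best, best_score = [], neg_half_bic([])
for _ in range(2000):
    k = rng.integers(1, 6)
    subset = list(rng.choice(candidates, size=k, replace=False))
    score = neg_half_bic(subset)
    if score > best_score:
        best, best_score = subset, score

print("selected SNPs:", sorted(best), " true causal:", causal)
```

Differences in -BIC/2 between two models approximate log Bayes factors, so the random search above is a crude stand-in for the fully Bayesian model-space exploration described in the abstract.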

20.
Bayesian Inference for Extremes: Accounting for the Three Extremal Types
The Extremal Types Theorem identifies three distinct types of extremal behaviour. Two different strategies for statistical inference for extreme values have been developed to exploit this asymptotic representation. One strategy uses a model in which the three types are combined into a unified parametric family, with the shape parameter of the family determining the type: positive (Fréchet), zero (Gumbel), and negative (negative Weibull). This form of approach never selects the Gumbel type, since that type is reduced to a single point in a continuous parameter space. The other strategy first selects the extremal type, based on hypothesis tests, and then estimates the best fitting model within the selected type. Such approaches ignore the effect of the uncertainty in the choice of extremal type on the subsequent inference. We overcome these deficiencies by applying the Bayesian inferential framework to an extended model which explicitly allocates a non-zero probability to the Gumbel type. Application of our procedure suggests that incorporating the knowledge of the Extremal Types Theorem into inference for extreme values reduces uncertainty, with the degree of reduction depending on the shape parameter of the true extremal distribution and the prior weight given to the Gumbel type.
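A minimal sketch of allocating non-zero prior mass to the Gumbel type: a point mass at shape xi = 0 mixed with a continuous prior on xi, with location and scale fixed at known values for illustration (the paper treats them as unknown). Note that scipy's genextreme uses c = -xi relative to the usual GEV shape convention:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
mu, sigma = 0.0, 1.0
data = stats.genextreme(c=-0.1, loc=mu, scale=sigma).rvs(100, random_state=rng)

pi0 = 0.5                                   # prior probability of the Gumbel type
xi_grid = np.linspace(-0.5, 0.5, 401)
xi_prior = stats.norm(0.0, 0.2).pdf(xi_grid)  # continuous prior on xi != 0

def loglik(xi):
    return stats.genextreme(c=-xi, loc=mu, scale=sigma).logpdf(data).sum()

ll = np.array([loglik(xi) for xi in xi_grid])
ll0 = loglik(0.0)                           # Gumbel log-likelihood

# Marginal likelihood of the non-Gumbel component by grid integration
# (prior mass outside the grid is negligible here).
ref = max(ll.max(), ll0)
dx = xi_grid[1] - xi_grid[0]
marg_alt = (np.exp(ll - ref) * xi_prior).sum() * dx
post_gumbel = pi0 * np.exp(ll0 - ref) / (
    pi0 * np.exp(ll0 - ref) + (1 - pi0) * marg_alt)

print("posterior probability of the Gumbel type:", round(post_gumbel, 3))
```

Because the Gumbel type receives a genuine point mass, its posterior probability is non-degenerate, unlike in the continuous-parameter strategy the abstract criticizes, where the event xi = 0 has probability zero.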
