Similar Articles
Found 20 similar articles.
1.
《Optimization》2012,61(5):681-694
As global or combinatorial optimization problems are not effectively tractable by means of deterministic techniques, Monte Carlo methods are used in practice for obtaining "good" approximations to the optimum. In order to test the accuracy achieved after a sample of finite size, the Bayesian nonparametric approach is proposed as a suitable context, and the theoretical as well as computational implications of prior distributions in the class of neutral-to-the-right distributions are examined. The feasibility of the approach relative to particular Monte Carlo procedures is finally illustrated both for the global optimization problem and the {0-1} programming problem.
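The Monte Carlo approach mentioned above can be illustrated with the simplest such procedure, pure random search: sample points uniformly in the feasible box and keep the best. This is a generic sketch of the idea, not the article's Bayesian accuracy-assessment method; the objective and bounds are illustrative.

```python
import random

def random_search(f, bounds, n=10_000, seed=0):
    """Pure random search: sample uniformly in the box and keep the best point.
    Returns an approximation to the minimizer and the minimum value."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Minimize (x-1)^2 + (y+2)^2 over [-5, 5]^2; the true minimum is 0 at (1, -2).
x, v = random_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                     [(-5, 5), (-5, 5)])
```

With 10,000 samples in a 2-D box, the best value found is very close to the true optimum; assessing *how* close, given only the finite sample, is exactly the question the Bayesian nonparametric approach addresses.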

2.
Expected gain in Shannon information is commonly suggested as a Bayesian design evaluation criterion. Because estimating expected information gains is computationally expensive, examples in which they have been successfully used in identifying Bayes optimal designs are both few and typically quite simplistic. This article discusses in general some properties of estimators of expected information gains based on Markov chain Monte Carlo (MCMC) and Laplacian approximations. We then investigate some issues that arise when applying these methods to the problem of experimental design in the (technically nontrivial) random fatigue-limit model of Pascual and Meeker. An example comparing follow-up designs for a laminate panel study is provided.

3.
The problem of finding marginal distributions of multidimensional random quantities has many applications in probability and statistics. Many of the solutions currently in use are very computationally intensive. For example, in a Bayesian inference problem with a hierarchical prior distribution, one is often driven to multidimensional numerical integration to obtain marginal posterior distributions of the model parameters of interest. Recently, however, a group of Monte Carlo integration techniques that fall under the general banner of successive substitution sampling (SSS) have proven to be powerful tools for obtaining approximate answers in a very wide variety of Bayesian modeling situations. Answers may also be obtained at low cost, both in terms of computer power and user sophistication. Important special cases of SSS include the “Gibbs sampler” described by Gelfand and Smith and the “IP algorithm” described by Tanner and Wong. The major problem plaguing users of SSS is the difficulty in ascertaining when “convergence” of the algorithm has been obtained. This problem is compounded by the fact that what is produced by the sampler is not the functional form of the desired marginal posterior distribution, but a random sample from this distribution. This article gives a general proof of the convergence of SSS and the sufficient conditions for both strong and weak convergence, as well as a convergence rate. We explore the connection between higher-order eigenfunctions of the transition operator and accelerated convergence via good initial distributions. We also provide asymptotic results for the sampling component of the error in estimating the distributions of interest. Finally, we give two detailed examples from familiar exponential family settings to illustrate the theory.
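The Gibbs sampler special case of SSS is easy to see on a toy target where every full conditional is available in closed form. The sketch below samples a standard bivariate normal with correlation rho by alternating the two univariate conditionals; this is a minimal illustration, not the article's convergence analysis, and the parameter values are made up.

```python
import random

def gibbs_bivariate_normal(rho, n_iter=6000, burn=1000, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional is univariate normal: x | y ~ N(rho*y, 1 - rho^2)."""
    rng = random.Random(seed)
    sd = (1.0 - rho ** 2) ** 0.5
    x = y = 0.0
    draws = []
    for t in range(n_iter):
        x = rng.gauss(rho * y, sd)  # draw x from its full conditional given y
        y = rng.gauss(rho * x, sd)  # then y from its full conditional given x
        if t >= burn:               # discard burn-in, keeping the rest
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
n = len(draws)
mean_x = sum(d[0] for d in draws) / n
mean_y = sum(d[1] for d in draws) / n
var_x = sum((d[0] - mean_x) ** 2 for d in draws) / n
var_y = sum((d[1] - mean_y) ** 2 for d in draws) / n
corr = sum((d[0] - mean_x) * (d[1] - mean_y)
           for d in draws) / n / (var_x * var_y) ** 0.5
```

The sample correlation recovers rho. Note that the output is a correlated random sample from the target, not its functional form, which is precisely the convergence-diagnosis difficulty the abstract describes.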

4.
This paper describes a method for an objective selection of the optimal prior distribution, or for adjusting its hyper-parameter, among the competing priors for a variety of Bayesian models. In order to implement this method, the integration of very high dimensional functions is required to get the normalizing constants of the posterior and even of the prior distribution. The logarithm of the high dimensional integral is reduced to the one-dimensional integration of a cerain function with respect to the scalar parameter over the range of the unit interval. Having decided the prior, the Bayes estimate or the posterior mean is used mainly here in addition to the posterior mode. All of these are based on the simulation of Gibbs distributions such as Metropolis' Monte Carlo algorithm. The improvement of the integration's accuracy is substantial in comparison with the conventional crude Monte Carlo integration. In the present method, we have essentially no practical restrictions in modeling the prior and the likelihood. Illustrative artificial data of the lattice system are given to show the practicability of the present procedure.  相似文献   

5.
Uncovering hidden change-points in an observed signal sequence is challenging both mathematically and computationally. We tackle this by developing an innovative methodology based on Markov chain Monte Carlo and statistical information theory. It consists of an empirical Bayesian information criterion (emBIC) to assess the fitness and virtue of candidate configurations of change-points, and a stochastic search algorithm induced from Gibbs sampling to find the optimal change-points configuration. Our emBIC is derived by treating the unknown change-point locations as latent data rather than parameters as is in traditional BIC, resulting in significant improvement over the latter which is known to mostly over-detect change-points. The use of the Gibbs sampler induced search enables one to quickly find the optimal change-points configuration with high probability and without going through computationally infeasible enumeration. We also integrate the Gibbs sampler induced search with a current BIC-based change-points sequential testing method, significantly improving the method’s performance and computing feasibility. We further develop two comprehensive 3-step computing procedures to implement the proposed methodology for practical use. Finally, simulation studies and real examples analyzing business and genetic data are presented to illustrate and assess the procedures.
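As a reference point for the BIC side of this idea, the sketch below does plain (traditional) BIC screening for a single Gaussian mean shift with known unit variance, exhaustively scoring every split point. It is a toy baseline, not the article's emBIC or its Gibbs-induced search; the data are synthetic.

```python
import math

def bic_single_changepoint(y):
    """Score every split of a Gaussian mean-shift model (variance assumed known
    and equal to 1) against the no-change model, and return the BIC-best split
    index, or the string "none" if no change is supported."""
    n = len(y)

    def neg2loglik(seg):
        # -2 log-likelihood up to a constant: just the residual sum of squares.
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    # No-change model: one mean parameter, penalty 1 * log(n).
    best = ("none", neg2loglik(y) + 1 * math.log(n))
    for k in range(2, n - 1):
        # Change model: two means plus the change-point location, penalty 3 log(n).
        score = neg2loglik(y[:k]) + neg2loglik(y[k:]) + 3 * math.log(n)
        if score < best[1]:
            best = (k, score)
    return best[0]

# A clean mean shift from 0 to 3 at index 30 should be detected exactly.
y = [0.0] * 30 + [3.0] * 30
cp = bic_single_changepoint(y)
```

Exhaustive scoring like this is only feasible for a single change-point; with multiple change-points the configuration space explodes, which is why the article resorts to a Gibbs-sampler-induced stochastic search.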

6.
An efficient algorithm for the determination of Bayesian optimal discriminating designs for competing regression models is developed, where the main focus is on models with general distributional assumptions beyond the “classical” case of normally distributed homoscedastic errors. For this purpose, we consider a Bayesian version of the Kullback–Leibler (KL) criterion. Discretizing the prior distribution leads to local KL-optimal discriminating design problems for a large number of competing models. All currently available methods either require a large amount of computation time or fail to calculate the optimal discriminating design, because they can only deal efficiently with a few model comparisons. In this article, we develop a new algorithm for the determination of Bayesian optimal discriminating designs with respect to the Kullback–Leibler criterion. It is demonstrated that the new algorithm is able to calculate the optimal discriminating designs with reasonable accuracy and computational time in situations where all currently available procedures are either slow or fail.

7.
A commonly used paradigm in modeling count data is to assume that individual counts are generated from a Binomial distribution, with probabilities varying between individuals according to a Beta distribution. The marginal distribution of the counts is then Beta-Binomial. Bradlow, Hardie, and Fader (2002, p. 189) make use of polynomial expansions to simplify Bayesian computations with Negative-Binomial distributed data. This article exploits similar expansions to facilitate Bayesian inference with data from the Beta-Binomial model. This has great application and computational importance to many problems, as previous research has resorted to computationally intensive numerical integration or Markov chain Monte Carlo techniques.
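For reference, the Beta-Binomial marginal described above has a closed-form pmf that can be evaluated stably through log-gamma functions. This is a generic evaluation sketch, not the article's polynomial-expansion method, and the parameter values are illustrative.

```python
from math import lgamma, exp

def beta_binomial_pmf(k, n, a, b):
    """Beta-Binomial pmf: Binomial(n, p) counts with p ~ Beta(a, b).
    Marginally, P(K = k) = C(n, k) * B(k + a, n - k + b) / B(a, b),
    computed on the log scale via lgamma for numerical stability."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    log_beta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

# With a = b = 1 the Beta mixing is uniform, so counts are uniform on 0..n:
p = beta_binomial_pmf(3, 10, 1.0, 1.0)  # equals 1/11
total = sum(beta_binomial_pmf(k, 10, 2.0, 5.0) for k in range(11))
```

Evaluating the pmf is easy; the computational burden the article targets arises in the *posterior* quantities, where the Beta parameters themselves carry priors.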

8.
9.
To date, Bayesian inferences for the negative binomial distribution (NBD) have relied on computationally intensive numerical methods (e.g., Markov chain Monte Carlo) as it is thought that the posterior densities of interest are not amenable to closed-form integration. In this article, we present a “closed-form” solution to the Bayesian inference problem for the NBD that can be written as a sum of polynomial terms. The key insight is to approximate the ratio of two gamma functions using a polynomial expansion, which then allows for the use of a conjugate prior. Given this approximation, we arrive at closed-form expressions for the moments of both the marginal posterior densities and the predictive distribution by integrating the terms of the polynomial expansion in turn (now feasible due to conjugacy). We demonstrate via a large-scale simulation that this approach is very accurate and that the corresponding gains in computing time are quite substantial. Furthermore, even in cases where the computing gains are more modest our approach provides a method for obtaining starting values for other algorithms, and a method for data exploration.
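The conjugacy being recovered here is easiest to see in the simpler case where the NBD shape parameter r is known: a Beta prior on the success probability p then updates exactly in closed form, with no gamma-ratio approximation needed. The sketch below shows that known-r update with made-up data; it is a reference point, not the article's unknown-r method.

```python
def negbin_beta_posterior(r, prior_a, prior_b, data):
    """Conjugate Beta update for the success probability p of a negative
    binomial with known shape r.  The likelihood of N observations is
    proportional to p^(r*N) * (1-p)^(sum of counts), so
    Beta(a, b) prior  ->  Beta(a + r*N, b + sum(data)) posterior."""
    n = len(data)
    return prior_a + r * n, prior_b + sum(data)

# Illustrative counts with r = 2 and a flat Beta(1, 1) prior:
a_post, b_post = negbin_beta_posterior(r=2, prior_a=1.0, prior_b=1.0,
                                       data=[3, 0, 2, 5])
post_mean = a_post / (a_post + b_post)  # posterior mean of p
```

When r is itself unknown, the likelihood contains a ratio of gamma functions in r, which destroys this conjugacy; the article's polynomial approximation of that ratio is what restores a tractable, term-by-term integrable posterior.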

10.
This paper concerns our approach to the EVA2017 challenge, the aim of which was to predict extreme precipitation quantiles across several sites in the Netherlands. Our approach uses a Bayesian hierarchical structure, which combines Gamma and generalised Pareto distributions. We impose a spatio-temporal structure in the model parameters via an autoregressive prior. Estimates are obtained using Markov chain Monte Carlo techniques and spatial interpolation. This approach has been successful in the context of the challenge, providing reasonable improvements over the benchmark.

11.
Internal fraud is the most serious type of operational risk for Chinese commercial banks. However, owing to the intrinsic characteristics of operational risk and the short history of internal fraud loss data collection at Chinese commercial banks, data are scarce, and small samples tend to produce unstable parameter estimates. To measure risk more accurately under small samples, this paper applies Bayesian Markov chain Monte Carlo simulation within the loss distribution approach, assuming that loss frequency follows a Poisson-Gamma distribution and loss severity follows a generalized Pareto-mixed Gamma distribution. We analyze the form of the posterior distributions, obtain posterior estimates of internal fraud loss frequency and loss severity for different business lines of Chinese commercial banks, and run Monte Carlo simulations to obtain the joint risk distribution of internal fraud across business lines. The results show a good fit. Compared with traditional extreme value analysis, the posterior distribution obtained from the Bayesian analysis can serve as a prior for future analyses, which helps produce more realistic parameter estimates under small samples; the method can also help banks reduce regulatory capital requirements.
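The loss-frequency component of such a model rests on Poisson-Gamma conjugacy: annual loss counts are Poisson with a Gamma-distributed rate, and the posterior of the rate is again Gamma. The sketch below shows that standard update with invented counts; it is not the paper's data or its full severity model.

```python
def poisson_gamma_posterior(alpha, beta, counts):
    """Conjugate update for a Poisson loss-frequency rate with a
    Gamma(alpha, beta) prior (rate parameterization):
    posterior is Gamma(alpha + sum(counts), beta + number of periods)."""
    return alpha + sum(counts), beta + len(counts)

# Hypothetical yearly internal-fraud counts for one business line:
a, b = poisson_gamma_posterior(alpha=2.0, beta=1.0, counts=[3, 1, 4, 0, 2])
post_mean = a / b  # posterior mean of the annual loss-frequency rate
```

As the abstract notes, this posterior can then serve as the prior for the next period's analysis, which is how information accumulates despite short data histories.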

12.
This article proposes a new Bayesian approach to prediction on continuous covariates. The Bayesian partition model constructs arbitrarily complex regression and classification surfaces by splitting the covariate space into an unknown number of disjoint regions. Within each region the data are assumed to be exchangeable and come from some simple distribution. Using conjugate priors, the marginal likelihoods of the models can be obtained analytically for any proposed partitioning of the space where the number and location of the regions is assumed unknown a priori. Markov chain Monte Carlo simulation techniques are used to obtain predictive distributions at the design points by averaging across posterior samples of partitions.

13.
This paper introduces some Bayesian optimal design methods for step-stress accelerated life test planning with one accelerating variable, when the acceleration model is linear in the accelerating variable or a function of it, based on censored data from a log-location-scale distribution. In order to find the optimal plan, we propose different Monte Carlo simulation algorithms for different Bayesian optimality criteria. We present an example using the lognormal life distribution with Type-I censoring to illustrate the different Bayesian methods and to examine the effects of the prior distribution and sample size. Comparing the different Bayesian methods, we suggest adopting the B1(τ) method when the sample size is large and the B2(τ) method when it is small. Finally, the Bayesian optimal plans are compared with the plan obtained by the maximum likelihood method.

14.
In the common nonparametric regression model we consider the problem of constructing optimal designs, if the unknown curve is estimated by a smoothing spline. A special basis for the space of natural splines is introduced and the local minimax property for these splines is used to derive two optimality criteria for the construction of optimal designs. The first criterion determines the design for a most precise estimation of the coefficients in the spline representation and corresponds to D-optimality, while the second criterion is the G-optimality criterion and corresponds to an accurate prediction of the curve. Several properties of the optimal designs are derived. In general, D- and G-optimal designs are not equivalent. Optimal designs are determined numerically and compared with the uniform design.

15.
Determining whether a solution is of high quality (optimal or near optimal) is fundamental in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap and our procedures' output is a confidence interval on this gap. We review a multiple-replications procedure that requires solution of, say, 30 optimization problems and then, we present a result that justifies a computationally simplified single-replication procedure that only requires solving one optimization problem. Even though the single replication procedure is computationally significantly less demanding, the resulting confidence interval might have low coverage probability for small sample sizes for some problems. We provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we propose using ɛ-optimal solutions to strengthen the performance of our procedures.
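The multiple-replications idea can be sketched on the newsvendor problem, where each replication's sample-average problem is solved exactly by a sample quantile. This is a simplified illustration under stated assumptions (uniform demand, made-up costs, a textbook one-sided t bound), not the authors' exact procedures.

```python
import random
import statistics

C, P = 1.0, 2.0  # assumed unit purchase cost and unit selling price

def scenario_cost(x, d):
    """Newsvendor cost for order quantity x under demand d:
    pay C per unit ordered, earn P per unit of demand actually met."""
    return C * x - P * min(x, d)

def gap_upper_bound(x_hat, sample_demand, n=2000, reps=20, seed=3):
    """Multiple-replications bound on the optimality gap of candidate x_hat.
    Each replication draws a fresh demand sample, compares x_hat's sample-average
    cost with the sample-optimal cost, and the replicate gaps yield a one-sided
    confidence bound on the true gap."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(reps):
        demands = sorted(sample_demand(rng) for _ in range(n))
        cost_hat = sum(scenario_cost(x_hat, d) for d in demands) / n
        # The sample-average problem is minimized at the critical
        # (P - C)/P quantile of the empirical demand distribution.
        x_star = demands[int((P - C) / P * n)]
        cost_star = sum(scenario_cost(x_star, d) for d in demands) / n
        gaps.append(cost_hat - cost_star)  # nonnegative by construction
    g = statistics.mean(gaps)
    se = statistics.stdev(gaps) / reps ** 0.5
    return g + 1.729 * se  # one-sided 95% bound, t quantile with 19 df

# Demand ~ Uniform(0, 100): the true optimal order is the 0.5-quantile, 50,
# so the candidate below is actually optimal and its gap bound should be small.
bound = gap_upper_bound(50.0, lambda rng: rng.uniform(0, 100))
```

Each replicate gap overestimates the true gap on average (the sample optimum overfits its own sample), which is what makes the averaged bound a valid, if conservative, confidence statement.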

16.
In this paper, we discuss Bayesian joint quantile regression of mixed effects models with censored responses and errors in covariates simultaneously, using Markov chain Monte Carlo methods. Under the assumption of an asymmetric Laplace error distribution, we establish a Bayesian hierarchical model and derive the posterior distributions of all unknown parameters based on a Gibbs sampling algorithm. Three cases, including the multivariate normal distribution and two other heavy-tailed distributions, are considered for fitting the random effects of the mixed effects models. Finally, some Monte Carlo simulations are performed and the proposed procedure is illustrated by analyzing a group of AIDS clinical data.

17.
Many statistical multiple integration problems involve integrands that have a dominant peak. In applying numerical methods to solve these problems, statisticians have paid relatively little attention to existing quadrature methods and available software developed in the numerical analysis literature. One reason these methods have been largely overlooked, even though they are known to be more efficient than Monte Carlo for well-behaved problems of low dimensionality, may be that when applied naively they are poorly suited for peaked-integrand problems. In this article we use transformations based on “split t” distributions to allow the integrals to be efficiently computed using a subregion-adaptive numerical integration algorithm. Our split t distributions are modifications of those suggested by Geweke and may also be used to define Monte Carlo importance functions. We then compare our approach to Monte Carlo. In the several examples we examine here, we find subregion-adaptive integration to be substantially more efficient than importance sampling.
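The importance-sampling baseline mentioned above works by drawing from a proposal matched to the integrand's peak and averaging the weights. The sketch below uses a plain normal proposal for a sharply peaked one-dimensional integrand; it is a generic illustration of the Monte Carlo comparator, not the article's split-t transformation, and the integrand is invented.

```python
import math
import random

def importance_sample(log_f, sample_q, log_q, n=50_000, seed=4):
    """Plain importance sampling: estimate the integral of f by drawing x ~ q
    and averaging the weights f(x) / q(x), computed on the log scale."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        total += math.exp(log_f(x) - log_q(x))
    return total / n

# Sharply peaked integrand exp(-50 (x - 0.3)^2); its exact integral is
# sqrt(pi / 50).  Proposal: a normal centered at the peak, slightly wider
# than the integrand so the weights stay bounded.
log_f = lambda x: -50.0 * (x - 0.3) ** 2
sigma = 0.15
sample_q = lambda rng: rng.gauss(0.3, sigma)
log_q = lambda x: (-0.5 * ((x - 0.3) / sigma) ** 2
                   - math.log(sigma * math.sqrt(2.0 * math.pi)))
est = importance_sample(log_f, sample_q, log_q)
```

Locating the peak and choosing the proposal's center, scale, and tails is exactly where split-t-style constructions earn their keep; a proposal with lighter tails than the integrand would make the weights unbounded.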

18.
Single-index models have found applications in econometrics and biometrics, where multidimensional regression models are often encountered. This article proposes a nonparametric estimation approach that combines wavelet methods for nonequispaced designs with Bayesian models. We consider a wavelet series expansion of the unknown regression function and set prior distributions for the wavelet coefficients and the other model parameters. To ensure model identifiability, the direction parameter is represented via its polar coordinates. We employ ad hoc hierarchical mixture priors that perform shrinkage on wavelet coefficients and use Markov chain Monte Carlo methods for a posteriori inference. We investigate an independence-type Metropolis-Hastings algorithm to produce samples for the direction parameter. Our method leads to simultaneous estimates of the link function and of the index parameters. We present results on both simulated and real data, where we look at comparisons with other methods.

19.
We study numerical integration of Lipschitz functionals on a Banach space by means of deterministic and randomized (Monte Carlo) algorithms. This quadrature problem is shown to be closely related to the problem of quantization and to the average Kolmogorov widths of the underlying probability measure. In addition to the general setting, we analyze, in particular, integration with respect to Gaussian measures and distributions of diffusion processes. We derive lower bounds for the worst case error of every algorithm in terms of its cost, and we present matching upper bounds, up to logarithms, and corresponding almost optimal algorithms. As auxiliary results, we determine the asymptotic behavior of quantization numbers and Kolmogorov widths for diffusion processes.

20.
Recently, the use of Bayesian optimal designs for discrete choice experiments, also called stated choice experiments or conjoint choice experiments, has gained much attention, stimulating the development of Bayesian choice design algorithms. Characteristic for the Bayesian design strategy is that it incorporates the available information about people's preferences for various product attributes in the choice design. This is in contrast with the linear design methodology, which is also used in discrete choice design and which depends for any claims of optimality on the unrealistic assumption that people have no preference for any of the attribute levels. Although linear design principles have often been used to construct discrete choice experiments, we show using an extensive case study that the resulting utility-neutral optimal designs are not competitive with Bayesian optimal designs for estimation purposes. Copyright © 2011 John Wiley & Sons, Ltd.
