Similar Articles
 20 similar articles found.
1.
The study of factors affecting human fertility is an important problem affording interesting statistical and computational challenges. Analyses of human fertility rates must cope with extra variability in fecundability parameters as well as a host of covariates ranging from the obvious, such as coital frequency, to the subtle, like the smoking habits of the female's mother. In retrospective human fecundity studies, researchers ask couples the time required to conceive. These time-to-pregnancy data often exhibit digit-preference bias, among other problems. We introduce computationally intensive models with sufficient flexibility to represent such bias and other causes yielding a similar lack of monotonicity in conception probabilities.

2.
Bayesian spatial modeling of genetic population structure
Natural populations of living organisms often have complex histories consisting of phases of expansion and decline, and the migratory patterns within them may fluctuate over space and time. When parts of a population become relatively isolated, e.g., due to geographical barriers, stochastic forces reshape certain DNA characteristics of the individuals over generations such that they reflect the restricted migration and mating/reproduction patterns. Such populations are typically termed genetically structured, and they may be statistically represented in terms of several clusters whose DNA variation differs clearly from one another. When detailed knowledge of the ancestry of a natural population is lacking, the DNA characteristics of a sample of current-generation individuals often provide a wealth of information in this respect. Several statistical approaches to model-based clustering of such data have been introduced, and in particular, the Bayesian approach to modeling the genetic structure of a population has attracted keen interest among biologists. However, the possibility of utilizing spatial information from sampled individuals in the inference about genetic clusters has been incorporated into such analyses only very recently. While standard Bayesian hierarchical modeling techniques through Markov chain Monte Carlo simulation provide flexible means for describing even subtle patterns in data, they may also result in computationally challenging procedures in practical data analysis. Here we develop a method for modeling the spatial genetic structure using a combination of analytical and stochastic methods. We achieve this by extending a novel theory of Bayesian predictive classification with the spatial information available, described here in terms of a colored Voronoi tessellation over the sample domain. Our results for real and simulated data sets illustrate well the benefits of incorporating spatial information into such an analysis.
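Not the authors' classifier, but the colored Voronoi tessellation itself is easy to make concrete: attach cluster labels ("colors") to a set of seed points and let each sampled individual inherit the label of its nearest seed. A minimal sketch with assumed coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Hypothetical seeds of the tessellation and their cluster labels ("colors").
seeds = rng.uniform(0, 10, size=(5, 2))
colors = np.arange(len(seeds))

# Sampling locations of genotyped individuals.
individuals = rng.uniform(0, 10, size=(100, 2))

# Each individual falls in the Voronoi cell of its nearest seed, so spatial
# cluster membership reduces to a nearest-neighbour lookup.
_, nearest = cKDTree(seeds).query(individuals)
cluster_of_individual = colors[nearest]
print(np.bincount(cluster_of_individual))
```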

3.
Random distribution functions are the basic tool for solving nonparametric decision-theoretic problems. In 1974, Doksum introduced the family of distributions neutral to the right, that is, distributions such that $F(t_1), [F(t_2)-F(t_1)]/[1-F(t_1)], \ldots, [F(t_k)-F(t_{k-1})]/[1-F(t_{k-1})]$ are independent whenever $t_1 < \cdots < t_k$. In practice, application of distributions neutral to the right has been prevented by the lack of a manageable analytical expression for probabilities of the type $P(F(t) < q)$ for fixed $t$ and $q$. A subclass of such distributions can be provided which allows for a closed-form expression of the characteristic function of $\log[1-F(t)]$, given the sample. Then, the a posteriori distribution of $F(t)$ is obtained by numerical evaluation of a Fourier integral. As an application, the global optimization problem is formulated as a problem of inference about the quantiles of the distribution $F(y)$ of the random variable $Y=f(X)$, where $f$ is the objective function and $X$ is a random point in the search domain. The author thanks J. Koronacki and R. Zielinski of the Polish Academy of Sciences for their valuable criticism during the final draft of the paper.
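A restatement (not taken from the paper) of why the characteristic function of $\log[1-F(t)]$ is the natural computational object:

```latex
% Write Z(t) = -\log[1 - F(t)].  Neutrality to the right is equivalent to
% Z having independent increments, so the posterior law of F(t) is
% determined by the characteristic function
\varphi_t(u) \;=\; \mathbb{E}\bigl[e^{\,iu \log(1-F(t))}\bigr]
            \;=\; \mathbb{E}\bigl[e^{-iu Z(t)}\bigr],
% and quantities such as
P\bigl(F(t) < q\bigr) \;=\; P\bigl(Z(t) < -\log(1-q)\bigr)
% follow by numerically inverting \varphi_t -- the Fourier integral of the
% abstract -- e.g. via a standard inversion formula such as Gil-Pelaez.
```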

4.
Based on a modified Cholesky decomposition, we study Bayesian estimation and Bayesian statistical diagnostics for semiparametric joint mean–covariance models of longitudinal data, where the nonparametric part is approximated by B-splines. A hybrid algorithm combining Gibbs sampling with the Metropolis–Hastings algorithm is used to obtain Bayesian estimates of the unknown parameters and Bayesian case-deletion influence diagnostic statistics, and the magnitudes of the diagnostic statistics are used to identify outlying observations. Both simulation studies and a real-data analysis show that the proposed Bayesian estimation and diagnostic methods are feasible and effective.
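The modified Cholesky decomposition underlying such joint mean–covariance models can be stated concretely; the following is the standard (Pourahmadi-style) parameterization for longitudinal data, restated here rather than taken from the paper:

```latex
% Modified Cholesky decomposition of a longitudinal covariance matrix
% \Sigma: there exist a unique unit lower-triangular T and a diagonal
% D = diag(\sigma_1^2, \dots, \sigma_m^2) with
T \,\Sigma\, T^{\top} \;=\; D ,
% where the below-diagonal entries of T are -\phi_{jk}, the coefficients of
% the autoregression of each centred response on its predecessors:
y_{ij} - \mu_{ij} \;=\; \sum_{k<j} \phi_{jk}\,(y_{ik}-\mu_{ik})
                        \;+\; \varepsilon_{ij},
\qquad \operatorname{Var}(\varepsilon_{ij}) = \sigma_j^2 .
% Both \phi_{jk} and \log\sigma_j^2 are unconstrained, which is what makes
% them convenient targets for (semiparametric) regression priors.
```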

5.
Optimization, 2012, 61(5): 681–694
As global or combinatorial optimization problems are not effectively tractable by means of deterministic techniques, Monte Carlo methods are used in practice for obtaining "good" approximations to the optimum. In order to test the accuracy achieved after a sample of finite size, the Bayesian nonparametric approach is proposed as a suitable context, and the theoretical as well as computational implications of prior distributions in the class of neutral-to-the-right distributions are examined. The feasibility of the approach relative to particular Monte Carlo procedures is finally illustrated both for the global optimization problem and the 0–1 programming problem.
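The paper's accuracy test uses neutral-to-the-right priors; the simplest distribution-free version of the same question, namely how deep into the tail of f(X) the best of n draws sits, can be sketched as follows, with an assumed toy objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):                       # toy objective on [-5, 5]^2 (an assumption)
    return np.sum(x**2, axis=-1)

n = 2000
xs = rng.uniform(-5, 5, size=(n, 2))
best = f(xs).min()

# Distribution-free check in the spirit of the abstract: the best of n i.i.d.
# draws beats the alpha-quantile of f(X) with probability 1 - (1 - alpha)^n,
# so we can state how far into the tail of f(X) the best value sits.
alpha = 0.001
coverage = 1 - (1 - alpha) ** n
print(f"best value {best:.4f}; below the {alpha:.1%} quantile of f(X) "
      f"with probability {coverage:.3f}")
```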

6.
Versions of the Gibbs Sampler are derived for the analysis of data from hidden Markov chains and hidden Markov random fields. The principal new development is to use the pseudolikelihood function associated with the underlying Markov process in place of the likelihood, which is intractable in the case of a Markov random field, in the simulation step for the parameters in the Markov process. Theoretical aspects are discussed and a numerical study is reported.
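The pseudolikelihood device can be sketched for the simplest Markov random field, the Ising model: replace the intractable likelihood with the product of full conditionals. A minimal sketch (the model and parameter value are assumptions, not the paper's):

```python
import numpy as np

def ising_log_pseudolikelihood(x, beta):
    """Log pseudolikelihood of a +/-1 spin field x under an Ising model with
    interaction beta: the product of full conditionals p(x_s | neighbours),
    which avoids the intractable partition function."""
    # Sum of the four nearest neighbours, with zero padding at the border.
    nb = np.zeros_like(x, dtype=float)
    nb[1:, :] += x[:-1, :]
    nb[:-1, :] += x[1:, :]
    nb[:, 1:] += x[:, :-1]
    nb[:, :-1] += x[:, 1:]
    # log p(x_s | rest) = beta * x_s * nb_s - log(2 cosh(beta * nb_s))
    return np.sum(beta * x * nb - np.logaddexp(beta * nb, -beta * nb))

rng = np.random.default_rng(2)
x = rng.choice([-1, 1], size=(32, 32))
print(ising_log_pseudolikelihood(x, beta=0.4))
```

Within a Gibbs sampler for the hidden field, this quantity would stand in for the likelihood in the update step for the Markov-process parameters.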

7.
The continuous time Bayesian network (CTBN) enables reasoning about complex systems by representing the system as a factored, finite-state, continuous-time Markov process. Inference over the model incorporates evidence, given as state observations through time. The time dimension introduces several new types of evidence that are not found with static models. In this work, we present a comprehensive look at the types of evidence in CTBNs. Moreover, we define and extend inference to reason under uncertainty in the presence of uncertain evidence, as well as negative evidence, concepts that have been extended to static models but not yet introduced into the CTBN model.
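One concrete instance of negative evidence in a continuous-time Markov process is the probability that a state was never visited over an interval, computable from the sub-generator on the surviving states. A sketch with an assumed toy generator (a flat chain, not a CTBN factorization):

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state CTMC generator (rows sum to zero) -- an illustrative
# assumption, not a model from the paper.
Q = np.array([[-0.6,  0.4,  0.2],
              [ 0.3, -0.5,  0.2],
              [ 0.1,  0.4, -0.5]])

t = 2.0
# Point evidence: P(X_t = j | X_0 = i) is the (i, j) entry of expm(Q t).
P = expm(Q * t)

# Negative evidence "state 2 was never visited on [0, t]": restrict the
# generator to the surviving states; row sums of expm of the sub-generator
# give the probability of avoiding state 2 for the whole interval.
keep = [0, 1]
Q_sub = Q[np.ix_(keep, keep)]
avoid = expm(Q_sub * t).sum(axis=1)
print(P[0], avoid[0])
```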

8.
Bayesian statistical analysis of missing data in constant-stress accelerated life tests under the exponential distribution
We discuss Bayesian statistical analysis of the data types commonly encountered in constant-stress accelerated life testing when the lifetime distribution is exponential (complete samples, grouped samples, and censored samples). An iterative Gibbs sampling algorithm resolves the computation of the highly complex posterior marginal distributions that arise in the Bayesian analysis, yielding Bayes estimates of the parameters that satisfy the order restrictions. Simulation results show that the method performs well in all of these settings.
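Not the paper's full accelerated-life model (which links several stress levels under order restrictions), but the core Gibbs move is data augmentation for censored exponential lifetimes. A minimal single-sample sketch, with all prior and simulation settings assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-stress cell: exponential lifetimes, Type-I censoring at c.
theta_true, n, c = 2.0, 50, 1.5
y = rng.exponential(theta_true, n)
obs, cens = y[y <= c], np.full((y > c).sum(), c)

# Gibbs with data augmentation; Gamma(a0, b0) prior on the rate lam = 1/theta.
a0, b0, draws = 1.0, 1.0, []
lam = 1.0
for _ in range(5000):
    # Impute censored lifetimes as c + Exp(lam), by memorylessness.
    z = cens + rng.exponential(1.0 / lam, cens.size)
    # Conjugate update given the completed data.
    lam = rng.gamma(a0 + n, 1.0 / (b0 + obs.sum() + z.sum()))
    draws.append(lam)
print("posterior mean rate:", np.mean(draws[1000:]))
```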

9.
The pricing of insurance policies requires estimates of the total loss. The traditional compound model imposes an independence assumption on the number of claims and their individual sizes. Bivariate models, which model both variables jointly, eliminate this assumption. A regression approach allows policy holder characteristics and product features to be included in the model. This article presents a bivariate model that uses joint random effects across both response variables to induce dependence effects. Bayesian posterior estimation is done using Markov Chain Monte Carlo (MCMC) methods. A real data example demonstrates that our proposed model exhibits better fitting and forecasting capabilities than existing models.
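A minimal simulation sketch of the joint-random-effects idea: one shared effect enters both the count intensity and the size mean, so the two responses become dependent. All coefficients are illustrative assumptions, and the MCMC estimation step is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000

# A shared random effect gamma_i enters both the claim-count intensity and
# the claim-size mean, inducing dependence between the two responses.
gamma = rng.normal(0.0, 0.5, n)
counts = rng.poisson(np.exp(-0.5 + 1.0 * gamma))
sizes = rng.gamma(shape=2.0, scale=np.exp(1.0 + 0.8 * gamma) / 2.0)

# Without the shared effect the two would be independent; with it the
# average claim size rises with the number of claims.
print(np.corrcoef(counts, sizes)[0, 1])
```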

10.
Using data from 28 Chinese provinces, municipalities, and autonomous regions, and taking the Solow growth equation as a basis, this paper applies a Bayesian SUR model with a Gibbs–Importance sampling algorithm to estimate the output elasticity of capital, and on this basis computes total factor productivity (TFP) and its growth rate for each region. The results show that science and technology development strategies have a significantly positive effect on TFP; that with industrial restructuring the relationship between the output elasticity of capital and TFP has gradually turned from positive to negative, revealing an emerging pattern of regional division of labor and cooperation; and that the geo-economic characteristics of inland regions constrain further improvement of their TFP.

11.
Equally weighted mixture models are recommended for situations where it is required to draw precise finite-sample inferences requiring population parameters, but where the population distribution is not constrained to belong to a simple parametric family. They lead to an alternative procedure to the Laird–DerSimonian maximum likelihood algorithm for unequally weighted mixture models. Their primary purpose lies in the facilitation of exact Bayesian computations via importance sampling. Under very general sampling and prior specifications, exact Bayesian computations can be based upon an application of importance sampling, referred to as Permutable Bayesian Marginalization (PBM). An importance function based upon a truncated multivariate t-distribution is proposed, which refers to a generalization of the maximum likelihood procedure. The estimation of discrete distributions, by binomial mixtures, and inference for survivor distributions, via mixtures of exponential or Weibull distributions, are considered. Equally weighted mixture models are also shown to lead to an alternative Gibbs sampling methodology to the Lavine–West approach.
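A minimal sketch of importance sampling with a multivariate t importance function, the device named in the abstract; the target here is a toy Gaussian posterior and the truncation is omitted, both assumptions for illustration:

```python
import numpy as np
from scipy.stats import multivariate_t, multivariate_normal

rng = np.random.default_rng(5)

# Toy target: an unnormalized 2-d posterior (standard normal here, as an
# assumption); in the paper's setting this would be the mixture posterior.
log_target = lambda x: multivariate_normal(mean=[0, 0]).logpdf(x)

# Heavy-tailed t proposal centred near the target mode, in the spirit of
# the truncated multivariate-t importance function (truncation omitted).
prop = multivariate_t(loc=[0.2, -0.1], shape=np.eye(2) * 1.5, df=5)
x = prop.rvs(size=20000, random_state=rng)

logw = log_target(x) - prop.logpdf(x)
w = np.exp(logw - logw.max())
w /= w.sum()

post_mean = w @ x                 # self-normalized IS estimate
ess = 1.0 / np.sum(w**2)          # effective sample size
print(post_mean, ess)
```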

12.
Recently, a Bayesian receiver for blind detection in fading channels has been proposed by Chen, Wang and Liu (2000, IEEE Trans. Inform. Theory, 46, 2079–2094), based on the sequential Monte Carlo methodology. That work is built on a parametric modelling of the fading process in the form of a state-space model, and assumes knowledge of the second-order statistics of the fading channel. In this paper, we develop a nonparametric approach to the problem of blind detection in fading channels, without assuming any knowledge of the channel statistics. The basic idea is to decompose the fading process using a wavelet basis, and to use the sequential Monte Carlo technique to track both the wavelet coefficients and the transmitted symbols. Moreover, the algorithm is adaptive to time-varying speed/smoothness in the fading process and to the uncertainty in the number of wavelet coefficients (shrinkage order) needed. Simulation results are provided to demonstrate the excellent performance of the proposed blind adaptive receivers. This work was supported in part by the U.S. National Science Foundation (NSF) under grants CCR-9875314, CCR-9980599, DMS-9982846, DMS-0073651 and DMS-0073601.
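The receiver itself tracks wavelet coefficients and symbols jointly; what can be sketched generically is the sequential Monte Carlo (bootstrap particle filter) skeleton it builds on, shown here for an assumed scalar linear-Gaussian state-space model rather than the fading channel:

```python
import numpy as np

rng = np.random.default_rng(6)

# Bootstrap particle filter for x_t = a x_{t-1} + v_t, y_t = x_t + w_t.
a, sv, sw, T, N = 0.95, 0.3, 0.5, 100, 1000
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sv * rng.standard_normal()
y = x_true + sw * rng.standard_normal(T)

particles = rng.standard_normal(N)
est = []
for t in range(T):
    particles = a * particles + sv * rng.standard_normal(N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sw) ** 2              # weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    est.append(w @ particles)                                 # filtered mean
    idx = rng.choice(N, size=N, p=w)                          # resample
    particles = particles[idx]
print("RMSE:", np.sqrt(np.mean((np.array(est) - x_true) ** 2)))
```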

13.
Stochastic earthquake models are often based on a marked point process approach, as for instance presented in Vere-Jones (Int. J. Forecast., 11:503–538, 1995). This gives a fine resolution both in space and time, making it possible to represent each earthquake. However, it is not obvious that this approach is advantageous when aiming at earthquake predictions. In the present paper we take a coarse point of view, considering grid cells of 0.5 × 0.5°, or about 50 × 50 km, and time periods of 4 months, which seems suitable for predictions. More specifically, we discuss different alternatives of a Bayesian hierarchical space–time model in the spirit of Wikle et al. (Environ. Ecol. Stat., 5:117–154, 1998). For each time period the observations are the magnitudes of the largest observed earthquake within each grid cell. As data we apply parts of an earthquake catalogue provided by The Northern California Earthquake Data Center, where we limit ourselves to the area 32–37° N, 115–120° W for the time period January 1981 through December 1999, containing the Landers and Hector Mine earthquakes of magnitudes 7.3 and 7.1, respectively, on the Richter scale. Based on the space–time model alternatives, one-step-ahead earthquake predictions for the time periods containing these two events are obtained for all grid cells. The model alternatives are implemented within an MCMC framework in Matlab. The model alternative that gives the overall best predictions based on a standard loss is claimed to give new knowledge on the spatial and time-related dependencies between earthquakes. Also, considering a specially designed loss using spatial averages of the 90th percentiles of the predictive distribution of each cell, it is clear that the best model predicts the high-risk areas rather well. By using these percentiles we believe that one has a valuable tool for defining high- and low-risk areas in a region in short-term predictions.

14.
In this paper, we consider Bayesian inference and estimation of finite time ruin probabilities for the Sparre Andersen risk model. The dense family of Coxian distributions is considered for the approximation of both the inter-claim time and claim size distributions. We illustrate that the Coxian model can be well fitted to real, long-tailed claims data and that this compares well with the generalized Pareto model. The main advantage of using the Coxian model for inter-claim times and claim sizes is that it is possible to compute finite time ruin probabilities making use of recent results from queueing theory. In practice, finite time ruin probabilities are much more useful than infinite time ruin probabilities, as insurance companies are usually interested in predictions for short periods of future time and not just in the limit. We show how to obtain predictive distributions of these finite time ruin probabilities, which are more informative than simple point estimations and take account of model and parameter uncertainty. We illustrate the procedure with simulated data and the well-known Danish fire loss data set.
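The paper computes finite time ruin probabilities analytically via queueing-theory results; a brute-force Monte Carlo sketch of the same quantity, with Coxian inter-claim times and claim sizes and all parameter values assumed for illustration, is:

```python
import numpy as np

rng = np.random.default_rng(7)

def rcoxian(rates, exit_probs, rng):
    """One draw from a Coxian distribution: pass through exponential phases
    with the given rates, leaving after phase k with probability
    exit_probs[k] (the last exit probability must be 1)."""
    t = 0.0
    for lam, p in zip(rates, exit_probs):
        t += rng.exponential(1.0 / lam)
        if rng.random() < p:
            break
    return t

# Monte Carlo finite-time ruin probability for a Sparre Andersen surplus
# process u + c*t - S(t); ruin can only occur at claim epochs.
u, c, horizon, n_paths = 10.0, 1.2, 50.0, 4000
ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rcoxian([1.5, 0.8], [0.6, 1.0], rng)        # inter-claim time
        if t > horizon:
            break
        claims += rcoxian([1.0, 0.4], [0.5, 1.0], rng)   # claim size
        if u + c * t - claims < 0:
            ruined += 1
            break
print("P(ruin before horizon) ~", ruined / n_paths)
```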

15.
A Bayesian approach is presented in order to model long tail loss reserving data using the generalized beta distribution of the second kind (GB2) with dynamic mean functions and a mixture model representation. The proposed GB2 distribution provides a flexible probability density function, which nests various distributions with light and heavy tails, to facilitate accurate loss reserving in insurance applications. Extending the mean functions to include the state space and threshold models provides a dynamic approach to allow for irregular claims behaviors and legislative change which may occur during the claims settlement period. The mixture of GB2 distributions is proposed as a means of modeling the unobserved heterogeneity which arises from the incidence of very large claims in the loss reserving data. It is shown through both a simulation study and forecasting that model parameters are estimated with high accuracy.
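The GB2 density itself is standard and can be stated explicitly (McDonald's four-parameter form, restated here rather than reproduced from this paper):

```latex
% Density of the generalized beta distribution of the second kind,
% GB2(a, b, p, q), for y > 0:
f(y) \;=\; \frac{|a|\, y^{\,ap-1}}
                {b^{\,ap}\, B(p,q)\,\bigl[\,1+(y/b)^{a}\,\bigr]^{\,p+q}} ,
% where B(p,q) is the beta function, b is a scale parameter and a, p, q
% are shape parameters.  Special cases include the Burr and log-logistic
% distributions, with the generalized gamma and lognormal arising as
% limits -- the nesting property exploited for loss reserving.
```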

16.
We propose a new method for estimating the functional coefficients of generalized varying-coefficient models. The functional coefficients are approximated by B-spline functions; rather than fixing the number of knots, we place a uniform noninformative prior on the number of knots and a normal prior on the spline coefficients, and estimate each functional coefficient by Bayesian model averaging. A key feature of this approach is that it allows the posterior distribution of the number of knots to differ across functional coefficients, and hence allows different coefficients to use different smoothing parameters. We also describe the computation of the Bayesian B-spline estimator, and simulation examples show that the functional coefficients of generalized varying-coefficient models are estimated well by this method.
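A minimal sketch of the B-spline ingredient: build the basis for a given number of interior knots and fit one candidate model under a normal prior on the spline coefficients (a ridge fit). The Bayesian model averaging described in the abstract would weight such fits across knot counts; knot placement at quantiles is an assumption here:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, n_interior, degree=3):
    """Design matrix of B-spline basis functions at x, for a clamped knot
    vector with n_interior interior knots at quantiles of x."""
    lo, hi = x.min(), x.max()
    interior = np.quantile(x, np.linspace(0, 1, n_interior + 2)[1:-1])
    t = np.r_[[lo] * (degree + 1), interior, [hi] * (degree + 1)]
    k = len(t) - degree - 1            # number of basis functions
    eye = np.eye(k)
    return np.column_stack([BSpline(t, eye[j], degree)(x) for j in range(k)])

# One candidate model: ridge fit = posterior mean under a normal prior.
rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(200)
B = bspline_design(x, n_interior=8)
coef = np.linalg.solve(B.T @ B + 0.1 * np.eye(B.shape[1]), B.T @ y)
fit = B @ coef
```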

17.
Predicting insurance losses is an enduring focus of actuarial science in the insurance sector. Due to the existence of complicated features such as skewness, heavy tails, and multi-modality, traditional parametric models are often inadequate to describe the distribution of losses, calling for a mature application of Bayesian methods. In this study we explore a Gaussian mixture model based on Dirichlet process priors. Using three automobile insurance datasets, we employ the probit stick-breaking method to incorporate the effect of covariates into the weights of the mixture components, improve its hierarchical structure, and propose a Bayesian nonparametric model that can identify the unique regression pattern of different samples. Moreover, an advanced slice-sampling updating algorithm is integrated to provide an improved approximation to the infinite mixture model. We compare our framework with four common regression techniques: three generalized linear models and a dependent Dirichlet process ANOVA model. The empirical results show that the proposed framework flexibly characterizes the actual loss distribution in the insurance datasets and demonstrates superior performance in the accuracy of data fitting and extrapolating predictions, thus greatly extending the application of Bayesian methods in the insurance sector.
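The paper's probit stick-breaking makes the mixture weights covariate-dependent; a minimal sketch of the plain (truncated) stick-breaking construction it generalizes, with all settings illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a Dirichlet process:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, n_atoms)
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return w / w.sum()            # fold the truncation remainder back in

# Draw from a truncated DP mixture of Gaussians.
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
mu = rng.normal(0.0, 3.0, 50)                 # atoms from the base measure
z = rng.choice(50, size=1000, p=w)            # component memberships
y = rng.normal(mu[z], 0.5)
print("distinct components used:", np.unique(z).size)
```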

18.
This paper introduces the Bayesian approach to seasonal adjustment and the principles of the BAYSEA program, and gives examples of seasonally adjusting several domestic Chinese economic series with BAYSEA.

19.
Summary: This paper analyses the shift in parameter of a life test model. The analysis depends on the prediction of order statistics in future samples based on order statistics in a series of earlier samples in life tests having a general exponential model. While a series of k samples is being drawn, the model itself undergoes a change. Firstly, a single shift is considered and the effect of this shift on the variance is discussed. A generalisation with s shifts (s ≤ k) in k samples is also taken up, and the semi-or-used priors (SOUPS) have been used to get predictive distributions. Finally, a shift after i (i ≤ k) stages, from the exponential to the gamma model, is considered, and for this case the effect of the shift on the variance as well as on the Bayesian prediction region (BPR) is analysed, along with a set of tables.

20.
Increasingly large volumes of space–time data are collected everywhere by mobile computing applications, and in many of these cases, temporal data are obtained by registering events, for example, telecommunication or Web traffic data. Having both the spatial and temporal dimensions adds substantial complexity to data analysis and inference tasks. The computational complexity increases rapidly for fitting Bayesian hierarchical models, as such a task involves repeated inversion of large matrices. The primary focus of this paper is on developing space–time autoregressive models under the hierarchical Bayesian setup. To handle large data sets, a recently developed Gaussian predictive process approximation method is extended to include autoregressive terms of latent space–time processes. Specifically, a space–time autoregressive process, supported on a set of a smaller number of knot locations, is spatially interpolated to approximate the original space–time process. The resulting model is specified within a hierarchical Bayesian framework, and Markov chain Monte Carlo techniques are used to make inference. The proposed model is applied for analysing the daily maximum 8-h average ground level ozone concentration data from 1997 to 2006 from a large study region in the Eastern United States. The developed methods allow accurate spatial prediction of a temporally aggregated ozone summary, known as the primary ozone standard, along with its uncertainty, at any unmonitored location during the study period. Trends in spatial patterns of many features of the posterior predictive distribution of the primary standard, such as the probability of noncompliance with respect to the standard, are obtained and illustrated.
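A minimal sketch of the Gaussian predictive process idea: the latent process at n data sites is replaced by its kriging interpolant from m << n knots, so the expensive factorizations are m × m. The exponential covariance, knot grid, and all parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(10)

def expcov(a, b, sigma2=1.0, phi=1.5):
    """Exponential covariance between two sets of 2-d locations."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

n, m = 2000, 64
sites = rng.uniform(0, 10, size=(n, 2))
g = np.linspace(0.5, 9.5, 8)
knots = np.array([(u, v) for u in g for v in g])

C_star = expcov(knots, knots) + 1e-8 * np.eye(m)   # m x m, cheap to factor
c = expcov(sites, knots)                           # n x m cross-covariance
w_star = np.linalg.cholesky(C_star) @ rng.standard_normal(m)
# Predictive process at the data sites: w~(s) = c(s)' C*^{-1} w*.
w_tilde = c @ np.linalg.solve(C_star, w_star)
print(w_tilde.shape)
```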
