Similar Literature
20 similar documents retrieved (search time: 640 ms)
1.
Variational approximations provide fast, deterministic alternatives to Markov chain Monte Carlo for Bayesian inference on the parameters of complex, hierarchical models. Variational approximations are often limited in practicality in the absence of conjugate posterior distributions. Recent work has focused on the application of variational methods to models with only partial conjugacy, such as in semiparametric regression with heteroscedastic errors. Here, both the mean and log variance functions are modeled as smooth functions of covariates. For this problem, we derive a mean field variational approximation with an embedded Laplace approximation to account for the nonconjugate structure. Empirical results with simulated and real data show that our approximate method has significant computational advantages over traditional Markov chain Monte Carlo; in this case, a delayed rejection adaptive Metropolis algorithm. The variational approximation is much faster and eliminates the need for tuning parameter selection, achieves good fits for both the mean and log variance functions, and reasonably reflects the posterior uncertainty. We apply the methods to log-intensity data from a small angle X-ray scattering experiment, in which properly accounting for the smooth heteroscedasticity leads to significant improvements in posterior inference for key physical characteristics of an organic molecule.  相似文献   
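To make the setting concrete, here is a minimal sketch of the heteroscedastic semiparametric regression model described above (the notation is ours, not the paper's):

```latex
\[
y_i = f(x_i) + \varepsilon_i, \qquad
\varepsilon_i \sim \mathcal{N}\!\bigl(0,\ \exp\{g(x_i)\}\bigr), \qquad i = 1,\dots,n,
\]
```

with the mean function f and the log variance function g modeled as smooth functions of the covariates (e.g., by penalized splines). The term exp{g(·)} inside the Gaussian likelihood is the nonconjugate piece that the embedded Laplace approximation is meant to handle.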

2.
Assuming an additive model on the covariate effect in proportional hazards regression, we consider the estimation of the component functions. The estimator is based on the marginal integration method. Then we use a new kind of nonparametric estimator as the pilot estimator of the marginal integration. The pilot estimator is constructed by an analogy to the two-sample problems and by appealing to the principles of local partial likelihood and local linear fitting. We derive the asymptotic distribution of the marginal integration estimator of the component functions. The result of a simulation study is also given.  相似文献   
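As a sketch of the setting (our notation, not taken from the paper), the additive covariate effect in the proportional hazards model takes the form

```latex
\[
\lambda(t \mid X) \;=\; \lambda_0(t)\,
\exp\!\Bigl\{\, \textstyle\sum_{j=1}^{d} f_j(X_j) \Bigr\},
\]
```

where λ₀ is the baseline hazard and the component functions f_j are the targets of the marginal integration estimator.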

3.
For the variance parameter of the normal distribution with a normal-inverse-gamma prior, we analytically calculate the Bayes posterior estimator with respect to this conjugate prior under Stein's loss function. This estimator minimizes the Posterior Expected Stein's Loss (PESL). We also analytically calculate the Bayes posterior estimator and the PESL under the squared error loss function. The numerical simulations corroborate our theoretical findings that the PESLs do not depend on the sample, and that the Bayes posterior estimator and the PESL under the squared error loss function are consistently larger than those under Stein's loss function. Finally, we calculate the Bayes posterior estimators and the PESLs for the monthly simple returns of the SSE Composite Index.  相似文献
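For orientation, a sketch of the two estimators when the posterior of the variance is inverse gamma, say σ² | x ~ IG(α, β) in the shape-scale parameterization (our notation; the paper's parameterization may differ). Stein's loss and the resulting Bayes estimators are

```latex
\[
L(\sigma^2, \delta) = \frac{\delta}{\sigma^2} - \log\frac{\delta}{\sigma^2} - 1,
\qquad
\hat{\delta}_{\mathrm{Stein}} = \Bigl(E\bigl[\sigma^{-2} \mid x\bigr]\Bigr)^{-1} = \frac{\beta}{\alpha},
\qquad
\hat{\delta}_{\mathrm{SE}} = E\bigl[\sigma^{2} \mid x\bigr] = \frac{\beta}{\alpha - 1}\ \ (\alpha > 1),
\]
```

which makes the ordering β/α < β/(α − 1) explicit and is consistent with the squared-error estimator being the larger of the two.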

4.
Many nonparametric methods are available for estimating the mean function of marginal models for longitudinal data; among them, regression splines, smoothing splines, and seemingly unrelated (SUR) kernel estimators attain the minimum asymptotic variance when the working covariance matrix is correctly specified. The asymptotic bias of regression splines does not depend on the working covariance matrix, whereas the asymptotic biases of the SUR kernel and smoothing spline estimators do. This paper studies the efficiency of the regression spline, smoothing spline, and SUR kernel estimators. Simulation comparisons show that the regression spline estimator performs stably and, in most cases, is more efficient than the smoothing spline and SUR kernel estimators.  相似文献

5.
Although generalized linear mixed effects models have received much attention in the statistical literature, there is still no computationally efficient algorithm for computing maximum likelihood estimates for such models when there are a moderate number of random effects. Existing algorithms are either computationally intensive or they compute estimates from an approximate likelihood. Here we propose an algorithm—the spherical–radial algorithm—that is computationally efficient and computes maximum likelihood estimates. Although we concentrate on two-level, generalized linear mixed effects models, the same algorithm can be applied to many other models as well, including nonlinear mixed effects models and frailty models. The computational difficulty for estimation in these models is in integrating the joint distribution of the data and the random effects to obtain the marginal distribution of the data. Our algorithm uses a multidimensional quadrature rule developed in earlier literature to integrate the joint density. This article discusses how this rule may be combined with an optimization algorithm to efficiently compute maximum likelihood estimates. Because of stratification and other aspects of the quadrature rule, the resulting integral estimator has significantly less variance than can be obtained through simple Monte Carlo integration. Computational efficiency is achieved, in part, because relatively few evaluations of the joint density may be required in the numerical integration.  相似文献   
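To illustrate the integration problem (not the spherical-radial rule itself), here is a minimal sketch of the marginal log-likelihood of a hypothetical random-intercept logistic model, integrated with ordinary Gauss-Hermite quadrature; the model, the variable names, and the quadrature rule are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.special import expit

# Hypothetical random-intercept logistic GLMM (for illustration only).
# The marginal likelihood of cluster i is
#   L_i(beta, sigma) = \int prod_j Bernoulli(y_ij | expit(beta0 + beta1*x_ij + u)) N(u | 0, sigma^2) du,
# approximated here with ordinary Gauss-Hermite quadrature; the paper's
# spherical-radial rule targets the same kind of integral in higher dimensions.

def marginal_loglik(params, y, x, groups, n_quad=15):
    beta0, beta1, log_sigma = params
    sigma = np.exp(log_sigma)
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    u = np.sqrt(2.0) * sigma * nodes          # change of variables for N(0, sigma^2)
    loglik = 0.0
    for g in np.unique(groups):
        idx = groups == g
        eta = beta0 + beta1 * x[idx][:, None] + u[None, :]   # shape (n_g, n_quad)
        p = expit(eta)
        # joint Bernoulli likelihood of the cluster evaluated at each quadrature node
        lik_nodes = np.prod(np.where(y[idx][:, None] == 1, p, 1.0 - p), axis=0)
        loglik += np.log(np.dot(weights, lik_nodes) / np.sqrt(np.pi))
    return loglik

# Usage sketch: maximize over (beta0, beta1, log_sigma), e.g. with
#   scipy.optimize.minimize(lambda p: -marginal_loglik(p, y, x, groups), np.zeros(3))
```

The point of the sketch is that each likelihood evaluation requires a numerical integral per cluster, which is exactly the cost the paper's quadrature-plus-optimization scheme is designed to keep small.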

6.
We present an approximate Maximum Likelihood estimator for univariate Itô stochastic differential equations driven by Brownian motion, based on numerical calculation of the likelihood function. The transition probability density of a stochastic differential equation is given by the Kolmogorov forward equation, known as the Fokker-Planck equation. This partial differential equation can only be solved analytically for a limited number of models, which is the reason for applying numerical methods based on higher-order finite differences. The approximate likelihood converges to the true likelihood, both theoretically and in our simulations, implying that the estimator inherits many desirable properties. The estimator is evaluated on simulated data from the Cox-Ingersoll-Ross model and a non-linear extension of the Chan-Karolyi-Longstaff-Sanders model. The estimates are similar to the Maximum Likelihood estimates when these can be calculated, and converge to the true Maximum Likelihood estimates as the accuracy of the numerical scheme is increased. The estimator is also compared to two benchmarks: a simulation-based estimator and a Crank-Nicolson scheme applied to the Fokker-Planck equation; the proposed estimator remains competitive.  相似文献
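For reference, the Kolmogorov forward (Fokker-Planck) equation for an Itô SDE dX_t = μ(X_t)dt + σ(X_t)dW_t, whose numerical solution supplies the transition density used in the likelihood, has the standard form (our notation):

```latex
\[
\frac{\partial p(x, t \mid x_0)}{\partial t}
= -\frac{\partial}{\partial x}\bigl[\mu(x)\, p(x, t \mid x_0)\bigr]
+ \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[\sigma^2(x)\, p(x, t \mid x_0)\bigr],
\qquad p(x, 0 \mid x_0) = \delta(x - x_0).
\]
```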

7.
For multivariate regressors, integrating the Nadaraya–Watson regression smoother produces estimators of the lower-dimensional marginal components that are asymptotically normally distributed at the optimal rate of convergence. Some heuristics, based on consistency of the pilot estimator, suggested that the estimator would not converge at the optimal rate in the presence of more than four covariates. This paper shows first that marginal integration with its internally normalized counterpart leads to rate-optimal estimators of the marginal components. We introduce the necessary modifications and give central limit theorems. We then show that the method also applies to more general models; in particular, we discuss feasible estimation of partial linear models. The proofs reveal that the pilot estimator should over-smooth the variables to be integrated, and that the resulting estimator is itself a lower-dimensional regression smoother. Hence, finite-sample properties of the estimator are comparable to those of low-dimensional nonparametric regression. Further advantages of starting with the internally normalized pilot estimator are its computational attractiveness and its better performance (compared to its classical counterpart) when the covariates are correlated and nonuniformly distributed. Simulation studies underline the excellent performance in comparison with previously known methods.  相似文献
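A sketch of the construction (our notation): with a full-dimensional pilot smoother \(\hat m\), e.g. Nadaraya–Watson, the marginal component in the first coordinate is estimated by averaging over the remaining covariates,

```latex
\[
\hat m_1(x_1) \;=\; \frac{1}{n}\sum_{i=1}^{n}
\hat m\bigl(x_1, X_{i2}, \dots, X_{id}\bigr),
\]
```

so the marginal integration estimator is itself a one-dimensional smoother in x₁, which is why its finite-sample behavior is comparable to that of low-dimensional nonparametric regression.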

8.
The asymptotic properties of the quasi-maximum likelihood estimator (QMLE) of vector autoregressive moving-average (VARMA) models are derived under the assumption that the errors are uncorrelated but not necessarily independent or martingale differences. Relaxing the martingale difference assumption on the errors considerably extends the range of application of the VARMA models and allows one to cover linear representations of general nonlinear processes. Conditions are given for the asymptotic normality of the QMLE. Particular attention is given to the estimation of the asymptotic variance matrix, which may be very different from that obtained in the standard framework.  相似文献
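For reference, the VARMA(p, q) model under discussion can be written in the standard form (our notation)

```latex
\[
X_t - \sum_{i=1}^{p} A_i X_{t-i} \;=\; \varepsilon_t + \sum_{j=1}^{q} B_j \varepsilon_{t-j},
\]
```

where the error process (ε_t) is assumed uncorrelated but not necessarily independent or a martingale difference sequence.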

9.
Gaussian time-series models are often specified through their spectral density. Such models present several computational challenges, in particular because of the nonsparse nature of the covariance matrix. We derive a fast approximation of the likelihood for such models. We propose to sample from the approximate posterior (i.e., the prior times the approximate likelihood), and then to recover the exact posterior through importance sampling. We show that the variance of the importance sampling weights vanishes as the sample size goes to infinity. We explain why the approximate posterior may typically be multimodal, and we derive a Sequential Monte Carlo sampler based on an annealing sequence to sample from that target distribution. Performance of the overall approach is evaluated on simulated and real datasets. In addition, for one real-world dataset, we provide some numerical evidence that a Bayesian approach to semiparametric estimation of spectral density may provide more reasonable results than its frequentist counterparts. The article comes with supplementary materials, available online, that contain an Appendix with a proof of our main Theorem, a Python package that implements the proposed procedure, and the Ethernet dataset.  相似文献   
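One standard fast likelihood approximation of this type is the Whittle likelihood, built from the periodogram at the Fourier frequencies; we sketch it here for orientation only, without claiming it is the exact approximation the paper derives:

```latex
\[
\log \hat L(\theta) \;\approx\; -\sum_{j=1}^{\lfloor (n-1)/2 \rfloor}
\Bigl\{ \log f_\theta(\omega_j) + \frac{I_n(\omega_j)}{f_\theta(\omega_j)} \Bigr\},
\qquad \omega_j = \frac{2\pi j}{n},
\]
```

up to an additive constant, where f_θ is the model spectral density and I_n the periodogram of the data.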

10.
Based on a partially linear measurement error model for longitudinal data, we study the estimation of the regression coefficients of interest in the parametric part of the model. We first approximate the nonparametric function with B-splines, and then propose a modified quadratic inference function (QIF) method to estimate the regression coefficients of the parametric part; the proposed method improves estimation efficiency. Under certain regularity conditions, the resulting estimators are shown to be consistent and asymptotically normal. Finally, simulation studies and a real-data example verify the finite-sample properties of the proposed estimation method.  相似文献
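A sketch of the model class described (our notation): for subject i at time t_{ij},

```latex
\[
y_{ij} = x_{ij}^{\top}\beta + g(t_{ij}) + \varepsilon_{ij},
\qquad
w_{ij} = x_{ij} + u_{ij},
\]
```

where β is the parametric part of interest, g(·) is the nonparametric function approximated by B-splines, and the covariate x_{ij} is observed only through w_{ij} with measurement error u_{ij}.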

11.
Generalized linear mixed models (GLMMs) have been applied widely in the analysis of longitudinal data. This model confers two important advantages, namely, the flexibility to include random effects and the ability to make inference about complex covariances. In practice, however, the inference of variance components can be a difficult task due to the complexity of the model itself and the dimensionality of the covariance matrix of random effects. Here we first discuss for GLMMs the relation between Bayesian posterior estimates and penalized quasi-likelihood (PQL) estimates, based on the generalization of Harville's result for general linear models. Next, we perform fully Bayesian analyses for the random covariance matrix using three different reference priors: two Jeffreys' priors derived from approximate likelihoods and an approximate uniform shrinkage prior. Computations are carried out via a combination of asymptotic approximations and Markov chain Monte Carlo methods. Under the criterion of the squared Euclidean norm, we compare the performance of Bayesian estimates of variance components with that of PQL estimates when the responses are non-normal, and with that of restricted maximum likelihood (REML) estimates when the data are assumed normal. Three applications and simulation studies with binary, normal, and count responses, multiple random effects, and small sample sizes are presented. The analyses examine the differences in estimation performance when the covariance structure is complex, and demonstrate the equivalence between PQL estimates and posterior modes when the former can be derived. The results also show that the Bayesian approach, particularly under the approximate Jeffreys' priors, outperforms the other procedures.  相似文献

12.
This paper describes a method for the objective selection of the optimal prior distribution, or for adjusting its hyper-parameter, among competing priors for a variety of Bayesian models. In order to implement this method, the integration of very high-dimensional functions is required to obtain the normalizing constants of the posterior and even of the prior distribution. The logarithm of the high-dimensional integral is reduced to a one-dimensional integration of a certain function with respect to a scalar parameter over the unit interval. Having decided the prior, the Bayes estimate, or posterior mean, is mainly used here in addition to the posterior mode. All of these are based on the simulation of Gibbs distributions, such as by Metropolis' Monte Carlo algorithm. The improvement in integration accuracy is substantial in comparison with conventional crude Monte Carlo integration. In the present method, there are essentially no practical restrictions on modeling the prior and the likelihood. Illustrative artificial data from a lattice system are given to show the practicability of the present procedure.  相似文献
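The reduction of a log normalizing constant to a one-dimensional integral over the unit interval resembles path sampling (thermodynamic integration); a sketch of that identity, which is not necessarily the exact form used in the paper:

```latex
\[
\log \frac{Z(1)}{Z(0)}
= \int_{0}^{1} E_{\theta \sim p_t}\!\left[ \frac{\partial}{\partial t} \log q_t(\theta) \right] dt,
\qquad
Z(t) = \int q_t(\theta)\, d\theta, \quad p_t(\theta) = \frac{q_t(\theta)}{Z(t)},
\]
```

where q_t interpolates between the prior (t = 0) and the unnormalized posterior (t = 1), and the inner expectation can be estimated by Monte Carlo simulation at each t.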

13.
The computation of marginal posterior density in Bayesian analysis is essential in that it can provide complete information about parameters of interest. Furthermore, the marginal posterior density can be used for computing Bayes factors, posterior model probabilities, and diagnostic measures. The conditional marginal density estimator (CMDE) is theoretically the best for marginal density estimation but requires the closed-form expression of the conditional posterior density, which is often not available in many applications. We develop the partition weighted marginal density estimator (PWMDE) to realize the CMDE. This unbiased estimator requires only a single Markov chain Monte Carlo output from the joint posterior distribution and the known unnormalized posterior density. The theoretical properties and various applications of the PWMDE are examined in detail. The PWMDE method is also extended to the estimation of conditional posterior densities. We carry out simulation studies to investigate the empirical performance of the PWMDE and further demonstrate the desirable features of the proposed method with two real data sets from a study of dissociative identity disorder patients and a prostate cancer study, respectively. Supplementary materials for this article are available online.  相似文献   
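For context, a sketch of the conditional marginal density estimator (CMDE) that the PWMDE is designed to emulate (our notation): with a parameter split θ = (θ₁, θ₂) and MCMC draws θ₂^{(1)}, …, θ₂^{(M)} from the joint posterior,

```latex
\[
\hat{\pi}(\theta_1 \mid x)
= \frac{1}{M} \sum_{m=1}^{M} \pi\bigl(\theta_1 \mid \theta_2^{(m)}, x\bigr),
\]
```

which is why the CMDE requires the conditional posterior density π(θ₁ | θ₂, x) in closed form.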

14.
In this paper we present an error estimator and an error indicator for time integration in structural dynamics. Based on the equivalence, for linear problems, between the standard Newmark scheme and a Galerkin formulation in time [1], a global time integration error estimator based on duality [3] can also be derived for the Newmark scheme. This error estimator is compared to an error indicator based on a finite difference approach in time [2]. Finally, an adaptive time stepping scheme using the global estimator and the local indicator is presented. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)  相似文献
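For reference, the standard Newmark update for the semidiscrete equation of motion, with parameters β and γ and time step Δt (standard form; our notation):

```latex
\[
u_{n+1} = u_n + \Delta t\, v_n
+ \Delta t^2 \Bigl[\bigl(\tfrac{1}{2} - \beta\bigr) a_n + \beta\, a_{n+1}\Bigr],
\qquad
v_{n+1} = v_n + \Delta t \bigl[(1 - \gamma)\, a_n + \gamma\, a_{n+1}\bigr],
\]
```

which, for linear problems, is the scheme shown in [1] to be equivalent to a Galerkin formulation in time.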

15.
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this article, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effect models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.  相似文献   
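A sketch of the estimator in question (our notation), written in self-normalized form since in Bayesian applications π is typically known only up to a normalizing constant:

```latex
\[
\widehat{E_\pi[h]}
= \frac{\sum_{i=1}^{n} w(X_i)\, h(X_i)}{\sum_{i=1}^{n} w(X_i)},
\qquad
w(x) = \frac{\tilde{\pi}(x)}{\tilde{\pi}_1(x)},
\]
```

where X₁, …, X_n are drawn from π₁ (iid, or a Harris ergodic Markov chain with invariant density π₁) and \(\tilde{\pi}\), \(\tilde{\pi}_1\) are unnormalized versions of the two densities.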

16.
On asymptotics of t-type regression estimation in multiple linear model
We consider a robust (t-type) regression estimator of the multiple linear regression model, obtained by maximizing the marginal likelihood under a scaled t-type error distribution. The marginal likelihood can also be applied to the de-correlated response when the within-subject correlation can be consistently estimated from an initial fit of the model under the working independence assumption. This paper shows that such a t-type estimator is consistent.  相似文献

17.
This article considers Monte Carlo integration under rejection sampling or Metropolis-Hastings sampling. Each algorithm involves accepting or rejecting observations from proposal distributions other than a target distribution. While taking a likelihood approach, we basically treat the sampling scheme as a random design, and define a stratified estimator of the baseline measure. We establish that the likelihood estimator has no greater asymptotic variance than the crude Monte Carlo estimator under rejection sampling or independence Metropolis-Hastings sampling. We employ a subsampling technique to reduce the computational cost, and illustrate with three examples the computational effectiveness of the likelihood method under general Metropolis-Hastings sampling.  相似文献   

18.
Based on kernel estimation of the nonparametric function, we construct a consistent estimator of the fourth moment of the errors in a partially linear autoregressive model, and thereby establish the asymptotic normality of the kernel estimator of the error variance. Simulated examples and a real-data example illustrate its application.  相似文献

19.
On posterior consistency in nonparametric regression problems
We provide sufficient conditions for establishing posterior consistency in nonparametric regression problems with Gaussian errors when suitable prior distributions are used for the unknown regression function and the noise variance. When the prior under consideration satisfies certain properties, the crucial condition for posterior consistency is the construction of tests that separate the true parameter from the complements of suitable neighborhoods of it. Under appropriate conditions on the regression function, we show that such tests exist, with type I and type II error probabilities that are exponentially small for distinguishing the true parameter from the complements of those neighborhoods. These sufficient conditions enable us to establish almost sure consistency, based on appropriate metrics, with multi-dimensional covariate values either fixed in advance or sampled from a probability distribution. We consider several examples of nonparametric regression problems.  相似文献

20.
This paper reports a robust kernel estimator for fixed-design nonparametric regression models. A Stahel-Donoho kernel estimator is introduced, in which the weight functions depend on both the depths of the data and the distances between the design points and the estimation points. Based on a local approximation, a computational technique is given to approximate the otherwise incomputable depths of the errors. As a result, the new estimator is computationally efficient. The proposed estimator attains a high breakdown point and enjoys desirable asymptotic behavior, including asymptotic normality and convergence in mean squared error. Unlike the depth-weighted estimator for parametric regression models, this depth-weighted nonparametric estimator has a simple variance structure, so its efficiency can be compared with that of the original estimator. Simulations show that the new method can smooth the regression estimate and achieve a desirable balance between robustness and efficiency.  相似文献
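As a sketch of the general form of such a depth-weighted kernel smoother (our notation; the paper's exact weights based on Stahel-Donoho outlyingness will differ in detail):

```latex
\[
\hat m(x) \;=\;
\frac{\sum_{i=1}^{n} d_i\, K_h(x_i - x)\, y_i}
     {\sum_{i=1}^{n} d_i\, K_h(x_i - x)},
\]
```

where K_h is a kernel with bandwidth h carrying the distance between the design point x_i and the estimation point x, and d_i is a weight that downweights observations whose (approximated) error depth is low.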
