Similar Literature
20 similar documents found.
1.
In this article, we develop a new approach within the framework of asset pricing models that incorporates two key features of the latent volatility: co-movement among conditionally heteroscedastic financial returns and switching between different unobservable regimes. By combining latent factor models with hidden Markov chain models, we derive a dynamical local model for segmentation and prediction of multivariate conditionally heteroscedastic financial time series. We concentrate more precisely on situations where the factor variances are modelled by univariate generalized quadratic autoregressive conditionally heteroscedastic processes. The expectation-maximization algorithm that we have developed for the maximum likelihood estimation is based on a quasi-optimal switching Kalman filter approach combined with a generalized pseudo-Bayesian approximation, which yields inferences about the unobservable path of the common factors, their variances and the latent variable of the state process. Extensive Monte Carlo simulations and preliminary experiments obtained with daily foreign exchange rate returns of eight currencies show promising results. Copyright © 2007 John Wiley & Sons, Ltd.
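The quasi-optimal switching Kalman filter interleaves filtering of the hidden Markov regime with filtering of the latent factors. As a minimal sketch of the regime-filtering half alone, assuming a plain K-regime Gaussian model for a univariate return series (an illustration, not the paper's GQARCH factor structure), the Hamilton-filter recursion looks as follows.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, P, mu, sigma):
    """Filtered regime probabilities for a K-regime Gaussian model.

    y     : (T,) observed returns
    P     : (K, K) transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
    mu    : (K,) regime means
    sigma : (K,) regime standard deviations
    """
    T, K = len(y), len(mu)
    probs = np.full(K, 1.0 / K)          # initial regime distribution
    filtered = np.empty((T, K))
    for t in range(T):
        pred = probs @ P                  # one-step-ahead regime prediction
        lik = norm.pdf(y[t], mu, sigma)   # likelihood of y_t under each regime
        post = pred * lik
        probs = post / post.sum()         # Bayes update
        filtered[t] = probs
    return filtered

# Toy example: volatility switches between a calm and a turbulent regime.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(0, 2.0, 100)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
print(hamilton_filter(y, P, mu=np.zeros(2), sigma=np.array([0.5, 2.0]))[-5:])
```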

2.
We introduce a class of spatiotemporal models for Gaussian areal data. These models assume a latent random field process that evolves through time with random field convolutions; the convolving fields follow proper Gaussian Markov random field (PGMRF) processes. At each time, the latent random field process is linearly related to observations through an observational equation with errors that also follow a PGMRF. The use of PGMRF errors brings modeling and computational advantages. With respect to modeling, it allows more flexible model structures such as different but interacting temporal trends for each region, as well as distinct temporal gradients for each region. Computationally, building upon the fact that PGMRF errors have proper density functions, we have developed an efficient Bayesian estimation procedure based on Markov chain Monte Carlo with an embedded forward information filter backward sampler (FIFBS) algorithm. We show that, when compared with the traditional one-at-a-time Gibbs sampler, our novel FIFBS-based algorithm explores the posterior distribution much more efficiently. Finally, we have developed a simulation-based conditional Bayes factor suitable for the comparison of nonnested spatiotemporal models. An analysis of the number of homicides in Rio de Janeiro State illustrates the power of the proposed spatiotemporal framework.

Supplemental materials for this article are available online on the journal's web page.
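The FIFBS step is an information-filter variant of the forward-filtering backward-sampling (FFBS) idea used throughout dynamic linear models. Below is a minimal covariance-form sketch for a univariate state (assumed here for clarity; the paper's filter runs in information form over vector-valued PGMRF states).

```python
import numpy as np

def ffbs(y, phi, tau2, sigma2, m0=0.0, C0=1.0, rng=None):
    """One posterior draw of the state path for the model
    x_t = phi * x_{t-1} + N(0, tau2),  y_t = x_t + N(0, sigma2)."""
    rng = rng or np.random.default_rng()
    T = len(y)
    m, C, a, R = (np.empty(T) for _ in range(4))
    for t in range(T):                        # forward Kalman filter
        a[t] = phi * (m[t - 1] if t > 0 else m0)
        R[t] = phi ** 2 * (C[t - 1] if t > 0 else C0) + tau2
        K = R[t] / (R[t] + sigma2)            # Kalman gain
        m[t] = a[t] + K * (y[t] - a[t])
        C[t] = (1.0 - K) * R[t]
    x = np.empty(T)                           # backward sampler
    x[T - 1] = rng.normal(m[T - 1], np.sqrt(C[T - 1]))
    for t in range(T - 2, -1, -1):
        B = C[t] * phi / R[t + 1]
        h = m[t] + B * (x[t + 1] - a[t + 1])
        H = C[t] - B ** 2 * R[t + 1]          # Carter-Kohn smoothing variance
        x[t] = rng.normal(h, np.sqrt(H))
    return x

rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0, 0.3, 200))   # phi = 1: random-walk state
y = x_true + rng.normal(0, 1.0, 200)
draw = ffbs(y, phi=1.0, tau2=0.09, sigma2=1.0, rng=rng)
```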

3.
Pair-copula Bayesian networks (PCBNs) are a novel class of multivariate statistical models, which combine the distributional flexibility of pair-copula constructions (PCCs) with the parsimony of conditional independence models associated with directed acyclic graphs (DAGs). We are the first to provide generic algorithms for random sampling and likelihood inference in arbitrary PCBNs as well as for selecting orderings of the parents of the vertices in the underlying graphs. Model selection of the DAG is facilitated using a version of the well-known PC algorithm that is based on a novel test for conditional independence of random variables tailored to the PCC framework. A simulation study shows the PC algorithm’s high aptitude for structure estimation in non-Gaussian PCBNs. The proposed methods are finally applied to modeling financial return data. Supplementary materials for this article are available online.
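Sampling and likelihood evaluation in a PCC both reduce to chaining the conditional distribution functions (h-functions) of the bivariate building blocks along the graph. Here is a sketch of the conditional sampling step for the simplest block, a bivariate Gaussian copula; the paper covers arbitrary pair-copula families.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_conditional(u1, rho, rng=None):
    """Draw U2 | U1 = u1 under a bivariate Gaussian copula with correlation rho."""
    rng = rng or np.random.default_rng()
    z1 = norm.ppf(u1)                                 # to the latent normal scale
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(np.shape(u1))
    return norm.cdf(z2)                               # back to the uniform scale

# The inverse h-function view: an independent uniform is transformed into a
# draw from C(u2 | u1); chaining such steps along the DAG is how sampling in
# a pair-copula construction proceeds.
rng = np.random.default_rng(1)
u1 = rng.uniform(size=5)
print(gaussian_copula_conditional(u1, rho=0.7, rng=rng))
```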

4.
Motivated by genetic association studies of pleiotropy, we propose a Bayesian latent variable approach to jointly study multiple outcomes. The models studied here can incorporate both continuous and binary responses, and can account for serial and cluster correlations. We consider Bayesian estimation for the model parameters, and we develop a novel MCMC algorithm that builds upon hierarchical centering and parameter expansion techniques to efficiently sample from the posterior distribution. We evaluate the proposed method via extensive simulations and demonstrate its utility with an application to an association study of various complication outcomes related to Type 1 diabetes. This article has supplementary material online.

5.
Time series models serve several purposes: understanding the functional dependence of a variable of interest on covariates, forecasting the dependent variable for future values of the covariates, and estimating variance decomposition, co-integration and steady-state relations. Although the regression function in a time series model has been extensively modeled both parametrically and nonparametrically, modeling of the error autocorrelation is mainly restricted to the parametric setup. Proper modeling of autocorrelation not only helps to reduce the bias in the regression function estimate, but also enriches forecasting via a better forecast of the error term. In this article, we present a nonparametric model of the autocorrelation function under a Bayesian framework. Moving from the time domain into the frequency domain, we place a Gaussian process prior on the log of the spectral density, which is then updated using the Whittle approximation to the likelihood function (the Whittle likelihood). The posterior computation is simplified by the fact that the Whittle likelihood is approximated by the likelihood of a normal mixture distribution with the log-spectral density as a location-shift parameter, where the mixture has only five components with known means, variances, and mixture probabilities. The problem then becomes conjugate conditional on the mixture components, and a Gibbs sampler is used to impute the unknown mixture components as latent variables. We present a simulation study for performance comparison, and apply our method to two real-data examples.
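For concreteness, the Whittle likelihood treats the periodogram ordinates I(omega_k) as approximately independent exponentials with means f(omega_k), so log I(omega_k) is log f(omega_k) plus a noise term; it is this noise distribution that the five-component normal mixture approximates. A generic sketch of computing the Whittle log-likelihood itself (not the paper's sampler):

```python
import numpy as np

def whittle_loglik(y, log_spec):
    """Whittle approximation to the log-likelihood of a stationary series.

    y        : (n,) mean-zero time series
    log_spec : callable returning log f(omega) for omega in (0, pi)
    """
    n = len(y)
    # Periodogram at Fourier frequencies omega_k = 2*pi*k/n strictly inside (0, pi)
    k = np.arange(1, (n - 1) // 2 + 1)
    omega = 2 * np.pi * k / n
    I = np.abs(np.fft.fft(y)[k]) ** 2 / (2 * np.pi * n)
    lf = log_spec(omega)
    return -np.sum(lf + I / np.exp(lf))     # sum of Exp(f) log-densities, up to a constant

# White noise with unit variance has flat spectrum f = 1 / (2*pi).
rng = np.random.default_rng(2)
y = rng.standard_normal(512)
print(whittle_loglik(y, lambda w: np.full_like(w, np.log(1 / (2 * np.pi)))))
```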

6.
This article presents methodology that allows a computer to play the role of musical accompanist in a nonimprovised musical composition for soloist and accompaniment. The modeling of the accompaniment incorporates a number of distinct knowledge sources including timing information extracted in real-time from the soloist's acoustic signal, an understanding of the soloist's interpretation learned from rehearsals, and prior knowledge that guides the accompaniment toward musically plausible renditions. The solo and accompaniment parts are represented collectively as a large number of Gaussian random variables with a specified conditional independence structure—a Bayesian belief network. Within this framework a principled and computationally feasible method for generating real-time accompaniment is presented that incorporates the relevant knowledge sources. The EM algorithm is used to adapt the accompaniment to the soloist's interpretation through a series of rehearsals. A demonstration is provided from J.S. Bach's Cantata 12.
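With all variables jointly Gaussian, real-time prediction of the accompaniment reduces to repeatedly conditioning the joint distribution on the solo onsets detected so far. A generic Gaussian-conditioning sketch follows (the variable layout is illustrative, not the paper's actual network, which exploits the sparse conditional independence structure).

```python
import numpy as np

def condition_gaussian(mu, Sigma, obs_idx, obs_val):
    """Posterior of the unobserved block of N(mu, Sigma) given observed entries."""
    n = len(mu)
    hid = np.setdiff1d(np.arange(n), obs_idx)
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_ho = Sigma[np.ix_(hid, obs_idx)]
    S_hh = Sigma[np.ix_(hid, hid)]
    gain = S_ho @ np.linalg.inv(S_oo)
    mu_post = mu[hid] + gain @ (obs_val - mu[obs_idx])
    Sigma_post = S_hh - gain @ S_ho.T
    return hid, mu_post, Sigma_post

# Three correlated onset times; having "heard" variable 0 at 1.2, predict the rest.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.8, 0.5],
                  [0.8, 1.0, 0.6],
                  [0.5, 0.6, 1.0]])
print(condition_gaussian(mu, Sigma, np.array([0]), np.array([1.2])))
```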

7.
We present a unified semiparametric Bayesian approach based on Markov random field priors for analyzing the dependence of multicategorical response variables on time, space and further covariates. The general model extends dynamic, or state space, models for categorical time series and longitudinal data by including spatial effects as well as nonlinear effects of metrical covariates in flexible semiparametric form. Trend and seasonal components, different types of covariates and spatial effects are all treated within the same general framework by assigning appropriate priors with different forms and degrees of smoothness. Inference is fully Bayesian and uses MCMC techniques for posterior analysis. The approach in this paper is based on latent semiparametric utility models and is particularly useful for probit models. The methods are illustrated by applications to unemployment data and a forest damage survey.
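For the probit case, the latent-utility representation yields the classic Albert-Chib data-augmentation step: conditional on the linear predictor, each latent utility is drawn from a normal truncated to the side implied by the observed response. A binary-response sketch (the paper's models are multicategorical, with semiparametric effects inside the predictor):

```python
import numpy as np
from scipy.stats import truncnorm

def sample_latent_utilities(y, eta, rng=None):
    """Draw z_i ~ N(eta_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0."""
    rng = rng or np.random.default_rng()
    lo = np.where(y == 1, -eta, -np.inf)   # bounds standardized to the N(0, 1) scale
    hi = np.where(y == 1, np.inf, -eta)
    return eta + truncnorm.rvs(lo, hi, random_state=rng)

rng = np.random.default_rng(5)
y = np.array([1, 0, 1, 1, 0])
eta = np.array([0.4, -0.2, 1.1, 0.0, -0.9])   # current linear predictor values
print(sample_latent_utilities(y, eta, rng))
```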

8.
A multimove sampling scheme for the state parameters of non-Gaussian and nonlinear dynamic models for univariate time series is proposed. This procedure follows the Bayesian framework, within a Gibbs sampling algorithm with steps of the Metropolis–Hastings algorithm. This sampling scheme combines the conjugate updating approach for generalized dynamic linear models, with the backward sampling of the state parameters used in normal dynamic linear models. A quite extensive Monte Carlo study is conducted in order to compare the results obtained using our proposed method, conjugate updating backward sampling (CUBS), with those obtained using some algorithms previously proposed in the Bayesian literature. We compare the performance of CUBS with other sampling schemes using two real datasets. Then we apply our algorithm to a stochastic volatility model. CUBS significantly reduces the computing time needed to attain convergence of the chains, and is relatively simple to implement.

9.
Healthcare fraud and abuse are a serious challenge to healthcare payers and to the entire society. This article presents a predictive model for fraud and abuse detection in health insurance based on a training dataset of manually reviewed claims. The goal of the analysis is to predict different fraud and abuse probabilities for new invoices. The prediction is based on a wide framework of fraud and abuse reports which examine the behavior of medical providers and insured members by measuring systematic deviation from usual patterns in medical claims data. We show that models which directly use the results of the reports as model covariates do not exploit the full potential in terms of predictive quality. Instead, we propose a multinomial Bayesian latent variable model which summarizes behavioral patterns in latent variables, and predicts different fraud and abuse probabilities. The estimation of model parameters is based on a Markov Chain Monte Carlo (MCMC) algorithm using Bayesian shrinkage techniques. The presented approach improves the identification of fraudulent and abusive claims compared to different benchmark approaches.

10.
Stochastic epidemic models describe the dynamics of an epidemic as a disease spreads through a population. Typically, only a fraction of cases are observed at a set of discrete times. The absence of complete information about the time evolution of an epidemic gives rise to a complicated latent variable problem in which the state space size of the epidemic grows large as the population size increases. This makes analytically integrating over the missing data infeasible for populations of even moderate size. We present a data augmentation Markov chain Monte Carlo (MCMC) framework for Bayesian estimation of stochastic epidemic model parameters, in which measurements are augmented with subject-level disease histories. In our MCMC algorithm, we propose each new subject-level path, conditional on the data, using a time-inhomogeneous continuous-time Markov process with rates determined by the infection histories of other individuals. The method is general, and may be applied to a broad class of epidemic models with only minimal modifications to the model dynamics and/or emission distribution. We present our algorithm in the context of multiple stochastic epidemic models in which the data are binomially sampled prevalence counts, and apply our method to data from an outbreak of influenza in a British boarding school. Supplementary material for this article is available online.
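The subject-level histories being imputed follow continuous-time Markov dynamics, and the same dynamics run forward give exact simulations of the epidemic. A Gillespie-style sketch of a stochastic SIR model (the population size echoes the boarding-school example; beta, gamma, and t_max are made-up illustration values):

```python
import numpy as np

def gillespie_sir(S, I, R, beta, gamma, t_max, rng=None):
    """Exact simulation of a stochastic SIR epidemic; returns event times and states."""
    rng = rng or np.random.default_rng()
    t, N = 0.0, S + I + R
    path = [(t, S, I, R)]
    while I > 0 and t < t_max:
        rate_inf = beta * S * I / N      # S -> I infection rate
        rate_rec = gamma * I             # I -> R recovery rate
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)          # time to next event
        if rng.uniform() < rate_inf / total:       # which event occurs
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        path.append((t, S, I, R))
    return path

# Boarding-school-sized toy run (parameters hypothetical).
print(len(gillespie_sir(762, 1, 0, beta=2.0, gamma=0.5, t_max=30)))
```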

11.
In Bayesian analysis of mixture models, the label-switching problem occurs as a result of the posterior distribution being invariant to any permutation of cluster indices under symmetric priors. To solve this problem, we propose a novel relabeling algorithm and its variants by investigating an approximate posterior distribution of the latent allocation variables instead of dealing with the component parameters directly. We demonstrate that our relabeling algorithm can be formulated in a rigorous framework based on information theory. Under some circumstances, it is shown to resemble the classical Kullback-Leibler relabeling algorithm and include the recently proposed equivalence classes representatives relabeling algorithm as a special case. Using simulation studies and real data examples, we illustrate the efficiency of our algorithm in dealing with various label-switching phenomena. Supplemental materials for this article are available online.
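Most relabeling schemes share one computational core: for each MCMC draw, find the label permutation that best aligns that draw's latent allocations with a reference. The sketch below uses a pivot allocation and a linear assignment solver; it shows the generic mechanism, not the paper's information-theoretic criterion.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(allocations, pivot, K):
    """Permute cluster labels of each MCMC draw to agree with a pivot allocation.

    allocations : (M, n) int array of latent allocations per iteration
    pivot       : (n,) reference allocation (e.g., the MAP draw)
    """
    out = np.empty_like(allocations)
    for m, z in enumerate(allocations):
        # cost[k, l] = -#{i : z_i = k and pivot_i = l}; minimizing maximizes agreement
        cost = -np.array([[np.sum((z == k) & (pivot == l)) for l in range(K)]
                          for k in range(K)])
        row, col = linear_sum_assignment(cost)
        perm = np.empty(K, dtype=int)
        perm[row] = col                   # label k is renamed to perm[k]
        out[m] = perm[z]
    return out

draws = np.array([[0, 0, 1, 1], [1, 1, 0, 0]])   # two draws, labels switched
print(relabel(draws, pivot=np.array([0, 0, 1, 1]), K=2))
```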

12.
The Bradley–Terry model is a popular approach to describe probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications including animal behavior, chess ranking, and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature including models with ties, multiple comparisons, group comparisons, and random graphs. From a computational point of view, Hunter has proposed efficient iterative minorization-maximization (MM) algorithms to perform maximum likelihood estimation for these generalized Bradley–Terry models whereas Bayesian inference is typically performed using Markov chain Monte Carlo algorithms based on tailored Metropolis–Hastings proposals. We show here that these MM algorithms can be reinterpreted as special instances of expectation-maximization algorithms associated with suitable sets of latent variables and propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
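For the basic model, Hunter's MM update is a one-line fixed point: each strength lambda_i is set to the win count W_i divided by sum_j n_ij / (lambda_i + lambda_j). A sketch:

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=500):
    """Hunter's MM iteration for the Bradley-Terry model.

    wins : (K, K) matrix with zero diagonal; wins[i, j] = # times i beat j.
    Assumes every player has at least one win and one loss (Ford's condition).
    """
    n = wins + wins.T                            # games between each pair
    W = wins.sum(axis=1)                         # total wins of each player
    lam = np.ones(wins.shape[0])
    for _ in range(n_iter):
        lam = W / (n / (lam[:, None] + lam[None, :])).sum(axis=1)
        lam /= lam.sum()                         # fix the scale invariance
    return lam

# Three players: 0 usually beats 1 and 2; 1 usually beats 2.
wins = np.array([[0, 7, 9], [3, 0, 6], [1, 4, 0]])
print(bradley_terry_mm(wins))
```

In the EM reinterpretation, one standard construction attaches a latent exponential arrival time to each paired comparison; its conditional expectation reproduces the denominator above, and gamma priors on the strengths then remain conjugate, which is what makes the simple Gibbs samplers possible.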

13.
Univariate or multivariate ordinal responses are often assumed to arise from a latent continuous parametric distribution, with covariate effects that enter linearly. We introduce a Bayesian nonparametric modeling approach for univariate and multivariate ordinal regression, which is based on mixture modeling for the joint distribution of latent responses and covariates. The modeling framework enables highly flexible inference for ordinal regression relationships, avoiding assumptions of linearity or additivity in the covariate effects. In standard parametric ordinal regression models, computational challenges arise from identifiability constraints and from parameter estimation that requires nonstandard inferential techniques. A key feature of the nonparametric model is that it achieves inferential flexibility while avoiding these difficulties. In particular, we establish full support of the nonparametric mixture model under fixed cut-off points that relate through discretization the latent continuous responses with the ordinal responses. The practical utility of the modeling approach is illustrated through application to two datasets from econometrics, an example involving regression relationships for ozone concentration, and a multirater agreement problem. Supplementary materials with technical details on theoretical results and on computation are available online.
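The fixed cut-offs act as a plain discretization: y = j exactly when the latent response lies in (gamma_{j-1}, gamma_j]. The sketch below generates ordinal data from a two-component latent mixture over (covariate, response), a crude stand-in for the paper's nonparametric mixture, and shows how the mixture induces a nonlinear ordinal regression.

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent (covariate x, continuous response z) from a two-component normal
# mixture: each component carries its own local linear relationship.
comp = rng.uniform(size=1000) < 0.5
x = np.where(comp, rng.normal(-1.0, 0.5, 1000), rng.normal(1.5, 0.7, 1000))
z = np.where(comp, 0.5 * x + rng.normal(0, 0.3, 1000),
                   -0.8 * x + rng.normal(0, 0.5, 1000))

# Fixed cut-offs turn the latent z into an ordinal y with 4 levels:
# y = j iff gamma_{j-1} < z <= gamma_j.
cutoffs = np.array([-1.0, 0.0, 1.0])
y = np.digitize(z, cutoffs)            # values in {0, 1, 2, 3}
print(np.bincount(y))
```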

14.
In this article, we introduce the Bayesian change point and variable selection algorithm that uses dynamic programming recursions to draw direct samples from a very high-dimensional space in a computationally efficient manner, and apply this algorithm to a geoscience problem that concerns the Earth's history of glaciation. Strong evidence exists for at least two changes in the behavior of the Earth's glaciers over the last five million years. Around 2.7 Ma, the extent of glacial cover on the Earth increased, but the frequency of glacial melting events remained constant at 41 kyr. A more dramatic change occurred around 1 Ma. For over three decades, the “Mid-Pleistocene Transition” has been described in the geoscience literature not only by a further increase in the magnitude of glacial cover, but also as the dividing point between the 41 kyr and the 100 kyr glacial worlds. Given such striking changes in the glacial record, it is clear that a model whose parameters can change through time is essential for the analysis of these data. The Bayesian change point algorithm provides a probabilistic solution to a data segmentation problem, while the exact Bayesian inference in regression procedure performs variable selection within each regime delineated by the change points. Together, they can model a time series in which the predictor variables as well as the parameters of the model are allowed to change with time. Our algorithm allows one to simultaneously perform variable selection and change point analysis in a computationally efficient manner. Supplementary materials including MATLAB code for the Bayesian change point and variable selection algorithm and the datasets described in this article are available online or by contacting the first author.
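Change point recursions of this kind share a dynamic programming skeleton: the optimal cost up to time t is the best, over all possible last-segment starts s, of the optimal cost up to s plus the segment cost plus a penalty. A penalized least-squares sketch for changes in mean (the paper's algorithm is fully Bayesian, samples segmentations exactly, and performs variable selection within segments; only the recursion shape is shared):

```python
import numpy as np

def optimal_partitioning(y, penalty):
    """Minimize sum of segment costs + penalty * (#changepoints) by DP.

    Segment cost is the residual sum of squares around the segment mean.
    Returns the internal changepoint locations."""
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def cost(a, b):                      # RSS of the half-open segment y[a:b]
        return s2[b] - s2[a] - (s1[b] - s1[a]) ** 2 / (b - a)

    F = np.full(n + 1, np.inf)
    F[0] = -penalty                      # cancels the first segment's penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        cand = [F[s] + cost(s, t) + penalty for s in range(t)]
        last[t] = int(np.argmin(cand))
        F[t] = cand[last[t]]
    cps, t = [], n                       # backtrack the optimal segmentation
    while t > 0:
        cps.append(t)
        t = last[t]
    return sorted(cps)[:-1]              # drop the trailing boundary at n

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
print(optimal_partitioning(y, penalty=10.0))
```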

15.
We develop efficient Bayesian inference for the one-factor copula model with two significant contributions over existing methodologies. First, our approach leads to straightforward inference on dependence parameters and the latent factor; only inference on the former is available under frequentist alternatives. Second, we develop a reversible jump Markov chain Monte Carlo algorithm that averages over models constructed from different bivariate copula building blocks. Our approach accommodates any combination of discrete and continuous margins. Through extensive simulations, we compare the computational and Monte Carlo efficiency of alternative proposed sampling schemes. The preferred algorithm provides reliable inference on parameters, the latent factor, and model space. The potential of the methodology is highlighted in an empirical study of 10 binary measures of socio-economic deprivation collected for 11,463 East Timorese households. The importance of conducting inference on the latent factor is motivated by constructing a poverty index using estimates of the factor. Compared to a linear Gaussian factor model, our model average improves out-of-sample fit. The relationships between the poverty index and observed variables uncovered by our approach are diverse and allow for a richer and more precise understanding of the dependence between overall deprivation and individual measures of well-being.
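In a one-factor copula, every margin is tied to one latent variable through a bivariate building-block copula. A sketch of sampling when every block is Gaussian (the paper averages over different bivariate families via reversible jump, and discrete margins follow by inverting their cdfs):

```python
import numpy as np
from scipy.stats import norm

def sample_one_factor_gaussian_copula(theta, n, rng=None):
    """Draw n observations of (U_1, ..., U_d) from a one-factor Gaussian copula.

    theta : (d,) loadings in (-1, 1), the factor-margin dependence parameters
    """
    rng = rng or np.random.default_rng()
    v = rng.standard_normal((n, 1))                  # latent common factor
    eps = rng.standard_normal((n, len(theta)))       # idiosyncratic noise
    z = theta * v + np.sqrt(1 - np.asarray(theta) ** 2) * eps
    return norm.cdf(z)                               # uniform margins

u = sample_one_factor_gaussian_copula(np.array([0.8, 0.6, 0.3]), n=5)
print(u)
```

Since corr(Z_i, Z_j) = theta_i * theta_j, the single factor induces all pairwise dependence, which is why inference on the factor itself (here, the poverty index) is meaningful.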

16.
In many domains, data now arrive faster than we are able to mine them. To avoid wasting these data, we must switch from the traditional “one-shot” data mining approach to systems that are able to mine continuous, high-volume, open-ended data streams as they arrive. In this article we identify some desiderata for such systems, and outline our framework for realizing them. A key property of our approach is that it minimizes the time required to build a model on a stream while guaranteeing (as long as the data are iid) that the model learned is effectively indistinguishable from the one that would be obtained using infinite data. Using this framework, we have successfully adapted several learning algorithms to massive data streams, including decision tree induction, Bayesian network learning, k-means clustering, and the EM algorithm for mixtures of Gaussians. These algorithms are able to process on the order of billions of examples per day using off-the-shelf hardware. Building on this, we are currently developing software primitives for scaling arbitrary learning algorithms to massive data streams with minimal effort.
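The guarantee of being effectively indistinguishable from the infinite-data model rests on the Hoeffding bound: the empirical mean of n iid draws of a variable with range R is within epsilon = sqrt(R^2 ln(1/delta) / (2n)) of its expectation with probability at least 1 - delta. A sketch of the resulting stopping rule, as used for split decisions in stream decision-tree induction:

```python
import numpy as np

def hoeffding_bound(value_range, delta, n):
    """With prob. >= 1 - delta, the empirical mean of n iid observations of a
    variable with the given range deviates from its expectation by < epsilon."""
    return np.sqrt(value_range ** 2 * np.log(1.0 / delta) / (2.0 * n))

# Keep reading the stream until the observed gap between the two best
# candidate splits exceeds epsilon; the choice is then (w.h.p.) the same
# one infinite data would make.
gap, n = 0.08, 5000
eps = hoeffding_bound(value_range=1.0, delta=1e-6, n=n)
print(eps, gap > eps)
```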

17.
A Bayesian approach is developed to assess the factor analysis model. Joint Bayesian estimates of the factor scores and the structural parameters in the covariance structure are obtained simultaneously. The basic idea is to treat the latent factor scores as missing data and augment them with the observed data in generating a sequence of random observations from the posterior distributions by the Gibbs sampler. Then, the Bayesian estimates are taken as the sample means of these random observations. Expressions for implementing the algorithm are derived and some statistical properties of the estimates are presented. Some aspects of the algorithm are illustrated by a real example and the performance of the Bayesian procedure is studied using simulation.
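Treating factor scores as missing data, the central Gibbs step draws them from their conditional posterior, which is multivariate normal under the standard model y_i = Lam f_i + eps_i with eps_i ~ N(0, diag(psi)) and f_i ~ N(0, I). A sketch of that single step (the full sampler alternates it with conjugate draws of the loadings and uniquenesses):

```python
import numpy as np

def sample_factor_scores(Y, Lam, psi, rng=None):
    """One Gibbs draw of the latent factor scores F given loadings and
    uniquenesses, for y_i = Lam f_i + N(0, diag(psi)), f_i ~ N(0, I).

    Y   : (n, p) data matrix
    Lam : (p, q) loading matrix
    psi : (p,) unique variances
    """
    rng = rng or np.random.default_rng()
    q = Lam.shape[1]
    LtPinv = Lam.T / psi                         # Lam' Psi^{-1}
    cov = np.linalg.inv(np.eye(q) + LtPinv @ Lam)   # posterior covariance
    mean = Y @ LtPinv.T @ cov                    # (n, q) posterior means
    chol = np.linalg.cholesky(cov)
    return mean + rng.standard_normal((Y.shape[0], q)) @ chol.T

rng = np.random.default_rng(7)
Lam = np.array([[0.9], [0.8], [0.7], [0.6]])
F_true = rng.standard_normal((200, 1))
Y = F_true @ Lam.T + rng.normal(0, 0.5, (200, 4))
F_draw = sample_factor_scores(Y, Lam, psi=np.full(4, 0.25), rng=rng)
```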

18.
In this article, we introduce a novel Bayesian approach for linking multiple social networks in order to discover the same real world person having different accounts across networks. In particular, we develop a latent model that allows us to jointly characterize the network and linkage structures relying on both relational and profile data. In contrast to other existing approaches in the machine learning literature, our Bayesian implementation naturally provides uncertainty quantification via posterior probabilities for the linkage structure itself or any function of it. Our findings clearly suggest that our methodology can produce accurate point estimates of the linkage structure even in the absence of profile information, and also, in an identity resolution setting, our results confirm that including relational data into the matching process improves the linkage accuracy. We illustrate our methodology using real data from popular social networks such as Twitter, Facebook, and YouTube.

19.
This work develops Bayesian finite element (FE) model updating from modal measurements by maximizing the posterior probability rather than using any sampling-based approach. Bayesian updating frameworks of this kind usually assign normal distributions to the parameters being updated, although the normal distribution is problematic for parameters constrained to be non-negative. We address these issues by adopting lognormal distributions for the non-negative parameters. Detailed formulations are derived for model updating, uncertainty estimation, and probabilistic detection of changes or damage in structural parameters using a combined normal-lognormal probability distribution in this Bayesian framework. Normal and lognormal distributions are assigned to the eigensystem equation and to the structural (mass and stiffness) parameters, respectively, and the two are combined in the likelihood function. Important practical advantages of FE model updating, such as the use of incomplete measured modal data and the absence of any mode-matching requirement, are retained in the proposed approach. Its efficiency is demonstrated on a two-dimensional truss structure with multiple damage cases. Model updating and the subsequent probabilistic estimates perform satisfactorily, although, as expected, performance weakens as the damage level increases. The proposed approach performs on par with the typical normal-distribution-based updating approach on these damage cases, while achieving better computational efficiency (higher accuracy in less computation time) than two prominent Markov chain Monte Carlo (MCMC) techniques, namely the Metropolis-Hastings algorithm and Gibbs sampling.
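The computational pattern (maximize the posterior rather than sample it, with lognormal priors keeping stiffnesses positive) can be shown on a toy problem. The sketch below uses a hypothetical 2-DOF spring-mass chain and eigenvalue measurements only; the paper updates a truss FE model using fuller modal information.

```python
import numpy as np
from scipy.optimize import minimize

def model_eigvals(k):
    """Eigenvalues of a toy 2-DOF spring-mass chain with unit masses,
    K = [[k1 + k2, -k2], [-k2, k2]] (a hypothetical stand-in structure)."""
    K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
    return np.sort(np.linalg.eigvalsh(K))

def neg_log_posterior(log_k, lam_meas, sigma_lam, mu_k, sigma_ln):
    """Gaussian likelihood on measured eigenvalues; lognormal priors on the
    non-negative stiffnesses, i.e., normal priors on log k."""
    k = np.exp(log_k)                       # positivity by construction
    resid = (lam_meas - model_eigvals(k)) / sigma_lam
    prior = (log_k - np.log(mu_k)) / sigma_ln
    return 0.5 * (resid @ resid + prior @ prior)

k_true = np.array([2.0, 1.0])
lam_meas = model_eigvals(k_true) + np.random.default_rng(6).normal(0, 0.01, 2)
fit = minimize(neg_log_posterior, x0=np.log([1.5, 1.5]),
               args=(lam_meas, 0.01, np.array([1.5, 1.5]), 1.0))
print(np.exp(fit.x))                        # MAP stiffness estimates
```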

20.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference for the joint latent class modeling approach has proved very challenging due to its computational complexity in seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation–maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared to the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with a quasi-Newton method not only substantially reduces computational complexity and run time but also retains comparable estimation efficiency.
