Similar Literature
 20 similar documents found (search time: 62 ms)
1.
The modeling of longitudinal and survival data is an active research area. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a joint model that concurrently accounts for longitudinal-survival data with multiple features. Specifically, our joint model handles skewness, limit of detection, missingness, and measurement errors in covariates, which are typically observed in the collection of longitudinal-survival data in many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study, a few alternative models under different conditions are compared, and some interesting results are reported. Simulation studies are conducted to assess the performance of the proposed methods.

2.
Based on an improved Cholesky decomposition, we study Bayesian estimation and Bayesian diagnostics for semiparametric joint mean-covariance models with longitudinal data, where the nonparametric part is approximated by B-splines. A hybrid algorithm combining Gibbs sampling and the Metropolis-Hastings algorithm is used to obtain Bayesian estimates of the unknown model parameters and Bayesian case-deletion influence diagnostic statistics, and the magnitudes of the diagnostic statistics are used to identify outliers in the data. Both simulation studies and a real-data example show that the proposed Bayesian estimation and diagnostic methods are feasible and effective.
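The Cholesky-type decomposition mentioned above underlies many of the joint mean-covariance models in this list. A minimal sketch (not the paper's semiparametric B-spline model) of the modified Cholesky decomposition: a covariance matrix Σ is reduced to T Σ Tᵀ = D, where T is unit lower triangular with rows built from autoregressive coefficients and D is a diagonal matrix of innovation variances, both of which can then be modeled by unconstrained regressions.

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix so that T @ sigma @ T.T = D,
    where T is unit lower triangular (row j holds the negated coefficients
    of regressing measurement j on its predecessors) and D is diagonal
    (the innovation variances)."""
    m = sigma.shape[0]
    T = np.eye(m)
    for j in range(1, m):
        # Autoregressive coefficients: solve sigma[:j,:j] @ phi = sigma[:j, j]
        phi = np.linalg.solve(sigma[:j, :j], sigma[:j, j])
        T[j, :j] = -phi
    D = np.diag(np.diag(T @ sigma @ T.T))
    return T, D

# Toy AR(1)-type covariance as an illustration
rho, m = 0.6, 4
sigma = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
T, D = modified_cholesky(sigma)
# Reconstruction: sigma = T^{-1} D T^{-T}
Tinv = np.linalg.inv(T)
assert np.allclose(Tinv @ D @ Tinv.T, sigma)
```

Because the entries of T and log-entries of D are unconstrained, they can be modeled directly without worrying about positive definiteness, which is what makes this decomposition attractive for regression-based covariance modeling.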

3.
We develop a new joint cure rate model for longitudinal and survival data. The model allows for multiple longitudinal markers as well as a cure structure for the survival component based on the promotion time cure rate model, as described in Ibrahim et al. (Bayesian Survival Analysis, Springer, New York, 2001). Several characteristics and properties of the new model are discussed and examined. A real dataset from a melanoma clinical trial is given to demonstrate the methodology.

4.
In this paper, we discuss Bayesian joint quantile regression for mixed-effects models with censored responses and covariate measurement errors, using Markov chain Monte Carlo methods. Under the assumption of an asymmetric Laplace error distribution, we establish a Bayesian hierarchical model and derive the posterior distributions of all unknown parameters via a Gibbs sampling algorithm. Three cases, the multivariate normal distribution and two heavy-tailed alternatives, are considered for fitting the random effects of the mixed-effects models. Finally, some Monte Carlo simulations are performed, and the proposed procedure is illustrated by analyzing an AIDS clinical data set.
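The asymmetric Laplace assumption above is the standard device for Bayesian quantile regression: maximizing the asymmetric Laplace likelihood at quantile level τ is equivalent to minimizing the Koenker-Bassett check loss. A minimal sketch of that equivalence (the full model's censoring and measurement-error machinery is omitted):

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def ald_logpdf(u, tau, sigma=1.0):
    """Log-density of the asymmetric Laplace distribution with skewness tau:
    f(u) = tau * (1 - tau) / sigma * exp(-rho_tau(u) / sigma)."""
    return np.log(tau * (1 - tau) / sigma) - check_loss(u, tau) / sigma

u = np.linspace(-3, 3, 7)
tau = 0.25
# The log-density is an affine, decreasing function of the check loss, so
# maximizing the ALD likelihood minimizes the quantile regression objective.
assert np.allclose(ald_logpdf(u, tau),
                   np.log(tau * (1 - tau)) - check_loss(u, tau))
```

This is why placing an ALD on the residuals turns quantile regression into a likelihood-based problem amenable to Gibbs sampling.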

5.
The relationship between viral load and CD4 cell count is one of the interesting questions in AIDS research, and statistical models are powerful tools for clarifying it. The partially linear mixed-effects (PLME) model, which accounts for an unknown function of the time effect, is one of the important models for this purpose, and the mixed-effects modeling approach is well suited to longitudinal data analysis. However, the complex data-collection process in clinical trials makes it impossible to rely on any single model. Asymmetric distributions, measurement error, and left censoring are features that commonly arise in longitudinal studies, and it is crucial to account for them in the modeling process to achieve reliable estimation and valid conclusions. In this article, we establish a joint model that accounts for all of these features within the framework of PLME models, and we propose a Bayesian inferential procedure to estimate its parameters. A real data example is analyzed to demonstrate the proposed modeling approach, and results are reported for a comparison of several scenario-based models.

6.
A hierarchical model is developed for the joint mortality analysis of pension scheme datasets. The proposed model allows for a rigorous statistical treatment of missing data. While our approach works for any missing-data pattern, we are particularly interested in a scenario where some covariates are observed for members of one pension scheme but not the other; our approach therefore allows for the joint modelling of datasets which contain different information about individual lives. The proposed model generalizes the specification of parametric models when accounting for covariates. We consider parameter uncertainty using Bayesian techniques, analyse the model parametrization in order to obtain an efficient MCMC sampler, and address model selection. The inferential framework described here accommodates any missing-data pattern and turns out to be useful for analysing statistical relationships among covariates. Finally, we assess the financial impact of using the covariates, and of making optimal use of the whole available sample when combining data from different mortality experiences.

7.
This paper proposes a stochastic volatility model (PAR-SV) in which the log-volatility follows a first-order periodic autoregression. This model aims at representing time series whose volatility displays a stochastic periodic dynamic structure, and may thus be seen as an alternative to the familiar periodic GARCH process. The probabilistic structure of the proposed PAR-SV model, such as periodic stationarity and the autocovariance structure, is first studied. Then, parameter estimation is examined through the quasi-maximum likelihood (QML) method, where the likelihood is evaluated using the prediction error decomposition approach and Kalman filtering. In addition, a Bayesian MCMC method is considered, where the posteriors are given from conjugate priors using the Gibbs sampler, in which the augmented volatilities are sampled via the Griddy Gibbs technique in a single-move way. As a by-product, period selection for the PAR-SV model is carried out using the (conditional) deviance information criterion (DIC). A simulation study is undertaken to assess the performance of the QML and Bayesian Griddy Gibbs estimates in finite samples, and applications of Bayesian PAR-SV modeling to daily, quarterly and monthly S&P 500 returns are considered.
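To make the PAR-SV structure concrete, here is a minimal simulation sketch under illustrative parameter values (not taken from the paper): returns are y_t = exp(h_t/2) ε_t, and the log-volatility h_t follows a first-order autoregression whose intercept, slope, and innovation scale cycle with the period S.

```python
import numpy as np

def simulate_par_sv(n, alpha, beta, sigma_u, seed=0):
    """Simulate a periodic stochastic-volatility (PAR-SV) path:
    h_t = alpha[s] + beta[s] * h_{t-1} + sigma_u[s] * u_t, with season
    s = t mod S, and observed returns y_t = exp(h_t / 2) * eps_t."""
    rng = np.random.default_rng(seed)
    S = len(alpha)
    h = np.zeros(n)
    y = np.zeros(n)
    for t in range(n):
        s = t % S                      # current season within the period
        prev = h[t - 1] if t > 0 else 0.0
        h[t] = alpha[s] + beta[s] * prev + sigma_u[s] * rng.normal()
        y[t] = np.exp(h[t] / 2) * rng.normal()
    return y, h

# Quarterly periodicity (S = 4), illustrative coefficients only
y, h = simulate_par_sv(400,
                       alpha=[-0.2, 0.1, -0.1, 0.0],
                       beta=[0.9, 0.8, 0.85, 0.7],
                       sigma_u=[0.3, 0.2, 0.25, 0.2])
assert len(y) == 400 and np.isfinite(y).all()
```

With all seasonal slopes below one in modulus, the simulated log-volatility is periodically stationary, which is the setting the paper's QML and Griddy Gibbs estimators target.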

8.
Traditional criteria for comparing alternative Bayesian hierarchical models, such as cross-validation sums of squares, are inappropriate for nonstandard data structures. More flexible cross-validation criteria such as predictive densities facilitate effective evaluations across a broader range of data structures, but do so at the expense of introducing computational challenges. This article considers Markov chain Monte Carlo strategies for calculating Bayesian predictive densities for vector measurements subject to differential component-wise censoring. It discusses computational obstacles in Bayesian computations resulting from both the multivariate and incomplete nature of the data, and suggests two Monte Carlo approaches for implementing predictive density calculations. It illustrates the value of the proposed methods in the context of comparing alternative models for joint distributions of contaminant concentration measurements.

9.
In cancer clinical trials and other medical studies, both longitudinal measurements and time-to-event (survival) data are often collected from the same patients, and joint analyses of these data improve the efficiency of statistical inference. We propose a new joint model for longitudinal proportional measurements, which are restricted to a finite interval, and survival times with a potential cure fraction. A penalized joint likelihood is derived based on the Laplace approximation, and a semiparametric procedure based on this likelihood is developed to estimate the parameters of the joint model. A simulation study is performed to evaluate the statistical properties of the proposed procedures. The proposed model is applied to data from a clinical trial on early breast cancer.

10.
Change-point hazard rate models arise in many lifetime data analyses, for example in studying the time until undesirable side effects occur in clinical trials. In this paper we propose a general class of change-point hazard models for survival data. This class includes and extends different types of change-point models for survival data, e.g. the cure rate model and the lag model. Most classical approaches develop estimates of the model parameters, with particular interest in the change-point parameter and often the whole hazard function, but exclusively in terms of asymptotic properties. We propose a Bayesian approach that avoids asymptotics and provides inference conditional on the observed data. The proposed Bayesian models are fitted using Markov chain Monte Carlo methods. We illustrate the methodology with an application to modeling the lifetimes of printed circuit boards.

11.
In this paper, we propose a joint mean-variance-correlation modeling approach for longitudinal studies. By applying partial autocorrelations, we obtain an unconstrained parametrization of the correlation matrix that automatically guarantees its positive definiteness, and we develop a regression approach that exploits this parametrization to model the correlation matrix of the longitudinal measurements. The proposed modeling framework is parsimonious, interpretable, and flexible for analyzing longitudinal data. A real data example and simulations support the effectiveness of the proposed approach.
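The partial-autocorrelation parametrization mentioned above can be sketched concretely. Assuming the standard recursion of Joe (2006), any unconstrained real vector, squashed into (-1, 1), maps to a valid correlation matrix; this is a generic illustration, not the paper's regression model:

```python
import numpy as np

def corr_from_partials(theta):
    """Map an unconstrained vector theta of length d*(d-1)/2 to a valid
    correlation matrix via partial autocorrelations (Joe, 2006).
    tanh squashes each entry into (-1, 1), which guarantees positive
    definiteness of the resulting matrix."""
    p = np.tanh(theta)                          # partial autocorrelations
    d = int((1 + np.sqrt(1 + 8 * len(p))) / 2)
    P = np.zeros((d, d))
    P[np.triu_indices(d, 1)] = p
    R = np.eye(d)
    for i in range(d - 1):                      # lag-1 correlations = partials
        R[i, i + 1] = R[i + 1, i] = P[i, i + 1]
    for lag in range(2, d):                     # fill longer lags recursively
        for i in range(d - lag):
            j = i + lag
            R1 = R[i + 1:j, i + 1:j]            # correlations of intermediates
            r2 = R[i, i + 1:j]
            r3 = R[j, i + 1:j]
            Rinv = np.linalg.inv(R1)
            mid = r2 @ Rinv @ r3
            scale = np.sqrt((1 - r2 @ Rinv @ r2) * (1 - r3 @ Rinv @ r3))
            R[i, j] = R[j, i] = mid + scale * P[i, j]
    return R

rng = np.random.default_rng(0)
R = corr_from_partials(rng.normal(size=6))      # d = 4
assert np.all(np.linalg.eigvalsh(R) > 0)        # positive definite
assert np.allclose(np.diag(R), 1.0)
```

Because the partials are unconstrained on the real line (after the tanh link), they can be regressed on covariates without any positive-definiteness constraint, which is the point of the parametrization.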

12.
Our article considers the class of recently developed stochastic models that combine claims payments and incurred losses information into a coherent reserving methodology. In particular, we develop a family of hierarchical Bayesian paid-incurred claims (PIC) models, combining the claims reserving models of Hertig (1985) and Gogol (1993). In the process we extend the independent log-normal model of Merz and Wüthrich (2010) by incorporating different dependence structures through a data-augmented mixture copula PIC model. The paper thus makes two main contributions. First, we develop an extended class of model structures for paid-incurred chain ladder models, giving a precise Bayesian formulation of such models with specialised properties relating to conjugacy and consistency of tail dependence across development years and accident years and between payment and incurred loss data. Second, we develop advanced Markov chain Monte Carlo sampling algorithms that make inference under these copula-dependence PIC models accurate and efficient, so that practitioners can explore their suitability in practice. The focus of the paper is not to argue that PIC models are the right class for a particular data set; the suitability of such models is discussed in Merz and Wüthrich (2010) and Happ and Wüthrich (2013). Instead, we develop generalised model classes for the PIC family of Bayesian models and provide advanced Monte Carlo methods for inference that practitioners may use with confidence in their efficiency and validity.

13.
In this article, we develop an efficient robust method for the simultaneous estimation of the mean and covariance in regression models for longitudinal data. Based on the Cholesky decomposition of the covariance matrix and a rewriting of the regression model, we propose a weighted least squares estimator in which the weights are estimated within a generalized empirical likelihood framework. The proposed estimator obtains high efficiency from its close connection to the empirical likelihood method and achieves robustness by bounding the weighted sum of squared residuals. A simulation study shows that, compared to existing robust estimation methods for longitudinal data, the proposed estimator has relatively high efficiency and comparable robustness. Finally, the proposed method is used to analyse a real data set.

14.
An essential feature of longitudinal data is the autocorrelation among observations from the same unit or subject. Two-stage random-effects linear models are commonly used to analyze longitudinal data, but they are not flexible enough for exploring the underlying data structure and, especially, for describing time trends. Semi-parametric models have been proposed to accommodate general time trends, but they do not provide a convenient way to explore interactions between time and other covariates, although such interactions exist in many applications; moreover, they require specifying the design matrix of the covariates (time excluded). We propose nonparametric models to resolve these issues. To fit them, we use multivariate adaptive regression splines to estimate the mean curve and then apply an EM-like iterative procedure for covariance estimation. After giving a general model-building algorithm, we show how to design a fast version of it. We use both simulated and published data to illustrate the proposed method.

15.
Parametric mortality models capture the cross section of mortality rates. These models fit older ages better, because the cross section of mortality is more complex at younger and middle ages. Dynamic parametric mortality models fit a time series, such as a vector autoregression (VAR), to the parameters in order to capture trends and uncertainty in mortality improvements. We consider the full age range using the Heligman and Pollard (1980) model, a cross-sectional mortality model whose parameters capture specific features of different age ranges. We make the Heligman-Pollard model dynamic using a Bayesian vector autoregressive (BVAR) model for the parameters and compare it with more commonly used VAR models. We fit the models using Australian data, a country whose mortality experience is similar to that of many developed countries. We show that the BVAR models improve forecast accuracy compared to VAR models, and we quantify parameter risk, which is shown to be significant.
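The Heligman-Pollard (1980) law referenced above expresses the mortality odds at age x as the sum of three terms, one for each feature of the age profile. A minimal sketch with illustrative parameter values (not fitted to the Australian data used in the paper):

```python
import numpy as np

def heligman_pollard_qx(x, A, B, C, D, E, F, G, H):
    """Heligman-Pollard (1980) law: the odds q_x / (1 - q_x) are the sum of
    a childhood term, an accident-hump term, and a senescent term."""
    odds = (A ** ((x + B) ** C)                              # infant/child decline
            + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)  # young-adult hump
            + G * H ** x)                                    # Gompertz-like old age
    return odds / (1 + odds)

# Illustrative parameter values only
ages = np.arange(1, 100)
qx = heligman_pollard_qx(ages, A=5e-4, B=0.01, C=0.10, D=1e-3,
                         E=10.0, F=20.0, G=5e-5, H=1.10)
assert np.all((qx > 0) & (qx < 1))
assert qx[-1] > qx[40]   # mortality rises steeply at old ages
```

Each of the eight parameters targets a specific age range, which is why the paper models their time series jointly with a (B)VAR rather than forecasting each death rate separately.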

16.
Data generated in forestry biometrics are often non-normal in the statistical sense, as they rarely follow the normal regression model, so there is a need to develop models and methods for non-normal data in forest biometric applications. Owing to their generality, Bayesian methods can be applied in situations where Gaussian regression models do not fit the data. Data on diameter at breast height (dbh), a very important characteristic in forestry, are fitted to Weibull and gamma models in the Bayesian paradigm, and comparisons are made with the classical counterparts. MCMC simulation tools are used in this study, implemented in the R software.

17.

Quantile regression is a powerful complement to the usual mean regression and has become increasingly popular owing to its desirable properties. In longitudinal studies, it is necessary to account for the intra-subject correlation among repeated measures over time to improve estimation efficiency. In this paper, we focus on longitudinal single-index models. First, we apply the modified Cholesky decomposition to parameterize the intra-subject covariance matrix and develop a regression approach to estimate its parameters. Second, we propose efficient quantile estimating equations for the index coefficients and the link function based on the estimated covariance matrix. Since the proposed estimating equations include a discrete indicator function, we propose smoothed estimating equations for fast and accurate computation of the index coefficients, as well as of their asymptotic covariances. Third, we establish the asymptotic properties of the proposed estimators. Finally, simulation studies and a real data analysis illustrate the efficiency of the proposed approach.


18.
The Dirichlet process and its extension, the Pitman–Yor process, are stochastic processes that take probability distributions as a parameter. These processes can be stacked up to form a hierarchical nonparametric Bayesian model. In this article, we present efficient methods for the use of these processes in this hierarchical context, and apply them to latent variable models for text analytics. In particular, we propose a general framework for designing these Bayesian models, which are called topic models in the computer science community. We then propose a specific nonparametric Bayesian topic model for modelling text from social media. We focus on tweets (posts on Twitter) in this article due to their ease of access. We find that our nonparametric model performs better than existing parametric models in both goodness of fit and real world applications.

19.
Bayesian networks are one of the most widely used tools for modeling multivariate systems. It has been demonstrated that more expressive models, which can capture additional structure in each conditional probability table (CPT), may enjoy improved predictive performance over traditional Bayesian networks despite having fewer parameters. Here we investigate this phenomenon for models of varying expressiveness on extensive synthetic and real data. To characterize the regularities within CPTs in terms of independence relations, we introduce the notion of partial conditional independence (PCI) as a generalization of the well-known concept of context-specific independence (CSI). To model the structure of the CPTs, we use different graph-based representations which are convenient from a learning perspective. In addition to the previously studied decision trees and graphs, we introduce the concept of PCI-trees as a natural extension of the CSI-based trees. To identify plausible models we use the Bayesian score in combination with a greedy search algorithm. A comparison against ordinary Bayesian networks shows that models with local structures generally enjoy parametric sparsity and improved out-of-sample predictive performance; however, it is often necessary to regulate the model fit with an appropriate model-structure prior to avoid overfitting in the learning process. The tree structures, in particular, lead to high-quality models and suggest considerable potential for further exploration.

20.
Finite mixture models are used in many applications to fit data from heterogeneous populations. The expectation-maximization (EM) algorithm is the most popular method for estimating the parameters of a finite mixture model, and a Bayesian approach is another. However, the EM algorithm often converges to a local maximum and is sensitive to the choice of starting points, and in the Bayesian approach the Markov chain Monte Carlo (MCMC) sampler sometimes converges to a local mode and has difficulty moving to another. In this paper we therefore propose a new method that allows the EM algorithm to estimate parameters in the global maximum region, and a more effective Bayesian approach in which the MCMC chain moves between modes more easily. Our approach combines simulated annealing (SA) with adaptive rejection Metropolis sampling (ARMS). Although SA is a well-known approach for detecting distinct modes, its limitation is the difficulty of choosing a sequence of proper proposal distributions for the target distribution. Since ARMS uses a piecewise linear envelope function as the proposal distribution, we incorporate ARMS into SA so that we can start from a better proposal distribution and detect separate modes; as a result, we can locate the global maximum region and estimate the parameters there. We refer to this approach as ARMS annealing. Combining ARMS annealing with the EM algorithm and with the Bayesian approach, respectively, yields two approaches: an EM-ARMS annealing algorithm and a Bayesian-ARMS annealing approach. Simulations show that the two approaches are comparable to each other and perform better than the EM algorithm alone and the Bayesian approach alone: both detect the global maximum region well and estimate the parameters in that region. We demonstrate the advantage of our approaches using a mixture of two Poisson regression models, applied to survey data on the number of charitable donations.
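As a point of reference for the baseline the annealing variants improve upon, here is a minimal sketch of the plain EM algorithm for a two-component Poisson mixture (without regression covariates, which the paper's application adds):

```python
import numpy as np

def em_poisson_mixture(y, n_iter=200):
    """Plain EM for a two-component Poisson mixture: the baseline algorithm
    that the ARMS-annealing variants aim to make robust to local maxima."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, np.array([y.mean() * 0.5, y.mean() * 1.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each y_i
        # (the log(y!) term cancels between components, so it is omitted)
        log_w = np.log([pi, 1 - pi]) + y[:, None] * np.log(lam) - lam
        w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        r = w[:, 0] / w.sum(axis=1)
        # M-step: update mixing weight and component means
        pi = r.mean()
        lam = np.array([np.sum(r * y) / r.sum(),
                        np.sum((1 - r) * y) / (1 - r).sum()])
    return pi, lam

rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(2.0, 500), rng.poisson(10.0, 500)])
pi, lam = em_poisson_mixture(y)
assert abs(pi - 0.5) < 0.1          # recovered mixing weight
assert abs(min(lam) - 2.0) < 0.5    # recovered component means
assert abs(max(lam) - 10.0) < 1.0
```

With well-separated components this converges reliably; the local-maximum problems the paper addresses arise with overlapping components or poor starting points, which is where the SA/ARMS machinery comes in.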


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号