Similar Articles
20 similar articles found (search time: 640 ms)
1.
Cure rate models offer a convenient way to model time-to-event data by allowing a proportion of individuals in the population to be completely cured, so that they never face the event of interest (say, death). The most studied cure rate models can be defined through a competing cause scenario in which the random variables corresponding to the time-to-event for each competing cause are conditionally independent and identically distributed, while the actual number of competing causes is a latent discrete random variable. The main interest is then in the estimation of the cured proportion as well as in developing inference about failure times of the susceptibles. The existing literature consists of parametric and non-/semi-parametric approaches, and the expectation maximization (EM) algorithm offers an efficient tool for the estimation of the model parameters due to the presence of right censoring in the data. In this paper, we study the cases wherein the number of competing causes is either a binary or a Poisson random variable and a piecewise linear function is used for modeling the hazard function of the time-to-event. Exact likelihood inference is then developed based on the EM algorithm, and the inverse of the observed information matrix is used for developing asymptotic confidence intervals. A Monte Carlo simulation study demonstrates the accuracy of the proposed non-parametric approach compared with results obtained under the true parametric model. The proposed model and the inferential method are finally illustrated with a data set on cutaneous melanoma.
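The EM scheme described above can be sketched in miniature. The example below is an illustrative assumption, not the paper's implementation: it uses the simplest member of the family (a Bernoulli number of competing causes, i.e. a two-group mixture cure model) with an exponential susceptible lifetime in place of the piecewise-linear hazard, and all function names and the simulation setup are hypothetical.

```python
import math
import random

def em_cure_exponential(times, events, n_iter=200):
    """EM for a Bernoulli (mixture) cure model with exponential
    susceptible lifetimes: S_pop(t) = pi0 + (1 - pi0) * exp(-lam * t).
    events[i] is 1 for an observed failure, 0 for right censoring."""
    pi0, lam = 0.5, 1.0  # initial guesses: cured proportion, hazard rate
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is susceptible
        w = []
        for t, d in zip(times, events):
            if d == 1:
                w.append(1.0)  # observed failures are certainly susceptible
            else:
                s = math.exp(-lam * t)
                w.append((1 - pi0) * s / (pi0 + (1 - pi0) * s))
        # M-step: closed-form weighted maximum-likelihood updates
        pi0 = 1.0 - sum(w) / len(w)
        lam = sum(events) / sum(wi * t for wi, t in zip(w, times))
    return pi0, lam

# Simulated check: 40% cured, susceptible hazard 2, censoring at t = 3
rng = random.Random(1)
times, events = [], []
for _ in range(5000):
    if rng.random() < 0.4:
        times.append(3.0); events.append(0)           # cured, censored
    else:
        t = rng.expovariate(2.0)
        times.append(min(t, 3.0)); events.append(1 if t < 3.0 else 0)
pi0_hat, lam_hat = em_cure_exponential(times, events)
```

The E-step weights are exactly the posterior susceptibility probabilities of the censored subjects; because both M-step updates are closed-form here, each iteration is cheap.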

2.
Recurrent event time data are common in biomedical follow-up studies, in which a study subject may experience repeated occurrences of an event of interest. In this paper, we evaluate two popular nonparametric tests for recurrent event time data in terms of their relative efficiency. One is the log-rank test for classical survival data, and the other is a more recently developed nonparametric test based on comparing mean recurrence rates. We show analytically that, somewhat surprisingly, the log-rank test, which only makes use of the time to the first occurrence, can be more efficient than the test for mean occurrence rates, which makes use of all available recurrence times, provided that subject-to-subject variation of recurrence times is large. Explicit formulae are derived for asymptotic relative efficiencies under the frailty model. The findings are demonstrated via extensive simulations. This work was supported by the US National Science Foundation (Grant No. DMS-0504269).
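The driver of this efficiency reversal, large subject-to-subject variation, can be illustrated with a small overdispersion simulation. This is a sketch under an assumed gamma-frailty mixed Poisson process, not a reproduction of the paper's ARE formulae; all names are illustrative.

```python
import math
import random
import statistics

def poisson_draw(rng, lam):
    """Knuth's multiplication method (adequate for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def recurrent_counts(n, rate, frailty_var, rng):
    """Event counts over unit follow-up when subject i's rate is
    rate * Z_i, with gamma frailty Z_i of mean 1, variance frailty_var."""
    counts = []
    for _ in range(n):
        z = 1.0 if frailty_var == 0 else rng.gammavariate(1.0 / frailty_var, frailty_var)
        counts.append(poisson_draw(rng, rate * z))
    return counts

rng = random.Random(42)
no_frailty = recurrent_counts(20000, 4.0, 0.0, rng)
big_frailty = recurrent_counts(20000, 4.0, 1.0, rng)
# Variance-to-mean ratio: ~1 without frailty, far above 1 with it
var_ratio_iid = statistics.pvariance(no_frailty) / statistics.mean(no_frailty)
var_ratio_fra = statistics.pvariance(big_frailty) / statistics.mean(big_frailty)
```

With frailty variance v and mean count m, the count variance is roughly m + v·m², so the mean-rate statistic's variance grows with v while the first-event log-rank statistic is less affected, which is the mechanism behind the paper's result.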

3.
Parametric and semiparametric estimation in truncated families. Chen Guijing (Anhui University, Hefei 230039), Zhao Zhongbai (Jilin University of Technology, Changchun 130025), Ding Yuanyao (Anhui University, Hefei 230039). Supported by the National Natural Science Foundation of China. Received January 9, 1991; first revised version received July 13, 1992; 1993...

4.
In many clinical studies, there are two dependent event times with one of the events being terminal, such as death, and the other being nonfatal, such as myocardial infarction or cancer relapse. Morbidity can be dependently censored by mortality, but not vice versa. Asymptotic theory is developed for simultaneous estimation of the marginal distribution functions in this semi-competing risks setting. We specify the joint distribution of the event times in the upper wedge, where the nonfatal event happens before the terminal event, with the popular gamma frailty model. The estimators are based on an adaptation of the self-consistency principle. To study their properties, we employ a modification of the functional delta-method applied to Z-estimators. This approach to weak convergence leads naturally to asymptotic validity of both the nonparametric and multiplier bootstraps, facilitating inference in spite of the complexity of the limiting distribution.
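The dependence the gamma frailty induces can be simulated directly: a shared gamma frailty with variance θ makes the joint survivor function a Clayton copula with Kendall's τ = θ/(θ + 2). The sketch below checks that relationship empirically; it uses illustrative names and exponential conditional hazards for simplicity, and does not reproduce the semi-competing observation scheme.

```python
import random

def frailty_pair(rng, theta, lam1=1.0, lam2=1.0):
    """Draw (T1, T2) from the shared gamma frailty model: given
    Z ~ Gamma(1/theta, theta) (mean 1, variance theta), the two times
    are independent exponentials with rates Z*lam1 and Z*lam2."""
    z = rng.gammavariate(1.0 / theta, theta)
    return rng.expovariate(z * lam1), rng.expovariate(z * lam2)

def kendall_tau(xs, ys):
    """Naive O(n^2) Kendall's tau (no ties for continuous data)."""
    n, conc = len(xs), 0
    for i in range(n):
        for j in range(i + 1, n):
            conc += 1 if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0 else -1
    return 2.0 * conc / (n * (n - 1))

rng = random.Random(3)
pairs = [frailty_pair(rng, theta=2.0) for _ in range(1000)]
t1s, t2s = zip(*pairs)
tau_hat = kendall_tau(t1s, t2s)   # theory: tau = theta/(theta+2) = 0.5
```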

5.
When conducting inferential and epidemiologic studies, researchers are often interested in the distribution of time until the occurrence of some specified event, a form of incidence calculation. Furthermore, this interest often extends to the effects of intervening factors on this distribution. In this paper we impose the assumption that the phenomena being investigated are governed by a stationary Markov chain and review how one may estimate the above distribution. We then introduce and relate two different methods of investigating the effects of intervening factors. In particular, we show how an investigator may evaluate the effect of potential intervention programs. Finally, we demonstrate the proposed methodology using data from a population study.

6.
A cure model is a useful approach for analysing failure time data in which some subjects eventually experience, and others never experience, the event of interest. All subjects in the study belong to one of two groups: the susceptible group and the non-susceptible group. There has been considerable progress in the development of semi-parametric models for regression analysis of time-to-event data. However, most of the current work focuses on right-censored data, especially when the population contains a non-ignorable cured subgroup. In this paper, we propose a semi-parametric cure model for current status data. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time of non-cured patients. A logistic regression model is proposed for whether the subject is in the susceptible group, and an accelerated failure time regression model is proposed for the event time when the subject is in the non-susceptible group. An EM algorithm is used to maximize the log-likelihood of the observed data. Simulation results show that the proposed method yields efficient estimates.
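The two-part structure, logistic incidence plus an AFT latency, implies a population survival curve that plateaus at the cure fraction. Below is a minimal sketch assuming a log-normal AFT latency as a concrete stand-in (the paper's latency is a general AFT); the function name, covariate, and coefficients are hypothetical.

```python
import math

def pop_survival(t, x, gamma, beta, sigma):
    """Population survival S_pop(t | x) for a two-part cure model:
    logistic regression for susceptibility, and a log-normal AFT
    latency (a stand-in for the general AFT) for susceptibles."""
    p_susc = 1.0 / (1.0 + math.exp(-(gamma[0] + gamma[1] * x)))
    if t <= 0:
        s_latency = 1.0
    else:
        # log T = beta0 + beta1*x + sigma*eps, eps ~ N(0, 1)
        z = (math.log(t) - (beta[0] + beta[1] * x)) / sigma
        s_latency = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    return (1.0 - p_susc) + p_susc * s_latency

# At t = 0 survival is 1; as t grows it flattens at the cure fraction
s0 = pop_survival(0.0, 1.0, (0.5, -1.0), (1.0, 0.3), 0.8)
s_inf = pop_survival(1e12, 1.0, (0.5, -1.0), (1.0, 0.3), 0.8)
cure_fraction = 1.0 - 1.0 / (1.0 + math.exp(-(0.5 - 1.0)))
```

The plateau is what identifies the incidence part from current status or right-censored data with long follow-up.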

7.
We propose a bivariate Weibull regression model with heterogeneity (frailty, or random effect) which is generated by a compound Poisson distribution with random scale. We assume that the bivariate survival data follow the bivariate Weibull of Hanagal (2004). There are some interesting situations, such as survival times in genetic epidemiology, dental implants of patients, and twin births (both monozygotic and dizygotic), where the genetic behavior of patients (which is unknown and random) follows a known frailty distribution. These situations motivate us to study this particular model. We propose a two-stage maximum likelihood estimation procedure for the parameters of the proposed model and develop large sample tests for testing the significance of the regression parameters.

8.
Assessing agreement is often of interest in biomedical sciences to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We investigate the agreement structure for a class of frailty models that are commonly used for analyzing correlated survival outcomes. Conditional on the shared frailty, bivariate survival times are assumed to be independent with Weibull baseline hazard distribution. We present the analytic expressions for the concordance correlation coefficient (CCC) for several commonly used frailty distributions. Furthermore, we develop a time-dependent CCC for measuring agreement between survival times among subjects who survive beyond a specified time point. We characterize the temporal pattern in the time-dependent CCC for various frailty distributions. Our results provide a better understanding of the agreement structure implied by different frailty models.
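A simulation sketch of the CCC idea, under assumed exponential (rather than Weibull) conditional hazards and a gamma frailty; for this particular configuration (frailty shape 6, mean 1), working through the gamma moments gives an analytic CCC of 1/6, which the sample estimate can be checked against. Names are illustrative, and the time-dependent variant below simply conditions on both times exceeding a cutoff.

```python
import random

def ccc(xs, ys):
    """Sample concordance correlation coefficient of paired data:
    2*cov / (var_x + var_y + (mean_x - mean_y)^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

def frailty_pairs(n, shape, rng):
    """Pairs that are conditionally independent exponentials given a
    shared Gamma(shape, 1/shape) frailty (mean 1, variance 1/shape)."""
    out = []
    for _ in range(n):
        z = rng.gammavariate(shape, 1.0 / shape)
        out.append((rng.expovariate(z), rng.expovariate(z)))
    return out

rng = random.Random(11)
pairs = frailty_pairs(20000, 6.0, rng)
t1s, t2s = zip(*pairs)
ccc_all = ccc(t1s, t2s)     # analytic value for this setup: 1/6
# Time-dependent version: agreement among pairs surviving beyond t0
t0 = 0.5
late = [(a, b) for a, b in pairs if a > t0 and b > t0]
ccc_late = ccc(*zip(*late))
```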

9.
The seminal Cox proportional intensity model with multiplicative frailty is a popular approach to analyzing the frequently encountered recurrent event data in scientific studies. When the proportional intensity assumption is violated, the additive intensity model is a useful alternative. Both the additive and proportional intensity models provide principal frameworks for studying the association between risk factors and disease recurrences. However, methodology development for the additive intensity model with frailty is lacking, although it would be valuable. In this paper, we propose an additive intensity model with additive frailty to formulate the effects of possibly time-dependent covariates on recurrent events, as well as to evaluate the intra-class dependence within recurrent events, which is captured by the frailty variable. The asymptotic properties of both the regression parameters and the association parameters of the frailty distribution are established. Furthermore, we also investigate the large-sample properties of the estimator of the cumulative baseline intensity function.

10.
The stochastic behaviour of lifetimes of a two component system is often primarily influenced by the system structure and by the covariates shared by the components. Any meaningful attempt to model the lifetimes must take into consideration the factors affecting their stochastic behaviour. In particular, for a load share system, we describe a reliability model incorporating both the load share dependence and the effect of observed and unobserved covariates. The model includes a bivariate Weibull to characterize load share, a positive stable distribution to describe frailty, and also incorporates effects of observed covariates. We investigate various interesting reliability properties of this model using cross ratio functions and conditional survivor functions. We implement maximum likelihood estimation of the model parameters and discuss model adequacy and selection. We illustrate our approach using a simulation study. For a real data situation, we demonstrate the superiority of the proposed model that incorporates both load share and frailty effects over competing models that incorporate just one of these effects. An attractive and computationally simple cross‐validation technique is introduced to reconfirm the claim. We conclude with a summary and discussion.

11.
In this paper, we propose a new non‐default rate survival model. Our approach enables different underlying activation mechanisms which lead to the event of interest. The number of competing causes, which may be responsible for the occurrence of the event of interest, is assumed to follow a geometric distribution, while the time to event is assumed to follow an inverse Weibull distribution. An advantage of our approach is to accommodate all activation mechanisms based on order statistics. We explore the use of maximum likelihood estimation procedure. Simulation studies are performed and experimental results are illustrated based on a real Brazilian bank personal loan portfolio data. Copyright © 2015 John Wiley & Sons, Ltd.

12.
In industrial statistics, there is great interest in precisely predicting the lifetimes of specimens that operate under stress. For example, poor estimation of the lower percentiles of a life distribution can produce significant monetary losses for organizations due to an excessive number of warranty claims. The Birnbaum–Saunders distribution is useful for modeling lifetime data, because it relates the total time until failure to some type of cumulative damage produced by stress. In this paper, we propose a methodology for detecting the influence of atypical data in accelerated life models based on the Birnbaum–Saunders distribution. The methodology developed in this study should be considered in the design of structures and in the prediction of warranty claims. We conclude this work with an application of the proposed methodology to real fatigue life data, which illustrates its importance in a warranty claim problem. Copyright © 2012 John Wiley & Sons, Ltd.
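The role of lower percentiles can be made concrete with the Birnbaum–Saunders CDF, F(t) = Φ((1/α)(√(t/β) − √(β/t))), whose median is exactly the scale β. The sketch below computes lower percentiles of the kind the warranty argument turns on; function names are illustrative, and bisection stands in for a closed-form quantile.

```python
import math

def bs_cdf(t, alpha, beta):
    """Birnbaum–Saunders CDF with shape alpha and scale beta:
    F(t) = Phi((1/alpha) * (sqrt(t/beta) - sqrt(beta/t)))."""
    if t <= 0:
        return 0.0
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_quantile(p, alpha, beta):
    """Quantile by bisection (the CDF is strictly increasing)."""
    lo, hi = 1e-12, beta
    while bs_cdf(hi, alpha, beta) < p:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_cdf(mid, alpha, beta) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

median = bs_quantile(0.50, 0.5, 1000.0)   # BS median equals beta
p10 = bs_quantile(0.10, 0.5, 1000.0)      # a lower percentile
```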

13.
Multivariate survival analysis comprises event times that are generally grouped together in clusters. Observations in each of these clusters relate to data belonging to the same individual, or to individuals with a common factor. Frailty models can be used when there is unaccounted-for association between survival times within a cluster. The frailty variable describes the heterogeneity in the data caused by unknown covariates or randomness in the data. In this article, we use the generalized gamma distribution to describe the frailty variable and discuss the Bayesian method of estimation for the parameters of the model. The baseline hazard function is assumed to follow the two-parameter Weibull distribution. Data are simulated from the given model, and the Metropolis–Hastings MCMC algorithm is used to obtain parameter estimates. It is shown that increasing the size of the dataset improves the estimates, and that high heterogeneity within clusters does not significantly affect the estimates of treatment effects. The model is also applied to a real-life dataset.
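A random-walk Metropolis–Hastings step of the kind mentioned can be sketched on a toy target where the posterior is known in closed form, so the sampler's output can be checked exactly: exponential data with a conjugate gamma prior. The names and settings are illustrative, not the article's frailty model.

```python
import math
import random

def metropolis_hastings(logpost, x0, n_samples, step, rng):
    """Random-walk Metropolis–Hastings on a one-dimensional parameter."""
    x, lp = x0, logpost(x0)
    out = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out.append(x)
    return out

# Toy target: posterior of an exponential rate lam with Gamma(a, b)
# prior is Gamma(a + n, b + sum(t)), so the MH mean is checkable.
rng = random.Random(7)
data = [rng.expovariate(1.5) for _ in range(200)]
a, b, n, s = 2.0, 1.0, len(data), sum(data)

def logpost(lam):
    if lam <= 0:
        return float("-inf")
    return (a + n - 1) * math.log(lam) - (b + s) * lam

draws = metropolis_hastings(logpost, 1.0, 20000, 0.2, rng)
post_mean = sum(draws[5000:]) / len(draws[5000:])   # discard burn-in
exact_mean = (a + n) / (b + s)
```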

14.
The focus of this article is on the analysis of repairable systems that are subject to multiple sources of recurrence. The event of interest at the system level is assumed to be caused by the earliest occurrence of a source, thereby conforming to a series system competing risks framework. Parametric inference is carried out under the power law process model that has found significant attention in industrial applications. Dependence among the cause‐specific recurrent processes is induced via a shared frailty structure. The theoretical inference results are implemented to a warranty database for a fleet of automobiles, for which the warranty repair is triggered by the failure of one of many components. Extensive finite‐sample simulation is carried out to supplement the asymptotic findings. Copyright © 2014 John Wiley & Sons, Ltd.

15.
Attraction models used to analyze the effects of marketing instruments on market share have hitherto assumed certain strict functional forms. We introduce semi-parametric models whose parametric components are equivalent to an exponential or multiplicative function. The nonparametric part is estimated on the basis of penalized generalized least squares, taking into account the smoothness of nonlinear functions. In the empirical study presented, market share models with semi-parametric additive brand attractions attain better fits, both according to an information criterion that penalizes a model for the degrees of freedom (df) consumed and according to error measures determined by bootstrapping.
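The attraction theorem underlying these models, that a brand's share is its own attraction over the total attraction, is easy to state in code. Below is a sketch with the exponential (MNL-type) parametric attraction; the brands, prices, and coefficients are invented for illustration, and the semi-parametric smooth part is only noted in a comment.

```python
import math

def market_shares(attractions):
    """Attraction theorem: brand i's share is its attraction divided
    by the sum of all brands' attractions."""
    total = sum(attractions)
    return [a / total for a in attractions]

def exp_attraction(intercept, price_coef, price):
    """Exponential (MNL-type) attraction; the semi-parametric variant
    would replace the linear index with a smooth function of price
    estimated by penalized generalized least squares."""
    return math.exp(intercept + price_coef * price)

# Hypothetical three-brand market with a negative price effect
prices = [1.0, 1.2, 0.9]
attrs = [exp_attraction(0.2, -1.5, p) for p in prices]
shares = market_shares(attrs)   # sums to 1 by construction
```

With identical intercepts and a negative price coefficient, the cheapest brand gets the largest share, which is the qualitative behavior an attraction model is meant to capture.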

16.
In this paper, we develop the steps of the expectation maximization (EM) algorithm for the determination of the maximum likelihood estimates (MLEs) of the parameters of the destructive exponentially weighted Poisson cure rate model, in which the lifetimes are assumed to be Weibull. This model is more flexible than the promotion time cure rate model, as it provides an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest by including a destructive process of the initial number of causes in a competitive scenario. The standard errors of the MLEs are obtained by inverting the observed information matrix. An extensive Monte Carlo simulation study is carried out to evaluate the performance of the developed method of estimation. Finally, a well-known melanoma data set is analyzed to illustrate the method of inference developed here. With these data, a comparison is also made with the scenario in which the destructive mechanism is not included in the analysis.

17.
The case-cohort design is an efficient and economical design for studying risk factors for diseases with expensive measurements, especially when the disease rate is low. When several diseases are of interest, multiple case-cohort studies may be conducted using the same subcohort. To study the association between risk factors and each disease occurrence or death, we consider a general additive-multiplicative hazards model for case-cohort designs with multiple disease outcomes. We present an estimation procedure for the regression parameters of the additive-multiplicative hazards model and show that the proposed estimator is consistent and asymptotically normal. Simulations show that the large-sample approximation works well in finite samples. Finally, we apply the proposed method to a real data example for illustration.

18.
This article presents optimal Bayesian accelerated life test plans for series systems under Type-I censoring scheme. First, the component lifetimes are assumed to follow independent Weibull distributions. The scale parameters of Weibull lifetime distributions are related to the external stress variable through a general stress translation function. For a fixed number of design points, optimal Bayesian ALT plans are first obtained by solving constrained optimization problems under two different Bayesian design criteria. The global optimality of the resulting fixed-point optimal designs is then verified via the General Equivalence Theorem. This article also provides the optimized compromise ALT plans which are extremely useful in real-life applications. A detailed sensitivity analysis is then performed to find out the effect of various planning inputs on the resulting optimal Bayesian ALT plans. A simulation study is then conducted to visualize the resulting sampling variations from the optimal Bayesian ALT plans. Finally, this article considers a series system with dependent component lifetimes. Optimal ALT plans are obtained assuming a Gamma frailty model.

19.
Call center research has mainly focused on queueing-theory models that assume all input distributions are known; these models underpin staffing decisions and the estimation of operating characteristics. Studies investigating uncertainty in the input distributions and its implications for call center management are scarce. This study attempts to fill this gap by analyzing call center service-time distributions using Bayesian parametric and semi-parametric mixture models that are capable of exhibiting non-standard behavior such as multi-modality, skewness, and excess kurtosis, motivated by real call center data. The study is motivated by the observation that different customer profiles might require different agent skill sets, which can create additional sources of uncertainty in the behavior of service distributions. For estimating model parameters, Markov chain Monte Carlo methods such as the Gibbs sampler and the reversible jump algorithm are presented, and the implications of using such models for system performance and staffing are discussed.
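A fixed-dimension Gibbs sampler of the kind mentioned can be sketched on a two-component normal mixture with known equal weights and known unit variances, leaving only the two component means unknown (conjugate normal full conditionals). The reversible-jump move for an unknown number of components is not attempted here; all names and data are illustrative.

```python
import math
import random

def gibbs_two_means(data, n_iter, rng, sigma=1.0, prior_var=100.0):
    """Gibbs sampler for a two-component normal mixture with known
    equal weights and known component sd `sigma`; only the two means
    are unknown, with N(0, prior_var) priors."""
    mu = [-1.0, 1.0]                       # deliberately ordered start
    draws = []
    for _ in range(n_iter):
        # 1) sample each point's component label given the means
        counts, sums = [0, 0], [0.0, 0.0]
        for x in data:
            l0 = -0.5 * ((x - mu[0]) / sigma) ** 2
            l1 = -0.5 * ((x - mu[1]) / sigma) ** 2
            p1 = 1.0 / (1.0 + math.exp(l0 - l1))
            k = 1 if rng.random() < p1 else 0
            counts[k] += 1
            sums[k] += x
        # 2) sample each mean from its conjugate normal full conditional
        for k in (0, 1):
            prec = counts[k] / sigma ** 2 + 1.0 / prior_var
            mean = (sums[k] / sigma ** 2) / prec
            mu[k] = rng.gauss(mean, math.sqrt(1.0 / prec))
        draws.append(tuple(mu))
    return draws

rng = random.Random(5)
data = [rng.gauss(-3.0, 1.0) for _ in range(150)] + \
       [rng.gauss(3.0, 1.0) for _ in range(150)]
draws = gibbs_two_means(data, 500, rng)
post = draws[100:]                         # discard burn-in
mu0_hat = sum(d[0] for d in post) / len(post)
mu1_hat = sum(d[1] for d in post) / len(post)
```

With well-separated components the labels stabilize immediately and the posterior means land near the true values; for bimodal service-time data this is exactly the mechanism that captures multi-modality.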

20.
We analyze the reliability of NASA composite pressure vessels by using a new Bayesian semiparametric model. The data set consists of lifetimes of pressure vessels, wrapped with a Kevlar fiber, grouped by spool, subject to different stress levels; 10% of the data are right censored. The model that we consider is a regression on the log‐scale for the lifetimes, with fixed (stress) and random (spool) effects. The prior of the spool parameters is nonparametric, namely they are a sample from a normalized generalized gamma process, which encompasses the well‐known Dirichlet process. The nonparametric prior is assumed to robustify inferences to misspecification of the parametric prior. Here, this choice of likelihood and prior yields a new Bayesian model in reliability analysis. Via a Bayesian hierarchical approach, it is easy to analyze the reliability of the Kevlar fiber by predicting quantiles of the failure time when a new spool is selected at random from the population of spools. Moreover, for comparative purposes, we review the most interesting frequentist and Bayesian models analyzing this data set. Our credibility intervals of the quantiles of interest for a new random spool are narrower than those derived by previous Bayesian parametric literature, although the predictive goodness‐of‐fit performances are similar. Finally, as an original feature of our model, by means of the discreteness of the random‐effects distribution, we are able to cluster the spools into three different groups. Copyright © 2012 John Wiley & Sons, Ltd.
