Similar Articles
20 similar articles found (search time: 279 ms)
1.
1. Introduction. The survival experience of censored patients is of primary clinical interest. By modeling the relationship between the observed and unobserved survival experience [1], an overall survival estimate such as the Kaplan-Meier [2] or Berkson-Gage estimate can be obtained. Using the survival-rate censoring model [4], censoring-specific survival rates for censored patients can be obtained; this is achieved by decomposing the cumulative number of deaths. This paper explains the principle and proposes two further implementations: the self-consistency of the Kaplan-Meier estimate [5] and the score function [6]. 2. Principle. Let the random variable X be the true survival time under the study conditions, with distribution F(t) = P(X ≤ t), i.e., the patient's cumulative probability of death by time t; its complement is 1 − F(t). Let the random variable Y denote the censoring time, with distribution G(t) = P(Y < t); …
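As a concrete illustration of the Kaplan-Meier product-limit estimate mentioned in the abstract above, here is a minimal hand-rolled sketch on toy data (this is the textbook estimator, not the paper's decomposition-based method):

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative sketch).
# times: observed times; events: 1 = death observed, 0 = censored.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []  # (time, S(t)) recorded at each observed death time
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = sum(1 for tt, _ in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk  # product-limit update
            curve.append((t, surv))
        n_at_risk -= tied  # everyone observed at t leaves the risk set
        i += tied
    return curve

# Toy example: 5 patients, two of them censored (event = 0).
print(kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0]))
```

Censored observations shrink the risk set without contributing a death, which is exactly how the estimator uses the partial information they carry.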

2.
Based on previously proposed basic methods and applied models for medical follow-up studies, this paper designs 320 commonly used clinical trial schemes, compiled into design tables for clinical use. Compared with earlier designs, the advantages are: (1) the loss-to-follow-up level is easy to constrain; (2) there is no bias from informative censoring; (3) the terminal censoring rate is easily kept below its threshold, avoiding design errors and ensuring reliable results; (4) the design is flexible and free of the Morgan error. These tables simplify the design process. A worked example illustrates their use.

3.
A Generalized Noncentral Method   Cited by: 4 (self-citations: 0; others: 4)
Objective: This paper proposes a generalized noncentral method to accommodate censoring. Derived from the classical noncentral method, it determines the sample size required for multi-sample survival-rate tests. Methods: The first step follows the classical method: the required effective (homogeneous) sample size is obtained from existing noncentral chi-square distribution tables and the classical noncentrality-parameter expression for the r×2 chi-square statistic. The second step back-calculates the required sample size from the parametric expression of the effective sample size under a Weibull survival distribution at the planned censoring rate, and allocates the sample sizes by element-wise iteration. Results: Compared with existing censoring-adjusted sample-size methods, this method matches multi-sample survival-rate tests, is free of the exponential-distribution assumption, reduces to the classical method when there is no censoring, and its observed power agrees precisely with the planned power. Conclusion: The method can be used to design multi-sample cancer clinical research protocols. A worked example describes the design process.

4.
In medical clinical trials, survival analysis, reliability statistics, and related fields, studies often have a fixed duration owing to time and cost constraints. Because the study is forcibly terminated at its end, or some patients withdraw midway, the resulting data are often censored. For censored data, an unbiased-transformation approach is adopted; its greatest advantage is that the resulting estimators have closed-form (explicit) solutions. We first discuss the mean-square consistency of the regression-coefficient estimators in the linear regression model under longitudinal right-censored data, and then extend the conclusions to the contaminated linear model, obtaining strongly consistent estimators of the contamination coefficient and the regression coefficients.

5.
In survival analysis, studies of right-censored data commonly assume that the censoring time and the failure time are independent. In practice, however, researchers often face dependent censoring: the censoring and failure times may be correlated and influence each other, particularly in clinical trials. Ignoring this dependence makes valid estimation of the survival function impossible. Many approaches exist for handling such dependence structures; among them, copulas have received particular attention for their simple structure. This paper studies estimation in the proportional hazards model under informatively right-censored data. Assuming an Archimedean copula for the joint distribution of the censoring and failure times, and under an identifiability condition on the copula parameter, we obtain maximum likelihood estimators of the copula parameter, the proportional hazards model parameters, and the baseline cumulative hazard function, and verify the feasibility of the estimation method and the efficiency of the estimators through simulation.
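To make the Archimedean-copula dependence structure above concrete, here is a small sketch using the Clayton copula (one Archimedean family) to link the failure and censoring times; the exponential marginal rates are illustrative assumptions, not from the paper:

```python
import math

# Clayton copula (an Archimedean family); theta > 0 controls the
# strength of dependence, and theta -> 0 recovers independence.
def clayton_copula(u, v, theta):
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

# Joint probability P(T > t, C > c) built from the marginal survival
# functions of failure time T and censoring time C (rates assumed).
def joint_survival(t, c, rate_t=1.0, rate_c=0.5, theta=2.0):
    st = math.exp(-rate_t * t)  # marginal survival of failure time
    sc = math.exp(-rate_c * c)  # marginal survival of censoring time
    return clayton_copula(st, sc, theta)

print(joint_survival(1.0, 1.0))  # larger than the independence product
```

Under dependence (theta = 2) the joint survival exceeds the product of the marginals; as theta shrinks toward 0 it approaches the independent-censoring case that the paper relaxes.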

6.
This paper uses counting-process techniques and the von Mises method to study the large-sample properties of the bootstrap for the Cox regression model with time-dependent covariates under censored survival data. The results show that, under some regularity conditions, applying the bootstrap to this model is valid: the bootstrap processes of the partial maximum likelihood estimator of the regression coefficients and of the nonparametric maximum likelihood estimator of the baseline hazard rate are consistent.

7.
In clinical data collection, censoring may arise from competing risks or patient withdrawal. Most statistical analyses of censored data rest on the assumption of independent censoring, yet in practice censoring is often dependent: the censoring variable and the failure-time variable are correlated. Dependent censoring makes the already difficult handling of censored data even harder. In this paper, assuming that the joint distribution of the censoring and failure-time variables can be expressed through a copula of their marginal distributions, we obtain the maximum likelihood estimator of the proportional hazards model for a given copula. Simulations show that, when the censoring assumption holds, the proposed method is more accurate than estimation under the independent-censoring assumption.

8.
In survival analysis, the additive-multiplicative hazards model is often used to study covariate effects on the duration between an initiating event and a terminating event. In this paper, we consider estimation of the additive-multiplicative hazards model for the duration when the initiating event is partly interval-censored and the terminating event is left-truncated and right-censored. We propose a two-stage estimation procedure for the regression parameters and verify the large-sample properties of the estimators through simulation. Finally, the method is applied to data on the surgical treatment of malignant melanoma.

9.
Using the Kaplan-Meier and Nelson-Aalen estimators, we obtain parameter estimates for the AR model when one stationary time series is right-censored by another stationary series. First, by comparison with parameter estimation under complete data, we illustrate the performance of the two estimation methods. Then, across varying simulated sample sizes and censoring rates, we compare the two estimators; the simulation results show that both are effective.

10.
Using the Kaplan-Meier (K-M) estimator from survival analysis, we obtain parameter estimates for the ARMA model under censored data; comparison with parameter estimation under complete data demonstrates the effectiveness of the estimator. Using an EM algorithm for the ARMA model with censored data, we model, analyze, and forecast 247 USD/CNY central parity exchange-rate observations from May 2, 2013 to May 8, 2014. The forecasts agree closely with the actual data, with small errors, showing the feasibility of the estimation and EM forecasting methods.

11.
Random weighting method for Cox’s proportional hazards model   Cited by: 1 (self-citations: 0; others: 1)
The variance of a parameter estimate in Cox’s proportional hazards model is based on its asymptotic variance. When the sample size is small, the variance can be estimated by the bootstrap method. However, if the censoring rate in a survival data set is high, the bootstrap may fail to work properly, because bootstrap samples may be even more heavily censored owing to repeated sampling of the censored observations. This paper proposes a random weighting method for variance estimation and confidence interval estimation in the proportional hazards model. Unlike the bootstrap, this method does not lead to more severe censoring than the original sample. Its large-sample properties are studied, and consistency and asymptotic normality are proved under mild conditions. Simulation studies show that the random weighting method is not as sensitive to heavy censoring as the bootstrap and can produce good variance estimates and confidence intervals.

12.
This paper deals with estimation of life expectancy used in survival analysis and competing risk study under the condition that the data are randomly censored by K independent censoring variables. The estimator constructed is based on a theorem due to Berman [2], and it involves an empirical distribution function which is related to the Kaplan-Meier estimate used in biometry. It is shown that the estimator, considered as a function of age, converges weakly to a Gaussian process. It is found that for the estimator to have finite limiting variance requires the assumption that the censoring variables be stochastically larger than the “survival” random variable under investigation.

13.
We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) First, define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance. Map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator of this risk. (2) Next, construct candidate estimators based on the loss function for the observed data. (3) Then, apply cross-validation to estimate risk based on the observed data loss function and to select an optimal estimator among the candidates. A number of common estimation procedures follow this approach in the full data situation, but depart from it when faced with the obstacle of evaluating the loss function for censored observations. Here, we argue that one can, and should, also adhere to this estimation road map in censored data situations.Tree-based methods, where the candidate estimators in Step 2 are generated by recursive binary partitioning of a suitably defined covariate space, provide a striking example of the chasm between estimation procedures for full data and censored data (e.g., regression trees as in CART for uncensored data and adaptations to censored data). Common approaches for regression trees bypass the risk estimation problem for censored outcomes by altering the node splitting and tree pruning criteria in manners that are specific to right-censored data. This article describes an application of our unified methodology to tree-based estimation with censored data. 
The approach encompasses univariate outcome prediction, multivariate outcome prediction, and density estimation, simply by defining a suitable loss function for each of these problems. The proposed method for tree-based estimation with censoring is evaluated using a simulation study and the analysis of CGH copy number and survival data from breast cancer patients.
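Step 1 of the road map above maps a full-data loss into an observed-data loss with the same expected value; a standard way to do this is inverse-probability-of-censoring weighting (IPCW), sketched here for squared-error loss with an assumed, illustrative censoring survival function G:

```python
# IPCW risk estimate for squared-error loss: censored observations
# contribute no loss term, and uncensored ones are up-weighted by
# 1/G(t), where G(t) = P(censoring time > t) is assumed known here.
def ipcw_risk(times, events, preds, G):
    total = sum((t - p) ** 2 / G(t)
                for t, e, p in zip(times, events, preds) if e == 1)
    return total / len(times)

# Toy example with a constant (assumed) censoring survival G(t) = 0.8.
times  = [1.0, 2.0, 3.0, 4.0]
events = [1, 0, 1, 1]          # 0 = censored: no loss term
preds  = [1.5, 2.0, 2.5, 3.0]
print(ipcw_risk(times, events, preds, lambda t: 0.8))
```

With the true G, this weighted average is unbiased for the full-data risk, which is what lets cross-validation in Step 3 compare candidate estimators on censored data.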

14.
In this paper, we consider a model for dependent censoring and derive a consistent, asymptotically normal estimator of the underlying survival distribution from a sample of censored data. The methodology is illustrated with an application to the analysis of cancer data. Some simulations evaluating the performance of our estimator are also presented. The results indicate that our estimator performs reasonably well in comparison with other dependent-censoring survival curve estimators.

15.
Estimation of Moments of Arbitrary Order for Interval-Censored Data   Cited by: 1 (self-citations: 0; others: 1)
In survival analysis and reliability studies, the presence of interval-censored data often prevents traditional statistical methods from being applied directly. Starting from the idea of unbiased transformation, this paper estimates moments of arbitrary order for interval-censored data. When the density function of the truncation variable is known, we obtain a family of estimators with strong consistency (with convergence rate up to $n^{-1/2}(\log\log n)^{1/2}$) and asymptotic normality, and verify the feasibility and effectiveness of this estimation method through simulation.

16.
In this paper, we consider the estimation problem for the Pareto distribution based on progressive Type-II interval censoring with random removals. We discuss maximum likelihood estimation of the model parameters, and show the consistency and asymptotic normality of the maximum likelihood estimators based on a progressively Type-II interval-censored sample.

17.
In recent years, the theory and applications of functional data analysis have developed rapidly. In many practical applications, the response variable is subject to random right censoring. We consider a functional partial linear quantile regression model to characterize the relationship between functional and scalar predictors and a right-censored response. The unknown slope function is approximated by functional principal component basis functions, and estimators of the unknown coefficients are obtained by minimizing an inverse-probability-weighted quantile loss function. The proposed estimation is easy to implement with weighted quantile regression routines. Under certain assumptions, we establish the asymptotic normality of the finite-dimensional parameter estimators and the convergence rate of the slope-function estimator. Finally, simulations and a real-data application demonstrate the effectiveness of the proposed method.
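The objective minimized above is a weighted sum of check (quantile) losses; a minimal sketch follows, where the weights stand in for the inverse censoring probabilities 1/G(t_i) (the values here are illustrative placeholders, not from the paper):

```python
# Check (quantile) loss: rho_tau(u) = u * (tau - 1{u < 0}).
def check_loss(u, tau):
    return u * (tau - (1.0 if u < 0 else 0.0))

# Inverse-probability-weighted quantile loss over a set of residuals;
# in the censored setting each weight would be 1/G(t_i) for an
# uncensored observation and 0 for a censored one.
def weighted_quantile_loss(resids, weights, tau=0.5):
    return sum(w * check_loss(r, tau) for r, w in zip(resids, weights))

print(weighted_quantile_loss([1.0, -2.0, 0.5], [1.0, 1.25, 1.0], tau=0.5))
```

Minimizing this objective over the model coefficients is exactly what a weighted quantile regression routine does, which is why the method is easy to implement with existing software.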

18.
This paper proposes a technique, termed censored average derivative estimation (CADE), for estimating the unknown regression function in nonparametric censored regression models with randomly censored samples. The CADE procedure involves three stages: first, transform the censored data into synthetic data or pseudo-responses using the inverse probability censoring weighted (IPCW) technique; second, estimate the average derivatives of the regression function; and finally, approximate the unknown regression function by an estimator of univariate regression, using techniques for one-dimensional nonparametric censored regression. CADE provides an easily implemented methodology for modelling the association between the response and a set of predictor variables when data are randomly censored, and it also provides a technique for “dimension reduction” in nonparametric censored regression models. The average derivative estimator is shown to be root-n consistent and asymptotically normal. The estimator of the unknown regression function is a local linear kernel regression estimator and is shown to converge at the optimal one-dimensional nonparametric rate. Monte Carlo experiments show that the proposed estimators work quite well.
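The first CADE stage above can be sketched as the classical IPCW synthetic-response transform: an uncensored response y is inflated to y / G(y) and a censored one becomes 0 (the censoring survival function G is assumed known here for illustration):

```python
import math

# Stage 1 of CADE: build synthetic (pseudo-)responses via IPCW.
# ys: observed responses; deltas: 1 = uncensored, 0 = censored;
# G: censoring survival function, G(t) = P(censoring time > t).
def synthetic_responses(ys, deltas, G):
    return [d * y / G(y) for y, d in zip(ys, deltas)]

# Toy example with an assumed exponential censoring survival function.
ys     = [2.0, 5.0, 1.0]
deltas = [1, 0, 1]  # the middle observation is censored
G      = lambda t: math.exp(-0.1 * t)
print(synthetic_responses(ys, deltas, G))
```

When G is the true censoring survival function, the synthetic responses have the same conditional mean as the uncensored responses, so the later stages can treat them as ordinary regression data.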

19.
It is very common in AIDS studies that the response variable (e.g., HIV viral load) may be subject to censoring due to detection limits, while covariates (e.g., CD4 cell count) may be measured with error. Failure to account for censoring in the response variable and measurement errors in the covariates may introduce substantial bias in estimation and thus lead to unreliable inference. Moreover, with non-normal and/or heteroskedastic data, traditional mean regression models are not robust in the tails. In this case, one may find it attractive to estimate the extreme causal relationship of covariates to a dependent variable, which can be suitably studied in the quantile regression framework. In this paper, we consider joint inference for a mixed-effects quantile regression model with right-censored responses and errors in covariates. The inverse censoring probability weighted method and the orthogonal regression method are combined to reduce the estimation biases caused by censored data and measurement errors. Under some regularity conditions, the consistency and asymptotic normality of the estimators are derived. Finally, simulation studies are carried out and an HIV/AIDS clinical data set is analyzed to illustrate the proposed procedure.

20.
In applied statistics, the coefficient of variation is widely used. However, inference concerning the coefficient of variation of non-normal distributions is rarely reported. In this article, a simulation-based Bayesian approach is adopted to estimate the coefficient of variation (CV) under progressive first-failure censored data from the Gompertz distribution. Sampling schemes such as first-failure censoring, progressive Type-II censoring, Type-II censoring, and complete sampling can be obtained as special cases of the progressive first-failure censored scheme. The simulation-based approach gives a point estimate as well as the empirical sampling distribution of the CV. A joint prior density, formed as the product of a conditional gamma density and an inverted gamma density for the unknown Gompertz parameters, is considered. In addition, results from maximum likelihood and parametric bootstrap techniques are also presented. An analysis of a real-life data set is presented for illustrative purposes, and results from simulation studies assessing the performance of the proposed method are included.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号