Similar Articles
20 similar articles found (search time: 31 ms).
1.
Quantile regression provides an attractive tool for the analysis of censored responses, because the conditional quantile functions are often of direct interest in regression analysis, and moreover, the quantiles are often identifiable while the conditional mean functions are not. Existing methods of estimation for censored quantiles are mostly limited to singly left- or right-censored data, with some attempts made to extend the methods to doubly censored data. In this article, we propose a new and unified approach, based on a variation of the data augmentation algorithm, to censored quantile regression estimation. The proposed method adapts easily to different forms of censoring, including doubly censored and interval-censored data, and somewhat surprisingly, the resulting estimates improve on the performance of the best known estimators with singly censored data. Supplementary material for this article is available online.

2.
The nonparametric estimator of the conditional survival function proposed by Beran is a useful tool to evaluate the effects of covariates in the presence of random right censoring. However, censoring indicators of right censored data may be missing for different reasons in many applications. We propose estimators of the conditional cumulative hazard and survival functions that can handle this situation. We also construct likelihood ratio confidence bands for them and obtain their asymptotic properties. Simulation studies are used to evaluate the performance of the estimators and their confidence bands.
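The Beran estimator mentioned above is a kernel-weighted product-limit estimator. As a point of reference, here is a minimal NumPy sketch of the classical version with fully observed censoring indicators (not the missing-indicator extension studied in the paper); the function name `beran_survival`, the Gaussian kernel and the bandwidth `h` are illustrative choices.

```python
import numpy as np

def beran_survival(t_grid, times, delta, x_obs, x0, h):
    """Beran-type (kernel-weighted product-limit) estimate of S(t | X = x0).

    times : observed follow-up times, delta : censoring indicators (1 = event),
    x_obs : one-dimensional covariate values, x0 : evaluation point, h : bandwidth.
    """
    # Nadaraya-Watson weights from a Gaussian kernel (an illustrative choice)
    w = np.exp(-0.5 * ((x_obs - x0) / h) ** 2)
    w = w / w.sum()

    order = np.argsort(times)
    times, delta, w = times[order], delta[order], w[order]

    surv = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        s = 1.0
        for i in range(len(times)):
            if times[i] <= t and delta[i] == 1:
                at_risk = w[i:].sum()      # kernel weight of subjects still at risk
                if at_risk > 0:
                    s *= 1.0 - w[i] / at_risk
        surv[k] = s
    return surv
```

Setting all kernel weights equal to 1/n recovers the ordinary Kaplan-Meier estimator, which is a convenient sanity check for the implementation.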

3.
In this paper, we consider the estimation problem for the Pareto distribution based on progressive Type-II interval censoring with random removals. We discuss the maximum likelihood estimation of the model parameters and establish the consistency and asymptotic normality of the maximum likelihood estimators based on a progressively Type-II interval censored sample.
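For progressively Type-II interval censored data with inspection times $t_1<\dots<t_m$, observed interval counts $d_i$ and random removals $r_i$, the likelihood is commonly written as $\prod_i [F(t_i)-F(t_{i-1})]^{d_i}[1-F(t_i)]^{r_i}$. The sketch below maximizes this numerically for a classical Pareto$(\alpha,\sigma)$ model with the scale $\sigma$ treated as known; the toy data, function names and parameterization are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(alpha, t, d, r, sigma=1.0):
    """Negative log-likelihood for Pareto(alpha, sigma) data under
    progressive Type-II interval censoring with random removals.

    t : inspection times t_1 < ... < t_m (all greater than sigma)
    d : failures observed in each interval (t_{i-1}, t_i]
    r : units randomly removed at each inspection time
    """
    F = 1.0 - (sigma / t) ** alpha            # Pareto CDF at the inspection times
    F_prev = np.concatenate(([0.0], F[:-1]))  # F(t_0) = 0
    log_lik = np.sum(d * np.log(F - F_prev)) + np.sum(r * np.log(1.0 - F))
    return -log_lik

# toy data: 4 inspection times, interval failure counts and random removals
t = np.array([1.2, 1.8, 2.5, 4.0])
d = np.array([6, 5, 4, 2])
r = np.array([1, 2, 1, 4])

res = minimize_scalar(neg_log_lik, bounds=(1e-3, 20.0), args=(t, d, r), method="bounded")
print("MLE of the shape parameter alpha:", res.x)
```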

4.
In this paper, we propose an efficient branch-and-bound procedure to compute exact nonparametric statistical intervals based on two Type-II right censored data sets. The procedure is based on recurrence relations for the distribution and density functions of progressively Type-II censored order statistics, which can be applied to compute the coverage probabilities. We illustrate the method for both confidence and prediction intervals of a given level.

5.
In this paper, we consider weighted local polynomial calibration estimation and imputation estimation of a nonparametric function when the data are right censored and the censoring indicators are missing at random, and we establish the asymptotic normality of these estimators. As applications, we derive weighted local linear calibration and imputation estimators of the conditional distribution function, the conditional density function and the conditional quantile function, and investigate their asymptotic normality. Finally, simulation studies are conducted to illustrate the finite-sample performance of the estimators.

6.
This article deals with the progressively first-failure censored Lindley distribution. Maximum likelihood and Bayes estimators of the parameter and reliability characteristics of the Lindley distribution based on progressively first-failure censored samples are derived. Asymptotic confidence intervals based on the observed Fisher information and bootstrap confidence intervals of the parameter are constructed. Bayes estimators using non-informative and gamma informative priors are derived via an importance sampling procedure and the Metropolis–Hastings (MH) algorithm under a squared error loss function. HPD credible intervals for the parameter based on the importance sampling procedure and the MH algorithm are also constructed. To study the performance of the various estimators discussed in this article, a Monte Carlo simulation study is conducted. Finally, a real data set is studied for illustration purposes.
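For a progressively first-failure censored sample $x_1,\dots,x_m$ with removal counts $R_i$ and group size $k$, the likelihood is proportional to $\prod_i f(x_i)[1-F(x_i)]^{k(R_i+1)-1}$. A minimal random-walk Metropolis–Hastings sketch for the Lindley parameter $\theta$ under a gamma prior is given below; the prior hyperparameters, step size and function names are illustrative assumptions, not the article's exact implementation.

```python
import numpy as np

def log_post(theta, x, R, k, a=1.0, b=1.0):
    """Log-posterior of the Lindley parameter theta for a progressively
    first-failure censored sample: x = first-failure times, R = removal counts,
    k = group size.  Gamma(a, b) prior on theta (an illustrative choice)."""
    if theta <= 0:
        return -np.inf
    log_f = 2 * np.log(theta) - np.log(1 + theta) + np.log1p(x) - theta * x
    log_S = np.log(1 + theta + theta * x) - np.log(1 + theta) - theta * x
    log_lik = np.sum(log_f + (k * (R + 1) - 1) * log_S)
    log_prior = (a - 1) * np.log(theta) - b * theta
    return log_lik + log_prior

def mh_sampler(x, R, k, n_iter=5000, step=0.2, theta0=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler for theta."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0, x, R, k)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()   # random-walk proposal
        lp_prop = log_post(prop, x, R, k)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws
```

Posterior summaries, such as the Bayes estimate under squared error loss, are then obtained from the retained draws after discarding a burn-in period.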

7.
Semiparametric random censorship (SRC) models (Dikta, 1998) provide an attractive framework for estimating survival functions when censoring indicators are fully or partially available. When there are missing censoring indicators (MCIs), the SRC approach employs a model-based estimate of the conditional expectation of the censoring indicator given the observed time, where the model parameters are estimated using only the complete cases. The multiple imputation approach, on the other hand, utilizes this model-based estimate to impute the missing censoring indicators and form several completed data sets. The Kaplan-Meier and SRC estimators based on the several completed data sets are averaged to arrive at the multiple imputation Kaplan-Meier (MIKM) and the multiple imputation SRC (MISRC) estimators. While the MIKM estimator is asymptotically as efficient as or less efficient than the standard SRC-based estimator that involves no imputations, here we investigate the performance of the MISRC estimator and prove that it attains the benchmark variance set by the SRC-based estimator. We also present numerical results comparing the performances of the estimators under several misspecified models for the above-mentioned conditional expectation.
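The multiple-imputation idea described above can be sketched as follows: fit a model for $P(\delta=1\mid T)$ on the complete cases, draw the missing indicators from that model several times, compute the Kaplan-Meier estimate on each completed data set, and average. The logistic model in $T$, the NaN coding of missing indicators and the function names below are illustrative choices, not the authors' exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def km_estimate(t_grid, times, delta):
    """Plain Kaplan-Meier estimate of S(t) on a grid of time points."""
    order = np.argsort(times)
    times, delta = times[order], delta[order]
    n = len(times)
    surv = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        s = 1.0
        for i in range(n):
            if times[i] <= t and delta[i] == 1:
                s *= 1.0 - 1.0 / (n - i)       # n - i subjects still at risk
        surv[k] = s
    return surv

def mi_km(t_grid, times, delta, n_imp=10, seed=0):
    """Multiple-imputation Kaplan-Meier (MIKM-style) estimate when some
    censoring indicators are missing (coded as np.nan in the float array delta)."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(delta)
    # model P(delta = 1 | T) on the complete cases (logistic in T, an illustrative choice)
    clf = LogisticRegression().fit(times[obs].reshape(-1, 1), delta[obs].astype(int))
    p_miss = clf.predict_proba(times[~obs].reshape(-1, 1))[:, 1]
    curves = []
    for _ in range(n_imp):
        d = delta.copy()
        d[~obs] = rng.binomial(1, p_miss)      # impute the missing indicators
        curves.append(km_estimate(t_grid, times, d))
    return np.mean(curves, axis=0)             # average over the completed data sets
```

The same imputation loop would apply to an MISRC-type estimator by replacing `km_estimate` with an SRC-based survival estimator.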

8.
In applied statistics, the coefficient of variation is widely used; however, inference concerning the coefficient of variation of non-normal distributions is rarely reported. In this article, a simulation-based Bayesian approach is adopted to estimate the coefficient of variation (CV) under progressively first-failure censored data from the Gompertz distribution. Sampling schemes such as first-failure censoring, progressive Type-II censoring, Type-II censoring and complete sampling can be obtained as special cases of the progressive first-failure censoring scheme. The simulation-based approach gives a point estimate as well as the empirical sampling distribution of the CV. A joint prior density formed as the product of a conditional gamma density and an inverted gamma density for the unknown Gompertz parameters is considered. In addition, results from maximum likelihood and parametric bootstrap techniques are also presented. An analysis of a real-life data set is given for illustrative purposes, and results from simulation studies assessing the performance of the proposed method are included.

9.
This paper considers reliability inference for the truncated proportional hazard rate stress–strength model based on a progressively Type-II censoring scheme. When the stress and strength variables follow truncated proportional hazard rate distributions, the maximum likelihood estimation and the pivotal quantity estimation of the stress–strength reliability are derived. Based on the percentile bootstrap sampling technique, the 95% confidence interval of the stress–strength reliability is obtained, together with the corresponding coverage percentage. Moreover, based on the Fisher Z transformation and a modified generalized pivotal quantity, the 95% modified generalized confidence interval for the stress–strength reliability is obtained. The performance of the proposed methods is evaluated by Monte Carlo simulation, and the numerical results show that the pivotal quantity estimators perform better than the maximum likelihood estimators. Finally, two real data sets are analyzed by the proposed methodology for illustrative purposes. The results of the real-data analysis show that the model can be applied to practical problems, that the truncated proportional hazard rate distribution fits the failure data better than other distributions, and that the algorithms in this paper are suitable for handling small-sample data.

10.
In this paper we consider a model for dependent censoring and derive a consistent, asymptotically normal estimator of the underlying survival distribution from a sample of censored data. The methodology is illustrated with an application to the analysis of cancer data. Some simulations evaluating the performance of our estimator are also presented; the results indicate that it performs reasonably well in comparison with other dependent-censoring survival curve estimators.

11.
For censored data in which the censoring indicators are missing at random, this paper uses nonparametric methods to construct two estimators of the regression function, establishes their uniform convergence rates and asymptotic distributions, and further verifies the finite-sample properties of the proposed methods through numerical simulations.

12.
Building on the unbiased-transformation approach to estimating the mean of interval-censored data, this paper studies the variance of the resulting estimator. For interval-censoring Case 1 and Case 2, conditions under which the estimator has finite variance are obtained. When the distribution of the censoring variable has, in a certain sense, a heavier tail than that of the variable being censored, an estimator with finite variance can be obtained.

13.
Recent work on Pitman closeness has compared estimators under Type-II censored samples from the exponential distribution based on the observed number of failures. In this paper, we carry out similar Pitman closeness comparisons for Type-I censored samples from the exponential distribution based on the time under test.

14.
Estimation of raw moments of arbitrary order for interval-censored data
In survival analysis and reliability studies, the presence of interval-censored data often makes traditional statistical methods inapplicable. Starting from the idea of unbiased transformation, this paper estimates raw moments of arbitrary order for interval-censored data. When the density function of the censoring variable is known, a family of estimators with strong consistency (with a convergence rate up to $n^{-1/2}(\log\log n)^{1/2}$) and asymptotic normality is obtained, and the feasibility and effectiveness of the estimation method are verified through simulations.

15.
程从华 (Cheng Conghua), 《数学学报》 (Acta Mathematica Sinica), 63(3): 193-208
Under a doubly Type-II censoring scheme, the reliability of a system subjected independently to random exponential Pareto (EP) stresses is discussed. Different point and interval estimators of the system reliability parameter are given: the point estimators include the uniformly minimum variance unbiased estimator (UMVUE) and the maximum likelihood estimator (MLE), while the interval estimators include an exact confidence interval, an approximate confidence interval and bootstrap confidence intervals. Numerical simulation results are provided to compare the performance of the different estimation methods, and the analysis of a real data set is presented to demonstrate the proposed methods.

16.
In this paper, we investigate a competing risks model based on the exponentiated Weibull distribution under a Type-I progressively hybrid censoring scheme. To estimate the unknown parameters and reliability function, the maximum likelihood estimators and asymptotic confidence intervals are derived. Since the Bayesian posterior density functions cannot be given in closed form, we adopt a Markov chain Monte Carlo method to calculate approximate Bayes estimators and highest posterior density credible intervals. To illustrate the estimation methods, a simulation study is carried out with numerical results. It is concluded that both maximum likelihood estimation and Bayesian estimation can be used for statistical inference in the competing risks model under the Type-I progressively hybrid censoring scheme.

17.
We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) First, define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance. Map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator of this risk. (2) Next, construct candidate estimators based on the loss function for the observed data. (3) Then, apply cross-validation to estimate risk based on the observed data loss function and to select an optimal estimator among the candidates. A number of common estimation procedures follow this approach in the full data situation, but depart from it when faced with the obstacle of evaluating the loss function for censored observations. Here, we argue that one can, and should, also adhere to this estimation road map in censored data situations. Tree-based methods, where the candidate estimators in Step 2 are generated by recursive binary partitioning of a suitably defined covariate space, provide a striking example of the chasm between estimation procedures for full data and censored data (e.g., regression trees as in CART for uncensored data and adaptations to censored data). Common approaches for regression trees bypass the risk estimation problem for censored outcomes by altering the node splitting and tree pruning criteria in manners that are specific to right-censored data. This article describes an application of our unified methodology to tree-based estimation with censored data. The approach encompasses univariate outcome prediction, multivariate outcome prediction, and density estimation, simply by defining a suitable loss function for each of these problems. The proposed method for tree-based estimation with censoring is evaluated using a simulation study and the analysis of CGH copy number and survival data from breast cancer patients.
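The risk-estimation step of this road map can be sketched for a squared-error full-data loss: uncensored observations contribute $(T_i-\psi(W_i))^2$ weighted by the inverse of a Kaplan-Meier estimate of the censoring survival function, and cross-validation over this observed-data risk selects among candidates. The sketch below uses regression trees of varying depth as the candidate estimators; the weight truncation, the marginal (covariate-free) censoring estimate and all function names are simplifying assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold

def censoring_survival(times, delta):
    """Kaplan-Meier estimate of the censoring survival G(t), evaluated at each
    subject's own follow-up time (the left limit is ignored for brevity)."""
    order = np.argsort(times)
    c_sorted = 1 - delta[order]                 # a censored subject is an "event" for G
    n = len(times)
    g, G_sorted = 1.0, np.ones(n)
    for i in range(n):
        if c_sorted[i] == 1:
            g *= 1.0 - 1.0 / (n - i)
        G_sorted[i] = g
    G = np.empty(n)
    G[order] = G_sorted
    return np.clip(G, 0.05, None)               # truncate tiny weights for stability

def ipcw_risk(model, W, times, delta):
    """IPCW estimate of the squared-error risk: only uncensored subjects
    contribute, each weighted by 1 / G(T_i)."""
    G = censoring_survival(times, delta)
    w = delta / G
    return np.sum(w * (times - model.predict(W)) ** 2) / np.sum(w)

def select_tree_depth(W, times, delta, depths=(1, 2, 3, 4, 5), n_splits=5, seed=0):
    """Cross-validated choice among candidate regression trees (W is the (n, p)
    covariate matrix) using the observed-data IPCW risk."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    cv_risk = []
    for d in depths:
        fold_risks = []
        for tr, te in kf.split(W):
            # fit on uncensored training observations, each weighted by 1/G
            G_tr = censoring_survival(times[tr], delta[tr])
            tree = DecisionTreeRegressor(max_depth=d).fit(
                W[tr], times[tr], sample_weight=delta[tr] / G_tr)
            fold_risks.append(ipcw_risk(tree, W[te], times[te], delta[te]))
        cv_risk.append(np.mean(fold_risks))
    return depths[int(np.argmin(cv_risk))]
```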

18.
When collecting clinical data, observations may be censored due to competing risks or patient withdrawal. Statistical inference for censored data is usually based on the assumption that the failure time and the censoring time are independent, but in practice they are often dependent, which makes the analysis of censored data more complicated. In this paper, we assume that the joint distribution of the failure time and the censoring time is a function of their marginal distributions; this function is called a copula. Under prespecified copulas, the maximum likelihood estimators for Cox proportional hazards models are worked out, and the statistical analysis is carried out by simulations. When dependent censoring occurs, the proposed method performs better than the traditional method designed for the independent case, and simulation results show that it yields efficient estimates.

19.
This paper considers clustered doubly censored data that occur when there exist several correlated survival times of interest and only doubly censored data are available for each survival time. In this situation, one approach is to model the marginal distribution of failure times using semiparametric linear transformation models while leaving the dependence structure completely arbitrary. We demonstrate that the approach of Cai et al. (Biometrika 87:867–878, 2000) can be extended to clustered doubly censored data. We propose two estimators by using two different estimated censoring weights. A simulation study is conducted to investigate the proposed estimators.

20.
This paper proposes a technique [termed censored average derivative estimation (CADE)] for studying estimation of the unknown regression function in nonparametric censored regression models with randomly censored samples. The CADE procedure involves three stages: first, transform the censored data into synthetic data or pseudo-responses using the inverse probability of censoring weighting (IPCW) technique; second, estimate the average derivatives of the regression function; and finally, approximate the unknown regression function by a univariate regression estimator using techniques for one-dimensional nonparametric censored regression. CADE provides an easily implemented methodology for modelling the association between the response and a set of predictor variables when data are randomly censored. It also provides a technique for "dimension reduction" in nonparametric censored regression models. The average derivative estimator is shown to be root-n consistent and asymptotically normal. The estimator of the unknown regression function is a local linear kernel regression estimator and is shown to converge at the optimal one-dimensional nonparametric rate. Monte Carlo experiments show that the proposed estimators work quite well.
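The first CADE stage can be sketched with Koul-Susarla-Van Ryzin-type pseudo-responses $Y_i^*=\delta_i Y_i/\hat G(Y_i)$, where $\hat G$ is the Kaplan-Meier estimate of the censoring survival function; under independent censoring $E[Y^*\mid X]=E[Y\mid X]$, so standard smoothers apply to $(X_i,Y_i^*)$. The sketch below implements this transformation and a local linear smoother for the final one-dimensional step, skipping the average-derivative stage; the weight truncation and function names are illustrative assumptions.

```python
import numpy as np

def synthetic_responses(times, delta):
    """Stage-1 sketch: IPCW (Koul-Susarla-Van Ryzin type) pseudo-responses
    Y* = delta * Y / G_hat(Y), where G_hat is the Kaplan-Meier estimate of the
    censoring survival function evaluated at the observed times."""
    n = len(times)
    order = np.argsort(times)
    # product-limit factors: a censored observation (delta = 0) is an "event" for G
    factors = np.where(delta[order] == 0, 1.0 - 1.0 / (n - np.arange(n)), 1.0)
    G = np.empty(n)
    G[order] = np.cumprod(factors)
    G = np.clip(G, 0.05, None)          # guard against tiny weights in the right tail
    return delta * times / G

def local_linear(x0, x, y, h):
    """Final-stage sketch: local linear smoother of the pseudo-responses y
    against a one-dimensional index x, evaluated at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    # weighted least squares via a square-root-weighted design
    beta, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)
    return beta[0]                      # intercept = estimate of E[Y | index = x0]
```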
