Similar Articles
1.
Although the case-cohort design has been used extensively in practice, it usually requires the disease rate in the large cohort study to be low. However, diseases with high rates are frequently observed in many clinical studies. Under such circumstances, it is desirable to consider a generalized case-cohort design, where only a fraction of the cases are sampled. In this article, we propose an inference procedure for additive hazards regression under generalized case-cohort sampling. Asymptotic properties of the proposed estimators of the regression coefficients are established. To demonstrate the effectiveness of generalized case-cohort sampling, we compare it with simple random sampling in terms of asymptotic relative efficiency. Furthermore, we derive the optimal allocation of the subsamples for the proposed design. The finite sample performance of the proposed method is evaluated through simulation studies.
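To make the sampling scheme concrete, the following is a minimal, hedged Python (NumPy) sketch of how a generalized case-cohort sample might be drawn: a random subcohort plus a fraction of the cases outside it, together with the usual inverse-probability-of-selection weights. The disease rate and the sampling fractions `q_subcohort` and `q_case` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000                      # full cohort size (assumed)
is_case = rng.random(n) < 0.30  # high disease rate, e.g. 30% (assumed)

q_subcohort = 0.10              # subcohort sampling fraction (assumed)
q_case = 0.50                   # fraction of cases sampled outside the subcohort (assumed)

in_subcohort = rng.random(n) < q_subcohort
sampled_case = is_case & ~in_subcohort & (rng.random(n) < q_case)
selected = in_subcohort | sampled_case

# Inverse-probability-of-selection weights: a subject enters the sample either via
# the subcohort (prob q_subcohort) or, if a case, additionally with prob q_case.
sel_prob = np.where(is_case,
                    q_subcohort + (1 - q_subcohort) * q_case,
                    q_subcohort)
weights = np.zeros(n)
weights[selected] = 1.0 / sel_prob[selected]

print(f"selected {selected.sum()} of {n} subjects "
      f"({is_case[selected].sum()} cases, {(~is_case)[selected].sum()} controls)")
```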

2.
The case-cohort design is an efficient and economical design for studying risk factors of diseases with expensive measurements, especially when the disease rate is low. When several diseases are of interest, multiple case-cohort studies may be conducted using the same subcohort. To study the association between risk factors and each disease occurrence or death, we consider a general additive-multiplicative hazards model for case-cohort designs with multiple disease outcomes. We present an estimation procedure for the regression parameters of the additive-multiplicative hazards model, and show that the proposed estimator is consistent and asymptotically normal. Simulation studies show that the large-sample approximation works well in finite samples. Finally, we apply the proposed method to a real data example for illustration.

3.
We consider the estimation problem with classical case-cohort data. The case-cohort design was first proposed by Prentice (Biometrika 73:1–11, 1986). Most studies focus on the Cox regression model; in this paper, we consider the linear regression model. We propose an estimator which extends the Buckley–James estimator (BJE) to the classical case-cohort design. In order to derive the BJE, there is an additional problem of finding the generalized maximum likelihood estimator (GMLE) of the underlying distribution functions. We propose a self-consistent algorithm for the GMLE. We also show that the GMLE is consistent and asymptotically normally distributed under certain regularity conditions. We further present some simulation results on the asymptotic properties of the BJE and apply our procedure to a data set used in the literature.

4.
Diagnostics for the proportional hazards model based on case-cohort data
余吉昌, 曹永秀. 《数学学报》, 2020, 63(2): 137–148
The case-cohort design is a widely used sampling scheme in survival analysis that reduces cost while maintaining efficiency. For case-cohort data, many statistical methods based on the proportional hazards model have been developed to estimate the effects of covariates on survival time. However, little work has addressed checking whether the model assumptions hold for case-cohort data. In this article, we propose a class of test statistics based on asymptotically zero-mean stochastic processes to check the proportional hazards assumption with case-cohort data. The asymptotic distribution of the test statistics is approximated by a resampling method, the finite-sample performance of the proposed method is studied through numerical simulation, and the method is finally applied to a real data set from the National Wilms Tumor Study.

5.
In this paper, we propose a model of a moving average control chart (MA control chart) with a Weibull failure mechanism from an economic viewpoint. When the process-failure mechanism follows a Weibull model or another model with an increasing hazard rate, it is desirable to have sampling intervals that decrease with the age of the system. The MA control chart is used to monitor quality characteristics of raw material or products in a continuous process. A cost model utilizing a variable sampling scheme instead of fixed sampling intervals in a continuous flow process is studied in this research. The variable sampling scheme is used to maintain a constant integrated hazard rate over each sampling interval. Optimal values of the design parameters, namely the moving subgroup size, the sampling interval, and the control limit coefficient, are determined by minimizing the loss-cost model. The performance of the loss cost with various Weibull parameters is studied. A sensitivity analysis shows that the design parameters and the loss cost depend on the model parameters and shift amounts.
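As a hedged illustration of the variable-interval idea (not the paper's loss-cost optimization), the short Python sketch below computes sampling times that keep the integrated Weibull hazard constant over each interval; the shape, scale, and per-interval hazard increment are assumed values chosen only for demonstration.

```python
import numpy as np

beta, eta = 2.0, 100.0      # assumed Weibull shape (>1: increasing hazard) and scale
delta = 0.05                # assumed integrated hazard accumulated per sampling interval
n_samples = 10

# Cumulative hazard H(t) = (t / eta)**beta, so requiring H(t_j) = j * delta gives
# t_j = eta * (j * delta)**(1 / beta).
j = np.arange(1, n_samples + 1)
t = eta * (j * delta) ** (1.0 / beta)
intervals = np.diff(np.concatenate(([0.0], t)))

for k, (tk, hk) in enumerate(zip(t, intervals), start=1):
    print(f"sample {k:2d}: time = {tk:7.2f}, interval length = {hk:6.2f}")
# With beta > 1 the interval lengths shrink as the system ages, as desired.
```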

6.
Efficiencies of the maximum pseudolikelihood estimator and a number of related estimators for the case-cohort sampling design in the proportional hazards regression model are studied. The asymptotic information and lower bound for estimating the parametric regression parameter are calculated based on the effective score, which is obtained by determining the component of the parametric score orthogonal to the space generated by the infinite-dimensional nuisance parameter. The asymptotic distributions of the maximum pseudolikelihood and related estimators in an i.i.d. setting show that these estimators do not achieve the computed asymptotic lower bound. Simple guidelines are provided to determine in which instances such estimators are close enough to efficient for practical purposes.

7.
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. In many situations, some covariates are easily measured on all cohort subjects, and surrogate measurements of the expensive covariates also may be observed. In this paper, to make full use of the covariate data collected outside the case-cohort sample, we propose a class of weighted estimators with general time-varying weights for the additive hazards model, and the estimators are shown to be consistent and asymptotically normal. We also identify the estimator within this class that maximizes efficiency, and simulation studies show that the efficiency gains of the proposed estimator over the existing ones can be substantial in practical situations. A real example is provided.

8.
In this paper, we derive the distribution of the minimum and the maximum of two independent Poisson random variables. A useful procedure for computing the probabilities is given, and four numerical examples are presented: the first two use generated data, and the other two use Champions League soccer data to illustrate the model considered here. The hazard rate and the reversed hazard rate of the minimum and maximum of two independent discrete random variables are also obtained, and their monotonicity is investigated. The results for Poisson-distributed variables are obtained as special cases.
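For intuition, here is a small, hedged Python sketch (using SciPy, with arbitrary rates) of the standard identities P(max(X,Y) ≤ k) = F_X(k)F_Y(k) and P(min(X,Y) > k) = S_X(k)S_Y(k) for independent X and Y, together with the discrete hazard rate h(k) = p(k)/P(T ≥ k). This is not the paper's derivation, only the textbook relations it builds on.

```python
import numpy as np
from scipy.stats import poisson

lam_x, lam_y = 2.0, 3.5          # assumed Poisson rates
k = np.arange(0, 16)

Fx, Fy = poisson.cdf(k, lam_x), poisson.cdf(k, lam_y)
Sx, Sy = poisson.sf(k, lam_x), poisson.sf(k, lam_y)

# Independence gives the cdf of the max and the survival function of the min.
F_max = Fx * Fy                  # P(max(X, Y) <= k)
S_min = Sx * Sy                  # P(min(X, Y) > k)
F_min = 1.0 - S_min

def pmf_from_cdf(F):
    """Recover p(k) = F(k) - F(k-1) for a cdf evaluated on k = 0, 1, 2, ..."""
    return np.diff(np.concatenate(([0.0], F)))

p_max, p_min = pmf_from_cdf(F_max), pmf_from_cdf(F_min)

# Discrete hazard rate h(k) = p(k) / P(T >= k) = p(k) / (1 - F(k-1)).
def hazard(p, F):
    surv_at_least = 1.0 - np.concatenate(([0.0], F[:-1]))
    return p / surv_at_least

print("hazard of min:", np.round(hazard(p_min, F_min)[:8], 4))
print("hazard of max:", np.round(hazard(p_max, F_max)[:8], 4))
```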

9.
This paper deals with nonparametric regression estimation under arbitrary sampling with an unknown distribution. The effect of the distribution of the design, which is a nuisance parameter, can be eliminated by conditioning. An upper bound for the conditional mean squared error of kNN estimates leads us to consider an optimal number of neighbors, which is a random function of the sampling. The corresponding estimate can be used for nonasymptotic inference and is also consistent under a minimal recurrence condition. Some deterministic equivalents are found for the random rate of convergence of this optimal estimate, for deterministic and random designs with vanishing or diverging densities. The proposed estimate is rate optimal for standard designs.
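The paper's choice of the number of neighbors comes from a conditional-MSE bound; as a generic, hedged stand-in for such a data-driven choice, the short Python sketch below picks k for a kNN regressor by cross-validation with scikit-learn on made-up data. It illustrates the idea of selecting k from the data, not the paper's procedure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 1))               # arbitrary design (assumed)
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(500)

# Data-driven choice of the number of neighbors (here by cross-validation,
# used only as a stand-in for the paper's conditional-MSE-based rule).
search = GridSearchCV(KNeighborsRegressor(),
                      {"n_neighbors": range(1, 51)},
                      cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print("selected k:", search.best_params_["n_neighbors"])
```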

10.
Case-cohort study designs are widely used to reduce the cost of large cohort studies. When several diseases are of interest, the same subcohort can be used. In this paper, we study a more efficient estimation procedure for the marginal additive hazards model under the case-cohort design with multiple disease outcomes. Instead of analyzing each disease separately and ignoring the additional exposure measurements collected on subjects with other diseases, we propose a new weighted estimating equation approach that improves efficiency by utilizing as much of the collected information as possible. The consistency and asymptotic normality of the resulting estimator are established. Simulation studies are conducted to examine the finite sample performance of the proposed estimator and confirm the efficiency gains.

11.
Cost effective sampling design is a major concern in some experiments, especially when the measurement of the characteristic of interest is costly or painful or time consuming. Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian Journal of Agricultural Research 3, 385-390] as an effective way to estimate the pasture mean. In the current paper, a modification of ranked set sampling called moving extremes ranked set sampling (MERSS) is considered for the best linear unbiased estimators (BLUEs) for the simple linear regression model. The BLUEs for this model under MERSS are derived. The BLUEs under MERSS are shown to be markedly more efficient for normal data when compared with the BLUEs under simple random sampling.

12.
Acta Mathematicae Applicatae Sinica, English Series - In survival analysis, data are frequently collected by some complex sampling schemes, e.g., length biased sampling, case-cohort sampling and so...

13.
An efficient approach, called augmented line sampling, is proposed to locally evaluate the failure probability function (FPF) in structural reliability-based design using only one reliability analysis run of line sampling. The novelty of this approach is that it re-uses the information from a single line sampling analysis to construct the FPF estimate, so that repeated evaluations of the failure probability can be avoided. It is shown that, when the design parameters are the distribution parameters of the basic random variables, the desired information about the FPF can be extracted from a single implementation of line sampling. Line sampling is a highly efficient and widely used reliability analysis method. The proposed method extends traditional line sampling from failure probability estimation to evaluation of the FPF, which is a challenging task. The required computational effort is neither particularly sensitive to the number of uncertain parameters, nor does it grow with the number of design parameters. Numerical examples are given to show the advantages of the approach.
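For readers unfamiliar with the underlying technique, the following is a minimal, hedged Python sketch of plain line sampling (not the augmented variant) for a simple linear limit state in standard normal space; the limit-state function, important direction, and sample size are illustrative assumptions with a known exact answer for comparison.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Assumed limit-state function in standard normal space (failure when g <= 0).
def g(z):
    return 3.0 - (z[0] + z[1]) / np.sqrt(2.0)   # exact reliability index beta = 3

alpha = np.array([1.0, 1.0]) / np.sqrt(2.0)     # assumed important direction (unit vector)
n_lines = 200

p_lines = np.empty(n_lines)
for i in range(n_lines):
    z = rng.standard_normal(2)
    z_perp = z - (z @ alpha) * alpha            # project onto the hyperplane orthogonal to alpha
    # Distance c along alpha at which this line crosses the limit state g = 0.
    line = lambda c: g(z_perp + c * alpha)
    c_star = brentq(line, -10.0, 10.0)
    p_lines[i] = norm.cdf(-c_star)              # per-line failure probability

pf = p_lines.mean()
print(f"line sampling estimate: {pf:.3e}  (exact Phi(-3) = {norm.cdf(-3):.3e})")
```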

14.
For the time-variant hybrid reliability problem under random and interval uncertainties, the upper bound of the time-variant failure probability, as a conservative index to quantify the safety level of the structure, is of particular concern. To estimate it efficiently, two algorithms are proposed that combine adaptive Kriging with design-point-based importance sampling and with meta-model-based importance sampling, respectively. The first algorithm first searches for the design point of the hybrid problem, around which candidate random samples are generated by shifting the sampling center from the mean value to the design point. Then, the Kriging model is iteratively trained and the hybrid problem is solved with the well-trained Kriging model. The second algorithm first utilizes Kriging-based importance sampling to generate quasi-optimal importance samples and to estimate the augmented upper bound of the time-variant failure probability. After that, the Kriging model is further updated based on these importance samples to estimate a correction factor, and the hybrid failure probability is calculated as the product of the augmented upper bound of the time-variant failure probability and the correction factor. Meanwhile, an improved learning function is presented to efficiently train an accurate Kriging model. The proposed methods integrate the merits of adaptive Kriging and importance sampling, allowing the hybrid reliability analysis to be conducted with as little computational cost as possible. The presented examples show the feasibility of the proposed methods.

15.
Murthy and Nanjamma [4] studied the problem of constructing almost unbiased ratio estimators for any sampling design using the technique of interpenetrating subsamples. Subsequently, Rao [7], [8] gave a general method for constructing unbiased ratio estimators by considering linear combinations of the two simple estimators based on the ratio of means and the mean of ratios. However, it is difficult to choose an optimum weight (Rao [9]) which minimizes the variance of the combined estimator, since the weights are random in certain cases. In this note, we consider a different method of combining these estimators, obtain a general class of almost unbiased ratio estimators of which Murthy and Nanjamma's is a particular case, and derive an optimum estimator in this class. The case of simple random sampling, where a similar class of almost unbiased ratio estimators can be developed, is briefly discussed. The results are illustrated by means of simple numerical examples.
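To make the two building blocks concrete, here is a small, hedged Python example computing the ratio-of-means and mean-of-ratios estimators and a simple 50-50 combination on fabricated data; the equal weighting is purely illustrative and is not the optimal combination derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated auxiliary variable x (with known population mean) and study variable y.
N = 5_000
x_pop = rng.gamma(shape=4.0, scale=2.0, size=N)
y_pop = 3.0 * x_pop + rng.normal(0.0, 2.0, size=N)
X_bar = x_pop.mean()                           # known population mean of x

idx = rng.choice(N, size=100, replace=False)   # simple random sample
x, y = x_pop[idx], y_pop[idx]

r1 = y.mean() / x.mean()                       # ratio of means
r2 = np.mean(y / x)                            # mean of ratios

# An illustrative fixed 50-50 combination (not the paper's optimum weight).
r_comb = 0.5 * r1 + 0.5 * r2

for name, r in [("ratio of means", r1), ("mean of ratios", r2), ("combined", r_comb)]:
    print(f"{name:15s}: R-hat = {r:.4f}, estimated Y-bar = {r * X_bar:.3f}")
print(f"true population mean of y: {y_pop.mean():.3f}")
```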

16.
Cure rate models offer a convenient way to model time-to-event data by allowing a proportion of individuals in the population to be completely cured, so that they never face the event of interest (say, death). The most studied cure rate models can be defined through a competing-cause scenario in which the random variables corresponding to the time-to-event for each competing cause are conditionally independent and identically distributed, while the actual number of competing causes is a latent discrete random variable. The main interest is then in estimating the cured proportion as well as in developing inference about the failure times of the susceptibles. The existing literature consists of parametric and non-/semi-parametric approaches, and the expectation maximization (EM) algorithm offers an efficient tool for estimating the model parameters due to the presence of right censoring in the data. In this paper, we study the cases wherein the number of competing causes is either a binary or a Poisson random variable and a piecewise linear function is used for modeling the hazard function of the time-to-event. Exact likelihood inference is then developed based on the EM algorithm, and the inverse of the observed information matrix is used for developing asymptotic confidence intervals. A Monte Carlo simulation study demonstrates the accuracy of the proposed non-parametric approach compared to the results attained from the true correct parametric model. The proposed model and inferential method are finally illustrated with a data set on cutaneous melanoma.
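As a hedged, much-simplified illustration of the cure-rate idea (a Bernoulli number of competing causes with an exponential rather than piecewise-linear susceptible hazard, fit by direct likelihood maximization instead of EM), the Python sketch below simulates right-censored data with a cured fraction and recovers the cure probability; all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

n, p_cure, rate, cens_rate = 2_000, 0.30, 0.5, 0.1   # assumed true values
cured = rng.random(n) < p_cure
t_event = np.where(cured, np.inf, rng.exponential(1.0 / rate, size=n))
t_cens = rng.exponential(1.0 / cens_rate, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)            # cured subjects are always censored

def neg_loglik(theta):
    # theta = (logit of cure probability, log of exponential rate)
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    surv = np.exp(-lam * time)                        # S(t) for susceptibles
    dens = lam * surv                                 # f(t) for susceptibles
    # Observed event: (1-p) f(t); censored: cured (p) or susceptible still at risk.
    ll = event * np.log((1 - p) * dens) + (1 - event) * np.log(p + (1 - p) * surv)
    return -ll.sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(f"estimated cure fraction: {p_hat:.3f} (true {p_cure})")
```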

17.
In stratified sampling, when the strata weights are unknown, a double sampling technique may be used to estimate them. First, a large simple random sample is drawn from the population without considering the stratification, and the sampled units belonging to each stratum are recorded to estimate the unknown strata weights. A stratified random sample is then obtained, comprising simple random subsamples of the previously selected units of the strata. If non-response is present, these subsamples may be divided into classes of respondents and non-respondents. A second subsample is then drawn from the non-respondents and an attempt is made to obtain the information. This procedure is called Double Sampling for Stratification (DSS). Okafor (Aligarh J Statist 14:13–23, 1994) derived DSS estimators based on subsampling of the non-respondents. Najmussehar and Bari (Aligarh J Statist 22:27–41, 2002) discussed an optimum double sampling design by formulating the problem as a mathematical programming problem and using the dynamic programming technique to solve it. In the present paper, a multivariate stratified population with unknown strata weights is considered, and an optimum sampling design is proposed in the presence of non-response to estimate the unknown population means using the DSS strategy. The problem turns out to be a multiobjective integer nonlinear programming problem. A solution procedure is developed using the goal programming technique. A numerical example is presented to illustrate the computational details.
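The basic DSS estimator (before the non-response and optimization stages discussed in the paper) can be sketched in a few lines of Python on fabricated data: a large first-phase sample estimates the strata weights, and small second-phase subsamples supply the stratum means. All population and sample sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated population with three strata of unknown relative sizes.
sizes = np.array([6_000, 3_000, 1_000])
means = np.array([10.0, 20.0, 40.0])
strata = np.repeat(np.arange(3), sizes)
y = rng.normal(means[strata], 5.0)
N = y.size

# Phase 1: large SRS ignoring stratification, used only to estimate strata weights.
n1 = 2_000
s1 = rng.choice(N, size=n1, replace=False)
w_hat = np.array([(strata[s1] == h).mean() for h in range(3)])

# Phase 2: small subsamples drawn from the phase-1 units of each stratum.
n2_per_stratum = 30
ybar_h = np.empty(3)
for h in range(3):
    units_h = s1[strata[s1] == h]
    sub = rng.choice(units_h, size=n2_per_stratum, replace=False)
    ybar_h[h] = y[sub].mean()

ybar_dss = np.sum(w_hat * ybar_h)            # DSS estimator of the population mean
print(f"DSS estimate: {ybar_dss:.3f}, true mean: {y.mean():.3f}")
```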

18.
Data from most complex surveys are subject to selection bias and clustering due to the sampling design. Results developed for a random sample from a super-population model may not apply. Ignoring the survey sampling weights may cause biased estimators and erroneous confidence intervals. In this paper, we use the design approach for fitting the proportional hazards (PH) model and prove formally the asymptotic normality of the sample maximum partial likelihood (SMPL) estimators under the PH model for both stochastically independent and clustered failure times. In the first case, we use the central limit theorem for martingales in the joint design-model space, and this enables us to obtain results for a general multistage sampling design under mild and easily verifiable conditions. In the case of clustered failure times, we require asymptotic normality in the sampling design space directly, and this holds for fewer sampling designs than in the first case. We also propose a variance estimator of the SMPL estimator. A key property of this variance estimator is that we do not have to specify the second-stage correlation model.
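As a hedged practical note (not the paper's estimator or its proofs), survey-weighted proportional hazards fits of this general flavor can be obtained in Python with the lifelines package by passing design weights and requesting a robust (sandwich) variance; the data frame and column names below are made up for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 1_000
x = rng.standard_normal(n)
t = rng.exponential(1.0 / np.exp(0.5 * x))          # event times under a PH model (assumed)
c = rng.exponential(2.0, size=n)                    # censoring times (assumed)
df = pd.DataFrame({
    "time": np.minimum(t, c),
    "event": (t <= c).astype(int),
    "x": x,
    "w": rng.uniform(0.5, 3.0, size=n),             # made-up survey/design weights
})

cph = CoxPHFitter()
# weights_col supplies the sampling weights; robust=True requests a sandwich-type
# variance estimate, the usual choice when design weights are used.
cph.fit(df, duration_col="time", event_col="event", weights_col="w", robust=True)
cph.print_summary()
```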

19.
Performance sampling models of duration dependence in employee turnover and firm exit predict that hazard rates will initially be low, gradually rise to a maximum, and then fall. Some empirical duration distributions have bimodal hazard rates, however. In this paper, we present a generalization of the performance sampling model that can account for such deviations from unimodality. While the standard model of performance sampling assumes that the mean and the standard deviation of performance are constant over time, we allow them to change over time, reflecting the fact that tasks may change over time. We derive the hazard rate implied by this more general model and show that it can be bimodal. Using data on turnover in law firms, we show that the hazard rates predicted by these models fit the data better than those of existing models.

20.
Consider a regression model in which the responses are subject to random right censoring. In this model, Beran studied the nonparametric estimation of the conditional cumulative hazard function and the corresponding cumulative distribution function. The main idea is to use smoothing in the covariates. Here we study asymptotic properties of the corresponding hazard function estimator obtained by convolution smoothing of Beran's cumulative hazard estimator. We establish asymptotic expressions for the bias and the variance of the estimator, which together with an asymptotic representation lead to a weak convergence result. Also, the uniform strong consistency of the estimator is obtained.
