Similar articles
20 similar articles found (search took 31 ms)
1.
We analyze left-truncated and right-censored (LTRC) data using semiparametric transformation models. It is demonstrated that the approach of Chen et al. (Biometrika 89: 659–668, 2002) can be extended to LTRC data. Furthermore, when covariates are discrete, we propose an alternative estimator, similar in spirit to the approach of Cai and Cheng (Biometrika 91: 277–290, 2004). A simulation study is conducted to investigate the performance of the proposed estimators.

2.
In this paper, we propose an alternative approach for forecasting mortality for multiple populations jointly. Our contribution builds upon the generalized linear models introduced by Renshaw et al. (1996) and Sithole et al. (2000), in which mortality forecasts are generated within the model structure, without the need for additional stochastic processes. To ensure that the resulting forecasts are coherent, a modified time-transformation is developed which requires the expected mortality differential between two populations to remain constant once the long-run equilibrium is attained. The model is then further extended to incorporate a structural change, an important feature observed in the historical mortality data of many national populations. The proposed modeling methods are illustrated with data from two different pairs of populations: (1) Swedish and Danish males; (2) English and Welsh males and U.K. male insured lives.
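The coherence requirement — a mortality differential that settles to a constant — can be sketched with a deliberately crude stand-in: fit linear trends to each population's log-mortality and project both with a common drift. The function name and the shared-drift shortcut are illustrative assumptions, not the paper's modified time-transformation:

```python
import numpy as np

def coherent_forecast(log_m1, log_m2, years, horizon):
    """Project two populations' log-mortality with a shared drift so the
    forecast differential stays constant -- a crude stand-in for a coherent
    forecast, assuming simple linear trends in each population."""
    t = np.asarray(years, dtype=float)
    a1, b1 = np.polyfit(t, log_m1, 1)[::-1]  # intercept, own slope (pop 1)
    a2, b2 = np.polyfit(t, log_m2, 1)[::-1]  # intercept, own slope (pop 2)
    b = 0.5 * (b1 + b2)                      # common long-run drift
    future = t[-1] + np.arange(1, horizon + 1)
    return a1 + b * future, a2 + b * future
```

Because both projections share the drift `b`, the forecast differential is constant by construction, which is the qualitative behaviour coherence demands.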

3.
This paper discusses efficient estimation for the additive hazards regression model when only bivariate current status data are available. Current status data occur in many fields, including demographic studies and tumorigenicity experiments (Keiding, 1991; Sun, 2006), and several approaches have been proposed for the additive hazards model with univariate current status data (Lin et al., 1998; Martinussen and Scheike, 2002). For bivariate data, in addition to facing the same problems as with univariate data, one needs to deal with the association or correlation between the two related failure time variables of interest. For this, we employ a copula model, and an efficient estimation procedure is developed for inference. Simulation studies are performed to evaluate the proposed estimates and suggest that the approach works well in practical situations. An illustrative example is provided.
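One common copula choice for modelling the association between two failure times is the Clayton family; a minimal sketch of its joint survival function follows. The abstract does not say which copula the authors use, so Clayton here is an assumption for illustration:

```python
def clayton_joint_survival(s1, s2, theta):
    """Joint survival probability under a Clayton copula with dependence
    parameter theta > 0, applied to marginal survival probabilities s1, s2.
    theta -> 0 approaches independence; larger theta means stronger
    positive association between the two failure times."""
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)
```

For s1 = s2 = 0.5 and theta = 2 this gives 7^(-1/2) ≈ 0.378, between the independence value 0.25 and the perfect-dependence bound 0.5.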

4.
A hierarchical model is developed for the joint mortality analysis of pension scheme datasets. The proposed model allows for a rigorous statistical treatment of missing data. While our approach works for any missing-data pattern, we are particularly interested in the scenario where some covariates are observed for members of one pension scheme but not the other; our approach therefore allows for the joint modelling of datasets which contain different information about individual lives. The proposed model generalizes the specification of parametric models when accounting for covariates. We account for parameter uncertainty using Bayesian techniques, analyse the model parametrization to obtain an efficient MCMC sampler, and address model selection. The inferential framework also proves useful for analysing statistical relationships among covariates. Finally, we assess the financial impact of using the covariates, and of making optimal use of the whole available sample when combining data from different mortality experiences.

5.
Lee et al. (2011) and Chen and Liang (2011) develop a data envelopment analysis (DEA) model to address the infeasibility issue in super-efficiency models. In this paper, we point out that their model is feasible when input data are positive but can be infeasible when some inputs are zero. We modify their model so that the new super-efficiency DEA model is always feasible when data are non-negative. Note that zero data can make the super-efficiency model under constant returns to scale (CRS) infeasible. Our discussion is based upon variable returns to scale (VRS) and can be applied to CRS super-efficiency models.
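For context, the input-oriented VRS super-efficiency program can be sketched as a small linear program. This is the standard textbook formulation, not the authors' modified model; it returns None in exactly the infeasible case the paper is concerned with:

```python
import numpy as np
from scipy.optimize import linprog

def vrs_super_efficiency(X, Y, o):
    """Input-oriented VRS super-efficiency of DMU o:
        min theta  s.t.  sum_{j!=o} lam_j x_j <= theta * x_o,
                         sum_{j!=o} lam_j y_j >= y_o,
                         sum lam_j = 1,  lam >= 0.
    X is (n DMUs x m inputs), Y is (n x s outputs).
    Returns None when the LP is infeasible."""
    n, m = X.shape
    s = Y.shape[1]
    peers = [j for j in range(n) if j != o]
    c = np.r_[1.0, np.zeros(len(peers))]              # minimize theta
    A_in = np.c_[-X[o][:, None], X[peers].T]          # inputs:  lam*x - theta*x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y[peers].T]      # outputs: -lam*y <= -y_o
    A_eq = np.r_[0.0, np.ones(len(peers))][None, :]   # VRS convexity: sum lam = 1
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[np.zeros(m), -Y[o]],
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * len(peers),
                  method="highs")
    return res.fun if res.success else None
```

Efficient units score above 1 (they can inflate inputs and stay on the peer frontier); inefficient units score below 1.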

6.
It is very common in AIDS studies that the response variable (e.g., HIV viral load) is subject to censoring due to detection limits, while covariates (e.g., CD4 cell count) may be measured with error. Failure to take the censoring of the response and the measurement errors in covariates into account may introduce substantial bias in estimation and thus lead to unreliable inference. Moreover, with non-normal and/or heteroskedastic data, traditional mean regression models are not robust to heavy tails. In this case, one may find it attractive to estimate the effect of covariates on the tails of the response distribution, which is naturally studied in the quantile regression framework. In this paper, we consider joint inference for a mixed-effects quantile regression model with right-censored responses and errors in covariates. The inverse censoring probability weighted method and the orthogonal regression method are combined to reduce the biases caused by censoring and measurement error. Under some regularity conditions, the consistency and asymptotic normality of the estimators are derived. Finally, simulation studies are presented and an HIV/AIDS clinical dataset is analyzed to illustrate the proposed procedure.
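The inverse-censoring-probability weighting step can be sketched as follows: estimate the censoring survival function G by Kaplan–Meier (treating censorings as the "events") and weight each uncensored observation by 1/G(t−). This is a generic sketch assuming no tied times, not the authors' exact estimator:

```python
import numpy as np

def censoring_km(times, delta):
    """Kaplan-Meier estimate of the censoring survival function G(t),
    treating censorings (delta == 0) as the 'events'. Assumes no ties."""
    order = np.argsort(times)
    t, d = times[order], delta[order]
    at_risk = len(t) - np.arange(len(t))           # risk-set size at each ordered time
    factors = np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0)
    return t, np.cumprod(factors)                  # G evaluated at each ordered time

def ipcw_weights(times, delta):
    """Inverse-probability-of-censoring weights delta_i / G(t_i-)."""
    t_sorted, G = censoring_km(times, delta)
    w = np.empty(len(times))
    for i, (ti, di) in enumerate(zip(times, delta)):
        earlier = t_sorted < ti                    # evaluate G just before t_i
        G_minus = G[earlier][-1] if earlier.any() else 1.0
        w[i] = di / G_minus
    return w
```

Censored observations get weight zero; later uncensored observations are up-weighted to compensate for the mass lost to censoring before them.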

7.
In this paper, the traditional inventory lot-size model is extended to allow not only for a general partial backlogging rate but also for inflation. The assumptions of equal cycle length and constant shortage length imposed in the model developed by Moon et al. [Moon, I., Giri, B.C., Ko, B., 2005. Economic order quantity models for ameliorating/deteriorating items under inflation and time discounting, European Journal of Operational Research 162(3), 773–785] are also relaxed. For any given number of replenishment cycles, the existence of a unique optimal replenishment schedule is proved, and the convexity of the total cost function of the inventory system in the number of replenishments is established. The theoretical results here amend those in Yang et al. [Yang, H.L., Teng, J.T., Chern, M.S., 2001. Deterministic inventory lot-size models under inflation with shortages and deterioration for fluctuating demand, Naval Research Logistics 48(2), 144–158] and provide the solution to the two counterexamples given by Skouri and Papachristos [Skouri, K., Papachristos, S., 2002. Note on "Deterministic inventory lot-size models under inflation with shortages and deterioration for fluctuating demand" by Yang et al. Naval Research Logistics 49(5), 527–529]. Finally, we propose an algorithm to find the solution and obtain some managerial insights from sensitivity analyses.

8.
Registrations in epidemiological studies suffer from incompleteness, so the general consensus is to use capture–recapture models. Log-linear models are typically used when the registrations measure the same population and the covariates are measured by all registrations. This article shows how data can be analysed when some covariates are unobserved in some registrations and the registrations do not all measure the whole population.

9.
In this paper, we study a suitable notion of solution for which a nonlinear elliptic problem governed by a general Leray-Lions operator is well posed for any diffuse measure data. In the terminology of Brezis et al. (2007) [10], we study the notion of solution for which every diffuse measure is a "good measure".

10.
Relative-risk models are often used to characterize the relationship between survival time and time-dependent covariates. When the covariates are fully observed, estimation and asymptotic theory for the parameters of interest are available; challenges remain when data are missing. A popular approach is to jointly model the survival data and the longitudinal data. This is efficient, in that it makes use of more information, but rigorous theoretical study of it has long been lacking. For both additive risk models and relative-risk models, we allow the missing data to be nonignorable. Under general regularity conditions, we prove asymptotic normality of the nonparametric maximum likelihood estimators.

11.
In this paper, we study the estimation and variable selection of the sufficient dimension reduction space for survival data via a new combination of the $L_1$ penalty and the refined outer product of gradients method (rOPG; Xia et al. in J R Stat Soc Ser B 64:363–410, 2002), called SH-OPG hereafter. SH-OPG can exhaustively estimate the central subspace and select the informative covariates simultaneously; meanwhile, the estimated directions remain orthogonal automatically after dropping noninformative regressors. The efficiency of SH-OPG is verified through extensive simulation studies and real data analysis.

12.
In original data envelopment analysis (DEA) models, inputs and outputs are measured by exact values on a ratio scale. Cooper et al. [Management Science, 45 (1999) 597–607] recently addressed the problem of imprecise data in DEA, in its general form. We develop in this paper an alternative approach for dealing with imprecise data in DEA. Our approach is to transform a non-linear DEA model to a linear programming equivalent, on the basis of the original data set, by applying transformations only on the variables. Upper and lower bounds for the efficiency scores of the units are then defined as natural outcomes of our formulations. It is our specific formulation that enables us to proceed further in discriminating among the efficient units by means of a post-DEA model and the endurance indices. We then proceed still further in formulating another post-DEA model for determining input thresholds that turn an inefficient unit to an efficient one.

13.
Missing covariate data are very common in regression analysis. In this paper, the weighted estimating equation method (Qi et al., 2005) [25] is used to extend the so-called unified estimation procedure (Chen et al., 2002) [4] for linear transformation models to the case of missing covariates. The non-missingness probability is estimated nonparametrically by a kernel smoothing technique. Under the missing-at-random assumption, the proposed estimators are shown to be consistent and asymptotically normal, with the asymptotic variance estimated consistently by the usual plug-in method. Moreover, the proposed estimators are more efficient than the weighted estimators that use the inverse of the true non-missingness probability as the weight. Finite sample performance of the estimators is examined via simulation, and a real dataset is analyzed to illustrate the proposed methods.
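The kernel-smoothed non-missingness probability can be sketched with a Nadaraya–Watson estimator. A Gaussian kernel is assumed here for illustration; the abstract does not fix a particular kernel:

```python
import numpy as np

def nw_nonmissing_prob(z, r, z0, h):
    """Nadaraya-Watson (Gaussian-kernel) estimate of the non-missingness
    probability pi(z0) = P(R = 1 | Z = z0), where r[i] = 1 if subject i's
    covariate is observed and 0 if it is missing; h is the bandwidth."""
    k = np.exp(-0.5 * ((np.asarray(z) - z0) / h) ** 2)
    return float((k * np.asarray(r)).sum() / k.sum())
```

The fitted probability is then inverted to weight the estimating equations, with the smoothing absorbing the estimation error in the weights.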

14.
The strain gradient theory of Zhou et al. is re-expressed in a more direct form, and its differences with other strain gradient theories are investigated through an application to the static and dynamic analysis of an FGM circular micro-plate. To facilitate the modeling, the strain gradient theory of Zhou et al. is re-expressed in cylindrical coordinates, and the governing equation, boundary conditions and initial condition for the circular plate are derived with the help of Hamilton's principle. The present model can degenerate into other models based on the strain gradient theory of Lam et al., the couple stress theory, the modified couple stress theory, or even the classical theory. The static bending and free vibration problems of a simply supported circular plate are solved. The results indicate that the consideration of strain gradients increases plate stiffness, leading to reduced deflection and increased natural frequency. Compared with the reduced models, the present model predicts a stronger size effect since the contribution of all strain gradient components is considered, and the differences among these models diminish when the plate thickness is far greater than the material length-scale parameter.

15.
This paper examines the two approaches presently available in the data envelopment analysis (DEA) literature for identifying and analyzing congestion. The first approach, due to Färe et al. (Färe, R., Grosskopf, S., Lovell, C.A.K., 1985. The Measurement of Efficiency of Production. Kluwer-Nijhoff, Boston), has been extensively employed to identify congestion in areas such as the provision of health services. The second approach, due to Cooper et al. (Cooper, W.W., Thompson, R.G., Thrall, R.M., 1996. Annals of Operations Research 66, 3–45), was subsequently extended by Brockett et al. (Brockett, P.L., Cooper, W.W., Shin, H.C., Wang, Y., 1998. Socio-Economic Planning Sciences 32, 1–20) to treat tradeoff possibilities between employment and output in Chinese production when congestion is present. In contrast with Cooper et al., examples are provided in this paper which show that the Färe et al. approach can fail in both directions: it can indicate congestion when this is not consistent with the observed behavior, and it can fail to exhibit congestion when the data show it to be present. Suggestions are offered both for (i) taking precautions before using the FGL approach and (ii) overcoming these shortcomings.

16.
In this paper, we introduce a new family of probability distributions called the tabaistic family of distributions. Members of this family can have either unimodal or bimodal probability density functions, so the family can be used when the data come from a skewed or bimodal distribution. A major application of the unimodal member of this family is the analysis of binary or polytomous response data when covariates are present. Logistic regression (Hosmer, D.W., Lemeshow, S.: Applied Logistic Regression. Wiley, New York (2000)) and probit analysis (Finney, D.J.: Probit Analysis. Cambridge University Press, Cambridge (1971)) are widely used when the distribution is symmetric; when it is asymmetric, tabaistic regression is a better choice. We apply tabaistic regression to the space shuttle Challenger O-ring data and compare the results with the logistic regression and probit analysis models.
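A logistic fit of the kind applied to the Challenger O-ring data can be sketched with plain Newton–Raphson (IRLS). The toy temperatures and failure indicators below are illustrative stand-ins, not the actual flight record:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS).
    X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))              # fitted probabilities
        W = p * (1.0 - p)                                # IRLS weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X),  # Newton step
                                X.T @ (y - p))
    return beta

# Illustrative (not real) launch data: temperature (F) and O-ring failure.
temp = np.array([53, 57, 63, 66, 67, 70, 70, 75, 76, 79, 81], dtype=float)
fail = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0], dtype=float)
X = np.column_stack([np.ones_like(temp), temp - 70.0])   # centre for conditioning
beta = fit_logistic(X, fail)
```

The negative slope on temperature reproduces the well-known qualitative finding: failure probability rises as launch temperature drops.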

17.
Input and output data under uncertainty must be taken into account as an essential part of data envelopment analysis (DEA) models in practice. Many researchers have dealt with this kind of problem using fuzzy approaches, DEA models with interval data, or probabilistic models. This paper presents an approach to scenario-based robust optimization for conventional DEA models. To capture the uncertainty, different scenarios are formulated with specified probabilities for the input and output data instead of point estimates. The proposed robust DEA model ranks decision-making units (DMUs) based on their sensitivity analysis within the given set of scenarios, considering both feasibility and optimality factors in the objective function. The model is based on the technique proposed by Mulvey et al. (1995) for solving stochastic optimization problems. The effect of DMUs on the production possibility set is calculated using the Monte Carlo method in order to extract weights for the feasibility and optimality factors in the goal programming model. The approach is illustrated and verified by a case study of an engineering company.

18.
This paper considers clustered doubly-censored data that occur when there exist several correlated survival times of interest and only doubly censored data are available for each survival time. In this situation, one approach is to model the marginal distribution of failure times using semiparametric linear transformation models while leaving the dependence structure completely arbitrary. We demonstrate that the approach of Cai et al. (Biometrika 87:867–878, 2000) can be extended to clustered doubly censored data. We propose two estimators by using two different estimated censoring weights. A simulation study is conducted to investigate the proposed estimators.

19.
Comptes Rendus Mathematique, 2008, 346(11–12): 697–702
We study the problem of an elastic inclusion with high rigidity in a 3D domain. First we consider an inclusion with a plate-like geometry and then, in the more general framework of curvilinear coordinates, an inclusion with a shell-like geometry. We compare our formal models to those obtained by Chapelle–Ferent and by Bessoud et al. To cite this article: A.-L. Bessoud et al., C. R. Acad. Sci. Paris, Ser. I 346 (2008).

20.
In longitudinal studies with small samples and incomplete data, multivariate normal-based models continue to be a powerful tool for analysis, across a broad range of biomedical studies. Testing the assumption of multivariate normality (MVN) is critical. Although many methods are available for testing normality in complete data with large samples, few deal with testing in small samples. For example, Liang et al. (J. Statist. Planning and Inference 86 (2000) 129) propose a projection procedure for testing MVN in complete data with small samples, where the sample size may be close to the dimension. To our knowledge, no statistical methods for testing MVN in incomplete data with small samples are yet available. This article develops a test procedure in such a setting using multiple imputation and the projection test. To exploit the incomplete-data structure in multiple imputation, we adopt a noniterative inverse Bayes formulae (IBF) sampling procedure instead of iterative Gibbs sampling to generate i.i.d. samples. Simulations are performed for both complete and incomplete data when the sample size is less than the dimension. The method is illustrated with a real study of an anticancer drug.
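The projection test of Liang et al. is not reproduced here; as a simpler illustration of checking MVN via Mahalanobis-type quantities, Mardia's classical skewness statistic can be sketched. Note that, unlike the projection test, it needs the sample size comfortably larger than the dimension so the sample covariance is invertible:

```python
import numpy as np

def mardia_skewness(X):
    """Mardia's multivariate skewness statistic for an (n x p) sample.
    Under MVN it is approximately chi-square with p(p+1)(p+2)/6 df."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(Xc.T @ Xc / n)   # inverse sample covariance (MLE form)
    D = Xc @ S_inv @ Xc.T                  # pairwise Mahalanobis inner products
    b1 = (D ** 3).sum() / n ** 2           # skewness measure, always >= 0
    return n * b1 / 6.0, p * (p + 1) * (p + 2) // 6
```

A large statistic relative to the chi-square reference distribution signals departure from multivariate normality.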
