Similar documents
20 similar documents found (search time: 921 ms)
1.
Nonparametric conditional efficiency measures: asymptotic properties (cited by: 2; self-citations: 0; citations by others: 2)
Cazals et al. (J. Econom. 106: 1–25, 2002) and Daraio and Simar (J. Prod. Anal. 24: 93–121, 2005; Advanced Robust and Nonparametric Methods in Efficiency Analysis, 2007a; J. Prod. Anal. 28: 13–32, 2007b) developed a conditional frontier model which incorporates environmental factors into the measurement of the efficiency of a production process in a fully nonparametric setup. They also provided the corresponding nonparametric efficiency measures: the conditional FDH and conditional DEA estimators. The two estimators have been applied in the literature without any theoretical backing for their statistical properties. The aim of this paper is to provide an asymptotic analysis (i.e., asymptotic consistency and limiting sampling distribution) of the conditional FDH and conditional DEA estimators.
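As a concrete illustration of the unconditional building block behind these measures, the sketch below computes an input-oriented FDH efficiency score by enumeration. The function name and interface are illustrative only; the conditional estimators of Daraio and Simar additionally restrict (or kernel-weight) the reference sample by the environmental variable, which is not shown here.

```python
import numpy as np

def fdh_input_efficiency(x0, y0, X, Y):
    """Input-oriented FDH efficiency score of the unit (x0, y0).

    X: (n, p) inputs and Y: (n, q) outputs of the reference sample.
    A score of 1 means no observed unit dominates (x0, y0); scores
    below 1 indicate by how much inputs could be scaled down.
    """
    x0, y0 = np.asarray(x0, float), np.asarray(y0, float)
    dominating = np.all(Y >= y0, axis=1)   # units producing at least y0
    if not dominating.any():
        return np.inf                      # no comparable unit observed
    ratios = X[dominating] / x0            # componentwise input ratios
    # min over dominating units of the max input ratio
    return float(np.min(np.max(ratios, axis=1)))
```

For example, with single-input units (x, y) = (1, 1), (2, 2), (4, 3), the unit (4, 1) gets score 0.25 (unit (1, 1) produces the same output with a quarter of the input), while (1, 1) itself scores 1.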

2.
The stratified proportional intensity model generalizes Cox’s proportional intensity model by allowing different groups of the population under study to have distinct baseline intensity functions. In this article, we consider the problem of estimation in this model when the variable indicating the stratum is unobserved for some individuals in the studied sample. In this setting, we construct nonparametric maximum likelihood estimators for the parameters of the stratified model and we establish their consistency and asymptotic normality. Consistent estimators for the limiting variances are also obtained.

3.
This article concerns the statistical inference for the upper tail of the conditional distribution of a response variable Y given a covariate X = x based on n random vectors within the parametric extreme value framework. Pioneering work in this field was done by Smith (Stat Sci 4:367–393, 1989) and Smith and Shively (Atmos Environ 29:3489–3499, 1995). We propose to base the inference on a conditional distribution of the point process of exceedances given the point process of covariates. It is of importance that the conditional distribution merely depends on the conditional distribution of the response variable given the covariates. In the special case of Poisson processes such a result may be found in Reiss (1993). Our results are valid within the broader model where the response variables are conditionally independent given the covariates. It is numerically exemplified that the maximum likelihood principle leads to more accurate estimators within the conditional approach than in the previous one.

4.
The linear regression model is commonly used by practitioners to model the relationship between the variable of interest and a set of explanatory variables. The assumption that all error variances are the same, known as homoskedasticity, is oftentimes violated when cross-sectional data are used. Consistent standard errors for the ordinary least squares estimators of the regression parameters can be computed following the approach proposed by White (Econometrica 48:817–838, 1980). Such standard errors, however, are considerably biased in samples of typical sizes. An improved covariance matrix estimator was proposed by Qian and Wang (J Stat Comput Simul 70:161–174, 2001). In this paper, we improve upon the Qian–Wang estimator by defining a sequence of bias-adjusted estimators with increasing accuracy. The numerical results show that the Qian–Wang estimator is typically much less biased than the estimator proposed by White and that our correction to the former can be quite effective in small samples. Finally, we show that the Qian–Wang estimator can be generalized into a broad class of heteroskedasticity-consistent covariance matrix estimators, and our results can be easily extended to such a class of estimators.
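White's estimator mentioned above has the well-known closed "sandwich" form V = (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}, where e_i are the OLS residuals. A minimal sketch of this baseline (HC0) estimator follows; the Qian–Wang and bias-adjusted refinements discussed in the paper modify the diag(e_i^2) "meat" and are not shown.

```python
import numpy as np

def hc0_standard_errors(X, y):
    """OLS coefficients and White (1980) heteroskedasticity-consistent
    (HC0) standard errors.  X: (n, k) design matrix (include a column
    of ones for an intercept), y: (n,) response vector."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (resid[:, None] ** 2 * X)   # X' diag(e_i^2) X
    cov = XtX_inv @ meat @ XtX_inv           # sandwich covariance
    return beta, np.sqrt(np.diag(cov))
```

On data with an exact linear fit the residuals vanish, so the HC0 standard errors are zero; in practice one would also consider the finite-sample corrections (HC1–HC3 and beyond) that motivate this line of work.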

5.
This paper introduces the “piggyback bootstrap.” Like the weighted bootstrap, this procedure can be used to generate random draws that approximate the joint sampling distribution of the parametric and nonparametric maximum likelihood estimators in various semiparametric models, but the dimension of the maximization problem for each bootstrapped likelihood is smaller. This reduction results in significant computational savings in comparison to the weighted bootstrap. The procedure can be stated quite simply: first obtain a valid random draw for the parametric component of the model; then take the draw for the nonparametric component to be the maximizer of the weighted bootstrap likelihood with the parametric component fixed at the parametric draw. We prove the procedure is valid for a class of semiparametric models that includes frailty regression models arising in survival analysis and biased sampling models that have application to vaccine efficacy trials. Bootstrap confidence sets from the piggyback and weighted bootstraps are compared for biased sampling data from simulated vaccine efficacy trials.

6.
P. Kabaila, Acta Appl Math 96(1–3):283–291, 2007
Suppose that Y1 and Y2 are independent and have Binomial(n1, p1) and Binomial(n2, p2) distributions, respectively. Also suppose that θ = p1 − p2 is the parameter of interest. We consider the problem of finding an exact confidence limit (either upper or lower) for θ. The solution to this problem is very important for statistical practice in the health and life sciences. The ‘tail method’ provides a solution: it finds the exact confidence limit by exact inversion of a hypothesis test based on a specified test statistic. Buehler (J. Am. Stat. Assoc. 52, 482–493, 1957) was the first to describe a finite-sample optimality property of this confidence limit; consequently, it is sometimes called a Buehler confidence limit. An early tail method confidence limit for θ was described by Santner and Snell (J. Am. Stat. Assoc. 75, 386–394, 1980), who used the maximum likelihood estimator of θ as the test statistic. This confidence limit is known to be very inefficient (see e.g. Cytel Software, StatXact, version 6, vol. 2, 2004). The efficiency of the confidence limit resulting from the tail method depends greatly on the test statistic on which it is based. We use the results of Kabaila (Stat. Probab. Lett. 52, 145–154, 2001) and Kabaila and Lloyd (Aust. New Zealand J. Stat. 46, 463–469, 2004; J. Stat. Plan. Inference 136, 3145–3155, 2006) to provide a detailed explanation of the dependence of this efficiency on the test statistic. We consider test statistics that are estimators, Z-statistics and approximate upper confidence limits. This explanation is used to identify the situations in which tail method exact confidence limits based on test statistics that are estimators or Z-statistics are least efficient.

7.
The expectation–maximization (EM) algorithm is a very general and popular iterative algorithm for computing maximum likelihood estimates from incomplete data, and it is broadly used in statistical analyses with missing data because of its stability, flexibility and simplicity. However, the EM algorithm is often criticized for its slow convergence, and various algorithms have been proposed to accelerate it. The vector ε algorithm of Wynn (Math Comp 16:301–322, 1962) is used to accelerate the convergence of the EM algorithm in Kuroda and Sakakihara (Comput Stat Data Anal 51:1549–1561, 2006). In this paper, we provide a theoretical evaluation of the convergence of the ε-accelerated EM algorithm. The ε-accelerated EM algorithm does not use the information matrix but only the sequence of estimates obtained from iterations of the EM algorithm, and thus it retains the flexibility and simplicity of the EM algorithm.
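A minimal sketch of the vector ε idea, assuming a single eps_2 extrapolation step built from three consecutive EM iterates using the Samelson inverse v/||v||^2 (function names are illustrative; the actual ε-accelerated EM algorithm applies such steps repeatedly along the EM sequence):

```python
import numpy as np

def samelson_inverse(v):
    """Samelson inverse v / ||v||^2, used by Wynn's vector epsilon algorithm."""
    v = np.asarray(v, float)
    return v / np.dot(v, v)

def epsilon_accelerate(t0, t1, t2):
    """One eps_2 step of the vector epsilon algorithm from three
    consecutive iterates t0, t1, t2 of a fixed-point map (e.g. EM).
    For a linearly convergent sequence this extrapolates toward the limit;
    it needs only the iterates, not the information matrix."""
    t0, t1, t2 = (np.asarray(t, float) for t in (t0, t1, t2))
    d0, d1 = t1 - t0, t2 - t1
    return t1 + samelson_inverse(samelson_inverse(d1) - samelson_inverse(d0))
```

For a geometrically convergent toy map such as θ ↦ 0.5 θ + 1 (limit 2), the extrapolation from the first three iterates 0, 1, 1.5 already recovers the limit exactly, which illustrates why such steps can sharply reduce the number of EM iterations.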

8.
Indirect inference estimators (i.e., simulation-based minimum distance estimators) in a parametric model that are based on auxiliary nonparametric maximum likelihood density estimators are shown to be asymptotically normal. If the parametric model is correctly specified, it is furthermore shown that the asymptotic variance–covariance matrix equals the inverse of the Fisher information matrix. These results are based on uniform-in-parameters convergence rates and a uniform-in-parameters Donsker-type theorem for nonparametric maximum likelihood density estimators.

9.
In this Note we consider a discrete-time hidden semi-Markov model and we prove that the nonparametric maximum likelihood estimators for the characteristics of such a model have nice asymptotic properties, namely consistency and asymptotic normality. To cite this article: V. Barbu, N. Limnios, C. R. Acad. Sci. Paris, Ser. I 342 (2006).

10.
Profile likelihood is a popular method of estimation in the presence of an infinite-dimensional nuisance parameter, as the method reduces the infinite-dimensional estimation problem to a finite-dimensional one. In this paper we investigate the efficiency of a semi-parametric maximum likelihood estimator based on the profile likelihood. By introducing a new parametrization, we improve on the seminal work of Murphy and van der Vaart (J Am Stat Assoc, 95: 449–485, 2000): our improvement establishes the efficiency of the estimator through the direct quadratic expansion of the profile likelihood, which requires fewer assumptions. To illustrate the method an application to two-phase outcome-dependent sampling design is given.

11.
Assessment of heavy-tailed data and their compound sums has many applications in insurance, auditing and operational risk capital assessment, among others. In this paper, we compare the classical estimators (maximum likelihood, QQ and moment estimators) with the recently introduced robust estimators of the “generalized median” and “trimmed mean” types and estimators based on t-score moments. We derive the exact distribution of the likelihood ratio tests of homogeneity and of a simple hypothesis on the tail index of a two-parameter Pareto model. Such exact tests support the assessment of the performance of estimators. In particular, we discuss some problems that one can encounter when misemploying the log-normal-assumption-based methods supported by the Basel II framework. Real data and simulated examples illustrate the methods.
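For reference, the two-parameter Pareto model mentioned here (density α x_m^α / x^{α+1} for x ≥ x_m) has well-known closed-form maximum likelihood estimators, which the robust alternatives are compared against. A minimal sketch (names illustrative):

```python
import numpy as np

def pareto_mle(x):
    """Closed-form MLEs for the two-parameter Pareto(x_m, alpha):
    x_m_hat  = min(x)                      (scale / left endpoint)
    alpha_hat = n / sum(log(x / x_m_hat))  (tail index)"""
    x = np.asarray(x, float)
    xm = x.min()
    alpha = len(x) / np.log(x / xm).sum()
    return xm, alpha
```

The MLE of the tail index is sensitive to the smallest observations through x_m and to contamination in general, which is precisely what motivates the robust generalized-median and trimmed-mean estimators discussed in the abstract.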

12.
The proportional hazards cure model generalizes Cox’s proportional hazards model by allowing a proportion of study subjects never to experience the event of interest. Here a nonparametric maximum likelihood approach is proposed for estimating the cumulative hazard and the regression parameters. The asymptotic properties of the resulting estimators are established using modern empirical process theory, and the estimators of the regression parameters are shown to be semiparametrically efficient.

13.
In this paper we consider discrete-time forward interest rate models. In our approach, unlike in the classical Heath–Jarrow–Morton framework, the forward rate curves are driven by a random field, which yields a general interest rate structure. Our aim is to give an overview of our results in such a model on the following questions: no-arbitrage conditions, maximum likelihood estimation of the volatility, the joint estimation of the parameters and the asymptotic behaviour of the estimators, and the relationship with continuous models. Finally, we discuss the practical problems of the estimation and present several numerical results on the statistics of such models. This research has been supported by the Hungarian Scientific Research Fund under Grants No. OTKA–F046061/2004 and OTKA–T048544/2005.

14.
In the paper we consider a changed segment model for sample distributions. We generalize Dümbgen’s [Ann. Stat. 19(3), 1471–1495, 1991] change point estimator and obtain optimal rates of convergence of estimators of the beginning and the length of the changed segment. This work was supported by cooperation agreement Lille-Vilnius EGIDE Gillibert.

15.
Current status data arise when a continuous response is reduced to an indicator of whether the response is greater or less than a random threshold value. In this article we consider adaptive penalized M-estimators (including penalized least squares estimators and penalized maximum likelihood estimators) for nonparametric and semiparametric models with current status data, under the assumption that the unknown nonparametric parameters belong to unknown Sobolev spaces. The Cox model is used as a representative of the semiparametric models. It is shown that the modified penalized M-estimators of the nonparametric parameters can achieve adaptive convergence rates, even when the degrees of smoothing are not known in advance. Consistency, asymptotic normality and inference based on the weighted bootstrap for the estimators of the regression parameter in the Cox model are also established. A simulation study is conducted for the Cox model to evaluate the finite-sample efficacy of the proposed approach and to compare it with the ordinary maximum likelihood estimator; it is demonstrated that the proposed method is computationally superior. We apply the proposed approach to the California Partner Study analysis.

16.
In this paper we deal with maximum likelihood estimation (MLE) of the parameters of a Pareto mixture. Standard MLE procedures are difficult to apply in this setup because the distributions of the observations do not have common support. We study the properties of the estimators under different hypotheses; in particular, we show that, when all the parameters are unknown, the estimators can be found by maximizing the profile likelihood function. We then turn to the computational aspects of the problem and develop three alternative procedures: an EM-type algorithm, a simulated annealing algorithm and an algorithm based on cross-entropy minimization. The work is motivated by an application in the operational risk measurement field: we fit a Pareto mixture to operational losses recorded by a bank in two different business lines. Under the assumption that each population follows a Pareto distribution, the appropriate model is a mixture of Pareto distributions in which all the parameters have to be estimated.

17.
Functional data analysis, as proposed by Ramsay (Psychometrika 47:379–396, 1982), has recently attracted many researchers. The most popular approach taken in recent studies of functional data has been the extension of statistical methods for the analysis of usual data to functional data (e.g., Ramsay and Silverman in Functional Data Analysis. Springer, Berlin Heidelberg New York, 1997, and Applied Functional Data Analysis: Methods and Case Studies. Springer, Berlin Heidelberg New York, 2002; Mizuta in Proceedings of the Tenth Japan and Korea Joint Conference of Statistics, pp 77–82, 2000; Shimokawa et al. in Japan J Appl Stat 29:27–39, 2000). In addition, several methods for clustering functional data have been proposed (Abraham et al. in Scand J Stat 30:581–595, 2003; Gareth and Catherine in J Am Stat Assoc 98:397–408, 2003; Tarpey and Kinateder in J Classif 20:93–114, 2003; Rossi et al. in Proceedings of the European Symposium on Artificial Neural Networks, pp 305–312, 2004). Furthermore, Tokushige et al. (J Jpn Soc Comput Stat 15:319–326, 2002) defined several dissimilarities between functions for the case of functional data. In this paper, we extend existing crisp and fuzzy k-means clustering algorithms to the analysis of multivariate functional data. In particular, we consider the dissimilarity between functions to be itself a function. Furthermore, cluster centers and memberships, which are defined as functions, are determined at the minimum of a certain target function by using a calculus-of-variations approach.
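A minimal crisp k-means sketch for discretized functional data, using a trapezoidal-rule approximation of the L2 dissimilarity between curves sampled on a common grid. This is a plain illustration with deterministic initialization (assumed for reproducibility), not the fuzzy or calculus-of-variations machinery of the paper:

```python
import numpy as np

def l2_dissimilarity(f, g, t):
    """Approximate integral of (f - g)^2 over the grid t (trapezoidal rule)."""
    w = (f - g) ** 2
    return float(np.sum((w[:-1] + w[1:]) * np.diff(t)) / 2.0)

def functional_kmeans(curves, t, k, n_iter=20):
    """Crisp k-means for discretized functional data.
    curves: (n, m) array, each row a function sampled on the grid t.
    Initializes centers with the first k curves (deterministic)."""
    centers = curves[:k].copy()
    for _ in range(n_iter):
        d = np.array([[l2_dissimilarity(c, mu, t) for mu in centers]
                      for c in curves])
        labels = d.argmin(axis=1)                    # assignment step
        for j in range(k):                           # update step
            if np.any(labels == j):
                centers[j] = curves[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated bundles of curves (e.g. shifted sine waves) this recovers the grouping; the fuzzy extension would replace the hard `argmin` assignment with continuous memberships.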

18.
We formulate a sufficient condition for the inadmissibility of unbiased estimators relative to the quadratic loss. We show the inadmissibility of maximum likelihood estimators for some parametric families. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 40–44, Perm, 1990.

19.
Cox’s regression model is one of the most popular tools used in survival analysis. Recently, Qin and Jing (Commun Stat Simul Comput 30:79–90, 2001) applied empirical likelihood to study it under the assumption that the baseline hazard function is known. However, in Cox’s regression model the baseline hazard function is unspecified, so their method suffers from a severe defect. In this paper, we apply a variant of plug-in empirical likelihood by estimating the cumulative baseline hazard function. Adjusted empirical likelihood (AEL) confidence regions for the vector of regression parameters are obtained. Furthermore, we conduct a simulation study to evaluate the performance of the proposed AEL method by comparing it with the normal approximation (NA) based method. The simulation studies show that both methods produce comparable coverage probabilities, and the proposed AEL method outperforms the NA method in terms of power.

20.
In this paper, we consider the problem of making inferences on the common mean of several normal populations when the sample sizes and population variances are possibly unequal. We are mainly concerned with testing hypotheses about, and constructing confidence intervals for, the common normal mean. Several researchers have considered this problem, and many methods have been proposed based on asymptotic or approximate results, generalized inferences, and exact pivotal methods. In addition, Chang and Pal (Comput Stat Data Anal 53:321–333, 2008) proposed a parametric bootstrap (PB) approach for this problem based on the maximum likelihood estimators. We also propose a PB approach for making inferences on the common normal mean under heteroscedasticity. The advantages of our method are: (i) it is much simpler than the PB test proposed by Chang and Pal (2008), since our test statistic is not based on the maximum likelihood estimators, which do not have explicit forms; (ii) inverting the acceptance region of the test yields a genuine confidence interval, in contrast to some exact methods such as Fisher’s method; (iii) it controls the Type I error rate well for small sample sizes and a large number of populations, in contrast to the Chang and Pal (2008) method; and (iv) it has higher power than recommended methods such as Fisher’s exact method.
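The PB idea can be sketched as follows, using the classical Graybill–Deal (inverse-variance-weighted) estimator of the common mean as the plug-in statistic. This choice of statistic, and the function names, are illustrative assumptions; the paper's actual test statistic may differ.

```python
import numpy as np

def graybill_deal(means, variances, ns):
    """Graybill-Deal common-mean estimate: weighted mean of the group
    means with weights n_i / s_i^2 (inverse estimated variances)."""
    w = np.asarray(ns, float) / np.asarray(variances, float)
    return float(np.sum(w * means) / np.sum(w))

def pb_pvalue(means, variances, ns, theta0, B=5000, rng=None):
    """Parametric-bootstrap p-value for H0: common mean = theta0.
    Under H0, regenerate group means and sample variances from the
    fitted normal model and compare a studentized distance of the
    Graybill-Deal estimate from theta0 (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    ns = np.asarray(ns)
    variances = np.asarray(variances, float)

    def stat(m, v):
        w = ns / v
        return (np.sum(w * m) / np.sum(w) - theta0) ** 2 * np.sum(w)

    t_obs = stat(np.asarray(means, float), variances)
    t_boot = np.empty(B)
    for b in range(B):
        m_b = rng.normal(theta0, np.sqrt(variances / ns))      # group means
        v_b = variances * rng.chisquare(ns - 1) / (ns - 1)     # sample variances
        t_boot[b] = stat(m_b, v_b)
    return float(np.mean(t_boot >= t_obs))
```

Inverting this test over a grid of theta0 values would yield the genuine confidence interval referred to in point (ii) above.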


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)