Similar Articles
 20 similar articles found (search time: 109 ms)
1.
Transconcave data envelopment analysis (TDEA) extends standard data envelopment analysis (DEA) in order to account for non-convex production technologies, such as those involving increasing returns-to-scale or diseconomies of scope. TDEA introduces non-convexities by transforming the range and the domain of the production frontier, thus replacing the standard assumption that the production frontier is concave with the more general assumption that the frontier is concave transformable. TDEA gives statistically consistent estimates for all monotonically increasing and concave transformable frontiers. In addition, Monte Carlo simulations suggest that TDEA can substantially improve inefficiency estimation in small samples compared to the standard Banker, Charnes and Cooper model and the free disposal hull (FDH) model.

2.
This study develops a new use of data envelopment analysis for estimating a stochastic frontier cost function that is assumed to have two different error components: a one-sided disturbance (representing technical and allocative inefficiencies) and a two-sided disturbance (representing an observational error). The two error components are handled by data envelopment analysis in combination with goal programming/constrained regression. The approach proposed in this study can avoid several statistical assumptions used in conventional methods for estimating a stochastic frontier function. As an important application, this study uses the estimation technique to obtain an AT&T stochastic frontier cost function. As a result, this study measures the technical and allocative efficiencies of AT&T's production process and reviews its natural monopoly issue. The estimated stochastic frontier cost function is also compared with other cost function models used in previous studies concerning the divestiture of the telephone industry.

3.
The purpose of this paper is to examine the small sample properties of maximum likelihood (ML), corrected ordinary least squares (COLS), and data envelopment analysis (DEA) estimators of the parameters in frontier models in the presence of heteroscedasticity in the two-sided, or measurement, error term. Using Monte Carlo methods, we find that heteroscedasticity in the two-sided error term introduces substantial biases into ML, COLS, and DEA estimators. Although none of the estimators perform well, both ML and COLS are found to be superior to DEA in the presence of heteroscedasticity in the two-sided error.
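The COLS estimator discussed above fits ordinary least squares and then shifts the intercept up by the largest residual so the fitted line envelops all observations. A minimal sketch under simple assumptions (single input, Cobb-Douglas in logs; all names and data are illustrative, not from the study):

```python
import math

def cols_frontier(x, y):
    """Corrected OLS: fit log-log OLS, then shift the intercept up by the
    maximum residual so the estimated frontier envelops every observation."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    alpha = my - beta * mx
    resid = [b - (alpha + beta * a) for a, b in zip(lx, ly)]
    shift = max(resid)                    # COLS intercept correction
    # Technical efficiency: observed output relative to frontier output
    eff = [math.exp(r - shift) for r in resid]
    return alpha + shift, beta, eff

# toy data: output roughly x^0.5 with unit-specific inefficiency
x = [1.0, 2.0, 4.0, 8.0]
y = [0.9, 1.3, 2.0, 2.4]
alpha_star, beta_hat, eff = cols_frontier(x, y)
```

By construction at least one unit lies on the shifted frontier (efficiency 1), and all efficiencies fall in (0, 1].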

4.
This study presents some quantitative evidence from a number of simulation experiments on the accuracy of the productivity growth estimates derived from growth accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches) and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional form misspecification, but their accuracy greatly diminishes otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error; when measurement error becomes larger, the accuracy of all approaches (including stochastic approaches) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.

5.
In this paper, a nonparametric multivariate regression model with long memory covariates and long memory errors is considered. We approximate the nonparametric multivariate regression function by the weighted additive one-dimensional functions. The local linear smoothing and least squares method are proposed for the one-dimensional regression estimation and the weight parameters estimation, respectively. The asymptotic behaviors of the proposed estimators are investigated.

6.
In a recent article, Briec, Kerstens and Vanden Eeckaut (2004) develop a series of nonparametric, deterministic non-convex technologies integrating traditional returns to scale assumptions into the non-convex FDH model. They show, among other things, how the traditional technical input efficiency measure can be analytically derived for these technology specifications. In this paper, we develop a similar approach to calculate output and graph measures of technical efficiency and indicate the general advantage of such a solution strategy via enumeration. Furthermore, several analytical formulas are established and some algorithms are proposed relating the three measurement orientations to one another.
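The enumeration idea referred to above is easiest to see in the input orientation, where the FDH efficiency of a unit is the best input-scaling factor over the units that dominate it in outputs. A minimal sketch (data and names illustrative, not the paper's algorithms):

```python
def fdh_input_efficiency(k, X, Y):
    """Input-oriented FDH efficiency of unit k by enumeration:
    theta_k = min over units j with Y[j] >= Y[k] componentwise of
              max_i X[j][i] / X[k][i]."""
    candidates = []
    for j in range(len(X)):
        # j must produce at least as much of every output as k
        if all(yj >= yk for yj, yk in zip(Y[j], Y[k])):
            candidates.append(max(xj / xk for xj, xk in zip(X[j], X[k])))
    return min(candidates)  # k itself always qualifies, so never empty

X = [[2.0, 4.0], [4.0, 2.0], [5.0, 5.0]]  # two inputs per unit
Y = [[1.0], [1.0], [1.0]]                 # one output per unit
effs = [fdh_input_efficiency(k, X, Y) for k in range(3)]
# the third unit is dominated and scores below 1
```

No linear program is solved: a single pass over the observations suffices, which is the general advantage of the enumeration strategy for non-convex technologies.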

7.
This paper combines the use of (binary) logistic regression and stochastic frontier analysis to assess the operational effectiveness of the UK Coastguard (Maritime Rescue) coordination centres over the period 1995–1998. In particular, the rationale for the Government's decision—confirmed in 1999—to close a number of coordination centres is scrutinized. We conclude that the regression models developed in this paper represent a performance measurement framework that is considerably more realistic and complex than the one apparently used by the UK Government. Furthermore, we have found that the coordination centres selected for closure were not necessarily the ones that were least effective in their primary purpose—that is, to save lives. In a related paper, we demonstrate how the regression models developed here can be used to inform the application of data envelopment analysis to this case.

8.
We investigate the basic monotonicity properties of least-distance (in)efficiency measures on the class of non-convex FDH (free disposable hull) technologies. We show that any known FDH least-distance measure violates strong monotonicity over the strongly (Pareto-Koopmans) efficient frontier. Taking this result into account, we develop a new class of FDH least-distance measures that satisfy strong monotonicity and show that the developed (in)efficiency measurement framework has a natural profit interpretation.

9.
In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered as an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.
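Quantile regression replaces squared error with the asymmetric "check" loss ρ_τ(u) = u(τ − 1{u<0}); minimizing it over a constant recovers an empirical τ-quantile, which is why it characterizes the whole conditional distribution rather than just the mean. A minimal illustration of that property (pure Python; data and names illustrative, not from the study):

```python
def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss."""
    return u * (tau - (1 if u < 0 else 0))

def quantile_fit(data, tau):
    """Minimize total check loss over constants; restricting candidates to
    the data points, the minimizer is an empirical tau-quantile."""
    return min(data, key=lambda c: sum(check_loss(y - c, tau) for y in data))

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # heavy right tail
med = quantile_fit(data, 0.5)        # robust to the outlier
q90 = quantile_fit(data, 0.9)        # tracks the upper tail
```

The median ignores the outlier while the 0.9 quantile moves toward it, which is the sense in which quantile regression gives a more complete picture than least squares.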

10.
Performance-based budgeting has received increasing attention from public and for-profit organizations in an effort to achieve a fair and balanced allocation of funds among their individual producers or operating units for overall system optimization. Although existing frontier estimation models can be used to measure and rank the performance of each producer, few studies have addressed how the mismeasurement by frontier estimation models affects the budget allocation and system performance. There is therefore a need for analysis of the accuracy of performance assessments in performance-based budgeting. This paper reports the results of a Monte Carlo analysis in which measurement errors are introduced and the system throughput in various experimental scenarios is compared. Each scenario assumes a different multi-period budgeting strategy and production frontier estimation model; the frontier estimation models considered are stochastic frontier analysis (SFA) and data envelopment analysis (DEA). The main results are as follows: (1) the selection of a proper budgeting strategy and benchmark model can lead to substantial improvement in the system throughput; (2) a “peanut butter” strategy outperforms a discriminative strategy in the presence of relatively high measurement errors, but a discriminative strategy is preferred for small measurement errors; (3) frontier estimation models outperform models with randomly-generated ranks even in cases with relatively high measurement errors; (4) SFA outperforms DEA for small measurement errors, but DEA becomes increasingly favorable relative to SFA as the measurement errors increase.

11.
Two-stage data envelopment analysis (2-DEA) is commonly used in productive efficiency analysis to estimate the effects of operational conditions and practices on performance. In this method the DEA efficiency estimates are regressed on contextual variables representing the operational conditions. We re-examine the statistical properties of the 2-DEA estimator, and find that it is statistically consistent under more general conditions than earlier studies assume. We further show that the finite sample bias of DEA in the first stage carries over to the second stage regression, causing bias in the estimated coefficients of the contextual variables. This bias is particularly severe when the contextual variables are correlated with inputs. To address this shortcoming, we apply the result that DEA can be formulated as a constrained special case of the convex nonparametric least squares (CNLS) regression. Applying the CNLS formulation, we develop a new semi-nonparametric one-stage estimator for the coefficients of the contextual variables that directly incorporates contextual variables into the standard DEA problem. The proposed method is hence referred to as one-stage DEA (1-DEA). Evidence from Monte Carlo simulations suggests that the new 1-DEA estimator performs systematically better than the conventional 2-DEA estimator both in deterministic and noisy scenarios.

12.
In recent decades, longitudinal data have been studied extensively in statistics and widely used in many fields, such as finance, medicine and agriculture. The defining characteristic of longitudinal data is that observations from different subjects are independent while observations within the same subject are correlated. With the development of computing technology, many nonparametric estimation methods have been applied to longitudinal data models. Using the Cholesky decomposition and profile least squares estimation, we propose an effective spline estimation method for nonparametric longitudinal data models with an unknown covariance matrix. Finally, by comparing simulation results for an example, we show that the proposed method is superior to naive spline estimation when the covariance matrix is unknown.

13.
Dimension reduction is helpful and often necessary in exploring the nonparametric regression structure. In this area, sliced inverse regression (SIR) is a promising tool to estimate the central dimension reduction (CDR) space. To estimate the kernel matrix of the SIR, we herein suggest a spline approximation using least squares regression. Heteroscedasticity can be incorporated well by introducing an appropriate weight function. Root-n asymptotic normality can be achieved for a wide range of choices of knots. This is essentially analogous to kernel estimation. Moreover, we also propose a modified Bayes information criterion (BIC) based on the eigenvalues of the SIR matrix. This modified BIC can be applied to any form of the SIR and other related methods. The methodology and some of the practical issues are illustrated through the horse mussel data. Empirical studies demonstrate the performance of the proposed spline approximation by comparison with existing estimators.

14.
Free Disposal Hull (FDH) is one of the tools in the theoretical and empirical work on the measurement of productive efficiency. Because this reference technology excludes linear combinations of extremal observations, many of the observations in an evaluated dataset are labeled efficient by the method. Few researchers have sought to improve the discrimination power of FDH. Van Puyenbroeck modified the standard FDH method by using the Andersen and Petersen super-efficiency approach, referred to as A&P FDH. Jahanshahloo et al. used 0-1 linear programming (LP), referred to as 0-1 LP FDH, to find FDH-efficient units. The purpose of this paper is twofold: to propose MAJ FDH, similar in spirit to the ranking method in data envelopment analysis by Mehrabian et al. [S. Mehrabian, M.R. Alirezaee, G.R. Jahanshahloo, A complete efficiency ranking of decision making units in data envelopment analysis, Computational Optimization and Applications 14 (1999) 261-266], which may thus be used to discriminate between FDH-efficient units; and to examine the tie-breaking ability of A&P FDH, 0-1 LP FDH, and MAJ FDH by using three numerical examples.
Results of the comparisons show: (i) when the numbers of DMUs, inputs and outputs are small and all input and output levels are positive, A&P FDH can provide a full ranking; (ii) when the numbers of DMUs, inputs and outputs are small and some input or output levels are equal to zero, none of the three extended FDH methods can provide a full ranking; and (iii) as the numbers of DMUs, inputs and outputs increase, with all input and output levels positive, ranking by MAJ FDH appears more precise than the other FDH methods.

15.
A large body of empirical research shows that semiparametric autoregressive models fit real data better than traditional linear regression. This paper constructs a class of semiparametric additive autoregressive models, gives an iterative algorithm for estimating the model parameters and the unknown functions based on the conditional least squares and kernel estimation methods, and discusses the asymptotic properties of the estimators. Numerical simulations verify the performance of the estimation. The model is then applied to an empirical analysis of gold price data; the results show that the proposed improvement over existing models is necessary.

16.
17.
In the context of the semi-functional partial linear regression model, we study the problem of error density estimation. The unknown error density is approximated by a mixture of Gaussian densities with means being the individual residuals, and variance a constant parameter. This mixture error density has the form of a kernel density estimator of residuals, where the regression function, consisting of parametric and nonparametric components, is estimated by the ordinary least squares and functional Nadaraya–Watson estimators. The estimation accuracy of the ordinary least squares and functional Nadaraya–Watson estimators jointly depends on the same bandwidth parameter. A Bayesian approach is proposed to simultaneously estimate the bandwidths in the kernel-form error density and in the regression function. Under the kernel-form error density, we derive a kernel likelihood and posterior for the bandwidth parameters. For estimating the regression function and error density, a series of simulation studies shows that the Bayesian approach yields better accuracy than the benchmark functional cross validation. Using a spectroscopy data set, we find that the Bayesian approach gives better point forecast accuracy of the regression function than functional cross validation, and that it is capable of producing prediction intervals nonparametrically.
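The kernel-form error density described above is, concretely, an equal-weight mixture of Gaussians centred at the residuals with a common standard deviation (the bandwidth). A minimal sketch of that construction (residuals and bandwidth are illustrative, not from the study):

```python
import math

def kernel_form_density(e, residuals, h):
    """Equal-weight mixture of Gaussian densities centred at the residuals
    with common standard deviation h -- i.e. a Gaussian kernel density
    estimate of the error evaluated at e."""
    n = len(residuals)
    return sum(
        math.exp(-0.5 * ((e - r) / h) ** 2) / (h * math.sqrt(2 * math.pi))
        for r in residuals
    ) / n

residuals = [-1.2, -0.3, 0.1, 0.4, 1.0]  # residuals from some fitted model
h = 0.5                                  # bandwidth / mixture std deviation

# Riemann check: the mixture is a proper density (mass ~ 1 on a wide grid)
grid = [i * 0.01 for i in range(-500, 501)]
mass = sum(kernel_form_density(e, residuals, h) for e in grid) * 0.01
```

In the paper this same h also enters the Nadaraya–Watson regression estimator, which is what makes a joint (here, Bayesian) choice of bandwidths natural.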

18.
The semilinear in-slide models (SLIMs) have been shown to be effective for normalizing microarray data [J. Fan, P. Tam, G. Vande Woude, Y. Ren, Normalization and analysis of cDNA micro-arrays using within-array replications applied to neuroblastoma cell response to a cytokine, Proceedings of the National Academy of Sciences (2004) 1135-1140]. Using a backfitting method, [J. Fan, H. Peng, T. Huang, Semilinear high-dimensional model for normalization of microarray data: a theoretical analysis and partial consistency, Journal of the American Statistical Association, 471, (2005) 781-798] proposed a profile least squares (PLS) estimation for the parametric and nonparametric components. The general asymptotic properties of their estimator have not been developed. In this paper, we consider a new approach, two-stage estimation, which enables us to establish the asymptotic normalities of both the parametric and nonparametric component estimators. We further propose a plug-in bandwidth selector using the asymptotic normality of the nonparametric component estimator. The proposed method allows for the modeling of the aggregated SLIMs case, where we can explicitly show that taking the aggregated information into account improves both the parametric and nonparametric component estimators under the proposed two-stage approach. Some simulation studies are conducted to illustrate the finite sample performance of the proposed procedures.

19.
In this paper, an estimation theory in the partial linear model is developed when there is measurement error in the response and when validation data are available. A semiparametric method with the primary data is used to define two estimators for both the regression parameter and the nonparametric part using the least squares criterion with the help of validation data. The proposed estimators of the parameter are proved to be strongly consistent and asymptotically normal, and the estimators of the nonparametric part are proved to be strongly consistent and weakly consistent with an optimal convergence rate. Then, the two estimators of the parameter are compared based on their empirical performances. Supported by NNSF of China (No. 10231030, No. 10241001) and a grant to the author for his excellent Ph.D. dissertation work in China.

20.
The first-order nonlinear autoregressive model is considered and a semiparametric method is proposed to estimate the regression function. In the presented model, dependent errors are defined as first-order autoregressive AR(1). The conditional least squares method is used for parametric estimation and the nonparametric kernel approach is applied to estimate the regression adjustment. Some asymptotic behaviors and simulation results for the semiparametric method are presented. Furthermore, the method is applied to financial data from Iran's Tejarat-Bank.
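For the linear AR(1) case used as the parametric building block above, the conditional least squares estimator has a closed form: ρ̂ = Σ y_t y_{t−1} / Σ y_{t−1}². A minimal sketch on simulated data (all names and settings illustrative, not the paper's application):

```python
import random

def cls_ar1(y):
    """Conditional least squares estimate of rho in y_t = rho*y_{t-1} + eps_t:
    minimizes sum_t (y_t - rho*y_{t-1})^2, giving the closed-form ratio."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

random.seed(0)
rho = 0.6
y = [0.0]
for _ in range(5000):                    # simulate a stationary AR(1) path
    y.append(rho * y[-1] + random.gauss(0.0, 1.0))
rho_hat = cls_ar1(y)                     # close to the true rho = 0.6
```

In the semiparametric setting, the kernel step then estimates the remaining nonparametric regression adjustment on top of this parametric fit.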
