Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, we highlight the importance of dealing appropriately with non-controllable inputs in technical efficiency evaluations using DEA. To this end, the two most important options that rely exclusively on DEA methodology for incorporating these variables – the one-stage model by Banker and Morey [Operations Research 34(4) (1986a) 513] and the three-stage method developed by Fried and Lovell [Searching the Zeds, working paper presented at the II Georgia Productivity Workshop, 1996] – are compared both methodologically and empirically. We also propose a modification to the latter model that improves its results and their interpretation. The education sector was selected for the empirical application because it has the desirable feature that, in the productive process, the students' socio-economic and family status (a non-controllable input) directly influences school results. The results obtained show the superiority of the multi-stage approach. It is argued that the model developed by Banker and Morey does not deal appropriately with inefficient units, as producer behaviour in this model does not reflect the objective situation faced by such DMUs.

2.
It is important to consider the decision making unit (DMU)'s or decision maker's preferences over the potential adjustments of the various inputs and outputs when data envelopment analysis (DEA) is employed. Building on the so-called Russell measure, this paper develops weighted non-radial CCR models by specifying a proper set of ‘preference weights’ that reflect the relative desirability of adjusting current input or output levels. These adjustments can be either less than or greater than one; that is, the approach allows certain inputs actually to be increased, or certain outputs actually to be decreased. It is shown that the preference structure prescribes fixed weights (virtual multiplier bounds) or regions that invalidate some virtual multipliers, and hence generates preferred (efficient) input and output targets for each DMU. In addition to providing the preferred target, the approach gives each DMU a scalar efficiency score to secure comparability. It is also shown how specific cases of our approach handle non-controllable factors in DEA and measure allocative and technical efficiency. Finally, the methodology is applied to the industrial performance of 14 open coastal cities and four special economic zones in China in 1991. As applied here, the DEA/preference structure model refines the original DEA model's result and eliminates apparently efficient DMUs.
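As a rough illustration of the kind of model involved (a minimal sketch, not the authors' exact formulation), a Russell-type non-radial input-oriented model under constant returns to scale can be solved as a small linear program, with a `w` vector playing the role of the preference weights; the function name and the tiny data set below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def nonradial_input_efficiency(X, Y, j0, w=None):
    """Weighted non-radial (Russell-type) input efficiency of DMU j0 under CRS.

    X: (m, n) input matrix, Y: (s, n) output matrix for n DMUs.
    w: preference weights over the m input contractions (default: equal).
    Decision variables: [theta_1..theta_m, lambda_1..lambda_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    w = np.ones(m) / m if w is None else np.asarray(w, float) / np.sum(w)
    c = np.concatenate([w, np.zeros(n)])  # minimise the weighted mean of thetas
    # Input constraints: sum_j lambda_j x_ij - theta_i x_i,j0 <= 0
    A_in = np.hstack([-np.diag(X[:, j0]), X])
    # Output constraints: -sum_j lambda_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, m)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=(0, None))
    return res.fun

X = np.array([[2.0, 4.0, 8.0]])  # one input, three DMUs
Y = np.array([[2.0, 2.0, 4.0]])  # one output
```

With these data, the first DMU lies on the CRS frontier (score 1), while the second can contract its input by half. Note that the thetas are only bounded below, consistent with the abstract's remark that adjustments may exceed one.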

3.
Hard data alone are not sufficient to evaluate local police effectiveness in the new age of community policing. Citizens can provide useful feedback on the strengths and weaknesses of police operations. However, citizen satisfaction indicators typically fail to convey the multidimensional nature of local policing accurately or to account for characteristics that are non-controllable for local police departments. In this paper, we construct a measure of the perceived effectiveness of community oriented police forces that accounts for both the multidimensional aspects of local policing and exogenous influences. Specifically, this paper suggests the use of a multivariate conditional, robust order-m version of a non-parametric Data Envelopment Analysis approach with no inputs. We show the potential of the method by constructing and analyzing perceived effectiveness indicators of local police forces in Belgium. The findings suggest that perceived police effectiveness is significantly conditioned by the demographic and socioeconomic environment.

4.
The literature on multiple objective programming contains numerous examples in which goal programming is used to plan a selection of inputs to secure desired outputs that conform ‘as closely as possible’ to a collection of (possibly conflicting) objectives. In this paper the orientation is changed from selection to evaluation, and the dual variables associated with goal programming are brought into play for this purpose. The body of the paper is devoted to an example in portfolio planning, modelled along the lines of Konno and Yamazaki, in which closeness to the risk and return objectives is measured in sums of absolute deviations. An appendix then shows how such a use of dual variables may be applied to evaluate least absolute value (LAV) regressions with respect to their sensitivity to data variations. Simple numerical examples illustrate the potential uses of these dual variable values for evaluation in more complex situations, including determining whether an efficiency frontier has been attained.

5.
Additive efficiency decomposition in two-stage DEA
Kao and Hwang (2008) [Kao, C., Hwang, S.-N., 2008. Efficiency decomposition in two-stage data envelopment analysis: An application to non-life insurance companies in Taiwan. European Journal of Operational Research 185 (1), 418–429] develop a data envelopment analysis (DEA) approach for measuring the efficiency of decision processes that can be divided into two stages. The first stage uses inputs to generate outputs that become the inputs to the second stage; these first-stage outputs are referred to as intermediate measures. The second stage then uses the intermediate measures to produce outputs. Kao and Hwang represent the efficiency of the overall process as the product of the efficiencies of the two stages. A major limitation of this model is its applicability only to constant returns to scale (CRS) situations. The current paper develops an additive efficiency decomposition approach in which the overall efficiency is expressed as a (weighted) sum of the efficiencies of the individual stages. This approach can be applied under both CRS and variable returns to scale (VRS) assumptions. The case of Taiwanese non-life insurance companies is revisited using the newly developed approach.

6.
We present a first methodology for dimension reduction in regressions with predictors that, given the response, follow one-parameter exponential families. Our approach is based on modeling the conditional distribution of the predictors given the response, which allows us to derive and estimate a sufficient reduction of the predictors. We also propose a method for estimating the forward regression mean function without requiring an explicit forward regression model. Whereas nearly all existing estimators of the central subspace are limited to regressions with continuous predictors only, our methodology extends estimation to regressions with all-categorical predictors or a mixture of categorical and continuous predictors. Supplementary materials, including the proofs and the computer code, are available from the JCGS website.

7.
《Optimization》2012,61(5):735-745
In real applications of data envelopment analysis (DEA), there are a number of pitfalls that can have a major influence on the efficiency results. Some of these pitfalls are avoidable, while others remain problematic. One of the most important pitfalls researchers confront is the closeness between the number of operational units and the number of inputs and outputs: in performance measurement using DEA, the closeness of these two numbers can yield a large number of efficient units. In this article, some inputs or outputs are aggregated, and the number of inputs and outputs is reduced iteratively. Numerical examples show that, in comparison with the single DEA method, our approach yields the fewest efficient units, which means it has a superior ability to discriminate among the performance of the DMUs.

8.

We extend the notion of a two-part fractional regression model with conditional free disposal hull efficiency responses to accommodate two-stage regression analysis. The two-part regression model includes the binomial model with a nonlinear specification for the expected response in (0,1] and is a more general formulation in the context of fractional regressions. We use nonlinear least squares to assess the effect of covariates on the conditional efficiency response. The approach is applied to Brazilian agricultural county data, as reported in the Brazilian agricultural census of 2006. The efficiency measure is output oriented and assumes variable returns to scale. Output is rural gross income, and inputs are land expenses, labor expenses and expenses on other technological inputs. The covariates affecting production are credit, technical assistance, a rural development index, income concentration (measured by the Gini index), and regional dummies. Overall, Brazilian rural production performance responds positively to all covariates.


9.
This paper applies the instrumental variable quantile regression (IVQR) model to panel data. Combining Canay's two-step estimation method for panel quantile regression with Chernozhukov's estimation method for the IVQR model, we propose a two-step panel quantile instrumental variable estimator (2S-IVFEQR) and derive the corresponding parameter estimates. The proposed method has lower computational complexity than existing methods. Monte Carlo simulation results show that, with small samples or long panel data, the 2S-IVFEQR method outperforms the traditional IVFEQR method and requires less computation time.

10.
This work is concerned with the algorithmic reachability analysis of continuous-time linear systems with constrained initial states and inputs. We propose an approach for computing an over-approximation of the set of states reachable on a bounded time interval. The main contribution over previous works is that it allows us to consider systems whose sets of initial states and inputs are given by arbitrary compact convex sets represented by their support functions. We actually compute two over-approximations of the reachable set. The first is given by a union of convex sets with computable support functions. As the representation of convex sets by their support functions is not suitable for some tasks, we derive from this first over-approximation a second one given by a union of polyhedra. The overall computational complexity of our approach is comparable to that of the most competitive specialized algorithms for reachability analysis of linear systems using zonotopes or ellipsoids. The effectiveness of our approach is demonstrated on several examples.
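For background, the support function of a compact convex set S is h_S(d) = max over x in S of the inner product of d and x; for a polytope given by its vertices this maximum is attained at a vertex, and evaluating it in finitely many directions yields an outer polyhedral approximation. A minimal sketch under these textbook definitions (illustrative only, not the paper's algorithm; names are hypothetical):

```python
import numpy as np

def support(vertices, d):
    """Support function h_S(d) of a polytope given by its vertex list
    (the maximum of a linear function over a polytope sits at a vertex)."""
    return max(np.dot(v, d) for v in vertices)

def outer_polyhedron(vertices, directions):
    """Outer approximation {x : <d_k, x> <= h_S(d_k)} as a list of
    (direction, offset) half-space pairs from sampled directions."""
    return [(d, support(vertices, d)) for d in directions]

# Example set: the box [-1, 1]^2, represented by its four vertices.
box = [np.array(v, float) for v in [(-1, -1), (-1, 1), (1, -1), (1, 1)]]
```

Sampling more directions tightens the polyhedral over-approximation, which mirrors the paper's second representation derived from support-function values.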

11.
We present a new scheme for the secure transmission of information based on master–slave synchronization of chaotic systems, using unknown-input observers. Our approach improves upon state-of-the-art schemes by accommodating information signals of relatively large amplitude while improving security against intruders through an intricate encryption system. In addition, our approach is robust to channel noise. The main idea is to separate the encryption and synchronization operations by using two cascaded chaotic systems in the transmitter. Technically, the scheme is based on smooth adaptive unknown-input observers, which have the advantage of simultaneously estimating the (master) states and reconstructing the unknown inputs. The performance of the communication system is illustrated in numerical simulations.

12.
This paper examines new combinations of Data Envelopment Analysis (DEA) and statistical approaches that can be used to evaluate efficiency within a multiple-input, multiple-output framework. Using data on five outputs and eight inputs for 638 public secondary schools in Texas, unsatisfactory results are obtained initially from both Ordinary Least Squares (OLS) and Stochastic Frontier (SF) regressions run separately using one output variable at a time. Canonical correlation analysis is then used to aggregate the multiple outputs into a single aggregate output, after which separate regressions are estimated for the subsets of schools identified as efficient and inefficient by DEA. Satisfactory results are finally obtained by a joint use of DEA and statistical regressions in the following manner. DEA is first used to identify the subset of DEA-efficient schools. The entire collection of schools is then combined in a single regression, with dummy variables used to distinguish between DEA-efficient and DEA-inefficient schools. The input coefficients are positive for the efficient schools and negative and statistically significant for the inefficient schools. These results are consistent with what might be expected from economic theory and are informative for educational policy uses. They also extend the treatments of production functions usually found in the econometrics literature to obtain a single regression relation that can be used to evaluate both efficient and inefficient behavior.
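The final single-regression step can be mimicked on synthetic data (an illustrative sketch with made-up coefficients, not the paper's data): each input is interacted with a DEA-efficiency dummy so that efficient and inefficient units receive separate input coefficients within one OLS fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)                  # a single input, for illustration
eff = rng.integers(0, 2, n).astype(float)  # 1 = DEA-efficient, 0 = inefficient
# Simulate: efficient units have a positive input coefficient (+2.0),
# inefficient units a negative one (-1.5), plus small noise.
y = 2.0 * eff * x - 1.5 * (1 - eff) * x + rng.normal(0, 0.1, n)

# One regression with dummy interactions: y ~ (eff * x) + ((1 - eff) * x),
# recovering a separate slope for each group in a single fit.
Z = np.column_stack([eff * x, (1 - eff) * x])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

The estimated `beta[0]` (efficient-group slope) comes out positive and `beta[1]` (inefficient-group slope) negative, matching the sign pattern the abstract reports.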

13.
It is well known that the super-efficiency data envelopment analysis (DEA) approach can be infeasible under the condition of variable returns to scale (VRS). Extending the work of Chen (2005), the current study develops a two-stage process for calculating super-efficiency scores regardless of whether the standard VRS super-efficiency model is feasible. The proposed approach examines whether the standard VRS super-efficiency DEA model is infeasible. When the model is feasible, our approach yields super-efficiency scores identical to those arising from the original model. For efficient DMUs that are infeasible under the super-efficiency model, our approach yields super-efficiency scores that characterize input savings and/or output surpluses. The current study also shows that infeasibility may imply that an efficient DMU does not exhibit super-efficiency in inputs or outputs. When infeasibility occurs, it may be necessary that (i) both inputs and outputs be decreased to reach the frontier formed by the remaining DMUs under the input orientation, and (ii) both inputs and outputs be increased to reach the frontier formed by the remaining DMUs under the output orientation. The newly developed approach is illustrated with numerical examples.

14.
In the three-dimensional strip packing problem (3DSP), we are given a container with an open dimension and a set of rectangular cuboids (boxes), and the task is to orthogonally pack all the boxes into the container such that the magnitude of the open dimension is minimized. We propose a block building heuristic for this problem, based on extreme points, that uses a reference length to guide its solution. Our 3DSP approach employs this heuristic in a one-step lookahead tree search algorithm using an iterative construction strategy. We tested our approach on standard 3DSP benchmark data; the results show that our approach produces better solutions on average than all other approaches in the literature for the majority of these data sets, using comparable computation time.

15.
16.
We present a new computational and statistical approach for fitting isotonic models under convex differentiable loss functions through recursive partitioning. Models along the partitioning path are also isotonic and can be viewed as regularized solutions to the problem. Our approach generalizes and subsumes the well-known work of Barlow and Brunk on fitting isotonic regressions subject to specially structured loss functions, and expands the range of loss functions that can be used (e.g., adding Huber loss for robust regression). This is accomplished through an algorithmic adjustment to a recursive partitioning approach recently developed for solving large-scale ℓ2-loss isotonic regression problems. We prove that the new algorithm solves the generalized problem while maintaining the favorable computational and statistical properties of the ℓ2 algorithm. The results are demonstrated on both real and synthetic data in two settings: fitting count data using negative Poisson log-likelihood loss, and fitting robust isotonic regressions using Huber loss. Proofs of theorems and a MATLAB-based software package implementing our algorithm are available in the online supplementary materials.
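For the ℓ2 loss, the classical building block behind such schemes is the pool-adjacent-violators algorithm (PAVA); the paper's contribution lies in moving beyond ℓ2, but a minimal ℓ2 sketch (the standard textbook algorithm, not the authors' code) conveys the idea:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares isotonic fit of y.

    Maintains a stack of blocks (mean, weight, count); whenever two
    adjacent blocks violate monotonicity, they are merged into one block
    at their weighted mean, which is the ℓ2-optimal pooled value.
    """
    w = [1.0] * len(y) if w is None else list(w)
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)
    return out
```

For example, `pava([3, 1, 2])` pools all three points to their mean, while an already-monotone sequence is returned unchanged. Under other convex losses (e.g., Huber), the pooled block value is no longer a weighted mean, which is where the generalized machinery comes in.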

17.
An underlying assumption in DEA is that the weights, coupled with the ratio scales of the inputs and outputs, imply linear value functions. In this paper, we present a general modeling approach for dealing with outputs and/or inputs that are characterized by nonlinear value functions. To this end, we represent the nonlinear virtual outputs and/or inputs in a piece-wise linear fashion. We give the CCR model that can assess the efficiency of units in the presence of nonlinear virtual inputs and outputs. Further, we extend the models with the assurance region approach to deal with concave output and convex input value functions. In effect, our formulations amount to a transformation of the original data set into an augmented data set to which standard DEA models can then be applied, thus remaining within the grounds of standard DEA methodology. To underline the usefulness of this development, we revisit a previous work of one of the authors dealing with the assessment of the human development index in the light of DEA.

18.
We improve the efficiency interval of a DMU by adjusting its given inputs and outputs. The interval DEA model has been formulated to obtain an efficiency interval consisting of evaluations from both the optimistic and the pessimistic viewpoints. DMUs that are not rated as efficient in the conventional sense are improved so that their lower bounds become as large as possible under the condition that their upper bounds attain the maximum value of one. The adjusted inputs and outputs keep each other balanced by improving the lower bound of the efficiency interval, since the lower bound becomes small if the inputs and outputs are not kept in proportion. In order to improve the lower bound of the efficiency interval, different target points are defined for different DMUs. The target point can be regarded as a kind of benchmark for the DMU. First, a new approach to improvement by adjusting only outputs or only inputs is proposed. Then, a combined approach to improvement by adjusting both inputs and outputs simultaneously is proposed. Lastly, numerical examples are given to illustrate the proposed approaches.

19.
Frontier regression models seek to model and estimate the best, rather than average, values of a response variable. Our proposed frontier model has a similar intent, but also allows for an additional error term. The composed error approach uses the sum of two error terms, one an inefficiency error and the other white noise. Previous research proposed assumptions on the distributions of the error components so that the distribution of the total error can be specified. Here we propose a distribution-free approach to specifying these errors. In addition, our approach is completely data driven, rendering model specification an unnecessary step. We also outline, step by step, an approach to implementing this procedure. The entire approach is illustrated with a mutual fund data set from the Morningstar database.

20.
This paper highlights some recent developments in testing the predictability of asset returns, with a focus on linear mean regressions, quantile regressions and nonlinear regression models. For these models, when predictors are highly persistent and their innovations are contemporaneously correlated with the dependent variable, the ordinary least squares estimator has a finite-sample bias, and its limiting distribution depends on an unknown nuisance parameter that is not consistently estimable. Without correcting these issues, conventional test statistics are subject to serious size distortions and can generate misleading conclusions when testing the predictability of asset returns in real applications. Over the past two decades, a sequence of studies has contributed to this subject and proposed various solutions, including, but not limited to, bias-correction procedures, the linear projection approach, the IVX filtering idea, variable addition approaches, the weighted empirical likelihood method, and the double-weight robust approach. In particular, to catch up with the fast-growing literature of the recent decade, we offer a selective overview of these methods. Finally, some future research topics are also discussed, such as econometric theory for predictive regressions with structural changes, nonparametric predictive models, and predictive models under more general data settings.
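The finite-sample bias is easy to reproduce by simulation (a hedged sketch of the standard predictive-regression setup, not drawn from any one of the surveyed methods): with a persistent AR(1) predictor whose innovations are negatively correlated with the return shocks, the OLS slope is biased upward even when the true slope is zero. The function name and parameter values below are illustrative.

```python
import numpy as np

def ols_slope_bias(T=100, rho=0.95, corr=-0.9, beta=0.0, reps=1000, seed=0):
    """Mean OLS slope bias in r_t = beta * x_{t-1} + u_t,
    where x_t = rho * x_{t-1} + v_t and corr(u_t, v_t) = corr."""
    rng = np.random.default_rng(seed)
    slopes = []
    for _ in range(reps):
        u = rng.standard_normal(T)
        v = corr * u + np.sqrt(1 - corr**2) * rng.standard_normal(T)
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = rho * x[t - 1] + v[t]
        r = beta * x[:-1] + u[1:]            # returns predicted by lagged x
        xl = x[:-1] - x[:-1].mean()
        slopes.append(np.dot(xl, r - r.mean()) / np.dot(xl, xl))
    return np.mean(slopes) - beta

bias = ols_slope_bias()  # clearly positive despite the true slope being zero
```

The bias shrinks as T grows and vanishes if `corr` is zero, which is why the correction procedures surveyed above target exactly this persistence-plus-correlation configuration.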
