Similar documents — 20 results found.
1.
Data envelopment analysis (DEA) allows us to evaluate the relative efficiency of each of a set of decision-making units (DMUs). However, the methodology does not permit us to identify specific sources of inefficiency because DEA views the DMU as a “black box” that consumes a mix of inputs and produces a mix of outputs. Thus, DEA does not provide a DMU manager with insight regarding the internal source of the organization’s inefficiency.
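To make the relative-efficiency computation concrete, the following is a minimal sketch of the classical input-oriented CCR multiplier model solved as a linear program. The three-DMU data set and the SciPy-based solver are illustrative assumptions, not part of the cited paper.

```python
# Minimal CCR multiplier model: each DMU o chooses weights (v, u) that
# maximize u'y_o subject to v'x_o = 1 and u'y_j - v'x_j <= 0 for all j.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # hypothetical inputs  (3 DMUs x 2 inputs)
Y = np.array([[5.0], [4.0], [6.0]])                   # hypothetical outputs (3 DMUs x 1 output)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    """Relative efficiency of DMU o under constant returns to scale."""
    # Decision variables: [v_1..v_m, u_1..u_s]
    c = np.concatenate([np.zeros(m), -Y[o]])          # linprog minimizes, so negate u'y_o
    A_ub = np.hstack([-X, Y])                          # u'y_j - v'x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[o], np.zeros(s)]).reshape(1, -1)  # normalization v'x_o = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

Each DMU is evaluated with the weights that show it in the best possible light; a score of 1 marks it as efficient relative to the others, which is exactly the "black box" comparison the abstract refers to.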

2.
Environmental assessment has recently become a major policy issue around the world. This study discusses how to apply Data Envelopment Analysis (DEA) to environmental assessment. An important feature of DEA environmental assessment is that it needs to classify outputs into desirable (good) and undesirable (bad) outputs, because private and public entities often produce not only desirable outputs but also undesirable outputs as a result of their production activities. This study proposes three types of unification for DEA environmental assessment using non-radial DEA models. The first unification considers both an increase and a decrease in the input vector along with a decrease in the direction vector of undesirable outputs; this type of unification measures “unified efficiency”. The second unification considers a decrease in the input vector along with a decrease in the vector of undesirable outputs; this type of unification is referred to as “natural disposability” and measures “unified efficiency under natural disposability”. The third unification considers an increase in the input vector but a decrease in the vector of undesirable outputs; this type of unification is referred to as “managerial disposability” and measures “unified efficiency under managerial disposability”. All the unifications increase the vector of desirable outputs. To document their practical implications, this study applies the proposed approach to compare the performance of national oil firms with that of international oil firms and identifies two important findings about the petroleum industry. The first is that national oil companies under public ownership outperform international oil companies under private ownership in terms of unified (operational and environmental) efficiency and unified efficiency under natural disposability; however, the performance of international oil companies exhibits an increasing trend in unified efficiency. The second is that national oil companies need to satisfy the environmental standards of their own countries, while international oil companies need to satisfy international standards that are more restrictive than the national ones. As a consequence, international oil companies outperform national oil companies in terms of unified efficiency under managerial disposability.

3.
In DEA, we have two measures of technical efficiency with different characteristics: radial and non-radial. In this paper we compile them into a composite model called the “epsilon-based measure (EBM)”. For this purpose we introduce two parameters which connect the radial and non-radial models. These two parameters are obtained from a newly defined affinity index between inputs or outputs, along with a principal component analysis of the affinity matrix. Thus, EBM takes into account the diversity of the input/output data and their relative importance in measuring technical efficiency.
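For reference, the input-oriented EBM model is commonly stated in the following form (a sketch; the parameter $\varepsilon_x$ and the weights $w_i^-$ are the connecting quantities obtained from the affinity analysis mentioned in the abstract):

```latex
\begin{align*}
\gamma^{*} \;=\; \min_{\theta,\,\lambda,\,s^{-}} \;\;
      & \theta \;-\; \varepsilon_x \sum_{i=1}^{m} \frac{w_i^{-}\, s_i^{-}}{x_{io}} \\
\text{s.t.}\;\;
      & \sum_{j=1}^{n} \lambda_j\, x_{ij} + s_i^{-} = \theta\, x_{io}, \qquad i = 1,\dots,m, \\
      & \sum_{j=1}^{n} \lambda_j\, y_{rj} \;\ge\; y_{ro}, \qquad r = 1,\dots,s, \\
      & \lambda_j \ge 0, \quad s_i^{-} \ge 0 .
\end{align*}
```

Setting $\varepsilon_x = 0$ recovers the purely radial (CCR-type) measure, while $\varepsilon_x = 1$ with $\theta = 1$ yields a purely non-radial, slacks-based measure; intermediate values blend the two.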

4.
An underlying assumption in DEA is that the weights coupled with the ratio scales of the inputs and outputs imply linear value functions. In this paper, we present a general modeling approach to deal with outputs and/or inputs that are characterized by nonlinear value functions. To this end, we represent the nonlinear virtual outputs and/or inputs in a piecewise linear fashion. We give the CCR model that can assess the efficiency of the units in the presence of nonlinear virtual inputs and outputs. Further, we extend the models with the assurance region approach to deal with concave output and convex input value functions. In effect, our formulations indicate a transformation of the original data set to an augmented data set where standard DEA models can then be applied, thus remaining within the grounds of standard DEA methodology. To underline the usefulness of this development, we revisit a previous work of one of the authors dealing with the assessment of the human development index in the light of DEA.

5.
Conventional data envelopment analysis (DEA) methods assume that input and output variables are continuous. However, in many real managerial cases, some inputs and/or outputs can only take integer values. Simply rounding the performance targets to the nearest integers can lead to misleading solutions and efficiency evaluations. To address integer-valued data, the current paper proposes models that deal directly with slacks to calculate efficiency and super-efficiency scores when integer values are present. Compared with standard radial models, the additive (super-efficiency) models demonstrate higher discrimination power among decision making units, especially for integer-valued data. We use an empirical application to early-stage ventures to illustrate our approach.
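As an illustration of the "deal directly with slacks" idea, here is a minimal sketch of an additive DEA model in which integrality is imposed on the slacks, so that the projected targets of integer-valued data stay integer. The data are hypothetical and the formulation is a generic additive model in the spirit of the abstract, not the authors' exact model.

```python
# Additive DEA model (CRS) with integer slacks, solved as a small MILP.
# Variables: lambda_1..lambda_n (continuous), s_minus (m), s_plus (s).
import numpy as np
from scipy.optimize import linprog

X = np.array([[4, 3], [6, 2], [5, 5], [8, 4]], dtype=float)   # hypothetical integer inputs
Y = np.array([[2], [3], [4], [5]], dtype=float)               # hypothetical integer outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def additive_integer_score(o):
    """Maximize the sum of slacks; integer slacks keep integer projection targets."""
    c = np.concatenate([np.zeros(n), -np.ones(m + s)])          # maximize total slack
    # Inputs:  sum_j lambda_j x_ij + s_i^- = x_io
    A_in  = np.hstack([X.T, np.eye(m), np.zeros((m, s))])
    # Outputs: sum_j lambda_j y_rj - s_r^+ = y_ro
    A_out = np.hstack([Y.T, np.zeros((s, m)), -np.eye(s)])
    A_eq = np.vstack([A_in, A_out])
    b_eq = np.concatenate([X[o], Y[o]])
    integrality = np.concatenate([np.zeros(n), np.ones(m + s)])  # only the slacks are integer
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
                  integrality=integrality, method="highs")
    return -res.fun   # total slack; zero means the DMU is efficient in the additive sense

for o in range(n):
    print(f"DMU {o}: total integer slack = {additive_integer_score(o):.0f}")
```

A DMU with zero total slack cannot be improved in any input or output without leaving the (integer) production possibility set used here.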

6.
In this paper, we propose a new approach to dealing with non-zero slacks in data envelopment analysis (DEA) assessments that is based on restricting the multipliers in the dual multiplier formulation of the DEA model used. It guarantees strictly positive weights, which ensures reference points on the Pareto-efficient frontier and, consequently, zero slacks. We follow a two-step procedure which, after specifying some weight bounds, results in an “assurance region”-type model that is used in the assessment of efficiency. The specification of these bounds is based on a selection criterion among the optimal solutions for the multipliers of the unbounded DEA models that tries to avoid the extreme dissimilarity between weights often found in DEA applications. The models developed do not suffer from infeasibility problems, nor from problems with alternate optima in the choice of weights. Our multiplier bound approach requires neither a priori information about substitutions between inputs and outputs nor the existence of full-dimensional efficient facets on the frontier, as is the case with other existing approaches that address this problem.

7.
Evaluating the performance of activities or organizations with common data envelopment analysis models requires crisp input/output data. However, the precise inputs and outputs of production processes cannot always be measured. Thus, DEA measurement with fuzzy data, called “fuzzy data envelopment analysis”, has played an important role in the evaluation of efficiencies in real applications. This paper focuses on the fuzzy CCR model and proposes a new method for determining the lower bounds of fuzzy inputs and outputs, which improves the weak efficiency frontiers of the corresponding production possibility set. A numerical example illustrates the capability of the proposed method.

8.
The concept of efficiency in data envelopment analysis (DEA) is defined as the ratio of the weighted sum of outputs to the weighted sum of inputs. To calculate the maximum efficiency score, each decision making unit (DMU) assigns its own weights to its inputs and outputs; hence, classical DEA allows weight flexibility. As a result, some inputs or outputs may be assigned zero weights even when they are important, and are thus neglected in the evaluation. Some DMUs may also be identified as efficient when they are not, which leads to unrealistic results. To eliminate the problem of weight flexibility, weight restrictions are imposed in DEA. In this study, we propose a new model, not previously published in the literature, which we call the restricted data envelopment analysis (ARIII(COR)) model with correlation coefficients. The aim of this new model is to take the relations between variables into account via correlation coefficients, which are added as constraints to the CCR and BCC models. For this purpose, correlation-based restrictions are applied to the inputs and outputs both separately and in combination. Inputs and outputs are related in production to the degree that they are correlated. Previous studies did not take the relationships between input/output variables into account, so weight restrictions were made only on the basis of expert opinion or some objective method. In our study, the weights for the input and output variables are determined according to the correlations between them. The proposed method differs from other methods in the literature because the efficiency scores are calculated using the correlations between the input and/or output variables.
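For context, the ratio definition of efficiency and the place where weight restrictions enter can be written as follows (a generic sketch; the abstract does not give the exact form of the correlation-based restrictions):

```latex
\max_{u,\,v}\; E_{o} \;=\; \frac{\sum_{r=1}^{s} u_{r}\, y_{ro}}{\sum_{i=1}^{m} v_{i}\, x_{io}}
\quad \text{s.t.} \quad
\frac{\sum_{r=1}^{s} u_{r}\, y_{rj}}{\sum_{i=1}^{m} v_{i}\, x_{ij}} \;\le\; 1
\;\; (j = 1,\dots,n), \qquad u_{r},\, v_{i} \ge 0 .
```

The proposed model adds further constraints on the multipliers $u_r$ and $v_i$, derived from the correlation coefficients between the input and output variables, so that variables that matter in production are not driven to zero weight by the flexibility of the unrestricted model.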

9.
DEA model with shared resources and efficiency decomposition
Data envelopment analysis (DEA) has proved to be an excellent approach for measuring the performance of decision making units (DMUs) that use multiple inputs to generate multiple outputs. In many real world scenarios, DMUs have a two-stage network process with shared input resources used in both stages of operations. For example, in hospital operations, some of the input resources such as equipment, personnel, and information technology are used in the first stage to generate medical records that track treatments, tests, drug dosages, and costs. The same set of resources used by first-stage activities is used to generate the second-stage patient services. Patient services also use the services generated by the first-stage operations of housekeeping, medical records, and laundry. These DMUs have not only inputs and outputs, but also intermediate measures that exist in between the two-stage operations. The distinguishing characteristic is that some of the inputs to the first stage are shared by both the first and second stages, but some of these shared inputs cannot be conveniently split up and allocated to the operations of the two stages. Recognizing this distinction is critical for these types of DEA applications, because measuring the efficiency of production for the first-stage outputs can be misleading and can understate the efficiency if DEA fails to consider that some of the inputs also generate second-stage outputs. The current paper develops a set of DEA models for measuring the performance of two-stage network processes with non-splittable shared inputs. An additive efficiency decomposition for the two-stage network process is presented. The models are developed under the assumption of variable returns to scale (VRS), but can be readily applied under the assumption of constant returns to scale (CRS). An application is provided.
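The additive decomposition referred to in the abstract typically expresses the overall efficiency as a weighted average of the two stage efficiencies (a sketch of the general idea rather than the paper's exact model):

```latex
e_{o} \;=\; w_{1}\, e_{o}^{1} \;+\; w_{2}\, e_{o}^{2},
\qquad w_{1} + w_{2} = 1, \quad w_{1},\, w_{2} \ge 0 ,
```

where $e_o^1$ and $e_o^2$ are the efficiencies of the first and second stages and the weights $w_1, w_2$ reflect the relative importance of each stage (for example, its share of total resources). With non-splittable shared inputs, the portions of those inputs attributed to each stage enter the model as additional decision variables rather than being fixed in advance.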

10.
We introduce a stochastic version of an input relaxation model in data envelopment analysis (DEA). The input relaxation model, recently developed in DEA, is useful for resource management [e.g. G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion, Appl. Math. Comput. 151(1) (2004) 263–273]. This model allows greater changes in the input combinations of decision making units than those observed for the unit under evaluation. Using this extra flexibility in input combinations, we can find better outputs. We obtain a non-linear deterministic equivalent of this stochastic model and show that, under fairly general conditions, the non-linear model can be replaced by an ordinary deterministic DEA model. The model is illustrated using a real data set.

11.
In the original data envelopment analysis (DEA) models, inputs and outputs are measured by exact values on a ratio scale. Cooper et al. [Management Science, 45 (1999) 597–607] recently addressed the problem of imprecise data in DEA in its general form. In this paper we develop an alternative approach for dealing with imprecise data in DEA. Our approach is to transform a non-linear DEA model into a linear programming equivalent, on the basis of the original data set, by applying transformations only to the variables. Upper and lower bounds for the efficiency scores of the units are then defined as natural outcomes of our formulations. It is our specific formulation that enables us to proceed further in discriminating among the efficient units by means of a post-DEA model and the endurance indices. We then go still further in formulating another post-DEA model for determining input thresholds that turn an inefficient unit into an efficient one.

12.
DEA (Data Envelopment Analysis) models and concepts are formulated here in terms of the P-models of chance-constrained programming, which are then modified to make contact with the satisficing concepts of H.A. Simon. Satisficing is thereby added as a third category to the efficiency/inefficiency dichotomies that have heretofore prevailed in DEA. Formulations include cases in which both inputs and outputs are stochastic, as well as cases in which only the outputs are stochastic. Attention is also devoted to situations in which variations in inputs and outputs are related through a common random variable. Extensions include new developments in goal programming with deterministic equivalents for the corresponding satisficing models under chance constraints.

13.
In conventional DEA, DMUs are generally treated as black boxes in the sense that internal structures are ignored and the performance of a DMU is assumed to be a function of a set of chosen inputs and outputs. A significant body of work has been directed at problem settings where the DMU is characterized by a multistage process; supply chains and many manufacturing processes take this form. Recent DEA literature on serial processes has tended to concentrate on closed systems, that is, systems in which the outputs from one stage become the inputs to the next stage and no other inputs enter the process at any intermediate stage. The current paper examines the more general problem of an open multistage process. Here, some outputs from a given stage may leave the system while others become inputs to the next stage; in addition, new inputs can enter at any stage. We then extend the methodology to examine general network structures. We represent the overall efficiency of such a structure as an additive weighted average of the efficiencies of the individual components or stages that make up that structure. The model therefore allows one not only to evaluate the overall performance of the network, but also to represent how that performance decomposes into measures for the individual components of the network. We illustrate the model using two data sets.

14.
DEA efficiency evaluation with undesirable outputs
By treating undesirable outputs as inputs in the traditional DEA model, the problem of evaluating the efficiency of production activities with undesirable outputs is solved. Based on the production possibility set, undesirable outputs are reflected directly in the production possibility set, and two input-oriented DEA models, one radial and one non-radial, are established. The ordering relationship between the efficiency scores of the two DEA models and the equivalence of their relative efficiency are proved, and it is shown that the non-radial DEA model achieves a more accurate quantitative evaluation of efficiency.
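A minimal sketch of the "undesirable outputs as inputs" idea: the bad-output columns are simply appended to the input matrix and a standard input-oriented radial (CCR-type) envelopment model is solved. The data and the SciPy-based solver are illustrative, not the paper's implementation.

```python
# Radial, input-oriented CCR envelopment model in which undesirable (bad)
# outputs are treated as additional inputs.  Hypothetical data; a sketch only.
import numpy as np
from scipy.optimize import linprog

X     = np.array([[3.0, 2.0], [5.0, 4.0], [4.0, 3.0]])   # ordinary inputs
Ybad  = np.array([[1.0], [2.0], [1.5]])                   # undesirable outputs
Ygood = np.array([[6.0], [8.0], [7.0]])                   # desirable outputs

Xt = np.hstack([X, Ybad])                 # undesirable outputs enter as extra inputs
n, m, s = Xt.shape[0], Xt.shape[1], Ygood.shape[1]

def radial_efficiency(o):
    """min theta  s.t.  sum_j l_j x_ij <= theta*x_io,  sum_j l_j y_rj >= y_ro,  l >= 0."""
    # Variables: [lambda_1..lambda_n, theta]
    c = np.concatenate([np.zeros(n), [1.0]])
    A_in  = np.hstack([Xt.T, -Xt[o].reshape(-1, 1)])      # sum_j l_j x_ij - theta*x_io <= 0
    A_out = np.hstack([-Ygood.T, np.zeros((s, 1))])       # -sum_j l_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Ygood[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency with bad outputs as inputs = {radial_efficiency(o):.3f}")
```

Because the contraction factor theta multiplies the augmented input vector, an efficient score requires that the desirable outputs be produced with both fewer ordinary inputs and fewer undesirable outputs than any dominating combination of peers.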

15.
A new bilevel programming data envelopment analysis (DEA) approach is created to provide valuable managerial insights when assessing the performance of a system with Stackelberg-game relationships. The new approach allows us to evaluate firm performance under decentralized decisions, which consist of the objective(s) of the leader at the first level and those of the follower at the second level. This approach can help decentralized companies optimize their performance, using multiple inputs to produce multiple outputs in a cost-effective way, where both the system “black-box” and the subsystem performance are exposed in detail. We present the algorithms and solutions for our new models, and we illustrate and validate the proposed approach using two case studies: a banking chain and a manufacturing supply chain. The computations show that when the subsystems are efficient at all levels, overall efficiency is achieved in the decentralized bilevel structure.

16.
While traditional data envelopment analysis (DEA) models assess the relative efficiency of similar, independent decision making units (DMUs), centralized DEA models aim at reallocating inputs and outputs among the units, setting new input and output targets for each one. This system point of view is appropriate when the DMUs belong to a common organization that allocates their inputs and appropriates their outputs. This intraorganizational perspective opens up the possibility that greater technical efficiency for the organization as a whole might be achieved by closing down some of the existing DMUs. In this paper, we present three centralized DEA models that take advantage of this possibility. Although these models involve some binary variables, we present efficient solution approaches based on linear programming. We also present some numerical results of the proposed models for a small problem from the literature.

17.
Discretionary models of data envelopment analysis (DEA) assume that all inputs and outputs can be varied at the discretion of management or other users. In any realistic situation, however, there may exist “exogenously fixed” or non-discretionary factors that are beyond the control of a DMU’s management and also need to be considered. This paper discusses and reviews the use of the super-efficiency approach in DEA sensitivity analyses when some inputs are exogenously fixed. A super-efficiency DEA model is obtained when the decision making unit (DMU) under evaluation is excluded from the reference set. In this paper, by means of a modified Banker and Morey (BM hereafter) model [R.D. Banker, R. Morey, Efficiency analysis for exogenously fixed inputs and outputs, Operations Research 34 (1986) 513–521], in which the test DMU is excluded from the reference set, we are able to determine what perturbations of the discretionary data can be tolerated before frontier DMUs become non-frontier.
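The exclusion of the evaluated DMU from the reference set can be sketched as follows (input-oriented, constant returns to scale, hypothetical data). This is a plain super-efficiency model, not the modified Banker–Morey model of the paper; in the BM variant the radial contraction theta would be applied only to the discretionary inputs, while exogenously fixed inputs stay at their observed levels.

```python
# Input-oriented super-efficiency model under CRS: the DMU under evaluation is
# removed from the reference set, so efficient DMUs can score above 1.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # hypothetical inputs
Y = np.array([[3.0], [3.0], [4.0]])                   # hypothetical outputs

def super_efficiency(o):
    """min theta s.t. sum_{j!=o} l_j x_j <= theta*x_o,  sum_{j!=o} l_j y_j >= y_o,  l >= 0."""
    Xr, Yr = np.delete(X, o, axis=0), np.delete(Y, o, axis=0)   # reference set without DMU o
    k, m, s = Xr.shape[0], X.shape[1], Y.shape[1]
    c = np.concatenate([np.zeros(k), [1.0]])                    # variables: [lambdas, theta]
    A_in  = np.hstack([Xr.T, -X[o].reshape(-1, 1)])             # inputs:  l'x - theta*x_o <= 0
    A_out = np.hstack([-Yr.T, np.zeros((s, 1))])                # outputs: -l'y <= -y_o
    A_ub  = np.vstack([A_in, A_out])
    b_ub  = np.concatenate([np.zeros(m), -Y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.fun if res.success else float("inf")             # guard: the LP can be infeasible in some settings

for o in range(X.shape[0]):
    print(f"DMU {o}: super-efficiency = {super_efficiency(o):.3f}")
```

A score above 1 indicates how much the excluded DMU's inputs could be inflated before it would be dominated by the remaining units, which is the quantity the sensitivity analysis in the abstract builds on.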

18.
This paper investigates efficiency measurement in a two-stage data envelopment analysis (DEA) setting. Since 1978, the DEA literature has witnessed the expansion of the original concept to encompass a wide range of theoretical and applied research areas. One such area is network DEA, and in particular two-stage DEA. In the conventional closed serial system, the only role played by the outputs from Stage 1 is to serve as inputs to Stage 2. The current paper examines a variation of that system. In particular, we consider settings where the set of final outputs comprises not only those that result from Stage 2 but also certain outputs from the previous (first) stage. The difficulty this situation creates is that such outputs attempt to play both an input and an output role in the same stage. We develop a DEA-based methodology designed to handle what we term ‘time-staged outputs’. We then examine an application of this concept in which the DMUs are schools of business.

19.
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision making units such as firms or public sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs/outputs. In subsequent literature there have been various approaches to enable DEA to deal with negative data.

20.
In data envelopment analysis (DEA), efficient decision making units (DMUs) are of primary importance because they define the efficient frontier. The current paper develops a new sensitivity analysis approach for a category of DMUs and finds the stability radius for all efficient DMUs. By combining some classic DEA models with the condition that the efficiency scores of efficient DMUs remain unchanged, we are able to determine what perturbations of the data can be tolerated before efficient DMUs become inefficient. Our approach generalizes the conventional sensitivity analysis approach, in which the inputs of efficient DMUs increase and their outputs decrease, while the inputs of inefficient DMUs decrease and their outputs increase. We find the maximum perturbation of the data such that all first-level efficient DMUs remain at the same level.

