Similar Documents (20 results)
1.
In the existing DEA models, a centralized decision maker (DM) supervises all the operating units. In this paper, we address the situation in which the centralized DM faces limited or fixed resources for total inputs or total outputs, and we establish a DEA target model that handles it. Our model allows total input consumption to decrease and total output production to increase, whereas the existing DEA models only guarantee that total output production does not decrease. Given the importance of imprecise data in organizations, we define our model so as to deal with interval and ordinal data. A numerical illustration shows the application of our model and the advantages of our approach over the previous one.

2.
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units (DMUs) with exact values of inputs and outputs. For imprecise data, i.e., mixtures of interval data and ordinal data, some methods have been developed to calculate the upper bound of the efficiency scores. This paper constructs a pair of two-level mathematical programming models whose objective values represent the lower bound and the upper bound of the efficiency score, respectively. Based on the concept of productive efficiency and the application of a variable substitution technique, the pair of two-level nonlinear programs is transformed into a pair of ordinary one-level linear programs. Solving the associated pairs of linear programs produces the efficiency intervals of all DMUs. An illustrative example verifies the idea of this paper, and a real case is provided to give some interpretation of the interval efficiency. Interval efficiency not only describes the real situation in more detail; psychologically, it also eases the tension of the DMUs being evaluated and of the persons conducting the evaluation.
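For readers who want to experiment with the idea of efficiency intervals, the sketch below computes upper and lower efficiency bounds with the widely used optimistic/pessimistic pair of CCR multiplier LPs rather than with the paper's two-level transformation; the function name, the 3-DMU data set and all numbers are hypothetical.

```python
# Hedged sketch: interval efficiency bounds via the optimistic/pessimistic
# CCR multiplier LPs (not the two-level model described in the abstract).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(x_o, y_o, X, Y):
    """Input-oriented CCR multiplier model:
    max u.y_o  s.t.  v.x_o = 1,  u.Y_j - v.X_j <= 0 for every DMU j,  u, v >= 0."""
    n, m = X.shape                                       # n DMUs, m inputs
    s = Y.shape[1]                                       # s outputs; variables are [u (s), v (m)]
    c = np.concatenate([-y_o, np.zeros(m)])              # linprog minimizes, so negate u.y_o
    A_ub = np.hstack([Y, -X])                            # u.Y_j - v.X_j <= 0 for each DMU j
    A_eq = np.concatenate([np.zeros(s), x_o]).reshape(1, -1)   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

# Interval data for 3 DMUs with one input and one output (made-up numbers):
XL = np.array([[2.0], [3.0], [4.0]]); XU = np.array([[3.0], [4.0], [5.0]])
YL = np.array([[1.0], [2.0], [2.5]]); YU = np.array([[1.5], [2.5], [3.0]])

for o in range(len(XL)):
    # Upper bound: DMU o at its most favourable data, the others at their least favourable.
    X_up, Y_up = XU.copy(), YL.copy(); X_up[o], Y_up[o] = XL[o], YU[o]
    e_up = ccr_efficiency(XL[o], YU[o], X_up, Y_up)
    # Lower bound: the reverse assignment.
    X_lo, Y_lo = XL.copy(), YU.copy(); X_lo[o], Y_lo[o] = XU[o], YL[o]
    e_lo = ccr_efficiency(XU[o], YL[o], X_lo, Y_lo)
    print(f"DMU {o + 1}: efficiency interval [{e_lo:.3f}, {e_up:.3f}]")
```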

3.
Optimization, 2012, 61(11): 2441-2454.
Inverse data envelopment analysis (InDEA) is a well-known approach for short-term forecasting of a given decision-making unit (DMU). Conventional InDEA models use a production possibility set (PPS) composed of the evaluated DMU with its current inputs and outputs. In this paper, we replace the perturbed DMU in the PPS with a modified DMU carrying the updated inputs and outputs, since the DMU with its current data should not be allowed to establish the new PPS. Moreover, classical DEA models such as InDEA assume perfect knowledge of the input and output values, but in many situations this assumption is not realistic: the observed data can sometimes only be specified as interval numbers instead of crisp numbers. Here, we extend the InDEA model to interval data for evaluating the relative efficiency of DMUs. The proposed models determine the lower and upper bounds of the inputs of a given DMU separately when its interval outputs are changed in the performance analysis process. The aim is to keep the current interval efficiency of the considered DMU, and the interval efficiencies of the remaining DMUs, fixed or even improved compared with their current values.

4.
A first systematic attempt to use data containing missing values in data envelopment analysis (DEA) is presented. It is formally shown that allowing missing values into the data set can only improve estimation of the best-practice frontier. Technically, DEA can automatically exclude the missing data from the analysis if blank data entries are coded by appropriate numerical values.

5.
Data envelopment analysis (DEA) has proven to be a useful technique in evaluating the efficiency of decision making units that produce multiple outputs using multiple inputs. However, the ability to estimate efficiency reliably is hampered in the presence of measurement error and other statistical noise, and a main and legitimate criticism of all deterministic models is their inability to separate measurement error from inefficiency, both of which are unobserved. In this paper, we consider panel data models of efficiency estimation. One DEA model that has been used averages cross-sectional efficiency estimates across time and has been shown to work relatively well. We show that this approach leads to biased efficiency estimates and provide an alternative model that corrects the problem. The approaches are compared using simulated data for illustrative purposes.

6.
The effects of data heterogeneity on the efficiency estimate by data envelopment analysis are evaluated here in terms of empirical applications in the computer industry. Scale or size variations of firms and heteroscedasticity are the two forms of heterogeneity considered here. Our empirical results show that the adverse effects of data heterogeneity can be considerably reduced by the methods suggested here.

7.
In this paper, the additive model is used to provide an alternative approach for estimating returns to scale in data envelopment analysis. The proposed model is developed in both stochastic and fuzzy data envelopment analysis, and deterministic (crisp) equivalents corresponding to the stochastic and fuzzy models are obtained. Numerical examples are also used to illustrate the proposed approaches.
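For orientation, here is a minimal sketch of the standard crisp additive model that the stochastic and fuzzy variants described above build on (textbook envelopment form with the variable-returns-to-scale convexity constraint; the notation is generic, not necessarily the authors'):

\[
\max_{\lambda,\,s^-,\,s^+}\; \mathbf{1}^{\top} s^- + \mathbf{1}^{\top} s^+
\quad \text{s.t.} \quad
X\lambda + s^- = x_o,\qquad
Y\lambda - s^+ = y_o,\qquad
\mathbf{1}^{\top}\lambda = 1,\qquad
\lambda,\, s^-,\, s^+ \ge 0,
\]

where \(x_o\) and \(y_o\) are the input and output vectors of the evaluated DMU and the columns of \(X\) and \(Y\) collect those of all DMUs; DMU \(o\) is efficient exactly when all optimal slacks are zero.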

8.
Data envelopment analysis (DEA) is popularly used to evaluate relative efficiency among public or private firms. Most DEA models evaluate each firm by individually maximizing its efficiency, defined as a ratio of outputs to inputs, under the firm's most advantageous weights. Some scholars have pointed out the interesting relationship between the multiobjective linear programming (MOLP) problem and the DEA problem and have introduced the common weight approach to DEA based on MOLP. This paper proposes a new linear programming problem for computing the efficiency of a decision-making unit (DMU). The proposed model differs from traditional and existing multiobjective DEA models in that its objective function is the difference between inputs and outputs instead of the outputs/inputs ratio. An MOLP problem, based on the introduced linear programming problem, is then formulated to compute common weights for all DMUs; specifically, the modified Chebychev distance and the ideal point of the MOLP are used to generate the common weights. The dual problem of this model is also investigated. Finally, this study presents an actual case study analysing the R&D efficiency of 10 TFT-LCD companies in Taiwan to illustrate the new approach. Our model demonstrates better performance than the traditional DEA model as well as some of the most important existing multiobjective DEA models.
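As a rough, generic illustration of a difference-based score (one possible normalization; not necessarily the authors' exact formulation), DMU \(o\) could be evaluated by

\[
\max_{u,\,v\,\ge\,0}\; u^{\top} y_o - v^{\top} x_o
\quad \text{s.t.} \quad
u^{\top} y_j - v^{\top} x_j \le 0 \;\;(j = 1,\dots,n),\qquad
\mathbf{1}^{\top} u + \mathbf{1}^{\top} v = 1,
\]

and the common-weight MOLP version would maximize all \(n\) such differences simultaneously, e.g. by minimizing a (modified) Chebychev distance to their ideal values.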

9.
Data are often affected by uncertainty, and uncertainty is usually identified with randomness. Nonetheless, other sources of uncertainty may occur; in particular, the empirical information may also be affected by imprecision, and in these cases too it can be fruitful to analyze the underlying structure of the data. In this paper we address the problem of summarizing a sample of three-way imprecise data. To manage the different sources of uncertainty, a twofold strategy is adopted. On the one hand, imprecise data are transformed into fuzzy sets by means of the so-called fuzzification process; the resulting fuzzy data are then analyzed by suitable generalizations of the Tucker3 and CANDECOMP/PARAFAC models, the two most popular three-way extensions of Principal Component Analysis. On the other hand, the statistical validity of the obtained underlying structure is evaluated by (nonparametric) bootstrapping. A simulation experiment assesses whether the use of fuzzy data is helpful for summarizing three-way uncertain data. Finally, to show how our models work in practice, an application to real data is discussed.

10.
Data Envelopment Analysis (DEA) offers a piece-wise linear approximation of the production frontier. The approximation tends to be poor if the true frontier is not concave, e.g. in the case of economies of scale or of specialisation. To improve the flexibility of the DEA frontier and to gain in empirical fit, we propose to extend DEA towards a more general piece-wise quadratic approximation, called Quadratic Data Envelopment Analysis (QDEA). We show that QDEA gives statistically consistent estimates for all production frontiers with bounded Hessian eigenvalues. Our Monte-Carlo simulations suggest that QDEA can substantially improve efficiency estimation in finite samples relative to standard DEA models.

11.
Transconcave data envelopment analysis (TDEA) extends standard data envelopment analysis (DEA) in order to account for non-convex production technologies, such as those involving increasing returns-to-scale or diseconomies of scope. TDEA introduces non-convexities by transforming the range and the domain of the production frontier, thus replacing the standard assumption that the production frontier is concave with the more general assumption that the frontier is concave transformable. TDEA gives statistically consistent estimates for all monotonically increasing and concave transformable frontiers. In addition, Monte Carlo simulations suggest that TDEA can substantially improve inefficiency estimation in small samples compared to the standard Banker, Charnes and Cooper model and the free disposal hull (FDH) model.

12.
In this paper, we investigate DEA with interval input-output data. First, we present various extensions of efficiency to interval data and show that 25 of them are essential. Second, we formulate the efficiency test problems as mixed integer programming problems; we prove that 14 of the 25 problems can be reduced to linear programming problems and that the other 11 efficiencies can be tested by solving a finite sequence of linear programming problems. Third, in order to obtain efficiency scores, we extend the SBM model to interval input-output data. Fourth, to moderate a possible positive overassessment by DEA, we introduce the inverted DEA model with interval input-output data. Using the efficiency and inefficiency scores, we propose a classification of DMUs. Finally, we apply the proposed approach to Japanese bank data and demonstrate its advantages.
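For reference, the standard crisp SBM measure that the abstract extends to interval data reads (textbook form, not the interval version itself):

\[
\rho_o \;=\; \min_{\lambda,\,s^-,\,s^+}\;
\frac{1 - \tfrac{1}{m}\sum_{i=1}^{m} s_i^- / x_{io}}
     {1 + \tfrac{1}{s}\sum_{r=1}^{s} s_r^+ / y_{ro}}
\quad \text{s.t.} \quad
x_o = X\lambda + s^-,\qquad
y_o = Y\lambda - s^+,\qquad
\lambda,\, s^-,\, s^+ \ge 0,
\]

with \(m\) inputs and \(s\) outputs; the inverted DEA model mentioned above assesses inefficiency in an analogous way, so that each DMU receives both an efficiency and an inefficiency score.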

13.
This paper considers the problem of interval scale data in the most widely used models of data envelopment analysis (DEA), the CCR and BCC models. Radial models require inputs and outputs measured on the ratio scale. Our focus is on how to deal with interval scale variables, especially when the interval scale variable is a difference of two ratio scale variables, like profit or the decrease/increase in bank accounts. We suggest the use of these ratio scale variables in a radial DEA model.
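To make the suggestion concrete (our reading of the abstract, not a quotation): if profit is the interval scale variable, \(p_j = r_j - c_j\) with revenue \(r_j\) and cost \(c_j\) both on a ratio scale, then \(r_j\) can enter the radial model as an output and \(c_j\) as an input instead of using \(p_j\) itself.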

14.
To address some problems with the original context-dependent data envelopment analysis (DEA), this paper proposes a new version of context-dependent DEA; this version is based on cross-efficiency evaluations. One of the problems with the original context-dependent DEA is that the attractiveness and progress measures only represent the radial distance between the decision-making unit (DMU) under evaluation and the evaluation context. This representation only shows how distinct the DMU is from a single specific DMU on the evaluation context, not from the entire evaluation context overall. Another problem is that the magnitude of attractiveness and progress scores in the original context-dependent DEA may not have significant meanings. It may not be proper to say that a DMU is more attractive simply because it has a higher attractiveness score, for the same reason that the performance of inefficient DMUs cannot be compared with one another simply based on their efficiency scores. We incorporate cross-efficiency evaluations into the context-dependent DEA to overcome the preceding shortcomings of the original context-dependent DEA. We also demonstrate the proposed model's appropriateness and usefulness with an illustrative example.
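For background, the cross-efficiency construction that the proposal builds on is the standard one (generic definition, not this paper's specific aggregation): with \((u_k, v_k)\) the optimal CCR weights of DMU \(k\), the cross-efficiency of DMU \(j\) rated by DMU \(k\) and its average over all raters are

\[
E_{kj} \;=\; \frac{u_k^{\top} y_j}{v_k^{\top} x_j},
\qquad
\bar{E}_j \;=\; \frac{1}{n}\sum_{k=1}^{n} E_{kj},
\]

so each DMU is judged with the weights of all its peers rather than only with its own most favourable weights.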

15.
Benefit-cost analysis is required by law and regulation throughout the federal government. Robert Dorfman (1996) declares: 'Three prominent shortcomings of benefit-cost analysis as currently practiced are (1) it does not identify the population segments that the proposed measure benefits or harms; (2) it attempts to reduce all comparisons to a single dimension, generally dollars and cents; and (3) it conceals the degree of inaccuracy or uncertainty in its estimates.' This paper develops an approach for conducting benefit-cost analysis, derived from data envelopment analysis (DEA), that overcomes each of Dorfman's objections. The models and methodology proposed give decision makers a tool for evaluating alternative policies and projects where there are multiple constituencies who may have conflicting perspectives. The method incorporates multiple incommensurate attributes while allowing for measures of uncertainty, and an application is used to illustrate it. This work was funded by grant N00014-99-1-0719 from the Office of Naval Research.

16.
17.
Data envelopment analysis (DEA) is a method to estimate the relative efficiency of decision-making units (DMUs) performing similar tasks in a production system that consumes multiple inputs to produce multiple outputs. So far, a number of DEA models with interval data have been developed; the CCR, BCC and FDH models with interval data are well known as the basic DEA models with interval data. In this study, we suggest a model called the interval generalized DEA (IGDEA) model, which can treat the stated basic DEA models with interval data in a unified way. In addition, by establishing the theoretical properties of the relationships between the IGDEA model and those DEA models with interval data, we prove that the IGDEA model makes it possible to calculate the efficiency of DMUs incorporating various preference structures of decision makers.

18.
This paper extends the classical cost efficiency (CE) models to include data uncertainty. We believe that many research situations are best described by the intermediate case, where some uncertain input and output data are available. In such cases, the classical cost efficiency models cannot be used, because input and output data appear in the form of ranges. When the data are imprecise in the form of ranges, the cost efficiency measure calculated from the data should be uncertain as well. So, in the current paper, we develop a method for the estimation of upper and lower bounds for the cost efficiency measure in situations of uncertain input and output data. Also, we develop the theory of efficiency measurement so as to accommodate incomplete price information by deriving upper and lower bounds for the cost efficiency measure. The practical application of these bounds is illustrated by a numerical example.
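As a point of reference, the classical (crisp) Farrell cost efficiency that the paper generalizes is, for known input prices \(c_o\) of DMU \(o\) under constant returns to scale (add a convexity constraint for VRS),

\[
C_o^{*} \;=\; \min_{x,\;\lambda \ge 0}\; c_o^{\top} x
\quad \text{s.t.} \quad
X\lambda \le x,\qquad
Y\lambda \ge y_o,
\qquad\text{with}\qquad
\mathrm{CE}_o \;=\; \frac{C_o^{*}}{c_o^{\top} x_o} \in (0,1];
\]

the interval extension replaces each input, output and price by a range, which yields the lower and upper CE bounds described in the abstract.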

19.
We propose new efficiency tests which are based on traditional DEA models and take into account portfolio diversification. The goal is to identify the investment opportunities that perform well without specifying our attitude to risk. We use general deviation measures as the inputs and return measures as the outputs. We discuss the choice of the set of investment opportunities, including portfolios with a limited number of assets. We compare the optimal values (efficiency scores) of all proposed tests, leading to relations between the sets of efficient opportunities, and then discuss the strength of the tests. Finally, we test the efficiency of 25 world financial indices using the new DEA models with CVaR deviation measures.

20.
In this paper we analyze resource allocation, distinguishing between the decision of when to begin allocation and that of over how many periods to apply the resources. We present analytical results for specific production technologies under different returns-to-scale assumptions, under capacity constraints, and for production with technical change. Using a dynamic activity analysis framework, we show how to compute optimal solutions for resource intensity use in general.
