Similar Documents
20 similar documents were retrieved.
1.
This paper extends the classical cost efficiency (CE) models to include data uncertainty. We believe that many research situations are best described by the intermediate case, in which some uncertain input and output data are available. In such cases the classical cost efficiency models cannot be used, because the input and output data appear in the form of ranges. When the data are imprecise in the form of ranges, the cost efficiency measure calculated from them should be uncertain as well. Accordingly, we develop a method for estimating upper and lower bounds of the cost efficiency measure when the input and output data are uncertain. We also extend the theory of efficiency measurement to accommodate incomplete price information by deriving upper and lower bounds for the cost efficiency measure. The practical application of these bounds is illustrated by a numerical example.
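As context for how such bounds can be computed, the following is a minimal sketch (not the authors' exact formulation) of the classical Farrell cost efficiency linear program under constant returns to scale, solved with SciPy; the function name `cost_efficiency` and the data layout are illustrative. Bounds for interval data can then be obtained by re-solving this LP at the most favourable and least favourable realizations of the data.

```python
# Minimal sketch: classical cost efficiency LP (CRS), illustrative names.
import numpy as np
from scipy.optimize import linprog

def cost_efficiency(X, Y, c, j0):
    """X: (m x n) inputs, Y: (s x n) outputs, c: input price vector of DMU j0."""
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: lambda (n) followed by the cost-minimising input vector x (m)
    obj = np.concatenate([np.zeros(n), c])            # minimise c'x
    # X @ lam - x <= 0   and   -Y @ lam <= -y0
    A_ub = np.block([[X, -np.eye(m)],
                     [-Y, np.zeros((s, m))]])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
    min_cost = res.fun                                # minimum attainable cost
    actual_cost = c @ X[:, j0]                        # observed cost of DMU j0
    return min_cost / actual_cost                     # CE in (0, 1]
```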

2.
In Fukuyama [Fukuyama, H., 2000. Returns to scale and scale elasticity in data envelopment analysis. European Journal of Operational Research 125, 93–112], I investigated the mathematical structure of scale elasticity and returns to scale. Soleimani-damaneh and Mostafaee [Soleimani-damaneh, M., Mostafaee, A., in press. A comment on “Returns to scale and scale elasticity in data envelopment analysis”. European Journal of Operational Research. doi:10.1016/j.ejor.2006.11.042] and Zhang [Zhang, B., in press. A Note on Fukuyama (2000). European Journal of Operational Research. doi:10.1016/j.ejor.2006.11.040] claim that some of the results related to homogeneity are incorrect. This note replies to their comments by demonstrating that the results of Fukuyama (2000) remain valid.

3.
In a recent paper, Po, Guh and Yang [Po, R.-W., Guh, Y.-Y., Yang, M.-S., 2009. A new clustering approach using data envelopment analysis. European Journal of Operational Research 199, 276–284] propose a new algorithm for forming clusters from the results of a DEA analysis. This comment explains that the algorithm only generates information that is readily available from the usual DEA results.

4.
This paper considers allocation rules. First, we demonstrate that costs allocated by the Aumann–Shapley and the Friedman–Moulin cost allocation rules are easy to determine in practice using convex envelopment of registered cost data and parametric programming. Second, the linear programming problems involved make it clear that the allocation rules, technically speaking, allocate the non-zero value of the dual variable for a convexity constraint onto the output vector. Hence, the allocation rules can also be used to allocate inefficiencies in non-parametric efficiency measurement models such as Data Envelopment Analysis (DEA). The convexity constraint of the BCC model introduces a non-zero slack in the objective function of the multiplier problem, and we show that the cost allocation rules discussed in this paper are candidates for allocating this slack value onto the input (or output) variables, thereby enabling a full allocation of the inefficiency onto the input (or output) variables, as in the CCR model.
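For reference, a hedged sketch of the standard input-oriented BCC multiplier problem (sign conventions vary across texts; the notation here is illustrative and not necessarily the authors'), in which the free variable u_0, the dual of the convexity constraint, is the quantity the allocation rules distribute:

\[
\max_{u,\,v,\,u_0}\; u^{\top} y_o - u_0
\quad \text{s.t.} \quad
v^{\top} x_o = 1, \qquad
u^{\top} y_j - u_0 \le v^{\top} x_j \;\; (j = 1, \dots, n), \qquad
u, v \ge 0, \; u_0 \text{ free}.
\]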

5.
To impose the law of one price (LoOP) restrictions, which state that all firms face the same input prices, Kuosmanen, Cherchye, and Sipiläinen (2006) developed top-down and bottom-up approaches to maximizing industry-level cost efficiency. However, the optimal input shadow prices generated by these approaches need not be unique, which influences the distribution of the efficiency indices at the individual firm level. To solve this problem, we develop a pair of two-level mathematical programming models to calculate the upper and lower bounds of cost efficiency for each firm in the case of non-unique LoOP prices, while keeping the industry cost efficiency optimal. Furthermore, a base-enumerating algorithm is proposed to solve the lower-bound models of the cost efficiency measure, which are bi-level linear programs and NP-hard problems. Lastly, a numerical example is used to demonstrate the proposed approach.

6.
In a recent paper, Li and Cheng [Li, S.K., Cheng, Y.S., 2007. Solving the puzzles of structural efficiency. European Journal of Operational Research 180(2), 713–722] developed a shadow price model to solve the existing puzzles of structural efficiency theoretically. However, we observe that the optimal shadow price vector in the model of Li and Cheng (2007) is not always unique. As a result, the decomposition of structural efficiency is generated arbitrarily, depending on the shadow price vector chosen. An example with multiple inputs and outputs is used to illustrate this phenomenon.

7.
Benefit-cost analysis is required by law and regulation throughout the federal government. Robert Dorfman (1996) declares: ‘Three prominent shortcomings of benefit-cost analysis as currently practiced are (1) it does not identify the population segments that the proposed measure benefits or harms, (2) it attempts to reduce all comparisons to a single dimension, generally dollars and cents, and (3) it conceals the degree of inaccuracy or uncertainty in its estimates.’ This paper develops an approach for conducting benefit-cost analysis derived from data envelopment analysis (DEA) that overcomes each of Dorfman's objections. The models and methodology proposed give decision makers a tool for evaluating alternative policies and projects where there are multiple constituencies who may have conflicting perspectives. The method incorporates multiple incommensurate attributes while allowing for measures of uncertainty. An application is used to illustrate the method. This work was funded by grant N00014-99-1-0719 from the Office of Naval Research.

8.
Rough set theory provides a powerful tool for dealing with uncertainty in data. The application of a variety of rough set models to mining data stored in a single table has been widely studied. However, the analysis of data stored in a relational structure using rough sets is still an open research area. This paper proposes compound approximation spaces, and constrained versions of them, intended for handling uncertainty in relational data. The proposed spaces extend tolerance approximation spaces to the relational case. Compared with compound approximation spaces, the constrained versions make it possible to derive new knowledge from relational data. The proposed approach can improve the mining of relational data that are uncertain, incomplete, or inconsistent.
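To make the tolerance-based starting point concrete, here is a small generic Python sketch of lower and upper approximations under a tolerance (reflexive, symmetric) relation; the compound and constrained approximation spaces proposed in the paper extend this single-table idea to relational data and are not reproduced here. All names are illustrative.

```python
# Generic tolerance-based rough approximations (single-table case), illustrative only.
def tolerance_class(obj, universe, tol):
    """All objects tolerant (similar) to obj; tol is a reflexive, symmetric predicate."""
    return {y for y in universe if tol(obj, y)}

def approximations(target, universe, tol):
    # lower approximation: objects whose whole tolerance class lies inside the concept
    lower = {x for x in universe if tolerance_class(x, universe, tol) <= target}
    # upper approximation: objects whose tolerance class intersects the concept
    upper = {x for x in universe if tolerance_class(x, universe, tol) & target}
    return lower, upper

# toy example: integers are "tolerant" when they differ by at most 1
U = set(range(10))
concept = {2, 3, 4}
lo, hi = approximations(concept, U, lambda a, b: abs(a - b) <= 1)
print(lo, hi)   # lower = {3}, upper = {1, 2, 3, 4, 5}
```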

9.
This paper briefly reviews the existing methods of capacity utilization measurement in a nonparametric framework from an economic perspective, and then suggests an alternative in light of their limitations. In the spirit of the work by Coelli et al. [Coelli, T.J., Grifell-Tatje, E., Perelman, S., 2002. Capacity utilisation and profitability: A decomposition of short run profit efficiency. International Journal of Production Economics 79, 261–278], we propose two methods, radial and non-radial, to decompose input-based physical (technological) capacity utilization into various meaningful components, viz., technical inefficiency, ray economic capacity utilization and optimal capacity idleness. A case study of the Indian banking industry is used to illustrate the potential application of these two decomposition methods. Our two broad empirical findings are that, first, the competition created after the financial sector reforms generates high efficiency growth and reduces excess capacity; and second, the gap between the short-run cost and the actual cost is higher for the nationalized banks than for the private banks, indicating that the former, though older, do not reflect their learning experience in their cost-minimizing behavior.

10.
Applications of traditional data envelopment analysis (DEA) models require knowledge of crisp input and output data. However, real-world problems often involve imprecise or ambiguous data. In this paper, the problem of accounting for uncertainty in the equality constraints is analyzed and, using the equivalent form of the CCR model, a suitable robust DEA model is derived for analyzing the efficiency of decision-making units (DMUs) under the assumption of uncertainty in both the input and output spaces. The new model is based on the robust optimization approach. Using the proposed model, it is possible to evaluate the efficiency of the DMUs in the presence of uncertainty in fewer steps than with other models. In addition, using the new robust DEA model and the envelopment form of the CCR model, two linear robust super-efficiency models for the complete ranking of DMUs are proposed. Two case studies from different contexts are used as numerical examples in order to compare the proposed model with other approaches. The examples also illustrate various possible applications of the new models.
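As a simplified illustration of the worst-case idea behind robust/interval DEA (a Despotis–Smirlis-style pessimistic bound, not the authors' exact robust counterpart), the sketch below pushes each interval coefficient to the value least favourable to the evaluated DMU and solves the resulting CCR envelopment LP with SciPy; names and data layout are illustrative.

```python
# Pessimistic-bound CCR sketch under box/interval data, illustrative only.
import numpy as np
from scipy.optimize import linprog

def worst_case_ccr(X_lo, X_hi, Y_lo, Y_hi, j0):
    """Interval inputs [X_lo, X_hi] (m x n) and outputs [Y_lo, Y_hi] (s x n)."""
    m, n = X_lo.shape
    s = Y_lo.shape[0]
    # Peers look as strong as possible (low inputs, high outputs) ...
    X, Y = X_lo.copy(), Y_hi.copy()
    # ... while the evaluated DMU uses its least favourable data everywhere.
    X[:, j0], Y[:, j0] = X_hi[:, j0], Y_lo[:, j0]
    x0, y0 = X_hi[:, j0], Y_lo[:, j0]
    # variables: theta, lambda_1..lambda_n ; minimise theta
    obj = np.concatenate([[1.0], np.zeros(n)])
    # X @ lam - theta * x0 <= 0 ;  -Y @ lam <= -y0
    A_ub = np.block([[-x0.reshape(-1, 1), X],
                     [np.zeros((s, 1)), -Y]])
    b_ub = np.concatenate([np.zeros(m), -y0])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun   # lower bound on the efficiency of DMU j0
```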

11.
The robust spanning tree problem is a variation, motivated by telecommunications applications, of the classic minimum spanning tree problem. In the robust spanning tree problem, edge costs lie in an interval instead of having a fixed value; the interval numbers model uncertainty about the exact cost values. A robust spanning tree is a spanning tree whose total cost minimizes the maximum deviation from the optimal spanning tree over all realizations of the edge costs. This robustness concept is formalized in mathematical terms and is used to drive the optimization. This paper describes a new exact method, based on Benders decomposition, for the robust spanning tree problem with interval data. Computational results highlight the efficiency of the new method, which is shown to be very fast on all the benchmarks considered, and in particular on those that were harder to solve for the previously known methods.
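To illustrate the interval-cost robustness criterion (the paper's Benders decomposition itself is not reproduced), the sketch below evaluates the regret of a candidate spanning tree using the standard worst-case scenario characterization: edges in the tree take their upper costs, all other edges take their lower costs, and the regret is the tree's cost minus the optimal tree cost in that scenario. It assumes the third-party networkx library; the attribute names 'lo' and 'hi' are illustrative.

```python
# Regret of a candidate spanning tree under interval edge costs, illustrative only.
import networkx as nx

def regret(G, tree_edges):
    """G: graph whose edges carry 'lo' and 'hi' interval costs.
    tree_edges: set of frozenset({u, v}) forming a spanning tree of G."""
    # Worst-case scenario for this tree: its edges at the upper bound, others at the lower bound.
    scenario = nx.Graph()
    for u, v, d in G.edges(data=True):
        w = d["hi"] if frozenset((u, v)) in tree_edges else d["lo"]
        scenario.add_edge(u, v, weight=w)
    tree_cost = sum(scenario[u][v]["weight"]
                    for e in tree_edges for u, v in [tuple(e)])
    opt_cost = nx.minimum_spanning_tree(scenario).size(weight="weight")
    return tree_cost - opt_cost   # maximum deviation from the optimal tree
```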

12.
This research attempts to solve the problem of dealing with missing data via the interface of Data Envelopment Analysis (DEA) and human behavior. Missing data is under continuing discussion in various research fields, especially those that are highly dependent on data. In both practice and research, some necessary data often cannot be obtained, owing, for example, to procedural factors or a lack of needed responses. This raises the question of how to deal with missing data. In this paper, modified DEA models are developed to estimate an appropriate value for missing data within its interval, based on DEA and the Inter-dimensional Similarity Halo Effect. The estimated value of the missing data is determined by the General Impression of the original DEA efficiency. To evaluate the effectiveness of this method, an impact factor is proposed. In addition, the advantages of the proposed approach are illustrated in comparison with previous methods.

13.
Among the most important pieces of information provided by data envelopment analysis models are the cost, revenue and profit efficiencies of decision making units (DMUs). Cost efficiency is defined as the ratio of minimum cost to current cost, while revenue efficiency is defined as the ratio of maximum revenue to current revenue of the DMU. This paper presents a framework in which data envelopment analysis (DEA) is used to measure cost, revenue and profit efficiency with fuzzy data. In such cases the classical models cannot be used, because the input and output data appear in the form of ranges. When the data are fuzzy, the cost, revenue and profit efficiency measures calculated from them should be uncertain as well. Fuzzy DEA models have emerged as a class of DEA models that account for imprecise inputs and outputs of DMUs. Although several approaches for solving fuzzy DEA models have been developed, numerous deficiencies, including those of the α-cut approaches and the types of fuzzy numbers handled, still need to be addressed. The proposed scheme adopts a vector-based evaluation method for the fuzzy model. This paper proposes generalized cost, revenue and profit efficiency models in fuzzy data envelopment analysis. The practical application of these models is illustrated by a numerical example.

14.
Conventional data envelopment analysis (DEA) models assume real-valued inputs and outputs. On many occasions, however, some inputs and/or outputs can only take integer values. In some cases, rounding the DEA solution to the nearest whole number can lead to misleading efficiency assessments and performance targets. This paper develops the axiomatic foundation for DEA in the case of integer-valued data, introducing the new axioms of “natural disposability” and “natural divisibility”. We derive a DEA production possibility set that satisfies the minimum extrapolation principle under our refined set of axioms. We also present a mixed integer linear programming formulation for computing efficiency scores. An empirical application to Iranian university departments illustrates the approach.
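A simplified mixed integer programming sketch in the spirit of integer DEA is shown below (hedged: the paper's exact model, derived from the natural disposability and natural divisibility axioms, may differ). It forces the projected input targets to be integers in an input-oriented, constant-returns-to-scale model, using the open-source PuLP modeller; all names are illustrative.

```python
# Simplified integer DEA sketch: input targets restricted to integers.
import pulp

def integer_dea(X, Y, j0):
    """X: m x n list of input rows, Y: s x n list of output rows, j0: evaluated DMU."""
    m, n = len(X), len(X[0])
    s = len(Y)
    prob = pulp.LpProblem("integer_dea", pulp.LpMinimize)
    theta = pulp.LpVariable("theta", lowBound=0)
    lam = [pulp.LpVariable(f"lam_{j}", lowBound=0) for j in range(n)]
    xt = [pulp.LpVariable(f"xt_{i}", lowBound=0, cat="Integer") for i in range(m)]
    prob += theta                                     # minimise the contraction factor
    for i in range(m):
        # reference point dominates the integer target, which lies below theta * x0
        prob += pulp.lpSum(lam[j] * X[i][j] for j in range(n)) <= xt[i]
        prob += xt[i] <= theta * X[i][j0]
    for r in range(s):
        prob += pulp.lpSum(lam[j] * Y[r][j] for j in range(n)) >= Y[r][j0]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(theta)
```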

15.
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision making units, such as firms or public sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs and outputs. The subsequent literature has taken various approaches to enable DEA to deal with negative data.

16.
We consider the problem of sorting a permutation using a network of data structures, as introduced by Knuth and Tarjan. In general, the model as considered previously was restricted to networks that are directed acyclic graphs (DAGs) of stacks and/or queues. In this paper we study which are the smallest general graphs that can sort an arbitrary permutation and what their efficiency is. We show that certain two-node graphs can sort in time Θ(n^2) and that no simpler graph can sort all permutations. We then show that certain three-node graphs sort in time Ω(n^{3/2}), and that there exist graphs of k nodes which can sort in time Θ(n log_k n), which is optimal.
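As a tiny illustration of sorting through a network of data structures, the sketch below handles the simplest case of a single stack between input and output: the greedy rule "pop whenever the stack top is the next value needed" succeeds exactly on the stack-sortable (231-avoiding) permutations. The two- and three-node networks studied in the paper are richer and can sort every permutation; this fragment is only meant to fix ideas.

```python
# Single-stack sorting test: can perm be output in increasing order through one stack?
def stack_sortable(perm):
    stack, need = [], 1            # 'need' is the next value the sorted output requires
    for x in perm:
        stack.append(x)            # push the next input element
        while stack and stack[-1] == need:
            stack.pop()            # greedily emit whenever the top is the needed value
            need += 1
    return not stack               # empty stack <=> the permutation was fully sorted

print(stack_sortable([3, 1, 2]), stack_sortable([2, 3, 1]))   # True False
```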

17.
Data envelopment analysis (DEA) is a method for estimating the relative efficiency of decision-making units (DMUs) performing similar tasks in a production system that consumes multiple inputs to produce multiple outputs. A number of DEA models with interval data have been developed so far; the CCR, BCC and FDH models with interval data are well known as the basic DEA models of this kind. In this study, we suggest a model with interval data, called the interval generalized DEA (IGDEA) model, which can treat the basic DEA models with interval data mentioned above in a unified way. In addition, by establishing the theoretical properties of the relationships between the IGDEA model and those DEA models with interval data, we prove that the IGDEA model makes it possible to calculate the efficiency of DMUs while incorporating various preference structures of decision makers.

18.
The Law of One Price (LoOP) states that all firms face the same prices for their inputs and outputs under market equilibrium. Taken here as a normative condition for ‘efficiency prices’, this law has powerful implications for productive efficiency analysis that have remained unexploited thus far. This paper shows how LoOP-based weight restrictions can be incorporated in Data Envelopment Analysis (DEA). Utilizing the relation between industry-level and firm-level cost efficiency measures, we propose to apply a set of input prices that is common to all firms and that maximizes the cost efficiency of the industry. Our framework allows for firm-specific output weights and for variable returns to scale, and preserves the linear programming structure of standard DEA. We apply the proposed methodology to the evaluation of the research efficiency of the economics departments of Dutch universities. This application shows that the methodology is computationally tractable for practical efficiency analysis and that it helps to deepen the DEA analysis.
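A hedged sketch of the industry-level idea described above (notation illustrative, not necessarily the authors'): with a common input price vector w, firm j's minimum cost is C_j(w), industry cost efficiency is the ratio of the summed minimum costs to the aggregate observed cost, and the LoOP prices are chosen to maximize this ratio under a price normalization, after which each firm is evaluated at those prices.

\[
CE^{\mathrm{ind}}(w) \;=\; \frac{\sum_{j=1}^{n} C_j(w)}{w^{\top} \sum_{j=1}^{n} x_j},
\qquad
C_j(w) \;=\; \min_{x} \{\, w^{\top} x : x \text{ can produce } y_j \,\},
\qquad
w^{*} \in \arg\max_{w \ge 0} CE^{\mathrm{ind}}(w).
\]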

19.
Conventional data envelopment analysis (DEA), which measures the efficiency of a set of decision making units (DMUs), requires the input/output data to be constants. In reality, however, many observations are stochastic in nature; consequently, the resulting efficiencies are stochastic as well. This paper discusses how to obtain the efficiency distribution of each DMU via a simulation technique. The case of Taiwanese commercial banks shows that, first, the number of replications in the simulation analysis has little effect on the estimation of efficiency means, yet 1000 replications are recommended to produce reliable efficiency means and 2000 replications for a good estimation of the efficiency distributions. Second, the conventional practice of using average data to represent stochastic variables results in efficiency scores that differ from the mean efficiencies of the presumably true efficiency distributions estimated from simulation. Third, the interval-data approach produces true efficiency intervals, yet the intervals are too wide to provide valuable information. In conclusion, when multiple observations are available for each DMU, the stochastic-data approach produces more reliable and informative results than the average-data and interval-data approaches do.
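A minimal Monte Carlo sketch of the simulation idea (illustrative only: the sampling distributions, the bank data and the 1000/2000-replication guidance come from the paper, not from this code): draw the stochastic inputs and outputs, solve a standard CCR LP for each draw, and accumulate the resulting efficiency distribution.

```python
# Monte Carlo efficiency distribution via repeated CCR LPs (toy data, illustrative).
import numpy as np
from scipy.optimize import linprog

def ccr(X, Y, j0):
    m, n = X.shape
    s = Y.shape[0]
    obj = np.concatenate([[1.0], np.zeros(n)])          # minimise theta
    A_ub = np.block([[-X[:, j0].reshape(-1, 1), X],      # X @ lam <= theta * x0
                     [np.zeros((s, 1)), -Y]])            # Y @ lam >= y0
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

rng = np.random.default_rng(0)
mean_X, sd_X = np.array([[10., 12., 9.]]), 1.0           # 1 input, 3 DMUs (toy data)
mean_Y, sd_Y = np.array([[5., 7., 6.]]), 0.5             # 1 output
reps = 1000                                              # replication count per the paper's guidance
effs = np.empty(reps)
for r in range(reps):
    X = rng.normal(mean_X, sd_X)
    Y = rng.normal(mean_Y, sd_Y)
    effs[r] = ccr(X, Y, j0=0)
print(effs.mean(), np.percentile(effs, [2.5, 97.5]))     # mean efficiency and a 95% interval
```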

20.
Data are often affected by uncertainty, which is usually understood as randomness. Nonetheless, other sources of uncertainty may occur; in particular, the empirical information may also be affected by imprecision. In these cases, too, it can be fruitful to analyze the underlying structure of the data. In this paper we address the problem of summarizing a sample of three-way imprecise data. In order to manage the different sources of uncertainty, a twofold strategy is adopted. On the one hand, imprecise data are transformed into fuzzy sets by means of the so-called fuzzification process. The resulting fuzzy data are then analyzed by suitable generalizations of the Tucker3 and CANDECOMP/PARAFAC models, the two most popular three-way extensions of Principal Component Analysis. On the other hand, the statistical validity of the obtained underlying structure is evaluated by (nonparametric) bootstrapping. A simulation experiment is performed to assess whether the use of fuzzy data is helpful in summarizing three-way uncertain data. Finally, to show how our models work in practice, an application to real data is discussed.
