Similar Literature
20 similar documents found (search time: 31 ms)
1.
In data envelopment analysis (DEA), the cross-efficiency evaluation method introduces a cross-efficiency matrix in which the units are self- and peer-evaluated. A problem that can reduce the usefulness of the cross-efficiency evaluation method is that the cross-efficiency scores may not be unique due to the presence of alternate optima. It is therefore recommended that secondary goals be introduced in cross-efficiency evaluation. In this paper we propose the symmetric weight assignment technique (SWAT), which does not affect feasibility and rewards decision making units (DMUs) that make a symmetric selection of weights. A numerical example is solved by the proposed method and its solution is compared with those of alternative approaches.
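For context, here is the standard cross-efficiency score that secondary goals such as SWAT act on, in the usual CCR notation (weights u, v chosen by the evaluating DMU d; inputs x, outputs y). This is the textbook definition, not a formula quoted from the paper:

\[
E_{dj} \;=\; \frac{\sum_{r} u_{rd}\, y_{rj}}{\sum_{i} v_{id}\, x_{ij}},
\qquad
\bar{E}_{j} \;=\; \frac{1}{n} \sum_{d=1}^{n} E_{dj},
\]

where E_{dj} is the efficiency of DMU j evaluated with the optimal weights of DMU d and \(\bar{E}_{j}\) averages column j of the cross-efficiency matrix. Because the optimal weights (u_d, v_d) need not be unique, E_{dj} is not unique either, which is exactly the ambiguity a secondary goal is meant to resolve.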

2.
Data envelopment analysis (DEA) evaluates the performance of decision making units (DMUs). When DEA models are used to calculate the efficiency of DMUs, a number of them may share the efficiency score 1. In order to choose a winner among the DEA-efficient candidates, several methods have been proposed, but most of them are unable to rank non-extreme efficient DMUs. Since research on ranking non-extreme efficient units is very limited and incomplete, we develop a new method to rank these DMUs in this paper. Suppose DMU_o is a non-extreme efficient DMU under evaluation. By the representation theorem, DMU_o can be written as a convex combination of extreme efficient DMUs, so we expect the performance of DMU_o to be similar to the performance of that convex combination. Consequently, the ranking score of DMU_o is calculated as the corresponding convex combination of the ranking scores of these extreme efficient DMUs, which determines the rank of the unit.
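A compact statement of the ranking rule sketched above, assuming (x_o, y_o) are the inputs/outputs of DMU_o, E is the set of extreme efficient DMUs, and R_j is a ranking score already available for each extreme efficient DMU j (notation introduced here for illustration):

\[
(x_o, y_o) \;=\; \sum_{j \in E} \lambda_j\, (x_j, y_j),
\qquad \lambda_j \ge 0,\ \sum_{j \in E} \lambda_j = 1,
\qquad
R_o \;=\; \sum_{j \in E} \lambda_j R_j .
\]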

3.
We provide an alternative framework for solving data envelopment analysis (DEA) models which, in comparison with the standard linear programming (LP) based approach that solves one LP for each decision making unit (DMU), delivers much more information. By projecting out all the variables which are common to all LP runs, we obtain a formula into which we can substitute the inputs and outputs of each DMU in turn in order to obtain its efficiency number and all possible primal and dual optimal solutions. The method of projection we use is Fourier–Motzkin (F–M) elimination. This provides us with the finite number of extreme rays of the elimination cone. These rays give the dual multipliers, which can be interpreted as weights that apply to the inputs and outputs of particular DMUs. As the approach provides all the extreme rays of the cone, multiple sets of weights, when they exist, are explicitly provided. Several applications are presented. It is shown that the output from the F–M method improves on existing methods of (i) establishing the returns to scale status of each DMU, (ii) calculating cross-efficiencies and (iii) dealing with weight flexibility. The method also demonstrates that the same weightings apply to all DMUs having the same comparators. In addition it is possible to construct the skeleton of the efficient frontier of efficient DMUs. Finally, our experiments clearly indicate that the extra computational burden is not excessive for most practical problems.
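To illustrate the projection step this framework relies on (a sketch of generic Fourier–Motzkin elimination, not the authors' DEA-specific formulation), here is one elimination step for a system A x ≤ b in Python; the function name and the toy example are illustrative only:

```python
import numpy as np

def fm_eliminate(A, b, k):
    """One Fourier-Motzkin step: eliminate variable x_k from A x <= b and
    return (A', b') describing the projection onto the remaining variables."""
    pos = [i for i in range(len(A)) if A[i, k] > 1e-12]
    neg = [i for i in range(len(A)) if A[i, k] < -1e-12]
    zero = [i for i in range(len(A)) if abs(A[i, k]) <= 1e-12]

    rows, rhs = [], []
    # Constraints not involving x_k are kept unchanged.
    for i in zero:
        rows.append(np.delete(A[i], k))
        rhs.append(b[i])
    # Each (upper bound, lower bound) pair on x_k is combined so that x_k cancels.
    for i in pos:
        for j in neg:
            rows.append(np.delete(A[i] / A[i, k] - A[j] / A[j, k], k))
            rhs.append(b[i] / A[i, k] - b[j] / A[j, k])
    return np.array(rows), np.array(rhs)

# Tiny example: project {x + y <= 4, -x <= 0, -y <= 0} by eliminating x.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])
A1, b1 = fm_eliminate(A, b, 0)
print(A1, b1)   # -y <= 0 and y <= 4, i.e. the projection 0 <= y <= 4
```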

4.
In efficiency analysis, the assessment of the performance of Decision-Making Units (DMUs) relies on the selection of the direction along which the distance from the efficient frontier is measured. Directional Distance Functions (DDFs) represent a flexible way to gauge the inefficiency of DMUs, since permitting the selection of a direction towards the efficient frontier is often useful in empirical applications. Indeed, many papers in the literature have proposed specific DDFs suitable for different contexts of application. Nevertheless, the selection of a direction implies the choice of an efficiency target that is imposed on all the analysed DMUs, and there are many situations in which there is no a priori economic or managerial rationale to impose a subjective efficiency target. In this paper we propose a data-driven approach to find an 'objective' direction along which to gauge the inefficiency of each DMU. Our approach takes into account the heterogeneity of DMUs and the diverse contexts that may influence their input and/or output mixes. Our method is also a data-driven technique for benchmarking each DMU. We describe how to implement our framework and illustrate its usefulness with simulated and real data sets.
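For reference, the directional distance function that such approaches parameterise, in its standard form with technology set T and direction g = (g_x, g_y) (textbook definition, not quoted from the paper):

\[
\vec{D}_T(x, y; g_x, g_y) \;=\; \sup\{\, \beta \ge 0 \;:\; (x - \beta g_x,\; y + \beta g_y) \in T \,\},
\]

so the choice of (g_x, g_y) fixes the efficiency target; the paper's point is to derive this direction from the data for each DMU rather than impose it a priori.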

5.
Since many decision making units (DMUs) are classified as efficient when evaluated by traditional data envelopment analysis (DEA) models, a large number of methods for fully ranking both efficient and inefficient DMUs have been proposed. In this paper a ranking method is suggested which differs fundamentally from previous methods, although its models are similar to traditional DEA models such as BCC and the additive model. In this ranking method, DMUs are compared against a full-inefficient frontier, which is defined in this paper. Many models can be designed from this point of view, and we present a radial and a slacks-based one. This method can be used to rank all DMUs to obtain analytic information about the system, and also to rank only the efficient DMUs in order to discriminate between them.

6.
In this paper we study properties of complex symmetric operators. In particular, we prove that every complex symmetric operator having property (β) or (δ) is decomposable. Moreover, we show that a complex symmetric operator T has Dunford's property (C) and satisfies Weyl's theorem if and only if its adjoint does.

7.
Trimming is a standard method to decrease the effect of large sample elements in statistical procedures, used, e.g., for constructing robust estimators and tests. Trimming also provides a profound insight into the partial sum behavior of i.i.d. sequences. There is a wide and nearly complete asymptotic theory of trimming, with one remarkable gap: no satisfactory criteria for the central limit theorem for modulus trimmed sums have been found, except for symmetric random variables. In this paper we investigate this problem in the case when the variables are in the domain of attraction of a stable law. Our results show that for modulus trimmed sums the validity of the central limit theorem depends sensitively on the behavior of the tail ratio P(X>t)/P(|X|>t) of the underlying variable X as t → ∞, and that, paradoxically, increasing the number of trimmed elements does not generally improve partial sum behavior.

8.
Data Envelopment Analysis (DEA) is a mathematical model that evaluates the relative efficiency of Decision Making Units (DMUs) with multiple inputs and outputs. In some applications of DEA, ranking of the DMUs is important, and a number of approaches have been introduced for this purpose. Among them is the cross-efficiency method, which utilizes the cross-efficiency matrix and averages the cross-efficiency scores of each DMU; ranking is then performed based on the average efficiency scores. In this paper, we propose a new way of handling the information from the cross-efficiency matrix. Based on the notion that the ranking order is more important than the individual efficiency score, the cross-efficiency matrix is converted to a cross-ranking matrix: a cross-efficiency matrix in which each efficiency score is replaced with the ranking order of that score with respect to the other efficiency scores in its column. By doing so, each DMU assumes the role of a decision maker, and how it voted for or ranked the other DMUs is reflected in its respective column of the cross-ranking matrix. These votes are then aggregated using a preference aggregation method to determine the overall ranking of the DMUs. Comparison with an existing cross-efficiency method indicates that the proposed method gives relatively better results.
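A minimal sketch of the conversion described above, with a plain Borda count standing in for the preference-aggregation step (the paper's actual aggregation method, the matrix orientation used here, and the function names are assumptions for illustration):

```python
import numpy as np

def cross_ranking_matrix(E):
    """Replace each cross-efficiency score by its rank within its column
    (rows = rated DMUs, columns = rating DMUs; rank 1 = best in that column)."""
    n = E.shape[0]
    R = np.zeros_like(E, dtype=int)
    for j in range(E.shape[1]):
        order = np.argsort(-E[:, j])      # row indices by descending efficiency
        R[order, j] = np.arange(1, n + 1)
    return R

def borda_ranking(R):
    """Aggregate the columns as votes with a Borda count: a DMU ranked r in a
    column earns n - r points; DMUs are ordered by total points (best first)."""
    n = R.shape[0]
    points = (n - R).sum(axis=1)
    return list(np.argsort(-points))

E = np.array([[1.00, 0.85, 0.90],
              [0.80, 1.00, 0.70],
              [0.95, 0.75, 1.00]])
R = cross_ranking_matrix(E)
print(R)                 # column-wise ranks of the cross-efficiency scores
print(borda_ranking(R))  # aggregated order of DMU indices, best first
```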

9.
Based on the minimal reduction strategy, Yang et al. (2011) developed a fixed-sum output data envelopment analysis (FSODEA) approach to evaluate the performance of decision-making units (DMUs) with fixed-sum outputs. Under such a strategy, however, all DMUs compete over the fixed-sum outputs with “no memory”, which results in evaluations against differing efficient frontiers. To address this problem, we propose an equilibrium efficiency frontier data envelopment analysis (EEFDEA) approach, by which all DMUs with fixed-sum outputs can be evaluated against a common platform (or equilibrium efficient frontier). The proposed approach has two stages. Stage 1 constructs a common evaluation platform via two strategies: an extended minimal adjustment strategy and an equilibrium competition strategy. The former ensures that originally efficient DMUs remain efficient, guaranteeing the existence of a common evaluation platform; the latter makes all DMUs reach a common equilibrium efficient frontier. Stage 2 then evaluates all DMUs with their original inputs and outputs against the common equilibrium efficient frontier. Finally, we illustrate the proposed approach with two numerical examples.

10.
One problem discussed frequently in the data envelopment analysis (DEA) literature is the lack of discrimination in DEA applications, in particular when there are insufficient DMUs or the number of inputs and outputs is too high relative to the number of units. This is an additional reason for the growing interest in complete ranking techniques. In this paper a method for ranking extreme efficient decision making units (DMUs) is proposed. The method uses the L∞ (or Tchebycheff) norm, and it appears to have some advantages over existing methods because it removes difficulties present in some of them, such as the Andersen and Petersen [2] (AP) model, which is sometimes infeasible. The suggested model is always feasible.
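For reference, the Tchebycheff norm used above is the standard L∞ (maximum) norm; this is textbook notation, not a formula from the paper:

\[
\|z\|_{\infty} \;=\; \max_{i} |z_i|,
\]

so a ranking model built on it is governed by the largest single coordinate-wise deviation between the evaluated DMU and the frontier.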

11.
The aim of this paper is to optimize the benchmarks and prioritize the variables of decision-making units (DMUs) in a data envelopment analysis (DEA) model. In DEA, there is no scope to differentiate efficient DMUs or to identify threats to them from the inefficient set. Although benchmarks in DEA allow for identification of targets for improvement, they do not prioritize targets or prescribe a level-wise improvement path for inefficient units. This paper presents a decision tree based DEA model to enhance the capability and flexibility of classical DEA. The approach is illustrated through its application to the container port industry. The method proceeds by constructing multiple efficient frontiers to identify threats for efficient/inefficient DMUs, provide a level-wise reference set for inefficient terminals, and diagnose the factors that differentiate the performance of inefficient DMUs. This is followed by identification of the significant attributes crucial for improvement at different performance levels. The application of this approach will enable decision makers to identify threats and opportunities facing their business and to improve inefficient units relative to their maximum capacity. In addition, it will help them make intelligent investments in the target factors that can improve their firms' productivity.

12.
Rotation symmetric (RotS) Boolean functions have been used as components of different cryptosystems. This class of Boolean functions is invariant under circular translation of indices. Using Burnside's lemma it can be seen that the number of n-variable rotation symmetric Boolean functions is 2^{g_n}, where g_n = (1/n) ∑_{t|n} φ(t) 2^{n/t} and φ(·) is the Euler phi-function. In this paper, we find the number of short and long cycles of elements of F_2^n having fixed weight, under the RotS action. As a consequence we obtain the number of homogeneous RotS functions having algebraic degree w. Our results greatly reduce the search space of RotS functions, and we successfully analyzed important cryptographic properties of such functions by executing computer programs. We study RotS bent functions up to 10 variables and observe (experimentally) that there is no homogeneous rotation symmetric bent function having degree > 2. Further, we studied RotS functions on 5, 6, and 7 variables by computer search for correlation immunity and propagation characteristics, and found some functions with very good cryptographic properties that were not known earlier.
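A small script that simply evaluates the count quoted above (the Burnside orbit count g_n and the resulting number 2^{g_n} of n-variable RotS functions):

```python
from math import gcd

def euler_phi(t):
    """Euler's phi-function by direct counting (fine for small t)."""
    return sum(1 for k in range(1, t + 1) if gcd(k, t) == 1)

def g(n):
    """g_n = (1/n) * sum over divisors t of n of phi(t) * 2^(n/t):
    the number of rotation classes of n-bit input vectors (Burnside)."""
    return sum(euler_phi(t) * 2 ** (n // t) for t in range(1, n + 1) if n % t == 0) // n

# The number of n-variable rotation symmetric Boolean functions is 2^g(n).
for n in range(1, 9):
    print(n, g(n), 2 ** g(n))
```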

13.
In this paper we use the additive efficiency decomposition approach in two-stage data envelopment analysis. Initially, we evaluate the variable returns to scale version and face a structural difficulty of the model: in an extreme case, the weights ξ1 or ξ2, which represent the relative importance of the performance of the first and second stages, respectively, become zero for a number of decision making units (DMUs), so the individual stage efficiencies for these DMUs are undefined. We propose a weight assurance region model to restrict ξ1 and ξ2, which ensures that both weights are always positive and therefore that the individual stage efficiencies are always defined. Furthermore, the proposed model is appropriate for policy making in the presence of a priori information about the relative importance of each stage in the overall process. We employ the new model to evaluate the efficiency of secondary education in 65 countries and construct an overall 'school efficiency' index: in the first stage we measure the 'learning environment efficiency' and in the second the 'student's performance efficiency'.
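A sketch of the structure the abstract refers to, with a generic weight assurance region written as bounds on the weight ratio (the specific bounds and model details are the paper's and are not reproduced here; in the standard additive decomposition the stage weights are normalised to sum to one):

\[
\theta_{\mathrm{overall}} \;=\; \xi_1 \theta_1 + \xi_2 \theta_2,
\qquad \xi_1 + \xi_2 = 1,\ \xi_1, \xi_2 \ge 0,
\qquad 0 < L \;\le\; \frac{\xi_1}{\xi_2} \;\le\; U < \infty,
\]

where θ1 and θ2 are the stage efficiencies; bounding the ratio ξ1/ξ2 away from 0 and ∞ keeps both weights strictly positive, so both stage efficiencies remain defined.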

14.
A finite group whose irreducible complex characters are rational valued is called a rational group. Thus, G is a rational group if and only if N_G(⟨x⟩)/C_G(⟨x⟩) ≅ Aut(⟨x⟩) for every x ∈ G. For example, all symmetric groups and their Sylow 2-subgroups are rational groups. The structure of rational groups has been studied extensively, but a general classification of rational groups has not yet been achieved. In this paper, we show that a full symmetric group of prime degree does not have any rational transitive proper subgroup and that a rational doubly transitive permutation group containing a full cycle is the full symmetric group. We also obtain several results related to the study of rational groups.

15.
The concept of efficiency in data envelopment analysis (DEA) is defined as the ratio of the weighted sum of outputs to the weighted sum of inputs. In order to calculate the maximum efficiency score, each decision making unit (DMU)'s inputs and outputs are assigned different weights; hence classical DEA allows weight flexibility. As a result, the inputs or outputs of some DMUs can be assigned zero weights even if they are important, so these inputs or outputs are neglected in the evaluation, and some DMUs may be declared efficient even though they are inefficient. This situation leads to unrealistic results. To eliminate the problem of weight flexibility, weight restrictions are imposed in DEA. In this study we propose a new model, not previously published in the literature, which we call the restricted data envelopment analysis (ARIII(COR)) model with correlation coefficients. The aim of the new model is to take the relations between variables into account by means of correlation coefficients, which are added as constraints to the CCR and BCC models. For this purpose, the correlation coefficients are used in the input–output restrictions, both individually and in combination, since inputs and outputs are related to each other in production to the degree of their correlation. Previous studies did not take the relationships between input/output variables into account; weight restrictions were made only with expert opinions or an objective method. In our study, the weights for the input and output variables are determined according to the correlations between the input and output variables. The proposed method differs from other methods in the literature because the efficiency scores are calculated at the level of the correlations between the input and/or output variables.

16.
Necessary and sufficient conditions are presented for jointly symmetric stable random vectors to be independent and for a regression involving symmetric stable random variables to be linear. The notion of n-fold dependence is introduced for symmetric stable random variables, and under this condition we determine all monomials in such random variables for which moments exist.

17.
We study how the Hamming weight of the difference between two values influences the probability that this difference is preserved after modular addition and subtraction. By the difference between two values we mean XOR, which is standard in cryptanalysis. We prove that if the most significant bit of the difference is equal to 0 (respectively, equal to 1), then the probability of the difference being preserved is 2^{-h} (respectively, 2^{-(h-1)}), where h is the Hamming weight of the difference. The theoretical results are confirmed experimentally.
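A quick Monte Carlo check of the stated probabilities for modular addition (a sketch only; the bit width n = 4 and the sampled differences are illustrative choices, not taken from the paper):

```python
import random

def preservation_prob(n, d, trials=200_000):
    """Estimate P[((x + k) mod 2^n) XOR ((x' + k) mod 2^n) = d]
    where x' = x XOR d and x, k are uniform n-bit values."""
    mask = (1 << n) - 1
    hits = 0
    for _ in range(trials):
        x = random.getrandbits(n)
        k = random.getrandbits(n)
        if ((x + k) & mask) ^ (((x ^ d) + k) & mask) == d:
            hits += 1
    return hits / trials

# Expected: 2^-h when the MSB of d is 0, and 2^-(h-1) when the MSB of d is 1,
# with h the Hamming weight of d.
for d in (0b0011, 0b1001, 0b1100):
    h = bin(d).count("1")
    print(f"d={d:04b}  h={h}  empirical={preservation_prob(4, d):.3f}")
```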

18.
Two novel methods, named the performance baseline and performance correspondence matrices, are proposed to evaluate the performance of decision making units (DMUs) based on singular value decomposition (SVD). The performance baseline matrix can be used to rank all the DMUs because it provides a common basis for performance comparison. The performance correspondence matrix can be used to conduct performance cluster analysis, with which to explore the structure of the input/output variables associated with the DMUs. The analysis can reveal the performance differences among the DMUs and the key input/output variables determining the efficiency of a given DMU, and it provides valuable quantitative information for adjusting variables to improve the efficiency of that DMU. Three case studies are presented to demonstrate that the proposed methods are effective and easy to use and can provide insights into the proper selection of input/output variables for performance comparison, avoiding over-manipulation of DEA models in practice.

19.
In a recent paper, Yang et al. developed an algorithm based on the extended minimal adjustment strategy and the equilibrium competition strategy to achieve a common equilibrium efficient frontier. However, the computational burden of their algorithm becomes challenging when a sample contains many inefficient decision-making units (DMUs). In this paper, we propose a linear programming model that can achieve a common equilibrium efficient frontier in a single step, regardless of the number of inefficient DMUs. Furthermore, we demonstrate the existence and non-uniqueness of the equilibrium efficient frontier and identify its shortcomings through an example. We then extend our approach to incorporate weight restrictions that indicate the relative importance of the different inputs and outputs, and we introduce the secondary goal of minimizing the maximal relative deviation for each fixed-sum output, which results in a unique equilibrium efficient frontier.

20.
The “iterative instrumental variables” (IIV) method for estimating interdependent systems, originally referred to as a symmetric counterpart to the “fix-point” (FP) method, shares its symmetry properties with Durbin's iterative method for performing “full information maximum likelihood” (FIML) estimation. Classical interdependent systems are considered, and identities may occur among the structural equations. Alternative symmetric procedures for obtaining FIML estimates are also dealt with, including the sequential maximization of the likelihood function with respect to the coefficients of one structural equation at a time. Two recent estimation methods, developed by Brundy and Jorgenson (1971, Review of Economics and Statistics, 53, 207–224) and by Dhrymes (1971, Austral. J. Statist., 13, 168–175), can be considered the second approximation of the IIV method and of Durbin's method, respectively, with the first approximation obtained by the “ordinary instrumental variables” (OIV) method. In practice the second approximation depends heavily on the choice of initial instrumental variables, although the asymptotic distribution is not changed by continued iteration.

