Similar Literature
1.
This research further develops the combined use of principal component analysis (PCA) and data envelopment analysis (DEA). The aim is to reduce the curse of dimensionality that occurs in DEA when there is an excessive number of inputs and outputs relative to the number of decision-making units. Three separate PCA–DEA formulations are developed, utilising the results of PCA to construct objective, assurance-region-type constraints on the DEA weights. The first model applies PCA to grouped data representing similar themes, such as quality or environmental measures. The second model, if needed, applies PCA to all inputs and separately to all outputs, thus further strengthening the discrimination power of DEA. The third formulation searches for a single set of global weights with which to fully rank all observations. In summary, the use of principal components can noticeably improve the discriminating strength of DEA models.
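As a rough illustration of the general PCA-then-DEA pipeline (not the paper's specific assurance-region formulations), the following sketch reduces hypothetical input and output matrices with scikit-learn's PCA and then scores each DMU with a standard input-oriented CCR LP. The data, the 95% variance threshold, and the positivity shift are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(20, 6))   # 20 DMUs, 6 inputs (hypothetical data)
Y = rng.uniform(1, 10, size=(20, 5))   # 20 DMUs, 5 outputs

# Reduce each side separately, keeping components explaining ~95% of variance.
Xp = PCA(n_components=0.95).fit_transform(X)
Yp = PCA(n_components=0.95).fit_transform(Y)
# Shift to strictly positive values, since standard DEA assumes positive data
# (a pragmatic fix for this sketch; translation can affect DEA results).
Xp -= Xp.min(axis=0) - 1.0
Yp -= Yp.min(axis=0) - 1.0

def ccr_efficiency(k, X, Y):
    """Input-oriented CCR efficiency of DMU k (envelopment form)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.c_[-X[k].reshape(-1, 1), X.T]     # X^T lam <= theta * x_k
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # Y^T lam >= y_k
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [ccr_efficiency(k, Xp, Yp) for k in range(len(Xp))]
```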

2.
For a given subset $E$ of the natural numbers it is desired to maximize $\sum_{n\in E} a_n$ subject to $a_n \ge 0$ and $1+\sum_{n\in E} a_n \cos n\theta \ge 0$ for $\theta\in[0,\pi]$. A dual program is defined, and a duality principle is established. Extensions to other series of functions are given, and these include the motivating example of P. Delsarte [Philips Res. Repts. 27 (1972), 272–289].
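A minimal way to see the primal numerically is to discretize $[0,\pi]$ and solve the resulting LP; the index set E and the grid size below are invented for illustration, and the paper's dual program is not reproduced.

```python
# Sketch: solve a discretized version of
#   max sum_{n in E} a_n  s.t.  a_n >= 0,  1 + sum a_n cos(n*theta) >= 0 on [0, pi].
import numpy as np
from scipy.optimize import linprog

E = [1, 2, 3, 5, 8]                     # hypothetical index set
thetas = np.linspace(0.0, np.pi, 2001)  # discretization of [0, pi]

c = -np.ones(len(E))                    # linprog minimizes, so minimize -sum a_n
# 1 + sum_n a_n cos(n*theta) >= 0  ->  -sum_n cos(n*theta) a_n <= 1
A_ub = -np.cos(np.outer(thetas, E))
b_ub = np.ones(len(thetas))
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(E))
print("discretized optimum:", -res.fun)
```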

3.

The Nemhauser–Trotter theorem states that the standard linear programming (LP) formulation for the stable set problem has a remarkable property, also known as (weak) persistency: for every optimal LP solution that assigns integer values to some variables, there exists an optimal integer solution in which these variables retain the same values. While the standard LP is defined by only non-negativity and edge constraints, a variety of other LP formulations have been studied and one may wonder whether any of them has this property as well. We show that any other formulation that satisfies mild conditions cannot have the persistency property on all graphs, unless it is always equal to the stable set polytope.
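A tiny numerical illustration of persistency in the standard LP, on an invented graph (a triangle plus an isolated vertex): the triangle comes out fractional, the isolated vertex integral, and the integral value can be kept in an optimal integer solution.

```python
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (2, 0)]   # a triangle; vertex 3 is isolated
n = 4

# Standard stable-set LP: max sum x_v  s.t.  x_u + x_v <= 1 per edge, 0 <= x <= 1.
A_ub = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = 1.0
res = linprog(-np.ones(n), A_ub=A_ub, b_ub=np.ones(len(edges)),
              bounds=[(0, 1)] * n)
x = res.x   # -> [0.5, 0.5, 0.5, 1.0]: triangle fractional, vertex 3 integral

# Persistency: a variable that is integral in an optimal LP solution (x_3 = 1)
# keeps that value in some optimal integer solution (e.g. the stable set {0, 3}).
integral = {v: round(x[v]) for v in range(n) if abs(x[v] - round(x[v])) < 1e-6}
print(x, integral)
```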


4.
5.
The traditional data envelopment analysis model allows the decision-making units (DMUs) to evaluate their maximum efficiency values using their most favourable weights. Such evaluation with total weight flexibility may prevent the DMUs from being fully ranked and may make the evaluation results unacceptable to the DMUs. To solve these problems, we first introduce the concept of the satisfaction degree of a DMU in relation to a common set of weights. Then a common-weight evaluation approach, consisting of a max–min model and two algorithms, is proposed based on the satisfaction degrees of the DMUs. The max–min model, accompanied by our Algorithm 1, generates for the DMUs a set of common weights that maximizes the least satisfaction degree among the DMUs. Furthermore, our Algorithm 2 ensures that the generated common set of weights is unique and that the final satisfaction degrees of the DMUs constitute a Pareto-optimal solution. All of these factors make the evaluation results more satisfactory and acceptable to all the DMUs. Finally, results from the proposed approach are contrasted with those of some previous methods for two published examples: efficiency evaluation of 17 forest districts in Taiwan and R&D project selection.
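A hedged sketch of the max–min idea: taking the satisfaction degree of DMU j to be its common-weight efficiency divided by its own best CCR efficiency (an assumption of this sketch), the largest attainable least satisfaction can be found by bisection on t, each step checking feasibility of a small LP. This illustrates the concept only and is not the paper's Algorithms 1 and 2.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_best(k, X, Y):
    """DMU k's own best (CCR) efficiency, multiplier form, weights w = [u, v]."""
    n, m = X.shape; s = Y.shape[1]
    c = np.r_[-Y[k], np.zeros(m)]                   # max u . y_k
    res = linprog(c, A_ub=np.c_[Y, -X], b_ub=np.zeros(n),   # u.y_i - v.x_i <= 0
                  A_eq=np.r_[np.zeros(s), X[k]].reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

def feasible(t, X, Y, Estar):
    """Do common weights exist with every satisfaction degree >= t?"""
    n, m = X.shape; s = Y.shape[1]
    # u . y_j - t * E*_j * v . x_j >= 0 for all j; scale fixed by v . sum_i x_i = 1
    A_ub = np.c_[-Y, (t * Estar)[:, None] * X]
    A_eq = np.r_[np.zeros(s), X.sum(axis=0)].reshape(1, -1)
    res = linprog(np.zeros(s + m), A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (s + m))
    return res.status == 0

rng = np.random.default_rng(1)
X = rng.uniform(1, 10, (15, 3)); Y = rng.uniform(1, 10, (15, 2))  # invented data
Estar = np.array([ccr_best(k, X, Y) for k in range(15)])
lo, hi = 0.0, 1.0
for _ in range(40):                 # bisection on the least satisfaction degree
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if feasible(mid, X, Y, Estar) else (lo, mid)
print("max-min satisfaction ~", lo)
```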

6.
Summary Four different methods of analyzing the sensitivity of the optimal solution of a crop-mix problem in linear programming are investigated here: (a) variability analysis for second-best and third-best solutions, (b) perturbation analysis around a specific optimal basis, (c) the sensitivity-coefficient approach, and (d) the method of fractile criterion, by which a specified fractile of the distribution of profits is maximized. The objective is to compare the different operational methods of sensitivity analysis applied to an empirical economic problem.
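A minimal sketch of method (b), perturbation analysis: perturb one net-profit coefficient of a toy crop-mix LP and observe whether the optimal plan changes. The LP data are invented.

```python
import numpy as np
from scipy.optimize import linprog

c = -np.array([40.0, 30.0, 50.0])       # negated profits per crop (maximize)
A = np.array([[1.0, 1.0, 1.0],          # land
              [2.0, 1.0, 3.0],          # labour
              [1.0, 2.0, 2.0]])         # capital
b = np.array([100.0, 180.0, 160.0])     # resource availabilities

base = linprog(c, A_ub=A, b_ub=b).x
for eps in (-0.10, -0.05, 0.05, 0.10):  # +/- perturbations of crop 3's profit
    ck = c.copy(); ck[2] *= 1 + eps
    x = linprog(ck, A_ub=A, b_ub=b).x
    print(f"eps={eps:+.2f}  plan={np.round(x, 2)}  changed={not np.allclose(x, base)}")
```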


Work done under the NSF Projects 420-21-17 and 420-04-62 at the Department of Economics, Iowa State University, Ames, Iowa. Other research work closely related to this paper may be found in Sengupta; Sengupta-Portillo-Campbell; Sengupta-Tintner.

7.
Cross-efficiency evaluation has long been proposed as an alternative method for ranking the decision-making units (DMUs) in data envelopment analysis (DEA). This study proposes goal programming models that can be used in the second stage of the cross-evaluation. The proposed goal programming models incorporate different efficiency concepts: the classical DEA, minmax, and minsum efficiency criteria. Numerical examples are provided to illustrate the application of the proposed goal programming cross-efficiency models.
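For context, a sketch of the two-stage mechanics using a standard benevolent secondary goal (the well-known Doyle–Green style model) as a stand-in for the paper's goal-programming variants; all data are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
X = rng.uniform(1, 10, (8, 2)); Y = rng.uniform(1, 10, (8, 2))  # invented data
n, m = X.shape; s = Y.shape[1]

def ccr_best(k):
    """Stage 1: DMU k's own CCR efficiency, weights w = [u, v]."""
    c = np.r_[-Y[k], np.zeros(m)]
    res = linprog(c, A_ub=np.c_[Y, -X], b_ub=np.zeros(n),
                  A_eq=np.r_[np.zeros(s), X[k]].reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

def cross_row(d, Ed):
    """Stage 2 (benevolent): max total weighted output, keeping DMU d at Ed."""
    c = np.r_[-Y.sum(axis=0), np.zeros(m)]
    A_eq = np.vstack([np.r_[np.zeros(s), X.sum(axis=0)],   # v . sum_i x_i = 1
                      np.r_[Y[d], -Ed * X[d]]])            # u.y_d = Ed * v.x_d
    res = linprog(c, A_ub=np.c_[Y, -X], b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0, 0.0], bounds=[(0, None)] * (s + m))
    u, v = res.x[:s], res.x[s:]
    return (Y @ u) / (X @ v)          # efficiencies of all DMUs under d's weights

E = np.array([ccr_best(k) for k in range(n)])
cross = np.array([cross_row(d, E[d]) for d in range(n)])
print("cross-efficiency scores:", cross.mean(axis=0))
```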

8.
Summary In an ordinary linear programming problem with a given set of statistical data, it is generally not known how reliable the optimal basic solution is. Our object here is to indicate a general method of reliability analysis for testing the sensitivity of the optimal basic solution and other basic solutions, in terms of expectation and variance, when sample observations are available. For empirical illustration, time series data on the input-output coefficients of a single farm producing three crops with three resources are used. The distributions of the first, second, and third best solutions are estimated assuming the vectors of net prices and resources to be constant and the coefficient matrix to be stochastic. Our method of statistical estimation is a combination of the Pearsonian method of moments and the maximum likelihood method. In our illustrative example we observe that the skewness of the distribution of the first best solution exceeds that of the distributions of the second and third best solutions. We have also analyzed the time paths of the three ordered solutions to see how far one could apply the idea of a regression model based on inequality constraints. A sensitivity index for a particular sample is suggested, based on the spread of the maximum and minimum values of the solutions.
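The flavour of such a reliability analysis can be sketched by Monte Carlo: draw random coefficient matrices around a mean, re-solve the LP, and estimate moments of the optimal value. The farm data below are invented stand-ins, and moment estimation here is plain sampling rather than the Pearsonian/maximum-likelihood combination used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
c = -np.array([40.0, 30.0, 50.0])       # negated net prices (maximize profit)
A0 = np.array([[1.0, 1.0, 1.0],
               [2.0, 1.0, 3.0],
               [1.0, 2.0, 2.0]])        # mean input-output coefficients
b = np.array([100.0, 180.0, 160.0])     # fixed resources

vals = []
for _ in range(2000):
    A = A0 * rng.lognormal(0.0, 0.05, A0.shape)   # stochastic coefficient matrix
    res = linprog(c, A_ub=A, b_ub=b)
    if res.status == 0:
        vals.append(-res.fun)
vals = np.array(vals)
mean, var = vals.mean(), vals.var()
skew = ((vals - mean) ** 3).mean() / vals.std() ** 3
print(f"mean={mean:.1f}  var={var:.1f}  skewness={skew:.2f}")
```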


Research partly supported by the NSF project No. 420-04-70 at the Department of Economics, Iowa State University, Ames. The results of this paper are closely related in some theoretical aspects to the following papers: Sengupta, J. K., G. Tintner and C. Millham, On some theorems of stochastic linear programming with applications, Management Science, vol. 10, October 1963, pp. 143–159; Sengupta, J. K., G. Tintner and B. Morrison, Stochastic linear programming with applications to economic models, Economica, August 1963, pp. 262–276; Sengupta, J. K., On the stability of truncated solutions of stochastic linear programming (to be published in Econometrica, October 1965).

9.
A non-Archimedean, effective anti-degeneracy/anti-cycling method is developed here for linear programming models, especially data envelopment analysis (DEA), processing networks, and advertising media-mix models. In a 1000-LP DEA application, it gave a tenfold speed increase and eliminated the cycling difficulties of conventional Marsden or Kennington/Ali LP software modules.

10.
A special and important network-structured linear programming problem is the shortest path problem. Classical shortest path problems assume a single attribute, such as a shipping cost or profit, along each arc. In many real situations, however, several attributes (various costs and profits) must be considered. Because of the frequent occurrence of such network-structured problems, there is a need for an efficient procedure for handling them. This paper studies the shortest path problem in the case that multiple attributes are considered along the arcs. The concept of relative efficiency is defined for each path from the initial node to the final node. Then an efficient path with the maximum efficiency is determined.
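One plausible reading of this setup, sketched below: enumerate the s-t paths of a small network, treat each path as a DMU whose summed costs are inputs and summed profits are outputs, and score paths with a CCR-style ratio LP. The graph, attribute values, and the enumeration-based approach are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import linprog

# arcs: (tail, head) -> (cost1, cost2, profit); invented small network
arcs = {(0, 1): (2, 4, 3), (0, 2): (3, 1, 2), (1, 3): (2, 2, 4),
        (2, 3): (4, 3, 5), (1, 2): (1, 1, 1)}

def all_paths(s, t, path=None):
    """Enumerate simple s-t paths by depth-first search."""
    path = path or [s]
    if path[-1] == t:
        yield list(path); return
    for (u, v) in arcs:
        if u == path[-1] and v not in path:
            yield from all_paths(s, t, path + [v])

paths = list(all_paths(0, 3))
attrs = np.array([np.sum([arcs[(u, v)] for u, v in zip(p, p[1:])], axis=0)
                  for p in paths])
X, Y = attrs[:, :2], attrs[:, 2:]        # inputs (costs), outputs (profits)

def efficiency(k):
    """Relative efficiency of path k among all paths (CCR multiplier LP)."""
    n, m = X.shape; s_ = Y.shape[1]
    c = np.r_[-Y[k], np.zeros(m)]                  # max u . y_k with v . x_k = 1
    res = linprog(c, A_ub=np.c_[Y, -X], b_ub=np.zeros(n),
                  A_eq=np.r_[np.zeros(s_), X[k]].reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, None)] * (s_ + m))
    return -res.fun

effs = [efficiency(k) for k in range(len(paths))]
print("most efficient path:", paths[int(np.argmax(effs))], max(effs))
```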

11.
This article presents a finite, outcome-based algorithm for optimizing a lower semicontinuous function over the efficient set of a bicriteria linear programming problem. The algorithm searches the efficient faces of the outcome set of the bicriteria linear programming problem. It exploits the fact that the dimension of the outcome set of the bicriteria problem is at most two. As a result, in comparison to decision-based approaches, the number of efficient faces that need to be found is markedly reduced. Furthermore, the dimensions of the efficient faces found are always at most one. The algorithm can be implemented for a wide variety of lower semicontinuous objective functions.

12.
In the last 10 years much has been written about the drawbacks of radial projection. During this time, many authors proposed methods to explore, interactively or not, the efficient frontier via non-radial projections. This paper compares three families of data envelopment analysis (DEA) models: the traditional radial, the preference structure, and the multi-objective models. We use the efficiency analysis of the Rio de Janeiro Odontological Public Health System as a background for comparing the three methods through a real case with one integer and one exogenous variable. The objectives of the case study are (i) to compare the applicability of the three approaches for efficiency analysis with exogenous and integer variables, (ii) to present the main advantages and drawbacks of each approach, (iii) to demonstrate the impossibility of projecting onto some regions and its implications, and (iv) to present the approximate CPU time for the models, when this time is not negligible. We find that the multi-objective approach, although mathematically equivalent to its preference-structure peer, allows projections that are not present in the latter. Furthermore, we find that, for our case study, the traditional radial projection model provides useless targets, as expected, and that for some parts of the frontier none of the models provide suitable targets. Another interesting result is that the CPU time for the multi-objective formulation, despite its inherently high complexity, is acceptable for DEA applications due to its compact nature.

13.
Cross-efficiency evaluation has been widely used for identifying the most efficient decision-making unit (DMU) or ranking DMUs in data envelopment analysis (DEA). Most existing approaches for cross-efficiency evaluation focus on how to determine input and output weights uniquely, but pay little attention to the aggregation process of cross-efficiencies and simply aggregate them equally, without considering their relative importance. This paper focuses on aggregating cross-efficiencies by taking their relative importance into consideration and proposes three alternative approaches to determining the relative importance weights for cross-efficiency aggregation. Numerical examples are examined to show the importance and necessity of relative importance weights for cross-efficiency aggregation, and to show that the identified most efficient DMU can change significantly when the relative importance of cross-efficiencies is taken into account.
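The aggregation step itself is a one-liner; the sketch below contrasts equal aggregation with a weighted one. The cross-efficiency matrix and the importance weights are invented, since the paper's three weight-determination approaches are not reproduced here.

```python
import numpy as np

# E[d, j] = efficiency of DMU j evaluated with DMU d's optimal weights
E = np.array([[1.00, 0.72, 0.85],
              [0.90, 1.00, 0.78],
              [0.88, 0.69, 1.00]])

equal = E.mean(axis=0)            # conventional equal aggregation
w = np.array([0.5, 0.3, 0.2])     # hypothetical relative-importance weights
weighted = w @ E                  # importance-weighted aggregation
print(equal, weighted)            # rankings may differ between the two
```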

14.
In this paper, we consider the following minimax linear programming problem: $\min z = \max_{1 \le j \le n} \{c_j x_j\}$, subject to $Ax = g$, $x \ge 0$. It is well known that this problem can be transformed into a linear program by introducing $n$ additional constraints. We note that these additional constraints can be handled implicitly by treating them as parametric upper bounds. Based on this approach we develop two algorithms: a parametric algorithm and a primal-dual algorithm. The parametric algorithm solves a linear programming problem with parametric upper bounds, and the primal-dual algorithm solves a sequence of related dual-feasible linear programming problems. Computational results are also presented, which indicate that both algorithms are substantially faster than the simplex algorithm applied to the enlarged linear programming problem.
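For reference, the enlarged LP that the paper's algorithms avoid solving directly looks like this on a tiny invented instance: add a variable $z$ and the $n$ constraints $c_j x_j \le z$, then minimize $z$.

```python
import numpy as np
from scipy.optimize import linprog

cdiag = np.array([3.0, 1.0, 2.0])            # the c_j (invented)
A = np.array([[1.0, 1.0, 1.0]]); g = np.array([10.0])   # Ax = g

n = len(cdiag)
obj = np.r_[np.zeros(n), 1.0]                # variables: x_1..x_n, z; minimize z
A_ub = np.c_[np.diag(cdiag), -np.ones(n)]    # c_j x_j - z <= 0
A_eq = np.c_[A, np.zeros((A.shape[0], 1))]   # Ax = g, z unconstrained there
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=g,
              bounds=[(0, None)] * n + [(None, None)])
print(res.x[:n], "z* =", res.fun)            # optimum equalizes the c_j x_j
```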

15.
In data envelopment analysis (DEA), identification of the strong defining hyperplanes of the empirical production possibility set (PPS) is important because they can be used for determining the rates of change of outputs with changes in inputs. Also, efficient hyperplanes determine the nature of returns to scale. The present work proposes a method for generating all linearly independent strong defining hyperplanes (LISDHs) of the PPS passing through a specific decision-making unit (DMU). To this end, corresponding to each efficient unit, a perturbed inefficient unit is defined and, using at most $m+s$ linear programs, all LISDHs passing through the DMU are determined, where $m$ and $s$ are the numbers of inputs and outputs, respectively.

16.
The problem of finding the projections of points onto the solution sets of the primal and dual problems of linear programming is considered. This problem is reduced to a single minimization of a new auxiliary function, once the penalty coefficient exceeds some threshold value; estimates of the threshold value are obtained. A software implementation of the proposed method is compared with some known commercial and research software packages for solving linear programming problems.
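To make the object being computed concrete, the sketch below projects a point onto the primal solution set using the exact characterization $\{x : Ax \le b,\ x \ge 0,\ c^\top x \le f^*\}$ and a generic solver, rather than the paper's single penalty-function minimization; the data are invented.

```python
import numpy as np
from scipy.optimize import linprog, minimize

c = np.array([1.0, 2.0])
A = np.array([[-1.0, -1.0]]); b = np.array([-4.0])       # x1 + x2 >= 4
fstar = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).fun

x0 = np.array([5.0, 3.0])                                # point to project
cons = [{"type": "ineq", "fun": lambda x: b - A @ x},    # Ax <= b
        {"type": "ineq", "fun": lambda x: fstar - c @ x}]  # stay optimal
res = minimize(lambda x: 0.5 * np.sum((x - x0) ** 2), x0,
               constraints=cons, bounds=[(0, None)] * 2, method="SLSQP")
print("projection:", res.x)    # -> (4, 0), the unique LP optimum here
```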

17.
Data envelopment analysis (DEA) has enjoyed a wide range of acceptance by researchers and practitioners alike as an instrument of performance analysis and management since its introduction in 1978. Many formulations and thousands of applications of DEA have been reported in a considerable variety of academic and professional journals all around the world. Almost all of the formulations and applications have centered on the concept of "relative self-evaluation", whether they are single- or multi-stage applications. This paper suggests a framework for enhancing the theory of DEA by employing the concept of "relative cross-evaluation" in a multi-stage application context. Managerial situations are described where such enhanced-DEA (E-DEA) formulations have actually been used and could potentially be most meaningful and useful.

18.
19.
Data envelopment analysis (DEA) is widely used to evaluate the relative efficiency of public or private firms. Most DEA models are established by individually maximizing each firm's efficiency, measured as an outputs/inputs ratio, under the weights most advantageous to it. Some scholars have pointed out the interesting relationship between the multiobjective linear programming (MOLP) problem and the DEA problem, and have introduced common-weight approaches to DEA based on MOLP. This paper proposes a new linear programming problem for computing the efficiency of a decision-making unit (DMU). The proposed model differs from traditional and existing multiobjective DEA models in that its objective function is the difference between outputs and inputs instead of the outputs/inputs ratio. Then an MOLP problem, based on the introduced linear programming problem, is formulated for the computation of common weights for all DMUs. To be precise, the modified Chebychev distance and the ideal point of the MOLP are used to generate common weights. The dual problem of this model is also investigated. Finally, this study presents an actual case study analysing the R&D efficiency of 10 TFT-LCD companies in Taiwan to illustrate the new approach. Our model demonstrates better performance than the traditional DEA model as well as some of the most important existing multiobjective DEA models.
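A sketch of this difference-form, ideal-point construction under stated assumptions: each DMU's ideal value of $u^\top y_j - v^\top x_j$ is computed by an LP, then common weights minimize the largest (Chebychev) shortfall from those ideals. The normalization and data are invented, and the paper's exact MOLP model and its dual are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
X = rng.uniform(1, 10, (10, 3)); Y = rng.uniform(1, 10, (10, 2))  # invented
n, m = X.shape; s = Y.shape[1]

# Common constraints on weights w = [u, v] >= 0:
#   u.y_i - v.x_i <= 0 for all i, plus a normalization to fix the scale.
A_ub0 = np.c_[Y, -X]; b_ub0 = np.zeros(n)
A_eq = np.r_[Y.sum(axis=0), X.sum(axis=0)].reshape(1, -1); b_eq = [1.0]

def ideal(j):
    """DMU j's ideal value of u.y_j - v.x_j over the feasible weights."""
    c = np.r_[-Y[j], X[j]]
    res = linprog(c, A_ub=A_ub0, b_ub=b_ub0, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun

ideals = np.array([ideal(j) for j in range(n)])

# min delta  s.t.  ideal_j - (u.y_j - v.x_j) <= delta for all j
c = np.r_[np.zeros(s + m), 1.0]                    # variables: [u, v, delta]
A_ub = np.c_[np.r_[np.c_[-Y, X], A_ub0], np.r_[-np.ones(n), np.zeros(n)]]
b_ub = np.r_[-ideals, b_ub0]
A_eq2 = np.c_[A_eq, [[0.0]]]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq2, b_eq=b_eq,
              bounds=[(0, None)] * (s + m + 1))
u, v = res.x[:s], res.x[s:s + m]
print("common-weight scores:", Y @ u - X @ v)
```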

20.
Estimating the parameters that define a conventional multiobjective decision-making model is a difficult task. Normally they are either given by the decision maker, who has imprecise information and/or expresses his considerations subjectively, or obtained by statistical inference from past data, and their stability is doubtful. Therefore, it is reasonable to construct a model reflecting imprecise data or ambiguity in terms of fuzzy sets, and several fuzzy approaches to multiobjective programming have been developed [1, 9, 10, 11]. The fuzziness of the parameters gives rise to a problem whose solution will also be fuzzy (see [2, 3]) and which is defined by its possibility distribution. Once the possibility distribution of the solution has been obtained, if the decision maker wants more precise information with respect to the decision vector, then we can pose and solve a new problem: we try to find a decision vector which brings the fuzzy objectives as close as possible to the fuzzy solution previously obtained. In order to solve this problem we develop two different models from the initial solution, both based on Goal Programming: an Interval Goal Programming problem, if we define the relation "as accurate as possible" on the basis of the expected intervals of fuzzy numbers, as we showed in [4], and an ordinary Goal Programming problem based on the expected values of the fuzzy numbers that define the goals. Finally, we construct algorithms that implement the above-mentioned solution method. Our approach is illustrated by means of a numerical example.
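A sketch of the second, expected-value variant under stated assumptions: each fuzzy goal is taken to be a triangular fuzzy number $(l, m, r)$ with expected value $(l + 2m + r)/4$ (the midpoint of its expected interval), and an ordinary goal program with deviation variables is then solved. All data are invented.

```python
import numpy as np
from scipy.optimize import linprog

ev = lambda l, m, r: (l + 2 * m + r) / 4.0       # expected value of (l, m, r)
goals = np.array([ev(8, 10, 14), ev(4, 6, 7)])   # two defuzzified goals
C = np.array([[3.0, 1.0], [1.0, 2.0]])           # goal rows: C x should hit goals
A = np.array([[1.0, 1.0]]); b = np.array([4.0])  # feasible region Ax <= b

n = C.shape[1]; g = len(goals)
# variables: x (n), dplus (g), dminus (g); minimize total deviation
c = np.r_[np.zeros(n), np.ones(2 * g)]
A_eq = np.c_[C, -np.eye(g), np.eye(g)]           # C x - dplus + dminus = goals
A_ub = np.c_[A, np.zeros((A.shape[0], 2 * g))]
res = linprog(c, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=goals,
              bounds=[(0, None)] * (n + 2 * g))
print("x =", res.x[:n], "total deviation =", res.fun)
```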
