Similar Articles
1.
There are different ways to allow voters to express their preferences on a set of candidates. In ranked voting systems, each voter selects a subset of the candidates and ranks them in order of preference. A well-known class of these voting systems is the scoring rules, where fixed scores are assigned to the different ranks and the candidates with the highest score are the winners. One of the most important issues in this context is the choice of the scoring vector, since the winning candidate can vary according to the scores used. To avoid this problem, Cook and Kress [W.D. Cook, M. Kress, A data envelopment model for aggregating preference rankings, Management Science 36 (11) (1990) 1302–1310], using a DEA/AR model, proposed to assess each candidate with the most favorable scoring vector for him/her. However, this procedure often causes several candidates to be efficient, i.e., to achieve the maximum score. For this reason, several methods to discriminate among efficient candidates have been proposed. The aim of this paper is to analyze these methods and show some of their drawbacks.
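For a fixed scoring vector, computing the winners of a scoring rule is a direct weighted count. The sketch below uses a hypothetical three-candidate, three-rank profile with Borda scores (2, 1, 0); the candidate names and vote counts are illustrative, not taken from the paper.

```python
def scoring_rule_winner(rank_counts, scores):
    """rank_counts[c][k]: number of voters placing candidate c in rank k+1;
    scores[k]: fixed score awarded to rank k+1."""
    totals = {c: sum(n * s for n, s in zip(counts, scores))
              for c, counts in rank_counts.items()}
    best = max(totals.values())
    winners = [c for c, t in totals.items() if t == best]
    return totals, winners

# Hypothetical vote profile over 3 ranking places, Borda scores (2, 1, 0).
votes = {"A": [4, 2, 3], "B": [3, 5, 1], "C": [2, 2, 5]}
totals, winners = scoring_rule_winner(votes, [2, 1, 0])
```

Changing the scoring vector, e.g. to (3, 1, 0), can change the winner, which is exactly the sensitivity the abstract refers to.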

2.
This paper addresses ranked voting systems that determine an ordering of candidates from the aggregate vote by rank for each candidate. It is shown that, without specifying anything arbitrary, we can obtain a total ordering of candidates by using a DEA/AR (Data Envelopment Analysis/Assurance Region) exclusion model. After explaining the evaluation criterion used to rank candidates, we conclude that the proposed system can be considered at least as an alternative way to determine an ordering of candidates.

3.
One of the most important issues in aggregating preference rankings is the determination of the weights associated with the different ranking places. To avoid subjectivity in determining the weights, Cook and Kress (1990) [5] suggested evaluating each candidate with the most favorable scoring vector for him/her. With this purpose, various models based on Data Envelopment Analysis have appeared in the literature. Although these methods do not require the weights to be predetermined subjectively, some of them have a serious drawback: the relative order between two candidates may be altered when the number of first, second, …, kth ranks obtained by other candidates changes, even though there is no variation in the number of first, second, …, kth ranks obtained by the two candidates themselves. In this paper we propose a model that evaluates each candidate with the most favorable weighting vector for him/her while avoiding this drawback. Moreover, in some cases, we give a closed-form expression for the score our model assigns to each candidate.
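The most-favorable-weights idea can be sketched as a small linear program: candidate o chooses rank weights that maximize his/her own score, subject to every candidate's score being at most 1 and the weights being nonincreasing. This is a simplified sketch in the spirit of the Cook–Kress model (with ε = 0 and no extra discrimination constraints), not any specific published model; the vote matrix is hypothetical and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

def most_favorable_score(V, o):
    """V[j][k]: number of (k+1)-th place votes for candidate j.
    Maximize candidate o's score subject to every candidate's
    score <= 1 and w1 >= w2 >= ... >= wm >= 0."""
    V = np.asarray(V, dtype=float)
    n, m = V.shape
    mono = np.zeros((m - 1, m))          # rows encode w[k+1] - w[k] <= 0
    for k in range(m - 1):
        mono[k, k], mono[k, k + 1] = -1.0, 1.0
    A_ub = np.vstack([V, mono])
    b_ub = np.concatenate([np.ones(n), np.zeros(m - 1)])
    res = linprog(-V[o], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    return -res.fun                      # linprog minimizes, so negate

V = [[4, 2, 3], [3, 5, 1], [2, 2, 5]]    # hypothetical vote matrix
best = [most_favorable_score(V, o) for o in range(3)]
```

On this profile all three candidates reach the maximum score of 1, illustrating the typical lack of discrimination that motivates the drawback discussed above.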

4.
Data envelopment analysis very often identifies more than one candidate in a voting system to be DEA efficient. In order to choose a winner from among the DEA efficient candidates, this paper proposes a new method that discriminates the DEA efficient candidates by considering their least relative total scores. The proposed method is illustrated with two numerical examples and proves to be effective and practical.
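A hedged sketch of the least-score idea: instead of each candidate's most favorable (maximal) score, compute the least total score attainable over the feasible weights. To keep the minimum nontrivial, this sketch fixes a small ε > 0 in the weight constraints, whereas the paper's model determines ε separately; the vote matrix is hypothetical and SciPy is assumed.

```python
import numpy as np
from scipy.optimize import linprog

def least_relative_score(V, o, eps=0.02):
    """Minimize candidate o's total score subject to every candidate's
    score <= 1 and w1 >= w2 + eps, ..., wm >= eps (fixed-eps sketch,
    not the exact published model)."""
    V = np.asarray(V, dtype=float)
    n, m = V.shape
    rows, rhs = [], []
    for k in range(m - 1):               # w[k+1] - w[k] <= -eps
        r = np.zeros(m)
        r[k], r[k + 1] = -1.0, 1.0
        rows.append(r)
        rhs.append(-eps)
    last = np.zeros(m)
    last[m - 1] = -1.0                   # -wm <= -eps
    rows.append(last)
    rhs.append(-eps)
    A_ub = np.vstack([V] + rows)
    b_ub = np.concatenate([np.ones(n), rhs])
    res = linprog(V[o], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    return res.fun

V = [[4, 2, 3], [3, 5, 1], [2, 2, 5]]    # hypothetical vote matrix
least = [least_relative_score(V, o) for o in range(3)]
```

On this profile the least scores come out as 0.38, 0.40 and 0.30, so candidates that would all tie at the maximum under their most favorable weights are now separated, with the second candidate ranked first.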

5.
This paper proposes a method to rank multiple efficient candidates, a situation that often arises with DEA, by comparing the least relative total score of each efficient candidate with the best and least relative total scores measured over the same range. A numerical example shows that our model identifies efficient candidates and yields fewer efficient candidates than the model given by Wang and Chin [Y.M. Wang, K.S. Chin, Discriminating DEA efficient candidates by considering their least relative total scores, J. Comput. Appl. Math. 206 (2007) 209–215]. This paper also points out a drawback in the theorem about ε given by Wang and Chin in the same work.

6.
Preference voting and aggregation require the determination of the weights associated with different ranking places. This paper proposes three new models to assess the weights. Two of them are linear programming (LP) models which determine a common set of weights for all the candidates considered, and the other is a nonlinear programming (NLP) model that determines the most favourable weights for each candidate. The proposed models are examined with two numerical examples, and it is shown that they can not only choose a winner but also give a full ranking of all the candidates.

7.
The application of Data Envelopment Analysis (DEA) as an alternative multiple criteria decision making (MCDM) tool has been gaining attention in the literature. Doyle (Organ. Behav. Hum. Decis. Process. 62(1):87–100, 1995) presents a method of multi-attribute choice based on an application of DEA. In the first part of his method, the straightforward DEA is considered as an idealized process of self-evaluation in which each alternative weighs the attributes in order to maximize its own score (or desirability) relative to the other alternatives. In the second step, each alternative applies its own DEA-derived best weights to each of the other alternatives (i.e., cross-evaluation), and the average of the cross-evaluations placed on an alternative is taken as an index of its overall score. In some cases of multiple criteria decision making, direct or indirect competition exists among the alternatives, while the factor of competition is usually ignored in most MCDM settings. This paper proposes an approach to evaluate and rank alternatives in MCDM via an extension of the DEA method, namely the DEA game cross-efficiency model of Liang, Wu, Cook and Zhu (Oper. Res. 56(5):1278–1288, 2008b), in which each alternative is viewed as a player who seeks to maximize its own score (or desirability) under the condition that the cross-evaluation scores of each of the other alternatives do not deteriorate. The game cross-evaluation score is obtained by averaging each alternative's maximized scores. The obtained game cross-evaluation scores are unique and constitute a Nash equilibrium point. Therefore, the results and rankings based upon game cross-evaluation score analysis are more reliable and will benefit decision makers.
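A minimal sketch of plain (non-game) cross-evaluation in the spirit of Doyle's first two steps: each alternative solves a self-appraisal LP over its attribute values (treated as outputs against a unit input), then its optimal weights are applied to all peers, and row averages give the cross-efficiency scores. This ignores the multiplicity of optimal weights and the game extension discussed above; the data are hypothetical and SciPy is assumed.

```python
import numpy as np
from scipy.optimize import linprog

def cross_efficiency(V):
    """V[i][k]: value of alternative i on attribute k (all treated as
    outputs with a unit input). Self-appraisal LP per row, then each
    alternative is rated under every peer's weights."""
    V = np.asarray(V, dtype=float)
    n, m = V.shape
    W = np.zeros((n, m))
    for i in range(n):
        # maximize own score with every alternative's score <= 1
        res = linprog(-V[i], A_ub=V, b_ub=np.ones(n),
                      bounds=[(0, None)] * m)
        W[i] = res.x
    cross = V @ W.T          # cross[i, j]: score of i under j's weights
    return cross.mean(axis=1)

scores = cross_efficiency([[4, 2, 3], [3, 5, 1], [2, 2, 5]])
```

Because different optimal weight choices give different cross-efficiencies, the averages here depend on which optimum the solver returns, which is precisely the non-uniqueness the game model is designed to remove.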

8.
The existence of alternate optima for the DEA weights may reduce the usefulness of the cross-efficiency evaluation, since the ranking provided depends on the choice of weights that the different DMUs make. In this paper, we develop a procedure to carry out the cross-efficiency evaluation without the need to make any specific choice of DEA weights. The proposed procedure takes into consideration all the possible choices of weights that all the DMUs can make, and yields for each unit a range for its possible rankings instead of a single ranking. This range is determined by the best and the worst rankings that would result in the best and the worst scenarios of each unit across all the DEA weights of all the DMUs. This approach might identify good/bad performers, as those that rank at the top/bottom irrespective of the weights that are chosen, or units that outperform others in all the scenarios. In addition, it may be used to analyze the stability of the ranking provided by the standard cross-efficiency evaluation.

9.
We provide a stochastic electoral model of the US Presidential election where candidates take differences across states into account when developing their policy platforms and advertising campaigns. Candidates understand the political and economic differences that exist across states. Voters care about candidates' policies relative to their ideals, about the frequency of candidates' advertising messages relative to their ideal message frequency and their campaign tolerance level, and they vote taking into account their perceptions of candidates' traits and competencies, with their vote also depending on their sociodemographic characteristics. In the local Nash equilibrium, candidates give maximal weight to undecided voters and swing states and little weight to committed voters and states. These endogenous weights pin down candidates' campaigns and depend on the probability with which voters choose each candidate, which in turn depends on candidates' policies and advertising campaigns. Weights vary across candidates' policy and ad campaigns, reflecting the importance voters in each state give to the two dimensions and the variation in voters' preferences across states.

10.
Data envelopment analysis (DEA) evaluates the performance of decision making units (DMUs). When DEA models are used to calculate the efficiency of DMUs, several of them may share the maximal efficiency score of 1. Some methods have been proposed to choose a winner among the DEA efficient candidates, but most of them are not able to rank non-extreme efficient DMUs. Since research on ranking non-extreme efficient units is very limited and incomplete, we develop a new method to rank these DMUs in this paper. Suppose that DMU_o is a non-extreme efficient DMU under evaluation. By the Representation Theorem, DMU_o can be represented as a convex combination of extreme efficient DMUs, so we expect the performance of DMU_o to be similar to that of this convex combination. Consequently, the ranking score of DMU_o is calculated as the same convex combination of the ranking scores of these extreme efficient DMUs, and the rank of the unit is thereby determined.

11.
This paper discusses the DEA total weight flexibility in the context of the cross-efficiency evaluation. The DMUs in DEA are often assessed with unrealistic weighting schemes in their attempt to achieve the best ratings in their self-evaluation. We claim here that in a peer-appraisal like the cross-efficiency evaluation the cross-efficiencies provided by such weights cannot play the same role as those obtained with more reasonable weights. To address this issue, we propose to calculate the cross-efficiency scores by means of a weighted average of cross-efficiencies, instead of with the usual arithmetic mean, so the aggregation weights reflect the disequilibrium in the profiles of DEA weights that are used. Thus, the cross-efficiencies provided by profiles with large differences in their weights, especially those obtained with zero weights, would be attached lower aggregation weights (less importance) than those provided by more balanced profiles of weights.

12.
The purpose of this study is to develop a new method which, for given inputs and outputs, provides the best common weights for all the units, discriminating optimally between the efficient and inefficient units as pre-given by Data Envelopment Analysis (DEA), in order to rank all the units on the same scale. This new method, Discriminant Data Envelopment Analysis of Ratios (DR/DEA), presents a further post-optimality analysis of DEA for organizational units when their multiple inputs and outputs are given. We construct the ratio between the composite output and the composite input, where the common weights are computed by a new non-linear optimization of the goodness of separation between the two pre-given groups. A practical use of DR/DEA is that the common weights may be utilized for ranking the units on a unified scale. DR/DEA is a new use of a two-group discriminant criterion, presented here for ratios rather than the linear function of traditional discriminant analysis. Moreover, non-parametric statistical tests are employed to verify the consistency between the classification from DEA (efficient and inefficient units) and the post-classification generated by DR/DEA.

13.
We consider a generalization of the classical facility location problem, where we require the solution to be fault-tolerant. In this generalization, every demand point j must be served by r_j facilities instead of just one. The facilities other than the closest one are "backup" facilities for that demand, and any such facility will be used only if all closer facilities (or the links to them) fail. Hence, for any demand point, we can assign nonincreasing weights to the routing costs to farther facilities. The cost of assignment for demand j is the weighted linear combination of the assignment costs to its r_j closest open facilities. We wish to minimize the sum of the cost of opening the facilities and the assignment cost of each demand j. We obtain a factor 4 approximation to this problem through the application of various rounding techniques to the linear relaxation of an integer program formulation. We further improve the approximation ratio to 3.16 using randomization and to 2.41 using greedy local-search type techniques.
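The weighted assignment cost in this objective can be sketched directly: given a set of open facilities, each demand is assigned to its r_j closest ones with nonincreasing weights. The instance below (distances, requirements, weights) is hypothetical.

```python
def assignment_cost(dist, r, weights):
    """dist[j][i]: distance from demand j to open facility i;
    r[j]: number of facilities demand j must be served by;
    weights[j]: nonincreasing weights, len(weights[j]) >= r[j]."""
    total = 0.0
    for j, row in enumerate(dist):
        nearest = sorted(row)[:r[j]]     # the r_j closest open facilities
        total += sum(w * d for w, d in zip(weights[j], nearest))
    return total

# Two demands, three open facilities; demand 0 needs 2 facilities.
dist = [[1.0, 3.0, 2.0], [4.0, 1.0, 5.0]]
cost = assignment_cost(dist, [2, 1], [[1.0, 0.5], [1.0]])
```

The full problem additionally chooses which facilities to open so that opening cost plus this assignment cost is minimized; that combinatorial choice is what the approximation algorithms address.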

14.
This paper aims to assess the performance of a sample of completed building projects in Oregon by employing the range-adjusted measure, a slack-based data envelopment analysis (DEA) model. In the first stage of analysis, project efficiency ratings (i.e., composite indicators) are derived using selected single performance indicators in a no-output model; in the second stage, censored Tobit regression is employed to model the efficiency ratings. The results indicate that only four of the 50 sample projects are efficient within the DEA context. Moreover, there is little evidence for systematic effects of project size on the DEA efficiency rating.

15.
Data Envelopment Analysis is used to determine the relative efficiency of Decision Making Units as the ratio of a weighted sum of outputs to a weighted sum of inputs. To this end, a DEA model calculates the weights of the inputs and outputs of each DMU individually so that the highest efficiency can be estimated. The present study suggests an innovative method using a common set of weights, obtained by solving a single linear programming problem; the method determines the efficiency scores of all DMUs and ranks them as well.

16.
Discrete Mathematics 345 (1) (2022) 112666
The game of best choice (or "secretary problem") is a model for making an irrevocable decision among a fixed number of candidate choices that are presented sequentially in random order, one at a time. Because the classically optimal solution is known to reject an initial sequence of candidates, a paradox emerges from the fact that candidates have an incentive to position themselves immediately after this cutoff, which challenges the assumption that candidates arrive in uniformly random order. One way to resolve this is to consider games for which every (reasonable) strategy results in the same probability of success. In this work, we classify these "strategy-indifferent" games of best choice. It turns out that the probability of winning such a game is essentially the reciprocal of the expected number of left-to-right maxima in the full collection of candidate rank orderings. We present some examples of these games based on avoiding permutation patterns of size 3, which involves computing the distribution of left-to-right maxima in each of these pattern classes.
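The classical (uniformly random) game is easy to simulate: reject a cutoff prefix, then accept the first candidate better than everything seen so far. This sketch plays the classical game, not the strategy-indifferent variants classified in the paper; ranks are encoded so that a smaller number means a better candidate.

```python
import random

def play_best_choice(perm, cutoff):
    """Reject the first `cutoff` candidates, then accept the first one
    better (smaller rank) than all seen so far. Returns True iff the
    overall best candidate ends up being chosen."""
    best_seen = min(perm[:cutoff]) if cutoff else float("inf")
    for x in perm[cutoff:]:
        if x < best_seen:
            return x == min(perm)   # accepted; win iff it is the best
    return False                    # never accepted anyone: lose

def win_rate(n, cutoff, trials, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        wins += play_best_choice(perm, cutoff)
    return wins / trials

rate = win_rate(20, 7, 20000)   # cutoff near n/e, rate near 1/e
```

For n = 20 and cutoff 7 the empirical success rate lands near 0.38, consistent with the well-known 1/e asymptotics of the classical game.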

17.
M. Ajtai, Combinatorica 14 (4) (1994) 417–433
The Pigeonhole Principle for n is the statement that there is no one-to-one function between a set of size n and a set of size n−1. This statement can be formulated as an unlimited fan-in, constant depth, polynomial size Boolean formula PHP_n in n(n−1) variables. We may think that the truth-value of the variable x_{i,j} will be true iff the function maps the i-th element of the first set to the j-th element of the second (see Cook and Reckhow [5]). PHP_n can be proved in the propositional calculus. That is, a sequence of Boolean formulae can be given so that each one is either an axiom of the propositional calculus or a consequence of some of the previous ones according to an inference rule of the propositional calculus, and the last one is PHP_n. Our main result is that the Pigeonhole Principle cannot be proved this way if the size of the proof (the total number of symbols of the formulae in the sequence) is polynomial in n and each formula is constant depth (unlimited fan-in), polynomial size and contains only the variables of PHP_n.

18.
This work exploits links between Data Envelopment Analysis (DEA) and multicriteria decision analysis (MCDA), with decision making units (DMUs) playing the role of decision alternatives. A novel perspective is suggested on the use of the additive DEA model in order to overcome some of its shortcomings, using concepts from multiattribute utility models with imprecise information. The underlying idea is to convert input and output factors into utility functions that are aggregated using a weighted sum (additive model of multiattribute utility theory), and then let each DMU choose the weights associated with these functions that minimize the difference of utility to the best DMU. The resulting additive DEA model with oriented projections has a clear rationale for its efficiency measures, and allows meaningful introduction of constraints on factor weights.

19.
We investigate a certain well-established generalization of the Davenport constant. For j a positive integer (the case j = 1 is the classical one) and a finite Abelian group (G, +, 0), the invariant D_j(G) is defined as the smallest ℓ such that each sequence over G of length at least ℓ has j disjoint non-empty zero-sum subsequences. We investigate these quantities for elementary 2-groups of large rank (relative to j). Using tools from coding theory, we give fairly precise estimates for these quantities. We use our results to give improved bounds for the classical Davenport constant of certain groups.
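For small cyclic groups the classical Davenport constant (the case j = 1) can be brute-forced straight from the definition. This is only an illustrative check, not the coding-theory estimates of the paper; it confirms the classical fact D(Z_n) = n for tiny n.

```python
from itertools import combinations, product

def has_zero_sum(seq, n):
    """True iff seq (elements of Z_n) has a non-empty zero-sum subsequence."""
    return any(sum(c) % n == 0
               for r in range(1, len(seq) + 1)
               for c in combinations(seq, r))

def davenport(n):
    """Smallest L such that every length-L sequence over Z_n has a
    non-empty zero-sum subsequence (the j = 1 case of D_j)."""
    L = 1
    while True:
        if all(has_zero_sum(s, n) for s in product(range(n), repeat=L)):
            return L
        L += 1
```

The sequence (1, 1, …, 1) of length n−1 has no zero-sum subsequence in Z_n, while any sequence of length n does, which is what the brute force rediscovers.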

20.
Curves and surfaces of type I are generalized to integral towers of rank r. Weight functions with values in N^r and the corresponding weighted total-degree monomial orderings lift naturally from one domain R_{j−1} in the tower to the next, R_j, the integral closure of R_{j−1}[x_j]/φ(x_j). The qth power algorithm is reworked in this more general setting to produce this integral closure over finite fields, though the application is primarily that of calculating the normalizations of curves related to one-point AG codes arising from towers of function fields. Every attempt has been made to couch all the theory in terms of multivariate polynomial rings and ideals instead of the terminology from algebraic geometry or function field theory, and to avoid the use of any type of series expansion.
