Similar Documents
20 similar documents found (search time: 15 ms)
1.
Classical approaches to fitting and aggregation problems, especially in cluster analysis, social choice theory and paired-comparison methods, consist in minimizing a remoteness function between relational data and a relational model. The notion of median, with its algebraic, metric, geometrical and statistical aspects, allows a unified treatment of many of these basic problems. Properties of median procedures are organized along four directions: stabilities and axiomatic characterizations; Arrow-like properties; combinatorial properties; and effective computational possibilities. Finally, interesting mathematical problems related to the notion of median are surveyed.
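As an editorial illustration of the median notion above (data and function names are hypothetical, not from the surveyed literature), the following sketch uses the symmetric-difference distance between binary relations; the strict-majority relation is a median, i.e., it minimizes the total distance to the profile:

```python
def sym_diff_distance(r1, r2):
    """Symmetric-difference distance between two binary relations (sets of ordered pairs)."""
    return len(r1 ^ r2)

def median_relation(profile, universe):
    """Strict-majority relation: keep a pair iff it occurs in more than half of
    the relations; this minimizes the total symmetric-difference distance."""
    n = len(profile)
    pairs = [(a, b) for a in universe for b in universe if a != b]
    return {p for p in pairs if sum(p in r for r in profile) * 2 > n}

# Three hypothetical voters; the pair (x, y) means "x is preferred to y"
profile = [
    {('a', 'b'), ('b', 'c'), ('a', 'c')},
    {('a', 'b'), ('c', 'b'), ('a', 'c')},
    {('b', 'a'), ('b', 'c'), ('a', 'c')},
]
med = median_relation(profile, ['a', 'b', 'c'])
```

Here the median coincides with the first voter's relation; for an even number of voters, ties on a pair make the median non-unique.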

2.
This short paper takes up the problem, stated by Gaul and Schader in 1988, of simultaneous clustering and aggregation of relations—in this case precedences (preferences)—so that aggregate preferences can represent relatively homogeneous groups of preference relations. The paper identifies two distinct questions, similar to those asked for the global aggregation problem: intra-group agreement on preferences represented in whatever form, and intra-group agreement on a common (‘regular’) preference. In both cases, however, fundamental computational problems arise. The paper presents a heuristic for obtaining an approximate solution and outlines the requirements for a properly optimal method.

3.
This paper discusses DEA total weight flexibility in the context of the cross-efficiency evaluation. DMUs in DEA are often assessed with unrealistic weighting schemes in their attempt to achieve the best ratings in their self-evaluation. We claim here that in a peer-appraisal like the cross-efficiency evaluation, the cross-efficiencies provided by such weights cannot play the same role as those obtained with more reasonable weights. To address this issue, we propose to calculate the cross-efficiency scores by means of a weighted average of cross-efficiencies, instead of the usual arithmetic mean, so that the aggregation weights reflect the disequilibrium in the profiles of DEA weights used. Thus, the cross-efficiencies provided by profiles with large differences in their weights, especially those obtained with zero weights, are assigned lower aggregation weights (less importance) than those provided by more balanced profiles of weights.
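A minimal sketch of this weighted aggregation, with a hypothetical cross-efficiency matrix and an illustrative balance measure standing in for the paper's own measure of weight-profile disequilibrium (a profile containing a zero weight receives aggregation weight zero):

```python
import numpy as np

def balance_score(weight_profile):
    """Illustrative balance measure: ratio of the smallest to the largest weight
    in a DEA weight profile (0 when some weight is zero, 1 when perfectly even)."""
    w = np.asarray(weight_profile, dtype=float)
    return w.min() / w.max()

def weighted_cross_efficiency(cross_eff, profiles):
    """Aggregate each column of the cross-efficiency matrix with weights
    proportional to the balance of the evaluating DMU's weight profile,
    instead of the usual arithmetic mean."""
    b = np.array([balance_score(p) for p in profiles])
    agg = b / b.sum()
    return cross_eff.T @ agg  # one score per evaluated DMU

# Hypothetical 3x3 cross-efficiency matrix: row d holds the efficiencies
# computed with DMU d's optimal weights; column j belongs to DMU j.
E = np.array([[1.00, 0.60, 0.70],
              [0.80, 1.00, 0.50],
              [0.90, 0.40, 1.00]])
profiles = [(0.5, 0.5), (0.9, 0.1), (1.0, 0.0)]  # hypothetical DEA weight profiles
scores = weighted_cross_efficiency(E, profiles)
```

With these numbers, the third DMU's zero-weight profile contributes nothing to anyone's score, while the balanced first profile dominates the aggregation.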

4.
Arrow's impossibility theorem [K.J. Arrow, Social Choice and Individual Values, Wiley, New York, NY, 1951] shows that the set of acyclic tournaments is not closed under non-dictatorial Boolean aggregation. In this paper we extend the notion of aggregation to general tournaments and show that, for tournaments with four vertices or more, no proper symmetric (closed under vertex permutations) subset can be closed under non-dictatorial monotone aggregation or under non-neutral aggregation. We also exhibit a proper subset of tournaments that is closed under parity aggregation for an arbitrarily large number of vertices. This proves a conjecture of Kalai [Social choice without rationality, Reviewed NAJ Economics 3(4)] for the non-neutral case and for the non-dictatorial monotone case, and gives a counterexample for the general case.

5.
This paper investigates the aggregation of multiple fuzzy preference relations into a collective fuzzy preference relation in fuzzy group decision analysis and proposes an optimization-based aggregation approach to assess the relative importance weights of the multiple fuzzy preference relations. The proposed approach, which is analytical in nature, assesses the weights by minimizing the sum of squared distances between any two weighted fuzzy preference relations. Relevant theorems are offered in support of the proposed approach. Multiplicative preference relations are also incorporated into the approach using an appropriate transformation technique. An eigenvector method is introduced to derive the priorities from the collective fuzzy preference relation. The proposed aggregation approach is tested on two numerical examples. A third example, involving broadband internet service selection, illustrates that the approach provides a simple, effective and practical way of aggregating multiple fuzzy preference relations in real-life situations.
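The eigenvector step can be sketched with standard power iteration on a hypothetical multiplicative pairwise comparison matrix (this is the classical principal-eigenvector prioritization, not the paper's full procedure):

```python
import numpy as np

def eigenvector_priorities(A, iters=200):
    """Principal-eigenvector priorities of a pairwise comparison matrix,
    obtained by power iteration and normalized to sum to one."""
    A = np.asarray(A, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w       # one power-iteration step
        w /= w.sum()    # renormalize to a priority vector
    return w

# Hypothetical consistent multiplicative preference relation (a_ij = w_i / w_j)
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
priorities = eigenvector_priorities(A)
```

For a consistent matrix like this one, the priorities are exactly the underlying weight ratios, here 4:2:1.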

6.
We studied a population of paraplegic patients in order to highlight a possible relationship between the topography of their spinal lesion and the occurrence of special articular diseases (P.O.A.). According to the motor and sensory state of their spinal cord, we first tried to obtain a classification of these lesions (the usual one schematically separates ‘flaccid’ and ‘rigid’ paraplegics). We mainly put the emphasis on this clustering step of the study.

7.
After a short methodological presentation of similarity aggregation in automatic classification, we present an application to computational linguistics. Starting from an existing dictionary of synonyms, we explain how we have:
  • (a) defined what was, in our opinion, the meaning of the synonymy relation we wanted to reveal in a new optimized dictionary,
  • (b) transformed the existing dictionary into a sequence of synonymy matrices,
  • (c) checked with an adapted algorithm (the similarity aggregation technique) whether the links appearing in the existing dictionary corresponded to our definition of synonymy,
  • (d) tried to improve the synonymy relation,
in order to propose more accurate data facilitating the management of a new dictionary and providing a classification of synonyms according to a separate semic valuation.

8.
Qualitative factors in data envelopment analysis: A fuzzy number approach
Qualitative factors are difficult to mathematically manipulate when calculating the efficiency in data envelopment analysis (DEA). The existing methods of representing the qualitative data by ordinal variables and assigning values to obtain efficiency measures only superficially reflect the precedence relationship of the ordinal data. This paper treats the qualitative data as fuzzy numbers, and uses the DEA multipliers associated with the decision making units (DMUs) being evaluated to construct the membership functions. Based on Zadeh’s extension principle, a pair of two-level mathematical programs is formulated to calculate the α-cuts of the fuzzy efficiencies. Fuzzy efficiencies contain more information for making better decisions. A performance evaluation of the chemistry departments of 52 UK universities is used for illustration. Since the membership functions are constructed from the opinion of the DMUs being evaluated, the results are more representative and persuasive.

9.
Multiple-attribute pricing problems are highly challenging due to the dynamic and uncertain features of the associated market. In this paper, we address the condominium multiple-attribute pricing problem using data envelopment analysis (DEA). We simultaneously consider stochastic variables, non-discretionary variables, and ordinal data, and present a new type of DEA model. Based on the proposed model, an effective performance measurement tool is developed to provide a basis for understanding the condominium pricing problem, to direct and monitor the implementation of pricing strategy, and to provide information regarding the results of pricing efforts for units sold, as well as insights for future building design. A case study is conducted with a leading Canadian condominium developer.

10.
One way to overcome Arrow's impossibility theorem is to drop the requirement that the collective preference be transitive. If it is quasi-transitive (strict preferences are transitive), an oligarchy emerges. If it is only acyclic, many non-oligarchic aggregation rules are available, yet the resulting decision rules are only weakly decisive: Nakamura's theorem characterizes acyclic and neutral Arrowian aggregators. We propose a parallel characterization of acyclic and anonymous aggregation methods.

11.
12.
As a measure of overall technical inefficiency, the Directional Distance Function (DDF) introduced by Chambers, Chung, and Färe ties the potential output expansion and input contraction together through a single parameter. By duality, the DDF is related to a measure of profit inefficiency, which is calculated as the normalized deviation between optimal and actual profit at market prices. As we show, in the most usual case, the associated normalization represents the sum of the actual revenue and the actual cost of the assessed firm. Consequently, the corresponding profit inefficiency measure associated with the DDF has no obvious economic interpretation. In contrast, in this paper we allow outputs to expand and inputs to contract by different proportions. This results in a modified DDF that retains most of the properties of the original DDF. The corresponding dual problem has a much simpler interpretation as the lost profit on (average) outlay, which can be decomposed into a technical and an allocative inefficiency component. In addition, an overall measure of technical inefficiency at the industry level is introduced, resorting to the direction corresponding to the average input–output bundle.
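A small numeric sketch of the normalization discussed above (figures and the function name are hypothetical): profit inefficiency is the gap between optimal and actual profit, divided by actual revenue plus actual cost:

```python
def ddf_profit_inefficiency(optimal_profit, revenue, cost):
    """Profit gap normalized by actual revenue plus actual cost, as in the
    usual DDF case described above (all figures are hypothetical)."""
    actual_profit = revenue - cost
    return (optimal_profit - actual_profit) / (revenue + cost)

# Hypothetical firm: revenue 120, cost 80, maximal attainable profit 60
ineff = ddf_profit_inefficiency(optimal_profit=60, revenue=120, cost=80)
```

The firm forgoes 20 units of profit against an outlay-plus-revenue total of 200, giving a normalized inefficiency of 0.1.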

13.
Graphics play a crucial role in statistical analysis and data mining. Being able to quantify the structure in data that is visible in plots, and how people read that structure from plots, is an ongoing challenge. The lineup protocol provides a formal framework for data plots, making inference possible: the data plot is treated like a test statistic, and the lineup protocol acts as a comparison with the sampling distribution of the nulls. This article describes metrics for quantifying structure in data plots and evaluates them against the choices that human readers made during several large Amazon Mechanical Turk studies using lineups. Metrics specific to the plot types tended to match subject choices better than generic metrics did. The process we followed to evaluate metrics will be useful for the general development of numerical measures of structure in plots, and also in future lineup experiments for choosing blocks of pictures. Supplementary materials for this article are available online.
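A minimal sketch of the lineup idea (hypothetical data; a real lineup would render each panel as a plot): the observed data panel is hidden among null panels generated by permuting one variable, which destroys any real association:

```python
import random

def lineup(data_pairs, n_nulls=19, seed=1):
    """Lineup protocol sketch: embed the real (x, y) data among null panels
    generated by permuting y, breaking any x-y association."""
    random.seed(seed)
    xs, ys = zip(*data_pairs)
    nulls = []
    for _ in range(n_nulls):
        perm = list(ys)
        random.shuffle(perm)          # null hypothesis: no association
        nulls.append(list(zip(xs, perm)))
    pos = random.randrange(n_nulls + 1)  # hide the real panel at a random slot
    panels = nulls[:pos] + [list(data_pairs)] + nulls[pos:]
    return panels, pos

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
panels, true_pos = lineup(data)
```

If readers reliably pick the panel at `true_pos` out of the twenty, the structure in the data plot is judged significant.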

14.
The validity of many efficiency measurement methods relies upon the assumption that variables such as input quantities and output mixes are independent of (or uncorrelated with) technical efficiency; however, few studies have attempted to test these assumptions. In a recent paper, Wilson (2003) investigates a number of independence tests and finds that they have poor size properties and low power in moderate sample sizes. In this study we discuss the implications of these assumptions in three situations: (i) bootstrapping non-parametric efficiency models; (ii) estimating stochastic frontier models; and (iii) obtaining aggregate measures of industry efficiency. We propose a semi-parametric Hausman-type asymptotic test for linear independence (uncorrelatedness), and use a Monte Carlo experiment to show that it has good size and power properties in finite samples. We also describe how the test can be generalized to detect higher-order dependencies, such as heteroscedasticity, so that it can be used to test for (full) independence when the efficiency distribution has a finite number of moments. Finally, an empirical illustration is provided using data on US electric power generation.
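A generic illustration of testing uncorrelatedness (a simple asymptotic z-test on the sample correlation, used here as a stand-in for the paper's semi-parametric Hausman-type test; data are hypothetical):

```python
import math

def uncorrelation_test(x, y):
    """Asymptotic test of zero correlation: under the null of uncorrelatedness,
    sqrt(n) * r is approximately standard normal for large n."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)   # sample correlation
    z = math.sqrt(n) * r             # asymptotic z statistic
    return r, z

# Hypothetical input quantities and efficiency estimates
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.0, 4.0, 3.0, 5.0]
r, z = uncorrelation_test(x, y)
```

A large |z| would lead to rejecting uncorrelatedness; the paper's test additionally handles the semi-parametric setting and higher-order dependencies.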

15.
16.
A class of arity-monotonic aggregation operators, called impact functions, is proposed. This family of operators forms a theoretical framework for the so-called Producer Assessment Problem, which includes the scientometric task of fair and objective assessment of scientists using the number of citations received by their publications. The impact function output values are analyzed under right-censored and dynamically changing input data. The qualitative possibilistic approach is used to describe this kind of uncertainty; it leads to intuitive graphical interpretations and may easily be applied for practical purposes. The discourse is illustrated by a family of aggregation operators generalizing the well-known Ordered Weighted Maximum (OWMax) and the Hirsch h-index.
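The h-index and OWMax mentioned above admit a compact sketch (citation counts are hypothetical): with weights 1, 2, …, n applied to the sorted values, OWMax reproduces the h-index:

```python
def h_index(citations):
    """Hirsch h-index: the largest h such that at least h publications have
    at least h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

def owmax(values, weights):
    """Ordered Weighted Maximum: max over positions i of
    min(weight_i, i-th largest value)."""
    vs = sorted(values, reverse=True)
    return max(min(w, v) for w, v in zip(weights, vs))

citations = [10, 8, 5, 4, 3]  # hypothetical citation counts
h = h_index(citations)
h_via_owmax = owmax(citations, [1, 2, 3, 4, 5])
```

Both computations give 4: four papers have at least four citations each, but not five with five.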

17.
This article presents a method for the visualization of multivariate functions. The method is based on a tree structure—called the level set tree—built from separated parts of the level sets of a function. The method is applied to the visualization of estimates of multivariate density functions. With different graphical representations of level set trees we may visualize the number and location of modes, the excess masses associated with the modes, and certain shape characteristics of the estimate. Simulation examples are presented where projecting the data to two dimensions does not reveal the modes of the density, but level set trees detect them. I argue that level set trees provide a useful method for exploratory data analysis.

18.
There are different ways to allow voters to express their preferences on a set of candidates. In ranked voting systems, each voter selects a subset of the candidates and ranks them in order of preference. A well-known class of these voting systems is scoring rules, where fixed scores are assigned to the different ranks and the candidates with the highest score are the winners. One of the most important issues in this context is the choice of the scoring vector, since the winning candidate can vary according to the scores used. To avoid this problem, Cook and Kress [W.D. Cook, M. Kress, A data envelopment model for aggregating preference rankings, Management Science 36 (11) (1990) 1302–1310], using a DEA/AR model, proposed to assess each candidate with the scoring vector most favorable to him/her. However, this procedure often causes several candidates to be efficient, i.e., to achieve the maximum score. For this reason, several methods to discriminate among efficient candidates have been proposed. The aim of this paper is to analyze these methods and show some of their drawbacks.
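A small sketch of why the choice of scoring vector matters (hypothetical seven-voter profile): the plurality vector and the Borda vector elect different winners:

```python
def scores(rankings, scoring_vector):
    """Total score per candidate: a voter's rank r contributes scoring_vector[r];
    unranked positions beyond the vector contribute zero."""
    totals = {}
    for ranking in rankings:
        for rank, cand in enumerate(ranking):
            pts = scoring_vector[rank] if rank < len(scoring_vector) else 0
            totals[cand] = totals.get(cand, 0) + pts
    return totals

# Hypothetical profile: 3 voters a>b>c, 2 voters b>c>a, 2 voters c>b>a
rankings = [['a', 'b', 'c']] * 3 + [['b', 'c', 'a']] * 2 + [['c', 'b', 'a']] * 2
plurality = scores(rankings, [1, 0, 0])  # candidate a wins
borda = scores(rankings, [2, 1, 0])      # candidate b wins
```

This instability is exactly what motivates evaluating each candidate with his/her most favorable scoring vector, as in the DEA/AR model cited above.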

19.
Conventional data envelopment analysis (DEA) for measuring the efficiency of a set of decision making units (DMUs) requires the input/output data to be constant. In reality, however, many observations are stochastic in nature; consequently, the resulting efficiencies are stochastic as well. This paper discusses how to obtain the efficiency distribution of each DMU via a simulation technique. The case of Taiwan commercial banks shows that, first, the number of replications in the simulation has little effect on the estimation of efficiency means, yet 1000 replications are recommended to produce reliable efficiency means and 2000 replications for a good estimation of the efficiency distributions. Second, the conventional practice of using average data to represent stochastic variables yields efficiency scores that differ from the mean efficiencies of the presumably true efficiency distributions estimated from simulation. Third, the interval-data approach produces true efficiency intervals, yet the intervals are too wide to provide valuable information. In conclusion, when multiple observations are available for each DMU, the stochastic-data approach produces more reliable and informative results than the average-data and interval-data approaches do.
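A toy single-input illustration (not the paper's DEA simulation) of the second finding: because efficiency is a nonlinear function of the data, the efficiency of the average data differs from the mean of the efficiency distribution:

```python
def efficiency(frontier_input, observed_input):
    """Input-oriented radial efficiency against a known frontier input level."""
    return frontier_input / observed_input

# Hypothetical DMU: four observations of its stochastic input; frontier at 8
inputs = [8.0, 10.0, 12.0, 10.0]
mean_of_efficiencies = sum(efficiency(8.0, x) for x in inputs) / len(inputs)
efficiency_of_mean = efficiency(8.0, sum(inputs) / len(inputs))
```

Here the mean of the efficiencies (about 0.817) exceeds the efficiency computed from the average input (0.8), a consequence of Jensen's inequality applied to the reciprocal.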

20.
The recent contribution by Cheng et al. (2013) presents a variant of the traditional radial input- and output-oriented efficiency measures whereby original values are replaced with absolute values. This comment points out some imprecisions in that article and presents some further results.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号