Similar literature
20 similar items found (search time: 255 ms)
1.
Let A be a finite set of feasible actions judged on several criteria. An outranking relation is defined on A by modelling the decision maker's preference on each criterion as a weak order, and the relation among criteria as a semi-order on the given set of criteria. Several ways of constructing outranking relations have been proposed. One of the most popular, introduced by B. Roy (for instance, the ELECTRE methods), is based on weights attached to the criteria. In our approach, knowledge of the weights is replaced by the existence of a semi-order. A case study dealing with a computer selection problem is developed.

2.
We investigate the connection between weights, scales, and the importance of criteria, when a linear value function is assumed to be a suitable representation of a decision maker’s preferences. Our considerations are based on a simple two-criteria experiment, where the participants were asked to indicate which of the criteria was more important, and to pairwise compare a number of alternatives. We use the participants’ pairwise choices to estimate the weights for the criteria in such a way that the linear value function explains the choices to the extent possible. More specifically, we study two research questions: (1) is it possible to find a general scaling principle that makes the rank order of the importance of criteria consistent with the rank order of the magnitudes of the weights, and (2) how good is a simple, direct method of asking the decision maker to “provide” weights for the criteria compared to our estimation procedure. Our results imply that there is reason to question two common beliefs, namely that the values of the weights reflect the importance of criteria, and that people can reliably “provide” such weights without estimation.
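The estimation idea in this abstract can be sketched as a simple search: for hypothetical two-criterion data (the alternatives and choices below are illustrative, not the experiment's), find the weight of a linear value function v(x) = w1·x1 + w2·x2 that explains as many of the decision maker's pairwise choices as possible.

```python
def estimate_weight(alternatives, choices, steps=1000):
    """Return w1 (with w2 = 1 - w1) maximizing the number of pairwise
    choices consistent with the linear value function, by grid search."""
    best_w, best_hits = 0.0, -1
    for k in range(steps + 1):
        w1 = k / steps
        hits = 0
        for (i, j) in choices:  # the DM chose alternative i over j
            xi, xj = alternatives[i], alternatives[j]
            vi = w1 * xi[0] + (1 - w1) * xi[1]
            vj = w1 * xj[0] + (1 - w1) * xj[1]
            if vi > vj:
                hits += 1
        if hits > best_hits:
            best_w, best_hits = w1, hits
    return best_w, best_hits

# made-up alternatives scaled to [0, 1] on both criteria
alts = {"a": (0.9, 0.2), "b": (0.3, 0.8), "c": (0.6, 0.6)}
choices = [("a", "b"), ("c", "b"), ("a", "c")]
w1, hits = estimate_weight(alts, choices)
```

With these choices all three comparisons are explained only when criterion 1 receives the larger weight, illustrating how the estimated weights need not match a directly stated importance order.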

3.
This paper presents a multiple criteria decision making approach for solving problems involving fuzzy individual preferences. We first briefly expose the proposed methodology. The individual preferences are given explicitly by a complete transitive relation R on a set of reference actions. The decision maker's preferences are modelled by means of fuzzy outranking relations. These fuzzy relations are based on a system of additive utility functions estimated by ordinal regression methods analysing the preference relation R. We then present two real multicriteria problems to which the proposed methodology has been applied: a highway plan choice problem and a marketing research problem dealing with the launch of a new product. In each application we adapted the method to the specific structure of the problem considered.

4.
The Reference Point Method (RPM) is a very convenient technique for interactive analysis of multiple criteria optimization problems. The interactive analysis is navigated with commonly accepted control parameters expressing reference levels for the individual objective functions. The partial achievement functions quantify the DM's satisfaction with the individual outcomes with respect to the given reference levels, while the final scalarizing achievement function is built as the augmented max–min aggregation of the partial achievements. To avoid inconsistencies caused by this regularization, the max–min solution may instead be regularized by an Ordered Weighted Average (OWA) with monotonic weights, which combines all the partial achievements by allocating the largest weight to the worst achievement, the second largest weight to the second worst achievement, and so on. Further, following the concept of the Weighted OWA (WOWA), importance weighting of the individual achievements may be incorporated into the RPM. Such a WOWA RPM approach uses importance weights to affect achievement importance by rescaling its measure within the distribution of achievements accordingly, rather than by straightforwardly rescaling achievement values. Recent progress in optimization methods for ordered averages allows one to implement the WOWA RPM quite effectively by extending the original constraints and criteria with simple linear inequalities. It is shown that the OWA and WOWA RPM models meet the crucial requirements with respect to the efficiency of generated solutions as well as the controllability of the interactive analysis by the reference levels.
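The OWA regularization described above can be sketched as follows; the achievement values and the monotonic (non-increasing) weights are illustrative, not taken from any specific RPM implementation.

```python
def owa(achievements, weights):
    """OWA aggregation: apply the weights to the achievements sorted
    from worst to best, so that non-increasing weights give the
    largest weight to the worst achievement (as in the OWA-regularized
    max-min aggregation of the RPM)."""
    assert len(achievements) == len(weights)
    ordered = sorted(achievements)  # worst partial achievement first
    return sum(w * a for w, a in zip(weights, ordered))

# three partial achievements and decreasing weights summing to 1
score = owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2])
```

Here the worst achievement 0.2 receives weight 0.5, so the aggregate stays close to the max–min (worst-case) value while still discriminating on the better outcomes.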

5.
A problem of subset selection when actions are interdependent is formulated within a multiple criteria framework. More specifically, a novel definition and characterization of interdependence of actions applicable to Multiple Criteria Decision Making (MCDM) are presented. The effects of interdependence of actions on the modeling and resolution of a subset choice problem are shown, and the importance of taking interdependence of actions into account is discussed. Most of the discussion is generalized to independence and interdependence of sets of actions, which are then compared to the case of individual actions. A general approach to evaluate a combination of interdependent actions is proposed and the use of the multiple criteria structure to eliminate some difficulties in evaluating a set of interdependent actions is explained.

6.
We present a method called Generalized Regression with Intensities of Preference (GRIP) for ranking a finite set of actions evaluated on multiple criteria. GRIP builds a set of additive value functions compatible with preference information composed of a partial preorder and required intensities of preference on a subset of actions, called reference actions. It constructs not only the preference relation on the considered set of actions, but also gives information about intensities of preference for pairs of actions from this set for a given decision maker (DM). By distinguishing necessary and possible consequences of the preference information on the considered set of actions, GRIP answers questions of robustness analysis. The proposed methodology can be seen as an extension of the UTA method based on ordinal regression. GRIP can also be compared to the AHP method, which requires pairwise comparison of all actions and criteria, and yields a priority ranking of actions. As for the preference information being used, GRIP can moreover be compared to the MACBETH method, which also takes into account a preference order of actions and intensities of preference for pairs of actions. The preference information used in GRIP does not need, however, to be complete: the DM is asked to compare only those pairs of reference actions on particular criteria for which his/her judgment is sufficiently certain. This is an important advantage compared with methods which, instead, require comparison of all possible pairs of actions on all the considered criteria. Moreover, GRIP works with a set of general additive value functions compatible with the preference information, while other methods use a single and less general value function, such as the weighted sum.

7.
When solving a multiobjective programming problem by the weighted sum approach, weights represent the relative importance associated with the objectives. As these values are usually imprecise, it is important to analyze the sensitivity of the solution under possible deviations from the estimated values. In this sense, the tolerance approach provides a direct measure of how much the weights may vary, simultaneously and independently, from their estimated values while still retaining the same efficient solution. This paper provides an explicit expression for the maximum tolerance on weights in a multiobjective linear fractional programming problem when all the denominators are equal. An application is also presented to illustrate how the results may help the decision maker choose the most satisfactory solution in a production problem.
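A numeric illustration of the tolerance idea (not the paper's closed-form expression for the linear fractional case): for a finite set of candidate solutions of a bi-objective problem, scan how far w1 (with w2 = 1 − w1) may deviate from its estimate while the same solution remains optimal for the weighted sum. All data below are hypothetical.

```python
def tolerance(solutions, w1_hat, grid=10000):
    """Largest symmetric deviation d such that the weighted-sum optimum
    is unchanged for all w1 in [w1_hat - d, w1_hat + d] (grid scan)."""
    def argmax(w1):
        vals = [w1 * f1 + (1 - w1) * f2 for (f1, f2) in solutions]
        return vals.index(max(vals))
    base = argmax(w1_hat)
    d = 0.0
    for k in range(1, grid + 1):
        step = k / grid
        lo, hi = w1_hat - step, w1_hat + step
        if lo < 0 or hi > 1 or argmax(lo) != base or argmax(hi) != base:
            break
        d = step
    return d

# three efficient solutions evaluated on two maximized objectives
d = tolerance([(9, 1), (6, 6), (1, 9)], w1_hat=0.5)
```

For these data the middle solution (6, 6) stays optimal for w1 in (0.375, 0.625), so the scan reports a tolerance just under 0.125.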

8.
In this paper, a multiple criteria ranking procedure based on distance between partial preorders is proposed. The method consists of two phases. In the first phase, the decision maker is asked to rank the alternatives with a preorder (complete or partial) for each criterion and to provide complete or partial linear information about the relative importance (weights) of the criteria. In the second phase, we introduce a distance procedure to aggregate the above individual rankings into a global ranking (a partial preorder). An algorithm for the aggregation procedure is proposed, followed by a numerical illustration.
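A simplified sketch of distance-based aggregation (the paper works with partial preorders and its own distance; this version aggregates complete rankings by minimizing the weighted sum of Kendall tau distances, with made-up data):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of discordant pairs between two complete rankings
    (lists of alternatives, best first)."""
    p1 = {a: i for i, a in enumerate(r1)}
    p2 = {a: i for i, a in enumerate(r2)}
    items = list(p1)
    return sum(1 for i in range(len(items)) for j in range(i + 1, len(items))
               if (p1[items[i]] - p1[items[j]]) * (p2[items[i]] - p2[items[j]]) < 0)

def aggregate(rankings, weights):
    """Global ranking minimizing the weighted total distance to the
    per-criterion rankings (exhaustive search, small instances only)."""
    alts = rankings[0]
    return min(permutations(alts),
               key=lambda r: sum(w * kendall_tau(list(r), rk)
                                 for w, rk in zip(weights, rankings)))

best = aggregate([["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]],
                 [0.5, 0.3, 0.2])
```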

9.
Deriving accurate interval weights from interval fuzzy preference relations is key to successfully solving decision making problems. Xu and Chen (2008) proposed a number of linear programming models to derive interval weights, but their definition of the additive consistent interval fuzzy preference relation and the linear programming models still need to be improved. In this paper, a numerical example is given to show how these definitions and models can be improved to increase accuracy. A new additive consistency definition for interval fuzzy preference relations is proposed and novel linear programming models are established to demonstrate the generation of interval weights from an interval fuzzy preference relation.

10.
Several multi-criteria decision-making methodologies assume the existence of weights associated with the different criteria, reflecting their relative importance. One of the most popular ways to infer such weights is the analytic hierarchy process, which first constructs a matrix of pairwise comparisons, from which weights are derived following one of many existing procedures, such as the eigenvector method or the least (logarithmic) squares method. Since different procedures yield different results (weights), we pose the problem of describing the set of weights obtained by “sensible” methods: those which are efficient for the (vector) optimization problem of simultaneously minimizing the discrepancies. A characterization of the set of efficient solutions is given, which enables us to assert that the least-logarithmic-squares solution is always efficient, whereas the (widely used) eigenvector solution is in some cases not efficient, so its use in practice may be questionable. This research has been supported by the Spanish Science and Technology Ministry and FEDER Grants No. BFM2002-04525-C02-02 and BFM2002-11282-E.
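The two derivation procedures mentioned above can be compared on a small example. For this problem the least-logarithmic-squares solution is the normalized row geometric mean; the sketch below uses a made-up, perfectly consistent 3×3 comparison matrix, for which the two procedures coincide (they differ only on inconsistent matrices).

```python
import math

def geometric_mean_weights(M):
    """Least-logarithmic-squares weights: normalized row geometric means."""
    n = len(M)
    g = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(g)
    return [x / s for x in g]

def eigenvector_weights(M, iters=200):
    """Principal right eigenvector of M via power iteration, normalized."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# consistent comparison matrix a_ij = w_i / w_j for w = (0.6, 0.3, 0.1)
M = [[1, 2, 6],
     [0.5, 1, 3],
     [1/6, 1/3, 1]]
gm = geometric_mean_weights(M)
ev = eigenvector_weights(M)
```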

11.
Promethee II is a prominent method for multi-criteria decision aid (MCDA) that builds a complete ranking on a set of potential actions by assigning each of them a so-called net flow score. However, to calculate these scores, each pair of actions has to be compared, causing the computational load to increase quadratically with the number of actions and eventually leading to prohibitive execution times for large decision problems. For some problems, however, a trade-off between the ranking’s accuracy and the required evaluation time may be acceptable. Therefore, we propose a piecewise linear model that approximates Promethee II’s net flow scores and reduces the computational complexity (with respect to the number of actions) from quadratic to linear, at the cost of some wrongly ranked actions. Simulations on artificial problem instances allow us to quantify this time/quality trade-off and to provide probabilistic bounds on the problem size above which our model satisfyingly approximates Promethee II’s rankings. They show, for instance, that for decision problems of 10,000 actions evaluated on 7 criteria, the Pearson correlation coefficient between the original scores and our approximation is at least 0.97. When put in balance with computation times more than 7000 times faster than for the Promethee II model, the proposed approximation model represents an interesting alternative for large problem instances.
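The quadratic pairwise computation that the paper approximates can be sketched as follows, using the "usual" (0/1) preference function and equal criterion weights; the actions are illustrative.

```python
def net_flows(actions):
    """Promethee II net flow scores for actions given as tuples of
    criterion values (all maximized): positive minus negative
    outranking flow, averaged over the other actions. O(n^2) pairs."""
    n = len(actions)
    m = len(actions[0])

    def pref(a, b):  # usual criterion: 1 per criterion where a beats b
        return sum(1.0 for k in range(m) if a[k] > b[k]) / m

    phi = []
    for i in range(n):
        flow = sum(pref(actions[i], actions[j]) - pref(actions[j], actions[i])
                   for j in range(n) if j != i) / (n - 1)
        phi.append(flow)
    return phi

scores = net_flows([(5, 3), (4, 4), (1, 1)])
```

Net flows always sum to zero, and the dominated action receives the minimum score of −1; it is the n(n−1) calls to `pref` that the paper's piecewise linear model avoids.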

12.
In this paper we study lattice rules, which are cubature formulae for approximating integrals over the unit cube [0,1]^s of integrands from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance, for two reasons stemming from practical applications: (i) it is usually not known in practice how to choose the weights, and by assuming that the weights are random variables we obtain constructions of lattice rules that are robust with respect to the weights, which to some extent removes the necessity of choosing them carefully; (ii) in practice it is convenient to use the same lattice rule for many different integrands, and since the best choice of weights may vary from integrand to integrand, treating the weights as random variables does justice to how lattice rules are used in applications. In this setting the worst-case error is therefore a random variable depending on the random weights. We show how one can construct lattice rules which perform well for weights taken from a set of large measure; such lattice rules are robust with respect to certain changes in the weights. The construction algorithm uses the component-by-component (cbc) idea based on two criteria, one using the mean of the worst-case error and the other using a bound on its variance. We call the new algorithm the cbc2c (component-by-component with 2 constraints) algorithm. We also study a generalized version which uses r constraints, which we call the cbcrc (component-by-component with r constraints) algorithm. We show that lattice rules generated by the cbcrc algorithm simultaneously work well for all weights in a subspace spanned by the chosen weights γ(1), …, γ(r). Thus, in applications, instead of finding one set of weights, it is enough to find a convex polytope in which the optimal weights lie. The price for this method is a factor r in the upper bound on the error and in the construction cost of the lattice rule. Thus the burden of determining one set of weights very precisely can be shifted to the construction of good lattice rules. Numerical results indicate the benefit of using the cbc2c algorithm for certain choices of weights.
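A rank-1 lattice rule itself is easy to sketch: average the integrand over the n points {k·z/n mod 1}. The generating vector below is a fixed Fibonacci choice rather than one produced by the cbc2c/cbcrc constructions discussed in the paper, and the test integrand is a trigonometric polynomial whose Fourier modes all avoid the dual lattice, so the rule integrates it exactly.

```python
import math

def lattice_rule(f, z, n):
    """Average f over the rank-1 lattice points {k*z/n mod 1}, k = 0..n-1."""
    return sum(f(tuple((k * zi / n) % 1.0 for zi in z)) for k in range(n)) / n

# Integrand with integral 1 over [0,1]^2; its nonzero Fourier modes
# (h1, h2) in {-1,0,1}^2 never satisfy h1 + 21*h2 = 0 (mod 55), so the
# Fibonacci rule z = (1, 21), n = 55 reproduces the integral exactly.
g = lambda p: (1 + math.cos(2 * math.pi * p[0])) * (1 + math.cos(2 * math.pi * p[1]))
approx = lattice_rule(g, (1, 21), 55)
```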

13.
Models for Multiple Criteria Decision Analysis (MCDA) often separate per-criterion attractiveness evaluation from weighted aggregation of these evaluations across the different criteria. In simulation-based MCDA methods, such as Stochastic Multicriteria Acceptability Analysis, uncertainty in the weights is modeled through a uniform distribution on the feasible weight space defined by a set of linear constraints. Efficient sampling methods have been proposed for special cases, such as the unconstrained weight space or a complete ordering of the weights. However, no efficient methods are available for other constraints, such as imprecise trade-off ratios, and specialized sampling methods do not allow for flexibility in combining the different constraint types. In this paper, we explore how the Hit-And-Run sampler can be applied as a general approach for sampling from the convex weight space that results from an arbitrary combination of linear weight constraints. We present a technique for transforming the weight space to enable application of Hit-And-Run, and evaluate the sampler’s efficiency through computational tests. Our results show that the thinning factor required to obtain uniform samples can be expressed as a function of the number of criteria n as φ(n) = (n − 1)^3. We also find that the technique is reasonably fast for problem sizes encountered in practice and that autocorrelation is an appropriate convergence metric.
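A minimal Hit-And-Run sketch on a constrained weight space: here the set {w : w1 ≥ w2 ≥ w3 ≥ 0, w1 + w2 + w3 = 1}, parameterized by its first two coordinates so the equality constraint is eliminated (a simple instance of the transformation idea). The constraint set, step count, and the absence of thinning are illustrative only.

```python
import random

# linear constraints a . v >= b on v = (w1, w2), with w3 = 1 - w1 - w2
CONS = [((1.0, -1.0), 0.0),    # w1 >= w2
        ((1.0, 2.0), 1.0),     # w2 >= w3   (w1 + 2*w2 >= 1)
        ((-1.0, -1.0), -1.0)]  # w3 >= 0    (w1 + w2 <= 1)

def hit_and_run(start, steps=200, seed=7):
    """From an interior point, repeatedly pick a random direction,
    intersect the line with the polytope, and jump to a uniform point
    on the resulting chord."""
    rng = random.Random(seed)
    x = list(start)
    samples = []
    for _ in range(steps):
        d = [rng.gauss(0, 1), rng.gauss(0, 1)]   # random direction
        tlo, thi = float("-inf"), float("inf")
        for (a, b) in CONS:                      # chord along x + t*d
            ad = a[0] * d[0] + a[1] * d[1]
            ax = a[0] * x[0] + a[1] * x[1]
            if abs(ad) < 1e-12:
                continue
            t = (b - ax) / ad
            if ad > 0.0:
                tlo = max(tlo, t)
            else:
                thi = min(thi, t)
        t = rng.uniform(tlo, thi)                # uniform point on chord
        x = [x[0] + t * d[0], x[1] + t * d[1]]
        samples.append((x[0], x[1], 1.0 - x[0] - x[1]))
    return samples

ws = hit_and_run([0.5, 0.3])
```

Every sample satisfies the ordering constraints and sums to one by construction; in practice the chain would additionally be thinned (e.g., by the φ(n) factor above) to approach independent uniform draws.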

14.
The linear ordering problem consists in finding a linear order at minimum remoteness from a weighted tournament T, the remoteness being the sum of the weights of the arcs that must be reversed in T to transform it into a linear order. This problem, also known as the search for a median order, a maximum acyclic subdigraph, a maximum consistent set, or a minimum feedback arc set, is NP-hard; when all the weights of T are equal to 1, the linear ordering problem coincides with Slater's problem. In this paper, we describe the principles and the results of an exact method designed to solve the linear ordering problem for any weighted tournament. The method, whose software is freely available at http://www.enst.fr/~charon/tournament/median.html, is based upon a branch-and-bound search with a Lagrangean relaxation as the evaluation function and a noising method for computing the initial bound. Other components are designed to reduce the size of the branch-and-bound search tree.
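The remoteness objective can be illustrated by brute force on a tiny instance (the exact method in the paper, branch-and-bound with Lagrangean relaxation, is for realistic sizes; the 4-vertex weighted tournament below is made up):

```python
from itertools import permutations

def remoteness(order, arcs):
    """Total weight of the tournament arcs that go against `order`
    (earlier position = preferred) and would have to be reversed."""
    pos = {v: k for k, v in enumerate(order)}
    return sum(w for (i, j), w in arcs.items() if pos[i] > pos[j])

def median_order(vertices, arcs):
    """Exhaustive search for a minimum-remoteness (median) order."""
    return min(permutations(vertices), key=lambda o: remoteness(o, arcs))

# arcs[(i, j)] = weight of the arc i -> j (exactly one arc per pair)
arcs = {("a", "b"): 3, ("b", "c"): 2, ("c", "a"): 1,
        ("a", "d"): 2, ("b", "d"): 1, ("d", "c"): 4}
best = median_order("abcd", arcs)
```

The cycle a → b → c → a forces at least one reversal; reversing its cheapest arc (c → a, weight 1) yields the unique median order a, b, d, c.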

15.
16.
17.
National competitiveness is a measure of the relative ability of a nation to create and maintain an environment in which enterprises can compete, so that the level of prosperity can be improved. This paper proposes a methodology for measuring national competitiveness and uses the 10 Southeast Asian countries for illustration. The basic idea is to deconstruct the complicated concept of national competitiveness into measurable criteria. The observations (data) on the criteria are then aggregated according to their importance to obtain an index of national competitiveness. For data collected from questionnaire surveys, a calibration technique has been devised to alleviate bias due to personal prejudice. In the data aggregation, importance is expressed by both a priori weights and a posteriori weights. These two types of weights consistently show that Singapore, Malaysia, and Thailand have the highest national competitiveness, while Myanmar, Cambodia, and Laos are the least competitive countries. The performance of each country on every criterion measured also provides directions for these countries to make improvements and for investors to allocate resources.

18.
The paper deals with the problem of finding the field of force that generates a given (N − 1)-parametric family of orbits for a mechanical system with N degrees of freedom. This problem is usually referred to as the inverse problem of dynamics. We study this problem in relation to the problems of celestial mechanics. We state and solve a generalization of the Dainelli and Joukovski problem and propose a new approach to solving the inverse Suslov problem. We apply the obtained results to generalize the theorem enunciated by Joukovski in 1890, solve the inverse Stäckel problem, and solve the problem of constructing the potential-energy function U that is capable of generating a bi-parametric family of orbits for a particle in space. We determine the equations for the sought-for function U and show that on the basis of these equations we can define a system of two linear partial differential equations with respect to U which contains as a particular case the Szebehely equation. We solve completely a special case of the inverse dynamics problem of constructing U that generates a given family of conics, known as Bertrand's problem. Finally, we establish the relation between Bertrand's problem and the solutions of the Heun differential equation. We illustrate our results by several examples.

19.
Given n points in the plane with nonnegative weights, the inverse Fermat–Weber problem consists in changing the weights at minimum cost so that a prespecified point in the plane becomes the Euclidean 1-median. The cost is proportional to the increase or decrease of the corresponding weight. In case the prespecified point does not coincide with one of the given n points, the inverse Fermat–Weber problem can be formulated as a linear program. We derive a purely combinatorial algorithm which solves the inverse Fermat–Weber problem with unit costs using O(n) greedy-like iterations, each of which can be done in constant time if the points are sorted according to their slopes. If the prespecified point coincides with one of the given n points, it is shown that the corresponding inverse problem can be written as a convex problem and hence is solvable in polynomial time to any fixed precision.

20.
This paper addresses multiple criteria group decision making problems where each group member offers imprecise information on his/her preferences about the criteria. In particular, we study the inclusion of this partial information in the decision problem when the individuals’ preferences do not provide a vector of common criteria weights, so that a compromise preference vector of weights has to be determined as part of the decision process in order to evaluate a finite set of alternatives. We present a method where the compromise is defined by lexicographical minimization of the maximum disagreement between the values assigned to the alternatives by the group members and the evaluation induced by the compromise weights.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号