20 similar documents found; search took 11 ms
1.
In this work, we consider a public facility allocation problem decided through a voting process under the majority rule. A location of the public facility is a majority-rule winner if there is no other location in the network that more than half of the voters would be closer to. We develop fast algorithms for interesting cases with nice combinatorial structures. We show that the computing problem and the decision problem in the general case, where the number of public facilities is more than one and is considered part of the input size, are both NP-hard. Finally, we discuss majority-rule decision making for related models.
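The winner condition above can be checked by brute force given a table of voter-to-location network distances. A minimal sketch (not the paper's fast algorithm); the `dist` table and the path-graph example are invented for illustration:

```python
def is_majority_rule_winner(dist, voters, location):
    """A location wins under majority rule iff no alternative location
    is strictly closer for more than half of the voters.
    dist[v][l]: network distance from voter v's node to location l."""
    n = len(voters)
    for alt in dist[voters[0]]:
        if alt == location:
            continue
        closer = sum(1 for v in voters if dist[v][alt] < dist[v][location])
        if closer > n / 2:
            return False  # a strict majority would prefer `alt`
    return True

# Path graph 1-2-3 with one voter per node: the median node 2 wins,
# while an end node does not.
dist = {v: {l: abs(v - l) for l in (1, 2, 3)} for v in (1, 2, 3)}
```

On this toy instance `is_majority_rule_winner(dist, [1, 2, 3], 2)` holds, while node 1 loses a pairwise vote against node 2.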
2.
V. Yu. Kiselev 《International Journal of Game Theory》2008,37(2):303-305
A criterion for eligibility of a candidate by the set of scoring rules (de Borda voting rules) is given. This criterion generalizes a necessary (but not sufficient) condition of eligibility in Moulin (Axioms of cooperative decision making, Cambridge University Press, Cambridge, 1988). The author thanks Profs. H. Moulin and A. V. Shapovalov for illuminating discussions.
3.
The Condorcet criterion and committee selection (Cited: 1; self-citations: 0; other citations: 1)
William V. Gehrlein 《Mathematical Social Sciences》1985,10(3):199-209
Recent studies have evaluated election procedures on their propensity to select committees that meet a Condorcet criterion. The Condorcet criterion has been defined to use majority agreement from voters' preferences to compare the selected committee to all other committees. This study uses a different definition of the Condorcet criterion as defined on committees. The focus of the new definition is on candidates. That is, we consider majority agreement on each candidate in the selected committee as compared to each candidate not in the selected committee. This new definition of the Condorcet criterion allows for the existence of majority cycles on candidates within the selected committee. However, no candidate in the non-selected group is able to defeat any candidate in the selected committee by majority rule. Of particular interest is the likelihood that a committee meeting this Condorcet criterion exists. Attention is also given to the likelihood that various simple voting procedures will select a committee meeting this Condorcet criterion when one does exist.
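The candidate-wise criterion described here is straightforward to operationalise: every committee member must beat every non-member in a pairwise majority contest, while cycles among members are tolerated. A minimal sketch; the three-voter profile below is an invented example, not data from the paper:

```python
from itertools import combinations

def pairwise_wins(profile):
    """profile: list of strict rankings (most preferred first).
    Returns dict (a, b) -> number of voters ranking a above b."""
    wins = {}
    for ranking in profile:
        pos = {c: i for i, c in enumerate(ranking)}
        for a, b in combinations(ranking, 2):
            key = (a, b) if pos[a] < pos[b] else (b, a)
            wins[key] = wins.get(key, 0) + 1
    return wins

def is_condorcet_committee(profile, committee):
    """Candidate-wise Condorcet criterion: every committee member beats
    every non-member by strict majority; majority cycles WITHIN the
    committee (or among the outsiders) are allowed."""
    n = len(profile)
    wins = pairwise_wins(profile)
    outsiders = set(profile[0]) - set(committee)
    return all(wins.get((m, o), 0) > n / 2
               for m in committee for o in outsiders)
```

For the profile `[['a','b','c'], ['b','a','c'], ['a','c','b']]`, the committee `{a, b}` satisfies the criterion while `{a, c}` does not, since `c` loses to `b` by majority.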
4.
A. P. Dawid 《Annals of the Institute of Statistical Mathematics》2007,59(1):77-93
A decision problem is defined in terms of an outcome space, an action space and a loss function. Starting from these simple ingredients, we can construct: Proper Scoring Rule; Entropy Function; Divergence Function; Riemannian Metric; and Unbiased Estimating Equation. From an abstract viewpoint, the loss function defines a duality between the outcome and action spaces, while the correspondence between a distribution and its Bayes act induces a self-duality. Together these determine a “decision geometry” for the family of distributions on outcome space. This allows generalisation of many standard statistical concepts and properties. In particular we define and study generalised exponential families. Several examples are analysed, including a general Bregman geometry.
5.
6.
This paper examines elections among three candidates when the electorate is large and voters can have any of the 26 nontrivial asymmetric binary relations on the candidates as their preference relations. Comparisons are made between rule-λ rankings based on rank-order ballots and simple majorities based on the preference relations. The rule-λ ranking is the decreasing point total order obtained when 1, λ and 0 points are assigned to the candidates ranked first, second and third on each voter's ballot, with 0 ≤ λ ≤ 1. Limit probabilities as the number of voters gets large are computed for events such as ‘the first-ranked rule-λ candidate has a majority over the second-ranked rule-λ candidate’ and ‘the rule-λ winner is the Condorcet candidate, given that there is a Condorcet candidate’. The probabilities are expressed as functions of λ and the distribution of voters over types of preference relations. In general, they are maximized at λ = 1/2 (Borda) and minimized at λ = 0 (plurality) and at λ = 1 for any fixed distribution of voters over preference types. The effects of more indifference and increased intransitivity in voters' preference relations are analyzed when λ is fixed.
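The point-assignment scheme (1, λ, 0) is simple to compute. A small sketch with an invented five-ballot profile; note λ = 1/2 recovers the Borda order and λ = 0 plurality:

```python
def rule_lambda_ranking(ballots, lam):
    """Rule-λ ranking over three candidates: 1, lam and 0 points for the
    first-, second- and third-ranked candidate on each ballot, 0 <= lam <= 1.
    Returns candidates sorted by decreasing point total."""
    points = {}
    for first, second, third in ballots:
        points[first] = points.get(first, 0) + 1
        points[second] = points.get(second, 0) + lam
        points[third] = points.get(third, 0)  # ensure every candidate appears
    return sorted(points, key=points.get, reverse=True)
```

With three `(a, b, c)` ballots and two `(b, c, a)` ballots, λ = 1/2 ranks `b` first (3.5 points to a's 3), whereas plurality (λ = 0) ranks `a` first.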
7.
Pierre Michaud 《Applied Stochastic Models and Data Analysis》1987,3(3):173-189
In 1785 Condorcet proposed a method to aggregate qualitative data, but until very recently this method was attributed to contemporary authors and its importance completely neglected.
8.
A note on Jordan-von Neumann constant and James constant (Cited: 2; self-citations: 0; other citations: 2)
Changsen Yang 《Journal of Mathematical Analysis and Applications》2009,357(1):98-102
Let X be a non-trivial Banach space. L. Maligranda conjectured that C_NJ(X) ≤ 1 + J(X)²/4 for the James constant J(X) and the von Neumann-Jordan constant C_NJ(X) of X. Satit Saejung gave a proof of it in 2006. In this note, we show that the last step in Satit Saejung's proof is not valid; his argument yields only a weaker bound. On the other hand, we give a new proof of C_NJ(X) ≤ 1 + J(X)²/4. As an application, we give a relation between J(X) and J(l_p(X)).
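For context, the two constants are standardly defined as follows (these definitions are supplied here, not taken from the abstract): for a Banach space X with unit sphere S_X,

```latex
J(X) = \sup\bigl\{ \min(\|x+y\|, \|x-y\|) : x, y \in S_X \bigr\},
\qquad
C_{\mathrm{NJ}}(X) = \sup\biggl\{ \frac{\|x+y\|^{2} + \|x-y\|^{2}}
                                       {2\,(\|x\|^{2} + \|y\|^{2})}
                       : (x, y) \neq (0, 0) \biggr\}.
```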
9.
A multicriteria Boolean programming problem with linear cost functions, in which the initial coefficients of the cost functions are subject to perturbations, is considered. For any optimal alternative with respect to a parameterized principle of optimality “from Condorcet to Pareto”, appropriate quality measures are introduced. These measures correspond to the so-called stability and accuracy functions defined earlier for optimal solutions of a generic multicriteria combinatorial optimization problem with the Pareto and lexicographic optimality principles. Various properties of such functions are studied, and the maximum norms of perturbations for which an optimal alternative preserves its optimality are calculated. To illustrate how the stability and accuracy functions can be used as efficient tools for post-optimal analysis, an application from voting theory is considered.
10.
Werner Ehm 《Comptes Rendus Mathematique》2011,349(11-12):699-702
Stein unbiased risk estimation is generalized twice, from the Gaussian shift model to nonparametric families of smooth densities, and from the quadratic risk to more general divergence type distances. The development relies on a connection with local proper scoring rules.
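In the Gaussian shift model y ~ N(θ, σ²Iₙ) that this generalisation starts from, Stein's classical identity yields an unbiased estimate of the quadratic risk of a weakly differentiable estimator (a standard formula, added here for context):

```latex
\operatorname{SURE}(\hat\theta, y)
  = -\,n\sigma^{2} + \bigl\|\hat\theta(y) - y\bigr\|^{2}
    + 2\sigma^{2} \sum_{i=1}^{n} \frac{\partial \hat\theta_{i}}{\partial y_{i}}(y),
\qquad
\mathbb{E}\bigl[\operatorname{SURE}(\hat\theta, y)\bigr]
  = \mathbb{E}\bigl\|\hat\theta(y) - \theta\bigr\|^{2}.
```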
11.
Hannu Nurmi 《Fuzzy Sets and Systems》1981,6(3):249-259
Recent experimental studies show that the predictive accuracy of many of the solution concepts derived from collective decision-making theory leaves much to be desired. In a previous paper the author attempted to explain some of the inaccuracies in terms of the fuzzy indifference regions of the individuals participating in the voting game. This paper gives straightforward generalizations of the solution concepts in terms of fuzzy social or individual preference relations. It turns out that some of these new solution concepts contain their nonfuzzy counterparts as subsets. Others, in turn, are subsets of their nonfuzzy counterparts. We also discuss a method of aggregating individual nonfuzzy preferences so as to obtain a fuzzy social preference relation and, furthermore, a nonfuzzy social choice set.
12.
We study a cardinal model of voting with three alternatives where voters' von Neumann-Morgenstern utilities are private information. We consider voting protocols given by two-parameter scoring rules, as introduced by Myerson (2002). For these voting rules, we show that all symmetric Bayes Nash equilibria are sincere, and have a very specific form. These equilibria are unique for a wide range of model parameters, and we can therefore compare the equilibrium performance of different rules. Computational results regarding the effectiveness of different scoring rules (where effectiveness is captured by a modification of the effectiveness measure proposed in Weber, 1978) suggest that those which most effectively represent voters' preferences allow for the expression of preference intensity, in contrast to more commonly used rules such as the plurality rule and the Borda count. While approval voting allows for the expression of preference intensity, it does not maximize effectiveness as it fails to unambiguously convey voters' ordinal preference rankings.
13.
In consumer credit markets, lending decisions are usually represented as a set of classification problems. The objective is to predict the likelihood of customers ending up in one of a finite number of states, such as good/bad payer, responder/non-responder and transactor/non-transactor. Decision rules are then applied on the basis of the resulting model estimates. However, this represents a misspecification of the true objectives of commercial lenders, which are better described in terms of continuous financial measures such as bad debt, revenue and profit contribution. In this paper, an empirical study is undertaken to compare predictive models of continuous financial behaviour with binary models of customer default. The results show models of continuous financial behaviour to outperform classification approaches. They also demonstrate that scoring functions developed to specifically optimize profit contribution, using genetic algorithms, outperform scoring functions derived from optimizing more general functions such as the sum of squared errors.
14.
The logistic regression framework has long been the most widely used statistical method for assessing customer credit risk. Recently, a more pragmatic approach has been adopted, where the first issue is credit risk prediction, instead of explanation. In this context, several classification techniques have been shown to perform well on credit scoring, such as support vector machines among others. While the investigation of better classifiers is an important research topic, the specific methodology chosen in real-world applications has to deal with the challenges arising from the data collected in the industry. Such data are often highly unbalanced, part of the information can be missing and some common hypotheses, such as the i.i.d. assumption, can be violated. In this paper we present a case study based on a sample of IBM Italian customers, which presents all the challenges mentioned above. The main objective is to build and validate robust models, able to handle missing information, class imbalance and non-i.i.d. data points. We define a missing data imputation method and propose the use of an ensemble classification technique, subagging, particularly suitable for highly unbalanced data, such as credit scoring data. Both the imputation and subagging steps are embedded in a customized cross-validation loop, which handles dependencies between different credit requests. The methodology has been applied using several classifiers (kernel support vector machines, nearest neighbours, decision trees, Adaboost) and their subagged versions. The use of subagging improves the performance of the base classifier, and we show that subagged decision trees achieve better performance, while keeping the model simple and reasonably interpretable.
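Subagging differs from ordinary bagging in drawing subsamples without replacement (typically half the data) rather than bootstrap samples, and then majority-voting the resulting classifiers. A bare-bones sketch of the mechanism, with a toy one-dimensional base learner standing in for the paper's decision trees; all names and data here are illustrative, not the paper's implementation:

```python
import random

def subagging_predict(xs, ys, x, fit, n_rounds=25, frac=0.5, seed=0):
    """Subagging (subsample aggregating): fit the base classifier on many
    random subsamples drawn WITHOUT replacement (size frac*n, unlike the
    bootstrap samples of bagging) and majority-vote the 0/1 predictions."""
    rng = random.Random(seed)
    n = len(xs)
    m = max(1, int(frac * n))
    votes = 0
    for _ in range(n_rounds):
        idx = rng.sample(range(n), m)  # subsample without replacement
        predict = fit([xs[i] for i in idx], [ys[i] for i in idx])
        votes += predict(x)
    return 1 if 2 * votes >= n_rounds else 0

def nearest_centroid_fit(xs, ys):
    """Toy 1-D base learner: classify by the nearer class mean.
    Returns a predict(x) function."""
    c0 = [v for v, y in zip(xs, ys) if y == 0]
    c1 = [v for v, y in zip(xs, ys) if y == 1]
    m0 = sum(c0) / len(c0) if c0 else float("inf")
    m1 = sum(c1) / len(c1) if c1 else float("inf")
    return lambda x: 1 if abs(x - m1) < abs(x - m0) else 0
```

Each round sees a different half of the data, so the vote smooths out the variance of the unstable base learner, which is the motivation for using it with trees on unbalanced credit data.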
15.
Accuracy arguments are the en vogue route in epistemic justifications of probabilism and further norms governing rational belief. These arguments often depend on the fact that the employed inaccuracy measure is strictly proper. I argue controversially that it is ill-advised to assume that the employed inaccuracy measures are strictly proper and that strictly proper statistical scoring rules are a more natural class of measures of inaccuracy. Building on work in belief elicitation I show how strictly proper statistical scoring rules can be used to give an epistemic justification of probabilism. An agent's evidence does not play any role in these justifications of probabilism. Principles demanding the maximisation of a generalised entropy depend on the agent's evidence. In the second part of the paper I show how to simultaneously justify probabilism and such a principle. I also investigate scoring rules which have traditionally been linked with entropies.
16.
CHANG K. C.; FUNG ROBERT; LUCAS ALAN; OLIVER ROBERT; SHIKALOFF NINA 《IMA Journal of Management Mathematics》2000,11(1):1-18
The objectives of this paper are to apply the theory and numerical algorithms of Bayesian networks to risk scoring, and compare the results with traditional methods for computing scores and posterior predictions of performance variables. Model identification, inference, and prediction of random variables using Bayesian networks have been successfully applied in a number of areas, including medical diagnosis, equipment failure, information retrieval, rare-event prediction, and pattern recognition. The ability to graphically represent conditional dependencies and independencies among random variables may also be useful in credit scoring. Although several papers have already appeared in the literature which use graphical models for model identification, as far as we know there have been no explicit experimental results that compare a traditionally computed risk score with predictions based on Bayesian learning algorithms. In this paper, we examine a database of credit-card applicants and attempt to learn the graphical structure of the characteristics or variables that make up the database. We identify representative Bayesian networks in a development sample as well as the associated Markov blankets and clique structures within the Markov blanket. Once we obtain the structure of the underlying conditional independencies, we are able to estimate the probabilities of each node conditional on its direct predecessor node(s). We then calculate the posterior probabilities and scores of a performance variable for the development sample. Finally, we calculate the receiver operating characteristic (ROC) curves and relative profitability of scorecards based on these identifications. The results of the different models and methods are compared with both development and validation samples. Finally, we report on a statistical entropy calculation that measures the degree to which cliques identified in the Bayesian network are independent of one another.
17.
Number-theoretic rules are particularly suited to the evaluation of multiple integrals in which the integrand is periodic. For nonperiodic integrands, an alternative is to use vertex-modified versions of number-theoretic rules. Good vertex-modified number-theoretic rules may be found by a computer search based on some criterion of goodness. Such criteria include a variant of the L2 discrepancy and the vertex variance. Here we present a result which may be used to speed up searches for good vertex-modified number-theoretic rules based on these criteria when the generator vectors are of the one-parameter form of Korobov.
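A plain (unmodified) rank-1 rule with a generator of the one-parameter Korobov form illustrates the construction; the vertex modification, which adds correction terms at the vertices of the cube for nonperiodic integrands, is omitted here. The parameters n = 101, a = 12 below are an arbitrary illustrative choice, not from the paper:

```python
import math

def korobov_lattice_rule(f, n, a, s):
    """Rank-1 lattice (number-theoretic) rule with the one-parameter
    Korobov generator z = (1, a, a^2, ..., a^(s-1)) mod n, approximating
    the integral of f over [0, 1]^s; best suited to 1-periodic integrands."""
    z = [pow(a, j, n) for j in range(s)]
    return sum(
        f([(k * zj % n) / n for zj in z]) for k in range(n)
    ) / n
```

For a low-degree trigonometric integrand such as (1 + sin 2πx)(1 + sin 2πy), whose exact integral over the unit square is 1, the rule is exact up to floating-point error as long as the generator causes no aliasing of the frequencies involved.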
18.
19.
The main purpose of this study is to propose a new technology scoring model for reflecting the total perception scoring phenomenon which happens often in many evaluation settings. The base model used is a logistic regression for non-default prediction of a firm. The point estimator used to predict the probability of non-default based on this model does not consider the risk involved in the estimation error. We propose to update the point estimator within its confidence interval using the evaluator's perception. The proposed approach takes into account not only the risk involved in the estimation error of the point estimator but also the total perception scoring phenomenon. Empirical evidence of the better prediction ability of the proposed model is displayed in terms of the area under the ROC curves. Additionally, we show that the proposed model is advantageous when applied to smaller data sets. It is expected that the proposed approach can be applied to various technology-related decisions such as R&D investment, alliance, transfer, and loans.
20.
Credit scoring is a method of modelling the potential risk of credit applications. Traditionally, logistic regression and discriminant analysis are the most widely used approaches for creating scoring models in the industry. However, these methods have several limitations, such as instability with high-dimensional data and small sample sizes, labour-intensive variable selection, and an inability to handle non-linear features efficiently. Most importantly, based on these algorithms, it is difficult to automate the modelling process, and when population changes occur the static models usually fail to adapt and may need to be rebuilt from scratch. In the last few years, the kernel learning approach has been investigated to solve these problems. However, existing applications of this type of method (in particular the SVM) in credit scoring have all focused on the batch model and did not address the important problem of how to update the scoring model on-line. This paper presents a novel and practical adaptive scoring system based on an incremental kernel method. With this approach, the scoring model is adjusted according to an on-line update procedure that can always converge to the optimal solution without information loss or running into numerical difficulties. Non-linear features in the data are automatically included in the model through a kernel transformation. This approach does not require any variable reduction effort and is also robust for scoring data with a large number of attributes and highly unbalanced class distributions. Moreover, a new potential kernel function is introduced to further improve the predictive performance of the scoring model, and a kernel attribute ranking technique is used that adds transparency to the final model. Experimental studies using real-world data sets have demonstrated the effectiveness of the proposed method.
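The incremental idea (update the scoring function as each new labelled application arrives, instead of re-fitting the whole kernel machine) can be illustrated with the much simpler kernel perceptron below. This is a generic sketch, not the paper's adaptive system; the RBF kernel and the two-cluster data are invented for illustration:

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

class OnlineKernelPerceptron:
    """Minimal on-line kernel classifier: each misclassified example is
    added to the support set, so the model is updated incrementally
    rather than re-fitted from scratch."""
    def __init__(self, kernel=rbf):
        self.kernel = kernel
        self.support = []  # (example, label) pairs that forced an update

    def decision(self, x):
        return sum(y * self.kernel(sx, x) for sx, y in self.support)

    def predict(self, x):
        return 1 if self.decision(x) >= 0 else -1

    def partial_fit(self, x, y):
        if self.predict(x) != y:  # update only on a mistake
            self.support.append((x, y))
```

Non-linear structure enters only through the kernel, mirroring the abstract's point that the kernel transformation includes non-linear features automatically; the real system would additionally need the convergence guarantees and kernel attribute ranking the paper describes.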