Similar Documents
A total of 20 similar documents were found.
2.
In investment decision-making, the net present value method is widely used as one of the best decision rules (techniques or criteria). It is also used to evaluate decision alternatives over long time horizons, in economics and even in control theory. Its theoretical validation as the best method for investment decision-making rests on the premise that the best technique (investment decision rule) will maximize shareholders' wealth, measured as the present value of cash flows discounted at the opportunity cost of capital. This requirement of maximizing shareholders' wealth is central to investment decision-making: it implies that the ordering of projects determined by the best investment rule must be order-isomorphic to the ordering determined by the measure of shareholders' wealth. This order-isomorphism can be represented by necessary and sufficient conditions (or separate criteria). However, these conditions are designed for selecting the best investment decision rule and are therefore not suitable for comparing investment decision rules. A further advantage of the net present value method over other investment rules lies in its decision-theoretic properties. Formulating the net present value method, the internal rate of return method, and the simple sum method in an axiomatic fashion, this paper compares the net present value method with the other rules and shows that it possesses sufficient clarity and simplicity in both theory and practice.
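As a hedged illustration of why the comparison matters, the sketch below computes NPV and IRR for two hypothetical projects whose cash flows (invented for this example) make the two rules disagree; it is not the paper's axiomatic formulation.

```python
def npv(rate, cash_flows):
    """Net present value: cash flows discounted at `rate`, period by period."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection; assumes NPV is decreasing in
    the rate (a single initial outlay followed by receipts)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cash_flows) > 0 else (lo, mid)
    return (lo + hi) / 2

a = [-100, 115]       # hypothetical project: quick single payback
b = [-100, 0, 125]    # hypothetical project: later but larger payback
r = 0.05              # assumed opportunity cost of capital
print(npv(r, a), npv(r, b))  # ~9.52 vs ~13.38: NPV prefers b
print(irr(a), irr(b))        # ~0.150 vs ~0.118: IRR prefers a
```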

3.
We consider informational requirements of social choice rules satisfying anonymity, neutrality, monotonicity, and efficiency, and never choosing the Condorcet loser. Among such rules, we establish the existence of a rule operating on the minimal informational requirement. Depending on the number of agents and the number of alternatives, either the plurality rule or the plurality with a runoff is characterized. In some cases, the plurality rule is the most selective rule among the rules operating on the minimal informational requirement. In the other cases, each rule operating on the minimal informational requirement is a two-stage rule, and among them, the plurality with a runoff is the rule whose choice at the first stage is most selective. These results not only clarify properties of the plurality rule and the plurality with a runoff, but also explain why they are widely used in real societies.
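For concreteness, here is a minimal Python sketch of the two rules characterized in the paper, run on an invented preference profile; it omits tie-breaking and the paper's informational analysis, and assumes at least two candidates receive first-place votes.

```python
from collections import Counter

def plurality_with_runoff(ballots):
    """Ballots are full preference orders (tuples of candidates, best first).
    Stage 1: count first-place votes; a strict majority wins outright.
    Stage 2: otherwise, the top two face a pairwise majority vote."""
    firsts = Counter(b[0] for b in ballots)
    (c1, n1), (c2, _) = firsts.most_common(2)
    if n1 > len(ballots) / 2:
        return c1                      # the plurality winner has a majority
    # Runoff: each voter supports whichever finalist they rank higher.
    runoff = Counter(c1 if b.index(c1) < b.index(c2) else c2 for b in ballots)
    return runoff.most_common(1)[0][0]

ballots = [("a", "b", "c")] * 4 + [("b", "c", "a")] * 3 + [("c", "b", "a")] * 2
print(plurality_with_runoff(ballots))  # runoff between a and b; b wins 5-4
```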

5.
We introduce a new class of bankruptcy problems in which the value of the estate is endogenous and depends on agents' investment decisions. There are two investment alternatives: investing in a company (risky asset) and depositing money into a savings account (risk-free asset). Bankruptcy is possible only for the risky asset. We define a game between agents, each of whom aims to maximize his expected payoff by choosing an investment alternative, and a company management, which aims to maximize profits by choosing a bankruptcy rule. Agents are differentiated by their incomes. We consider the three most prominent bankruptcy rules in our base model: the proportional rule, the constrained equal awards rule, and the constrained equal losses rule. We show that only the proportional rule is part of any pure strategy subgame perfect Nash equilibrium. This result is robust to changes in the income distribution of the economy and can be extended to a larger set of bankruptcy rules and multiple types. However, extension to a multiple-company framework with competition leads to equilibria in which the noncooperative support for the proportional rule disappears.
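The three rules in the base model have standard definitions; the sketch below (not from the paper) computes each division by bisection on the usual level parameter, for made-up claims and estate.

```python
def proportional(estate, claims):
    """Each agent receives a share proportional to his claim."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def _bisect_awards(award_at, estate, claims, increasing, iters=100):
    """Find the level `lam` at which the awards exactly exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(iters):
        lam = (lo + hi) / 2
        overshoot = sum(award_at(lam)) > estate
        if overshoot == increasing:   # awards grow (or shrink) monotonically in lam
            hi = lam
        else:
            lo = lam
    return award_at((lo + hi) / 2)

def constrained_equal_awards(estate, claims):
    """Equal awards, capped by each agent's own claim."""
    return _bisect_awards(lambda lam: [min(c, lam) for c in claims],
                          estate, claims, increasing=True)

def constrained_equal_losses(estate, claims):
    """Equal losses relative to claims, with awards floored at zero."""
    return _bisect_awards(lambda lam: [max(c - lam, 0) for c in claims],
                          estate, claims, increasing=False)

claims, estate = [100, 200, 300], 300            # invented bankruptcy problem
print(proportional(estate, claims))              # [50.0, 100.0, 150.0]
print(constrained_equal_awards(estate, claims))  # ~[100, 100, 100]
print(constrained_equal_losses(estate, claims))  # ~[0, 100, 200]
```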

6.
This paper addresses the use of incomplete information on both multi-criteria alternative values and importance weights in evaluating decision alternatives. Incomplete information frequently takes the form of strict inequalities, such as strict orders and strict bounds. En route to prioritizing alternatives, the majority of previous studies have replaced these strict inequalities with weak inequalities by introducing a small positive number. As this replacement closes the feasible region of decision parameters, it circumvents certain troubling questions that arise when a mathematical programming approach is used to evaluate alternatives. However, there are no hard and fast rules for choosing this small value and, even when a choice can be made, the resulting prioritizations depend profoundly on it. The method developed herein overcomes this drawback, establishing dominance and potential optimality among alternatives without selecting any small value for the strict preference information. Given strict information on criterion weights alone, we form a linear program and solve it via a two-stage method. When both alternative values and weights are provided in the form of strict inequalities, we first construct a nonlinear program, transform it into a linear programming equivalent, and finally solve this linear program via the same two-stage method. An application of this methodology to a market entry decision, a salient subject in international marketing, is demonstrated in detail.
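As a rough illustration only: the common approach the paper improves upon can be phrased as a linear program over the closed weight region (strict order relaxed to a weak one). The sketch below checks pairwise dominance that way with SciPy, using invented criterion values; the paper's own two-stage method for genuinely strict information is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def min_value_difference(va, vb):
    """Minimize sum_i w_i * (va_i - vb_i) over the closed weight set
    {w : w1 >= w2 >= ... >= wn >= 0, sum(w) = 1}.  A nonnegative minimum
    means alternative a weakly dominates b on that region."""
    n = len(va)
    c = np.asarray(va, dtype=float) - np.asarray(vb, dtype=float)
    # Ranking constraints w_{i+1} - w_i <= 0, one row per adjacent pair.
    A_ub = np.zeros((n - 1, n))
    for i in range(n - 1):
        A_ub[i, i], A_ub[i, i + 1] = -1.0, 1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n - 1),
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n)
    return res.fun

va, vb = [0.9, 0.6, 0.4], [0.7, 0.5, 0.8]   # made-up criterion values
print(min_value_difference(va, vb))          # >= 0 would indicate dominance
```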

7.
It is known that third-order stochastic dominance implies DARA dominance, while no implications exist between higher orders and DARA dominance. A recent contribution points out that, for the problem of determining lower and upper bounds on the price of a financial option, the DARA rule improves on the stochastic dominance criteria of any order. In this paper the relative efficiency of the ordinary stochastic dominance and DARA criteria is compared for alternatives with discrete distributions, in order to see whether the better performance of the DARA criterion carries over to other practical applications. The operational use of stochastic dominance techniques for financial choices is also examined in greater depth.
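For reference, here is a small sketch (not from the paper) of first- and second-order stochastic dominance checks for discrete distributions, the setting the comparison uses; the DARA criterion itself is not implemented here.

```python
import numpy as np

def sd_dominates(x, px, y, py, order=1, tol=1e-12):
    """FSD/SSD of X over Y for discrete distributions given as values
    (x, y) with probabilities (px, py)."""
    support = np.union1d(x, y)
    Fx = np.array([np.sum(np.asarray(px)[np.asarray(x) <= s]) for s in support])
    Fy = np.array([np.sum(np.asarray(py)[np.asarray(y) <= s]) for s in support])
    if order == 1:                  # FSD: F_X(s) <= F_Y(s) everywhere
        return bool(np.all(Fx <= Fy + tol))
    gaps = np.diff(support)         # SSD: compare integrated CDFs at each point
    Ix = np.concatenate(([0.0], np.cumsum(Fx[:-1] * gaps)))
    Iy = np.concatenate(([0.0], np.cumsum(Fy[:-1] * gaps)))
    return bool(np.all(Ix <= Iy + tol))

x, px = [1, 3], [0.5, 0.5]   # invented lotteries with equal means
y, py = [0, 4], [0.5, 0.5]
print(sd_dominates(x, px, y, py, order=1))  # False: the CDFs cross
print(sd_dominates(x, px, y, py, order=2))  # True: X is less dispersed
```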

8.
A problem of decision making under uncertainty in which the choice must be made between two sets of alternatives, rather than between two single alternatives, is considered. A number of choice rules are proposed and their main properties are investigated, focusing particularly on generalizations of stochastic dominance and statistical preference. The particular cases where imprecision is present in the utilities or in the beliefs associated with the two alternatives are also considered.
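A minimal sketch of one of the two notions being generalized: statistical preference between two single discrete alternatives, assumed independent here; the paper's extensions to sets of alternatives and to imprecise utilities or beliefs are not reproduced.

```python
def statistical_preference_degree(x, px, y, py):
    """P(X > Y) + 0.5 * P(X = Y) for independent discrete X and Y;
    X is statistically preferred to Y when the degree is >= 0.5."""
    win = sum(p * q for a, p in zip(x, px) for b, q in zip(y, py) if a > b)
    tie = sum(p * q for a, p in zip(x, px) for b, q in zip(y, py) if a == b)
    return win + 0.5 * tie

x, px = [1, 3], [0.5, 0.5]   # invented alternatives
y, py = [0, 2], [0.5, 0.5]
print(statistical_preference_degree(x, px, y, py))  # 0.75: X preferred
```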

9.
The main concern of this paper is the performance evaluation of four classes of decision rules: the expert rule, the balanced expert rules, the simple majority rule, and the restricted simple majority rules. Employing the uncertain dichotomous choice model, we first establish necessary and sufficient conditions for the optimality of these four types of decision rules. For small groups of fewer than six members, the optimality conditions cover all potentially optimal decision rules, so we are able to pursue a complete analysis of the small-group cases. The analysis of these special (small-group) cases, as well as of the general (n-member group) cases, is based on the assumption that individual decisional skills are uniformly distributed. In evaluating the quality of a decision rule we resort to four alternative criteria: the expected optimality likelihood of the rule, the expected probability of yielding a correct collective decision given complete information on decisional skills, the expected probability of yielding a correct collective judgement given complete inability to verify skills, and, finally, the sensitivity of the rule to skills verifiability.
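To make the model concrete, here is a sketch (with invented skill levels) of the basic computation in the uncertain dichotomous choice framework: the probability that a rule yields the correct collective decision given the members' skills. Comparing it with the best member's skill shows how the expert rule can beat simple majority.

```python
from itertools import product

def majority_correct_prob(skills):
    """Probability that the simple majority rule is correct when member i
    is correct with probability skills[i], independently of the others."""
    n = len(skills)
    total = 0.0
    for votes in product([0, 1], repeat=n):   # 1 = member voted correctly
        p = 1.0
        for v, s in zip(votes, skills):
            p *= s if v else 1 - s
        if sum(votes) > n / 2:                # a majority voted correctly
            total += p
    return total

skills = [0.9, 0.7, 0.6]              # invented decisional skills
print(majority_correct_prob(skills))  # 0.834
print(max(skills))                    # 0.9: here the expert rule does better
```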

10.
This paper argues that social choice from among more than two feasible alternatives should not be based on social choice from two-alternative subsets. It considers in some detail the case where one alternative ties or beats every other alternative on the basis of simple majorities, and raises the question of whether such an alternative should be chosen. A condition of ‘stochastic unanimity’, introduced in this context, is shown to be incompatible with the simple majority rule when it can apply. This new condition plus a consideration of ties leads into a brief discussion of the use of individual expected utility in social choice theory.

11.
A Dual-Objective Evolutionary Algorithm for Rules Extraction in Data Mining
This paper presents a dual-objective evolutionary algorithm (DOEA) for extracting multiple decision rule lists in data mining, aimed at satisfying the classification criteria of high accuracy and ease of user comprehension. Unlike existing approaches, the algorithm incorporates the concept of Pareto dominance to evolve a set of non-dominated decision rule lists, each having a different classification accuracy and number of rules over a specified range. The classification results of the DOEA are analyzed and compared with existing rule-based and non-rule-based classifiers on 8 test problems from the UCI Machine Learning Repository. It is shown that the DOEA produces comprehensible rules with classification accuracy competitive with many methods in the literature. Results from box plots and t-tests further examine its invariance to random partitioning of the datasets. An erratum to this article is available.
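The Pareto-dominance bookkeeping the DOEA relies on can be sketched in a few lines (the evolutionary operators themselves are omitted); the candidate (accuracy, rule-count) pairs below are invented.

```python
def pareto_front(rule_lists):
    """Keep rule lists not dominated on (higher accuracy, fewer rules).
    Each entry is (accuracy, number_of_rules)."""
    front = []
    for acc, size in rule_lists:
        dominated = any(a >= acc and s <= size and (a > acc or s < size)
                        for a, s in rule_lists)
        if not dominated:
            front.append((acc, size))
    return front

candidates = [(0.91, 12), (0.88, 5), (0.91, 9), (0.80, 4), (0.86, 6)]
print(pareto_front(candidates))  # [(0.88, 5), (0.91, 9), (0.80, 4)]
```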

12.
To model multiple attribute group decision analysis problems with group consensus (GC) requirements, a GC-based evidential reasoning (ER) approach, and subsequently an attribute-weight-based feedback model, have been developed within the ER framework. In real situations, however, giving precise (crisp) assessments of alternatives is often too restrictive and difficult for experts, owing to incomplete or missing information. Experts may also find it difficult to give appropriate assessments on specific attributes, owing to limited knowledge, experience, or data about the problem domain. In this paper, an ER-based consensus model (ERCM) is proposed to deal with these situations, in which experts' assessments are interval-valued rather than precise. Correspondingly, predefined interval-valued GC (IGC) requirements need to be reached through group analysis and discussion within a specified time. The process of reaching IGC is accelerated by a feedback mechanism comprising identification rules at three levels (attribute, alternative, and global) and a suggestion rule. In particular, recommended assessments in the suggestion rule are constructed from the recommended lower and upper bounds detected by the identification rule at the relevant level. A problem of selecting industries for preferential development is solved with the ERCM to demonstrate its implementation, validity, and applicability.

13.
For multi-attribute decision-making problems in which attribute values take three forms of information, namely random variables, crisp numbers, and fuzzy numbers, a decision analysis method is proposed. This paper first describes and analyzes a hybrid dominance criterion, and defines a hybrid dominance degree together with a formula for computing it. The hybrid dominance criterion is then used to judge and determine the dominance relation in each pairwise comparison of alternatives; the hybrid dominance degrees of the pairwise comparisons are computed, and the corresponding hybrid dominance degree matrix is constructed. On this basis, the PROMETHEE II method is applied to obtain a ranking of the alternatives. Finally, a numerical example illustrates the feasibility and effectiveness of the proposed method.
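The final ranking step is standard PROMETHEE II; a minimal sketch follows, taking a hybrid dominance degree matrix as given (the matrix below is invented, and the paper's formula for computing the degrees is not reproduced).

```python
import numpy as np

def promethee_ii_ranking(D):
    """PROMETHEE II net outranking flows from a pairwise preference
    (here: hybrid dominance degree) matrix D, with D[i][j] in [0, 1]."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    phi_plus = D.sum(axis=1) / (n - 1)    # leaving flow of each alternative
    phi_minus = D.sum(axis=0) / (n - 1)   # entering flow
    phi = phi_plus - phi_minus            # net flow: rank by descending phi
    return np.argsort(-phi), phi

D = [[0.0, 0.7, 0.4],                     # invented dominance degrees
     [0.3, 0.0, 0.8],
     [0.6, 0.2, 0.0]]
order, phi = promethee_ii_ranking(D)
print(order, phi)                          # alternative indices, best first
```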

14.
Nonlinear mixed-effects models have received a great deal of attention in the statistical literature in recent years because of the flexibility they offer in handling the unbalanced repeated-measures data that arise in different areas of investigation, such as pharmacokinetics and economics. Several different methods for estimating the parameters in nonlinear mixed-effects models have been proposed. We concentrate here on two of them: maximum likelihood and restricted maximum likelihood. A rather complex numerical issue for (restricted) maximum likelihood estimation in nonlinear mixed-effects models is the evaluation of the log-likelihood function of the data, because it involves a multiple integral that, in most cases, has no closed-form expression. We consider here four different approximations to the log-likelihood, comparing their computational and statistical properties. We conclude that the linear mixed-effects (LME) approximation suggested by Lindstrom and Bates, the Laplacian approximation, and Gaussian quadrature centered at the conditional modes of the random effects are quite accurate and computationally efficient. Gaussian quadrature centered at the expected value of the random effects is quite inaccurate for a small number of abscissas and computationally inefficient for a large number of abscissas. Importance sampling is accurate, but computationally quite inefficient.
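As a toy illustration of the quadrature variant centered at the expected value of the random effects, the sketch below marginalizes a one-dimensional random effect in an invented exponential model by Gauss-Hermite quadrature; as the abstract notes, this centering needs many abscissas to be accurate, unlike adaptive quadrature centered at the conditional modes.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_loglik(y, x, beta, sigma_b, sigma_e, n_nodes=20):
    """Log-likelihood of one subject's data under the toy nonlinear model
    y = exp((beta + b) * x) + e,  b ~ N(0, sigma_b^2),  e ~ N(0, sigma_e^2),
    marginalizing over b by Gauss-Hermite quadrature centered at b = 0."""
    nodes, weights = hermgauss(n_nodes)
    total = 0.0
    for z, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma_b * z           # change of variables for N(0, sigma_b^2)
        mu = np.exp((beta + b) * x)
        dens = np.prod(np.exp(-0.5 * ((y - mu) / sigma_e) ** 2)
                       / (np.sqrt(2 * np.pi) * sigma_e))
        total += w * dens
    return np.log(total / np.sqrt(np.pi))

x = np.array([0.0, 0.5, 1.0])
y = np.exp(0.8 * x) + np.array([0.05, -0.02, 0.04])  # fabricated observations
print(marginal_loglik(y, x, beta=0.8, sigma_b=0.2, sigma_e=0.1))
```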

15.
A comparison is made between a number of techniques for the exploratory analysis of qualitative variables. The paper mainly focuses on a comparison between multiple correspondence analysis (MCA) and Gower's principal co-ordinates analysis (PCO), applied to qualitative variables. The main difference between these methods is in how they deal with infrequent categories. It is demonstrated that MCA solutions can be dominated by infrequent categories, and that, especially in such cases, PCO is a useful alternative to MCA, because it tends to downweight the influence of infrequent categories. Apart from studying the difference between MCA and PCO, other alternatives for the analysis of qualitative variables are discussed, and compared to MCA and PCO.

16.
Existing stochastic dominance rules apply to variables such as income, wealth and rates of return, all of which are measured on cardinal scales. This study develops and applies stochastic dominance rules for ordinal data. It is shown that the new rules are consistent with the traditional von Neumann-Morgenstern expected utility approach, and that they are applicable and relevant in a wide variety of managerial decision making situations, where existing stochastic dominance rules fail to apply. We apply ordinal SD rules to the transformation of random variables.

17.
Many rules in systems generated from decision trees (such as CART, ID3, or C4.5) or from direct frequency-counting methods (such as Apriori) are non-significant or even contradictory. Nevertheless, most papers on this subject demonstrate that substantial reductions can be made to generated rule sets by finding and removing redundancies and conflicts and by merging similar rules. The objective of this paper is to present an algorithm (RBS: Reduction Based on Significance) for allocating a significance value to each rule in the system, so that experts may select the rules that should be considered preferable and understand the exact degree of correlation between the different rule attributes. Significance is calculated from two parameters of each rule, the antecedent frequency and the rule frequency: if the former is above a minimal level and the rule frequency falls in a critical interval, the rule's significance ratio is computed by the algorithm. The critical boundaries are calculated by an incremental method, the rule space is divided according to them, and the significance function is defined on these intervals. As with other methods of rule reduction, our approach can be applied to rule sets generated by decision trees or frequency-counting algorithms, independently and after the rule set has been created. Three simulated data sets are used to carry out a computational experiment. Standard data sets from the UCI Machine Learning Repository and two data sets with expert interpretation are also used, for greater consistency. The proposed method yields a smaller and more easily understandable rule set than the original, and highlights the most significant attribute correlations, quantifying their influence on the consequent attribute.
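The abstract specifies the two parameters significance is computed from, so the sketch below only derives those frequencies from a toy transaction set; the significance function and critical intervals themselves are the paper's contribution and are not reproduced here.

```python
def rule_frequencies(transactions, rules):
    """Antecedent frequency and rule frequency for association rules.
    Each rule is (antecedent_items, consequent_items); each transaction is
    a set of items.  RBS's significance ratio would be computed from these
    two frequencies, which is not attempted here."""
    n = len(transactions)
    stats = {}
    for ante, cons in rules:
        ante, cons = frozenset(ante), frozenset(cons)
        n_ante = sum(1 for t in transactions if ante <= t)         # antecedent holds
        n_rule = sum(1 for t in transactions if ante | cons <= t)  # whole rule holds
        stats[(ante, cons)] = (n_ante / n, n_rule / n)
    return stats

transactions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
rules = [({"a"}, {"b"}), ({"b"}, {"c"})]
print(rule_frequencies(transactions, rules))
```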

18.
In many common simulation-optimization methods, the structure of the system stays the same and only the values of certain parameters, such as the number of machines in a station or the in-process inventory, are varied from one evaluation to the next. The methodology described in this paper is a simulation-optimization process in which the qualitative variables and the structure of the system are themselves the subjects of optimization. Here, the optimum response sought is a function of the design and operating characteristics of the system, such as the type of machines to use, dispatching rules, and the sequence of processing operations. In the methodology developed here, simulation models are automatically generated through an object-oriented process and evaluated for various candidate configurations of the system. These candidates are suggested by a genetic algorithm (GA) that automatically guides the search toward better solutions. After the alternatives are simulated, the results are returned to the GA to be used in selecting the next generation of configurations to evaluate. This process continues until a satisfactory solution is obtained.
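A skeletal version of the loop described above, with the object-oriented model generation replaced by a stubbed `simulate` function and an invented two-attribute configuration space (machine type and dispatching rule); the selection, crossover, and mutation steps are deliberately simplistic.

```python
import random

def simulate(config):
    """Stand-in for the automatically generated simulation model: returns
    a (noisy) performance estimate for a candidate configuration."""
    machine_type, dispatch_rule = config
    base = {"fast": 10, "slow": 6}[machine_type] + {"FIFO": 0, "SPT": 2}[dispatch_rule]
    return base + random.gauss(0, 0.5)

def genetic_search(generations=20, pop_size=8):
    machine_types, dispatch_rules = ["fast", "slow"], ["FIFO", "SPT"]
    pop = [(random.choice(machine_types), random.choice(dispatch_rules))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate, reverse=True)  # evaluate by simulation
        parents = scored[: pop_size // 2]                 # selection
        pop = parents + [                                 # crossover + mutation
            (random.choice(parents)[0] if random.random() < 0.9
             else random.choice(machine_types),
             random.choice(parents)[1] if random.random() < 0.9
             else random.choice(dispatch_rules))
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=simulate)

print(genetic_search())   # best configuration found, e.g. ('fast', 'SPT')
```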

19.
In this paper the author treats a group of decision makers, each of whom already has a preference over a given set of alternatives, while the group as a whole does not yet have a decision rule for making its group decision. The author then examines which decision rules are appropriate. As a criterion of "appropriateness", the concepts of self-consistency and universal self-consistency of decision rules are proposed. Examining the existence of universally self-consistent decision rules in two cases, (1) decision situations with three decision makers and two alternatives, and (2) those with three decision makers and three alternatives, the author finds that all decision rules are universally self-consistent in case (1), whereas every universally self-consistent decision rule has exactly one vetoer in the essential cases of (2). The result in case (2) implies the incompatibility of universal self-consistency with symmetry. An application of the concept of self-consistency to a bankruptcy problem is also provided, showing the compatibility of self-consistency with symmetry in a particular decision situation.

20.
Evaluating the performance of a design for a complex discrete-event system through simulation is usually very time-consuming, and optimizing system performance is even more computationally demanding. Ordinal optimization (OO) is a technique introduced to attack this difficulty in system design by looking at the "order" of performances among designs instead of their "value", and by providing a probability guarantee of a good enough solution instead of the best for sure. The selection rule, that is, the rule deciding which subset of designs to select as the OO solution, is a key step in applying the OO method. Pairwise elimination and round-robin comparison are two examples of selection rules, and many others are frequently used in the ordinal optimization literature. To compare selection rules, we first identify some general facts about them. We then use regression functions to quantify the efficiency of a group of selection rules, including some frequently used ones. A procedure for predicting good selection rules is proposed and verified by simulation and by examples. Selection rules that work well most of the time are recommended.
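Below is a Monte Carlo sketch of the kind of probability guarantee OO provides, under one simple selection rule (take the observed top-s after a single noisy evaluation of each design); the performance values and noise level are invented, and the paper's regression-based comparison of rules is not reproduced.

```python
import random

def alignment_probability(true_perf, noise_sd, s, g, k, trials=2000):
    """Monte Carlo estimate of the OO alignment probability: the chance
    that the observed top-s designs (ranked by one noisy simulation each)
    contain at least k of the true top-g designs.  Smaller is better."""
    n = len(true_perf)
    true_top = set(sorted(range(n), key=lambda i: true_perf[i])[:g])
    hits = 0
    for _ in range(trials):
        noisy = [p + random.gauss(0, noise_sd) for p in true_perf]
        observed_top = set(sorted(range(n), key=lambda i: noisy[i])[:s])
        if len(observed_top & true_top) >= k:
            hits += 1
    return hits / trials

perf = [i * 0.1 for i in range(100)]   # invented true performances
print(alignment_probability(perf, noise_sd=1.0, s=10, g=10, k=3))
```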
