Similar Documents
20 similar documents found (search time: 46 ms)
1.
This paper presents a special multiple criteria decision making approach for solving problems involving fuzzy individual preferences. We first briefly expose the proposed methodology. The individual preferences are given explicitly by a complete transitive relation R on a set of reference actions. The decision-maker's preferences are modelled by means of fuzzy outranking relations. These fuzzy relations are based on a system of additive utility functions, estimated by ordinal regression methods that analyse the preference relation R. This is followed by a presentation of two real multicriteria problems to which the proposed methodology has been applied: a highway plan choice problem and a marketing research problem dealing with the launch of a new product. In each application we specialized the method to the specific structure of the problem considered.

2.
This note generalizes Gul and Pesendorfer's random expected utility theory, a stochastic reformulation of von Neumann–Morgenstern expected utility theory for lotteries over a finite set of prizes, to settings with a continuum of prizes. Let [0, M] denote this continuum of prizes; assume that each utility function is continuous, let \(C_0[0,M]\) be the set of all utility functions which vanish at the origin, and define a random utility function to be a finitely additive probability measure on \(C_0[0,M]\) (associated with an appropriate algebra). It is shown here that a random choice rule is mixture continuous, monotone, linear, and extreme if, and only if, the random choice rule maximizes some regular random utility function. To obtain countable additivity of the random utility function, we further restrict attention to those utility functions that are continuously differentiable on [0, M] and vanish at zero. With this restriction, it is shown that a random choice rule is continuous, monotone, linear, and extreme if, and only if, it maximizes some regular, countably additive random utility function. This generalization enables us to discuss risk aversion within the framework of random expected utility theory.

3.
Simplicial decomposition is an important form of decomposition for large non-linear programming problems with linear constraints. Von Hohenbalken has shown that if the number of retained extreme points is n + 1, where n is the number of variables in the problem, the method will reach an optimal simplex after a finite number of master problems have been solved (i.e., after a finite number of major cycles). However, in many practical problems it is infeasible to allocate computer memory for n + 1 extreme points. In this paper, we present a version of simplicial decomposition where the number of retained extreme points is restricted to r, 1 ≤ r ≤ n + 1, and prove that if r is sufficiently large, an optimal simplex will be reached in a finite number of major cycles. This result ensures rapid convergence when r is properly chosen and the decomposition is implemented using a second-order method to solve the master problem.
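For illustration, the following sketch (our construction, not the paper's implementation) runs a restricted simplicial decomposition with at most r retained extreme points on a tiny quadratic program, using SciPy for both the column-generation LP and the master problem; the drop-oldest rule used to enforce the limit r is just one possible retention strategy.

```python
import numpy as np
from scipy.optimize import linprog, minimize

def restricted_simplicial(f, grad, A, b, x0, r=2, tol=1e-6, max_cycles=50):
    """Sketch of simplicial decomposition with at most r retained extreme
    points. Minimizes f over {x : A x <= b, x >= 0} (assumed bounded)."""
    x, pts = x0, [x0]
    for _ in range(max_cycles):
        g = grad(x)
        # Column generation: extreme point minimizing the linearized objective.
        y = linprog(g, A_ub=A, b_ub=b, bounds=[(0, None)] * len(x0)).x
        if g @ (y - x) >= -tol:             # no descent direction remains
            break
        pts.append(y)
        if len(pts) > r:                    # enforce the retention limit r
            pts.pop(0)                      # drop the oldest retained point
        # Master problem: minimize f over the convex hull of retained points.
        P = np.array(pts)
        res = minimize(lambda lam: f(lam @ P),
                       np.full(len(pts), 1.0 / len(pts)),
                       jac=lambda lam: P @ grad(lam @ P),
                       bounds=[(0.0, 1.0)] * len(pts),
                       constraints=[{"type": "eq",
                                     "fun": lambda lam: lam.sum() - 1.0}],
                       method="SLSQP")
        x = res.x @ P
    return x

# Toy problem: project c onto {x : x1 + x2 <= 1, x >= 0}.
c = np.array([2.0, 2.0])
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
print(restricted_simplicial(lambda x: 0.5 * np.sum((x - c) ** 2),
                            lambda x: x - c, A, b, np.zeros(2)))
```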

4.

The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because no data are available for forecasting the upcoming demand via statistical analysis. The new procedure described in this contribution is based on a hybrid of the Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of the payoffs connected with particular order quantities. It does not require any information about the probability distribution.
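As a minimal sketch of the underlying idea, with prices, scenarios and weighting parameters that are entirely our own (the paper's actual rule also factors in payoff dispersion), each candidate order quantity can be scored by a convex combination of the Hurwicz criterion and the Bayes (Laplace) criterion over a probability-free scenario set:

```python
import numpy as np

def newsvendor_payoffs(order, demands, price=10.0, cost=6.0, salvage=2.0):
    """Payoff of ordering `order` units under each demand scenario."""
    demands = np.asarray(demands, dtype=float)
    sold = np.minimum(order, demands)
    return price * sold + salvage * (order - sold) - cost * order

def hybrid_score(payoffs, optimism=0.3, hurwicz_weight=0.5):
    """Convex combination of the Hurwicz and Bayes (Laplace) criteria."""
    hurwicz = optimism * payoffs.max() + (1 - optimism) * payoffs.min()
    bayes = payoffs.mean()          # equal weights: no probabilities known
    return hurwicz_weight * hurwicz + (1 - hurwicz_weight) * bayes

demands = [20, 40, 60, 80, 100]     # scenario set, no probabilities attached
orders = range(20, 101, 20)
best = max(orders, key=lambda q: hybrid_score(newsvendor_payoffs(q, demands)))
print("chosen order quantity:", best)
```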


5.
In this paper we consider the problem of controlling the arrival of customers into a GI/M/1 service station. It is known that when the decisions controlling the system are made only at arrival epochs, the optimal acceptance strategy is of a control-limit type, i.e., an arrival is accepted if and only if fewer than n customers are present in the system. The question is whether exercising conditional acceptance can further increase the expected long-run average profit of a firm operating the system. To reveal the relevance of conditional acceptance, we consider an extension of the control-limit rule in which the nth customer is conditionally admitted to the queue. This customer may later be rejected if neither a service completion nor an arrival has occurred within a given time period since the last arrival epoch. We model the system as a semi-Markov decision process and develop conditions under which such a policy is preferable to the simple control-limit rule.

6.
Approaches for generating the set of efficient extreme points of the decision set of a multiple-objective linear program (P) that are based upon decompositions of the weight set W0 suffer from one of two drawbacks: either the required computations are redundant, or not all of the efficient extreme point set is found. This article shows that the weight set for problem (P) can be decomposed into a partition based upon the outcome set Y of the problem, where the elements of the partition are in one-to-one correspondence with the efficient extreme points of Y. As a result, the drawbacks of the decompositions of W0 based upon the decision set of problem (P) disappear. The article also explains how this new partition offers the potential to construct algorithms for solving large-scale applications of problem (P) in the outcome space, rather than in the decision space.

7.
We consider a robust location–allocation problem with uncertainty in the demand coefficients. Specifically, for each demand point only an interval estimate of its demand is known, and we consider the problem of determining where to locate a new service when a given fraction of these demand points must be served by the facility. The optimal solution of this problem is the "minimax regret" location, i.e., the point that minimizes the worst-case loss in the objective function that may occur because a decision is made without knowing which state of nature will take place. For the case where the demand points are vertices of a network, we show that the robust location–allocation problem can be solved in O(min{p, n − p}·n³m) time, where n is the number of demand points, p (p < n) is the fixed number of demand points that must be served by the new service, and m is the number of edges of the network.
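A brute-force toy version of the minimax-regret computation is easy to write down; everything below (the line metric, the serve-the-p-cheapest allocation, and the assumption that worst cases occur at interval endpoints) is our simplification for illustration, not the paper's O(min{p, n − p}·n³m) network algorithm.

```python
import itertools
import numpy as np

# Toy instance: demand points on a line with interval demand weights.
points = np.array([0.0, 3.0, 7.0, 10.0])
intervals = [(1, 4), (2, 3), (1, 2), (3, 5)]    # (low, high) demand per point
candidates = np.linspace(0.0, 10.0, 101)        # candidate locations
p = 3                                           # points that must be served

def cost(loc, weights):
    """Serve the p cheapest points: sum of the p smallest weighted distances."""
    return np.sort(weights * np.abs(points - loc))[:p].sum()

# Enumerate endpoint scenarios (assuming, for this toy, that worst cases
# occur at the interval endpoints).
scenarios = [np.array(s, dtype=float) for s in itertools.product(*intervals)]
best_cost = [min(cost(x, w) for x in candidates) for w in scenarios]
regret = [max(cost(x, w) - bc for w, bc in zip(scenarios, best_cost))
          for x in candidates]
print("minimax-regret location:", candidates[int(np.argmin(regret))])
```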

8.
We consider the problem of scheduling jobs on-line, on a single machine and on identical machines, with the objective of minimizing total completion time. We assume that the jobs arrive over time. We give a general 2-competitive algorithm for the single machine problem. The algorithm is based on delaying the release times of the jobs, i.e., making the jobs available to the on-line scheduler artificially later than their actual release times. Our algorithm includes two known algorithms for this problem that apply delay of release times. The proposed algorithm is interesting because it gives the on-line scheduler a whole range of choices for the delays, each of which leads to 2-competitiveness. We also show that the algorithm is 2α-competitive for the problem on identical machines, where α is the performance ratio of the Shortest Remaining Processing Time first rule for the preemptive relaxation of the problem.
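A sketch in the spirit of the delay-based family described above: each job j becomes visible to the scheduler only at max(r_j, α·p_j), and whenever the machine is free the shortest visible job is run nonpreemptively. The delay factor α = 1 is one classical choice; this is our illustration, not necessarily the paper's exact rule.

```python
import heapq

def delayed_spt(jobs, alpha=1.0):
    """Single-machine on-line SPT with artificially delayed release times.

    jobs: list of (release_time, processing_time) pairs. Each job becomes
    visible only at max(r_j, alpha * p_j); whenever the machine is free,
    the shortest visible job is run nonpreemptively. Returns the total
    completion time of the schedule.
    """
    delayed = sorted((max(r, alpha * p), p) for r, p in jobs)
    t, total, i, avail = 0.0, 0.0, 0, []
    while i < len(delayed) or avail:
        if not avail and t < delayed[i][0]:
            t = delayed[i][0]             # idle until the next delayed release
        while i < len(delayed) and delayed[i][0] <= t:
            heapq.heappush(avail, delayed[i][1])
            i += 1
        t += heapq.heappop(avail)         # run the shortest visible job
        total += t
    return total

print(delayed_spt([(0.0, 5.0), (1.0, 1.0), (2.0, 2.0)]))   # 16.0
```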

9.
We focus on a well-known classification task for expert systems based on Bayesian networks: predicting the state of a target variable given an incomplete observation of the other variables in the network, i.e., an observation of a subset of all the possible variables. To provide conclusions robust to near-ignorance about the process that prevents some of the variables from being observed, a new rule, called conservative updating, has recently been derived. In this paper we address the problem of efficiently computing the conservative updating rule for robust classification with Bayesian networks. We first show that the general problem is NP-hard, thus establishing a fundamental limit on the possibility of doing robust classification efficiently. We then define a wide subclass of Bayesian networks that does admit efficient computation, by developing a new classification algorithm for this class which substantially extends the limits of efficient computation with respect to the previously existing algorithm. The algorithm is formulated as a variable elimination procedure whose computation time is linear in the input size.

10.
We consider the problem of multi-criteria classification, in which a set of "if … then …" decision rules is used as a preference model to classify objects evaluated on a set of criteria and regular attributes. Given a sample of classification examples, called the learning data set, the rules are induced from dominance-based rough approximations of preference-ordered decision classes, according to the Variable Consistency Dominance-based Rough Set Approach (VC-DRSA). The main question answered in this paper is how to classify an object using decision rules when it is covered by (i) no rule, (ii) exactly one rule, or (iii) several rules. The proposed classification scheme can be applied both to the learning data set (to restore the classification known from the examples) and to a testing data set (to predict the classification of new objects). A hypothetical example from the area of telecommunications is used to illustrate the proposed classification method and to compare it with some previous proposals.
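The three coverage cases can be made concrete with a toy rule-based classifier. Note that the strength-weighted vote used below for case (iii) is a deliberate simplification of ours; VC-DRSA itself resolves conflicts through intervals of decision classes.

```python
def classify(obj, rules, classes, default=None):
    """Toy classifier covering the three cases from the abstract.

    rules: list of (condition, decision_class, strength) triples, where
    condition is a predicate over the object.
    """
    covering = [(cls, s) for cond, cls, s in rules if cond(obj)]
    if not covering:                   # case (i): no rule covers the object
        return default
    if len(covering) == 1:             # case (ii): exactly one covering rule
        return covering[0][0]
    score = {c: 0.0 for c in classes}  # case (iii): several rules -> vote
    for cls, s in covering:
        score[cls] += s
    return max(score, key=score.get)

rules = [
    (lambda o: o["debt"] <= 0.3, "accept", 0.9),
    (lambda o: o["income"] >= 50, "accept", 0.6),
    (lambda o: o["debt"] >= 0.6, "reject", 0.8),
]
print(classify({"debt": 0.2, "income": 60}, rules, ["accept", "reject"]))
```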

11.
For statistical decision problems, there are two well-known methods of randomization: on the one hand, randomization by means of mixtures of nonrandomized decision functions (randomized decision rules) in the game "statistician against nature"; on the other hand, randomization by means of randomized decision functions. In this paper, we consider the problem of risk-equivalence of these two procedures: imposing fairly general conditions on a nonsequential decision problem, it is shown that to each randomized decision rule there is a randomized decision function with uniformly the same risk, and vice versa. The crucial argument is based on rewriting risk-equivalence in terms of Choquet's integral representation theorem. It is shown, in addition, that for certain special cases which do not fulfill the assumptions of the main theorem, risk-equivalence holds at least partially.

12.
Additive utility function models are widely used in multiple criteria decision analysis. In such models, a numerical value is associated with each alternative involved in the decision problem. It is computed by aggregating the scores of the alternative on the different criteria of the decision problem. The score of an alternative on a criterion is determined by a marginal value function that evolves monotonically with the alternative's performance on that criterion. Determining the shape of the marginals is not easy for a decision maker; it is easier for him/her to make statements such as "alternative a is preferred to b". In order to help the decision maker, UTA disaggregation procedures use linear programming to approximate the marginals by piecewise linear functions based only on such statements. In this paper, we propose to infer polynomials and splines instead of piecewise linear functions for the marginals. To this end, we use semidefinite programming instead of linear programming. We illustrate this new elicitation method and present some experimental results.
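For context, a compact version of the classical piecewise linear UTA disaggregation that the paper generalizes: a linear program infers non-negative marginal-value increments from pairwise preference statements while minimizing total violation. The data, breakpoints and threshold delta below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def phi(perf, breaks=(0.0, 0.5, 1.0)):
    """Coefficients of the marginal-value increments for piecewise linear
    marginals with the given breakpoints (one marginal per criterion)."""
    return np.array([np.clip((g - lo) / (hi - lo), 0.0, 1.0)
                     for g in perf
                     for lo, hi in zip(breaks, breaks[1:])])

# Hypothetical data: two criteria scaled to [0, 1], and DM statements "a > b".
perfs = {"a": (0.9, 0.2), "b": (0.4, 0.6), "c": (0.1, 0.8)}
prefs = [("a", "b"), ("b", "c")]
delta, nw = 0.05, 4                              # 2 criteria x 2 segments
c = np.r_[np.zeros(nw), np.ones(len(prefs))]     # minimize total error sigma
A_ub = np.array([np.r_[phi(perfs[b]) - phi(perfs[a]), -np.eye(len(prefs))[j]]
                 for j, (a, b) in enumerate(prefs)])   # U(a) - U(b) >= delta
b_ub = np.full(len(prefs), -delta)
A_eq = np.r_[np.ones(nw), np.zeros(len(prefs))][None, :]  # increments sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
print("increments:", res.x[:nw].round(3), "errors:", res.x[nw:].round(3))
```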

13.
In this paper we propose a reduced vertex result for the robust solution of uncertain semidefinite optimization problems subject to interval uncertainty. If the number of decision variables is m and the size of the coefficient matrices in the linear matrix inequality constraints is n×n, a direct vertex approach would require satisfaction of 2^(n(m+1)(n+1)/2) vertex constraints: a huge number, even for small values of n and m. The conditions derived here are instead based on the introduction of m slack variables and a subset of vertex coefficient matrices of cardinality 2^(n−1), thus reducing the problem to a practically manageable size, at least for small n. A similar size reduction is also obtained for a class of problems with affinely dependent interval uncertainty. This work is supported by MIUR under the FIRB project "Learning, Randomization and Guaranteed Predictive Inference for Complex Uncertain Systems," and by CNR RSTL funds.
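The size of the reduction is easy to appreciate numerically; this snippet simply evaluates the two vertex counts quoted in the abstract for a few small values of n and m.

```python
# Vertex-count comparison from the abstract: the direct approach needs
# 2**(n*(m+1)*(n+1)//2) vertex constraints, the reduced one 2**(n-1)
# (plus m slack variables).
for n, m in [(3, 2), (4, 3), (5, 4)]:
    direct = 2 ** (n * (m + 1) * (n + 1) // 2)
    reduced = 2 ** (n - 1)
    print(f"n={n}, m={m}: direct {direct:.3e} vs reduced {reduced}")
```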

14.
A population of items is said to be "group-testable" (i) if the items can be classified as "good" and "bad", and (ii) if it is possible to carry out a simultaneous test on a batch of items with two possible outcomes: "success" (indicating that all items in the batch are good) or "failure" (indicating a contaminated batch). In this paper, we assume that the items to be tested arrive at the group-testing centre according to a Poisson process and are served (i.e., group-tested) in batches by one server. The service time distribution is general, but it depends on the size of the batch being tested. These assumptions give rise to the bulk queueing model M/G(m,M)/1, where the decision variables m and M (> m) are the lower and upper limits on the batch size. We develop the generating function for the steady-state probabilities of the embedded Markov chain. We then consider a more realistic finite-state version of the problem where the testing centre has finite capacity, and present an expected profit objective function. We compute the optimal values of the decision variables (m, M) that maximize the expected profit. For a special case of the problem, we determine the optimal decision explicitly in terms of the Lambert function.
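A toy discrete-event simulation of the batch-testing station helps picture the M/G(m,M)/1 dynamics: items arrive as a Poisson stream, and whenever the server is idle with at least m items waiting, it tests min(queue, M) at once. The batch-size-dependent service time used below is an arbitrary stand-in for the general distribution G, and no optimization over (m, M) is attempted.

```python
import random

def simulate(lam=1.0, m=3, M=8, horizon=10_000.0, seed=0):
    """Toy simulation of the batch-testing queue with thresholds (m, M)."""
    random.seed(seed)
    t, queue, batches, items = 0.0, 0, 0, 0
    next_arrival = random.expovariate(lam)
    server_free = 0.0
    while t < horizon:
        # Advance to the next relevant event: an arrival, or the server
        # becoming free while enough items are waiting.
        if next_arrival <= server_free or queue < m:
            t = next_arrival
            queue += 1
            next_arrival = t + random.expovariate(lam)
        else:
            t = server_free
        # Start a batch test whenever the server is idle and >= m items wait.
        if t >= server_free and queue >= m:
            batch = min(queue, M)
            queue -= batch
            server_free = t + 0.5 + 0.1 * batch   # stand-in for G(batch)
            batches += 1
            items += batch
    return items / t, batches

rate, batches = simulate()
print(f"throughput ~ {rate:.2f} items per unit time over {batches} batches")
```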

15.
This study develops an optimal portfolio decision rule under a nonparametric characterization of the interest rate dynamics. We first derive an optimal decision rule based on the long rate and the spread (where $\mathrm{spread} = \mathrm{long\ rate} - \mathrm{short\ rate}$); then we employ nonparametric kernel regression to estimate, via the Nadaraya–Watson (N–W) estimators, the parameters related to the two variables; and finally, using the N–W estimates as inputs, we implement our decision rule by the explicit finite difference scheme to find the optimal allocation of wealth between short and long bonds for an investor with power utility at each time over a ten-year horizon. Four stylized facts can be observed from our results: (i) the optimal fractions in the short bond do not appear to vary with the short rate; (ii) the optimal fractions decrease as the long rate rises and increase as it falls; (iii) the optimal fractions increase as the horizon becomes shorter; and (iv) the optimal fractions generally decrease in the early part of the horizon for a more risk-averse investor.
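The Nadaraya–Watson step is standard and easy to sketch. Below, a Gaussian-kernel N–W estimator is applied to synthetic rate data to estimate the drift of a rate as a function of its level; the data, kernel and bandwidth are our assumptions, not the study's.

```python
import numpy as np

def nadaraya_watson(x_query, x_data, y_data, bandwidth=0.5):
    """Nadaraya-Watson estimator with a Gaussian kernel:
    m_hat(x) = sum_i K((x - x_i) / h) * y_i / sum_i K((x - x_i) / h)."""
    u = (x_query[:, None] - x_data[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)                    # Gaussian kernel weights
    return (w * y_data).sum(axis=1) / w.sum(axis=1)

# Toy use: estimate the drift of a rate as a function of its level.
rng = np.random.default_rng(0)
rates = 5.0 + np.cumsum(rng.normal(0.0, 0.1, 500))
x, y = rates[:-1], np.diff(rates)                # regress change on level
grid = np.linspace(x.min(), x.max(), 5)
print(nadaraya_watson(grid, x, y).round(4))
```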

16.
In this article, the problem of classifying a new observation vector into one of two known groups Π_i, i = 1, 2, distributed as multivariate normal with common covariance matrix, is considered. The total number of observation vectors from the two groups is, however, less than the dimension of the observation vectors. A sample-squared distance between the two groups, using the Moore–Penrose inverse, is introduced. A classification rule based on the minimum distance is proposed to classify an observation vector into two or several groups. An expression for the error of misclassification when there are only two groups is derived for large p and n = O(p^δ), 0 < δ < 1.
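A simplified stand-in for the proposed rule: with fewer observations than dimensions, the pooled sample covariance is singular, so the Moore–Penrose inverse replaces the ordinary inverse in a Mahalanobis-type minimum-distance assignment. The article's sample-squared distance may differ in details such as normalization.

```python
import numpy as np

def classify_mp(x, groups):
    """Assign x to the group with the smallest Mahalanobis-type distance,
    computed with the Moore-Penrose inverse of the (singular) pooled
    sample covariance matrix."""
    means = [g.mean(axis=0) for g in groups]
    pooled = np.vstack([g - mu for g, mu in zip(groups, means)])
    dof = sum(len(g) for g in groups) - len(groups)
    S_plus = np.linalg.pinv(pooled.T @ pooled / dof)   # Moore-Penrose inverse
    dists = [float((x - mu) @ S_plus @ (x - mu)) for mu in means]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
p = 50                              # dimension exceeds total sample size
g1 = rng.normal(0.0, 1.0, (10, p))
g2 = rng.normal(0.8, 1.0, (12, p))
x = np.full(p, 0.8)                 # new observation near group 2's mean
print("assigned to group", classify_mp(x, [g1, g2]) + 1)
```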

17.
18.
We study the pricing and hedging of contingent claims that are subject to event risk, which we define as rare and unpredictable events whose occurrence may be correlated to, but cannot be hedged perfectly with, standard marketed instruments. The super-replication costs of such event-sensitive contingent claims (ESCC) in general provide little guidance for the pricing of these claims. Instead, we study utility-based prices under two scenarios for the resolution of uncertainty about the event: when the event is continuously monitored, and when it is revealed only at the payment date. In both cases, we transform the incomplete-market optimal portfolio choice problem of an agent endowed with an ESCC into a complete-market problem with a state- and possibly path-dependent utility function. For negative exponential utility, we obtain an explicit representation of the utility-based prices under both information resolution scenarios, and this in turn leads to a simple characterization of the early resolution premium. For constant relative risk aversion utility functions we propose a simple numerical scheme and study the impact of the size of the position, wealth and expected return on these prices.

19.
The symmetric maximum, denoted by ⊻, is an extension of the usual maximum operation ∨ such that 0 is the neutral element and −x is the symmetric (or inverse) of x, i.e., x ⊻ (−x) = 0. However, such an extension does not preserve the associativity of ∨. This fact calls for systematic ways of parenthesizing (or bracketing) the terms of a sequence (with more than two arguments) when using such an extended maximum. We refer to such systematic (predefined) ways of parenthesizing as computation rules. As it turns out, there are infinitely many computation rules, each of which corresponds to a systematic way of bracketing the arguments of sequences. Essentially, computation rules reduce to deleting terms of sequences based on the condition x ⊻ (−x) = 0. This observation gives rise to a quasi-order on the set of such computation rules: say that rule 1 is below rule 2 if, for all sequences of numbers, rule 1 deletes more terms of the sequence than rule 2. In this paper we present a study of this quasi-ordering of computation rules. In particular, we show that the induced poset of all equivalence classes of computation rules is uncountably infinite, has infinitely many maximal elements, has infinitely many atoms, and embeds the powerset of the natural numbers ordered by inclusion.
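The operation and its nonassociativity are easy to demonstrate. The sketch below assumes the standard definition (the result is the argument of largest absolute value, with x ⊻ (−x) = 0) and implements one computation rule, a left-to-right fold:

```python
def sym_max(x, y):
    """Symmetric maximum: the argument with the larger absolute value,
    except that exact opposites cancel: sym_max(x, -x) == 0."""
    if x == -y:
        return 0
    return x if abs(x) > abs(y) else y

def left_to_right(seq):
    """One computation rule: fold the nonassociative operation left-to-right.
    Different bracketings (rules) can give different results."""
    acc = 0
    for v in seq:
        acc = sym_max(acc, v)
    return acc

print(left_to_right([3, -3, 2]))     # ((3 v -3) v 2) = 0 v 2  -> 2
print(sym_max(3, sym_max(-3, 2)))    # (3 v (-3 v 2)) = 3 v -3 -> 0
```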

20.
This article studies the optimal consumption-leisure, portfolio and retirement selection of an infinitely lived investor whose preference is formulated by α-maxmin expected CES utility, which differentiates ambiguity from ambiguity attitude. Adopting recursive multiple-priors utility and the technique of backward stochastic differential equations (BSDEs), we transform the α-maxmin expected CES utility into a classical expected CES utility under a new probability measure related to the degree of the investor's uncertainty. Our model investigates the optimal consumption-leisure-work selection, the optimal portfolio selection, and the optimal stopping problem. In this model, the investor is able to adjust her supply of labor flexibly above a certain minimum work-hour, along with a retirement option. The problem can be solved analytically by means of a variational inequality, and the optimal retirement time is given as the first time her wealth exceeds a certain critical level. The optimal consumption-leisure and portfolio strategies before and after retirement are provided in closed form. Finally, we discuss how the optimal consumption-leisure, portfolio and critical wealth level under ambiguity differ from those in the absence of ambiguity.
