Similar articles
20 similar articles found (search time: 15 ms)
1.
Choquet expected utility, which uses capacities (i.e., nonadditive probability measures) in place of σ-additive probability measures, has been introduced to decision making under uncertainty to cope with observed effects of ambiguity aversion like the Ellsberg paradox. In this paper we present necessary and sufficient conditions for stochastic dominance between capacities (i.e., the expected utility with respect to one capacity exceeds that with respect to the other for a given class of utility functions). One wide class of conditions refers to probability inequalities on certain families of sets. To yield another general class of conditions we present sufficient conditions for the existence of a probability measure P with ∫f dC = ∫f dP for all increasing functions f when C is a given capacity. Examples include n-th degree stochastic dominance on the reals and many cases of so-called set dominance. Finally, applications to decision making are given, including anticipated utility with unknown distortion function.
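The capacity version of stochastic dominance is beyond a short sketch, but the classical special case the abstract builds on, first-degree stochastic dominance between ordinary discrete probability measures, can be checked directly from the CDFs. A minimal illustration (function name, tolerance, and example data are my own, not from the paper):

```python
def first_order_dominates(p, q, support):
    """True if distribution p first-order stochastically dominates q.

    p, q: dicts mapping points of `support` (sorted ascending) to probabilities.
    p dominates q iff its CDF never exceeds q's, which is equivalent to
    E[u(X_p)] >= E[u(X_q)] for every increasing utility function u.
    """
    cp = cq = 0.0
    for t in support:
        cp += p.get(t, 0.0)
        cq += q.get(t, 0.0)
        if cp > cq + 1e-12:  # p puts strictly more mass on low outcomes
            return False
    return True

# A distribution shifted toward higher outcomes dominates the original:
p = {1: 0.2, 2: 0.3, 3: 0.5}
q = {1: 0.5, 2: 0.3, 3: 0.2}
print(first_order_dominates(p, q, [1, 2, 3]))  # True
print(first_order_dominates(q, p, [1, 2, 3]))  # False
```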

2.
Summary An arbitrary finitely additive probability can be decomposed uniquely into a convex combination of a countably additive probability and a purely finitely additive (PFA) one. The coefficient of the PFA probability is an upper bound on the extent to which conglomerability may fail in a finitely additive probability with that decomposition. If the probability is defined on a σ-field, the bound is sharp. Hence, non-conglomerability (or equivalently non-disintegrability) characterizes finitely as opposed to countably additive probability. Nonetheless, there exists a PFA probability which is simultaneously conglomerable over an arbitrary finite set of partitions. Neither conglomerability nor non-conglomerability in a given partition is closed under convex combinations. But the convex combination of PFA ultrafilter probabilities, each of which cannot be made conglomerable in a common margin, is singular with respect to any finitely additive probability that is conglomerable in that margin.

3.
Tversky and Kahneman have worked out an appealing model of decision making under uncertainty, involving rank- and sign-dependent utilities. This model, cumulative prospect theory (CPT), as well as related models proposed by other authors, has received wide acclaim. Available information and the psychological attitude toward ambiguity jointly determine the subjective likelihood values the decision maker attributes to events, expressed by either of two capacities depending on the prospect of gains or losses; unfortunately, neither the interpretation of these capacities nor the prediction of how they are linked is straightforward. An insight into these issues is given by studying the consistency of CPT with certain generalized expected utility models when faced with objective data described by lower–upper probability intervals. Means of testing the existence of subjectively lower–upper probabilized events are obtained, as well as means of evaluating ambiguity aversion.

4.
The Subjectively Weighted Linear Utility (SWLU) model for decision making under uncertainty can accommodate non-neutral attitudes toward ambiguity. We first characterize ambiguity aversion in terms of the SWLU model parameters. In addition, we show that ambiguity content may reasonably be regarded as residing in the decision maker's subjective probability distribution of induced utility. In particular, (a) a special kind of mean preserving spread of the induced utility distribution will always increase ambiguity content, and (b) utility distributions which are more shiftable by new information have higher ambiguity content.

5.
In a hidden Markov model, the underlying Markov chain is usually unobserved. Often, the state path with maximum posterior probability (Viterbi path) is used as its estimate. Although having the biggest posterior probability, the Viterbi path can behave very atypically by passing states of low marginal posterior probability. To avoid such situations, the Viterbi path can be modified to bypass such states. In this article, an iterative procedure for improving the Viterbi path in such a way is proposed and studied. The iterative approach is compared with a simple batch approach where a number of states with low probability are all replaced at the same time. It can be seen that the iterative way of adjusting the Viterbi state path is more efficient and it has several advantages over the batch approach. The same iterative algorithm for improving the Viterbi path can be used when it is possible to reveal some hidden states and estimating the unobserved state sequence can be considered as an active learning task. The batch approach as well as the iterative approach are based on classification probabilities of the Viterbi path. Classification probabilities play an important role in determining a suitable value for the threshold parameter used in both algorithms. Therefore, properties of classification probabilities under different conditions on the model parameters are studied.
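The baseline the paper's iterative procedure starts from is the standard Viterbi path, which can be computed in log space in a few lines. A minimal sketch with invented toy parameters (the adjustment step itself would additionally require the marginal posteriors from the forward-backward algorithm, not shown here):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Log-space Viterbi: most probable hidden state path given observations."""
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for s at time t
            prev, best = max(((r, V[t - 1][r] + math.log(trans_p[r][s])) for r in states),
                             key=lambda x: x[1])
            V[t][s] = best + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-state model (parameters invented for illustration).
states = ("H", "C")
start = {"H": 0.8, "C": 0.2}
trans = {"H": {"H": 0.7, "C": 0.3}, "C": {"H": 0.4, "C": 0.6}}
emit = {"H": {1: 0.2, 2: 0.4, 3: 0.4}, "C": {1: 0.5, 2: 0.4, 3: 0.1}}
print(viterbi([3, 1, 3], states, start, trans, emit))  # ['H', 'H', 'H']
```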

6.
This paper considers model uncertainty for multistage stochastic programs. The data and information structure of the baseline model is a tree, on which the decision problem is defined. We consider “ambiguity neighborhoods” around this tree as alternative models which are close to the baseline model. Closeness is defined in terms of a distance for probability trees, called the nested distance. This distance is appropriate for scenario models of multistage stochastic optimization problems as was demonstrated in Pflug and Pichler (SIAM J Optim 22:1–23, 2012). The ambiguity model is formulated as a minimax problem, where the optimal decision is to be found, which minimizes the maximal objective function within the ambiguity set. We give a setup for studying saddle point properties of the minimax problem. Moreover, we present solution algorithms for finding the minimax decisions at least asymptotically. As an example, we consider a multiperiod stochastic production/inventory control problem with weekly ordering. The stochastic scenario process is given by the random demands for two products. We determine the minimax solution and identify the worst trees within the ambiguity set. It turns out that the probability weights of the worst case trees are concentrated on few very bad scenarios.

7.
A logical and algebraic treatment of conditional probability
This paper is devoted to a logical and algebraic treatment of conditional probability. The main ideas are the use of non-standard probabilities and of some kind of standard-part function in order to deal with the case where the conditioning event has probability zero, and the use of a many-valued modal logic in order to treat the probability of an event as the truth value of the sentence asserting that the event is probable, along the lines of Hájek's book [H98] and of [EGH96]. To this purpose, we introduce a probabilistic many-valued logic, called FP(S), which is sound and complete with respect to a class of structures having a non-standard extension of [0,1] as set of truth values. We also prove that the coherence of an assessment of conditional probabilities is equivalent to the coherence of a suitably defined theory over FP(S) whose proper axioms reflect the assessment itself. Mathematics Subject Classification (2000): 03B50, 06D35

8.
We consider type II codes over finite rings. It is well known that their g-th complete weight enumerator polynomials are invariant under the action of a certain finite subgroup, which we denote Hk,g. We show that the invariant ring with respect to Hk,g is generated by such polynomials. This is carried out by using some closely related results concerning theta series and Siegel modular forms.

9.
Summary The grand canonical Gibbs states for a system from classical statistical mechanics can be defined as the probability measures on an appropriate phase space which have certain specified conditional probabilities. These conditional probabilities are with respect to a family of σ-algebras associated with subsets of the space in which the system lies. If different families of σ-algebras are used then canonical and microcanonical Gibbs states are obtained. The relationship between these different Gibbs states is studied and, subject to various conditions, it is shown that each canonical and microcanonical Gibbs state can be written as a convex mixture of grand canonical Gibbs states.

10.
Binary sensor systems are analog sensors of various types (optical, microelectromechanical (MEMS) systems, X-ray, gamma-ray, acoustic, electronic, etc.) based on the binary decision process. Typical examples of such “binary sensors” are X-ray luggage inspection systems, product quality control systems, automatic target recognition systems, numerous medical diagnostic systems, and many others. In all these systems, the binary decision process provides only two mutually exclusive responses. There are also two types of key parameters that characterize either a system or external conditions in relation to the system that are determined by their prior probabilities. In this paper, by using a formal neuron model, we analyze the problem of threshold redundancy of binary sensors of a critical state. The following three major tasks are solved:
  1. implementation of the algorithm of calculation of error probabilities for threshold redundancy of a group of sensors;
  2. computation of the minimal upper bound for the probability in a closed analytical form and determination of its link with Claude Shannon's theorem;
  3. derivation of the expression (estimate) for sensor “weights” when the probability of the binary system error does not exceed the specified minimal upper bound.
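As a concrete illustration of task 1 above, the error probability of a weighted threshold vote over independent binary sensors can be computed by exhaustive enumeration of error patterns. A sketch under stated assumptions (sensors independent, each error probability below 1/2; the log-likelihood-ratio weights are the classical choice for this setting, not necessarily the paper's estimate):

```python
import math
from itertools import product

def majority_vote_error(error_probs, weights=None):
    """Probability that a weighted threshold vote of independent binary
    sensors is wrong, by exhaustive enumeration of error patterns."""
    n = len(error_probs)
    if weights is None:
        # classical log-likelihood-ratio weights (assumes each p_i < 1/2)
        weights = [math.log((1 - p) / p) for p in error_probs]
    total = 0.0
    for pattern in product((0, 1), repeat=n):  # 1 = this sensor errs
        prob = math.prod(p if e else 1 - p for e, p in zip(pattern, error_probs))
        vote = sum(-w if e else w for e, w in zip(pattern, weights))
        if vote < 0:              # erring sensors outweigh correct ones
            total += prob
        elif vote == 0:           # break exact ties at random
            total += 0.5 * prob
    return total

# Three sensors; the fused system errs less often than the best single sensor.
print(majority_vote_error([0.1, 0.2, 0.3]))  # 0.098
```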

11.
For a birth and death chain on the nonnegative integers with birth and death probabilities p_i and q_i = 1 − p_i and reflecting barrier at 0, it is shown that the right tails of the probability of the first return from state 0 to state 0 are simple transition probabilities of a dual birth and death chain obtained by switching p_i and q_i. Combinatorial and analytical proofs are presented. Extensions and relations to other concepts of duality in the literature are discussed.

12.
This paper presents results of research related to multicriteria decision making under information uncertainty. The Bellman–Zadeh approach to decision making in a fuzzy environment is utilized for analyzing multicriteria optimization models (X,M models) under deterministic information. Its application conforms to the principle of guaranteed result and provides constructive lines in obtaining harmonious solutions on the basis of analyzing associated maxmin problems. This circumstance permits one to generalize the classic approach to considering the uncertainty of quantitative information (based on constructing and analyzing payoff matrices reflecting effects which can be obtained for different combinations of solution alternatives and the so-called states of nature) in monocriteria decision making to multicriteria problems. Considering that the uncertainty of information can produce considerable decision uncertainty regions, the resolving capacity of this generalization does not always permit one to obtain unique solutions. Taking this into account, a proposed general scheme of multicriteria decision making under information uncertainty also includes the construction and analysis of the so-called X,R models (which contain fuzzy preference relations as criteria of optimality) as a means for the subsequent contraction of the decision uncertainty regions. The paper's results are of a universal character and are illustrated by a simple example.
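The Bellman–Zadeh maxmin step for a finite X,M model can be sketched in a few lines: each criterion is normalized into a membership function on [0, 1], the fuzzy decision is their pointwise minimum, and the harmonious solution maximizes that minimum. (The linear normalization, function names, and example data below are illustrative assumptions, not the paper's model.)

```python
def bellman_zadeh_choice(alternatives, criteria):
    """Bellman-Zadeh maxmin choice: normalize each criterion to a membership
    degree in [0, 1], take the pointwise minimum as the fuzzy decision, and
    pick the alternative maximizing that minimum."""
    def normalize(values, maximize):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        return [(v - lo) / (hi - lo) if maximize else (hi - v) / (hi - lo)
                for v in values]

    columns = [normalize([c["values"][a] for a in alternatives], c["maximize"])
               for c in criteria]
    scores = {a: min(col[i] for col in columns) for i, a in enumerate(alternatives)}
    return max(scores, key=scores.get), scores

# Illustrative data: maximize profit, minimize cost.
alts = ["A", "B", "C"]
crits = [{"values": {"A": 10, "B": 30, "C": 20}, "maximize": True},
         {"values": {"A": 5, "B": 9, "C": 6}, "maximize": False}]
best, scores = bellman_zadeh_choice(alts, crits)
print(best, scores)  # C {'A': 0.0, 'B': 0.0, 'C': 0.5}
```

The extreme alternatives A and B each score zero on one criterion, so the balanced alternative C wins under the maxmin rule, which is exactly the "guaranteed result" behavior the abstract describes.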

13.
The simplest way to perform a fuzzy risk assessment is to calculate the fuzzy expected value and convert fuzzy risk into non-fuzzy risk, i.e., a crisp value. In doing so, there is a transition from the fuzzy set to the crisp set. Therefore, the first step is to define an α level value, and then select the elements x with a membership degree A(x) ≥ α. The higher the value of α, the lower the degree of uncertainty—the probability is closer to its true value. The lower the value of α, the higher the degree of uncertainty—this results in a probability of lower serviceability. The possibility level α depends on technical conditions and knowledge. The fuzzy expected value of a possibility-probability distribution is a set bounded below and above by two fuzzy expected values, and these bounds represent the fuzzy risk values being calculated. Therefore, we can obtain a conservative risk value, a venture risk value and a maximum probability risk value. Under such an α level, three risk values can be calculated. As α adopts all values throughout the set [0,1], it is possible to obtain a series of risk values. Therefore, the fuzzy risk may be a multi-valued or set-valued risk. Calculation of the fuzzy expected value of flood risk in the Jinhua River basin has been performed based on the interior-outer set model. Selection of an α value depends on the confidence of different groups of people, while selection of a conservative risk value or venture risk value depends on the risk preference of these people.
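The α-cut step described above can be illustrated on a discrete possibility distribution over candidate risk probabilities: the cut keeps candidates whose possibility degree is at least α, its endpoints give the conservative (upper) and venture (lower) risk values, and the candidate of highest possibility gives the maximum-probability risk. This is only a sketch of the α-cut idea with invented numbers, not the paper's interior-outer set model:

```python
def alpha_cut_risks(possibility, alpha):
    """Conservative, venture, and maximum-possibility risk values from a
    discrete possibility distribution over candidate probabilities.

    possibility: dict {candidate probability p: possibility degree A(p)}.
    """
    cut = [p for p, a in possibility.items() if a >= alpha]  # the alpha-cut
    if not cut:
        raise ValueError("alpha-cut is empty; choose a lower alpha")
    venture, conservative = min(cut), max(cut)
    max_poss = max(possibility, key=possibility.get)
    return conservative, venture, max_poss

# Invented possibility degrees for five candidate flood probabilities:
poss = {0.01: 0.3, 0.02: 0.8, 0.03: 1.0, 0.05: 0.6, 0.08: 0.2}
print(alpha_cut_risks(poss, alpha=0.5))  # (0.05, 0.02, 0.03)
```

Raising α shrinks the cut, narrowing the gap between the conservative and venture values, which matches the abstract's remark that higher α means lower uncertainty.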

14.
The article considers the problem of sequential discrimination of hypotheses using independent observations. An optimal sequential plan is developed for discrimination of hypotheses about a mismatch in the class of plans with bounded probabilities of making an incorrect decision. Translated from Statisticheskie Metody, pp. 189–204, 1980.
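The classical instance of a sequential plan with bounded error probabilities is Wald's sequential probability ratio test; the sketch below shows that two-hypothesis version, not the article's specific plan. The hypotheses, thresholds, and coin example are illustrative assumptions:

```python
import math
import random

def sprt(sample, llr, alpha=0.05, beta=0.05, max_n=10000):
    """Wald's sequential probability ratio test for H0 vs H1.

    sample(): draws one observation; llr(x): log-likelihood ratio
    log(f1(x)/f0(x)). Sampling stops when the cumulative LLR leaves
    (log B, log A), with A = (1-beta)/alpha and B = beta/(1-alpha),
    which (approximately) bounds the error probabilities by alpha and beta.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s = 0.0
    for n in range(1, max_n + 1):
        s += llr(sample())
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", max_n

# Toy example: coin with P(heads)=0.5 under H0 and 0.7 under H1; data from H1.
random.seed(1)
p0, p1 = 0.5, 0.7
draw = lambda: 1 if random.random() < p1 else 0
llr = lambda x: math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
decision, n = sprt(draw, llr)
print(decision, n)
```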

15.
We study the effect of additional information on the quality of decisions. We define the extreme case of complete information about probabilities as our reference scenario. There, decision makers (DMs) can use expected utility theory to evaluate the best alternative. Starting from the worst case—where DMs have no information at all about probabilities—we find a method of constantly increasing the information by systematically limiting the ranges of the probabilities. In our simulation-based study, we measure the effects of the constant increase in information by using different forms of relative volumes. We define these as the relative volumes of the gradually narrowing areas which lead to the same (or a similar) decision as with the probability in the reference scenario. Thus, the relative volumes account for the quality of information. Combining the quantity and quality of information, we find decreasing returns to scale on information, or in other words, the costs of gathering additional information increase with the level of information. Moreover, we show that more available alternatives influence the decision process negatively. Finally, we analyze the quality of decisions in processes where more states of nature are considered. We find that this degree of complexity in the decision process also has a negative influence on the quality of decisions.

16.

In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
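The stability question in (1) can be sketched for a single chance node: perturb each state probability by up to ±δ, renormalize, and check whether the expected-value-maximizing strategy ever loses. The enumeration of extreme perturbations below is a simple stand-in for the paper's robust optimization framework, with invented payoffs:

```python
from itertools import product

def expected_value(probs, payoffs):
    return sum(p * v for p, v in zip(probs, payoffs))

def robust_best(strategies, probs, delta):
    """Is the expected-value-maximizing strategy stable when each state
    probability may shift by up to +/-delta? Enumerate extreme perturbations
    (clipped to [0,1], then renormalized) and collect every strategy that is
    optimal in at least one perturbed scenario."""
    baseline = max(strategies, key=lambda s: expected_value(probs, strategies[s]))
    winners = set()
    for shifts in product((-delta, 0.0, delta), repeat=len(probs)):
        q = [min(1.0, max(0.0, p + d)) for p, d in zip(probs, shifts)]
        total = sum(q)
        if total > 0:
            q = [x / total for x in q]
            winners.add(max(strategies, key=lambda s: expected_value(q, strategies[s])))
    return baseline, winners  # baseline is robust iff winners == {baseline}

# Invented example: a risky strategy beats a safe one at the baseline
# probabilities, but not under pessimistic perturbations.
strategies = {"safe": [50, 50], "risky": [100, 0]}
baseline, winners = robust_best(strategies, [0.6, 0.4], delta=0.15)
print(baseline, winners)
```

Here the baseline optimum is "risky", but "safe" wins in at least one perturbed scenario, so the expected-value-maximizing strategy is not robust at this δ.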


17.
We evaluate the probabilities of various events under the uniform distribution on the set of 312-avoiding permutations of {1, …, N}. We derive exact formulas for the probability that the ith element of a random permutation is a specific value less than i, and for joint probabilities of two such events. In addition, we obtain asymptotic approximations to these probabilities for large N when the elements are not close to the boundaries or to each other. We also evaluate the probability that the graph of a random 312-avoiding permutation has k specified decreasing points, and we show that for large N the points below the diagonal look like trajectories of a random walk. © 2015 Wiley Periodicals, Inc. Random Struct. Alg., 49, 599–631, 2016
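For small N such probabilities can be evaluated by brute force: enumerate all permutations, keep the 312-avoiders (their count is the Catalan number C_N), and take frequencies under the uniform distribution. A sketch (the choice of position and value in the example is arbitrary, not one of the paper's formulas):

```python
from itertools import permutations
from math import comb

def avoids_312(p):
    """True if p contains no pattern 312: indices i<j<k with p[j] < p[k] < p[i]."""
    n = len(p)
    return not any(p[j] < p[k] < p[i]
                   for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))

N = 6
avoiders = [p for p in permutations(range(1, N + 1)) if avoids_312(p)]
catalan = comb(2 * N, N) // (N + 1)
print(len(avoiders), catalan)  # 132 132

# Probability that position 3 holds the value 1 under the uniform
# distribution on 312-avoiders:
prob = sum(1 for p in avoiders if p[2] == 1) / len(avoiders)
print(prob)
```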

18.
Forecast horizons and dynamic facility location planning
We consider a dynamic facility location model in which the objective is to find a planning horizon, *, and a first-period decision, X1*, such that X1* is a first-period decision for at least one optimal policy for all problems with planning horizons equal to or longer than *. In other words, we seek a planning horizon, *, such that conditions after * do not influence the choice of the optimal initial decision, X1*. We call * a forecast horizon and X1* an optimal initial decision. For the dynamic uncapacitated fixed charge location problem, we show that simple conditions exist such that the initial decision depends on the length of the planning horizon. Thus, a strictly optimal forecast horizon and initial policy may not exist. We therefore introduce the concepts of ε-optimal forecast horizons and ε-optimal initial solutions. Our computational experience indicates that such solutions can be found for practical problems. Although computing ε-optimal forecast horizons and initial decisions can be cumbersome, this approach offers the potential for making significantly better decisions than those generated by other approaches. To illustrate this, we show that the use of the scenario planning approach can lead to the adoption of the worst possible initial decision under conditions of future uncertainty. On the basis of our results, it appears that the forecast horizon approach offers an attractive tool for making dynamic location decisions.

19.
When Shannon presents the formula for the entropy of a memoryless source, he presupposes that the prior probabilities of the different source symbols are known. This paper deals with the quantity of information acquired when the prior probabilities of a binary source are learned from a sequence of N source symbols or Bernoulli trials. Two learning methods are considered: maximum likelihood estimation of the parameter by calculation of the relative frequency, and calculation of the posterior probability density of the parameter. For both methods the acquired information behaves as (1/2) log N + const. for large N.
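The (1/2) log N + const. behavior can be checked numerically for the Bayesian method: with a uniform prior on the Bernoulli parameter, the information acquired from N trials is the mutual information between the parameter and the observed sequence. A grid-based sketch (the grid size and the pair N = 50, 100 are my own arbitrary choices):

```python
import math

def mutual_information(N, grid=2000):
    """I(theta; X^N) for a Bernoulli parameter theta under a uniform prior,
    estimated by discretizing theta on a grid."""
    thetas = [(i + 0.5) / grid for i in range(grid)]
    mi = 0.0
    for k in range(N + 1):
        lik = [math.comb(N, k) * t ** k * (1 - t) ** (N - k) for t in thetas]
        pk = sum(lik) / grid                 # marginal P(k); equals 1/(N+1) here
        post = [l / pk for l in lik]         # posterior density on the grid
        kl = sum(x * math.log(x) for x in post if x > 0) / grid
        mi += pk * kl                        # KL(posterior || uniform prior)
    return mi

gain = mutual_information(100) - mutual_information(50)
print(gain)  # theory predicts roughly 0.5 * log 2 ~ 0.35 for large N
```

Doubling N adds roughly (1/2) log 2 nats of information, consistent with the (1/2) log N growth stated in the abstract.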

20.
The research tested Laing and Morrison's myopic and hyperopic models for sequential, three‐person coalition games when the goal is maximizing rank position. The myopic model assumes that people behave as if the present trial were the last, while the hyperopic model assumes that people behave as if the coalition formed on the present trial will continue forever. The experiment involved three different planning horizon conditions: an indefinite number of trials (the condition specified by Laing and Morrison for their models), one trial remaining, and eight trials remaining. Four different accumulated point totals were used: 300–250–50, 350–150–100, 250–200–150, and 225–200–175. In general, the myopic model was more successful than the hyperopic model, though neither model was especially accurate. Changes in the myopic model were proposed, and the difficulties in developing a theory of social decision making involving long range planning were discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号