Similar Documents
20 similar records found (search time: 46 ms)
1.
Variable elimination for influence diagrams with super value nodes
In the original formulation of influence diagrams (IDs), each model contained exactly one utility node. In 1990, Tatman and Shachter introduced the possibility of having super value nodes that represent a combination of their parents’ utility functions. They also proposed an arc-reversal algorithm for IDs with super value nodes. In this paper we propose a variable-elimination algorithm for influence diagrams with super value nodes which is faster in most cases, requires less memory in general, introduces far fewer redundant (i.e., unnecessary) variables in the resulting policies, may simplify sensitivity analysis, and can speed up inference in IDs containing canonical models, such as the noisy OR.
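The abstract refers to variable elimination over an influence diagram; as a purely generic illustration of the underlying idea (not the authors' algorithm, and with made-up probability tables), a minimal sketch of eliminating a chance variable from discrete factors might look like this:

```python
from itertools import product

# A factor maps assignments (tuples of values, in variable order) to numbers.
def multiply(f1, vars1, f2, vars2):
    """Pointwise product of two discrete factors (hypothetical minimal API)."""
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    domains = {v: set() for v in out_vars}
    for asg in f1:
        for v, x in zip(vars1, asg):
            domains[v].add(x)
    for asg in f2:
        for v, x in zip(vars2, asg):
            domains[v].add(x)
    out = {}
    for asg in product(*(sorted(domains[v]) for v in out_vars)):
        env = dict(zip(out_vars, asg))
        k1 = tuple(env[v] for v in vars1)
        k2 = tuple(env[v] for v in vars2)
        if k1 in f1 and k2 in f2:
            out[asg] = f1[k1] * f2[k2]
    return out, out_vars

def sum_out(f, vars_, v):
    """Eliminate chance variable v by summing it out of the factor."""
    i = vars_.index(v)
    out = {}
    for asg, val in f.items():
        key = asg[:i] + asg[i + 1:]
        out[key] = out.get(key, 0.0) + val
    return out, vars_[:i] + vars_[i + 1:]

# Made-up tables: P(A) and P(B|A) over binary variables coded 0/1.
pA = {(0,): 0.6, (1,): 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
joint, jv = multiply(pA, ['A'], pBgA, ['A', 'B'])
pB, bv = sum_out(joint, jv, 'A')
print(pB)  # P(B=0) = 0.6*0.9 + 0.4*0.2 = 0.62, P(B=1) = 0.38
```

In a full ID solver, utility factors would be combined by maximization at decision nodes as well; the sketch only shows the summation step for chance variables.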

2.
In this paper we discuss how influence diagrams are affected by the presence of fuzziness in chance and value nodes. By modelling fuzzy-valued random variables and utilities in terms of fuzzy random variables, we analyze the statistical rules corresponding to the affected value-preserving transformations, namely chance node removal and decision node removal. Some supporting results on fuzzy random variables introduced in this paper provide the mathematical tools required to formalize the new statistical rules. Finally, an example is included to illustrate the study.

3.
This paper proposes a new approach for decision making under uncertainty based on influence diagrams and possibility theory. The so-called qualitative possibilistic influence diagrams extend standard influence diagrams in order to avoid the difficulties attached to specifying both the probability distributions of chance nodes and the utilities of value nodes. In fact, it is generally easier for experts to quantify dependencies between chance nodes qualitatively via possibility distributions and to provide a preference relation between different consequences. In such a case, possibility theory offers a suitable modelling framework. Different combinations of the quantification of chance and utility nodes yield several kinds of possibilistic influence diagrams. This paper focuses on qualitative ones and proposes an indirect evaluation method based on their transformation into possibilistic networks. The proposed approach is implemented in a possibilistic influence diagram toolbox (PIDT).

4.
This paper develops life annuity pricing with a stochastic representation of mortality and fuzzy quantification of interest rates. We show that modelling the present value of annuities with fuzzy random variables allows quantifying their expected price and the risk resulting from the uncertainty sources considered. First, we describe fuzzy random variables and define some associated measures: the mathematical expectation, the variance, the distribution function and quantiles. Second, we show several ways to estimate the discount rates used to price annuities. Subsequently, the present value of life annuities is modelled with fuzzy random variables. We finally show how an actuary can quantify the price and the risk of a portfolio of annuities when their present value is given by means of fuzzy random variables.

5.
Stochastic Analysis and Applications, 2013, 31(3): 627–645
Abstract

The notions of fuzzy random variables and fuzzy (super) submartingales are introduced. In this paper we provide necessary and sufficient conditions for Doob's decomposition of fuzzy (super) submartingales. Finally, we discuss the decomposition of fuzzy (super) submartingales on R, and give an example showing that not every fuzzy (super) submartingale admits a Doob decomposition.
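For contrast with the fuzzy setting studied in the paper, the classical (real-valued) Doob decomposition being generalized reads: for a submartingale $(X_n)$ adapted to a filtration $(\mathcal{F}_n)$,

```latex
X_n = M_n + A_n, \qquad A_0 = 0, \qquad
A_n - A_{n-1} = \mathbb{E}[X_n \mid \mathcal{F}_{n-1}] - X_{n-1},
```

where $(M_n)$ is a martingale and $(A_n)$ is predictable and non-decreasing. In the classical case this decomposition always exists and is unique; the paper's example shows it can fail for fuzzy (super) submartingales.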

6.
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for representing continuous chance variables in influence diagrams. Also, MTE potentials can be used to approximate utility functions. This paper introduces MTE influence diagrams, which can represent decision problems without restrictions on the relationships between continuous and discrete chance variables, without limitations on the distributions of continuous chance variables, and without limitations on the nature of the utility functions. In MTE influence diagrams, all probability distributions and the joint utility function (or its multiplicative factors) are represented by MTE potentials and decision nodes are assumed to have discrete state spaces. MTE influence diagrams are solved by variable elimination using a fusion algorithm.
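An MTE potential on an interval has the form of a constant plus a weighted sum of exponentials. As a concrete illustration of that functional form only (the coefficients below are made up, not from the paper), evaluating such a potential might look like:

```python
import math

def mte(x, a0, terms, lo, hi):
    """Evaluate a mixture-of-truncated-exponentials potential
    a0 + sum_i a_i * exp(b_i * x) on [lo, hi]; zero outside the interval."""
    if not (lo <= x <= hi):
        return 0.0
    return a0 + sum(a * math.exp(b * x) for a, b in terms)

# Illustrative 2-term MTE on [0, 1] (arbitrary coefficients for the sketch).
val = mte(0.5, 0.2, [(0.3, 1.0), (-0.1, -2.0)], 0.0, 1.0)
print(round(val, 4))  # 0.6578
```

In an actual MTE influence diagram, such pieces are defined per region of the discrete parents' state space, and the fusion algorithm combines and marginalizes them in closed form.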

7.
The main source of complexity problems for large influence diagrams is that the last decisions have intractably large spaces of past information. Usually, this is not a problem when you reach the last decisions; but when calculating optimal policies for the first decisions, you have to consider all possible future information scenarios. This is the curse of knowing that you shall not forget. The usual approach to this problem is to reduce the information by assuming that you do forget something (Nilsson and Lauritzen, 2000, LIMID [1]), or to abstract the information by introducing new nodes (Jensen, 2008) [2]. This paper takes the opposite approach, namely to assume that you will know more in the future than you actually will. We call the approach information enhancement. It consists of reducing the space of future information scenarios by adding information links. We present a systematic way of determining fruitful information links to add.

8.
Multiagent time-critical dynamic decision making is a challenging task in many real-world applications where a trade-off between solution quality and computational tractability is required. In this paper, we present a formal representation for modelling time-critical multiagent dynamic decision problems based on interactive dynamic influence diagrams (I-DIDs). The new representation, called time-critical I-DIDs (TC-IDIDs), supports spatio-temporal abstraction by attaching time indices to nodes, and the model is defined in terms of condensed and deployed forms. The condensed form is a static model of TC-IDIDs and can be expanded into its dynamic version. To facilitate the conversion between the two forms, we exploit object-oriented design to develop flexible and reusable TC-IDIDs. The difficulty in expanding TC-IDIDs is selecting a proper time sequence to index nodes in the condensed form so that the expanded TC-IDIDs can be solved efficiently without compromising the quality of the policy. For this purpose, we propose two methods to build the condensed form of TC-IDIDs. We evaluate the solution quality and time complexity in three well-studied problems and report supporting results.

9.
So far, several concepts of fuzzy random variables and their expected values have appeared in the literature. One of the concepts, defined by Liu and Liu (2003a), is that a fuzzy random variable is a measurable function from a probability space to a collection of fuzzy variables and its expected value is described as a scalar number. Based on these concepts, this paper addresses two processes: the fuzzy random renewal process and the fuzzy random renewal reward process. In the fuzzy random renewal process, the interarrival times are characterized as fuzzy random variables and a fuzzy random elementary renewal theorem on the limit value of the expected renewal rate of the process is presented. In the fuzzy random renewal reward process, both the interarrival times and rewards are depicted as fuzzy random variables and a fuzzy random renewal reward theorem on the limit value of the long-run expected reward per unit time is provided. The results obtained in this paper coincide with those in the stochastic case or the fuzzy case when the fuzzy random variables degenerate to random variables or to fuzzy variables.
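In the degenerate case where the fuzzy random interarrival times reduce to ordinary random variables, the elementary renewal theorem says N(t)/t tends to 1/E[X]. A quick crisp simulation (illustrative only; exponential interarrival times and parameters chosen for the sketch) makes the limit concrete:

```python
import random

def renewal_rate(mean_gap, horizon, seed=0):
    """Simulate a renewal process with exponential interarrival times of
    mean mean_gap and return N(t)/t at t = horizon; by the elementary
    renewal theorem this tends to 1/mean_gap as the horizon grows."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mean_gap)
        if t > horizon:
            break
        n += 1
    return n / horizon

rate = renewal_rate(mean_gap=2.0, horizon=100000.0)
print(rate)  # close to 1/2
```

The fuzzy random theorems in the paper play the analogous role when the interarrival times (and rewards) carry fuzziness on top of randomness.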

10.
Game tree search is the core of most attempts to make computers play games. We present a fairly general theoretical analysis of how leaf evaluation errors influence the value estimation of a game position at the root. By an approach using prime factorization arguments in the ring of polynomials, we show that in this setting the maximum number of leaf-disjoint strategies proving a particular property is a key notion. This number precisely describes the quality of the heuristic game value in terms of the quality of the leaf evaluation heuristics. We extend this model to include random nodes (rolls of a die). Surprisingly, this changes the situation: the number of leaf-disjoint strategies still ensures robustness against leaf evaluation errors, but the converse is not true. An average node may produce additional robustness similar to additional leaf-disjoint strategies. This work extends earlier ones, which deal only with 0/1-valued nodes or exclude randomness. The first author was partially supported (associate member) by the graduate school ‘Effiziente Algorithmen und Mehrskalenmethoden’, Deutsche Forschungsgemeinschaft. The second author was partially supported by the Future and Emerging Technologies programme of the EU under contract numbers IST-1999-14186 (ALCOM-FT) and IST-2001-33116 (FLAGS).

11.
This paper considers the problem of solving Bayesian decision problems with a mixture of continuous and discrete variables. We focus on exact evaluation of linear-quadratic conditional Gaussian influence diagrams (LQCG influence diagrams) with additively decomposing utility functions. Based on new and existing representations of probability and utility potentials, we derive a method for solving LQCG influence diagrams based on variable elimination. We show how the computations performed during evaluation of an LQCG influence diagram can be organized in message passing schemes based on Shenoy–Shafer and Lazy propagation. The proposed architectures are the first for efficient exact solution of LQCG influence diagrams exploiting an additively decomposing utility function.

12.
In this paper we consider a wireless network consisting of various nodes, where transmissions are regulated by the slotted ALOHA protocol. Nodes using the protocol behave autonomously, and decide at random whether to transmit in a particular time slot. Simultaneous transmissions by multiple nodes cause collisions, rendering the transmissions useless. Nodes can avoid collisions by cooperating, for example by exchanging control messages to coordinate their transmissions. We measure the network performance by the long-term average fraction of time slots in which a successful transmission takes place, and we are interested in how to allocate the performance gains obtained from cooperation among the nodes. To this end we define and analyze a cooperative ALOHA game. We show that this type of game is convex and we consider three solution concepts: the core, the Shapley value, and the compromise value. Furthermore, we develop a set of weighted gain splitting (WGS) allocation rules, and show that this set coincides with the core of the game. These WGS allocation rules can be used to provide an alternative characterization of the Shapley value. Finally, we analyze the sensitivity of the cooperative solution concepts with respect to changes in the wireless network.
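The Shapley value mentioned above can be computed for any small coalitional game by averaging each player's marginal contribution over all orderings. The sketch below uses a toy convex game v(S) = |S|², not the ALOHA game from the paper:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value by brute force: average each player's marginal
    contribution v(S ∪ {p}) - v(S) over all player orderings."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

# Toy convex game: v(S) = |S|^2. By symmetry each player gets v(N)/3 = 3.
val = shapley([1, 2, 3], lambda S: len(S) ** 2)
print(val)  # {1: 3.0, 2: 3.0, 3: 3.0}
```

Brute-force enumeration is exponential in the number of players; for the network games in the paper, structural results such as the WGS characterization avoid that enumeration.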

13.
In this paper typical properties of large random Boolean AND/OR formulas are investigated. Such formulas with n variables are viewed as rooted binary trees chosen from the uniform distribution of all rooted binary trees on m nodes, where n is fixed and m tends to infinity. The leaves are labeled by literals and the inner nodes by the connectives AND/OR, both uniformly at random. In extending the investigation to infinite trees, we obtain a close relation between the formula size complexity of any given Boolean function f and the probability of its occurrence under this distribution, i.e., the negative logarithm of this probability differs from the formula size complexity of f only by a polynomial factor. © 1997 John Wiley & Sons, Inc. Random Struct. Alg., 10, 337–351 (1997)
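A simple way to experiment with random AND/OR formulas is to grow and evaluate them programmatically. The generator below uses a naive recursive scheme, not the exact uniform distribution over binary trees studied in the paper:

```python
import random

def random_formula(depth, n_vars, rng):
    """Grow a random AND/OR formula; leaves are literals x_i or NOT x_i.
    (A simple recursive sketch, not the paper's uniform-tree model.)"""
    if depth == 0 or rng.random() < 0.3:
        return ('lit', rng.randrange(n_vars), rng.random() < 0.5)
    op = rng.choice(['and', 'or'])
    return (op, random_formula(depth - 1, n_vars, rng),
                random_formula(depth - 1, n_vars, rng))

def evaluate(node, assignment):
    """Evaluate a formula tree under a truth assignment (list of bools)."""
    if node[0] == 'lit':
        _, i, negated = node
        return assignment[i] != negated
    op, left, right = node
    l, r = evaluate(left, assignment), evaluate(right, assignment)
    return (l and r) if op == 'and' else (l or r)

rng = random.Random(42)
f = random_formula(4, 3, rng)
print(evaluate(f, [True, False, True]))
```

Sampling many such formulas and tabulating which Boolean functions they compute is one way to probe, empirically, the occurrence probabilities that the paper relates to formula size complexity.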

14.
Influence diagrams and decision trees represent the two most common frameworks for specifying and solving decision problems. As modeling languages, both of these frameworks require that the decision analyst specify all possible sequences of observations and decisions (in influence diagrams, this requirement corresponds to the constraint that the decisions be temporally linearly ordered). Recently, the unconstrained influence diagram was proposed to address this drawback. In this framework, we may have a partial ordering of the decisions, and a solution to the decision problem therefore consists not only of a decision policy for the various decisions, but also of a conditional specification of what to do next. Relative to the complexity of solving an influence diagram, finding a solution to an unconstrained influence diagram may be computationally very demanding w.r.t. both time and space. Hence, there is a need for efficient algorithms that can deal with (and take advantage of) the idiosyncrasies of the language. In this paper we propose two such solution algorithms. One resembles the variable elimination technique from influence diagrams, whereas the other is based on conditioning and supports any-space inference. Finally, we present an empirical comparison of the proposed methods.

15.
In this study we propose a new super-parametric convex model, giving its mathematical definition, in which an effective minimum-volume method is constructed to reasonably envelop limited experimental samples by selecting a proper super parameter. Two novel reliability calculation algorithms, the nominal value method and the advanced nominal value method, are proposed to evaluate the non-probabilistic reliability index. To investigate the influence of the non-probabilistic convex model type on non-probabilistic reliability-based design optimization, an effective approach based on the advanced nominal value method is further developed. Four examples, including two numerical examples and two engineering applications, are tested to demonstrate the superiority of the proposed non-probabilistic reliability analysis and optimization technique.

16.
The method of algebraic-geometric quantization is used to find an explicit expression for the measure on the (super)moduli space for all possible interaction diagrams of open and closed (super)strings. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 189, pp. 122–145, 1991.

17.
It is well-known how the representation theory of the Lie algebra sl(2, ℂ) can be used to prove that certain sequences of integers are unimodal and that certain posets have the Sperner property. Here an analogous theory is developed for the Lie superalgebra osp(1,2). We obtain new classes of unimodal sequences (described in terms of cycle index polynomials) and a new class of posets (the “super analogue” of the lattice L(m,n) of Young diagrams contained in an m × n rectangle) which have the Sperner property.

18.
The rotation correspondence is a map that sends the set of plane trees onto the set of binary trees. In this paper, we first show that when n goes to +∞, the image under the rotation correspondence of a uniformly chosen random plane tree τ with n nodes is close to 2τ (in a sense to be defined). The second part of the paper is devoted to the right and left depths of nodes in binary trees. We show that the empirical measure (suitably normalized) associated with the difference of the right-depth and left-depth processes converges to the integrated super-Brownian excursion. © 2004 Wiley Periodicals, Inc. Random Struct. Alg., 2004
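The rotation correspondence referred to here is the classical first-child/next-sibling bijection between forests of plane trees and binary trees. A minimal sketch, representing a plane tree as a nested list of its child subtrees, might look like:

```python
def forest_to_binary(forest):
    """First-child / next-sibling (rotation) correspondence: a forest of
    plane trees (each tree given as the list of its child subtrees) maps
    to a binary tree (left, right), where left encodes the first tree's
    children and right encodes the remaining sibling trees; None = empty."""
    if not forest:
        return None
    first, rest = forest[0], forest[1:]
    return (forest_to_binary(first), forest_to_binary(rest))

# A root with three leaf children becomes a right "spine" of length 3.
print(forest_to_binary([[], [], []]))  # (None, (None, (None, None)))
```

Under this map, a child edge in the plane tree becomes a left edge and a sibling link becomes a right edge, which is why a node's depth roughly doubles, matching the paper's 2τ statement.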

19.
In this paper the idea of design exploration in the context of structural design sensitivity analysis considering elastoplastic material behaviour is outlined. Sensitivity information involves the influence of history dependent material behaviour and is obtained analytically by means of a variational approach. Applying singular value decomposition (SVD) to response sensitivities provides deep insight into sensitivity information and can be used to identify design changes with major influence on structural response. With this additional information, it is possible to refine an optimisation problem and reduce its complexity. (© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

20.
This paper deals with multicriterion reliability-oriented optimization of truss structures by stochastic programming. A deterministic approach to structural optimization appears insufficient when the loads acting upon a structure and the material properties of its elements have a random nature. The aim of this paper is to show the importance of random modelling of the structure and the influence of random parameters on the optimal solution. Usually, the quality of an engineering structural design is considered in terms of displacements, total cost and reliability. Therefore, the optimization problem has been formulated and then solved in order to show the interaction between the displacement and total-cost objective functions. The examples of 4-bar and 25-bar truss structures illustrate our considerations. The results of the optimization are presented in the form of diagrams.
