Similar Documents
20 similar documents found
1.
The Goodman–Nguyen relation is a partial order generalising the implication (inclusion) relation to conditional events. As such, with precise probabilities it both induces an agreeing probability ordering and is a key tool in a certain common extension problem. Most previous work involving this relation is concerned with either conditional event algebras or precise probabilities. We investigate here its role within imprecise probability theory, first in the framework of conditional events and then proposing a generalisation of the Goodman–Nguyen relation to conditional gambles. It turns out that this relation induces an agreeing ordering on coherent or C-convex conditional imprecise previsions. In a standard inferential problem with conditional events, it lets us determine the natural extension, as well as an upper extension. With conditional gambles, it is useful in deriving a number of inferential inequalities.
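For reference, the relation is usually stated as follows (a standard textbook formulation in our notation, not quoted from the paper): a conditional event A|B precedes C|D in the Goodman–Nguyen sense when

```latex
% Goodman–Nguyen relation on conditional events (standard form, our notation)
(A \mid B) \;\leq_{GN}\; (C \mid D)
\quad\Longleftrightarrow\quad
A \cap B \subseteq C \cap D
\;\text{ and }\;
C^{c} \cap D \subseteq A^{c} \cap B .
```

For unconditional events (B and D equal to the sure event) this reduces to A ⊆ C, i.e. ordinary implication, which is the sense in which the relation generalises inclusion.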

2.
In almost all realistic circumstances, such as health risk assessment and uncertainty analysis of atmospheric dispersion, it is essential to include all available information in the modelling. The parameters associated with a particular model may be subject to different kinds of variability, imprecision and uncertainty. Often, the available information is interpreted in a probabilistic sense, and probability theory is a well-established tool for measuring such variability. However, not all information, data or model parameters affected by variability, imprecision and uncertainty can be handled by traditional probability theory. Uncertainty or imprecision may arise from incomplete information or data, from measurement errors, or from data obtained by expert judgement or subjective interpretation of the available information. Model parameters and data may thus be affected by subjective uncertainty, which traditional probability theory is ill-suited to represent. Possibility theory and fuzzy set theory are branches of mathematics used to describe parameters with insufficient or vague knowledge. In this paper, an attempt is made to combine probabilistic and possibilistic knowledge and to quantify the resulting uncertainty. The paper describes an algorithm for combining a probability distribution with an interval-valued fuzzy number and applies it to environmental risk modelling in a case study. The primary aim of this paper is to demonstrate the proposed uncertainty propagation method. Computer code for the method was written in MATLAB.
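The abstract does not reproduce the combination algorithm itself, but a common scheme for mixing a probabilistic input with a fuzzy (possibilistic) parameter is hybrid Monte Carlo/α-cut propagation. The Python sketch below is a generic illustration of that scheme, not the paper's MATLAB code: the model function f, the lognormal input and the triangular fuzzy number are all made up, and f is assumed monotone in the fuzzy parameter so that evaluating α-cut endpoints suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, a):
    # Hypothetical risk model: dose = concentration x times uptake factor a.
    return x * a

# Probabilistic input: lognormal concentration (illustrative parameters).
x_samples = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

# Fuzzy input: triangular fuzzy number (low, mode, high) for the factor a.
low, mode, high = 0.8, 1.0, 1.5

# For each alpha-cut of a, propagate the interval [a_lo, a_hi] through f.
# Because f is assumed monotone in a, the interval endpoints are enough.
for alpha in np.linspace(0.0, 1.0, 11):
    a_lo = low + alpha * (mode - low)
    a_hi = high - alpha * (high - mode)
    lo_out = f(x_samples, a_lo)   # lower envelope of outputs at this alpha
    hi_out = f(x_samples, a_hi)   # upper envelope of outputs at this alpha
    # Report an interval for the 95th percentile of the output at this alpha.
    print(f"alpha={alpha:.1f}: 95th pct in "
          f"[{np.percentile(lo_out, 95):.2f}, {np.percentile(hi_out, 95):.2f}]")
```

The result is an interval-valued (fuzzy) estimate of each output statistic, indexed by the membership level alpha, which is the general shape of output such hybrid methods produce.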

3.
The survival probability in finite time period in fully discrete risk model
This paper first discusses the probabilities of the following events: that the insurance company survives to any fixed time k, and that the surplus at time k equals x ≥ 1. Formulas for calculating these probabilities are derived through analytical and probabilistic arguments, respectively. Finally, other probability laws relating to risk are determined from the probabilities above.
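The snippet above does not state the recursion, but under common conventions for the fully discrete model (premium income of 1 per period, i.i.d. integer-valued claims, ruin as soon as the surplus becomes negative) the finite-horizon survival probability satisfies a simple dynamic program. The sketch below illustrates it under those assumed conventions, which may differ from the paper's; the claim distribution is made up.

```python
from functools import lru_cache

# Illustrative claim distribution on {0, 1, 2} (made-up numbers).
claim_pmf = {0: 0.6, 1: 0.3, 2: 0.1}

@lru_cache(maxsize=None)
def survival(u, n):
    """P(surplus stays >= 0 for n more periods | current surplus u).

    Convention assumed here: premium income 1 per period, ruin as soon
    as the surplus becomes negative; the paper's conventions may differ.
    """
    if u < 0:
        return 0.0          # already ruined
    if n == 0:
        return 1.0          # horizon reached without ruin
    return sum(p * survival(u + 1 - x, n - 1) for x, p in claim_pmf.items())

print(survival(2, 10))  # e.g. survival over 10 periods from surplus 2
```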

4.
Discrete time Markov chains with interval probabilities
The parameters of Markov chain models are often not known precisely. Instead of ignoring this problem, a better way to cope with it is to incorporate the imprecision into the models. This has become possible with the development of models of imprecise probabilities, such as the interval probability model. In this paper we discuss some modelling approaches which range from simple probability intervals to the general interval probability models and further to the models allowing completely general convex sets of probabilities. The basic idea is that precisely known initial distributions and transition matrices are replaced by imprecise ones, which effectively means that sets of possible candidates are considered. Consequently, sets of possible results are obtained and represented using similar imprecise probability models. We first set up the model and then show how to perform calculations of the distributions corresponding to the consecutive steps of a Markov chain. We present several approaches to such calculations and compare them with respect to the accuracy of the results. Next we consider a generalisation of the concept of regularity and study the convergence of regular imprecise Markov chains. We also give some numerical examples to compare different approaches to calculations of the sets of probabilities.
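To make the "sets of possible results" concrete: with simple probability intervals on each row of the transition matrix and a precisely known current distribution, tight one-step bounds can be computed row by row. The sketch below assumes reachable (coherent) interval rows and made-up numbers; it illustrates only the simplest of the calculation schemes the abstract alludes to. Iterating over several steps requires optimising per event or gamble (a lower-expectation operator) rather than naive interval arithmetic.

```python
import numpy as np

# Interval transition matrix: L[i, j] <= P(j | i) <= U[i, j] (made-up numbers).
L = np.array([[0.1, 0.4, 0.2],
              [0.3, 0.1, 0.3],
              [0.2, 0.2, 0.3]])
U = np.array([[0.3, 0.6, 0.4],
              [0.5, 0.3, 0.5],
              [0.4, 0.4, 0.5]])

pi = np.array([0.5, 0.3, 0.2])  # precisely known current distribution

def next_step_bounds(pi, L, U):
    """Tight bounds on P(X_{t+1} = j), assuming reachable interval rows."""
    n = L.shape[0]
    lo, hi = np.empty(n), np.empty(n)
    for j in range(n):
        # Smallest/largest feasible q_ij within row i's credal set:
        q_min = np.maximum(L[:, j], 1.0 - (U.sum(axis=1) - U[:, j]))
        q_max = np.minimum(U[:, j], 1.0 - (L.sum(axis=1) - L[:, j]))
        lo[j] = pi @ q_min
        hi[j] = pi @ q_max
    return lo, hi

lo, hi = next_step_bounds(pi, L, U)
print("lower:", lo.round(3), "upper:", hi.round(3))
```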

5.
In real-life decision analysis, the probabilities and utilities of consequences are in general vague and imprecise. One way to model an imprecise probability is to represent it by the interval between the lowest and the highest possible probability. However, this approach has disadvantages; one is that when an event has several possible outcomes, the distributions of belief in the different probabilities are heavily concentrated toward their centres of mass, meaning that much of the information in the original intervals is lost. Representing an imprecise probability by the distribution's centre of mass therefore gives, in practice, much the same result as using an interval, while a single number is computationally easier to handle than an interval and avoids problems such as overlapping intervals. We demonstrate why second-order calculations add information when handling imprecise representations, as in decision trees or probabilistic networks. We suggest a measure of belief density for such intervals, and discuss properties applicable to general distributions. The results herein apply also to approaches which do not explicitly deal with second-order distributions, instead using only first-order concepts such as upper and lower bounds.
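The concentration effect described above is easy to reproduce: place a uniform second-order distribution over the probability simplex, restrict it to given first-order intervals, and inspect where the surviving mass sits within each interval. The sketch below, with made-up intervals, is our illustration of the phenomenon, not the authors' belief-density measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# First-order probability intervals for a 3-outcome event (made-up numbers).
lower = np.array([0.1, 0.2, 0.1])
upper = np.array([0.5, 0.6, 0.5])

# Uniform second-order distribution over the simplex = Dirichlet(1, 1, 1);
# keep only points respecting the intervals (rejection sampling).
samples = rng.dirichlet(np.ones(3), size=200_000)
ok = ((samples >= lower) & (samples <= upper)).all(axis=1)
kept = samples[ok]

# Within each interval, the second-order mass clusters away from the endpoints:
for i in range(3):
    q = np.percentile(kept[:, i], [5, 50, 95])
    print(f"p{i}: interval [{lower[i]}, {upper[i]}], "
          f"5%/50%/95% of mass at {q.round(3)}")
```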

6.
The effectiveness of the THESEUS multi-criteria sorting method is characterized here by (i) its capacity for suggesting precise and appropriate assignments; (ii) the probability of suggesting imprecise assignments; and (iii) the probability of suggesting incorrect assignments. We study how these features are influenced by the number of criteria and categories, the cardinality of the reference set, and the level of decision-maker consistency. We present a theoretical characterization and a wide range of experimental results that confirm and complement the formal analysis. The proposed way of analyzing effectiveness may be applied to other multi-criteria sorting methods.

7.
We present a methodology for aggregating, in a coherent manner, conditional stress losses in a trading or banking book. The approach bypasses the specification of unconditional probabilities of the individual stress events, and a linear programming approach ensures that the (subjective or frequentist) conditional probabilities chosen by the risk manager are internally consistent. The admissibility requirement greatly reduces the degree of arbitrariness in the conditional probability matrix when this is assigned subjectively. The approach can be used to address the requirements of the regulators on the Instantaneous Risk Charge.

8.
This paper addresses the problem of exchanging uncertainty assessments in multi-agent systems. Since each agent may be entirely unaware of the internal representations used by its partners, a common interchange format is needed. We analyze the case of an interchange format defined by means of imprecise probabilities, pointing out the reasons for this choice. A core problem with the interchange format concerns transformations from imprecise probabilities into other formalisms (in particular, precise probabilities, possibilities and belief functions). We discuss this so-far little-investigated question, analyzing how previous proposals, mostly regarding special instances of imprecise probabilities, would fit into this problem. We then propose some general transformation procedures, which also take into account the fact that information can be partial, i.e. may concern an arbitrary (finite) set of events.
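As a toy illustration of one direction of such transformations (imprecise to precise), a naive candidate is to take interval midpoints on the singletons and renormalise. The sketch below shows only this simple heuristic to make the problem concrete; the procedures proposed in the paper are more general and also handle partial information.

```python
import numpy as np

# Lower/upper probabilities on singleton events (made-up numbers).
lower = np.array([0.1, 0.2, 0.3])
upper = np.array([0.4, 0.5, 0.6])

def midpoint_transform(lower, upper):
    """Naive imprecise-to-precise transformation: normalised midpoints."""
    mid = (lower + upper) / 2.0
    return mid / mid.sum()

print(midpoint_transform(lower, upper))  # a single precise distribution
```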

9.
We explore generalizations of the pari-mutuel model (PMM), a formalization of an intuitive way of assessing an upper probability from a precise one. We discuss a naive extension of the PMM considered in insurance, compare the PMM with a related model, the Total Variation Model, and generalize the natural extension of the PMM introduced by P. Walley, together with other related formulae. The results are then given a risk-measurement interpretation: in particular, it is shown that a known risk measure, Tail Value at Risk (TVaR), is derived from the PMM, and that a coherent risk measure more general than TVaR is derived from its imprecise version. We further analyze the conditions for coherence of a related risk measure, Conditional Tail Expectation. Conditioning with the PMM is also investigated: we compute its natural extension, characterise its dilation, and study the weaker concept of imprecision increase.
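For reference, the pari-mutuel upper and lower probabilities induced by a precise probability P0 with loading factor δ > 0 are standardly written as follows (standard form from the imprecise-probability literature; notation is ours):

```latex
% Pari-mutuel model generated by a precise P_0 and loading factor \delta > 0
\overline{P}(A) = \min\{(1+\delta)\,P_0(A),\; 1\},
\qquad
\underline{P}(A) = \max\{(1+\delta)\,P_0(A) - \delta,\; 0\}.
```

The TVaR link mentioned above arises when such set functions are extended from events to loss gambles.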

10.
We discuss prevalence estimation under misclassification. That is, we are concerned with estimating the proportion of units having a certain property (being diseased, showing deviant behavior, etc.) from a random sample when the true variable of interest cannot be observed, but a related proxy variable (e.g. the outcome of a diagnostic test) is available. If the misclassification probabilities were known, unbiased prevalence estimation would be possible. We focus on the frequent case where the misclassification probabilities are unknown but two independent replicate measurements have been taken. While in the traditional precise probabilistic framework a correction from this information is not possible due to non-identifiability, the imprecise-probability methodology of partial identification and systematic sensitivity analysis allows us to obtain valuable insights into the possible bias due to misclassification. We derive tight identification intervals and corresponding confidence regions for the true prevalence, based on the often-reported kappa coefficient, which condenses the information in the replicates by measuring agreement between the two measurements. Our method is illustrated in several theoretical scenarios and in an example from oral health on the prevalence of caries in children.
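To fix the ingredients, the sketch below computes the observed prevalences and Cohen's kappa from two replicate binary measurements; the 2×2 agreement table is made up, and the paper's identification intervals for the true prevalence are constructed on top of such summary quantities.

```python
import numpy as np

# Agreement table of two replicate binary tests on the same subjects
# (made-up counts): rows = test 1 (+/-), columns = test 2 (+/-).
table = np.array([[40, 10],
                  [ 8, 42]], dtype=float)
n = table.sum()

p_pos1 = table[0].sum() / n          # observed prevalence, replicate 1
p_pos2 = table[:, 0].sum() / n       # observed prevalence, replicate 2

p_obs = np.trace(table) / n          # observed agreement
p_exp = p_pos1 * p_pos2 + (1 - p_pos1) * (1 - p_pos2)  # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)                  # Cohen's kappa

print(f"observed prevalences: {p_pos1:.3f}, {p_pos2:.3f}, kappa = {kappa:.3f}")
```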

11.
The combination of mathematical models and uncertainty measures can be applied in data mining for diverse objectives, with the final aim of supporting decision making. The maximum entropy function is an excellent measure of uncertainty when the information is represented by a mathematical model based on imprecise probabilities. In this paper, we present algorithms to obtain the maximum entropy value when the available information is represented by a new model based on imprecise probabilities, the nonparametric predictive inference model for multinomial data (NPI-M); the resulting optimisation is a type of entropy-linear program. To reduce the complexity of the model, we prove that the NPI-M lower and upper probabilities for any general event can be expressed as a combination of the lower and upper probabilities for the singleton events, and that this model cannot be associated with a closed polyhedral set of probabilities. An algorithm to obtain the maximum entropy probability distribution on the set associated with the NPI-M is presented. We also consider a model which uses the closed and convex set of probability distributions generated by the NPI-M singleton probabilities, a closed polyhedral set; we call this model A-NPI-M. A-NPI-M can be seen as an approximation of the NPI-M, and this approximation is simpler to use because it does not require the set of constraints associated with the exact model.
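For the polyhedral approximation A-NPI-M, the maximum-entropy problem reduces to the generic convex program "maximise entropy subject to singleton bounds and normalisation", which an off-the-shelf solver handles. The sketch below uses made-up bounds and is not the paper's dedicated NPI-M algorithm, which must cope with a non-polyhedral set.

```python
import numpy as np
from scipy.optimize import minimize

# Singleton lower/upper probabilities (made-up, mutually consistent bounds).
l = np.array([0.10, 0.15, 0.05, 0.20])
u = np.array([0.40, 0.45, 0.35, 0.50])

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)      # guard the log at the boundary
    return float(np.sum(p * np.log(p)))

mid = (l + u) / 2.0                 # feasible starting point after normalising
res = minimize(
    neg_entropy,
    x0=mid / mid.sum(),
    bounds=list(zip(l, u)),
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    method="SLSQP",
)
print("max-entropy distribution:", res.x.round(4))
```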

12.
Empirical studies have demonstrated that cardinal utility functions assessed via gamble-based methods are often incoherent because of the probability and certainty effects. These effects are caused by apparent risk attitudes different from those admissible within the linear (expected) utility theory. The incoherences can also be accentuated by the effects of chaining and serial positioning of responses. To filter out these effects and obtain an unbiased measurement of the strength of preference, and a simultaneous measurement of risk attitude, we devised the independent-gamble, nonlinear-inference (IGNI) method: the utility function of outcomes and the risk function of probabilities are estimated jointly from assessed certainty equivalents of independent gambles by using a nonlinear utility theory for inference. The method contrasts with all popular utility assessment techniques in that it estimates a cardinal function in the two-dimensional space of outcomes and probabilities. Hence, it allows us to obtain novel insights into the nature of utility functions and the probability effect. Both are illustrated by empirical results for fifty-four subjects.
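A stripped-down version of such joint estimation can be sketched as follows: assume power forms u(x) = x^a for the utility of outcomes and w(p) = p^b for the risk function of probabilities, and fit both exponents to certainty equivalents of independent gambles via w(p)·u(x) = u(CE). The functional forms and the synthetic data below are our assumptions for illustration; the IGNI method's actual inference procedure is in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic assessments: gambles "win x with probability p, else 0"
# and their elicited certainty equivalents (made-up data).
x  = np.array([100.0, 100.0, 200.0, 200.0, 400.0, 400.0])
p  = np.array([0.25,  0.75,  0.25,  0.75,  0.25,  0.75])
ce = np.array([18.0,  68.0,  35.0, 140.0,  70.0, 280.0])

def model(xp, a, b):
    x, p = xp
    # w(p) * u(x) = u(CE)  =>  CE = (p**b * x**a) ** (1/a)
    return (p**b * x**a) ** (1.0 / a)

(a_hat, b_hat), _ = curve_fit(model, (x, p), ce, p0=(1.0, 1.0),
                              bounds=(0.1, 5.0))
print(f"utility exponent a = {a_hat:.3f}, probability exponent b = {b_hat:.3f}")
```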

13.
We consider probabilistic meanings of some numerical characteristics of single birth processes. The probabilities of some events, such as the extinction probability and the returning probability, are represented in terms of these numerical characteristics. Two examples are presented to illustrate the value of the results.

14.
We study two basic problems of probabilistic reasoning: the probabilistic logic problem and the probabilistic entailment problem. The first can be defined as follows: given a set of logical sentences and probabilities that these sentences are true, determine whether these probabilities are consistent. Given a consistent set of logical sentences and probabilities, the probabilistic entailment problem consists in determining the range of possible values of the probability associated with additional sentences while maintaining a consistent set of sentences and probabilities. This paper proposes a general approach based on an anytime deduction method that allows the reasoning to be followed both when checking consistency for the probabilistic logic problem and when determining the probability intervals for the probabilistic entailment problem. Considering a series of subsets of sentences and probabilities, the approach proceeds by computing increasingly narrow probability intervals that either expose a contradiction or contain the tightest entailed probability interval. Computational experiments compare the proposed anytime deduction method, called ad-psat, with an exact one, psatcol, which uses column generation techniques, with respect to both the range of the probability intervals and the computing times.
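The LP formulation underlying both problems (Boole's, revived by Nilsson) is compact enough to show. The sketch below checks consistency and computes the entailed probability interval of a target sentence by enumerating possible worlds explicitly; the toy sentences are made up, and practical solvers such as psatcol avoid the exponential enumeration via column generation.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy knowledge base over variables (a, b): P(a) = 0.7, P(a -> b) = 0.8.
sentences = [
    (lambda a, b: a,            0.7),
    (lambda a, b: (not a) or b, 0.8),
]
target = lambda a, b: b   # query: the entailed range of P(b)

worlds = list(itertools.product([False, True], repeat=2))

# Equality constraints: each sentence's probability, plus total mass = 1.
A_eq = np.array([[1.0 if s(*w) else 0.0 for w in worlds]
                 for s, _ in sentences] + [[1.0] * len(worlds)])
b_eq = np.array([p for _, p in sentences] + [1.0])
c = np.array([1.0 if target(*w) else 0.0 for w in worlds])

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(worlds))
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(worlds))
if lo.success and hi.success:
    print(f"consistent; entailed P(b) in [{lo.fun:.3f}, {-hi.fun:.3f}]")
else:
    print("inconsistent assessments")
```

On this toy instance the assessments are consistent and the entailed interval is P(b) ∈ [0.5, 0.8].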

15.
We justify and discuss expressions for joint lower and upper expectations in imprecise probability trees, in terms of the sub- and supermartingales that can be associated with such trees. These imprecise probability trees can be seen as discrete-time stochastic processes with finite state sets and transition probabilities that are imprecise, in the sense that they are only known to belong to some convex closed set of probability measures. We derive various properties for their joint lower and upper expectations, and in particular a law of iterated expectations. We then focus on the special case of imprecise Markov chains, investigate their Markov and stationarity properties, and use these, by way of an example, to derive a system of non-linear equations for lower and upper expected transition and return times. Most importantly, we prove a game-theoretic version of the strong law of large numbers for submartingale differences in imprecise probability trees, and use this to derive point-wise ergodic theorems for imprecise Markov chains.
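The law of iterated (lower) expectations mentioned above is what makes backward recursion through such trees legitimate: each node's subtree can be replaced by its local lower expectation. A minimal sketch follows, with each node's credal set given as a finite list of extreme transition distributions; the tree and numbers are made up.

```python
# Each internal node carries its children plus a credal set, given here by
# the extreme points of its set of transition probability mass functions.
# Leaves carry the gamble's payoff.

def lower_expectation(node):
    """Backward recursion: joint E_lower via iterated lower expectations."""
    if "payoff" in node:                      # leaf
        return node["payoff"]
    child_values = [lower_expectation(c) for c in node["children"]]
    # Lower envelope over the node's extreme transition distributions.
    return min(sum(p * v for p, v in zip(dist, child_values))
               for dist in node["extreme_dists"])

tree = {
    "extreme_dists": [(0.4, 0.6), (0.6, 0.4)],     # imprecise root branching
    "children": [
        {"extreme_dists": [(0.5, 0.5), (0.7, 0.3)],
         "children": [{"payoff": 1.0}, {"payoff": 0.0}]},
        {"payoff": 0.25},
    ],
}
print(lower_expectation(tree))  # 0.35 for this made-up tree
```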

16.
We review several of de Finetti’s fundamental contributions that have played, and continue to play, an important role in the development of imprecise probability research. We also discuss de Finetti’s few, mostly critical, remarks about the prospects for a theory of imprecise probabilities, given the limited development of imprecise probability theory as it was known to him.

17.
We present and analyze a generalization of the standard decision analysis model of sequential decision making under risk. The decision tree is assumed given and all probabilities are assumed to be known precisely. Utility values are assumed to be affine in an imprecisely known parameter. The affine form is sufficiently general to allow importance weights, or the utility values themselves, to be represented by the imprecise parameter. Parameter imprecision is described by set inclusion. A relation on all available alternatives is assumed given for each decision node. The intent of each (not necessarily complete) relation is to model the decision maker's directly expressed preferences among the available alternatives at the associated decision node. A numerical procedure is developed to determine the set of all strategies that may be optimal and the corresponding set of all possible parameter values. An example illustrates the procedure.

18.
19.
The column generation approach to large-scale linear programming is extended to the mixed-integer case. Two general algorithms, a dual and a primal one, are presented. Both involve finding k-best solutions to combinatorial optimization subproblems, and algorithms for these subproblems must be tailored to each specific application. Their use is illustrated by applying them to a new combinatorial optimization problem with applications in Artificial Intelligence: Probabilistic Maximum Satisfiability. This problem is defined as follows. Consider a set of logical sentences together with probabilities that they are true, and assume this set is not satisfiable in the probabilistic sense, i.e., there is no probability distribution on the set of possible worlds (truth assignments to the sentences corresponding to at least one truth assignment to the logical variables they contain) such that, for each sentence, the sum of the probabilities of the possible worlds in which it is true equals its probability of being true. The problem is then to determine a minimum set of sentences to delete in order to make the remaining set satisfiable. Computational experience with both algorithms is reported.
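A brute-force version of the problem is easy to state in code: test probabilistic satisfiability of every subset obtained by deleting k sentences, for increasing k, with a feasibility LP over possible worlds. The toy instance below is deliberately inconsistent; the paper's column-generation algorithms exist precisely because this enumeration explodes combinatorially.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def psat_feasible(sentences, n_vars):
    """Check probabilistic satisfiability by LP over all possible worlds."""
    worlds = list(itertools.product([False, True], repeat=n_vars))
    A_eq = np.array([[1.0 if s(*w) else 0.0 for w in worlds]
                     for s, _ in sentences] + [[1.0] * len(worlds)])
    b_eq = np.array([p for _, p in sentences] + [1.0])
    res = linprog(np.zeros(len(worlds)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(worlds))
    return res.success

# Deliberately inconsistent toy instance over one variable a.
sentences = [
    (lambda a: a,     0.9),    # P(a) = 0.9
    (lambda a: not a, 0.9),    # P(not a) = 0.9 -- contradicts the first
    (lambda a: a,     0.85),   # P(a) = 0.85 -- contradicts the first too
]

for k in range(len(sentences) + 1):
    for keep in itertools.combinations(range(len(sentences)),
                                       len(sentences) - k):
        if psat_feasible([sentences[i] for i in keep], n_vars=1):
            print(f"delete {k} sentence(s); keep indices {keep}")
            raise SystemExit
```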

20.
A new computation method for frequentist p values and Bayesian posterior probabilities based on the bootstrap probability is discussed for the multivariate normal model with unknown expectation parameter vector. The null hypothesis is represented as an arbitrarily shaped region of the parameter vector. We introduce new functional forms for the scaling law of the bootstrap probability so that the multiscale bootstrap method, which was designed for one-sided tests, can also compute confidence measures of two-sided tests, extending its applicability to a wider class of hypotheses. Parameter estimation for the scaling law is improved by the two-step multiscale bootstrap and by including higher-order terms. Model selection is important not only as a motivating application of our method, but also as an essential ingredient in it. A compromise between the frequentist and Bayesian approaches is attempted by showing that the Bayesian posterior probability with a noninformative prior can be interpreted as a frequentist p value of a “zero-sided” test.
