Similar documents
20 similar documents retrieved (search time: 375 ms)
1.
This paper addresses the problem of exchanging uncertainty assessments in multi-agent systems. Since it is assumed that each agent may be completely unaware of the internal representations used by its partners, a common interchange format is needed. We analyze the case of an interchange format defined by means of imprecise probabilities, pointing out the reasons for this choice. A core problem with the interchange format concerns transformations from imprecise probabilities into other formalisms (in particular, precise probabilities, possibilities, belief functions). We discuss this so-far little-investigated question, analyzing how previous proposals, mostly regarding special instances of imprecise probabilities, would fit into this problem. We then propose some general transformation procedures, which also take account of the fact that information can be partial, i.e. may concern an arbitrary (finite) set of events.
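A minimal illustration of one such transformation, with invented events and bounds: an interval-valued assessment on the singletons is collapsed into a precise probability by normalising the interval midpoints. This is a generic sketch, not the transformation procedures proposed in the paper, which also handle partial information over arbitrary sets of events.

    # Hedged sketch (not the paper's procedure): collapsing interval-valued
    # probabilities on singletons into one precise distribution by normalising
    # the interval midpoints.  Event names and bounds are invented.
    lower = {"a": 0.1, "b": 0.2, "c": 0.3}
    upper = {"a": 0.4, "b": 0.5, "c": 0.6}

    midpoints = {e: (lower[e] + upper[e]) / 2 for e in lower}
    total = sum(midpoints.values())
    precise = {e: m / total for e, m in midpoints.items()}

    print(precise)  # {'a': 0.238..., 'b': 0.333..., 'c': 0.428...}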

2.
The combination of mathematical models and uncertainty measures can be applied in the area of data mining for diverse objectives, with the final aim of supporting decision making. The maximum entropy function is an excellent measure of uncertainty when the information is represented by a mathematical model based on imprecise probabilities. In this paper, we present algorithms to obtain the maximum entropy value when the available information is represented by a new model based on imprecise probabilities: the nonparametric predictive inference model for multinomial data (NPI-M), which gives rise to a type of entropy-linear program. To reduce the complexity of the model, we prove that the NPI-M lower and upper probabilities for any general event can be expressed as a combination of the lower and upper probabilities for the singleton events, and that this model cannot be associated with a closed polyhedral set of probabilities. An algorithm to obtain the maximum entropy probability distribution on the set associated with NPI-M is presented. We also consider a model which uses the closed and convex set of probability distributions generated by the NPI-M singleton probabilities, a closed polyhedral set. We call this model A-NPI-M; it can be seen as an approximation of NPI-M that is simpler to use, because it is not necessary to consider the set of constraints associated with the exact model.
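A credal set of the A-NPI-M kind (interval bounds on the singleton probabilities) lends itself to a compact maximum-entropy computation. The sketch below solves it numerically with a generic convex optimiser and invented bounds; it is not the dedicated algorithm developed in the paper.

    # Hedged sketch: maximum-entropy distribution over a credal set defined by
    # lower/upper bounds on singleton probabilities (invented numbers).
    import numpy as np
    from scipy.optimize import minimize

    lower = np.array([0.1, 0.2, 0.1, 0.0])
    upper = np.array([0.5, 0.6, 0.4, 0.3])

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)      # avoid log(0)
        return np.sum(p * np.log(p))

    res = minimize(
        neg_entropy,
        x0=(lower + upper) / 2,
        method="SLSQP",
        bounds=list(zip(lower, upper)),
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    )
    print(res.x, -res.fun)   # max-entropy distribution and its entropy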

3.
In imprecise probability theories, independence modeling and computational tractability are two important issues. The former is essential for working with multiple variables and multivariate spaces, while the latter is essential in practical applications. When lower probabilities are used to model uncertainty about the value assumed by a variable, satisfying the property of 2-monotonicity decreases the computational burden of inference, hence addressing the latter issue. In the first part, this paper investigates whether the joint uncertainty obtained by the main existing notions of independence preserves the 2-monotonicity of the marginal models. It is shown that this is usually not the case, except for the formal extension of random set independence to 2-monotone lower probabilities. The second part of the paper explores the properties and benefits of this extension within the setting of lower probabilities.
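For intuition about the property at stake, the following sketch brute-forces the 2-monotonicity check P(A ∪ B) + P(A ∩ B) ≥ P(A) + P(B) on a toy lower probability over a three-element frame; the lower probability itself is invented and only meant as an example of a 2-monotone set function.

    # Hedged sketch: brute-force check of 2-monotonicity for a lower probability
    # on all subsets of a small frame (toy, invented set function).
    from itertools import combinations

    frame = ("x1", "x2", "x3")

    def powerset(s):
        return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    # Cardinality-based, convex set function: known to be 2-monotone.
    low = {A: (len(A) / len(frame)) ** 2 for A in powerset(frame)}

    def is_two_monotone(low):
        for A in low:
            for B in low:
                if low[A | B] + low[A & B] < low[A] + low[B] - 1e-12:
                    return False
        return True

    print(is_two_monotone(low))   # True for this example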

4.
In decision theory under imprecise probabilities, discretizations are a crucial topic, because many applications involve infinite sets, whereas most procedures in the theory of imprecise probabilities can so far only be calculated for finite sets. The present paper develops a method for discretizing sample spaces in data-based decision theory under imprecise probabilities. The proposed method turns an original decision problem into a discretized decision problem. It is shown that any solution of the discretized decision problem approximately solves the original problem. In doing so, it is pointed out that the commonly used method of natural extension can be highly unstable. A way to avoid this instability, sufficient for the purpose of the paper, is presented.

5.
For risk assessment to be a relevant tool in the study of any type of system or activity, it needs to be based on a framework that allows for jointly analyzing both unique and repetitive events. Taken separately, unique events may be handled by predictive probability assignments on the events, and repetitive events with unknown/uncertain frequencies are typically handled by the probability-of-frequency (or Bayesian) approach. Regardless of the nature of the events involved, there may be a problem with imprecision in the probability assignments. Several uncertainty representations with the interpretation of lower and upper probability have been developed for reflecting such imprecision. In particular, several methods exist for jointly propagating precise and imprecise probabilistic input in the probability-of-frequency setting. In the present position paper we outline a framework for the combined analysis of unique and repetitive events in quantitative risk assessment using both precise and imprecise probability. In particular, we extend an existing method for jointly propagating probabilistic and possibilistic input by relaxing the assumption that all events involved have frequentist probabilities; instead we assume that frequentist probabilities may be introduced for some but not all of the events involved. The remaining events are treated as unique and receive predictive, possibly imprecise, probability assignments, i.e. subjective probability assignments made without introducing underlying frequentist probabilities. A numerical example related to environmental risk assessment of the drilling of an oil well is included to illustrate the application of the resulting method.
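The flavour of such joint propagation can be conveyed with a deliberately simplified sketch: one input is aleatory and sampled from a probability distribution, the other is epistemic and known only as an interval (the support of a possibility distribution), and both are pushed through a toy model to give lower and upper probabilities of an event. The model, the numbers and the random-set style simplification are all assumptions for illustration, not the method of the paper.

    # Hedged, simplified sketch of joint probabilistic/interval propagation
    # through a toy model z = x + y (all numbers invented).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    x = rng.normal(10.0, 2.0, size=n)   # aleatory input with a frequentist probability
    y_lo, y_hi = 1.0, 6.0               # epistemic input given only as an interval

    z_lo, z_hi = x + y_lo, x + y_hi     # interval of z per sample (z increasing in y)

    z0 = 15.0                           # event of interest: z <= z0
    lower = np.mean(z_hi <= z0)         # lower (belief-like) probability
    upper = np.mean(z_lo <= z0)         # upper (plausibility-like) probability
    print(lower, upper)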

6.
7.
This paper discusses a number of applications of data envelopment analysis (DEA) and the nature of uncertainty in those applications. It then reviews the key approaches to handling uncertainty in DEA (imprecise DEA, bootstrapping, Monte Carlo simulation and chance-constrained DEA) and considers their suitability for modelling the applications. The paper concludes with suggestions about the challenges facing an operational research analyst in applying DEA in real-world situations.

8.
Several economic applications require considering different data sources and integrating the information coming from them. This paper focuses on statistical matching; in particular, we deal with incoherences. In fact, when logical constraints among the variables are present, incoherences in the probability evaluations can arise. The aim of this paper is to remove such incoherences by using different methods based on distance minimization or on least-commitment imprecise-probability extensions. An illustrative example shows the peculiarities of the different correction methods. Finally, limited to pseudo-distance minimization, we perform a systematic comparison through a simulation study.
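A toy version of the distance-minimization idea, under the assumption that A and B are disjoint events so that a coherent assessment must satisfy P(A ∪ B) = P(A) + P(B): the incoherent input below is invented, and the least-squares projection is only one generic correction, not the specific methods compared in the paper.

    # Hedged toy sketch: correcting an incoherent assessment by least-squares
    # projection onto the coherent set (A and B assumed disjoint).
    import numpy as np
    from scipy.optimize import minimize

    assessed = np.array([0.5, 0.4, 0.7])    # P(A), P(B), P(A u B): incoherent

    res = minimize(
        lambda q: np.sum((q - assessed) ** 2),
        x0=assessed,
        method="SLSQP",
        bounds=[(0, 1)] * 3,
        constraints=[{"type": "eq", "fun": lambda q: q[2] - q[0] - q[1]}],
    )
    print(res.x)   # coherent correction, roughly [0.433, 0.333, 0.767]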

9.

The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because no data are available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of the Hurwicz and Bayes decision rules. It takes into account the decision maker’s attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.
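To fix ideas, the sketch below builds a scenario payoff matrix for a toy newsvendor and evaluates it with the classical Hurwicz rule and the equal-weight Bayes (Laplace) rule, i.e. the two ingredients named above; the paper's actual hybrid combines them differently, and all numbers are invented.

    # Hedged sketch: Hurwicz and Bayes rules on a toy newsvendor payoff matrix
    # (unit cost 4, price 10, unsold items are worthless; invented data).
    import numpy as np

    price, cost = 10.0, 4.0
    orders = np.array([20, 30, 40])       # candidate order quantities
    demands = np.array([20, 30, 40])      # demand scenarios, no probabilities

    # payoff[i, j] = profit of ordering orders[i] when demand is demands[j]
    payoff = price * np.minimum.outer(orders, demands) - cost * orders[:, None]

    alpha = 0.7                           # coefficient of pessimism
    hurwicz = alpha * payoff.min(axis=1) + (1 - alpha) * payoff.max(axis=1)
    bayes = payoff.mean(axis=1)           # Laplace/Bayes with equal weights

    print("Hurwicz choice:", orders[hurwicz.argmax()])   # 20 for this alpha
    print("Bayes choice:", orders[bayes.argmax()])       # 30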


10.
Handling uncertainty by interval probabilities has recently been receiving considerable attention from researchers. Interval probabilities are used when it is difficult to characterize the uncertainty by point-valued probabilities due to partially known information. Most research related to interval probabilities, such as combination, marginalization, conditioning, Bayesian inference and decision making, assumes that the interval probabilities are known. How to elicit interval probabilities from subjective judgment is a basic and important problem for applications of interval probability theory, and it has so far remained a computational challenge. In this work, models for estimating and combining interval probabilities are proposed as linear and quadratic programming problems, which can be easily solved. The concepts of interval probabilities, interval entropy, interval expectation, interval variance, interval moments, and decision criteria with interval probabilities are addressed. A numerical example of the newsvendor problem is employed to illustrate our approach. The analysis results show that the proposed methods provide a novel and effective alternative for decision making when point-valued subjective probabilities are inapplicable due to partially known information.
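One of the quantities mentioned above, the interval expectation, can be obtained from two small linear programs once interval probabilities on the singletons are available; the bounds and payoffs below are invented, and the paper's elicitation and combination models are richer than this sketch.

    # Hedged sketch: lower/upper expectation of a payoff under interval
    # probabilities on singletons, via linear programming (invented numbers).
    import numpy as np
    from scipy.optimize import linprog

    lower = np.array([0.1, 0.2, 0.3])
    upper = np.array([0.5, 0.5, 0.6])
    payoff = np.array([100.0, 40.0, -20.0])

    A_eq, b_eq = np.ones((1, 3)), np.array([1.0])
    bounds = list(zip(lower, upper))

    lo = linprog(payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds)    # minimise E[payoff]
    hi = linprog(-payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # maximise E[payoff]
    print(lo.fun, -hi.fun)   # lower and upper expectation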

11.
We present TANC, a tree-augmented naive (TAN) classifier based on imprecise probabilities. TANC models prior near-ignorance via the Extreme Imprecise Dirichlet Model (EDM). A first contribution of this paper is the experimental comparison between the EDM and the global Imprecise Dirichlet Model (IDM) using the naive credal classifier (NCC), with the aim of showing that the EDM is a sensible approximation of the global IDM. TANC is able to deal with missing data in a conservative manner by considering all possible completions (without assuming them to be missing-at-random), while avoiding an exponential increase of the computational time. By experiments on real data sets, we show that TANC is more reliable than the Bayesian TAN and that it provides better performance compared to previous TANs based on imprecise probabilities. Yet, TANC is sometimes outperformed by NCC because the learned TAN structures are too complex; this calls for novel algorithms for learning TAN structures that are better suited to an imprecise-probability classifier.

12.
Rational approximation of vertical segments
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least-squares approximation is used. Here we follow a different approach. A natural way of dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least-squares approximation, which reduces to a nonlinear optimization problem whose objective function may have many local minima, this makes the new approach attractive.
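The reduction rests on the classical linearisation of the through-the-segments constraints: requiring l_i·q(x_i) ≤ p(x_i) ≤ u_i·q(x_i) together with q(x_i) ≥ 1 makes the feasible set polyhedral, and a strictly convex objective then picks a unique solution. The sketch below implements this idea with a generic squared-norm objective and invented data and degrees; the paper's precise objective and additional properties are not reproduced.

    # Hedged sketch: a rational function p(x)/q(x) threading vertical segments
    # [l_i, u_i], via linearised constraints and a convex objective (invented data).
    import numpy as np
    from scipy.optimize import minimize

    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    l = np.array([0.9, 1.3, 1.8, 2.6, 3.5])       # lower ends of the segments
    u = np.array([1.1, 1.6, 2.2, 3.0, 4.1])       # upper ends of the segments
    deg_p, deg_q = 2, 1                           # numerator / denominator degrees

    Vp = np.vander(x, deg_p + 1, increasing=True)   # rows [1, x, x^2]
    Vq = np.vander(x, deg_q + 1, increasing=True)   # rows [1, x]

    def split(c):
        return c[: deg_p + 1], c[deg_p + 1 :]

    cons = [
        {"type": "ineq", "fun": lambda c: Vp @ split(c)[0] - l * (Vq @ split(c)[1])},
        {"type": "ineq", "fun": lambda c: u * (Vq @ split(c)[1]) - Vp @ split(c)[0]},
        {"type": "ineq", "fun": lambda c: Vq @ split(c)[1] - 1.0},   # q(x_i) >= 1
    ]
    res = minimize(lambda c: np.sum(c ** 2), x0=np.ones(deg_p + deg_q + 2),
                   method="SLSQP", constraints=cons)
    p_coef, q_coef = split(res.x)
    print("numerator:", p_coef, "denominator:", q_coef)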

13.
Data are often affected by uncertainty. Such uncertainty is usually identified with randomness. Nonetheless, other sources of uncertainty may occur. In particular, the empirical information may also be affected by imprecision. Also in these cases it can be fruitful to analyze the underlying structure of the data. In this paper we address the problem of summarizing a sample of three-way imprecise data. In order to manage the different sources of uncertainty, a twofold strategy is adopted. On the one hand, imprecise data are transformed into fuzzy sets by means of the so-called fuzzification process. The fuzzy data so obtained are then analyzed by suitable generalizations of the Tucker3 and CANDECOMP/PARAFAC models, which are the two most popular three-way extensions of Principal Component Analysis. On the other hand, the statistical validity of the obtained underlying structure is evaluated by (nonparametric) bootstrapping. A simulation experiment is performed to assess whether the use of fuzzy data is helpful for summarizing three-way uncertain data. Finally, to show how our models work in practice, an application to real data is discussed.

14.
In real-life decision analysis, the probabilities and utilities of consequences are in general vague and imprecise. One way to model imprecise probabilities is to represent a probability by the interval between the lowest possible and the highest possible probability. However, there are disadvantages with this approach; one is that when an event has several possible outcomes, the distributions of belief in the different probabilities are heavily concentrated toward their centres of mass, meaning that much of the information in the original intervals is lost. Representing an imprecise probability by the distribution’s centre of mass therefore in practice gives much the same result as using an interval, but a single number instead of an interval is computationally easier and avoids problems such as overlapping intervals. We demonstrate why second-order calculations add information when handling imprecise representations, as in decision trees or probabilistic networks. We suggest a measure of belief density for such intervals. We also discuss properties applicable to general distributions. The results herein also apply to approaches which do not explicitly deal with second-order distributions, instead using only first-order concepts such as upper and lower bounds.
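The concentration effect can be seen numerically with a second-order simulation: sample probability vectors uniformly from the simplex, keep those compatible with given first-order intervals, and look at how the belief mass of one component clusters around its centre of mass. The intervals and sample sizes below are invented, and the sketch only illustrates the second-order view, not the belief-density measure proposed in the paper.

    # Hedged sketch: second-order distribution of p_1 under interval constraints
    # (uniform second-order prior on the simplex; invented intervals).
    import numpy as np

    rng = np.random.default_rng(0)
    lower = np.array([0.1, 0.1, 0.2])
    upper = np.array([0.6, 0.5, 0.7])

    samples = rng.dirichlet(np.ones(3), size=200_000)   # uniform on the simplex
    ok = np.all((samples >= lower) & (samples <= upper), axis=1)
    p1 = samples[ok, 0]                                  # second-order marginal of p_1

    print("interval for p_1:", lower[0], upper[0])
    print("centre of mass:", p1.mean())
    print("5%-95% quantiles:", np.quantile(p1, [0.05, 0.95]))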

15.
The paper by Denœux justifies the use of a consonant belief function to represent the information provided by a likelihood function and proposes some extensions to low-quality data. In my comments I consider the point of view of imprecise probabilities for the representation of likelihood information and the relationships with the proposal in the paper. I also argue for some alternatives to the use of consonant belief functions. Finally, I add some clarifications about the comparison with the Bayesian approach.

16.
Geometric programming provides a powerful tool for solving nonlinear problems in which the nonlinear relations can be well represented by exponential or power functions. In the real world, many applications of geometric programming are engineering design problems in which some of the problem parameters are estimates of actual values. This paper develops a solution method for the case where the exponents in the objective function, the cost and constraint coefficients, and the right-hand sides are imprecise and represented as interval data. Since the parameters of the problem are imprecise, the objective value should be imprecise as well. A pair of two-level mathematical programs is formulated to obtain the upper and lower bounds of the objective value. Based on the duality theorem and by applying a variable separation technique, the pair of two-level mathematical programs is transformed into a pair of ordinary one-level geometric programs. Solving the pair of geometric programs produces the interval of the objective value. The ability to calculate the bounds of the objective value developed in this paper might help lead to more realistic modeling efforts in engineering optimization.
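A degenerate special case conveys what the objective-value interval means: for the unconstrained posynomial f(x) = c1·x + c2/x with x > 0, the optimal value is 2·sqrt(c1·c2), which is increasing in both coefficients, so interval coefficients map directly onto an interval of optimal values. This is only a hand-computable illustration with invented numbers; general interval geometric programs need the two-level-to-one-level transformation described in the paper.

    # Hedged toy example: interval of optimal values of f(x) = c1*x + c2/x
    # when c1 and c2 are interval data (invented numbers).
    import math

    c1_lo, c1_hi = 2.0, 3.0
    c2_lo, c2_hi = 8.0, 10.0

    # min over x>0 of c1*x + c2/x equals 2*sqrt(c1*c2), increasing in c1 and c2
    obj_lo = 2 * math.sqrt(c1_lo * c2_lo)   # = 8.0
    obj_hi = 2 * math.sqrt(c1_hi * c2_hi)   # about 10.95
    print(obj_lo, obj_hi)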

17.
We analyze the impact of imprecise parameters on the performance of the uncertainty-modeling tool presented in this paper. In particular, we present a reliable and efficient uncertainty-modeling tool which enables dynamic capture of interval-valued cluster representation sets and functions using well-known pattern recognition and machine learning algorithms. We mainly deal with imprecise learning parameters in identifying uncertainty intervals of membership value distributions and imprecise functions. In the experiments, we use the proposed system as a decision support tool for a production line process. Simulation results indicate that, in comparison to benchmark methods such as well-known type-1 and type-2 system modeling tools and statistical machine-learning algorithms, the proposed interval-valued imprecise system modeling tool is more robust and yields lower error.

18.
In this paper, we consider Bayesian inference and estimation of finite time ruin probabilities for the Sparre Andersen risk model. The dense family of Coxian distributions is considered for the approximation of both the inter-claim time and claim size distributions. We illustrate that the Coxian model can be well fitted to real, long-tailed claims data and that this compares well with the generalized Pareto model. The main advantage of using the Coxian model for inter-claim times and claim sizes is that it is possible to compute finite time ruin probabilities making use of recent results from queueing theory. In practice, finite time ruin probabilities are much more useful than infinite time ruin probabilities, as insurance companies are usually interested in predictions for short periods of future time and not just in the limit. We show how to obtain predictive distributions of these finite time ruin probabilities, which are more informative than simple point estimates and take account of model and parameter uncertainty. We illustrate the procedure with simulated data and the well-known Danish fire loss data set.
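For readers who want to sanity-check such quantities, a crude Monte Carlo estimate of a finite time ruin probability is easy to set up; the sketch uses exponential inter-claim times (the simplest Coxian case) and exponential claims with invented parameters, whereas the paper computes these probabilities via queueing-theory results within a Bayesian framework.

    # Hedged sketch: Monte Carlo finite-time ruin probability for a Sparre
    # Andersen surplus process U(t) = u + c*t - claims (invented parameters).
    import numpy as np

    rng = np.random.default_rng(1)
    u, c, horizon = 10.0, 1.2, 50.0     # initial capital, premium rate, horizon
    lam, mean_claim = 1.0, 1.0          # inter-claim rate, mean claim size
    n_paths = 20_000

    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            dt = rng.exponential(1.0 / lam)        # time to next claim
            if t + dt > horizon:
                break
            t += dt
            surplus += c * dt - rng.exponential(mean_claim)
            if surplus < 0:                        # ruin can only occur at claim instants
                ruined += 1
                break

    print("estimated ruin probability within", horizon, ":", ruined / n_paths)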

19.
This article presents algorithms for computing optima in decision trees with imprecise probabilities and utilities. In tree models involving uncertainty expressed as intervals and/or relations, the evaluation requires computing the upper and lower bounds of the expected values. Already in its simplest form, computing a maximum of expected values leads to quadratic programming (QP) problems. Unfortunately, standard optimization methods based on QP (and BLP, bilinear programming) are too slow for the evaluation of decision trees in computer tools with interactive response times. Needless to say, the problems with computational complexity are even more pronounced in the multi-linear programming (MLP) problems arising from multi-level decision trees. Since standard techniques are not particularly useful for these purposes, other, non-standard algorithms must be used. The algorithms presented here enable user interaction in decision tools and are equally applicable to all multi-linear programming problems sharing the same structure as a decision tree.
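Where the bilinearity comes from can be seen on a minimal two-level chance tree with two branches per node: with interval probabilities p and q at the two levels, the expected utility is bilinear in (p, q), and in this scalar toy case its extremes sit at the corners of the probability box, so enumeration suffices. Larger trees give the QP/BLP/MLP problems the article addresses; all numbers below are invented.

    # Hedged sketch: bounds of a bilinear expected utility over a probability box
    # for a tiny two-level tree (invented utilities and intervals).
    from itertools import product

    u11, u12, u2 = 100.0, -20.0, 30.0   # utilities of the three leaves
    p_lo, p_hi = 0.3, 0.7               # P(upper branch at level 1)
    q_lo, q_hi = 0.4, 0.9               # P(upper branch at level 2)

    def expected(p, q):
        return p * (q * u11 + (1 - q) * u12) + (1 - p) * u2

    values = [expected(p, q) for p, q in product((p_lo, p_hi), (q_lo, q_hi))]
    print("lower expectation:", min(values), "upper expectation:", max(values))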

20.
In a variety of applications ranging from the environmental and health sciences to bioinformatics, data collected in large databases are essentially generated stochastically. This poses qualitatively new problems both for statistics and for computer science: instead of deterministic (usually worst-case) analysis, average-case analysis is needed for many standard database problems. Since both stochastic and deterministic methods and notation are used, the investigation of such problems and the exposition of results face additional difficulties. We consider a general class of probabilistic models for databases and study a few problems in a probabilistic framework. To demonstrate the general approach, problems for systems of database constraints (keys, functional dependencies and related notions) are investigated in more detail. Our approach is based on the consistent use of Rényi entropy as the main characteristic of the uncertainty of a distribution, and on Poisson approximation (the Stein–Chen technique) of the corresponding probabilities.
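Rényi entropy itself is a one-liner, shown below for a discrete distribution with an invented probability vector; the order alpha is a free parameter, and alpha → 1 recovers Shannon entropy. This is only the definition, not the Stein–Chen estimates developed in the paper.

    # Hedged sketch: Rényi entropy of order alpha for a discrete distribution.
    import numpy as np

    def renyi_entropy(p, alpha):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if np.isclose(alpha, 1.0):
            return -np.sum(p * np.log(p))          # Shannon limit
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

    p = [0.5, 0.25, 0.125, 0.125]                  # invented distribution
    print(renyi_entropy(p, 2.0), renyi_entropy(p, 1.0))   # about 1.068 and 1.213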

