Similar Articles
20 similar articles found.
1.
Fusing multiple Bayesian knowledge sources (cited by 1: 0 self-citations, 1 external)
We address the problem of information fusion in uncertain environments. Imagine there are multiple experts building probabilistic models of the same situation and we wish to aggregate the information they provide. There are several problems we may run into by naively merging the information from each. For example, the experts may disagree on the probability of a certain event or they may disagree on the direction of causality between two events (e.g., one thinks A causes B while another thinks B causes A). They may even disagree on the entire structure of dependencies among a set of variables in a probabilistic network. In our proposed solution to this problem, we represent the probabilistic models as Bayesian Knowledge Bases (BKBs) and propose an algorithm called Bayesian knowledge fusion that allows the fusion of multiple BKBs into a single BKB that retains the information from all input sources. This allows for easy aggregation and de-aggregation of information from multiple expert sources and facilitates multi-expert decision making by providing a framework in which all opinions can be preserved and reasoned over.
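For intuition, a toy sketch of the aggregation/de-aggregation idea (not the authors' Bayesian knowledge fusion algorithm; the rule encoding and all names are our own): each expert's probabilistic rules are tagged with their source, so conflicting opinions coexist in the fused base and any single expert's view can be recovered.

```python
from collections import defaultdict

def fuse(sources):
    """Merge rule sets from several experts, tagging each rule with its
    source so conflicting rules (e.g. different probabilities for the
    same event) coexist instead of overwriting one another."""
    fused = defaultdict(list)
    for name, rules in sources.items():
        for condition, prob in rules.items():
            fused[condition].append((name, prob))
    return dict(fused)

def defuse(fused, name):
    """De-aggregate: recover one expert's original rule set."""
    return {cond: p for cond, contribs in fused.items()
            for src, p in contribs if src == name}

# Two experts who disagree on P(B | A):
expert1 = {("B", ("A",)): 0.9}   # expert 1: A causes B with prob 0.9
expert2 = {("B", ("A",)): 0.4}   # expert 2: same rule, prob 0.4
fused = fuse({"e1": expert1, "e2": expert2})
print(fused)                     # both opinions preserved side by side
print(defuse(fused, "e1"))       # expert 1's view recovered intact
```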

2.
The problem of optimal excess of loss reinsurance with a limiting and a retention level is considered. It is demonstrated that this problem can be solved, combining specific risk and performance measures, under some relatively general assumptions on the risk model: the premium income is modelled by any non-negative, non-decreasing function, claim arrivals follow a Poisson process, and claim amounts are modelled by any continuous joint distribution. As a performance measure, we define the expected profits at time x of the direct insurer and the reinsurer, given their joint survival up to x, and derive explicit expressions for their numerical evaluation. The probability of joint survival of the direct insurer and the reinsurer up to the finite time horizon x is employed as a risk measure. An efficient-frontier-type approach to setting the limiting and retention levels is introduced, based on the probability of joint survival as the risk measure and on the expected profit given joint survival as the performance measure. Several optimality problems are defined and their solutions are illustrated numerically on several examples of appropriate claim amount distributions, for both dependent and independent claim severities.
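For concreteness, a minimal sketch of the excess-of-loss split itself, under one common convention in which the reinsurer covers the layer between the retention level M and M + L (the paper's optimisation of these levels is not reproduced here):

```python
def excess_of_loss_split(claim, retention, limit):
    """Split a claim between direct insurer and reinsurer under an
    excess-of-loss treaty with retention level M and limiting level L.
    Convention assumed here: the reinsurer covers the layer (M, M + L]."""
    reinsurer = min(max(claim - retention, 0.0), limit)
    insurer = claim - reinsurer
    return insurer, reinsurer

# A claim of 250 with retention 100 and limit 120: the insurer keeps
# the first 100 and everything above 220, the reinsurer pays the layer.
print(excess_of_loss_split(250.0, 100.0, 120.0))  # (130.0, 120.0)
```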

3.
Robustness is about reducing the feasible set of a given nominal optimization problem by cutting "risky" solutions away. To this end, the most popular approach in the literature is to extend the nominal model with a polynomial number of additional variables and constraints, so as to obtain its robust counterpart. Robustness can also be enforced by adding a possibly exponential family of cutting planes, which typically leads to an exponential formulation where cuts have to be generated at run time. Both approaches have pros and cons, and it is not clear which is the best one when approaching a specific problem. In this paper we computationally compare the two options on some prototype problems with different characteristics. We first address robust optimization à la Bertsimas and Sim for linear programs, and show through computational experiments that a considerable speedup (up to 2 orders of magnitude) can be achieved by exploiting a dynamic cut generation scheme. For integer linear problems, instead, the compact formulation typically exhibits better performance. We then move to a probabilistic setting and introduce the uncertain set covering problem where each column has a certain probability of disappearing, and each row has to be covered with high probability. A related uncertain graph connectivity problem is also investigated, where edges have a certain probability of failure. For both problems, compact ILP models and cutting plane solution schemes are presented and compared through extensive computational tests. The outcome is that a compact ILP formulation (if available) can be preferable because it allows for a better use of the rich arsenal of preprocessing/cut generation tools available in modern ILP solvers. For the cases where such a compact ILP formulation is not available, as in the uncertain connectivity problem, we propose a restart solution strategy and computationally show its practical effectiveness.
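A minimal sketch of the separation step behind such a dynamic cut generation scheme, for a single row under Bertsimas-Sim budgeted uncertainty (hypothetical data; integer budget Γ for simplicity): the worst case is obtained by letting the Γ coefficients with the largest deviation terms move against the current solution.

```python
def most_violated_cut(x, a, d, b, gamma):
    """Separation for a row a.x <= b whose coefficients may deviate by
    up to d_j, with at most `gamma` coefficients deviating at once
    (Bertsimas-Sim budgeted uncertainty, integer gamma). Returns the
    worst-case left-hand side; a cut is needed when it exceeds b."""
    base = sum(ai * xi for ai, xi in zip(a, x))
    deviations = sorted((dj * abs(xj) for dj, xj in zip(d, x)), reverse=True)
    worst = base + sum(deviations[:gamma])
    return worst, worst > b + 1e-9

x = [1.0, 0.5, 2.0]                      # current relaxation solution
a, d, b = [3.0, 4.0, 1.0], [0.5, 1.0, 0.2], 7.0
print(most_violated_cut(x, a, d, b, gamma=1))  # (7.5, True): add a cut
```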

4.
We are given n points distributed randomly in a compact region D of R^m. We consider various optimisation problems associated with partitioning this set of points into k subsets. For each problem we demonstrate lower bounds which are satisfied with high probability. For the case where D is a hypercube we use a partitioning technique to give deterministic upper bounds and to construct algorithms which, with high probability, can be made arbitrarily accurate in polynomial time for a given required accuracy.

5.
Journal of Complexity, 1995, 11(4): 493-514
We study the worst case complexity of operator equations Lu = f, where L: G → X is a bounded linear injection of normed linear spaces. Past work on the complexity of such problems has generally required the class F of problem elements f to be the unit ball of X. However, there are many problems for which this choice of F yields unsatisfactory results. Mixed elliptic-hyperbolic problems are one example, the difficulty being that our technical tools are not strong enough to give good complexity bounds. Ill-posed problems are another example, because we know that the complexity of computing finite-error approximations is infinite if F is a ball in X. In this paper, we pursue another idea. Rather than directly restrict the class F of problem elements f, we consider problems that are solution-restricted; i.e., we restrict the class U of solution elements u. In particular, we assume that U is the unit ball of a normed linear space W that is densely, continuously embedded in G. The main idea is that our problem can now be reduced to the standard approximation problem of approximating the embedding of W into G. This allows us to characterize optimal information and algorithms for our problem. We use this idea to study three problems: the Tricomi problem (a mixed hyperbolic-elliptic problem arising in the study of transonic flow), the inverse finite Laplace transform (an ill-posed problem arising, e.g., in geomathematics), and the backwards heat equation. We determine the problem complexity and derive nearly optimal algorithms for each of these problems.

6.
In this paper, we study minimal zero norm solutions of linear complementarity problems, defined as the solutions with smallest cardinality. Minimal zero norm solutions are often desired in real applications such as bimatrix games and portfolio selection. We first show the uniqueness of the minimal zero norm solution for Z-matrix linear complementarity problems. Finding minimal zero norm solutions is equivalent to solving a difficult zero norm minimization problem with linear complementarity constraints. We then propose an l_p norm regularized minimization model with p in the open interval (0, 1), and show that it can approximate minimal zero norm solutions very well by sequentially decreasing the regularization parameter. We establish a threshold lower bound for any nonzero entry in its local minimizers, which can be used to identify zero entries precisely in computed solutions. We also consider the choice of regularization parameter needed to obtain the desired sparsity. Based on the theoretical results, we design a sequential smoothing gradient method to solve the model. Numerical results demonstrate that the sequential smoothing gradient method can effectively solve the regularized model and obtain minimal zero norm solutions of linear complementarity problems.
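In symbols, the minimal zero norm problem and one plausible form of the l_p surrogate (the exact objective weighting is our assumption, not taken from the paper):

```latex
% Minimal zero norm solution of the LCP with data (M, q):
\min_{x}\; \|x\|_0
\quad \text{s.t.} \quad x \ge 0,\; Mx + q \ge 0,\; x^{\top}(Mx + q) = 0.
% A plausible l_p-regularized surrogate, p in (0,1), lambda > 0
% (penalizing the complementarity residual; the exact form is our assumption):
\min_{x}\; x^{\top}(Mx + q) + \lambda \|x\|_p^p
\quad \text{s.t.} \quad x \ge 0,\; Mx + q \ge 0.
```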

7.
As in earlier works, we consider {0,1}^n as a sample space with a probability measure on it, thus making pseudo-Boolean functions into random variables. Under the assumption that the coordinate random variables are independent, we show it is very easy to give an orthonormal basis for the space of pseudo-Boolean random variables of degree at most k. We use this orthonormal basis to find the transform of a given pseudo-Boolean random variable and to answer various least squares minimization questions.
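The construction is standard when the coordinates are independent with P(x_i = 1) = p_i: standardize each coordinate and take products over subsets. A short sketch that checks orthonormality numerically (variable names are ours):

```python
from itertools import product

p = [0.3, 0.5, 0.8]                      # P(x_i = 1), independent coordinates
n = len(p)

def measure(x):
    """Product measure of a point x in {0,1}^n."""
    out = 1.0
    for xi, pi in zip(x, p):
        out *= pi if xi == 1 else 1 - pi
    return out

def phi(S, x):
    """Basis function for subset S: product of standardized coordinates."""
    out = 1.0
    for i in S:
        out *= (x[i] - p[i]) / (p[i] * (1 - p[i])) ** 0.5
    return out

def inner(S, T):
    """E[phi_S * phi_T] under the product measure."""
    return sum(measure(x) * phi(S, x) * phi(T, x)
               for x in product((0, 1), repeat=n))

print(round(inner((0, 2), (0, 2)), 10))  # 1.0  (normalized)
print(round(inner((0,), (1, 2)), 10))    # 0.0  (orthogonal)
```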

8.
9.
The problem of short-term financial planning is to determine an optimal credit mix to meet the short-term cash needs and an optimal investment plan for excess cash. A number of linear optimization models have been developed to solve this problem, some of which are in practical use. The purpose of this paper is to generalize the assumptions of these models concerning the available information about future receipts and disbursements. It is presupposed that the financial officer has some idea as to the amounts involved which, however, cannot be specified by a probability distribution. Instead, we assume that these ideas only permit qualitative probability statements such as the following: "That the difference between disbursements and receipts in a certain period lies in an interval I_1 is no less probable than that it lies in an interval I_2." For this level of information we formulate a model for short-term financial planning, and we develop a solution procedure to determine the optimal financial alternatives. Finally, the entire procedure is demonstrated on a medium-sized example.

10.
Since the Age of Enlightenment, most philosophers have associated reasoning with the rules of probability and logic. This association has been enhanced over the years and now incorporates the theory of fuzzy logic as a complement to probability theory, leading to the concept of fuzzy probability. Our insight here is to integrate the concept of validity into the notion of fuzzy probability within an extended fuzzy logic (FLe) framework, in keeping with the notion of collective intelligence. In this regard, we propose a novel framework of possibility–probability–validity distribution (PPVD). The proposed distribution is applied to a real-world setting of actual judicial cases to examine the role of validity measures in automated judicial decision-making within a fuzzy probabilistic framework. We compute the valid fuzzy probability of conviction and acquittal based on different factors. This determines a possible overall hypothesis for the decision of a case, which is valid only to a degree. Validity is computed by aggregating the validities of all the involved factors, which are obtained from a factor vocabulary based on empirical data. We then map the combined validity, based on the Jaccard similarity measure, into linguistic forms so that a human can understand the results. The PPVDs obtained from the relevant factors in the given case then yield the final valid fuzzy probabilities for conviction and acquittal. Finally, the judge has to make a decision; we therefore provide a numerical measure. Our approach supports the proposed hypothesis within the three-dimensional context of probability, possibility, and validity, improving the ability to solve problems with incomplete, unreliable, or ambiguous information and to deliver a more reliable decision.
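Since the mapping into linguistic forms rests on the Jaccard similarity measure, here is that measure in isolation (a minimal sketch; the factor vocabulary and validity aggregation are application-specific and not reproduced):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

# e.g. comparing a computed validity profile against a vocabulary entry
# (the labels here are hypothetical):
print(jaccard({"high", "consistent"}, {"high", "partial"}))  # 1/3
```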

11.
The range minimum query problem, RMQ for short, is to preprocess a sequence of real numbers A[1..n] for subsequent queries of the form: "Given indices i, j, what is the index of the minimum value of A[i..j]?" This problem has been shown to be linearly equivalent to the LCA problem, in which a tree is preprocessed for answering the lowest common ancestor of two nodes. It has also been shown that both the RMQ and LCA problems can be solved in linear preprocessing time and constant query time under the unit-cost RAM model. This paper studies a new query problem arising from the analysis of biological sequences. Specifically, we wish to answer queries of the form: "Given indices i and j, what is the maximum-sum segment of A[i..j]?" We establish the linear equivalence relation between RMQ and this new problem. As a consequence, we can solve the new query problem in linear preprocessing time and constant query time under the unit-cost RAM model. We then present alternative linear-time solutions for two other biological sequence analysis problems to demonstrate the utility of the techniques developed in this paper.
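For reference, a compact RMQ implementation using a sparse table, which gives O(n log n) preprocessing and O(1) queries (the linear-preprocessing solutions the abstract refers to are more involved):

```python
from math import floor, log2

def rmq_preprocess(A):
    """Sparse table: table[k][i] is the index of the minimum of A[i : i + 2**k]."""
    n = len(A)
    table = [list(range(n))]
    k = 1
    while (1 << k) <= n:
        prev, half = table[-1], 1 << (k - 1)
        row = [prev[i] if A[prev[i]] <= A[prev[i + half]] else prev[i + half]
               for i in range(n - (1 << k) + 1)]
        table.append(row)
        k += 1
    return table

def rmq_query(A, table, i, j):
    """Index of a minimum of A[i..j] (inclusive) in O(1): two overlapping blocks."""
    k = floor(log2(j - i + 1))
    left, right = table[k][i], table[k][j - (1 << k) + 1]
    return left if A[left] <= A[right] else right

A = [3, 1, 4, 1, 5, 9, 2, 6]
t = rmq_preprocess(A)
print(rmq_query(A, t, 2, 7))  # 3, since A[3] = 1 is the minimum of A[2..7]
```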

12.
In this paper, we consider the following minimax linear programming problem: min z = max_{1 ≤ j ≤ n} {c_j x_j}, subject to Ax = g, x ≥ 0. It is well known that this problem can be transformed into a linear program by introducing n additional constraints. We note that these additional constraints can be considered implicitly by treating them as parametric upper bounds. Based on this approach we develop two algorithms: a parametric algorithm and a primal-dual algorithm. The parametric algorithm solves a linear programming problem with parametric upper bounds and the primal-dual algorithm solves a sequence of related dual feasible linear programming problems. Computational results are also presented, which indicate that both algorithms are substantially faster than the simplex algorithm applied to the enlarged linear programming problem.
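Written out, the well-known transformation the abstract mentions introduces one constraint per variable:

```latex
% Minimax problem:
\min\; z = \max_{1 \le j \le n} \{\, c_j x_j \,\}
\quad \text{s.t.} \quad Ax = g,\; x \ge 0.
% Equivalent linear program with n additional constraints:
\min\; z \quad \text{s.t.} \quad
z \ge c_j x_j \;\; (j = 1, \dots, n), \quad Ax = g, \quad x \ge 0.
```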

13.
This paper discusses an alternative to conditioning that may be used when the probability distribution is not fully specified. It does not require any assumptions (such as CAR: coarsening at random) on the unknown distribution. The well-known Monty Hall problem is the simplest scenario where neither naive conditioning nor the CAR assumption suffice to determine an updated probability distribution. This paper thus addresses a generalization of that problem to arbitrary distributions on finite outcome spaces, arbitrary sets of ‘messages’, and (almost) arbitrary loss functions, and provides existence and characterization theorems for robust probability updating strategies. We find that for logarithmic loss, optimality is characterized by an elegant condition, which we call RCAR (reverse coarsening at random). Under certain conditions, the same condition also characterizes optimality for a much larger class of loss functions, and we obtain an objective and general answer to how one should update probabilities in the light of new information.
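A quick simulation of the Monty Hall scenario the abstract starts from (one fixed host strategy is assumed when the host has a choice, which is exactly the kind of unmodelled detail that makes naive conditioning unreliable):

```python
import random

def monty_hall(trials=100_000):
    """Estimate win probabilities for staying vs. switching. The host
    always opens a non-chosen goat door; when two are available, this
    sketch assumes he deterministically opens the lowest-numbered one."""
    switch_wins = stay_wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        opened = next(d for d in (0, 1, 2) if d != pick and d != prize)
        switched = next(d for d in (0, 1, 2) if d != pick and d != opened)
        stay_wins += (pick == prize)
        switch_wins += (switched == prize)
    return stay_wins / trials, switch_wins / trials

print(monty_hall())  # roughly (0.333, 0.667), not the naive (0.5, 0.5)
```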

14.
This paper considers several probability maximization models for multi-scenario portfolio selection problems in the case that future returns in possible scenarios are multi-dimensional random variables. In order to take into account occurrence probabilities and decision makers’ predictions with respect to all scenarios, a portfolio selection problem that flexibly assigns a weight to each scenario is proposed. Furthermore, by introducing aspiration levels for occurrence probabilities or the future target profit and maximizing the minimum aspiration level, a robust portfolio selection problem is considered. Since these problems are formulated as stochastic programming problems due to the inclusion of random variables, they are transformed into deterministic equivalent problems by introducing chance constraints based on the stochastic programming approach. Then, using a relation between the variance and absolute deviation of random variables, the proposed models are transformed into linear programming problems, and efficient solution methods are developed to obtain the global optimal solution. Furthermore, a numerical example of a portfolio selection problem is provided to compare the proposed models with the basic model.

15.
This paper is concerned with minisum location-allocation problems on undirected networks in which demands can occur on links with uniform probability distributions. Two types of networks are considered. The first type considered is simply a chain graph. It is shown that except for the 1-median case, the problem is generally non-convex. However, for the p-median case, a discrete set of potential optimal facility locations may be identified, and hence it is shown that all local and global minima of the problem may be discovered by solving a series of trivial linear programming problems. This analysis is then extended to prescribe an algorithm for the 2-median location-allocation problem on a tree network involving uniform continuous demands on links. Some localization theorems are presented in the spirit of the work done on discrete nodal demand problems.

16.
We consider {0,1}^n as a sample space with a probability measure on it, thus making pseudo-Boolean functions into random variables. We then derive explicit formulas for approximating a pseudo-Boolean random variable by a linear function if the measure is permutation-invariant, and by a function of degree at most k if the measure is a product measure. These formulas generalize results due to Hammer-Holzman and Grabisch-Marichal-Roubens. We also derive a formula for the best faithful linear approximation that extends a result due to Charnes-Golany-Keane-Rousseau concerning generalized Shapley values. We show that a theorem of Hammer-Holzman that states that a pseudo-Boolean function and its best approximation of degree at most k have the same derivatives up to order k does not generalize to this setting for arbitrary probability measures, but does generalize if the probability measure is a product measure.

17.
Combined heat and power (CHP) production is an important energy production technology which can help to improve the efficiency of energy production and to reduce the emission of CO2. Cost-efficient operation of a CHP system can be planned using an optimisation model based on hourly load forecasts. A long-term planning model decomposes into hourly models, which can be formulated as linear programming (LP) problems. Such problems can be solved efficiently using the specialized Power Simplex algorithm. However, Power Simplex can only manage one heat and one power balance. Since heat cannot be transported over long distances, Power Simplex applies only to local CHP planning. In this paper we formulate the hourly multi-site CHP planning problem with multiple heat balances as an LP model with a special structure. We then develop the Extended Power Simplex (EPS) algorithm for solving such models efficiently. Even though the problem can become quite large as the number of demand sites increases, EPS demonstrates very good efficiency. In test runs with realistic models, EPS is 29 to 85 times faster than an efficient sparse simplex code using the product form of inverse (PFI). Furthermore, the relative efficiency of EPS improves as the problem size grows.
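A toy single-hour instance of such an hourly LP, with one heat and one power balance, solved here with a general-purpose LP solver rather than Power Simplex (plant data and cost figures are hypothetical):

```python
from scipy.optimize import linprog

# Decision variables: x = [chp_level, boiler_heat, purchased_power].
# The CHP unit yields 1.0 unit of heat and 0.6 unit of power per unit level.
cost = [30.0, 20.0, 45.0]        # cost per unit of each activity

heat_demand, power_demand = 100.0, 70.0
A_eq = [[1.0, 1.0, 0.0],         # heat balance: chp + boiler = heat demand
        [0.6, 0.0, 1.0]]         # power balance: 0.6*chp + purchase = power demand
b_eq = [heat_demand, power_demand]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 80), (0, 120), (0, None)], method="highs")
print(res.x, res.fun)            # optimal operation levels and total cost
```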

18.
In connection with the optimal design of centralized circuit-free networks, linear 0–1 programming problems arise which are related to rooted trees. For each problem the variables correspond to the edges of a given rooted tree T. Each path from a leaf to the root of T, together with edge weights, defines a linear constraint, and a global linear objective is to be maximized. We consider relaxations of such problems where the variables are not restricted to 0 or 1 but are allowed to vary continuously between these bounds. The values of the optimal solutions of such relaxations may be used in a branch and bound procedure for the original 0–1 problem. While in [10] a primal algorithm for these relaxations is discussed, in this paper we deal with the dual linear program and present a version of the simplex algorithm for its solution which can be implemented to run in time O(n^2). For balanced trees T this time can be reduced to O(n log n).

19.
Multiclass classification and probability estimation have important applications in data analytics. Support vector machines (SVMs) have shown great success in various real-world problems due to their high classification accuracy. However, one main limitation of standard SVMs is that they do not provide class probability estimates, and thus fail to offer an uncertainty measure for class prediction. In this article, we propose a simple yet effective framework to endow kernel SVMs with the feature of multiclass probability estimation. The new probability estimator does not rely on any parametric assumption on the data distribution; therefore, it is flexible and robust. Theoretically, we show that the proposed estimator is asymptotically consistent. Computationally, the new procedure can be conveniently implemented using standard SVM software. Our extensive numerical studies demonstrate competitive performance of the new estimator when compared with existing methods such as multiple logistic regression, linear discriminant analysis, tree-based methods, and random forests, under various classification settings. Supplementary materials for this article are available online.
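For orientation only, the conventional baseline that standard SVM software already provides: Platt-style pairwise-coupled probabilities in scikit-learn, not the authors' non-parametric estimator.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# probability=True enables cross-validated Platt scaling with pairwise coupling
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
print(clf.predict_proba(X_te[:3]))   # per-class probability estimates
```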

20.
This paper focuses on nonlocal boundary value problems for linear and nonlinear abstract elliptic equations in Banach spaces, where the equations and boundary conditions contain certain parameters. The uniform separability of the linear problem and the existence and uniqueness of the maximal regular solution of the nonlinear problem are obtained in L_p spaces. For the linear case, the discreteness of the spectrum of the corresponding parameter-dependent differential operator is obtained. The behavior of the solution as the parameter approaches zero, and its smoothness with respect to the parameter, are established. Moreover, we show an estimate for analytic semigroups in terms of interpolation spaces. This fact can be used to obtain maximal regularity properties for abstract boundary value problems.
