Similar Literature
Found 20 similar articles (search time: 31 ms)
1.
A qualitative approach to decision making under uncertainty has been proposed in the setting of possibility theory, based on the assumption that levels of certainty and levels of priority (for expressing preferences) are commensurate. In this setting, pessimistic and optimistic decision criteria have been formally justified. This approach has been transposed into possibilistic logic, in which the available knowledge is described by formulas that are more or less certainly true and the goals are described in a separate prioritized base. This paper adapts the possibilistic logic handling of qualitative decision making under uncertainty to the Answer Set Programming (ASP) setting. We show how weighted beliefs and prioritized preferences belonging to two separate knowledge bases can be handled in ASP by modeling qualitative decision making in terms of abductive logic programming, where (uncertain) knowledge about the world and prioritized preferences are encoded as possibilistic definite logic programs and possibilistic literals, respectively. We provide ASP-based and possibilistic ASP-based algorithms for calculating optimal decisions and utility values according to the possibilistic decision criteria. We describe a prototype that implements the proposed algorithms on top of different ASP solvers, and we discuss the complexity of the different implementations.
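A minimal sketch of the standard pessimistic and optimistic possibilistic decision criteria referred to above, assuming a decision-conditioned possibility distribution pi over worlds and a preference profile mu on the same [0, 1] scale (all values below are invented for illustration):

```python
def pessimistic_utility(pi, mu):
    # u_*(d) = min over worlds w of max(1 - pi(w | d), mu(w))
    return min(max(1.0 - pi[w], mu[w]) for w in pi)

def optimistic_utility(pi, mu):
    # u^*(d) = max over worlds w of min(pi(w | d), mu(w))
    return max(min(pi[w], mu[w]) for w in pi)

# hypothetical plausibility of each world under some fixed decision d
pi = {"w1": 1.0, "w2": 0.4, "w3": 0.1}
# hypothetical satisfaction level of each world
mu = {"w1": 0.8, "w2": 0.3, "w3": 1.0}

print(pessimistic_utility(pi, mu))  # 0.6: a fairly plausible world (w2) is poorly rated
print(optimistic_utility(pi, mu))   # 0.8: the most plausible world is well rated
```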

2.
The distribution semantics integrates logic programming and probability theory using a possible worlds approach. Its intuitiveness and simplicity have made it the most widely used semantics for probabilistic logic programming, with successful applications in many domains. When the program has function symbols, the semantics was defined for special cases: either the program has to be definite or the queries must have a finite number of finite explanations. In this paper we show that it is possible to define the semantics for all programs. We also show that this definition coincides with that of Sato and Kameya on positive programs. Moreover, we highlight possible approaches for inference, both exact and approximate.
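To make the possible-worlds reading concrete, here is a toy enumeration in Python for a program with two probabilistic facts and the clauses `win :- heads1.` and `win :- heads2.` (facts and probabilities are invented; real systems avoid this exponential enumeration):

```python
import itertools

facts = {"heads1": 0.4, "heads2": 0.7}      # probabilistic facts (invented)

def prob_query(query_holds):
    """Sum the probabilities of the possible worlds in which the query holds."""
    total, names = 0.0, list(facts)
    for values in itertools.product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        p = 1.0
        for f, v in world.items():
            p *= facts[f] if v else 1.0 - facts[f]
        if query_holds(world):
            total += p
    return total

# program: win :- heads1.   win :- heads2.
print(prob_query(lambda w: w["heads1"] or w["heads2"]))   # 1 - 0.6*0.3 = 0.82
```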

3.
Topic models, and more specifically the class of latent Dirichlet allocation (LDA), are widely used for probabilistic modeling of text. Markov chain Monte Carlo (MCMC) sampling from the posterior distribution is typically performed using a collapsed Gibbs sampler. We propose a parallel sparse partially collapsed Gibbs sampler and compare its speed and efficiency to state-of-the-art samplers for topic models on five well-known text corpora of differing sizes and properties. In particular, we propose and compare two different strategies for sampling the parameter block with latent topic indicators. The experiments show that the increase in statistical inefficiency from only partial collapsing is smaller than commonly assumed, and can be more than compensated by the speedup from parallelization and sparsity on larger corpora. We also prove that the partially collapsed samplers scale well with the size of the corpus. The proposed algorithm is fast, efficient, exact, and can be used in more modeling situations than the ordinary collapsed sampler. Supplementary materials for this article are available online.
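For reference, a minimal fully collapsed, serial Gibbs update for the topic indicators — the textbook baseline that the partially collapsed, parallel sampler relaxes (hyperparameters and toy data are invented, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def collapsed_gibbs_lda(docs, K, V, alpha=0.1, beta=0.01, n_iter=100):
    """Fully collapsed Gibbs sampling for LDA. docs: lists of word ids in [0, V)."""
    ndk = np.zeros((len(docs), K))          # topic counts per document
    nkw = np.zeros((K, V))                  # word counts per topic
    nk = np.zeros(K)                        # total words assigned to each topic
    z = [rng.integers(0, K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):          # initialise the count tables
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z_di = k | everything else)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k                 # add the new assignment back
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw

docs = [[0, 1, 2, 1], [2, 3, 3, 0]]         # toy corpus, vocabulary size 4
print(collapsed_gibbs_lda(docs, K=2, V=4, n_iter=50)[1])
```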

4.
We study a class of mixed-integer programs for solving linear programs with joint probabilistic constraints from random right-hand side vectors with finite distributions. We present greedy and dual heuristic algorithms that construct and solve a sequence of linear programs. We provide optimality gaps for our heuristic solutions via the linear programming relaxation of the extended mixed-integer formulation of Luedtke et al. (2010) [13] as well as via lower bounds produced by their cutting plane method. While we demonstrate through an extensive computational study the effectiveness and scalability of our heuristics, we also prove that the theoretical worst-case solution quality for these algorithms is arbitrarily far from optimal. Our computational study compares our heuristics against both the extended mixed-integer programming formulation and the cutting plane method of Luedtke et al. (2010) [13]. Our heuristics efficiently and consistently produce solutions with small optimality gaps, while for larger instances the extended formulation becomes intractable and the optimality gaps from the cutting plane method increase to over 5%.
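One plausible greedy scheme in this spirit, sketched under invented data (this is not necessarily the authors' exact heuristic): start by covering every equiprobable right-hand-side scenario, then repeatedly drop the scenario whose removal most reduces the LP objective, as long as the chance constraint still holds:

```python
import numpy as np
from scipy.optimize import linprog

def greedy_chance_lp(c, A, scenarios, eps):
    """min c'x s.t. Ax >= xi for at least a (1-eps) fraction of the
    equiprobable scenarios xi (rows of `scenarios`), x >= 0."""
    m = len(scenarios)
    active = set(range(m))

    def solve(idx):
        rhs = scenarios[list(idx)].max(axis=0)   # cover all active scenarios
        res = linprog(c, A_ub=-A, b_ub=-rhs)     # encode Ax >= rhs
        return res.fun if res.success else np.inf

    obj = solve(active)
    for _ in range(int(np.floor(eps * m))):      # scenarios we may discard
        best_i, best_obj = None, obj
        for i in active:                         # try dropping each scenario
            cand = solve(active - {i})
            if cand < best_obj:
                best_i, best_obj = i, cand
        if best_i is None:
            break
        active.remove(best_i)
        obj = best_obj
    return sorted(active), obj

c = np.array([1.0, 2.0])
A = np.eye(2)
scenarios = np.array([[1.0, 1.0], [3.0, 0.5], [0.5, 2.5], [2.0, 2.0]])
print(greedy_chance_lp(c, A, scenarios, eps=0.25))
```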

5.
Metropolis algorithms along with Gibbs steps are proposed to perform a Bayesian analysis for the Block and Basu absolutely continuous bivariate exponential (ACBVE) distribution. We also consider the use of Gibbs sampling to develop Bayesian inference for accelerated life tests assuming a power rule model and the ACBVE distribution. The methodology is illustrated with two examples.
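The general pattern of such a sampler can be sketched generically in Python — one random-walk Metropolis step per coordinate inside each Gibbs sweep — without reproducing the ACBVE-specific full conditionals or the power-rule likelihood (the toy target below is an invented bivariate normal):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_within_gibbs(log_post, theta0, scales, n_iter=5000):
    """One random-walk Metropolis step per coordinate inside each Gibbs sweep;
    log_post is the unnormalised log posterior."""
    theta = np.array(theta0, dtype=float)
    draws = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for it in range(n_iter):
        for j in range(theta.size):
            prop = theta.copy()
            prop[j] += rng.normal(0.0, scales[j])    # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
                theta, lp = prop, lp_prop
        draws[it] = theta
    return draws

# toy target: bivariate normal with correlation 0.8, as a stand-in posterior
log_post = lambda t: -0.5 * (t[0] ** 2 - 1.6 * t[0] * t[1] + t[1] ** 2) / 0.36
print(metropolis_within_gibbs(log_post, [0.0, 0.0], [0.5, 0.5]).mean(axis=0))
```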

6.
We establish computationally flexible methods and algorithms for the analysis of multivariate skew normal models when missing values occur in the data. To facilitate the computation and simplify the theoretical derivation, two auxiliary permutation matrices are incorporated into the model to determine the observed and missing components of each observation. Under missing-at-random mechanisms, we formulate an analytically simple ECM algorithm for computing parameter estimates and retrieving each missing value with a single-valued imputation. Gibbs sampling is used to perform Bayesian inference on the model parameters and to create multiple imputations for missing values. The proposed methodologies are illustrated through a real data set, and comparisons are made with results obtained from fitting the normal counterparts.

7.
An expert system is a computer program which can act in a similar way to a human expert in a restricted domain of application from the point of view of solving problems, taking decisions, planning and giving advice. It consists of two parts. One part is a knowledge base consisting of that knowledge used by the expert in his performance. A second part is an inference engine which allows queries to be answered by asking questions of the environment and performing evidential reasoning. This paper is concerned with the knowledge representation and inference mechanism for evidential reasoning. Man's knowledge consists of statements which cannot be guaranteed to be true and is expressed in a language containing imprecise terms. Uncertainties, either of a probabilistic or fuzzy nature, cannot be ignored when modelling human expertise. Not all practical reasoning takes the form of deductive inference. For practical affairs we use inductive, abductive, analogical and plausible reasoning methods, and for each of these the concept of the strength of evidence would seem to be important. We describe a support logic programming system which generalises logic programming to the case in which various forms of uncertainty can be included. In this system a conclusion does not logically follow from some axioms but is supported to a certain degree by means of evidence. The negation of the conclusion is also supported to a certain degree, and the two supports do not necessarily add up to one. A calculus for such a support logic programming system is described, and applications to expert systems and to providing recursive definitions of fuzzy concepts are given.

8.
We consider Markov mixture models for multiple longitudinal binary sequences. Prior uncertainty in the mixing distribution is characterized by a Dirichlet process centered on a matrix beta measure. We use this setting to evaluate and compare the performance of three competing algorithms that arise more generally in Dirichlet process mixture calculations: sequential imputations, Gibbs sampling, and a predictive recursion, for which an extension of the sequential calculations is introduced. This facilitates the estimation of quantities related to the clustering structure, which are not available in the original formulation. A numerical comparison is carried out in three examples. Our findings suggest that the sequential imputations method is most useful for relatively small problems, and that the predictive recursion can be an efficient preliminary tool for more reliable, but computationally intensive, Gibbs sampling implementations.
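A grid-based sketch of Newton's predictive recursion for a mixing density, with a normal kernel standing in for the matrix-beta setting of the paper (the weight sequence, kernel, and data are illustrative assumptions):

```python
import numpy as np

def predictive_recursion(x, grid, sd=1.0):
    """Newton's predictive recursion for a mixing density on a grid, with a
    normal kernel k(x | theta) = N(x; theta, sd^2) and weights w_i = 1/(i+2)."""
    f = np.ones_like(grid) / (grid[-1] - grid[0])   # flat initial guess
    dg = grid[1] - grid[0]
    for i, xi in enumerate(x):
        w = 1.0 / (i + 2)                           # decaying weight sequence
        k = np.exp(-0.5 * ((xi - grid) / sd) ** 2)  # kernel evaluated on the grid
        m = np.sum(k * f) * dg                      # predictive density at xi
        f = (1 - w) * f + w * k * f / m             # recursion update
    return f

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])
grid = np.linspace(-6, 7, 400)
f = predictive_recursion(x, grid)
print(grid[np.argmax(f)])   # should sit near one of the mixture locations
```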

9.
PRISM is a probabilistic logic programming formalism which allows defining a probability distribution over possible worlds. This paper investigates learning a class of generative PRISM programs known as failure-free. The aim is to learn recursive PRISM programs which can be used to model stochastic processes. These programs generalise dynamic Bayesian networks by defining a halting distribution over the generative process. Dynamic Bayesian networks model infinite stochastic processes; sampling from an infinite process can only be done by specifying the length of the sequences that the process generates, so only observations of a fixed sequence length can be obtained. The recursive PRISM programs considered in this paper, on the other hand, are self-terminating upon some halting conditions and thus generate observations of varying sequence lengths. The direction taken by this paper is to combine ideas from inductive logic programming and learning Bayesian networks to learn PRISM programs. It builds upon the inductive logic programming approach of learning from entailment.
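The halting behaviour can be pictured with a toy self-terminating sampler (the states, emissions, and halting probability below are invented; a real PRISM program would attach learnable parameters to these random choices):

```python
import random

random.seed(0)

def sample_sequence(p_halt=0.2):
    """A toy self-terminating generative process: emit a symbol given the
    current state, then halt with probability p_halt, else move on. Sampled
    observations therefore have varying lengths."""
    out, state = [], "s0"
    while True:
        out.append("a" if state == "s0" else "b")     # toy emission
        if random.random() < p_halt:                  # halting distribution
            return "".join(out)
        state = random.choice(["s0", "s1"])           # toy transition

print([sample_sequence() for _ in range(5)])          # strings of different lengths
```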

10.
A conic integer program is an integer programming problem with conic constraints. Many problems in finance, engineering, statistical learning, and probabilistic optimization are modeled using conic constraints. Here we study mixed-integer sets defined by second-order conic constraints. We introduce general-purpose cuts for conic mixed-integer programming based on polyhedral conic substructures of second-order conic sets. These cuts can be readily incorporated in branch-and-bound algorithms that solve either second-order conic programming or linear programming relaxations of conic integer programs at the nodes of the branch-and-bound tree. Central to our approach is a reformulation of the second-order conic constraints with polyhedral second-order conic constraints in a higher dimensional space. In this representation the cuts we develop are linear, even though they are nonlinear in the original space of variables. This feature leads to a computationally efficient implementation of nonlinear cuts for conic mixed-integer programming. The reformulation also allows the use of polyhedral methods for conic integer programming. We report computational results on solving unstructured second-order conic mixed-integer problems as well as mean–variance capital budgeting problems and least-squares estimation problems with binary inputs. Our computational experiments show that conic mixed-integer rounding cuts are very effective in reducing the integrality gap of continuous relaxations of conic mixed-integer programs and, hence, improving their solvability. This research has been supported, in part, by Grant # DMI0700203 from the National Science Foundation.
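Why polyhedral representations make such cuts linear can be illustrated in two dimensions, where sqrt(x^2 + y^2) <= t equals max over angles a of x cos(a) + y sin(a) <= t; this sketch only outer-approximates the cone directly, a simplification of the lifted reformulation used in the paper:

```python
import numpy as np

def soc_outer_cuts(n_cuts=16):
    """Linear outer approximation of the 2-d second-order cone
    sqrt(x^2 + y^2) <= t. Since sqrt(x^2 + y^2) = max_a (x cos a + y sin a),
    each angle a yields a valid linear cut; more angles tighten the polyhedron."""
    a = np.linspace(0.0, 2.0 * np.pi, n_cuts, endpoint=False)
    # each row (cos a, sin a, -1) encodes the cut  x cos a + y sin a - t <= 0
    return np.column_stack([np.cos(a), np.sin(a), -np.ones(n_cuts)])

cuts = soc_outer_cuts()
point = np.array([3.0, 4.0, 5.0])                 # a point on the cone boundary
print((cuts @ point <= 1e-9).all())               # all linear cuts are satisfied
```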

11.
The Bradley–Terry model is a popular approach to describe probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications including animal behavior, chess ranking, and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature including models with ties, multiple comparisons, group comparisons, and random graphs. From a computational point of view, Hunter has proposed efficient iterative minorization-maximization (MM) algorithms to perform maximum likelihood estimation for these generalized Bradley–Terry models whereas Bayesian inference is typically performed using Markov chain Monte Carlo algorithms based on tailored Metropolis–Hastings proposals. We show here that these MM algorithms can be reinterpreted as special instances of expectation-maximization algorithms associated with suitable sets of latent variables and propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
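Hunter's MM update for the basic model is short enough to state directly; a sketch with a win-count matrix as input (the win counts below are invented, and the paper's Gibbs extensions build on the latent-variable view of this same update):

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=500, tol=1e-10):
    """wins[i, j] = number of times item i beat item j.
    MM update: p_i <- (total wins of i) / sum_{j != i} n_ij / (p_i + p_j),
    where n_ij is the number of games between i and j."""
    n = wins.shape[0]
    p = np.ones(n)
    games = wins + wins.T                    # n_ij: games played between i and j
    for _ in range(n_iter):
        new_p = np.empty(n)
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            new_p[i] = wins[i].sum() / denom
        new_p /= new_p.sum()                 # fix the scale (identifiability)
        if np.max(np.abs(new_p - p)) < tol:
            return new_p
        p = new_p
    return p

wins = np.array([[0, 3, 2], [1, 0, 4], [2, 0, 0]])   # invented win counts
print(bradley_terry_mm(wins))
```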

12.
In the current paper, we re-examine the connection between formal argumentation and logic programming from the perspective of semantics. We observe that one particular translation from logic programs to instantiated argumentation (the one described by Wu, Caminada and Gabbay) is able to serve as a basis for describing various equivalences between logic programming semantics and argumentation semantics. In particular, we are able to show equivalence between regular semantics for logic programming and preferred semantics for formal argumentation. We also show that there exist logic programming semantics (L-stable semantics) that cannot be captured by any abstract argumentation semantics.

13.
Current Gibbs sampling schemes in mixture of Dirichlet process (MDP) models are restricted to using “conjugate” base measures that allow analytic evaluation of the transition probabilities when resampling configurations, or alternatively need to rely on approximate numeric evaluations of some transition probabilities. Implementation of Gibbs sampling in more general MDP models is an open and important problem because most applications call for the use of nonconjugate base measures. In this article we propose a conceptual framework for computational strategies. This framework provides a perspective on current methods, facilitates comparisons between them, and leads to several new methods that expand the scope of MDP models to nonconjugate situations. We discuss one in detail. The basic strategy is based on expanding the parameter vector, and is applicable for MDP models with arbitrary base measure and likelihood. Strategies are also presented for the important class of normal-normal MDP models and for problems with fixed or few hyperparameters. The proposed algorithms are easily implemented and illustrated with an application.

14.
This article presents a probabilistic logic whose sentences can be interpreted as asserting the acceptability of gambles described in terms of an underlying logic. This probabilistic logic has a concrete syntax and a complete inference procedure, and it handles conditional as well as unconditional probabilities. It synthesizes Nilsson’s probabilistic logic and Frisch and Haddawy’s anytime inference procedure with Wilson and Moral’s logic of gambles. Two distinct semantics can be used for our probabilistic logic: (1) the measure-theoretic semantics used by the prior logics already mentioned, and also by the more expressive logic of Fagin, Halpern, and Megiddo, and (2) a behavioral semantics. Under the measure-theoretic semantics, sentences of our probabilistic logic are interpreted as assertions about a probability distribution over interpretations of the underlying logic. Under the behavioral semantics, these sentences are interpreted only as asserting the acceptability of gambles, and this suggests different directions for generalization.

15.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation–maximization (SAEM) algorithm to maximize the likelihood function, with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state, based on sequential Monte Carlo methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented: first a nonlinear Gaussian state-space model, then a state-space model having dynamics expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and with Gibbs sampling and particle marginal methods for Bayesian inference, are presented.
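The ABC filtering step can be sketched as a bootstrap-style particle filter in which a Gaussian ABC kernel on simulated pseudo-observations replaces the observation density; the model, kernel, and tolerance below are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_filter(y, n_part=500, eps=0.5, phi=0.9, s=0.5):
    """ABC particle filter for the toy model x_t = phi x_{t-1} + N(0, s^2),
    y_t = x_t + noise. A Gaussian ABC kernel on |y_sim - y_obs| replaces the
    observation density, which may be intractable in the SAEM-ABC setting."""
    x = rng.normal(0.0, 1.0, n_part)
    filt = np.empty(len(y))
    for t, yt in enumerate(y):
        x = phi * x + rng.normal(0.0, s, n_part)      # propagate particles
        y_sim = x + rng.normal(0.0, 0.3, n_part)      # simulate pseudo-observations
        w = np.exp(-0.5 * ((y_sim - yt) / eps) ** 2) + 1e-300
        w /= w.sum()
        filt[t] = np.sum(w * x)                       # filtered mean estimate
        x = rng.choice(x, size=n_part, p=w)           # multinomial resampling
    return filt

y = 0.3 * np.cumsum(rng.normal(size=50))              # some observed series
print(abc_filter(y)[:5])
```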

16.
Haplotype inference by pure parsimony (HIPP) is a well-known paradigm for haplotype inference. In order to assess the biological significance of this paradigm, we generalize the problem of HIPP to the problem of finding all optimal solutions, which we call CHIPP. We study intrinsic haplotype features, such as backbone haplotypes and fat genotypes, as well as equal columns and decomposability. We explicitly exploit these features in three computational approaches that are based on integer linear programming, depth-first branch-and-bound, and Boolean satisfiability. Further, we introduce two hybrid algorithms that draw upon the diverse strengths of the approaches. Our experimental analysis shows that our optimized algorithms are significantly superior to the baseline algorithms, often with orders-of-magnitude faster running time. Finally, our experiments provide some useful insights into the intrinsic features of this important problem.
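The pure-parsimony objective itself is easy to state in code; a brute-force sketch for tiny invented inputs (genotypes over {0, 1, 2}, with 2 marking a heterozygous site) that the ILP, branch-and-bound, and SAT approaches above scale far beyond:

```python
import itertools

def resolutions(genotype):
    """All haplotype pairs compatible with a genotype over {0, 1, 2},
    where 2 marks a heterozygous site."""
    het = [i for i, g in enumerate(genotype) if g == 2]
    for bits in itertools.product([0, 1], repeat=len(het)):
        h1, h2 = list(genotype), list(genotype)
        for i, b in zip(het, bits):
            h1[i], h2[i] = b, 1 - b                   # split the het site
        yield tuple(h1), tuple(h2)

def hipp_bruteforce(genotypes):
    """Smallest set of distinct haplotypes explaining all genotypes
    (pure parsimony), by brute force over per-genotype resolutions."""
    best = None
    for choice in itertools.product(*(list(resolutions(g)) for g in genotypes)):
        used = {h for pair in choice for h in pair}
        if best is None or len(used) < len(best):
            best = used
    return best

print(sorted(hipp_bruteforce([(2, 0), (2, 2)])))      # 3 haplotypes suffice here
```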

17.
The use of Markov Decision Processes for inspection, maintenance and rehabilitation of civil engineering structures relies on several transition matrices related to the stochastic degradation process, maintenance actions and imperfect inspections. Point estimates of these matrices are usually used, obtained by statistical inference methods and/or expert evaluation. Considerable epistemic uncertainty therefore often veils the true values of these matrices. Our contribution in this paper is threefold. First, we present a methodology for incorporating epistemic uncertainties in dynamic programming algorithms used to solve finite-horizon Markov Decision Processes (which may be partially observable). Second, we propose a methodology based on the use of Dirichlet distributions which answers, in our view, much of the controversy found in the literature about estimating Markov transition matrices. Third, we show how the complexity resulting from the use of Monte Carlo simulations for the transition matrices can be greatly reduced in the framework of dynamic programming. The proposed model is applied to a concrete bridge under degradation, in order to provide the optimal strategy for inspection and maintenance. The influence of epistemic uncertainties on the optimal solution is underlined through a sensitivity analysis of the input data.
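The Dirichlet treatment of epistemic uncertainty can be sketched directly: sample transition matrices row-wise from Dirichlet distributions, solve each sampled MDP by value iteration, and inspect the induced spread of optimal values (state/action sizes, counts, and costs below are invented; this is a simulation sketch, not the paper's dynamic programming scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_value_iteration(alpha, cost, n_samples=200, gamma=0.95, n_sweeps=300):
    """alpha[a, s]: Dirichlet parameters for the transition row from state s
    under action a; cost[a, s]: immediate cost. Returns sampled optimal
    values of state 0 across draws of the transition matrices."""
    n_a, n_s, _ = alpha.shape
    values = []
    for _ in range(n_samples):
        P = np.array([[rng.dirichlet(alpha[a, s]) for s in range(n_s)]
                      for a in range(n_a)])           # one epistemic draw
        V = np.zeros(n_s)
        for _ in range(n_sweeps):
            V = (cost + gamma * P @ V).min(axis=0)    # Bellman optimality backup
        values.append(V[0])
    return np.array(values)

# 2 actions ("do nothing" vs "maintain"), 3 condition states, invented data
alpha = np.array([[[8, 2, 1], [1, 6, 3], [1, 1, 9]],
                  [[9, 1, 1], [7, 2, 1], [5, 4, 1]]], dtype=float)
cost = np.array([[0.0, 1.0, 4.0], [2.0, 2.5, 5.0]])
vals = sampled_value_iteration(alpha, cost)
print(vals.mean(), np.percentile(vals, [5, 95]))      # epistemic spread of the value
```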

18.
In this paper we analyze conjugate gradient-type algorithms for solving convex quadratic programs subject only to box constraints (i.e., lower and upper bounds on the variables). Programs of this type, which we denote by BQP, play an important role in many optimization models and algorithms. We propose a new class of finite algorithms based on a nonfinite heuristic for solving large, sparse BQPs. The numerical results suggest that these algorithms are competitive with Dembo and Tulowitzki's (1983) CRGP algorithm in general, and perform better than CRGP for problems that have a low percentage of free variables at optimality, and for problems with only nonnegativity constraints.
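A much simpler relative of these methods, useful as a baseline sketch: projected gradient descent for the box-constrained QP (the algorithms studied in the paper combine conjugate-gradient steps with active-set identification, which this sketch omits):

```python
import numpy as np

def projected_gradient_bqp(Q, c, lo, hi, n_iter=2000):
    """Minimise 0.5 x'Qx + c'x subject to lo <= x <= hi (Q symmetric PSD)."""
    x = np.clip(np.zeros_like(c), lo, hi)
    step = 1.0 / np.linalg.norm(Q, 2)                 # 1/L, L = largest eigenvalue
    for _ in range(n_iter):
        x = np.clip(x - step * (Q @ x + c), lo, hi)   # gradient step, then project
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
print(projected_gradient_bqp(Q, c, lo=np.zeros(2), hi=np.ones(2)))
```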

19.
Probabilistic inference is among the main topics in reasoning under uncertainty in AI. For this purpose, Bayesian networks (BNs) are one of the most successful and efficient probabilistic graphical models (PGMs) so far. Since the mid-90s, a growing number of BN extensions have been proposed. Object-oriented, entity-relationship and first-order logic are the main representation paradigms used to extend BNs. While entity-relationship and first-order models have been successfully used in machine learning for defining lifted probabilistic inference, object-oriented models have been mostly underused. Structured inference, which exploits the structural knowledge encoded in an object-oriented PGM, is a surprisingly unstudied technique. In this paper we propose a full object-oriented framework for PRMs, together with two extensions of the state-of-the-art structured inference algorithm: SPI, which removes the major flaws of existing algorithms, and SPISBB, which largely enhances SPI by using d-separation.

20.
We present a probabilistic analysis of integer linear programs (ILPs). More specifically, we study ILPs in a so-called smoothed analysis in which it is assumed that first an adversary specifies the coefficients of an integer program and then (some of) these coefficients are randomly perturbed, e.g., using a Gaussian or a uniform distribution with small standard deviation. In this probabilistic model, we investigate structural properties of ILPs and apply them to the analysis of algorithms. For example, we prove a lower bound on the slack of the optimal solution. As a result of our analysis, we are able to specify the smoothed complexity of classes of ILPs in terms of their worst case complexity. This way, we obtain polynomial smoothed complexity for packing and covering problems with any fixed number of constraints. Previous results of this kind were restricted to the case of binary programs.
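The smoothed model and the slack property are easy to simulate on a toy binary packing problem: fix an adversarial instance, perturb the constraint coefficients with small Gaussian noise, and observe the slack of the optimum across trials (brute force, purely illustrative, with invented sizes):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def optimal_slack(c, a, b):
    """Brute-force  max c'x  s.t.  a'x <= b,  x binary; return slack b - a'x*."""
    best_x, best_val = np.zeros_like(c), -np.inf
    for bits in itertools.product([0, 1], repeat=len(c)):
        x = np.array(bits)
        if a @ x <= b and c @ x > best_val:
            best_x, best_val = x, c @ x
    return b - a @ best_x

# adversarial instance: all coefficients equal, so the constraint is exactly tight
c, a0, b = np.ones(10), np.ones(10), 5.0
slacks = [optimal_slack(c, a0 + rng.normal(0.0, 0.05, 10), b) for _ in range(100)]
print(min(slacks))   # after perturbation the optimum typically keeps some slack
```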
