Similar Literature
20 similar documents found.
1.
Expert systems have recently become popular and are attracting more and more attention. The high-quality performance achieved by some systems in areas previously not considered practical for computational solutions has led to great interest from many different disciplines. Most expert systems use a subset of techniques from the general area of computer science research known as artificial intelligence. However, some expert systems have been developed that incorporate more traditional mathematical modeling techniques. The combination of artificial intelligence techniques and more traditional mathematical techniques has proved quite effective in developing several high-performance computer software systems. The techniques used in expert systems may be what is needed to bridge the gap between classical operational research modeling and human decision making processes. This paper addresses how expert systems techniques are being used in problem solving and why someone in operational research might want to use them.

2.
Approximate Bayesian inference by importance sampling derives probabilistic statements from a Bayesian network, an essential part of evidential reasoning with the network and an important aspect of many Bayesian methods. A critical problem in importance sampling on Bayesian networks is the selection of a good importance function to sample a network’s prior and posterior probability distribution. An importance function that is initially optimal eventually deviates from the optimal function when sampling a network’s posterior distribution given evidence, even when adaptive methods are used that adjust the importance function to the evidence by learning. In this article we propose a new family of Refractor Importance Sampling (RIS) algorithms for adaptive importance sampling under evidential reasoning. RIS applies “arc refractors” to a Bayesian network by adding new arcs and refining the conditional probability tables. The goal of RIS is to optimize the importance function for the posterior distribution and reduce the error variance of sampling. Our experimental results show a significant improvement of RIS over state-of-the-art adaptive importance sampling algorithms.
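As a generic illustration of the underlying idea (not the RIS algorithm itself), the sketch below estimates a posterior in a toy two-node network A → B by importance sampling, using the prior over A as the importance function; all names and probabilities are invented for the example.

    import random

    P_A = {True: 0.3, False: 0.7}          # prior P(A)
    P_B_given_A = {True: 0.9, False: 0.2}  # P(B=True | A)

    def estimate_posterior_A(evidence_b=True, n=100_000):
        """Estimate P(A=True | B=evidence_b) by self-normalised
        importance sampling, with the prior as importance function."""
        num = den = 0.0
        for _ in range(n):
            a = random.random() < P_A[True]        # sample A from the prior
            p_b = P_B_given_A[a]                   # P(B=True | a)
            w = p_b if evidence_b else 1.0 - p_b   # importance weight
            num += w * (1.0 if a else 0.0)
            den += w
        return num / den

    print(estimate_posterior_A())  # exact answer: 0.27 / 0.41 ≈ 0.659

The farther the sampling distribution drifts from the true posterior, the larger the weight variance; reducing that variance is exactly what the arc refractors target.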

3.
Inference algorithms in directed evidential networks (DEVN) obtain their efficiency by making use of the independencies represented between variables in the model. This can be done using the disjunctive rule of combination (DRC) and the generalized Bayesian theorem (GBT), both proposed by Smets [Ph. Smets, Belief functions: the disjunctive rule of combination and the generalized Bayesian theorem, International Journal of Approximate Reasoning 9 (1993) 1–35]. These rules make it possible to use conditional belief functions for reasoning in directed evidential networks, avoiding the computation of joint belief functions on the product space. In this paper, new algorithms based on these two rules are proposed for the propagation of belief functions in singly and multiply connected directed evidential networks.

4.
Bayesian Networks (BNs) are probabilistic inference engines that support reasoning under uncertainty. This article presents a methodology for building an information technology (IT) implementation BN from client–server survey data. The article also demonstrates how to use the BN to predict the attainment of IT benefits, given specific implementation characteristics (e.g., application complexity) and activities (e.g., reengineering). The BN is an outcome of a machine learning process that finds the network’s structure and its associated parameters, which best fit the data. The article will be of interest to academicians who want to learn more about building BNs from real data and practitioners who are interested in IT implementation models that make probabilistic statements about certain implementation decisions.

5.
6.
As one of the most important aspects of condition-based maintenance (CBM), failure prognosis has attracted increasing attention with the growing demand for higher operational efficiency and safety in industrial systems. Currently there are no effective methods that can predict a hidden failure of a system in real time when environmental factors change and no accurate mathematical model of the system is available, owing to the system's intrinsic complexity and its operation in a potentially uncertain environment. This paper therefore focuses on developing a new hidden Markov model (HMM) based method that can deal with the problem. Although an accurate model relating environmental factors to the failure process is difficult to obtain, some expert knowledge can be collected and represented by a belief rule base (BRB), which is in essence an expert system. By combining the HMM with the BRB, a new prognosis model is proposed to predict the hidden failure in real time even under the influence of changing environmental factors. In the proposed model, the HMM captures the relationships between the hidden failure and the monitored observations of a system. The BRB models the relationships between the environmental factors and the transition probabilities among the hidden states of the system, including the hidden failure, which is the main contribution of this paper. Moreover, a recursive algorithm for updating the prognosis model online is developed. An experimental case study demonstrates the implementation and potential applications of the proposed real-time failure prognosis method.
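A minimal sketch of the general mechanism, with a made-up rule standing in for the paper's belief rule base: an HMM forward filter whose transition matrix is modulated by an observed environmental factor. All states, probabilities, and the rule itself are illustrative assumptions, not the paper's model.

    import numpy as np

    states = ["healthy", "degraded", "failed"]       # hidden states
    B = np.array([[0.80, 0.15, 0.05],                # P(obs | state);
                  [0.30, 0.50, 0.20],                # rows: states,
                  [0.05, 0.25, 0.70]])               # cols: obs levels 0..2

    def transition_matrix(env):
        """Toy 'rule base': higher env stress raises degradation rates."""
        d = 0.05 + 0.10 * env                        # env in [0, 1]
        return np.array([[1 - d,     d,         0.0],
                         [0.0,   1 - 2 * d,   2 * d],
                         [0.0,       0.0,       1.0]])

    def forward_step(alpha, obs, env):
        """One recursive update of the hidden-state distribution."""
        alpha = alpha @ transition_matrix(env)       # predict
        alpha = alpha * B[:, obs]                    # correct with observation
        return alpha / alpha.sum()

    alpha = np.array([1.0, 0.0, 0.0])                # start healthy
    for obs, env in [(0, 0.2), (1, 0.6), (2, 0.9)]:  # (observation, env factor)
        alpha = forward_step(alpha, obs, env)
        print(dict(zip(states, alpha.round(3))))     # failure probability grows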

7.
Statistical problems were at the origin of the mathematical theory of evidence, or Dempster–Shafer theory. It was also one of the major concerns of Philippe Smets, starting with his PhD dissertation. This subject is reconsidered here, starting with functional models, describing how data is generated in statistical experiments. Inference is based on these models, using probabilistic assumption-based reasoning. It results in posterior belief functions on the unknown parameters. Formally, the information used in the process of inference can be represented by hints. Basic operations on hints are combination, corresponding to Dempster’s rule, and focussing. This leads to an algebra of hints. Applied to functional models, this introduces an algebraic flavor into statistical inference. It emphasizes the view that in statistical inference different pieces of information have to be combined and then focussed onto the question of interest. This theory covers Bayesian and Fisher type inference as two extreme cases of a more general theory of inference.
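For concreteness, here is a small sketch of Dempster's rule, the combination operation the hint algebra builds on; the frame and mass assignments are invented for the example.

    from itertools import product

    frame = frozenset({"a", "b", "c"})               # frame of discernment

    m1 = {frozenset({"a"}): 0.6, frame: 0.4}                 # hint 1
    m2 = {frozenset({"a", "b"}): 0.7, frozenset({"c"}): 0.3} # hint 2

    def combine(m1, m2):
        """Dempster's rule: intersect focal sets, renormalise by 1 - K."""
        raw, conflict = {}, 0.0
        for (A, wa), (B, wb) in product(m1.items(), m2.items()):
            inter = A & B
            if inter:
                raw[inter] = raw.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                  # mass sent to the empty set
        return {A: w / (1.0 - conflict) for A, w in raw.items()}, conflict

    combined, K = combine(m1, m2)
    print(K)          # global conflict between the two hints: 0.18
    print(combined)   # combined mass function, renormalised by 0.82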

8.
Time series are found widely in engineering and science. We study forecasting of stochastic, dynamic systems based on observations from multivariate time series. We model the domain as a dynamic multiply sectioned Bayesian network (DMSBN) and populate the domain with a set of proprietary, cooperative agents. We propose an algorithm suite that allows the agents to perform one-step forecasts with distributed probabilistic inference. We show that as long as the DMSBN is structurally time-invariant (possibly parametrically time-variant), the forecast is exact and its time complexity is exponentially more efficient than that of dynamic Bayesian networks (DBNs). In comparison with independent DBN-based agents, multiagent DMSBNs produce more accurate forecasts. The effectiveness of the framework is demonstrated through experiments on a supply chain testbed.

9.
In this paper, we present two classification approaches based on Rough Sets (RS) that are able to learn decision rules from uncertain data. We assume that the uncertainty exists only in the decision attribute values of the Decision Table (DT) and is represented by belief functions. The first technique, named the Belief Rough Set Classifier (BRSC), is based only on the basic concepts of Rough Sets. The second, called the Belief Rough Set Classifier based on the Generalization Distribution Table (BRSC-GDT), is more sophisticated: it relies on a hybridization of the Generalization Distribution Table and Rough Sets (GDT-RS). Both classifiers aim at simplifying the Uncertain Decision Table (UDT) in order to generate significant decision rules for the classification process. Furthermore, to improve the time complexity of the construction procedure of the two classifiers, we apply a heuristic method of attribute selection based on rough sets. To evaluate the performance of each classification approach, we carry out experiments on a number of standard real-world databases, artificially introducing uncertainty in the decision attribute values. In addition, we test our classifiers on a naturally uncertain web usage database. We compare our belief rough set classifiers with traditional classification methods for the certain case only. Besides, we compare the results for the uncertain case with those given by a similar classifier, called the Belief Decision Tree (BDT), which also deals with uncertain decision attribute values.
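The following sketch shows the basic rough-set machinery (certain case only) that both classifiers build on: indiscernibility classes and lower/upper approximations over a toy decision table. The table and attribute values are invented for illustration.

    rows = {                      # object -> (condition attributes, decision)
        1: (("high", "yes"), "fit"),
        2: (("high", "yes"), "unfit"),
        3: (("low",  "no"),  "fit"),
        4: (("low",  "no"),  "fit"),
        5: (("high", "no"),  "unfit"),
    }

    # Indiscernibility classes: objects sharing the same condition attributes.
    classes = {}
    for obj, (cond, _) in rows.items():
        classes.setdefault(cond, set()).add(obj)

    target = {o for o, (_, d) in rows.items() if d == "fit"}  # concept "fit"

    lower = set().union(*(c for c in classes.values() if c <= target))
    upper = set().union(*(c for c in classes.values() if c & target))

    print(lower)  # {3, 4}: certainly "fit" -> certain decision rules
    print(upper)  # {1, 2, 3, 4}: possibly "fit"; boundary = upper - lower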

10.
Geospatial reasoning has been an essential aspect of military planning since the invention of cartography. Although maps have always been a focal point for developing situational awareness, the dawning era of network-centric operations brings the promise of unprecedented battlefield advantage through improved geospatial situational awareness. Geographic information systems (GIS) and GIS-based decision support systems are ubiquitous within current military forces, as well as civil and humanitarian organizations. Understanding the quality of geospatial data is essential to using it intelligently. A systematic approach to data quality requires: estimating and describing the quality of data as they are collected; recording the data quality as metadata; propagating uncertainty through models for data processing; exploiting uncertainty appropriately in decision support tools; and communicating to the user the uncertainty in the final product. The state of the practice in GIS applications falls short in dealing with uncertainty, and no single point solution can fully address the problem; rather, a system-wide approach is necessary. Bayesian reasoning provides a principled and coherent framework for representing knowledge about data quality, drawing inferences from data of varying quality, and assessing the impact of data quality on modeled effects. Use of a Bayesian approach also drives a requirement for appropriate probabilistic information in geospatial data quality metadata. This paper describes our research on data quality for military applications of geospatial reasoning, and describes model views appropriate for model builders, analysts, and end users.

11.
In this article, we aim to analyze the limitations of learning in automata-based systems by introducing the L+ algorithm to replicate quasi-perfect learning, i.e., a situation in which the learner can get the correct answer to any of its queries. This extreme assumption allows the generalization of any limitations of the learning algorithm to less sophisticated learning systems. We analyze the conditions under which L+ infers the correct automaton and when it fails to do so. In the context of the repeated prisoners’ dilemma, we exemplify how L+ may fail to learn the correct automaton. We prove that a sufficient condition for the L+ algorithm to learn the correct automaton is to use a large number of look-ahead steps. Finally, we show empirically, in the product differentiation problem, that the computational time of the L+ algorithm is polynomial in the number of states but exponential in the number of agents.

12.
The usual methods of applying Bayesian networks to the modeling of temporal processes, such as Dean and Kanazawa’s dynamic Bayesian networks (DBNs), consist of discretizing time and creating an instance of each random variable for each point in time. We present a new approach, called a network of probabilistic events in discrete time (NPEDT), for temporal reasoning with uncertainty in domains involving probabilistic events. Under this approach, time is discretized and each value of a variable represents the instant at which a certain event may occur. This is the main difference with respect to DBNs, in which the value of a variable Vi represents the state of a real-world property at time ti. Our method is therefore more appropriate for temporal fault diagnosis, because only one variable is necessary to represent the occurrence of a fault and, as a consequence, the networks involved are much simpler than those obtained by using DBNs. In contrast, DBNs are more appropriate for monitoring tasks, since they explicitly represent the state of the system at each moment. We also introduce in this paper several types of temporal noisy gates, which facilitate the acquisition and representation of uncertain temporal knowledge. They constitute a generalization of traditional canonical models of multicausal interactions, such as the noisy OR-gate, which have usually been applied to static domains. We illustrate the approach with the example domain of modeling the evolution of traffic jams produced on the outskirts of a city, after the occurrence of an event that obliges traffic to stop indefinitely.
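As a reference point, here is the classical (static) noisy OR-gate that the paper's temporal noisy gates generalize, applied to the traffic-jam example; the causes and link probabilities are invented for illustration.

    def noisy_or(active_causes, link_probs, leak=0.0):
        """P(effect | causes): each active cause i independently produces
        the effect with probability link_probs[i]; 'leak' covers
        unmodelled background causes."""
        p_no_effect = 1.0 - leak
        for cause in active_causes:
            p_no_effect *= 1.0 - link_probs[cause]
        return 1.0 - p_no_effect

    link = {"accident": 0.8, "roadworks": 0.4}  # P(jam | single cause alone)
    print(noisy_or({"accident"}, link))                # 0.8
    print(noisy_or({"accident", "roadworks"}, link))   # 1 - 0.2*0.6 = 0.88
    print(noisy_or(set(), link, leak=0.05))            # background jam rate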

13.
Recommender systems enable users to discover products or articles that they would otherwise not be aware of amid the wealth of information to be found on the Internet. The two traditional recommendation techniques are content-based and collaborative filtering. While both methods have their advantages, they also have certain disadvantages, some of which can be solved by combining the two techniques to improve the quality of the recommendation. The resulting system is known as a hybrid recommender system. In the context of artificial intelligence, Bayesian networks have been widely and successfully applied to problems with a high level of uncertainty, and the field of recommendation represents a very interesting testing ground for putting these probabilistic tools into practice. This paper therefore presents a new Bayesian network model to deal with the problem of hybrid recommendation by combining content-based and collaborative features. It has been tailored to the problem in hand and is equipped with a flexible topology and efficient mechanisms to estimate the required probability distributions so that probabilistic inference may be performed. The effectiveness of the model is demonstrated using the MovieLens and IMDB data sets.

14.
15.
Given a Probabilistic Boolean Network (PBN), an important problem is to study its steady-state probability distribution for network analysis. In this paper, we present a new perturbation bound of the steady-state probability distribution of PBNs with gene perturbation. The main contribution of our results is that this new bound is established without the additional condition required by the existing method. The other contribution of this paper is to propose a fast algorithm, based on the special structure of the transition probability matrix of PBNs with gene perturbation, to compute the steady-state probability distribution. Experimental results are given to demonstrate the effectiveness of the new bound and the efficiency of the proposed method.
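A basic sketch of the quantity in question: the steady-state distribution π satisfying π = πP, computed here by plain power iteration on an invented 3×3 transition matrix. The paper's fast algorithm exploits PBN-specific structure that this generic sketch does not.

    import numpy as np

    P = np.array([[0.90, 0.05, 0.05],   # row-stochastic transition matrix;
                  [0.10, 0.80, 0.10],   # gene perturbation makes all entries
                  [0.05, 0.05, 0.90]])  # positive, so pi is unique

    pi = np.full(3, 1.0 / 3.0)          # arbitrary starting distribution
    for _ in range(1000):
        nxt = pi @ P                    # one step of the chain
        if np.abs(nxt - pi).max() < 1e-12:
            break                       # converged
        pi = nxt

    print(pi)                           # steady-state: pi = pi @ P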

16.
17.
In this paper we present recent results for bicharacteristic-based finite volume schemes, the so-called finite volume evolution Galerkin (FVEG) schemes. These methods were proposed to solve multi-dimensional hyperbolic conservation laws. They combine the usually conflicting design objectives of using the conservation form and following the characteristics, or bicharacteristics. This is realized by combining the finite volume formulation with approximate evolution operators, which use bicharacteristics of the multi-dimensional hyperbolic system. In this way all of the infinitely many directions of wave propagation are taken into account. The main goal of this paper is to present a self-contained overview of the recent results. We study the L1-stability of the finite volume schemes obtained by various approximations of the flux integrals. Several numerical experiments presented in the last section confirm robustness and correct multi-dimensional behaviour of the FVEG methods. This research has been supported under the VW-Stiftung grant I 76 859, by the grant No 201/03 0570 of the Grant Agency of the Czech Republic, by the Deutsche Forschungsgemeinschaft grant GK 431, and partially by the European network HYKE, funded by the EC as contract HPRN-CT-2002-00282.

18.
In this paper, we address the problem of identifying the potential sources of conflict between information sources in the framework of belief function theory. To this aim, we propose a decomposition of the global measure of conflict as a function defined over the power set of the frame of discernment. This decomposition, which associates a part of the conflict with some hypotheses, allows the origin of the conflict to be identified, so that the conflict is considered as “local” to some hypotheses. This is more informative than the usual global measures of conflict or disagreement between sources. Having shown the uniqueness of this decomposition, we illustrate its use on two examples. The first is a toy example in which the fact that the conflict is mainly brought by one hypothesis allows its origin to be identified. The second is a real application, namely robot localization, where we show that focusing the conflict measure on the “favored” hypothesis (the one that would be decided) helps to make the fusion process more robust.

19.
In this paper, we consider a class of time-lag optimal control problems involving control and terminal inequality constraints. A feasible direction algorithm has been obtained by Teo, Wong, and Clements for solving this class of optimal control problems. It was shown that any L accumulation points of the sequence of controls generated by the algorithm satisfy a necessary condition for optimality. However, such L accumulation points need not exist. The aim of this paper is to prove a convergence result, which ensures that the sequence of controls generated by the algorithm always has accumulation points in the sense of control measure, and that these accumulation points satisfy a necessary condition for optimality for the corresponding relaxed problem. This work was done when the first author was on sabbatical leave at the School of Mathematics, University of New South Wales, Australia.

20.
This paper describes a temporal reasoning system that supports deductions for modeling the physics (i.e., cause-and-effect relationships) of a specified planning domain. We demonstrate how the process of planning can be profitably partitioned into two inferential components: one responsible for making choices relevant to the construction of a plan, and a second responsible for maintaining an accurate picture of the future that takes into account the planner's intended actions. Causal knowledge about the effects of actions and the behavior of processes is stored apart from knowledge of plans for achieving specific tasks. Using this causal knowledge, the second component is able to predict the consequences of actions proposed by the first component and to notice interactions that may affect the success of the plan under construction. By keeping track of the reasons why each prediction and choice is made, the resulting system is able to reason efficiently about the consequences of making new choices and retracting old ones. The system described in this paper makes it particularly simple and efficient to reason about actions whose effects vary depending upon the circumstances in which the actions are executed.
