Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
In this paper, we present an approach to reliability modeling and analysis based on the automatic conversion of a particular reliability engineering model, the Dynamic Fault Tree (DFT), into Dynamic Bayesian Networks (DBN). The approach is implemented in a software tool called RADYBAN (Reliability Analysis with DYnamic BAyesian Networks). The aim is to provide a familiar interface to reliability engineers by allowing them to model the system to be analyzed with a standard formalism; a modular algorithm then automatically compiles the DFT into the corresponding DBN. When the computation of specific reliability measures is requested, classical inference algorithms for Dynamic Bayesian Networks are exploited to compute the requested parameters. This is performed in a way that is completely transparent to the user, who could in principle be entirely unaware of the underlying Bayesian network. The use of DBNs allows the user to compute measures that are not directly computable from DFTs but are naturally obtainable through DBN inference. Moreover, the modeling capabilities of a DBN allow us to extend the basic DFT formalism by introducing probabilistic dependencies among system components, as well as specific repair policies that can be taken into account during the reliability analysis phase. We finally show how the approach operates on some specific examples, describing the advantages of having a full DBN-based inference engine available for the requested analysis tasks.
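As a minimal sketch of the kind of inference involved (not RADYBAN itself; the two-state model and per-slice probabilities below are hypothetical), consider forward filtering on a DBN for a single repairable component:

    import numpy as np

    # Hypothetical per-time-slice failure and repair probabilities for a
    # component with states OK (0) and failed (1).
    fail_p, repair_p = 0.02, 0.10
    T = np.array([[1 - fail_p, fail_p],        # P(X_{t+1} | X_t = OK)
                  [repair_p, 1 - repair_p]])   # P(X_{t+1} | X_t = failed)

    belief = np.array([1.0, 0.0])              # start in the OK state
    for t in range(1, 51):
        belief = belief @ T                    # DBN forward (filtering) step
        if t % 10 == 0:
            print(f"t={t:2d}  P(failed)={belief[1]:.4f}")

Repair policies enter simply as nonzero entries in the transition model, which is exactly the kind of extension the DFT formalism alone does not express.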

2.
This paper presents a novel approach to simulation metamodeling using dynamic Bayesian networks (DBNs) in the context of discrete event simulation. A DBN is a probabilistic model that represents the joint distribution of a sequence of random variables and enables the efficient calculation of their marginal and conditional distributions. In this paper, the construction of a DBN from simulation data and its use in simulation analyses are presented. The DBN metamodel allows the time evolution of the simulation to be studied by tracking the probability distribution of the simulation state over the duration of the run, a feature unprecedented among existing simulation metamodels. The DBN metamodel also enables effective what-if analysis, which reveals the conditional evolution of the simulation: the simulation state at a given time is fixed, and the probability distributions representing the state at other time instants are updated. Simulation parameters can be included in the DBN metamodel as external random variables, so the DBN offers a way to study the effects of parameter values and their uncertainty on the evolution of the simulation. The accuracy of the analyses allowed by DBNs is studied by constructing appropriate confidence intervals. These analyses could be conducted from raw simulation data, but the use of DBNs reduces the duration of repetitive analyses and is expedited by available Bayesian network software. The construction and analysis capabilities of DBN metamodels are illustrated with two example simulation studies.
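A toy illustration of the what-if analysis described above (the three-state abstraction of the simulation state and its transition probabilities are invented for the sketch, and only the forward update from the fixed instant is shown):

    import numpy as np

    # Hypothetical 3-state abstraction of a simulation state (e.g. queue
    # length: empty / moderate / congested).
    T = np.array([[0.7, 0.25, 0.05],
                  [0.3, 0.5,  0.2 ],
                  [0.1, 0.4,  0.5 ]])
    dist = np.array([1.0, 0.0, 0.0])

    # Unconditional evolution of the state distribution up to t = 10.
    marginals = [dist]
    for _ in range(10):
        dist = dist @ T
        marginals.append(dist)

    # What-if: fix the state at t = 5 to 'congested' (index 2) and
    # propagate the conditional distribution forward to t = 10.
    cond = np.zeros(3); cond[2] = 1.0
    for _ in range(10 - 5):
        cond = cond @ T

    print("P(state at t=10):         ", np.round(marginals[10], 3))
    print("P(state at t=10 | X_5=2): ", np.round(cond, 3))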

3.
The usual methods of applying Bayesian networks to the modeling of temporal processes, such as Dean and Kanazawa's dynamic Bayesian networks (DBNs), consist in discretizing time and creating an instance of each random variable for each point in time. We present a new approach, called network of probabilistic events in discrete time (NPEDT), for temporal reasoning with uncertainty in domains involving probabilistic events. Under this approach, time is discretized and each value of a variable represents the instant at which a certain event may occur. This is the main difference with respect to DBNs, in which the value of a variable $V_i$ represents the state of a real-world property at time $t_i$. Our method is therefore more appropriate for temporal fault diagnosis, because only one variable is necessary to represent the occurrence of a fault and, as a consequence, the networks involved are much simpler than those obtained by using DBNs. In contrast, DBNs are more appropriate for monitoring tasks, since they explicitly represent the state of the system at each moment. We also introduce several types of temporal noisy gates, which facilitate the acquisition and representation of uncertain temporal knowledge. They constitute a generalization of traditional canonical models of multicausal interaction, such as the noisy OR-gate, which have usually been applied to static domains. We illustrate the approach with the example domain of modeling the evolution of traffic jams produced on the outskirts of a city after an event that obliges traffic to stop indefinitely.
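A temporal noisy OR-gate can be sketched as follows (hypothetical delay distributions; each cause, once it occurs, independently triggers the effect after a random delay, and the leftover probability mass means it may never trigger it at all):

    import numpy as np

    horizon = 6
    # c[k][d] = P(cause k produces the effect d steps after it occurs);
    # the values sum to less than 1, leaving mass for "never triggers".
    c = [np.array([0.0, 0.5, 0.3]),   # cause 1, causal strength 0.8
         np.array([0.0, 0.4, 0.2])]   # cause 2, causal strength 0.6
    occurs_at = [0, 2]                # instants at which the causes occur

    # P(effect not yet by time t) = product over causes of the
    # probability that that cause has not yet triggered it.
    p_not_yet = np.ones(horizon)
    for k, t0 in enumerate(occurs_at):
        trig = np.zeros(horizon)
        for d, p in enumerate(c[k]):
            if t0 + d < horizon:
                trig[t0 + d] = p
        p_not_yet *= 1.0 - np.cumsum(trig)

    print(np.round(1.0 - p_not_yet, 3))   # P(effect occurs by time t)

Note that a single variable per event suffices, in line with the NPEDT philosophy, whereas a DBN would replicate a state variable across all six time slices.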

4.
Our paper focuses on a fundamental structural property of a family of linear switched control systems in the presence of impulsive dynamics. We consider dynamic processes governed by piecewise linear ODEs with controlled location transitions and describe the resulting system in the constructive form of an implicit dynamic system. The proposed algebraic modeling framework follows the celebrated behavioral approach (see Polderman and Willems (1998)) and makes it possible to apply some conventional techniques from the well-established theory of time-invariant implicit systems to the switched dynamics. The analytic results of our paper constitute a formal theoretical extension of switched control systems methodology and can be used (as an auxiliary step) in a concrete control design procedure. In this first part, we consider modeling aspects.
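Schematically (in our notation here, not necessarily the paper's), the class of systems considered combines an implicit switched flow with impulsive transitions at the switching instants $t_k$:

\[
  E_{\sigma(t)}\,\dot{x}(t) = A_{\sigma(t)}\,x(t) + B_{\sigma(t)}\,u(t),
  \qquad t \notin \{t_k\},
\]
\[
  x(t_k^{+}) = x(t_k^{-}) + G_{\sigma(t_k)}\,v_k ,
\]

where $\sigma$ is the controlled switching signal and $v_k$ models the impulsive jump at the location transition.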

5.
We consider the task of Bayesian inference of the mean of normal observations when the available data have been discretized and no prior knowledge about the mean and the variance exists. An application is presented which illustrates that the discretization of the data should not be ignored when their variability is of the order of the discretization step. We show that the standard (noninformative) prior for location-scale family distributions is no longer appropriate. We work out the reference prior of Berger and Bernardo, which leads to different and more reasonable results; however, for this prior the posterior also shows some undesirable properties. We argue that this is due to the inherent difficulty of the problem, which also affects other methods of inference. We therefore complement our analysis with an empirical Bayes approach. While this approach overcomes the disadvantages of the standard and reference priors and appears to provide reasonable inference, it may raise conceptual concerns. We conclude that it is difficult to provide a widely accepted prior for the considered problem.
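The starting point of any such analysis is the interval (grouped-data) likelihood, in which each discretized observation contributes a difference of normal CDF values rather than a density value. A minimal sketch with hypothetical data and discretization step (maximum likelihood shown for brevity; Bayesian treatments build on the same likelihood):

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    # Each recorded value y means the true value fell in [y-h/2, y+h/2].
    h = 1.0
    y = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 2.0])

    def neg_log_lik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        p = norm.cdf(y + h / 2, mu, sigma) - norm.cdf(y - h / 2, mu, sigma)
        return -np.sum(np.log(p))

    res = minimize(neg_log_lik, x0=[y.mean(), 0.0])
    print("mu_hat=%.3f  sigma_hat=%.3f" % (res.x[0], np.exp(res.x[1])))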

6.
Symplectic integration of separable Hamiltonian ordinary and partial differential equations is discussed. A von Neumann analysis is performed to obtain general linear stability criteria for symplectic methods applied to a restricted class of Hamiltonian PDEs. In this treatment, the symplectic step is performed prior to the spatial step, as opposed to the standard approach of spatially discretising the PDE to form a system of Hamiltonian ODEs to which a symplectic integrator can be applied. In this way, stability criteria are obtained by considering the spectra of linearised Hamiltonian PDEs rather than the spatial step size.
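For a separable Hamiltonian ODE, the prototypical symplectic method is the semi-implicit (symplectic) Euler scheme; a small sketch for the pendulum, with a hypothetical step size, shows the characteristic near-conservation of energy over long times:

    import numpy as np

    # Symplectic Euler for the separable Hamiltonian
    # H(q, p) = p^2/2 + (1 - cos q)  (pendulum).
    def symplectic_euler(q, p, dt, steps):
        for _ in range(steps):
            p -= dt * np.sin(q)   # kick:  p_{n+1} = p_n - dt * dV/dq(q_n)
            q += dt * p           # drift: q_{n+1} = q_n + dt * p_{n+1}
        return q, p

    q, p = symplectic_euler(1.0, 0.0, dt=0.1, steps=1000)
    H = 0.5 * p**2 + (1 - np.cos(q))
    print(f"q={q:.4f} p={p:.4f} H={H:.4f}")  # H stays near H(1, 0)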

7.
Pair-copula constructions of multiple dependence
Building on the work of Bedford, Cooke and Joe, we show how multivariate data which exhibit complex patterns of dependence in the tails can be modelled using a cascade of pair-copulae acting on two variables at a time. We use the pair-copula decomposition of a general multivariate distribution and propose a method for performing inference. The model construction is hierarchical in nature, the various levels corresponding to the incorporation of more variables into the conditioning sets, with pair-copulae as simple building blocks. Pair-copula decomposed models also represent a very flexible way to construct higher-dimensional copulae. We apply the methodology to a financial data set. Our approach represents a first step towards the development of an unsupervised algorithm that explores the space of possible pair-copula models and that can also be applied automatically to large data sets.
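In the three-dimensional case, the pair-copula decomposition underlying this construction reads:

\[
  f(x_1, x_2, x_3)
  = f_1(x_1)\, f_2(x_2)\, f_3(x_3)\,
    c_{12}\!\big(F_1(x_1), F_2(x_2)\big)\,
    c_{23}\!\big(F_2(x_2), F_3(x_3)\big)\,
    c_{13|2}\!\big(F_{1|2}(x_1 \mid x_2), F_{3|2}(x_3 \mid x_2)\big),
\]

with each pair-copula density $c$ acting on two (conditional) distribution functions at a time.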

8.
Time series are found widely in engineering and science. We study forecasting of stochastic, dynamic systems based on observations from multivariate time series. We model the domain as a dynamic multiply sectioned Bayesian network (DMSBN) and populate it with a set of proprietary, cooperative agents. We propose an algorithm suite that allows the agents to perform one-step forecasts with distributed probabilistic inference. We show that as long as the DMSBN is structurally time-invariant (though possibly parametrically time-variant), the forecast is exact and its time complexity is exponentially more efficient than that of dynamic Bayesian networks (DBNs). In comparison with independent DBN-based agents, multiagent DMSBNs produce more accurate forecasts. The effectiveness of the framework is demonstrated through experiments on a supply chain testbed.

9.
In this paper, we consider the knot placement problem in B-spline curve approximation. A novel two-stage framework is proposed for addressing this problem. In the first stage, the $l_{\infty, 1}$-norm model is introduced for the sparse selection of candidate knots from an initial knot vector; this step determines the knot number. In the second stage, the knot positions are formulated as a nonlinear optimization problem and optimized by a global optimization algorithm, differential evolution (DE). The candidate knots selected in the first stage serve as initial values for the DE algorithm; since they provide a good guess of the knot positions, the DE algorithm converges quickly. One advantage of the proposed algorithm is that the knot number and knot positions are determined automatically. Compared with existing algorithms, the proposed algorithm finds approximations with smaller fitting error when the knot number is fixed in advance. Furthermore, the proposed algorithm is robust to noisy data and can handle data sets with few points. We illustrate the method with some examples and applications.
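A sketch of the second stage under stated assumptions (synthetic data; scipy's differential evolution and least-squares spline fitting as stand-ins for the paper's implementation):

    import numpy as np
    from scipy.interpolate import make_lsq_spline
    from scipy.optimize import differential_evolution

    # Synthetic noisy data; the knot COUNT is fixed and the interior
    # knot POSITIONS are optimized globally by DE.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    y = np.sin(6 * np.pi * x**2) + 0.05 * rng.normal(size=x.size)
    k, n_interior = 3, 6

    def fit_error(interior):
        interior = np.sort(interior)
        t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
        try:
            spl = make_lsq_spline(x, y, t, k=k)
        except Exception:        # e.g. Schoenberg-Whitney violation
            return 1e6
        return np.sum((spl(x) - y) ** 2)

    res = differential_evolution(fit_error,
                                 bounds=[(0.01, 0.99)] * n_interior,
                                 seed=0)
    print("fitting SSE:", res.fun)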

10.
This paper describes a prototype clinical decision support system (CDSS) for risk stratification of patients with cardiac chest pain. A newly developed belief rule-based inference methodology, RIMER, was employed for developing the prototype. Based on this methodology, the prototype CDSS can deal with uncertainties in both clinical domain knowledge and clinical data. Moreover, the prototype can automatically update its knowledge base via a belief rule base (BRB) learning module, which adjusts the BRB using accumulated historical clinical cases. The domain-specific knowledge used to construct the knowledge base of the prototype was learned from real patient data. We simulated a set of 1000 patients with cardiac chest pain to validate the prototype, and the belief rule-based prototype CDSS was found to perform extremely well. Firstly, the system can provide more reliable and informative diagnosis recommendations than manual diagnosis using traditional rules when there are clinical uncertainties. Secondly, the diagnostic performance of the system can be significantly improved after training the BRB on accumulated clinical cases.
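To fix ideas, a toy belief rule base (not the clinical one, and with the full evidential reasoning aggregation replaced by a simple weighted sum) illustrates how activation weights combine distributed rule consequents:

    import numpy as np

    # Toy rules over one antecedent "chest-pain severity" with
    # referential values low/high; consequents are belief distributions
    # over {low risk, high risk}. All numbers are invented.
    rules = [
        ("low",  1.0, np.array([0.9, 0.1])),
        ("high", 1.0, np.array([0.2, 0.8])),
    ]

    def infer(severity):                      # severity in [0, 1]
        m = {"low": 1.0 - severity, "high": severity}
        w = np.array([rw * m[ref] for ref, rw, _ in rules])
        w = w / w.sum()                       # activation weights
        # Simplified aggregation (RIMER uses the full ER algorithm).
        return sum(wi * beta for wi, (_, _, beta) in zip(w, rules))

    print(infer(0.7))   # belief distribution over the two risk levels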

11.
Exceedances over high thresholds are often modeled by fitting a generalized Pareto distribution (GPD) on R+. It is difficult to select the threshold above which the GPD assumption is sufficiently sound and enough data are available for inference. We suggest a new dynamically weighted mixture model, where one term of the mixture is the GPD and the other is a light-tailed density. The weight function varies on R+ in such a way that the GPD component is predominant for large values and thus takes over the role of threshold selection. The full data set is used for inference on the parameters of the two component distributions and of the weight function. Maximum likelihood provides estimates with approximate standard deviations. Our approach has been applied successfully to simulated data and to the (previously studied) Danish fire loss data set. We compare the new dynamic mixture method to Dupuis' robust thresholding approach in peaks-over-threshold inference, and we discuss robustness with respect to the choice of the light-tailed component and the form of the weight function. We present encouraging simulation results indicating that the new approach can be useful in unsupervised tail estimation, especially in heavy-tailed situations and for small percentiles.
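Schematically, with hypothetical parameter values and a Weibull body standing in for the light-tailed component, such a dynamically weighted mixture density can be written down and normalized numerically:

    import numpy as np
    from scipy.stats import cauchy, genpareto, weibull_min
    from scipy.integrate import quad

    # Unnormalized mixture: light-tailed body, GPD tail, and a
    # Cauchy-CDF weight p(x) that tends to 1 for large x.
    def unnormalized(x, mu=2.0, tau=0.5, xi=0.4, beta=1.0, c=1.5, k=1.2):
        p = cauchy.cdf(x, loc=mu, scale=tau)
        return ((1 - p) * weibull_min.pdf(x, k, scale=c)
                + p * genpareto.pdf(x, xi, scale=beta))

    Z, _ = quad(unnormalized, 0, np.inf)   # normalizing constant
    x = np.array([0.5, 2.0, 10.0])
    print(unnormalized(x) / Z)             # mixture density values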

12.
Inference for SDE Models via Approximate Bayesian Computation
Models defined by stochastic differential equations (SDEs) allow for the representation of random variability in dynamical systems. The relevance of this class of models is growing in many applied research areas, and they are already a standard tool for modeling, for example, financial, neuronal, and population growth dynamics. However, inference for multidimensional SDE models is still very challenging, both computationally and theoretically. Approximate Bayesian computation (ABC) makes it possible to perform Bayesian inference for models that are sufficiently complex that the likelihood function is either analytically unavailable or computationally prohibitive to evaluate. A computationally efficient ABC-MCMC algorithm is proposed, halving the running time in our simulations. The focus here is on the case where the SDE describes latent dynamics in state-space models; however, the methodology is not limited to the state-space framework. We consider simulation studies for a pharmacokinetics/pharmacodynamics model and for stochastic chemical reactions, and we provide a Matlab package that implements our ABC-MCMC algorithm.
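A plain ABC rejection sampler on an Euler–Maruyama discretized Ornstein–Uhlenbeck process conveys the core idea (the paper's ABC-MCMC proposes parameters locally rather than from the prior; all values below are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    # Euler-Maruyama simulation of dX = theta*(1 - X) dt + 0.3 dW.
    def simulate(theta, x0=0.0, dt=0.01, n=500):
        x = np.empty(n); x[0] = x0
        for i in range(1, n):
            x[i] = (x[i-1] + theta * (1.0 - x[i-1]) * dt
                    + 0.3 * np.sqrt(dt) * rng.normal())
        return x

    obs = simulate(theta=2.0)                 # pretend this is the data
    s_obs = np.array([obs.mean(), obs.std()]) # summary statistics

    accepted = []
    for _ in range(2000):
        theta = rng.uniform(0.1, 5.0)         # draw from the prior
        s = simulate(theta)
        s_sim = np.array([s.mean(), s.std()])
        if np.linalg.norm(s_sim - s_obs) < 0.1:   # ABC tolerance
            accepted.append(theta)
    print(len(accepted), np.mean(accepted) if accepted else None)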

13.
We develop a general ontology of statistical methods and use it to propose a common framework for statistical analysis and software development built on and within the R language, including R's numerous existing packages. This framework offers a simple unified structure and syntax that can encompass a large fraction of existing statistical procedures. We conjecture that it can be used to encompass, and present simply, the vast majority of existing statistical methods without requiring changes in existing approaches, regardless of the theory of inference on which they are based, the notation with which they were developed, and the programming syntax with which they have been implemented. This development enabled us, and should enable others, to design statistical software with a single, simple, and unified user interface that helps overcome the conflicting notation, syntax, jargon, and statistical methods existing across the methodological subfields of numerous academic disciplines. The approach also enables one to build a graphical user interface that automatically includes any method encompassed within the framework. We hope that the result of this line of research will greatly reduce the time from the creation of a new statistical innovation to its widespread use by applied researchers, whether or not they use or program in R.

14.
We present a general framework for Bayesian estimation of incompletely observed multivariate diffusion processes. Observations are assumed to be discrete in time, noisy, and incomplete. We assume the drift and diffusion coefficient depend on an unknown parameter. A data-augmentation algorithm for drawing from the posterior distribution is presented, based on simulating diffusion bridges conditional on a noisy incomplete observation at an intermediate time. The dynamics of such filtered bridges are derived, and it is shown how they can be simulated using a generalised version of the guided proposals introduced in Schauer, Van der Meulen and Van Zanten (2017, Bernoulli 23(4A)).
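A stripped-down, one-dimensional caricature of a guided proposal (constant diffusion coefficient and exact end-point conditioning via the Brownian-bridge pulling term; the paper's filtered bridges condition on noisy incomplete observations instead):

    import numpy as np

    rng = np.random.default_rng(2)

    # Euler scheme for dX = b(X) dt + s dW, forced towards v at time T
    # by adding the guiding drift (v - x) / (T - t).
    def guided_bridge(b, s, x0, v, T=1.0, n=200):
        dt = T / n
        x = np.empty(n + 1); x[0] = x0
        for i in range(n):
            t = i * dt
            drift = b(x[i]) + (v - x[i]) / (T - t)
            x[i + 1] = x[i] + drift * dt + s * np.sqrt(dt) * rng.normal()
        return x

    path = guided_bridge(b=lambda x: -x, s=0.5, x0=0.0, v=1.0)
    print(path[0], round(path[-1], 3))   # ends (numerically) near v = 1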

15.
In this paper, we implement the “rescale and modify” approach in variable step size mode, with both fixed and variable orders, for stiff and nonstiff ODEs. A comparison of this approach with the “rescale” approach, from the standpoint of stability behavior as well as step size and order selection, yields very interesting results. To illustrate the efficiency of the method, we consider some standard test problems and report detailed tables and figures for step size and order changes, the number of rejected and accepted steps, and the global error. As an optimal implementation, the numerical experiments suggest applying this approach with both fixed and variable orders for nonstiff ODEs, and with variable order for stiff ODEs.

16.
A new method is described for computing nonlinear convex and concave relaxations of the solutions of parametric ordinary differential equations (ODEs). Such relaxations enable deterministic global optimization algorithms to be applied to problems with embedded ODEs, which arise in a wide variety of engineering applications. The proposed method computes relaxations as the solutions of an auxiliary system of ODEs, and a method for automatically constructing and numerically solving appropriate auxiliary ODEs is presented. This approach is similar to two existing methods, which are analyzed and shown to have undesirable properties that the new method avoids. Two numerical examples demonstrate that these improvements lead to significantly tighter relaxations than previous methods.

17.
Automatic differentiation of numerical integration algorithms
Automatic differentiation (AD) is a technique for automatically augmenting computer programs with statements for the computation of derivatives. This article discusses the application of automatic differentiation to numerical integration algorithms for ordinary differential equations (ODEs), in particular the ramifications of the fact that AD is applied not only to the solution produced by such an algorithm, but to the solution procedure itself. This subtle issue can lead to surprising results when AD tools are applied to variable-stepsize, variable-order ODE integrators. The computation of the final time step plays a special role in determining the computed derivatives. We investigate these issues using various integrators and suggest constructive approaches for obtaining the desired derivatives.

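The root of the issue is that forward-mode AD propagates derivatives through every arithmetic operation of the integrator, including its step-selection logic. A minimal dual-number sketch with a fixed-step RK4 (the problematic variable-stepsize case differs precisely in that the step sizes would also carry derivative components):

    class Dual:
        """Forward-mode AD number: value v, derivative d w.r.t. one seed."""
        def __init__(self, v, d=0.0):
            self.v, self.d = v, d
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.v + o.v, self.d + o.d)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
        __rmul__ = __mul__
        def __neg__(self):
            return Dual(-self.v, -self.d)

    def rk4(f, x, t, h, steps):
        # Classical fixed-step RK4; each operation on x also propagates
        # the derivative component of the Dual numbers.
        for _ in range(steps):
            k1 = f(t, x)
            k2 = f(t + h / 2, x + (h / 2) * k1)
            k3 = f(t + h / 2, x + (h / 2) * k2)
            k4 = f(t + h, x + h * k3)
            x = x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return x

    # dx/dt = -p*x with x(0) = 1, so x(1) = exp(-p), dx(1)/dp = -exp(-p).
    p = Dual(1.5, 1.0)                 # seed: differentiate w.r.t. p
    xT = rk4(lambda t, x: -p * x, Dual(1.0), 0.0, 0.01, 100)
    print(xT.v, xT.d)                  # ~ exp(-1.5) and ~ -exp(-1.5)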


18.
This paper considers the numerical solution of optimal control problems based on ODEs. We assume that an explicit Runge-Kutta method is applied to integrate the state equation in the context of a recursive discretization approach. To compute the gradient of the cost function, one may employ automatic differentiation (AD). This paper presents the integration schemes that are automatically generated when differentiating the discretization of the state equation using AD. We show that they can be seen as discretization methods for the sensitivity and adjoint differential equations of the underlying control problem. Furthermore, we prove that the convergence rate of the scheme automatically derived for the sensitivity equation coincides with the convergence rate of the integration scheme for the state equation. Under mild additional assumptions on the coefficients of the integration scheme for the state equation, we show a similar result for the scheme automatically derived for the adjoint equation. Numerical results illustrate the theoretical findings.
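For reference, writing the state equation as $\dot{x} = f(t, x, p)$, the forward sensitivity system that the AD-generated scheme discretizes is

\[
  \dot{S}(t) = f_x\big(t, x(t), p\big)\, S(t) + f_p\big(t, x(t), p\big),
  \qquad S(t_0) = \frac{\partial x_0}{\partial p},
\]

where $S(t) = \partial x(t) / \partial p$ collects the sensitivities of the state with respect to the parameters (or controls).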

19.
We deal with the optimal approximation of solutions of ODEs under a local Lipschitz condition and inexact discrete information about the right-hand-side functions. We show that the randomized two-stage Runge–Kutta scheme is the optimal method among all randomized algorithms based on standard noisy information. We perform numerical experiments that confirm our theoretical findings. Moreover, for the optimal algorithm we rigorously investigate the properties of its regions of absolute stability.
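The scheme in question draws the internal evaluation point uniformly at random in every step; a sketch on a toy test problem (ours, not the paper's experiments):

    import numpy as np

    rng = np.random.default_rng(3)

    # Randomized two-stage Runge-Kutta step: tau ~ U(0,1) is redrawn in
    # each step, which is the randomization the optimality result concerns.
    def randomized_rk2(f, x, t0, T, n):
        h = (T - t0) / n
        t = t0
        for _ in range(n):
            tau = rng.uniform()
            k1 = f(t, x)
            x = x + h * f(t + tau * h, x + tau * h * k1)
            t += h
        return x

    # dx/dt = -x, x(0) = 1 on [0, 1]; exact value x(1) = exp(-1).
    approx = randomized_rk2(lambda t, x: -x, 1.0, 0.0, 1.0, 1000)
    print(approx, np.exp(-1.0))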

20.
Designing systems with human agents is difficult because it often requires models that characterize agents' responses to changes in the system's states and inputs. An example of this scenario occurs when designing treatments for obesity. While weight loss interventions based on increasing physical activity and modifying diet have had success in reducing individuals' weight, such programs are difficult to maintain over long periods of time due to lack of patient adherence. A promising approach to increasing adherence is the personalization of treatments to each patient. In this paper, we make a contribution toward treatment personalization by developing a framework for predictive modeling using utility functions that depend upon both time-varying system states and motivational states evolving according to a modeled process corresponding to qualitative social science models of behavior change. Computing the predictive model requires solving a bilevel program, which we reformulate as a mixed-integer linear program (MILP). This reformulation provides the first (to our knowledge) formulation for Bayesian inference that uses empirical histograms as prior distributions. We study the predictive ability of our framework using a data set from a weight loss intervention, and our predictive model is validated by comparison to standard machine learning approaches. We conclude by describing how our predictive model could be used for optimization, unlike standard machine learning approaches, which cannot.
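A toy rendering of the histogram-prior Bayesian update (invented bins and counts; the paper embeds this inference inside a bilevel program reformulated as an MILP):

    import numpy as np

    # Prior over a behavioral parameter given as an empirical HISTOGRAM
    # rather than a parametric density.
    bin_centers = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    prior = np.array([5, 12, 20, 9, 4], dtype=float)
    prior /= prior.sum()

    # Likelihood of an adherence outcome (1 = adhered) under a Bernoulli
    # model whose success probability equals the parameter value.
    def update(prior_probs, outcome):
        lik = bin_centers if outcome == 1 else 1.0 - bin_centers
        post = prior_probs * lik
        return post / post.sum()

    p = prior
    for y in [1, 1, 0, 1]:           # hypothetical observed outcomes
        p = update(p, y)
    print(np.round(p, 3))            # posterior histogram over the bins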
