Similar Articles
20 similar articles found.
1.
2.
In this paper, we consider a latent Markov process governing the intensity rate of a Poisson process model for software failures. The latent process enables us to infer performance of the debugging operations over time and allows us to deal with the imperfect debugging scenario. We develop the Bayesian inference for the model and also introduce a method to infer the unknown dimension of the Markov process. We illustrate the implementation of our model and the Bayesian approach by using actual software failure data.
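A minimal sketch of the kind of model this abstract describes — a Poisson failure process whose intensity is driven by a hidden Markov chain — assuming an illustrative two-state chain; the rates `lambdas` and `switch_rate` are made up, and the sketch only simulates the model, not the Bayesian inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state latent Markov chain: state 0 = "effective debugging"
# (low failure intensity), state 1 = "imperfect debugging" (high intensity).
lambdas = np.array([0.5, 2.0])   # Poisson failure rates per unit time, by state
switch_rate = 0.1                # rate of leaving the current latent state
horizon = 100.0

def simulate_mmpp(lambdas, switch_rate, horizon, rng):
    """Simulate failure times of a Markov-modulated Poisson process."""
    t, state, failures = 0.0, 0, []
    while t < horizon:
        sojourn = rng.exponential(1.0 / switch_rate)  # time spent in this state
        end = min(t + sojourn, horizon)
        # Within a sojourn the intensity is constant, so failures form a
        # homogeneous Poisson process with rate lambdas[state].
        u = t
        while True:
            u += rng.exponential(1.0 / lambdas[state])
            if u >= end:
                break
            failures.append(u)
        t, state = end, 1 - state                     # switch latent state
    return np.array(failures)

times = simulate_mmpp(lambdas, switch_rate, horizon, rng)
print(f"{times.size} failures; first five at {np.round(times[:5], 2)}")
```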

3.
The objective of studying software reliability is to assist software engineers in understanding more of the probabilistic nature of software failures during the debugging stages and to construct reliability models. In this paper, we consider modeling of a multiplicative failure rate whose components are evolving stochastically over testing stages and discuss its Bayesian estimation. In doing so, we focus on the modeling of parameters such as the fault detection rate per fault and the number of faults. We discuss how the proposed model can account for “imperfect debugging” under certain conditions. We use actual inter-failure data to carry out inference on model parameters via Markov chain Monte Carlo methods and present additional insights from Bayesian analysis.
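One classical instance of such a multiplicative failure rate is the Jelinski–Moranda model, where the rate before the i-th failure is φ(N − i + 1): the fault detection rate per fault times the number of remaining faults. Below is a hedged sketch of a two-block Gibbs sampler for that simpler model, not the authors' exact one; the Gamma prior on φ, the truncated uniform prior on N, and the inter-failure data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative inter-failure times (time between successive failures).
t = np.array([7., 11., 2., 8., 10., 15., 22., 40., 75., 30.])
n = t.size
i = np.arange(1, n + 1)

# Jelinski-Moranda: t_i ~ Exponential(rate = phi * (N - i + 1)).
a, b = 1.0, 1.0    # Gamma(a, b) prior on phi (assumed)
N_max = 60         # truncation for the (assumed uniform) discrete prior on N

phi, N = 0.01, n + 5    # initial values
draws = []
for it in range(5000):
    # phi | N, t ~ Gamma(a + n, b + sum_i (N - i + 1) t_i)   (conjugate update)
    S = np.sum((N - i + 1) * t)
    phi = rng.gamma(a + n, 1.0 / (b + S))
    # N | phi, t: discrete, proportional to prod_i (N - i + 1) * exp(-phi * S(N))
    Ns = np.arange(n, N_max + 1)
    logp = np.array([np.sum(np.log(m - i + 1)) - phi * np.sum((m - i + 1) * t)
                     for m in Ns])
    p = np.exp(logp - logp.max())
    N = rng.choice(Ns, p=p / p.sum())
    draws.append((phi, N))

phis, Ns_draw = np.array(draws[1000:]).T   # discard burn-in
print(f"posterior mean phi ~ {phis.mean():.4f}, N ~ {Ns_draw.mean():.1f}")
```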

4.
We define a new class of coloured graphical models, called regulatory graphs. These graphs have their own distinctive formal semantics and can directly represent typical qualitative hypotheses about regulatory processes like those described by various biological mechanisms. They admit an embellishment into classes of probabilistic statistical models and so standard Bayesian methods of model selection can be used to choose promising candidate explanations of regulation. Regulation is modelled by the existence of a deterministic relationship between the longitudinal series of observations labelled by the receiving vertex and the donating one. This class contains longitudinal cluster models as a degenerate case. Edge colours directly distinguish important features of the mechanism like inhibition and excitation, and graphs are often cyclic. With appropriate distributional assumptions, because the regulatory relationships map onto each other through a group structure, it is possible to define a conditionally conjugate analysis. This means that even when the model space is huge it is nevertheless feasible, using a Bayesian MAP search, to discover a regulatory network with a high Bayes factor score. We also show that, like the class of Bayesian networks, regulatory graphs admit a formal but distinctive causal algebra. The topology of the graph then represents collections of hypotheses about the predicted effect of controlling the process by tearing out message passers or forcing them to transmit certain signals. We illustrate our methods on a microarray experiment measuring the expression of thousands of genes as a longitudinal series, where the scientific interest lies in the circadian regulation of the plants studied.

5.
From observational data alone, a causal DAG is only identifiable up to Markov equivalence. Interventional data generally improves identifiability; however, the gain of an intervention strongly depends on the intervention target, that is, the intervened variables. We present active learning (that is, optimal experimental design) strategies that calculate optimal interventions for two different learning goals. The first is a greedy approach using single-vertex interventions that maximizes the number of edges that can be oriented after each intervention. The second yields, in polynomial time, a minimum set of targets of arbitrary size that guarantees full identifiability. This second approach proves a conjecture of Eberhardt (2008) [1] on the number of unbounded intervention targets that is sufficient and, in the worst case, necessary for full identifiability. In a simulation study, we compare our two active learning approaches to random interventions and an existing approach, and analyze the influence of estimation errors on the overall performance of active learning.
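A much-simplified sketch of the greedy idea, resting on the fact that a single-vertex intervention orients at least the undirected edges incident to the intervened vertex: the crude proxy below repeatedly picks the vertex touching the most still-unoriented edges. The actual method also propagates orientations with Meek's rules, which this sketch omits; the toy skeleton is made up.

```python
import networkx as nx

def greedy_interventions(skeleton: nx.Graph, k: int):
    """Crude greedy proxy: choose k single-vertex interventions, each time
    taking the vertex incident to the most still-unoriented edges.
    (The real method would also propagate orientations via Meek's rules.)"""
    undirected = set(frozenset(e) for e in skeleton.edges)
    chosen = []
    for _ in range(k):
        # Score each vertex by the number of unoriented edges it would orient.
        best = max(skeleton.nodes,
                   key=lambda v: sum(v in e for e in undirected))
        chosen.append(best)
        undirected = {e for e in undirected if best not in e}
    return chosen

# Toy essential-graph skeleton (all edges still unoriented).
g = nx.Graph([(1, 2), (2, 3), (2, 4), (4, 5), (3, 4)])
print(greedy_interventions(g, 2))   # -> [2, 4]
```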

6.
Approximation of parametric statistical models by exponential models is discussed, from the viewpoints of observed as well as expected likelihood geometry. This extends a construction, in expected geometry, due to Amari. The approximations considered are parametrization invariant and local. Some of them relate to conditional models given exact or approximate ancillary statistics. Various examples are considered and the relation between the maximum likelihood estimators of the original model and the approximating models is studied. (Research partly supported by the Danish Science Research Council.)

7.
The main objective of this paper is to discuss the Bayes estimation of the regression coefficients in the elliptically distributed simple regression model with measurement errors. The posterior distribution of the line parameters is obtained in closed form under the following assumptions: the ratio of the error variances is known, an informative prior distribution is placed on the error variance, and non-informative prior distributions are placed on the regression coefficients and the incidental parameters. We prove that the posterior distribution of the regression coefficients has at most two real modes. Situations with a single mode are more likely than those with two modes, especially in large samples. The precision of the modal estimators is studied by deriving the Hessian matrix, which, although complicated, can be computed numerically. The posterior mean is estimated by using the Gibbs sampling algorithm and approximations by normal distributions. The results are applied to a real data set and connections with results in the literature are reported.
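For intuition, the known-variance-ratio case has a classical closed-form frequentist counterpart, Deming regression; the sketch below implements it as a point of comparison with the posterior modal estimators (the Bayesian machinery of the paper is not reproduced, and the simulated data are illustrative).

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming (errors-in-variables) regression with known error-variance
    ratio delta = var(error in y) / var(error in x)."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx   # (slope, intercept)

rng = np.random.default_rng(2)
xi = rng.normal(0, 2, 200)                      # true covariate values
x = xi + rng.normal(0, 0.5, 200)                # observed covariate, with error
y = 1.0 + 1.5 * xi + rng.normal(0, 0.5, 200)    # observed response, with error
print(deming(x, y, delta=1.0))                  # close to (1.5, 1.0)
```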

8.
This paper deals with how to determine which features should be included in the software to be developed. Metaheuristic techniques have been applied to this problem and can help software developers when they face contradictory goals. We show how the knowledge and experience of human experts can be enriched by these techniques, with the idea of obtaining a better requirements selection than that produced by expert judgment alone. This objective is achieved by embedding metaheuristic techniques into a requirements management tool that takes advantage of them during the execution of the development stages of any software development project.
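A toy sketch of one metaheuristic such a tool could embed — simulated annealing over requirement subsets, maximizing stakeholder value under an effort budget. The requirement scores, budget, and cooling schedule are all invented for illustration.

```python
import math, random

random.seed(3)

# Hypothetical requirements: (stakeholder value, implementation effort).
reqs = [(9, 5), (7, 8), (6, 3), (8, 6), (3, 2), (5, 4), (9, 9), (4, 3)]
budget = 20

def score(sel):
    """Total value of selected requirements; infeasible if over budget."""
    value = sum(v for (v, e), s in zip(reqs, sel) if s)
    effort = sum(e for (v, e), s in zip(reqs, sel) if s)
    return value if effort <= budget else -1   # reject infeasible selections

sel = [0] * len(reqs)
best, best_sel, temp = score(sel), sel[:], 5.0
for step in range(2000):
    cand = sel[:]
    cand[random.randrange(len(reqs))] ^= 1     # flip one requirement in/out
    delta = score(cand) - score(sel)
    # Accept improvements always; accept worsenings with Boltzmann probability.
    if delta >= 0 or random.random() < math.exp(delta / temp):
        sel = cand
    if score(sel) > best:
        best, best_sel = score(sel), sel[:]
    temp *= 0.999                              # cool down
print(best, [i for i, s in enumerate(best_sel) if s])
```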

9.
Time series are found widely in engineering and science. We study forecasting of stochastic, dynamic systems based on observations from multivariate time series. We model the domain as a dynamic multiply sectioned Bayesian network (DMSBN) and populate the domain with a set of proprietary, cooperative agents. We propose an algorithm suite that allows the agents to perform one-step forecasts with distributed probabilistic inference. We show that as long as the DMSBN is structurally time-invariant (possibly parametrically time-variant), the forecast is exact and its time complexity is exponentially lower than that of dynamic Bayesian networks (DBNs). In comparison with independent DBN-based agents, multiagent DMSBNs produce more accurate forecasts. The effectiveness of the framework is demonstrated through experiments on a supply chain testbed.

10.
Researchers have long struggled to identify causal effects in nonexperimental settings. Many recently proposed strategies assume ignorability of the treatment assignment mechanism and require fitting two models—one for the assignment mechanism and one for the response surface. This article proposes a strategy that instead focuses on very flexibly modeling just the response surface using a Bayesian nonparametric modeling procedure, Bayesian Additive Regression Trees (BART). BART has several advantages: it is far simpler to use than many recent competitors, requires less guesswork in model fitting, handles a large number of predictors, yields coherent uncertainty intervals, and fluidly handles continuous treatment variables and missing data for the outcome variable. BART also naturally identifies heterogeneous treatment effects. BART produces more accurate estimates of average treatment effects compared to propensity score matching, propensity-weighted estimators, and regression adjustment in the nonlinear simulation situations examined. Further, it is highly competitive in linear settings with the “correct” model, linear regression. Supplemental materials including code and data to replicate simulations and examples from the article as well as methods for population inference are available online.
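A hedged sketch of the response-surface strategy, with scikit-learn's gradient boosting standing in for BART (so BART's coherent uncertainty intervals are lost): fit y ≈ f(x, z) flexibly, then average the difference of predictions with the treatment z set to 1 versus 0. The confounded data-generating process is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)

# Simulated observational data with confounding: x0 affects both treatment and y.
n = 2000
x = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-x[:, 0]))         # treatment more likely when x0 is large
z = rng.binomial(1, p)
y = np.sin(x[:, 0]) + 2 * z + z * x[:, 1] + rng.normal(0, 1, n)  # true ATE = 2

# Flexibly model the response surface y ~ f(x, z); BART would be used here.
model = GradientBoostingRegressor().fit(np.column_stack([x, z]), y)

# Average treatment effect: mean difference of predictions with z = 1 vs z = 0.
y1 = model.predict(np.column_stack([x, np.ones(n)]))
y0 = model.predict(np.column_stack([x, np.zeros(n)]))
print(f"estimated ATE ~ {np.mean(y1 - y0):.2f} (true ATE = 2)")
```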

11.
The specification of conditional probability tables (CPTs) is a difficult task in the construction of probabilistic graphical models. Several types of canonical models have been proposed to ease that difficulty. Noisy-threshold models generalize the two most popular canonical models: the noisy-or and the noisy-and. When standard inference techniques are used, the inference complexity is exponential with respect to the number of parents of a variable. More efficient inference techniques can be employed for CPTs that take a special form. CPTs can be viewed as tensors. Tensors can be decomposed into linear combinations of rank-one tensors, where a rank-one tensor is an outer product of vectors. Such a decomposition is referred to as a Canonical Polyadic (CP) or CANDECOMP-PARAFAC decomposition. The tensor decomposition offers a compact representation of CPTs which can be efficiently utilized in probabilistic inference. In this paper we propose a CP decomposition of tensors corresponding to CPTs of threshold functions, exactly ℓ-out-of-k functions, and their noisy counterparts. We prove results about the symmetric rank of these tensors in the real and complex domains. The proofs are constructive and provide methods for CP decomposition of these tensors. An analytical and experimental comparison with the parent-divorcing method (which also has a polynomial complexity) shows the superiority of the CP decomposition-based method. The experiments were performed on subnetworks of the well-known QMR-DT network, generalized by replacing noisy-or by noisy-threshold models.
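The noisy-or case makes the idea concrete: its CPT tensor is known to have CP rank at most two. A numerical check of that decomposition, with made-up inhibition probabilities q_i:

```python
import numpy as np
from functools import reduce

def outer(*vecs):
    """Outer product of vectors: a rank-one tensor."""
    return reduce(lambda a, b: np.multiply.outer(a, b), vecs)

# Noisy-or with k = 3 binary parents and inhibition probabilities (illustrative).
q = [0.3, 0.5, 0.2]
k = len(q)

# Full CPT tensor T[y, x1, x2, x3]: P(Y=0 | x) = prod_i q_i^{x_i}.
T = np.empty((2,) + (2,) * k)
for x in np.ndindex(*(2,) * k):
    p0 = np.prod([qi ** xi for qi, xi in zip(q, x)])
    T[(0,) + x], T[(1,) + x] = p0, 1 - p0

# CP decomposition with two rank-one terms:
#   T = [0,1] (x) [1,1] (x) ... (x) [1,1]  +  [1,-1] (x) [1,q1] (x) ... (x) [1,qk]
term1 = outer(np.array([0., 1.]), *[np.ones(2)] * k)
term2 = outer(np.array([1., -1.]), *[np.array([1., qi]) for qi in q])
print(np.allclose(T, term1 + term2))   # True: the noisy-or CPT has CP rank <= 2
```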

12.
Using the theory of random closed sets, we extend the statistical framework introduced by Schreiber (11) for inference based on set-valued observations from the case of finite sample spaces to compact metric spaces with continuous distributions.

13.
Principal component analysis has made an important contribution to data reduction. In two-sample problems, a question of great interest is whether we can reduce the number of variables to a smaller number in similar fashion for both samples. More precisely, we consider the hypothesis H_m that the subspaces spanned by the latent vectors of the population covariance matrices corresponding to the first m principal components are the same in the two groups. In this paper, we propose a simple and easily interpreted test procedure for H_m.
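A sketch of the quantity at issue: the principal angles between the spans of the leading m sample eigenvectors in the two groups, which are all zero at the population level when H_m holds. The test statistic itself is not reproduced; the simulated groups below share a population covariance, so the angles come out small.

```python
import numpy as np

rng = np.random.default_rng(5)

def leading_subspace(sample, m):
    """Orthonormal basis of the span of the top-m eigenvectors of the
    sample covariance matrix."""
    cov = np.cov(sample, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, -m:]                # top-m eigenvectors

def principal_angles(U, V):
    """Principal angles (radians) between subspaces with orthonormal bases."""
    s = np.clip(np.linalg.svd(U.T @ V, compute_uv=False), -1.0, 1.0)
    return np.arccos(s)

# Two groups drawn from the same population covariance: angles should be small.
Sigma = np.diag([5.0, 4.0, 1.0, 1.0, 1.0])
A = rng.multivariate_normal(np.zeros(5), Sigma, size=500)
B = rng.multivariate_normal(np.zeros(5), Sigma, size=500)
print(principal_angles(leading_subspace(A, 2), leading_subspace(B, 2)))
```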

14.
This paper analyses a shift in the parameter of a life test model. The analysis rests on the prediction of order statistics in future samples from order statistics in a series of earlier samples in life tests under a general exponential model. While a series of k samples is being drawn, the model itself undergoes a change. First, a single shift is considered and the effect of this shift on the variance is discussed. The generalisation to s shifts (s ≤ k) in k samples is also taken up, and semi-or-used priors (SOUPS) are used to obtain predictive distributions. Finally, a shift after i (i ≤ k) stages from an exponential to a gamma model is considered; for this case the effect of the shift on the variance as well as on the Bayesian prediction region (BPR) is analysed, along with a set of tables.

15.
Approximate Bayesian inference by importance sampling derives probabilistic statements from a Bayesian network, an essential part of evidential reasoning with the network and an important aspect of many Bayesian methods. A critical problem in importance sampling on Bayesian networks is the selection of a good importance function to sample a network’s prior and posterior probability distribution. An initially optimal importance function eventually deviates from the optimal function when sampling a network’s posterior distribution given evidence, even when adaptive methods are used that adjust the importance function to the evidence by learning. In this article we propose a new family of Refractor Importance Sampling (RIS) algorithms for adaptive importance sampling under evidential reasoning. RIS applies “arc refractors” to a Bayesian network by adding new arcs and refining the conditional probability tables. The goal of RIS is to optimize the importance function for the posterior distribution and reduce the error variance of sampling. Our experimental results show a significant improvement of RIS over state-of-the-art adaptive importance sampling algorithms.
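For context, a sketch of the baseline that adaptive schemes improve on: plain likelihood weighting (importance sampling with a prior-like proposal) on a made-up two-node network. The effective sample size exposes the weight degeneracy under unlikely evidence that arc refractors are designed to mitigate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy network A -> B, both binary (illustrative probabilities).
p_a = 0.01                        # P(A=1)
p_b_given_a = {0: 0.05, 1: 0.9}   # P(B=1 | A)

# Likelihood weighting: sample non-evidence nodes from the prior-like proposal,
# then weight each sample by the likelihood of the evidence (here B=1).
n = 100_000
a = rng.binomial(1, p_a, n)
w = np.where(a == 1, p_b_given_a[1], p_b_given_a[0])
estimate = np.sum(w * (a == 1)) / np.sum(w)     # P(A=1 | B=1)

exact = p_a * 0.9 / (p_a * 0.9 + (1 - p_a) * 0.05)
print(f"IS estimate {estimate:.4f} vs exact {exact:.4f}")
# Effective sample size reveals weight degeneracy under unlikely evidence:
print(f"ESS = {np.sum(w) ** 2 / np.sum(w ** 2):.0f} of {n}")
```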

16.
This paper focuses on the estimation of some models in finance and, in particular, in interest rates. We analyse discretized versions of the constant elasticity of variance (CEV) models where the normal law showing up in the usual discretization of the diffusion part is replaced by a range of heavy-tailed distributions. A further extension of the model is to allow the elasticity of variance to be a parameter itself. This generalized model allows great flexibility in modelling and simplifies the model implementation considerably using the scale mixtures representation. The mixing parameters provide a means to identify possible outliers and protect inference by down-weighting the distorting effects of these outliers. For parameter estimation, a Bayesian approach is adopted and implemented using the software WinBUGS (Bayesian inference Using Gibbs Sampling). Results from a real data analysis show that an exponential power distribution with a random shape parameter, which is highly leptokurtic compared with the normal distribution, forms the best CEV model for the data.
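A simulation sketch of the discretized CEV dynamics with the Gaussian innovation replaced by a Student-t, generated through its scale-mixture-of-normals representation so the mixing draw is explicit; all parameter values are illustrative, and the WinBUGS estimation step is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discretized CEV short-rate model (parameters illustrative):
#   r_{t+1} = r_t + kappa*(theta - r_t)*dt + sigma * r_t**gamma * sqrt(dt) * eps_t
kappa, theta, sigma, gamma, dt = 0.5, 0.05, 0.3, 0.7, 1 / 252
nu = 4    # Student-t degrees of freedom: a heavy-tailed innovation

def simulate_cev(n, r0=0.05, heavy_tailed=True):
    r = np.empty(n)
    r[0] = r0
    for t in range(1, n):
        if heavy_tailed:
            # Student-t as a scale mixture of normals: eps = z / sqrt(g / nu),
            # g ~ chi^2(nu); a small mixing draw g flags a potential outlier.
            g = rng.chisquare(nu)
            eps = rng.standard_normal() / np.sqrt(g / nu)
        else:
            eps = rng.standard_normal()
        r[t] = (r[t - 1] + kappa * (theta - r[t - 1]) * dt
                + sigma * max(r[t - 1], 1e-8) ** gamma * np.sqrt(dt) * eps)
    return r

path = simulate_cev(1000)
print(f"rate range: [{path.min():.4f}, {path.max():.4f}]")
```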

17.
This article proposes a class of conditionally specified models for the analysis of multivariate space-time processes. Such models are useful in situations where there is sparse spatial coverage of one of the processes and much denser coverage of the other process(es). The dependence structure across processes and over space and time is completely specified through a neighborhood structure. These models are applicable to both point and block sources; for example, multiple pollutant monitors (point sources) or several county-level exposures (block sources). We introduce several computational tricks that are integral to model fitting, give some simple sufficient and necessary conditions for the space-time covariance matrix to be positive definite, and implement a Gibbs sampler, using hybrid Monte Carlo steps, to sample from the posterior distribution of the parameters. Model fit is assessed via the DIC. Predictive accuracy, over both time and space, is assessed both relatively and absolutely via mean squared prediction error and coverage probabilities. As an illustration of these models, we fit them to particulate matter and ozone data collected in the Los Angeles, CA, area over a three-month period in 1995. In these data, the spatial coverage of particulate matter was sparse relative to that of ozone.

18.
A test procedure is developed for software that evaluates the Bessel functions J_0, J_1, Y_0, and Y_1. The tests are highly accurate and are applied to various available codes. Results are presented on the performance of the codes.
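One standard ingredient of such test procedures is checking a mathematical identity the functions must satisfy, for example the Wronskian J_1(x)·Y_0(x) − J_0(x)·Y_1(x) = 2/(πx); whether the paper's procedure uses exactly this identity is an assumption. A sketch against SciPy's implementations:

```python
import numpy as np
from scipy.special import j0, j1, y0, y1

# Wronskian identity for cylinder functions:
#   J1(x) * Y0(x) - J0(x) * Y1(x) = 2 / (pi * x)
# An accurate code should satisfy this to near machine precision.
x = np.logspace(-2, 3, 50)
lhs = j1(x) * y0(x) - j0(x) * y1(x)
rhs = 2.0 / (np.pi * x)
rel_err = np.abs(lhs - rhs) / np.abs(rhs)
print(f"max relative error: {rel_err.max():.2e}")   # near machine epsilon
```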

19.
By far the most efficient methods for global optimization are based on starting a local optimization routine from an appropriate subset of uniformly distributed starting points. As the number of local optima is frequently unknown in advance, a crucial problem is when to stop the sequence of sampling and searching. By viewing a set of observed minima as a sample from a generalized multinomial distribution whose cells correspond to the local optima of the objective function, we obtain the posterior distribution of the number of local optima and of the relative sizes of their regions of attraction. This information is used to construct sequential Bayesian stopping rules which find the optimal trade-off between reliability and computational effort.
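A sketch of such a multistart loop using the classic Boender–Rinnooy Kan posterior estimate from this literature: with s local searches yielding w distinct minima, the expected total number of local optima is w(s − 1)/(s − w − 2), and searching stops once this falls below w + 1/2. Whether this is the exact rule derived in the paper is an assumption; the objective function is a toy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

def f(x):
    """Toy multimodal objective on roughly [-10, 10]."""
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

found, s = [], 0
while True:
    s += 1
    x0 = rng.uniform(-10, 10, size=1)
    res = minimize(f, x0)                        # local search from uniform start
    if not any(abs(res.x[0] - m) < 1e-3 for m in found):
        found.append(res.x[0])                   # a new distinct local minimum
    w = len(found)
    # Posterior estimate of the total number of local optima; stop when it
    # says we have probably seen them all (guard s > w + 2 keeps it defined).
    if s > w + 2 and w * (s - 1) / (s - w - 2) < w + 0.5:
        break

print(f"stopped after {s} searches; {w} local minima found near "
      f"{np.round(sorted(found), 3)}")
```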

20.
This paper proposes a new class of prior distributions applicable to four reliability growth test plans for products with exponentially distributed lifetimes. The priors are specified in conditional form, which suits the various situations arising in reliability growth testing. Expressions for the conditional mean and conditional variance at each stage are obtained, and the relationship between the form of the prior and its parameters is discussed; these results facilitate the incorporation of expert opinion. The paper also derives the posterior density, the Bayesian estimate, and the Bayesian lower confidence limit of product reliability at the end of testing.
