Found 20 similar documents (search time: 0 ms)
1.
2.
In this paper, we combine Leimer’s algorithm with the MCS-M algorithm to decompose graphical models into marginal models on prime blocks. Experiments show that our method admits an easier and faster implementation than Leimer’s algorithm.
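The paper's decomposition pipeline is not reproduced here, but its MCS-M component builds on maximum cardinality search (MCS). A minimal sketch of plain MCS, the simpler relative of MCS-M, is shown below; the adjacency-dict graph encoding and the example graph are illustrative assumptions.

```python
# Sketch of maximum cardinality search (MCS): repeatedly number the
# vertex with the most already-numbered neighbors. On a chordal graph
# the reverse of this ordering is a perfect elimination ordering.

def mcs_order(adj):
    """Return a vertex ordering by maximum cardinality search."""
    numbered = []
    weight = {v: 0 for v in adj}        # numbered-neighbor counts
    while weight:
        v = max(weight, key=weight.get)  # ties broken arbitrarily
        numbered.append(v)
        del weight[v]
        for u in adj[v]:
            if u in weight:
                weight[u] += 1
    return numbered

# Chordal example: two triangles sharing the edge (b, c)
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"},
       "c": {"a", "b", "d"}, "d": {"b", "c"}}
order = mcs_order(adj)
```

MCS-M extends this loop to record fill-in edges, producing a minimal triangulation of a non-chordal input.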
3.
Log-linear models are the popular workhorses for analyzing contingency tables. A log-linear parameterization of an interaction model can be more expressive than a direct parameterization based on probabilities, leading to a powerful way of defining restrictions derived from marginal, conditional and context-specific independence. However, parameter estimation is often simpler under a direct parameterization, provided that the model enjoys certain decomposability properties. Here we introduce a cyclical projection algorithm for obtaining maximum likelihood estimates of log-linear parameters under an arbitrary context-specific graphical log-linear model, which need not satisfy criteria of decomposability. We illustrate that lifting the restriction of decomposability makes the models more expressive, such that additional context-specific independencies embedded in real data can be identified. It is also shown how a context-specific graphical model can correspond to a non-hierarchical log-linear parameterization with a concise interpretation. This observation can pave the way to further development of non-hierarchical log-linear models, which have been largely neglected due to their believed lack of interpretability.
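The authors' cyclical projection algorithm itself is not given in the abstract; the classical instance of this idea is iterative proportional fitting (IPF), which cyclically matches the fitted table to each margin. A hedged sketch for the independence model on a two-way table (table values made up):

```python
# Sketch of iterative proportional fitting (IPF) for the independence
# model X _||_ Y on a 2-way table: start from a uniform table and
# cyclically rescale to match the observed row and column margins.

def ipf_independence(table, iters=50):
    n = sum(sum(row) for row in table)
    fitted = [[n / (len(table) * len(table[0]))] * len(table[0])
              for _ in table]
    row_m = [sum(row) for row in table]
    col_m = [sum(col) for col in zip(*table)]
    for _ in range(iters):
        for i, target in enumerate(row_m):      # match row margins
            s = sum(fitted[i])
            fitted[i] = [x * target / s for x in fitted[i]]
        for j, target in enumerate(col_m):      # match column margins
            s = sum(fitted[i][j] for i in range(len(fitted)))
            for i in range(len(fitted)):
                fitted[i][j] *= target / s
    return fitted

table = [[10, 20], [30, 40]]
fit = ipf_independence(table)   # converges to row_m[i] * col_m[j] / n
```

For decomposable models such cycles converge quickly; the paper's contribution is making a projection scheme of this flavor work for context-specific, non-decomposable log-linear models.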
4.
Fabio Boschetti 《Journal of Heuristics》2008,14(1):95-116
A Local Linear Embedding (LLE) module enhances the performance of two Evolutionary Computation (EC) algorithms employed as search tools in global optimization problems. The LLE employs the stochastic sampling of the data space inherent in Evolutionary Computation in order to reconstruct an approximate mapping from the data space back into the parameter space. This allows the target data vector to be mapped directly into the parameter space to obtain a rough estimate of the global optimum, which is then added to the EC generation. This process is iterated and considerably improves the EC convergence. Thirteen standard test functions and two real-world optimization problems serve to benchmark the performance of the method. In most of our tests, optimization aided by the LLE mapping outperforms standard implementations of a genetic algorithm and of particle swarm optimization. The number and range of functions we tested suggest that the proposed algorithm can be considered a valid alternative to traditional EC tools in more general applications. The performance improvement in the early stage of the convergence also suggests that this hybrid implementation could be successful as an initial global search to select candidates for subsequent local optimization.
5.
Gaussian graphical models are parametric statistical models for jointly normal random variables whose dependence structure is determined by a graph. In previous work, we introduced trek separation, which gives a necessary and sufficient condition in terms of the graph for when a subdeterminant is zero for all covariance matrices that belong to the Gaussian graphical model. Here we extend this result to give explicit cancellation-free formulas for the expansions of non-zero subdeterminants.
6.
Zbigniew Michalewicz 《Journal of Heuristics》1996,1(2):177-206
Evolutionary computation techniques, which are based on a powerful principle of evolution (survival of the fittest), constitute an interesting category of heuristic search. In other words, evolutionary techniques are stochastic algorithms whose search methods model some natural phenomena: genetic inheritance and the Darwinian strife for survival. Any evolutionary algorithm applied to a particular problem must address the issue of genetic representation of solutions to the problem and of genetic operators that would alter the genetic composition of offspring during the reproduction process. However, additional heuristics should be incorporated in the algorithm as well; some of these heuristic rules provide guidelines for evaluating (feasible and infeasible) individuals in the population. This paper surveys such heuristics and discusses their merits and drawbacks. An abridged version of this paper appears in the volume entitled META-HEURISTICS: Theory & Application, edited by Ibrahim H. Osman and James P. Kelly, to be published by Kluwer Academic Publishers in March 1996.
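The ingredients the survey enumerates (representation, evaluation, selection, variation operators) can be sketched in a minimal generic loop. The onemax objective and all parameter values below are illustrative assumptions, not taken from the survey.

```python
import random

# Minimal evolutionary loop: bit-string representation, tournament
# selection, one-point crossover, per-bit mutation. Maximizes onemax
# (the number of ones), a standard toy objective.

random.seed(0)

def onemax(bits):
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                       # binary tournament selection
            a, b = random.sample(pop, 2)
            return a if onemax(a) >= onemax(b) else b
        child_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutate
            child_pop.append(child)
        pop = child_pop
    return max(pop, key=onemax)

best = evolve()
```

The survey's point is that this skeleton alone is rarely enough: problem-specific heuristics, especially for scoring infeasible individuals, largely determine practical performance.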
7.
The problem of estimating high-dimensional Gaussian graphical models has gained much attention in recent years. Most existing methods can be considered one-step approaches, being either regression-based or likelihood-based. In this paper, we propose a two-step method for estimating the high-dimensional Gaussian graphical model. Specifically, the first step serves as a screening step, in which many entries of the concentration matrix are identified as zeros and thus removed from further consideration. In the second step, we focus on the remaining entries of the concentration matrix and perform selection and estimation of its nonzero entries. Since the dimension of the parameter space is effectively reduced by the screening step, the estimation accuracy of the estimated concentration matrix can be potentially improved. We show that the proposed method enjoys desirable asymptotic properties. Numerical comparisons of the proposed method with several existing methods indicate that the proposed method works well. We also apply the proposed method to a breast cancer microarray data set and obtain some biologically meaningful results.
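The screening idea can be illustrated in a toy low-dimensional setting: invert a covariance matrix and zero out small entries of the resulting concentration matrix, leaving only the survivors for refined estimation. This is a hedged sketch, not the paper's estimator; the threshold `tau` and the toy covariance are assumptions.

```python
# Screening sketch: entries of the concentration matrix (inverse
# covariance) with small magnitude are set aside as structural zeros.

def invert(m):
    """Gauss-Jordan inversion of a small square matrix (list of lists)."""
    n = len(m)
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def screen(cov, tau=0.05):
    omega = invert(cov)
    keep = [[abs(x) > tau for x in row] for row in omega]
    return omega, keep

# AR(1)-style covariance encoding the chain X1 - X2 - X3: the (1,3)
# entry of the concentration matrix is exactly zero.
cov = [[1.0, 0.5, 0.25],
       [0.5, 1.0, 0.5],
       [0.25, 0.5, 1.0]]
omega, keep = screen(cov)
```

In high dimensions the sample covariance is not invertible, which is why the paper's actual screening step works entry-wise; the sketch only conveys which quantities are being thresholded.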
8.
Trust region algorithms are well known in the field of local continuous optimization. They proceed by maintaining a confidence region in which a simple, most often quadratic, model is substituted for the criterion to be minimized. The minimum of the model in the trust region becomes the next starting point of the algorithm and, depending on the amount of progress made during this step, the confidence region is expanded, contracted or kept unchanged. In the field of global optimization, interval programming may be thought of as a kind of confidence region approach with a binary confidence level: the region is guaranteed to contain the optimum or guaranteed not to contain it. A probabilistic version, known as branch and probability bound, is based on an approximate probability that a region of the search space contains the optimum, and has a confidence level in the interval [0,1]. The method introduced in this paper is an application of the trust region approach within the framework of evolutionary algorithms. Regions of the search space are endowed with a prospectiveness criterion obtained from random sampling, possibly coupled with a local continuous algorithm. The regions are considered as individuals for an evolutionary algorithm with mutation and crossover operators based on a transformation group. The performance of the algorithm on some standard benchmark functions is presented.
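The classical local trust-region loop summarized above (model minimization inside a radius, then a ratio test driving expansion or contraction) can be sketched in one dimension. The test function and the conventional 0.25/0.75 thresholds are illustrative choices, not the paper's evolutionary variant.

```python
# Sketch of a 1-D trust-region iteration: minimize the quadratic model
# m(s) = f(x) + g*s + 0.5*h*s^2 on [-radius, radius], then compare the
# actual reduction in f with the reduction the model predicted.

def trust_region_1d(f, grad, hess, x0, radius=1.0, iters=30):
    x = x0
    for _ in range(iters):
        g, h = grad(x), hess(x)
        s = -g / h if h > 0 else (-radius if g > 0 else radius)
        s = max(-radius, min(radius, s))          # clip to the region
        predicted = -(g * s + 0.5 * h * s * s)
        actual = f(x) - f(x + s)
        rho = actual / predicted if predicted > 0 else 0.0
        if rho > 0.75:        # model trustworthy: expand the region
            radius *= 2.0
        elif rho < 0.25:      # model poor: contract the region
            radius *= 0.5
        if rho > 0:           # accept the step only if f decreased
            x = x + s
    return x

x_star = trust_region_1d(f=lambda x: (x - 3.0) ** 4,
                         grad=lambda x: 4 * (x - 3.0) ** 3,
                         hess=lambda x: 12 * (x - 3.0) ** 2,
                         x0=0.0)
```

The paper replaces the single region and quadratic model with a population of regions scored by sampled prospectiveness, but the expand/contract feedback is the same mechanism.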
9.
10.
11.
It has become increasingly popular to employ evolutionary algorithms to solve problems in different domains, and parallel models have been widely used for performance enhancement. Instead of using parallel computing facilities or public computing systems to speed up the computation, we propose to implement parallel evolutionary computation models on networked personal computers (PCs) that are locally available and manageable. To realize the parallelism, a multi-agent system is presented in which mobile agents play the major roles to carry the code and move from machine to machine to complete the computation dynamically. To evaluate the proposed approach, we use our multi-agent system to solve two types of time-consuming applications. Different kinds of experiments were conducted to assess the developed system, and the preliminary results show its promise and efficiency.
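The paper distributes work with mobile agents over networked PCs; as a local stand-in, the costly fitness-evaluation step of an evolutionary loop can be parallelized over worker threads. The fitness function and population below are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: farm out fitness evaluations in parallel, the step that
# dominates wall-clock time in most evolutionary computations.

def fitness(x):                      # stand-in for a slow evaluation
    return -(x - 4.0) ** 2

def evaluate_population(pop, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(fitness, pop))
    return list(zip(pop, scores))

pop = [0.0, 2.0, 4.0, 6.0, 8.0]
scored = evaluate_population(pop)
best, best_score = max(scored, key=lambda t: t[1])
```

In the paper's setting the "workers" are agents migrating between machines, so load balancing and code mobility replace the fixed thread pool used here.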
12.
We define a new class of coloured graphical models, called regulatory graphs. These graphs have their own distinctive formal semantics and can directly represent typical qualitative hypotheses about regulatory processes like those described by various biological mechanisms. They admit an embellishment into classes of probabilistic statistical models and so standard Bayesian methods of model selection can be used to choose promising candidate explanations of regulation. Regulation is modelled by the existence of a deterministic relationship between the longitudinal series of observations labelled by the receiving vertex and the donating one. This class contains longitudinal cluster models as a degenerate graph. Edge colours directly distinguish important features of the mechanism like inhibition and excitation, and graphs are often cyclic. With appropriate distributional assumptions, because the regulatory relationships map onto each other through a group structure, it is possible to define a conditional conjugate analysis. This means that even when the model space is huge it is nevertheless feasible, using a Bayesian MAP search, to discover a regulatory network with a high Bayes Factor score. We also show that, like the class of Bayesian Networks, regulatory graphs admit a formal but distinctive causal algebra. The topology of the graph then represents collections of hypotheses about the predicted effect of controlling the process by tearing out message passers or forcing them to transmit certain signals. We illustrate our methods on a microarray experiment measuring the expression of thousands of genes as a longitudinal series, where the scientific interest lies in the circadian regulation of these plants.
13.
Annals of the Institute of Statistical Mathematics - Gaussian graphical models are semi-algebraic subsets of the cone of positive definite covariance matrices. They are widely used throughout...
14.
We propose a framework for building graphical decision models from individual causal mechanisms. Our approach is based on the work of Simon [Simon, H.A., 1953. Causal ordering and identifiability. In: Hood, W.C., Koopmans, T.C. (Eds.), Studies in Econometric Method. Cowles Commission for Research in Economics. Monograph No. 14. John Wiley and Sons Inc., New York, NY, pp. 49–74 (Ch. III)], who proposed a causal ordering algorithm for explicating causal asymmetries among variables in a self-contained set of structural equations. We extend the causal ordering algorithm to under-constrained sets of structural equations, common during the process of problem structuring. We demonstrate that the causal ordering explicated by our extension is an intermediate representation of a modeler’s understanding of a problem and that the process of model construction consists of assembling mechanisms into self-contained causal models. We describe ImaGeNIe, an interactive modeling tool that supports mechanism-based model construction and demonstrate empirically that it can effectively assist users in constructing graphical decision models.
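Simon's causal ordering can be sketched directly: treat each structural equation as the set of variables it mentions, repeatedly find a minimal self-contained subset (k equations involving exactly k undetermined variables), and declare its variables a stratum. This brute-force version and the toy equation system are illustrative assumptions; it raises an error on the under-constrained systems the paper's extension is designed to handle.

```python
from itertools import combinations

# Sketch of Simon's causal ordering on a self-contained system of
# structural equations, each given as the set of variables it mentions.

def causal_ordering(equations):
    eqs = [set(e) for e in equations]
    strata = []
    determined = set()
    while eqs:
        found = None
        for k in range(1, len(eqs) + 1):
            for combo in combinations(range(len(eqs)), k):
                free = set().union(*(eqs[i] for i in combo)) - determined
                if len(free) == k:      # minimal self-contained subset
                    found = (combo, free)
                    break
            if found:
                break
        if not found:
            raise ValueError("system is under-constrained")
        combo, free = found
        strata.append(sorted(free))
        determined |= free
        eqs = [e for i, e in enumerate(eqs) if i not in combo]
    return strata

# x is determined alone; y depends on x; z depends on x and y
strata = causal_ordering([{"x"}, {"x", "y"}, {"x", "y", "z"}])
```

The resulting strata give the causal asymmetry: variables in an earlier stratum causally precede those in a later one.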
15.
《Annals of Pure and Applied Logic》1986,30(2):173-200
A theory of approximation to measurable sets and measurable functions based on the concepts of recursion theory and discrete complexity theory is developed. The approximation method uses a model of oracle Turing machines, and so the computational complexity may be defined in a natural way. This complexity measure may be viewed as a formulation of the average-case complexity of real functions—in contrast to the more restrictive worst-case complexity. The relationship between these two complexity measures is further studied and compared with the notion of the distribution-free probabilistic computation. The computational complexity of the Lebesgue integral of polynomial-time approximable functions is studied and related to the question “FP = ♯P?”.
16.
Adrian Dobra, Chris Hans, Beatrix Jones, Joseph R. Nevins, Mike West 《Journal of multivariate analysis》2004,90(1):196-212
We discuss the theoretical structure and constructive methodology for large-scale graphical models, motivated by their potential in evaluating and aiding the exploration of patterns of association in gene expression data. The theoretical discussion covers basic ideas and connections between Gaussian graphical models, dependency networks and specific classes of directed acyclic graphs we refer to as compositional networks. We describe a constructive approach to generating interesting graphical models for very high-dimensional distributions that builds on the relationships between these various stylized graphical representations. Issues of consistency of models and priors across dimension are key. The resulting methods are of value in evaluating patterns of association in large-scale gene expression data with a view to generating biological insights about genes related to a known molecular pathway or set of specified genes. Some initial examples relate to the estrogen receptor pathway in breast cancer, and the Rb-E2F cell proliferation control pathway.
17.
I. V. Evstigneev 《Mathematical Notes》1976,19(2):165-171
This note is connected with a series of investigations of probabilistic models of economics. Its aim is the study of the asymptotic properties of the optimal programs in such models. The results are stochastic analogs of the Turnpike theorems, stating that the optimal programs are near to a definite stationary program for most of the time. Translated from Matematicheskie Zametki, Vol. 19, No. 2, pp. 279–290, February, 1976. The author thanks E. B. Dynkin for useful discussion and help during the preparation of this note.
18.
《Mathematical and Computer Modelling》2006,43(9-10):1114-1135
We propose techniques based on graphical models for efficiently solving data association problems arising in multiple target tracking with distributed sensor networks. Graphical models provide a powerful framework for representing the statistical dependencies among a collection of random variables, and are widely used in many applications (e.g., computer vision, error-correcting codes). We consider two different types of data association problems, corresponding to whether or not it is known a priori which targets are within the surveillance range of each sensor. We first demonstrate how to transform these two problems to inference problems on graphical models. With this transformation, both problems can be solved efficiently by local message-passing algorithms for graphical models, which solve optimization problems in a distributed manner by exchange of information among neighboring nodes on the graph. Moreover, a suitably reweighted version of the max–product algorithm yields provably optimal data associations. These approaches scale well with the number of sensors in the network, and moreover are well suited to being realized in a distributed fashion. So as to address trade-offs between performance and communication costs, we propose a communication-sensitive form of message-passing that is capable of achieving near-optimal performance using far less communication. We demonstrate the effectiveness of our approach with experiments on simulated data.
19.
A method for constructing priors is proposed that allows the off-diagonal elements of the concentration matrix of Gaussian data to be zero. The priors have the property that the marginal prior distribution of the number of nonzero off-diagonal elements of the concentration matrix (referred to below as model size) can be specified flexibly. The priors have normalizing constants for each model size, rather than for each model, giving a tractable number of normalizing constants that need to be estimated. The article shows how to estimate the normalizing constants using Markov chain Monte Carlo simulation and supersedes the method of Wong et al. (2003) [24] because it is more accurate and more general. The method is applied to two examples. The first is a mixture of constrained Wisharts. The second is from Wong et al. (2003) [24] and decomposes the concentration matrix into a function of partial correlations and conditional variances using a mixture distribution on the matrix of partial correlations. The approach detects structural zeros in the concentration matrix and estimates the covariance matrix parsimoniously if the concentration matrix is sparse.
20.
We use the theory of rationalizable choices to study the survival and the extinction of types (or traits) in evolutionary OLG models. Two properties of evolutionary processes are introduced: rationalizability by a fitness ordering (i.e. only the most fit types survive) and interactivity (i.e. a withdrawal of types affects the survival of other types). These properties are shown to be logically incompatible. We then examine whether the evolutionary processes at work in canonical evolutionary OLG models satisfy rationalizability or interactivity. We study n-type versions of the evolutionary OLG models of Galor and Moav (2002) and Bisin and Verdier (2001), and show that, while the evolutionary process at work in the former is generally rationalizable by a fitness ordering, the opposite is true for the latter, which, in general, exhibits interactivity.