Similar Articles
20 similar articles found (search time: 31 ms)
1.
This article proposes a class of conditionally specified models for the analysis of multivariate space-time processes. Such models are useful when one process has sparse spatial coverage and the other process(es) have much denser coverage. The dependence structure across processes, over space, and over time is completely specified through a neighborhood structure. These models apply to both point and block sources; for example, multiple pollutant monitors (point sources) or several county-level exposures (block sources). We introduce several computational tricks that are integral to model fitting, give simple necessary and sufficient conditions for the space-time covariance matrix to be positive definite, and implement a Gibbs sampler, using hybrid Monte Carlo steps, to sample from the posterior distribution of the parameters. Model fit is assessed via the deviance information criterion (DIC). Predictive accuracy, over both time and space, is assessed both relatively and absolutely via mean squared prediction error and coverage probabilities. As an illustration, we fit these models to particulate matter and ozone data collected over a three-month period in 1995 in the Los Angeles, CA, area. In these data, the spatial coverage of particulate matter was sparse relative to that of ozone.
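The two predictive checks named in this abstract are simple to compute; a minimal sketch with made-up predictions and intervals (not the paper's data):

```python
import numpy as np

def mspe(y_true, y_pred):
    """Mean squared prediction error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def coverage(y_true, lower, upper):
    """Fraction of observations falling inside their prediction intervals."""
    y_true = np.asarray(y_true, float)
    inside = (np.asarray(lower) <= y_true) & (y_true <= np.asarray(upper))
    return float(np.mean(inside))

# Hypothetical held-out observations, point predictions, and 95% intervals
y = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.8, 3.2, 3.9]
lo = [0.5, 1.5, 2.5, 3.5]
hi = [1.5, 2.5, 3.5, 4.5]
print(mspe(y, pred))        # 0.025
print(coverage(y, lo, hi))  # 1.0
```

A well-calibrated 95% interval should yield empirical coverage near 0.95 on a large held-out set.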

2.
In recent years fire computer models have developed rapidly, and their use in fire safety studies, fire investigation, and related areas has increased. The two most important types are the field model and the zone model: the first gives a better approximation to fire dynamics, while the second requires less computational time. In parallel, information processing with artificial neural networks has advanced greatly and become a useful tool across very diverse fields. This paper analyzes the possibility of developing a new fire computer model using artificial neural networks. As a first approach, a simple compartment was analyzed with a field model; simulations employing a General Regression Neural Network (GRNN) were then performed. This method achieves results similar to the field model with computational times closer to those of zone models. The neural network was trained with the FDS field model, and the resulting model was validated against data from a full-scale test. Later stages will evaluate other phenomena and different types of networks.
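A GRNN is essentially a kernel-weighted average of training outputs (Nadaraya-Watson form); a minimal sketch with hypothetical fire data (inputs, outputs, and bandwidth are illustrative, not from the paper):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network: predict each query point as a
    Gaussian-kernel-weighted average of the training outputs."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train, float)
    preds = []
    for x in np.atleast_2d(np.asarray(X_query, float)):
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# Hypothetical training set: (heat release rate, vent area) -> peak temperature
X = [[1.0, 0.2], [2.0, 0.2], [3.0, 0.4]]
y = [350.0, 520.0, 610.0]
print(grnn_predict(X, y, [[2.0, 0.2]], sigma=0.3))
```

Querying at a training point with a small bandwidth returns a value close to that point's output, which is why GRNN prediction is fast: it needs no iterative training, only stored examples.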

3.
We study normal approximations for a class of discrete-time occupancy processes, namely, Markov chains with transition kernels of product Bernoulli form. This class encompasses numerous models which appear in the complex networks literature, including stochastic patch occupancy models in ecology, network models in epidemiology, and a variety of dynamic random graph models. Bounds on the rate of convergence for a central limit theorem are obtained using Stein’s method and moment inequalities on the deviation from an analogous deterministic model. As a consequence, our work also implies a uniform law of large numbers for a subclass of these processes.
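A transition kernel of product Bernoulli form means each node's next occupancy is an independent Bernoulli draw whose probability depends on the whole current configuration. A minimal simulation sketch (the colonization/survival parametrization is a generic patch-occupancy illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, colonize, survive, A):
    """One transition of an occupancy process: given state x in {0,1}^n,
    each node is independently occupied next step with a probability
    determined by the current configuration (product-Bernoulli kernel)."""
    pressure = A @ x  # number of occupied neighbours of each node
    p = np.where(x == 1,
                 survive,                              # occupied: persist
                 1.0 - (1.0 - colonize) ** pressure)   # empty: colonized
    return (rng.random(len(x)) < p).astype(int)

# Hypothetical 4-patch metapopulation on a ring
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
x = np.array([1, 0, 0, 0])
for _ in range(5):
    x = step(x, colonize=0.3, survive=0.8, A=A)
print(x)
```

The deterministic analogue referenced in the abstract replaces the Bernoulli draws by their expectations, iterating the probability vector p itself.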

4.
In research and application, social networks are increasingly extracted from relationships inferred by name collocations in text-based documents. Although names represent real entities, names are not unique identifiers, and it is often unclear when two name observations correspond to the same underlying entity. One confounder stems from ambiguity, in which the same name correctly references multiple entities. Prior name disambiguation methods measured similarity between two names as a function of their respective documents. In this paper, we propose an alternative similarity metric based on the probability of walking from one ambiguous name to another in a random walk of the social network constructed from all documents. We experimentally validate our model on actor-actor relationships derived from the Internet Movie Database. Using a global similarity threshold, we demonstrate that random walks achieve a significant increase in disambiguation capability in comparison to prior models. Bradley A. Malin is a Ph.D. candidate in the School of Computer Science at Carnegie Mellon University. He is an NSF IGERT fellow in the Center for Computational Analysis of Social and Organizational Systems (CASOS) and a researcher at the Laboratory for International Data Privacy. His research is interdisciplinary and combines aspects of bioinformatics, data forensics, data privacy and security, entity resolution, and public policy. He has developed learning algorithms for surveillance in distributed systems and designed formal models for the evaluation and improvement of privacy-enhancing technologies in real-world environments, including healthcare and the Internet. His research on privacy in genomic databases has received several awards from the American Medical Informatics Association and has been cited in congressional briefings on health data privacy. He currently serves as managing editor of the Journal of Privacy Technology. Edoardo M. Airoldi is a Ph.D. student in the School of Computer Science at Carnegie Mellon University. Currently, he is a researcher in the CASOS group and at the Center for Automated Learning and Discovery. His methodology is based on probability theory, approximation theorems, discrete mathematics, and their geometries. His research interests include data mining and machine learning techniques for temporal and relational data, data linkage, and data privacy, with important applications to dynamic networks, biological sequences, and large collections of texts. His research on dynamic network tomography is the state of the art for recovering information about who is communicating with whom in a network, and was awarded honors from the ACM SIGKDD community. Several companies focusing on information extraction have adopted his methodology for text analysis. He is currently investigating practical and theoretical aspects of hierarchical mixture models for temporal and relational data, and an abstract theory of data linkage. Kathleen M. Carley is a Professor of Computer Science in ISRI, School of Computer Science at Carnegie Mellon University. She received her Ph.D. in Sociology from Harvard. Her research combines cognitive science, social and dynamic networks, and computer science (particularly artificial intelligence and machine learning techniques) to address complex social and organizational problems. Her specific research areas are computational social and organization science, social adaptation and evolution, social and dynamic network analysis, and computational text analysis. Her models meld multi-agent technology with network dynamics and empirical data. Three of the large-scale tools she and the CASOS group have developed are BioWar, a city-scale model of weaponized biological attacks and response; Construct, a model of the co-evolution of social and knowledge networks; and ORA, a statistical toolkit for dynamic social network data.
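The random-walk similarity idea can be sketched with a row-stochastic transition matrix: the (i, j) entry of its k-th power is the probability of reaching node j from node i in k steps. The tiny co-mention graph below is hypothetical:

```python
import numpy as np

def walk_similarity(A, steps=3):
    """k-step random-walk similarity: entry (i, j) is the probability that
    a walk started at node i sits at node j after `steps` uniform steps."""
    A = np.asarray(A, float)
    P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    return np.linalg.matrix_power(P, steps)

# Hypothetical co-mention graph: two "John Smith" observations (nodes 0, 1)
# both linked to a shared co-star (node 2), plus one more co-star (node 3)
A = np.array([[0, 0, 1, 0],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
S = walk_similarity(A, steps=2)
print(S[0, 1])  # 1/3: the two ambiguous names are reachable via the shared co-star
```

A global threshold on S[i, j] then decides whether two name observations are merged into one entity.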

5.
6.
Incentive-based models for network formation link micro actions to changes in network structure. Sociologists have extended these models on a number of fronts, but there remains a tendency to treat actors as homogeneous agents and to disregard social theory. Drawing upon literature on the strategic use of networks for knowledge gains, we specify models exploring the co-evolution of networks and knowledge gains. Our findings suggest that pursuing transitive ties is the most successful strategy, as more reciprocity and cycling result from this pursuit, thus encouraging learning across the network. We also discuss the role of network size, global network structure, and parameter strength in actors’ attainment of knowledge resources.

7.
In this work, the optimal sensor displacement problem in wireless sensor networks is addressed. A network consisting of independent, collaborative, and mobile nodes is assumed to be available. Starting from an initial configuration, the aim is to define a specific sensor displacement that allows the network to achieve high performance in terms of energy consumption and travelled distance. To represent the problem mathematically, several innovative optimization models are proposed, taking different performance objectives into account. An extensive computational phase is carried out to assess the behaviour of the developed models in terms of solution quality and computational effort. A comparison with distributed approaches over different scenarios is also given.
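For very small instances, one natural displacement objective (minimum total travelled distance) can be solved by brute force over sensor-to-target assignments; a sketch with hypothetical coordinates (the paper's actual models and objectives are richer):

```python
from itertools import permutations
import math

def best_displacement(sensors, targets):
    """Assign mobile sensors to target positions minimising total travelled
    distance. Brute force over all assignments; fine only for tiny n."""
    best, best_cost = None, math.inf
    for perm in permutations(range(len(targets))):
        cost = sum(math.dist(sensors[i], targets[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Hypothetical initial positions and desired final positions
sensors = [(0.0, 0.0), (1.0, 0.0)]
targets = [(1.0, 1.0), (0.0, 1.0)]
assignment, cost = best_displacement(sensors, targets)
print(assignment, cost)  # (1, 0) 2.0 -- each sensor moves straight up
```

Real instances would use an assignment solver (Hungarian method) or the mathematical-programming formulations the paper develops, since brute force is factorial in the number of sensors.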

8.
This paper explores time heterogeneity in the stochastic actor-oriented model (SAOM) proposed by Snijders (Sociological Methodology. Blackwell, Boston, pp 361-395, 2001), which is meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people, organizations, etc., and dichotomous relations representing underlying relationships of friendship, advice, etc. We illustrate several reasons why heterogeneity should be statistically tested and provide a fast, convenient method for assessment and model correction. SAOMs provide a flexible framework for network dynamics which allows a researcher to test selection, influence, behavioral, and structural properties in network data over time. We show how the forward-selecting score-type test proposed by Schweinberger (Chapter 4: Statistical modeling of network panel data: goodness of fit. PhD thesis, University of Groningen, 2007) can be employed to quickly assess heterogeneity at almost no additional computational cost. One-step estimates are used to assess the magnitude of the heterogeneity. Simulation studies are conducted to support the validity of this approach. The ASSIST dataset (Campbell et al. Lancet 371(9624):1595-1602, 2008) is reanalyzed with the score-type test, one-step estimators, and a full estimation for illustration. These tools are implemented in the RSiena package, and a brief walkthrough is provided.

9.
The application of simple random walks on graphs is a powerful tool that is useful in many algorithmic settings such as network exploration, sampling, information spreading, and distributed computing. This is due to the reliance of a simple random walk on only local data, its negligible memory requirements, and its distributed nature. It is well known that for static graphs the cover time, that is, the expected time to visit every node of the graph, and the mixing time, that is, the time to sample a node according to the stationary distribution, are at most polynomial relative to the size of the graph. Motivated by real world networks, such as peer-to-peer and wireless networks, the conference version of this paper was the first to study random walks on arbitrary dynamic networks. We study the most general model in which an oblivious adversary is permitted to change the graph after every step of the random walk. In contrast to static graphs, and somewhat counter-intuitively, we show that there are adversary strategies that force the expected cover time and the mixing time of the simple random walk on dynamic graphs to be exponentially long, even when at each time step the network is well connected and rapidly mixing. To resolve this, we propose a simple strategy, the lazy random walk, which guarantees, under minor conditions, polynomial cover time and polynomial mixing time regardless of the changes made by the adversary.
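The lazy random walk is simple to state: at each step, stay put with probability 1/2, otherwise move to a uniform random neighbour. A minimal simulation on a static ring (in the dynamic setting of the paper, the adversary would redraw the neighbour lists before each step):

```python
import random

random.seed(1)

def lazy_walk_step(node, neighbors):
    """One step of the lazy random walk: with probability 1/2 stay put,
    otherwise move to a uniformly chosen neighbour."""
    if random.random() < 0.5 or not neighbors[node]:
        return node
    return random.choice(neighbors[node])

def cover_time(neighbors, start=0, max_steps=100000):
    """Number of steps until every node has been visited (one run)."""
    node, seen, t = start, {start}, 0
    while len(seen) < len(neighbors) and t < max_steps:
        node = lazy_walk_step(node, neighbors)
        seen.add(node)
        t += 1
    return t

# A 4-cycle, given as adjacency lists
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
t = cover_time(ring)
print(t)
```

The self-loop added by laziness is what defeats the adversary: it prevents the walk from being herded by well-timed graph changes, restoring polynomial cover and mixing times.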

10.
Most biological networks share common properties that models must reproduce. Chief among them is that these networks are scale-free, i.e., the distribution of vertex degrees follows a power law. Among existing models, those that best fit these characteristics are based on a time evolution that makes the analytic calculation of the number of motifs in the network impossible. For applications this calculation is very important, since it underlies the modular decomposition of networks proposed by Milo et al. Conversely, models whose construction does not depend on time miss one or several properties of real networks, or are not computationally tractable. In this paper, we propose a new random graph model that satisfies the global features of biological networks as well as the non-time-dependency condition. It is based on a bipartite graph structure, which has a biological interpretation in metabolic networks.
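The bipartite structure can be illustrated by its one-mode projection: metabolites are linked whenever they share a reaction. A sketch with a hypothetical incidence matrix (an illustration of the projection step, not the paper's random model itself):

```python
import numpy as np

def bipartite_projection(B):
    """Project a bipartite (reaction x metabolite) incidence matrix onto the
    metabolite side: two metabolites are adjacent iff some reaction
    involves both."""
    C = (B.T @ B) > 0          # shared-reaction counts, thresholded
    np.fill_diagonal(C, False)  # no self-loops
    return C.astype(int)

# Hypothetical incidence matrix: 3 reactions over 4 metabolites
B = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
G = bipartite_projection(B)
print(G.sum(axis=1))  # vertex degrees in the projected metabolite graph
```

In the model's random version, the bipartite degrees are what get drawn from heavy-tailed distributions, so the projected graph inherits the scale-free property while motif counts stay analytically accessible.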

11.
Models with intractable likelihood functions arise in areas including network analysis and spatial statistics, especially those involving Gibbs random fields. Posterior parameter estimation in these settings is termed a doubly intractable problem because both the likelihood function and the posterior distribution are intractable. The comparison of Bayesian models is often based on the statistical evidence, the integral of the un-normalized posterior distribution over the model parameters, which is rarely available in closed form. For doubly intractable models, estimating the evidence adds another layer of difficulty. Consequently, selecting the model that best describes an observed network among a collection of exponential random graph models is a daunting task. Pseudolikelihoods offer a tractable approximation to the likelihood but should be treated with caution because they can lead to unreasonable inference. This article specifies a method to adjust pseudolikelihoods to obtain a reasonable, yet tractable, approximation to the likelihood. This allows implementation of widely used computational methods for evidence estimation and pursuit of Bayesian model selection of exponential random graph models for the analysis of social networks. Empirical comparisons to existing methods show that our procedure yields similar evidence estimates, but at a lower computational cost. Supplementary material for this article is available online.

12.
Exponential family random graph models (ERGMs) can be understood in terms of a set of structural biases that act on an underlying reference distribution. This distribution determines many aspects of the behavior and interpretation of the ERGM families incorporating it. One important innovation in this area has been the development of an ERGM reference model that produces realistic behavior when generalized to sparse networks of varying sizes. Here, we show that this model can be derived from a latent dynamic process in which tie formation takes place within small local settings between which individuals move. This derivation provides one possible micro-process interpretation of the sparse ERGM reference model and sheds light on the conditions under which constant mean degree scaling can emerge.

13.
14.
Constructing neural networks for function approximation is a classical and longstanding topic in approximation theory. In this paper, we aim at constructing deep neural networks with three hidden layers using a sigmoidal activation function to approximate smooth and sparse functions. Specifically, we prove that the constructed deep nets with controllable magnitude of free parameters can reach the optimal approximation rate in approximating both smooth and sparse functions. In particular, we prove that neural networks with three hidden layers can avoid the phenomenon of saturation, i.e., the phenomenon that for some neural network architectures, the approximation rate stops improving for functions of very high smoothness.

15.
Networks are ubiquitous in science. They have also become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active “social science network community” and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature coming out of statistical physics and computer science. In particular, the growth of the World Wide Web and the emergence of online “networking communities” such as Facebook, Google+, MySpace, LinkedIn, and Twitter, and a host of more specialized professional network communities have intensified interest in the study of networks and network data. This article reviews some of these developments, introduces some relevant statistical models for static network settings, and briefly points to open challenges.

16.
Three methodological issues are discussed that are important for the analysis of data on networks in organizations. The first is the two-level nature of the data: individuals are nested in organizations. This can be dealt with by using multilevel statistical methods. The second is the complicated nature of statistical methods for network analysis. The third issue is the potential of mathematical modeling for the study of network effects and network evolution in organizations. Two examples are given of mathematical models for gossip in organizations. The first example is a model for cross-sectional data, the second is a model for longitudinal data that reflect the joint development of network structure and individual behavior tendencies.

17.
This paper provides a new idea for approximating the inventory cost function to be used in a truncated dynamic program for solving the capacitated lot-sizing problem. The proposed method combines dynamic programming with regression, data fitting, and approximation techniques to estimate the inventory cost function at each stage of the dynamic program. The effectiveness of the proposed method is analyzed on various capacitated lot-sizing problem instances with different cost and capacity characteristics. Computational results show that the approximation approaches can significantly decrease the computational time required by the dynamic program and the integer program across different types of instances. Furthermore, in most cases, the proposed approximate dynamic programming approaches accurately capture the optimal solution of the problem with consistent computational performance over different instances.

18.
The conventional exponential family random graph model (ERGM) parameterization leads to a baseline density that is constant in graph order (i.e., number of nodes); this is potentially problematic when modeling multiple networks of varying order. Prior work has suggested a simple alternative that results in constant expected mean degree. Here, we extend this approach by suggesting another alternative parameterization that allows for flexible modeling of scenarios in which baseline expected degree scales as an arbitrary power of order. This parameterization is easily implemented by the inclusion of an edge count/log order statistic along with the traditional edge count statistic in the model specification.

19.
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model with normal base measure, Gibbs sampling algorithms based on the Pólya urn scheme are often used to simulate posterior draws in conjugate models (essentially, linear regression models and models for binary outcomes). In the nonconjugate case, common problems associated with existing simulation algorithms include convergence and mixing difficulties.

This article proposes an algorithm for MDP models with exponential family likelihoods and normal base measures. The algorithm proceeds by making a Laplace approximation to the likelihood function, thereby matching the proposal with that of the Gibbs sampler. The proposal is accepted or rejected via a Metropolis-Hastings step. For conjugate MDP models, the algorithm is identical to the Gibbs sampler. The performance of the technique is investigated using a Poisson regression model with semiparametric random effects. The algorithm performs efficiently and reliably, even in problems where large-sample results do not guarantee the success of the Laplace approximation. This is demonstrated by a simulation study in which most of the count data consist of small numbers. The technique offers substantial benefits relative to existing methods, both in terms of convergence properties and computational cost.
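The Laplace step can be sketched for the simplest case, a single Poisson count with a normal prior on the log rate: Newton iterations locate the posterior mode, and the curvature there supplies the variance of the normal proposal. The parametrization below is a generic illustration, not the paper's exact algorithm:

```python
import math

def laplace_approx(y, mu0=0.0, tau2=1.0, iters=25):
    """Laplace approximation for theta with y ~ Poisson(exp(theta)) and
    theta ~ N(mu0, tau2): Newton's method finds the posterior mode; the
    negative inverse Hessian there is the approximating normal variance."""
    theta = math.log(y + 0.5)  # rough starting point
    for _ in range(iters):
        grad = y - math.exp(theta) - (theta - mu0) / tau2
        hess = -math.exp(theta) - 1.0 / tau2
        theta -= grad / hess
    var = -1.0 / hess
    return theta, var  # parameters of the N(theta, var) proposal

mode, var = laplace_approx(y=7, mu0=0.0, tau2=10.0)
print(mode, var)
```

In the Metropolis-Hastings step, a draw from N(mode, var) would be proposed and accepted or rejected against the exact (unnormalized) posterior, correcting any error in the approximation.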

20.
This paper argues that parties and other gatherings are important for the development of friendship networks. It proposes a stochastic model for the evolution of networks over time, the distinctive feature of which is the party event. A party event occurs when a person in the network has a gathering and invites all of his/her friends, who then also become friends. The Party Models discussed are all based upon this simple assumption. After formulating basic assumptions, various differential equations describing Party Models are derived. Subsequently, several concepts useful for model analysis are defined and briefly explored. These include the concepts of potential, equivalence class form, and degenerate models. The penultimate section considers models for three-person networks in some detail and with numerical illustrations. All extended mathematical arguments are placed in the Mathematical Appendix.
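The party event itself is easy to state in code: all of the host's friends become pairwise friends. A minimal sketch of one event on a hypothetical four-person network:

```python
def party_event(friends, host):
    """A party event: the host's friends all meet and become friends."""
    guests = list(friends[host])
    for i, a in enumerate(guests):
        for b in guests[i + 1:]:
            friends[a].add(b)
            friends[b].add(a)

def n_edges(friends):
    """Number of undirected friendship ties."""
    return sum(len(s) for s in friends.values()) // 2

# Hypothetical network: person 0 knows everyone else, nobody else is linked
friends = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
party_event(friends, host=0)
print(n_edges(friends))  # 6: the star closes into a complete graph
```

Repeated party events drive the network toward cliques, which is why the differential equations in the paper track how quickly such closure accumulates.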


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号