Similar Documents
20 similar documents found (search time: 626 ms).
1.
There has been a great deal of interest recently in the modeling and simulation of dynamic networks, that is, networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a nonnegligible computational burden, is much easier. This article examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail—networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
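The edges-only (Bernoulli) case gives a feel for the kind of approximation described. A minimal sketch, assuming stationarity, a sparse target density p, and a mean relational duration D greater than one step; the function name is ours, not the authors': the dissolution coefficient follows from the per-step survival probability 1 - 1/D, and the formation coefficient is then backed out of the cross-sectional density.

```python
import math

def stergm_edge_approx(density, mean_duration):
    """Approximate dynamic (STERGM-style) edge coefficients from
    cross-sectional density and mean tie duration, Bernoulli case.

    Dissolution: a tie survives a step with probability 1 - 1/D, so
    theta_diss = logit(1 - 1/D) = log(D - 1).
    Formation: chosen so the stationary density matches the
    cross-sectional target, theta_form ~ logit(p) - theta_diss.
    """
    theta_diss = math.log(mean_duration - 1.0)
    theta_form = math.log(density / (1.0 - density)) - theta_diss
    return theta_form, theta_diss

# Example: a sparse network (density 1%) with mean duration 25 steps.
print(stergm_edge_approx(0.01, 25.0))  # formation is strongly negative
```

As the abstract notes, such approximations are best precisely when D is large, i.e., when little changes at each step.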

2.
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and to model degeneracy. Existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimate, while SAMCMC can; for nondegenerate ERGMs, SAMCMC works as well as or better than MCMLE and stochastic approximation. The data and source code used for this article are available online as supplementary materials.
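A toy version of the varying-truncation idea, kept runnable by using a degeneracy-free example (independent dyads with a single edges parameter); the update rule and restart mechanism are the point, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(y, theta):
    """One Metropolis-Hastings toggle for an edges-only 'graph':
    y is a 0/1 vector of dyads, sufficient statistic s(y) = y.sum()."""
    i = rng.integers(len(y))
    delta = 1 - 2 * y[i]                  # +1 adds a tie, -1 removes one
    if np.log(rng.random()) < theta * delta:
        y[i] += delta
    return y

def samcmc(s_obs, n_dyads, n_iter=5000, gain0=0.1, trunc0=2.0):
    """Varying-truncation stochastic approximation MCMC (sketch).
    If the iterate escapes the current truncation set, restart from
    theta0 with a smaller gain and a larger truncation bound."""
    theta, theta0 = 0.0, 0.0
    y = rng.integers(0, 2, n_dyads)
    gain, trunc, k = gain0, trunc0, 0
    for t in range(n_iter):
        y = mh_step(y, theta)
        theta = theta + gain / (t + 1) ** 0.6 * (s_obs - y.sum())
        if abs(theta) > trunc:            # varying truncation: restart
            k += 1
            theta, gain, trunc = theta0, gain0 / (1 + k), trunc0 * (1 + k)
    return theta

# Example: 100 dyads, 20 observed ties; the MLE is logit(0.2) ~ -1.386.
print(samcmc(s_obs=20, n_dyads=100))
```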

3.
This study discusses nine desirable properties that a measure of technical efficiency (TE) needs to satisfy from the perspective of production economics and optimization. Seven data envelopment analysis (DEA) models are compared theoretically against the nine TE criteria. All seven DEA models suffer from the problem of multiple projections, even though a unique projection for efficiency comparison is one of the nine desirable properties. Furthermore, all the DEA models violate the property on aggregation of inputs and outputs. Thus, the seven DEA models do not satisfy all desirable TE properties. In addition, the comparison provides the following guidelines: (a) The additive model violates all desirable TE properties. (b) The Russell measure and SBM (= ERGM, the enhanced Russell graph measure) perform as well as RAM as non-radial measures. If strict monotonicity is of interest, these two models outperform the other DEA models, including RAM; if translation invariance is of interest, RAM is better than the Russell measure and SBM. (c) The radial measures (CCR and BCC) have the property of linear homogeneity. (d) The CCR model is useful for measuring a frontier shift between different periods. (e) If a data set contains negative values, RAM can handle them because it is translation invariant. After examining the desirable TE properties, this study proposes a new approach to deal with the occurrence of multiple projections. The proposed approach includes a test for the occurrence of multiple projections, a mathematical expression of the projection set, and a selection process for a unique reference set, namely the largest one covering all possible reference sets.
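For concreteness, here is a hedged sketch of one of the radial models compared above, the input-oriented CCR envelopment program, solved as a linear program; the data and function name are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form):
        min theta  s.t.  X @ lam <= theta * X[:, j0],
                         Y @ lam >= Y[:, j0],  lam >= 0.
    X is (m inputs x n DMUs), Y is (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0           # minimize theta
    # Inequalities in A_ub @ z <= b_ub form, with z = [theta, lam].
    A_in = np.hstack([-X[:, [j0]], X])        # X lam - theta x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y]) # -Y lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Example: 2 inputs, 1 output, 4 DMUs.
X = np.array([[2.0, 4.0, 3.0, 5.0], [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```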

4.
The conventional exponential family random graph model (ERGM) parameterization leads to a baseline density that is constant in graph order (i.e., number of nodes); this is potentially problematic when modeling multiple networks of varying order. Prior work has suggested a simple alternative that results in constant expected mean degree. Here, we extend this approach by suggesting another alternative parameterization that allows for flexible modeling of scenarios in which baseline expected degree scales as an arbitrary power of order. This parameterization is easily implemented by the inclusion of an edge count/log order statistic along with the traditional edge count statistic in the model specification.
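In symbols, the suggested specification amounts to the following sketch (our notation, not necessarily the paper's):

```latex
% The edge count E(y) enters both alone and multiplied by the log of
% the graph order n.
\[
  P_\theta(Y = y) \;\propto\;
  \exp\!\Big( \theta_1\, E(y) \;+\; \theta_2\, E(y)\log n
              \;+\; \textstyle\sum_k \eta_k\, g_k(y) \Big)
\]
% In the Bernoulli baseline this gives logit(p) = theta_1 + theta_2 log n,
% so p scales like n^{theta_2} and the expected mean degree (n-1)p scales
% roughly like n^{1 + theta_2}: theta_2 = -1 recovers the constant
% mean-degree offset, theta_2 = 0 the constant-density default.
```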

5.
The present article investigates a class of random partitioning distributions of a positive integer. This class, called the limiting conditional compound Poisson (LCCP) distribution, is characterized by the law of small numbers. Accordingly, the LCCP distribution explains the limiting behavior of counts on a sparse contingency table via the frequencies of frequencies. The LCCP distribution is constructed by combinations of conditioning and limiting, and this construction reveals that the LCCP distribution is a subclass of several known classes that depend on a Bell polynomial. It follows that the limiting behavior of a Bell polynomial provides new asymptotics for a sparse contingency table. The Neyman Type A distribution and the Thomas distribution are also revisited as bases of the sparsity.

6.
The theory of sparse stochastic processes offers a broad class of statistical models for signals, far beyond the more classical class of Gaussian processes. In this framework, signals are represented as realizations of random processes that are solutions of linear stochastic differential equations driven by Lévy white noises. Among these processes, generalized Poisson processes based on compound-Poisson noises admit an interpretation as random L-splines with random knots and weights. We demonstrate that every generalized Lévy process—from Gaussian to sparse—can be understood as the limit in law of a sequence of generalized Poisson processes. This enables a new conceptual understanding of sparse processes and suggests simple algorithms for the numerical generation of such objects.
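A minimal sketch of the "simple algorithms for numerical generation" idea in the compound-Poisson case; parameter choices are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def compound_poisson_path(T=10.0, rate=2.0, jump=rng.standard_normal):
    """Simulate a compound-Poisson (generalized Poisson) process on [0, T]:
    random knots from a Poisson process, i.i.d. jump weights. The path is
    piecewise constant, i.e. a random spline for the operator L = d/dt."""
    n = rng.poisson(rate * T)             # number of jumps
    knots = np.sort(rng.uniform(0, T, n)) # jump locations
    weights = jump(n)                     # jump sizes
    def path(t):
        t = np.atleast_1d(t)
        return np.array([weights[knots <= s].sum() for s in t])
    return knots, weights, path

knots, weights, X = compound_poisson_path()
print(X([0.0, 5.0, 10.0]))
# Increasing `rate` while shrinking jump sizes (~1/sqrt(rate)) makes the
# path converge in law to Brownian motion, illustrating the limit result.
```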

7.
High-throughput techniques allow measurement of hundreds of cell components simultaneously. The inference of interactions between cell components from these experimental data facilitates the understanding of complex regulatory processes. Differential equations have been established to model the dynamic behavior of these regulatory networks quantitatively. Traditional regression methods for estimating model parameters usually fail in this setting, since they overfit the data. This is the case even if the focus is on modeling subnetworks of, at most, a few tens of components. In a Bayesian learning approach, this problem is avoided by restricting the search space with prior probability distributions over model parameters. This paper combines differential equation models with a Bayesian approach. We model the periodic behavior of proteins involved in the cell cycle of the budding yeast Saccharomyces cerevisiae with differential equations based on chemical reaction kinetics. Such systems usually converge to a steady state, and much effort has been devoted to explaining the observed periodic behavior. We introduce an approach to infer an oscillating network from experimental data. First, an oscillating core network is learned; in a second step, it is extended with further components using a Bayesian approach. A specifically designed hierarchical prior distribution over interaction strengths prevents overfitting and drives the solutions toward sparse networks with only a few significant interactions. We apply our method to a simulated and a real-world dataset and reveal the main regulatory interactions. Moreover, we are able to reconstruct the dynamic behavior of the network.
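A toy sketch of the overall recipe (ODE dynamics plus a sparsity-inducing prior, fit by MAP), using tanh kinetics and a Laplace prior as stand-ins for the paper's reaction kinetics and hierarchical prior; all names and constants are illustrative:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 50)

def dynamics(x, t, W):
    # Toy kinetics: saturating (tanh) activation minus first-order decay.
    return W @ np.tanh(x) - 0.5 * x

# Ground truth: a sparse 3-node interaction matrix (a feedback loop).
W_true = np.array([[0.0, 2.0, 0.0], [0.0, 0.0, 2.0], [-2.0, 0.0, 0.0]])
data = odeint(dynamics, [1.0, 0.5, 0.8], t, args=(W_true,))
data += 0.02 * rng.standard_normal(data.shape)

def neg_log_posterior(w, lam=0.5):
    W = w.reshape(3, 3)
    x = odeint(dynamics, data[0], t, args=(W,))
    # Squared-error likelihood + Laplace (sparsity-inducing) prior.
    return np.sum((x - data) ** 2) + lam * np.abs(w).sum()

res = minimize(neg_log_posterior, np.zeros(9), method="Powell")
print(np.round(res.x.reshape(3, 3), 2))  # near-zero entries ~ pruned edges
```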

8.
Demixing refers to the challenge of identifying two structured signals given only the sum of the two signals and prior information about their structures. Examples include the problem of separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis, and the problem of decomposing an observed matrix into a low-rank matrix plus a sparse matrix. This paper describes and analyzes a framework, based on convex optimization, for solving these demixing problems, and many others. This work introduces a randomized signal model that ensures that the two structures are incoherent, i.e., generically oriented. For an observation from this model, this approach identifies a summary statistic that reflects the complexity of a particular signal. The difficulty of separating two structured, incoherent signals depends only on the total complexity of the two structures. Some applications include (1) demixing two signals that are sparse in mutually incoherent bases, (2) decoding spread-spectrum transmissions in the presence of impulsive errors, and (3) removing sparse corruptions from a low-rank matrix. In each case, the theoretical analysis of the convex demixing method closely matches its empirical behavior.
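Application (3) is easy to prototype with an off-the-shelf convex solver. A hedged sketch; the lambda weighting follows the usual robust-PCA convention, not necessarily this paper's analysis:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n = 30
# Observed matrix = low-rank + sparse corruption (application (3) above).
L_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
S_true = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.05)
M = L_true + S_true

L, S = cp.Variable((n, n)), cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)   # standard weighting from the RPCA literature
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [L + S == M])
prob.solve()
print("relative recovery error:",
      np.linalg.norm(L.value - L_true) / np.linalg.norm(L_true))
```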

9.
10.
The framework of this paper is the parallelization of a plasticity algorithm that uses an implicit method and an incremental approach. More precisely, we focus on the specific parallel sparse linear algebra algorithms that are the most time-consuming steps in solving such an engineering application efficiently. First, we present a general algorithm which computes an efficient static scheduling of block computations for parallel sparse linear factorization. The associated solver, based on a supernodal fan-in approach, is fully driven by this scheduling. Second, we describe a scalable parallel assembly algorithm based on a distribution of elements induced by the previous distribution of the blocks of the sparse matrix. We give an overview of these algorithms and present performance results on an IBM SP2 for a collection of grid and irregular problems.

11.
The subject of the present paper is a simplified model for a symmetric bistable system with memory or delay, the reference model, which in the presence of noise exhibits a phenomenon similar to what is known as stochastic resonance. The reference model is given by a one-dimensional parametrized stochastic differential equation with point delay, whose basic properties we verify.

With a view to capturing the effective dynamics and, in particular, the resonance-like behavior of the reference model, we construct a simplified or reduced model, the two-state model, first in discrete time, then in the limit of discrete time tending to continuous time. The main advantage of the reduced model is that it enables us to explicitly calculate the distribution of residence times which in turn can be used to characterize the phenomenon of noise-induced resonance.

Drawing on what has been proposed in the physics literature, we outline a heuristic method for establishing the link between the two-state model and the reference model. The resonance characteristics developed for the reduced model can thus be applied to the original model.
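A simulation sketch of a reference-model-like SDE; the double-well drift x - x^3 plus a linear delayed term is our assumption for illustration, since the abstract does not pin the drift down:

```python
import numpy as np

rng = np.random.default_rng(4)

def delayed_bistable_path(beta=-0.5, tau=1.0, sigma=0.5,
                          dt=1e-3, T=50.0, x0=1.0):
    """Euler-Maruyama for dX = (X - X^3 + beta*X(t - tau)) dt + sigma dW,
    with a constant pre-history X(s) = x0 for s <= 0 (drift assumed)."""
    lag, n = int(tau / dt), int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x_del = x[k - lag] if k >= lag else x0   # point delay
        drift = x[k] - x[k] ** 3 + beta * x_del
        x[k + 1] = (x[k] + drift * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

path = delayed_bistable_path()
# Crude residence-time proxy: count transitions between the two wells.
well = np.sign(path[np.abs(path) > 0.5])
print("well switches:", int(np.count_nonzero(np.diff(well))))
```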

12.
The soliton physics for the propagation of waves is represented by a stochastic model in which the particles of the wave can jump ahead according to some probability distribution. We demonstrate the presence of a steady state (stationary distribution) for the wavelength. It is shown that the stationary distribution is a convolution of geometric random variables. Approximations to the stationary distribution are investigated for a large number of particles. The model is rich and includes Gaussian limit distributions for the (suitably normalized) wavelength. A sufficient Lindeberg-like condition identifies a class of solitons with normal behavior. Our general model includes, among many other reasonable alternatives, an exponential aging soliton, of which the uniform soliton is one special subcase (with Gumbel's stationary distribution). With the proper interpretation, our model also includes the deterministic model proposed in Takahashi and Satsuma [A soliton cellular automaton, J Phys Soc Japan 59 (1990), 3514–3519].
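The "convolution of geometric random variables" structure is easy to probe by simulation. A sketch with an i.i.d. success-probability schedule, our choice for illustration (the paper derives the probabilities from the jump distribution), for which the Lindeberg-like condition trivially holds:

```python
import numpy as np

rng = np.random.default_rng(5)

def wavelength_sample(ps, size=20000):
    """Stationary wavelength as a convolution (sum) of independent
    geometric random variables with success probabilities `ps`."""
    return sum(rng.geometric(p, size) for p in ps)

m = 50                      # number of independent geometric components
ps = np.full(m, 0.3)        # illustrative i.i.d. schedule
w = wavelength_sample(ps)
mean, std = w.mean(), w.std()
# For i.i.d. components the Lindeberg-like condition holds, so the
# standardized wavelength should be close to normal (skewness near 0):
print(f"mean {mean:.1f} (theory {m / 0.3:.1f}), "
      f"skewness {float(((w - mean) ** 3).mean() / std ** 3):.3f}")
```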

13.
A necessary step in any regression analysis is checking the fit of the model to the data. Graphical methods are often employed to allow visualization of features that the data should exhibit if the model holds. Judging whether such features are present or absent in any particular diagnostic plot can be problematic. In this article I take a Bayesian approach to aid in this task. The “unusualness” of some data with respect to a model can be assessed using the predictive distribution of the data under the model; an alternative is to use the posterior predictive distribution. Both approaches can be given a sampling interpretation that can then be used to enhance regression diagnostic plots such as marginal model plots.
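A minimal sketch of the posterior predictive idea for a regression diagnostic, using a flat-prior normal approximation and a lag-1 residual correlation as the discrepancy; both choices are ours:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data with a mild nonlinearity that the straight-line model misses.
x = np.linspace(0, 1, 60)
y = 1.0 + 2.0 * x + 0.5 * x ** 2 + 0.1 * rng.standard_normal(60)

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (len(x) - 2)
cov = s2 * np.linalg.inv(X.T @ X)

def discrepancy(yy):
    """Lag-1 autocorrelation of residuals: smooth misfit pushes it up."""
    r = yy - X @ np.linalg.lstsq(X, yy, rcond=None)[0]
    return np.corrcoef(r[:-1], r[1:])[0, 1]

# Posterior predictive simulation: draw beta, replicate data, compare.
obs = discrepancy(y)
reps = []
for _ in range(500):
    beta = rng.multivariate_normal(beta_hat, cov)
    y_rep = X @ beta + np.sqrt(s2) * rng.standard_normal(len(x))
    reps.append(discrepancy(y_rep))
print("posterior predictive p-value:", np.mean(np.array(reps) >= obs))
```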

14.
The core of an economy with multilateral environmental externalities
When environmental externalities are international — i.e. transfrontier — they most often are multilateral and embody public good characteristics. Improving upon inefficient laissez-faire equilibria requires voluntary cooperation, for which the game-theoretic core concept provides optimal outcomes with interesting properties against free riding. To define the core, however, the characteristic function of the game associated with the economy (which specifies the payoff achievable by each possible coalition of players — here, the countries) must also reflect in each case the behavior of the players which are not members of the coalition. This has long been a disputed issue in the theory of the core of economies with externalities. Among the several assumptions that can be made about this behavior, a plausible one is defined in this paper, under which the core of the game is shown to be nonempty. The proof is constructive in the sense that it exhibits a strategy (specifying an explicit coordinated abatement policy and including financial transfers) that has the desired property of nondomination by any proper coalition of countries, given the assumed behavior of the other countries. This strategy is also shown to have an equilibrium interpretation in the economic model.
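The core constraints themselves are simple to state. A sketch with an invented three-country characteristic function (all numbers hypothetical), showing the nondomination check:

```python
from itertools import combinations

# Hypothetical characteristic function for 3 countries (payoffs from
# coordinated abatement; numbers invented for illustration only).
v = {(1,): 1.0, (2,): 1.2, (3,): 0.8,
     (1, 2): 3.0, (1, 3): 2.5, (2, 3): 2.6, (1, 2, 3): 5.0}

def in_core(x):
    """x maps country -> payoff; check efficiency and coalitional
    rationality (no coalition can improve on its own by deviating)."""
    players = (1, 2, 3)
    if abs(sum(x.values()) - v[players]) > 1e-9:   # efficiency
        return False
    for r in range(1, len(players)):
        for S in combinations(players, r):
            if sum(x[i] for i in S) < v[S] - 1e-9: # nondomination by S
                return False
    return True

print(in_core({1: 1.8, 2: 1.8, 3: 1.4}))  # True: dominates every coalition
print(in_core({1: 3.0, 2: 1.0, 3: 1.0}))  # False: {2,3} gets 2.0 < 2.6
```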

15.
Recently, 1-bit compressive sensing (1-bit CS) has been studied in the field of sparse signal recovery. Since the amplitude information of sparse signals in 1-bit CS is not available, it is often the support or the sign of a signal that can be exactly recovered with a decoding method. We first show that a necessary assumption (one that has been overlooked in the literature) should be made for some existing theories and discussions of 1-bit CS. Without such an assumption, the solution found by some existing decoding algorithms might be inconsistent with the 1-bit measurements. This motivates us to pursue a new direction and develop uniform and nonuniform recovery theories for 1-bit CS with a new decoding method which always generates a solution consistent with the 1-bit measurements. We focus on an extreme case of 1-bit CS, in which the measurements capture only the sign of the product of a sensing matrix and a signal. We show that the 1-bit CS model can be reformulated equivalently as an ℓ0-minimization problem with linear constraints. This reformulation naturally leads to a new linear-program-based decoding method, referred to as 1-bit basis pursuit, which is remarkably different from existing formulations. It turns out that the uniqueness condition for the solution of the 1-bit basis pursuit yields the so-called restricted range space property (RRSP) of the transposed sensing matrix. This concept provides a basis for developing sign recovery conditions for sparse signals through 1-bit measurements. We prove that if the sign of a sparse signal can be exactly recovered from 1-bit measurements with 1-bit basis pursuit, then the sensing matrix must admit a certain RRSP, and that if the sensing matrix admits a slightly enhanced RRSP, then the sign of a k-sparse signal can be exactly recovered with 1-bit basis pursuit.
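A hedged sketch of a 1-bit decoding LP of this flavor; the exact constraints of the paper's 1-bit basis pursuit differ, so we use the common sign-consistency formulation with a normalization ruling out x = 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

def one_bit_bp(A, y):
    """LP sketch (assumed form, not the paper's exact program):
        min ||x||_1  s.t.  y_i (Ax)_i >= 0,  sum_i y_i (Ax)_i = m,
    the equality excluding the trivial x = 0. Solved by splitting
    x = p - q with p, q >= 0."""
    m, n = A.shape
    DA = y[:, None] * A
    c = np.ones(2 * n)                         # ||x||_1 = sum(p + q)
    A_ub = np.hstack([-DA, DA])                # -DA p + DA q <= 0
    b_ub = np.zeros(m)
    A_eq = np.hstack([DA.sum(0), -DA.sum(0)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[float(m)],
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

n, m, k = 50, 200, 3
x_true = np.zeros(n); x_true[:k] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)                        # only signs are observed
x_hat = one_bit_bp(A, y)
# In this easy regime the support and signs are typically recovered:
print("support:", set(np.argsort(-np.abs(x_hat))[:k]) == {0, 1, 2})
print("signs:", np.array_equal(np.sign(x_hat[:k]), np.sign(x_true[:k])))
```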

16.

In model-based clustering, mixture models are used to group data points into clusters. A useful concept, introduced for Gaussian mixtures by Malsiner Walli et al. (Stat Comput 26:303–324, 2016), is the sparse finite mixture, where the prior on the weight distribution of a mixture with K components is chosen so that, a priori, the number of clusters in the data is random and is allowed to be smaller than K with high probability. The number of clusters is then inferred a posteriori from the data. The present paper makes the following contributions in the context of sparse finite mixture modelling. First, it is illustrated that the concept of a sparse finite mixture is very generic and easily extended to cluster various types of non-Gaussian data, in particular discrete data and continuous multivariate data arising from non-Gaussian clusters. Second, sparse finite mixtures are compared to Dirichlet process mixtures with respect to their ability to identify the number of clusters. For both model classes, a random hyperprior is considered for the parameters determining the weight distribution. By suitable matching of these priors, it is shown that the choice of this hyperprior is far more influential on the cluster solution than whether a sparse finite mixture or a Dirichlet process mixture is used.
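The influence of the weight hyperparameter is easy to see a priori. A sketch in which K, e0, and the sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

def prior_clusters(K=10, e0=0.01, N=200, draws=2000):
    """Prior distribution of the number of occupied clusters in a sparse
    finite mixture: weights ~ Dirichlet(e0, ..., e0) with small e0, so
    most of the K components are empty a priori."""
    counts = []
    for _ in range(draws):
        w = rng.dirichlet(np.full(K, e0))
        z = rng.choice(K, size=N, p=w)
        counts.append(len(np.unique(z)))
    return np.bincount(counts, minlength=K + 1)[1:] / draws

for e0 in (0.01, 1.0):
    print(f"e0={e0}: P(#clusters) = {np.round(prior_clusters(e0=e0), 2)}")
# With e0 = 0.01 the prior concentrates on a handful of clusters even
# though K = 10; with e0 = 1 nearly all components are occupied,
# illustrating how influential this hyperparameter is.
```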

17.
Recent attempts to assess the performance of SSVM algorithms for unconstrained minimization problems differ in their evaluations from earlier assessments. Nevertheless, the new experiments confirm earlier observations that, on certain types of problems, the SSVM algorithms are far superior to other variable metric methods. This paper presents a critical review of these recent assessments and discusses some current interpretations advanced to explain the behavior of SSVM methods. The paper examines the new empirical results in light of the original self-scaling theory and introduces a new interpretation of these methods based on an L-function model of the objective function. This interpretation sheds new light on the performance characteristics of the SSVM methods, which contributes to the understanding of their behavior and helps in characterizing classes of problems that can benefit from the self-scaling approach. The subject of this paper was presented at the ORSA/TIMS National Meeting in New York, 1978. This work was done while the author was with the Analysis Research Group, Xerox Palo Alto Research Center, Palo Alto, California.
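For readers unfamiliar with self-scaling variable metric (SSVM) methods, here is a sketch of the self-scaling step grafted onto a plain BFGS inverse-Hessian update; the line search and test problem are illustrative, and this is the textbook Oren-Luenberger scaling rather than the exact variants assessed here:

```python
import numpy as np

def ssvm_bfgs(f, grad, x0, iters=50):
    """SSVM iteration, BFGS-style sketch: before each update, the
    inverse-Hessian approximation H is rescaled by the Oren-Luenberger
    factor gamma = s'y / y'Hy, which keeps H well conditioned on badly
    scaled problems."""
    n = len(x0)
    x, H = np.asarray(x0, float), np.eye(n)
    for _ in range(iters):
        g = grad(x)
        d = -H @ g
        t = 1.0                                 # crude Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        y = grad(x + s) - g
        if s @ y > 1e-12:
            H *= (s @ y) / (y @ H @ y)          # self-scaling factor gamma
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x = x + s
    return x

# Badly scaled quadratic: the case where self-scaling is said to help.
A = np.diag([1.0, 1e4])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(ssvm_bfgs(f, grad, [1.0, 1.0]))   # converges to the origin
```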

18.
This paper discusses how to use fuzzy targets in the target-based model for decision making under uncertainty. After introducing a target-based interpretation of the expected value, under which this model is shown to implicitly assume a neutral attitude toward the target, we examine the use of fuzzy targets that reflect different attitudes of the decision maker toward target selection. We also discuss situations in which the decision maker's attitude toward the target may change with the state of nature. In particular, it is shown that the target-based approach provides a unified way to solve fuzzy decision-making problems with uncertainty about the state of nature and imprecision about payoffs. Several numerical examples illustrate the discussed issues.
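A sketch of the target-based value with a fuzzy target. Turning the membership function into a target distribution by normalization is one common convention, assumed here; all names and numbers are illustrative:

```python
import numpy as np

def target_utility(payoffs, membership, grid):
    """Target-based utility U(x) = P(T <= x): the fuzzy target's
    membership function is normalized into a density over `grid`
    (one common convention; other attitude models reshape it first)."""
    seg = np.diff(grid) * (membership[:-1] + membership[1:]) / 2.0
    cdf = np.concatenate([[0.0], np.cumsum(seg)])
    cdf /= cdf[-1]                       # normalize total mass to 1
    return np.interp(payoffs, grid, cdf)

grid = np.linspace(0, 100, 1001)
tri = np.clip(1 - np.abs(grid - 60) / 20, 0, 1)   # triangular target ~60
states_p = np.array([0.3, 0.5, 0.2])              # P(state of nature)
acts = {"safe": [55, 55, 55], "risky": [90, 50, 20]}
for name, pay in acts.items():
    v = states_p @ target_utility(np.array(pay, float), tri, grid)
    print(name, round(float(v), 3))
# A symmetric target yields the neutral (expected-value-like) attitude;
# skewing the membership left or right models optimism or pessimism.
```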

19.
Zhou, Shenglong; Pan, Lili; Xiu, Naihua. Numerical Algorithms 2021, 88(4): 1541-1570.

As a tractable approach, regularization is frequently adopted in sparse optimization. This gives rise to regularized optimization, which aims to minimize the ℓ0 norm or its continuous surrogates that characterize the sparsity. From the continuity of surrogates to the discreteness of the ℓ0 norm, the most challenging model is ℓ0-regularized optimization. There is an impressive body of work on the development of numerical algorithms to overcome this challenge. However, most of the developed methods only ensure that either the (sub)sequence converges to a stationary point from the deterministic optimization perspective, or that the distance between each iterate and any given sparse reference point is bounded by an error bound in the sense of probability. In this paper, we develop a Newton-type method for ℓ0-regularized optimization and prove that the generated sequence converges to a stationary point globally and quadratically under standard assumptions, theoretically explaining why our method can perform surprisingly well.
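The paper's Newton-type method is not reproduced here; as a point of reference, a sketch of the simpler proximal-gradient (iterative hard thresholding) baseline for the same ℓ0-regularized model:

```python
import numpy as np

rng = np.random.default_rng(9)

def l0_prox_gradient(A, b, lam, step=None, iters=300):
    """Proximal gradient (iterative hard thresholding) for
        min_x 0.5*||Ax - b||^2 + lam*||x||_0.
    The prox of t*lam*||.||_0 zeroes entries below sqrt(2*t*lam)."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||_2^2
    x = np.zeros(n)
    thresh = np.sqrt(2.0 * step * lam)
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)         # gradient step
        x[np.abs(x) < thresh] = 0.0              # hard-thresholding prox
    return x

n, m, k = 100, 60, 4
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = l0_prox_gradient(A, b, lam=0.01)
# In this easy regime the support is typically recovered exactly:
print("support:", np.flatnonzero(x_hat), "vs", np.flatnonzero(x_true))
```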

20.
An algorithm is presented for estimating the density distribution in a cross section of an object from X-ray data, which in practice is unavoidably noisy. The data give rise to a large sparse system of inconsistent equations, not untypically 10^5 equations in 10^4 unknowns, with only about 1% of the coefficients non-zero. Using the physical interpretation of the equations, each equality can in principle be replaced by a pair of inequalities giving the limits within which we believe the sum must lie. An algorithm is proposed for solving this set of inequalities. The algorithm is basically a relaxation method, and a finite convergence result is proved. In spite of the large size of the system, practical solution on a computer is possible in the application area of interest because of the simple geometry of the problem and the redundancy of equations obtained from nearby X-rays. The algorithm has been implemented and is demonstrated by actual reconstructions.
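A sketch of the relaxation idea: cyclic projection onto the violated half-spaces of the interval system. The sizes, tolerance bands, and sparsity pattern are illustrative:

```python
import numpy as np

rng = np.random.default_rng(10)

def relaxation_solve(A, lo, hi, sweeps=200, lam=1.0):
    """Row-action relaxation for the interval system lo <= A x <= hi
    (the pair of inequalities per X-ray described above). Each violated
    inequality is repaired by an orthogonal projection onto its
    half-space, scaled by the relaxation parameter lam in (0, 2)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norm2 = (A ** 2).sum(1)
    for _ in range(sweeps):
        for i in range(m):
            if row_norm2[i] == 0.0:
                continue                  # skip empty rows
            r = A[i] @ x
            if r > hi[i]:                 # project onto a_i x <= hi_i
                x -= lam * (r - hi[i]) / row_norm2[i] * A[i]
            elif r < lo[i]:               # project onto a_i x >= lo_i
                x += lam * (lo[i] - r) / row_norm2[i] * A[i]
    return x

# Tiny reconstruction: a sparse nonnegative system with the exact sums
# bracketed by +/- 5% tolerance bands.
n, m = 50, 120
A = (rng.random((m, n)) < 0.1).astype(float)   # ~1 in 10 coefficients set
x_true = rng.random(n)
b = A @ x_true
x_hat = relaxation_solve(A, 0.95 * b, 1.05 * b)
viol = np.maximum(np.maximum(A @ x_hat - 1.05 * b, 0.95 * b - A @ x_hat),
                  0.0).max()
print("max interval violation:", float(viol))   # ~0 after convergence
```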
