Similar Literature
20 similar articles found (search time: 15 ms)
1.
    
The statistical inference of the state variable and the drift function of stochastic differential equations (SDEs) from sparsely sampled observations is discussed herein. A variational approach is used to approximate the distribution over the unknown path of the SDE conditioned on the observations. This approach also provides approximations for the otherwise intractable likelihood of the drift. The method is combined with a nonparametric Bayesian approach based on a Gaussian process prior over drift functions.
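A minimal sketch of the generative side of this setting: an SDE simulated with Euler–Maruyama, with the drift then recovered by plain Gaussian-process regression on the increments. The OU drift, kernel, and sample sizes below are illustrative assumptions; this is a crude baseline, not the paper's variational scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 1-D SDE dX = f(X) dt + sigma dW with Euler-Maruyama.
# The drift is a toy Ornstein-Uhlenbeck choice, f(x) = -2x.
f_true = lambda x: -2.0 * x
sigma, dt, n = 0.5, 0.01, 20000
x = np.empty(n); x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] + f_true(x[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Crude nonparametric drift estimate: Gaussian-process regression on the
# increments, treating dX/dt as f(X) plus noise of variance sigma^2/dt.
idx = rng.choice(n - 1, 1000, replace=False)
X, y = x[:-1][idx], (np.diff(x) / dt)[idx]
kern = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.25)
K = kern(X, X) + (sigma ** 2 / dt) * np.eye(len(X))
xs = np.linspace(-0.6, 0.6, 7)
f_post = kern(xs, X) @ np.linalg.solve(K, y)   # GP posterior mean of the drift
print(np.round(f_post, 2))
```

On this toy problem the posterior mean should decrease roughly linearly through zero, recovering the sign and slope of the true drift where the path spends its time.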

2.
To minimize instrumentally induced systematic errors, cosmic microwave background (CMB) anisotropy experiments measure temperature differences across the sky using pairs of horn antennas; a temperature map is then recovered from the temperature differences obtained in the sky survey through a map-making procedure. Inspecting and calibrating residual systematic errors in the recovered temperature maps is important, as most previous studies in cosmology are based on these maps. By analyzing pixel-ring coupling and the latitude dependence of CMB temperatures, we find a notable systematic deviation from CMB Gaussianity in the released Wilkinson Microwave Anisotropy Probe (WMAP) maps. The detected deviation cannot be explained by the best-fit LCDM cosmological model at a confidence level above 99% and cannot be ignored in precision cosmology. Supported by the National Natural Science Foundation of China (Grant No. 10533020), the National Basic Research Program of China (Grant No. 2009CB-824800), and the Directional Research Project of the Chinese Academy of Sciences (Grant No. KJCX2-YW-T03). Contributed by LI TiPei.

3.
    
Efficiently accessing the information contained in non-linear and high dimensional probability distributions remains a core challenge in modern statistics. Traditionally, estimators that go beyond point estimates are either categorized as Variational Inference (VI) or Markov-Chain Monte-Carlo (MCMC) techniques. While MCMC methods that utilize the geometric properties of continuous probability distributions to increase their efficiency have been proposed, VI methods rarely use the geometry. This work aims to fill this gap and proposes geometric Variational Inference (geoVI), a method based on Riemannian geometry and the Fisher information metric. It is used to construct a coordinate transformation that relates the Riemannian manifold associated with the metric to Euclidean space. The distribution, expressed in the coordinate system induced by the transformation, takes a particularly simple form that allows for an accurate variational approximation by a normal distribution. Furthermore, the algorithmic structure allows for an efficient implementation of geoVI which is demonstrated on multiple examples, ranging from low-dimensional illustrative ones to non-linear, hierarchical Bayesian inverse problems in thousands of dimensions.
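A one-dimensional illustration of the geometric idea (not the geoVI algorithm itself): for a Poisson rate λ the Fisher metric is 1/λ, and the coordinate ξ = 2√λ pulls it back to the Euclidean metric, in which a Gaussian variational approximation is far more natural. Everything below belongs to this toy example, not to the paper.

```python
import numpy as np

# Fisher metric for a Poisson rate lam is g(lam) = 1/lam. The coordinate
# xi(lam) = 2*sqrt(lam) has dxi/dlam = 1/sqrt(lam), so in xi the metric is
# g_xi = g(lam) / (dxi/dlam)^2 = 1 everywhere, i.e. Euclidean.
lam = np.linspace(0.5, 10.0, 1000)
xi = 2.0 * np.sqrt(lam)
g_lam = 1.0 / lam
dxi = np.gradient(xi, lam)                    # numerical dxi/dlam
g_xi = g_lam / dxi ** 2                       # metric after the coordinate change
print(float(g_xi.min()), float(g_xi.max()))   # both close to 1
```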

4.
    
In this paper, we propose to leverage the Bayesian uncertainty information encoded in parameter distributions to inform the learning procedure for Bayesian models. We derive a first-principles stochastic differential equation for the training dynamics of the mean and uncertainty parameters in the variational distributions. On the basis of the derived Bayesian stochastic differential equation, we apply stochastic optimal control to the variational parameters to obtain individually controlled learning rates. We show that the resulting optimizer, StochControlSGD, is significantly more robust to large learning rates and can adaptively and individually control the learning rates of the variational parameters. The evolution of the control suggests separate and distinct dynamical behaviours in the training regimes for the mean and uncertainty parameters in Bayesian neural networks.

5.
    
In statistical inference, the information-theoretic performance limits can often be expressed in terms of a statistical divergence between the underlying statistical models (e.g., in binary hypothesis testing, the error probability is related to the total variation distance between the statistical models). As the data dimension grows, computing the statistics involved in decision-making and the attendant performance limits (divergence measures) faces complexity and stability challenges. Dimensionality reduction addresses these challenges at the expense of compromising performance (the divergence is reduced by the data-processing inequality). This paper considers linear dimensionality reduction such that the divergence between the models is maximally preserved. Specifically, this paper focuses on Gaussian models, for which we investigate discriminant analysis under five f-divergence measures (Kullback–Leibler, symmetrized Kullback–Leibler, Hellinger, total variation, and χ2). We characterize the optimal design of the linear transformation of the data onto a lower-dimensional subspace for zero-mean Gaussian models and employ numerical algorithms to find the design for general Gaussian models with non-zero means. There are two key observations for zero-mean Gaussian models. First, projections are not necessarily along the largest modes of the covariance matrix of the data, and, in some situations, they can even be along the smallest modes. Second, under specific regimes, the optimal design of the subspace projection is identical under all the f-divergence measures considered, rendering a degree of universality to the design, independent of the inference problem of interest.
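The zero-mean case can be sketched numerically: project two zero-mean Gaussians onto a one-dimensional subspace and search for the direction that preserves the most KL divergence. The covariances below are illustrative assumptions, chosen so that the optimal projection aligns with the smallest mode of the data covariance, echoing the first observation above.

```python
import numpy as np

# Two zero-mean Gaussian models N(0, S0) and N(0, S1); they are made up
# so that they differ mostly along the *smallest* mode of S0.
S0 = np.diag([4.0, 1.0])
S1 = np.diag([4.1, 3.0])

def kl_1d(v0, v1):
    """KL( N(0, v0) || N(0, v1) ) for scalar variances."""
    r = v0 / v1
    return 0.5 * (r - np.log(r) - 1.0)

def projected_kl(theta):
    a = np.array([np.cos(theta), np.sin(theta)])   # unit projection vector
    return kl_1d(a @ S0 @ a, a @ S1 @ a)

thetas = np.linspace(0.0, np.pi, 181)
best = max(thetas, key=projected_kl)
print(np.degrees(best))   # optimal direction, in degrees from the first axis
```

Here the best direction is the second axis, i.e. the smallest eigenvector of S0, because that is where the two models actually disagree.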

6.
A digitalized temperature map is recovered from the first-light sky survey image published by the Planck team, from which an angular power spectrum of the cosmic microwave background (CMB) is derived. The amplitudes of the low multipoles (low-l) measured from the preliminary Planck power spectrum are significantly lower than those reported by the WMAP team. Possible systematic effects are insufficient to explain the observed low-l differences.

7.
(Cited 1 time: 0 self-citations, 1 by others)

8.
SUN Dong, LIU Dan, ZHAO Jian. Applied Acoustics (《应用声学》), 2014, 22(6): 1711-1713
To address the lack of effective methods for detecting spots on printed products, a spot-detection method based on minimum-risk Bayesian decision theory is proposed, building on the principles of statistical decision theory and Bayesian analysis together with techniques from pattern recognition and machine vision. Taking cigarette-label images as an example, experiments verify the accuracy and effectiveness of the proposed algorithm: it effectively detects black spots on cigarette labels, and the segmented spot images largely preserve the original shapes. Finally, further improvements are proposed.

9.
    
When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound, Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, and a limited memory of past observations, and, consequently, a fluctuating bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a ‘resource-rational’ (or ‘bounded-rational’) picture of seemingly irrational human cognition.
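As a point of reference for the costly-inference framework, the unpenalized Bayesian baseline for the Bernoulli case is a one-liner: with a uniform Beta(1,1) prior, the posterior after h heads and t tails is Beta(1+h, 1+t). A minimal sketch, with a made-up flip sequence:

```python
from fractions import Fraction

# Uniform Beta(1,1) prior on the coin bias p; after h heads and t tails the
# posterior is Beta(1 + h, 1 + t), whose mean is Laplace's rule of succession.
def posterior_mean(flips):
    h = sum(flips)
    t = len(flips) - h
    return Fraction(1 + h, 2 + h + t)

flips = [1, 0, 0, 1, 0, 0, 0, 1]      # hypothetical data: 3 heads, 5 tails
print(posterior_mean(flips))          # 2/5
```

The paper's precision and unpredictability costs deform exactly this update, producing the biases and fluctuations described above.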

10.
Expanding on Remark 5.2.7 of Segre (Segre, G. (2002). Algorithmic Information Theoretic Issues in Quantum Mechanics, PhD Thesis, Dipartimento di Fisica Nucleare e Teorica, Pavia, Italy. quant-ph/0110018), noncommutative Bayesian statistical inference from one wedge of a bifurcate Killing horizon is analyzed, with attention to its interrelation with the Unruh effect.

11.
    
Formal Bayesian comparison of two competing models, based on the posterior odds ratio, amounts to estimation of the Bayes factor, which is equal to the ratio of the two corresponding marginal data density values. In models with a large number of parameters and/or latent variables, these are expressed by high-dimensional integrals, which are often computationally infeasible. Therefore, other methods of evaluating the Bayes factor are needed. In this paper, a new method of estimating the Bayes factor is proposed. Simulation examples confirm good performance of the proposed estimators. Finally, these new estimators are used to formally compare different hybrid Multivariate Stochastic Volatility–Multivariate Generalized Autoregressive Conditional Heteroskedasticity (MSV-MGARCH) models, which have a large number of latent variables. The empirical results show, among other things, that the validity of reducing the hybrid MSV-MGARCH model to the MGARCH specification depends on the analyzed data set as well as on prior assumptions about model parameters.
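In a conjugate toy case the Bayes factor needs no high-dimensional integration at all, which makes such cases useful sanity checks for any estimator. The sketch below is that textbook case, not the paper's MSV-MGARCH estimator: a fair coin against a coin with a uniform prior on its bias.

```python
from math import comb

# Marginal likelihood of h heads in n flips under two models:
#  M0: fair coin (p = 1/2);  M1: unknown bias with a uniform Beta(1,1) prior.
# Under M1 the Beta integral gives P(h | M1) = 1 / (n + 1) for every h.
def marg_lik_fair(h, n):
    return comb(n, h) * 0.5 ** n

def marg_lik_uniform(h, n):
    return 1.0 / (n + 1)

h, n = 9, 10
bayes_factor = marg_lik_uniform(h, n) / marg_lik_fair(h, n)
print(bayes_factor)    # > 1: the data favour the biased-coin model
```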

12.
    
This paper compares two models predicting elastic and viscoelastic properties of large arteries. Models compared include a Kelvin (standard linear) model and an extended 2-term exponential linear viscoelastic model. Models were validated against in-vitro data from the ovine thoracic descending aorta and the carotid artery. Measurements of blood pressure data were used as an input to predict vessel cross-sectional area. Material properties were predicted by estimating a set of model parameters that minimize the difference between computed and measured values of the cross-sectional area. The model comparison was carried out using generalized analysis of variance type statistical tests. For the thoracic descending aorta, results suggest that the extended 2-term exponential model does not improve the ability to predict the observed cross-sectional area data, while for the carotid artery the extended model does statistically provide an improved fit to the data. This is in agreement with the fact that the aorta displays more complex nonlinear viscoelastic dynamics, while the stiffer carotid artery mainly displays simpler linear viscoelastic dynamics.
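A minimal simulation of a Kelvin (standard linear solid) pressure-area relation illustrates why such a model captures viscoelasticity: the area response lags the pressure, producing a hysteresis loop. The parameter values and forcing below are illustrative, not fitted ovine data.

```python
import numpy as np

# Kelvin (standard linear solid) pressure-area relation:
#   A + tau_e * dA/dt = C * (p + tau_s * dp/dt)
C, tau_s, tau_e = 1.0, 0.05, 0.2
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
p = 1.0 + 0.2 * np.sin(2 * np.pi * t)     # toy pulsatile pressure, period 1
dp = np.gradient(p, dt)

A = np.empty_like(t)
A[0] = C * p[0]
for i in range(len(t) - 1):               # explicit Euler integration
    A[i + 1] = A[i] + dt * (C * (p[i] + tau_s * dp[i]) - A[i]) / tau_e

# Viscoelastic lag produces a pressure-area hysteresis loop; its signed area
# over one cycle (after the initial transient) is nonzero when tau_s != tau_e.
loop_area = float(np.sum(A[1000:2000] * dp[1000:2000]) * dt)
print(loop_area)
```

A purely elastic wall (tau_s = tau_e) would trace the same pressure-area curve up and down, giving zero loop area.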

13.
    

14.
One of the characteristics of the “Matter Bounce” scenario, an alternative to cosmological inflation for producing a scale-invariant spectrum of primordial adiabatic fluctuations on large scales, is a break in the power spectrum at a characteristic scale, below which the spectral index changes from n_s = 1 to n_s = 3. We study the constraints which current cosmological data place on the location of such a break, and more generally on the position of the break and the slope at length scales smaller than the break. The observational data we use include the WMAP five-year data set (WMAP5), other CMB data from BOOMERanG, CBI, VSA, and ACBAR, large-scale structure data from the Sloan Digital Sky Survey (SDSS, their luminous red galaxies sample), Type Ia Supernovae data (the “Union” compilation), and the Sloan Digital Sky Survey Lyman-α forest power spectrum (Lyα) data. We employ the Markov Chain Monte Carlo method to constrain the features in the primordial power spectrum which are motivated by the matter bounce model. We give an upper limit on the length scale where the break in the spectrum occurs.
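The break can be written as a piecewise power law: a dimensionless spectrum that is flat (n_s = 1) for wavenumbers below the break and rises as k² (n_s = 3) on smaller length scales, continuous at the break. The sketch below uses made-up stand-ins for the break scale and amplitude; in the paper these are the parameters constrained by MCMC.

```python
import numpy as np

# Piecewise primordial spectrum: scale-invariant (n_s = 1) for k <= k_break,
# steepening to n_s = 3 (~k^2 in the dimensionless spectrum) for k > k_break,
# with the amplitude matched at the break so the spectrum is continuous.
def power(k, k_break=1e-3, amp=1.0):
    k = np.asarray(k, dtype=float)
    return np.where(k <= k_break, amp, amp * (k / k_break) ** 2)

ks = np.array([1e-4, 1e-3, 1e-2])
print(power(ks))
```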

15.
Choosing three phenomenological models of the dynamical cosmological term Λ, viz. Λ ∼ (ȧ/a)², Λ ∼ ä/a, and Λ ∼ ρ, where a is the cosmic scale factor, it is shown by numerical analysis of the governing non-linear differential equations that the three models are equivalent for the flat Universe (k = 0) and for an arbitrary non-linear equation of state. Evolution plots of the dynamical cosmological term Λ versus time t, and of the cosmic scale factor a versus t, are drawn for k = 0, +1. A qualitative analysis of the plots supports the idea of inflation and hence an expanding Universe.

16.
    
One of the key challenges in systems biology and the molecular sciences is how to infer regulatory relationships between genes and proteins using high-throughput omics datasets. Although a wide range of methods have been designed to reverse engineer regulatory networks, recent studies show that the inferred network may depend on the variable order in the dataset. In this work, we develop a new algorithm, called the statistical path-consistency algorithm (SPCA), to solve the problem of variable-order dependence. This method generates a number of different variable orders using random samples, and then infers a network by using the path-consistency algorithm based on each variable order. We propose measures to determine the edge weights using the corresponding edge weights in the inferred networks, and choose the edges with the largest weights as the putative regulations between genes or proteins. The developed method is rigorously assessed on the six benchmark networks of the DREAM challenges, the mitogen-activated protein (MAP) kinase pathway, and a cancer-specific gene regulatory network. The inferred networks are compared with those obtained by two up-to-date inference methods. The accuracy of the inferred networks shows that the developed method is effective for discovering molecular regulatory systems.
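The aggregation idea can be sketched with a much simpler single-run inference than the path-consistency algorithm: infer a network repeatedly on resampled data (standing in here for SPCA's random variable orders), and weight each edge by how often it is recovered. The chain data, the partial-correlation threshold, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data from a chain X0 -> X1 -> X2, so the only direct links are
# (0,1) and (1,2); the 0-2 association is indirect.
x0 = rng.standard_normal(200)
x1 = x0 + 0.3 * rng.standard_normal(200)
x2 = x1 + 0.3 * rng.standard_normal(200)
data = np.column_stack([x0, x1, x2])

def infer_once(d):
    """One network inference: threshold the partial correlations."""
    prec = np.linalg.inv(np.corrcoef(d.T))
    pc = -prec / np.sqrt(np.outer(np.diag(prec), np.diag(prec)))
    return np.abs(pc) > 0.3

# Aggregate over repeated runs on bootstrap-resampled data; weight each edge
# by how often it is recovered and keep high-weight edges as putative links.
runs = 50
counts = np.zeros((3, 3))
for _ in range(runs):
    counts += infer_once(data[rng.choice(200, 200), :])
weights = counts / runs
print(weights[0, 1], weights[0, 2])   # direct edge strong, indirect edge weak
```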

17.
Measurements of the near-ground refractive-index structure constant at a complex-terrain site in the Hefei area from 1997 to 2000 are statistically analyzed. Results are given for different underlying surfaces, seasons, and heights. In daytime, the strength ranks, from strongest to weakest: June, May, April, September, January. Daytime values over water are smaller than over grassland, while nighttime values are larger. Both day and night, the near-ground structure constant decreases with height, falling slowly between 3 and 10 m and more rapidly between 10 and 15 m. The diurnal variation of the structure constant by month agrees basically with model calculations.

18.
Long-term measurement and statistical analysis of the near-ground refractive-index structure constant (cited 8 times: 6 self-citations, 2 by others)
Measurements of the near-ground refractive-index structure constant at a complex-terrain site in the Hefei area from 1997 to 2000 are statistically analyzed. Results are given for different underlying surfaces, seasons, and heights. In daytime, the strength ranks, from strongest to weakest: June, May, April, September, January. Daytime values over water are smaller than over grassland, while nighttime values are larger. Both day and night, the near-ground structure constant decreases with height, falling slowly between 3 and 10 m and more rapidly between 10 and 15 m. The diurnal variation of the structure constant by month agrees basically with model calculations.

19.
    
Maximum entropy network ensembles have been very successful in modelling sparse network topologies and in solving challenging inference problems. However, the sparse maximum entropy network models proposed so far have a fixed number of nodes and are typically not exchangeable. Here we consider hierarchical models for exchangeable networks in the sparse limit, i.e., with the total number of links scaling linearly with the total number of nodes. The approach is grand canonical, i.e., the number of nodes of the network is not fixed a priori: it is finite but can be arbitrarily large. In this way the grand canonical network ensembles circumvent the difficulties in treating infinite sparse exchangeable networks, which, according to the Aldous–Hoover theorem, must vanish. The approach can treat networks with a given degree distribution or networks with a given distribution of latent variables. When only a subgraph induced by a subset of nodes is known, this model allows a Bayesian estimation of the network size and the degree sequence (or the sequence of latent variables) of the entire network, which can be used for network reconstruction.

20.