Similar Articles
20 similar articles found (search time: 172 ms)
1.
The projection-onto-convex-sets (POCS) algorithm is a powerful tool for reconstructing high-resolution images from undersampled k-space data. It is a nonlinear iterative method that attempts to estimate values for missing data. The convergence of the algorithm and its other deterministic properties are well established, but relatively little is known about how noise in the source data influences noise in the final reconstructed image. In this paper, we present an experimental treatment of the statistical properties of POCS and investigate 12 stochastic models for its noise distribution, alongside its nonlinear point spread functions. Statistical results show that as the ratio of missing k-space data increases, the noise distribution in POCS images is no longer Rayleigh, as it is with conventional linear Fourier reconstruction. Instead, the probability density function of the noise is well approximated by a lognormal distribution. For small missing-data ratios, however, the noise remains Rayleigh distributed. Preliminary results show that in the presence of noise, POCS images are often dominated by POCS-enhanced noise rather than by POCS-induced artifacts. Implicit in this work is the presentation of a general statistical method that can be used to assess the noise properties of other nonlinear reconstruction algorithms.
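The alternating-projection idea behind POCS can be sketched in a few lines. This is a hypothetical 1-D toy example, not the paper's MRI setup: the two convex sets are "consistent with the measured k-space samples" and "non-negative in image space", and the signal, sampling ratio, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D example: reconstruct a smooth signal from undersampled k-space.
n = 64
x_true = np.exp(-0.5 * ((np.arange(n) - n // 2) / 6.0) ** 2)  # image-domain signal
k_full = np.fft.fft(x_true)

mask = rng.random(n) < 0.6           # keep ~60% of the k-space samples
mask[0] = True                       # always keep the DC component
k_meas = k_full * mask

x = np.fft.ifft(k_meas).real         # zero-filled starting estimate
for _ in range(50):
    x = np.clip(x, 0.0, None)        # projection 1: non-negativity (image space)
    k = np.fft.fft(x)
    k[mask] = k_meas[mask]           # projection 2: data consistency (k-space)
    x = np.fft.ifft(k).real

err_zero_fill = np.linalg.norm(np.fft.ifft(k_meas).real - x_true)
err_pocs = np.linalg.norm(x - x_true)
```

With a constraint that actually holds for the true signal, the POCS iterate is typically closer to the truth than the zero-filled reconstruction; the statistical question the abstract studies is how input noise propagates through these nonlinear projections.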

2.
The increasing interest in renewable energy, particularly wind, has created a need for accurate models that generate good synthetic wind speed data. Markov chains are often used for this purpose, but better models are needed to reproduce the statistical properties of wind speed data. We downloaded a freely available web database containing wind speed data from an L.S.I.-Lastem station (Italy), sampled every 10 min. With the aim of reproducing the statistical properties of these data, we propose the use of three semi-Markov models. We generate synthetic wind speed time series by means of Monte Carlo simulations. The time-lagged autocorrelation is then used to compare the statistical properties of the proposed models with those of the real data, and also with a synthetic time series generated by a simple Markov chain.
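The simple-Markov-chain baseline mentioned at the end of the abstract can be sketched as follows. The wind speeds here are synthetic stand-ins (the real 10-min station data are not reproduced), and the number of states and the smoothing are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: fit a first-order Markov chain to discretized wind speeds
# and generate a synthetic series by Monte Carlo, as a baseline against which
# semi-Markov models can be compared.
speeds = np.abs(rng.normal(6.0, 2.5, size=5000))        # stand-in for 10-min data
bins = np.quantile(speeds, np.linspace(0, 1, 9)[1:-1])  # 7 edges -> 8 states
states = np.digitize(speeds, bins)

n_states = 8
counts = np.ones((n_states, n_states))                  # Laplace smoothing
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)          # transition matrix

# Monte Carlo generation of a synthetic state sequence
synth = [int(states[0])]
for _ in range(4999):
    synth.append(int(rng.choice(n_states, p=P[synth[-1]])))
synth = np.array(synth)
```

A semi-Markov model would additionally draw a sojourn time in each state from a fitted waiting-time distribution, which is what improves the reproduction of the autocorrelation.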

3.
In this work, parametric and non-parametric statistical methods are proposed to analyze diffusion tensor magnetic resonance imaging (DT-MRI) data. A multivariate normal distribution is proposed as a parametric statistical model of diffusion tensor data when magnitude MR images contain no artifacts other than Johnson noise. We test this model using Monte Carlo (MC) simulations of DT-MRI experiments. The non-parametric approach proposed here is an implementation of bootstrap methodology that we call the DT-MRI bootstrap. It is used to estimate an empirical probability distribution of experimental DT-MRI data and to perform hypothesis tests on them. The DT-MRI bootstrap is also used to obtain various statistics of DT-MRI parameters within a single voxel and within a region of interest (ROI); we also use the bootstrap to study the intrinsic variability of these parameters in the ROI, independent of background noise. We evaluate the DT-MRI bootstrap using MC simulations and apply it to DT-MRI data acquired on human brain in vivo, and on a phantom with uniform diffusion properties.
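The resampling idea underlying a bootstrap of repeated acquisitions can be sketched generically. This is a hypothetical scalar illustration (the actual DT-MRI bootstrap resamples repeated diffusion-weighted images and recomputes tensor-derived quantities); the statistic here is simply the voxel mean, and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch: resample repeated acquisitions with replacement to get
# an empirical distribution of a per-voxel statistic (here, the mean).
n_repeats, n_boot = 8, 2000
acquisitions = 1.0 + 0.1 * rng.normal(size=n_repeats)   # repeated measurements of one voxel

boot_means = np.empty(n_boot)
for i in range(n_boot):
    sample = rng.choice(acquisitions, size=n_repeats, replace=True)
    boot_means[i] = sample.mean()

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])  # empirical 95% interval
```

The same empirical distribution supports the hypothesis tests and variability estimates described in the abstract, without assuming a parametric noise model.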

4.
Modeling and analysis of time series are important in applications including economics, engineering, environmental science, and social science. Selecting the best time series model, with accurate parameters, for forecasting is a challenging objective for scientists and academic researchers. Hybrid models combining neural networks with traditional Autoregressive Moving Average (ARMA) models are being used to improve the accuracy of time series modeling and forecasting. Most existing time series models are selected by information-theoretic approaches such as AIC, BIC, and HQ. This paper revisits a model selection technique based on Minimum Message Length (MML) and investigates its use in hybrid time series analysis. MML is a Bayesian information-theoretic approach that has been used to select the best ARMA model. We utilize the long short-term memory (LSTM) approach to construct a hybrid ARMA-LSTM model and show that MML performs better than AIC, BIC, and HQ in selecting the model, both for traditional ARMA models (without LSTM) and for hybrid ARMA-LSTM models. These results held on simulated data and on both real-world datasets that we considered. We also develop a simple MML ARIMA model.
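For concreteness, the AIC/BIC side of the comparison can be sketched for pure AR(p) selection. This is a hypothetical baseline only: MML itself needs priors and a Fisher-information term and is not reproduced here, and the AR(2) process, sample size, and candidate orders are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sketch of information-criterion order selection for AR(p) models.
n = 500
x = np.zeros(n)
for t in range(2, n):                       # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def ar_criteria(x, p):
    # Least-squares AR(p) fit and Gaussian (concentrated) log-likelihood.
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid.var()
    n_eff = len(y)
    ll = -0.5 * n_eff * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1                               # AR coefficients + noise variance
    return 2 * k - 2 * ll, k * np.log(n_eff) - 2 * ll   # (AIC, BIC)

bic = [ar_criteria(x, p)[1] for p in range(1, 6)]
best_p = 1 + int(np.argmin(bic))
```

MML replaces the fixed penalty term with the length of a two-part message (model plus data given model), which is what the paper argues selects better models in the hybrid ARMA-LSTM setting.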

5.
Belal E. Baaquie, Cao Yang. Physica A, 2009, 388(13): 2666-2681
Empirical forward interest rates drive the debt markets. Libor and Euribor futures data are used to calibrate and test models of interest rates based on the formulation of quantum finance. In particular, all model parameters, including interest rate volatilities, are obtained from market data. The random noise driving the forward interest rates is taken to be a two-dimensional Euclidean quantum field. We analyze two models: the bond forward interest rates, which is a linear theory, and the Libor Market Model, which is a nonlinear theory. Both models are analyzed using Libor and Euribor data, with various approximations to match the linear and nonlinear models. The results are quite good, with the linear model achieving an accuracy of about 99% and the nonlinear model being slightly less accurate. We extend our analysis by directly using Zero Coupon Yield Curve (ZCYC) data for Libor and for bonds; due to some technical difficulties, however, we could not derive the model parameters directly from the ZCYC data.

6.
徐振华, 黄建国, 高伟. Acta Acustica (声学学报), 2012, 37(2): 151-157
To address the problem of multisensor distributed quantized estimation fusion when the probability distributions of the observation noise and the channel noise are not fully known, a distributed quantized estimation fusion method based on the expectation-maximization (EM) algorithm is proposed. The method models the unknown noise parameters and the quantization probabilities of the local quantizers as the parameters of a binary Gaussian mixture model within the EM algorithm, and obtains the fused estimate of the target parameter via the invariance property of maximum-likelihood estimation. Simulation results show that when the number of local sensor observations exceeds 6000 and the signal-to-noise ratio exceeds 6 dB, the method performs comparably to existing estimation methods that assume ideal channels. The proposed method provides a simplified estimation-fusion approach for underwater distributed cooperative detection.

7.
Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are often too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic because many real-world sequences are highly multi-modal and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family for inferring sequential latent variables. The VDM approximate posterior at each time step is a mixture density network whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.

8.
Analysis of finite, noisy time series data leads to modern statistical inference methods. Here we adapt Bayesian inference for applied symbolic dynamics. We show that reconciling Kolmogorov's maximum-entropy partition with the methods of Bayesian model selection requires two separate optimizations. First, instrument design produces a maximum-entropy symbolic representation of the time series data. Second, Bayesian model comparison with a uniform prior selects a minimum-entropy model, with respect to the considered Markov chain orders, of the symbolic data. We illustrate these steps using a binary partition of time series data from the logistic and Hénon maps as well as the Rössler and Lorenz attractors with dynamical noise. In each case we demonstrate the inference of effectively generating partitions and kth-order Markov chain models.
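The first step, a binary partition of a chaotic map, can be sketched directly. The example below uses the fully chaotic logistic map with the symbol boundary at the critical point 1/2, for which the induced symbol process is a fair binary shift, so block entropies per symbol approach log 2; the trajectory length and seed are assumptions.

```python
import numpy as np

# Hypothetical sketch: a binary generating partition of the logistic map at r = 4.
n = 10000
x = np.empty(n)
x[0] = 0.3
for t in range(1, n):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])   # logistic map, r = 4

symbols = (x > 0.5).astype(int)                # binary partition at x = 1/2

def block_entropy(s, L):
    # Empirical entropy of length-L symbol blocks (in nats).
    blocks = np.array([s[i:i + L] for i in range(len(s) - L + 1)])
    _, counts = np.unique(blocks, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

h1 = block_entropy(symbols, 1)       # should be close to log(2)
h2 = block_entropy(symbols, 2) / 2   # per-symbol block entropy, also ~log(2)
```

A Markov-chain order for `symbols` would then be selected by Bayesian model comparison, as the abstract describes.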

9.
An important but difficult problem in using Gaussian mixture models (GMMs) for medical image analysis is estimating and testing the number of components with a model selection criterion. Many methods are available to estimate the number of components k based on the likelihood function. However, some of them require the maximum number of components to be known a priori, and they tend to over-fit the data when the log-likelihood term far outweighs the penalty term. We investigate the log-characteristic function of the GMM to estimate the number of components adaptively for medical images. Our method defines the sum of the weighted real parts of all log-characteristic functions of the GMM as a new convergent function and model selection criterion. The criterion exploits the stability of this sum once the assumed number of components exceeds the true number of components. The univariate acidity dataset, simulated 2D datasets, and real 2D medical images are used in tests, and experimental results suggest that our method, which requires no prior knowledge, is better suited to large-sample applications than other typical methods.

10.
We present an unsupervised method for detecting anomalous time series within a collection of time series. To do so, we extend traditional kernel density estimation from Euclidean spaces to Hilbert spaces. The estimated probability densities can be obtained formally by treating each series as a point in a Hilbert space, placing a kernel at those points, and summing the kernels (a "point approach"), or by using kernel density estimation to approximate the distributions of Fourier mode coefficients and thereby infer a probability density (a "Fourier approach"). We refer to these approaches as functional kernel density estimation for anomaly detection, as both yield functionals that score how anomalous a time series is. Both methods naturally handle missing data and apply in a variety of settings, performing well when compared with an outlyingness score derived from a boxplot method for functional data, with a principal component analysis approach for functional data, and with the Functional Isolation Forest method. We illustrate the proposed methods with aviation safety report data from the International Air Transport Association (IATA).
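The "point approach" can be sketched with a leave-one-out Gaussian kernel density score on pairwise L2 distances between series. The data, the planted anomaly, and the bandwidth below are all hypothetical choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sketch: treat each series as a point, place a Gaussian kernel on
# pairwise L2 distances, and score each series by its leave-one-out density.
# Low density means more anomalous.
n_series, length = 30, 100
t = np.linspace(0, 1, length)
series = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(n_series, length))
series[0] = 3.0 + 0.1 * rng.normal(size=length)   # planted anomaly

def loo_density(series, bandwidth=1.0):
    d2 = ((series[:, None, :] - series[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)                       # leave-one-out
    return K.sum(axis=1) / (len(series) - 1)

scores = loo_density(series)
most_anomalous = int(np.argmin(scores))
```

The "Fourier approach" would instead apply the same density estimation to the series' Fourier coefficients, which also handles irregular sampling gracefully.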

11.
伍雪冬, 王耀南, 刘维亭, 朱志宇. Chinese Physics B, 2011, 20(6): 069201
On the assumption that random interruptions in the observation process are modeled by a sequence of independent Bernoulli random variables, we first generalize two kinds of nonlinear filtering methods with random interruption failures in the observations, based on the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), abbreviated here as GEKF and GUKF, respectively. A nonlinear filtering model is then established by taking the radial basis function neural network (RBFNN) prototypes and network weights as the state equation and the RBFNN output as the observation equation. Finally, we treat the filtering problem under missing observation data as a special case of nonlinear filtering with random intermittent failures by setting each missing datum to zero, without needing to pre-estimate the missing data, and use the GEKF-based RBFNN and the GUKF-based RBFNN to predict a ground radioactivity time series with missing data. Experimental results demonstrate that the predictions of the GUKF-based RBFNN accord well with the real ground radioactivity time series, while the predictions of the GEKF-based RBFNN diverge.
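The Bernoulli-interruption mechanism can be illustrated on a scalar linear model, where the generalized filter reduces to skipping the measurement update when no observation arrives. This is a hypothetical linear sketch, not the paper's RBFNN state-space model, and all noise levels and the arrival probability gamma are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical scalar sketch of filtering with Bernoulli-interrupted observations:
# with probability gamma the measurement arrives and the usual Kalman update is
# applied; otherwise only the time update is performed.
n, gamma = 300, 0.7
q, r = 0.05, 0.5                       # process / measurement noise variances

x_true = np.cumsum(np.sqrt(q) * rng.normal(size=n))   # random-walk state
arrived = rng.random(n) < gamma
y = x_true + np.sqrt(r) * rng.normal(size=n)

x_hat, P = 0.0, 1.0
estimates = np.empty(n)
for t in range(n):
    P = P + q                          # time update (identity dynamics)
    if arrived[t]:
        K = P / (P + r)                # Kalman gain
        x_hat = x_hat + K * (y[t] - x_hat)
        P = (1 - K) * P
    estimates[t] = x_hat

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
rmse_naive = np.sqrt(np.mean((np.where(arrived, y, 0.0) - x_true) ** 2))
```

In the nonlinear setting of the paper, the same Bernoulli indicator enters the GEKF/GUKF update instead of this exact scalar gain.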

12.
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only three control parameters, and a much-discussed question is how to estimate them when a series for some financial asset is given. Besides the maximum-likelihood estimator technique, there is another method which uses the variance, the kurtosis, and the autocorrelation time to determine them. We propose here to use the standardized 6th moment as well. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: the NYSE Composite and the FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. Although these models show almost identical performance with respect to the final probability density function and the time autocorrelation function, their scaling properties are very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics better fits the FTSE scaling exponent.
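The moment-matching idea can be illustrated by simulating a Gaussian GARCH(1, 1) and comparing sample moments with their closed-form stationary values; these are the quantities (together with the autocorrelation time and, in the paper, the standardized 6th moment) used to pin down the three parameters. The parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical sketch: simulate Gaussian GARCH(1,1) returns
#   r_t = sqrt(h_t) * e_t,   h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}
# and compare sample variance/kurtosis with the stationary formulas.
omega, alpha, beta = 0.05, 0.10, 0.85
n = 100000
h = omega / (1 - alpha - beta)          # start at the stationary variance
r = np.empty(n)
for t in range(n):
    e = rng.normal()
    r[t] = np.sqrt(h) * e
    h = omega + alpha * r[t] ** 2 + beta * h

var_theory = omega / (1 - alpha - beta)
# Kurtosis of Gaussian GARCH(1,1); finite when 1 - (alpha+beta)^2 - 2*alpha^2 > 0.
kurt_theory = 3 * (1 - (alpha + beta) ** 2) / (1 - (alpha + beta) ** 2 - 2 * alpha ** 2)
var_sample = r.var()
kurt_sample = np.mean(r ** 4) / var_sample ** 2
```

Matching higher moments (such as the standardized 6th moment used in the abstract) constrains the parameters further than variance and kurtosis alone.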

13.
Stochastic volatility models decompose the time series of financial returns into the product of a volatility factor and an iid noise factor. Assuming a slow dynamic for the volatility factor, we show via nonparametric tests that both the index and its individual stocks share a common volatility factor. While the noise component is Gaussian for the index, individual stock returns turn out to require a leptokurtic noise. We therefore propose a two-component model for stocks, given by the sum of Gaussian noise, which reflects market-wide fluctuations, and Laplacian noise, which incorporates firm-specific factors such as firm profitability or growth performance, both of which are known to be Laplacian distributed. In the case of purely Gaussian noise, the chi-squared probability for the density of individual stock returns is typically on the order of 10^(-20), while it increases to values of O(1) when the Laplace component is added.

14.
Time-delayed interactions naturally appear in a multitude of real-world systems due to the finite propagation speed of physical quantities. Often, the time scales of the interactions are unknown to an external observer and need to be inferred from time series of observed data. In this work, we explore the properties of several ordinal-based quantifiers for identifying time-delays from time series. To that end, we generate artificial time series from stochastic and deterministic time-delay models. We find that the presence of a nonlinearity in the generating model affects the distribution of ordinal patterns and, consequently, the delay-identification qualities of the quantifiers. We put forward a novel ordinal-based quantifier that is particularly sensitive to nonlinearities in the generating model and compare it with previously defined quantifiers. We conclude from our analysis of artificially generated data that proper identification of the presence of a time-delay, and of its precise value, benefits from the complementary use of ordinal-based quantifiers and the standard autocorrelation function. We further validate these tools with a practical example on real-world data originating from the North Atlantic Oscillation weather phenomenon.
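A basic ordinal-based delay quantifier can be sketched as the entropy of order-3 ordinal patterns evaluated at different embedding delays d: only at the true delay are the embedded triples correlated, so the pattern entropy dips there. The linear stochastic delay model, its parameters, and the order-3 choice below are assumptions for illustration.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(7)

# Hypothetical sketch: identify the delay of x[t] = a * x[t - tau] + noise
# from the entropy of order-3 ordinal patterns at embedding delay d.
tau, a, n = 5, 0.9, 20000
x = np.zeros(n)
for t in range(tau, n):
    x[t] = a * x[t - tau] + rng.normal()

pats = {p: i for i, p in enumerate(permutations(range(3)))}

def pattern_entropy(x, d):
    counts = np.zeros(6)
    for t in range(len(x) - 2 * d):
        triple = (x[t], x[t + d], x[t + 2 * d])
        counts[pats[tuple(np.argsort(triple))]] += 1
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

H = {d: pattern_entropy(x, d) for d in range(1, 9)}
best_delay = min(H, key=H.get)     # entropy minimum marks the delay
```

For this linear model the autocorrelation function would find the delay equally well; the abstract's point is that nonlinear generating models change the ordinal-pattern distribution in ways the autocorrelation misses.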

15.
A hidden Markov model (HMM) is a vital model for trajectory recognition. Because the number of hidden states in an HMM is important and hard to determine, many nonparametric methods, such as hierarchical Dirichlet process HMMs and Beta process HMMs (BP-HMMs), have been proposed to determine it automatically. Among these methods, the sampled BP-HMM models the information shared among different classes, which has proved effective in several trajectory recognition settings. However, the existing BP-HMM maintains a state transition probability matrix for each trajectory, which is inconvenient for classification. Furthermore, approximate inference in the BP-HMM is based on sampling methods, which usually take a long time to converge. To develop an efficient nonparametric sequential model that can capture cross-class shared information for trajectory recognition, we propose a novel variational BP-HMM in which hidden states can be shared among different classes, and each class chooses its own hidden states and maintains a unified transition probability matrix. In addition, we derive a variational inference method for the proposed model, which is more efficient than sampling-based methods. Experimental results on a synthetic dataset and two real-world datasets show that, compared with the sampled BP-HMM and other related models, the variational BP-HMM performs better in trajectory recognition.

16.
This study presents a novel time series analysis methodology to detect, locate, and estimate the extent of structural changes (e.g. damage). In this methodology, ARX models (Auto-Regressive models with eXogenous input) are created for different sensor clusters using the free response of the structure. The output of each sensor in a cluster is used as an input to the ARX model to predict the output of the reference channel of that sensor cluster. Two different approaches are used for extracting Damage Features (DFs) from these ARX models. In the first approach, the coefficients of the ARX models are used directly as the DFs. It is shown with a 4-DOF numerical model that damage can be identified, located, and quantified for simple models and noise-free data. To account for the effects of noise and model complexity, a second approach is presented based on using the ARX model fit ratios as the DFs. The second approach is first applied to the same 4-DOF numerical model and to numerical data from an international benchmark study under noisy conditions. The methodology is then applied to experimental data from a large-scale laboratory model. It is shown that the second approach successfully identifies and locates the damage for different damage cases using numerical and experimental data. Furthermore, the DF level is observed to be a good indicator for estimating the extent of the damage in these cases. The potential and advantages of the methodology are discussed along with the analysis results, and the limitations of the methodology, recommendations, and future work are also addressed.
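The fit-ratio damage feature can be sketched with a plain AR model in place of the clustered ARX models: fit the model to the baseline (healthy) response, then use its fit ratio on new data as the feature, so a drop flags changed dynamics. The AR(2) responses, pole locations, and noise level below are hypothetical stand-ins for the paper's structural responses.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical sketch: an AR-model fit ratio as a damage feature.
def simulate_ar2(a1, a2, n):
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + 0.1 * rng.normal()
    return x

def fit_ar2(x):
    X = np.column_stack([x[1:-1], x[:-2]])
    coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    return coef

def fit_ratio(x, coef):
    pred = coef[0] * x[1:-1] + coef[1] * x[:-2]
    resid = x[2:] - pred
    return 1.0 - resid.std() / x[2:].std()

healthy = simulate_ar2(1.6, -0.9, 4000)        # baseline structure
damaged = simulate_ar2(1.4, -0.8, 4000)        # a stiffness change shifts the poles

coef = fit_ar2(healthy)
df_healthy = fit_ratio(simulate_ar2(1.6, -0.9, 4000), coef)  # fresh healthy data
df_damaged = fit_ratio(damaged, coef)
```

In the paper's clustered setting, computing such features per sensor cluster is what localizes the damage, since only clusters near the change see their fit ratios drop.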

17.
We provide a non-asymptotic analysis of the spiked Wishart and Wigner matrix models with a generative neural network prior. Spiked random matrices have the form of a rank-one signal plus noise and have been used as models for high-dimensional principal component analysis (PCA), community detection, and synchronization over groups. Depending on the prior imposed on the spike, these models can display a statistical-computational gap between the information-theoretically optimal reconstruction error achievable with unbounded computational resources and the sub-optimal performance of currently known polynomial-time algorithms. These gaps are believed to be fundamental, as in the emblematic case of sparse PCA. In stark contrast to such cases, we show that there is no statistical-computational gap under a generative network prior, in which the spike lies in the range of a generative neural network. Specifically, we analyze a gradient descent method for minimizing a nonlinear least-squares objective over the range of an expansive Gaussian neural network and show that it can recover, in polynomial time, an estimate of the underlying spike with rate-optimal sample complexity and dependence on the noise level.

18.
The existence of forbidden patterns, i.e., certain ordinal sequences that never occur in a given time series, is a recently proposed instrument with potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators such as the Lyapunov exponent or the Kolmogorov entropy, thus making it possible to separate deterministic (usually chaotic) series from random ones; however, it requires fewer values of the series to be calculated and is suitable for use with small datasets. In this paper, the appearance of forbidden patterns is studied in different economic indicators, such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (the ten-year bond interest rate), to find evidence of deterministic behavior in their evolution. Moreover, the rate of appearance of forbidden patterns is calculated, and some considerations about the underlying dynamics are offered.
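Counting forbidden patterns is straightforward to sketch: enumerate which of the 4! = 24 order-4 ordinal patterns occur in a series. Deterministic series such as the fully chaotic logistic map forbid some patterns (two consecutive decreases are impossible at r = 4), while white noise of the same length realizes essentially all of them; the series length and the order-4 choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical sketch: count order-4 ordinal patterns that never occur.
def observed_patterns(x, order=4):
    seen = set()
    for t in range(len(x) - order + 1):
        seen.add(tuple(np.argsort(x[t:t + order])))
    return seen

n = 5000
logistic = np.empty(n)
logistic[0] = 0.4
for t in range(1, n):
    logistic[t] = 4.0 * logistic[t - 1] * (1.0 - logistic[t - 1])
noise = rng.random(n)

forbidden_logistic = 24 - len(observed_patterns(logistic))   # deterministic: > 0
forbidden_noise = 24 - len(observed_patterns(noise))         # random: ~ 0
```

The paper applies the same counting to financial series, where a persistent nonzero count of missing patterns is taken as evidence of determinism.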

19.
Threshold models try to explain the consequences of social influence, such as the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework for social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network, in which not only the contacts but also their times are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts's classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts that lie a certain time into the past; that is, individuals are affected by contacts within a time window. In addition to thresholds on the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model's behavior, we run the model on real and randomized empirical contact datasets.
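The windowed threshold rule can be sketched on a random temporal edge list: an agent adopts when the fraction of adopters among its contacts within the last `window` time steps reaches its threshold. The contact process, population size, window length, and uniform threshold below are all hypothetical stand-ins for the empirical datasets used in the paper.

```python
import random

random.seed(10)

# Hypothetical sketch of a fractional-threshold model on a temporal network.
n_agents, n_steps, window, threshold = 50, 100, 10, 0.5
contacts = []                                    # random temporal edge list
for t in range(n_steps):
    for _ in range(20):
        i, j = random.sample(range(n_agents), 2)
        contacts.append((i, j, t))

state = [False] * n_agents
for s in random.sample(range(n_agents), 8):      # initial adopters (seeds)
    state[s] = True

for t in range(n_steps):
    recent = [(i, j) for (i, j, tc) in contacts if t - window < tc <= t]
    for a in range(n_agents):
        if state[a]:
            continue                              # adoption is irreversible
        nbrs = ([j for (i, j) in recent if i == a]
                + [i for (i, j) in recent if j == a])
        if nbrs and sum(state[b] for b in nbrs) / len(nbrs) >= threshold:
            state[a] = True

adopters = sum(state)
```

The count-based variant mentioned in the abstract would replace the fraction test with a test on the absolute number of adopting contacts in the window.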

20.
Volume conserving surface (VCS) models without deposition and evaporation, as well as ideal molecular-beam epitaxy models, are prototypes for studying the symmetries of conserved dynamics. In this work we study two similar VCS models with conserved noise, which differ from each other in the axial symmetry of their dynamic hopping rules. We use a coarse-grained approach to analyze the models and show how to determine the coefficients of their corresponding continuous stochastic differential equation (SDE) within the same universality class. The employed method makes use of small translations in a test space which contains the stationary probability density function (SPDF). In the case of the symmetric model we calculate all the coarse-grained coefficients of the related conserved Kardar–Parisi–Zhang (KPZ) equation. With respect to the symmetric model, the asymmetric model adds new terms which have to be analyzed, first of all the diffusion term, whose coarse-grained coefficient can be determined by the same method. In contrast to other methods, the formalism used allows all coefficients of the SDE to be calculated theoretically and, within limits, numerically. Above all, the approach connects the coefficients of the SDE with the SPDF and hence gives them a precise physical meaning.
