Similar Documents
20 similar documents found (search time: 31 ms)
1.
The time-evolving precision matrix of a piecewise-constant Gaussian graphical model encodes the dynamic conditional dependency structure of a multivariate time series. Traditionally, graphical models are estimated under the assumption that data are drawn identically from a generating distribution. Introducing sparsity- and sparse-difference-inducing priors, we relax these assumptions and propose a novel regularized M-estimator to jointly estimate both the graph and changepoint structure. The resulting estimator can therefore favor sparse dependency structures, smoothly evolving graph structures, or both, as required. Moreover, our approach extends current methods to allow estimation of changepoints that are grouped across multiple dependencies in a system. An efficient algorithm for estimating structure is proposed. We study the empirical recovery properties in a synthetic setting. The qualitative effect of grouped changepoint estimation is then demonstrated by applying the method to a genetic time-course dataset. Supplementary material for this article is available online.

2.
A general approach to Bayesian isotonic changepoint problems is developed. Such isotonic changepoint analysis includes trend and other constrained problems, and it captures linear, non-smooth, and abrupt changes. Desired marginal posterior densities are obtained using a Markov chain Monte Carlo method. The methodology is illustrated using one simulated and two real data examples, where it is shown that our proposed Bayesian approach captures the qualitative conclusion about the shape of the trend change.

3.
We consider a sequence of independent random variables whose densities depend on a parameter that is subject to a change at an unknown time point. A Bayesian decision-theoretic approach is used to obtain an optimal choice of changepoint. The exponential and multivariate normal models are analyzed, and some numerical examples are given.

4.
Process monitoring and control requires the detection of structural changes in a data stream in real time. This article introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables rebalancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and nonconjugate Bayesian models for the intensity. Appendices to the article are available online, illustrating the method on other models and applications.

5.
A changepoint in a time series is a time of change in the marginal distribution, autocovariance, or any other distributional structure of the series. Examples include mean level shifts and volatility (variance) changes. Climate data, for example, is replete with mean shift changepoints, occurring whenever a recording instrument is changed or the observing station is moved. Here, we consider the problem of incorporating known changepoint times into a regression model framework. Specifically, we establish consistency and asymptotic normality of ordinary least squares regression estimators that account for an arbitrary number of mean shifts in the record. In a sense, this provides an alternative to the customary infill asymptotics for regression models that assume an asymptotic infinity of data observations between all changepoint times.
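As a toy illustration of the regression framework this abstract describes (not the paper's asymptotic analysis), the sketch below fits ordinary least squares with one indicator column per known mean-shift time; the simulated series, shift location, and noise level are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated series: linear trend plus a known mean shift at t = 60
# (all parameter values are invented for this sketch).
n = 100
t = np.arange(n, dtype=float)
y = 0.5 + 0.02 * t + 1.5 * (t >= 60) + rng.normal(0, 0.3, n)

# Design matrix: intercept, trend, and one indicator per known shift time.
shift_times = [60]
X = np.column_stack([np.ones(n), t] + [(t >= s).astype(float) for s in shift_times])

# Ordinary least squares accounting for the known mean shifts.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope, shift = beta
```

With the changepoint time treated as known, the shift enters the design like any other regressor, which is what makes the standard OLS asymptotics of the paper applicable.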

6.
Techniques used by Szatrowski (1979, 1983) to solve the testing and estimation problem for linear patterned covariance are used to obtain results for the linear patterned correlation problem in the presence of missing data. Iterative algorithms are given for finding the maximum-likelihood estimates (MLE). Asymptotic distributions of the MLE and likelihood-ratio statistics (LRS) are obtained using the delta method.

7.
While there are many approaches to detecting changes in mean for a univariate time series, the problem of detecting multiple changes in slope has been comparatively neglected. Part of the reason for this is that detecting changes in slope is much more challenging: simple binary segmentation procedures do not work for this problem, while existing dynamic programming methods that work for the change in mean problem cannot be used for detecting changes in slope. We present a novel dynamic programming approach, CPOP, for finding the “best” continuous piecewise linear fit to data under a criterion that measures fit to data using the residual sum of squares, but penalizes complexity based on an L0 penalty on changes in slope. We prove that detecting changes in this manner can lead to consistent estimation of the number of changepoints, and show empirically that using an L0 penalty is more reliable at estimating changepoint locations than using an L1 penalty. Empirically CPOP has good computational properties, and can analyze a time series with 10,000 observations and 100 changes in a few minutes. Our method is used to analyze data on the motion of bacteria, and provides better and more parsimonious fits than two competing approaches. Supplementary material for this article is available online.
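CPOP itself solves the harder continuous piecewise-linear problem; as a hedged illustration of the dynamic-programming idea it builds on, the sketch below implements the classical L0-penalised change-in-mean recursion (optimal partitioning), which the abstract contrasts with. The data and penalty value are invented for the example:

```python
import numpy as np

def optimal_partitioning(y, beta):
    """Exact L0-penalised change-in-mean segmentation by dynamic programming:
    F[t] = min over s < t of F[s] + cost(y[s:t]) + beta,
    where cost is the residual sum of squares about the segment mean.
    (CPOP solves the continuous piecewise-linear analogue of this recursion.)"""
    n = len(y)
    # Prefix sums let us evaluate any segment's RSS in O(1).
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def cost(a, b):  # RSS of y[a:b] about its own mean
        return s2[b] - s2[a] - (s1[b] - s1[a]) ** 2 / (b - a)

    F = np.full(n + 1, np.inf)
    F[0] = -beta  # the first segment does not pay a penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        cands = [F[s] + cost(s, t) + beta for s in range(t)]
        best = int(np.argmin(cands))
        F[t], last[t] = cands[best], best
    # Backtrack the optimal changepoint locations.
    cps, t = [], n
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

# One mean shift of 3.0 at index 50, noise sd 0.5; BIC-style penalty.
y = np.concatenate([np.zeros(50), np.full(50, 3.0)])
y += np.random.default_rng(1).normal(0, 0.5, 100)
cps = optimal_partitioning(y, beta=2 * 0.5**2 * np.log(100))
```

This O(n²) recursion is exact for change in mean; the abstract's point is that a changed cost function for continuous piecewise-linear fits breaks this simple form, which is what CPOP addresses.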

8.
Variational registration models are non-rigid and deformable imaging techniques for accurate registration of two images. As with other models for inverse problems using the Tikhonov regularization, they must have a suitably chosen regularization term as well as a data fitting term. One distinct feature of registration models is that their fitting term is always highly nonlinear and this nonlinearity restricts the class of numerical methods that are applicable. This paper first reviews the current state-of-the-art numerical methods for such models and observes that the nonlinear fitting term is mostly ‘avoided’ in developing fast multigrid methods. It then proposes a unified approach for designing fixed point type smoothers for multigrid methods. The diffusion registration model (second-order equations) and a curvature model (fourth-order equations) are used to illustrate our robust methodology. Analysis of the proposed smoothers and comparisons to other methods are given. As expected of a multigrid method, being many orders of magnitude faster than the unilevel gradient descent approach, the proposed numerical approach delivers fast and accurate results for a range of synthetic and real test images.

9.
Metamodels are used in many disciplines to replace simulation models of complex multivariate systems. To assess a metamodel's ‘quality of fit’ for simulation, simple summary information returned by average-based statistics, such as the root-mean-square error (RMSE), is often used. The sample of points used in determining these averages is restricted in size, especially for simulation models of complex multivariate systems. Obviously, decisions based on average values can be misleading when the sample size is inadequate, and the contribution made by each individual data point in such samples needs to be examined. This paper presents methods that can be used to assess metamodel quality of fit graphically by means of two-dimensional plots. Three plot types are presented: the so-called circle plots, marksman plots, and ordinal plots. Such plots facilitate visual inspection of the effect on metamodel accuracy of each individual point in the data sample used for metamodel validation. The proposed methods can complement quantitative validation statistics, in particular for situations where there is not enough validation data or the validation data are too expensive to generate.

10.
Models for hysteresis in continuum mechanics are studied that rely on a time-discretised quasi-static evolution of Young measures akin to a gradient flow. The main feature of this approach is that it allows for local, rather than global minimisation. In particular, the case of a non-coercive elastic energy density of Lennard-Jones type is investigated. The approach is used to describe the formation of damage in a material; existence results are proved, as well as several results highlighting the qualitative behaviour of solutions. Connections are made to recent variational models for fracture.

11.
Recently, a Bayesian network model for inferring non-stationary regulatory processes from gene expression time series has been proposed. The Bayesian Gaussian Mixture (BGM) Bayesian network model divides the data into disjoint compartments (data subsets) by a free allocation model, and infers network structures, which are kept fixed for all compartments. Fixing the network structure allows for some information sharing among compartments, and each compartment is modelled separately and independently with the Gaussian BGe scoring metric for Bayesian networks. The BGM model can equally be applied to both static (steady-state) and dynamic (time series) gene expression data. However, it is this flexibility that renders its application to time series data suboptimal. To improve the performance of the BGM model on time series data we propose a revised approach in which the free allocation of data points is replaced by a changepoint process so as to take the temporal structure into account. The practical inference follows the Bayesian paradigm and approximately samples the network, the number of compartments and the changepoint locations from the posterior distribution with Markov chain Monte Carlo (MCMC). Our empirical results show that the proposed modification leads to a more efficient inference tool for analysing gene expression time series.

12.
There is ample evidence that in applications of self-exciting point-process models, the intensity of background events is often far from constant. If a constant background is imposed, that assumption can significantly reduce the quality of statistical analysis, in problems as diverse as modeling the aftershocks of earthquakes and the study of ultra-high-frequency financial data. Parametric models can be used to alleviate this problem, but they run the risk of distorting inference by misspecifying the nature of the background intensity function. On the other hand, a purely nonparametric approach to analysis leads to problems of identifiability; when a nonparametric approach is taken, not every aspect of the model can be identified from data recorded along a single observed sample path. In this article, we suggest overcoming this difficulty by using an approach based on the principle of parsimony, or Occam’s razor. In particular, we suggest taking the point-process intensity either to be constant or to have maximum differential entropy, in cases where there is not sufficient empirical evidence to suggest that the background intensity function is more complex than those models. This approach is seldom, if ever, used for nonparametric function estimation in other settings, not least because in those cases more data are typically available. However, our “ontological parsimony” argument is appropriate in the context of self-exciting point-process models. Supplementary materials are available online.
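As a minimal sketch of the quantity at issue (not the authors' estimator), the snippet below evaluates a self-exciting (Hawkes) conditional intensity with an exponential kernel, under either a constant background or a nonconstant one; the kernel choice, parameter values, and event times are all invented for illustration:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a self-exciting (Hawkes) process with
    an exponential excitation kernel:
        lambda(t) = mu(t) + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    `mu` may be a constant or a callable background intensity function."""
    background = mu(t) if callable(mu) else mu
    past = events[events < t]
    return background + alpha * np.sum(np.exp(-beta * (t - past)))

events = np.array([1.0, 2.0, 2.5])
# Constant background, as is often (mis)imposed...
lam_const = hawkes_intensity(3.0, events, mu=0.5, alpha=0.8, beta=1.0)
# ...versus a slowly varying background, the case the abstract cautions about.
lam_varying = hawkes_intensity(
    3.0, events, mu=lambda t: 0.5 + 0.1 * np.sin(t), alpha=0.8, beta=1.0)
```

The identifiability problem discussed in the abstract is precisely that, from one sample path, the split between `mu(t)` and the excitation term is not fully determined without some constraint such as the parsimony principle proposed.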

13.
Young measure flow as a model for damage

15.
The purpose of this paper is to discuss some procedures that are available for testing non-nested (or separate) hypotheses in the statistics and econometrics literature. Since many of these techniques may also be exploited in other disciplines, it is hoped that an elaboration of the principal theoretical findings may make them more readily accessible to researchers in other disciplines. Several simple examples are used to illustrate the concepts of nested and non-nested hypotheses and, within the latter category, “global” and “partial” non-nested hypotheses. Two alternative methods of testing non-nested hypotheses are discussed and contrasted: the first of these is Cox's modification of the likelihood-ratio statistic, and the second is Atkinson's comprehensive model approach. A major emphasis is placed on the role of the Cox principle of hypothesis testing, which enables a broad range of hypotheses to be tested within the same framework. The problem associated with the application of the comprehensive model approach to composite non-nested hypotheses is also highlighted; Roy's union-intersection principle is presented as a viable method of dealing with this problem. Simulation results concerning the finite-sample properties of various tests are discussed, together with an analysis of some attempts to correct the poor size of the Cox and related tests.

16.
Logit models have been widely used in marketing to predict brand choice and to make inference about the impact of marketing mix variables on these choices. Most researchers have followed the pioneering example of Guadagni and Little, building choice models and drawing inference conditional on the assumption that the logit model is the correct specification for household purchase behaviour. To the extent that logit models fail to adequately describe household purchase behaviour, statistical inferences from them may be flawed. More importantly, marketing decisions based on these models may be incorrect. This research applies White's robust inference method to logit brand choice models. The method does not impose the restrictive assumption that the assumed logit model specification be true. A sandwich estimator of the covariance ‘corrected’ for possible mis-specification is the basis for inference about logit model parameters. An important feature of this method is that it yields correct standard errors for the marketing mix parameter estimates even if the assumed logit model specification is not correct. Empirical examples include using household panel data sets from three different product categories to estimate logit models of brand choice. The standard errors obtained using traditional methods are compared with those obtained by White's robust method. The findings illustrate that incorrectly assuming the logit model to be true typically yields standard errors which are biased downward by 10–40 per cent. Conditions under which the bias is particularly severe are explored. Under these conditions, the robust approach is recommended. Copyright © 2000 John Wiley & Sons, Ltd.
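A minimal sketch of White's sandwich covariance for a logit model, assuming simulated data (the coefficients, sample size, and Newton iteration count below are invented for the example). When the model is correctly specified, as here, the robust and classical standard errors roughly agree; under misspecification they diverge, which is the downward bias the abstract quantifies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "brand choice" data: a binary outcome driven by a single
# marketing-mix covariate (all values invented for this sketch).
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))

# Fit the logit model by Newton-Raphson.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

# Recompute quantities at the fitted coefficients.
p = 1 / (1 + np.exp(-X @ beta))
hess = X.T @ (X * (p * (1 - p))[:, None])  # "bread": observed information
A_inv = np.linalg.inv(hess)
scores = X * (y - p)[:, None]              # per-observation score vectors
B = scores.T @ scores                      # "meat": outer product of scores
sandwich = A_inv @ B @ A_inv               # White's robust covariance

se_classical = np.sqrt(np.diag(A_inv))     # valid only if the model is true
se_robust = np.sqrt(np.diag(sandwich))     # valid under misspecification too
```

The sandwich form A⁻¹BA⁻¹ reduces to the classical inverse information A⁻¹ exactly when the information equality holds, i.e. when the logit specification is correct.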

17.
Empirical likelihood inference for linear EV models with missing responses under validation data
We consider a linear errors-in-variables (EV) model in which the response is missing at random and the covariates are measured with error. Using validation data and an imputation method, two empirical likelihood ratios for the regression coefficients are constructed. The proposed empirical log-likelihood ratios are shown to be asymptotically distributed as weighted sums of independent χ² variables with one degree of freedom, while the adjusted empirical log-likelihood ratio is asymptotically χ²-distributed with p degrees of freedom; this result can be used to construct confidence regions for the unknown parameters. In addition, an adjusted empirical log-likelihood ratio statistic for the response mean is constructed and shown to be asymptotically χ²-distributed, which likewise yields confidence regions for the response mean. A simulation study compares the coverage accuracy and average interval lengths of the confidence regions.

18.
In developing a classification model for assigning observations of unknown class to one of a number of specified classes using the values of a set of features associated with each observation, it is often desirable to base the classifier on a limited number of features. Mathematical programming discriminant analysis methods for developing classification models can be extended for feature selection. Classification accuracy can be used as the feature selection criterion by using a mixed integer programming (MIP) model in which a binary variable is associated with each training sample observation, but the binary variable requirements limit the size of problems to which this approach can be applied. Heuristic feature selection methods for problems with large numbers of observations are developed in this paper. These heuristic procedures, which are based on the MIP model for maximizing classification accuracy, are then applied to three credit scoring data sets.
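The paper's heuristics are built on the MIP model itself; as a loose stand-in, the sketch below shows the general shape of a greedy forward-selection heuristic that repeatedly adds the feature giving the largest gain in training classification accuracy. The least-squares classifier and the simulated data are invented for the example, not taken from the paper:

```python
import numpy as np

def accuracy(Xs, y):
    """Training accuracy of a simple least-squares linear classifier
    fit on the feature columns Xs (targets coded as -1/+1)."""
    A = np.column_stack([np.ones(len(y)), Xs])
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
    return np.mean((A @ w > 0) == (y == 1))

def greedy_forward_select(X, y, k):
    """Heuristic stand-in for the MIP: greedily add the feature that
    most improves classification accuracy until k features are chosen."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = [(accuracy(X[:, chosen + [j]], y), j) for j in remaining]
        _, best_j = max(scores)
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 6))
# Only features 0 and 3 actually separate the classes.
y = ((X[:, 0] + X[:, 3] + 0.5 * rng.normal(size=n)) > 0).astype(int)
selected = greedy_forward_select(X, y, k=2)
```

Unlike the exact MIP formulation, a greedy pass gives no optimality guarantee, but it scales to the large observation counts that motivate the paper's heuristics.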

19.
20.
Past work on visualization methods for Markov chain Monte Carlo analyses leaves many open questions. Particular difficulties arise when the researcher wishes to monitor several aspects of the chain's behavior jointly. We propose a movie, which is an extension of the basic trace plot used for a single parameter, as a dynamic way of assessing the progress of a chain and monitoring the joint posterior distribution. This idea is demonstrated on several examples, including a multiple changepoint problem with data from a corbelled tomb (tholos) in Stylos, Crete.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号