Similar Articles
Found 10 similar articles (search time: 136 ms)
1.
This paper develops a covariate-adjusted precision matrix estimator using a two-stage procedure. First, we identify the relevant covariates that affect the means via a joint ℓ1 penalization. The estimated regression coefficients are then used to estimate the mean values in a multivariate sub-Gaussian model, from which the sparse precision matrix is estimated through a Lasso-penalized D-trace loss. Under some assumptions, we establish the convergence rate of the precision matrix estimator under different norms and demonstrate the sparse recovery property with probability converging to one. Simulations show that our method has competitive finite-sample performance compared with other methods.
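A minimal numpy sketch of the two-stage idea described above (illustrative only: the joint ℓ1 mean step is reduced here to soft-thresholded least squares, and the Lasso-penalized D-trace estimator of stage two is replaced by a simple regularized inverse; all function names and tuning values are hypothetical):

```python
import numpy as np

def soft_threshold(x, lam):
    # elementwise soft-thresholding, the proximal map of the l1 penalty
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def two_stage_precision(X, Z, lam_beta=0.1, eps=0.05):
    """Illustrative two-stage estimate: X (n x p) responses, Z (n x q) covariates.
    Stage 1: covariate effects on the means via soft-thresholded least squares.
    Stage 2: precision of the centered residuals via a regularized inverse
    (a stand-in for the Lasso-penalized D-trace estimator of the paper)."""
    B_ols, *_ = np.linalg.lstsq(Z, X, rcond=None)       # q x p coefficients
    B = soft_threshold(B_ols, lam_beta)                 # sparsify stage-1 estimate
    R = X - Z @ B                                       # residuals, means removed
    S = np.cov(R, rowvar=False)                         # residual covariance
    return np.linalg.inv(S + eps * np.eye(S.shape[1]))  # regularized precision

rng = np.random.default_rng(0)
n, p, q = 200, 5, 3
Z = rng.standard_normal((n, q))
X = Z @ rng.standard_normal((q, p)) * 0.5 + rng.standard_normal((n, p))
Omega = two_stage_precision(X, Z)
```

The point of the two stages is that removing the covariate-driven means first keeps mean misspecification from contaminating the precision estimate.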

2.
Precision matrix estimation is an important problem in statistical data analysis. This paper proposes a sparse precision matrix estimation approach based on the CLIME estimator and an efficient algorithm, GISSρ, originally proposed for ℓ1 sparse signal recovery in compressed sensing. The asymptotic convergence rate for sparse precision matrix estimation is analyzed with respect to the new stopping criteria of the proposed GISSρ algorithm. Finally, a numerical comparison of GISSρ with other sparse recovery algorithms, such as ADMM and HTP, in three precision matrix estimation settings is provided; the numerical results show the advantages of the proposed algorithm.

3.

In this article, we deal with sparse high-dimensional multivariate regression models. These models differ from ordinary multivariate regression models in two respects: (1) the dimension of the response vector and the number of covariates diverge to infinity; (2) the nonzero entries of the coefficient matrix and the precision matrix are sparse. We develop a two-stage sequential conditional selection (TSCS) approach to the identification and estimation of the nonzeros of the coefficient matrix and the precision matrix. It is established that TSCS is selection consistent for identifying the nonzeros of both matrices. Simulation studies comparing TSCS with existing state-of-the-art methods demonstrate that it outperforms them. As an illustration, the TSCS approach is also applied to a real dataset.


4.
We investigate the structure of a large precision matrix in Gaussian graphical models by decomposing it into a low-rank component and a remainder with a sparse precision matrix. Based on this decomposition, we propose to estimate the large precision matrix by inverting a principal orthogonal decomposition (IPOD). The IPOD approach has appealing practical interpretations in conditional graphical models given the low-rank component, and it connects to Gaussian graphical models with latent variables. Specifically, we show that the low-rank component in the decomposition can be viewed as the contribution of the latent variables in a Gaussian graphical model. Compared with existing approaches for latent variable graphical models, IPOD is convenient in practice, as only a low-dimensional matrix needs to be inverted. To identify the number of latent variables, an objective of independent interest, we investigate and justify an approach based on examining the ratios of adjacent eigenvalues of the sample covariance matrix. Theoretical properties, numerical examples, and a real data application demonstrate the merits of the IPOD approach in its convenience, performance, and interpretability.
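The eigenvalue-ratio criterion for choosing the number of latent variables can be sketched in a few lines of numpy. This is an illustrative version, not the paper's calibrated procedure; the function name and the `k_max` cutoff are assumptions:

```python
import numpy as np

def num_latent_by_eigen_ratio(S, k_max=None):
    """Estimate the number of latent variables as the index k at which the
    ratio of adjacent eigenvalues lambda_k / lambda_{k+1} of the sample
    covariance S is largest, i.e. where the spectrum drops most sharply."""
    ev = np.sort(np.linalg.eigvalsh(S))[::-1]      # eigenvalues, descending
    if k_max is None:
        k_max = len(ev) // 2                       # only scan the leading half
    ratios = ev[:k_max] / ev[1:k_max + 1]          # lambda_k / lambda_{k+1}
    return int(np.argmax(ratios)) + 1

# synthetic check: covariance = rank-2 spike plus identity noise
rng = np.random.default_rng(1)
p, n, k_true = 30, 2000, 2
F = rng.standard_normal((p, k_true)) * 3.0         # strong low-rank loadings
X = rng.standard_normal((n, k_true)) @ F.T + rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)
k_hat = num_latent_by_eigen_ratio(S)
```

With strong spikes, the first `k_true` eigenvalues sit far above the noise bulk, so the largest adjacent ratio occurs right at the spike/noise boundary.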

5.
The time-evolving precision matrix of a piecewise-constant Gaussian graphical model encodes the dynamic conditional dependency structure of a multivariate time series. Traditionally, graphical models are estimated under the assumption that the data are drawn identically from a generating distribution. Introducing sparsity- and sparse-difference-inducing priors, we relax these assumptions and propose a novel regularized M-estimator that jointly estimates both the graph and the changepoint structure. The resulting estimator can therefore favor sparse dependency structures and/or smoothly evolving graph structures, as required. Moreover, our approach extends current methods to allow estimation of changepoints that are grouped across multiple dependencies in a system. An efficient algorithm for estimating the structure is proposed. We study the empirical recovery properties in a synthetic setting. The qualitative effect of grouped changepoint estimation is then demonstrated by applying the method to a genetic time-course dataset. Supplementary material for this article is available online.

6.
The performance of Markov chain Monte Carlo (MCMC) algorithms such as the Metropolis-Hastings random walk (MHRW) is highly dependent on the choice of scaling matrix for the proposal distributions. A popular choice in adaptive MCMC methods is the empirical covariance matrix (ECM) of previous samples. However, this choice is problematic if the dimension of the target distribution is large, since the ECM then converges slowly and is computationally expensive to use. We propose two algorithms to improve convergence and decrease the computational cost of adaptive MCMC methods when the precision (inverse covariance) matrix of the target density can be well approximated by a sparse matrix. The first is an algorithm for online estimation of the Cholesky factor of a sparse precision matrix. The second estimates the sparsity structure of the precision matrix. Combining the two allows us to construct precision-based adaptive MCMC algorithms that can be used as black-box methods for densities with unknown dependency structures. We construct precision-based versions of the adaptive MHRW and the adaptive Metropolis-adjusted Langevin algorithm and demonstrate their performance in two examples. Supplementary materials for this article are available online.
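Why a sparse precision factor makes proposals cheap can be seen in a small numpy sketch: with precision Q = L Lᵀ, one triangular solve turns i.i.d. normals into draws with covariance Q⁻¹. This is the generic identity such precision-based proposals rely on, not the paper's online estimation algorithm; the dense `np.linalg.solve` stands in for a sparse triangular solve:

```python
import numpy as np

def sample_from_precision(Q, n_samples, rng):
    """Draw samples from N(0, Q^{-1}) using a Cholesky factor of the
    precision matrix Q = L L^T: if z ~ N(0, I) and L^T x = z, then
    cov(x) = L^{-T} L^{-1} = Q^{-1}."""
    L = np.linalg.cholesky(Q)
    Z = rng.standard_normal((Q.shape[0], n_samples))
    # back-substitution against L^T (a cheap sparse solve when L is sparse)
    return np.linalg.solve(L.T, Z).T

rng = np.random.default_rng(2)
Q = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # tridiagonal-style sparse precision
X = sample_from_precision(Q, 100_000, rng)
```

When Q is sparse with a sparse Cholesky factor, this costs far less than forming or factorizing a dense empirical covariance matrix.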

7.
We consider a new method for sparse covariance matrix estimation which is motivated by previous results for the so-called Stein-type estimators. Stein proposed a method for regularizing the sample covariance matrix by shrinking together the eigenvalues; the amount of shrinkage is chosen to minimize an unbiased estimate of the risk (UBEOR) under the entropy loss function. The resulting estimator has been shown in simulations to yield significant risk reductions over the maximum likelihood estimator. Our method extends the UBEOR minimization problem by adding an ℓ1 penalty on the entries of the estimated covariance matrix, which encourages a sparse estimate. For a multivariate Gaussian distribution, zeros in the covariance matrix correspond to marginal independences between variables. Unlike the ℓ1-penalized Gaussian likelihood function, our penalized UBEOR objective is convex and can be minimized via a simple block coordinate descent procedure. We demonstrate via numerical simulations and an analysis of microarray data from breast cancer patients that our proposed method generally outperforms other methods for sparse covariance matrix estimation and can be computed efficiently even in high dimensions.
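The sparsifying effect of the ℓ1 penalty on covariance entries can be illustrated with a one-step soft-thresholding sketch. This is not the penalized-UBEOR block coordinate descent described above (the eigenvalue shrinkage and the convex objective are omitted); it only shows how an ℓ1 term zeroes small entries:

```python
import numpy as np

def soft_threshold_cov(S, lam):
    """Soft-threshold the off-diagonal entries of a sample covariance S,
    mimicking the sparsity induced by an l1 penalty on covariance entries.
    Variances on the diagonal are left unpenalized."""
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))   # restore the diagonal
    return T

S = np.array([[1.00, 0.05, 0.40],
              [0.05, 1.00, 0.02],
              [0.40, 0.02, 1.00]])
S_sparse = soft_threshold_cov(S, 0.1)   # small entries 0.05, 0.02 become 0
```

Entries below the threshold are set exactly to zero, which in the Gaussian case corresponds to estimated marginal independences.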

8.
We present an incremental approach to 2-norm estimation for triangular matrices. Our investigation covers both dense and sparse matrices, which can arise, for example, from a QR, a Cholesky, or an LU factorization. If the explicit inverse of a triangular factor is available, as in the case of an implicit version of the LU factorization, we can relate our results to incremental condition estimation (ICE). Incremental norm estimation (INE) extends directly from the dense to the sparse case without the modifications that are necessary for the sparse version of ICE. INE can be applied to complement ICE, since the product of the two estimates gives an estimate of the matrix condition number. Furthermore, when applied to matrix inverses, INE can be used as the basis of a rank-revealing factorization.
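As a point of reference for the quantity being estimated, here is a simple power-iteration baseline for the 2-norm of a triangular factor. This is not the incremental (row-by-row) INE scheme of the paper, just a dense sketch; combining such an estimate of ‖R‖ with one of ‖R⁻¹‖ gives the condition number estimate mentioned above:

```python
import numpy as np

def triangular_two_norm_estimate(R, iters=50, seed=0):
    """Estimate the 2-norm (largest singular value) of a triangular matrix R
    by power iteration on R^T R, starting from a random vector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(R.shape[1])
    for _ in range(iters):
        w = R.T @ (R @ v)                 # one power-iteration step on R^T R
        v = w / np.linalg.norm(w)
    return np.linalg.norm(R @ v)          # ~ sigma_max(R) at convergence

R = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 0.5],
              [0.0, 0.0, 1.0]])
est = triangular_two_norm_estimate(R)
```

An incremental scheme instead updates its estimate as each new row or column of the factor arrives, avoiding repeated passes over the whole matrix.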

9.

One way to estimate variance components is by restricted maximum likelihood. The log-likelihood function is fully defined by the Cholesky factor of a matrix that is usually large and sparse. In this article, forward and backward differentiation methods are developed for calculating the first and second derivatives of the Cholesky factor and its functions. These differentiation methods are general and can be applied to either a full or a sparse matrix. Moreover, they can be used to calculate the derivatives needed for restricted maximum likelihood, resulting in substantial computational savings.
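The kind of derivative involved can be made concrete with the standard forward-mode identity for the Cholesky factor: if A = L Lᵀ and A is perturbed by a symmetric dA, then dL = L Φ(L⁻¹ dA L⁻ᵀ), where Φ takes the lower triangle and halves the diagonal. A dense numpy sketch (the article's methods exploit sparsity; this does not):

```python
import numpy as np

def chol_forward_diff(L, dA):
    """Forward-mode derivative of the Cholesky factor: given A = L L^T and a
    symmetric perturbation dA, return the lower-triangular dL satisfying
    d(L L^T) = dL L^T + L dL^T = dA, via dL = L * Phi(L^{-1} dA L^{-T})."""
    M = np.linalg.solve(L, np.linalg.solve(L, dA).T).T   # L^{-1} dA L^{-T}
    Phi = np.tril(M)
    np.fill_diagonal(Phi, 0.5 * np.diag(M))              # halve the diagonal
    return L @ Phi

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
dA = np.array([[0.2, 0.1],
               [0.1, 0.3]])
L = np.linalg.cholesky(A)
dL = chol_forward_diff(L, dA)
```

The identity follows from differentiating A = L Lᵀ and noting that Φ(M) + Φ(M)ᵀ = M for symmetric M, which is exactly why the diagonal is halved.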

10.
In this paper, we study robust quaternion matrix completion and provide a rigorous analysis for provable estimation of a quaternion matrix from a random subset of its corrupted entries. To generalize results from real matrix completion to quaternion matrix completion, we derive some new formulas to handle the noncommutativity of quaternions. We solve a convex optimization problem that minimizes the nuclear norm of the quaternion matrix, a convex surrogate for the quaternion matrix rank, plus the ℓ1-norm of the sparse quaternion matrix entries. We show that, under incoherence conditions, a quaternion matrix can be recovered exactly with overwhelming probability, provided that its rank is sufficiently small and the corrupted entries are sparsely located. The quaternion framework can be used to represent the red, green, and blue channels of color images. Results on recovering missing/noisy color image pixels, posed as a robust quaternion matrix completion problem, show that the proposed approach performs better than the compared methods, including image inpainting methods, a tensor-based completion method, and a quaternion completion method using semidefinite programming.

