Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
In this paper, distributed estimation of a high-dimensional sparse precision matrix is proposed for transelliptical graphical models, based on the debiased D-trace loss penalized lasso and a hard-thresholding method, for settings where samples are distributed across different machines. At a certain level of sparsity, this method not only correctly selects the non-zero elements of the sparse precision matrix, but also attains an error rate comparable to that of the estimator in a non-distributed setting. Numerical results further show that the proposed distributed method is more effective than the usual averaging method.
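The average-then-threshold idea behind such divide-and-conquer estimators can be sketched in a few lines. This is not the paper's debiased D-trace estimator, just a minimal numpy illustration; `distributed_precision_estimate` and the threshold level `tau` are hypothetical names:

```python
import numpy as np

def hard_threshold(M, tau):
    """Zero out entries with magnitude below tau (diagonal kept)."""
    out = np.where(np.abs(M) >= tau, M, 0.0)
    np.fill_diagonal(out, np.diag(M))
    return out

def distributed_precision_estimate(local_estimates, tau):
    """Average debiased local precision estimates from each machine,
    then hard-threshold to recover the sparsity pattern."""
    avg = np.mean(local_estimates, axis=0)
    return hard_threshold(avg, tau)
```

Only the local estimates (one matrix per machine) need to be communicated, not the raw samples.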

2.
The performance of Markov chain Monte Carlo (MCMC) algorithms like the Metropolis–Hastings random walk (MHRW) is highly dependent on the choice of scaling matrix for the proposal distributions. A popular choice of scaling matrix in adaptive MCMC methods is the empirical covariance matrix (ECM) of previous samples. However, this choice is problematic if the dimension of the target distribution is large, since the ECM then converges slowly and is computationally expensive to use. We propose two algorithms to improve convergence and decrease the computational cost of adaptive MCMC methods in cases when the precision (inverse covariance) matrix of the target density can be well approximated by a sparse matrix. The first is an algorithm for online estimation of the Cholesky factor of a sparse precision matrix. The second estimates the sparsity structure of the precision matrix. Combining the two algorithms allows us to construct precision-based adaptive MCMC algorithms that can be used as black-box methods for densities with unknown dependency structures. We construct precision-based versions of the adaptive MHRW and the adaptive Metropolis-adjusted Langevin algorithm and demonstrate the performance of the methods in two examples. Supplementary materials for this article are available online.
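Why a sparse Cholesky factor of the precision matrix is useful for proposals: given Q = L Lᵀ, drawing z ~ N(0, I) and solving Lᵀx = z yields x ~ N(0, Q⁻¹) without ever forming a dense covariance matrix. A minimal dense-numpy sketch of this standard trick (a real implementation would use sparse triangular solves):

```python
import numpy as np

def sample_from_precision(L, rng):
    """Draw x ~ N(0, Q^{-1}) given the Cholesky factor L of Q = L @ L.T.

    Solving L.T x = z for z ~ N(0, I) gives Cov(x) = L^{-T} L^{-1} = Q^{-1},
    so the dense covariance matrix is never materialized.
    """
    z = rng.standard_normal(L.shape[0])
    return np.linalg.solve(L.T, z)
```

With a sparse L, the triangular solve costs O(nnz(L)) per proposal rather than O(d²).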

3.
We consider a new method for sparse covariance matrix estimation which is motivated by previous results for the so-called Stein-type estimators. Stein proposed a method for regularizing the sample covariance matrix by shrinking together the eigenvalues; the amount of shrinkage is chosen to minimize an unbiased estimate of the risk (UBEOR) under the entropy loss function. The resulting estimator has been shown in simulations to yield significant risk reductions over the maximum likelihood estimator. Our method extends the UBEOR minimization problem by adding an ℓ1 penalty on the entries of the estimated covariance matrix, which encourages a sparse estimate. For a multivariate Gaussian distribution, zeros in the covariance matrix correspond to marginal independences between variables. Unlike the ℓ1-penalized Gaussian likelihood function, our penalized UBEOR objective is convex and can be minimized via a simple block coordinate descent procedure. We demonstrate via numerical simulations and an analysis of microarray data from breast cancer patients that our proposed method generally outperforms other methods for sparse covariance matrix estimation and can be computed efficiently even in high dimensions.
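The sparsity-inducing effect of an ℓ1 penalty on covariance entries comes from its proximal operator, soft thresholding. The sketch below is a simplified surrogate for illustration only, not the paper's penalized-UBEOR block coordinate descent:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrink each entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparsify_covariance(S, lam):
    """Soft-threshold off-diagonal entries of a sample covariance matrix,
    leaving the variances on the diagonal unpenalized."""
    out = soft_threshold(S, lam)
    np.fill_diagonal(out, np.diag(S))
    return out
```

Entries smaller than `lam` in magnitude are set exactly to zero, which is how the ℓ1 penalty produces sparse estimates.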

4.
This paper develops a covariate-adjusted precision matrix estimation using a two-stage estimation procedure. First, we identify the relevant covariates that affect the means by a joint ℓ1 penalization. Then, the estimated regression coefficients are used to estimate the mean values in a multivariate sub-Gaussian model, in order to estimate the sparse precision matrix through a lasso-penalized D-trace loss. Under some assumptions, we establish the convergence rate of the precision matrix estimator under different norms and demonstrate the sparse recovery property with probability converging to one. Simulations show that our method has competitive finite-sample performance compared with other methods.

5.
This paper develops a covariate-adjusted precision matrix estimation using a two-stage estimation procedure. First, we identify the relevant covariates that affect the means by a joint ℓ1 penalization. Then, the estimated regression coefficients are used to estimate the mean values in a multivariate sub-Gaussian model, in order to estimate the sparse precision matrix through a lasso-penalized D-trace loss. Under some assumptions, we establish the convergence rate of the precision matrix estimator under different norms and demonstrate the sparse recovery property with probability converging to one. Simulations show that our method has competitive finite-sample performance compared with other methods.

6.

In this article, we deal with sparse high-dimensional multivariate regression models. The models distinguish themselves from ordinary multivariate regression models in two aspects: (1) the dimension of the response vector and the number of covariates diverge to infinity; (2) the nonzero entries of the coefficient matrix and the precision matrix are sparse. We develop a two-stage sequential conditional selection (TSCS) approach to the identification and estimation of the nonzeros of the coefficient matrix and the precision matrix. It is established that the TSCS is selection consistent for the identification of the nonzeros of both the coefficient matrix and the precision matrix. Simulation studies are carried out to compare TSCS with the existing state-of-the-art methods, which demonstrate that the TSCS approach outperforms the existing methods. As an illustration, the TSCS approach is also applied to a real dataset.


7.
An iterative least squares parameter estimation algorithm is developed for controlled moving average systems based on matrix decomposition. The proposed algorithm avoids repeatedly computing the inverse of the data product moment matrix with large sizes at each iteration and has a high computational efficiency. A numerical example indicates that the proposed algorithm is effective.
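A standard way to avoid forming an explicit inverse of the data product moment matrix is to solve the normal equations through a Cholesky factorization. This generic least squares sketch is not the paper's iterative decomposition, only an illustration of the inverse-free idea:

```python
import numpy as np

def least_squares_via_cholesky(Phi, y):
    """Solve the normal equations (Phi^T Phi) theta = Phi^T y through a
    Cholesky factorization Phi^T Phi = L L^T, avoiding an explicit inverse."""
    G = Phi.T @ Phi
    b = Phi.T @ y
    L = np.linalg.cholesky(G)
    w = np.linalg.solve(L, b)       # forward solve: L w = b
    return np.linalg.solve(L.T, w)  # back solve:    L^T theta = w
```

The factorization can be computed once and reused across iterations, which is where the computational savings come from.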

8.
We propose a way to use the Markowitz pivot selection criterion for choosing the parameters of the extended ABS class of algorithms to present an effective algorithm for generating sparse null space bases. We explain in detail an efficient implementation of the algorithm, making use of the special MATLAB 7.0 functions for sparse matrix operations and the inherent efficiency of the ANSI C programming language. We then compare our proposed algorithm with an implementation of an efficient algorithm proposed by Coleman and Pothen with respect to the computing time, the accuracy, and the sparsity of the generated null space bases. Our extensive numerical results, using coefficient matrices of linear programming problems from the NETLIB set of test problems, show the competitiveness of our implemented algorithm.

9.
The time-evolving precision matrix of a piecewise-constant Gaussian graphical model encodes the dynamic conditional dependency structure of a multivariate time-series. Traditionally, graphical models are estimated under the assumption that data are drawn identically from a generating distribution. Introducing sparsity and sparse-difference inducing priors, we relax these assumptions and propose a novel regularized M-estimator to jointly estimate both the graph and changepoint structure. The resulting estimator can therefore favor sparse dependency structures and/or smoothly evolving graph structures, as required. Moreover, our approach extends current methods to allow estimation of changepoints that are grouped across multiple dependencies in a system. An efficient algorithm for estimating structure is proposed. We study the empirical recovery properties in a synthetic setting. The qualitative effect of grouped changepoint estimation is then demonstrated by applying the method on a genetic time-course dataset. Supplementary material for this article is available online.

10.
A CLASS OF FACTORIZATION UPDATE ALGORITHM FOR SOLVING SYSTEMS OF SPARSE NONLINEAR EQUATIONS, BAI Zhongzhi (Institute of Computational Mathematic...)

11.
Lasso is a commonly used variable selection method in machine learning, suitable for regression problems with sparsity. When the sample size is huge or massive data are stored on different machines, distributed computing is one of the key ways to reduce computation time and improve efficiency. Building on an equivalent optimization formulation of the Lasso model, this paper applies the ADMM algorithm to this formulation, in which the optimization variables are separable, constructs a distributed algorithm suited to Lasso variable selection, and proves that...
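A minimal single-machine ADMM for the lasso shows the separable structure such methods exploit; in a distributed version the x-update would be computed per machine and the results averaged. Function names and the default rho=1.0 are illustrative choices, not the paper's:

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding (proximal operator of k * ||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*||A x - b||^2 + lam*||x||_1 (single-machine sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)  # factor/reuse across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # smooth quadratic step
        z = soft(x + u, lam / rho)                   # shrinkage step
        u = u + x - z                                # dual update
    return z
```

The quadratic x-step and the separable shrinkage z-step are exactly the split that makes ADMM natural for distributed Lasso.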

12.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning and other machine learning techniques useful in many applications including neuroscience and signal processing. In this paper, we present a unified algorithm framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters, which is the key to achieving good performance for ADMM. We conduct extensive numerical experiments to compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm provides the ability to explicitly enforce various combinatorial sparsity patterns that, to our knowledge, has not been considered in existing approaches.

13.
Robust principal component analysis (RPCA) has been widely studied as a basic tool in statistics and data science; its core idea is to decompose the observed data into a low-rank part and a sparse part. Based on a nonconvex model of RPCA, this paper proposes a new Gauss-type alternating descent direction method built on gradient steps and a nonmonotone line-search technique. In the new algorithm, the variables associated with the low-rank part and the sparse part are updated alternately, where the low-rank variables are updated by a single gradient descent step with an exact step size, ...

14.
For transient dynamic problems, this paper combines step-by-step time integration to propose a class of generalized matrix splitting and element-by-element relaxation algorithms, which avoids the work usually required by the finite element method of assembling the global stiffness matrix and global mass matrix and of solving large sparse systems of equations. Theoretical analysis and numerical examples show that the generalized matrix splitting proposed here is the optimal splitting scheme. The algorithm has a clear physical interpretation and is easy to program and apply.

15.
Based on a special decomposition of block tridiagonal matrices, a new algorithm for solving block tridiagonal systems of equations is given. The algorithm contains selectable parameter matrices; by choosing these parameter matrices appropriately, the computational accuracy can be made higher than that of the well-known chase method (Thomas algorithm), and even when the chase method fails, the new algorithm can still obtain a solution of a certain accuracy.
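For reference, the scalar version of the chase method (the Thomas algorithm) that such algorithms are compared against can be written as follows; the paper's algorithm operates on block tridiagonal systems, which this sketch does not cover:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):  # forward elimination ("chase" down)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution ("chase" back up)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The method runs in O(n) time but can break down when a pivot `m` becomes zero or tiny, which motivates the more robust parameterized decompositions.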

16.
Model estimation is an important research topic in machine learning, and model estimation for dynamic data is the basis of system identification and system control. For the AR time-series model identification problem, it is proved that, for a given order, the least squares estimate of the AR model parameters is essentially also a moment estimate. Based on the structural risk minimization principle, by trading off model fit against model complexity, an AR model estimation algorithm based on sparse-structure iteration is proposed, and an optimal regularization parameter selection rule based on generalized ridge estimation is discussed. Numerical results show that the method can effectively identify AR models in a parameter-parsimonious way, with clear improvement over the moment estimation method.
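The least squares estimate of AR(p) coefficients mentioned above amounts to regressing x[t] on its p lags. A minimal sketch of that baseline (illustrative only, without the sparse-structure iteration or regularization described in the abstract):

```python
import numpy as np

def fit_ar_least_squares(x, p):
    """Least squares estimate of AR(p) coefficients:
    regress x[t] on (x[t-1], ..., x[t-p])."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

On a noiseless AR(1) series the estimate recovers the true coefficient exactly, which is a convenient sanity check.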

17.
The implementation of the recently proposed semi-monotonic augmented Lagrangian algorithm for the solution of large convex equality constrained quadratic programming problems is considered. It is proved that if the auxiliary problems are approximately solved by the conjugate gradient method, then the algorithm finds an approximate solution of the class of problems with uniformly bounded spectrum of the Hessian matrix in O(1) matrix–vector multiplications. If applied to the class of problems with Hessian matrices that are in addition either sufficiently sparse or can be expressed as a product of such sparse matrices, then the cost of the solution is proportional to the dimension of the problems. Theoretical results are illustrated by numerical experiments. This research is supported by grants of the Ministry of Education No. S3086102, ET400300415 and MSM 6198910027.
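The auxiliary-problem solver referenced here, the conjugate gradient method, costs one matrix-vector product per iteration, which is what makes a bound counted in matrix–vector multiplications meaningful for sparse Hessians. A textbook sketch:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient for A x = b with symmetric positive definite A.
    Each iteration performs exactly one matrix-vector product A @ p."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n iterations, and far fewer when the spectrum of A is clustered, which is the property the uniform-spectrum assumption exploits.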

18.
This paper mainly considers estimation of the parameter matrix in the growth curve model. First, based on the growth curve model after the Potthoff-Roy transformation, penalized least squares estimates of the parameter matrix are given under different penalty functions: hard thresholding, LASSO, ENET, improved LASSO, and SCAD. Then, for the untransformed growth curve model, its penalized least squares estimate is defined directly, and a numerical algorithm for the estimate based on the Nelder-Mead method is given. Finally, the proposed estimation methods are evaluated by simulation. The results show that the adaptive LASSO performs relatively well in estimation.

19.
In multivariate regression models, a sparse singular value decomposition of the regression component matrix is appealing for reducing dimensionality and facilitating interpretation. However, the recovery of such a decomposition remains very challenging, largely due to the simultaneous presence of orthogonality constraints and co-sparsity regularization. By delving into the underlying statistical data-generation mechanism, we reformulate the problem as a supervised co-sparse factor analysis, and develop an efficient computational procedure, named sequential factor extraction via co-sparse unit-rank estimation (SeCURE), that completely bypasses the orthogonality requirements. At each step, the problem reduces to a sparse multivariate regression with a unit-rank constraint. Nicely, each sequentially extracted sparse and unit-rank coefficient matrix automatically leads to co-sparsity in its pair of singular vectors. Each latent factor is thus a sparse linear combination of the predictors and may influence only a subset of responses. The proposed algorithm is guaranteed to converge, and it ensures efficient computation even with incomplete data and/or when enforcing exact orthogonality is desired. Our estimators enjoy the oracle properties asymptotically; a non-asymptotic error bound further reveals some interesting finite-sample behaviors of the estimators. The efficacy of SeCURE is demonstrated by simulation studies and two applications in genetics. Supplementary materials for this article are available online.

20.
This letter presents an iterative estimation algorithm for modeling a class of output nonlinear systems. The basic idea is to derive an estimation model and to solve an optimization problem using the gradient search. The proposed iterative numerical algorithm can estimate the parameters of a class of Wiener nonlinear systems from input–output measurement data. The proposed algorithm has faster convergence rates compared with the stochastic gradient algorithm. The numerical simulation results indicate that the proposed algorithm works well.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)