Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning and other machine learning techniques useful in many applications including neuroscience and signal processing. In this paper, we present a unified algorithmic framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters, which is the key to achieving good performance with ADMM. We conduct extensive numerical experiments to compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm provides the ability to explicitly enforce various combinatorial sparsity patterns that, to our knowledge, have not been considered in existing approaches.
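To make the alternating-direction scheme concrete, here is a minimal Python sketch of ADMM applied to one SeMF instance, X ≈ UV with both factors constrained to be nonnegative. This is illustrative only, not the paper's implementation: the penalty parameter rho is held fixed (the paper's key contribution is adapting it), and the function name and constraint choices are hypothetical.

```python
import numpy as np

def semf_admm(X, k, rho=1.0, iters=300):
    # Sketch: minimize ||X - U V||_F^2 subject to U >= 0, V >= 0, via ADMM
    # with auxiliary copies (Ut, Vt) carrying the structure constraints.
    m, n = X.shape
    rng = np.random.default_rng(0)
    U, V = rng.random((m, k)), rng.random((k, n))
    Ut, Vt = U.copy(), V.copy()                    # constrained copies
    LU, LV = np.zeros_like(U), np.zeros_like(V)    # scaled dual variables
    I = np.eye(k)
    for _ in range(iters):
        # proximal least-squares subproblems (only k x k systems to solve)
        U = np.linalg.solve(V @ V.T + rho * I, (X @ V.T + rho * (Ut - LU)).T).T
        V = np.linalg.solve(U.T @ U + rho * I, U.T @ X + rho * (Vt - LV))
        # low-complexity projections onto the constraint sets
        Ut = np.maximum(U + LU, 0.0)
        Vt = np.maximum(V + LV, 0.0)
        # dual updates
        LU += U - Ut
        LV += V - Vt
    return Ut, Vt
```

Swapping the two np.maximum projections for, say, column-wise hard thresholding would enforce a combinatorial sparsity pattern instead, which is the flexibility the abstract emphasizes.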

2.
Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to account for the possible lack of positive definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems that have a sparse Hessian matrix whose sparsity pattern is known a priori. Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix. Determining the Hessian matrix by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested. Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used, but many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner; such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed. The above algorithms were tested on a set of problems, with the overall conclusion that these methods perform well in practice.
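The sparsity-exploiting idea can be sketched in a few lines: columns of the Hessian that share no nonzero row ("structurally orthogonal" columns) are grouped, and a single extra gradient evaluation per group recovers every entry of those columns at once. The greedy grouping below is a generic stand-in, assuming a known boolean sparsity pattern; it is not the article's new procedure, and both function names are hypothetical.

```python
import numpy as np

def group_columns(pattern):
    # Greedily place each column into the first group containing no column
    # that shares a nonzero row with it (structural orthogonality).
    groups = []
    for j in range(pattern.shape[1]):
        for g in groups:
            if not np.any(pattern[:, [j]] & pattern[:, g]):
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

def sparse_hessian_fd(grad, x, pattern, h=1e-6):
    # One gradient difference per group instead of one per column.
    n = x.size
    H = np.zeros((n, n))
    g0 = grad(x)
    for g in group_columns(pattern):
        d = np.zeros(n)
        d[g] = 1.0
        Hd = (grad(x + h * d) - g0) / h
        for j in g:                  # each row meets at most one column per group,
            rows = pattern[:, j]     # so entries can be read off unambiguously
            H[rows, j] = Hd[rows]
    return 0.5 * (H + H.T)           # symmetrize the finite-difference estimate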

3.
Principal component analysis (PCA) is a widely used tool for data analysis and dimension reduction in applications throughout science and engineering. However, the principal components (PCs) can sometimes be difficult to interpret, because they are linear combinations of all the original variables. To facilitate interpretation, sparse PCA produces modified PCs with sparse loadings, i.e. loadings with very few non-zero elements. In this paper, we propose a new sparse PCA method, namely sparse PCA via regularized SVD (sPCA-rSVD). We use the connection of PCA with singular value decomposition (SVD) of the data matrix and extract the PCs through solving a low rank matrix approximation problem. Regularization penalties are introduced to the corresponding minimization problem to promote sparsity in PC loadings. An efficient iterative algorithm is proposed for computation. Two tuning parameter selection methods are discussed. Some theoretical results are established to justify the use of sPCA-rSVD when only the data covariance matrix is available. In addition, we give a modified definition of variance explained by the sparse PCs. The sPCA-rSVD provides a uniform treatment of both classical multivariate data and high-dimension-low-sample-size (HDLSS) data. Further understanding of sPCA-rSVD and some existing alternatives is gained through simulation studies and real data examples, which suggests that sPCA-rSVD provides competitive results.
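A minimal sketch of the rank-one building block, assuming a lasso-type soft-threshold penalty (one of the penalties the method admits); tuning of lam, the stopping rule, and the extraction of further PCs by deflation are all simplified here.

```python
import numpy as np

def spca_rsvd_rank1(X, lam, iters=200):
    # Sparse rank-one approximation X ~ u v': alternate a closed-form update
    # of the unit score vector u with soft-thresholding of the loadings v.
    u = np.linalg.svd(X, full_matrices=False)[0][:, 0]     # warm start: leading left SV
    v = np.zeros(X.shape[1])
    for _ in range(iters):
        w = X.T @ u
        v = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # sparse loadings
        if not v.any():
            break                                          # lam too large: all loadings zero
        u = X @ v
        u /= np.linalg.norm(u)
    return u, v    # deflate with X - np.outer(u, v) to extract the next sparse PC
```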

4.
In this paper, we present a new incomplete LU factorization using pivoting by columns and row permutation. Pivoting by columns helps to avoid small pivots, and row permutation is used to promote sparsity. This factorization is used in a multilevel framework as a preconditioner for iterative methods for solving sparse linear systems. In most multilevel ILU factorization preconditioners, preprocessing (scaling and permutation of rows and columns of the coefficient matrix) results in further improvements. Numerical results illustrate that these preconditioners are suitable for a wide variety of applications. Copyright © 2007 John Wiley & Sons, Ltd.
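The overall pattern, an incomplete LU factorization applied as a preconditioner for a Krylov solver, can be reproduced with SciPy's SuperLU-based ILU. This is a hedged stand-in for illustration, not the paper's multilevel factorization with column pivoting and row permutation:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

n = 500
A = (sp.random(n, n, density=0.01, random_state=0) + 10.0 * sp.eye(n)).tocsc()
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # threshold-based incomplete LU
M = LinearOperator(A.shape, ilu.solve)          # preconditioner: apply (LU)^-1
x, info = gmres(A, b, M=M)                      # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```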

5.
Principal component analysis (PCA) is an important tool for dimension reduction in multivariate analysis. Regularized PCA methods, such as sparse PCA and functional PCA, have been developed to incorporate special features in many real applications. Sometimes additional variables (referred to as supervision) are measured on the same set of samples, which can potentially drive low-rank structures of the primary data of interest. Classical PCA methods cannot make use of such supervision data. In this article, we propose a supervised sparse and functional principal component (SupSFPC) framework that can incorporate supervision information to recover underlying structures that are more interpretable. The framework unifies and generalizes several existing methods and flexibly adapts to the practical scenarios at hand. The SupSFPC model is formulated in a hierarchical fashion using latent variables. We develop an efficient modified expectation-maximization (EM) algorithm for parameter estimation. We also implement fast data-driven procedures for tuning parameter selection. Our comprehensive simulation and real data examples demonstrate the advantages of SupSFPC. Supplementary materials for this article are available online.

6.
Motivated by Sasaki's work on the extended Hensel construction for solving multivariate algebraic equations, we present a generalized Hensel lifting, which takes advantage of sparsity, for factoring bivariate polynomials over the rational number field. Another feature of the factorization algorithm presented in this article is a new recombination method based on numerical linear algebra, which can resolve the extraneous-factor problem before lifting. Both theoretical analysis and experimental data show that the algorithm is efficient, especially for sparse bivariate polynomials.

7.
We present an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner is based on a multifrontal variant of sparse LU factorization used with nested dissection ordering. Multifrontal factorization amounts to a partial factorization of a sequence of logically dense frontal matrices, and the preconditioner is obtained if structured factorization is used instead. The latter exploits the presence of low numerical rank in some off-diagonal blocks of the frontal matrices. An algebraic procedure is presented that identifies the hierarchy of off-diagonal blocks with low numerical rank based on the sparsity of the system matrix. This procedure is motivated by a model problem analysis, yet numerical experiments show that it is successful beyond the model problem scope. Further aspects relevant to the algebraic structured preconditioner are discussed and illustrated with numerical experiments. The preconditioner is also compared with other solvers, including the corresponding direct solver. Copyright © 2015 John Wiley & Sons, Ltd.

8.
In Part I of this series of articles, we introduced a general framework for exploiting the aggregate sparsity pattern over all data matrices of large-scale, sparse semidefinite programs (SDPs) when solving them by primal-dual interior-point methods. This framework is based on results about positive semidefinite matrix completion, and it can be embodied in two different ways. One is a conversion of a given sparse SDP having a large-scale positive semidefinite matrix variable into an SDP having multiple but smaller positive semidefinite matrix variables. The other incorporates a positive definite matrix completion itself into a primal-dual interior-point method. The current article presents the details of their implementations. We introduce new techniques to deal with the sparsity through a clique tree in the former method and through new computational formulae in the latter. Numerical results over different classes of SDPs show that these methods can be very efficient for some problems. Received: March 18, 2001 / Accepted: May 31, 2001 / Published online: October 9, 2002. The author was supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan. Key words: semidefinite programming – primal-dual interior-point method – matrix completion problem – clique tree – numerical results. Mathematics Subject Classification (2000): 90C22, 90C51, 05C50, 05C05

9.
Quadratic programs obtained for optimal control problems of dynamic or discrete-time processes usually involve highly block-structured Hessian and constraint matrices, which efficient numerical methods should exploit. In interior point methods, this is elegantly achieved by the widespread availability of advanced sparse symmetric indefinite factorization codes. For active set methods, however, conventional dense matrix techniques suffer from the need to update basis matrices in every active set iteration, thereby losing the sparsity structure after a few updates. This contribution presents a new factorization of a KKT matrix arising in active set methods for optimal control. It fully respects the block structure without any fill-in. For this factorization, matrix updates are derived for all cases of active set changes. This allows for the design of a highly efficient block-structured active set method for optimal control and model predictive control problems with long horizons or many control parameters.

10.
We propose an automatic preconditioning scheme for large sparse numerical optimization. The strategy is based on an examination of the sparsity pattern of the Hessian matrix: using a graph-theoretic heuristic, a block-diagonal approximation to the Hessian matrix is induced. The blocks are submatrices of the Hessian matrix; furthermore, each block is chordal. That is, under a positive definiteness assumption, the Cholesky factorization can be applied to each block without creating any new nonzeros (fill). Therefore the preconditioner is space efficient. We conduct a number of numerical experiments to determine the effectiveness of the preconditioner in the context of a linear conjugate-gradient algorithm for optimization.
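Concretely, once the block partition has been chosen (the article induces it with a graph-theoretic heuristic; the sketch below simply takes it as given), the preconditioner factors each diagonal block once and reuses the Cholesky factors at every conjugate-gradient iteration:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

def block_diag_preconditioner(H, blocks):
    # blocks: index arrays partitioning the variables. The article's blocks are
    # chordal, so a sparse Cholesky of each would be fill-free; this dense
    # sketch ignores that refinement and just factors each block once.
    factors = [(idx, cho_factor(H[np.ix_(idx, idx)])) for idx in blocks]
    def apply(r):
        z = np.empty_like(r)
        for idx, f in factors:
            z[idx] = cho_solve(f, r[idx])   # one small pair of triangular solves per block
        return z
    return LinearOperator(H.shape, apply)

# toy usage with two hand-picked blocks
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
H = G @ G.T + 6 * np.eye(6)                 # SPD stand-in for a Hessian
M = block_diag_preconditioner(H, [np.arange(3), np.arange(3, 6)])
x, info = cg(H, np.ones(6), M=M)
```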

11.
Exploiting sparsity has been a key issue in solving large-scale optimization problems. The most time-consuming part of primal-dual interior-point methods for linear programs, second-order cone programs, and semidefinite programs is solving the Schur complement equation at each iteration, usually by the Cholesky factorization. The computational efficiency is greatly affected by the sparsity of the coefficient matrix of the equation, which is determined by the sparsity of the optimization problem (linear program, semidefinite program or second-order cone program). We show that if an optimization problem is correlatively sparse, then the coefficient matrix of the Schur complement equation inherits the sparsity, and a sparse Cholesky factorization applied to the matrix results in no fill-in. S. Kim's research was supported by Kosef R01-2005-000-10271-0 and KRF-2006-312-C00062.

12.
Research on robust principal component analysis has attracted much attention recently. Generally, the model assumes sparse noise and characterizes the error term by the ℓ1-norm. In practice, however, the sparse noise has a clustering effect, so simply using a fixed ℓp-norm is not appropriate for modeling. In this paper, we propose a novel method based on sparse Bayesian learning principles and Markov random fields. The method is proved to be very effective for low-rank matrix recovery and contiguous outlier detection, by enforcing the low-rank constraint in a matrix factorization formulation and incorporating the contiguity prior as a sparsity constraint. The experiments on both synthetic data and some practical computer vision applications show that the novel method proposed in this paper is competitive when compared with other state-of-the-art methods.

13.
A basic framework for exploiting sparsity via positive semidefinite matrix completion is presented for an optimization problem with linear and nonlinear matrix inequalities. The sparsity, characterized by a chordal graph structure, can be detected in the variable matrix or in a linear or nonlinear matrix-inequality constraint of the problem. We classify the sparsity into two types: domain-space sparsity (d-space sparsity) for the symmetric matrix variable in the objective and/or constraint functions of the problem, which is required to be positive semidefinite, and range-space sparsity (r-space sparsity) for a linear or nonlinear matrix-inequality constraint of the problem. Four conversion methods are proposed in this framework: two for exploiting d-space sparsity and two for exploiting r-space sparsity. When applied to a polynomial semidefinite program (SDP), these conversion methods enhance the structured sparsity of the problem, called correlative sparsity. As a result, the resulting polynomial SDP can be solved more effectively by applying the sparse SDP relaxation. Preliminary numerical results on the conversion methods indicate their potential for improving the efficiency of solving various problems.

14.
We propose a Bayesian approach for inference in the multivariate probit model, taking into account the association structure between binary observations. We model the association through the correlation matrix of the latent Gaussian variables. Conditional independence is imposed by setting some off-diagonal elements of the inverse correlation matrix to zero and this sparsity structure is modeled using a decomposable graphical model. We propose an efficient Markov chain Monte Carlo algorithm relying on a parameter expansion scheme to sample from the resulting posterior distribution. This algorithm updates the correlation matrix within a simple Gibbs sampling framework and allows us to infer the correlation structure from the data, generalizing methods used for inference in decomposable Gaussian graphical models to multivariate binary observations. We demonstrate the performance of this model and of the Markov chain Monte Carlo algorithm on simulated and real datasets. This article has online supplementary materials.

15.
Interior point methods usually rely on iterative methods to solve the linear systems of large-scale problems. This paper proposes a hybrid strategy using groups for the preconditioning of these iterative methods. The objective is to solve large-scale linear programming problems more efficiently through a faster and more robust computation of the preconditioner. In these problems, the coefficient matrix of the linear system becomes ill-conditioned during the interior point iterations, causing numerical difficulties in finding a solution, mainly with iterative methods. Therefore, the use of preconditioners is mandatory to achieve successful results. The paper proposes a new column ordering for the splitting preconditioner computation, exploiting the sparsity of the original matrix and the concept of groups. This new preconditioner is designed especially for the final interior point iterations; a hybrid approach with the controlled Cholesky factorization preconditioner is adopted. Case studies show that the proposed methodology reduces the computational times with the same quality of solutions when compared to previous reference approaches. Furthermore, the benefits are obtained while preserving the sparse structure of the systems. These results highlight the suitability of the proposed approach for large-scale problems.

16.
Most of the existing procedures for sparse principal component analysis (PCA) use a penalty function to obtain a sparse matrix of weights by which a data matrix is post-multiplied to produce PC scores. In this paper, we propose a new sparse PCA procedure which differs from the existing ones in two ways. First, the new procedure does not sparsify the weight matrix; instead, it sparsifies the so-called loading matrix, by which the score matrix is post-multiplied to approximate the data matrix. Second, the cardinality of the loading matrix, i.e., the total number of nonzero loadings, is pre-specified as an integer, without using penalty functions. The procedure is called unpenalized sparse loading PCA (USLPCA). A desirable property of USLPCA is that the indices for the percentages of explained variance can be defined in the same form as in standard PCA. We develop an alternating least squares algorithm for USLPCA which uses the fact that the PCA loss function can be decomposed as the sum of a term irrelevant to the loadings and another that is easily minimized under cardinality constraints. A procedure is also presented for selecting the best cardinality using information criteria. The procedures are assessed in a simulation study and illustrated with real data examples.
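A hedged sketch of such an alternating least squares iteration, using the decomposition the abstract mentions: with orthonormal scores T, the loss separates entrywise in the loadings B, so the cardinality constraint is satisfied by keeping the largest-magnitude entries. The function name and warm start are illustrative, not the authors' code.

```python
import numpy as np

def uslpca(X, k, cardinality, iters=100):
    # minimize ||X - T B'||_F^2 with T'T = I_k and nnz(B) <= cardinality
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    T = U[:, :k]                                   # warm start from ordinary PCA
    for _ in range(iters):
        B = X.T @ T                                # unconstrained minimizer given T
        cutoff = np.sort(np.abs(B).ravel())[-cardinality]
        B[np.abs(B) < cutoff] = 0.0                # keep the `cardinality` largest loadings
        U2, _, V2t = np.linalg.svd(X @ B, full_matrices=False)
        T = U2 @ V2t                               # Procrustes update of the scores
    return T, B
```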

17.
Abaffy, Broyden, and Spedicato (ABS) have recently proposed a class of direct methods for solving nonsparse linear systems. It is the purpose of this paper to demonstrate that, with proper choice of parameters, ABS methods exploit sparsity in a natural way. In particular, we study the choice of parameters which corresponds to an LU decomposition of the coefficient matrix. In many cases, the fill-in, represented by the nonzero elements of the deflection matrix, uses less storage than the corresponding fill-in of an explicit LU factorization. Hence the above can be a viable and simple method for solving sparse linear systems. A simple reordering scheme for the coefficient matrix is also proposed for the purpose of reducing the fill-in of the deflection matrices.
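For concreteness, here is a sketch of one member of the ABS class (the parameter choice usually credited to Huang), showing the deflection matrix H whose nonzeros the abstract discusses. This is the generic dense iteration, not the paper's sparsity-oriented LU choice of parameters, and the function name is hypothetical.

```python
import numpy as np

def abs_huang(A, b, tol=1e-12):
    # Solve a consistent system A x = b row by row; H is the deflection
    # ("Abaffian") matrix, deflated so that it annihilates all processed rows.
    m, n = A.shape
    x = np.zeros(n)
    H = np.eye(n)
    for i in range(m):
        s = H @ A[i]                     # search direction for row i
        denom = A[i] @ s                 # equals s @ s for this symmetric H
        if denom < tol:
            continue                     # row linearly dependent on earlier rows
        x -= ((A[i] @ x - b[i]) / denom) * s
        H -= np.outer(s, s) / denom      # deflate: now H @ A[j] = 0 for all j <= i
    return x                             # minimum-norm solution when started at 0
```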

18.
We describe how to maintain the triangular factor of a sparse QR factorization when columns are added and deleted and Q cannot be stored for sparsity reasons. The updating procedures could be thought of as a sparse counterpart of Reichel and Gragg's package QRUP. They allow us to solve a sequence of sparse linear least squares subproblems in which each matrix B_k is an independent subset of the columns of a fixed matrix A, and B_{k+1} is obtained by adding or deleting one column. Like Coleman and Hulbert [T. Coleman, L. Hulbert, A direct active set algorithm for large sparse quadratic programs with simple bounds, Math. Program. 45 (1989) 373-406], we adapt the sparse direct methodology of Björck and Oreborn of the late 1980s, but without forming AᵀA, which may be only positive semidefinite. Our Matlab 5 implementation works with a suitable row and column numbering within a static triangular sparsity pattern that is computed in advance by symbolic factorization of AᵀA and preserved with placeholders.
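SciPy ships dense counterparts of such column updates; the sketch below shows the add/delete cycle on a small dense example. Unlike the paper's procedures it stores Q explicitly and exploits no sparsity, so it only illustrates the bookkeeping being maintained:

```python
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
Q, R = qr(A)                                   # factor the initial column subset

u = rng.standard_normal(8)
Q1, R1 = qr_insert(Q, R, u, 4, which='col')    # B_{k+1}: append column u
Q2, R2 = qr_delete(Q1, R1, 0, which='col')     # B_{k+2}: drop the first column

B = np.column_stack([A[:, 1:], u])
print(np.allclose(Q2 @ R2, B))                 # True: the factors track the column set
```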

19.
We propose a new inertia-revealing factorization for sparse symmetric matrices. The factorization scheme and the method for extracting the inertia from it were proposed in the 1960s for dense, banded, or tridiagonal matrices, but they have been abandoned in favor of faster methods. We show that this scheme can be applied to any sparse symmetric matrix and that the fill in the factorization is bounded by the fill in the sparse QR factorization of the same matrix (but is usually much smaller). We describe our serial proof-of-concept implementation and present experimental results, studying the method's numerical stability and performance.
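For contrast, the standard modern route to the inertia of a symmetric matrix goes through a symmetric indefinite factorization and Sylvester's law of inertia. A minimal dense sketch (not the paper's scheme, whose point is bounded fill for sparse matrices):

```python
import numpy as np
from scipy.linalg import ldl

def inertia(A, tol=1e-12):
    # A = P L D L' P' with D block diagonal (1x1 / 2x2 blocks); by Sylvester's
    # law of inertia, A and D have the same numbers of +, -, and zero eigenvalues.
    _, D, _ = ldl(A)
    ev = np.linalg.eigvalsh(D)
    return int((ev > tol).sum()), int((ev < -tol).sum()), int((np.abs(ev) <= tol).sum())

A = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])
print(inertia(A))    # (1, 2, 0): one positive and two negative eigenvalues
```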

20.
Optimal matrix approximants in structural identification
Problems of model correlation and system identification are central in the design, analysis, and control of large space structures. Of the numerous methods that have been proposed, many are based on finding minimal adjustments to a model matrix sufficient to introduce some desirable quality into that matrix. In this work, several of these methods are reviewed, placed in a modern framework, and linked to other previously known ideas in computational linear algebra and optimization. This new framework provides a point of departure for a number of new methods which are introduced here. Significant among these is a method for stiffness matrix adjustment which preserves the sparsity pattern of the original matrix, requires comparatively modest computational resources, and allows robust handling of noisy modal data. Numerical examples are included to illustrate the methods presented herein. This research was partially supported by the National Science Foundation under Grant DMS-88-07483 and by NASA under Grant NAG-1-960.
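The flavor of such sparsity-preserving adjustments can be conveyed with a toy projected-gradient sketch: drive the modal residual down while projecting the correction back onto symmetry and the sparsity pattern of K after every step. This illustrates the general idea only, assuming a unit mass matrix and a crude fixed step size; it is not the article's method.

```python
import numpy as np

def adjust_stiffness(K, Phi, lam, pattern, step=1e-3, iters=1000):
    # Find a small symmetric correction D, zero outside `pattern`, so that
    # (K + D) Phi ~ Phi diag(lam) for measured modes Phi and eigenvalues lam
    # (unit mass matrix assumed for simplicity).
    D = np.zeros_like(K)
    target = Phi * lam                      # Phi @ diag(lam), via broadcasting
    for _ in range(iters):
        R = (K + D) @ Phi - target          # modal residual
        G = 2.0 * R @ Phi.T                 # gradient of ||R||_F^2 with respect to D
        D -= step * G
        D = 0.5 * (D + D.T) * pattern       # project: symmetry + sparsity of K
    return D
```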
