Similar Documents
10 similar documents found (search time: 171 ms)
1.
In this article, we present a fast and stable algorithm for solving a class of optimization problems that arise in many statistical estimation procedures, such as the sparse fused lasso over a graph, convex clustering, and trend filtering. We propose a so-called augmented alternating direction method of multipliers (ADMM) algorithm to solve this class of problems. Compared to a standard ADMM algorithm, our proposal significantly reduces the computational cost at each iteration while maintaining roughly the same overall convergence speed. We also consider a new varying penalty scheme for the ADMM algorithm, which can further accelerate convergence, especially when solving a sequence of problems with tuning parameters of different scales. Extensive numerical experiments on the sparse fused lasso problem show that the proposed algorithm is more efficient than the standard ADMM and two other existing state-of-the-art specialized algorithms. Finally, we discuss a possible extension and some interesting connections to two well-known algorithms. Supplementary materials for the article are available online.
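The abstract does not spell out the iteration, so as a point of reference here is a minimal Python/NumPy sketch of the standard (non-augmented) ADMM applied to one member of this problem class, the 1-D fused lasso min_x 0.5*||x - y||^2 + lam*||D x||_1 with the splitting z = D x; all function names and parameter values below are illustrative, not taken from the paper.

    import numpy as np

    def soft_threshold(v, t):
        """Elementwise soft-thresholding operator."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_fused_lasso(y, D, lam, rho=1.0, n_iter=200):
        """Standard ADMM for min_x 0.5*||x - y||^2 + lam*||D x||_1
        with the splitting z = D x.  This is the baseline scheme the
        paper improves upon, not the authors' augmented variant."""
        n = y.size
        x = y.copy()
        z = D @ x
        u = np.zeros_like(z)
        # The x-update solves (I + rho*D'D) x = y + rho*D'(z - u);
        # the coefficient matrix is fixed, so invert it once (dense sketch).
        A_inv = np.linalg.inv(np.eye(n) + rho * D.T @ D)
        for _ in range(n_iter):
            x = A_inv @ (y + rho * D.T @ (z - u))
            z = soft_threshold(D @ x + u, lam / rho)
            u = u + D @ x - z
        return x

    # 1-D fused lasso example: D is the first-difference operator.
    y = np.concatenate([np.ones(20), 3 * np.ones(20)]) + 0.3 * np.random.randn(40)
    D = np.diff(np.eye(40), axis=0)
    x_hat = admm_fused_lasso(y, D, lam=1.0)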

2.
We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka–Łojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows us to analyze various classes of nonconvex and nonsmooth problems and related nonconvex proximal forward–backward algorithms with semi-algebraic problem data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.
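As a concrete (and heavily simplified) illustration of a PALM-type iteration, the sketch below specializes it to plain nonnegative matrix factorization, the application mentioned at the end of the abstract: each block takes a linearized gradient step on the smooth coupling term followed by the proximal map of its nonsmooth part, here the projection onto the nonnegative orthant. Names and step-size rules are assumptions for illustration, not taken from the paper.

    import numpy as np

    def palm_nmf(M, r, n_iter=500, gamma=1.1, seed=0):
        """PALM specialized to min_{X>=0, Y>=0} 0.5*||M - X @ Y||_F^2.
        Step sizes follow the usual rule 1 / (gamma * block Lipschitz constant)."""
        rng = np.random.default_rng(seed)
        m, n = M.shape
        X = rng.random((m, r))
        Y = rng.random((r, n))
        for _ in range(n_iter):
            # Block X: gradient step on the coupling term, then project onto X >= 0.
            c = gamma * np.linalg.norm(Y @ Y.T, 2) + 1e-12
            X = np.maximum(X - (1.0 / c) * (X @ Y - M) @ Y.T, 0.0)
            # Block Y: same pattern with its own Lipschitz constant.
            d = gamma * np.linalg.norm(X.T @ X, 2) + 1e-12
            Y = np.maximum(Y - (1.0 / d) * X.T @ (X @ Y - M), 0.0)
        return X, Y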

3.
This collection of Matlab 7.0 software supplements and complements the package UTV Tools from 1999, and includes implementations of special-purpose rank-revealing algorithms developed since the publication of the original package. We provide algorithms for computing and modifying symmetric rank-revealing VSV decompositions, we expand the algorithms for the ULLV decomposition of a matrix pair to handle interference-type problems with a rank-deficient covariance matrix, and we provide a robust and reliable Lanczos algorithm which, despite its simplicity, is able to capture all the dominant singular values of a sparse or structured matrix. These new algorithms have applications in signal processing, optimization, and LSI information retrieval. AMS subject classification: 65F25.
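For orientation, here is a bare-bones Golub–Kahan (Lanczos) bidiagonalization in Python/NumPy that approximates the dominant singular values of a sparse matrix. The package's Matlab routine adds the reorthogonalization and stopping safeguards needed in practice, so treat this purely as a sketch of the underlying idea; all names are illustrative.

    import numpy as np
    import scipy.sparse as sp

    def lanczos_singular_values(A, k, seed=0):
        """k steps of Golub-Kahan bidiagonalization without reorthogonalization.
        The singular values of the small bidiagonal matrix B approximate the
        dominant singular values of A."""
        m, n = A.shape
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        u_prev, beta = np.zeros(m), 0.0
        alphas, betas = [], []
        for _ in range(k):
            u = A @ v - beta * u_prev
            alpha = np.linalg.norm(u)
            u /= alpha
            w = A.T @ u - alpha * v
            beta = np.linalg.norm(w)
            v = w / beta
            alphas.append(alpha)
            betas.append(beta)
            u_prev = u
        # Upper-bidiagonal B: alphas on the diagonal, betas on the superdiagonal.
        B = np.diag(alphas) + np.diag(betas[:-1], 1)
        return np.linalg.svd(B, compute_uv=False)

    A = sp.random(2000, 800, density=0.01, format="csr", random_state=0)
    print(lanczos_singular_values(A, k=25)[:5])   # leading singular value estimates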

4.
We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that (a) we use a third-order tensor representation for our images and (b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions, owing to the dictionary's ability to represent repeated features compactly.
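The reconstruction stage described above amounts to a sparsity-regularized least-squares problem in the dictionary coefficients. The following matrix-based stand-in (proximal gradient with a nonnegativity constraint) illustrates that stage only; the paper's actual formulation is tensor-based, and the operator and dictionary names below are assumptions made for the sketch.

    import numpy as np

    def reconstruct_coefficients(A, D, b, lam, n_iter=300):
        """Proximal-gradient sketch of the reconstruction stage:
            min_{alpha >= 0}  0.5*||A @ (D @ alpha) - b||^2 + lam*||alpha||_1,
        where A is a discretized forward tomography operator, D holds the
        learned dictionary atoms as columns, and b is the measured sinogram."""
        M = A @ D                                   # combined forward operator
        step = 1.0 / (np.linalg.norm(M, 2) ** 2)    # 1 / Lipschitz constant of the gradient
        alpha = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = M.T @ (M @ alpha - b)
            # Proximal step: soft-threshold at step*lam, then clip at zero.
            alpha = np.maximum(alpha - step * grad - step * lam, 0.0)
        return D @ alpha                            # reconstructed image (vectorized)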

5.
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix has the strong Hall property and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.
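The structure-only computations mentioned here revolve around the (column) elimination tree. As a small illustration of this kind of symbolic work, the sketch below implements Liu's classic elimination-tree algorithm in Python/SciPy; note that, unlike the paper, it forms A'A explicitly when deriving the column elimination tree, and it does not compute the row and column counts themselves.

    import numpy as np
    import scipy.sparse as sp

    def elimination_tree(S):
        """Liu's algorithm: elimination tree of a sparse symmetric matrix S,
        using only its nonzero structure.  parent[j] = parent of node j (-1 for roots)."""
        S = sp.csc_matrix(S)
        n = S.shape[0]
        parent = -np.ones(n, dtype=int)
        ancestor = -np.ones(n, dtype=int)      # path-compressed "virtual" ancestors
        for j in range(n):
            for i in S.indices[S.indptr[j]:S.indptr[j + 1]]:
                i = int(i)
                if i >= j:
                    continue
                # Walk from row index i towards the root, compressing the path.
                while i != -1 and i < j:
                    nxt = ancestor[i]
                    ancestor[i] = j
                    if nxt == -1:
                        parent[i] = j
                    i = nxt
        return parent

    # Column elimination tree of a rectangular A = elimination tree of A'A
    # (formed explicitly here for brevity; the paper avoids that product).
    A = sp.random(30, 20, density=0.2, format="csc", random_state=0)
    print(elimination_tree(A.T @ A))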

6.
The framework of this paper is the parallelization of a plasticity algorithm that uses an implicit method and an incremental approach. More precisely, we focus on the specific parallel sparse linear algebra algorithms that constitute the most time-consuming steps in solving such an engineering application efficiently. First, we present a general algorithm that computes an efficient static schedule of block computations for parallel sparse linear factorization. The associated solver, based on a supernodal fan-in approach, is fully driven by this schedule. Second, we describe a scalable parallel assembly algorithm based on a distribution of elements induced by the preceding distribution of the blocks of the sparse matrix. We give an overview of these algorithms and present performance results on an IBM SP2 for a collection of grid and irregular problems.
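The abstract gives no algorithmic detail, so the toy list scheduler below only illustrates the general idea of computing a static schedule of block computations ahead of time: tasks with known cost estimates and precedence edges are assigned, in topological order, to the processor on which they can start earliest. It is a generic sketch with made-up names, not the supernodal fan-in scheduler of the paper.

    def static_block_schedule(costs, deps, n_procs):
        """Greedy static list scheduling.
        costs: dict task -> estimated cost; deps: dict task -> list of prerequisites."""
        finish = {}                      # task -> finish time
        proc_free = [0.0] * n_procs      # next free instant of each processor
        assignment = {}                  # task -> processor
        remaining = set(costs)
        while remaining:
            ready = [t for t in remaining if all(d in finish for d in deps.get(t, []))]
            for t in sorted(ready, key=lambda t: -costs[t]):   # longest blocks first
                earliest = max((finish[d] for d in deps.get(t, [])), default=0.0)
                p = min(range(n_procs), key=lambda q: max(proc_free[q], earliest))
                start = max(proc_free[p], earliest)
                finish[t] = start + costs[t]
                proc_free[p] = finish[t]
                assignment[t] = p
                remaining.discard(t)
        return assignment, finish

    # Tiny example: four blocks, block "d" depends on "b" and "c".
    sched, times = static_block_schedule(
        {"a": 3.0, "b": 2.0, "c": 2.0, "d": 4.0},
        {"d": ["b", "c"]},
        n_procs=2)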

7.
In this paper we propose four algorithms to compute truncated pivoted QR approximations to a sparse matrix. Three are based on the Gram–Schmidt algorithm and the other on Householder triangularization. All four algorithms leave the original matrix unchanged, and the only additional storage requirements are arrays to contain the factorization itself. Thus, the algorithms are particularly suited to determining low-rank approximations to a sparse matrix.
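Without reproducing any of the four algorithms, the sketch below shows the shape of a truncated pivoted QR built by classical Gram–Schmidt: at each step the column with the largest remaining residual norm is chosen, orthogonalized, and appended, while the input matrix is only read, never modified. Dense NumPy is used purely for brevity, and the function name is illustrative.

    import numpy as np

    def truncated_pivoted_qr(A, k):
        """Rank-k pivoted QR approximation A ~ Q @ R; A is left unchanged."""
        m, n = A.shape
        Q = np.zeros((m, k))
        R = np.zeros((k, n))
        piv = []
        res = np.array([np.linalg.norm(A[:, j]) ** 2 for j in range(n)])  # squared residual norms
        for i in range(k):
            j = int(np.argmax(res))                   # pivot: largest residual column
            piv.append(j)
            v = A[:, j] - Q[:, :i] @ R[:i, j]         # residual of the pivot column
            Q[:, i] = v / np.linalg.norm(v)
            R[i, :] = Q[:, i] @ A                     # new row of R against every column
            res = np.maximum(res - R[i, :] ** 2, 0.0)
            res[piv] = 0.0                            # never re-select a pivot
        return Q, R, piv

    A = np.random.randn(100, 8) @ np.random.randn(8, 80)      # (numerically) rank-8 matrix
    Q, R, piv = truncated_pivoted_qr(A, k=8)
    print(np.linalg.norm(A - Q @ R) / np.linalg.norm(A))      # ~ machine precision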

8.
We consider a class of semi-supervised distance metric learning problems. Because the size and complexity of sample sets (databases) have grown dramatically, the sparsity of the learned distance metric matrix must be taken into account when formulating the metric learning problem. We therefore add a sparsity constraint on the learned matrix to an existing distance metric learning model. To make the model easier to solve, the sparsity constraint is expressed as a Frobenius-norm constraint, which is then moved into the objective function by a penalty method, so that the sparsity-constrained model becomes an unconstrained optimization problem. To solve the resulting problem, we propose an accelerated projected gradient algorithm over the set of positive definite matrices, which overcomes the difficulty that linear combinations cannot be taken directly within this set, and we analyze its convergence. Finally, numerical experiments on classification problems from the UCI repository demonstrate the sparsity of the learned matrix and the effectiveness of the accelerated projected gradient algorithm.
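The two key ingredients of the proposed method are a Nesterov-type extrapolation step and a projection back onto the cone of symmetric positive (semi)definite matrices via eigenvalue clipping. The sketch below shows that generic accelerated projected gradient template with a user-supplied gradient; the actual metric-learning objective and penalty weights from the paper are not reproduced, and the toy usage at the end is an assumption purely for illustration.

    import numpy as np

    def project_psd(M):
        """Project a symmetric matrix onto the positive semidefinite cone."""
        M = 0.5 * (M + M.T)
        w, V = np.linalg.eigh(M)
        return (V * np.maximum(w, 0.0)) @ V.T

    def accelerated_projected_gradient(grad, M0, step, n_iter=200):
        """Nesterov-accelerated projected gradient over the PSD cone:
        extrapolate, take a gradient step, project."""
        M, M_prev, t = M0.copy(), M0.copy(), 1.0
        for _ in range(n_iter):
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            Y = M + ((t - 1.0) / t_next) * (M - M_prev)   # extrapolation step
            M_prev = M
            M = project_psd(Y - step * grad(Y))
            t = t_next
        return M

    # Toy usage: pull the metric matrix toward a target T under the PSD constraint.
    T = np.diag([1.0, -0.5, 2.0])
    M_hat = accelerated_projected_gradient(lambda M: 2.0 * (M - T), np.eye(3), step=0.4)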

9.
Based on the alternating direction method of multipliers (ADMM), we incrementally develop three numerical algorithms for solving optimal control problems constrained by random Helmholtz equations. First, we apply the standard Monte Carlo technique and the finite element method for the random and spatial discretization, respectively, and then use ADMM to solve the resulting system. Next, we propose a second algorithm that combines the multi-modes expansion, the Monte Carlo technique, the finite element method, and ADMM. In the third algorithm, we preprocess certain quantities before the ADMM iteration, so that almost no random variables appear in the inner iteration; this algorithm is the most efficient of the three and is easy to implement. Error estimates are established for all three algorithms, and numerical experiments verify their efficiency.
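The abstract stays at the level of the overall pipeline, so the following finite-dimensional stand-in only illustrates the ADMM layer: after Monte Carlo sampling and spatial discretization, each sample i is assumed to yield a deterministic control-to-state map S_i, the sample-averaged tracking objective is minimized over a box-constrained control, and ADMM splits off the constraint. All operators and parameters here are assumptions; the actual algorithms work with FEM-discretized random Helmholtz solves.

    import numpy as np

    def admm_sampled_control(S_list, y_d, alpha, lo, hi, rho=1.0, n_iter=100):
        """ADMM for  min_u (1/2N) * sum_i ||S_i u - y_d||^2 + (alpha/2)*||u||^2
                     s.t. lo <= u <= hi,
        using the splitting u = v (box constraint handled in the v-update)."""
        N, n = len(S_list), S_list[0].shape[1]
        # The u-update solves a fixed linear system; assemble it once.
        H = sum(S.T @ S for S in S_list) / N + (alpha + rho) * np.eye(n)
        g = sum(S.T @ y_d for S in S_list) / N
        u, v, lam = np.zeros(n), np.zeros(n), np.zeros(n)
        for _ in range(n_iter):
            u = np.linalg.solve(H, g + rho * (v - lam))
            v = np.clip(u + lam, lo, hi)      # projection onto the box constraint
            lam = lam + u - v                 # scaled dual update
        return v

    rng = np.random.default_rng(0)
    S_list = [rng.standard_normal((40, 25)) for _ in range(8)]   # 8 Monte Carlo "samples"
    u_opt = admm_sampled_control(S_list, np.ones(40), alpha=1e-2, lo=0.0, hi=0.5)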

10.
We investigate the problem of robust matrix completion in which a fraction of the observed entries is corrupted by sparse outlier noise. We propose an algorithmic framework, based on ADMM, for a nonconvex optimization problem whose objective function consists of an $\ell_1$-norm data fidelity term and a rank constraint. To reduce the computational cost per iteration, two inexact schemes are developed to replace the most time-consuming step in the generic ADMM algorithm. The resulting algorithms markedly outperform existing solvers for robust matrix completion with outlier noise. When the noise is severe and the underlying matrix is ill-conditioned, the proposed algorithms are faster and give more accurate solutions than state-of-the-art robust matrix completion approaches.
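For reference, here is a generic (exact, hence expensive) ADMM sketch for the model problem described above: an l1 data-fidelity term on the observed entries plus an explicit rank constraint handled through a truncated SVD. The truncated SVD in the X-update is precisely the kind of costly step that the paper's two inexact schemes are designed to replace; everything below is an illustrative baseline, not the authors' algorithm.

    import numpy as np

    def soft(v, t):
        """Elementwise soft-thresholding operator."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def robust_completion_admm(M_obs, mask, r, rho=1.0, n_iter=200):
        """ADMM for  min_{rank(X) <= r} || P_Omega(X - M_obs) ||_1
        with the splitting X = Z (scaled dual variable U)."""
        Z = np.where(mask, M_obs, 0.0)
        U = np.zeros_like(Z)
        for _ in range(n_iter):
            # X-update: project Z - U onto the set of rank-r matrices (truncated SVD).
            W, s, Vt = np.linalg.svd(Z - U, full_matrices=False)
            X = (W[:, :r] * s[:r]) @ Vt[:r, :]
            # Z-update: closed form, entrywise; observed entries are soft-thresholded.
            T = X + U
            Z = np.where(mask, M_obs + soft(T - M_obs, 1.0 / rho), T)
            U = U + X - Z
        return X

    # Toy usage: rank-2 ground truth, 60% of the entries observed, a few gross outliers.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
    mask = rng.random(M.shape) < 0.6
    M_obs = M + 8.0 * (rng.random(M.shape) < 0.05) * rng.standard_normal(M.shape)
    X_hat = robust_completion_admm(M_obs, mask, r=2)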
