Similar Literature
20 similar documents found (search time: 546 ms)
1.
The alternating direction method of multipliers (ADMM) is a widely used method for solving many convex minimization models arising in signal and image processing. In this paper, we propose an inertial ADMM for solving a two-block separable convex minimization problem with linear equality constraints. This algorithm is obtained by applying the inertial Douglas-Rachford splitting algorithm to the corresponding dual of the primal problem. We analyze the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Furthermore, we apply the proposed algorithm to the robust principal component analysis problem and compare it with other state-of-the-art algorithms. Numerical results demonstrate the advantage of the proposed algorithm.
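To make the two-block structure concrete, the following is a minimal sketch of a plain (non-inertial) two-block ADMM for the robust PCA model min ||L||_* + λ||S||_1 subject to L + S = M; the penalty parameter rho, the default λ = 1/sqrt(max(m, n)), and the fixed iteration count are illustrative assumptions, not the paper's inertial scheme.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal map of tau * nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # Entrywise soft-thresholding: proximal map of tau * l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def rpca_admm(M, lam=None, rho=1.0, iters=200):
    """Two-block ADMM for  min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / rho, 1.0 / rho)      # low-rank subproblem
        S = soft(M - L + Y / rho, lam / rho)     # sparse subproblem
        Y = Y + rho * (M - L - S)                # multiplier update
    return L, S
```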

2.
In recent years, the alternating direction method of multipliers (ADMM) and its variants have become popular through their extensive use in image processing and statistical learning. One variant, symmetric ADMM, which updates the Lagrange multiplier twice in each iteration, is faster whenever it converges. In this paper, combining symmetric ADMM with Nesterov's acceleration strategy, we propose an accelerated symmetric ADMM and prove its $\mathcal{O}(\frac{1}{k^2})$ convergence rate under a strong convexity condition. For the general case, an accelerated method with a restart rule is proposed. Preliminary numerical experiments show the efficiency of our algorithms.
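To illustrate the double multiplier update, here is a minimal sketch of symmetric ADMM applied to the lasso problem min 0.5||Ax − b||^2 + λ||z||_1 subject to x − z = 0, written in scaled dual form; the relaxation factors r and s, the penalty rho, and the lasso example itself are illustrative assumptions, and the paper's Nesterov acceleration and restart rule are not shown.

```python
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0)

def lasso_symmetric_admm(A, b, lam, rho=1.0, r=0.9, s=0.9, iters=500):
    """Symmetric ADMM sketch: the scaled multiplier u is updated twice per
    iteration, once after the x-subproblem and once after the z-subproblem."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))              # x-subproblem (ridge solve)
        u = u + r * (x - z)                        # first multiplier update
        z = soft(x + u, lam / rho)                 # z-subproblem (soft-threshold)
        u = u + s * (x - z)                        # second multiplier update
    return x
```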

3.
郭雄伟  王川龙 《计算数学》2022,44(4):534-544
This paper proposes an accelerated stochastic proximal gradient algorithm for the low-rank tensor completion problem. The tensor completion model can be relaxed to an unconstrained optimization problem whose objective is an average of component functions; during the iterations, one function in this average is selected at random for the variable update, which effectively reduces the substantial computational cost of tensor unfolding, matrix folding, and singular value decomposition. We prove that the algorithm converges at a rate of $O(1/k^{2})$. Finally, experiments on both randomly generated and real tensor completion problems show that the new algorithm outperforms three existing algorithms in CPU time.
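A generic sketch of the accelerated stochastic proximal gradient pattern described above: one randomly chosen component gradient per iteration, followed by a proximal step and a Nesterov-style extrapolation. The function names, step size, and FISTA-type momentum sequence are illustrative assumptions and not the paper's exact update rule.

```python
import numpy as np

def accel_stochastic_prox_grad(grads, prox, x0, step, iters=1000, seed=0):
    """grads: list of callables, each returning the gradient of one component
    function; prox: callable prox(v, step) for the nonsmooth regularizer."""
    rng = np.random.default_rng(seed)
    x = x0.copy(); y = x0.copy(); t = 1.0
    for _ in range(iters):
        i = rng.integers(len(grads))                     # sample one component
        x_new = prox(y - step * grads[i](y), step)       # stochastic prox-gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```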

4.
Structure-enforced matrix factorization (SeMF) represents a large class of mathematical models appearing in various forms of principal component analysis, sparse coding, dictionary learning and other machine learning techniques useful in many applications including neuroscience and signal processing. In this paper, we present a unified algorithm framework, based on the classic alternating direction method of multipliers (ADMM), for solving a wide range of SeMF problems whose constraint sets permit low-complexity projections. We propose a strategy to adaptively adjust the penalty parameters, which is key to achieving good performance with ADMM. We conduct extensive numerical experiments to compare the proposed algorithm with a number of state-of-the-art special-purpose algorithms on test problems including dictionary learning for sparse representation and sparse nonnegative matrix factorization. Results show that our unified SeMF algorithm can solve different types of factorization problems as reliably and as efficiently as special-purpose algorithms. In particular, our SeMF algorithm provides the ability to explicitly enforce various combinatorial sparsity patterns, a capability that, to our knowledge, has not been considered in existing approaches.

5.
In this paper we introduce COV, a novel information retrieval (IR) algorithm for massive databases based on vector space modeling and spectral analysis of the covariance matrix of the document vectors, which reduces the scale of the problem. Since the dimension of the covariance matrix depends on the attribute space and is independent of the number of documents, COV can be applied to databases that are too massive for methods based on the singular value decomposition of the document-attribute matrix, such as latent semantic indexing (LSI). In addition to improved scalability, theoretical considerations indicate that results from our algorithm tend to be more accurate than those from LSI, particularly in detecting subtle differences in document vectors. We demonstrate the power and accuracy of COV through an important topic in data mining, known as outlier cluster detection. We propose two new algorithms for detecting major and outlier clusters in databases—the first is based on LSI, and the second on COV. Our implementation studies indicate that our cluster detection algorithms outperform the basic LSI and COV algorithms in detecting outlier clusters.
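The core dimension-reduction step described above can be sketched as follows: form the attribute-space covariance of the document vectors, eigendecompose it, and project each document onto the leading eigenvectors. This is a PCA-like illustration of the idea, not the paper's full retrieval pipeline, and the function name and the choice of k are assumptions.

```python
import numpy as np

def cov_reduce(D, k):
    """D: documents x attributes matrix; returns k-dimensional document vectors."""
    mu = D.mean(axis=0)
    C = np.cov(D, rowvar=False)   # covariance over attributes; size independent of #documents
    evals, V = np.linalg.eigh(C)  # eigenvalues in ascending order
    Vk = V[:, -k:]                # top-k eigenvectors of the covariance matrix
    return (D - mu) @ Vk          # reduced representation of each document
```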

6.
This article presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool. It has been shown that, for estimating functions whose derivatives are of bounded variation, trend filtering achieves the minimax optimal error rate, while other popular methods like smoothing splines and kernels do not. Standing in the way of a more widespread practical adoption, however, is a lack of scalable and numerically stable algorithms for fitting trend filtering estimates. This article presents a highly efficient, specialized alternating direction method of multipliers (ADMM) routine for trend filtering. Our algorithm is competitive with the specialized interior point methods that are currently in use, and yet is far more numerically robust. Furthermore, the proposed ADMM implementation is very simple, and, importantly, it is flexible enough to extend to many interesting related problems, such as sparse trend filtering and isotonic trend filtering. Software for our method is freely available, in both the C and R languages.
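For concreteness, the following is a textbook ADMM splitting for linear (order-1) trend filtering, min 0.5||y − x||^2 + λ||Dx||_1 with D the second-difference operator and z = Dx; this is a plain illustration of the splitting rather than the specialized routine developed in the article, and the penalty rho and iteration count are assumptions.

```python
import numpy as np

def second_diff_matrix(n):
    # D in R^{(n-2) x n}: row i implements x[i] - 2*x[i+1] + x[i+2]
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return D

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0)

def trend_filter_admm(y, lam, rho=1.0, iters=500):
    n = len(y)
    D = second_diff_matrix(n)
    Q = np.linalg.inv(np.eye(n) + rho * D.T @ D)   # factor once, reuse every iteration
    x = y.copy(); z = D @ x; u = np.zeros(n - 2)
    for _ in range(iters):
        x = Q @ (y + rho * D.T @ (z - u))          # x-subproblem (linear solve)
        z = soft(D @ x + u, lam / rho)             # z-subproblem (soft-threshold)
        u = u + D @ x - z                          # scaled multiplier update
    return x
```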

7.
In this article, we present a fast and stable algorithm for solving a class of optimization problems that arise in many statistical estimation procedures, such as sparse fused lasso over a graph, convex clustering, and trend filtering, among others. We propose a so-called augmented alternating direction method of multipliers (ADMM) algorithm to solve this class of problems. Compared to a standard ADMM algorithm, our proposal significantly reduces the computational cost at each iteration while maintaining roughly the same overall convergence speed. We also consider a new varying penalty scheme for the ADMM algorithm, which could further accelerate the convergence, especially when solving a sequence of problems with tuning parameters of different scales. Extensive numerical experiments on the sparse fused lasso problem show that the proposed algorithm is more efficient than the standard ADMM and two other existing state-of-the-art specialized algorithms. Finally, we discuss a possible extension and some interesting connections to two well-known algorithms. Supplementary materials for the article are available online.
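As one concrete example of a varying penalty scheme (not necessarily the scheme proposed in the article), the residual-balancing rule of Boyd et al. adjusts rho according to the ratio of the primal and dual residuals; the thresholds mu and tau below are the conventional illustrative choices. In the scaled formulation the dual variable must be rescaled whenever rho changes, as the usage comment indicates.

```python
def update_penalty(rho, r_primal, s_dual, mu=10.0, tau=2.0):
    """Residual-balancing penalty update for ADMM."""
    if r_primal > mu * s_dual:
        return rho * tau    # primal residual dominates: increase the penalty
    if s_dual > mu * r_primal:
        return rho / tau    # dual residual dominates: decrease the penalty
    return rho              # residuals are balanced: keep the penalty

# Usage inside an ADMM loop (scaled dual u):
#   rho_new = update_penalty(rho, np.linalg.norm(r), np.linalg.norm(s))
#   u *= rho / rho_new      # rescale the scaled dual variable
#   rho = rho_new
```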

8.
We consider the problem of restoring images corrupted by Poisson noise. Under the framework of the maximum a posteriori estimator, the problem can be converted into a minimization problem where the objective function is composed of a Kullback-Leibler (KL) divergence term for the Poisson noise and a total variation (TV) regularization term. Due to the logarithm function in the KL-divergence term, the non-differentiability of the TV term and the positivity constraint on the images, it is not easy to design stable and efficient algorithms for the problem. Recently, many researchers proposed to solve the problem by the alternating direction method of multipliers (ADMM). Since the approach introduces some auxiliary variables and requires the solution of some linear systems, the iterative procedure can be complicated. Here we formulate the problem as two new constrained minimax problems and solve them by Chambolle-Pock's first-order primal-dual approach. The convergence of our approach is guaranteed by their theory. Compared with ADMM approaches, our approach requires about half of the auxiliary variables and is matrix-inversion free. Numerical results show that our proposed algorithms are efficient and outperform the ADMM approach.
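A generic sketch of the Chambolle-Pock primal-dual iteration for min_x F(Kx) + G(x); it is matrix-inversion free because only applications of K, its adjoint, and the two proximal maps are needed. The operator and proximal callables, the step sizes tau and sigma, and the extrapolation parameter theta are supplied by the caller; the specific KL/TV proximal maps used in the paper are not spelled out here.

```python
import numpy as np

def chambolle_pock(K, Kt, prox_Fstar, prox_G, x0, tau, sigma, theta=1.0, iters=300):
    """K, Kt: callables applying the linear operator and its adjoint;
    prox_Fstar, prox_G: proximal maps of F* and G, called as prox(v, step)."""
    x = x0.copy(); x_bar = x0.copy(); y = np.zeros_like(K(x0))
    for _ in range(iters):
        y = prox_Fstar(y + sigma * K(x_bar), sigma)   # dual ascent step
        x_new = prox_G(x - tau * Kt(y), tau)          # primal descent step
        x_bar = x_new + theta * (x_new - x)           # extrapolation
        x = x_new
    return x
```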

9.
In this paper, we investigate truncated $\ell_2/\ell_{1-2}$ minimization and its associated alternating direction method of multipliers (ADMM) algorithm for recovering block sparse signals. Based on the block restricted isometry property (Block-RIP), a theoretical analysis is presented to guarantee the validity of the proposed method. Our theoretical results not only give a smaller error upper bound, but also improve the previous recovery condition of the truncated $\ell_{1-2}$ method for sparse signal recovery. In addition, the algorithm is compared with some state-of-the-art algorithms, and numerical experiments show excellent performance in recovering block sparse signals.

10.
Based on the alternating direction method of multipliers (ADMM), we develop three numerical algorithms incrementally for solving optimal control problems constrained by random Helmholtz equations. First, we apply the standard Monte Carlo technique and the finite element method for the random and spatial discretization, respectively, and then ADMM is used to solve the resulting system. Next, combining the multi-modes expansion, the Monte Carlo technique, the finite element method, and ADMM, we propose the second algorithm. In the third algorithm, we preprocess certain quantities before the ADMM iteration, so that almost no random variable appears in the inner iteration. This algorithm is the most efficient one and is easy to implement. The error estimates of these three algorithms are established. The numerical experiments verify the efficiency of our algorithms.

11.
There exist many data clustering algorithms, but they cannot adequately handle the number of clusters or cluster shapes. Their performance mainly depends on a choice of algorithm parameters. Our approach to data clustering and the corresponding algorithm do not require such a parameter choice; the method can be treated as a natural adaptation to the existing structure of distances between data points. The outlier factor introduced by the author specifies a degree of being an outlier for each data point. The outlier factor notion is based on the difference between the frequency distribution of interpoint distances in a given dataset and the corresponding distribution of uniformly distributed points. Data clusters can then be determined by maximizing the outlier factor function. The data points in the dataset are divided into clusters according to the attractor regions of local optima. An experimental evaluation of the proposed algorithm shows that the proposed method can identify complex cluster shapes. Key advantages of the approach are good clustering properties for datasets with a comparatively large amount of noise (additional data points), and the absence of critical parameters whose adequate choice determines the quality of the results.

12.
This paper introduces an alternating direction method of multipliers (ADMM) for finding solutions to a class of Sylvester matrix equations AXB = E subject to a linear matrix inequality constraint CXD ≥ G. Preliminary convergence properties of ADMM are presented. Numerical experiments are performed to illustrate the feasibility and effectiveness of ADMM. In addition, some numerical comparisons with a recent algorithm are also given.

13.
We consider the problem of subspace clustering with data that is potentially corrupted by both dense noise and sparse gross errors. In particular, we study a recently proposed low rank subspace clustering approach based on a nonconvex modeling formulation. This formulation includes a nonconvex spectral function in the objective function that makes the optimization task challenging, e.g., it is unknown whether the alternating direction method of multipliers (ADMM) framework proposed to solve the nonconvex model formulation is provably convergent. In this paper, we establish that the spectral function is differentiable and give a formula for computing the derivative. Moreover, we show that the derivative of the spectral function is Lipschitz continuous and provide an explicit value for the Lipschitz constant. These facts are then used to provide a lower bound for how the penalty parameter in the ADMM method should be chosen. As long as the penalty parameter is chosen according to this bound, we show that the ADMM algorithm computes iterates that have a limit point satisfying first-order optimality conditions. We also present a second strategy for solving the nonconvex problem that is based on proximal gradient calculations. The convergence and performance of the algorithms are verified through experiments on real data from face and digit clustering and motion segmentation.

14.
In this article, we investigate the privacy issues that arise from a new frame-based kernel analysis approach to reconstruct from frame coefficient erasures. We show that while an erasure recovery matrix is needed in addition to a decoding frame for a receiver to recover the erasures, the erasure recovery matrix can be designed in such a way that it protects the encoding frame. The set of such erasure recovery matrices is shown to be an open and dense subset of a certain matrix space. We present algorithms to construct concrete examples of encoding frame and erasure recovery matrix pairs for which the erasure reconstruction process is robust to additive channel noise. Using the Restricted Isometry Property, we also provide quantitative bounds on the amplification of sparse additive channel noise. Numerical experiments are presented on the amplification of additive normally distributed random channel noise. In both cases, the amplification factors are demonstrated to be quite small.

15.
Recovering a low-rank and a sparse matrix from a given matrix arises in many applications, such as image processing, video background subtraction, and so on. The 3-block alternating direction method of multipliers (ADMM) has been applied successfully to solve convex problems with 3-block variables. However, the existing sufficient conditions that guarantee the convergence of the 3-block ADMM usually require the penalty parameter $\gamma$ to satisfy a certain bound, which may affect the performance of solving large-scale problems in practice. In this paper, we propose the 3-block ADMM to recover a low-rank and a sparse matrix from noisy observations. In theory, we prove that the 3-block ADMM is convergent when the penalty parameters satisfy a certain condition and that the objective function value sequences generated by the 3-block ADMM converge to the optimal value. Numerical experiments verify that the proposed method can achieve higher performance than existing methods in terms of both efficiency and accuracy.
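A minimal sketch of a 3-block ADMM for the noisy low-rank plus sparse decomposition min ||L||_* + λ||S||_1 + (1/(2μ))||Z||_F^2 subject to L + S + Z = M, with one closed-form update per block; the specific model, the penalty rho, and the parameter names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def three_block_admm(M, lam, mu, rho=1.0, iters=200):
    """Scaled-dual 3-block ADMM: L (low-rank), S (sparse), Z (dense noise)."""
    L = np.zeros_like(M); S = np.zeros_like(M); Z = np.zeros_like(M); U = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S - Z - U, 1.0 / rho)                    # nuclear-norm block
        S = soft(M - L - Z - U, lam / rho)                   # l1 block
        Z = (rho * mu / (1.0 + rho * mu)) * (M - L - S - U)  # quadratic noise block
        U = U + L + S + Z - M                                # multiplier update
    return L, S, Z
```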

16.
In this article, we study robust tensor completion by using the transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD. The main motivation is that a lower tubal-rank tensor can be obtained by using other unitary transform matrices than by using the discrete Fourier transform matrix. This can be more effective for robust tensor completion. Experimental results for hyperspectral, video and face datasets show that the recovery performance of the robust tensor completion problem by using the transformed tensor SVD is better in peak signal-to-noise ratio than that by using the Fourier transform and other robust tensor completion methods.
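The key operation, singular value thresholding under a general unitary transform along the third mode, can be sketched as follows; the default transform is the normalized (unitary) DFT matrix, and any other unitary matrix can be passed in, which is exactly the flexibility the transformed tensor SVD exploits. The function name and its use inside a completion solver are illustrative assumptions.

```python
import numpy as np

def transformed_tsvt(X, tau, Phi=None):
    """Tensor singular value thresholding of X (n1 x n2 x n3) under a unitary
    transform Phi applied along mode 3; Phi defaults to the unitary DFT matrix."""
    n1, n2, n3 = X.shape
    if Phi is None:
        Phi = np.fft.fft(np.eye(n3)) / np.sqrt(n3)     # unitary DFT matrix
    Xh = np.einsum('kj,abj->abk', Phi, X)              # transform the mode-3 tubes
    Yh = np.empty_like(Xh)
    for k in range(n3):                                # slice-wise singular value shrinkage
        U, s, Vt = np.linalg.svd(Xh[:, :, k], full_matrices=False)
        Yh[:, :, k] = (U * np.maximum(s - tau, 0)) @ Vt
    Phi_inv = Phi.conj().T                             # inverse of a unitary transform
    return np.real(np.einsum('kj,abj->abk', Phi_inv, Yh))
```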

17.
Fixed point and Bregman iterative methods for matrix rank minimization
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of $10^{-5}$ in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
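The fixed-point iteration at the core of this approach alternates a gradient step on the data-fit term with singular value shrinkage. Below is a minimal sketch for nuclear-norm regularized matrix completion; the continuation (homotopy) in mu and the approximate SVD that give FPCA its speed are omitted, and the parameter names and defaults are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def fixed_point_completion(M, mask, mu, tau=1.0, iters=300):
    """Fixed-point iteration for  min mu*||X||_* + 0.5*||P_Omega(X - M)||_F^2,
    where mask is the 0/1 indicator of the observed entries Omega and M holds
    the observed values (entries outside Omega are ignored via the mask)."""
    X = np.zeros_like(M)
    for _ in range(iters):
        G = mask * (X - M)               # gradient of the data-fit term on Omega
        X = svt(X - tau * G, tau * mu)   # shrinkage (singular value thresholding) step
    return X
```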

18.
Because of their convincing performance, there is a growing interest in using evolutionary algorithms for reinforcement learning. We propose learning of neural network policies by the covariance matrix adaptation evolution strategy (CMA-ES), a randomized variable-metric search algorithm for continuous optimization. We argue that this approach, which we refer to as CMA Neuroevolution Strategy (CMA-NeuroES), is ideally suited for reinforcement learning, in particular because it is based on ranking policies (and therefore robust against noise), efficiently detects correlations between parameters, and infers a search direction from scalar reinforcement signals. We evaluate the CMA-NeuroES on five different (Markovian and non-Markovian) variants of the common pole balancing problem. The results are compared to those described in a recent study covering several RL algorithms, and the CMA-NeuroES shows the overall best performance.

19.
Clustering is a fundamental problem in many scientific applications. Standard methods such as k-means, Gaussian mixture models, and hierarchical clustering, however, are beset by local minima, which are sometimes drastically suboptimal. Recently introduced convex relaxations of k-means and hierarchical clustering shrink cluster centroids toward one another and ensure a unique global minimizer. In this work, we present two splitting methods for solving the convex clustering problem. The first is an instance of the alternating direction method of multipliers (ADMM); the second is an instance of the alternating minimization algorithm (AMA). In contrast to previously considered algorithms, our ADMM and AMA formulations provide simple and unified frameworks for solving the convex clustering problem under the previously studied norms and open the door to potentially novel norms. We demonstrate the performance of our algorithm on both simulated and real data examples. While the differences between the two algorithms appear to be minor on the surface, complexity analysis and numerical experiments show AMA to be significantly more efficient. This article has supplementary materials available online.
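To show what the ADMM splitting for convex clustering looks like, here is a minimal sketch for the model min_U 0.5 Σ_i ||x_i − u_i||^2 + γ Σ_{i<j} ||u_i − u_j||_2 with unit weights over all pairs and the Euclidean norm penalty; the uniform weights, the penalty rho, and the function names are illustrative assumptions, and the paper's AMA variant and more general weights and norms are not shown.

```python
import numpy as np
from itertools import combinations

def group_soft(V, tau):
    # Row-wise block soft-thresholding: prox of tau * sum of row 2-norms
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * V

def convex_clustering_admm(X, gamma, rho=1.0, iters=300):
    """X: n x d data matrix; returns the n x d matrix of cluster centroids U."""
    n, d = X.shape
    edges = list(combinations(range(n), 2))
    D = np.zeros((len(edges), n))
    for l, (i, j) in enumerate(edges):
        D[l, i], D[l, j] = 1.0, -1.0                 # pairwise difference operator
    Q = np.linalg.inv(np.eye(n) + rho * D.T @ D)     # factor once, reuse
    U = X.copy(); V = D @ U; Z = np.zeros_like(V)
    for _ in range(iters):
        U = Q @ (X + rho * D.T @ (V - Z))            # centroid update (linear solve)
        V = group_soft(D @ U + Z, gamma / rho)       # edge-difference update (group shrinkage)
        Z = Z + D @ U - V                            # scaled multiplier update
    return U                                         # rows that coincide share a cluster
```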

20.
Summary. We present generalizations of the nonsymmetric Levinson and Schur algorithms for non-Hermitian Toeplitz matrices with some singular or ill-conditioned leading principal submatrices. The underlying recurrences allow us to go from any pair of successive well-conditioned leading principal submatrices to any such pair of larger order. If the look-ahead step size between these pairs is bounded, our generalized Levinson and Schur recurrences require $\mathcal{O}(n^2)$ operations, and the Schur recurrences can be combined with recursive doubling so that an $\mathcal{O}(n\log^2 n)$ algorithm results. The overhead (in operations and storage) of look-ahead steps is very small. There are various options for applying these algorithms to solving linear systems with a Toeplitz matrix. Received July 26, 1993
