Similar Literature
20 similar documents found.
1.
The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second-order cone constraints. Such a problem often arises as the convex relaxation of a rank minimization problem with noisy data, and appears in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in the primal, dual and primal-dual forms for solving the nuclear norm minimization problem with linear equality and second-order cone constraints. We design efficient implementations of these algorithms and present comprehensive convergence results. In particular, we investigate the performance of our proposed algorithms when the inner subproblems are approximately solved by the gradient projection method or the accelerated proximal gradient method. Our numerical results on randomly generated and real matrix completion problems show that our algorithms perform favorably in comparison to several recently proposed state-of-the-art algorithms. Interestingly, our proposed algorithms are connected with other algorithms that have been studied in the literature.
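For orientation, here is a sketch of the problem class and of the proximal point step described above, in notation of my own rather than the paper's:

```latex
% Nuclear norm minimization with linear equality and second order cone constraints (notation assumed):
\min_{X \in \mathbb{R}^{m \times n}} \; \|X\|_*
\quad \text{s.t.} \quad \mathcal{A}(X) = b, \qquad \mathcal{B}(X) - c \in \mathcal{K}_{\mathrm{soc}},
% where \|X\|_* is the sum of the singular values of X. A (primal) proximal point method
% generates X^{k+1} as an approximate minimizer of \|X\|_* + \tfrac{1}{2\lambda_k}\|X - X^k\|_F^2
% over the same constraint set, with each inner subproblem solved inexactly, e.g. by
% the gradient projection or accelerated proximal gradient method mentioned above.
```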

2.
Fixed point and Bregman iterative methods for matrix rank minimization   (Total citations: 5, self-citations: 0, citations by others: 5)
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^{-5} in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
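A minimal sketch (my own illustration, not the released FPCA code; function names are hypothetical) of the fixed-point shrinkage iteration that underlies this family of methods, applied to matrix completion:

```python
import numpy as np

def svt(Y, tau):
    """Singular value soft-thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def fixed_point_completion(M_obs, mask, mu=1.0, tau=1.0, iters=200):
    """Iterate X <- shrink(X - tau * grad), with grad = P_Omega(X - M).
    This is only the basic fixed-point step; the paper adds continuation in mu
    (a homotopy) and an approximate SVD for speed."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)          # gradient of 0.5*||P_Omega(X - M)||_F^2
        X = svt(X - tau * grad, tau * mu)  # shrinkage (proximal) step
    return X
```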

3.
The aim of the nuclear norm minimization problem is to find a matrix that minimizes the sum of its singular values and satisfies some constraints simultaneously. Such a problem has received much attention largely because it is closely related to the affine rank minimization problem, which appears in many control applications including controller design, realization theory, and model reduction. In this paper, we first propose an exact alternating direction method for solving the nuclear norm minimization problem with linear equality constraints. At each iteration, the method involves a singular value thresholding step and linear matrix equations, both of which are solved exactly. Convergence of the proposed algorithm follows directly. To broaden the capacity for solving larger problems, we solve the subproblem approximately by an iterative method with the Barzilai–Borwein steplength. Some extensions to noisy problems and nuclear norm regularized least-squares problems are also discussed. Numerical experiments and comparisons with the state-of-the-art method FPCA show that the proposed method is effective and promising. Copyright © 2011 John Wiley & Sons, Ltd.
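For reference (standard background rather than text from the paper), the singular value thresholding step used in such alternating direction methods has the closed form:

```latex
% Closed form of the singular value thresholding (shrinkage) operator:
\operatorname{prox}_{\tau\|\cdot\|_*}(Y) \;=\; U \,\operatorname{diag}\!\big((\sigma_i - \tau)_+\big)\, V^{\top},
\qquad Y = U \operatorname{diag}(\sigma_i)\, V^{\top} \ \text{(SVD)}, \qquad (t)_+ = \max(t, 0).
```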

4.
We develop fixed-point algorithms for the approximation of structured matrices with rank penalties. In particular, we use these fixed-point algorithms to make approximations by sums of exponentials, i.e., frequency estimation. For the basic formulation of the fixed-point algorithm we show that it converges to the solution of a related minimization problem, namely the one obtained by replacing the original objective function with its convex envelope while keeping the structured matrix constraint unchanged. It often happens that this solution agrees with the solution to the original minimization problem, and we provide a simple criterion for when this is true. We also provide more general fixed-point algorithms that can be used to treat the problems of making weighted approximations by sums of exponentials given equally or unequally spaced sampling. We apply the method to the case of missing data, although the above-mentioned convergence results do not hold in this case. However, it turns out that the method often gives perfect reconstruction (up to machine precision) in such cases. We also discuss multidimensional extensions, and illustrate how the proposed algorithms can be used to recover sums of exponentials in several variables, even when samples are available only along a curve.
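As background (my own sketch, not the paper's code), frequency estimation by sums of exponentials typically works with a structured Hankel matrix built from the samples, whose rank equals the number of exponential terms in the noise-free case; this is the structured matrix constraint referred to above:

```python
import numpy as np

def hankel_from_samples(f, L):
    """Build the L x (N - L + 1) Hankel matrix H[i, j] = f[i + j] from samples f.
    For f[k] = sum_r c_r * exp(zeta_r * k) (noise-free), rank(H) equals the
    number of exponential terms, which is what the rank penalty targets."""
    N = len(f)
    return np.array([[f[i + j] for j in range(N - L + 1)] for i in range(L)])
```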

5.
The affine rank minimization problem is to minimize the rank of a matrix under linear constraints. It has many applications in various areas such as statistics, control, system identification and machine learning. Unlike the literature, which uses the nuclear norm or the general Schatten \(q\) \((0<q<1)\) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten 1/2 quasi-norm, which approximates the rank better than the nuclear norm but leads to a nonconvex, nonsmooth and non-Lipschitz optimization problem. Importantly, by exploiting the special structure of the objective function, we give a global necessary optimality condition for the \(S_{1/2}\) regularization problem. This is very different from the local optimality conditions usually used for the general \(S_q\) regularization problems. Explicitly, the global necessary optimality condition for the \(S_{1/2}\) regularization problem is a fixed point inclusion associated with the singular value half thresholding operator. Naturally, we propose a fixed point iterative scheme for the problem and provide a convergence analysis of this iteration. By discussing the location and setting of the optimal regularization parameter, and by using an approximate singular value decomposition procedure, we obtain a very efficient algorithm, the half-norm fixed point algorithm with approximate SVD (HFPA algorithm), for the \(S_{1/2}\) regularization problem. Numerical experiments on randomly generated and real matrix completion problems are presented to demonstrate the effectiveness of the proposed algorithm.
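A sketch of the kind of fixed-point condition described above, in notation of my own rather than the paper's:

```latex
% Fixed-point form of the global necessary optimality condition described above
% (H_{\lambda\mu} denotes the singular value half thresholding operator; notation assumed):
X^{*} \;\in\; H_{\lambda\mu}\!\big( X^{*} - \mu\, \mathcal{A}^{*}(\mathcal{A}(X^{*}) - b) \big),
% which directly suggests the fixed point iteration
% X^{k+1} = H_{\lambda\mu}\big( X^{k} - \mu\, \mathcal{A}^{*}(\mathcal{A}(X^{k}) - b) \big).
```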

6.
We investigate an interpolation algorithm for computing outer inverses of a given polynomial matrix, based on the Leverrier–Faddeev method. This algorithm is a continuation of the finite algorithm for computing generalized inverses of a given polynomial matrix introduced in [11]. Also, a method for estimating the degrees of the polynomial matrices arising from the Leverrier–Faddeev algorithm is given as an improvement of the interpolation algorithm. Based on a similar idea, we introduce methods for computing the rank and index of a polynomial matrix. All algorithms are implemented in the symbolic programming language MATHEMATICA, and tested on several different classes of test examples.
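For orientation, here is a standard textbook form of the Leverrier–Faddeev recursion for a constant matrix (my own sketch, not the paper's polynomial-matrix generalization or its MATHEMATICA implementation):

```python
import numpy as np

def faddeev_leverrier(A):
    """Leverrier-Faddeev recursion: M_k = A M_{k-1} + c_{n-k+1} I, c_{n-k} = -tr(A M_k)/k.
    Returns the characteristic polynomial coefficients c (p(x) = sum_k c[k] x^k)
    and A^{-1} = -M_n / c_0, where c_0 = (-1)^n det(A)."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    c = np.zeros(n + 1)
    c[n] = 1.0
    for k in range(1, n + 1):
        M = A @ M + c[n - k + 1] * np.eye(n)
        c[n - k] = -np.trace(A @ M) / k
    A_inv = -M / c[0] if abs(c[0]) > 1e-12 else None
    return c, A_inv
```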

7.
Low Tucker rank tensor completion has wide applications in science and engineering. Many existing approaches handle the Tucker rank through the ranks of unfolding matrices. However, unfolding a tensor into a matrix destroys the data's original multi-way structure, resulting in loss of vital information and degraded performance. In this article, we establish a relationship between the Tucker ranks and the ranks of the factor matrices in the Tucker decomposition. Then, we reformulate the low Tucker rank tensor completion problem as a multilinear low rank matrix completion problem. For the reformulated problem, a symmetric block coordinate descent method is customized. For each matrix rank minimization subproblem, the classical truncated nuclear norm minimization is adopted. Furthermore, temporal characteristics of image and video data are introduced into the model, which benefits the performance of the method. Numerical simulations illustrate the efficiency of our proposed models and methods.
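For context (a standard definition, not code from the article), the mode-n unfolding referred to above rearranges a tensor into a matrix whose rank gives the corresponding component of the Tucker rank:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    rank(unfold(T, n)) is the n-th component of the Tucker rank of T."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Example: a 3 x 4 x 5 tensor has unfoldings of shape 3x20, 4x15 and 5x12.
T = np.random.rand(3, 4, 5)
print([unfold(T, m).shape for m in range(3)])
```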

8.
The matrix rank minimization problem arises in many engineering applications. As this problem is NP-hard, a nonconvex relaxation of matrix rank minimization, called Schatten-p quasi-norm minimization (0 < p < 1), has been developed to approximate the rank function closely. We study the performance of the projected gradient descent algorithm for solving the Schatten-p quasi-norm minimization (0 < p < 1) problem. Based on the matrix restricted isometry property (M-RIP), we give a convergence guarantee and error bound for this algorithm and show that the algorithm is robust to noise with an exponential convergence rate.
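For reference (standard background rather than text from the paper), the Schatten-p quasi-norm interpolates between the nuclear norm and the rank:

```latex
% Schatten-p quasi-norm of X with singular values \sigma_1 \ge \dots \ge \sigma_r > 0:
\|X\|_{S_p}^{p} \;=\; \sum_{i} \sigma_i^{p}(X), \qquad 0 < p < 1,
% with \|X\|_{S_1} = \|X\|_* (the nuclear norm) and \|X\|_{S_p}^{p} \to \operatorname{rank}(X) as p \to 0^{+},
% which is why a smaller p approximates the rank function more closely.
```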

9.
The numerical solution of a possibly inconsistent system of linear inequalities in the l1 sense is considered. The non-differentiable l1 norm minimization problem is approximated by a piecewise quadratic Huber smooth function. A continuation algorithm is designed to find an l1 solution of the inequality system. In the case where the linear inequality system is consistent, a solution is obtained by solving any smoothed problem. Otherwise, the algorithm is shown to terminate in a finite number of iterations. We also consider an alternative smoothing scheme which shares similar properties with the first one, but results in an improved computational performance of the continuation algorithm on inconsistent systems. Numerical experiments are conducted to test the efficiency of the algorithm.
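As background (the standard definition, not quoted from the paper), the piecewise quadratic Huber function used to smooth the l1 norm is:

```latex
% Huber smoothing of |t| with smoothing parameter \gamma > 0:
\phi_{\gamma}(t) \;=\;
\begin{cases}
 \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma,\\[1ex]
 |t| - \dfrac{\gamma}{2}, & |t| > \gamma,
\end{cases}
\qquad \phi_{\gamma}(t) \to |t| \ \text{as}\ \gamma \to 0^{+}.
```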

10.
In this paper, we study robust quaternion matrix completion and provide a rigorous analysis for provable estimation of a quaternion matrix from a random subset of its corrupted entries. In order to generalize the results from real matrix completion to quaternion matrix completion, we derive some new formulas to handle the noncommutativity of quaternions. We solve a convex optimization problem which minimizes the nuclear norm of the quaternion matrix, a convex surrogate for the quaternion matrix rank, plus the l1-norm of the sparse quaternion matrix entries. We show that, under incoherence conditions, a quaternion matrix can be recovered exactly with overwhelming probability, provided that its rank is sufficiently small and that the corrupted entries are sparsely located. The quaternion framework can be used to represent the red, green, and blue channels of color images. Results on recovering missing/noisy color image pixels, formulated as a robust quaternion matrix completion problem, show that the performance of the proposed approach is better than that of the compared methods, including image inpainting methods, a tensor-based completion method, and a quaternion completion method using semidefinite programming.
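To illustrate the noncommutativity the abstract refers to (a self-contained toy example, not code from the paper):

```python
# Hamilton product of quaternions q = (w, x, y, z); quaternion multiplication
# is noncommutative, which is why real matrix-completion arguments need new formulas here.
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j), qmul(j, i))   # (0, 0, 0, 1) vs (0, 0, 0, -1): i*j = k but j*i = -k
```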

11.
In this paper, we study the original Meyer model of cartoon and texture decomposition in image processing. The model, which is a minimization problem, contains an l1-based TV-norm and an l∞-based G-norm. The main idea of this paper is to use the dual formulation to represent both the TV-norm and the G-norm. The resulting minimization problem of the Meyer model can then be written as a minimax problem. A first-order primal-dual algorithm can be developed to compute the saddle point of the minimax problem. The convergence of the proposed algorithm is shown theoretically. Numerical results are presented to show that the original Meyer model decomposes cartoon and texture components better than the other tested methods.
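For orientation, a commonly stated form of Meyer's model, with notation assumed rather than taken from the paper:

```latex
% Cartoon + texture decomposition f = u + v in Meyer's model (notation assumed):
\min_{u + v = f} \; \int |\nabla u| \; + \; \alpha \, \|v\|_{G},
\qquad
\|v\|_{G} \;=\; \inf \big\{ \, \| \, |g| \, \|_{\infty} \; : \; v = \operatorname{div} g \, \big\},
% i.e. the TV term is an l1-type norm of the gradient and the G-norm is l-infinity based,
% which is what makes a dual (minimax) reformulation natural.
```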

12.
Fenghui Wang, Optimization, 2017, 66(3): 407-415
The split common fixed point problem is an inverse problem that consists in finding an element in a fixed-point set such that its image under a bounded linear operator belongs to another fixed-point set. In this paper, we propose a new algorithm for this problem that is completely different from the existing algorithms. Moreover, our algorithm does not need any prior information about the operator norm. Under standard assumptions, we establish a weak convergence theorem for the proposed algorithm and a strong convergence theorem for its variant.
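Stated compactly (notation mine), the problem described above is:

```latex
% Split common fixed point problem (notation assumed): given a bounded linear operator
% A : H_1 \to H_2 and mappings U : H_1 \to H_1, T : H_2 \to H_2 with fixed-point sets Fix(U), Fix(T),
\text{find } x^{*} \in \operatorname{Fix}(U) \ \text{ such that } \ A x^{*} \in \operatorname{Fix}(T).
```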

13.
This paper is concerned with a practical algorithm for solving low rank linear multiplicative programming problems and low rank linear fractional programming problems. The former is the minimization of a sum of products of two linear functions, while the latter is the minimization of a sum of linear fractional functions, in both cases over a polytope. Both of these problems are nonconvex minimization problems with many practical applications. We show that these problems can be solved in an efficient manner by adapting a branch and bound algorithm proposed by Androulakis–Maranas–Floudas for nonconvex problems containing products of two variables. Computational experiments show that this algorithm performs much better than other reported algorithms for this class of problems.
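In symbols (my notation, consistent with the description above rather than copied from the paper), the two low-rank problems are:

```latex
% Low rank linear multiplicative and linear fractional programs over a polytope P (notation assumed):
\min_{x \in P} \; \sum_{i=1}^{p} \big(c_i^{\top} x + \gamma_i\big)\big(d_i^{\top} x + \delta_i\big)
\qquad\text{and}\qquad
\min_{x \in P} \; \sum_{i=1}^{p} \frac{c_i^{\top} x + \gamma_i}{d_i^{\top} x + \delta_i},
% where the number of terms p (the "rank") is small, which is what the branch and bound scheme exploits.
```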

14.
The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization, which requires computing singular value decompositions, a task that becomes increasingly costly as matrix sizes and ranks increase. To improve the capacity for solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Extensive numerical experiments show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms. In addition, convergence of this nonlinear SOR algorithm to a stationary point is analyzed.
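A sketch of the kind of low-rank factorization model meant here (my notation; the paper's exact formulation may differ):

```latex
% Low-rank factorization model for matrix completion over the observed index set \Omega (notation assumed):
\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{r \times n},\; Z} \;
\tfrac{1}{2}\,\| U V - Z \|_F^2
\quad \text{s.t.} \quad Z_{ij} = M_{ij} \ \ \text{for } (i,j) \in \Omega,
% which avoids SVDs entirely: each block of variables can be updated by solving a linear
% least squares problem, and the nonlinear SOR scheme over-relaxes these updates.
```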

15.
This paper deals with the problem of recovering an unknown low-rank matrix from a sampling of its entries. For its solution, we consider a nonconvex approach based on the minimization of a nonconvex functional that is the sum of a convex fidelity term and a nonconvex, nonsmooth relaxation of the rank function. We show that, by a suitable choice of this nonconvex penalty, it is possible, under mild assumptions, to use the iterative forward–backward splitting method also in this matrix setting. Specifically, we propose the use of certain parameter-dependent nonconvex penalties that, for a good choice of the parameter value, allow us to solve a convex minimization problem in the backward step, and we exploit this result to prove the convergence of the iterative forward–backward splitting algorithm. Based on the theoretical results, we develop, for the solution of the matrix completion problem, the efficient iterative improved matrix completion forward–backward algorithm, which exhibits lower computing times and improved recovery performance when compared with the best state-of-the-art algorithms for matrix completion. Copyright © 2016 John Wiley & Sons, Ltd.
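For reference (standard form, notation assumed), the forward–backward splitting iteration applied to a smooth fidelity term f plus a (possibly nonconvex) penalty g reads:

```latex
% Forward-backward splitting for \min_X f(X) + g(X), with step size \gamma > 0 (notation assumed):
X^{k+1} \;=\; \operatorname{prox}_{\gamma g}\!\big( X^{k} - \gamma \nabla f(X^{k}) \big),
\qquad
\operatorname{prox}_{\gamma g}(Y) \;=\; \arg\min_{X} \; g(X) + \tfrac{1}{2\gamma}\|X - Y\|_F^2,
% and the penalties proposed above are chosen so that this backward (prox) step is itself a convex problem.
```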

16.
The split common fixed-point problem is an inverse problem that consists in finding an element in a fixed-point set such that its image under a linear transformation belongs to another fixed-point set. In this paper, we propose a new algorithm for the split common fixed-point problem that does not need any prior information about the operator norm. Under standard assumptions, we establish a weak convergence theorem for the proposed algorithm.

17.
This paper addresses the problem of finding an optimal correction of an inconsistent linear system, where only the nonzero coefficients of the constraint matrix are allowed to be perturbed in order to reconstruct a consistent system. Using the Frobenius norm as a measure of the distance to feasibility, a nonconvex minimization problem is formulated, whose objective function is a sum of fractional functions. A branch-and-bound algorithm for solving this nonconvex program is proposed, based on suitably overestimating the denominator function for computing lower bounds. Computational experience is presented to demonstrate the efficacy of this approach.

18.
Motivated by the Cayley–Hamilton theorem, a novel adaptive procedure, called the Power Sparse Approximate Inverse (PSAI) procedure, is proposed that uses a different adaptive sparsity-pattern selection approach to construct a right preconditioner M for the large sparse linear system Ax=b. It determines the sparsity pattern of M dynamically and computes the n columns of M independently, each optimal in the Frobenius norm minimization subject to the sparsity pattern of M. The PSAI procedure needs a matrix–vector product at each step and updates the solution of a small least squares problem cheaply. To control the sparsity of M and develop a practical PSAI algorithm, two dropping strategies are proposed. The PSAI algorithm can capture an effective approximate sparsity pattern of A^{-1} and compute a good sparse approximate inverse M efficiently. Numerical experiments are reported to verify the effectiveness of the PSAI algorithm. Numerical comparisons are made between the PSAI algorithm and the adaptive SPAI algorithm proposed by Grote and Huckle, as well as between the PSAI algorithm and three static Sparse Approximate Inverse (SAI) algorithms. The results indicate that the PSAI algorithm is at least comparable to, and can be much more effective than, the adaptive SPAI algorithm, and that it often outperforms the static SAI algorithms considerably and is more robust and practical than the static ones for general problems. Copyright © 2008 John Wiley & Sons, Ltd.
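A minimal sketch (my own illustration, not the PSAI code; function names are hypothetical) of the column-wise Frobenius norm minimization that sparse approximate inverse procedures of this kind solve for a fixed sparsity pattern:

```python
import numpy as np

def spai_column(A, j, pattern):
    """Solve min ||A m_j - e_j||_2 over vectors m_j supported on `pattern`
    (the j-th column of a right preconditioner M with a prescribed sparsity pattern).
    Only the rows of A touched by the pattern enter the small least squares problem."""
    n = A.shape[0]
    A_sub = A[:, pattern]                              # n x |pattern| submatrix
    rows = np.nonzero(np.any(A_sub != 0, axis=1))[0]   # active rows
    e_j = (rows == j).astype(float)                    # restriction of e_j to the active rows
    coef, *_ = np.linalg.lstsq(A_sub[rows, :], e_j, rcond=None)
    m_j = np.zeros(n)
    m_j[pattern] = coef
    return m_j
```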

19.
The Newton Iteration on Lie Groups   (Total citations: 4, self-citations: 0, citations by others: 4)
We define the Newton iteration for solving the equation f(y) = 0, where f is a map from a Lie group to its corresponding Lie algebra. Two versions are presented, which are formulated independently of any metric on the Lie group. Both formulations reduce to the standard method in the Euclidean case, and are related to existing algorithms on certain Riemannian manifolds. In particular, we show that, under classical assumptions on f, the proposed method converges quadratically. We illustrate the techniques by solving a fixed-point problem arising from the numerical integration of a Lie-type initial value problem via implicit Euler.
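A sketch of the general shape of such an iteration (notation assumed; the paper presents two metric-independent versions that I am not reproducing exactly):

```latex
% Newton-type iteration for f : G \to \mathfrak{g} on a Lie group G (a sketch, notation assumed):
\text{solve } \ (\mathrm{d}f_{y_k})[\eta_k] = -f(y_k) \ \text{ in the Lie algebra } \mathfrak{g},
\qquad
y_{k+1} = \exp(\eta_k)\, y_k,
% which reduces to the classical Newton step y_{k+1} = y_k - f'(y_k)^{-1} f(y_k)
% when G is the Euclidean (additive) group.
```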

20.
The goal of the matrix completion problem is to retrieve an unknown real matrix from a small subset of its entries. This problem comes up in many application areas, and has received a great deal of attention in the context of the Netflix challenge. This setup usually represents our partial knowledge of some information domain. Unknown entries may be due to the unavailability of some relevant experimental data. One approach to this problem starts by selecting a complexity measure of matrices, such as rank or trace norm. The corresponding algorithm outputs a matrix of lowest possible complexity that agrees with the partially specified matrix. The performance of the above algorithm under the assumption that the revealed entries are sampled randomly has received considerable attention (e.g., Srebro et al., 2005; COLT, 2005; Foygel and Srebro, 2011; Candes and Tao, 2010; Recht, 2009; Keshavan et al., 2010; Koltchinskii et al., 2010). Here we ask what can be said if the observed entries are chosen deterministically. We prove generalization error bounds for such deterministic algorithms that resemble the results of Srebro et al. (2005), COLT (2005), and Foygel and Srebro (2011) for the randomized algorithms. We still do not understand which sets of entries in a given matrix can be used to properly reconstruct it. Our hope is that the present work sheds some light on this problem. © 2013 Wiley Periodicals, Inc. Random Struct. Alg., 45, 306–317, 2014
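In symbols (notation mine, not the paper's), the complexity-minimization approach described above outputs:

```latex
% Complexity-minimizing completion consistent with the revealed entries \Omega (notation assumed):
\widehat{X} \;\in\; \arg\min_{X} \; c(X)
\quad \text{s.t.} \quad X_{ij} = M_{ij} \ \ \text{for all } (i,j) \in \Omega,
% where c(\cdot) is a matrix complexity measure such as the rank or the trace (nuclear) norm,
% and the question studied is how well \widehat{X} generalizes when \Omega is chosen deterministically.
```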
