Similar literature
20 similar documents found (search time: 15 ms)
1.
A purely algebraic approach to solving very large general unstructured dense linear systems, in particular those that arise in 3D boundary integral applications, is suggested. We call this technique the matrix-free approach because it allows one to avoid the necessity of storing the whole coefficient matrix in any form, which provides significant memory and arithmetic savings. We propose to approximate a non-singular coefficient matrix A by a block low-rank matrix à and to use the latter when performing matrix–vector multiplications in iterative solution algorithms. Such approximations are shown to be easily computable, and a reliable a posteriori accuracy estimate of ‖A − Ã‖2 is derived. We prove that block low-rank approximations are sufficiently accurate for some model cases. However, even in the absence of rigorous proof of the existence of accurate approximations, one can apply the algorithm proposed to compute a block low-rank approximation and then make a decision on its practical suitability. We present numerical examples for the 3D CEM and CFD integral applications, which show that, at least for some industrial applications, the matrix-free approach is robust and cost-effective. © 1997 John Wiley & Sons, Ltd.
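A minimal NumPy sketch of the block low-rank idea (not the authors' implementation; the blocking, tolerance and the toy kernel are illustrative assumptions): each block is compressed by a truncated SVD, and matrix–vector products are then formed from the stored factors only.

```python
import numpy as np

def compress_blocks(A, block=64, tol=1e-8):
    """Truncated SVD of each block of A; returns a dict of (U*s, Vt) factors."""
    n = A.shape[0]
    factors = {}
    for i in range(0, n, block):
        for j in range(0, n, block):
            B = A[i:i+block, j:j+block]
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            r = max(1, int(np.sum(s > tol * s[0])))        # numerical rank of the block
            factors[(i, j)] = (U[:, :r] * s[:r], Vt[:r, :])
    return factors

def blr_matvec(factors, x, n):
    """y = Ã x, using only the stored low-rank factors of the blocks."""
    y = np.zeros(n)
    for (i, j), (Us, Vt) in factors.items():
        y[i:i+Us.shape[0]] += Us @ (Vt @ x[j:j+Vt.shape[1]])
    return y

# toy check against the dense product for a smooth kernel matrix
n = 256
A = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
x = np.random.default_rng(0).standard_normal(n)
F = compress_blocks(A)
print(np.linalg.norm(blr_matvec(F, x, n) - A @ x) / np.linalg.norm(A @ x))
```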

2.
We introduce a partial proximal point algorithm for solving nuclear norm regularized matrix least squares problems with equality and inequality constraints. The inner subproblems, reformulated as a system of semismooth equations, are solved by an inexact smoothing Newton method, which is proved to be quadratically convergent under a constraint non-degeneracy condition, together with the strong semismoothness property of the singular value thresholding operator. Numerical experiments on a variety of problems, including those arising from low-rank approximations of transition matrices, show that our algorithm is efficient and robust.
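The singular value thresholding operator mentioned above is the well-known proximal map of the nuclear norm; a minimal sketch of it (this is an illustration, not the paper's smoothing Newton solver):

```python
import numpy as np

def svt(Y, tau):
    """prox of tau*||.||_*: soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Y = np.random.default_rng(1).standard_normal((20, 15))
print(np.linalg.matrix_rank(svt(Y, 2.0)))   # thresholding lowers the rank
```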

3.
In the present paper, we propose block Krylov subspace methods for solving the Sylvester matrix equation AX − XB = C. We first consider the case when A is large and B is of small size. We use block Krylov subspace methods such as the block Arnoldi and the block Lanczos algorithms to compute approximations to the solution of the Sylvester matrix equation. When both matrices are large and the right-hand side matrix is of low rank, we show how to extract low-rank approximations. We give some theoretical results such as perturbation results and bounds on the norm of the error. Numerical experiments are also given to show the effectiveness of these block methods.
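A rough Galerkin sketch of the large-A/small-B case, written for the form AX − XB = C (a simplified stand-in for the block Arnoldi/Lanczos methods of the paper; the basis construction, iteration count and test matrices are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def block_krylov_basis(A, C, m):
    """Orthonormal basis of the block Krylov space span{C, AC, ..., A^{m-1}C}."""
    V = np.linalg.qr(C)[0]
    W = V
    for _ in range(m - 1):
        W = A @ W
        W = W - V @ (V.T @ W)        # orthogonalize the new block against the basis
        W = np.linalg.qr(W)[0]
        V = np.hstack([V, W])
    return V

def galerkin_sylvester(A, B, C, m=20):
    V = block_krylov_basis(A, C, m)
    # small projected equation (V^T A V) Y - Y B = V^T C
    Y = solve_sylvester(V.T @ A @ V, -B, V.T @ C)
    return V @ Y

n = 400
A = -(2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))   # large, sparse-like A
B = np.diag([1.0, 2.0, 3.0])                                # small B
C = np.random.default_rng(2).standard_normal((n, 3))
X = galerkin_sylvester(A, B, C)
print(np.linalg.norm(A @ X - X @ B - C) / np.linalg.norm(C))
```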

4.
Robust principal component analysis (RPCA), a fundamental tool in statistics and data science, has been studied extensively; its core principle is to decompose the observed data into a low-rank part and a sparse part. Based on a non-convex RPCA model, this paper proposes a new Gauss-type alternating descent direction method built on gradient steps and a nonmonotone search technique. The new algorithm alternately updates the variables associated with the low-rank part and the sparse part: the low-rank variables are updated by a single gradient descent step with an exact step size, while the sparse variables are updated using the nonmonotone search technique. Global convergence of the new algorithm is established under certain conditions. Numerical experiments demonstrate the effectiveness of the new algorithm.
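A toy sketch of the alternating idea (for brevity, the exact-step gradient and nonmonotone updates described above are replaced here by plain alternating least squares plus soft-thresholding; all names and parameters are illustrative):

```python
import numpy as np

def soft(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca_factored(M, r, lam, iters=100):
    """Alternate over M ≈ U V^T + S: least-squares updates of U, V, prox update of S."""
    rng = np.random.default_rng(0)
    S = np.zeros_like(M)
    V = rng.standard_normal((M.shape[1], r))
    for _ in range(iters):
        U = np.linalg.solve(V.T @ V, ((M - S) @ V).T).T     # update low-rank factor U
        V = np.linalg.solve(U.T @ U, ((M - S).T @ U).T).T   # update low-rank factor V
        S = soft(M - U @ V.T, lam)                          # soft-threshold the sparse part
    return U @ V.T, S

# toy test: rank-2 matrix plus sparse spikes
rng = np.random.default_rng(3)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.where(rng.random((60, 60)) < 0.05, 5.0, 0.0)
L, S = rpca_factored(L0 + S0, r=2, lam=0.1)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))
```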

5.
The problem of recovering a low-rank matrix from a set of observations corrupted with gross sparse errors is known as robust principal component analysis (RPCA) and has many applications in computer vision, image processing and web data ranking. It has been shown that under certain conditions, the solution to the NP-hard RPCA problem can be obtained by solving a convex optimization problem, namely the robust principal component pursuit (RPCP). Moreover, if the observed data matrix has also been corrupted by a dense noise matrix in addition to gross sparse errors, then the stable principal component pursuit (SPCP) problem is solved to recover the low-rank matrix. In this paper, we develop efficient algorithms with provable iteration complexity bounds for solving RPCP and SPCP. Numerical results on problems with millions of variables and constraints, such as foreground extraction from surveillance video, shadow and specularity removal from face images and video denoising from heavily corrupted data, show that our algorithms are competitive with current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.
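For orientation, a minimal sketch of the convex principal component pursuit itself, min ‖L‖_* + λ‖S‖_1 s.t. L + S = M, solved by a standard ALM/ADMM-type iteration (this is not the accelerated algorithms of the paper; the parameter choices follow common defaults):

```python
import numpy as np

def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpcp(M, lam=None, mu=None, iters=200):
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)       # nuclear-norm prox step
        S = soft(M - L + Y / mu, lam / mu)      # l1 prox step
        Y = Y + mu * (M - L - S)                # dual update
    return L, S

rng = np.random.default_rng(1)
M = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 80))
M[rng.random(M.shape) < 0.05] += 10.0
L, S = rpcp(M)
print(np.linalg.matrix_rank(L, tol=1e-6), int(np.sum(S != 0)))
```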

6.
This work is devoted to the numerical analysis of small flow disturbances, i.e. velocity and pressure deviations from the steady state, in ducts of constant cross section. The main emphasis is on the disturbances causing the largest growth of kinetic energy density, the so-called optimal disturbances, knowledge of which is important in laminar-turbulent transition and robust flow control investigations. Numerically, this amounts to computing the maximum over t ≥ 0 of the 2-norm of the matrix exponential exp{tS} for a square matrix S. To speed up the computations, we propose a new algorithm based on low-rank approximations of the matrix exponential and prove that it computes the desired amplification with a given accuracy. We discuss its implementation and demonstrate its efficiency by means of numerical experiments with a duct of square cross section.
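A quick sketch of the quantity in question, max over t ≥ 0 of ‖exp(tS)‖2, evaluated here by dense matrix exponentials on a time grid (the paper's low-rank acceleration is not reproduced; the matrix and grid are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def max_amplification(S, t_max=10.0, num=200):
    ts = np.linspace(0.0, t_max, num)
    gains = [np.linalg.norm(expm(t * S), 2) for t in ts]   # G(t) = ||exp(tS)||_2
    k = int(np.argmax(gains))
    return ts[k], gains[k]

# non-normal stable matrix: transient energy growth before eventual decay
S = np.array([[-0.1, 10.0],
              [ 0.0, -0.2]])
print(max_amplification(S))
```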

7.
Research on robust principal component analysis has attracted much attention recently. Generally, the model assumes sparse noise and characterizes the error term by the ℓ1-norm. However, in practice the sparse noise has a clustering effect, so simply using an ℓp-norm is not appropriate for modeling it. In this paper, we propose a novel method based on sparse Bayesian learning principles and Markov random fields. The method is shown to be very effective for low-rank matrix recovery and contiguous outlier detection, by enforcing the low-rank constraint in a matrix factorization formulation and incorporating the contiguity prior as a sparsity constraint. Experiments on both synthetic data and practical computer vision applications show that the proposed method is competitive with other state-of-the-art methods.

8.
A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for large dimensional problems. It consists of low-rank subblocks, leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type; it is defined through the first exponential-integral function and is therefore weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
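A small numerical illustration (not from the paper) of the low-rank block structure that hierarchical matrices exploit: an off-diagonal block of the kernel E1(|x − y|), with E1 the first exponential-integral function (scipy.special.exp1), has rapidly decaying singular values when the point sets are well separated.

```python
import numpy as np
from scipy.special import exp1

x = np.linspace(0.0, 1.0, 200)          # target points
y = np.linspace(2.0, 3.0, 200)          # well-separated source points
K = exp1(np.abs(x[:, None] - y[None, :]))
s = np.linalg.svd(K, compute_uv=False)
print(int(np.sum(s > 1e-10 * s[0])))    # numerical rank is far below 200
```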

9.
We discuss the numerical solution of large-scale discrete-time algebraic Riccati equations (DAREs) as they arise, e.g., in fully discretized linear-quadratic optimal control problems for parabolic partial differential equations (PDEs). We employ variants of Newton's method that allow one to compute an approximate low-rank factor of the solution of the DARE. The principal computation in the Newton iteration is the numerical solution of a Stein (aka discrete Lyapunov) equation in each step. For this purpose, we present a low-rank Smith method as well as a low-rank alternating-direction-implicit (ADI) iteration to compute low-rank approximations to solutions of Stein equations arising in this context. Numerical results are given to verify the efficiency and accuracy of the proposed algorithms.
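A minimal sketch of a plain (non-accelerated) low-rank Smith iteration for the Stein equation A X Aᵀ − X + C Cᵀ = 0, accumulating X ≈ Z Zᵀ from the series Z = [C, AC, A²C, …] (a simplification of the low-rank Smith/ADI methods in the paper; the test matrix is illustrative):

```python
import numpy as np

def smith_lowrank(A, C, iters=60):
    Z, W = C.copy(), C.copy()
    for _ in range(iters):
        W = A @ W                    # next term A^k C of X = sum_k A^k C C^T (A^T)^k
        Z = np.hstack([Z, W])
    return Z                         # X ≈ Z @ Z.T

rng = np.random.default_rng(4)
n = 100
A = 0.2 * np.eye(n) + 0.5 * np.diag(np.ones(n - 1), 1)   # spectral radius < 1
C = rng.standard_normal((n, 2))
Z = smith_lowrank(A, C)
X = Z @ Z.T
print(np.linalg.norm(A @ X @ A.T - X + C @ C.T) / np.linalg.norm(C @ C.T))
```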

10.
Linear programming models have been widely used in input-output analysis for analyzing the interdependence of industries in economics and in environmental science. In these applications, some of the entries of the coefficient matrix cannot be measured physically or are subject to sampling errors. However, the coefficient matrix can often be low-rank. We characterize the robust counterpart of these types of linear programming problems with an uncertainty set described by the nuclear norm. Simulations for the input-output analysis show that the new paradigm can be helpful.

11.
In this paper we propose four algorithms to compute truncated pivoted QR approximations to a sparse matrix. Three are based on the Gram–Schmidt algorithm and the other on Householder triangularization. All four algorithms leave the original matrix unchanged, and the only additional storage requirements are arrays to contain the factorization itself. Thus, the algorithms are particularly suited to determining low-rank approximations to a sparse matrix.
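A rough sketch of one such idea, a truncated pivoted Gram–Schmidt QR that reads but never overwrites the matrix (this is an illustration, not any of the paper's four algorithms):

```python
import numpy as np

def truncated_pivoted_qr(A, k):
    m, n = A.shape
    Q = np.zeros((m, k))
    R = np.zeros((k, n))
    piv = []
    norms = np.sum(A * A, axis=0).astype(float)     # squared column norms
    for j in range(k):
        p = int(np.argmax(norms))                   # pivot: largest remaining column
        piv.append(p)
        v = A[:, p] - Q[:, :j] @ R[:j, p]           # orthogonalize the pivot column
        Q[:, j] = v / np.linalg.norm(v)
        R[j, :] = Q[:, j] @ A                       # j-th row of R against all columns
        norms = np.maximum(norms - R[j, :] ** 2, 0.0)   # downdate the column norms
    return Q, R, piv

rng = np.random.default_rng(5)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 80))   # rank-8 matrix
Q, R, piv = truncated_pivoted_qr(A, 8)
print(np.linalg.norm(A - Q @ R) / np.linalg.norm(A))
```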

12.
A method for solving a partial algebraic eigenvalue problem is constructed. It exploits the tensor structure of eigenvectors in the two-dimensional case. For a symmetric matrix represented in tensor format, the method finds low-rank approximations to the eigenvectors corresponding to the smallest eigenvalues. For sparse matrices, the execution time and memory required by the proposed method are proportional to the square root of the overall number of unknowns, whereas this dependence is usually linear. To maintain the tensor structure of the vectors at each iteration step, low-rank approximations are performed, which introduces errors into the original method. Nevertheless, the new method is proved to converge. Convergence rate estimates are obtained for various tensor modifications of the abstract one-step method. It is shown how the convergence of a multistep method can be derived from the convergence of the corresponding one-step method. Several modifications of the method with low-rank approximation techniques were implemented on the basis of the block conjugate gradient method. Their performance is compared on numerical examples.
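A small illustration (not the paper's method) of the tensor structure being exploited: for the 2D Laplacian I⊗T + T⊗I, the eigenvector of the smallest eigenvalue, reshaped to a matrix, is numerically rank one.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
vals, vecs = eigsh(A, k=1, sigma=0)     # eigenpair of the smallest eigenvalue (shift-invert)
V = vecs[:, 0].reshape(n, n)
s = np.linalg.svd(V, compute_uv=False)
print(s[1] / s[0])                      # the reshaped eigenvector is numerically rank one
```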

13.
Low-rank matrix recovery, an important class of problems in areas such as image processing and signal and data analysis, has been widely studied. Within the framework of alternating direction algorithms, this paper applies a nonmonotone technique and proposes a new algorithm for solving low-rank matrix recovery problems. In each iteration, the algorithm first updates the two blocks of variables of the low-rank part simultaneously with a single gradient step using a variable step size, and then updates the variables of the sparse part using the nonmonotone technique. Under certain assumptions, the global convergence of the new algorithm is proved. Finally, examples on random low-rank matrix recovery and on video foreground-background separation verify the effectiveness of the new algorithm and show that the nonmonotone technique greatly improves its efficiency.
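A tiny sketch of a nonmonotone backtracking line search of the Grippo-Lampariello-Lucidi type, in which a trial step is accepted if it improves on the worst of the last m objective values rather than on the current one (illustrative; not the paper's exact rule):

```python
import numpy as np

def nonmonotone_linesearch(f, x, d, g, history, m=5, t=1.0, beta=0.5, sigma=1e-4):
    """Backtrack until f(x + t d) <= max of last m values + sufficient decrease."""
    ref = max(history[-m:])                      # reference: worst of the last m iterates
    while f(x + t * d) > ref + sigma * t * (g @ d):
        t *= beta
    return t

# toy use on a quadratic, with d the steepest descent direction
f = lambda x: 0.5 * x @ x
x = np.array([3.0, -4.0]); g = x; d = -g
print(nonmonotone_linesearch(f, x, d, g, history=[f(x)]))
```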

14.
Tensor robust principal component analysis separates an unknown low-rank tensor and an unknown sparse tensor from their known sum. Because of its broad application prospects in computer vision and pattern recognition, this problem has recently become a research focus. This paper proposes a new model for tensor robust principal component analysis together with an alternating direction minimization algorithm, including two rank-adjustment strategies used during the solution process. For the low-rank component, low-rank matrix factorizations are applied to all of its mode unfoldings; for the sparse component, a soft-thresholding shrinkage strategy is adopted. The proposed method is applicable whether the target low-rank tensor is exactly or only approximately low rank. A partial convergence analysis is given: any accumulation point produced by the iteration satisfies the KKT conditions. If the target low-rank tensor is exactly low rank, the output can be corrected at termination by a higher-order singular value decomposition. Numerical experiments on synthetic data and real video data show that, compared with algorithms of the same type, the proposed method achieves better results.
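Minimal helpers (illustrative, not the paper's code) for the mode-k unfolding and folding on which the per-mode low-rank factorizations above are built:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis 'mode' to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

T = np.arange(24.0).reshape(2, 3, 4)
assert np.allclose(fold(unfold(T, 1), 1, T.shape), T)
```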

15.
Hierarchical tensors can be regarded as a generalisation, preserving many crucial features, of the singular value decomposition to higher-order tensors. For a given tensor product space, a recursive decomposition of the set of coordinates into a dimension tree gives a hierarchy of nested subspaces and corresponding nested bases. The dimensions of these subspaces yield a notion of multilinear rank. This rank tuple, as well as quasi-optimal low-rank approximations by rank truncation, can be obtained by a hierarchical singular value decomposition. For fixed multilinear ranks, the storage and operation complexity of these hierarchical representations scale only linearly in the order of the tensor. As in the matrix case, the set of hierarchical tensors of a given multilinear rank is not a convex set, but forms an open smooth manifold. A number of techniques for the computation of hierarchical low-rank approximations have been developed, including local optimisation techniques on Riemannian manifolds as well as truncated iteration methods, which can be applied for solving high-dimensional partial differential equations. This article gives a survey of these developments. We also discuss applications to problems in uncertainty quantification, to the solution of the electronic Schrödinger equation in the strongly correlated regime, and to the computation of metastable states in molecular dynamics.
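As a concrete, minimal instance of the rank truncation discussed above, a truncated higher-order SVD (Tucker) sketch; the hierarchical (tree-based) format itself is not implemented here, and all names are illustrative:

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def mode_prod(T, M, mode):
    """Multiply tensor T along 'mode' by matrix M."""
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, shape)

def hosvd(T, ranks):
    """Truncated HOSVD: leading left singular vectors of each unfolding, then the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_prod(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_prod(T, U, m)
    return T

# a tensor of multilinear rank (3, 3, 3) is reproduced exactly by a rank-(3, 3, 3) HOSVD
rng = np.random.default_rng(6)
A, B, C = (rng.standard_normal((20, 3)) for _ in range(3))
T = reconstruct(rng.standard_normal((3, 3, 3)), [A, B, C])
core, factors = hosvd(T, (3, 3, 3))
print(np.linalg.norm(reconstruct(core, factors) - T) / np.linalg.norm(T))
```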

16.
A priori accuracy estimates for low-rank approximations using a small number of rows and columns of the initial matrix are proposed. Unlike in the existing methods of pseudoskeleton approximation, this number of rows and columns is larger than the rank of the approximation, but the estimates are substantially more accurate than those known previously.
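A quick sketch (not the paper's construction) of a cross/pseudoskeleton-type approximation A ≈ C · pinv(A[I, J]) · R that uses slightly more rows and columns than the target rank, in the spirit of the oversampling discussed above; selecting rows and columns by pivoted QR is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import qr

def cross_approx(A, k, oversample=5, rcond=1e-10):
    p = k + oversample
    _, _, cols = qr(A, mode='economic', pivoting=True)     # column choice via pivoted QR
    _, _, rows = qr(A.T, mode='economic', pivoting=True)   # row choice via pivoted QR of A^T
    J, I = cols[:p], rows[:p]
    return A[:, J] @ np.linalg.pinv(A[np.ix_(I, J)], rcond=rcond) @ A[I, :]

rng = np.random.default_rng(7)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))   # rank-10 matrix
print(np.linalg.norm(A - cross_approx(A, 10)) / np.linalg.norm(A))
```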

17.
In this article we investigate model order reduction of large-scale systems using time-limited balanced truncation, which restricts the well-known balanced truncation framework to prescribed finite time intervals. The main emphasis is on the efficient numerical realization of this model reduction approach in the case of large system dimensions. We discuss numerical methods to deal with the resulting matrix exponential functions and Lyapunov equations, which are solved for low-rank approximations. Our main tools for this purpose are rational Krylov subspace methods. We also discuss the eigenvalue decay and numerical rank of the solutions of the Lyapunov equations. These results, as well as numerical experiments, show that depending on the final time horizon, the numerical rank of the Lyapunov solutions in time-limited balanced truncation can be smaller than in standard balanced truncation. In numerical experiments we test the approaches for computing low-rank factors of the involved Lyapunov solutions and illustrate that time-limited balanced truncation can generate reduced order models having a higher accuracy in the considered time region.
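A small dense sketch of the time-limited controllability Gramian P(T) = ∫₀ᵀ e^{At} B Bᵀ e^{Aᵀt} dt, obtained from the Lyapunov equation A P + P Aᵀ = e^{AT} B Bᵀ e^{AᵀT} − B Bᵀ (the paper's low-rank rational Krylov solvers are not reproduced; the test system is an illustrative assumption):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

n, m, T = 50, 2, 1.0
A = -(2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))   # stable system matrix
B = np.random.default_rng(8).standard_normal((n, m))
E = expm(T * A)
P_T = solve_continuous_lyapunov(A, E @ B @ B.T @ E.T - B @ B.T)   # time-limited Gramian
P_inf = solve_continuous_lyapunov(A, -B @ B.T)                    # standard Gramian

def numrank(P, tol=1e-10):
    s = np.linalg.svd(P, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

print(numrank(P_T), numrank(P_inf))   # compare numerical ranks of the two Gramians
```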

18.
Zhao, Jianxi; Feng, Qian; Zhao, Lina. Numerical Algorithms 2019, 82(1): 371–396
In the past decade, robust principal component analysis (RPCA) and low-rank matrix completion (LRMC), as two very important optimization problems with the view of recovering...

19.
The robust principal component analysis (RPCA) model is a popular method for solving problems involving the nuclear norm and the $\ell_1$ norm. However, it is time-consuming since, in general, one has to compute a singular value decomposition in each iteration. In this paper, we introduce a novel model that reformulates the existing one by using a low-rank matrix factorization as a surrogate for the nuclear norm in the sparse and low-rank decomposition problem. We then apply the penalty function method (PFM) and the augmented Lagrangian multipliers method (ALMM) to solve this new non-convex optimization problem, and give a convergence analysis for each of them. Practical numerical examples show that our methods perform much better than classical RPCA.
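A small numerical check (not from the paper) of the factorization surrogate behind such models: ‖X‖_* = min over X = U Vᵀ of (‖U‖_F² + ‖V‖_F²)/2, attained at the balanced SVD factors.

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 40))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uf = U * np.sqrt(s)            # balanced factor U_r S^{1/2}
Vf = Vt.T * np.sqrt(s)         # balanced factor V_r S^{1/2}
print(np.sum(s), 0.5 * (np.linalg.norm(Uf) ** 2 + np.linalg.norm(Vf) ** 2))
```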

20.
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.
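A toy instance of this framework (illustrative only): for the atomic set of signed standard basis vectors the atomic norm is the ℓ1 norm, and recovery from a few random linear measurements reduces to a linear program, solved here with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(10)
n, m, k = 60, 25, 4
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)); b = A @ x0

# min sum(t)  s.t.  A x = b,  -t <= x <= t   (variables z = [x, t])
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
A_ub = np.block([[ np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))   # exact recovery w.h.p.
```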

