Similar Documents
20 similar documents found (search time: 31 ms).
1.
Solving matrix recovery problems with a randomized singular value decomposition algorithm
许雪敏, 向华. 数学杂志 (Journal of Mathematics), 2017, 37(5): 969-976
This paper studies the recovery of large low-rank matrices. A randomized singular value decomposition (RSVD) algorithm is used to compute the singular value decomposition of sparse matrices. Compared with the Lanczos method, the algorithm greatly reduces the running time while achieving the same error accuracy, and it is also effective for matrices of relatively low rank.
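As a reference point for the RSVD approach described above, here is a minimal randomized SVD sketch in NumPy in the style of Gaussian range finders; the oversampling and power-iteration parameters are illustrative, and this is not the authors' exact algorithm.

```python
# Minimal randomized SVD sketch (not the paper's exact RSVD variant).
import numpy as np

def randomized_svd(A, k, p=10, n_iter=2, seed=None):
    """Approximate the top-k SVD of A via a random range finder."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # Gaussian test matrix
    Y = A @ Omega                             # sample the range of A
    for _ in range(n_iter):                   # power iterations sharpen the subspace
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A                               # project A onto the captured subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # rank-40 matrix
    U, s, Vt = randomized_svd(A, k=40)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```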

2.
Variations of the latent semantic indexing (LSI) method in information retrieval (IR) require the computation of singular subspaces associated with the k dominant singular values of a large m × n sparse matrix A, where k ≪ min(m,n). The Riemannian SVD was recently generalized to low-rank matrices arising in IR and shown to be an effective approach for formulating an enhanced semantic model that captures the latent term-document structure of the data. However, in terms of storage and computation requirements, its implementation can be much improved for large-scale applications. We discuss an efficient and reliable algorithm, called SPK-RSVD-LSI, as an alternative approach for deriving the enhanced semantic model. The algorithm combines the generalized Riemannian SVD and the Lanczos method with full reorthogonalization and explicit restart strategies. We demonstrate that our approach performs as well as the original low-rank Riemannian SVD method by comparing their retrieval performance on a well-known benchmark document collection. Copyright 2004 John Wiley & Sons, Ltd.
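For readers unfamiliar with the LSI setting, the following sketch computes the k dominant singular triplets of a sparse term-document matrix with SciPy's svds and folds a query into the latent subspace; it illustrates plain LSI only, not the SPK-RSVD-LSI algorithm or the Riemannian SVD, and the matrix sizes and query terms are made up.

```python
# Plain LSI via a k-truncated sparse SVD (illustration only).
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

m, n, k = 2000, 500, 50                    # terms, documents, latent dimension (illustrative)
A = sparse_random(m, n, density=0.01, format="csr", random_state=0)

U, s, Vt = svds(A, k=k)                    # k dominant singular triplets of the sparse matrix

# Fold a query (a term-frequency vector) into the latent space and rank documents.
q = np.zeros(m); q[[3, 17, 256]] = 1.0
q_hat = (q @ U) / s                        # query coordinates in the latent space
doc_coords = Vt.T                          # document coordinates, one row per document
scores = doc_coords @ q_hat
print(np.argsort(scores)[::-1][:5])        # indices of the 5 best-matching documents
```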

3.
Electrical capacitance tomography (ECT) is a potential measurement technology for industrial process monitoring, but its applicability is generally limited by low-quality tomographic images. Boosting the performance of inverse imaging algorithms is the key to improving the reconstruction quality (RQ). Common regularized iterative imaging methods with analytical prior regularizers are less flexible in dealing with actual reconstruction tasks, leading to large reconstruction errors. To address the challenge, this study proposes a new imaging method comprising a reconstruction model and an optimizer. A data-driven regularizer from a new ensemble learning model and an analytical prior regularizer focused on the sparsity of the imaging objects are combined into a new optimization model for imaging. In the proposed ensemble learning model, the generalized low rank approximations of matrices (GLRAM) method carries out dimensionality reduction to decrease the redundancy of the input data and improve diversity, the extreme learning machine (ELM) serves as a base learner, and the nuclear norm based matrix regression (NNMR) method is developed to aggregate the ensemble of solutions. The singular value thresholding method (SVTM) and the fast iterative shrinkage-thresholding algorithm (FISTA) are inserted into the split Bregman method (SBM) to generate a powerful optimizer for the built computational model. Comparison with other competing methods through numerical experiments on typical imaging targets demonstrates that the developed algorithm reduces reconstruction error and achieves greater improvements in imaging quality and robustness.
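Of the building blocks listed, the singular value thresholding step is easy to isolate; a minimal sketch of the SVT operator (the proximal map of the nuclear norm) follows, with the threshold tau chosen arbitrarily and the rest of the SBM/FISTA imaging pipeline omitted.

```python
# Minimal singular value thresholding (SVT) operator.
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((64, 64))
    Y = svt(X, tau=2.0)
    print(np.linalg.matrix_rank(Y), "<-", np.linalg.matrix_rank(X))
```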

4.
We introduce fast and robust algorithms for lower rank approximation to given matrices based on robust alternating regression. The alternating least squares regression, also called criss-cross regression, was used for lower rank approximation of matrices, but it lacks robustness against outliers in these matrices. We use robust regression estimators and address some of the complications arising from this approach. We find it helpful to use high breakdown estimators in the initial iterations, followed by M estimators with monotone score functions in later iterations towards convergence. In addition to robustness, the computational speed is another important consideration in the development of our proposed algorithm, because alternating robust regression can be computationally intensive for large matrices. Based on a mix of the least trimmed squares (LTS) and Huber's M estimators, we demonstrate that fast and robust lower rank approximations are possible for modestly large matrices.
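A rough sketch of the criss-cross idea with Huber-style reweighting is given below; the tuning constant, the random initialization, and the LTS stage of the actual algorithm are simplified or omitted, so this is only an illustration of alternating robust regression, not the authors' procedure.

```python
# Alternating (criss-cross) regression for a rank-k fit with Huber-style IRLS weights.
import numpy as np

def huber_weights(r, c=1.345):
    """IRLS weights for Huber's M-estimator: 1 for small residuals, c/|r| beyond c."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def robust_rank_k(X, k=2, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    B = rng.standard_normal((n, k))
    A = np.zeros((m, k))
    for _ in range(n_iter):
        # Refit the rows of A by weighted least squares against the current B.
        W = huber_weights(X - A @ B.T)
        for i in range(m):
            sw = np.sqrt(W[i])[:, None]
            A[i] = np.linalg.lstsq(B * sw, X[i] * sw.ravel(), rcond=None)[0]
        # Then refit the rows of B against the updated A.
        W = huber_weights(X - A @ B.T)
        for j in range(n):
            sw = np.sqrt(W[:, j])[:, None]
            B[j] = np.linalg.lstsq(A * sw, X[:, j] * sw.ravel(), rcond=None)[0]
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
    X[0, 0] += 100.0                     # a gross outlier
    A, B = robust_rank_k(X, k=5)
    print(np.median(np.abs(X - A @ B.T)))
```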

5.
In this paper, we discuss the sensitivity of multiple nonzero finite generalized singular values and the corresponding generalized singular matrix set of a real matrix pair analytically dependent on several parameters. From our results, the partial derivatives of multiple nonzero singular values and their left and right singular vector matrices are obtained. Copyright © 2012 John Wiley & Sons, Ltd.
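For background, the sketch below checks the classical first-order formula for a simple singular value of a single matrix, dσ_i/dp = u_i^T (dA/dp) v_i, against a finite difference; the repeated and generalized singular values treated in the paper require the more careful analysis developed there, and the matrices here are random illustrations.

```python
# Finite-difference check of the first-order perturbation formula for a simple singular value.
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((8, 5))
E = rng.standard_normal((8, 5))           # perturbation direction dA/dp

U, s, Vt = np.linalg.svd(A, full_matrices=False)
analytic = U[:, 0] @ E @ Vt[0]            # predicted derivative of the largest singular value

t = 1e-6
numeric = (np.linalg.svd(A + t * E, compute_uv=False)[0] - s[0]) / t
print(analytic, numeric)                  # the two values should agree closely
```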

6.
Singular value decomposition and generalized inverses of unitarily extended matrices
Starting from the ordinary singular value decomposition, quantitative relations are derived between the singular values and singular vectors of a unitarily extended matrix and those of its parent matrix. The full-rank factorization and the g-inverse, reflexive g-inverse, least-squares g-inverse, and minimum-norm g-inverse of the unitarily extended matrix are also analyzed quantitatively, yielding the relationship between the full-rank factors F* and G* of the extended matrix and the factors F and G of the parent matrix A. A corresponding fast algorithm is given, and an example shows that it greatly reduces the computation and storage required for the factorization and improves computational efficiency.

7.
We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of the SVD of column-partitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted a great deal of interest in data mining and machine learning.

8.
In this article, we study robust tensor completion by using the transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD. The main motivation is that other unitary transform matrices can yield a tensor of lower tubal rank than the discrete Fourier transform matrix does, which is more effective for robust tensor completion. Experimental results on hyperspectral, video, and face datasets show that the transformed tensor SVD achieves a higher peak signal-to-noise ratio on the robust tensor completion problem than the Fourier-transform-based approach and other robust tensor completion methods.
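The following sketch illustrates the transformed tensor SVD idea with an orthogonal DCT standing in for the unitary transform along the third mode; the paper's choice of transform and its full robust completion model are not reproduced, and the truncation rank is arbitrary.

```python
# Rank-k tubal truncation of a third-order tensor under a DCT transform along mode-3.
import numpy as np
from scipy.fft import dct, idct

def transformed_tsvd_truncate(T, k):
    That = dct(T, axis=2, norm="ortho")          # unitary transform along the third mode
    out = np.zeros_like(That)
    for i in range(T.shape[2]):
        # Matrix SVD of each frontal slice in the transform domain, then truncate.
        U, s, Vt = np.linalg.svd(That[:, :, i], full_matrices=False)
        out[:, :, i] = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return idct(out, axis=2, norm="ortho")       # back to the original domain

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    T = rng.standard_normal((30, 30, 10))
    Tk = transformed_tsvd_truncate(T, k=5)
    print(np.linalg.norm(T - Tk) / np.linalg.norm(T))
```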

9.
This paper considers problems related to the singular values of matrices, studies their extremal properties, and derives results for some special cases. Finally, applications to multivariate statistical analysis are examined.

10.
The tensor SVD (t-SVD) for third-order tensors, previously proposed in the literature, has been applied successfully in many fields, such as computed tomography, facial recognition, and video completion. In this paper, we propose a method that extends a well-known randomized matrix method to the t-SVD. This method can produce a factorization with similar properties to the t-SVD, but it is more computationally efficient on very large data sets. We present details of the algorithms and theoretical results and provide numerical results that show the promise of our approach for compressing and analyzing image-based data sets. We also present an improved analysis of the randomized and simultaneous iteration for matrices, which may be of independent interest to the scientific community. We then use these new results to establish the convergence properties of the new randomized tensor method.

11.
Through the restricted singular value decomposition (RSVD) of the matrix triplet (C, A, B), we show in this note how to choose a variable matrix X such that the matrix pencil A - BXC attains its maximal and minimal ranks. As applications, we show how to use the RSVD to solve the matrix equation A = BXC.
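For comparison with the RSVD-based construction, a classical pseudoinverse sketch for the equation A = BXC is shown below; X = B⁺AC⁺ solves the equation exactly whenever it is consistent, and the tolerance used in the consistency check is arbitrary.

```python
# Consistency check and a particular solution of A = BXC via Moore-Penrose pseudoinverses.
import numpy as np

def solve_bxc(A, B, C, tol=1e-10):
    Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)
    X = Bp @ A @ Cp                               # candidate solution
    consistent = np.linalg.norm(B @ X @ C - A) <= tol * max(1.0, np.linalg.norm(A))
    return X, consistent

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    B, C = rng.standard_normal((6, 4)), rng.standard_normal((3, 5))
    X_true = rng.standard_normal((4, 3))
    A = B @ X_true @ C                            # consistent right-hand side by construction
    X, ok = solve_bxc(A, B, C)
    print(ok, np.linalg.norm(B @ X @ C - A))
```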

12.
Motivated by the recently popular probabilistic methods for low-rank approximation and randomized algorithms for least squares problems, we develop randomized algorithms for the total least squares problem with a single right-hand side. We present the Nyström method for medium-sized problems. For the large-scale and ill-conditioned cases, we introduce a randomized truncated total least squares method with the known or estimated rank as the regularization parameter. We analyze the accuracy of the randomized truncated total least squares algorithm and perform numerical experiments to demonstrate the efficiency of our randomized algorithms. The randomized algorithms can greatly reduce the computational time and still maintain good accuracy with very high probability.
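As background, a plain (non-randomized) truncated TLS solver is sketched below using a single dense SVD of the augmented matrix [A b]; the truncation rank k plays the role of the regularization parameter mentioned in the abstract, and the randomized acceleration itself is not implemented.

```python
# Classical truncated total least squares for A x ~= b with truncation rank k.
import numpy as np

def truncated_tls(A, b, k):
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
    V = Vt.T
    V12 = V[:n, k:]                   # top block of the discarded right singular vectors
    v22 = V[n, k:]                    # last row of the discarded right singular vectors
    return -V12 @ v22 / (v22 @ v22)   # x_k = -V12 v22^T / ||v22||^2

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    A = rng.standard_normal((100, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true + 1e-3 * rng.standard_normal(100)
    print(np.linalg.norm(truncated_tls(A, b, k=10) - x_true))
```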

13.
We present an analysis for minimizing the condition number of nonsingular parameter-dependent 2 × 2 block-structured saddle-point matrices with a maximally rank-deficient (1,1) block. The matrices arise from an augmented Lagrangian approach. Using quasidirect sums, we show that a decomposition akin to simultaneous diagonalization leads to an optimization based on the extremal nonzero eigenvalues and singular values of the associated block matrices. Bounds on the condition number of the parameter-dependent matrix are obtained, and we demonstrate their tightness on some numerical examples. Copyright © 2016 John Wiley & Sons, Ltd.

14.
This paper presents a new method for the computation of the truncated singular value decomposition (SVD) of an arbitrary matrix. The method can be qualified as deterministic because it does not use randomized schemes. The number of operations required is asymptotically lower than that of conventional methods for nonsymmetric matrices and is on a par with the best existing deterministic methods for unstructured symmetric ones. It slightly exceeds the asymptotic computational cost of SVD methods based on randomization; however, the error estimate for such methods is significantly higher than for the presented one. The method is one-pass, that is, each value of the matrix is used just once. It is also readily parallelizable. In the case of a full SVD, it is exact. In addition, it can be modified for the case when data are obtained sequentially rather than being available all at once. Numerical simulations confirm the accuracy of the method.

15.
We present an algorithm for the approximation of the dominant singular values and corresponding right and left singular vectors of a complex symmetric matrix. The method is based on two short-term recurrences first proposed by Saunders, Simon and Yip [24] for a non-Hermitian linear system solver. With symmetric matrices, the recurrence can be modified so as to generate a tridiagonal symmetric matrix from which the original triplets can be approximated. The recurrence formally resembles the Lanczos method, despite substantial differences that make the usual convergence results inapplicable. Implementation aspects are discussed, such as re-orthogonalization and the use of alternative representation matrices. The method is considerably more efficient than existing approaches that do not exploit the symmetry of the problem. Numerical experiments on application problems validate the analysis, while showing satisfactory results, especially on dense matrices. © 1997 by John Wiley & Sons, Ltd.

16.
Based on the structure of rank-1 matrices and the different ways of unfolding a tensor, we present two types of structured tensors that contain rank-1 tensors as special cases. We study some properties of the ranks and the best rank-r approximations of the structured tensors. By using the upper semicontinuity of the matrix rank, we show that best rank-r approximations always exist for the structured tensors. This can help one to better understand the sequential unfolding singular value decomposition (SVD) method for tensors proposed by J. Salmi et al. [IEEE Trans Signal Process, 2009, 57(12): 4719-4733] and offers a generalized way of computing low-rank approximations of tensors. Moreover, we apply the structured tensors to estimate upper and lower bounds on the best rank-1 approximations of 3rd-order and 4th-order tensors, and to distinguish well-written from poorly written digits.

17.
Symmetric and anti-symmetric minimum-norm least-squares solutions of the matrix equation AXA^T + BYB^T = C
For arbitrarily given matrices A ∈ R^{k×m}, B ∈ R^{k×n}, and C ∈ R^{k×k}, we derive expressions for the symmetric and anti-symmetric minimum-norm least-squares solutions of the matrix equation AXA^T + BYB^T = C by means of the singular value decomposition and the generalized singular value decomposition.
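A brute-force numerical counterpart of this problem can be set up by expanding symmetric X and Y in a basis and solving one least-squares system; the sketch below does exactly that. It returns a least-squares solution but not necessarily the minimum-norm one, and it does not use the closed-form (G)SVD expressions of the paper; the matrix sizes are illustrative.

```python
# Least-squares fit of A X A^T + B Y B^T to C over symmetric X and Y, by vectorization.
import numpy as np
from itertools import combinations_with_replacement

def sym_basis(n):
    """Basis E_ij (i <= j) of the space of n x n symmetric matrices."""
    basis = []
    for i, j in combinations_with_replacement(range(n), 2):
        E = np.zeros((n, n)); E[i, j] = E[j, i] = 1.0
        basis.append(E)
    return basis

def solve_sym_lsq(A, B, C):
    m_basis = [A @ E @ A.T for E in sym_basis(A.shape[1])]
    n_basis = [B @ E @ B.T for E in sym_basis(B.shape[1])]
    M = np.column_stack([E.ravel() for E in m_basis + n_basis])
    coef, *_ = np.linalg.lstsq(M, C.ravel(), rcond=None)
    nx = len(m_basis)
    X = sum(c * E for c, E in zip(coef[:nx], sym_basis(A.shape[1])))
    Y = sum(c * E for c, E in zip(coef[nx:], sym_basis(B.shape[1])))
    return X, Y

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    A, B = rng.standard_normal((7, 4)), rng.standard_normal((7, 3))
    X0 = rng.standard_normal((4, 4)); X0 = X0 + X0.T
    Y0 = rng.standard_normal((3, 3)); Y0 = Y0 + Y0.T
    C = A @ X0 @ A.T + B @ Y0 @ B.T          # consistent right-hand side by construction
    X, Y = solve_sym_lsq(A, B, C)
    print(np.linalg.norm(A @ X @ A.T + B @ Y @ B.T - C))
```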

18.
This paper proposes a perturbation method for the reanalysis of the singular value decomposition of a general real matrix. It is a simple and efficient fast-reanalysis technique of practical value for improving the computational efficiency of iterative analyses that require repeated SVD computations. Asymptotic estimates up to second-order perturbation terms are derived for the singular values and the left and right singular vectors. Finally, a way to extend this perturbation analysis directly to general complex matrices is indicated.

19.
刘丽霞, 王川龙. 计算数学 (Mathematica Numerica Sinica), 2017, 39(2): 179-188
This paper proposes a mean-based subspace algorithm for Toeplitz matrix completion. A new feasible matrix is formed by a least-squares approximation of the known entries in the left singular vector subspace, and averaging along the diagonals keeps the iterates Toeplitz, which reduces the time spent decomposing the singular vector subspace. Theoretically, the algorithm is shown to converge to a low-rank Toeplitz matrix under certain conditions. Matrix-completion experiments at different sampling rates show that the new Toeplitz completion algorithm is more efficient, in both time and accuracy, than the thresholded augmented Lagrange multiplier algorithm.
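One ingredient of the algorithm, averaging along the diagonals to restore Toeplitz structure, is simple to state on its own; a sketch follows, with the subspace least-squares step and the convergence machinery omitted.

```python
# Diagonal averaging: the orthogonal projection of a matrix onto the set of Toeplitz matrices.
import numpy as np

def toeplitz_project(X):
    """Replace every diagonal of X by its mean, returning the nearest Toeplitz matrix."""
    m, n = X.shape
    T = np.empty_like(X)
    for k in range(-(m - 1), n):
        idx = np.where(np.eye(m, n, k=k, dtype=bool))   # entries on the k-th diagonal
        T[idx] = X[idx].mean()
    return T

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    X = rng.standard_normal((5, 5))
    print(np.round(toeplitz_project(X), 3))
```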

20.
The aim of this paper is to propose a higher order iterative method for computing the outer inverse \(A^{(2)}_{T,S}\) of a given matrix A. A convergence analysis, along with error bounds, is established for the proposed method. A number of numerical examples are worked out, including singular square, rectangular, and randomly generated rank-deficient matrices, as well as a set of singular matrices obtained from the Matrix Computation Toolbox (mctoolbox). The performance measures used are the number of iterations, the mean CPU time (MCT), and the error bounds. The results are compared with those of some existing methods. Our method gives improved results in terms of both computational speed and accuracy.
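For orientation, the sketch below runs the classical order-2 hyperpower (Newton-Schulz) iteration, which converges to the Moore-Penrose inverse, the particular outer inverse \(A^{(2)}_{T,S}\) with T = range(A^T) and S = null(A^T); the paper's higher-order scheme and its error bounds are not reproduced, and the iteration count here is arbitrary.

```python
# Newton-Schulz (order-2 hyperpower) iteration for the Moore-Penrose inverse.
import numpy as np

def newton_schulz_pinv(A, n_iter=60):
    # Scaled initial guess X0 = alpha * A^T with alpha < 2 / sigma_max^2 guarantees convergence.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2 * I - A @ X)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    A = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 6))   # rank-deficient 8 x 6
    X = newton_schulz_pinv(A)
    print(np.linalg.norm(X - np.linalg.pinv(A)))
```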
