Found 20 similar documents (search took 93 ms)
1.
This paper proposes a subspace approximation method for matrix completion. Based on a subspace approximation derived from the singular value decomposition, the algorithm uses quadratic programming to produce the closest feasible matrix within the subspace, thereby obtaining a better feasible matrix. The rank of the subspace is reduced step by step by gradually decreasing the number of singular values above a threshold, until the optimal low-rank matrix is obtained. We prove that the subspace approximation method converges under certain conditions. Randomized experiments comparing it with the augmented Lagrange multiplier algorithm and the orthogonal rank-one matrix approximation method show that the proposed method is more effective in both CPU time and low-rankness.
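The idea of shrinking singular values against a threshold, common to methods of this kind, can be illustrated with a minimal singular value thresholding (SVT) sketch. This is a standard baseline, not the paper's subspace algorithm; the function name and parameters are illustrative:

```python
import numpy as np

def svt_complete(M, mask, tau=0.1, step=1.0, iters=200):
    """Minimal SVT iteration for matrix completion: soft-threshold the
    singular values, then take a correction step on the observed entries."""
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y = Y + step * mask * (M - X)             # enforce observed entries
    return X
```

With every entry observed the iteration reproduces the matrix exactly; with missing entries it fills them with a low-rank estimate, provided enough entries are seen.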
2.
The problem of low-rank matrix approximation with missing data arises in many areas of data processing, analysis, and their applications. Because of the extremely high rate of missing entries, the problem is severely ill-posed, and finding effective numerical algorithms is a challenging task. This paper gives a systematic survey of the author's recent progress in this area: it presents an ill-posedness analysis of the basic model problem and proposes two novel regularization methods, entrywise-constrained regularization and guided regularization, suited respectively to moderate and to extremely high rates of missing entries. Fast and effective numerical algorithms for both are also described. In several large-scale practical numerical examples, these new regularized algorithms outperform all existing methods tested.
3.
1. Introduction. Suppose D ∈ R^(m×n) is an observed high-dimensional data matrix. The problem of estimating a low-dimensional subspace from the high-dimensional space is called low-rank matrix approximation: estimate a low-rank matrix A ∈ R^(m×n) such that the error E = D − A is minimized, i.e., min ‖E‖²_F = ‖D − A‖²_F s.t. rank(A) ≤ r, where r ≪ min(m, n). The best-known method for solving the low-rank matrix approximation problem is principal component analysis (PCA) [8,14,15]; when the error ‖E‖_F is small, PCA uses the singular value decomposition …
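The minimization problem stated above has a closed-form solution by the Eckart–Young theorem: truncating the SVD of D after r terms gives the best rank-r approximation in the Frobenius norm. A minimal sketch (function name illustrative):

```python
import numpy as np

def best_rank_r(D, r):
    """Best rank-r approximation of D in the Frobenius norm
    (Eckart-Young): keep the r largest singular triplets."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

The approximation error ‖D − A‖_F equals the root-sum-square of the discarded singular values, which is how PCA quantifies the information lost at a given target rank.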
4.
5.
This paper studies the Hermitian R-symmetric maximal-rank and minimal-rank solutions of the matrix equation AX = B. Using matrix rank methods, we obtain necessary and sufficient conditions for AX = B to have maximal-rank and minimal-rank solutions, together with expressions for these solutions; for the set of minimal-rank solutions, the best approximation solution is also obtained.
6.
A simple proof of the C. R. Rao inequality
By introducing the decomposition operator from the theory of multilateral matrices, the identity matrix is decomposed into orthogonal projections; then, using the facts that the rank of a projection matrix equals its trace and that matrix rank is nonnegative, a simple proof of the C. R. Rao inequality is given.
7.
8.
9.
It is proved that, in a Banach space, the identity operator on a relatively compact set can be uniformly approximated by a sequence of finite-rank continuous quasi-linear projection operators. As a consequence, a linear operator is compact if and only if it can be uniformly approximated by a sequence of finite-rank continuous homogeneous operators.
10.
11.
In this paper we illustrate some optimization challenges in the structured low rank approximation (SLRA) problem. SLRA can be described as the problem of finding a low rank approximation of an observed matrix which has the same structure as this matrix (such as Hankel). We demonstrate that the optimization problem arising is typically very difficult: in particular, the objective function is multiextremal even for simple cases. The main theme of the paper is to suggest that the difficulties described in approximating a solution of the SLRA problem open huge possibilities for the application of stochastic methods of global optimization.
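A concrete feel for the Hankel SLRA problem comes from the classical Cadzow iteration, the standard local heuristic the stochastic global methods above would compete with: alternate between projecting onto the set of rank-r matrices (SVD truncation) and onto the set of Hankel matrices (anti-diagonal averaging). A minimal sketch, with illustrative names:

```python
import numpy as np

def hankel_slra(x, r, iters=50):
    """Cadzow iteration: alternate projections between rank-r matrices
    (SVD truncation) and Hankel matrices (anti-diagonal averaging).
    A local method; it can stall in the many local minima the SLRA
    objective is known to have."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    L = n // 2 + 1                       # window length
    K = n - L + 1
    H = np.array([x[i:i + K] for i in range(L)])   # Hankel: H[i, j] = x[i+j]
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :r] * s[:r]) @ Vt[:r]  # project onto rank <= r
        y = np.zeros(n)
        cnt = np.zeros(n)
        for i in range(L):               # average along anti-diagonals
            for j in range(K):
                y[i + j] += H[i, j]
                cnt[i + j] += 1
        y /= cnt
        H = np.array([y[i:i + K] for i in range(L)])
    return y
```

A signal whose Hankel matrix is already rank r (e.g. a single exponential for r = 1) is a fixed point of the iteration; on noisy data the method only finds a local solution, which is exactly the multiextremality the paper highlights.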
12.
M. Schuermans, P. Lemmerling, S. Van Huffel 《Numerical Linear Algebra with Applications》2006,13(4):293-302
This paper extends the weighted low rank approximation (WLRA) approach to linearly structured matrices. In the case of Hankel matrices with a special block structure, an equivalent unconstrained optimization problem is derived and an algorithm for solving it is proposed. Copyright © 2005 John Wiley & Sons, Ltd.
13.
We consider the numerical solution of the continuous algebraic Riccati equation A*X + XA − XFX + G = 0, with F = F*, G = G* of low rank and A large and sparse. We develop an algorithm for the low‐rank approximation of X by means of an invariant subspace iteration on a function of the associated Hamiltonian matrix. We show that the sought‐after approximation can be obtained by a low‐rank update, in the style of the well known Alternating Direction Implicit (ADI) iteration for the linear equation, from which the new method inherits many algebraic properties. Moreover, we establish new insightful matrix relations with emerging projection‐type methods, which will help increase our understanding of this latter class of solution strategies. Copyright © 2014 John Wiley & Sons, Ltd.
14.
This paper reports on improvements to recent work on the computation of a structured low rank approximation of the Sylvester resultant matrix S(f,g) of two inexact polynomials f=f(y) and g=g(y). Specifically, it has been shown in previous work that these polynomials must be processed before a structured low rank approximation of S(f,g) is computed. The existing algorithm may still, however, yield a structured low rank approximation of S(f,g), but not a structured low rank approximation of S(g,f), which is unsatisfactory. Moreover, a structured low rank approximation of S(f,g) must be equal to, apart from permutations of its columns, a structured low rank approximation of S(g,f), but the existing algorithm does not guarantee the satisfaction of this condition. This paper addresses these issues by modifying the existing algorithm, such that these deficiencies are overcome. Examples that illustrate these improvements are shown.
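The rank structure being approximated here is classical: for exact polynomials, the rank deficiency of the Sylvester matrix equals the degree of their GCD, and the inexact-polynomial setting replaces that exact deficiency with a structured low rank approximation. A minimal sketch of the exact case (function name illustrative, coefficients highest degree first):

```python
import numpy as np

def sylvester(f, g):
    """Sylvester resultant matrix S(f,g) of two polynomials given as
    coefficient lists (highest degree first).  Its rank deficiency
    equals deg gcd(f, g); it is singular iff f and g share a root."""
    m, n = len(f) - 1, len(g) - 1        # degrees of f and g
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                   # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return S
```

For instance, f = (y−1)(y−2) and g = (y−1)(y+3) share one root, so the 4×4 Sylvester matrix has numerical rank 3; for inexact coefficients that deficiency is only approximate, which is what SLRA recovers.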
15.
The goal of this paper is to find a low‐rank approximation for a given Nth-order tensor. Specifically, we give a computable strategy on calculating the rank of a given tensor, based on approximating the solution to an NP‐hard problem. In this paper, we formulate a sparse optimization problem via an l1‐regularization to find a low‐rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaling algorithm of the proximal alternating minimization and study the theoretical convergence of this algorithm. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In the simulation experiments, the performance of our algorithm supports that our method provides an efficient estimate on the number of rank‐one tensor components in a given tensor. Moreover, this algorithm is also applied to surveillance videos for low‐rank approximation.
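The rank-one tensor components being counted here can be extracted, in the simplest case, by the higher-order power method (alternating least squares on one rank-one term). This is a common baseline, not the paper's l1-regularized algorithm; names and seeds are illustrative:

```python
import numpy as np

def rank1_tensor(T, iters=50):
    """Best rank-one approximation lam * (a o b o c) of a 3-way tensor
    by the higher-order power method: cyclically update each factor
    against the tensor contracted with the other two, then normalize."""
    a = np.random.default_rng(0).standard_normal(T.shape[0])
    b = np.random.default_rng(1).standard_normal(T.shape[1])
    c = np.random.default_rng(2).standard_normal(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c
```

On an exactly rank-one tensor the method recovers the single component in one sweep; counting the number of components in a general tensor is the NP-hard problem the paper's sparse formulation targets.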
16.
Joab R. Winkler, Madina Hasan 《Journal of Computational and Applied Mathematics》2010,234(12):3226-1603
A non-linear structure preserving matrix method for the computation of a structured low rank approximation of the Sylvester resultant matrix S(f,g) of two inexact polynomials f=f(y) and g=g(y) is considered in this paper. It is shown that considerably improved results are obtained when f(y) and g(y) are processed prior to the computation of this approximation, and that these preprocessing operations introduce two parameters. These parameters can either be held constant during the computation, which leads to a linear structure preserving matrix method, or they can be incremented during the computation, which leads to a non-linear structure preserving matrix method. It is shown that the non-linear method yields a better structured low rank approximation of S(f,g) and that the assignment of f(y) and g(y) is important, because the computed matrix may be a good structured low rank approximation of S(f,g) but a poor structured low rank approximation of S(g,f), since its numerical rank is not defined. Examples that illustrate the differences between the linear and non-linear structure preserving matrix methods, and the importance of the assignment of f(y) and g(y), are shown.
17.
This note deals with the computational problem of determining the projection of a given symmetric matrix onto the subspace of symmetric matrices that have a fixed sparsity pattern. This projection is performed with respect to a weighted Frobenius norm involving a metric that is not diagonal. It is shown that the solution to this question is computationally feasible when the metric appearing in the norm is a low rank modification to the identity. Also, generalization to perturbations of higher rank is shown to be increasingly costly in terms of computation.
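To see why the non-diagonal metric is the whole difficulty: under the plain (unweighted) Frobenius norm the projection is trivial, since the norm decouples over entries and the nearest matrix with the pattern is obtained by simply zeroing everything outside it. A sketch of that trivial baseline, with illustrative names; the weighted metric of the note couples the entries and rules out this entrywise formula:

```python
import numpy as np

def project_sparsity(A, pattern):
    """Nearest matrix to symmetric A, in the unweighted Frobenius norm,
    among matrices that vanish outside the symmetric boolean `pattern`:
    keep the entries inside the pattern, zero the rest."""
    return np.where(pattern, A, 0.0)
```

Because the unweighted norm is a sum over entries, no other matrix with the pattern can be closer, which is easy to check against any competitor.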
18.
Shengguo Li, Xiangke Liao, Jie Liu, Hao Jiang 《Numerical Linear Algebra with Applications》2016,23(4):656-673
In this paper, two accelerated divide‐and‐conquer (ADC) algorithms are proposed for the symmetric tridiagonal eigenvalue problem, which cost O(N2r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of eigenvalues. Both of these algorithms use hierarchically semiseparable (HSS) matrices to approximate some intermediate eigenvector matrices, which are Cauchy‐like matrices and are off‐diagonally low‐rank. The difference of these two versions lies in using different HSS construction algorithms, one (denoted by ADC1) uses a structured low‐rank approximation method and the other (ADC2) uses a randomized HSS construction algorithm. For the ADC2 algorithm, a method is proposed to estimate the off‐diagonal rank. Numerous experiments have been carried out to show their stability and efficiency. These algorithms are implemented in parallel in a shared memory environment, and some parallel implementation details are included. Comparing the ADCs with highly optimized multithreaded libraries such as Intel MKL, we find that ADCs could be more than six times faster for some large matrices with few deflations. Copyright © 2016 John Wiley & Sons, Ltd.
19.
Yaprak Güldoğan Dericioğlu, Muhammet Kurulay 《Mathematical Methods in the Applied Sciences》2019,42(16):5438-5445
We propose a numerical method for solving large‐scale differential symmetric Stein equations with a low‐rank constant right-hand side. Our approach is based on projecting the given problem onto a Krylov subspace and solving the resulting low-dimensional matrix problem with an integration method; the solution of the original problem is then recovered from the obtained low‐rank approximate solution. Using the extended block Arnoldi process and backward differentiation formulas (BDF), we derive expressions for the approximate solution and the corresponding residual. Some numerical results are given to show the efficiency of the proposed method.
20.
We present an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner is based on a multifrontal variant of sparse LU factorization used with nested dissection ordering. Multifrontal factorization amounts to a partial factorization of a sequence of logically dense frontal matrices, and the preconditioner is obtained if structured factorization is used instead. The latter exploits the presence of low numerical rank in some off‐diagonal blocks of the frontal matrices. An algebraic procedure is presented that identifies the hierarchy of the off‐diagonal blocks with low numerical rank based on the sparsity of the system matrix. This procedure is motivated by a model problem analysis, yet numerical experiments show that it is successful beyond the model problem scope. Further aspects relevant for the algebraic structured preconditioner are discussed and illustrated with numerical experiments. The preconditioner is also compared with other solvers, including the corresponding direct solver. Copyright © 2015 John Wiley & Sons, Ltd.