Similar Literature
19 similar records found (search time: 46 ms)
1.
The Mathematical Unification of EOF, SVD, and POD (cited by 1)
Empirical orthogonal functions (EOF), singular value decomposition (SVD), and proper orthogonal decomposition (POD) are three common methods of data analysis that obtain low-dimensional approximations of high-dimensional data. Although different methods are adopted in practice depending on the problem and purpose, mathematically all three reduce to linearly representing the original data by seeking basis vectors for the given data set. Starting from EOF, this paper derives SVD by analyzing the expansion coefficients, and then derives POD under the principle of optimal approximate representation, revealing the mathematical unity of the three methods.
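As a minimal numerical illustration of this unity (not from the paper itself): the EOF basis computed from the eigendecomposition of the scatter matrix coincides, up to sign, with the right singular vectors of the centered data matrix, and the squared singular values are the corresponding eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))  # 50 samples, 8 spatial points
X = X - X.mean(axis=0)            # center the data

# EOF route: eigenvectors of the scatter matrix X^T X
C = X.T @ X
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]   # eigh returns ascending order
eofs = evecs[:, order]

# SVD route: right singular vectors of the data matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The two bases agree up to sign; s**2 equals the scatter-matrix eigenvalues
for k in range(8):
    assert np.allclose(np.abs(eofs[:, k]), np.abs(Vt[k]), atol=1e-8)
assert np.allclose(np.sort(s**2)[::-1], evals[order])
```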

2.
For any given X ∈ Q^{n×m} and Λ = diag(λ_1, …, λ_m) ∈ R^{m×m}, this paper uses the singular value decomposition, the spectral decomposition, and the QR decomposition to give general-solution expressions for regular matrix pencils (A, B) satisfying AX = BXΛ, and for those satisfying both X^H B X = I_m and AX = BXΛ.
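A hedged sketch of the two constraints in question, using the real symmetric-definite special case handled by `scipy.linalg.eigh` (the paper treats general regular pencils; the data here are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T                      # symmetric A
N = rng.standard_normal((4, 4))
B = N @ N.T + 4 * np.eye(4)      # symmetric positive-definite B

w, X = eigh(A, B)                # generalized problem A X = B X Lam
Lam = np.diag(w)

assert np.allclose(A @ X, B @ X @ Lam)       # A X = B X Lam
assert np.allclose(X.T @ B @ X, np.eye(4))   # X^H B X = I_m
```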

3.
In information retrieval, the many polysemous and near-synonymous terms in documents introduce uncertainty, which degrades retrieval performance. We therefore use information entropy and rough set theory to handle this uncertainty. We first compute the information entropy between terms in the training document set and apply fuzzy clustering to the entropy values to construct an equivalence relation between terms; based on this equivalence relation, we then propose and implement an information retrieval model built on the upper and lower approximations of rough sets. Experiments show that the model improves retrieval effectiveness.
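The paper's specific inter-term entropy measure is not given here; as a generic building block, a Shannon entropy over token frequencies might look like this (names are illustrative, not from the paper):

```python
import math
from collections import Counter

def entropy(tokens):
    """Shannon entropy (in bits) of a token frequency distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Uniform two-symbol distribution carries 1 bit; a constant stream carries 0
assert entropy(["a", "a", "b", "b"]) == 1.0
assert entropy(["a", "a", "a", "a"]) == 0.0
```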

4.
This paper discusses consistency conditions for symmetric solutions of Wang and Chang's bilinear matrix equation (A^T X A, B^T X B) = (C, D). Using the projection theorem in Hilbert space, the quotient singular value decomposition with its general-solution expression, and the canonical correlation decomposition (CCD) as effective tools, we obtain explicit analytical expressions for the anti-symmetric least-squares solutions of this matrix-pair problem and for the optimal approximate solution (with the minimum-Frobenius-norm anti-symmetric solution as a special case).

5.
To address the heavy computational cost of boundary-control problems for two-dimensional unsteady convection-diffusion processes, an optimal real-time control method based on a reduced-order model is proposed. A high-accuracy discrete state-space reduced-order model is obtained using proper orthogonal decomposition (POD), the singular value decomposition, and Galerkin projection. An optimal controller is then designed for the reduced state-space model using the discrete-time linear quadratic regulator method. Simulation of controlling the convection-diffusion process demonstrates the effectiveness and accuracy of the proposed method.
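A minimal sketch of the POD/Galerkin reduction step described above, assuming snapshot data from a full-order system (the matrices here are synthetic stand-ins, not the convection-diffusion discretization):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 100, 30, 5
# Snapshot matrix: columns are state vectors; built rank-r for illustration
snapshots = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

# POD basis: leading left singular vectors of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]

# Galerkin projection of full-order dynamics x' = A x onto the POD basis
A = rng.standard_normal((n, n))
A_r = Phi.T @ A @ Phi            # r-by-r reduced operator

# The snapshots lie in a rank-r subspace, so the basis reproduces them exactly
assert np.allclose(Phi @ (Phi.T @ snapshots), snapshots)
assert A_r.shape == (r, r)
```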

6.
QR Decomposition of Row (or Column) Symmetric Matrices (cited by 24)
We prove quantitative relations between the Q and R factors of a row (or column) symmetric matrix and the Q and R factors of its mother matrix, and give two fast algorithms. These relations greatly reduce the computation and storage required for the QR decomposition of matrices with this structure.
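As an illustration of the kind of quantitative relation established (assuming the simplest row-symmetric structure, a mother matrix stacked on itself), the R factor of the stacked matrix is √2 times the mother matrix's R factor, so only the small factorization needs to be computed:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))   # mother matrix
Q, R = np.linalg.qr(A)

# Row-symmetric matrix built by stacking the mother matrix on itself
B = np.vstack([A, A])
QB, RB = np.linalg.qr(B)

# R factor of B is sqrt(2) times the R factor of A (up to row signs)
assert np.allclose(np.abs(RB), np.sqrt(2) * np.abs(R))
# and a Q factor of B comes directly from Q: B = [Q; Q]/sqrt(2) * (sqrt(2) R)
assert np.allclose(np.vstack([Q, Q]) / np.sqrt(2) @ (np.sqrt(2) * R), B)
```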

7.
To simplify the QR decomposition of large row (column) unitary-symmetric matrices, we study the properties of such matrices and obtain some new results, giving formulas and fast algorithms for their QR decomposition. These greatly reduce the required computation and storage without loss of numerical accuracy. The results extend and enrich the work of Zou et al. (2002) and broaden its range of practical applications.

8.
By applying the singular value decomposition to the mother matrix, we obtain the singular value decomposition of a generalized row (column) unitary-symmetric matrix and, from it, the matrix's Moore-Penrose inverse; by the spectral decomposition we obtain the Moore-Penrose inverse of the mother matrix and, from it, the Moore-Penrose inverse of the generalized row (column) unitary-symmetric matrix.
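A sketch of the SVD route to the Moore-Penrose inverse for a general matrix (the paper exploits unitary-symmetric structure to avoid factoring the full matrix; this generic version just shows the underlying identity A+ = V Σ+ U^H):

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose inverse from the SVD: A+ = V diag(1/s) U^H."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only singular values above a relative tolerance
    s_inv = np.where(s > tol * s.max(), 1.0 / np.where(s > 0, s, 1.0), 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 3))
Ap = pinv_via_svd(A)

# The four Penrose conditions characterize A+
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)
assert np.allclose((Ap @ A).T, Ap @ A)
assert np.allclose(Ap, np.linalg.pinv(A))
```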

9.
We mainly discuss the problem of lifting property (u) from the Banach spaces X_n to the space PXX_n; this generalizes the corresponding result in l_p(X_n).

10.
We first introduce the linear errors-in-variables model and give a singular value decomposition (SVD) algorithm, with MATLAB source code, for computing the regression coefficients. We then show that when every variable in the model carries non-negligible error, the total least squares estimate of the regression coefficients is closer to the true coefficients, and support this with theoretical analysis and computer simulation. Finally, the model and algorithm are applied to determining the main-shock fault plane of the Wenchuan earthquake; the result is consistent with the focal mechanism solution, demonstrating the effectiveness of the model and algorithm.
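The MATLAB source is not reproduced here; a hedged Python equivalent of the SVD-based total least squares estimator (the classical formulation via the augmented matrix [A | b], with synthetic data standing in for the seismic application) might read:

```python
import numpy as np

def tls(A, b):
    """Total least squares via the SVD of the augmented matrix [A | b]."""
    n = A.shape[1]
    Z = np.column_stack([A, b])
    U, s, Vt = np.linalg.svd(Z)
    V = Vt.T
    # Solution comes from the right singular vector of the smallest singular value
    return -V[:n, n:] / V[n:, n:]

rng = np.random.default_rng(5)
x_true = np.array([2.0, -1.0])
A = rng.standard_normal((200, 2))
b = A @ x_true
# Perturb BOTH A and b: the errors-in-variables setting
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
b_noisy = b + 0.01 * rng.standard_normal(b.shape)

x_tls = tls(A_noisy, b_noisy.reshape(-1, 1)).ravel()
assert np.allclose(x_tls, x_true, atol=0.05)
```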

11.
Based on the structure of the rank-1 matrix and the different unfolding ways of a tensor, we present two types of structured tensors which contain the rank-1 tensors as special cases. We study some properties of the ranks and the best rank-r approximations of the structured tensors. Using the upper semicontinuity of the matrix rank, we show that a best rank-r approximation always exists for the structured tensors. This helps one better understand the sequential unfolding singular value decomposition (SVD) method for tensors proposed by J. Salmi et al. [IEEE Trans Signal Process, 2009, 57(12): 4719–4733] and offers a general approach to low-rank approximation of tensors. Moreover, we apply the structured tensors to estimate upper and lower bounds for the best rank-1 approximations of 3rd-order and 4th-order tensors, and to distinguish well-written from non-well-written digits.
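A minimal illustration of the unfolding-plus-truncated-SVD building block that sequential unfolding SVD relies on (mode-1 unfolding only; the paper's structured tensors generalize this):

```python
import numpy as np

rng = np.random.default_rng(6)
T = rng.standard_normal((4, 5, 6))

# Mode-1 unfolding: rearrange the tensor into a 4 x 30 matrix
T1 = T.reshape(4, 5 * 6)

# Best rank-r approximation of the unfolding via truncated SVD (Eckart-Young)
r = 2
U, s, Vt = np.linalg.svd(T1, full_matrices=False)
T1_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

assert np.linalg.matrix_rank(T1_r) == r
# Eckart-Young: the Frobenius error equals the norm of the tail singular values
err = np.linalg.norm(T1 - T1_r)
assert np.allclose(err, np.sqrt((s[r:] ** 2).sum()))
```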

12.
The pivoted QLP decomposition, introduced by Stewart [20], represents the first two steps in an algorithm which approximates the SVD. The matrix A is first factored as A Π0 = Q R, and then the matrix R^T Π1 is factored as R^T Π1 = P L^T, resulting in A = Q Π1 L P^T Π0^T, with Q and P orthogonal, L lower triangular, and Π0 and Π1 permutation matrices. Stewart noted that the diagonal elements of L approximate the singular values of A with surprising accuracy. In this paper, we provide mathematical justification for this phenomenon. If there is a gap between σ_k and σ_{k+1}, partition the matrix L into diagonal blocks L11 and L22 and off-diagonal block L21, where L11 is k-by-k. We show that the convergence of (σ_j(L11)^(-1) − σ_j^(-1))/σ_j^(-1) for j = 1, …, k, and of (σ_j(L22) − σ_{k+j})/σ_{k+j} for j = 1, …, n−k, is quadratic in the gap ratio σ_{k+1}/σ_k. The worst case is therefore at the gap, where the absolute errors ‖L11^(-1)‖ − σ_k^(-1) and ‖L22‖ − σ_{k+1} are thus cubic in σ_k^(-1) and σ_{k+1}, respectively. One order of convergence is due to the rank-revealing pivoting in the first step; then, because of the pivoting in the first step, two more orders are achieved in the second step. Our analysis assumes that Π1 = I, that is, that pivoting is done only on the first step. Although our results explain some of the properties of the pivoted QLP decomposition, they hypothesize a gap in the singular values. However, a simple example shows that the decomposition can perform well even in the absence of a gap. Thus there is more to explain, and we hope that our paper encourages others to tackle the problem. The QLP algorithm can be continued beyond the first two steps, and we make some observations concerning the asymptotic convergence. For example, we point out that repeated singular values can accelerate the convergence of individual elements. This, in addition to the relative convergence to all of the singular values being quadratic in the gap ratio, further indicates that the QLP decomposition can be powerful even when the ratios between neighboring singular values are close to one.
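A hedged numerical sketch of the two pivoted steps (using `scipy.linalg.qr` with column pivoting; the test matrix is synthetic, with a deliberate gap after σ_3):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(7)
# Build a matrix with a clear gap in its singular values
U0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
V0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
sigma = np.array([10.0, 9.0, 8.0, 0.1, 0.09, 0.08, 0.07, 0.06])
A = U0 @ np.diag(sigma) @ V0.T

# Step 1: column-pivoted QR of A:      A P0 = Q R
Q, R, p0 = qr(A, pivoting=True)
# Step 2: column-pivoted QR of R^T:    R^T P1 = P L^T
P, Lt, p1 = qr(R.T, pivoting=True)
L = Lt.T

# The diagonal of L tracks the singular values; in particular it reveals
# the gap between the three large and five small values
d = np.sort(np.abs(np.diag(L)))[::-1]
assert np.all(d[:3] > 1.0) and np.all(d[3:] < 1.0)
```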

13.
§1. Introduction. We first introduce some notation. C^{n×m} denotes the set of n×m complex matrices, and UC^{n×n} the set of unitary matrices of order n. I_n is the identity matrix of order n. A^H and A^+ denote the conjugate transpose and the Moore-Penrose generalized inverse of a matrix A, respectively. For A = (a_ij)_{s×t} and B = (b_ij)_{s×t}, A∘B = (a_ij b_ij)_{s×t} denotes the Hadamard product of A and B.

14.
In this paper, we consider generalized singular value decompositions for two tensors via the T-product. We investigate in detail the structures of the quotient singular value decomposition (T-QSVD) and the product singular value decomposition (T-PSVD) for two tensors. The algorithms are presented with numerical examples illustrating the results. As an application, we consider color image watermarking with T-QSVD and T-PSVD. The two approaches offer two advantages for color watermark processing: two color watermarks can be processed simultaneously, and only one key needs to be saved.
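The T-product underlying these decompositions can be sketched as slice-wise matrix products in the Fourier domain (a standard formulation of the T-product; this is not the paper's T-QSVD/T-PSVD code):

```python
import numpy as np

def t_product(A, B):
    """T-product of third-order tensors: frontal-slice products in Fourier space."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)   # DFT along the third (tube) mode
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 4, 5))
# Identity tensor: identity in the first frontal slice, zeros elsewhere
E = np.zeros((4, 4, 5))
E[:, :, 0] = np.eye(4)
assert np.allclose(t_product(A, E), A)   # E acts as the T-product identity
```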

15.
Henrici's transformation is the underlying scheme that generates, by cycling, Steffensen's method for approximating the solution of a nonlinear equation in several variables. The aim of this paper is to analyze the asymptotic behavior of the sequence (s_n^*) obtained by applying Henrici's transformation when the initial sequence (s_n) behaves sublinearly. We extend the work done in the regular case by Sadok [17] to vector sequences in the singular case. Under suitable conditions, we show that the slowest convergence rate of (s_n^*) is to be expected in a certain subspace N of R^p. More precisely, if we write s_n^* = s_n^{*,N} + s_n^{*,N⊥}, the orthogonal decomposition with respect to N and N⊥, then the convergence is linear for (s_n^{*,N}), while (s_n^{*,N⊥}) converges to the same limit faster than the initial sequence. In certain cases, we can have N = R^p, and the convergence is linear everywhere.

16.
The tensor SVD (t-SVD) for third-order tensors, previously proposed in the literature, has been applied successfully in many fields, such as computed tomography, facial recognition, and video completion. In this paper, we propose a method that extends a well-known randomized matrix method to the t-SVD. This method can produce a factorization with properties similar to the t-SVD, but it is more computationally efficient on very large data sets. We present details of the algorithms and theoretical results, and provide numerical results that show the promise of our approach for compressing and analyzing image-based data sets. We also present an improved analysis of randomized simultaneous iteration for matrices, which may be of independent interest to the scientific community, and we use these new results to address the convergence properties of the new randomized tensor method.
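A matrix-level sketch of the randomized method being extended (a range finder with power iterations in the Halko-Martinsson-Tropp style; parameter values are illustrative):

```python
import numpy as np

def randomized_svd(A, r, oversample=10, n_iter=2, seed=0):
    """Randomized SVD sketch: random range finder + small projected SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + oversample))
    Y = A @ Omega                        # sample the range of A
    for _ in range(n_iter):              # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for the sample
    B = Q.T @ A                          # small projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :r], s[:r], Vt[:r]

rng = np.random.default_rng(8)
A = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 200))  # rank 10
U, s, Vt = randomized_svd(A, 10)
# For an exactly rank-10 matrix, the rank-10 factorization recovers A
assert np.allclose(U @ np.diag(s) @ Vt, A, atol=1e-6)
```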

17.
Preconditioning techniques are widely used to speed up the convergence of iterative methods for solving large linear systems with sparse or dense coefficient matrices. For certain application problems, however, the standard block diagonal preconditioner makes Krylov iterative methods converge more slowly or even diverge. To handle this problem, we apply diagonal shifting and a stabilized singular value decomposition (SVD) to each diagonal block generated by the multilevel fast multipole algorithm (MLFMA), improving the stability and efficiency of the block diagonal preconditioner. Our experimental results show that the improved block diagonal preconditioner maintains the computational complexity of MLFMA, converges faster, and reduces CPU cost.
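A hedged sketch of the block-level stabilization idea: diagonal shifting followed by an SVD whose tiny singular values are floored before inversion (the shift and tolerance values are illustrative, not from the paper):

```python
import numpy as np

def stabilized_block_inverse(D, shift=1e-3, drop_tol=1e-8):
    """Invert a diagonal block after a diagonal shift and SVD stabilization."""
    Ds = D + shift * np.eye(D.shape[0])       # diagonal shifting
    U, s, Vt = np.linalg.svd(Ds)
    # Floor singular values below a relative tolerance to bound the inverse
    s = np.where(s > drop_tol * s[0], s, drop_tol * s[0])
    return Vt.T @ np.diag(1.0 / s) @ U.T

# Nearly singular block: plain inversion would amplify noise by ~1e12
D = np.diag([1.0, 1e-12])
M = stabilized_block_inverse(D)
assert np.linalg.cond(M) < 1e9   # the stabilized inverse stays well behaved
```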

18.
§1. Introduction. The problem of ill-conditioning and its statistical consequences for a linear regression model are well known in statistics (Vinod & Ullah 1981; Belsley 1991). It is, for instance, known that one of the major consequences of ill-conditioning for the least squares (LS) regression estimator is that the estimator produces a large sampling variance, which in turn might inappropriately lead to the exclusion of an otherwise significant coefficient from the model, and the signs of coefficients can be contrary to intuition, etc. To circumvent this problem, many bi…

19.
Fan, Wang, and Zhong estimated the difference between the singular vectors of a matrix and those of a perturbed matrix in terms of the maximum norm. Their estimates were used effectively to establish the asymptotic properties of robust covariance estimators (see Journal of Machine Learning Research, 2018;18:1-42). In this paper, we give the corresponding lower bound estimates, which show that Fan, Wang, and Zhong's estimates are optimal.
