Similar Literature
20 similar articles found (search time: 15 ms)
1.
Low-rank approximation of structured matrices has wide applications in image compression, computer algebra, and speech coding. We first give projection formulas for several classes of structured matrices, and then compute the structured low-rank approximation by the method of alternating projections. Numerical experiments show that the new method is feasible.
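As a rough illustration of the alternating-projection idea in this abstract (not the authors' code), the following NumPy sketch alternates between the set of rank-k matrices (via truncated SVD) and one common class of structured matrices, the Hankel matrices (via anti-diagonal averaging); all function names are ours.

```python
import numpy as np

def project_rank(A, k):
    """Project onto the set of matrices of rank at most k (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def project_hankel(A):
    """Project onto Hankel matrices by averaging each anti-diagonal."""
    m, n = A.shape
    H = np.empty_like(A)
    for d in range(m + n - 1):
        idx = [(i, d - i) for i in range(max(0, d - n + 1), min(m, d + 1))]
        avg = np.mean([A[i, j] for i, j in idx])
        for i, j in idx:
            H[i, j] = avg
    return H

def hankel_low_rank(A, k, iters=200):
    """Alternate projections between the rank-k set and the Hankel set."""
    X = A.copy()
    for _ in range(iters):
        X = project_hankel(project_rank(X, k))
    return X
```

The returned matrix is exactly Hankel (the last projection enforces the structure) and numerically close to rank k when the input is near the intersection of the two sets.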

2.
We show that a best rank one approximation to a real symmetric tensor, which in principle can be nonsymmetric, can be chosen symmetric. Furthermore, a symmetric best rank one approximation to a symmetric tensor is unique if the tensor does not lie on a certain real algebraic variety.

3.
This article presents a multilevel parallel preconditioning technique for solving general large sparse linear systems of equations. Subdomain coloring is invoked to reorder the coefficient matrix by multicoloring the adjacency graph of the subdomains, resulting in a two-level block diagonal structure. A full binary tree structure T is then built to facilitate the construction of the preconditioner. A key property that is exploited is the observation that the difference between the inverse of the original matrix and that of its block diagonal approximation is often well approximated by a low-rank matrix. This property and the block diagonal structure of the reordered matrix lead to a multicolor low-rank (MCLR) preconditioner. The construction procedure of the MCLR preconditioner follows a bottom-up traversal of the tree T. All irregular matrix computations, such as ILU factorizations and related triangular solves, are restricted to leaf nodes, where these operations can be performed independently. Computations in nonleaf nodes involve only easy-to-optimize dense matrix operations. To further reduce the number of iterations of the preconditioned Krylov subspace procedure, we combine MCLR with a few classical block-relaxation techniques. Numerical experiments on various test problems illustrate the robustness and efficiency of the proposed approach for solving large sparse symmetric and nonsymmetric linear systems.

4.
We consider efficient methods for the recovery of block sparse signals from an underdetermined system of linear equations. We show that if the measurement matrix satisfies the block RIP with δ_{2s} < 0.4931, then every block s-sparse signal can be recovered through the proposed mixed l2/l1-minimization approach in the noiseless case, and is stably recovered in the presence of noise and mismodeling error. This improves the result of Eldar and Mishali (IEEE Trans. Inform. Theory 55:5302-5316, 2009). We also give another sufficient condition on the block RIP for such a recovery method: δ_s < 0.307.
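The mixed l2/l1 objective above can be minimized in several ways; one standard route (our illustration, not necessarily the authors' solver) is proximal gradient descent, where the proximal operator of the mixed norm is block soft-thresholding. A minimal sketch, with all names ours:

```python
import numpy as np

def block_soft_threshold(x, blocks, t):
    """Prox of the mixed l2/l1 norm: shrink each block's l2 norm by t."""
    z = x.copy()
    for b in blocks:
        nrm = np.linalg.norm(x[b])
        z[b] = 0.0 if nrm <= t else (1 - t / nrm) * x[b]
    return z

def block_ista(A, y, blocks, lam, iters=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam * sum_b ||x_b||_2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = block_soft_threshold(x - A.T @ (A @ x - y) / L, blocks, lam / L)
    return x
```

With enough measurements relative to the number of active blocks, the iteration recovers a block-sparse signal from an underdetermined system up to the small bias induced by the penalty.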

5.
In this article, we consider iterative schemes to compute the canonical polyadic (CP) approximation of quantized data generated by a function discretized on a large uniform grid in an interval on the real line. This paper continues the research on the quantics-tensor train (QTT) method ("O(d log N)-quantics approximation of N-d tensors in high-dimensional numerical modeling" in Constructive Approximation, 2011) developed for the tensor train (TT) approximation of the quantized images of function-related data. In the QTT approach, the target vector of length 2^L is reshaped to an Lth-order tensor with two entries in each mode (quantized representation) and then approximated by a QTT tensor with 2r^2 L parameters, where r is the maximal TT rank. In what follows, we consider the alternating least squares (ALS) iterative scheme to compute the rank-r CP approximation of the quantized vectors, which requires only 2rL ≪ 2^L parameters for storage. In earlier papers ("Tensor-structured numerical methods in scientific computing: survey on recent advances" in Chemom Intell Lab Syst, 2012), such a representation was called the QCan format, whereas in this paper, we abbreviate it as the QCP (quantized canonical polyadic) representation. We test the ALS algorithm for calculating the QCP approximation on various functions, and in all cases we observed exponential error decay in the QCP rank. The main idea for recovering a discretized function in the rank-r QCP format using a reduced number of functional samples, calculated only at O(2rL) grid points, is presented. A special version of the ALS scheme for solving the arising minimization problem is described. This approach can be viewed as a sparse QCP-interpolation method that allows one to recover all 2rL representation parameters of the rank-r QCP tensor. Numerical examples show the efficiency of the QCP-ALS-type iteration and indicate an exponential convergence rate in r.
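The quantization step itself is easy to see on the simplest case: an exponential vector of length 2^L has an exact quantized rank-1 (QCP rank-1) representation with only 2L parameters, because the bits of the index factorize the value. A small self-contained demonstration (the setup is ours, chosen for its known exact rank):

```python
import numpy as np

L = 10                      # the 2**L-point grid becomes L binary modes
q = 0.99
v = q ** np.arange(2 ** L)  # exponential samples: quantized rank 1

# Since i = sum_k bit_k(i) * 2**k, we have v[i] = prod_k q**(bit_k(i) * 2**k),
# so each binary mode carries the two-entry factor (1, q**(2**k)).
factors = [np.array([1.0, q ** (2 ** k)]) for k in range(L)]

# Reconstruct by an outer product over all L modes (bit 0 varies fastest).
T = factors[0]
for f in factors[1:]:
    T = np.multiply.outer(f, T)   # each new mode varies slower
w = T.reshape(-1)                 # w should equal v entry by entry
```

The representation stores sum of factor sizes = 2L numbers instead of 2^L grid values, which is the storage saving the abstract refers to (here with r = 1).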

6.
We introduce a special form of p × p × 2 (p ≥ 2) tensors obtained by multilinear orthonormal transformations, and present some interesting properties of this special form. By studying the special form, we provide a solution to a conjecture proposed by Stegeman and Comon in a conference paper (Proceedings of the EUSIPCO 2009 Conference, Glasgow, Scotland, 2009), and reveal an important conclusion about subtracting a best rank-1 approximation from p × p × 2 tensors of the special form. All of these confirm that consecutively subtracting best rank-1 approximations may not lead to a best low rank approximation of a tensor. Numerical examples illustrate the correctness of our theory. Copyright © 2011 John Wiley & Sons, Ltd.

7.
In this paper, we analyze the convergence of a projected fixed-point iteration on a Riemannian manifold of matrices with fixed rank. As a retraction method, we use the projector splitting scheme. We prove that the convergence rate of the projector splitting scheme is bounded by the convergence rate of the standard fixed-point iteration without rank constraints, multiplied by a function of the initial approximation. We also provide a counterexample for the case when the conditions of the theorem do not hold. Finally, we support our theoretical results with numerical experiments.
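For readers unfamiliar with the retraction used here, the projector splitting step (the KSL scheme of Lubich and Oseledets, as commonly described; this sketch is ours and may differ in detail from the paper's variant) maps a rank-r matrix X = U S Vᵀ plus an increment D back onto the rank-r manifold using three cheap QR-based substeps:

```python
import numpy as np

def projector_splitting_retract(U, S, Vt, D):
    """One projector-splitting (KSL) step: retract X + D onto the rank-r
    manifold, where X = U @ S @ Vt and D is an arbitrary increment."""
    V = Vt.T
    # K-step: update the column space.
    K = U @ S + D @ V
    U1, S1 = np.linalg.qr(K)
    # S-step: remove the doubly counted part of the increment.
    S0 = S1 - U1.T @ D @ V
    # L-step: update the row space.
    Lmat = V @ S0.T + D.T @ U1
    V1, S2t = np.linalg.qr(Lmat)
    return U1, S2t.T, V1.T   # new factors: X_new = U1 @ S2t.T @ V1.T
```

For D = 0 the step reproduces X exactly, and for small increments the result stays on the rank-r manifold while tracking X + D to first order.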

8.
A non-linear structure preserving matrix method for the computation of a structured low rank approximation of the Sylvester resultant matrix S(f,g) of two inexact polynomials f = f(y) and g = g(y) is considered in this paper. It is shown that considerably improved results are obtained when f(y) and g(y) are processed prior to the computation of this approximation, and that these preprocessing operations introduce two parameters. These parameters can either be held constant during the computation, which leads to a linear structure preserving matrix method, or incremented during the computation, which leads to a non-linear structure preserving matrix method. It is shown that the non-linear method yields a better structured low rank approximation of S(f,g), and that the assignment of f(y) and g(y) is important because a matrix may be a good structured low rank approximation of S(f,g) but a poor structured low rank approximation of S(g,f), whose numerical rank is not defined. Examples that illustrate the differences between the linear and non-linear structure preserving matrix methods, and the importance of the assignment of f(y) and g(y), are presented.

9.
We study the problem of reconstructing a low-rank matrix, where the input is an n × m matrix M over a field and the goal is to reconstruct a (near-optimal) matrix that is low-rank and close to M under some distance function Δ. Furthermore, the reconstruction must be local, i.e., it provides access to any desired entry of the reconstructed matrix by reading only a few entries of the input M (ideally, independent of the matrix dimensions n and m). Our formulation of this problem is inspired by the local reconstruction framework of Saks and Seshadhri (SICOMP, 2010). Our main result is a local reconstruction algorithm for the case where Δ is the normalized Hamming distance (between matrices). Given M that is ε-close to a matrix of rank d (together with d and ε), this algorithm computes with high probability a rank-d matrix that is close to M. This is a local algorithm that proceeds in two phases. The preprocessing phase reads a small number of random entries of M and stores a small data structure. The query phase deterministically outputs a desired entry by reading only the data structure and 2d additional entries of M. We also consider local reconstruction in an easier setting, where the algorithm can read an entire matrix column in a single operation. When Δ is the normalized Hamming distance between vectors, we derive an algorithm that runs in polynomial time by applying our main result for matrix reconstruction. For comparison, when Δ is the truncated Euclidean distance, we analyze sampling algorithms by using statistical learning tools. A preliminary version of this paper appears in ECCC, see: http://eccc.hpi-web.de/report/2015/128/ © 2017 Wiley Periodicals, Inc. Random Struct. Alg., 51, 607–630, 2017

10.
This paper reports on improvements to recent work on the computation of a structured low rank approximation of the Sylvester resultant matrix S(f,g) of two inexact polynomials f = f(y) and g = g(y). Specifically, it has been shown in previous work that these polynomials must be processed before a structured low rank approximation of S(f,g) is computed. The existing algorithm may still, however, yield a structured low rank approximation of S(f,g), but not a structured low rank approximation of S(g,f), which is unsatisfactory. Moreover, a structured low rank approximation of S(f,g) must be equal to, apart from permutations of its columns, a structured low rank approximation of S(g,f), but the existing algorithm does not guarantee the satisfaction of this condition. This paper addresses these issues by modifying the existing algorithm such that these deficiencies are overcome. Examples that illustrate these improvements are shown.

11.
Bayesian l0-regularized least squares is a variable selection technique for high-dimensional predictors. The challenge is optimizing a nonconvex objective function via search over model space consisting of all possible predictor combinations. Spike-and-slab (aka Bernoulli-Gaussian) priors are the gold standard for Bayesian variable selection, with a caveat of computational speed and scalability. Single best replacement (SBR) provides a fast scalable alternative. We provide a link between Bayesian regularization and proximal updating, which provides an equivalence between finding a posterior mode and a posterior mean with a different regularization prior. This allows us to use SBR to find the spike-and-slab estimator. To illustrate our methodology, we provide simulation evidence and a real data example on the statistical properties and computational efficiency of SBR versus direct posterior sampling using spike-and-slab priors. Finally, we conclude with directions for future research.
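The single best replacement idea can be sketched in a few lines: starting from an empty support, repeatedly try inserting or removing one variable and keep the single replacement that most decreases the l0-penalized least squares objective. This naive version refits by least squares at every trial (the real SBR algorithm uses efficient rank-one updates); all names are ours.

```python
import numpy as np

def sbr(A, y, lam, max_iter=100):
    """Single best replacement for min_x 0.5*||y - A x||^2 + lam*||x||_0."""
    n = A.shape[1]

    def objective(S):
        if not S:
            return 0.5 * y @ y
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ xs
        return 0.5 * r @ r + lam * len(S)

    support = []
    best = objective(support)
    for _ in range(max_iter):
        # All single replacements: insert j if absent, remove j if present.
        moves = [(support + [j]) if j not in support
                 else [i for i in support if i != j] for j in range(n)]
        vals = [objective(S) for S in moves]
        k = int(np.argmin(vals))
        if vals[k] >= best - 1e-12:
            break                      # no replacement improves the objective
        support, best = moves[k], vals[k]
    x = np.zeros(n)
    if support:
        x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x
```

Because removals are allowed, the search can undo an early wrong insertion, which is what distinguishes SBR from pure forward greedy selection.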

12.
We consider the Sylvester equation AX − XB + C = 0, where the matrix C ∈ ℝ^(n×m) is of low rank and the spectra of A ∈ ℝ^(n×n) and B ∈ ℝ^(m×m) are separated by a line. We prove that the singular values of the solution X decay exponentially; that is, for any ε ∈ (0,1) there exists a matrix X̃ of rank k = O(log(1/ε)) such that ‖X − X̃‖₂ ≤ ε‖X‖₂. As a generalization, we prove that if A, B, C are hierarchical matrices, then the solution X can be approximated in the hierarchical matrix format described in Hackbusch (Computing 2000; 62:89–108). The blockwise rank of the approximation is again proportional to log(1/ε). Copyright © 2004 John Wiley & Sons, Ltd.
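The decay phenomenon is easy to observe numerically. In the toy case below (our construction, not from the paper) A and B are diagonal with spectra on opposite sides of the imaginary axis, so the Sylvester equation decouples entrywise and the solution is a Cauchy-type matrix whose singular values fall off exponentially:

```python
import numpy as np

# Diagonal A and B with spectra separated by a line: eig(A) in [1, 50],
# eig(B) in [-50, -1], and a rank-1 right-hand side C.
n = 50
a = np.arange(1, n + 1, dtype=float)
A, B = np.diag(a), np.diag(-a)
C = -np.ones((n, n))                      # rank 1

# For diagonal A, B the equation AX - XB + C = 0 decouples:
# (a_i - b_j) X_ij = -C_ij, so here X_ij = 1 / (a_i + a_j).
X = 1.0 / (a[:, None] + a[None, :])

s = np.linalg.svd(X, compute_uv=False)    # singular values of the solution
```

Plotting s on a log scale shows the near-straight-line (exponential) decay, so a rank-k truncation with k = O(log(1/ε)) already meets an ε relative accuracy.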

13.
In this paper, we study quadratic model updating problems using symmetric low-rank corrections, which incorporate the measured model data into the analytical quadratic model to produce an adjusted model that matches the experimental model data and minimizes the distance between the analytical and updated models. We give a necessary and sufficient condition for the existence of solutions to the symmetric low-rank correcting problems under some mild conditions, and propose two algorithms for finding approximate solutions to the corresponding optimization problems. The good performance of the two algorithms is illustrated by numerical examples. Copyright © 2008 John Wiley & Sons, Ltd.

14.
This paper introduces a robust preconditioner for general sparse matrices, based on low-rank approximations of the Schur complement in a domain decomposition framework. In this 'Schur Low Rank' (SLR) preconditioning approach, the coefficient matrix is first decoupled by a graph partitioner, and then a low-rank correction is exploited to compute an approximate inverse of the Schur complement associated with the interface unknowns. The method avoids explicit formation of the Schur complement. We show the feasibility of this strategy for a model problem and conduct a detailed spectral analysis of the relation between the low-rank correction and the quality of the preconditioner. We first introduce the SLR preconditioner for symmetric positive definite matrices and for symmetric indefinite matrices whose interface matrices are symmetric positive definite. Extensions to general symmetric indefinite matrices as well as to nonsymmetric matrices are also discussed. Numerical experiments on general matrices illustrate the robustness and efficiency of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.

15.
The problem of symmetric rank-one approximation of symmetric tensors is important in independent component analysis, also known as blind source separation, as well as in polynomial optimization. We derive several perturbative results that are relevant to the well-posedness of recovering rank-one structure from approximately-rank-one symmetric tensors. We also specialize the analysis of the shifted symmetric higher-order power method, an algorithm for computing symmetric tensor eigenvectors, to approximately-rank-one symmetric tensors. Copyright © 2012 John Wiley & Sons, Ltd.
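For orientation, the shifted symmetric higher-order power method for a symmetric third-order tensor T iterates x ← normalize(T(·, x, x) + αx); on an (approximately) rank-one tensor it recovers the dominant rank-one direction. A minimal sketch (ours; the shift α must be large enough for convergence in general, and the starting vector matters):

```python
import numpy as np

def sshopm(T, x0, alpha=1.0, iters=200):
    """Shifted symmetric higher-order power method (sketch) for a symmetric
    3rd-order tensor T: iterate x <- normalize(T(., x, x) + alpha * x)."""
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        g = np.einsum('ijk,j,k->i', T, x, x) + alpha * x
        x = g / np.linalg.norm(g)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # tensor eigenvalue estimate
    return lam, x
```

On an exactly rank-one tensor T = λ u⊗u⊗u with λ > 0 and a start positively correlated with u, the iteration converges to u and returns λ.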

16.
This paper extends the weighted low rank approximation (WLRA) approach to linearly structured matrices. In the case of Hankel matrices with a special block structure, an equivalent unconstrained optimization problem is derived and an algorithm for solving it is proposed. Copyright © 2005 John Wiley & Sons, Ltd.

17.
A truncated ULV decomposition (TULVD) of an m×n matrix X of rank k is a decomposition of the form X = ULV^T + E, where U and V are left orthogonal matrices, L is a k×k non-singular lower triangular matrix, and E is an error matrix. Only U, V, L, and ‖E‖_F are stored; E itself is not. We propose algorithms for updating and downdating the TULVD. To construct these modification algorithms, we also use a refinement algorithm based upon that in (SIAM J. Matrix Anal. Appl. 2005; 27(1):198–211) that reduces ‖E‖_F, detects rank degeneracy, corrects it, and sharpens the approximation. Copyright © 2009 John Wiley & Sons, Ltd.

18.
In this paper, a successive supersymmetric rank-1 decomposition of a real higher-order supersymmetric tensor is considered. To obtain such a decomposition, we design a greedy method based on iteratively computing the best supersymmetric rank-1 approximation of the residual tensors. We further show that a supersymmetric canonical decomposition can be obtained when the method is applied to an orthogonally diagonalizable supersymmetric tensor; in particular, when the order is 2, the method generates the eigenvalue decomposition of symmetric matrices. Details of the designed algorithm and the numerical results are reported in this paper. Copyright © 2007 John Wiley & Sons, Ltd.
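The order-2 special case mentioned at the end is easy to verify directly: for a symmetric matrix, greedily subtracting the best rank-1 approximation (the eigenpair of largest magnitude) at each step reproduces the eigendecomposition exactly. A small sketch of that case (ours, using dense eigensolves rather than the paper's tensor algorithm):

```python
import numpy as np

def greedy_rank1_decomposition(A, steps):
    """Successive rank-1 deflation of a symmetric matrix: at each step
    subtract the best rank-1 approximation of the residual."""
    R = A.copy()
    pairs = []
    for _ in range(steps):
        w, V = np.linalg.eigh(R)
        i = int(np.argmax(np.abs(w)))       # eigenpair of largest magnitude
        lam, v = w[i], V[:, i]
        pairs.append((lam, v))
        R = R - lam * np.outer(v, v)        # deflate the residual
    return pairs, R
```

After n steps the residual vanishes and the collected (lam, v) pairs are exactly the eigenpairs; for order ≥ 3 tensors, as the abstract in item 6 above warns, this greedy deflation is no longer guaranteed to give a best low-rank approximation.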

19.
A new fast algebraic method for obtaining an ℋ²-approximation of a matrix from its entries is presented. The main idea behind the method is based on the nested representation and the maximum-volume principle for selecting submatrices in low-rank matrices. A special iterative approach for the computation of so-called representing sets is established. The main advantage of the method is that it uses only the hierarchical partitioning of the matrix and does not require special 'proxy surfaces' to be selected in advance. Numerical experiments for the electrostatic problem and for a boundary integral operator confirm the effectiveness and robustness of the approach. The complexity is linear in the matrix size and polynomial in the ranks. The algorithm is implemented as an open-source Python package that is available online. Copyright © 2015 John Wiley & Sons, Ltd.

20.
The p-rank of a Steiner triple system (STS) B is the dimension of the linear span of the set of characteristic vectors of blocks of B, over GF(p). We derive a formula for the number of different STSs of order v and given 2-rank r2, r2 < v, and a formula for the number of STSs of order v and given 3-rank r3, r3 < v − 1. Also, we prove that there are no STSs of 2-rank smaller than v and, at the same time, of 3-rank smaller than v − 1. Our results extend previous work on enumerating STSs according to the rank of their codes, mainly by Tonchev, V.A. Zinoviev, and D.V. Zinoviev for the binary case and by Jungnickel and Tonchev for the ternary case.
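The p-rank is concrete to compute for a small example: for the unique STS(7) (the Fano plane), the block characteristic vectors over GF(2) span the [7,4] Hamming code, so the 2-rank is 4 = v − 3. A short sketch (ours) using GF(2) Gaussian elimination:

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 vectors (Gaussian elimination)."""
    rows = [r.copy() % 2 for r in rows]
    rank, n = 0, len(rows[0])
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = (rows[i] + rows[rank]) % 2
        rank += 1
    return rank

# STS(7): the Fano plane's 7 blocks on points {0,...,6}.
fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
        {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
vecs = [np.array([1 if p in b else 0 for p in range(7)]) for b in fano]
```

Here gf2_rank(vecs) evaluates to 4, matching the known minimum 2-rank v − 3 attained by the projective STS.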

