Similar literature
20 similar documents found (search time: 437 ms)
1.
It has been shown that a best rank-R approximation of an order-k tensor may not exist when R ≥ 2 and k ≥ 3. This poses a serious problem for data analysts using tensor decompositions. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and subtracting best rank-1 approximations. The reason is that subtracting a best rank-1 approximation generally does not decrease tensor rank. In this paper, we provide a mathematical treatment of this property for real-valued 2 × 2 × 2 tensors, with symmetric tensors as a special case. Regardless of the symmetry, we show that for generic 2 × 2 × 2 tensors (which have rank 2 or 3), subtracting a best rank-1 approximation results in a tensor that has rank 3 and lies on the boundary between the rank-2 and rank-3 sets. Hence, for a typical tensor of rank 2, subtracting a best rank-1 approximation increases the tensor rank.
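As a concrete illustration of the subtraction experiment discussed above, the following NumPy sketch computes a best rank-1 approximation of a random 2 × 2 × 2 tensor by alternating power iterations and subtracts it. The function `best_rank1` and the ALS-style scheme are illustrative assumptions, not the paper's construction (the paper is a theoretical analysis):

```python
import numpy as np

def best_rank1(T, iters=200, seed=0):
    """Approximate best rank-1 term of a 3-way tensor via alternating
    power iterations (a standard ALS sketch; local optimum only)."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)   # c carries the scale
    return np.einsum('i,j,k->ijk', a, b, c)

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2, 2))   # generic tensor: rank 2 or 3
R = T - best_rank1(T)                # residual after subtraction
print(np.linalg.norm(R) <= np.linalg.norm(T))
```

Inspecting the (numerical) rank of the residual `R` then shows the phenomenon the abstract describes: the residual is generically not of lower rank than `T`.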

2.
As computing power increases, many more problems in engineering and data analysis involve computation with tensors, or multi-way data arrays. Most applications involve computing a decomposition of a tensor into a linear combination of rank-1 tensors. Ideally, the decomposition involves a minimal number of terms, i.e. computation of the rank of the tensor. Tensor rank is not a straightforward extension of matrix rank. A constructive proof based on an eigenvalue criterion is provided that shows when a 2 × 2 × 2 tensor over ℝ is rank-3 and when it is rank-2. The results are extended to show that n × n × 2 tensors over ℝ have maximum possible rank n + k, where k is the number of complex-conjugate eigenvalue pairs of the matrices forming the two faces of the tensor cube.
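The eigenvalue criterion summarized here can be sketched numerically. The helper `rank_2x2x2` below is a hypothetical paraphrase of the stated criterion (faces A, B of the tensor cube with A nonsingular; repeated or degenerate eigenvalues would need separate treatment):

```python
import numpy as np

def rank_2x2x2(A, B):
    """Paraphrased criterion: a complex-conjugate eigenvalue pair of
    A^{-1} B signals rank 3; distinct real eigenvalues signal rank 2."""
    ev = np.linalg.eigvals(np.linalg.solve(A, B))
    return 3 if np.iscomplex(ev).any() else 2

A = np.eye(2)
rot = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation: eigenvalues ±i
r_rot = rank_2x2x2(A, rot)                 # complex pair -> rank 3
r_diag = rank_2x2x2(A, np.diag([1.0, 2.0]))  # real, distinct -> rank 2
print(r_rot, r_diag)
```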

3.
4.

The tensor rank decomposition, or canonical polyadic decomposition, is the decomposition of a tensor into a sum of rank-1 tensors. The condition number of the tensor rank decomposition measures the sensitivity of the rank-1 summands with respect to structured perturbations. Those are perturbations preserving the rank of the tensor that is decomposed. On the other hand, the angular condition number measures the perturbations of the rank-1 summands up to scaling. We show for random rank-2 tensors that the expected value of the condition number is infinite for a wide range of choices of the density. Under a mild additional assumption, we show that the same is true for most higher ranks \(r\ge 3\) as well. In fact, as the dimensions of the tensor tend to infinity, asymptotically all ranks are covered by our analysis. On the contrary, we show that rank-2 tensors have finite expected angular condition number. Based on numerical experiments, we conjecture that this could also be true for higher ranks. Our results underline the high computational complexity of computing tensor rank decompositions. We discuss consequences of our results for algorithm design and for testing algorithms computing tensor rank decompositions.


5.
Low rank Tucker-type tensor approximation to classical potentials
This paper investigates best rank-(r_1,…,r_d) Tucker tensor approximation of higher-order tensors arising from the discretization of linear operators and functions in ℝ^d. Super-convergence of the best rank-(r_1,…,r_d) Tucker-type decomposition with respect to the relative Frobenius norm is proven. Dimensionality reduction by the two-level Tucker-to-canonical approximation is discussed. Tensor-product representation of basic multilinear algebra operations is considered, including inner, outer and Hadamard products. Furthermore, we focus on fast convolution of higher-order tensors represented by the Tucker/canonical models. Optimized versions of the orthogonal alternating least-squares (ALS) algorithm are presented, taking into account the different formats of input data. We propose and numerically test the mixed CT-model, which is based on the additive splitting of a tensor into a sum of canonical and Tucker-type representations; it allows one to stabilize the ALS iteration in the case of "ill-conditioned" tensors. The best rank-(r_1,…,r_d) Tucker decomposition is applied to 3D tensors generated by classical potentials with x, y ∈ ℝ^d. Numerical results for tri-linear decompositions illustrate exponential convergence in the Tucker rank, and robustness of the orthogonal ALS iteration.

6.
We introduce a special form of p × p × 2 (p ≥ 2) tensors obtained by multilinear orthonormal transformations, and present some interesting properties of this special form. Through an analysis of the special form, we provide a solution to a conjecture proposed by Stegeman and Comon in a conference paper (Proceedings of the EUSIPCO 2009 Conference, Glasgow, Scotland, 2009), and reveal an important conclusion about subtracting a best rank-1 approximation from p × p × 2 tensors of the special form. All of this confirms that consecutively subtracting best rank-1 approximations may not lead to a best low-rank approximation of a tensor. Numerical examples show the correctness of our theory. Copyright © 2011 John Wiley & Sons, Ltd.

7.
As is well known, a rank-r matrix can be recovered from a cross of r linearly independent columns and rows, and an arbitrary matrix can be interpolated on the cross entries. The other entries of this cross or pseudo-skeleton approximation are reproduced with errors depending on the closeness of the matrix to a rank-r matrix, as well as on the choice of cross. In this paper we extend this construction to d-dimensional arrays (tensors) and suggest a new interpolation formula in which a d-dimensional array is interpolated on the entries of some TT-cross (tensor-train cross). The total number of entries and the complexity of our interpolation algorithm depend on d linearly, so the approach does not suffer from the curse of dimensionality. We also propose a TT-cross method for computation of d-dimensional integrals and apply it to examples with dimensionality ranging from d = 100 up to d = 4000 at relative accuracy of order 10^{-10}. In all constructions we capitalize on the new tensor decomposition in the form of tensor trains (TT-decomposition).
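For context, the dense TT-SVD construction underlying tensor trains can be sketched as follows. Note this is the full-data baseline, whereas the TT-cross method of the abstract samples only O(d) fibers and never forms the dense tensor:

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Decompose a d-way array into tensor-train cores by sequential
    truncated SVDs (basic TT-SVD; a dense sketch, not TT-cross)."""
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for n in shape[:-1]:
        M = M.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int((s > eps * s[0]).sum()))
        cores.append(U[:, :keep].reshape(r, n, keep))
        M = s[:keep, None] * Vt[:keep]
        r = keep
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

# Reconstruct and check on a random 4-way tensor
T = np.random.default_rng(0).standard_normal((3, 4, 2, 5))
cores = tt_svd(T)
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
full = full.reshape(T.shape)  # drop leading/trailing singleton rank axes
print(np.allclose(full, T))
```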

8.
Nonnegative tensor decomposition allows us to analyze data in their 'native' form and to present results as a sum of rank-1 tensors that does not nullify any parts of the factors. In this paper, we propose the geometrical structure of a basis vector frame for sum-of-rank-1 type decomposition of real-valued nonnegative tensors. The decomposition we propose reinterprets the orthogonality property of the singular vectors of matrices as a geometric constraint on the rank-1 matrix bases, which leads to a geometrically constrained singular-vector frame. Relaxing the orthogonality requirement, we develop a set of structured bases that can be utilized to decompose any tensor into a similarly constrained sum-of-rank-1 decomposition. The proposed approach is essentially a reparametrization and gives an upper bound on the rank of tensors. We first describe the general case of tensor decomposition and then extend it to its nonnegative form. At the end of this paper, we show numerical results which conform to the proposed tensor model and utilize it for nonnegative data decomposition.

9.
The importance of unsupervised clustering and topic modeling is well recognized with ever-increasing volumes of text data available from numerous sources. Nonnegative matrix factorization (NMF) has proven to be a successful method for cluster and topic discovery in unlabeled data sets. In this paper, we propose a fast algorithm for computing NMF using a divide-and-conquer strategy, called DC-NMF. Given an input matrix where the columns represent data items, we build a binary tree structure of the data items using a recently proposed efficient algorithm for computing rank-2 NMF, and then gather information from the tree to initialize the rank-k NMF, which needs only a few iterations to reach a desired solution. We also investigate various criteria for selecting the node to split when growing the tree. We demonstrate the scalability of our algorithm for computing general rank-k NMF as well as its effectiveness in clustering and topic modeling for large-scale text data sets, by comparing it to other frequently utilized state-of-the-art algorithms. The value of the proposed approach lies in the highly efficient and accurate method for initializing rank-k NMF and the scalability achieved from the divide-and-conquer approach of the algorithm and properties of rank-2 NMF. In summary, we present efficient tools for analyzing large-scale data sets, and techniques that can be generalized to many other data analytics problem domains along with an open-source software library called SmallK.
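A minimal rank-k NMF baseline, using Lee-Seung multiplicative updates rather than the DC-NMF algorithm of the abstract (the function name and update scheme are generic assumptions, included only to make the factorization concrete):

```python
import numpy as np

def nmf_mu(A, k, iters=500, seed=0):
    """Rank-k NMF by multiplicative updates: A ≈ W @ H with W, H >= 0.
    A generic baseline, not DC-NMF."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-12)   # update H, keep >= 0
        W *= (A @ H.T) / (W @ H @ H.T + 1e-12)   # update W, keep >= 0
    return W, H

rng = np.random.default_rng(1)
A = rng.random((20, 30))          # nonnegative data matrix
W, H = nmf_mu(A, 2)               # rank-2 factorization, as in DC-NMF's tree
err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
print(err < 1.0)
```

DC-NMF's key idea is to apply such rank-2 factorizations recursively to split the columns into a binary tree and use the leaves to initialize a full rank-k NMF.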

10.
In the tensor completion problem, one seeks to estimate a low-rank tensor based on a random sample of revealed entries. In terms of the required sample size, earlier work revealed a large gap between estimation with unbounded computational resources (using, for instance, tensor nuclear norm minimization) and polynomial-time algorithms. Among the latter, the best statistical guarantees have been proved, for third-order tensors, using the sixth level of the sum-of-squares (sos) semidefinite programming hierarchy. However, the sos approach does not scale well to large problem instances. By contrast, spectral methods—based on unfolding or matricizing the tensor—are attractive for their low complexity, but have been believed to require a much larger sample size. This paper presents two main contributions. First, we propose a new method, based on unfolding, which outperforms naive ones for symmetric kth-order tensors of rank r. For this result we make a study of singular space estimation for partially revealed matrices of large aspect ratio, which may be of independent interest. For third-order tensors, our algorithm matches the sos method in terms of sample size (requiring about r d^{3/2} revealed entries), subject to a worse rank condition (r ≲ d^{3/4} rather than r ≲ d^{3/2}). We complement this result with a different spectral algorithm for third-order tensors in the overcomplete (r ≥ d) regime. Under a random model, this second approach succeeds in estimating tensors of rank d ≤ r ≲ d^{3/2} from about r d^{3/2} revealed entries. © 2018 Wiley Periodicals, Inc.

11.
We investigate lower bounds for the eigenvalues of perturbations of matrices. In the footsteps of Weyl and Ipsen & Nadler, we develop approximating matrices whose eigenvalues are lower bounds for the eigenvalues of the perturbed matrix. The number of available eigenvalues and eigenvectors of the original matrix determines how close those approximations can be, and, if the perturbation is of low rank, such bounds are relatively inexpensive to obtain. Moreover, because the process need not be restricted to the eigenvalues of perturbed matrices, lower bounds for eigenvalues of bordered diagonal matrices as well as for singular values of rank-k perturbations and other updates of n×m matrices are given.

12.
Using the rank-1 reduction formula and the vector space spanned by the real rank-1 matrices, we present a different way to show that the maximum possible rank of 2 × 2 × 2 tensors over the real field is 3. We then obtain that the maximum rank of 2 × 2 × 2 × 2 tensors over the real field is at most 5, and propose another way to show that the maximum rank of 2 × 2 × 2 × 2 tensors over the complex field is 4, except in one special case.

13.
We characterize linear rank-k nonincreasing, rank-k preserving, and corank-k preserving maps on B(H), the algebra of all bounded linear operators on the Hilbert space H. This unifies and extends finite-dimensional results and results on linear rank-1 non-increasing and rank-1 preserving maps in the infinite-dimensional case. We conclude with an application to *-semigroup isomorphisms of operator ideals.

14.
In this paper, we provide two generalizations of the CUR matrix decomposition Y = CUR (also known as the pseudo-skeleton approximation method [1]) to the case of N-way arrays (tensors). These generalizations, which we call Fiber Sampling Tensor Decomposition types 1 and 2 (FSTD1 and FSTD2), provide explicit formulas for the parameters of a rank-(R,R,…,R) Tucker representation (the core tensor of size R × R × ⋯ × R and the matrix factors of sizes I_n × R, n = 1,2,…,N) based only on some selected entries of the original tensor. FSTD1 uses P^{N-1} (P ≥ R) n-mode fibers of the original tensor, while FSTD2 uses exactly R fibers in each mode as matrix factors, as suggested by the existence theorem provided in Oseledets et al. (2008) [2], with a core tensor defined in terms of the entries of a subtensor of size R × R × ⋯ × R. For N = 2 our results reduce to the already known CUR matrix decomposition, where the core matrix is defined as the inverse of the intersection submatrix, i.e. U = W^{-1}. Additionally, we provide an adaptive algorithm for the selection of proper fibers in the FSTD1 model, which is useful for large-scale applications. Several numerical results are presented showing the performance of our FSTD1 adaptive algorithm compared to two recently proposed approximation methods for 3-way tensors.
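The N = 2 base case, i.e. the classical cross/CUR approximation with core U = W^{-1}, can be sketched directly. The row/column indices below are chosen arbitrarily for illustration, assuming the intersection block W is nonsingular (which holds generically):

```python
import numpy as np

# Cross / CUR approximation: pick r columns C and r rows R of Y; with
# W the r-by-r intersection block, Y ≈ C W^{-1} R, and the
# approximation is exact when rank(Y) = r and W is nonsingular.
rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 10))  # rank 3

cols, rows = [0, 1, 2], [0, 1, 2]       # illustrative index choice
C, R = Y[:, cols], Y[rows, :]
W = Y[np.ix_(rows, cols)]               # intersection submatrix
Y_cur = C @ np.linalg.solve(W, R)       # C W^{-1} R without forming W^{-1}
print(np.allclose(Y, Y_cur))
```

FSTD1/FSTD2 generalize exactly this recipe to N-way arrays, replacing the selected rows and columns by selected n-mode fibers.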

15.
Truncated singular value decomposition is a popular method for solving linear discrete ill-posed problems with a small to moderately sized matrix A. Regularization is achieved by replacing the matrix A by its best rank-k approximant, which we denote by A_k. The rank may be determined in a variety of ways, for example by the discrepancy principle or the L-curve criterion. This paper describes a novel regularization approach, in which A is replaced by the closest matrix, in a unitarily invariant matrix norm, with the same spectral condition number as A_k. Computed examples illustrate that this regularization approach often yields approximate solutions of higher quality than the replacement of A by A_k. Copyright © 2014 John Wiley & Sons, Ltd.
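A minimal truncated-SVD sketch on a standard ill-conditioned model problem. The Hilbert-type matrix and the truncation level k = 6 are illustrative choices, not taken from the paper:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """TSVD-regularized solution: keep only the k largest singular
    values of A when forming the pseudoinverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Ill-posed model problem: Hilbert matrix, mildly noisy right-hand side
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.default_rng(0).standard_normal(n)

x_tsvd = tsvd_solve(A, b, k=6)      # regularized solution
x_naive = np.linalg.solve(A, b)     # unregularized: noise blows up
print(np.linalg.norm(x_tsvd - x_true) < np.linalg.norm(x_naive - x_true))
```

The paper's proposal differs in what replaces A: not A_k itself, but the nearest matrix sharing A_k's spectral condition number.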

16.
Let n and r be positive integers with 1 < r < n and let K(n,r) consist of all transformations on X_n = {1,…,n} having image size less than or equal to r. For 1 < r < n, there exist rank-r elements of K(n,r) which are not the product of two rank-r idempotents. With this limitation in mind, we prove that for fixed r, and for all n large enough relative to r, there exists a minimal idempotent generating set U of K(n,r) such that all rank-r elements of K(n,r) are contained in U^3. Moreover, for all n > r > 1, there exists a minimal idempotent generating set W for K(n,r) such that not every rank-r element is contained in W^3.

17.
In this paper we discuss the notion of singular vector tuples of a complex-valued d-mode tensor of dimension m_1 × ⋯ × m_d. We show that a generic tensor has a finite number of singular vector tuples, viewed as points in the corresponding Segre product. We give the formula for the number of singular vector tuples. We show similar results for tensors with partial symmetry. We give analogous results for the homogeneous pencil eigenvalue problem for cubic tensors, i.e., m_1 = ⋯ = m_d. We show the uniqueness of best approximations for almost all real tensors in the following cases: rank-one approximation; rank-one approximation for partially symmetric tensors (this approximation is also partially symmetric); rank-(r_1,…,r_d) approximation for d-mode tensors.

18.
We study the least squares functional of the canonical polyadic tensor decomposition for third-order tensors by eliminating one factor matrix, which leads to a reduced functional. An analysis of the reduced functional leads to several equivalent optimization problems, such as a Rayleigh quotient or a projection. These formulations are the basis of several new algorithms: the Centroid Projection method for efficient computation of suboptimal solutions, and fixed-point iteration methods for approximating the best rank-1 and the best rank-R decompositions under certain nondegeneracy conditions. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems. Research supported in part by the Joint Services Electronics Program, contract no. F49620-90-C-0039.
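The normal-equations (CSNE-flavored) view of a rank-k downdate can be sketched as follows. This illustrates the algebra only; as the abstract notes, QR/Gram-Schmidt-based variants are preferred for numerical accuracy on ill-conditioned problems:

```python
import numpy as np

# Deleting k rows Z (with data d) from min ||Ax - b|| updates the
# normal equations by a rank-k downdate:  A'^T A' = A^T A - Z^T Z.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
b = rng.standard_normal(50)
k = 3
Z, d = A[-k:], b[-k:]                  # the k rows to delete
G = A.T @ A - Z.T @ Z                  # rank-k downdated Gram matrix
y = A.T @ b - Z.T @ d                  # downdated right-hand side
x_down = np.linalg.solve(G, y)
x_ref, *_ = np.linalg.lstsq(A[:-k], b[:-k], rcond=None)  # solve from scratch
print(np.allclose(x_down, x_ref))
```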

20.
The anti-reflective boundary condition for image restoration was recently introduced as a mathematically desirable alternative to other boundary conditions presently represented in the literature. It has been shown that, given a centrally symmetric point spread function (PSF), this boundary condition gives rise to a structured blurring matrix, a submatrix of which can be diagonalized by the discrete sine transform (DST), leading to an O(n² log n) solution algorithm for an image of size n × n. In this paper, we obtain a Kronecker product approximation of the general structured blurring matrix that arises under this boundary condition, regardless of symmetry properties of the PSF. We then demonstrate the usefulness and efficiency of our approximation in an SVD-based restoration algorithm, the computational cost of which would otherwise be prohibitive. Copyright © 2005 John Wiley & Sons, Ltd.
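The standard recipe for a nearest Kronecker product approximation (in the spirit of the Van Loan-Pitsianis construction) rearranges the matrix so that the problem becomes a nearest rank-1 approximation, solvable by SVD. A generic sketch, not the paper's anti-reflective blurring matrix:

```python
import numpy as np

def nearest_kron(M, m, n, p, q):
    """Best B (m x n), C (p x q) with M ≈ kron(B, C) in Frobenius norm,
    from the leading singular pair of a rearrangement of M."""
    # Rearrange so that kron(B, C) becomes the rank-1 matrix vec(B) vec(C)^T
    R = M.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = (np.sqrt(s[0]) * U[:, 0]).reshape(m, n)
    C = (np.sqrt(s[0]) * Vt[0]).reshape(p, q)
    return B, C

# Sanity check: exact recovery when M is itself a Kronecker product
rng = np.random.default_rng(0)
B0, C0 = rng.standard_normal((3, 4)), rng.standard_normal((2, 5))
M = np.kron(B0, C0)
B, C = nearest_kron(M, 3, 4, 2, 5)
print(np.allclose(np.kron(B, C), M))
```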


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号