Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
We give a complete decomposition of the space of curvature tensors having the same symmetry properties as the curvature tensor associated with a symmetric connection on a Riemannian manifold. We solve the problem under the action of SO(n). The dimensions of the factors, the projections, their norms, and the quadratic invariants of a curvature tensor are determined. Several applications to Riemannian manifolds with a symmetric connection are given. The group of projective transformations of a Riemannian manifold and its subgroups are also considered.

2.
Finding the maximum eigenvalue of a symmetric tensor is an important topic in tensor computation and numerical multilinear algebra. In this paper, we introduce a new class of structured tensors called W-tensors, which not only extends the well-studied nonnegative tensors by allowing negative entries but also covers several important tensors arising naturally from spectral hypergraph theory. We then show that finding the maximum H-eigenvalue of an even-order symmetric W-tensor is equivalent to solving a structured semidefinite program and hence can be computed in polynomial time. This yields a highly efficient semidefinite programming algorithm for computing the maximum H-eigenvalue of W-tensors, based on a new structured sums-of-squares decomposition result for a nonnegative polynomial induced by W-tensors. Numerical experiments illustrate that the proposed algorithm can successfully find the maximum H-eigenvalue of W-tensors with dimension up to 10,000, up to machine precision. As applications, we provide a polynomial-time algorithm for computing the maximum H-eigenvalues of large-size Laplacian tensors of hyperstars and hypertrees, where the algorithm can be up to 13 times faster than the state-of-the-art numerical method introduced by Ng, Qi, and Zhou in 2009. Finally, we show that the proposed algorithm can be used to test the copositivity of a multivariate form associated with symmetric extended Z-tensors, whose order may be even or odd.
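As a point of reference for the H-eigenvalue terminology used above, and not as a stand-in for the paper's SOS/semidefinite programming method, the hedged numpy sketch below checks the defining relation A x^{m-1} = lambda x^{[m-1]} (entrywise powers) for a small hand-built symmetric third-order tensor with some negative entries; the tensor, tolerance, and function names are illustrative assumptions.

```python
import numpy as np

def sym3_apply(A, x):
    """Contract a symmetric third-order tensor with x twice: (A x^2)_i = sum_{j,k} A[i,j,k] x[j] x[k]."""
    return np.einsum('ijk,j,k->i', A, x, x)

def is_h_eigenpair(A, x, lam, tol=1e-10):
    """Check the H-eigenvalue relation A x^{m-1} = lam * x^{[m-1]} (entrywise powers), here with m = 3."""
    return np.allclose(sym3_apply(A, x), lam * x**2, atol=tol)

# Illustrative diagonal tensor: A[i,i,i] = d_i, all other entries zero.
d = np.array([1.0, -2.0, 5.0])          # negative entries are allowed, as for W-tensors
A = np.zeros((3, 3, 3))
for i, di in enumerate(d):
    A[i, i, i] = di

# Each standard basis vector e_i is an H-eigenvector with H-eigenvalue d_i,
# so the maximum H-eigenvalue of this toy tensor is max(d).
for i in range(3):
    e = np.eye(3)[i]
    assert is_h_eigenpair(A, e, d[i])
print("maximum H-eigenvalue of the toy tensor:", d.max())
```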

3.
While every matrix admits a singular value decomposition, in which the terms are pairwise orthogonal in a strong sense, higher-order tensors typically do not admit such an orthogonal decomposition. Those that do have attracted attention from theoretical computer science and scientific computing. We complement this existing body of literature with an algebro-geometric analysis of the set of orthogonally decomposable tensors. More specifically, we prove that they form a real-algebraic variety defined by polynomials of degree at most four. The exact degrees, and the corresponding polynomials, differ across the three-by-two scenarios obtained by combining the tensor type (ordinary, symmetric, or alternating) with the notion of orthogonality (real-orthogonal versus complex-unitary). A key feature of our approach is a surprising connection between orthogonally decomposable tensors and semisimple algebras: associative in the ordinary and symmetric settings, and of compact Lie type in the alternating setting.

4.
Finding the minimal H-eigenvalue of tensors is an important topic in tensor computation and numerical multilinear algebra. This paper is devoted to a sum-of-squares (SOS) algorithm for computing the minimal H-eigenvalues of tensors with certain sign structures, called extended essentially nonnegative tensors (EEN-tensors), a class that includes nonnegative tensors as a subclass. In the even-order symmetric case, we first discuss the positive semidefiniteness of EEN-tensors and show that a positive semidefinite EEN-tensor is a nonnegative tensor, an M-tensor, or the sum of a nonnegative tensor and an M-tensor; we then establish a checkable sufficient condition for the SOS decomposition of EEN-tensors. Finally, we present an efficient algorithm, based on the SOS decomposition, to compute the minimal H-eigenvalues of even-order symmetric EEN-tensors. Numerical experiments are given to show the efficiency of the proposed algorithm.

5.
The goal of this paper is to find a low-rank approximation of a given nth-order tensor. Specifically, we give a computable strategy for estimating the rank of a given tensor, based on approximating the solution to an NP-hard problem. We formulate a sparse optimization problem via l1-regularization to find a low-rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaled proximal alternating minimization algorithm and study its theoretical convergence. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In simulation experiments, the performance of our algorithm supports the claim that our method provides an efficient estimate of the number of rank-one components in a given tensor. The algorithm is also applied to low-rank approximation of surveillance videos.
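The l1 ingredient mentioned above can be isolated in a short sketch: with the rank-one factors held fixed, updating the weights is a lasso problem whose proximal-gradient (ISTA) iteration uses soft-thresholding, which drives small rank-one weights exactly to zero and thereby estimates the number of rank-one components. This is only a hedged illustration of that single step on made-up toy data; it is not the paper's rescaled proximal alternating minimization algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_weights(T, factors, mu, n_iter=500):
    """Given fixed unit rank-one factors (a_r, b_r, c_r), approximately solve
       min_w 0.5 * ||vec(T) - M w||^2 + mu * ||w||_1
    by proximal gradient (ISTA), where column r of M is vec(a_r o b_r o c_r)."""
    A, B, C = factors
    R = A.shape[1]
    M = np.stack([np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r]).ravel()
                  for r in range(R)], axis=1)
    t = T.ravel()
    step = 1.0 / np.linalg.norm(M, 2) ** 2        # 1 / Lipschitz constant of the smooth part
    w = np.zeros(R)
    for _ in range(n_iter):
        grad = M.T @ (M @ w - t)
        w = soft_threshold(w - step * grad, step * mu)
    return w

# Toy data: a rank-2 tensor hidden among 5 candidate rank-one directions.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5)); B = rng.standard_normal((4, 5)); C = rng.standard_normal((4, 5))
A /= np.linalg.norm(A, axis=0); B /= np.linalg.norm(B, axis=0); C /= np.linalg.norm(C, axis=0)
true_w = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
T = np.einsum('ir,jr,kr,r->ijk', A, B, C, true_w)

w_hat = ista_weights(T, (A, B, C), mu=0.1)
print("estimated number of rank-one components:", np.sum(np.abs(w_hat) > 1e-3))
```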

6.
Real rectangular tensors arise from the strong ellipticity condition problem in solid mechanics and the entanglement problem in quantum physics. Some properties concerning the singular values of a real rectangular tensor were discussed by K. C. Chang et al. [J. Math. Anal. Appl., 2010, 370: 284–294]. In this paper, we give some new results on the Perron-Frobenius theorem for nonnegative rectangular tensors. We show that the weak Perron-Frobenius theorem remains valid and that, under some conditions, the largest singular value is geometrically simple. In addition, we establish the convergence of an algorithm proposed by K. C. Chang et al. for finding the largest singular value of nonnegative primitive rectangular tensors.

7.
The problem of symmetric rank-one approximation of symmetric tensors is important in independent component analysis, also known as blind source separation, as well as in polynomial optimization. We derive several perturbative results that are relevant to the well-posedness of recovering rank-one structure from approximately-rank-one symmetric tensors. We also specialize the analysis of the shifted symmetric higher-order power method, an algorithm for computing symmetric tensor eigenvectors, to approximately-rank-one symmetric tensors.
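For context, the shifted symmetric higher-order power method (SS-HOPM) referenced above iterates x <- normalize(T x^{m-1} + alpha * x), where the shift alpha makes the iteration monotone when chosen large enough. The numpy sketch below is a minimal version for a symmetric third-order tensor; the shift value, iteration count, and random test tensor are illustrative assumptions rather than choices taken from the paper.

```python
import numpy as np

def sym3_apply(T, x):
    """(T x^2)_i = sum_{j,k} T[i,j,k] x[j] x[k] for a symmetric third-order tensor."""
    return np.einsum('ijk,j,k->i', T, x, x)

def ss_hopm(T, alpha=2.0, n_iter=200, seed=0):
    """Shifted symmetric higher-order power iteration for a Z-eigenpair of a symmetric 3-tensor.
    Returns (lambda, x) with lambda = x . (T x^2) and ||x|| = 1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = sym3_apply(T, x) + alpha * x      # the shift keeps the iteration monotone for alpha large enough
        x = y / np.linalg.norm(y)
    lam = x @ sym3_apply(T, x)                # Rayleigh-quotient-style eigenvalue estimate
    return lam, x

# Symmetrize a random 3-tensor so the method applies.
rng = np.random.default_rng(1)
G = rng.standard_normal((5, 5, 5))
T = (G + G.transpose(0, 2, 1) + G.transpose(1, 0, 2) +
     G.transpose(1, 2, 0) + G.transpose(2, 0, 1) + G.transpose(2, 1, 0)) / 6.0

lam, x = ss_hopm(T)
print("approximate Z-eigenvalue:", lam)
```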

8.
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
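To make the normal-equations-versus-QR comparison concrete, the hedged sketch below solves a single CP-ALS least-squares subproblem (update factor A given B and C) once via the Gram-matrix normal equations and once via an explicit QR factorization of the Khatri-Rao matrix; the unfolding/Khatri-Rao ordering convention is one common choice, the toy sizes are arbitrary, and the structure-exploiting details of the paper's algorithms are omitted.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product: column r is kron(B[:, r], C[:, r])."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def unfold_mode0(X):
    """Mode-1 unfolding X_(1) of shape (I, J*K), ordered to match the Khatri-Rao rows above."""
    I, J, K = X.shape
    return X.reshape(I, J * K)

def als_update_normal_eq(X, B, C):
    """Standard CP-ALS update for factor A via the normal equations."""
    KR = khatri_rao(B, C)
    G = (B.T @ B) * (C.T @ C)                 # Gram matrix of the Khatri-Rao product
    return np.linalg.solve(G.T, (unfold_mode0(X) @ KR).T).T

def als_update_qr(X, B, C):
    """Same update via an explicit QR factorization of the Khatri-Rao matrix
    (more stable than the normal equations under ill-conditioning)."""
    KR = khatri_rao(B, C)
    Q, Rm = np.linalg.qr(KR)                  # reduced QR, KR = Q Rm
    return np.linalg.solve(Rm, Q.T @ unfold_mode0(X).T).T

# Toy third-order tensor with known CP structure.
rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
A0, B0, C0 = rng.standard_normal((I, R)), rng.standard_normal((J, R)), rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

A_ne = als_update_normal_eq(X, B0, C0)
A_qr = als_update_qr(X, B0, C0)
print("normal equations vs QR discrepancy:", np.max(np.abs(A_ne - A_qr)))
```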

9.
Finding the rank of a tensor is a problem that has many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also carry out a systematic study of various measures of orthogonality in tensor product spaces and give a new generalization of the singular value decomposition to higher-order tensors.

10.
In this paper, we develop and enrich the theory of nonnegative tensors. We define sign nonsingular tensors and establish the relationship between the combinatorial determinant and the permanent of nonnegative tensors. We generalize results from doubly stochastic matrices to totally plane stochastic tensors and obtain a probabilistic algorithm for locating a positive diagonal in a nonnegative tensor under certain conditions. We give a normalization algorithm to convert some nonnegative tensors to plane stochastic tensors. We obtain a lower bound for the minimum of the axial N-index assignment problem by means of the set of plane stochastic tensors.

11.
A symmetric tensor that has a symmetric nonnegative decomposition is called a completely positive tensor. In this paper, we characterize completely positive tensors as truncated moment sequences, transform the problem of checking whether a tensor is completely positive into checking whether its corresponding truncated moment sequence admits a representing measure, and then present a semidefinite algorithm to solve it. If a tensor is not completely positive, a certificate for this can be obtained; if it is completely positive, a nonnegative decomposition can be obtained.

12.
In this paper we discuss the notion of singular vector tuples of a complex-valued \(d\)-mode tensor of dimension \(m_1\times \cdots \times m_d\). We show that a generic tensor has a finite number of singular vector tuples, viewed as points in the corresponding Segre product. We give the formula for the number of singular vector tuples. We show similar results for tensors with partial symmetry. We give analogous results for the homogeneous pencil eigenvalue problem for cubic tensors, i.e., \(m_1=\cdots =m_d\). We show the uniqueness of best approximations for almost all real tensors in the following cases: rank-one approximation; rank-one approximation for partially symmetric tensors (this approximation is also partially symmetric); rank-\((r_1,\ldots ,r_d)\) approximation for \(d\)-mode tensors.

13.
In the tensor completion problem, one seeks to estimate a low-rank tensor based on a random sample of revealed entries. In terms of the required sample size, earlier work revealed a large gap between estimation with unbounded computational resources (using, for instance, tensor nuclear norm minimization) and polynomial-time algorithms. Among the latter, the best statistical guarantees have been proved, for third-order tensors, using the sixth level of the sum-of-squares (SOS) semidefinite programming hierarchy. However, the SOS approach does not scale well to large problem instances. By contrast, spectral methods, based on unfolding or matricizing the tensor, are attractive for their low complexity, but have been believed to require a much larger sample size. This paper presents two main contributions. First, we propose a new method, based on unfolding, which outperforms naive ones for symmetric \(k\)th-order tensors of rank \(r\). For this result we make a study of singular space estimation for partially revealed matrices of large aspect ratio, which may be of independent interest. For third-order tensors of dimension \(d\), our algorithm matches the SOS method in terms of sample size (requiring about \(rd^{3/2}\) revealed entries), subject to a worse rank condition (\(r\ll d^{3/4}\) rather than \(r\ll d^{3/2}\)). We complement this result with a different spectral algorithm for third-order tensors in the overcomplete (\(r\ge d\)) regime. Under a random model, this second approach succeeds in estimating tensors of rank \(d\le r\ll d^{3/2}\) from about \(rd^{3/2}\) revealed entries.
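A hedged schematic of the unfolding idea, not the paper's full estimator: unfold a partially observed third-order tensor into a d x d^2 matrix, rescale by the inverse of the empirical sampling rate, and take the top left singular vectors as a subspace estimate. The sampling model, rescaling, rank, and toy dimensions below are illustrative assumptions.

```python
import numpy as np

def unfold1(T):
    """Mode-1 unfolding of a d x d x d tensor into a d x d^2 matrix."""
    d = T.shape[0]
    return T.reshape(d, d * d)

def spectral_subspace_estimate(T_obs, mask, r):
    """Estimate the top-r mode-1 singular subspace from revealed entries.
    T_obs has zeros at unrevealed positions; mask is the 0/1 observation pattern."""
    p = mask.mean()                           # empirical sampling rate
    M = unfold1(T_obs) / p                    # inverse-probability rescaling of the unfolding
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :r]

# Toy symmetric rank-1 tensor of dimension d, observed entrywise with probability p.
rng = np.random.default_rng(0)
d, p = 30, 0.3
u = rng.standard_normal(d); u /= np.linalg.norm(u)
T = np.einsum('i,j,k->ijk', u, u, u)
mask = rng.random((d, d, d)) < p
T_obs = T * mask

U_hat = spectral_subspace_estimate(T_obs, mask.astype(float), r=1)
# Alignment of the estimated one-dimensional subspace with the true direction u.
print("subspace alignment |<u_hat, u>|:", abs(float(U_hat[:, 0] @ u)))
```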

14.
Singular values of a real rectangular tensor
Real rectangular tensors arise from the strong ellipticity condition problem in solid mechanics and the entanglement problem in quantum physics. In this paper, we systematically study properties of singular values of a real rectangular tensor, and give an algorithm to find the largest singular value of a nonnegative rectangular tensor. Numerical results show that the algorithm is efficient.
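For orientation, singular pairs of a rectangular tensor of order (p, q) satisfy A x^{p-1} y^q = sigma * x^{[p+q-1]} and A x^p y^{q-1} = sigma * y^{[p+q-1]}, and for nonnegative tensors the largest singular value can be approached with an NQZ-style power iteration on the pair (x, y). The numpy sketch below is a simplified, hedged illustration for p = q = 2 (shape m x m x n x n); it is not claimed to reproduce the exact iteration or convergence guarantees of the algorithm studied in the paper.

```python
import numpy as np

def contract_x(A, x, y):
    """(A x^{p-1} y^q)_i for p = q = 2: sum_{a,j,k} A[i,a,j,k] x[a] y[j] y[k]."""
    return np.einsum('iajk,a,j,k->i', A, x, y, y)

def contract_y(A, x, y):
    """(A x^p y^{q-1})_l for p = q = 2: sum_{i,a,j} A[i,a,j,l] x[i] x[a] y[j]."""
    return np.einsum('iajl,i,a,j->l', A, x, x, y)

def largest_singular_value(A, n_iter=200):
    """Simplified NQZ-style power iteration for the largest singular value of a
    nonnegative rectangular tensor of order (2, 2) and shape (m, m, n, n)."""
    m, _, n, _ = A.shape
    pq = 4                                     # p + q
    x = np.ones(m) / m ** (1.0 / pq)           # positive starting pair
    y = np.ones(n) / n ** (1.0 / pq)
    for _ in range(n_iter):
        x_new = contract_x(A, x, y) ** (1.0 / (pq - 1))
        y_new = contract_y(A, x, y) ** (1.0 / (pq - 1))
        x = x_new / np.sum(x_new ** pq) ** (1.0 / pq)   # keep sum_i x_i^{p+q} = 1
        y = y_new / np.sum(y_new ** pq) ** (1.0 / pq)
    # At a singular pair with this normalization, sigma = A x^p y^q.
    return float(np.einsum('iajk,i,a,j,k->', A, x, x, y, y))

rng = np.random.default_rng(0)
A = rng.random((3, 3, 4, 4))                   # strictly positive entries
print("largest singular value (approx.):", largest_singular_value(A))
```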

15.
We study symmetric tensor spaces and cones arising from polynomial optimization and the physical sciences. We prove a decomposition invariance theorem for linear operators over the symmetric tensor space, which leads to several other interesting properties in symmetric tensor spaces. We then consider the positive semidefiniteness of linear operators, which implies the convexity of the Frobenius norm function of a symmetric tensor. Furthermore, we characterize the symmetric positive semidefinite tensor (SDT) cone by employing the properties of linear operators, describe some face structures of its dual cone, and analyze its relationship to many other tensor cones. In particular, we show that the cone is self-dual if and only if the polynomial is quadratic, give specific characterizations of tensors that are in the primal cone but not in the dual cone in higher-order cases, and develop a complete relationship map among the tensor cones that have appeared in the literature.

16.
Tensors have been a hot topic in the past decade, and eigenvalue problems for higher-order tensors have become increasingly important in numerical multilinear algebra. Several methods for finding the Z-eigenvalues and generalized eigenvalues of symmetric tensors have been given. However, the convergence of these methods is not assured when the tensor is not symmetric but only weakly symmetric. In this paper, we give two convergent gradient projection methods for computing some generalized eigenvalues of weakly symmetric tensors. The gradient projection method with the Armijo step-size rule (AGP) can be viewed as a modification of the GEAP method. The spectral gradient projection method, which combines the BB step size with the gradient projection method, is superior to the GEAP, AG, and AGP methods. We also make comparisons among the four methods; some competitive numerical results are reported at the end of this paper.
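To fix ideas on what a gradient projection step looks like in this setting, the sketch below runs a bare-bones projected gradient ascent on the ratio f(x) = (A x^4)/(B x^4) over the unit sphere, whose critical points satisfy the generalized eigenvalue relation A x^{m-1} = lambda B x^{m-1}; a crude backtracking line search stands in for the Armijo rule, and the spectral (BB) step of the paper's stronger method is not implemented. The symmetric toy tensors, the identity-like B, and all step-size choices are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

def sym4_apply(T, x):
    """(T x^3)_i = sum_{j,k,l} T[i,j,k,l] x[j] x[k] x[l] for a symmetric fourth-order tensor."""
    return np.einsum('ijkl,j,k,l->i', T, x, x, x)

def symmetrize4(G):
    """Average over all 24 index permutations to obtain a symmetric fourth-order tensor."""
    return sum(np.transpose(G, p) for p in permutations(range(4))) / 24.0

def projected_gradient_geneig(A, B, n_iter=300, seed=0):
    """Bare-bones projected gradient ascent on f(x) = (A x^4)/(B x^4) over the unit sphere,
    with a simple backtracking line search (a crude stand-in for the Armijo rule).
    Critical points satisfy A x^3 = lambda * B x^3 with lambda = f(x)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0]); x /= np.linalg.norm(x)
    f = lambda z: (z @ sym4_apply(A, z)) / (z @ sym4_apply(B, z))
    for _ in range(n_iter):
        Ax, Bx = sym4_apply(A, x), sym4_apply(B, x)
        fA, fB = x @ Ax, x @ Bx                    # A x^4 and B x^4
        grad = 4.0 * (Ax * fB - Bx * fA) / fB**2   # gradient of the ratio
        t, fx = 1.0, fA / fB
        while t > 1e-12:
            x_new = x + t * grad
            x_new /= np.linalg.norm(x_new)         # projection back onto the unit sphere
            if f(x_new) >= fx:                     # accept any non-decreasing step
                x = x_new
                break
            t *= 0.5                               # otherwise backtrack
    return f(x), x

rng = np.random.default_rng(1)
n = 4
A = symmetrize4(rng.standard_normal((n,) * 4))
# B: identity-like tensor with B x^4 = (x.x)^2 > 0, so the ratio is well defined.
I = np.eye(n)
B = (np.einsum('ij,kl->ijkl', I, I) + np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I)) / 3.0

lam, x = projected_gradient_geneig(A, B)
print("generalized eigenvalue estimate:", lam)
print("residual ||A x^3 - lam * B x^3|| =", np.linalg.norm(sym4_apply(A, x) - lam * sym4_apply(B, x)))
```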

17.
Tensor decomposition is an important research area with numerous applications in data mining and computational neuroscience. An important class of tensor decomposition is sum-of-squares (SOS) tensor decomposition. SOS tensor decomposition has a close connection with SOS polynomials, which are very important in polynomial theory and polynomial optimization. In this paper, we give a detailed survey on recent advances in high-order SOS tensors and their applications. We first show that several classes of symmetric structured tensors available in the literature have an SOS decomposition in the even-order symmetric case. Then, the SOS-rank for tensors with an SOS decomposition and the SOS-width for SOS tensor cones are established. Further, a sharper explicit upper bound on the SOS-rank for tensors with bounded exponent is provided, and the exact SOS-width for the cone consisting of all such tensors with an SOS decomposition is identified. Some potential future research directions are also listed in this paper.

18.
In this paper, a successive supersymmetric rank-1 decomposition of a real higher-order supersymmetric tensor is considered. To obtain such a decomposition, we design a greedy method based on iteratively computing the best supersymmetric rank-1 approximation of the residual tensors. We further show that a supersymmetric canonical decomposition can be obtained when the method is applied to an orthogonally diagonalizable supersymmetric tensor; in particular, when the order is 2, the method generates the eigenvalue decomposition of symmetric matrices. Details of the algorithm and numerical results are reported in this paper.
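The greedy scheme described above can be sketched in a few lines: repeatedly compute a symmetric rank-1 approximation of the current residual and subtract it. In the hedged sketch below the rank-1 step uses a plain symmetric higher-order power iteration, which may return only a local (not the best) rank-1 approximation, so this illustrates the successive/greedy structure rather than the paper's exact method; the toy tensor is built from two orthogonal rank-one terms so the residual should shrink to (numerically) zero after two terms.

```python
import numpy as np

def sym3_apply(T, x):
    """(T x^2)_i = sum_{j,k} T[i,j,k] x[j] x[k] for a symmetric third-order tensor."""
    return np.einsum('ijk,j,k->i', T, x, x)

def sym_rank1_approx(T, n_iter=200, seed=0):
    """Symmetric higher-order power iteration for a rank-1 approximation lam * x o x o x."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0]); x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = sym3_apply(T, x)
        nrm = np.linalg.norm(y)
        if nrm == 0.0:
            break
        x = y / nrm
    lam = x @ sym3_apply(T, x)
    return lam, x

def successive_rank1(T, n_terms):
    """Greedy successive rank-1 decomposition: peel off one symmetric rank-1 term at a time."""
    residual = T.copy()
    terms = []
    for k in range(n_terms):
        lam, x = sym_rank1_approx(residual, seed=k)
        terms.append((lam, x))
        residual = residual - lam * np.einsum('i,j,k->ijk', x, x, x)
    return terms, residual

# Toy symmetric tensor built from two orthogonal rank-1 terms.
e1, e2 = np.eye(4)[0], np.eye(4)[1]
T = 3.0 * np.einsum('i,j,k->ijk', e1, e1, e1) + 1.5 * np.einsum('i,j,k->ijk', e2, e2, e2)

terms, residual = successive_rank1(T, n_terms=2)
print("recovered weights:", [round(lam, 6) for lam, _ in terms])
print("residual norm:", np.linalg.norm(residual))
```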

19.
In this paper, we mainly focus on new inclusion sets for eigenvalues of a tensor. First, we propose new inclusion sets for eigenvalues of a tensor, which are sharper than some existing inclusion sets, and obtain the law of distribution of the number of eigenvalues for a tensor. Second, two new classes of tensors are introduced. Third, some bounds on the spectral radii for nonnegative tensors are given. Fourth, some checkable sufficient conditions for the positive definiteness (positive semidefiniteness) of some classes of even-order real symmetric tensors are obtained.
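For comparison with the sharper inclusion sets developed in the paper, the classical Gershgorin-type set for tensor eigenvalues already illustrates the idea: every eigenvalue lies in the union over i of the intervals (disks) centered at the diagonal entry a_{i...i} with radius equal to the sum of absolute values of the off-diagonal entries of slice i, and for an even-order symmetric tensor intervals contained in the positive half-line certify positive definiteness. The sketch below computes these classical intervals for a fourth-order symmetric tensor; it implements the textbook set, not the new sets proposed in the paper, and the test tensor is an illustrative construction.

```python
import numpy as np
from itertools import permutations

def gershgorin_intervals(T):
    """Gershgorin-type inclusion intervals for the eigenvalues of an order-4 tensor:
    for each i, [a_iiii - r_i, a_iiii + r_i] where r_i sums the absolute values of
    all off-diagonal entries of slice i."""
    n = T.shape[0]
    out = []
    for i in range(n):
        center = T[i, i, i, i]
        radius = np.abs(T[i]).sum() - abs(center)
        out.append((center - radius, center + radius))
    return out

# Build a symmetric fourth-order tensor with a dominant diagonal.
rng = np.random.default_rng(0)
n = 3
G = 0.03 * rng.standard_normal((n,) * 4)
T = sum(np.transpose(G, p) for p in permutations(range(4))) / 24.0
for i in range(n):
    T[i, i, i, i] = 2.0

intervals = gershgorin_intervals(T)
for i, (lo, hi) in enumerate(intervals):
    print(f"inclusion interval {i}: [{lo:.3f}, {hi:.3f}]")
# If every interval lies in (0, inf), all H-eigenvalues are positive, which certifies
# positive definiteness of this even-order symmetric tensor.
print("certified positive definite:", all(lo > 0 for lo, _ in intervals))
```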

20.
Biquadratic tensors play a central role in many areas of science. Examples include the elasticity tensor and the Eshelby tensor in solid mechanics, and the Riemannian curvature tensor in relativity theory. The singular values and the spectral norm of a general third-order tensor are the square roots of the M-eigenvalues and of the spectral norm of a biquadratic tensor, respectively. The tensor product operation is closed for biquadratic tensors. All of this motivates us to study biquadratic tensors, biquadratic decompositions, and norms of biquadratic tensors. We show that the spectral norm and nuclear norm of a biquadratic tensor may be computed by using its biquadratic structure: either the number of variables is reduced, or the feasible region can be reduced. We show constructively that a biquadratic rank-one decomposition always exists for a biquadratic tensor, and that the biquadratic rank of a biquadratic tensor is preserved under an independent biquadratic Tucker decomposition. We present a lower bound and an upper bound on the nuclear norm of a biquadratic tensor. Finally, we define invertible biquadratic tensors and present a lower bound for the product of the nuclear norms of an invertible biquadratic tensor and its inverse, as well as a lower bound for the product of the nuclear norm of an invertible biquadratic tensor and the spectral norm of its inverse.
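For readers new to the M-eigenvalue terminology above: for a biquadratic tensor A = (a_{ijkl}) with partial symmetry a_{ijkl} = a_{kjil} = a_{ilkj}, an M-eigenpair (lambda, x, y) with unit x and y satisfies sum_{j,k,l} a_{ijkl} y_j x_k y_l = lambda x_i and sum_{i,k,l} a_{ijkl} x_i x_k y_l = lambda y_j, with lambda = A(x, y, x, y). The numpy sketch below verifies these relations for a hand-built tensor (an identity-like part plus a biquadratic rank-one term u o v o u o v) whose M-eigenpair is known by construction; the example is purely illustrative and not taken from the paper.

```python
import numpy as np

def m_contract_x(A, x, y):
    """(A . y x y)_i = sum_{j,k,l} A[i,j,k,l] y[j] x[k] y[l]."""
    return np.einsum('ijkl,j,k,l->i', A, y, x, y)

def m_contract_y(A, x, y):
    """(A x . x y)_j = sum_{i,k,l} A[i,j,k,l] x[i] x[k] y[l]."""
    return np.einsum('ijkl,i,k,l->j', A, x, x, y)

rng = np.random.default_rng(0)
m, n = 3, 4
u = rng.standard_normal(m); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

alpha, beta = 1.0, 2.0
I_m, I_n = np.eye(m), np.eye(n)
# Identity-like biquadratic tensor plus a biquadratic rank-one term u o v o u o v;
# both pieces satisfy the partial symmetry a_{ijkl} = a_{kjil} = a_{ilkj}.
A = alpha * np.einsum('ik,jl->ijkl', I_m, I_n) + beta * np.einsum('i,j,k,l->ijkl', u, v, u, v)

lam = alpha + beta                       # (u, v) is an M-eigenpair with this eigenvalue
print("x-equation residual:", np.linalg.norm(m_contract_x(A, u, v) - lam * u))
print("y-equation residual:", np.linalg.norm(m_contract_y(A, u, v) - lam * v))
print("A(u, v, u, v) =", float(np.einsum('ijkl,i,j,k,l->', A, u, v, u, v)), "= lambda =", lam)
```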
