1.
A nonlinearly preconditioned conjugate gradient algorithm for rank‐R canonical tensor approximation
Alternating least squares (ALS) is often considered the workhorse algorithm for computing the rank‐R canonical tensor approximation, but for certain problems, its convergence can be very slow. The nonlinear conjugate gradient (NCG) method was recently proposed as an alternative to ALS, but the results indicated that NCG is usually not faster than ALS. To improve the convergence speed of NCG, we consider a nonlinearly preconditioned NCG (PNCG) algorithm for computing the rank‐R canonical tensor decomposition. Our approach uses ALS as a nonlinear preconditioner in the NCG algorithm. Alternatively, NCG can be viewed as an acceleration process for ALS. We demonstrate numerically that the convergence acceleration mechanism in PNCG often leads to important pay‐offs for difficult tensor decomposition problems, with convergence that is significantly faster and more robust than for the stand‐alone NCG or ALS algorithms. We consider several approaches for incorporating the nonlinear preconditioner into the NCG algorithm that have been described in the literature previously and have met with success in certain application areas. However, it appears that the nonlinearly PNCG approach has received relatively little attention in the broader community and remains underexplored both theoretically and experimentally. Thus, this paper serves several additional functions, by providing in one place a concise overview of several PNCG variants and their properties that have only been described in a few places scattered throughout the literature, by systematically comparing the performance of these PNCG variants for the tensor decomposition problem, and by drawing further attention to the usefulness of nonlinearly PNCG as a general tool. In addition, we briefly discuss the convergence of the PNCG algorithm. In particular, we obtain a new convergence result for one of the PNCG variants under suitable conditions, building on known convergence results for non‐preconditioned NCG. Copyright © 2014 John Wiley & Sons, Ltd.
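The abstract does not give the update formulas, but the ALS building block that PNCG uses as a nonlinear preconditioner is standard. Below is a minimal NumPy sketch of one ALS sweep for a rank‐R CP model of a third-order tensor; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product of X (m x R) and Y (n x R), giving (m*n) x R."""
    m, R = X.shape
    n, _ = Y.shape
    return (X[:, None, :] * Y[None, :, :]).reshape(m * n, R)

def als_sweep(T, A, B, C):
    """One ALS sweep for a rank-R CP approximation of a third-order tensor T (I x J x K).

    Each factor matrix is updated by solving the linear least-squares subproblem
    obtained by fixing the other two factors (the classical ALS updates)."""
    I, J, K = T.shape
    T1 = T.reshape(I, J * K)                         # mode-1 unfolding
    A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    T2 = T.transpose(1, 0, 2).reshape(J, I * K)      # mode-2 unfolding
    B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    T3 = T.transpose(2, 0, 1).reshape(K, I * J)      # mode-3 unfolding
    C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

A PNCG iteration would feed the output of such a sweep into the nonlinear CG update as the preconditioned direction; that coupling is the subject of the paper and is not reproduced here.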
2.
The goal of this paper is to find a low‐rank approximation for a given nth-order tensor. Specifically, we give a computable strategy for calculating the rank of a given tensor, based on approximating the solution to an NP‐hard problem. In this paper, we formulate a sparse optimization problem via an l1‐regularization to find a low‐rank approximation of tensors. To solve this sparse optimization problem, we propose a rescaling algorithm of the proximal alternating minimization and study the theoretical convergence of this algorithm. Furthermore, we discuss the probabilistic consistency of the sparsity result and suggest a way to choose the regularization parameter for practical computation. In the simulation experiments, the performance of our algorithm indicates that our method provides an efficient estimate of the number of rank‐one tensor components in a given tensor. Moreover, the algorithm is also applied to surveillance videos for low‐rank approximation.
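The abstract does not state the update formulas, so the sketch below only illustrates the generic ingredient an l1‐regularized scheme of this kind relies on: the proximal operator of the l1 penalty (soft thresholding) applied to a vector of rank‐one component weights, whose surviving nonzeros give a rank estimate. The names and the framing are assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_1: shrinks each weight toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Hypothetical usage: weights of candidate rank-one components after one
# proximal step; the components whose weights survive give a rank estimate.
weights = np.array([5.2, 3.1, 0.4, 0.05, 0.02])
shrunk = soft_threshold(weights, tau=0.3)
estimated_rank = int(np.count_nonzero(shrunk))
print(shrunk, estimated_rank)
```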
3.
Michael J. O'Hara, Numerical Linear Algebra with Applications, 2014, 21(1): 1-12
The problem of symmetric rank‐one approximation of symmetric tensors is important in independent components analysis, also known as blind source separation, as well as polynomial optimization. We derive several perturbative results that are relevant to the well‐posedness of recovering rank‐one structure from approximately‐rank‐one symmetric tensors. We also specialize the analysis of the shifted symmetric higher‐order power method, an algorithm for computing symmetric tensor eigenvectors, to approximately‐rank‐one symmetric tensors. Copyright © 2012 John Wiley & Sons, Ltd.
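For reference, the shifted symmetric higher‐order power method iterates x ← (T·x^{m−1} + αx)/‖T·x^{m−1} + αx‖. A minimal NumPy sketch for a symmetric third‐order tensor, with an illustrative shift and stopping rule (not the paper's specialized analysis), is:

```python
import numpy as np

def sshopm(T, x0, alpha=1.0, iters=200, tol=1e-10):
    """Shifted symmetric higher-order power method for a symmetric 3rd-order tensor T.

    Iterates x <- normalize(T x^2 + alpha x) and returns (eigenvalue, eigenvector)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        Tx2 = np.einsum('ijk,j,k->i', T, x, x)      # contraction T x^2
        x_new = Tx2 + alpha * x                      # shifted update
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)       # associated tensor eigenvalue
    return lam, x
```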
4.
Least‐squares solutions and least‐rank solutions of the matrix equation AXA* = B and their relations
Yongge Tian, Numerical Linear Algebra with Applications, 2013, 20(5): 713-722
A Hermitian matrix X is called a least‐squares solution of the inconsistent matrix equation AXA* = B, where B is Hermitian and A* denotes the conjugate transpose of A, if it minimizes the F‐norm of B − AXA*; it is called a least‐rank solution of AXA* = B if it minimizes the rank of B − AXA*. In this paper, we study these two types of solutions by using generalized inverses of matrices and some matrix decompositions. In particular, we derive necessary and sufficient conditions for the two types of solutions to coincide. Copyright © 2012 John Wiley & Sons, Ltd.
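The paper's characterizations are not reproduced here, but one standard least‐squares candidate built from the Moore–Penrose inverse is X = A⁺B(A⁺)*, which is Hermitian whenever B is and satisfies the normal equation A*(AXA* − B)A = 0. A small numerical check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = (B + B.conj().T) / 2                      # make B Hermitian

Apinv = np.linalg.pinv(A)
X = Apinv @ B @ Apinv.conj().T                # candidate least-squares solution

# X is Hermitian and the normal equation A*(AXA* - B)A = 0 holds:
print(np.allclose(X, X.conj().T))
print(np.allclose(A.conj().T @ (A @ X @ A.conj().T - B) @ A, 0, atol=1e-10))
```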
5.
Mathematical Methods in the Applied Sciences, 2018, 41(5): 2074-2094
In this paper, we are mainly concerned with two types of constrained matrix equation problems of the form AXB = C, the least squares problem and the optimal approximation problem, and we consider several constraint matrices, such as general Toeplitz matrices, upper triangular Toeplitz matrices, lower triangular Toeplitz matrices, symmetric Toeplitz matrices, and Hankel matrices. In the first problem, owing to the special structure of the constraint matrix, we construct special algorithms; necessary and sufficient conditions are obtained for the existence and uniqueness of the solutions. In the second problem, we use the von Neumann alternating projection algorithm to obtain the solutions of the problem. Then we give two numerical examples to demonstrate the effectiveness of the algorithms.
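As a rough illustration of the alternating‐projection idea (not the paper's algorithm, which covers several constraint classes and the inconsistent case), the sketch below alternates between the orthogonal projection onto the general Toeplitz subspace and the projection onto the solution set of AXB = C, assuming the equation is consistent; all names are illustrative.

```python
import numpy as np

def project_toeplitz(X):
    """Orthogonal projection onto square Toeplitz matrices: average each diagonal."""
    n = X.shape[0]
    P = np.zeros_like(X, dtype=float)
    for k in range(-n + 1, n):
        d = np.diagonal(X, offset=k)
        idx = np.arange(len(d))
        P[idx + max(-k, 0), idx + max(k, 0)] = d.mean()
    return P

def project_equation(X, A, B, C, Ap, Bp):
    """Orthogonal projection onto {X : A X B = C} (assumes the equation is consistent)."""
    return X - Ap @ (A @ X @ B - C) @ Bp

def alternating_projection(A, B, C, n, iters=500):
    """Von Neumann-style alternating projections between the Toeplitz subspace
    and the solution set of A X B = C (illustrative sketch only)."""
    Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
    X = np.zeros((n, n))
    for _ in range(iters):
        X = project_equation(X, A, B, C, Ap, Bp)
        X = project_toeplitz(X)
    return X
```

For the nearest‐matrix (optimal approximation) problem, a Dykstra‐type correction or the paper's tailored projections would replace this plain von Neumann scheme.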
6.
Venera Khoromskaia and Boris N. Khoromskij, Numerical Linear Algebra with Applications, 2016, 23(2): 249-271
In this paper, we present a method for fast summation of long‐range potentials on 3D lattices with multiple defects and non‐rectangular geometries, based on rank‐structured tensor representations. This is a significant generalization of our recent technique for the grid‐based summation of electrostatic potentials on rectangular L × L × L lattices by using canonical tensor decompositions, which yields O(L) computational complexity instead of the O(L³) of traditional approaches. The resulting lattice sum is calculated as a Tucker or canonical representation whose directional vectors are assembled by the 1D summation of the generating vectors for the shifted reference tensor, precomputed once on a large N × N × N representation grid in a 3D bounding box. The tensor numerical treatment of defects is performed in an algebraic way by simple summation of tensors in the canonical or Tucker formats. To diminish the considerable increase in the tensor rank of the resulting potential sum, the ε‐rank reduction procedure is applied, based on the generalized reduced higher‐order SVD scheme. For the reduced higher‐order SVD approximation to a sum of canonical/Tucker tensors, we prove stable error bounds in the relative norm in terms of the discarded singular values of the side matrices. The required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). The approach applies to a general class of kernel functions including those for the Newton, Slater, Yukawa, Lennard‐Jones, and dipole‐dipole interactions. Numerical tests confirm the efficiency of the presented tensor summation method; we demonstrate that a sum of millions of Newton kernels on a 3D lattice with defects/impurities can be computed in seconds in a Matlab implementation. The tensor approach is advantageous in further functional calculus with the lattice potential sums represented on a 3D grid, such as integration or differentiation, using tensor arithmetic of 1D complexity. Copyright © 2015 John Wiley & Sons, Ltd.
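The core rectangular‐lattice trick is that the sum of shifted copies of a rank‐R canonical reference tensor over an L × L × L lattice again has rank R: each directional (mode) vector is simply the sum of its L shifted copies, which is the source of the O(L) complexity. Below is a minimal sketch under simplifying assumptions (uniform grid, shifts aligned with grid points, full lattice without defects); all names are illustrative.

```python
import numpy as np

def lattice_sum_factors(U1, U2, U3, L, step):
    """Assemble the directional vectors of a full rectangular-lattice sum of a
    canonical (rank-R) reference tensor by 1D summation of shifted generating
    vectors. U1, U2, U3 are N x R factor matrices sampled on a grid large
    enough to contain all L shifts of `step` grid points each."""
    N, _ = U1.shape

    def shifted_sum(U):
        S = np.zeros_like(U, dtype=float)
        for l in range(L):
            s = l * step                      # shift by l lattice cells (in grid points)
            S[s:, :] += U[:N - s, :]
        return S

    # The lattice-sum tensor is sum_r V1[:, r] (x) V2[:, r] (x) V3[:, r]:
    # still rank R, assembled in O(L * N * R) work instead of O(L^3).
    return shifted_sum(U1), shifted_sum(U2), shifted_sum(U3)
```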
7.
Let A be an expansive linear map on ℝ^d. Approximation properties of shift‐invariant subspaces of L²(ℝ^d) when they are dilated by integer powers of A are studied. Shift‐invariant subspaces providing approximation order α or density order α associated to A are characterized. These characterizations impose certain restrictions on the behavior of the spectral function at the origin, expressed in terms of the concept of point of approximate continuity. The notions of approximation order and density order associated to an isotropic dilation turn out to coincide with the classical ones introduced by de Boor, DeVore, and Ron. This is no longer true when A is anisotropic. In this case, the A‐dilated shift‐invariant subspaces approximate the anisotropic Sobolev space associated to A and α. Our main results are also new when the shift‐invariant subspace S is generated by translates of a single function. The obtained results are illustrated by some examples.
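For orientation, the classical (de Boor–DeVore–Ron) notion referred to above can be stated as follows; this is background, not a formula from the paper, and the anisotropic version studied here replaces the scalar dilation by integer powers of A:

$$
S \text{ provides approximation order } \alpha
\iff
\operatorname{dist}\bigl(f, S^{h}\bigr)_{L^{2}(\mathbb{R}^{d})} \le C\, h^{\alpha}\, \|f\|_{W_{2}^{\alpha}(\mathbb{R}^{d})}
\quad \text{for all } f \in W_{2}^{\alpha}(\mathbb{R}^{d}),
$$

where $S^{h} = \{\, s(\cdot/h) : s \in S \,\}$ denotes the h‐dilated subspace.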