Similar Articles
20 similar articles found (search time: 15 ms).
1.
Recently, the tensor train (TT) rank has received much attention for tensor completion, due to its ability to explore the global low-rankness of tensors. However, existing methods still leave room for improvement, since low-rankness by itself is generally not sufficient to recover the underlying data. Motivated by this, we consider a novel tensor completion model that simultaneously exploits the global low-rankness and the local smoothness of visual data. In particular, we use low-rank matrix factorization to characterize the global TT low-rankness, and framelet and total variation regularization to enhance the local smoothness. We develop an efficient proximal alternating minimization algorithm with guaranteed convergence to solve the proposed model. Extensive experiments on various data demonstrate that the proposed method outperforms the compared methods in terms of visual and quantitative measures.
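To illustrate the general idea of coupling a low-rank factorization with a smoothness penalty in an alternating scheme, here is a minimal matrix-case sketch. It is not the paper's algorithm: a simple quadratic-difference smoother stands in for the framelet/total-variation terms, and all function names are hypothetical.

```python
import numpy as np

def smooth_rows(X, lam, iters=50, step=0.1):
    """Gradient descent on 0.5*||Y-X||^2 + lam*sum_j ||Y[:,j]-Y[:,j+1]||^2,
    a smooth quadratic-difference surrogate for a TV penalty."""
    Y = X.copy()
    for _ in range(iters):
        grad = Y - X
        grad[:, :-1] += 2 * lam * (Y[:, :-1] - Y[:, 1:])
        grad[:, 1:] += 2 * lam * (Y[:, 1:] - Y[:, :-1])
        Y -= step * grad
    return Y

def complete_lowrank_smooth(M, mask, rank=2, lam=0.1, outer=100, seed=0):
    """Alternate least-squares factor updates (low-rank block) with a
    smoothing step, re-imposing the observed entries each sweep."""
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((rank, n))
    X = np.where(mask, M, 0.0)
    for _ in range(outer):
        U = X @ V.T @ np.linalg.pinv(V @ V.T)
        V = np.linalg.pinv(U.T @ U) @ U.T @ X
        X = smooth_rows(U @ V, lam)   # local-smoothness proximal-style step
        X[mask] = M[mask]             # keep observed data exact
    return X
```

The same two-block pattern (factor updates, then a smoothness step) is what a proximal alternating minimization scheme organizes with convergence guarantees.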

2.
郭雄伟  王川龙 《计算数学》2022,44(4):534-544
This paper proposes an accelerated stochastic proximal gradient algorithm for the low-rank tensor completion problem. The completion model is relaxed into an unconstrained optimization problem in average-composite form; at each iteration, one function of the composite is selected at random for the variable update, which substantially reduces the cost of tensor unfolding, matrix folding, and singular value decomposition. We prove that the algorithm attains a convergence rate of $O(1/k^{2})$. Finally, experiments on both randomly generated and real-world tensor completion problems show that the new algorithm outperforms three existing algorithms in CPU time.
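The paper's method randomizes over the terms of a composite objective; as a point of reference for the $O(1/k^2)$ rate, here is a minimal deterministic accelerated proximal gradient (FISTA-style) sketch on an $\ell_1$-regularized least-squares toy problem. The names and the toy objective are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_lasso(A, b, lam, iters=200):
    """Accelerated proximal gradient (Nesterov momentum) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1; objective decays as O(1/k^2)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The stochastic variant replaces the full gradient by the gradient of one randomly chosen term, which is what saves the per-iteration unfolding and SVD cost in the tensor setting.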

3.
Low-rank tensor completion is widely used in data recovery, and completion models based on the tensor train (TT) decomposition perform well on color images, video, and Internet data. This paper proposes a completion model based on the TT decomposition of third-order tensors. The model introduces a sparsity regularizer and a spatio-temporal regularizer to capture, respectively, the sparsity of the core tensors and the inherent block similarity of the data. Exploiting the structure of the problem, auxiliary variables are introduced to transform the model into an equivalent separable form, which is solved by combining proximal alternating minimization (PAM) with the alternating direction method of multipliers (ADMM). Numerical experiments show that the two regularizers improve the stability and quality of data recovery and that the proposed method outperforms other methods, with the most pronounced gains at low sampling rates or under structured missing patterns in the image.

5.
Tensor decompositions such as the canonical format and the tensor train format have been widely utilized to reduce storage costs and operational complexities for high-dimensional data, achieving linear scaling with the input dimension instead of exponential scaling. In this paper, we investigate even lower storage-cost representations in the tensor ring format, which is an extension of the tensor train format with variable end-ranks. Firstly, we introduce two algorithms for converting a tensor in full format to tensor ring format with low storage cost. Secondly, we detail a rounding operation for tensor rings and show how this requires new definitions of common linear algebra operations in the format to obtain storage-cost savings. Lastly, we introduce algorithms for transforming the graph structure of graph-based tensor formats, with orders of magnitude lower complexity than in the existing literature. The efficiency of all algorithms is demonstrated on a number of numerical examples, and in certain cases, we demonstrate significantly higher compression ratios when compared to previous approaches to using the tensor ring format.
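For orientation, the tensor train format that the tensor ring format extends can be computed from a full tensor by the classical TT-SVD sweep of sequential truncated SVDs. A minimal sketch follows (function names are illustrative; a tensor ring version would additionally carry a nontrivial rank linking the first and last cores):

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """TT-SVD: sweep of truncated SVDs turning a full tensor into TT cores
    of shape (r_{k-1}, n_k, r_k), with end ranks fixed to 1."""
    dims = T.shape
    cores, r_prev = [], 1
    C = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))      # relative truncation rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                     # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back to a full tensor (for checking)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.reshape([G.shape[1] for G in cores])
```

Storage drops from the product of the mode sizes to a sum of core sizes, which is the linear-in-dimension scaling mentioned above.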

6.
In this article, we study robust tensor completion by using the transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD. The main motivation is that other unitary transform matrices can yield a lower tubal rank than the discrete Fourier transform matrix, which is more effective for robust tensor completion. Experimental results on hyperspectral, video, and face datasets show that the recovery performance of the transformed tensor SVD for the robust tensor completion problem is better, in peak signal-to-noise ratio, than that of the Fourier transform and other robust tensor completion methods.
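A minimal sketch of the transformed t-product that underlies this construction: transform the third-mode tube fibers with a unitary matrix U, multiply frontal slices, and transform back (the normalized DFT matrix as U recovers the classical t-product; the function name is illustrative):

```python
import numpy as np

def transformed_tprod(A, B, U):
    """t-product of third-order tensors under a unitary transform U applied
    along the third mode; choosing U as the (normalized) DFT matrix gives
    the classical t-product used by the traditional tensor SVD."""
    Ah = np.einsum('pk,ijk->ijp', U, A)              # transform tube fibers
    Bh = np.einsum('pk,ijk->ijp', U, B)
    Ch = np.einsum('ijp,jlp->ilp', Ah, Bh)           # slice-wise matrix products
    return np.einsum('kp,ijp->ijk', U.conj().T, Ch)  # inverse transform
```

The transformed tubal rank is the maximal matrix rank over the transformed frontal slices, so a transform that concentrates the slices into low rank directly improves recovery.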

7.
The tensor SVD (t-SVD) for third-order tensors, previously proposed in the literature, has been applied successfully in many fields, such as computed tomography, facial recognition, and video completion. In this paper, we propose a method that extends a well-known randomized matrix method to the t-SVD. This method can produce a factorization with properties similar to the t-SVD, but it is more computationally efficient on very large data sets. We present details of the algorithms and theoretical results, and provide numerical results that show the promise of our approach for compressing and analyzing image-based data sets. We also present an improved analysis of randomized simultaneous iteration for matrices, which may be of independent interest to the scientific community, and use these new results to address the convergence properties of the new randomized tensor method.
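The matrix building block being extended reads, in a minimal Halko-style sketch with subspace (power) iteration; this illustrates the randomized range-finding idea, not the authors' exact t-SVD variant, and the names are illustrative:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=5, n_iter=2, seed=0):
    """Randomized SVD: sample the range of A with a Gaussian test matrix,
    sharpen it with a few power iterations, then solve a small SVD."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], rank + n_oversample))
    Y = A @ Omega
    for _ in range(n_iter):          # power iterations improve the captured range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)           # orthonormal basis for the sampled range
    B = Q.T @ A                      # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```

The t-SVD extension applies the same recipe slice-wise in the transform domain, so the cost savings of the matrix method carry over.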

8.
For material modeling of microstructured media, an accurate characterization of the underlying microstructure is indispensable. Mathematically speaking, the overall goal of microstructure characterization is to find simple functionals which describe the geometric shape as well as the composition of the microstructures under consideration and enable distinguishing microstructures with distinct effective material behavior. For this purpose, we propose using Minkowski tensors, in general, and the quadratic normal tensor, in particular, and introduce a computational algorithm applicable to voxel-based microstructure representations. Rooted in the mathematical field of integral geometry, Minkowski tensors associate a tensor with rather general geometric shapes, which makes them suitable for a wide range of microstructured material classes. Furthermore, they satisfy additivity and continuity properties, which makes them robust for large-scale applications. We present a modular algorithm for computing the quadratic normal tensor of digital microstructures. We demonstrate multigrid convergence for selected numerical examples and apply our approach to a variety of microstructures. Strikingly, the presented algorithm remains unaffected by inaccurate computation of the interface area. The quadratic normal tensor may be used for engineering purposes, such as mean field homogenization, or as a target value for generating synthetic microstructures.

9.
We introduce a new class of nonnegative tensors: strictly nonnegative tensors. A weakly irreducible nonnegative tensor is strictly nonnegative, but not vice versa. We show that the spectral radius of a strictly nonnegative tensor is always positive. We give necessary and sufficient conditions for the six well-conditioned classes of nonnegative tensors introduced in the literature, and a full picture of the relationship between strictly nonnegative tensors and these six classes. We then establish global R-linear convergence of a power method for finding the spectral radius of a nonnegative tensor under the condition of weak irreducibility. We show that for a nonnegative tensor T, there always exists a partition of the index set such that every tensor induced by the partition is weakly irreducible, and the spectral radius of T can be obtained from the spectral radii of the induced tensors. In this way, we develop a convergent algorithm for finding the spectral radius of a general nonnegative tensor without any additional assumption. Preliminary numerical results show the feasibility and effectiveness of the algorithm.
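A minimal sketch of the kind of power method analyzed here, for the order-3 case: iterate x ← (A x²)^{1/2} (normalized) and squeeze the spectral radius between the componentwise ratios. The function name and details are illustrative, and the sketch assumes the iterate stays positive (e.g., A essentially positive):

```python
import numpy as np

def nqz_spectral_radius(A, iters=500, tol=1e-12):
    """Ng-Qi-Zhou-type power method for the spectral radius of a nonnegative
    order-3 tensor: (A x^2)_i = sum_{j,k} A[i,j,k] x_j x_k, and the radius is
    bracketed by min_i and max_i of (A x^2)_i / x_i^2."""
    n = A.shape[0]
    x = np.ones(n) / n
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x)
        ratios = y / x ** 2
        if ratios.max() - ratios.min() < tol:
            break
        x = np.sqrt(y)          # (m-1)-th root for m = 3
        x /= x.sum()            # renormalize onto the simplex
    return 0.5 * (ratios.min() + ratios.max())
```

For the all-ones n×n×n tensor the eigenvalue equation forces a uniform eigenvector and gives spectral radius n², which makes a convenient sanity check.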

10.
We propose a tensor structured preconditioner for the tensor train GMRES algorithm (or TT-GMRES for short) to approximate the solution of the all-at-once formulation of time-dependent fractional partial differential equations discretized in time by linear multistep formulas used in boundary value form and in space by finite volumes. Numerical experiments show that the proposed preconditioner is efficient for very large problems and is competitive, in particular with respect to the AMEn algorithm.

11.
梁娜  杜守强 《运筹学学报》2017,21(3):95-102
We propose a class of symmetric tensor absolute value equation problems and present a nonsmooth Newton method for solving them. Under general assumptions, local convergence of the algorithm is established. Numerical experiments demonstrate the effectiveness of the method.

12.
We apply results in operator space theory to the setting of multidimensional measure theory. Using the extended Haagerup tensor product of Effros and Ruan, we derive a Radon–Nikodým theorem for bimeasures and then extend the result to general Fréchet measures (scalar-valued polymeasures). We also prove a measure-theoretic Grothendieck inequality, provide a characterization of the injective tensor product of two spaces of Lebesgue integrable functions, and discuss the possibility of a bounded convergence theorem for Fréchet measures.

13.
We consider the problem of recovering an orthogonally decomposable tensor with a subset of elements distorted by noise with arbitrarily large magnitude. We focus on the particular case where each mode in the decomposition is corrupted by noise vectors with components that are correlated locally, that is, with nearby components. We show that this deterministic tensor completion problem has the unusual property that it can be solved in polynomial time if the rank of the tensor is sufficiently large. This is the polar opposite of the low-rank assumptions of typical low-rank tensor and matrix completion settings. We show that our problem can be solved through a system of coupled Sylvester-like equations and show how to accelerate their solution by an alternating solver. This enables recovery even with a substantial number of missing entries, for instance for n-dimensional tensors of rank n with up to 40% missing entries.
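The Sylvester-equation building block can be sketched in its simplest, vectorized form; this illustrates the kind of linear solve involved, not the authors' coupled alternating solver, and the function name is illustrative:

```python
import numpy as np

def solve_sylvester_vec(A, B, C):
    """Solve the Sylvester equation A X + X B = C via vectorization:
    (I (x) A + B^T (x) I) vec(X) = vec(C), with column-stacking (Fortran) vec.
    Solvable when A and -B share no eigenvalues; fine for small dense problems."""
    n, m = C.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.reshape(-1, order='F'))
    return x.reshape(n, m, order='F')
```

For large problems one would instead use a Bartels–Stewart-type solver or, as in the paper, an alternating scheme over the coupled system rather than forming the Kronecker matrix.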

14.
In this paper, we first introduce a new tensor product for a transition probability tensor originating from a higher-order Markov chain. Subsequently, some properties of the new tensor product are explained, and its relationship with the stationary probability vector is studied. We also show the similarity between results obtained with the new product and those of the first-order case. Furthermore, we prove the convergence of a transition probability tensor to the stationary probability vector. Finally, we show how to compute a stationary probability vector with some numerical examples and compare the proposed method with another existing method for obtaining stationary probability vectors. Copyright © 2016 John Wiley & Sons, Ltd.

15.
Alternating least squares (ALS) is often considered the workhorse algorithm for computing the rank‐R canonical tensor approximation, but for certain problems, its convergence can be very slow. The nonlinear conjugate gradient (NCG) method was recently proposed as an alternative to ALS, but the results indicated that NCG is usually not faster than ALS. To improve the convergence speed of NCG, we consider a nonlinearly preconditioned NCG (PNCG) algorithm for computing the rank‐R canonical tensor decomposition. Our approach uses ALS as a nonlinear preconditioner in the NCG algorithm. Alternatively, NCG can be viewed as an acceleration process for ALS. We demonstrate numerically that the convergence acceleration mechanism in PNCG often leads to important pay‐offs for difficult tensor decomposition problems, with convergence that is significantly faster and more robust than for the stand‐alone NCG or ALS algorithms. We consider several approaches for incorporating the nonlinear preconditioner into the NCG algorithm that have been described in the literature previously and have met with success in certain application areas. However, it appears that the nonlinearly PNCG approach has received relatively little attention in the broader community and remains underexplored both theoretically and experimentally. Thus, this paper serves several additional functions, by providing in one place a concise overview of several PNCG variants and their properties that have only been described in a few places scattered throughout the literature, by systematically comparing the performance of these PNCG variants for the tensor decomposition problem, and by drawing further attention to the usefulness of nonlinearly PNCG as a general tool. In addition, we briefly discuss the convergence of the PNCG algorithm. In particular, we obtain a new convergence result for one of the PNCG variants under suitable conditions, building on known convergence results for non‐preconditioned NCG. 
Copyright © 2014 John Wiley & Sons, Ltd.
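The ALS workhorse being preconditioned can be sketched for an order-3 tensor as follows; this is the plain method, not the PNCG variants, and the helper names are illustrative:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: rows indexed (x_row, y_row), y fastest."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Plain ALS for a rank-R CP approximation of a third-order tensor:
    cycle through the factors, solving a linear least-squares problem each time."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                       # mode-1 unfolding, columns (j,k)
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)    # mode-2 unfolding, columns (i,k)
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)    # mode-3 unfolding, columns (i,j)
    for _ in range(iters):
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

PNCG wraps exactly this sweep: one ALS cycle acts as the nonlinear preconditioner inside each NCG iteration, which is what accelerates the slow "swamp" phases of plain ALS.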

16.
Local convergence analysis of tensor methods for nonlinear equations
Tensor methods for nonlinear equations base each iteration upon a standard linear model, augmented by a low-rank quadratic term that is selected in such a way that the model is efficient to form, store, and solve. These methods have been shown to be very efficient and robust computationally, especially on problems where the Jacobian matrix at the root has a small rank deficiency. This paper analyzes the local convergence properties of two versions of tensor methods on problems where the Jacobian matrix at the root has a null space of rank one. Both methods augment the standard linear model by a rank-one quadratic term. We show under mild conditions that the sequence of iterates generated by the tensor method based upon an ideal tensor model converges locally and two-step Q-superlinearly to the solution with Q-order 3/2, and that the sequence of iterates generated by the tensor method based upon a practical tensor model converges locally and three-step Q-superlinearly to the solution with Q-order 3/2. In the same situation, it is known that standard methods converge linearly with constant converging to 1/2. Hence, tensor methods have theoretical advantages over standard methods. Our analysis also confirms that tensor methods converge at least quadratically on problems where the Jacobian matrix at the root is nonsingular. This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday. Research supported by AFOSR grant AFOSR-90-0109, ARO grant DAAL 03-91-G-0151, and NSF grants CCR-8920519 and CCR-9101795.

17.
宋珊珊  李郴良 《计算数学》2022,44(2):178-186
This paper proposes a class of smoothing modulus-based matrix iteration methods for solving tensor complementarity problems. The basic idea is to first reformulate the tensor complementarity problem as an equivalent modulus equation, and then solve it by introducing a smoothing approximation function. We analyze the convergence of the algorithm, and numerical experiments verify the effectiveness of the proposed method.

18.
We study iterative methods for solving a set of sparse nonnegative tensor equations (multivariate polynomial systems) arising from data mining applications such as information retrieval by query search and community discovery in multi-dimensional networks. By making use of the sparse and nonnegative tensor structure, we develop Jacobi and Gauss–Seidel methods for solving the tensor equations. Multiplications of tensors with vectors are required at each iteration of these methods, so the cost per iteration depends on the number of nonzeros in the sparse tensors. We show linear convergence of the Jacobi and Gauss–Seidel methods under suitable conditions, and therefore the set of sparse nonnegative tensor equations can be solved very efficiently. Experimental results on information retrieval by query search and community discovery in multi-dimensional networks are presented to illustrate the application of tensor equations and the effectiveness of the proposed methods.
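A minimal fixed-point sketch of the flavor of system involved, using a multilinear PageRank-type model as a stand-in (the model, the α value, and the names are illustrative assumptions, not the paper's exact equations):

```python
import numpy as np

def multilinear_pagerank(P, b, alpha=0.4, iters=1000, tol=1e-12):
    """Jacobi-style fixed-point iteration for x = alpha * P x x + (1 - alpha) * b,
    with P a column-stochastic tensor (sum_i P[i,j,k] = 1) and b on the simplex.
    For alpha < 1/2 the map is a 1-norm contraction, so the iteration converges."""
    x = b.copy()
    for _ in range(iters):
        x_new = alpha * np.einsum('ijk,j,k->i', P, x, x) + (1 - alpha) * b
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x
```

With a sparse P, the only per-iteration cost is the tensor–vector product, which is why the cost scales with the number of nonzeros, as noted above.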

19.
An iterative method for finding the largest eigenvalue of a nonnegative tensor was proposed by Ng, Qi, and Zhou in 2009. In this paper, we establish an explicit linear convergence rate of the Ng–Qi–Zhou method for essentially positive tensors. Numerical results are given to demonstrate linear convergence of the Ng–Qi–Zhou algorithm for essentially positive tensors. Copyright © 2011 John Wiley & Sons, Ltd.

20.
The symmetric tensor decomposition problem is fundamental in many fields and has attracted considerable investigation. In general, a greedy algorithm is used for tensor decomposition: first find the largest singular value and the corresponding singular vector, subtract that component from the tensor, and repeat the process. In this article, we focus on designing an effective algorithm and giving its convergence analysis. We introduce an exceedingly simple and fast algorithm for the rank-one approximation step of symmetric tensor decomposition. Through variable splitting, we solve the symmetric tensor decomposition problem by minimizing a multiconvex optimization problem, using an alternating gradient descent algorithm. Although we focus on symmetric tensors in this article, the method can be extended to nonsymmetric tensors in some cases. Additionally, we give theoretical analysis of the alternating gradient descent algorithm, proving that it converges linearly to a global minimizer, and provide numerical results showing the effectiveness of the algorithm.
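For comparison, the classical symmetric higher-order power method computes the same rank-one object that each greedy step needs. A minimal order-3 sketch (this is an alternative to, not the article's, alternating gradient descent, and the function name is illustrative):

```python
import numpy as np

def shopm_rank_one(A, iters=200, seed=0):
    """Symmetric higher-order power method: a symmetric rank-one approximation
    lambda * (x (x) x (x) x) of a symmetric third-order tensor A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', A, x, x)   # A x^2
        x = y / np.linalg.norm(y)              # normalize the new direction
    lam = np.einsum('ijk,i,j,k->', A, x, x, x) # generalized Rayleigh quotient
    return lam, x
```

Subtracting lam * x⊗x⊗x and repeating gives exactly the greedy deflation loop described above.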


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号