Similar Literature
19 similar records found (search time: 187 ms)
1.
Low-rank tensor completion is widely used in data recovery, and completion models based on the tensor-train (TT) decomposition perform well on color images, video, and Internet data. This paper proposes a completion model based on the TT decomposition of third-order tensors. The model introduces a sparse regularization term and a spatio-temporal regularization term to capture, respectively, the sparsity of the core tensors and the block similarity inherent in the data. Exploiting the structure of the problem, auxiliary variables are introduced to transform the original model into an equivalent separable form, which is solved by combining proximal alternating minimization (PAM) with the alternating direction method of multipliers (ADMM). Numerical experiments show that the two regularization terms improve the stability and quality of data recovery and that the proposed method outperforms alternative methods, with the advantage most pronounced at low sampling rates or when images suffer structured missing entries.
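As context for the TT-based model above, here is a minimal sketch of the TT-SVD that factors a third-order tensor into the cores such models operate on. This is an illustration only, not the authors' PAM/ADMM solver; the function names and ranks are made up for the example.

```python
import numpy as np

def tt_svd_3d(X, r1, r2):
    """TT-SVD sketch for a third-order tensor X of shape (n1, n2, n3):
    two sequential truncated SVDs give cores G1 (n1, r1), G2 (r1, n2, r2),
    and G3 (r2, n3); (r1, r2) are the TT ranks."""
    n1, n2, n3 = X.shape
    U, s, Vt = np.linalg.svd(X.reshape(n1, n2 * n3), full_matrices=False)
    G1 = U[:, :r1]                                     # first TT core
    W = (s[:r1, None] * Vt[:r1]).reshape(r1 * n2, n3)  # carry the rest forward
    U2, s2, Vt2 = np.linalg.svd(W, full_matrices=False)
    G2 = U2[:, :r2].reshape(r1, n2, r2)                # middle TT core
    G3 = s2[:r2, None] * Vt2[:r2]                      # last TT core
    return G1, G2, G3

def tt_reconstruct(G1, G2, G3):
    """Contract the TT cores back into the full tensor."""
    return np.einsum('ia,ajb,bk->ijk', G1, G2, G3)
```

For a tensor whose exact TT ranks match `(r1, r2)`, the reconstruction is exact up to floating-point error.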

2.
Two fourth-order tensor decompositions based on different tensor products are proposed. First, building on matrix multiplication, a first fourth-order tensor product (the F-product) is defined, and a corresponding fourth-order tensor decomposition (F-TD) is derived. Second, based on the t-product of third-order tensors, a second fourth-order tensor product (the B-product) and decomposition (FT-SVD) are given. Two tensor approximation theorems are established, one for each decomposition. Finally, three numerical examples illustrate the accuracy and feasibility of the two decomposition methods.
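The third-order t-product that the B-product builds on can be sketched as slice-wise matrix products in the Fourier domain. This is a generic illustration of the standard t-product, not the paper's F- or B-product.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1, n2, n3) and B (n2, m, n3):
    FFT along the third mode, slice-wise matrix products, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # matmul on each frontal slice
    return np.real(np.fft.ifft(Cf, axis=2))
```

The identity element under the t-product is the tensor whose first frontal slice is the identity matrix and whose remaining slices are zero.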

3.
This paper studies the tensor-response regression model and the least-squares estimation of its coefficient tensor. To improve the estimation accuracy, the coefficient tensor is first factorized via the CP and Tucker decompositions, yielding two new tensor-response regression models. These models capture the spatial structure within tensor data while greatly reducing the number of parameters to estimate. Parameter-estimation algorithms for both models are then given. Finally, Monte Carlo...

4.
In recent years, tensors, as a generalization of matrices, have been studied extensively. Among tensor-related problems, the tensor complementarity problem (TCP) is an important research area, and many methods for solving the TCP have been proposed. Based on strong P-tensors and a smoothing approximation function, this paper proposes a smoothing Newton algorithm for a modulus-based reformulation of the TCP and proves that the smoothing Newton method is globally convergent. Numerical examples verify the effectiveness of the algorithm.
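A generic smoothing function of the kind such algorithms rely on can be sketched as follows. This is a textbook example only; the strong P-tensor setting and the paper's specific modulus-based reformulation are not reproduced here.

```python
import numpy as np

def smoothed_min(a, b, mu):
    """Smooth approximation of min(a, b), recovering it exactly as mu -> 0.
    Functions of this kind are used to smooth complementarity conditions
    such as min(x, F(x)) = 0 before applying Newton's method."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))
```

As the smoothing parameter `mu` shrinks, the function approaches `min(a, b)` from below, and it is differentiable everywhere for `mu > 0`.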

5.
郭雄伟  王川龙 《计算数学》2022,44(4):534-544
This paper proposes an accelerated stochastic proximal gradient algorithm for the low-rank tensor completion problem. The completion model is relaxed to an unconstrained optimization problem in average (finite-sum) form; at each iteration one function in the sum is randomly selected for the variable update, which substantially reduces the cost of tensor unfolding, matrix folding, and singular value decomposition. The convergence rate of the algorithm is shown to be $O(1/k^{2})$. Finally, completion experiments on both randomly generated and real tensors show that the new algorithm outperforms three existing algorithms in CPU time.
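The $O(1/k^{2})$ rate comes from Nesterov-style momentum. Below is a minimal deterministic sketch on a smooth toy problem (the prox step is trivially the identity); the paper's stochastic, tensor-specific solver is not reproduced, and all names are illustrative.

```python
import numpy as np

def accelerated_gradient(grad, L, x0, iters=200):
    """Nesterov-accelerated gradient sketch with the classical momentum
    sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2, which yields an
    O(1/k^2) rate on smooth convex problems."""
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                       # (proximal) gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

In a proximal variant, the gradient step is followed by the proximal operator of the nonsmooth term; here that operator is the identity.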

6.
Among all second-order tensors, only the unit tensor (the metric tensor) is isotropic; a second-order isotropic tensor must be a spherical tensor, and conversely; a second-order tensor whose deviator vanishes is necessarily isotropic.
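The claim can be checked numerically: a spherical tensor $\lambda I$ has the same components after any rotation $QAQ^{T}$, while a tensor with nonzero deviator generally does not. A small numpy illustration (the random rotation is for the example only):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix

spherical = 2.5 * np.eye(3)          # lambda * I: deviator is zero
generic = np.diag([1.0, 2.0, 3.0])   # deviator is nonzero

# Isotropy = invariance of components under every rotation Q A Q^T
iso = np.allclose(Q @ spherical @ Q.T, spherical)
generic_invariant = np.allclose(Q @ generic @ Q.T, generic)
```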

7.
This paper first reviews basic tensor concepts, tensor products, and the CP and Tucker decompositions. Tensors are then incorporated into a statistical model to obtain a tensor regression model. Combining tensor matricization with tensor decomposition, a least-squares estimation formula for the model's parameter tensor is derived. Finally, an example illustrates the importance of tensor models.
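Matricization is what ties the Tucker decomposition to ordinary matrix least squares, via the identity $X_{(1)} = U_1 G_{(1)} (U_2 \otimes U_3)^{T}$ (stated here for C-order unfolding; the Fortran-order convention reverses the Kronecker factors). A small check, with all sizes illustrative:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization: move `mode` to the front, flatten the rest (C order)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

rng = np.random.default_rng(4)
G = rng.standard_normal((2, 3, 4))                  # core tensor
U1 = rng.standard_normal((5, 2))
U2 = rng.standard_normal((6, 3))
U3 = rng.standard_normal((7, 4))
X = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)   # Tucker form

# With C-order unfolding: X_(1) = U1 @ G_(1) @ kron(U2, U3)^T
lhs = unfold(X, 0)
rhs = U1 @ unfold(G, 0) @ np.kron(U2, U3).T
```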

8.
柳智 《运筹与管理》2023,(10):102-107
This paper proposes an effective neural-network pruning method. A zero-norm regularization term is added to the network training model to induce sparse weights, and the model is compressed by deleting the weights that are zero. For the proposed zero-norm-regularized training model, an equivalent local Lipschitz surrogate is obtained through a globally exact penalty of its equivalent MPEC formulation, and the network is trained and pruned by solving this Lipschitz surrogate with the alternating direction method of multipliers. Finally, tests on the MLP and LeNet-5 network models achieve sparsity levels of 97.43% and 99.50% at errors of 2.2% and 1%, respectively, demonstrating good pruning performance.
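A much simpler magnitude-based pruning sketch is shown below to illustrate the delete-small-weights step that produces the reported sparsity. This is a generic stand-in, not the paper's zero-norm/MPEC/ADMM scheme; the function name is invented for the example.

```python
import numpy as np

def prune_by_magnitude(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.
    Both this heuristic and zero-norm-regularized training end the same way:
    small weights are deleted and the model is compressed."""
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]  # k-th smallest |w|
    pruned = W.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned
```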

9.
Nonnegative tensor factorization models play an important role in high-dimensional image processing and data analysis. Focusing on the hyperspectral image reconstruction problem, this paper proposes a regularized nonnegative tensor factorization algorithm together with three new and effective acceleration strategies: layered dimension-reduction cyclic iteration, error correction, and an "exponent sign-preservation" strategy. These strategies jointly improve the efficiency of the solver. Finally, numerical tests verify the feasibility and practicality of the proposed algorithm and acceleration strategies.
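As a baseline illustration of nonnegative factorization, the classical Lee-Seung multiplicative updates for the matrix case are sketched below. The paper's regularized tensor algorithm and its three acceleration strategies are not reproduced; this only shows the monotone error decrease that all such methods build on.

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, seed=0):
    """Plain multiplicative-update NMF: V ~ W @ H with W, H >= 0.
    The updates keep the factors nonnegative and do not increase
    the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    errs = []
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update W, H fixed
        errs.append(np.linalg.norm(V - W @ H))
    return W, H, errs
```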

10.
Tensor robust principal component analysis separates an unknown low-rank tensor and an unknown sparse tensor from their observed sum. Because of its broad application prospects in computer vision and pattern recognition, the problem has recently become a research hotspot. This paper proposes a new model for tensor robust principal component analysis and an alternating minimization algorithm for solving it, with two rank-adjustment strategies used during the solution process. The low-rank component is handled by low-rank matrix factorization of all of its mode-n unfolding matrices, and the sparse component by soft-thresholding shrinkage. The method applies whether the target low-rank tensor is exactly or only approximately low rank. A partial convergence analysis is given: every accumulation point generated by the iteration satisfies the KKT conditions. If the target low-rank tensor is exactly low rank, the output can be corrected at termination using the higher-order singular value decomposition. Numerical experiments on synthetic data and real video data show that the proposed method outperforms algorithms of the same type.
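The soft-thresholding shrinkage used for the sparse component, and its singular-value analogue commonly applied to unfolding matrices of the low-rank component, can be sketched as follows (generic operators, not the paper's full algorithm):

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1,
    i.e. the shrinkage step applied to the sparse component."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values.
    This is the proximal operator of the matrix nuclear norm, often used
    on each unfolding matrix of the low-rank component."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt
```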

11.
Robust principal component analysis plays a key role in fields such as image and video processing, data mining, and hyperspectral data analysis. In this paper, we study robust tensor train (TT) principal component analysis from partial observations, which aims to decompose a given tensor into low-TT-rank and sparse components. The decomposition in the proposed model is used to find the hidden factors and helps alleviate the curse of dimensionality via a set of connected low-rank tensors. The relaxation model minimizes a weighted combination of the sum of nuclear norms of the unfolding matrices of the core tensors and the tensor $\ell_1$ norm. A proximal alternating direction method of multipliers is developed to solve the resulting model. Furthermore, we show that, under some conditions, any cluster point of a convergent subsequence is a Karush-Kuhn-Tucker point of the proposed model. Extensive numerical examples on both synthetic data and real-world datasets demonstrate the effectiveness of the proposed approach.

12.
Tensor completion arises in numerous applications where the data are high-dimensional and gathered from multiple sources or views. Existing methods merely incorporate structure information, ignoring the fact that ubiquitous side information may help estimate the missing entries of a partially observed tensor. Inspired by this, we formulate a sparse and low-rank tensor completion model named SLRMV. The $\ell_0$-norm, rather than its relaxation, is used in the objective function to constrain the sparseness of the noise. The CP decomposition is used to decompose the high-quality tensor, based on which a combination of Schatten $p$-norms on the latent factor matrices characterizes the low-rank tensor structure with high computational efficiency. Diverse similarity matrices for the same factor matrix are regarded as multi-view side information guiding the completion task. Although SLRMV is a nonconvex and discontinuous problem, an optimality analysis in terms of Karush-Kuhn-Tucker (KKT) conditions is given, based on which a hard-thresholding-based alternating direction method of multipliers (HT-ADMM) is designed. Extensive experiments demonstrate the efficiency of SLRMV in tensor completion.
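Hard thresholding, the prox-like step behind HT-ADMM's $\ell_0$ term, can be contrasted with the soft thresholding it replaces; unlike the soft version it does not shrink the surviving entries. A generic sketch, not the SLRMV code:

```python
import numpy as np

def hard_threshold(X, tau):
    """Entrywise hard-thresholding: keep entries with |x| > tau, zero the rest.
    Survivors are kept at full magnitude, matching an l0 (count) penalty
    rather than an l1 (magnitude) penalty."""
    Y = X.copy()
    Y[np.abs(Y) <= tau] = 0.0
    return Y
```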

13.
The symmetric tensor decomposition problem is fundamental in many fields and appealing to investigate. In general, a greedy algorithm is used for tensor decomposition: first find the largest singular value and the corresponding singular vector, subtract that component from the tensor, then repeat. In this article, we focus on designing one effective algorithm and giving its convergence analysis. We introduce an exceedingly simple and fast algorithm for the rank-one approximation step of symmetric tensor decomposition. Through variable splitting, we recast the symmetric tensor decomposition problem as a multiconvex optimization problem and solve it with an alternating gradient descent algorithm. Although we focus on symmetric tensors in this article, the method can be extended to nonsymmetric tensors in some cases. Additionally, we give theoretical analysis of the alternating gradient descent algorithm, proving that it converges linearly to a global minimizer, and we provide numerical results showing the effectiveness of the algorithm.
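A simpler fixed-point alternative to the article's alternating gradient descent is the symmetric higher-order power iteration for the rank-one step, sketched below for a third-order symmetric tensor; all names are illustrative.

```python
import numpy as np

def rank_one_symmetric(T, iters=100, seed=0):
    """Rank-one approximation lam * (x o x o x) of a symmetric third-order
    tensor T by higher-order power iteration: x <- T(x, x, .) normalized.
    A fixed-point sketch, not the article's method."""
    n = T.shape[0]
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,i,j->k', T, x, x)   # contract two modes with x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x
```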

14.
In this article, we study robust tensor completion using the transformed tensor singular value decomposition (SVD), which employs general unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD. The main motivation is that a lower tubal rank can be obtained with other unitary transform matrices than with the discrete Fourier transform, which is more effective for robust tensor completion. Experimental results on hyperspectral, video, and face datasets show that the recovery performance of the transformed tensor SVD for robust tensor completion, measured in peak signal-to-noise ratio, is better than that of the Fourier transform and of other robust tensor completion methods.
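The effect of the transform choice on tubal rank can be illustrated directly: transform the tubes by a unitary matrix, then take the maximum matrix rank over the frontal slices. This is a sketch; `transformed_tsvd_rank` is an illustrative helper, not an API from the paper.

```python
import numpy as np

def transformed_tsvd_rank(X, Phi, tol=1e-10):
    """Tubal rank of X under a unitary transform Phi applied along mode 3:
    transform the tubes, then count the maximum slice rank."""
    Xt = np.einsum('pk,ijk->ijp', Phi, X)
    return max(np.linalg.matrix_rank(Xt[:, :, p], tol=tol)
               for p in range(Xt.shape[2]))
```

A tensor built from tubes aligned with the rows of some orthogonal `Phi` has tubal rank 1 under `Phi` but a higher tubal rank under the identity (or another) transform, which is exactly the motivation stated in the abstract.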

15.
Low-Tucker-rank tensor completion has wide applications in science and engineering. Many existing approaches handle the Tucker rank through the ranks of unfolding matrices. However, unfolding a tensor into a matrix destroys the data's original multi-way structure, resulting in vital information loss and degraded performance. In this article, we establish a relationship between the Tucker ranks and the ranks of the factor matrices in the Tucker decomposition, and reformulate the low-Tucker-rank tensor completion problem as a multilinear low-rank matrix completion problem. For the reformulated problem, a symmetric block coordinate descent method is customized, with classical truncated nuclear norm minimization adopted for each matrix-rank-minimization subproblem. Furthermore, temporal characteristics of image and video data are introduced into the model, which benefits the performance of the method. Numerical simulations illustrate the efficiency of our proposed models and methods.

16.
Nonnegative tensor factorizations using an alternating direction method
Nonnegative tensor (matrix) factorization finds more and more applications in various disciplines, including machine learning, data mining, and blind source separation. Computationally, the optimization problem involved is solved by alternately minimizing one factor while the others are fixed. To solve the subproblem efficiently, we first exploit a variable regularization term that keeps the subproblem well away from ill-conditioning. Second, an augmented Lagrangian alternating direction method is employed to solve this convex and well-conditioned regularized subproblem, and two acceleration techniques are also implemented. Preliminary numerical experiments show the improvements brought by the new method.

17.
Recently, the tensor train (TT) rank has received much attention for tensor completion, due to its ability to capture the global low-rankness of tensors. However, existing methods leave room for improvement, since low-rankness alone is generally not sufficient to recover the underlying data. Inspired by this, we consider a novel tensor completion model that simultaneously exploits the global low-rankness and the local smoothness of visual data. In particular, we use low-rank matrix factorization to characterize the global TT low-rankness, and framelet and total variation regularization to enhance the local smoothness. We develop an efficient proximal alternating minimization algorithm with guaranteed convergence to solve the proposed model. Extensive experiments on various data demonstrate that the proposed method outperforms the compared methods in both visual and quantitative measures.
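The total-variation part of the local-smoothness prior can be sketched as a sum of absolute finite differences; small TV means the image is locally smooth. The framelet term and the full solver are not reproduced here.

```python
import numpy as np

def anisotropic_tv(U):
    """Anisotropic total variation of a 2-D array: the sum of absolute
    vertical and horizontal finite differences. Adding this as a penalty
    favors piecewise-smooth reconstructions."""
    return (np.abs(np.diff(U, axis=0)).sum() +
            np.abs(np.diff(U, axis=1)).sum())
```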

18.
Tensor ring (TR) decomposition has been widely applied as an effective approach for discovering hidden low-rank patterns in multidimensional, higher-order data. A well-known method for TR decomposition is alternating least squares (ALS). However, solving the ALS subproblems is often costly, especially for large-scale tensors. In this paper, we provide two strategies to tackle this issue and design three ALS-based algorithms. Specifically, the first strategy simplifies the calculation of the coefficient matrices of the normal equations for the ALS subproblems by taking full advantage of their structure, making the corresponding algorithm much faster than the regular ALS method in computing time. The second strategy stabilizes the ALS subproblems via QR factorizations of the TR cores, making the corresponding algorithms more numerically stable than our first algorithm. Extensive numerical experiments on synthetic and real data illustrate and confirm these results. In addition, we present complexity analyses of the proposed algorithms.
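The stability issue behind the second strategy is the classic normal-equations-versus-QR trade-off in least squares: the normal equations square the condition number, while QR works on the original matrix. A minimal comparison, illustrative only and not the TR-core implementation:

```python
import numpy as np

def ls_normal(A, b):
    """Solve min ||Ax - b|| via the normal equations A^T A x = A^T b
    (fast, but squares the condition number of A)."""
    return np.linalg.solve(A.T @ A, A.T @ b)

def ls_qr(A, b):
    """Solve the same problem via a QR factorization of A
    (more numerically stable on ill-conditioned problems)."""
    Q, R = np.linalg.qr(A)
    return np.linalg.solve(R, Q.T @ b)
```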

19.
As a traditional machine learning method built on vector spaces, the support vector machine cannot handle tensor data directly; doing so not only destroys the spatial structure of the data but also causes the curse of dimensionality and small-sample problems. As a higher-order generalization of the support vector machine, the support tensor machine for classifying tensor data has attracted wide attention and has been applied in remote sensing imaging, video analysis, finance, fault diagnosis, and other fields. As with support vector machines, existing support tensor machine models mostly use surrogates of the L0/1 loss function. This paper uses the original L0/1 function itself as the loss and exploits the low-rankness of tensor data to build a low-rank support tensor machine model for binary classification. For this nonconvex, discontinuous tensor optimization problem, an alternating direction method of multipliers is designed, and numerical experiments on simulated and real data verify the effectiveness of the model and algorithm.
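The L0/1 loss used directly in the model can be contrasted with a common convex surrogate. A generic sketch of the two losses as functions of the classification margin $y \cdot f(x)$, not the support tensor machine itself:

```python
import numpy as np

def zero_one_loss(margins):
    """The L0/1 loss itself (not a surrogate): 1 on a nonpositive
    margin (misclassified or on the boundary), 0 otherwise."""
    return (margins <= 0).astype(float)

def hinge_loss(margins):
    """A common convex surrogate: max(0, 1 - margin)."""
    return np.maximum(0.0, 1.0 - margins)
```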


Copyright©北京勤云科技发展有限公司  京ICP备09084417号