1.
Guo, Miron, Brie and Stegeman [X. Guo, S. Miron, D. Brie, A. Stegeman, Uni-mode and partial uniqueness conditions for CANDECOMP/PARAFAC of three-way arrays with linearly dependent loadings, SIAM J. Matrix Anal. Appl. 33 (2012) 111–129] give three sufficient conditions for the three-way CANDECOMP/PARAFAC (CP) model which ensure uniqueness in one of the three modes ("uni-mode uniqueness"). In this paper, we generalize these uniqueness conditions to n-way arrays with n ≥ 3. Based on these conditions, a partial uniqueness condition is given which allows collinear loadings in only one mode.
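For reference, the setting in LaTeX (a sketch of the standard definitions, not taken from the paper): the n-way CP model expresses a tensor as a sum of R rank-one terms, and uni-mode uniqueness asks that the factor matrix of one chosen mode be essentially unique, i.e. determined up to permutation and scaling of its columns, even when the remaining modes are not.

```latex
% n-way CP model with R components and factor matrices A^{(1)},...,A^{(n)}
\mathcal{X} \;=\; \sum_{r=1}^{R} \mathbf{a}^{(1)}_r \circ \mathbf{a}^{(2)}_r \circ \cdots \circ \mathbf{a}^{(n)}_r
% Uni-mode uniqueness in mode m: any alternative R-component decomposition
% with factor matrices B^{(1)},...,B^{(n)} satisfies B^{(m)} = A^{(m)} \Pi \Lambda
% for some permutation matrix \Pi and nonsingular diagonal matrix \Lambda.
```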
2.
The CANDECOMP/PARAFAC (CP) model is a well-known and frequently used tool for extracting substantial information from a three-way data array. It has several useful characteristics and usually gives meaningful insights into the underlying structure of the data. In some cases, however, it behaves anomalously, producing so-called 'degenerate solutions': solutions whose components show a diverging pattern and are meaningless. Several authors have investigated the causes of degeneracy, concluding that the phenomenon is due to a lack of a minimum of the loss function. In this paper, we study the degeneracy of CP, restricting our attention to the two-component case. The study is carried out by introducing a canonical form, called 2DR, which is 'weakly degeneracy revealing'. Within this framework, degeneracy is studied, along with some of the remedies proposed in the literature, by using a Tucker3 model having a core in the 2DR form. The analysis gives new insights into the behaviour of the CP model and suggests new ideas on how to deal with degeneracy. Copyright © 2010 John Wiley & Sons, Ltd.
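A standard illustration of two-component degeneracy, independent of the paper's 2DR machinery: two rank-one terms whose weights diverge while the terms become nearly collinear and cancel, so the iterates approach a tensor the two-component model cannot represent exactly.

```latex
% As n -> infinity, the weights blow up but the sum converges to a tensor
% that generically has rank 3, so no best two-component fit exists:
\mathcal{T}_n \;=\; n\Bigl(\mathbf{a}+\tfrac{1}{n}\mathbf{p}\Bigr)\circ\Bigl(\mathbf{b}+\tfrac{1}{n}\mathbf{q}\Bigr)\circ\Bigl(\mathbf{c}+\tfrac{1}{n}\mathbf{r}\Bigr) \;-\; n\,\mathbf{a}\circ\mathbf{b}\circ\mathbf{c}
\;\longrightarrow\;
\mathbf{p}\circ\mathbf{b}\circ\mathbf{c}+\mathbf{a}\circ\mathbf{q}\circ\mathbf{c}+\mathbf{a}\circ\mathbf{b}\circ\mathbf{r}.
```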
3.
Tensor decompositions are higher‐order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as CANDECOMP/PARAFAC (CP), which expresses a tensor as the sum of component rank‐one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience and web analysis. The task of computing CP, however, can be difficult. The typical approach is based on alternating least‐squares (ALS) optimization, but it is not accurate in the case of overfactoring. High accuracy can be obtained by using nonlinear least‐squares (NLS) methods; the disadvantage is that NLS methods are much slower than ALS. In this paper, we propose the use of gradient‐based optimization methods. We discuss the mathematical calculation of the derivatives and show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient‐based optimization methods are more accurate than ALS and faster than NLS in terms of total computation time. Copyright © 2011 John Wiley & Sons, Ltd.
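A minimal NumPy sketch of the computation the paper builds on: the gradient of the CP fit f = ½‖X − [[A, B, C]]‖² with respect to each factor matrix reduces to one matricized-tensor-times-Khatri-Rao product (MTTKRP) per mode, the same kernel that dominates an ALS sweep. Function names and layout conventions here are illustrative, not the authors' code.

```python
import numpy as np

def unfold(X, mode):
    """Matricize a third-order tensor along the given mode (C ordering)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker (Khatri-Rao) product of U (m x R) and V (n x R)."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def cp_gradient(X, A, B, C):
    """Gradients of f = 0.5 * ||X - [[A, B, C]]||^2 w.r.t. A, B, C.
    Each gradient costs one MTTKRP, the same as one ALS factor update."""
    GA = A @ ((B.T @ B) * (C.T @ C)) - unfold(X, 0) @ khatri_rao(B, C)
    GB = B @ ((A.T @ A) * (C.T @ C)) - unfold(X, 1) @ khatri_rao(A, C)
    GC = C @ ((A.T @ A) * (B.T @ B)) - unfold(X, 2) @ khatri_rao(A, B)
    return GA, GB, GC
```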
4.
I propose a framework for the linear prediction of a multiway array (i.e., a tensor) from another multiway array of arbitrary dimension, using the contracted tensor product. This framework generalizes several existing approaches, including methods to predict a scalar outcome from a tensor, a matrix from a matrix, or a tensor from a scalar. I describe an approach that exploits the multiway structure of both the predictors and the outcomes by restricting the coefficients to have reduced PARAFAC/CANDECOMP rank. I propose a general and efficient algorithm for penalized least-squares estimation, which allows for a ridge (L2) penalty on the coefficients. The objective is shown to give the mode of a Bayesian posterior, which motivates a Gibbs sampling algorithm for inference. I illustrate the approach with an application to facial image data. An R package is available at https://github.com/lockEF/MultiwayRegression.
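A NumPy sketch of the contracted tensor product and the low-rank coefficient restriction described above; shapes and function names are hypothetical, and the authors' actual implementation is the MultiwayRegression R package linked in the abstract.

```python
import numpy as np

def contracted_product(X, B):
    """Predict a (n, q1, q2) outcome array from a (n, p1, p2) predictor
    array by contracting the non-sample modes of X against the leading
    modes of the (p1, p2, q1, q2) coefficient array B."""
    return np.einsum('npq,pqjk->njk', X, B)

def cp_coefficients(U1, U2, V1, V2):
    """Coefficient array of CP rank R assembled from factor matrices
    U1 (p1 x R), U2 (p2 x R), V1 (q1 x R), V2 (q2 x R)."""
    return np.einsum('pr,qr,jr,kr->pqjk', U1, U2, V1, V2)
```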
5.
We introduce a special form of p × p × 2 (p ≥ 2) tensors obtained by multilinear orthonormal transformations, and present some interesting properties of this form. By analysing the special form, we resolve a conjecture proposed by Stegeman and Comon in a conference paper (Proceedings of the EUSIPCO 2009 Conference, Glasgow, Scotland, 2009), and establish an important conclusion about subtracting a best rank-1 approximation from p × p × 2 tensors of the special form. All of this confirms that consecutively subtracting best rank-1 approximations may not lead to a best low-rank approximation of a tensor. Numerical examples illustrate the correctness of our theory. Copyright © 2011 John Wiley & Sons, Ltd.
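A sketch of the deflation procedure the conclusion concerns: repeatedly compute a (locally) best rank-1 approximation, here by the alternating higher-order power method, and subtract it. Unlike the matrix SVD, the residual after r subtractions need not be as small as the best rank-r approximation error, which is what the paper's analysis makes precise for the special form.

```python
import numpy as np

def best_rank1(T, iters=200, rng=np.random.default_rng(0)):
    """A (locally) best rank-1 term lambda * a o b o c of a third-order
    tensor, computed by the alternating higher-order power method."""
    a, b, c = (rng.standard_normal(s) for s in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

def deflate(T, r):
    """Subtract r successive best rank-1 terms from T; the residual norm
    need not match the best rank-r approximation error."""
    residual = T.copy()
    for _ in range(r):
        lam, a, b, c = best_rank1(residual)
        residual -= lam * np.einsum('i,j,k->ijk', a, b, c)
    return residual
```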
6.
The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
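A simplified contrast of the two subproblem solvers for one ALS block (factor A), written with the Khatri-Rao matrix Z formed explicitly for clarity; the paper's algorithms instead exploit the tensor structure so that Z is never materialized. Names and signatures are illustrative.

```python
import numpy as np

def update_normal_equations(X1, Z, BtB, CtC):
    """Standard CP-ALS step for min ||X_(1) - A Z^T||_F:
    A = X_(1) Z (Z^T Z)^{-1} with Z^T Z = (B^T B) * (C^T C) (Hadamard).
    Cheap to form, but the Gram matrix squares the condition number of Z."""
    G = BtB * CtC
    return np.linalg.solve(G, (X1 @ Z).T).T

def update_qr(X1, Z):
    """QR-based step: never forms Z^T Z, so the sensitivity is governed
    by the condition number of Z itself rather than its square."""
    Q, R = np.linalg.qr(Z)                    # Z = Q R with orthonormal Q
    return np.linalg.solve(R, (X1 @ Q).T).T   # A = X_(1) Q R^{-T}
```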
7.
This paper discusses an implementation of a modified CANDECOMP algorithm for fitting Lazarsfeld's latent class model. The CANDECOMP algorithm is modified so that the resulting parameter estimates are non-negative and 'best asymptotically normal'. To achieve this, the modified CANDECOMP algorithm minimizes a weighted least squares function instead of the unweighted least squares function minimized by the traditional CANDECOMP algorithm. To evaluate the new procedure, the modified CANDECOMP procedure with different weighting schemes is compared, on five published data sets, with the widely used iterative proportional fitting procedure for obtaining maximum likelihood estimates of the parameters in the latent class model. It is found that, with appropriate weights, the modified CANDECOMP algorithm yields solutions that are nearly identical to those obtained by the maximum likelihood procedure. While the modified CANDECOMP algorithm tends to be computationally more intensive than the maximum likelihood method, it is very flexible in that it easily allows one to try out different weighting schemes.
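In LaTeX, a plausible form of the modified criterion (a sketch assembled from the description above, not the paper's exact notation): the latent class probabilities have exactly the trilinear CANDECOMP structure, and the modification replaces the unweighted fit with a weighted one; the specific weighting schemes compared, and the choice yielding best asymptotically normal estimates, are detailed in the paper.

```latex
% Latent class model for a three-way table: class proportions \pi_r and
% conditional response probabilities \pi_{i|r}, \pi_{j|r}, \pi_{k|r}
\min_{\pi}\;\sum_{i,j,k} w_{ijk}\,\Bigl( p_{ijk} \;-\; \sum_{r=1}^{R} \pi_r\,\pi_{i|r}\,\pi_{j|r}\,\pi_{k|r} \Bigr)^{2}
% subject to non-negativity and the probability-sum constraints;
% w_{ijk} \equiv 1 recovers the traditional unweighted CANDECOMP fit.
```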
8.
It is well known that least‐squares (LS) methods uniquely identify the parameters of the CANDECOMP/PARAFAC model if Kruskal's condition is satisfied. By contrast, a stricter sufficient condition applies to eigenvalue‐based methods like the generalized rank annihilation method (GRAM). This discrepancy suggests that LS methods can solve problems for which GRAM must fail. However, GRAM has been specifically introduced for the special case of a three‐way array with two frontal slices only (i.e. K = 2). Here, it is shown that the two conditions are equivalent for this special case. Copyright © 2008 John Wiley & Sons, Ltd.
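For reference, Kruskal's condition in LaTeX: with factor matrices A, B, C of an R-component CP decomposition, and k_M denoting the Kruskal rank of M (the largest k such that every set of k columns of M is linearly independent), uniqueness up to permutation and scaling is guaranteed by:

```latex
k_A + k_B + k_C \;\ge\; 2R + 2
```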
9.
The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well-known alternating least squares (ALS) algorithm is often considered the workhorse for computing the CP decomposition, it is known to suffer from slow convergence in many cases, and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum-based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than state-of-the-art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overhead relative to ALS and is simple to apply. Empirically, the proposed algorithm shows strong performance compared with state-of-the-art acceleration techniques on both simulated and real tensors.
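A minimal sketch of one blockwise update in the spirit described above: an exact ALS minimization for the block, an extrapolation along the previous update direction, and an optional random perturbation. The step size beta, the perturbation scale, and any accept/reject safeguards are simplifications, not the paper's exact scheme.

```python
import numpy as np

def accelerated_block_update(als_solve, A_prev_min, beta=0.5, noise_scale=0.0,
                             rng=np.random.default_rng(0)):
    """One factor-matrix (block) update:
    1) minimization: the exact ALS solution for this block,
    2) extrapolation: move along the previous update direction,
    3) perturbation: small random jitter to break convergence bottlenecks.
    als_solve() returns the ALS minimizer given the other, fixed factors."""
    A_min = als_solve()                          # step 1: reduces the error
    A_new = A_min + beta * (A_min - A_prev_min)  # step 2: momentum extrapolation
    if noise_scale > 0:
        A_new += noise_scale * rng.standard_normal(A_new.shape)  # step 3
    return A_new, A_min  # A_min seeds the next extrapolation direction
```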