Similar Literature
20 similar documents found.
1.
We propose new tensor approximation algorithms for certain discrete functions related to the Hartree–Fock/Kohn–Sham equations. Given a canonical tensor representation for the electron density function (for example, produced by quantum chemistry packages such as MOLPRO), we obtain its Tucker approximation with far fewer parameters than the input data, as well as the Tucker approximation of the cubic root of this function, which is part of the Kohn–Sham exchange operator. The key idea is the fast and accurate prefiltering of possibly large-scale factors of the canonical tensor input. The new algorithms are based on the incomplete cross approximation method applied to matrices and tensors of order 3 and outperform other tools for the same purpose. First, we show that the cross approximation method is robust and much faster than the singular value decomposition-based approach. As a consequence, it becomes possible to increase the grid resolution and the complexity of molecules that can be handled by Hartree–Fock chemical models. Second, we propose a new fast approximation method for f^{1/3}(x, y, z), based on the factor prefiltering method for f(x, y, z) and a certain mimic approximation hypothesis. Third, we conclude that the Tucker format has advantages in storage and computation time compared with the ubiquitous canonical format. Copyright © 2009 John Wiley & Sons, Ltd.
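As an illustration of the cross (skeleton) approximation idea underlying this abstract, here is a minimal NumPy sketch using full pivoting; the incomplete cross approximation referred to above selects pivots heuristically and never forms the whole residual, so this is only the conceptual core, not the authors' algorithm.

import numpy as np

def cross_approximation(A, tol=1e-10, max_rank=50):
    # Greedy skeleton/cross approximation with full pivoting: peel off rank-1
    # crosses built from one row and one column of the current residual.
    R = A.astype(float).copy()
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        pivot = R[i, j]
        if abs(pivot) < tol:
            break
        cols.append(R[:, j] / pivot)   # scaled residual column
        rows.append(R[i, :].copy())    # residual row
        R -= np.outer(cols[-1], rows[-1])
    U = np.array(cols).T               # m x r
    V = np.array(rows).T               # n x r
    return U, V                        # A is approximated by U @ V.T

# Example: a smooth kernel matrix has rapidly decaying singular values
x = np.linspace(0.0, 1.0, 200)
A = 1.0 / (1.0 + np.add.outer(x, x))
U, V = cross_approximation(A)
print(U.shape[1], np.linalg.norm(A - U @ V.T) / np.linalg.norm(A))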

2.
Hierarchical tensors can be regarded as a generalisation, preserving many crucial features, of the singular value decomposition to higher-order tensors. For a given tensor product space, a recursive decomposition of the set of coordinates into a dimension tree gives a hierarchy of nested subspaces and corresponding nested bases. The dimensions of these subspaces yield a notion of multilinear rank. This rank tuple, as well as quasi-optimal low-rank approximations by rank truncation, can be obtained by a hierarchical singular value decomposition. For fixed multilinear ranks, the storage and operation complexity of these hierarchical representations scale only linearly in the order of the tensor. As in the matrix case, the set of hierarchical tensors of a given multilinear rank is not a convex set, but forms an open smooth manifold. A number of techniques for the computation of hierarchical low-rank approximations have been developed, including local optimisation techniques on Riemannian manifolds as well as truncated iteration methods, which can be applied for solving high-dimensional partial differential equations. This article gives a survey of these developments. We also discuss applications to problems in uncertainty quantification, to the solution of the electronic Schrödinger equation in the strongly correlated regime, and to the computation of metastable states in molecular dynamics.
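In the Tucker special case, the hierarchical singular value decomposition mentioned above reduces to the classical HOSVD. A minimal NumPy sketch of quasi-optimal multilinear rank truncation for an order-3 tensor follows (ranks and test data are illustrative, not from the article).

import numpy as np

def hosvd_truncate(T, ranks):
    # Quasi-optimal Tucker (multilinear rank) truncation via the HOSVD.
    # T: order-3 array, ranks: target multilinear ranks (r1, r2, r3).
    U = []
    for mode, r in enumerate(ranks):
        # Unfold along `mode` and keep the leading left singular vectors.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U.append(u[:, :r])
    # Core tensor: project onto the nested bases.
    G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    return G, U

def tucker_reconstruct(G, U):
    return np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

# Example: a tensor of exact multilinear rank (3, 3, 3) is recovered to machine precision
rng = np.random.default_rng(0)
T = np.einsum('ia,jb,kc,abc->ijk',
              rng.standard_normal((20, 3)), rng.standard_normal((20, 3)),
              rng.standard_normal((20, 3)), rng.standard_normal((3, 3, 3)))
G, U = hosvd_truncate(T, (3, 3, 3))
print(np.linalg.norm(T - tucker_reconstruct(G, U)) / np.linalg.norm(T))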

3.
In this paper we introduce and develop the notion of minimal subspaces in the framework of algebraic and topological tensor product spaces. This mathematical structure arises in a natural way in the study of tensor representations. We use minimal subspaces to prove the existence of a best approximation, for any element in a Banach tensor space, by means of a tensor given in a typical representation format (Tucker, hierarchical, or tensor train). We show that this result holds in a tensor Banach space with a norm stronger than the injective norm and in an intersection of finitely many Banach tensor spaces satisfying some additional conditions. Examples using topological tensor products of standard Sobolev spaces are given.

4.
In this paper, we propose a method for the numerical solution of linear systems of equations in low rank tensor format. Such systems may arise from the discretisation of PDEs in high dimensions, but our method is not limited to this type of application. We present an iterative scheme, which is based on the projection of the residual to a low dimensional subspace. The subspace is spanned by vectors in low rank tensor format which, similarly to Krylov subspace methods, stem from the subsequent (approximate) application of the given matrix to the residual. All calculations are performed in hierarchical Tucker format, which allows for applications in high dimensions. The mode size dependency is treated by a multilevel method. We present numerical examples that include high-dimensional convection–diffusion equations. Copyright © 2012 John Wiley & Sons, Ltd.
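The flavour of such low-rank iterative schemes can be conveyed by a two-dimensional analogue in which plain SVD truncation stands in for the hierarchical Tucker compression. The sketch below solves a Sylvester-type equation X + LX + XL = B by a Richardson iteration with re-truncation after every update; this is a simplification for illustration, not the residual-projection method of the paper.

import numpy as np

def svd_truncate(X, rank):
    # Best rank-`rank` approximation of a matrix; stands in here for the
    # hierarchical Tucker truncation used in the paper.
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

def truncated_richardson(apply_op, B, omega, rank, iters=200):
    # Richardson iteration on A(X) = B with re-compression of every iterate.
    X = np.zeros_like(B)
    for _ in range(iters):
        R = B - apply_op(X)                      # residual
        X = svd_truncate(X + omega * R, rank)    # update, then truncate the rank
    return X

# Illustrative Sylvester-type problem: X + L X + X L = B with a rank-1 right-hand side
n = 64
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
B = np.outer(np.sin(np.linspace(0.0, np.pi, n)), np.ones(n))
apply_op = lambda X: X + L @ X + X @ L
X = truncated_richardson(apply_op, B, omega=0.2, rank=8)
print(np.linalg.norm(apply_op(X) - B) / np.linalg.norm(B))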

5.
This article develops a new algorithm named TTRISK to solve high-dimensional risk-averse optimization problems governed by differential equations (ODEs and/or partial differential equations [PDEs]) under uncertainty. As an example, we focus on the so-called Conditional Value at Risk (CVaR), but the approach is equally applicable to other coherent risk measures. Both the full and reduced space formulations are considered. The algorithm is based on low rank tensor approximations of random fields discretized using stochastic collocation. To avoid nonsmoothness of the objective function underpinning the CVaR, we propose an adaptive strategy to select the width parameter of the smoothed CVaR to balance the smoothing and tensor approximation errors. Moreover, an unbiased Monte Carlo CVaR estimate can be computed by using the smoothed CVaR as a control variate. To accelerate the computations, we introduce an efficient preconditioner for the Karush–Kuhn–Tucker (KKT) system in the full space formulation. The numerical experiments demonstrate that the proposed method enables accurate CVaR optimization constrained by large-scale discretized systems. In particular, the first example consists of an elliptic PDE with random coefficients as constraints. The second example is motivated by a realistic application to devise a lockdown plan for the United Kingdom under COVID-19. The results indicate that the risk-averse framework is feasible with the tensor approximations for problems with tens of random variables.
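For orientation, the Rockafellar–Uryasev formulation on which CVaR optimization is usually based, together with a typical smoothing of the positive part, is sketched below; the specific smoothing and the rule for choosing its width parameter in TTRISK may differ from this generic choice.

% Rockafellar--Uryasev formulation of CVaR at confidence level \beta for a random loss X:
\mathrm{CVaR}_\beta(X) = \min_{t \in \mathbb{R}} \Big\{ t + \frac{1}{1-\beta}\, \mathbb{E}\big[(X - t)_+\big] \Big\}, \qquad (s)_+ = \max(s, 0).
% A smoothed surrogate replaces (s)_+ by a smooth approximation of width \varepsilon, for example the softplus:
g_\varepsilon(s) = \varepsilon \log\big(1 + e^{s/\varepsilon}\big), \qquad
\mathrm{CVaR}_\beta^{\varepsilon}(X) = \min_{t \in \mathbb{R}} \Big\{ t + \frac{1}{1-\beta}\, \mathbb{E}\big[g_\varepsilon(X - t)\big] \Big\}.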

6.
A stochastic chemical system with multiple types of molecules interacting through reaction channels can be modeled as a continuous-time Markov chain with a countably infinite multidimensional state space. Starting from an initial probability distribution, the time evolution of the probability distribution associated with this continuous-time Markov chain is described by a system of ordinary differential equations, known as the chemical master equation (CME). This paper shows how one can solve the CME using backward differentiation. In doing this, a novel approach to truncate the state space at each time step using a prediction vector is proposed. The infinitesimal generator matrix associated with the truncated state space is represented compactly, and exactly, using a sum of Kronecker products of matrices associated with molecules. This exact representation is already compact and does not require a low-rank approximation in the hierarchical Tucker decomposition (HTD) format. During transient analysis, compact solution vectors in HTD format are employed with the exact, compact, and truncated generator matrices in Kronecker form, and the linear systems are solved with the Jacobi method using fixed or adaptive rank control strategies on the compact vectors. Results of simulation on benchmark models are compared with those of the proposed solver and another version, which works with compact vectors and highly accurate low-rank approximations of the truncated generator matrices in quantized tensor train format and solves the linear systems with the density matrix renormalization group method. Results indicate that there is a reason to solve the CME numerically, and adaptive rank control strategies on compact vectors in HTD format improve time and memory requirements significantly.
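The computational point of such a Kronecker representation is that the generator never has to be formed explicitly: applying a Kronecker product of small matrices to a vector is a reshape and two matrix products. A minimal NumPy sketch for a single two-factor term follows (purely illustrative; in the CME setting the generator is a sum of such terms, with one factor per molecule type).

import numpy as np

def apply_kron(A, B, x):
    # Compute (A kron B) @ x without forming the Kronecker product.
    # If A is m-by-m and B is n-by-n, x has length m*n and is reshaped to an
    # m-by-n array X; with row-major vectorization, (A kron B) x = vec(A @ X @ B.T).
    m, n = A.shape[0], B.shape[0]
    X = x.reshape(m, n)
    return (A @ X @ B.T).reshape(-1)

# Toy check against the explicit Kronecker product
rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((5, 5))
x = rng.standard_normal(20)
print(np.allclose(apply_kron(A, B, x), np.kron(A, B) @ x))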

7.
Computational Mathematics and Mathematical Physics - A new elementwise bound on the cross approximation error used for approximating multi-index arrays (tensors) in the format of a tensor train is...

8.
Mathematical models with uncertainties are often described by stochastic partial differential equations (SPDEs) with multiplicative noise. The coefficients, the right-hand side, and the boundary conditions are modelled by random fields. As a result, the solution is also a random field. We propose to use the Karhunen-Loève expansion (KLE) to compute a sparse data format for the fast generation and representation of these random fields. The KLE of a random field requires the solution of a large eigenvalue problem. Usually it is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of both the sparse hierarchical matrix format and the low-rank Kronecker tensor format. (© 2009 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
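In its simplest discretized form, a truncated Karhunen-Loève expansion amounts to an eigendecomposition of the covariance matrix on the grid, followed by sampling with independent standard normal coefficients. The NumPy sketch below uses an illustrative squared-exponential kernel and omits quadrature weights for brevity; the article's point is precisely that hierarchical matrix or Kronecker tensor formats make this large eigenvalue problem tractable.

import numpy as np

# Grid and a squared-exponential covariance kernel (illustrative choices)
n = 400
x = np.linspace(0.0, 1.0, n)
ell, sigma2 = 0.1, 1.0
C = sigma2 * np.exp(-0.5 * (np.subtract.outer(x, x) / ell) ** 2)

# Truncated KLE: keep the m leading eigenpairs of the covariance matrix
m = 20
lam, phi = np.linalg.eigh(C)                 # eigenvalues in ascending order
lam, phi = lam[::-1][:m], phi[:, ::-1][:, :m]

# Sample one realization: kappa(x) = mean + sum_k sqrt(lam_k) * xi_k * phi_k(x)
rng = np.random.default_rng(42)
xi = rng.standard_normal(m)
kappa = 1.0 + phi @ (np.sqrt(np.clip(lam, 0.0, None)) * xi)
print(kappa.shape, lam[:3])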

9.

In this article, we analyze tensor approximation schemes for continuous functions. We assume that the function to be approximated lies in an isotropic Sobolev space and discuss the cost of approximating this function in the continuous analogue of the Tucker tensor format or of the tensor train format. In particular, we show that the costs of both approximations are dimension-robust when the Sobolev space under consideration provides appropriate dimension weights.


10.
An explicit algebraic turbulent-stress model is built in the framework of the so-called weak-equilibrium approximation of Rodi, which, taking into account the known model representations for the pressure-strain-rate correlation and the turbulence-dissipation rate, reduces the differential equations for the Reynolds-tensor components to a system of quasi-linear algebraic equations for the five independent components of the anisotropy tensor B. We propose an original method for solving this quasi-linear system. The tensor B is sought in the form of an expansion in a tensorial basis formed from the mean strain-rate and rotation-rate tensors, which contains only five elements. The expansion coefficients are functions of five simultaneous invariants of these tensors. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
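The general shape of such a representation is sketched below; the abstract does not list the authors' five basis tensors or invariants, so the elements shown are typical examples of this kind of basis rather than their specific choice.

% Generic form of an explicit algebraic stress representation: the anisotropy tensor B is
% expanded in a finite tensorial basis built from the mean strain-rate tensor S and the
% rotation-rate tensor W, with scalar coefficients depending on simultaneous invariants:
\mathbf{B} = \sum_{i=1}^{5} G_i(\eta_1, \ldots, \eta_5)\, \mathbf{T}^{(i)},
\qquad \eta_1 = \operatorname{tr}(\mathbf{S}^2), \quad \eta_2 = \operatorname{tr}(\mathbf{W}^2), \ \ldots
% Typical basis elements of this kind (examples only):
\mathbf{T}^{(1)} = \mathbf{S}, \qquad
\mathbf{T}^{(2)} = \mathbf{S}\mathbf{W} - \mathbf{W}\mathbf{S}, \qquad
\mathbf{T}^{(3)} = \mathbf{S}^2 - \tfrac{1}{3}\operatorname{tr}(\mathbf{S}^2)\, \mathbf{I}.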

11.
This paper addresses the solution of parabolic evolution equations simultaneously in space and time as may be of interest in, for example, optimal control problems constrained by such equations. As a model problem, we consider the heat equation posed on the unit cube in Euclidean space of moderately high dimension. An a priori stable minimal residual Petrov–Galerkin variational formulation of the heat equation in space–time results in a generalized least squares problem. This formulation admits a unique, quasi-optimal solution in the natural space–time Hilbert space and serves as a basis for the development of space–time compressive solution algorithms. The solution of the heat equation is obtained by applying the conjugate gradient method to the normal equations of the generalized least squares problem. Starting from stable subspace splittings in space and in time, multilevel space–time preconditioners for the normal equations are derived. In order to reduce the complexity of the full space–time problem, all computations are performed in a compressed or sparse format called the hierarchical Tucker format, supposing that the input data are available in this format. In order to maintain sparsity, compression of the iterates within the hierarchical Tucker format is performed in each conjugate gradient iteration. Its application to vectors in the hierarchical Tucker format is detailed. Finally, numerical results in up to five spatial dimensions based on the recently developed htucker toolbox for MATLAB are presented. Copyright © 2014 John Wiley & Sons, Ltd.
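The linear-algebra core of such a solver is conjugate gradients applied to the normal equations of a least squares problem (CGNR). The sketch below works with plain NumPy vectors and marks the place where, in the space-time method, the iterate would be re-compressed in the hierarchical Tucker format; the operator and data are illustrative.

import numpy as np

def cgnr(apply_B, apply_Bt, f, iters=100, tol=1e-10):
    # Conjugate gradients on the normal equations  B^T B u = B^T f.
    # In the space-time setting, u would be kept in hierarchical Tucker format
    # and truncated (re-compressed) after each update of the iterate.
    u = np.zeros(apply_Bt(f).shape)
    r = apply_Bt(f - apply_B(u))        # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Bp = apply_B(p)
        alpha = rs / (Bp @ Bp)
        u = u + alpha * p               # <- compression of the iterate would go here
        r = r - alpha * apply_Bt(Bp)
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

# Tiny overdetermined example
rng = np.random.default_rng(3)
B = rng.standard_normal((50, 20))
f = rng.standard_normal(50)
u = cgnr(lambda v: B @ v, lambda w: B.T @ w, f)
print(np.linalg.norm(B.T @ (B @ u - f)))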

12.
Biquadratic tensors play a central role in many areas of science. Examples include the elastic tensor and the Eshelby tensor in solid mechanics, and the Riemannian curvature tensor in relativity theory. The singular values and spectral norm of a general third order tensor are the square roots of the M-eigenvalues and spectral norm of a biquadratic tensor, respectively. The tensor product operation is closed for biquadratic tensors. All of these motivate us to study biquadratic tensors, biquadratic decomposition, and norms of biquadratic tensors. We show that the spectral norm and nuclear norm of a biquadratic tensor may be computed by using its biquadratic structure. Then, either the number of variables is reduced, or the feasible region can be reduced. We show constructively that for a biquadratic tensor, a biquadratic rank-one decomposition always exists, and show that the biquadratic rank of a biquadratic tensor is preserved under an independent biquadratic Tucker decomposition. We present a lower bound and an upper bound for the nuclear norm of a biquadratic tensor. Finally, we define invertible biquadratic tensors, and present a lower bound for the product of the nuclear norms of an invertible biquadratic tensor and its inverse, and a lower bound for the product of the nuclear norm of an invertible biquadratic tensor and the spectral norm of its inverse.

13.
Operations with tensors, or multiway arrays, have become increasingly prevalent in recent years. Traditionally, tensors are represented or decomposed as a sum of rank-1 outer products using either the CANDECOMP/PARAFAC (CP) or the Tucker models, or some variation thereof. Such decompositions are motivated by specific applications where the goal is to find an approximate such representation for a given multiway array. The specifics of the approximate representation (such as how many terms to use in the sum, orthogonality constraints, etc.) depend on the application. In this paper, we explore an alternate representation of tensors which shows promise with respect to the tensor approximation problem. Reminiscent of matrix factorizations, we present a new factorization of a tensor as a product of tensors. To derive the new factorization, we define a closed multiplication operation between tensors. A major motivation for considering this new type of tensor multiplication is to devise new types of factorizations for tensors which can then be used in applications. Specifically, this new multiplication allows us to introduce concepts such as tensor transpose, inverse, and identity, which lead to the notion of an orthogonal tensor. The multiplication also gives rise to a linear operator, and the null space of the resulting operator is identified. We extend the concept of outer products of vectors to outer products of matrices. All derivations are presented for third-order tensors. However, they can be easily extended to the order-p (p > 3) case. We conclude with an application in image deblurring.
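The best-known closed tensor-tensor multiplication of this kind is computed facewise in the Fourier domain along the third mode (a circular-convolution-type product). A minimal NumPy sketch under that assumption, including the corresponding identity tensor, is given below.

import numpy as np

def t_product(A, B):
    # Facewise product in the Fourier domain along the third mode.
    # A: n1 x n2 x n3,  B: n2 x l x n3  ->  C: n1 x l x n3.
    # Real input gives real output, since the product is a circular convolution
    # of the tubes along the third mode.
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # multiply matching frontal slices
    return np.real(np.fft.ifft(Ch, axis=2))

# Identity-like behaviour: the tensor whose first frontal slice is I acts as an identity
n1, n2, n3 = 4, 3, 5
rng = np.random.default_rng(7)
A = rng.standard_normal((n1, n2, n3))
E = np.zeros((n2, n2, n3))
E[:, :, 0] = np.eye(n2)                      # tensor identity for this product
print(np.allclose(t_product(A, E), A))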

14.
We investigate first-order conditions for canonical and optimal subspace (Tucker format) tensor product approximations to square integrable functions. They reveal that the best approximation and all of its factors have the same smoothness as the approximated function itself. This is not obvious, since the approximation is performed in L^2.

15.
Problems of best tensor product approximation of low orthogonal rank can be formulated as maximization problems on Stiefel manifolds. The functionals that appear are convex and weakly sequentially continuous. It is shown that such problems are always well-posed, even in the case of non-compact Stiefel manifolds. As a consequence, problems of finding a best orthogonal, strong orthogonal or complete orthogonal low-rank tensor product approximation and problems of best Tucker format approximation to any given tensor are always well-posed, even in spaces of infinite dimension. (The best rank-one approximation is a special case of all of them.) In addition, the well-posedness of a canonical low-rank approximation with bounded coefficients can be shown. The proofs are non-constructive and the problem of computation is not addressed here.
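The best rank-one approximation mentioned at the end is, in practice, usually computed with an alternating (higher-order power) iteration. This is outside the scope of the abstract, whose contribution is the well-posedness analysis, but a minimal NumPy sketch of that standard iteration reads as follows; it returns a locally best rank-one approximation.

import numpy as np

def rank_one_als(T, iters=200, seed=0):
    # Alternating (higher-order power) iteration for a rank-one approximation
    # sigma * (u outer v outer w) of a third-order tensor T.
    # In general this converges to a locally best approximation only.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0]); u /= np.linalg.norm(u)
    v = rng.standard_normal(T.shape[1]); v /= np.linalg.norm(v)
    w = rng.standard_normal(T.shape[2]); w /= np.linalg.norm(w)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma, u, v, w

T = np.random.default_rng(5).standard_normal((6, 7, 8))
sigma, u, v, w = rank_one_als(T)
print(np.linalg.norm(T - sigma * np.einsum('i,j,k->ijk', u, v, w)))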

16.
Journal of Optimization Theory and Applications - The tensor CUR decomposition in the Tucker format is a special case of Tucker decomposition with a low multilinear rank, where factor matrices are...

17.
We construct a soft thresholding operation for rank reduction in hierarchical tensors and subsequently consider its use in iterative thresholding methods, in particular for the solution of discretized high-dimensional elliptic problems. The proposed method for the latter case adjusts the thresholding parameters, by an a posteriori criterion requiring only bounds on the spectrum of the operator, such that the arising tensor ranks of the resulting iterates remain quasi-optimal with respect to the algebraic or exponential-type decay of the hierarchical singular values of the true solution. In addition, we give a modified algorithm using inexactly evaluated residuals that retains these features. The effectiveness of the scheme is demonstrated in numerical experiments.
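The matrix (order-2) analogue of such a soft thresholding operation is soft thresholding of singular values, which in the hierarchical case is applied to the matricizations associated with the dimension tree. A minimal NumPy sketch of the matrix case follows; the threshold and test data are illustrative.

import numpy as np

def soft_threshold_svd(X, tau):
    # Soft thresholding of singular values: shrink each sigma_i to max(sigma_i - tau, 0).
    # Unlike hard truncation at a fixed rank, the resulting rank adapts to tau.
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    r = int(np.count_nonzero(s_shrunk))
    return (u[:, :r] * s_shrunk[:r]) @ vt[:r], r

rng = np.random.default_rng(11)
X = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 100))   # exact rank 8
X += 1e-3 * rng.standard_normal((100, 100))                          # small noise
Y, r = soft_threshold_svd(X, tau=0.5)
print(r, np.linalg.norm(X - Y))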

18.
We consider tensors with coefficients in a commutative differential algebra A. Using the Lie derivative, we introduce the notion of a tensor invariant under a derivation on an ideal of A. Each system of partial differential equations generates an ideal in some differential algebra. This makes it possible to study invariant tensors on such an ideal. As examples we consider the equations of gas dynamics and magnetohydrodynamics.

19.
In the present survey, we consider a rank approximation algorithm for tensors represented in the canonical format in arbitrary pre-Hilbert tensor product spaces. It is shown that the original approximation problem is equivalent to a finite dimensional ℓ2 minimization problem. The ℓ2 minimization problem is solved by a regularized Newton method which requires the computation and evaluation of the first and second derivative of the objective function. A systematic choice of the initial guess for the iterative scheme is introduced. The effectiveness of the approach is demonstrated in numerical experiments.
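In generic form, a regularized Newton step for the resulting minimization reads as below; a Tikhonov-type regularization of the Hessian is assumed here, and the survey's exact regularization and initial guess strategy may differ.

% Regularized Newton step for minimizing the objective J over the canonical parameters x:
x_{k+1} = x_k - \big(\nabla^2 J(x_k) + \mu_k I\big)^{-1} \nabla J(x_k), \qquad \mu_k > 0,
% where, for instance, J(x) = \tfrac{1}{2}\, \| T - T_{\mathrm{CP}}(x) \|^2 measures the distance
% between the given tensor T and the canonical (CP) representation with parameter vector x,
% and the shift \mu_k safeguards the step when the Hessian is indefinite or ill-conditioned.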

20.
We study iterative methods for solving a set of sparse non-negative tensor equations (multivariate polynomial systems) arising from data mining applications such as information retrieval by query search and community discovery in multi-dimensional networks. By making use of the sparse and non-negative tensor structure, we develop Jacobi and Gauss-Seidel methods for solving the tensor equations. Multiplication of tensors with vectors is required at each iteration of these methods, so the cost per iteration depends on the number of non-zeros in the sparse tensors. We show linear convergence of the Jacobi and Gauss-Seidel methods under suitable conditions, and therefore the set of sparse non-negative tensor equations can be solved very efficiently. Experimental results on information retrieval by query search and community discovery in multi-dimensional networks are presented to illustrate the application of tensor equations and the effectiveness of the proposed methods.
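For orientation, the linear-system analogue of the Jacobi sweep that the paper generalizes to sparse non-negative tensor equations is sketched below; as in the tensor case, the per-iteration cost is governed by the number of nonzeros. The matrices and data are illustrative, not taken from the paper.

import numpy as np
import scipy.sparse as sp

def jacobi(A, b, iters=500, tol=1e-10):
    # Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k) for a sparse system A x = b,
    # where D is the diagonal of A; each sweep costs one sparse matrix-vector product.
    D = A.diagonal()
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x_new = (b - A @ x + D * x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant test problem (1D Laplacian stencil plus a shift)
n = 200
A = sp.diags([-1.0, 3.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
x = jacobi(A, b)
print(np.linalg.norm(A @ x - b))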

