Computing f-divergences and distances of high-dimensional probability density functions
Authors:Alexander Litvinenko  Youssef Marzouk  Hermann G Matthies  Marco Scavino  Alessio Spantini
Institution:1. Department of Mathematics, RWTH Aachen University, Aachen, Germany;2. Department of Aeronautics and Astronautics, MIT, Cambridge, Massachusetts, USA;3. Carl-Friedrich-Gauß-Fakultät, TU Braunschweig, Braunschweig, Germany;4. Departamento de Métodos Matemático-Cuantitativos, Universidad de la República, IESTA, Montevideo, Uruguay
Abstract:In uncertainty quantification tasks and data analysis one frequently has to deal with high-dimensional random variables. The interest here is mainly in computing characterizations such as the entropy, the Kullback–Leibler divergence, more general f-divergences, or similar quantities based on the probability density. The density is often not available directly, and even representing it in a numerically feasible fashion is a computational challenge when the dimension is only moderately large; actually computing the aforementioned characteristics in the high-dimensional case is harder still. To this end it is proposed to approximate the discretized density in a compressed form, in particular as a low-rank tensor. This representation can alternatively be obtained from the corresponding probability characteristic function, or from more general representations of the underlying random variable. The mentioned characterizations require point-wise functions such as the logarithm. This normally trivial task becomes computationally difficult when the density is stored in a compressed, i.e. low-rank, tensor format, because the point values are not directly accessible. The computations become possible by viewing the compressed data as an element of an associative, commutative algebra with an inner product, and by applying matrix algorithms to carry out the mentioned tasks. The representation as a low-rank element of a high-order tensor space reduces the computational complexity and storage cost from exponential in the dimension to almost linear.
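The cost reduction from low-rank structure can be illustrated on the simplest possible case. The following sketch (our own toy example, not the paper's algorithm, and all names are ours) discretizes two product densities on a 2-D grid: a product density is exactly a rank-1 tensor, so the Kullback–Leibler divergence splits into one-dimensional sums and never requires forming or taking logarithms of the full grid.

```python
import numpy as np

n = 200  # grid points per dimension

def discrete_density(loc, scale):
    """Normalized 1-D Gaussian weights on a uniform grid (toy example)."""
    x = np.linspace(-5.0, 5.0, n)
    w = np.exp(-0.5 * ((x - loc) / scale) ** 2)
    return w / w.sum()

# Two product densities p(x, y) = p1(x) p2(y), q(x, y) = q1(x) q2(y):
# each is a rank-1 tensor, stored as 2n numbers instead of n**2.
p1, p2 = discrete_density(0.0, 1.0), discrete_density(0.5, 1.2)
q1, q2 = discrete_density(0.3, 0.9), discrete_density(-0.2, 1.1)

# Full-grid KL divergence: O(n**2) storage and work.
P = np.outer(p1, p2)
Q = np.outer(q1, q2)
kl_full = np.sum(P * np.log(P / Q))

# Exploiting the rank-1 structure: log(p1*p2 / (q1*q2)) separates,
# so the double sum collapses into two 1-D sums -- O(n) work.
kl_lowrank = np.sum(p1 * np.log(p1 / q1)) + np.sum(p2 * np.log(p2 / q2))

print(kl_full, kl_lowrank)  # the two values agree to machine precision
```

For general (non-product) densities the log of a low-rank tensor does not separate this way, which is exactly the difficulty the paper addresses with matrix-algorithm techniques on the compressed representation.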
Keywords:computational algorithms  f-divergence  high-dimensional probability density  Kullback–Leibler divergence  low-rank approximation  tensor representation