a Ruhr-Universität Bochum, Fakultät für Mathematik, Universitätsstraße 150, NA 4/27, D-44780 Bochum, Germany; b Justus-Liebig-Universität Gießen, Lehrstuhl für Numerische Mathematik, Heinrich-Buff-Ring 44, D-35392 Gießen, Germany
Abstract:
When learning processes depend on samples but not on the order of the information within the sample, the Bernoulli distribution is relevant and Bernstein polynomials enter into the analysis. We derive estimates for the approximation of the entropy function $x \log x$ that are sharper than the bounds obtained from Voronovskaja's theorem. In this way we obtain the correct asymptotics of the Kullback–Leibler distance for an encoding problem.
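For context, the objects named above admit a standard description (a sketch using textbook definitions, not formulas quoted from the paper itself). The Bernstein polynomial of degree $n$ for a function $f$ on $[0,1]$ is
\[
B_n f(x) = \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right) \binom{n}{k} x^k (1-x)^{n-k},
\]
where the weights $\binom{n}{k} x^k (1-x)^{n-k}$ are the probabilities of a binomial distribution built from $n$ independent Bernoulli trials with success probability $x$; this is how the Bernoulli distribution enters the analysis. Voronovskaja's theorem states that, for $f$ twice differentiable at $x$,
\[
\lim_{n\to\infty} n \bigl( B_n f(x) - f(x) \bigr) = \frac{x(1-x)}{2}\, f''(x),
\]
so for the entropy function $f(x) = x \log x$, with $f''(x) = 1/x$, the limit is $(1-x)/2$. The estimates derived in the paper sharpen this asymptotic statement into explicit bounds.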