Affiliation: | 1. ECARES, Université libre de Bruxelles, Brussels, Belgium; 2. Département de Mathématique, Université libre de Bruxelles, Brussels, Belgium; 3. Institute for Statistics, Graz University of Technology, Graz, Austria; 4. Einaudi Institute for Economics and Finance, Rome, Italy |
Abstract: | Dimension reduction techniques are at the core of the statistical analysis of high-dimensional and functional observations. Whether the data are vector- or function-valued, principal component techniques play a central role in this context. The success of principal components in the dimension reduction problem is explained by the fact that, for any \(K\le p\), the first \(K\) coefficients in the expansion of a \(p\)-dimensional random vector \(\mathbf{X}\) in terms of its principal components provide the best linear \(K\)-dimensional summary of \(\mathbf{X}\) in the mean square sense. The same property holds for a random function and its functional principal component expansion. This optimality feature, however, no longer holds in a time series context: when the observations are serially dependent, principal components and functional principal components lose their optimal dimension reduction property to the so-called dynamic principal components introduced by Brillinger in 1981 in the vector case and, in the functional case, to their functional extension proposed by Hörmann, Kidziński and Hallin in 2015. |