Optimal Dimension Reduction for Vector and Functional Time Series
Friday, April 27, 2018 - 9:00am - 9:30am
Dimension reduction techniques are at the core of the statistical analysis of high-dimensional observations. Whether the data are vector- or function-valued, principal component techniques play a central role in this context. The success of principal components in the dimension reduction problem is explained by the fact that, for any K≤p, the first K coefficients in the expansion of a p-dimensional random vector X in terms of its principal components provide the best linear K-dimensional summary of X in the mean square sense. This optimality feature, however, no longer holds in a time series context: when the observations are serially dependent, principal components lose their optimal dimension reduction property to the so-called dynamic principal components introduced by Brillinger in 1981 for the vector case and, in the functional case, to their functional extension proposed by Hörmann, Kidzinski and Hallin (JRSS Ser. B 2015). Principal components similarly are crucial tools in the estimation of factor models: traditional principal components in the approach proposed by Stock and Watson (JASA 2002) or Bai and Ng (Econometrica 2002); dynamic ones in the approach of Forni et al. (Review of Economics and Statistics 2000). The optimal dimension reduction properties of dynamic principal components explain why the latter, in general, are more parsimonious and perform better, under less restrictive assumptions.
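The static optimality claim above can be illustrated numerically. The following sketch (not from the talk; all names and parameters are chosen for illustration) compares the mean squared reconstruction error of the top-K principal component projection against an arbitrary rank-K orthonormal projection, for i.i.d. data where classical principal components are indeed optimal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 500, 10, 3                     # hypothetical sample size and dimensions

# Simulate n i.i.d. draws of a correlated p-dimensional vector X
A = rng.normal(size=(p, p))
X = rng.normal(size=(n, p)) @ A
X -= X.mean(axis=0)                      # center the data

# Principal components via eigendecomposition of the sample covariance
cov = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
V = eigvecs[:, ::-1][:, :K]              # top-K principal directions

def mse(P):
    """Mean squared reconstruction error of projecting X onto the
    K-dimensional subspace spanned by the orthonormal columns of P."""
    Xhat = X @ P @ P.T
    return np.mean((X - Xhat) ** 2)

# Any other rank-K orthonormal projection reconstructs X no better
Q, _ = np.linalg.qr(rng.normal(size=(p, K)))   # a random orthonormal basis
print(mse(V) <= mse(Q))                        # True: PCA minimizes the MSE
```

For serially dependent data, the analogous comparison favors Brillinger's dynamic principal components, which filter the series across time rather than projecting each observation separately; a faithful sketch of that construction requires a spectral density estimate and two-sided filters and is omitted here.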