High-dimensional Linear Regression for Dependent Observations with Application to Nowcasting

Monday, April 23, 2018 - 1:30pm - 2:00pm
Keller 3-180
Ruey Tsay (University of Chicago)
In the last few years, an extensive literature has focused on the ℓ1-penalized least squares (Lasso) estimator of high-dimensional linear regression when the number of covariates p is considerably larger than the sample size n. However, limited attention has been paid to the properties of the estimator when the errors and/or the covariates are serially dependent. In this study, we investigate the theoretical properties of the Lasso estimator for linear regression with random design under serially dependent and/or non-sub-Gaussian errors and covariates. In contrast to the traditional case, in which the errors are i.i.d. and have finite exponential moments, we show that p can be a power of n if the errors have only polynomial moments. In addition, the rate of convergence becomes slower due to the serial dependence in the errors and the covariates. We also consider sign consistency for model selection via the Lasso when there are serial correlations in the errors, the covariates, or both. Adopting the framework of the functional dependence measure, we provide a detailed description of how the rates of convergence and the selection consistency of the estimator depend on the dependence measures and moment conditions of the errors and the covariates. Simulation results show that Lasso regression can be substantially more powerful than mixed data sampling (MIDAS) regression in the presence of irrelevant variables. We apply the results obtained for the Lasso method to nowcasting with mixed-frequency data, for which serially correlated errors and a large number of covariates are common. The empirical analysis shows that the Lasso procedure outperforms MIDAS in both forecasting and nowcasting. (Joint work with Y. Han of the Department of Statistics, University of Chicago.)
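The setting the abstract describes can be illustrated with a minimal simulation sketch (not the authors' actual experiment): a sparse linear model with p > n covariates and serially dependent AR(1) errors, fitted with the Lasso. The sample size, sparsity level, AR coefficient, and penalty level below are arbitrary choices for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative sketch: Lasso with p > n and serially dependent errors.
# All constants here are ad hoc, not taken from the paper.
rng = np.random.default_rng(0)
n, p, s = 200, 400, 5            # sample size, number of covariates, sparsity
beta = np.zeros(p)
beta[:s] = 3.0                    # true nonzero coefficients

X = rng.standard_normal((n, p))   # random design

# AR(1) errors e_t = 0.5 * e_{t-1} + z_t introduce serial dependence,
# violating the classical i.i.d. assumption.
z = rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.5 * e[t - 1] + z[t]
y = X @ beta + e

model = Lasso(alpha=0.3)          # penalty level chosen ad hoc
model.fit(X, y)

selected = np.flatnonzero(model.coef_)
print("selected covariates:", selected)
```

With a strong signal, the Lasso recovers the true support even under the mildly dependent errors above; the theory summarized in the abstract quantifies how the attainable rates degrade as dependence strengthens or moment conditions weaken.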