Recovering sparsity in high dimensions

Tuesday, October 28, 2008 - 10:55am - 11:45am
EE/CS 3-180
Ronald DeVore (Texas A & M University)
We assume that we are in $\mathbb{R}^N$ with $N$ large. The first problem we consider is that of a function $f$ defined on $\Omega:=[0,1]^N$ which depends on just $k$ of the coordinate variables: $f(x_1,\dots,x_N)=g(x_{j_1},\dots,x_{j_k})$, where the indices $j_1,\dots,j_k$ are not known to us. We want to approximate $f$ from some of its point values. We first assume that we are allowed to choose a collection of points in $\Omega$ and ask for the values of $f$ at these points. We are interested in what error we can achieve in terms of the number of points when we assume some smoothness of $g$ in the form of Lipschitz or higher smoothness conditions.
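The setup can be sketched as follows. This is a minimal illustration, not part of the talk: the choice of $g$, the dimensions, and all variable names are assumptions made for the example.

```python
import random

N = 1000  # ambient dimension (large)
k = 3     # number of active coordinates (small)

# The unknown index set {j_1, ..., j_k}; the approximation scheme
# does not get to see this.
hidden = random.sample(range(N), k)

def g(*coords):
    # An illustrative Lipschitz function of k variables.
    return sum(abs(c - 0.5) for c in coords)

def f(x):
    # f is defined on [0,1]^N but depends only on the k hidden coordinates.
    return g(*(x[j] for j in hidden))

# A point query: we may evaluate f at any point of [0,1]^N.
x = [random.random() for _ in range(N)]
y = f(x)
```

Changing any coordinate outside the hidden set leaves $f$ unchanged, which is what makes recovery from relatively few point values plausible.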

We shall consider two settings: adaptive and non-adaptive. In the adaptive setting, we are allowed to ask for a value of $f$ and then on the basis of the answer we get we can ask for another value. In the non-adaptive setting, we must prescribe the $m$ points in advance.
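To make the adaptive setting concrete, here is a deliberately naive adaptive scheme (an illustration only, not the talk's construction): it perturbs one coordinate at a time, letting each answer inform nothing more than the next probe point. It uses on the order of $N$ queries, so it does not escape the curse of dimensionality; the point of the talk is to do far better. All names and the test function are assumptions.

```python
import random

N, k = 50, 2
hidden = set(random.sample(range(N), k))

def f(x):
    # A function of just the k hidden coordinates (chosen so that
    # every active coordinate actually influences the value at the
    # base point; a flat direction would fool this naive scheme).
    return sum(x[j] ** 2 for j in hidden)

base = [0.5] * N
f0 = f(base)

active = []
for j in range(N):                  # adaptive: each query follows the last answer
    probe = list(base)
    probe[j] += 0.25                # perturb coordinate j only
    if abs(f(probe) - f0) > 1e-12:  # value changed => j is an active coordinate
        active.append(j)
```

A non-adaptive scheme, by contrast, would have to commit to all $m$ query points before seeing any value of $f$.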

A second problem we shall consider is when $f$ is not necessarily a function of only $k$ variables but can be approximated to some tolerance $\epsilon$ by such a function. We again seek sets of points such that knowledge of the values of $f$ at these points allows us to approximate $f$ well.

Our main consideration is to derive results which are not severely affected by the size of $N$, i.e., which do not fall victim to the curse of dimensionality. We shall see that this is possible.