High dimensions

Tuesday, June 18, 2019 - 9:00am - 9:50am
Ilias Diakonikolas (University of Southern California)
Fitting a model to a collection of observations is one of the quintessential questions in statistics. The standard assumption is that the data was generated by a model of a given type (e.g., a mixture model). This simplifying assumption is at best only approximately valid, as real datasets are typically exposed to some source of contamination. Hence, any estimator designed for a particular model must also be robust in the presence of corrupted data.
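As a toy illustration of the robustness issue (my own sketch, not the speaker's method): even a small fraction of adversarially corrupted samples can drag the sample mean arbitrarily far from the truth, while a simple robust statistic such as the median is barely affected.

```python
import random
import statistics

random.seed(0)

# Clean data: 1000 samples from N(0, 1); the true mean is 0.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Adversarial contamination: replace a 5% fraction with a gross outlier value.
eps = 0.05
corrupted = data[:]
for i in range(int(eps * len(data))):
    corrupted[i] = 1000.0

mean_err = abs(statistics.mean(corrupted))      # dragged far from 0 by the outliers
median_err = abs(statistics.median(corrupted))  # essentially unchanged

print(f"sample mean error:   {mean_err:.3f}")
print(f"sample median error: {median_err:.3f}")
```

In high dimensions the coordinate-wise median is no longer a good enough fix, which is exactly what motivates the more sophisticated robust estimators the talk concerns.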
Tuesday, October 28, 2008 - 3:00pm - 3:50pm
Wolfgang Dahmen (RWTH Aachen)
Joint work with Peter Binev, Ron DeVore, and Philipp Lamby.
This talk addresses the recovery of functions of a large number of variables from point clouds in the context of supervised learning. Our estimator is based on two conceptual pillars. First, the notion of sparse occupancy trees is shown to warrant efficient computations even for a very large number of variables. Second, a properly adjusted adaptive tree-approximation scheme is shown to ensure instance-optimal performance.
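A minimal sketch of the "sparse occupancy" idea (an illustration under my own assumptions, not the authors' scheme): index each sample by the dyadic cell of [0,1]^d that contains it at a fixed refinement level, and store per-cell statistics only for cells that actually contain data. Storage then scales with the number of samples, never with the 2^(d·level) total cells, which is what makes very large d feasible.

```python
from collections import defaultdict

def cell_key(x, level):
    """Index of the dyadic cell of [0,1]^d containing x at the given level."""
    n = 2 ** level
    return tuple(min(int(xi * n), n - 1) for xi in x)

class SparseOccupancyTree:
    """Keep per-cell label averages only for cells that are occupied by data."""
    def __init__(self, level):
        self.level = level
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def fit(self, points, labels):
        for x, y in zip(points, labels):
            k = cell_key(x, self.level)
            self.sums[k] += y
            self.counts[k] += 1
        return self

    def predict(self, x, default=0.0):
        k = cell_key(x, self.level)
        c = self.counts.get(k)
        if not c:
            return default  # empty cell: no data was ever seen there
        return self.sums[k] / c

# d = 20 variables, yet only the occupied cells are ever stored.
tree = SparseOccupancyTree(level=3).fit(
    [(0.1,) * 20, (0.12,) * 20, (0.9,) * 20],
    [1.0, 3.0, 7.0],
)
print(tree.predict((0.11,) * 20))  # averages the two samples sharing its cell
print(len(tree.counts))            # only 2 cells stored, not 2**(20*3)
```

The adaptive tree-approximation pillar would additionally refine the level locally where the data demands it; this fixed-level sketch shows only the occupancy bookkeeping.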
Tuesday, October 28, 2008 - 10:55am - 11:45am
Ronald DeVore (Texas A&M University)
We assume that we are in $\mathbb{R}^N$ with $N$ large. In the first problem we consider, there is a function $f$ defined on $\Omega:=[0,1]^N$ which depends on just $k$ of the coordinate variables: $f(x_1,\dots,x_N)=g(x_{j_1},\dots,x_{j_k})$, where $j_1,\dots,j_k$ are not known to us. We want to approximate $f$ from some of its point values. We first assume that we are allowed to choose a collection of points in $\Omega$ and ask for the values of $f$ at these points.
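A toy version of this query model (my own illustration, not the talk's construction): since we may choose the sample points ourselves, we can probe each coordinate in turn, changing $x_j$ alone at a random base point and checking whether $f$ changes. All names below are hypothetical.

```python
import random

def find_active_coordinates(f, N, trials=5, seed=0):
    """Flag coordinate j as active if changing x_j alone ever changes f
    at a randomly chosen base point in [0,1]^N.  Illustrative only: it
    uses trials*(N+1) queries and can miss a coordinate if f happens to
    be insensitive to it at every sampled base point."""
    rng = random.Random(seed)
    active = set()
    for _ in range(trials):
        base = [rng.random() for _ in range(N)]
        f_base = f(base)
        for j in range(N):
            probe = base[:]
            probe[j] = rng.random()  # move only coordinate j
            if f(probe) != f_base:
                active.add(j)
    return sorted(active)

# Hypothetical target: N = 30 variables, but f depends only on x_3 and x_17.
N = 30
f = lambda x: x[3] ** 2 + x[17]
print(find_active_coordinates(f, N))
```

The interesting regime in the talk is doing far better than brute-force probing, and handling the harder case where the evaluation points cannot be chosen adaptively.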