Active Learning and Optimal Experimental Design

Wednesday, November 11, 2020 - 12:30pm - 1:15pm
Eldad Haber (University of British Columbia)
In this work we discuss the problem of active learning. We present an approach
based on A-optimal experimental design for ill-posed problems and show how one
can optimally label a data set by partially probing it, then use the labels to
train a deep network. We present two approaches that make different assumptions
about the data set. The first is based on a Bayesian interpretation of the
semi-supervised learning problem, with the graph Laplacian used for the prior
distribution; the second is based on a frequentist approach that updates the
estimate of the bias term based on the recovery of the labels. We demonstrate
that this approach can be highly efficient for estimating labels and training a
deep network.
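To make the A-optimal idea concrete, here is a minimal sketch (not the speaker's implementation) of greedy A-optimal query selection under a Gaussian model whose prior precision is a graph Laplacian: labeling a node adds a rank-one term to the posterior precision, and each query is chosen to minimize the trace of the posterior covariance. The graph, noise level, and budget are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: greedy A-optimal selection of nodes to label.
# Prior precision = graph Laplacian L + delta*I (the shift makes it invertible);
# labeling node i adds e_i e_i^T / sigma2 to the precision.

rng = np.random.default_rng(0)
n = 30
# Random symmetric adjacency -> graph Laplacian L = D - W (hypothetical graph).
W = (rng.random((n, n)) < 0.2).astype(float)
W = np.triu(W, 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W

delta, sigma2 = 1e-2, 1e-1            # prior shift, label-noise variance (assumed)
prec = L + delta * np.eye(n)          # posterior precision with no labels yet

budget = 5
chosen = []
for _ in range(budget):
    cov = np.linalg.inv(prec)
    best, best_trace = None, np.inf
    for i in range(n):
        if i in chosen:
            continue
        # Sherman-Morrison: adding e_i e_i^T / sigma2 to the precision lowers
        # the covariance trace by (c_i^T c_i) / (sigma2 + cov[i, i]).
        ci = cov[:, i]
        new_trace = np.trace(cov) - (ci @ ci) / (sigma2 + cov[i, i])
        if new_trace < best_trace:
            best, best_trace = i, new_trace
    chosen.append(best)
    prec[best, best] += 1.0 / sigma2  # commit the query

print("A-optimal query order:", chosen)
```

The rank-one (Sherman-Morrison) update is what keeps the greedy scan cheap: each candidate's effect on the A-optimal objective is scored without refactorizing the full posterior covariance.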