Learning on Manifolds
Friday, April 8, 2011 - 1:25pm - 2:25pm
A large number of natural phenomena can be formulated as inference on differentiable manifolds. In computer vision specifically, such structure arises in multi-factor analysis, including feature selection, pose estimation, structure from motion, appearance tracking, and shape embedding. Unlike Euclidean spaces, differentiable manifolds are not globally homeomorphic to a vector space, so differential geometry applies only within local tangent spaces. This prevents the direct application of conventional inference and learning methods that require vector norms; instead, distances are defined through geodesics, the curves of minimal length connecting two points. Recently we introduced appearance-based descriptors and motion transformations that exhibit a Riemannian manifold structure on positive definite matrices and enable projections onto tangent spaces. In this manner, we need neither flatten the underlying manifold nor discover its topology. For instance, by imposing weak classifiers on tangent spaces and establishing weighted sums via Karcher means, we bootstrap an ensemble of boosted classifiers with logistic loss functions for object classification. This talk will demonstrate promising results of manifold learning on human detection, regression tracking, unusual event analysis, and affine pose estimation.
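The tangent-space machinery the abstract alludes to can be sketched concretely. Below is a minimal illustration, not the speaker's implementation, of the standard affine-invariant geometry on symmetric positive definite (SPD) matrices: a log map that projects an SPD matrix onto the tangent space at a reference point, the inverse exp map, and the Karcher mean computed by iteratively averaging in the tangent space and mapping back. The function names and convergence settings are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm, inv

def log_map(P, X):
    """Project SPD matrix X onto the tangent space at P
    (affine-invariant metric): P^{1/2} logm(P^{-1/2} X P^{-1/2}) P^{1/2}."""
    S = np.real(sqrtm(P))
    Si = inv(S)
    return S @ np.real(logm(Si @ X @ Si)) @ S

def exp_map(P, V):
    """Map a tangent vector V at P back onto the SPD manifold."""
    S = np.real(sqrtm(P))
    Si = inv(S)
    return S @ expm(Si @ V @ Si) @ S

def karcher_mean(mats, iters=20, tol=1e-10):
    """Karcher (Frechet) mean of SPD matrices: repeatedly average
    tangent vectors at the current estimate, then map the average back."""
    mu = mats[0]
    for _ in range(iters):
        V = sum(log_map(mu, X) for X in mats) / len(mats)
        mu = exp_map(mu, V)
        if np.linalg.norm(V) < tol:  # mean tangent vector ~ 0 at the mean
            break
    return mu
```

In the boosting scheme described above, each weak learner would operate on the vectorized tangent coordinates `log_map(mu, X)` around such a mean, so the classifier sees a flat representation without globally flattening the manifold.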