# Learning, Linking and Labeling Social Networks

Tuesday, May 8, 2012 - 10:30am - 11:30am

Keller 3-180

Tony Jebara (Columbia University)

Many machine learning problems can naturally be formulated as problems on graphs. For example, dimensionality reduction and visualization are related to graph embedding. Given a sparse graph between N high-dimensional data nodes, how do we faithfully embed it in a low-dimensional space? We present an algorithm that improves dimensionality reduction by extending the Maximum Variance Unfolding method. But given only a dataset of N samples, how do we construct a graph in the first place? The search space is daunting, with 2^(N(N-1)/2) graphs to choose from, yet two interesting subfamilies are tractable: matchings and b-matchings. By placing distributions over matchings and using loopy belief propagation, we can efficiently infer maximum weight subgraphs. These fast generalized matching algorithms leverage integral LP relaxations and perfect graph theory. Applications include graph reconstruction, graph embedding, graph transduction, and metric learning, with emphasis on data from text, network, mobile, and image domains.
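The talk's generalized matching solvers rely on loopy belief propagation over distributions on matchings; as a minimal stand-in, the sketch below uses the bipartite assignment special case (roughly the b = 1 setting, solved with SciPy's `linear_sum_assignment`) to show how a maximum-weight matching can turn raw samples into a sparse graph. The dataset, penalty value, and symmetrization step are illustrative assumptions, not details from the talk.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Toy dataset: N sample points in 2-D (values are illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))

# Pairwise distance costs; a huge diagonal penalty forbids self-links,
# so minimizing cost is maximizing similarity between distinct points.
cost = cdist(X, X)
np.fill_diagonal(cost, 1e9)

# Solve the assignment problem: a minimum-cost bipartite perfect matching
# (equivalently, a maximum-weight matching under negated costs).
rows, cols = linear_sum_assignment(cost)

# Symmetrize the resulting permutation into an undirected neighbor graph.
edges = {tuple(sorted((i, j))) for i, j in zip(rows, cols)}
print(sorted(edges))
```

A b-matching would instead constrain every node to have exactly b neighbors, which is where the LP relaxations and belief-propagation machinery described in the abstract come in; this assignment-based sketch only illustrates the b = 1 flavor of the construction.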