Over the past two decades, machine learning has evolved from a discipline studied primarily by computer scientists (in particular, within Artificial Intelligence) to a broader discipline also studied by statisticians, applied mathematicians, and engineers. Today, machine learning is routinely used in commercial systems ranging from speech recognition and computer vision to web mining. As the field has evolved, there has been an increased emphasis on understanding the statistical, theoretical, and computational underpinnings of machine learning. This workshop aims to bring together researchers from different disciplines to discuss recent trends and advances in the theoretical and computational aspects of machine learning.
Topics to be discussed at the workshop include the interplay of machine learning (kernel learning, graphical models, online learning, active learning) with (a) statistical modeling and learning theory, (b) theoretical computer science, (c) numerical optimization, (d) topological methods, (e) tensor methods, and (f) sparse methods. The topics of interest fall under three broad categories, depending on whether the focus is on data/geometry, models/algorithms, or analysis/theory.

Topics in data/geometry aim to understand manifold structure in data, as well as relationships and similarities between pairs or groups of objects; they span a wide variety of methods, including kernel learning, manifold embedding, topological methods, spectral methods, graph diffusion, and clustering. Topics in models/algorithms are rich and diverse, ranging from statistical graphical models to large-margin and ensemble methods; algorithms for such models and formulations extensively leverage optimization and inference methods, including linear and quadratic programs, semidefinite programs, sparse optimization methods, variational inference, and stochastic approximations. Topics in analysis/theory focus on systematically and rigorously studying the statistical and computational properties of problems and algorithms.

There are deep interconnections between the three categories, and the division above merely provides a convenient perspective on the diverse work in the field. Several important ideas and abstractions, such as online learning, cut across all three. For example, online learning provides powerful methods for kernel learning (data/geometry), inspires an important class of ensemble methods known as boosting (models/algorithms), and serves as a powerful tool for analyzing the performance of learning algorithms (analysis/theory). The goal of the workshop is to approach core topics in machine learning from all of these diverse perspectives.