Tuesday, October 15, 2019 - 9:50am - 10:35am
Katie Bouman (California Institute of Technology)
This talk will present the methods and procedures used to produce the first image of a black hole from the computational Event Horizon Telescope. It has been theorized for decades that a black hole casts a shadow against the background of hot gas surrounding it. Taking a picture of this black hole shadow could help address a number of important scientific questions, both on the nature of black holes and on the validity of general relativity. Unfortunately, because the shadow's angular size is so small, traditional imaging approaches would require an Earth-sized radio telescope.
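A back-of-the-envelope calculation shows why a single dish cannot do the job. The numbers below are my own assumptions, not from the talk: the EHT observes at roughly 1.3 mm wavelength, and the M87* shadow spans roughly 40 microarcseconds.

```python
import math

# Assumed observing parameters (not stated in the abstract):
wavelength_m = 1.3e-3                              # ~1.3 mm radio wavelength
shadow_rad = 40e-6 * math.pi / (180 * 3600)        # ~40 microarcseconds, in radians

# Diffraction limit: resolving an angle theta needs a dish of diameter D ~ lambda / theta.
dish_diameter_m = wavelength_m / shadow_rad
print(f"required dish diameter: {dish_diameter_m / 1e3:.0f} km")
```

The result is on the order of several thousand kilometers, comparable to the diameter of the Earth, which is why the EHT instead correlates signals from telescopes spread across the globe.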
Thursday, August 5, 2010 - 10:00am - 10:30am
Yu Chen (New York University)
Finding a sparse solution to an underdetermined linear integral equation is a central problem across a broad range of applications: scattering, sensing, imaging, machine learning, signal and image processing, data analysis and compression, model reduction, and optimal control and design. We will introduce a weak formulation of the problem and construct its sparse solution by a nonlinear process: the design of a Gaussian quadrature for the kernel of the integral equation. We will present a systematic method to solve the resulting quadrature problem.
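To see why quadrature gives a sparse representation of an integral operator, consider the following toy sketch (my own example, not the speaker's method): for a smooth kernel, a small Gauss-Legendre rule matches a dense discretization of the integral with only a handful of nodes.

```python
import numpy as np

def kernel(x, t):
    """A smooth Gaussian kernel, chosen here purely for illustration."""
    return np.exp(-(x - t) ** 2)

x = 0.3

# "Sparse" representation: 8 Gauss-Legendre nodes/weights on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(8)
sparse_val = np.dot(weights, kernel(x, nodes))

# Dense reference: 100000-point midpoint rule on the same interval
t = np.linspace(-1.0, 1.0, 100001)
tm = 0.5 * (t[:-1] + t[1:])
dense_val = np.sum(kernel(x, tm)) * (2.0 / 100000)

print(abs(sparse_val - dense_val))  # 8 nodes already agree to high accuracy
```

Eight nodes stand in for a hundred-thousand-point discretization; designing such a rule adapted to the kernel itself is the nonlinear step the abstract refers to.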
Wednesday, June 26, 2013 - 9:00am - 10:30am
Bin Yu (University of California, Berkeley)
This lecture will give a heuristic overview of theoretical results on the Lasso, explaining when and why the Lasso and its extensions work.
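As a concrete reference point for the lecture's setting, here is a minimal Lasso sketch (my own construction, not taken from the lecture): recover a sparse coefficient vector from noisy underdetermined measurements by minimizing (1/2)||y - Xb||^2 + lam*||b||_1 with ISTA, the iterative soft-thresholding algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 80, 200, 5                    # 80 samples, 200 features, 5 truly active
X = rng.standard_normal((n, p)) / np.sqrt(n)
b_true = np.zeros(p)
b_true[:k] = 3.0
y = X @ b_true + 0.01 * rng.standard_normal(n)

lam = 0.05
L = np.linalg.norm(X, 2) ** 2           # Lipschitz constant of the smooth part's gradient
b = np.zeros(p)
for _ in range(2000):
    g = X.T @ (X @ b - y)               # gradient of (1/2)||y - Xb||^2
    z = b - g / L                       # gradient step
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding step

print("recovered support:", np.flatnonzero(np.abs(b) > 1.0))
```

With far fewer samples than features, the L1 penalty drives all but the truly active coefficients to exactly zero; the lecture's theoretical results characterize when this kind of recovery succeeds.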
Monday, October 24, 2011 - 3:00pm - 4:00pm
Venkatesh Saligrama (Boston University)
Sparse signal processing on graphs arises in several applications, including IP networks, wireless sensor networks (WSNs), and infection propagation. In IP networks and WSNs, for instance, a key problem is to identify a sparse subset of congested links and/or failing sensor nodes from path measurements. We develop sample-complexity bounds on the number of path measurements required to recover such sparse subsets associated with the graph. To derive these scaling bounds, we consider random path measurements associated with random walks on large graphs.
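A toy version of this setting can be sketched as follows (my own construction, not the speakers' model): each random path touches a subset of the network's links, each measurement is the total delay along one path, and the few congested links are recovered greedily with orthogonal matching pursuit (OMP).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_paths, k = 200, 80, 3
A = (rng.random((n_paths, m)) < 0.1).astype(float)  # path-link incidence matrix
delay = np.zeros(m)
congested = [7, 42, 81]                              # the sparse congested subset
delay[congested] = [5.0, 3.0, 4.0]
y = A @ delay                                        # one total-delay measurement per path

# OMP: repeatedly pick the link most correlated with the residual,
# then re-fit the delays by least squares on the links chosen so far.
support, residual = [], y.copy()
for _ in range(k):
    scores = np.abs(A.T @ residual)
    scores[support] = 0.0                            # never pick a link twice
    support.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

print("recovered congested links:", sorted(support))
```

With 80 path measurements over 200 links, three measurements per unknown are far from enough for dense recovery, but sparsity makes the congested subset identifiable; the sample-complexity bounds in the talk quantify how many such path measurements are needed.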
Thursday, September 29, 2011 - 9:00am - 9:45am
Francis Bach (École Normale Supérieure)
Sparse methods for supervised learning aim to find good linear predictors from as few variables as possible, i.e., with supports of small cardinality. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function with its convex envelope (its tightest convex lower bound), in this case the L1-norm. In this work, we investigate set-functions more general than the cardinality, which can incorporate prior knowledge.