# Sparse matrices

Tuesday, May 19, 2015 - 11:30am - 12:20pm

Christian Borgs (Microsoft Research)

When analyzing large networks, statisticians often assume a generative model in which the observed graph comes from a stochastic block model, i.e., a random graph with inhomogeneous edge probabilities given in terms of a small block matrix. A non-parametric version of these stochastic block models is the class of so-called W-random graphs, defined in terms of an integrable, symmetric function W on the unit square.
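As a concrete illustration, a W-random graph is sampled by drawing latent positions uniformly on [0,1] and connecting each pair independently with probability given by W evaluated at those positions. The kernel below is a hypothetical two-block step function chosen for illustration (it recovers a stochastic block model), not a model from the abstract; a minimal numpy sketch:

```python
import numpy as np

def sample_w_random_graph(n, W, rng=None):
    """Sample a W-random graph on n vertices: draw u_i ~ Uniform[0,1],
    then connect i and j independently with probability W(u_i, u_j)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    # Evaluate the symmetric kernel on all pairs of latent positions.
    P = W(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    A = np.triu(A, 1)   # keep the upper triangle only (no self-loops)
    return A + A.T      # symmetrize: simple undirected graph

# Hypothetical kernel: a 2-block stochastic block model as a step function W.
W = lambda x, y: np.where((x < 0.5) == (y < 0.5), 0.8, 0.1)
A = sample_w_random_graph(200, W, rng=0)
```

The same routine works for any integrable symmetric W, which is exactly what makes the W-random graph model a non-parametric generalization of block models.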

Tuesday, May 19, 2015 - 10:20am - 11:10am

Jennifer Chayes (Microsoft Research)

We introduce and develop a theory of limits for sequences of sparse graphs based on L^p graphons, which generalizes both the existing L^∞ theory of dense graph limits and its extension by Bollobás and Riordan to sparse graphs without dense spots. In doing so, we replace the "no dense spots" hypothesis with weaker assumptions, which allow us to analyze a much larger class of graphs, including those with power-law degree distributions. This gives the first broadly applicable limit theory for sparse graphs with unbounded average degrees.

Tuesday, January 11, 2011 - 8:30am - 9:30am

Jonathan Cohen (NVIDIA Corporation)

Iterative sparse linear solvers are a critical component of a scientific computing platform. Devising effective preconditioning strategies is the main challenge in developing iterative sparse solvers on massively parallel systems. As computing systems become increasingly power-constrained, memory hierarchies for massively parallel systems will become deeper and more complex. Parallel algorithms with all-to-all communication patterns that assume uniform memory access times will be inefficient on such systems.
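As a minimal illustration of the preconditioning theme, the sketch below runs conjugate gradients with a Jacobi (diagonal) preconditioner — one of the few preconditioners with purely local data access, which is why it remains attractive on massively parallel hardware. The 1-D Laplacian test problem and sizes are assumptions for illustration, not from the talk:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Hypothetical SPD test problem: a sparse 1-D Laplacian (tridiagonal).
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: apply M^{-1} r = D^{-1} r, a purely
# elementwise operation with no inter-node communication.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: d_inv * r)

# Preconditioned conjugate gradients; info == 0 signals convergence.
x, info = cg(A, b, M=M, maxiter=5000)
```

More effective preconditioners (multigrid, incomplete factorizations) trade this locality for faster convergence, which is precisely the tension the abstract describes.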

Wednesday, July 15, 2009 - 3:00pm - 4:00pm

Ahmed Tewfik (University of Minnesota, Twin Cities)

No Abstract

Tuesday, October 6, 2009 - 9:45am - 10:15am

Guillermo Sapiro (University of Minnesota, Twin Cities)

After spending about five minutes showing recent results on video segmentation (joint work with Adobe), I will describe some recent works in my group in the area of dictionary learning and sparse coding. In particular, I will present new models derived from information theory, new models dedicated to going beyond standard sparse coding applications and into unsupervised clustering, and new models related to compressed sensing.
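To make the sparse coding setting concrete: given an overcomplete dictionary D, sparse coding seeks a code with few nonzeros such that D @ code ≈ x. The sketch below uses Orthogonal Matching Pursuit, a standard greedy baseline (not one of the models from the talk); the dictionary and signal are hypothetical:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select k dictionary atoms
    to approximate x -- a standard baseline sparse coding routine."""
    residual = x.copy()
    support = []
    code = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the chosen support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code[support] = coeffs
    return code

# Hypothetical toy dictionary with unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)
true_code = np.zeros(100)
true_code[[3, 42]] = [1.5, -2.0]
x = D @ true_code
code = omp(D, x, k=2)
```

Dictionary learning then alternates such a coding step with an update of D itself.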


Thursday, March 26, 2009 - 12:00pm - 12:45pm

Stanley Osher (University of California, Los Angeles)

We started with a project where we denoised normals to surfaces, then fit the surface to the normals, which we regarded as solving a 4th-order PDE via some kind of splitting. This led to remarkably successful algorithms for L1-type minimizations, constrained and unconstrained. These include L1, TV, B1,1, nonlocal TV, and more. Bregman iteration, in its various incarnations, popped up and turned out to be unreasonably effective. I'll discuss this work, which is joint with many people.
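The basic building block inside Bregman-style and other splitting methods for such L1-type problems is soft-thresholding, the closed-form proximal operator of the L1 norm; Bregman iteration alternates this kind of shrinkage with an additive update that "adds back" the residual. A minimal sketch (the numbers are illustrative):

```python
import numpy as np

def shrink(y, lam):
    """Soft-thresholding: the exact minimizer of
    0.5*||x - y||^2 + lam*||x||_1, applied componentwise.
    This is the inner shrinkage step of split/Bregman L1 solvers."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Small entries are zeroed out; large ones are pulled toward zero by lam.
y = np.array([3.0, -0.5, 1.2, -4.0])
x = shrink(y, 1.0)   # approx [2.0, 0.0, 0.2, -3.0]
```

Because this subproblem is solved exactly and cheaply, the cost of each Bregman sweep is dominated by the coupled quadratic part, which is one reason the overall scheme is so effective.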


Monday, September 18, 2006 - 10:50am - 11:40am

Teresa Krick (University of Buenos Aires)

In this talk I will discuss algorithms for factoring polynomials over number fields, leading to recent results on sparse polynomials, where the complexity of the algorithm takes into account the fact that the polynomial may have many zero coefficients. For simplicity I'll focus on rational polynomials in one or two variables.

The talk will review results by F. Cucker, P. Koiran and S. Smale (1999), H. W. Lenstra, Jr. (1999), E. Kaltofen and P. Koiran (2005 and 2006), and joint work with M. Avendaño and M. Sombra (2006).
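To illustrate why sparsity matters for complexity: a sparse (lacunary) polynomial can be stored as a map from exponents to coefficients, so basic operations scale with the number of nonzero terms rather than with the degree. A minimal sketch over the rationals (the example polynomial is hypothetical):

```python
from fractions import Fraction

def eval_sparse(poly, x):
    """Evaluate sum_e c_e * x**e at a rational point x.
    The work is proportional to the number of nonzero terms."""
    return sum(c * x**e for e, c in poly.items())

# Hypothetical example: x^1000 - 2x + 1 has degree 1000 but only 3 terms.
p = {1000: Fraction(1), 1: Fraction(-2), 0: Fraction(1)}
value = eval_sparse(p, Fraction(1, 2))
```

Sparse factoring algorithms aim for running times polynomial in this term count (and the bit size of the exponents), which is exponentially smaller than the degree for very lacunary inputs.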


Saturday, June 22, 2013 - 11:00am - 12:30pm

Julien Mairal (INRIA)

In this lecture, we will go beyond Friday's course on structured sparsity and consider more complex models. Recently, a large amount of research in statistics and signal processing has been devoted to developing structured sparse regularization functions. The goal is to encode some a priori knowledge about an estimation problem in the regularization, in order to obtain better prediction or better interpretability. Unfortunately, the literature on the topic is vast, and significantly different approaches are now referred to as
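One canonical structured sparse regularizer is the group lasso, whose proximal operator soft-thresholds whole blocks of coefficients at once and hence zeroes out entire groups — encoding the prior that variables act together. A minimal sketch (the grouping and numbers are illustrative assumptions, not from the lecture):

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||v_g||_2:
    blockwise soft-thresholding that keeps or kills each group as a whole."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            # Shrink the surviving group radially toward zero.
            out[g] = (1.0 - lam / norm) * v[g]
    return out

# Hypothetical grouping of 6 coefficients into three blocks of two.
v = np.array([3.0, 4.0, 0.1, 0.1, -0.6, 0.8])
groups = [[0, 1], [2, 3], [4, 5]]
x = prox_group_lasso(v, groups, lam=1.0)
```

Richer structures (overlapping groups, trees, graphs) change the penalty and its prox, but the encode-the-prior-in-the-regularizer pattern is the same.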


Thursday, February 16, 2012 - 3:15pm - 3:45pm

Mikhail Malyutov (Northeastern University)

A staggering amount of attention was recently devoted to the study of compressed sensing and related areas using sparse priors in overparameterized linear models under versions of linear programming (LP) analysis. More recently, the popularity of the sparsity approach for the classical model of group testing has also increased.

The threshold phenomenon was empirically observed in early papers of Donoho et al.: as the dimension of a random instance of a problem grows, there is a sharp transition from successful recovery to failure as a
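One cell of such a threshold (phase-transition) experiment can be sketched as follows: draw a random Gaussian sensing matrix, measure a sparse vector, and attempt recovery by basis pursuit, i.e., the LP min ||x||_1 subject to Ax = b. The problem sizes below are illustrative assumptions; sweeping the sparsity and measurement ratios over many random trials traces out the empirical transition curve.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                      # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0                               # noiseless sparse measurements

# Basis pursuit as an LP: split x = xp - xm with xp, xm >= 0, so that
# ||x||_1 = sum(xp) + sum(xm) and the constraint is A(xp - xm) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
```

Since x0 is feasible for the LP, the optimal L1 value never exceeds ||x0||_1; on the "success" side of the threshold the minimizer coincides with x0 itself.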
