June 4 - 15, 2007
We morph compressive sampling into an error-correcting code, and explore the implications of this sampling theory for lossy compression and its relationship with universal source coding.
We will survey the literature on interior point methods, which are very efficient numerical algorithms for solving large-scale convex optimization problems.
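To make the mechanics concrete, here is a minimal sketch of a primal log-barrier method for the linear program min c^T x subject to Ax <= b; the function name, the stopping rules, and the parameters t and mu are our illustrative choices, not prescriptions from the lectures.

    import numpy as np

    def barrier_lp(c, A, b, x0, t=1.0, mu=10.0, tol=1e-8):
        """Minimize c^T x subject to A x <= b by a log-barrier method.
        x0 must be strictly feasible, i.e. A @ x0 < b componentwise."""
        x = x0.astype(float)
        m = len(b)
        while m / t > tol:                      # m/t bounds the duality gap
            for _ in range(50):                 # centering: Newton's method on
                s = b - A @ x                   # t*c^T x - sum(log(b - A x))
                grad = t * c + A.T @ (1.0 / s)
                hess = (A.T * (1.0 / s) ** 2) @ A
                dx = np.linalg.solve(hess, -grad)
                if -grad @ dx < 1e-10:          # squared Newton decrement
                    break
                step = 1.0                      # backtrack to keep slacks positive
                while np.any(b - A @ (x + step * dx) <= 0):
                    step *= 0.5
                x = x + step * dx
            t *= mu                             # tighten the barrier, re-center
        return x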
We discuss several applications of compressive sampling in the areas of analog-to-digital conversion and biomedical imaging, and review some numerical experiments in new directions. We conclude by exposing the participants to some important open problems.
After a rapid and glossy introduction to compressive sampling (or compressed sensing, as it is also called), the lecture will introduce sparsity as a key modeling tool and review the crucial role played by sparsity in areas such as data compression, statistical estimation, and scientific computing.
We show that accurate estimation from noisy undersampled data is sometimes possible and connect our results with a large literature in statistics concerned with high dimensionality; that is, situations in which the number of observations is less than the number of parameters.
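A toy numerical instance of that regime, using the Lasso from scikit-learn as a stand-in for the estimators in this literature (the dimensions, noise level, and penalty alpha below are made up for the demo):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, k = 80, 200, 5                          # fewer observations than parameters
    A = rng.standard_normal((n, p)) / np.sqrt(n)
    x = np.zeros(p)
    x[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
    y = A @ x + 0.01 * rng.standard_normal(n)     # noisy undersampled data

    x_hat = Lasso(alpha=0.01).fit(A, y).coef_     # l1-penalized least squares
    print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))   # small relative error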
In many applications one has fewer equations than unknowns. While this seems hopeless, we will show that the premise that the object we wish to recover is sparse or compressible radically changes the problem, making the search for solutions feasible. This lecture discusses the importance of the l1-norm as a sparsity-promoting functional and goes through a series of examples touching on many areas of data processing.
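As one concrete illustration, the equality-constrained problem min ||x||_1 subject to Ax = b can be recast as a linear program and handed to an off-the-shelf solver; a minimal sketch with scipy, on made-up data, assuming the standard reformulation with auxiliary variables t bounding |x| coordinatewise:

    import numpy as np
    from scipy.optimize import linprog

    def l1_min(A, b):
        """Solve min ||x||_1 s.t. A x = b as an LP in the variables (x, t):
        minimize sum(t) subject to -t <= x <= t and A x = b."""
        n = A.shape[1]
        c = np.concatenate([np.zeros(n), np.ones(n)])     # objective: sum of t
        A_ub = np.block([[np.eye(n), -np.eye(n)],         #  x - t <= 0
                         [-np.eye(n), -np.eye(n)]])       # -x - t <= 0
        A_eq = np.hstack([A, np.zeros((A.shape[0], n))])  # A x = b
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        return res.x[:n]

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 100))                    # 30 equations, 100 unknowns
    x0 = np.zeros(100)
    x0[[3, 17, 42]] = [1.0, -2.0, 0.5]                    # sparse ground truth
    print(np.allclose(l1_min(A, A @ x0), x0, atol=1e-6))  # exact recovery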
We show that compressive sampling is, perhaps surprisingly, robust vis-à-vis modeling and measurement errors.
Compressed sensing essentially relies on two tenets: the first is that the object we wish to recover is compressible, in the sense that it has a sparse expansion in a set of basis functions; the second is that the measurements we make (the sensing waveforms) must be incoherent with these basis functions. This lecture will introduce key results in the field, such as a new kind of sampling theorem which states that one can sample a spectrally sparse signal at a rate close to the information rate, and this without information loss.
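In its simplest form the theorem can be stated as follows (quoted from memory, with an unspecified constant C; this is the version for a signal sparse in time and sampled in frequency, and exchanging the roles of time and frequency covers the spectrally sparse case mentioned above). If x \in \mathbb{C}^N has at most S nonzero entries and one observes its Fourier coefficients \hat{x}(k) on a uniformly random set \Omega of size m \ge C\, S \log N, then the solution of

    \[
      \min_{g \in \mathbb{C}^N} \|g\|_{\ell_1}
      \quad \text{subject to} \quad \hat{g}(k) = \hat{x}(k) \ \text{for all } k \in \Omega
    \]

equals x exactly with overwhelming probability.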
We introduce a strong form of uncertainty relation and discuss its fundamental role in the theory of compressive sampling. We give examples of random sensing matrices obeying this strong uncertainty principle, e.g., Gaussian matrices.
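A quick empirical look at the kind of concentration a Gaussian sensing matrix exhibits (the normalization 1/sqrt(m) and all dimensions below are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, k = 100, 400, 10
    A = rng.standard_normal((m, n)) / np.sqrt(m)     # i.i.d. N(0, 1/m) entries

    # ||A x|| stays close to ||x|| across many random k-sparse vectors
    ratios = []
    for _ in range(1000):
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
    print(min(ratios), max(ratios))                  # both near 1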
This lecture will discuss the crucial role played by probability in compressive sampling; we will discuss techniques for obtaining nonasymptotic results about extremal eigenvalues of random matrices. Of special interest is the role played by high-dimensional convex geometry and techniques from geometric functional analysis, such as Rudelson's selection lemma, and by powerful results in the probabilistic theory of Banach spaces, such as Talagrand's concentration inequality.
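To give a feel for the nonasymptotic statements involved: the extreme singular values of an m x n Gaussian matrix concentrate around sqrt(m) +/- sqrt(n), which a quick experiment confirms (dimensions arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 2000, 500
    s = np.linalg.svd(rng.standard_normal((m, n)), compute_uv=False)
    print(s.max(), np.sqrt(m) + np.sqrt(n))   # largest singular value vs its estimate
    print(s.min(), np.sqrt(m) - np.sqrt(n))   # smallest singular value vs its estimate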
The problem, best matrices for classes, Gelfand widths and their connection to compressed sensing.
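For reference, a standard definition of the Gelfand width (our notation; K is a compact subset of a normed space X):

    \[
      d^m(K)_X \;=\; \inf_{\operatorname{codim} Y \,\le\, m} \;\sup_{x \in K \cap Y} \|x\|_X ,
    \]

the infimum taken over subspaces Y of X of codimension at most m. For sets K that are symmetric about the origin, d^m(K)_X matches, up to constants, the best possible reconstruction error achievable from m linear measurements of signals in K, which is the connection to compressed sensing.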
Convergence and exponential convergence.
Best k-term approximation for bases and dictionaries, decay rates, approximation classes, application to image compression via wavelet decompositions.
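A small numerical illustration of best k-term approximation in an orthonormal basis, using the DCT as a stand-in for a wavelet basis (the coefficient decay i^{-1} is made up for the demo):

    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(4)
    N = 1024
    coeffs = rng.standard_normal(N) / np.arange(1, N + 1)   # compressible, not sparse
    f = idct(coeffs, norm="ortho")

    c = dct(f, norm="ortho")
    order = np.argsort(-np.abs(c))                 # coefficients by decreasing size
    for k in (8, 32, 128):
        ck = np.zeros(N)
        ck[order[:k]] = c[order[:k]]               # keep the k largest terms
        print(k, np.linalg.norm(f - idct(ck, norm="ortho")))   # error ~ k^(-1/2)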
Examples of performance for Gaussian and Bernoulli ensembles.
Performance of compressed sensing under RIP.
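For reference, \Phi satisfies the restricted isometry property (RIP) of order k with constant \delta_k when

    \[
      (1 - \delta_k)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1 + \delta_k)\,\|x\|_2^2
      \qquad \text{for all } k\text{-sparse } x .
    \]

A typical guarantee then reads: if \delta_{2k} is small enough (for instance \delta_{2k} < \sqrt{2} - 1), the l1 minimizer x^* obeys

    \[
      \|x - x^*\|_2 \;\le\; C\, \frac{\sigma_k(x)_{\ell_1}}{\sqrt{k}} ,
    \]

where \sigma_k(x)_{\ell_1} denotes the error of best k-term approximation of x in the l1 norm and C is an absolute constant.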
Proofs of the Kashin-Gluskin theorems.
l1 minimization, greedy algorithms, iterated least squares.
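As one representative of the greedy family, a minimal orthogonal matching pursuit in numpy (a sketch assuming Phi has roughly unit-norm columns; stopping after k steps is one of several possible rules):

    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: greedily build a k-column support."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
            support.append(j)
            # least-squares fit on the current support, then update the residual
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x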
Bernoulli and Gaussian random variables.
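The two ensembles side by side, normalized so that columns have unit expected norm (a sketch; the dimensions are arbitrary):

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 64, 256
    gaussian = rng.standard_normal((m, n)) / np.sqrt(m)             # i.i.d. N(0, 1/m)
    bernoulli = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # i.i.d. +/- 1/sqrt(m)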
Constructions from finite fields, circulant matrices.
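A partial random circulant matrix, one structured alternative to fully random ensembles, built with scipy (the +/-1 generator and the row-subsampling convention below are our illustrative choices):

    import numpy as np
    from scipy.linalg import circulant

    rng = np.random.default_rng(6)
    n, m = 256, 64
    C = circulant(rng.choice([-1.0, 1.0], size=n))   # circulant from a random sign vector
    rows = rng.choice(n, size=m, replace=False)      # keep m rows at random
    Phi = C[rows] / np.sqrt(m)                       # partial circulant sensing matrix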
Shannon-Nyquist Theory, Pulse Code Modulation, Sigma-Delta Modulation, Kolmogorov entropy, optimal encoding.
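A first-order Sigma-Delta modulator as a minimal sketch (one-bit quantizer with error feedback; the sinusoidal input and the averaging filter are toy choices):

    import numpy as np

    def sigma_delta(samples):
        """First-order Sigma-Delta: one-bit output with error feedback."""
        u, bits = 0.0, []
        for s in samples:
            q = 1.0 if u + s >= 0 else -1.0   # one-bit quantizer
            u = u + s - q                     # state accumulates the quantization error
            bits.append(q)
        return np.array(bits)

    t = np.linspace(0, 1, 4096)                      # heavily oversampled input
    x = 0.5 * np.sin(2 * np.pi * 3 * t)
    bits = sigma_delta(x)
    # averaging the bit stream reconstructs x up to O(1/oversampling)
    recon = np.convolve(bits, np.ones(64) / 64, mode="same")
    print(np.max(np.abs(recon - x)[64:-64]))         # small away from the edges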
What do these algorithms all have in common? What are the common goals of these problems, and how do the algorithms achieve them? I will discuss several known techniques and open problems.
What algorithmic problem do we mean by Compressed Sensing? There are a variety of alternatives, each with different algorithmic solutions (both theoretical and practical). I will discuss some of the different types of results, from the combinatorial to the probabilistic.