Poster Session and Reception

Tuesday, April 14, 2015 - 4:00pm - 6:00pm
Lind 400
  • Fractional Processes on Wiener Chaos and Non-central Limit Theorems
    Shuyang Bai (Boston University)
    By fractional processes, we mean self-similar processes with stationary increments. These processes are important because of their connection to the scaling limits of sums of stationary sequences; when the scaling limit is not a Brownian motion, such results are called non-central limit theorems. We focus here on some fractional processes defined on a Wiener chaos of a single order. In particular, we introduce a class of processes called generalized Hermite processes, which includes the fractional Brownian motion and, more generally, the Hermite processes considered in the literature. We obtain new non-central limit theorems in which the generalized Hermite processes arise as the scaling limits of certain long-memory nonlinear stationary sequences. This is joint work with Murad S. Taqqu.
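
    For concreteness (these are the standard definitions, not stated in the abstract): a process $\{X(t)\}_{t\ge 0}$ is self-similar with index $H>0$ and has stationary increments if
    $$\{X(ct)\}_{t\ge 0}\overset{d}{=}\{c^{H}X(t)\}_{t\ge 0}\ \text{ for all } c>0, \qquad \{X(t+s)-X(s)\}_{t\ge 0}\overset{d}{=}\{X(t)-X(0)\}_{t\ge 0}\ \text{ for all } s\ge 0.$$
    Fractional Brownian motion is, up to scale, the unique Gaussian process with these two properties for $0<H<1$.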
  • Complete Dictionary Recovery over the Sphere and Beyond
    Ju Sun (Columbia University)
    How can we concisely represent a class of signals? This is the central problem to address in signal compression, and has become increasingly important to signal acquisition, processing, and analysis. Dictionary learning is an attractive conceptual framework that learns sparse representations for a collection of input signals, and has found numerous successful applications in modern signal processing and machine learning. In contrast, theoretical understanding of dictionary learning remains limited.

    We will focus on the problem of recovering a complete (i.e., square and invertible) dictionary A and coefficients X from Y = AX, provided X is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning. We present an efficient algorithm that provably recovers A when X has O(n) nonzeros per column, under a suitable probability model for X. This is the first result of its kind, as prior results based on efficient algorithms provide recovery guarantees only when X has O(n^{1/2}) nonzeros per column, severely limiting the model capacity of dictionary learning.

    The algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show that this apparently hard problem is tractable, we provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no spurious local minima. This particular geometric structure allows us to design a Riemannian trust-region algorithm over the sphere that provably converges to a global minimizer from an arbitrary initialization, despite the presence of saddle points.
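
    One common way to phrase the core spherical step in this line of work (the objective below is an illustrative formulation with assumed notation, not quoted from the abstract) is to seek rows of $X$ one at a time by minimizing a smooth sparsity surrogate over the unit sphere:
    $$\min_{q\in\mathbb{S}^{n-1}}\ \frac{1}{p}\sum_{i=1}^{p} h_{\mu}\bigl(q^{T}\hat{Y}e_{i}\bigr), \qquad h_{\mu}(z)=\mu\log\cosh(z/\mu),$$
    where $\hat{Y}$ is a preconditioned version of the $n\times p$ data matrix $Y$ and $\mu>0$ is a smoothing parameter; under sparsity assumptions on $X$, global minimizers of this problem correspond to rows of $X$.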

    Besides the new theoretical insight into the dictionary learning problem, the geometric approach we develop here may shed light on other problems arising from nonconvex recovery of structured signals.

    This is joint work with Prof. John Wright and Mr. Qing Qu. A more formal summary of the result can be found at:
    * Ju Sun, Qing Qu, John Wright. Complete Dictionary Recovery over the Sphere: a Summary.
  • On the Geometry of Convex Typical Sets
    Varun Jog (University of California, Berkeley)
    We consider convex sets obtained from one-sided typical sets of log-concave distributions, and show that the sequence of intrinsic volumes corresponding to these typical sets converges to a limit function under an appropriate scaling. The limit function may be used to represent the exponential growth rate of intrinsic volumes of the typical sets. Since differential entropy is the exponential growth rate of the volume of typical sets, the exponential growth rate of intrinsic volumes generalizes the differential entropy of log-concave distributions. We conjecture a version of the entropy power inequality for such a generalization of differential entropy.
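
    The connection invoked here is the standard one (notation assumed): for i.i.d. samples $X_1,\dots,X_n$ from a density $f$ with differential entropy $h(f)$, the typical sets $T_n\subset\mathbb{R}^n$ satisfy
    $$\frac{1}{n}\log \mathrm{Vol}(T_n)\ \longrightarrow\ h(f) \quad \text{as } n\to\infty,$$
    so replacing the volume (the top intrinsic volume) by the full sequence of intrinsic volumes yields the proposed generalization of differential entropy.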
  • Discrete Entropy Power Inequalities
    Jae Oh Woo (Yale University)
    We propose discrete analogues of entropy power inequalities and prove a discrete entropy power inequality for uniform distributions. To do so, we establish a majorization result based on rearrangement inequalities over $\mathbb{Z}$ or $\mathbb{Z}/p\mathbb{Z}$, and the strong Sperner property of certain posets. We also show that our entropy inequalities imply that the optimal solution of the Littlewood-Offord problem can be interpreted as a minimizer of entropy.

    This is joint work with Liyao Wang and Mokshay Madiman.
  • On the Analogue of the Concavity of Entropy Power in the Brunn-Minkowski Theory
    Arnaud Marsiglietti (University of Minnesota, Twin Cities)
    Elaborating on the similarity between the entropy power inequality and the Brunn-Minkowski inequality, Costa and Cover conjectured the $\frac{1}{n}$-concavity of the outer parallel volume of measurable sets as an analogue of the concavity of entropy power. We investigate this conjecture and study its relationship with geometric inequalities.
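
    For reference, the two statements being compared are (in their standard forms, assumed here): the concavity of entropy power, namely that $t\mapsto N(X+\sqrt{t}\,Z)$ is concave for $N(X)=e^{2h(X)/n}$ (up to normalization) and $Z$ standard Gaussian; and the conjectured geometric analogue that, for a measurable set $A\subset\mathbb{R}^n$,
    $$t\ \longmapsto\ \bigl|A+tB_2^n\bigr|^{1/n}\ \text{ is concave on } [0,\infty),$$
    where $B_2^n$ is the Euclidean unit ball and $|\cdot|$ denotes Lebesgue measure.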
  • CLT for Sample Covariance Matrices in the Tensor Product Case
    Ganna Lytova (University of Alberta)
    For any $k$, $m$, $n$, we consider $n^k\times n^k$ real symmetric random matrices of the form
    $$M_n=\sum_{\alpha=1}^{m}\tau _{\alpha }\,\mathbf{y}_{\alpha }^{(1)}\otimes\dots\otimes\mathbf{y}_{\alpha }^{(k)}\bigl(\mathbf{y}_{\alpha }^{(1)}\otimes\dots\otimes\mathbf{y}_{\alpha }^{(k)}\bigr)^T,$$
    where $\tau _{\alpha }$ are real numbers and $\{\mathbf{y}_\alpha^{(p)}\}_{\alpha, p=1}^{m,k}$ are i.i.d. copies of a normalized isotropic random vector in $\mathbb{R}^n$. We suppose that $k$ is fixed and $m\rightarrow\infty$, $m/n^k\rightarrow c\in [0,\infty)$ as $n\rightarrow\infty$. This tensor analogue of the sample covariance matrices appeared in quantum information theory and was first introduced into random matrix theory by Hastings, Ambainis, and Harrow. For the case of vectors uniformly distributed on the unit sphere, they proved the Marchenko-Pastur law for the limit $\mathcal{N}$ of the expectation of the normalized counting measure of eigenvalues, and the convergence of the extreme eigenvalues to the endpoints of the support of $\mathcal{N}$. We find a class of random vectors satisfying certain moment conditions such that, for any sufficiently smooth test function $\varphi$, the linear statistics $\mathrm{Tr}\,\varphi(M_n)$ of the eigenvalues of the corresponding matrices $M_n$, centered and properly normalized, converge in distribution to a Gaussian random variable.
  • On the Gaussian Brunn-Minkowski Inequality
    Galyna Livshyts (Kent State University)
  • Surface Area of a Convex Body in $\mathbb{R}^n$ with Respect to Log Concave Spherically Invariant Measures
    Galyna Livshyts (Kent State University)
  • Divergence for Log Concave Functions
    Umut Caglar (Case Western Reserve University)
    We prove new entropy inequalities for log concave functions that strengthen and generalize the recently established reverse log-Sobolev inequality for such functions. This leads naturally to the concept of f-divergence and, in particular, to the concept of relative entropy for log concave functions. We establish their basic properties, among them the affine invariant valuation property. Applications are given in the theory of convex bodies.
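
    For orientation (this is the classical definition that the functional version extends, with assumed notation): for a convex function $f$ with $f(1)=0$ and probability measures $P\ll Q$,
    $$D_f(P\|Q)=\int f\!\left(\frac{dP}{dQ}\right)dQ,$$
    with the choice $f(t)=t\log t$ recovering the relative entropy.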

    This is based on joint work with E. Werner.
  • Grey Swans: Plausible Stress Scenarios
    Gary Nan Tie (The Travelers Companies, Inc.)
    For the purposes of risk management and stress testing, we characterize a spectrum of plausible extreme events, which we dub 'Grey Swans', by introducing a probabilistic method involving the concentration of measure phenomenon. As a result, stress tests can be triaged according to severity, probability, and now, information-based plausibility.