
Poster Session and Reception: 5:30-7:00
Poster submissions welcome from all participants

Monday, October 27, 2008 - 6:30pm - 8:00pm
Lind 400
  • Approximate nearest subspace search with applications to pattern recognition (poster)

    Linear and affine subspaces are commonly used to describe the
    appearance of objects under different lighting, viewpoint,
    articulation, and even identity. A natural problem arising from their
    use is -- given a query image portion represented as a point in some
    high dimensional space -- find a subspace near to the query. This talk
    presents an efficient solution to the approximate nearest subspace
    problem for both linear and affine subspaces. Our method is based on a
    simple reduction to the problem of nearest point search, and can thus
    employ tree-based search or locality-sensitive hashing to find a near
    subspace. Further performance improvement is achieved by using random
    projections to lower the dimensionality of the problem. We provide
    theoretical proofs of correctness and error bounds of our construction,
    and demonstrate its capabilities on synthetic and real data. Our
    experiments demonstrate that an approximate nearest subspace can be
    located significantly faster than the exact nearest subspace, while at
    the same time it can find better matches compared to a similar search
    on points, in the presence of variations due to viewpoint, lighting,
    and so forth.
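
    As a minimal illustration of the search problem the poster addresses (not the authors' reduction itself), the sketch below measures the distance from a query to an affine subspace, performs the exact search by brute force, and uses a Gaussian random projection, re-orthonormalized with a QR step, to lower the dimensionality before searching, as the abstract suggests; all names are ours.

        import numpy as np

        def dist_to_affine_subspace(q, basis, offset):
            """Distance from query q to the affine subspace {offset + basis @ t};
            basis is (D, k) with orthonormal columns."""
            r = q - offset
            return np.linalg.norm(r - basis @ (basis.T @ r))

        def nearest_subspace_bruteforce(q, subspaces):
            """Exact nearest subspace by linear scan; the baseline that the
            approximate method is compared against."""
            return min(range(len(subspaces)),
                       key=lambda i: dist_to_affine_subspace(q, *subspaces[i]))

        def random_projection(D, d, rng):
            """Gaussian random projection matrix mapping R^D to R^d."""
            return rng.standard_normal((d, D)) / np.sqrt(d)

        def project_subspace(P, basis, offset):
            """Image of an affine subspace under P, with the basis re-orthonormalized."""
            Q, _ = np.linalg.qr(P @ basis)
            return Q, P @ offset

    Running the same search over the projected query and subspaces (or over the authors' point representation with a tree or locality-sensitive hashing index) trades a small loss in accuracy for a large speedup.
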
  • Supervised dictionary learning (poster)

    It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This work proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks. This is joint work with F. Bach (INRIA), J. Ponce (Ecole Normale Supérieure), G. Sapiro (University of Minnesota) and A. Zisserman (Oxford University).
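
    One schematic form such an objective can take (our notation; the poster's exact formulation may differ) couples a reconstruction term, a sparsity penalty, and a classification loss through the shared dictionary and the sparse codes:

        \min_{D,\;\theta,\;\{\alpha_i\}} \;\; \sum_i \Big( \mathcal{C}\big(y_i\, f(x_i,\alpha_i,\theta)\big) + \lambda_0 \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 \Big) \; + \; \lambda_2 \|\theta\|_2^2,

    where D is the shared dictionary, \alpha_i the sparse code of signal x_i, f a class-decision function with parameters \theta (linear in the simplest variant, kernelized in the most general one), \mathcal{C} a classification loss such as the logistic loss, and y_i \in \{-1,+1\} the class label.
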
  • Compressive sampling reconstruction by subspace refinement (poster)
    Bradley Alpert (National Institute of Standards and Technology)
    Spurred by recent progress in compressive sampling methods, we develop a new reconstruction algorithm for the Fourier problem of recovering from noisy samples a linear combination of unknown frequencies embedded in a very high-dimensional ambient space. The approach differs from both L1-norm minimization and orthogonal matching pursuit (and its variants) in that no basis for the ambient space is chosen a priori. The result is improved computational complexity. We provide numerical examples that demonstrate the method's robustness and efficiency.

    Joint work with Yu Chen.
  • Analysis of scalar fields over point cloud data (poster)
    Frédéric Chazal (INRIA Saclay - Île-de-France)
    (Joint work with L. Guibas, S. Oudot and P. Skraba; to appear in Proc.
    SODA'09.)

    Given a real-valued function f defined over some metric space
    X, is it possible to recover some structural information about
    f from the sole information of its values at a finite subset L
    of sample points, whose pairwise distances in
    X are given? We provide a positive answer to this question. More
    precisely, taking advantage of recent advances on the front of
    stability for persistence diagrams, we introduce a novel algebraic
    construction, based on a pair of nested families of simplicial
    complexes built on top of the point cloud L, from which the
    persistence diagram of f can be faithfully approximated. We derive
    from this construction a series of algorithms for the analysis of
    scalar fields from point cloud data. These algorithms are simple and
    easy to implement, have reasonable complexities, and come
    with theoretical guarantees. To illustrate the generality and
    practicality of the approach, we also present experimental
    results obtained in various applications, ranging from clustering to
    sensor networks.
  • Clustering on Riemannian manifolds (poster)
    Alvina Goh (Johns Hopkins University), René Vidal (Johns Hopkins University)
    Over the past few years, various techniques have been developed for learning a low-dimensional representation of a nonlinear manifold embedded in a high-dimensional space. Unfortunately, most of these techniques are limited to the analysis of a single connected nonlinear submanifold of a Euclidean space and suffer from degeneracies when applied to linear manifolds (subspaces).

    This work proposes a novel algorithm for clustering data sampled from multiple submanifolds of a Riemannian manifold. First, we learn a representation of the data using generalizations of local nonlinear dimensionality reduction algorithms from Euclidean to Riemannian spaces. Such generalizations exploit geometric properties of the Riemannian space, particularly its Riemannian metric. Then, assuming that the data points from different groups are separated, we show that the null space of a matrix M built from the local representation gives the segmentation of the data. However, this method can fail when the data points are drawn from a union of linear manifolds, because M then contains additional vectors in its null space. In this case, we propose an alternative method for computing M, which avoids the aforementioned degeneracies, thereby resulting in the correct segmentation. The final result is a simple linear algebraic algorithm for simultaneous nonlinear dimensionality reduction and clustering of data lying in multiple linear and nonlinear manifolds.

    We present several applications of our algorithm to computer vision problems such as texture clustering, segmentation of rigid body motions, segmentation of dynamic textures, and segmentation of diffusion MRI. Our experiments show that our algorithm performs on par with state-of-the-art techniques that are specifically designed for such segmentation problems.


  • A supervised dimensionality reduction framework for exponential-family variables (poster)
    Irina Rish (IBM)
    Dimensionality reduction (DR) is a popular data-processing technique that serves the
    following two main purposes: it helps to provide a meaningful interpretation and
    visualization of the data, and it also helps to prevent overfitting when the number of
    dimensions greatly exceeds the number of samples, thus working as a form of regularization.
    However, dimensionality reduction should be driven by a particular data analysis goal.
    When our goal is prediction rather than an (unsupervised) exploratory data analysis,
    supervised dimensionality reduction (SDR) that combines DR with learning a predictor can
    significantly outperform a simple combination of unsupervised DR with a subsequent learning
    of a predictor on the resulting low-dimensional representation.
    The problem of supervised dimensionality reduction can also be viewed as finding
    a predictive low-dimensional representation that captures the information about the class label
    contained in the high-dimensional feature vector while ignoring the high-dimensional noise.

    We propose a general SDR framework that views both features and class labels as exponential-family
    random variables (PCA-like dimensionality reduction is included as a particular case of Gaussian data).
    We learn data- and class-appropriate generalized linear models (GLMs), thus handling both
    classification and regression, with both discrete and real-valued data. Our SDR-GLM optimization
    problem can be viewed as discriminative learning based on minimizing a loss defined by the conditional
    probability of the class given the hidden variables, while using as a regularizer the conditional probability of the
    features given the low-dimensional (hidden-variable) representation.

    The main advantage of our approach, besides its generality, is the use of simple closed-form
    update rules in its alternating minimization procedure. The method admits a short Matlab
    implementation, runs fast, and is guaranteed to converge. The convergence property, as well as the
    closed-form update rules, result from using appropriate auxiliary functions bounding each part of
    the objective function (i.e., reconstruction and prediction losses). We exploit the additive property
    of auxiliary functions in order to combine bounds on multiple loss functions.


    Promising empirical results are demonstrated on a variety of high-dimensional datasets. In particular,
    our results on simulated datasets convincingly demonstrate that the SDR-GLM approach can discover the underlying
    low-dimensional structure in high-dimensional noisy data, while outperforming SVM and SVDM, often by far,
    and practically always beating unsupervised DR followed by learning a predictor. On real-life datasets,
    SDR approaches outperform unsupervised DR by far, while matching or sometimes outperforming SVM and SVDM.
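
    As a rough illustration of the SDR-GLM objective (our notation, not necessarily the poster's), both terms are exponential-family log-losses that share the hidden low-dimensional representation:

        \min_{U,\;V,\;\theta} \;\; \sum_i D_F\big(x_i,\; f(u_i V)\big) \; + \; \lambda \sum_i D_G\big(y_i,\; g(u_i \theta)\big),

    where u_i is the low-dimensional (hidden-variable) representation of sample i, V and \theta are the GLM parameters of the feature and label models, f and g the corresponding link functions, and D_F, D_G the Bregman divergences (negative log-likelihoods) of the chosen exponential-family distributions; choosing Gaussian feature models recovers PCA-like reconstruction. Alternating minimization over U, V, and \theta then uses the closed-form auxiliary-function updates described above.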

  • Iteratively re-weighted least squares and vector valued data restoration from lower dimensional samples (poster)
    Massimo Fornasier (Johann Radon Institute for Computational and Applied Mathematics)
    We present the analysis of a superlinearly convergent algorithm for
    L1-minimization based on iteratively reweighted least squares, and show
    improved performance in compressed sensing.
    A similar algorithm is then applied to the efficient solution of a
    system of singular PDEs for image recolorization in a relevant real-life
    problem of art restoration.
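
    A minimal sketch of the iteratively reweighted least squares idea for L1-minimization in the constrained (basis pursuit) form min ||x||_1 subject to Ax = b; the simple epsilon decay below stands in for the more careful update rule behind the superlinear convergence result, and all names are ours.

        import numpy as np

        def irls_basis_pursuit(A, b, n_iter=100, eps=1.0):
            """Each step solves a smoothed, weighted least-squares problem in
            closed form; as eps shrinks, the iterates approach the minimum-L1
            solution of Ax = b."""
            x = A.T @ np.linalg.solve(A @ A.T, b)        # minimum-norm start
            for _ in range(n_iter):
                w = np.sqrt(x**2 + eps**2)               # weights ~ |x_i|
                AW = A * w                               # equals A @ diag(w)
                x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
                eps = max(0.9 * eps, 1e-10)              # crude smoothing decay
            return x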
  • Manifold models for single- and multi-signal recovery (poster)
    Michael Wakin (Colorado School of Mines)
    The emerging theory of Compressive Sensing states that a signal obeying a sparse model can be reconstructed from a small number of random linear measurements. In this poster, we will explore manifold-based models as a generalization of sparse representations, and we will discuss the theory and applications of these models in single- and multi-signal recovery from compressive measurements.
  • Tensor approximation - structure and methods (poster)
    Berkant Savas (Linköping University)
    We consider the tensor approximation problem. Given a tensor we want to approximate it by another tensor of lower multilinear rank. It turns out this problem is defined on a product of Grassmann manifolds. We describe the structure of various parts of the approximation problem and present convergence plots for Newton and quasi-Newton methods (with BFGS and limited-memory BFGS updates) that solve the problem. All algorithms are applicable to both general and symmetric tensors and incorporate the Grassmann manifold structure of the problem. The benefits of these algorithms compared to traditional alternating least squares approaches are higher convergence rates and the existence of rigorous theory establishing convergence to a stationary point.

    This is joint work with Lars Eldén (Linköping University) and Lek-Heng Lim (University of California, Berkeley).
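
    In our notation, for a third-order tensor \mathcal{A} and target multilinear rank (r_1, r_2, r_3), the problem referred to above can be written as

        \min_{\mathcal{S},\,U,\,V,\,W} \big\| \mathcal{A} - \mathcal{S} \times_1 U \times_2 V \times_3 W \big\|_F \quad \text{s.t.} \quad U^\top U = I_{r_1},\; V^\top V = I_{r_2},\; W^\top W = I_{r_3},

    where \times_k denotes the mode-k product. Eliminating the core \mathcal{S} shows that the objective depends only on the column spaces of U, V, and W, which is why the problem is defined on a product of Grassmann manifolds and why Newton and quasi-Newton methods can exploit that structure.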
  • Fast multiscale clustering and manifold identification (poster)
    Dan Kushnir (Yale University)
    We present a novel multiscale clustering algorithm inspired by algebraic
    multigrid techniques. Our method begins with assembling data points according
    to local similarities. It uses an aggregation process to obtain reliable
    scale-dependent global properties, which arise from the local similarities.
    As the aggregation process proceeds, these global properties affect the formation
    of coherent clusters. The global features that can be utilized include, for
    example, density, shape, intrinsic dimensionality, and orientation. The last
    three features are part of the manifold identification process, which is performed
    in parallel to the clustering process. The algorithm detects clusters
    that are distinguished by their multiscale nature, separates clusters
    with different densities, and identifies and resolves intersections between
    clusters. The algorithm is tested on synthetic and real datasets; its running
    time is linear in the size of the dataset.
  • The smashed filter for compressive classification (poster)
    Marco Duarte (Rice University)
    We propose a framework for exploiting the same measurement techniques used in emerging compressive sensing systems in the new setting of compressive classification. The first part of the framework maps traditional maximum likelihood hypothesis testing into the compressive domain; we find that the number of measurements required for a given classification performance level does not depend on the sparsity or compressibility of the signal but only on the noise level. The second part of the framework applies the generalized maximum likelihood method to deal with unknown transformations such as the translation, scale, or viewing angle of a target object. Such a set of transformed signals forms a low-dimensional, nonlinear manifold in the high-dimensional image space. We exploit recent results that show that random projections of a smooth manifold result in a stable embedding of the manifold in the lower-dimensional space. Non-differentiable manifolds, prevalent in imaging applications, are handled through the use of multiscale random projections that perform implicit regularization. We find that the number of measurements required for a given classification performance level grows linearly in the dimensionality of the manifold but only logarithmically in the number of pixels/samples and image classes.

    This is joint work with Mark Davenport, Michael Wakin and Richard Baraniuk.
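
    A toy sketch of the compressive classification step (hypothetical names; the multiscale projections used for non-differentiable manifolds are not shown): each class is represented by a discrete set of transformed templates, and the class whose best-matching projected template explains the measurements is returned.

        import numpy as np

        def smashed_filter_classify(y, Phi, templates):
            """y: compressive measurements (M,); Phi: measurement matrix (M, N);
            templates: dict mapping class label -> array (n_candidates, N) of the
            target rendered under candidate transformations (shifts, scales, ...).
            Returns the label minimizing the GLRT-style residual."""
            best_label, best_err = None, np.inf
            for label, cands in templates.items():
                # minimize over the unknown transformation within each class
                errs = np.linalg.norm(y[None, :] - cands @ Phi.T, axis=1)
                if errs.min() < best_err:
                    best_label, best_err = label, errs.min()
            return best_label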
  • Using persistent homology to recover spatial information from encounter traces (poster)
    Brenton Walker (Laboratory for Telecommunications Sciences)
    In order to better understand human and animal mobility and its potential effects on Mobile Ad-Hoc Networks and Delay-Tolerant Networks, many researchers have conducted experiments which collect encounter data. Most analyses of these data have focused on isolated statistical properties such as the distribution of node inter-encounter times and the degree distribution of the connectivity graph.

    On the other hand, new developments in computational topology, in particular persistent homology, have made it possible to compute topological invariants from noisy data. These homological methods provide a natural way to draw conclusions about global structure based on collections of local information.

    We use persistent homology techniques to show that in some cases encounter traces can be used to deduce information about the topology of the physical space the experiment was conducted in, and to detect certain changes in the space. We also show that one can distinguish simulated encounter traces generated on a bounded rectangular grid from traces generated on a grid with the opposite edges wrapped (a toroidal grid). Finally, we have found that nontrivial topological features also appear in real experimental encounter traces.
  • Orthant-wise gradient projection method for sparse reconstruction (poster)
    Qiu Wu (The University of Texas at Austin)
    Many problems in signal processing and statistics involve
    finding sparse solutions by solving the L1-regularized least-squares problem.

    The orthant-wise method exploits the fact that the L1 term is a linear function over any given orthant; hence, the objective function is differentiable within each orthant.

    We implement two variants of the orthant-wise gradient projection method:
    one based on a steepest-descent search direction and the other on a quasi-Newton search direction. Experimental results on compressive
    sampling problems demonstrate that the performance of our method compares favorably with that of alternative methods in the literature.
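
    A sketch of the two ingredients that make the orthant-wise approach workable in code (our simplification, with names of our own): a pseudo-gradient that picks the steepest one-sided descent direction at zero coordinates, and a projection that keeps each step inside the orthant it started from.

        import numpy as np

        def pseudo_gradient(x, g_smooth, lam):
            """Orthant-wise 'gradient' of 0.5*||A x - b||^2 + lam*||x||_1, where
            g_smooth = A.T @ (A @ x - b) is the gradient of the smooth part."""
            g = g_smooth + lam * np.sign(x)              # valid wherever x_i != 0
            zero = (x == 0)
            g[zero] = 0.0
            neg = zero & (g_smooth < -lam)               # descent by increasing x_i
            pos = zero & (g_smooth > lam)                # descent by decreasing x_i
            g[neg] = (g_smooth + lam)[neg]
            g[pos] = (g_smooth - lam)[pos]
            return g

        def project_onto_orthant(x_new, x_old, g):
            """Zero out coordinates that crossed into a different orthant than
            the one the step was taken in (sign of x_old, or -sign(g) at zeros)."""
            orthant = np.where(x_old != 0, np.sign(x_old), -np.sign(g))
            return np.where(np.sign(x_new) == orthant, x_new, 0.0)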
  • Structure determination of proteins using cryo-electron microscopy (poster)
    Yoel Shkolnisky (Yale University), Amit Singer (Princeton University)
    The goal in Cryo-EM structure determination is to reconstruct 3D
    macromolecular structures from their noisy projections taken at unknown
    random orientations by an electron microscope. Resolving the Cryo-EM problem
    is of great scientific importance, as the method is applicable to
    essentially all macromolecules, as opposed to other existing methods such as
    crystallography. Since almost all large proteins have not yet been
    crystallized for 3D X-ray crystallography, Cryo-EM seems the most promising
    alternative, once its associated mathematical challenges are solved. We
    present an extremely efficient and robust solver for the Cryo-EM problem
    that successfully recovers the projection angles in a globally consistent
    manner. The simple algorithm combines ideas and principles from spectral
    graph theory, nonlinear dimensionality reduction, geometry and computed
    tomography. The heart of the algorithm is a unique construction of a sparse
    graph followed by a fast computation of its eigenvectors.

    Joint work with Ronald Coifman and Fred Sigworth.
  • Mixed data segmentation via lossy data compression (poster)
    John Wright (University of Illinois at Urbana-Champaign)
    We consider the problem of clustering mixed data drawn from distributions of (possibly) varying intrinsic dimensionality, e.g., degenerate Gaussians or linear subspaces. We approach this problem from the perspective of lossy data compression, seeking a segmentation that minimizes the number of bits needed to code the data up to a prespecified distortion. The coding length is minimized via an agglomerative algorithm that merges the pair of groups such that the resulting coding length is minimal. This simple algorithm converges globally for a wide range of simulated examples. It also produces state-of-the-art experimental results on applications in image segmentation from texture features and motion segmentation from tracked point features.
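
    A compact sketch of the agglomerative criterion (the coding-length constants follow the lossy-coding line of work only approximately, and a practical implementation updates lengths incrementally instead of recomputing them):

        import numpy as np
        from itertools import combinations

        def coding_length(X, eps):
            """Approximate number of bits to code the columns of X (D x m) up to
            distortion eps, from a rate-distortion bound for Gaussian sources."""
            D, m = X.shape
            Xc = X - X.mean(axis=1, keepdims=True)
            _, logdet = np.linalg.slogdet(np.eye(D) + (D / (m * eps**2)) * (Xc @ Xc.T))
            return 0.5 * (m + D) * logdet / np.log(2.0)

        def segmentation_length(groups, m, eps):
            """Per-group data bits plus the bits needed to code group membership."""
            return sum(coding_length(G, eps) + G.shape[1] * np.log2(m / G.shape[1])
                       for G in groups)

        def agglomerative_segmentation(X, eps=0.1):
            """Greedily merge the pair of groups giving the largest decrease in
            total coding length; stop when no merge decreases it."""
            m = X.shape[1]
            groups = [X[:, [i]] for i in range(m)]
            while len(groups) > 1:
                current = segmentation_length(groups, m, eps)
                candidates = []
                for i, j in combinations(range(len(groups)), 2):
                    merged = [G for k, G in enumerate(groups) if k not in (i, j)]
                    merged.append(np.hstack((groups[i], groups[j])))
                    candidates.append((segmentation_length(merged, m, eps), merged))
                best_len, best_groups = min(candidates, key=lambda c: c[0])
                if best_len >= current:
                    break
                groups = best_groups
            return groups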
  • Representing and manipulating implicitly defined manifolds (poster)
    Michael Henderson (IBM)
    This poster illustrates an algorithm which was developed to compute implicitly defined manifolds in engineering applications, where the manifold is of low (1-4) dimension, but is embedded in a very high dimensional space (100 and up).

    Though the computational details (finding a point and the tangent space of the manifold) are different from those in manifold learning, and the embedding is explicit instead of one of the unknowns, there are significant issues in common when computing any manifold.

    The representation used closely follows the definition of a manifold. A set of spherical balls (with differing radii) serves as the chart domains, with the embedding mapping limited to the embedded center and the tangent space. In addition, polyhedra are associated with each chart so that overlapping charts correspond to faces of the polyhedra (common to the polyhedra of the overlapping charts). Boundary charts are represented in the same manner, beginning with a polyhedral cone, and adding faces for each overlapping chart.

    The polyhedra of interior charts are Laguerre-Voronoi cells, and so it is very easy to locate points on the boundary of a partially represented manifold (if all vertices are closer to the origin than the radius of the ball, the chart is completely surrounded by other charts). This provides a basic continuation or extension algorithm for creating a set of covering charts on a manifold.

    In terms of data structures, the atlas of charts is a simple list. The polyhedra are in the same simple list, but also form a cell complex whose dual is a Laguerre-Delaunay triangulation of the manifold. Interestingly, the fine structure is a cell complex, but so is the gross structure of the manifold. Manifolds with boundary are represented by another cell complex, in which the faces are manifolds that share boundaries, and those boundaries are the boundary cells of the faces.

    So far this approach has been applied to three different kinds of manifolds which are common in dynamical systems. I hope to find similar applications in manifold learning.

  • High order consistency relations for classification and de-noising of Cryo-EM images (poster)
    Yoel Shkolnisky (Yale University), Amit Singer (Princeton University)
    In order for biologists to exploit the full potential embodied in the
    Cryo-EM method, two major challenges must be overcome. The first challenge
    is the extremely low signal-to-noise ratio of the projection images.
    Improving the signal-to-noise by averaging same-view projections is an
    essential preprocessing step for all algorithms. The second challenge is the
    heterogeneity problem, in which the observed projections belong to two or
    more different molecules or different conformations. Traditional methods
    assume identical particles, therefore failing to distinguish the different
    particle species. This results in an inaccurate reconstruction of the
    molecular structure.

    For the first problem, we present two different high order consistency
    relations between triplets of images. The inclusion of such high order
    information leads to improvement in the classification and the de-noising of
    the noisy images compared to existing methods that use only pairwise
    similarities. We also present a generalization of Laplacian eigenmaps to
    utilize such higher order affinities in a data set. This generalization is
    different from current tensor decomposition methods. For the second
    challenge, we describe a spectral method to establish two or more ab initio
    reconstructions from a single set of images.

    Joint work with Ronald Coifman and Fred Sigworth.

  • 3-D motion segmentation via robust subspace separation (poster)
    Ehsan Elhamifar (Johns Hopkins University)
    We consider the problem of segmenting multiple rigid-body motions in a video sequence from tracked feature point trajectories. Using the affine camera model, this motion segmentation problem can be cast as the problem of segmenting samples drawn from a union of linear subspaces of dimension two, three or four. However, due to limitations of the tracker, occlusions and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features or missing entries, or may not correspond to any valid motion model.

    In this poster, we present a combination of robust subspace separation schemes that can deal with all of these practical issues in a unified framework. For complete uncorrupted trajectories, we examine approaches that try to harness the subspace structure of the data either globally (Generalized PCA) or by minimizing a lossy coding length (Agglomerative Lossy Coding). For incomplete or corrupted trajectories, we develop methods based on PowerFactorization or L1-minimization. The former method fills in missing entries using a linear projection onto a low-dimensional space. The latter method draws strong connections between lossy compression, rank minimization, and sparse representation. We compare our approach to other methods on a database of 167 motion sequences with full motions, independent motions, degenerate motions, partially dependent motions, missing data, outliers, etc. Our results are on par with state-of-the-art results, and in many cases exceed them.
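
    A minimal sketch of a PowerFactorization-style completion step for trajectories with missing entries (names are ours; under the affine camera model the trajectory matrix of a single rigid motion has rank at most four): alternate least-squares fits of two low-rank factors, restricted to the observed entries, then fill in the gaps from their product.

        import numpy as np

        def power_factorization(W, observed, r=4, n_iter=100, seed=0):
            """Fit W (2F x P trajectory matrix) ~ A @ B.T of rank r using only
            the entries where the boolean mask `observed` is True, and return
            the completed matrix A @ B.T."""
            rng = np.random.default_rng(seed)
            n, m = W.shape
            A = rng.standard_normal((n, r))
            B = np.zeros((m, r))
            for _ in range(n_iter):
                for j in range(m):                 # update point j's coefficients
                    rows = observed[:, j]
                    B[j] = np.linalg.lstsq(A[rows], W[rows, j], rcond=None)[0]
                for i in range(n):                 # update row i's coefficients
                    cols = observed[i, :]
                    A[i] = np.linalg.lstsq(B[cols], W[i, cols], rcond=None)[0]
            return A @ B.T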
  • Joint manifold models for collaborative inference (poster)

    We introduce a new framework for collaborative inference and
    efficient fusion of manifold-modeled data. We formulate the
    notion of a joint manifold model for signal ensembles, and
    using this framework we demonstrate the superior performance of
    joint processing techniques for a range of tasks including
    detection, classification, parameter estimation, and manifold
    learning. We then exploit recent results concerning random
    projections of low-dimensional manifolds to develop an
    efficient framework for distributed data fusion.
    As an example, we extend the smashed filter – a maximum
    likelihood, model-based estimation and classification algorithm
    that exploits random measurements – to a distributed setting.
    Bounds for the robustness of this scheme against
    measurement noise are derived. We demonstrate the utility of
    our framework in a variety of settings, including large scale
    camera networks, networks of acoustic sensors, and multi-modal
    sensors.

    This is joint work with Richard Baraniuk, Marco Duarte, and
    Chinmay Hegde.
  • k-planes for classification (poster)
    Arthur Szlam (University of California, Los Angeles)
    The k-planes method is the generalization of k-means where the
    representatives of each cluster are affine linear sets. We
    will describe some possible modifications of this method for
    discriminative learning problems.
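
    A minimal, unsupervised k-planes sketch (the discriminative modifications discussed in the poster are not shown; names are ours): alternate between assigning each point to its nearest affine subspace and refitting each subspace by PCA on its assigned points.

        import numpy as np

        def k_planes(X, k, dim, n_iter=50, seed=0):
            """Cluster the rows of X (n x D) around k affine subspaces of
            dimension `dim`; returns labels and the fitted (mean, basis) pairs."""
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            labels = rng.integers(0, k, size=n)
            for _ in range(n_iter):
                planes = []
                for j in range(k):
                    Xj = X[labels == j]
                    if len(Xj) <= dim:                 # guard against tiny clusters
                        Xj = X[rng.choice(n, dim + 1, replace=False)]
                    mu = Xj.mean(axis=0)
                    _, _, Vt = np.linalg.svd(Xj - mu, full_matrices=False)
                    planes.append((mu, Vt[:dim].T))    # basis: (D, dim), orthonormal
                dists = np.empty((n, k))
                for j, (mu, B) in enumerate(planes):   # point-to-plane distances
                    R = X - mu
                    dists[:, j] = np.linalg.norm(R - (R @ B) @ B.T, axis=1)
                new_labels = dists.argmin(axis=1)
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels, planes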
  • High-dimensional multi-model estimation: its algebra, statistics, and sparse representation (poster)
    Allen Yang (University of California, Berkeley)
    Recent advances in information technologies have led to unprecedented large amounts of high-dimensional data from many emerging applications. The need for more advanced techniques to analyze such complex data calls for shifting research paradigms. In this presentation, I will overview and highlight several results in the area of estimation of mixture models in high-dimensional data spaces. Applications will be presented in problems such as motion segmentation, image segmentation, face recognition, and human action categorization. Through this presentation, I intend to emphasize the confluence of algebra and statistics that may lead to more advanced solutions in analyzing complex singular data structures such as mixture linear subspaces and nonlinear manifolds.
  • Random projections for manifold learning (poster)
    Chinmay Hegde (Rice University)
    We propose a novel method for linear dimensionality reduction of manifold-modeled data. First, we show that given only a small number of random projections of
    sample points in R^N belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.
    Second, we prove that using only this set of random projections, we can estimate the structure of the underlying manifold. In both cases, the number of random projections (M) required is linear in K and logarithmic in N, meaning that K < M << N.
    (Joint work with Michael Wakin and Richard Baraniuk.)
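
    An illustrative sketch under stated assumptions: data on a K-dimensional manifold in R^N is compressed with M = O(K log N) Gaussian random projections, and the intrinsic dimension is then estimated from the projected points with a generic nearest-neighbor (Levina-Bickel style) maximum-likelihood estimator, which is not necessarily the estimator used by the authors; all names are ours.

        import numpy as np

        def random_project(X, M, seed=0):
            """Project the rows of X (n_points x N) to R^M with a Gaussian matrix."""
            rng = np.random.default_rng(seed)
            Phi = rng.standard_normal((M, X.shape[1])) / np.sqrt(M)
            return X @ Phi.T

        def intrinsic_dimension_mle(Y, k=10):
            """Nearest-neighbor MLE of intrinsic dimension, averaged over all
            points of Y (n_points x M)."""
            D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
            np.fill_diagonal(D, np.inf)
            knn = np.sort(D, axis=1)[:, :k]                   # distances to the k nearest neighbors
            logs = np.log(knn[:, -1][:, None] / knn[:, :-1])  # log(T_k / T_j), j < k
            return 1.0 / np.mean(logs)

        # Example: 400 samples of a circle (a 1-D manifold) linearly embedded in R^1000.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)
        X = circle @ rng.standard_normal((2, 1000))
        print(intrinsic_dimension_mle(random_project(X, M=25)))   # prints a value close to 1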