Poster Session and Reception

Tuesday, January 26, 2016 - 4:05pm - 6:00pm
Lind 400
  • Real Z-eigenvalues of Nonsymmetric Tensors
    Xinzhen Zhang (Tianjin University)
    We discuss how to compute all real Z-eigenvalues of nonsymmetric tensors. A generic nonsymmetric tensor has finitely many Z-eigenvalues, while special tensors may have infinitely many. We propose Lasserre-type semidefinite relaxation methods for computing these eigenvalues. For every nonsymmetric tensor that has finitely many real Z-eigenvalues, we can compute all of them; each is obtained by solving a finite sequence of semidefinite relaxations. As an application, all limiting probability distributions of a transition probability tensor can be obtained.
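
    Alongside the poster's SDP hierarchy, here is a minimal numpy sketch of a shifted higher-order power iteration, a standard local heuristic that finds one Z-eigenpair of a third-order tensor at a time. The function name and shift parameter are illustrative, and the method carries none of the completeness guarantees of the semidefinite relaxations:

        import numpy as np

        def z_eigenpair_power(T, gamma=1.0, iters=500, tol=1e-10, seed=0):
            # Seek T(x, x) = lam * x with ||x|| = 1 for an n x n x n tensor T
            # (not necessarily symmetric); a local heuristic only.
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(T.shape[0])
            x /= np.linalg.norm(x)
            for _ in range(iters):
                Txx = np.einsum('ijk,j,k->i', T, x, x)  # contraction T(., x, x)
                y = Txx + gamma * x                     # shift stabilizes the map
                y /= np.linalg.norm(y)
                if np.linalg.norm(y - x) < tol:
                    x = y
                    break
                x = y
            lam = x @ np.einsum('ijk,j,k->i', T, x, x)  # Rayleigh-type quotient
            return lam, x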

  • The Alternating Descent Conditional Gradient Method
    Geoff Schiebinger (University of California, Berkeley)
    We present a variant of the classical conditional gradient method for sparse inverse problems with differentiable measurement models. Such models arise in many practical problems including superresolution, time-series modeling, and matrix completion. Our algorithm combines nonconvex and convex optimization techniques: we propose global conditional gradient steps alternating with nonconvex local search exploiting the differentiable measurement model. This hybridization gives the theoretical global optimality guarantees and stopping conditions of convex optimization along with the performance and modeling flexibility associated with nonconvex optimization. We present an application in protein-protein dimer detection via superresolution fluorescence microscopy.
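
    To fix ideas, a minimal sketch of the classical conditional gradient loop on a finite dictionary (min ||Ax - b||^2 over an l1 ball); the poster's method alternates such global steps with nonconvex local descent over the continuous source parameters, which this sketch omits:

        import numpy as np

        def frank_wolfe_l1(A, b, radius=1.0, iters=200):
            # Classical conditional gradient for min ||Ax - b||^2, ||x||_1 <= radius.
            x = np.zeros(A.shape[1])
            for k in range(iters):
                grad = A.T @ (A @ x - b)
                i = np.argmax(np.abs(grad))         # linear minimization oracle:
                s = np.zeros(A.shape[1])            # best vertex of the l1 ball
                s[i] = -radius * np.sign(grad[i])
                gamma = 2.0 / (k + 2.0)             # standard step-size schedule
                x = (1 - gamma) * x + gamma * s     # move toward the chosen atom
            return x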
  • On Optimal Low-rank Approximation of Non-negative Matrices
    Christian Grussler (Lund University)
    For low-rank Frobenius-norm approximation of matrices with non-negative entries, it is shown that the Lagrange dual can be computed via semidefinite programming and the Douglas-Rachford algorithm. Under certain assumptions the duality gap is zero. Moreover, the idea carries over to other convex constraints. Even when the duality gap is non-zero, new insights are provided.
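
    For contrast with the dual approach above, a simple primal heuristic that alternates projections between the rank-r matrices and the nonnegative orthant (a common baseline with no optimality guarantee; all names are illustrative):

        import numpy as np

        def lowrank_nonneg_altproj(M, r, iters=100):
            # Alternate the best rank-r approximation (truncated SVD) with
            # projection onto the nonnegative orthant.
            X = M.copy()
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                X = (U[:, :r] * s[:r]) @ Vt[:r]     # project onto rank <= r
                X = np.maximum(X, 0.0)              # project onto X >= 0
            return X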
  • A Simple Spectral Algorithm for Recovering Planted Partitions
    Shmuel Friedland (University of Illinois, Chicago)
    In this poster we consider the planted partition model, in which n = ks vertices of a random graph are partitioned into k clusters, each of size s. Edges between vertices in the same cluster and in different clusters are included with constant probabilities p and q, respectively, where 0 ≤ q < p ≤ 1.
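
    A minimal sketch of the model and of a plain spectral recovery step (cluster on the top-k eigenvectors of the adjacency matrix); the precise algorithm and guarantees are those of the poster, and the names below are illustrative:

        import numpy as np
        from sklearn.cluster import KMeans

        def planted_partition(k, s, p, q, seed=0):
            # Sample a graph on n = k*s vertices: within-cluster edges with
            # probability p, between-cluster edges with probability q.
            rng = np.random.default_rng(seed)
            labels = np.repeat(np.arange(k), s)
            probs = np.where(labels[:, None] == labels[None, :], p, q)
            A = (rng.random(probs.shape) < probs).astype(float)
            A = np.triu(A, 1)
            return A + A.T, labels                  # symmetric, no self-loops

        def spectral_recover(A, k):
            # Embed each vertex via the top-k eigenvectors, then k-means.
            _, vecs = np.linalg.eigh(A)
            return KMeans(n_clusters=k, n_init=10).fit_predict(vecs[:, -k:])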
  • An Iterative Approach to Rank Minimization Problems
    Chuangchuang Sun (Iowa State University)
    This work investigates an iterative approach to solving rank minimization problems (RMPs) constrained to a convex set. The matrix rank function is discontinuous and nonconvex, and the general RMP is well known to be NP-hard. A continuous function is first introduced to approximate the matrix rank function to prescribed accuracy via appropriate parameter choices. The RMPs are then converted to rank-constrained optimization problems, and an Iterative Rank Minimization (IRM) method is proposed to gradually approach the constrained rank. A convergence proof of the IRM method, based on duality theory and the Karush-Kuhn-Tucker conditions, is provided. Two representative applications of RMPs, matrix completion and output-feedback stabilization, are presented to verify the feasibility and improved performance of the proposed IRM method.
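
    As a point of reference for the matrix completion application, the standard convex surrogate (nuclear norm in place of rank), here in CVXPY; the poster's IRM method instead tightens a rank constraint iteratively:

        import cvxpy as cp
        import numpy as np

        def complete_nuclear_norm(M_obs, mask):
            # mask is a 0/1 array marking observed entries of M_obs.
            X = cp.Variable(M_obs.shape)
            constraints = [cp.multiply(mask, X) == mask * M_obs]
            cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
            return X.value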
  • A Fast Algorithm for Structured Low-Rank Matrix Recovery with Applications to MRI Reconstruction
    Greg Ongie (The University of Iowa)
    Recent matrix lifting formulations in MRI reconstruction require efficient algorithms for large-scale structured low-rank matrix recovery. We present a novel, fast algorithm for this class of problems that adapts an iteratively reweighted least squares approach to incorporate multi-fold Toeplitz/Hankel structures. Unlike previous approaches, the iterates can be solved efficiently in the un-lifted problem domain. We demonstrate the algorithm on undersampled MRI reconstruction, where it shows significant improvement over standard compressed sensing techniques.
  • Psd Rank and Nested Spectrahedra
    Kaie Kubjas (Aalto University)
    The set of matrices of given positive semidefinite rank is semialgebraic. We study the geometry of this semialgebraic set and in small cases we describe its boundary. For general psd rank we give a conjecture for its boundary. Our proof techniques are based on studying nested spectrahedra and spectrahedral shadows. Joint work with Elina Robeva and Richard Z. Robinson.
  • Efficient Dictionary Learning Methods Using Sums of Outer Products (SOUP-DIL)
    Saiprasad Ravishankar (University of Michigan)
    The sparsity of natural signals and images in a transform domain or dictionary has been extensively exploited in several applications in recent years. In particular, data-driven adaptation of synthesis dictionaries has shown promise in many applications compared to fixed or analytical dictionaries. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for learning are often computationally expensive. In this work, we investigate efficient methods for dictionary learning by first approximating the training data set with a sum of sparse rank-one matrices and then using a block coordinate descent approach to estimate the unknowns. We consider various properties for the dictionary and sparse codes in our framework. The proposed block coordinate descent algorithms involve efficient closed-form solutions. We provide a convergence analysis for the algorithms. Our experiments show the promising performance and speed-ups provided by our methods over the classical K-SVD scheme in applications such as sparse signal representation and image denoising.

    Joint work with Raj Rao Nadakuditi and Jeffrey A. Fessler.
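
    A minimal sketch of the sum-of-outer-products update described above: each atom/sparse-code pair is refreshed in closed form by block coordinate descent, with an l0-style hard threshold on the codes (the threshold value and initialization are illustrative):

        import numpy as np

        def soup_dil(Y, J, thresh=0.1, iters=20, seed=0):
            # Approximate Y (m x n) by D @ C.T, a sum of J sparse rank-one terms.
            rng = np.random.default_rng(seed)
            D = rng.standard_normal((Y.shape[0], J))
            D /= np.linalg.norm(D, axis=0)
            C = np.zeros((Y.shape[1], J))
            for _ in range(iters):
                for j in range(J):
                    R = Y - D @ C.T + np.outer(D[:, j], C[:, j])  # drop atom j
                    cj = R.T @ D[:, j]
                    cj[np.abs(cj) < thresh] = 0.0      # hard-threshold the code
                    dj = R @ cj
                    if np.linalg.norm(dj) > 0:
                        D[:, j] = dj / np.linalg.norm(dj)  # unit-norm atom
                    C[:, j] = cj
            return D, C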
  • Universality Laws for Randomized Dimension Reduction
    Samet Oymak (Google Inc.)
    Dimension reduction is the process of embedding high-dimensional data into a lower dimensional space to facilitate its analysis. A fundamental technique for dimension reduction is to apply a random linear map to the data. It is desirable that the random map preserves the geometric features of the set (e.g. norm of the set elements). The question is how large the embedding dimension must be to ensure that dimension reduction succeeds with high probability. This work illustrates a phase transition in the behavior of the dimensionality reduction procedure as the embedding dimension increases. For several important learning tasks, the location of the associated phase transition is universal for a large class of dimension reduction maps. Furthermore, the stability and robustness properties are also universal. These results have many applications in signal processing, statistics, and randomized linear algebra.
    Joint work with Joel Tropp.
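
    A quick numerical illustration of the phenomenon (not the universality proof): embed points with a Gaussian map and watch the worst-case norm distortion fall as the embedding dimension grows.

        import numpy as np

        def embedding_distortion(X, d_embed, seed=0):
            # Max relative distortion of sample norms under a Gaussian map.
            rng = np.random.default_rng(seed)
            G = rng.standard_normal((d_embed, X.shape[1])) / np.sqrt(d_embed)
            ratios = np.linalg.norm(X @ G.T, axis=1) / np.linalg.norm(X, axis=1)
            return np.max(np.abs(ratios - 1.0))

        X = np.random.default_rng(1).standard_normal((500, 2000))
        for d in (50, 200, 800):
            print(d, embedding_distortion(X, d))    # distortion shrinks with d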
  • Lieb's Concavity Theorem, Matrix Geometric Means, and Semidefinite Optimization
    Hamza Fawzi (Massachusetts Institute of Technology)
    We show how certain concave trace functions established by Lieb admit semidefinite programming formulations. These functions play a fundamental role in quantum information theory. Our constructions follow more generally from a semidefinite programming formulation of weighted matrix geometric means. Joint work with James Saunderson.
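
    A small CVXPY sketch of the basic building block: by Ando's characterization, the geometric mean A # B is the Loewner-maximal symmetric G with [[A, G], [G, B]] positive semidefinite, so maximizing tr(G) over that LMI recovers it (weighted means require composing such blocks):

        import cvxpy as cp

        def matrix_geometric_mean(A, B):
            # A, B: symmetric positive definite numpy arrays of equal size.
            G = cp.Variable(A.shape, symmetric=True)
            lmi = cp.bmat([[A, G], [G, B]])
            cp.Problem(cp.Maximize(cp.trace(G)), [lmi >> 0]).solve()
            return G.value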
  • Sampling Requirements of Stable Autoregressive Estimation
    Abbas Kazemipour (University of Maryland)
    Autoregressive (AR) models are fundamental in analyzing real-world data such as binary neural spiking activity, traffic flows, and communication channels. We use the theory of compressive sensing to study the sampling properties of general AR model estimation. Under a compressibility assumption that arises in many of the aforementioned applications, we show non-asymptotic sampling bounds for ℓ1-regularized maximum likelihood estimation and for a greedy algorithm denoted AROMP. Our results reveal that these algorithms outperform traditional estimation methods, such as Yule-Walker estimation for linear AR processes and ML estimators in general. Our results also indicate a universal sampling requirement for general AR processes and characterize the sampling-complexity trade-off of compressive sensing matrices when the i.i.d. assumption on the covariates is removed. We also present applications to two datasets, neural spiking activity of retinal ganglion cells and freeway speeds on I-495 in Maryland, in which significant improvement is achieved compared to traditional methods.
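
    A minimal Gaussian stand-in for the regularized estimators discussed above: fit a sparse linear AR(p) model by ℓ1-regularized least squares (for spiking data the poster's ML estimators would use a Poisson or Bernoulli likelihood instead; names and the penalty level are illustrative):

        import numpy as np
        from sklearn.linear_model import Lasso

        def fit_sparse_ar(x, p, alpha=0.1):
            # Regress x_t on (x_{t-1}, ..., x_{t-p}) with an l1 penalty.
            n = len(x)
            X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
            y = x[p:]
            return Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_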
  • Analyzing Optimization Algorithms using Integral Quadratic Constraints
    Laurent Lessard (University of Wisconsin, Madison)
    Iterative optimization algorithms are a main engine behind large-scale data processing applications such as computer vision and machine learning. However, the design and use of such algorithms is currently more art than science. We present a new analysis method for optimization algorithms that is based on robust control theory. This framework allows one to easily compute robust performance bounds for a variety of algorithms by solving small convex programs. Rather than testing different algorithms to see which ones perform best, we can now prescribe desired properties (e.g., robustness to 5% noise) and then design the best algorithm that meets the specification.
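
    A sketch of the kind of small convex program involved, assuming the standard sector characterization of m-strongly-convex, L-smooth functions: the LMI below certifies a contraction rate rho for gradient descent, and bisecting on rho with m = 1, L = 10, alpha = 1/L reproduces the familiar rate 1 - m/L:

        import cvxpy as cp
        import numpy as np

        def rate_feasible(rho, m, L, alpha):
            # Gradient descent x+ = x - alpha*u, u = grad f(x), as a feedback
            # system A = 1, B = -alpha; lam multiplies the sector constraint.
            A, B = 1.0, -alpha
            p = cp.Variable(nonneg=True)            # Lyapunov certificate
            lam = cp.Variable(nonneg=True)          # IQC multiplier
            M = np.array([[-2 * m * L, m + L], [m + L, -2.0]])
            lmi = cp.bmat([[A * A * p - rho**2 * p, A * B * p],
                           [A * B * p, B * B * p]]) + lam * M
            prob = cp.Problem(cp.Minimize(0), [lmi << 0, p >= 1])
            prob.solve()
            return prob.status == 'optimal'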
  • Bilevel Polynomial Programs and Semidefinite Relaxation Methods
    Li Wang (University of Illinois, Chicago)
  • Superresolution Without Separation
    Elina Robeva (University of California, Berkeley)
    We provide a theoretical analysis of diffraction-limited superresolution, demonstrating that arbitrarily close point sources can be resolved in ideal situations. Precisely, we assume that the incoming signal is a linear combination of M shifted copies of a known waveform with unknown shifts and amplitudes, and one only observes a finite collection of evaluations of this signal. We characterize properties of the base waveform such that the exact translations and amplitudes can be recovered from 2M + 1 observations. This recovery is achieved by solving a weighted version of basis pursuit over a continuous dictionary. Our methods combine classical polynomial interpolation techniques with contemporary tools from compressed sensing.
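
    A discretized illustration of the recovery step (the analysis above concerns the continuous dictionary): columns of Phi sample the known waveform at candidate shifts, and a weighted basis pursuit with nonnegative amplitudes is solved in CVXPY; the weights w are left generic here.

        import cvxpy as cp

        def weighted_basis_pursuit(y, Phi, w):
            # min w @ c  subject to  Phi @ c = y,  c >= 0.
            c = cp.Variable(Phi.shape[1], nonneg=True)
            cp.Problem(cp.Minimize(w @ c), [Phi @ c == y]).solve()
            return c.value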
  • Monotonically Positive Matrices
    Jinyan Fan (Shanghai Jiaotong University)
    We introduce monotonically positive (MP) matrices, which have many applications in order statistics. We present a semidefinite algorithm for checking whether a matrix is MP. If it is not MP, a certificate for this can be obtained; if it is MP, an MP-decomposition of the matrix can be obtained.
  • Decomposable Norm Minimization with Proximal-Gradient Homotopy Algorithm
    Reza Eghbali (University of Washington)
    We study the convergence rate of the proximal-gradient homotopy algorithm for norm-regularized linear least squares problems. The homotopy algorithm reduces the regularization parameter in a series of steps and uses a proximal-gradient method to solve the problem at each step. The proximal-gradient method converges linearly provided that the objective function is strongly convex and the gradient of its smooth component is Lipschitz continuous. In general, the objective function in this type of problem is not strongly convex, especially when the problem is high-dimensional. We show that if the linear sampling matrix satisfies certain assumptions and the regularizing norm is decomposable, the proximal-gradient homotopy algorithm converges at a linear rate even though the objective function is not strongly convex. Our result generalizes results on the linear convergence of the homotopy algorithm for ℓ1-regularized least squares problems. Numerical experiments are presented that support the theoretical convergence rate analysis.
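
    A minimal sketch of the homotopy loop for the ℓ1-regularized case, with ISTA as the inner proximal-gradient solver (the decrease factor eta is illustrative):

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def homotopy_prox_grad(A, b, lam_target, eta=0.7, inner=100):
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
            lam = np.max(np.abs(A.T @ b))           # above this the solution is 0
            x = np.zeros(A.shape[1])
            while lam > lam_target:
                lam = max(eta * lam, lam_target)    # shrink the regularization
                for _ in range(inner):              # inner proximal-gradient solve
                    x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
            return x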
  • Regularization for Design
    Nikolai Matni (California Institute of Technology)
    When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given. In fact, we argue that for large-scale systems, the task of designing this architecture is now as important as the design of the control laws themselves. By interpreting controller synthesis (in a model matching setup) as the solution of a particular linear inverse problem, we view the challenge of obtaining a controller with a desired architecture as one of finding a structured solution to an inverse problem. Building on this conceptual connection, we formulate and analyze a framework called Regularization for Design, in which we augment the variational formulations of controller synthesis problems with convex penalty functions that induce a desired controller architecture. The resulting regularized formulations are convex optimization problems that can be solved efficiently, and which provide a unified computationally tractable approach for the simultaneous co-design of a structured optimal controller and the actuation, sensing and communication architecture required to implement it. Further, these problems are natural control-theoretic analogs of prominent approaches such as the Lasso, the Group Lasso, the Elastic Net, and others that are employed in structured inference. In analogy to that literature, we show that our approach identifies optimally structured controllers under a suitable condition on a signal-to-noise type ratio. We also briefly touch upon how this work fits within a broader theory of architecture that we have been developing, which incorporates layering, dynamics, optimization and control into a unified framework.

    Joint work with Venkat Chandrasekaran and John C. Doyle.
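
    A static toy instance of the idea (the poster treats dynamic controller synthesis): fit a gain K so that G @ K matches a target map P, with a group penalty on the rows of K so that unused channels are switched off entirely; all names and the penalty weight are illustrative.

        import cvxpy as cp

        def structured_model_matching(P, G, gamma):
            # Row-sparse K: a zero row means the corresponding channel is unused.
            K = cp.Variable((G.shape[1], P.shape[1]))
            penalty = cp.sum(cp.norm(K, 2, axis=1))  # group lasso over rows
            cp.Problem(cp.Minimize(cp.sum_squares(G @ K - P)
                                   + gamma * penalty)).solve()
            return K.value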
  • Dual Principal Component Pursuit
    Manolis Tsakiris (Johns Hopkins University)
    We consider the problem of outlier rejection in single subspace learning. Classical approaches work directly with a low-dimensional representation of the subspace. Our approach works with a dual representation of the subspace and hence aims to find its orthogonal complement. We pose this problem as an L1-minimization problem on the sphere and show that, under certain conditions on the distribution of the data, any global minimizer of this non-convex problem gives a vector orthogonal to the subspace. Moreover, we show that such a vector can still be found by relaxing the non-convex problem with a sequence of linear programs. Experiments on synthetic and real data show that the proposed approach, which we call Dual Principal Component Pursuit (DPCP), outperforms state-of-the-art methods, especially in the case of high-dimensional subspaces.

    Joint work with Rene Vidal.
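
    A minimal sketch of the linear-programming relaxation sequence mentioned above: each subproblem minimizes ||X.T @ b||_1 over an affine constraint tied to the previous iterate, then renormalizes (X holds the data points as columns; the iteration count is illustrative):

        import cvxpy as cp
        import numpy as np

        def dpcp_lp(X, iters=10, seed=0):
            rng = np.random.default_rng(seed)
            b = rng.standard_normal(X.shape[0])
            b /= np.linalg.norm(b)
            for _ in range(iters):
                v = cp.Variable(X.shape[0])
                cp.Problem(cp.Minimize(cp.norm(X.T @ v, 1)),
                           [b @ v == 1]).solve()   # a linear program
                b = v.value / np.linalg.norm(v.value)
            return b                               # approx. normal to the subspace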
  • Convex Optimization for Identification and Reduction of Stable Dynamical Systems
    Ian Manchester (University of Sydney)
    This poster presents recent methods for nonlinear (and linear) system identification and model reduction using Lagrangian relaxation and sum-of-squares programming. The methods allow the imposition of stability constraints and the identification of systems with limit cycles. In the linear case, numerical results show a significant improvement over subspace identification. A specialized algorithm has been developed that significantly outperforms general-purpose solvers such as SeDuMi and MOSEK.
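
    For flavor, a much simpler convex surrogate than the Lagrangian relaxation above: fit a linear model Y ~ A @ X while bounding the spectral norm of A, a sufficient (conservative) condition for stability.

        import cvxpy as cp

        def identify_stable_linear(X, Y):
            # X holds states at times 0..T-1, Y the states at times 1..T.
            A = cp.Variable((X.shape[0], X.shape[0]))
            cp.Problem(cp.Minimize(cp.sum_squares(A @ X - Y)),
                       [cp.sigma_max(A) <= 1]).solve()
            return A.value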
  • Distributed Online Algorithms for Fair Power Allocation
    Kathryn Heal (Harvard University)
    We study the problem of optimally allocating power to a group of users (households) in a power distribution system managed by a central coordinator. The problem we consider specifically focuses on the Heating, Ventilation and Air Conditioning (HVAC) devices of these households. Of special interest to us is fairness in the power allocation policy, since HVAC systems are an important component in households and the needs (desired temperature levels) of the households vary. We present a decentralized algorithm, with fairness guarantees, to solve the power allocation problem in cases where the Load Servicing Entity (LSE) has limited supply capacity. The base model for this strategy relates power allocation to desired occupant indoor comfort temperature, also taking into account the effect of outdoor temperature and constraints on available power. Our analysis is particularly useful at time periods when LSEs experience peak power demand.

    This work is a joint effort of Kathryn Heal, Sindri Magnusson, Chinwendu Enyioha, Na Li, Rami Mangoubi, and Vahid Tarokh.
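
    A minimal price-coordination sketch in the spirit of such decentralized schemes (not the authors' algorithm or model): each household solves a local comfort-versus-cost problem in closed form, and the coordinator moves the price by the supply violation; the log-utility weights w are hypothetical.

        import numpy as np

        def fair_allocation(w, capacity, price=1.0, step=0.1, iters=200):
            for _ in range(iters):
                p = w / price                       # local best responses
                price = max(1e-6, price + step * (p.sum() - capacity))
            return p, price                         # tends to w * capacity / sum(w)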
  • Spatially Distributed Sampling and Reconstruction
    Qiyu Sun (University of Central Florida)
    A spatially distributed system contains a large number of agents with limited sensing, data processing, and communication capabilities. Recent technological advances have opened up possibilities to deploy spatially distributed systems for signal sampling and reconstruction. We introduce a graph structure for such distributed sampling and reconstruction systems (DSRS), establish a stability criterion that can be verified locally on overlapping subsystems, and propose an exponentially convergent distributed algorithm for signal reconstruction.
  • Block Iterative Reweighted Algorithms for Super-Resolution of Spectrally Sparse Signals
    Myung Cho (The University of Iowa)
    We propose novel algorithms that enhance the performance of recovering unknown continuous-valued frequencies from undersampled signals. Our iterative reweighted frequency recovery algorithms employ the support knowledge gained from earlier steps as block prior information to enhance frequency recovery. Our methods improve the performance of atomic norm minimization, a useful heuristic for recovering continuous-valued frequency content. Numerical results demonstrate that our block iterative reweighted methods provide both better recovery performance and faster speed than other known methods.
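
    A grid-based caricature of the reweighting idea (the poster's methods operate off-grid through atomic norm machinery): columns of F are candidate sinusoids on a fine frequency grid, and weights shrink on the support detected in earlier rounds.

        import cvxpy as cp
        import numpy as np

        def reweighted_l1_freqs(y, F, rounds=5, eps=1e-3):
            w = np.ones(F.shape[1])
            for _ in range(rounds):
                c = cp.Variable(F.shape[1])
                cp.Problem(cp.Minimize(w @ cp.abs(c)), [F @ c == y]).solve()
                w = 1.0 / (np.abs(c.value) + eps)   # favor the detected support
            return c.value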
  • Solving Conic Optimization Problems via Self-dual Embedding and Facial Reduction: A Unified Approach
    Frank Permenter (Massachusetts Institute of Technology)
    The self-dual embedding method (implemented in solvers such as SeDuMi) is a popular technique for solving conic optimization problems. This method fails when both complementary solutions and improving rays do not exist, which in turn occurs only if Slater's condition does not hold. We show this method can succeed in all cases if modified in a simple way. Specifically, we identify and exploit a connection with facial reduction, a technique for regularizing problems that fail Slater's condition.

    Joint work with Henrik Friberg and Erling Andersen.
  • Completion of Partially Known Turbulent Flow Statistics
    Armin Zare (University of Minnesota, Twin Cities)
    Second-order statistics of turbulent flows can be obtained either experimentally or via high fidelity numerical simulations. The statistics are relevant in understanding fundamental flow physics and for the development of low-complexity models. For example, such models can be used for control design in order to suppress or promote turbulence. Due to experimental or numerical limitations it is often the case that only partial flow statistics are known. In other words, only certain correlations between a limited number of flow field components are available. Thus, it is of interest to complete the statistical signature of the flow field in a way that is consistent with the known dynamics. Our approach to this inverse problem relies on a model governed by stochastically forced linearized Navier-Stokes equations. Here, the statistics of the forcing are unknown and are sought so as to explain the given correlations. While the system dynamics impose a linear constraint on the admissible correlations, such an inverse problem admits many solutions for the forcing correlations. We use nuclear norm minimization to obtain correlation structures of low complexity. The complexity is quantified by the rank of the correlation structure of excitation sources. This provides a bound on the number of input channels and explains the directionality of input disturbances into the linear dynamics.

    Joint work with Mihailo Jovanovic and Tryphon Georgiou.
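
    A schematic CVXPY version of the completion problem described above, under simplifying assumptions (real matrices, a plain nuclear-norm penalty on the forcing correlations Z; mask marks the measured entries of the state covariance X):

        import cvxpy as cp

        def complete_covariance(A, mask, G):
            n = A.shape[0]
            X = cp.Variable((n, n), symmetric=True)  # state covariance
            Z = cp.Variable((n, n))                  # forcing correlations
            cons = [A @ X + X @ A.T + Z == 0,        # linearized-dynamics constraint
                    X >> 0,
                    cp.multiply(mask, X) == mask * G]
            cp.Problem(cp.Minimize(cp.normNuc(Z)), cons).solve()
            return X.value, Z.value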
  • SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm
    Damek Davis (University of California, Los Angeles)
    We introduce the Stochastic Monotone Aggregated Root-Finding (SMART) algorithm, a new randomized operator-splitting scheme for finding roots of finite sums of operators. These algorithms are similar to the growing class of incremental aggregated gradient algorithms, which minimize finite sums of functions; the difference is that we replace gradients of functions with black boxes called operators, which represent subproblems to be solved during the algorithm. By replacing gradients with operators, we increase our modeling power, and we simplify the application and analysis of the resulting algorithms. The operator point of view also makes it easy to extend our algorithms to allow arbitrary sampling and updating of blocks of coordinates throughout the algorithm. Implementing and running an algorithm like this on a computing cluster can be slow if we force all computing nodes to be synchronized at all times. To take better advantage of parallelism, we allow computing nodes to delay updates and break synchronization.

    This paper has several technical and practical contributions. We prove the weak, almost sure convergence of a new class of randomized operator-splitting schemes in separable Hilbert spaces; we prove that this class of algorithms converges linearly in expectation when a weak regularity property holds; we highlight connections to other algorithms; and we introduce a few new algorithms for large-scale optimization.
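
    A serial caricature of the aggregated scheme (no blocks, no asynchrony): keep the latest evaluation of each operator, and combine one fresh sample with the stale aggregate, exactly as incremental aggregated gradient methods do with gradients.

        import numpy as np

        def smart_like(ops, x0, eta=0.5, iters=1000, seed=0):
            # ops: list of callables S_i; seek x with sum_i S_i(x) = 0.
            rng = np.random.default_rng(seed)
            x = x0.copy()
            table = [op(x) for op in ops]           # stored evaluations
            agg = np.sum(table, axis=0)
            for _ in range(iters):
                i = rng.integers(len(ops))
                fresh = ops[i](x)
                x = x - eta * (fresh - table[i] + agg / len(ops))
                agg += fresh - table[i]             # refresh the aggregate
                table[i] = fresh
            return x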
  • Questions in Blind Sparse Non-Negative Matrix Factorization from Visiting Research at MIT Lincoln Laboratory
    Tegan Emerson (Colorado State University)
    During a stint of visiting research at MIT Lincoln Laboratory I investigated a blind matrix factorization problem arising in the analysis of data acquired by the Frequency Agile Lidar (FAL) system. There has been some research in this area, and although the current algorithms perform well, they cannot yet be run online. Questions related to one obstacle, the determination of the internal factoring dimension, are presented in the hope of generating discussion about structured sparsity in matrix factorization.
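
    A baseline the discussion could start from, in recent scikit-learn (the factoring dimension r is exactly the quantity whose blind selection is at issue; penalty levels are illustrative):

        from sklearn.decomposition import NMF

        def blind_sparse_nmf(Y, r, l1=0.1):
            # Y must be entrywise nonnegative; factor Y ~ W @ H with sparsity.
            model = NMF(n_components=r, init='nndsvd', l1_ratio=1.0,
                        alpha_W=l1, alpha_H=l1, max_iter=500)
            W = model.fit_transform(Y)
            return W, model.components_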
  • Multidirectional Mean Value Inequalities
    Robert Kipka (Queen's University)
    A multidirectional mean value inequality is a theorem of nonsmooth or variational analysis which estimates the minimum difference between a function's values on a closed convex subset of a Banach space and its value at a point of the space. Each multidirectional mean value inequality has a corresponding Subbotin-type theorem, which allows one to replace nonlinear derivative-type objects such as the Dini lower derivate with linear expressions involving subgradients.

    We present recent developments for multidirectional mean value inequalities and their corresponding Subbotin-type theorems. As a first application we provide a novel derivation of necessary optimality conditions in the nonsmooth Calculus of Variations.