Poster Session and Reception

Tuesday, October 20, 2015 - 4:05pm - 6:00pm
Lind 400
  • Coordinated Control of Large-Scale Nonlinear Networks via Vector Lyapunov Functions
    Soumya Kundu (Los Alamos National Laboratory)
    Real-time (centralized) decision making in large-scale engineering systems is difficult for many reasons, such as lack of system-wide knowledge (especially under disturbances), communication delays, and computational burden. However, in a multi-agent dynamical system, the agents can make (distributed) decisions to achieve global objectives via local measurements and neighbor communications. Here we present a distributed and optimal control algorithm for stabilizing large-scale systems described by ordinary differential equations, employing tools from vector Lyapunov functions and polynomial optimization methods.
  • An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
    Necdet Serhat Aybat (The Pennsylvania State University)
    We propose an asynchronous distributed first-order augmented Lagrangian (DFAL) algorithm to minimize a sum of composite convex functions, where each term is private to one node and only nodes connected by an edge can communicate. We show that any limit point of the iterate sequence is optimal; an eps-optimal and eps-feasible solution can be computed with probability at least 1-p within O(1/eps log(1/p)) communications. We demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems and compare it with ADMM-based distributed methods.
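    As a point of reference (not the DFAL method itself), the basic centralized proximal gradient iteration for a composite objective can be sketched as follows; the quadratic loss, lasso regularizer, problem sizes, and step size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz const of grad
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]                   # sparse ground truth
b = A @ x_true
x_hat = prox_gradient_lasso(A, b, lam=0.1)      # recovers x_true up to shrinkage
```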
  • Information-Frugal LQG Control
    Takashi Tanaka (Massachusetts Institute of Technology)
    Real-time decision-making procedures in general require continuous acquisition of information from the environment. We revisit one of the most fundamental questions in real-time decision-making theory: what is the minimal information acquisition rate needed to achieve sequential decision-making with desired accuracy? We tackle this question using basic tools from control theory, information theory, and convex optimization theory. Specifically, we consider a Linear-Quadratic-Gaussian (LQG) control problem where Massey's directed information from the state sequence to the control sequence is taken into account. We show that the most information-frugal decision-making policy achieving desired LQG control performance admits an attractive three-stage separation structure comprising (1) a linear sensor with additive white Gaussian noise, (2) a Kalman filter, and (3) a certainty equivalence controller. We also show that an optimal policy can be synthesized using a numerically efficient algorithm based on semidefinite programming (SDP).
  • Consensus with Linear Objective Maps
    Xudong Chen (University of Illinois at Urbana-Champaign)
    We consider in this work, which will be presented at the upcoming IEEE
    Conference on Decision and Control (CDC’15, Osaka, Japan; December
    2015), a weighted consensus problem over directed graphs, where the
    agents interact in a decentralized way, with the resulting dynamics
    formulated in continuous time. We introduce a linear function, called
    the objective map, which defines the desired final state as a convex
    combination of the initial states of the agents. We provide a complete
    answer to the question of whether there is a decentralized consensus
    dynamics over a given digraph that converges to the final state
    specified by an objective map. In particular, we characterize not only the
    set of objective maps that are feasible for a given digraph, but also the
    consensus dynamics that implement the objective map.
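    As a minimal sketch of the setting (the particular digraph and edge weights are assumed, not taken from the work), a continuous-time consensus dynamics x' = -Lx over a strongly connected digraph drives every agent to the convex combination w^T x(0) of the initial states, where w is the normalized left null vector of the Laplacian L:

```python
import numpy as np

# Out-neighbor weighted adjacency of a strongly connected 3-node digraph.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian (zero row sums)

x0 = np.array([1.0, 4.0, -2.0])      # initial states of the agents
x = x0.copy()
dt = 0.01
for _ in range(20000):               # Euler-integrate x' = -L x
    x = x - dt * (L @ x)

# The objective map this dynamics implements: the normalized left null
# vector w of L gives the convex weights, and x converges to (w . x0) * 1.
eigvals, eigvecs = np.linalg.eig(L.T)
w = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
w = w / w.sum()
```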
  • On a Modified DeGroot-Friedkin Model of Opinion Dynamics
    Ji Liu (University of Illinois at Urbana-Champaign)
    This work, which was presented in part at the 2015 American Control
    Conference (July; Chicago, IL), studies the opinion dynamics that
    emerge when individuals consecutively discuss a sequence of issues.
    Specifically, we study how individuals’ self-confidence levels
    evolve via a reflected appraisal mechanism. Motivated
    by the DeGroot-Friedkin model, we propose a Modified DeGroot-Friedkin
    (MDGF) model, which allows individuals to update their self-confidence
    levels by interacting with only their neighbors. In particular, the
    modified model allows the update of self-confidence levels to take place
    in finite time without waiting for the opinion process to reach
    consensus on any particular issue. We study properties of this MDGF
    model, and compare the associated equilibria and their stability with
    those of the original DeGroot-Friedkin model. Specifically, for the case
    when the interaction matrix is doubly stochastic, we show that for the
    MDGF model, the vector of individuals’ self-confidence levels converges
    to a unique nontrivial equilibrium, which for each individual is equal to
    1/n, where n is the number of individuals. This implies that eventually
    individuals reach a democratic state.
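    For intuition, in the doubly stochastic case the DeGroot-Friedkin self-confidence map reduces (assuming the standard form of the model) to F(x)_i proportional to 1/(1 - x_i); iterating this map from an interior starting point recovers the democratic equilibrium:

```python
import numpy as np

def df_map(x):
    """DeGroot-Friedkin self-confidence update in the doubly stochastic
    case, where the map reduces to F(x)_i proportional to 1/(1 - x_i)."""
    y = 1.0 / (1.0 - x)
    return y / y.sum()

n = 4
x = np.array([0.7, 0.1, 0.1, 0.1])   # initial self-confidence on the simplex
for _ in range(200):                  # one update per discussed issue
    x = df_map(x)
# x has converged to the democratic equilibrium (1/n, ..., 1/n)
```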
  • Enumeration of Graphs with Fixed Degree Sequences and Applications
    David Burstein (University of Pittsburgh)
    For many applications, we want to construct synthetic graphs with a given degree sequence. A given degree sequence typically has many graph realizations, however, and we would like to sample uniformly from this set. To this end, we seek asymptotics for counting the number of graphs that realize a sparse degree sequence. The previous best asymptotics (Greenhill, McKay, and Wang, 2006) only allow the maximum degree to be o(E^{1/3}), where E is the number of edges. Since for many real-world networks, including neuronal networks, the maximum degree can grow much faster than o(E^{1/3}), we present novel results that allow for the counting of sparse graphs with maximum degree O(E^{1/2-w}), where w is an arbitrarily small positive number. We then briefly consider some applications of these results.
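    For tiny instances, the number of realizations can be obtained by brute force over edge subsets, a useful sanity check against asymptotic counts; the 2-regular degree sequence below is an illustrative choice, not one from the work.

```python
from itertools import combinations

def count_realizations(degrees):
    """Count simple labeled graphs realizing a degree sequence by
    brute force over edge subsets (feasible only for small n)."""
    n = len(degrees)
    possible_edges = list(combinations(range(n), 2))
    m = sum(degrees) // 2                     # every realization has m edges
    count = 0
    for edge_set in combinations(possible_edges, m):
        deg = [0] * n
        for u, v in edge_set:
            deg[u] += 1
            deg[v] += 1
        if deg == list(degrees):
            count += 1
    return count

# The 2-regular sequence on 4 vertices is realized only by 4-cycles,
# and there are exactly 3 labeled 4-cycles.
print(count_realizations([2, 2, 2, 2]))       # → 3
```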
  • Blind Identification of Graph Filters with Sparse Inputs
    Gonzalo Mateos Buckstein (University of Rochester)
    Network processes are often conceptualized as signals defined on the vertices of a graph. To untangle the latent structure of such signals, one can view them as outputs of linear graph filters modeling underlying network dynamics. This paper deals with the problem of blind graph filter identification, which finds applications in social and brain networks, among others. Given a graph signal modeled as the output of a graph filter, the goal is to recover the vector of filter coefficients and the input signal, which is assumed to be sparse. While the filtered graph signal is a bilinear function of the unknowns, it is also a linear combination of the entries of the rank-one matrix obtained from their outer product. As with blind deconvolution of time (or spatial) domain signals, it is shown that the blind graph filter identification problem can be tackled via rank minimization subject to linear constraints. Numerical tests with synthetic and real-world networks corroborate the effectiveness of the blind identification approach.
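    The central observation, that the filtered signal is linear in the entries of the rank-one lifted matrix Z = x h^T, can be verified numerically; the random shift operator, filter order, and sparse input below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 6, 3                          # graph size and filter order

A = rng.integers(0, 2, size=(n, n))  # random symmetric graph-shift operator
S = np.triu(A, 1)
S = S + S.T

h = rng.standard_normal(L)           # unknown filter coefficients
x = np.zeros(n)
x[[1, 4]] = [1.0, -2.0]              # unknown sparse input

powers = [np.linalg.matrix_power(S, k) for k in range(L)]

# Direct evaluation: y = (sum_k h_k S^k) x, bilinear in (h, x).
y = sum(h[k] * (powers[k] @ x) for k in range(L))

# Lifted evaluation: y_i = <M_i, Z>, linear in the rank-one Z = x h^T,
# where column k of the n-by-L matrix M_i is row i of S^k.
Z = np.outer(x, h)
M = [np.column_stack([powers[k][i] for k in range(L)]) for i in range(n)]
y_lifted = np.array([np.sum(M[i] * Z) for i in range(n)])
```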
  • Optimization and Learning in Spreading Dynamics
    Andrey Lokhov (Los Alamos National Laboratory)
    Recently introduced dynamic message-passing (DMP) equations allow one to solve spreading dynamics on a given network instance for a large class of models with progressive dynamics, including the zero-temperature random-field Ising model, the susceptible-infected-recovered model, and rumor spreading models. This makes it possible to use DMP equations in diverse algorithmic applications aimed at better control of cascading processes. We discuss new DMP-based algorithms for two important directions. First, we show how the DMP equations, combined with techniques used in artificial neural networks, can be applied to problems such as optimal dynamic resource allocation and influence maximization in the presence of budget constraints. The second application concerns the reconstruction of heterogeneous transmission probabilities from partially observed cascades; we show that while methods based on likelihood maximization rapidly become impractical due to their very large computational complexity, the DMP algorithm may provide competitive results at a substantially lower computational cost.
  • Bayesian Heuristics for Group Decisions
    Mohammad Amin Rahimian (University of Pennsylvania)
    Under the Bayesian without Recall (BWR) model of inference, the agents behave rationally with respect to their most recent observations, yet they do not recall their history of past observations and cannot reason about how other agents are making their decisions. This model avoids the complexities of fully rational inference and also provides a behavioral foundation for non-Bayesian updating. We specialize this model to a group decision scenario where private observations are received at the beginning, and agents aim to take the best action given their aggregate observations. We present the implications of the choices of signal structure and action space for such agents. We show that for a wide class of distributions from the exponential family and their conjugate priors, the BWR action updates take the form of a linear update in the agent's own and neighboring actions. Furthermore, if the priors are noninformative, then these action updates take the form of a linear combination as in the DeGroot model: agents who start from noninformative priors follow the DeGroot update and reach a consensus. Next, in an environment where the state space is finite and agents take actions over the probability simplex, we show that the resultant belief updates are log-linear. In such a case, if the network is unbiased, then the beliefs concentrate on a global maximum likelihood estimate of the unknown, where the signal likelihoods are weighted by the network centralities of their respective agents.
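    The DeGroot limit mentioned above can be sketched numerically (the interaction matrix is an assumed example): iterating a row-stochastic update drives all actions to a consensus value weighted by the agents' network centralities, i.e., the stationary distribution of the matrix.

```python
import numpy as np

# Row-stochastic interaction matrix of a strongly connected network.
W = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.3, 0.0, 0.7]])

x0 = np.array([0.0, 1.0, 5.0])       # initial actions
x = x0.copy()
for _ in range(500):                  # DeGroot update: average over neighbors
    x = W @ x

# Consensus value is pi . x0, where pi is the stationary (centrality)
# distribution satisfying pi = pi W, pi >= 0, sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
```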
  • Monotonicity in Dissipative Networks: Application to Robust Optimization
    Marc Vuffray (Los Alamos National Laboratory)
    We consider general balanced flows of a commodity over a network where an instance of the network flow is characterized by edge flows and potentials. Edge flows in and out of a node are assumed to be conserved, thus representing standard network flow relations. The remaining freedom in the flow distribution over the network is constrained by potentials so that the difference of potentials is expressed as a nonlinear function of the edge flow.

    We show that flow solutions in these networks possess a monotonicity property: the potentials at every point in the network change monotonically with an increase or decrease in the withdrawals. This property has important consequences for robust optimization on dissipative networks, where it enables us to examine only two extreme cases instead of infinitely many scenarios.

    We illustrate this general result with the example of the maximum profit problem on a natural gas transmission network.
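    The monotonicity property can be illustrated on a path network with a Weymouth-type dissipative relation between squared potentials and edge flows; the topology, coefficient, and withdrawal vectors below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def node_potentials(withdrawals, p_source_sq=100.0, c=1.0):
    """Squared potentials along a path fed from node 0, with the
    dissipative relation p_k^2 - p_{k+1}^2 = c * phi * |phi| per edge."""
    # Balanced flow: the flow on edge k equals the total withdrawal downstream.
    flows = np.cumsum(withdrawals[::-1])[::-1]
    p_sq = [p_source_sq]
    for phi in flows:
        p_sq.append(p_sq[-1] - c * phi * abs(phi))
    return np.array(p_sq)

d_low  = np.array([1.0, 2.0, 1.0])    # withdrawals at nodes 1..3
d_high = np.array([1.5, 2.0, 2.0])    # componentwise larger withdrawals

p_low  = node_potentials(d_low)
p_high = node_potentials(d_high)
# Every potential decreases when withdrawals increase componentwise,
# so robust optimization needs only the two extreme withdrawal profiles.
```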
  • Speed vs Accuracy: Nervous Systems Tradeoffs Using Robust Control
    Yorie Nakahira
    The modern view of the nervous system as layering distributed computation and communication for the purpose of sensorimotor control and homeostasis has much experimental evidence but little theoretical foundation, leaving unresolved the connection between diverse components and complex behavior. As a simple starting point, we address a fundamental tradeoff that arises when robust control is done using communication with both delay and quantization error. This yields surprisingly simple and tight analytic bounds with clear interpretations and insights regarding hard tradeoffs, optimal coding and control strategies, and their relationship with well-known physiology and behavior. These results explain findings from sensorimotor experiments well, but are very different from those based on information theory and statistical physics (which have dominated theoretical neuroscience).