Aharon Ben-Tal (Technion - Israel Institute of Technology)
The Adjustable Robust Counterpart (ARC) Approach to Uncertain Linear Programs
In many practical decision problems modeled as uncertain LPs, only some of the decision variables must be determined prior to the realization of the uncertain data ("here and now" decisions); the other variables can be adjusted after part of the uncertain data is realized ("wait and see" decisions). This is the prevailing situation in multi-period decision problems. For LPs of this type, we introduce a new methodology based on the ARC, which offers a less conservative and significantly more flexible approach than the usual Robust Counterpart approach. On the other hand, the ARC problem is computationally much more complex. To overcome the latter difficulty, we suggest the use of linear decision rules. The theory is illustrated by studying a multi-period inventory system.
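The reduction to an ordinary LP under linear (affine) decision rules can be seen on a toy two-stage problem. The shipping scenario and all numbers below are invented for illustration; box uncertainty is handled by enforcing the constraints at the interval endpoints, which suffices because everything is affine in the uncertain demand.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-stage uncertain LP: choose stock x now; after demand
# d in [1, 3] is revealed, ship y(d), with y(d) = d required and
# y(d) <= x.  A static ("here and now") y is infeasible, but the affine
# rule y(d) = y0 + y1*d recovers feasibility.  All constraints are
# affine in d, so enforcing them at the two interval endpoints suffices.
d_lo, d_hi = 1.0, 3.0
# variables: [x, y0, y1]; minimize the first-stage stock x
c = [1.0, 0.0, 0.0]
# y0 + y1*d = d at both endpoints pins the shipping rule to demand
A_eq = [[0.0, 1.0, d_lo],
        [0.0, 1.0, d_hi]]
b_eq = [d_lo, d_hi]
# y0 + y1*d <= x at both endpoints (cannot ship more than stock)
A_ub = [[-1.0, 1.0, d_lo],
        [-1.0, 1.0, d_hi]]
b_ub = [0.0, 0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (None, None), (None, None)])
print(res.x)   # adjustable solution: x = 3, y(d) = d
```

Here the static robust counterpart is infeasible (no constant y satisfies y = d for every d in the interval), while the affine rule recovers the fully adjustable solution.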
Dimitris Bertsimas (Sloan School of Management, MIT)
Robust Discrete and Dynamic Optimization
The main motivation for robust optimization is data uncertainty in structured mathematical programming problems. Under this perspective, we investigate different choices of uncertainty sets to model data uncertainty and characterize the structure of the resulting robust counterparts. We particularly focus on uncertainty sets for which the robust problem inherits the computational complexity of the underlying deterministic problem. Examples of concrete results in this direction include:
(a) The robust counterpart of a linear programming problem (LP) is still an LP, and that of a mixed integer programming problem (MIP) is still a MIP of comparable size.
(b) The robust counterpart of a polynomially solvable 0-1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable.
(c) Robust network flows can also be solved as a polynomial number of modified network flow problems.
(d) The robust counterpart of an NP-hard α-approximable 0-1 discrete optimization problem remains α-approximable. (Joint work with Melvyn Sim.)
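For result (a), a minimal sketch of how a budgeted-uncertainty constraint stays linear; the two-variable instance and all numbers are made up, and the reformulation follows the standard budget-of-uncertainty pattern.

```python
from scipy.optimize import linprog

# Toy budgeted uncertainty: maximize x1 + x2 subject to a'x <= 4, where
# each coefficient a_j = 1 may deviate by up to d_j = 0.5, but at most
# Gamma = 1 coefficient deviates at once.  For x >= 0 the robust
# counterpart adds auxiliary variables z, p1, p2 and stays an LP:
#   x1 + x2 + Gamma*z + p1 + p2 <= 4,  z + p_j >= d_j*x_j,  z, p >= 0.
Gamma, d = 1.0, 0.5
# variables: [x1, x2, z, p1, p2]; maximize x1 + x2  ->  minimize -(x1 + x2)
c = [-1.0, -1.0, 0.0, 0.0, 0.0]
A_ub = [[1.0, 1.0, Gamma, 1.0, 1.0],   # robust version of the constraint
        [d,   0.0, -1.0, -1.0, 0.0],   # z + p1 >= d*x1
        [0.0, d,   -1.0, 0.0, -1.0]]   # z + p2 >= d*x2
b_ub = [4.0, 0.0, 0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
print(-res.fun)   # robust optimum 3.2 (the nominal optimum would be 4.0)
```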
We also propose to model stochastic and dynamic optimization problems using uncertainty sets as opposed to probability distributions. Unlike dynamic and stochastic programming, this robust approach does not suffer from the curse of dimensionality. As a test case, we apply this perspective to classical supply chain optimization problems under uncertainty, and show (a) the proposed approach is computationally tractable for high dimensions and in many cases leads to stronger solutions; (b) the optimal robust policies have the same structure as optimal policies obtained via dynamic programming; moreover, the robust approach is capable of characterizing the structure of the optimal policy even in cases where the structure of the optimal policy obtained via dynamic programming is unknown. (Joint work with Aurelie Thiele.)
Laurent El Ghaoui (EECS Department, University of California at Berkeley)
Case Studies in Robust Optimization Slides: pdf
We review some case studies in practical robust optimization, with examples taken from path planning for air traffic management and classification problems in bioinformatics. Both examples emphasize the importance of obtaining non-conservative estimates of the region of confidence for the uncertain parameters, while at the same time retaining the computational efficiency of the corresponding robust optimization problem.
Osman Güler (Department of Mathematics & Statistics, University of Maryland, Baltimore County)
Interior Point Methods on Homogeneous Cones Slides: pdf
In interior point theory, it is convenient to formulate a convex program as the minimization of a linear function over the intersection of an affine subspace with a convex cone K. As soon as we have a self-concordant barrier function for K, we immediately have a (short-step) interior-point algorithm. In practice, one is interested in primal-dual methods and in long-step methods. Here the nature of the cone K plays an important role. Homogeneous cones form a large class with a rich theory and a (somewhat rough) algebraic classification. Computable barrier functions exist on homogeneous cones which satisfy properties suitable for developing long-step interior point methods. In this talk, we will elaborate on various aspects of interior point methods on homogeneous cones.
Anders Hansson (Division of Automatic Control, Department of Electrical Engineering, Linköping University)
Efficient Solution of SDPs Related to the KYP Lemma (poster session)
Joint work with Lieven Vandenberghe and Venkataramanan Balakrishnan.
A very important class of optimization problems in control are the SDPs related to the Kalman-Yakubovich-Popov (KYP) lemma. These appear as special instances in traditional control applications such as Lyapunov stability, passivity, and induced L_2 gain, with or without uncertainty. More recent applications concern robustness analysis using integral quadratic constraints. Only medium-sized problems (with several hundred variables) can be solved if standard software is used to solve the resulting SDPs. However, in many practical applications such as those encountered in the aerospace industry, thousands (or even more) variables arise, and the resulting SDPs cannot be solved using standard software. This issue can be addressed by taking advantage of the structure of the specific linear matrix inequality related to the KYP lemma to devise more efficient software. We will give an overview of recent results in this area.
GloptiPoly - Global Optimization over Polynomials with Matlab and SeDuMi (poster session) Slides: pdf
Joint work with Jean-Bernard Lasserre.
GloptiPoly is a Matlab/SeDuMi add-on to build and solve convex semidefinite programming relaxations of the (generally non-convex) global optimization problem of minimizing a multivariable polynomial function subject to polynomial inequality, equality or integer constraints. The software generates a series of lower bounds monotonically converging to the global optimum. Global optimality is detected and isolated optimal solutions are extracted automatically. Numerical experiments show that for most of the small- and medium-scale problems described in the literature, the global optimum is reached at low computational cost. Potential applications of GloptiPoly include resolution of polynomial systems of equations, combinatorial optimization, and robust control. For more information, see http://www.laas.fr/~henrion/software/gloptipoly
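The flavor of the underlying SOS/moment relaxation can be reproduced without GloptiPoly on a tiny univariate example. This sketch replaces the SDP solver by bisection over the lower bound plus a brute-force grid over the one free entry of the Gram matrix; that shortcut only works because the example is so small.

```python
import numpy as np

# Toy quartic p(x) = x^4 - 3x^2 + 2, whose global minimum is -0.25.
# p - gamma is a sum of squares iff some symmetric Q >= 0 reproduces its
# coefficients via [1, x, x^2]' Q [1, x, x^2].  Coefficient matching
# leaves one free parameter lam, so SDP feasibility degenerates to a
# one-dimensional search.

def sos_feasible(gamma):
    for lam in np.linspace(-3.0, 0.0, 301):
        # Gram matrix for p(x) - gamma = x^4 - 3x^2 + (2 - gamma)
        Q = np.array([[2.0 - gamma, 0.0,          lam],
                      [0.0,         -3.0 - 2*lam, 0.0],
                      [lam,         0.0,          1.0]])
        if np.linalg.eigvalsh(Q)[0] >= -1e-8:   # Q psd up to tolerance
            return True
    return False

lo, hi = -5.0, 2.0          # bisection for the largest SOS lower bound
while hi - lo > 1e-3:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sos_feasible(mid) else (lo, mid)
print(round(lo, 2))          # -0.25: the relaxation is tight here
```

For univariate polynomials the SOS bound always coincides with the global minimum, which is why the toy bound is exact.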
Garud Iyengar (IEOR Department, Columbia University)
Robust Quadratically Constrained Quadratic Programs Slides: pdf
In this talk we will discuss robust convex quadratically constrained programs, a subset of the class of robust convex programs introduced by Ben-Tal and Nemirovski. In contrast to this previous work, where it is shown that such problems can be formulated as semidefinite programs, our focus is to identify uncertainty sets that allow this class of problems to be formulated as second-order cone programs. We propose three classes of uncertainty sets that satisfy this criterion and discuss examples from finance, classification, and signal processing where these classes of uncertainty sets are natural.
Semidefinite Problems in Structural Optimization and their Solution by PENNON (poster session) Slides: pdf
We will present several problems of structural optimization that can be formulated as (convex and nonconvex) semidefinite programs. These include multiple-load truss and material optimization with frequency and stability constraints. We will further present the code PENNON, aimed at solving large-scale, generally nonlinear SDP problems. Results of numerical examples will demonstrate that the code is particularly efficient for SDP formulations of structural optimization problems.
A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones Slides: pdf
The class of POPs (Polynomial Optimization Problems) over cones covers a wide range of optimization problems such as 0-1 integer linear and quadratic programs, nonconvex quadratic programs and bilinear matrix inequalities. This paper presents a new framework for convex relaxation of POPs over cones in terms of linear optimization problems over cones. It provides a unified treatment of many existing convex relaxation methods based on the lift-and-project linear programming procedure, the reformulation-linearization technique and the semidefinite programming relaxation for a variety of problems. It also extends the theory of convex relaxation methods, and thereby brings flexibility and richness in practical use of the theory. This talk is based on a joint work with Sunyoung Kim and Hayato Waki.
Vera V. Kovacevic-Vujcic (Faculty of Organizational Sciences, University of Belgrade)
Stabilization of Mehrotra's Primal-Dual Algorithm and its Implementation (poster session)
Joint work with P.S. Stanimirovic and N.V. Stojkovic.
In this paper we apply the stabilization procedure proposed by Kovacevic-Vujcic and Asic to Mehrotra's primal-dual interior-point algorithm for linear programming. The stabilization procedure is implemented in the package MATHEMATICA. A number of highly degenerate test examples are used to compare the modified Mehrotra's method with the original one and with the code PCx. We also provide numerical results on some examples from the Netlib test set.
Vera V. Kovacevic-Vujcic (Faculty of Organizational Sciences, University of Belgrade)
A Predictor-Corrector Algorithm for a Semidefinite Relaxation of the Traveling Salesman Problem (poster session)
Joint work with M.M. Cangalovic and J.J. Kratica.
The paper studies the behaviour of the semidefinite method proposed by Helmberg, Rendl, Vanderbei and Wolkowicz on a special class of semidefinite programs obtained as relaxations of the discrete semidefinite model of the symmetric traveling salesman problem (STSP). The optimal choice of relaxation parameters, both with respect to numerical stability and to the quality of the obtained lower bound, is considered. Modifications of the original method which take into account the special structure of the semidefinite relaxation are proposed. Numerical results for STSP instances of dimensions up to 124 are reported.
In this poster we consider the problem of novelty detection, presenting an algorithm that aims to find a minimal region in input space containing a fraction of the probability mass underlying a data set. This algorithm -- the ``single-class minimax probability machine (MPM)'' -- is built on a distribution-free methodology that minimizes the worst-case probability of a data point falling outside of a convex set, given only the mean and covariance matrix of the distribution and making no further distributional assumptions. We first present the MPM framework for binary classification problems. After that, we derive a robust approach to estimating the mean and covariance matrix within the general two-class MPM setting, and show how this approach specializes to the single-class problem. We provide empirical results comparing the single-class MPM to other approaches for novelty detection, including the single-class support vector machine (SVM) and a two-class SVM method.
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space---classical model selection problems in machine learning. In this poster we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm---using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the poster is supported by positive empirical results.
Jean B. Lasserre (LAAS-CNRS, Toulouse, France)
Some Applications of Semidefinite Programming in Probability and Algebra Slides: pdf
We consider some applications of semidefinite programming in probability and algebra. In particular, we will show how to use semidefinite programming:
- to provide bounds on measures that satisfy moment conditions
- in performance evaluation for certain types of Markov processes, which include Barnsley iterated function systems (in discrete time), and diffusions (in continuous time) (e.g. some financial mathematical models).
- to analyze (complex and/or real) zeros of systems of polynomial equations and provide conditions for their location in pre-specified semialgebraic sets.
Adrian Lewis (Simon Fraser University, Canada)
Perhaps the simplest example of robust optimization consists of minimizing a given function that is evaluated not at the point we choose, but rather at an unpredictably perturbed point. To be conservative, we therefore seek to minimize the `robust regularization', whose value at any point is the maximum value of the function in a fixed neighbourhood. For some simple functions, we can model this regularization by semidefinite programming. I outline general variational properties of such regularizations. In the case of the spectral abscissa function of a square matrix (the largest real part of an eigenvalue), the robust regularization is the `pseudospectral abscissa', a useful tool in robust stability theory, closely related to the `distance to instability' and the `H-infinity norm'. I outline an algorithm to evaluate the pseudospectral abscissa, and discuss its variational properties.
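The pseudospectral abscissa itself (the quantity, not the talk's algorithm) admits a brute-force sanity check. The 2x2 normal matrix below is an assumed example; for normal matrices the exact answer is the rightmost eigenvalue real part plus epsilon.

```python
import numpy as np

# Crude grid estimate of the pseudospectral abscissa
#   alpha_eps(A) = max { Re z : sigma_min(z*I - A) <= eps }.
A = np.diag([-1.0, -2.0])          # normal matrix: exact answer is
eps = 0.5                          # max Re(eigenvalue) + eps = -0.5
I = np.eye(2)

best = -np.inf
for re in np.arange(-2.0, 1.0, 0.01):
    for im in np.arange(-1.0, 1.01, 0.05):
        z = re + 1j * im
        # smallest singular value of z*I - A measures distance to the
        # spectrum; z is in the eps-pseudospectrum when it is <= eps
        if np.linalg.svd(z * I - A, compute_uv=False)[-1] <= eps + 1e-9:
            best = max(best, re)
print(best)    # close to -0.5
```

A practical algorithm, such as the criss-cross iteration mentioned in the robust-stability literature, replaces this O(grid) scan with far fewer singular value computations.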
Optimal Transceiver Design for Multi-user Communication Slides: pdf
In this talk, we describe a formulation of the MMSE (Minimum Mean Square Error) transmitter design problem for a multi-user communication system employing either zero-forcing equalizers or MMSE equalizers. Since the natural formulations of this problem turn out to be nonconvex, we develop various alternative formulations using techniques of linear matrix inequalities (LMIs), geometric programming and second order cone programming. For the case of zero-forcing equalizers, we propose an alternating direction method to solve the optimal transmitter design problem and establish its convergence. When restricted to the diagonal designs, we further simplify the formulation to a linearly constrained entropy maximization problem which can be efficiently solved. In the case where MMSE equalizers are used at the receivers, we formulate the optimal transmitter design problem as a semidefinite program (SDP) which can be solved using the highly efficient interior point methods. When the channel matrices are diagonal (as in OFDM systems), we show that the optimal MMSE transmitters can be obtained by subcarrier allocation and optimal power loading to each subcarrier for all the users. Moreover, the optimal subcarrier allocation and power-loading can be computed fairly simply by the relative ratios of the path gains corresponding to all subcarriers.
Thanasak Mouktonglang (Department of Mathematics, University of Notre Dame)
Multi-Target Linear-Quadratic Control Problem and Second-order Cone Programming (poster session)
Joint work with Leonid Faybusovich.
It is shown that a multi-target linear-quadratic control problem can be reduced to the classical tracking problem where the target is a convex combination of the original ones. Finding the coefficients of this convex combination reduces to solving a second-order cone programming problem, which can be easily solved using modern interior-point algorithms.
A Parallel Implementation of the Spectral Bundle Method for Semidefinite Programming (poster session) Slides: pdf
Joint work with Yevgen Tymofyeyev.
We describe a portable, parallel and distributed implementation of the spectral bundle method for semidefinite programs (SDP) with the usual block diagonal structure. Subgradients are computed via a Lanczos method with implicit restarts, but specially tailored to handle block structure. Based on the Message Passing Interface, this code exploits parallelism to solve (to low accuracy) problems larger than previously possible. (For instance, a Lovasz theta function SDP on a graph with 2000 nodes and 20,000 edges is solved to about 3 digits of accuracy in under 12 minutes on 16 processors). Our current efforts are directed towards improvements necessary to achieve scalability and superior parallel performance on very large-scale problems.
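The eigenvalue-optimization problem that spectral bundle methods attack can be sketched with a plain subgradient method in place of the bundle machinery, and with a dense eigensolver in place of the implicitly restarted Lanczos method; the path-graph Max-Cut dual below is a made-up toy instance.

```python
import numpy as np

# SDPs with constant trace have duals of the eigenvalue form
#   f(y) = a * lambda_max(C - A^T y) + b^T y.
# Toy instance: Max-Cut dual on the path graph 1-2-3,
#   f(y) = n * lambda_max(L/4 - Diag(y)) + sum(y),
# whose minimum equals the SDP bound (= 2, the true max cut here).
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
n = 3
C = L / 4.0

def f_and_subgrad(y):
    w, V = np.linalg.eigh(C - np.diag(y))
    v = V[:, -1]                       # eigenvector of lambda_max
    # subgradient: derivative of n*lambda_max(C - Diag(y)) is -n*v_i^2
    return n * w[-1] + y.sum(), 1.0 - n * v**2

y = np.zeros(n)
best = np.inf
for k in range(2000):
    val, g = f_and_subgrad(y)
    best = min(best, val)
    y -= 0.05 / np.sqrt(k + 1) * g     # diminishing step sizes
print(best)    # decreases from 2.25 toward the SDP bound 2.0
```

By weak duality every f(y) upper-bounds the SDP value, so the iterates never drop below 2; the bundle method's cutting-plane model is what makes this descent fast on large instances.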
Arkadi Nemirovski (Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology)
We consider a semi-infinite LMI L(r): A(x,s) is psd for all s from S(r), where the symmetric matrix A(x,s) is bi-linear in the design variables x and the ``data perturbations'' s (which are block-diagonal matrices of a given block-diagonal structure, with part of diagonal blocks restricted to be scalar), and S(r) is the set of all perturbations of norm not exceeding r. In general, L(r) is "computationally intractable" - it is NP-hard already to check whether a given x is or is not feasible. The goal of the talk is to demonstrate that under natural assumptions on the structure of A(x,s), the semi-infinite LMI L(r) admits computationally tractable approximation which is tight, in certain precise sense, provided that the sizes of the scalar blocks in s are O(1). We present and explain the outlined result and discuss several applications, for example an efficiently computable lower bound, tight within the factor 1.57..., on the Lyapunov Stability radius of an interval matrix.
Yurii Nesterov (CORE/UCL, Universite Catholique de Louvain)
Arnab Nilim (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley) http://robotics.eecs.berkley.edu/~nilim/
Robust Solutions to Markov Decision Problems with Uncertain Transition Matrices: its Applications in Aircraft Path Planning (poster session)
Joint work with Laurent El Ghaoui.
Optimal solutions to Markov Decision Problems (MDPs) are often sensitive with respect to the state transition probabilities. In many practical problems, the estimation of the transition probabilities is far from accurate. Hence, estimation errors are, together with the curse of dimensionality, a limiting factor in applying MDPs to real-world problems. We propose an algorithm for solving finite-state and finite-action MDPs, where the solution is guaranteed to be robust with respect to the estimation errors of the state transition probabilities. Our algorithm involves a statistically accurate yet numerically efficient representation of uncertainty via likelihood or entropy functions. The worst-case complexity of the robust algorithm is the same as that of the original Bellman recursion. Hence, robustness can be added at almost no extra computing cost. We applied our algorithm to the problem of the dynamic routing of an aircraft under stochastic obstacles, where the probabilities of the obstacles are unknown but bounded. We showed that a significant reduction in delay can be obtained by using our robust strategies, with respect to the nominal strategy, when the data is uncertain.
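A worst-case Bellman backup can be sketched in a few lines. The poster's likelihood/entropy uncertainty sets are replaced here by the simplest possible stand-in, a finite list of candidate transition vectors per state-action pair, and the two-state MDP itself is invented.

```python
import numpy as np

gamma = 0.9
rewards = np.array([[1.0, 0.0],     # r[s][a]
                    [0.0, 2.0]])
# scenarios[(s, a)] = plausible transition distributions over states
scenarios = {
    (0, 0): [np.array([0.9, 0.1]), np.array([0.7, 0.3])],
    (0, 1): [np.array([0.2, 0.8]), np.array([0.4, 0.6])],
    (1, 0): [np.array([0.5, 0.5]), np.array([0.6, 0.4])],
    (1, 1): [np.array([0.1, 0.9]), np.array([0.3, 0.7])],
}

def value_iteration(worst_case=True, iters=500):
    V = np.zeros(2)
    for _ in range(iters):
        Q = np.empty((2, 2))
        for s in range(2):
            for a in range(2):
                vals = [p @ V for p in scenarios[(s, a)]]
                # robust backup: nature picks the worst transition law
                future = min(vals) if worst_case else vals[0]
                Q[s, a] = rewards[s, a] + gamma * future
        V = Q.max(axis=1)
    return V

V_robust = value_iteration(True)
V_nominal = value_iteration(False)   # first scenario taken as nominal
print(V_robust, V_nominal)           # robust values never exceed nominal
```

The robust backup remains a gamma-contraction, so the iteration converges just like ordinary value iteration, which is the sense in which robustness comes at almost no extra cost per backup.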
Pablo A. Parrilo (Swiss Federal Institute of Technology, ETH Zurich)
Decomposing the Algebra Associated to an SDP
We study the associative algebra corresponding to a given semidefinite program. We show how to explicitly decompose it as a direct sum of "smaller" algebras, greatly simplifying its numerical solution. The results are motivated by our earlier work with Karin Gatermann on symmetry reduction for SOS/SDP, and enable improved techniques for problems with large groups. The results will be illustrated through applications of sum of squares techniques in quantum mechanics.
Javier Pena (Operations Research, Carnegie Mellon University)
The classical Eckart and Young identity establishes an elegant connection between the distance to singularity and the norm of the inverse of a square matrix. We present a generalization of this identity for rectangular matrices defining conic linear systems, and for perturbations restricted to a particular block structure, such as those determined by a sparsity pattern.
Our results unify and extend the Eckart and Young identity, Renegar's characterization of the distance to infeasibility, Cheung and Cucker's characterization of the normalized distance to ill-posedness, and Rohn's characterization of the componentwise distance to singularity. We also discuss some connections with the mu-number in robust control, and the sign-real spectral radius.
Franz Rendl (Institut fuer Mathematik, Universitaet Klagenfurt, Austria)
Semidefinite Programming and the Bundle Method
It is well known that many hard combinatorial optimization problems allow relaxations that result in semidefinite programs (SDP) with a huge number of linear constraints. As an example, one can think of the semidefinite relaxation of Max-Cut in combination with all triangle constraints, or the stable set relaxation N_+(FRAC) of Lovasz and Schrijver.
We present an approach where some of the constraints are dualized, and not treated directly, leading to non-smooth optimization, where the function evaluation amounts to solving a simpler SDP. The bundle method is shown to be quite efficient to deal with this type of problems. Computational results are presented for several types of problems, including Max-Cut, Graph-equipartition and Quadratic Assignment Problems.
(This is joint work with I. Fischer, G. Gruber, R. Sotirov and A. Wiegele.)
James Renegar (School of Operations Research and Industrial Engineering, Cornell University)
Hyperbolic polynomials provide the means for various approaches in solving important convex optimization problems (e.g., linear programming, semi-definite programming), but the only well-explored approach is by interior-point methods: indeed, composing a hyperbolic polynomial with the logarithm function gives a self-concordant barrier function for the interior of the polynomial's "hyperbolicity cone."
Rather than focusing on the interior of hyperbolicity cones, we focus on their boundaries. We briefly sketch an algorithmic framework based on morphing one hyperbolicity cone into another, following optimal (boundary) solutions as the morphing proceeds. We also present a few results leading to natural Riemannian metrics on the boundaries of hyperbolicity cones. Finally, we consider the algorithmic framework from Riemannian perspectives.
Katya Scheinberg (IBM T.J. Watson Research Center)
A New Property of the Cholesky Factorization of the Matrices Arising in Interior Point Methods Slides: pdf
At each iteration of an interior point method applied to linear programming a matrix of the form M=ADA' is typically factorized, where D is a diagonal nonnegative matrix that changes from iteration to iteration and A is a fixed matrix. We will identify a new property of the Cholesky factorization of M and discuss how this property helps explain good numerical behavior of interior point methods. We will discuss the extension of this result to the matrices arising in convex QP, second order cone programming and possibly SDP. We will show some computational evidence supporting our claims.
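For context, the matrices in question are easy to set up. The specific Cholesky property is the subject of the talk and is not reproduced here; this sketch only shows M = ADA' being re-factorized as an IPM-style scaling D degenerates, with entries spreading over many orders of magnitude as iterates approach optimality.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))        # fixed constraint matrix
resids = []
for spread in [1.0, 1e4, 1e8]:
    # IPM-style scaling: some entries of D blow up, others vanish
    d = np.concatenate([np.full(2, spread),
                        np.full(2, 1.0 / spread),
                        np.ones(4)])
    M = A @ np.diag(d) @ A.T           # normal-equations matrix M = A D A'
    Lc = np.linalg.cholesky(M)         # lower-triangular Cholesky factor
    resids.append(np.linalg.norm(Lc @ Lc.T - M) / np.linalg.norm(M))
print(resids)   # tiny relative residuals even as cond(M) grows
```

Since A has full row rank and D stays strictly positive, M remains symmetric positive definite throughout, and the factorization residual stays at roundoff level even as the conditioning deteriorates.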
Jos F. Sturm (Department of Econometrics, Tilburg University, The Netherlands) J.F.Sturm@uvt.nl
An Interior Point Method with Column Aggregation and Selection Slides: pdf
We propose to apply the column and constraint generation idea within an interior point method. The technique can reduce computational times when the optimum number of basic variables and active constraints is relatively small. It can benefit from warm start information.
Tamás Terlaky (Advanced Optimization Lab, Department of Computing and Software, McMaster University, Canada)
Computational Experiences with Self-Regular Proximity Based IPMs (poster session) Slides: pdf
Joint work with Xiaohang Zhu and J. Peng (Advanced Optimization Lab, Department of Computing and Software, McMaster University, Canada)
We present our experiences with an implementation of Self-Regular proximity based Interior Point Methods (SR-IPMs) for linear optimization. The implementation is based on a class of the new SR-search directions and the homogeneous self-dual model. To enhance the practical performance of the algorithm, we also employ the predictor-corrector strategy.
The software package McIPM is MATLAB-based; it uses WSMP as its sparse matrix package and LIPSOL's presolve interface. McIPM has been tested on the full NETLIB test suite and on some additional large-scale problems. Extensive testing shows that the McIPM software package is competitive with state-of-the-art software packages, and that the self-regular proximity approach offers avenues for improvement when solving difficult problems.
Keywords: Linear Optimization, Interior Point Methods, Self-regular Proximity, optimization software.
Robust Optimization of Magnetic Shielding (poster session)
It is known that the optimal magnetic shielding design problem can be formulated as a second-order cone program (SOCP). We formulate a robust counterpart of the magnetic shielding design problem and develop a procedure to solve it approximately by iterative use of an SOCP solver. Our instance of the magnetic shielding design problem comes from the new bullet train under development in Japan. The original design problem is an SOCP with nearly two thousand variables. An approximate robust solution is obtained by solving perturbed optimization problems ten thousand times. The algorithm strictly ensures robustness with respect to perturbations within a polyhedral subset of the original perturbation set under consideration. The sharpness of this polyhedral approximation is examined through simulation in a simplified setting. (Joint work with Takashi Sasakawa (RTRI, Japan).)
Levent Tuncel (Department of Combinatorics & Optimization, University of Waterloo)
Theoretical Efficiency of Lift-and-Project Methods for Combinatorial Optimization Slides: html
Lift-and-project methods provide a unifying way of attacking combinatorial (and other nonconvex) optimization problems via sequences of semidefinite (or more general convex) programming problems.
I will review the theoretical results and recent developments concerning the worst-case behavior of such methods on combinatorial optimization problems. Certain problems, such as matching, stable set, and max cut, will receive particular attention.
Lieven Vandenberghe (Electrical Engineering Department, UCLA)
Efficient Implementation of Interior-Point Methods for SDPs Arising in Control and Signal Processing Slides: pdf
Joint work with V. Balakrishnan (Purdue University) and A. Hansson (University of Linkoping).
Applications of semidefinite programming in control and signal processing usually require the introduction of auxiliary matrix variables. This often results in large scale SDPs, even when the number of optimization variables in the underlying control problem is not particularly large. We will discuss fast implementations of primal-dual interior-point methods for SDPs derived from the Kalman-Yakubovich-Popov lemma, a class of problems that are widely encountered in control and signal processing applications.
Martin Wainwright (Electrical Engineering and CS, UC Berkeley) martinw@EECS.berkeley.edu
The problem of computing the partition function and local marginals for a distribution defined by a graphical model arises in a variety of applications, including statistical image processing, error-correcting coding, statistical physics, and machine learning. These problems are intractable for a general graphical model with cycles, which motivates the development of approximate methods.
We describe the variational representation of the log partition function as a maximum entropy problem over the polytope of valid marginal distributions. We then develop a method for approximately computing the partition function and marginals based on a semidefinite relaxation. In particular, the combination of a Gaussian bound on the entropy function with a semidefinite outer bound on the marginal polytope leads to a log-determinant maximization problem. By taking the "zero-temperature" limit, we show how our log-det relaxation tends to a well-known class of SDP relaxations for integer programming problems.
Joint work with Michael I. Jordan, Statistics & CS, UC Berkeley.
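The Gaussian entropy bound at the heart of the log-det relaxation can be checked numerically on a toy pair of correlated bits. The distribution below is invented; the I/12 term reflects dithering integer-valued variables with uniform noise before applying the Gaussian maximum-entropy bound.

```python
import numpy as np

# For an integer-valued random vector with covariance S,
#   H(x) <= (1/2) * log det(2*pi*e*(S + I/12))      (entropy in nats).
# Toy check on two correlated bits:
p = {(0, 0): 0.4, (1, 1): 0.4, (0, 1): 0.1, (1, 0): 0.1}
H = -sum(q * np.log(q) for q in p.values())          # exact entropy

mean = sum(np.array(x) * q for x, q in p.items())
cov = sum(np.outer(x, x) * q for x, q in p.items()) - np.outer(mean, mean)
bound = 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * (cov + np.eye(2) / 12)))
print(H, bound)    # H ~ 1.19 <= bound ~ 1.63
```

Plugging this bound into the variational representation, with the covariance constrained to the semidefinite outer bound on the marginal polytope, is what turns the intractable maximum entropy problem into a log-determinant maximization.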
Semidefinite Programming for Approximation Slides: pdf
We summarize several recent results on using semidefinite programming (SDP) for approximations. These problems include 2-catalog segmentation, computing the k-radii of a point set in Euclidean space, non-convex quadratic optimization, etc. We also illustrate some limits on what SDP can do to improve the quality of approximation.
Yin Zhang (Department of Computational & Applied Mathematics, Rice University, Houston, TX 77005)
A Nonpolyhedral Primal Active-Set Approach to Semidefinite Programming (poster session)
Joint work with Kartik Krishnan.
Interior point methods for SDP are fairly limited in the size of problems they can handle. We discuss a non-polyhedral primal active-set approach to semidefinite programming, which exploits the low rank of optimal primal extreme point solutions and consequently may be more amenable to solving large-scale SDPs. The goal is to find a superset of the range space of an optimal primal extreme point solution. We will present two algorithmic variants. In every iteration until optimality we maintain primal feasibility, and possibly satisfy the reduced complementary slackness conditions trace(XS)=0. Finally, we discuss the convergence of the approach under nondegeneracy assumptions.