Poster Session and Reception

Tuesday, June 18, 2019 - 4:30pm - 6:00pm
Lind 400
  • Non-Convex Min-Max Optimization: Algorithms and Applications in Machine Learning

    The min-max problem, also known as the saddle point problem, is a class
    of optimization problems in which we simultaneously minimize over one
    subset of variables and maximize over another. This class of problems
    can be used to formulate a wide range of machine learning (ML)
    problems (e.g.,
    adversarial learning). Despite its popularity, existing theory for this
    class has been mainly developed for problems with certain special
    convex-concave structure. Therefore, it cannot be used to guide the
    algorithm design for many interesting problems in ML, where some kind
    of non-convexity often arises. In this work, we consider a general
    block-wise one-sided non-convex min-max problem, in which the
    minimization problem consists of multiple blocks and is non-convex,
    while the maximization problem is (strongly) concave. We propose a class
    of simple algorithms named Hybrid Block Successive Approximation
    (HiBSA), which alternately performs gradient descent-type steps for
    the minimization blocks and one gradient ascent-type step for the
    maximization problem. A key element in the proposed algorithm is the
    introduction of certain properly designed regularization and penalty
    terms, which are used to stabilize the algorithm and ensure
    convergence. For the first time, we show that such simple alternating
    min-max algorithms converge to first-order stationary solutions, with
    quantifiable global rates. To validate the efficiency of the proposed
    algorithms, we conduct numerical tests on the robust learning problem
    over multiple domains.
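
    As a rough Python illustration of the alternating scheme (a minimal
    sketch under assumed gradients and step sizes, not the authors' exact
    method), each iteration performs one descent-type step per
    minimization block followed by a single regularized ascent step; the
    diminishing penalty weight below is an assumed, simplified stand-in
    for the stabilizing terms described above.

      import numpy as np

      def hibsa_sketch(grad_x_blocks, grad_y, x_blocks, y,
                       eta_x=0.01, eta_y=0.05, gamma=0.1, iters=2000):
          # One gradient descent-type step per minimization block, then
          # one ascent-type step on y; -reg * y is an assumed, simplified
          # form of the stabilizing regularizer.
          for t in range(1, iters + 1):
              for i, g in enumerate(grad_x_blocks):
                  x_blocks[i] = x_blocks[i] - eta_x * g(x_blocks, y)
              reg = gamma / np.sqrt(t)  # diminishing penalty weight
              y = y + eta_y * (grad_y(x_blocks, y) - reg * y)
          return x_blocks, y

      # Toy problem: min_x max_y sin(x) + x*y - 0.5*y**2 (non-convex in
      # x, strongly concave in y).
      gx = [lambda xb, y: np.cos(xb[0]) + y]
      gy = lambda xb, y: xb[0] - y
      x_star, y_star = hibsa_sketch(gx, gy, [1.0], 0.0)
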
  • The Cost of a Reductions Approach to Private Fair Optimization
    Daniel Alabi (Harvard University)
    The widespread adoption of machine learning by and for non-experts has brought issues of fairness, accountability, and transparency to the frontiers of advancements in artificial intelligence. Concerns about algorithmic fairness have been accompanied by new models and algorithms to make state-of-the-art and existing systems more fair. However, some of these new models do not adhere to other ethical standards, such as the societal need to ensure the privacy of the data used to create the models. We focus on a reductions approach to fair optimization and learning, where a black-box optimizer is used to learn a fair model for classification or regression, and we explore the creation of such fair models that adhere to data privacy guarantees (specifically, differential privacy). For this approach, we consider two suites of use cases: the first is for optimizing convex performance measures of the confusion matrix
    (such as G-mean and H-mean); the second is for satisfying statistical definitions of algorithmic fairness (such as equalized odds, demographic parity, and the Gini index of inequality).

    We abstract the reductions approach to fair optimization as the constrained group-objective optimization problem where we aim to optimize an objective that is a function of losses of individual groups, subject to some constraints. We present two differentially private algorithms: an $(\epsilon, 0)$ exponential sampling algorithm and an $(\epsilon, \delta)$ projection-free algorithm that uses a linear optimizer to incrementally move toward the best decision. We analyze the privacy and utility guarantees of these empirical risk minimization algorithms. Compared to a previous method for ensuring differential privacy subject to a relaxed form of the equalized odds fairness constraint, the projection-free differentially private algorithm provides asymptotically
    better sample complexity guarantees. Finally, we show an algorithm-agnostic lower bound on the accuracy of any solution to the problem of $(\epsilon, 0)$ or $(\epsilon, \delta)$ private constrained group-objective optimization.
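
    For intuition, here is the standard exponential mechanism as a generic
    selection step in Python (a hedged sketch: the candidate set, scoring
    function, and sensitivity are illustrative assumptions, and the
    paper's actual sampler over the hypothesis class may differ).

      import numpy as np

      def exponential_mechanism(candidates, score, sensitivity, epsilon,
                                seed=0):
          # (epsilon, 0)-DP selection: draw one candidate model with
          # probability proportional to exp(eps * score / (2 * sens)).
          # Higher scores should mean better constrained group-objective
          # value (an assumed convention).
          rng = np.random.default_rng(seed)
          logits = np.array([epsilon * score(c) / (2.0 * sensitivity)
                             for c in candidates])
          logits -= logits.max()  # numerical stability
          probs = np.exp(logits)
          probs /= probs.sum()
          return candidates[rng.choice(len(candidates), p=probs)]
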
  • Metric Learning for Individual Fairness
    Christina Ilvento (Harvard University)
    There has been much discussion recently about how fairness should be measured or enforced in
    classification. Individual Fairness, which requires that similar individuals be treated similarly, is a
    highly appealing definition as it gives strong guarantees on treatment of individuals. Unfortunately,
    the need for a task-specific similarity metric has prevented its use in practice. In this work, we
    propose a solution to the problem of approximating a metric for Individual Fairness based on
    human judgments. Our model assumes access to a human fairness arbiter who is free of explicit
    biases and possesses sufficient domain knowledge to evaluate similarity. Our contributions include
    definitions for metric approximation relevant for Individual Fairness, constructions for
    approximations from a limited number of realistic queries to the arbiter on a sample of individuals,
    and learning procedures to construct hypotheses for metric approximations which generalize to
    unseen samples under certain assumptions of learnability of distance threshold functions.
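
    One way to picture the learning step (an assumed simplification;
    `arbiter` and `featurize` below are hypothetical stand-ins for the
    human fairness arbiter and a pairwise feature map): for each anchor
    individual, fit a classifier predicting the arbiter's
    similar/dissimilar judgment, i.e., a learned distance threshold
    function.

      from sklearn.linear_model import LogisticRegression

      def learn_threshold_functions(anchors, pool, arbiter, featurize):
          # For each anchor u, learn h_u(v) ~ [v is within u's similarity
          # threshold] from the arbiter's yes/no judgments; the interface
          # is hypothetical, not the paper's construction.
          models = {}
          for idx, u in enumerate(anchors):
              X = [featurize(u, v) for v in pool]
              y = [int(arbiter(u, v)) for v in pool]
              models[idx] = LogisticRegression().fit(X, y)
          return models
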
  • Graph Convolutional Neural Network via Scattering
    Dongmian Zou (University of Minnesota, Twin Cities)
    We construct a convolutional neural network on graphs by generalizing the scattering transform. The construction is based on graph wavelets. Any feature generated by such a network is approximately invariant to permutations and stable to signal or graph manipulations. Numerical results show that the graph scattering transform works effectively for classification and community detection problems. Generative graph models based on scattering also show competitive results.
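
    A toy version of the pipeline in Python (spectral graph wavelets on
    the normalized Laplacian, modulus nonlinearity between layers, and
    global averages as the permutation-invariant readout; the filter
    shape and scales are illustrative assumptions, not the poster's exact
    wavelet construction):

      import numpy as np

      def graph_scattering(adj, signal, scales=(1.0, 2.0, 4.0), depth=2):
          # Wavelet filters g_s(lam) = lam * s * exp(-lam * s) applied in
          # the eigenbasis of the normalized Laplacian (assumed form).
          deg = adj.sum(axis=1)
          d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
          lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
          lam, U = np.linalg.eigh(lap)
          wavelets = [U @ np.diag(lam * s * np.exp(-lam * s)) @ U.T
                      for s in scales]
          layers, feats = [signal], [signal.mean()]
          for _ in range(depth):
              # Modulus nonlinearity, then average each scattering path.
              layers = [np.abs(W @ x) for x in layers for W in wavelets]
              feats.extend(x.mean() for x in layers)
          return np.array(feats)

      # Example: a ring graph with a random signal.
      adj = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)
      feats = graph_scattering(adj, np.random.default_rng(0).normal(size=8))
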
  • Effects of Affirmative Action
    Juba Ziani (California Institute of Technology)
    We study a two-stage model in which students are 1) admitted to college on the basis of an entrance exam, which is a noisy signal of their qualifications (type), and then 2) those students who were admitted can be hired by
    an employer as a function of their college grades, which are an independently drawn noisy signal of their type. Students are drawn from one of two populations, which might have different type distributions. We assume that the employer at the end of the pipeline is rational, in the sense that it computes a posterior distribution on student type conditional on all information that it has available (college admissions, grades, and group membership), and makes a decision based on
    posterior expectation. We then study what kinds of fairness goals can be achieved by the college by setting its admissions rule and grading policy. For example, the college might have the goal of guaranteeing equal opportunity across populations: that the probability of passing through the pipeline and being hired by the employer should be
    independent of group membership, conditioned on type. Alternatively, the college might have the goal of incentivizing the employer to adopt a group-blind hiring rule. We show that both goals can be achieved when the college does not report grades. On the other hand, we show that, under reasonable conditions, these goals are impossible to achieve even in isolation when the college uses an (even minimally) informative grading policy.
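
    A Monte Carlo sketch of the pipeline in Python (all distributions,
    cutoffs, and the grade binning are illustrative assumptions, not the
    paper's model primitives) that compares per-group hiring rates with
    and without grade reporting:

      import numpy as np

      def hire_rates(group_means, cutoff, hire_bar, report_grades=True,
                     n=200_000, seed=0):
          rng = np.random.default_rng(seed)
          rates = {}
          for group, mu in group_means.items():
              typ = rng.normal(mu, 1.0, n)              # student type
              admitted = typ + rng.normal(0.0, 1.0, n) >= cutoff
              grade = typ + rng.normal(0.0, 1.0, n)     # independent signal
              post = np.full(n, typ[admitted].mean())   # E[type | admitted]
              if report_grades:
                  # Rational employer: posterior mean of type given
                  # admission and a (binned) grade, estimated by simulation.
                  bins = np.digitize(grade, np.linspace(-3.0, 3.0, 13))
                  for b in np.unique(bins):
                      mask = admitted & (bins == b)
                      if mask.any():
                          post[bins == b] = typ[mask].mean()
              rates[group] = np.mean(admitted & (post >= hire_bar))
          return rates

      # Two groups with different type distributions.
      print(hire_rates({"A": 0.0, "B": 0.5}, cutoff=0.5, hire_bar=0.3))
      print(hire_rates({"A": 0.0, "B": 0.5}, cutoff=0.5, hire_bar=0.3,
                       report_grades=False))
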
  • Static DNN Analysis for Robustness
    Shibbir Ahmed (Iowa State University), Rangeet Pan (Iowa State University)
    Despite numerous attempts to defend deep learning-based image classifiers, they remain susceptible to adversarial attacks. This paper proposes a technique to identify susceptible classes, i.e., those classes that are more easily subverted. To identify the susceptible classes, we use distance-based measures and apply them to a trained model. Based on the distance among original classes, we create a mapping between original classes and adversarial classes that helps to significantly reduce the randomness of a model in an adversarial setting. We analyze the high-dimensional geometry among the feature classes and identify the k most susceptible target classes in an adversarial attack. We conduct experiments using the MNIST, Fashion MNIST, and CIFAR-10 (ImageNet and ResNet-32) datasets. Finally, we evaluate our techniques to determine which distance-based measure works best and how the randomness of a model changes with perturbation.
    This project is a collaboration with Rangeet Pan, Md Johirul Islam, Shibbir Ahmed, and Hridesh Rajan.
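
    To make the distance-based idea concrete (an assumed instantiation,
    not necessarily the authors' exact measure), one can rank class pairs
    by centroid distance in the trained model's feature space and flag
    the closest pairs as the most susceptible source/target mappings:

      import numpy as np

      def susceptible_pairs(features, labels, k=5):
          # Euclidean distance between feature-space class centroids;
          # smaller distance = assumed higher susceptibility.
          classes = np.unique(labels)
          cents = {c: features[labels == c].mean(axis=0) for c in classes}
          pairs = [(np.linalg.norm(cents[a] - cents[b]), a, b)
                   for i, a in enumerate(classes) for b in classes[i + 1:]]
          return sorted(pairs)[:k]

      # Example with stand-in penultimate-layer activations and labels.
      feats = np.random.default_rng(0).normal(size=(600, 32))
      labels = np.repeat(np.arange(10), 60)
      print(susceptible_pairs(feats, labels, k=3))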